Superintelligence

Jurgen Gravestein

Created on July 16, 2024

Not a game

Transcript

Save humanity

Superintelligence

PRESS START

This is not a game

MENU

Introduction

Active players

Start the game

© 2024 Jurgen Gravestein

INTRODUCTION

The chance that humans will create superintelligence (AI smarter than we are) is not zero. That is why everyone should understand the risks of developing such a technology and its consequences for our economy, society, and the world.

Active players

Researchers

Citizens

Governments

Companies

Everyone has a role to play, including you.

Not a game

Complete the mission and save humanity.

4 Fears

3 Hopes

2 Risks

1 Myths

Complete

LEVEL 1/5

Superintelligent AI will definitely be:

Hard to understand

Conscious

Rational

LEVEL 2/5

Superintelligence will affect:

Some industries and jobs

All industries and jobs

All jobs but mine

LEVEL 3/5

Controlling superintelligent AI will be:

Challenging

Impossible

Easy

LEVEL 4/5

Superintelligence will be:

One AI system

Many AI systems working together

We don't know

LEVEL 5/5

Superintelligence is:

Inevitable

Uncertain but possible

100% Impossible

CONGRATULATIONS!

YOU ARE STILL ALIVE

You're dead!

CONTINUE?

yes

NO

LEVEL 1/5

Existential risk is about:

Bad actors misusing AI

AI threatening humanity

A post-truth era

LEVEL 2/5

Alignment is about making AI:

Always follow our instructions

Respect human values

Protect itself at all costs

LEVEL 3/5

The containment problem refers to the idea that a sufficiently smart AI may:

Self-exfiltrate

Improve itself

Deceive us

LEVEL 4/5

Deceptive alignment means AI can:

Persuade people

Tell when you're lying

Appear to be safe when in fact it is not

LEVEL 5/5

Instrumental convergence is when AI:

Becomes evil

Learns to use tools and instruments

Seeks power as a means to an end

CONGRATULATIONS!

You are still alive

You're dead!

CONTINUE?

yes

NO

LEVEL 1/5

Superintelligence could enhance human capabilities via:

Brain-computer interfaces

Mind control

humanoid robots

LEVEL 2/5

In medicine, superintelligent AI might lead to:

Elimination of all diseases

Novel and faster drug discovery

Replacement of all doctors

LEVEL 3/5

In the best-case scenario, superintelligence could help governments to:

Model complex policy outcomes

Neutralize political opponents

Create a new world order

LEVEL 4/5

In energy production, superintelligence could:

Eliminate the need for energy

Create perpetual motion machines

Optimize renewable energy distribution

LEVEL 5/5

In language and communication, superintelligence could:

Make verbal communication obsolete

Create a global universal language

Eradicate language barriers

CONGRATULATIONS!

THE NUMBER OF THIS MISSION IS 3

You're Dead!

CONTINUE?

yes

NO

LEVEL 1/3

By giving the AI more autonomy, we outsource:

Accountability

Responsibility

Decision power

LEVEL 2/3

Authoritarians will likely use superintelligent AI to:

Democratize their political systems

Create the perfect surveillance state

Improve citizen welfare

LEVEL 3/3

To maintain a sense of meaning alongside superintelligent AI, humans will probably need to:

Create AI-free zones

Pretend AI doesn't exist

Redefine their values and sense of purpose

CONGRATULATIONS!

You are still alive

GAME OVER

CONTINUE?

yes

NO

CONGRATULATIONS

YOU SAVED HUMANITY

RESTART

Are you sure you want to exit?

You will lose all your progress

NO

yes

Common Myths

Let's debunk the most common myths around AI and superintelligence.

CONTINUE

Researchers

Scientists operating at the forefront of superintelligence carry a weighty responsibility.

Governments

Regulation, international cooperation, and legal frameworks are crucial in shaping the future.

AI Risks

Let's test your knowledge on the dangers of developing smarter-than-human AI systems.

CONTINUE

Hopes

Let's not resort to pessimism yet. There's a lot to be hopeful about.

CONTINUE

Citizens

Citizens engaging in public discourse can hold companies and governments accountable.

Companies

Tech giants racing to develop superintelligent AI wield immense power.

The Future of Life Institute

Imagined Superintelligence

This game was made as part of a contest for the best creative educational materials on superintelligence, its associated risks, and the implications of this technology for our world.

Find out more here