Transcript
PRESS START
Superintelligence
Save humanity
This is not a game
© 2024 Jurgen Gravestein
Start the game
Active players
Introduction
MENU
INTRODUCTION
The chance that humans create superintelligence (AIs smarter than us) is not zero. Therefore, everyone should understand the risks of developing such a technology and the consequences for our economy, society, and the world.
ACTIVE PLAYERS
Researchers
Citizens
Governments
Companies
Everyone has a role to play, including you.
4 Fears
3 Hopes
2 Risks
1 Myths
Complete
Complete the mission and save humanity.
Not a game
LEVEL 1/5
Superintelligent AI will definitely be:
Rational
Conscious
Hard to understand
LEVEL 2/5
Superintelligence will affect:
All jobs but mine
All industries and jobs
Some industries and jobs
LEVEL 3/5
Controlling superintelligent AI will be:
Easy
Impossible
Challenging
LEVEL 4/5
Superintelligence will be:
We don't know
Many AI systems working together
One AI system
LEVEL 5/5
Superintelligence is:
100% impossible
Uncertain but possible
Inevitable
YOU ARE STILL ALIVE
CONGRATULATIONS!
NO
YES
CONTINUE?
You're dead!
4 FEARS
3 HOPES
2 RISKS
1 MYTHS
COMPLETE
Complete the mission and save humanity.
Not a game
This screen is locked. You need to solve the previous game to continue.
LEVEL 1/5
Existential risk is about:
A post-truth era
AI threatening humanity
Bad actors misusing AI
NEXT
LEVEL 2/5
Alignment is about making AI:
Protect itself at all costs
Respect human values
Always follow our instructions
NEXT
LEVEL 3/5
The containment problem refers to the idea that a sufficiently smart AI may:
Deceive us
Improve itself
Self-exfiltrate
NEXT
LEVEL 4/5
Deceptive alignment means AI can:
Appear to be safe when in fact it is not
Tell when you're lying
Persuade people
NEXT
LEVEL 5/5
Instrumental convergence is when AI:
Seeks power as a means to an end
Learns to use tools and instruments
Becomes evil
NEXT
You are still alive
CONGRATULATIONS!
NO
YES
CONTINUE?
You're dead!
4 FEARS
3 HOPES
2 RISKS
1 MYTHS
COMPLETE
Complete the mission and save humanity.
Not a game
This screen is locked. You need to solve the previous game to continue.
LEVEL 1/5
Superintelligence could enhance human capabilities via:
Humanoid robots
Mind control
Brain-computer interfaces
LEVEL 2/5
In medicine, superintelligent AI might lead to:
Replacement of all doctors
Novel and faster drug discovery
Elimination of all diseases
LEVEL 3/5
In the best-case scenario, superintelligence could help governments to:
Create a new world order
Neutralize political opponents
Model complex policy outcomes
LEVEL 4/5
In energy production, superintelligence could:
Optimize renewable energy distribution
Create perpetual motion machines
Eliminate the need for energy
LEVEL 5/5
In language and communication, superintelligence could:
Eradicate language barriers
Create a global universal language
Make verbal communication obsolete
THE NUMBER OF THIS MISSION IS 3
CONGRATULATIONS!
NO
YES
CONTINUE?
You're dead!
4 Fears
3 Hopes
2 Risks
1 Myths
COMPLETE
Complete the mission and save humanity.
Not a game
This screen is locked. You need to solve the previous game to continue.
LEVEL 1/3
By giving the AI more autonomy, we outsource:
Decision power
Responsibility
Accountability
LEVEL 2/3
Authoritarians will likely use superintelligent AI to:
Improve citizen welfare
Create the perfect surveillance state
Democratize their political systems
LEVEL 3/3
To maintain a sense of meaning alongside superintelligent AI, humans will probably need to:
Pretend AI doesn't exist
Redefine their values and sense of purpose
Create AI-free zones
You are still alive
CONGRATULATIONS!
NO
YES
CONTINUE?
GAME OVER
4 FEARS
3 HOPES
2 RISKS
1 MYTHS
Complete the mission and save humanity.
Not a game
COMPLETE
RESTART
© 2024 Jurgen Gravestein
This is not a game
YOU SAVED HUMANITY
CONGRATULATIONS
Are you sure you want to exit?
You will lose all your progress.
YES
NO
Common Myths
Let's debunk the most common myths around AI and superintelligence.
CONTINUE
Researchers
Scientists operating at the forefront of superintelligence carry a weighty responsibility.
Governments
Regulation, international cooperation, and legal frameworks are crucial in shaping the future.
AI Risks
Let's test your knowledge on the dangers of developing smarter-than-human AI systems.
CONTINUE
Hopes
Let's not resort to pessimism yet. There's a lot to be hopeful about.
CONTINUE
Fears
Let's face some of the most common fears about a world with superintelligent AI.
CONTINUE
Citizens
Citizens engaging in public discourse can hold companies and governments accountable.
Companies
Tech giants racing to develop superintelligent AI wield immense power.
Imagined Superintelligence
The Future of Life Institute
Find out more here
This game was made as part of a contest for the best creative educational materials on superintelligence, its associated risks, and the implications of this technology for our world.