Save humanity

Superintelligence

PRESS START

MENU

Introduction

Active players

Start the game

© 2024 Jurgen Gravestein

This is not a game

INTRODUCTION

The chance that humans will create superintelligence (AI smarter than us) is not zero. Everyone should therefore understand the risks of developing such a technology and its consequences for our economy, society, and the world.

Active players

Everyone has a role to play, including you.

Companies

Governments

Citizens

Researchers

Not a game

Complete the mission and save humanity.

1 Myths

2 Risks

Complete

3 Hopes

4 Fears

LEVEL 1/5

Superintelligent AI will definitely be:

Hard to understand

Conscious

Rational

LEVEL 2/5

Superintelligence will affect:

Some industries and jobs

All industries and jobs

All jobs but mine

LEVEL 3/5

Controlling superintelligent AI will be:

Challenging

Impossible

Easy

LEVEL 4/5

Superintelligence will be:

One AI system

Many AI systems working together

We don't know

LEVEL 5/5

Superintelligence is:

Inevitable

Uncertain but possible

100% impossible

CONGRATULATIONS!

YOU ARE STILL ALIVE

You're dead!

CONTINUE?

YES

NO

Not a game

Complete the mission and save humanity.

COMPLETE

1 MYTHS

2 RISKS

3 HOPES

4 FEARS

LEVEL 1/5

Existential risk is about:

Bad actors misusing AI

AI threatening humanity

A post-truth era

LEVEL 2/5

Alignment is about making AI:

Always follow our instructions

Respect human values

Protect itself at all costs

LEVEL 3/5

The containment problem refers to the idea that a sufficiently smart AI may:

Self-exfiltrate

Improve itself

Deceive us

LEVEL 4/5

Deceptive alignment means AI can:

Persuade people

Tell when you're lying

Appear to be safe when in fact it is not

LEVEL 5/5

Instrumental convergence is when AI:

Becomes evil

Learns to use tools and instruments

Seeks power as a means to an end

CONGRATULATIONS!

You are still alive

You're dead!

CONTINUE?

YES

NO

Not a game

Complete the mission and save humanity.

COMPLETE

1 MYTHS

2 RISKS

3 HOPES

4 FEARS

LEVEL 1/5

Superintelligence could enhance human capabilities via:

Brain-computer interfaces

Mind control

Humanoid robots

LEVEL 2/5

In medicine, superintelligent AI might lead to:

Elimination of all diseases

Novel and faster drug discovery

Replacement of all doctors

LEVEL 3/5

Best case scenario, superintelligence could help governments to:

Model complex policy outcomes

Neutralize political opponents

Create a new world order

LEVEL 4/5

In energy production, superintelligence could:

Eliminate the need for energy

Create perpetual motion machines

Optimize renewable energy distribution

LEVEL 5/5

In language and communication, superintelligence could:

Make verbal communication obsolete

Create a global universal language

Eradicate language barriers

CONGRATULATIONS!

THE NUMBER OF THIS MISSION IS 3

You're dead!

CONTINUE?

YES

NO

Not a game

Complete the mission and save humanity.

COMPLETE

1 Myths

2 Risks

3 Hopes

4 Fears

LEVEL 1/3

By giving the AI more autonomy, we outsource:

Accountability

Responsibility

Decision power

LEVEL 2/3

Authoritarians will likely use superintelligent AI to:

Democratize their political systems

Create the perfect surveillance state

Improve citizen welfare

LEVEL 3/3

To maintain a sense of meaning alongside superintelligent AI, humans will probably need to:

Create AI-free zones

Redefine their values and sense of purpose

Pretend AI doesn't exist

CONGRATULATIONS!

You are still alive

GAME OVER

CONTINUE?

YES

NO

COMPLETE

Not a game

Complete the mission and save humanity.

1 MYTHS

2 RISKS

3 HOPES

4 FEARS

CONGRATULATIONS

YOU SAVED HUMANITY

RESTART

Are you sure you want to exit?

You will lose all your progress.

YES

NO


Common Myths

Let's debunk the most common myths around AI and superintelligence.

CONTINUE

Researchers

Scientists operating at the forefront of superintelligence research carry a weighty responsibility.


Governments

Regulation, international cooperation, and legal frameworks are crucial in shaping the future.

AI Risks

Let's test your knowledge on the dangers of developing smarter-than-human AI systems.

CONTINUE

Hopes

Let's not resort to pessimism yet. There's a lot to be hopeful about.

CONTINUE


Citizens

Citizens engaging in public discourse can hold companies and governments accountable.

Companies

Tech giants racing to develop superintelligent AI wield immense power.


This game was made as part of a contest for the best creative educational materials on superintelligence, its associated risks, and the implications of this technology for our world.

Find out more here

The Future of Life Institute

Imagined Superintelligence