2. Sharing the World with Digital Minds (2020)

By Carl Shulman & Nick Bostrom

The minds of biological creatures occupy a small corner of a much larger space of possible minds that could be created once we master the technology of artificial intelligence. Yet many of our moral intuitions and practices are based on assumptions about human nature that need not hold for digital minds. This points to the need for moral reflection as we approach the era of advanced machine intelligence. Here we focus on one set of issues, which arise from the prospect of digital minds with superhumanly strong claims to resources and influence. These could arise from the vast collective benefits that mass-produced digital minds could derive from relatively small amounts of resources. Alternatively, they could arise from individual digital minds with superhuman moral status or ability to benefit from resources. Such beings could contribute immense value to the world, and failing to respect their interests could produce a moral catastrophe, while a naive way of respecting them could be disastrous for humanity. A sensible approach requires reforms of our moral norms and institutions along with advance planning regarding what kinds of digital minds we bring into existence.

Read the full paper:

More episodes at:
(00:39) Abstract

(02:22) Introduction

(07:10) Paths to realizing super-beneficiaries

(09:31) Reproductive capacity

(13:24) Cost of living

(14:55) Subjective speed

(16:54) Hedonic skew

(19:23) Hedonic range

(21:37) Inexpensive preferences

(24:56) Preference strength

(27:49) Objective list goods and flourishing

(31:14) Mind scale

(35:07) Moral and political implications of digital super-beneficiaries

(37:18) Creating super-beneficiaries

(42:13) Sharing the world with super-beneficiaries

(48:37) Discussion

(59:24) Author information

Subscribe to Radio Bostrom

New to Bostrom? Subscribe to the Introduction as well, and start there.

An Introduction to Nick Bostrom

Start here for a deep dive into his ideas, including existential risk, the ethics of AI, transhumanism, and wise philanthropy.

Listen and subscribe →