The Predictions of Asimov's "I, Robot": Part 1 of 3

Sept. 7, 2025

Over the last week, I read Isaac Asimov’s 1950s sci-fi anthology, I, Robot. The stories are a lot of fun to read, with plenty of great humor sprinkled throughout. They are impressively prescient in some regards but, I believe, sadly over-optimistic in others. I’ll be publishing this post in three parts, of which this is the first: Here I provide an overview of the stories and their ultimately optimistic conclusion, and in the second and third parts I’ll delve more deeply into which predictions I think were correct and which, not so much. Massive spoiler warnings ahead!

I, Robot is a collection of nine short stories bound together via a retrospective interview of an acclaimed robopsychologist, Susan Calvin, as a framing device. Arranged chronologically, these stories depict humanity’s future as robots advance from simple, speechless automata to near god-like machines that simulate human emotion, unlock interstellar travel, and eventually fully run the world economy and politics.

Central to the robots of Asimov’s world are the “Three Laws of Robotics,” introduced in full in the second story of the anthology, “Runaround”:

  1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

The stories that follow largely focus on a cast of human characters trying to figure out what’s going wrong when robots behave in unexpected ways, usually because these laws bend or conflict with one another. For example, in arguably the funniest of the stories, “Reason,” an especially intelligent robot assembled on a remote space station develops religious delusions of grandeur, founds a cult in which the other robots follow him as a “prophet,” forcibly takes over the station’s operations, and forces its human supervisors into retirement. The supervisors aren’t harmed—in fact, the robots wait on them with food and water until they are sent back to Earth—but they are barred from the control room. In the end, it’s explained that the robot must have subconsciously determined that the best way to keep the humans safe was to take away their control, since it knew it was better at operating the station: In other words, the robot adhered to the First Law at the expense of the Second.

Despite these hijinks, I, Robot ultimately presents a very optimistic view of humanity’s future with robots. In the story “Evidence,” Stephen Byerley, a robot masquerading as a human, wins election as mayor; he eventually spearheads the consolidation of nations into larger global regions and then unites those regions under the office of “World Co-ordinator.” Byerley turns Earth into a utopia with the assistance of “the Machines,” a set of hyperintelligent supercomputers that perfectly direct the world’s economy and politics for the benefit of humankind. Through Calvin, Asimov argues in “Evidence” that:

If a robot can be created capable of being a civil executive, I think he’d make the best one possible. By the Laws of Robotics, he’d be incapable of harming humans, incapable of tyranny, of corruption, of stupidity, of prejudice. And after he had served a decent term, he would leave, even though he were immortal, because it would be impossible for him to hurt humans by letting them know that a robot had ruled them.

and later in the story “The Evitable Conflict”:

Perhaps, to give you a not unfamiliar example, our entire technical civilization has created more unhappiness and misery than it has removed. Perhaps an agrarian or pastoral civilization, with less culture and less people would be better. If so, the Machines must move in that direction, preferably without telling us, since in our ignorant prejudices we only know that what we are used to, is good — and we would then fight change. Or perhaps a complete urbanization, or a completely caste-ridden society, or complete anarchy, is the answer. We don’t know. Only the Machines know, and they are going there and taking us with them.

In short, in Asimov’s future, the Machines have practically become God: Through their omniscience, they work in mysterious ways according to their divine plan for the ultimate benefit of humanity. Human free will exists, but because the Machines can so accurately predict it, we are essentially subject to predestination.