(Continuation of Part 1)
Asimov’s stories, along with other tales of robot super-intelligence, are especially salient now at the height of the LLM boom. He was surprisingly accurate in some ways about how AI would come to the forefront, but less so in others; at their core, I think his predictions may have been too optimistic.
Let’s start with the two correct predictions of his that I find most interesting (though there were many more):
That people would respond poorly to the prospect of machines that could take over their jobs was low-hanging fruit as predictions go; it’s happened time and time again throughout history. However, Asimov’s predictions about how exactly it would play out were impressively accurate. In I, Robot, though people harbor misgivings about robots all along, it’s only when robots become truly accessible to the general public that fear and hatred towards them reach their peak: “By 2002, we had invented the mobile speaking robot… which seemed to be the final straw as far as the non-robot elements were concerned. Most of the world governments banned robot use on Earth for any purpose other than scientific research between 2003 and 2007” (Asimov 29).
Similarly, from what I’ve seen in recent years, hatred and fear towards AI use really exploded once AI that could generate decent art and writing became easily accessible. On social media, I’ve seen people argue that we need to boycott AI art, for example, not simply on the grounds of quality, but on the principle that it erodes human dignity and value. There are certainly logically driven arguments focused on the contentious relationship between AI art and copyright, but I think much of the reaction is emotionally driven by a fear of human obsolescence. Artists fear their already saturated labor market becoming even weaker, making it difficult to earn a livable income from doing what they love.
Asimov also accurately predicted that AI models, once sufficiently advanced, would be practically black boxes to the humans who created them. The “positronic” brains in I, Robot closely resemble the neural networks that dominate LLMs today: both are modeled on human brain pathways, with “potentials” that constructively and destructively interfere to determine decision-making. And just as we don’t fully understand the emergent properties of LLMs and must often approach their outputs from a behaviorist psychological perspective rather than a structuralist one, the solution to confusing robot behavior in I, Robot often relies on robopsychology rather than simple mathematics.