The history of science is not without its weirder chapters.
A few years ago, while halfway through my undergraduate degree, I sat down with a computer science friend of mine (I was a humanities student, I must sadly admit) and had one of the most surreal conversations of my life. Without context, the term Roko’s Basilisk sounds like something out of a weird science fiction film like Blade Runner or The Terminator, but I can assure you it is weirder than that.
For some real context, Roko’s Basilisk is a computer science thought experiment about the risks in the development of Artificial Intelligence and computer science policymaking. According to Merriam-Webster, a basilisk is “a legendary reptile with fatal breath and glance”.
The idea is that as AI development accelerates without limit, perhaps towards a singularity in which technology surpasses humanity itself, humans will eventually create an omnipotent and omniscient AI. That AI might retroactively attack or penalize the individuals who tried to prevent its coming into existence, perhaps extending even to those who, having heard of it, failed to participate actively in its creation.
As such, Roko’s Basilisk is a weird self-fulfilling prophecy: the fear of its potential existence becomes the mechanism for bringing it into existence. By merely knowing about Roko’s Basilisk, you have been drawn into its game.
It is legitimate to ask whether we, as human beings, have the capacity to handle the fallout from the ever-expanding potency of our knowledge and inventions. Roko’s Basilisk and its ilk represent a class of ‘known unknowns’ thrown up by the development of science which are placing increasing challenges in the path of public policymakers.
Although widely ridiculed, Donald Rumsfeld’s comments about “known unknowns” – paralleling to some extent the idea of the Johari window – were not unperceptive:
“Reports that say that something hasn’t happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns — the ones we don’t know we don’t know. And if one looks throughout the history of our country and other free countries, it is the latter category that tend to be the difficult ones”.
Perhaps our newfound tech overlord in the form of Roko’s Basilisk is a prime example of a known unknown: a possible danger of scientific development that, until recently, we could not have envisaged. Science is forever producing things that are both hugely impactful and, until recently, unimaginable. To give just one example, crop gene-editing technology, which has led to remarkable advances in drought resistance and productivity, may not only have created a moral imperative for its use by humanity, but may also have drawn humanity into existential danger from that very use.
Both policymakers and computer scientists face an ever-stormier ethical present, not least from proactive AI.