The idea that our reality is the consequence of simulation or mischief is neither a modern nor an unconsidered notion. In 1641, René Descartes famously wondered whether our experience of the world could be the work of an evil demon, creating illusions and mimicking real sensations and experiences with the sole purpose of misleading us into believing that the world we experience is the real one. Similar concepts have appeared more recently, such as in the popular film The Matrix, in which we are all wired up to a giant simulation, experiencing a world that is nothing more than the consequence of fine-tuned electrical impulses.
The notion that we’ll consider today is whether or not we are brains in vats, or BIVs. On this view, we could be brains preserved in some nutrient-rich liquid, wired up to receive electrical impulses just as we would from our real bodies. Everything we saw, felt, or did would be nothing more than the result of a very specific firing of neurons, caused by artificial manipulation. Our bodies would be surplus, and therefore absent, and our entire existence would be defined by the wiring and firing we received. Our envatting could be the work of an evil dictator imprisoning individuals and manipulating their brains, or it could be the result of chance – a random bonding of particles and circumstance creating an envatted brain (or brains) with no greater meaning or purpose. For the purposes of what we’ll discuss it doesn’t really matter, but it’s an interesting thing to think about. (Is it more likely that we are brains in vats, floating through space, or that we are the result of perfect conditions and millions of years of evolution?)
So how do you know that you’re not a brain in a vat? How do you know that what you’re experiencing is real, and not the product of another power? Some, such as Descartes, argue that you don’t know. His famous dictum ‘cogito ergo sum’, or ‘I think, therefore I am’, illustrates this. Descartes believed that we cannot know that what we are currently experiencing is real, but because we are experiencing anything at all, regardless of whether it’s the true universe, we must ‘be’ in some form. This point of view is widely termed the sceptical view – you can’t know that you’re not in a simulation, but you do know that you exist.
Others have argued against this notion, proposing instead that we cannot be in a simulation. Whilst some of those arguments are easily refuted or tend to miss the point slightly (as we’ll discuss now), others offer supposed philosophical proofs that we cannot be in a simulation. For now, we’ll consider the weaker arguments, then move on to the stronger ones.
Argument 1: The simulation couldn’t be powerful enough
The first argument against the brain-in-a-vat (BIV) scenario is that no computer could ever be powerful enough to maintain a simulation of reality. One proponent of this view even suggested that by filling a room with screens giving a real-time feed of populated areas, one could attempt to overload the simulation and break out of it! There are a number of obvious problems with this line of thinking, but I’ll outline only two:
The first problem with this rationale is that it puts the cart before the horse – it assumes that the world we’re currently experiencing is the real world. For example, it assumes that the computing power in the real world (the world outside our simulation) would be similar to ours, or that computers there work in the same way, or even that computers exist at all! The important point is that we do not know how computers or simulations would work in the ‘real world’. There, computers (or rather, objects that serve the same function as our computers) may be irrevocably different, with no link to computers in our simulation. To say that we can’t be brains in vats because computers aren’t powerful enough to run reality-like simulations is to assume that our simulated reality is identical to the ‘real’ reality in which we’re all brains in vats. Ultimately, then, this argument is rooted in an inability to grasp the idea of not being in the ‘real’ world, which leads me to my second, closely related refutation of Argument 1.
The second problem with this argument is that it rather misses the point. The BIV scenario is a conceptual philosophical proposition asking “How do you know that what you’re experiencing is the real deal?”, so whether the agent of deception is a computer program, an omnipotent demigod, or an ethereal hamster makes little difference. What matters about the scenario is the question it asks.
Argument 2: The simulation wouldn’t be convincing enough
Another common argument, often offered alongside Argument 1, is that a computer simulation wouldn’t be convincing enough to fool anyone into believing it was reality. Instead, cracks and mistakes would appear, removing the shroud of illusion and allowing you to realise that you are in a simulation. (If you’ve ever watched the cartoon “Rick & Morty”, this concept is explored when Rick and Morty wake up one day inside a simulation, and Rick deduces as much from the messy arrangement of a rat’s internal organs.) Much like Argument 1, this argument fails to understand the question being asked – it’s not really about whether the simulation is possible, but about the questions it raises if it is.
Furthermore, this argument naively assumes that we would know the ‘true’ nature of the world even if we were in a simulation. Suppose that, ever since you were born, the world had been presented to you in 8-bit graphics – and not only to you, but to everyone else as well. How would you know that the world isn’t really in 8-bit graphics? Chances are, you wouldn’t. So, in relation to Argument 2, the world we’re experiencing now could be the equivalent of 8-bit graphics compared to the real world and we wouldn’t know, because we would be unaware of the world’s true state.
This argument would only have any potential basis if we were placed in a simulation after having already experienced the real world, but that’s not what we’re considering here.
Argument 3: Consciousness cannot be simulated
This is one of the more hotly debated arguments against the BIV idea. Whilst I have placed it amongst the weaker arguments, many would contend that Argument 3 is actually true; indeed, a large portion of the psychological community would argue that consciousness is more than the result of neurons firing, and they may well be right. Personally, however, I’m not convinced, so I’ll present my reasoning as to why this argument fails, but feel free to disagree.
In my view, to propose that consciousness is more than the consequence of a specific pattern of neuronal firing is to give humans more credit than we deserve. Because of our introspective abilities, and our capacity to reason, create, socialize, and think, we as a species tend to believe we are special. We believe we have been bestowed with abilities that supersede our observable biology – crucially, that our consciousness has no physical counterpart. I don’t believe this is correct, and I’ll outline why.
The animal kingdom is awash with different levels of cognitive and physical ability. While not organised hierarchically, the varying demands of evolution have produced animals with differing levels of introspective, reasoning, and innovative ability: from single-celled amoebas, to simple crustaceans, to small insects, to mammals, to humans. While each organism’s signalling machinery is tuned to its particular environment, we can construct an ordering of abstract thinking ability, starting with the amoeba and ending with humans.
Now, many would argue that a lower-level organism’s cognitive ability can be easily mapped to an observable system that could consequently be artificially mirrored. The amoeba, for example, functions using simple chemical signals: no complex brain-like structures, just chains of chemical reactions. I doubt anyone would argue that an amoeba’s cognitive processes could not be replicated by a supercomputer or machine.
Moving on to the crustaceans: whilst they have a rudimentary nervous system, we don’t ascribe consciousness or higher thought to them, and equally, we don’t think a crustacean’s cognitions couldn’t be replicated by artificial means – we could hook a small shrimp up to a supercomputer and, with a few well-placed electrical impulses, make it think it was in the water.
With small insects and mammals, we attribute progressively higher levels of cognitive capability to each organism (problem solving, better memory, and so on), but even at this level we do not believe the organism possesses any abilities that could not be mapped to a particular neuron firing, or a particular protein being synthesized. We could still replicate the organism’s experience if we had the technology.
Why then, once we get to humans, would one assume that our conscious ability is greater than the sum of what fires (or doesn’t fire) in our heads? The fact that we appear to possess what we call consciousness is remarkable, but it is not a transcendent ability – it remains nothing more than the result of millions of neurons firing in a very specific order. Given that we experience the world based on what goes on in our heads and bodies (which can certainly be replicated), consciousness, much like the conscious ability of the crustacean, can be simulated, and so Argument 3 cannot be correct.
There we have the three main arguments (or at least, the three most popular arguments I’ve come across) for why we cannot be brains in vats. If you’re like me, you found them less than convincing, and so in the next post we’ll move on to something a little more substantial.