Am I a Brain At All?

Previously, we addressed the idea that our reality might be the experience of envatted brains, or brains in vats (BIVs), fed chemical and electrical signals by a supercomputer designed to trick us into believing that the universe we inhabit is the real one. We then looked at some possible refutations of this idea, and ultimately came down on a sceptical view – you can’t know that you’re not a BIV. However, there are some arguments against Cartesian scepticism (the idea that you can’t know you’re not a BIV) that are more convincing.

One such argument, originally presented by Hilary Putnam, proposes that we cannot be brains in vats because, if we were, our words ‘brain’ and ‘vat’ wouldn’t actually refer to real brains and vats – we would never have encountered the real things. If that sounds confusing, that’s because it is, but I’ll try to outline the argument as best I can in the following paragraphs.

If an ant were walking around in a sand pit, leaving a noticeable trail in the sand outlining its movement, and just so happened to trace an outline with an uncanny resemblance to the face of Winston Churchill, would you say that the outline referred to Winston Churchill? You probably wouldn’t – it’s just chance. The ant had no intention of making that outline; it just so happens that to us, purely aesthetically, it looks like Winston Churchill. To further illustrate the point, imagine that an alien on Mars who had never seen trees before saw a blob of ink on a piece of paper that looked identical to a tree. Does that blob of ink represent a tree? No, it just happens to look like one. This train of thought highlights that reference, or representation, depends on intentionality and knowledge. If the Martian had seen a tree before and painted one from memory, then that painting would represent a tree, because there was an intention to refer to a tree, and the Martian knew what trees are.

Applying this logic to the brain in a vat scenario, we end up with a self-refuting statement – a statement that makes itself untrue (e.g. ‘this statement is false’). Again, I’ll try as best I can to explain.

We’ve already established that to make reference to something, there has to be intentionality and knowledge. So, to say that I am a brain in a vat, I must know what brains are and what vats are. But if I truly am a brain in a vat, and I am in a simulation, then I don’t really know what brains are and I don’t really know what vats are. Instead, I only know what brains look like and are like in my simulation, and what vats look like and are like in my simulation. To highlight this difference, from now on we’ll refer to real brains as ‘brains’, and brains in the simulated world as ‘brains*’. The same goes for vats. So, if I am in a simulation, then I know what brains* are, but I do not know what brains are. Similarly, I know what vats* are, but I do not know what vats are. But does this matter?

Well, previously we highlighted that just because one thing is aesthetically similar to another, that doesn’t mean it refers to it. Going back to the tree analogy, because the Martian has seen something that looks like a tree, that does not mean that the Martian knows what trees are. Likewise in the BIV scenario: just because I know what brains* are does not mean that I know what brains are. Even though they may be aesthetically similar, and feel the same, they are not the same. One is a tangible, observable, real object; the other is a figment created by the computer stimulating certain neurons. Whilst a brain and a brain* may be indistinguishable to us, they are two different things, so we cannot say that we are a brain in a vat – we cannot make reference to brains or vats because we don’t actually know what they are! If I can’t make reference to a brain, then when I utter the statement ‘I am a brain in a vat’, I cannot be correct. Therefore, I’m not a brain in a vat.
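
For readers who like to see the bare bones of an argument, here is one way of laying out the self-refutation as a short derivation. This is my own rough reconstruction, so the wording and numbering of the premises are mine rather than Putnam’s:

```latex
\documentclass{article}
\begin{document}
% A rough reconstruction of Putnam's self-refutation argument.
% The premise wording and numbering are mine, not Putnam's.
\begin{enumerate}
  \item If I am a brain in a vat, then my word ``brain'' refers to brains*
        and my word ``vat'' refers to vats*.
  \item So, if I am a brain in a vat, my sentence ``I am a brain in a vat''
        is true only if I am a brain* in a vat*.
  \item But a real brain in a real vat is not a brain* in a vat*
        (one is organic tissue, the other part of a simulation).
  \item Therefore, if I am a brain in a vat, my sentence
        ``I am a brain in a vat'' is false.
  \item My sentence ``I am a brain in a vat'' says precisely that I am a
        brain in a vat, so it is true if and only if I am one.
  \item From (4) and (5), the assumption that I am a brain in a vat makes
        my own statement of it false, so I am not a brain in a vat.
\end{enumerate}
\end{document}
```

As I read it, my disagreements below mostly target the first premise – the claim that an envatted speaker could not refer to real brains and vats – and the weight the argument places on reference in the first place.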

This is essentially Putnam’s argument (although a whole lot more detailed and coherent in his own writings) – we cannot be a brain in a vat, because if we were, we’d only know about brains* and vats*, not the real thing. It’s one of the more widely known arguments against the BIV scenario, and is a very well thought out, logical refutation of Cartesian scepticism.

There remains a large amount of debate as to whether Putnam’s rationale suffices as proof against the BIV scenario, and personally, I’m unsure. While I find Putnam’s logic hard to fault, I can’t help feeling that, again, the argument somewhat misses the point of the scenario. Before I continue, however, I should categorically state that there is a high possibility that I have not understood the entirety of Putnam’s argument. While it is well thought out and extremely well laid out, it rests on various subtle nuances, and I doubt that I have grasped all of them. Nonetheless, I’ll explain my disagreements with Putnam’s argument as I understand it.

My first disagreement is with Putnam’s philosophy of reference. Putnam argues that just because two things are identical, or at the very least appear to be identical, that does not mean they are the same, or that knowing one amounts to knowing the other. But I’m not convinced that this reasoning holds. For example, imagine that, somewhere on Mars, there is an organism that converts carbon dioxide in the atmosphere to oxygen. It is surrounded by a hard covering, grows appendages to capture sunlight, and converts that sunlight into energy via photosynthesis. Imagine this organism is functionally identical to what we know as trees. But of course, because this is an alien planet, the organism is not called a ‘tree’; it’s called an ‘eert’. Now, if you asked the Martian what a tree is, would it know the answer?

Some would say ‘no’ – whilst the Martian has come into contact with something identical, an eert, an eert isn’t a tree, and so the Martian does not know what a tree is, only what an eert is.

Others (including myself) would say ‘yes’ – functionally, the two organisms are identical, and therefore embody the same concept of ‘tree-ness’. ‘Eert’ and ‘tree’ are just the names that we have ascribed to each organism; they are labels that hold no meaning in themselves. What does hold meaning, however, is the function and attributes of the organisms, which are identical. For this reason, I would argue that the two organisms are effectively the same, and that while the Martian may not know what a tree is by name, it knows the concept of ‘tree-ness’, and that’s good enough.

Now that we’ve outlined functional reference, let’s apply this logic to our BIV scenario. Putnam argued that there is a difference between a brain and a brain*, and that if we were in a simulation, we wouldn’t know what brains are, so we can’t be a brain in a vat. But I argue that the dichotomy between a brain and a brain* isn’t so great. While Putnam argues that, despite appearing identical to us, they are intrinsically different – one is a combination of organic tissue, the other an illusion created by a computer – I propose that this difference doesn’t matter; what does matter is their functional similarity. A brain and a brain* are, by definition of being in a reality-like simulation, functionally identical, and we are unable to tell them apart. Therefore, while we can refer only to a brain* and not to a real brain, that doesn’t matter, because, to us at least, they are the same thing. Whether or not they really are the same thing (which, strictly speaking, they’re not) doesn’t matter; what matters is their function, and their function is identical. So, it doesn’t matter whether we are a brain, or a brain*, or even a brain* being simulated in a brain which is in turn being simulated in a brain! Ultimately, when we hypothesize that we may be a brain (not a brain*) in a vat, what we mean is that our reality may be the consequence of an illusion, and that is something Putnam’s argument doesn’t disprove. Which leads me to my second point.

Putnam’s argument rests on the ability to make reference, but the inability to make reference to something doesn’t mean that it doesn’t (or can’t) exist. For example, there may be an alien species in a galaxy far, far away that is utterly different from ourselves. Let’s imagine that this alien has no concept of humans, or even of the possibility that there may be life outside its own galaxy. Now, whilst this alien would be unable to make reference to humans, because it would have no concept that we actually exist, that doesn’t mean that we aren’t real. Concerning the BIV scenario, just because we can’t correctly refer to what the ‘real’ brain in the BIV scenario would be, that doesn’t mean it can’t exist.

More abstractly, the BIV scenario as we state it is a construct, built to fit our understanding through the use of logic and language. If that construct is at times incompatible with the underlying concept, that does not mean the concept is incorrect, but rather that the construct is insufficient. In other words, refutations of the BIV scenario that rely on the incompatibility of our language and human viewpoint with the scenario do not refute it; they only highlight the incompatibility.

Ultimately, although I’m not entirely convinced by Putnam’s argument, I am not entirely unconvinced by it. The BIV scenario is an extremely interesting concept and one that I love because of the debate it throws up. My personal perspective is one of Cartesian scepticism, but I’m definitely prepared to be proved wrong.
