Rationality – Pure vs. Practical

Human rationality is a very interesting area of study, at least partly because of the divide it creates amongst the psychological, mathematical, and anthropological communities. Some would fervently argue that humans are the antithesis of rationality and logic, relying instead on a set of preconceived and measurable biases to make decisions, whereas others would argue that we are far more capable of making rational, probabilistically sound decisions than we get credit for. My view is much less interesting: both schools are right, but they aren't arguing the same point.

To outline this belief as clearly as possible, I’ll delineate what I see as two separate versions of “rationality”.

Pure Rationality

A view of rationality I would argue is more popularly held by the mathematicians and statisticians of the world, pure rationality is logic that can be mathematically calculated and is hard to refute. For example, if you had a choice of two betting games, one in which you had a 50% chance of winning and another in which you had only a 20% chance of winning, the rational thing to do would be to play the game with the higher chance of winning. This particular example is easily resolved without calculating anything; however, imagine that the rewards for winning were different for each game. Perhaps in the 50% game you only win half of what you put in, whereas in the 20% game you win 5 times what you put in. Which would be the best game to play? The actual answer is largely irrelevant (although I can tell you that the 20% game would likely result in the larger winnings); the important point is that the most logical choice (or the choice that would result in your highest personal gain) can be mathematically determined and is hard to dispute.
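To make that concrete, here is a minimal sketch of the expected-value comparison, under my own reading of the games above: a one-unit stake, "winning" pays the stated multiple of the stake as profit, and losing forfeits the stake entirely.

def expected_value(p_win, profit_if_win, stake=1.0):
    # Average net gain per play: P(win) * profit - P(lose) * stake
    return p_win * profit_if_win - (1 - p_win) * stake

print(expected_value(0.5, 0.5))  # 50% game, win half your stake: -0.25 per unit staked
print(expected_value(0.2, 5.0))  # 20% game, win 5x your stake:   +0.20 per unit staked

Under those assumptions, the 20% game comes out ahead on average, which is why I said it would likely result in the larger winnings.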

Practical Rationality

Practical rationality, on the other hand, is less easy to objectively measure, but no less useful. A practically rational choice would also result in the highest reward to the individual, but it does not necessarily need to be the 'purely rational' thing to do. One example from the field of psychology is the ingroup bias, whereby you act favourably towards those you believe are members of your ingroup. Ingroups include nationality, age, gender, and race, but also sports-team affiliation, food or drink preference, and the like.

One of the reasons for this ingroup bias appears to be that we demonise those in the outgroup, believing them to be of a lower class or status than those in our ingroup, and thus less deserving of our help or less worthy of a relationship. From a purely rational standpoint, this vilification is highly illogical. There is very little evidence to suggest that factors such as sports-team affiliation reflect anything more than demographics, meaning that someone in your outgroup is likely to be as similar to you as someone in your ingroup, and is certainly not of any lower social or moral importance than yourself or your other ingroup members. For this reason, the logical approach would be to treat ingroup members and outgroup members equally.

However, there are evolutionary arguments to suggest that an ingroup bias may be favourable in the long run. Those in your ingroup are statistically more likely to share your DNA, so by treating them favourably you increase the likelihood that your genes are passed on. But how can we do this without our conscience kicking in? How do we treat those in the ingroup better than those in the outgroup without our emotions coming into play? Well, we pretend that the members of the outgroup are sub-human. That way, we can rationalise a degree of hostility toward them and still sleep at night thinking that we're good people.

 

This example outlines the difference between pure rationality and practical rationality quite well. From a pure-rationality standpoint, our bias against the outgroup is incorrect and somewhat ridiculous, but from a practical-rationality point of view, it has worked up until now.* And this is the fundamental difference between the two concepts: something doesn't have to be mathematically calculable to be rational.

Examples of practical rationality are abundant in human behaviour, and so I implore you to seek them out. Racism, gambling, dangerous behaviours, and dishonesty are some of the more damaging examples, but others, including estimation of probability and even time perception, present less depressing instances of the phenomenon. If you look back at my post on whether we, as humans, can understand randomness, I think you'll find some very common themes. Now, I would love to address each of these and highlight the differences in types of rationality, but I fear that would take rather a large number of words, and I've been told to cut these posts down, so instead we'll just discuss one: estimations of probability.

Estimations of Probability

Tversky and Kahneman rose to fame because of their work on heuristics (work for which Kahneman was later awarded a Nobel Prize). Heuristics are shortcuts that we use to make decisions, and, as Tversky and Kahneman pointed out, they often lead to miscalculations. The pair presented this as evidence of our inability to behave rationally. I'm not going to try to convince you that they're incorrect, but rather that they only provided proof that we're not purely rational. They are yet to convince me that we are not practically rational.

For example, let us consider one of the studies that Tversky and Kahneman carried out. They told participants that Jack had been selected from a sample of 100 people. Of those 100 people, 30 were lawyers and 70 were engineers (or vice versa in other conditions). Participants were then given a description of Jack (something like "Jack is 36 years old. He has no family and is somewhat introverted…"), which was designed to provide no clues to Jack's occupation, and were asked to judge the probability that he was a lawyer or an engineer. As you can imagine from the fact that I am mentioning it, the participants supposedly ignored the base rate of 30-70 and estimated the likelihood of Jack being a lawyer (or engineer, depending on the condition) to be 50/50. When no description of Jack was present, however, participants correctly used the base rate in their estimations. Tversky and Kahneman presented this as evidence of our deviation from pure rationality, which I would argue it is. They did not, however, propose that this deviation could itself be rational, which is what I'm going to try to do now.
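For reference, here is a minimal sketch of the 'purely rational' benchmark in this task, under the assumption that the description really is uninformative, i.e. just as likely to be written about a lawyer as about an engineer (the helper name and numbers are mine, purely for illustration).

def posterior_lawyer(prior_lawyer, p_desc_given_lawyer, p_desc_given_engineer):
    # Bayes' rule: P(lawyer | description)
    numerator = p_desc_given_lawyer * prior_lawyer
    denominator = numerator + p_desc_given_engineer * (1 - prior_lawyer)
    return numerator / denominator

# 30 lawyers out of 100, and a description that fits both professions equally well:
print(posterior_lawyer(0.3, 0.5, 0.5))  # 0.3 -- the base rate should carry over unchanged

Participants, however, tended to answer as if the probability were about 0.5.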

When someone tells you something, there are a number of unspoken rules about what they are saying (referred to as Grice's maxims in the literature). One of these rules is that you assume that what someone is saying to you is relevant, or at least holds some importance. For example, if I said to you "That glass is only half full" (ignoring the optimism/pessimism debate), that would more strongly imply that the glass has recently been filled to half from empty rather than emptied to half from full. Even though I didn't explicitly state that fact, you would implicitly assume, because I am telling you, or because of my phrasing, or because of the context of the situation, that the glass was originally empty and has been filled. This is commonly called conversational implicature, and there are many different types.

For example, if someone asked me, "Do you know where I can get batteries at this time?", and I replied, "There's a convenience store on the corner", then that person might assume that the convenience store sells batteries, and that that is why I mentioned it. My reply contains no literal evidence that I have actually answered the question, but the fact that I mention the convenience store at all is enough to suggest to the other person that it sells batteries, and to end the conversation.

As I said, there are many types of conversational implicature and I won't go through them now, but I would recommend looking them up if you're keen, because they are very interesting. Nonetheless, the examples I have discussed highlight that there is more to what we say than the specific words. We convey meaning through our syntax, our tone, and, somewhat ironically, what we don't say, as well as through our words.

So, let's relate what we've just discussed back to the Tversky and Kahneman study. Tversky and Kahneman proposed that the presence of useless information was enough to lead participants to make an irrational probabilistic choice. They argued that the information should have no bearing on the judgement of probability, and so altering one's judgement when the information is present, compared to when it is absent, is irrational. This, I suggest, is the slightly incorrect part. While I would wholly agree that, on this particular occasion, assimilating the given information into a judgement of probability led to an incorrect assessment, I believe that the process and the logic are perfectly sound, even if the result wasn't.

The reason the process is not irrational is as follows. Conversation is a huge part of our lives, and it takes place every single day, so the utility of conversational implicature plays out over a long period of time. I can almost assure you that at some point in your life there has been at least one social occasion where your reliance on conversational implicature led you to misinterpret what someone said, resulting in a faux pas. But, importantly, those occasions are in the minority – the vast majority of the time, your ability to pick up the additional information carried by conversational implicature is of benefit to you.

But Tversky and Kahneman's paradigm was artificial; it was purposefully constructed to be that social faux pas. In their study, the rules of conversational implicature were violated, and the reasoning that so often helps us suddenly became disadvantageous. As a result, it's not surprising that the participants made an incorrect choice! From this perspective, it's hard to imagine they wouldn't. Ultimately, the participants were acting practically rationally – they were using all the information at their disposal to make their decision, even if, in this instance, the information that is usually beneficial was specifically designed not to be.
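To illustrate (and this is my own toy calculation, not anything Tversky and Kahneman proposed), suppose a participant, following the maxim of relevance, assumes the description must be at least somewhat diagnostic, say k times as likely to be written about a lawyer as about an engineer. The very same Bayesian machinery then pulls the answer away from the base rate and towards the 50/50 response that participants actually gave.

def posterior_lawyer_assuming_relevance(prior_lawyer, likelihood_ratio):
    # likelihood_ratio = P(description | lawyer) / P(description | engineer),
    # i.e. how diagnostic the participant assumes the description to be
    odds = likelihood_ratio * prior_lawyer / (1 - prior_lawyer)
    return odds / (1 + odds)

for k in (1.0, 2.0, 7 / 3):
    print(f"assumed diagnosticity {k:.2f}: P(lawyer) = {posterior_lawyer_assuming_relevance(0.3, k):.2f}")

# 1.00 -> 0.30  (description genuinely ignored: the pure-rationality answer)
# 2.00 -> 0.46  (even modest presumed relevance drags the estimate toward 50/50)
# 2.33 -> 0.50  (the answer participants actually gave)

The point of the sketch is not the particular numbers, but that updating on information you have every conversational reason to believe is relevant is the same sound process; it only misfires here because the premise of relevance was deliberately violated.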

So far, a lot of this is speculation, so I will outline the primary piece of experimental evidence that led me to believe that this approach of separate rationalities, and the influence of conversational implicature, is correct. If you ask participants the same question but tell them that the description came from a computer, which chose it at random from a database of facts about each person, the effect disappears. In other words, when participants are specifically primed to ignore their practically rational tendency to seek meaning in the words of others, they act in a purely rational way and ignore the information completely.

That piece of evidence, for me, was the turning point in my view of the Tversky and Kahneman paper, and of certain arguments against human rationality in general. Overall, I find it hard to conceptualise why exactly humans would act against pure rationality without further reason. We live in a world of danger and probability, and to purposefully act irrationally, in a way that only spites our chances of survival, seems counterintuitive. This disbelief is my primary reason for buying into the concept of a 'practical' rationality, and whilst one could argue that pure and practical rationality are one and the same (because a rationale that works in the long run is mathematically sound and therefore must be pure), that's not really the point. The point, which I hope I have conveyed, is that we're not quite as irrational, stupid, and incorrect as we seem. And at the risk of fan-boying evolution once again, I think it's done a pretty good job of improving our survivability and making sure that we, and our children, make it to the next morning. So the next time you do something that seems irrational, or stupid, or counterintuitive, just think: you likely made that decision because of the millions of years of evolution that have moulded your rationality into what it is today. So, if you get into trouble for it, blame evolution.

 

 

*Personally, I think the advantage for an ingroup bias has run its course, and, in the modern world, its effects are actually quite detrimental to an egalitarian society. But, nonetheless, I can see how such a bias would have been beneficial in the past, and so I believe its utility as an example of practical rationality still stands.
