I first had to get a better grasp on what is meant by truth. Though I never defined it rigorously in the essay, truth, in my mind, had something to do with being stable across time and connected to linguistic assertions. Namely, I was assuming that, if a claim, proposition, or piece of information was true, the physical situation depicted by the claim would always be the case in the actual world [1]. For instance, for “the sun rises every day” to be true, the sun would actually have to rise every day. This definition seemed plausible. But since there is no reason to believe that any assertion will necessarily hold across all of time, I concluded that nothing can be true. If something could hold across all of time – if it described the world as it actually is at every moment – it would indeed constitute an absolute truth.
I can't help but cringe at myself a bit when I look back on this. I was probably in a slightly nihilistic phase, strewing out vapid thoughts in an attempt to express some latent, angst-filled sentiments toward society as a whole. But there was also something to it. I had definitely forgotten to account for mathematical statements ('2 + 2 = 4'), which do seem to carry some intuitive sense of timeless truth, but the issue I was hinting at was none other than the problem of induction, which has puzzled sharp minds for centuries.
Having ambiguity around what could be true bothered me, because it felt like we’re dealing with truth as a concept all the time. In talking with other people, we’re saying things that are (presumably) true about the world. But if nothing is true, and we still communicate with others, what are we actually saying to them? Falsehoods? This seems wrong, too.
Maybe the right frame to adopt is that we can only approximate truth: the things we're saying are likely to be true to some degree. This approach of assigning degrees of belief to statements – how confident we are that something is actually the case in the world, ideally across all time – is close in spirit to the philosophical tradition of Bayesian epistemology.
Intuitively, this also means that some statements can be truer than others. For example, I think both the idea that I could run a marathon tomorrow and the idea that the sun will rise tomorrow morning are true, but I'd be willing to bet more money on the sun rising – I'm quite fit, but having never run a marathon before, I'm not as certain that I could actually complete one, whereas I have always known that the earth spins. So some statements are truer than others, where being 'truer' means being more likely to hold across all time, though never certain (which is what the problem of induction demonstrates). Mathematical statements seem closest to certainty, to absolute truth.
This feels like a decent theory so far, but I think it all hints at an even more pertinent question: how one might actually arrive at the truth value of a given statement. In practice, we spend far less time thinking about what truth means than trying to figure out the degree to which something is true. Even if we take this notion of approximate truth as correct, in our daily lives we’re confronted with the further question of how to assign these truth values.
Finding rough values for some statements might be easier than for others; at least, it seems easier to rank certain sets of statements by how close they are to absolute truth than others. For instance, it doesn’t seem too difficult to say which of “The sun will rise tomorrow” and “I will finish a marathon tomorrow” is closer to absolute truth, especially if you happen to know that you struggle with cardio and have never run a marathon before. However, comparing the truth values of “a social democratic political system is most conducive to human flourishing” and “a full capitalist political system is most conducive to human flourishing” feels like a harder question to settle. For one, it requires us to first break down the meaning of terms like “capitalism” and “flourishing” much further, and then go about figuring out which system is better, which I think differs in an important way from figuring out the truth value of a statement like “the sun will rise tomorrow”. With the latter, you can simply open your curtains tomorrow morning to see if the sun is up, and the more often you do this and find your statement confirmed, the closer it is to certainty – all that’s needed to settle the question is a pair of eyes capable of detecting different wavelengths of light. In moving to the best-political-system case, though, we move from a strictly physical arena to a social one. The relevant meaning of flourishing, or of social democracy being optimal, depends heavily on identifying intuitions – about how we ordinarily use the word “flourishing”, or about the goodness of social democracy – that most humans collectively share, and doing this is messy because everyone’s intuitions are shaped by both cultural and personal experiences.
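As an aside, the intuition that repeated confirmations push a claim closer to certainty without ever reaching it has a standard Bayesian toy model – Laplace’s rule of succession. This is my own illustration, not something the essay or Taleb relies on:

```python
from fractions import Fraction

def rule_of_succession(successes: int, trials: int) -> Fraction:
    """Posterior probability that the next trial succeeds, given
    `successes` out of `trials` so far and a uniform prior over the
    underlying success rate."""
    return Fraction(successes + 1, trials + 2)

# Each confirmed sunrise pushes the estimate toward 1,
# but it never gets there -- the problem of induction in miniature.
for days in (1, 10, 1000):
    print(days, float(rule_of_succession(days, days)))
```

After 1,000 consecutive sunrises the estimate is 1001/1002 – very close to, but never exactly, certainty.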
We do our best by writing books and engaging in debates about the right political system, but it still feels like we have far more ambiguity about which one of these is closer to the truth than with the sunrise example.
Yet, in the midst of this uncertainty, we still have government officials making decisions every day that draw from one of these political philosophies, influencing the lives of people in a very real way in the process. There are clear stakes in assigning probabilities to a given body of statements, so it seems that we should act only upon statements that are as close to truth as possible. Even granting the need to make practical decisions, this uncertainty feels extremely disappointing, especially when I think about the wrongdoing that could have been averted by acting in accordance with what was true, or closer to the truth.
The point is, figuring out what is true is much thornier than we recognize in our everyday lives, which feels strange, because in our minds we think we’re thinking true thoughts and uttering true things most of the time.
And figuring out what’s (most) true matters. For one, it can give us a less cluttered mind. But as we saw with the government officials example, we also take actions based on what we know; when we take actions, we have certain aims – we want the course of reality to play out in a certain way. Knowing what’s true about the world and how it works, then, to the best of our ability, seems a key determinant in selecting the best actions.
However, realistically, we have bounds on time and attention, and can’t always figure out what’s more true, more actually the case in the world. Are we better off, all things considered, spending our time working with the information that’s already closest to being true, or trying to speed up the process of figuring out what’s true? Our time is valuable, and we’d like to know what’s right (ideally, in both a normative and objective sense, assuming a meaningful boundary between the two exists) to do in a given moment and make decisions as quickly as possible, but we also want those decisions to actually be high-confidence, not just fast. In other words, the central question I’ve spent time framing here is: how do we compromise between needing to figure out how true some information is, and needing to make decisions based on it? And furthermore, how is truth connected to the practicality of information?
The perspective I’ve adopted for now, which comes from Nassim Nicholas Taleb, is a strange variant of both and neither. Roughly, it deals less with the approximate truth of a given proposition than with the risk we incur if that proposition turns out to be (approximately) false. The framework recognizes the bounds on the time and attention we can devote to evaluating the truth of information before acting. It recommends that we work with the information that is least risky if false, and that instead of spending our time trying to figure out whether given information is true, we spend it improving our ways of calculating the risk of that information being false. So “truth” is effectively replaced by “least risky”.
At first glance, this substitution might seem strange. The only way “the sun rises every day” could entail risk is if we were to act upon it, such as in the form of placing a bet as to whether this holds (where the risk is losing money). If I were to continually observe evidence that confirms this claim, I might take it to be closer and closer to absolute truth, but there is no risk as long as I don’t act upon it. So something could be true without entailing risk in any way. Why does this replacement make sense if they’re not equivalent?
But eventually, we do act upon the degree-of-truth assignments that we (subconsciously or consciously) have placed on propositions in our minds. Whenever you make a decision, you are presumably drawing from what you already know about the world [2]. So if we’re constantly making decisions, and thus constantly drawing upon our knowledge, we’d want the knowledge we use in making a decision to result in the outcome we want, which is to say that we’d want our knowledge to actually be representative of how the world works – for it to be true. Thus, we wouldn’t want to draw upon information that’s less true, because it entails more risk: a greater likelihood of our decision, intervention, or action not resulting in the outcome we want. The association between risk and truth, then, is that risky information is information that is less likely to be true. Taleb’s view is roughly that, since much of the time we fundamentally care about knowledge in order to make decisions, instead of worrying about whether our knowledge is true, we should more often be asking how much risk it entails if we were to act upon it.
Of course, there will be information that we just never end up using – having heard that Narmer was Egypt’s first pharaoh is probably not something with a risky bearing on our everyday lives. If we want to know whether such a claim actually describes the world (or how it once was, or will be), but it will never affect our decisions and so its risk can’t really be assessed, how should it be treated under this framework? As I’ll explain later in more detail, it turns out we don’t have to mentally discard such a piece of information, as we rationally would with information we know to be false. We simply don’t have to concern ourselves with it, if we know it has little to no connection to information that does bear upon our decision-making. In what follows, any discussion of information will refer to information that does have an impact [3].
Put briefly, there may already seem to be aspects of this framework for decision-making and knowledge to object to. I’m not going to defend it yet, though; instead I want to flesh it out more fully and see how it bears on decision-making and on evaluating the truth of claims more generally. For me personally, the view has made a big difference, helping combat the occasional analysis paralysis that comes up when choosing between different types of information to act upon, while still allowing me to believe that any given claim lies somewhere on the spectrum of objective truth, with some lying only arbitrarily far away from absolute truth [4]. I also think it has a chance of serving as an objectively good theory about truth.
I’d like to think I’m not too deeply bound to this view, but it has made a tangible practical difference to my life, which is a sign of good philosophy, and I hope that making it more accessible only increases the size of the audience able to scrutinize it, thus allowing us to see its gaps more clearly in one case or further corroborating it in the other.
Taleb outlines this view across his Incerto series from various angles, with some of the books focusing on more of the grittier mathematical details and others on practical decision-making or living. In presenting the view, I’ll be drawing from bits of each book, focusing mainly on communicating the big philosophical takeaways.
NNT’s View
I think a more intuitive, yet still non-reductive way to discuss Taleb’s core philosophy without getting too into the weeds comes down to grasping one of the biggest ideas in his corpus of work, that of fragility and antifragility.
“Fragility” here coheres with how we’d ordinarily use the word: something is fragile if it reacts negatively to disorder. For example, a glass is fragile, because introducing disorder – dropping it, or moving its container too quickly – causes it to break.
Antifragility, on the other hand, is the opposite of fragility: it is a characteristic of something that reacts positively to (a certain amount of) disorder [5]. Bacteria treated with antibiotics are an example. An antibiotic will kill off the bacteria, but if you don't finish the dose, the surviving population will only be more resistant to that antibiotic. It will, on net, have become stronger as a result of the disorder (the antibiotic) introduced in the first place.
Somewhere in the middle between fragility and antifragility is robustness. If something is robust, it means that it remains neutral under disorder. Unlike fragility, it doesn’t react negatively to disorder, but unlike antifragility, neither does it gain anything from it [6]. The focus in Taleb’s books is placed more on fragility and antifragility, though, so I won’t spend much time discussing robustness.
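Taleb also gives these notions a technical gloss: fragility behaves like a concave response to disorder and antifragility like a convex one, so that adding volatility (same average conditions, bigger swings) hurts the fragile and helps the antifragile. Here is a tiny numeric sketch of that idea, with made-up payoff functions of my own choosing:

```python
import statistics

def fragile(x):      # concave payoff: swings hurt more than they help
    return -x * x

def robust(x):       # linear payoff: indifferent to swings
    return x

def antifragile(x):  # convex payoff: swings help more than they hurt
    return x * x

calm  = [0.0, 0.0]   # no disorder: same mean, zero volatility
noisy = [-1.0, 1.0]  # disorder: same mean, high volatility

def volatility_effect(payoff):
    """Average payoff under disorder minus average payoff under calm."""
    return statistics.mean(map(payoff, noisy)) - statistics.mean(map(payoff, calm))
```

Running `volatility_effect` on the three payoffs gives a negative number for `fragile`, zero for `robust`, and a positive number for `antifragile` – disorder alone, with no change in average conditions, separates the three.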
The ideas of fragility, antifragility, and robustness can be applied to many different concepts, such as professions, methods of learning, and information. A fragile profession is one that is vulnerable to disorder, here at the level of societal disorder. Taking the COVID pandemic (or another disruption, such as an economic recession) as an example, we might take jobs in the travel industry to be fragile (people can’t afford leisurely activities, such as travel, during hard times), grocery workers to be robust (people always need food), and certain jobs in the creative sector to be antifragile (people might need an escape from their daily lives more than usual due to the additional hardship imposed by disorder). Applying this three-part classification to information, for Taleb, fragile information is information that does not stay relevant (e.g. applicable to situations in the world, close to truth, helpful for achieving one’s goals) for long, while antifragile information only becomes more relevant as time goes on.
Both fragile and antifragile information have characteristic features. The former tends to lean towards the theoretical, while the latter is more heuristics-based. Relatedly, fragile information tends to be gained from long chains of reasoning, independent of interfacing with the world around us and often not tied to a specific practical purpose, while antifragile information develops through being tried and tested in the world. Since Taleb is primarily concerned with the use of knowledge – and “using” anything means employing it in our lives, in the world – it makes sense to say that any information that promises to become more true or relevant over time should work when we test it out in the world. Meanwhile, fragile knowledge might sound good and adhere to our intuitions overall, but if it doesn’t actually hold when we subject it to a test case, we probably wouldn’t want to use it in the future. So usability and unusability, the main classification Taleb finds relevant for claims, correspond to antifragile and fragile information, respectively.
To add some concreteness to these abstract descriptions, we might take modern economic theory – Taleb's favorite body of knowledge to rake over the coals – as an example of fragile information, and Stoic philosophy as an example of antifragile information. Take the Black-Scholes model, a model one can use to calculate a unique price for an option given certain inputs. Taleb would claim that this information is fragile: the exact conditions under which it can be employed are essentially never met in practice, so its ability to actually predict option prices (the way in which it would be useful) remains unknown. It is still used even when the assumptions governing it aren’t satisfied, on the claim that it provides a useful approximation, but it has nonetheless had a major negative impact by providing mathematical justification for the type of trading that led to the 2008 financial crisis (tumbling the world economy and sending ordinary people into crisis, with no skin taken off the backs of economists) [7]. Overall, Taleb would claim that, because its predictive ability remains dubious from not being tested adequately in the real world, economic models such as Black-Scholes are fragile [8].
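For the curious, the standard Black-Scholes call-price formula itself is compact. Below is a minimal sketch; the inputs are purely illustrative, and the docstring flags the idealized assumptions at issue:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option.
    S: spot price, K: strike, T: years to expiry,
    r: risk-free rate, sigma: volatility.
    The derivation assumes constant volatility and continuous,
    frictionless hedging -- the idealized conditions that, per
    the discussion above, are essentially never met in practice."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Made-up inputs: spot 100, strike 100, 1 year, 5% rate, 20% volatility.
price = black_scholes_call(100, 100, 1.0, 0.05, 0.20)
```

The formula is clean and the code runs fine; Taleb’s complaint is not with the mathematics but with treating its output as a reliable guide to the messy world.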
In contrast to the untested Black-Scholes model, Stoic philosophy has been around for thousands of years, yet people continue to find it useful for living today, making it a prime example of antifragile knowledge. To take one example, The Discourses is a recorded series of conversations between the Stoic philosopher Epictetus and his students after lectures at his school in Nicopolis, in which he gives ever-more-practical pointers about how to apply the more abstract Stoic thought he was teaching. For instance, he emphasized that, though we cannot control the fact that specific thoughts enter our consciousness as a result of simply interacting with the world, we do have some control over our reaction to those thoughts. Since our judgments of events as good (pleasant) or bad (unpleasant) are fundamentally reactions to what the external world presents us with, Epictetus concludes that anything we perceive as a bad experience can in principle be perceived as good, and that performing this act of mental alchemy is limited only by our ability to interpret seemingly bad events as somehow good. Improving at this skill amounts to placing everything good more within our immediate reach [9].
Epictetus' twist on classic Stoic wisdom enabled people to live better lives in his own time, but the fact that we still see The Discourses on bookstore shelves shows that we find it (increasingly) relevant today [10]. Circumstances for applying the tenets of Epictetus’ Stoicism are easily encountered – we experience unfortunate situations all the time – so the knowledge conveyed by Epictetus and other Stoics is not only easily applicable in practice, but has also been shown to work. I've written here before both about Stoicism and about taking it too far, so as with the Black-Scholes model, we more often apply some close variant of the original idea in our lives; the difference is that the Stoic ideas have been tried and tested in the real world for a much longer time. Certain Stoic ideas clearly endure the disorder imposed on them by a continually changing world.
Yet, going further, based on the content of Stoicism, we could actually say these ideas are antifragile in the sense that those who actually exercise this knowledge only become stronger when they're subjected to disorder, to the vicissitudes of life. The more someone improves at interpreting seemingly bad events as good, the more they exercise this ability, and to the extent that feeling good spurs motivation and action, you might say that such a person only becomes more able to act in the world the more misfortunes fall upon them. I personally don’t always find this practice intuitive to execute – in the immediate aftermath of some unfortunate event occurring, it’s difficult to focus and adopt a coherent frame of mind. But once the dust settles a bit, and I ask what I can learn from this, I'd say I'm either wiser in the end or find the energy to bring about something more positive in the world for myself. In many instances, the disorder has made me better. Thus, these Stoic ideas (and/or any variants of them) are antifragile.
Having started the discussion of Taleb’s ideas through the lens of risk, and having now introduced the key concepts of fragility and antifragility from his books, we can connect the two. Intuitively, using fragile information to decide upon actions entails more risk: not having been tested in the messy world where actions are ultimately taken, it may not accurately indicate whether a given action is good to take. Antifragile information, on the other hand, entails less risk [11].
Taking stock
Now, hopefully we can start to see the bigger-picture implications for the conflict between trying to figure out whether the knowledge you’re acting upon is true and needing to simply act. Again, I’m not sure understanding fragility and antifragility alone does justice to Taleb’s full philosophy, but I think it suffices for starting to address this tension. Namely, I think all of this deliberating leads to a practical implication: we should care about recognizing and acquiring antifragile knowledge. That is, when we experience this tension between being uncertain about some body of information and knowing that we need to make a decision using it, we should orient our effort towards figuring out whether the information at hand is fragile or antifragile – high-risk if false, or low-risk if false – acting upon it if it’s of the latter type, and forgoing its use otherwise.
Connecting the dots, my case goes as follows. Assume to start that our information about how the world works determines our actions, and that better information – information that corresponds to how the world actually is, i.e. it hasn’t, at any point in our acquiring or holding it, come to misrepresent the world (again, adhering to the correspondence theory of truth) – leads to better actions. Then we should strive to possess better information [12].
As I hope I’ve established above, antifragile information seems to better represent the way things are than fragile information, so we should prefer to work with antifragile information if we want to take the best actions according to our personal utility function. Our lens for evaluating information – figuring out whether something is true in a practical sense – is then no longer really based on truth, but on fragility, which is tightly entwined with risk (risk being the easier of the two notions to explain and grasp). Equivalently, truth, or knowledge, is the information in the world that allows us to make correct predictions – the information that is more antifragile [13].
Now, this doesn’t mean we simply have to toss out all fragile information from our minds. If a model in economics strikes you as beautiful for its mathematical elegance, it feels wrong to ask you to stop examining it, to not try and see what else can be derived from it, or to stop conjecturing what it allows you to conclude about the world. The problem lies only in assigning it predictive power, or using it to explain a phenomenon, when it has in fact not been tested or employed. Of course, it’s difficult to establish boundaries around what counts as use and non-use, but if the conditions around the use of a hypothesis make it almost impossible to test properly, one might also be justified in asking whether it has any promise of delivering practical value at all.
Taleb has a similar view about the aesthetic domain. There is no obvious predictive or practical value gained from looking at a certain painting or reading a poem, but such activities can bring tremendous meaning to our lives nonetheless. Seeing Tishk Barzanji’s art for the first time was a psychedelic experience – the sharpness of the lines, the mix of colors, and the silhouetted characters made me drop what I was doing and read everything I could about the artist and his work. In admiring it primarily for its aesthetic form, I clearly still value it, even in the absence of explicit, predictive claims about the world. Artworks may of course speak truths, but those truths aren’t forced upon us. Incidentally, I found later on that Barzanji’s art strongly articulates the need for continual awareness of human imperfection, yet I was moved to this conclusion on my own. The art clearly has a message, perhaps not the one I picked up, but I don’t feel wrong reacting to it differently than the artist intended. This is, again, in contrast to something like highly idealized economic models that purport to explain the world without having been appropriately tested within it. All in all, we can most definitely value something even if it doesn’t aim to precisely describe what is true about the world, as we commonly do with art.
So, where does all of this leave us with respect to the tension we set out to investigate, that of seemingly needing to figure out whether the information we’re working with in our everyday lives is true, while also needing to act? Taleb’s view has helped me address it by shifting my attention to assessing the fragility, or risk, of information rather than its truth – a frame that is more relevant for decision-making while still being connected to the more abstract question of whether a body of information describes the world truly. It lets me act while remaining agnostic about the truth values of individual pieces of information, believing that they could eventually converge to a complete truth or falsehood, contingent both on our subjecting them to more rigorous analysis and on further philosophical work on whether such convergence is possible in the first place.
Thanks to my friends Paulius Skaisgiris and Sarah Dukić for feedback.
Footnotes:
[1] This is known as the correspondence theory of truth, which is one of many. I personally find this one the most plausible and will proceed with it, as I don’t think adopting any other one will make a huge difference for what follows.
[2] I think this applies to any sense of knowing, so regardless of where you fall on the rationalist (ie. what you know is derived logically from all else you already know, with very few things about the world actually becoming known to us through sensory experience) vs. empiricist (ie. what you know is what you’ve derived through your senses from interacting with the world around you) spectrum. With the former, in making a decision, you just reason from what you’ve already deduced, and with the latter, you’re drawing upon sensory data you’ve already picked up.
[3] So, this statement (and Taleb’s view, overall) is sympathetic to/a variant of pragmatism, which is a school that considers thoughts or ideas as relevant only insofar as they have a practical use in the world, as opposed to describing reality, or what is true about the world. For instance, a beautiful theory about how the mind works describes some aspect of what reality is like, but if it cannot be used to make predictions about the world in some way (or fails at doing so), a pragmatist might say that it ought to be discarded. However, Taleb’s stance might be more liberal in classifying what makes a claim “useful”, and would not eradicate any mention of the said theory of mind, as long as it is only used in the sense that it ought to be used (more on this later). In presenting and advocating for Taleb’s view, I won’t be defending pragmatism directly – rather, I’ll present what Taleb’s view has to say about various things, and then consider both its utility and beauty. That might count as an indirect defense of pragmatism, though that’s not my primary aim.
[4] I think it’s pretty hard to find claims close to absolute truth, but intuitively, mathematical ones seem to be good candidates.
[5] This parenthetical remark is important, as being subjected to too much disorder is clearly not good. As the bacteria example illustrates, bacteria addressed by antibiotics are antifragile, but only if the dose is not finished: the full dose (i.e. too much disorder) will probably kill the bacteria completely, while anything less than that (just enough disorder) only makes them stronger. Ignoring this qualification is probably one of the most common strawmen launched against Taleb.
[6] Colloquially, robustness is often taken to be the opposite of fragility, whereas for Taleb the true opposite of fragility is antifragility.
[7] As per the Black-Scholes model Wikipedia article.
[8] Taleb essentially argues for this claim by showing that supposed instances of traders using the Black-Scholes formula are actually making use of practical heuristics and tricks.
[9] https://en.wikipedia.org/wiki/Epictetus
[10] By some measures, Stoicism is also only getting more popular.
[11] I’m treating these terms as being quite similar to each other, but the motivation for drawing a distinction and introducing Taleb’s views under the language of risk was because “fragility” and “antifragility” seem more foreign and thus harder to unpack.
[12] Again, as mentioned at the beginning of this essay, I’m assuming the correspondence theory of truth to hold.
[13] I don't think there's such a thing as false knowledge, even though we might speak that way colloquially, but there is false information – data from the world that may misrepresent to us what is actually the case, whether because of our thinking machinery or our sensory systems. This is an implicit assumption I'm making throughout this post. I don't think the semantics matter too much, but the point is that there is information about the world that does help us figure out what's actually the case, and that's what I'm calling knowledge. My intuition about knowledge is that it must be something true, which is again something that makes correct predictions.