title: Fallacies in Reasoning
tags: reason fallacy

There are numerous fallacies to be wary of when thinking about things. There is (or will be) a separate section on this Wiki for logic itself. I'll add some as and when I can.

The topic of this Wiki is Mental Health, my perspective on it, and what I've learned in the two decades since my first major episode. Here I'll outline fallacies in their general form, then on a separate page, FallaciesInMentalHealth, I will outline how these fallacies arise when one is reasoning about Mental Health, whether from the Interior perspective or from the Exterior.

What I should make clear is that the brain and Mind are unimaginably complex in their behaviour. Thus it is absolutely necessary to make many simplifying assumptions in order to reason about them. But in so doing, fallacies such as these are traps that are easy to fall into.

* Absence of evidence is not evidence of absence
* Correlation does not imply causation
* Usually does not imply always
* Rarely does not imply never
* Sometimes you do get a different outcome
* Invalid Simplifying Assumptions

# Absence Of Evidence Is Not Evidence Of Absence

# Correlation Does Not Imply Causation

# Usually Does Not Imply Always

# Rarely Does Not Imply Never

# Sometimes You Do Get A Different Outcome

They say that *madness is doing the same thing but expecting a different outcome*. Seems sensible, but it needn't be true. If you roll the same 6-sided dice twice, you may have done the same thing, but you may not get the same outcome. If we ask the question "did we roll a 1?", then we get a "yes" outcome 1/6 of the time, otherwise a "no" outcome.

If we replace our 6-sided dice with a 100-sided dice (ignoring the geometric issues in actually constructing such a dice), we would get a "yes" outcome only 1/100 of the time. If we kept rolling this dice, kept getting a "no" outcome (*i.e.* not rolling a 1), some may think us mad for trying again and again.
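Whether that persistence is mad is just arithmetic. Here is a minimal Python sketch (the function name is my own) of the chance of getting at least one "yes" in n rolls of a fair 100-sided dice:

```python
def p_at_least_one_yes(n: int, sides: int = 100) -> float:
    """Chance of rolling at least one 1 in n rolls of a fair dice.

    Each roll misses with probability (sides - 1) / sides, so the
    chance of *never* rolling a 1 in n rolls is that value to the
    power n; at least one "yes" is the complement.
    """
    return 1.0 - ((sides - 1) / sides) ** n

# The 50% point for a d100 sits at 69 rolls: ln(0.5)/ln(0.99) ~ 69.
for n in (12, 36, 69, 100):
    print(f"{n:3d} rolls: {p_at_least_one_yes(n):.0%}")
```

So after three dozen rolls there is already roughly a 30% chance of a "yes", and by 69 rolls it is better than even.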
But actually after a few dozen rolls we are likely to get one 'different' outcome: a "yes" rather than a "no". The maxim that "if at first you don't succeed, try, try again" is in part this: sometimes you have to try the same thing over and over again in order to get a desired outcome.

Now I'm not a fan of the kind of 'quantum woo' that alternative medicine practitioners come out with, but to add a little 'quantum fun' to this, consider 7 electrons whose spins are unknown and independent of each other. If we measure those electrons along one axis, taking them as the 7 bits of a binary number (spin up being 1 and spin down being 0), we get 7 bits. If we now measure those same electrons along an axis orthogonal to the original one (that is, at right angles to it), we will essentially get 7 random bits. If we kept doing this repeated measuring, each time measuring orthogonal to the previous direction, we would expect to get seven 0's once per 128 measurements. Something like that. I imagine someone with more of a physics background could make this more solid. But the idea is that this 'you can get a different outcome' thing goes 'right down to the quantum level'. Bit of a silly idea, not to be taken as any kind of 'quantum woo', and I could be completely wrong.

# Invalid Simplifying Assumptions

```cquote
It is a reasonable assumption that: a cow is a sphere of mass 500kg and with radius 0.5m.
```

While not what one would ordinarily call a fallacy, in the course of thinking about a problem, we naturally make assumptions in order to make it feasible to think about the problem, and to compare similar problems. Having said that, it is easy to forget, or even simply be unaware of, the assumptions on which one's reasoning rests. When we build upon the conclusions of others, we inherit all the myriad assumptions of the works we rely upon, and all the assumptions that they in turn rely upon.
So it is easy not to know all the assumptions one's reasoning relies upon.

One particular class of assumptions, ones that I class as fallacies, is assuming incorrectly that one is comparing like with like (or 'apples with apples'). If we do not grant the assumption that any two instances of mania are the same, so far as diagnosis and treatment are concerned, where does that put conclusions drawn from trials of treatment where a diagnosis of mania was the only criterion for selection? More generally, assuming uniformity of some kind is a class of problem that can undermine what may appear to a casual observer to be sound reasoning.

My general philosophy about such things is to make your assumptions as explicit as possible, and to carefully consider the implications of the negation of those assumptions.

# Not Comparing Like With Like

```cquote
What is the average flavour of a piece of fruit?
```

The difference between comparing 'apples with apples' vs comparing 'apples with oranges' matters. But not being able simply to say 'fruit is fruit' and treat all fruit as alike is inconvenient. It is the same with many things, and researchers, in search of research that they can do and publish, are always drawn towards what is possible. Alas, if it is only possible to 'compare fruit with fruit', rather than ensure that one is only 'comparing apples with apples', then many will get sucked into 'comparing fruit with fruit', believing that to be the 'best that can be done'. When that happens, those 'best that can be done' results are seen as akin to 'scientific truths', and the shortcomings are easily and silently ignored.

Perhaps any two instances of mania are alike, sufficiently that they may be assumed to be the same. Perhaps not. If not, then trying to study 'treatments for mania' may be as silly as trying to use statistical methods to work out the average flavour of a piece of fruit.
(In this case, what sort of fruit it is matters, but it is easy to lose sight of that in the face of a pile of well-tested mathematics.)

# Appeals To Convenience

When we are dealing with the Mind and brain in full detail, things get unimaginably complex. There are around 86 billion neurons in the brain, hundreds of trillions of synapses, and some astronomical (or even beyond astronomical) number of possible configurations of those neurons and synapses. The behaviour of the brain depends heavily on just how those neurons are connected by synapses. But the complexity doesn't stop there. There are innumerable causal feedback loops that go out from the brain through the nervous system and body, possibly into the world surrounding the body, and back into the brain via the senses and the nervous system. All this can also affect how the brain behaves, just as what happens on a server in Germany can affect how your computer in London behaves when you click a button in a web browser.

Faced with such unimaginable complexity and, with it, the total intractability of doing things in exact detail, we have to either make myriad simplifying assumptions, or give up. Those who do not give up thus make those myriad simplifying assumptions. Their reasoning for doing so may be along the lines of "I don't see what else we could do", as somebody once remarked to me in a comments section about something to do with psychosis.

The trouble is that "I don't see what else we could do" doesn't automatically entail that what you can do is valid and works. That is the fallacy of an *Appeal to Convenience*: a reasoning step, often implicit or silently ignored, along the lines of "since not doing this is too inconvenient, we may conclude that it works". The trouble here is not that simplifying assumptions are made; often they are necessary. The trouble is when those simplifying assumptions are silently ignored, and the dependence upon those assumptions hidden from view.
If you claim to prove that something is impossible, and your proof seems valid, and yet that something happens to happen, then what you have actually proven is that one of your underlying assumptions must be false. If you haven't taken the difficult step of formally taking note of everything your reasoning depends upon, it can be very hard to spot invalid assumptions, or cases of invalid extrapolation or generalisation of some theory or observations. Moreover, with the modern pressure to publish, researchers often have neither the time to go into such detail, nor any kind of professional motivation to do so. And so implicit appeals to convenience take root in the published literature.
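That first reasoning step can be written out formally, as a sketch in propositional terms (the notation is my own): if assumptions A1 through An together entail that some event E is impossible, and E is nonetheless observed, then by contraposition at least one of the assumptions must be false.

```latex
% Modus tollens over the conjunction of assumptions:
% the proof shows (A1 and ... and An) implies not-E, yet E happened,
% so the conjunction fails, i.e. some individual assumption is false.
\[
\bigl((A_1 \land \cdots \land A_n) \Rightarrow \lnot E\bigr) \land E
\;\Rightarrow\;
\lnot A_1 \lor \cdots \lor \lnot A_n
\]
```

Note that the conclusion is only a disjunction: observing E tells you *some* assumption failed, not *which* one, which is why it pays to have the full list written down.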