Who wants to be wrong? What a great reason to learn math! How Not to Be Wrong is excellently summarized in this review, so take a minute and read that so you can not be wrong anymore. Then you can come back here, and I'll give you some of my examples. (NB: the math herein is very rough. Numbers have been wildly approximated. I tried not to write about anything I didn't understand, but I may have failed. If you want really elegant, clean math and blameless explanations, read the book.)
First of all, math is everywhere, and it turns out that if you know just as much math as you did in the fourth grade, it can be a great party trick. If there are 20 people in the room, and we want to divide into groups that are not too big and not too small, what are our options? What if there are only 19? If your lunch was $7.50 plus tax and tip, what do you owe the person who put down her credit card? If you are having an Easter egg hunt with three kids, and it turns out you have 31 eggs, how many should each one find?
All of these everyday math problems hinge on the beautiful way numbers separate and combine. 4 groups of 5 people can be reorganized into 5 groups of 4 people, and once you know that, you know that 19 people will make 3 groups of 5 and one group of 4, or 4 groups of 4 and one group of 3. A $2 tip is kind of a minimum, tax exists and is always more than you think it should be, so $10 would not be too much to contribute toward your $7.50 lunch. And the oldest kid can have the extra Easter egg, because he's the only one who can reliably count that high anyway.
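If you like seeing the arithmetic spelled out, here's a quick Python sketch of those party tricks. (The 8% tax rate is entirely my own invention; adjust for your locale.)

```python
# Party-trick arithmetic, spelled out.

# 20 people divide evenly into groups that are not too big, not too small.
print(20 // 5, "groups of 5")   # 4 groups of 5
print(20 // 4, "groups of 4")   # 5 groups of 4

# 19 people: divmod gives the even groups plus the leftover.
groups, leftover = divmod(19, 5)
print(groups, "groups of 5, plus one group of", leftover)   # 3 groups of 5, plus one group of 4

# A $7.50 lunch, a $2-ish minimum tip, and a made-up 8% tax rate.
lunch = 7.50
owed = lunch * 1.08 + 2.00
print(f"${owed:.2f}")   # $10.10 -- so $10 is a fair contribution

# 31 Easter eggs among 3 kids.
each, extra = divmod(31, 3)
print(each, "eggs each, with", extra, "left over for the oldest")
```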
On a more sophisticated level, there's a discussion somewhere in the middle of the book about probabilistic situations that divide into four categories rather than two. For example, there are people who are terrorists and people who are not terrorists, and there are people Facebook flags as possible terrorists and people it does not. In such a situation, the question, "What are the chances that my neighbor is a terrorist?" is completely different from the question, "What are the chances that my neighbor, who was flagged as a possible terrorist, actually is one?" The first question looks at the whole population: she is or she isn't. The second restricts attention to the flagged people only, and asks how many of them land in the quadrant where "flagged as suspicious" and "actually a terrorist" intersect.
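To make those quadrants concrete, here's a little Python sketch with numbers I made up entirely for illustration: suppose 1 person in 10,000 is a terrorist, and the flagging system catches 99% of terrorists while wrongly flagging only 1% of innocent people. Even with those generous accuracy numbers, a flagged neighbor is almost certainly innocent:

```python
# Four quadrants with invented numbers: 1 in 10,000 people is a terrorist,
# and the flagging system catches 99% of terrorists but also wrongly
# flags 1% of innocent people. (All rates are hypothetical.)
base_rate = 1 / 10_000
flag_if_terrorist = 0.99      # true positive rate
flag_if_innocent = 0.01       # false positive rate

population = 1_000_000
terrorists = population * base_rate                      # 100 people
innocents = population - terrorists                      # 999,900 people

# The four quadrants:
flagged_terrorists = terrorists * flag_if_terrorist      # 99
unflagged_terrorists = terrorists - flagged_terrorists   # 1
flagged_innocents = innocents * flag_if_innocent         # 9,999
unflagged_innocents = innocents - flagged_innocents      # 989,901

# Question 1: chance a random neighbor is a terrorist.
print(terrorists / population)                           # 0.0001

# Question 2: chance a *flagged* neighbor is a terrorist.
flagged = flagged_terrorists + flagged_innocents
print(flagged_terrorists / flagged)                      # about 0.0098 -- under 1%!
```

The under-1% answer is the whole point of the four-way matrix: the flagged innocents vastly outnumber the flagged terrorists, simply because there are so many more innocents to begin with.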
One reason this situation is interesting is the principle that the more extreme a number is, the less an impressive-sounding proportional change will actually accomplish. In our home, that's sometimes known as the Ralph Lauren effect. A half-price sale sounds amazing until you realize that the regular price is $125. Even half off just isn't enough to make the shirt affordable.
In our example about terrorists, the chances that anyone is a terrorist are tiny-- like there are 7 billion people in the world, and by the broadest definition there might not be more than a few million terrorists among us. And most of them must be pretty ineffectual: according to my minutes of research, in 2014 there were about 44,000 deaths and 16,000 injuries as a result of terrorist attacks worldwide. Out of 7 billion people. So the chances of finding a terrorist just by picking a person at random are, let's say, 1 million out of 7 billion, or 1/7000, or about 0.014%. If I say I can double your chances of picking a terrorist out of the crowd, that sounds great, right? But sadly, that only puts your chances up to 2/7000, or about 0.029%, which are still terrible odds. Now you see why we have the NSA and so forth.
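Spelled out in Python, with the same rough numbers (and the Ralph Lauren shirt thrown in for good measure):

```python
# Doubling a tiny probability still leaves a tiny probability.
p = 1_000_000 / 7_000_000_000    # 1 million terrorists out of 7 billion people
print(f"{p:.4%}")                # 0.0143% -- roughly 1 in 7000

doubled = 2 * p                  # an impressive-sounding 100% improvement
print(f"{doubled:.4%}")          # 0.0286% -- still terrible odds

# The Ralph Lauren version: half off a $125 shirt is still a $62.50 shirt.
print(125 / 2)                   # 62.5
```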
So here's some more fun with percents. If I say I'm "doubling" your chances, they will go up by 100%. Which is not the same as going up to 100%! A percent is only as good as its starting point-- remember the Ralph Lauren problem?

Here's another example. I have to write surveys for work. Getting people to fill out the surveys is a big problem. My survey provider sent me an email recently asking, "If you could improve survey response rates by 50%, would you do it?" Wow! That sounded great! But when I looked more closely, here's what was really happening, more or less (remember, my numbers are rough). The typical survey response rate was only 25%. So, add 50% of 25% (12.5 percentage points), and that new, improved, shiny response rate was now... 37.5%! That's not so exciting. Less than half my group would be responding, even with my snazzy new strategy.

Furthermore, watch this trick: I still have 62.5% not responding (because P plus NOT P equals 100%). Previously I had 75% not responding. So I only reduced the non-response rate by 16.67%! Only 16.67% of my former non-responders responded to my new bells and whistles! So here's where things get weird: increasing the response rate by 50% only decreased the non-response rate by 16.67%! That's because so few people were responding in the first place. If half the people were responding, it would work out more nicely: I would go from 50/50 answer/no answer to 75/25, which would be a tidy 50% increase in what we want and a 50% decrease in what we don't want. But a percent is only as good as its starting point!
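And here's the survey arithmetic in Python, using my rough numbers from above:

```python
# A 50% boost to the response rate vs. what happens to the non-response rate.
response = 0.25
improved = response * 1.5                 # "improve response rates by 50%"
print(improved)                           # 0.375 -- only 37.5% responding

old_nonresponse = 1 - response            # 0.75
new_nonresponse = 1 - improved            # 0.625
drop = (old_nonresponse - new_nonresponse) / old_nonresponse
print(f"{drop:.2%}")                      # 16.67% -- the not-so-impressive decrease

# Starting from 50/50, the same trick looks much better:
response = 0.50
improved = response * 1.5                 # 0.75
drop = (0.50 - (1 - improved)) / 0.50
print(f"{drop:.2%}")                      # 50.00% -- tidy and symmetric
```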
Isn't math fun? Now go read that review, at least, so you can learn more!
Ellenberg closes with a wonderful quote from W.V. Quine that I have used a lot since I read it: "To believe something is to believe that it is true; therefore a reasonable person believes each of his beliefs to be true; yet experience has taught him to expect that some of his beliefs, he knows not which, will turn out to be false. A reasonable person believes, in short, that each of his beliefs is true and that some of them are false." Isn't that great! It doesn't make much sense going around saying, "I could be wrong." For crying out loud, if you think you're wrong, do some more research until you either are pretty sure you're right or have realized the answer is unknowable! But, at the same time, realize that you are perfectly capable of all kinds of errors, even while you are doing your best due diligence to avoid them. That's mathematical thinking!