This is science, or This is why I like the New Yorker
"Tetlock (the psychologist who conducted the study) put most of the forecasting questions into a “three possible futures” form. The respondents were asked to rate the probability of three alternative outcomes: the persistence of the status quo, more of something (political freedom, economic growth), or less of something (repression, recession). And he measured his experts on two dimensions, one of which was how good they were at guessing probabilities (did all the things they said had an x per cent chance of happening happen x per cent of the time?). The results were unimpressive. On the first scale, the experts performed worse than they would have if they had simply assigned an equal probability to all three outcomes—if they had given each possible future a thirty-three-per-cent chance of occurring. Human beings who spend their lives studying the state of the world, in other words, are poorer forecasters than dart-throwing monkeys, who would have distributed their picks evenly over the three choices.
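Tetlock's actual scoring is more elaborate, but the "equal probability to all three outcomes" baseline can be sketched with a standard Brier score (an assumption here; the function name and the example numbers are illustrative, not from the study): the uniform forecast earns the same score no matter which future occurs, so any expert who scores worse than it is literally worse-calibrated than chance.

```python
# Brier score for a probabilistic forecast over three mutually
# exclusive outcomes: sum of squared differences between the
# forecast probabilities and the 0/1 indicator of what happened.
def brier(probs, outcome):
    return sum((p - (1.0 if i == outcome else 0.0)) ** 2
               for i, p in enumerate(probs))

# The "dart-throwing monkey": one third on each possible future.
uniform = [1/3, 1/3, 1/3]

# Whatever actually happens, the uniform forecast scores 2/3:
print([round(brier(uniform, o), 3) for o in range(3)])  # [0.667, 0.667, 0.667]

# An overconfident expert who bets heavily on the wrong future
# scores far worse than the monkey:
print(round(brier([0.9, 0.05, 0.05], outcome=1), 3))  # 1.715
```

Lower Brier scores are better; the point of the baseline is that it is hard to do worse than 0.667 without being systematically overconfident.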
Tetlock also found that specialists are not significantly more reliable than non-specialists in guessing what is going to happen in the region they study. Knowing a little might make someone a more reliable forecaster, but Tetlock found that knowing a lot can actually make a person less reliable."
And why?
"The experts’ trouble in Tetlock’s study is exactly the trouble that all human beings have: we fall in love with our hunches, and we really, really hate to be wrong. Tetlock describes an experiment that he witnessed thirty years ago in a Yale classroom. A rat was put in a T-shaped maze. Food was placed in either the right or the left transept of the T in a random sequence such that, over the long run, the food was on the left sixty per cent of the time and on the right forty per cent. Neither the students nor (needless to say) the rat was told these frequencies. The students were asked to predict on which side of the T the food would appear each time. The rat eventually figured out that the food was on the left side more often than the right, and it therefore nearly always went to the left, scoring roughly sixty per cent—D, but a passing grade. The students looked for patterns of left-right placement, and ended up scoring only fifty-two per cent, an F. The rat, having no reputation to begin with, was not embarrassed about being wrong two out of every five tries. But Yale students, who do have reputations, searched for a hidden order in the sequence. They couldn’t deal with forty-per-cent error, so they ended up with almost fifty-per-cent error.
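The arithmetic behind the rat-versus-students result is worth spelling out: always picking the more frequent side wins 60 per cent of the time, while "probability matching" (guessing left 60 per cent of the time and right 40 per cent) wins only 0.6 × 0.6 + 0.4 × 0.4 = 52 per cent. A quick simulation (the trial count and seed are arbitrary) reproduces both scores:

```python
import random

random.seed(0)
trials = 100_000

# Food goes left 60% of the time, right 40%, at random.
food = ['L' if random.random() < 0.6 else 'R' for _ in range(trials)]

# The rat's strategy: always go to the more frequent side.
rat_score = sum(side == 'L' for side in food) / trials

# The students' strategy (probability matching): guess left 60%
# of the time and right 40%, hunting for a pattern that isn't there.
guesses = ['L' if random.random() < 0.6 else 'R' for _ in range(trials)]
student_score = sum(g == f for g, f in zip(guesses, food)) / trials

print(round(rat_score, 2))      # ≈ 0.60
print(round(student_score, 2))  # ≈ 0.52
```

Maximizing beats matching whenever the sequence really is random, which is exactly what the students refused to believe.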
(...)
"The expert also suffers from knowing too much: the more facts an expert has, the more information is available to be enlisted in support of his or her pet theories, and the more chains of causation he or she can find beguiling. This helps explain why specialists fail to outguess non-specialists. The odds tend to be with the obvious. (...) Tetlock’s experts were also no different from the rest of us when it came to learning from their mistakes. Most people tend to dismiss new information that doesn’t fit with what they already believe. Tetlock found that his experts used a double standard: they were much tougher in assessing the validity of information that undercut their theory than they were in crediting information that supported it. The same deficiency leads liberals to read only The Nation and conservatives to read only National Review. We are not natural falsificationists: we would rather find more reasons for believing what we already believe than look for reasons that we might be wrong. In the terms of Karl Popper’s famous example, to verify our intuition that all swans are white we look for lots more white swans, when what we should really be looking for is one black swan."
More:
"And, like most of us, experts violate a fundamental rule of probabilities by tending to find scenarios with more variables more likely. (...) The classic “Linda problem” is an example. In this experiment, subjects are told, “Linda is thirty-one years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice and also participated in antinuclear demonstrations.” They are then asked to rank the probability of several possible descriptions of Linda today. Two of them are “bank teller” and “bank teller and active in the feminist movement.” People rank the second description higher than the first, even though, logically, its likelihood is smaller, because it requires two things to be true—that Linda is a bank teller and that Linda is an active feminist—rather than one. Plausible detail makes us believers. In 1982, an experiment was done with professional forecasters and planners. One group was asked to assess the probability of “a complete suspension of diplomatic relations between the U.S. and the Soviet Union, sometime in 1983,” and another group was asked to assess the probability of “a Russian invasion of Poland, and a complete suspension of diplomatic relations between the U.S. and the Soviet Union, sometime in 1983.” The experts judged the second scenario more likely than the first, even though it required two separate events to occur. They were seduced by the detail."
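The "Linda problem" violates the conjunction rule of probability: for any events A and B, P(A and B) = P(A) × P(B | A) ≤ P(A), so the two-part description can never be more probable than the one-part description it contains. A minimal sketch (the probabilities below are made up purely for illustration):

```python
# Conjunction rule: P(A and B) = P(A) * P(B | A) <= P(A).
# Illustrative, invented numbers for the Linda problem:
p_teller = 0.05                # P(Linda is a bank teller)
p_feminist_given_teller = 0.6  # P(active feminist | bank teller)

p_both = p_teller * p_feminist_given_teller

# However plausible the extra detail feels, the conjunction is
# never more probable than the single description:
assert p_both <= p_teller
print(round(p_both, 3))  # 0.03
```

Any numbers you plug in give the same ordering, because a conditional probability can never exceed 1; the plausible detail changes how likely the scenario *feels*, not how likely it is.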
In sum:
"The best lesson of Tetlock’s book may be: Think for yourself."
(For the full article, click here.)