My epistemology allows me to imagine many futures, but never to believe in any ‘the future’. ‘The future’ is a concept incommensurate with any conceivable intelligence. It is a conceptual oxymoron.
In the world I think I live in, nothing interesting is foreseeable; the future is quite incalculable. There are far too many causes with far too many effects, each effect itself caused by far too many causes, many of them unmeasurable, and the total system has far too many non-linear feedbacks, chaos feeding chaos everywhere.
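The claim about non-linear feedback can be made concrete with a standard toy model (my illustration, not part of the original argument): the logistic map. At its chaotic setting, two trajectories that start a hair's width apart agree for a while and then become completely unrelated, which is the precise sense in which "chaos feeding chaos" defeats prediction.

```python
# Sensitive dependence on initial conditions: the logistic map
# x_next = r * x * (1 - x) is fully chaotic at r = 4.0.
# Two starting points differing by 1e-10 become uncorrelated
# after a few dozen iterations.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0; return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-10)

print(abs(a[5] - b[5]))    # still tiny: short-range prediction works
print(abs(a[50] - b[50]))  # typically order 0.1 or more: prediction has failed
```

The divergence grows roughly exponentially (the map's Lyapunov exponent is ln 2 per step), so even a measurement error of one part in ten billion destroys all predictive power within about 35 steps.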
That is not to say it is not worthwhile to think about what might be, but if there is one thing recorded history is unanimous on, it is that brilliant analysis rarely sees the interesting twists, the ones that could make you some serious $, or let you avoid starvation or bombings or … Whole populations routinely miss the so-obvious-in-retrospect signals that told them, loudly and often, that disaster was on its way. For example, the 50 million people who died in WWII. Quite a lot of Europe went hungry in those times and endured bombings; in retrospect, they all should have emigrated.
Not just civilians with interests only in their lives and treasures: professional militaries and professionals of statecraft and intelligence, too, find that their best analyses do not control their futures. For WWII, I think the detailed histories show that very little of the political calculation or military planning got things right, even among the winners. No doubt analyses occasionally allow a catastrophe to be avoided, but so many of those catastrophes are ones the analysts' own side almost caused.
When we first meet aliens, we will brag of our science and technology, always and forever based on tinkering with reality, not of our fabulous history of successes based on brilliant analyses built from words and shared understandings of how things are and should be. The groundings and feedback are very different in those two arenas; only the scientists and engineers of the world seem to appreciate that difference.
The future is different things, some are easy to know, the static bits. That book will likely be sitting on my desk in the same spot a year from now, as it has for some years. On the other hand, the amount of rain falling on the patio tomorrow is much less certain.
People generally find the dynamic parts the most valuable; very few of the static bits, being static, can have any differential effect on their lives.
It is important to categorize information by the source of its dynamism, because the characteristics of that source control the reliability of predictions about it. Just because the sun dynamically comes up every morning does not mean the orbital mechanics of celestially-sized objects can change in the blink of an eye; nor, because we can sometimes predict rain 5 days out better than chance, does it follow that we can predict stocks with enough precision to make a profit in trading.
We could end this epistemological analysis here: analyses of human affairs are based on our shared understandings of what it means to be a government, a political opposition, a military*. But the entity to which those labels, the shared understandings key to our analyses, are applied is under no obligation to behave as predicted by the properties common to the label. As one recent example, after the US's use of NGOs, new properties are now assumed of them, but the transition caught the world by surprise. Many such shared understandings turn out to be too static, and so prevent prediction. Not a playable game, as it turns out, and the reason for the high failure rate of everything not based solidly on science and engineering. Even medicine's successes are far from perfect.
Or, looked at from the other side, entities are unlikely to behave in expected ways if doing so does not maximize their prospects given their current understanding of their interests. An analysis can become out of date as fast as any single individual can understand something new, and the probability of such a change is proportional to how they view their interests and to their emotional intensity in the associated areas.
The point of all that is that it is largely a waste of time to analyze anything in human affairs, unless you intend to make $. Only that can focus you enough on the task and deny you the wiggle-room that allows explaining away a lousy track record. I guarantee you that our security-industry executives do well with the economic-espionage side of their operations; their personal investment incentives guarantee it.
For everything else, as America’s spies demonstrate nearly every day, important things are not foreseeable.
*vs everything else is implied here, but the 'shared human understandings without hard meanings' are the problem; mental maps based on tighter definitions tied to scientific measurement do well enough to base a civilization on. We can't be sure about the human-level stuff yet.
Added 3 days later.
I just re-read that. The situation is even worse than presented above, as the institutions themselves are very noisy, and all of that thinking about the uncertainty in the opposition also applies to their internal workings. Institutions are not mechanisms; they too are systems.
Thus, even if your intelligence organizations were perfect in every respect and their sources of information correct in every nuance, Important Things Are Not Foreseeable. But they won't be.

Rules for Flying Blind
I read a couple of long documents by CIA insiders on its performance. So erudite, and so useless. Even before an agency becomes part of the noise, and even assuming it is perfect in every respect, it is still useless, because there are so few external referents for so many of the concepts, and thus not enough ways of checking the reasoning.
Science works because everything it does is physically tied back to reality; every experimental setup, … begins by replicating something to prove the setup works. Tech and engineering: test, test, test, because nothing works the first time. Products of any complexity: many, many tests of everything all the way through the production process, and a huge focus on QA. Every minor factor is measured if possible, and production problems often turn out to begin with two minor shifts in measures not known to be relevant, followed by unexpected interactions with later processes. The number of surprises in those situations is apparently very large; new ones appear with every generation of technology.
Science, engineering, technology, and the practical arts of life progress because every small step can be checked against hard reality. Anything that does not allow easy or quick checking, for example medicine, does not progress quickly.
And in arenas with no reality to check against, no progress at all can even be contemplated.
We do not know where normal intelligence analysis or economic analysis falls, because the connections between terms such as 'intelligence asset' and 'reliable' and the actual people and their histories cannot normally be checked, and are quite flexible in meaning. Even if those could be quantified, you have no idea whether the question being answered with the data is at all important, or whether it was the one chosen because it could be answered. It was not so important what the USSR's annual tonnage of steel was; what mattered was what they were doing with it.
Based on the precision of thinking in science, etc., and the constant checking that all fields dealing with physical reality require in order to do anything, my conclusion is that such analysis must be useless. Constant checking against hard reality is not possible in most of the fields we deal with; thus there can never be precision thinking, and so the analysis is useless.
Some analysis, of course, will be ‘correct’. But you will have no idea what is correct and what isn’t.
Therefore, the important question is not whether the analyst is correct. Rather, if an analyst comes up with a convincing alarm, can it be hedged? If not, why bother? So, analysts cannot be pointed at problems that have no solution.
But you can’t answer that either.
The argument that intelligence prevents mistakes is so seductive. It just can't be completely wrong; so many problems can be hedged. Added later: the point is not that they can be hedged, but that actions are not taken. We can't know what would have happened if we had taken them, and that result is the same as if we had done no analysis and had never intended to risk anything, which allows us to think the analysis was valid. Activist foreign policies and the like force errors, and so reveal the weakness of analysis.
So, I could be missing something. On the other hand, everyone in Europe missed WWII. Also WWI, and a few others. We can assume many of the best analysts on all sides were working on those; it does not appear to have helped many people avoid the consequences.
Further thinking says the specificity of the question has a great deal to do with whether it can be answered, e.g. “Are tanks close enough to oppose the landing?” vs “What is the economic growth rate?” We can’t make that latter prediction for our own economy, and estimates of the past are revised continuously, even when the gov isn’t trying to fudge the numbers.
Again, this is the same 'analysis' that I am claiming does not work for questions such as "What is the system stability of the USSR?", so why would I expect it to work for "Are an intelligence organization's claims of predicting the future credible?"
Nevertheless, objectively, the track record of organizations predicting the future is very poor; you will be largely flying blind for important decisions but may believe otherwise. Thus, don't have a foreign policy: it can't work, and historically it does not. Nations that remain neutral rarely have wars and so objectively outperform, economically and socially, nations that have them. Empires all die of military spending and war, a point apparently never emphasized enough in their intelligence agencies' briefings.