Just now, I got an email from edge.org about a conversation: ‘What to think about machines that think?’.*
The Edge is an interesting group: always very intelligent thoughts from a wide range of backgrounds, insightful from dozens of points of view, interesting and dense to read. I learn a lot, or at least pick up a new point of view or two. I would have with this one as well, except I decided to have lunch first, and while eating I idly considered what I should think about machines thinking. I don’t find any of their answers in my understanding of things.
The Edge’s answers range from ‘it will end the human race’ to ‘it will lead to new heights of art, science, and civilization’. ‘Human stupidity will always impede artificial intelligence’ was an interesting thought.
But my thought is: what is it you mean by ‘thinking’, exactly? All of their answers assume, implicitly claim, that we have a good understanding of thinking and know how to apply it consistently. That is incredible hubris, against every bit of history that I think I know. Surely any empire with geniuses who could do so would not now be failing? Would have avoided at least one of the major wars of the 20th century? Could explain the yo-yos of the DJIA, and thereby avoid them? Would have cleanly solved poverty, drugs, education, medical care, homelessness, …?**
It is extremely, vividly, ultra-high-definition, wide-band obvious that we do not have processes that apply even ordinary thinking consistently to problems. In fact, the history of humans illuminates it: !! we are abysmal at thinking !! — it is a long string of disasters, failures, and missed points.
The historical record provides very little evidence of effective use of thought to choose even adequate paths into the future. The harder and more systematic the attempts appear to have been, the worse we have to believe they did.
That civilizational failure rate is not often discussed: intelligence agencies never providing the advice that would have avoided the smashup of the empire they served, the surprisingly high proportion of intelligence chiefs among the ministers executed after changes of government, the simply astonishing variety of obvious errors large groups of people produce on such a regular basis. Nobody won any of the wars, not even the winners, when measured in full context with opportunity costs. All the empires died; their leaders + professional intelligence services + military commanders + diplomats did not save a single one of the many hundreds of kingdoms and empires, and are now failing to save democracies, republics, constitutional monarchies, and other modern forms.
No honest person could have intended those results. Every one of those failures thought they were being intelligent, except for the unknown number who were dishonest.
The evidence is that thinking is not simple, nor well understood, and/or that training human minds to do it is not reliable, so it is more art than engineering. Despite much discussion and the application of tremendous brain power, thinking as the systematic application of reasoning to real-world problems is not taught even at the level of systematic art by more than a handful of universities, and nowhere is there even an engineering-101-level process for reliably avoiding error.
Modern power of thought, pitiful as it is, did not arrive overnight; it developed very slowly. Not until the 1800s did we have a firm grasp of the scientific method, and Ioannidis and the replication tests say the process works fitfully, at best. Statistics did not arrive until the 1880s and is very often misapplied, one of the reasons for the problems of reproducibility. ‘Scientific medicine’, the systematic application of statistics to different treatments, did not arrive until 1920, and it still screws up more often than not. We did not have the beginnings of modern control theory until the 1940s, and the use of computers was not widespread until the 1960s; they are now the largest source of insecurity in modern society, and are about to be entrusted with far more control of our lives. Our most sophisticated minds still allow witchdoctors to pretend they are at least artisans, if not engineers. (So easy to add to that list.)
We do not believe that reasoning is trivial in mathematics; why do we think answers would be obvious in our complex social reality?
We humans have few tools to assist thinking, although we have many to assist with calculations or parts of particular analyses. We have no measures of the precision of thought, of the imprecision of a single word meaning something slightly different at different points in an analysis. Measures of the quality of data are expensive and complex. Not all relationships, causal and correlational, are known. AI can do nothing about any of these.
The intelligence and insight of an AI relative to humans doesn’t matter: an artificial intelligence will produce something to be understood, and humans will ask questions, exactly as they would ask a human. And then the problem is to understand the answer and make use of it, exactly as if it had been produced by a human.
But isn’t that the problem? Are we good at evaluating the thinking of our fellow humans? Isn’t that the problem all civilizations seem to have, that they don’t get the results their plans expected?
How are AIs going to fix that?
Further, I think that the development of AIs will not be done by AIs, because the difficult part is the evaluation of the AI’s thinking and the feedback, correction, and education. Humans have to do that; we can’t allow the AIs to grade themselves. That is not as simple as checking against external reality, which is our edge in understanding science and technology.
I think, however, that this process will do for science and technology what computer chess has done for chess, and computer poker has done for poker. In both of those areas, the computer programs have changed and improved humans’ games.
Also, this process may finally produce the understanding of thinking we need in order to assess its precision. The AI field describes its programs as ‘inference engines’, but conceptually such a program is the same as any other program processing data and producing results. Like a climate model, for instance. We know that is a model, but we don’t normally think of an AI as a model of reasoning.
AIs are models of reasoning, different from human reasoning, and we can expect different models and structures of knowledge to have different strengths and weaknesses relative to each other and to humans. But the evaluation of their results and the correction of their models and data are less like the feedback loops of science and engineering and more like the feedback correcting our institutions, areas where the imprecision of thought produces so much failure.
AIs represent much potential for the improvement of human thinking, but progress won’t happen quickly, and the AIs are not going to be much help in their own development for a long time.