Musings on 5th Generation Warfare, #D

Continued from here.

The Best Rationalizations Are True

My epistemological angst had nagged me* increasingly after the turn of the century. Everything was wrong, everywhere I looked. People thought I was being negative, but they didn’t seem to want to deal with the evidence I saw all around me. “Oh, people must be better” is an excuse you stop believing after you see entire bureaucracies fail in lockstep. After the millionth or so case of someone in the military, police, Catholic priesthood, Muslim mullahood, Jewish rabbihood, Buddhist guruhood, large national management consulting firm, large corporation, political party, law firm, or university, etc. covering up for a fellow, blithely violating the rules on behalf of someone of the most ephemeral linkage, you get the idea that the society we have wrapped around humans is the problem.

It isn’t repeated individual failure we are seeing, people not living up to the minimum standards of civilization. I mean, yes, it is those things, but it is also the way people are. Instead of seeing them as failures, we could turn it around and say that people are good at trying to deal with people as tho they are special, one of us. We are very good at finding connections that give us a human-level linkage, so we can consider them one of us, be at ease in dealing with them, and give them what we would want were we in their shoes.

If we can possibly do so, we deal with people as special.  We work to get them a good deal. That is consistent with the many lines of research, e.g. experimental game theory, that show that people are as cooperative as they can be, given the level of trust they can have. In fact, it seemed to me that a great deal of the modern results in psychology, social psychology and anthropology revealed people to be very cooperative : we have combined social and political power with cooperation to manage everything.  People are far more cooperative than any other species, and even our wars are spectacular examples of cooperation.

So, we shouldn’t go on trying to make people conform to the way we have evolved and scaled the world and its institutions, the current divisions of knowledge and responsibility and power. That was what was wrong : all of the world’s social roles were based on social structures and social expectations that did not fit modern reality.

It seemed to me that every crime involving sex was a measure of the mismatch. Every bribe, every sale of anything illegal, every abuse of anyone that goes without effective correction, every bit of dishonesty. All of these things are a result of an environment that prevents people from dealing with other people as individuals, forces them to deal with people as anonymous. Worse for all, it forces some of us to deal with arbitrary power from a position of great need, of great weakness. That is a setup for failure on all sides, leading to catastrophe down the road.

I had been a believer in personal involvement even before Taleb framed skin in the game as an important measure of the likely utility of your opinions. ‘All hat, no cattle’ was a common condition with many variants.

I had had another experience that convinced me the end was nigh, that civilization simply could not continue in its current modes, nor in the forms that produced them. This epiphany was induced by a coffee maker I was forced to buy after our last one gave out after 15 years. It was a percolator-into-an-insulated-carafe design, nothing else will do if you like coffee.

So my wife put me in charge of the effort. I did the research, considered features, looked at reviews, and bought a highly rated item for mid-range $.  I didn’t see any negative comments.

Well, the reality was, it was pretty, sitting there on the counter. Lovely aesthetics, no doubt it won awards for design. A nice carafe, which didn’t actually have much insulation, but looked good, black plastic and brushed stainless steel. It was designed to sit to the left of the counter; ours could only occupy the far right, what with everything else in places equally fixed. In that position, you can’t see the gauge on the right side that tells you how much water you have poured into the tank, and it is easy to over-fill, there being an extra inch and a half of space at the top of the tank and no marks on the inside to guide you. If you overfill it, coffee everywhere. So you break your back holding a 3-gallon jug of water at arm’s length while you lean over to see how much water you should put into the gd tank.

A very common error with this rig, one all of us have made more than once, is forgetting to empty the carafe, because there is nothing to force you to take it out of its position. Coffee everywhere.

Then the coffee is perked, smells great, you remove the carafe and begin to pour it into your cup, and the cussing starts. You see, the thread around the lid is very coarse and not deep, with plenty of clearance (the lid moves side-to-side 1/8th inch), so there is little resistance to the coffee flowing through those threads; they are no barrier at all. If you carefully position the notch in the lid through which the coffee is supposed to flow over the notch in the pot from which it flows out into the spout, center the lid and push down hard to keep the seal tight, you can manage to spill only one drop. I have never done better than that, and with any inattention or misalignment, it takes 2 paper towels to mop up.

I get angry at the failure of civilization it represents every time I spill even a drop. Our 15-year-old German moderate-price coffee maker was superior to this new one in every way. Bigger pot, kept the coffee hot for 10 hours, you couldn’t overflow the pot because the tank wasn’t that big, and a full tank was exactly how much you needed for the 10-cup pot. It was the old-fashioned kind where you put the filter holder on top of the pot where the lid would go, and it perked the coffee through that filter. So you never forgot to empty the carafe, it was easy to use, flexible, no limitations about where it would go on the counter, and it just worked for 15 years, but the maker had no equivalent in their product line now.

This replacement is a bad product that causes continuing aggravation, when every single element of it could have been improved trivially, but someone had to know it was wrong, be motivated to produce a good product, and be allowed to fix it. Modern companies can succeed financially without doing any of those things; in fact, they seem designed to accomplish the opposite. You can see the poor management in the design : how can the tank be bigger than the pot? So they got the thing designed, priced it out, the margin wasn’t big enough, and they cut the pot size down and the quality of the pot to make the required gross margin. Some Chinese company had a carafe they had cut too much quality out of and a batch was rejected; those would have to do.

The fact that I have remembered the name of the company and will never buy anything from them again will not affect anything at all. In fact, their failing in a few years is designed in : another company picks up their assets and the charade begins again, new loans, a new name, beautiful products, great reviews, and nobody ever tests anything to see whether it actually works, or much cares.

The problem is scale and being required to deal with people as anonymous samenesses, and all of the bad aspects of treating people as special with few of the good ones. Nobody has skin in the game of reality, only the various games of power and supporting them, enabling them, mutual survival in an increasingly hard world.  To say the least, this system design was not bringing out the best of humanity.

Whatever the answer was, the social organization and institutions had to allow people to deal with people they know they can trust, no further than one personal reference away in a social network.  I hereby claim “The Generalissimo’s Law : no complex civilization requires more than one level of network trust, provided the social organization is optimal. No complex civilization can function if it requires more than one.”  It sounds good, and has the advantage of being hard to disprove, always important in the competition for meme survival.

In addition to the philosophical issues, I had been working hard on understanding the brain. It was sometimes hard to believe that all of the different specialties were working on the same organ, their discussions of results and theories were so different.

However, at this point, I dropped everything else. The changes we had seen in the servicebots shouted that I had to know more about Artificial Intelligence. Fortunately, an early associate of mine and my current collaborator had investigated AI and pointed me to the best sources. AI is a human-designed world, and so much easier to grasp in conventional science and technology and community and personality terms. So it was easier to summarize than my background tasks in sociology, biology, genetics, neuroscience, …

First, AI has proceeded on several nearly-independent fronts for nearly 60 years. One front is games, e.g. the chess programs, which represent ever-increasing sophistication in brute force overcoming the complexity of the games. All special code, no generality to other problems except for clever coding tricks. However, these programs are still being implemented and extended because a) there is a market for them; b) poker’s strategies are only partly matters of the probability of cards, the probabilities of people making decisions are another part and require mechanisms to estimate those at every point in the game, an aspect of research and development with very general utility; and c) these programs have been responsible for major increases in human skill in these games over the last 20 years. It seems to be happening with Go now; an article I just read analyzed how good the AI’s game was, and how perplexed the world’s top Go player was. That is now a sufficiently general phenomenon that its study may well produce opportunities for human-AI synergies.
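The poker point, that decision probabilities matter as much as card probabilities, can be made concrete with a toy expected-value calculation. This is my own illustration with invented numbers, not any actual poker program’s method:

```python
# Toy sketch : the EV of a pure bluff depends on the estimated probability
# that the opponent folds, a property of the person, not of the cards.
# All numbers are invented for illustration.

def bluff_ev(pot: float, bet: float, p_fold: float) -> float:
    """Expected value of a pure bluff: win the pot if they fold,
    lose the bet if they call (ignoring any chance at a showdown)."""
    return p_fold * pot - (1 - p_fold) * bet

# Same cards, same pot; only the opponent model changes the answer.
print(bluff_ev(pot=100, bet=50, p_fold=0.4))  # positive EV vs a timid player
print(bluff_ev(pot=100, bet=50, p_fold=0.2))  # negative EV vs a calling station
```

The whole difficulty, of course, is estimating `p_fold` for a particular opponent at a particular point in the game, which is what makes that research generally useful.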

Another front is abstract versions of that, e.g. theorem provers.  I haven’t heard of those being worked on much since Lenat’s dissertation, but probably there have been a lot of Master’s theses in CS.  Same lack of generality as games.

There are many neural simulations as another research effort in AI, for instance the $1.5B being put into the Swiss brain simulator. I don’t recall a big government blue-sky project producing anything of note since Apollo set such a bad example and succeeded with engineers as managers, actual people who still cared about their reputations as engineers. No government has allowed mere engineers to manage anything since; they all must be neutered via long immersion in ‘administration’. I did not investigate further.

Those neural simulators come and go on the hardware side. They are relatively easy to implement in FPGAs so as to get the benefit of parallel processing, tho most of that work has moved into software written in C for the CUDA/OpenCL frameworks. Both of those parallel processing engines, FPGAs and GPGPUs, are the supercomputer components that scale from desktop through the most powerful systems in 2016. Games and graphics and supercomputers have advanced in tandem. But neural simulations had a scale limitation : at best, small-scale simulations had proven precise enough to convince researchers they reproduce natural neural systems such as the retina or a very small part of different specialized cortical areas. Anything large-scale was too far removed from what we really knew about the detailed connections, one area to another. At least, that was true the last time I had looked, and nothing less than electron-microscope resolution at the synapse level counted at all. Those reconstructions were still confined to tenths of a cubic millimeter, where cortex can be 5 millimeters thick.

Another general line of work is represented by Lenat’s Cyc, IBM’s Watson, and Wolfram’s Alpha. These are not at all the same technology, but all are reasoning engines using human-designed representations of knowledge. The representations are as general-purpose as their designers can make them, so the code of the reasoning engine emulating thought will work over future knowledge converted to those forms. Much of their knowledge base has been coded by humans. Only after there is a vast store of common sense can the knowledge system itself be allowed to begin to learn on its own, to add to its own knowledge base. I offer this and this as examples of the sophistication of human thought relative to anything in this group, impressive though I think all of these technologies are, and state that they will not be judging their own efforts for some time.

This was the line of research that had led to the servicebots, the latest big surprise in my world. It was behind the Siris and Alexas and … that came out of all the big R&D groups. All of these groups had strengths and weaknesses, but the base technologies grew rapidly; they got better fast for ordinary conversations in very circumscribed arenas of discourse. Within those, correct understanding approached 90+%, so a well-trained human wasn’t often annoyed.

The problem has been that the difference between 99.9% and 99.99% is a lot of missed communication over the course of a day with our current versions of all of these computer-human natural language interfaces. It is the reason computers can’t do many kinds of work : if a missed message could cause injury, nobody can use a servicebot.
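To put rough numbers on that gap (my own illustrative arithmetic, with an assumed interaction count): per-utterance accuracy compounds over a day’s worth of exchanges, so a small difference in the error rate becomes a large difference in error-free days.

```python
# Toy arithmetic : assume a servicebot must understand every one of
# 200 utterances for a day to pass without a missed message.
# The 200-utterance figure is invented for illustration.

def p_error_free(per_utterance_accuracy: float, utterances: int) -> float:
    """Probability that every utterance in the run is understood correctly."""
    return per_utterance_accuracy ** utterances

for acc in (0.999, 0.9999):
    p = p_error_free(acc, 200)
    print(f"accuracy {acc}: {p:.1%} chance of an error-free day")
```

Under these assumptions, 99.9% accuracy leaves roughly a one-in-five chance of at least one missed message per day, while 99.99% leaves about one-in-fifty, which is the difference between an annoyance and a tool you might trust near anything fragile.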

We used them with our babies in our lab nurseries because caring for babies is successfully done by female apes : the rules are pretty simple and babies are pretty resilient to small errors, and eventually the manufacturer’s engineers had tested them with their own babies. I chose not to believe the rumors of threats of job loss (joke), and to believe the claims of safety. As those claims rested on a new technology to validate the base robot’s movement limitations, the automated checks before any movement that prevent them from putting their very strong fingers through a baby’s soft belly or eye, it seemed safe. The insurance companies said the same, so we put up with all of the misunderstandings.

Finally, the other large line of AI technology is Neural Networks. These were inspired by what was known of neuronal interactions around 1960, plus bits of left-over analog computing lore, all bleeding across the hardware/software interface as the technologies switched around, and development has proceeded in fits and starts ever since, with a new burst beginning with ‘Deep Learning’ technology. These have had rapid development because the network provides the required millions of high-quality images or sounds or … of everything that a NN needs in order to learn to distinguish.

NNs were getting as good as humans at interpretation of photographs, but were not as good at speech-to-text. Speech-to-text has a necessary social-semantic component : humans are exquisitely sensitive to who said what in what context and the tone in which they said it, and so rarely misunderstand the meaning of a spoken sentence, even tho it would be clearly ambiguous if they thought about it. Not everybody, and not all the time, but NNs were easily confused.

The problem is combinatorics. The permutation space is all of the social situations, all the combinations of people in their many social positions with shared and opposing interests, and all the possible conversations in light of all the different local and world events that might have occurred recently enough to elicit a conversation. Humans had an easy time with all that, for the most part, tho some humans were blind to metaphor, humor, or other less-than-literal interpretations of events. Even people start to misunderstand if there is noise, or an unusually soft voice, etc.
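The explosion is easy to see in a toy model (all category counts invented by me for illustration): the contextual factors multiply rather than add, so even modest per-factor variety yields millions of distinct situations a listener might need to distinguish.

```python
# Toy sketch of the combinatorics : invented counts for each contextual
# factor that can change the meaning of the same spoken sentence.
from math import prod

factors = {
    "social situations": 50,
    "speaker/listener relationships": 15,
    "role pairs (boss/worker, parent/child, ...)": 20,
    "recent local and world events": 100,
    "tones of voice": 8,
}

# Contexts are combinations, one choice per factor, so the counts multiply.
contexts = prod(factors.values())
print(f"{contexts:,} distinct contexts from only {len(factors)} factors")
```

Five modest factors already give twelve million contexts; add a few more and no training set samples the space densely, which is roughly why the same sentence keeps defeating the machines.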

Ultimately, NNs and reasoning engines were limited by the complexity of language and the complexity of a world enabled by language : that world required semantic understanding, and human minds were limited to human semantics.  No AI had achieved human semantics, which cognitive psychology indicated required a human brain in a human body.

If the ideas behind ‘embodied mind’ thinking were correct, AIs were many years from communicating with humans as humans. Until then, non-human minds were rearing my Tessels.

*Generalissimo Grand Strategy, Intelligence Analysis and Psyops, First Volunteer Panzer Psyops Corp. Cleverly Gently Martial In Spirit

