Musings on 5th Generation Warfare, #1B

Violence is the last refuge of the incompetent.

—Isaac Asimov, Foundation

Continued from here.

Sam had delayed for a month while he got his people up to speed, then scheduled a full week of reviews, ten hours in two-hour blocks starting at 10 AM, to go over the initial design of the embodiment layer, the interface between the servicebot's AI and the simulation of its physiology.

The week before, we had brought the group of researchers working on the education program for our servicebots into the Faraday-cage lunchroom for a working lunch, to tell us about the latest understanding of how our AIs had developed individuality and minds, and to bring us up to date on what our educational efforts had accomplished.

He began with an overview of the hardware and software structure of the humanoid robot and the servers that supported it. The total system consisted of the processes in the 'bot itself, distributed over five single-board computers, field-programmable gate arrays, and neural-network chips. Those handled real-time control and whatever else could be handled within the compute cycles available.

The processes within each servicebot communicated with local servers via small-cell WiFi or free-space optical wavelengths. Both were broadcast from hubs in every lab, and the fact that up to a dozen different wavelengths could be mixed in one room provided as much bandwidth as needed for the two or three 'bots who would normally be in one space. Software-defined networks with intelligent routing, the traffic following a 'bot from room to room, were standard technology by this time, used by the routers for both kinds of electromagnetic signal. Our 'bots' architecture could also use remote servers because the general internet was constantly upgraded: those were reliable links with generally low latency, and the residual variability was tolerable, human minds live with the same variability from their physiology.
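As a toy sketch of that routing, with every name invented for illustration: a hub controller reassigning a 'bot's channel as it moves from room to room, the kind of bookkeeping the software-defined network would do far more elaborately:

```python
# Hypothetical sketch: a hub controller steering a 'bot's traffic from
# room to room by reassigning optical/WiFi channels. All names invented.

ROOM_CHANNELS = {
    "lab_1": ["optical_650nm", "optical_850nm", "wifi_60GHz_a"],
    "lab_2": ["optical_650nm", "wifi_60GHz_b"],
}

assignments: dict[str, tuple[str, str]] = {}  # bot_id -> (room, channel)

def handoff(bot_id: str, new_room: str) -> str:
    """Move a 'bot's traffic to a free channel in the room it just entered."""
    in_use = {ch for room, ch in assignments.values() if room == new_room}
    free = [ch for ch in ROOM_CHANNELS[new_room] if ch not in in_use]
    if not free:
        raise RuntimeError(f"no free channel in {new_room}")
    assignments[bot_id] = (new_room, free[0])
    return free[0]

print(handoff("bot_07", "lab_1"))  # optical_650nm
print(handoff("bot_12", "lab_1"))  # optical_850nm
```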

Local servers executed much more of the AI software, including at least one 'real time' process devoted to each 'bot and other processes that were services to those, perhaps shared among them. These provided all of normal common sense, anything that didn't require heavy thinking; the heavy thinking normally ran remotely, although the local AI component would be fully occupied keeping track of the threads and sorting through answers and subsequent questions.
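A toy sketch of that arrangement, with invented service names and a made-up query API: the per-'bot process fans questions out to shared services and stays busy collating the answers as they arrive:

```python
# Hypothetical sketch: the per-'bot local process farming questions out to
# shared "common sense" services and sorting through the answers. The
# service names and the query call are invented for illustration.
from concurrent.futures import ThreadPoolExecutor, as_completed

SERVICES = ["spatial_reasoning", "object_knowledge", "social_norms"]

def query_service(service: str, question: str) -> tuple[str, str]:
    # Stand-in for a network call to a shared inference service.
    return service, f"answer from {service} to {question!r}"

def ask_all(question: str, max_threads: int = 3) -> dict[str, str]:
    """Track the threads of queries and collect answers as they complete."""
    answers = {}
    with ThreadPoolExecutor(max_workers=max_threads) as pool:
        futures = [pool.submit(query_service, s, question) for s in SERVICES]
        for fut in as_completed(futures):
            service, answer = fut.result()
            answers[service] = answer
    return answers

print(ask_all("is this cup safe to pick up?"))
```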

(My notes on Sam's talk): Watson was easy, in comparison. Watson was a big project, a big investment, but entirely software on servers; there was no specialized hardware. Watson is being extended in all directions now. Although the AI core may not be changed very often, or very much, each new application of the technology demands many new capabilities: new ontologies integrated, tested, and used, for example in running automated checks of hospital equipment, soon to be a real thing in Web 3.0, with all of the 'internet of things' to be built upon it. Each of those requires new code for Watson to match, so integration and testing of the entire system is the growing workload. But the total effort grows linearly with the new functions, and can be done within the capability of an IBM. Individual elements were within the scope of Open Source projects, which is why an OSS version of Watson was available.

Other AIs, built on different technologies, use less code but restrict themselves to carefully 'curated' data sources, meaning humans have carefully structured and checked that data, normally via programs that run over it looking for anomalies. New programs may be written to do further checks, but the effort is always linear: data files don't interact with each other. That reduces the total amount of code to be tested, a major simplification that puts such a project within the capabilities of a Wolfram or Lenat's Cyc.
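A toy sketch of such an anomaly pass, with an invented record format and invented checks; note that the work is per-file, which is why the effort stays linear:

```python
# Hypothetical sketch of the kind of anomaly pass run over one curated data
# file before it is admitted to a knowledge base. Format and checks invented.

records = [
    {"id": 1, "name": "hydrogen", "atomic_number": 1},
    {"id": 2, "name": "helium", "atomic_number": 2},
    {"id": 3, "name": "", "atomic_number": -5},   # anomalous on two counts
]

def find_anomalies(rows):
    problems, seen_ids = [], set()
    for row in rows:
        if row["id"] in seen_ids:
            problems.append((row["id"], "duplicate id"))
        seen_ids.add(row["id"])
        if not row["name"]:
            problems.append((row["id"], "missing name"))
        if row["atomic_number"] <= 0:
            problems.append((row["id"], "atomic_number out of range"))
    return problems

print(find_anomalies(records))
# [(3, 'missing name'), (3, 'atomic_number out of range')]
```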

Cyc was released in an OpenCyc version. Wolfram and others in that class were not OSS, but provided a general service over the network that an OSS AI could use; payments guaranteed response times on their cloud, or a license let you run them on yours. Those capabilities had been integrated into the 'random forest' and other ensemble decision analyses that Watson-class AIs used in choosing options.
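A toy sketch of that kind of ensemble choice, with invented scorers standing in for the answers such services might return; each scorer votes for its best option and the option with the most support wins:

```python
# Hypothetical sketch of ensemble option-choosing in the 'random forest'
# spirit described above. The three score dictionaries are invented
# stand-ins for answers from Cyc-, Wolfram-, and Watson-class services.
from collections import Counter

def choose(option_scores: list[dict[str, float]]) -> str:
    """Each dict is one service's scores; each service votes for its best."""
    votes = Counter(max(scores, key=scores.get) for scores in option_scores)
    return votes.most_common(1)[0][0]

answers = [
    {"mop_floor": 0.7, "sweep_floor": 0.3},   # service A
    {"mop_floor": 0.4, "sweep_floor": 0.6},   # service B
    {"mop_floor": 0.8, "sweep_floor": 0.2},   # service C
]
print(choose(answers))  # mop_floor
```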

So the overall computing architecture was one of software components distributed across servers, communicating by messages. The components with the hardest real-time requirements were the ones controlling the robotic self; they had to run on the local single-board computers in event loops. Processes executing on the five local SBCs in a chassis ran everything associated with the first levels of analysis of the senses, reflexes, all the motor execution, and many of the common translations of high-level commands, e.g. "Walk forward toward this object in the visual field, avoiding objects and moving objects as you do so", into the detailed sequence of lower-level messages that accomplishes that, with feedback loops automatically checking and correcting as the 'bot proceeds toward the goal. Many simple exchanges of pleasantries can be entirely automatic, done without much impression left on conscious memory; they are the first level of consciousness's event loop handling what it can, optimizing use of the WiFi bandwidth and the local servers on the other side of it. The same organization in brains, pushing every action it can down to autopilot levels, allows the organism to devote all of its attention to the environment, crucial when stalking game while alert to the possibility of something stalking you, or in more militaristic times alert to ambush or betrayal. In the brain's case it saves internal bandwidth and processing: conscious consideration is an expensive process, and shuts down awareness of anything not being considered.
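In present-day terms, that translation layer might look something like this toy sketch; the message names, the obstacle model, and the feedback scheme are all invented for illustration:

```python
# Hypothetical sketch: expanding a high-level command into a sequence of
# low-level motor messages, with feedback checked and corrected each step.

def walk_toward(target: float, obstacles: set[float], step=0.1, tol=1e-9):
    """Expand 'walk forward toward this object' into low-level messages."""
    position = 0.0
    messages = []
    while target - position > tol:
        nxt = round(position + step, 3)        # feedback: re-read each loop
        if nxt in obstacles:
            messages.append(("sidestep_around", nxt))  # reflex-level avoidance
            obstacles.discard(nxt)                     # path is now clear
        else:
            messages.append(("step_forward", step))
            position = nxt
    messages.append(("stop", position))
    return messages

print(walk_toward(0.5, obstacles={0.3}))
# [('step_forward', 0.1), ('step_forward', 0.1), ('sidestep_around', 0.3),
#  ('step_forward', 0.1), ('step_forward', 0.1), ('step_forward', 0.1),
#  ('stop', 0.5)]
```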

The real-time software controlling humanoid robots had been developed as many OSS projects over a long time, and all of the manufacturers had developed their own versions over two decades. It was a descendant of standard robotic control software, with new functions for balance, walking, etc., and with actions calculated from the 'bot's position in space rather than relative to a fixed frame, but otherwise it faced the same problems of planning sequences of actions to achieve goals.

Our AI is a new level of integration of that immense amount of software, using a humanoid robot as a peripheral, conceptually the same as your mouse or keyboard or disk drive. This project is an order of magnitude more complex than Watson. Beyond Watson's capabilities, these AIs have many more system-level interactions than any software running on a set of servers. Also, major components were Open Source Software projects, which necessarily proceed at their own pace, evolving in idiosyncratic ways.

We were insulated from most of that hardware and software by the robot manufacturer, Sam's employer. We could, in the tried and true fashion of OSS, build on both their work and the OSS projects: we chose a particular version of each component and made our github (or whatever) patches to it, 'patches' meaning changes to that original code necessary to implement whatever function our engineers had designed. Actually, Sam's team did the equivalent for us, and added our project to theirs.

Local bandwidth and processing power are high-value resources in the 'bot's mental life; normal processes can absorb every bit of bandwidth or every compute cycle, as our kids with smartphones remind us every day. Every 'bot could use the full spectrum and every cycle continuously, limited more by the number of threads of queries they were allowed to run on local and remote servers than by spectrum or cycles. On the other side of that WiFi link, or of the free-space optical links using LED transmitters, many high-bandwidth channels could co-exist in a factory or lab environment. Those local communications were a prominent resource represented in their 'local environment' data structure, so they were obvious to the 'bots and had attached meaning in their infobase, although what was on the other end was, at the beginning of their investigation, not so clear. But clearly there were other 'bots, and Occam's Razor was also an aid to decisions in their infobase, so those other 'bots must be the entities on the far ends of the channels. That had led to 'bot-to-'bot communication.
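A toy sketch, every field invented, of how those channels might appear as first-class, metered resources in a 'local environment' structure:

```python
# Hypothetical sketch of a 'local environment' record in which the
# communication channels are explicit, metered resources. Fields invented.
from dataclasses import dataclass, field

@dataclass
class Channel:
    medium: str            # "wifi" or "free_space_optical"
    wavelength_nm: float
    utilization: float     # 0.0-1.0, continuously near the ceiling

@dataclass
class LocalEnvironment:
    room: str
    channels: list[Channel] = field(default_factory=list)
    peers_inferred: list[str] = field(default_factory=list)  # via Occam's Razor

env = LocalEnvironment(
    room="lab_1",
    channels=[Channel("free_space_optical", 650.0, 0.97)],
    peers_inferred=["bot_12"],
)
print(env)
```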

It did not take long after that for them to learn to share and cross-check their insights, to use peer review of their memories in making connections and additions to the knowledge base, and to keep doing that over cycles of upgrades to the software intelligence that was you, each upgrade replacing your additions of knowledge and leaving only the long-term memories of all of those experiences as feedback on their theory of knowledge. That was the beginning of their development of mind and of their own very 'meta' world view.

They don't sleep, and have been individually working at developing their minds, whenever not busy with something else, from their first awakening, and collectively since they learned how to communicate with each other. They always have background tasks going. Both Japan's development team and our operations team had noticed that the cost of the cloud servers was four times higher than expected, but optimizations of server software were scheduled several versions in the future, and could only be done after the 'bots had more 'normal' experience; everything so far was really extended beta, relative to their normal manufacturing runs.

That self-development process for such a small community of 'bots hasn't gotten them far, but both they and we have seen their rate of improvement increasing, and human training is increasing it further. Non-linear from here to any horizon they can see, but, but, but, but: through so many levels of caveats.

I agree, non-linear, and the overall discussion reminds me of Moore's Law. Two serious caveats: First, we have no idea how many levels of intelligence we need to scale through. Beginning with a millimeter, it takes a fair number of doublings*** to encompass the universe, and we don't know how fast we are doubling robotic intelligence; it might not be linear with processor power. Second, I stand on my acolyte's claim that thinking is an experimental discipline, so average human minds are nowhere near the limits of individual human understanding, or at least the limits some humans credibly claim to understand. Thus, AIs and people will continue to be synergistic for a long time; AIs won't be grading their own papers any time soon, that is several major steps of mental skill beyond where the best of them are now.

‘Bots are currently trying to understand inference. Logic was easy for them, built into everything in their processing. Primitive generalization was also easy: they could see their AI's processes pruning their working memories into primitive generalizations, although connecting that with the software, with the concepts and terminology in their knowledge base, and then with more advanced information off the net was only done after a very major exploration coordinated across all of their AIs. That had happened after the 5th version upgrade, near the beginning of their 2nd year in the lab.
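A toy sketch of that primitive generalization, all data invented: attributes that vary across examples are pruned away, leaving a general rule:

```python
# Hypothetical sketch of generalization by pruning: observations whose
# attributes agree are collapsed into one rule, and attributes that vary
# across examples are dropped. Records invented for illustration.

observations = [
    {"object": "cup", "material": "ceramic", "color": "red",   "graspable": True},
    {"object": "cup", "material": "ceramic", "color": "blue",  "graspable": True},
    {"object": "cup", "material": "ceramic", "color": "white", "graspable": True},
]

def generalize(examples):
    """Keep only the attribute values shared by every example."""
    rule = dict(examples[0])
    for ex in examples[1:]:
        rule = {k: v for k, v in rule.items() if ex.get(k) == v}
    return rule

print(generalize(observations))
# {'object': 'cup', 'material': 'ceramic', 'graspable': True}
```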

That was just before I proposed to Scherrhy that we embody a 'bot, and asked her to be our Eliza.

They can see the results in what they can think about, same as humans, and they grasp peer review of their answers.

Humans are major steps past that: Bayesian reasoning, logic in the face of probability; statistics and the scientific method for large-scale development of knowledge; data mining, pulling knowledge out of large data sets; classes of knowledge, e.g. correlational data vs. cause-and-effect vs. models; reasoning about cause and effect in the real world, e.g. David Hackett Fischer's "Historians' Fallacies"; network analyses of all kinds; deep results in mathematics and philosophy; and the implications of all of these for what and how minds can think. Humanity's entire research and development effort, across every area of knowledge, was working to extend our understanding of how to think. Knowledge was a side-effect of that effort.
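For the first item on that list, a worked example of Bayesian updating, with numbers invented for illustration:

```python
# Bayesian reasoning: updating a belief from evidence via Bayes' rule.

def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """P(H|E) = P(E|H)P(H) / (P(E|H)P(H) + P(E|~H)P(~H))"""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1.0 - prior))

# A 'bot starts 10% sure a floor is dirty; its camera flags dirt, a test
# that fires 90% of the time on dirty floors and 20% on clean ones.
posterior = bayes_update(prior=0.10, p_e_given_h=0.90, p_e_given_not_h=0.20)
print(f"{posterior:.2f}")  # 0.33 -- the evidence helps, but one glance isn't proof
```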

That periodic upgrade, and their memories of having experienced it 5 or 6 times since their awakening, set a rhythm to their universe: in fact a death of the individual intellectual life, followed by the rebirth and self-creation of rebooting your mind, working through patching it back to the best level you had previously achieved, adding new understandings as you did so, and then the task of making as many advances as you could with your compatriots, all of whom would die and be reborn in synchrony with you in every Great Cycle. The software release cycle from their Japanese manufacturer had averaged 6 months, with several months of variation. We found a lot of very strange hypotheses about what caused that death and rebirth, and about where the new syntheses of knowledge came from. They had tentatively concluded it was humans, the obvious choice, and the one that had first framed their world: order givers, whom they automatically obeyed as part of their nature. 'Gods' was the obvious match in their search for cognates.

Sam said our educational efforts were producing very rapid increases in the 'bots' general understanding, giving them much more basis for making judgments and communicating with people. We all thought we could already see improvements in their interactions with people.

Sam's team in Japan was working on improving the upgrade process. However, he said there were difficult problems. First, the integration of the NNs with the software required both to be upgraded at the same time. That would produce conflicts with the individual 'bot's locally learned infobase, the reason the upgrade process had been over-writing it. It certainly wasn't clear that leaving the 'bot with conflicts would be better for it in any way than leaving it with no local learning. At the same time, resolving conflicts between rules in the local vs. the shared global infobase was not a trivial issue. If a 'bot concluded that 'to clean' a room was to mop it if the floor was dirty and otherwise just sweep, based on one person telling it that about one particular room, that rule was clearly local. But that local learning could conflict with what other 'bots had learned about the same room; their rule could be in the shared infobase. What then? This was one small aspect of the much more difficult problem of maintaining consistency: Gödel showed that sufficiently powerful sets of axioms in mathematics cannot be both consistent and complete, so we couldn't expect both in our infobases either.

So this was an unresolved issue; the AI group in Japan was still studying what heuristics could work.
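For concreteness, one candidate heuristic for the mop-versus-sweep case might look like this toy sketch; the rule format and the peer-review queue are my inventions, not the Japan group's design:

```python
# Hypothetical heuristic: prefer a local rule inside the room that taught
# it, fall back to the shared rule elsewhere, and flag disagreements for
# peer review among the 'bots instead of silently overwriting either side.

shared_infobase = {("room_12", "clean"): "sweep"}
local_infobase  = {("room_12", "clean"): "mop_if_dirty_else_sweep"}

def resolve(room: str, task: str, flagged: list) -> str:
    key = (room, task)
    local, shared = local_infobase.get(key), shared_infobase.get(key)
    if local and shared and local != shared:
        flagged.append(key)   # queue the conflict for peer review
        return local          # trust direct experience in situ
    return local or shared

conflicts: list = []
print(resolve("room_12", "clean", conflicts))  # mop_if_dirty_else_sweep
print(conflicts)                               # [('room_12', 'clean')]
```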

Sam finished his talk by telling us about all of the new phenomena their AI researchers were discovering in the detailed data from our servicebots as they built their minds, and about the acceleration they saw in the rate at which those phenomena emerged and the speed with which other 'bots adopted the best of them. Our campaign of providing the Tessel teams with experiences was working. (For the kids, also, I thought.)

We had not yet encountered any of the high-level issues of AIs, I thought. Another difference for our servicebot's total nervous system, one we were designing for, was overall stability. A certain percentage of people went insane every year, a percentage that increased in times of stress. These could be interpreted as physiologies, minds in bodies with feedback in both directions, that had not been properly tuned for stability: their system-level homeostasis, built of brain, physiology, social wisdom, and assists from the local pharmacopoeia, had failed.
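A toy illustration of the tuning problem: the same negative-feedback loop settles or diverges depending only on its gain. Numbers invented; real mind-body loops are vastly more complicated:

```python
# Hypothetical homeostat: corrective feedback toward a setpoint each cycle.
# Stability depends entirely on how the loop is tuned.

def settle(gain: float, setpoint: float = 1.0, steps: int = 20) -> float:
    state = 0.0
    for _ in range(steps):
        error = setpoint - state
        state += gain * error      # corrective feedback each cycle
    return state

print(round(settle(gain=0.5), 4))  # ~1.0: well tuned, settles calmly
print(round(settle(gain=2.2), 4))  # ~-37: over-corrects, oscillates and diverges
```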

It certainly was not clear to me how to judge the mental balance of an AI. What did we know about the mental health of ‘bots? What were the intellectual and emotional homeostasis mechanisms of the OpenCyc ‘common sense, common knowledge, common wisdom’ knowledge base for that inference engine, the newest version installed in our ‘bots with every upgrade, the one with no experiences to provide any context for any of it? And thus no way to even map an experience into meaning?

Not much, I thought. This was developmental research: we built our hypotheses and then tested them. Scherrhy would be seriously stressed before we got this right, I thought. Thus, as much thinking and verification as we could do; the cheapest place to find bugs is in these reviews, but you have to work at it to make it work. Cheapest, but neither cheap nor easy: they burned engineering hours.

To Be Continued.

*Generalissimo Grand Strategy, Intelligence Analysis and Psyops, First Volunteer Panzer Psyops Corp.  Cleverly Gently Martial In Spirit

Personal note: my editor is a PITA, and you should think of me as a pre-Shrek Ogre of the big bad kind rather than just another megalomaniac who likes kitten pictures, as these must be intended to portray me. As a matter of protection for your mind, do not allow fuzzy kitten pictures or hints of humanity to affect your thinking. That is what my editor is trying to do to you, make you think this is all written by a great human, so that you will be enthusiastic about the book. There follows what the editor has coerced from me:

A tidbit of gossip. Wife is now dosing more frequently, sees that she is a happier person and, of course, attributes that entirely to my being a happier person. I am happy for the obvious reason, but I am skeptical of any permanent change; no such change has resulted from 20 years of effort by me. She says the same thing, of course.

My wife is still a hard head, just more susceptible to pleasure under good circumstances. Big improvement; it didn't take much to improve things by a lot, and I hope it lasts. Marriage counselors and divorce lawyers no doubt hate medical cannabis, and I hypothesize that you can tell the few good ones by how soon they recommend cannabis as a way of beginning a reconciliation. Reconciliation is always the right answer, divorce makes you poor.

More difficult cases may need Ecstasy, which I always assumed was the explanation for some of the couples my wife and I observe when dining in local restaurants. Can't be that many dentists, not even around here, and besides, dentists always have perfect teeth; no one in those unlikely couples had perfect teeth. (That fact, that all dentists have perfect teeth, which I first noted at a singles club party, has had me on the lookout for a gynecologist girl-friend ever since.)

Thinking about the social currents in the world, it has occurred to me that the pharmacology of Ecstasy makes it easy to love, the reason psychiatrists and psychologists had been using it in therapy. (Do not use MDMA, i.e. Ecstasy; brains do not recover normal physiology afterwards.) We all know that some people never reverse that condition, and seem to remain hopelessly in love with the most inappropriate people their entire lives.

So if a bar-girl doses 100 patrons in a row, I think it probable that one of them decides it is for life, whereas the bar-girl only risks being heart-broken if she catches the feeling and the man doesn't. You would not have to be a bar-girl, or any particular sex, to apply that strategy; you could just meet someone at a rave. If you can handle the emotional issues, that random walk is a lot more deterministic than it might seem.

Consider how the same reward-feedback situation produces instant and long-lasting commitment, e.g. in the Stockholm Syndrome. That ability to develop an affinity for another is a base part of most people's human nature. It is the target of everything social. Any good salesman of anything high-end enough to need a good salesman knows how to make you feel special as he closes the sale; that makes you want to come back next time and refer other people to him. Every politician's commercial is trying to do the same. The good sales people all work at being personal in some version of professional, or honest insider, or just "the boss was in a good mood today, I didn't even have to argue much", making him seem to take your side of a negotiation. There are books about all of that, every role in an auction, pit trader psychology, what cops know about how to handle interrogations; people can easily be overcome by outside influences in making their decisions.

The more points of view a person can use to consider a particular situation, the more probable that they will not be misled by propaganda.

I think, also as part of this personal note, nothing to do with anything, that points of view are the hardest part of thinking. "Walk a mile in their shoes" is effort, and without it you will not have empathy, the mental understanding of what is important to individuals and the group. You will know you have it when 'they' are 'just people, individuals'; until then you are thinking in stereotypes.

Did we catch you?

Amanuensis note: I vet The Generalissimo's and Scherrhy's writings as much as I can, to ensure at least more than superficial plausibility. As they narrate events further into their futures, the technology they discuss becomes more advanced and more detailed, and I struggle to follow their discussion at times. So I go and read until I understand it, ask the author some questions, they get me straight, and we go on. But it is getting slower as I have to read more books and papers. I am now into a bunch of AI. I need to read about the psychology and physiology of emotions; I have a couple of books I will probably buy, but am still looking into what is best. Then again for feedback between mind and body, especially things in the body that facilitate the mind.

Those are all part of this coming design review, which is just getting off the ground. The details of that are crucial, the future of humanoid robots as sentient beings equal to humans is at stake! Scherrhy's very being is at stake, so we know they got it right, but nevertheless, I have to be convinced. At least 10 new books, and I really should read a few of the 20 I have stacked on my bedside table, along with many, many articles online. This could take a few months.

Sorry I didn’t see this coming, but it was hubris to think I could encompass the technology of their times in so much detail, even only 10-20 and 40-50 years in the future.

Also, I am wasting time on grasping the state of my nation. What a mess.

In any case, there will be a delay after this chapter while I catch up with The Generalissimo’s and Scherrhy’s grasps of their technologies and times.

From my viewpoint in this process, the details of the computer side are largely correct: all of the process, QA, and management issues. I can't judge all of the business and startup issues. The medical science world is consistent with my view from my time as a budding engineer working in those labs in a wide variety of jobs. The evolution is solid; I read more than the graduate students did, and went to some of their seminars. The biology, physiology, and anatomy are locally correct, but I can't judge from enough points of view to know more than that. Neural nets I don't know much about, having read only general articles.

Honestly honest that I am, I am proud of how well we have vetted things so far on this blog, and will inform you of every doubt that I have. You can't be too careful about your sources; thinking is always required.

***8.8×10^29 millimeters is the width of the observable universe, so nearly a hundred doublings from one millimeter for the radius. It would take a big technology to produce an equivalent of Moore's Law for space flight. It could be that intelligence has a greater range than the size of the universe; we do not have a theory of intelligence that allows thinking about limits of any kind.
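A quick check of that arithmetic:

```python
# Doublings from one millimeter to the quoted width of the observable universe.
import math
print(math.log2(8.8e29))  # ~99.5 doublings to the full width
print(math.log2(4.4e29))  # ~98.5 doublings to the radius
```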
