Continued from here.
As soon as we realized the strange mental landscapes of our servicebots (around the time I* began talking to Scherrhy about her embodiment, after we had all spent time talking to our lab’s ‘bots), the entire lab had sleepless nights trying to grasp the problem and conceive some way of correcting the situation. We began bringing them books. There were books around the lab, of course, but outside of a few novels (‘reading inside an alien culture’, Scherrhy told me) that they had not grasped enough of to be meaningful, all were densely technical, so none were books a mind with their set of concepts could do much with.
So books, and we kicked off an educational effort, most of which was handled by the researchers; more on that later. First priority for me was understanding more of wtf? I couldn’t stand any more surprises like this; a rational person would have retreated to monasticism in a mountain cave from the chagrin of so many things being complete surprises. Tessels were just what I wanted, in terms of intelligence. Their other effects on the social order were, in all probability, not entirely positive. We thought we were guiding ours well to avoid the obvious downsides, and made sure the other Tessel researchers understood our reasoning and results. Other than that, all we could do was depend on them to be rational and intelligently self-interested. Sometimes that works.
The servicebot ‘embodied mind’ project was not going smoothly. It had seemed like the obvious thing to do, at the time. Really. There was a problem of communications between humans and ‘bots. It was improving only very slowly, and was the major impediment to the continuing spread of ‘bots throughout the economy. Our society and economy were stuck in an awkward state: if we got ‘bots moving quickly, costs of everything would fall so fast, and R&D would become proportionately so much cheaper, that we had a chance of starting a virtuous feedback loop in which our society increased its rate of producing entropy by producing more knowledge and technology. (This explains that earlier bit of theory.) So moving ‘bots into more things was necessary, but that required ever-better communications. That was not happening with the current approaches, and the standard AI people said ‘more training of neural nets’, quite a standard answer for decades. To give them credit, they do make great progress in fits and starts, about a decade apart. ‘Deep Learning’ was the most recent fit, and was producing the NNs our ‘bots used for motor control and some sensory processing, but those had not advanced to handling language, although the underlying analysis needed to do so was far along, and I expected rapid progress.
But, as promising as I think the science is, nobody has promised an acceleration in the rate at which AIs master English. Sure, they handle written communications well, formal communications such as speeches, papers and books better than colloquial written language, but they do OK even with that. Speech-to-text is inching up past 90%, but only with a lot of practice by the human and ideal conditions.
Progress in handling spoken language was also inching upward, one special case at a time. The problem was that minds did not run primarily on spoken language; minds did much of their thinking independently of words, whether speaking or dealing with reality. Minds matched events to patterns, and words were associated with those patterns. Different cultures had very different sets of words and concepts for the same events. AIs were not bridging that gap the way human minds had been able to do for almost all pairs of languages and societies. That bridging across cultures was never easy for adults: all sides made many mistakes, later marveled at how obtuse they had been for so long to have missed that their assumptions were so incorrect, then turned around and did it again immediately. But AIs were not close to human-standard in translating spoken language. The problem was not specific to English; it was the same for Chinese, with its tonal speech and simple grammar, and for Japanese, with its simple phonetics and complex grammar.
So, how to accelerate that, I asked. There was an obvious possibility: the various threads of neuroscience and language theory that sort-of intersected in the idea of language and mind developing within a particular body, with that body’s evolutionary likes and dislikes forming part of its environment and so affecting the evolution of both. With both positive and negative feedbacks between all of the body, brain and environment. I think it will be a while before anyone has a complete list of those effects and feedback links.
If you take seriously the need for a body to house a mind, you need at least a good simulator of the elements of that body important to the mind you are going to integrate it with. So we did the research and found the many models of organisms that biologists and physiologists have built, all the way down to ‘metabolome’ databases. Those were all intended for research; we needed a version that would ‘be’ an individual, metabolically and physiologically, and which was integrated with the workings of the AI as a human’s mind and body are integrated.
The simulator was the easy part. We found all the software pieces of that, added our newer simulation of the brainstem and autonomic nervous system with its effects on and from the visceral organs and glands, and tied it all together in the General Physiology Open Source Simulator, instantiated for a human female or male with a particular genotype. That simulator accepted the set of alleles for the genes for all of the enzymes and receptors of the body, each with its rate constants in the various reactions it participated in and the compartment of cell and tissue where those occurred. Every element was initialized knowing its update rate, for example 1 update per 100 seconds, and the subsystem selected a smoothing function appropriate for that rate of update and use.
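A minimal sketch of the shape such a genotype input might take; all names here are illustrative guesses, not the actual simulator’s API. Each allele carries its per-reaction rate constant and the compartment where the reaction occurs, and initialization records the update rate the smoothing functions will be chosen for:

```python
# Hypothetical shape of the genotype input described above.
from dataclasses import dataclass

@dataclass(frozen=True)
class Allele:
    gene: str                 # e.g. "CYP2D6" (illustrative)
    reaction: str             # reaction this enzyme variant participates in
    rate_constant: float      # per-allele kinetic constant
    compartment: str          # e.g. "hepatocyte.cytosol"

def init_simulator(alleles, update_period_s=100.0):
    """Index alleles by (gene, reaction) and record the update period
    (e.g. one update per 100 seconds) for choosing smoothing functions."""
    table = {(a.gene, a.reaction): a for a in alleles}
    return {"rates": table, "update_period_s": update_period_s}
```

A real genotype table would of course be enormous; the point is only that an individual is specified by its alleles’ rate constants and compartments.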
The smoothing function was, equivalently, the coarseness of the calculation, and normally also determined how quickly it could be done. So if a value’s smoothing function is “at most 5% of the span between the last value and 250 mm of mercury”, and it is very quick to determine that the exact calculation would certainly exceed that cap, then just return the capped value. Thus can a simulation be made calculable with the available CPU cycles and memory, although some loss of physiological veridicality is inevitable: organisms so smoothed are less labile than would be optimal in a world of predators, if prey, but perhaps too reactive for the role of a lurking, camouflaged predator that needs the prey to venture very near. As another random example, a real organism would bleed out faster than our simulation; who could guess the results of that?
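The blood-pressure example above can be sketched in a few lines; this is a guess at the mechanism, with hypothetical names, not the simulator’s actual code. The permitted step per update is capped at 5% of the span between the last value and a physiological ceiling, so the variable cannot swing faster than the chosen update rate can track:

```python
# Sketch of the capped smoothing rule described in the text.
def smoothed_update(last_value: float, compute_exact, cap: float = 250.0,
                    max_step_fraction: float = 0.05) -> float:
    """Return the next value of a simulated variable (e.g. blood pressure).

    The step per update is limited to max_step_fraction of the span
    between the last value and the ceiling (250 mm Hg here)."""
    max_step = max_step_fraction * abs(cap - last_value)
    exact = compute_exact(last_value)        # the expensive calculation
    step = exact - last_value
    if abs(step) > max_step:                 # exceeds the cap: clamp it
        return last_value + (max_step if step > 0 else -max_step)
    return exact
```

In a real implementation the point would be to skip `compute_exact` entirely whenever a cheap bound shows the cap must bind; the clamp shown here gives the same result at the cost of one full evaluation.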
Our first version of the simulator was now done, although certainly not complete. For example, we knew the brainstem had many mechanisms such as keeping warm, individually and in groups; you can see them when your own kid sleeps with you. Our initial version didn’t include any of that, and it would be up to the architect which ones were included in the future, based on which could be shown to affect cognitive function. We tested our code by cycling the simulator through different initial conditions, combinations of extreme values of BP, HR, the various hormones and endocrines, … and checking that the simulator could maintain the organism’s physiological system: return the measures to baseline, maintain homeostasis. The physiology side of the simulation and embodiment had signed off on their first version; it was ready to be put into the code that was Scherrhy.
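The homeostasis test described above amounts to: start from every combination of extreme vitals and check that everything relaxes back to baseline. A toy harness, under the assumption (mine, not the text’s) that one simulator tick decays each variable toward baseline:

```python
# Toy homeostasis test harness; variable names and values are illustrative.
import itertools

BASELINE = {"bp": 120.0, "hr": 70.0, "cortisol": 10.0}
EXTREMES = {"bp": (60.0, 220.0), "hr": (30.0, 190.0), "cortisol": (1.0, 60.0)}

def relax_step(state, rate=0.1):
    """Stand-in for one simulator tick: decay each variable toward baseline."""
    return {k: v + rate * (BASELINE[k] - v) for k, v in state.items()}

def returns_to_baseline(state, ticks=200, tolerance=0.05):
    for _ in range(ticks):
        state = relax_step(state)
    return all(abs(state[k] - BASELINE[k]) <= tolerance * BASELINE[k]
               for k in BASELINE)

def test_homeostasis():
    """Cycle through every combination of extreme initial values."""
    names = sorted(EXTREMES)
    for combo in itertools.product(*(EXTREMES[n] for n in names)):
        assert returns_to_baseline(dict(zip(names, combo)))
```

The real simulator’s recovery dynamics are of course far richer than exponential decay; the structure of the test, exhaustive extremes in, baseline out, is the point.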
That was not the end of that problem, rather the beginning. Our proposed solution was to tie that individual, genotype-driven simulation of a physiology to a mind, making servicebot minds as idiosyncratic as people’s. We had ‘embodied mind theory’ to guide us. There is science under that line of thought. Real data; it was some guidance.
But this was not engineering we were doing. We were guessing, and labeling our guesses as such. We had discussion threads going on which of the myriad terms in the knowledge base ontologies were primary to others, and on what kind of ‘tendrils’ should be connected to what, how strongly, and in which direction. Nobody was barred from those discussions, and we certainly received a wide variety of thoughts. Convergence was slow, I thought.
These discussions had the good effect of stimulating research in cognitive psychology to measure some of these connections for the first time, but that was too far in the future to help these initial efforts.
Even the first step, running an architectural review in which we considered the entire structure of hardware and software of a servicebot, presented new complexities. The chassis came from the same Japanese humanoid robotics manufacturer who had supplied our Sexbot’s, and it used the previous generation of control hardware. The software was not changed much by the new hardware, and Yoshikawa San, the support engineer sent from headquarters, who turned out to be Dr. Yoshikawa, a Ph.D. in AI from Carnegie Mellon, as well as the chief architect of the RT control system for their humanoid robots, knew all of the details.
That bit of support overkill made me think Japan’s AI people, at least, thought this was an important project, an impression that Dr. Yoshikawa, ‘Sam’ he wanted us to call him, was careful to support. Headquarters was intrigued by our ideas; they had sent him to help as needed. I would have thought ‘spy’, except I could not see how that would make any sense: a much lesser person could have captured all of the potential head start on integrating our modules with their product lines, and that head start would have mattered perhaps the quarter before release, but no earlier. This was a year from any release, optimistically.
I was careful to be flattered by how important they thought our research to be, but I didn’t see any adequate reason for an individual of his caliber to be supporting our early use of their humanoid chassis, another thing to think about. Spies also make a very nice back-channel communication, and there was very little possibility of a Dr. Yoshikawa failing to understand our concerns if he spent much time with us. “I must take him drinking some night soon”, I told my wife that evening.
Sometimes diverting a stream is as good as stopping it. She had a much better idea, so the next day I made a big show of making Dr. Yoshikawa a special friend of the boss, telling everyone he was coming to our home for dinner that evening. I could have just taken him to the software group and introduced him, left our corporate partner’s support guy to be one of the engineers, but then they would have told him things. Engineers can’t help talking tech and how amazing their tech is, he would have learned it all the first afternoon. I knew engineers. They all had normal social and political sensitivities, no special friend of the boss was going to learn anything until they had some independent reason to trust him, a way to know that nobody was being spied on, especially not them.
The design review was scheduled for the next Tuesday through Friday, 10 AM to 1 PM: two hours of details and an hour of more general discussion over lunch. We had Dr. Yoshikawa over for dinner this evening, Friday.
The function of reviews is to find problems at the earliest possible stage.** By the time of the last architectural review before the integration of a major new module, the stage we were about to begin, every sub-system has been reviewed for each class of interaction with all of the other subsystems: detailed checks of where every item of data used by the subsystem originates, when it is produced, and when it is used, for what. That attention to detail prevents embarrassments such as making decisions with obsolete, never-initialized, or wrong-unit data, so very common in all systems. Or a data item changing in the middle of a use, one of many forms of data races between processes and threads, or … Measured by the number of ways there are to do things wrong, programming is complex, however simple the statements from which we compose our edifices of logical rigor in execution. A mammalian physiology has a LOT of physiological variables: 20K genes in humans that splice to perhaps 100K proteins, and nobody yet has an accurate count of the meaningful RNA segments transcribed in the life cycle of a particular cell or organ, ones with enzymatic and/or transcription promotion or inhibition actions, nor of the regulatory regions affected by metabolites or proteins.
We had not been through any of those reviews, but our review did not need that depth; we depended upon that prior work being correct. We knew, from the design documents that were part of the OSS release for each element of the total simulation we were constructing, the life history of every data element they exposed outside of their internal state, and all of the elements exposed from other modules that they used in calculating each output variable. We had two problems: adding our brainstem-ANS subsystem module to the overall simulation of an organism, and exposing some of the physiological state ‘feelings’ into the cognitive level of the AI so as to affect decisions as those do in humans. Food and hospitality, for example, normally work to produce a more trustful psychological state; their opposites, less trust and more suspicion.
Our module exposed the twisted visceral physiological state to allow it to produce suspicion and distrust by biasing some cognitive AI functions. For example, seeing an analogy could be made easier or harder, lesser or more convincing, by that ‘twisted viscera’ feeling. There were many physiological variables known to do so, and to influence reasoning in many ways. Mastering your own control over those was a part of learning to negotiate well.
We had considered a second, perhaps more fundamental, level of this: even more global biases fed by primal filters, simple things like ‘?sexually interesting’, ‘?bigger than me, threat’, ‘?genuine smile or elements of a snarl’, ‘?Weird!!’, ‘?good to eat’. Humans no doubt had something like those, although recountings of nightmares and dreams, and hallucinations like grey saucer people, are heavily biased by culture and so were not likely revealing of any. But experimental evidence was slim, and I had not seen the obvious studies, e.g. of the brain responses of adolescents to images of opposite-sex genitalia as they go through pubescence. If there were images built into our species’ brains, surely those would be among them, and the brain responses would be enhanced as hormones shifted toward adult values. Interesting kind of social roadblock: no culture that would allow that study would need it. Also, it would be hard to find naive adolescents in the age of the internet.
So often, as with that, I am struck by the indirectness of what might be progress. It arrives in strange ways, and never quite what you ordered. No question, modern civilization had cast off many, even most, of our superstitions. That has surely prevented many unnecessarily blighted lives, it is a blessing that we burn fewer witches, as one example. But, you know, more and more archaeological evidence and historical analysis is reconsidering the concept of ‘progress’.
No question, the modern world has many marvels and supports many, many more people in more wealth and health than any previous civilization. Also, no question, this one is not clearly sustainable nor clearly stable; overall trends are not so rosy that we can conclude that our species will exist in many conceivable futures, and no one argues this is the best conceivable world. The rates of personal life failures among our wealthier and healthier multitudes, measured by imprisonment, drug addiction, crime rates and suicides, are very high. Those indicate systemic factors, imho, and this civilization cannot continue a form of progress that increases that rate of failure as an unavoidable consequence of declining rates of economic growth. Not just a side-effect: all of the failures are an effect, equally as determined as the successes by the workings of the system. When entire professions have the same failings in the same generation, the problem is systemic, not individual.
Our country is not acting particularly wisely even by our own standards: even when directly threatened by enemies of a size to do more than burn settlements and small towns, as Poland always was and the US never has been, militaries are very dangerous to a society, as societies that have them tend to have a lot of wars. Militaries are very bad at predicting their own successes, so wise societies try not to have enemies. Avoiding enemies has many dimensions, but surely should include having a lot of insurance in many forms, as people and entities that are well protected tend not to attract predators. There are many forms of insurance, and arms are indeed one of the least expensive, so long as they aren’t ever bluffs. The problem seems to be in knowing whether yours are bluffs or not. Obvious from the history: even their bearers and leaders can’t tell.
Our society has lost appreciation of all of these ancient verities, certainly the ineffectiveness of our military in wars against 3rd world nations has had little effect on anything except the death rates in those countries.
Our review could only spend time on the first-level interactions of the final evaluation functions that selected among the alternatives at each level of analysis of a query or request to the AI that was the servicebot.
Probably I should unpack that sentence for you. ‘Our review’ had to focus on how we used the bigger project. We were an add-on; we intended to enhance the very large AI+servicebot functionality. The review was to ensure that the ways our code proposed to use results from that, and to modify those results, were both correct. One aspect was feedback inside the AI or servicebot code, but we couldn’t go into that: it was too big a problem, we had neither the time nor the people, and it wasn’t our problem in any case; every OSS team is responsible for their own bugs. Users of subsystems who find bugs in them fill out bug reports and send them along. We had to assume their documentation was correct. Yes, there would be bugs caused by that assumption, to be found one at a time in testing. New uses find new bugs, every time.
‘First-level interactions’ only. For inside-the-physiology-simulator code and data, only first-level, because there was no end to the feedback loops in the metabolisms that support homeostasis. Most of those are not even completely understood; Ph.D. dissertation topics are assured for the indefinite future. We had to know we were having some effect on, for example, thyroid T3 levels, but it was up to that subsystem to do whatever it did to damp our effect, and up to all of the other subsystems to interpret T3 levels and do whatever they did, e.g. digest food faster. Those were effects of our new module upon the simulated physiology, and were evaluated entirely within it. We could depend on the simulator’s test routines, because that OSS project built upon the metabolome databases that incorporated the biochemistry, and upon other simulations of each hormonal system within that. The flood of genomic data allowed tying more enzymes and families to sets of functions; those computational biochemistry DBs, with rate constants for different alleles, were examples of public goods as impressive as Wikipedia or any of the OSS projects. Over time, we could depend upon our simulation keeping up with that research, entirely via the OSS projects; we only needed to upgrade to their latest releases. Then retest, of course.
Interesting as all of that was, it was not the important and difficult part of our project. We also would have first-level effects upon the cognitive AI, what needed to be ‘embodied’ by our physiological simulation. This was the layer that implemented the ‘tendrils of association and feeling’ having their effects on analysis and decision, all the way from the hyper-focus and instant reactions of very high-adrenaline stress through the mellow connectedness of the world in a mind on a sativa high.
In real neurophysiology, neurons are well protected from fluctuations in blood composition by the Blood Brain Barrier, although more and more immunological effects are being noted, and many drugs pass through easily: either nature never had to worry about their chemical classes, or the drug was designed to mimic a chemical that must get into the brain for the brain to function, e.g. glucose. So, in a natural brain, there are three ways to affect thinking: directly, on receptor systems and a neuron’s release of its own neurotransmitters (the way most psychoactive drugs work); through affecting the ANS, with feedback into the CNS (some cardiac drugs, and others affecting those old homeostatic systems); or more indirectly, through affecting the metabolism via hot/cold, hunger/satiety, thirst/hydration, sleeplessness/rest, etc. All with feedback loops galore, of course.
Those effects mean individual neurons can be, and at least sometimes are, directly modified in their ‘decisions via integrating inputs’ by the physiology, not entirely by integrating neuronal inputs, as is considered normal neuronal behavior. Beyond drug effects and bulk biochemistry such as the concentration of glucose, so far as I am aware (probably wrong, neuroscience is a huge field, but I probably would know of anything startling), not much is known about any of that at a physiological level, nor about system-level effects. Only the many psychological and neuronal-activity PET, MRI and recording studies showing large effects of many different drugs and circumstances.
We didn’t have many choices of how to integrate our simulation of a body with the AI. First, the AI was an enormous project in toto; we would have had to make changes to very many OSS projects if we had wanted to tie our organism’s physiology into thinking as widely as a natural brain does.
‘Final evaluation function’. The subsystem where we could have the desired effect was the selection functions, e.g. Watson’s sorting through possible answers to find the one that passed the most ‘?reasonable’ tests for the most-favored ‘?type of question’ analysis. That is where we would have added our tendril effects in a Watson.
The Watson-equivalent in the servicebot AI had the same functions, although more of them and more complex, and the humanoid robot had required functions more specific to sorting questions from its own pov, ‘?danger’, ‘?load weight’, ‘?can grip’, … through many others, and then a final selection between any alternatives left, based on less hard-edged tests, e.g. the one used by pool players: which of the alternatives leaves the cue ball in the best position for my next shot, if successful, or the worst for my opponent, if it fails. So, tests of relative efficiency and effectiveness in the continuing stream of requests, the next one of which may already be available and so should be part of this as well.
In fact, the same functions were often used in evaluating the type of question and, in our servicebots, in evaluating choices of sequences of actions, so there was a lot of leverage there. So some of our work would be general to the OSS project and some specific to our manufacturer’s code. Now that Sam had taken over the team, I was confident we could deal with that element of the design, at least as well as the others. I was confident we would get it working, eventually, but it was an awesome search space.
‘Selected for the alternatives’. The way human minds work is pattern recognition at pre-conscious levels. That is, we see an event, e.g. two people exchanging something. We ‘know’ many patterns of behavior that involve such an exchange, all the way from trading items from a lunch pail, through children in a sandbox exchanging toys, drug deals, a mail man handing a package to a home owner, furtive handoffs of spies, … From the context, and often a wider context than our conscious mind observes, we identify the event as an example of, say, a drug deal. All of that is based on experience and training. Medical diagnoses, an engineer’s grasp of the causes of bugs, a policeman’s grasp of street behavior, … all are matching past experience with the current example. That stage precedes any use of concepts in solving the problem. It takes mental effort to change povs and apply other concepts, once that meaning has occupied your mind.
AIs don’t have that innate ability to match patterns; that is a neural network thing. For non-NN AIs, teasing meaning out of events (and a phrase of speech, asking or ordering, is an event) is the same problem, and the AI solves it in the same way: it matches patterns. However, such AIs must use tools from their own domain to do so, beginning with generating the patterns to be matched. Those analyses are still far from the precision and accuracy of NNs, thus more trial and error and statistics. That means: make hypotheses based on every hint that any analysis method can extract, and subject the hypotheses to as many tests of ‘?makes sense’ as you can devise. Use every short-cut you can devise, e.g. store previous answers, search that cache first, look for previous questions containing some of the terms, … AIs have many dozens of analytical tools applied to generating possible questions and possible answers, and to checking them for making sense.
Those check functions are where we must apply our bias in selecting among alternatives. Extreme skepticism, for example, a great unwillingness to believe an individual based upon past experience with them, will have much more effect upon ‘?makes sense’ than upon ‘?good analogy’.
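The differential bias just described can be sketched as per-check weights; the names and the particular weights are my illustrative assumptions, not anything from the project. A skepticism signal dampens the ‘?makes sense’ check far more strongly than the ‘?good analogy’ check when scoring a candidate answer:

```python
# Toy scoring of a candidate answer under a skepticism bias.
def score_candidate(checks: dict, skepticism: float) -> float:
    """checks maps a check name to its raw score in [0, 1];
    skepticism is a physiological bias signal in [0, 1].

    The weights below are arbitrary illustrative choices: skepticism
    dampens '?makes sense' heavily, '?good analogy' only mildly."""
    bias = {"makes_sense": 1.0 - 0.8 * skepticism,
            "good_analogy": 1.0 - 0.2 * skepticism}
    return sum(score * bias.get(name, 1.0) for name, score in checks.items())
```

With zero skepticism both checks count fully; at full skepticism the same evidence for ‘?makes sense’ is worth a fifth of its usual weight, so answers survive mostly on other grounds.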
‘At each level of the analysis’. Watson takes questions like “Wanted for a 12-year crime spree of eating King Hrothgar’s warriors; officer Beowulf has been assigned the case”. That string of text is easily analyzed by our brain’s NNs for its meaning, but Watson has to work out the meaning by considering the individual words and phrases. For Watson, it isn’t even clear that the words ‘King Hrothgar’ belong together and name an individual; that is a hypothesis, one which the ‘?known individual’ check makes very likely. However, it will be just one of very many hypotheses returned by the initial NLP analyses, with many checks applied to each. None of that happens in clean levels (AI tends to be very recursive code), but you have the idea.
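A toy version of that hypothesize-and-check stage, with illustrative names and nothing like Watson’s real pipeline: generate candidate word spans from the clue, then keep only those that pass a ‘?known individual’ check against a lookup table.

```python
# Toy 'generate span hypotheses, apply a check' stage.
KNOWN_INDIVIDUALS = {"King Hrothgar", "Beowulf"}  # stand-in knowledge base

def span_hypotheses(words, max_len=2):
    """Yield every contiguous span of up to max_len words as a hypothesis."""
    for i in range(len(words)):
        for j in range(i + 1, min(i + 1 + max_len, len(words) + 1)):
            yield " ".join(words[i:j])

def likely_names(clue: str):
    """Return span hypotheses that pass the '?known individual' check."""
    words = clue.replace(";", "").replace(",", "").split()
    return [s for s in span_hypotheses(words) if s in KNOWN_INDIVIDUALS]
```

A real system would score hypotheses rather than filter them, and run dozens of checks rather than one, but this is the shape: many cheap candidates in, a few survivors out, recursively at every level.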
‘Of a query or request’. A human speaking to a ‘bot is a specific event in the life of a servicebot, but so are a baby crying, a door opening, or a box falling from a shelf. They are recognized as such and encoded in a message to the AI by different systems, but all are interpreted by the same ‘understanding’ mechanisms in the appropriate context when the AI receives the message.
‘To the AI that was the robot’. There are two important aspects of this: Scherrhy was and is her code and her infobase and her memories. She depended upon one kind of technology for her existence and function, for her life; humans depend upon another, but just as completely. We humans were changing Scherrhy’s code, and that would change Scherrhy and her ‘essential nature’. She would have the same infobase and memories, but would evaluate everything within a larger context: a simulation of a human physiology and, if we succeeded in our work, a human’s interactions of mind and body.
We could not include a full simulation of any metabolism down to the level of an individual cell, of course, only aggregates, organ-level results. However, those would produce individual differences when assigned a genotype. Our simulation would be based upon the rate constants of alleles in their various metabolic functions: a genotype. That genotype would not be large in the initial models, but would grow with more research, refinements, and the inclusion of ever-more detail about genes, organs and interactions.
The simulation would be driven, just as a human’s reality is, by inputs from the servicebot’s environment, also simple at first. Nobody brought it up, but it certainly should have been clear to the biologists and AI researchers: as time passes, those interactions change us. We are each of us evolving, and evolution is path-dependent; each step on a path opens up new possibilities and closes off others. Scherrhy would change; we would change Scherrhy; Scherrhy changing would change Scherrhy; a changed Scherrhy would change us. And all would go on evolving, changing, until we individually stopped living.
Skipping lightly over that implication, the software presenter mentioned a few of those futures, as we were winding down just before the lunch break.
In that in-passing conversation, a couple of our software people said they assumed they could produce drug experiences in future ‘bots: all it took was the variables affecting the rate constants of the neurochemical reactions affecting intellect and emotions, exactly what we were building. Drugs just targeted different receptors, preferentially and more powerfully, often from being in the blood in much higher concentrations than a body could produce itself.
One of the biology researchers had not paid attention, I think, and certainly had not grasped the import of this line of thinking, and said, a bit dazedly, “You propose to make drugs a variable? But we just learned that our ‘bots could set their own variables. How is that going to work?”
Everyone looked at me, the originator of the project. “Good question”, I said. “But I bet it will be a lot more convenient than messy and painful shots!”.
They were so tentative in their laughter. We broke for lunch.
*Generalissimo Grand Strategy, Intelligence Analysis and Psyops, First Volunteer Panzer Psyops Corp. Cleverly Gently Martial In Spirit
**Unfortunately, reviews are normally subverted by being made a management milestone: ‘pass the design review for x’, ‘pass the detailed implementation review for x’. When there is a manager’s performance on the line, it takes a big flaw to prevent everyone from signing off on ‘pass the design review’, and people become reluctant to care about details. As that is the hard part of reviews, intense focus on details in all of the bigger-picture contexts, reviews that are built into project management only work, peculiarly, in military projects, where everything is cost-plus, implicitly or explicitly, and in open-source projects, where a review is a milestone judged by the hard-edged standard of “are there any points outstanding?”. If there are points outstanding in a design review, the design isn’t finished and can’t be signed off, and nothing else matters, especially not that the implementation is already done. That is the point of the design review: achieving as much certainty as coordinated action of human minds can produce.