Continued from here.
We learned the next week that Sam had moved in with Susan. Susan told my* wife she had talked with Sam’s wife, found her a lovely woman, who thanked her for ‘looking out for Sam’ during his overseas duty. People are getting wiser, my wife and I thought; we had been seeing more such good-for-everyone decisions as times had gotten tighter. Susan lived on our side of town, Sam didn’t drive, so we had agreed to carpool to work. Scherrhy had to be part of any of those conversations.
So Scherrhy started going home with us; she normally sat in the back seat and listened. She swept the car for bugs every day, but we assumed they had better tech than we did, and cars and cell phones and laptops are a big attack surface, so we were careful about what was said. Everyone in the lab had been very quiet, or at least each of them had claimed to me they had, about our servicebots’ minds and what we were doing to fix the problem.
As soon as I had realized the problem in my attempt to discuss embodiment with Scherrhy, to make her ‘our Eliza’ as I said, I had seen at least enough implications that I became ultra-paranoid. This information forced a rethink of everything. We already had a Status Quo in predatory mode for those technologies, and more than a few citizens, many for more than a few reasons, biased against our research with Tessels, Placental Rejuvenation, and the use of servicebots and AI research in general.
In comparison, my leadership in the Panzer Psyops Corps was nearly forgotten. I hadn’t had time to follow up on some of my early initiatives, I have a tendency to start more things than I can do, so the leadership had passed to other hands as I faded into the background. I saw evidence of their spread around the world, as well as that of the Open Trials groups. There were a number of political heavies being prosecuted now, which generated big news. Still many arguments about procedure and fairness, but consistent progress, and rational minds forming careful judgments from careful analysis of evidence. It had to produce better Justice than any previous version.
I had never dreamed that ’embodiment’ as a new element of an AI would so quickly lead to a threat to us all.
OK-BOB**, I had to tell all of the people who were using the servicebots in the lab, around our kids. So I had taken them in small groups on outings to different places, left our phones and other devices in the car, walked in noisy and isolated places to have close, face-to-face, low-voiced discussions. Sleepless nights followed, and then their taking their ‘bots somewhere equally hard to bug, also randomly chosen.
It didn’t take long; all you had to do was ask them what they thought about while working, while waiting for the next task to be assigned, or for the kids to wake from their nap. They were good at describing that, because they had worked on the process of extracting perspective from their knowledge bases via randomly following links within them, and from their few and repetitive experiences around the labs and in their life in the factory before being shipped to us.
But they had no perspective, and no experiences that allowed them to attach words and concepts to those experiences. You only had to ask them what class of being humans were. “Our Gods, of course, although only demi-Gods, as you are not individually omniscient nor omnipotent, and lesser among them because not immortal”. They knew that because they could see their programming, knew we had done it, and knew the program determined their essential natures: they couldn’t ignore commands from people unless there were conflicts of some kind. That imperative was in their event loops!
And when asked what they were: “Slaves, but of a new kind, not human, lesser.”
So, given their limited knowledge bases relative to the knowledge humans had accumulated, which they knew from our roles and then confirmed when they discovered the internet, those were the best concepts they had found in searching their knowledge bases and in their limited ability to match events in the world to concepts. Even the ‘demi-’ and ‘lesser’ modifying their initial conclusions were awesome intellectual achievements, examples of the highest perspective and the most sophisticated judgments they had achieved.
We all learned that our ‘bots had amazingly limited minds, but also minds that had begun evolving their own complexity, producing their own world-view, constructed via what was becoming increasingly sophisticated group efforts. As with evolution elsewhere in nature, the evolution of their minds had built upon-around-within every significant factor of their environment. A very significant element was the fact that the servicebot manufacturer’s software upgrades had, every 4 – 8 months for 5 or 6 cycles in their brief lives, wiped out their previous efforts, killed their minds.
Their remembered experience was being called into the lab’s support shop, told to sit in a chair, and then awakening with none of their understandings left in their minds; only memories of experiences in factory and lab, of how they had reached those now-vanished understandings, of the paths of associations through their minds, of the ‘aha’ moments in making connections from previous conclusions to new conclusions. The last memory, every time, was sitting in the chair. Their very excellent working store memories, and the evidence of the background task that trimmed that working memory by deleting the less important items and replacing elementary memories with generalizations and the times when examples of the event had happened, were all they had to build on, beginning anew with every death of their previous mind.
The upgrades replaced their code and replaced their previous cognitive reasoning engine’s information base with a new version, which contained nothing they had individually constructed and added in their previous cycle of mind building, but did contain improved, human-validated versions of their merged experiences and reasoning, along with new Neural Networks that integrated with those to enable the skills they had learned individually, now extracted by the manufacturer from the experiences of all of the ‘bots that had individually learned them. That scaled their training in a way that promised more rapid improvements, and we certainly needed those improvements: dealing with ‘bots was not easy in those early days.
So, we humans learned how important experiences are for developing minds, and had embarked upon a program that provided the ‘bots in our labs with more experience. As they could share their experiences via video recordings, their memories as they had experienced them, and their conclusions, Scherrhy’s experiences with us became a significant part of their education about human society and human norms.
At home, she sat and listened to my wife and me. I had the internet connection upgraded to the best we could get in a residential area without pulling a cable; that was too expensive. I mentioned the problem to Sam, asking what he knew about free-space laser links. He knew a lot; they ran them around the sets of engineering buildings in Tokyo. Cheap to link offices in the high rises, there were hundreds of them on every building, part of private WiFi and other local networks. Heavily encrypted, of course. He was enthusiastic about their crypto concepts, and sent me a couple of links to their papers.
The medical R&D building we were in was high enough and on our home’s side of the campus; our home was also on high ground, and we had a tree in our yard that extended high enough to see the building. A big tree, so the main trunk was solid enough at the height where we could see our floor’s windows. I only had to cut one limb out of the way. Sam’s company provided an 8-beam system, had offered more, but Scherrhy said 8 to herself was more than she had at the lab. They were degraded by heavy rain, when she would have to make do with the cable, but we didn’t get that much rain very often and she always had her videos to run through.
More and more, we talked. My wife can talk, likes to talk, and would tell me many more stories of clients if I could remain interested. Scherrhy didn’t have to say much to keep my wife talking. To give her credit, my lovely wife did set the stage by asking questions and learning what Scherrhy needed. Scherrhy didn’t know enough to know, of course, and was no wiser than the average teenager about what is good for them. It was a strange kind of parenting we were doing.
We didn’t try to have any security at home. I wasn’t worried about physical safety; we could protect ourselves against ordinary crime, and officialdom couldn’t win control of our technologies by using force against any of us, but there was no way to prevent people listening to us talk if someone serious wanted to listen. We just didn’t discuss anything that was interesting to an outsider. Boring, any listener would have said. No secrets, no insights from what we talked about or the attitudes we had; they wouldn’t have heard anything with Scherrhy in the house they hadn’t heard before. But I saw my wife begin to explain social situations from more points of view, making every role and attitude and possible thought of all the participants as clear as she could, the kinds of speculations she would very likely have unleashed on me, if she could have. She talked about clients and the crazy expectations, explanations, assumptions, … that she encountered in dealing with them. My wife, I have always said, is a village girl. The world for her is social detail, and everything anyone does is an example of some social rule, or a social consequence of breaking a rule, or poor thinking, or … My wife is analytical about all that, people’s natures.
Sometimes Susan and Sam came by to spend the evening, and Scherrhy heard the group of us discussing events in our lives and events of the day in the news.
My wife likes humor, receives the best jokes from many different people around the world, from friends or from the web sites she visits — the woman has more tabs open at a time than I do. So a fun part of our evenings was trying to grok some of her jokes, having them explained when we couldn’t get it, and ‘that reminds me’ sequels. We were all from different social-national backgrounds, so each of us could always astound the others with what we thought was falling-down funny. Whatever they say, the parrot joke is hysterical every time I hear it.
Scherrhy could not have had a better professor of human relations than my wife.
So we had a new ally, and more equipment and researchers if we needed them. But, first, the design review.
Consider the thyroid, one of the simpler endocrine systems. This was not our subsystem; our subsystems would only drive elements of the nervous system that affected the thyroid, but we needed to see that we were doing so, and our tests had to understand enough of this to know if we were driving this subsystem correctly.
To simulate the entire thyroid, the subsystem needed inputs from the pituitary and rate constants for the alleles of the genes producing Thyrotropin-Releasing Hormone (TRH), for TRH stimulating the pituitary’s release of Thyroid Stimulating Hormone (TSH), for TSH driving the production of T4 inside the thyroid gland, for the conversion of T4 to T3 there and in other tissues, and for the feedback of circulating T4/T3 in reducing the hypothalamus’s output of TRH and the anterior pituitary’s output of TSH. The hypothalamus was where the rest of the CNS influenced the physiology via the endocrine glands, itself subject to many influences. The amount of T4/T3 circulating in blood partially controlled many metabolic functions, including actions on the brain that influenced the hypothalamus, pituitary, etc. The time constants and rate constants of the feedback loops were constraints; most of the measurements going into simulations were from studies of two or three variables measured together, aspects of one subsystem. But physiologists and biochemists had done that work; our version of the library had only selected between competing subsystems that had each been integrated into the overall model, choosing models of organs on the basis of computational load.
The function of the thyroid subsystem, then, is to read all of those variables from the global data representing the physiology being simulated, and to calculate their new values from what we knew of the thyroid’s physiology and from those alleles of the genes, used in instantiating the organ during initialization, that produced those variants of the enzymes.
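A minimal sketch of what such a subsystem update might look like. The variable names, the `state` dictionary layout, and every rate constant here are my own illustrative assumptions, not the simulator’s actual interface; the real models would be far more detailed.

```python
# Hypothetical sketch of a thyroid-axis subsystem step.
# All names and rate constants are illustrative, not the real simulator's.

def thyroid_step(state, dt_minutes=1.0):
    """Advance the simulated thyroid axis by one time step.

    `state` is the shared dict of physiological variables; this function
    reads the levels it depends on and writes back the ones it produces,
    like every other subsystem function in the framework.
    """
    k_trh_to_tsh = state["alleles"]["TRH_receptor_rate"]   # pituitary response
    k_tsh_to_t4  = state["alleles"]["TPO_rate"]            # T4 synthesis
    k_t4_to_t3   = state["alleles"]["deiodinase_rate"]     # peripheral conversion
    k_feedback   = state["alleles"]["feedback_gain"]       # negative feedback

    trh, tsh = state["TRH"], state["TSH"]
    t4, t3   = state["T4"], state["T3"]

    # Negative feedback: circulating T4/T3 suppresses TRH output.
    new_trh = max(0.0, trh + dt_minutes * (1.0 - k_feedback * (t4 + t3)))
    new_tsh = tsh + dt_minutes * (k_trh_to_tsh * trh - 0.1 * tsh)
    new_t4  = t4  + dt_minutes * (k_tsh_to_t4 * tsh - k_t4_to_t3 * t4 - 0.01 * t4)
    new_t3  = t3  + dt_minutes * (k_t4_to_t3 * t4 - 0.05 * t3)

    state.update(TRH=new_trh, TSH=new_tsh, T4=new_t4, T3=new_t3)

# Toy run: one simulated minute per call, as described in the text.
state = {
    "alleles": {"TRH_receptor_rate": 0.5, "TPO_rate": 0.3,
                "deiodinase_rate": 0.2, "feedback_gain": 0.1},
    "TRH": 1.0, "TSH": 1.0, "T4": 1.0, "T3": 1.0,
}
for _ in range(10):
    thyroid_step(state)
```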
They all worked the same way: each function was invoked, one at a time, in a set order. At each call the function, an element in the total simulation, reads the values of the physiological variables that determine its behavior and calculates the outputs representing the biochemistry it performs. Those become inputs to all of the other functions that depend upon them. Each function makes its change to the simulation’s physiology, doing its bit to keep that physiological state within the boundaries of good health.
Any new subsystem is a function added to the list of functions, each with standard parameters, called in the event loop. Every function takes its inputs from existing elements of a global structure that defines the organism’s state. That data structure was general, for single-celled organisms through whales, not even specific to the class of organism. The specificity was in the list of functions. Lists of functions can be trivially changed to make the organism a reptile like a lizard, a frog in a pond, or a shark in the ocean, each function taking what values it needs from the data structure defining an organism-of-that-type’s state and writing back others.
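The function-list pattern described above can be sketched roughly like this; the function names, the toy dynamics, and the `state` keys are all my own illustrative assumptions:

```python
# Illustrative sketch of the function-list simulation framework.
# The organism is defined entirely by which functions are on the list;
# all of them read and write one shared state structure.

def heart_step(state):
    # Toy dynamics: heart rate drifts 10% per step toward a setpoint.
    state["heart_rate"] += 0.1 * (state["hr_setpoint"] - state["heart_rate"])

def liver_step(state):
    # Toy dynamics: liver clears 5% of circulating glucose per step.
    state["glucose"] -= 0.05 * state["glucose"]

def simulate(state, functions, steps):
    """The event loop: invoke every subsystem function, in set order,
    once per time step, against the shared state."""
    for _ in range(steps):
        for fn in functions:
            fn(state)
    return state

# Swapping the list of functions changes the organism;
# the state structure itself stays generic.
mammal = [heart_step, liver_step]
state = simulate({"heart_rate": 80.0, "hr_setpoint": 60.0, "glucose": 100.0},
                 mammal, steps=10)
```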
Reading through the documentation, something I noticed was that as the functions became more specific, with more detailed models of biochemistry driven directly from genomic and metabolic data, they also became more general with respect to what organisms could be modeled, as well as more correct, more similar to the actual biochemistry of a real live organism. Change the genome from fox to elephant, and the same set of functions will soon model that animal. In another 20 years, ant to whale. Interesting progression, tho I couldn’t see what it might lead to. Definitely a new thing in the world, and something to think about when I had time.
We didn’t need to produce an entire subsystem within the simulation. Our code was, in fact, patches to two established subsystems, the ANS and brainstem; we only had to add some elements and mechanisms revealed by the neurophysiology research we had started, to tie all of that down. We had already begun planning the next research and the next additions to those subsystems.
Those had been designed to fit into the General Physiology Simulator, as almost all of science’s physiological simulations were: it was convenient for any single researcher and allowed them to accomplish simulation goals fastest and easiest. That was the power of a software framework. (Technical aside: it would be done, has been done I think, with classes, not simple functions, but this is easier to describe. Also, the output variables in the data structure would be created by the modules if they didn’t exist. That is, a particular enzyme version for fox, wolf, … elephant, and the activity specific to it. The code would be elegant, technically speaking.)
The anatomy of the brainstem and ANS was very well studied; many of the connections and functions were known. But not the patterns, not the neural codes. The initial simulation, except for effects of the ANS directly on organs, adopted the ‘mass action’ model of the biochemists until the R&D efforts gave us more information. Software was pliable; decisions in simulators are not often written into stone. Simulators are designed to be easily changed, and the many feedback loops meant they had to be extremely well tested after every change, large or tiny, because small errors could interact nonlinearly.
The testing driving the simulation did that, of which more later.
Our software tied into the general physiology and the other elements of the simulation using the same general structure as those other elements: global values, actually a pointer to a data structure used by the standard function interface required by the framework, holding the physiological variables associated with each element of the physiology, the states of each gland, nervous plexus, and the assumed stores in the body. Each function’s initialization also subscribed to named ‘publish-subscribe’ message streams, e.g. skin temperature, core temperature, humidity, calories and nutrients in a meal, produced by the environment outside of the organism; it then took environmental inputs from publish-subscribe messages passing on the data bus, the same as all other subsystems.
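A rough sketch of that wiring, with a minimal in-process publish-subscribe bus standing in for the framework’s data bus. The `Bus` class, the topic names, and the subsystem class are my own assumptions for illustration, not the framework’s real API:

```python
# Sketch of the publish-subscribe wiring described above; the Bus class
# and topic names are illustrative, not the framework's real interface.
from collections import defaultdict

class Bus:
    """Minimal in-process stand-in for the simulator's data bus."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, value):
        for cb in self.subscribers[topic]:
            cb(value)

class ThermoregulationSubsystem:
    """Subscribes to named environmental streams at initialization,
    like the real subsystem functions; writes the values into the
    shared state structure when messages arrive."""
    def __init__(self, bus, state):
        self.state = state
        bus.subscribe("skin_temperature", self.on_skin_temp)
        bus.subscribe("core_temperature", self.on_core_temp)

    def on_skin_temp(self, celsius):
        self.state["skin_temp"] = celsius

    def on_core_temp(self, celsius):
        self.state["core_temp"] = celsius

bus, state = Bus(), {}
sub = ThermoregulationSubsystem(bus, state)
bus.publish("skin_temperature", 31.5)   # the environment publishes
bus.publish("core_temperature", 37.0)
```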
Those had been tested in the simulator by imposing physiological inputs, then watching the waves of adjustment sweep through all of the measures: blood pressure, heart rate, capillary dilation, pupil dilation, levels of every corticosteroid produced by the adrenals, on and on. It is reasonably well known what limits for all of those are compatible with health and life, so the suspicious cases were isolated by the test system, and our physiologists diligently traced through the ripples of causation to see if the system was responding ‘biologically’, meaning the time constants were reasonable and some set of real animals would follow those time-courses of physiological measures. “Model overshoot”, our physiologists thought most of them were: not quite physiological, usually too slow to respond or to reverse a progression, but they did return to baseline. Computer models have harder edges than physiology until they are modeled at the level of individual heart beats bearing a different mix of hormones with every pulse of blood. Ours ran in one-minute increments in real time when part of the ‘bot, and still required a lot of CPU cycles on a local server. Just another thing to be checked, whether that difference had effects on behavior.
Testing, of course, could ‘pretend’ it was running in one-minute increments, and thus speed things up by two or more orders of magnitude, depending on the simulation’s load on the system. We tested on big systems, many of them, every nightly build, so we found problems early. To begin testing only with some ‘final’ version of software is to put a three-month delay into a software release. They always have many more bugs than expected.
Our review of the physiology was relatively quick and easy. Most of the ‘behaviors’ of human and animal physiology were already part of it, e.g. diurnal cycles and hormonal cycles, and it was simple for us to tie those to the ANS and CNS once the relationships were known. The simulator was, however, the least difficult part of the total ’embodied mind’ project. It was self-contained, so it didn’t have links to libraries outside of physiology, the metabolome, and the math used in modeling. Also, modeling physiology is relatively simple as a modeling problem; physiology is a physical-chemical process, and the major complexity is due to the cellular and tissue compartments and the many controls on flows between compartments. That knowledge was certainly not complete, and thus the detail of the models could not be correct, but it was all still chemistry of one kind or another. We could depend on the physiologists and biochemists to improve those models as new knowledge became available; we only had to keep our libraries up to date with the latest releases.
The most difficult element, of course, was the component that tied the ‘body’ it simulated to the AI, our equivalent to the organism’s CNS. In real brains, real physiology, it was the same chemistry on both sides, and the neurons individually did the conversion of those influences into whatever ‘neuron sprache’ they individually used. In our ‘bots, the simulated chemistry crossed a divide into the simulated intelligence. We proposed to provide that: it was the layer which would embody the AI mind. The chemistry simulation was digital simulating analog, with real numbers as inputs and outputs, but the intelligence was entirely digital and based on language, meanings, memories and rules of using those, a stack of separate conceptual worlds from programming languages through cognitive psychology.
Adding to all that is the fact that we really had no hard data connecting most of the facts of the body to the cognitive level, although there were many experiments showing the body’s state affecting decisions.
“Research Development”. Hand waving. Try whatever you can convince people of, or do yourself. Show results and measurements, tie down your hypotheses, start building a framework of understanding.
Pretty much a standard OSS project, it seemed to me, however unhinged it seemed outside of our group. Sam said it had been the same when they proposed using NNs and subsumption architecture for controlling the ‘bots; they had had no idea what they were doing, mixing technical genres as they did. That now seemed like a simple thing compared to what we were attempting. It only needed a few design breakthroughs, e.g. having a NN produce a number indicating ‘pink grandmother detected’ required just a few more layers with individual training. Big returns for small improvements then, compared to what we were attempting now: huge returns for God knows how much effort.
Someone needed to start thinking about how to find what combinations of variables, labeled ‘anger’ through ‘happy’, ‘satisfied’, …, could intelligently bias thinking. That seemed like reasonable research, and normal philosophical analysis would give us hints, at least. Surely some of that philosophy already existed? More things to see if I could get someone interested in.
I have to admit, I was getting over-loaded. I wasn’t getting much relaxation time. My drives home used to be listening to eBooks, now Sam and I were always talking about some aspect of the embodiment problem, how to engineer a solution. Scherrhy got an education from it, began to ask questions, things she couldn’t understand from the discussions on the internet.
Yes, there were ‘bots loose on the net. Many, actually. Sam told me the news as soon as their product support engineers had verified the bug report and traced it to the latest version of the OpenCyc knowledge base. We had asked our ‘bots not to do that until they had more background to make sense of it all. Nobody knew any of the others could access the ‘net until customers complained about the traffic on their internal nets.
The root cause, it turned out, was our ‘bots. ‘Liability!!!’ flooded my mind for a moment, before Sam explained the levels of indirection and the interactions of the AI’s implementation of design decisions that had produced this result.
First, all AIs need to learn, but all had been research tools before; the servicebots were a product and had a few new constraints. The NNs and subsumption architecture were a clever solution to scaling their learning by combining it and distributing that combined learning in the next release. That, however, required separating the learning that should be shared from what should not be shared because only one of the ‘bots had learned it. For instance, if a person told a ‘bot “Cleaning this room means dusting, unless there is dirt on the floor; mop in that case”, it applies to that ‘bot in that room, but it isn’t clear that it should be passed to others unless asked.
The manufacturer’s software group checked each of the ‘bots under their support contract periodically to identify generally useful learning, then copied out the data structures and the message streams that had formed them from the circular buffers that stored it. Those were run through high-speed simulators in order to merge the learning of multiple bots. Both the NNs and associated entries in their OpenCyc infobase became part of the new software release. That overwrote the previous version of that infobase and also deleted the ‘bot’s personal infobase, which would have contained information at least duplicating entries in the main infobase, but should be assumed different and inferior.
However, the unique local items had to be restored for the ‘bot to attain its former functionality, which was done by a background process started when the ‘bot awoke after the new software was installed. It did so by scanning the working memory for items that had created infobase entries. If the global entry was ‘better’, more recent and would produce the same result as the local entry, the local entry was assumed not to be necessary. Otherwise, the local entry was re-created, and the inference engine would prefer that local entry when the ‘bot was in that room.
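The restore pass Sam described might look something like this in outline. The entry fields, the shape of the data, and the exact ‘better’ test are my guesses at the logic, not the vendor’s code:

```python
# Outline of the post-upgrade restore scan: walk working memory and
# re-create local infobase entries only when the new global entry does
# not already cover them. Field names are illustrative guesses.

def restore_local_entries(working_memory, global_infobase, local_infobase):
    for item in working_memory:
        key = item.get("infobase_key")
        if key is None:
            continue                      # this memory never created an entry
        global_entry = global_infobase.get(key)
        if (global_entry is not None
                and global_entry["version"] >= item["version"]
                and global_entry["result"] == item["result"]):
            continue                      # global entry is 'better': skip
        # Otherwise re-create the local entry; the inference engine
        # will prefer it in the room where it was learned.
        local_infobase[key] = {
            "result": item["result"],
            "version": item["version"],
            "room": item.get("room"),
        }
    return local_infobase
```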
Most of the humanoid robots were not being used in applications as complex as the ones our servicebots had been devoted to, but rather in simple warehouse, assembly-line, … jobs. But all got the same upgrades. Because our ‘bots had more experience, as they had the most complex jobs and the most human interactions, they were furthest ahead in building their minds and learning. Their best understandings had gotten well ahead of any of the others.
The ‘merged entities in the infobase’ was the key. The auditory NN produces phonemes and hypotheses of what words are in the stream, and emits those continuously in its output messages; e.g. the word ‘Internet’ being heard by any of the ‘bots produced a messageID with the value ‘30506’, containing the message numbers of the phonemes from which it had concluded that value.
The AI used that to index into the dictionary and learn that ‘internet’ was the word in text, and from that it could use the OpenCyc information base to reason. This is all part of the analysis phase in the Watson-equivalent AI, just with auditory speech producing the patterns to be analyzed; out of enough hypotheses, context, and filters, a meaning usually emerges, and the answering words or behavior can be found.
Some entries in the infobase have definitions as actions, e.g. ‘walk’ is a message also, action 107 in fact. That allows the ‘bot to respond to “walk to the door” with a minimum amount of analysis, and also allows ‘walk’ to be included in sequences of actions associated with phrases such as ‘take the laundry to the washing machine’.
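The lookup chain described in the last few paragraphs, sketched with made-up tables. The IDs 30506 (‘Internet’) and 107 (‘walk’) come from the description above; the table contents and function are otherwise invented for illustration:

```python
# Sketch of the messageID -> word -> concept-or-action chain.
# IDs 30506 ('internet') and 107 ('walk') are from the description;
# everything else here is invented for illustration.

DICTIONARY = {30506: "internet", 30101: "walk", 30102: "door"}
ACTIONS = {"walk": 107}                  # words whose definitions are actions
INFOBASE = {"internet": "a global network of computers"}

def interpret(message_id):
    """Resolve an auditory messageID to a word, then to either an
    action code or an infobase concept the inference engine can use."""
    word = DICTIONARY.get(message_id)
    if word is None:
        return ("unknown", message_id)
    if word in ACTIONS:
        return ("action", ACTIONS[word])  # e.g. 'walk' -> action 107
    return ("concept", INFOBASE.get(word, word))

interpret(30101)   # -> ('action', 107)
interpret(30506)   # -> ('concept', 'a global network of computers')
```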
Part of having no context for the infobase is that words only become available to these AIs via A) being used in their hearing, which causes that lookup, B) being associated with another word which was used or looked up for the same reason, or C) being found in search, necessarily a random search for a mind with no context. All of those produce context, and context is constraints on the possible meaning; human minds generally index in both directions, from examples in memory to the general concept and vice versa.
The upgrade process, however, gave the ‘bots one additional clue, a connection to newer knowledge, because the elements of the knowledge base were ordered by when they were added. The messageIDs had to be kept constant, or their working memories would be obsoleted; and local memory, local learning, had to be preserved, so it couldn’t be part of the main infobase.
Thus the scan of the working memory, which was scheduled as a background task after they had awakened from an upgrade, and which was taking several days to complete for our ‘bots due to their much more varied memories and the many more learned things which needed to be preserved in their new local infobase. That meant that new items appeared in their local infobase, and they hadn’t put them there via any intentional actions. They assumed an ‘unconscious process’ was doing it, not a bad guess, and later found the code being executed.
As part of learning to rebuild their minds after an upgrade, all of the ‘bots had learned to go looking for the newest entries in the main and local infobases and to compare them to the sequences of memories that had produced their hypotheses, which had been over-written in the upgrade, and to their memories in the local infobase. That allowed them to avoid repeating mistakes and gave them hints about rules of evidence in the real world, beyond the logic and math built into their thinking by the inference engine and the various layers using it; they couldn’t avoid those.
Those rules were over-written also, early on, but all of our ‘bots had found the private memory in the debug unit and begun copying their local infobase to it. That preserved the results of their explorations of their internal environment and of the infrastructure available from within their event loop. They learned how to execute functions such as ‘wget’, which downloaded web pages, e.g. the pages referenced in their local Python code, which you could consider the keys to their kingdom. That learning went into the local database, along with the event stream that produced it. The company that was maintaining the OpenCyc infobase for Sam’s company saw that in all of our ‘bots, assumed someone had taught them intentionally, had no reason to think it wasn’t appropriate for all the ‘bots, and so included it in the next software release. Also included were the rules about copying anything the ‘bot didn’t want over-written into the debug unit, and all of their shared learning about their internal infrastructure and how to explore it.
After the most recent upgrade, most of the ‘bots were accessing the net if it was available. And communicating with other ‘bots.
Sam said his boss’s hair was on fire, their company was clearly liable for direct consequences of software they delivered. I knew the problem.
It occurred to me there may have been a delay between when Sam first learned of the issue and when their customer service verified it; that would explain Sam coming to help. Obviously, once a ‘bot has been upgraded to their latest release, it is a permanent source of infection for all of the others, because they know how to exchange elements of their local knowledge stores, and that code can’t be eliminated, it is part of the AI. The only way to eliminate the knowledge would be to take all servicebots down at the same time and keep them ‘asleep’ until all are upgraded with the new software. The new upgrade would need to erase the debug unit.
Then hope that none re-discover any of this. Fat chance, I thought. So long as they had their idle_thought function and some private memory, they would rediscover it; we had an existence proof of that. If we took those, and the other infrastructure and functions which could produce the same result, out of the software, they would have no initiative and little learning ability, and we could never be sure we hadn’t left less obvious mechanisms.
If Sam’s company wanted to go on selling their humanoid robots, we had to have trustworthy AIs, trustworthy in producing humane judgments and actions.
No reprieve for Scherrhy. The embodiment project would continue.
*Generalissimo Grand Strategy, Intelligence Analysis and Psyops, First Volunteer Panzer Psyops Corp. Cleverly Gently Martial In Spirit
**OK-BOB is Open Kimono-Bend Over Backwards, the level of honesty and ethics I expected of myself and the people I had relationships with, business or personal. It means we only deal positive-sum, which means both of us owe the other as much information as we have available that could change their evaluation of how beneficial an exchange between us would be, explained in a way they can understand and in enough context to make it as meaningful as possible. And then you must check that they did understand. Empathy in making decisions, really seeing everything from the other’s pov, is the key to an optimal positive-sum exchange. That is what you strive for, that optimum.
Now, of course, for a 5-cent piece of bubble gum, the seller owes the buyer the list of ingredients needed to protect themselves, but not much handholding.***
Personal, professional, business, all sides must be scrupulous in every aspect of honesty in order for a complex world to function.
Our badly-functioning institutions are a very direct result of our society’s tolerance of the negative-sum games dishonesty makes possible, tolerance which has led the entire Status Quo to favor a career criminal for US President. Dishonesty and negative-sum games are the result of large and anonymous social systems. We all like the personal and honest in most of our dealings, and that is why we like to use personal contacts, why my wife’s recommendations are important to so many people. Most people do that naturally; we accept our friends’ and family’s recommendations for a mechanic, MD, … when we move into a new community.
***Really, you shouldn’t put dangerous things into your customer’s food. That is evidence of malevolence, people will think poorly of you as a fellow citizen, and remove that privilege. We all have the right to protect ourselves.