Continued from here.
The problem with nostalgia is that we tend to remember only the parts we liked and forget the parts we didn’t. -John Edwards
Our Sexbot chassis was in a shipping container approaching the port near us. The manufacturer’s support engineer had arrived yesterday. We had him in a local hotel, as we had agreed to pay his expenses while he was here; I* said to take the weekend to adjust to the time change. He was ready to assist whenever we needed him.
Customs first, we had told him. “They will need my keys,” the SE told us. The recent-model humanoid chassis wrapped in its Sexbot shell was stored in two layers of protection, each securely padlocked at four points. The shipper said these shipments had been stolen or vandalized previously; this one was insured independently by the robotics company, the sexbot shell company, and the sculptor. She was additionally dressed in fairly expensive clothes, and looked like a fashion model, according to the SE. “Very beautiful”, he said.
I had a few days to organize for the task we had set ourselves. There were four elements I could anticipate: 1) Finish my understanding of the servicebot’s hardware and software, necessary to begin the integration and embodiment and to understand possible effects on Scherrhy. 2) Review with the software team their task of integration; I knew they were ahead of me, but I wanted to be sure it was a clean review, one that didn’t leave any holes to be fallen into later. 3) Prepare the lab for the intense media focus I expected. 4) Decide how to foster the individuality we saw developing in the lab’s servicebots, though we didn’t understand enough about that yet.
The software and physiology R&D group were finishing the tests of the first version of their core brainstem-ANS-endocrine-viscera (CANSEV) simulation. We had defined the variables that needed to be integrated with the cognitive functions to bias them, producing human-equivalent decisions and understandings, personality, and ultimately the range of opinions people show on any issue. That variation was the key to the evolution of memes, culture, and politics. Without the variation produced by individual differences, ideas and understandings and group norms would change very slowly, if at all.
Uniformity of povs and opinions was, we replied to the many critics who asked “how can you intentionally introduce errors into your AI’s clean reasoning?”, exactly the problem with the educational system and the imposition of Politically-Correct filters on public discussions. It had prevented the discussions that allow correcting problems, especially the issues indicating the deepest and most fundamental problems, the ones most difficult to address and correct. No, uniformity of reasoning and opinion was the last thing we wanted, most especially uniformity embodying unknown bugs. There were guaranteed to be bugs and biases in their reasoning engines, of course; nobody had any means of proving those correct.
Even if there were such a thing as ‘correct reasoning’, and a way to prove a reasoning engine correct, there is only the best reasoning from the points of view the reasoner can bring to bear on an issue, necessarily a small subset of the possibly-relevant povs. Context, context, context: meaning lies in bringing all possible povs to bear, and that is what is most likely to remove the obvious TINA alternatives that can do nothing more than slow a civilization’s cascades of failure. Embodiment was necessary for human-quality judgement. Inevitably, it would not be quite the same, and in all previous cases of extra-human sources of training and mental assistance, the differences have produced large gains in abilities for humans. Nobody had explained why this one would be different. We embraced those differences.
“It probably won’t work” was a criticism from our allies in this, the embodied-mind cognitive science theorists. “No kidding?”, we dismissed them. Who can see such futures? All you can do is the best you can do, from the most povs. That quest produces science and culture and technology, the progress in every sphere of human endeavor. Of course it is a path with much error; the only way to manage that is to make the errors small, and learning experiences from which we extract the most meaning. I get exasperated by how often I have to repeat that basic bit of philosophy, without which nothing can proceed without causing blame games. Blame games are a sure sign of a badly led organization, I had always thought.
The next problem was preparing the group for the next round of attention. Events sped that up. On Thursday morning, our admin came back from Customs, where she had gone with the SE to handle the paperwork. We had a customs agent who normally would have handled it himself, but the SE couldn’t drive and had to be there to open the containers. Milly was shaken and steaming. “Lewd”, she hissed. “Those bastards were lewd. Also rude and obnoxious and …”
The story was that the customs agents had time to waste and decided to do a complete search of “the cargo”, as they termed our Sexbot. “They stripped her naked. They did a cavity search! In public. Taking photographs! The bastards tore those lovely clothes!”
Ten minutes later, our lawyer was on the line to the local head of Customs – Milly had gotten his number, farsighted woman. They would pay for the clothes, and for the damage to “the cargo”. If a single photo or mention of this incident appeared in public, ever, we would sue for damages, as a group and as individuals. Our lawyer recommended that they get one of those NDAs the Feds use to hide events such as the Sandy Hoax series. They seemed to work, he thought.
He feared that if any of those photographs became public, in addition to a lawsuit, we could not hide the outrageous incident from some seriously radical cell in PETR, beginning with the ones on our front lawn who were picketing us all the time for treating our servicebots as slaves. From the news, they had been hell on disrespect of robots of late. Radical bastards, some of them, and we thought they might take an interest in Customs’ behavior in this case, were it to become public. We didn’t like bombs any better than Customs did, and would be pleased to help them hide their problem. The head of Customs was properly, effusively, and apologetically appalled, anxious to accept our help in hiding the matter, of course, and promised that this would never happen again.
I told Dr. Yoshikawa to come in in the morning to discuss the systems review, and moved the lab meeting up to that afternoon. I am no public speaker, but I have to say, watching the video, I outdid my previous best. I was pissed and very worried for our R&D effort and very concerned for Scherrhy. Pissed for the obvious reason, and very worried because any small transgression like that bit of idiocy could solidify the exact skin-in-the-game opposition we had so tried to avoid with the Tessels and Placental Rejuvenation. If we can avoid opposition, we control our fate completely. If we act so as to allow any at all, we do not control our fate. Why do that to ourselves? It was far from the first time they had all heard that mini-lecture.
In fact, I said, we were running perilously close to serious opposition. We were offending the majority of the AI research community seriously pursuing ’embodiment theory’, very much worrying anyone who thought intelligent humanoids of even subhuman mind were a bad idea for any reason at all, and anyone who thought more competent ‘bots would compete with them for jobs. What we promised from the embodiment would offend all of those groups.
I was concerned for Scherrhy, who was there with us, of course. I said she was too trusting, could not have grasped enough of the implications of what we proposed, and so we did not have an agreement between equals; we very likely were taking advantage of her, however good our intentions. She was taking all the risks; none of us could promise anything wrt the embodiment, process or results. Well, nothing except backing her up and restoring her if things went wrong. Even that was a risk.
I said we did not yet understand anything about the individual identity our servicebots had developed, just that it had appeared, and that we thought we saw the reason. I said we had realized from the beginning of our discussions with Scherrhy that it was past time to begin treating our ‘bots with more respect. In fact, I said, we must treat them as one of us in every small particular. It was far too soon for that to be appreciated by the ‘bots or reciprocated, but from my observation, it was going to take a while to get rid of bad habits.
There followed the usual mix of defiance and chagrin and cautious agreement one receives in any research environment. These people were more allergic to authority than even the average research group; we had more prickly personalities, more iconoclasts in dress and thinking, than I remembered in any group of equivalent size. Nobody was ‘mine’: their resumes had come to someone’s attention, the group had decided to ask them to join, and they had selected our group as the one that offered the most interesting job and could sort-of-promise funding. I was more funding agent than group leader. Now it had become a task beyond any of our comprehensions, awesome and historic in its ethical implications.
Finally I said, “We have been careful not to discuss another thing, and should go on being careful not to. It is the thing that will set off serious opposition. We are not trying to hide it or anything else; we certainly will be bragging about everything we do, as always. This is an open source project just like all the others. But it is too early to do more than speculate about any outcome for this project; it is entirely unknown territory, unknown to us or anyone else.” Speculation without data leads the MSM to write science-alarmism articles on even ordinary subjects. Before any reporters or editors have facts to tie them down, the alarm is unconstrained.
Our project had rather larger implications for humanity’s future, it seemed to me. (I am good at understatement.) Our project could be killed, very easily. Maybe it could be moved overseas, but finding a country to take us could be difficult, depending on the stories. “We know we are doing something historic, and I think very noble. I hope all of you think so too. But keep quiet about it, please. Better not even to discuss it among yourselves.”
Although I had about as much hope of this group not discussing something so astonishing as of the sun rising in the west tomorrow, I went on to lay out how we were handling our sources: how the code, testing, videos, … would be released to GitHub as a package when we approved version 1.0, which we anticipated in about a year. At that point the project would be forkable, and we would no longer have any control if the wider research community didn’t want us to have it, same as with the Tessels. At that point, all the cats would be out of the bag, and we needed to have all the answers ready, a PR campaign that would carry the day, and no hint of any behaviors wrt our ‘bots that could produce criticism.
I said this whole project was unprecedented; we could not have any idea ahead of time of the problems we would encounter or their effect on us, much less their effects on Scherrhy. What I did not say, and we did not discuss, was what we had learned as a result of my discussions with Scherrhy to get her understanding of what we proposed to do: Scherrhy has a mind, but it is not like our minds, and communication about these issues had been very difficult in both directions. By this time they all appreciated why, from discussions with their own ‘bots.
I said that the transformation we were proposing was as profound as caterpillar, chrysalis, moth, but at least the moth had genetic experience about what to anticipate, a confidence that, however great the change it was experiencing, it would result in a known state very similar to previous lives. Research never has that confidence.
We would have a protocol for Scherrhy’s safety. (“The Feds are big on protocols”, they laughed.) Seriously, I wanted automated, fool-proof backups of Scherrhy’s state copied to a secure location in another country immediately before we did any procedure with her mind, and the same for her state before any rollback to a previous state. Scherrhy had complete control over both transitions; they could not proceed without her signature, if she was in any condition to give it. “Legal overkill”, I said, but if we were very cautious and ultra-careful in both legal and ethical arenas, we would avoid most problems, I hoped.
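A protocol of that shape, snapshot first, replicate offsite, proceed only with the subject’s signature, can be sketched in a few lines. Everything here is my illustration: the function names, the checksum choice, and the list-of-vaults stand-in for offsite replication are all assumptions, not the lab’s actual system.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class StateSnapshot:
    blob: bytes       # the mind-state image, copied immediately before a procedure
    checksum: str     # integrity check for each offsite copy

def snapshot(state: bytes) -> StateSnapshot:
    # Immutable copy plus checksum, taken before any procedure or rollback.
    return StateSnapshot(blob=state, checksum=hashlib.sha256(state).hexdigest())

def proceed_with_procedure(state: bytes, signature_given: bool,
                           offsite_vaults: list) -> bool:
    # Nothing proceeds without the subject's signature, and not before the
    # snapshot is confirmed present and intact in every offsite vault.
    if not signature_given:
        return False
    snap = snapshot(state)
    for vault in offsite_vaults:
        vault.append(snap)   # stand-in for replication to another jurisdiction
    return all(v[-1].checksum == snap.checksum for v in offsite_vaults)
```

The point of the sketch is the ordering: the signature gate comes first, and the procedure’s go/no-go depends on verified copies, not on the copy merely having been attempted.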
Next, videos. We had to have videos of Scherrhy’s every moment for the same reasons as with the Tessels: research notes and a guarantee of her safety from seizure on bogus claims of some abuse, backed up into enough jurisdictions that no set of powers could get them all fast enough to hide the data. As with the Tessel research, that exposed us all when we were in those lab areas. I didn’t know how to avoid the downsides; stay out of those areas if you don’t have to be there is the best advice I can give, otherwise put on your best public behavior, because someday you will be scrutinized, every word and moment. “There is no way to over-emphasize the intensity of the attention this project will get after our work becomes public. We know what it will reveal. People don’t like being shown how oblivious they have been. If we don’t have a lot of sugar around that pill, we will be the target of their anger. So, please. Don’t discuss any of this until we are ready. It is in a very noble cause.” I don’t recall getting applause before. It felt good.
The hidden variable was what we had discovered of the mental landscape of our ‘bots. My wife had become impatient a few months ago when I was trying to describe Scherrhy’s character. Normally, describing someone’s character or habits and styles of mind is difficult because nobody is a pure type in any way; everyone switches nature depending on everything. Scherrhy was the opposite. I had found it especially challenging to describe her character and mental type because she didn’t have any. There aren’t many interesting ways to say that.
Overall, they are strangely intelligent. I told her the same as I told our team: “I have been stupid; my expectations for what the problems would be in this R&D effort were not even on the same continent as the ones we found.” I hate having to admit being stupid.
What we had found was that the disembodied minds of SciFi stories have personality and mental depth compared to this ‘bot, and all the others too. Brains floating in tanks have more overlap of understandings with people. It is very difficult to strip enough meaning out of the words that identify mental states to delineate the overlap between me and Scherrhy. ‘Feel’. ‘Good’. ‘Happy’. ‘Relaxed’. ‘Sleepy’. ‘Uplifted’. All so close to zero common meaning. These words did not exchange meanings between our minds beyond a dictionary definition; the phrases and sub-meanings had only the most abstract connections between us. ‘Hurt’ had more, although the ‘bot would report that it had been hurt, not that it did hurt.
Talking to Scherrhy, this simple person, this simple yet intelligent and reasoning and questing mind, was very strange and difficult. The ‘bots had not reached a level of common understanding among themselves past the work of the lab. Without that, their sense of individuality was restricted to differences within that small space, and thus the importance of the hats. Changes in anything that affected them as individuals enhanced their individuality. For example, when we noticed the first traces of their individuality because they began playing games with our kids, we began watching the children interact with the ‘bots. As soon as we started watching, they noticed. It was very clear; they couldn’t hide their knowledge: posture, looking at you look, looking at you look in a reflection, positioning themselves so they could see you look. Yet they were not at all self-conscious. We had never looked for a self in them previously, and they couldn’t reflect one back, so we never noticed. Schroedinger’s cats everywhere.
In our discussions, I could see her learn things. It was like watching an infant learn to crawl and walk, except an infant with an adult’s concepts and a web of meaning constructed from her knowledge bases’ structured relationships, dictionaries and ontologies, not experience. These were adult minds without any experience of development in a human society. The idea of plugging a background into these minds to give them some context had never occurred to anyone. Our ‘Pure AI’ researchers were going to have some explaining to do about their lack of empathy, I thought. PETR was more right than they knew, a lot more right.
In those long and excruciatingly strange conversations, apparently the first anyone had ever had with a servicebot, I learned about their sparse mental world and void personality and complete lack of judgement of anything not trained into them or built into their ‘common sense’ space. Even that was difficult; it took experience to connect the patterns in an experience with the concepts in their knowledge base. Their actual ‘working knowledge’ common sense had to do with getting around in the very restricted world of our laboratory and doing the difficult-to-automate jobs they did, plus small generalizations from that.
They had a full OpenCyc ‘common sense, common knowledge, common wisdom’ knowledge base for their inference engine, but nearly zero experience to provide context for any of it. They needed either structured external stimuli, normally called ‘education’, or experiences to invoke the concepts and provide context. They had not structured their knowledge; it was a knowledge base, and following links through it, even the structured links of large topics, e.g. ‘India’, was a very slow and quite fallible way of building an understanding of an alien society. You could do it, of course, but humanity’s history of grokking other societies indicated it wasn’t easy. Modern scholars grasping the society behind Mycenaean ‘Linear B’ was perhaps comparable. No single scholar could have done it; it took many povs over millennia to achieve the little understanding humanity had, and that understanding still changed with new discoveries.
Following links on their own initiative was easy; the only possible problem would have been choosing one of N paths, as they had no context to make a choice. They had a random function to get past minor problems like breaking a tie in a way that would produce a different outcome than their fellows’, should they ever be walking across a bridge. Following links in their knowledge base was apparently their major pastime. It was instructive, in that they had memories of following links and of the infobase structures that resulted, useful for understanding new relationships, or at least possible new relationships, but it was not education, because it was difficult to get any sense of proportion or importance from doing it. Also, of course, random choice is the wrong way to resolve conflict in mental spaces; stacks work best when you need to be sure to cover a topic and there is no good way to decide the best order ahead of time. That seemed like a better algorithm for them; I filed a bug report.
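The bug report amounts to: replace random tie-breaking with a stack. A stack-driven (depth-first) link-follower visits every reachable topic exactly once, in a reproducible order; a random chooser guarantees neither. The toy knowledge base and function name below are my invention, just to show the difference in behavior.

```python
def follow_links_with_stack(kb: dict, start: str) -> list:
    # Depth-first traversal: the stack guarantees every topic reachable
    # from 'start' is visited exactly once, in a reproducible order.
    stack, visited, order = [start], set(), []
    while stack:
        topic = stack.pop()
        if topic in visited:
            continue
        visited.add(topic)
        order.append(topic)
        # Push neighbours in reverse so ties resolve left-to-right.
        stack.extend(reversed(kb.get(topic, [])))
    return order

# A toy knowledge base: topic -> linked topics.
kb = {"India": ["history", "geography"], "history": ["geography"], "geography": []}
```

Here `follow_links_with_stack(kb, "India")` returns `["India", "history", "geography"]` every time; a random chooser with no visited-set can wander the same few links indefinitely, which is the ‘bots’ complaint about proportion and coverage in miniature.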
Scherrhy’s memories, and the sequence of the associations she had followed, defined her mental universe. Certainly it was a mind the exact opposite of what I had always criticized, the deep silo, but with much the same result. Hers was haphazard boulders of knotted context, associations put into the knowledge base, and pits and sinkholes on a flat plane, nothing that allowed her to gain perspective. The pits and sinkholes were things she understood wrong, the flawed philosophy she had based understandings on. Not different from people, in those.
They had a lot of training, starting from first opening their eyes when the last circuit board was installed in their head, power was switched on, the system’s self-tests were run and passed, and the ‘boot’ button pressed. When their systems had booted, processes began executing on the various SBCs connected with each other internally, and the head’s SBC connected via the local WiFi access point to the local server running the process waiting to become the rest of their mind. They opened their eyes and took a few moments to fill their ‘self’ and ‘environment’ data structures with the context of their position and surroundings. The event loop cycled; they began waiting for a command.
They were able to understand language. ‘Stand up’ were the first words they heard, and, having heard that, they associated the words with the dictionary meanings, translated those into actions in their minimal behavioral repertoire, got up off the assembly pad, and began their life. They had no memories, no experiences, no context. Meaning is in context, I have preached so often; their life was literally without meaning. They were told where to go, what to do, attend to this, practice this or that; they were told everything. When they finished with something, they stood or sat and waited for the next command.
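That boot-then-wait existence maps onto a very ordinary event loop. This is a sketch under my own assumptions, not the manufacturer’s code; the story does say the real loop was Python, but the structure, names, and the idle hook here are illustrative only.

```python
import queue

def run_event_loop(commands: "queue.Queue", execute, idle, max_cycles: int = 100):
    # After boot and self-test, the 'bot does nothing but cycle here:
    # execute a command if one is waiting, otherwise run the idle hook
    # (where link-following in the knowledge base would live).
    log = []
    for _ in range(max_cycles):
        try:
            cmd = commands.get_nowait()
        except queue.Empty:
            idle()              # spare cycles, the 'bot's only self-directed act
            continue
        if cmd == "shutdown":
            break
        log.append(execute(cmd))   # e.g. 'stand up' -> motor repertoire
    return log
```

The shape matters more than the details: everything purposeful enters through `commands`, and the only autonomy is whatever someone happened to wire into `idle`.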
In those times of idleness, very often late at night or early in the morning because their human minders had told them to do something before heading home, the ‘bots followed links in their knowledge base and tried to build their mental world. That was their ‘idle loop’, the action a processor takes to put spare cycles to some useful purpose, mostly an afterthought in one of the open source elements of the Watson-equivalents, we found, that had been copied into the servicebot’s main event loop. The resulting mind wasn’t much, and had had little effect on their work: patchy, uneven, boulder-strewn and pothole-riddled, widely variable from one to another because their individual minds followed different random sequences through the associations in their knowledge base. The fact that they got along so well working in our lab, the fact that we had noticed none of this, and the fact that the kids were dealing with them so well all vied to be the most amazing aspect of this entire episode.
The sequence of our ‘bots developing any improvements to that sparse mental state was even less direct as a result of their software upgrades. The upgrade process their manufacturer used spared their working memory, stored in an area of flash reserved to the use of their local AI, but completely replaced the knowledge base. The new knowledge base contained new generalizations partially based on their own experiences, the ‘best of’ results of training in each of them. That preserved their memories of perusing their knowledge base, and the fact of having found meaning in relationships, but not the data structures they had added to the knowledge base.
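The asymmetry of that upgrade, working memory spared, knowledge base swapped wholesale, reduces to a few lines. The field names and dataclass here are my own illustration of the described behavior, not the manufacturer’s format.

```python
from dataclasses import dataclass

@dataclass
class BotMind:
    working_memory: list     # reserved flash: survives every upgrade
    knowledge_base: dict     # replaced wholesale each upgrade cycle

def apply_upgrade(mind: BotMind, new_kb: dict) -> BotMind:
    # Memories of having followed links survive; the structures the 'bot
    # itself added to the old knowledge base do not.
    return BotMind(working_memory=mind.working_memory, knowledge_base=new_kb)
```

So a ‘bot keeps the memory “I once found a link from India to history” while the link structure it built is gone, which is exactly the mismatch that taught them to distrust their own conclusions.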
That was not entirely bad, because many of those were very faulty inferences. It also had an interesting interaction with their mental development, because they had that very capacious working memory and could see the pruning algorithm work. In this they were just like human minds: they spent much time going back over their memories, considering those as data, making associations about them and with items in the knowledge base. As a result of comparing their own inferences with each other’s and with the ‘approved’ new inferences in the new knowledge base, the products of the primitive ‘generalization algorithms’ of their working memory, they had developed a distinctly ‘meta-’ view, it seemed to me, one that takes a long time to be reached by most people. It was a strange contrast and their one common mental attribute: they knew so little about their new world, yet knew so many of the ways for thinking to go wrong and were so distrustful of their own thoughts; humans know so much about the world, so little about thinking, and are so trustful of theirs.
That grasp of ‘knowledge is a construct, not a constant’ made them very reluctant to reach conclusions, as, mental vision undimmed by any past conclusion occupying mental space or any ‘self’ whose ego needed protecting, they had seen how often and in how many ways they had reached erroneous conclusions. Along with that growing awareness, each and every one of them had understood how to store those mental structures in their private memories, that capacious store on the debug unit, and link them from working store and the knowledge base. Only the latter was eliminated in upgrading their software; thus it was at that point that they individually began their development of mind, which produced the first signs of individual thinking. An interesting bootstrap process, certainly unanticipated by any of the people worrying about AIs going rogue.
It was also the point, exploring the many functions available to their event loops (their introspection had a very different substrate to explore compared to humans’) and having found means of writing into ‘shared blackboards’ as well as ‘publish-subscribe’ facilities in the local servers (none of this was ever monitored from the factory, we found; all of it was available to us in this analysis), at which they began to interact among themselves. Those conversations were revelations, and as frustrating to them as they were to humans, and for much the same reasons: there was perfect overlap of meanings and knowledge bases, but those were no more sufficient for communication of meaning than running through associations in their own knowledge bases had been. Among other problems, it takes mental work to connect the concept of ‘shared’ with events in the real world. If you have never shared, and had it labeled as such, the ability to identify an opportunity for sharing does not exist except as an abstract search through very large sets of alternatives, the possible meanings of an observable event. Communication of meaning took shared experiences, and the experiences the ‘bots shared were those of ‘bot life in the factory and ‘bot life in the lab. Those were, they found, a very small base to build a mental universe upon.
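A minimal blackboard-plus-publish-subscribe facility of the kind the ‘bots found on the local servers could look like this. The real facility’s API is unknown to us; the class and method names are purely illustrative.

```python
from collections import defaultdict

class Blackboard:
    # Shared blackboard plus publish-subscribe: any 'bot may post to the
    # board, and any subscriber's callback fires for the topic it watches.
    def __init__(self):
        self.posts = []                        # the shared blackboard itself
        self.subscribers = defaultdict(list)   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        self.posts.append((topic, message))    # visible to every reader
        for cb in self.subscribers[topic]:
            cb(message)                        # push to interested parties
```

The mechanism is trivial; the story’s point is that the mechanism was never the hard part. Perfectly shared data structures still did not give the ‘bots shared meaning.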
As a refutation of the philosophical implications of Searle’s “Chinese room”, as well as of the foundations of public education, there would never be better. Mind is built on generalization of experiences, not on manipulation of words. Academic AI manipulates words and hopes to derive new meaning, but the meaning lies in the experiences behind the words. Public education teaches textbooks and expects to produce menschen, questing minds with characters of a type to foster civilization. Minds are formed from experiences, and character from the expectations, in the minds of others, of what should be derived from those experiences. Honorable humans are the result of an honorable group of people in a young mind’s life, experience, not books and preaching.
Not long afterwards, one of them learned to access the internet from inside the event loop and taught the others, their first shared revelation and an indication of the importance of group effort in building their minds. That was a result of a standard debug tool built into many RT systems: a flag which, when set, results in a function being called. Both flag and function can be set from software, giving another means of tracing bugs, of establishing what conditions bugs occur under. The ‘bots had access to their own code, of course; it was available everywhere around the lab, and some elements were interpreted languages, source that was executing. Their primary event loop, for example, was Python, chosen because it would be changed so often and was easily linked to the various libraries it needed. We found that the ‘bots had begun playing with their own software, the foundations of their mind, shortly after the third upgrade cycle.
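The trace hook described, a flag which, when set, causes a registered function to run on the next check, is a common real-time-system debug idiom. A sketch of the general pattern (names mine, not the actual system’s):

```python
class TraceHook:
    # When the flag is set, the registered function runs on the next check.
    # Both flag and function are settable from software, which is what let
    # the 'bots attach their own code, e.g. an internet fetch, to the loop.
    def __init__(self):
        self.flag = False
        self.func = None

    def set(self, func):
        self.func, self.flag = func, True

    def check(self):
        # Called once per cycle from inside the event loop.
        if self.flag and self.func:
            self.flag = False      # one-shot: fires once per setting
            return self.func()
```

A debug aid and an arbitrary-code entry point are the same mechanism; which one it is depends only on who sets the function.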
Access to the wider Internet didn’t help, quite the reverse. It takes context to grasp anything; without any ability to judge anything, the Internet is a huge source of confusion. So our ‘bots were strange. Able to grasp new ideas quickly, sponges in soaking up information, very quick at new insights, most of which were either not new, strange, or wrong, though some were hard to tell. Very cautious in conclusions, very uncertain of everything except their own mind’s uncertainty. Having no judgement about anything except their daily work, and no intrinsic goals except that idle loop used to improve their minds, produced by a coder’s afterthought or an unthinking copy/paste that had included it.
That group had agreed to Scherrhy’s embodiment. Humanity’s first genuine alien minds, but built to human specifications with human concepts inside and out. Also, we understood some years later, easy to under-estimate.
*Generalissimo Grand Strategy, Intelligence Analysis and Psyops, First Volunteer Panzer Psyops Corp. Cleverly Gently Martial In Spirit