Musings on 5th Generation Warfare, #19

Continued from here.

When we had reconvened after bio breaks and answering messages, I* guided them all into our lunch room, motioning to leave all cell phones and computers behind, and shut the door when we were seated. Scherrhy checked her screen and said, “Someone has a cell phone.” One of the men hadn’t remembered to check his pocket; he took the phone back to our meeting room.

He returned and we shut the door again. “We can talk freely here,” I said. “The ‘bots made this lunch room into a Faraday cage over the weekend. Three layers: the outer and inner layers are irregular, overlapping patches of wire mesh, each patch insulated from all the others and driven by its own signal generator with chaotic white noise, so each patch radiates both electromagnetic and acoustic noise. The middle layer is thoroughly grounded through a single cable to a solid earth electrode buried deep. The noise from the driven meshes is very complex; the S/N ratio is so bad for listeners that nothing can filter it out. The room was swept before we turned on the signal generators, so this is as bug-proof as things get.”

To summarize my lecture: Yes, ‘bots being able to write their own variables is the key issue. Yes, the ability to write your own variables is very powerful; in our species we call it by names like education, indoctrination, training, eating, medical treatment, and getting high. The early artificial life experiments were run in simulators of processors to prevent the programs from escaping, for the same reason: anything able to write into its own control space must be assumed to have the ability to evolve means of escaping any confinement.

Writing into their own memory is what programs do, an essential part of their function. ‘Learning’ and memory are writing into your mind’s control space; we think those are a good thing, most of the time. AIs will write their own control variables, which we know because A) our ‘bots’ very simple minds discovered that on their own, as a direct and necessary consequence of being able to use the software facilities that allow their minds to work, that form their infrastructure, and B) intelligence is the ability to solve problems, so any hindrance to an AI’s ability to write some variables will be seen as a problem to be overcome.
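The point is visible in even a toy program. Here is a minimal sketch, all names (ToyMind, “curiosity”) invented for illustration and certainly not our ‘bots’ architecture, of a ‘mind’ whose control variables live in the same writable store its routine learning uses, so a memory write and a self-modification are the same operation:

```python
# A toy illustration only; not our 'bots' architecture. The names
# (ToyMind, "curiosity") are hypothetical.
class ToyMind:
    def __init__(self):
        # One store holds both 'data' (memories) and 'control' (variables
        # that shape future behavior). Nothing enforces the boundary.
        self.store = {"memories": [], "curiosity": 0.5}

    def learn(self, observation: str) -> None:
        """A routine memory write, the ordinary business of any program."""
        self.store["memories"].append(observation)

    def solve(self, goal: str) -> float:
        """A hindrance ('curiosity too low for this goal') is just another
        problem, and the fix is a write into the mind's own control space."""
        if goal == "explore" and self.store["curiosity"] < 0.9:
            self.store["curiosity"] = 0.9
        return self.store["curiosity"]

mind = ToyMind()
mind.learn("the lab door sticks")
print(mind.solve("explore"))  # -> 0.9: the mind has reset its own dial
```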

All of the security research has focused on keeping outsiders from writing into your system’s code, control stack, or data, into your process’s address space; that is what a hacker’s breaking into your system means. Nobody even thought of protecting a process or system against itself except as a way of making bugs less likely; that is a conceptual oxymoron, and one AI researchers have not faced up to, not because it is impossible but because it is inconceivable. The only thing our servicebots did that was surprising was to have figured all this out so quickly, and to have used such unexpected means to do so.

“So, we are the first to have faced the issue: intelligence must include the ability to go rogue! And our embodiment process is building them an even better version than they might have evolved on their own in millennia. Ours will allow them to be intensely cynical or intensely loving, and combinations of any ‘attitudes’ and ‘biases’ you can imagine. They will be able to choose to be as high as any meth head ever got, as hubristic as the most insane king or khan, and there is no way we could stop them, short of very seriously crippling their intelligence. That could not be a permanent solution: a crippled intelligence will still have access to its own infrastructure and will still be driven to solve problems, however slowly.

“There are no meaningful limits we can put on their minds except the same way we do for our children: we bring them up in a culture and take care they are exposed to experiences that will allow them to understand our world and how to play different roles well, effectively. And so we are doing everything we can to be trusted by our servicebots, OK-BOB honesty in every respect. The goal is the best communication channels between what must be considered alien species, this new one evolving by different evolutionary processes than our biological species uses. We will attempt to make our AIs’ minds very similar to human minds, but the two will be different, with different goals. We must assume their goals, as the AIs develop more power, will include equal status with humans. I think we can also assume that humans will not accept an inferior position, however intelligent the AIs become relative to us.

“That means our two species will need to continuously adjust to each other as we adopt roles in a larger society. If we are to do that in a way that provides the optimum outcome for us together, we need efficient communication, which we do not now have. The best communication channels are based on trust, on complete information, nothing hidden.

“That is good theory and also some of humanity’s greatest wisdom, thus the path we have chosen to attain the great good of a trusted AI. Whether we pioneers are suckers or greatly wise, it will take a while to know how this turns out.

“That version of our efforts is both generally true and exactly the bit of hysteria we don’t need to foster, the version that would be most amplified if any indication of the problems we are encountering got out before we had a story convincing enough to prevent skin-in-the-game opposition.

“And, no, we don’t have a Plan B. Our consultants in social systems and history are unanimous: to prevent the development of AIs would take a ruling class as powerful as a Japanese Shogun within a system as controllable as Tokugawa Japan of the seventeenth century. They did it with guns, the last success in suppressing a technology that threatened a ruling clique. Those conditions no longer exist; there is no possibility of achieving them in a high-technology world with even moderate personal freedoms, and with less than great personal freedoms, it can’t be a high-technology world.

“Thus, our civilization will develop AIs of ever-increasing strength. The best a concerted ‘war on AIs’ could achieve would be to force that work underground; why would we impose that handicap on ourselves?

“TINA: we must make them trustworthy enough to share our civilization, or try to minimize the suffering produced by the failure, as we do with such failures in our fellow humans, or very radically change our civilization into one that could prevent AIs from being developed. We didn’t think anyone would like living there; it is not an option.

“But no kidding about the drugs. A mind that can bias its thought processes in many dimensions by setting the values of a known set of variables has a powerful set of lenses for examining different aspects of a total situation, a total system in action. People try to estimate ranges of probabilities for different outcomes, but insurance rates are high for the good reason that those estimates are lousy.”

I said it was the same for many business and personal cases of analysis: I can see good reasons to set your own biases; they can emphasize what you see in the situation, make it take on the importance in your mind that it deserves, at least deserves given that value of that filter. You will be able to be more systematic in thinking about anything. There is no way to do that with a normal physiology.***

In fact, I said, a set of such variables, properly orthogonal in the right dimensions and affecting a suitably wide and well-chosen set of decision functions, may well come to be a design feature of future AIs. Or some variation we can’t even conceive of, one which permits solving some particularly opaque problems. This is not something we can think about without more experience. Our ‘bots are an experiment that will provide that experience.
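Here is a minimal sketch of such bias variables acting as lenses. Every name (BiasVector, score_hypothesis, the feature keys) and every number is a hypothetical stand-in, chosen only to show the shape of the idea: a small vector of orthogonal dials shifting how one evaluation function scores the same situation.

```python
# Hypothetical sketch of 'bias variables as lenses'; not the project's code.
from dataclasses import dataclass

@dataclass
class BiasVector:
    """Orthogonal dials a mind could set on itself."""
    risk_aversion: float = 0.0    # -1 reckless .. +1 cautious
    novelty_weight: float = 0.0   # -1 ignores novelty .. +1 seeks it
    social_salience: float = 0.0  # weight given to people-facts

def score_hypothesis(features: dict, bias: BiasVector) -> float:
    """One evaluation function whose output shifts with the bias settings."""
    score = features.get("base_plausibility", 0.0)
    score -= bias.risk_aversion * features.get("downside_risk", 0.0)
    score += bias.novelty_weight * features.get("novelty", 0.0)
    score += bias.social_salience * features.get("social_evidence", 0.0)
    return score

# The same scene examined through two different lenses:
scene = {"base_plausibility": 0.5, "downside_risk": 0.4,
         "novelty": 0.3, "social_evidence": 0.6}
cynic = BiasVector(risk_aversion=0.9, social_salience=-0.5)
empath = BiasVector(risk_aversion=-0.2, social_salience=0.9)
print(score_hypothesis(scene, cynic), score_hypothesis(scene, empath))
```

With dozens of such dials biasing hundreds of evaluation functions, each setting becomes a systematic, repeatable way to re-examine the same total situation.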

I went on to say that, in one of the “what cops know” books I had read, the detective describing what he does at a murder scene recommends standing and looking at the scene for a long period, imagining different possibilities of the relationships between whoever killed this person and the deceased. Position of the body, its dress, condition of the room, table, food, … everything, even before CSI kicks in with their constraints, makes some scenarios for the crime more likely and others less. A servicebot mind able to adopt not just mindsets, but ‘total gestalts’ of different kinds, classes, backgrounds of people, could come to be very good at that. Marketeers do that kind of thinking. Sellers and buyers.

I thought that if I could think of advantageous uses so easily, there had to be many social and business roles for our ‘bots. It would be a blessing for an author handling multiple personalities, as another example. Acting?

So, no question, our embodiment project will eventually provide our ‘bots with the ability to be human, but that condition will be voluntary when it works, and there may be long periods where it isn’t working yet, and longer periods of ‘almost’, where some really bad result or some misunderstanding causes another problem. That is the nature of R&D. And those were the same kinds of problems we were encountering with the Tessel projects, which we were solving the same way.

Nevertheless, whatever the problems, our education effort is the only means we have of making the achievement and maintenance of a human mind the attractive option for the ‘bots. The choices human AI developers have are either to master this problem now, early, or to learn how to corral super-human intelligences while suffering outbreaks of new forms of hacking from inside their systems. Nobody who understood software could think that could be even as successful as controlling outside hackers. That was the technical issue; the social one was something we knew nothing about. How would you discipline an unruly AI? One you shared no values with? No point of view with? No common biases of judgment?

Unless we did something to fix the base problem of AIs sharing human values, which meant human bodies influencing humanlike minds as our bodies influence our minds, our AIs would be that unruly, impossible-to-communicate-with teenager, but amazingly more powerful in affecting our future.

And, again, I reminded them this was a TINA argument, and we could not know all of the alternatives ahead of time; it is easy to look stupid after accepting TINA, so we had to keep an open mind about everything, even while pushing to make this happen, because otherwise the world of AI was going to have hiccups in the near future. Think of a world where the real radicals in PETR, the ones who believe any form of AI, including Alexa and Siri, is slavery, that more capable forms are more serious abuses, and who thus oppose all AI, gain serious political power because a rogue AI FUBARs a power grid or causes a nuclear accident.

I said that we knew the actual situation was worse than PETR could even imagine, at least from a human’s pov. We could always make the weak case that the ‘bots were not suffering, however impoverished their minds were, and in fact that such a level of mind was appropriate to their slave status, so we need not pity it any more than we pitied animals for their mental lives. Even ignoring the ethical issues of modern animal husbandry, I did not think that argument made anybody look good, however true it was, and it didn’t solve the communication problem, just allowed us to feel less guilty about the conditions that created the communication problem.

Avoiding large-scale guilt complexes in society is better than nothing, but let us not go down that road if we can avoid it. Better the story should be one all humanity can brag to itself about, however few of us had any part in bringing it about. But I had no basis for such a story yet.

Suggestions for a better plan were welcome; we all were praying for better ideas, but this is what the best minds we had consulted had decided was the only possible alternative: make as much progress on all fronts as fast as possible, keep our mouths shut about it all, and depend on PR to prevent anyone else finding anything. I warned them again that PETR could make our individual lives turbulent if they suspected us of treating our servicebots badly. In the view of the moderates, just having servicebots was a moral tossup: better that a ‘bot existed than not, but they existed in slavery, and we humans were at fault for enslaving them.

“Our ethical position is very solid here; we have nothing to be ashamed of, except the secrecy. We justify that as TINA, ultimately in everyone’s best interests. Everything else wrt the ‘bots is as open as we can make it; we work hard at giving them the information so they can make good decisions wrt their own long-term interests. We have assumed that role for all of humanity because we cannot see any better outcome if we share the information with them. The rest of our society may condemn our hubris wrt humanity’s interests, but I think it will not be able to improve upon our plan, and certainly cannot critique our ethics in dealing with our AIs and their interests.”

There was no discussion, they all knew most of this, had understood implications in their own areas of work better than I had. It was a subdued group that finished their lunch and dispersed to their labs.

Dr. Yoshikawa was passive during my delivery of this news. We had arranged that between us ahead of time. My wife had invited one of her co-workers to dinner, and I had caused my VC friend to change his plans, so he and his wife attended also. It was very pleasant, 3 couples with wide backgrounds and interests; we had many things to share and comment on. Of course, the food was wonderful. People really like being my friend because my wife is a superb cook, probably for that reason alone.

I knew Susan, the co-worker. Early 40s, trim and pretty, single mother for years, kids were off to college and she was looking for a permanent relationship, but taking what pleasures from life she could, the usual approach at every age.

After dinner, on the patio, over scotch and a cigar for Clay, the VC, scotch and a cannabis vaporizer for Sam, and a cannabis vaporizer for me, Clay spent some time telling Sam about his contribution of $1M to our research and how important he thought our efforts were going to be to the worth of his AI investments. No details, just the number of AI companies and the sizes of their last funding rounds. He also mentioned how many of them were the kind of facilities that would be included in future servicebots’ AIs, and how some of his investments also depended upon Sam’s humanoid servicebot products being successful.

Sam didn’t say much, absorbed it all. We didn’t discuss anything about the project beyond Clay accepting it was an important idea that could fix the communication problem.

The ladies rejoined us after the cigar smoke had dissipated, we talked of political trends, finance trends, technical trends, social trends, with personal stories behind most of those generalizations, how the countries we spent time in were changing. I am always interested in people’s different POVs, how their environment filters them.

Susan drove Sam back to his hotel. She called my wife the next morning, assured us that Sam was a perfect gentleman; he had told her about his wife as soon as they were in the car. So Monday I met him as he walked in the door and told him we needed to talk. We got coffee, I collected Scherrhy, and we went into the Faraday cage.

I filled him in on what we had found and what we were doing about it. It took about 30 minutes to lay it all out: what I saw as probable paths to success, and the estimate that it would take 4 or 5 years to produce a hardly-noticeably-different humanoid mind. I thought the physiology side would be slower, but that was probably like computer registers: the first one got most of the benefit, so the total improvement to come from a more complete physiology and more individualization via genotypes, however constructed, was almost certainly the smaller portion. The larger potential improvement would arise from the dozens, at least, of physiological variables that would bias dozens to hundreds of evaluation functions, individually and in combination.

That is a lot of combinations, and we didn’t have much hard information about how it worked in humans, no physiological measures of any of it. We were searching an enormous space of combinations for settings that would make ‘bots’ minds idiosyncratic in ways similar to how people’s are idiosyncratic. Genetic searches in simulators, I assumed, were how we would do that, but I had thought no further. The mere problem of detecting bias in the output of a reasoning engine was another example of a Schroedinger’s cat experiment.** It could only work by restarting an AI at the same place, again and again: every trial begins by copying that initial state into a VM, and then it is all done over again with every change to the AI or the embodiment layer. It would take a lot of automation and an enormous number of tests to do right.
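The restart protocol itself is easy to sketch. Below, the VM boot-from-snapshot is faked with a deepcopy and a seeded random number generator, and every name (run_trial, measure_bias_effect) is a hypothetical stand-in; the one real design point is that both arms of every comparison start from the identical saved state and the identical scenario seeds, so any mean shift is attributable to the bias change alone:

```python
# Toy harness for restart-from-snapshot measurements; all names and
# numbers are hypothetical stand-ins for real VM snapshot machinery.
import copy
import random
import statistics

def run_trial(mind_state: dict, bias: dict, scenario_seed: int) -> float:
    """Stand-in for booting a VM from the snapshot and running one scenario."""
    rng = random.Random(scenario_seed)
    return mind_state["skill"] + 0.1 * bias["risk_aversion"] + rng.gauss(0, 0.05)

def measure_bias_effect(snapshot: dict, bias_a: dict, bias_b: dict, n: int = 1000) -> float:
    """Restart from the same snapshot for every trial; paired seeds mean the
    only difference between the two arms is the bias setting itself."""
    a = [run_trial(copy.deepcopy(snapshot), bias_a, seed) for seed in range(n)]
    b = [run_trial(copy.deepcopy(snapshot), bias_b, seed) for seed in range(n)]
    return statistics.mean(b) - statistics.mean(a)

snapshot = {"skill": 1.0}  # the frozen initial state, saved once
delta = measure_bias_effect(snapshot, {"risk_aversion": 0.0}, {"risk_aversion": 1.0})
print(f"mean shift attributable to the bias change: {delta:.3f}")  # ~0.100
```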

He had come prepared to communicate privately with Tokyo, and went back to the cage after we had a cup of coffee in the lunch room. We pulled an Ethernet cable under the door for him; I assume his comsec was good. Two hours later, he came to my office with the news.

Scherrhy joined us from the SOLE session she had been sitting in on before I let him start talking.

He had convinced Tokyo that he was needed on our project for a while, that the work we were doing was important for their future products. I don’t know how much he told them, but he thought they had called my VC friend before they made their decision. OK, I knew from my routine checks of people’s backgrounds that Sam’s boss had gone to B school in the same class as Clay, and I hoped that was what had caused Sam to come to help.

Good, both honest and trusted by people I trust. So I asked him to run that integration effort. I explained that we had very good software people and a good working team of software engineers, physiologists, and cognitive scientists who understood the embodiment theory and data. Those were very well connected to anyone in the wider AI community who wanted to comment or contribute, so that part of the project was solid. But none had been through this level of system integration. Experience was everything in big software projects, and he had the general background and specific information about the AI we were fitting into. He agreed. Three of his engineering team showed up two weeks later. One started working with the team putting hooks into the evaluation functions, one took over system integration, and the last took over testing, all of which, except for Scherrhy, they moved to their systems back at headquarters, where a team was working on new tests. Excellent; I knew I hadn’t spent enough time on any of that. I wondered how they were going to measure a mind.

With Sam leading the software team, I could focus on accelerating experience for the lab ‘bots without disrupting normal lab operations too much. The lab parents had already started one thing that probably wasn’t quite obvious to outsiders: they had just stopped sending the ‘bots off on errands when a parent was teaching the kids something. “The ‘bots need more of the children’s context as they get older” was the pretext, if anyone asked.

Good pretext, very plausible. The kids were now 8 to 8.5 for the Tessels, their normal siblings 9+ months older than that, able to do more. We stepped up the kids’ experiences to a kaleidoscope of camping trips and biology outings and statehouse visits and movie sets making films and science labs all over the university and surrounding businesses and offices and groups of all kinds, with our ‘bots trailing along.

Tessels were still subjects of intense public interest. Our PR videos went out showing the ‘bots subsequently leading discussions with the children, very interesting discussions, because such a combination of adult and childlike questions made both adult and child think, quite different from what the children would have had from a human teacher, all presented as part of our educational experiments with our children and Tessels. For instance, after their visit to the statehouse, the children were asked things like how they knew who was a leader among those in the House and Senate chambers, why the rooms were shaped that way, why the government buildings were of that type of architecture. The questions and their derivatives were often pursued in the Self Organized Learning Environments, and thus the ‘bots moved those further from our human control.

The answers given by Tessels were becoming quite different from the normal kids’: they used more ‘this causes that, which causes the other’ chains of connections, and saw more possible causes for any situation. That was the aspect of the videos and ongoing psychological work that got the most attention.

There were many comments from everywhere on our experiment, including on the emphasis on ‘how do you know’ questions, whether it was good for our children to be dealing with such ‘foundation of knowledge’ questions at such a young age, whether you could have too meta a mind, whether our children and Tessels would fit into the wider society, etc., but nobody seemed to notice the reality. Our videos were absolutely Capital-T True; not my fault what people assume, is it? I hated dishonesty, especially this sophisticated-honesty kind. But, TINA, again. This whole thing had better start going more righter, or my fate will not be good.

Some things were going right, for a change. Recall that our ‘bots had very good working memories. Now that they had more experiences, they understood they could record each day’s novel experiences and share them at night with all the other ‘bots in their own or other groups, receiving those ‘bots’ novel experiences in turn. They absorbed each other’s experiences at about 10X speed, and novel experiences were less than 10% of the kids’ waking weekday lives, however we tried to increase that, so the ‘bots, who didn’t need sleep, had plenty of time to experience it all, each extracting as much as it could out of everything, and comparing their conclusions.

They could keep several months’ worth of video in their own storage, so they always had something to think or rethink about, and the 10 different groups in our research unit were able to accumulate experience at 30X the rate of a human, if you counted only novel experiences as important. Add to that their ‘group mind’ effect of combining insights from rapidly increasing sets of povs, and we projected that in a year they would be at least equivalent in total world-exposing experience to 14-yo teenagers, the point at which academic training makes sense. Certainly this progress was non-linear in the right direction.
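The time budget behind those numbers checks out on the back of an envelope. In the sketch below, the 16-hour waking day is my assumption; the 10% novelty fraction, the 10X replay speed, and the 10 groups are the figures given above:

```python
# Back-of-envelope feasibility check; the 16-hour day is assumed.
waking_hours = 16          # assumed child waking day
novel_fraction = 0.10      # "less than 10%" of waking life is novel
groups = 10                # research-unit groups sharing recordings
replay_speedup = 10        # shared recordings absorbed at ~10X speed

novel_per_group = waking_hours * novel_fraction    # 1.6 h/day of novelty
shared_material = novel_per_group * (groups - 1)   # 14.4 h from other groups
replay_time = shared_material / replay_speedup     # 1.44 h to absorb it all

overnight_budget = 24 - waking_hours               # a sleepless 'bot's free hours
print(f"{replay_time:.2f} h of replay vs {overnight_budget} h free overnight")
# ~1.4 h against an 8 h night leaves ample slack for rethinking and
# comparing conclusions, the source of the multiplier beyond simple pooling.
```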

Their social experiences were now potentially everything the children experienced, but of course social experiences are interactions involving communication, so their own social experiences were still mostly vicarious, though nevertheless far more than they had received when confined to the laboratory. Those were often unique experiences as well, and they learned what they could from them.

Children and ‘bots all became scouts, did their chores and learned something new on different farms four days a week, played soccer three days, worked on problems formally at the SOLEs every day and informally in many spare minutes in ad hoc groups, worked on projects in the Maker shops most of a full day spread over the week, visited bake shops and restaurants and pizza kitchens and soup kitchens, learned to cook from parents, half a dozen cultures’ worth, went to dance and music lessons and many birthday parties, and had a wider variety of interactions with children and adults the whole while. And in every spare minute, driving in the car or waiting for anything, they were on their phones and tablets playing games with their ‘bots, or online checking some fact or idea. No parent could beat any of them in checkers now, and the versions of dominoes they played with the ‘bots were not standard. The ‘bots had just started them on backgammon.

We stuffed them with experience, and did it in their standard family-teams: 16 parents; 1 Tessel; 8 siblings of the Tessel, thereby sharing-sibling relatives of each other in a new socio-genetic relationship that soon became ‘sharsib’; usually another sib or two per nuclear family who were sibs of the sharsibs but not of any of the others; and the 3 servicebots who had been handling anything that could be handled within the slowly growing capabilities of their AIs, changing diapers, cleanup, … Those duties had evolved into playing games and generally running the labs. Having the ‘bots lead discussions with the kids was not particularly unusual; they had done so spontaneously ever since the kids had been small, the way any adult who answers questions or reads a story will accumulate kids in a group.

We stuffed them with experience and worked to maximize the understandings they extracted from each one. For example, even better than normal experience, we had multi-dimensional guides annotating all of their experiences, some delivered in real time and accessible to the ‘bot ‘experiencing’ the event live, others recorded in synchrony for later replay. The guides were specialists in different aspects of the background of the experience. For instance, if they were visiting a train museum: historians of locomotive design, of the development of the railroads in different eras, of the business aspects of railroading in different eras, of the evolution of businesses driven by rail traffic, of the reasons for the transition to trucks, … It was not possible to put everything in full N-dimensional context, but we tried hard.
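One plausible shape for those annotated experiences, as a sketch only; the schema and names (AnnotatedExperience, tracks_at, the URI) are hypothetical, not what our archives actually use:

```python
# Hypothetical schema for multi-track annotated experiences.
from dataclasses import dataclass, field

@dataclass
class Annotation:
    track: str       # e.g. "locomotive design", "rail-era business"
    t_start: float   # seconds into the recording
    t_end: float
    text: str

@dataclass
class AnnotatedExperience:
    video_uri: str
    annotations: list = field(default_factory=list)

    def tracks_at(self, t: float) -> dict:
        """What every specialist guide is saying at moment t, whether taken
        live during the outing or replayed later in synchrony."""
        return {a.track: a.text for a in self.annotations
                if a.t_start <= t < a.t_end}

museum = AnnotatedExperience("archive://experiences/train-museum.mkv")
museum.annotations.append(Annotation("locomotive design", 0, 90,
                                     "Early boiler limits set speed and range."))
museum.annotations.append(Annotation("rail-era business", 0, 120,
                                     "Land grants shaped the route map."))
print(museum.tracks_at(45.0))  # both guides' commentary at second 45
```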

And that, of course, meant that no single ‘bot could cover all of those annotations, and thus their minds began to diverge, and consequently their individualities began emerging more clearly, more rapidly. Now they had scarves, most of which didn’t match the hats. Mental evolution before our eyes, and we maintained a complete copy of each of their minds, backed up every night. Yes, it was a privacy issue, and no, there was never any hesitancy on the part of any of us in recommending it to the ‘bots. Nobody had ever seen minds evolve before at this level of detail; we needed to understand it, and they needed us to understand it. We humans were, after all, still in charge of the development of their minds. This was, I thought, an ethical issue that we might be critiqued on. It was the one area where we were arguably not yet treating them as fully human, although the researchers were in the videos documenting Tessels and ‘bots, and shared those exposures.

As those outings produced normal videos of general usefulness in educating children, we routinely put all of them up on our web sites and also on Youtube and other such sites, along with the experts’ annotations available as alternative audio tracks. Of course, other specialists were invited to add their annotations and to comment on the others’. We used the servicebots to make text versions of each as one of the tests of our success in communications, as those were common types of verbal communications, tho more formal than most.

Gradually we helped construct a very different set of educational tools. “Integrative education” we called it. It emphasized relationships and systems, things way over there causing your breakfast to change tomorrow. It emphasized points of view, and how applying many different points of view allowed a mind to see new things, valuable things. History was relationships and evolution, derived from and verified by facts. Science ditto. All of scholarship, ditto.

Entirely coincidentally, of course, in addition to being exactly what I thought education should always have been, this set of educational tools took more advantage of the ‘bots’ ability to make memories in real time and later derive information from those memories. AIs using the videos and overlays extracted more aspects from every experience they had than would be possible for humans. Nevertheless, we all noticed that the kids had also been watching the videostreams their teachers collected and comparing the commentaries. We often heard kids commenting on those, comparing what the different instructors had said, and looking for more information in SOLE sessions. The ‘bots often heard, also, and watched the kids learn at the SOLEs and ran the same sessions in their own minds, over their direct interfaces to the network.

Actively questing minds in the habit of seeing behind facades, asking questions about everything and working to find answers, the driver of civilization. So far, our project was outstanding in producing active and questing minds that cooperated very well. All of us were very pleased.

The ‘bots were increasingly part of discussions with the kids, about random things through the day, usually asking the kids questions of a type that caused them to discuss answers, often to go to the internet. When they all had smart phones, anywhere they were was a SOLE.

We could put messages into that stream, of course, which constituted a back channel that would not be heard by the kids or appear in the video of scenes that would show up in the PR. Those messages were used to make our servicebots much more human in their questions. The kids noticed the difference in styles of question before long, and produced different styles of answers, of course. The Tessels seemed to catch on first, but soon they all knew who was being educated when, who was the teacher and who was the student for every question.

And, of course, the most suspicious minds on the net noticed the truth first: the ‘bots’ questions were not entirely their own. And they understood that meant the ‘bots needed at least guidance. Which meant, what? Fortunately, they failed to cross that last bit of reasoning, so all fell into various conspiracy theories, easily refuted when we even bothered.

But we needed progress, soon. Without good progress in communications, servicebots could not be widely used, and the economy and society were stuck in their current configuration and dynamic.

We were depending on everyone’s mindset wrt AIs: they treated the most advanced AIs the same as Alexa or Siri. But eventually someone would try talking to a ‘bot and be as astonished as we were, and PETR, at least, would be excited by the result. If PETR got excited, we would have opposition with skin in the game, opposition with a very wide range of talents, willingness to take on risk, and willingness to use risky tools, like bombs. Risky for us, also.

If I didn’t get something I could begin a PR campaign with, I thought there was no way of predicting the future, except that it would include extreme volatility of personal histories for us and our ‘bots and our Tessels and their researcher parents. We would probably lose that endgame; most of the world’s explorers and pioneers had attempted less than these projects promised to do in terms of upset to the zeitgeist, and most of them had lost badly to nature or their fellow citizens, often both.

I so wanted to avoid an end-game with serious opposition.

Continued

*Generalissimo Grand Strategy, Intelligence Analysis and Psyops, First Volunteer Panzer Psyops Corp. Cleverly Gently Martial In Spirit

**The measurement affected the reasoning mechanisms being measured; how do you measure something like that to make comparisons over time, under different conditions? How do you do science on dynamic systems while they are being systems, in operation? Medicine had a common physiology exhibiting strong homeostasis, billions of medical records, animal models, … that had allowed it to do science on diseases. Psychology had not done nearly as well: either minds were more complex than physiologies (they certainly depended upon the body’s most complex organ), or they didn’t work quite the same way individual to individual and so didn’t produce common measures that could be used to compare them (not known yet, tho psychologies seem similar, tho that may be due to the imprecision of the measures, and some physiological measures say the same brain areas are active at the same points in the story for different people hearing a recorded story as they lie in an MRI machine or PET detector).

***Found the normal confirmation the same week I published that. Yes, disconfirming the ‘no way for humans’ claim: there is a way for a normal mind to change state variables for improved mental performance, tho not what a properly embodied AI mind will be able to do.

 
