Musings on 5th Generation Warfare, #14

Continued from here.

So, who is it that we blame for mankind’s debacles? Surely, we were tricked by the leaders – the politicians, central bankers, leaders of major industries, etc. Or was it the media that did such a sterling job of packaging up the propaganda that we were unable to see the forest for the trees?

It will matter little, because nothing will be learned and we shall begin the game anew. But if it’s a genuine solution we’re after, yes, that is possible. But that solution depends upon whether we’re prepared to cease to allow the media to provide our reasoning for us. We must be prepared to study our leaders’ actions, to be prepared to be contrarian and, most importantly, to question everything. If not, we ourselves are amongst the blind and the clueless and we can expect an endless cycle of the same dog and pony show.

Jeff Thomas

How do you hedge it?

That seemed to me* to be the problem for civilization: we didn’t have systematic and effective hedging against bad judgment or bad luck at any level, not even financial. If traders don’t hedge, they go broke.  If any entity does not maintain the key inventories needed to sustain it, whether money, medicines, food, water or materials and parts, it is susceptible to supply-chain disruption.  For example, if money cannot move, goods cannot move, so everything is vulnerable to supply-chain disruptions. That must be hedged.

Part of hedging is maintaining your inventory of knowledge and judgment, and every element of society’s resilience, its ability to cope with disruption. The brain has as many feedback loops as feedforward paths.  Human organizations need to do that, but systematically do not.  Another thing to get into my design for the self-assembling units I needed, and which Tessels would make very effective.

My journal and notes of the time record, though so obscured that nobody but me could interpret them, that this was the point at which I nearly made a fatal mistake. This is the first discussion of that, disclosing the source of that mistake. Twenty-five years later, the issue is moot, science and medical tech having passed my understandings of the time. The breakthrough I anticipated is now very close.

To participate in a revolution is to have adopted a very poor strategy, to have failed to avoid a negative-sum situation.  The problem with negative-sum games is that you can lose. However great the fortune a revolution produces for you, the risk to your ultimate value, your life, could not have been worth it.

Revolutions are for losers, people with so little to lose, so few alternatives, that being in the front lines is preferable to their other possible futures. By all accounts, exciting but dangerous. People who write the memoirs are the ones who lived, a rather prominent selection bias not often noted.

Even if you are forced into a revolution, there are far better alternatives than the standard scripts, I thought.

In this instance, I nearly invented immortality.  Barely caught myself in time, the excitement of the idea was overwhelming. No, there were too many skins in that game, every part of our civilization would have reason to oppose it, at least immortality for others. The technology, once the possibilities are grasped, seemed pretty straightforward.  Not an obvious extension of either Tessels or Placental Technology, but pretty inevitable, once a mind adopted a particular point of view on the problem and had our latest technology in mind.  But it didn’t take much thinking of consequences to realize that the times were not right.

Immortality now, before we had society reorganized on a basis that was un-seizable, would be seized and used to lock down our Status Quo’s rule.  For that power, they would kill whoever needed killing.  We were holding them off on Tessels because we had gotten that technology developed and into the open before they noticed, but our inside information was that every intelligence service was now watching us carefully.  I wasn’t sure we could keep them away from immortality, even if I put the ideas into the open. Things happen to people, and facts disappear.

I thought I would hold onto the idea for a while, then implement a version of the “It’s Dead, George” tactic, let some other young inventor find it, get the Nobel and subsequently be destroyed.  We always need lab techs, if the person survives to be physically and mentally rehabilitated.***

Meanwhile, I would divert our research to other directions that I thought would avoid anyone else developing my peculiar povs until I thought humanity could handle the power. Our use of lesser powers certainly doesn’t make me optimistic. **

I kept coming back to the problem of how Scherrhy and the other servicebots in the lab had started playing games with our kids, why the kids didn’t have problems understanding servicebots while adults were continuously frustrated, and HOW DID THEY HAVE INITIATIVE, GRASP LEVELS OF PLAY?  My hairfire started up again every time I contemplated the situation.

A minor point, although my notes obsess on it, but, really, ‘Eliza’ would fit with a PR campaign much better. Pygmalion is a solid concept in western culture, for the people we had to convince, who might find skin in the game and become opposition.  Who named her ‘Scherrhy’, from ‘Scheherazade’? Names from a Disney film? That was going to be a problem with the frame: what can you do starting with a Disney movie about a storyteller and a genie in a Pygmalion story line? How can we get that reversed? Another niggling detail to attend to, but it could wait, there was no urgency.

In historical retrospect, that was another example of the perfectly obvious in front of our eyes, and nobody saw. Another wrong question, as usual. My editor wants me to put more personal information into this otherwise austere recounting of history. It hurts to be so wrong! Be impressed that this isn’t like the other autobiographies of famous people; I am OKBOB honest in my own mind also.

Churchill, Clive, all of the men of the Raj who wrote them, use their writings to defend themselves. Churchill ‘saved civilization’ in his own mind. The reality was that he was the individual who maintained the naval blockade of Germany that starved 100K people to death and finally forced the Versailles Treaty on that beaten nation, thus ensuring the Hitler he had to save civilization from. I don’t recall his explanation for being thrown from office ASAP after the fighting stopped, but an honest man would have admitted that he should be hanged as a war criminal. I may often be stupidly wrong, people are, but never criminal. The tools make the man, it seems to me. We allow our leaders to kill for power, and wonder why our world is such a mess.

Looking at the problem of the servicebots’ unexpected initiatives and what appeared to be developing individual identities: when we finally understood how this all came about, it was a side-effect of debugger hardware on data buses in the AI’s interface to our servicebot’s equivalent of a ‘midbrain’ from the ‘thalamus’, and between midbrain and ‘hindbrain’.  The first is part of the connections of the cortex with itself and every other part of the nervous system, the second the integration of the visceral core with higher centers. Both are part of the lower-level integrations of sensory and motor information up and down the levels.

Well, there are no side-effects, contrary to medicine’s PR; there are effects, and your pov and its values are the filter that decides which is the desired effect and which the undesired side-effect.  In complex systems, you rarely get to choose effects, tho PR can choose which to emphasize. And often does.

In this case, the effect was a result of the overall system architecture that was a servicebot and its Artificial Intelligence, of the limitations of that architecture, and of the hardware necessary to develop and maintain the system. It was easy to see the locus of the effect, once we had noticed it, because servicebots had only two components which could diverge from other servicebots and persist past software updates.

After the world had understood the use of different heuristics to check each other, in examples like Netflix and other attempts to combine many different measures of past events to predict future events, came Watson and other decision systems that used many different kinds of analyses and Google searches, combined into an ‘ensemble’ to produce textual answers to textual questions. That kind of system was the basis of our servicebot’s AI.
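
For the curious, a toy version of the ensemble idea in C. The scorers, weights, question, evidence passage and candidate answers are all invented for illustration; a Watson-class system runs hundreds of trained scorers over retrieved evidence, not two toy heuristics.

```c
/* A minimal sketch of an 'ensemble': independent heuristics each score the
 * candidate answers, and a weighted combination picks the winner.  Everything
 * here is invented for illustration. */
#include <stdio.h>
#include <string.h>

#define N_CANDIDATES 3
#define N_SCORERS    2

typedef double (*scorer_fn)(const char *question, const char *candidate);

/* Heuristic 1: fraction of the candidate's letters that appear in the question. */
static double letter_overlap(const char *q, const char *c)
{
    size_t hits = 0, len = strlen(c);
    for (size_t i = 0; i < len; i++)
        if (strchr(q, c[i])) hits++;
    return len ? (double)hits / (double)len : 0.0;
}

/* Heuristic 2: does a retrieved evidence passage mention the candidate?
 * Stands in for search-based evidence scoring. */
static const char *passage =
    "Paris is the capital and most populous city of France.";

static double evidence_mention(const char *q, const char *c)
{
    (void)q;
    return strstr(passage, c) ? 1.0 : 0.0;
}

int main(void)
{
    const char *question = "Which city on the Seine is the capital of France?";
    const char *candidates[N_CANDIDATES] = { "Paris", "Lyon", "Marseille" };

    scorer_fn scorers[N_SCORERS] = { letter_overlap, evidence_mention };
    double    weights[N_SCORERS] = { 0.7, 0.3 };   /* trained offline in a real system */

    int best = 0;
    double best_score = -1.0;
    for (int c = 0; c < N_CANDIDATES; c++) {
        double combined = 0.0;
        for (int s = 0; s < N_SCORERS; s++)
            combined += weights[s] * scorers[s](question, candidates[c]);
        printf("%-10s scores %.3f\n", candidates[c], combined);
        if (combined > best_score) { best_score = combined; best = c; }
    }
    printf("ensemble answer: %s\n", candidates[best]);
    return 0;
}
```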

How did it work, you ask?  Well, go find descriptions of Watson’s technology; there are many writeups better than I can do here.

For the servicebot’s workings, the short answer is, ‘lots and lots and lots of software’.  Any longer answer is the same as any real-time system : it must set priorities for critical tasks, e.g. move its trailing leg forward a calculated distance to a solid place to prevent falling on its face, while doing many other things, e.g. picking up diapers and making soothing noises as it goes toward the crying baby. It may well be running a set of background tasks that explore aspects of the task it has been asked to do before the baby preempted that, and also what the other person who asked it to get lunch has liked in the past, what should be avoided today. Those are all running on other servers at priorities needed to finish the task before the servicebot needs the results.
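
A toy dispatcher makes the priority idea concrete. The task names and priority numbers are invented for illustration; a real servicebot runs under an RTOS scheduler, not a hand-rolled loop like this.

```c
/* Priority-driven task selection in miniature: always run the highest-priority
 * ready task first, so the hard-deadline work (don't fall over) preempts the
 * background work (plan lunch). */
#include <stdio.h>

typedef struct {
    const char *name;
    int priority;     /* higher number = more urgent */
    int ready;        /* 1 if the task has work pending */
} task_t;

int main(void)
{
    task_t tasks[] = {
        { "place trailing leg",         90, 1 },  /* hard deadline */
        { "pick up diapers",            40, 1 },
        { "make soothing noises",       30, 1 },
        { "plan lunch from past menus", 10, 1 },  /* background, runs when idle */
    };
    int n = sizeof tasks / sizeof tasks[0];

    /* One pass of the dispatcher: pick the highest-priority ready task each time. */
    for (int pass = 0; pass < n; pass++) {
        int best = -1;
        for (int i = 0; i < n; i++)
            if (tasks[i].ready && (best < 0 || tasks[i].priority > tasks[best].priority))
                best = i;
        if (best < 0) break;
        printf("running: %s (priority %d)\n", tasks[best].name, tasks[best].priority);
        tasks[best].ready = 0;   /* pretend the task completed its slice */
    }
    return 0;
}
```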

A ‘task’ or ‘process’ is an executable item of code that can be scheduled.  The browser you are using to read this consists of a task for each tab, if it is Chromium.  (Real-Time and OS engineers distinguish task and process, a real and important distinction, both needed in this application, but not needed for this discussion.) Chromium shares the code executing in each tab, but isolates the process, its state and the data making up the display in the tab. It depends on the native OS to schedule timeslices of the processor among the tabs.

If Firefox, the software architecture is different: one instantiation of code time-sliced between different tabs’ data. This makes the browser more like an OS in organization, responsible for scheduling the total process’s cycles among the tabs. Sets of servers sharing tasks with RT peripherals such as a servicebot use and need these organizations and more.

Tasks can share common libraries. Tasks share processors serially by ‘time slicing’ the processor’s attention. CPU chips generally have multiple processors sharing one physical package. Individual tasks can be assigned to any of them, but RT ‘scheduling’ tries to keep them in one CPU to maximize use of the memory cache, both large and complex technical topics with large implications for a system’s performance.
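
Keeping a task on one CPU to preserve its cache is something you can sketch in a few lines on Linux. This assumes the GNU pthread_setaffinity_np extension and an arbitrary choice of CPU 1; compile with -pthread.

```c
/* Pin a control task to one CPU so its working set stays in that CPU's cache.
 * Linux-specific sketch; the choice of CPU 1 is arbitrary for illustration. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *control_loop(void *arg)
{
    (void)arg;
    printf("control loop running on CPU %d\n", sched_getcpu());
    return NULL;
}

int main(void)
{
    pthread_t t;
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(1, &set);                 /* keep this task on CPU 1 */

    pthread_create(&t, NULL, control_loop, NULL);
    if (pthread_setaffinity_np(t, sizeof set, &set) != 0)
        fprintf(stderr, "pthread_setaffinity_np failed\n");
    pthread_join(t, NULL);
    return 0;
}
```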

The various tasks are distributed through the set of processors with tasks devoted to particular functions : the lowest levels handle joints and limbs, higher levels coordinate walking and other movements, levels above those sequence actions, and the highest decide on goals and break them into units of movement.  Every level may have many associated tasks, each of which needs information from other levels. That requires each task to keep many others informed of its decisions, pending actions, current actions, and divergences of the system’s actual state from an intended, desired state.

Sensory information travels periphery to central on dedicated or shared data buses.  If shared, the various processors read the information they need to do their job as it passes.  All levels check results against goals and provide information to other units from that, usually on the same shared data bus, as messages are messages, different tags sort them at the hardware level.
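
A sketch of tagged messages on a shared bus, with the tags, message layout and the two consuming units invented for illustration; in the real system the sorting happens at the hardware level.

```c
/* Every unit sees every message on the shared bus; a tag field decides
 * which unit consumes it. */
#include <stdio.h>
#include <stdint.h>

enum tag { TAG_JOINT_ANGLE = 1, TAG_BALANCE = 2, TAG_GOAL = 3 };

typedef struct {
    uint16_t tag;      /* sorted on in hardware in the real system */
    uint16_t source;   /* sending unit id */
    int32_t  payload;  /* sensor reading or command value */
} bus_msg_t;

static void balance_unit(const bus_msg_t *m)
{
    if (m->tag == TAG_JOINT_ANGLE || m->tag == TAG_BALANCE)
        printf("balance unit consumed tag %u, payload %d\n", m->tag, m->payload);
}

static void planner_unit(const bus_msg_t *m)
{
    if (m->tag == TAG_GOAL)
        printf("planner consumed tag %u, payload %d\n", m->tag, m->payload);
}

int main(void)
{
    bus_msg_t traffic[] = {
        { TAG_JOINT_ANGLE, 7, 1250 },
        { TAG_GOAL,        1,    4 },
        { TAG_BALANCE,     9,  -32 },
    };
    /* Every unit watches the whole stream and takes what it needs. */
    for (size_t i = 0; i < sizeof traffic / sizeof traffic[0]; i++) {
        balance_unit(&traffic[i]);
        planner_unit(&traffic[i]);
    }
    return 0;
}
```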

Very complex; the detail is far beyond the grasp of any individual. As a system architect, I always said my job was to be vague on more details than anyone on the team. Someone, nonetheless, needs to have the total system in mind and keep checking components’ assumptions in their use of the system’s various shared elements against each other. If two different tasks could run on two different CPUs, and the scheduler would place them there to avoid contention for cycles under high load, but each required 75% of the memory bus bandwidth to do its function, that was a problem the architect could detect and remove at the design stage.  That is the cheapest answer, every time.  The longer such design flaws go before being found, the higher the cost in wasted engineering hours. Engineering hours are the critical resource on big projects like this; they can substitute for any other resource.
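
The architect’s check is mostly arithmetic done early. A sketch, with invented task names and percentages:

```c
/* Design-stage budget check: sum each task's worst-case share of a shared
 * resource and flag any overcommitment before a line of code is written. */
#include <stdio.h>

typedef struct { const char *task; double mem_bus_pct; } budget_t;

int main(void)
{
    budget_t budgets[] = {
        { "vision preprocessing", 75.0 },
        { "gait control",         75.0 },
        { "housekeeping",          5.0 },
    };
    double total = 0.0;
    for (size_t i = 0; i < sizeof budgets / sizeof budgets[0]; i++)
        total += budgets[i].mem_bus_pct;

    printf("worst-case memory bus demand: %.0f%%\n", total);
    if (total > 100.0)
        printf("over budget: fix it at design time, it only gets more expensive later\n");
    return 0;
}
```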

The project technology is complex, thus, the skills of the individuals are necessarily diverse, the number of things to be tracked multitudinous and interdependent, the project as a management entity itself consequently also very complex.

Hundreds of programmers develop modules and carefully unit-test each function, then write a comprehensive test program to exercise each library (compiled and partially linked files, with the public class names, methods and method specifications). Each test is more work than the code it exercises, and the testing is done again at each level of integration as components are combined into subsubsystems, subsubsystems into subsystems, etc.
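
A unit test in miniature, the function under test and its limits invented for illustration; real test programs run thousands of such checks per library.

```c
/* Exercise one "library" function against cases worked out by hand. */
#include <assert.h>
#include <stdio.h>

/* The function under test: clamp a commanded joint angle to safe limits. */
static double clamp_joint_angle(double degrees, double lo, double hi)
{
    if (degrees < lo) return lo;
    if (degrees > hi) return hi;
    return degrees;
}

int main(void)
{
    assert(clamp_joint_angle( 45.0, -90.0, 90.0) ==  45.0);  /* in range: unchanged */
    assert(clamp_joint_angle(120.0, -90.0, 90.0) ==  90.0);  /* above: clipped */
    assert(clamp_joint_angle(-95.0, -90.0, 90.0) == -90.0);  /* below: clipped */
    printf("clamp_joint_angle: all cases passed\n");
    return 0;
}
```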

There may be dozens of independent ‘stacks’ of libraries that make up a large program like Watson. They must be tested again for ‘regressions’ in function and under different test loads every time the system is modified in ANY way, otherwise you are guaranteed to have bugs.  Even so, it isn’t possible to exhaustively or comprehensively test a large program; there are too many different code paths through it, and the bugs are in the paths.  Static analysis, which follows individual data items and messages through the code looking for inconsistencies between them, improves every year and is a big help, but it also has a high false-positive rate in identifying possible bugs. Both false and real bugs require someone to look at the code and decide which is which, and then fix the real ones.  TANSTAAFL.

Real-Time code, including your browser’s code that deals with your typing and mousing, uses an ‘event loop’ at its top : sequentially check the mouse, check the keyboard, check messages arriving from the network, check the state of the display, repeat the loop so long as the browser process is running.  Each item requires decisions in the full context of the browser and its display.  Each decision requires a sequence of actions, calls to functions that do one of the many, many actions that make the browser useful.
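
The skeleton of such a loop, with the poll functions stubbed out for illustration:

```c
/* The shape of a top-level event loop: poll each event source in turn,
 * handle what is pending, repeat until the process exits. */
#include <stdio.h>
#include <stdbool.h>

static bool poll_mouse(void)    { return false; }   /* stubs: a real system  */
static bool poll_keyboard(void) { return false; }   /* reads device queues   */
static bool poll_network(void)  { return false; }
static bool poll_display(void)  { return true;  }

static void handle(const char *what) { printf("handling %s event\n", what); }

int main(void)
{
    bool running = true;
    int iterations = 0;

    while (running) {
        if (poll_mouse())    handle("mouse");
        if (poll_keyboard()) handle("keyboard");
        if (poll_network())  handle("network");
        if (poll_display())  handle("display");

        if (++iterations >= 3)   /* a real loop runs until the process exits */
            running = false;
    }
    return 0;
}
```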

In the case of a ‘bot, its speed of running and nimbleness of body and thought are limited by the cycling of the event loops at the highest level, and equally by event loops in all other processors and tasks, and equally by the capacities of all the message buses between them.  Those take all the environment information and calculate the next move, then pass commands down the data bus; every level takes what it needs and runs its own loop managing local resources.  Whether on a separate processor or not, they can also become a bottleneck.  They calculate and order and message.  Any component or task can be overloaded. Any overload in any function running within any event loop, due to any set of conditions, slows the system’s reaction; your robot will lurch and may fall on its face.

Performance testing is a separate skill. It is common in RT systems for a small change in an obscure piece of code to drop performance of the system 10%, and another somewhere else to interact with the first to drop performance another 30%. When thousands of changes have been made to the code since the last version was tested, often tens of thousands, finding the problems can be very difficult; finding the interaction is much worse.

It takes hard work over a long time to produce quality software, and there are no short-cuts.

Scheduling projects is impossible; the process is intensely, inherently nonlinear (Fred Brooks’s “The Mythical Man-Month”). Nonlinear because any single person on the project can cost man-years with innocent mistakes.  If bugs get through those testing phases, they become very difficult to find, analyze and fix. Cutting the amount of unit, module and integration testing is a normal way for companies to burn bridges in front of themselves: it is very difficult and expensive to recover from a high rate of bugs found in the field.

Nonlinear because ANY small flaw in the design or function of any component can stop the entire project, make it completely infeasible to proceed, without redesign or re-implementing critical components of the system.

That code is processed in many ways on its way to executable programs and libraries or static data files.  Each program, e.g. a language compiler or interpreter, is itself a large effort that required a large team of developers and testers. Ditto the Operating System, which provides the environment that executables execute within. Versions of an OS are often tied to the software tools, and therefore to the versions of libraries and programs, which are often dependent upon particular versions of data files.

Version management in these large projects is a specialty in itself : done poorly, it can by itself sabotage the project, reduce morale and productivity to a fraction of better-managed projects. Fixing the same bug in different versions half a dozen times can do it. Ditto very many other small mistakes, repeated often enough. Spending half a day debugging a problem, only to find your code includes a slightly-obsolete file of source, is a complete waste of engineering time, surprisingly frequent in reality.

Many skills mean it must be a large team, and any one of them can FUBAR your project. It takes a lot of management to run a large engineering project, and our servicebots were combined hardware and software, an especially difficult class. De novo, that effort was beyond any group we could put together or manage. I think no single company in the world could have done the full project from scratch.

Fortunately, that wasn’t necessary, as we could proceed as does any other user of OSS software. All of the open source projects, by definition, allow anyone to use the code. All allow making modifications for your particular use, and very nearly all allow any JRandom Programmer to submit those modifications to be included in future versions.  If the project leaders thought it was generally useful, the set of patches would be added to the latest version. Those projects are the basis of a very large segment of the world’s software technology in 2016, used on every cell phone, tablet and machine tool, most automated tools and every responsive toy.

In our case, the servicebot company didn’t write the entire system, not even a large proportion of it. They used very many libraries, some of which have existed for 50+ years and are themselves many levels deep, for example the various math and symbolic manipulation modules. ‘On top of’ those, other OSS projects have constructed modules to control joints, balance bipedal systems, convert ‘intent’ messages into sequences of commands to lower-level operations, and to monitor and correct actual vs intent at all of these levels.  Or you could choose a ‘subsumption architecture’; there were OSS modules for that, built on all of the above and more.

It was a large project, even so.  They had to write and test driver levels for the motors and actuators as they implemented the electronic interfaces and their use by software, which now also included the ‘muscles’ of the skin and face. They had to modify the highest event loop of an interactive AI, one of the OSS versions of Watson, to include the humanoid robot. The new AI had to be ‘aware’ of its new peripheral, to translate intents and decisions into forms the peripheral can execute, and to translate messages from the peripheral to know of results and new circumstances. They had to define the messages that transmitted all of the information needed by every element of the system and also produced by them, then connect those to the software that needed the information.

Not trivial at all, but also within the capabilities of a major corporation with experience in robotics and control systems for them, and the management talent to hire the needed AI experts from those OSS projects.

That company, as part of its obligations under the OSS licenses, had patched their work into their version of the OSS code they had begun with, and made it available to anyone using the system.  Likewise, these robotics companies had provided means of adding components: connectors on a ‘standard’ bus into which boards could be plugged, PCI being the most popular for the last 15 years. That allowed adding electronics and software drivers for new elements, such as the muscles underlying facial features, and functions based on them. That was necessary for new development work of their own, and there was no reason to remove the feature from delivered systems.

The electronics design to control their humanoid robot included units to assist debugging, all the way from the System-On-Chip processors used in RT control, through the various data and command buses, to the sensory systems : buffers and monitors, able to halt the associated CPUs if particular addresses or data patterns are seen in the message stream. Systems could not be developed without looking at their internal states for debugging. EE designers and software engineers were unhappy if a logic design and its implementation on silicon failed to provide enough points to see the silicon’s state as needed. They wouldn’t use your chip.

There was a universal test interface mechanism for chips, JTAG, but it is serial, very low speed, requires that the chip’s clock be stopped, and can only show register values changing in one-clock increments, so it is only useful at the chip level and doesn’t easily provide dynamic views. Real systems require more, and more capable, debug facilities. Readable registers of internal state are a start, but only a start.

Monitoring the buses was especially critical; it was often the only way to trace back to a source of error, as millions of messages can have passed between the root of a problem and the first manifestation visible to testers.  That monitoring normally had as-large-as-possible buffers recording bus traffic so an engineer could look backwards in time through the messages to see patterns, preconditions of problems, … More sophisticated units generate syntheses of message traffic and have enough processor power and hardware logic to allow a programmer or hardware engineer to trace a bug by writing a small program to dynamically look at any particular aspect of the message stream. Those generally halted, stopped the clocks of, the systems on the bus to freeze their state and message buffers.
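
The capture-and-freeze behavior is easy to sketch in software, though the real monitors did it in hardware. The buffer size and the trigger condition here are invented for illustration.

```c
/* Record every message into a ring buffer; when a trigger condition fires,
 * stop recording so an engineer can read backwards through the history. */
#include <stdio.h>
#include <stdint.h>

#define RING_SIZE 8   /* real capture buffers are as large as the hardware allows */

typedef struct { uint16_t tag; int32_t payload; } bus_msg_t;

static bus_msg_t ring[RING_SIZE];
static unsigned  head = 0;       /* next slot to overwrite */
static int       frozen = 0;

static int trigger(const bus_msg_t *m) { return m->payload < 0; }  /* "bad" value */

static void capture(bus_msg_t m)
{
    if (frozen) return;                 /* state preserved for inspection */
    ring[head] = m;
    head = (head + 1) % RING_SIZE;
    if (trigger(&m)) frozen = 1;
}

int main(void)
{
    for (int i = 1; i <= 12; i++)
        capture((bus_msg_t){ .tag = 1, .payload = (i == 10) ? -1 : i });

    /* Walk backwards from the freeze point, most recent message first. */
    for (unsigned k = 0; k < RING_SIZE; k++) {
        unsigned idx = (head + RING_SIZE - 1 - k) % RING_SIZE;
        printf("t-%u: tag %u payload %d\n", k, ring[idx].tag, ring[idx].payload);
    }
    return 0;
}
```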

In the case of our servicebots, a couple of bus debug modules were the cause of their individualization. These particular units had been upgraded continuously in newer models of the servicebots as the data rate of the associated data and command buses between systems had been upgraded.  These buses were the two most important for RT performance; all the others were part of the higher central nervous system and outside of the servicebot’s chassis.  Thus, the attached hardware modules that monitored them were astride the information streams that were the ‘bot and its grasp of its environment. The ‘bot itself, of course, was part of its environment.

The latest version of those hardware modules, an SOC chip with interfaces to the data bus circuitry, 8 parallel Gigabit Ethernet links in this case, was interfaced to a hardware Neural Network in one address sequence on the memory bus.  That design was the usual memory-mapped interface, as that was both a low-latency, high-bandwidth interface and a standard for every generation of memory chip.  So the license cost of the interface logic was low and the EE-CAD validation test code extensive.  Optimal for chip designers, and thus a very widely-used interface for all such chips.
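
The flavor of a memory-mapped interface, with the register layout invented for illustration and a static buffer standing in for the real mapped region; on the actual chip the pointer would be the block’s base address in the SOC’s memory map.

```c
/* A device's control and status registers appear as a struct at a fixed
 * address, and software reads and writes them like memory. */
#include <stdio.h>
#include <stdint.h>

typedef struct {
    volatile uint32_t control;    /* bit 0: enable the NN block */
    volatile uint32_t status;     /* bit 0: result ready */
    volatile uint32_t input;      /* next value fed to the NN */
    volatile uint32_t output;     /* last value produced by the NN */
} nn_regs_t;

static uint32_t fake_region[4];   /* stand-in for the hardware address range */

int main(void)
{
    nn_regs_t *nn = (nn_regs_t *)fake_region;   /* on hardware: the mapped base address */

    nn->control = 1u;             /* enable the block */
    nn->input   = 42u;            /* hand it a value */

    /* Pretend the hardware responded; on silicon these bits change by themselves. */
    nn->status = 1u;
    nn->output = 84u;

    if (nn->status & 1u)
        printf("NN produced %u\n", (unsigned)nn->output);
    return 0;
}
```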

The System On A Chip itself had been upgraded because it was easy.  The new version was pin-compatible with the previous and did not cost more, despite more processor cores, higher clock rate and other new capabilities.

On the same bus, there were flash memories, 256GB, because flash had been falling in price faster than Moore’s law predicted for 15 years, the result of very intense competition in that market segment.  That capacity was needed to store long sequences of messages, often multiple runs from different tests whose comparison is instructive. That memory was cheap, low power, and the capability was crucial for developers. The only contention for resources with other subsystems was for the bus’s bandwidth and the module’s power.  The board was not short of power, nor the bus of bandwidth, as those were only heavily used when most needed. That is the best case in system designs : there will be a bottleneck restricting a system’s performance, so use cheap resources to remove all the easy limits first.

The SOC they used had an FPGA in the package. Field Programmable Gate Arrays allow loading an EE-CAD logic design and making a specialized chip from the ‘no purpose’ FPGA. FPGAs can be loaded with different logic circuits (functions) dynamically; they aren’t fixed in function like a standard silicon component. These FPGAs weren’t large, but more than enough to, for example, sort all the bus messages by their ‘meaning’, the value in one of the initial fields of the message, then copy selected messages to memory and add a pointer to the associated memory queue.  That was a big headstart for any program running on the processor: it saved interrupts and the processor overhead of the context switches that would have been necessary, as well as all of the processor cycles that would have been needed for copying the message to memory and software locking of the First-In First-Out queues, a significant load at high rates of messages. That is when you need your processor cycles the most.
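
A software equivalent of what that FPGA logic did, sorting messages by tag into per-tag queues so the processor finds its work pre-sorted; the tags and queue sizes are invented for illustration.

```c
/* Sort incoming bus messages by the tag in their first field into per-tag
 * queues.  In the servicebot this step ran in FPGA gates at bus rates, with
 * no interrupts, copies or software locks on the processor side. */
#include <stdio.h>
#include <stdint.h>

#define N_TAGS    4
#define QUEUE_LEN 16

typedef struct { uint16_t tag; int32_t payload; } bus_msg_t;

typedef struct {
    bus_msg_t slots[QUEUE_LEN];
    unsigned  count;
} msg_queue_t;

static msg_queue_t queues[N_TAGS];

static void hw_sort(bus_msg_t m)      /* stands in for the FPGA logic */
{
    if (m.tag < N_TAGS && queues[m.tag].count < QUEUE_LEN)
        queues[m.tag].slots[queues[m.tag].count++] = m;
}

int main(void)
{
    bus_msg_t stream[] = { {1, 10}, {3, 30}, {1, 11}, {2, 20}, {3, 31} };
    for (size_t i = 0; i < sizeof stream / sizeof stream[0]; i++)
        hw_sort(stream[i]);

    for (unsigned t = 0; t < N_TAGS; t++)
        printf("tag %u: %u messages queued\n", t, queues[t].count);
    return 0;
}
```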

Programmers and engineers NEVER have enough cycles.  I always thought the deficits were at the level of my sexual satisfactions, to give you a hint of the constraints engineers faced in testing and debugging.

The FPGA had come along with this generation’s SOC. The previous design had incorporated an FPGA to perform those same functions.  That FPGA had been upgraded in every rev as new chips gained new capabilities, costless in engineering time. These were low-volume units; engineering time was the cost, not hardware delivered. If a better chip costing $20 was available, or even much better capacity at a $20 increment over the prior generation, and it might save engineering time at a later critical-delivery date, there were never any objections. That type of decision was standard at this stage of the market’s development, when the manufacturers are in a feature race and product volume is still low.

That combination of attributes meant that the NN could be managed, its function changed, from the CPU.  Traffic from the buses could be passed through hardware logic in the FPGA built into this SOC, and could also be put into the path in series with software filters, in any sequence allowed by processor cycles and available FPGA logic gates. Which messages passed through the filter, and where they were directed among the memory units, could be changed by the software, of course, and at hardware rates, by changing pointers in control registers. The upgraded chip also meant that the FPGA had much extra capacity, all available to the processor and its programs.
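
How ‘changing pointers in control registers’ re-routes traffic, in miniature; the tags, queues and table layout are invented for illustration.

```c
/* The filter consults a small table of destination pointers; software changes
 * a route just by overwriting one pointer, while the fast path keeps running. */
#include <stdio.h>
#include <stdint.h>

#define N_TAGS 3

typedef struct { int32_t data[8]; unsigned count; } queue_t;

static queue_t flash_log, dram_scratch, discard;

/* The "control registers": one destination pointer per message tag. */
static queue_t *route[N_TAGS] = { &discard, &dram_scratch, &flash_log };

static void filter(uint16_t tag, int32_t payload)     /* runs at bus rates */
{
    queue_t *q = route[tag];
    if (q->count < 8) q->data[q->count++] = payload;
}

int main(void)
{
    filter(1, 100);                 /* tag 1 currently goes to DRAM scratch */

    route[1] = &flash_log;          /* software rewrites one pointer ...      */
    filter(1, 101);                 /* ... and tag 1 now lands in the flash log */

    printf("dram_scratch: %u, flash_log: %u, discard: %u\n",
           dram_scratch.count, flash_log.count, discard.count);
    return 0;
}
```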

The original intent of the NN was developing NN controls to replace the software controls, part of the servicebot manufacturer’s subsumption-architecture group’s research goals at the time.  That had produced bits of logic in the FPGA that implemented feedbacks, NN to memory and memory to NN, including the very large flash memory. Flash memory contents persist over power cycles. The memory module was not erased with a new load; in fact the FPGA’s definition of functions, its logic as gates and connections, stored in flash and restored after every reboot, was loaded with the last version used by the engineering group in debugging that NN function. The processors (the SOC had 8 64-bit ARM cores, each with local cache) accessed some shared DRAM, and were bootstrapped from the same flash memory.

The summary was one of great capacity not utilized in normal operation, but with initial configurations that allowed, we came to understand, the NN to modify which messages in the stream were routed where, based on the outputs of the learning it had done from the previous setting.  It evolved.  It was designed to evolve.

The result of evolution in a system designed to evolve is the profound element of Kirschner and Gerhart and others I have mentioned before. That evolution is much more effective, appears more directed, than predicted by Darwinian theory or by the prior generations of molecular biologists studying the details of DNA, replication and sexual reproduction.

Our servicebots had evolved their individuality out of their previous designs. What!?! You should interpret that as “The implications are staggering!”

My experimental subjects were being raised by rogue AIs. My plan for a new class of biological intelligence as the basis of a new social organization was evolving as much as being designed. Any control I could claim was slipping with every new understanding.

Continued.

*Generalissimo Grand Strategy, Intelligence Analysis and Psyops, First Volunteer Panzer Psyops Corp.  Cleverly Gently Martial In Spirit

**I do hope you see how smoothly I did that : first I gave myself credit, with no evidence, for inventing another amazing technology, and then made myself out to be a far-sighted thinker who is on your side, by un-inventing it at the cost of my scientific immortality so it couldn’t be used against us ordinary folks?

You are still reading warnings.  What did this frame do to your mind?

Another warning for you : you think it impossible I invented immortality? Just a gratuitous piece of tech I plopped into the narration to make myself look good? Impossible that I did it with a POV somewhat allied to the Placental Rejuve tech? I found this just now, a couple of days after I published this. Published on NakedCapitalism 1 August, 2016.

Don’t over-estimate yourself, either. When I produce a technology, it is plausible as hell.

And stop reading the damn warnings! Nobody listens.

***Rest yourself, OK-BOB in all things.  Making enemies or becoming known as a bad guy is bad strategy, one of the many problems with revolutions: you can’t avoid it.

Also, you don’t expect martyrs to science? Have you heard of Galileo? He knew his possible, even probable, future, and said ‘yet it moves’ anyway. His scientific immortality was guaranteed at that moment, and the man had enough family, business and medical problems that mere burning at the stake might not have been such a big threat, especially if he could be sure of a friendly helper to pile the green brush on the fire : that choked them out quick.

I think many scientists would make a trade of life for scientific immortality. People trade their lives for lesser goals daily.
