Musings on 5th Generation Warfare, #16

Continued from here. The very necessary warning**

The second part of my* explanation for the development of our laboratory servicebots’ individuality, and for how little of it there was compared to the rather larger progress they had made at everything else, is the way the centralized AI had produced specialized skills for servicebots. Without going into as much detail, the architecture was subsumption plus neural networks for all sensory and motor functions, interfaced with a combination of a ‘common sense’ knowledge-base and the more freeform text processing of a Watson.

The total system’s organization isn’t simple, but it can be divided into three major components: real-time control, the sensory systems, and the cognitive system.

RT control runs the ‘bot and manages its interactions using local neural networks and processors. RTC for an interactive system is organized as an event loop at the top level. Event loops are:

loop(forever) { check_do( this ); check_do( that ); … }

Check for things to be done and do them if needed. Is someone addressing me? Accept their command, analyze it, queue the tasks. Do I need to perform a task? Call the function that breaks the goal into pieces, and queue those pieces for action in sequence. Do I have pieces of actions to perform? Call the function that performs the one at the front of the queue. Do I need to adjust posture to maintain balance? Call the function that tells me what to do, in the full context of walking, picking up the baby, …
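
To make that concrete, here is a minimal C++-flavored sketch of such a loop. The queue types, the stub functions, and everything else in it are invented for illustration; this is not the servicebot’s actual code, just the shape of the idea:

#include <deque>

// Toy stand-ins for the real structures and the real work.
struct Command { int id; };          // a parsed request from a person
struct Task    { int id; };          // one decomposed piece of a goal

std::deque<Command> command_queue;   // filled by the speech front end
std::deque<Task>    task_queue;      // filled by the goal decomposer

void decompose(const Command& c) { task_queue.push_back(Task{c.id}); }
void perform(const Task&)        { /* send messages toward the motor SBCs */ }
void adjust_posture()            { /* balance check, every cycle */ }

void check_do_commands() {
    if (command_queue.empty()) return;
    decompose(command_queue.front());    // break the goal into pieces
    command_queue.pop_front();
}

void check_do_tasks() {
    if (task_queue.empty()) return;
    perform(task_queue.front());         // do the piece at the front
    task_queue.pop_front();
}

int main() {
    for (;;) {                           // loop(forever)
        check_do_commands();
        check_do_tasks();
        adjust_posture();
    }
}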

That loop must cycle fast enough to allow normal walking and the like. ‘Bots are as heavy as people, but their motors don’t yet have the strength of muscles, so the power-to-weight ratio isn’t great. That means more momentum to be managed in normal movements at human speeds with less power, which requires finer control to achieve. That in turn means messages from the limb-position sensors need to arrive at least 120 times a second, and the limb-control NNs must take them off the bus without delay; FPGAs offload the processors for that. The sensory systems likewise provide their messages at least 120 times per second, enough to maintain an adequate representation of the environment, to give sufficient speed in reflexes, etc. Again, FPGAs queue the messages to the tasks which keep those structures current, and can call ‘attention’ functions which interrupt cognitive processing, changing its ‘stream of thought’.

For reasonable latency in responding to people and events in the environment, the top level should cycle more than 50 times per second. To allow that, a check_do() function normally checks whether its queue of events has anything in it, and if so handles it. The do() functions at the top level do as much as they can fit into their allotted time, passing messages to queues which communicate them to other processes, which ultimately put the necessary messages on the data bus, where do_it() functions on other SBCs do the actual work. Messages flow back as a result, and, if all this works as intended, the ‘bot’s motions are smoothly continuous.
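
The ‘allotted time’ part can be as simple as a deadline check while draining a queue, so the whole loop still makes its 50-plus cycles per second. Again, a sketch with invented names, not the product code:

#include <chrono>
#include <deque>

struct Event { int id; };
std::deque<Event> event_queue;                 // filled by the FPGA/bus readers

void handle(const Event&) { /* pass messages on toward the data bus */ }

// Drain the queue only until this handler's share of the ~20 ms cycle is spent.
void check_do_events(std::chrono::microseconds budget) {
    using clock = std::chrono::steady_clock;
    const auto deadline = clock::now() + budget;
    while (!event_queue.empty() && clock::now() < deadline) {
        handle(event_queue.front());
        event_queue.pop_front();
    }
    // anything left over waits for the next cycle
}

int main() {
    for (;;) {
        check_do_events(std::chrono::microseconds(2000));   // 2 ms of the budget
        // ... the other check_do() calls share the rest of the cycle ...
    }
}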

Those functions may require the action of motors that are controlled from separate processor boards. Multi-core processor chips and fancy caches in front of wide memory buses to ever-faster and -larger DRAM only go so far in providing compute cycles; software requirements outrun hardware capability, always. Distributed systems are the standard design solution: use messages between processes instead of simple function calls to coordinate them, which allows a process to run anywhere the message can reach. Communication links are also always inadequate, but they can be duplicated and run in parallel. The interfaces to and from those immediately become another performance bottleneck, and then require a software layer and underlying hardware logic, complexity of their own that increases with every generation. Every server board includes all of those hardware technologies, and all server OSs support them.

Servicebots used the same distributed system architecture, implemented with single-board computers, one in the head and four in the chest, connected via parallel 10 Gigabit Ethernet links. Ethernet wasn’t perfect, but excess bandwidth substitutes for latency and minimizes jitter, and to some extent substitutes for compute cycles, always the major bottleneck in this class of system. The upper, lateral chest contained the motors and gearing that managed the arms. The chest also contained the cooling system for the power source and the major motors, which had heat ducts to the heat exchangers in the chest. These ‘bots had a dual-lumen trachea, air in one nostril and out the other, with fans to move it. Later models, using electro-contractile elastomers as muscles and 3D-printed plastic ribs on a backbone much like a human’s, breathe. The lower chest held the thermionic LENR power source. The abdomen contained the motors and gearing that drove the legs.

The mechanical side of these humanoid robots is impressive, but the amount of software is larger than was needed by all of America’s space missions combined, possible only because of the vast amount of OSS and its continuous development over the decades.

In a conventional control system, the detail of the ‘topmost’ event loop would be required all the way down to the individual motors on individual finger joints. Increasingly, neural networks were replacing that, because so much interdependent software was ‘fragile’, meaning small changes in any component at least rippled, and often tsunamied, through the rest of the code, requiring big changes everywhere to keep the total set functioning.

Maintenance of large code-bases is a nightmare for corporations: finding and repairing bugs without side-effects to the code and the corporation is not possible. When mismanaged even minimally, the effort requires as much as 30% of total corporate effort for complex technical products, and it becomes a source of continuous conflict between engineering and the rest of the organization.

‘Bugs’ are anything a customer can’t figure out, from mistakes or ambiguities in a manual or screen display through “it died a dog’s death and won’t reboot”. The problem is generally reported to their support engineer in the local sales office, someone who deals with customer questions and problems, helps the sales people set up demonstrations of the complex system, and usually runs the keyboard in them. They are the first level of support; if they can’t handle it, it gets written up in a formal bug report and kicked up to corporate. Corporate has more senior people who both know more and can talk directly to engineers. Many times they can find a simple workaround for the customer’s issue.

If not, it is passed to engineering. The engineering group often has its own specialists to deal with the bugs and spare the developers, the ‘real engineers’, from the constant interruptions their bugs cause. That is a sure sign of a FUBARed product and a history of poor management decisions, also necessary in all large and soulless companies, because it means engineering has escaped, had to be allowed to escape, primary responsibility for its errors. If every engineer is not responsible for their bugs, their code will have a high bug rate, and the load of dealing with bugs in subsequent steps will inevitably grow and ultimately consume the company.

That is an iron law of engineering, often ignored because marketing pushes for product on an aggressive schedule in a feature race with competitors. “Release the product before testing is complete, just this once” cascades: the resulting bugs experienced by customers, and their urgency, overwhelm engineering; management creates layers to ‘help’ continue development of new features; responsibility diffuses; costs are spread; ROI falls; management is replaced; and the new management can’t produce a business plan that corrects the problem within the resources of the company. Even the last company standing in a high-tech market is usually barely profitable, consumed by its support costs.

Even with perfect management and a perfect history of superlative engineers and engineering, bugs happen, as there is no way to test a complex system exhaustively, and bugs can arise at any level of the software stack. Best case, the technical task itself is tough: a programmer is assigned a bug that is thought to be in code they generated. They may have to set up an elaborate test rig matching the customer’s case to find the symptom, to replicate the bug. It can take days for a ‘race’ bug, one caused by an interrupt occurring at a point in the code that doesn’t correctly restore the state, to occur, and more days to identify the first symptom, e.g. a pointer in a queue of messages set to null. The first symptom of that class of bug normally occurs millions of instructions after the pointer was set to null, so our engineer searches through the code, inspecting every use of that pointer. If he is lucky, it was a use of the pointer that set it. If not, it could be a random write from any part of the code: finding those is luck, being active in testing different configurations and versions, reading a lot of code, having a good memory. Best case, the cause is that pointer, which means staring at the code until your eyes bleed to find a use that could be bollixed by an interrupt. This after weeks to collect the particular configuration of hardware, build the versions of software to match the customer’s, and load and configure everything to match the customer. It often takes discussion with the original customer support engineer, who needs to call the customer for information. Engineering minds are scattered over a dozen bugs as this proceeds in fits and starts, none of it of much intellectual interest, while engineering hours, the company’s scarcest resource, are burned at a great rate.

Fixing bugs, especially bugs that the engineer himself created and that show his very best efforts to be less stellar than he had assumed in his own mind, is not fun. Hard slogging work like that can’t be the job unless we have no choice. Engineers love the creative side of our work; in good conditions on an interesting project, one that challenges our creativity and grasp of the intersecting technologies, our mastery at optimizing their joint function, it is a permanent intellectual high. The only interesting thing about finding bugs is the process for doing so, and that is a skill that must be developed. Music practice in an un-airconditioned gym on a hot summer afternoon is more fun.

Most bugs assigned to an engineer will be found in some other engineer’s code; that engineer may be available to fix the problem, or our guy may have to do it himself. That means learning someone else’s coding style, working through what they did and how they did it, in great detail. Anything less increases the probability of the fix creating subsequent difficulties.

The process is demoralizing. When it goes on too long, engineers become disgruntled, and it doesn’t take many more discouragements before the best of them, the ones with the most opportunities, the most in demand and most recruited, start bringing their resumes up to date. That senior engineer goes to a different engineering group in a company that also promises him creative work, ‘no messy bug fixes’ in their pitches. Soon, a more junior engineer takes over. His bug fixes have more subsequent bugs. The slope steepens.

This is engineering management’s version of ruling classes setting themselves up for ruin with short-sighted decisions that put off problems. Management’s jobs depend on meeting deadlines. Bonuses and stock options’ worth ditto, and far more than on the corporation’s profitability two years from now, or their products’ reputation for reliability. Those change too gradually for anyone’s specific decision to be blamed. Selling the buggy product is marketing’s problem; they put lipstick on pigs all the time.

That cost of software was a major motivation for using NNs in control systems. As NN chips improved in capacity and performance, the system became more robust, and so less expensive to develop, when NNs controlled individual joints based on a message stream from the senses and from other joints reporting their current and intended actions. That required learning sessions: the ‘bot learned in a baby’s sequence, learning to roll, crawl horizontally, drag itself raised on its arms, then on arms and knees, etc.

Because NNs can be read and written from a CPU and its memory, what one ‘bot has learned can be propagated. That allowed intensive training of one ‘bot in a large variety of situations through a large variety of actions, after which all ‘bots of the same model could employ the same skills. NNs didn’t eliminate software: the NN was still part of the electronics, mounted on an SBC, and the processor loaded its contents and set up the registers that enabled motors and other housekeeping tasks. But the code to do those functions was trivial compared to the control code that the NN replaced.
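
The propagation mechanism is no more exotic than copying memory. A hypothetical sketch, with invented names and a flat weight vector standing in for whatever the real NN devices actually hold:

#include <fstream>
#include <string>
#include <vector>

// Sketch: a trained joint-controller NN's weights are just bytes in memory,
// so a skill learned on one 'bot can be written into another of the same model.
struct JointControllerNN {
    std::vector<float> weights;                     // contents of the NN device

    void snapshot(const std::string& path) const {  // read out of the trained 'bot
        std::ofstream out(path, std::ios::binary);
        out.write(reinterpret_cast<const char*>(weights.data()),
                  weights.size() * sizeof(float));
    }
    void restore(const std::string& path) {         // write into another 'bot
        std::ifstream in(path, std::ios::binary);
        in.read(reinterpret_cast<char*>(weights.data()),
                weights.size() * sizeof(float));
    }
};

int main() {
    JointControllerNN trained{std::vector<float>(1024, 0.5f)};
    trained.snapshot("elbow_controller_v3.weights");

    JointControllerNN fresh{std::vector<float>(1024)};
    fresh.restore("elbow_controller_v3.weights");   // same model, same skill
}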

However, NNs also presented new problems in mating the NN layer with the rest of the system. For the sensory systems it was relatively easy, and those had been where NNs were first developed. In fact, the first NNs were modeled after the early results from single-unit recordings in the retina and cochlea. From those results, which had suppressed feedback from more central components of the sensory systems as an unintended consequence of the anesthesia and paralysis of the animal prepared for recording, the sensory systems appeared to be relatively simple feature-extraction circuits. For the eye, they enhanced edges, detected spots and lines, and did so for different colors. The results passed from retina to the next stage of processing, the lateral geniculate nucleus, in a spike-train code. The LGN did further processing and passed its output on, in the same kind of code, to Area 17 of the visual cortex and to the superior colliculus at the top-front of the midbrain. More connections ran to Areas 18, 19, …, each with further stages of analysis, more complex features extracted, corners and angles and figures, each more abstract and delocalized in visual space. The SC handled attention, reflexes and eye movements.

None of the sensory systems were completely understood, especially not the feedback from each level back to the previous one, but the principles of topological organization of the processing and of ‘channels’ for extracted features were common to all, and easy and natural to implement in circuits. They were even relatively easy to mate with cognitive systems: a cell or small set of cells representing a ‘pink grandmother’ was assumed to be the highest level of the feature extraction, and the cognitive system could learn that an object in a particular section of the visual field had this set of attributes, e.g. ‘has fur’, ‘4 legs’, this size, and was moving in a certain direction at this rate. Deep Learning technology had shown that NNs could identify objects in visual scenes as well as humans; all it took was enough output pins to represent the various attributes, or codings of numerical values in registers to represent them. From such a message stream from each sensory system, the cognitive function could identify and track its environment.
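
A toy version of the kind of message such a highest-level vision stage might put on the data bus, with the field names and attribute encoding invented purely for illustration:

#include <cstdint>

// Sketch of one 'identified object' message from the vision system.
enum : uint32_t { HAS_FUR = 1u << 0, FOUR_LEGS = 1u << 1, MOVING = 1u << 2 };

struct VisualObjectMsg {
    uint64_t timestamp_us;      // when the frame was captured
    uint16_t object_id;         // the tracker's handle for this object
    uint8_t  field_sector;      // which section of the visual field it occupies
    uint32_t attribute_bits;    // e.g. HAS_FUR | FOUR_LEGS | MOVING
    float    size_m;            // estimated size, meters
    float    heading_rad;       // direction of motion
    float    speed_mps;         // rate of motion, meters per second
};

int main() {
    VisualObjectMsg cat{0, 17, 3, HAS_FUR | FOUR_LEGS | MOVING, 0.4f, 1.57f, 1.2f};
    return cat.object_id == 17 ? 0 : 1;   // just to keep the sketch self-contained
}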

Because those levels below the cognitive were relatively fixed in function, they were packaged in a multi-chip module with standard inputs from the camera and a standard computer data bus from which to read the structures containing sensory information. As the technology progressed, auditory systems generally provided both the details of complex sounds such as symphony orchestras and the names of popular pieces being played, although that required additional memory for the melodies and names.

The same arguments about the capabilities of those MCMs vs more discrete systems raged among connoisseurs of sensory systems as had raged since the days of integrated stereo systems. As usual, high-volume systems opted for convenience and low cost. All manufacturers hoped for volume, and needed the low cost that the lower-power devices provided.

The technical progress in sensory systems had been very rapid because different companies had specialized in different parts of the analysis for each system. Cameras were combined with color and brightness correction and the retinal analysis stage. That resulted in a standard stream, now an Internet Engineering Task Force Request for Comments (RFC) standard, as that standards body was the fastest and lowest-cost option for the pioneers in this market. The market for vision systems needed standards to support an ecosystem.

That standard also allowed the development of very realistic eyes in a very realistic socket with a lid and normal motion, binocular vision of great acuity. These were late developments; the first model to reach America was the Sexbot chassis our group was soon to receive for Scherrhy’s embodiment. Beautiful eyes, also; they were nearly as lovely as our Tessels promised to be.

Standards evolved for each sensory system and each level of analysis, defining the messages and their meaning. Each stage of sensory processing accepted the prior stage’s standard and emitted the next. Decoupling technologies by standards is good strategy; it helps all the players to be in the same positive-sum game.***

All of the sensory systems wrote a stream of messages to the data bus describing what was happening: objects, attributes and identifications. The base system had been manipulating the sensor’s incoming information stream all the way through; e.g. the camera’s responses to different colors were not linear in the light it received, and that was adjusted before the information went to the first level of the ‘retina’. The raw images, sounds, touches, and muscle sensations were normally also stored in a large circular buffer, available if upstream processing needed to inspect the data in more detail. Every system had a full suite of software tools that could do so, and, as the technology progressed, logic modules for the FPGAs that could form sophisticated DSPs, FFTs and other tools to make that fast. The system could also store scenes as compressed (or not) video to the limits of its many memories. Those functions were available to the top-level event loop, and the cognitive base ‘knew’ about them and their use.
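
The circular buffer itself is completely ordinary engineering, something like this sketch, with the capacity and frame layout invented:

#include <array>
#include <cstddef>
#include <cstdint>
#include <vector>

// Sketch of the ring buffer of raw, time-stamped sensor frames that
// upstream processing can rewind through for a closer look.
struct RawFrame {
    uint64_t timestamp_us;
    std::vector<uint8_t> samples;        // raw pixels, audio, strain values, ...
};

class SensorRingBuffer {
public:
    void push(RawFrame f) {
        frames_[head_] = std::move(f);
        head_ = (head_ + 1) % frames_.size();
        if (count_ < frames_.size()) ++count_;
    }
    // Frame n_back steps into the past; 0 is the most recent frame.
    const RawFrame& at_age(std::size_t n_back) const {
        return frames_[(head_ + frames_.size() - 1 - n_back) % frames_.size()];
    }
    std::size_t size() const { return count_; }

private:
    std::array<RawFrame, 512> frames_{};  // a few seconds of frames at 120/sec
    std::size_t head_ = 0, count_ = 0;
};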

Excellent, effectively perfect memory for past sensations, time-stamped so the exact moment of a sound could be compared to the sequence of an event in the visual field, could provide much information in addition to the standard streams of messages. Servicebot minds had some big advantages compared to ours, and those were used by the various cognitive modules in everything. They needed every edge they could get, because, while the sensory systems matched humans’ in sensitivity, and increasingly well in extracting the information in a sensory stream, the cognitive system wasn’t nearly at that level.

Additionally, the servicebot architecture provided another edge, responsible for much of the progress in competence we had seen in our labs and with our kids. That was a result of the motor NNs being controlled via the same kind of information feedback loop as an animal’s. A lower motor neuron in the ventral spinal cord or a cranial nerve nucleus controls the tension in ‘its’ set of muscle fibers, in a particular one of the 640 muscles of a human body, by the frequency of ‘spikes’ initiated in the cell body and traveling down or out to the muscle. That LMN can have no direct knowledge of the result; that is produced by sensors in the muscles, skin and joints, whose signals run back into the spinal cord or brainstem, both making direct connections on the LMN and going on to higher levels. All of those can act to adjust the muscle tension as necessary to hold the body in position to hold the baby off the floor. In the ‘bots, of course, the sensory system produced a stream of messages indicating the values of strain sensors on motors, joints, limbs, and belts.

The servicebot’s system was very cleverly organized to allow combining learned skills from many ‘bots and downloading them into all of them. This resulted directly from the fact that the ‘bots’ motors were controlled via a message stream from the senses: vestibular, vision, proprioception, etc. They even had a startle response, for the same reason people do.

The motor control processor selected relevant messages from the message stream, messages indicating the position of the limb being moved, for example. The motor control NN would receive those and in turn emit values that caused the processor to continue stepping the motor until the limb had achieved the desired position, unless information from other sensors also fed into the NN produced messages indicating that the motor was being strained, that the vestibular stream showed the ‘bot losing balance, etc.
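
In cartoon form, with the NN reduced to a black-box function and every name and threshold invented:

// Sketch of one joint's control relationship: the NN consumes the relevant
// messages and keeps emitting motor steps until the limb reaches its target,
// unless strain or balance messages say to stop.
struct JointMsg   { float position, target, strain; };
struct BalanceMsg { bool losing_balance; };

// Stand-in for the motor-control NN: returns a step direction, or 0 to hold.
int nn_output(const JointMsg& j, const BalanceMsg& b) {
    if (b.losing_balance || j.strain > 0.9f) return 0;   // back off
    if (j.position < j.target - 0.01f) return +1;
    if (j.position > j.target + 0.01f) return -1;
    return 0;                                            // close enough
}

// The processor steps the motor according to the NN's output each cycle.
void drive_joint(JointMsg& j, const BalanceMsg& b) {
    j.position += 0.01f * nn_output(j, b);
}

int main() {
    JointMsg elbow{0.0f, 0.5f, 0.0f};
    BalanceMsg balance{false};
    while (nn_output(elbow, balance) != 0) drive_joint(elbow, balance);
}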

Because that NN was driven from a message stream, the stream at each NN’s input could be stored, transmitted to headquarters, and played through a system there to train another NN, and this could be done in a simulator which didn’t need the entire ‘bot and could therefore ‘learn’ a training stream of messages much faster than an actual servicebot. That allowed running through a very large training series in reasonable time, normally done daily with selected message sequences from the field, chosen for their novelty and usefulness from the hundreds of servicebots of that model in the many circumstances and tasks they encountered in aggregate.
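
That replay idea is the whole trick: captured input streams can be pushed through a simulated controller as fast as the simulator will run; nothing holds it to the physical ‘bot’s 120 messages per second. A sketch, with a hypothetical train_on() interface:

#include <vector>

// Sketch: replay recorded field message streams through a simulated
// controller NN, far faster than real time. train_on() is hypothetical.
struct SensorMsg { float values[8]; };
using Recording = std::vector<SensorMsg>;        // one 'bot's captured stream

struct SimulatedControllerNN {
    void train_on(const SensorMsg&) { /* one learning/update step */ }
};

void nightly_training(SimulatedControllerNN& nn,
                      const std::vector<Recording>& field_recordings) {
    // Sequences chosen for novelty and usefulness from the deployed fleet.
    for (const Recording& rec : field_recordings)
        for (const SensorMsg& msg : rec)
            nn.train_on(msg);                    // no real-time constraint here
}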

That NN’s resulting skills could be downloaded into other ‘bots. Because the NNs were in memory sockets, with standard pins and a standard electrical interface, the entire NN could be upgraded to a new generation’s technology and loaded with the equivalent control skills. That perhaps required changes to the processor’s control code, or even to an FPGA’s logic module, to direct messages to a different address with a different mask/shift, but it was an elegant solution to the problem of training neural networks, and it was what made these general-purpose humanoid intelligences possible. Without it, the cost of training every individual ‘bot would not have been much less than that of training a human.

True, ‘bots don’t need fuel that costs as much as a human’s food and don’t need living quarters, nor even much downtime, but low training costs are what made them affordable to the average family. That changed everything. Well, at least after ‘bots improved their communication skills, it changed everything, especially once that was built upon what was by then a very large and impressive base of ‘skills’. The first Sexbots were capable of running a very large household, day 1.

The architecture also largely decoupled motor control from the intelligence processing, as the cognitive units produced messages indicating intent to do an action, ‘pick up the baby over there’. Those intent messages were passed to the equivalent of a compiler’s code generator, which translated them into sequences of ‘high-level motor’ actions: ‘walk to where the baby is’, ‘bend over and reach down to the baby’, ‘grasp the baby’, ‘lift the baby and straighten up’.

Those were decomposed further, ‘step forward with the left foot’, etc. But the NNs ‘knew’ how to do their part of ‘step’ while maintaining balance, so those messages on the command bus would be interpreted by the lower-level NNs as necessary to accomplish the motion in a way consistent with the servicebot’s position, etc.**** All of those layers could be developed independently of each other using simulators, a fact that made the total development much less expensive than it otherwise would have been. These architectural features made it possible for even a large, integrated Japanese manufacturer of robots to develop a sentient humanoid.
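
The ‘code generator’ analogy is nearly literal. A toy sketch of the two expansion steps, with every string and table invented:

#include <string>
#include <vector>

using Action = std::string;

// Sketch: intent -> sequence of high-level motor actions.
std::vector<Action> expand_intent(const std::string& intent) {
    if (intent == "pick up the baby over there")
        return {"walk to the baby", "bend and reach down", "grasp the baby",
                "lift and straighten up"};
    return {};
}

// Sketch: high-level action -> primitives the joint NNs know how to execute
// while keeping the 'bot balanced.
std::vector<Action> expand_action(const Action& a) {
    if (a == "walk to the baby")
        return {"step forward with the left foot",
                "step forward with the right foot" /* ...repeat as needed... */};
    return {a};   // already primitive enough for the NN layer
}

int main() {
    for (const Action& a : expand_intent("pick up the baby over there"))
        for (const Action& p : expand_action(a))
            (void)p;   // in the real system: queue p onto the command bus
}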

Be clear, this was a huge project, only possible because of the OSS components and the clever architecture that optimized the use of so many technologies.

The cognitive processing side was a more capable version of Watson’s organization. As with Watson, it consisted of processes running on many systems, although this version had some in the ‘bot itself, others in local servers connected to the ‘bot via high-bandwidth wireless links, and others on remote clusters of very powerful machines connected via high-bandwidth Ethernet. The various subsystems moved closer to the ‘bot as SBCs and high-volume server boards from Intel and AMD got more powerful, their software was optimized, and more functions were moved into NNs.

As in all systems responsive to their environment, the highest level was an event loop. The first-called function was the NLP interface. That level accepted and analyzed requests with a verbal version of a Watson natural language interface. Understanding spoken language is much harder than understanding written language: spoken language has more ambiguity and needs more application of the language’s structure, a deeper grasp of possible meanings, in order to interpret the sequence of phonemes emitted by the auditory unit. Often there is a choice of phonemes because of the ambiguity, and higher-level analysis is necessary to understand which one the speaker intended. This problem is a superset of voice input to a word processor, largely a combination of that and the same ‘meaning’ analysis as a Watson. The robot manufacturer licensed these voice NLP packages; the R&D efforts in small software companies had been proceeding for more than 20 years.

The NLP function returned a data structure containing the sentences with their possible alternative meanings. Those were evaluated as Watson had done: the system attempted to interpret those sentences as a request, broke the request, or requests in the case of multiple possible interpretations, into components, and passed the components on to the lower AI modules. Based on the possibilities they produced, the User Interface used the lower AIs to evaluate the plausibility and probability of each item on the list, to decide what reply to make or action to perform.
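
In outline, something like this sketch; the structure, the scoring stub and all the names are invented:

#include <string>
#include <vector>

// Sketch: the NLP stage hands back alternative readings of the utterance;
// the UI layer asks the lower AIs how plausible each one is and picks one.
struct Interpretation {
    std::string parsed_request;   // one possible reading of the sentences
    double nlp_confidence;        // the NLP stage's own estimate
};

// Stub standing in for the ensemble of lower AI modules (Cyc, Alpha, ...).
double plausibility(const Interpretation& i) { return i.nlp_confidence; }

// Assumes at least one alternative was produced.
Interpretation choose(const std::vector<Interpretation>& alternatives) {
    Interpretation best = alternatives.front();
    for (const Interpretation& alt : alternatives)
        if (plausibility(alt) > plausibility(best)) best = alt;
    return best;                  // the reading to reply to or act on
}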

This AI was an improvement on Watson, and the differences were this manufacturer’s new contribution to the total AI effort. The first was that the ‘bot maintained a structure that represented itself and another that represented its surroundings. It needed these to know how to respond to commands: e.g. posture, power supply and usage and other chassis-specific information, what direction it was facing, what it was holding, … The external environment information was necessary to avoid standing up straight under overhanging cupboards, backing into someone approaching from behind, or stepping off the edge of a curb without anticipating the necessary adjustment. These were not strictly necessary, caution and re-checking would have accomplished the same, but that ‘sense of self’ improved response time at adequate standards of caution. Improved response time meant more rapid movement meant more work done per hour.
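
A toy version of those two structures, with the fields invented; the real ones were certainly far richer:

// Sketch of the self-model and environment-model the cognitive layer
// consults before acting. Field names are illustrative only.
struct SelfModel {
    float battery_fraction;     // power supply state
    float heading_rad;          // which way the 'bot is facing
    bool  hands_occupied;       // holding anything?
    float posture_height_m;     // how tall it currently stands
};

struct EnvironmentModel {
    float clearance_above_m;    // don't stand up under the cupboard
    float clearance_behind_m;   // someone approaching from behind?
    float drop_ahead_m;         // curb edge, stairs, ...
};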

Along with this was a capacious working memory for events and prior analyses and their results. These were pruned on a first-in, last-pruned basis, and pruning was done according to relevance to more recent requests, then by uniqueness. This rule preserved working memory by recency and historical relevance and ‘things that stuck out’, at the expense of the ordinary, the usual. Those entries were tagged by the usual background task, which checked new entries against prior commands, analyses and results. This allowed maintaining unique events over a longer period than working memory alone would hold. Working memory was treated as a cache, searched for answers to new requests before anything else. This increased mental nimbleness, as do all such caches from processor memory to internet access points. Increased mental nimbleness meant more effective work per hour.
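
One way to read that pruning rule, as a sketch with an invented entry format:

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

// Sketch of the working-memory cache: searched first for answers to new
// requests, pruned by uniqueness, relevance and recency. Names are invented.
struct MemoryEntry {
    uint64_t    last_used_us;   // recency
    double      relevance;      // tagged by the background task
    bool        unique_event;   // 'things that stuck out'
    std::string request, result;
};

void prune(std::vector<MemoryEntry>& wm, std::size_t capacity) {
    if (wm.size() <= capacity) return;
    // Keep unique, relevant, recent entries; the ordinary and old go first.
    std::sort(wm.begin(), wm.end(),
              [](const MemoryEntry& a, const MemoryEntry& b) {
                  if (a.unique_event != b.unique_event) return a.unique_event;
                  if (a.relevance    != b.relevance)    return a.relevance > b.relevance;
                  return a.last_used_us > b.last_used_us;
              });
    wm.resize(capacity);
}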

A second improvement on Watson added various AI technologies to the ensemble analyzing requests and evaluating results. Cyc was one of those; major parts had been released as OpenCyc, an open source inference engine plus the ontologies and ‘common sense’ knowledge-base. Additionally, Wolfram’s Alpha and other more specialized AIs were part of the mix. Those provided a means of evaluating the reasonableness of an interpretation. OpenCyc would know that fathers do not normally marry daughters, and so if “the father married her” could refer to several other different actors, it was most unlikely to be the daughter. Alpha handled quantitative statements; it could return the fact that the moon is far away relative to the size of cows, making it unlikely the claim of the cow jumping over it was literally true.

The OpenCyc framework was where the ontologies for all of the sexual topics were being installed by our open groups, so we didn’t need to worry about that. They were also developing any special reasoning relationships, the ‘rules for common sense’ used by Cyc’s reasoning engine. The AI community had generated a number of other more specialized tools, which were added to the ensemble as the team had engineers freed from previous portions of the work, so the cognitive component of the AI grew. All of these executed on the remote server clusters; many of their search functions in combinatoric spaces could overwhelm any lesser system.

That description is far too simple, because the process of teasing meaning out of spoken words and generating actions for requests was iterative, recursive, meshed, AND distributed across an entire set of servers with high-bandwidth connections to the internet. ‘Bot intelligences were as dependent upon their net access as humans’ had become.

It was an ensemble of analysis methods that incorporated all of the effective methods of selecting and sorting probabilities, and the UI that managed it selected one of the many possible answers or actions. One of thousands, for any complex request. By the standards of a human brain, it was abysmally inefficient (20 watts vs 100s of kilowatts; how do you even estimate the power used by a set of virtual servers in a commercial cloud?).

Ensemble methods depend on a table of the probabilities that each component method is correct for each type of input, in this case each element of the request. For this version of a Watson, the table was merely more complex, because the AI units could provide independent measures of their correctness from their internal evidence. The table’s value for those methods was an independent check on those internal-to-the-AI estimates. In actual function, either one could be correct, or totally wrong, inferior to some other analysis method. The total, as complex as it was, as enormous as the amount of software was, was not perfect. Neither were human minds.
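
A crude sketch of how such a table might combine with the units’ self-reported confidence; the product rule here is my invention, purely for illustration:

#include <string>
#include <vector>

// Sketch: weigh each analysis method's own confidence by the table's record
// of how often that method is right for this type of request element.
struct MethodOpinion {
    std::string answer;
    double self_confidence;     // the AI unit's internal estimate
    double table_reliability;   // historical hit rate for this request type
};

std::string select_answer(const std::vector<MethodOpinion>& opinions) {
    std::string best;
    double best_score = -1.0;
    for (const MethodOpinion& o : opinions) {
        double score = o.self_confidence * o.table_reliability;
        if (score > best_score) { best_score = score; best = o.answer; }
    }
    return best;                // one of the thousands of candidates
}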

This, then, was the system our small team would be integrating with our new software element, our simulator for a human brainstem, autonomic nervous system, and associated glands and viscera. Obviously an enormous job; we hoped their latest revision of the ‘select an answer’ code was well documented and clean, because that was where we needed to add the biases our embodiment required.

Continued.

*Generalissimo Grand Strategy, Intelligence Analysis and Psyops, First Volunteer Panzer Psyops Corp.  Cleverly Gently Martial In Spirit

 

**Despite the incessant bragging you will see in this and other posts on the subject, the claims of technical elegance should be taken with a bushel or so of salt. Nothing is easier than for someone with small expertise to blow it up to world-class level in the minds of an audience who know very much less, and/or to amplify their insight into other areas based on their claimed expertise. It is a standard technique of persuasion, a common thing I observed in college professors working in areas of scholarship adjacent to their real expertise. And it can only be detected by someone with MORE knowledge of the subjects in question. Do you know, or know of, an expert in NNs, AI software AND system architecture? Neither do any of the authors of this series, the Generalissimo, Scherrhy and me; their hard-working transcriber, a humble systems programmer when wearing his other hat, has never worked in AI or NNs.

The Generalissimo’s account here is very plausible, and seems elegant, based on my small knowledge. My work researching the Generalissimo’s many areas of at least claimed expertise prevents me from pursuing the several Ph.D.s necessary to evaluate his explanation further. Same as with his other technologies and scientific insights, as interesting as I find them.

As for his adoption of my OKBOB standards of ethics: be serious. Claims on paper, or on a blog, are less substantial than a wisp of smoke, and can be depended upon to determine behavior just as much. I have not seen contrary thinking, in his defense, and probably would have, given my special access to his mind, a wandering spirit I found while hiking through the historical continuum and gave a home, put to work writing for me. But you can’t trust me, either. Except in my claim that he and Scherrhy are not slaves, of course. Not that you have any way of checking that, either.

***Corporations such as one of the giants in the field of commercial OSs and applications, a particularly excellent example of a negative-sum player which has consequently lost ground in all of these areas, used ‘extend and extinguish’ tactics against standard-supported ecosystems in other areas of technology. Its unilateral use of an always-expanding super-set of every standard kept its competitors scrambling to be compatible with its software, but those markets never went far because of customers’ fear of lock-in, a corporate specialty, and that corporation is receding in importance as a very direct result. That was just one of its dishonest tactics; nobody trusts them, and consequently the corporation doesn’t get the breadth of ideas it needs to keep abreast of the total market. $Billions in profits and a huge war chest have not substituted for the good will of the 10s of 1000s of other individuals and interests in the larger technology market, and the company retreats to niche-player status. Of course, those tactics were attributes of its own management, and the corporation was and is a snakepit of corporate infighting, now firmly in the corporate DNA. Its own employees can’t trust it.

Business is intrinsically a positive-sum game, and that is the reason societies using it became wealthy. But reverence for the personal attributes needed to pursue positive-sum games is no longer a part of bringing up children, nor a factor in our assessment of people, in our society. Too bad for us.

****Although I know nothing of those fields, this system design seems so elegant that I told the Generalissimo there was a good chance it hadn’t been used yet, and he should get a patent. A distraction from his goal of “re-architecting The System”, he said. The man has some architectural chops, judging from this bit of work and my small expertise.

 
