Newsgroups: comp.risks
X-issue: 8.74
Date: 23 May 89 0131 PDT
From: Les Earnest <LES@SAIL.Stanford.EDU>
Subject: SAGE-BOMARC risks
This is an account of two ancient (30-year-old) computer risks that were
not publicly disclosed for the usual reasons. They involve an air defense
system called SAGE and a ground-to-air missile called BOMARC.
SAGE was developed by MIT in the late '50s with Air Force sponsorship to
counter the threat of a manned bomber attack by you-know-who. It was also
designed to counter the political threat of a competing system called Nike
that was being developed by the Army.
SAGE was the first large real time computer system. "Large" was certainly
the operative term -- it had a duplexed vacuum tube computer that covered
an area about the size of a football field and a comparably sized air
conditioning system to take away the enormous heat load. It used an
advanced memory technology that had just been invented, namely magnetic
core, and had a larger main memory than any earlier computer, though it
is not impressive by current standards -- it would now be called 256k
bytes, though no one had heard of a byte then.
The system collected digitized radar information from multiple sites and
used it to automatically track aircraft and guide interceptors. SAGE was
designed to work initially with manned interceptors such as the F-102,
F-104, and F-106 and used a radio datalink to transmit guidance commands
to these aircraft. It was later modified to work with the BOMARC missile.
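To give a feel for what automatic tracking entails, here is a minimal
plot-to-track correlation loop, sketched in modern Python. This is a
generic nearest-neighbor tracker offered purely as an illustration, not
SAGE's actual algorithm, and every name in it is invented.

  def advance(tracks, plots, max_miss=10.0):
      # tracks: list of (x, y, vx, vy); plots: list of (x, y) radar returns
      updated = []
      for (x, y, vx, vy) in tracks:
          px, py = x + vx, y + vy                  # predicted position
          best = min(plots, default=None,
                     key=lambda p: (p[0] - px) ** 2 + (p[1] - py) ** 2)
          if best and (best[0] - px) ** 2 + (best[1] - py) ** 2 <= max_miss ** 2:
              plots.remove(best)                   # plot claimed by this track
              updated.append((best[0], best[1], best[0] - x, best[1] - y))
          else:
              updated.append((px, py, vx, vy))     # coast on dead reckoning
      return updated

  # One radar scan: a single track moving east at 3 units per scan.
  print(advance([(0.0, 0.0, 3.0, 0.0)], [(3.2, 0.1), (40.0, 40.0)]))

Note that the correlation step depends entirely on trustworthy range and
azimuth data, a point that matters later in this series.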
Each computer site had about 50 display consoles that allowed the
operators to assign weapons to targets and monitor progress. As I recall,
there were eventually between one and two dozen SAGE systems built in
various parts of the U.S.
BOMARC missiles used a rocket booster to get airborne and a ramjet to
cruise at high altitude to the vicinity of the target. The missile then
used its Doppler radar to locate the target more accurately so that it
could dive at it and detonate. It could carry either a high-explosive or
a nuclear warhead.
BOMARCs were housed in hardened structures. When a given missile received a
launch command from SAGE, sent via land lines, the roof would roll back, the
missile would erect, and if it had received a complete set of initial guidance
commands in the meantime it would launch in the specified direction.
Testing the fire-up decoder
It was clearly important to ensure that the electronic guidance system in the
missile was working properly, so the Boeing engineers who designed the launch
control system included a test feature that would generate a set of synthetic
launch commands so that the missile electronics could be monitored for correct
operation. When in test mode, of course, the normal sequence of erecting and
launching the missile was suppressed.
I worked on SAGE during 1956-60 and one of our responsibilities was to
integrate BOMARC into that system. This led us to review the handling of
launch commands in various parts of the system. In the course of this
review, one of our engineers noticed a rather serious defect -- if the
launch command system was tested, the missile would be in a state of
readiness for launch. If the "test" switch was then returned to "operate"
without individually resetting the control systems in each missile that had
been tested, they would all immediately erect and launch!
Needless to say, that "feature" was modified rather soon after we mentioned it
to Boeing.
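In software terms, the defect was latched state surviving a mode change.
Here is a minimal sketch in modern Python of both the flaw and the fix;
all of the names are invented, and the real controls were switch and
relay logic rather than code.

  class Missile:
      def __init__(self):
          self.commands_loaded = False      # readiness state, latched

      def load_commands(self):
          self.commands_loaded = True       # persists until explicitly reset

      def reset(self):
          self.commands_loaded = False

  class LaunchControl:
      def __init__(self, missiles):
          self.missiles = missiles
          self.mode = "operate"

      def run_test(self):
          self.mode = "test"                # erect/launch suppressed in test
          for m in self.missiles:
              m.load_commands()             # synthetic commands latch state

      def set_operate(self):
          # The fix: clear every tested missile before leaving test mode.
          # Without this loop, each missile still holding commands would
          # have erected and launched the moment the switch hit "operate."
          for m in self.missiles:
              m.reset()
          self.mode = "operate"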
Duplexed for reliability
For some reason, I got assigned the responsibility for securing approval to put
nuclear warheads on the second-generation BOMARCs, which involved "proving" to
a government board that the probability of accidentally launching a missile on
any given day as a result of equipment malfunctions was less than a certain
very small number and that one berserk person couldn't do it by himself. We
did eventually convince them that it was adequately safe, but in the course of
our studies we uncovered a scary problem.
The SAGE system used land lines to transmit launch commands to the missile site
and these lines were duplexed for reliability. Each of the two lines followed
a different geographic route so that they would be less likely to be taken out
by a single blast or malfunction. There was a black box at the missile site
that could detect when the primary line went bad and automatically switched to
the alternate. On examination, we discovered that if both lines were bad at
the same time, the system would remain connected to the alternate line and the
amplifiers would then pick up and amplify whatever noise was there and
interpret it as a stream of random bits.
We then did a Markov analysis to determine the expected time that it would take
for a random bit stream to generate something that looked like a "fire" command
for one of the missiles. We found that the expected value was a little over 2
minutes. When such a command was received, of course, the missile would erect
and prepare to launch. However, unless the missile also received a number of
other commands during the launch window, it would automatically abort.
Fortunately, we were able to show that getting a complete set of acceptable
guidance commands within this time was extremely improbable, so this failure
mode did not present a nuclear safety threat.
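The arithmetic behind that figure is easy to approximate, although the
command format and line speed are not given here; the 16-bit code length
and 600 bit/s rate below are invented, chosen only because they land in
the reported ballpark. For a bit pattern with no self-overlap, the
expected number of random bits before it first appears is 2^n.

  import random

  def mean_wait_bits(pattern, trials=500):
      # Monte Carlo: average number of random bits until `pattern`
      # first appears in the stream.
      n, total = len(pattern), 0
      for _ in range(trials):
          window, bits = "", 0
          while window != pattern:
              window = (window + random.choice("01"))[-n:]
              bits += 1
          total += bits
      return total / trials

  # Sanity check on a short, non-self-overlapping pattern: the
  # expected wait is 2**10 = 1024 bits.
  print(mean_wait_bits("1000000000"))

  # Scaled to a hypothetical 16-bit fire code on an assumed 600 bit/s
  # line: 2**16 / 600 is about 109 seconds, the order of two minutes.
  print(2 ** 16 / 600.0, "seconds")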
The official name of the first BOMARC model was IM-99A, so I wrote a report
about this problem titled "Inadvertent erection of the IM-99A." While that
title raised a few eyebrows, the report was destined to get even more attention
than I expected. Its prediction came true a couple of weeks after it was
released -- both phone lines went bad on a BOMARC site in Maryland, near
Washington D.C., causing a missile to suddenly erect and start the launch
sequence, then abort. Needless to say, this scared hell out of the site staff
and a few other people.
The Air Force was suitably impressed with our prediction and I was
immediately called upon to chair an MIT-AT&T committee that had the honor
of fixing the problem. The fix was rather easy: just disconnect when both
lines are bad. With good engineering practice, of course, this kind of
problem wouldn't occur. However, the world is an imperfect place.
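The corrected selection logic fits in a few lines. In sketch form (the
real black box was hardware; these names are mine):

  def select_line(primary_ok, alternate_ok):
      # Before the fix, the box stayed on the alternate line when both
      # were bad, amplifying noise into random bits for the decoder.
      if primary_ok:
          return "primary"
      if alternate_ok:
          return "alternate"
      return None   # the fix: disconnect rather than decode noise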
Les Earnest
Newsgroups: comp.risks
X-issue: 9.60
Date: 10 Jan 90 1200 PST
From: Les Earnest <LES@SAIL.Stanford.EDU>
Subject: The C3 legacy: top-down goes belly-up recursively
After more than 30 years of accumulated evidence to the contrary, the U.S.
Defense Department apparently still believes that so-called command-control-
communications (C3) systems should be designed and built from the top down as
fully integrated systems. While that approach may have some validity in the
design of weapon systems, it simply doesn't work for systems intended to gather
information, aid analysis, and disseminate decisions. The top-down approach
has wasted billions of dollars so far, with more to come, apparently.
I noticed the citation in RISKS 9.52 of the article, "The Pentagon's Botched
Mission," in DATAMATION, Sept. 1 1989, which describes the latest development
failures in the World Wide Military Command and Control System (WWMCCS). The
cited article indicates that they are still following the same misguided "total
system" approach that helped me to decide to leave that project in 1965. I
confess that it took me a while to figure out just how misguided that approach
is -- I helped design military computer systems for 11 years before deciding to
do something else with my life.
In RISKS 9.56, Dave Davis and Tom Reid observe that current C3 development
projects seem to be sinking deeper into the mire of nonperformance even as the
plans for these systems become more grandiose and unrealistic.
Please understand that I am not arguing against top-down analysis of
organizational goals and functions. It is clearly essential to know what an
organization's important responsibilities are in order to properly prioritize
efforts. Based on my experience, attempts at aiding analysis and
decision-making tasks with computer applications should begin with the lowest
levels and proceed upward IN THE CASES THAT WORK. Contrary to some widely held
beliefs, many such tasks do not lend themselves to computer assistance and the
sooner one weeds out the mistakes and intractable tasks the faster one can
improve the areas that do lend themselves to automation and integration.
A great deal of time, effort, and money can be saved by approaching development
in an evolutionary bottom-up way. It is essential to shake down, test, and
improve lower level functions before trying to integrate at a higher level.
Trying to do it all at once leads to gross instability that takes so long to
resolve that the requirements change long before the initial version of the
system is "finished." Each time one moves up a level it is usually necessary
to redesign and modify some or all of the system. It is much faster to do that
a number of times than it is to try to build a "total system" the first time
because that approach almost never works.
Someone (Karl von Clausewitz?) once said that people who don't know history are
condemned to repeat it. A modern corollary is that people who do know history
will choose to repeat it as long as it is profitable. Unfortunately, the
Defense Department's procurement policies often reward technical incompetence
and charlatanism. I will support this claim with a few "peace stories" that
would have been much more atrocious "war stories" if any of the systems that we
designed had been involved in a real war. Fortunately, that didn't happen.
The presumption that computer-communication system development should be done
on a grand scale from the outset is just one of many bad ideas that have taken
root within the military-industrial establishment. The reason that this
misconception has persisted for decades is that there is no penalty associated
with failure. On the contrary, failures are often very profitable to the
contractors -- the bigger, the better. The bureaucrats who initiate these
fiascos usually move on before the project fails, so if anyone tries to point
fingers they can say that it was the fault of the subsequent management.
While the "total system" approach is one of the more persistent causes of
failure in C3 development, it is by no means the only misconception afloat. In
subsequent segments I will review some other causes of historical fiascos. All
of this will be ancient history, since I got out of this field about 25 years
ago. Of course, many of the more recent fiascos are protected from public
scrutiny anyway by the cloak of national security.
(Next segment: a SAGE beginning.)
-Les Earnest (Les@Sail.Stanford.edu)
Newsgroups: comp.risks
X-issue: 9.65
Date: 01 Feb 90 2101 PST
From: Les Earnest <LES@SAIL.Stanford.EDU>
Subject: The C3 legacy, Part 2: a SAGE beginning
Thanks to moraes@csri.toronto.edu for pinning down my half-remembered
quotation in the preceding segment (RISKS 9.60):
> The actual quote is "Those who cannot remember the past are condemned
> to repeat it." from George Santayana's "The Life of Reason".
The grandfather of all command-control-communication (C3) systems was an
air defense system called SAGE, a rather tortured acronym for Semi-
Automatic Ground Environment. As I reported earlier in RISKS 8.74, some of
the missiles that operated under SAGE had a serious social problem: they
tended to have inadvertent erections at inappropriate times. A more
serious problem was that SAGE, as it was built, would have worked only in
peacetime. That seemed to suit the Air Force just fine.
SAGE was designed in the mid to late 1950s, primarily by MIT Lincoln Lab,
with follow-up development by IBM and by nonprofits System Development
Corp. and Mitre Corp. The latter two were spun off from RAND and MIT,
respectively, primarily for this task.
SAGE was clearly a technological marvel for its time, employing digitized
radar data, long distance data communications via land lines and
ground-air radio links, the largest computer (physically) built before or
since, a special-purpose nonstop timesharing system, and a large
collection of interactive display terminals. SAGE was necessarily
designed top-down because there had been nothing like it before -- it was
about 10 years ahead of general purpose timesharing systems and 20 years
ahead of personal computers and workstations.
While the designers did an outstanding job of solving a number of
technical problems, SAGE would have been relatively useless as a defense
system if a manned bomber attack had occurred, for the following reasons.
1. COUNTERMEASURES. Each SAGE system was designed to automatically track
aircraft within a certain geographic area based on data from several
large radars. While the system worked well under peacetime conditions,
an actual manned bomber attack would likely have employed active radar
jamming, radar decoys, and other countermeasures. The jamming would
have effectively eliminated radar range information and would even have
made azimuth data imprecise, which meant that the aircraft tracking
programs would not have worked. In other words, this was an air defense
system that was designed to work only in peacetime!
(Some "Band-aids" were later applied to the countermeasures vulnerability
problem, but a much simpler system would have worked better under expected
attack conditions.)
2. HARDENING. Whereas MIT had strongly recommended that the SAGE computers
and command centers be put in hardened, underground facilities so that
they could at least survive near misses, the "bean counters" in the Pentagon
decided that this would be too expensive. Instead, they specified
above-ground concrete buildings without windows. This was, of course,
well suited to peacetime use.
3. PLACEMENT. While the vulnerabilities designed into SAGE by MIT and the
Pentagon made it relatively ineffective as a defense system, the Air
Defense Command added a finishing blunder by siting most of the SAGE
computer facilities in such a way that they would be bonus targets!
This was an odd side effect of military politics and sociology, as
discussed below.
In the 1950s, General Curtis LeMay's Strategic Air Command consistently
had first draw on the financial resources of the Defense Department. This
was due to the ongoing national paranoia regarding Soviet aggression and
some astute politicking by LeMay and his supporters. One thing that LeMay
insisted on for his elite SAC bases was that they have the best Officers
Clubs around.
MIT had recommended that the SAGE computer facilities be located remotely,
away from both cities and military bases, so that they would not be bonus
targets in the event of an attack. When the Air Defense Command was
called upon to select SAGE sites, however, they realized that their people
would not enjoy being assigned to the boondocks, so they decided to put
the SAGE centers at military bases instead.
Following up on that choice, the Air Defense Command looked for military
bases with the best facilities, especially good O-clubs. Sure enough, SAC
had the best facilities around, so they put many of the SAGE sites on SAC
bases. Given that SAC bases would be prime targets in any manned bomber
attack, the SAGE centers thus became bonus targets that would be destroyed
without extra effort. Thus the peacetime lifestyle interests of the
military were put ahead of their defense responsibilities.
SAGE might be regarded as successful in the sense that no manned bomber
attack occurred during its life and that it might have served as a
deterrent to those considering an attack. There were reports that the
Soviet Union undertook a similar experimental development in the same time
period, though that story may have been fabricated by Air Force
intelligence units to help justify investment in SAGE. In any case, the
Russians didn't deploy such a system, either because they lacked the
capability to build a computerized, centralized "air defense" system such
as SAGE or had the good sense not to expend their resources on such a
vulnerable kludge.
(Next segment: command-control catches on.)
-Les Earnest (Les@Sail.Stanford.edu)
Newsgroups: comp.risks
X-issue: 9.67
Date: 05 Feb 90 1523 PST
From: Les Earnest <LES@SAIL.Stanford.EDU>
Subject: The C3 legacy, Part 3: Command-control catches on
(Continuing from RISKS 9.65)
As the U.S. Air Force committed itself to the development of the SAGE air
defense system in the late 1950s, new weapons that did not require centralized
guidance came to be rejected, even though some appeared to be less vulnerable
to countermeasures than those that depended on SAGE. An example was a very
fast, long range interceptor called the F-109 that was to carry a radar that
would enable it to locate bombers at a considerable distance and attack them.
As such, it did not need an elaborate ground-based computer control system.
My group at MIT Lincoln Lab had been responsible for integrating earlier
interceptors and missiles into SAGE. We subsequently joined Mitre Corporation
when it was formed from Lincoln Lab's rib and were later assigned the
responsibility for examining how the F-109 interceptor might be used.
I had assumed that the Air Force was genuinely interested in seeing how the
F-109 could best function in air defense. Accordingly, we worked out a plan in
which the interceptors that were in service would be deployed to various
airfields, both civilian and military, so as to make them less vulnerable to
attack. This dispersal together with their ability to function with minimal
information about the locations of attacking bombers appeared to offer a rather
resilient air defense capability that could survive even the destruction of the
vulnerable SAGE system.
When we published a utilization plan for the F-109 based on these ideas, the
Air Force made it clear that we had reached the "wrong" conclusion -- we were
supposed to prove that it was a bad idea. We apparently had been chosen to
"study" it because, as designers of SAGE, we were expected to oppose any
defensive weapons that would not need SAGE.
In order to deal with the embarrassing outcome of this study, a Colonel was
commissioned to write a refutation that confirmed the ongoing need for
centralized computer control. The Air Force insisted that anyone who requested
our report must also get a copy of the refutation. Mitre necessarily acceded.
In any case, the F-109 was never built in quantity.
The seductive image
Though the designers of SAGE came to recognize its weaknesses and
vulnerabilities and the Air Force should have been reluctant to build more
systems of the same type, it somehow came to be regarded as the model of what
the next generation of military control systems should be. Never mind that it
was essentially useless as a defense system -- it looked good!
The upper floor of each SAGE command center had a large room with subdued
lighting and dozens of large display terminals, each operated by two people.
Each terminal had a small storage-tube display for tabular reference data, a
large CRT display of geographical and aircraft information (with a flicker
period of just over one second!), and a light gun for pointing at particular
features. Each terminal also had built-in reading lights, telephone/intercoms,
and electric cigar lighters. This dramatic environment with flickering
phosphorescent displays clearly looked to the military folks like the right
kind of place to run a war. Or just to "hang out."
Downstairs was the mighty AN/FSQ-7 computer, designed by MIT using the
latest and greatest technology available and constructed by IBM. It had:
o A dual-processor nonstop timesharing system. The off-line computer was
usually either undergoing preventive maintenance or was following the
actions of the online computer so that it would be ready to take over if
that machine failed. In this respect it was similar to the commercial
nonstop systems developed much later by Tandem and its followers. (A
sketch of this hot-standby arrangement appears just after this list.)
o Rows of glimmering vacuum tubes spread over an area about the size of a
football field, with lots of large magnetic drums used both for secondary
storage and as communications buffers. (Magnetic disks had not yet been
perfected.)
o The recently invented magnetic core memories in the largest and fastest
configuration yet built: 256K bytes with a 6-microsecond cycle time. Each
of the two main memories was packed into the volume of a shower stall, a
remarkable density for that period.
o A gigantic air conditioning system to suck all the heat out of the
monstrous computer.
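As promised above, here is the hot-standby idea in miniature, sketched in
modern Python with invented names; the real machinery was vastly more
involved.

  class Machine:
      def __init__(self, name):
          self.name = name
          self.state = {}
          self.healthy = True

      def apply(self, update):
          self.state.update(update)

  class DuplexSystem:
      def __init__(self):
          self.online, self.standby = Machine("A"), Machine("B")

      def process(self, update):
          if not self.online.healthy:
              # Switch over; the standby already holds current state.
              self.online, self.standby = self.standby, self.online
          self.online.apply(update)
          self.standby.apply(update)    # shadow every online action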
Remarkably, all of this new technology worked rather well. There were some
funny discoveries along the way, though. For example, in doing preventive
maintenance checks on tubes, a technician found one that was completely dead
but had not been detected by the diagnostics. Upon further examination it was
discovered that this tube didn't do anything! This minor blunder no doubt
arose during one of the many redesigns of the machine.
Both the prototype and operational SAGE centers were frequently visited by
military brass, higher level bureaucrats, and members of Congress. They
generally seemed to be impressed by the image of powerful, central control that
this leading-edge technological marvel had. Of course, General LeMay and his
Strategic Air Command could not sit by and let another organization develop
advanced computer technology when SAC didn't have any.
In short order the SAC Control System was born. Never mind that there was not
much for it to do -- it had to be at least as fancy as SAGE. When the full
name was written out, it became Strategic Air Command Control System. The
chance juxtaposition of "Command" and "Control" in this name somehow conjured
up a deeper meaning in certain military minds.
In short order, Command-Control Systems became a buzz word and a horde of
development projects was started based on this "concept." The Air Force
Systems Command soon realized that it had discovered a growth industry and
reorganized accordingly. The specifications for the new C2 systems generally
contained no quantitative measures of performance that were to be met -- the
presumption seemed to be that whatever was being done already could be done
faster and better by using computers! How wrong they were.
(Next segment: Command-control takes off)
-Les Earnest (Les@Sail.Stanford.edu)
Newsgroups: comp.risks
X-issue: 9.74
Date: 05 Mar 90 2025 PST
From: Les Earnest <LES@SAIL.Stanford.EDU>
Subject: The C3 legacy, Part 4: A gaggle of L-systems
Martin Minow contributes some SAGE anecdotes in RISKS 9.68, including the
following.
> My friend also mentioned that the graphics system could be used to display
> pictures of young women that were somewhat unrelated to national defense
> -- unless one takes a very long view -- with the light pen being used
> to select articles of clothing that were considered inappropriate in the
> mind of the viewer. (Predating the "look and feel" of MacPlaymate by
> almost 30 years.) Perhaps Les could expand on this; paying special
> consideration to the risks involved in this type of programming.
While light pens did exist in that period, SAGE actually used light
_guns_, complete with pistol grip and trigger, in keeping with military
traditions. Interceptors were assigned to bomber targets on the large
displays by "shooting" them in a manner similar to photoelectric arcade
games of that era.
Regrettably, I never witnessed the precursor to MacPlaymate, which
probably appeared after my involvement. While I never saw anything bare
on the SAGE displays, a colleague (Ed Fredkin) did stir up some trouble by
displaying a large Bear (a Soviet bomber of that era) as a vector drawing
that flew across the screen. Unfortunately, he neglected to deal with X, Y
register overflow properly, so it eventually overflew its address space.
The resulting collision with the edge of the world produced some bizarre
imagery, as distorted pieces of the plane came drifting back across the
screen.
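The wraparound mechanics look like this in modern Python; the 16-bit
register width is my guess, not a SAGE specification. Vertices that had
wrapped while their neighbors had not would account for the distorted
pieces.

  def wrap16(x):
      # two's-complement wraparound of a 16-bit signed register
      x &= 0xFFFF
      return x - 0x10000 if x >= 0x8000 else x

  x = 32760
  for _ in range(4):
      x = wrap16(x + 4)   # the Bear drifts off the east edge...
      print(x)            # 32764, -32768, -32764, -32760: back from the west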
(Continuing from RISKS 9.67)
A horde of command-control development projects was initiated by the Air
Force in the early 1960s. Most were given names and each was assigned a
unique three-digit code followed by "L." Naturally, they came to be
called "L-systems." A Program Manager (usually a Colonel) was put
in charge of each one to ensure that financial expenditure goals were met.
Those who consistently spent exactly the amounts that had been planned
were rewarded with larger sums in succeeding budgets. Monthly management
reviews almost never touched on technical issues and never discussed
operational performance -- it was made clear that the objective was to
spend all available funds by the end of the fiscal year and that nobody
cared much about technical or functional accomplishments.
In 1960, after earlier switching from MIT Lincoln Lab to Mitre Corp., my
group was assigned to provide technical advice to a Colonel M., who was in
charge of System 438L. This system was intended to automate the
collection and dissemination of military intelligence information. Unlike
most command-control systems of that era, it did not have a descriptive
name that anyone used -- the intelligence folks preferred cryptic
designations, so the various subsystems being developed under this program
were generally called just "438L."
I had recently done a Masters thesis at MIT in the field of artificial
intelligence and hoped to find applications in this new endeavor. I soon
learned that the three kinds of intelligence have very little in common
(i.e. human, artificial, and military).
IBM was the system contractor for 438L and was already at work on an
intelligence database system for the Strategic Air Command Headquarters
near Omaha. They were using an IBM 7090 computer with about 30 tape
drives to store a massive database. It turned out to be a dismal failure
because of a foreseeable variant of the GIGO problem, as discussed below.
The IBM 438L group had also developed specifications for a smaller system
that was to be developed for other sites. Colonel M. asked us to review
the computer Request for Proposals that they had prepared. He said that
he planned to buy the computer sole-source rather than putting it out for
bids on the grounds that there was "only one suitable computer available."
When I read it, there was no need to guess which computer he had in mind
-- the RFP was essentially a description of the IBM 1410, a byte-serial,
variable word length machine of that era.
When Colonel M. sought my concurrence on the sole-source procurement, I
demurred, saying that there were at least a half-dozen computers that
could do that job. I offered to prepare a report on the principal
alternatives, including an approximate ranking of their relative
performance on the database task. He appeared vexed, but accepted my
offer.
My group subsequently reviewed alternative computers and concluded that
the best choice, taking into account performance and price, was the Bendix
G-20. I reported this informally to Colonel M. and said that we would
write it up, but he said not to bother. He indicated that he was very
disappointed in this development, saying that it was not reasonable to
expect his contractor (IBM) to work with a machine made by another
company. I argued that a system contractor should be prepared to work
with whatever is the best equipment for the job, but Col. M. seemed
unconvinced.
This led to a stalemate; Colonel M. said that he was "studying" the
question of how to proceed, but nothing further happened for about a year.
Finally, just before I moved to another project, I mentioned that the IBM
1410 appeared to be capable of doing the specified task, even though it
was not the best choice. Col. M. apparently concluded that I would not
make trouble if he proceeded with his plan. I later learned that he
initiated a sole-source procurement from IBM just two hours after that
conversation.
In the meantime, the development project at SAC Headquarters was falling
progressively further behind schedule. We talked over this problem in my
group and one fellow who had done some IBM 709 programming remarked that
he thought he could put together some machine language macros rather
quickly that would do the job. True to his word, this hacker got a query
system going in one day! I foolishly bragged about this to the manager of
the IBM group a short time later. Two weeks after that I discovered that
he had recruited my hotshot programmer and immediately shipped him to
Omaha. I learned to be more circumspect in my remarks thereafter.
The IBM 438L group did eventually deliver an operable database system to
SAC, but it turned out to be useless because of GIGO phenomena (garbage in,
garbage out). Actually, it was slightly more complicated than that.
Let's call it GIGOLO -- Garbage In, Gobbledygook Obliterated, Late Output.
The basic problem was that in order to build a structured database, the
input data had to be checked and errors corrected. In this batch
environment, the tasks of data entry, error checking, correction, and file
updating took several days, which meant that the operational database was
always several days out of date.
The manual system that this was supposed to replace was based on people
reading reports and collecting data summaries on paper and grease pencil
displays. That system was generally up-to-date and provided swift answers
to questions because the Sergeant on duty usually had the answers to the
most likely questions already in his head or at his fingertips. So much
for the speed advantage of computers!
After several months of operation with the new computer system, the
embarrassing discovery was made that no questions were being asked of it.
The SAC senior staff solved this problem by ordering each duty officer to
ask at least two questions of the 438L system operators during each shift.
After several more months of operation we noted that the total number of
queries had been exactly two times the number of shifts in that period.
The fundamental problem with the SAC 438L system was that the latency
involved in creating a database from slightly buggy data exceeded the
useful life of the data. The designers should have figured that out going
in, but instead they plodded away at creating this expensive and useless
system. On the Air Force management side, the practice of hiring a
computer manufacturer to do system design, including the specification of
what kind of computer to buy, involved a clear conflict-of-interest,
though that didn't seem to worry anyone.
(Next segment: Subsystem I)
-Les Earnest (Les@Sail.Stanford.edu)
Newsgroups: comp.risks
X-issue: 9.80
Date: 11 Apr 90 1635 PDT
From: Les Earnest <LES@SAIL.Stanford.EDU>
Subject: The C3 legacy, Part 5: Subsystem I
(Continuing from RISKS 9.74)
Of the dozens of command and control system development projects that were
initiated by the U.S. Air Force in the early 1960s, none appeared to
perform its functions as well as the manual system that preceded it.
I expect that someone will be willing to argue that at least one such
system worked, but I suggest that any such claims not be accepted
uncritically.
All of the parties involved in the development of C3 systems knew that their
economic or power-acquisition success was tied to the popular belief that the
use of computers would substantially improve military command functions. The
Defense Department management and the U.S. Congress must bear much of the
responsibility for the recurring fiascos because they consistently failed to
insist on setting rational goals. Goals should have been specified in terms of
information quality or response time for planning and executing a given set of
tasks. The performance of these systems should have been predicted in the
planning phase and measured after they were built so as to determine whether
the project was worthwhile.
Instead, the implicit goal became "to automate command and control," which
meant that these systems always "succeeded," even though they didn't work.
Despite a solid record of failure in C3 development, I know of just one such
project that was cancelled in the development phase. That was Subsystem I,
which was intended to automate photo-interpretation and was developed for the
Air Force by Bunker Ramo, as I recall.
The "I" in the name of this project supposedly stood for "Intelligence" or
"Interpretation." This cryptic name was apparently chosen to meet the needs of
the prospective users in the intelligence community, who liked to pretend that
nobody knew what they were doing. This pretense occasionally led to odd
conduct, such as when they assigned code names to various devices and tried to
keep them secret from outsiders. For example, a secret name was assigned to
one of the early U.S. spy satellites -- as I recall it was Samos -- but when
that name somehow showed up in the popular press they tried to pretend that no
such thing existed. In support of this claim, everyone in the intelligence
community was directed to stop using that name immediately.
When I attended a meeting in the Pentagon a few days after this decree and
mentioned the forbidden word, the person operating the tape recorder
immediately said "Wait while I back up the tape to record over that!" This was
a classified discussion, so there was no issue of public disclosure involved,
just the belief that there should be no record of the newly contaminated name.
Sometime in the 1961-62 period, the Air Force decided to terminate the
development of Subsystem I. A group of about 30 people from various parts of
the defense establishment, including me, was invited to visit the facility in
suburban Los Angeles where the work was going on to see if any of it could be
used in other C3 systems. We were given a two day briefing on the system and
its components, the principal one being a multiprocessor computer.
The conceptual design of this Polymorphic Computer, as they called it, was
attributed to Sy Ramo, who had earlier helped lead Hughes Aircraft and
Ramo-Wooldridge (later called TRW) to fame and fortune. The architecture of
this new machine was an interesting bad idea. The basic idea was to use many
small computers instead of one big one, so that the system could be scaled to
meet various needs simply by adjusting the number of processors. The problem
was that these units were rather loosely coupled and each computer had a
ridiculously small memory -- just 1K words. Each processor could also
sequentially access a 1K buffer. Consequently it was very awkward to program
and had extremely poor performance.
I sought out the Subsystem I program manager while I was there and asked if our
group was the only one being offered this "free system." He said that we were
just one of a number of groups that were being flown in over several months
time. When I asked how much they were spending on trying to give it away, he
said about $9 million (which would be equivalent to about $38 million today).
The Air Force Systems Command seemed to be trying desperately to make this
program end up as a "success" no matter how much it cost. When I asked why the
program was being cancelled, I got a very vague answer.
I did not recommend that my group acquire any of that equipment and as far as I
know nobody else did. The question of why Subsystem I was cancelled remained
unresolved as far as I was concerned. It is conceivable that it was because
they figured out that it wasn't going to work, but neither did the other C3
systems, so the reason must have been deeper (or shallower, depending on your
perspective). My guess is that they got into some kind of political trouble,
but I will probably never know.
(Next part: the Foggy Bottom Pickle Factory)
-Les Earnest (Les@Sail.Stanford.edu)
Newsgroups: comp.risks
X-issue: 9.97
Date: 30 May 90 1036 PDT
From: Les Earnest <LES@SAIL.Stanford.EDU>
Subject: The C3 Legacy, Part 6: Feedback
[My apologies for the gap in this series -- I'm running for City Council
currently and don't seem to have enough spare cycles. -Les]
Was there ever a command and control system that worked?
My opening remark in RISKS 9.80 was: "Of the dozens of command and
control system development projects that were initiated by the U.S. Air
Force in the early 1960s, none appeared to perform its functions as well
as the manual system that preceded it." Gene Fucci, who worked on the Air
Force satellite surveillance programs as a project engineer on SAMOS and
later as Field Force Test Director of MIDAS, found my remarks "somewhat
distorted" in that he believes the satellite command and control systems
worked well.
I will plead relative ignorance of those systems, but note that they were
called just "control systems" until "command and control" became a
buzzword in the early 1960s. I do not wish to take the position that all
systems to which the term "command and control" or "command-control-
communications" was eventually applied were failures -- just that all of
the dozens that I knew of were failures.
SAGE revisited
Some of the earlier C3 Legacy postings on SAGE have found their way via a
circuitous route to an old friend of mine, Phil Bagley, who also helped
design that system. Phil has now sent me snail-mail that takes a different
view of that program, as follows.
"I think that you have discovered what is behind the curtain. In case you
haven't, let me tell you my view. The motivation behind a big military
electronic system such as SAGE or BMEWS is _not_ to have it work. It is
just to create the _illusion_ that the sponsor is doing his job, and
perhaps peripherally to provide an opportunity to exercise influence.
Lincoln Lab and MITRE had no motivation to point out the obvious -- that
the emperor had no clothes. If you had asked a responsible think tank who
had no stake in the outcome how to deal most effectively with the issues,
you would have recommendations very different from those that guided the
electronic systems developments.
"Now it wasn't all for naught. Out of SAGE, computer technology got a big
boost. IBM learned how to build core memories and made a lot of money
building machines with core memories. Lots of people like you and me got
good systems and programming training (I still write programs). Ken Olsen
learned how to design digital equipment and ultimately gave the world a
few billion dollars worth of Vaxes.
"The moral of all this is: When things appear not to make sense you very
probably are looking at it from the `wrong' point of view. Another way
to say it: It's pretty hard to fool Mother Nature, so if it appears that
she is being fooled, try to find a point of view which doesn't imply that
she's being fooled."
While Phil and others may be comforted by this view, I will argue that it
amounts to nothing more than "Whatever is, is right," which grates on my
rationalist soul. I believe that if a comparable amount of government
money had been invested in research, or on a more tractable application,
that computer technology would have advanced much more quickly than
actually happened.
I believe that as soon as MIT and MITRE engineers figured out that they
had designed an unworkable system, they had an ethical obligation to point
that out to their sponsors. Instead they (we) helped perpetuate the myth
that it worked so that we could continue in our beloved technological
lifestyle.
Phil's mention of Ken Olsen reminds me that we gave a going-away party for
him and Harlan Anderson at the MIT Faculty Club when they left to form
their company to make transistorized digital modules based on experience
in building the TX-0 and TX-2 computers at Lincoln Lab. We told them that
they could have their old jobs back after their start-up went belly-up, as
we all expected. In fact, that reportedly came rather close to happening
more than once in the first couple of years, but somehow DEC squeaked
through and grew a bit.
Requiem: the SAIL computer, which would have reached the grand old age
of 25 next week, is slated to retire tonight and die in the near future.
It has provided an intellectual home for a very productive generation of
researchers and will be remembered fondly.
(Next part: the Foggy Bottom pickle factory)
-Les Earnest (Les@Go4.Stanford.edu)
Newsgroups: comp.risks
X-issue: 13.35
Date: Thu, 2 Apr 92 20:28:47 -0800
From: Les Earnest <les@sail.stanford.edu>
Subject: War Games II (Raymond, RISKS-13.33)
I found Eric Raymond's account of NORAD telephone indirection amusing but not
at all unusual -- I recently encountered a more elaborate runaround in dealing
with the county bureaucracy that manages the bus system here. Eric is lucky
that he did not get the treatment that we used when I had an office at C.I.A.
headquarters. There we answered the phone with "Hello" and, unless the calling
party immediately named a person who shared the office, we were programmed to
hang up without another word.
As one who helped design the initial computer system that went into the
Cheyenne Mountain facility, and who much later provided some input to the
screenwriters who wrote "War Games," I will assert that there is less there
than meets the eye (and imagination). That facility was intended to integrate
and control a number of other so-called command- control-communication (C3)
systems, but suffered from the same fundamental design flaws as its
predecessors. (See my C3 series in RISKS that I suspended two years ago, but
that I intend to resume as soon as I get time.)
Let me acknowledge that the Cheyenne Mountain facility is not totally useless.
If a nuclear holocaust does occur and if the environmental security systems
there function as advertised, the residents of that hole may have an
opportunity to repopulate the Earth after a time.
Fortunately, the even more elaborate and senseless proposal to develop a
successor system, dubbed the Strategic Defense Initiative by Ronald Reagan, now
seems to be fading away despite attempts to revive it as a Killer Meteorite
Initiative.
Les Earnest, 12769 Dianne Drive, Los Altos Hills, CA 94022
415 941-3984 Les@cs.Stanford.edu ...decwrl!cs.Stanford.edu!Les