"Can
computers think?". I vividly remember his crestfallen face when he realised, too late, that he
had been trapped into publishing an article on computer architecture, which had
no meaning for him. There was already an embargo on discussion about advances
in computer architecture. Programmers wrote books and articles urging that the
hardware of computers be frozen for a decade so that the software problems could
be sorted out. Meanwhile, Professor Michie and the like boasted that anything
we would ever want to do would be achieved with the single processor Von Neumann
Computer. He was widely admired for the wisdom of his statement, and for his
ability to fathom the hi-tec future.
In around 1970, the late Gordon Scarrott promoted CAFS, a content-addressable file system which was used for telephone directory enquiries. However, this was not the needed key breakthrough made possible by merging the two functions, memory and processing, in the single new (semiconductor) technology. Even so, CAFS was revolutionary enough for Scarrott's boss, the head of ICL, to try to fire him, only to be restrained by his board of directors. Hostility to innovation, however great its financial payoff, ran deep in both the computer and the semiconductor industries, as it does today.
Scarrott was cut off from the microelectronics industry where I worked for many years, so his proposed DAP, the Distributed Array Processor, was stunted, using conventional chips rather than a complete undiced wafer. So his machine had only 16 or perhaps 64 processors, rather than the million that I proposed. Such (coarse-grain) arrays made up of a small number of powerful processors have discredited rather than promoted large-scale (fine-grain) array processors. Amdahl, who raised $200 million to fail with his massive brute-force WSI programme in his company Trilogy, was so revered for this failure that he was then flown over to England by the IEE to tell us that a five-processor machine ran no faster than a single-processor machine, concluding that there was no point in having more than one processor.
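(What is now called Amdahl's law makes the observation unremarkable rather than profound: if only a fraction p of a program's work can be spread across n processors, the overall speedup is at most

    speedup = 1 / ((1 - p) + p/n)

With the heavily serial code of the day, p was close to zero, so five processors bought almost nothing. That says something about the software being run, not about fine-grain arrays working on inherently parallel problems.)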
Fig 2: Cover page of Electronic Design, October 26, 1989
Water cooling spoof
Two months after I arrived to work at Motorola Integrated Circuit R&D in Phoenix in 1966, I proposed a very small (one foot cube) and therefore very fast computer using individual hand-wired integrated circuit chips totally immersed in liquid, in direct contact with the active chip surface. Short interconnections would make it fast, and the liquid would carry away the heat from the fast, high-power circuits. Walt Seelbach, deputy head of R&D, told me he thought my report was amusing, obviously thinking it was a spoof. That silenced me, and I resorted to making only one innovative proposal in each of the companies that hired (and fired) me.
By 1972, after my latest firing, by CTL in England, I had enough material to patent the mature Wafer Scale Integration idea - a self-organising, self-repairing memory system on a whole wafer. An inventor is wise to get himself fired for technical incompetence before patenting, because this makes it difficult for his employer to claim that it was while working for them that he came up with the brilliant technical invention. He is not acting dishonestly, because if he reports his invention while employed, any British hi-tec company management will make sure that neither the company nor anyone else ever profits from it. Rapidly escalating profitability in a company, which is what major invention threatens, puts the technology-free management at risk, because they lose control to the technocracy.
Many years previously, I thought it might be possible to insinuate innovation into such a conservative industry without the attempt being noticed and blocked. I planned to try in the following way.
Surreptitious
Even though computers were admired within and outside the industry for their unreliability - for the fact that if only one component failed, the whole £100,000 computer system stopped functioning - I thought that my Wafer Scale Integration (WSI) design of a self-organising, self-repairing memory might not cause too much fear and hostility. I would propose a self-organising, self-repairing WSI RAM, cheaper because it used the whole wafer, as a plug-in replacement for the conventional, more expensive, less reliable RAM made up of individual RAM chips from a diced wafer interconnected on a printed circuit board.
My memory would be cheaper because the expensive process of dicing (cutting up wafers) and then reassembling the chips would be avoided. Further, the novel added advantage that the WSI memory would rebuild itself on switch-on, so that a RAM failure would be no worse than switching the machine off and then on again, should not generate too much fear and opposition. My WSI memory would take its place in the memory hierarchy as somewhat slower but larger than traditional RAM, and much faster than magnetic tape or disc memory. (It was installed in Tandem machines in 1989.)
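A minimal sketch of that switch-on rebuild, assuming a wafer of identical memory cells with per-cell test and bypass logic; the function names, cell count and fault pattern are all invented for illustration:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    #define NUM_CELLS 16              /* candidate memory cells on the wafer */

    /* Simulated fabrication defects; on real silicon these would be
       discovered by the built-in test logic, not known in advance. */
    static const bool faulty[NUM_CELLS] =
        { 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0 };

    /* Stand-in for the wafer's read/write pattern test on one cell. */
    static bool test_cell(size_t cell) { return !faulty[cell]; }

    static size_t chain[NUM_CELLS];   /* order in which good cells are linked */

    /* On every switch-on the chain is rebuilt from scratch: good cells
       are linked in, faulty ones are routed around, so a new failure is
       no worse than turning the machine off and on again. */
    static size_t configure_wafer(void)
    {
        size_t usable = 0;
        for (size_t cell = 0; cell < NUM_CELLS; cell++)
            if (test_cell(cell))
                chain[usable++] = cell;   /* link good cell into the chain */
        return usable;                    /* capacity reported to the host */
    }

    int main(void)
    {
        printf("usable cells this power-on: %zu of %d\n",
               configure_wafer(), NUM_CELLS);
        return 0;
    }

The essential property is that no repair record survives a power cycle: the working configuration is rediscovered every time, so cells that fail in service are silently dropped at the next switch-on.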
I hoped to insinuate WSI plug-in replacement memory into IBM computers, IBM then being the big bully, equivalent to Microsoft today, and wait a few years for it to gain acceptance. The idea of WSI memory "would be understood by anyone earning more than £50,000 p.a." - people important enough to be unable to understand the idea of array processing, my real objective for WSI.
My memories would have a search facility secretly hidden within them. I would later point out to IBM machine programmers (but not their bosses) that their memory search routines, which slowly, sequentially, searched a block of data in RAM for a particular pattern, could be massively speeded up if a jump to the search subroutine were captured by hardware and the search done in parallel by hardware in the WSI memory.
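A minimal sketch of the contrast, in C; the sequential routine is real, while the hardware hook at the end is hypothetical and its register names are invented:

    #include <stdio.h>
    #include <string.h>

    /* The slow software routine: scan the block one offset at a time. */
    static long search_sequential(const unsigned char *block, size_t block_len,
                                  const unsigned char *pattern, size_t pat_len)
    {
        for (size_t i = 0; i + pat_len <= block_len; i++)
            if (memcmp(block + i, pattern, pat_len) == 0)
                return (long)i;               /* offset of first match */
        return -1;                            /* pattern not present   */
    }

    /* Hypothetical WSI hook: the captured jump would instead hand the
       pattern to the memory itself, whose sections all compare at once:
         write pattern -> WSI_PATTERN_REG   (invented name)
         write length  -> WSI_LENGTH_REG    (invented name)
         read  result  <- WSI_RESULT_REG    (offset, or -1 if absent) */

    int main(void)
    {
        const unsigned char block[]   = "the quick brown fox";
        const unsigned char pattern[] = "brown";
        printf("match at offset %ld\n",
               search_sequential(block, sizeof block - 1,
                                 pattern, sizeof pattern - 1));
        return 0;
    }

The software loop takes time proportional to the size of the block; the in-memory version takes roughly constant time, because every section of the wafer compares its own data against the pattern simultaneously.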