Chapter 3 – Why is this happening?
by Marshall Brain
How is it possible that we will be seeing conscious machine intelligence appear in just a decade or three? For that matter, how is it possible that computers can now play Jeopardy when, in World War I, computers did not exist at all, and in World War II they existed only in the most rudimentary form? The computers of World War II could perform perhaps 5,000 calculations per second and were not much different from a fast adding machine that an accountant might use.
Here is the short answer. It is thought that the human brain can perform the equivalent of approximately one quadrillion computations per second. In WWII, that was an impossibly large number to imagine. But every year, without fail, silicon computers get faster and less expensive per computation. Chances are that inexpensive silicon computers, like a typical desktop machine or laptop, will be performing one quadrillion operations per second within 20 years or so. In other words, we will soon have inexpensive computers sitting on our desks that hold the equivalent processing power of the human brain. Once we reach that point, the rise of the second intelligent species is imminent.
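To see where the "20 years or so" figure comes from, here is a minimal back-of-the-envelope sketch in Python. The starting point of roughly 10^11 operations per second for a 2015 desktop is an assumption for illustration, not a figure from the text; the doubling rate is the one from Moore's Law, discussed below.

```python
import math

# A rough check of the "20 years or so" claim. The starting figure is an
# assumption for illustration only: a 2015 desktop somewhere around
# 10^11 operations per second.
desktop_2015_ops = 1e11
brain_ops = 1e15  # the ~one quadrillion operations per second estimate used above

doublings_needed = math.log2(brain_ops / desktop_2015_ops)

for months_per_doubling in (18, 24):
    years = doublings_needed * months_per_doubling / 12
    print(f"{doublings_needed:.1f} doublings at {months_per_doubling} months each "
          f"is about {years:.0f} years")
# About 13 doublings, i.e. roughly 20 to 27 years from 2015.
```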
Now let’s look at a longer answer. You may have heard about Moore’s Law. Moore’s Law says that the number of transistors on a silicon chip, and therefore CPU power, doubles every 18 to 24 months. History shows Moore’s Law very clearly. You can see it, for example, by charting the course of Intel microprocessor chips, starting with Intel’s first single-chip microprocessor in 1971.
- In 1971, Intel released the 4004 microprocessor. It was a 4-bit chip running at 108 kilohertz. It had about 2,300 transistors. By today’s standards it was extremely simple, but it was powerful enough to make one of the first electronic calculators possible. Things progressed very rapidly from there.
- In 1981, IBM released the first IBM PC. The original PC was based on the Intel 8088 processor. The 8088 ran at 4.77 megahertz (about 44 times the clock speed of the 4004) and had nearly 30,000 transistors (roughly 13 times more).
- In 1993, Intel released the first Pentium processor. This chip ran at 60 megahertz (13 times faster clock speed than the 8088) and had over three million transistors (roughly 100 times more).
- In 2000 the Pentium 4 appeared. It had a clock speed of 1.5 gigahertz (25 times faster clock speed than the Pentium) and it had 42 million transistors (13 times more).
- Today, as I am writing this in early 2015, Intel’s fastest consumer processor is the Core i7 4790K. This chip runs at 4 gigahertz (2.6 times faster) and it has 1.4 billion transistors (33 times more). These numbers alone are enough to estimate the doubling rate, as the sketch after this list shows.
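A quick way to check the doubling rate is to take just the first and last entries in the list. This sketch uses only the transistor counts quoted above:

```python
import math

# Transistor counts and years taken from the list above.
transistors_1971 = 2_300          # Intel 4004
transistors_2015 = 1_400_000_000  # Core i7 4790K

doublings = math.log2(transistors_2015 / transistors_1971)
years = 2015 - 1971
print(f"{doublings:.1f} doublings over {years} years, "
      f"or one doubling about every {12 * years / doublings:.0f} months")
# About 19 doublings over 44 years -- a doubling roughly every 27 months,
# in the same ballpark as the 18-to-24-month rule of thumb.
```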
You can see that there are two trends that combine to make computer chips more and more powerful. First there is the increasing clock speed. If you take any chip and double its clock speed, it can perform twice as many operations per second. Then there is the increasing number of transistors per chip. More transistors let you get more done per clock cycle. For example, the 8088 processor took approximately 80 clock cycles to multiply two 16-bit integers together. Today each core of an i7 processor can multiply up to eight 64-bit floating point numbers every clock cycle using its 256-bit vector units, and there are four such cores.
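Putting those two figures side by side shows how much the per-chip arithmetic rate has grown. A rough sketch, using only the numbers from this paragraph and treating both as theoretical peaks rather than measured performance:

```python
# Theoretical peak multiply throughput implied by the figures above.

# Intel 8088: 4.77 MHz clock, roughly 80 cycles per 16-bit integer multiply.
mults_per_sec_8088 = 4.77e6 / 80

# Haswell-era i7: 4 GHz clock, 4 cores, each able to retire about
# 8 double-precision multiplies per cycle using its 256-bit vector units.
mults_per_sec_i7 = 4e9 * 4 * 8

print(f"8088: about {mults_per_sec_8088:,.0f} multiplies per second")
print(f"i7:   about {mults_per_sec_i7:,.0f} multiplies per second")
print(f"ratio: roughly {mults_per_sec_i7 / mults_per_sec_8088:,.0f}x")
# About 60,000 versus 128 billion -- a factor of roughly two million.
```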
There are several other significant advances that occurred between 2000 and 2015, made possible by the massive increase in the number of transistors. Microprocessors now have more than one processor, known as a core, on the chip. So the 2015 i7 chip has 4 cores (soon to be 8 cores), and each of those cores is hyper-threaded, meaning it can execute two instruction streams simultaneously. This gives the chip the ability to execute 8 streams of instructions at once. There is also a massive 8 megabyte on-chip cache to speed up execution by avoiding the need to go to slower RAM to fetch instructions and data. All calculations in the ALU are now 64 bit rather than 32 bit. And there is now a GPU on the same chip, known as Intel HD Graphics 4600, and it is available for calculations as well. The memory system for RAM has radically improved in terms of read and write speeds. So if the Pentium 4 processor in 2000 could execute X instructions per second, the i7 processor can execute 2.6X because of the clock speed boost, then an additional 8X improvement because of the multiple cores and threads, then another factor of improvement because of things like the cache, pipeline improvements, branch prediction, and RAM speed, and then another factor of improvement because of the GPU on the chip.
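The chain of improvements in this paragraph can be read as multiplied factors. A small sketch of that composition, where the factors the text does not quantify are deliberately left as placeholders:

```python
# Composing the speedup factors described above. The first two come from the
# text; the last two are left at 1.0 because the text does not quantify them.
clock_factor = 2.6       # 1.5 GHz Pentium 4 -> 4 GHz i7
core_factor = 8          # 4 cores x 2 hardware threads (an upper bound, not typical)
microarch_factor = 1.0   # cache, branch prediction, faster RAM, etc. (unquantified)
gpu_factor = 1.0         # on-chip GPU used for computation (unquantified)

total = clock_factor * core_factor * microarch_factor * gpu_factor
print(f"At least {total:.0f}x the 2000-era throughput, "
      "before counting micro-architecture and GPU gains")
```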
Taking Moore’s Law literally, we would expect processor power to increase by a factor of 1,000 every 15 to 20 years. Between 1981 and 2001, that was definitely the case. Clock speed improved by a factor of more than 300 during that time, and the number of transistors per chip increased by a factor of 1,400. A processor in 2001 was roughly 10,000 times faster than a processor in 1981. This trend has been in place for decades. Scientists and engineers seem to be able to get around the limitations that threaten Moore’s Law by developing new technologies. There may come a wall, where 2-D silicon chips as we know them today can get no faster in clock speed and transistors can get no smaller. At that point it is likely that 3-dimensional chip stacking will take off, or chips will change their substrate from silicon to graphene, or quantum computing will mature, or completely new silicon architectures will emerge, or something else. Even the processors in smart phones are becoming incredibly powerful while using very little power – see the NVIDIA Tegra X1, released in January 2015, as an example.
The same kind of advancement has been happening with RAM chips and hard disk space. A 10 megabyte hard disk cost about $1,000 in 1982. By 2002 you could buy a 250 gigabyte drive that was twice as fast for $350. The 2002 drive was 25,000 times bigger and cost one-third the price of the 1982 model. In the same time period (1982 to 2002), standard RAM (Random Access Memory) available in a home machine went from 64 kilobytes to 128 megabytes, an improvement by a factor of 2,000. In 2015, hard disks have grown to hold 6 terabytes and are in the process of being replaced by flash memory drives that are significantly faster and more rugged. RAM in a typical home machine or laptop is now at the 4 gigabyte level and is significantly faster than previous generations of RAM.
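Those growth factors follow directly from the raw sizes. A quick check, using the figures quoted above in decimal units:

```python
# Re-deriving the growth factors quoted above.
KB, MB, GB = 1_000, 1_000_000, 1_000_000_000

disk_1982 = 10 * MB
disk_2002 = 250 * GB
ram_1982 = 64 * KB
ram_2002 = 128 * MB

print(f"Disk: {disk_2002 / disk_1982:,.0f}x larger")  # 25,000x
print(f"RAM:  {ram_2002 / ram_1982:,.0f}x larger")    # 2,000x
```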
The presence of the GPU on most modern microprocessor chips might seem like a simple add-on, but graphics processors have evolved significantly in the 21st century. They can now be used as general computational engines. High-end graphics cards have thousands of small processing cores, and as of 2014 there were already consumer graphics cards approaching 10 trillion floating point computations per second.
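Peak GPU throughput figures like that one typically come from a simple rule of thumb: cores times clock speed times two. The core count and clock speed in this sketch are illustrative assumptions for a high-end 2014-era card, not the specifications of any particular product:

```python
# Rule-of-thumb peak GPU throughput: cores x clock x 2 (a fused multiply-add
# counts as two floating point operations per cycle). The card parameters are
# illustrative of a high-end 2014-era consumer GPU, not a specific product.
shader_cores = 2800
clock_hz = 1.0e9
flops_per_core_per_cycle = 2

peak_flops = shader_cores * clock_hz * flops_per_core_per_cycle
print(f"about {peak_flops / 1e12:.1f} trillion single-precision operations per second")
# Roughly 5-6 TFLOPS per GPU; dual-GPU cards of that era approached 10 TFLOPS.
```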
If in the year 2000 we had simply extrapolated out, taking the idea that every 20 years things improve by a factor of 1,000, what would we have expected? A machine in 2020 with a processor running at something like 1 trillion operations per second, with 128 gigabytes of RAM and 250 terabytes of storage space. When looking at CPU/GPU combinations, the processing figure has already been reached (and exceeded) in 2015. A machine with this kind of power was nearly incomprehensible in the year 2000 – only two or three machines on the planet had it around that time (the monstrous NEC Earth Simulator, with more than 5,000 separate processor chips working together, was one example from that era). Here in 2015 that kind of machine is becoming trivial to imagine in terms of CPU power – it is already easy to find CPU/GPU combinations exceeding 10 trillion computations per second. 128 gigabytes of RAM is easy to imagine today, and fairly affordable. The one thing that might not be there is the hard disk capacity, although it is easy to imagine flash memory drives, or an even better technology like memristors, getting there by 2020. 10 terabyte flash drives are already on the horizon.
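The 2020 figures in that paragraph are just the year-2000 figures multiplied by 1,000. A minimal sketch, where the year-2000 baseline is an assumed order of magnitude rather than a measured machine:

```python
# The 2020 figures above are rough year-2000 desktop figures times 1,000.
# The baseline values are assumptions for a typical 2000-era machine.
baseline_2000 = {
    "operations per second": 1e9,   # ~1 GHz era, order of magnitude
    "RAM (bytes)": 128e6,           # 128 megabytes
    "disk (bytes)": 250e9,          # 250 gigabytes
}

for name, value in baseline_2000.items():
    print(f"{name}: {value:,.0f} -> {value * 1000:,.0f} by 2020")
# About 1 trillion operations per second, 128 gigabytes of RAM,
# and 250 terabytes of disk.
```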
What if we extrapolate another 20 years after that, to 2040? A typical home machine at that point will be 1,000 times faster than the 2020 machine. Human brains are thought to process at a rate of approximately one quadrillion operations per second. A CPU in the 2040 time frame could therefore have the processing power of a human brain, and it could cost around $1,000. It could also have a petabyte (one quadrillion bytes) of RAM and an exabyte (1,000 quadrillion bytes) of storage space. That’s what Moore’s Law predicts.
The other thing that changed between 2000 and 2015 is the rise of cloud computing. No one in 2000 would have imagined a company like Google with control over millions of server machines and petabytes of storage space. Yet here we are in 2015 and Google exists in that form. Many other companies – Microsoft, Facebook, Apple, Amazon, etc. – have similar amounts of computing power and storage space. And Amazon makes it possible for anyone to easily build their own cloud platform using a system called AWS (Amazon Web Services).
The processing power, hard disk space and RAM in a typical desktop computer have increased dramatically because of Moore’s Law since desktop machines first appeared in the 1980s. Extrapolating out to the years 2020 and 2040 shows a startling increase in computer power. The point where small, inexpensive computers have power approaching that of the human brain is just a few decades away. What we will have in 2100 is anyone’s guess (and humans will have been irrelevant for decades by then). The power of a million human brains on the desktop? It is impossible to imagine today, but not unlikely.
The point is simple. Within a few decades, we can expect to buy a $1,000 home computer or laptop machine that has the computing power and memory of the human brain. That is why we will see the second intelligent species arriving soon. Manufacturers will marry these inexpensive computers with a humanoid robotic chassis like the ASIMO chassis Honda has today. Advanced AI software will create autonomous humanoid robots with startling capabilities. It is not really hard to imagine that we will have robots like C-3PO walking around and filling many human jobs as early as the 2030 time frame. What’s missing from robots right now are things like vision and general intelligence, and by 2030 we will start to have more silicon brainpower than we know what to do with.
The New Employment Landscape
The problem, of course, is that all of these robots will eliminate a huge portion of the jobs currently held by human beings. For example, there are 3.5 million jobs in the fast food industry alone. Many of the waiter/waitress/server jobs will be lost to kiosks and tablets. Many more will be lost to robots that can flip burgers and clean bathrooms. Eventually they will all be lost. Eventually, the only people who will still have jobs in the fast food industry will be the senior management team at corporate headquarters, and they will be making staggering amounts of money.
The same sort of thing will happen in many other industries: retail stores, hotels, airports, factories, construction sites, delivery companies, education and so on. All of these jobs will evaporate at approximately the same time, leaving all of those workers unemployed.
But who will be first? Which large group of employees will lose their jobs first as robots and automation start taking jobs away from human beings? It is likely to be a million or more truck drivers….