“The Singularity” is a concept most people have not heard of. It was first proposed by the Princeton mathematician John von Neumann in the 1950s. He described the singularity as an “event horizon,” the moment beyond which “technological progress will become incomprehensibly rapid, complicated,” unknowable, and irreversible. Technology will then be changing at such a rate that humans will achieve some unbelievably good things but, at the same time, could lose control, with significant unknown, negative consequences.

At the time von Neumann made his observation, in the 1950s, it was difficult to imagine what path of technological development could lead to such a point. Now, however, such a future is quite clear to many scientists and engineers, and we could hit the singularity relatively soon, possibly within the lifetime of many current residents of Cape Ann.

The singularity will occur because information technologies, especially artificial intelligence (AI), have been improving at an exponential rate, and will continue to do so until von Neumann’s prediction comes true.

To clarify: if a technology were improving at a linear rate, in 30 years it would be 30 times better. But if the rate of improvement is exponential, say doubling every year, the technology becomes roughly one billion times better in 30 years, since two raised to the 30th power is just over a billion.
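That back-of-the-envelope arithmetic can be checked in a few lines of Python (a simple sketch of the comparison, not anything from the article itself):

```python
# Compare linear vs. exponential improvement over 30 years.
years = 30

# Linear: the technology gains one "unit" of capability per year.
linear_factor = years            # 30 times better after 30 years

# Exponential: the capability doubles every year.
exponential_factor = 2 ** years  # 2^30 = 1,073,741,824 -- about one billion

print(f"Linear after {years} years:      {linear_factor}x better")
print(f"Exponential after {years} years: {exponential_factor:,}x better")
```

Doubling every year turns a 30-fold improvement into a billion-fold one, which is the gap the column is pointing at.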

We have a well-known example of an exponential increase in a technology’s capabilities: Moore’s Law. Gordon Moore was the co-founder of Intel, the company that makes the microprocessor chips used in personal computers. In 1965, he observed that the number of transistors on a dense integrated circuit doubled every year, while the price declined significantly. Moore projected that this rate of improvement in price/performance would continue. The projection provided by “Moore’s Law” has proved accurate for several decades, and the result has driven the amazing advances we have seen in digital electronics, like personal computers and iPhones, all at remarkably low prices.

When I was a student at the Harvard Business School in 1968, we had access to a massive mainframe IBM computer that was housed in a large, air-conditioned room and cost millions of dollars. Today, my Apple iPhone is 120 million times more powerful in terms of raw computing power than that IBM machine, has a myriad of easily accessed applications, and costs only a few hundred dollars.

Ray Kurzweil, who lives in Wellesley, is a director of engineering at Google and a well-known MIT-trained scientist and entrepreneur. This recognized “genius” has written several books on the coming singularity. For decades, he has studied data on the development of artificial intelligence and demonstrated in his books that AI has indeed been improving at an exponential rate, analogous to Moore’s Law.

Kurzweil believes the next major improvement in computer technology will come from work currently underway to understand, in fine detail, how the human brain works. By the mid-2020s, he says, we will use that knowledge to build an effective model of human intelligence and apply it to computers. Kurzweil then projects that computers, and their AI software, will reach the level of human intelligence by 2029, only 12 years from now. That development will set the stage for the singularity.

This key achievement, a computer we can talk with and not be able to tell whether it is another human being, will be followed by continued exponential growth in computing capacity and capability. By the early 2030s, Kurzweil says, the amount of non-biological computation will exceed the “capacity of all living biological human intelligence.” He goes on: “I set the date for the singularity — representing a profound and disruptive transformation in human capability — as 2045.” That is only 28 years from now. This is when AI will reach the level of superintelligence, and when it will be difficult or impossible for present-day humans to predict the impact on our civilization. It will be the singularity.

At that point, the superintelligence will be able to develop its own “next generations,” and do so at an ever-increasing pace. This “runaway reaction of self-improvement, resulting in an intelligence explosion,” as I.J. Good described it in 1965, is where humans could lose all control and become unimportant to the AI. This outcome is what is most feared by AI critics such as Elon Musk, founder of Tesla, and Stephen Hawking, the well-known theoretical physicist. The analogy is a human’s relationship with ants: the insects are left alone unless they get in the way of some human goal, and then they are eliminated.

Not all AI experts agree with Kurzweil on the date of the singularity. A few think it will come sooner than 2045; many think it will come much later. But almost all agree that the singularity is indeed coming, and that we need to be ready when it does. Other than stopping all AI research until we figure things out, which is very unlikely to happen, no one knows how to prepare.

An exception is Ray Kurzweil. He believes that, by the time of the singularity, humans will have many non-biological processes going on in their bodies and brains that link them with the superintelligence. As a result, there will be no clear distinction between human and machine. The bottom line, he says, is that it won’t be “us versus them.” Instead, we will be one human-machine civilization.

Some futurists have observed that the superintelligence that emerges after the singularity will be “god-like.” Let’s hope that it is a god we can live with.

Anthony J. Marolda is a physicist and resident of Annisquam.


Waiting for the singularity