What is technological singularity?
The technological singularity—or simply the singularity—is a hypothetical point in time at which technological growth becomes uncontrollable and irreversible, leading to unforeseeable changes to human civilization. According to the most popular version of the singularity hypothesis, called intelligence explosion, an upgradable intelligent agent will eventually enter a "runaway reaction" of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an "explosion" in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence.
The idea that human history is approaching a “singularity”—that ordinary humans will someday be overtaken by artificially intelligent machines or cognitively enhanced biological intelligence, or both—has moved from the realm of science fiction to serious debate. Some singularity theorists predict that if the field of artificial intelligence (AI) continues to develop at its current dizzying rate, the singularity could come about in the middle of the present century. Murray Shanahan offers an introduction to the idea of singularity and considers the ramifications of such a potentially seismic event.
John von Neumann was the first person to use the concept of a "singularity" in the technological context. Stanisław Marcin Ulam, a Polish-American mathematician and nuclear physicist, recalled a discussion with von Neumann that "centered on the accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue". I.J. Good's "intelligence explosion" model predicts that a future artificial superintelligence will trigger a singularity.
Through his 1993 essay The Coming Technological Singularity, Vernor Vinge popularized the concept and the term "singularity". In his essay, he wrote that the technological singularity would signal the end of the human era, as the new artificial superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate.
Public figures like Stephen Hawking and Elon Musk have expressed concerns that full artificial intelligence (AI) could result in human extinction. The consequences of the singularity, and whether it would ultimately benefit or harm the human race, have been intensely debated.
What are the two ways in which the technological singularity could be reached?
Many philosophers and scientists argue that the technological singularity could be reached through the emergence of superintelligence. They also stress that time and speed are crucial aspects of the singularity, since smart systems would self-improve at an ever-increasing rate.
Most of the proposed methods by which the technological singularity could be reached fall under one of two categories: intelligence amplification of human brains and artificial intelligence. The various speculated ways to augment human intelligence include bioengineering, genetic engineering, nootropic drugs, AI assistants, direct brain–computer interfaces and mind uploading.
Robin Hanson has expressed skepticism about human intelligence augmentation, writing that once the "low-hanging fruit" of easy methods for increasing human intelligence has been exhausted, further improvements will become increasingly difficult to find. Despite the many speculated ways of amplifying human intelligence, non-human artificial intelligence (specifically seed AI) remains the most popular candidate among the hypotheses that could advance the singularity.
What is intelligence explosion?
Intelligence explosion is one of the possible outcomes of the creation of artificial general intelligence (AGI). An AGI could be capable of recursive self-improvement, which could potentially lead to the emergence of artificial superintelligence (ASI), the limits of which are not yet known. Artificial superintelligence is a hypothetical agent that possesses intelligence greater than that of the brightest and most gifted human minds.
In 1965, I.J. Good speculated that artificial general intelligence could potentially bring about an intelligence explosion. Such an intelligence explosion could, in turn, lead to the technological singularity.
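A minimal sketch can illustrate the intuition behind recursive self-improvement. The growth rule and the rates below are illustrative assumptions, not part of Good's argument: each generation is assumed to improve itself in proportion to its current capability, so gains compound, unlike steady human-driven progress that adds a fixed increment per step.

```python
# Toy model of compounding self-improvement (illustrative assumptions only:
# the update rule and the rates are not drawn from any real forecast).

def simulate(generations, gain):
    """Return capability levels where each generation improves itself
    by a fraction `gain` of its own current capability."""
    levels = [1.0]  # start at "human-level" = 1.0
    for _ in range(generations):
        levels.append(levels[-1] * (1 + gain))  # improvement compounds
    return levels

# Fixed-increment progress: the same absolute gain every step.
linear = [1.0 + 0.5 * g for g in range(11)]
# Recursive self-improvement: the same *relative* gain every step.
runaway = simulate(10, 0.5)

print(f"after 10 steps: linear={linear[-1]:.1f}, runaway={runaway[-1]:.1f}")
# → after 10 steps: linear=6.0, runaway=57.7
```

Even with a modest per-generation gain, the compounding curve pulls away from the fixed-increment one within a few steps, which is the "runaway reaction" described above.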
When could we reach technological singularity?
Join any discussion about business technology, and it won't be long before someone mentions AI. The amount of digital data in the world has already surpassed our ability to manage it, a development that has made clear the need for algorithms to do the job for us.
But algorithms and AI are not the same thing, or even close. An algorithm is a step-by-step procedure a computer follows, often used to parse data sets too large for human interpretation, whereas the "true" AI envisioned by singularity theorists would think for itself, make its own decisions, and act with something like free will.
The explosion of AI has long been a popular trope of science fiction, but the reality is edging ever closer. That may sound optimistic (or pessimistic, depending on how you look at it), but it is the logical next step in the evolution of technology. Once computers can learn for themselves, without being taught and trained on data that we have collected, we can expect profound societal changes on a scale never before seen.
In his essay The Coming Technological Singularity, Vernor Vinge wrote that he would be surprised if the technological singularity occurred before 2005 or after 2030. Four polls of AI researchers, conducted in 2012 and 2013 by Nick Bostrom and Vincent C. Müller, suggested a median probability estimate of 50% that artificial general intelligence (AGI) would be developed by 2040–2050. But even that would still be a long way off from a technological singularity.
Could there be a non-AI singularity?
Some writers use "the singularity" in a broader sense, referring to any radical change in society brought about by new technologies such as molecular nanotechnology. Vernor Vinge and other writers, however, maintain that in the absence of superintelligence, such changes would not qualify as a true singularity.