What is Superintelligence?

A superintelligence is a hypothetical agent that possesses general intelligence far surpassing our own.

By superintelligence we mean an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom, and social skills. This definition leaves open how the superintelligence is implemented: it could be a digital computer, an ensemble of networked computers, cultured cortical tissue, or what have you. It also leaves open whether the superintelligence is conscious and has subjective experiences.

Entities such as companies or the scientific community are not superintelligences according to this definition. Although they can perform a number of tasks of which no individual human is capable, they are not intellects and there are many fields in which they perform much worse than a human brain – for example, you can’t have a real-time conversation with “the scientific community”.

Moore’s law and Supercomputers

Moore’s law states that processor speed doubles every eighteen months. The doubling time used to be two years, but that changed about fifteen years ago. The most recent data points indicate a doubling time as short as twelve months. This would mean that there will be a thousand-fold increase in computational power in ten years. Moore’s law is what chip manufacturers rely on when they decide what sort of chip to develop in order to remain competitive.
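As a quick sanity check on the arithmetic, the doubling times quoted above can be turned into growth factors (a minimal sketch; the doubling periods are the ones stated in the text):

```python
def growth_factor(years: float, doubling_time_years: float) -> float:
    """Multiplicative increase in processing power after `years`,
    assuming power doubles every `doubling_time_years`."""
    return 2 ** (years / doubling_time_years)

# With a twelve-month doubling time, ten years gives roughly a
# thousand-fold increase, as the text states.
print(growth_factor(10, 1.0))  # 1024.0
# An eighteen-month doubling time gives only about a hundred-fold increase.
print(growth_factor(10, 1.5))
```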


If we estimate the computational capacity of the human brain, and allow ourselves to extrapolate available processor speed according to Moore’s law (whether doing so is permissible will be discussed shortly), we can calculate how long it will take before computers have sufficient raw power to match a human intellect.

The fastest supercomputer today (December 1997) runs at 1.5 teraops, i.e. 1.5*10^12 ops. There is a project that aims to extract 10 teraops from the Internet by having a hundred thousand volunteers install a screen saver on their computers that would allow a central computer to delegate some computational tasks to them. This (so-called metacomputing) approach works best for tasks that are very easy to parallelize, such as doing an exhaustive search through a search space in attempting to break a code. With better bandwidth connections in the future (e.g. optical fibers), large-scale metacomputing will work even better than it does today. Brain simulations should by their nature be relatively easy to parallelize, so huge brain simulations distributed over the Internet could become a feasible alternative. We shall however disregard this possibility for present purposes and regard the 1.5 teraops machine as the best we can do today. The potential of metacomputing can be factored into our prognosis by viewing it as an additional reason to believe that available computing power will continue to grow as Moore’s law predicts.
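The extrapolation described above can be sketched as a short calculation. The brain-capacity figure below (10^14 operations per second, a Moravec-style estimate) is an illustrative assumption, not a number from the text:

```python
import math

current_ops = 1.5e12     # the 1.5 teraops supercomputer cited above
brain_ops = 1e14         # assumed estimate of the brain's raw capacity (illustrative)
doubling_years = 1.5     # eighteen-month Moore's-law doubling time

# Number of doublings needed to close the gap, then convert to years.
doublings = math.log2(brain_ops / current_ops)
years = doublings * doubling_years
print(f"{years:.1f} years")  # roughly 9 years under these assumptions
```

Shortening the assumed doubling time to twelve months, or raising the brain estimate by a few orders of magnitude, shifts the answer by only a handful of years, which is why the doubling time dominates such forecasts.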


