
The Shocking Truth Behind the Technological Singularity!

Just because something sounds ludicrous doesn’t inherently make it incorrect. The potential health benefits of standing on your head for five minutes a day might very well save your life. While you skeptically raise an eyebrow, I’ll be stimulating my pineal gland. Simply accepting the truth can sometimes be our biggest obstacle. The technological singularity sounds ludicrous, but that certainly isn’t an excuse to ignore it.

I was tasked with responding to a TED talk of my choice for a class that I’m taking. Ray Kurzweil’s 22-minute lecture on the exponential growth of technology stood out to me. Kurzweil is a full-fledged futurist, complete with the attitude that he’s wasting his time talking to mere mortals. His main point is quite straightforward: technology has been, currently is, and will continue to advance at an exponential rate. That is, the sum of human knowledge continues to build upon itself, and its annual growth can be measured as a fraction of the previous year’s metric.

Okay cool. We’ve all heard of Moore’s Law, right? This shouldn’t be too radical a concept to deal with. Ray argues that this is just one very specific example of the non-linear trends that are transforming our world. The size of the internet measured in hosts, the cost of sequencing genes, the efficiency of solar panels, and the capacity of hard drives all follow this model. While these commodities aren’t all advancing at the same rate, each can nonetheless be linearized by taking the logarithm of the Y-axis.
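To make that log-linearization concrete, here’s a minimal sketch. The starting value and the 60% annual growth rate are illustrative assumptions, not Kurzweil’s figures; the point is only that an exponentially growing metric climbs by the same constant increment each year once you take its logarithm.

```python
import math

# Hypothetical yearly metric growing ~60% per year (the numbers are made up),
# illustrating why exponential trends look like straight lines on a log scale.
years = list(range(1970, 1980))
metric = [2300 * (1.6 ** (y - 1970)) for y in years]

# Take the logarithm of the Y-axis.
log_metric = [math.log10(m) for m in metric]

# On the log scale, each year adds the same constant increment,
# i.e. the curve has become a straight line with slope log10(1.6) ≈ 0.204.
diffs = [round(log_metric[i + 1] - log_metric[i], 6)
         for i in range(len(log_metric) - 1)]
print(diffs)
```

The same trick is why Moore’s-law charts are almost always drawn with a logarithmic vertical axis: a constant doubling time plots as a constant slope.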

Taking a step back reveals a picture of discrete design paradigms. The number of transistors in a processor, which Moore’s law has been accurately predicting, is just one instance of a larger reality. This observation was preceded by vacuum tubes, and when they could no longer keep up with this model, a new paradigm was ready and waiting. Frighteningly, there seems to be a good chance that each new paradigm increases this rate of growth even further. Transistors in modern integrated circuits will reach the size of individual atoms soon enough, and a new approach will be needed. Say hello to the world’s first stacked 3D processor.

What does all this imply?

Kurzweil’s lecture is already somewhat dated, so we get to cheat a little. He said that by 2010, computers would disappear. They did. The iPhone 4 is 90% battery. Reebok just announced that they are partnering with flexible computing hardware manufacturers to create a line of smart-clothes that monitor your vitals. He predicted this as well. Keep in mind how far ahead R&D is of consumer products. It trickles down over time. Don’t pretend that you know what we are currently capable of; you wouldn’t have found yourself at SuperProfundo if you did.

The 2020s should be an interesting decade if things continue. Nanomachines will give us superhuman abilities, delivering oxygen to our cells at a much improved rate. They will cure our diseases and allow us to enter entirely virtual worlds. We will be able to share our sensory experiences over the internet and have immediate access to all human intelligence, stored in such a way that it can interface directly with our brains. I’d recommend watching his lecture for the full effect. Is “The Matrix” really only a few years away?

Kevin Kelly (founding executive editor of Wired magazine) tries to sand down the sensationalism, arguing that regardless of the time period, it always looks like a singularity is upon us. I feel like this is more of a play on phrasing than anything else. The term is borrowed from physics, referencing the unknowable aftermath of matter entering a black hole. We can have no idea what lies beyond an event horizon.

I came to a similar conclusion. The farther ahead of us we look, the cloudier it becomes. There will always be a point where visibility drops to zero, and since we are a moving target, so is that point. As new discoveries are made, what was once unknowable can now be theorized. Kevin uses this to argue that Ray’s predictions aren’t valid. Perhaps. If what lies beyond the singularity is, by definition, invisible, I suppose it does seem a bit foolish to be making any kind of claims about it.

I fail to see why this precludes Kurzweil’s predictions from being possible, however. An unknowable future doesn’t mean great things aren’t coming; it simply means they are beyond theory right now. That still sounds pretty cool to me. Keep in mind that impossible things have already happened. As Louis C.K. hilariously puts it, we partake in the miracle of flight every day without appreciating it. I have a device in my pocket that can literally call China in 2 seconds. It can also show my position on the globe to within several feet. We landed on the fucking moon.

Try convincing a peasant from the middle ages, who passes the time by publicly whipping himself to apologize to God for causing the bubonic plague, that these things aren’t impossible. The singularity to him would have stopped long before the FM Radio.

So while it seems that this technologically deterministic singularity will always be just a tad out of reach, it’s honestly just a useful metaphor anyway. It represents the idea that, over time, unpredictable things become predictable. We can only see a few miles ahead of us, but once we get there, we’ll be able to see a few more.

Think I’m crazy? I’m all ears.


6 thoughts on “The Shocking Truth Behind the Technological Singularity!”

  1. Daniel Hazelton Waters

    This hyper-exponential increase in the performance of computation is about to explode at an even greater rate. First, memristors will be made available to the public starting around 2013, and then quantum computers will come. There are many other technologies coming as well. By 2021 the singularity will be undeniable.

  2. Florian

    There’s something to the concept that the singularity will always be a bit out of reach. However, there are also certain concepts/events that define it more distinctly.

    Perhaps a confusing term for these would be “singularity excursion”. The idea is that something happens that increases the development speed so much that the fog of the unknowable descends to just the next few seconds. Situations where you cannot predict your personal life for the next few seconds are not uncommon. However, imagine living in such a state for any prolonged amount of time, and it should illustrate what that means.

    As to what these events could be, there are a couple of candidates.
    1) We come up with a self-aware, strongly intelligent AI that immediately seizes on its potential to create even smarter offspring and siblings, rapidly iterating through scales of intelligence until their actions, insights, knowledge and motivations are as far removed from human comprehension as would be fitting for an omnipotent entity.
    2) Closely related to this, pretty much the same could happen were we to figure out the connectome of the human brain and have the ability to devise add-ons to it, which would allow the rapid improvement of the starting technique until the end result of the process closely resembles the aforementioned omnipotent entity.

    If seen as a critical path to the first singularity excursion, then one can examine encouraging factors, which is why Moore’s law is so important. If either 1) or 2) were to be achieved, these important key technologies need to move much beyond where they are now:
    1) computing capacity
    2) data storage capacity
    3) MRI (or other scanning techniques) temporal and spatial resolution (down to milliseconds and cellular levels respectively)

    One could argue that other factors play a role (like progress in the fields of AI research and such), but this isn’t really true. Say you had a scanner that could rapidly build the connectome of the human brain, and say you had the hardware to run it, you wouldn’t need to know anything about AI research in order to be immediately confronted by a self-aware and strongly intelligent AI that could think as fast as you can run it.

    Or say you had a stupendously powerful computer that would let you apply a simple genetic algorithm to a random connectome of a brain, you could evolve intelligence without any clue as to how it works.

    As in all information-theoretical endeavours, the following applies: if you can’t solve a problem, you can brute-force it. So the question isn’t when we’ll have solved the problems preventing us from reaching the point of a singularity excursion. The question is: when will our means become so great that a singularity excursion is the inevitable by-product?

  3. Seung Soo, Ha

    I’ve always understood the technological singularity as something akin to thermal runaway (http://en.wikipedia.org/wiki/Thermal_runaway).

    In my view, fascinating technological developments (especially in computer and information technology) are not of themselves enough to bring about the singularity.
    The developments should themselves directly accelerate further development.

    Currently, a new development is not immediately deployed as a tool for further development.
    Looking at SSDs, for example: although they are already revolutionary, they are still struggling for mass adoption. I’m sure that certain high-end R&D departments have deployed them, but still, the process would have required human intervention and approval. Thus the development only indirectly accelerated development.

    What could then, enable direct acceleration of development? I see AI as the answer.

    Imagine a system that improves and replaces its parts and itself on-the-fly.
    A system that recompiles itself with the newly released compiler, refactoring its code with the new language, replacing its hard drive with SSDs along the way, while improving its heuristics and criteria for determining such changes according to recent developments in linear algebra and neural networks.
    Multiple such systems, operating in multiple research fields, autonomously pursuing research goals unaided by humans, working day and night, improving themselves and each other. Increasing their own efficiency. Lowering their own costs, thus allowing more of them to be produced, traded and ultimately operated.

    The only thing I fear is that they might turn to actually consuming rather than producing.
    Humanity is already having a rough time being replaced by computers and robots on the productivity side. When they start replacing us as consumers, the world will brush us off and not miss us at all.

    Gosh, maybe I’ve spent too much time on your lucid dreaming article :-P

    1. Florian

      Rapid hardware development doesn’t matter that much for an AI once you have it running. Algorithmic improvement always beats the hell out of Moore’s law. The key development for an AI would be the improvement of itself, which doesn’t just mean smarter, but also way more efficient.

  4. Greg

    While I understand the argument that the Singularity will always be out of reach since it is a moving target, what people fail to realize is that what matters is the FIXED target of human intelligence. That is a very real number and has been stable for tens of thousands of years. Once computers/AI reach that target and surpass it, then everything changes.

    I think that is a very real event that is coming.

  5. Daniel Hazelton Waters

    I believe the singularity is not coming; I believe we are in it. I define the singularity as our potential resources, and what we can do with them, increasing like at no other time in history. Just look at the difference a single generation’s span of time makes nowadays! We do have the tools, if we were motivated, to feed everyone and to produce fuel for all the vehicles on the planet. Oil can now be made in minutes using a relatively new process that can use just about anything as feedstock. We could engineer biospheres that float on the ocean; they could help regulate the global climate while powering our vehicles with a byproduct of growing all the food 49 billion people could consume.