Updated: November 28, 2009
This article focuses more on biology than physics, but I hope you will excuse me. Skynet: the idea of living, thinking, breathing, fornicating machines taking over the world while humans wallow in their own stupidity. The Matrix: the idea of living, thinking, breathing, fornicating machines taking over the world while humans are nothing more than players in a video game, plus some stunning visual effects.
For someone with an IQ higher than floorboard panels, this concept is ... well, annoying. I like good movies with lots of action and shooting, but I expect the plot to have some integrity, so that even while you're reading your favorite Jane Austen book, cozily parked on the warm plastic of your toilet bowl, you can still reason out some of the weaker scenes.
But I'm not here to discuss movies ... too much. We've done that quite a bit recently. I would like to discuss the concept of self-aware, self-sustaining machines from the perspective of biological constraints that make this quite impossible, even if we completely ignore the laws of physics. And then go back to movies. Let's see why living machines are just bad imagination.
Obstacle #1: Life is organic
Well, there you go. Life, as we define it on Planet Earth, requires all sorts of elements to be classified as such. There's nothing special about these elements compared to the rest of them, but since they are found in all living organisms (hence the adjective organic), they are a prerequisite for life.
Most machines use lots of non-organic materials, including some fairly toxic ones. For example, gold, copper, lead, arsenic, and mercury, so freely used in the assembly of electronic circuits, are heavy metals and quite deadly to living organisms. Tough luck, there.
Obstacle #2: Evolution
There's this tricky little thing called evolution. For the past four billion years or so, nature has done its best to perfect life from single-cell organisms into something more complex and useful, like programmers or male models.
Four billion years, billions of organisms, many of which have been extinct for a long time now - deprecated, in geeky terms. Machines, on the other hand, have been around for just a few thousand years, since early man started shaping them. Complex machines have been around for just a few hundred years. Computers have been around for only about half a century.
How much time would a modern human require to transform machines into thinking organisms? Even if the human is a million times smarter, faster and more efficient than the entire planet, it would still take roughly four thousand years to achieve something like that. And we're still not past the first century.
1.7% done, 98.3% still left to go.
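For the curious, the arithmetic behind that figure is trivial; here's a back-of-the-envelope sketch, assuming (my round numbers) we start the clock around 1941 with the early programmable computers and grant the speculative four-thousand-year estimate above:

```python
# Back-of-the-envelope progress check. Assumptions: the clock starts
# around 1941 (early programmable computers) and the total effort needed
# is the ~4,000 years guessed above.
years_elapsed = 2009 - 1941   # ~68 years of computing so far
years_needed = 4000           # speculative total from the text

done = 100 * years_elapsed / years_needed
print(f"{done:.1f}% done, {100 - done:.1f}% still left to go")
# 1.7% done, 98.3% still left to go
```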
BTW, no one really knows how to create life. The transition from a collection of elements into a living thing is still a mystery and probably always will be. In fact, if you focus on it for too long, you might feel slight buffets of despair hammering in your chest.
Creating a living machine
Let's assume for a moment that we want to build a self-aware, self-sustaining machine. We will play nature for the machine, just like nature played ... nature for us.
The first thing we will have to do is equip our machine with sensors, roughly equivalent to our five senses. These sensors will have to be active all the time, collecting any and every bit of information received. With time, our machine will learn how to process the data and filter out the unnecessary parts. But that comes later.
To collect the vast amounts of data, our machine would require a staggering amount of storage and tremendous computing capacity to register the torrents of information in near real time. This means that our machine would probably be a gigantic super-computing farm, with thousands of processor cores.
Even so, our machine would be weak compared to us. Why? Well, let me elaborate.
Obstacle #3: Weak processors
CPUs are like living cells. CPUs have transistors, we have DNA strands. CPUs have memory controller units and whatnot, we have mitochondria and ribosomes. CPUs have power units, we have ATP all over the place. In terms of complexity, CPUs are somewhat like single-cell organisms.
This means that even a world-wide grid with 10 billion computers is only a 10-billion cell organism, still without any self-awareness or ability to sustain itself. But it's a good start. Almost.
It took evolution just 4 billion long years to move from single-cell organisms to humans. A single human body contains on average 10^15 cells, a full five orders of magnitude more than the most powerful computer grid we can possibly hope to build. More realistically, computer grids contain millions, maybe tens of millions of units, so the realistic gap is much, much bigger.
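To put numbers on that gap, using the round figures above:

```python
import math

human_cells = 10**15   # cells in a single human body, order of magnitude
grid_cores = 10**10    # a hypothetical 10-billion-core world-wide grid

gap = math.log10(human_cells // grid_cores)
print(f"the body leads the grid by {gap:.0f} orders of magnitude")  # 5
```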
In theory, this means that computers would require the lifetime of the universe to evolve into something meaningful. But let's ignore this subtle conclusion for a moment. Let's continue with our project.
To process the information, our machine would require endless tons of scripts, written by humans, to allow it any sort of artificial intelligence, although each and every decision would be predetermined. Our machine would require a finite solution for any path it might take. Any unpredicted situation that cannot be solved using the built-in algorithms would cause the running programs to exit or crash.
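In code terms, such a hand-scripted "intelligence" is just a lookup table; everything here is invented for illustration, but the failure mode is real: anything nobody scripted is a crash.

```python
# A toy, hand-scripted "AI": every decision is predetermined by a table
# written by humans. All entries are invented for illustration.
RESPONSES = {
    "obstacle ahead": "steer left",
    "low battery": "return to charger",
}

def decide(situation):
    # An unscripted situation raises KeyError -- i.e., the program crashes.
    return RESPONSES[situation]

print(decide("low battery"))      # return to charger
try:
    decide("cliff edge")          # nobody wrote a script for this one
except KeyError as err:
    print("crash:", err)          # crash: 'cliff edge'
```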
Let me give you a crude example:
Let's say a machine is chasing a human riding a motorbike. At one point, the human reaches the edge of a cliff and manages to leap over to the other side with his bike. The machine cannot go to the other side. What will it do? How will a machine handle a situation where all exit statuses come with the same probability of success, the fat, round 0%? Or worse, what if its calculations decide that the success of a jump to the other side is exactly 50%? What would it do? Jump? Stay? Toss a coin?
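A toy sketch of that dilemma (the numbers are invented): the machine's decision rule simply has no branch for a tie.

```python
# Toy decision rule: pick the action with the best estimated success rate.
# Nothing in the script says what to do when the estimates are equal.
def choose(options):
    ranked = sorted(options.items(), key=lambda kv: kv[1], reverse=True)
    (best, p1), (runner_up, p2) = ranked[0], ranked[1]
    if p1 == p2:
        raise RuntimeError(f"no rule for a {p1:.0%} vs {p2:.0%} tie")
    return best

print(choose({"jump": 0.2, "stay": 0.1}))   # jump
try:
    choose({"jump": 0.5, "stay": 0.5})
except RuntimeError as err:
    print(err)                              # no rule for a 50% vs 50% tie
```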
Then, there's the issue of precision and truncation. A 50% chance for machines is more like 50.00000000%. And what if there's a little non-zero digit somewhere among the less significant bits? Does the machine take it into account? Or does it truncate it off? What about rounding?
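This is not hypothetical; standard 64-bit floating point shows it immediately:

```python
import math

# Ten 10% steps "should" add up to a clean 100% chance -- they don't:
p = sum([0.1] * 10)
print(p == 1.0)    # False
print(repr(p))     # 0.9999999999999999

# And rounding vs. truncation give the machine two different answers:
print(round(p, 8))                     # 1.0
print(math.trunc(p * 10**8) / 10**8)   # 0.99999999
```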
Then, there's the small issue of ...
Machines would also have to die at some point - or keep replacing all their parts until nothing of the original was left. Would this new, refurbished machine be able to do what the original did? Would it inherit all of its parent's knowledge, including the experience? With humans, experience is individual, which is what makes us so unique. What would one machine's life history mean to another?
And so it goes ...
To make our machines capable of coping with endless reality, they would require an endless amount of code. This means that millions of humans would have to write millions of scripts every day to try to handle every possible scenario the machine might face, never really finishing the project. The programmers would have to write down the collective human memory of the past 200,000 years in code - and continue writing forever, because every moment of life, something new happens.
Translating all this into binary code does not seem quite feasible.
Obstacle #4: Feelings & instincts
Eventually, though, we might assume that they would succeed. The dry information would all be there. Still, without feelings and instincts, our machine would never be able to make the right decisions.
Without the ability to analyze human behavior, something humans have to cope with every moment of their lives, learning and re-learning situations, our machine would be totally lost. Moreover, without feelings and impulses to drive its decisions, our machine would be reduced to sterile mathematical logic.
Humans often make decisions based on whim, gut feeling, irrational thinking, spontaneous moments of courage and mischief, delusion, past experience, and whatnot. These factors cannot be translated into algorithms, since courage, delusion and experience are individual, completely unique to the person they belong to. On the collective level, there are also decisions formed on higher tiers of the hierarchy, like family, tribe, town, country, nation, religion, etc. A single machine could never achieve these.
In a way, our machine would have the reasoning capacity of an autistic person, taking everything literally, at face value. With such survival skills, it would be easily duped by 3rd graders. And we still have not given our machine any self-awareness.
Obstacle #5: Self-awareness
To reach self-awareness, our machine would have to reroute some of its circuits, create a new would-be neurological pattern that would give it something beyond the pre-scripted abilities. This could only happen by accident or direct human intervention. Let's assume we want to help our machine reach awareness.
We would have to deliberately hotwire circuits, one by one, hoping the malfunction due to the introduced change would not cripple our machine and instead grant it a new, better, more optimized logic. This is somewhat like manually mapping DNA or the neuron nets in our brain.
So what do we have here? We could play with DNA, which has some 4 billion base pairs, 90% of which are junk. Or we could play with synapses, of which an average human has approximately 10^15. For the sake of simplicity, let's focus on DNA first, shall we?
Since we know 90% of DNA is junk, roughly 90% of our reroutes would cause our machine either not to evolve or even to devolve. This makes for a tricky experiment. We have roughly four billion binary options, all interlocked. In plain terms, that's 2^4,000,000,000 ways to get it wrong. Not bad, right?
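Just to appreciate the scale of that search space, taking the article's round four-billion figure at face value:

```python
import math

base_pairs = 4_000_000_000                # the round figure used above
digits = int(base_pairs * math.log10(2))  # decimal digits in 2**base_pairs
print(f"2**{base_pairs:,} is a number with roughly {digits:,} decimal digits")
# roughly 1.2 billion digits long -- good luck enumerating that
```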
Well, evolution took some 4 billion years and countless billions of species to perfect this formula. I think humans could eventually succeed in creating something with the skills of an amoeba in what ... a few millennia?
We would still be short of self-awareness. We would have to develop an immune system for our machine and make it aware of itself, creating the first spark of intelligence. This intelligence would then have to go from being dumb to very smart, eventually attaining at least the human capacity, with 10^15 adaptive logic circuits.
Let's go back to our computer grid. It's small, just a few billion cores, whereas we need a thousand trillion. And we have still not taught our machine how to move, eat or replicate, or how to fix its own problems. And it still would not have feelings.
Obstacle #6: Survivability
What would happen if we reset our machine, turned the power off, or if lightning struck one of its electrical sub-components, frying half the transistors? What would happen? Our machine would die. And we would have to start over.
The first task would be to create other machines, designed to monitor and fix any problems found with our baby. They would play the role of all sorts of repair proteins, fixing damage in the DNA and whatnot. But they would struggle to work with any unscripted patterns detected in the grid. This means that we would have to upgrade the R&R firmware for every change introduced into the system. It would be a never-ending race condition, still wholly dependent on the human.
In theory, eventually, we might be able to create a machine, governed by our baby Adam, which would check for changes in its system, assume them all to be good, and then send updates to robots designed to maintain and fix it.
The next step would be to equip our machine with power cells, to keep it alive during power outages. But eventually, it would have to connect to every single power station in the world, to keep going at all times. And it would also have to have some sort of mobility.
As you can see, what we have here is something like the Internet. A world-wide grid that is ever changing, with humans for its sensors, wholly and completely dependent on them. This is exactly what we have today.
This system is aware of itself, if you will, but its awareness is the collective existence of all its users. It is also capable of changing and evolving, and it has all the elements of a living thing, with viruses traveling down its lines, 90% junk manifested in email spam, and so forth.
The Internet is Skynet. But ... it is one big collective thing spread across the entire globe. It's not individual robot machines made of titanium running around with miniguns, mowing down pigeons. In fact, that would be counterproductive to its existence, as the Internet feeds on humans. What's more, bringing it down is such a simple affair. Just unplug the 13 root DNS servers and it's darkness worldwide.
Movies like The Terminator are a relic of the late 70s and 80s, when a new era of apocalyptic thinking evolved, following the success of the Star Wars movies and then the "Star Wars" standoff between the Americans and the Soviets. But they have no place in our society today. It was fun imagining that the turn of the century would bring an electronic catastrophe, when in fact it brought enlightenment and happiness to billions of people, connecting them together.
Honestly, I think I hate almost everything that has to do with electronic doom. I can forgive sci-fi writers who published this kind of nonsense in the 40s and 50s of the 20th century, when the future seemed so far away and inventions burgeoned like mushrooms after a mushroom cloud. But for people who have tasted the slow reality of the modern world, it's just plain stupid.
It's ridiculous to think that someone would be so naive, or so arrogant, as to place machine domination so soon in the future. It's an insult to evolution and humanity. BTW, have you noticed that the dates in these would-be apocalyptic movies keep sliding away? To focus on the Terminator franchise, the first promised date of doom was 1992, then it was 1997, then it was 2010 or so, then it was 2038, and so on. Simply depressing. The most sensible thing would have been to choose a date far into the future, hundreds or thousands of years from now, so that some minimal plausibility could be claimed.
The year is 2009. Technology is blooming faster than ever before. And what is our greatest invention yet? The iPhone, maybe, so that humans can have something to toy with. The concept of thinking machines is just infuriating.
The machines will never be more than stupid boxes doing what they're told. And the moment they start thinking for themselves, they won't be machines anymore. Since this will never happen, we can continue watching stupid movies and pretend that it might.