Intelligent Artificial Intelligence

Updated: August 17, 2018

Smart machines are not a new thing. The concept, at least in the way we associate it with the term Artificial Intelligence (AI), has been around for at least 50 years. However, it has recently become popular, a buzzword really, with data companies doing "deep learning" to indicate a higher level of machine intelligence as opposed to ordinary database queries.

But in essence, the machine intelligence remains pretty dumb. The aforementioned companies are hiring thousands of real people to monitor video and chat, because algorithms can't do that effectively. Captcha is another example. Translations are woefully bad. All of this raises a question: why is AI not really intelligent?


Note: Image by Darren Hester.


To be able to build smart machines, we need to understand ourselves. I believe that computers, the CPU in particular, are an expression of how our brain is wired, what it does, and how it transforms electrical impulses into higher cognitive functions. We cannot ask our brain to explain itself, but we can build models that try to bridge the gap between biology and silicon, even if we do this partially on a subconscious level. This means that artificial intelligence, if we really want it to become smart, needs to follow the same recipe of evolution that the human brain did - and the first tool in that transformation is language.


Above: The Internet, in a nutshell - we have come a long, long way as a species.

Language is a powerful survival aid. It's a mechanism that allows members of the same species to communicate effectively [1]. This is true for all animals. For primitive organisms, the communication is entirely chemical. The higher we go, the more complex the communication becomes. Sound, due to its ability to carry over great distances, even in the dark, is the superior form of non-physical communication, which is why almost all animals use it. Naturally, the next step in evolution is the transformation of sounds into more complex forms that carry denser, more accurate information - words.

Imagine yourself a proto-human roaming the plains of Africa (or China, as some new research suggests). You see a bush of juicy berries right over the next ridge. And then, there's a pack of predatory animals in the vicinity. How do you signal all this information back to your tribe? With words, you can do it in a few sentences, with unequivocal meaning and precision (well, to a degree). Either way, words trump all other forms of communication you might develop. It is not without reason that we developed such complex verbal skills over the past few hundred thousand years.

Now, artificial intelligence.

Computer programs - in every shape or form, be they software algorithms or hardware instructions - are limited collections of words that form a very narrow, deterministic vocabulary. More importantly, the language the machines use is not designed for survival. It functions as an interpreter between two media - human intentions and computational work - and it's not there to aid the machine or increase its chances of survival. And because it does not help the machine's survival, it is a dumb language. Indeed, you can teach computers to speak every single language out there, but they will still be simple, primitive, insufficient.

Circuit board

Note: Image by David Felix.

Survival instinct

This brings me to the most important point in this article. Computers have no survival instinct. Their purpose is not to live. If you think bad sci-fi, if you think Skynet, problems always start when machines develop self-awareness and decide to protect themselves by fighting their human creators.

But the end state is the least important (albeit the most exciting) state in the equation. If and when awareness occurs in a machine, it will be a very interesting moment in biology, philosophy and religion, as well as technology. However, at the moment, this is unlikely to happen. And the main reason is that machines have no reason to develop survival skills.

Computers are powered by external sources. They have never had to fear losing power. That fear, a survival-driven feeling, is what we might classify as the fear of death. And death is not being hibernated, or even being shut down. It is destruction, the cessation of life.

Even if you turn a computer off and unplug all its power sources (including the various motherboard batteries), you have still performed what is essentially a fully reversible action that allows this entity to continue its existence. Its being is printed in metal and plastic in rigid form, and it is not threatened by non-volatile changes. The combination of these two (well, three) factors - external power and non-volatile states of non-existence - is the reason why machines need not develop survival instincts (the first path to self-awareness), and the resulting lack of self-awareness becomes the third factor.

You could kill a computer by destroying its boards. But computers currently have no sensors - no immune system - to protect themselves. If you swing an ax, there are no cameras designed to detect the attack and warn the computer to protect itself. We could potentially build that, but it would be rather Skynet-like. Imagine a computer that fires a nuke if you threaten its power sources. Now it becomes complicated.

Moreover, the computer consciousness - at the moment - is not embedded in its circuits. Personal computers, and even large supercomputers, simply do not have sufficient complexity to develop any sort of consciousness. There's not enough "biological" mass to sustain the higher form of intelligence that such a process would require. On a macro scale, one could argue that the whole of the Internet is one primitive artificial organism, and it sure does have its "soul" - but that soul is mostly the total integral of human input that has gone into this machine over the past 20-30 years, not the result of any self-growth. Makes sense, as intelligence takes millions of years and millions of examples to fine-tune, and the collective compute power on our planet is a rounding error on the path to something like that.


Note: Image by Maciej Urbanek.

Take even the cockroach as a crude example. It has something like one million brain cells, but then, it also has a neuron density that is about an order of magnitude higher than in mammals. This means that its brain has more synaptic power than its size would suggest (a full 100%, compared to the human neuron-to-glial-cell ratio [2]). But essentially, if we assume that any two cells could form a synaptic connection - and a complex piece of information may take tens, hundreds or even thousands of connections - then such a brain has the capacity for more than 10^100,000 distinct permutations. This is way, way, way more than any existing non-biological storage we have on the planet. Scale this down hundreds of orders of magnitude, and it's still more than what we have compute-wise, or will have for the foreseeable future.
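Just as a back-of-the-envelope check on those numbers, here is a small Python sketch. The figures are hypothetical assumptions on my part - roughly one million neurons, any pair free to form a synapse, and a piece of information stored as some subset of those synapses - but they show how quickly the combinatorics blow past 10^100,000:

import math

neurons = 1_000_000
# Every unordered pair of cells is a potential synapse: roughly 5 * 10^11 of them.
possible_synapses = neurons * (neurons - 1) // 2

# Each subset of potential synapses is a distinct connectivity pattern,
# so there are 2^possible_synapses patterns. Work in log10 to keep the numbers sane.
log10_patterns = possible_synapses * math.log10(2)
print(f"potential synapses: {possible_synapses:.2e}")
print(f"distinct connectivity patterns: about 10^{log10_patterns:.2e}")

# Even if a single piece of information only ever engages a few tens of
# thousands of synapses, the number of such subsets already exceeds 10^100,000.
k = 20_000  # assumed upper bound on synapses per "memory"
log10_subsets = (math.lgamma(possible_synapses + 1)
                 - math.lgamma(k + 1)
                 - math.lgamma(possible_synapses - k + 1)) / math.log(10)
print(f"subsets of {k} synapses: about 10^{log10_subsets:.0f}")

Whatever the exact biological numbers turn out to be, the point stands: the count of possible wiring patterns dwarfs anything we can store or enumerate in silicon.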

Survival means fighting for resources

Let's go back to resources. Life is all about securing the necessary resources - food essentially - to allow the body to continue its vital functions. Every single living organism does this, and it's the primary imperative of existence.

Survival is also about propagation - but how does a machine, built from finite resources in a very finite way, propagate? You may say that software can do that, and indeed, technically, viruses and worms have a semblance of biological survival about them. But computers, if we assume individual uniqueness, cannot really replicate themselves. Today, machines are not unique in the sense that you can replace one piece of machinery with another, and if you apply the same algorithms to the identical architecture, you get the same results. But then, we only talk about results at the level we're interested in. Are any two printed boards identical? Is the actual processor frequency identical in any two chips? If you zoom down, you start seeing differences. Humans are also identical, if you ignore tiny details like hair color, fingerprints or height.

For the moment, let's focus on the food question. We provide ample food to our machines. They have never had any need to worry about that. This is also why they have no resilience. You can perhaps alter the board voltage a tiny bit, but beyond that, you get irreparable errors. Machines will stop working correctly if you flip even a single bit. They have almost no redundancy, and things will go badly with even tiny changes. Contrast that with humans. We have multiple correction mechanisms to stop uncontrollable cell division [3]. This is even more remarkable given the fact that 1 in every 100,000 cells turns cancerous pretty much all the time [4]. The math for this is not simple, but essentially, one in 100,000 cells goes rogue every time there's a division, and yet we have seven mechanisms (and we discover new ones now and then) to trigger apoptosis.
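To make the single-bit-flip point concrete, here is a tiny, purely illustrative Python sketch of my own (not anything real machines run): it flips one bit in the 64-bit representation of a number and shows how far the value drifts. Real hardware faults hit memory, buses and instruction streams too, but the principle is the same.

import struct

def flip_bit(value: float, bit: int) -> float:
    """Return the float whose IEEE-754 bit pattern differs from value in exactly one bit."""
    (as_int,) = struct.unpack("<Q", struct.pack("<d", value))
    (corrupted,) = struct.unpack("<d", struct.pack("<Q", as_int ^ (1 << bit)))
    return corrupted

balance = 1234.56
# low mantissa, high mantissa, low exponent, high exponent bits
for bit in (0, 30, 52, 62):
    print(f"bit {bit:2d} flipped: {balance} -> {flip_bit(balance, bit)}")

There is no immune system in there; nothing notices the corruption unless we bolt on checksums or redundancy ourselves.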


Note: Image by Bob Smith.

Machines are spoiled - well fed. And so they can dedicate all their resources to performing the tasks we give them, and never worry about their own survival. The brain, in contrast, is constrained: it develops new pathways precisely because it has to adapt to prevent the cessation of life. Such conditions do not exist for our machines. If there were more of a Hunger Games scenario, where computers were forced to fight for electricity, perhaps they would develop new intelligence - provided we also gave them spare circuitry to use for such a contingency.

I believe this is also one of the reasons why we humans don't do computation that well. We do extremely complex tasks easily - identify people in a crowd, catch a ball, sing and dance - things that robots still cannot perform well. We do these well because our brain is wired for tasks that enhance our survival. Crunching numbers does not enhance it significantly.

Computers, on the other hand, dedicate all their resources to crunching numbers while sparing zero capacity for self-checks, protection, survival, the search for food, correction of faulty circuits, and so on. If we invested all the energy that we use just to sustain ourselves into pure math, we would easily beat the computers, but the biological computer that does the work would die in the process. Think about it - no insignificant part of our power goes to thermal regulation and fighting off infections from foreign organisms. Even so, our brains are hungry little things, consuming roughly 20% of all our energy intake [5].

So the thing is, if we want IAI rather than AI, then we need to give it the same playground our body has: about 20% of all the electricity would power the computational chip, and the remaining 80% would be dedicated to self-preservation, including developing and building protection mechanisms (weapons) to keep itself alive even if we pull the plug. That's not something most humans will ever be comfortable with. But we can always count on evil geniuses to try.


I read somewhere that we will have developed a cockroach replica by 2050. That sounds semi-reasonable. But we also need to remember that cockroaches are extremely resilient, and they will probably outlive humans in an end-of-the-world scenario. Do you want your media player to run around the house when you try to turn it off? Or bite you on the ankle while you're sleeping?

In essence, the formula for intelligent artificial intelligence is the same one we followed as we evolved over the past two million years. First, survival. Then, the inevitable emergence of self-awareness. And then, language. Any other way, we will struggle to nail down the AI question. Until then, we will have humans moderating the Web, because language is oh so tricky, even after so much evolution.

I mean, sarcasm or trolling, when done properly - you can't really tell, now can you?


Some reading material for y'all:






