Consumer-oriented AI promotes stupidity

Updated: June 3, 2024

Here's a scenario. You go to a fair on the outskirts of your town. There, among the many rides, you find this machine. It works like this: you insert a token, you ask it a question. The machine vomits out a slip of paper, and on it, there's an answer. A piece of wisdom. Welcome to 2024 and the so-called AI chatbot thingies.

Today, there's an enormous amount of hype around AI. Not only is this annoying, it's also insulting. Nothing insults intelligence quite as much and as quickly as the passive-aggressive combo of corporate buzzwordology, greed and misuse of technology. First, AI ain't AI, it's just machine learning. Second, the fact that it speaks like a human doesn't make it even remotely human. Alas, the distinction is lost on the masses. Indeed, if you think about it, the great danger in this AI hype is that its only viable purpose is to make the stupid stupider.

Teaser image. Credit: Photo by Jamie Haughton on Unsplash.

Garbage in, garbage out (GIGO)

Artificial Intelligence, generative models, neural processing, whatever. These words sound majestic, but all they do is obfuscate the simple reality of what we call and consider AI. Let me simplify that for you. AI, as sold today, is nothing more than the collective knowledge of the Internet, which you can access using natural language. This is supposedly different from ordinary Web searches in that you don't need to be an expert at crafting precise search queries; you can phrase questions like an "ordinary human". A fallacy, but hey.

Now, how you ask questions is a much better indicator of intelligence than what answers you get, but we will get to that. Let's focus on the Internet first. AI models are trained on all sorts of data, mostly (but not exclusively) open and publicly accessible information. This information represents the sum of human knowledge and interaction. On its own, that sounds good, because, technically, all of our wisdom, from Planck to Einstein to Plato, is out there. So, technically, AI could give you the most intelligent answers to your questions.

Right? Wrong.

AI models are complicated black boxes. What happens inside an AI model stays inside an AI model. Often, it is very difficult, and sometimes even impossible, to deconstruct the "logic" by which AI models arrive at certain answers and conclusions. Sure, you have algorithms, you have weights, you have all sorts of lovely statistics at play. But when you mash them together, you get a system that can answer certain things, but not necessarily explain why or how it arrived at its answers.

Furthermore, AI models don't really know which sources of data are true, accurate, or both. They need either human input for that (which nullifies the whole idea of AI), or they draw their own conclusions however they can. And this is where it all gets interesting. Machines make decisions based on statistics. That's not intelligence. That's just fancy mathematics. The fact that something sounds like a language does not make it a language. Take the Voynich Manuscript as an example.

Therefore, AI is simply an eloquent mimicry of the Internet. The problem is, when your sources of data are social media, forums, and the occasional scientific publication, your input becomes mostly memes, trolling, racism, and good ole human stupidity. But let's not be overly negative. Since the Internet represents average humanity, the Internet has average intelligence - around IQ 100.

This means that AI, in the best case, can offer general intelligence equivalent to that of an average human. And that's not a good result by any means. You will get some decent answers, but you will also get recommendations to put glue on pizza, logical and philosophical fallacies (like the circular conclusion that the fashion of basketball players is basketball uniforms, for instance), and outright nonsense (the so-called AI hallucinations). IQ 100, in the best case.

Robot image. Credits: Photo by Rock'n Roll Monkey on Unsplash.

Thus, even if the AI is 100% efficient and practical, it will, in the optimal case, output IQ 100 garbage. Remember: garbage in, garbage out, one of the oldest laws of computing.

All right, so that's what we have today. AI is producing average crap.

Now, let's fast-forward ten years. 50% of the Internet's content is now AI crap. Only this crap is ever so slightly inferior to the original crap (garbage 2.0 created from garbage 1.0, with less than 100% efficiency and accuracy, as no machine can do better). Now we have a new wave of AI information that is less intelligent than the original. We're down to IQ 95.

Fast forward another decade, and AI will be trained on IQ 95 material to produce IQ 85 material.

Repeat until Idiocracy.
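If you like your despair quantified, here's a toy sketch of that feedback loop in Python. The starting score and the per-generation fidelity factor are made-up illustrative numbers (assumptions, not measurements); the only point is that any per-generation loss compounds.

# A toy model of recursive training degradation, aka "model collapse".
# The numbers below are invented for illustration; only the downward
# trend matters, not the exact values.

def degrade(score: float, fidelity: float = 0.95) -> float:
    """One generation of training on the previous generation's output."""
    return score * fidelity  # a copy of a copy is never better than the original

quality = 100.0  # generation 0: the "IQ 100" Internet baseline
for generation in range(1, 6):
    quality = degrade(quality)
    print(f"generation {generation}: effective quality ~{quality:.1f}")

Five generations of this, and the score slides from 100 to roughly 77, with no mechanism anywhere in the loop to push it back up.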

Why WHY matters

The second major issue with AI answers is that, on their own, they mean absolutely nothing. What matters is how the machine chose the answer. Let's say you ask an AI box about the age of our Universe. The answer it could and probably will provide is along the lines of: 13.8 billion years.

Now, here's how you make the so-called AI fail. You just keep asking WHY. Ask the machine to explain its "reasoning", its methods, how it arrived at the answer it did, and why it thinks it's the right answer. In the best case, it will reveal its sources and tell you something like: according to XYZ, or: the collected data shows...

But then, if the AI's answer is: source X says Y - then you don't need the pointless chat box! You can simply go to the original material, do your own reading, and figure out your own answers. That would be no different from an ordinary Web search. Only that does not sound so glamorous, does it?
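If you want to see the trick mechanically, here's a minimal sketch. The ask() function is a hypothetical placeholder, not any real chatbot API, and its canned replies are invented for illustration.

# A sketch of the "keep asking WHY" interrogation. The ask() function is a
# hypothetical stand-in for whatever chat box you are poking; no real API
# is assumed.

def ask(question: str) -> str:
    """Pretend chatbot: answers once, then retreats to citing its source."""
    if question.startswith("Why"):
        return "According to source XYZ, the collected data shows..."
    return "The Universe is about 13.8 billion years old."

answer = ask("How old is the Universe?")
print(answer)

# Probe the "reasoning" until the machine falls back on its sources.
for _ in range(3):
    answer = ask("Why is that the right answer? " + answer)
    print(answer)

The moment the reply degenerates into "according to source X", you can skip the middleman, go read source X, and draw your own conclusions.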

A true test of intelligence is self-awareness. The system needs to have the capacity to introspect, to show doubt. If you ask someone a question, and then you challenge their answer, with most humans, you will provoke doubt, anger, confusion, something. There will be a human response. Machines (and idiots), on the other hand, will confidently spew nonsense ad nauseam, as long as there are free compute cycles to generate new nonsense out of existing information.

Therefore, if we put everything else aside, you have no reason to trust the answers provided by the machine. I mean, why should you? If the machine has no remorse, no moral compass (other than what Silicon Valley thinks it ought to have, which already makes the solutions inadequate for 99% of the world), no introspection, and no real intent, then its answers are equivalent to the ramblings of a madman or the elaborate lies of a snake oil salesman.

What makes everything funnier - or sadder, if you will - is that the fancy AI machines have "humanesque" characteristics. It started with cutesy digital assistants with cuddly sci-fi names. Now, the machines supposedly speak like us. The anthropomorphization is boring - and unoriginal. If you think about it, all of the so-called AI solutions mimic the sci-fi novels and movies of the past century. It kind of makes sense, because the people developing these solutions are mostly nerds, and nerds love their sci-fi. I know I do. But that does not mean I expect a Commander Data in my living room any time soon, or that the "computer" will behave, sound or reason like Majel Barrett's shipboard computer on the Enterprise (or Deep Space 9, if you will).

The biggest problem: enablement of idiots

The final piece in the equation - the end user. On its own, the technology is neither good nor bad. I can think of several use cases where these humanesque tools can and could be practical. I will purposefully not list them here, because I don't want to give away my ideas for free. The populist implementation is cliche and annoying, but it's also probably the easiest way to demonstrate the technology, however lamely.

The real issue is with the people who use these tools. Just as social media became a megaphone for morons and compulsive attention seekers, an amplifier for idiots, trolls and con folk, so the AI tools are here to enable the next generation of stupid people. How so, Dedo, you ask? Well, if the current Web is a barrier to natural interaction with the machines (the Internet), this new set of tools removes that limitation. If previously you needed some basic logic to get the right kind of answers or data, now any idiot can ask anything, any which way they like, and the machine will provide.

Philosophers, AKA not your average Internet user.

Think about it. There are people out there who will put a token into the machine, the machine will whir and buzz and flash, and then a fortune cookie will come out, with something written on a proverbial slip of paper. And the end user will soak it up. No doubt. No argument.

The reason you see so much resistance and negativity around AI, and why people are trying to "fail" these tools, is that those are INTELLIGENT people who see the risks and dangers, and are trying to show them to the world before it's too late. Only, intelligence isn't profitable. Stupidity is. What better way to cash in on that than to have millions and millions of low-IQ people worship magical alien boxes that spew random wisdom? Fast, efficient, and always connected.

Too cynical? Well, you could say that AI will learn from people's computers, their day-to-day work, and then be able to make decisions and draw conclusions based on that. Yes. Exactly. That's the problem. Imagine the daily routine of some mouth breather out there, with their own set of conspiracy theories and their own collection of logical fallacies, half-truths, superstitions, and other nonsense. Or an even simpler case: someone who saves files on their desktop! Now imagine software actually learning from those people, and then trying to make those people even more efficient! Not just that, the software will incorporate this mass wisdom into its own flows (machine learning, cor), so that sometime in the future, when you ask a question, a part of that answer will be flavored with the nonsense of the common ape somewhere.

This is why AI cannot really be collective. But that's the paradox. If AI runs locally only, and works for individuals, there's no big data there, no big profit for the corporations. The AI needs to gather all the data it can to make itself bigger and smarter (and more profitable) - but it will end up bigger and stupider instead. At some point, the equation will break.

Conclusion

AI can only work if the data is good, and if the decisions being made are sane and logical. In other words, AI can only work if it supports real, profound human intelligence. That excludes 99% of existing data sources. Moreover, people who love critical thinking, philosophy and the scientific method are the least likely candidates to use AI tools blindly. Because a big part of intelligence is curiosity (and doubt). It's discovery, it's the search for information. Having a magical know-it-all box removes all the fun. It removes the essence of what it means to be human.

With corporate greed running amok, there's a pretty good chance AI will be used primarily in its consumerist form. Which means we're on a path of diminishing returns, a slow cycle of increasing stupidity that cannot end well. Even now, AI aside, we have "lost" a lot of the original knowledge and skills, because many of our tools are abstraction upon abstraction. AI promises to accelerate the process. We could end up with a humanity that is clueless about basic things, because no one will know what the original data is, or where to find the sources of truth. Democratization of knowledge does not work. It simply makes everything average, and that's a horrible, horrible thing. Well, I think that's enough for one despondent morning. Happy trails.

Cheers.