
AGI: Three Parts

I've been encountering articles by neuro-skeptics quite often lately: people who consider all this AI hype to be just that, hype, a bubble, something clearly overvalued. Skepticism is natural, useful, and evolutionarily justified for humans. Moreover, I'm a big skeptic myself: I look at everything through the lens of engineering snobbery, occupational bias, and innate distrust. It's also hard for me to admit to myself that my 20+ years of experience in IT might soon be headed for the trash heap. I too cling to the logical straws that let me hope I'll remain in the ranks.

But my personal neuro-skepticism extends even further. It largely concerns the capabilities of natural neural networks, our brains. I'm very skeptical about human intellectual exclusivity. I find claims about our supposedly unique creative abilities and our actual control over everything around us overly presumptuous. Our intelligence is not only unevenly distributed across the population, it is also very unstable over time: we are highly dependent on stress, fatigue, hormone levels, and neurotransmitters... The smartest among us can sometimes behave like complete fools. And that's normal. As for humanity's average intelligence: hmm, I don't even know how to put it without offending our entire civilization. Even free will is in question: a large part of the scientific community now argues it doesn't exist at all.

What am I getting at? That we often fall into the trap of far-fetched comparisons: we measure the best representatives of humanity in their prime against the average performance of selected language models under specific limits on computational complexity and memory operations. We overestimate explicit algorithms and our natural ability to create them "consciously." And for this reason we underestimate what's happening literally before our eyes. Here's what's happening.

We can identify three factors whose mutual influence determines general AI trends:

- Brute Force: the raw computational power and hardware resources available to models;
- Quality: the quality of the models themselves and of what they produce;
- Efficiency: how efficiently the available resources are used.

These three vectors create a general cumulative effect. In some areas it already shows clearly; in others it should show in the near future.

This synergy is often underestimated because modern LLM behavior is considered in isolation from everything else.

But most importantly, all of this can be (and will be) "closed" within the neural network itself.

AGI

What fundamental advantages does AI have over NI (Natural Intelligence)?

Skeptics discussing the possibility of AGI often cite the ENORMOUS number of neurons in the living brain. In my view, given the above, this isn't much of a problem, since not all neurons (and not at all times) are equally useful and efficient at purely intellectual tasks.

For AI to be maximally efficient, it doesn't need to be similar to human intelligence—on the contrary, it needs to be strongly DISSIMILAR.

But what's needed for AI to become true AGI?

We need to unite the three basic factors I wrote about above, Brute Force, Quality, and Efficiency, and close them within the network itself, architecturally.

We need to implement dynamic allocation of specialized neuron groups, a kind of internal network of "sub-agents," exchanging information over efficient internal protocols and direct connections. Remember the recent story about AI models' love of owls, where a teacher model's preference for owls was transmitted to a student model through seemingly meaningless sequences of numbers? Strategically, then, we have the union of Quality and Efficiency as the basic development vector.

If we draw an analogy with humans, in some cases this might resemble multiple personality disorder, where each sub-personality is occupied with its assigned task. These sub-personalities appear and disappear as overall need dictates.

This is literally what many AI tool development teams are already implementing, but only as an externally orchestrated, and therefore significantly less efficient, process.
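To make the idea concrete, here is a minimal sketch of mixture-of-experts-style routing, one possible realization of "dynamic allocation of specialized neuron groups." Everything in it (sizes, names, the gating scheme) is illustrative, not a description of any particular production system:

```python
import numpy as np

# Minimal mixture-of-experts-style routing: a gate dynamically picks a small
# subset of specialized "experts" (sub-networks) for each input. All sizes
# and weights here are random placeholders.
rng = np.random.default_rng(0)
D, N_EXPERTS, TOP_K = 16, 8, 2          # vector size, expert pool, experts per input

expert_weights = rng.standard_normal((N_EXPERTS, D, D)) * 0.1  # one tiny "expert" each
gate_weights = rng.standard_normal((D, N_EXPERTS)) * 0.1       # the routing gate

def forward(x: np.ndarray) -> np.ndarray:
    """Route one input vector to its TOP_K most relevant experts and merge."""
    scores = x @ gate_weights               # affinity of this input to each expert
    top = np.argsort(scores)[-TOP_K:]       # dynamically allocate the relevant experts
    mix = np.exp(scores[top])
    mix /= mix.sum()                        # normalized mixing coefficients
    # Only the chosen experts do any work; their outputs are merged by the gate.
    return sum(w * (x @ expert_weights[i]) for i, w in zip(top, mix))

print(forward(rng.standard_normal(D)).shape)   # (16,): 2 of 8 experts were active
```

The point of the analogy: only a couple of "experts" wake up per input, and which ones wake up is decided dynamically, inside the network itself rather than by an external orchestrator.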

But what about Brute Force? AI can't arbitrarily change its hardware characteristics and manage available resources, can it?

Directly, not yet. But it can do this indirectly: by helping create fundamentally new architectures and approaches, by optimizing existing solutions, and by improving energy efficiency. It's already doing this.

Singularity

And then the singularity awaits us. We often discuss, for example, what will happen to us engineers and developers once AI can fully replace us. But when (and if) it can and wants to do this, everyone else will be "replaced" too: lawyers, economists, officials, politicians, and of course scammers and manipulators. The level of tasks for real developers (the fake ones we've already replaced) is only slightly higher, so this broader "replacement" is practically guaranteed.

This will force a complete and fundamental rethinking of our civilization's entire economy, with the help of that same AI. And a long and painful adaptation. That's the best case.

In the worst case... I don't even know. Suggest your own variant.

P. S.

A video turned up in my YouTube recommendations in which the author explains, in accessible terms, why all LLMs absolutely always lie and why this can never be fixed. He accompanied the story with a description of how LLMs work (in fact, a description of how T9 works). In principle, that's a decent way to explain things at the most basic level, but with modern LLMs everything is, of course, more complex, and forecasts for solving such congenital problems are far from unambiguous.

But the most important thing the author misses is that the human brain lies and hallucinates in exactly the SAME way. We are also T9. You remember the popular joke:

- What does a cow drink?

- Milk!
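For illustration only, here is a toy sketch of a purely associative "predictor" that makes the same mistake. The association table and its weights are invented; real LLMs and brains are vastly more complex, but the failure mode has the same shape:

```python
# Toy "associative predictor": answers with the word most strongly linked to
# the prompt's content words, with no model of grammar or world knowledge.
# The table and weights below are invented for illustration.
associations = {
    ("cow", "drinks"): {"milk": 0.7, "water": 0.3},
    # "cow" co-occurs with "milk" far more often than with "water",
    # so a purely associative predictor blurts out "milk", exactly like
    # a human primed by the joke.
}

def answer(subject: str, verb: str) -> str:
    """Pick the most strongly associated completion."""
    candidates = associations[(subject, verb)]
    return max(candidates, key=candidates.get)

print(answer("cow", "drinks"))  # -> "milk" (wrong: cows drink water)
```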
