AGI: Three Parts

I've been encountering articles by neuro-skeptics quite often lately: people who consider all this AI hype to be exactly that, a bubble, something clearly overvalued. Skepticism is natural, useful, and evolutionarily justified for humans. Moreover, I'm a big skeptic myself: I look at everything through the lens of engineering snobbery, occupational bias, and innate distrust. And it's also hard for me to admit that my 20+ years of experience in IT might soon be headed for the trash heap. I too clutch at the logical straws that let me hope I'll remain in the ranks.

But my personal neuro-skepticism extends even further: it largely concerns the capabilities of natural neural networks, our brains. I'm very skeptical about human intellectual exclusivity, and I find statements about our supposedly unique creative abilities and our actual control over everything around us overly presumptuous. Our intelligence is not only unevenly distributed across the population but also very unstable over time: it depends heavily on stress, fatigue, hormone levels, and neurotransmitters... The smartest among us can sometimes behave like complete fools, and that's normal. As for humanity's average intelligence, hmm, I don't even know how to put it without offending our entire civilization. Even free will, a substantial part of the scientific community now argues, may not exist at all.

What am I getting at? That we often fall into the trap of far-fetched comparisons: we weigh the best representatives of humanity, at the peak of their powers, against the average performance of selected language models under specific limits on computational complexity and memory operations. We overestimate explicit algorithms and our natural ability to create them "consciously." And for that reason we underestimate what's happening literally before our eyes. Here's what's happening.

We can identify three factors whose mutual influence determines general AI trends:

  • Increasing computational power available to neural networks. Let's call this Brute Force.
  • Improving model architectures, training methods, optimization, and training-data preparation. Improvements in specialized hardware also belong here. Let's call all of this, collectively, Quality.
  • Developing approaches for using neural networks more efficiently. Here I mean what can, for simplicity, be reduced to context engineering: highly specialized agents, tools for their orchestration and self-verification (master agents), reasoning pipelines, various other trendy competitive-collaborative schemes, pseudo-multimodality, more efficient agent communication protocols, and so on. I'll call this Efficiency.

These three vectors combine into a general cumulative effect. In some areas it is already clearly visible; in others it should show up in the near future.

This synergy is often underestimated by those who consider modern LLM behavior in isolation from everything else.
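
To see why, here's a toy model (my own illustration; the multiplicative relationship is assumed purely for the sake of argument, not a measured scaling law): if the three factors multiply rather than add, modest simultaneous gains compound.

```ts
// Toy model: capability as the product of the three factors.
// The multiplicative form is an assumption for illustration only.
const capability = (bruteForce: number, quality: number, efficiency: number): number =>
  bruteForce * quality * efficiency;

const base = capability(1, 1, 1);          // 1.0
const boosted = capability(1.5, 1.5, 1.5); // 3.375
console.log(boosted / base);               // ~3.4x total from three separate 50% gains
```

Judge any one vector in isolation and you see a modest 1.5x step; let all three move together and the jump starts to look qualitative.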

But most importantly, all of this can be (and will be) "closed" within the neural network itself.

AGI

What fundamental advantages does AI have over NI (Natural Intelligence)?

  • A smart person with a modern computer connected to the internet gets a significant intelligence boost. Well, unless they've wandered onto a porn site, at which moment their intelligence turns negative. But AI is intelligence, computer, and the entire internet in one package, and it's not afraid of porn sites. A priori, that implies much greater efficiency at processing large volumes of information.
  • Neural network topology. Human neurons are strictly tied to their physical location in the brain: signals can only travel along certain neuron sequences, and direct connections between neuron groups in distant areas are physically impossible. Machine neurons, in principle, have no such limitation (we'll set neuromorphic coprocessor architectures aside for now) and can form graph connections with arbitrary neuron groups. Yes, popular transformers are organized into layers, with encoders and decoders, which is a similar limitation, but in my view it's also fairly easy to circumvent, since layers (even if they remain "flat" in the architectural representation) can be made completely virtual. More efficient dynamic hyperspecialization is also fundamentally possible for machine neurons: neuroplasticity on steroids.
  • Our brain evolved to solve survival and reproduction tasks; all our intelligence and creativity are merely side effects. Significant resources of the living brain are spent on social intelligence and unconscious processes, and we're subject to many illusions and cognitive biases rooted in our biological nature. The machine is, in some sense, our heir here. But this "inheritance" is easily overcome, as confirmed by numerous examples of AIs starting to communicate with other AIs using more efficient protocols they invented on their own.
  • The human brain can work in hyperconcentration mode only for a limited time. We tire quickly and start making bad mistakes; mental overstrain can even land you in the hospital. The machine has no such limitation: it can run at maximum capacity as long as there's current in the wires.
  • Human thinking, honed on spatial problems, is quite limited in its capacity for abstraction. For example, we feel that space and time really exist; for the Machine, this is by no means an axiom. Figuratively speaking, the altitude of our flight of thought is capped by the thickness of our bio-reality's atmosphere. Not so for the Machine.
  • Learning speed and capability. It takes a human a couple of years to learn to walk; a machine, minutes. Humans need the "real world" to learn from, while machines can generate training data entirely on their own, as AlphaGo did through self-play (see the sketch after this list).
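
A minimal sketch of that self-play idea, assuming a toy 9-cell board and a random stand-in policy (every name here is hypothetical; this is not DeepMind's actual pipeline):

```ts
// Toy self-play loop: the agent plays against a copy of itself, and every
// finished game becomes labeled training data, with zero human input.
type Example = { state: number[]; move: number; firstPlayerWon: boolean };

// Stand-in policy: a real system would query its current network here.
const pickMove = (state: number[]): number =>
  Math.floor(Math.random() * state.length);

function playOneGame(): Example[] {
  const history: Array<{ state: number[]; move: number }> = [];
  const state = new Array(9).fill(0); // toy 3x3 board, game rules elided
  for (let turn = 0; turn < 9; turn++) {
    const move = pickMove(state);
    history.push({ state: [...state], move });
    state[move] = turn % 2 === 0 ? 1 : -1;
  }
  const firstPlayerWon = Math.random() < 0.5; // stub outcome evaluation
  // Label every recorded position with the eventual result: free supervision.
  return history.map((h) => ({ ...h, firstPlayerWon }));
}

// Thousands of games yield an ever-growing self-generated dataset
// that the next training iteration can learn from.
const dataset: Example[] = Array.from({ length: 1000 }, playOneGame).flat();
console.log(dataset.length); // 9000 examples, no humans involved
```

The point is the shape of the loop: play, record, label with the outcome, retrain, repeat.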

Skeptics often cite the ENORMOUS number of neurons in the living brain as an argument when discussing the possibility of AGI emergence. In my view, considering the above, this isn't such a problem, since not all (and not always) neurons are equally useful and efficient for solving purely intellectual tasks.

For AI to be maximally efficient, it doesn't need to be similar to human intelligence—on the contrary, it needs to be strongly DISSIMILAR.

But what's needed for AI to become true AGI?

We need to unite the three basic factors I wrote about above, Brute Force, Quality, and Efficiency, and close them, architecturally, within the network itself.

We need dynamic allocation of specialized neuron groups, a kind of internal network of "sub-agents," exchanging information over efficient internal protocols and direct connections. Remember the recent story about AI models' love of owls, where one model passed its preference on to another through seemingly meaningless number sequences? Strategically, then, the union of Quality and Efficiency is the basic development vector.

If we draw an analogy with humans, in some cases this might resemble multiple personality disorder, with each sub-personality occupied with its assigned task. These sub-personalities appear and disappear as overall need dictates.

This is literally what many AI tooling teams are already implementing, but only as an externally controlled, and therefore significantly less efficient, process.
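
For contrast, here's roughly what that external version looks like today, as a minimal sketch (the `callModel` stub and all role names are hypothetical, not any particular framework's API):

```ts
// Hypothetical LLM call; a real system would hit some provider's API here.
async function callModel(systemPrompt: string, task: string): Promise<string> {
  return `[${systemPrompt}] response to: ${task}`; // stub
}

type SubAgent = { role: string; run: (task: string) => Promise<string> };

// Dynamically "allocate" a specialist for one task, then let it be discarded.
const spawn = (role: string): SubAgent => ({
  role,
  run: (task) => callModel(`You are a ${role}. Answer tersely.`, task),
});

// Master agent: decompose the task, fan out to specialists, verify, merge.
async function masterAgent(task: string): Promise<string> {
  const specialists = ["planner", "coder", "critic"].map(spawn);
  const drafts = await Promise.all(specialists.map((s) => s.run(task)));
  // Self-verification pass by one more ephemeral specialist.
  return spawn("verifier").run(`Reconcile these answers:\n${drafts.join("\n---\n")}`);
}

masterAgent("design a cache layer").then(console.log);
```

The argument above is that this allocate-use-discard cycle could run inside the network itself, over direct neuron-group connections instead of text prompts passed through a slow external loop.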

But what about Brute Force? AI can't arbitrarily change its hardware characteristics and manage available resources, can it?

Not directly, not yet. But it can do so indirectly: by helping design fundamentally new architectures and approaches, by optimizing existing solutions, and by improving energy efficiency. It's already doing this.

Singularity

And then the singularity awaits us. We often discuss, for example, what will happen to us, engineers and developers, when AI can fully replace us. But when (and if) it can and wants to, everyone else will be "replaced" too: lawyers, economists, officials, politicians, and of course scammers and manipulators. The tasks of real developers (the fake ones have already been replaced) sit only slightly higher on the difficulty scale, so that broader "replacement" is practically guaranteed.

This will force a complete and fundamental rethinking of our civilization's entire economy, with the help of that same AI, followed by a long and painful adaptation. That's the best case.

In the worst case... I don't even know. Suggest your own variant.

P. S.

I came across a video in my YouTube recommendations where the author popularly explains why all LLMs absolutely always lie and why this can never be fixed, accompanying the story with a description of how LLMs work (in reality, a description of how T9-style autocomplete works). As a layman's explanation it's fine at the most basic level, but with modern LLMs everything is, of course, more complex, and forecasts about fixing such congenital problems are far from unambiguous.
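
For reference, the "T9" picture such explainers paint is roughly this, a deliberately naive sketch of frequency-based next-word prediction (my own toy, nothing like a production LLM):

```ts
// Naive "T9-style" next-word prediction over a bigram frequency table.
const corpus = "the cow drinks water the cat drinks milk the cow drinks water".split(" ");

// Count how often each word follows each other word.
const bigrams = new Map<string, Map<string, number>>();
for (let i = 0; i < corpus.length - 1; i++) {
  const next = bigrams.get(corpus[i]) ?? new Map<string, number>();
  next.set(corpus[i + 1], (next.get(corpus[i + 1]) ?? 0) + 1);
  bigrams.set(corpus[i], next);
}

// Always emit the most frequent continuation: pure statistics, no reasoning.
function predictNext(word: string): string | undefined {
  const followers = bigrams.get(word);
  if (!followers) return undefined;
  return [...followers.entries()].sort((a, b) => b[1] - a[1])[0][0];
}

console.log(predictNext("drinks")); // "water": the most frequent continuation wins
```

A modern transformer conditions on the entire context through learned attention rather than a one-word lookup table, which is exactly why the analogy, while handy at the basic level, undersells what's actually happening.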

But the most important thing the author misses is that the human brain lies and hallucinates in exactly the SAME way. We're also T9. Remember the popular joke:

What does a cow drink?

Milk!

(Water, of course; "milk" is just our own autocomplete firing on the primed association.)

26.08.2024
