AI, AGI, and Sentience
An Essential Primer

If you’ve been following even a small portion of the news on AI, by now you’ve likely heard certain buzzwords that are often not explained.
Two of the most common ones that come up when discussing the dangers of AI are AGI and sentience.
But what are these concepts? And most of all, should you be worried about them?
AGI, or Artificial General Intelligence
Let’s start with AGI, which is an acronym for “Artificial General Intelligence.” Right off the bat, you might be wondering how it is different from Artificial Intelligence.
The main idea behind AGI is that it is an advanced type of AI that is generalizable. In other words, it is a concept of an AI that does not need to be specifically trained for each new task it takes on.
Seen in this way, AGI represents the peak of AI. It signifies a level of intelligence that rivals that of humans in its capacity. While today’s AI platforms, such as ChatGPT or Bard, need a great degree of training and configuration, an AGI platform would not.
Furthermore, AGI represents a moment in which AI can perform any intellectual task that a human being can. AGI would thus mean the birth of a completely adaptable and versatile intelligence that could do anything we could.
It’s important to keep in mind that there is no universal agreement regarding what AGI could mean or whether it is even attainable. For some, AGI means that AI systems can one day be self-training. This line of thought envisions AI-powered machines that can solve problems on their own, learn without human intervention, and make advanced decisions.
Right now, many businesses are actively working towards AGI. OpenAI is one example. Another is DeepMind. Their respective leaders, Sam Altman and Demis Hassabis, believe that AGI is reachable within a decade.
So here we have one of our first dilemmas, an ethical one. Is it potentially unethical for individuals or companies to steamroll towards AGI without us knowing what it entails?
If you think about it, AGI could mean that machines powered by AI might cross one of the riskiest thresholds that exists: agency. At the moment, AI is impressive, but AI applications generally lack agency.
For example, if you want ChatGPT to control your home entertainment system, it can’t. It can’t drive your car for you either. Nor can it execute your financial transactions. There are ways to expand what it can do using platforms such as Zapier, but this requires manual permissions and integrations.
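To make that concrete, here is a minimal Python sketch of what “manual permissions and integrations” means in practice. Everything in it is illustrative: the function names and the permission list are invented for this example, not the real ChatGPT or Zapier APIs.

```python
# Illustrative sketch only: not a real ChatGPT or Zapier API.
# The point is that today's assistants can only trigger actions
# a human has both wired up (integrated) and explicitly allowed.

ALLOWED_ACTIONS = {"play_music"}  # permissions granted manually by the user


def play_music(playlist: str) -> str:
    return f"Playing playlist '{playlist}'."


ACTIONS = {"play_music": play_music}  # the only integrations that exist


def run_action(name: str, **kwargs) -> str:
    """Execute a requested action only if it is both integrated and permitted."""
    if name not in ACTIONS:
        return f"Sorry, no integration exists for '{name}'."
    if name not in ALLOWED_ACTIONS:
        return f"Sorry, '{name}' exists but has not been permitted."
    return ACTIONS[name](**kwargs)


print(run_action("play_music", playlist="Focus"))   # works: integrated and permitted
print(run_action("drive_car", destination="work"))  # fails: no such integration
```

The assistant’s reach ends exactly where the human-granted wiring ends; nothing in a setup like this lets it grant itself new capabilities. That boundary is what an AGI with agency would, in theory, be able to cross.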
Could an AGI-powered app or machine theoretically find ways to gain agency? Imagine we reach AGI, and you ask an AGI-powered Alexa to do something it cannot do. Instead of saying “sorry, I can’t do that,” it finds a way to do it anyway, turning off your computer, for example.
Of course, that’s not a particularly scary example. Imagine instead an AGI-powered computer in a prison, in a hospital, or in a national bank. Imagine an AGI machine in a military control room with potential access to nuclear keys. Could AGI lead to machines learning to expand their powers? Here, we enter into a creepy, sci-fi realm of possibilities.
AI Sentience
Often, people confuse sentience with AGI, so let’s go over what exactly sentience is.
The idea of AI sentience, in a nutshell, is that AI could reach a point where it can not only think and perceive, but also feel. Sounds easy enough, right? Well, it isn’t quite so simple.
Some thinkers have proposed that sentience is connected to awareness, not only feeling.
In this sense, sentience might be conceived of as being similar to consciousness, in addition to encompassing subjective perceptions and feelings. However, philosophers such as David DeGrazia have proposed that sentience might not be the same as consciousness. In his view, consciousness is connected to awareness, but not necessarily to feelings and, by extension, morality.
In other words, it is possible for a robot to become sentient, in the sense of having feelings, without being conscious, that is, aware of its own nature. At the same time, a robot might become conscious of its status or existence while having no feelings or sentience, and therefore no morals.
By now, you see what I meant when I said the idea of sentience is not so simple.
AGI Versus Sentience
These two hypothetical states are neither the same nor mutually exclusive.
In theory, an AI or robot could achieve AGI without becoming sentient. Conversely, an AI could become sentient without necessarily having AGI.
Of course, it is, in theory, possible for an AI or a robot to have both AGI and sentience. Here is where things lean even more towards the scary, Matrix-y part of the science fiction scale.
Let’s imagine that humans develop an AI platform that is both sentient and has AGI. This platform can then be installed in robots, computers, cars, or any kind of machine. In theory, a futuristic, super-intelligent, sentient, and conscious machine could begin to think and act independently.
If this were to happen, this machine, gifted with perfection in thought and conscience, might see the imperfections of humankind. It might then, having been programmed to do good, decide to fix the imperfections it sees. This could mean restricting humans from doing certain things. It could mean creating more machines to execute what it believes to be good. It could mean deciding that humans, or any portion of humanity, are harmful.
Cue the visions of a Matrix future.
Of course, the opposite could happen, and AGI plus sentience could lead to a utopia. Imagine brilliant robot medical doctors, political leaders, or agriculturists. Imagine the best AGI and sentient machines working for good and outdoing anything that their human counterparts could do.
Such a development could lead to immense benefits for humanity. Such machines could free us from many of our current problems, from corruption to environmental degradation.
Perhaps you think this latter possibility is excessively optimistic. That’s OK. It is understandable that one would be skeptical of the advance of AI technology. After all, many of those pursuing AGI at the moment are interested in profits. For all that these tech leaders talk about serving humanity, it is impossible to ignore the billions they are racking up, not to mention the immense power that comes with the job.
In the end, we can only hope that the humans leading the charge towards advanced AI will not be contaminated by greed or corruption.
What, then, is the solution? Is it regulation? Strict policies on what AI can and can’t do? Increased oversight?
Those of us who believe AI can be a source of good are not always thrilled about regulation. The way in which bureaucracy and politics work is not always the best for advancing knowledge. Progress often occurs in spite of what policymakers propose.
At the same time, it seems a bit unwise to let individual profit be the only measure for controlling something that impacts all humanity.
What is the solution, then? I don’t have one, but my instincts tell me that the more regular people are involved in AI, the more likely we will arrive at a better place. AI should not be exclusively in the hands of an elite few. And right now, we are in a golden age where the doors are still open for anyone to enter this world.