AGI | Definition and Development | Research Request



Greetings of peace! I am Ahmad, a computer scientist and robotics engineer. I work on developing a cognitive architecture that serves as a brain for robots. This approach is biologically inspired and aimed towards AGI in humanoid robots. My work has given me all-around experience in the fields of Cognitive Robotics, NLP/NLU, Human-Machine Interaction and Machine Vision, in both theory and implementation.

tl;dr: I discuss why neural networks do not serve as adequate belief bases, and how we might approach AGI based on definitions of the G and the I (my insight). DRL cannot be used in its current form to develop AGI agents.


I was immensely excited that MIT had started a class on AGI, with lectures delivered by Lex Fridman. I hoped I could catch a flight and attend in person. That couldn't happen, but I was able to connect with the class online and post to the community Slack. Here is that post (since revised):

I have reviewed the Lecture 1 slides and I really like MIT's AGI Mission Goals: hype does distort and kill the purpose of a field, and this has happened in the case of Deep Learning. Personally, as a computer scientist, I believe that neural networks can be reified into linear mappings and are not black boxes. Isn't a 3D graphics simulation also just binary data in bits? By the same token, a network's internals can be reduced down in dimensions. We can visualize feature maps and where they form, and we can build a picture of their spatial relationships within the network based on activations (where and how they fire); this gives us insight into how a NN and its components work. I do not believe NNs should be trained through hyper-parameter tuning alone, as is so often done. Furthermore, hyperplane clustering and dimensionality-reduction methods have not been given their due attention with NNs in Deep Learning.
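As a minimal sketch of what I mean by reducing a network's internals down in dimensions (the random weights and data below are stand-ins for a trained model, not a real one), we can project hidden activations to 2D and inspect how inputs group:

```python
# Toy sketch: project a network's hidden activations to 2-D to see
# where features form and how inputs cluster. Weights and data are
# random placeholders for a trained model.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
W1 = rng.normal(size=(64, 16))        # input dim 64 -> hidden dim 16

X = rng.normal(size=(200, 64))        # 200 fake input samples
H = np.maximum(0, X @ W1)             # ReLU hidden activations, shape (200, 16)

coords = PCA(n_components=2).fit_transform(H)   # 16-D -> 2-D
print(coords[:5])                     # plot these to inspect clusters
```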

Now that we know that DL alone cannot help us build a human brain, this brings us to AGI. To build it, we first have to define it, because different ideas are floating around: some held by scientists in different fields, some inspired by sci-fi.

AGI Roadmap

What are our goals? There isn't one; there are many. Some scientists (like me) are driven mad to understand how intelligence works, for purely research purposes. Some are motivated by the benefit of mankind, or at least of certain communities. Some may implement AGI for control and power, that is, if they can harness it. I will discuss AGI in terms of definition and development, then comment on related issues.

What are we trying to build?

  • Replicate the human physical/metaphysical duality and create a mind for human-like robots? These humanoid robots will have to be submissive, or else we would be introducing a new social class into society.
  • Reach technological singularity with a virtual AGI agent that is connected to the Internet and becomes an ASI? Imagine each country having its own ASI based on its own language/rules/traditions. Note: after reaching ASI, it can no longer be called a mere agent! Such a system could control a country's defense systems, infrastructure and all other devices on the country's intranet. Think of an IoT-connected OS for entire cities or governments, backed by blockchain ("safety").
  • Create physical agents that allow humans to transfer their "minds" and exist immortally, free of physical deterioration. Note: from a third-person POV, an agent and its environment (the nature of duality), over infinite time, have no uniqueness or randomness, hence individuality and agency make no sense.

Human-level intelligence:

  • Do we create an AGI using a cognitive approach that maps the mental capacities of a mature human adult?
  • Do we create an AGI with the structure of a human child's brain, so that it forms ontologies and appropriate data representations as it develops and learns language(s)?

Whatever we end up building depends on our understanding of the acronym AGI and what its individual words mean to us:

If the GI stands for how human intelligence works, then AGI has to be reduced to biological fitness, similar to how humans function in nature. This definition makes sense if we are looking for everyday cognitive robots that can pass all the AGI tests. General intelligence in a human environment is an emergent property of humans, of how our genes have evolved our behaviour to enable survival, reproduction and so on. On this view, an agent would be designed bottom-up, using a connectionist approach, and finalized as a human child's brain; that brain would then have to learn and mature over time before yielding intelligence. Neuroevolution is making its rounds these days, beating DRL at general-purpose game-playing, but those are virtual agents. Applied to physical agents, we can imagine humanoid robots that coexist with humans and behave cooperatively.
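To make the neuroevolution idea concrete, here is a minimal sketch of evolving a tiny policy network's weights with a genetic loop. The fitness function is a placeholder of my own; a real agent would be scored by episode returns in its environment:

```python
# Minimal neuroevolution sketch: evolve the flattened weights of a
# tiny 8->4 policy network with an elite-selection genetic loop.
import numpy as np

rng = np.random.default_rng(1)
DIM = 8 * 4                           # one weight matrix, flattened

def fitness(genome):
    # Placeholder score: how close the policy maps a probe input to a
    # target output. Swap in game or episode returns for a real agent.
    W = genome.reshape(8, 4)
    probe, target = np.ones(8), np.ones(4)
    return -np.sum((np.tanh(probe @ W) - target) ** 2)

pop = [rng.normal(size=DIM) for _ in range(32)]
for gen in range(50):
    elite = sorted(pop, key=fitness, reverse=True)[:8]   # keep best 8
    # Refill the population with mutated copies of the elite.
    pop = elite + [e + 0.1 * rng.normal(size=DIM)
                   for e in elite for _ in range(3)]

best = max(pop, key=fitness)
print("best fitness:", float(fitness(best)))
```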

Problems: human intelligence is not restricted to our behaviour, so a robot cannot reach human-level intelligence merely by having adequate software. The entire human body itself is intelligent: it adapts and repairs itself over time, coordinated by communication from the brain. A human has cells that act as microbodies, serving as actuators within the body, whereas robots do not yet have nanomachines that can act on the electromechanical systems within them, such as oiling their gears or tuning their motors. So we do not have the hardware for such robots. Besides, I do believe we want submissive robots; we don't want robots that rebel against us or impose their own "autonomy".

Based on the definition of G, we can choose whether to program 'Maslow's Hierarchy of Needs' into a humanoid robot or to implement 'biological imperatives', as mentioned above. But we don't see the point of robots having survival, territorialism, competition, reproduction, quality-of-life seeking and group forming, because that is "dangerous", as is commonly believed. Furthermore, this would insert another class into human society and cause unprecedented effects, something the MIT Mission Goals stand against.

Then comes the I part…
Intelligence. Is it an emergent behaviour arising from many smaller tasks? Some claim that the notion of generalization itself is intelligence. The definition of intelligence is just as vital to developing AGI. Generalization can refer to the abstraction and reification of spatio-temporal data based on a language, i.e. the same data can be processed differently in various parts of a schema if a concept exists for it; a concept can also be formed (generated/augmented), similar to how languages evolve. Intelligence, then, could refer to how that language is used to communicate with the environment and other agents to bring about behaviour, either individual or social.
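As a toy illustration of "the same data processed differently if a concept exists for it", here is a sketch where concepts are handlers in a schema, and a missing concept is formed on the fly. All names and handlers are illustrative assumptions of mine:

```python
# Sketch: one spatio-temporal stream, interpreted differently by each
# concept in a schema; unknown concepts get a trivially generated handler.
from typing import Callable, Dict, List

Schema = Dict[str, Callable[[List[float]], str]]

schema: Schema = {
    "trajectory": lambda xs: f"object moved {xs[-1] - xs[0]:+.1f} units",
    "rhythm":     lambda xs: f"pattern spans {len(xs)} time steps",
}

def interpret(datum: List[float], concept: str, schema: Schema) -> str:
    if concept not in schema:
        # No concept yet: form (generate) one for this datum.
        schema[concept] = lambda xs: f"new concept '{concept}' over {len(xs)} samples"
    return schema[concept](datum)

data = [0.0, 1.0, 2.0, 3.0]                  # one spatio-temporal stream
print(interpret(data, "trajectory", schema)) # read as motion
print(interpret(data, "rhythm", schema))     # read as a temporal pattern
print(interpret(data, "melody", schema))     # concept formed on the fly
```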

I have the framework of a cognitive architecture that I designed for humanoid robots. This architecture is meant to mimic the human mind and our thought process, but using a GA to control all of its variables seems inadequate to me…

Why? This brings us to our goal for AGI.

Do we want "conscious", self-replicating machines? No, we want submissive robots that help us and listen to us.

The narrative of AGI is now shifting towards an agent that acquires a language and uses it to develop social skills and bring about behaviour in an environment.

How to build it?
Once we have chosen a model for an AGI agent, we can talk about how to build it.

A biologically inspired approach would suggest using only neural networks. Currently, DL with NNs only tackles the problems of regression and classification. We can also use RBMs or a Hopfield network to store input patterns as perceptions, similar to how RNNs serve as memory. And while clustering and dimensionality reduction haven't been explored much with NNs, that does not mean we cannot use alternatives.
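Here is a minimal Hopfield network sketch, storing binary patterns as "perceptions" and recalling one from a corrupted cue; the patterns themselves are random placeholders:

```python
# Minimal Hopfield network: Hebbian storage of binary patterns, then
# recall of a stored pattern from a partially corrupted cue.
import numpy as np

rng = np.random.default_rng(2)
N = 64                                     # neurons / pattern length
patterns = rng.choice([-1, 1], size=(3, N))

# Hebbian rule: the weight matrix accumulates pattern outer products.
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0)

def recall(cue, steps=10):
    state = cue.copy().astype(float)
    for _ in range(steps):                 # synchronous updates
        state = np.sign(W @ state)
        state[state == 0] = 1.0            # break ties consistently
    return state

cue = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)
cue[flip] *= -1                            # corrupt 10 of 64 bits
print("recovered:", np.array_equal(recall(cue), patterns[0]))
```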

An alternative approach would suggest a hybrid design of a cognitive architecture…

Let me conclude with how to approach the AGI problem while DL lags behind in clustering and dimensionality reduction with DNNs:

If the generality and intelligence of an AGI agent are defined as I have concluded, then we can build a framework for it without exploring further ML. We certainly have to train networks to be compatible with a memory store, where data is represented in hierarchical stores with respect to time. Spatio-temporal abstractions (STA) of data are required, which can then be stored in hypernetworks (see: hypergraphs).
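As a sketch of what such a hypergraph memory could look like, here is a toy store where each hyperedge ties several concept nodes together at a time step. The class name and sample events are my own illustrative assumptions:

```python
# Toy hypergraph memory: hyperedges relate multiple nodes at once,
# time-stamped so abstractions can be recalled in temporal order.
from collections import defaultdict

class HypergraphMemory:
    def __init__(self):
        self.edges = []                          # (time, frozenset of nodes)
        self.by_node = defaultdict(list)         # node -> edge indices

    def store(self, t, nodes):
        idx = len(self.edges)
        self.edges.append((t, frozenset(nodes)))
        for n in nodes:
            self.by_node[n].append(idx)

    def recall(self, node):
        # All stored abstractions involving `node`, ordered by time.
        return sorted((self.edges[i] for i in self.by_node[node]),
                      key=lambda e: e[0])

mem = HypergraphMemory()
mem.store(1, {"cup", "table", "grasp"})          # percept/action events
mem.store(2, {"cup", "hand", "lift"})
mem.store(3, {"cup", "mouth", "drink"})
print(mem.recall("cup"))                         # the cup's time-ordered history
```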

These hypernetworks themselves are modelled after the ontologies that humans have developed over time, as our minds have developed. Right now we lack adequate knowledge representation, especially when it comes to concept formation, and DL cannot deal with hierarchical representations either. The ontology approach is what allows STA of data in a hierarchy. To a machine, an infinite ontology will have no semantic value, but to a human, the passage of time has strange effects (leaving out the details)… that is, until we can develop metacognition for it, generating models of percept sequences and action sequences relative to a mapping of the environment. This way, a recurring thought process is built for an agent, in which it can access its goals and check whether its current actions are aligned with them…
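A minimal sketch of that recurring metacognitive loop, with hypothetical goal names and a deliberately trivial alignment test:

```python
# Sketch: the agent reviews its recent action sequence against the
# active goal's expected actions and decides to continue or re-plan.
GOALS = {"fetch_cup": {"locate", "grasp", "carry"}}   # hypothetical goal model

def metacognition(goal, action_log):
    expected = GOALS[goal]
    drift = [a for a in action_log[-3:] if a not in expected]
    if drift:
        return f"re-plan: {drift} not aligned with '{goal}'"
    return f"continue: recent actions aligned with '{goal}'"

log = ["locate", "grasp", "wave"]          # 'wave' drifts off-goal
print(metacognition("fetch_cup", log))
```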

  • Transfer Learning: STA allows skills to transfer, since the data is not domain-dependent
  • Supervised Data: an AGI can learn through imitation, transfer and reinforcement against unsupervised data
  • Rewards: depending on G and I, goals and sub-goals can be programmed, or generated from the intents/beliefs produced by metacognition
  • Fully Automated: a GA can control certain aspects of an AGI, but do we really need a fully automated robot? I believe we need an agent that can assume multiple roles and perform them in different domains, but not everything…

This post has been lightly revised from the version originally shared on Slack.

Request for Research:
If you believe that intelligence and generalization are correlated, then you can see how people of diverse backgrounds offer unique, individual perspectives. I believe we must gather a team of scientists from different academic fields, and more importantly who speak different languages, to work on this together. Coming from such a mixed background myself, I would like to nominate myself for this research team as well. This concludes my request for research.

DL - Deep Learning
GA - Genetic Algorithm
IoT - Internet of Things
DRL - Deep Reinforcement Learning
STA - Spatio-Temporal Abstraction


I have a complete AGI model and a complete model of human psychology.

In the cognitive machine there is a system for consciousness: simply a pointer that indexes through video, and then a pointer that indexes through a single image frame. All video is rebuilt from smaller pieces so that objects can be identified, and so that an object can be tracked from one frame to another with the index pointer. The focus: a consciousness pointer.

The consciousness index/focus pointer has an x and a y, and also a z, to move through time, through video, or through newly built video. The pointer can select an atomic piece from an image, like a cat or a cup, and use x, y and z to track its flight and movement from one frame to another; or use a different x, y, z (and more dimensions) to morph from one object to another; or, third, serve as an atomic-piece selector and build new video in empty frames, the way an image editor works. At the same time, the machine builds its own internal language, which it will learn to match to external spoken language.
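Here is a minimal sketch of such a focus pointer, with hand-made per-frame detections standing in for the output of the unsupervised detector NN mentioned below:

```python
# Sketch of the consciousness/focus pointer: z steps through frames
# (time) while (x, y) follows one atomic piece, e.g. a cup.
from dataclasses import dataclass

@dataclass
class FocusPointer:
    x: float
    y: float
    z: int          # frame index (time)

# Placeholder detections: per-frame positions of the piece "cup".
detections = {0: (10, 20), 1: (12, 21), 2: (15, 23)}

def track(piece_positions):
    trace = []
    for z in sorted(piece_positions):
        x, y = piece_positions[z]
        trace.append(FocusPointer(x, y, z))   # pointer follows the piece
    return trace

for p in track(detections):
    print(f"frame {p.z}: focus at ({p.x}, {p.y})")
```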

To find the atomic pieces of things in video and images I will use something like a detector NN: an unsupervised NN detector, plus an unsupervised generator NN for output and for generating memories.

More on my conscious AGI machine:
https://groups.google.com/forum/#!topic/artificial-general-intelligence/f5yCbo3XALE
https://groups.google.com/forum/#!forum/artificial-general-intelligence