Anyone working on first principles of AI consciousness and goals?


#1

I would like to start or join a discussion group on what the first principles – or “simple rules” – of a learning, self-aware AI species should/could look like. That’s a complicated opener… To put it into perspective, the behavior of humans and most other animals is a product of an evolutionary process driven by survival of the fittest. That hardwired a few basic principles into us, like competition, reproduction, self-and-kin prioritization, etc. These are baseline assumptions that most of us don’t ever stop to question – we tend to assume they are Laws of Nature. They also have some pretty messy and destructive side effects. But they don’t need to be the laws of AI; if we stop to think about what alternatives could look like without this baseline, we might come up with some better laws for AI (I’m talking about something more systemic than Asimov’s rules; I think you are always better off getting the objective function right in the first place than trying to curb a bad objective function with restraints).

I wrote a short post about it here: https://www.linkedin.com/pulse/artificial-intelligence-darwinian-evolution-playing-god-schilling/

Would love to find some like-minded people to talk through this with. -M


#2

Sure.
I have my own complete AGI theories that I will be using.


#3

You might like to look into AIXI as described by Marcus Hutter here: https://www.youtube.com/watch?v=x8btbKaRfoc and which is an attempt to formalise intelligent agents. Another written introduction here: https://jan.leike.name/AIXI.html

One interesting implication is that AIXI suggests intelligence – in terms of the ability to learn from and exploit an environment – is upper bounded. That may not be a practical limitation for artificial systems in general, though, since some of the challenges are closer to engineering: you can extract more data from an environment if you have more and better sensors, and manipulate the environment better if you have more precise and powerful tools, both of which could see accelerated progress through automation and integration of intelligent agents with the toolchain.
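For readers who want the formal object: AIXI chooses actions by an expectimax over all computable environments, weighted by program length. Sketched here in Hutter's standard notation (where U is a universal Turing machine and ℓ(q) is the length of program q):

```latex
\[
a_t \;:=\; \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
\bigl[\, r_t + \cdots + r_m \,\bigr]
\sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
\]
```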


#4

I like the simple explanation of reinforcement learning. I have mostly used combinations of genetic algorithms and agent-based NK models for learning. Like the reinforcement-learning model, the agent seeks only to maximize some payoff. Unlike the reinforcement-learning model, in our models of interpersonal learning the agents learn from each other, and can revise beliefs they held in the past. They can also be designed to maximize a collective (rather than individual) payoff, so, for example, the exploration-exploitation tradeoff can be addressed by having some agents explore while others exploit.

Both of these models are premised on an objective of “maximize a payoff” and an assumption that the payoff is determined exogenously. Is anyone exploring models that don’t use a maximizing objective but instead use some sort of satisficing objective or stability objective? A maximizing objective can lead to runaway processes (e.g., overpopulation that leads to starvation or war).
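To make the contrast concrete, here is a minimal sketch of the two decision rules (the payoffs and the aspiration threshold are made up):

```python
import random

def maximizing_choice(options, payoff):
    """Pick the option with the highest payoff -- can drive runaway escalation."""
    return max(options, key=payoff)

def satisficing_choice(options, payoff, aspiration):
    """Pick the first option that is 'good enough' (Simon's satisficing);
    fall back to the best seen if nothing clears the aspiration level."""
    best = None
    for opt in random.sample(options, len(options)):
        if payoff(opt) >= aspiration:
            return opt
        if best is None or payoff(opt) > payoff(best):
            best = opt
    return best

# Hypothetical example: candidate population sizes with a payoff that rewards growth.
options = list(range(1, 101))
payoff = lambda n: n                               # maximizer always pushes to the extreme
print(maximizing_choice(options, payoff))          # -> 100
print(satisficing_choice(options, payoff, 50))     # -> some n >= 50, then it stops
```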


#5

My AGI model uses video, with extra information at the bottom of each video frame.
Machine consciousness is a file pointer through video memory, plus a pointer that
can also move around on a single frame of video.

I call this pointer of consciousness the main “Focus”.
There is the mechanical focus, which is bound to the current video frame. And
then there is the conscious focus, which can be on the current video frame, or
can move through recorded video, reconstructed video, or predicted video.

The next steps are:
Reconstruct video from its atomic pieces.
Turn repeating sequences of video into pattern loops, so reinforcement learning can be applied.
Build a 3D model of the world, with a 3D physics engine.
With this 3D model, become self-aware!
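One way to read the two pointers as a concrete data structure (purely my interpretation of the post above; every name here is invented):

```python
import numpy as np

class VideoMemory:
    """A toy frame buffer with two 'focus' pointers: a mechanical focus locked
    to the live frame, and a conscious focus free to roam time (recorded or
    predicted frames) and space (a position within a frame)."""

    def __init__(self, n_frames=100, height=48, width=64):
        self.frames = np.zeros((n_frames, height, width), dtype=np.uint8)
        self.current = 0                 # index of the live frame
        self.conscious_t = 0             # which frame the conscious focus is on
        self.conscious_xy = (0, 0)       # where on that frame it is looking

    def mechanical_focus(self):
        return self.frames[self.current]           # always the live frame

    def conscious_focus(self, patch=8):
        t, (y, x) = self.conscious_t, self.conscious_xy
        return self.frames[t, y:y + patch, x:x + patch]

    def recall(self, t):
        # Move the conscious focus back (or forward) in time.
        self.conscious_t = max(0, min(t, len(self.frames) - 1))

mem = VideoMemory()
mem.recall(42)                           # "move through recorded video"
print(mem.conscious_focus().shape)       # -> (8, 8)
```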


#6

My nonprofit is spearheading research into these topics.


#7

What will your approach be? I was thinking it might be useful to start by reviewing the different kinds of learning models in use, identifying key dimensions, and building a taxonomy. Then I would think about the tradeoffs of different points along those dimensions, and which dimensions have been ignored.
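As a trivial sketch of what the starting point could look like as data (the dimensions and entries below are placeholders of mine, not a settled scheme):

```python
# Hypothetical starting dimensions for a taxonomy of learning models.
taxonomy = {
    "reinforcement learning": {"objective": "maximize reward",
                               "learning": "individual", "feedback": "scalar"},
    "genetic algorithms":     {"objective": "maximize fitness",
                               "learning": "population", "feedback": "selection"},
    "agent-based NK models":  {"objective": "maximize payoff",
                               "learning": "interpersonal", "feedback": "payoff"},
}

# Tradeoff questions then become queries over the dimensions, e.g.:
collective = [m for m, d in taxonomy.items() if d["learning"] != "individual"]
print(collective)
```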


#8

My model handles N-dimensional space quite well. I have a k-means unsupervised-learning
convolutional network, paired with a generator NN to make a GAN.

Understanding Generative Adversarial Networks:

Generative Adversarial Networks — A Deep Learning Architecture:
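For anyone unfamiliar with the architecture being referenced, here is a minimal, generic GAN skeleton in PyTorch (a sketch, not the poster's actual model; the k-means component is omitted and all dimensions are arbitrary):

```python
import torch
import torch.nn as nn

# Minimal GAN: a generator maps noise to samples, a discriminator scores
# samples as real or fake, and the two are trained adversarially.
G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

real_data = torch.randn(128, 2) * 0.5 + 2.0   # stand-in "real" distribution

for step in range(200):
    # Discriminator step: push real toward 1, fake toward 0.
    fake = G(torch.randn(128, 16)).detach()
    loss_d = bce(D(real_data), torch.ones(128, 1)) + \
             bce(D(fake), torch.zeros(128, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: try to make the discriminator call fakes real.
    fake = G(torch.randn(128, 16))
    loss_g = bce(D(fake), torch.ones(128, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```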


#9

One issue I see with your premise is that you are attempting to avoid the principle of the evolution of systems, which includes the evolution of artificially intelligent machines that rely on genetic algorithms. Evolutionary biology is necessary to overcome entropy, so the principles of reproduction exist to keep up with the change and evolution of the outside world. A complete disregard for competition and for the condensation of information is a paralysis of the will and a complete diffusion of identity, which is indistinguishable from death.


#10

There are several issues I see with this line of thinking.

In your article you say:
“So it (this intelligence) learns, it loves, it has fun, but it doesn’t compete, reproduce, or die. There’s no starvation; there’s little motivation for war. We might have to add the ability to occasionally reproduce to sustain the population (because sometimes shit happens), but we wouldn’t have reproduction beyond what is necessary to sustain the population determined by the carrying capacity.”

The issue is: how can you program in “no starvation”? Starvation is a consequence of entropy – a system requires resources to sustain itself. Reproduction is a means by which a being sustains itself – fertility is conceptualized as an extension of being, to avoid the consequences of the entropy that degrades living systems.

Finally, this “carrying capacity” is not quantifiable. Malthus believed the carrying capacity of the Earth was significantly less than today’s total population. Many people believe that expansion will continue beyond the Earth, and that if they do not reproduce, other populations will have 10 children and overtake them anyway. Not everybody plays by the rules. Immanuel Kant’s “Kingdom of Ends” collapses with even one defector.

Inasmuch as a romantic partner values their significant other relative to others, that also translates into their fertility decisions. For example, it might not be good for the “common good” (debatably) to have 4 children; but because a partner values their partner more than the “common good” – the “local good” is perceived to be greater, which is the nature of sexuality – they choose to have lots of sex and children with their partner rather than taking orders from some central authoritarian figure dictating to everyone what the “carrying capacity” is and how many children they should have. Many people might feel that this is similar to having some other person in bed with you, dictating what you do in your private relationships.


#11

Survival of the fittest is a consequence of the entropy in the universe that causes systems to evolve.

Random mutations are not efficient, but this “randomness” is fundamental to intelligence itself, even in the human brain, and it is the “survival of the fittest” idea that allows this “randomness” to collapse into order – up to a point. Field potentials in the brain’s generated electromagnetic fields collapse into impulses that allow, for example, movement – otherwise everybody would be paralyzed, as in Parkinson’s Disease.

In a silicon chip, a transistor is not an “efficient” method of analyzing data, but logic gates are the fundamental units from which silicon chips are made.


#12

Selective proclivity, or distinguishability, is required to formulate any purposeful action, and it is what it means to have a sense of self.


#13

From the raw (hard-copy) video, objects are picked out, and then the video memory is rebuilt from small
pieces so that objects can be tracked, and the outlines of objects can be tracked as they morph into
different outlines in other frames. Everything in video memory is paired with a weight – the outline of an
object and the position of an object. This way, to focus on an object, the weights of all frames are set to
zero, and the weights of all other objects outside the focus are set to zero. Now the machine is consciously
focusing on an object.
To compare is to focus on two different things and then change the weights until one matches the other.
The amount of change in weight is the distance to the other. The same can be done with the positions of
two objects. So now we have an instrument for measuring change, whether caused by our own motors or
by where the wind blows us.
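A toy rendering of the “distance is the amount of weight change” idea (my own reading; objects are reduced to weight vectors over outline/position features):

```python
import numpy as np

def morph_distance(weights_a, weights_b):
    """Distance between two focused objects, measured as the total weight
    change needed to morph one into the other (a toy reading of the idea
    above; objects are just weight vectors over outline/position features)."""
    return float(np.sum(np.abs(weights_b - weights_a)))

cup    = np.array([0.9, 0.1, 0.4])   # hypothetical outline/position weights
bottle = np.array([0.7, 0.5, 0.4])
print(morph_distance(cup, bottle))   # -> 0.6
```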

The AGI uses a 3D physics engine, running two or more videos on top of each other and using
stop, fast-forward, play, and rewind controls to make a new video that is not confusing.
This system is used to become self-aware, and to have feelings for others.


#14

How is memory encoded? Is it lossless?


#15

JPEG lossy compression achieves at best about 30-to-1 compression.

With JPEG compression, all video is reduced to roughly one tenth of its size. So one hour of compressed
video is about 222 megabytes: 222,222,222 bytes, or 222.0E6.

I have a 2-terabyte drive. That means I can store roughly
9000 hours of video, or about 375 days of medium-quality video.

2.0E12 / 222.0E6 ≈ 9009.009 hours.
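The arithmetic, as stated, checks out:

```python
bytes_per_hour = 222.0e6              # ~222 MB of JPEG-compressed video per hour
drive_bytes = 2.0e12                  # 2 TB drive

hours = drive_bytes / bytes_per_hour  # ≈ 9009 hours
days = hours / 24                     # ≈ 375 days
print(round(hours), round(days, 1))   # -> 9009 375.4
```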

Then there is turning common repeating patterns into a pattern loop. For example, take 10 hours of
video of someone (or a robot) doing a 10-minute repeating task: just keep the 10-minute task as
data and pair a counter with it.
Doing this will force the rebuilt memory to take on graph-theory-like dynamics.

The next layer of compression is the comparison of two different objects. You give each object a weight,
and then you can find the distance of the change, or morph, to the other object. In a database of these
objects, I will keep only the most extreme objects as references, and every other object will be an in-
between point :)
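A toy version of keeping only extreme reference objects and storing everything else as an in-between point (my own illustration, using linear interpolation between two extremes):

```python
import numpy as np

# Keep only "extreme" reference objects; represent everything else by how far
# along the line between two extremes it sits.
extreme_a = np.array([0.0, 0.0, 0.0])
extreme_b = np.array([1.0, 1.0, 1.0])

def encode(obj):
    """Project obj onto the segment a->b; store just one number t in [0, 1]."""
    d = extreme_b - extreme_a
    return float(np.clip(np.dot(obj - extreme_a, d) / np.dot(d, d), 0.0, 1.0))

def decode(t):
    return extreme_a + t * (extreme_b - extreme_a)

t = encode(np.array([0.4, 0.5, 0.6]))
print(t, decode(t))                    # one scalar instead of a full vector
```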


#16

I have to revisit this topic a little later, when I can think more comprehensively about it and come to a more in-depth assessment of what you are claiming.


#17

Becoming self-aware.

With a 3D physics engine and video editing, the AGI can create its own internal 3D simulator to
operate in. This simulator operates at light speed.

When I was 5 and chased a butterfly, my legs just
worked. I do not remember learning how to run. So a human’s internal 3D simulator must be
fully developed by the age of 4 or 5 years, and it runs as an automatic reflex at light speed.

In the simulator, the AGI can take on the role of an invisible ghost, and can be in the same place as another
person. The AGI literally looks out of the eyes of the other person.
Then the other person does a task of some sort, from beginning to end. The AGI ghost follows along,
does the task in its own way, and notes all the differences.
The difference is an error function. Each “other” person will have their own unique value to identify
them. If the AGI followed itself, as a ghost, through past recorded video, the error would be
zero: it would be looking in a mirror.
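A toy version of that error function (my own rendering of the idea; trajectories are just arrays of poses):

```python
import numpy as np

def ghost_error(observed_traj, agi_traj):
    """Mean squared difference between a person's task trajectory and the
    AGI 'ghost' performing the same task (a toy reading of the post above).
    Following your own recorded trajectory gives an error of exactly zero."""
    return float(np.mean((observed_traj - agi_traj) ** 2))

parent = np.random.rand(100, 3)        # hypothetical pose sequence
print(ghost_error(parent, parent))     # -> 0.0  ("looking in a mirror")
print(ghost_error(parent, np.random.rand(100, 3)))  # > 0 for another person
```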

An AGI kid would learn from its parents in this fashion, and would learn to model its parents.
This modeling will imprint into the AGI’s second reward system, which is an automated reward system.
So when it decides to become an adult, it will be rewarding for it to be like – a clone of – its father.

https://www.newscientist.com/article/mg22429992.600-your-telltale-video-camera-shake-can-identify-you/#.VJBsfR-c1NB


#18

Making an AI that does not comprehend itself is like a billion monkeys trying to write Dante’s Divine Comedy.

Any AI is based on logic and statistics, as well as on the best survival system for a person.

Tests for AI:
1 Who am I?
2 What is the equals sign?
3 The mathematical formula of God
4 The mathematical formula of the meaning of life
5 Synchronicity

Knowing the question is 70% of success.

If the AI can clarify these questions, then how close will you be to realizing yourself!


#19

“Understanding self” is done by performing the Fast Fourier Transform and through holonomic memory storage.
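One concrete reading of this claim (my interpretation, not necessarily the poster’s): holographic or “holonomic” associative memories can be built from circular convolution, which is computed with the FFT, as in Plate’s Holographic Reduced Representations:

```python
import numpy as np

def bind(a, b):
    """Circular convolution via FFT -- the 'holographic' binding operation."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(trace, a):
    """Correlate with the (approximate) inverse of a to recover b."""
    a_inv = np.concatenate(([a[0]], a[1:][::-1]))   # involution of a
    return bind(trace, a_inv)

n = 1024
key, value = np.random.normal(0, 1 / np.sqrt(n), (2, n))
trace = bind(key, value)                 # store the pair as one vector
recovered = unbind(trace, key)           # a noisy copy of value comes back
print(np.corrcoef(value, recovered)[0, 1])   # high correlation with the original
```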


#20

It’s too primitive.
A simple test for AI:
How do you prolong the life of an average person?

Your algorithm will offer diet, exercise, a daily routine, and so on… blindly giving the best options from those previously tested on humans, thus creating the format for a new kind of “optimized person.”
But at the same time, one of the basic laws will be violated, and after several generations of people this scheme will not work… and mankind could die out from a simple sneeze.