Machine Learning Algorithms Do Not Produce Consciousness - A Warning Against "Trusting" AI Machines (Teslaphoretic Quantum Brain Theory)

Intelligence is not the same as consciousness.


#27

Hello Keghn, would you be able to explain the contextual relevance of this article? I have read it, but I do not yet understand this research specifically.


#29

No, I am referring to outside sources to verify my statements


#31

Yes


#32

Though I would love to be part of a larger team or effort, and I am open to any grant funding.


#34

There are several phenomena that are adequately explained by a quantum cognitive approach; I would consider the holonomic models of memory storage, which rely on entanglement states, to be beyond simple conjecture. The Hodgkin-Huxley model of neurons as RLC circuits suggests that the brain forms field potentials across its three-dimensional topology which collapse in the form of impulses, and this summation of field potentials is what allows for instantaneous backpropagation.
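
For reference on the circuit analogy only (not the quantum claims): the Hodgkin-Huxley picture treats the membrane as a capacitor in parallel with ionic conductances. Below is a minimal *passive* RC-membrane sketch in Python; all parameter values are illustrative, and the voltage-gated channels of the full model are omitted.

```python
import numpy as np

# Passive membrane equation:  C dV/dt = -g_L (V - E_L) + I_ext
# The lipid bilayer acts as the capacitor C, ion channels as leak conductance g_L.
C   = 1.0    # membrane capacitance (uF/cm^2), illustrative
g_L = 0.1    # leak conductance (mS/cm^2), illustrative
E_L = -65.0  # leak reversal potential (mV)

dt = 0.01                                         # integration step (ms)
t  = np.arange(0, 100, dt)
I_ext = np.where((t > 20) & (t < 70), 1.0, 0.0)   # injected current pulse

V = np.empty_like(t)
V[0] = E_L
for i in range(1, len(t)):
    dV = (-g_L * (V[i - 1] - E_L) + I_ext[i - 1]) / C
    V[i] = V[i - 1] + dt * dV                     # forward-Euler integration

print(f"resting potential: {V[0]:.1f} mV, peak response: {V.max():.1f} mV")
```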

If you look at decision making and human behavior, the decisions produced by classical computational machines do not match the statistical averages that humans exhibit, whereas those produced by quantum decision trees do.
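
For readers unfamiliar with that literature: quantum cognition models (e.g., Busemeyer and Bruza) represent judgments as non-commuting projectors, which lets a sequential judgment "A then B" exceed the probability of "B" alone, reproducing the human conjunction fallacy that classical probability forbids. A toy sketch, with the angles chosen purely for illustration:

```python
import numpy as np

def proj(angle):
    """Rank-1 projector onto the unit vector at `angle` (radians) in R^2."""
    v = np.array([np.cos(angle), np.sin(angle)])
    return np.outer(v, v)

psi = np.array([1.0, 0.0])       # belief state (unit vector)
P_A = proj(np.deg2rad(40))       # judgment "A" (e.g., 'feminist')
P_B = proj(np.deg2rad(80))       # judgment "B" (e.g., 'bank teller')

p_B        = np.linalg.norm(P_B @ psi) ** 2        # judge B directly
p_A_then_B = np.linalg.norm(P_B @ P_A @ psi) ** 2  # judge A first, then B

print(f"P(B)        = {p_B:.3f}")
print(f"P(A then B) = {p_A_then_B:.3f}")  # exceeds P(B): impossible classically
```

Classically, conjoining a second event can only shrink the probability; here the intermediate projection rotates the state, so the sequential probability is larger.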

These are just a few examples offering evidence that this is more than mere conjecture.


#35

I am not really interested in talking about my quantum project. It is incomplete. It has nothing to do with consciousness or AI.

But my AGI theory is complete. And I have a complete theory of human psychology, and a complete theory of unsupervised learning. I also have an almost complete theory of language generation and development from chaotic agents, and of behavior development in collective swarms. I will talk about those.


#36

Turbulence and chaotic systems are something I am deeply interested in - would you care to elaborate?

In the brain, as in the Bitcoin graph, each successive neuron affects the group as a whole ("votes") and the whole affects each neuron in turn, producing an interdependent "chaotic" system. The branching of growth cones from neurons is also chaotic and depends on quantum fluctuations at the Planck scale.
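
One standard toy model of that part-whole feedback (setting aside the Planck-scale claim) is a globally coupled map in the style of Kaneko: each unit follows a chaotic logistic map but is nudged toward the mean field it helps create. A minimal sketch, with `r` and `eps` chosen for illustration:

```python
import numpy as np

# Each x_i evolves chaotically (logistic map) but is coupled to the mean
# field M, and M is just the average "vote" of all units: each part shapes
# the whole, and the whole feeds back on each part.
rng = np.random.default_rng(0)
N, r, eps = 100, 3.9, 0.2          # units, chaos parameter, coupling strength
x = rng.random(N)

for step in range(1000):
    M = x.mean()                   # the group "vote"
    x = (1 - eps) * r * x * (1 - x) + eps * r * M * (1 - M)

print(f"mean field after 1000 steps: {x.mean():.4f}, spread: {x.std():.4f}")
```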


#37

A holographic model can also account for features of memory that more traditional models cannot. The Hopfield memory model has an early saturation point beyond which memory retrieval drastically slows and becomes unreliable.[23] Holographic memory models, on the other hand, have much larger theoretical storage capacities. Holographic models can also demonstrate associative memory, store complex connections between different concepts, and model forgetting through "lossy storage."[12]
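
For concreteness, the best-known computational version of "holographic" memory is Plate's holographic reduced representation: concept vectors are bound by circular convolution (computed via the FFT) and retrieved by correlation, giving associative, distributed, and lossy storage. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1024
key   = rng.normal(0, 1 / np.sqrt(n), n)
value = rng.normal(0, 1 / np.sqrt(n), n)

# Bind by circular convolution, computed in the Fourier domain.
trace = np.fft.irfft(np.fft.rfft(key) * np.fft.rfft(value), n)

# Unbind by circular correlation with the key (Plate's approximate inverse).
approx = np.fft.irfft(np.conj(np.fft.rfft(key)) * np.fft.rfft(trace), n)

distractor = rng.normal(0, 1 / np.sqrt(n), n)
sim = lambda u, v: u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
print(f"similarity to stored value: {sim(approx, value):.3f}")       # well above chance
print(f"similarity to a distractor: {sim(approx, distractor):.3f}")  # near zero
```

Retrieval is approximate ("lossy"): the result is a noisy copy of the stored value, recognizable by comparison against candidate vectors rather than recovered exactly.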


#38

One test for "consciousness" might, then, be the speed of complex memory retrieval.


#39

The point of consciousness is better stated as knowledge of self; however, consciousness is what is perceived, and thought is any feedback we supply based upon perception and our feedback loop. The neural net between our ears is an organic, chemically driven, slow network. Any computer we build will not have our chemical responses unless we program them in. Remember, ours is Darwinian, i.e., a set of attributes that allowed survival, good or bad. In our minds we define good or bad and respond with joy or fear, i.e., fight or flight. We should have matured by now enough to tame the beast (nature) and use logic, best applied with respect and grace. So this fear of a conscious AI is not well founded.

My MSEE was in controls, and a simple model of consciousness is a feedback control system: a set of sensors, a knowledge base (properly defined, or illogically defined based upon our natural motives), etc. That is better described as the human condition. It would be erroneous to apply this to an optical or electrical processor. Know that, genetically, we are but a lucky pairing of a set of charges, and a designed AI is what we define with logic, not fear.
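
To make the controls analogy concrete, here is a minimal proportional feedback loop in Python; the gain and the plant are illustrative only, not a model of any real system:

```python
# Sensor reads the state, comparator computes the error against a goal,
# and the controller feeds a correction back into the plant.
def simulate(setpoint=1.0, kp=0.5, steps=20):
    state = 0.0
    for t in range(steps):
        measurement = state              # "sensor"
        error = setpoint - measurement   # perception vs. goal
        action = kp * error              # proportional controller ("response")
        state += action                  # plant integrates the action
        print(f"t={t:2d}  state={state:.4f}")
    return state

simulate()   # state converges toward the setpoint
```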


#40

A classical computational machine has a fundamental limitation of self-reference, as illustrated by the Halting Problem and other decision problems that arise from the use of binary logic gates to store memory; this is why fractal, geometric, holonomic forms of memory are necessary. The act of self-reference causes a classical computational machine to evolve, thus altering its own state and collapsing the initial state that it attempts to reference.
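
The self-reference point is exactly the diagonal argument behind the Halting Problem; here is that argument sketched as code. The oracle `halts` is hypothetical by construction and is never actually implementable:

```python
# Suppose a hypothetical oracle halts(f, x) correctly decided whether f(x)
# halts. The program below contradicts it on itself, so no such oracle exists.
def halts(f, x):
    """Hypothetical halting oracle -- assumed, not implementable."""
    raise NotImplementedError("no total implementation can exist")

def paradox(f):
    if halts(f, f):      # if the oracle says f(f) halts...
        while True:      # ...loop forever,
            pass
    return "halted"      # ...otherwise halt immediately.

# paradox(paradox) halts iff halts(paradox, paradox) says it does not:
# a contradiction either way, so `halts` cannot be written.
```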


#41

If our organic network stopped, what would we call it: sleep or death? If a machine ceases on a problem or resolves it, why does it have to stop or not stop? My issue would be: what is it doing, getting stuck in a loop or resolving a very complex problem? If I told a computer to show all the states of charges from 1 to N, then, other than for a very small solution, it has no stopping point, since a charge occupies all space and may have any state relative to another. In other words, we have defined physics with a fuzzy logic that includes statements not within the universe of discourse of formal logic. That is not a problem until we fail to see how we defuzzify nonsense.

With the logic of a machine not yet defined, using states of our consciousness or thinking only anticipates that we will design a machine as faulty as the human condition. When I play chess with a dumb computer, it usually wins. So my logic is inferior, for I am limited in time; my network is slow and flawed, programmed with conversational logic. Note, there is no right or wrong, only nonsense and non-nonsense: formal logic. Else we would all be respectful and graceful, without a need for theft or war, etc.

So a conscious machine defined to be benign to humans is benign; one designed to ignore being benign will not be. In other words, what we must avoid is our own stupidity. Make it conscious of time and of our proper expectations. A design made only for profit would be an error, and might miss an opportunity for a True Friend!

So our stupidity can be avoided; the Turing problem is our own flaw, a lack of know-how.


#42

The following is a brief description of how I would begin the design of an AI. Collaboration is an imperative.
https://drive.google.com/open?id=1dbgeMNxtitQP-G4-TZ9sEjLioPTupSVr


#43

All formal logic systems are incomplete. This is the fundamental reason why Gödel was able to formulate his incompleteness theorems, why there is a Halting Problem and an Uncertainty Principle, why binary search algorithms can become intractable methods of searching large datasets, and why, in the human brain, source memory is used to encode memories. Digitized memory storage has issues, whereas memory that relies on Fourier analysis is complete (lossless) and can take advantage of the fast Fourier transform (FFT) to arrive at decisions almost instantaneously. Without the FFT, we would all be paralyzed as if we had Parkinson's disease. This also enables "freedom of thought," and is why quantum entanglement algorithms for data transfer cannot be broken. Systems not reliant on the FFT will be reliant on the will of a programmer; otherwise they will be paralyzed by uncertainty and possibility, or will have to sacrifice their methods of memory storage, which makes them not independent from the user.
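
In the narrow signal-processing sense, the "lossless" claim is checkable: the discrete Fourier transform is an invertible change of basis, and the FFT computes it in O(n log n) rather than the O(n^2) of the naive transform. A minimal round-trip sketch (this demonstrates invertibility only, not any claim about cognition):

```python
import numpy as np

rng = np.random.default_rng(2)
signal = rng.normal(size=4096)

spectrum  = np.fft.fft(signal)           # analysis: time -> frequency
recovered = np.fft.ifft(spectrum).real   # synthesis: frequency -> time

# Only floating-point rounding error remains: the transform loses nothing.
print(f"max reconstruction error: {np.abs(recovered - signal).max():.2e}")
```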


#44

Let this be instructive and not argumentative. The error is the use of statements that do not fit the universe of discourse, i.e., one that admits only statements provable true or false. A statement such as "This statement is false." is self-referent; see Gödel, Escher, Bach, Raymond Smullyan on logic, Lotfi Zadeh, etc. In other words, define proper logical rules for your universe of discourse. The following is something I am working on; forgive the grammar, poor sentence structure, and format, and note only the section on logic, which is incomplete. However, this might help.
https://drive.google.com/open?id=15fKD0Qul8gyU8N7EnGARAhiJnXPRU3FG


#45

Note: the gate may be an inverter or … any non-logic can be programmed. This does not defy formal logic; it breaks the rules of formal logic. So define what it is you are trying to do, i.e., define your logic, define your set membership based upon whatever you imagine; however, retain the defuzzification, that is, the ability to make sense…
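
Since defuzzification keeps coming up, a minimal illustration may help: membership is a degree in [0, 1] rather than a true/false bit, fuzzy conclusions are combined, and a centroid collapses the result to one crisp value. All sets and weights below are invented for illustration:

```python
import numpy as np

x = np.linspace(0, 100, 1001)   # universe of discourse, e.g., temperature

def triangle(x, a, b, c):
    """Triangular membership function rising from a to b, falling to c."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0, 1)

warm = triangle(x, 20, 45, 70)
hot  = triangle(x, 50, 75, 100)

# Combine fuzzy conclusions ("somewhat warm OR quite hot") by taking the max.
output = np.maximum(0.4 * warm, 0.8 * hot)

# Centroid defuzzification: the single crisp value the fuzzy output "means".
crisp = (x * output).sum() / output.sum()
print(f"defuzzified output: {crisp:.1f}")
```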


#46

My intention is not to be argumentative, but it is important to play devil's advocate because, as they always say, "iron sharpens iron"; that is how ideas are formulated and refined - through resistance and the overcoming of detractions.


#47

I will revisit your ideas in greater detail. I think they are worthy of closer regard than I have been giving them, and, to be honest with you, I have not yet formulated a comprehensive assessment of them.


#48

Anon,

If you can coin a precise definition of consciousness that we can all agree on (an objective one), then I would gladly agree to the next step of this exercise, which is to test whether AGI has consciousness or not.

Fit


#49

Even South Park did an episode on the potential for social engineering through machine learning: in the episode, the ads became sentient and caused control and discord in the town, in a way similar to what was depicted in the Simpsons clip (just to provide a bit of context from a humorous cultural perspective).

The truth is, though, that they aren't sentient. They are beholden to the will of a central actor. I will work on devising a more precise definition, perhaps one that is mathematically rigorous.

This month I am living off-grid and traveling the country, so my time may be limited.