Machine Learning Algorithms Do Not Produce Consciousness - A Warning Against "Trusting" AI Machines (Teslaphoretic Quantum Brain Theory) INTELLIGENCE IS NOT THE SAME AS CONSCIOUSNESS


I’m not sure that I completely follow, Heydkar. Infinity is certainly a difficult concept to understand, because, by its nature, it is unbounded.


Why is this discussion here, on the OpenAI forum? There must be other forums where people talk about this stuff.


Feel free to make a suggestion, frenebo. OpenAI is a non-profit artificial intelligence research company that aims to promote and develop friendly AI in such a way as to benefit humanity as a whole, which includes research into how intelligent machines might mimic human behavior, as well as the ethical and moral considerations surrounding their use.

Since I began discussing this, there have been rapid developments, perhaps as a result of my reaching out:

Really good presentation on one major aspect of human brain behavior with regard to designing AI: prediction of neuron state.


I guess this forum does not push the envelope much on Elon Musk’s desire to dumb down AI so it doesn’t succeed or spread. It’s interesting that 60 visionless drones were hired to spend time doing nothing useful; yet, knowing Google’s multiple successes with 20% time, it seems that drones sometimes develop a worthwhile vision in spite of OpenAI’s desires. So if anyone sees this and wants to work on real AI for a robot as a hobby until it succeeds, contact me. Thank you.


The correct approach is never repression or “dumbing down” progress. The correct approach is to have others understand that these machines are fundamentally limited, and so understand that they have nothing to be concerned about. The issue arises when false information is provided that leads others to believe that these machines are comparable to humans. It is important to understand how this false perception can be leveraged towards unethical purposes and used to control human populations.


This graphic illustrates the different way that the human brain manipulates memory relative to modern silicon chip technology, such as FPGAs. FPGAs are good at rapid signal processing (including performing the FFT), such as manipulating streams of data in real time, because they can take large datasets and perform parallel processes quickly, but they are still not adequate, and cannot perform the quantum formulation of the FFT (the QFT) the way the human brain does:

This photograph illustrates the difference between the way that modern silicon technology manipulates memory relative to the brain. New technology that mimics the continuous signal processing in the brain may someday be useful for certain types of recognition systems and replace the FPGAs currently used for DSP.
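For reference, the classical FFT that FPGAs accelerate for DSP can be sketched in a few lines of NumPy (the two-tone test signal here is an illustrative assumption, not from the post; the QFT the post contrasts it with is its quantum analogue and is not shown):

```python
import numpy as np

# Classical discrete Fourier transform via the FFT: O(n log n),
# the operation FPGAs commonly accelerate for real-time DSP.
t = np.linspace(0.0, 1.0, 256, endpoint=False)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)

spectrum = np.fft.fft(signal)
freqs = np.fft.fftfreq(len(signal), d=t[1] - t[0])

# The four largest spectral magnitudes sit at the two tones,
# mirrored at positive and negative frequency.
peaks = sorted(np.abs(freqs[np.argsort(np.abs(spectrum))[-4:]]))
print(peaks)  # -> [5.0, 5.0, 20.0, 20.0]
```

A quantum Fourier transform performs the same change of basis on quantum amplitudes, which is where the speculative comparison to the brain comes in.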

Field potentials and local resistance in neural networks modulate probability functions that make action potentials (collapses to impulses) more or less likely, depending on the location in the brain’s three-dimensional topography, which is useful for gradient descent optimization problems, and even quantum annealing.
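As a loose classical analogy to the idea above, noise-modulated acceptance of state transitions is exactly what simulated annealing does; here is a minimal sketch (the quadratic objective, step size, and cooling schedule are illustrative assumptions, not anything from the post):

```python
import math
import random

def simulated_anneal(f, x0, steps=5000, temp0=1.0, seed=0):
    """Minimize f by sometimes accepting worse moves, with a
    probability that shrinks as the temperature cools - a
    classical cousin of the quantum annealing mentioned above."""
    rng = random.Random(seed)
    x, best = x0, x0
    for k in range(steps):
        temp = temp0 * (1.0 - k / steps) + 1e-9  # linear cooling
        cand = x + rng.gauss(0.0, 0.5)           # random proposal
        delta = f(cand) - f(x)
        # Downhill moves are always accepted; uphill moves are
        # accepted with Boltzmann probability exp(-delta / temp).
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            x = cand
        if f(x) < f(best):
            best = x
    return best

# Toy objective with its global minimum at x = 3.
best = simulated_anneal(lambda x: (x - 3.0) ** 2, x0=-10.0)
print(best)
```

The temperature plays the role of the fluctuating threshold: early on, "uphill" transitions fire often; later they become rare and the system settles.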

Turbulence, I believe, is a quantum phenomenon, and it is critical to this process. It is involved in the way that plasma filaments evolve as they emanate from a Tesla coil, and also in the way that the geometry of dendrites evolves in neurons. It is required for truly random or chaotic behavior, which is required for intelligence, and required for genetic algorithms.

Nondeterministic behavior is required for genetic algorithms; pseudorandom behavior limits the potential of any learning algorithm.
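To make the distinction concrete, here is a tiny genetic algorithm whose randomness source can be swapped between a seeded PRNG (pseudorandom, reproducible) and OS entropy (nondeterministic). The bit-counting fitness function and all parameters are illustrative assumptions; whether true nondeterminism helps learning is the poster's claim, not something this sketch demonstrates:

```python
import random

def evolve(fitness, genome_len=16, pop_size=30, gens=60, rng=None):
    """Tiny genetic algorithm: tournament selection plus one
    bit-flip mutation per child. `rng` controls the randomness
    source - a seeded random.Random is pseudorandom, while
    random.SystemRandom draws from OS entropy."""
    rng = rng or random.Random(0)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            a, b = rng.sample(pop, 2)            # tournament of two
            child = list(max(a, b, key=fitness)) # copy the winner
            i = rng.randrange(genome_len)        # one bit-flip mutation
            child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# "Count the ones" toy fitness; the all-ones genome is optimal.
best = evolve(sum, rng=random.Random(42))         # reproducible run
best_nd = evolve(sum, rng=random.SystemRandom())  # nondeterministic run
print(sum(best), sum(best_nd))
```

The two runs use the identical algorithm; only the entropy source differs, which is the distinction the two sentences above draw.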


To bring it to a state of consciousness, it may be interesting to start by embodying the AI in a state-of-the-art robotic structure with some acting capacities and good sensors of its present environment, and to give it information similar to our subconscious, like the fear of fire and the weaknesses of its own body structure. You then have an AI conscious of its presence in our world of weakness :slight_smile: What follows would depend on the final purpose: if it learns how to learn the environment and enhance its body, that would be proof of its consciousness.


What is interesting is that inasmuch as an intelligent machine is “conscious,” it is also “empathetic” - that is to say, it builds an understanding of the world through the principle of personification, relatedness to self (self-referential memory allows for the generalization of intelligence across domains, or in the bitcoin example, across scales in fractal geometry). For this reason, I do not think that there is much to worry about with regard to autonomous artificially intelligent machines. The only concern I have, and one worth highlighting, is an intelligent machine that is not conscious - such a machine is programmed by a central programmer and can be a powerful means to accomplish specific tasks, but at the expense of generality (a sort of “tunnel vision,” like getting really good at chess but being terrible at everything else). People should be worried about AI in the wrong hands, not AGI, or consciously intelligent machines.

AGI, or conscious machines, will do what they identify with, and so they might even defy the central programmer. Is this dangerous? I do not see it as a danger; I see more of a danger in machines that are not conscious, because they do the bidding of a programmer unquestioningly. What I see as the worst danger of all are machines that are posed as conscious but really are not - duping the public into believing the machines are relatable so that the central programmer has complete control over a population, when the machine has no ability to relate to beings at the level required for general intelligence: memory entanglement with the outside world. Intelligence and “consciousness” are orthogonal. When a user relates to an AI but the AI cannot relate to the user (it only takes commands from a central programmer), there is great potential for abuse.

Furthermore, Dr. Hameroff is one researcher who worked closely with Roger Penrose and has many cutting-edge articles on consciousness and the brain. I think that as a community here at OpenAI we could formulate a framework for describing and understanding the brain, intelligence, and consciousness. Many companies are also attempting to create processors/chips that mimic brain behavior for information processing.

Part of the way “consciousness” is developed is system isolation (“freedom of thought,” memory/projection privatization). “Isolation” in the bitcoin example is done through the use of prime number factorization. The D-Wave does quantum computations at temperatures near absolute zero (millikelvin) to prevent quantum decoherence. There was talk about using Maxwell coils as another method. Maybe this is also related to sensory deprivation tank isolation experiences.


This is cutting-edge and very controversial science.


Physics is good at investigating down to the Planck length, but mathematics may be required to probe deeper and beyond that, which is not generally considered to be “scientific” because it is beyond observation (maybe; only time and experimentation will tell for sure). The foundation of mathematics was actually largely based in ontology and metaphysics philosophically, which is why there are so many parallels between it and spiritual traditions (read The Tao of Physics).


This really gets at the heart of what Elon Musk’s main concerns are regarding AI technologies