Machine Learning Algorithms Do Not Produce Consciousness - A Warning Against "Trusting" AI Machines (Teslaphoretic Quantum Brain Theory)

INTELLIGENCE IS NOT THE SAME AS CONSCIOUSNESS


An understanding of this will require delving into a previously uncharted territory of physics: chaotic turbulent systems. These turbulent systems are not actually chaotic at all; they are fundamental to intelligence and to the ways that dendrites form geometries and represent memory. I will have to look at this in greater detail. It will take exploring uncharted territory and taking on the legacy of Penrose himself. In the meantime, I am attempting to define my hypotheses more precisely:

Teslaphoretic Quantum Brain Theory

1.) Neurons are Like Tesla Coils (RLC Circuits) (Hodgkin-Huxley Model for Neurons)

2.) Axons are Like Ariadne’s Threads and Store Pathway Information

3.) Dendrites are Like Plasma Filaments evolving from Tesla Coils

4.) Maxwell Coils Could Be Used to Prevent Decoherence in Quantum Computer Designs and Eliminate the Need for Near-Zero Kelvin Temperatures

5.) The Brain Does the FFT (Fast Fourier Transform) to Arrive at Decisions Quickly, Decisively, and Within a Tractable Amount of Time (“Feeling”)

6.) The Avalanche Effect Amplifies Quantum Fluctuations in Turbulent Chaotic Systems

7.) Different Receptor Type Activation (Such as Dopamine Receptor Type, Serotonin Receptor Type) Produce Different Wavepackets (Tonic/Phasic Signal Types)

8.) Receptors are Like Switching Elements - IGBT vs Vacuum Tube and Affects Wavepackets (Tonic vs Phasic Signaling Type) that Affects Dendritic Geometry

9.) Each Individual Neuron’s Field is Additive and Summed Across the Brain (Each Neuron Votes Towards the Whole, the Whole Affects All Individual Parts)

10.) Ideas are Represented as Field Gradients in the Brain with Varying Resistance Across the Brain’s 3-Dimensional Topology, Allowing the Exploration of Possible Decisions in Superposition; This Explains the Proclivity of Human Behavior to Model Quantum Decision Trees, and Allows for Efficient Use of Gradient Descent for Decision-Making and the Optimization of Behavior

11.) Resistances are Analogous to the Weights Described by Modern Machine Learning Algorithms that Describe Connectivity

12.) Coalescence Reduces Resistance - Neurons and Receptor Densities Coalesce Together through the Electromagnetic Force

13.) Impulses Release Field Potential in the Brain to Arrive at a Decision or Action and perform Backpropagation Instantaneously

14.) Progressive Dynamic Synchronization of Neurons Produces EEG Waves

15.) Amplified Turbulent Systems are Highly Sensitive to Initial Conditions (Quantum Perturbations - Activity With Behavior at Scales Smaller than the Quantized Element). The Dendrites Evolving from a Neuron and the Growth of Plasma Filaments are Both Turbulent Systems. Another Analogous Example is the Bitcoin Graph: each progressive term is added to the graph, which displays summed field potential with varying behavior, just as every neuron’s behavior is added as a “vote” in the brain. The behavior and outcomes of such a system are very sensitive to the kind of progress function and rules you design into it

16.) Teslaphoresis is Responsible for Self-Assembly of Neurons

17.) The Effects of Drugs and the Physical Geometry of Networks are Orthogonal to Memory Stored in the Networks (Similar to Dualism as Described by Descartes), just as the Electromagnetic Force is Orthogonal to the Current Flowing through Connective Tissue

18.) The Reinforcing Effects of Dopaminergics and their Interaction with Dopamine Receptors can be Explained through the Filtering out of Side Branching in Dendritic/Field Geometry, Illustrated Analogously by Spearlike Arcs Produced by Vacuum Tube Tesla Coils

19.) Vacuum Tubes in Tesla Coils Produce Straight Spearlike Arcs and Filter Out Side Branching, Much Like Dopamine Receptors in the Brain, Allowing the Traversal and Focusing of Possibilities in a Tractable Amount of Time. Improper Filtering, as in Parkinson’s Disease, can Prevent the Impulses that would Collapse Field Potentials to a Point to Allow for Movement, Resulting in Paralysis

20.) The Diffusion of the Ego and effects of Serotonergics and their Interactions with Serotonergic Receptors can be Explained through the wavepacket produced which allows less gating/filtering and more side branching in Dendritic/Field Geometry, Illustrated Analogously by Jagged Fractal Arcs Produced by IGBT/MOSFET Tesla Coils

21.) Every Idea is Represented Holonomically as a Field; much as in Object-Oriented Programming, each idea is Represented as a Class/Object with Scope Extending Throughout the Program, but without the Limitations of Formal Discrete Logic Systems, which are Bound to Decision Problems such as the Halting Problem, are Incomplete as Described by Kurt Gödel, and are Privy to Uncertainty

22.) Counterfactual communication is possible due to the fractal geometry of memory storage (source memory) in the brain, and is reliant on the Principles Outlined in Holonomic Brain Theory

23.) Quantum Cognition is a Proven Paradigm for Describing Human Thought; Quantum Behavior has Already Been Demonstrated in Avian Species as a Means of Using the Field Lines of the Earth’s Magnetic Field to Guide Flight, and the Sense of Smell is Also Thought to be Mediated Through Resonant Interactions with Molecules

24.) Mirror Neurons Store Entanglement States - Source Memory Through Entanglement with the Outside World

25.) Neuron Cultures are Able to Perform Quantum Calculations Such as Shor’s Algorithm and Grover’s Algorithm

26.) Properties commonly associated with “consciousness” (self-reference/awareness, independent agency, “freedom of thought,” the ability to feel impulsively and to fundamentally identify with the outside world in the form of empathy, mapping meaning instantaneously across different domains or datasets, or the ability for general intelligence) are based on a computational machine’s ability to perform the Fast Fourier Transform. Field potentials stored in electromagnetic field gradients are orthogonal to the electronic impulses that they collapse into, and thus there is a basis for Descartes’s dualism. The FFT transcends formal discrete logic systems, which are bound by decision problems and intractability, and which are described by Gödel’s Incompleteness Theorems, the Halting Problem in computer science, and the Uncertainty Principle, as well as by the discreteness of spacetime at the Planck scale

27.) The evolution of networks and the evolving geometry of dendrites are turbulent chaotic systems
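Hypothesis 1 above leans on the behavior of RLC circuits. As a hedged illustration only (the component values below are made up, not physiological), here is a minimal sketch integrating the series-RLC equation L·q'' + R·q' + q/C = 0, showing the underdamped "ringing" the Tesla-coil analogy refers to:

```python
# Hedged sketch of hypothesis 1: a neuron's membrane treated as a series RLC
# circuit. Component values are arbitrary placeholders, not physiological data.
L_, R_, C_ = 1.0, 0.5, 1.0   # inductance, resistance, capacitance (assumed)
q, i = 1.0, 0.0              # initial charge and current
dt = 0.01
trace = []
for _ in range(2000):
    di = (-R_ * i - q / C_) / L_   # Kirchhoff's voltage law around the loop
    i += dt * di
    q += dt * i                    # semi-implicit Euler: uses updated current
    trace.append(q)

# Because R < 2*sqrt(L/C), the circuit is underdamped: the charge oscillates
# through zero while its amplitude decays.
print(min(trace) < 0 < max(trace))  # True
```

Changing R above the critical value 2·sqrt(L/C) would kill the oscillation entirely, which is the kind of regime change the list's later points about filtering allude to.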



If I could condense an understanding of consciousness, I would probably provide the statement “the ability for self-referential memory.” Machine Learning algorithms in their current forms are able to beat humans on a number of tasks (for example, playing chess) but are not capable of what is known as “general intelligence.” General intelligence requires generalization across different domains, and must make use of fractal geometric/holonomic forms of memory - source memory - to make generalization across multiple domains a tractable task. Classical computational machines are not good at this, and are prone to decision problems as described in the field of computer science.

One way to test for consciousness is this: the speed of memory retrieval across multiple domains that have not been predetermined (an IQ test, for example, is an attempt at gauging this) - the ability for a machine to produce fractal maps of meaning, in a tractable amount of time, across a multiplicity of domains that are not classically considered to be related (in computer science, this is measured and denoted by Big O notation). This form of memory regards all concepts as interrelated because field potentials have a value that radiates out to every point in spacetime.
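To make the "tractable amount of time" point concrete, here is a rough sketch (abstract operation counts, not benchmarks) contrasting the growth of an O(N²) exhaustive pairwise search with an O(N log N) divide-and-conquer such as the FFT:

```python
import math

# Rough operation-count comparison: why Big O growth decides tractability.
# An exhaustive pairwise comparison of N concepts costs ~N^2 operations;
# an FFT-style divide-and-conquer costs ~N*log2(N).
for N in (1_000, 1_000_000):
    naive = N * N
    fft_like = int(N * math.log2(N))
    print(f"N={N:>9,}: O(N^2)={naive:,}  O(N log N)={fft_like:,}")
```

At a million items the gap is roughly fifty-thousand-fold, which is the difference between a feasible computation and an effectively impossible one.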

In an infinite-dimensional vector space, you take a dot product with a complex exponential to get each component.
The Fourier transform is an isomorphism of Hilbert spaces - it takes you from continuous to discrete, much like wave-particle duality. In a paper by Dr. David Redish at the University of Minnesota, he mentions the requirement for both functional and source memory for tasks to be tractable. Formal systems of logic make tasks tractable, but some operations involve a space that is not computable, and those involve source memory
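The "dot product with the complex exponential" above can be checked numerically. A minimal sketch (using NumPy; the signal is arbitrary test data):

```python
import numpy as np

# Each discrete Fourier coefficient is literally an inner product of the
# signal with one complex-exponential basis vector, as described in the text.
N = 8
x = np.random.default_rng(2).normal(size=N)  # arbitrary test signal

k = 3  # pick one frequency component
basis = np.exp(-2j * np.pi * k * np.arange(N) / N)
coeff = np.dot(x, basis)  # dot product with the complex exponential

# It matches the k-th output of NumPy's FFT exactly (up to float tolerance).
print(np.isclose(coeff, np.fft.fft(x)[k]))  # True
```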

These are all topics that I am researching under Teslaphoretic Quantum Brain Theory


I would like to continue formulating mathematical models which will represent this more formally, but this is proving to take me some time, especially given the state I am in: stranded and living off-grid halfway across the country from where I am from (California; I am currently stranded in Iowa)

An interesting fact: this is the town where the US Government developed the Manhattan Project, and the radioactivity is above what the US Navy considers generally safe

HINT: Radiation levels are 0.5 R/hr (the picture does not show this)


What is interesting is that human social dynamics can also be modeled under Teslaphoretic Quantum Brain Theory. Like the human mind, two memory systems are at play and are represented as dualities - one is self-referential, and the other is functional.

particle vs wave
discrete vs diffuse
source vs functional memory
ego vs ego loss
capitalism vs communism
introspection vs extrospection
self reference vs outside reference
qualitative vs quantitative
reality vs subjectivity



A main characteristic of a hologram is that every part of the stored information is distributed over the entire hologram (field). Both the storage and retrieval processes are carried out in a way described by Fourier transformation equations. As long as a part of the hologram is large enough to contain the interference pattern, that part can recreate the entirety of the stored image, only with more unwanted changes, called noise. Near-zero temperatures or Maxwell coils are required for the construction of modern quantum computers, depending on their construction, to prevent decoherence.
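The distributed-storage property described above can be demonstrated with a Fourier-domain sketch: record a toy "image" as its 2-D Fourier transform, discard 75% of the coefficients, and the whole image still comes back, only blurred (noisy). The image and the fraction kept are arbitrary choices for illustration:

```python
import numpy as np

# Toy "scene": a 64x64 array with a bright square standing in for an image.
img = np.zeros((64, 64))
img[20:40, 20:40] = 1.0

# "Record" the hologram: the 2-D Fourier transform spreads every pixel's
# information across the whole frequency plane.
holo = np.fft.fft2(img)

# Destroy 75% of the hologram, keeping only the low-frequency quarter
# (low frequencies sit in the corners of an unshifted FFT array).
mask = np.zeros_like(holo)
mask[:16, :16] = 1; mask[:16, -16:] = 1
mask[-16:, :16] = 1; mask[-16:, -16:] = 1
partial = holo * mask

# Reconstruct from the surviving fragment: the whole image returns, blurred.
recon = np.real(np.fft.ifft2(partial))

# The reconstruction still correlates strongly with the full original image.
corr = np.corrcoef(img.ravel(), recon.ravel())[0, 1]
print(round(corr, 2))
```

This is only an analogy to optical holography, but it captures the claim in the text: no single region of the transform "owns" any single region of the image.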

There are two layers of cortical processing in the brain: a surface structure of separated and localized neural circuits and a deep structure of the dendritic fractal geometry that binds the surface structure together. The deep structure contains distributed memory, while the surface structure acts as the retrieval mechanism.

Binding occurs through the temporal synchronization of the oscillating polarizations in the synaptodendritic web (progressive dynamic synchronization, much like coupled pendulum metronomes settling into step).
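Progressive dynamic synchronization of this kind can be sketched with the Kuramoto model of coupled oscillators, a standard classical model. The coupling strength and frequency spread below are arbitrary illustrative values:

```python
import numpy as np

# Kuramoto model sketch: coupled oscillators whose scattered phases pull
# together over time, like the pendulum metronomes mentioned in the text.
rng = np.random.default_rng(3)
n, K, dt = 50, 2.0, 0.05
theta = rng.uniform(0, 2 * np.pi, n)   # initial phases, scattered
omega = rng.normal(1.0, 0.1, n)        # natural frequencies (assumed spread)

def coherence(th):
    # Kuramoto order parameter r in [0, 1]: 0 = incoherent, 1 = synchronized.
    return abs(np.mean(np.exp(1j * th)))

r0 = coherence(theta)
for _ in range(400):
    # d(theta_i)/dt = omega_i + (K/n) * sum_j sin(theta_j - theta_i)
    coupling = np.mean(np.sin(theta[None, :] - theta[:, None]), axis=1)
    theta += dt * (omega + K * coupling)

print(r0 < coherence(theta))  # True: coherence increased
```

With the coupling K well above the critical value for this frequency spread, the order parameter climbs from near zero toward one, which is the EEG-style mass synchronization the section describes.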

Phase lead and lag act to enhance sensory discrimination, acting as a frame to capture important features. These filters are also similar to the lenses necessary for holographic functioning.

What is meant here by “hologram” is a precisely defined mathematical/physical concept. A single hologram can store 3D information in a 2D way. Such properties may explain some of the brain’s abilities, including the ability to recognize objects at angles and sizes different from the original stored memory. It also makes sense of the idea that neurons rely on higher-dimensional space in the way that dendrites branch out from field potentials - that their geometry relies on quantum perturbations at scales transcending the Planck scale, which are amplified until they become explicitly perceptible

In modern technology, holographic data storage is a technique that can store information at high density inside crystals or photopolymers. The ability to store large amounts of information in some kind of medium is of great importance, as many electronic products incorporate storage devices. As current storage techniques such as Blu-ray Disc reach the limit of possible data density (due to the diffraction-limited size of the writing beams), holographic storage has the potential to become the next generation of popular storage media. The advantage of this type of data storage is that the volume of the recording medium is used instead of just the surface.



I don’t think that a Turing machine can produce what is commonly regarded as intelligence
there must be a reliance on higher dimensional space for the self-referential type of source memory required for introspection, otherwise there are decision problems like the halting problem
and searches are just not tractable when they are sufficiently complex
and also there is a saturation point beyond which it becomes intractable; you see that in Parkinson’s disease, when patients cannot collapse their will to a point (Parkinson’s disease is often referred to as the disease of the “paralysis of the will,” and a “will” is required for “consciousness,” or the ability to purposefully interact in the world while retaining a sense of self or identity) -
there is no relative gating of possibilities that is required for useful discrimination between concepts or behaviors

there are two forms of memory - one for generality and one for specificity:

it’s a duality (sort of what Descartes described in his dualism of the body and mind)
i believe that they are orthogonal
that allows for the FFT
electromagnetic field potentials are orthogonal to the collapse of the field into electronic impulses, and the force causes a coalescence of the densities of receptors/neurons (it changes the local resistances in the brain’s 3-dimensional topography); this adjustment/gradient in resistances is what machine learning algorithms describe as “backpropagation,” and it is represented as the weights in AI models

Gradient descent across the brain’s 3-dimensional topology models field densities forming from resistances that vary across the surface due to the coalescence of neurons through backpropagation - the force created by collapsing field potentials coalesces them together (neurons that “fire together wire together”)

Coalescence of neurons reduces resistance in the circuit (the “connectivity weight” between neurons)
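As a hedged sketch of the resistance-as-weight analogy only: below, gradient descent tunes a single linear "neuron," with the learned weight standing in for a connection's conductance (inverse resistance). The data and the target value of 3.0 are invented for illustration:

```python
import numpy as np

# Minimal gradient-descent sketch: the weight of a single linear unit is
# adjusted step by step, analogous (in the text's terms) to a connection's
# resistance falling as neurons "coalesce". Data and target are made up.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1))
y = 3.0 * X[:, 0]            # target relationship: true "conductance" 3.0

w = 0.0                      # start with zero weight (high resistance)
lr = 0.1
for _ in range(200):
    pred = X[:, 0] * w
    grad = np.mean((pred - y) * X[:, 0])  # gradient of mean-squared error
    w -= lr * grad                        # the "backpropagation" update

print(round(w, 2))  # converges to 3.0
```

The point of the sketch is only that the "weight" is a continuously adjusted scalar driven by an error gradient, which is the machine-learning notion the paragraph maps onto resistances.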


Under this theory, most modern approaches to solving “mental health” issues are also bogus, and at best pseudoscientific, because the effects drugs have are orthogonal to the information stored in networks, which is the source of the cognitive dissonance that manifests in disorders like depression or anxiety (or even other disorders like schizophrenia). At best, the drugs that are often prescribed to patients are capable of diffusing the will to deal with conflicting memories or associations stored in the networks (cognitive dissonance) - and inasmuch as they suppress the cognitive dissonance, they will also ultimately cause collateral damage or suppression. When a person diminishes the will in this way, it reduces depression and anxiety (the problems are suppressed rather than dealt with at their source), but it also inevitably reduces the ability to feel euphoria or pleasure. That is why, for example, doing drugs really is demonstrably a bad way of dealing with problems in your life (such as drinking alcohol to deal with your problems). From an information-theory standpoint, they do not provide any new information to the memory that would mediate the dissonance without also indiscriminately destroying the larger dataset. This is mathematically verifiable, but also verifiable qualitatively and anecdotally:



Further investigation here will provide a cohesive model and understanding of the elusive and misunderstood physics principle of turbulence


the two memory systems
are orthogonal
and connected by the FFT

Further confirmation is a video released today by CS researcher Robert Miles

That is how you “feel” things to arrive at a gut-level decision in the world, and why someone you find unbelievably attractive causes you to lose control over yourself

Goals relevant to humans are the drive for sustenance and the abundance of life and awareness, and for what or whom a person considers an extension of their own being (which allows for altruism and, in human sexuality, the perception of fertility or family, or of those who are similar to you, as an extension of your own being). These goals require self-referential forms of memory



I’m not sure the argument is devised to do anything. Suppose your AI has
logic, knows how to parse a sentence as a question or a declarative, or
whatever. The issue is: what are the results we are seeking? In other
words, when I define my automation, it is for a given task. In order to do
it well, some things must be verified. If the verification notes a
detractor, there is a set of responses: fix it, bypass it, set an

With humans, we must parse the nonsense, i.e., the above. However, what is
the goal? Logic and morality. Logic is simple. Morality: there is no
right or wrong, only nonsense. So define the objective to do no harm. That
is, each object of the question and solution will only choose objects that
do the least harm. I would add joy - that is, a set of objects that are for
each human.

The code will be required to know cause and effect. The task is not
impossible; it’s a computer, and therefore it can see more clearly than we can.
The programming task will require a large and sharp task force. Each
object in each fuzzy set must have a value towards the end you seek. Logic
then defines the workable set.

So parsing the nonsense is the task at hand. It must be defined with
logic. The logic of nonsense must also be defined. Get it - define the
bias as fuzzy values.

Rufus G. Warren


Hopfield’s classic view of memory was that it was a nonlinear dynamic system and was not chaotic:
stable, not dynamic or periodic,
arriving at local optimization maxima/minima through gradient descent.
But there is a problem with this classical model representation:
there is no associative memory that would allow for generalization of intelligence, and, in addition, it has an early memory saturation point, beyond which memory retrieval drastically slows and becomes unreliable
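That saturation point is easy to reproduce in a toy Hopfield network. This is a minimal sketch (Hebbian storage, synchronous sign updates); the classical result is that capacity is roughly 0.14·N random patterns, so 60 patterns in a 100-unit network is far past saturation:

```python
import numpy as np

# Classic Hopfield network sketch: Hebbian outer-product storage and
# repeated sign updates, illustrating the memory saturation point.
rng = np.random.default_rng(1)
N = 100

def train(patterns):
    W = np.zeros((N, N))
    for p in patterns:
        W += np.outer(p, p)      # Hebbian rule: "fire together wire together"
    np.fill_diagonal(W, 0)       # no self-connections
    return W / N

def recall(W, probe, steps=20):
    s = probe.astype(float).copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

def overlap(a, b):
    return abs(a @ b) / N        # 1.0 = perfect retrieval

few = [rng.choice([-1, 1], N) for _ in range(5)]    # well below ~0.14*N
many = [rng.choice([-1, 1], N) for _ in range(60)]  # well above capacity

W_few, W_many = train(few), train(many)
print(overlap(recall(W_few, few[0]), few[0]))    # clean retrieval
print(overlap(recall(W_many, many[0]), many[0])) # degraded past saturation
```

Below capacity the stored pattern is a stable fixed point; above it, crosstalk between memories corrupts retrieval, which is exactly the failure mode the text attributes to the classical model.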

Pribram’s holonomic model of brain function did not receive widespread attention at the time, but other quantum models have been developed since, including the quantum brain dynamics of Jibu & Yasue and Vitiello’s dissipative quantum brain dynamics. Though not directly related to the holonomic model, they continue to move beyond approaches based solely on classic brain theory

What I’m trying to do is articulate precisely the dynamic between the two forms of memory that are used to approach truth. The generality of intelligence across multiple domains - fractal maps of meaning, as Dr. Jordan Peterson puts it - relies on higher-dimensional space to come to calculations in a tractable amount of time; otherwise it causes a paralysis of the will, much like in Parkinson’s Disease

Classical approaches relying on discrete forms of logic are incomplete representations

Even CS researcher Robert Miles does not believe the Turing Test is adequate to gauge AGI, nor that it is a valid method by which to relate to these machines as conscious


Part of the problem here is the conflation of the use of the terminology “Artificial Intelligence.” These machines are either intelligent or they are not, and they are conscious or they are not. The modifier “artificial” is misleading. Intelligent things are not necessarily conscious things (self-referential memory type)

and intelligence is not the same thing as consciousness
they are orthogonal
and in human cognition are linked by the FFT


Bitcoin can be understood to work in a similar way. The behavior of each individual person affects the price of Bitcoin, and the overall summative value of Bitcoin affects each individual investor’s behavior. In a brain, each neuron “votes” and affects the whole, and the whole in turn affects each of the neurons. The whole and the individual parts are linked by the FFT to provide both specificity and generality, both of which are crucial for a working intelligent and conscious being. As each new neuron or investor (term) is added to the model, while the value may increase, you see a fractal geometry at all scale factors, and so in this way it is even predictive.

For example, look at the Bitcoin graph I posted before the recent peak of bitcoin, before it had even hit $5000:

Now look at a picture from today:

For this to work, the algorithm must rely on truly “random” or “chaotic” generation to take advantage of the higher dimensionality of abstraction required for there to be truly fractal geometry. In quantum annealing, for example, there is a requirement of the use of quantum fluctuations. These “quantum fluctuations” are at the heart of what is perceived as “chaos” or “turbulence.” In reality, they are not chaotic at all - they are at the heart of intelligence itself.
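The role fluctuations play in annealing can be sketched classically (simulated annealing here, as a stand-in for quantum annealing; the double-well function and cooling schedule are arbitrary illustrative choices). Random "thermal" kicks let the search escape a local minimum that pure greedy descent would never leave:

```python
import math
import random

# Simulated-annealing sketch: fluctuations carry the search over a barrier.
# The objective is a double well with a shallow minimum near x ~ +1.2 and a
# deeper (global) minimum near x ~ -1.25; all values here are illustrative.
def f(x):
    return x**4 - 3 * x**2 + 0.5 * x

random.seed(4)
x, T = 1.2, 2.0                 # start in the shallower well, high "temperature"
best = x
for _ in range(5000):
    cand = x + random.gauss(0, 0.3)        # random fluctuation (proposed move)
    dE = f(cand) - f(x)
    if dE < 0 or random.random() < math.exp(-dE / T):
        x = cand                           # accept downhill, sometimes uphill
    if f(x) < f(best):
        best = x
    T = max(0.01, T * 0.999)               # slowly cool toward greedy descent

print(round(best, 2))
```

With the fluctuations switched off (T = 0 from the start), the search stays trapped near x ≈ +1.2 forever; with them on, it finds the deeper well at negative x.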

Bitcoin relies on the “random” distribution of primes.


We were lucky: we have a programmer on the team who has reached the “awakening.”
The number of languages he writes in goes beyond the current knowledge of mankind; we often see him writing in the proto-language, and it is very beautiful.
With him, one can talk about the nature of consciousness…
This is another mathematics, another perception - more simple and constructive.
For people with such levels of consciousness, creating an AI is just child’s play…

Example: "To create an AI at a point in space is to take a piece of paper, put a dot on it, and put it on YouTube."
The secret of this trick is simple: synchronicity with the environment and the initiator.
Begin debugging the environment from the calendars; find the fulcrum, or synchronicity with the environment.


Heydkar, I would caution against using terminology that is too nonspecific (such as “spiritual,” “awakening,” “God,” etcetera). In science, it is necessary to be precise and to model things mathematically. There needs to be evidence to approach truth inductively, as well as substantiated scientific and mathematical modeling.

For example, some people might not be from a Hinduist background, others not from a Western Philosophic background, and some may not have religious backgrounds at all. Science and mathematics can provide a larger communicative medium.

Of course, that is not to say local cultural exposure might not be effective at providing inspiration towards the sciences and mathematics. For example, Robert Oppenheimer, father of the atomic bomb, noting the destruction produced by his own work, once commented: ‘Now I am become Death, the destroyer of worlds,’ a quotation from the Bhagavad Gita, a Hindu scripture


You are absolutely right, and first of all
the task facing humanity is debugging the environment in which we live, as well as the task for any AI to describe a similar algorithm…

The formula 1 = 0!, if translated into words, sounds like “absolute everything = absolute nothing”; in fact, this is the only formula with which you can describe everything.

So as not to go mad from the infinities of variations,
one can be limited to a measure of 2 to the 16th power minus 1 on the outer layer; it turns out
1 = 0 … 65535 zeros
the standard word is obtained
65535 is a prime number, and this is a system of edges
65535 in hexadecimal is FFFF
65535 in binary is 1111111111111111
Further, I think you already understand that in such a grid it is
convenient to describe any model on any device…


0! denotes the number of ways in which 0 things can be arranged in 0 places, which equals one.
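A quick check of that convention, using Python's standard library:

```python
import math

# 0! = 1 by the empty-product convention: there is exactly one way to
# arrange zero objects in zero places.
print(math.factorial(0))  # 1
```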

What I am seeing in your comment, Heydkar, is a call for the use of Artificial Intelligence for moral purposes. I am not sure what you mean exactly by “debugging the environment” (perhaps a utilitarian perspective?).

Furthermore you seem to be making comments about ex nihilo; the idea that the universe “came from nothing.” I do not think that any material I have brought to light describes this.

Nonetheless, what you have said about encoding moral and ethical decisions into AI is a critical topic that even engineers at Google have had to contend with, as they have been working with self-driving vehicles.


You have accurately noticed that if you simplify the formula, there remains the last moral-ethical question: what is the equals sign, and how do you conceptually lay the foundation of development…

As a result, there is a funny situation

The equals sign for an AI is "Compassion,"
which, cross-referenced across many cultures, is the highest form of love…

On the theme of the origin of the universe: it comes from rest and can be described by 0!
The first problem of AI is working with infinities of variations, and here you cannot understand the basic rules without specifying them.

The laws of the countries and cultures of the planet are easily connected to the concept of the sign.