Introducing the first chapter (rough draft) of my new book.

This is the first chapter of a new book that looks, from the inside out, at how our perceptions render reality. It offers a tumble down the rabbit hole for those interested in reality theory, cognition and consciousness.
Chapter 1: More Than Meets the Eye
“The truth is stranger than fiction.” – Mark Twain

[This section deals with perception, approximation and subjective reality]
1.1 Perception
1.1.1 The Visual Spectrum
A funny thing about reality: not everything we see is as it appears. There is more than meets the eye. What we see, hear, taste, touch and smell from the objective world is filtered down into an approximation constrained by cognitive and sensory limits. What we see of the observable world is limited to a small band of the electromagnetic spectrum: the human eye can only see wavelengths from roughly 400 nanometers (nm) to 700 nm. Each of the five physical senses has limits on how much information it can collect from the outside world.

1.1.2 Invisible Information
When a human sees a dandelion, we take for granted that what we see is a yellow dandelion and nothing more. This is not the case. Unlike humans, bees see into the ultraviolet spectrum, and flowers like the dandelion have evolved ultraviolet patterns invisible to the human eye. To the bee, a darker UV center draws its attention to the nectar and pollen. These ultraviolet patterns have no meaning for humans, but they do for bees and other insects.

1.1.3 Limitations in Observation
How we perceive reality is vastly different from how other lifeforms see the world around them. We might think we see the outside world objectively; however, the limitations of our senses leave us with a subjective approximation of the objective world.

1.1.4 Sight
When light travels to the eye, the information is flipped upside down as it passes through the pupil, and the 3D information becomes 2D information when it hits the retina. From there, photoreceptor cells, divided into rods and cones, are stimulated and produce an electrochemical signal that travels through the optic nerve to the occipital lobe at the back of the brain.

1.1.5 Visual Information Processing
From here, the brain takes the electrical signals and further processes the information as alpha-beta tubulin inside microtubules produces evanescent infrared photons that stimulate the dimers like binary switches in a CPU.

The information moves up through the cytoskeleton to the neuron for further processing, and the synaptic endings fire electrical signals across the synapses, passing neurochemicals to the next cell.

The final step of this process is what the brain outputs from sensory information. The brain renders the sensory data into an interface that we, the observers, interact with. The interface is how we interact with the objective world, and the end result of this approximation is our subjective view of reality. Our interface to reality is, by all definitions, a simulation.

The eye itself is not the best camera, and the brain has to fill in the blanks with approximation and guesswork. Researchers from the University of Glasgow have demonstrated that what the eye doesn't see, the brain fills in with past experiences and predictions. [ PNAC ]

This becomes apparent with optical illusions such as the Hermann grid, which produces a phenomenon called lateral inhibition in the retina: a grid of black squares on a white background causes gray circles to appear at the white intersections. When you stare directly at a white intersection, the gray circle goes away.

Color blindness is another example of how some people have a different view of reality than others. People with color blindness have less color information rendered in their simulation of objective reality, whereas a person without color blindness has more color information within their simulation.

“Rotating Snakes” by Akiyoshi Kitaoka




The objective world is subjectively experienced by all lifeforms. Cats, dogs, ants, dolphins and amoebas all have different sensory organs. For each organism to survive, nature has evolved the ability to process sensory information into an interface allowing the organism to interact with the outside world.

Even though we all exist in an objective reality, experiencing reality through this interface makes the experience subjective. The qualities of subjectivity are evident across the vast array of species, where diversity in both neurological function and sensory adaptation produces a relative model of the objective world based on information processing and approximation.

1.2 The Subjective Paradox
We may assume that what we see is objectively accurate, but the limits of our sensory organs, and the way the brain uses past experiences to fill in the blanks, mean that what we experience as reality is subjective. This is the “Subjective Paradox”.

The subjective paradox introduces a divide in terms of what reality is to the observer. We have an objective reality defined by matter, and a subjective reality defined by a simulation. What we observe is not objective reality, but rather an “experience of reality”.

1.2.a Qualia
There is a term used in philosophy to describe the subjective nature of perception: “qualia”. For example, an apple is not tasted objectively. A horse, a cat, a dog, a human, and even different humans will each have a different experience when tasting the same apple. The same can be said of how we feel pain, see colors and hear sounds.

Qualia can explain why some people like the taste of chocolate and others do not. Our experience of reality, like a fingerprint, has unique qualities that define who we are as individuals. Different experiences and interpretations do not invalidate objectivity; subjectivity simply addresses the fundamental way in which reality is experienced.

1.2.b The Interface
Once we pass through the sensory limits, the next stage in our experience of reality is determined by information processing in the brain. Information is rendered into a view that emulates our experience of realism. Electrical signals and neurochemicals facilitate biological processes whereby the brain, like a computer, assembles this sensory data into a model of reality that allows us, the observers, to interact with that data in a meaningful way.

Computers simulate virtual 3D environments using information processing and precise mathematical calculations to plot vectors on a Cartesian 3D grid, commonly described with three variables: X, Y and Z. To render a single frame of the animated film Cars 2, 12,000 processors were required, averaging 11.5 hours per frame.

The human brain performs this feat, and more, in milliseconds. The human brain comprises approximately 86 billion neurons (based on a recent study by Dr. Suzana Herculano-Houzel of the Federal University of Rio de Janeiro). This network of cellular micro-processors accomplishes reality renderings rivaling all modern computing efforts to simulate what the brain can achieve.

New research conducted at the Massachusetts Institute of Technology (MIT) shows that the brain can process an image in as little as 13 milliseconds, versus the old estimate of 100 milliseconds. This research demonstrates just how efficient the human brain is at processing visual information.

According to researchers at the Okinawa Institute of Science and Technology Graduate University in Japan and Forschungszentrum Jülich in Germany, simulating one second of brain activity required 82,944 processors and took 40 minutes.

The human brain is nature's reality-rendering farm. What we take for granted in a given moment of perception is nearly impossible for our combined scientific efforts to emulate. The fact is, the brain is like a supercomputer that uses cellular processors to achieve complex computation with the sole purpose of rendering our perception of reality.

What the brain renders is an interface to the objective world. The relationship between the objective world and the brain fits nicely into a computer analogy of data and views.

For example, a computer on the internet holds data that a user wants to access and interact with. This data sits on a hard drive as binary information: a magnetic pulse for 1 and a reversed pulse for 0. Binary information is not useful to a user wanting to interact with that data, so computers render that information into a user interface called a view.

When we see a browser with a hyperlink represented by a button that says “click me”, it is far more meaningful and useful to us than a string of binary numbers. In a similar way, the brain creates an interface to the objective world by rendering sensory data that has likewise been converted into a binary-like sequence of electrical signals.
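The data-versus-view distinction above can be sketched in a few lines of code. This is only a toy illustration of the analogy, not a claim about how either hard drives or brains actually work: raw bits are meaningless to a user until a program decodes and renders them into an interface element.

```python
# Toy sketch of the data/view analogy: a bit string (the "data" layer)
# is rendered into a clickable button label (the "view" layer).
raw_bits = "0100001101101100011010010110001101101011"  # ASCII for "Click"

# Step 1: group the bit stream into 8-bit bytes.
byte_chunks = [raw_bits[i:i + 8] for i in range(0, len(raw_bits), 8)]

# Step 2: decode each byte into a character.
label = "".join(chr(int(chunk, 2)) for chunk in byte_chunks)

# Step 3: render the decoded data as a user-interface element.
button = f"[ {label} me ]"
print(button)  # → [ Click me ]
```

The user only ever interacts with the final `[ Click me ]` view; the bit string underneath never has to be seen, which is the point of the analogy.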

Research at the University of Colorado at Boulder by psychology professor Randall O'Reilly discovered that neurons in the prefrontal cortex are binary: they have two states, active or inactive. The basal ganglia act as a big switch, dynamically turning different parts of the prefrontal cortex on and off.

Similar digital operations have been observed within the neuron itself at the atomic level. Each neuron is fitted with a cytoskeleton containing long strands of microtubules, which are composed of alpha/beta tubulin proteins that are stimulated by mitochondrial biophotons, causing coherent and incoherent states.

Digital information processing by neurons and larger brain functions is not a myth. The brain does process information and renders a view of this information using biological processes. There is a big debate over the brain's role as a quantum supercomputer, and evidence is emerging that the brain can use quantum states, as presented by the research group led by Anirban Bandyopadhyay, PhD, at the National Institute for Materials Science in Tsukuba, Japan (and now at MIT).

A controversial theory known as “Orchestrated Objective Reduction” (Orch-OR), put forward in the mid-1990s by Roger Penrose and Stuart Hameroff, suggests that quantum vibrational computations in microtubules are “orchestrated” (Orch) by synaptic inputs and memory stored in microtubules, and terminated by “objective reduction” (OR).

Evidence has now shown warm quantum coherence in plant photosynthesis, bird-brain navigation and our sense of smell, along with the recent discovery of warm-temperature quantum vibrations in microtubules inside brain neurons.

When factoring in information processing within a single neuron, it is not only the 86 billion neurons acting computationally: there are also approximately 1.3×10⁹ (1.3 billion) tubulin dimers contained in microtubules playing a role in cellular information processing.


A dimer consists of alpha and beta tubulin, which are globular proteins.


Information processing in the brain is observed at various scales: cell groups, individual neurons, microtubules, α-tubulin/β-tubulin pairs, and even the atomic structure of these proteins. If we follow the pattern, atomic and sub-atomic information processing could theoretically play a role in how information propagates from the smallest bit to the largest byte in neurology. A binary sequence simply requires an active/inactive state, which can also be a positive or negative charge; how deep information processing goes into the atomic structure of a neuron is still being sorted out.

What appears to be emerging is that binary sequencing is fundamental not only to computers but to living systems as well. Even nature uses active and inactive states to facilitate information processing.

Is the brain a computer? It processes information and outputs complex spatial calculations and renderings to approximate 3D space in fractions of a second. The speed and efficiency it has evolved suggest that quantum calculations could be part of the process. Quantum computers use the qubit, a quantum bit of information. One implementation uses a polarized photon that can be in one of two states, horizontal or vertical polarization, and unlike the classical bit, a qubit can be in both states at once, in a state of superposition.
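The qubit idea above can be made concrete with a small sketch. This is a standard textbook model of a single qubit, not anything specific to neurons: the state is a pair of amplitudes over |H⟩ and |V⟩, superposition means both amplitudes are non-zero, and measurement collapses the state with probability equal to the amplitude squared.

```python
import math
import random

# A qubit as a two-amplitude state vector over the polarization basis.
H = (1.0, 0.0)  # pure horizontal polarization, like a classical 0
V = (0.0, 1.0)  # pure vertical polarization, like a classical 1

# Equal superposition: (|H> + |V>) / sqrt(2) -- both states at once.
plus = (1 / math.sqrt(2), 1 / math.sqrt(2))

def measure(state):
    """Collapse the state: each outcome's probability is its amplitude squared."""
    p_h = state[0] ** 2
    return "H" if random.random() < p_h else "V"

# A classical bit always reads back the same value; the superposed qubit
# yields H or V at random, roughly 50/50 over many measurements.
counts = {"H": 0, "V": 0}
for _ in range(10_000):
    counts[measure(plus)] += 1
print(counts)  # roughly balanced, e.g. close to 5000 each
```

Real qubits use complex amplitudes and can also be entangled; this sketch only shows the superposition-until-measured behavior the paragraph describes.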

The neuron also uses photons in processing information at the sub-atomic level, and although there is still debate over whether the brain uses quantum states, the use of photons implies that the neuron may be performing quantum-state information processing and already capitalizing on the efficiency of qubit polarization and superposition. The rate at which our perceptions render suggests that classical computing alone would be too slow; quantum computing, however, fits the efficiency with which the brain performs these complex calculations, making the brain a good candidate for a quantum theory of information processing.

The second tier of information processing moves into electrical signals passed from neuron to neuron. In addition to quantum-state calculations, the brain scales up into larger systems to perform group processing; thus the brain is organized by the role each group of cells is responsible for, such as vision, hearing, motor skills, cognitive function and memory.

Long before computers were invented, the human brain evolved a technology that allows it to download sensory data from the objective world and render that data into a user interface so that we can interact with that information.

In the age of technology and computers, one might think that information processing, plotting vectors in virtual 3D space on a computer screen and rendering imaginative 3D virtual worlds like those in video games and movies are man-made phenomena. I hate to break our technological ego, but nature figured this out first.

Take, for example, the human eye. Long before video cameras were invented, nature evolved a functional biological camera. Long before microphones were invented, nature evolved the biological version known as the ear. We think calculators and computers are exclusively human inventions? Nature again evolved cells that process information and calculate spatial distances; we are just playing catch-up to what millions of years of evolution has already figured out.

There is one stark difference in how a brain processes information and renders it into a view. A computer has a screen on which to plot pixels and draw images, whereas the human brain does not. There is no screen inside our brain that outputs our experience of reality. So what does the brain use as its computer screen?

The mind uses a virtual screen. Ever see the Holodeck from Star Trek, where the computer projects holograms into 3D space, allowing the user to interact with these holographic simulations? The brain produces something similar in that it, too, projects a holographic overlay of spatial information outwards into virtual 3D space, overlaying the objective data it is approximating.

This process again far exceeds what computers and virtual-reality technology can produce. There are other philosophical terms used to describe how the brain renders reality from objective sensory data into a view. These ideas date back as far as Plato's Allegory of the Cave. René Descartes described the pineal gland as the seat of the soul, where a homunculus, or little man, sat as the observer; Daniel Dennett later coined the term “Cartesian Theatre” for this idea and responded to it with his “Multiple Drafts” model, and Anthony Peake extends the concept into the Bohmian IMAX.

Without a computer screen on which to plot pixels, the human brain does something much more extraordinary. It produces a type of holographic virtual-reality simulation that projects outwards, creating a three-dimensional overlay to represent space and the distances between objects.

Consider this: when you walk to a door and reach for the doorknob, what you see inside your mind is not the actual door; it is a simulation of that door. The door you see in your mind is like a link in a web browser: an interface drawn on this holographic simulation that approximates where the actual physical door resides. When you reach out and touch the simulated door in your mind, like any interface, the body you are controlling reaches out and opens the physical door.

Even this text you are reading is not the text that exists on the outside. It, too, is an approximation rendered on your own virtual-reality simulator. If you do not believe me, close your eyes and continue reading.

What you see as reality is a simulation of reality. That doesn't mean what is being simulated isn't real, so don't confuse that concept with the interface you are looking at. The brain and our senses do a fantastic job of simulating reality; otherwise we wouldn't be able to survive in the outside world. The fact remains that this interface, and how it is rendered, is what we view as our reality, when it is actually our reality interface.

Thanks to computers and technology, we can see how information processing by the brain can produce a simulation of reality using a type of organic “hardware” and “software” that renders our view of reality. By closing our eyes and plugging our ears, we can confirm through first-person experience that our view of reality is altered by blocking sensory inputs.

How does the brain simulate light? How are sound, taste, smell and touch simulated? What about feelings, or the distance between objects? Everything that appears before us within our phaneron is the result of many processes working together to form a final product: a simulated virtual-reality interface.

The term “phaneron” was coined by Charles Sanders Peirce to describe the real world as filtered by our sensory inputs (sight, hearing, touch, etc.).

The end result of information processing by the brain is evident in the final product, the final rendering that we experience. The mechanics of this feat lie within the network of billions of brain cells and synaptic endings working together, processing various sensory inputs to output, in a virtual way, our view of reality.

By examining the output, more of the underlying mechanics involved in these processes can be revealed. If we look at the 3D modeling software that movie studios such as Pixar have used to produce films like Toy Story and Cars, we find a fairly simple system: artists create a 3D mesh representing vectors plotted in virtual 3D space. The mesh is then covered with a bitmap overlay, which supplies all the colors and textures. The final step introduces a lighting system in which photons are simulated, and the computer performs millions of calculations in the final rendering to create a vivid, colorful 3D image.
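The core of the mesh-then-render step described above is perspective projection: mapping vertices plotted in Cartesian XYZ space onto a 2D screen plane. The sketch below is a deliberately minimal illustration of that one step (no textures or lighting, which real renderers add on top); the cube coordinates and focal length are arbitrary example values.

```python
# A minimal sketch of projecting a 3D mesh onto a 2D screen plane.
# Vertices of a unit-ish cube sitting in front of the "camera" (z > 0).
vertices = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (4, 6)]

def project(vertex, focal_length=2.0):
    """Perspective projection: divide x and y by depth z.

    This divide-by-depth is what makes nearer points spread farther
    from the screen center, creating the illusion of 3D on a flat view.
    """
    x, y, z = vertex
    return (focal_length * x / z, focal_length * y / z)

screen_points = [project(v) for v in vertices]
for v, p in zip(vertices, screen_points):
    print(v, "->", (round(p[0], 3), round(p[1], 3)))
```

Connecting the projected points with edges would give the familiar wireframe; the bitmap and lighting passes mentioned above are then layered onto that skeleton.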

The brain also renders a 3D image from the 2D information captured by the retina of each eye. The 3D image that we see is, again, a simulation of 3D space by the mind. The 3D rendering in the mind is an approximation of the distances between objects sensed from the outside world, yet a very accurate one, allowing us to judge distances and interact with objects with little margin for error.

Does the brain use a type of meshing system and bitmap overlay like those in modern computers? There are some indications that something similar could be at work in the mechanics of sensory rendering. This is not a definitive yes, but an exploratory look at what these indications are and whether they reveal the visual geometry of a neural meshing system.

The first example comes from sleep research: during pre-sleep, before the onset of dreams, people can experience very vivid visual fractal patterns. This state is called hypnagogia, during which phosphenes (speckles of light or geometric patterns) appear.

Hypnagogic patterns vary and resemble fractal patterns similar to the Mandelbrot set. The patterns can consist of triangular and rectangular meshes swirling and turning about, or of even more complex points and geometric clouds.
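For readers unfamiliar with the Mandelbrot set invoked above, it is worth seeing how little machinery such a fractal requires: iterate z = z² + c for each point c of the plane and keep the points whose orbit stays bounded. This tiny sketch renders a coarse ASCII view of the set (resolution and iteration count are arbitrary choices).

```python
# The Mandelbrot set: points c where z = z*z + c never escapes to infinity.
def in_mandelbrot(c, max_iter=30):
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:       # orbit escaped; c is outside the set
            return False
    return True              # orbit stayed bounded for all iterations

# Render a coarse ASCII view of the set on the complex plane.
for row in range(21):
    y = 1.2 - row * 0.12
    line = ""
    for col in range(64):
        x = -2.2 + col * 0.05
        line += "#" if in_mandelbrot(complex(x, y)) else " "
    print(line)
```

That a rule this simple produces endlessly intricate geometry is part of why fractal-like phosphene imagery need not imply complex machinery underneath.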

The second example is the pressure-phosphene effect, which also produces geometric fractal patterns. You can achieve it by closing your eyes and gently applying pressure to them, taking care not to cause pain or discomfort. After a few moments, pressure-phosphene patterns may emerge, giving you a first-hand example of a geometric fractal produced by the brain. If you decide to try this, proceed carefully and stop at any painful feedback; hurting the eyes is not a requirement.

The third example requires a light device known as the “Lucia Lucid Light Stimulator”, created by Dr. Engelbert Winkler and Dr. Dirk Proeckl. The Lucia device uses specially timed flashes of light directed at a person whose eyes are closed. This stimulation produces the phosphene effects mentioned above.

British author Anthony Peake reports that during his experience with the Lucia device, he suddenly found himself in a 3D hypnagogic grid.

The fourth example comes from meditation: many practitioners begin to experience hypnagogic phosphene effects as the mind progresses into deeper relaxation.

The fifth example comes from sensory deprivation, such as a flotation tank, where people also report hypnagogic phosphene effects.

These are several safe ways to explore and experience how hypnagogic fractals can be produced. A less safe example comes from the use of psychedelic drugs, which are also known to produce visual geometric patterns.

Why does the brain render fractal images?

The brain can produce a type of neural geometry, which becomes apparent in sleep and meditation, where fractal geometric patterns can be observed. The bigger question arises when phosphene patterns project from a 2D lattice into a 3D mesh, suggesting that a 3D meshing system may be an actual cognitive function similar to computer graphics. This can be observed in lucid-dream exploration.

Borrowing from a previous paper on dreaming, here are some examples produced to illustrate the evolution of visual information from pre-sleep to a dream state following hypnagogic patterns. These are artistic interpretations that follow the flow of hypnagogia through its 2D-to-3D progression into a final rendered dream experience.

1.1 Eyes Closed · 1.2 Hypnagogic Stars · 1.3 Hypnagogic Cloud
1.4 2D Hypnagogic Lattice · 1.5 3D Hypnagogic Mesh · 1.6 Bitmapped Dream

Is it possible that the conditions which produce phosphene effects simply reveal nature's own meshing system?

When deconstructing the final product of our rendered perceptions, in an effort to explain how complex 3D spatial projections are possible, answers may lie within the geometric hints that arise from hypnagogia, phosphenes and dreams.

How does the brain take two separate 2D images from the eyes and produce a vivid 3D rendering of them, neatly simulated in virtual 3D space? For a computer to simulate 3D space, all that is required is a vector system using Cartesian XYZ coordinates, which makes up the mesh, and a bitmap to overlay the mesh with color and texture.
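One well-understood answer to the two-images question, from computer vision rather than neuroscience, is stereo triangulation: the horizontal shift (disparity) of a feature between the left and right images encodes its depth via depth = focal length × baseline / disparity. The sketch below uses illustrative, assumed numbers (an 800-pixel focal length and a 6.5 cm eye separation), not measured values for human vision.

```python
# Stereo depth recovery: two 2D views yield one 3D depth estimate.
FOCAL_LENGTH_PX = 800   # assumed camera focal length, in pixels
BASELINE_M = 0.065      # assumed separation between the two "eyes", ~6.5 cm

def depth_from_disparity(disparity_px):
    """Triangulate depth: larger disparity means a nearer object."""
    return FOCAL_LENGTH_PX * BASELINE_M / disparity_px

# A feature shifted 104 px between views is twice as close as one
# shifted 52 px, and eight times closer than one shifted 13 px.
for d in (104, 52, 13):
    print(f"disparity {d:3d} px -> depth {depth_from_disparity(d):.2f} m")
# disparity 104 px -> depth 0.50 m
# disparity  52 px -> depth 1.00 m
# disparity  13 px -> depth 4.00 m
```

Whether or not the brain computes anything like this formula, the geometry shows that two flat images genuinely contain enough information to reconstruct distance.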

The image above is taken from a thesis entitled “Automatic Knotclouds Placement Algorithm for Quadratic DMS-spline Function”, which describes the minimal number of polygons required to approximate a 3D surface.

Quad polygons and triangles like these are similar to what can be observed during phosphene experiences, most notably when a 2D hypnagogic lattice projects into a 3D hypnagogic mesh at the onset of a dream.

This idea is only an inquiry based on the examples listed, but the evidence seems to suggest a cognitive process is at work that resembles modern computer graphics.

After a mesh approximates space, the final step is rendering the bitmap, the colored texture on the mesh, to finalize the rendering. Nature has already produced one of the most amazing virtual-reality simulators: the human brain.

It simulates not only sensory data from the objective world, but also vivid virtual dream worlds when we sleep. The next chapter explores what could be used to render the visual bitmap, along with sounds, taste, touch and smell.

Please note this is the first chapter of my new book. If you are reading this chapter and would like to be notified when the book is complete, please contact me directly via e-mail. Any feedback is welcome in the initial development of this project.
References and sources pending in final revision.
- Author: Ian A. Wilson

  • Andre


    This reminds me of a very interesting phenomenon I’ve noticed. If I watch 2D footage while covering one of my eyes, especially if I remove all other surrounding visual stimuli, the image “pops” to life to a surprising degree, as if the brain can’t tell that the image is on a flat surface and is instead trying to simulate 3Dness.
