
AN OVERVIEW
THEORIES OF CONSCIOUSNESS
Learn more about the different theories being tested as part of this research program. Some theories share a number of similarities, while others differ substantially.
Introduction
Encompassing a wide variety of fields, from neuroscience to psychology and philosophy, consciousness research has been fertile ground for a large number of diverse theories. Whether focusing on the global broadcasting of information within our brain, information integration, feedback loops or metacognition, all seek to explain the relationship between our subjective experiences and our physical neural system. Consensus, however, remains elusive. Within our program, through the process of adversarial collaboration, researchers favouring different theories work together to make the differences and similarities of their views more explicit. After highlighting and clarifying the expected results of their empirical predictions, they co-design tailored scientific experiments that challenge each other’s competing views. What follows is a series of brief summaries introducing the central ideas and claims of some of the key theories currently at the forefront of consciousness research, which our program aims to test.
Integrated Information Theory

IIT
“IIT rejects analogies of consciousness to any kind of circuitry – silicon or neural. It starts with the phenomenon of consciousness itself. It describes five axiomatic properties of the conscious state, puts forward postulates about the physical structure that would display such properties, and captures them in a mathematical model. The five axiomatic properties of consciousness are held to be: intrinsic existence, composition, information, integration and exclusion. […] From these axioms come postulates about the physical structure that best accounts for consciousness – what sort of “windmill” it must be to produce these five qualities. IIT holds that the physical structure of consciousness is one that is optimised to make a difference to itself, and one which cannot be reduced to its components. To find the signature of such a structure in the brain requires looking for something that demonstrates these characteristics – what proponents of this theory call a “maximally irreducible cause-effect structure”.”
Elizabeth Finkel in Prove It (2025)
IIT was first proposed as a theory of consciousness by Giulio Tononi in 2004. It was then further developed and articulated through a complex mathematical framework by Tononi together with Christof Koch, Marcello Massimini, Melanie Boly and their many colleagues.
“IIT holds that the degree of consciousness is equal to the extent to which a system generates more information than the sum of its parts, a quantity Tononi has dubbed Φ or phi. Phi can be measured in simulations, but the phi of the human brain is currently beyond measurement. However, in 2005, inspired by IIT and its prediction of integrated information, Tononi attempted to measure a crude proxy for phi in the human brain […] measured using a variation of a traditional EEG.”
Elizabeth Finkel in Prove It (2025)
Drawing on what we already know of the connectivity architectures of different regions of the brain, IIT hypothesises that a region of maximum Φ is likely to be found in a posterior “hot zone” of the cerebral cortex, and that activity in this region is sufficient for consciousness.
In contrast, as a paradigmatic illustrative example, IIT suggests that the cerebellum has a low Φ because the modules composing this structure at the back of the brain work largely independently of one another. This would explain why, despite containing about three quarters of the brain’s neurons, the cerebellum does not seem to play an important role in consciousness. Indeed, people with a damaged cerebellum tend to live relatively normal lives.
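The contrast between a tightly interconnected system and one whose parts work independently can be made concrete with a toy calculation. The Python sketch below is only a crude illustration, not the actual IIT formalism (real Φ involves a search over all partitions of a cause-effect structure): it scores a tiny binary network by how much past-to-present information the whole system carries beyond the sum of its units taken separately.

```python
import math

def mutual_information(joint):
    """I(X;Y) in bits from a joint distribution {(x, y): probability}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

def phi_toy(transition, states):
    """Crude Phi proxy (illustration only): whole-system past-to-present
    information minus the sum of each unit's own past-to-present information."""
    p = 1.0 / len(states)                       # uniform over past states
    whole = {(s, transition(s)): p for s in states}
    phi = mutual_information(whole)
    for i in range(len(states[0])):             # subtract each unit alone
        part = {}
        for s in states:
            key = (s[i], transition(s)[i])
            part[key] = part.get(key, 0.0) + p
        phi -= mutual_information(part)
    return phi

states = [(0, 0), (0, 1), (1, 0), (1, 1)]
swap = lambda s: (s[1], s[0])   # the two units copy each other: integrated
copy = lambda s: s              # each unit copies only itself: independent
```

Under these made-up assumptions, the mutually coupled pair scores 2 bits while the independent pair scores 0, echoing the cerebellum intuition: many units, but little integration.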
Intriguingly, IIT predicts that changes in the structure of neural connections could affect consciousness even when neural activity itself remains the same. Such a distinction between the hypothesized roles of inactive versus inactivated neurons opens opportunities for testing through targeted experimental manipulations.
Global Neuronal Workspace Theory

GNWT
Global Neuronal Workspace Theory (GNWT) explains consciousness in terms of the broadcasting of information within the brain by a global workspace acting as a connecting hub between specialised neural modules. Because the global workspace has limited capacity, access to it is selective and arises from competition between the many inputs from those modules. The theory was originally proposed by Bernard Baars (1988) and further developed, drawing on data from brain imaging experiments, by Stanislas Dehaene, Lionel Naccache, Jean-Pierre Changeux, and their collaborators. The Global Workspace Theory hence became the Global Neuronal Workspace Theory.
GNWT postulates that most of the information processing performed by the brain happens unconsciously, in localised specialised modules. However, when one of these modules gains access to the global workspace, the information it carries is amplified and broadcast globally in a phenomenon called ignition. Ignition renders that information conscious, making it broadly available to the rest of the system and allowing for flexible guidance of behaviour. According to GNWT, it is the process of ignition that we subjectively experience as consciousness. Conscious information is information that can be accessed and reported. While many modules of the brain can work in parallel, the global workspace, and therefore consciousness, has a limited capacity which corresponds to the focus of our attention and working memory.
The global workspace is a hypothesised network of neurons, particularly dense in the prefrontal and parietal areas of the brain, that is activated whenever a subject has a conscious experience. These neurons project through long-distance connections to sensory areas, allowing information to be exchanged with and used by cognitive systems involved in language, memory, or decision-making. Because information first needs to be broadcast and amplified to reach consciousness, GNWT predicts a corresponding delay of about 250 milliseconds between exposure to a stimulus and ignition.
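The competition-and-broadcast idea can be sketched in a few lines of Python. Everything here is a made-up illustration (the module names, the 0.5 ignition threshold and the amplification gain are hypothetical numbers, not parameters of the actual GNWT models): the strongest module input wins access to the workspace, and only if it is strong enough does ignition amplify and broadcast it.

```python
def workspace_access(inputs, ignition_threshold=0.5, gain=3.0):
    """Toy global-workspace competition: many module inputs compete,
    a single winner may ignite and be broadcast to the whole system."""
    winner = max(inputs, key=inputs.get)        # selective access: one winner
    if inputs[winner] >= ignition_threshold:    # strong enough to ignite
        return {"ignited": True, "content": winner,
                "broadcast_strength": min(1.0, gain * inputs[winner])}
    return {"ignited": False, "content": None, "broadcast_strength": 0.0}

# A strong visual input ignites; weak inputs stay local and unconscious.
conscious = workspace_access({"vision": 0.8, "audition": 0.2, "touch": 0.1})
subliminal = workspace_access({"vision": 0.3, "audition": 0.2, "touch": 0.1})
```

The all-or-none shape of the function mirrors the theory's claim that ignition is a discrete event: a stimulus either crosses the threshold and becomes globally available, or its processing remains local.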
Predictive Processing Based Theories
Theories based on Predictive Processing, such as Active Inference (PP-AI) and Neurorepresentationalism (PP-NREP), view perception as the outcome of a process of inference about the causes of sensory inputs. In other words, the awake brain is constantly engaged in a process of generating and correcting guesses about what is causing the stimuli it receives. This updating seeks to minimize errors in the predictions generated, while remaining mismatches are used to refine future guesses and support learning. Under theories of consciousness making use of predictive processing, conscious content is the best guess that the brain generates about the sensory inputs it receives.
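The guess-and-correct loop at the heart of predictive processing can be captured in a minimal Python sketch. This is a deliberately stripped-down illustration (a single scalar guess and a hypothetical learning rate, nothing like a full hierarchical model): the brain's "best guess" is nudged toward each incoming sensory sample by a fraction of the prediction error, so the error shrinks as the guess improves.

```python
def predictive_loop(guess, observations, learning_rate=0.3):
    """Nudge a running best guess toward each observation by a fraction
    of the prediction error; return the final guess and the error trace."""
    errors = []
    for obs in observations:
        error = obs - guess                 # mismatch: the prediction error
        guess += learning_rate * error      # update the guess to reduce it
        errors.append(abs(error))
    return guess, errors

# Start from a poor guess of 0.0 while the world steadily delivers 1.0:
# the guess converges and the prediction errors shrink step by step.
final_guess, error_trace = predictive_loop(0.0, [1.0] * 10)
```

Run on a constant input, the error trace decreases on every step and the final guess ends close to the true value, which is the sense in which the remaining mismatches "refine future guesses".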
Predictive Processing - Active Inference

PP-AI
As its name suggests, Active Inference (PP-AI) puts action at the core of the brain's predictive processes. The theory proposes that the brain updates its best guesses about the world based on the expected consequences of performing actions, because actions are inherently linked to predictions about the sensory inputs the brain will next receive. This ongoing interplay between actions and predictions allows for the predictive control of behaviour, anticipating and responding to incoming changes. The theory has been developed and defended by Karl Friston, Jakob Hohwy and their colleagues.
According to PP-AI, actions can be either overt or covert. Overt actions typically involve physical movement, such as saccadic eye movements in the case of vision: we never look at a scene in a perfectly steady way. Covert actions typically involve shifts in our focus of attention. According to PP-AI, changes in attention, motor activity or working memory should be necessary for changes in conscious experience. In short, to see is to look – to hear is to listen.
PP-AI predicts that the inactivation of neurons that only have a background level of activity should modify perception without necessarily disrupting it, and might even improve it by making it feel sharper.
Predictive Processing - Neurorepresentationalism

PP-NREP
Neurorepresentationalism (PP-NREP), like PP-AI, is inspired by predictive processing. Conscious experiences comprise not only perception, but also imagery and dreaming. The theory views consciousness as a rich and spatially encompassing representation of the body and its environment, built by integrating information from all the senses. This survey of the environment and body therefore includes best-guess representations across the senses, distinct yet combined into an integrated picture, rather than simply linking single stimuli to specific behavioural responses. Its biological function is to enable the brain's action systems to engage in deliberation, planning, and complex goal-directed behaviours. PP-NREP was first proposed by Cyriel Pennartz in 2015.
Under PP-NREP, the corticothalamic areas of the brain dedicated to a single main sensory modality are organized hierarchically: lower levels compute basic features of sensory inputs and errors, whereas higher levels generate more abstract, holistic representations of sensed objects that remain invariant across different viewpoints. A specific aspect of PP-NREP is that it views lateral connections between sensory modalities as essential for conscious experience. The various modalities are highly interconnected and "superinferences" are needed to encompass and unify the richness of information coming through each of our senses. Superinferences do not come from a single sensory source, but combine information across all modalities, resulting in multisensory integration despite segregation. Superinferences are best understood as arising across multiple levels of brain representations: small neuronal networks engage in low-level representations (for instance single object features), which are then integrated at the higher level of object representation from a single sensory modality (for instance, vision). At an even higher level, these representations from single sensory modalities are integrated into multimodal, spatially encompassing representations.
PP-NREP contrasts with PP-AI not only by this multi-level account, but also by the view that actions are not necessary for conscious perception: while actions and attention influence conscious perception, consciousness can also be maintained in their absence. Furthermore, in this case, the prediction error is minimised largely based on present sensory inputs rather than anticipated ones.
PP-NREP predicts that sensory neurons can participate in perception even if they display only a background level of activity. Even when awake and at rest, the brain does not need a specific stimulus to generate a predictive representation: the conscious brain is constantly engaged in maintaining and updating a representation of its environment, a process that requires a level of spontaneous activity to maintain the structure of its multimodal network. Unlike under PP-AI, selectively inactivating neurons displaying a background level of activity should therefore disrupt conscious experience, as those neurons still participate in conscious perception.
Recurrent Processing Theory

RPT
Recurrent Processing Theory (RPT) associates the subjective aspect of conscious experience with loops of communication within the sensory cortex. Instead of information flowing only one-way from lower sensory brain areas to higher ones, it emphasises the importance of recurrence, that is, feedback and horizontal connections in neural signals. RPT is a first-order theory: it ties consciousness to a first-order representation within perceptual areas of the brain. It was introduced by Victor Lamme in 2006.
RPT breaks down the cerebral activity that follows a stimulus into three stages. First, information is rapidly processed and propagated through a hierarchy of sensory areas: this is feedforward processing. Second, signals loop back, reactivating earlier areas: this is local recurrence. It is at this stage that subjective experience arises. Third and finally, information can spread further and activate more widespread areas, reaching frontal and parietal regions of the brain. When this happens, we become able to access, cognitively manipulate, and report our experience.
This final stage of RPT could correspond to GNWT's conscious access during ignition. However, RPT differs by distinguishing between conscious access and subjective experience. RPT argues that subjective experience emerges first, during the second phase of the process, and may not always lead to further access. That is to say, unlike GNWT, RPT argues that consciousness is possible even in the absence of access, and therefore even when a subject is unable to report it.
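The staged picture above can be sketched as a simple Python classifier. The thresholds are invented for illustration, not values from the theory: every stimulus gets a feedforward sweep, a sufficiently strong one triggers local recurrence (where subjective experience arises), and only the strongest also achieve widespread, reportable access.

```python
def rpt_stages(signal_strength, recurrence_threshold=0.3, access_threshold=0.7):
    """Return which RPT stages a stimulus of a given strength reaches
    (hypothetical thresholds, purely to make the staging concrete)."""
    stages = ["feedforward"]                    # every stimulus gets this sweep
    if signal_strength >= recurrence_threshold:
        stages.append("local recurrence")       # subjective experience arises
    if signal_strength >= access_threshold:
        stages.append("global access")          # reportable conscious access
    return stages

# A mid-strength stimulus reaches local recurrence but not global access:
# experienced yet unreportable, the case that distinguishes RPT from GNWT.
```
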
RPT predicts that the subjective strength of perceptions is not dissociable from the signal strength of first-order states, as those are necessary and sufficient to determine the subjective aspects of experience. Under this theory, recurrent processing in the occipitotemporal area of the brain is sufficient for consciousness; activity in prefrontal areas solely reflects further cognitive processing.
Higher-Order Theories
Higher-Order Theories (HOTs) come in different flavours, leading to contrasting hypotheses about consciousness and the brain, but they share a unifying core: they consider consciousness to be a metacognitive phenomenon. Within these theories, a mental state becomes conscious only if it involves a meta-representation: a representation that has another representation as its target. As a result, perceptions, feelings or thoughts can only be conscious when the mind generates a separate, higher-order representation about them.
While HOTs originate from and have been largely developed in philosophy, they have also become a more frequent target of investigation in neuroscience. Anatomically, HOTs tend to emphasise the role of the prefrontal cortex, a region of the brain that has been associated with complex cognitive functions.
Different HOTs defend different roles for the higher-order representations responsible for consciousness. Some versions, called "sparse", argue that having a conscious experience requires both the original first-order representation (coming from sensory perception) and the higher-order representation monitoring it: the two need to be present together. Other versions, called "rich", consider that the higher-order representation alone is sufficient for consciousness. HOTs also differ in whether or not they allow for strong misrepresentation: a case in which first-order and higher-order representations would be misaligned. This tends to be closely related to how “rich” a HOT is: the stronger the possible decoupling between first-order and higher-order representations, the greater the possibility of strong misrepresentation.
Below, we further explore several HOTs and their specificities.
Higher-Order Representation Of a Representation

HOROR
Higher-Order Representation Of a Representation (HOROR) is a "rich" HOT: it argues that what matters for a mental state to be conscious is solely for it to be a higher-order representation. This theory was first advanced by Richard Brown in 2015.
Even if the higher-order representation represents a lower-order one as occurring, it does not require that the first-order representation actually be present at the same time. Because of this, HOROR opens the possibility of cases of strong misrepresentation: one could, for instance, have a meta-representation of hearing a sound even in the absence of activity in the auditory cortex. This feature might explain phenomena such as hallucinations and other false beliefs about our own experiences.
HOROR predicts that the contents of higher-order representations are encoded in the prefrontal cortex. In the case of a strong misrepresentation, this activity would occur even if no corresponding activity typically linked to such content is present in the associated sensory areas.
Self-Organizing Metacognitive Account

SOMA
The Self-Organizing Metacognitive Account (SOMA) views consciousness as a learned skill, the brain's ability to represent its own processes to itself. It distinguishes between sensitivity (sensing and reacting to a sensory input) and awareness. Awareness is a higher-order state, which includes at least the meta-representation of one's own sensitivity. In a sense, consciousness then is the brain's own constructed theory about itself. This theory was proposed by Axel Cleeremans and his colleagues.
As is the case with other higher-order theories, SOMA requires that a sensory first-order state be monitored by a higher-order one to become conscious. This theory however goes further by suggesting that these meta-representations not only track first-order perceptual states, but also evaluate them in different ways, reflecting the mental attitude of the subject towards them: "Is this real?", "Is this good?", "Have I seen this before?". Furthermore, such meta-representations are learned over cognitive development and training. They are highly flexible and, ultimately, they shape our preferences and the ways in which we act and react to the world.
SOMA is a "rich" HOT, similarly to HOROR, but with a more relational aspect: first-order states still contribute to the qualitative aspects of conscious experience. Consequently, according to this theory, weak misrepresentation of a first-order state by a higher-order one is possible, but the "raw" data provided by the former will still affect the way things subjectively feel to us.
Perceptual Reality Monitoring

PRM
As suggested by its name, the main idea behind Perceptual Reality Monitoring (PRM) is that consciousness involves monitoring how reliable our sensory inputs are. More specifically, according to PRM, a dedicated higher-order process monitors the reliability of our first-order sensory signals. Thus, unlike HOROR, PRM is a "sparse" higher-order theory: it requires that both a first-order representation and a higher-order representation be present alongside one another for consciousness to occur. Neither of these representations is sufficient for consciousness on its own. PRM was first introduced and defended by Hakwan Lau in 2019 and subsequently developed with important contributions from Matthias Michel.
First-order representations can occur either because they are triggered by an external stimulus, or because we are imagining such a stimulus in our mind. Interestingly, despite the similarity of the first-order representations in both cases, we experience these two situations very differently. PRM proposes that higher-order states are tasked with interpreting and discriminating first-order representations, attributing them either to perception or to imagination. When first-order representations are evaluated as having been generated internally, we experience them as part of our imagination, in a way that is conscious yet distinct from our actual perception of the world. Higher-order representations also evaluate whether first-order ones correspond to actual signals rather than mere noise; representations judged to be noise remain unconscious.
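This discrimination step can be sketched as a small Python function. The labels and the noise threshold are hypothetical, purely to make the logic concrete: a higher-order check first asks whether a first-order signal rises above noise at all, then attributes it to perception or imagination depending on whether it was generated internally.

```python
def reality_monitor(signal_strength, internally_generated, noise_threshold=0.2):
    """Toy PRM-style higher-order check on a first-order signal
    (made-up threshold, illustrative labels)."""
    if signal_strength < noise_threshold:
        return "noise (stays unconscious)"    # judged unreliable: no experience
    if internally_generated:
        return "imagination (conscious)"      # experienced as imagined
    return "perception (conscious)"           # experienced as real

tags = [reality_monitor(0.1, False),          # weak external signal: just noise
        reality_monitor(0.6, True),           # strong internal signal: imagery
        reality_monitor(0.6, False)]          # strong external signal: the world
```

Note how the same first-order signal strength (0.6) is experienced differently depending on its attributed source, which is exactly the distinction between imagining and perceiving that PRM emphasises.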
Unlike HOROR, PRM precludes strong misrepresentation of first-order states by higher-order ones. This is because first-order representations remain an essential part of conscious experience: they provide its qualitative aspects. PRM itself leaves those qualitative aspects unexplained: they might be accounted for by combining it with other theories dedicated to them. Thus, PRM focuses not on how specific experiences are encoded, but on how such content becomes conscious: either as a representation of reality or as internal imagination.
Higher-Order State Space

HOSS
Higher-Order State Space (HOSS) focuses on how people report their own awareness: as both simple and abstract. It is simple because it varies along a single dimension, ranging from unaware to aware. It is abstract because it encodes only whether a perceptual state is present or absent, rather than its full details. This comes with an asymmetry. When awareness is absent, there cannot be any perceptual content either: being unaware of a circle is the same as being unaware of a square. However, when awareness is present, it could be about any of a large variety of inputs, emotions or thoughts. HOSS was brought forward by Stephen Fleming in 2020.
HOSS, like PRM, is a "sparse" HOT: even though higher-order representations are essential for conscious experience, they work in tandem with first-order ones which accompany them to supply the detailed contents of perception. Like PP-AI and PP-NREP, HOSS makes use of the predictive processing framework in its models.
In HOSS, first-order representations are seen as complex and unconscious. On the other hand, higher-order representations provide simpler abstractions that tag the first-order representations with a degree of reliability. Unlike PRM, the difference between imagery and perception is not directly encoded by the higher-order representation, but rather by thresholds in awareness (with imagination being a weaker signal than actual perception). Contrary to HOROR and as with PRM, strong mismatches between the first-order and higher-order representations are not possible, thus precluding cases of strong misrepresentation.
