
Theories of Consciousness
Learn more about the different theories being tested as part of this research program. Some theories share a number of similarities, while others differ substantially.

Each theory is the product of rigorous scientific and philosophical enquiry. We hope the scientific experiments under this program will generate new data that help us understand which of these theories best account for conscious experience.

Please note: this section is still under construction. We plan to add new summaries of each theory soon.
Integrated Information Theory (IIT)
Integrated information theory (IIT) is based on a phenomenological, axiomatic approach. It starts from the essential properties that characterize any phenomenal experience and derives from them the requirements a physical system must meet to be conscious (Tononi, 2004, 2008, 2012). According to IIT, the physical substrate of consciousness is a maximum of irreducible intrinsic cause-effect power, as determined from the intrinsic perspective of the system (Tononi, Boly, Massimini, & Koch, 2016). IIT defines a scalar measure of integrated information, known as phi (φ), which can in principle be used to locate this physical substrate within the brain (or any other physical system): the substrate is predicted to be a local maximum of intrinsic, integrated cause-effect power (Tononi, 2012). Quantifying phi in the human brain is challenging in practice, yet current theoretical and neuroanatomical considerations suggest that a complex of maximum phi is likely to reside primarily in the posterior cerebral cortex, in a temporo-parietal-occipital “hot zone” (Tononi et al., 2016). This is because the ‘pyramid-of-grids’-like connectivity architecture of posterior areas is more likely to specify a high number of higher-order distinctions and relations, and hence a higher phi, than the different connectivity of prefrontal regions (Tononi et al., 2016). Currently, however, this remains an auxiliary hypothesis, as the exact anatomy and fine-grained connectivity architecture of different parts of the brain are only partially known. That said, IIT does not exclude that certain anterior regions may contribute to the neural correlates of consciousness (NCC) of certain specific aspects of experience, such as interoception or thought (Koch, Massimini, Boly, & Tononi, 2016a).
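To give a feel for what a measure like phi involves, the sketch below works through a deliberately simplified toy calculation. It is our own illustration, not IIT's actual phi (which requires evaluating cause and effect repertoires over all mechanisms and partitions; see the PyPhi package for a faithful implementation): for a two-node system in which each node copies the other's state, it measures how much predictive information is lost when the connection between the nodes is cut.

```python
import numpy as np

# Toy 'integration' measure inspired by IIT for a two-node binary system.
# This is NOT the phi of current IIT; it only conveys the flavor of the idea.

states = [(a, b) for a in (0, 1) for b in (0, 1)]  # joint states of nodes A, B

def intact_dist(a, b, noise=0.1):
    """P(next joint state | current state): each node copies the OTHER
    node's current state, with some noise."""
    dist = np.zeros(4)
    for i, (na, nb) in enumerate(states):
        p_a = (1 - noise) if na == b else noise   # A' copies B
        p_b = (1 - noise) if nb == a else noise   # B' copies A
        dist[i] = p_a * p_b
    return dist

def cut_dist():
    """The same system with the A-B connection severed: neither node can
    see the other, so the next joint state is maximally uncertain."""
    return np.full(4, 0.25)

def kl(p, q):
    return float(np.sum(p * np.log2(p / q)))

# Integration proxy: information the intact dynamics carry over and above
# the partitioned dynamics, averaged across current states.
phi_proxy = np.mean([kl(intact_dist(a, b), cut_dist()) for (a, b) in states])
print(f"toy integration measure: {phi_proxy:.3f} bits")
```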
Global Neuronal Workspace Theory (GNW)
Global neuronal workspace theory (GNW) posits that what we subjectively experience as a conscious state, at any given moment, is the global broadcasting of information (Baars, 1989) in an interconnected network of prefrontal-parietal areas and many distant high-level sensory areas (Dehaene & Changeux, 2011; Dehaene, Charles, King, & Marti, 2014; Dehaene & Naccache, 2001). Specifically, preconscious processing is thought to occur in parallel in many localized, modular circuits such as the ventral visual stream. This processing remains unconscious unless the activity triggers an ignition of the global neuronal workspace that causes the information to be broadcast and sustained, thereby becoming conscious (Dehaene & Naccache, 2001). The global neuronal workspace is a hypothesized network of long-range cortical neurons with reciprocal projections to homologous neurons in other cortical areas, distributed over prefrontal (PFC), parieto-temporal, and cingulate associative cortices. These neurons, mostly originating from the pyramidal cells of layers 2 and 3, are connected through long-range excitatory axons to high-level sensory areas, allowing for flexible, domain-general amplification, distribution, and exchange of information across cognitive systems such as those involved in language, memory, planning, and voluntary action. Thus, according to GNW, a hallmark of conscious processing is a late (after ~250 ms) global broadcasting and amplification of information in an interconnected network of prefrontal, parietal, and high-level sensory areas (Del Cul, Baillet, & Dehaene, 2007).
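The ignition dynamic at the core of GNW can be caricatured in a few lines of code. The following is a toy simulation of our own devising, not a model from the GNW literature, and all parameters are invented: modular processors accumulate evidence in parallel and their contents remain unconscious until one of them crosses a threshold, at which point that content is broadcast into the shared workspace.

```python
import numpy as np

rng = np.random.default_rng(0)

N_MODULES = 4      # parallel, modular processors (e.g., ventral visual stream)
THRESHOLD = 1.0    # ignition threshold (arbitrary units)

activity = np.zeros(N_MODULES)
workspace = None   # contents of the global workspace (None = nothing conscious)

# Drive module 0 with a weak stimulus and all modules with noise; the drive
# is tuned so that threshold crossing (ignition) happens late, ~250 ms.
for t_ms in range(0, 400, 10):
    drive = np.where(np.arange(N_MODULES) == 0, 0.04, 0.0)
    activity = np.clip(activity + drive + rng.normal(0, 0.01, N_MODULES), 0, None)

    if workspace is None and activity.max() > THRESHOLD:
        # Ignition: the winning module's content is broadcast through
        # reciprocal long-range connections, amplified, and sustained.
        workspace = int(activity.argmax())
        print(f"t = {t_ms} ms: ignition, module {workspace} broadcast globally")

if workspace is None:
    print("stimulus was processed locally but never ignited: unconscious")
```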
Recurrent Processing Theory (RPT)
Recurrent processing theory (RPT; Lamme, 2006), like all first-order theories, assumes that first-order representations are at the heart of phenomenal consciousness. What distinguishes RPT from other first-order theories is its claim about what sort of first-order representations are needed for consciousness. RPT holds that feed-forward information propagating from the sensory organs into lower-order sensory areas and then into higher-order sensory areas of the brain is insufficient for awareness: first-order representations based solely on bottom-up signals do not support phenomenal consciousness. Instead, according to RPT, awareness arises when bottom-up signals are combined with recurrent processing, which adds feedback and horizontal connections to the purely feed-forward propagation of neural signals. Together, these signals enable the process of perceptual organization, which RPT postulates to be central to conscious experience.
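To make the contrast between a feed-forward sweep and recurrent processing concrete, here is a minimal sketch of our own; the weights, sizes, and nonlinearity are arbitrary choices, not claims of RPT. The same bottom-up input is processed once in a purely feed-forward pass, and then again with feedback and horizontal connections added, letting the lower-level representation settle, as a stand-in for perceptual organization.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=8)                       # bottom-up sensory input

W_ff = rng.normal(scale=0.2, size=(8, 8))    # lower -> higher (feed-forward)
W_fb = rng.normal(scale=0.2, size=(8, 8))    # higher -> lower (feedback)
W_h  = rng.normal(scale=0.05, size=(8, 8))   # horizontal, within the lower area

# Feed-forward sweep only: on RPT's view, insufficient for awareness.
low = np.tanh(x)
high = np.tanh(W_ff @ low)

# Recurrent processing: feedback and horizontal signals are combined with
# the bottom-up drive over several iterations, letting the lower-level
# representation settle into an organized interpretation.
for _ in range(10):
    low = np.tanh(x + W_fb @ high + W_h @ low)
    high = np.tanh(W_ff @ low)

print("settled lower-area representation:", np.round(low, 2))
```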
Higher-Order Representation of a Representation (HOROR) Theory
HOROR theory (Brown, 2015) says that the neural correlates of consciousness involve a higher-order representation that represents that certain first-order representations are present. For example, when someone consciously experiences a sound, there is a higher-order representation that represents that a first-order representation of a sound is present. According to the version of HOROR theory that we will focus on (LeDoux & Brown, 2016), these higher-order representations encode the specific contents of phenomenal awareness and, importantly, are found in prefrontal cortex. HOROR therefore predicts that the contents of phenomenal awareness should be decodable from prefrontal cortex.
While all higher-order theories say that a higher-order representation is crucial for consciousness, some require that a first-order representation is also present and some do not. HOROR is in the second class. In the case of a conscious experience of a sound, HOROR requires that a higher-order representation (in prefrontal cortex) represents that a first-order representation of a sound is present, but it does not require that a first-order representation (in auditory cortex) is itself present. Specifically, according to HOROR theory the content of the relevant higher-order representation would be something to the effect of ‘I am having a first-order representation of the sound of a tuning fork’. This content determines the conscious experience whether or not a first-order representation is actually there.
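HOROR's decodability prediction can be illustrated with a simple simulation. The sketch below is our own toy, not an analysis from the HOROR literature: it generates synthetic "prefrontal" activity patterns for two conscious contents and shows that a basic nearest-centroid classifier recovers the content from held-out patterns, which is the logic of the multivariate decoding tests the theory motivates.

```python
import numpy as np

rng = np.random.default_rng(2)
n_features, n_trials = 50, 100   # e.g., 50 channels, 100 trials per content

# Hypothetical prefrontal patterns: each conscious content (a heard tone
# vs. a seen flash) evokes a distinct mean pattern plus trial noise.
pattern_tone = rng.normal(size=n_features)
pattern_flash = rng.normal(size=n_features)

def simulate_trials(pattern):
    return pattern + rng.normal(scale=3.0, size=(n_trials, n_features))

train_tone, test_tone = simulate_trials(pattern_tone), simulate_trials(pattern_tone)
train_flash, test_flash = simulate_trials(pattern_flash), simulate_trials(pattern_flash)

# Nearest-centroid decoder: assign each test trial to the closer mean pattern.
c_tone, c_flash = train_tone.mean(axis=0), train_flash.mean(axis=0)

def decode(trials):
    d_tone = np.linalg.norm(trials - c_tone, axis=1)
    d_flash = np.linalg.norm(trials - c_flash, axis=1)
    return np.where(d_tone < d_flash, "tone", "flash")

accuracy = (np.mean(decode(test_tone) == "tone")
            + np.mean(decode(test_flash) == "flash")) / 2
print(f"decoding conscious content from 'PFC' patterns: {accuracy:.0%} correct")
```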
Perceptual Reality Monitoring (PRM)
PRM (H. Lau, 2019) starts from the observation that first-order (FO) representations can be active both when we are conscious of an external stimulus and when we are actively imagining or maintaining that stimulus or content in mind. However, the conscious experience in these two cases is very different: one feels more perception-like, more like “reality”, whereas the other is either not experienced perceptually at all (as in working memory) or is associated with a distinct experience of internal imagery. PRM proposes that this difference is underpinned by a downstream higher-order (HO) mechanism that indexes and interprets the FO representations, akin to the discriminator in a generative adversarial network (GAN; Gershman, 2019). When operational, such a discriminator allows the system to distinguish between FO activity related to imagery and FO activity related to perception (reality). PRM proposes that this same mechanism allows for perceptual metacognition: determining whether the FO state is driven by noise or by an external signal. This makes PRM a relational HO theory, in which the quality of the experience is determined by the FO state, while a HO mechanism indexes whether this content should be given the status of “real” experience.
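Since PRM itself invokes the discriminator of a GAN, a small sketch can make the analogy concrete. The example below is our own toy with invented statistics: externally driven FO activity is assumed to be stronger and more reliable than imagery-related activity, and a simple learned threshold plays the role of the discriminator that tags an FO state as “real”.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000

# Toy first-order (FO) activity: perception is assumed here to be stronger
# and less variable than imagery (an assumption of this sketch, not of PRM).
perception = rng.normal(loc=1.0, scale=0.5, size=n)
imagery = rng.normal(loc=0.4, scale=0.8, size=n)

# "Discriminator": pick the threshold on FO activity that best separates
# externally from internally generated samples on training data.
candidates = np.linspace(-1, 2, 301)
def accuracy(th):
    return 0.5 * (np.mean(perception > th) + np.mean(imagery <= th))
threshold = candidates[np.argmax([accuracy(t) for t in candidates])]

def reality_monitor(fo_activity):
    """Tag an FO state as reality-like ('perception') or internally
    generated ('imagery'), as PRM's higher-order mechanism is held to do."""
    return "perception" if fo_activity > threshold else "imagery"

print("learned threshold:", round(float(threshold), 2))
print("novel strong FO state ->", reality_monitor(1.3))
print("novel weak FO state  ->", reality_monitor(0.1))
```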
Higher-Order State Space (HOSS)
HOSS (Fleming, 2020) starts from the observation that people’s awareness judgments across a range of contents are naturally low-dimensional (ranging from no experience to a rich experience along a single dimension) and abstract (we can become aware of any content, from external stimuli to emotions to thoughts). These psychological features of awareness constrain what we would need to add to a FO perceptual generative model for it to generate abstract, low-dimensional awareness judgments. Existing perceptual generative models typically contain high-dimensional FO representations, and a long-standing view is that FO perceptual inference often proceeds unconsciously (Helmholtz, 1856). HOSS builds on such models by incorporating HO representations, “awareness states”, which provide a low-dimensional abstraction of the signal-to-noise ratio of (potentially modality-specific) FO perceptual generative models. HOSS predicts that content-invariant HO indices act to “tag” a FO state with different degrees of phenomenal magnitude. Alterations in the function (or priors) at this HO level may lead to blindsight-like effects and performance without awareness.
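The idea of a single, content-invariant awareness dimension sitting on top of high-dimensional FO models can be sketched as a simple hierarchical inference. This is our own toy, not Fleming's actual model, and the Gaussian likelihoods are assumptions: a higher-order node infers “signal” versus “noise” from the strength of the FO evidence, and shifting its prior reproduces blindsight-like dissociations.

```python
import numpy as np

def awareness_posterior(fo_evidence, prior_seen=0.5):
    """A single, content-invariant higher-order inference: was the FO
    activity driven by an external signal or by noise? The Gaussian
    likelihoods below are illustrative assumptions, not fitted values."""
    def gauss(x, mu, sd=0.5):
        return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    like_seen = gauss(fo_evidence, mu=1.0)    # signal-driven FO state
    like_noise = gauss(fo_evidence, mu=0.0)   # noise-driven FO state
    p = like_seen * prior_seen
    return p / (p + like_noise * (1 - prior_seen))

# The same HO computation applies whatever the FO content (visual,
# auditory, a thought, ...): awareness varies along one dimension.
for evidence in (0.1, 0.5, 1.2):
    print(f"FO evidence {evidence:.1f} -> P(aware) = "
          f"{awareness_posterior(evidence):.2f}")

# An altered HO prior yields blindsight-like behaviour: strong FO
# evidence, yet a low inferred probability of being aware.
print(f"strong evidence, altered prior -> P(aware) = "
      f"{awareness_posterior(1.2, prior_seen=0.05):.2f}")
```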
Self-Organizing Metacognitive Account (SOMA)
SOMA (Cleeremans, 2011, 2014; Cleeremans et al., 2020), like other HO theories, begins by assuming that first-order representations are not sufficient to afford conscious experience: there is a strong distinction between sensitivity, that is, the fact that a system can be sensitive to and appropriately react to states of affairs, and awareness, which minimally requires sensitivity to one’s own sensitivity, that is, knowledge of the fact that one is in a particular mental state. SOMA thus assumes, like other HO theories, that experiences presuppose the existence of a subject whose experiences they are. SOMA further assumes (1) that this self-sensitivity (or “inner awareness”) is driven by systems of meta-representations, that is, representations about other (FO) representations; (2) that such systems of meta-representations qualify their target FO states in different ways (Is this real? Do I like this? Have I seen this before? Do I regret having done this?) that characterize the mental attitude of the subject towards the target contents; and (3), crucially, that such systems of meta-representations are learned over cognitive development and training. Such meta-representations thus also convey affective dispositions (preferences) and are highly plastic, that is, subject to learning and to suggestion. SOMA is a rich HO theory in the sense that it assumes that HO states continuously track a rich and potentially variable set of FO features. It allows for misrepresentation, but has more of a relational character than HOROR in that it assumes that “raw” data about external states of affairs are contained in the target FO states and that those contents always contribute something to the global phenomenology.
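In the spirit of SOMA's claim that meta-representations are learned, here is a minimal sketch of our own, loosely echoing the second-order “wagering” network simulations reported in this line of work: a first-order stage makes noisy perceptual decisions, and a second-order stage is trained, by plain gradient descent, to predict from the FO state whether those decisions are correct.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5000

# First-order (FO) stage: noisy evidence about a binary stimulus.
stim = rng.choice([-1.0, 1.0], size=n)
evidence = stim + rng.normal(scale=1.0, size=n)
decision = np.sign(evidence)
correct = (decision == stim).astype(float)

# Second-order stage: LEARN (logistic regression, plain gradient descent)
# to predict FO accuracy from the FO state itself, i.e., acquire a
# meta-representation of the quality of the FO state through training.
x = np.abs(evidence)    # feature of the FO state (its "strength")
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))
    w -= 0.5 * np.mean((p - correct) * x)
    b -= 0.5 * np.mean(p - correct)

# The learned meta-representation: confidence rises with FO strength.
for strength in (0.1, 1.0, 2.5):
    conf = 1.0 / (1.0 + np.exp(-(w * strength + b)))
    print(f"FO strength {strength:.1f} -> learned confidence {conf:.2f}")
```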
Predictive Processing - Active Inference (PP-AI)
The specific claim of PP-AI with respect to consciousness is that active inference is necessary for change in conscious perception (Friston, 2018; Friston et al., 2017, 2020; Hohwy, 2013; Whyte & Smith, 2021). Active inference is a predictive processing theory concerned with inferring policies for action so as to minimise expected (in contrast to current) prediction error. In brief, active inference entails covert or overt sampling of the sensorium to reduce uncertainty about the causes of sensations. Overt sampling in the visual domain occurs mainly through eye movements or other movements. Uncertainty reduction manifests as belief updating, which can be read as perception. Put simply, something can only be consciously ‘seen’ when ‘looked at’ or ‘noticed’. Covert active inference is attention to (or sensory attenuation of) some features of an object or its location; this subsumes attentional capture, e.g., the act of directing spatial attention in response to a salient stimulus. On a strong reading, active inference as a theory of change in consciousness would be refuted if something could be seen without being looked at, or noticed. More formally, in the absence of active inference there can be no stimulus-bound responses that induce perception or recognition of an object. Recognition is here defined as the act of recognising a particular thing or object (as opposed to being aware that something has changed in the sensorium). PP-AI supposes that explicitly recognising an object requires perceiving that object.
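A drastically simplified sketch, ours rather than a model from the active inference literature, can illustrate the claim: an agent holds beliefs about two locations, chooses where to “look” so as to reduce its uncertainty, and updates its beliefs (i.e., perceives) only at the location it actually samples.

```python
import numpy as np

rng = np.random.default_rng(5)

belief = np.array([0.5, 0.5])   # P(target present) at each of two locations
target = np.array([1, 0])       # ground truth, hidden from the agent
P_CORRECT = 0.9                 # reliability of a foveated observation

def entropy(p):
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return float(-(p * np.log2(p) + (1 - p) * np.log2(1 - p)).sum())

for step in range(4):
    # Policy selection: look where uncertainty is highest, i.e. choose the
    # action expected to reduce prediction error the most.
    look_at = int(np.argmin(np.abs(belief - 0.5)))
    # Overt sampling: a noisy observation arrives ONLY at the fixated spot.
    seen = target[look_at] if rng.random() < P_CORRECT else 1 - target[look_at]
    # Belief updating (perception) via Bayes at the sampled location.
    like = P_CORRECT if seen == 1 else 1 - P_CORRECT
    prior = belief[look_at]
    belief[look_at] = like * prior / (like * prior + (1 - like) * (1 - prior))
    print(f"step {step}: looked at {look_at}, saw {seen}, "
          f"beliefs={np.round(belief, 2)}, uncertainty={entropy(belief):.2f} bits")

# Locations that are never sampled keep their prior: without active
# inference there is no belief updating and, on PP-AI's reading, no
# change in conscious perception.
```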
Predictive Processing - Neurorepresentationalism (PP-NREP)
Set within the general Predictive Processing framework, neurorepresentationalism (PP-NREP) postulates that perception arises from the construction of both high- and low-level inferential representations, which can be simultaneously characterized as perceptual hypotheses; continuous interaction with bottom-up sensory inputs updates generative models of the causes underlying changes in sensory input (Pennartz, 2015). Differently from PP-AI, however, it holds that overt or covert action (eye movement, top-down attention) is not necessary for conscious perception per se. That is, motor activity and attention (top-down or bottom-up) can influence conscious perception, but consciousness is maintained even in their absence (Pennartz, 2018). The specific link to consciousness arises because representations, specified at a high conceptual level, can become comprehensive in a strong sense, i.e., when they provide a spatially encompassing and multimodally rich survey of the subject’s current situation. To specify the sensory modalities contributing to this richness, emphasis is placed not only on bottom-up/top-down cortical connectivity but also on lateral (intermodal) connectivity. This leads to the postulate that background activity is necessary even in those brain areas and neuronal groups not explicitly tuned to the perceived object features (Pennartz, 2009, 2015). This background activity extends to those brain areas supporting the modality- and sensory-specific identification of the perceived feature under scrutiny, which generally includes other sensory cortical areas as well as higher associative (e.g., parietal) and possibly motor cortical areas (but not all brain areas, e.g., not the hypothalamus). Finally, whereas PP-NREP holds that perceptual brain systems aim at minimizing current prediction errors (optimizing inference on current sensory inputs), PP-AI is devoted to the minimization of expected prediction errors in relation to the inference of action policies.
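The contrast PP-NREP draws, inference on current inputs rather than selection of actions, can be shown with a minimal predictive coding loop. This is our own sketch with invented numbers: a latent cause is iteratively adjusted to reduce the current prediction error on two sensory modalities at once, with no overt or covert action involved.

```python
import numpy as np

# Two modalities report the same hidden cause through different gains
# (illustrative generative model: visual = 1.0 * cause, tactile = 0.5 * cause).
gains = np.array([1.0, 0.5])
sensed = np.array([2.1, 0.9])   # current bottom-up input from both modalities

cause = 0.0                     # initial perceptual hypothesis
lr = 0.1
for step in range(50):
    predicted = gains * cause   # top-down predictions
    error = sensed - predicted  # CURRENT prediction errors (no action needed)
    cause += lr * gains @ error # gradient step reducing the squared error

print(f"inferred cause: {cause:.2f} "
      f"(residual error: {np.linalg.norm(sensed - gains * cause):.3f})")
```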