What is Consciousness?
Consciousness, though difficult to describe, is unmistakable when present: it is widely regarded as the state or quality of awareness, of having subjective experience.
The problem with the study of consciousness is the lack of a universally accepted operational definition. Descartes proposed the idea of cogito ergo sum ("I think, therefore I am"), suggesting that the very act of thinking demonstrates the reality of one's existence. Today, consciousness is generally defined as an awareness of oneself and the world, though debates continue about the different aspects of this awareness.
Research has focused on understanding the neuroscience behind our conscious experiences. Scientists have even used brain-scanning technology to seek out specific neurons that might be linked to different conscious events. Modern researchers have proposed two major theories of consciousness: integrated information theory and global workspace theory.
Integrated Information Theory
This approach examines consciousness by studying the physical processes that underlie our conscious experiences.
The theory attempts to create a measure of the integrated information that a system generates; the quality of an organism's consciousness is represented by its level of integration.
This theory tends to focus on whether something is conscious and to what degree it is conscious.
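To make the idea of "measuring integration" concrete, here is a toy sketch. It is not IIT's formal phi; instead it uses a much simpler stand-in, total correlation (the information a system as a whole carries beyond its independent parts), which I am assuming here purely for illustration. The function names and example systems are my own.

```python
# Toy illustration of "integration as a measurable quantity".
# NOT Tononi's formal phi -- a simplified stand-in (total correlation):
# H(A) + H(B) - H(A,B), which is zero iff the parts are independent.
from math import log2
from itertools import product

def entropy(dist):
    """Shannon entropy in bits of a {outcome: probability} distribution."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def total_correlation(joint):
    """Information the whole system carries beyond its independent parts."""
    a_marg, b_marg = {}, {}
    for (a, b), p in joint.items():
        a_marg[a] = a_marg.get(a, 0.0) + p
        b_marg[b] = b_marg.get(b, 0.0) + p
    return entropy(a_marg) + entropy(b_marg) - entropy(joint)

# Two perfectly coupled binary units: maximally integrated (1 bit).
coupled = {(0, 0): 0.5, (1, 1): 0.5}
# Two independent fair coins: no integration at all (0 bits).
independent = {(a, b): 0.25 for a, b in product([0, 1], repeat=2)}

print(total_correlation(coupled))      # 1.0
print(total_correlation(independent))  # 0.0
```

The point of the sketch is only that "degree of integration" can in principle be expressed as a number computed from a system's state statistics, which is the spirit of IIT's claim that consciousness comes in measurable degrees.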
Global Workspace Theory
This theory suggests that we have a memory bank from which the brain draws information to form the experience of conscious awareness.
While integrated information theory focuses more on identifying whether an organism is conscious, the global workspace theory offers a much broader approach to understanding how consciousness works.
Can Machines Have Consciousness?
The question of whether machines can have consciousness is not new; proponents of strong artificial intelligence (strong AI) and weak AI have exchanged philosophical arguments for a considerable period of time. John R. Searle, though critical of strong AI, characterized it as assuming that "…the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have cognitive states" (Searle, 1980, p. 417). In contrast, weak AI assumes that machines do not have consciousness, mind, or sentience but only simulate thought and understanding.
When thinking about artificial consciousness, we face several problems (Manzotti and Chella, 2018). Most fundamentally, there is the difficulty of explaining consciousness at all: of explaining how subjectivity can emerge from matter, often called the "hard problem of consciousness" (Chalmers, 1996). In addition, our understanding of human consciousness is shaped by our own phenomenal experience. Whereas we know human consciousness from the first-person perspective, artificial consciousness will only be accessible to us from the third-person perspective. Related to this is the question of how to know whether a machine has consciousness at all.
A basic assumption for artificial consciousness is that it is to be found in the physical world of machines and robots (Manzotti and Chella, 2018). Furthermore, any definition given by humans will have to be made from the third-person perspective, without relying on phenomenal consciousness.
An example of this strategy is given by David Levy (Levy, 2009, p. 210) who prefers to take a pragmatic view according to which it is sufficient to have a general agreement about what we mean by consciousness and suggests “let us simply use the word and get on with it.”