
Human consciousness, human unity and AI

Updated: Apr 10, 2021

This post explores ideas about consciousness and the open question of whether AI can develop higher-level complex cognitive processes, that is, whether it can develop into a new form of consciousness.



Contemporary cognitive computer science fuses understanding from different fields of technology and science to develop AI, or more specifically digital intelligence. Much of the learning being applied to future technologies draws on the science of the human brain, human learning, and cognitive development; where computing architectures are modelled directly on the brain, this is called neuromorphic computing.


It starts from the position of recognising that human cognition is conscious and unconscious, concrete and abstract, as well as intuitive and conceptual. Currently, digital intelligence is at the level of memorising and identifying objects, rather like a child that has been taught the names of familiar objects (concepts) and how to sort them into categories, like shapes or food. Children can then learn to recognise basic patterns between concepts. Computers can learn to play and win games.


Current digital intelligence can roughly do what a pre-schooler can do, but exceptionally faster and at scale. Computer programmers can create computational structures to allow machines to do things that they could not otherwise. Programmers have coded millions of simple concepts into databases. Categories of concepts are devised, then hierarchical structures, graph structures, and more complex data structures.
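
A minimal sketch of what such a concept structure might look like in code; the concepts, categories, and hierarchy below are invented purely for illustration:

```python
# Sketch: each concept is stored with its parent category, so simple
# "is-a" questions can be answered by walking up the hierarchy.
# The concepts and categories here are illustrative examples only.

concept_parents = {
    "apple": "fruit",
    "banana": "fruit",
    "fruit": "food",
    "triangle": "shape",
    "square": "shape",
}

def is_a(concept: str, category: str) -> bool:
    """Walk up the hierarchy to check whether a concept belongs to a category."""
    current = concept
    while current in concept_parents:
        current = concept_parents[current]
        if current == category:
            return True
    return False

print(is_a("apple", "food"))   # True: apple -> fruit -> food
print(is_a("apple", "shape"))  # False
```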


Computer brains, called neural networks, are beginning to explore the relationships between objects, for example older than, taller than, in front of. This is the sort of thing that children learn in the first year of primary school, for example where letters come in the alphabet. When they get good at it, this type of cognition helps children and computers to decipher positional associations and interpret meaning from more complex relationships, for example reasons why a person may be taller but not necessarily older than another person.
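
A hedged sketch of how such relational facts could be stored and queried; the people and relations are invented for illustration, and a real neural network would learn these associations from data rather than read them from an explicit table:

```python
# Sketch: relations stored as (subject, relation, object) triples.
# A transitive relation such as "taller_than" can be followed through chains,
# which is one simple way to derive facts that were never stated directly.

facts = {
    ("Alice", "taller_than", "Bob"),
    ("Bob", "taller_than", "Carol"),
    ("Carol", "older_than", "Alice"),
}

def holds(subject: str, relation: str, obj: str, seen=None) -> bool:
    """Check a relation directly, or by chaining it transitively."""
    if (subject, relation, obj) in facts:
        return True
    seen = seen or set()
    for s, r, o in facts:
        if s == subject and r == relation and o not in seen:
            if holds(o, relation, obj, seen | {o}):
                return True
    return False

print(holds("Alice", "taller_than", "Carol"))  # True, inferred via Bob
print(holds("Alice", "older_than", "Carol"))   # False: taller does not imply older
```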


Human cognition encompasses language learning and, when advanced, comprehension of the meaning of words in different contexts of use, such as pronouns like ‘it’. Computers are being trained to understand natural language and to recreate language, for example writing a synthesis of the information in a set of books or writing a poem.
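
A minimal sketch of statistical text generation, here a simple bigram (word-pair) model rather than the large neural models used in practice; the training sentence is invented for illustration:

```python
import random

# Sketch: a bigram model counts which word tends to follow which, then
# samples from those counts to "recreate" language. Real systems use large
# neural networks, but the idea of learning from word co-occurrence is similar.

text = "the cat sat on the mat and the cat slept on the mat"
words = text.split()

# Collect, for every word, the words observed to follow it.
followers = {}
for current, nxt in zip(words, words[1:]):
    followers.setdefault(current, []).append(nxt)

def generate(start: str, length: int = 8) -> str:
    word, output = start, [start]
    for _ in range(length):
        options = followers.get(word)
        if not options:
            break
        word = random.choice(options)
        output.append(word)
    return " ".join(output)

print(generate("the"))
```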


Children are naturally curious and are taught early on to pay attention. Computers are being taught to be more curious in their learning. Attention and curiosity support the acquisition of knowledge and the creation of new knowledge. However, this is at the forefront of current practice, and it is where the real future of the technology becomes unknown.


In time, advances in sensing technologies will foreseeably enable multi-modal sensing, for example combining vision and optics, language recognition, sound, touch, taste, and smell. In time, machines will be able to explore different dimensions of the world and piece this information together.


At present, the most advanced technologies have limited ability for active embodied learning (tasks, problems, contexts) and a narrow mission to explore. With more advanced curiosity-driven and unsupervised learning, their capabilities for learning will expand.


In time, computers could be programmed to mimic how people use their senses to create experiential knowledge, how people process knowledge to create meaning, and how they speak about different types of knowledge (epistemologies). Machines could produce more robust forms of scientific evidence, for example through controlled experimentation or systematic review. Human learning would be extended through this knowledge acquisition and knowledge sharing.


Perception is a more complex cognitive process for humans and for machines, for example learning to catch a ball or step out of the way of a moving car. Humans need to protect themselves by knowing what might happen if they do, or do not, act. Perception helps them to predict outcomes and respond. Parents and guardians make sure children learn quickly to understand the consequences of touching hot coffee cups or stepping out into the road. Simulations enable digital intelligence to perceive different outcomes of scenarios.
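
A hedged sketch of using simulation to predict an outcome before acting; the numbers and the scenario (deciding whether there is time to cross before a car arrives) are invented for illustration:

```python
# Sketch: simulate a simple road-crossing scenario to predict the outcome
# of acting now versus waiting. All quantities are illustrative.

def safe_to_cross(car_distance_m: float, car_speed_mps: float,
                  road_width_m: float, walking_speed_mps: float,
                  margin_s: float = 2.0) -> bool:
    """Predict whether crossing now leaves a safety margin before the car arrives."""
    time_to_cross = road_width_m / walking_speed_mps
    time_until_car = car_distance_m / car_speed_mps
    return time_until_car > time_to_cross + margin_s

print(safe_to_cross(car_distance_m=50, car_speed_mps=13,
                    road_width_m=7, walking_speed_mps=1.4))
# False: the car arrives in about 3.8 s, but crossing takes about 5 s
print(safe_to_cross(car_distance_m=120, car_speed_mps=13,
                    road_width_m=7, walking_speed_mps=1.4))
# True: roughly 9.2 s until the car arrives leaves enough margin
```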


As they mature, some humans train their minds in complex problem-solving, for example to be able to make a medical diagnosis, or to use forecasting data to decide when to invest in the stock market. Addressing complex problems is an artful use of intelligence that combines experiential knowledge and perception.


Humans may learn to use mental imagery, for example to visualise themselves winning an Olympic medal. It inspires them to direct their cognitive abilities towards a goal. It also helps people to imagine what having a baby might be like and to prepare for life changes. Computer minds are not that advanced; virtual reality does, however, merge with our perceived realities.


Machines can carry out complex and precise programmed actions devised by humans, such as Boston Dynamics’ dancing robots. Digital intelligence could in theory experience pleasure and pain if the networks are created.


Traditionally, emotion was not thought of as a cognitive process, but much research is now being undertaken to examine the cognitive psychology of emotion. Research is also focused on one's awareness of one's own strategies and methods of cognition, which is called metacognition, for example how a person learns critical thinking, or how it is taught to university students.


Emotion is a powerful cognitive tool in humans. We respond to emotions derived through our senses. We also create emotions through our thoughts; for example, we can make ourselves happy at the memory of a loved one. Computers can learn about emotions as concepts and the relationships between emotions and other concepts, for example what a smile means. Humans and computers may both struggle to judge whether the smile is genuine or fake.


Without understanding emotion in context, a human or a machine cannot understand why a person is crying or feels hurt inside. People can learn to read their emotions and to use emotional intelligence. Digital intelligence can be programmed to identify visual cues and make inferences about people’s emotions on this basis, or simply to ask 'How do you feel today?' and be trained to select an appropriate empathetic response, or not.
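
A minimal sketch of the kind of rule-based response selection described above; the keyword lists and responses are invented for illustration, and real systems would use learned models rather than keyword matching:

```python
# Sketch: infer a coarse emotional state from keywords in a reply to
# "How do you feel today?" and select a canned empathetic response.
# Keywords and responses are illustrative only.

cues = {
    "sad": ["sad", "down", "crying", "hurt"],
    "happy": ["happy", "great", "good", "excited"],
}

responses = {
    "sad": "I'm sorry to hear that. Would you like to talk about it?",
    "happy": "That's wonderful! What made your day good?",
    "unknown": "Thank you for sharing. Tell me more about how you feel.",
}

def respond(user_reply: str) -> str:
    reply = user_reply.lower()
    for emotion, keywords in cues.items():
        if any(word in reply for word in keywords):
            return responses[emotion]
    return responses["unknown"]

print(respond("I feel a bit down today"))
```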


Cognitive processes that elicit emotion can then be connected to rational associations based on rules, rewards, and punishment.
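
In machine learning, the closest analogue is reinforcement learning, where behaviour is shaped by rewards and punishments (negative rewards). A minimal sketch of a value update of that kind, using an invented two-action scenario:

```python
import random

# Sketch: reward-and-punishment learning for two actions ("share", "grab").
# The agent keeps a running value estimate for each action and nudges it
# toward the rewards it receives. Rewards here are invented for illustration:
# sharing earns +1, grabbing is punished with -1.

values = {"share": 0.0, "grab": 0.0}
rewards = {"share": 1.0, "grab": -1.0}
learning_rate = 0.1

for _ in range(100):
    # Mostly pick the currently best-valued action, sometimes explore.
    if random.random() < 0.1:
        action = random.choice(list(values))
    else:
        action = max(values, key=values.get)
    reward = rewards[action]
    # Nudge the value estimate toward the observed reward.
    values[action] += learning_rate * (reward - values[action])

print(values)  # "share" converges toward +1; "grab" drifts negative when explored
```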


The conscious mind uses a process of asking what is true, what is real, and how it should respond. It connects with notions of what is right in society (rules), embodied in familial, social, cultural, and historical interpretations of expectation, duty, obligation, law, responsibility, and virtue.


Children learn about rules, rewards, and punishment through socially engaged, interactive learning and through the analysis of direct experience and instruction. Interactive and experiential learning enables the person to feel the emotions associated with selfishness, sharing, being kind, telling the truth, and not being a bully, and to understand the meaning of sympathy, empathy, and compassion. In the early years (before the age of seven), interaction with other humans recreates patterns of rules, rewards, and punishment in young minds.


Through human development activities, technologies are being created according to human rules, rewards, and punishments. With advances in software and programming, machines will be able to interact with humans and learn to ‘think about’ the rules, rewards, and punishments associated with their being.


Our first step must be to open ourselves to the possibility of alleviating human suffering through human unity.

