A demonstration of the VeinViewer - a device which allows you to see through the skin. The VeinViewer uses near-infrared light to detect vessels and blood up to 10mm beneath the surface, and projects a picture onto the skin to reveal vessel structure and blood flow in real time.
Clip taken from the 2013 Royal Institution CHRISTMAS LECTURES: Life Fantastic Lecture 1 - Where do I come from?
Consciousness in AI is a topic debated not only by computer and cognitive scientists but also by philosophers. Philosophers such as John Searle and Hubert Dreyfus have argued against the idea that a computer can gain consciousness; Searle's Chinese Room argument, for example, was proposed against the idea of strong AI. But there are also philosophers, like Daniel Dennett and Douglas Hofstadter, who have argued that computers can become conscious.
Although there are debates about how to create a conscious machine, in this article I choose to look at the creation of machine consciousness in another way. Do we have to design an AI's architecture for consciousness from the beginning to make a conscious AI? Or will an AI be able to gain consciousness on its own? Could consciousness emerge from an AI's architecture once it gains sufficient complexity through evolution or self-modification, without human interference?
Consciousness without Human Design
Although consciousness is an important quality, defining it clearly is a difficult task. But we can roughly define it with two main components: awareness (phenomenal awareness) and agency. Awareness is the ability to perceive the external world and also to feel or sense the contents of one's own mind. Agency is control over the external world and also control over oneself or one's mental states, meaning control of both behavioral aspects (moving hands, feet, and other external organs) and mental aspects. We must also be aware of that control for it to count as conscious: we should know or feel that we have the control, or that we are the ones acting. Actions we are not aware of, like the beating of the heart, breathing, or things we do without thinking (for example, walking or driving while concentrating on something else), aren't taken as conscious actions. Putting all of this together, we can define consciousness (or at least that is the definition I'm using for this article) as awareness and control over external objects together with awareness of one's own mental content. Another way of putting it is having a sense of selfhood.
According to the above definition of consciousness, we can see that the concept of self is linked with consciousness. So, what is the self? The self can be defined as the representation of one's identity, or the subject of experience. In other words, the self is the part that receives experiences, the part that has the awareness. The self is an integral part of human motivation, cognition, affect, and social identity.
The concept of self may not be something that we are born with. According to the psychoanalyst Sigmund Freud, the part of the mind which creates the self develops later, during the psychological development of the child. In the beginning, a child has only the id, a set of desires which the child cannot control and which only seeks pleasure (the pleasure principle). Later in the developmental process, a part of the id is transformed into the ego, and this ego creates the concept of self in the child. The question then becomes: can an AI be developed to a stage where it can also create something like the ego of the human mind? If the AI has a structure with the necessary similarities to a human mind, or an artificial brain similar to the human brain and nervous system, then it may be able to undergo a process which creates some sort of ego similar to the human ego. In humans, this ego is created through the interactions a child has with the external world. So perhaps, in the same way, the influences an AI faces could trigger the creation of an ego in the AI.
According to the theory of Jacques Lacan, the creation of a child's self happens in what he called the mirror stage. In this stage the child (at 6 to 18 months of age) sees an external image of his or her body (through a mirror, or as represented to the child by the mother or primary caregiver) and identifies it as a totality. In other words, the child realizes that he or she is not an extension of the world but a separate entity from the rest of it, and the concept of self develops through this process. So, can an AI go through this kind of stage and develop a self? Regardless of whether the AI's structure is similar to a human mind, realizing for the first time that it is a separate individual would be a new and revolutionary experience for the AI (provided the AI is sophisticated enough to process that kind of realization in a proper way). Such an experience may be able to change the AI in a way that gives it an idea of self. But if this stage of the AI is to be similar to the mirror stage, then the AI must also have a way of seeing its own reflection. If the AI has a body (a robot, maybe) and doesn't extend beyond that body, this won't be a problem. But if the AI can be copied onto new hardware or extend itself through a network, then defining its boundaries becomes difficult, and seeing itself as something unfragmented with clear boundaries will be a bit tricky. If the AI's architecture allows a different way of defining boundaries and seeing itself as an individual, though, this could still work.
When we consider other animals, we can see that an animal must have a certain complexity to have self-awareness (or consciousness). Methods like the red-spot (mirror) test have shown that some species, such as certain apes and dolphins, display self-awareness, while other animals do not. So we can assume that an AI must also have an architecture of sufficient complexity before it can develop consciousness. At some point in the process of evolution, the AI must achieve that necessary complexity in order to become conscious. But if the evolution of AI is similar to the evolutionary process in Darwinian theory, then the AIs which finally achieve consciousness won't be the ones the process began with, because each new generation of AIs is built by selecting the best architectures of the old generation, merging them, and mutating them. For this merging and mutating process, the AIs may need human assistance.
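The select-merge-mutate cycle described above is essentially a genetic algorithm. As a minimal sketch (not a real architecture search), here is a toy version in Python; the `fitness` function is a hypothetical stand-in for "architecture quality," and all names are illustrative assumptions, not anything from the article:

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

def fitness(genome):
    # Toy stand-in for "architecture quality": reward genomes with more 1s.
    return sum(genome)

def crossover(a, b):
    # "Merge" two parent architectures at a random cut point.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):
    # Randomly flip bits, analogous to small architectural changes.
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=30, genome_len=20, generations=50):
    population = [[random.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Select the best half of the current generation as parents.
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]
        # Build the next generation by merging and mutating the best.
        population = [mutate(crossover(random.choice(parents),
                                       random.choice(parents)))
                      for _ in range(pop_size)]
    return max(population, key=fitness)

best = evolve()
print(fitness(best))
```

The point of the sketch is the loop structure: the individuals at the end are not the ones the process began with, and a human (here, the programmer) supplied the selection, merging, and mutation machinery, which mirrors the article's point about needed human assistance.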
But a single AI can also undergo a sort of evolutionary process of its own: self-improvement, or more precisely, recursive self-improvement. Recursive self-improvement is the ability of an AI to rewrite its own software or add parts to its structure or architecture (maybe hardware-wise too). This process could also let the AI reach the necessary complexity at some point.
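As a toy illustration of the idea (nothing like a real self-modifying AI), here is a hill climber that also adjusts its own step size, so it improves the very procedure it uses to improve itself; the `objective` function and every name here are hypothetical:

```python
import random

def objective(x):
    # Hypothetical task the system is getting better at:
    # maximize -(x - 3)^2, whose optimum is at x = 3.
    return -(x - 3.0) ** 2

def self_improving_search(steps=2000, seed=0):
    rng = random.Random(seed)
    x = 0.0            # the system's current "solution"
    step_size = 1.0    # part of the system's own improvement procedure
    for _ in range(steps):
        # Ordinary improvement: try a perturbed solution.
        candidate = x + rng.uniform(-step_size, step_size)
        if objective(candidate) > objective(x):
            x = candidate
        # Recursive part: also try a modified step size, and adopt it
        # when a probe drawn with it improves the solution.
        trial_step = step_size * rng.choice([0.5, 1.0, 2.0])
        probe = x + rng.uniform(-trial_step, trial_step)
        if objective(probe) > objective(x):
            x = probe
            step_size = trial_step
    return x, step_size

x, step = self_improving_search()
print(round(x, 3))  # converges near the optimum at 3.0
```

The inner trick, the search tuning its own search parameter, is the smallest version of the recursion the paragraph describes; a real recursively self-improving system would modify its own code, not just one number.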
In these ways, maybe an AI will be able to produce consciousness through self-modification, or through a stage in its own psychological development, without humans specifically designing it to be conscious from the beginning.
i luv being nonbinary bc no matter what i do its gay. i like a boy? its gay. i like a girl? soooo gay. i like another nb person?? MAXIMUM gay, i am the winner
Researchers at the Monterey Bay Aquarium Research Institute (MBARI) have observed a deep-sea octopus brooding its eggs for four and a half years, longer than any other known animal. Throughout this time, the female kept the eggs clean and guarded them from predators. This amazing feat represents an evolutionary balancing act between the benefits to the young octopuses of having plenty of time to develop within their eggs, and their mother’s ability to survive for years with little or no food.
Every few months for the last 25 years, a team of MBARI researchers led by Bruce Robison has performed surveys of deep-sea animals at a research site in the depths of Monterey Canyon that they call “Midwater 1.” In May 2007, during one of these surveys, the researchers discovered a female octopus clinging to a rocky ledge just above the floor of the canyon, about 1,400 meters (4,600 feet) below the ocean surface. The octopus, a species known as Graneledone boreopacifica, had not been in this location during their previous dive at this site in April.
Over the next four and one-half years, the researchers dove at this same site 18 times. Each time, they found the same octopus, which they could identify by her distinctive scars, in the same place. As the years passed, her translucent eggs grew larger and the researchers could see young octopuses developing inside. Over the same period, the female gradually lost weight and her skin became loose and pale.
The researchers never saw the female leave her eggs or eat anything. She did not even show interest in small crabs and shrimp that crawled or swam by, as long as they did not bother her eggs.
The last time the researchers saw the brooding octopus was in September 2011. When they returned one month later, they found that the female was gone. As the researchers wrote in a recent paper in PLOS ONE, “the rock face she had occupied held the tattered remnants of empty egg capsules.”
After counting the remnants of the egg capsules, the researchers estimated that the female octopus had been brooding about 160 eggs.
If you are reading this, thank this woman. Her name is Grace Hopper, and she is one of the most underappreciated computer scientists ever. You think Gates and Jobs were cool? THIS WOMAN WORKED ON COMPUTERS WHEN THEY TOOK UP ROOMS. She invented the first compiler, a program that translates a human-readable programming language (like Java or C++) into the machine code a processor can actually execute. Every single program you use, every OS and server, was made possible by her first compiler.
Spread the word! (Although I’ll bet there are still some dudebros out there who’ll claim she’s a “fake geek”…)
Favorite fact: She popularized the term “debugging” after her team had to remove a moth (an actual moth) that had gotten trapped in the Mark II computer at Harvard University in 1947. The habit of calling glitches “bugs” existed before, but she brought the term into popularity.
She also got the trend toward smaller computers going with her suggestion to the DoD to use many smaller units rather than one big one.
Please explain to me why I never knew about her before?
you know why
they also have a women in computer science convention named after her every year. this year’s is in phoenix, arizona, in early october, and i urge you to take the opportunity to go, if possible. my university, for example, granted scholarships for some students who applied to go, all expenses paid, and many companies and schools do the same.
And here they are:
Thermoception: Ability to sense heat and cold. Thermoceptors in the brain are used for monitoring internal body temperature.
Proprioception: The sense of where your body parts are located relative to each other.
Chronoception: Sense of the passing of time. Your body has an internal clock.
Equilibrioception: The sense that allows you to keep your balance and sense body movement in terms of acceleration and directional changes.
Magnetoception: This is the ability to detect magnetic fields. Unlike most birds, humans do not have strong magnetoception; however, experiments have demonstrated that we do tend to have some sense of magnetic fields.
Tension Sensors: These are found in such places as your muscles and allow the brain to monitor muscle tension.
Nociception: In a word, pain. This was once thought to simply be the result of overloading other senses, such as “touch,” but it has its own unique sensory system. There are three distinct types of pain receptors: cutaneous (skin), somatic (bones and joints), and visceral (body organs).