Defining Consciousness

This topic contains 5 replies, has 2 voices, and was last updated by  PopeBeanie 3 weeks, 3 days ago.

Viewing 6 posts - 1 through 6 (of 6 total)


    Most non-scientists with an opinion on what consciousness is don’t seem to agree on definitions, and scientists are still working out definitions even as they pursue empirical research. I just thought I’d keep a topic open for suggestions on agreeable terminology, and the latest definitions.

    Starting with a very interesting new angle on consciousness. (I feel it’s related to the already popular topic of “embedded consciousness”, namely how a brain is “embedded” in a body from birth and takes decades to mature.)

    Anil Seth TED Talk



    Here’s Oliver Sacks, also focusing on hallucinations, but more specifically in patients experiencing them in the classic “I’m seeing or hearing things that aren’t there” sense. The link I’m providing skips past about half the video for people who want to just jump into the denser content, but you can click at the beginning of the time bar if you want to see the full 18 minutes.


    Posting here rather than repeating like a broken record in Unseen’s topic:

    We already know that decisions happen before we are aware that we made them […]

    It still feels like I made the decision, whether or not it feels like it took time to happen. No definition of “experience” or “I” or “consciousness” or “sub-consciousness” can reasonably contradict how the brain arrived at a decision–or at ambivalence, for that matter. Any non-“I” explanation, as you say, would necessitate explaining a duality between “I” and the brain, which we know cannot exist without each other.

    I remember learning to ride a bike, but now it happens on its own autopilot. This is an example of a conscious experience becoming unconscious, along with other experiences like the ability to avoid potholes. Or when walking, learning where we can and can’t place each foot, on every step. Much of the unconscious part of me used to feel conscious. Once again, I must insist that we understand the nature of this, and consider how far back into childhood, and possibly even into neo-natal experiences and auto-pilot circuit building, we must look to have any kind of meaningful understanding of the really “big picture” of consciousness.

    Consciousness and its underlying sub-consciousness (and whatever other chemo-psychic processes that happen) throughout an entire lifetime are moving targets, practically by definition. You have to consider the entire possible spectrum of consciousness before lumping it all into just one “why” question begging for just one answer.

    And again, no one else here has even considered the effects on each individual’s consciousness of the culture we grow up in, which (imo) can most definitely be discussed as having a high degree of gratuitiveness that goes way above and beyond where mother nature’s purely animal/genetic form of evolution left us. Could a caveman even have known such questions could exist? No; the questions themselves could only be invented (much less pondered) after culture sufficiently matured.

    So “explaining the gratuitive nature of human consciousness” would necessarily include explanations for the existence of art, crafts, industry, music, religion, even philosophy itself. Those cultural aspects of conscious experience require separate consideration from how animals in general might experience their myriad, varying spectrums of consciousness and components of sub-consciousness.



    If I created a very smart machine that turned out paintings, and it had a very human interface (Japanese automatons are becoming creepily believable in terms of their “social stimulus value”), so if one of these Turing-humans gave you art and it was good enough to be bought and sold on the fine art market, would that necessarily imply a consciousness generating it? And why (whatever your answer is)?

    Remember, my concern is whether this being is having experiences, because I think that is what consciousness is. And notice that this definition even covers things like dreams and optical illusions.

    Can androids dream of electric sheep?



    […] so if one of these Turing-humans gave you art and it was good enough to be bought and sold on the fine art market, would that necessarily imply a consciousness generating it? And why (whatever your answer is)?

    Remember, my concern is whether this being is having experiences, because I think that is what consciousness is […]

    [Pardners, I’m calling this here paragraph somewhat of a repeat of earlier remarks, a temporary zooming out from our primary focus on Unseen’s questions, and maybe a meta-note. I’m thinking about how to think and how to describe thinking. But most significantly, I think I know a lot about consciousness, especially my own, and yet a lesser or greater understanding of it still arrives, practically every day.]

    Still zoomed out wrt your questions, but focusing on the topic “Defining Consciousness”: Today’s revelation (for me) is a deeper confirmation of my feeling that defining consciousness is much easier than defining experience. I’ll take the liberty of positing that there is no hard problem of consciousness, but there is a hard problem of experience, also often referred to as qualia.

    Per Wikipedia (which looks reasonable to me):

    In philosophy and certain models of psychology, qualia (/ˈkwɑːliə/ or /ˈkweɪliə/; singular form: quale) are defined to be individual instances of subjective, conscious experience.

    Focusing on qualia can become fruitless in light of how, as a word, it’s abused almost as much as the word quantum, but I don’t know of a more useful word atm. I will repeat that a living being’s consciousness itself can vary over a wide range in the depth and quantity of qualia: good feelings vs. bad, sensed via the senses or otherwise imagined, and blah blah I’ve said a million times already. Consciousness is NOT an all-or-nothing possession, but comprises various or few modes of various or few experiences. (Or qualia.)

    Now my attempt to answer your questions, but with our granular emphasis on whether qualia/experiences are the things that a being or machine can or can’t have. Answer upon quick reflection: as an armchair scientist, I don’t fricken even know if there can ever be an empirically verifiable answer! So next, on to philosophical suppositions… (where, I admit, you might be able to enlighten us more?!).

    There is a relevant “rights” aspect to this discussion, which is the question of whether any experiences of any being or machine should be protected by law. E.g., where can we draw the line on what constitutes unlawful torture of another being or machine? I’m seriously considering that, if and when humans codify such lines into law, a priori assumption number one is that machines have no qualia, experiences, consciousness, or rights! This is the current default in law (machines are not even mentioned), and it is my currently preferred position. I’m open to future re-considerations, for example, in how to approach the inevitable human-machine hybrids.

    States vary on what protections humans have a right to, most notably before birth and near the end of life. How or when can machines be presumed subject to the same biological and mental test criteria?

    And the gray lines may never disappear. Each state defines where those gray lines are and who has the authority to arbitrate their enforcement. One reason (I feel) this is relevant to your question is that WE humans are currently the arbiters. We are still developing our definitions and suppositions, even if we’ll never be able to arrive at absolute, empirical certainty of where the black and white end and the gray begins.

    I cannot yet decide on whether your question can ever have a final, ethically valid or empirically based answer. It may eternally comprise a mix of human opinions, varying across different domains of authority.



    Can androids dream of electric sheep?

    @unseen, I apologize for my first, rambling response to your post, and/so following is more to the point.

    That is a really important question, and since we don’t know if or how we’ll ever be able to reproduce or create consciousness artificially, I feel strongly that experimentally trying to do so is unethical. If we cannot know what actually goes on inside an artificial brain (or vessel of thoughts and/or experiences), how could we possibly know whether we were creating a consciousness that suffers while we experiment on it?

    Perhaps the most ethical path toward such experimentation is to deal with naturally evolved human brains and incrementally enhance human experiences artificially, e.g. by repairing damaged neurocircuitry, adding sensory circuits (e.g. connecting to a tiny night-vision device), adding ultrasonic hearing, speeding up recovery from PTSD, adding comm channels (like wifi?!)… the list of possible repairs and enhancements is virtually unlimited, and objective/subjective reports of conscious experiences would be (imo) much more reliable than starting from scratch with a proto-consciousness, not to mention opted into with patient consent.

    Picking up on specifics you mention like art, dreams, and optical illusions, I just don’t see any way to delve into and thoroughly understand these kinds of experiences without first 1) advancing human neuro-technology and 2) getting reports from humans who consent to relevant experiments.

