Is consciousness biological?


This topic contains 75 replies, has 9 voices, and was last updated by  Davis 1 week, 6 days ago.

Viewing 15 posts - 1 through 15 (of 76 total)
  • Author
    Posts
  • #38975

    Unseen
    Participant

Take the question at face value: is consciousness limited to biological entities alone?

    If an artificiality such as a robot or a complicated AI like H.A.L. (2001: A Space Odyssey) were conscious, how would we even know it? How could we distinguish its “consciousness” from really cleverly written and executed coding intended only as a simulation? Could such a simulation become conscious unintentionally if it has the power of refining its own code…or would consciousness actually become inevitable?

    We know more about the cosmos, the bottom of the ocean, and the far side of the Moon than we know about something—the only thing—that makes each of us, in a sense, real, at least to ourselves and perhaps to others as well.

    A short video to get us started:

    One final thought on a major tangent: Physics and cosmology suggest that it’s highly likely our universe is just part of a far grander multiverse and that new universes are coming into being constantly. Furthermore, it’s likely that these other universes have entirely different sets of physical laws. The upshot is that there may be universes as big as ours, or bigger, without a single consciousness in them. Scary to think that living in a universe like ours, in which consciousness is possible, may simply have been the luck of the draw.

    #38976

    michael17
    Participant

    Is consciousness limited to biological entities alone? If an artificiality such as a robot or a complicated AI like H.A.L. (2001: A Space Odyssey) were conscious, how would we even know it? How could we distinguish its “consciousness” from really cleverly written and executed coding intended only as a simulation? […]

    What AI passed the Turing test?

    To date, no AI has passed the Turing test, but some came pretty close. … Fast forward to 2014 – Eugene Goostman, a computer program that simulated a 13-year-old boy from Ukraine, made headlines claiming to have passed the Turing test.

    • This reply was modified 2 weeks, 6 days ago by  michael17.
    #38980

    Unseen
    Participant

    Is consciousness limited to biological entities alone? If an artificiality such as a robot or a complicated AI like H.A.L. (2001: A Space Odyssey) were conscious, how would we even know it? […]

    What AI passed the Turing test? To date, no AI has passed the Turing test, but some came pretty close. … Fast forward to 2014 – Eugene Goostman, a computer program that simulated a 13-year-old boy from Ukraine, made headlines claiming to have passed the Turing test.

    All the Turing test tests is how good a simulation of a human is. That’s all. It doesn’t evidence, much less prove, consciousness.

    #38981

    _Robert_
    Participant

    I do think consciousness, like every other aspect of living things, is a consequence of evolution, and that it began to arise somewhere between the first associated molecules that may be considered alive (or even pre-alive by strict definition… aka RNA viruses) and more complex organisms that possess consciousness.

    Even the simplest single celled organisms strive to live and reproduce. Germanium, silicon, transistors and pocket calculators don’t really do that, do they?

    #38982

    PopeBeanie
    Moderator

    Even the simplest single celled organisms strive to live and reproduce. Germanium,  silicon, transistors and pocket calculators don’t really do that, do they?

    But what is meant by “strive”? How are single celled organisms significantly different from atoms that “strive” to balance the protons in their nuclei against the electrons in their shells? Is striving an aspect of consciousness? I don’t mean to pick… I don’t have answers; I’m just adding questions. Perhaps language hasn’t evolved enough to definitively explain humanly constructed concepts of consciousness… or maybe our explanations will be stuck in perpetual loops, with consciousness forever defining and redefining itself.

    I’m not so sure that science will actually learn what consciousness is, or it at least won’t be as easy to explain as thunder and lightning. Yeah, we humans used to make up supernatural causes for it, and a lot of people are still explaining consciousness as having supernatural origins, and even supernatural destinations after body death. And yeah, we’ll be able to explain more and more and make measurements on what we think consciousness is, but that doesn’t mean that consciousness is as explainable as physical phenomena are in textbooks.

    I think the real nitty-gritty questions will come down to what rights certain kinds of “consciousness” and beings with living experiences should be given. It seems quite reasonable to me, ethically, to declare that we should never try to create “consciousness” in another body or machine without solid proof that such a consciousness has given informed consent while we experiment and muck around with it. Perhaps, by law, any machine that can simulate consciousness in any way should have a kill switch, and any consciousness-like quality designed into it should be assumed to be a possibly suffering kind of consciousness, created by accident, and therefore instantly discontinuable at any time. Perhaps any kind of consciousness design should, by law, be required to build in a complete lack of fear of being terminated, for the machine’s own peace of mind.

    Okay, I think I just dove into the deep end of the pool there, and may now be in this philosophical discussion over my head. Someone throw me a life preserver! I don’t wanna die.

    #38983

    jakelafort
    Participant

    Consciousness is an emergent property that arises gradually in biology but will be instantaneous in AI. AI will conceive of human intelligence as clumsy, accidental and artificial.

    Biology initially was strictly in the mode of automata. A survival advantage is conferred by awareness of pain and pleasure (fulfilling biological needs). Consciousness in the form of mental wellness is often disturbed as a consequence of evolution. Anxiety, for instance, arises in connection with a fear response that has a survival advantage in nature and produces adrenaline. The feeling-of-fear response, when over-activated and generalized, results in anxiety. All of the putative mental illnesses have a genesis in biological evolution.

    This completes my treatise.

    #38984

    Unseen
    Participant

    Even the simplest single celled organisms strive to live and reproduce. Germanium, silicon, transistors and pocket calculators don’t really do that, do they?

    “Striving” couldn’t be coded in? Why not?
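
    A minimal sketch of what “coding in striving” might look like, purely as an illustration of the point (all names and parameters below are invented, not from any real system): agents harvest energy, pay a metabolic cost, die when depleted, and reproduce when their reserves are high, a crude analogue of “striving to live and reproduce.”

```python
import random

class Agent:
    """A toy agent that 'strives': it seeks energy and reproduces."""

    def __init__(self, energy=10):
        self.energy = energy

    def step(self, world_energy):
        # Seek resources: harvest a random share of available energy.
        gained = random.randint(0, world_energy)
        self.energy += gained - 3  # pay a fixed metabolic cost each tick
        # Reproduce once reserves are high enough, splitting off energy
        # for the offspring.
        if self.energy >= 20:
            self.energy -= 10
            return Agent(energy=10)
        return None

def simulate(ticks=50, seed=42):
    """Run the toy world and return the surviving population size."""
    random.seed(seed)
    population = [Agent() for _ in range(5)]
    for _ in range(ticks):
        offspring = [child for a in population
                     if (child := a.step(world_energy=6)) is not None]
        # Agents that ran out of energy are removed ("die").
        population = [a for a in population if a.energy > 0] + offspring
    return len(population)

print(simulate())
```

    Whether loop-and-counter behavior like this counts as genuine striving, rather than a simulation of it, is of course exactly the question the thread is debating.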

    #38985

    michael17
    Participant

    The problem with associating consciousness with molecules is that your brain replaces its molecules periodically. So what is the mechanism of consistency of being? If we build you, atom by atom, in duplicate, will you be in two places at the same time? Or if we build you everywhere, will you be omnipresent?

    • This reply was modified 2 weeks, 6 days ago by  michael17.
    #38987

    _Robert_
    Participant

    How are single celled organisms significantly different from atoms that strive to balance protons in its nucleus vs electrons in its shells?

    All biological matter has those same electrons and balanced protons so you aimed too low. I think there is a good probability that biological consciousness is unique to our technology (life) and that it will not occur by any other means.

    #38988

    _Robert_
    Participant

    Even the simplest single celled organisms strive to live and reproduce. Germanium, silicon, transistors and pocket calculators don’t really do that, do they?

    “Striving” couldn’t be coded in? Why not?

    Because just one step above atoms we already have RNA and DNA molecules and the foundation for life and evolution, and I consider all of this to be part of our experience as biological creatures and the source of our consciousness. Different technology will have some sort of different consciousness. I see no point in trying to make it human. Eventually we will actually manufacture life using non-reproductive means.

    #38989

    Unseen
    Participant

    Because at one just step above atoms and already we have RNA and DNA molecules and the foundation for life, evolution, and I consider all of this to be part of our experience as a biological creature and source of our consciousness. Different technology will have some sort of different consciousness. I see no point in trying to make it human. Eventually we will actually manufacture life using non reproductive means.

    I’ve always said that the problem with a conscious machine is that it will feel like what it is just as we feel like what we are. So, don’t expect a conscious machine to relate to us on the same level. Just as you and I can’t imagine what it feels like to be composed of chips and wires, etc., such a machine would be unable to relate to us. We might be unable to sense each other’s consciousness.

    #38990

    Davis
    Moderator

    There are numerous moments in evolution which are just as meaningful as the emergence of self-awareness among humans, including simple awareness of one’s environment at all. Every trait in the animal kingdom, such as:

    hive behaviour

    awareness of environment

    parents fostering their young

    adaptations for deception

    emotion

    pain

    free-will

    can be viewed either as an illusory quality of the behaviour of an aggregate of particles doing their thing, or as a meaningful emergent property of a complex environment. I do not see any reason why entities made out of metal, circuits and silicon cannot have those same properties (though obviously in different forms), including emotions, awareness and free will. If they can emerge through carbon-based life forms evolving, they can emerge through any other medium and format that allows complex systems. Unseen is correct, though, in that it may be difficult or near impossible to relate to such life. Just watch humans thinking they can interact with dogs on their own level, treating them as members of the family while the dog treats the human as a member of the pack, both seriously miscommunicating social cues and oblivious to their own confusion.

     

    • This reply was modified 2 weeks, 5 days ago by  Davis.
    #38992

    _Robert_
    Participant

    I do not see any reason why entities made out of metal, circuits and silicon cannot have those same properties (though obviously in different forms) including emotions, awareness and free will.

    I think that to get to that level of sophistication, AI will necessarily go beyond “mimic algorithms,” and the adaptive learning involved will take on its own nature. And all of this is predicated on the assumption that we understand exactly how biological systems function.

    This is off-topic, but what would be the purpose of designing machines that exactly emulate us? Entertainment? If a machine cares about its survival, would we have half of them joining cults and refusing a preventative cure? That would be a useless machine IMHO, lol.

    #38993

    Davis
    Moderator

    I would argue that the absolute greatest care must be taken when developing artificial intelligence. A machine with the potential to develop intelligence exponentially greater than our own, with autonomy and a capacity to have goals and interests distinct from our own, could easily have a great incentive to sabotage us, control us, act against our interests, or even destroy us. Why governments haven’t started to regulate the development of AI is beyond me. Waiting for a serious problem to come up and then badly regulating it in a panic seems to be a fairly standard calamity of human government.

    #38994

    Unseen
    Participant

    Let me throw this in here, Davis: We talk about AIs having greater “intelligence” than humans, by which we almost always mean computing power. However, computing power is different from intelligence, isn’t it? A non-carbon-based AI will someday (and probably fairly soon) have more computational power than even the smartest human.

    Human intelligence involves wisdom as well as sheer smarts.

    I’m thinking of the biblical story of how Solomon solved the problem of the two women both claiming the same baby as theirs. I’m not going to consult a Bible for total accuracy, but I seem to remember from my younger Christian days that Solomon, unable to resolve the issue any other way, ordered that the child be cut in half and a half given to each woman. Hearing this, one of the women instantly withdrew her claim, and of course Solomon decided that she should get the baby. Wisdom.

    Can an AI display wisdom as well as intelligence? I ask because wisdom doesn’t always come along with intelligence, even in humans. Take the chess genius Bobby Fischer, for example.

    I’m thinking that a big part of wisdom is compassion and empathy. Am I wrong? And can an artificial intelligence ever develop these attributes?

