Activity

  • PopeBeanie posted an update 3 months ago

    If/when we tried to create a “self-aware” artificial consciousness (e.g. in experimental AI), could there ever exist an ethical analogy to “informed consent” in such an experimental creation?

    • I have a feeling that if and when it happens, it will not be as a result of trying.

      There really is nothing ethical we can do in terms of consent. The best we could manage is a fiction like constructive notice, a legal concept in which notice is imputed where it is not actual.

      Or are you making a hypothetical in which awareness is achieved and then the issue of informed consent arises? If so, the degree of “informed” might be much more acute than it is in humans. I am thinking first of superintelligence. If, on the other hand, the level of consciousness and knowledge is on par with a Trump cultist’s, then it is a grey area!

        • I’m actually not sure why artificial consciousness, created by humans, should be experimentally created before we even understand consciousness well. It would be unethical to experiment on another human’s (and an animal’s, imo) consciousness without consent, unless there are extenuating circumstances where experts (and family) feel it would be best for the patient. How would “true consciousness” (if we could even define it) require any less consideration of ethical guidelines?

          I think this topic will probably become moot or actionably irrelevant if/when the powers that be just create AI and consciousness in whatever way they see as profitable or politically expedient. But I’m kind of surprised that, before we lose control of the technology, we aren’t talking much about the ethics of designing self-aware and self-perpetuating AI and artificial consciousness. Evolutionary fitness according to Mother Nature will no longer be in charge, at least biologically, while human whims and selfishness might build the most dangerous and destructive pathways for artificial evolution.

        • Pope, even if we come to understand our consciousness, it may not be probative for the consciousness of AI. Is it just one thing?

          In terms of “unethical,” I think ethics is primarily relevant for individuals and, to a much lesser degree, for institutions, corporations, and civilization in general. Think about how prevalent torture is. Think about the experiments conducted by the CIA using LSD and other mind benders: loads of experiments, many without consent. (If I started to think about it, I could go on for hours.) We as a species do not care about the suffering of animals or humans. There is no way much thought, beyond token gestures, will be invested in doing the right thing by an AI Frankenstein.

            • We as a species vary culturally and over time. Of course the mixture includes those who don’t care about others’ suffering, but knowing that just makes me (and others) care more. But yeah, it could all be for naught in the end.