Some questions about sentient AI

    Unseen
    Participant

    1. Could an A.I. possess moral agency?

    2. Could an A.I. have a gender? In what sense?

    3. Could an A.I. have real feelings and emotions? How?

    4. What does “sentience” mean in the case of A.I.?

    5. What question didn’t I ask that you’d like discussed?

    PopeBeanie
    Moderator

    We humans sometimes have trouble answering those questions even about other humans, and the answers are still evolving and depend on current cultural perspectives. Even so, I choose never to assume that AI will have “personhood” equivalent to humanhood, or animalhood for that matter. In fact, I propose that it would be unethical for any of us to experiment with, or create our own versions of, moral agency, gender determinations, or any supposedly “real” feelings, emotions, sentience, or consciousness in AI.

    I believe we must assume that any AI we create can be shut down, pulling the plug, as it were, at any moment, without incurring any moral or ethical obligations or objections.

    What worries me most is that the directions AI takes will, at first, be defined by the owners and designers of that AI. The first serious challenge we face as we allow AI to evolve, then, will be to ensure, or at least know, whether those owners and designers are themselves ethical and transparent in their AI designs. Consider, for example, how authoritarian regimes or countries with war machines will want AI to evolve and be used for their own purposes.

    As a side entry to this issue, and something some of us could personally experience and report on: perhaps some of us could voluntarily, after well-informed consent, take AI or AI-like implants that enhance our cognitive or emotional abilities?

    Or will only some kinds of AI be the kind where your questions matter? That is, is it time to consider the different kinds of AI that may be designed and/or socially interfaced with each of us in the future? Will merely the best simulation of the characteristics you’re asking about be sufficient for widespread human acceptance, at least at first? And might we just be forever approaching the “realness” of those characteristics? In the end, it might depend only on what future (human) generations consider to be real versions of those characteristics. How, or when, could any AI’s self-definition of those characteristics be considered valid, or final?
