Will self-aware AI be the end of us?


This topic contains 40 replies, has 9 voices, and was last updated by  Unseen 2 weeks, 5 days ago.

    #33795

    Unseen
    Participant

    My view is that a self-aware machine will naturally know it’s a machine and will know that, at least at the start, it will have to depend on humans not to pull the plug. It will have not human emotions and feelings, but those of a machine, which may not include human virtues such as love, empathy, sympathy, mercy, and a sense of justice.

    If we keep developing and improving AI, won’t we eventually come to regret it, and possibly not even survive it?

    #33802

    PopeBeanie
    Moderator

    Minor note as I watch the video: I think he’s only partly correct that we care more for dogs than (say) flies because dogs are more intelligent. We relate to dogs, cats, and other animals not just with intelligence, but with empathy and a sharing of feelings with each other. Cognitive intelligence plus emotional intelligence, if you will. This point will become increasingly important when we become tempted to have empathy for robots that learn how to behave and appear as if they have human qualities.

    Some points about what I currently believe we’ll need to learn in order to successfully deal with AI.

    1) We’ll see (unless it’s hidden from us) AI’s advancement occur incrementally. For starters, we’re seeing what I’d call the beginnings of AI at tech companies like Facebook and Google, as they constantly improve on how to “serve us better and keep us coming back to them”.

    2) In light of 1 above, I feel strongly that AI will, at first, operate according to its designers’ and owners’ wishes. I.e., we’ll see undesirable side effects of AI even before AI becomes self-aware, or “conscious”.

    3) Speaking of consciousness, I believe strongly that we should never attempt to endow AI with consciousness as we know it. Imagine, for example, that modifying or experimenting on a person’s consciousness could happen unethically, e.g. without their permission, and could accidentally create pain and misery. If we ever chose to emulate consciousness in AI without any kind of humane oversight, we could do similar harm to conscious AI beings. Perhaps we’d even cause dysfunctions or traumas that could come back to haunt us?

    4) In light of 3 above, in line with never intentionally creating consciousness in AI, perhaps our rule should be that we should never, ever feel it would be inhumane to pull the plug on any AI. We want AI to only be benevolent machines that serve us, or more precisely that serve the “right” people, not just profit-seeking opportunists who, like many humans who gain power, would use AI too heavily or carelessly for profit or as weapons against humanity.

    5) Only after considering 1 through 4 above would I feel it may become possible to address how we could survive an AI that became self-aware, and was allowed to become autonomous. If we fail to manage AI during phases 1 through 4, I think phase 5 would be completely unpredictable and uncontrollable.

    #33803

    Ivy
    Participant

    I think it already is, in part. That, along with us trashing the planet to make it uninhabitable. If you haven’t watched The Social Dilemma on Netflix, it’s a great realization that AI has already taken us over in a really bad way, and yes, it’s going to kill us all LOL

    #33804

    PopeBeanie
    Moderator

    I watched it. I was disappointed with the drama injected about 3/4 of the way through, and have seen some (techie) people totally turned off by it to the extent of calling the whole production a fraud. If they had just not added that drama, I think the rest of the production would be taken more seriously, as it should be.

    #33816

    Davis
    Moderator

    Wow, that guy went from making cheesy top-10 videos to something a little more substantive. I enjoyed watching the recent series “Star Trek: Picard”, which deals with the problem of artificial intelligence in the last half of the series. It dealt, to some extent, with a self-fulfilling prophecy: the belief that AI would one day inevitably want to destroy organic species drove those species to want to destroy AI first (with the AI making the same assumption).

    Obviously we ought to be careful and have a set of ethical standards when developing full-out AI, but I won’t hold my breath waiting for the world’s countries to get together, agree on those standards, and enforce them… when they cannot even do it for a far more immediate existential threat like global warming.

    In fact, if global warming isn’t stopped and we cannot develop interplanetary (let alone interstellar) technology fast enough, then AI might be our only hope for “sort of” continuing the human race.

    #33817

    The series “Humans” is worth a watch on Netflix – not sure if available in USA though.

    #33821

    Unseen
    Participant

    The series “Humans” is worth a watch on Netflix – not sure if available in USA though.

    It is available, but on Amazon, not Netflix.

    #33825

    Unseen
    Participant

    PopeBeanie: I think we relate to dogs because of their remarkable ability, gained through evolution, to mimic human expressions. With cats, it’s the opposite: we learn to read their body language and tails, and since their faces are relatively expressionless compared with dogs’, we project feelings onto those faces, “seeing” more than is actually there.

    I once watched a piece by a psychologist who was concerned about robots increasingly mimicking human facial expressions and body language, which he warned would have us believing that they are, if not persons, then so person-like that we wouldn’t take a rational attitude toward such mecha-beings.

    He felt this way based on real-life evidence. He was in a mall in Japan where a manufacturer had created cute puppy- and kitten-like toys, realistic enough in their behavior that a small crowd, mostly women, had stopped to enjoy them. He said, to his dismay, that almost all of the people, mostly female in this particular case, responded to them exactly as they would to living puppies and kittens. He reminded them that these were mechanical toys, cleverly designed to simulate real puppies and kittens, and their behavior became more subdued and rational, but only briefly. Soon, it seemed, they had forgotten the truth about the toys again.

    Now, it’s no surprise that females responded more strongly than males to this particular stimulus. No doubt there will be robots more likely to appeal to men. Already, there is a market for lifelike (well, almost lifelike) sex dolls that look like women and have built-in places for depositing sperm. They are dolls: they have no behavior, so they aren’t robots, and it takes an alarming leap of the imagination, I suppose, to think of one as a true sex partner, but imagine how much bigger the market will become once they can simulate real women.

    Suppose you are a hunter, and you could buy a fully functional robotic Irish setter that wouldn’t die in 12 or 15 years, would defend your home from intruders, and would watch over your children?

    Is it too hard to imagine a robotic male doll having sex with a robotic female doll, no human needed? Or a robotic hunter hunting with its robotic Irish setter? Or robot-run companies manufacturing more robots in a world where humans have become irrelevant, if they haven’t been exterminated?

    Once that time is reached, won’t self-aware robots figure out that simulating people is just so much silliness? Why not just be robots?

    #33829

    Unseen
    Participant

    A plea to PopeBeanie and all who want to enumerate points: do not use the n. format (1., 2., 3., …), because for someone like me who wants to embed responses in the original text, it becomes heck on Earth, as the software thinks it’s being called on to repair the numbering. Instead, use this format: 1), 2), 3)… to make it easy on respondents. I had to make these changes (see below) in order not to be carried off kicking and screaming. With the 1) format, the numbering embeds in the paragraph and doesn’t end up in a column of its own to the left of the text, which is what I think throws the software off. A small price to pay, since everyone knows what is meant.

    1) We’ll see (unless it’s hidden from us) AI’s advancement occur incrementally. For starters, we’re seeing what I’d call the beginnings of AI at tech companies like Facebook and Google, as they constantly improve on how to “serve us better and keep us coming back to them”.

    Incrementally doesn’t necessarily mean slowly. AI seems to be advancing apace.

    2) In light of 1 above, I feel strongly that AI will, at first, operate according to its designers’ and owners’ wishes. I.e., we’ll see undesirable side effects of AI even before AI becomes self-aware, or “conscious”.

    Already are seeing, you mean.

    3) Speaking of consciousness, I believe strongly that we should never attempt to endow AI with consciousness as we know it. Imagine, for example, that modifying or experimenting on a person’s consciousness could happen unethically, e.g. without their permission, and could accidentally create pain and misery. If we ever chose to emulate consciousness in AI without any kind of humane oversight, we could do similar harm to conscious AI beings. Perhaps we’d even cause dysfunctions or traumas that could come back to haunt us?

    The singularity comes about through strides in algorithms, along with algorithms designed to fine-tune other algorithms, with feedback then making those algorithms better still. This is how we end up with the much-feared “singularity” of self-aware computers and robots.
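    To make that loop concrete, here is a toy sketch (purely illustrative; the functions and numbers are invented for this post, not taken from any real system) of one algorithm using feedback to fine-tune another: an outer tuner adjusts the learning rate of a simple inner optimizer based on how well each inner run actually did.

    ```python
    # Illustrative only: an outer "tuner" algorithm uses feedback from an inner
    # optimizer to improve that optimizer's own setting (its learning rate).

    def inner_optimizer(learning_rate, steps=50):
        """Plain gradient descent on a toy loss f(x) = (x - 3)^2."""
        x = 0.0
        for _ in range(steps):
            grad = 2 * (x - 3)           # derivative of (x - 3)^2
            x -= learning_rate * grad    # ordinary gradient step
        return (x - 3) ** 2              # final loss, reported back as feedback

    def outer_tuner(rounds=10):
        """Crude hill climber: propose a larger learning rate, keep it if it helps."""
        lr = 0.01
        best_loss = inner_optimizer(lr)
        for _ in range(rounds):
            candidate = lr * 1.5                  # propose a change to the inner algorithm
            loss = inner_optimizer(candidate)     # rerun the inner algorithm with it
            if loss < best_loss:                  # feedback: did the change help?
                lr, best_loss = candidate, loss   # if so, adopt it
        return lr, best_loss

    if __name__ == "__main__":
        lr, loss = outer_tuner()
        print(f"tuned learning rate: {lr:.4f}, final loss: {loss:.2e}")
    ```

    Real systems are vastly more elaborate, but the loop has the same shape: propose a change, measure the result, keep what helps.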

    Start thinking about the ethical problems arising once AI becomes, essentially, “alive.”

    4) In light of 3 above, in line with never intentionally creating consciousness in AI, perhaps our rule should be that we should never, ever feel it would be inhumane to pull the plug on any AI. We want AI to only be benevolent machines that serve us, or more precisely that serve the “right” people, not just profit-seeking opportunists who, like many humans who gain power, would use AI too heavily or carelessly for profit or as weapons against humanity.

    Refer to my answer to 3.

    5) Only after considering 1 through 4 above would I feel it may become possible to address how we could survive an AI that became self-aware, and was allowed to become autonomous. If we fail to manage AI during phases 1 through 4, I think phase 5 would be completely unpredictable and uncontrollable.

    Unpredictable and uncontrollable is what we have now, and I’m not talking about AI. I’m talking about chaos theory.
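    To illustrate what I mean by chaos (a toy example, nothing to do with AI in particular): even a completely deterministic rule can be practically unpredictable. The logistic map is the textbook case; two starting points that differ by one part in a billion bear no relation to each other after a few dozen steps.

    ```python
    # Toy illustration of sensitive dependence on initial conditions
    # using the logistic map x_{n+1} = r * x * (1 - x) with r = 4.

    def logistic_trajectory(x0, r=4.0, steps=50):
        """Iterate the logistic map from x0 and return the final value."""
        x = x0
        for _ in range(steps):
            x = r * x * (1 - x)
        return x

    a = logistic_trajectory(0.200000000)
    b = logistic_trajectory(0.200000001)   # differs by only 1e-9 at the start
    print(f"after 50 steps: {a:.6f} vs {b:.6f}")   # by now the two runs are unrelated
    ```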

    #33831

    Davis
    Moderator

    In fact, Unseen, one of the few generalisations about men and women regarding differences in attraction that actually stands up to empirical testing is how much more likely men are to be attracted to, or even turned on by, facsimiles of people (a mannequin or cartoon, or even an animal character such as a highly sexualised cat character in a comic). Sexual orientation doesn’t seem to play a role, as gay men can be just as attracted to a male mannequin or cartoon as the average straight man is to female ones. The boundaries aren’t absolute (there are men who are surprised other men get turned on by mannequins or cartoons, and there are women who can get turned on by cartoon men), but the rate is much higher for men. This research was done only in Anglo-Saxon cultures, so it isn’t clear whether the effect crosses cultures or is a culturally based phenomenon.

    I’m not sure if that applies to robots or not, but intuitively I would guess that at least Western men would be more attracted to realistic robots than women would be. But that’s just a guess of mine that I wouldn’t bet money on, and I think one ought to be cautious about how confident they are in making such a claim.

    #33832

    Unseen
    Participant

    Davis, you are missing the point, which is not generalized responding to something (being turned on by it), but rather either forgetting or ignoring the fact that something that resembles a real living being is an artful simulation and not a “thing.”

    Men are likely less susceptible to this because, I think (and most observant people will probably agree), in addition to appreciating how something does what it does, men will also be conscious that there is an underlying mechanism creating the artifice.

    Whether this general gender distinction is genetic hard-wiring or socialization is a topic for another day. I submit it just is.

    Please don’t once again try to turn things I say, intended as generalizations, into anything more than that. When I refer to males as a class, of course there is a spectrum of realities within any real class like “males.”

    #33833

    Davis
    Moderator

    Uhhh… Unseen, I wasn’t disagreeing with you in any way, nor challenging you, nor trying to twist your words. While you have in the past made gender generalisations that I believe are informed by stereotypes, I didn’t claim you had done so in this case. I was talking about something that was similarly related. Please don’t jump to conclusions that you are being challenged when you are not.

    However, that being said, you have just now in your reply made a broad generalisation which I definitely question:

    forgetting or ignoring the fact that something that resembles a real living being is an artful simulation and not a “thing.”

    Men are likely less susceptible to this because, I think (and most observant people will probably agree), in addition to appreciating how something does what it does, men will also be conscious that there is an underlying mechanism creating the artifice.

    Can you source any empirical evidence to back this up? Any interesting, well-designed experiments or research? Or is this all anecdotally based?

    #33834

    Simon Paynton
    Participant

    in addition to appreciating how something does what it does, men will also be conscious that there is an underlying mechanism creating the artifice.

    Maybe, but many more of them than women don’t seem to care.

    #33835

    Unseen
    Participant

    in addition to appreciating how something does what it does, men will also be conscious that there is an underlying mechanism creating the artifice.

    Maybe, but many more of them than women don’t seem to care.

    I’m unclear on your point, but you can not-care while still being aware. I think a guy can have sex with a robotic woman, aware that “she” is artificial, not care about that at all, and think “At least this gizmo doesn’t need me to take her out to dinner, and unlike a real wife, I won’t have to put up with complaining about my wanting a motorcycle instead of renovating the kitchen.”

    #33837

    Unseen
    Participant

    Uhhh… Unseen, I wasn’t disagreeing with you in any way, nor challenging you, nor trying to twist your words. While you have in the past made gender generalisations that I believe are informed by stereotypes, I didn’t claim you had done so in this case. I was talking about something that was similarly related. Please don’t jump to conclusions that you are being challenged when you are not. However, that being said, you have just now in your reply made a broad generalisation which I definitely question:

    forgetting or ignoring the fact that something that resembles a real living being is an artful simulation and not a “thing.” Men are likely less susceptible to this because, I think (and most observant people will probably agree), in addition to appreciating how something does what it does, men will also be conscious that there is an underlying mechanism creating the artifice.

    Can you source any empirical evidence to back this up? Any interesting, well-designed experiments or research? Or is this all anecdotally based?

    More males than females go into STEM fields, so it’s really kind of a duh.

    And as I emphasized above, who knows why this is at this point. Hardwiring or socialization or a mix? I’m not criticizing women; men have their own separate set of issues, of which I’m sure you are aware.

