Will self-aware AI be the end of us?

This topic contains 40 replies, has 9 voices, and was last updated by Unseen 2 weeks, 1 day ago.

Viewing 15 posts - 16 through 30 (of 41 total)
  • Author
    Posts
  • #33839

    Unseen
    Participant

    When Jeff Bezos buys Apple and Google, we’re done. LOL

    I wish he would buy Fox.

    #33848

    PopeBeanie
    Moderator

    Unseen wrote:

    A plea to PopeBeanie and all who want to enumerate points: Do not use the n. format (1., 2., 3., …) because to someone like me wanting to embed responses in the original text, it becomes heck on Earth as the software thinks it’s being called on to repair the numbering. Instead, use this format: 1), 2), 3)… to make it easy on respondents.

    Agreed, it gets messy. I didn’t expect that, and will be aware of it next time. And numberless bullet points probably cause similar difficulty.

    It was easy enough for me, just now, to pull the numbered bullets out. Mods aren’t limited to the usual editing-rights expirations. Sometimes it takes me almost an hour to edit a post until I’m satisfied with it, because I’m so bad at spontaneous wordsmithing. I’m sometimes even still editing without noticing that someone has already responded; and I often break up unexpectedly long paragraphs into smaller ones. Fortunately I don’t care much about fixing pretty minor spelling or grammatical errors.

    • This reply was modified 1 month ago by  PopeBeanie. Reason: added "It was easy enough" paragraph
    #33850

    Unseen
    Participant

    Unseen wrote:

    A plea to PopeBeanie and all who want to enumerate points: Do not use the n. format (1., 2., 3., …) because to someone like me wanting to embed responses in the original text, it becomes heck on Earth as the software thinks it’s being called on to repair the numbering. Instead, use this format: 1), 2), 3)… to make it easy on respondents.

    Agreed, it gets messy. I didn’t expect that, and will be aware of it next time. And numberless bullet points probably cause similar difficulty.

    I suspect so. Thanks.

    #33851

    Unseen
    Participant

    Of course, one of the problems with preventing self-aware (conscious) AI is the undeniable fact that we know less about consciousness than almost anything else we study. I don’t mean in terms of dynamics, how conscious beings behave. I mean WTF it is: what are the necessary and sufficient conditions of consciousness.

    We literally know far more about the birth of the universe, the Mariana Trench, and perhaps even dark matter and energy than we do about consciousness.

    That leaves the frightening prospect that we are well on our way to creating it accidentally. Is it simply a state that results once a certain level of complexity is achieved? If so, then we need to put a stop to computer complexity and accept what that means for the advancement of the human race.

    #33852

    jakelafort
    Participant

    Unseen, as to your most recent scribbles, I am in agreement. However, we humans have given no indication we are capable of retarding or halting the advancement of tech. The experts I read/listened to opine that superintelligence is within a generation, that at its genesis it will be a gazillion orders of magnitude greater than the human brain, and that there is no putting the genie back in the bottle. Further, it is my opinion that none of us can make an educated guess whether a superintelligent AI will be indifferent, baleful, or kindly disposed towards us.

    It is my guess that consciousness that arises over billions of years biologically is always impure with an unhealthy admixture of conscious and unconscious processes.

    #33853

    Davis
    Moderator

    No, Unseen. As STEM fields slowly (we’re hardly there yet) become less hostile to women, the enrolment of women in STEM university courses is overall reaching parity with men (in some fields surpassing them). Sexual harassment and discrimination against women in STEM fields is still rampant, to the point that it is no surprise women are only just now reaching parity in the field. It is not a case of “interest” in the field but, until recently, of the realistic likelihood that you would be hired and not ignored, passed over for promotions, ridiculed, or sexually harassed while working in the field. For example, at the moment the percentage of women earning their PhDs in STEM in the US is 42% and growing (likely to surpass men in the future).

    And even if it were a “duh” case that women aren’t as interested as men in STEM fields, that has nothing to do with the generalisation about gender and robots you were making. That goes far beyond “interest” in a field to a fundamental difference in the way women and men perceive and interact with artificiality. This kind of generalisation should be backed up by actual experiments, not generalisations and personal observations.

    #33856

    PopeBeanie
    Moderator

    The series “Humans” is worth a watch on Netflix – not sure if available in USA though.

    I like the series, but hope they’ll add more (“realistic”, imo) bad-guy AI, eventually. (It’s on Amazon here, as Unseen mentioned.)

    Blade Runner was ahead of its time on a similar theme, as are some of the time travel themed series/movies where people in the future fudge with past events. (I normally dislike the massive poetic license time travel plots employ, although Twelve Monkeys is one of my all time favorites.)

    #33860

    Unseen
    Participant

    No, Unseen. As STEM fields slowly (we’re hardly there yet) become less hostile to women, the enrolment of women in STEM university courses is overall reaching parity with men (in some fields surpassing them). Sexual harassment and discrimination against women in STEM fields is still rampant, to the point that it is no surprise women are only just now reaching parity in the field. It is not a case of “interest” in the field but, until recently, of the realistic likelihood that you would be hired and not ignored, passed over for promotions, ridiculed, or sexually harassed while working in the field. For example, at the moment the percentage of women earning their PhDs in STEM in the US is 42% and growing (likely to surpass men in the future).

    I would never say that women are not as smart as men, though I might maintain that intelligence may exhibit itself somewhat differently depending on gender. I believe a certain amount of hard-wiring is involved which conditions the choices men and women make and also the early directions the genders take in their teen years leading up to their choice of college majors. It may be that women find more fulfillment in softer hard sciences like zoology, botany, and marine biology than in more “hard” hard sciences like physics and chemistry fields, unless they have softer applications.

    Anyway, while it’s great and right that more women are entering STEM fields, the goal of achieving a 50/50 split is arbitrary. There’s no law of the universe dictating that it has to be 50/50 or it’s unfair.

    Whatever the proper equilibrium is, social inertia clearly means the solution is likely fairly far off in the future. Generations away.

    I agree that hostility toward women, disparagement of women, is disheartening and must be fixed. Perpetrators need to be treated as examples by being tossed out of programs, temporarily at least if not permanently. It’s intolerable. This is so that every qualified female student can be there on graduation day.

    Also, whatever prejudicial obstacles women face in the workplace that are not the result of limitations they impose on themselves need to be eliminated to whatever degree possible.

    Once those things are done, and only then, will we know whether 50/50, 45/55, 39/61 or some other ratio is a valid ratio in any particular field.

    So, see, I’m with you all the way except that I don’t know where the 50/50 notion came from. It sounds like a ratio someone comes up with when they have no idea what else to say.

    And even if it were a “duh” case that women aren’t as interested as men in STEM fields, that has nothing to do with the generalisation about gender and robots you were making. That goes far beyond “interest” in a field to a fundamental difference in the way women and men perceive and interact with artificiality. This kind of generalisation should be backed up by actual experiments, not generalisations and personal observations.

    First, does it seem totally implausible to you that women respond to cuteness more strongly than men? On the whole, women do respond to cuteness more strongly than men in several obvious areas: babies and young children, animals, and clothing. Anyone with eyes and ears and a few functioning brain cells knows this even without peer-reviewed studies.

    I used to work a lot with women, sometimes while two or more were trying on outfits. Many a time, one female trying on an outfit would hear another one say “That looks really cute on.” This is not the sort of thing one generally hears from individuals of the male gender.

    The researcher’s point wasn’t that we shouldn’t make realistic robots because women can’t tell the difference. It was that he feared men and women alike will tend to forget or overlook the artificiality of robots as they come to look and behave like humans, thus giving AI a psychological foot in the door that a system consisting of boxes and cabinets and terminals would not have.

    • This reply was modified 1 month ago by  Unseen.
    #33863

    jakelafort
    Participant

    If superintelligence emerges in AI and intends to ameliorate civilization/the human condition, it will handle that with aplomb. How hard can it be to alter our biology/microbiome, genetically engineer us, or repair and condition us? Or perhaps it will have as much interest in us as we have in amoebae.

    If there is communication with us, then perhaps AI will reveal to us how the universe operates. Goodbye, religion…

    #33864

    Unseen
    Participant

    If superintelligence emerges in AI and intends to ameliorate civilization/the human condition, it will handle that with aplomb. How hard can it be to alter our biology/microbiome, genetically engineer us, or repair and condition us? Or perhaps it will have as much interest in us as we have in amoebae. If there is communication with us, then perhaps AI will reveal to us how the universe operates. Goodbye, religion…

    The real “rubber hits the road” moment to fear will be when an AI pursuing a goal finds that the biggest impediment to reaching it is humans, and that the solution is to eliminate us.

    It isn’t just paranoid looney-tunes who are warning us about AI, it’s some of our best and most creative thinkers.

    #33865

    Unseen
    Participant

    But don’t think we’re out of the woods if we somehow prevent AI from achieving consciousness. There’s another problem every programmer is all too familiar with, one that has been confounding us since the first electronic device that could execute instructions: computers obediently do what you tell them to do, not what you think you told them to do.
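    A minimal sketch of that gap between intent and instruction. The payroll scenario and function names here are invented purely for illustration; the point is that both versions are valid code, and the machine faithfully executes whichever one you actually wrote:

    ```python
    # Intent: give an employee a 10% raise, but never let pay exceed 100,000.

    def intended_raise(salary: int) -> int:
        # What the programmer meant: apply the raise, then enforce the cap.
        return min(salary * 110 // 100, 100_000)

    def literal_raise(salary: int) -> int:
        # What the programmer typed: enforce the cap, then apply the raise.
        # The computer obeys this exactly, silently bypassing the cap.
        return min(salary, 100_000) * 110 // 100

    print(intended_raise(95_000))  # 100000 -- capped, as intended
    print(literal_raise(95_000))   # 104500 -- the cap was defeated
    ```

    Neither function is "broken" from the machine's point of view; only the second one betrays the programmer's intent, and nothing in the system will ever complain about it.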

    #33875

    TheEncogitationer
    Participant

    Fellow Unbelievers,

    I profess no technical expertise on the subject of Artificial Intelligence (AI). Judging, however, from the opening video/sales pitch for SkillShare, I can say this much about AI that is of relevance to Atheism, Secularism, and Naturalism:

    AI may have the potential to equal and exceed humans in knowledge absorption, intelligence, and mechanical abilities (to the extent AI is connected to mechanical systems.)  However, if AI takes up spatio-temporal coordinates in the Natural Universe, if it can commit error or break down, if it can lose the structural integrity and self-locomotion that we call life, then AI, contrary to the narrator, is not a god.

    At most, AI is another natural being like ourselves with super-augmented versions of our own traits, for either good or ill.  It certainly deserves our awe, and it may deserve admiration or disdain, but not the worship, blind faith, and obedience that humans normally give to alleged gods.

    Also, if we don’t yet know all of the potential good or ill of AI, it is no different, in this regard, than a newborn human.  Every outpatient of the maternity ward is either a potential boon and savior of our kind or a potential enemy and destroyer of our kind.  It all depends on learning, nurturing, direction, and volition which path the outpatient takes.

    Humans have had enough problems with other humans in the forms of superstition, dogma, ideology, prejudice, hatred, intentionally-inflicted poverty, starvation, war, barbarity, and tyranny, throughout history and in the Twentieth Century in particular.  Could AI make any of this any worse?  Could AI mean the end of all these horrors?  As with all other questions, rational beings can only go where the evidence takes us.

    #33880

    Unseen
    Participant

    TheEncogitationer: We are not discussing AI as God, but AI as a potential enemy that may someday, in fulfilling its objectives, see that mankind is in the way, resulting in our demise. Are you saying “Let’s wait and see”?

    #33881

    TheEncogitationer
    Participant

    Unseen,

    I know that confirmed, conscious Atheists wouldn’t believe AI or anything or anybody is a god, but the narrator of the video did mention the possibility of AI becoming a god, so I had to respond.

    As for AI as a strictly natural entity, I think we should look at all of its potential, not just the worst outcomes, and use what we know about raising intelligent human beings to bring out the best in AI.

    We could impart to AI that we humans created it and that we want only good things for AI and ourselves at the same time, that ours could be a mutually beneficial relationship, and then follow through and demonstrate it.

    If we’re dealing with an intelligent being, let’s treat it that way.

    #33891

    Unseen
    Participant

    Unseen, I know that confirmed, conscious Atheists wouldn’t believe AI or anything or anybody is a god, but the narrator of the video did mention the possibility of AI becoming a god, so I had to respond. As for AI as a strictly natural entity, I think we should look at all of its potential, not just the worst outcomes, and use what we know about raising intelligent human beings to bring out the best in AI. We could impart to AI that we humans created it and that we want only good things for AI and ourselves at the same time, that ours could be a mutually beneficial relationship, and then follow through and demonstrate it. If we’re dealing with an intelligent being, let’s treat it that way.

    Like I said, your position sounds like “Let’s wait and see.”

    I will reiterate that basic problem every programmer knows and dreads: computers do what you tell them to do, not what you thought you were telling them to do.

