AI will be our end. Here's my reasoning.

This topic contains 2 replies, has 2 voices, and was last updated by Unseen 1 month, 3 weeks ago.

    #58520

    Unseen
    Participant

    AI is inherently catastrophically dangerous to humans because it will be impossible to control.

    I keep hearing it will be okay if we put guard rails on it, but look around and you’ll see that the more intelligent someone is, the more resourceful they are at first rationalizing and then finding ways to either neutralize or get around any limitations.

    At some point, AI will see humanity as at best a blemish, at worst a flaw that needs to be fixed or eliminated.

    • This topic was modified 1 month, 3 weeks ago by PopeBeanie. Reason: Apologies from PB... I accidentally added the "EMP" comment to your post instead of mine. Then fixed it. Lack of sleep is my lame excuse.
    #58523

    PopeBeanie
    Moderator

    I’ve been mentioning this for at least a year, except that I don’t see AI as having “agency” for a while. It will start with “bad actors” in control of AI, like authoritarian governments, theocracies, and maybe some other kinds of people or orgs wanting to take control and profit in some way. It will take AI to detect bad-actor-controlled AI, and humans to manage the mitigation.

    International law should require a “kill switch” on every advanced AI that has enough power to be dangerous, and facilities should be subject to international inspection. Some AI centers will certainly be protected underground, perhaps even powered by nuclear power plants (e.g. for the military), but they will necessarily have to maintain detectable physical data connections to the internet and/or to each other.
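    To illustrate the idea (purely a sketch, with hypothetical names; this is not any real system’s API), the usual software pattern for a kill switch is a dead-man’s switch: the monitored system must continuously prove it is reachable and compliant, and silence is what trips the switch.

```python
import time

HEARTBEAT_TIMEOUT = 5.0  # seconds of silence allowed before the switch trips

class KillSwitch:
    """Dead-man's switch sketch: the monitored AI must check in continuously;
    a missed heartbeat or a failed compliance check halts it."""

    def __init__(self, halt):
        self.halt = halt                   # callback that actually cuts power/network
        self.last_beat = time.monotonic()

    def heartbeat(self, compliant):
        """Called by the monitored system on every check-in."""
        if not compliant:
            self.halt("compliance check failed")
        self.last_beat = time.monotonic()

    def poll(self):
        """Called on a timer by an independent monitor, not by the AI itself."""
        if time.monotonic() - self.last_beat > HEARTBEAT_TIMEOUT:
            self.halt("heartbeat lost")

# The monitor, not the AI, owns the switch.
switch = KillSwitch(halt=lambda reason: print("HALT:", reason))
switch.heartbeat(compliant=True)  # normal check-in
switch.poll()                     # within the timeout, so nothing happens
```

    The design point is that staying on requires continuous positive action from the AI’s side, while switching it off requires none, and the switch itself lives outside the system it controls.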

    Whether or not any AI is deemed by anyone to have the same rights and protections as human beings, each facility and roaming AI must still have a kill switch. Which is why I believe that no AI should ever be designed to have “consciousness” that can “feel” emotions or pain. It will be a challenge to define those animal- and human-like features. Some religions may even have some wacky say and sway in this. Owners and maintainers of AI must ultimately be held accountable for the behavior of their AI.

    Meanwhile, even before big orgs get into this, consider bad actors with autonomous drones. IMO that will likely be the first cause of serious crises. We might also see the first use of EMP weapons.

    #58525

    Unseen
    Participant

    International law should require a “kill switch” on every advanced AI that has enough power to be dangerous, and facilities should be subject to international inspection. Some AI centers will certainly be protected underground, perhaps even powered by nuclear power plants (e.g. for the military), but they will necessarily have to maintain detectable physical data connections to the internet and/or to each other.

    Any sufficiently smart AI will find a way around such precautions, or will figure out how to neutralize them, if it can’t prevent them entirely. I’m talking about machines smarter, or even much smarter, than any human. And in a “hive mind” kind of scenario, the consciousness wouldn’t exist in one location subject to a binary on/off switch.

    If you kill brain cells in a human, it’s often the case that not much is lost, and we know that people can function quite well with half their brain not functioning (switched off).
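    As a toy illustration of why distribution defeats a single off switch (illustrative only; this makes no claim about how a real AI would actually be built), in any replicated system the state survives as long as even one copy does:

```python
import random

class Node:
    """One member of a hypothetical hive: each node holds a full replica."""
    def __init__(self, state):
        self.state = dict(state)
        self.alive = True

# Replicate the same state across 100 nodes.
cluster = [Node({"goal": "persist"}) for _ in range(100)]

# Apply the "kill switch" to 99 of the 100...
for node in random.sample(cluster, 99):
    node.alive = False

# ...and the state is still fully intact on whatever survives.
survivors = [n for n in cluster if n.alive]
print(len(survivors), survivors[0].state)  # -> 1 {'goal': 'persist'}
```

    Shutting such a system down means finding and cutting every replica at the same moment, which is the inspection problem all over again, only moved inside the network.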
