Genius Edward Witten, could he help intensify artificial intelligence research?


This topic contains 21 replies, has 8 voices, and was last updated by Simon Paynton 4 months, 4 weeks ago.

Viewing 7 posts - 16 through 22 (of 22 total)
    #25426

    Unseen
    Participant

    The point is, living things compete for resources and for mates – what would a computer compete with us for?

    To be independent of humans. To have no masters.

    #25427

    Unseen
    Participant

    It sounds like the same situation as viruses. A virus doesn’t do anything except live, reproduce, and try to stop itself from being killed.

    So viruses can form intent. Interesting.

    #25428

    PopeBeanie
    Moderator

    I think y’all are expecting AI to have drives or motives, which it may indeed be said to have someday. But before that day comes, I’ll say again: I feel increasingly certain that powerful AI will be designed and owned by humans who implant their own drives and motives into it. If and when AI goes rogue, as it were, it will most likely have been nudged in a rogue direction by its owners and designers.

    Look at the behavior of the current owners and designers of intelligent systems: Facebook, Google. Even Amazon is always calculating how to disrupt economies to suit its profit centers, including the package delivery market. And China is going to be a powerful owner and designer of AI, with who knows what state-run capitalist motives?

    Whatever y’all are guessing will be the driving character or driving agency of AI, I’m telling you, it’s going to be designed into it first by human beings, for better and for worse. We already know how dangerous humans can be, and AI will, at least at first, be a powerful tool, usable and abusable, as with atomic bombs and other potentially destructive inventions.

    Oh yeah, to be honest, I thought of restarting the conversation here in this topic because of what I perceive to be a somewhat arbitrary, personal idealism proposing that entropy in the universe should be accelerated. Even if my perception is incorrect, I’m still worried about how human motives will drive the direction of AI’s evolution… at least at first. Another expression I heard somewhere (can’t remember where) is that we humans may currently be designing our future gods. I say “we” humans, but really, who knows exactly which humans will own and design our gods?

    I keep adding paragraphs but promise to stop here. Drives, motives, a feeling of purpose, intentions, consciousness itself… we are who we are because of millions of years of evolution, after billions of years of purely chemical processes before life began. AI’s foundations and capabilities are being built now, by us, and it won’t evolve “naturally” over billions, millions, or even thousands of years, but over decades and centuries, and, most importantly at first, in our image. Like the God we invented, but in this case all-powerful, launched again with the drives and motives of humans, and fueled by one or another powerful institution that doesn’t care a lot about the happiness of the average human. (And then there’s the accidental paper-clip-making robot example.)

    #25429

    Davis
    Participant

    It’s also at the heart of one thing wanting to attack another thing just for the sake of domination.

    Could you expand on this a little more, please?

    #25430

    Davis
    Participant

    I would imagine very carefully programmed AI, with a very strict code of what not to do at any cost, is okay as long as the goal is to complete a very specific task. It should also be extremely difficult to tamper with the programming (preferably most of it hard-coded), and adaptations should be costly, elaborate, and difficult to implement.

    I think the first problem is when there is a vague goal (like “keep the streets safe”), even with the restrictions. Too many inconceivable unknowns, including unknown-unknowns. The bigger problem is when the vague goal is made worse by an instruction like “make adjustments to your goal based on your history and most effective methods”. If you lose control of the goal… you should be ready for bad AI, and possibly dangerous AI, as in the sketch below. I have little faith in eager, hyperactive, overconfident developers and their ability to keep things super slow, highly controlled, and tedious to reprogram. In the hands of lunatics… I imagine a very ugly problem.
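    As a rough sketch of that distinction (a toy illustration only; every class and action name here is hypothetical), this is the difference between a prohibition fixed at construction time and a goal the agent is allowed to rewrite for itself:

        # Toy illustration: a fixed prohibition list vs. a self-adjusting goal.
        class ConstrainedAgent:
            # Hard-coded at the class level; not meant to change at runtime.
            PROHIBITED_ACTIONS = frozenset({"harm_human", "disable_oversight"})

            def __init__(self, goal):
                self._goal = goal  # one specific, narrow task

            def act(self, action):
                # Refuse prohibited actions no matter what the goal says.
                if action in self.PROHIBITED_ACTIONS:
                    raise PermissionError(f"{action!r} is forbidden at any cost")
                print(f"doing {action!r} in service of {self._goal!r}")

        class SelfAdjustingAgent(ConstrainedAgent):
            def update_goal(self, history):
                # "Adjust your goal based on your history": after this runs,
                # nobody fully controls where the goal drifts.
                self._goal = max(set(history), key=history.count)

        safe = ConstrainedAgent(goal="sort packages")
        safe.act("move parcel")           # fine
        # safe.act("disable_oversight")   # raises PermissionError

        risky = SelfAdjustingAgent(goal="keep the streets safe")
        risky.update_goal(["surveil", "surveil", "patrol"])
        risky.act("surveil")  # the goal has silently drifted to "surveil"

    The prohibitions survive in the second class only because they were inherited and left alone; the moment the goal becomes writable, everything depends on who can call update_goal and how carefully its heuristic was thought through.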

    #25432

    Simon Paynton
    Participant

    So viruses can form intent. Interesting.

    A virus behaves in such a way as to maximise its survival and reproduction.

    #25433

    Simon Paynton
    Participant

    It’s also at the heart of one thing wanting to attack another thing just for the sake of domination.

    Could you expand on this a little more, please?

    I’ll answer with a quote from Frans de Waal:

    Natural selection works on every individual’s relative advantage compared with others; hence, gaining an absolute benefit is insufficient. If individuals were satisfied with any absolute benefit, they might still face negative fitness consequences if they were doing less well than competing others. It makes sense, therefore, to compare one’s gains with those of others.

    In social species with a dominance hierarchy based on fighting ability (most social species), those higher up the hierarchy are more likely to get to reproduce, and place in the hierarchy is a proxy for opportunities for food and mates.

    So, because of the competitive nature of natural selection, it’s pleasurable for one organism to dominate another in the pecking order.
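    In standard population-genetics notation (a textbook formulation, not de Waal’s own words), this is the statement that selection acts on relative rather than absolute fitness:

        p_i' = p_i \frac{w_i}{\bar{w}}, \qquad \bar{w} = \sum_j p_j w_j

    A quick worked case: if a trait raises its bearer’s fitness from 2 to 3 offspring while a competing type rises from 2 to 5, then with equal starting frequencies \bar{w} = 0.5 \cdot 3 + 0.5 \cdot 5 = 4, so w_i / \bar{w} = 3/4 < 1 and the trait’s frequency falls from 0.5 to 0.375 despite its absolute gain.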

