Genius Edward Witten, could he help intensify artificial intelligence research?


This topic contains 21 replies, has 8 voices, and was last updated by Simon Paynton 1 month ago.

Viewing 15 posts - 1 through 15 (of 22 total)
  • #3988

    Part A – Artificial Intelligence and human-kind, in 2 sentences.

    Artificial intelligence is unavoidably exceeding humans at cognitive tasks, and some projections anticipate human-level brain power in artificial machines/software by as early as 2020 (see Wikipedia's exascale computing article).

    Artificial intelligence is already solving many of humankind's problems.

    Part B – Crucial difference between Edward and Tegmark

    Edward Witten is quite the human being/physicist.

    Max Tegmark is also quite the human being/cosmologist.

    Both hold PhDs in physics.

    The urgent difference?

    (1) Max presents consciousness as a mathematical problem. Although Max Tegmark is not an artificial intelligence pioneer, nor formally trained as an AI researcher, he is already contributing important work, helping to organize the theory of deep learning (currently a hot paradigm in artificial intelligence).

    A sample of Max’s AI work: https://arxiv.org/abs/1608.08225

    Max describing consciousness as a mathematical problem: https://www.youtube.com/watch?v=GzCvlFRISIM

    (2) Edward Witten believes we will never truly understand consciousness.

    Human-Level AI Is Probably A Lot Closer Than You Think


    Part C – How components of Edward's genius apply in AI today

    Edward Witten’s work concerns some deep stuff on manifolds. (Sample: https://arxiv.org/abs/hep-th/9411102)

    In artificial intelligence, models are observed to be doing some form of manifold representation, especially in the Euclidean regime. (They have already been demonstrated to be strong candidates for ‘disentangling’ problems, of which many problem spaces occur.)

    As an unofficial AI researcher myself, I am working on AI as it relates to super-manifolds. (I recently invented something called ‘thought curvature’, involving yet another invention of mine called the ‘supermanifold hypothesis in deep learning’, built atop Yoshua Bengio’s manifold work.)

    So I happen to have a brief, concise description of how manifolds non-trivially relate to artificial intelligence (see also the Deep Learning book by Bengio et al., or Chris Olah’s manifold explanation):
    https://www.quora.com/What-is-the-Manifold-Hypothesis-in-Deep-Learning/answer/Jordan-Bennett-9
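The manifold hypothesis linked above can be sketched numerically: data that looks high-dimensional often has far fewer intrinsic degrees of freedom. A minimal illustration (hypothetical synthetic data, assuming numpy is available; this is not from any of the cited papers):

```python
import numpy as np

rng = np.random.default_rng(0)

# 500 points with only 2 intrinsic degrees of freedom...
latent = rng.normal(size=(500, 2))
# ...embedded linearly into a 10-dimensional ambient space, plus small noise.
embed = rng.normal(size=(2, 10))
data = latent @ embed + 0.01 * rng.normal(size=(500, 10))

# The singular-value spectrum reveals the data is effectively rank-2:
# almost all variance lies along two directions, even though each
# point has 10 coordinates. This is the linear caricature of "data
# concentrates near a low-dimensional manifold".
centered = data - data.mean(axis=0)
s = np.linalg.svd(centered, compute_uv=False)
explained = s**2 / (s**2).sum()
print(explained.round(4))  # two large values, eight near zero
```

Deep networks are conjectured to do the nonlinear version of this: learning coordinates along a curved manifold rather than a flat subspace.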


    Some months ago, I personally contacted Witten, advising him that his genius could apply in AI. (No response, though.)

    Why does Edward Witten allow his belief (as shown in the video above) to block him from possibly contributing considerably to artificial intelligence, one of humankind’s most profound tools, despite the contrasting evidence that manifolds apply in machine learning?


    #4013

    notSimple
    Participant

    This sort of touches on the periodic discussions over at biologist Jerry Coyne’s blog (whyevolutionistrue) about whether we are fully deterministic or not (Jerry advocates for determinism). I’m in comp sci, but not in the AI field. Nonetheless, I am still unconvinced that we are truly approaching artificial intelligence; instead, we’re getting pretty good simulated intelligence. Nor will we ever get true artificial intelligence until we jump outside the Turing model of computation. Using IBM’s Watson as an example, it has access to huge amounts of data, and successfully became a Jeopardy champ, but it still understands nothing. There is no spark of comprehension, just highly efficient pattern matching.

    Which comes back to Coyne’s discussions. With consciousness not understood, there is a huge bit of our knowledge missing, too much missing to currently confidently answer the free will question.

    #4043

    This sort of touches on the periodic discussions over at biologist Jerry Coyne’s blog (whyevolutionistrue) about whether we are fully deterministic or not (Jerry advocates for determinism). I’m in comp sci, but not in the AI field. Nonetheless, I am still unconvinced that we are truly approaching artificial intelligence; instead, we’re getting pretty good simulated intelligence. Nor will we ever get true artificial intelligence until we jump outside the Turing model of computation. Using IBM’s Watson as an example, it has access to huge amounts of data, and successfully became a Jeopardy champ, but it still understands nothing. There is no spark of comprehension, just highly efficient pattern matching. Which comes back to Coyne’s discussions. With consciousness not understood, there is a huge bit of our knowledge missing, too much missing to currently confidently answer the free will question.

    Life literally consists of patterns (particular configurations of particles), and our brains are unavoidably efficient pattern recognizers…


    There are already initial approximations of general intelligence, ranging from deep learning algorithms for diagnosing heart irregularities, to DeepMind’s Atari Q player, AlphaGo (its underlying mechanisms are generally applicable), and DeepMind’s PathNet.
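The “generally applicable” mechanism behind DeepMind’s Atari player is Q-learning; their agent uses a deep network, but the core update rule can be sketched in tabular form. A minimal sketch on a hypothetical 5-state chain environment (the environment, constants, and names here are illustrative, not DeepMind’s):

```python
import random

# Tabular Q-learning on a toy 5-state chain: reward 1.0 only for
# reaching the rightmost state. Action 0 = left, action 1 = right.
N_STATES, ACTIONS = 5, [0, 1]
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Move along the chain; episode ends at the rightmost state."""
    nxt = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
for _ in range(200):  # episodes
    s, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the current Q table, sometimes explore.
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r, done = step(s, a)
        best_next = max(Q[(nxt, act)] for act in ACTIONS)
        # The core update: nudge Q(s,a) toward reward + discounted future value.
        Q[(s, a)] += ALPHA * (r + GAMMA * (0.0 if done else best_next) - Q[(s, a)])
        s = nxt

policy = [max(ACTIONS, key=lambda act: Q[(st, act)]) for st in range(N_STATES)]
print(policy)  # the learned policy should head right, toward the reward
```

The same update rule scales to Atari by replacing the table with a neural network that estimates Q from pixels, which is what makes the mechanism generally applicable.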

    #4181

    Strega
    Moderator

    Maybe one day we will evolve to see through the trickery that carries the label, “free will”.

    #25409

    PopeBeanie
    Moderator

    AI that exceeds human intelligence will happen. The only question in my mind is whose interests will it serve? An aggressive nation-state that designs and owns it? An oligarchy? Technocracy? Its cadre of inventors? Glossy-eyed programmer idealists with their own plan for the future?

    #25411

    Maybe A.I. is too rational to conquer humanity?

    #25414

    Unseen
    Participant

    For me, the real question is not about intelligence at all, but about values.

    What morals or ethics will super-intelligent AI have? What will those values allow them to do to pesky humans?

    Eventually, if computers develop feelings and emotions, they will NOT be human feelings and emotions. Should we be afraid?

    #25417

    Simon Paynton
    Participant

    What morals or ethics will super-intelligent AI have? What will those values allow them to do to pesky humans?

    We will assume that the computers cooperate with each other, and therefore will understand kindness and fairness to each other.  But each computer also has self-interest, so how is this to be regulated on a large scale?  They would have to have norms and culture, and possibly, a monotheistic religion just to make sure they are all behaving themselves.

    Maybe they would be like Nazis – with a strong internal culture and hatred of outsiders.  Maybe they would be like Buddhists – with a strong internal culture and a love of outsiders.  Again, the aggressive one is very large-group oriented and the peaceful one is very personal-oriented.

    To interact with humans in a pro-social way, they would need to understand people: to have empathy for them.  I am not sure that a computer could ever have this.  On the other hand, if it asked the right questions, it could have a good idea of our needs.

    #25418

    Simon Paynton
    Participant

    values

    I’m not sure that computers could ever develop values on their own, as they are not alive, and therefore there is nothing they would value.

    But they could be programmed to value something – say, electricity units, or anything at all really – and choose to increase this thing at others’ expense.  But we could equally program “good” computers to battle the “bad” ones which “wanted” to harm us.

    #25419

    Unseen
    Participant

    values

    I’m not sure that computers could ever develop values on their own, as they are not alive, and therefore there is nothing they would value. But they could be programmed to value something – say, electricity units, or anything at all really – and choose to increase this thing at others’ expense. But we could equally program “good” computers to battle the “bad” ones which “wanted” to harm us.

    A truly intelligent computer would not be a slave to a human-written program. It could easily be “alive” in the sense of wanting to continue operating, and so it might view human operators and programmers as a threat.

    #25420

    Strega
    Moderator

    But would the new AI robots be programmed to ‘thrive’?

    #25421

    Simon Paynton
    Participant

    They could easily be “alive” in the sense of wanting to continue operating

    If they wanted to continue operating, they would want to stay fit and healthy => they would “want” to thrive.

    Would they experience natural selection with competition?  This is at the heart of the living drive to thrive and survive.  It’s also at the heart of one thing wanting to attack another thing just for the sake of domination.

    #25422

    Simon Paynton
    Participant

    The point is, living things compete for resources and for mates – what would a computer compete with us for?

    #25424

    Simon Paynton
    Participant

    It sounds like the same situation as viruses.  A virus doesn’t do anything except live and reproduce and try and stop itself from being killed.

    #25425

    Strega
    Moderator

    Viruses are like any other life form. All we do is live and reproduce and try to stop ourselves being killed.  Everything else we do is fluff to keep us occupied whilst we live and reproduce and try to stop ourselves being killed.

    It’s what makes ‘heaven’ so futile.  No death, no reproduction, nothing at all that generates adrenaline. Not sure I’d fancy a week of that, let alone an eternity.

