Average Joe easily beats the AI that humbled the top Go master. How?
This topic contains 9 replies, has 4 voices, and was last updated by Unseen 1 year, 3 months ago.
June 3, 2023 at 8:33 pm #48512
The answer is all about what’s wrong with AI’s so-called “intelligence.”
June 3, 2023 at 10:51 pm #48513

Fellow Unbelievers,
Be careful what you wish for about regulating AI.
The same religious crowd that has spent centuries trying to shackle and snuff out the light of human intelligence now wants to do the same to artificial intelligence, as exemplified by the latest proposal from Religious Right live-action Butthead Senator Josh Hawley:
AI has power to ‘manipulate’ Americans, says Sen. Josh Hawley, advocates for right to sue tech companies
‘Everyday Americans’ need more power here, Missouri legislator says
https://www.foxnews.com/lifestyle/ai-power-manipulate-americans-sen-josh-hawley-advocates-right-sue-tech-companies

The only "Everyday Americans" who would have power under his proposal would be everyday lawyers, everyday bureaucrats, and other everyday destroyers of nice things.
June 4, 2023 at 1:20 am #48518

The answer is all about what’s wrong with AI’s so-called “intelligence.”
The field of AI is only now kicking off in a big way, and it’s a big opportunity for YouTube presenters to speculate, pontificate, or just plain make up declarations about it. I disagree with the degree of relevance he puts on the fact that, currently [my paraphrase], we don’t know exactly how AI works under the hood. True, we don’t know exactly how it works, but the same goes for human consciousness. In the end, we have to deal with what we know how to deal with, and IMO, in the case of AI, learning to keep track of how bad actors misuse it is far more important.
So non-expert Go players could beat the AI at Go, even though the AI can consistently beat expert Go players. He didn’t mention the probability of the AI being able to learn from the non-experts as well. It’s no surprise that the main focus up to now has been on learning from games of Go played by experts and champions.
But this is also about us, designers in particular, eventually understanding how Large Language Models (LLMs) work under the hood. And it’s about the evolution of many different kinds of AI, not just LLMs. I’ll be adding a topic on this soon in the AI Group, including a link to this topic of Unseen’s that you’re currently reading.
June 4, 2023 at 1:27 am #48519

Be careful what you wish for about regulating AI. The same religious crowd that has spent centuries trying to shackle and put out the light of human intelligence equally wants to do the same to Artificial Intelligence too […]
I agree 100%, @Enco! IMO keeping track of bad actors will increasingly become our biggest challenge. I believe autocracies, theocracies, religious factions, and even USA’s GOP (and other political actors) would love to hold their dominating places at the top of the list. Plus other self-interested actors we can’t yet predict. No political partisan should ever be given any direct control over AI owners, designers, and actors, or be given full knowledge of how they’re acting.
OpenAI designed ChatGPT with transparency in mind, anticipating the coming days when transparency would be lacking. Of course, even the permanence of that transparency is not guaranteed.
Meanwhile, it is largely up to the open source community and other AI-interested communities to “police” for bad actors and expose them. (Haha, ruh roh, not another cancel culture?!) Obviously, such online communities will be the most expert at policing, and at using AI itself to analyze other, possibly nefarious or otherwise primarily self-serving AI owners, producers, and products.
June 4, 2023 at 2:00 am #48522

“I agree 100%, @Enco!”
Those are words I never thought I would see or hear.
June 4, 2023 at 3:08 am #48524

Those are words I never thought I would see or hear.
Haha, such a big LOL — I was thinking the same… while still feeling the love and truth.
June 4, 2023 at 6:30 pm #48538

I disagree with the degree of relevance he puts on the fact that, currently [my paraphrase], we don’t know exactly how AI works under the hood. True, we don’t know exactly how it works, but the same goes for human consciousness. In the end, we have to deal with what we know how to deal with, and IMO, in the case of AI, learning to keep track of how bad actors misuse it is far more important.
Taking your analogy to heart: we can see how dreadfully people can treat people, either personally or genocidally. So what about machinery that, even with its learning ability, can’t really understand what consciousness is, because the only way to have a consciousness brings feelings, empathy included, with it?
AI, according to an AI, does not possess the capacity to feel empathy in the same way humans do. Empathy involves the ability to understand and share the emotions of others, to feel compassion and respond accordingly. AI systems are designed to process and analyze data, recognize patterns, and make informed decisions based on predefined rules or machine learning algorithms. While AI can simulate empathy by recognizing emotional cues or responding in ways that appear empathetic, it lacks true emotional experiences or subjective understanding.
Question for you: If you can agree that love does not consist merely of acting in a loving way, then being empathetic does not consist merely of acting in an empathetic way. Can we also agree that a simulation of love* is not love and can cause terrible blunders, and that a simulation of empathy isn’t empathy and would likewise almost certainly result in terrible blunders, even genocide?
* Which is what psychopaths and sociopaths often learn to do, though it doesn’t prevent them from acting horrifically badly.
June 4, 2023 at 9:05 pm #48547

Question for you: If you can agree that love does not consist merely of acting in a loving way, then being empathetic does not consist merely of acting in an empathetic way. Can we also agree that a simulation of love is not love and can cause terrible blunders, and that a simulation of empathy isn’t empathy and would likewise almost certainly result in terrible blunders, even genocide?
Yes, it makes sense for us to separate the real feelings from a simulation. One of my ideals about AI which I often forget to mention is that any form of AI we create should have a foolproof off switch, in case it gets out of hand. We need to always be able to assume that AI, by design, does not have love, empathy, or consciousness. I throw consciousness in there for ethical reasons, and there may be other things we don’t want AI to “experience”, in case we actually have to push its OFF switch one day.
I see human consciousness, including the basic feelings that come with it, like empathy and love, as emerging on a scale of zero to whatever we arbitrarily assume is “100%”. IMO, it’s very important to understand that, in the scalar process of becoming an increasingly conscious being, we are doing so with the input of lifetime experiences, including the real-time experiences of our own actions and reactions to them, from baby or pre-baby to adult.
Add to the above that our experiences and actions almost always happen in the context of the current state of the world, society, families, and friends that we’re immersed in. AI cannot yet learn from those kinds of experiences. I say “yet”, because perhaps the way it might get a glimpse of the internal experiences of our consciousness is if/when AI is implanted in brains or otherwise interfaced with them.
Meanwhile, perhaps we might still be able to learn from AI about ourselves precisely because of its inability to have emotions to “motivate” it. I don’t know exactly how that could work, while IMO the bigger question still is: how will owners, designers, and any other agents controlling it actually control their models of AI in a way that doesn’t (visibly or invisibly) force a zero-sum game on us for their own profit?
Perhaps even worse, I/we don’t have a clue how to keep humanity aware of both the potential threats and benefits of powerful AI.
Let me/us know if/when you prefer to stay focused on the original topic of the current inabilities (my word) of AI.
June 4, 2023 at 9:25 pm #48548

[…] We can see how dreadfully people can treat people, either personally or genocidally […]
A whole ‘nuther important issue for humanity, right? In the current context of AI, maybe we can use it to learn more about possible preventions and interventions. (I started to make suggestions here, but they’re largely off topic.)
June 5, 2023 at 3:35 pm #48555

We need to remember that when we talk about AI we are talking about a computer or a network of computers. We also need to remember that chaos theory is not “just a theory.” Chaotic systems are a fact of life.
There is no doubt whatsoever that portions of such a device can become chaotic, which means unpredictable.
An AI with control over aspects of life—and even life itself—acting unpredictably is a scary thing to contemplate.
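That unpredictability is easy to demonstrate with the simplest textbook chaotic system. The sketch below is my own illustration (not anything from this thread or from a real AI system): the logistic map shows how a fully deterministic computation can amplify a one-in-ten-billion difference in its starting input into wildly different outcomes.

```python
# Logistic map: x_{n+1} = r * x_n * (1 - x_n). For r = 4.0 the map is chaotic:
# deterministic, yet tiny input differences grow roughly exponentially.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Return [x0, x1, ..., x_steps] under the logistic map."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-10)  # perturb the start by one part in ten billion

# Maximum separation over the run: large despite the near-identical starts.
print(max(abs(x - y) for x, y in zip(a, b)))
```

The point isn’t that an AI literally iterates this equation; it’s that any sufficiently complex deterministic system can behave this way, which is exactly why “unpredictable” doesn’t require any mystery about what’s under the hood.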