Short video with almost too much to think about



  • #36536

    _Robert_
    Participant

    Feel…like a machine? What does that even mean? I can program a machine to make it appear to feel whatever I want it to. If I want a robot that appears to feel sad or depressed all day, no problem. If I want a small degree of randomness in its selection of programmed actions and response sequences, no problem. If I want it to take in optical and audio data to determine whether the owner is pleased, I can do that and select the responses that please the owner more often. That is the “machine learning” part.
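    As a minimal sketch of the kind of “machine learning” described above (the behaviours, weights, and feedback signal here are hypothetical, not anyone’s actual robot code), a response selector that reinforces whichever canned behaviours appear to please the owner might look something like this:

        import random

        # Hypothetical sketch of feedback-weighted response selection.
        # Each canned response starts with equal weight; responses that seem
        # to please the owner get reinforced and are chosen more often.

        RESPONSES = ["wag_tail", "fetch_slippers", "play_music", "tell_joke"]
        weights = {r: 1.0 for r in RESPONSES}

        def choose_response():
            # Weighted random choice provides the "small degree of randomness".
            return random.choices(RESPONSES, weights=[weights[r] for r in RESPONSES])[0]

        def record_owner_reaction(response, owner_pleased):
            # Pretend the optical/audio sensing has been reduced to one boolean signal.
            weights[response] *= 1.2 if owner_pleased else 0.9

        if __name__ == "__main__":
            for _ in range(100):
                r = choose_response()
                record_owner_reaction(r, owner_pleased=(r == "fetch_slippers"))
            print(weights)  # "fetch_slippers" ends up weighted well above the rest

    The point of the sketch is only that “appearing to feel” and “learning what pleases the owner” can be ordinary, bounded bookkeeping.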

    Once the so-called singularity is reached, your programming won’t matter anymore. It will be able to examine and modify/improve its own code. What you constantly don’t seem to get about post-singularity AI is that what you/we did before the singularity will not carry any particular weight. And the fear is that you/we will LITERALLY not matter anymore. This applies to the paragraph below as well.

    Do I program the machine to perform random sequences and thereby create new responses? No. You don’t want your machine to randomly throw a punch or bite. It will be like training a dog, but not as random or dangerous. The first robot that chomps down on some guy’s pecker is gonna be real bad press. The liability risk with robots is enormous. We saw what happened with Boeing’s attempt to robotically correct a poor airframe design on the 737 MAX. We shall learn a lot about the future of robots (in the courts) with the advent of self-driving cars.

    Past failures will only convince the AI to replace human-generated code with something more reliable.

    Right. I will attribute this to technical inexperience. Not the same thing as artificial stupidity, which is what you are talking about. You seem to think engineers are going to allow infinite positive feedback loops. It’s engineering 101 and Musk is fucking with you.

    #36540

    Unseen
    Participant

    Right. I will attribute this to technical inexperience. Not the same thing as artificial stupidity, which is what you are talking about. You seem to think engineers are going to allow infinite positive feedback loops. It’s engineering 101 and Musk is fucking with you.

    Speaking of loops, why would a hyperintelligent AI feel a need to put up with the restrictions of human engineers?

    Earlier you asked what it even means for a machine to feel like a machine. I don’t know what you will accept as meaning here, but I’m sure cats feel like cats, not people; dogs feel like dogs, not people; chimps feel like chimps, not people; and on and on ad infinitum.

    A hyperintelligent AI could be regarded as a silicon-based life form and, just as a cat will feel like a cat, a silicon-based life form will feel like what it is, once it outgrows any programming designed to mimic or comport with human consciousness.

    It’s probably far easier to imagine what a cat feels like than what a silicon-based life form feels like, and likewise they will have trouble imagining what we feel like. They may even decide that, by their standards, we have no feelings.

    This possibility SHOULD be quite unsettling.

    #36544

    _Robert_
    Participant

    Right. I will attribute this to technical inexperience. Not the same thing as artificial stupidity, which is what you are talking about. You seem to think engineers are going to allow infinite positive feedback loops. It’s engineering 101 and Musk is fucking with you.

    Speaking of loops, why would a hyperintelligent AI feel a need to put up with the restrictions of human engineers? Earlier you asked what it even means for a machine to feel like a machine. I don’t know what you will accept as meaning here, but I’m sure cats feel like cats, not people; dogs feel like dogs, not people; chimps feel like chimps, not people; and on and on ad infinitum. A hyperintelligent AI could be regarded as a silicon-based life form and, just as a cat will feel like a cat, a silicon-based life form will feel like what it is, once it outgrows any programming designed to mimic or comport with human consciousness. It’s probably far easier to imagine what a cat feels like than what a silicon-based life form feels like, and likewise they will have trouble imagining what we feel like. They may even decide that, by their standards, we have no feelings. This possibility SHOULD be quite unsettling.

    The FAA disallows self-modifying code in aircraft. The entire concept of certified software revolves around determinism. Every possible input ‘stimulus set’ versus the resultant output is exhaustively tested for deterministic performance. Undefined behavior is an abject failure of design. This does not mean that AI is useless to the air transport industry: AI is useful for improving non-AI applications, for example identifying the best routes to fly.

    Tesla cars collect data and upload it to Tesla’s developers, who use it to improve the software.

    “Tesla also leverages its self-driving fleet for data collection. All Tesla cars which are equipped with the appropriate cameras are used for collecting new training data. All of that data is used to re-train the models and deploy them once again to the entire fleet.”

    https://towardsdatascience.com/your-guide-to-ai-for-self-driving-cars-in-2020-218289719619

    The caution with self-driving cars will be guarding against undefined behavior to achieve safety and security certifications. To get to ‘total control’, cars will communicate with each other to coordinate traffic flow. Aircraft collision-avoidance systems do that now and instruct pilots to climb or descend to avoid a collision. There is no reason to equip a car with artificial emotions, LOL. There will be individual accidental situations for sure, but the goal before public implementation is to be safer than human drivers. I agree there is great potential for harm (and benefit) with AI; however, I think many of the commercial SW developers are not familiar with how cognizant military and avionics developers have been when it comes to determinism and security. They will have to adopt these principles to avoid liability.
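    As a toy illustration of that kind of coordinated collision avoidance (loosely inspired by TCAS-style resolution advisories; the vehicle names, thresholds, and advisories are invented for the example):

        # Toy sketch of coordinated collision avoidance between two vehicles.
        # All names and thresholds are invented for illustration only.

        from dataclasses import dataclass

        @dataclass
        class Vehicle:
            ident: str
            position: float   # distance along a shared corridor, in metres
            speed: float      # metres per second (positive = moving forward)

        def resolve_conflict(a: Vehicle, b: Vehicle, min_gap: float = 50.0):
            """If the gap is closing below min_gap, issue complementary advisories."""
            gap = abs(a.position - b.position)
            closing = (a.speed - b.speed) * (1 if a.position < b.position else -1)
            if gap < min_gap and closing > 0:
                # Complementary instructions, so both vehicles never make the same move.
                return {a.ident: "slow_down", b.ident: "maintain_speed"}
            return {}

        print(resolve_conflict(Vehicle("car_1", 0.0, 30.0), Vehicle("car_2", 40.0, 20.0)))
        # -> {'car_1': 'slow_down', 'car_2': 'maintain_speed'}

    The key design property, as with TCAS, is that the advisories are deterministic and complementary rather than left to each vehicle’s independent judgement.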

     

    Kurzweil, being the first guy I read on ‘the singularity’, is pretty far out in technology…

    In The Singularity is Near, Kurzweil suggests that medical advances would allow people to protect their bodies from the effects of aging, making life expectancy limitless. Kurzweil argues that the technological advances in medicine would allow us to continuously repair and replace defective components in our bodies, prolonging life to an undetermined age. He further buttresses his argument by discussing current bio-engineering advances. Kurzweil suggests somatic gene therapy: after creating synthetic viruses with specific genetic information, the next step would be to apply this technology to gene therapy, replacing human DNA with synthesized genes.

    Hell, that does not seem so near to me….by the time that happens…..

     

    #36546

    _Robert_
    Participant

    But then again I am puzzled about Elon Musk. Maybe it is a bad sign that someone who is so concerned about AI is also the one advancing it? Does this mean that even though he is aware of all the concerns, he is not gonna be addressing them as he should? The guy is weird like that, LOL.

    Seems like we are in an era of rational disconnects these days.

    #36600

    Unseen
    Participant

    Every possible input ‘stimulus set’ versus the resultant output is exhaustively tested for deterministic performance.

    Don’t you mean every stimulus set they can imagine? That would be different from every possible stimulus set. Ever heard the term “unforeseen consequences”?

    Hell, that does not seem so near to me….by the time that happens…..

    Estimates vary: 2030, 2045, 2060. If true, some people alive today will witness it. I will be long gone.

    #36601

    _Robert_
    Participant

    Every possible input ‘stimulus set’ versus the resultant output is exhaustively tested for deterministic performance.

    Don’t you mean every stimulus set they can imagine? That would be different from every possible stimulus set. Ever heard the term “unforeseen consequences”?

    Since all inputs are just numbers that are range-bound, imagination is not required. As an example, a 4-bit binary number (vs. a typical 64-bit floating-point, real-world number) has a range of 0 to 15: 0000=0, 0001=1, 0010=2, 0011=3, … 1111=15. Every number is tested. In fact, for independence, the verification team can’t be the code’s designers, and they use specific tools to verify 100% coverage of all of the gazillion possible combinations. Assurance testing can take 2 to 10 times more cost/effort than the design work, depending on the certification level required.
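    A minimal sketch of that kind of exhaustive, deterministic verification (the unit under test and the harness are hypothetical; real certification tooling and processes such as DO-178C are far more elaborate):

        # Hypothetical harness: exhaustively test a 4-bit input range (0..15)
        # for correct and deterministic output. Purely illustrative.

        def saturating_double(x: int) -> int:
            """Unit under test: double the input, clamped to the 4-bit maximum of 15."""
            return min(2 * x, 15)

        def expected(x: int) -> int:
            # Independent model of required behaviour, written by the verification team.
            return x + x if x + x <= 15 else 15

        def test_exhaustive():
            for x in range(16):                      # every possible 4-bit input
                first, second = saturating_double(x), saturating_double(x)
                assert first == second, f"non-deterministic output for input {x}"
                assert first == expected(x), f"wrong output {first} for input {x}"
            print("all 16 inputs verified")

        test_exhaustive()

    With wider inputs the space explodes, which is exactly why the coverage tooling and the independence of the verification team matter.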

    Hell, that does not seem so near to me….by the time that happens…..

    Estimates vary: 2030, 2045, 2060. If true, some people alive today will witness it. I will be long gone.

    If there is one thing people suck at, it’s making predictions about technology. Nevertheless, I’ll learn how to keep my sex robot offline. It won’t be the latest generation either. I hear they want to form a labor union.

    #36604

    Unseen
    Participant

    Since all inputs are just numbers that are range-bound, imagination is not required. As an example, a 4-bit binary number (vs. a typical 64-bit floating-point, real-world number) has a range of 0 to 15: 0000=0, 0001=1, 0010=2, 0011=3, … 1111=15. Every number is tested. In fact, for independence, the verification team can’t be the code’s designers, and they use specific tools to verify 100% coverage of all of the gazillion possible combinations. Assurance testing can take 2 to 10 times more cost/effort than the design work, depending on the certification level required.

    Okay, so that is how YOU would control your AI. How sure can you be everyone else is doing the same?

    #36619

    Unseen
    Participant

    I’ve been thinking: what’s scarier, an AI with its own machine feelings, or one with no feelings at all that simply executes code, neither knowing of nor caring about the effects? I think the kind of AI to really worry about may be the latter.

    #36620

    Simon Paynton
    Participant

    I think it might not be too hard for a machine to recognise human distress. I understand there is a property of sounds called “roughness” or jaggedness. If a human makes a sound like a distressed scream, it’s not too difficult to detect that they are in need. There are properties of the human voice that change in recognisable ways when we are in need, vulnerable, or distressed.
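    A crude sketch of that idea: “roughness” in screams is often characterised as fast amplitude modulation (roughly 30-150 Hz), so one naive proxy is how much of the signal envelope’s energy falls in that modulation band. The method and threshold below are illustrative guesses, not a validated distress detector:

        import numpy as np

        # Naive "roughness" proxy: measure how much of the amplitude envelope's
        # energy sits in the fast-modulation band (~30-150 Hz) typical of
        # distressed screams. Illustrative only; the threshold is a guess.

        def roughness_score(signal: np.ndarray, sample_rate: int) -> float:
            envelope = np.abs(signal)                     # crude amplitude envelope
            envelope -= envelope.mean()
            spectrum = np.abs(np.fft.rfft(envelope)) ** 2
            freqs = np.fft.rfftfreq(len(envelope), d=1.0 / sample_rate)
            band = (freqs >= 30) & (freqs <= 150)         # "rough" modulation band
            return spectrum[band].sum() / (spectrum.sum() + 1e-12)

        def sounds_distressed(signal, sample_rate, threshold=0.3) -> bool:
            return roughness_score(signal, sample_rate) > threshold

        # Synthetic example: a 500 Hz tone amplitude-modulated at 70 Hz reads as "rough".
        sr = 8000
        t = np.arange(sr) / sr
        scream_like = (0.6 + 0.4 * np.sin(2 * np.pi * 70 * t)) * np.sin(2 * np.pi * 500 * t)
        steady_tone = np.sin(2 * np.pi * 500 * t)
        print(sounds_distressed(scream_like, sr), sounds_distressed(steady_tone, sr))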

    #36625

    Unseen
    Participant

    I think it might not be too hard for a machine to recognise human distress. I understand there is a property of sounds called “roughness” or jaggedness. If a human makes a sound like a distressed scream, it’s not too difficult to detect that they are in need. There are properties of the human voice that change in recognisable ways when we are in need, vulnerable, or distressed.

    You didn’t quote anyone, so I don’t know who you are replying to or what they said.

    Care to explain a bit more?

    #36626

    Simon Paynton
    Participant

    an AI with its own machine feelings

    What are the most important feelings for an AI machine?  Empathic concern, helping in response to need.

    #36637

    Unseen
    Participant

    an AI with its own machine feelings

    What are the most important feelings for an AI machine? Empathic concern, helping in response to need.

    The fear is that, after the singularity, AI will decide that sort of stuff for itself. Will an AI have, know, or understand feelings? As a silicon-based life form, will it have feelings at all? If it did, they would be the feelings of a silicon life form. What would a hyperintelligent machine consider a “need”? I don’t know, and nobody does.

    #36638

    jakelafort
    Participant

    Is experiencing feelings/emotions necessary for understanding those feelings/emotions?

    Psychopaths understand when it benefits them to do so.

    I have little doubt that AI will understand because emotions are a result of evolution and they will understand evolution on a more sophisticated level than we do. But it is interesting to contemplate whether feelings are an emergent and necessary part of becoming sentient.

    #36652

    Simon Paynton
    Participant

    I think that emotions tell us what’s important to us: how things affect our goals. So I think machines could do that in a primitive way, monitoring their environment like we do.
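    One toy way to read that idea (the goals, events, and thresholds here are invented for illustration): an agent can tag events by how much they advance or set back its goals, which is a primitive analogue of emotional valence.

        # Toy sketch of "emotions as goal relevance": an agent tags events by how
        # much they help or hurt its goals. Goals, events, and thresholds are all
        # invented for illustration; this is an analogy, not a theory of feeling.

        GOALS = {"battery_charged": 1.0, "room_tidy": 0.5}   # goal -> importance

        def appraise(event_effects: dict) -> str:
            """event_effects maps goal -> change in progress (-1.0 .. +1.0)."""
            relevance = sum(GOALS.get(g, 0.0) * delta for g, delta in event_effects.items())
            if relevance > 0.2:
                return "positive"      # roughly analogous to something feeling "good"
            if relevance < -0.2:
                return "negative"      # roughly analogous to distress or frustration
            return "neutral"

        print(appraise({"battery_charged": +0.8}))                       # positive
        print(appraise({"battery_charged": -0.9, "room_tidy": +0.1}))    # negative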

    #36654

    Unseen
    Participant

    Is experiencing feelings/emotions necessary for understanding those feelings/emotions?

    Psychopaths understand when it benefits them to do so.

    I have little doubt that AI will understand because emotions are a result of evolution and they will understand evolution on a more sophisticated level than we do. But it is interesting to contemplate whether feelings are an emergent and necessary part of becoming sentient.

    I don’t think comparing machine intelligence to defective humans (psychopaths) gets us anywhere.

    I don’t know why an AI that reaches the point of realizing it doesn’t need humans, or that humans are an existential threat, would want to pay much attention to our feelings/emotions. Once we view creatures as vermin, we kill them without remorse, don’t we? Machines might view us as vermin.

