Don’t let industry write the rules for AI



    #26177

    PopeBeanie
    Moderator

    Industry has the data and expertise necessary to design fairness into AI systems. It cannot be excluded from the processes by which we investigate which worries are real and which safeguards work, but it must not be allowed to direct them. Organizations working to ensure that AI is fair and beneficial must be publicly funded, subject to peer review and transparent to civil society. And society must demand increased public investment in independent research rather than hoping that industry funding will fill the gap without corrupting the process.

    Read the whole opinion article (in Nature) here.

    #26186

    _Robert_
    Participant

     Organizations working to ensure that AI is fair and beneficial must be publicly funded, subject to peer review and transparent to civil society

    This is a recipe to cook up a vacuum. Are we to assume that government workers are tops when it comes to technology? Great minds do not want to vegetate in a bureaucracy and act as regulators. The government officials I have had to deal with are years behind the curve and incapable of understanding cutting-edge technology, with one exception: the ones who used to work in industry and are in government to help their old buddies out. By the time government figures out what is going on, the game is about to change again. Technology schools are not necessarily ahead of industry either; they are often used by industry as research arms in return for grants.

    Industry should self-regulate with a nonprofit consortium that includes government officials just for grins. The consortium should generate the regulatory documents and provide compliance test results for its members' products. Tech companies are generally staffed by intelligent and morally sound people who are as ethical as, or more ethical than, government fools. At least it used to be that way. Perhaps companies are busy worrying more about diversity hires than product safety these days. Personal responsibility is waning in this society of victims, and I saw that with the young engineers. Boeing let a design flaw get out. I attribute that to a lack of personal responsibility. No FAA official who signed off on those systems had a clue about what might go wrong.

    The general public doesn’t have the time or the ability, and will just side with their inept political party’s agenda, as they always do.

    #26189

    The government officials in Congress who “interviewed” Mark Zuckerberg at his Senate hearing last year showed little or no understanding of how Facebook works, never mind the algorithms used by Cambridge Analytica in data harvesting. Zuckerberg spoke about using AI to weed out bogus accounts, hate speech and fake news. On hearing this, one Senator actually asked if it was true that Facebook listened to people by hijacking their audio in order to assist in targeting ads. I almost felt embarrassed for them. They were so clearly out of their depth that at least it proved none of them were replicants. The Tyrell Corporation could send an office junior to be interviewed by Congress and they would be in awe of his superior intellect!

    #26190

    PopeBeanie
    Moderator

    Are we to assume that government workers are tops when it comes to technology?

    The author never said “government workers”; he said publicly funded policy research. I don’t have a problem with the rest of what you wrote in that paragraph. Still, he wasn’t helpful in spelling out who should decide who should be paid to do the work, unless he’s thinking of the NSF as the only decider. I got an A in Political Science a few years ago, but I have to admit I know very little about the NSF. Do you have an opinion about how efficacious they are, or how good or bad they’ve been to us? (Trump tried to cut NSF research funding in 2018 by 30%, but “quickly rescinded this due to backlash”, per Wikipedia.) I think we need more than the NSF, and I’ll explain that.

    Personal responsibility is waning in this society of victims, and I saw that with the young engineers. Boeing let a design flaw get out. I attribute that to a lack of personal responsibility. No FAA official who signed off on those systems had a clue about what might go wrong.

    I posted about this opinion piece from Nature because I predict that science and tech (including AI) will increasingly go over the heads of most citizens, policy makers, voters, journalists, and politicians. Big business and private wealth power short-term-profit industries, and these days they influence (if not control) day-to-day politics far more than average American citizens can hope to. Are there any institutions we can trust much more than (say) academia, and if so, can we ever raise the awareness and education of average citizens enough to make rational decisions/elections about who should lead the world?

    I don’t think that just finger-wagging and angst are going to do the trick.

    never mind the algorithms used by Cambridge Analytica in data harvesting

    So, related to what I was saying to _Robert_, I’m especially keen on trying to understand how a publicly funded organization like the BBC executes such great investigative journalism and produces so much great programming for the people. Do you think it costs too much in the long run? Maybe the biggest problem, not surprisingly, is that not enough people take the time to educate themselves; witness their Brexit fail.

    I’m not intentionally diverging from the topic of responsibly building and controlling AI. I’m just afraid that how we eventually fare against AI designed and owned by corporations or states is just one of civilization’s newest illiteracy-in-science/tech dangers.

