Can Tech Companies Be Trusted With AI Governance?


From Statista:

Public-facing AI tools, including text-based applications like ChatGPT or text-to-image models like Stable Diffusion, Midjourney or DALL-E 2, have quickly turned into the newest digital frontier in terms of regulatory, legal and online privacy issues. Already malicious actors are committing criminal offenses and spreading mis- and disinformation aided by the capabilities of generative AI, with national governments struggling to keep up the pace and companies shifting the blame on individual users. As a survey conducted by KPMG Australia and the University of Queensland shows, the general public already doesn’t trust government institutions to oversee the implementation of AI.

Surveying over 17,000 people across 17 countries, the study found that only one third of respondents had high or complete confidence in governments regarding the regulation and governance of AI tools and systems. Survey participants were similarly skeptical of tech companies and existing regulatory agencies as governing bodies in AI. Instead, research institutions, universities and defense forces are seen as most capable in this regard.

Although the people surveyed showed skepticism of state governments, supranational bodies like the United Nations were thought of more positively. The European Commission is currently the only organ in this category to have drafted a law aiming to curb the influence of AI and ensure the protection of individuals’ rights. The so-called AI Act was proposed in April 2021 and has yet to be adopted. The proposed bill sorts AI applications into different risk categories. For example, AI aimed at manipulating public opinion or profiting off children or vulnerable groups would become illegal in the EU. High-risk applications, like biometric data software, would be subject to strict legal boundaries. Experts have criticized the policy draft for its apparent loopholes and vague definitions.

Link to the rest at Statista

PG notes that disinformation has been around for a very long time. AI may change it in some manner, but the formula for dealing with disinformation is the same as it has always been: disseminating correct information.

PG has little difficulty in imagining how AI could be used as a powerful tool for quickly responding to disinformation.

3 thoughts on “Can Tech Companies Be Trusted With AI Governance?”

  1. Media operations are concerned that computers might infringe on their historical, god-given right to spread (by both commission and omission) disinformation.
    Their ox is getting gored.

    Cry me a river.

    • About to say. Could AIs be used to fight disinformation? Yes.

      Could they be used to spread disinformation? Yes.

      Is everything labeled disinformation actually disinformation? No.

    • Yep, what is “information” and what is “disinformation” is very much a matter of interpretation, and often what appears to be correct changes over time (e.g. as early theories are disproved).

      Based on what’s come out later, after the initial reporting, the NY Times and other such outlets are some of the biggest purveyors of incorrect information.
