The AI myth Western lawmakers get wrong

The Chinese government's problematic research to judge online comments

This story originally appeared in The Algorithm, our weekly AI newsletter. To get stories like this in your inbox first, sign up here.

While the US and the EU may differ on how to regulate AI, their lawmakers seem to agree on one thing: the West must ban AI-based social scoring.

According to them, social scoring is a practice in which authoritarian governments, especially China, rank people’s trustworthiness and punish them for unwanted behavior, such as stealing or defaulting on loans. Essentially, it is seen as a dystopian superscore assigned to each citizen.

The EU is currently negotiating a new law called the AI Act, which will ban member states, and perhaps even private companies, from implementing such a system.

The problem is that it “essentially bans thin air,” says Vincent Brussee, an analyst at the Mercator Institute for China Studies, a German think tank.

In 2014, China announced a six-year plan to build a system that rewards actions that build trust in society and penalizes the opposite. Eight years later, a bill has just been published that seeks to codify past social credit pilots and guide future implementation.

There have been some controversial local experiments, like the one in the small town of Rongcheng, which in 2013 gave each resident an initial personal credit score of 1,000 that could be raised or lowered based on how their actions were judged. People can now opt out, and the local government has removed some controversial criteria.

But these experiments have not spread elsewhere, and they do not apply to the entire Chinese population. There is no nationwide, all-seeing social credit system with algorithms that rank people.

As my colleague Zeyi Yang explains, “the reality is that that terrifying system doesn’t exist, and the central government doesn’t seem too keen to build it either.”

What has been implemented is mostly pretty low-tech. It is a “set of attempts to regulate the financial lending industry, allow government agencies to share data with each other, and promote state-sanctioned moral values,” Zeyi writes.

Kendra Schaefer, a partner at Trivium China, a Beijing-based research consultancy that compiled a report on the subject for the US government, could not find a single case in which data collection in China led to automated sanctions without human intervention. And the South China Morning Post found that in Rongcheng, human “information gatherers” walked around town and wrote down people’s misbehavior using pen and paper.

The myth originates from a pilot program called Sesame Credit, developed by the Chinese tech company Alibaba. It was an attempt to gauge people’s creditworthiness using customer data at a time when most Chinese people didn’t have a credit card, Brussee says. The effort got conflated with the social credit system as a whole in what Brussee describes as a “game of Chinese whispers,” and the misunderstanding took on a life of its own.

The irony is that while US and European politicians describe this as a problem stemming from authoritarian regimes, systems that classify and penalize people are already in place in the West. Algorithms designed to automate decisions are being deployed en masse and used to deny people housing, jobs and basic services.

For example, in Amsterdam, authorities used an algorithm to rank young people from deprived neighborhoods according to their likelihood of becoming criminals. Officials say the aim is to prevent crime and to deliver better, more targeted support.

But in reality, human rights groups argue, it has increased stigma and discrimination. Young people who end up on this list face more stops from the police, home visits from the authorities, and stricter supervision from school and social workers.

It’s easy to take a stand against a dystopian algorithm that doesn’t actually exist. But as lawmakers in both the EU and the US strive to build a shared understanding of AI governance, they would do well to look closer to home. Americans don’t even have a federal privacy law that offers basic protections against algorithmic decision-making.

There is also a desperate need for governments to conduct honest and thorough audits of how authorities and companies use AI to make decisions about our lives. They may not like what they find, but that makes it all the more crucial for them to look.

Deeper learning

A bot that has watched 70,000 hours of Minecraft could unlock the next big thing in AI

The AI research lab OpenAI built a bot that trained on 70,000 hours of videos of people playing Minecraft and now plays the game better than any AI before it. It’s a breakthrough for a powerful new technique, called imitation learning, that could be used to train machines to carry out a wide range of tasks by first watching humans do them. It also raises the possibility that sites like YouTube could be a vast and untapped source of training data.

Why it’s a big deal: Imitation learning can be used to train AI to control robot arms, drive cars, or navigate websites. Some people, like Meta’s chief AI scientist Yann LeCun, think that watching videos will ultimately help us train an AI with human-level intelligence. Read Will Douglas Heaven’s story here.
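For readers who want a feel for the technique: in its simplest form, imitation learning reduces to behavioral cloning, where each recorded human action is treated as a label for the observation that preceded it, and a policy is trained with ordinary supervised learning. Below is a minimal, hypothetical sketch in PyTorch; the Policy network, obs_dim, and the (observation, action) dataset are assumptions for illustration, not OpenAI’s actual Minecraft model, whose key contribution was labeling raw video so this kind of training becomes possible at scale.

```python
# A minimal behavioral-cloning sketch, the simplest form of imitation
# learning. Hypothetical illustration only, not OpenAI's actual model:
# the network shape and the (observation, action) dataset are assumed.
import torch
import torch.nn as nn

class Policy(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256),
            nn.ReLU(),
            nn.Linear(256, n_actions),  # logits over discrete actions
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

def train(policy: Policy, loader, epochs: int = 10, lr: float = 1e-4) -> None:
    """Ordinary supervised learning: the human's recorded action is the label."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for obs, action in loader:  # pairs mined from gameplay video
            opt.zero_grad()
            loss_fn(policy(obs), action).backward()
            opt.step()
```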

Bits and bytes

Meta’s AI can make and break alliances like a human

Diplomacy is a popular strategy game in which seven players compete for control of Europe by moving pieces around a map. The game requires players to talk to one another and spot when others are bluffing. Meta’s new AI, called Cicero, managed to trick human players in order to win.

It’s a big step towards AI that can help with complex problems, like planning routes around heavy traffic and negotiating contracts. But I won’t lie: It’s also an unnerving thought that an AI could fool humans so well. (MIT Technology Review)

We may run out of data to train AI language programs

The trend toward building ever larger AI models means we need ever larger datasets to train them. Trouble is, we could run out of adequate data by 2026, according to a paper by researchers at Epoch, an artificial intelligence research and forecasting organization. That should push the AI community to find ways to do more with existing resources. (MIT Technology Review)

Stable Diffusion 2.0 is out

The open-source text-to-image AI Stable Diffusion received a major facelift, and its results look much more polished and realistic than before. It can also do hands. The pace of development of Stable Diffusion is breathtaking: its first version launched only in August. We will likely see even more advances in generative AI well into next year.

