The Download: China’s Social Credit Law and Robot Dog Browsing

The Chinese government's problematic research to judge online comments

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s happening in the tech world.

This is why China’s new social credit law is important

It’s easier to talk about what China’s social credit system is not than what it is. Ever since 2014, when China announced plans to build it, it has been one of the most misunderstood things about China in Western discourse. Now, with new documents released in mid-November, there is an opportunity to correct the record.

Most people outside of China assume it will act as a Black Mirror-like system powered by technologies to automatically score each Chinese citizen based on what they did right and wrong. Instead, it’s a mix of attempts to regulate the financial lending industry, to allow government agencies to share data with each other, and to promote state-sanctioned moral values, vague as they may sound.

While the system itself will still take a long time to materialize, with the publication of a draft law last week, China is now closer than ever to defining what it will look like and how it will affect the lives of millions of citizens. Read the full story.

—Zeyi Yang

Watch this robot dog climb difficult terrain just by using its camera

The news: When Ananye Agarwal took her dog for a walk up and down the steps at the local park near Carnegie Mellon University, other dogs stopped in their tracks. This was because Agarwal’s dog was a robot, and a special one at that. Unlike other robots, which tend to rely heavily on an internal map to navigate, her robot uses a built-in camera and uses computer vision and reinforcement learning to navigate difficult terrain.

Why it matters: While other attempts to use camera signals to guide a robot's movement have been limited to level ground, Agarwal and his fellow researchers managed to get their robot to walk up stairs, clamber over stones, and hop across gaps. They hope their work will make it easier to deploy robots in the real world, and greatly improve their mobility in the process. Read the full story.

—Melissa Heikkilä

Rely on large language models at your peril

When Meta launched Galactica, an open-source large language model, the company was hoping for a big PR win. Instead, all it got was a pile-on on Twitter and a scathing blog post from one of its most vocal critics, culminating in its embarrassing decision to take down the model's public demo after just three days.

Galactica was intended to help scientists by summarizing academic papers and solving math problems, among other tasks. But outsiders quickly prodded the model into producing "scientific research" on the benefits of homophobia, anti-Semitism, suicide, eating glass, being white, or being a man, demonstrating not only how premature its launch was, but also how insufficient AI researchers' efforts to make large language models safer have been. Read the full story.

This story is from The Algorithm, our weekly newsletter that gives you the inside scoop on all things AI. Sign up to have it delivered to your inbox every Monday.

Required reading

I’ve scoured the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Verified anti-vax Twitter accounts are spreading health misinformation
And perfectly demonstrating the issue with charging for verification in the process. (The Guardian)
+ Maybe Twitter hasn’t helped your career as much as you thought. (Bloomberg$)
+ A deepfake of the founder of FTX is circulating on Twitter. (Motherboard)
+ Some of Twitter’s liberal users refuse to leave. (The Atlantic $)
+ Twitter’s layoff bloodbath is over, it seems. (The Verge)
+ Twitter’s potential collapse could erase vast records of recent human history. (MIT Technology Review)

2 NASA’s Orion spacecraft has completed its lunar flyby 🌒
Paving the way for humans to return to the moon. (Vox)

3 Amazon’s inventory tracking algorithms are trained by humans
Low-paid workers in India and Costa Rica are reviewing thousands of hours of mind-numbing footage. (The Verge)
+ The AI data labeling industry is deeply exploitative. (MIT Technology Review)

4 How to make sense of climate change
Accepting the hard facts is the first step in avoiding the bleakest ending for the planet. (New Yorker $)
+ The world’s richest nations have agreed to pay for global warming. (The Atlantic $)
+ These three graphs show who is most responsible for climate change. (MIT Technology Review)

5 Apple discovered the shady dealings of a cybersecurity startup
Apple compiled a document detailing the extent of Corellium’s relationships, including with the infamous NSO Group. (Wired $)
+ The hacking industry faces the end of an era. (MIT Technology Review)

6 The cryptocurrency industry is still in turmoil
Shares in its biggest exchange have fallen to an all-time low. (Bloomberg $)
+ The UK wants to crack down on gamified trading apps. (FT $)

7 The criminal justice system is failing neurodivergent people
Impersonating an online troll led to an autistic man being sentenced to five and a half years in prison. (Economist $)

8 Your workplace may be planning to scan your brain 🧠
All in the name of making you a more effective employee. (IEEE Spectrum)

9 Facebook doesn’t care if your account is hacked
A series of new measures to rescue hacked accounts doesn’t seem to have had much effect. (WP $)
+ Parent company Meta has been sued in the UK over the data collection. (Bloomberg$)
+ Independent artists are building the metaverse their way. (Motherboard)

10 Why training image-generating AIs on generated images is a bad idea
The ‘tainted’ images will only confuse them. (New Scientist $)
+ Facial recognition software used by the US government reportedly failed. (Motherboard)
+ The dark secret behind those cute AI-generated animal images. (MIT Technology Review)

Quote of the day

“It seems like they cared more.”

—Ken Higgins, an Amazon Prime member who is losing faith in the company after a series of frustrating delivery experiences, tells the Wall Street Journal.

The big story

What if you could diagnose diseases with a tampon?

February 2019

On a nondescript side street in Oakland, California, Ridhi Tariyal and Stephen Gire are trying to change the way women track their health.

Their plan is to use blood from used tampons as a diagnostic tool. In that menstrual blood, they hope to find early markers of endometriosis, and eventually a variety of other ailments. The simplicity and ease of this method, if it works, would be a vast improvement over the current standard of care. Read the full story.

— Dayna Evans

We can still have nice things

A place of comfort, fun and distraction in these strange times. (Have any ideas? Write me a message or tweet them to me.)

+ Happy Thanksgiving—in your nightmares!
+ Why Keith Haring’s legacy is more visible than ever, 32 years after his death.
+ Even the gentrified world of dinosaur skeleton assembly isn’t immune to scandal.
+ Pumpkins are a Thanksgiving staple, but that wasn’t always the case.
+ If I lived in a frozen wasteland, I’m pretty sure I’d also be the grumpiest cat in the world.

