
Nick Clegg of Meta says that A.I. and democracy can coexist, with the right systems in place.

During the Athens Democracy Forum, Nick Clegg, the president of global affairs for Meta, discussed how artificial intelligence can be used to combat hate speech. Credit: Athens Democracy Forum

This article is from a special report on the Athens Democracy Forum, which gathered experts last week in the Greek capital to discuss global issues.


Moderator: Liz Alderman, chief European business correspondent, The New York Times

Speaker: Nick Clegg, president, global affairs, Meta

Excerpts from the Rethinking A.I. and Democracy discussion have been edited and condensed.

LIZ ALDERMAN A.I. obviously holds enormous promise and can do all kinds of new things. A.I. can even help us possibly solve some of our hardest problems. But it also comes with risks, including manipulation, disinformation and the existential threat of it being used by bad actors. So Nick, why should the public trust that A.I. will be a boon to democracy, rather than a potential threat against it?

NICK CLEGG I think the public should continue to reserve judgment until we see how things play out. Like any major technological innovation, this one can be used for good and for bad purposes, by good and bad people. That’s been the case from the invention of the car to the internet, from the radio to the bicycle. And I think it’s natural to fear the worst, to try to anticipate the worst, and to be fearful particularly of technologies that are difficult to comprehend. So I think it’s not surprising that in recent months, certainly since OpenAI released ChatGPT, a lot of the focus has centered on possible risks. I think some of those risks, or at least the way some of them are being described, are running quite far ahead of the technology, to be candid. You know, this idea of A.I.s developing a kind of autonomy and an agency of their own, a sort of demonic wish to destroy humanity and turn us all into paper clips and so on, which made up quite a lot of the early discussion.

ALDERMAN We haven’t reached “Terminator 2” status.

CLEGG Yeah, exactly. Because these are systems, remember, which don’t know anything. They don’t have any real meaningful agency or autonomy. They are extremely powerful and sophisticated ways of slicing and dicing vast amounts of data and applying billions of parameters to it to recognize patterns across a dizzying array of data sets and data points.

Generative A.I. is predictive: it can predict the next word. But it doesn’t know what those words inherently mean. So I think we need to adjust some of the somewhat breathless fears. Nonetheless, it’s probably fair to assume that, like a lot of these technologies, it will act as an accelerant for trends that already exist. But if you look at the progress that A.I. has already driven on social media platforms in recent years, it’s quite encouraging.
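
To make “predicting the next word” concrete, here is a minimal sketch, nothing like a real large language model: a toy bigram counter that learns only which word tends to follow which, with no notion of meaning. The corpus and function names are invented for illustration.

```python
# A toy illustration of next-word prediction: the model only learns
# which token tends to follow which, with no grasp of meaning.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, which words follow it in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word -- pure pattern."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat": the most frequent continuation
```

A large language model does the same basic thing at vastly greater scale, with billions of learned parameters in place of a frequency table.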

A good example would be hate speech on Facebook. The prevalence of hate speech, a figure audited from our data, now stands at between 0.01 and 0.02 percent. In other words, for every 10,000 pieces of content you scroll through on Facebook, you might find one or two pieces of hate speech. I wish it were zero. I don’t think it will ever get to zero. But here’s the thing: it is down by about 50 to 60 percent because of advances in A.I. If you can train A.I. classifiers to recognize a pattern, which you can with hate speech, for instance, they become an incredibly effective tool in that adversarial space. And if our policies, systems and technology are working as they should, in a sense we should be agnostic about whether a piece of content was generated by a machine or a human being.
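
As an illustration of the classifier approach Clegg describes, here is a minimal sketch assuming a toy TF-IDF-plus-logistic-regression pipeline in scikit-learn. The training examples and the review threshold are hypothetical; Meta’s production systems are far larger neural models, but the train-then-score pattern is the same.

```python
# A minimal sketch of training a text classifier to flag
# policy-violating posts. All data here is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: 1 = violates policy, 0 = benign.
posts = [
    "I hate this weather",           # benign despite the word "hate"
    "group X should be eliminated",  # violating
    "great match last night",        # benign
    "people like Y are vermin",      # violating
]
labels = [0, 1, 0, 1]

# TF-IDF features feed a simple linear classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(posts, labels)

# Score new content; anything above a review threshold would be routed
# to enforcement, regardless of whether a human or a machine wrote it.
score = clf.predict_proba(["those people are vermin"])[0][1]
print(f"violation probability: {score:.2f}")
```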

ALDERMAN Mark Zuckerberg has said that he is going to release the code for Meta’s A.I. models, because that would be in the interest of the greater good. People anywhere in the world could basically go in and improve it. And yet there has been a lot of pushback on that.

CLEGG If you look at the history of the internet, it was built on open-source technology. That technology was often more innovative, and experience suggests open source is also safer, because you get the wisdom of a crowd applied to it. Of course there are circumstances where you shouldn’t open source. Just a few weeks ago, our researchers came up with a new, extraordinarily powerful tool that can take a tiny snippet of your voice and extrapolate from it to mimic you perfectly. I clearly don’t want to open source that. But where it’s possible, I think it is generally better to do so, because you democratize access to the technology, so it’s not just a handful of big American and Chinese companies who have the GPU capacity and the data to run these models.
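
In practice, releasing a model openly means publishing its weights so that anyone can download, run and fine-tune them. Below is a minimal sketch using the Hugging Face transformers library; the model identifier refers to Meta’s openly released Llama 2, access to which requires first accepting Meta’s license on Hugging Face.

```python
# A minimal sketch of what open-sourcing a model means in practice:
# anyone can download the released weights and run or fine-tune them.
# Note: Meta's Llama weights are gated behind a license acceptance.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # an openly released Meta model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Open models let researchers", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```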

But as I say, if we ever approach, as a world, that dystopian vision of what’s called “artificial general intelligence,” where these models develop an autonomy and an agency of their own, then of course you’re in a completely different ballgame. And then, I think, the debate completely changes.

ALDERMAN I asked ChatGPT to come up with a couple of questions for Nick Clegg about the dangers that A.I. could pose to democracy. I’m going to ask you one, and keep in mind that this is the A.I. itself asking: “We’ve got two billion people voting in elections around the world next year. We know that A.I. is increasingly being used to manipulate voters with tailored content. Can tech companies keep up with this?”

CLEGG I think it’s worth thinking about this conceptually: there’s generation, and then there’s circulation. One of the things we need to work out is whether we can have industrywide standards for identifying when something has been generated by A.I., either through explicit watermarking or invisible watermarking. If you can do that, and you can set those standards across the different platforms, it’s not rocket science to then limit the distribution of the material you don’t want circulating very widely. The issue we have at the moment is that everybody is coming up with different standards of watermarking, or not watermarking at all.

I worry that we are spending quite a lot of time and energy speculating on existential threats from A.I. that may or may not arise, when right now, ahead of those elections where two billion people are going to cast their ballots, we need common watermarking standards. We can do whatever we can on Facebook, for instance, to explicitly watermark what our generative A.I. image tools produce, as we are doing. But if others don’t, it’s going to be totally confusing to users, and we won’t be able to police the content that comes from off-platform onto our platform.
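
As a concrete illustration of invisible watermarking, here is a minimal sketch of a toy least-significant-bit scheme: it hides a short tag in the low bits of an image’s pixels, imperceptible to a viewer but mechanically detectable. The eight-bit tag and the scheme are invented for illustration; the actual standards under discussion are far more robust.

```python
# A toy invisible watermark: embed a detectable tag in A.I.-generated
# images without visibly changing them. Illustration only.
import numpy as np

MARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # hypothetical tag

def embed(image: np.ndarray) -> np.ndarray:
    """Hide MARK in the least significant bits of the first 8 pixels."""
    out = image.copy()
    flat = out.reshape(-1)
    flat[:8] = (flat[:8] & 0xFE) | MARK  # overwrite lowest bit only
    return out

def detect(image: np.ndarray) -> bool:
    """Check whether the first 8 pixels carry MARK in their low bits."""
    return bool(np.array_equal(image.reshape(-1)[:8] & 1, MARK))

img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
marked = embed(img)
print(detect(marked), detect(img))  # True, (almost certainly) False
```

If every platform embedded and detected the same mark, the circulation half of the problem would become a filtering rule rather than a detection puzzle.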

ALDERMAN You mentioned regulation. Why should the public trust the tech industry to do a good job of regulating itself or sticking to any guidelines, when it doesn’t exactly have a great track record of doing so?

CLEGG That’s not what the tech industry is saying. The tech industry is saying there needs to be regulation, but regulation’s slow. So in the meantime, we’re coming up with a number of — I mean, what do you want, that we don’t come up with voluntary commitments? It’s the only thing we can do at the moment in the absence of law. I think everybody in the industry recognizes you do need hard guardrails on some of this stuff in law. But to get the ball rolling, the industry is saying, “Look, you need uniform standards on transparency.” You definitely need industrywide cooperation on watermarking and what’s called “provenance and detectability.” Now, in the absence of law, in the absence of lawmakers working out what they want, the industry is moving forward.

You always have this mismatch between the speed with which technology develops and the pace at which legislation is formed. The latter is always slower than the former. My concern is that this mismatch is now being coupled with what I believe to be a somewhat paralyzing debate about, “Oh my gosh, is the end nigh?” And I think that’s an unhelpful combination.
