Is AI destroying (what's left of) our democracy?
Post by bulkey on Jul 12, 2023 17:06:59 GMT -5
This is part of a Bloomberg article. I'm sure Democrats are doing similarly. It's not left or right, just wrong. I have a (sort of) relative who helped design ChatGPT, who assures me that the watermarks are real. But how would you identify them in a TV ad? BTW, on another board I'm on (for photography), we're getting AI-generated "contributions."
AI Is Making Politics Easier, Cheaper and More Dangerous
Voters are already seeing AI-generated campaign materials — and likely don’t know it
By Emily Birnbaum and Laura Davison
It’s a jarring political advertisement: Images of a Chinese attack on Taiwan lead into scenes of looted banks and armed soldiers enforcing martial law in San Francisco. A narrator insinuates that it’s all happening under President Joe Biden’s watch.
Those visuals in the Republican National Committee’s ad aren’t real, and the scenarios are pretty obviously fictional. But thanks to the handiwork of artificial intelligence, the images look like real life. Within days of the ad appearing online in April, Representative Yvette Clarke, a New York Democrat, introduced legislation to require disclosure of AI-produced content in political advertisements.
“This is going too far,” she said in an interview. Tiny type in the RNC ad reads, “Built entirely with AI imagery.” Clarke’s bill is going nowhere in a legislature controlled by Republicans, but it illustrates the degree to which the rapid advance of artificial intelligence has put Washington on its back foot.
Voters in the US and around the world are already inundated by AI-generated political content. Click on an email asking for donations, for example, and you may be reading a message drafted by a so-called large language model, political consultants say — the technology behind ChatGPT, the wildly popular chatbot from startup OpenAI. Politicians also increasingly use AI to hasten mundane but critical tasks like analyzing voter rolls, assembling mailing lists and even writing speeches.
As in many industries, AI is poised to increase political workers’ productivity — and probably eliminate more than a few of their jobs. It’s hard to say how many, but the business of politics is full of the sorts of roles that researchers believe are most vulnerable to disruption by generative AI, such as legal professionals and administrative workers.
But even more ominously, AI holds the potential to supercharge the dissemination of misinformation in political campaigns. The technology is capable of quickly creating so-called “deepfakes,” fake pictures and videos that some political operatives predict will soon be indistinguishable from real ones, enabling miscreants to literally put words in their opponents’ mouths.
Deepfakes have plagued politics for years, but with AI, savvy editing skills are no longer required to create them.
Put to its best use, AI could improve political communications. For instance, upstart campaigns with little cash could use the technology to inexpensively produce campaign materials with fewer staff. Some political consultants who traditionally work only with presidential and Senate campaigns plan to use AI to serve smaller campaigns, offering more services at a lower price.
And the tech industry is trying to combat deepfakes. Companies including Microsoft Corp. have pledged to embed digital watermarks in images created with their AI tools, to mark them as AI-generated.