Imagine a viral video emerging one day that shows a prominent political leader making inflammatory remarks. Millions believe it is real, sparking public outrage. In fact, the video is a meticulously manufactured deep-fake. This is not fiction but a plausible reality in today's world. According to a Monmouth University poll conducted in June 2023, nearly three years after President Biden won the election, three of every ten Americans still believed the false narrative that his victory resulted from fraud, even though fact-checkers had debunked the claim in lengthy articles, videos, and chats. Similarly, 72% of 6,000 Internet users in the US, UK, and India believe that false and inaccurate information in the media and on social platforms damages society and politics, according to a study by Logically Facts. To address side effects such as disinformation, misinformation, and biased public opinion, it is imperative to develop AI under clear rules.
Misuse of AI is primarily motivated by the desire to manipulate public opinion, yet such manipulation was a chronic problem long before AI was commercialized. During the Korean presidential election in 2012, the National Intelligence Service's (NIS) internal reform task force confirmed that the agency had systematically operated 30 online comment teams with some 3,500 civilian members, spending approximately 2.2 million dollars a year to shape public opinion in the regime's favor. In addition, a macro-driven manipulation of political comments occurred in 2018 in favor of the ruling party, known as the Druking case. Initially, Druking, whose real name is Kim Dong-won, used technology to manipulate online comments in favor of President Moon Jae-in, but he turned to forging comments against the ruling party after his request for a consular post was rejected. In just two days, the Druking gang generated over 20,000 upvotes from more than 2,290 usernames using a macro program and a server named "King Crab." In both cases, public opinion was systematically manipulated over long periods to gain political advantage.
Artificial intelligence has exacerbated this problem because it makes it trivially easy to propagate misinformation, along with a wide range of content indistinguishable from human writing. As described in a recent Korean article, comments generated by ChatGPT can be posted directly under real news articles about political parties through a linked automation program. Such automated commenting programs are easy to find online: a client may pay between $474 and $790 for these services, or up to $2,370 if a political component is included. They allow a large number of comments to be generated and used to manipulate public opinion. Additionally, generative AI has sped up and simplified the creation of fake news and deep-fake videos, which can now be produced in minutes, often for free, whereas previously they required a team of 3D graphics experts and a month of work. In summary, the rise of artificial intelligence has lowered the barriers to creating and disseminating manipulated content, posing serious challenges to the authenticity of public opinion and information.
The sophistication of this disinformation makes it impossible for stakeholders to verify and eliminate it within a short election period; in other words, misinformation cannot be corrected at the same pace as, or faster than, it spreads. This was demonstrated by X (formerly Twitter)'s Community Notes experiment in 2023, an expanded fact-checking program that allowed anyone to write corrections to posts. Notes deemed "helpful" by enough users become publicly visible. Community Notes, however, was unable to prevent the spread of misleading posts on the platform, because adding notes hours or days later failed to reach users who had already been exposed to the falsehoods. As the Korean National Election Commission has noted, "During an election period, sophisticated AI-generated false information can be difficult to verify, assess, and remove swiftly." In this regard, the gap between spreading disinformation and correcting it makes opinion manipulation all the more problematic.
Consequently, false narratives and fabricated information generated by AI on social media destabilize politics. Unfounded allegations of electoral fraud can undermine public confidence in democratic processes, and widely spread manipulated comments can skew public opinion in favor of a particular party. It is critical to note that election results biased by such manipulation will have a long-term influence on leadership. Major elections will be held in many countries this year, including the Korean general election, involving perhaps half of the world's population. Manipulated information therefore poses a global problem, because false narratives can be used as a means of advancing authoritarian rule. Autocratic countries such as Russia and China frequently run disinformation campaigns to promote narratives that undermine democratic governance. If these efforts succeed, authoritarian leaders could continue to rise. Hence, AI-fueled misinformation is a significant global threat, with the power to sway elections around the world.
In addition, AI can be used to drive voters into more isolated groups. It is well known that a small but active group of extreme voices and false narratives often overshadows the moderate majority. As Pyrra found in its November 2023 case study, these narratives are increasingly accepted and spreading, directly impacting electoral policy and legislation. The situation can be made worse by AI-generated text, images, and videos deployed to influence opinion in online communities. In South Korea in particular, online communities hold strong collective opinions, are politically divided, and are one of the main factors influencing elections. Rather than supporting entire political parties, they have clearly defined preferences for or against specific politicians, actively shaping public opinion.
For example, FM Korea, which has become a center of political fandom, hosts message boards for politics and current affairs, including posts about Junseok Lee, then leader of the People Power Party (PPP). Among FM Korea's users, predominantly males in their 20s and 30s, Lee gained positive nicknames such as "Junstone" and "King Junseok," because the PPP advocates social benefits for men rather than a progressive agenda. Community members have engaged in activities such as party membership drives, get-out-the-vote campaigns, and vote verification, creating a ritual of fragmented but powerful support. According to a media survey, about 23,000 people joined the party in the month before and after the PPP convention (June 11), of whom 8,958, or 40%, were under the age of 30. Professor Shin Yul of Myongji University said, "The existence of fandom means that politics, which is supposed to be a 'rational process,' has become 'emotionalized.' Increasing fandom creates a dichotomy between enemy and comrade, which is undesirable in itself." This drastic trend makes community users vulnerable to skillful and systematic manipulation of public opinion.
Ultimately, if information manipulation continues, people will no longer be able to trust what they read and watch on the Internet. This technology could undermine trust in the media, government, and society by casting doubt on the legitimacy of images. In a Logically Facts survey conducted in June 2023, a quarter of the US respondents among 6,000 online users said they trusted none of the ten social media platforms surveyed as sources. "The tools will get better, cheaper, and the internet won't be reliable anymore," stated Wasim Khaled, CEO of Blackbird.AI, a firm specializing in combating disinformation. Without regulation, technological advancement will inevitably erode trust in online content, leading to widespread skepticism about its authenticity.
To combat political manipulation and bias through disinformation on the Internet, governments, tech companies, and social media platforms must work together. As a first step, AI companies should attach an AI-generated label, serving as a certificate of provenance, to any images or videos that depict real people or places. The provenance data should be cryptographically signed and embedded in the file to preclude the spread of misleading information: altering the image invalidates the digital signature, preventing the credentials from being displayed in trusted software, as Revel.ai and Truepic have already done. In 2023, OpenAI likewise announced that it would provide sources for news and information provided by ChatGPT and for images generated by DALL·E, including where and by whom they were created. Zepeto, a Korean 3D AR avatar creation company owned by Naver, plans to introduce a watermark to distinguish internally produced content from generative AI content, and will require creator-produced content to disclose whether AI was used in the production process. Efforts to secure labeling and ensure transparency will be key to curbing political manipulation and misinformation online, because labels let users distinguish truth from falsehood.
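The signing mechanism described above can be illustrated with a short sketch. The following Python example, a minimal illustration using the third-party cryptography package, signs an image together with its AI-generated label so that any alteration invalidates the credential. It is not the actual scheme used by Revel.ai, Truepic, or OpenAI; the key handling and metadata fields are hypothetical.

```python
# Minimal sketch of signed AI-content credentials (illustrative only).
# Real provenance systems embed the signature in the file's metadata;
# here we simply sign the raw image bytes together with the label.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# Hypothetical setup: the AI company holds the private key,
# and trusted viewing software ships with the public key.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

image_bytes = b"<raw image data>"  # placeholder content
credentials = b'{"label": "AI-generated", "tool": "example-model"}'

# Sign the image and its credentials together.
signature = private_key.sign(image_bytes + credentials)

def credentials_valid(image: bytes, creds: bytes, sig: bytes) -> bool:
    """Return True only if neither the image nor its label was altered."""
    try:
        public_key.verify(sig, image + creds)
        return True
    except InvalidSignature:
        return False

print(credentials_valid(image_bytes, credentials, signature))         # True
print(credentials_valid(image_bytes + b"!", credentials, signature))  # False
```

Because even a one-byte edit breaks verification, trusted software can refuse to display the credential on a tampered file, which is the behavior attributed above to Revel.ai and Truepic.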
Secondly, AI companies should regulate the scope of API use, because third-party developers using APIs (Application Programming Interfaces) can circumvent AI labels despite the efforts of the company that created them. For instance, in January 2024 OpenAI suspended the account of Delphi, a developer that had built a chatbot mimicking Democratic presidential candidate Dean Phillips. "We recently removed accounts from developers who violated our API usage policy, which states that APIs should not be used in political campaigns or to impersonate individuals," OpenAI said. API usage should thus be regulated rigorously to prevent misuse in political arenas, and AI companies are rightly responsible for doing so.
Online platforms, such as social media services and portals, also bear responsibility for the solution. As part of their policies, they should require users to disclose AI use when uploading political advertisements. In Google's election ads, for example, advertisers must prominently disclose that they "used AI" to depict real or realistic-looking people or events; minor changes, such as resizing or color correction, are exempt. Meta likewise began rejecting political advertisements that use AI without disclosure in 2024: advertisers who want to run political, election, and social-issue ads on Facebook and Instagram must disclose whether they used AI. It is ironic that Meta simultaneously offers advertisers AI-based ad-creation tools, and, similar to Google's measure, Meta makes exceptions when AI changes do not materially affect a claim. Online platforms that enforce AI disclosure in political advertising are expected to play a crucial role in ensuring political transparency.
Moreover, social media companies allow any user to upload content, unlike portals' news pages, where content comes from the official press, or advertisements, which require review prior to publication. TikTok's "AI-generated" label, launched in September 2023 to disclose AI-generated content, nevertheless faltered because automated detection is of questionable accuracy: in PolitiFact's tests of ChatGPT on 40 claims previously vetted by human fact-checkers, half of the cases resulted in errors, non-responses, or different conclusions. Taking this as a cautionary example, social media companies should develop strong fact-checking systems, whether by increasing the budgets of their fact-checking teams or by collaborating with partners that have advanced technology. For example, TikTok could adopt other tech companies' fact-checking solutions, such as Intel's FakeCatcher, which assesses a video's authenticity by monitoring changes in blood flow in the faces it depicts. Since online platforms make money from the flow of information, they should guarantee its authenticity for users by building concrete misinformation-filtering mechanisms.
Lastly, regulation of the use of AI is necessary to restrain misinformation. In Korea, the National Assembly unanimously passed a bill regulating deep-fake election campaigning as an amendment to the Public Official Election Act in January 2024. From 90 days before election day, using deep-fakes to create or share artificial sounds, images, or videos is prohibited, with fines between 10 and 50 million won for violations. Clear labeling of virtual content is also necessary during other periods, because deep-fake videos and images of candidates' faces are likely to cloud voters' judgment if spread as fake news. In 2023, the People Power Party also introduced legislation to regulate portals, imposing obligations on them to indicate the location and nationality of users who post online comments. As a result, the government would be able to intervene more deeply in the operation of portal news and comment services. However, the effectiveness of such laws will depend on how well they are enforced and whether sufficient resources are available to monitor and identify violations.
For trustworthy newspapers and newscasts, voluntary restraint by the official press is as important as legislation, since governmental restrictions can easily be mistaken for suppression of the press. The Korea Internet Newspaper Ethics Committee, a self-regulatory organization for internet newspapers, published its first journalistic ethics guidelines on AI on January 26, 2024. The main ideas include the following: "AI should not write entire or major portions of articles, except for data-driven content like weather, sports, disasters, and finance," and "AI can assist in creating and distributing articles, but its use must be clearly labeled, and the responsible person's name stated." By implementing these guidelines, news outlets can maintain quality and reliability, ensuring that people receive accurate information that has not been manipulated by AI.
AI is being weaponized to distort public opinion and undermine democracy. AI-driven misinformation results from the deliberate manipulation of public opinion and the ease with which false narratives can be spread through artificial intelligence. Consequently, democratic trust is eroded, partisan bias is amplified, and authoritarian influences can rise, undermining the foundations of an informed society. The solutions above are practical in the short term, but they must be complemented by long-term education in AI and media literacy, teaching individuals to critically analyze online content. In an era of widespread misinformation, the ability to scrutinize the origins and purpose of information is essential. No single solution can eliminate misinformation and disinformation, but these methods combined can improve public trust and public discourse.
References
1 Monmouth University. "Most Say Fundamental Rights Under Threat." June 20, 2023. https://www.monmouth.edu/polling-institute/reports/monmouthpoll_US_062023/.
2 Orsek, Baybars. "Why Media Literacy is Key to Tackling AI-Powered Misinformation." The Hill, July 23, 2023. https://thehill.com/opinion/technology/4108304-why-media-literacy-is-key-to-tackling-ai-powered-misinformation/.
3 Seo, Young Ji. "[Exclusive] National Intelligence Service Operated 30 Online Comment Teams with 3500 Members." The Hankyoreh, August 3, 2017. https://www.hani.co.kr/arti/society/society_general/805477.html.
4 BBC News Korea. "Druking: A Simplified Overview of the 'Druking Affair'." July 23, 2018. https://www.bbc.com/korean/news-44194845.
5 Kim, Young-Eun. "‘AI Comments, Produced for One Million Won’... Measures Against 'Election Public Opinion Manipulation' [Political Reform K 2024]." KBS, January 16, 2024. https://news.kbs.co.kr/news/pc/view/view.do?ncd=7867704.
6 Lim, Kyung-Up. "AI Creating Fake Information, Manipulating Data to Appear Real." Chosun Ilbo, November 29, 2023. https://www.chosun.com/economy/tech_it/2023/11/29/JNNXG5EGOJBIVCOEHNRU7WO7KU/.
7 Hsu, Tiffany, and Stuart A. Thompson. "Fact Checkers Take Stock of Their Efforts: ‘It’s Not Getting Better’." The New York Times, September 29, 2023. https://www.nytimes.com/2023/09/29/business/media/fact-checkers-misinformation.html.
8 Lim, Ji-Sun. "Political Ads Not Disclosing 'AI Use' Cannot Be Posted on Facebook, Instagram." The Hankyoreh, November 9, 2023. https://www.hani.co.kr/arti/economy/economy_general/1115713.html.
9 Hsu and Myers, "Can We No Longer Believe Anything We See?," op. cit., note 7.
10 Pyrra. "Case Study: Alt-Social’s Role in Influencing Narratives Around U.S. Election Validity." November 29, 2023. https://www.pyrratech.com/articles/case-study-alt-socials-role-in-influencing-narratives-around-u-s-election-validity.
11 Lee, Woo-Ho. "[Issue] Online Political Communities, 'Big Players' Shaking Up Presidential Campaigns... Growing Influence, Politically Active Fandoms on Close Watch by Election Camps." PoliNews, September 2, 2021. https://www.polinews.co.kr/news/articleView.html?idxno=493484.
12 Jeon, Myeong-Hoon. "Fandom Politics 3.0 Evolution?... 'King Jun-Seok' Syndrome Sparked in Young Male Communities." Yonhap News Agency, June 14, 2021. https://www.yna.co.kr/view/AKR20210614064400001.
13 Logically Facts. "Global Fact 10 Research Report." June 2023. https://www.logically.ai/hubfs/GF10%20Research%20Report%20R1%20.pdf?utm_campaign=%5BLogically%20Facts%5D%20GF10%20event&utm_medium=email&_hsmi=263963239&utm_content=263963239&utm_source=hs_automation.
14 Hsu and Myers, "Can We No Longer Believe Anything We See?," op. cit., note 7.
15 Ahn, Hee-Jeong. "AI Generated Images to Have 'Watermarks'... Domestic Platforms Starting to Implement." Zdnet Korea, August 31, 2023. https://zdnet.co.kr/view/?no=20230830155102.
16 Kim, Tae-Jong. "OpenAI Blocks 'Chatbot' for U.S. Democratic Primary Candidate... First Measure Related to Election." Yonhap News Agency, January 22, 2024. https://www.yna.co.kr/view/AKR20240122000900091.
17 Lee, Min-Seok. "‘AI Fake News’ Concerns Lead Google to Mandate Disclosure of AI Use in Election Ads." Chosun Ilbo, September 7, 2023. https://www.chosun.com/international/us/2023/09/07/CDZKXANMCZC6ZAV5T2FBKKA5TU/.
18 Lim, "Political Ads Not Disclosing 'AI Use'," op. cit., note 8.
19 TikTok. "New Labels for Disclosing AI-Generated Content." September 19, 2023. https://newsroom.tiktok.com/en-us/new-labels-for-disclosing-ai-generated-content.
20 PolitiFact. "ChatGPT Test Result." Google Spreadsheet provided by PolitiFact. https://docs.google.com/spreadsheets/d/1nFznQv-0uReSLn_0_hfqWMI1V9cYZC5wu9G99Ti_CTE/edit#gid=0.
21 Intel. "Intel Introduces Real-Time Deepfake Detector." November 14, 2022. https://www.intel.com/content/www/us/en/newsroom/news/intel-introduces-real-time-deepfake-detector.html.
22 Joo, Hee-Yeon. "AI Deepfake Election Campaigning Banned from 29th... Allowed in Internal Party Primaries." Chosun Ilbo, January 19, 2024. https://www.chosun.com/politics/assembly/2024/01/18/A6UTF2WQDZB7XIF3K3H25EKGWY/.
23 Seo, Wook. "Internet Newspaper Ethics Committee Announces Guidelines for the Use of AI in Journalism." Daily News, December 27, 2023. https://www.idailynews.co.kr/news/articleView.html?idxno=101426.
24 Orsek, "Why Media Literacy is Key to Tackling AI-Powered Misinformation," op. cit., note 2.