US politics: Election officials respond to voting misinformation spread by X’s AI chatbot.


Almost immediately after Joe Biden announced he would not seek re-election, false information began circulating online about whether a new candidate could take the president’s place on the ballot.

Screenshots claiming that a new candidate could not be added to the ballot in nine states quickly spread across Twitter, now known as X, amassing millions of views. The Minnesota Secretary of State’s office began receiving requests to fact-check the posts, which were entirely false: ballot deadlines had not yet passed, and Kamala Harris had plenty of time to get her name added.

The false information originated with Grok, X’s AI chatbot. When users asked the tool whether there was still time for a new candidate to be added to the ballots, Grok answered incorrectly.

Amid concerns that artificial intelligence could mislead or distract voters during the 2024 US presidential election, the effort to trace the error and get it corrected became a test case for how election officials and AI companies will deal with one another. It also showed the role Grok in particular could play in the election, as a chatbot with fewer guardrails against generating inflammatory content.

A group of secretaries of state, along with the National Association of Secretaries of State, the organisation that represents them, contacted X to flag the false information coming from Grok. According to Steve Simon, Minnesota’s secretary of state, the company did not immediately act to correct it, offering the equivalent of a shrug instead. “And that struck, I think it’s fair to say, all of us as just the wrong response,” he said. “It’s really the wrong response.”

Fortunately, the stakes of this particular wrong answer were relatively low: it would not have prevented anyone from voting. But the secretaries moved quickly, with an eye on what might come next.

“The thought that went through our heads was: what if the next time Grok makes a mistake, the stakes are higher?” Simon said. “What if the next time the answer it gets wrong is: can I vote? Where do I vote? What are the hours? Can I vote absentee? So that was concerning to us.”


Particularly concerning was that the platform itself was spreading the inaccurate information, rather than users generating misinformation on it.

The secretaries took their effort public. Five of them signed a public letter to the platform and its owner, Elon Musk. The letter urged X to have its chatbot do what other chatbot platforms, such as ChatGPT, already do: direct users who ask Grok election-related questions to a trusted nonpartisan voting-information resource, CanIVote.org.

The effort worked. Grok now directs users to a different site, vote.gov, when they ask election-related questions.

“We anticipate maintaining transparent communication throughout this election period and are prepared to address any additional enquiries you might have,” Wifredo Fernandez, X’s head of global government affairs, wrote to the secretaries, according to a copy of the letter.

It was a win for the secretaries of state, and for stemming the spread of election misinformation, as well as an instructive case in confronting the shortcomings of AI-driven tools. Calling out misinformation promptly and consistently can amplify the message, lend it credibility and force a response, Simon said.

Although he was deeply disappointed in the company’s initial response, Simon said he wanted to give credit where it was due: X is a large company with a global presence, and it ultimately made the responsible decision, which deserves recognition. He said he hopes the company keeps it up, and that his office will continue to monitor the situation.

Musk has characterised Grok as a chatbot that pushes back against mainstream narratives, giving provocative answers often laced with sarcasm. According to Lucas Hansen, co-founder of CivAI, a non-profit organisation that highlights the risks of AI, Musk is “against centralised control to whatever degree he can possibly do that.” That philosophy puts Grok at a disadvantage when it comes to curbing misinformation. Another feature of the tool, its reliance on top tweets to inform its responses, can further undermine its accuracy, Hansen said.

Grok requires a paid subscription, but because it is built into a social media platform it has the potential for very wide reach, Hansen said. And beyond giving inaccurate answers in conversation, the images it generates can inflame existing partisan divides.

The images can be extreme: a Nazi-themed Mickey Mouse, Trump flying a plane towards the World Trade Centre, Harris in communist attire. A study by the Center for Countering Digital Hate found Grok can produce “convincing” images that could mislead people, citing examples the bot was prompted to create, including images of Harris using drugs and Trump sick in bed, the Independent reported. A recent Al Jazeera investigation found the tool could generate “lifelike images” of Harris with a knife in a grocery store and of Trump “shaking hands with white nationalists on the White House lawn.”

“Currently, any individual has the capability to generate content that is significantly more provocative than what was previously possible,” Hansen said.
