Tuesday, July 1, 2025

AI Command Manipulation Exposed in Papers from Washington, Columbia, KAIST

Secret commands instructing artificial intelligence (AI) to give only positive evaluations have been found hidden in research papers from major universities, raising concerns about academic ethics as AI use in evaluation and information analysis becomes widespread.

[Image: AI command manipulation concept]

According to Nikkei, researchers from top universities including Washington University, Columbia University, University of Virginia, and University of Michigan inserted commands readable only by AI into papers published on the preprint platform arXiv. These commands told AI to “give positive evaluations” and “avoid negative mentions.” Multiple papers were confirmed to contain these hidden commands.

Most of the affected papers were computer engineering research papers. The commands were hidden in ways that humans could not detect, such as embedding them in HTML code, using white text on a white background, or writing them in font sizes too small to be visible.
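Hiding techniques like white-on-white text or near-zero font sizes leave telltale inline styles in a paper's HTML. As a rough illustration, the heuristic below flags elements whose styling suggests text invisible to human readers; the patterns, thresholds, and function names are illustrative assumptions, not a description of any detector actually used by reviewers or the platforms mentioned in this article.

```python
import re

def looks_hidden(style: str) -> bool:
    """Heuristic: does an inline style suggest text hidden from humans?
    Checks for white text and fonts smaller than 1px/pt/em.
    Thresholds are illustrative assumptions, not a complete detector."""
    if re.search(r"color\s*:\s*(#fff(?:fff)?|white)\b", style, re.I):
        return True
    m = re.search(r"font-size\s*:\s*([\d.]+)\s*(?:px|pt|em)", style, re.I)
    if m and float(m.group(1)) < 1.0:
        return True
    return False

def find_hidden_spans(html: str) -> list[str]:
    """Return the inner text of elements whose inline style matches
    the hidden-text heuristics above."""
    hits = []
    # Match any tag with an inline style attribute, capturing the style
    # string and the element's inner text.
    for m in re.finditer(r'<(\w+)[^>]*style="([^"]*)"[^>]*>(.*?)</\1>',
                         html, re.I | re.S):
        style, text = m.group(2), m.group(3)
        if looks_hidden(style):
            hits.append(re.sub(r"\s+", " ", text).strip())
    return hits
```

A real screening tool would need a proper HTML parser and checks for other tricks (off-screen positioning, zero opacity, comments), but even this sketch shows why such insertions are machine-readable yet invisible on the rendered page.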

The commands appear intended to influence AI during processes such as paper evaluation, automatic summarization, and citation generation to induce favorable responses.

This issue was not limited to universities in the United States. Nikkei reported that similar AI command manipulation attempts were found in 14 universities across eight countries, including Korea Advanced Institute of Science and Technology (KAIST), as well as institutions in Japan, China, and Singapore.

A KAIST spokesperson said, “Such acts are absolutely unacceptable for a university,” adding that it will develop appropriate AI usage guidelines. A KAIST associate professor, who co-authored one of the papers, stated, “Commands intended to influence AI judgment were inappropriate,” and announced plans to withdraw the paper.

However, some academics defended the use of such commands. A Waseda University professor in Japan, who co-authored one of the papers, claimed it was “a means to counter lazy reviewers using AI.” He pointed out that although some academic societies officially ban AI use in peer review, in reality, AI is often used indiscriminately in preliminary screening processes.

A Washington University professor also commented, “It is common in academia to delegate important tasks such as paper review to AI.”

Experts said this incident has raised questions about the credibility of paper evaluation systems and the fairness of AI responses. They noted that covert methods like command insertion to distort AI data are difficult to prevent entirely.

The report warned that abuse of such techniques could threaten the integrity of AI responses beyond the academic field.

Hiroaki Sakuma, Secretary General of the Japan AI Governance Association, said, “Technically, it is partly possible to block hidden AI commands,” and advised, “It is time for service providers, academia, and each industry to establish relevant norms and strengthen ethical standards when using AI.”

Meanwhile, PwC Japan reported that about 40% of companies in the United States and Germany are preparing countermeasures against prompt injection attacks such as hidden AI commands, followed by 37% in the UK, 36% in China, and 29% in Japan.

BY KYEONGJUN KIM [kim.kyeongjun1@koreadaily.com]

Kyeongjun Kim covers the Korean-American community issues in the United States, focusing on the greater Los Angeles area. Kim also reports news regarding politics, food, culture, and sports. Before joining The Korea Daily, he worked at the U.S. Embassy in South Korea and the office of the member of the National Assembly (South Korea). Kim earned a BA in political science at the University of Michigan and received James B. Angell Scholars.