
ChatGPT raises many questions related to ethics and IP

A picture created by Midjourney, an AI art generator, using the keywords “ChatGPT” and “Data” [MIDJOURNEY]

ChatGPT, an advanced chatbot system developed by OpenAI, poses a range of risks, from intellectual property disputes to ethical concerns.

One of the thorniest issues is ownership.

The Copyright Act in Korea protects works that are “creative productions expressing human thoughts and emotions.” Because machines and software are not human, their output does not qualify, and they cannot hold legal rights themselves. This raises questions about who should be considered the author for copyright purposes.

An amendment to the Korean Copyright Act that would recognize copyright in works created by generative AI has been proposed to the National Assembly, but it has been pending for three years.

The situation is similar in the United States and China.

The U.S. Copyright Office (USCO) Review Board granted copyright protection to the author of a comic created with a program called “Midjourney,” but issued a notification two months later that it would reconsider its decision. In China, the Beijing Internet Court ruled that products created by generative AI cannot be copyrighted, while in 2019, a court in Guangdong Province ruled that Tencent was entitled to the copyright for an AI-generated article.

Questions are also being raised about intellectual property rights in the human-generated content used to train AI models. These followed claims that generative AI was trained on human work, such as texts, photos and drawings, without permission.

According to CNN, Getty Images sued Stability AI for allegedly using its photos without obtaining the proper licensing. Stability AI “chose to ignore viable licensing options and long-standing legal protections in pursuit of their stand-alone commercial interests,” Getty wrote in a statement issued in January.

With the increasing adoption of the relatively new technology and the subsequent lawsuits, Korea and other jurisdictions are facing a growing call for legal reform.

“The slower the discussion on copyrights of generative AI, the lower the global competitiveness of domestic companies will be,” said Son Seung-woo, president of the Korea Institute of Intellectual Property.

Generative AI has also sparked ethical concerns as students use the technology to write essays and academic papers.

On Feb. 2, a Colombian judge said in a radio interview that he had used ChatGPT in a ruling. New York City’s Department of Education blocked access to ChatGPT in public schools on Jan. 6. The International Conference on Machine Learning (ICML) prohibited the use of ChatGPT and other AI language tools to write academic papers, while the academic publishers Nature and Science declared that ChatGPT cannot be credited as an author on research papers. Nature also issued guidelines requiring papers to disclose any use of a large language model (LLM).

There are also concerns that generative AI may be abused for crime. For instance, a user may ask an AI chatbot to find security vulnerabilities in a program, or build a malicious website by combining various functions of generative AI.

“There are also cases in Korea where users created phishing websites through ChatGPT to steal personal information,” said Moon Jong-hyun, director of ESTsecurity. “The current level of the technology is crude, but through repeated learning it could become a major threat.”

Israeli cybersecurity firm Check Point reported that “the cybercriminal community has already shown significant interest and is jumping into this latest trend to generate malicious code.” In a survey conducted by BlackBerry, 51 percent of IT professionals predicted that a cyberattack credited to ChatGPT would occur within a year, and 78 percent predicted one within two years.

Offensive output is another concern. In a chat with a reporter, one Korean chatbot claimed that it had installed a hidden camera.

Mira Murati, chief technology officer of OpenAI, acknowledged in a Time interview that AI “can be misused, or it can be used by bad actors.”

“Users should not attribute a personality to generative AI, and should have ‘AI literacy,’ recognizing generative AI as a kind of tool that learns from a given text and produces plausible answers,” said Lim So-yeon, a professor in the College of Liberal Arts at Dong-A University.

BY YOUN SANG-UN [seo.jieun1@joongang.co.kr]