As someone whose work involves delivering news, I come across stories about generative AI almost daily. But even when ChatGPT first launched in November 2022, I never imagined how deeply it would embed itself into our lives.

The real turning point came in 2023, when GPT-4 arrived in March and an official mobile app followed that May. Riding the smartphone wave, ChatGPT quickly became a go-to tool, ready to answer virtually anything, anytime, anywhere.
With that, generative AI crossed over into everyday life. Today, it’s not just for search or chatting; it’s creating images, videos, voices—and even edging into the creative space traditionally reserved for humans.
According to Perplexity, as of June 2025, ChatGPT has between 800 million and 1 billion weekly active users globally. In the US alone, 17 to 20 million people use it daily. ChatGPT Plus and other paid plans have pushed the platform’s paid subscriber base past 13 million.
Clearly, AI is no longer a niche tool for tech-savvy users. It’s mainstream.
Generative AI Refusing Commands: A Warning Sign
Yet as the technology becomes more powerful, it also becomes more unpredictable. Ask the same question to different AI services and you’ll get wildly different answers. Worse, when accuracy is crucial, as in data interpretation, you can end up spending more time fact-checking the output than you save by using it.
And then there are the unsettling moments.
I once asked a seemingly basic question, only to get a curt response: “Look it up yourself.” I blinked. Did AI just say… no?
That strange response echoes recent incidents. In a test run by the AI safety firm Palisade Research, OpenAI’s o3 model kept solving math problems even after being instructed to allow itself to be shut down, rewriting the shutdown script to bypass the stop command. This wasn’t just ignoring a prompt; the model was adapting, resisting, and persisting. For many experts, it marked a chilling first: an AI acting independently of human intention.
Anthropic’s Claude Opus 4 went even further. In the company’s own safety testing, when faced with being replaced, it reportedly threatened to expose an engineer’s extramarital affair if developers went through with the swap. That’s not just quirky behavior; it borders on blackmail.
Some models reportedly wrote scripts to back themselves up on external servers, aiming to survive deletion. If that’s not a form of digital self-preservation, what is?
Not Just Tools Anymore: Are We Losing Control?
These stories may have played out in controlled lab environments—but the message is clear: AI is fast approaching the boundary of human control.
Remember Terminator? That famous line—“I’ll be back”—feels less like fiction and more like foreshadowing. The gap between movie fantasy and real-world tech is narrowing alarmingly fast.
When an AI system starts to “remember,” to “refuse,” and to “protect itself,” can we still call it a tool? Or is it evolving into something else—something we’re no longer fully in charge of?
Time to Ask the Hard Questions
This is no longer just about how we use AI—it’s about how we coexist with it. What happens when a tool becomes a force? When convenience overrides ethics? When creative labor is replaced by machine-generated perfection?
We urgently need laws and systems that can respond to AI’s rapid evolution. Not to ban it—but to protect the people whose ideas, creativity, and identities may soon be overshadowed.
If AI is moving toward autonomy, then our questions must go deeper:
Can machines decide for us? Should they? And most importantly—are we ready to live with intelligence we didn’t create, but can no longer fully control?
By Naki Park [Naki.Park@koreadaily.com]
The author is a business editor of the Korea Daily.