"I haven't had a good night's sleep since ChatGPT launched," says OpenAI CEO Sam Altman
Sam Altman said that he hasn’t slept well since ChatGPT’s debut, citing the stress of shaping how millions interact with AI every day.
Sam Altman, CEO of OpenAI, admits he finds it tough to rest easy these days. In an interview with Tucker Carlson, Altman said he hasn't slept well since ChatGPT launched. The AI's influence is so broad that even small tweaks to how it answers or reacts can quietly shape the lives of hundreds of millions. According to Altman, the problem isn't robot takeovers or sci-fi disaster scenarios. What really worries him is how simple, everyday decisions about ChatGPT's responses play out at global scale, affecting how people think and act in subtle, unpredictable ways.

Altman’s remarks came after Carlson pressed him on the personal weight of holding the keys to such a powerful tool. The OpenAI boss described struggling with the “angst” that comes from knowing small calls on model behaviour ripple out to countless real people. Altman said, “What I lose sleep over is that very small decisions we make about how a model may behave slightly differently are probably touching hundreds of millions of people. That impact is so big.”
When AI’s impact turns personal and legal
One example, Altman explained, is suicide prevention. According to the World Health Organization, about 720,000 people die by suicide each year, or roughly 15,000 every week. Altman estimates that if even 10% of them were ChatGPT users, that would be approximately 1,500 people each week who may have talked to the system and still taken their own lives. "Maybe we could have said something better. Maybe we could have been more proactive," Altman reflected during the interview. These worries are not abstract, either. OpenAI was named in a lawsuit by parents blaming ChatGPT for encouraging their teenage son's suicide. Altman described the case as a "tragedy" and indicated that OpenAI is now studying ways for the platform to alert authorities if a minor brings up suicide and parents cannot be reached. He clarified that the company has no fixed policy here yet, since reporting such cases also raises privacy concerns.
He pointed out that in countries where assisted suicide is legal, such as Canada and Germany, ChatGPT might bring up those options to suffering adults. However, he insisted that the model should never push an agenda or make value judgments, especially on high-stakes risks like bioweapons or other gray-area issues. According to Altman, adults should generally be encouraged to make their own choices, but that principle comes with bright lines on safety. OpenAI, Altman said, draws on input from ethicists and advisors but leaves the final calls to him and the board. "The person you should hold accountable is me," he told Carlson.
The conversation also touched on how ChatGPT seeps into culture in small ways. Altman joked that even ChatGPT's writing habits, such as its cadence, can catch on in human writing. That sort of subtle, barely noticed shift, he said, not dystopian robot fears, is what keeps him up most nights.