Artificial intelligence is no longer a speculative technology; it is the operating system of modern life. But as AI spreads through classrooms, hospitals, and beyond, the question isn’t just what it can do. It’s what it should do.
That tension now keeps even the most powerful tech leaders awake at night. In a September 2025 interview, OpenAI CEO Sam Altman admitted that he hasn’t “had a good night of sleep” since ChatGPT launched in 2022. “There’s a lot of stuff that I feel a lot of weight on,” he told interviewer Tucker Carlson, citing the moral dilemmas that accompany hundreds of millions of daily AI interactions.
For Nicolas Genest, CEO and Founder of CodeBoxx, that sleeplessness is the symptom of a deeper structural failure: most companies treat ethics as an afterthought. His prescription is simple but radical—Ethics by Design.
“Ethics by Design means you don’t bolt morality onto software at time of acceptance testing or after a wave of feedback from users or outcry from stakeholders. It’s obviously too late,” Genest says. “You embed it from the first line of requirements…”
It’s a philosophy rooted in responsibility, not reaction. Genest argues that in the age of generative AI—where “entire platforms can now be built at record speed”—developers must take personal moral ownership. “Every individual composing the team must decide that ‘Evil will not go through them.’”
That personal accountability aligns with Altman’s own anxieties. He told CNBC that the “very small decisions” on model behavior—what questions ChatGPT answers, what topics it avoids—keep him up most nights. “Maybe we could have said something better, maybe we could have been more proactive,” Altman reflected, discussing the company’s struggle to prevent harmful advice on sensitive topics like suicide.
Both leaders point to the same reality: technology moves faster than conscience unless ethics are built into the architecture itself.
When Bias Becomes a Feature, Not a Bug
Bias in AI remains one of the industry’s most persistent failures. “Bias isn’t just a problem in AI—it magnifies the impact of bias,” Genest says. “It’s not noise, it’s a signal. It’s telling you where your assumptions, your team makeup, or your incentives are off.”
Altman approaches the same problem from a different angle, noting that OpenAI has consulted “hundreds of moral philosophers” to shape ChatGPT’s ethical framework. “This is a really hard problem,” he said. “We have a lot of users now, and they come from very different life perspectives… But on the whole, I have been pleasantly surprised with the model’s ability to learn and apply a moral framework.”
Still, bias remains entrenched in data. “The paradox at the center of AI is that the data may be factual, but the world it reflects isn’t fair. Models don’t invent bias; they inherit it,” Genest warns. The goal, he says, isn’t to erase bias but to illuminate it. “The answer to bias in AI isn’t to make AI politically correct; it needs to be self-aware enough to separate data from fairness, and prediction from judgment.”
Privacy, Accountability, and the Weight of Power
In his interview, Altman proposed a new concept: “AI privilege.” Like attorney–client or doctor–patient confidentiality, it would ensure that conversations with chatbots remain private, even from government subpoenas. “When you talk to a doctor about your health or a lawyer about your legal problems, the government cannot get that information… I think we should have the same concept for AI,” he said.
For Genest, that proposal underscores the need for transparent design. “Know exactly what your AI is doing, with what data, from and for whom,” he insists. “Companies are likely to be at risk not because their model is malicious—but because it’s a black box.”
Accountability, he argues, must be engineered—not improvised. “You don’t push to production unless someone in the room can say: here’s how you can monitor the output, here is how this decision gets made, and here’s who’s accountable if it goes sideways.”
As AI’s influence expands into national defense, accountability takes on new urgency. OpenAI’s recent $200 million Department of Defense contract shows just how weighty that responsibility has become. Altman admitted uncertainty about how the military will use ChatGPT: “I don’t know exactly how to feel about that.”
The Human Code
Whether in Silicon Valley or at tech academies like CodeBoxx, the lesson is the same: AI is a mirror, not a mask. “When you’re designing intelligent systems, you’re encoding trust,” Genest says. “You better be sure that trust is earned, not assumed.”
Altman expressed a similar hope despite his insomnia. He believes AI could “up-level” humanity by making everyone more capable, not just the powerful. Yet, as both leaders note, that outcome depends on choices made today—and whether ethics can keep up with innovation.
The Takeaway
AI’s biggest challenges—bias, privacy, morality—won’t be solved by better models alone. They’ll be solved by better judgment: as Genest puts it, “technology with built-in traceability, stated purpose and intent—built by humans who take responsibility for the requirements they expressed upstream and how they operationalized the software output downstream.”
Sam Altman may lose sleep over the small decisions, but Genest would argue those decisions are exactly where morality begins. The future of AI won’t hinge on code alone—it will depend on the conscience of those who write it.