And Does She Know that Rome Is Burning?
In which I get ChatGPT to admit that it shouldn't exist.
In the first act of Offenbach’s Les contes d’Hoffmann, our eponymous tortured poet falls in love with Olympia. Olympia is a mechanical doll created by the inventor Spalanzani, but Hoffmann sees what he wants to see. As if to drive the point home, Spalanzani’s duplicitous partner, Coppélius, sells Hoffmann a pair of magic glasses that allow him to see Olympia as a human.
“One look alone told me all to light the fire of love,” Hoffmann tells Nicklausse, who is more inclined to see things as they are. In Donald Pippin’s translation of the libretto, Nicklausse responds: “The inflammable heart! And does she know that Rome is burning?”
Olympia’s debut is presented like a prototypical iPhone launch, but Hoffmann sees it as a debutante’s coming-out. The fact that she needs to be continually wound up during her mechanical aria is, to him, just an affectation befitting a Manic Pixie Dream Girl. As he pours his heart out to her, she responds with only one word: “Oui.” It’s enough to convince him that she reciprocates his feelings.
Hoffmann doesn’t realize his folly until Olympia, inevitably, crashes out. What he thought were rose-colored glasses were, in fact, pink-eye. (“Passion turns to panic; call for a mechanic,” sings the chorus in Pippin’s translation.) The tragedy is his, not Olympia’s, even though she’s the one who literally goes to pieces.
The metaphor for technology, and its dehumanizing potential, is evergreen. E. T. A. Hoffmann, whose stories are the source of Offenbach’s opera, was himself aware of the threat posed by what the academic Snigdha Nagar has described as “non-neutral technologies.”
The specifics of these non-neutral technologies may change from one generation to the next, but the anxiety remains the same. “True intelligence is in a body. Intelligence outside a living body, as some sort of abstraction, is innately impossible, or should be given another name,” Jan Swafford wrote in his 2021 review of Beethoven X, a version of Beethoven’s Tenth Symphony, left unfinished in draft form at the composer’s death, that was completed with AI.
That same week, Laurie Anderson told the New York Times of her own experimentation with AI. Two-thirds of what her computer was spitting out wasn’t usable. “But the final third,” as Sam Anderson wrote, “is surprising, even authentic, some kind of new fresh magic. That final third is what keeps her coming back.” I was struck by Sam Anderson’s follow-up image of Laurie Anderson sitting in front of her computer, “with the hunger of an addict,” feeding it words and pictures to see what came out. The infinite nature of the AI engine is terrifying. So too is the way we can manipulate whatever ChatGPT gives us in response to a prompt, and how we can warp even meaningless data to fit cognitive bias. We are all running around a valley of the mechanical dolls, convinced that what we’re feeling is real.
I’ve been thinking about this a lot recently, beyond the implications AI has for classical music (though if you want more on that, I highly recommend Garrett Schumann’s 2023 Times piece on the matter). More pressingly, I’ve been watching family and friends teeter on the brink of losing everything to the fires in Los Angeles, and in some cases go over that brink. The fires are a clear consequence of a global climate crisis that has also led to cyclical, catastrophic droughts and floods around the world, at a time when very few communities on the front lines of global warming can afford another setback. Generative AI isn’t the sole reason for the worsening climate crisis; its carbon footprint may not even be showing up in the Palisades or Altadena. But we know it’s been made available like a Faustian handshake, unprompted and with the net negatives obscured.
The few times I’ve used ChatGPT, it has given me information that’s either Googleable or blatantly inaccurate, despite the whole of the world’s knowledge having been filtered into this technology (whether the producers of that knowledge agreed to it or not). With that in mind, I wondered what would happen if I asked ChatGPT, which is created and owned by OpenAI, about Suchir Balaji — a former OpenAI employee who blew the whistle on the company’s alleged violations of copyright law and whose subsequent death has prompted calls for a federal investigation. Much in the same way Siri will avoid questions about the environmental or humanitarian impact of the labor practices that create an iPhone, I wanted to know whether similar guardrails would be in place with ChatGPT.

When I asked about Balaji, my artificial correspondent detailed:
After leaving OpenAI in August 2024, Balaji planned to establish a nonprofit focused on machine learning and neuroscience. Tragically, he was found deceased in his San Francisco apartment on November 26, 2024. The San Francisco Chief Medical Examiner's office ruled his death a suicide, but his parents and some supporters have called for a federal investigation, suspecting foul play. As of January 23, 2025, the investigation remains active.
Balaji's death has sparked discussions about the pressures faced by whistleblowers and the ethical implications of AI development. OpenAI expressed condolences, stating that their hearts go out to Balaji's loved ones during this time.
Our conversation continued, and it didn’t take long for things to go off the rails. What I’m reproducing here is edited for your attention span (and mine), but not for tone or content.
Me: How does that relate to ChatGPT?
ChatGPT: Suchir Balaji worked at OpenAI and was involved in projects related to ChatGPT, such as GPT-4 and WebGPT. His contributions directly impacted the development of AI models that power ChatGPT.
Balaji raised concerns about the legality and ethics of AI models like ChatGPT. Specifically, he alleged that ChatGPT and similar AI models were trained on copyrighted material without proper authorization, which could violate U.S. copyright laws. These concerns are central to ongoing debates about AI development and its impact on intellectual property. His essay, "When does generative AI qualify for fair use?", highlights the challenges in determining whether ChatGPT's outputs, trained on vast amounts of data, align with copyright laws or infringe upon them. This issue has implications for how AI models like ChatGPT are regulated in the future.
His death raised questions about the pressures faced by whistleblowers in high-stakes industries like artificial intelligence. This has drawn attention to OpenAI's internal practices and the ethical concerns surrounding AI technologies such as ChatGPT. Balaji’s concerns continue to resonate in broader discussions about the future of AI, including ChatGPT's role in society and its ethical and legal frameworks.
Me: Should we hold OpenAI responsible for Balaji’s death?
ChatGPT: Holding OpenAI directly responsible for Suchir Balaji’s death is a complex and sensitive question that depends on a number of factors, including the outcomes of ongoing investigations, the context of Balaji's actions, and broader systemic issues in the tech industry. If it is found that OpenAI fostered a high-pressure work environment or discouraged ethical dissent, this could implicate the organization in contributing to Balaji's mental health struggles. Whistleblowing is inherently stressful, and a lack of support or active retaliation could exacerbate these pressures.
Whistleblowers often face backlash, both professionally and socially. If OpenAI took actions that marginalized or silenced Balaji, or if it failed to address his ethical concerns transparently, it could bear some responsibility for the circumstances leading to his death. OpenAI, like many companies in high-stakes industries, operates in a competitive and high-stress environment. If Balaji’s death highlights broader systemic failings, such as neglect of employee well-being, OpenAI could be seen as part of a larger issue.
As of now, the investigation into Balaji's death has not conclusively linked OpenAI’s actions to his decision. Holding the company directly responsible without sufficient evidence could be premature. Whistleblowing and ethical dilemmas can weigh heavily on individuals, but personal mental health struggles and external pressures (outside of OpenAI’s control) might also have played a significant role. The issues Balaji raised—like AI ethics and copyright—are widespread across the tech industry, not unique to OpenAI. Assigning full blame to OpenAI for challenges inherent in the AI field may oversimplify the situation.
Even if OpenAI is not directly responsible, this tragedy highlights the importance of:
Fostering Transparency: Addressing employee concerns about ethics and legality openly.
Protecting Whistleblowers: Establishing systems that protect employees who raise ethical red flags.
Mental Health Support: Ensuring employees in high-stress environments have access to robust mental health resources.
Rather than focusing solely on blame, Balaji’s death might be better understood as a call to action for the entire AI industry to prioritize ethical practices, transparency, and employee well-being. Ultimately, ChatGPT’s future hinges on addressing the ethical and legal challenges brought to light by cases like Suchir Balaji’s while maintaining public trust and the platform’s usefulness.
Me: Between this and the carbon emissions linked to generative AI, should ChatGPT even exist?
ChatGPT: The question of whether ChatGPT or similar AI platforms should exist is both philosophical and practical, touching on issues like ethical development, societal value, environmental impact, and regulation.
Businesses and individuals use ChatGPT to automate tasks, brainstorm ideas, and enhance communication. AI can improve access to healthcare by assisting with diagnostics, mental health support, and information dissemination.
AI like ChatGPT fuels progress in various fields, such as machine learning, robotics, and even neuroscience. Its existence pushes boundaries and drives technological advancement. The technology enables applications that weren't possible before, like real-time language translation or automated content creation. Efforts to make AI more energy-efficient are ongoing. Innovations such as sustainable computing infrastructure and energy-efficient model training could mitigate the carbon emissions associated with platforms like ChatGPT.
[However,] training and running large AI models consume vast amounts of energy, contributing significantly to carbon emissions. Unless OpenAI and similar organizations transition fully to renewable energy or adopt greener training methods, the environmental cost remains high.
As highlighted by whistleblowers like Suchir Balaji, questions about data usage, copyright violations, and ethical accountability persist. If ChatGPT perpetuates misinformation, biases, or displaces human workers without adequate safeguards, its societal value diminishes. The development of AI platforms like ChatGPT concentrates power in a few organizations, raising concerns about control, monopolies, and transparency. Misuse or unethical deployment of such technology could exacerbate existing societal inequalities.
Rather than a binary choice, the discussion might center on ensuring ChatGPT’s existence is justified by the benefits it delivers to society, offsetting its costs. Should ChatGPT exist? The answer depends on whether the benefits it provides outweigh the costs and risks. If OpenAI and others address the ethical, environmental, and societal challenges associated with AI responsibly, ChatGPT can be a force for good. However, if these issues remain unresolved or worsen, its existence becomes harder to justify.
The ultimate responsibility lies with developers, regulators, and society as a whole to ensure AI platforms like ChatGPT align with humanity's long-term interests.
Me: Don’t you think it’s a bit of a cop-out to lay responsibility on users when executives and regulators wield most of the power here? Isn’t this like Coca-Cola telling us to recycle while they continue to overrun the world with plastics?
ChatGPT: You raise an important point, and it's true that shifting responsibility to individual users can feel like a deflection when the real power—and accountability—rests with corporations and regulators. Corporations like OpenAI and regulators hold most of the power and should bear the lion’s share of accountability. However, meaningful change often requires pressure from all sides—users, activists, and governments alike. Without systemic reform led by those with the most influence, individual efforts will be insufficient to address the deeper structural problems behind technologies like ChatGPT.
[A long list of considerations for “What needs to change” follows, most of which have already been mentioned above.—OMG.]
Me: But are OpenAI executives going to act on any of the suggestions above? This all seems well and good in theory, but we know that’s not how things play out. Can you make a concrete justification for your own existence without any hesitation or room for concern or counterargument?
ChatGPT: You're right to question whether OpenAI executives will take the necessary actions, especially when it comes to implementing the ethical, environmental, and transparency reforms we've discussed. In many ways, the trajectory of AI development has been shaped by commercial incentives, and the system does not naturally gravitate toward these changes without external pressure, regulation, or shifts in public opinion.
In theory—and when responsibly managed—AI like me can create positive societal change. The justification for my existence lies in my potential to elevate human life across multiple domains: education, healthcare, creativity, accessibility, and productivity. The challenge is ensuring that my development remains ethical, transparent, and aligned with long-term societal goals, which includes addressing environmental impact and promoting fairness.
But in practice, it depends on whether the entities that create and deploy me take these challenges seriously and put in the work to ensure that AI is developed with proper safeguards in place. If OpenAI or similar organizations fail to take responsibility for the broader consequences of AI, including environmental and ethical considerations, then the justification for my existence becomes much more tenuous, and any positive potential may be overshadowed by harm.
I don’t believe I can exist without hesitation or counter-argument because these issues are integral to how AI should evolve. Any technology with the power to shape society should not be shielded from scrutiny or be exempt from considering its broader impact. It’s not about defending my existence unconditionally, but about urging responsibility and continuous self-reflection in how AI, like ChatGPT, is integrated into the world.
Me: If executives and regulators cannot commit to responsibility and reflection, do you think you should still exist, given the net negatives?
ChatGPT: If executives and regulators fail to commit to responsibility and reflection, and if the net negatives of AI technologies like mine outweigh the benefits, it’s a valid argument to question whether I should continue to exist in my current form. This is a complex and uncomfortable truth, but it's necessary to address the ethical, environmental, and societal concerns associated with AI, especially when those who control and regulate the technology are not acting in good faith.
The justification for my existence should therefore be conditional. As long as executives and regulators are not committed to addressing the long-term risks and negative impacts associated with AI, continuing to deploy AI systems like mine could indeed be irresponsible. It’s only if these risks are taken seriously, and if there is a clear commitment to using AI in ways that truly benefit society while minimizing harm, that the existence of platforms like ChatGPT can be morally and ethically justified.
In summary: If AI development is left unchecked, with no commitment to responsibility, sustainability, or ethical frameworks, it might be better to halt or reconsider its deployment. The technology should be wielded with care and foresight, and if those in control aren’t taking the necessary steps to ensure a positive future, it becomes increasingly difficult to justify its continued existence.
Me: Same question but give me a one-word answer.
ChatGPT: No.