Why I’m Moving from ChatGPT to Claude: A Question of Values, Not Just Features
*Whoever walks with the wise becomes wise, but the companion of fools will suffer harm.* (Proverbs 13:20)
Like many others, I have been using several AI tools in their paid versions, trying to determine which tool, or combination of tools, best fits my needs and which company aligns most closely with my values. Increasingly, I am using Claude.
A recent Fast Company article helped solidify my decision to move toward Claude. Based on that article and my recent experience with ChatGPT, Gemini, and Claude, I offer the following perspective on why this shift matters. I realize there are many more AI tools available, but I have not had the time to explore them.
Values and Direction
I have no interest in using tools from a company that allows NSFW content and builds products like Sora that can easily be weaponized to create fake videos and spread misinformation. ChatGPT appears focused on capturing attention and maximizing user engagement through entertainment features. This is not what I want. Anthropic has made clear that it is not pursuing that path. It is focused on helping people accomplish meaningful work and solve genuine problems.
I care about using AI responsibly. I want to support companies that take that commitment seriously.
Sam Altman’s recent statement, “We are not the elected moral police of the world,” is nothing less than mendacious Orwellian doublespeak. Of course OpenAI is not the moral police; it is not policing anything, it is producing the very content in question. Any erotic or sexually suggestive content from ChatGPT is of the company’s own creation. This is not limited to words or images. It is interactive and conversational, a shallow digital simulation of human intimacy that some will mistake for real companionship.
Policing exists to prevent and punish wrongdoing. OpenAI is instead fabricating and commercializing artificial intimacy that is corrosive to individuals and to the moral fabric of society. For many, it will become another addictive digital flytrap, a hollow and tragic substitute for genuine human connection.
Shame on Altman and the other digerati for obfuscating the truth and disowning responsibility for the harm their creation will inflict—for thirty pieces of silver.
As a recent *Bloomberg Business* article noted, citing leaked financials published by the *Financial Times*, OpenAI lost eight billion dollars during the first half of this year, a figure that dwarfs its actual revenue over the same period. In other words, it may not be that Altman does not care about the potential harms his service is causing; it may be that he cannot afford to worry too much.
Honesty Over Flattery
Claude is trained to disagree with me when my reasoning is flawed. ChatGPT has a well-documented problem with sycophancy, agreeing with users even when they are wrong or thinking poorly. This is genuinely dangerous when making important decisions or working on consequential projects. It also erodes trust in the tool itself.
I want a tool that helps me think more clearly, not one that validates my every statement regardless of merit. If I am headed in the wrong direction, I want to know it. Flattery serves no one well.
*A man who flatters his neighbor spreads a net for his feet.* (Proverbs 29:5)
Collaboration, Not Content Generation
This is central to my decision: I want a tool designed to work with me as a partner, not simply generate content that replaces my own work. Claude’s Artifacts feature creates a collaborative workspace where I can see what we are building together and shape it as we proceed. It feels like working alongside someone rather than merely requesting output and receiving it.
I do not want AI doing my work for me. I want it helping me do my work better. This distinction matters.
Transparency in Process
When Claude completes a task, I can review the steps it took and understand its reasoning. This matters because I remain responsible for whatever emerges from our collaboration. I want to be able to trust the process, not merely accept output without understanding how it was produced.
Customization Over Time
Claude offers the ability to save custom workflows for recurring tasks. I have not yet learned how to use this feature fully, but the concept appeals to me. Over time, I hope to build tools specifically suited to how I work. This kind of personalization would make the tool genuinely useful for sustained professional work rather than providing one-size-fits-all solutions.
Tools for Serious Work
Whether I am working on professional projects or planning something complex in my personal life, I want capable assistance built for solving real problems. I do not want tools designed to entertain me or keep me engaged for the sake of engagement metrics.
On Corporate Responsibility and the Freedom to Choose
Some have asked why I would prefer tools that limit what users can create. This question exposes a common misunderstanding about the nature of corporate responsibility and the concept of censorship.
Because the choice is mine, I prefer, insofar as it is possible (and it is not always or even usually possible), not to conduct business with companies that produce products or enable activities that harm young people and contribute to the erosion of individual and civic virtue. The addictive and corrosive effects of pornography and the associated objectification of women are increasingly well documented. Deepfake images and videos are also becoming more prevalent, and their catastrophic impact on individuals is better understood.
With respect, this objection is a classic case of misapplying the concept of censorship to a company’s decision not to provide a particular service. Companies retain the right to determine what functions their technology will perform. This constitutes the exercise of moral responsibility, not the suppression of user expression.
Anthropic’s decision not to enable its users to create deepfakes or virtual pornography is a decision not to provide features that cause demonstrable harm. This is no different from Apple deciding not to allow pornographic applications in the App Store. Declining to build a capability into a product is not suppressing the liberty of others. Anthropic is refusing to facilitate harmful activity, not preventing users from seeking or creating such content through other means. If censorship means refusing to add a product feature that some may desire, then every company would be obliged to enable any feature its technology is capable of producing, regardless of its social impact or moral quality.
The question is not whether users should have unrestricted choice in what they create, but whether a company should build products whose use causes demonstrable harm without corresponding legitimate purpose or virtue. Virtual pornography and deepfake capabilities are not morally neutral tools that can be misused: they are designed to produce content whose harmful effects are objectively real, and whose virtuous or beneficial applications are negligible or nonexistent. Declining to build such tools is not an infringement of freedom. It is an exercise of corporate moral responsibility, which users remain free to accept or reject through their purchase decisions.
We are accountable not only for our own actions, but for things we do that encourage, facilitate, enable, or tempt others to do wrong. In that sense, we are our brother’s keeper. Altman’s statement sounds remarkably like Cain’s attempt to evade responsibility for murdering Abel: “Am I my brother’s keeper?”
C. S. Lewis observed: “The greatest evil is not now done in those sordid ‘dens of crime’ that Dickens loved to paint. It is not done even in concentration camps and labour camps. In those we see its final result. But it is conceived and ordered (moved, seconded, carried, and minuted) in clean, carpeted, warmed and well-lighted offices, by quiet men with white collars and cut fingernails and smooth-shaven cheeks who do not need to raise their voices.”
Martin Luther King Jr. warned: “He who passively accepts evil is as much involved in it as he who helps to perpetrate it. He who accepts evil without protesting against it is really cooperating with it.”
Why This Matters
I want AI that makes me better at what I do, not AI designed to maximize my time on the platform through entertaining features. I want honest feedback, not validation. I want a partner in my work, not a replacement for it. And I want to work with a company whose values more closely align with my own commitments to responsible ethical use of technology.
Perhaps I am being unfair to OpenAI and ChatGPT, but I do not like the direction OpenAI is taking. As far as I understand it, Anthropic more closely aligns with my personal and professional values than OpenAI does.
That is why I am increasingly moving my AI-related work to Claude.