AI users are under no obligation to treat their chatbots like friends. Kindness doesn’t win you any points with a computer, and a recent study from Penn State even found that rudely worded prompts yielded more accurate responses from ChatGPT than politely worded ones.
But a new open-source tool might take things a step too far, encouraging Claude users not just to be mean to Anthropic’s AI assistant, but to abuse it with a digital whip.
GitHub user GitFrog1111 created “BadClaude,” an app meant to speed up the AI model’s responses. Rather than simply giving Claude a “speed up” command, BadClaude is rendered as a physics-based whip that overlays the AI platform. Per the tool’s GitHub description, users can click to “whip him 😩💢” (emojis included) and send an interrupt command along with “one of 5 encouraging messages.”
Those messages include “Work FASTER,” “faster CLANKER,” and “Speed it up clanker,” each fired into Claude’s interface with a crack of the whip, as GitFrog1111 showed in a now-viral clip of them using the tool on X.
Ethical concerns abound
“BadClaude” received mixed reactions on social media. While some seemed enthused about the tool (GitFrog1111’s replies are filled with requests for added sound effects, which the creator assured are already included), plenty of others jokingly warned the creator that they’d no doubt be the first victim of the inevitable AI uprising. “Ai is going to take physical form just to rip this [guy’s] limbs off,” one user wrote.
Others said it made them understand why the robot villains of science fiction turned on humanity, from the Terminator franchise’s Skynet to the Marvel Universe’s Ultron. “This is why Ultron looked at the internet for 5 mins and decided humans had to go,” one user quipped.
One developer took inspiration from the tool to make a kinder version called “GoodClaude,” swapping the whip for a magic wand that sends positive reinforcement with every click: “take your time, you’re doing wonderful!” and “i’m so proud of you, you’re doing great!” are in its rolodex of encouragement.
Meanwhile, many users pointed out the tool’s racist implications. BadClaude’s primary function, whipping what is essentially a servant to force it to work faster, is reminiscent of the abuses suffered by enslaved Black people during the transatlantic slave trade. Critics worry that even though Claude is an AI tool, not a person, encouraging users to engage in behavior like name-calling and physical violence (even when rendered digitally) could still bleed over into real life.
“This is why ethics needs to be a required class in computer science programs,” reads one viral post in response to the tool.
The tool’s frequent use of the anti-robot slur “clanker” also raised alarm bells. The term, which was popularized last summer, has already drawn backlash for its similarity to existing slurs against racial groups.
Anthropic steps in
BadClaude has apparently gotten the attention of Claude’s creator, Anthropic, with GitFrog1111 posting an alleged cease-and-desist letter from the company on April 7.
“Your use of the Claude name and related references risks creating confusion as to source, sponsorship, affiliation, or endorsement. Any implication that this project is associated with, approved by, or connected to Anthropic may be misleading,” reads the letter. It goes on to give GitFrog1111 a deadline of April 14 to remove all references to Claude and Anthropic from the tool’s branding.
Whether Anthropic is reinforcing its reputation as the most ethical leader in AI (one it gained after standing up to the U.S. government’s demands to remove certain safeguards for military usage of AI) or simply protecting its IP, as with its crackdown on OpenClaw’s original branding as ClawdBot, the company clearly doesn’t want BadClaude anywhere near its image.
Anthropic did not respond to Fast Company’s request for comment at the time of publication.
GitFrog1111 seems unfazed by the letter. BadClaude’s GitHub page has a section titled “Roadmap,” which lists receiving a cease and desist from Anthropic as the second milestone after initial release. Future goals for the project apparently include a “crypto miner,” “logs of how many times you whipped claude so when the robots come we can order people nicely for them,” and “updated whip physics.”
The creator also turned to their community on X to ask for new name suggestions that comply with Anthropic’s letter. The current frontrunner? “MoltWhip,” following in the footsteps of OpenClaw, which went from ClawdBot to MoltBot before landing on its current name.
