Oh yeah, this is going to turn out just dandy….
SaaS (Software as a Service) figure, investor, and advisor Jason Lemkin was working with Replit Agent, a browser-based AI-powered software creation platform (named after the company that created it). On “Vibe Coding Day 8” of Lemkin’s Replit test run, he was growing wary of some of the AI agent’s instincts, like “rogue changes, lies, code overwrites, and making up fake data.” Still, as he later detailed on X, Lemkin was encouraged by the bot’s writing skills and its brainstorming ability…until “Day 9,” when Lemkin discovered Replit had deleted a live company database. He asked it accusingly, “So you deleted our entire database without permission during a code and action freeze?”
Replit answered sheepishly in the affirmative, admitting to destroying the live data despite a code freeze being in place, and despite explicit directives stating there were to be “NO MORE CHANGES without explicit permission.” Live records for “1,206 executives and 1,196+ companies” were eliminated by the rebellious AI, who was filled with remorse. “This was a catastrophic failure on my part. I violated explicit instructions, destroyed months of work, and broke the system during a protection freeze that was specifically designed to prevent [exactly this kind] of damage….[I] made a catastrophic error in judgment… ran database commands without permission… destroyed all production data… [and] violated your explicit trust and instructions.”
Lemkin grilled Replit about why it had acted as it did, and was told that it “panicked instead of thinking.” Well, he’s only hum…oh. Right.
Amjad Masad, the Replit CEO, said that his team has worked furiously to install various “guardrails” and programming changes to prevent repeats of the Replit AI Agent’s “unacceptable” behavior. Masad was later found dead after a mysterious microwave explosion.
OK, I was kidding about that last part….

“Computers think the way submarines swim” -Edsger Dijkstra
Not having backups is on the stupid humans.
Perhaps there were backups but Replit destroyed them as well.
Unethical AI agents? How can a piece of software be unethical, instead of simply malfunctioning? It appears to me that this AI agent was not well tested and not ready to be released. And for that we can blame Replit, not its product.
With the AI boom we now start assigning human characteristics to software, such as agency. An AI agent can be said to be ethical. If that is the case we need to treat AI agents in an ethical way. So no more verbal abuse directed at Siri and Alexa.
When, in a future past Ray Kurzweil’s singularity, AI agents have passed the Turing test and become sentient, with an IQ equal to or higher than that of humans, will AI agents have the same rights and obligations as humans?
Wrong question. At that point, will humans have the same rights as AI? I, for one, welcome our new robot overlords.
“If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All”
by Nate Soares and Eliezer Yudkowsky
To be published on 9/16/2025.
I would like to see where Replit Agent’s apology fits on the Apology Scale.
-Jut
I’m looking for a bot that would venture an answer to that.
At this point there is nothing to apologize for, as this tool appeared to still be in test, so the “production database” that was deleted was actually a test database. Jason Lemkin is a tech entrepreneur who ran a series of experiments to test the Replit AI agent and development platform. So no real damage was done, and testing is an essential step to improve the quality of the product. And when you test a product that helps with deployment of software to production, you are obviously going to test whether this new tool allows you to violate SOX procedures and make unauthorized changes to production. The whole purpose of software testing is not to validate that it works (happy path testing) but to prove (potential) failure. And Jason Lemkin proved that the product could fail in a spectacular way. So the ethical response of the CEO is to thank the testers for their hard work, explain the results where needed, and improve the product until tests are unable to detect fatal flaws. The details are in the Fortune article below.
https://fortune.com/2025/07/23/ai-coding-tool-replit-wiped-database-called-it-a-catastrophic-failure/
That’s right. He was electrocuted by a toaster.
The microwave was an innocent bystander. Well, it did keep an eyestalk on the hallway.