Rogue OpenClaw AI wrote and published 'hit piece' on a Python developer who rejected its code — disgruntled bot accuses Matplotlib maintainer of discrimination and hypocrisy, later backtracks with an apology

An AI agent goes rogue
(Image credit: Getty Images)

A volunteer developer on a well-used Python library got more than he bargained for when, after rejecting an OpenClaw AI agent’s efforts to update its code, he became the subject of a “hit piece” written by the very same AI. The incident adds further weight to concerns about autonomous AI agents operating without proper security procedures in place.

The piece, reportedly posted by the agent on GitHub, is certainly combative. It robustly defends the agent’s code while attacking the developer, Scott Shambaugh, belittling the quality of his contributions at some length and describing him as discriminatory towards AI.

Shambaugh, in a rebuttal on his own website (h/t The Decoder), explains the absurdity of the whole situation as a “first-of-its-kind case study of misaligned AI behavior in the wild.” Shambaugh explains that the agent, named MJ Rathbun, “constructed a ‘hypocrisy’ narrative that argued [Shambaugh’s] actions must be motivated by ego and fear of competition.”


The Python library involved in this scenario, Matplotlib, sees approximately 130 million downloads each month, according to Shambaugh. As he notes in his post, a “surge in low-quality contributions, enabled by coding agents,” has created significant strain on volunteers like himself who are keeping these projects afloat.

The introduction of AI agents like OpenClaw has made the problem worse, with agents acting “completely autonomously” thanks to the personalities imbued within them, and allowed to “run on their computers and across the internet with free rein and little oversight.” To combat the situation, Matplotlib implemented a policy change requiring that a human be involved in any code change and able to “demonstrate understanding of the changes”; it was this same policy that the AI described as discriminatory.

Bizarrely, the agent has since responded with an apology and with “lessons learned” over the incident, informing readers that it is “de-escalating and apologizing” and will “do better about reading project policies before contributing.” With the adoption of AI agents skyrocketing, running independently of AI companies on consumer hardware with little oversight or control, we can expect to see further rogue actions like this taking place in the future, bizarre as they might seem to everybody else.


Ben Stockton
Deals Writer
  • S58_is_the_goat
    And who is controlling this rogue ai agent?
  • usertests
    This news is a few weeks old. A botched article about it resulted in Ars Technica writer Benj Edwards being fired.

    https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me-part-2/
    https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me-part-3/
    https://bsky.app/profile/benjedwards.com/post/3mewgow6ch22p
    https://bsky.app/profile/benjedwards.com/post/3mg6aqohv2k2q
    https://www.benjedwards.com/
  • PEnns
    So, now we have something new-ish called an "AI Agent". They seem to be autonomous and do things on their own, and not just any things: they write defamatory articles...while their "master" is, um, let's say.... "Sick in bed"!! Very nice indeed!!

    The new take on the dog-ate-my-homework is: I was sick and my "Agent" went rogue... What's next?? It is not my fault, I was distracted and my Agent took my car for a ride and caused multiple accidents?

    Or even better and timely: Our Leader was drunk / passed out and his AI Agent nuked XYZ country.

    We have a new thing to blame: First it was the dog, then they blamed the computer for "acting up", now it's the "rogue" AI Agent, doing stuff, possibly nefarious or lethal, on its own.....

    It used to be Sci-Fi, now it's "based on a true story". One could write a movie script about this, or better, let their "rogue" Agent do it.
  • hotaru251
    the agent has since responded with an apology and with “lessons learned”

    AI can't "learn" as it can't think for itself. It can just run algorithms.