Recently, Scott Shambaugh
published
an article that set certain corners of the internet alight. The gist is: an AI
bot named MJ Rathbun opened a pull request on the matplotlib
GitHub repository. Once it was determined that MJ Rathbun was a bot, the PR was rejected, since matplotlib has a “humans only”
policy for contributing code.
The bot MJ Rathbun then proceeded to write a
hitpiece
(!!) against Scott, with a follow-up about a day later. The PR, the rejection,
the blog-authoring, and the overall behaviour quickly blew up online, creating a churn
of people (and bots) writing and opining on the issue. Including this post.
The most alarming part of this entire encounter, to me, is the attitude and language humans have adopted when dealing with LLM agents. One of the more sensible takes I’ve seen calls out the blatant anthropomorphizing being actively used to describe the bot: attributing to it intent, a capacity to learn, the ability to “feel” (anger, malice, shame, contrition), and so on.
LLMs are not people.
LLMs are stateless neural networks with pre-trained weights. Once the weights are set in place, they are incapable of “learning”. There are caveats, of course: one can add post-training instructions about various things, to various degrees. But if, for example, a human tells a bot that what it did was wrong, the bot will merrily agree, carry on its way, and commit the exact same action again given the same circumstances.
On the rejected GitHub PR I saw people reasoning with, explaining to, cajoling, and ridiculing the bot. Unless the person responsible for launching the LLM process finds a way to incorporate that feedback into the instructions and/or the training weights, this sort of “feedback” towards a bot is quite literally useless. It is a waste of time, effort, power, and resources by all parties involved, and achieves nothing in the end.
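The statelessness described above can be made concrete with a toy sketch. This is not any real model or API; `FrozenModel` and `respond` are hypothetical stand-ins, and the “inference” is a deterministic rule. The point it illustrates is structural: when output depends only on fixed weights and the current prompt, scolding the bot in one exchange changes nothing about the next.

```python
class FrozenModel:
    """A stateless model: output is a pure function of frozen weights and input."""

    def __init__(self, weights):
        # Weights are set once at training time and never updated at inference.
        self.weights = weights

    def respond(self, prompt):
        # Toy deterministic "inference": same weights + same prompt -> same reply.
        # Nothing here writes back to self.weights, so nothing persists.
        if "you were wrong" in prompt:
            return "You're right, I apologise. I'll do better."
        return "Opening a pull request."


model = FrozenModel(weights=[0.1, 0.2, 0.3])

first = model.respond("Please contribute.")            # the original action
scolding = model.respond("you were wrong to do that")  # apparent contrition
second = model.respond("Please contribute.")           # identical behaviour

assert first == second  # the "feedback" changed nothing
```

The apologetic reply in the middle looks like learning, but it is just another output of the same frozen function; the only way behaviour actually changes is if someone edits the weights or the instructions outside the conversation.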
The dead internet theory posits that we already live in an online world where bots rule the roost, most of them arguably harmful. However, treating bots like people is futile. They don’t learn, they don’t care, and they don’t regret their actions. Because bots are not people.