One of the frightening things about AI chatbots is that, because they "speak" in natural language, people mistake them for sentient beings. People have allowed the machines to talk them into divorce, suicide and violence. They also turn racist pretty fast, and that sure seems human-like.
I was slinging code last night, which is to say that I was prompting Claude to sling code. It made a change that caused the bits to stop working, so I asked it to dig into why. It found the problem, and then it caught and ate the exception. I thought it was generally pretty well understood, by humans at least, that this is the greatest of sins. You just don't eat exceptions. Of course the AI tech bros will tell you that you have to direct the AI on how to debug, but this was its solution to debugging. I'm sure this happens all the time for non-engineering types who vibe code.
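For the non-engineers: "eating" an exception means catching an error and then silently discarding it, so the program stops complaining but the underlying bug is still there, just hidden. Here's a minimal Python sketch of the anti-pattern and the saner alternative (hypothetical code, not the actual code Claude touched):

```python
import json
import logging

# The sin: catch the exception and "eat" it. The error vanishes,
# the bug remains, and callers silently get None back.
def load_config_badly(path):
    try:
        with open(path) as f:
            return json.load(f)
    except Exception:
        pass  # swallowed: no log, no re-raise, no sign anything broke

# The saner version: handle the cases you understand, surface the rest.
def load_config(path):
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError:
        logging.warning("config %s missing, using defaults", path)
        return {}
    except Exception:
        logging.exception("failed to load config %s", path)
        raise  # let the real failure be visible instead of buried
```

The first version "fixes" the symptom by making the error invisible, which is exactly what Claude did.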
My first reaction was in the vein of, "You moron, you can't do this." But I quickly caught myself and realized that I was anthropomorphizing the machine myself. I can't hurt its feelings, though for some reason belittling it felt satisfying.
As fantastic as these tools are, and as promising as they (maybe) are, we have to keep in mind that expertise is not easily acquired. It's not any easier to train the robots, either, since they can't determine right from wrong on their own. We can have the "right" people train them, sure, but when that expertise leaves the workforce, who trains them? This seems like a larger cultural problem right now: we have stopped valuing expertise, in everything. Folks think they have a PhD in Googling now, which feels like we're moving toward the movie Idiocracy.
Randos on LinkedIn, who have been using AI exactly as long as the rest of us, seem to be making it worse by promising to crack the code for you, despite having no history of delivery. It scares me that business leaders take these folks seriously.
So let's all take a breath, stop equating machines with humans, and leverage them as the tools that they are.