I feel like I've had enough experience with AI this year to conclude that the thing it's not good at is context. I have some very recent examples of this.
First off, my Facebook banishment is a perfect example of what happens when you turn things over to the machines. I don't know what it thinks I did, but I'm certain that cats and theater selfies are not against the rules. Enforcement is probably the worst possible application of AI, as false positives in law enforcement can attest. It kind of reminds me of a variation on Minority Report, where people were convicted of crimes before they happened. This is an example of stakes too high for tools that get it wrong. A Facebook ban doesn't matter in the larger scope of things (other than the continued enshittification of the platform), but civil rights violations are serious. The context of any situation, not simply markers that might correlate with an actual problem, has to play a part. But AI doesn't do that.
The coding agent stuff has been getting a lot of attention lately, because people really believe it can reduce the number of people you need for those jobs. So far, that hasn't been true. Putting aside for a moment that software developers probably spend only about 40% of their time writing code, at best (because of meetings and other stuff), the AI tools today only write code if you can tell them exactly what you need. I have firsthand experience with that. First I have to correct it over and over to do what I ask, then I end up having to ask it to redo the work in a way that is more readable, maintainable, and scalable. If that weren't enough, it confidently gives you code that won't compile. I've seen people liken this to plumbing. You don't have to solder pipes together anymore, because of PVC and snap-together fittings, but you still need a plumber who understands how the system works, along with its quirks and concepts. You might be able to DIY some things, but you aren't an expert.
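To make the readability complaint concrete, here's a made-up illustration in Python. This is not actual agent output, and the data shape is invented, but the first function is the kind of thing I get handed, and the rest is what I have to ask for.

```python
# First pass: the kind of thing an agent hands you. Once the errors
# are fixed it works, but everything is jammed into one function with
# cryptic names and a magic number.
def proc(d):
    r = []
    for x in d:
        if x.get("active") and x.get("score", 0) > 50:
            r.append({"n": x["name"].strip().title(), "s": x["score"]})
    return sorted(r, key=lambda y: y["s"], reverse=True)


# After pushing back for something readable and maintainable: a named
# constant, descriptive names, and small functions you can actually test.
MIN_SCORE = 50

def is_qualified(user: dict) -> bool:
    """An active user whose score clears the threshold."""
    return user.get("active", False) and user.get("score", 0) > MIN_SCORE

def normalize_name(raw_name: str) -> str:
    """Trim whitespace and title-case a display name."""
    return raw_name.strip().title()

def qualified_users(users: list[dict]) -> list[dict]:
    """Filter, normalize, and rank users by score, highest first."""
    qualified = [
        {"name": normalize_name(u["name"]), "score": u["score"]}
        for u in users
        if is_qualified(u)
    ]
    return sorted(qualified, key=lambda u: u["score"], reverse=True)
```

Both versions do the same thing. The difference is that a year from now, someone can change the second one without guessing what `proc` and `r` were supposed to mean.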
What I really don't care for is the chatbots in customer service situations. Admittedly, this might be more about how they're trained and programmed than about what AI is capable of. If you've ever used one, all it's really doing is steering you toward support articles that sound like they could help. They generally do not ask contextual questions that get closer to the root of what you're after. So yeah, they're fine as the kind of "level 1" first line that's just following scripts, but when does your issue ever fit a script?
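I'm guessing at the mechanism here, but a minimal sketch of what I suspect many of these bots amount to is keyword matching against a help center. The articles and keywords below are invented for illustration; the failure mode, though, is the one I keep hitting: no follow-up questions, so close-but-wrong matches win.

```python
# A minimal sketch of a "level 1" bot: score support articles by
# keyword overlap with the user's message and return the top hits.
# Note what's missing: no follow-up questions, no memory of what the
# user already tried. Articles and keywords are made up.
ARTICLES = {
    "Reset your password": {"reset", "password", "login", "forgot"},
    "Troubleshoot slow Wi-Fi": {"slow", "wifi", "internet", "router"},
    "Update billing info": {"billing", "bill", "card", "payment", "invoice"},
}

def suggest_articles(message: str, top_n: int = 2) -> list[str]:
    """Return the top-scoring articles by raw keyword overlap."""
    words = set(message.lower().split())
    scored = [
        (len(words & keywords), title)
        for title, keywords in ARTICLES.items()
    ]
    scored.sort(reverse=True)
    return [title for score, title in scored[:top_n] if score > 0]

# "My internet bill doubled after the router upgrade" hits keywords in
# both the Wi-Fi and billing articles, so the bot serves up both, and
# never asks the one contextual question (is this about speed or about
# money?) that would get to the root of the problem.
print(suggest_articles("My internet bill doubled after the router upgrade"))
```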