If you've been around computers long enough, you probably know the acronym "GIGO," which stands for "garbage in, garbage out." It predates the band. The phenomenon is exactly what it sounds like: if what you put into a machine is crap, it's only going to give you crap back. All of the software that you hate is therefore composed of refuse.
The stuff loosely termed artificial intelligence is no different in this respect. These large language models are trained mostly on data from the Internet at large, and if you've seen the Internet, you know how much nonsense there is. The models don't necessarily know how to separate a peer-reviewed study from a work of comedy, fiction or conspiracy theory. Last week, people quickly turned a chatbot into an antisemitic racist, and that certainly isn't the first time that's happened.
The problem, as I see it, is that the machines can't yet engage in critical thinking. As a concept, critical thinking combines logic, intuition and morality to figure out what is right, real and just, versus what is wrong, fake and unjust. I mean, I read an article about how chatbots are shitty therapists that may convince a person to commit suicide. Duh, it's hard enough to find a human therapist who works for you and can help you.
The robots also can't gain experience and wisdom, which are other factors in critical thinking. These are quite environmental, of course, and humans don't necessarily land in the right place in these areas either. But even for something technical like writing code, the machines may create something that technically works, yet that doesn't mean it's free of race conditions, memory leaks or structural problems that make it hard to understand. I really get this one, because I've seen enough of Other People's Code to know that most of it isn't very good. I feel like so much of the profession is trying to figure out how to make stuff better.
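To make the race condition point concrete, here's a toy sketch of my own (not something a model actually produced): a classic check-then-act bug. It runs, it usually even prints the right answer, but two threads can both pass the balance check before either one subtracts.

```python
import threading
import time

balance = 100

def withdraw(amount):
    global balance
    if balance >= amount:      # check: both threads can see 100 here
        time.sleep(0.001)      # simulate doing other work in between
        balance -= amount      # act: the earlier check is stale by now

threads = [threading.Thread(target=withdraw, args=(100,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(balance)  # can print -100, even though the check "guaranteed" enough funds
```

It technically works, it works most of the time, and that's exactly the kind of thing that slips past anyone, human or machine, who isn't thinking critically about what the code can do under concurrency.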
If you read stuff on the Internet, and especially if you listen to influencers (those on LinkedIn, yes, it's a thing, are the worst), you'd think that AI has replaced all of the jobs. The reality, as best I can tell, is very different. Those predictions have been around for two years now, and it hasn't really happened. In some cases, it seems to be making things worse. A friend of mine who works in HR says that job candidates use AI to game their resumes, and then the AI used to screen resumes chooses them, so almost none of the candidates are what they actually want. One recent study suggests that AI is making it take longer to write code.
Will we get there? It's hard to say. The Skynet problem is certainly something to worry about, but in most science fiction, machines rarely have any sense of morality and are treated like appliances. It seems that we want the machines to have critical thinking, but is that something uniquely human that can't be replicated? Normally I'm one to reject human exceptionalism, given our insanely brief history relative to all of time. But whatever this thing is that we have, when we're not killing each other, is unique and extraordinary.
If only we could be better about training the humans in critical thinking first. If you can't question everything, including your own thoughts, you can't get there. No one ever tells you that in school.