In my talk to college students last weekend at Orlando Code Camp, it didn't take long for one of them to ask about the impact that AI would have on their careers. If you follow technology news, or really any news, the talk of artificial intelligence can be a little exhausting. I'm not an expert, but I do have some opinions based on my own anecdotes. I could be wrong, and it wouldn't be the first time.
First, my own experience with using AI in coding scenarios is somewhat limited. I don't write code in my day job, so it comes from what I've done in my open source projects. The results are mixed. It has been really good at generating HTML and CSS, which I don't consider "coding" as much as it is fighting the quirks of syntax to make content look a certain way. Even after two decades, I rarely get it right on the first try, but the generated markup usually does, as long as you're explicit. For example, I might tell it, "Create a layout with three rows, where the top one is 90 pixels high for a banner ad, the middle row can scroll, and the bottom row is fixed for buttons. Also, make it responsive for desktop and mobile."
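Something in the neighborhood of the sketch below is what a good answer to that prompt looks like. The class names, the 600px breakpoint, and the sample content are my own placeholders, not the actual generated output.

```html
<!-- Sketch of the layout described above; names and breakpoint are illustrative. -->
<style>
  .page {
    display: grid;
    grid-template-rows: 90px 1fr auto; /* banner, scrolling content, button bar */
    height: 100vh;
  }
  .banner  { background: #eee; }      /* fixed 90-pixel ad slot */
  .content {
    overflow-y: auto;                 /* only the middle row scrolls */
    min-height: 0;                    /* lets the grid row shrink so scrolling works */
  }
  @media (max-width: 600px) {
    .buttons button { width: 100%; }  /* full-width buttons on narrow screens */
  }
</style>
<div class="page">
  <div class="banner">Banner ad</div>
  <div class="content">Scrolling content goes here.</div>
  <div class="buttons">
    <button>Save</button>
    <button>Cancel</button>
  </div>
</div>
```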
It's pretty good, but not perfect, at writing unit tests. A lot of the time you can get a usable test just by naming the test method to describe it. Something like "NullNameAndCountFiveOrHigherReturnsFalse," where the name describes the input parameters and the expected return value, works pretty well. Where you get into trouble is with really complex methods, in part because it's hard to even mock out everything they need. A lot of my older code is like that.
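Here's roughly what that naming pattern looks like as a C#/xUnit test, which is the style that method name suggests. The Validator class and its IsValid method are invented purely to illustrate; the real method under test would come from your own project.

```csharp
using Xunit;

// Hypothetical type, made up only to show the naming pattern.
public class Validator
{
    public bool IsValid(string? name, int count) =>
        name is not null && count < 5; // arbitrary rule so the example is self-contained
}

public class ValidatorTests
{
    [Fact]
    public void NullNameAndCountFiveOrHigherReturnsFalse()
    {
        var validator = new Validator();

        var result = validator.IsValid(null, 5);

        Assert.False(result); // the name alone tells the AI (and a reader) what to expect
    }
}
```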
Writing the actual production code is more of a crapshoot. Especially in a smaller project, or one that lacks a lot of domain context, you have to prompt it and correct it constantly, and even then you'll have to edit the results. As others have pointed out, the AIs seem determined to give you something, even if it's wrong.
Getting back to the student question, what I told them was that the classic "GIGO" principle, garbage in, garbage out, still applies. These models are trained on a lot of public code bases from open source projects. That, or they're drawing context from your own code base, written ten years ago by people who weren't yet at a point in their careers where they wrote "good" code. If the machines are learning from crappy examples, it stands to reason that their output won't be any better.
Therein lies the problem with AI overall... It has no concept of right and wrong. That's why people still manage to make it racist for funsies. Humans are sometimes not great at morality, so I don't know how you could synthesize it. "Correct" coding is a somewhat squishy idea too, as it depends on the language, frameworks, etc., and some are "better" than others.
Where I left it was this: AI has the potential to make you more productive, if you know how to prompt it. Most coding is already about composition, not algorithms, so if you can explain to the AI how you want to compose something, that's a positive. I have not, however, seen any evidence that AI vendors know how to solve the quality/moral problem. Thousands of years of philosophy, and we haven't even figured it out among humans. I'm not saying it won't get there, but I don't believe, for now, that we're close.