A sincere attempt at coding with AI, 2025 edition

posted by Jeff | Thursday, August 21, 2025, 5:00 PM | comments: 0

About a year and a half ago, I wrote about my first experience using AI tooling to write code. My expectations were low, but with a very narrow focus on trying to accomplish one thing, largely driven by math I don't know, it eventually got me to where I wanted to be. It would have been more trial and error without it, and in retrospect, I'm not sure if it saved time. These days, people are predicting the end of the software engineering profession, or at least, a radical change in it that requires fewer people. I'm more skeptical because, among other reasons, AI is not wise and doesn't understand concepts like readability, maintainability, etc. It learns this stuff from existing code, and if you've done this for any length of time, you'll know that most code isn't very good. This is also a very old code base, and while some of it is solid, some of it's janky. I imagine that's also true of most code bases out there. It's amazing that computers work at all.

Still, I decided I would try implementing a new feature in POP Forums, specifically an "ignore" feature, so you can avoid seeing posts by certain other users, leaning on the AI as much as possible. As much as I'd like to try to do this as if I were new to coding, I don't know how I could fake that. Instead, my attempt involves playing dumb about approach, and going from there. The next steps were what I know from experience to do, so frankly I'm throwing it a bone here. I used JetBrains' Rider IDE, with the GitHub Copilot agent using GPT-4.1.

Chapter 1: What's the approach?

I started by asking this:

How can I implement a feature that allows a user to ignore posts and topics by other users? The ignoring is based on a list that the user maintains. Ignored topics in all of the topic lists (by forum or recent) appear in a muted color, and posts appear collapsed in a muted color, though the user can click on them to view. The implementation should use existing conventions for data access and project organization.

It responded with a data model that assumed I wanted to match ignore ID's on topics, which I did not. It also included an extraneous ID and time stamp. It also listed some API endpoints and Vue markup, neither of which I use anywhere in the main front end (Vue only appears in the admin). It gave some example method signatures for the repository level, and CSS that ignored the fact that I was using Bootstrap. So basically, it doesn't know what context to consider.

I gave it context, pointing to existing code, and it kept chatting back abstract examples. I finally told it that wasn't helpful, and it dropped the examples and gave me a narrative:

Understood. For your scenario, integrate the ignore logic into your existing data access and view model construction layers. Fetch ignored user IDs for the current user, and when building view models for posts or topics, set an IsIgnored flag if the author is in that list. This keeps the logic consistent with your current architecture and ensures the view receives all necessary info to render ignored content appropriately. No changes to the SQL table are needed based on your description.

This is wrong right up front. You don't put logic into data access, so that's bad advice. Meh, OK, I'll move on to see what kind of action it gives me with code completion.

Chapter 2: Let's save and get data

I started by defining the database table in a SQL script and the simple model class with UserID and IgnoreUserID. The autocomplete suggested "IgnoredUserID" instead of "IgnoreUserID," which doesn't match the database. And even though by convention I capitalize "ID" everywhere, everything it generated refused to do this. I asked it to generate a repository class to Add, Delete and GetList, using Dapper (the mini-ORM library I use) and the extension methods used in other classes (they simplify getting connections and such). Aside from not capitalizing "ID," the get-list method was totally wrong. It sort of looked right, but it built the return value with "AsList()," which is not even a thing. When I told it this, it agreed that, yes, it's not a thing. Then why did it suggest it?
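For context, the table here is about as simple as tables get. A minimal sketch of the kind of script I mean (the table name and key are my own illustration, not the actual POP Forums schema):

```sql
-- Hypothetical table name; the real POP Forums script may differ.
CREATE TABLE pf_IgnoreUser (
    UserID int NOT NULL,
    IgnoreUserID int NOT NULL,
    -- A user can't ignore the same user twice.
    CONSTRAINT PK_pf_IgnoreUser PRIMARY KEY (UserID, IgnoreUserID)
);
```

Two integers and a key. That an agent struggles with the repository around something this small is sort of the point.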

Then I tried to coax it into doing a join with the user table to get the name as well, for display in the user's list of ignored users, and no prompt seemed to get it there, even after I defined a new model to include the name. Next I asked it to get a list of UserID's that represented the intersection of the user's ignore list and a list of UserID's passed in as a parameter. That list comes from the posts that will be displayed. I told it to use the method that gets user signatures as an example, and it nailed this first try. Mind you, I basically told it how to do it, so it sure better get that right!

The service layer is mostly a wrapper around the repository, but not the method that gets the list of ID's to ignore. To match the convention of the signatures, avatars and such, it takes the user and the list of posts as parameters, which means it has to use the UserID from the former and get a list of ID's from the latter. I tried several ways to tell it how to do this, and each time, it gave weird results that didn't return what I asked for. It also never checked whether the user was null, so basic null checking wasn't even on its radar.
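To make concrete what I was asking for: the real code is C#, and the real method gets the ignore list from the repository, but the logic, null check included, amounts to something like this sketch (the type and function names are mine, not the actual POP Forums signatures, and the ignore list is passed in here to keep the sketch self-contained):

```typescript
// Minimal stand-ins for the real models; the actual C# types differ.
interface User { userID: number; }
interface Post { userID: number; }

// Given the current user, the posts about to be displayed, and that user's
// ignore list, return the distinct author ID's that should render as ignored.
function getIgnoredUserIDs(user: User | null, posts: Post[], ignoreList: number[]): number[] {
  if (user === null) return [];                             // the null check the agent kept skipping
  const authorIDs = [...new Set(posts.map(p => p.userID))]; // distinct authors from the posts
  return authorIDs.filter(id => ignoreList.includes(id));   // intersection with the ignore list
}
```

Nothing exotic: a null check, a distinct projection, and an intersection. That's the whole ask.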

Chapter 3: Front end

I expected this to be easy for the machine, but it didn't go to the solution I expected without me being prescriptive. I started with this:

Using Bootstrap and as little custom CSS as possible, how can I make the div starting on line 22 be collapsed and replaced with the text "ignored?"

This was admittedly not a great question, because it lacked the context that I wanted the user to be able to click on it and expand the div to show the hidden post. It suggested a simple "if" in the view. Then I made it clearer:

I want the markup to still be there regardless of isIgnored, because my next question will ask how I can click on the ignored div and make it un-collapse.

It gave a solution to render both, but ended on, "The next step can add interactivity to toggle visibility." OK, so why not just give that to me then? I cycled with the robot three more times, trying to refine a solution that was "elegant" and "only required a little CSS," but it wouldn't give me the Bootstrap solution, which is just a matter of adding attributes to the div and button markup. Finally I just asked it outright, "How can I do this with Bootstrap Collapse?" I was intentional about this line of questioning for two reasons. First, I wanted to act as if I didn't know that there was a specific Bootstrap solution for this, as someone who didn't have experience with it. Second, I wanted to see if it had any deep contextual understanding of the Bootstrap library. This matters because so much of software is composition, using existing solutions from frameworks and libraries to make a thing. In this case, it was steering me toward inventing something instead of using what already existed.
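For anyone who hasn't used it, Bootstrap Collapse really is just attributes plus the classes it already ships with. A sketch of the sort of markup it should have steered me to (the ID's and element choices here are illustrative, not my actual view code):

```html
<!-- Clicking the "Ignored" text toggles the collapsed post below. -->
<a data-bs-toggle="collapse" href="#post-123" role="button"
   aria-expanded="false" aria-controls="post-123" class="text-muted">
  Ignored
</a>
<!-- The "collapse" class hides the div until toggled; Bootstrap's bundled JS does the rest. -->
<div class="collapse text-muted" id="post-123">
  The hidden post content renders here regardless of the ignore flag.
</div>
```

No custom CSS, no hand-rolled JavaScript, which is exactly the "elegant" answer I was fishing for.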

I didn't bother asking the AI to help with the user's maintenance of their ignore list.

Conclusion and observations

As I said earlier, a lot of people like to debate the value of AI in the world of software engineering, and I've been generally skeptical of what it can do. That hasn't really changed. There are a lot of people who spend a lot of time and energy trying to convince others that AI coding (or worse, "vibe coding") is a game changer and huge productivity booster. Others even believe it makes humans obsolete. If you question these beliefs, you usually get the response that "you're doing it wrong." The one study that tries to measure time savings actually concludes that coding takes longer with AI.

For a very long time, when you couldn't figure something out, you searched the Internet for solutions, which led you to blogs and, more often than not, questions on StackOverflow. AI should be a good replacement for that, and to an extent it is. What I keep coming back to, though, is that it often suggests code that doesn't even compile, or recommends method calls that don't exist. I suppose that's a variation on the "hallucinations" problem in other AI use cases. It confidently makes stuff up. My admitted confirmation bias is that the lack of ability to reason and exercise wisdom creates hurdles for AI to be what people want it to be.

I'm not down on the technology though. I think a lot of the problem in this context is that AI is treated like a panacea for which chat bots are the solution. The fix is to make them more contextual, which is to say integrate them with compilers and feed them specifics about related libraries and frameworks. There also has to be a better feedback loop with the humans who understand what is "best" in terms of technique. I think we're a long way off from making machines demonstrate wisdom and creativity, and the garbage in, garbage out phenomenon applies. Human and documented context can help. Self-training as a machine seems unrealistic, for now.

If I think about what tools over the years have helped with productivity the most, it starts with the refactoring tools. I remember the first time I used ReSharper with Visual Studio. It was like playing chords on a piano, only it was keyboard combinations, to improve stuff in a hurry. Automated build mechanisms, testing frameworks and a hundred different open source libraries all made coding faster and better. I'm not sure if AI, as we know it now, can be an abstraction over coding, but combining it with advancements in languages may help get it closer to that. For now, it's a leaky abstraction, because you need to understand how it works and how to game it to make it even a little effective.

