The Assist: A Look at AI-Powered Development Tools in Everyday Use

There is, rightfully, a host of ethical concerns surrounding the use of AI. However, while the world waits for pending lawsuits to make their way through the courts, developers and agencies are already living in a world where the use of GitHub Copilot, Cursor, and a growing field of AI-assisted (or even AI-first) tools is not only expected, but is affecting deadlines and budgets. As technology professionals, how are we to operate in this climate? As those responsible for shipping high-quality, functional code to paying clients, it is imperative that we are able to pinpoint the strengths and weaknesses of this new paradigm, and to separate the hype from the reality on the ground.

What can these tools do, and what can’t they do? Furthermore, what should they do? I can say that one thing they are not doing is writing this article. Yes, these words are the product of traditional human thought, which should hopefully assuage any fear of a “gotcha” in the closing paragraph. But let’s examine some angles from the day-to-day perspective of a developer.

AI as “Better Google”

Using AI-assisted tools essentially as search has thus far been my optimal use case. We’ve all been in the situation where the Stack Overflow result is too narrow, the official documentation is too cumbersome, and the client is waiting on delivery. In other situations, documentation may be sparse, incomplete, or poorly written, or it may assume contextual background you don’t have, as is often the case with open source libraries. Maybe we don’t need to read 2,000 lines of options to learn how to toggle the one or two arguments we will need for a given project. I find that LLM chat handles this use case exceptionally well, usually delivering information specific to my situation in an easily digestible manner. Additionally, these tools seem to be getting better at providing sources and footnotes so that their accuracy can be checked (which is important, but more on that later).

However, the real benefit here is the level of granularity you can coax out of the responses. Starting with a general overview of a coding concept or a specific technology and then drilling down to specific cases is immensely helpful, not only for producing work more quickly, but for learning that technology on your own terms, in your own style, so that next time you may not need the AI assist at all. Being able to ask follow-up questions is not a feature of technical documentation. It is a feature of Reddit and Stack Overflow, but even there you are at the mercy of other users’ time and willingness to help you, whereas an LLM’s response to this kind of query is usually immediate.

I’ve even begun compiling these results into my own documents when I feel the information is something I may use again, so that I have an optimally worded reminder waiting for me. The process is a bit like digestion: transforming the raw material into a format my brain prefers.

AI as Scaffolder and “Knower of the Syntax”

Increases in developer efficiency with the use of AI are widely reported, and personally I suspect that use cases like these are a fundamental source of those gains. I’m fairly handy with a text editor and snippet functions, but in my experience, a cut-and-paste or prefix-expander snippet is never going to get you all the way there. A prompt that lays out the scaffold of a web page, or that stubs out a form with client-specific fields, can likely save time on average.
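
To make that concrete, here is a minimal sketch of the kind of stub I mean: a contact form generated from a field list. The fields and the renderForm helper here are hypothetical stand-ins for whatever a real client project would call for.

// A hypothetical client-specific field list: the part a prompt would tailor.
const fields = [
  { name: "fullName", label: "Full Name", type: "text", required: true },
  { name: "email", label: "Email", type: "email", required: true },
  { name: "budget", label: "Project Budget", type: "number", required: false },
  { name: "details", label: "Project Details", type: "textarea", required: false },
];

// Build a form element from the field list.
function renderForm(fields) {
  const form = document.createElement("form");
  for (const field of fields) {
    const label = document.createElement("label");
    label.textContent = field.label;
    const input = document.createElement(
      field.type === "textarea" ? "textarea" : "input"
    );
    if (field.type !== "textarea") input.type = field.type;
    input.name = field.name;
    input.required = field.required;
    label.appendChild(input);
    form.appendChild(label);
  }
  const submit = document.createElement("button");
  submit.type = "submit";
  submit.textContent = "Send";
  form.appendChild(submit);
  return form;
}

document.body.appendChild(renderForm(fields));

None of this is hard to write by hand; the savings come from reviewing it instead of typing it.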

Furthermore, we all wear many hats in this business and context switch frequently. Maybe you’ve been writing pure JavaScript for three years, and one day you need to dust off your SQL to pull some reports for a client. This overlaps with the previous section a bit, but a well-worded prompt is going to help you clear those cobwebs and accomplish what you need much faster than retraining yourself for a single task.
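
For a sense of what I mean, here is a minimal sketch of that kind of one-off report, assuming a Node project with the pg (node-postgres) client; the orders table and its columns are hypothetical.

// Minimal sketch: a monthly revenue report from a hypothetical "orders"
// table, using the pg (node-postgres) client.
const { Pool } = require("pg");

const pool = new Pool(); // connection settings come from the usual PG* env vars

async function monthlyRevenue(year) {
  const { rows } = await pool.query(
    `SELECT date_trunc('month', created_at) AS month,
            count(*)                        AS order_count,
            sum(total_cents) / 100.0        AS revenue
       FROM orders
      WHERE date_part('year', created_at) = $1
      GROUP BY month
      ORDER BY month`,
    [year]
  );
  return rows;
}

monthlyRevenue(2024)
  .then(console.table)
  .catch(console.error)
  .finally(() => pool.end());

The point is not that the query is exotic; it is that the prompt jogs the syntax loose, and verifying a query is faster than relearning it.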

To put it another way, I find that AI is helpful for automating certain routine tasks more quickly, and sometimes more effectively, than manual or heuristic methods.

AI as Scriptwriter

Speaking of heuristic methods, I, like many people, am a big fan of shell scripting. Aliases, scripts, anything that enables a very specific and personalized productivity hack is very exciting to me. Unfortunately, I never got very good at it. I have a few cool scripts lying around on my GitHub, but I never finished Unix Power Tools and never became a sed/awk power user. AI is great at this, and it has helped me write some scripts efficiently enough that using them is actually a net time savings!
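
Here is the flavor of script I mean, sketched in Node rather than shell; the paths and naming scheme are placeholder assumptions, exactly the kind of detail a prompt would personalize.

// A one-off productivity hack: file loose macOS screenshots into
// per-month folders. Paths and naming scheme are assumptions.
const fs = require("fs");
const os = require("os");
const path = require("path");

const inbox = path.join(os.homedir(), "Desktop");

for (const name of fs.readdirSync(inbox)) {
  if (!/^Screenshot .*\.png$/.test(name)) continue; // default macOS screenshot names
  const taken = fs.statSync(path.join(inbox, name)).mtime;
  const month = String(taken.getMonth() + 1).padStart(2, "0");
  const folder = path.join(inbox, `screenshots-${taken.getFullYear()}-${month}`);
  fs.mkdirSync(folder, { recursive: true });
  fs.renameSync(path.join(inbox, name), path.join(folder, name));
}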

AI as Debugger

One would think that low-hanging fruit such as a missing semicolon or end statement would be prime territory for AI-assisted IDEs to flex their debugging muscles, but in my experience, even this level of error checking produces inconsistent results. When more traditionally thought-intensive bugs arise, I’ve found the LLM to be virtually helpless.

On a more philosophical note, LLMs probably shouldn’t be debugging code. Debugging is an absolutely fundamental skill in programming: it requires contextual knowledge, the ability to follow logic and patterns, a comprehensive understanding of systems, and creativity.

And yes, it is painful. I’m not going to say “no pain, no gain,” but I am going to say that if you outsource that pain to an LLM, you not only stunt your learning process, but you likely won’t get the fix you were hoping for anyway.

AI as Vibe-Coding Coworker

Though there are some examples of fully vibe-coded projects out there, I am averse to relying on a wholly vibe-coded project, or even a single feature, that might ever see use by a client. My interactions with these assistants have revealed serious shortcomings in the technology’s ability to understand the high-level architecture and logic of a codebase, which contributes to, among other things, the debugging limitations mentioned above.

Additionally, the models still hallucinate. I run into this frequently enough that I would never just “press play” on a blank project and expect usable results. By contrast, a Stack Overflow result may not be exactly what I’m looking for, but I don’t think I’ve ever come across a response that was not only wrong, but completely fabricated. At the very least, I would hope and expect that such responses would be downvoted to oblivion, never to be gazed upon by hapless knowledge-seekers.

But with these drawbacks in mind, I like to abide by the following rule: don’t let the assistant write more code than you can review in less time than it would have taken you to write it yourself. Maybe this is playing it a little too safe, but at the end of the day, we are responsible for the code we submit and ship, because clients aren’t going to blame OpenAI for bugs in their systems.

It’s possible, and even likely, that one day this will all change, and developers won’t place much more scrutiny on LLM-generated code than many now place on open source packages and third-party integrations. But speaking from my experience, I don’t think the technology is at the point where we can safely leave our projects completely in its hands.

That said, the debate is ongoing. Let me know your thoughts in the comments!

Lou DiDomenico

Staff Developer at SwiftKick Web
