Learning to code when LLMs came knocking at the door
- Ayden Smith
- Nov 29
Learning to code as a full-time software development college student from 2021 to 2024 had its unexpected turns. The release of generative AI in late 2022 drastically changed how my peers and I learned to write software. The jump in raw intelligence alone, from no AI to GPT-3.5 and then to Sonnet 3.5, was big enough. But the models themselves were not static; no, the way we interacted with AI, the tools and techniques, changed too. I’m not here to tell you what model or coding tool you should use, or how many jobs AI could replace; we’ve all had enough of that. Instead, this is about a student’s journey learning the tools and techniques of writing software while those same tools and techniques were being flipped on their head. It’s about the human experience of coming to terms with the fact that skills honed over years are no longer needed.
We start in the first phase: 2019–2021, the pre-LLM years. To create software by yourself, you had to learn how to write it, and learning to write software meant reading and watching others code, then using that knowledge to solve coding problems and build projects. The programs you wrote inevitably brought errors with them, and error messages are hard to understand. Getting help meant Google-searching keywords from the error message, and the results usually came as Stack Overflow posts. However, most posts would quickly branch off into describing an issue much different from yours, or their keywords and phrases would go in one ear and straight out the other. If you were lucky, a post would answer your question or help you formulate your next search. But enough searches without promising results meant reaching out to other humans and hoping someone with spare time would answer. This knowledge gap made progress on a coding project difficult. Copying and pasting from Stack Overflow only got you so far, so completing a project demanded a good understanding of how to write code. Because you hand-crafted the logic yourself, a strong feeling of ownership came naturally, and building a functional project was rewarding.
The next phase runs from late 2022 through 2023. Now AI, via GPT-3.5 in ChatGPT, was in the public eye. Presented through a familiar chat interface, the AI was suddenly intuitive to interact with. Before, interacting with AI on a public website might mean adjusting the weights of a neural network in some demo, and your best bet for artificially generated language was an assistant like Siri or Alexa, with their limited responses. With LLMs, this new paradigm of getting helpful results from arbitrary inputs was enticing, especially to someone learning to code. For a new coder, the ability of AI to take your error message, no matter how cryptic, and return an apparently helpful explanation with attached code was breathtaking. What seemed like a helpful answer, however, would quickly show its holes: the explanations were shallow and the code nonfunctional. To work around these shortcomings, asking project-agnostic questions and adapting the more generalized answer yourself showed promising results. And as rare as it was, the AI could sometimes output fixes to relatively complex code, keeping alive the hope that it could become a competent coding assistant. So much so that students kept returning to the chat interface. It was becoming a new source to copy and paste code from, chipping away at the code they would’ve gotten from Stack Overflow.
Late 2024 brought major changes. Claude 3.5 Sonnet was the new kid in town and a significant step up from the free-to-use GPT-4o and even the paid GPT-4. Along with the intelligence advancements, Anthropic offered a chat interface of its own to interact with Claude. This competitor to ChatGPT brought new features on top of the familiar chat layout. One was a UI element reminding you to start a new chat thread, meant to slow how fast you ate up the free-tier limit; the reminder was convincing enough to change the habits of students who used to keep all their prompts for a class in one thread. Another big change was visually shrinking large prompts: a big block of pasted text would be collapsed into a file icon. This feature, along with Claude’s larger context window, made it seamless to send prompts containing far more text, usually entire pasted code files. That ease of use led students to push the limits of code generation, setting their sights high and handing over large chunks of a coding assignment at a time. Aiming high paid off in classes that had detailed assignment instructions paired with common coding problems the AI was well trained on, for example, building an animal taxonomy graph in JavaScript.
In these cases, the AI proved capable enough that the drop in human involvement became noticeable. Professors acknowledged it, and students questioned whether this level of involvement would prepare them for the tests.
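To ground that example, here is a minimal sketch of what such an assignment might look like. The class and method names are my own illustration, not any actual course spec, but it is exactly the kind of well-trodden data-structure exercise the models handled easily.

```javascript
// Hypothetical illustration: a directed graph where each edge reads
// "parent taxon -> child taxon", stored as an adjacency list.
class TaxonomyGraph {
  constructor() {
    this.children = new Map(); // taxon name -> array of child taxon names
  }

  addTaxon(name) {
    if (!this.children.has(name)) this.children.set(name, []);
  }

  addEdge(parent, child) {
    this.addTaxon(parent);
    this.addTaxon(child);
    this.children.get(parent).push(child);
  }

  // Depth-first walk, printing the taxonomy as an indented tree.
  print(name, depth = 0) {
    console.log("  ".repeat(depth) + name);
    for (const child of this.children.get(name) ?? []) {
      this.print(child, depth + 1);
    }
  }
}

const g = new TaxonomyGraph();
g.addEdge("Animalia", "Chordata");
g.addEdge("Chordata", "Mammalia");
g.addEdge("Mammalia", "Carnivora");
g.addEdge("Carnivora", "Canis lupus");
g.print("Animalia"); // prints the lineage as an indented tree
```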
The last phase takes us from early 2025 up to today, late 2025. AI-integrated editors, the popular ones being Cursor and GitHub Copilot, remove the need to ferry code from the chat interface to the code editor. In what’s called ‘Ask Mode’, a single button press moves the generated code into your file. But the real magic comes from ‘Agent Mode’. Agent Mode removes not just the transport step but also the need to figure out which file, and where in that file, the code should go. From a single prompt, the AI zips around your codebase from file to file as if it had your mouse and keyboard. This ability takes a better approach to reducing the context gap, the biggest obstacle to getting the desired output. Prompts became significantly shorter, since pasting in the relevant code was no longer needed; if you wanted the AI to skip the search step, you only had to reference the file by name in the prompt. The magic did have its limits. The AI was still only as good as the data it was trained on: failing to generate functional code for lesser-known libraries, and not reevaluating its approach after countless regeneration attempts, are a couple of examples. Still, this agent approach proved to students that it could not only complete big chunks of their assignments but briskly code much of a web-based final project or a personal project. For those projects, the AI could easily claim more ownership than the student. That didn’t weigh on the conscience of students who just wanted to pass a class, but more ambitious students steered toward projects complex enough that the AI couldn’t do them.
As 2025 comes to a close, there is no sign that the landscape of AI in software development is settling down. The AI-integrated editors keep shipping new ways to work with AI, and both the tools available to the agent and the time it can iterate on a single prompt are growing. Longer reasoning chains let the agent research, plan, and test its own code. It’s hard to tell what the big feature of the next phase of AI coding will be. But if the past phases are any lesson, the next one will be marked by a further decrease in human involvement, and with it a further shift in the feeling of ownership over one’s work. AI is not knocking at the door anymore. It’s in the house, and it’s here to stay.
