
Artificial intelligence (AI) has quickly become part of everyday learning, especially in technical fields like software engineering. Tools such as ChatGPT, Claude, and GitHub Copilot are no longer just experimental; they are now common study aids, debugging partners, and learning companions. In software engineering courses, AI can help explain unfamiliar concepts, generate starter code, and fix errors faster than traditional resources alone. In ICS 314, AI played a noticeable role in how I learned and worked. While I experimented briefly with Claude early in the semester, I primarily used ChatGPT and eventually stuck with it because I preferred the way it explained problems and generated solutions. Throughout the course, I used AI for understanding instructions, writing and fixing code, checking requirements, and learning new tools.
For Experience WODs, I rarely used AI. When I did use it, it was usually during my first attempts and only to help clarify what the instructions were asking me to do or how to get started. For example, I would sometimes ask ChatGPT something like, “Here are the WOD instructions <WOD instructions>. Can you explain what step 4 is asking me to do in simpler terms?”
However, I tried not to rely on AI during Experience WODs. My goal was to complete these WODs within the average time without AI so that I would be better prepared for in-class WODs, where time pressure is much higher. I also found that watching the instructional videos after my first attempt was more helpful than using AI, since the videos showed the steps visually and in order. Overall, AI was somewhat useful for clarification, but I intentionally limited its use because I wanted to build confidence in my own understanding first.
My approach to AI during the in-class practice WODs evolved throughout the semester. At first, I used AI from the very beginning of my first attempt and treated it like a real WOD. I would paste the instructions into ChatGPT and ask for a solution. I quickly realized this approach was not helping me learn. It increased my stress and made me feel unprepared because I could not clearly tell what I actually understood versus what AI was doing for me.
Later in the semester, I developed a system that worked much better. On my first attempt, I completed the entire WOD without using AI. This usually resulted in a DNF, but to me that was okay because my goal was to identify what I already knew and where I struggled. On my second attempt, I used AI and treated it like a real in-class WOD. I focused my questions on the areas that caused problems during my first attempt. This approach made AI much more useful because it helped me prepare for actual in-class WODs, taught me how to ask better questions, and improved my speed over time.
I used AI in almost every in-class WOD, except for Nextjs 1. In total, I used ChatGPT in eight WODs. I tried Claude once during the TypeScript 3 WOD, but I did not like the results and returned to ChatGPT because I was more familiar with how it explained problems and generated solutions.
My typical process during an in-class WOD was very structured. I would copy the full WOD instructions into ChatGPT, along with any extra details mentioned by the instructor. After receiving an initial solution, I would test the code and fix any errors I understood on my own. If I encountered errors I did not understand, I would paste the relevant code and error message and ask, “Here is my current code <current WOD code> and the error message <error message>. Please explain what the error means and how to fix it.” Before submitting, I always asked ChatGPT to validate my work, using a prompt like, “<WOD code here> Does this code meet all the requirements? Here are the instructions: <WOD instructions here>”.
This process was extremely useful. AI helped me generate a starting point, debug errors, and double-check requirements. Without AI, I believe I would have failed several WODs due to time constraints.
For essays, I used ChatGPT only as a checklist tool. I did not ask it to write content for me. Instead, I asked questions such as, “Does this essay meet all the requirements listed in the prompt?” This use of AI was helpful for validation but did not replace my own writing or ideas.
I used ChatGPT consistently during my final project for both coding and non-coding tasks. My usual process was to explain the issue, paste the GitHub issue description, include relevant files, and add error messages if something was not working. For example, I might ask, “Here is my current code <current code here> and here is my end goal: <issue description here>. I am getting this error <error message>. What should I do?” AI typically accounted for at most half of the total effort on an issue. Most generated code required manual edits before being added to the project. Only one response, related to adding an inline edit feature for vendor ingredients, was used mostly as-is. Overall, AI acted more like a helper than a replacement for my own work.
I used AI sometimes to learn concepts I did not understand, especially during Experience WODs. I also used it as a tutorial for installing tools such as VS Code, Postgres, and pgAdmin4. For example, I asked, “Give me step-by-step instructions for installing Postgres on my Windows laptop.” AI was helpful here because it provided clear, simplified steps when official documentation felt overwhelming.
I used AI to help answer questions in class, since we were encouraged to do so. It was helpful when I understood the topic but struggled to put my thoughts into words. I did not use AI to answer questions in Discord because I did not answer any questions there.
I only asked one question in the smart-questions channel and did not use AI for it. The question was about a ZIP file download that kept coming out empty, and it did not involve AI in any way.
I used AI once when learning functional programming. I asked something like, “Give me a TypeScript coding example that uses .reduce.” While this helped a little, it was less useful than I expected, and I still needed other resources to fully understand the concept.
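The kind of example I was asking for looks something like the following sketch. This is my own illustration, not ChatGPT's actual response; the array contents and variable names are made up for demonstration.

```typescript
// Summing an array with .reduce: the accumulator starts at 0 and
// each element is folded into it.
const total = [1, 2, 3, 4].reduce((sum, n) => sum + n, 0); // 10

// A slightly richer use: grouping words by length. The accumulator
// here is an object mapping each length to the words of that length.
const byLength = ["hi", "cat", "dog", "bird"].reduce<Record<number, string[]>>(
  (groups, word) => {
    (groups[word.length] ??= []).push(word);
    return groups;
  },
  {},
);
// byLength is { 2: ["hi"], 3: ["cat", "dog"], 4: ["bird"] }

console.log(total, byLength);
```

Seeing `.reduce` used for something beyond summing numbers, like the grouping case, was what eventually made the concept click for me.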
I sometimes asked AI to explain its own code and why it chose one approach over another. For example, I would ask, “Why did you implement it this way instead of using another method?” This helped me better understand the logic behind the solution.
AI was heavily used for writing code during in-class WODs and the final project. It was especially helpful during in-class WODs for generating initial solutions under time pressure.
I did not specifically ask AI to document code, but when it included documentation, I reviewed it and decided whether to keep it. About half the time, I kept the documentation if it was clear and accurate.
I frequently used AI for quality assurance, especially for ESLint errors and runtime issues. I asked questions such as, “What is wrong with this code?” or “Can you help me fix the ESLint errors in this file?” This was extremely helpful when I did not understand the error messages on my own.
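A typical exchange looked like the following sketch. This is an illustrative example I wrote for this essay, not code from an actual WOD; the function and rule involved are assumed for demonstration. ESLint's `prefer-const` rule flags `let` bindings that are never reassigned, and ChatGPT would explain the message and show the one-word fix.

```typescript
// Before (ESLint: "'greeting' is never reassigned. Use 'const' instead."):
//   let greeting = "Hello, " + name;

// After: switching `let` to `const` satisfies the rule without
// changing the function's behavior.
function greet(name: string): string {
  const greeting = "Hello, " + name;
  return greeting;
}

console.log(greet("ICS 314")); // prints "Hello, ICS 314"
```

Having the error message explained in plain language like this, rather than just silenced, was what made the AI responses genuinely useful for quality assurance.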
I did not use AI for any other course elements beyond those listed above.
Using AI in ICS 314 had a clear impact on how I learned software engineering concepts. On the positive side, AI helped me understand unfamiliar topics more quickly and reduced my frustration by helping me get unstuck when I ran into problems. However, I also noticed that over time I became dependent on ChatGPT to help me get started. Instead of struggling through the first steps on my own, I often turned to AI immediately. While this saved time, it sometimes limited my problem-solving growth. I learned that AI is most beneficial when used as support rather than as a first step.
Outside of ICS 314, AI has proven useful in both collaborative and everyday settings. In collaborative software projects, it can help explain unfamiliar code, troubleshoot errors, and explore alternative solutions, helping teams solve real-world software engineering problems more efficiently and avoid common mistakes. In daily tasks, AI can assist with rewording messages or checking grammar before sending important emails.
One of the biggest challenges I faced was learning how to write effective prompts. Early in the semester, I often gave vague instructions and received unhelpful results. Over time, I learned that being more specific produced much better answers. Another challenge was that asking AI to make too many changes at once sometimes caused it to misunderstand what I wanted. In those cases, starting a new chat and clearly restating the problem worked best. These challenges also represent opportunities, as learning how to communicate clearly with AI is becoming an important skill in software engineering.
Traditional teaching methods emphasize lectures, documentation, and trial-and-error learning. AI-enhanced learning adds faster feedback and explanations that can be adjusted to each learner. While traditional methods are important for building strong fundamentals, AI can improve engagement and efficiency. I think the best learning experience comes from combining both approaches rather than relying on only one.
In the future, AI will likely play an even larger role in software engineering education. As AI becomes more accurate and better at interpreting what users are actually asking, it could become an even more effective learning tool. However, it will remain important to teach students how to use AI responsibly so that it supports learning rather than replacing critical thinking.
Overall, AI played a significant role in my experience in ICS 314. It helped me learn faster, complete challenging WODs, and better understand complex concepts. At the same time, it taught me the importance of balance. When used thoughtfully, AI is a powerful learning tool. Future courses can benefit from clearly teaching when and how to use AI so that students develop both strong technical skills and independent problem-solving abilities.