AI in the software industry

Since AI, and LLMs specifically, became a hot topic, it has naturally absorbed most of the software community's attention. AI is accelerating development, but it's also quietly eroding the discipline that made good software possible in the first place.

While AI is definitely useful and makes my work much easier, it doesn't come without a cost. We are witnessing a shift in the structures the software industry is built on: the tools, but the people as well. Data breaches are becoming more frequent, and hard-won lessons in software are no longer taken seriously by everyone.

Social media doesn't help either, spreading FOMO like wildfire. Every post you read tells you to start delegating everything to AI agents or you will be left behind, jobless and hungry for the rest of your life. What's worse, the people spreading this are often either selling AI-based products or simply chasing attention and engagement numbers.

Engineers are also tempted to trade diligence for speed, often without realizing the cost.

I will share my experience so far.

My experience with AI

My current setup is simple:

  1. Claude Code during development
  2. ChatGPT / Claude for general queries

I really like this setup and use it daily. Does it solve all of my problems? Not even close.

It works well on well-structured repositories. Using it for repetitive tasks is a no-brainer, and describing exactly what I want done and how to do it works very well. For some issues, especially with less common tools, I found it can waste more of my time than implementing things the old way would take. One trap I noticed is the temptation to give it just a few more iterations, which usually doesn't end well. If it can't get it right in two or three iterations, I do it myself.

It can also explain documentation pretty well, until it doesn't, and then it wastes your time again. In my daily workflow I consciously try to do the research and read documentation myself: first of all to be diligent, but also to keep my skills from rusting.

Explaining legacy codebases works surprisingly well, which makes sense, as agents can build context from multiple files at once. They struggle with heavily unstructured codebases, but at least you get some understanding of what the code is doing. Refactoring structured codebases also tends to work well.

It's useful when you want a quick summary of a topic. Recall is pretty high, and the models tend to do a good job there.

I don't give them any OS-level permissions, and probably never will. I haven't yet tried building anything from specs alone, but I would expect it to handle smaller projects well. Larger projects, I would not.

Generating database migrations also works well, but be diligent and test both the up and down migrations on a local database.
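A minimal sketch of what that diligence looks like, using an in-memory SQLite database and a hypothetical migration pair (the table and columns are illustrative, not from any real project):

```python
import sqlite3

# Hypothetical up/down migration pair (schema is illustrative only).
UP = "CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL)"
DOWN = "DROP TABLE users"

def table_exists(conn, name):
    """Check the SQLite catalog for a table with the given name."""
    row = conn.execute(
        "SELECT 1 FROM sqlite_master WHERE type='table' AND name=?", (name,)
    ).fetchone()
    return row is not None

# Exercise both directions against a throwaway local database.
conn = sqlite3.connect(":memory:")
conn.execute(UP)
assert table_exists(conn, "users")      # up migration applied
conn.execute(DOWN)
assert not table_exists(conn, "users")  # down migration restores the old state
conn.close()
```

The point is not the specific schema but the habit: run the up migration, then the down migration, and verify the database actually returns to its previous state before anything ships.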

AI agents can be good assistants, but I don't agree with the idea that you can use them as your employees. That might work in ordinary situations, but not in chaotic ones.

The term "AI slop" is used more and more, but to be fair, if you provide junk as input you will get junk as output. That has been true of AI models from the beginning, and it doesn't seem to be going away. I don't see it too often, but then I try to be precise with my instructions.

A big engineering skill will be recognizing what to use AI agents for. Once you figure that out, you can often do three times more in a day. But you are also tempted to stretch the boundaries a bit further and lose time. Which is fine; you need to test the limits anyway.

Even the big cloud providers will have to learn those lessons, as their downtime is increasing.

New people coming to the industry

Since there is an illusion that you don't need to understand software engineering to build software products, people with no expertise are building their own products. First, I think most of them will unfortunately fail. Second, those people are the new surface area for security exploits. Recently LiteLLM was compromised, and all it took to have your SSH keys, environment variables, cloud credentials, and other secrets stolen was installing it via PyPI. The package has 97 million downloads per month and is integrated into many agentic workflows. Nobody chose to install it; many agent plugins used it under the hood, and that was enough.
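One partial mitigation for supply-chain incidents like this is pip's hash-checking mode, which refuses to install any artifact whose digest isn't pinned in advance. A minimal sketch; the version and the sha256 digest below are placeholders, not real values for litellm:

```
# requirements.txt: every dependency pinned to a version and expected digest
litellm==<pinned-version> --hash=sha256:<expected-artifact-digest>

# Install rejects anything unpinned or with a mismatched digest:
pip install --require-hashes -r requirements.txt
```

This doesn't help if the compromised release is the one you pinned, but it does stop a later malicious upload from silently replacing what you audited.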

These builders don't know how to maintain a system, and probably have no idea that they will have to.

Many will unfortunately learn the hard way, and again, it's because people are tempted to do things the easy way. It seems the tool that is supposedly going to replace developers will instead widen the gap.

Junior software engineers face a unique challenge. Using AI is becoming the natural thing to do, but will it come at the cost of a foundational understanding of software engineering? I think it's likely. Battle scars are necessary in software engineering, and probably in any industry.

Conclusion

I think a general problem, for engineers but for others as well, is that everything is moving too fast. If you lose track of your goals, you will end up chasing rabbits, constantly hunting for the next shiny thing without building your knowledge base and skills.

That's where this blog comes from: intentionally slowing down and reflecting. I try to remind myself that I am not chasing results, I am chasing knowledge. And you can't cheat knowledge.

There's a lot more to be said about this topic, but it's enough for one blog post. It's not black and it's not white; it's somewhere in the middle, with the potential to be a remarkable technology, but my doubts come mostly from the human factor in the equation. Learning to use AI agents, and not to abuse them, will definitely be useful.

While output is increasing, quality is quietly eroding, and for engineers who care about craft, that loss is hard to ignore. We're shipping more than ever, but building less of what actually matters.