The Developer's Dilemma and How I Keep My Sanity In the AI Era
The Shift from Vibes to Tribes
After years of honing my craft—studying algorithms, learning how real systems fail, and continually evolving my skill set—I'm now being told that I, along with millions of other developers, might be on the brink of irrelevance. The idea that fully autonomous coding agents are just around the corner, operating at mid-senior to senior levels, feels like a toxic lie with real consequences for current and future developers. Within the last two months I've accepted that coding will look very different than I originally thought, but I still believe engineering skills are what separate those who do from those who don't.
Lately there's been a push for people to give up critical thinking and accept that knowledge, skills, and coding itself can be offloaded to an LLM. One of the more infuriating statements came from Sam Altman, who recently said that intelligence will become a utility we pay for like gas or electricity. What happens then? I ask because I'm using Claude Code daily—while also crafting skills, condensing agent memory, learning about Prisma, Linux, Tmux, and many other new things, all while coordinating multiple agents across different parts of my codebase and feeling a love/hate relationship I still haven't processed. I'm confused and frustrated by the rift between reality and sensationalism.

In some ways it feels like developers are naturally meant to fester in a constant state of imposter syndrome, and now we're also told that we are perpetually one step away from losing our careers. Lately it is starting to feel exhausting. Every sales pitch for AI comes with the "6 months away from AGI" warning, but at this point it feels safer to say that the whole sentiment might just be absolute bologna. I'm still employed, and I can say with confidence that I'm a decent programmer, but I'm not some LeetCode prodigy running on two hours of sleep every day, out-studying everyone in the room and swearing by "the grind". I am a human being, and lately it seems like a lot of us are starting to feel out of place for saying so. As a web developer I have paid so many dues that other career paths do not require, yet the stress and disrespect continue to haunt me and my fellow devs. Why can't we work at a normal pace? Why are so many companies seeing no real reward from "unlimited productivity" boosts? Maybe the pace was fine before. Maybe this bizarre need for more productivity is starting to backfire.
Intelligence as a Utility?
One thing I learned this year (admittedly the hard way) is that smashing that prompt box over and over and simply wishing for the best is a surefire way to watch your technical debt 10x instead of your efficiency. It's time we stop lumping "vibe coding" in with a skilled developer with years of experience utilizing AI—the two are very different. This doesn't mean I think AI is one big bubble, but there are certainly aspects of it all that do feel like a gimmick. There are pieces of this supply chain that were supposedly never going to pop and are now actually popping. There is a gap between reality and the sales pitch we keep being force-fed. Capitalism has always been synonymous with more, more, more, and funny enough, the largest enterprises are starting to find that same foundational rock to be the one they perish on. The biggest fork in the road right now is that some of the largest enterprises, like Microsoft and Amazon, are quickly realizing that they forced these workflows into their systems far too quickly. The rollback of Copilot, the outages at GitHub and Cloudflare, and the recent news of Amazon struggling to find a sweet spot with Kiro and their workforce show that we have a long way to go before the loop fully closes. We also have a long way to go before developers are replaced. An economy doing badly, CEOs outsourcing work, and overhiring facing a correction is not the same as a large language model doing a developer's job. Please read that again. If it took a little shy of a trillion dollars to get this far with AI, what leads anyone to think the growth will keep going at the pace it did this year? One very obvious fact I have come to realize is that these large language models have already been trained on the majority of code there is to learn from.
If the functional foundation of all of these models is essentially a statistical, predictive tokenization output, then what exactly is the fuel that keeps this rocket going up on the same trajectory? If someone can answer this question without saying "more compute" then maybe I will listen, but if that is the best answer here, we might as well get ready for an environmental disaster now. I am not rambling about all this to burst anyone's bubble; I just find it extremely odd how rapidly public sentiment normalized the rather apathetic idea of disposing of an enormous part of the workforce simply because it is possible. Yes, LLMs can do a lot of repetitive boilerplate tasks; they also require a year's worth of electricity to generate a 4-second meme of Harambe doing backflips. I think it is time we start coming back to reality here, people. The models are plateauing (you are lying to yourself if you think otherwise), and I can guarantee you that predictive statistics only lead down one narrow road: non-deterministic outputs in a world that requires deterministic results.
2025: hype, ads, and a shaky economy
2025 and 2026 were filled with many highs and lows. New models, frameworks, and vibe-coded apps shipped faster than I could read the first two sentences of the docs for the last version. OpenAI finally embraced the profit model we all knew was coming: ads inside your AI. Companies like Amazon, meanwhile, are facing a major dilemma after laying off a huge percentage of their staff and realizing that AI isn't going to solve every problem as quickly as they hoped. At the time of writing, it seems like some of the hype is starting to die down, ever so slowly, but surely. The economy is shaky, and the tech industry is facing a lot of uncertainty regardless of AI. This is something I think a lot of people are not aware of. If we took AI completely out of the mix right now, would Instagram be this revolutionary thing? Would HubSpot be pushing insane boundaries? Would YouTube be on the cutting edge? The point I am trying to make is that tech was reaching an inevitable lull regardless. The constant narrative that AI is "replacing" developers is not only misleading, it is actually detrimental to the industry as a whole. It creates fear and anxiety among developers, encourages companies to cut corners, and once again puts the emphasis on hype trends over substance. The reality is that AI is a tool, and like any tool, it can be used effectively or ineffectively. I read about so many poor college kids thinking they should become plumbers now. Remember something here, folks: the people telling you to become a plumber or an electrician are the same people who told you to code a few years ago. Trust your gut. Cancel out the noise. Do what you're good at. It will still lead to a fulfilling career.
The Only Sane Path Forward
You can 10x your efficiency, but you can also 10x your technical debt if you let things slip through the cracks. In many ways it makes sense why the bigger enterprise companies are the ones falling for the "easy mode" narrative: the bigger they are, the harder they fall. The economics of cutting costs or inflating expectations carry a price one way or another. I for one never wanted to work as a cog in the wheel of some large tech machine, and I am really glad that I currently work for a company that values its workers. Nowadays it is more important than ever to research the companies you work for, because the temptation for many CEOs to cut costs is sadly pervasive.
While the fear mongering has been relentless, I think the right move is to cool your jets and understand that the results-driven economy that Alex Karp and other AI tycoons speak of will inevitably separate the doers from the goobers anyway. There are ways to embrace AI while also pushing back at companies that label us as expendable. I want to be clear that I have never felt more bullish on critical thinking and hard skills than I do right now. If you have worked with software even as a hobby, you know how unforgiving code can be, and I am confident that a visionless latching on to these agentic systems will have its repercussions. A missing comma can literally break an entire application—so why is it so shocking that these LLMs can get you 90% of the way there and then completely backfire at the last minute? The most valuable developers were always the ones who figured out that last 10%, and I think this truth will end up being the differentiator between a vibe coder and a valuable architect. There is no point in fighting this new wave, but there is a point in being strategic about it. The best developers will be the ones who can leverage AI to their advantage while maintaining a strong desire to upskill and learn new things. The worst developers will be the ones who just want to sit back and let the AI do all the work for them. By doing so you are literally embodying the baseline, which is not a good look for anyone who wants to be taken seriously in this industry. I never excelled unless I challenged myself; sure, I was able to lazily get by for a while, but after a few months it always became obvious to me and my peers. In this new era that window of allowance will get smaller and smaller.
The best thing you can do is embrace the change, but also trust yourself and your experience. I recently finished a project that I went all in with AI on, and I can tell you that it was a huge success. My takeaway afterwards was that the project only went so well because I was able to channel years of trial and error, guide my models down the right paths, and circle back at the end to the things I did not understand very well and learn them deeply. Had I taken a completely hands-off approach, I am confident that halfway through I would have been stuck in a tangled mess of technical debt and mounting pressure to fix bugs that I never really owned or understood. The point is that you can use AI to your advantage, but you can't just throw it at a problem and hope for the best. You have to be strategic.
The Slop Train and Our Work Ahead
The promises of AI "closing the loop" are starting to fall short. The dystopian angle gets all the attention, but the real issue is how normalized it has become to talk about a world where AI runs everything by default and our skills are basically irrelevant and no longer necessary. The use cases that generate real ROI still demand extensive domain and system knowledge. If everyone can build something, that something loses its value through sheer ease of production. The systems that actually scale are the ones that need experts—and in simple terms of supply and demand, that will never change. Software may be cheaper than ever to make, and the floodgates may have opened with vibe coders, but companies will always need unique systems and people who know what they are doing. The Internet of Things around us has accelerated over the past few years, and from my vantage point a fundamental understanding of how technology works seems more important than ever.
LLMs have hard limitations, and you can fill those gaps with your own skills. I say all of this as someone who resisted a proper AI workflow for too long. I'd bounce between OpenAI, Anthropic, and other random models, prompting half-baked questions into a box and pasting code I was too lazy to write or understand. It did not take long before I realized something needed to change. More often than not it cost more time than doing the work myself, and after a few months I wished my time had been spent more wisely. When I finally started reading the Claude docs, researching proper MCP techniques (still a bit of a hobbyist gimmick in my opinion) and memory management, I actually understood the principles behind everything I shipped, and I also realized that AI produces different results every time you ask it something. Sometimes there is a literal element of luck to what you get out of the model, and to me that is not an enterprise or production-ready tool. The larger point I started to grasp is that artificial intelligence is non-deterministic at its core. The same inputs are almost guaranteed to give you different outputs every time, and software simply cannot scale without a deep level of deterministic architecture. This is why I think we will need a human in the loop for quite some time. Once you truly understand that these models are nothing more than tokenized prediction machines, you start to realize that there is hope at last for us developers!
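That non-determinism isn't mystical; it falls out of how decoding works. A toy sketch (illustrative numbers only, not any real model's API): when the next token is *sampled* from a probability distribution, identical inputs can yield different outputs, whereas greedy decoding always picks the most likely token and is repeatable.

```python
import random

# Toy next-token distribution a model might assign after some prompt.
# The tokens and probabilities here are made up for illustration.
NEXT_TOKEN_PROBS = {"simple": 0.4, "obvious": 0.3, "risky": 0.2, "wrong": 0.1}

def greedy_token(probs):
    """Greedy decoding: always return the single most likely token."""
    return max(probs, key=probs.get)

def sample_token(probs, rng):
    """Sampled decoding: draw a token in proportion to its probability."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

# Greedy decoding is deterministic: same input, same output, every time.
assert all(greedy_token(NEXT_TOKEN_PROBS) == "simple" for _ in range(5))

# Sampled decoding is not: across different random seeds, the exact same
# input distribution produces more than one distinct token.
samples = {sample_token(NEXT_TOKEN_PROBS, random.Random(seed)) for seed in range(50)}
print(samples)
```

Real models layer temperature, top-p, and other knobs on top of this, but the core trade-off is the same: sampling buys variety at the cost of repeatability, which is exactly what makes it hard to slot into pipelines that demand deterministic results.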
Deterministic Requirements in a Non-Deterministic World
One great example of this deterministic quagmire is the MCP ecosystem, something that has started to frustrate me beyond belief. Every single day there seems to be a new way to connect an agent to an API like GitHub or Netlify. This "fuck it and find out" mentality is fun for a while, but it is undoubtedly starting to show its limitations in big enterprise systems. Many of the MCP connectors are open-source plugins that remind me of something WordPress would rely on. It feels like a customer-first approach rather than a developer-first one, and this is probably why so many people I know are starting to reject these tools. A senior architect thinks about every single input and output with purpose, and I truthfully don't think we will ever see a day where large-scale codebases can rely on a system that might give you good results one time and then completely fail the next. The predictive token model isn't something these companies can steer away from given their locked-in investments, and it sits at the very core of large language models. As these systems become more entangled in our daily software, I am reading about more security vulnerabilities, accidentally deleted databases, and mounting negative experiences. The looming skill gap, the sudden cliff dive coming when these companies start charging more to turn a profit, and the shortage of GPUs all tell me that this boom can't go on forever. I don't code by hand as much as I used to, and I do find it scary how certain critical skills have atrophied, but I personally enjoy some of the rituals and high points that come with agent-driven programming. I do miss the reward of solving gnarly bugs by hand, but there's a difference in how I approach an AI-driven task vs. a task that models seem to fall short on.
It is both fascinating and horrifying to think that an entire universe of software could be overturned so quickly, but it also doesn't surprise me considering how aggressively these models were forced down our throats for the past few years.
Fundamentals matter, and will keep mattering. Period.
I feel bad for new developers who think fundamentals don't matter anymore. Learning raw syntax may not be as valuable as wider architectural patterns and concepts, but I still think a healthy mix of both worlds is needed to be a well-rounded, future-proof dev. Being able to read code, understanding immutable vs. mutable data, git branches and worktrees—none of that is going anywhere. I'm willing to bet LLMs never fully capture the architectural patterns and decision making humans use to get systems across the finish line. LLMs are great at prediction, but creativity and "happy accidents" are what turn the unknown into discovery. I don't find much originality in prediction, though the line separating the two is often blurry and thin.
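To make one of those fundamentals concrete, here is a minimal sketch of the mutable vs. immutable distinction in Python (variable names are my own, purely illustrative). It's the kind of aliasing bug that reads fine at a glance and is easy to wave through in a code review of generated code:

```python
# Mutable data: two names can point at the SAME list, so an "innocent"
# append through one name is visible through the other.
config = ["debug"]
shared = config          # aliases the same list; no copy is made
shared.append("verbose")
print(config)            # ['debug', 'verbose'] -- config changed too

# Immutable data: tuples cannot be modified in place, so "changing" one
# means building a new object while the original stays untouched.
frozen = ("debug",)
updated = frozen + ("verbose",)
print(frozen)            # ('debug',)
print(updated)           # ('debug', 'verbose')
```

A developer who understands why `config` changed in the first half, and why `frozen` didn't in the second, can spot this class of bug whether the code was written by hand or by a model. That's the point: the fundamentals are what let you review the output instead of just trusting it.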
If an AI system leaves no need for your role, you were probably already a disposable worker operating off someone else's checklist. This era lets developers pull skills from other sectors, bridge knowledge gaps quickly, and explore architectural patterns that were previously impossible. I've seen a lot of programmers leave their jobs recently—and I get why—but I've chosen to keep pushing and show what "10x" really looks like.
The choice is yours, and there's something exciting about that level of autonomy. While the developer world could use a healthy slowdown and a correction from the hype, it's still remarkable (and scary) how quickly we got here. I will always consider myself an engineer, and the imposter syndrome will never go away, but I don't think AI is what made me feel that way. Over the next few years I expect that most software that breaks boundaries will be written by extremely experienced developers who know what they are doing. I had opportunities to jump ship and change careers, but I am really happy I stayed on board. I love building things, and I love the fact that a handful of people can now build things on their own that were once the domain of large enterprise teams. In a strange, paradoxical way, as embedded as these AI systems have become in corporate investment and enterprise scale, the falling barrier to entry and cost of building something impactful has given power back to the individual, everyday developer. I don't see that trend changing, and I think the open-source community is alive and well. The best thing we can do is stay informed, continue learning, and not forget our roots. I will never be able to just shred through an application without understanding how it works, and thankfully I think that is still a selling point when it comes to building something successful.
Shoutout to all the developers who are still pushing forward.