Every dev shop website now claims to be "AI-accelerated." Most of them are not. They installed Cursor, used it for two weeks, and added a line to their About page. You can usually tell within five minutes of a sales call.
We've been using AI tools daily across actual client projects for the last 18 months. .NET backends, Angular frontends, MDX content systems, mobile apps, the lot. Some of it has changed the work meaningfully. Some of it hasn't moved the needle. And some of it is a trap that makes inexperienced developers ship worse code, not better code.
This is what we've actually learned, in plain language, with the gotchas left in.
The Honest Math
Building production software is roughly:
- 40% understanding what to build (requirements, edge cases, business context)
- 20% architecture and design decisions
- 25% writing the code
- 15% testing, debugging, and integration
AI helps the most with the 25% writing-code part, and a fair bit with the 15% testing-debugging part. Where it's almost useless is the 40% understanding-what-to-build part, because that work is fundamentally about talking to humans and reading between the lines of what they say.
Anyone selling you "AI will write your software" is either not shipping software, or they're shipping bad software and don't know it yet.
What We Actually Use It For
Real examples from real work.
Boilerplate that's tedious to write but easy to verify. Generating a new Angular standalone component with imports, signals, and a template stub. Writing the controller and service for a new .NET API endpoint when the pattern is well-established in the codebase. Setting up a new EF Core migration's Up/Down methods. The common thread: the pattern has to already exist in the codebase. AI matches it; it doesn't invent it.
Test scaffolds. Generating the table of test cases from a function's signature and expected behavior. We still write the assertions and edge cases ourselves, but the setup, mocks, and arrangement code that nobody enjoys writing gets done in seconds.
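The kind of table-driven scaffold we mean, sketched in TypeScript; the function under test and the case names here are hypothetical, and in a real project the cases and assertions are the part we still write ourselves:

```typescript
// Hypothetical function under test.
function applyDiscount(price: number, percent: number): number {
  if (percent < 0 || percent > 100) throw new RangeError("percent out of range");
  // Round to whole cents.
  return Math.round(price * (100 - percent)) / 100;
}

// The generated part: the case table shape and the loop.
// The actual cases and expected values are ours.
const cases = [
  { name: "no discount", price: 100, percent: 0, expected: 100 },
  { name: "half off", price: 100, percent: 50, expected: 50 },
  { name: "full discount", price: 100, percent: 100, expected: 0 },
];

for (const c of cases) {
  const got = applyDiscount(c.price, c.percent);
  console.assert(got === c.expected, `${c.name}: got ${got}, want ${c.expected}`);
}
```

The same shape drops into whatever test runner the project already uses; the scaffold is the commodity, the cases are the judgment.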
Explaining unfamiliar code. Drop a 200-line legacy method into the chat, get a clear summary back. Especially useful when picking up a project mid-life, or onboarding a new developer to an old codebase. Faster than reading line by line, and the questions you ask back are more pointed.
Refactoring drudgery. Renaming a property across 30 files. Converting a class component to a hooks-based one. Pulling a complex inline expression into a named function with the right type signatures. The kind of work that is mechanical but error-prone when done by hand.
Migration scripts. One-off transformations between formats. CSV to JSON, old schema to new schema, REST request shape to GraphQL query. We describe the transformation, show two examples, get a script, run it on a small subset, verify, then run it on the full data. Saves hours.
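A minimal sketch of that workflow in TypeScript, assuming a deliberately simple CSV with no quoted fields or embedded commas (a real migration would use a proper CSV parser):

```typescript
// One-off CSV-to-JSON transform. Assumes no quoted fields or embedded commas.
function csvToJson(csv: string): Record<string, string>[] {
  const [headerLine, ...rows] = csv.trim().split("\n");
  const headers = headerLine.split(",").map(h => h.trim());
  return rows.map(row => {
    const values = row.split(",").map(v => v.trim());
    return Object.fromEntries(headers.map((h, i) => [h, values[i] ?? ""]));
  });
}

// Step 1 of the workflow: run on a tiny sample and eyeball the output
// before pointing the script at the full dataset.
const sample = "id,name\n1,Ada\n2,Grace";
console.log(JSON.stringify(csvToJson(sample), null, 2));
```

The script itself is disposable; the discipline of "small subset first, verify, then full run" is what keeps a one-off transform from becoming a data-corruption incident.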
Code review as a second pair of eyes. We run pull requests through a separate AI reviewer before merging anything substantial. It catches things human reviewers miss when they're tired: a forgotten null check, a race condition in a callback, an N+1 query in a new endpoint. Not everything it flags is real, but the signal-to-noise ratio is good enough to be worth the 30 seconds.
Quick syntax lookups across stacks. When you're switching between .NET, Angular, React, Flutter, and Python in a single afternoon, "what's the equivalent of X in Y" is a question you'd otherwise burn a tab and three minutes on. AI answers it in two seconds without you leaving the editor.
What We Don't Use It For
Architectural decisions. AI is fluent in patterns but doesn't know your team, your performance constraints, or your existing tech debt. Asking "should we use SignalR or polling here?" gets you a balanced essay. The right answer requires sitting with the actual data flow for an hour.
First-pass code in critical paths. Anything touching money, authentication, or user data doesn't get AI-generated code shipped without a human writing or rewriting it from scratch. AI hallucinates the right shape with the wrong details, and the wrong details in those areas cost real money.
Library API calls in fast-moving libraries. AI confidently suggests methods that don't exist in the version of the library you're using. We learned to verify every external call against the actual docs, especially for newer or rapidly versioned packages.
Anything done under deadline pressure. "Just get AI to do it, we don't have time to review carefully" is the start of a production incident. The temptation to skip review is highest exactly when the consequences of a bad PR are highest.
Greenfield architecture from a one-line brief. "Build a SaaS platform for X" prompts produce working-looking code that has none of the structure a real product needs. We treat AI as a junior pair programmer that needs scope, context, and a clear definition of done. Junior pair programmers don't get to make architectural calls.
Treat AI Like a Junior Pair Programmer
This is the single mental model that's saved us the most time. Think of AI as a smart junior who started Monday. They're fast, willing, and have read a lot. They don't know your codebase, your conventions, your business, or your client's quirks. They will confidently do the wrong thing if asked the wrong question.
The same management you'd give a real junior works:
- Tight scope on each task ("update this method to handle null inputs," not "make this better")
- Real context with the prompt (paste the relevant types, the calling code, the test if there is one)
- Always review the output before it lands in the codebase
- Always run the tests
- Always ask "does this match the rest of the file's style?"
The teams that get burned by AI are the ones treating it like a senior. It is not a senior. It is a fast, well-read junior who never gets tired and never asks for clarification when it should.
A Real Example: A Multi-File Refactor
A recent client asked us to add a soft-delete pattern to an existing application. Twelve domain entities, each with their own repository, controller, and a handful of related queries. Manually, this is two days of mechanical work plus a day of testing.
What we did:
- Added the soft-delete pattern to one entity by hand. Carefully, with tests, to establish the pattern.
- Described the pattern to AI: add an IsDeleted bool, update queries to filter it out, add a DeleteAsync method that sets IsDeleted = true instead of removing the row, and add an IncludeDeleted query option for admin views.
- Pointed it at the second entity and asked it to apply the same pattern.
- Reviewed, tweaked, ran tests. It got 90% right. The 10% it missed was a custom query that didn't follow the standard repository pattern.
- Once the second entity was clean, we did the remaining ten in a batch and reviewed each one.
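The contract the first hand-built entity established, sketched here in TypeScript against an in-memory store rather than the client's actual C#/EF Core code; every name is illustrative:

```typescript
interface Entity {
  id: number;
  isDeleted: boolean;
}

class SoftDeleteRepository {
  constructor(private rows: Entity[] = []) {}

  add(row: Entity): void {
    this.rows.push(row);
  }

  // Standard queries filter out soft-deleted rows;
  // admin views opt back in with includeDeleted.
  async listAsync(opts: { includeDeleted?: boolean } = {}): Promise<Entity[]> {
    return this.rows.filter(r => opts.includeDeleted || !r.isDeleted);
  }

  // Soft delete: flag the row instead of removing it.
  async deleteAsync(id: number): Promise<void> {
    const row = this.rows.find(r => r.id === id);
    if (row) row.isDeleted = true;
  }
}
```

Having this contract written down is what made the batch work reviewable: every later entity gets judged against the same flag, the same query option, and the same delete semantics, so deviations (like that non-standard custom query) stand out immediately.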
Total time: under a day instead of three. The acceleration only worked because we did the first entity by hand and used it as the contract. AI is a multiplier, but you have to give it something worth multiplying.
What Actually Changes for Clients
This is the part that matters for anyone hiring development work.
Quotes get tighter. When mechanical work shrinks, the parts of a project that were padded for safety can be quoted closer to the bone. We're not passing on imaginary savings. We are passing on real ones.
Iteration is faster. Showing you a working prototype in week one instead of week three changes the whole conversation. You spot product issues earlier, when changing them is cheap.
More cycles on the hard parts. The time saved on boilerplate gets reinvested in the hard 40% (understanding what to build) and the hard 20% (designing how it fits together). Those are the parts that make or break a product.
No quality drop, when done right. This is the one we're most careful about. The code that ships is reviewed, tested, and architecturally consistent with the rest of the codebase. AI is a tool in our pipeline, not a substitute for engineering judgment.
Practical, Not Magic
If someone is selling AI development as a magic shortcut, they're either lying or they're going to learn the hard way. The honest version is more useful: AI is a sharp tool that lets experienced developers move faster on the parts of the work that were always tedious, while the parts that were always hard stay hard.
Our founder has been writing software for 16+ years. The fundamentals have not changed. The keystrokes per useful feature have just gone down.
If you want to talk about whether AI fits into a project you're scoping, get in touch. We will tell you honestly where it will help and where it won't.

