For months, I was hunting for the “perfect” AI use case. You know the type: some elegant problem where an LLM would slot in beautifully and everyone would nod and say, ah yes, that’s clever. I made lists. Sketched ideas. Toyed with APIs. Nothing stuck. Everything felt like I was forcing AI into places it didn’t want to go.
Then one quarter, while drowning in planning docs and status reports, I realized I’d been sitting on the answer the whole time.
Every week I spent hours gathering updates from dashboards, tickets, Slack threads, and scattered notes, just to summarize them for leadership. Each report was its own mini translation project: converting technical noise into something concise and human-readable. It was draining, and I was bad at it, and I hated it.
That’s when it clicked. The problem wasn’t missing data. The problem was cognitive load.
Starting ugly
I started small. Pulled some data through an API, grabbed comments and statuses, and pasted the whole mess into an LLM just to see what would happen.
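That first experiment can be sketched in a few lines. Everything here is hypothetical, the ticket fields and keys are made up, not pulled from any real tracker API, but it captures the shape of it: flatten raw statuses and comments into one blob of text you can paste into an LLM.

```python
def build_update_blob(tickets):
    """Concatenate ticket statuses and comments into one prompt-ready string."""
    lines = []
    for ticket in tickets:
        # One header line per ticket, then its comments indented below.
        lines.append(f"[{ticket['key']}] status: {ticket['status']}")
        for comment in ticket.get("comments", []):
            lines.append(f"  - {comment}")
    return "\n".join(lines)

# Hypothetical data standing in for whatever the tracker API returns.
tickets = [
    {"key": "PROJ-101", "status": "In Progress",
     "comments": ["debugging the issue", "root cause looks like a race"]},
    {"key": "PROJ-102", "status": "Done",
     "comments": ["fixed, testing now"]},
]

blob = build_update_blob(tickets)
print(blob)
```

The output of that function is exactly the kind of messy wall of text I was pasting in by hand.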
The results were rough. But they were promising. It could tell the difference between “debugging the issue” and “fixed, testing now.” It understood nuance I didn’t expect it to.
Manually copy-pasting wasn’t going to scale, obviously. But the potential was there. So I broke the experiment into two questions:
- How do I get the right data to the AI?
- How do I ask the right questions of it?
A few scripts and some prompt tuning later, I had something that could read updates, categorize them, and draft short summaries — In Progress, Risks, Wins — the way a manager would. Except it didn’t get tired, and it didn’t miss things.
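The “ask the right questions” half came down to a prompt template. A rough sketch, with the section names taken from the report above and the actual model call left as a placeholder rather than any real client library:

```python
SECTIONS = ["In Progress", "Risks", "Wins"]

def build_summary_prompt(update_blob):
    """Wrap raw updates in instructions asking for a manager-style summary."""
    section_list = ", ".join(SECTIONS)
    return (
        "You are drafting a weekly status report for leadership.\n"
        f"Group the updates below under these headings: {section_list}.\n"
        "Keep each bullet to one plain-English sentence.\n\n"
        f"Updates:\n{update_blob}\n"
    )

prompt = build_summary_prompt(
    "[PROJ-101] debugging a flaky deploy\n[PROJ-102] fixed, testing now"
)
print(prompt)
# response = llm_client.complete(prompt)  # hypothetical client, not a real API
```

The template is the boring part on purpose: all the judgment lives in the wording of the instructions, and most of the “prompt tuning” was iterating on those three or four lines.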
I just wanted my Sundays back
I didn’t set out to “build an AI tool.” I just wanted to stop spending my weekends on status reports. But once it worked for me, other people wanted it. So it evolved: parameterized functions, better prompts, richer context. What started as a hack turned into a repeatable workflow.
And somewhere along the way, I noticed something: AI wasn’t replacing the work. It was amplifying the parts that were already human: the judgment calls, the nuance, the storytelling. The stuff I’m actually supposed to be good at.
Things I didn’t expect
What started as a reporting shortcut became a communication layer:
- Clarity for stakeholders: even non-technical folks could follow along
- Better documentation: once updates were visible, people started writing cleaner notes (funny how that works)
- Less overhead: more time for actual engineering, fewer “quick syncs”
What I took away
- Stop chasing AI use cases. The good ones emerge from real pain. If you’re annoyed by something, pay attention.
- Prototype first. The best insights show up after you start building, not before.
- Frustration is data. Every recurring annoyance is a feature request in disguise.
- Simple beats shiny. A janky script that solves a real problem will outperform a polished tool that solves a hypothetical one.
The actual lesson
The most interesting thing about this project is that I never planned it. No proposal. No roadmap. I just built something that solved my problem and, in the process, figured out how AI could bridge the gap between raw data and human understanding.
The future of AI in engineering isn’t about cramming it into every workflow. It’s about recognizing when you’re already staring at the right problem and being willing to build something ugly to see if it works.
Sometimes the smallest side project teaches you the biggest lesson.