Thinking with AI: Decision-Making Under Uncertainty in Agentic Software Development
Modern AI tools are exceptionally good at generating code, documents, and options. Their real value, however, shows up earlier - when the problem is still large, fuzzy, and underspecified.
In my experience, agentic tools are most effective not as answer machines, but as thinking partners. They help surface tradeoffs, explore alternatives, and stress-test decisions. What they do not replace is ownership.
Think first, then think with AI
Before involving an agent, I try to think alone - briefly but deliberately.
That usually means writing down the problem I’m trying to solve, what success would look like, non-goals and constraints, and a small number of core user workflows.
This seed doesn’t need to be polished. Its purpose is to anchor intent. Without it, AI tends to optimize for plausibility rather than relevance.
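A hedged sketch of what such a seed might look like, with every detail invented for illustration:

```
Problem: on-call engineers miss critical alerts buried in email.
Success: a P1 alert reaches an acknowledging human within 60 seconds.
Non-goals: replacing the paging vendor; building a mobile app.
Constraints: must reuse existing LDAP groups; no new infrastructure.
Core workflows:
  - an engineer acknowledges an alert from chat
  - a manager escalates an alert no one has acknowledged
```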
Once that intent exists, AI becomes far more useful.
Use AI as a design partner, not an oracle
I’ve found it helpful to treat AI as a collaborator in a structured loop, sketched with example prompts below:
Propose → Critique → Decide → Verify
- Propose: ask for multiple designs, approaches, or architectures.
- Critique: explore tradeoffs, risks, and failure modes.
- Decide: narrow options intentionally and make commitments.
- Verify: simulate usage, test edge cases, review against constraints.
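In practice, one pass through the loop might look like this (the prompts are paraphrased, not verbatim):

```
Propose:  "Give me three architectures for the ingestion layer, with
           different tradeoff profiles."
Critique: "For each one, what are the failure modes, and how expensive
           is it to reverse later?"
Decide:   (human) pick one, record the reasoning in the design doc.
Verify:   "Write example usage against the chosen design and flag
           anything that feels awkward."
```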
This loop doesn’t run once; it repeats continuously as the work evolves.
AI accelerates the propose and critique phases dramatically. Humans must own the decision.
A simple rule I keep coming back to: I decide, the agent documents.
Ask questions that surface tradeoffs
The quality of collaboration depends heavily on the questions asked.
The most productive prompts are rarely “build X.” They sound more like:
- Which decisions are irreversible?
- What are the main failure modes?
- How would different users experience this API?
- Where will this design hurt in six months?
- What would testing and verification look like?
Good questions expose structure. They turn vague ideas into concrete choices.
Simulate before you commit
One of the most powerful uses of AI is simulation.
Before committing to a design, I often ask the agent to write example usage, migrate hypothetical existing code, walk through edge cases and failures, or draft a migration guide for something that doesn’t yet exist.
These simulations surface design flaws early, when changes are still cheap.
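As a concrete illustration, here is what that usage-first simulation can look like in code. Everything below is hypothetical: the client, the policy, and all the names are invented as a design probe, not an implementation.

```python
# A design probe, not an implementation: stub out the API surface you
# wish existed, then write the call site and see what questions it raises.
# All names here are hypothetical.
from dataclasses import dataclass, field


@dataclass
class RetryPolicy:
    max_attempts: int = 3
    backoff: str = "exponential"       # or should this be a strategy object?
    retry_on: tuple = (TimeoutError,)  # which errors are retryable by default?


@dataclass
class PaymentsClient:
    base_url: str
    retry: RetryPolicy = field(default_factory=RetryPolicy)

    def charge(self, account_id: str, amount_cents: int) -> str:
        # Deliberately unimplemented: the point is the call site, not the body.
        raise NotImplementedError


# Writing the call site first surfaces decisions the design must make:
# when all attempts fail, does the caller see the last exception or an
# aggregate? Is a policy shared across clients or supplied per call?
client = PaymentsClient("https://payments.internal",
                        retry=RetryPolicy(max_attempts=5))
```

The value isn’t the stub itself; it’s the questions the call site forces before any real code exists.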
Experiment fast, stay unattached
Agentic tools make experimentation dramatically faster. That changes what is affordable to try.
In one recent project, we generated a substantial subsystem quickly - only to remove it entirely after review feedback. A few years ago, that would have felt like a painful failure. With agentic tooling, it was simply the right decision.
Speed expands what we can try, but it doesn’t decide what stays.
Keep decisions visible as plans evolve
To manage complexity, I rely heavily on living documents:
- plans with phases and acceptance criteria,
- design docs capturing options and tradeoffs,
- ADRs for irreversible decisions,
- open-decision lists that make uncertainty explicit (see the sketch after this list).
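For example, an open-decision list can be as simple as this (all entries invented):

```
Open decisions
- OD-7: event ordering - per-key or global? Leaning per-key; global
        ordering blocks horizontal scaling. Decide by end of phase 2.
- OD-8: retention window - 30 vs. 90 days. Blocked on legal review.

Decided (see ADRs)
- ADR-003: Postgres over DynamoDB. Reversible, but expensive.
```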
What starts as three phases often becomes seven. New information appears. Assumptions break. Feedback forces pivots. That’s not failure - it’s the process working. Keeping decisions visible makes reversals easier when needed.
Agentic tools don’t eliminate uncertainty. They make adapting to it cheaper.
AI helps maintain these artifacts. Humans review, revise, and decide.
This shared written context becomes the backbone of collaboration - between people, and between people and agents.
Ownership remains non-negotiable
AI can draft quickly, explore broadly, and simulate relentlessly. What it cannot do is own outcomes.
That responsibility remains human: deciding what matters, choosing what to keep, and accepting the consequences.
Used well, agentic tools don’t replace judgment. They clarify where judgment is required.