When Your Diagnostic Tool Finds What You Didn't Know

Huy Dang

You build a tool. Not a product — an internal diagnostic. Something that checks whether your system follows the rules you’ve set for it. Ownership boundaries, dependency directions, visibility constraints.

You run it for the first time.

It immediately flags something you’ve been staring at for months. Something you’ve walked past a hundred times. Something so fundamental to how your system works that you stopped seeing it as a choice — it just felt like the way things are.

The tool doesn’t know what “the way things are” means. It only knows the rules. And according to the rules, this thing is a violation.

Except it isn’t. It’s the most important structural decision in the entire system.

The Blind Spot Problem

Here’s a question worth sitting with: why do you need tools to find things you already know are there?

The answer is uncomfortable. You don’t see them precisely BECAUSE you know they’re there. Familiarity breeds invisibility. The thing you decided six months ago, the workaround that became permanent, the exception that became the rule — these fade into the background of your perception. They become load-bearing assumptions you no longer examine.

Your eyes have context. Your eyes have history. Your eyes have preferences. These are usually strengths — they help you focus on what matters, filter out noise, move quickly through familiar territory.

But they’re also the mechanism of blind spots. You skip what you’ve accepted. You gloss over what you’ve decided. You don’t question what’s working, even if the reason it’s working is different from what you think.

A tool has none of that context. It has rules, and it applies them mechanically. It doesn’t know what you’ve accepted. It doesn’t know your history. It doesn’t care that “we’ve always done it this way.” It just checks the rule and reports the result.

That mechanical ignorance is the tool’s greatest asset.

The Programming Lens

The idea behind a programming lens is simple: borrow a programming language’s type system as a thinking discipline for non-code problems.

Every programming language enforces specific rules about how things relate. Ownership. Visibility. Dependencies. Lifetimes. These rules exist because decades of software engineering proved that without them, systems become unmaintainable. Code that compiles under strict rules tends to be code that works.

The insight — and I’ve written about this before — is that these same rules apply to organizational architecture. Who owns what? Who can see what? What depends on what? These aren’t just software questions. They’re structural questions that every system, software or human, needs to answer.

A programming lens takes a language’s rules and applies them as diagnostic checks. Rust’s ownership model, for instance, says: every resource has exactly one owner. Others can read it (borrow), but ownership is singular and explicit. Apply that rule to an organization, and you get a diagnostic that flags every resource with ambiguous, shared, or missing ownership.
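As a minimal sketch of that diagnostic, here is what the ownership lens might look like as code. The resource and team names are invented for illustration; the only real rule is Rust's: exactly one owner, or it's flagged.

```rust
// Hypothetical ownership lens: returns a violation message for any
// resource whose owner count is not exactly one.
fn check_ownership(claims: &[(&str, Vec<&str>)]) -> Vec<String> {
    claims
        .iter()
        .filter(|(_, owners)| owners.len() != 1)
        .map(|(resource, owners)| {
            if owners.is_empty() {
                format!("{resource}: no owner")
            } else {
                format!("{resource}: {} owners {:?}", owners.len(), owners)
            }
        })
        .collect()
}

fn main() {
    // An invented organization, expressed as (resource, claiming teams).
    let claims = vec![
        ("billing-db", vec!["payments"]),               // one owner: passes
        ("metrics-pipeline", vec!["data", "platform"]), // shared: flagged
        ("legacy-cron", vec![]),                        // unowned: flagged
    ];
    for violation in check_ownership(&claims) {
        println!("VIOLATION: {violation}");
    }
}
```

The check itself is trivial; the value is in what the table forces you to write down. Filling in the `claims` column for a real organization is where the ambiguity surfaces.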

Simple concept. But what happens when you actually run it is anything but simple.

What “Violation” Actually Meant

Here’s what happened on the first diagnostic run.

The tool applied a visibility rule: components nested inside a parent should be accessed through the parent, not directly. This is good architectural hygiene — it prevents coupling, maintains encapsulation, keeps the dependency graph clean.

The diagnostic flagged a component that seven other parts of the system accessed directly, bypassing its parent entirely. By the stated rule, this was a clear violation. Seven counts of it.
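The visibility rule itself reduces to a few lines. In this sketch, the component names are hypothetical and nesting is written as "parent::child"; the rule is that only the parent may reach a nested component directly.

```rust
// Hypothetical visibility check: a nested component may only be
// accessed by its own parent; top-level components are freely visible.
fn violates_visibility(from: &str, to: &str) -> bool {
    match to.rsplit_once("::") {
        Some((parent, _)) => from != parent, // nested: parent-only access
        None => false,                       // top-level: anyone may access
    }
}

fn main() {
    // Invented dependency edges observed in the system: (from, to).
    let edges = [
        ("search", "platform::knowledge-base"),   // bypasses parent: flagged
        ("platform", "platform::knowledge-base"), // parent access: allowed
        ("search", "platform"),                   // top-level: allowed
    ];
    for (from, to) in edges {
        if violates_visibility(from, to) {
            println!("VIOLATION: {from} -> {to} bypasses the parent");
        }
    }
}
```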

The natural instinct is to fix it. Add the indirection layer. Route access through the parent. Make the architecture clean.

But when I actually looked at what the tool had found, the picture was different. This component wasn’t an implementation detail hiding inside its parent. It was the foundational knowledge layer of the entire system. Every other component needed it to function — not as a feature, but as a prerequisite. Without it, nothing else could initialize.

The component was a bootstrap dependency. The thing that must exist before the system that produces it can run. Wrapping it behind its parent would be like requiring you to boot the operating system before you can run the bootloader. The indirection wouldn’t add clarity — it would add an obstacle between every component and the thing they all need to start.

The diagnostic was right that the pattern was unusual. It was wrong that it was a problem. The “violation” was actually the most important architectural decision in the system — the one that made everything else possible.
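The right response isn't to suppress the finding but to teach the rule the concept it was missing. A sketch of that, under the same hypothetical naming as above: components explicitly declared as bootstrap dependencies are exempt from the visibility rule, so the exemption is named in the architecture rather than buried in an allowlist.

```rust
// Sketch: the visibility rule extended with a named concept.
// Components listed as bootstrap dependencies may be accessed directly;
// all other nested components remain parent-only. Names are invented.
fn violates_visibility(from: &str, to: &str, bootstrap: &[&str]) -> bool {
    if bootstrap.contains(&to) {
        return false; // a declared bootstrap dependency: direct access is the point
    }
    match to.rsplit_once("::") {
        Some((parent, _)) => from != parent, // ordinary nesting: parent-only
        None => false,                       // top-level: freely visible
    }
}

fn main() {
    let bootstrap = ["platform::knowledge-base"];
    // The direct accesses now pass, because the rule understands why they exist.
    assert!(!violates_visibility("search", "platform::knowledge-base", &bootstrap));
    // Ordinary nested components are still protected.
    assert!(violates_visibility("search", "platform::cache", &bootstrap));
    println!("bootstrap exemption holds");
}
```

The design choice matters: an anonymous suppression would make the diagnostic quieter, but a named category makes the architecture smarter.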

Three Layers of Discovery

That first run taught me something about how diagnostic tools create value. It’s not the mechanism you’d expect.

Layer 1: Confirmation. Most findings confirm what you already know. “Yes, that boundary is clean. Yes, that ownership is clear.” This is valuable but unsurprising. It’s the tool agreeing with your intuition. Worth doing — confirmation builds confidence — but not where the real insight lives.

Layer 2: Genuine violations. Some findings surface real problems. Ambiguous ownership, circular dependencies, missing boundaries. Things you might have known about but hadn’t prioritized, or things that crept in gradually and escaped notice. This is the traditional value proposition of any diagnostic: it finds bugs.

Layer 3: Structural revelations. Rarely, a finding looks like a violation but is actually a window into your system’s deep structure. The tool flags something that breaks the rules, and when you investigate, you discover the rules don’t account for this category of thing. The component isn’t violating the architecture — it’s revealing a concept the architecture hasn’t named yet.

Layer 3 is where the magic lives. And it only happens when the tool thinks differently than you do.

Building Tools That Disagree With You

This is the counterintuitive principle: the best diagnostic tools are the ones that disagree with you.

If you build a tool that encodes your own thinking — your own assumptions, your own categories, your own sense of what’s normal — it will find the same things you’d find if you looked carefully enough. It saves you time, but it doesn’t extend your perception.

If you build a tool that encodes a DIFFERENT thinking discipline — one with different assumptions, different categories, different rules — it will find things you can’t find, because it’s looking through a lens you don’t naturally use.

That’s the whole point of borrowing a programming language’s type system. Rust thinks about ownership differently than you do. It has rules you wouldn’t naturally apply to organizational design. When those rules flag something, the finding is interesting precisely because it comes from a perspective you don’t share.

The same principle applies beyond programming lenses:

Financial audits find things operators miss — because auditors think in flows and balances, not in products and features. They see the money moving in ways the operator doesn’t track.

Security reviews find things developers miss — because security engineers think in attack surfaces and trust boundaries, not in user stories and feature requirements. They see the gaps between the things you built.

User testing finds things designers miss — because users don’t know how the system is supposed to work. They try things the designer would never try, and they find things the designer would never find.

In every case, the value comes from the mismatch between the tool’s thinking and yours.

The Meta-Lesson

If your diagnostic’s first run finds nothing surprising, you have a problem. Not with your system — with your diagnostic.

A tool that perfectly matches your mental model is a mirror. It shows you what you already see. Mirrors are useful for checking details, but they don’t extend your vision.

A tool that applies a different mental model is a window. It shows you something from an angle you don’t naturally occupy. Windows are how you discover what you’ve been staring past.

When you build diagnostic tools — for code, for organizations, for any complex system — deliberately choose thinking frameworks that differ from your default. If you naturally think in hierarchies, build a diagnostic that thinks in networks. If you naturally think in ownership, build one that thinks in message flows. If you naturally think in structure, build one that thinks in dynamics.

The disagreement between your thinking and the tool’s thinking isn’t friction. It’s the diagnostic.

The first run will surprise you. That’s how you know it’s working.


This is the fifth post in a series about applying programming language concepts to organizational design. Previously: The Beautiful Absurdity of Modeling Your Company in Java, Why Every Founder Should Think in Types, The Bootstrap Paradox.

Huy Dang is the founder of AccelMars, building tools for the AI era. Follow the journey on X and LinkedIn.