
The accidental genius of bad logic - how decisions derail

Decision-making often resembles solving a math problem with the wrong formula and still landing on the correct answer, by accident. The danger is not the mistake itself, but the feedback.

We have covered some aspects of conscious decision-making before. Now let's go deeper into its psychology.

Every manager has a story where a questionable decision somehow worked out. The launch no one believed in became a hit. The rushed hiring choice turned into a star performer. The gut call beat the spreadsheet.

It feels like validation. It isn't. When the outcome looks right, the flawed reasoning gets reinforced. Next time, under slightly different conditions, it fails. Predictably. Decision-making in organizations is full of these quiet distortions—well-known in theory, consistently mishandled in practice.

The usual suspects (briefly)

Confirmation bias: We accept data that supports our beliefs and ignore what contradicts them.
Not deciding is also a decision: Delay is rarely neutral; it compounds consequences.
Sunk cost fallacy: We continue because we already invested, not because it still makes sense.

These are widely discussed. Yet they persist, not because people haven’t heard of them, but because they manifest subtly in everyday managerial routines.

Now let’s take a closer look at some other bad practices.

Data-driven vs. intuitive decision-making

Two teams sit in a meeting room, discussing a conflict everyone already senses. One initiative is designed to increase user engagement—longer sessions, deeper interaction. Another is pushing for faster conversions—shorter paths, fewer steps, quicker exits. The interference is obvious.

So they do what organizations often do: they discuss it. Thoroughly.

Arguments are made. Hypotheses are raised. Someone references a similar case from two years ago. Another person brings in a benchmark from a different company. The conversation is logical, structured, and ultimately inconclusive. What doesn't happen is any actual resolution.

No one defines what success would look like for each initiative if run together. No one sets up a way to measure how one affects the other. No controlled rollout, no segmented testing, no explicit metrics tied to the interaction.

Instead of turning an intuitive conflict into something measurable, the organization keeps it at the level of theory—where it is safe, debatable, and impossible to resolve.
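What turning that conflict into something measurable could look like, sketched in Python. The cell design is a standard 2x2 test; every segment name and metric here is an illustrative assumption, not a description of any particular rollout:

```python
# A 2x2 rollout: every combination of the two initiatives gets its own
# segment, so their interaction becomes a number instead of a debate.
# Segment design and metric names are illustrative assumptions.

CELLS = [
    {"engagement": False, "conversion": False},  # control
    {"engagement": True,  "conversion": False},  # engagement initiative only
    {"engagement": False, "conversion": True},   # conversion initiative only
    {"engagement": True,  "conversion": True},   # both together
]

def assign_cell(user_id: int) -> dict:
    """Deterministically assign a user to one of the four cells."""
    return CELLS[user_id % len(CELLS)]

def interaction_effect(control, eng_only, conv_only, both):
    """How much the combined effect differs from the sum of the parts.

    Each argument is the same metric (e.g. conversion rate) measured in
    one cell. A clearly negative value means the initiatives undermine
    each other; near zero means they are roughly independent.
    """
    return (both - control) - (eng_only - control) - (conv_only - control)
```

With all four cells populated, "engagement is down, conversions are slightly up" stops being a verdict and becomes a question the data can actually answer.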

A month later, both initiatives are launched. The data comes in. Engagement is down. Conversions are slightly up. The feature is labeled underperforming, the campaign “moderately successful.”

The conclusion feels data-driven. It isn’t.

Or consider how success criteria are handled. A team launches a new initiative without clearly defined success criteria. Three months later, the review happens.

“Adoption is lower than expected, but early signals are promising.”

“Revenue impact is limited, but strategically important.”

“Users are confused, but that’s normal in early phases.”

All of these may be true. None of them answer the original question: is this working?

Had the criteria been defined upfront—specific adoption thresholds, expected behavioral changes—the conclusion would be clearer. But defining them afterward allows the narrative to adjust to the outcome. And once time, effort, and reputation are invested, the narrative tends to become… forgiving.
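A minimal sketch of what "defined upfront" can mean in practice: thresholds committed before launch, and a review that is a mechanical comparison rather than a negotiation. All metric names and numbers below are hypothetical:

```python
# Hypothetical success criteria, pinned down before launch so the review
# three months later answers "is this working?" instead of moving goalposts.
CRITERIA = {
    "weekly_active_adoption": 0.15,   # >= 15% of target users active weekly
    "task_completion_rate": 0.60,     # >= 60% complete the core flow
    "support_tickets_per_100": 5.0,   # <= 5 tickets per 100 users
}

def evaluate(observed: dict) -> dict:
    """Compare observed metrics against the pre-committed thresholds."""
    return {
        "weekly_active_adoption":
            observed["weekly_active_adoption"] >= CRITERIA["weekly_active_adoption"],
        "task_completion_rate":
            observed["task_completion_rate"] >= CRITERIA["task_completion_rate"],
        "support_tickets_per_100":
            observed["support_tickets_per_100"] <= CRITERIA["support_tickets_per_100"],
    }

result = evaluate({
    "weekly_active_adoption": 0.11,
    "task_completion_rate": 0.72,
    "support_tickets_per_100": 4.2,
})
# Adoption misses its threshold; the other two pass. No narrative required.
```

The point is not the specific numbers but their timestamp: criteria written down before the outcome cannot quietly adjust to it.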

Good decisions don’t come from choosing between intuition and data—they come from translating intuition into something testable, and refusing to let data become post-rationalization.

Fast vs. slow decision-making (and the illusion of control)

There is a persistent tension between overthinking and premature action.

Known mediocrity vs. uncertain opportunity
A company renews a long-standing supplier contract with average pricing and frequent minor issues. A new vendor offers significantly better terms but requires integration effort and some operational change. Management delays change and sticks with the incumbent—“at least we know what we’re getting”—and locks in another two years of predictable underperformance.

Quick decision that signals decisiveness without grounding
A dip in quarterly sales triggers an immediate pricing change. No segmentation analysis, no understanding of customer sensitivity—just a blanket discount “to boost volume.” Sales increase temporarily, margins erode, and customers begin to anchor to the lower price. The decision was fast, visible, and directionally wrong.

Action bias

A system outage hits a critical service. Within minutes, senior leaders join calls, request updates, suggest changes, and escalate visibility. More people get involved. Communication increases. So does confusion.

Engineers, who were already working on the issue, now need to explain, justify, and adapt in real time. Resolution slows down.

Afterward, leadership points to their high level of involvement as a sign of commitment.

In reality, it was action bias: the need to appear in control replaced actual control.

There is an uncomfortable asymmetry here:

  • Doing nothing looks like failure, even when it is the optimal choice.
  • Doing something looks like leadership, even when it degrades outcomes.

A more disciplined approach would be simpler, but harder to follow:

Before intervening, ask your team:

  • Do you actually know what needs to be done?
  • Is the team capable of doing it without me?

If yes, the highest-value action may be restraint. Not visible. Not satisfying. Often correct.

Decision quality is not a function of speed, but of timing—knowing when to move, and when intervention only creates the illusion of control.

The framing effect

The same decision can look entirely different depending on how it is presented:

  • Option A: “This investment has an 80% chance of delivering a positive return.”
  • Option B: “There is a 20% chance we lose the entire investment.”

Silence follows Option B. Questions multiply. Concerns surface.

The numbers are identical. The decisions are not.
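The arithmetic behind that identity is trivial, which is exactly the point. A quick check, using an illustrative stake of 100 and payoff of 50 (both assumed numbers, not from any real case):

```python
stake = 100   # illustrative amounts
payoff = 50

# Option A framing: "80% chance of delivering a positive return"
ev_a = (80 * payoff - 20 * stake) / 100

# Option B framing: "20% chance we lose the entire investment"
ev_b = ((100 - 20) * payoff - 20 * stake) / 100

assert ev_a == ev_b  # same gamble, same expected value
```

Any difference in how the room reacts to A versus B is supplied entirely by the wording.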

Framing does not just influence inexperienced decision-makers. It systematically affects experienced leaders, especially under time pressure. The presentation becomes part of the decision.

Inside organizations, this effect compounds as information travels upward.

A project team encounters delays and rising costs. At the operational level, the situation is clear: timelines are slipping, and risks are increasing.

By the time the update reaches middle management, the message shifts:

“There are some challenges, but they are being managed.”

At the executive level:

“The project is largely on track, with minor risks.”

No one explicitly lies. Each layer adjusts the message slightly—removing friction, softening edges, making it more acceptable. This is the mum effect in action: unpleasant information gets diluted.

Now combine this with framing.

By the time a decision reaches senior leadership, it is not just incomplete—it is curated. Risks are framed as manageable. Upsides are emphasized. Trade-offs are blurred.

Leaders then make rational decisions based on irrational inputs.

And when outcomes disappoint, the post-mortem focuses on execution—rarely on how the decision was framed in the first place.

Decisions are only as sound as the way they are presented—if the framing is biased or diluted, even rational leaders will arrive at systematically distorted conclusions.

What actually helps leaders decide better

Decision quality depends less on intelligence and more on signal clarity.

Read the implicit signals

Most decision inputs are presented as rational arguments. They are not.

People optimize for incentives—often unspoken, sometimes misaligned with the stated goal. A proposal framed as “strategically important” may be driven by ownership, visibility, or budget preservation. Agreement in the room may reflect political alignment, not conviction.

This is why stated positions are weak signals.

Actual behavior is stronger:

  • What do people prioritize when trade-offs are real?
  • Where do they allocate time and resources?
  • What do they do when no one is explicitly evaluating them?

Misalignment is rarely declared. It is observable.

Leaders who rely only on articulated reasoning make decisions on curated inputs. Leaders who read implicit signals get closer to the underlying reality.

Consultants vs. internal experts

External consultants are often brought in to provide objectivity. Sometimes they do. Often, they provide confirmation.

A familiar pattern: An internal team has already formed a view but lacks the authority—or confidence—to push it through. A consulting firm is engaged. Data is gathered, interviews are conducted, slides are produced. The conclusion aligns with the internal perspective.

Nothing fundamentally changes, except now the recommendation carries external validation. The decision feels safer because responsibility is shared.

In other cases, internal experts effectively “write the answer,” and the consultant formalizes it.

This is not inherently wrong. External perspective can add structure and comparability. But it should be clear what role is being played:

  • generating insight,
  • or legitimizing a pre-existing decision.

Confusing the two leads to a specific failure mode: outsourcing responsibility while retaining the illusion of rigor.

Maintain proximity to reality

As organizations scale, leaders become structurally distant from operations. Information reaches them filtered—summarized, adjusted, often unintentionally distorted, as discussed above.

To counter this transmission problem, leaders must deliberately maintain access to unfiltered reality:

  • direct exposure to frontline operations,
  • unfiltered conversations across levels,
  • occasional bypassing of formal reporting lines.

Without this, leaders depend on second-hand interpretations of reality—precisely where effects like message dilution and selective framing emerge. Decisions made far from the source of truth are not necessarily wrong, but they are systematically more fragile.

Decision-making is a system, not an event

Decision-making must be explicit

Most organizations treat decisions as isolated moments. They are not. They are outputs of a system. That system needs structure:

  • Clear decision rights: it must be unambiguous who decides what.
  • Defined decision mechanisms, for example:
    • Majority-based (simple or qualified)
    • Leader decides after input
    • Principle-driven, consistently applied and objectively evaluated

Consistency matters more than perfection

If decision processes change case by case, the organization adapts accordingly: lobbying increases, politics intensify, and outcomes depend less on merit and more on influence.

Think of traffic enforcement: the deterrent is not the severity of the penalty, but the certainty of its application. Inconsistent enforcement produces persistent violations. Consistent enforcement—even with moderate penalties—shapes behavior.

Decision follow-up

Finally, decisions must not end at the moment they are made.

  • Execution needs to be tracked.
  • Outcomes need to be measured.
  • Assumptions need to be revisited.

Otherwise, the organization keeps celebrating correct outcomes produced by incorrect reasoning—until the math stops working.
