Your most popular senior leader just drove a $30M decision into the ground. Your least liked VP just prevented a crisis that would have cost the organization twice that. Neither of these facts will appear in the 360 feedback either of them receives this year.
The 360 will tell you that the popular leader is "inspiring," "collaborative," and "vision-oriented." It will tell you the VP is "difficult," "not a team player," and "lacks executive presence." Both of these assessments are accurate perceptions. Neither of them tells you anything useful about judgment quality — which is the only leadership variable that determines organizational performance when stakes are real.
This is not a problem with the people filling out the 360. It is a problem with what the 360 was designed to measure.
What 360 feedback actually measures.
360 feedback instruments measure perception. Specifically, they measure how a leader is experienced by the people around them — in normal working conditions, at the resolution of regular interaction, filtered through each respondent's relationship history, personal preferences, and current political context.
This is not useless data. Perception matters. How a leader is experienced affects team motivation, psychological safety, and collaboration quality. A leader who is consistently perceived as dismissive or unpredictable creates real organizational costs, and the 360 will surface that.
What 360 feedback cannot measure:
- How a leader decides when the information is incomplete and the stakes are high
- Whether a leader's instinct under pressure is to protect the organization or protect themselves
- Whether a leader can distinguish between what they know and what they are assuming
- How a leader's decision quality holds up as complexity increases and time compresses
- Whether a leader's confidence in their own judgment is calibrated — or dangerously detached from their actual accuracy
None of these show up in a 360. They show up when the crisis lands. By then, you are past the point where developmental intervention is useful.
A 360 tells you how people feel about a leader in ordinary conditions. It tells you almost nothing about how that leader performs when the conditions stop being ordinary. Yet most succession decisions rest primarily on 360 data.
The specific ways 360 data misleads.
Likability bias.
360 respondents are human. They rate people they like more favorably across every dimension — even dimensions with no logical connection to the quality of the relationship. A leader who is warm, available, and socially skilled will receive higher ratings on strategic thinking, decision quality, and business acumen than a leader who is less warm but objectively more effective at those things. The instrument cannot separate likability from capability, and in practice, it usually defaults to likability.
Proximity bias.
Respondents rate behaviors they have directly observed. Most high-stakes leadership behavior — consequential decisions, crisis response, resource allocation under constraint — happens in rooms that most 360 respondents are not in. What they observe is how the leader behaves in their presence: in team meetings, in one-on-ones, in email threads. This is a thin and unrepresentative slice of the leadership behavior that actually determines organizational outcomes.
Anchoring to personality rather than performance.
360 instruments are built on behavioral anchors that describe personality traits expressed as workplace behaviors: "communicates clearly," "demonstrates integrity," "builds trust." These describe how a person shows up, not how they decide. A highly intelligent leader can demonstrate all of these traits consistently and still make catastrophically bad judgment calls under pressure — because those calls depend on cognitive architecture, not behavioral consistency.
The halo and horn effects.
One highly visible success creates a halo that inflates ratings across all dimensions. One visible failure creates a horn effect that depresses them. The 360 captures the organizational narrative about a leader more accurately than it captures the leader's actual capability profile. In organizations with strong political cultures, this makes 360 data a reflection of who is winning the narrative contest — not who is developing the best judgment.
What actually predicts leadership performance under pressure.
Three data sources that 360 feedback cannot replicate:
Decision quality under ambiguity.
The only way to assess this is to put leaders in ambiguous, high-stakes decision scenarios and observe the actual decision process — not just the outcome. Outcome measurement is retrospective and noisy. Process measurement — watching how a leader structures a problem, what information they prioritize, how they handle contradictions, where they default when uncertain — is predictive. It is also uncomfortable. Most organizations avoid it because it produces data that is harder to act on than a 4.2 out of 5 in "strategic thinking."
Calibration accuracy.
A leader's ability to know what they know — to distinguish genuine competence from confident ignorance — is one of the most powerful predictors of decision quality. Leaders who are well-calibrated make better decisions under uncertainty because they know when to seek more information, when to trust their instincts, and when their instincts are operating outside their zone of genuine competence. 360 feedback cannot measure calibration. Structured simulation scenarios can.
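Calibration is measurable when a simulation asks leaders to attach an explicit confidence estimate to each call. A minimal sketch of the idea — the function name and the sample data are hypothetical, and real instruments use more sophisticated scoring — is simply the gap between average stated confidence and actual accuracy:

```python
def calibration_gap(forecasts):
    """Compare stated confidence with actual accuracy.

    forecasts: list of (confidence, was_correct) pairs, where
    confidence is a probability in [0, 1] and was_correct is a bool.
    Returns (mean_confidence, hit_rate, gap). A large positive gap
    signals overconfidence; a large negative gap, underconfidence.
    """
    if not forecasts:
        raise ValueError("no forecasts to score")
    mean_conf = sum(c for c, _ in forecasts) / len(forecasts)
    hit_rate = sum(1 for _, ok in forecasts if ok) / len(forecasts)
    return mean_conf, hit_rate, mean_conf - hit_rate

# Hypothetical scenario results: a leader who states 90% confidence
# on calls that turn out right only 60% of the time.
leader = [(0.9, True), (0.9, False), (0.9, True), (0.9, False), (0.9, True)]
conf, hits, gap = calibration_gap(leader)
print(f"confidence {conf:.2f}, accuracy {hits:.2f}, gap {gap:+.2f}")
# → confidence 0.90, accuracy 0.60, gap +0.30
```

The point of the sketch is the shape of the data, not the arithmetic: a 360 never collects confidence-accuracy pairs, so it cannot produce this number, while even a handful of scored simulation decisions can.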
Failure response pattern.
How a leader responds when a decision goes wrong is a stronger predictor of long-term leadership effectiveness than how they perform when conditions are favorable. Do they diagnose the decision or rationalize it? Do they protect their team or protect their narrative? Do they update their mental model or defend the original one? 360 feedback captures the surface behavior after a failure. It cannot capture the internal process that determines what the leader actually learns from it.
The practical recommendation.
Do not eliminate your 360. It provides genuine signal about interpersonal behavior and team experience that is worth having. But stop using it as the primary data source for succession decisions, high-potential identification, and leadership development investment.
Supplement it with two things:
First, structured judgment assessments — simulated decision scenarios calibrated to the actual complexity of the roles you are filling. Observe the process. Score the reasoning. Compare against behavioral patterns that predict high-stakes performance, not ordinary-conditions favorability.
Second, decision quality reviews — periodic structured analysis of actual decisions made by leaders at your target level. Not outcome reviews. Decision reviews. Was the reasoning sound? Was the information adequately synthesized? Was the call made at the right level of confidence given the available data? This builds an organizational capability for assessing judgment in context — which is what you actually need for every high-stakes talent decision you make.
The 360 will tell you whether your leaders are pleasant to work with. That matters. But the question that determines organizational performance is different: what do they actually do when it gets hard?
Most organizations cannot answer that question with data. They answer it with narrative — and they discover the gap between narrative and reality when the crisis lands and the popular leader freezes, and the difficult VP who "lacks executive presence" is the only person in the room who knows what to do.
At that point, the 360 data is not useful. It never was. It was just the only data you had.