Annual Reviews Don't Measure Performance. They Obscure It.

March 2012. Donna Morris sat in a Mumbai hotel room, jet-lagged and frustrated, talking to a reporter about what wasn't working at Adobe.

"We plan to abolish annual performance reviews," she said.

The problem: she hadn't cleared this with her CEO. She hadn't discussed it with her executive team. She'd just said what everyone in HR already knew but rarely admitted out loud: the system was broken.

The next morning, she woke up to a front-page headline in the Economic Times: "Adobe Systems set to scrap annual appraisals."

She had eight days before the story hit US media. Backtrack and look indecisive, or commit and make it real?

Morris chose commitment. She published an internal blog post explaining what she'd announced. Within hours, it became one of the most-commented posts in Adobe's history. Employees weren't angry she'd killed annual reviews. They were relieved someone finally had.

That accidental announcement became one of the most documented transformations in performance management. But here's what matters: the research showing why annual reviews fail had been published sixteen years earlier. Companies had the evidence. They just weren't acting on it.

The Pattern Everyone Saw But Nobody Fixed

Morris had been tracking something for years: every February, voluntary attrition spiked. Not January, when people reflect on fresh starts. February. Right after annual performance reviews landed.

Strong performers received their evaluations, then walked out the door weeks later. The system designed to improve performance was driving away the people Adobe needed most.

Adobe's annual review process consumed 80,000 manager hours yearly. Stack ranking forced managers to label some employees as low performers even when teams were uniformly strong. Compensation was tied tightly to ratings, which made every conversation feel like negotiation rather than development.

Everyone knew it wasn't working. But changing it felt riskier than continuing.

I hear this same logic when I work with scaling companies. Leaders know their review process is broken. They see the February attrition. They watch strong performers disengage after reviews. But the system is embedded in compensation structures, promotion calendars, legal frameworks. Changing it means changing everything connected to it.

So they keep running a process everyone hates because it feels safer than the alternative.

Then Morris accidentally forced the decision.

Why Annual Reviews Backfire

The research explaining Adobe's February pattern had been clear since 1996. Avraham Kluger and Angelo DeNisi analyzed 607 effect sizes across 23,663 observations. Their finding should have rewritten every HR playbook: more than one-third of feedback interventions actually decreased performance.

Not "failed to improve." Decreased.

The mechanism is straightforward. When you receive feedback, your attention shifts. Sometimes it shifts toward the task: What can I do differently? How do I improve this specific thing? Performance improves.

But often, especially in high-stakes annual reviews, attention shifts toward the self: Am I good enough? What does this say about me? How do I compare? Performance drops. You stop thinking about the work and start thinking about your status.

Tell someone how to improve their work, performance goes up. Tell someone how to improve themselves, performance stays flat or drops. Same information. Different framing. Completely different outcome.

The annual review is almost perfectly designed to trigger the wrong response. It happens once a year, so it feels like judgment rather than conversation. It's formal enough to activate status anxiety. It covers everything, so it feels overwhelming. And it's tied to compensation, which turns every piece of feedback into a financial negotiation.

John Hattie and Helen Timperley found the same pattern across hundreds of studies. Feedback on the task produces strong effects. Feedback on the person (praise, criticism, rankings, comparisons) produces weak or negative effects.

Yet annual reviews are structured entirely around the person: How did you perform? Where do you rank? Are you meeting expectations?

I hear the defense constantly: "People need to know where they stand."

But listen to that language. "Where they stand" isn't about the work. It's about status. Rank. Judgment. That's self-focused feedback disguised as clarity.

What actually changes behavior sounds completely different: "In that client meeting, when you pushed back on their timeline, they got defensive. Next time, try asking what's driving their deadline first."

That's task-focused. It works in a formal review. It works better in the moment, right after the meeting, when the behavior is fresh and course correction is possible. It stops working when you wrap it in a process that makes people worried about their rating instead of thinking about their work.

The Timing Problem Nobody Talks About

Gallup's data is stark: only 14% of employees strongly agree that their performance reviews inspire them to improve. For the other 86%, the process lands somewhere between forgettable and actively demoralizing.

But when feedback frequency increases, everything changes. Employees who receive daily feedback from their managers are 3.6 times more likely to be motivated to do outstanding work compared to those who receive annual feedback.

The gap isn't about feedback quality or manager skill. It's about timing.

Wait twelve months and you're not providing feedback on behavior, you're providing feedback on memory. You're asking someone to reconstruct what they were thinking during a project they finished in March, then somehow translate that into changed behavior for projects that don't exist yet.

Annual reviews also create what Morris called "the deferral problem." Managers avoid difficult conversations throughout the year, telling themselves they'll address issues "in the annual review." Problems fester. By the time the review arrives, the behavior is months old and the feedback feels like archaeology.

Ask yourself: when was the last time one of your people came out of an annual review genuinely energized about their development? If you can't remember, you're seeing the same pattern Adobe saw.

Eight Days to Build Something Different

Morris had eight days to turn an accidental announcement into a real system.

What emerged was Check-In: regular, informal conversations between managers and employees about three things. Expectations (what you're working on), feedback (how it's going), and growth (where you're developing).

No paperwork. No ratings. No rankings. No formal structure. Compensation decisions moved to a separate annual process, completely decoupled from development conversations.

Every HR professional Morris talked to predicted disaster. Without formal documentation, how would Adobe defend termination decisions? Without ratings, how would employees know if they were meeting expectations? Without structure, wouldn't managers just avoid difficult conversations?

Adobe launched it anyway in Fall 2012.

Within one year, voluntary attrition dropped 30%. The February spike disappeared.

The real surprise: involuntary attrition increased 50%. Underperformers were being identified and addressed faster than ever. The system that eliminated formal evaluations turned out to be better at evaluation than the evaluations themselves.

Here's why. When managers had to compress twelve months of observation into a single annual rating, the easiest path was to avoid conflict. Give everyone acceptable scores, save hard conversations for "next time," move on. Problems stayed hidden.

When Check-In replaced that with regular conversations, managers couldn't defer anymore. Issues had to be addressed when they emerged. Development happened when development was actually possible. And because compensation wasn't tied to the conversation, feedback could be honest without feeling like salary negotiation.

The managers who struggled most with Check-In were the ones who'd been hiding behind the annual process. The forced structure of a yearly review let them avoid difficult conversations the other 364 days. When that crutch disappeared, they had to actually manage.

Fourteen Years Later

Adobe's Check-In system is still running in 2026. It has survived business model transformation, multiple reorganizations, and a global pandemic that forced the entire company remote.

A 2025 case study examining Check-In from 2012 through 2024 found the results held. Voluntary attrition stayed below industry average. Manager time savings exceeded 100,000 hours annually. Employee survey data showed sustained improvements in clarity on expectations and quality of development conversations.

The documentation objection turned out to be backwards. Check-In didn't eliminate documentation; it improved it. Regular conversations create a continuous record that's more accurate and more defensible than once-yearly snapshots reconstructed from memory. If you need to make a termination decision, you have twelve months of real-time observations rather than one manager's year-end summary.

Adobe wasn't alone. Between 2012 and 2016, Accenture eliminated 90% of their old review process. Deloitte discovered their system consumed 2 million hours yearly and rebuilt from scratch. GE abandoned "rank and yank." Microsoft killed stack ranking after concluding it contributed to their "lost decade."

The pattern was consistent: shift from annual evaluation to continuous conversation, voluntary attrition drops while involuntary attrition rises. A 2023 McKinsey study quantified it. Companies making this shift see an average 15% improvement in employee performance and 20% increase in engagement.

You keep more of the people you want and identify problems with the people you don't. This seems contradictory until you realize annual reviews weren't actually surfacing performance issues, they were obscuring them.

What This Means for Your Organization

The research that convinced these companies wasn't new when they acted on it. Kluger and DeNisi's meta-analysis was published in 1996. Hattie and Timperley's work came out in 2007. The evidence had been sitting there for years while companies continued running processes everyone involved knew weren't working.

Three questions diagnose whether you're running Adobe's old system or something closer to what works:

Does your feedback rhythm match how work actually happens? If you're building software in two-week sprints but giving feedback annually, the system is designed around fiscal calendars, not work reality. Feedback should arrive when behavior is fresh enough that course correction is possible.

How close is feedback to the actual work? Real-time feedback focuses attention on the task. Delayed feedback focuses attention on self-judgment. The gap between those two experiences is the gap between feedback that works and feedback that doesn't.

Does the process point toward growth or judgment? If your system requires managers to compress a year into a rating, they're stuck defending a number rather than exploring development. If compensation and development are entangled, every conversation feels like negotiation. Separate them, and honest feedback becomes possible.

Here's what changed at Adobe once Check-In replaced annual reviews: conversations stopped being events and became rhythm. Managers couldn't defer hard conversations to "next review." Employees didn't wait for formal feedback to ask how they were doing. The work and the development became the same thing.

That's not a new system. It's removing the bureaucracy that was preventing management from happening.

Morris didn't plan to eliminate performance reviews. She was exhausted, frustrated, and said something in an interview she hadn't fully worked through. That accidental honesty forced a decision Adobe had been avoiding for years.

The research was there. The pattern was visible. All it took was someone willing to act on what everyone already knew.

The question now isn't whether continuous feedback works better than annual reviews. That's been settled for nearly three decades. The question is whether you're willing to keep running a system the research shows often backfires, simply because changing it feels harder than stopping.


References

Adobe. (n.d.). How Adobe continues to inspire great performance and support career growth. Retrieved January 30, 2026, from https://www.adobe.com/check-in.html

Cappelli, P., & Tavis, A. (2016, October). The performance management revolution. Harvard Business Review, 94(10), 58–67. https://hbr.org/2016/10/the-performance-management-revolution

Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112. https://doi.org/10.3102/003465430298487

Kluger, A. N., & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119(2), 254–284. https://doi.org/10.1037/0033-2909.119.2.254

Mann, A., & Harter, J. (2016, January 7). The worldwide employee engagement crisis. Gallup. https://www.gallup.com/workplace/236495/worldwide-employee-engagement-crisis.aspx

McKinsey & Company. (2017, October 4). Performance management: Why keeping score is so important, and so hard. https://www.mckinsey.com/capabilities/operations/our-insights/performance-management-why-keeping-score-is-so-important-and-so-hard

Wisniewski, B., Zierer, K., & Hattie, J. (2020). The power of feedback revisited: A meta-analysis of educational feedback research. Frontiers in Psychology, 10, Article 3087. https://doi.org/10.3389/fpsyg.2019.03087
