Hi Taro - just wanted to say thank you for the Meta/FAANG discussion event yesterday. I was wondering if you could share your thoughts with the Taro community on how managers evaluate their employees in detail. You mentioned internal tools where one could see how many PRs, discussions, and comments someone had in GitHub/JIRA, and who was at the top of that baseline versus at the bottom of the rankings; I'd love more specifics if possible.
Although no one likes it, it would be good to understand how "stack ranking" works at FAANG, and how some managers evaluate against this criterion, despite it being a practice that sucks. That way I can be more confident I'm hitting the baseline, even if it's invisible, because I can take daily steps to work on my own visibility and perceived performance.
I feel like the biggest challenge right now is getting critical feedback from a manager/org (and it sounds like some companies in the FAANG space are pretty awful about it). For example, I read about a Redditor who got let go without much notice because a skip-level manager decided their code and daily output weren't up to par, even though the direct manager and everyone else had consistently communicated that this employee's performance was great. But this goes back to the idea that 'great' is 'average' lately, and it's way harder to hit exceeds and greatly exceeds on performance.
Thank you in advance!
For folks who are interested, this is the event OP is referring to: Group Office Hours With Alex - Meta Discussion (Interview, Culture, Growth)
There's a lot to unpack here, so I'll split up my response into 3 parts:
If you want to go super deep into Meta performance review, check this out: https://www.promotions.fyi/company/meta/performance-review
The most infamous tool at Meta is called Team Insights. It has tons of metrics on employee output, and it's even extensible: I remember one of the staff engineers (E6) in my org added new metrics to it to better track people's Workplace activity.
As you might imagine, the most important metrics in the tool are:
Aside from diff metrics, which metrics matter most surely varies by org. My org at Instagram was very Workplace-heavy, so I imagine the calibration folks looked at those metrics (especially since an E6 added them - E6+ engineers take part in calibrations).
Random side note: If you're interested in how Workplace comes into play for Meta engineer performance, I wrote a lot about it here - "Is posting on Workplace required for good performance?"
If you want to get an idea of what other metrics can be used to judge code contribution, check out the discussion here: "Internship Metrics For Conversion?"
The thread's about interns, but I imagine it's still relevant for all engineers, especially L3s (junior) and L4s (mid-level).
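To make the "activity metrics" idea concrete, here's a minimal sketch of how a dashboard like Team Insights might tally and rank per-engineer activity. Everything here (the event shape, field names, and the metrics themselves) is invented for illustration; I have no visibility into how the real tool is built:

```python
# Hypothetical sketch: aggregate activity events per engineer and rank by a metric.
# Event types like "diff_landed" and "comment" are made-up examples.
from collections import defaultdict

events = [
    {"engineer": "alice", "type": "diff_landed"},
    {"engineer": "alice", "type": "diff_landed"},
    {"engineer": "alice", "type": "comment"},
    {"engineer": "bob", "type": "diff_landed"},
    {"engineer": "bob", "type": "comment"},
    {"engineer": "bob", "type": "comment"},
]

def aggregate(events):
    # Count each event type per engineer.
    counts = defaultdict(lambda: defaultdict(int))
    for e in events:
        counts[e["engineer"]][e["type"]] += 1
    return {eng: dict(types) for eng, types in counts.items()}

def rank_by(counts, metric):
    # Sort engineers by one metric, highest first - the "who's at the top
    # of the baseline vs. who's at the bottom" view the question asks about.
    return sorted(counts, key=lambda eng: counts[eng].get(metric, 0), reverse=True)

counts = aggregate(events)
print(rank_by(counts, "diff_landed"))  # ['alice', 'bob']
```

The point of the sketch is that any ranking like this depends entirely on which metric you sort by - alice "wins" on diffs, bob "wins" on comments - which is why knowing what your org actually weighs matters so much.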
Meta has a "softer" form of stack ranking that I think works like this:
Step #3 is where it gets spicy, and I don't have much visibility into it. I may also be completely wrong 😅 - If there's some Meta VP reading this, I would love to be corrected.
From my understanding, Meta doesn't follow a rigid stack rank where there's a strict rating distribution and some fixed number of people must land in PIP territory. Back when I was at Portal, we actually had an above-average distribution: tons of engineers got really good ratings, way more than the company average.
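To illustrate what a rigid forced curve would look like (as opposed to the softer approach I'm describing at Meta), here's a hypothetical sketch. The quotas, rating names, and scores are all made up; no company publishes its actual numbers:

```python
# Hypothetical forced-curve stack ranking: a fixed fraction of people MUST
# land in each rating bucket, including the lowest one, no matter how well
# everyone performed in absolute terms.
def forced_curve(scores, quotas):
    ranked = sorted(scores, key=scores.get, reverse=True)
    ratings, i = {}, 0
    for rating, frac in quotas:
        n = round(frac * len(ranked))
        for eng in ranked[i:i + n]:
            ratings[eng] = rating
        i += n
    for eng in ranked[i:]:  # anyone left over from rounding gets the bottom bucket
        ratings[eng] = quotas[-1][0]
    return ratings

scores = {"a": 95, "b": 90, "c": 88, "d": 60}
quotas = [("exceeds", 0.25), ("meets", 0.50), ("below", 0.25)]
print(forced_curve(scores, quotas))
# {'a': 'exceeds', 'b': 'meets', 'c': 'meets', 'd': 'below'}
```

Note the ugly property: even if all four scores were 95, someone still gets "below". A softer system drops the hard quotas and lets an above-average team earn above-average ratings, which is what I saw at Portal.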
All that being said, I have heard that Meta is getting really brutal with the layoff waves, so it's entirely possible the company now has a hard target of people to cut every performance cycle. 🤷
The good news is that these are effectively the same thing: If you are doing high-performer behaviors, you will probably survive stack rank. You don't need to be deliberate around navigating stack rank (unless you want to purely play politics to survive).
There's a ton that goes into being a high-performing engineer, so here are just my high-level points:
There's no way you can learn all this from 5 minutes of reading my random Q&A comments, so here are the resources to help you actually learn all these things:
It varies a lot (FAANG isn't a monolith!), and we talk about this in-depth across the resources here:
For some historical context from my perspective:
As usual, I'll attach my standard disclaimer: There's always exceptions. I know Amazon SDEs with exceptional WLB. I know Microsoft SWEs who are burnt out. The most important thing always is to find a good team. 😊
None of this matters much anymore, as the layoff era has changed everything, mostly for the worse unfortunately. "Chill" places are now hard. Hard places are now extremely hard.
But this goes back to the idea that 'great' is 'average' lately, and it's way harder to hit exceeds and greatly exceeds on performance.
For Big Tech companies nowadays, this is 100% true. Given the economic climate, every company wants to extract far more value from their engineers.
There's some detail about Meta's performance review process in this section of promotions fyi: https://www.promotions.fyi/company/meta/performance-review#calibration
Also, here's a YouTube video I made about how stack ranking is typically implemented in Big Tech today (the more politically correct version compared to what MSFT had in the 2000s).