Recently I asked a friend who works in DevX (developer experience) analytics about her work, and she mentioned that currently there's no way to measure developer productivity other than lines of code or how frequently and how long someone is coding. Isn't this a bad measure of productivity? Couldn't someone just be pushing crap code (copy and paste) that has no substantive value to the team? I also thought about this article by the CTO of Carta, which discusses alignment and other more qualitative, substantive ways to evaluate success and, I guess, productivity: https://review.firstround.com/unexpected-anti-patterns-for-engineering-leaders-lessons-from-stripe-uber-carta/
How does your company evaluate developer productivity and success, and with what metrics? From talking to a lot of former Meta engineers, I've found they tend to have some really specific ways to measure success that seem reasonable.
I personally believe that the best measure of developers' productivity is the "impact" they have, not the number of lines of code they write. Consider a developer who writes thousands of lines of code for projects that have little to no impact on the team, organization, company, or industry (depending on their level). Now, compare that person's productivity to a developer who finds a bug in a service and fixes it by changing just 15 lines of code, resulting in reduced latency, lower product costs, or increased revenue. The contribution doesn't even have to involve code. Someone could write comprehensive onboarding documentation for new hires that cuts their acclimation time in half. That brings significant savings for the company in the long run, potentially hundreds of hours in onboarding costs.
I also recommend the book Slow Productivity: The Lost Art of Accomplishment Without Burnout by Cal Newport. It helps to focus on what matters in the long run.
There isn't one. This is for reasons that you more or less perfectly covered (e.g. making up code to increase commit/SLOC count). It only gets harder to measure developer productivity over time as engineers will slot into a bunch of different archetypes as they get to senior/staff, which often leads to them coding less.
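To make that concrete, here's a minimal sketch (hypothetical, not any company's actual metric) of why raw SLOC counting is so easy to game: two functionally identical implementations can differ wildly in line count, so the verbose one "scores" higher.

```python
def sloc(source: str) -> int:
    """Naive 'productivity' metric: count non-blank, non-comment lines."""
    return sum(
        1
        for line in source.splitlines()
        if line.strip() and not line.strip().startswith("#")
    )

# Two implementations of the same function: one idiomatic, one padded.
concise = "def total(xs):\n    return sum(xs)\n"

verbose = (
    "def total(xs):\n"
    "    result = 0\n"
    "    for x in xs:\n"
    "        result = result + x\n"
    "    return result\n"
)

print(sloc(concise))  # 2
print(sloc(verbose))  # 5 -- "more productive" by this metric, same behavior
```

Same output, same behavior, 2.5x the "productivity". That's the gap between what's measurable and what's valuable.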
Meta does indeed measure everything their engineers do, and I think they strike a good balance: looking at the metrics as a base and then contextualizing them with peer feedback to get a real picture of how the engineer is truly doing. I got rewarded a lot for having very, very high commit and review counts, but that's because I also got feedback that my written code and code review comments were extremely high quality. Quantity on its own doesn't mean much, and it can even be a red flag if quality is poor.
If you want to learn more about how Meta ranks/measures engineers, check this out: "How does Stack Ranking work (at FAANG) and how can I be proactive at a base level?"