The company I work for has started tracking a wide range of metrics related to our day-to-day work (with an external tool called LinearB). It integrates with pretty much everything to collect as many metrics as it can: lines of code, number of PRs, size of PRs, time spent reviewing, cycle time, time spent in meetings, etc. These tools feel like they aim to gather as many metrics as physically possible but don't always manage to put them into context. For example, if you go on holiday or sick leave, all your metrics drop (for obvious reasons).
Personally, I feel some of these metrics are straight up toxic, and I can see that many people in our company have started feeling paranoid about them and feel an urge to “game” the metrics so their numbers look good.
The reason for this is that initially we were told the metrics would only be used at the team level, but now we are getting strong signals that they are being used at the individual level as input for things like promos, raises, bonuses, etc. I know there are standards and best practices to follow (like keeping PRs small and meaningful), but using these metrics as a signal for performance feels stupid, because they depend so much on the type of work I do. One week I'm debugging a production incident that gets resolved with a single-line config change, the next week I'm writing tons of unit tests, and so on.
We were told that this whole thing is pretty much industry standard and very common at big companies like FAANG. Is that really so? If so, could you elaborate on how it is implemented and how you deal with the stress of trying to maximize your metrics (which may not be a direct consequence of just "getting the work done", so you end up doing extra work just to push your numbers up)?
Really appreciate all your input. Thanks.