We maintain a healthy suite of E2E tests, which sometimes catches regressions before release.
It is hard to track all of this manually, but I believe each regression caught before release is a net win on time: it saves the time we would otherwise have spent hot-fixing the issue in production if the tests hadn't caught it early.
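To make that hypothesis concrete, this is roughly the model I have in mind (a minimal sketch; every number and variable name below is a hypothetical placeholder, not a real measurement — the inputs would have to come from our own CI logs and issue tracker):

```python
# Back-of-the-envelope model of the E2E suite's net time impact per release cycle.
# All values are placeholders for illustration; substitute measured data.

# Ongoing cost of the suite this cycle (authoring, maintenance, flaky-test triage)
suite_cost_hours = 40

# Regressions the suite caught before release this cycle
regressions_caught = 3

# Estimated cost of one regression escaping to production
# (hot-fix development, review, emergency deploy, incident handling)
hotfix_cost_hours = 20

time_saved = regressions_caught * hotfix_cost_hours
net_hours = time_saved - suite_cost_hours

print(f"Time saved by early catches: {time_saved} h")
print(f"Suite cost this cycle:       {suite_cost_hours} h")
print(f"Net engineering time:        {net_hours:+} h")
```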
How can I verify this hypothesis and make sure the overall E2E testing effort is well run, impactful, and a net-positive use of engineering time?