I work as an iOS engineer. My current company is a startup that's scaling up fast. In classic startup fashion, testing was historically not a top priority here. As a consequence, we struggle with lots of manual QA and low code confidence when modifying poorly tested legacy code.
This has improved in the past year simply by unit testing every new piece of code by default and by introducing unit tests in legacy code as we touch it, Boy Scout Rule style.
At this point, I think we need to step it up and I'm looking forward to formalising a testing strategy for our mobile team.
Here are some ideas I have:
What other testing strategies can I propose to establish some standard we can use to further improve our code confidence, reduce bugs, and rely more on automated testing?
How do the top tier tech companies approach mobile testing?
At Robinhood, we have a mix of screenshot tests, unit tests, integration tests (powered by Selenium), and manual QA.
While all those different testing frameworks are very valid approaches to improving test coverage (and quality), I think you should identify the single largest problem you're looking to address by introducing new testing paradigms/frameworks. All of those avenues take a significant amount of time to introduce and to support: introducing all of them in a timely manner will drain a significant amount of resources in the short to mid term (an immediate block of focused dev time is needed for the implementation to be successful) and will lower overall velocity in the long term (more tests needed per commit means slower turnaround for code to be landed).
If you don't have a clear understanding of the problem you're trying to solve for the business, then you'll likely end up hurting the business (if successful) or your ideas will get shot down since it isn't clear how the proposals will be a net gain for the company. I recommend loosely computing the yield for every testing practice you wish to introduce (benefit to the business / time needed to implement) and then pushing the 1-2 practices at the top of the list.
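To make "loosely computing the yield" concrete, here's a minimal Swift sketch of ranking candidate practices by benefit divided by cost. The practice names and all the numbers are made up purely for illustration; the point is the back-of-the-envelope ranking, not the estimates.

```swift
import Foundation

// Hypothetical candidate testing practices with rough estimates.
// Every number here is invented for illustration only.
struct TestingPractice {
    let name: String
    let businessBenefit: Double    // e.g. expected bugs prevented per quarter
    let implementationCost: Double // e.g. engineer-weeks to introduce

    // Yield = benefit to the business / time needed to implement.
    var yield: Double { businessBenefit / implementationCost }
}

let candidates = [
    TestingPractice(name: "Snapshot tests", businessBenefit: 8, implementationCost: 2),
    TestingPractice(name: "E2E UI tests", businessBenefit: 12, implementationCost: 10),
    TestingPractice(name: "API contract tests", businessBenefit: 6, implementationCost: 3),
]

// Rank by yield and keep only the top 1-2 to actually propose.
let ranked = candidates.sorted { $0.yield > $1.yield }
let shortlist = Array(ranked.prefix(2).map(\.name))
print(shortlist) // ["Snapshot tests", "API contract tests"]
```

Even rough numbers like these force the conversation onto "what does the business get for the time spent," which is exactly the framing that keeps proposals from getting shot down.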
Hope this helps!
Adding on to my response, HWalia had some really great advice and I want to extend it.
Automated tests are hard, almost always harder than people think. Mobile is tricky as it's still a newer space that's less well-defined than something like traditional back-end, and iOS is especially tricky due to the closed nature of the platform + the thrashier nature of the SDK compared to Android. Something I have seen time and time again is:
What this means for you as a testing/quality lead is that you need to do everything in your power to prevent #3 in this cycle: you need to make sure that adding or fixing an automated test is extremely easy and that the impact of the team maintaining its test suite is crystal clear.
To me, this is far more important than the tactical types of tests you're going to cobble together. For example, end-to-end tests can theoretically catch bugs the best as they exercise the entire flow and catch an error at every step in the process, but the problem is that they're a huge pain to maintain and write.
Here are the questions you should always be thinking about:
Striving to answer all these questions positively will naturally lead you down the right road: optimizing the testing framework, creating clear dashboards, and building up the proper systems and alignment with engineering leadership.
It wasn't about automated testing, but I did have to show all these behaviors when I completely revamped the oncall system for my ~20 engineer org at Instagram (another quality effort). You can watch the in-depth case study here: [Case Study] Revamping Oncall For 20 Instagram Engineers - Senior to Staff Project
Also, becoming a true testing/quality champion is definitely staff scope, and it seems like your company is well-established at Series E. I hope this all becomes a shining star of your senior -> staff packet!
At this point, I think we need to step it up and I'm looking forward to formalising a testing strategy for our mobile team.
It makes sense and is a good opportunity to agree on testing terminology and test boundaries. I have often seen the terms E2E, UI, functional, and acceptance testing used interchangeably. Each test category should clearly define which code will be exercised across the architecture layers, and what can and cannot be mocked.
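As an illustration of what a written-down boundary can look like in code, here is a minimal Swift sketch (all type names are hypothetical) where the agreed rule is: a unit test of the domain layer mocks only the network-facing protocol and exercises the real domain logic.

```swift
import Foundation

// Hypothetical boundary: the domain layer depends on an abstract
// PriceService, so a "unit" test of PortfolioViewModel may mock the
// network but must exercise the real domain logic.
protocol PriceService {
    func latestPrice(for symbol: String) -> Double
}

struct PortfolioViewModel {
    let priceService: PriceService

    // Real domain logic under test: total value of the holdings.
    func totalValue(holdings: [String: Int]) -> Double {
        holdings.reduce(0) { total, holding in
            total + priceService.latestPrice(for: holding.key) * Double(holding.value)
        }
    }
}

// The mock lives on the far side of the agreed boundary.
struct MockPriceService: PriceService {
    let prices: [String: Double]
    func latestPrice(for symbol: String) -> Double { prices[symbol] ?? 0 }
}

let viewModel = PortfolioViewModel(
    priceService: MockPriceService(prices: ["AAPL": 200, "GOOG": 150])
)
let value = viewModel.totalValue(holdings: ["AAPL": 2, "GOOG": 1])
print(value) // 550.0
```

Writing the boundary into a protocol like this makes the "what can and cannot be mocked" rule enforceable by the compiler rather than by convention.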
In mobile, there is no one solution that fits all, but a good place to start is to follow the mobile testing pyramid and tweak it according to your needs. Ideally, you will have a combination of unit, integration, and UI testing.
I want to introduce API contract tests because our back-end API lacks documentation
This is good, but it might be beneficial to reuse the mocked JSON in integration tests. That way you will exercise the whole UI, domain, and network layer. In both cases, you have to manually maintain the JSON files. Ideally, the tests will launch a single screen, and you can mock the response to test multiple view states. I know this is hard in iOS due to the closed nature of the platform.
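To sketch the idea, here is a hypothetical example (the Quote model and the inline JSON are invented; in a real suite the JSON would live in a maintained fixture file in the test bundle) where one mocked payload is decoded through the same Codable path production uses and mapped to a view state:

```swift
import Foundation

// Hypothetical mocked JSON fixture, kept inline here for brevity.
let mockedJSON = """
{ "symbol": "AAPL", "price": 201.5, "isTradable": true }
""".data(using: .utf8)!

// The same Codable model the production network layer would use.
struct Quote: Codable {
    let symbol: String
    let price: Double
    let isTradable: Bool
}

// A tiny slice of the domain layer: map the decoded payload to a view state.
enum QuoteViewState: Equatable {
    case tradable(title: String)
    case halted(title: String)
}

func makeViewState(from data: Data) throws -> QuoteViewState {
    let quote = try JSONDecoder().decode(Quote.self, from: data)
    let title = "\(quote.symbol) $\(quote.price)"
    return quote.isTradable ? .tradable(title: title) : .halted(title: title)
}

// One screen, multiple view states: swap the fixture to cover each case.
let state = try! makeViewState(from: mockedJSON)
```

Because the fixture flows through the real decoding and mapping code, a backend field rename breaks this test before it breaks the app.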
Snapshot testing is good, and it can also cover localisation. Snapshot tests are beneficial when refactoring the UI or migrating to a new design system.
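For illustration, here is a stripped-down sketch of the record/verify cycle behind snapshot testing, using a plain-text "snapshot" instead of images; real frameworks (e.g. Point-Free's SnapshotTesting) record references to disk and diff against them, and every type here is invented.

```swift
import Foundation

// Hypothetical view whose rendered text stands in for a rendered image.
struct OrderButtonView {
    let locale: String
    var renderedText: String {
        locale == "fr" ? "Acheter" : "Buy"
    }
}

// Minimal record/verify cycle: the first run records a reference,
// later runs fail on any drift from it.
func verifySnapshot(of rendered: String, reference: inout String?) -> Bool {
    guard let recorded = reference else {
        reference = rendered   // first run: record the reference
        return true
    }
    return recorded == rendered // later runs: diff against the reference
}

var reference: String? = nil
_ = verifySnapshot(of: OrderButtonView(locale: "en").renderedText, reference: &reference)

// An unchanged view keeps matching; a different localisation does not.
let stillMatches = verifySnapshot(of: OrderButtonView(locale: "en").renderedText, reference: &reference)
let frenchMatches = verifySnapshot(of: OrderButtonView(locale: "fr").renderedText, reference: &reference)
print(stillMatches, frenchMatches) // true false
```

The same record/verify mechanic is what makes snapshot tests cheap to add during a design-system migration: re-record once the new UI is approved, then let the diffs guard it.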
You are spot on to keep the UI tests to core flows only, as they are expensive to run. It is good to generate automated nightly reports so the team can check the state of these tests.
Lastly, be mindful of the time you spend keeping these tests in a healthy state versus the benefit you are getting out of them. I would also track the number of times a regression has been caught before release because of these tests - this information will help you prioritise your time!
At Instagram, it was a mix of:
Instagram was unique though as:
I talk about Instagram's testing philosophy more in-depth here: "What should I do when no one is available to help?" - This engineer is stuck on introducing new mobile tests
Here's some other good resources around testing as well: