Is your team using test automation? If so, what's your approach to automation, and what are the main challenges you face?
I'm especially curious about: What strategies has your team implemented to tackle your challenges and make test automation more effective?
Taro's too small to have any real automation, but back at Instagram and Robinhood we mainly had a mix of these 2 things:
In terms of making testing more effective, my main advice is to focus on the more "meta" mechanisms surrounding testing instead of just the test coverage itself. You can find more very detailed thoughts here (thread is mobile-focused but applies to everyone): "What do mobile testing strategies look like at top tech companies?"
Thanks Alex for your response. What do you mean by "meta" mechanisms?
Here’s some additional context behind my question: In my previous roles, I worked at large organizations like TD Bank, Walmart Labs, and Shopify. I led the automation efforts in those roles, where processes were well-established and controlled. We had a dedicated automation team, and the development team also contributed by writing end-to-end (E2E) and integration tests.
This is my first experience working in a startup, and it has been quite challenging. The startup operates in the healthcare domain and is subject to regulatory requirements, including FDA approvals, which significantly slow down the development process.
Currently, I am leading the testing efforts for both the mobile app (built with Flutter) and the web app (built with React). Here are the primary challenges my team is facing:
App Complexity:
Our mobile app, built with Flutter, is highly complex and involves integration with a hardware device. The app communicates with the device via Bluetooth to monitor ECG and health vitals. While we have unit and widget tests in place, writing E2E tests remains a challenge. Previously, we used Flutter’s native testing tools for E2E, but we recently transitioned to Appium, which has improved our coverage.
High Maintenance:
Maintaining automation scripts demands significant time and effort. As a startup, shifting priorities and evolving app requirements make script maintenance particularly challenging.
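One common way to cut that maintenance cost (a sketch of the general idea, not necessarily what your stack uses) is the Page Object pattern: every selector and user flow for a screen lives in one class, so a UI change means updating one file instead of every script. Here's a framework-agnostic Python sketch, with a fake driver standing in for Appium/Selenium so it runs standalone; all names are hypothetical:

```python
class FakeDriver:
    """Stand-in for an Appium/Selenium driver so this sketch runs standalone.
    It just records the actions a real driver would perform."""
    def __init__(self):
        self.actions = []

    def tap(self, selector):
        self.actions.append(("tap", selector))

    def type_text(self, selector, text):
        self.actions.append(("type", selector, text))


class LoginPage:
    """Page object: every selector for the login screen lives here.
    When the UI changes, only this class needs updating, not every test."""
    EMAIL_FIELD = "login_email"
    PASSWORD_FIELD = "login_password"
    SUBMIT_BUTTON = "login_submit"

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, email, password):
        self.driver.type_text(self.EMAIL_FIELD, email)
        self.driver.type_text(self.PASSWORD_FIELD, password)
        self.driver.tap(self.SUBMIT_BUTTON)


# Tests call the high-level flow; none of them touch raw selectors.
driver = FakeDriver()
LoginPage(driver).log_in("user@example.com", "hunter2")
```

The payoff shows up when a selector changes: you edit one class constant instead of grepping through dozens of scripts, which matters most in exactly the fast-shifting startup environment described above.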
Budget Constraints:
With a limited budget, we only have one dedicated QA resource for automation. Leveraging my experience in automation, I designed the framework and contributed to writing scripts. While I’m working to train the team to assist with automation, most of our time is consumed by urgent testing tasks, leaving little room to focus on automation initiatives.
These challenges make it difficult to maximize automation coverage. Therefore, I decided to ask others about the challenges they are facing and the strategies they are using to overcome them.
What do you mean by "meta" mechanisms?
It means that it's more important to focus on the system, culture, and infrastructure surrounding the tests instead of the tests themselves.
In concrete terms, it means that organizations looking to promote healthy automated testing shouldn't focus on something like test coverage as that's too primitive and reductive a goal.
Instead of that, organizations should focus on things like this:
I have seen so many engineering teams just force test coverage/volume as a goal and end up with a bunch of tests that are flaky and unmaintained. Test coverage being the KPI also incentivizes engineers to write tests for unimportant logic that is non-critical and not likely to break (i.e. the "easy" code). A real example of this is when I found several unit tests back at Instagram that were testing methods that were literally just null checks.
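A coverage-chasing test of that sort might look something like this (all names hypothetical, a minimal sketch of the anti-pattern, not actual Instagram code):

```python
# Hypothetical production code: a method that is literally just a null check.
def get_display_name(user):
    if user is None:
        return None
    return user["name"]


# A coverage-chasing test: it bumps the coverage number, but it guards
# logic so trivial that it is effectively never going to regress, while
# the genuinely risky code paths stay untested.
def test_get_display_name_handles_none():
    assert get_display_name(None) is None


test_get_display_name_handles_none()
```

The test passes, coverage goes up, and the dashboard looks healthy, which is precisely why coverage alone is a poor KPI.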
Thank you so much, Alex, for your response and for explaining the "meta" mechanisms. It's very insightful. I've actually witnessed how this test-coverage approach can undermine automation efforts, leading to the creation of useless tests that add no real value. As you mentioned, organizations should prioritize the system, culture, and infrastructure that support testing, but unfortunately, this rarely happens.
So, here are some of the strategies that I have seen work well:
Other caveats:
Thanks Bikram. What do you mean by deterministic simulation testing?
https://www.youtube.com/watch?v=ZtZmEs74ReI
https://www.youtube.com/watch?v=sC1B3d9C_sI
https://www.youtube.com/watch?v=4fFDFbi3toc
Basically, you run the system in a mode where the order of operations is determined by a pseudo-random number generator. Since the randomness comes from a known seed, you can replay the exact same order of operations to reproduce any failure deterministically.
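To make the idea concrete, here's a tiny Python sketch of the core mechanism (a toy illustration, not how any particular system like FoundationDB or TigerBeetle implements it): a seeded PRNG decides the interleaving of pending events, so the same seed always reproduces the same schedule.

```python
import random


def run_simulation(seed, events):
    """Toy deterministic-simulation scheduler: the order in which pending
    events execute is driven entirely by a PRNG seeded with `seed`, so the
    same seed always reproduces the exact same interleaving."""
    rng = random.Random(seed)
    pending = list(events)
    order = []
    while pending:
        # Pick the "next" event pseudo-randomly, standing in for real-world
        # nondeterminism (thread scheduling, network timing, disk latency).
        idx = rng.randrange(len(pending))
        order.append(pending.pop(idx))
    return order


events = ["write_a", "write_b", "fsync", "crash"]
# The same seed yields the same schedule, so when a randomized run finds a
# bug, you can replay that exact schedule under a debugger.
assert run_simulation(42, events) == run_simulation(42, events)
```

In a real system, the harness explores thousands of seeds looking for a failing interleaving, then reports the seed so engineers can replay that exact run.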
Thanks Bikram for sharing videos about deterministic simulation testing. This looks like an interesting concept to explore.
We have very minimal automated testing at Taro, but this company will be interesting for you: https://www.ycombinator.com/companies/dev-tools-ai
I know the company through YC, and the founder Chris is a big Taro supporter. He may be a good person to talk to about making test automation more effective, especially in the age of AI.
Thanks Rahul. I checked the link https://dev-tools.ai/, but it results in a 403 Forbidden error.