Let’s face it. Nobody sits down at their laptop and thinks “Yes! I can’t wait to write some tests.” But if you write code, chances are you’re going to have to at some point.
When apps were monoliths, unit tests typically got the job done. They tested the logic of your application, and with only a couple of connection points (perhaps to the frontend or a database), integration and end-to-end testing was pretty straightforward. Nowadays, how do you test a myriad of APIs and databases with a web of dependencies, both internal and external, whose messages change independently and without notice? How much effort would it take to validate atomic API functionality, edge cases, and overall performance and SLAs? And who would orchestrate the test automation, the environments and dependencies, and the data to run the right use cases (the Triple Threat)? Yeah, I’m cringing too.
While everyone has their own way of writing unit tests, seven years in the field has shown me that the typical method of integration and end-to-end testing is UI testing. The popular players used to be Selenium, HP QTP, and Appium (from Sauce Labs). Wait, are we talking about the present? If we add tools like Cypress and Katalon to the mix, then yes. UI testing hasn’t evolved much.
The problem is that as we begin to strangle the monolith and containerize, the number of connection points grows exponentially.
It is hard to keep pace with all the API integration points, especially with manual test creation. Can you manually write enough tests to ensure your customers won’t run into bugs? Ideally, these tests would verify not only functionality but response times too. And not only the happy path, but the sad, negative paths as well. Error-handling code rarely gets checked, because it’s difficult to make UAT/Staging return errors on demand. And then there’s load and throughput. In a perfect world, you (and management) would want to test for that too, right? (You get the point.)
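To make that concrete, here’s a minimal sketch in Python of what a single API check covering functionality, a response-time budget, and a sad path might look like. The field names, status codes, and SLA threshold are all made up for illustration:

```python
# Hypothetical validator for an API response: checks functionality
# (status code, required fields) and a response-time budget (SLA).
def check_response(status, body, elapsed_ms, expect_status=200,
                   required_fields=(), sla_ms=500):
    errors = []
    if status != expect_status:
        errors.append(f"expected status {expect_status}, got {status}")
    for field in required_fields:
        if field not in body:
            errors.append(f"missing field: {field}")
    if elapsed_ms > sla_ms:
        errors.append(f"too slow: {elapsed_ms}ms > {sla_ms}ms budget")
    return errors

# Happy path: a well-formed order response, under budget.
ok = check_response(200, {"orderId": "123", "total": 9.99}, 120,
                    required_fields=("orderId", "total"))
assert ok == []

# Sad path: an unknown order should return 404; this check fails
# loudly if the API returns 200 instead.
bad = check_response(200, {"orderId": "999"}, 120, expect_status=404)
assert bad == ["expected status 404, got 200"]
```

Trivial on its own, but multiply it by every endpoint, every error case, and every latency budget, and the manual effort adds up fast.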
Once I even heard a team lament that, with the number of features they deliver in a sprint, it would take a developer two months to write tests that adequately cover what they built. Meanwhile, the ability to capitalize on digital trends and adapt rapidly to market conditions is key to survival.
From a technology, marketing, and software perspective, modernizing the legacy systems we’d call old school or analog is a big, glaring step. As Forbes put it, now is the time for retailers to execute their digital transformations, but can every business afford it? (https://www.forbes.com/sites/jonathantreiber/2020/06/30/digital-transformation-during-covid-19-the-gap-widens-between-the-haves-and-have-nots/#5d9448983b27)
Thanks to the FAANGs, “test in prod” has become popular. They even told us how to do it in a book. The summary: make deploying code so fast and automated that if you need to roll out a fix, you can do it immediately.
Minor detail: if you need to roll out a fix, chances are a monitoring alert has gone off or customers have filed tickets, and by then it’s already too late. Sure, addressing problems quickly is an essential ability to have, but most of us aren’t Google or Facebook. They serve millions of users, and the vast majority are using a free product. So if a new feature doesn’t roll out as planned, big deal.
How about your company, though? Are you in retail? Finserv? eCommerce? Most businesses count on customers for revenue, and every transaction matters. Canary deploys, blue/green, it’s all a form of having your users test your application for you. Add security requirements, GDPR, and the California Consumer Privacy Act (CCPA) on top of that, and you have very little margin for error.
As I discussed in my last blog, you can’t drive toward Florida and expect to end up in Canada. Likewise, you can’t test in prod and expect to reduce customer-facing service outages and degradations before they happen.
You have to be proactive vs reactive.
Why do so many companies test in prod, though? In short, production is the only environment with the real state, the real data, and the real-world use cases. These factors conspire to make production one of a kind, so it’s perceived as the only place you can do realistic, robust testing.
Another reason testing is hard is that it can involve lots of people from different teams: architects, DBAs, senior developers, and infrastructure folks may all need to be involved for adequate testing. Unless you’ve already put in a couple of years to streamline and automate this, it’s probably sitting on the backlog, getting pushed out in favor of new features. And that’s just for integration and end-to-end testing, let alone regression. Lastly, it’s amazing how few companies are able to do any form of performance testing whatsoever.
Ideally, there would be a way to automate test and mock creation from traffic, capturing the state, data, and real-world use cases of production. Those artifacts could then be used to create safe, stable, predictable environments and to run regression, performance, and fuzzing tests. That’s what Speedscale does.
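As a rough illustration of the idea (a toy sketch, not Speedscale’s actual implementation), recorded traffic can double as a mock: capture request/response pairs once, sanitize them, then replay the recorded responses so a downstream dependency becomes stable and predictable. The traffic shape below is hypothetical:

```python
# Toy traffic-based mock: recorded request/response pairs (captured
# from production and sanitized) stand in for a real dependency.
recorded_traffic = [
    {"method": "GET", "path": "/users/42",
     "response": {"status": 200, "body": {"id": 42, "name": "[REDACTED]"}}},
    {"method": "GET", "path": "/users/999",
     "response": {"status": 404, "body": {"error": "not found"}}},
]

def build_mock(traffic):
    """Index recorded traffic by (method, path) for replay."""
    return {(t["method"], t["path"]): t["response"] for t in traffic}

def replay(mock, method, path):
    """Return the recorded response, or 501 for unrecorded calls."""
    return mock.get((method, path),
                    {"status": 501, "body": {"error": "no recording"}})

mock = build_mock(recorded_traffic)
assert replay(mock, "GET", "/users/42")["status"] == 200
# Error paths are easy to exercise because the 404 was recorded too,
# which is exactly what hand-built staging environments struggle with.
assert replay(mock, "GET", "/users/999")["status"] == 404
```

Note how the sad path comes for free: because the 404 was captured from real traffic, nobody has to coax a staging dependency into failing.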
Taking sanitized traffic and making it available to the lower environments accomplishes several things: tests reflect real-world use cases rather than guesses, mocked dependencies keep the environment stable and predictable, and sanitization keeps sensitive data out of lower environments.
With the move to cloud and standardized infrastructure, there is a great opportunity to automate and scale quality within companies. Most testing innovation Speedscale has seen thus far is at the UI layer, but APIs and integration points are well suited to programmatic, automated validation. Just think of all the field names, status codes, response times, and URLs that need to be checked.
Like anything, we encourage you to start small, run lots of experiments, innovate and iterate. Best of luck! And if you’d like a demo of Speedscale, reach out to [email protected].
Many businesses struggle to discover problems with their cloud services before they impact customers. For developers, writing tests is manual and time-intensive. Speedscale allows you to stress test your cloud services with real-world scenarios. Get confidence in your releases without testing slowing you down. If you would like more information, schedule a demo today!