Just as users don't like a broken app, no developer likes an API that breaks. Breaking changes are changes made to APIs that disrupt consumers on the integration side. While new major versions or deprecated endpoints bring obvious change, smaller revisions and drift can also impact client implementations, requiring a keen eye to monitor and respond to.
Thankfully, there are strategies engineers can use to test APIs for breaking changes. Below, we'll analyze some ways to watch out for breaking changes, from diffing API definitions to contract testing, performance monitoring, and simply following developer-facing communication closely. Consider these testing habits to catch the negative outcomes of breaking changes, hopefully before they affect users!
1. Perform Specification-Based Testing
One way to highlight breaking changes in APIs is to diff OpenAPI specifications. OpenAPI files can be hundreds of lines long, and comparing them manually is time-intensive. So, using tools to automatically spot changes is usually necessary.
One great tool for this is oasdiff, an open-source command-line tool that can take two OpenAPI versions and highlight the precise differences between them. For example, I took an example OpenAPI description for an imaginary Museum API and made some edits to the YAML. Below is the result from comparing openapi.yaml and openapi copy.yaml:
5 changes: 2 error, 0 warning, 3 info
error [new-required-request-parameter] at /.../museum-openapi-example-main/openapi copy.yaml
in API GET /special-events
added the new required 'query' request parameter 'ExhibitType'
error [api-removed-without-deprecation] at /.../museum-openapi-example-main/openapi.yaml
in API DELETE /special-events/{eventId}
api removed without deprecation
info [api-security-component-added]
in components/securitySchemes
the component security scheme 'adminAuth' was added
info [api-security-added] at /.../museum-openapi-example-main/openapi copy.yaml
in API POST /special-events
the endpoint scheme security 'adminAuth' was added to the API
info [api-operation-id-removed] at /.../museum-openapi-example-main/openapi.yaml
in API POST /tickets
api operation id 'buyMuseumTickets' removed and replaced with 'purchaseMuseumTickets'
As you can see, oasdiff spotted that a new required parameter was added to allow users to filter special museum events by exhibit type. It also spotted a new administrative security scheme, a removed DELETE operation, and a renamed operation ID. Spotting drift in specifications like this can be paramount to avoiding broken applications that are hardcoded against operation names, parameters, or components.
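If you want a quick, scriptable sanity check to complement a dedicated tool like oasdiff, a naive structural diff is easy to write yourself. Below is a minimal Python sketch, assuming PyYAML is installed and the two spec files live locally; it only flags removed operations, which is a tiny subset of the rules a tool like oasdiff checks:

# naive_spec_diff.py: flag operations removed between two OpenAPI files.
# A simplified illustration only; dedicated tools cover many more breaking-change rules.
import sys
import yaml  # pip install pyyaml

HTTP_METHODS = {"get", "post", "put", "patch", "delete", "head", "options"}

def operations(spec):
    """Return the set of (METHOD, path) pairs defined in an OpenAPI document."""
    ops = set()
    for path, item in (spec.get("paths") or {}).items():
        for method in item or {}:
            if method.lower() in HTTP_METHODS:
                ops.add((method.upper(), path))
    return ops

def main(old_file, new_file):
    with open(old_file) as f:
        old = yaml.safe_load(f)
    with open(new_file) as f:
        new = yaml.safe_load(f)
    removed = operations(old) - operations(new)
    for method, path in sorted(removed):
        print(f"BREAKING: {method} {path} was removed")
    return 1 if removed else 0  # a non-zero exit code fails a CI job

if __name__ == "__main__":
    sys.exit(main(sys.argv[1], sys.argv[2]))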
That said, specification comparison has significant limitations. First, it hinges on the API provider publishing its specifications. Second, diffing two definitions only analyzes structural changes; it doesn't cover runtime behavior, which often doesn't match the documentation to begin with.
Contract testing can take this a bit further by verifying that the API actually honors pre-defined requests and responses. Tools that help with contract testing include Pact, Karate, Spring Cloud Contract, and Spectator for PHP developers.
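On the consumer side, even a lightweight schema check makes a reasonable starting contract test. Here's a minimal pytest sketch, assuming the requests and jsonschema packages are installed; the base URL and the expected event fields are illustrative assumptions about the Museum API, not its actual contract:

# test_contract_special_events.py: a minimal consumer-side contract check.
import requests
from jsonschema import validate

BASE_URL = "https://api.example-museum.com"  # hypothetical base URL

# The "contract" this client relies on: fields it will break without.
SPECIAL_EVENT_SCHEMA = {
    "type": "object",
    "required": ["eventId", "name", "dates", "price"],
    "properties": {
        "eventId": {"type": "string"},
        "name": {"type": "string"},
        "dates": {"type": "array", "items": {"type": "string"}},
        "price": {"type": "number"},
    },
}

def test_special_events_match_contract():
    response = requests.get(f"{BASE_URL}/special-events", timeout=10)
    assert response.status_code == 200
    for event in response.json():  # assumes the listing returns a JSON array
        validate(instance=event, schema=SPECIAL_EVENT_SCHEMA)

Dedicated frameworks like Pact formalize this idea by sharing the contract between consumer and provider so both sides can verify it independently.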
2. Run Functional Tests
Another, more comprehensive method to ensure API integrations don't break is functional testing. Regression tests, for instance, help verify that APIs function as expected after changes or updates, and they can cover an extensive range of scenarios.
For instance, you could program tests to ensure endpoints, parameters, or error responses work correctly. In our Museum API above, this could involve tests that ensure GET calls successfully retrieve museum opening hours or that a POST request to the special-events endpoint successfully creates a new event listing.
Instead of manually hitting APIs with curl, having a saved suite of tests is usually preferred. Postman is one tool that can be used for regression testing: quality test engineers can organize their API requests into Collections and write custom test scripts. Another, newer AI-powered API testing tool with a low CPU footprint is Aspen.
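If your team prefers code-based suites, the same regression checks can live in a test framework and run on every change in CI. Here's a minimal pytest sketch against the hypothetical Museum API described above; the base URL, paths, and payload fields are assumptions for illustration:

# test_museum_regression.py: a small regression suite run on every release.
import requests

BASE_URL = "https://api.example-museum.com"  # hypothetical

def test_get_museum_hours_returns_ok():
    response = requests.get(f"{BASE_URL}/museum-hours", timeout=10)
    assert response.status_code == 200
    assert isinstance(response.json(), list)  # assumed response shape

def test_post_special_event_creates_listing():
    payload = {"name": "Sample Night Tour", "dates": ["2024-10-29"], "price": 10}
    response = requests.post(f"{BASE_URL}/special-events", json=payload, timeout=10)
    assert response.status_code in (200, 201)
    assert "eventId" in response.json()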
Beyond regression testing, continuous backward compatibility checks can verify that the API still works with older clients. This helps prevent unforeseen changes from causing outages in end-user applications. (Remember, museums attract a lot of retirees whose API calls from browsers running Internet Explorer on Windows 98 should work, too!)
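A backward compatibility check can be as simple as asserting that the fields older clients depend on are still present, even as new ones are added. Another hedged sketch; the legacy field list is an assumption:

# test_backward_compat.py: fail if fields legacy clients rely on disappear.
import requests

BASE_URL = "https://api.example-museum.com"  # hypothetical
LEGACY_FIELDS = {"eventId", "name", "dates", "price"}  # what old clients still read

def test_special_events_keep_legacy_fields():
    events = requests.get(f"{BASE_URL}/special-events", timeout=10).json()
    for event in events:
        missing = LEGACY_FIELDS - set(event)
        assert not missing, f"Removing {missing} would break old clients"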
3. Test API Performance And Security
Changes in performance, as well as security bugs, can also cause clients to break. As such, it's a good idea to test performance regularly. One way is to load test the API with a high volume of requests to see how it responds under stress. This can help you set thresholds and design user experiences that account for slowdowns during peak traffic.
For instance, suppose our hypothetical Museum API receives thousands of concurrent requests at mid-day on a holiday and fails to respond. This is where local caching, smart defaults, or routing to alternative third-party APIs could save the day and prevent a poor user experience.
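Dedicated load testing tools like k6, Locust, or JMeter are built for this, but even a rough concurrency probe can reveal how the API degrades. Below is a hedged Python sketch; the endpoint, request volume, and thresholds are assumptions, and real load tests deserve a proper tool:

# load_probe.py: a rough concurrency probe, not a substitute for k6 or Locust.
import time
from concurrent.futures import ThreadPoolExecutor
import requests

BASE_URL = "https://api.example-museum.com"  # hypothetical
CONCURRENCY = 50
TOTAL_REQUESTS = 500

def timed_call(_):
    start = time.perf_counter()
    try:
        status = requests.get(f"{BASE_URL}/special-events", timeout=10).status_code
    except requests.RequestException:
        status = None
    return time.perf_counter() - start, status

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    results = list(pool.map(timed_call, range(TOTAL_REQUESTS)))

latencies = sorted(duration for duration, _ in results)
errors = sum(1 for _, status in results if status != 200)
p95 = latencies[int(len(latencies) * 0.95)]
print(f"p95 latency: {p95:.3f}s, errors: {errors}/{TOTAL_REQUESTS}")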
API changes can also introduce security flaws into your application, another class of issues that leads to breaking changes. This isn't purely hypothetical: OWASP now includes Unsafe Consumption of APIs on its list of the top ten API security risks. API analysis tools, like API Insights, can expose both performance defects and security gaps in APIs.
Or, perhaps version drift alters a security scheme, preventing the client or user from accessing a particular feature (like the new adminAuth endpoint security scheme I added to the Museum API definition above). Security is an ever-evolving area for APIs, and keeping an eye on how an API handles authentication and authorization is critical to maintaining safe, functional end applications.
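One simple guard against this kind of drift is a test suite that pins down the expected authentication behavior, failing loudly when a scheme change locks clients out or, worse, when a previously protected endpoint is left open. A hedged sketch; the endpoints, expected status codes, and token variable are assumptions:

# test_auth_behavior.py: pin expected auth behavior so scheme drift gets caught.
import os
import requests

BASE_URL = "https://api.example-museum.com"  # hypothetical
ADMIN_TOKEN = os.environ["MUSEUM_ADMIN_TOKEN"]  # supplied by the CI environment

def test_public_listing_needs_no_credentials():
    response = requests.get(f"{BASE_URL}/special-events", timeout=10)
    assert response.status_code == 200  # fails if the API suddenly demands auth

def test_event_creation_rejects_anonymous_callers():
    response = requests.post(f"{BASE_URL}/special-events", json={}, timeout=10)
    assert response.status_code in (401, 403)  # fails if protection is dropped

def test_event_creation_accepts_admin_token():
    response = requests.post(
        f"{BASE_URL}/special-events",
        json={"name": "Sample Night Tour", "dates": ["2024-10-29"], "price": 10},
        headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
        timeout=10,
    )
    assert response.status_code in (200, 201)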
4. Set Up More Dynamic Monitoring
Dynamic testing and monitoring can also help prevent sudden API changes from disrupting things. By tracking and logging API usage and performance, you can observe how your application uses the API and watch for unexpected behavior or deviations from the norm that should be flagged for a response.
The ELK stack (Elasticsearch, Logstash, Kibana) is a common approach to collecting and storing metrics. Other API-specific monitoring tools include New Relic, Moesif, APImetrics, APIContext, or Treblle. Additionally, many API managers offer built-in monitoring at the API gateway level.
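Even without a full monitoring stack, the client side can emit a structured log line for every outbound API call, which a shipper like Logstash can then forward to Elasticsearch. The wrapper below is a minimal illustrative pattern, not any particular product's API:

# api_client_logging.py: log every outbound API call as structured JSON.
import json
import logging
import time
import requests

logger = logging.getLogger("museum_api_client")
logging.basicConfig(level=logging.INFO)

def logged_get(url, **kwargs):
    """Wrap requests.get and emit one JSON log line per call for later analysis."""
    start = time.perf_counter()
    response = requests.get(url, timeout=10, **kwargs)
    logger.info(json.dumps({
        "url": url,
        "status": response.status_code,
        "latency_ms": round((time.perf_counter() - start) * 1000, 1),
    }))
    return response

# Example usage (hypothetical endpoint):
# logged_get("https://api.example-museum.com/museum-hours")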
Breaking changes can also rear their ugly heads early on, stunting development progress. And sometimes you don't want to make real API calls during development, for data security or cost reasons. In these cases, an alternative is to simulate API responses during testing and development using virtualized or mocked instances of the APIs. The trick is ensuring these integrations still work against production endpoints when the time comes!
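In Python, for example, the responses library can stub out an API during development and testing so no real calls (or charges) are made. The endpoint and payload below are assumptions:

# test_with_mocked_api.py: develop against a simulated API, not production.
# Assumes: pip install responses requests
import requests
import responses

@responses.activate
def test_client_handles_event_listing():
    responses.add(
        responses.GET,
        "https://api.example-museum.com/special-events",  # hypothetical endpoint
        json=[{"eventId": "evt-1", "name": "Sample Night Tour", "dates": [], "price": 10}],
        status=200,
    )
    events = requests.get("https://api.example-museum.com/special-events").json()
    assert events[0]["name"] == "Sample Night Tour"

The same idea scales up to service virtualization tools; Prism, for instance, can serve mock responses directly from an OpenAPI description.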
5. Follow Developer Communication Closely
While it's not a technical testing procedure per se, keeping an eye on all developer communication helps track breaking changes. Changelogs often have valuable release notes that can help engineers stay on top of intricate API changes. Software providers that use semantic versioning, for instance, will typically support legacy API versions for some time and publicly announce their sunset dates in their developer portals.
Most API providers also set feature deprecation timelines and publicly announce them. Many go a step further and include this information in the responses of the API itself. Error messages may also contain valuable information that can help diagnose broken integrations. So, watch for these sorts of communications; it's a great idea to subscribe to the provider's newsletter or follow their developer-specific social media channels to stay in the loop.
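Some providers signal this in-band with the Deprecation and Sunset HTTP response headers, which a client or test suite can watch for automatically. A hedged sketch; the endpoint is an assumption, and not every provider sends these headers:

# check_deprecation_headers.py: warn when a response advertises deprecation.
import warnings
import requests

def get_with_deprecation_warning(url, **kwargs):
    response = requests.get(url, timeout=10, **kwargs)
    if "Deprecation" in response.headers or "Sunset" in response.headers:
        warnings.warn(
            f"{url} is marked deprecated; sunset: "
            f"{response.headers.get('Sunset', 'unspecified')}"
        )
    return response

# Example usage (hypothetical endpoint):
# get_with_deprecation_warning("https://api.example-museum.com/special-events")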
Alternatively, if you are the one developing and maintaining the API, be sure to keep documentation up to date. Share it with others, and provide a mechanism for collecting user feedback; this will help surface edge cases and ensure the API behaves well across platforms, devices, and environments. Lastly, API owners can follow this example API Deprecation Email to learn how to artfully craft developer messaging around changes.
Final Thoughts on Avoiding Breaking Changes
APIs are assets to digital business, becoming products in their own right. And part of having a functional product is ensuring consumers can use it correctly. That means ensuring a quality developer experience throughout the API lifecycle. Get that right, and the end-user experience can be sound as well.
Broken features are a surefire way to scare off potential end users. One study found that 88% of app users abandon apps when they encounter bugs and glitches. As APIs become ubiquitous, ensuring integrations don't introduce latency or broken functionality is intrinsically tied to retaining a positive user experience.
As we've covered above, a handful of methods exist for testing APIs for breaking changes. Specification-based testing, like OpenAPI drift detection and contract testing, checks whether the API behaves as expected according to the contracts the provider has published. Functional tests take this further, exercising runtime behavior to gain more accurate production insights. An ongoing habit of monitoring performance and watching for security vulnerabilities can uncover additional weaknesses that contribute to breaking changes.
All in all, responding to software revisions and changes can be cumbersome, requiring a multifaceted approach to address holistically. But by following some of the tips above, you'll be prepared to discover and respond to breaking changes in APIs before they undermine the faith end users have in your product.