
How to maintain REST API backward compatibility?

All minor changes to a REST API should be backward compatible. A service that exposes its interface to internal and/or external clients should always remain backward compatible between major releases. Releasing a new API version is a rare event: it usually means global breaking changes accompanied by serious refactoring or changes to business logic, models, classes, and requests. In most cases, changes are not that drastic and should still work for existing clients that haven't yet adopted the new contract.
So how do you ensure that a REST API doesn't break backward compatibility?


1. Tests

First of all, to be sure that your changes are still compatible with your existing clients, you should have a set of tests that will fail if a change introduces a backward compatibility problem.
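As a sketch, such a test can deserialize a live response into the old client model and assert that the fields old clients depend on are still present (xUnit-style; the endpoint URL and the SearchResult model are hypothetical):

```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;
using Xunit;

// Hypothetical contract test: deserialize the response into the OLD
// client model and fail if a field old clients rely on disappears.
public class SearchContractTests
{
    private readonly HttpClient _client = new HttpClient
    {
        BaseAddress = new Uri("https://localhost:5001")
    };

    [Fact]
    public async Task Search_StillReturnsFieldsOldClientsExpect()
    {
        var json = await _client.GetStringAsync("/api/search");
        var results = JsonSerializer.Deserialize<List<SearchResult>>(json);

        Assert.NotNull(results);
        // Old clients read "Name"; the test breaks if it is removed or renamed.
        Assert.All(results, r => Assert.NotNull(r.Name));
    }
}

// The response model exactly as the OLD clients see it.
public class SearchResult
{
    public int Id { get; set; }
    public string Name { get; set; }
}
```

Running such contract tests against every release candidate turns an accidental breaking change into a failing build instead of a production incident.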

2. Always add parameters, never delete them

Never delete existing optional or mandatory elements passed to or returned by the API.
If you've pushed an API version with a typo to production, don't delete the misspelled field! Add a new field with the correct name and make the API use the value from either of the two properties. Mark the field with the typo as obsolete and remove it in the next major API version.
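For example, a minimal sketch of keeping a misspelled field alongside its corrected replacement (the class and property names are hypothetical):

```csharp
using System;

public class Customer
{
    // Shipped with a typo; kept so existing clients keep working.
    [Obsolete("Misspelled; use Address instead. Will be removed in the next major API version.")]
    public string Adress { get; set; }

    // The corrected field that new clients should use.
    public string Address { get; set; }

    // Server-side: accept whichever property the client actually populated.
    public string EffectiveAddress => Address ?? Adress;
}
```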

3. Do not make optional parameters mandatory

Let's imagine we have a search method:
public List<object> Search(Filter filter);

public class Filter
{
    public bool? Active {get;set;}
}
Here the field "Active" is optional. When "Active" is set, the client gets either a list of active or a list of inactive records. If "Active" is not set, the client gets a list of all existing records (active and inactive).

Now let's change the Filter class to this:
public class Filter
{
    public bool Active {get;set;}
}
This will lead to a situation where old clients that do not pass the "Active" flag and expect a list of all records will get only a list of inactive records, because the default value "false" will be used when parsing JSON (or XML) on the server. This breaks the client's logic.
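To illustrate, here is a minimal sketch using System.Text.Json (the empty payload stands in for a request from an old client):

```csharp
using System;
using System.Text.Json;

// An old client sends a body without the "Active" property:
var json = "{}";

// With the non-nullable filter, the missing property silently
// becomes false, so the server now returns inactive records only.
var filter = JsonSerializer.Deserialize<Filter>(json);
Console.WriteLine(filter.Active); // false

public class Filter
{
    public bool Active { get; set; }
}
```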
How to solve this?
The best way is to create a new search method that accepts a new filter:
public List<object> SearchV2(NewFilter filter);

public class NewFilter 
{
    public bool Active {get;set;}
}
And mark the old method as obsolete (if you don't plan to support it in the future):

[Obsolete("This method shouldn't be used anymore. Use SearchV2 instead")]
public List<object> Search(Filter filter);

4. Be careful when adding new HTTP response codes to the API

This one is similar to the point about parameters, but a bit different.
If clients expect only HTTP responses 200 or 503 from the API, a new 501 response can lead to unexpected behavior on the client side.
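To see why, here is a sketch of a client written against the original contract (the endpoint and error handling are hypothetical):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

var client = new HttpClient { BaseAddress = new Uri("https://api.example.com") };
var response = await client.GetAsync("/orders");

// The client only knows about the codes that existed when it was written.
switch ((int)response.StatusCode)
{
    case 200:
        // Happy path: process the body.
        break;
    case 503:
        // Known failure mode: back off and retry later.
        break;
    default:
        // A newly introduced 501 lands here and, depending on how
        // defensive the client is, may crash or misbehave.
        throw new InvalidOperationException(
            $"Unexpected status code {(int)response.StatusCode}");
}
```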

5. Never delete or modify existing HTTP Response code behavior

If you use a 200 HTTP response to signal an exception (it happens) and have clients that rely on this response to detect that something went wrong, changing the method to use a 200 response to indicate success will break backward compatibility.

6. Change URLs wisely

If you're changing the URL of a method, you should either support both paths (the old one and the new one) or redirect from the old URL to the new one.
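Both options can be sketched in ASP.NET Core attribute routing (the route paths, action names, and _searchService are hypothetical; pick one option, not both, for the same path):

```csharp
// Option 1: one action serves both the old and the new route.
[HttpGet("api/v2/search")]
[HttpGet("api/search")] // legacy path still supported
public ActionResult<List<object>> Search([FromQuery] Filter filter)
{
    return Ok(_searchService.Search(filter));
}

// Option 2: permanently redirect the old URL to the new one.
[HttpGet("api/search")]
public IActionResult LegacySearch() => RedirectPermanent("/api/v2/search");
```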

Let's sum this up. If you only have to make small changes, creating a new API version is overkill and the changes can be handled in a more elegant way. To preserve backward compatibility, never remove parameters that your server or clients expect; instead, add new parameters or new methods, and try not to change the existing behavior of your API.

Hope you enjoyed reading!

