
Add versioning to your Dozer APIs

· 6 min read
Sagar

This blog post is your guide to seamlessly adding versioning to your Dozer APIs, ensuring that your APIs evolve gracefully while preserving backward compatibility and keeping your users and developers satisfied. To do this, we will modify a simple Dozer configuration file and deploy multiple versions of the same API to Dozer Cloud.

What is API versioning?

API versioning is a technique used in software development to manage and evolve Application Programming Interfaces (APIs) while ensuring backward compatibility and a smooth transition for existing clients and applications. It involves specifying and indicating the version of an API that a client should use when making requests.

Why do we need API versioning?

The need for API versioning arises when changes are made to an API that may impact how clients interact with it. These changes could include adding new features, modifying existing ones, or deprecating certain endpoints. Without versioning, any changes to the API could potentially break existing client applications that rely on the API.
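To make the breakage concrete, here is a small illustrative sketch (not Dozer-specific, with hypothetical field names): a client written against one response shape fails as soon as an unversioned API drops a field, while a versioned endpoint would have kept the old shape available.

```python
# Response shape served by version 1 of a hypothetical API.
v1_response = {"Ticker": "AAPL", "lowest_close_price": 181.42}

# Version 2 drops the field. Without versioning, v2 silently replaces v1.
v2_response = {"Ticker": "AAPL"}

def client_pinned_to_v1(response):
    # Existing client code written against the v1 contract.
    return response["lowest_close_price"]

print(client_pinned_to_v1(v1_response))  # works with the v1 shape

try:
    client_pinned_to_v1(v2_response)     # breaks on the unversioned change
except KeyError as exc:
    print(f"client broke: missing field {exc}")
```

With versioning, the v1 endpoint keeps serving the old shape until clients have migrated, which is exactly the workflow we will walk through below.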

Pre-requisites

Before we begin, make sure you have the following:

  • Dozer installed. For more installation instructions, visit the Dozer documentation.
  • Dozer Cloud account. If you don't have one, you can sign up for free here.
  • This guide uses a dataset stored in an AWS S3 bucket; you can follow along with the same setup or use any of the other connectors available in Dozer.

Setting up the Database

For this guide we will use a simple dataset: a single table stored as a CSV file inside an AWS S3 bucket. The CSV file can be downloaded from here and should be placed in a folder inside your S3 bucket.
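If you would rather not download the dataset, a small stand-in CSV can be generated locally. The column names (Ticker, Date, Close, Volume) are an assumption based on the SQL transformations used later in this guide; the real dataset may contain additional columns.

```python
import csv

# Hypothetical stand-in rows matching the columns the SQL below expects:
# Ticker, Date, Close, Volume. The real dataset may differ slightly.
rows = [
    ("AAPL", "2025-01-02", 182.50, 1_000_000),
    ("AAPL", "2025-01-03", 184.10, 1_200_000),
    ("MSFT", "2025-01-02", 390.25, 800_000),
    ("MSFT", "2025-01-03", 393.70, 950_000),
]

with open("stocks.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Ticker", "Date", "Close", "Volume"])
    writer.writerows(rows)
```

Upload the resulting stocks.csv to the S3 folder you will reference in the configuration file.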

Setting up version 1 of the API

Dozer configuration provides a version key that is defined at the top of the YAML config file. It is mandatory to define the version key in every configuration file.

Configuration file 1

app_name: aws-s3-sample
version: 1
connections:
  - config: !S3Storage
      details:
        access_key_id: "{{AWS_ACCESS_KEY}}"
        secret_access_key: "{{AWS_SECRET_KEY}}"
        region: "{{AWS_REGION_S3}}"
        bucket_name: "{{AWS_BUCKET_NAME}}"
      tables:
        - !Table
          name: stocks
          config: !CSV
            path: # Add the folder name here
            extension: .csv
    name: s3

sql: |
  -- Ticker Analysis
  SELECT Ticker, AVG(Close) AS average_close_price, SUM(Volume) AS total_volume
  INTO ticker_analysis
  FROM stocks
  WHERE Date >= '2025-01-01' AND Date < '2025-02-01'
  GROUP BY Ticker;

  -- Daily Analysis
  SELECT Date, AVG(Close) AS average_close_price, SUM(Volume) AS total_volume
  INTO daily_analysis
  FROM stocks
  GROUP BY Date;

  -- Highest Daily Close Price
  SELECT Date, MAX(Close) AS highest_close_price
  INTO highest_daily_close
  FROM stocks
  GROUP BY Date;

  -- Lowest Daily Close Price
  SELECT Date, MIN(Close) AS lowest_close_price
  INTO lowest_daily_close
  FROM stocks
  GROUP BY Date;

sources:
  - name: stocks
    table_name: stocks
    connection: s3

endpoints:
  - name: ticker_analysis
    path: /analysis/ticker
    table_name: ticker_analysis

  - name: daily_analysis
    path: /analysis/daily
    table_name: daily_analysis

  - name: highest_daily_close
    path: /analysis/highest_daily_close
    table_name: highest_daily_close

  - name: lowest_daily_close
    path: /analysis/lowest_daily_close
    table_name: lowest_daily_close
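Before deploying, it can help to sanity-check what a transformation should produce. Here is a quick Python mirror of the ticker_analysis query (AVG(Close) and SUM(Volume) per Ticker over January 2025), run against a few hypothetical sample rows:

```python
from collections import defaultdict

# Sample rows shaped like the stocks table: (Ticker, Date, Close, Volume).
rows = [
    ("AAPL", "2025-01-02", 182.50, 1_000_000),
    ("AAPL", "2025-01-03", 184.10, 1_200_000),
    ("MSFT", "2025-01-02", 390.25, 800_000),
]

# Mirror of the ticker_analysis SQL: AVG(Close), SUM(Volume) per Ticker,
# restricted to dates in January 2025.
groups = defaultdict(lambda: {"closes": [], "volume": 0})
for ticker, date, close, volume in rows:
    if "2025-01-01" <= date < "2025-02-01":
        groups[ticker]["closes"].append(close)
        groups[ticker]["volume"] += volume

ticker_analysis = {
    t: {
        "average_close_price": sum(g["closes"]) / len(g["closes"]),
        "total_volume": g["volume"],
    }
    for t, g in groups.items()
}
print(ticker_analysis)
```

This is only a local sketch of the aggregation logic; Dozer executes the SQL itself as part of the pipeline.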

After saving this file as <file-name>.yaml, log in to Dozer Cloud by running the following command with your credentials.

dozer cloud login --organisation_slug <organisation_name> --profile_name <profile_name> --client_id <client_id> --client_secret <client_secret>

Remember to replace the placeholders with the actual values, and keep these confidential.

Now, before uploading, let us verify that the configuration file is correct by running the following command.

dozer build -c <file_name>.yaml

This will build the configuration file locally and generate a dozer.lock file containing the hash of the configuration file.
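The exact contents of dozer.lock are Dozer's concern, but the general idea of pinning a build to a content hash can be sketched as follows (an illustration only, not Dozer's actual lock-file format):

```python
import hashlib

# Two slightly different configuration files (illustrative strings).
config_v1 = "app_name: aws-s3-sample\nversion: 1\n"
config_v2 = "app_name: aws-s3-sample\nversion: 2\n"

def config_hash(text: str) -> str:
    # Any change to the config, however small, changes the digest,
    # so a stored hash detects drift between builds.
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

print(config_hash(config_v1))
print(config_hash(config_v1) == config_hash(config_v2))  # False: configs differ
```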

Now we are ready to upload the configuration file to Dozer Cloud. After exporting your AWS credentials as environment variables, run the following command to upload the configuration file.

dozer cloud deploy -c <file_name>.yaml -s AWS_ACCESS_KEY -s AWS_SECRET_KEY -s AWS_REGION_S3 -s AWS_BUCKET_NAME

Congratulations! You have successfully uploaded your first version of the API to Dozer Cloud. You can now visit https://cloud.getdozer.io to view your API.

[Screenshot: Version 1 in the Dozer Cloud dashboard]

Setting up version 2 of the API

Now that we have successfully uploaded the first version of the API, let us make some changes to the configuration file and upload a second version. Suppose the newer version of our API no longer needs lowest_daily_close. We will remove both its endpoint and its SQL transformation from the configuration file, change the version key to 2, and deploy the new version.

app_name: aws-s3-sample
version: 2
connections:
  - config: !S3Storage
      details:
        access_key_id: "{{AWS_ACCESS_KEY}}"
        secret_access_key: "{{AWS_SECRET_KEY}}"
        region: "{{AWS_REGION_S3}}"
        bucket_name: "{{AWS_BUCKET_NAME}}"
      tables:
        - !Table
          name: stocks
          config: !CSV
            path: # Add the folder name here
            extension: .csv
    name: s3

sql: |
  -- Ticker Analysis
  SELECT Ticker, AVG(Close) AS average_close_price, SUM(Volume) AS total_volume
  INTO ticker_analysis
  FROM stocks
  WHERE Date >= '2025-01-01' AND Date < '2025-02-01'
  GROUP BY Ticker;

  -- Daily Analysis
  SELECT Date, AVG(Close) AS average_close_price, SUM(Volume) AS total_volume
  INTO daily_analysis
  FROM stocks
  GROUP BY Date;

  -- Highest Daily Close Price
  SELECT Date, MAX(Close) AS highest_close_price
  INTO highest_daily_close
  FROM stocks
  GROUP BY Date;

sources:
  - name: stocks
    table_name: stocks
    connection: s3

endpoints:
  - name: ticker_analysis
    path: /analysis/ticker
    table_name: ticker_analysis

  - name: daily_analysis
    path: /analysis/daily
    table_name: daily_analysis

  - name: highest_daily_close
    path: /analysis/highest_daily_close
    table_name: highest_daily_close

After modifying the configuration file, we can run the following command to build the configuration file locally.

dozer build -c <file_name>.yaml

Since we have already logged in to Dozer Cloud and exported the environment variables in the terminal, we can directly run the following command to upload the new version of the API.

dozer cloud deploy -c <file_name>.yaml -s AWS_ACCESS_KEY -s AWS_SECRET_KEY -s AWS_REGION_S3 -s AWS_BUCKET_NAME

Now if you visit https://cloud.getdozer.io you will see that the new version of the API has been uploaded and the older version is still available.

[Screenshot: Version 2 in the Dozer Cloud dashboard]

Deleting a specific version

Multiple versions of the API share the same infrastructure, so it is recommended to check the CPU and memory usage of the API before uploading a new version, to prevent the API and app pods from crashing. This can be done from the Infrastructure tab in the dashboard.

[Screenshot: Infrastructure tab]

Let's say users have successfully migrated from version 1 to version 2 of the API and we no longer need version 1. We can delete the older version by running the following command.

dozer cloud version delete <version>

Conclusion

As you can see, Dozer makes it extremely easy to add versioning to your APIs. You can now ship new features without worrying about breaking the existing applications that rely on your APIs, and you can use the same configuration file to deploy multiple versions of the same API.

For more information and examples, check out the Dozer GitHub repository and dozer-samples repository. Happy coding, Happy Data APIng! 🚀👩‍💻👨‍💻