This is the story of how Clearcode scoped and built the MVP for one of our internal projects: Tracker.
Every AdTech and MarTech platform is different.
Demand-side platforms (DSPs), for example, help advertisers purchase inventory from publishers on an impression-by-impression basis via real-time bidding (RTB).
Customer data platforms (CDPs), on the other hand, collect first-party data from a range of sources, create single customer views (SCVs), and push audiences to other systems and tools.
Although the functionality and goals of AdTech and MarTech platforms vary, they all have one thing in common: they all need a component that collects and delivers data from different sources (e.g. websites) to different systems (e.g. DSPs).
This component is known as a tracker.
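To make that concrete, here is a minimal sketch of what a single tracked event might look like in Go (the language we ultimately chose, as described later in this article). The field names here are our illustration, not the tracker's actual schema:

```go
package tracker

import "time"

// Event is a hypothetical shape for a single tracked event;
// the real tracker's schema may differ.
type Event struct {
	Type      string            `json:"type"`      // e.g. "impression", "click", "conversion"
	Timestamp time.Time         `json:"timestamp"` // when the event occurred
	UserID    string            `json:"user_id"`   // anonymous visitor identifier
	PageURL   string            `json:"page_url"`  // the source of the event, e.g. a website
	Params    map[string]string `json:"params"`    // extra key-value data for downstream systems
}
```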
> "Our tracker can be used to collect event data for AdTech & MarTech platforms."
>
> Krzysiek Trębicki, project manager of Tracker
One of our development teams completed an internal research and development project to build a tracker that can collect a range of events.
Our project focused on building a tracker for a DSP, but it can be adapted to any AdTech or MarTech platform that needs to collect event data.
The main goal of the project was to build a tracker with core functionality, example extensions, example deployment scripts, and documentation, and to allow the tracker to be integrated with other components, such as analytics tools and reporting databases.
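The case study doesn't spell out the extension mechanism, but one idiomatic way to keep a tracker's core small while still feeding systems like analytics tools and reporting databases is an interface-based sink. The sketch below builds on the hypothetical Event type above; the Sink and fanOut names are ours:

```go
package tracker

// Sink is a hypothetical extension point: anything that can receive
// collected events, such as an analytics tool or a reporting database.
type Sink interface {
	// Deliver hands a batch of events to the downstream system.
	Deliver(events []Event) error
}

// fanOut keeps the tracker core agnostic about where events end up:
// new destinations are added by implementing Sink, not by changing core code.
func fanOut(events []Event, sinks []Sink) {
	for _, s := range sinks {
		// A production tracker would also handle errors, retries, and batching here.
		_ = s.Deliver(events)
	}
}
```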
With our tracker project, we followed the same development process that we apply to all our AdTech and MarTech development projects for our clients.
Here’s an overview of the development process we followed when building the tracker:
The goal of the Minimum Viable Product (MVP) Scoping phase was to define the scope of the project and select the architecture and tech stack.
We achieved this by doing the following:
We started the project by creating a story map.
Based on the results from the story mapping sessions, we created a list of functional requirements for the tracker.
The functional requirements relate to the features and processes of the tracker.
At a minimum, we identified that the tracker would need to collect event data from different sources and deliver it to other systems and tools.
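As an illustration of the collection side, a tracker typically exposes a lightweight HTTP endpoint that records the event and responds immediately, often with a transparent 1x1 pixel. The sketch below is a generic example, not the project's actual endpoint:

```go
package main

import (
	"log"
	"net/http"
	"time"
)

// pixel is a transparent 1x1 GIF, a common response for tracking requests.
var pixel = []byte{
	0x47, 0x49, 0x46, 0x38, 0x39, 0x61, 0x01, 0x00, 0x01, 0x00,
	0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0xFF, 0xFF, 0xFF, 0x21,
	0xF9, 0x04, 0x01, 0x00, 0x00, 0x00, 0x00, 0x2C, 0x00, 0x00,
	0x00, 0x00, 0x01, 0x00, 0x01, 0x00, 0x00, 0x02, 0x02, 0x44,
	0x01, 0x00, 0x3B,
}

func trackHandler(w http.ResponseWriter, r *http.Request) {
	// Read event attributes from query parameters, e.g.
	// /track?type=impression&page=https://example.com
	q := r.URL.Query()

	// Record the event (here it's just logged; a real tracker
	// would hand it off to a delivery pipeline).
	log.Printf("event: type=%s page=%s at=%s",
		q.Get("type"), q.Get("page"), time.Now().UTC().Format(time.RFC3339))

	// Respond quickly so the tracker never slows the page down.
	w.Header().Set("Content-Type", "image/gif")
	w.Header().Set("Cache-Control", "no-store")
	w.Write(pixel)
}

func main() {
	http.HandleFunc("/track", trackHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```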
The non-functional requirements of the project aren't about what the tracker does; they concern the performance, scalability, security, delivery, and interoperability of the tracker.
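Requirements like these tend to surface directly in server configuration. As a simple illustration (the timeout values below are ours, not the project's actual settings), Go's http.Server lets you bound latency and resource usage:

```go
package main

import (
	"log"
	"net/http"
	"time"
)

func main() {
	// A stub endpoint so the server has something to serve.
	http.HandleFunc("/track", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusNoContent)
	})

	srv := &http.Server{
		Addr:         ":8080",
		Handler:      nil,              // use the default mux registered above
		ReadTimeout:  2 * time.Second,  // bound time spent reading slow clients
		WriteTimeout: 2 * time.Second,  // bound time spent writing responses
		IdleTimeout:  30 * time.Second, // recycle idle keep-alive connections
	}
	log.Fatal(srv.ListenAndServe())
}
```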
We selected the architecture and tech stack for the tracker project by benchmarking a number of candidate programming languages and technologies.
We researched different benchmark tools to help us select the right programming language for the tracker.
The main metrics we wanted to test were throughput (requests per second) and response latency under load.
The ideal benchmark tool needed to integrate easily with our continuous integration (CI) environment, either by running from the command line or via a Jenkins plugin.
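To make those metrics concrete, the sketch below is a deliberately naive load generator written in Go. It is not one of the tools we evaluated (and the target URL is hypothetical); it just shows what throughput and latency measurements boil down to:

```go
package main

import (
	"fmt"
	"net/http"
	"sync"
	"time"
)

func main() {
	const (
		workers  = 50  // concurrent clients
		requests = 200 // requests sent by each client
		// target is a hypothetical local tracker endpoint.
		target = "http://localhost:8080/track"
	)

	var (
		mu        sync.Mutex
		latencies []time.Duration
	)

	start := time.Now()
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < requests; i++ {
				t0 := time.Now()
				resp, err := http.Get(target)
				if err != nil {
					continue // a real tool would count errors too
				}
				resp.Body.Close()
				mu.Lock()
				latencies = append(latencies, time.Since(t0))
				mu.Unlock()
			}
		}()
	}
	wg.Wait()

	elapsed := time.Since(start)
	var total time.Duration
	for _, l := range latencies {
		total += l
	}
	fmt.Printf("throughput: %.0f req/s\n", float64(len(latencies))/elapsed.Seconds())
	if len(latencies) > 0 {
		fmt.Printf("average latency: %s\n", total/time.Duration(len(latencies)))
	}
}
```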
Below are the pros and cons of the benchmark tools that met our requirements.
Wrk is an HTTP benchmarking tool capable of generating significant load.
Wrk would be the best choice for simple performance and load testing; however, more advanced scenarios require writing test configurations in Lua, which can be time-consuming. It also doesn't support metrics visualization out of the box.
Locust is an easy-to-use, distributed load testing tool. It's written in Python and built on the Requests library.
k6 and Locust seemed comparably easy to set up and use, but we chose Locust because it lets us write tests in Python, a technology the team knows very well.
Gatling is a powerful open-source load testing solution.
Gatling was discarded due to its complexity and the Scala-based DSL used for configuring tests.

Once we had chosen the benchmark tools, we moved on to selecting the technologies we would test.
We decided to test several programming languages and technologies, including Nginx + Lua and Golang.
We decided to run initial benchmark tests on all the technologies using the benchmarking tools listed above, but later focused on running more tests with wrk2, Gatling, and Locust.
All three tools were configured to give us maximum flexibility in how we generated load.
The two best-performing technologies were Nginx + Lua and Golang. Their results were similar across all benchmarking tools, but we chose Golang due to current market needs and its popularity.
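Go's concurrency primitives are a big part of why it suits this kind of workload. A common pattern, shown below as an illustration rather than the tracker's actual internals (the channel size is an arbitrary example value), is to accept events on the hot path and hand them to a background worker over a buffered channel so request latency stays low:

```go
package main

import (
	"log"
	"net/http"
)

// events buffers collected data between the HTTP hot path and delivery.
var events = make(chan string, 10000)

func main() {
	// Background worker: drains the channel and "delivers" events.
	// A real tracker would push to a queue, analytics tool, or database.
	go func() {
		for e := range events {
			log.Println("delivering:", e)
		}
	}()

	http.HandleFunc("/track", func(w http.ResponseWriter, r *http.Request) {
		select {
		case events <- r.URL.RawQuery: // enqueue without blocking the request
		default: // channel full: drop rather than stall the page
		}
		w.WriteHeader(http.StatusNoContent)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```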
The MVP Scoping Phase took 1 sprint (2 weeks) to complete.
With the MVP Scoping Phase completed and our architecture and tech stack selected, we began building the MVP of the tracker.
We built the tracker in 5 sprints (10 weeks).
Contact our team and find out how our tracker can help your AdTech or MarTech platform collect data.