Diving Into IndigoTrace – A Technology Stack Overview

Proving data integrity is critical to successful trade finance, insurance, and supply chain processes. Some of the biggest challenges today come down to knowing who did what, when, where, and why. Faced with growing regulatory compliance requirements, businesses need to secure the critical processes they share with their customers and partners.

To solve these critical issues, we have launched IndigoTrace. IndigoTrace provides Plug & Play traceability for inter-business processes, and aims to make the technology provided by IndigoCore even more accessible.

Allowing for traceability and synchronization of events across any organization or system, IndigoTrace gives businesses the confidence to make sound decisions based on concrete data stored in cryptographic audit trails. It is designed to be user-friendly for everyone within a business, not just the IT department.

This blog post gives a general overview of the architecture and key technologies we used to build IndigoTrace. If you are unfamiliar with IndigoTrace, check out our previous post for a more product-focused introduction.

IndigoTrace is made up of a single-page web app and an API. A key technical goal was to build a scalable, performant API that our UI can rely on to deliver an exceptional experience to our end users. Below, we cover the tools that make IndigoTrace possible.

React All The Way

Our entire frontend is built with React, whose component model gives us great flexibility, encapsulation, and reuse. Redux manages the entire state of our application and considerably simplifies the separation between actions and components. Paired with Redux-DevTools, it becomes a powerful tool for debugging and time-travel through the app's state. To simplify navigation and manage URLs in a user-friendly way, we use React-Router, whose dynamic routing mimics the routing of a traditional web app.

There are some cryptography requirements on the frontend: we need to generate an Input's key pair, which is later used for signing payloads. For that purpose, we use the excellent tweetnacl library, which supports elliptic curve (ed25519) key generation out of the box.

All HTTP requests are handled by Axios, a promise-based client that is easy to test and mock and comes with built-in request cancellation.

On the testing side, create-react-app projects ship with the excellent Jest library, with built-in watch mode, code coverage reports, and mocks. Airbnb's Enzyme is a must-have for testing React components, making it easy to assert on, manipulate, and traverse them. It offers three main APIs: shallow rendering, static rendering, and full DOM rendering; which one to use depends on what you are trying to test. We also rely heavily on Sinon.js and Chai for spying, stubbing, and asserting.

We host our site on AWS S3 and leverage CloudFront and Route53 for routing and DNS purposes. All we have to do is upload our React bundle to S3 each time we want to deploy. Route53 will take care of DNS, directing our traffic to CloudFront, and CloudFront will take care of SSL and serving our UI from its global CDN. This provides a highly available UI, makes deployment simple and reduces the complexity of our infrastructure.

Micro Services for Everyone

The IndigoTrace API uses a microservice architecture written in Go and hosted on Kubernetes.

Our Micro Services

Our API is composed of microservices built with Go Micro. This lets us isolate each logical component of the API into a separate service that can scale and be versioned independently of the others. A single API gateway manages all of the JSON HTTP endpoints and routes them to the appropriate services. Each service with public endpoints exposes a JSON HTTP interface; these API microservices then use protobuf over RPC to call our core microservices, which handle the bulk of the logic. Protobuf keeps payloads small for internal calls between microservices. A single endpoint in an API microservice may call several different core microservices. This separation between API microservices and core microservices allows us to share functionality at a more granular level.
We use Viper to provide a consistent configuration interface across our services. Viper supports a number of ways to load configuration, which makes configuring everything from local to production environments a breeze.

To ensure our API is always working, we maintain an exhaustive test suite that heavily leverages the testify package to simplify assertions.

Our Data Layer

We have three main data stores. The first is Postgres, which stores all of the information about users, workflows, traces, and inputs. We interface with Postgres in three ways: Gorm, Casbin, and IndigoCore. Gorm is an ORM for Go with support for Postgres and several other databases; we have found it very useful for cleanly modeling objects such as users and workflows. Casbin is an authorization library that stores its ACL rules in Postgres. IndigoCore is used to model traces as Proof of Process maps.
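
As an illustration, a Gorm model is just an annotated Go struct; Gorm derives the Postgres schema from the struct tags. The fields below are hypothetical, not our actual user model, and the tag syntax shown is from a recent Gorm version:

```go
package main

import (
	"fmt"
	"time"
)

// User is a hypothetical Gorm model; the struct tags tell the ORM how to
// map each field to a Postgres column.
type User struct {
	ID        uint      `gorm:"primaryKey"`
	Email     string    `gorm:"uniqueIndex;not null"`
	Name      string    `gorm:"size:255"`
	CreatedAt time.Time // Gorm populates this automatically on create
}

func main() {
	u := User{ID: 1, Email: "ada@example.com", Name: "Ada"}
	// With Gorm this would be persisted via db.Create(&u); here we only
	// show the model itself.
	fmt.Println(u.Email)
}
```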

The second data store is Kafka, our message broker. Kafka lets our services relay messages to one another over pub-sub, and drives the WebSocket notifications API (built on gorilla/websocket) that feeds our frontend app.
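
Kafka and gorilla/websocket carry this in production; stripped down to Go channels, the underlying fan-out pattern looks roughly like this, with a single topic and each subscriber standing in for one open websocket connection:

```go
package main

import "fmt"

// Hub fans each published message out to every subscriber, the way our
// notifications service relays broker messages to open websockets.
type Hub struct {
	subscribers []chan string
}

// Subscribe registers a new consumer and returns its receive channel.
func (h *Hub) Subscribe() <-chan string {
	ch := make(chan string, 8) // buffered so one slow reader doesn't block publishing
	h.subscribers = append(h.subscribers, ch)
	return ch
}

// Publish delivers a message to every registered subscriber.
func (h *Hub) Publish(msg string) {
	for _, ch := range h.subscribers {
		ch <- msg
	}
}

func main() {
	hub := &Hub{}
	a := hub.Subscribe()
	b := hub.Subscribe()

	hub.Publish("trace 42: new event appended")

	fmt.Println(<-a)
	fmt.Println(<-b)
}
```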

The third data store is Bitcoin, which we use as a timestamping service for each trace event's fingerprint. To accomplish this we again use the SDK provided by IndigoCore. Each time a new event is added to a trace, it gets added to a batch. After a configured time period or batch size, a Merkle tree is computed over all of the events in the batch, and the Merkle root is embedded in an OP_RETURN output of a Bitcoin transaction. Once the block containing our transaction is confirmed, each event is updated in our database with a proof containing a link to the Bitcoin transaction and all of the hashes needed to recompute the Merkle root. Since Bitcoin acts as a public decentralized data store, each event can be independently verified by our customers after downloading its proof. This prevents any single party, including us, from breaking the integrity of any trace stored through our API. We plan to add more public blockchain networks in the future, such as ETH and LTC.

Our Infrastructure

The IndigoTrace API is hosted on Google Cloud Platform (GCP), where we make heavy use of Google Kubernetes Engine (GKE). We use Terraform to manage any GCP resources that are not managed directly by GKE, including the GKE clusters themselves, persistent volumes, etc. GKE creates and manages certain GCP resources on its own when you use the corresponding Kubernetes resources; one example is the Ingress resource, which creates a load balancer on GCP. We opt to keep these resources out of Terraform and let Kubernetes manage them.

To monitor all of our services we use Prometheus and Grafana. Prometheus collects all of our metrics and Grafana displays everything in different dashboards.

Each of our services is packaged as a separate Docker image and referenced in a Helm chart that we use to deploy to GKE. Helm lets us deploy all of our services with a single command and structure our Kubernetes manifests in a succinct, reusable way.

We use Travis for continuous integration and delivery, for both the API and the UI, to ensure that our services work as intended. On every commit we build our services and run the respective test suites, which catches most bugs before they get merged into master. Any time we want to merge code into master, a pull request is opened on GitHub so the rest of the team can properly review the changes. Travis integrates quite nicely with GitHub, so reviewers can see whether a pull request is failing any tests.

We use Reviewable on top of GitHub to review pull requests and have found that it makes code reviews easier to manage. Every time we merge into master we deploy to our staging environment, where we can actually play with the services and catch any issues our tests might have missed. This process lets us push quality code quickly and keeps the entire team up to date on the code base.

Putting It All Together

The choices we’ve made have been guided by best practices and the latest technologies. The stack we’ve built is efficient, scalable, easily deployable and highly performant. This has allowed us to deliver the highly reliable product that our customers count on to make strong business decisions.

We are currently running a Beta Program for IndigoTrace with a few hand-picked customers. If you would like to be part of it, feel free to sign up on our website. Otherwise, we plan on releasing the product to everyone by the end of summer 2018, so stay tuned!