DevOps Services
At Cloudsprint we believe DevOps should be done right, and doing it right is all about demonstrating impact and return on investment (ROI). Working together with our customers, we demonstrate that ROI while helping you measure and enhance the productivity of your DevOps practice teams. The case study below provides an example.
The Situation
This major Gov.UK department faced several challenges in managing its enterprise data, which was held in individual 'islands'. The department's data and technical directorates were working to join up this data, and it made sense to solve the problem centrally rather than have each team tackle the complex data issues on its own. Because of the existing silos, many of the department's services held their own copy of the same data, managed and manipulated for their own purposes, and this duplication compounded the data management challenge. It was estimated that over 120 systems across the department held the same or similar data. These copies needed to be synchronised regularly to avoid data quality issues, and managed so that the department did not incur compliance risks. To address these challenges, we were engaged to run a discovery to select a new central enterprise data platform and to deploy and manage that platform using modern DevOps practices.
The Opportunity
The technical discovery presented a major opportunity to look at the data problems afresh. At Cloudsprint we always put users at the forefront. For the user research we chose a qualitative approach in order to gain an in-depth understanding of how each business team used data, and of the sources of their pain points, frustrations and workarounds. The findings were used to build a persona for each business team and a matrix of requirements against each team and persona. Candidate data platforms were then assessed against those requirements. The goal was to build and manage the chosen platform using automation and modern DevOps practices.
The Outcome
Following the discovery, and working with the department's enterprise and data architecture community, the third-party Confluent products were chosen as the core of the enterprise data platform.
The platform was hosted on the department's strategic Microsoft Azure platform and was underpinned by the Confluent Enterprise Platform (a virtual-machine-based approach). The data platform was deployed onto 40+ Linux virtual machines using modern DevOps practices. The technologies used included the Confluent Enterprise Platform, Kafka brokers, ZooKeeper, Kafka Connect, the Kafka Schema Registry, Confluent Control Center, Terraform, Microsoft Azure API Management, Azure Active Directory Domain Services, LDAP authentication, Microsoft Azure Functions, Azure App Services, Azure SQL Database (platform as a service, PaaS), Azure Key Vault, Azure Storage, and several others. The infrastructure-as-code (IaC) language used on the project was Terraform. The various components were coded in C# (.NET Core) using the Confluent Kafka producer/consumer API.
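To illustrate the application side, here is a minimal sketch of a producer written against the Confluent .NET client (the Confluent.Kafka NuGet package), as used by the platform's C# components. The broker address, topic name and payload are illustrative placeholders rather than the department's actual configuration, and a real deployment would also carry the security settings required by the platform's LDAP-integrated access control.

using System;
using System.Threading.Tasks;
using Confluent.Kafka;

class ProducerSketch
{
    static async Task Main()
    {
        // Placeholder broker address; the real platform ran on 40+ VMs
        // behind an application gateway with LDAP-based access control.
        var config = new ProducerConfig
        {
            BootstrapServers = "broker1.example.internal:9092"
        };

        using var producer = new ProducerBuilder<string, string>(config).Build();

        // Publish one record and wait for the broker's acknowledgement.
        var result = await producer.ProduceAsync(
            "customer-records",
            new Message<string, string> { Key = "id-123", Value = "{\"source\":\"example\"}" });

        Console.WriteLine($"Delivered to {result.TopicPartitionOffset}");
    }
}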
We worked with the supplier's technical teams to perform capacity planning and to create the physical design of the platform. The platform's DevOps process was implemented using Azure DevOps and Terraform, and the Confluent Enterprise Platform configuration was managed with Ansible. Setup of all environments (dev, test, UAT and production) was scripted and automated. The platform's role-based access control was implemented via integration with the department's LDAP server, and inbound traffic to platform components was secured via Azure Application Gateway and Web Application Firewall (WAF).
Finally, a proof-of-concept MVP (minimum viable product) application was built and demoed to various stakeholders, highlighting optimal consumption of the new data platform. The MVP application, together with automated platform deployment and management via modern DevOps practices, demonstrated a reduction in lead time for complex data sourcing and a much better return on investment in the department's data projects.
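For completeness, the consuming side of the MVP application can be sketched in the same way with the Confluent .NET client. The group id, topic and broker address below are again illustrative assumptions rather than the project's actual values.

using System;
using Confluent.Kafka;

class ConsumerSketch
{
    static void Main()
    {
        // Placeholder connection details for this demonstration.
        var config = new ConsumerConfig
        {
            BootstrapServers = "broker1.example.internal:9092",
            GroupId = "mvp-demo-consumer",
            AutoOffsetReset = AutoOffsetReset.Earliest
        };

        using var consumer = new ConsumerBuilder<string, string>(config).Build();
        consumer.Subscribe("customer-records");

        // Poll for a handful of records, then leave the group cleanly.
        for (var i = 0; i < 10; i++)
        {
            var cr = consumer.Consume(TimeSpan.FromSeconds(5));
            if (cr == null) continue; // nothing arrived within the timeout
            Console.WriteLine($"{cr.Message.Key}: {cr.Message.Value} @ {cr.TopicPartitionOffset}");
        }

        consumer.Close();
    }
}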