Increasing cost-efficiency through automation & visualization of data analysis
Automated ETL routine that loads the data, Azure Data Catalog, Power BI reports
Our customer, a multinational law firm, has been using different transactional systems to track the civil claims of one client.
Since it was a mix of class action and civil claims comprising over 400,000 cases, the customer dealt with 1,000 lawyers on the plaintiff's side and more than 100 partner law firms. All of this information was tracked across different systems. The most important systems were based on a low-code platform, and new features and integrations were being rolled out every week.
The challenge for the customer was to get an overview of all data and systems to both create reports and consume data for settlements and automated pipelines.
Additionally, the customer needed to design and implement a data platform to consume the data for pipelines, extracts, and reports.
The data platform needed to extend automatically, on a weekly rhythm, whenever the main transactional system on the low-code platform was extended.
For the data catalog, interviews with technical and functional experts were conducted, and the gathered information was consolidated and maintained in Azure Data Catalog.
The metadata from the low-code platform was extracted and transformed into a Data Vault 2.0 data model for the data platform. We implemented an ETL routine that automatically loads the data coming from the low-code platform.
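The concrete loading logic is not part of this write-up, but the core of the Data Vault 2.0 pattern is deterministic hashing: a hub row gets a hash key derived from the business key, and a satellite row gets a hashdiff over its descriptive attributes so that changes can be detected on load. A minimal sketch, assuming a hypothetical case record from the low-code platform (all column and source names are illustrative):

```python
import hashlib
from datetime import datetime, timezone

def hash_key(*business_key_parts: str) -> str:
    """Hub hash key: MD5 over the delimited, normalized business key."""
    normalized = "||".join(p.strip().upper() for p in business_key_parts)
    return hashlib.md5(normalized.encode("utf-8")).hexdigest()

def hash_diff(attributes: dict) -> str:
    """Satellite hashdiff: MD5 over all descriptive attributes in a fixed order."""
    payload = "||".join(str(attributes[k]) for k in sorted(attributes))
    return hashlib.md5(payload.encode("utf-8")).hexdigest()

# Illustrative source record; field names are assumptions, not the real schema.
record = {"case_id": "C-2024-0815", "status": "open", "vehicle_vin": "WVWZZZ1JZXW000001"}

hub_row = {
    "hub_case_hk": hash_key(record["case_id"]),
    "case_id": record["case_id"],
    "load_ts": datetime.now(timezone.utc),
    "record_source": "lowcode-platform",
}

sat_row = {
    "hub_case_hk": hub_row["hub_case_hk"],
    "hash_diff": hash_diff({k: v for k, v in record.items() if k != "case_id"}),
    "load_ts": hub_row["load_ts"],
}
```

Because both hashes are deterministic, the ETL routine can compare an incoming record's hashdiff against the latest satellite row and insert a new row only when something actually changed.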
We also supported the customer by creating Power BI reports that give an overview of the cases and surface issues, such as two cases concerning the same vehicle. Furthermore, we implemented extract pipelines that allow specific brand partners to extract the data and use it for their own purposes.
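The duplicate check behind that report boils down to grouping cases by vehicle identifier and flagging any vehicle that appears in more than one case. A minimal sketch, assuming each case record carries a VIN (field names hypothetical):

```python
from collections import defaultdict

def find_duplicate_vehicles(cases: list[dict]) -> dict[str, list[str]]:
    """Map each VIN that appears in more than one case to the conflicting case IDs."""
    by_vin: dict[str, list[str]] = defaultdict(list)
    for case in cases:
        by_vin[case["vehicle_vin"]].append(case["case_id"])
    return {vin: ids for vin, ids in by_vin.items() if len(ids) > 1}

cases = [
    {"case_id": "C-001", "vehicle_vin": "VIN-A"},
    {"case_id": "C-002", "vehicle_vin": "VIN-B"},
    {"case_id": "C-003", "vehicle_vin": "VIN-A"},  # same vehicle as C-001
]
print(find_duplicate_vehicles(cases))  # {'VIN-A': ['C-001', 'C-003']}
```

In the actual reports this grouping was done over the data platform rather than in application code, but the logic is the same.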
Our customer is now able to visualize and analyze the cases more effectively and to provide the data to partners.
Additionally, the data platform serves as an audit archive, so the licensed transactional systems can be shut down once the cases are closed, with no further costs attached.
The data is archived for audit purposes for about 10 years. The data catalog sped up the process by helping people who create reports or request excerpts identify the relevant data and its correct context.
FRAMEWORK & TOOLS
For the data catalog, the Azure managed service called “Azure Data Catalog” was used.
For the data platform, the technologies the customer already maintained were used. These included Microsoft SQL Server, SSIS, and Power BI.
For real-time integration, Apache Kafka and Kafka Connect were used.
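The write-up does not detail the connector setup. As an illustration only, a Kafka Connect source connector streaming changes out of SQL Server is commonly configured with Debezium via a JSON payload like the one below; every name, host, table, and topic here is a hypothetical placeholder, not the customer's actual configuration:

```json
{
  "name": "cases-sqlserver-source",
  "config": {
    "connector.class": "io.debezium.connector.sqlserver.SqlServerConnector",
    "database.hostname": "sqlserver.internal",
    "database.port": "1433",
    "database.user": "connect_user",
    "database.password": "${file:/etc/kafka/secrets.properties:sqlserver-password}",
    "database.names": "claims",
    "table.include.list": "dbo.cases",
    "topic.prefix": "claims",
    "schema.history.internal.kafka.bootstrap.servers": "kafka:9092",
    "schema.history.internal.kafka.topic": "schema-history.claims"
  }
}
```

Such a payload is typically POSTed to the Kafka Connect REST API, after which downstream consumers receive change events for the listed tables in near real time.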