Case Study
UST helps global apparel manufacturer overcome data challenges with 90% reduction in development effort
OUR CLIENT
Founded more than a century ago, this American multinational apparel company produces clothing for some of the most recognizable brands in the industry. With more than 50,000 employees, the company generates nearly $10 billion in revenue annually.
THE CHALLENGE
Disorganized data led to poor business insights
Our client had siloed systems holding 20 years of historical data. Consolidating and analyzing data from the disparate systems was cumbersome, time-consuming, and fraught with issues, including duplicate or inaccurate information. While the data analysis team worked through these challenges, shadow IT teams and business units created their own workarounds, which led to inconsistent insights from the same KPIs. Often, poor business decisions became apparent only when inventory reached the marketplace. The company needed help from data engineering experts to design and implement a solution that could seamlessly consolidate the siloed data, so analysts could uncover reliable insights to help company leaders make better business decisions.
THE TRANSFORMATION
Enterprise data lake consolidated data and resolved analysis issues
After a thorough analysis of the company’s supply chain, inventory, HR, finance, customer, and retail data systems, UST designed and implemented an end-to-end Azure-based enterprise data lake that included Azure Data Lake Storage Gen2, Azure Data Factory, Azure Databricks, Azure Synapse Analytics, Azure Analysis Services, Azure Automation, PowerShell, Power BI Premium, Azure Purview, and Azure Monitor. The solution was designed with:
- Fully automated data loads—from disparate data sources into a consolidated data warehouse and Analysis Services cubes, with file-arrival waiting and automatic recovery processes
- A 3-year active data volume—consisting of six to eight TB of data, 80 to 100 billion rows, and 60 to 80 million daily delta rows, plus archived data for years four through seven
- More than 130 tables—with 10+ tables of at least five billion rows, 20+ tables with more than one billion rows, and 20 to 30 million delta rows per day for major fact tables
- Fully automated health check monitoring and alert mechanisms—to ensure high data quality and availability
- More than eight in-memory cubes on the S8 v2 tier—within its limits of 200 GB of memory and 640 query processing units (QPUs)
- Automated unit testing—with Azure DevOps to streamline CI/CD processes
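The "fully automated data loads" above were metadata-driven: in Azure Data Factory, one parameterized pipeline can serve many sources from a configuration table rather than one hand-built pipeline per source, which is what makes the pipelines "dynamic." The sketch below is illustrative only, not the client's code; the source names, paths, and retry settings are hypothetical, and a real implementation would use Data Factory activities and triggers rather than plain Python.

```python
# Illustrative sketch of a metadata-driven load with file waiting and
# automatic recovery (retry). All names and parameters are hypothetical.
import time
from dataclasses import dataclass


@dataclass
class SourceConfig:
    """One row of pipeline metadata: which source to load and how."""
    name: str
    path: str
    max_retries: int = 3


def wait_for_file(path, available, timeout_s=2.0, poll_s=0.5):
    """'File waiting': poll until the expected source file has landed."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if path in available:
            return True
        time.sleep(poll_s)
    return False


def load_with_recovery(cfg, available, loader):
    """Run one configured load, retrying on transient failures."""
    if not wait_for_file(cfg.path, available):
        return f"{cfg.name}: timed out waiting for file"
    for attempt in range(1, cfg.max_retries + 1):
        try:
            loader(cfg)  # in practice: a Databricks notebook or copy activity
            return f"{cfg.name}: loaded on attempt {attempt}"
        except RuntimeError:
            continue  # transient failure; automatic recovery retries
    return f"{cfg.name}: failed after {cfg.max_retries} attempts"


if __name__ == "__main__":
    sources = [SourceConfig("inventory", "/landing/inventory.csv"),
               SourceConfig("retail", "/landing/retail.csv")]
    landed = {"/landing/inventory.csv"}  # the retail file never arrives
    for cfg in sources:
        print(load_with_recovery(cfg, landed, lambda c: None))
```

The design point is that adding a new source means adding a `SourceConfig` row, not writing a new pipeline, which is where the reduction in development effort comes from.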
THE IMPACT
Data governance provided reliable information for more than 40 concurrent users
The company’s enterprise data lake supports more than 40 concurrent users as they analyze data to uncover insights across the organization. To keep costs in check, the solution uses horizontal CPU scaling, automated pauses for Analysis Services, and scale-downs of data warehouse units during off-peak periods. Dynamic data pipelines have contributed to a 90% reduction in development effort.
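The cost controls above amount to a scheduling policy: pause or scale down when demand is low, scale up at peak. The sketch below shows that policy shape only; the thresholds and DWU values are hypothetical, and in production such logic would run via Azure Automation runbooks against the Azure management APIs rather than as standalone Python.

```python
# Illustrative cost-control policy sketch. Thresholds and DWU values
# are hypothetical, chosen only to show the shape of the decision.
def scaling_decision(concurrent_users: int, business_hours: bool) -> dict:
    """Decide compute posture from current demand."""
    if concurrent_users == 0 and not business_hours:
        # Idle and off-hours: pause cubes, drop the warehouse to a floor tier.
        return {"analysis_services": "paused", "dwu": 100}
    if concurrent_users > 40:
        # Peak load: keep cubes running and scale the warehouse up.
        return {"analysis_services": "running", "dwu": 1000}
    # Normal working load.
    return {"analysis_services": "running", "dwu": 500}


if __name__ == "__main__":
    print(scaling_decision(0, business_hours=False))
    print(scaling_decision(55, business_hours=True))
```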
RESOURCES
https://www.ust.com/en/what-we-do/digital-transformation/data-analytics