Financial Services Institutions Explore New Strategies for Managing Data Sprawl As Enterprise Infrastructures Grow in Complexity

Lane F. Cooper, Editorial Director, BizTechReports

The extreme fragmentation of infrastructure and endpoint environments represents a new, permanent reality for financial institutions, which must determine how to ensure secure and effective access to structured and unstructured data scattered across a growing array of on-prem and cloud resources. It is a problem that erodes productivity and operational efficacy as executives and staff hunt for the most current and accurate information to support strategic decision-making and day-to-day operations.

According to analysts at IDC, the issue is growing in intensity as financial institutions face fierce competition from digital banks and insurers that leverage data to deliver innovative products and services. 

"Data sprawl represents a growing challenge for leaders in the highly regulated financial services sector because efforts to modernize enterprise operations are gaining momentum," says Jason Cassidy, CEO of Kitchener, Ontario-based Shinydocs, in a recent podcast interview for journalists.

Supporting a hybrid workforce and mobile-first customers across heterogeneous enterprise architectures is rendering moot the conventional data management strategies that call for data lakes and other centralized repositories, adds Jim Barnet, director of enterprise accounts at Shinydocs. The company has spent the past decade automating the process of finding documents, files and records, then aligning them with staff workflows and stringent industry regulations.

"It is becoming clear that users need a lot more flexibility and less complexity in terms of where content is stored as well as how they access and work with structured and unstructured data," he says. 

The accumulation of systems of record as a result of both organic growth and M&A activity has made it harder to establish a single version of the truth through data centralization.

"Given the trajectory of today's distributed computing environments, it is hard to imagine that the industry will be able to put the genie back in the bottle. As a result, important questions are being raised about how to achieve governance goals and optimize operational processes as the volume of unstructured data grows along with the number of different repositories," adds Barnet. 

Operating in a Decentralized and Real-time AI Environment

The industry has responded by embracing emerging technologies -- including artificial intelligence and machine learning (AI/ML) -- to manage the complexity and constantly changing disposition of data resources. It is a proposition, however, that is easier to talk about than execute.

"By now, every financial services CIO has probably been through ten failed AI projects. And the reason why a lot of these efforts fail is because AI is often a hammer looking for a nail. The hardest part of AI -- and indeed of any records management modernization initiative -- is preparing all of the data to be used across a range of disparate applications. It cannot be implemented in a vacuum," says Cassidy. 

Comprehensive and systematic steps must be taken to assess institutional data and application landscapes in the context of strategic, operational and regulatory imperatives before deploying AI models. This raises the question of how to prepare data -- especially unstructured data.

History has demonstrated that satisfactory solutions are unlikely to come from data consolidation strategies that require massive content relocations, significant human intervention and disruption to many dependent processes. A better approach, suggests Cassidy, is to do the opposite.

"Don't move unstructured data anywhere. Instead, determine how to curate these resources effectively. Companies like Google, after all, do not force you to move your website onto Google. It just finds information and instantly provides access to the most relevant results," he says.

This can be replicated for enterprises by creating what Shinydocs calls "Big Indexes," which generate a knowledge graph of the information owned and accessed by financial institutions. 

"The first step is to create a content catalog using technology to scan hundreds of terabytes -- even petabytes -- of data scattered across the enterprise," says Barnet. "We deploy automation to create a directory of all the content stored across on-prem and cloud applications, email systems and collaboration platforms," says Barnet. 

Once the content catalog is built, Shinydocs automatically pre-calculates searches against the terms that matter most to the enterprise. The result is a rich, enterprise-wide distributed dataset. 
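Extending the same hypothetical example, the sketch below pre-computes which cataloged documents mention each term on a fixed list of enterprise terms, so that answering those questions later is a simple lookup rather than a fresh scan. The term list, file handling and output format are assumptions for illustration only, not a description of Shinydocs' implementation.

```python
# Hypothetical illustration of pre-computed term searches over the catalog built above.
# Produces a term -> [document paths] mapping so later queries are instant lookups.
import json
import sqlite3
from pathlib import Path

ENTERPRISE_TERMS = ["loan agreement", "beneficiary", "policy number"]  # assumed terms


def precompute_term_index(db_path: str = "content_catalog.db",
                          out_path: str = "term_index.json") -> dict:
    """Scan cataloged text files once and record which documents contain each term."""
    conn = sqlite3.connect(db_path)
    paths = [row[0] for row in conn.execute("SELECT path FROM catalog")]
    conn.close()

    index = {term: [] for term in ENTERPRISE_TERMS}
    for raw in paths:
        path = Path(raw)
        if path.suffix.lower() not in {".txt", ".csv", ".md"}:
            continue  # a real crawler would also extract text from PDFs, email, etc.
        text = path.read_text(errors="ignore").lower()
        for term in ENTERPRISE_TERMS:
            if term in text:
                index[term].append(raw)

    Path(out_path).write_text(json.dumps(index, indent=2))
    return index


if __name__ == "__main__":
    idx = precompute_term_index()
    # "Which documents mention 'policy number'?" is now a dictionary lookup.
    print(idx.get("policy number", []))
```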

"It runs behind on-prem firewalls and across cloud services so that staff and decision-makers can put real questions to the data. It is the critical first step for AI-enabled information management, governance, privacy, security modernization, migration, and digital transformation," concludes Cassidy.

###

EDITOR’S NOTE: To listen to the full audio interview with Shinydocs’ Jim Barnet and Jason Cassidy, CLICK HERE.
