What Does Big Data Mean For Banks?

Traditional data management strategies and storage technologies are holding back big data projects. Try this new approach.

InformationWeek Staff, Contributor

August 6, 2014


In a recent Saïd Business School study, 63% of banks recognized proficiency in big data as a competitive advantage. However, 91% indicated that they lack the key skills needed to execute effectively, and only 3% reported that their organizations had deployed big data initiatives on a continuous basis. Many banks are trying, but few appear to be succeeding.

Why are banks struggling?
When faced with the requirements of a new big data initiative, banks too often draw only on prior experience, reaching for familiar technologies and software development lifecycle (SDLC) methodologies for deployment.

Traditional technologies, particularly the industry's most common data stores (e.g., relational databases), were designed to enforce structure and optimize processing performance within a constrained hardware environment. As a result, many bank technologists are used to transforming data to meet these constraints, including aggregation to satisfy scalability limitations and normalization to satisfy schema restrictions.
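As a rough illustration (not drawn from the article), the pattern looks something like the sketch below: a strictly typed, normalized schema, followed by an aggregation step that rolls transaction-level detail up into daily summaries to satisfy scale limits. SQLite and the table and column names here are hypothetical stand-ins for a bank's warehouse.

```python
# Minimal sketch of the traditional pattern: rigid schema + aggregation.
# All names (transactions, daily_summary, etc.) are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")

# Rigid, normalized schema: every row must fit these exact columns and types.
conn.execute("""
    CREATE TABLE transactions (
        txn_id     INTEGER PRIMARY KEY,
        account_id INTEGER NOT NULL,
        txn_date   TEXT    NOT NULL,
        amount     REAL    NOT NULL
    )
""")

conn.executemany(
    "INSERT INTO transactions (account_id, txn_date, amount) VALUES (?, ?, ?)",
    [(1, "2014-08-01", 120.00),
     (1, "2014-08-01", -45.50),
     (2, "2014-08-01", 900.00)],
)

# Aggregation: collapse per-transaction detail into a daily summary table,
# the classic workaround for scalability limits in the warehouse.
conn.execute("""
    CREATE TABLE daily_summary AS
    SELECT account_id, txn_date,
           SUM(amount) AS net_amount, COUNT(*) AS txn_count
    FROM transactions
    GROUP BY account_id, txn_date
""")

for row in conn.execute("SELECT * FROM daily_summary ORDER BY account_id"):
    print(row)  # e.g. (1, '2014-08-01', 74.5, 2)
```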

Aggregating and normalizing data in this way can introduce several weaknesses (sketched in code after the list):

  • Rigid schemas leave little flexibility to respond to upstream and downstream data changes.

  • Data lineage may be lost after aggregation and summarization.

  • Data governance tends to weaken when responsibility for an extended, multi-stage data flow is split across several constituents.
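Continuing the hypothetical schema from the earlier sketch, the fragment below illustrates the first two weaknesses: a fixed column list rejecting a new upstream field, and an aggregate from which the underlying transactions can no longer be recovered.

```python
# Illustration of two weaknesses, using the hypothetical daily_summary table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE daily_summary (
        account_id INTEGER, txn_date TEXT, net_amount REAL, txn_count INTEGER
    )
""")
conn.execute("INSERT INTO daily_summary VALUES (1, '2014-08-01', 74.5, 2)")

# Weakness 1: an upstream feed adds a 'channel' field; the fixed column list
# cannot absorb it, so the load fails until the schema (and every dependent
# job downstream) is changed.
try:
    conn.execute(
        "INSERT INTO daily_summary VALUES (1, '2014-08-02', 10.0, 1, 'mobile')"
    )
except sqlite3.OperationalError as err:
    print("schema rejected new field:", err)

# Weakness 2: once detail rows are summarized away, the individual
# transactions behind net_amount = 74.5 are gone; lineage cannot be
# reconstructed from the aggregate alone.
row = conn.execute("SELECT net_amount, txn_count FROM daily_summary").fetchone()
print(f"summary says {row[1]} transactions netting {row[0]}, "
      "but which transactions? The aggregate no longer knows.")
```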

Read the rest of this story on Bank Systems & Technology.
