The term “big data” doesn’t refer only to the mounting volumes of information companies are collecting. It is just as much about variety and velocity: the massive amounts of unstructured data that must be stored, managed, cleaned, and then connected with other data or moved in near real time.
Still, one cannot overlook the issue of volume. Estimates suggest that financial and securities organizations are juggling around 3.8 petabytes per firm, while the banking industry, following close behind, contends with around 1.9 petabytes.
The variety of this data, and the speed at which it must move, are the real crux of the issue, and big banks are starting to see clearer paths as they look beyond traditional databases to advanced analytics platforms and frameworks like Hadoop.
To put the real challenges in some context, we take a look at a few examples of large financial and banking institutions that are hitting the upper limits of their traditional systems and looking beyond to new analytics and framework solutions.
Morgan Stanley’s Big Data Approach
As one of the largest global financial services organizations in the world with over $300 billion in assets under its care, Morgan Stanley keeps close tabs on new frameworks and tools to manage the complex information pools that back high-stakes decisions.
The financial services giant has been vocal about how it is solving the challenges of the industry, most recently by looking to the Hadoop framework.
The traditional databases and grid computing paradigms that served the financial giant for years were being stretched to their limits.
Like several other investment banks, Morgan Stanley turned to Hadoop as the framework of choice to support growing data size and, more importantly, data complexity and the need for speed. Adopting Hadoop allowed Morgan Stanley to “bring really cheap infrastructure into a framework,” install Hadoop on it, and let it handle the tasks.
The company now has a “very scalable solution for portfolio analysis.” At the core of Hadoop’s future at Morgan Stanley is scalability: it allows the firm to manage petabytes of data, which is unheard of in the traditional database world.
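The article doesn’t show Morgan Stanley’s actual jobs, but the map/reduce pattern underlying this kind of scalable portfolio analysis can be sketched in plain Python. The tickers, quantities, and prices below are invented for illustration; in a real Hadoop cluster, each partition would be a data block on a separate node.

```python
from functools import reduce

# Hypothetical positions, split into partitions. Each inner list stands
# in for a data block living on its own node in the cluster.
partitions = [
    [("AAPL", 100, 190.0), ("MSFT", 50, 410.0)],
    [("GOOG", 20, 170.0), ("AAPL", 30, 190.0)],
]

def map_partition(records):
    """Map step: each node values its local positions (quantity * price)."""
    values = {}
    for ticker, qty, price in records:
        values[ticker] = values.get(ticker, 0.0) + qty * price
    return values

def merge(a, b):
    """Reduce step: combine per-node aggregates into one portfolio view."""
    out = dict(a)
    for ticker, value in b.items():
        out[ticker] = out.get(ticker, 0.0) + value
    return out

# Map every partition independently, then reduce the partial results.
portfolio = reduce(merge, map(map_partition, partitions))
```

Because each map step touches only its own partition and the reduce step only merges small dictionaries, adding data means adding partitions (and nodes), not rewriting the analysis; that independence of the map step is what lets the same code scale from megabytes to petabytes.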
Bank of America Tackles Big Data
As one of the largest banks in the States, Bank of America has been in good company with others of the same ilk that are seeking to tap into Hadoop to manage large amounts of transaction and customer data.
Big data is expected to usher in a new era for businesses of all types, spawning a “second Industrial Revolution” driven by open source frameworks, including Hadoop, which has the potential to be as disruptive as Linux was 20 years ago.
Hadoop enables the bank to be a good custodian of cash, increase transparency, and drive positive change in the larger financial system.
Hadoop has fast emerged as the preferred choice for financial big data, offering the following advantages:
- Colocation of computation with data, which saves network bandwidth and speeds up calculations.
- Financial workloads require performing thousands, even millions, of calculations in a matter of seconds. Traditional solutions import data to a central server, which creates unnecessary bottlenecks on the network and interferes with other processes.
- Hadoop performs calculations locally, on the nodes where the data resides, and uses the network only to transmit the results, freeing bandwidth for more important processes.
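The bandwidth argument above can be made concrete with a toy sketch of data locality. The partition contents and per-value byte accounting below are illustrative assumptions, not measurements from a real cluster:

```python
# Hypothetical price observations, split across two "nodes" (partitions).
partitions = [
    [100.0, 101.0, 102.0],
    [99.0, 100.5],
]

# Naive approach: ship every raw record to a central server before
# averaging. Assume roughly 8 bytes per floating-point value.
bytes_shipped_naive = sum(len(p) for p in partitions) * 8

# Data-local approach: each node reduces its own records to a tiny
# (sum, count) summary, and only that summary crosses the network.
summaries = [(sum(p), len(p)) for p in partitions]
bytes_shipped_local = len(summaries) * 2 * 8

# Central combine step: merge per-node summaries into a global mean,
# identical to averaging the raw records directly.
total = sum(s for s, _ in summaries)
count = sum(c for _, c in summaries)
grand_mean = total / count
```

With only a handful of records the savings look modest, but the naive cost grows with the number of records while the data-local cost grows only with the number of nodes, which is the point of moving computation to the data.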
Hadoop’s differentiator is that it enables the same analyses on a much larger scale, yielding better results.
To sum it up, Hadoop and big data in financial services have been acknowledged by the world’s leading financial institutions as the way of the future.