Press "Enter" to skip to content

Financial services companies are starting to use the cloud for big data and AI processing

The financial sector has historically been nervous about letting its data go off premises, which has made it harder to scale. Now it's moving some data to the cloud to speed AI and data management.


Financial services companies continue to keep the majority of their mission-critical systems on premises, where they have direct control and can quickly recover systems if a failure occurs. Financial institutions also have a reputation for being transaction-driven rather than customer-centric.


SEE: Cloud data storage policy (TechRepublic Premium)

As marketplace competition increases, most now recognize the need to store and mine customer information, and to incorporate unstructured big data culled from the internet, demographics, and other sources that don't arrive in structured record formats.

Financial institutions mesh unstructured data with traditional structured data so they can perform analytics and artificial intelligence (AI) on a composite of customer information. In the process, they must find ways to store and scale a volume of data that can grow dramatically overnight.
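As a minimal sketch of what that meshing looks like in practice, the Python example below joins hypothetical structured account records with a simple feature derived from unstructured support-ticket text. The column names, data, and keyword-based "sentiment" check are all illustrative stand-ins, not any institution's actual pipeline:

```python
import pandas as pd

# Hypothetical structured records: one row per customer account.
accounts = pd.DataFrame({
    "customer_id": [101, 102, 103],
    "balance": [5400.00, 120.50, 9800.75],
    "monthly_txns": [42, 7, 15],
})

# Unstructured text from support tickets, to be reduced to a feature.
tickets = pd.DataFrame({
    "customer_id": [101, 102, 103],
    "ticket_text": ["great service", "card was declined twice", "app keeps crashing"],
})

NEGATIVE_WORDS = {"declined", "crashing", "error", "failed"}

def is_negative(text: str) -> bool:
    # Toy stand-in for real NLP sentiment scoring.
    return any(word in text.lower() for word in NEGATIVE_WORDS)

tickets["negative_ticket"] = tickets["ticket_text"].apply(is_negative)

# Mesh the two datasets into one composite view for analytics or AI.
composite = accounts.merge(tickets[["customer_id", "negative_ticket"]], on="customer_id")
print(composite)
```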

The quandary they face is that a broad scale-out of data storage and processing isn’t workable with most on-premises systems, which might take a year or longer to budget for and acquire. This is where using more scalable cloud services can deliver value.
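That elasticity is easy to see with cloud object storage, which accepts new data without any up-front capacity planning. The sketch below uses AWS's boto3 library; the bucket name and file paths are placeholders:

```python
import boto3

# Object storage scales on demand: no hardware to budget for or acquire
# up front. Bucket and key names here are hypothetical placeholders.
s3 = boto3.client("s3")

s3.upload_file(
    Filename="customer_events.parquet",   # local big data extract
    Bucket="example-fin-data-lake",       # hypothetical bucket
    Key="raw/2020/11/customer_events.parquet",
)
```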

“Every organization that has data at scale can benefit from the economies of scale in the cloud,” said Paul Scott Murphy, VP of product management, big data/cloud at WANdisco, a provider of distributed computing services. “The cloud has been a natural home for huge swathes of data that financial institutions use every day.” 

SEE: Top cloud providers in 2020: AWS, Microsoft Azure, and Google Cloud, hybrid, SaaS players (TechRepublic)

Murphy said that many banking customers he works with start with cloud data hosting of their customer-related data.

"There is a natural affinity for those datasets to be used along with data from CRM, ad tech, and service desk applications that are cloud-native," he said. "We've seen that once these companies establish the necessary controls, security, and governance for the data they hold in the cloud, they expand to much broader types of big data, such as transactional information for real-time risk analysis, data aggregation and analytics to support loan origination decisioning, and customer and market intelligence to offer personalization."

Key factors moving more big data to the cloud include digital transformation, in which cloud hosting is playing a larger role; agility and cost optimization, which let companies provision data capacity far faster in the cloud than in their own data centers, and in a pay-per-use model that trims operational and capital expense overhead; and an expanding array of AI and machine learning (ML) services and expertise that cloud providers can offer.

Together, these forces enable financial companies to bring big data and AI applications to market sooner than if they had to do it on their own.

SEE: Artificial intelligence can take banks to the next level (TechRepublic)

Critical backups and recoveries of big data are also a concern.

“One way we have addressed data backup in the cloud is through a live data strategy,” Murphy said. “With this strategy, data changes are replicated immediately as they occur. This approach enables data to remain consistent across multiple environments and enables near zero recovery point objective (RPO) and recovery time objective (RTO) targets.”
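Murphy doesn't detail the replication internals here, but the pattern he describes can be sketched as a loop that applies every change to the replica as soon as it is produced, rather than in nightly batches. The toy below illustrates the idea only; it is not WANdisco's implementation:

```python
import queue
import threading

# Toy "live data" replication: changes stream to the replica as they occur,
# so the replica lags by at most the in-flight queue (near-zero RPO).
changes: queue.Queue = queue.Queue()
primary: dict = {}
replica: dict = {}

def write(key, value):
    primary[key] = value
    changes.put((key, value))   # replicate immediately, not in a nightly batch

def replicator():
    while True:
        key, value = changes.get()
        replica[key] = value    # apply the change to the secondary environment
        changes.task_done()

threading.Thread(target=replicator, daemon=True).start()

write("acct:101", {"balance": 5400.00})
write("acct:102", {"balance": 120.50})
changes.join()                  # wait until the replica is consistent
assert replica == primary
```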

Murphy recommends that companies consider a live data backup strategy because it lowers the risks and costs of legacy data migration approaches and enables continuous data migrations, which are needed in a hybrid cloud environment that uses both on-premises and cloud-based data and applications.

“I also suggest that companies take a data-first approach for their Hadoop big data migration to the cloud,” he said. “Get the data there quickly so your data scientists can begin to experiment with the new system immediately for a faster time to value.”
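A data-first Hadoop migration often begins by bulk-copying HDFS datasets to cloud object storage with Hadoop's own distcp tool, so analysts can start working on the data before the rest of the stack moves. In the sketch below, the namenode address and bucket are placeholders:

```python
import subprocess

# Copy an HDFS dataset to S3 (via the s3a connector) so data scientists can
# begin experimenting in the cloud right away. All paths are placeholders.
subprocess.run(
    [
        "hadoop", "distcp",
        "-update",                                # copy only changed files on re-runs
        "hdfs://namenode:8020/data/customers",    # on-premises source
        "s3a://example-fin-data-lake/customers",  # cloud destination
    ],
    check=True,
)
```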

Finally, don't be too hasty about decommissioning all or part of your on-premises systems. If your long-term strategy is to move more data and applications to the cloud, do it gradually and test thoroughly.


Source: TechRepublic