Press "Enter" to skip to content

Can’t hire fast enough? How to deliver value from your AI with a small data team

Enterprise data science teams are having trouble hiring data scientists and machine learning engineers fast enough. But there is no need to wait until your team is fully staffed to start generating value from your AI. It’s just a question of bringing together automation and better tooling so that even small data science teams can have an outsized impact.


In engaging with chief data and analytics officers, one of the most common themes I hear is the difficulty of hiring and retaining data scientists and machine learning engineers. Major data science initiatives come to a halt when a principal data scientist leaves, or efforts to integrate AI throughout the enterprise slow to a trickle of projects because ML engineers can't be hired quickly enough to manage each model in production.

And this is a widespread issue. According to a 2021 Gartner survey, nearly two-thirds (64%) of IT executives identified lack of skilled talent as the biggest barrier to adopting emerging technologies like AI and machine learning. Hiring a data scientist takes 20% longer than filling the average IT job, and more than twice as long as the average US corporate job. Demand for ML engineers is even more intense, with job openings growing 30x faster than those for IT services as a whole.


Enterprises have poured billions of dollars into AI (investments that have included the expansion of data teams) based on promises of increased automation, personalized customer experiences at scale and more accurate predictions to drive revenue. But so far there has been a massive gap between AI's potential and the outcomes, with only about 10% of AI investments yielding significant ROI.

For CDAOs, this raises a key question: How can they deliver value from applied AI/ML throughout the enterprise in the near term with only a small number of data scientists and possibly even fewer ML engineers? Put another way, can small data science teams start driving outsized value without waiting months or years for a fully staffed, fully trained team?

Instead of waiting until they fill these roles, MLOps teams need to find a way to support more ML models and use cases without a linear increase in data science headcount. So how do they do that? Some tips include:

Recognize the strengths of existing team members

Different team members bring different strengths and skills to the team. Data scientists excel at turning data into models that help solve business problems and inform business decisions. But the expertise required to build great models isn't the same skill set needed to push those models into the real world with production-ready code, and then monitor and update them on an ongoing basis. On the other side, ML engineers integrate tools and frameworks to ensure the data, data pipelines and key infrastructure work together cohesively to productionize ML models at scale.

But while data scientists may be happy to hand their models over to the MLOps team for a production rollout, this process may not be efficient. Because data scientists and MLOps engineers don't speak the same language, and don't work or think the same way, there can be time-consuming bottlenecks as one group tries to articulate a requirement (e.g., the data preprocessing a model needs) and the other tries to satisfy it.

Additionally, if a model starts misbehaving or becoming less accurate in production, how do the ML engineers detect the issue and alert data scientists that the model might need to be retrained? Diagnosing the problem can be a team effort: Is it an error in the production stack, or is something wrong with the model? This can lead to the same communication and coordination bottlenecks seen during deployment, as data scientists struggle to gain visibility into their models within the production stack.
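As a rough illustration of the kind of monitoring logic that has to live somewhere in the production stack, the minimal Python sketch below compares recent accuracy against a deployment-time baseline and flags the model for review. The threshold, the metric and the print-based alert are illustrative assumptions, not a recommendation of any particular tool.

```python
# Minimal sketch of production accuracy monitoring (hypothetical thresholds and alerting).
from dataclasses import dataclass

@dataclass
class AccuracyMonitor:
    baseline_accuracy: float          # accuracy observed at validation/deployment time
    max_relative_drop: float = 0.10   # flag if accuracy falls more than 10% below baseline

    def check(self, labels: list[int], predictions: list[int]) -> bool:
        """Return True if the model should be flagged for a retraining review."""
        if not labels:
            return False
        current = sum(y == p for y, p in zip(labels, predictions)) / len(labels)
        drop = (self.baseline_accuracy - current) / self.baseline_accuracy
        return drop > self.max_relative_drop

# Example: delayed ground-truth labels arrive and are compared against logged predictions.
monitor = AccuracyMonitor(baseline_accuracy=0.92)
recent_labels = [1, 0, 1, 1, 0, 1, 0, 0]
recent_preds  = [1, 1, 0, 1, 0, 0, 0, 1]
if monitor.check(recent_labels, recent_preds):
    print("ALERT: accuracy drop exceeds threshold; ask the data science team to review the model.")
```

Whatever form this takes in practice, the point is that both teams need a shared, automated signal for "this model needs attention," rather than relying on ad hoc handoffs.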

Avoid repeating the mistakes of cloud adoption

Ten years ago, IT infrastructure teams sought to build their own private clouds. These private clouds took longer and cost more to build than expected, required more resources to maintain, and lagged the public clouds on the latest security and scaling capabilities. And instead of investing in core business capabilities, these enterprises ended up sinking significant time and headcount into infrastructure.

Many enterprises are now repeating that same do-it-yourself approach with MLOps. The most common way of putting ML into production is a custom solution cobbled together from various open source tools like Apache Spark.

These solutions are often inefficient (as measured by inferences run per unit of compute and time) and, in particular, lack the observability needed to test and monitor the ongoing accuracy of models over time. They are also too bespoke to provide scalable, repeatable processes across multiple use cases in different parts of the enterprise.

Hire for what matters and automate everything else

To that end, CDAOs need to build the data science capabilities that are core to the business, but invest in technologies that automate the rest of MLOps. For example, a retail financial services company might find value in hiring individual data scientists with industry expertise in each subvertical, such as insurance, credit cards and home loans, to create more granular customer risk profiles by line of business. But there is no similar business gain from hiring dedicated ML engineers for each line of business; in fact, it increases cost and decreases productivity. Instead, the business is better served by a standardized platform for deploying and managing ML models in production that is agnostic to the team that developed the model and the model-building frameworks used.
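To make the idea of a framework-agnostic platform concrete, here is a minimal sketch of the kind of common interface such a platform might expose, so that every line of business deploys and manages its models the same way regardless of how they were built. The class and function names are hypothetical, not an actual product API.

```python
# Illustrative sketch of a framework-agnostic deployment contract (names are hypothetical).
from typing import Protocol, Sequence

class DeployableModel(Protocol):
    """The platform only cares that a model exposes predict(), not how it was trained."""
    def predict(self, rows: Sequence[Sequence[float]]) -> Sequence[float]: ...

def deploy(model: DeployableModel, name: str) -> None:
    """Stand-in for the platform's deploy step: package, register and serve the model."""
    print(f"Deploying '{name}' behind the standard predict() contract")

class ConstantRiskModel:
    """Stand-in for any trained model artifact, whatever framework produced it."""
    def predict(self, rows):
        return [0.5 for _ in rows]

# Each line of business wraps its own model once; deployment and monitoring stay uniform.
deploy(ConstantRiskModel(), name="credit-card-risk")
```

The design choice being illustrated is simply that the deployment surface stays constant while the models behind it vary, which is what lets a small MLOps function support many teams.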

Yes, this is the common “build vs. buy” dilemma, but this time the right measure isn’t solely OpEx costs, but rather how quickly and effectively your AI investments permeate the enterprise, whether by generating new revenue through better products and customer segments or by cutting costs through greater automation and decreased waste.

While hiring in data science and MLOps will continue to be difficult, CDAOs can start delivering immediate value from their AI/ML with even a limited team of data scientists. The primary blocker will be the belief that “we need to build this all in-house.” By understanding the different functions required to build and operationalize AI/ML, and then identifying those that can be automated through best-of-breed tooling, the CDAO organization can punch well above its (headcount) weight class.

Cristina Morandi, director of customer success at Wallaroo

Cristina Morandi is the director of customer success at Wallaroo and has spent her career supporting Fortune 500 companies as well as government entities through cutting-edge technologies. She helped Datalogue transform from an early-stage start-up to a NIKE-owned enterprise data integration product driven by ML and automation. In addition, she spent eight years dedicated to one of the largest environmental and human-use monitoring initiatives in history as the Information Management Lead and as a GIS contractor for BP America.
