AI in Private Equity: Why There's No Late-Mover Advantage
Given the buzz surrounding artificial intelligence (AI), one might think private equity (PE) funds and their portfolio companies would be further along in their adoption journeys. Yet recent surveys show that just 26% of general partners (GPs) have implemented at least one AI function at their investment funds – and only 13% of middle-market companies have embedded AI into their operations.
While most PE players are at least exploring AI use, these statistics should be a wake-up call for fund managers and C-suites. After all, the costs of foundational AI models are coming down rapidly – in recent months the per-token price (a token is roughly a word of text) of OpenAI's GPT-4o model has been slashed by 85% to 90% – and out-of-the-box platforms make AI of all kinds increasingly accessible.
The benefits of adopting AI are clear, whether it takes the form of cost reductions, productivity improvements or other operational efficiencies across an array of functions. And organizations have the fuel needed to power these new tools. “The value-add your company has is its own data,” Rob McGillen, Chief Innovation Officer, CBIZ Financial Services, said at an M&A East panel earlier this month. “That data is gold.”
In what follows, we’ll unpack that discussion and share what PE funds and their portfolio companies need to know to get started. The bottom line? There’s no late-mover advantage when it comes to AI, but organizations need to head off potential risks. Now is the time to dig in.
Key Use Cases
The business applications for AI are growing rapidly. For PE firms, most use cases revolve around translating dense tranches of information into clear outputs, learnings, alerts and predictions – whether that involves sourcing new investments (e.g., by analyzing global financial reports, news and company data), crafting investment theses and confidential information memos, automating compliance functions or conducting due diligence.
AI can also help with real-time decision-making. “Imagine you’re integrating a new manufacturing carve-out acquisition,” said McGillen. “With AI, you can take the knowledge of that business, put it in an AI model and see how it might change given potential market fluctuations or disruptions – for instance, the impact of a hurricane or sudden shifts in supply chain. That allows you to take action on the fly.”
So far, however, adoption is concentrated in a few areas. According to a 2024 Barnes & Thornburg survey, GPs have implemented AI mainly for market data analysis (37%), compliance (32%) and identifying targets (28%); fewer are using it for forecasting (23%) or due diligence (20%).
Portfolio companies have an even wider range of use cases – dependent, of course, on each one’s particular business model and sector. At a high level, though, AI can be used for everything from marketing and supply chain optimization to quality control, human resources decisions, customer service and financial engineering.
That doesn’t mean every company has taken advantage of these capabilities. As one panelist observed, the vast majority of middle-market companies right now spend most of their time on low-risk AI applications, like productivity and communications improvements. Very few focus on existential threats to their organizations and what their strongest competitors might be doing to harness the technology’s power to gain an edge in the marketplace.
Three Best Practices to Get Started
No matter where PE funds and portfolio companies are in their AI journeys, they would do well to keep these three best practices top-of-mind:
Focus on data quality. AI’s outputs are only as good as its inputs – and, as McGillen put it, “Good data is at the core of what makes AI effective.” That’s backed up by a 2023 survey of Chief Data Officers in which 46% identified “data quality” as their organization’s greatest challenge in realizing generative AI’s potential.
Curating, standardizing, cleaning and integrating unstructured data for use in AI applications is no simple feat. Organizations should start by understanding what makes good data in the first place. Consider the following:
- Can your data be used in supervised learning? In other words, is it labeled and tagged to teach algorithms how to recognize the relationship between inputs and outputs that can help solve specific business problems?
- Does your data reflect reality? For instance, if a retailer’s website shows that a product is available but it’s not actually on the shelf, that represents a gap in data quality.
- Is your AI model explainable, transparent, accurate and contextualized? Organizations need to clearly document how an AI model and its data inputs work, and for which purposes. This is particularly important as regulations intensify – the black box excuse won’t hold water with government officials concerned about bias, discrimination and privacy.
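The first two checks above lend themselves to simple automation. The following Python sketch is illustrative only – the deal records, field names and inventory figures are hypothetical, not drawn from the article:

```python
# 1) Labeled data for supervised learning: each record pairs inputs
#    (e.g., revenue growth, churn) with a known outcome label.
labeled_deals = [
    {"revenue_growth": 0.12, "churn": 0.05, "outcome": "win"},
    {"revenue_growth": -0.03, "churn": 0.20, "outcome": "loss"},
]

def is_labeled(records, label_field="outcome"):
    """Supervised learning requires every record to carry the target label."""
    return all(r.get(label_field) is not None for r in records)

# 2) Does the data reflect reality? Compare the system of record against an
#    on-the-ground count, as in the retailer shelf example.
def reality_gaps(system_counts, shelf_counts):
    """Return SKUs where recorded availability disagrees with the shelf."""
    return [sku for sku, n in system_counts.items()
            if (n > 0) != (shelf_counts.get(sku, 0) > 0)]
```

Checks like these are cheap to run on a schedule, which makes data-quality drift visible long before it degrades a model's outputs.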
Intentionality matters. To be successful, fund managers and executives must look past the hype of AI and into specific business use cases. “Just having a data set and expecting it to deliver something usable isn’t going to cut it,” said McGillen. “In applying AI, there needs to be an expected outcome to align the inputs properly. In essence, good data is really relative to what you want the end result to be.”
As an example, McGillen discussed how he built an AI chatbot for M&A East attendees to improve their experience by providing additional information about the conference. Using GPT-4o, he created a working bot that scraped the internet for details such as which firms registered for the event, the backgrounds of scheduled speakers and a detailed agenda.
“Ask yourself: What do you want to do with your data?” McGillen added. “Intention drives AI decisions and allows you to manage risks accordingly.”
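The core pattern behind a bot like McGillen's – grounding a general-purpose model in your own event data – is compact. A minimal sketch assuming the OpenAI Python SDK; `CONFERENCE_FACTS` and both function names are hypothetical illustrations, not details from the talk:

```python
# Hypothetical conference data the bot is allowed to answer from.
CONFERENCE_FACTS = """\
Event: M&A East
Topics: AI adoption in private equity; data quality; AI governance
"""

def build_messages(question: str) -> list[dict]:
    """Ground the model in curated event data rather than the open web."""
    return [
        {"role": "system",
         "content": "Answer attendee questions using only this event data:\n"
                    + CONFERENCE_FACTS},
        {"role": "user", "content": question},
    ]

def ask(question: str) -> str:
    # Imported lazily so the grounding logic above works without the SDK installed.
    from openai import OpenAI
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o", messages=build_messages(question))
    return resp.choices[0].message.content
```

Keeping the grounding data in a curated block (rather than whatever a scrape returns) is also what makes McGillen's intentionality point actionable: you decide up front what the bot should know and answer.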
Implement several layers of security and governance. It should come as little surprise that the increased use of AI tools expands the attack surface for cyber criminals and creates additional data privacy vulnerabilities. As a result, it’s critical that organizations develop several layers of security controls.
“Organizations should first understand where data used in AI applications resides,” said McGillen. “Is it in the cloud? An on-site server? What security and governance controls are in place for each of those systems? In the work CBIZ is doing now with clients around AI governance and cybersecurity, focusing on the controls and safety factors for data access and use is the clear first step to charting a mature AI future.”
Next, it’s important that AI users negotiate contract terms with their vendors. For cloud service providers, it may be about specifying which party is responsible (and liable) for data security. For large-language model vendors, this may entail establishing contractual boundaries that stipulate your data won’t be used to train the model for others, clarify who will own the outputs and govern whether personal or confidential information may be used.
Risk Versus FOMO
Executives and PE fund managers alike must strike the right balance when it comes to AI adoption. McGillen summarized: “On the one hand, PE funds and management have to account for the new risks associated with the technology, be they related to data privacy, regulatory issues or inaccurate outputs. On the other, they’re not immune to the fear of missing out – and rightly so, as leading companies and PE funds are already advancing their AI initiatives.”
Walking this line is no easy task. But not exploring AI isn’t a viable option, either. Taking a strategic approach – focused on data quality, intentional use cases and effective security and governance – can help business leaders take advantage of this transformational technology.
Connect with us to learn more about AI and private equity.