This is the fifth in a series of articles outlining how businesses can successfully capitalize on the adoption and integration of artificial intelligence (AI) while remaining vigilant regarding AI’s unique risks. Although AI has the potential to upend a broad spectrum of industries, its integration into the finance industry—and specifically the investment management industry—raises some of the most consequential considerations.
The economy, as a whole, fundamentally relies upon the health and reputation of the investment management industry. The significance of maintaining the professional responsibility and integrity of this industry cannot be overstated, particularly with respect to investment advisers and broker-dealers. It is therefore essential to our economy for investment advisers and broker-dealers to understand the inherent risks, adopt appropriate safeguards, and be cognizant of their underlying duties to clients when incorporating AI tools. These tools include predictive data analysis (PDA) and generative AI (GenAI) that power offerings like “robo-advisor” services.
In light of these considerations, this article seeks to outline the most prominent potential risks associated with certain AI technologies, good practices for mitigating those risks, and the inherent conflicts that may arise between business interests and investment advisers’ professional obligations.
Risks of PDA, GenAI, and Robo-Advisors
Advances in PDA and GenAI technologies allow investment advisers and broker-dealers to monitor general market conditions more efficiently and to identify previously imperceptible trends and inefficiencies. This new information allows firms and advisers to use more sophisticated investment advisory programs and trade execution to capitalize on lucrative investment opportunities for their own accounts and for the accounts of their clients.
With respect to PDA, firms develop PDA-based investment programs and trading systems/algorithms by amalgamating extensive amounts of historical data points from a wide range of sources, enabling these systems to “predict” what the future economic landscape will look like. With respect to GenAI, firms utilize GenAI systems to generate suggestions for new investment opportunities based on considerable amounts of data input, what the system “learns” through machine learning and large language models, and investors’ interactions. In short, PDA systems can be used to predict market conditions, and GenAI systems can be used to identify and capitalize on investment opportunities in light of market conditions.
While both PDA and GenAI systems present powerful predictive functionalities, their reliance on data creates an inherent risk of biased, inaccurate, and/or unsuitable outputs. Faulty or out-of-date data inputs, minimal testing, failure to benchmark systems, and incongruities between the system design parameters and the goals of clients can all contribute to these output risks.
The “robo-advisor” encapsulates many of these output risks. Robo-advisors are non-human investment advisers or investment advisory programs that use algorithms to advise clients on suitable investment opportunities. A robo-advisor can offer investors cost-effective investment advice and access to an expansive number of investment opportunities and strategies. Suitability is ultimately predicated on the quality of data input and the design and maintenance of the advisory program, including the underlying algorithm itself. Even when fully disclosed to investors, the design of an advisory program/algorithm can impact suitability because of the inherent conflict of interest risk between the designer of the advisory program/algorithm and the client.
The risk is that the advisory program/algorithm may recommend investment opportunities that are better suited for firms than individual investors. A robo-advisor’s program/algorithm can function with limited human direction, but only if properly monitored, maintained, and adjusted. So, while robo-advisors provide investors with flexible options for addressing their needs, transparency and accountability remain important principles for investment firms that have a fiduciary duty to their clients, even when advising through robo-advisors.
Risk Management
Risk management has always been a fundamental tenet of the investment management industry and, in particular, the regulatory framework applicable thereto. Adopting and implementing risk management controls in compliance regimes is a constant and continuing obligation of all industry participants to ensure the stability, security, and credibility of the investment management industry. Consequently, while the initial implementation of risk management controls should first address the creation of a GenAI or PDA investment program itself, such controls should thereafter continue to evolve to address risks arising from the manner in which such programs are operated.
Looking first at the development of GenAI and PDA investment programs, creating systems based on accurate, unbiased data input is a crucial first step in implementing appropriate risk mitigation controls. Since investors are not privy to the quality of the data used to train the systems underlying GenAI and PDA investment programs, their relationship with investment management professionals relies on trust and the duty of firms to input reliable and fair data. A GenAI model, without supervision, can begin to produce output that is not easily understandable or explainable, rendering it a “black box” system. A PDA system, by contrast, is only as good as its inputs: with biased and inaccurate data, the only conclusions it can reach are biased and inaccurate ones. Robust and representative data input is therefore necessary to ensure biases are not created or perpetuated through GenAI and PDA investment programs.
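To make the “garbage in, garbage out” point concrete, consider a deliberately simplified sketch. The function and data below are hypothetical illustrations, not any firm’s actual system: a toy “predictor” that recommends whichever asset class had the highest average historical return will faithfully reproduce any skew in its training sample.

```python
# Illustrative sketch only: a toy "predictor" that recommends the asset
# class with the highest average historical return in its training data.
# If the data over-samples one class's good years, that bias flows
# straight through to the recommendation.

from statistics import mean

def recommend(history: dict[str, list[float]]) -> str:
    """Return the asset class with the highest average historical return."""
    return max(history, key=lambda asset: mean(history[asset]))

# Biased input: tech returns were drawn from boom years only, while the
# broad index reflects a full market cycle.
biased_history = {
    "tech_equities": [0.30, 0.28, 0.35],         # boom years only
    "broad_index":   [0.08, -0.05, 0.10, 0.07],  # full cycle, ups and downs
}

print(recommend(biased_history))  # the skewed sample drives the output
```

The model is not “wrong” about its inputs; it simply cannot see what was left out of them, which is why representative data and human review of the training set matter more than the sophistication of the algorithm.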
GenAI and PDA investment programs should be designed to be flexible, accommodating new data input and periodic updates. Without such updates, these programs, and the systems upon which they rely, inevitably become outdated and incapable of making accurate predictions that take into consideration emergent trends, new opportunities, and evolving risks for investors. The initial design of such resilient, future-looking systems, as well as the ongoing obligation to input clean and robust data, can be expensive, time-consuming, and, at times, daunting. But proactively addressing these material risk vectors should be a priority for investment firms to ensure not only compliance with their risk management obligations but also the success of their investment programs.
While effective GenAI and PDA risk management starts with the creation of the systems themselves, good governance requires an ongoing commitment to identifying—and taking active steps to mitigate or eliminate—emergent risks arising from the operation of such systems, as well as ensuring compliance with firms’ evolving regulatory obligations. To that end, employing or contracting with personnel well trained in AI management and maintenance is critical to satisfying a firm’s risk management obligations. Qualified professionals can confirm that AI systems perform as intended, thereby maximizing benefits for firms and investors alike, while also remaining abreast of regulatory developments that may require changes to firm practices.
Good governance naturally begins with a focus on internal practices, with AI professionals conducting regular testing and monitoring of AI systems to catch issues like stale data and inaccuracies before they negatively impact investors or create liability for firms. However, effective risk management should not discount external considerations. By encouraging and remaining receptive to investor feedback, firms may benefit not only from increased client retention and satisfaction, but also from valuable “outsider” perspectives when evaluating the ongoing efficacy, reliability, and appropriateness of a particular system. Transparency with investors about the role of AI in advice and offerings is key to eliciting this feedback and minimizing investor risk. Transparency serves the dual purpose of limiting investors’ assumptions while reframing their expectations about what AI systems can provide. Being open with investors about how these systems work (without disclosing trade secrets), the quality of the data used to train the systems, and how investors’ input data is being collected and used can make an appreciable difference.
Finally, good governance requires firms not only to develop initial risk management policies and procedures that comply with then-current regulations, but also to constantly monitor internal compliance with those policies and procedures and to remain cognizant of regulatory developments that may require their revision. Appropriate time, resources, and personnel should be dedicated to ensuring accountability to clients and regulators and that firm practice matches policy. As investors and government agencies like the SEC become increasingly vigilant in holding firms accountable for deficient AI-driven systems and related controls, consistent monitoring, auditing, and enforcement of firm procedures is paramount to effective risk management. By emphasizing a strong culture of risk management compliance, firms and investors should feel confident that GenAI and PDA investment programs are being developed and operated in a trustworthy manner.
Key Takeaways
Here is the bottom line: to be successful, GenAI and PDA systems require human oversight and transparency with investors.
Good governance, including risk management and transparency, is indispensable to maintaining professional integrity and reputation, as well as to building trust and confidence with investors. The ability to understand and explain the conclusions of a GenAI or PDA system is both ethically and legally important as new regulations continue to develop and investors become more knowledgeable about AI. Investors rely on firms to give quality financial advice. When an investor’s financial well-being is negatively impacted by a GenAI or PDA system, or their robo-advisor, they will be looking for an explanation. Prioritizing investors’ trust and loyalty is key to client retention and the recruitment of additional investors.
The professionals at Frost Brown Todd are committed to providing guidance regarding AI governance, regulatory considerations, and more. If you have questions, please contact an attorney with our Artificial Intelligence team.
*Grace Jelkin contributed to this article while working as a summer associate in Frost Brown Todd’s Nashville office. Grace is a rising 2L at the University of Louisville Brandeis School of Law. She is not a licensed attorney.