Unless you’ve been living off the grid for the past year, you’ve heard about ChatGPT, its creator OpenAI, and generative AI more broadly. As a quick primer, generative AI is essentially what it sounds like: giving machines the ability to intelligently generate something. Generative AI describes algorithms (such as ChatGPT) that can be used to create new content such as audio, code, images, text, and videos – and the field is evolving quickly.
I was reading the WSJ this morning (as all cultured people do) and came across an article on AI in investing, so I thought I would share some of my thoughts more broadly (to date I’ve been sharing them only 1:1 in meetings).
Augmentation or Automation?
When talking about any new technology, the word ‘automation’ comes up – will this automate the jobs people are doing today? Will this make humans redundant? Will the machines take over? In the financial services industry, across markets and investment banking, I hear the same question: will generative AI change investing? The answer is obviously yes, but I’m not holding my breath for robots to start coming up with investment theses and attending Yankees games any time soon. To me, generative AI and related tools (like ChatGPT) will follow the same path and have the same impact that technology has always had: they will continue to augment individuals in the industry, automating certain workflows (which will free up time and resources) and augmenting others (making them more effective and efficient). Let me jump into a few of my hypotheses:
- For the sell side, it will dramatically reduce the value of the historical content archive to clients. Initiations, reviews, and other long-form analysis used to have a relatively long shelf life, as clients could use those reports to get ‘up to speed’ on a name: pull down the latest reports and use them as a starting point. In the near future, clients will simply go to a tool such as ChatGPT and say “summarize the major drivers of company ABC, the industry dynamics affecting company ABC, and any catalysts or risks on the horizon in the next five years in under 5,000 words”.
- On the other hand, I believe timely content that is responsive to short-term dynamics/macro events (e.g. “here is why the inflation print today matters, and what it means for Fed policy”) becomes more valuable, because at this point in time the language models are still months or years behind in the content they have consumed. It will be a long time until the models are consuming information in real time.
- Finally for the sell side, it feels like generative AI will help drive the next best action for anyone in a client-facing role. Firms have an abundance of data about clients, their interactions, engagements, and resource consumption, but currently have to manually analyze and extrapolate actions from that data set. Being able to ask “who should I call on this name next?” makes that analysis much easier (and much more efficient).
- For the buy side, summarization seems like the most obvious short-term use case. While one could argue that a great sales note is a summary of the firm’s evolving views, perspectives, and resources, why not summarize the summaries? Having a system sitting on the buy side that ingests every single email sent to a firm would solve a lot of problems that brokerages and funds have been trying to solve with more traditional methods. Remember corporate access calendars? Why not have a system absorb every email and spit out a summary of upcoming events? Long-form research? Give me a summary. Emails I didn’t read today? Give me a summary. You get the idea.
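To make the "summarize the summaries" idea concrete, here is a toy sketch of the ingestion step. This is deliberately not an LLM – just a naive frequency-based extractive summarizer over a handful of made-up emails – but it shows the shape of the workflow: pool every inbound message, score the sentences, and hand back the highlights. The email text, function names, and scoring method are all illustrative assumptions, not any firm's actual system.

```python
import re
from collections import Counter

def summarize(emails, max_sentences=2):
    """Toy extractive summary: score each sentence by the corpus-wide
    frequency of its words, then return the top sentences in original order."""
    text = " ".join(emails)
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(words)
    sentences = re.split(r"(?<=[.!?])\s+", text)
    scored = [(sum(freq[w] for w in re.findall(r"[a-z']+", s.lower())), i, s)
              for i, s in enumerate(sentences)]
    top = sorted(scored, reverse=True)[:max_sentences]
    return " ".join(s for _, _, s in sorted(top, key=lambda t: t[1]))

# Hypothetical inbound emails for illustration only
emails = [
    "ABC Corp reports earnings Thursday. Consensus expects margin pressure.",
    "Reminder: ABC Corp management call Thursday at 10am ET.",
    "Macro note: CPI print tomorrow may move rate expectations.",
]
print(summarize(emails))
```

In a real deployment the scoring function would be replaced by an LLM call, but the surrounding plumbing – collecting every email into one corpus and querying it – is the part that buy-side firms would actually have to build.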
The other prickly challenge that no one is addressing or even acknowledging in capital markets is that these models have a limitation: the information they have access to. The “Large” in Large Language Model (LLM) – the class of model behind ChatGPT – is an important part of the name. In order for these models to work well, they need a large corpus of text – the bigger the better. If you’re writing an essay about George Washington, that’s easy – there’s no shortage of publicly available information on the topic that a model could leverage. If you want to write an analysis of ABC Corp, you’re going to have a hard time getting access to an institutional corpus of information.
This poses the question: where is this information, and who can leverage it? There are the producers and the consumers, who each have a copy, then there are the information infrastructure providers, who distribute the information.
For those who are information producers, the reality is that they likely don’t have enough information to really build a model around. To be fair, they could try to supplement it with public information, but then it’s not really all that differentiated vs. other LLMs that have access to the same public information (the marginal difference being the information they produce and have exclusive access to).
For those in an information consumption position, they might actually be in one of the best positions to build their own LLM. If you think about the biggest asset managers and hedge funds in the world, they are getting thousands of emails and IMs per day, every day, across every employee. That adds up to a lot of information very quickly. Theoretically, brokerages and other information providers could ask them not to leverage that information in an LLM, but it seems unlikely and difficult to enforce.
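A quick back-of-envelope calculation shows how fast "thousands of emails per day" compounds. Every figure below is an illustrative assumption (headcount, message volume, message length), and the tokens-per-word conversion is the rough GPT-family heuristic of about 0.75 words per token – treat the output as an order-of-magnitude sketch, not a measurement.

```python
# All figures are illustrative assumptions, not data from any firm.
EMAILS_PER_DAY = 1_000    # inbound emails/IMs per employee per day (assumed)
EMPLOYEES = 500           # employees receiving market content (assumed)
WORDS_PER_MESSAGE = 150   # average message length in words (assumed)
TRADING_DAYS = 250        # trading days per year

words_per_year = EMAILS_PER_DAY * EMPLOYEES * WORDS_PER_MESSAGE * TRADING_DAYS
# ~0.75 words per token (GPT-family heuristic), kept in integer math
tokens_per_year = words_per_year * 4 // 3

print(f"{words_per_year:,} words/year ≈ {tokens_per_year:,} tokens/year")
```

Under these assumptions a single large firm accumulates on the order of 25 billion tokens a year – a couple of years of that is a meaningful private corpus, even if it is still well short of the hundreds of billions of tokens used to pretrain the largest public models.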
In the middle are the infrastructure providers. As the conduits of the information, they also have insight into a large corpus of text, but likely can’t actually use it (without explicit client consent). Recently Bloomberg announced their own version of ChatGPT (creatively named BloombergGPT), trained on a data set of around 700 billion tokens. From the article, it sounds like it came from both proprietary data and public data, but what exactly composes the private data is unclear (are they leveraging Bloomberg IB messages? One would hope not).
So, when do the robots start trading?
Probably not any time soon. The markets are both rapidly changing (with new information constantly being produced) and adversarial – a combination that gives LLMs a hard time.
What we’re much more likely to see is a new generation of software applications that leverage generative AI and LLMs to elegantly tackle a lot of problems. This could include content summarization, initiation-esque analysis and deep dives, or leveraging of data sets to drive insights and next steps.
I’m not one to get overly excited about the latest trend or technology, but after using ChatGPT myself and seeing the outputs, I do wonder about the future of the industry and what the potential applications will be. For now, though, all we can do is wonder. Nothing matters about theoretical use cases until someone actually builds them.
Maybe I should go ask ChatGPT what it thinks…
Previous Thought Pieces:
- The Cloud Wars have Officially Arrived in the Capital Markets
- The SEC Officially Says “No” to Unbundling
- A Blockchain Blow-Up You Missed Much Closer to Home (Spoiler: Not FTX)
- People are getting fired for using Whatsapp, but no one gets fired for using Email
- Wall Street shifts to the Cloud
- Asset Manager or Asset Owner – who votes?
- Are Research Marketplaces officially dead?
- Who will own identity in Capital Markets?
Hit The Street