And finally, here’s what Justina’s interested in this morning

More than a year ago, I interviewed a few hedge funds about how they’re using ChatGPT. At the time, it already seemed clear the new tech could be very helpful in automating grunt work, but it was unclear what more it could do, and data security was a big question mark. Over the last few months I’ve been speaking to hedge funds again, trying to get a sense of how their gen AI applications have evolved. Here are a few observations:

  1. No one talks about data security anymore, presumably because they’ve become comfortable that their queries aren’t being used for training by the AI companies. The standard setup (at these better-resourced places) now seems to be an interface that can plug into the different closed-source models (ChatGPT, Claude) as well as some open-source ones that can be fine-tuned internally.
  2. Boosting productivity on specific tasks like coding and extracting information from documents is still the main use. Some people wonder whether gen AI ultimately has to take on more complex tasks to justify its cost, but at a minimum the time savings appear real and substantial.
  3. Some funds are exploring more sophisticated uses. Man Group’s moonshot idea is to have it search through research, generate a hypothesis and even test it, all on its own.
  4. It’s clear that anything more complex requires much more than typing a prompt into ChatGPT. Balyasny said that to get the AI to analyze a question like ‘which stocks will be winners and losers from higher tariffs?’, the firm first had to train it to break the topic down into sub-questions, a technique known in AI research as chain-of-thought prompting. (The idea is sketched in code after this list.)
  5. Relatedly, a lot of the cool use cases — like one Balyasny showed me where the gen AI read an academic paper and backtested the strategy in it — combine the model’s mastery of natural language with other tools. In that case, for instance, the model itself can’t actually run a backtest, so it relied on a separate program coded by Balyasny. (This division of labor is also sketched after the list.)
  6. For quants, there is a particular use case: sentiment analysis, or finding systematic signals in text. The most cutting-edge firms have been doing that for ages, but as Two Sigma told me, you used to have to code a tool that looked for particular keywords or expressions. Now, because large language models can parse context, you can build the signal just by asking them, say, “Is this story about an executive departure?” That means they can test far more signals. (This, too, is sketched after the list.)

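To make point 4 concrete, here is a minimal sketch of that kind of question decomposition, assuming the OpenAI Python client. The model name, prompts and structure are illustrative guesses on my part, not Balyasny’s actual system:

```python
# A minimal sketch of chain-of-thought-style decomposition (point 4):
# split a broad question into sub-questions, answer each, then synthesize.
# Model name and prompts are illustrative, not any firm's real setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model would do here
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

question = "Which stocks will be winners and losers from higher tariffs?"

# Step 1: have the model break the broad question into narrower ones.
sub_questions = ask(
    "Break this research question into 3-5 specific sub-questions, "
    f"one per line, with no other text:\n{question}"
).splitlines()

# Step 2: answer each sub-question separately, then synthesize.
answers = [ask(q) for q in sub_questions if q.strip()]
print(ask(
    f"Question: {question}\n\nFindings:\n" + "\n\n".join(answers)
    + "\n\nSynthesize these findings into a short answer."
))
```
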
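A toy version of the division of labor in point 5: the language model only turns prose into a machine-readable rule, while the backtest runs in ordinary, separately written code. The paper excerpt, rule schema and backtester here are hypothetical stand-ins, not Balyasny’s pipeline:

```python
# Toy version of point 5: the model extracts a structured rule from text;
# a separately coded function (which the model never touches) backtests it.
# Excerpt, schema and data are hypothetical, not Balyasny's pipeline.
import json
from openai import OpenAI

client = OpenAI()

excerpt = "We buy when the two-day return is positive and sell otherwise."

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "Extract the trading rule from this sentence as JSON "
                   f"with a single integer key 'lookback_days': {excerpt}",
    }],
    response_format={"type": "json_object"},  # ask for machine-readable output
)
rule = json.loads(resp.choices[0].message.content)

def backtest(prices: list[float], lookback: int) -> float:
    """Hand-coded long-only momentum backtest; no LLM involved."""
    pnl = 0.0
    for t in range(lookback, len(prices) - 1):
        if prices[t] > prices[t - lookback]:   # positive trailing return
            pnl += prices[t + 1] - prices[t]   # hold for one period
    return pnl

prices = [100, 101, 103, 102, 105, 107, 106, 110]  # dummy price series
print(backtest(prices, rule["lookback_days"]))
```
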
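And the LLM-as-classifier signal described in point 6 can be sketched in a few lines; the headlines, model and prompt are illustrative, not Two Sigma’s setup:

```python
# Sketch of point 6: instead of hand-coded keyword rules, ask the model a
# yes/no question about each headline and use the answers as a signal.
# Headlines, model and prompt are illustrative, not Two Sigma's setup.
from openai import OpenAI

client = OpenAI()

headlines = [
    "Acme Corp CFO steps down after eight years",
    "Acme Corp beats quarterly revenue estimates",
]

def is_executive_departure(headline: str) -> bool:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": "Is this story about an executive departure? "
                       f"Answer only YES or NO.\n\n{headline}",
        }],
    )
    return resp.choices[0].message.content.strip().upper().startswith("YES")

print([is_executive_departure(h) for h in headlines])  # e.g. [True, False]
```
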
The upshot seems pretty positive. One thing I wondered throughout my reporting is whether these gen AI tools level the playing field: sentiment analysis, for instance, is no longer the preserve of firms that can build out a natural-language processing team. But even putting aside the cost of using ChatGPT and the like, putting the tech to good use still seems to require a team of engineers.

The important caveat to all this is that OpenAI says it’s working on taking ChatGPT toward “human-level” problem-solving, so it may be that we all just need to wait for it to get there.

Justina Lee is a cross-asset reporter based in London. Follow Bloomberg’s Justina Lee on X @Justinaknope.

— With assistance from Nick Bartlett and Justina Lee