Hedge fund Two Sigma also doesn't buy the ChatGPT hype
As Large Language Models (LLMs) grip the popular imagination, fears over job security may be overdone. First, Richard Eisenberg of Jane Street suggested they're less transformative than they seem; now ex-Goldman Sachs partner Marty Chavez of investment management firm Sixth Street and David Siegel of hedge fund Two Sigma are offering words of reassurance.
That's not to say the work being done with these LLMs isn't impressive. "Everyone is pretty surprised at behaviors occurring from Large Language Models," Siegel says, but "people are reading too much into it."
Lamenting the prominence of AI bots in the news, Siegel brought up "Eliza, the first chat program invented at MIT in the 70s that made the front page of the New York Times." In its time, "people were seduced by it," despite it being, as Chavez pointed out, "a few hundred lines of Lisp."
"It's just more software," Chavez says. "We've been bringing more software into finance for a really long time." He says LLMs will never achieve the "holy grail" of predicting the stock market.
The reason, for Chavez, is that LLMs' strengths lie in analyzing stable datasets. Using cats as an example, he argues that the abundance of data and the fixed definition of what a cat is make ChatGPT or Google Bard well-equipped to understand the concept. "The concept of a cat is stable in time," he says. "The stock market is notoriously not a stable distribution."
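Chavez's point about stable versus unstable distributions is, in statistical terms, about stationarity. A toy sketch (not from the article, and a deliberate simplification) of why statistics learned on one market regime can mislead in the next:

```python
import random

random.seed(0)

# A "stable" concept: samples drawn from the same distribution throughout,
# so anything learned early remains valid later.
stable = [random.gauss(0, 1) for _ in range(1000)]

# A regime-shifting series: the mean and volatility change halfway through,
# a crude stand-in for a non-stationary market. Estimates fitted on the
# first half say little about the second half.
shifting = [random.gauss(0, 1) for _ in range(500)] + \
           [random.gauss(2, 3) for _ in range(500)]

mean_first = sum(shifting[:500]) / 500
mean_second = sum(shifting[500:]) / 500
print(f"first-half mean: {mean_first:.2f}, second-half mean: {mean_second:.2f}")
```

The first-half mean sits near 0 while the second-half mean sits near 2: a model trained only on the early regime would be systematically wrong after the shift, which is the gap between "understanding cats" and predicting prices.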