Meta's chief AI scientist, winner of the prestigious Turing Award for his pioneering work on deep learning, wanted to set the record straight about ChatGPT and its creator, OpenAI.
"There's nothing revolutionary about it, even though that's how it's perceived by the public." During an online conference that ZDNet was able to attend, Yann LeCun sought to temper the enthusiasm that has swept the web since ChatGPT's launch at the end of November.
His opinion carries weight, because Yann LeCun is far from just anyone. The French researcher, now chief AI scientist at Meta, helped pioneer deep learning, the technique that revolutionized artificial intelligence from the late 2000s onward. That work earned him the prestigious Turing Award, the "Nobel Prize of computing."
“Well put together, but not particularly innovative”
ChatGPT, too, is built on deep learning. But according to LeCun, it does not represent a major technical advance. "When it comes to the underlying techniques, ChatGPT isn't particularly innovative," he noted during the conference, while admitting that ChatGPT "is well put together, well done."
As LeCun also points out, OpenAI is far from the only company working on AIs of this kind, nor is it necessarily ahead of other research laboratories. Meta, his employer, has designed its own large language model, which goes by the name OPT-175B. Comparable in size to GPT-3, the model behind ChatGPT, it was released free of charge to the scientific community. Google is also working on similar technologies, and publicly demonstrated its LaMDA bot almost two years ago.
"And it's not just Google and Meta: half a dozen start-ups have fundamentally very similar technology," LeCun said, according to ZDNet. He went on to recall that the technology behind the GPT-3 model draws both on work in self-supervised learning, an approach LeCun "has been advocating for a long time, before OpenAI even existed," and on "transformers," a deep learning technique introduced by Google researchers in 2017. Not to mention reinforcement learning from human feedback (RLHF), inaugurated by DeepMind, now owned by Google, in 2017.
"ChatGPT and other large language models didn't come out of nowhere; they're the result of decades of contributions from various people," LeCun summarized on Twitter.
The title is blunt, but the article conveys what I said on the @collectivei Q&A about progress in AI.
ChatGPT & other LLMs didn’t come out of a vacuum & are the results of decades of contributions from various people.
No AI lab is significantly ahead of the others. https://t.co/WtLMrQVnVm
—Yann LeCun (@ylecun) January 23, 2023
But what are Google and Meta doing?
However, one question remains: if this technology is ready, especially at companies like Google and Meta, why are they keeping it under wraps? According to LeCun, the answer is quite simple: OpenAI is a young company whose economic calculus is very different from that of more established groups. Asked about this by a user, he responded on Twitter: "Large companies could have made a public demonstration, but did not, because they have less to gain than a small company looking for investors, and much more to lose (due to bad press)."
And when it comes to bad press, Meta does indeed know a thing or two. Mark Zuckerberg's company did launch an experiment with BlenderBot, but the bot is only available in the United States and was criticized at launch for repeating conspiracy theories. As for Google, it is visibly wary of unleashing a monster that could harm its reputation, and its bottom line.
This should soon change: LeCun says outright that Meta will roll out more and more generative AI services in the future. And at Google, teams are scrambling to counter OpenAI's initiatives quickly. ChatGPT may not be revolutionary, but it shook things up in a matter of weeks.