Ilya Sutskever is trending for something he said about AI having plateaued
This page contains a chat with Grok, a chatbot by xAI available at x.com.
👤 Ilya Sutskever is trending for something he reportedly said about traditional AI scaling approaches having “plateaued”. I know how to read and search for information on the Internet, so stop the “helpful assistant” bs and just put some posts here so I can share the link:
🤖 Here you go, without the usual fluff:
Ilya Sutskever, co-founder of AI labs Safe Superintelligence (SSI) and OpenAI, told Reuters recently that results from scaling up pre-training – the phase of training an AI model that uses a vast amount of unlabeled data – have plateaued.
Sutskever also mentioned in September, “Everyone just says scaling hypothesis. Everyone neglects to ask, what are we scaling?” suggesting a shift in focus to something different from what he was previously working on at OpenAI.
There’s talk on X about his comments, with some users interpreting it as a sign of AI scaling hitting a wall, leading to discussions about exploring new methods like test-time compute and scaling synthetic data.
Garrison Lovely [GarrisonLovely]
Ilya Sutskever, perhaps the most influential proponent of the AI “scaling hypothesis,” just told Reuters that scaling has plateaued. This is a big deal! This comes on the heels of a big report that OpenAI’s in-development Orion model had disappointing results.
Yann LeCun [ylecun]
I don’t wanna say “I told you so”, but I told you so.
Quote: “Ilya Sutskever, co-founder of AI labs Safe Superintelligence (SSI) and OpenAI, told Reuters recently that results from scaling up pre-training – the phase of training an AI model that uses a vast amount of unlabeled
raven [ravenmaster122] [GarrisonLovely]
Ilya specifically said scaling pre-training has plateaued, hence all the recent stuff about post-training as well as scaling inference compute
Vijay Amritraj [vjamritraj]
AI scaling has already plateaued according to Ilya Sutskever.
Casper Hansen [casper_hansen_]
Why can’t journalists do honest work today? They misquote Ilya for saying pre-training has plateaued.
All you need is new methods like test-time compute and different approaches to scale up synthetic data.
Panda_GPT [Panda_GPT]
Ilya Sutskever reports that scaling has hit its limits! If increasing the amount of data, compute, and parameters no longer improves AI performance, that would be a big deal! I just learned of this and haven’t checked whether it’s true, though. [x.com/ylecun]
gio [GiorgioPatrini]
[sama]
Context: [ilyasut]
told Reuters that scaling has plateaued
…
Bunagaya [Bunagayafrost]
Ilya Sutskever “scaling up pre-training have plateaued”
also ilya in september
“identified a mountain that’s a bit different from what I was working on.”
“Everyone just says scaling hypothesis. Everyone neglects to ask, what are we scaling?”
-----
