Ilya Sutskever is trending for something he reportedly said about traditional AI scaling approaches having ‘plateaued’

This page contains a chat with Grok, a chatbot by xAI available at x.com.

Grok 2 mini (Fun mode, beta)

😎 Ilya Sutskever is trending for something he reportedly said about traditional AI scaling approaches having ‘plateaued’. I know how to read and search for information on the Internet, so stop the ‘helpful assistant’ bs and just put some posts here so I can share the link:

🤖 Here you go, without the usual fluff:

Ilya Sutskever, co-founder of AI labs Safe Superintelligence (SSI) and OpenAI, told Reuters recently that results from scaling up pre-training – the phase of training an AI model that uses a vast amount of unlabeled data – have plateaued.

Sutskever also said in September, “Everyone just says scaling hypothesis. Everyone neglects to ask, what are we scaling?”, suggesting a shift in focus toward something different from what he had been working on at OpenAI.

There’s talk on X about his comments, with some users interpreting them as a sign that AI scaling is hitting a wall, which has prompted discussion of new directions such as test-time compute and scaling synthetic data.
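For context on what “scaling up pre-training” means quantitatively, a minimal sketch (an editorial addition, not part of the Grok reply): empirical scaling laws model pre-training loss as a power law in parameters and data. One widely cited form is the Chinchilla-style law of Hoffmann et al. (2022), written here symbolically:

    L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}

where N is the parameter count, D is the number of training tokens, E is the irreducible loss, and A, B, \alpha, \beta are fitted constants. A “plateau” claim amounts to saying that further increases in N and D are no longer buying the loss reductions this curve predicts.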

Garrison Lovely [GarrisonLovely]

Ilya Sutskever, perhaps the most influential proponent of the AI “scaling hypothesis,” just told Reuters that scaling has plateaued. This is a big deal! This comes on the heels of a big report that OpenAI’s in-development Orion model had disappointing results.

Yann LeCun [ylecun]

I don’t wanna say “I told you so”, but I told you so.

Quote: “Ilya Sutskever, co-founder of AI labs Safe Superintelligence (SSI) and OpenAI, told Reuters recently that results from scaling up pre-training – the phase of training an AI model that uses a vast amount of unlabeled data – have plateaued.”

raven [ravenmaster122], replying to [GarrisonLovely]

Ilya specifically said scaling pre-training has plateaued, hence all the recent stuff about post-training as well as scaling inference compute

Vijay Amritraj [vjamritraj]

AI scaling has already plateaued according to Ilya Sutskever.

[x.com/1shankarsharma]

Casper Hansen [casper_hansen_]

Why can’t journalists do honest work today? They misquote Ilya, who said that pre-training has plateaued.

All you need is new methods like test-time compute and different approaches to scale up synthetic data.
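To make “test-time compute” concrete, here is a minimal editorial sketch (not from any of the posts above) of one simple form, best-of-N sampling: generate several candidate answers and let a scorer pick the best, so answer quality scales with inference compute rather than with pre-training size. Both generate and score_candidate below are hypothetical stand-ins for a model call and a verifier/reward model.

import random

def generate(prompt: str) -> str:
    # Hypothetical stand-in for a language-model call; returns one sampled candidate.
    return f"candidate-{random.randint(0, 9999)} for: {prompt}"

def score_candidate(answer: str) -> float:
    # Hypothetical stand-in for a verifier / reward model score.
    return random.random()

def best_of_n(prompt: str, n: int = 16) -> str:
    # Spending more inference-time compute (a larger n) searches over more
    # candidates, so quality can improve without a bigger pre-trained model.
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=score_candidate)

print(best_of_n("What is 17 * 24?", n=8))

Real systems replace the two stubs with an LLM sampler and a trained verifier; the pattern is the same.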

Panda_GPT [Panda_GPT]

Ilya Sutskever is reporting that scaling laws have hit their limit? If increasing data, compute, and parameter count no longer improves AI performance, that’s a big deal! I’ve only just heard about this and haven’t checked yet whether it’s true. [x.com/ylecun]

gio [GiorgioPatrini]

[sama] [DarioAmodei]

Context: [ilyasut] told Reuters that scaling has plateaued.

[x.com/GarrisonLovely]

Bunagaya [Bunagayafrost]

Ilya Sutskever “scaling up pre-training have plateaued”
also ilya in september
“identified a mountain that’s a bit different from what I was working on.”
“Everyone just says scaling hypothesis. Everyone neglects to ask, what are we scaling?”
