Alternative ways to express skepticism about LLMs and their learning capabilities.

😎 “Maybe because all LLMs are overhyped and much of what their creators call ‘learning’ is just tokenizing their training data and recombining it in form of sentences to feign human understanding.” I’m interested in alternatives that express the same thing. Suggest several options with minimal changes (as always, fix anything that is wrong).
🤖 1. “Perhaps because all LLMs are overhyped and what their developers describe as ‘learning’ is merely tokenizing their training data and reassembling it into sentences to mimic human comprehension.”
2. “Possibly because all LLMs are exaggerated in capability, and much of what their creators term ‘learning’ is simply tokenizing their training data and rearranging it into sentences to simulate human understanding.”
3. “It might be because all LLMs are overrated and a lot of what their inventors call ‘learning’ is just tokenizing their training data and reorganizing it into sentences to imitate human understanding.”

No grammatical errors were found in the original sentence; the changes were made to provide alternative expressions while keeping the adjustments minimal.

Hugging Face Dataset Metrics

All the conversation prompts, responses, and metrics are available to download and explore in the Hugging Face dataset reddgr/talking-to-chatbots-chats:
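For quick exploration, the records can be pulled with the Hugging Face `datasets` library. The snippet below is a minimal sketch; the split name ("train") and the fields printed are assumptions and may differ for this dataset.

```python
# Minimal sketch for exploring the dataset with the Hugging Face `datasets` library.
# The split name "train" and the record contents are assumptions, not confirmed details.
from datasets import load_dataset

chats = load_dataset("reddgr/talking-to-chatbots-chats", split="train")

print(chats)     # dataset size and column names
print(chats[0])  # first conversation record (prompts, responses, metrics)
```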