Alternative phrasings to express skepticism about LLMs’ learning capabilities with minimal changes.


😎 “Maybe because all LLMs are overhyped and much of what their creators call ‘learning’ is just tokenizing their training data and recombining it in form of sentences to feign human understanding.” I’m interested in alternatives to express the same thing, making minimum changes. Suggest several options with minimum changes (as always, fix anything that is wrong)
🤖 1. “Maybe because all LLMs are overhyped and much of what their creators call ‘learning’ is merely tokenizing their training data and recombining it into sentences to simulate human understanding.”

2. “Perhaps because all LLMs are overhyped, and much of what their creators refer to as ‘learning’ is simply tokenizing their training data and rearranging it into sentences to mimic human understanding.”

3. “It could be that all LLMs are overhyped and that much of what their creators describe as ‘learning’ is just tokenizing their training data and reassembling it into sentences to imitate human understanding.”

Changes made: Corrected minor grammatical issues and provided variations with minimal alterations to maintain the original meaning.

Hugging Face Dataset Metrics

All the conversation prompts, responses, and metrics are available to download and explore in the Hugging Face dataset reddgr/talking-to-chatbots-chats:
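For readers who want to explore the data programmatically, here is a minimal sketch assuming the standard Hugging Face `datasets` library; only the dataset ID comes from this page, and the split and field names are simply whatever the published dataset exposes.

```python
# Minimal sketch (assumption: the standard Hugging Face `datasets` API;
# split and field names are whatever the published dataset exposes).
from datasets import load_dataset

# Download the conversation dataset referenced above from the Hugging Face Hub.
dataset = load_dataset("reddgr/talking-to-chatbots-chats")

# List the available splits and peek at the first record of one split.
print(dataset)
first_split = next(iter(dataset))
print(dataset[first_split][0])
```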