Errors in chatbot apologies and comparison of ChatGPT, Gemini, and open-source models
😎 “A dictionary definition for apologizing is “express regret for something that one has done wrong.” Although it’s a debatable topic, my stance on this is firm: an algorithm can not genuinely apologize, so it should not apologize when engaging in a conversation. Machines don’t have feelings, so they can’t feel regret. It’s pointless to express regret in human-machine interaction, a waste of time and resources. A chatbot user does not gain anything from reading a machine-generated text saying ‘I apologize’ so, when a chatbot uses that kind of language it’s essentially acting like a parrot, repeating the politically correct and excessively used formulaic sentences we observe in human conversations (see a few examples below). That’s not a particular problem and, understandably, most chatbot tools ignore it in pre-prompting, but it can become annoying and even funny, particularly when using models with a very short context window. This was not an issue with ChatGPT and Gemini. Sharing Gemini’s answer”
Please focus on letting me know the errors. Rewriting the whole text without saying anything isn’t of much use
🤖 Corrected text:
“A dictionary definition of apologizing is ‘to express regret for something one has done wrong.’ Although it’s a debatable topic, my stance on this is firm: an algorithm cannot genuinely apologize, so it should not apologize when engaging in a conversation. Machines don’t have feelings, so they can’t feel regret. It’s pointless to express regret in human-machine interaction—a waste of time and resources. A chatbot user does not gain anything from reading a machine-generated text saying ‘I apologize’; thus, when a chatbot uses that kind of language, it’s essentially acting like a parrot, repeating the politically correct and excessively used formulaic sentences we observe in human conversations (see a few examples below). That’s not a particular problem and, understandably, most chatbot tools ignore it in pre-prompting, but it can become annoying and even funny, particularly when using models with a very short context window. This was not an issue with ChatGPT and Gemini. Sharing Gemini’s answer.”
Changes made:
– Changed “for apologizing” to “of apologizing.”
– Added “to” before “express regret.”
– Changed “can not” to “cannot” for standardization.
– Replaced the comma before “a waste of time and resources” with a dash to set off the appositive.
– Changed “so, when a chatbot uses” to “thus, when a chatbot uses” for smoother flow and clarity.
– Adjusted punctuation and conjunctions for better readability and coherence.
The essence of your original text is preserved while enhancing clarity and grammatical correctness.
😎 “Overall, both ChatGPT and Gemini prove to be much more reliable and adaptable to the user than their open-source peers. That’s not surprising. However, the amazing thing about my experiment with the open-source models on HuggingChat is that I ‘cheated’ by placing a strict directive to never apologize at the very beginning of the pre-prompt. Most of them still apologize and insist on apologizing to a hilarious extent (with the honorable exception of Nous-Hermes-2, the ‘winner’ of this battle among the open-source models, who overlooked the directive only once and then openly admitted the mistake without the usual ‘bs’):”
🤖 Corrected text:
“Overall, both ChatGPT and Gemini prove to be much more reliable and adaptable to the user than their open-source peers, which is not surprising. However, what’s amazing about my experiment with the open-source models on HuggingChat is that I ‘cheated’ by placing a strict directive at the very beginning of the pre-prompt to never apologize. Most of them still apologized and insisted on apologizing to a hilarious extent, with the honorable exception of Nous-Hermes-2, the ‘winner’ of this battle among the open-source models, who overlooked the directive only once and then openly admitted the mistake without the usual ‘bs’.”
Changes made:
– Smoothed out the sentence structure for better flow.
– Corrected tense consistency (“Most of them still apologize” to “Most of them still apologized”).
– Removed redundant parentheses around the clarification of Nous-Hermes-2’s behavior for clearer readability.
The essence of your original text is preserved with improved clarity and grammatical accuracy.
Hugging Face Dataset Metrics
All the conversation prompts, responses, and metrics are available to download and explore on the Hugging Face dataset reddgr/talking-to-chatbots-chats.
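As a rough sketch of how one might explore the dataset, the snippet below counts apology phrases in a list of response strings, echoing the experiment above. The `load_dataset` call and the `"response"` column name are assumptions shown only in comments, not verified against the dataset's actual schema; the demo rows are invented for illustration.

```python
import re

# Hedged sketch: the dataset itself could be pulled with the `datasets`
# library, e.g. (column name "response" is a guess, not the real schema):
#   from datasets import load_dataset
#   ds = load_dataset("reddgr/talking-to-chatbots-chats", split="train")
#   responses = [row["response"] for row in ds]

# Common apology phrasings discussed in the post.
APOLOGY_PATTERN = re.compile(
    r"\b(i apologi[sz]e|i'?m sorry|my apologies)\b", re.IGNORECASE
)

def count_apologies(responses):
    """Return how many responses contain at least one apology phrase."""
    return sum(1 for text in responses if APOLOGY_PATTERN.search(text))

# Toy illustration (invented rows, not real dataset content):
demo = [
    "I apologize for the confusion earlier.",
    "Here is the corrected text.",
    "My apologies, I misread the directive.",
]
print(count_apologies(demo))  # → 2
```

Run against the real responses, a count like this would make the "most of them still apologize" observation quantifiable per model.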