HumbleAI configuration on HuggingChat with model Nous-Hermes-2-Mixtral-8x7B-DPO
😎 “The screenshot below shows the configuration screen of HumbleAI on HuggingChat, where I chose a model and entered the pre-prompt (System Instructions), as well as a logo and brief description. The full pre-prompt is available to all users by clicking on the “Settings” button. The model chosen by now is Nous-Hermes-2-Mixtral-8x7B-DPO, which scored slightly better than the other versions of Mistral, and and significantly better than the other two tested open-source models (Llama2 and OpenChat). In the case of the response above, the specificity was perhaps higher than desirable, as words I wrote in the system instructions are directly copied in the response. This has a negative impact on the Novelty rating in the SCBN benchmark.”
🤖 “The screenshot below shows the configuration screen of HumbleAI on HuggingChat, where I chose a model and entered the pre-prompt (System Instructions), as well as a logo and brief description. The full pre-prompt is available to all users by clicking on the ‘Settings’ button. The model chosen by now is Nous-Hermes-2-Mixtral-8x7B-DPO, which scored slightly better than the other versions of Mistral, and significantly better than the other two tested open-source models (Llama2 and OpenChat). In the case of the response above, the specificity was perhaps higher than desirable, as words I wrote in the system instructions are directly copied in the response. This has a negative impact on the Novelty rating in the SCBN benchmark.”
Note: Changes made for clarity and grammar include the replacement of “and and” with “and” for correctness, the adjustment of quotation marks for consistency with standard writing conventions, and minor tweaks for readability.
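The HumbleAI assistant itself is configured through the HuggingChat UI (model choice, system instructions, logo, description), not through code. For readers who want to reproduce a comparable setup programmatically, the sketch below is only an assumption, not the HumbleAI implementation: it calls the same model through huggingface_hub's InferenceClient, with a placeholder standing in for the full HumbleAI pre-prompt, and assumes the NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO checkpoint is reachable through a hosted inference endpoint with your token.

```python
# Rough sketch (assumption, not the actual HumbleAI setup): query the same
# model with a system prompt via the Hugging Face Inference API.
from huggingface_hub import InferenceClient

client = InferenceClient(
    model="NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO",  # model chosen for HumbleAI
    token="hf_...",  # replace with a valid Hugging Face token
)

messages = [
    # Placeholder for the full HumbleAI pre-prompt (System Instructions)
    {"role": "system", "content": "You are HumbleAI. Acknowledge the limits of your knowledge."},
    {"role": "user", "content": "What can you tell me about your limitations?"},
]

response = client.chat_completion(messages=messages, max_tokens=512)
print(response.choices[0].message.content)
```

Because the system instructions travel with every request here, overly specific pre-prompt wording can be echoed back verbatim in responses, which is the behavior noted above as hurting the Novelty rating.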
Hugging Face Dataset Metrics
All the conversation prompts, responses, and metrics are available to download and explore on Hugging Face dataset reddgr/talking-to-chatbots-chats:
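For example, a minimal sketch for pulling the dataset with the datasets library (split names and column layout are assumptions; inspect the loaded object to see the actual schema):

```python
# Minimal sketch: download and inspect the Talking to Chatbots chats dataset.
# Split and column names are not guaranteed; print the dataset to verify.
from datasets import load_dataset

ds = load_dataset("reddgr/talking-to-chatbots-chats")
print(ds)                      # shows available splits and features
first_split = next(iter(ds))   # take whichever split comes first
print(ds[first_split][0])      # peek at one conversation record
```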