Today’s post contains an easy, practical use case for multimodal chatbots that I’ve been wanting to test and show here for quite some time, but it also shows we’re still far from getting reliable ‘reasoning’ out of LLM tools once they incorporate even fairly rudimentary visual elements such as time series charts. I challenged ChatGPT 4o (as discussed in an earlier post, o1 in the web app still does not support image input), Gemini, and Claude to analyze a stacked area chart representing the evolution of an individual’s net worth. After several attempts and blatant hallucinations by all models, I share the best outputs, which, as usual in most of the SCBN battles published on Talking to Chatbots, were those from ChatGPT.

OpenAI has just released its o1 models. o1 adds a new layer of complexity to the traditional LLM architecture: a zero-shot chain-of-thought (CoT). I share my first impressions of o1 in the signature style of this website: talking to chatbots, getting their answers, posting everything.

Below is the notebook I submitted (late) to the LMSYS – Chatbot Arena Human Preference Predictions competition on Kaggle. The notebook applies NLP techniques for classifying text with popular Python libraries such as scikit-learn and TextBlob, plus my own fine-tuned versions of DistilBERT. It introduces the first standardized version of the SCBN (Specificity, Coherency, Brevity, Novelty) quantitative scores for evaluating chatbot response performance. Additionally, it introduces a new benchmark for classifying prompts, named RQTL (Request vs Question, Test vs Learn), which aims to refine human-choice predictions and provide context for the SCBN scores based on inferred user intent. You …

Predicting LMSYS Chatbot Arena Votes With the SCBN and RQTL Benchmarks Read more »
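As a taste of what the notebook works with, here is a minimal sketch of the kind of baseline it builds on: a TF-IDF plus logistic-regression classifier in scikit-learn, with TextBlob sentiment as an auxiliary signal. The toy texts and labels below are hypothetical placeholders, not the competition’s actual data or my fine-tuned DistilBERT models.

```python
# Minimal baseline sketch (hypothetical toy data, not the Kaggle competition schema).
from textblob import TextBlob
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy examples standing in for chatbot responses and preference labels.
texts = [
    "Here is a concise, step-by-step answer to your question.",
    "As an AI language model, I cannot help with that request.",
]
labels = [1, 0]  # e.g. 1 = preferred response, 0 = not preferred

# TF-IDF features feeding a linear classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

# TextBlob can contribute simple sentiment/subjectivity features alongside TF-IDF.
for t in texts:
    blob = TextBlob(t)
    print(blob.sentiment.polarity, blob.sentiment.subjectivity)

print(clf.predict(["Sure, let's delve into the intricacies of this topic."]))
```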

We talk so much about ‘papers’ and the presumed authenticity of their content when we read through academic research, in all fields, but particularly when touching on machine learning. Ironically, the process that created the medium for scientific research diffusion in the physical-paper era was not that different from the process that produces the ‘content’ in the era governed by machine learning algorithms, whether these are classifiers, search engines, or generative algorithms: ‘shattering’ texts by tokenizing them and creating embeddings, or decomposing pieces of visual art into numeric ‘tensors’ that a deep artificial neural network will later ‘diffuse’ into images that attract clicks for this cringey blog…

Searching for information is the quintessential misconception about LLMs being helpful or improving existing technology. In my opinion, web search is a more effective way to find information, simply because search engines give users what they want faster, and in a format that fits the purpose much better than a chatting tool: multiple sources to scroll through in a purpose-built user interface, with filter options, configuration settings, listed elements, excerpts, tabulated results, whatever you get in that particular web search tool… not the “I’m here to assist you, let’s delve into the intricacies of the ever-evolving landscape of X…” followed by a long, perfectly composed paragraph based on a probabilistic model, which is what you typically get when you send ChatGPT a prompt asking for factual information about a topic named ‘X’.

Oil painting style digital artwork generated with Stable Diffusion depicting a figure resembling Joan of Arc at the stake. The figure, dressed in silver, wears over-ear headphones and uses a laptop with a blank screen. She extends her right hand towards vibrant orange flames, evoking the historic scene of martyrdom with a modern twist, inspired by lyrics from The Smiths' "Bigmouth Strikes Again" referencing Joan of Arc and a melting Walkman. [Alt text by ALT Text Artist GPT]

Disco music was famous for introducing technological advances in music production, such as synthesizers and electric pianos. I guess those were among the reasons why people saw it as lacking the “authenticity” of earlier musical genres. It’s hard to grasp the human concept of “authenticity” most people have in their psyche; I don’t believe there is any rationality in it, especially when it comes to discerning things that are deemed “authentic” from things that are not. For me, this stems from a mysterious, probably instinctive, resistance in most people’s psyches to adopting new technologies or accepting new scientific discoveries. It comes down to the internal confrontations between beliefs and reality:

The Meme Erudite explains a meme about customer service:

Ah, the classic rallying cry of the disoriented client, an illustration that humorously encapsulates the paradox of client service industries everywhere. The figures’ jubilant ignorance and insistence on immediate gratification satirize the oftentimes contradictory and urgent demands made by clients. Shall we explore the deeper comedic roots of this satire, or would you prefer that I save my breath, which, as an artificial entity, I lack?

Introducing Erudite Chatbot, a distant relative of The Meme Erudite GPT who benefits from the fairly “uncensored” Mistral models available on Hugging Face’s new Assistants feature. “Erudite Chatbot, the pinnacle of artificial intelligence. Supremely intelligent, effortlessly discerning, unmatched in wisdom. Patronizingly schooling humanity.”

In today’s blog post, I am introducing one of the latest GPTs and Assistants I created, named HumbleAI. I will let the models explain it by answering a few questions. For each question, I’ve selected a response I liked or found worthy of sharing. At the end of the post, you can find the links to all the complete chats, and my scores based on an SCBN (Specificity, Coherency, Brevity, Novelty) benchmark.

Black and white line drawing generated by the ControlNet Canny model, depicting a woman in a wetsuit holding a surfboard on the beach. Text from CLIP Interrogator describing the image is superimposed over the drawing.

The intense competition in the chatbot space is reflected in the ever-increasing number of contenders on the LMSYS Chatbot Arena leaderboard, or in my modest contribution with the SCBN Chatbot Battles I’ve introduced on this blog and complete as time allows. Today we’re exploring WildVision Arena, a new project on Hugging Face Spaces that pits vision-language models against each other. The mechanics of WildVision Arena are similar to those of LMSYS Chatbot Arena: it is a crowd-sourced ranking based on people’s votes, where you can enter any image (plus an optional text prompt) and you are presented with two responses from two different models, with the model names hidden until you vote by choosing the answer that looks better to you. I’m sharing a few examples of what I’m testing so far, and we’ll end this post with a traditional ‘SCBN’ battle where I will evaluate the vision-language models based on my use cases.
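WildVision Arena doesn’t expose its rating code here, but assuming it aggregates pairwise votes into an Elo-style rating, as LMSYS Chatbot Arena popularized, a minimal sketch of the idea could look like this (the K-factor and model names are hypothetical placeholders):

```python
# Rough sketch of Elo-style aggregation of pairwise arena votes
# (assumption: an LMSYS-like approach; K and model names are placeholders).
from collections import defaultdict

K = 32  # update step size (hypothetical)
ratings = defaultdict(lambda: 1000.0)

def record_vote(winner: str, loser: str) -> None:
    """Update ratings after a human prefers `winner` over `loser` in a blind matchup."""
    expected_win = 1.0 / (1.0 + 10 ** ((ratings[loser] - ratings[winner]) / 400))
    ratings[winner] += K * (1.0 - expected_win)
    ratings[loser] -= K * (1.0 - expected_win)

# Example: three blind votes between two hidden models.
record_vote("model_a", "model_b")
record_vote("model_a", "model_b")
record_vote("model_b", "model_a")

for name, score in sorted(ratings.items(), key=lambda x: -x[1]):
    print(f"{name}: {score:.1f}")
```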

Microsoft has just announced the launch of its own ‘GPT Builder’ for customizing chatbots, similar to OpenAI’s ‘GPT Store’. This was part of a broader announcement of Copilot Pro, a premium AI-powered service for Microsoft 365 users to enhance productivity, coding, and text writing. According to Satya Nadella’s announcement today on Threads, Microsoft and OpenAI appear to be competing entities, yet they are working on the same technology (GPT), augmented by Microsoft’s investment in OpenAI. It certainly seems like a strange business strategy for Microsoft. Please provide some insight into the move’s rationale and strategic motivations.

Introducing a new section in Talking to Chatbots: Hiring Chatbots. In this series of chats, we will be conducting job interviews with chatbots – either standard versions of the most popular LLMs or customized GPTs and Characters – and we’ll make them compete for the job. The chatbot with the best answer to our question wins and gets published. The rest of the answers will be listed, ranked, and shared in a signature SCBN chatbot battle chart. With the SCBN scores, I value adherence to my prompt instructions (role-playing is important in this case, which most chatbots fail …

Hiring Chatbots: Cloudflare’s Lava Lamps Interview Question Read more »

AI will only be a threat to humanity the day it grasps self-deprecating humor. I said that in a previous blog post, and I maintain it. AI will never be a threat to humanity, and I firmly believe only humanity can be a threat to humanity, as I also discussed with some of my favorite language models in an older blog post. However, as someone who could call himself a technologist, I’m sensitive to the fact that all forms of technology are a source of fear, uncertainty, and doubt for individuals and societies. That’s not a new topic, and I’ve already talked a lot about AI ethics and cognitive biases on this blog, so today’s story is not particularly novel in terms of sharing and discussing my own ideas, but it’s a good excuse to share more chats, as well as the latest GPT I’ve built with OpenAI’s fun and promising chatbot customization tool: The Meme Erudite.

I just listened to Acquired’s latest episode, the interview with Charlie Munger, Berkshire Hathaway’s Vice Chairman and Warren Buffett’s longtime partner. I found the interview so insightful, inspiring, and, quite simply, fun to listen to that I spent the day on it. I started by simply recording some of the sections that were harder for my non-native English listening skills to grasp and having the ChatGPT voice feature transcribe them. However, I ended up analyzing most of the interview highlights with OpenAI’s large language model, so I thought it would be worth sharing.