OpenAI has just released its o1 models. o1 adds a new layer of complexity to the traditional LLM architecture: a built-in zero-shot chain-of-thought (CoT). I share my first impressions of o1 in the signature style of this website: talking to chatbots, getting their answers, posting everything.
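
For readers unfamiliar with the term, here is a minimal sketch of what zero-shot chain-of-thought prompting looks like at the prompt level. The function name and the exact trigger phrase are illustrative (the trigger is the classic one from the CoT prompting literature); o1's internal reasoning is, of course, far more elaborate than appending a sentence to a prompt.

```python
# Zero-shot chain-of-thought (CoT) prompting: instead of supplying
# worked examples (few-shot), a single trigger phrase asks the model
# to reason step by step before giving its final answer.

COT_TRIGGER = "Let's think step by step."  # classic zero-shot CoT trigger

def build_zero_shot_cot_prompt(question: str) -> str:
    """Append the CoT trigger so the model emits its reasoning first."""
    return f"Q: {question}\nA: {COT_TRIGGER}"

prompt = build_zero_shot_cot_prompt(
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 "
    "more than the ball. How much does the ball cost?"
)
print(prompt)
```

The point of o1, as I understand it, is that this kind of step-by-step reasoning no longer needs to be coaxed out by the user's prompt: it happens before the visible answer is produced.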

We talk so much about ‘papers’ and the presumed authenticity of their content when we read through academic research in all fields, but particularly when touching on machine learning. Ironically, the process that created the medium for diffusing scientific research in the physical-paper era was not that different from the process that produces ‘content’ in the era governed by machine learning algorithms, whether those are classifiers, search engines, or generative models: ‘shattering’ texts by tokenizing them and creating embeddings, or decomposing pieces of visual art into numeric ‘tensors’ that a deep artificial neural network will later ‘diffuse’ into images that attract clicks for this cringey blog…
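
That ‘shattering’ can be made concrete with a toy sketch: split a text into tokens and map each token to a numeric vector. Everything here is illustrative — the whitespace tokenizer and the hash-based lookup are stand-ins for the subword tokenizers and learned embedding tables real models use.

```python
# Toy illustration of 'shattering' a text: tokenize it and map each
# token to a small numeric embedding vector. The hash-based lookup is
# a deterministic stand-in for a learned embedding table.
import hashlib

EMBEDDING_DIM = 4  # real models use hundreds or thousands of dimensions

def tokenize(text: str) -> list[str]:
    """Crude whitespace tokenizer; real tokenizers use subword units (BPE, etc.)."""
    return text.lower().split()

def embed(token: str) -> list[float]:
    """Pseudo-embedding derived from a hash of the token, scaled to [0, 1]."""
    digest = hashlib.sha256(token.encode()).digest()
    return [b / 255.0 for b in digest[:EMBEDDING_DIM]]

tokens = tokenize("Shattering texts into tokens")
vectors = [embed(t) for t in tokens]
print(tokens)            # ['shattering', 'texts', 'into', 'tokens']
print(len(vectors[0]))   # 4
```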

Searching for information is the quintessential misconception about where LLMs are helpful or improve existing technology. In my opinion, web search is a more effective way to find information, simply because search engines give users what they want faster, and in a format that fits the purpose much better than a chat tool: multiple sources to scroll through in a purpose-built user interface, with filter options, configuration settings, listed elements, excerpts, tabulated results… whatever that particular search tool offers. Not the “I’m here to assist you, let’s delve into the intricacies of the ever-evolving landscape of X…” followed by a long, perfectly composed paragraph based on a probabilistic model, which is what you typically get when you prompt ChatGPT for factual information about a topic named ‘X’.