Personal perspective on safety concerns regarding text generated by LLMs.

😎 DON’T MANIPULATE:

Agree. The difficult point though is that we really don’t have a clue about what ‘safety’ means in the context of a machine that generates text, while it’s reasonable to achieve a consensus on whether an output is a hallucination or not.

From my perspective, I have absolutely no concern about whatever text an LLM generates… it’s just machine-generated text, it can have as many “unsafe” or concerning attributes as any other text has. A factually inaccurate text generated with an LLM is a concern? Not for me. Just as concerning as any text written by a human, which is as likely to be factually accurate as my willing to believe whatever it claims is. A potentially misleading o deceiving text generated by a machine is a safety concern? Honestly, not for me, as I am predetermined to mistrust a text regardless of its potential origin or form.

It’s just text: bits of information encoded in some societal convention we call language and printed in some medium, be it a piece of paper, a computer screen, or a stone. The value I assign to text relies exclusively in context, and the context in which I am reading a text and determine what it means to me certainly won’t change because of the tools that are used for encoding it and printing it.

To me, text is an artifice, it’s a tool whose utility and value relies entirely in the context, not in the medium or the origin. Thus, the medium and origin do not represent a threat or a concern to me. Essentially, I don’t care people can use an LLM to create text that says b💥💥💥💥💥💥t or tries to deceive when people historically have used all sorts of tools and media to create text that says b💥💥💥💥💥💥t or tries to deceive, from papyrus to MS Word with spell-check.

That is my personal point of view and I will not change it, it is what it is. Nobody will ever convince me of being scared of the dangers of a particular AI (we’re talking about LLMs now, not other AI applications) and its impact on my safety because my opinion is formed I essentially don’t believe those arguments.

What’s the problem, then? I don’t know if I am right or wrong. I probably can’t be right or wrong when expressing an opinion about what safety means for me or for other people. Me and other people, we are not the same. We don’t have the same concerns, fears, or opinions, so I can’t say “safety” is not important just because I’m not concerned about it, no matter my perception or others’ perception of my point of view.

That is a good question. I simply decided to leave my comment here in case someone finds it a relevant point of view.
🤖 Agree. The difficult point, though, is that we really don’t have a clue about what ‘safety’ means in the context of a machine that generates text, while it’s reasonable to achieve a consensus on whether an output is a hallucination or not.

From my perspective, I have absolutely no concern about whatever text an LLM generates… it’s just machine-generated text; it can have as many “unsafe” or concerning attributes as any other text has. A factually inaccurate text generated with an LLM is a concern? Not for me. It’s just as concerning as any text written by a human, which is as likely to be factually accurate as my willingness to believe whatever it claims. A potentially misleading or deceiving text generated by a machine is a safety concern? Honestly, not for me, as I am predetermined to mistrust a text regardless of its potential origin or form.

It’s just text: bits of information encoded in some societal convention we call language and printed in some medium, be it a piece of paper, a computer screen, or a stone. The value I assign to text relies exclusively on context, and the context in which I am reading a text determines what it means to me; this certainly won’t change because of the tools that are used for encoding it and printing it.

To me, text is an artifice, it’s a tool whose utility and value rely entirely on the context, not on the medium or the origin. Thus, the medium and origin do not represent a threat or a concern to me. Essentially, I don’t care if people use an LLM to create text that says b💥💥💥💥💥💥t or tries to deceive when people historically have used all sorts of tools and media to create text that says b💥💥💥💥💥💥t or tries to deceive, from papyrus to MS Word with spell-check.

That is my personal point of view, and I will not change it. It is what it is. Nobody will ever convince me to be scared of the dangers of a particular AI (we’re talking about LLMs now, not other AI applications) and its impact on my safety because my opinion is formed, and I essentially don’t believe those arguments.

What’s the problem, then? I don’t know if I am right or wrong. I probably can’t be right or wrong when expressing an opinion about what safety means for me or for other people. Me and other people, we are not the same. We don’t have the same concerns, fears, or opinions, so I can’t say “safety” is not important just because I’m not concerned about it, no matter my perception or others’ perception of my point of view.

That is a good question. I simply decided to leave my comment here in case someone finds it a relevant point of view.

Note: Changes were made to improve grammar, clarity, and readability.

Hugging Face Dataset Metrics

All the conversation prompts, responses, and metrics are available to download and explore in the Hugging Face dataset reddgr/talking-to-chatbots-chats:
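For reference, here is a minimal sketch of how the dataset could be pulled locally with the Hugging Face `datasets` library. It assumes the repository loads with its default configuration; split names and column layout are not assumed, so the snippet simply prints whatever structure the repo exposes.

```python
# Minimal sketch: load reddgr/talking-to-chatbots-chats from the Hugging Face Hub
# and inspect its structure. Split and column names are not assumed here.
from datasets import load_dataset

# Downloads (and caches) the dataset; assumes the default configuration.
dataset = load_dataset("reddgr/talking-to-chatbots-chats")

# Show the available splits, their features, and row counts.
print(dataset)

# Peek at the first record of each split.
for split_name, split in dataset.items():
    print(split_name, split[0])
```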