ChatGPT is a Bullshitter
ChatGPT is a bullshitter, and if you use it to communicate with other people, you will be too.
My timelines have filled with gushing reports of the fantasticness of ChatGPT, which is depressing. There is breathless talk of using it for everything, and of how many jobs our AI overlords will replace.
- "It's like a calculator for words" - No, it isn't; calculators give correct answers and we throw out calculators which don't.
- "It'll answer my emails for me" - Well okay, if you want to bullshit people.
- "It'll write my 'cold emails' for me" - Oh dear god no; the thought of industrial-scale bullshit spamming my inboxes is enough to make me reach for my revolver.
Get the bandwagon rolling
Many of the crypto types who previously filled the timelines with 'Web 3.0' nonsense have now re-invented themselves as AI bullshitters. But there is no such thing as Artificial Intelligence; it's a badly-coined phrase from the 1950s. What LLMs such as ChatGPT are doing is 'statistics', or 'Machine Learning'. It's not artificial: LLMs are trained on the output of millions of real humans (and consider the awful pay of the people whose labour went into training the transformer). Nor are LLMs intelligent. Mere pattern-matching is not enough; intelligence synthesises and draws conclusions, has beliefs, and knows what it knows.
The language used to describe LLMs is deliberately anthropomorphic. Take "hallucination": LLMs use exactly the same process when 'hallucinating' as when producing accurate content, so in what sense is it hallucinating? Human beings hallucinate; machines are just wrong. The first person singular isn't appropriate either - "as a large language model I… " - it's not a person, it's auto-complete.
The demands to regulate the coming 'super-intelligence' are self-serving claptrap from companies looking to profit from the implied importance and the increased barriers to entry that regulation provides. All part of the hype.
But it's a bullshitter
Harry Frankfurt, a philosopher, wrote an essay (On Bullshit) defining this rhetorical style as persuasive speech whose essence is a “lack of connection to a concern with truth—this indifference to how things really are”. LLMs are designed to produce bullshit: persuasive speech with no concern for the truth.
A Generative Pre-trained Transformer (GPT) model is a next-token optimiser. The tokens are (roughly) words, and the optimiser seeks plausible text that is grammatically correct, not truthful. LLMs cannot ascertain truth - they merely manipulate symbols (tokens). Nor can LLMs generate new information; they're not intelligent or informed, so any insight must come from you. Since an LLM cannot draw new conclusions or insights, if you use ChatGPT to write for you, your writing will not contain any new information. So why should I bother with your messages?
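To make the point concrete, here is a deliberately crude sketch of next-token prediction: a toy bigram model rather than a real transformer, with an invented corpus. The only thing it optimises is "what usually comes next" - truth never enters into it.

```python
# A toy illustration of next-token prediction. The model only knows which word
# tends to follow which, so it produces plausible-looking text with no notion
# of whether that text is true. (The corpus is made up for illustration.)
import random
from collections import Counter, defaultdict

corpus = (
    "the report shows revenue grew last quarter . "
    "the report shows revenue fell last quarter . "
    "the report shows costs grew last quarter ."
).split()

# Count which token follows which: a bigram model, the crudest next-token optimiser.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def generate(start, length=8):
    """Repeatedly pick a likely next token. 'Likely' is the only criterion."""
    out = [start]
    for _ in range(length):
        candidates = following.get(out[-1])
        if not candidates:
            break
        tokens, counts = zip(*candidates.items())
        out.append(random.choices(tokens, weights=counts)[0])
    return " ".join(out)

print(generate("the"))
# e.g. "the report shows revenue fell last quarter ." - fluent, confident,
# and with no way of knowing whether revenue actually fell.
```

A real LLM is vastly better at the "plausible" part, but the objective is the same shape: pick a likely continuation, not a true one.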
LLMs produce the bland, insipid text of corporate-speak: the measured, authoritative style used by PR flacks, LinkedIn influencers and MBA-ists writing of the hard decision to let 10% of the workforce go, or claiming to take the security of personal data extremely seriously just after the data breach has finally gone public.
If you're an SDR who has to send a lot of emails and you think LLMs are your saviour, well, you're going down the quality curve, probably proving that a machine can replace you, and you'll sound like a bullshitter. There's enough spam as it is, and LLM output is likely to make the work of spam filters much harder. Face it: if your job could be done by an LLM such as ChatGPT, then you're in a bullshit job, and your employer is definitely aware of it.
Another thing: your prompts may disclose PII or proprietary information. Suppose you send a set of composite prompts containing a "name", an "engaging event", and a "call to action" - what have you just revealed to a third party? The privacy policy offers little protection (you divulged the data, after all). Is using ChatGPT a reasonable use of PII for internal business purposes?
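A hypothetical sketch of what that looks like in practice: the template, the field names and the prospect record below are all invented, but they stand in for the kind of composite prompt an outreach tool might assemble and send to a provider's API.

```python
# What actually ends up in the request body when a prompt template is filled
# in with prospect data. Names, fields and values here are invented examples.
import json

TEMPLATE = (
    "Write a friendly sales email to {name}, who recently {engaging_event}. "
    "End with this call to action: {call_to_action}."
)

prospect = {
    "name": "Jane Doe",                                          # PII
    "engaging_event": "signed a contract with our competitor",   # proprietary intel
    "call_to_action": "book a demo of our unreleased product",   # confidential roadmap
}

payload = {"prompt": TEMPLATE.format(**prospect)}

# This is the body that would be POSTed to the provider; from that point on,
# it is governed only by their privacy policy and retention practices.
print(json.dumps(payload, indent=2))
```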
Write like a person
The current mania around LLMs will pass as other fads do, either by settling into a few specialised use cases where they genuinely add value or by becoming grassed over and forgotten. Perhaps as a search or summarising tool on scoped data with a definite notion of truth, such as policy manuals or Customer Service and Support queries. Or maybe you're stuck for something to say and your hangover means you're under par this morning; in which case, okay, get your LLM of choice to generate something, then refine the text into something that's genuinely yours.
Bullshit and the bullshitters who spout it will always be with us, but decent use cases will emerge. After all, it's early days.
It is often a struggle to express your thoughts properly in writing. Writing is hard, but it is a skill that can be learned; writing well takes talent, and a machine will remain a machine. The process of writing to another human being and persuading them to take a course of action is a personal thing between people; don't be a bullshitter.