A Couple of Chats with ChatGPT
Don't blame the program for GIGO (Garbage In, Garbage Out).
GIGO is a basic principle in data management: If you have inaccurate or incomplete data going in, you will get an inaccurate or incomplete result coming out.
One of the best short descriptions of ChatGPT and similar chatbots that I’ve heard is: they create something that is plausible, not accurate.
So here are a couple of examples that I posted to LinkedIn two months ago:
https://www.linkedin.com/posts/martinschellindonesia_chatgpt-activity-7081172474683408385-0MSw
First, I asked OpenAI’s ChatGPT about the origin of two expressions:
https://chat.openai.com/share/91678c38-3815-4f0a-b13c-78dd08f9a2ea
I got replies that mimicked the most common explanations found on the web. For “raining cats and dogs,” the bot offered several “theories,” apparently presented in descending order of prominence. In other words, the bot follows the crowd: it gives top billing to the “theory” that is most common, not the one that is most logical or most authoritative.
For “posh,” I prompted the bot with “travel by sea,” so it focused its entire reply on the rampant folk etymology that attributes the word to an acronym. This explanation was debunked by Michael Quinion, who pointedly titled his 2005 book on folk etymologies Port Out, Starboard Home.
Then I asked ChatGPT about an ironic joke involving a baseball player ordering pizza:
The bot named only the late Yogi Berra, who uttered many ironic aphorisms while playing for the New York Yankees. Because the joke fits Berra’s style, it was repeated many times with his name attached after the internet took hold. However, other players were linked to the joke in print media before he was:
https://quoteinvestigator.com/2014/07/22/pizza/
I recall reading that the freebie version of ChatGPT will overtly deny that it has access to the web if asked. This is a bit sly on OpenAI’s part. True, the bot cannot sift real-time crowdsourcing by accessing the web. But its database is derived from the (somewhat) moderated contents of the web as they stood several years ago, when that version of the bot was prepared for release to the public. So the denial merely means that the current free version lacks an updated source of folk etymologies, rumors, etc.
In sum, this type of AI simply reflects a human tendency: we often echo comments robotically, so an AI or robot that mimics our “thoughts” shouldn’t surprise us.
I want to take this opportunity to clarify the title of this Substack section. Alexander Pope wrote “An Essay on Criticism” more than three centuries ago. His full admonition used the word “learning,” which I (and many others) misquoted as “knowledge”:
A little Learning is a dangerous thing; Drink deep, or taste not the Pierian Spring.
If we think superficially, parroting news items and speculation simply because they are echoed widely (or because “it sounds like it should be true, therefore it is”), then we are not drinking deeply from the spring of accumulated human wisdom.
Another way of saying this: don’t check your brain at the door!