Introduction
Prompt Engineering is blowing up as a topic of discussion, and now that ChatGPT4 is entering the scene, even more so. In one of those discussions, someone made a comment that mirrored something I have been thinking about recently.
The commenter suggested that prompts which are polite and respectful yield better results than simply barking orders.
To test this observation (theory?), I began hunting for prompts that would yield incorrect results in a predictable way. One of the first prompts I tried was a simple math question that ChatGPT has a history of getting wrong. The experiment and its results are covered in this blog post: Is ChatGPT as Bad at Maths as Some Say?
Unfortunately, that prompt did not turn out to be a valid question for testing the value of being polite.
So, I turned to ChatGPT4 to see if it had a few problematic prompts up its sleeve.
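For anyone who wants to run a quick polite-versus-blunt comparison of their own, here is a minimal sketch using the OpenAI Python SDK. The test question, model name, and exact phrasings below are illustrative assumptions of mine, not the prompts used in these experiments.

```python
# Minimal sketch: ask the same question politely and bluntly, then compare.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# environment variable. Question, model, and wording are illustrative only.
from openai import OpenAI

client = OpenAI()

QUESTION = "What is 7 * 8 + 12 / 4?"  # hypothetical test question

PROMPTS = {
    "polite": f"Could you please work out the following for me? {QUESTION} Thank you!",
    "blunt": f"Answer this: {QUESTION}",
}

for style, prompt in PROMPTS.items():
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduce run-to-run variation
    )
    print(f"{style}: {response.choices[0].message.content.strip()}")
```

A single pair of responses proves little, of course; to say anything about politeness you would want many repetitions across many questions, which is exactly why a prompt that fails in a predictable way is so useful.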

