The Place of Respect in Prompt Engineering

Introduction

Prompt engineering is blowing up as a topic of discussion, and now that ChatGPT4 is entering the scene, even more so. In one of those discussions, someone made a comment that mirrored something I’ve been thinking about recently.

The commenter suggested that polite, respectful prompts would yield better results than just barking orders.

To test this observation (theory?), I began hunting for prompts that would yield incorrect results in a predictable way. One of the first prompts I tried was a simple math question that ChatGPT has a history of getting wrong. The experiment and its results are covered in this blog post: Is ChatGPT as Bad at Maths as Some Say?
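For anyone who wants to try the same comparison themselves, here is a minimal sketch of how it could be set up with the OpenAI Python client. The specific math question, the two phrasings, and the model name are placeholders of my own choosing, not the exact prompts used in the experiment.

```python
# A minimal sketch (not the exact experiment): ask the same question twice,
# once tersely and once politely, and compare the answers.
# Assumes the openai package (>= 1.0) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

QUESTION = "What is 17 * 23 - 19?"  # stand-in for a question the model tends to get wrong

prompts = {
    "terse": QUESTION,
    "polite": (
        "Hello! When you have a moment, could you please work out "
        f"the following for me? {QUESTION} Thank you!"
    ),
}

for style, prompt in prompts.items():
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduce run-to-run variation so phrasing is the main difference
    )
    print(f"--- {style} prompt ---")
    print(response.choices[0].message.content)
```

Pinning the temperature to 0 keeps the responses as deterministic as the API allows, so any difference between the two runs is more likely to come from the phrasing than from random sampling.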

Unfortunately, that prompt did not turn out to be a valid question for testing the value of being polite.

So, I turned to ChatGPT4 to see if it had a few problematic prompts up its sleeve.

Is ChatGPT as Bad at Maths as Some Say?

Introduction

Is ChatGPT as bad at math as some say?

The answer to this is yes, maybe, and not necessarily; it all depends on the situation and on what, specifically, you are asking it to do.

A better question might be: Should I trust ChatGPT to solve my math problems?

The answer to this is definitely no. Not if by “trust” you mean taking whatever answer the AI gives you and implementing it without question, without double-checking, and without a simple reality check.
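To make that “reality check” idea concrete, here is a small sketch of what checking a numeric answer before acting on it might look like. The reply text, the expression, and the extract_number helper are all illustrative inventions, not part of the original experiment.

```python
# A sketch of a simple reality check: recompute the answer yourself before
# trusting it. The reply text and extract_number() helper are illustrative.
import re

def extract_number(text: str):
    """Return the last number mentioned in a model reply, or None."""
    matches = re.findall(r"-?\d+(?:\.\d+)?", text.replace(",", ""))
    return float(matches[-1]) if matches else None

model_reply = "The result of 17 * 23 - 19 is 372."  # stand-in for a real API response
claimed = extract_number(model_reply)
expected = 17 * 23 - 19  # work the same problem out independently

if claimed is not None and abs(claimed - expected) < 1e-9:
    print(f"Reality check passed: {claimed}")
else:
    print(f"Reality check failed: model said {claimed}, expected {expected}")
```

The point is not the particular helper, but the habit: if a result matters, verify it with a calculator, a few lines of code, or a back-of-the-envelope estimate before using it.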

Of course, you could also ask the question: should I trust ChatGPT to correctly answer ANY question?