🤖 Learn to Master ChatGPT (Sunday Special)
A Guide to Prompt Engineering
Hey, it’s Davis. You’re reading a free preview of our premium Sunday Deep Dive. I hope you enjoy it and find it worth your time. If you’d like to upgrade and keep getting these deep dives every Sunday, click here.
Mother wakens the crew of the Nostromo. Captain Dallas, still recovering from his long sleep, sits in a white room and types on a sleek, modern keyboard.
He’s talking with Mother. He’s talking with AI.
Since the dawn of Sci-Fi, writers have imagined communication between humans and computers. Today, I can tell a chat AI—like ChatGPT—to play the role of Mother from Alien and recreate one of my favorite movies.
Getting ChatGPT to play that role is prompt engineering, and the concept is a growing trend in tech. The way we speak to computers will matter more and more over time.
I’m reminded of a recent tweet from Andrej Karpathy: “The hottest new programming language is English.”
It’s not a far-fetched concept. After all, programming is talking to a computer. From the earliest IBM machines onward, every computer has required human instruction, and that’s not changing.
What’s changed is the ability to speak to sophisticated algorithms without pulling up a terminal or learning Python. AI is changing the world, and knowing how to talk to these tools is half the battle.
WHAT IS PROMPT ENGINEERING?
The term is relatively new, and its origins are disputed (we live in the internet age, where it’s hard to claim ownership of anything). Prompt engineering is the skill of instructing and teaching AI effectively.
If it helps, think of this as rapid testing or instruction writing for artificial intelligence.
What’s important is not to let this overwhelm you. Prompting is as old as AI models themselves; one of the earliest examples was showing a computer images of circles and triangles. Today’s neural networks process far more data, which creates far more complexity.
So, the concept is simple, but digging into the full power of AI today is something else entirely.
We’re not just talking about asking questions. Odds are, if you’re typing “what’s 2+2” into ChatGPT, you need to keep reading.
We can all ask chatbots questions. That can work more often than not. But AI is not perfect. A common metaphor I see is to treat GPT-based large language models like the smartest five-year-old you’ve ever met.
I have a niece around that age and can’t imagine trying to get her to write an essay on the effects of soil mismanagement in relation to Reconstruction politics. See! Your eyes glazed over reading that, so how do we make this work for our AI buddies?
The Principles of Prompting
Stop asking single-line questions. That’s like using a top-rated cookbook to find out how to make grilled cheese.
There are three ways to instantly get better at prompting and go from grilled cheese to top-notch bolognese. From there, we can get into some specific prompt concepts and the ability to unlock ChatGPT’s full potential.
Principle 1: Context is King
GPT-3.5 is swimming in data. When you give it a simple request, it can end up complicating things more than you realize. Did you ever wonder why ChatGPT is so bad at math?
The reality is the LLM is taking words and turning them into patterns. From there, it’s making an educated guess.
Give your chat AI a frame to work within. If you give it a math problem, make sure it grasps that you want it to do math. If you’d like ChatGPT to write a high school essay, make sure it knows to write at that level.
Instead of: “Plan a party for a kid.”
Try: “My child is turning 9. He likes superheroes and the color red. Help me plan a party for this weekend. Ten of his friends are coming to my house.”
You’ll get a much better response this way. Context is the compass that helps your chat companion land on the best guess and phrase it well.
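The same principle carries over if you ever talk to the model through code instead of the chat window. Here’s a minimal sketch in Python, assuming the official openai package and an API key in your environment; the model name and the little ask helper are just illustrations, and if your library version differs, the shape of the call will too. The only difference between the two requests is how much context the prompt carries.

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

def ask(prompt: str) -> str:
    """Send a single user prompt to the chat model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Low-context prompt: the model has to guess the age, theme, venue, and headcount.
print(ask("Plan a party for a kid."))

# Context-rich prompt: everything the model needs is spelled out up front.
print(ask(
    "My child is turning 9. He likes superheroes and the color red. "
    "Help me plan a party for this weekend. "
    "Ten of his friends are coming to my house."
))
```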
Principle 2: Get Specific
Pretend you’re writing a law that’s going to be judged by the Supreme Court of the United States. You know what they look for: narrow tailoring.
Keep things on track and stay focused. Avoid prompting outside the specific request; you’ll only hurt the chat AI’s ability to give you a quality response. Odds are it will skip over parts entirely if you confuse it with too many requests.
This runs parallel to context. If you set ChatGPT up in a room and then tell it to focus on describing the chair first, you’ll see better results.
Instead of: “I’m going to a job interview. Write five questions for me to answer. Add tips for how to not get nervous before the interview. Do not create questions asking about my background.”
Try: “You’re interviewing a software engineer. Create five questions to ask them to understand their skill set and qualifications better.”
Nothing limits the number of prompts you can send. Start focused, expand from the initial request, and try not to do everything at once.
Principle 3: When in Doubt: “Let’s take this step-by-step.”
Welcome. You’ve discovered the magic phrase. It slows everything down for the AI and gets you where you need to go.
You don’t need to start your prompt with this phrase. Wherever you use it, it tells ChatGPT to show its work.
We’ll explain where this concept comes from later in this briefing, but here’s the TL;DR: sometimes there’s a part of our prompt the model hasn’t interpreted correctly. “Let’s take this step-by-step” reminds both you and ChatGPT to slow down and get specific.
If you learn to use this phrase more often and find ways to make it work for you, you’ll become a better prompt engineer. One phrase can do a lot of heavy lifting.
Pro-tip: We’ve shown you “standard” prompts in all these examples. Many prompt engineers will use “Standard QA form” prompts. Here’s our example for this principle written that way.
Example:
“Q: The Industrial Revolution rapidly changed the infrastructure in London. Describe three essential innovations from this period and connect them to London’s development.
A: Let’s take this step-by-step.”
Even without our magic word, this style of standard prompting is quite helpful to adopt.
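If you’re scripting prompts rather than typing them, the QA form is easy to template. A tiny sketch in Python (the qa_prompt helper is purely illustrative, and the question is just the example above):

```python
def qa_prompt(question: str) -> str:
    """Wrap a question in the standard QA form and end with the step-by-step nudge."""
    return f"Q: {question}\nA: Let's take this step-by-step."

print(qa_prompt(
    "The Industrial Revolution rapidly changed the infrastructure in London. "
    "Describe three essential innovations from this period and connect them to London's development."
))
```

You’d then send the returned string as your prompt, exactly like any other.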
However, we’re beginning to stumble into the advanced tactics used in prompt engineering, so it’s time for a new section.
UNIQUE WAYS TO PROMPT
Let’s preface this: we can go super deep here. Prompt engineering is changing daily, and as these models get more sophisticated, the need to adapt your prompts grows with them.
To keep things clean, I will go through these using our metaphor from earlier. Let’s pretend ChatGPT is a super-intelligent toddler.
Got it? With that buy-in, we can continue.
1/ Role Prompting
We’ll start with a popular tactic. Our toddler is great at imagining things. You tell them they’re a fireman, and suddenly they can give you detailed ways to ensure your apartment is up to code. Role-playing is a fun, easy way to build context.
The best part of role prompting is how easy it is to understand and use. All you need to do is tell ChatGPT to play a role. From there, the AI will do its best to fill the part like that enthusiastic drama student from your old high school.
You can even take this a step further. Try framing your prompt as a script. Tell the LLM specific instructions around a scene that gives you the answer to your question.
TRY IT OUT FOR YOURSELF:
Copy this prompt into ChatGPT and find a destination!
“Act as a travel guide. I will tell you my location, and you will suggest a place to visit near my location. In some cases, I will also tell you the type of places I like to visit. You will then also suggest places of a similar type that are close to my first location. My first suggestion: [fill it in]”
Why would you take that extra step? Because, popular as it is, role prompting does not necessarily improve accuracy. You can tell your five-year-old they’re a mathematician, and they’ll still manage to screw things up.
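If you use the API rather than the chat window, the natural home for a role is the system message. A hedged sketch, again assuming the official openai Python package and an illustrative model name; the travel-guide wording is the prompt from above, lightly condensed:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[
        # The system message establishes the role before the user says a word.
        {
            "role": "system",
            "content": (
                "Act as a travel guide. I will tell you my location and, sometimes, "
                "the type of places I like. Suggest places to visit near my location, "
                "including places of a similar type."
            ),
        },
        # Replace the bracket with your own location, just like in the chat version.
        {"role": "user", "content": "My first suggestion: [fill it in]"},
    ],
)
print(response.choices[0].message.content)
```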
Let’s get deeper.
2/ Chain-of-Thought Prompting
There’s a scene in Guardians of the Galaxy where Rocket Raccoon is trying to teach young Groot how to activate a complicated device. That’s chain-of-thought prompting.
You take an example question and answer it for ChatGPT. Show it your chain of thought. Then you give it a new question in the same vein and ask for an answer.
This prompt style allows you to get more specific. You’re telling your toddler they’re here to answer this particular question with one specific logic pattern.
Within this style are two sub-categories worth knowing. Here’s the rundown:
Zero-shot chain-of-thought is “Let’s take this step-by-step” in action: you frame the question the same way, but you don’t give the model a worked example first. Instead, you ask it to think through the steps itself. EX: Q: X is A. Y is B. What is C? A: Let’s take this step-by-step.
Self-consistency means sampling several responses to the same question and keeping the most common answer. You give ChatGPT more swings at the ball, then look at where the hits cluster.
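Self-consistency is also simple to sketch in code: sample the same chain-of-thought prompt several times at a non-zero temperature, pull out each final answer, and keep the most common one. A rough illustration, assuming the official openai Python package; the model name is illustrative, and treating the last line of each reply as its final answer is a simplifying assumption, not a rule.

```python
from collections import Counter
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def self_consistent_answer(question: str, samples: int = 5) -> str:
    """Ask the same question several times and majority-vote the final answers."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        temperature=0.7,        # non-zero temperature so the samples actually differ
        n=samples,              # request several completions in one call
        messages=[{
            "role": "user",
            "content": f"Q: {question}\nA: Let's take this step-by-step.",
        }],
    )
    # Simplifying assumption: the last line of each reply holds its final answer.
    finals = [choice.message.content.strip().splitlines()[-1] for choice in response.choices]
    answer, _votes = Counter(finals).most_common(1)[0]
    return answer

print(self_consistent_answer(
    "Which is faster to get home: a 60 minute route or a 145 minute route?"
))
```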
TRY IT OUT FOR YOURSELF:
Copy this prompt into ChatGPT and see how accurate it is:
“Q: Which is a faster way to get home?
Option 1: Take a 10 minute bus, then a 40 minute bus, and finally a 10 minute train.
Option 2: Take a 90 minute train, then a 45 minute bike ride, and finally a 10 minute bus.
A: Option 1 will take 10+40+10 = 60 minutes.
Option 2 will take 90+45+10=145 minutes.
Since Option 1 takes 60 minutes and Option 2 takes 145 minutes, Option 1 is faster.
Q: Which is a faster way to get to work?
Option 1: Take a 1000 minute bus, then a half hour train, and finally a 10 minute bike ride.
Option 2: Take an 800 minute bus, then an hour train, and finally a 30 minute bike ride.
A: ”
Alright, you’re almost there—one more to go.
3/ General Knowledge Prompting
You’re going to notice a trend here. This prompt style also circles back to context and narrow tailoring.
All you do is tell your toddler how the world works. The cow goes moo. The dog goes woof. So what does a cat say?
It’s an oversimplification, but the core reasoning is there. Show ChatGPT some knowledge and make that the sole focus of the chat. You can take an article from the internet and summarize it for the model. Make sure it understands by asking it to relay the information back to you.
Once you know its attention is set in the right place, get to work. For instance, we can share an Inclined newsletter with it and tell ChatGPT about its structure and tone.
From there, you can provide new information and tell ChatGPT to summarize it within the same structure as Inclined. You both share the same general knowledge now.
TRY IT OUT FOR YOURSELF:
Copy this prompt into ChatGPT and test it out:
“Prompt 1: Look over this article: [pick an article]. Break down its structure and general tone.
Prompt 2: Recall the structure and tone you mentioned above. Take that general knowledge and summarize this article: [pick a new one] using the same structure and tone.”
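Scripted, the trick is that both prompts share one message history, so the breakdown of the first article is still in context when you ask for the second summary. A rough sketch, assuming the official openai Python package; the model name is illustrative, and article_one and article_two are placeholders you would paste in yourself.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set
MODEL = "gpt-3.5-turbo"  # illustrative model choice

article_one = "..."  # paste the article whose structure and tone you want to borrow
article_two = "..."  # paste the article you want summarized in that style

# Prompt 1: have the model break down the reference article.
history = [{
    "role": "user",
    "content": f"Look over this article: {article_one}\nBreak down its structure and general tone.",
}]
first = client.chat.completions.create(model=MODEL, messages=history)
history.append({"role": "assistant", "content": first.choices[0].message.content})

# Prompt 2: the breakdown above is still in the history, so the model can reuse it.
history.append({
    "role": "user",
    "content": (
        "Recall the structure and tone you described above. "
        f"Summarize this article using that same structure and tone: {article_two}"
    ),
})
second = client.chat.completions.create(model=MODEL, messages=history)
print(second.choices[0].message.content)
```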
Did you know some people don’t consider that prompt engineering?
PROMPT CULTURE
“How can something not be prompt engineering if it’s a prompt style?”
Good question, imaginary reader. The culture around this skill is relatively fresh, so some of these concepts are seen as too basic to count as real prompt engineering.
General knowledge prompting is simply establishing context, and for some, that’s a baseline everyone needs to do anyway. The same can be said for role prompting. These tiny distinctions are mostly semantics.
Don’t sweat whether you’re a “real” prompt engineer. Test this out and share your insights in these communities. The opportunity is there for you.
You may even know about DAN (we’ve covered it in previous newsletters) and other AI hacking methods. Those all start with prompt engineering. Some make the case that unless you push the AI outside its parameters, you’re not genuinely doing prompt engineering.
I have to disagree. Careers centered directly on this skill are sprouting up everywhere, and many require a core understanding of the prompt styles we’ve discussed.
Yep, you can learn this and make money from talking with AI.
Anthropic even posted a role for a prompt engineer that nets a quarter million in salary. I did not make that up and even considered sprucing up the old resume. When a new skill like this comes about, it’s worth looking at.
There are many other examples like this, and OpenAI uses a red-teaming strategy in which its own engineers attempt to prompt hack its GPT models.
I can tell you all about the open roles here, but tomorrow the whole cycle will change. Isn’t that exciting, though? The entire identity around prompt engineering will change by this time next year.
WHAT SHOULD YOU TAKE AWAY?
Communication is everything. Learning to speak with AI is rising in importance.
We all watch the new wonders in AI, mouths agape, because we know this will disrupt every industry. If any of this piqued your interest, the window to pursue it is open now. Ride that wave and learn to become a brilliant prompt engineer.
Heck, even if you don’t want to switch careers, talking with ChatGPT and all the newest LLMs is becoming a part of our daily routine. Get to the point where you maximize every interaction and work with these chatbots to upskill your workflow.
Prompt engineering can save you time, eliminate hassle, and even help you become a more patient person. Focus on what you want and explain it with intent.
Make magic happen, and remember: take it step-by-step.
TIME FOR SOME Q&A ❓❓❓
What do you mean by singularity?
Singularity has multiple definitions. When we speak about it in terms of AI, we’re discussing a “technological singularity.”
You might know the term Artificial General Intelligence (AGI) and the idea that AI will evolve to the point of thinking and acting the same as a human. A good reference point is Sonny from I, Robot. He’s AGI.
A technological singularity is what some believe happens after humanity advances computer intelligence past our point of comprehension. The AI goes from general intelligence to superintelligence.
The runaway development of this technology and its exponential growth may lead to the end of human civilization, or at least civilization as we know it today. Remember, it’s not necessarily a doom-and-gloom scenario, but many dystopian sci-fi books and movies were born from this concept.
The reason you see it brought up more often now is the speed at which AI is growing and developing, and the signs that we’re moving toward that point.
Futurist Ray Kurzweil predicted in his book “The Singularity Is Near” that it will happen by 2045. He’s since moved his estimate up to 2030.
Upgrade today for less than the price of a latte a month and get benefits like:
A weekly Sunday deep dive into AI trends
Access to Q&A privileges to get your biggest AI questions answered
Exclusive discounts & invites to beta AI tools