Introduction
In this lesson, we will explore prompting and prompt engineering, which allow us to interact effectively with LLMs for various applications. By crafting well-designed prompts, we can leverage LLMs to perform tasks such as question answering, text generation, and more.
We will delve into zero-shot prompting, where the model produces results without explicit examples, and then transition to in-context learning and few-shot prompting, where the model learns from demonstrations to handle complex tasks with minimal training data.
Prompting and Prompt Engineering
Prompt engineering is the practice of designing and optimizing prompts to interact effectively with LLMs for various applications. It enables developers and researchers to harness the capabilities of LLMs for tasks such as question answering, arithmetic reasoning, text generation, and more.
At its core, prompting involves presenting a specific task or instruction to the language model, which then generates a response based on the information provided in the prompt. A prompt can be as simple as a question or instruction or include additional context, examples, or inputs to guide the model towards producing desired outputs. The quality of the results largely depends on the precision and relevance of the information provided in the prompt.
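To make the anatomy of a prompt concrete, here is a minimal sketch that assembles an instruction, optional context, and an input into a single prompt string. The build_prompt helper and its field labels are illustrative, not part of any library.

```python
# Illustrative sketch: a prompt can combine an instruction, optional
# context, and the input itself. Names and labels here are invented.
def build_prompt(instruction, context=None, user_input=None):
    parts = [instruction]
    if context:
        parts.append(f"Context: {context}")
    if user_input:
        parts.append(f"Input: {user_input}")
    return "\n".join(parts)

prompt = build_prompt(
    instruction="Answer the question in one sentence.",
    context="Paris is the capital of France.",
    user_input="What is the capital of France?",
)
print(prompt)
```

Separating the parts this way makes it easy to experiment with how much context the model actually needs for a given task.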
Let's incorporate these ideas into code examples. Before running them, remember to load your environment variables from your .env file as follows.
from dotenv import load_dotenv
load_dotenv()
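For load_dotenv() to have any effect, the .env file in your working directory is assumed to contain your OpenAI key as a key=value line, for example:

```
OPENAI_API_KEY=sk-...
```

Keep this file out of version control; the openai library reads OPENAI_API_KEY from the environment automatically.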
Example: Story Generation
In this example, the prompt sets up the start of a story, providing initial context ("a world where animals could speak") and a character ("a courageous mouse named Benjamin"). The model's task is to generate the rest of the story based on this prompt.
Note that in this example we define a prompt_system and a prompt separately. The OpenAI chat API works this way, requiring a “system prompt” to steer the model's behaviour, unlike LLM APIs that accept only a single standard prompt.
import openai

# Note: this uses the legacy (pre-1.0) openai Python client interface.
prompt_system = "You are a helpful assistant whose goal is to help write stories."

prompt = """Continue the following story. Write no more than 50 words.
Once upon a time, in a world where animals could speak, a courageous mouse named Benjamin decided to"""

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": prompt_system},
        {"role": "user", "content": prompt},
    ],
)

print(response.choices[0]["message"]["content"])
embark on a journey to find the legendary Golden Cheese. With determination
in his heart, he ventured through thick forests and perilous mountains,
facing countless obstacles. Little did he know that his bravery would
lead him to the greatest adventure of his life.
Example: Product Description
Here, the prompt is a request for a product description with key details ("luxurious, hand-crafted, limited-edition fountain pen made from rosewood and gold"). The model is tasked with writing an appealing product description based on these details.
import openai

prompt_system = "You are a helpful assistant whose goal is to help write product descriptions."

prompt = """Write a captivating product description for a luxurious, hand-crafted, limited-edition fountain pen made from rosewood and gold.
Write no more than 50 words."""

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": prompt_system},
        {"role": "user", "content": prompt},
    ],
)

print(response.choices[0]["message"]["content"])
Experience the epitome of elegance with our luxurious limited-edition
fountain pen. Meticulously handcrafted from exquisite rosewood and
shimmering gold, this writing instrument exudes sophistication in every
stroke. Elevate your writing experience to new heights with this opulent
masterpiece.
Zero-Shot Prompting
In the context of prompting, “zero-shot prompting” means directly asking the model for a result without providing reference examples of the task. For many tasks, LLMs are capable enough to produce good results this way. This is exactly what we did in the examples above. Here’s a new example where we ask an LLM to write a short poem about summer.
import openai

prompt_system = "You are a helpful assistant whose goal is to write short poems."

prompt = """Write a short poem about {topic}."""

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": prompt_system},
        {"role": "user", "content": prompt.format(topic="summer")},
    ],
)

print(response.choices[0]["message"]["content"])
In the realm of golden rays,
Summer dances in perfect sway,
Nature's canvas aglow with hues,
Kissing warmth upon the dews.
Breezes whisper through the trees,
Serenading the humming bees,
Joyful laughter fills the air,
As sunshine gleams without a care.
Sand between our toes, so fine,
Waves crashing in rhythmic rhyme,
Picnics filled with sweet delight,
Summer's pleasures, pure and bright.
Days stretch long, nights invite,
Stargazing dreams take flight,
Fireflies dance in twilight's haze,
Summer's magic shall never fade.
The generated poem is nice, but what if we have a specific style of poem we’d like the model to generate? We could describe the desired style in the prompt, or simply provide relevant examples of what we need.
In-Context Learning And Few-Shot Prompting
In the context of LLMs, in-context learning is a powerful approach where the model learns from demonstrations or exemplars provided within the prompt. Few-shot prompting is a technique under in-context learning that involves giving the language model a few examples or demonstrations of the task at hand to help it generalize and perform better on complex tasks.
Few-shot prompting allows language models to learn from a limited amount of data, making them more adaptable and capable of handling tasks with minimal training samples. Instead of relying solely on zero-shot capabilities (where the model predicts outputs for tasks it has never seen before), few-shot prompting leverages the in-context demonstrations to improve performance.
In few-shot prompting, the prompt typically includes multiple questions or inputs along with their corresponding answers. The language model learns from these examples and generalizes to respond to similar queries.
import openai

prompt_system = "You are a helpful assistant whose goal is to write short poems."

prompt = """Write a short poem about {topic}."""

examples = {
    "nature": "Birdsong fills the air,\nMountains high and valleys deep,\nNature's music sweet.",
    "winter": "Snow blankets the ground,\nSilence is the only sound,\nWinter's beauty found.",
}

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": prompt_system},
        {"role": "user", "content": prompt.format(topic="nature")},
        {"role": "assistant", "content": examples["nature"]},
        {"role": "user", "content": prompt.format(topic="winter")},
        {"role": "assistant", "content": examples["winter"]},
        {"role": "user", "content": prompt.format(topic="summer")},
    ],
)

print(response.choices[0]["message"]["content"])
Golden sunbeams shine,
Warm sands between toes divine,
Summer memories, mine.
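The alternating user/assistant demonstration pattern used above can also be assembled programmatically when you have many examples. A sketch, where build_few_shot_messages is an illustrative helper rather than a library function:

```python
# Illustrative helper: build a chat message list from a dictionary of
# demonstration pairs, ending with the query the model should answer.
def build_few_shot_messages(system_prompt, template, examples, query_topic):
    messages = [{"role": "system", "content": system_prompt}]
    for topic, completion in examples.items():
        messages.append({"role": "user", "content": template.format(topic=topic)})
        messages.append({"role": "assistant", "content": completion})
    messages.append({"role": "user", "content": template.format(topic=query_topic)})
    return messages

messages = build_few_shot_messages(
    system_prompt="You are a helpful assistant whose goal is to write short poems.",
    template="Write a short poem about {topic}.",
    examples={"nature": "Birdsong fills the air...", "winter": "Snow blankets the ground..."},
    query_topic="summer",
)
# messages now holds: system, then a (user, assistant) pair per example,
# then the final user query — ready to pass to the chat API.
```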
Limitations of Few-shot Prompting
Despite its effectiveness, few-shot prompting does have limitations, especially for more complex reasoning tasks. In such cases, advanced techniques like chain-of-thought prompting have gained popularity. Chain-of-thought prompting breaks down complex problems into multiple steps and provides demonstrations for each step, enabling the model to reason more effectively.
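To illustrate the idea, here is a sketch of a chain-of-thought style prompt in which the demonstration spells out intermediate reasoning steps rather than just the final answer. The example problems and numbers are invented for illustration.

```python
# Chain-of-thought sketch: the demonstration shows the reasoning steps,
# encouraging the model to reason step by step on the new question.
cot_example = (
    "Q: A shop has 23 apples and sells 9. It then buys 6 more. How many apples now?\n"
    "A: Start with 23 apples. Selling 9 leaves 23 - 9 = 14. "
    "Buying 6 more gives 14 + 6 = 20. The answer is 20.\n"
)

question = (
    "Q: A library has 40 books, lends out 15, and receives 8 donations. "
    "How many books now?\nA:"
)

prompt = cot_example + "\n" + question
print(prompt)
```

Passed as a user message, this prompt typically elicits a step-by-step answer in the same style as the demonstration.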
Conclusion
In this lesson, we explored prompting in the context of language models. Prompting involves presenting specific tasks or instructions to an LLM to generate desired responses. We learned that the quality of results largely depends on the precision and relevance of the information provided in the prompt.
Through code examples, we saw how to use prompts for story generation and product descriptions. We also explored zero-shot prompting, where the model performs tasks without explicit reference examples. We then introduced few-shot prompting, a powerful in-context learning approach that improves the model's performance on more complex tasks. Few-shot prompting allows the model to learn from a limited number of examples, making it more adaptable and capable of handling tasks with minimal training data.
However, we also recognized that few-shot prompting has its limitations, particularly for complex reasoning tasks. In such cases, advanced techniques like chain-of-thought prompting are gaining popularity by breaking down complex problems into multiple steps with demonstrations for each step.