Anthropic Cast (@ Anthropic, 09/06/24): In the future, prompting may involve the model eliciting information from us rather than us having to specify everything.
Video: AI prompt engineering: A deep dive (@ Anthropic, 09/06/24)
Related Takeaways
Anthropic Cast (@ Anthropic, 09/06/24): The future of prompting may involve more guided interactions with the model, rather than just typing into a text box.
Anthropic Cast (@ Anthropic, 09/06/24): When prompting, it's important to give the model an 'out' for unexpected inputs, allowing it to indicate uncertainty when necessary.
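As a minimal sketch of this idea (the helper and the exact wording are illustrative, not from the episode), the 'out' can be an explicit fallback token written into the prompt:

```python
def prompt_with_out(task: str, user_input: str) -> str:
    """Build a prompt that gives the model an explicit 'out': a fixed
    token to emit when the input can't be handled, rather than
    forcing a confident-sounding guess."""
    return (
        f"{task}\n\n"
        "If the input is unclear, malformed, or outside the scope of "
        "the task, reply with exactly UNSURE instead of guessing.\n\n"
        f"Input: {user_input}"
    )

# Example: a sentiment task fed a garbled input the model may not handle.
prompt = prompt_with_out(
    "Classify the sentiment of the input as positive or negative.",
    "asdf ??? 123",
)
```

The fixed token (here `UNSURE`) makes the uncertainty case easy to detect downstream with a simple string check.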
Sander Schulhoff (@ Lenny's Podcast, 06/19/25): Including additional information or context in prompts significantly improves the model's understanding and output quality.
Kevin Weil (@ Lenny's Podcast, 04/10/25): You can enhance your prompting by including examples in your prompt. For instance, provide a problem and a good answer, and then ask the model to solve a similar problem.
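A few-shot prompt of the kind Weil describes can be assembled mechanically; this sketch (function name and layout are one common convention, not a quoted method) joins worked problem/answer pairs and leaves the final answer blank for the model:

```python
def few_shot_prompt(examples: list[tuple[str, str]], new_problem: str) -> str:
    """Join worked (problem, answer) pairs, then pose a new problem
    with the answer left blank for the model to complete."""
    blocks = [f"Problem: {p}\nAnswer: {a}" for p, a in examples]
    blocks.append(f"Problem: {new_problem}\nAnswer:")
    return "\n\n".join(blocks)

prompt = few_shot_prompt(
    [("What is 2 + 2?", "4"), ("What is 10 - 3?", "7")],
    "What is 6 + 5?",
)
```

Keeping every example in the same "Problem:/Answer:" shape is what lets the model infer the pattern and continue it.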
Sander Schulhoff (@ Lenny's Podcast, 06/19/25): Providing additional information and examples significantly improves the results in conversational prompt engineering.
Anthropic Cast (@ Anthropic, 09/06/24): Reading prompts and model outputs closely helps improve prompting skills; experimentation is key.
Anthropic Cast (@ Anthropic, 09/06/24): The skill of prompting is about introspection, understanding what you want, and making yourself legible to the model.
Sander Schulhoff (@ Lenny's Podcast, 06/19/25): Chain-of-thought prompting, where you instruct the model to think step by step, is less necessary with modern reasoning models but can still be useful in some cases. If you're running thousands or millions of inputs through your prompt, you'll still need to use classical prompting techniques to make your prompt more robust.
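The classical technique mentioned here is often just an appended instruction; a minimal sketch (the phrasing is one common variant, not a quote from the episode):

```python
COT_INSTRUCTION = "Think step by step, then give the final answer on its own line."

def with_chain_of_thought(question: str) -> str:
    """Append a step-by-step instruction to elicit intermediate
    reasoning, e.g. for non-reasoning models or large batch runs."""
    return f"{question}\n\n{COT_INSTRUCTION}"

prompt = with_chain_of_thought(
    "A train travels 120 km in 2 hours. What is its average speed?"
)
```

Because it is a pure string transform, the same suffix can be applied uniformly across thousands of batched inputs.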
Anthropic Cast (@ Anthropic, 09/06/24): When prompting, I try to give examples that are not too similar to the data the model will see, to avoid overly consistent responses.