Use positive instructions

Growing research suggests that focusing on positive instructions in prompting can be more effective than relying heavily on constraints [3].
Instructions directly communicate the desired outcome, whereas constraints might leave the model guessing about what is allowed [3].
This approach gives the model flexibility and encourages creativity within the defined boundaries, while constraints can limit its potential [3].
Use positive instructions: rather than telling the model what not to do, tell it what to do instead [3].
Prioritize instructions that clearly state what you want the model to do, and use constraints only when necessary for safety, clarity, or specific requirements [3].
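As a minimal illustration of this advice, the sketch below contrasts a constraint-heavy prompt with a positively framed equivalent for the same summarization task. The prompt wording is our own example, not taken from the cited guides.

```python
# Illustrative sketch: the same summarization task phrased two ways.
# Both prompts are hypothetical examples, not drawn from the cited guides.

# Constraint-heavy: lists what the model must NOT do, leaving the desired
# output mostly implicit.
constraint_prompt = (
    "Summarize this article. Do not write more than three sentences. "
    "Do not use bullet points. Do not include your own opinions."
)

# Positive instructions: states what the model SHOULD do, making the target
# output explicit; a single constraint is kept only where it adds clarity.
positive_prompt = (
    "Summarize this article in three sentences of neutral, factual prose, "
    "focusing on the main argument and the key supporting evidence. "
    "Constraint: quote the article directly at most once."
)

if __name__ == "__main__":
    print("Constraint-heavy prompt:\n", constraint_prompt, "\n")
    print("Positively framed prompt:\n", positive_prompt)
```

Note how the positive version still retains one constraint where it genuinely clarifies the requirement, in line with the guidance above.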