This article introduces the CLEAR Framework for Prompt Engineering, designed to optimize interactions with AI language models like ChatGPT. The framework encompasses five core principles—Concise, Logical, Explicit, Adaptive, and Reflective—that facilitate more effective AI-generated content evaluation and creation. Additionally, the article discusses technical aspects of prompts, such as tokens, temperature, and top-p settings. By integrating the CLEAR Framework into information literacy instruction, academic librarians can empower students with critical thinking skills for the ChatGPT era and adapt to the rapidly evolving AI landscape in higher education. (Leo S. Lo, 2023, The Journal of Academic Librarianship, https://doi.org/10.1016/j.acalib.2023.10272)
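To make the article's mention of temperature and top-p concrete, here is a minimal sketch of how those sampling settings typically appear in an OpenAI-style chat request. The payload layout and model name are illustrative assumptions for this sketch, not details from Lo's article.

```python
# Sketch: where temperature and top-p live in an OpenAI-style request.
# Field names follow the common chat-completions convention; the model
# name is a placeholder for illustration only.

def build_request(prompt: str, temperature: float = 0.7, top_p: float = 1.0) -> dict:
    """Assemble a chat-completion request payload.

    temperature: higher values (e.g. 1.0) make output more varied;
                 lower values (e.g. 0.2) make it more deterministic.
    top_p:       nucleus sampling -- the model samples only from tokens
                 whose cumulative probability mass stays within top_p.
    """
    return {
        "model": "example-model",  # placeholder, not a real model name
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "top_p": top_p,
    }

request = build_request("Summarize the CLEAR Framework.", temperature=0.2, top_p=0.9)
```

In practice you would lower temperature (or tighten top-p) for factual, evaluative tasks and raise it for brainstorming; adjusting both at once is usually discouraged because their effects overlap.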
Be intentional, even scientific, with your prompts. Do not include any confidential, proprietary, or personal information in them.
It pays to be nice to your AI: Large language models (LLMs) tend to give better answers when prompted respectfully, and impolite prompts can "significantly affect LLM performance," per a new cross-cultural research paper.
Why it matters: Your prompt may affect the answer you get today — and also how well an AI model can answer for everyone else tomorrow.
What they did: The researchers tested a half-dozen chatbots against dozens of tasks, using up to 150 prompts per task.
What they found: LLMs mirror certain human communication traits, which means politeness toward chatbots tends to generate better responses, just as politeness does in human conversation.
Reality check: Chatbots are not sentient, and your politeness isn't making them feel good.