
Artificial Intelligence (AI)

Ethics, readings, help, and tools for AI in an academic and research setting. Includes a comparative table (AI matrix) of various AI tools/platforms.

The CLEAR path: A framework for enhancing information literacy through prompt engineering

This article introduces the CLEAR Framework for Prompt Engineering, designed to optimize interactions with AI language models like ChatGPT. The framework encompasses five core principles (Concise, Logical, Explicit, Adaptive, and Reflective) that support more effective evaluation and creation of AI-generated content. Additionally, the article discusses technical aspects of prompts, such as tokens, temperature, and top-p settings. By integrating the CLEAR Framework into information literacy instruction, academic librarians can empower students with critical thinking skills for the ChatGPT era and adapt to the rapidly evolving AI landscape in higher education. (Leo S. Lo, 2023, The Journal of Academic Librarianship, https://doi.org/10.1016/j.acalib.2023.102720)
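
Temperature and top-p are sampling parameters exposed by most LLM APIs: temperature scales how random the model's word choices are, while top-p (nucleus sampling) restricts choices to the smallest set of tokens whose combined probability reaches p. As a minimal sketch of setting them with the OpenAI Python client (the model name and parameter values here are illustrative assumptions, not taken from the article):

```python
# Minimal sketch: requesting a completion with explicit sampling settings.
# Assumes the OpenAI Python client (openai>=1.0) and an OPENAI_API_KEY in the
# environment; model name and values are illustrative, not from Lo's article.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",              # hypothetical model choice
    messages=[{"role": "user",
               "content": "Summarize the CLEAR Framework in two sentences."}],
    temperature=0.2,   # lower = more focused, more deterministic sampling
    top_p=1.0,         # nucleus sampling: consider the full probability mass
    max_tokens=150,    # token budget for the reply (prompts also consume tokens)
)

print(response.choices[0].message.content)
```

Lower temperature values suit summaries and factual tasks, while higher values suit brainstorming; API documentation generally recommends adjusting temperature or top_p, but not both at once.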

The CLEAR Framework's five components
  1. Concise: brevity and clarity in prompts
  2. Logical: structured and coherent prompts
  3. Explicit: clear output specifications
  4. Adaptive: flexibility and customization in prompts
  5. Reflective: continuous evaluation and improvement of prompts

Be intentional, even scientific, with your prompts, and never include confidential, proprietary, or personal information in them.
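
To make the five principles concrete, here is a hypothetical sketch of a single prompt annotated against them (the prompt wording is our illustration, not an example from Lo's article):

```python
# Hypothetical illustration of the CLEAR principles applied to one prompt;
# the wording is our own, not an example from Lo's article.
prompt = (
    # Concise: a single, direct request with no filler.
    "Summarize this article on prompt engineering. "
    # Logical: steps ordered the way the model should work through them.
    "First state the thesis, then the key evidence, then the conclusion. "
    # Explicit: the output format and length are specified up front.
    "Answer in exactly three bullet points of one sentence each."
)
print(prompt)

# Adaptive: if the answer misses the mark, adjust the wording or constraints and re-run.
# Reflective: compare each output against your goal and keep refining the prompt.
```

The first three principles shape the prompt itself; Adaptive and Reflective describe the revision loop you run around it.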

"Politeness" in your prompts makes a difference

Being nice to chatbots pays off (article from Axios)

It pays to be nice to your AI: Large language models (LLMs) tend to give better answers when prompted respectfully, and failing to do so can "significantly affect LLM performance," per a new cross-cultural research paper.

Why it matters: Your prompt may affect the answer you get today — and also how well an AI model can answer for everyone else tomorrow.

  • "Impolite prompts may lead to a deterioration in model performance, including generations containing mistakes, stronger biases, and omission of information," the researchers found.

What they did: The researchers tested a half-dozen chatbots against dozens of tasks, using up to 150 prompts per task.

What they found: LLMs mirror certain human communication traits, which means politeness toward chatbots tends to generate better responses, just as politeness does in human conversation.

  • The finding held true across English, Chinese and Japanese prompts and with each chatbot tested.
  • "Impolite prompts often result in poor performance, but excessive flattery is not necessarily welcome," the researchers found.
  • Excessively rude or flattering prompts also tended to produce longer answers in English and Chinese.
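
As a rough sketch of how you might observe this effect yourself, the snippet below sends the same task with polite, neutral, and impolite framing and compares the replies (the prompts and model choice are our illustrative assumptions; the study's cross-lingual protocol was far more systematic):

```python
# Rough sketch: compare polite, neutral, and impolite phrasings of one task.
# Prompts and model choice are illustrative assumptions; the paper's
# cross-lingual protocol was far more systematic.
from openai import OpenAI

client = OpenAI()

phrasings = {
    "polite": "Could you please explain why the sky is blue? Thank you!",
    "neutral": "Explain why the sky is blue.",
    "impolite": "Explain why the sky is blue. Hurry up and don't waste my time.",
}

for tone, prompt in phrasings.items():
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0,        # reduce sampling noise so differences reflect the prompt
    )
    text = reply.choices[0].message.content
    print(f"--- {tone} ({len(text)} characters) ---\n{text}\n")
```

In the study's terms, the impolite variant is the one most likely to show mistakes, stronger biases, or omitted information.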

Reality check: Chatbots are not sentient, and your politeness isn't making them feel good.
