When educators write prompts for information, a generative AI model doesn't "know" the answer. Instead, it predicts the most likely response based on its training data. Regardless of the quality of your prompt, the model might generate an incorrect or fabricated response.
These fabrications can spread misinformation. Copilot Chat aims to base all its responses on reliable sources, but AI-generated responses might be incorrect, and non-Microsoft content on the internet might not always be accurate or reliable. Copilot Chat might sometimes misrepresent the information it finds, and you might see responses that sound convincing but are incomplete, inaccurate, or inappropriate.
While Copilot Chat works to avoid sharing unexpected offensive content in search results and takes steps to prevent its chat features from engaging on potentially harmful topics, educators might still get unexpected results. Provide feedback or report concerns directly to Microsoft by using the feedback features beneath the response.
When Copilot Chat provides a response to a prompt, it also provides two key pieces of information: the search terms used to generate the response and links to the content sources. Educators can use these details to evaluate the response. If the search terms don't represent the intended question, start a new prompt with different wording. If the source links aren't reliable, ask Copilot Chat to refine the response using specific, more reliable websites that you provide.