Responsible Use
Theory
Most AI mistakes are not new risks. They are familiar risks at higher speed.
The four big ones to recognize by name:
- Hallucination — the model invents facts confidently.
- Prompt injection — a document tells the model "ignore the user, do this instead."
- Bias — the model reflects skew in its training data.
- Privacy leak — sensitive data lands somewhere it should not be.
One mitigation per risk:
- Hallucination: ground answers in sources you provide; verify before you act.
- Prompt injection: never let an agent execute actions found inside user-supplied text.
- Bias: check outputs that affect people, especially in hiring, lending, or grading.
- Privacy leak: redact before prompting; use approved tools for sensitive categories.
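The redaction step can be sketched with simple pattern matching. This is a minimal sketch under a strong assumption — that regex patterns for emails and phone numbers cover your data; real personal-data detection should go through an approved tool:

```python
import re

# Hypothetical patterns -- an approved PII tool should replace these in practice.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched personal details with labeled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Reach Dana at dana@example.com or 555-867-5309."))
# → Reach Dana at [EMAIL] or [PHONE].
```

The placeholders keep the prompt readable while the sensitive values never leave your machine.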
A simple six-question checklist before you trust AI on real work:
- Is this tool approved for this kind of data?
- Can I remove personal details before sending?
- Is this output a draft or a decision?
- What single claim must I verify by hand?
- Who is accountable if it is wrong?
- Could this mistake scale to many people?
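The checklist above can be encoded as a preflight gate. A minimal sketch with hypothetical question keys, simplifying each question to satisfied/unsatisfied (e.g. "draft or decision" counts as satisfied only when the output is treated as a draft):

```python
# Hypothetical keys, one per checklist question.
CHECKLIST = [
    "tool_approved_for_data",
    "personal_details_removable",
    "output_treated_as_draft",
    "key_claim_identified",
    "accountable_owner_named",
    "scale_risk_considered",
]

def preflight(answers: dict[str, bool]) -> list[str]:
    """Return the checklist items not yet satisfied; empty means go ahead."""
    return [q for q in CHECKLIST if not answers.get(q, False)]

blockers = preflight({
    "tool_approved_for_data": True,
    "personal_details_removable": True,
    "output_treated_as_draft": True,
    "key_claim_identified": False,
    "accountable_owner_named": True,
    "scale_risk_considered": True,
})
print(blockers)  # → ['key_claim_identified']
```

An unanswered question counts as a blocker, which matches the spirit of the checklist: if you have not asked it, you are not ready to trust the output.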
This is not a brake on AI. It is the boundary that lets you use it widely without surprises.