Ethics and Responsibility in AI Use

Human-centered AI means AI that is designed and used to serve human values, needs, and well-being rather than to replace human judgment. It keeps people in control (agency), makes impacts understandable (transparency), and ensures accountability for outcomes.

Core principles

  • Human agency and accountability: People remain responsible for decisions and for what they publish, submit, or send.
  • Transparency: When AI meaningfully shapes work, acknowledge it. Don’t pass AI output off as purely human work.
  • Fairness and inclusion: Watch for bias and differential impact on individuals and communities. Promote equitable access to AI tools—especially when quality varies across paid and free tiers.
  • Privacy and security: Protect students, employees, and the institution by strictly enforcing the rules against prohibited data sharing and by discouraging risky uploads to AI platforms (a minimal screening sketch follows this list).
  • Accuracy and verification: Treat AI-generated content as a preliminary draft or working hypothesis until it has been checked. When possible, require cited sources and confirm those sources independently before relying on the information.
  • Purpose and proportionality: Use AI where it helps; avoid “automation for its own sake.” Avoid dependency by asking questions and assigning tasks you understand well enough to evaluate the results.
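
One practical aid for the privacy check named above is a lightweight screen that masks obvious identifiers before a prompt is pasted into an AI tool. The Python sketch below is a minimal illustration; the patterns and placeholder labels are hypothetical, and no pattern list can replace institutional rules on protected data.

```python
import re

# Hypothetical patterns; a real policy would use institution-approved
# detection rules, and no regex list catches every kind of sensitive data.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace likely identifiers with placeholders before a prompt is sent."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("Reach Jane at jane.doe@example.edu or 555-123-4567."))
# -> "Reach Jane at [EMAIL REDACTED] or [PHONE REDACTED]."
```

Anything the screen misses still requires human review; the goal is to make accidental disclosure harder, not to certify a prompt as safe.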

The human-in-the-loop rule

Each of us is responsible for checking and verifying any AI output we plan to share with others. Ethically, this means you should not share, send, or submit content copied directly from AI without reviewing all of it and clearly stating its origin. This is not only because AI can be incorrect, biased, or misaligned with your intent, but because you are responsible for what you put out into the world. That responsibility should not be taken lightly.

Practically: treat AI as a collaborator that can accelerate drafting and brainstorming, but not as an authority. Unchecked AI can very quickly create a feedback loop of low-quality content (“slop”) that degrades communication, learning, and trust.

A short verification checklist

  • Privacy check: Ensure you didn’t share protected or sensitive information in the prompt.
  • Source check: If the output includes facts, numbers, quotes, citations, or claims about people/events, verify them with reliable sources.
  • Bias check: Ask what perspectives are missing. Test alternative phrasings and look for stereotypes or unfair generalizations.
  • Context check: Confirm the output actually fits the WP context (policies, tone, audience, and local constraints).
  • Attribution check: If you used AI to generate or rewrite content, document the tool and how you used it (see the record-keeping sketch after this checklist).
  • Final human review: Read as if you were the recipient. Does it sound accurate, respectful, and appropriate?
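
For the attribution check, a consistent record of AI use keeps disclosure easy and uniform. The Python sketch below shows one hypothetical record format; the fields and the disclosure wording are illustrative, not a WP requirement.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIUseRecord:
    """One hypothetical way to document AI assistance on a piece of work."""
    tool: str          # assistant name and version, as precisely as known
    purpose: str       # what the tool was asked to do
    human_review: str  # who verified the output, and against what
    used_on: date      # when the assistance occurred

    def disclosure(self) -> str:
        # Render a one-line disclosure suitable for a footnote or cover note.
        return (
            f"AI assistance: {self.tool} was used for {self.purpose} "
            f"on {self.used_on:%Y-%m-%d}; output reviewed by {self.human_review}."
        )

record = AIUseRecord(
    tool="(tool name and version)",
    purpose="first-draft summarization",
    human_review="the author, checked against the original sources",
    used_on=date.today(),
)
print(record.disclosure())
```

Keeping such records alongside the work makes the attribution check auditable rather than a matter of memory.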

Featured talk

Surfing the AI Wave: A Human-Centered Approach to Innovation & Ethics — Douglas Schmidt, Inaugural Dean of the School of Computing, Data Sciences & Physics at William & Mary.

Highlights and initiatives (WP examples)

  • Faculty development workshops and communities of practice that focus on ethical AI use in pedagogy and research.
  • AI certificate offerings (e.g., within the College of Arts, Humanities, and Social Sciences) that help students build applied AI literacy.
  • Pilots and case studies that document what works—and what doesn’t—so that learning is shared across campus.