Policies and Guidelines

William Paterson University’s AI policies govern the use of artificial intelligence by students, faculty, and staff in academic and operational contexts. These policies apply whether you use University-supported AI tools or non-University AI tools, and whether you work on a University device or a personal device when engaged in University-related work.

Key policies (see the official policy pages for full text)

  • AI: Use of Generative Artificial Intelligence in Student Academic Coursework (policy for faculty and students).
  • Generative AI Policy and Guidelines for Employees (policy for faculty and staff as employees).

Do’s and Don’ts for everyday AI use

Do:

  • Be aware of and follow William Paterson policies relevant to your role and context.
  • Maintain transparency: clearly attribute any AI-generated or AI-assisted output used for academic or work purposes to the tool that created it.
  • Keep a human in the loop: review outputs for plagiarism risk, accuracy, bias, security issues, and appropriateness before sharing.
  • Maintain compliance: follow applicable laws and regulations (FERPA, HIPAA, etc.) and University policy requirements.
  • Evaluate tools: when using AI regularly for a task, periodically assess its error patterns, bias, and workflow impact.

Remember: there is no shame in using AI. It is a powerful tool, not a personal secret. AI is available to all and is useful only when combined with human knowledge, transparency, and judgment.

Don’t:

  • Compromise security: do not reuse University credentials as logins to public AI tools; never enter passwords into an AI tool.
  • Harm confidentiality: do not input University intellectual property, protected or sensitive data, or personally identifiable information (PII) into non-approved tools.
  • Discriminate: do not use AI outputs that discriminate against individuals based on protected characteristics (race, color, religion, sex, national origin, age, disability, marital status, political affiliation, sexual orientation, etc.). Responsibility for sharing biased or harmful AI output rests with you, not with the AI platform that generated it.
  • Engage in unlawful conduct: do not use AI to commit fraud, misrepresent identity, or produce deceptive content.
  • Implement without vetting: do not procure or deploy AI tools (including APIs) for University work without appropriate review and approval. If you are unsure about an AI tool or version, contact IT via the help desk before purchasing or using it.

A simple decision tree

  1. Is this for WP coursework, teaching, research, or operations? If yes → follow WP policy and guidance.
  2. Does the task involve student records, health data, HR data, financial data, contracts, passwords, or other sensitive information? If yes → do not upload any prohibited data; do not use nonapproved tools; consult IT/appropriate offices.
  3. Could the output affect grades, employment, eligibility, or other high-stakes decisions? If yes → require human review, documented reasoning, and appropriate oversight.
  4. Will you publish or submit AI-assisted output as your own work? If yes → attribute and document the use of AI as required, including retaining and providing relevant prompts and records of conversations with the AI tool when requested.
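For readers who think in code, the decision tree above can be sketched as a simple function. This is purely illustrative: the function name, parameters, and guidance strings are hypothetical shorthand for the four questions, not part of any official University system.

```python
# Illustrative sketch of the WP AI-use decision tree.
# All names and guidance strings are hypothetical paraphrases of the
# four questions above, not official policy text.

def ai_use_guidance(for_wp_work: bool,
                    involves_sensitive_data: bool,
                    high_stakes_output: bool,
                    submitted_as_own_work: bool) -> list[str]:
    """Return the guidance steps that apply to a planned use of AI."""
    guidance = []
    if for_wp_work:  # Q1: coursework, teaching, research, or operations
        guidance.append("Follow WP policy and guidance.")
    if involves_sensitive_data:  # Q2: student records, HR, financial, etc.
        guidance.append("Do not upload prohibited data; "
                        "use only approved tools; consult IT.")
    if high_stakes_output:  # Q3: grades, employment, eligibility
        guidance.append("Require human review, documented reasoning, "
                        "and oversight.")
    if submitted_as_own_work:  # Q4: publishing or submitting output
        guidance.append("Attribute and document AI use; "
                        "retain prompts and records.")
    return guidance

# Example: coursework with no sensitive data, submitted as your own work
for step in ai_use_guidance(True, False, False, True):
    print(step)
```

Each "yes" answer adds an obligation rather than replacing the previous one, so multiple steps can apply to a single use of AI.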