ARTIFICIAL INTELLIGENCE GUIDELINES
Effective: 10/1/2025
Approved: 9/25/2025, President’s Cabinet
1. Purpose and Scope
Artificial Intelligence (AI) can be an important tool for teaching, research, administration, and the student experience, but it must be used responsibly. This document establishes York College of Pennsylvania's (YCP) guidelines for the responsible use of AI technologies in both learning and workplace environments while ensuring compliance with YCP's values. These guidelines apply to all York College of Pennsylvania faculty, administrators, staff, students, alumni, researchers, and contractors.
2. Risk Awareness
Generative AI is a flexible technology that presents both potential benefits and risks. Although Generative AI tools are relatively new, many of the risks they pose are similar to those associated with traditional internet or software tools. Some common risks include:
- Entering private or confidential data into a Generative AI prompt.
- Inaccurate or misleading Generative AI outputs that could affect public communications or lead to programs or policies being based on faulty information.
- The reinforcement of existing biases in outputs, which could affect the quality and fairness of work products.
- Violating copyright laws.
When deciding whether to use a Generative AI tool, users should evaluate the risks involved and refer to the relevant policies.
3. Ethical AI Use
Not all technologies affect users similarly, and certain groups, such as specific student populations, may be more vulnerable to harm. Human and systemic biases embedded in generative AI algorithms and the data used to train large language models (LLMs) can shape the output of AI tools, potentially reinforcing inequities if these biases are not identified and addressed. Therefore, it is crucial to critically assess the outputs generated by AI.
4. Data Privacy
Data shared with generative AI tools may be used or stored in ways that expose sensitive information, such as institutional, research, grant, or contract data. This includes, but is not limited to, personally identifiable information (PII), Protected Health Information (PHI), sensitive institutional data, student data protected under FERPA, and research data subject to export control restrictions.
Additionally, large language models (LLMs) can store interactions and use them as part of their training data. Any input provided and any materials uploaded to LLM processors could become integrated into the model's training set, potentially being shared in the future without proper attribution. Consequently, resources and intellectual property may be used in unforeseen ways. Data uploaded to these systems passes through various technological providers, each with its own privacy policies and terms of use. Therefore, AI users must share only open data or information that does not require confidentiality.
Many AI tools are now available to assist with recording and processing of meeting audio and video content. To comply with Pennsylvania regulations, employees must obtain express consent from all participants before recording a meeting for AI processing or other purposes.
5. Transparency
The commitment to transparency is two-fold: YCP commits to being transparent about the ways in which AI tools are used in making administrative decisions, and faculty and students are transparent in disclosing their use of AI in their work. The latter is significant given that transparency in AI use is vital to uphold the integrity of academic work by citing sources to ensure credit is given where it is due and the origins of ideas are traceable. While powerful, AI tools often generate content without clear attribution to original sources, potentially obscuring the lineage of ideas foundational to scholarly work.
Employees should disclose the source of content when sharing data, text, images, or other outputs created by a generative AI tool.
6. AI Literacy
AI literacy involves the following components: understanding how AI works, critically evaluating outputs, responsibly applying tools in context, remaining vigilant as technologies evolve, and appreciating broader ethical and societal impacts. AI is not a destination but a pervasive technology now embedded in almost every tool we use (Norrie, 2025). All stakeholders must have a technical understanding of AI, be able to critically evaluate AI tools and outputs, and effectively use and manage AI tools for their purpose, all while conforming to the ethical considerations laid out in these guidelines. YCP will continue to develop a technical knowledge base and offer other resources, including learning platforms.
7. Faculty Guidelines
YCP supports the use of AI tools by faculty to enhance the teaching and learning experience while ensuring academic integrity. Faculty shall follow the permitted and prohibited use guidelines in the table below, including:
- Include course-specific AI policies in syllabi, clearly explaining to students which uses of AI are allowed in the course and which are not.
- Design AI-aware assignments that assess genuine learning outcomes.
- Do not use AI detection tools; their accuracy has limited or contradictory support in published research.
- Incorporate AI literacy into course content where appropriate.
- Disclose AI use in creating course materials or assessments.
Using generative AI without explicit permission of the instructor is a violation of YCP's Academic Integrity Policy (see the most recent College Catalog for the full policy). Faculty should remind students of whether and how they are allowed to use generative AI tools in their courses and assignments.
| Permitted Use | Prohibited Use |
| --- | --- |
| Syllabus and lesson planning: Faculty can use generative AI to help outline course syllabi and lesson plans and to get suggestions for learning objectives, teaching strategies, and assessment methods. Course materials that the instructor has authored (such as course notes) may be submitted by the instructor. | Faculty must not enter any sensitive, restricted, or otherwise protected data into any generative AI tool or service. Note: Student submissions may include sensitive or protected information. If AI tools are used for automated grading, faculty must ensure that student submissions are scrubbed of such information before using the AI tool. |
| Professional development and training presentations: Faculty can use AI to draft materials for professional development opportunities, including workshops, conferences, and online courses related to their field. | Faculty may not use AI tools or services to generate content that helps others break federal, state, or local laws; institutional policies, rules, or guidelines; or licensing agreements or contracts. |
| Personalized student support: AI tools can be used for tutoring, translating, academic advising, easing administrative processes, brainstorming, editing, and accessibility and assistive technology. | Faculty may not use AI tools or services to generate content that enables harassment, threats, defamation, hostile environments, stalking, or illegal discrimination. |
| Administrative assistance: automating tasks and drafting and revising communications. | Faculty may not use AI tools or services to infringe copyright or other intellectual property rights. |
1 Table prepared by VEdTech
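As an illustration of the scrubbing step mentioned in the note on automated grading, a minimal sketch is given below. It assumes Python and simple regular expressions; the email, student-ID, and phone formats shown are hypothetical examples for illustration, not YCP-defined formats or an exhaustive list of protected data.

```python
# Illustrative sketch only: remove obvious identifiers from a student
# submission before it is sent to an external AI tool. The patterns
# below (email address, a hypothetical 9-digit student ID, and a
# US-style phone number) are example assumptions, not a YCP-mandated
# or complete list of sensitive data.
import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "student_id": re.compile(r"\b\d{9}\b"),             # hypothetical ID format
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace each matched identifier with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text
```

Automated scrubbing of this kind is a first pass only; faculty should still review submissions manually before any content leaves institutional systems.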
8. Research Guidelines
Faculty can benefit greatly from using AI in their research. "With natural language processing capabilities and machine learning algorithms, AI can detect errors and inconsistencies that may have been overlooked during the writing process. By identifying these errors, AI tools can help researchers produce higher-quality work with fewer mistakes." (Cooperman, 2024). However, some journals have banned or strongly discouraged the use of AI to write papers. This guidance² from Pennsylvania State University provides helpful guidelines, such as:
- Disclose its use.
- Take primary responsibility for verifying the veracity of AI-generated information.
- Account for bias in AI-generated content.
- Maintain the integrity of the data, both as input and output.
- Use AI as a tool, not as a replacement for human thought and creation.
2 Penn State Guidance on AI in Research: https://pennstateoffice365.sharepoint.com/:w:/s/VPR-ORP/EUwbcanBfctBlWG1AG9Cw4EB0fy6TtAbnajHtlNyNiYUUQ?e=MF74G2
9. Student Guidelines
AI tools offer students powerful resources for learning, research, and productivity. However, responsible use is necessary and requires transparency, critical evaluation, and adherence to institutional policies to maintain academic integrity. All AI use must be disclosed and attributed.
Additionally, students must follow the specific AI usage policies outlined by their course instructors. These policies may vary, with some courses permitting AI tools for brainstorming or editing, while others may restrict their use entirely. When in doubt, students should consult their instructors to ensure compliance with course expectations.
10. Guidelines for All YCP Members
When inputting data, stakeholders should carefully consider the sensitivity of the information being used.
- High risk/low impact (avoid): Using Generative AI to draft a public communication that includes sensitive information, where doing so manually would take little time. Simply copying and pasting the AI-generated output with minimal review is not advisable.
- Low risk/high impact: Using Generative AI to compare a new version of a publicly available policy with an older one, then asking the AI tool to identify modified sections and manually verifying the changes.
Non-public and confidential data shall not be input into any public Generative AI prompt, tool, or system. For additional information, refer to the YCP Data Classification Policy (https://service.ycp.edu/TDClient/219/Portal/KB/ArticleDet?ID=26042).
AI users should promote accessibility and inclusivity:
- Ensure that AI tools are accessible to users with diverse needs and abilities.
- Provide alternative options for those who cannot or choose not to use AI tools.
- Actively seek diverse perspectives and representation in AI implementation and policy making.
- Regularly assess AI tools for potential biases or discriminatory outcomes.
- Actively work to mitigate biases in AI systems that may result in unfair or discriminatory outcomes.
- Consider the long-term impacts of AI use on learning outcomes and educational equity.
AI users must ensure Legal and Ethical Compliance:
- Respect data privacy and security regulations when using AI systems.
- Comply with copyright and intellectual property laws when using AI-generated content.
- Follow institutional and professional ethical guidelines in AI applications.
- Comply with Commonwealth of Pennsylvania regulations, such as requiring express consent from all meeting participants before recording them for AI processing or any other purpose.
- Disclose the source of content when sharing data, images, text, or any other output created using an AI tool.
- Follow the YCP Acceptable Use Policy (https://service.ycp.edu/TDClient/219/Portal/KB/ArticleDet?ID=18716).
11. Compliance and Enforcement
Any violation of the above guidelines will result in discipline in accordance with YCP's Academic Integrity Policy and HR policies.
APPENDIX I: Sources
Alsamhori, A. & Alnaimat (2024, December). Artificial Intelligence in Writing and Research: Ethical Implications and Best Practices. Central Asian Journal of Medical Hypotheses and Ethics. https://www.researchgate.net/publication/387553053_Artificial_intelligence_in_writing_and_research_ethical_implications_and_best_practices
American Public University System. (n.d.). *Generative AI policy*. APUS Student Code of Conduct. https://www.apu.apus.edu/student-handbook/university-policies-and-code-of-conduct/apus-student-code-of-conduct/generative-ai-policy/
Anthology. (2024, August). *AI policy framework* (v1, 08-24). https://backstage.anthology.com/sites/default/files/migrated/2024-08/AI-Policy-Framework_v1_08-24.pdf
Antoniak, M. (2023, June 22). *Using large language models with care - AI2 blog*. Medium. https://medium.com/ai2-blog/using-large-language-models-with-care-eeb17b0aed27
California State University. (n.d.). *Ethical principles: AI framework for higher education*. GenAI. https://genai.calstate.edu/communities/faculty/ethical-and-responsible-use-ai/ethical-principles-ai-framework-higher-education
Committee on Publication Ethics. (2024). *COPE position - Authorship and AI - English*. https://doi.org/10.24318/cCVRZBms
Also available at: https://publicationethics.org/guidance/cope-position/authorship-and-ai-tools
Cooperman, S. (2024, Spring). AI assistance with scientific writing: Possibilities, pitfalls, and ethical considerations. *Foot & Ankle Surgery: Techniques, Reports & Cases*. https://www.sciencedirect.com/science/article/pii/S2667396723000885
Cornell University. (n.d.). *Ethical AI in teaching and learning*. Center for Teaching Innovation. https://teaching.cornell.edu/generative-artificial-intelligence/ethical-ai-teaching-and-learning
EAB. (n.d.). *Craft an AI acceptable use policy to protect your campus*. https://eab.com/resources/blog/strategy-blog/craft-ai-acceptable-use-policy-protect-campus/
Florida Institute of Technology. (n.d.). *Responsible use of generative AI in academic work*. https://www.fit.edu/provost/academic-guidelines/responsible-use-of-generative-ai-in-academic-work/
Gasevic, D., Siemens, G., & Sadiq, S. (2023). Empowering learners for the age of artificial intelligence. *Computers & Education: Artificial Intelligence, 4*, 100130. https://doi.org/10.1016/j.caeai.2023.100130
Harvard University Information Technology. (n.d.). *Generative AI: Guidelines for use*. Harvard University. https://www.huit.harvard.edu/ai/guidelines
IBM. (2023, November 2). What are large language models (LLMs)? IBM. https://www.ibm.com/think/topics/large-language-models
Kassorla, M., Georgieva, M., & Papini, A. (2024, October 17). Defining AI literacy for higher education. EDUCAUSE. https://www.educause.edu/content/2024/ai-literacy-in-teaching-and-learning/defining-ai-literacy-for-higher-education
Norrie, J. (2025). Emailed feedback to 8/14/2025 draft of YCP AI Guidelines.
Penn State Office of Research. (n.d.). Generative AI in research: Impact, best practices, and concerns. Penn State University. https://pennstateoffice365.sharepoint.com/:w:/s/VPR-ORP/EUwbcanBfctBlWG1AG9Cw4EB0fy6TtAbnajHtlNyNiYUUQ?e=MF74G2
Pennsylvania Office of Administration. (n.d.). *Artificial intelligence policy*. https://www.pa.gov/content/dam/copapwp-pagov/en/oa/documents/policies/it-policies/artificial%20intelligence%20policy.pdf
Serrano, C. (n.d.). *Building transparency in AI: Best practices for citing and using AI in higher education*. Medium. https://medium.com/@christynaserrano/building-transparency-in-ai-best-practices-for-citing-and-using-ai-in-higher-education-5288c458f353
Temple University. (n.d.). *Chat-GPT syllabus statement guidance*. Center for the Advancement of Teaching. https://teaching.temple.edu/sites/teaching/files/resource/pdf/Chat-GPT%20syllabus%20statement%20guidance.pdf
Temple University. (n.d.). *Guidelines for generative AI*. https://tuportal6.temple.edu/documents/380033/1166174952/Guidelines%2Bfor%2BGenAI.pdf/30bcd155-1311-6434-355c-efc963798189?t=1725490134990
University of Wisconsin-Madison. (n.d.). *Generative AI: UW-Madison use policies*. https://it.wisc.edu/generative-ai-services-uw-madison/generative-ai-uw-madison-use-policies/
APPENDIX II: Literacy Resources
- Basic
  - Coursera: Kennesaw State University - AI for Education: https://www.coursera.org/learn/ai-for-education-basic
  - Coursera: University of Glasgow - Generative AI in Education: https://www.coursera.org/learn/generative-ai-in-education
  - edX: University of Alaska Fairbanks - Teaching and Learning in the Era of AI: https://www.edx.org/learn/education-teacher-training/university-of-alaska-fairbanks-teaching-and-learning-in-the-era-of-ai
- Intermediate
- Advanced
APPENDIX III: Definitions
Stakeholders: Stakeholders include all YCP faculty, administrators, staff, students, contractors, and alumni.
Generative AI: Artificial intelligence systems capable of creating original content, such as text, images, audio, or video. In YCP academic settings, these tools can assist with content creation, research synthesis, and educational material development when the use of Generative AI to create content is authorized (e.g. approved for use in the course or on the assignment) and properly disclosed/cited.
Large Language Models: Large language models (LLMs) are a category of foundation models trained on immense amounts of data, making them capable of understanding and generating natural language and other types of content to perform a wide range of tasks (IBM, 2023).
AI Literacy: Grasping the basic principles of AI technology, critically assessing the use of AI tools in teaching, research, and the administration of educational objectives, and consistently reviewing these tools and methods to safeguard against biases, improper use, and misapplication of AI models.
MFA: Multi-Factor Authentication.