Policy 1038 - Artificial Intelligence in the Workplace

Section: Operating/Administrative
Policy Number: 1038
Responsible Office: Human Resources and Information Technology
Effective Date: 5/17/24

Purpose

The purpose of this Policy is to establish guidelines for the appropriate use of Artificial Intelligence (AI) in the workplace. Artificial Intelligence is the ability of machines to perform tasks that would typically require human intelligence. These technologies enable machines to learn from data, recognize patterns, understand natural language, and interact with the environment in ways that mimic human cognition.

Generative AI is a specific type of AI – which includes Natural Language Processing, Machine Learning, and Deep Learning systems – trained on large volumes of written information (such systems are referred to as Large Language Models (LLMs)) and uses algorithms to generate new creative content. Examples of Generative AI include ChatGPT, Bard, and other similar programs, all of which have the capability to answer questions, provide explanations and summaries, draft documents, and simulate discussions, among other things. Other examples of AI technologies are Voice Dictation and Speech Recognition, Computer Vision, Robotics, Recommender Systems, and Knowledge Graphs. All of the above examples may exist today as tools we use at work and in our everyday lives.

While the University embraces innovation and the use of such technology to enhance productivity, efficiency, and decision-making, it is necessary to comply with applicable laws and respect privacy, confidentiality, and data security. Further, the University recognizes that this is an emerging and rapidly changing field and that its policy is subject to change.

Scope

This Policy applies to all employees of St. John’s University and those performing work and/or services for St. John’s (collectively, “AI users”).

Policy

This Policy applies when AI tools are used to perform, or assist in the performance of, any work-related activities.  This Policy applies regardless of the location of AI users at the time they use AI tools, and regardless of whether the AI tools are used on University devices, personal devices, or third-party devices.

The University commits to the ethical use of AI in accordance with its professional conduct and non-discrimination policies.  These technologies must not be used to create content that is inappropriate, discriminatory, deceptive, or otherwise harmful to others or to the University.  All AI-generated content must be carefully reviewed for accuracy, appropriateness, and bias before relying on it for work purposes.  AI users are responsible for ensuring that the AI-generated content aligns with the University’s mission and values, and should actively work to identify and mitigate biases in AI systems. 

AI users are prohibited from inputting data specific to St. John’s University, including confidential or proprietary business information belonging to the University, when using commercially or publicly available AI tools. This includes, but is not limited to, copying, pasting, typing, or in any way submitting personal information (e.g., names, contact information, dates of birth, social security numbers, etc.) about employees, students, and other members of the St. John’s community into AI tools. Inputs into AI prompts via text, speech, images, or any other means must be anonymized to avoid disclosing confidential University information. AI users must comply with the University’s Information Technology (series 900) policies when utilizing AI tools for conducting any University business.

However, there are vendor-developed AI products and solutions available to assist with business processes, which can be deployed with the appropriate guardrails in place. Departments interested in using these tools must secure approval from the Administrative Technology Governance committee, as well as a security review by the Office of Information Technology. The Administrative Technology Governance committee oversees and approves all technology-related requests and purchases as part of the overall Technology Governance at the University.

Before using AI for work purposes, AI users should discuss the parameters of their use with their direct supervisors.  AI use in work should always be disclosed, and AI users must not plagiarize or fail to attribute AI-generated content that is used for work purposes.  Below are guidelines to be evaluated and followed prior to using any AI products or tools:

  • Data privacy, confidentiality and security – All data that will or could be typed or entered into the AI tool or service must be evaluated in accordance with the University’s data classification and use policies. This will enable the University to continue to protect sensitive data in accordance with privacy and security laws while still taking advantage of the emerging benefits of AI. Only data classified as low risk (including public data) in accordance with the University’s Policy 922 - Information Classification Policy may be used in AI tools and services. Entering information into public AI engines opens that data up to being searchable on the public internet. Any other data classified as non-public may be used in vendor-developed AI tools only after such tools have been approved by the Administrative Technology Governance committee and reviewed by the Office of Information Technology.
     

  • Inaccurate and biased information in AI and its impact on decision-making – Biases are inherent in AI, especially when using Generative AI tools. AI-generated information should be carefully reviewed for these biases (both unconscious and conscious), and answers should be validated. Information must be fact-checked before answers are treated as definitive or used in any decision-making capacity.
     

  • Dependence on AI and potential loss of human touch – While AI is transformative and has the potential to make our work more efficient, tasks should be evaluated to determine where the human touch remains important. AI should not replace relationships or necessary human interaction. Using AI to complete tasks more quickly and efficiently should create opportunities to interact further with others and allow time for critical thinking.
     

  • Continuous training and updates to ensure AI remains relevant and unbiased – As part of our core commitment and mission, all community members should understand their role in helping AI improve and in removing biases. Whenever inaccuracies or biases are encountered, they should be reported back as feedback to the AI engine, allowing for continuous improvement of the technology.

Reporting Procedures

AI users are expected to contact the Office of Human Resources immediately if they become aware of an actual or possible violation of this Policy, and the Office of Information Technology if they become aware of an actual or possible breach of data privacy or security related to the use of AI in the workplace. Violations of this Policy will result in disciplinary action, up to and including termination of employment.

Related Policies

Policy 922 - Information Classification Policy
Information Technology Policies (series 900)