Technologies that leverage artificial intelligence (AI) offer opportunities for a wide range of uses. Recent developments in generative AI, tools that create content such as text and images, have sparked excitement about new possibilities. For information about specific AI tools, visit UIS' AI Resources.
When using any AI tool, University of Colorado employees are responsible for understanding how to use it effectively, safely and within existing policies and laws. The following sections aim to educate users of AI tools about key considerations for effective and safe usage.
If you are interested in adding an AI tool to your toolbox, walk through three areas to get started: understand how CU policies apply, protect the CU data involved and review the tool's output.
CU has a variety of policies and procedures regarding information technology, information security, data and procurement that may apply to the use of AI tools. CU endeavors to develop policies that apply broadly across technologies rather than maintaining separate policies for each one, and AI is no exception.
Typically, the use of AI tools involves a third party handling data provided by their customer. This could be data used to build out the knowledge of the AI system, such as providing it with copies of all your user help documentation so the AI system can be configured to answer questions. Or it could be data that feels more like a question or request, like asking ChatGPT to summarize a long document.
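To make that data flow concrete, here is a minimal sketch, assuming the openai Python package and an OpenAI-style chat endpoint (illustrative only; the model name and file are hypothetical). The point to notice is that the entire document travels to the vendor's servers as part of the request:

    # Minimal illustration: the full document text is sent to a third party.
    # Assumes the openai package; OPENAI_API_KEY is read from the environment.
    from openai import OpenAI

    client = OpenAI()

    with open("long_report.txt", encoding="utf-8") as f:
        document = f.read()  # this entire text leaves your machine

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {"role": "system", "content": "Summarize the document in three sentences."},
            {"role": "user", "content": document},
        ],
    )
    print(response.choices[0].message.content)

Before submitting real CU data this way, the approval and classification steps described next apply.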
When using any CU data, it’s important to get approval from the appropriate data trustee or steward designated in CU’s data governance process and to understand the sensitivity of the data as described in CU’s data classification model. More sensitive data requires stronger protections and might not be appropriate for some types of AI tools.
When using any third-party tool, it’s important to understand the terms of use you agreed to. With AI tools in particular, those terms might grant the third party the right to reuse your data to further develop its services. Even without a formally signed contract, using a generative AI tool likely means agreeing to terms and conditions covering the data you put into it.
Generative AI tools are powerful engines for creating content based on a wide variety of input. The methods used by these tools are highly effective but have limitations and flaws. Whether it’s too many fingers on the image of a hand or fictional jobs on a resume, AI tools have demonstrated an occasional tendency to produce inaccurate content. Due to these limits, it’s important to have a process for vetting the output of AI tools before relying on it for business decisions or publishing the content publicly.
Reviewing the output is especially important when using AI tools to create scripts and programs. Modern tools have demonstrated success in creating code based on a wide variety of content available on the web, but the source content might contain flaws that find their way into the output. These flaws could lead to functional problems or security vulnerabilities. Whether code is generated by AI, written by hand or borrowed from development communities, CU employees are responsible for the effects of code they run on CU systems.
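As an illustration of the kind of flaw a review should catch, the snippet below shows a pattern AI code generators have been known to produce, building a SQL statement from raw user input, alongside the safer parameterized form (Python with sqlite3; the database and table names are hypothetical):

    import sqlite3

    conn = sqlite3.connect("example.db")  # assumes a users table exists

    def find_user_unsafe(name):
        # FLAW: user input becomes part of the SQL text, enabling SQL injection
        return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

    def find_user_safe(name):
        # FIX: a parameterized query keeps the input as data, not executable SQL
        return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

Both functions return the same rows for ordinary input; only a review of the code itself reveals that the first one is exploitable.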
In addition to reviewing the accuracy of content created by generative AI tools, you should also review the output to ensure it meets CU expectations for being thoughtful, supportive and inclusive. Because generative AI tools often build upon content from across the internet, they can sometimes reflect biases or even reproduce offensive content. For example, Google has made multiple adjustments to its translation tool to remove potentially inappropriate output from the system. Biases can also be more subtle: an image generation tool might tend to depict people of a particular age, skin tone or gender, or lean on stereotypes. Thoughtful wording of the requests made to a generative AI tool can often reduce these biases.
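For example (illustrative prompts, not official CU wording), explicitly describing the variety you want can steer the output:

    Generic request:  "Create an image of a group of university researchers."
    Revised request:  "Create an image of a group of university researchers of
                       varied ages, skin tones and genders, avoiding stereotypes
                       about who works in research."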
You should always be transparent when content is sourced from an AI tool. Include a note or banner on AI chatbots, AI-generated documents and other output from generative AI tools.
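For example, a simple note might read: "Portions of this document were drafted with a generative AI tool and reviewed by department staff."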
A popular use of generative AI is to build a tool for answering basic customer questions that are covered by existing documentation. In this scenario, departments might work with their IT staff or a vendor to configure an AI tool to “learn” their documentation and tune the responses given by the tool. A well-developed “chatbot” can be effective at answering common questions and freeing up staff time for more detailed customer needs.
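One common way to build such a tool is retrieval-augmented generation: index the documentation, retrieve the passages most relevant to each question, and instruct the model to answer only from those passages. The outline below is a sketch under that assumption; doc_index and llm are hypothetical stand-ins for whatever search layer and model your IT staff or vendor provides:

    def answer_question(question, doc_index, llm):
        # 1. Retrieve the documentation passages most relevant to the question
        passages = doc_index.search(question, top_k=3)
        context = "\n\n".join(p.text for p in passages)
        # 2. Constrain the model to approved documentation so answers stay grounded
        prompt = (
            "Answer the question using only the documentation below. "
            "If it is not covered, say so and refer the user to staff.\n\n"
            f"Documentation:\n{context}\n\nQuestion: {question}"
        )
        return llm.complete(prompt)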
In this example, it’s important for a department to consider the three areas above: confirm that relevant CU policies are followed, get approval for the documentation and any data the tool will handle, and review the chatbot’s answers for accuracy before relying on them.
Artificial intelligence tools have proven useful for summarizing information and are commonly used for that purpose in many fields, from writing a two-sentence summary of a news story to generating a list of the top 10 polka songs from a wide variety of input. Visiting our three areas again with this example: confirm that the material being summarized is appropriate to share with the tool under CU’s data classification, understand the vendor’s terms for what you submit, and review each summary for accuracy before using or publishing it.
Translating content into additional languages is another popular use for AI tools. This can open access to content and services to more individuals and be more inclusive of community members whose native language differs from the language of your content.
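As a sketch, assuming the same OpenAI-style API as above (model choice hypothetical), a translation helper might look like this; as with any generated content, have a fluent speaker review the result before publishing, since idioms, names and tone are easy to get wrong:

    from openai import OpenAI

    client = OpenAI()

    def translate(text, target_language="Spanish"):
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # hypothetical model choice
            messages=[
                {"role": "system",
                 "content": f"Translate the user's text into {target_language}, preserving names and formatting."},
                {"role": "user", "content": text},
            ],
        )
        return response.choices[0].message.content

    print(translate("Office hours are Monday through Friday, 8 a.m. to 5 p.m."))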