Generative artificial intelligence (AI) services, which generate text, images, or other media in response to human-generated prompts, are rapidly growing and evolving. Many generative AI applications carry immense potential across a wide range of marketing communications disciplines, including graphic design, writing, and software development. However, users must exercise caution to ensure accuracy and transparency and to avoid disclosure of proprietary or confidential institutional data. Failure to adhere to expectations around data sharing and privacy could constitute a violation of UO policy, the student conduct code, and/or state or federal laws.
University Communications encourages exploration of generative AI services with the following guiding principles in mind:
Ensure Humans Review Any Outcomes
AI can enhance human-led work. It does not replace it. Uniquely human elements of creativity and judgment remain critical. Some widely used AI tools, particularly those drawing on limited or outdated data sources, have produced false or fabricated information, including citations. Humans must evaluate, fact-check, and review AI-generated outcomes before using them in any work product.
Be Transparent, Use Citations
The use of material taken from any source—whether directly quoted, paraphrased, or otherwise adapted—should be attributed to that source. This includes materials from AI content generators and generative AI tools such as ChatGPT. Specific citation formats may be required for academic uses. In general, all work, published or not, should cite AI-generated elements, noting the organization responsible for creating and maintaining the tool, the title and version of the tool, the prompt used, the date, and a link to the tool.
A narrative citation should appear in the body of a document, either parenthetically, in text, or as a footnote, for example: (OpenAI, GPT-3.0, “Provide citation format for generative AI,” August 3, 2023, chat.openai.com). Annotate citations with additional context, for example describing how AI was used to edit an image or provide specific text. Failure to adhere to expectations around proper citations could constitute a violation of UO policy, including the student conduct code, and/or state or federal laws.
Demand and Check Citations
Request citations in prompts and check all citations provided in results. Use traditional research methods to fact-check and properly confirm, document, and attribute all AI-generated text, images, or videos.
Monitor for and Reduce Bias
Biased prompts can generate biased responses, and AI tools can produce biased results even from neutral prompts, especially when trained on data that reflects widespread assumptions and norms. It is critical for all users of AI to monitor for and reduce bias in both prompts and results.
Protect the UO’s Data
Any data provided in prompts may be used in subsequent responses to other users of the AI platform worldwide. Do not use in prompts or otherwise feed any AI tool any personally identifiable or confidential information, intellectual property, licensed or copyrighted materials, or any source code, especially code that protects institutional data. Do not submit or share official University of Oregon wordmarks, logos, images, video, or other proprietary files, code, or text. Failure to adhere to expectations around data sharing and privacy could constitute a violation of UO policy, the student conduct code, and/or state or federal laws.
Respect Individual Privacy
Feeding personal information into AI tools risks privacy and security and could violate UO policy and/or state or federal laws. Do not share names and information about real students, employees, or research participants. Never provide employee-related data such as performance or benefit information. Failure to adhere to expectations around data sharing and privacy could constitute a violation of UO policy, the student conduct code, and/or state or federal laws.
Avoid Impersonation
If speech generation is used, it must never be done in a way that impersonates or misleads. No one’s voice should be replicated without express consent. AI-generated sound should never be presented as authentic, unaltered audio.
Appropriate Marketing and Communications Outcomes and Uses
With critical human oversight, AI could be a useful tool to generate early drafts or iterations of products. UO marketing and communications staff must provide judgment, reasoning, and context while also infusing life and the UO brand voice into final products.
Generative AI tools such as ChatGPT might be used to
- Condense lengthy or complicated publicly available texts into summary paragraph(s), sentences, outlines, or main points.
- Extract data from publicly available texts and organize it in tables and graphs.
- Generate outlines or drafts as jumping-off points for correspondence or original writing, provided prompts do not include institutional data.
- Brainstorm headlines, subheads, or marketing taglines for publicly available texts.
- Check copy for inclusive language and implicit bias.
- Adapt publicly available text for various intended audiences.
- Provide initial editing and proofreading. (A UO human editor should always review and approve any AI-generated content suggestions before publication.)
- Draft initial interview questions, job postings, or survey prompts.
- Generate or check code used on websites for web publishing purposes. (Code used in protection of UO systems and institutional information should never be shared with a publicly available AI tool.)
- Make recommendations for information architecture/classification or items within a site’s navigation.
- Optimize publicly available web content for SEO.
Related UO Links:
Essential AI Principles and Policies for Creating Marketing Content
Microsoft AI Learning and Community Hub
Bias in AI and Machine Learning: Sources and Solutions
Acceptable uses of generative AI services at IU