What is AI?

Artificial intelligence (AI) is “any technology/machine that can perform complex tasks that are typically associated with human intelligence.” Most of the AI tools with which you may be familiar (like ChatGPT, Claude, and Gemini) are generative AI tools, meaning that they use algorithms to generate content. It is important to note that these tools aren’t thinking—they are using patterns to create outputs. This also means they are not reading or analyzing text.

How is AI Integrated into Library Services?

The Giovale Library has integrated AI into two of our services. Other library resources, such as ScienceDirect from Elsevier, may offer paid AI features. We do not recommend that students pay for accounts to use these features.

JSTOR’s AI Research Tool

Use this tool to narrow and analyze search results, ask questions about texts, and find related resources when doing research. JSTOR works best for research in the humanities, such as philosophy, literature, religion, and history. To use the tool, open a specific result and toggle “AI Research Tool” on; you will be prompted to create a free account. JSTOR’s AI Research Tool is chat-based: you can type in your own questions or use the suggested prompts. Once you’ve created an account, you can also use the AI Research Tool to analyze your search results by clicking “Semantic Scholar” after completing a search in JSTOR.

Learn more about JSTOR’s AI Research Tool

Natural Language Search

Use this tool to search GriffinSearch and other EBSCO databases by asking questions or using longer phrases, rather than keywords. The tool uses AI to create effective keywords, which you can then refine. To use Natural Language Search, simply type a question or phrase into GriffinSearch or another EBSCO database and toggle “Natural language search” on. Once your search is complete, you can click “Show refined query” to see which search terms were used.

Learn more about Natural Language Searching

Making Ethical Choices about AI

Know Your Values

When deciding whether to use AI and which tools to use, consider how your values intersect with the ethical issues involved in its use.

Environmental Impacts

  • Energy and water consumption: AI tools rely on large data centers that consume enormous amounts of energy and water; these facilities have massive carbon footprints and disproportionately pollute low-income and marginalized communities.
  • Resource extraction: mining the rare earth elements used to create the microchips that power AI can have extreme environmental impacts.

Economic Impacts

  • Intellectual property and plagiarism: many AI companies do not disclose the data on which their models are trained; however, AI models may draw upon and reconstitute existing works. This is especially concerning when using video and image-based AI tools.
  • Exploitation of workers: some AI companies rely on underpaid and exploited workers to train their AI models and perform quality control. Many of these companies outsource this labor to the Global South.

Social Impacts

  • Bias: generative AI tools replicate the biases of the data on which they were trained. Many tools also generate multiple responses and use human intervention (either employees or users) to select the “best” ones. As a result, they may present biased, problematic, or inaccurate content as fact.
  • Paid access: some AI tools require users to pay for access to more advanced models, or after a certain number of uses. This exacerbates the gap between those with the disposable income to pay for better tools and those without. 
  • Cognitive offloading: studies have shown that consistent use of AI may hinder your ability to think critically.

Follow University and Class Policies

Though Westminster University does not have a campus-wide policy on the use of AI, we do have an academic integrity policy that prohibits unauthorized and uncited AI use. Many professors will include a syllabus statement about acceptable and unacceptable uses of AI in their courses or assignments. Be sure to carefully read your professors’ syllabi and ask for clarification if you have questions.

Be Transparent

When you use AI tools to generate content, you must acknowledge the use of AI. The Writing Center offers useful handouts that provide guidance on ethically using and citing AI tools.

Using AI Effectively

Unlike search engines, generative AI tools allow you to “prompt” them for specific types of results. Writing useful prompts will help you use AI tools more effectively. Here are a few tips and tricks:

  • Provide context when writing a prompt. Include details about what you’re trying to achieve (for instance, brainstorming search strategies or getting clarification on a topic), what kind of output you would like (bulleted list, short paragraph, table), and for whom the output is intended. Another useful tip is to assign the tool a role; for example, start your prompt with: “Simulate the perspective of a professional in the field of _______. Select keywords for the following topic: _____.”
  • Be specific about the task you want to accomplish. For example, rather than asking, “Can you help me brainstorm three topics for a psychology paper?” ask, “I need to write a college-level paper for a psychology class that synthesizes peer-reviewed research on a topic related to mental health. What are five topic ideas that could help get me started?”
  • Start simple, then build. Detailed prompts can be great, but you can also treat the prompt like a conversation, not a one-off search. If the initial response falls short, ask follow-up questions, add more context, and clarify what you need. For a large query, break it into smaller steps and revise as you go. If you don’t like the output, explain what is not working and why.
  • Use active language. Include specific instructions that tell the tool how to respond. For example, ask the tool to provide examples, write, explain, or summarize. You can also provide rules for AI models to follow. For example, you can tell the tool to never guess if it cannot find an answer.

Evaluating AI Outputs

Because AI tools are not thinking, it is important to evaluate the outputs these tools create. This includes content generated by tools integrated into library databases. Content generated by AI may sound authoritative; however, that doesn’t mean it is correct. Remember to evaluate AI outputs with a critical eye. You can (and should) ask generative AI tools to explain how conclusions were reached and where the tool found the information it provided. Keep in mind that many AI tools are designed to agree with you, even when the correction you offer is itself inaccurate.

If you use AI to summarize sources, including AI integrated into library research tools, you may miss important points. AI can help you learn more about a source, but it cannot replace critical reading.

Contact the Library

Need help evaluating AI content? Have feedback about AI integration into library tools? Contact the library for additional help.