
Generative AI as an Information Type

Advice to help you critically evaluate when and how to use responses generated by AI.

Searching for, reading, and synthesising information to inform your academic work takes time and requires you to develop your academic skills. It may be tempting to think that tools such as ChatGPT, Bing, Bard and Perplexity AI can speed up this process by finding information on a topic for you and answering your questions directly. However, it is important to understand a little more about how these tools work and the information they provide, and to think critically about whether the information they create is appropriate for your intended use.

Search engines versus Generative AI

Search engines

Traditional search engines and literature searching tools, such as Google, Library Search or your subject databases, work by searching for information based on the keywords you enter. The search tools match your keywords to the words used to describe content and give you a list of potential sources to investigate, such as websites, books, articles or news items.

For this to work effectively, you need to break down your topic into a set of search terms, thinking about alternative terminology, how to combine your keywords, and what you want to include or exclude. It is not always an easy process; it can take time to refine your approach to searching and to identify the best resources to search within.
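As an illustration (a made-up search, not one taken from a specific database), a question about how social media affects sleep in teenagers might become: ("social media" OR "social networking") AND sleep AND (teenager* OR adolescen*). This combines alternative terms with OR, links the different concepts with AND, and uses * to capture word variants such as "teenagers" and "adolescence".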

Generative AI

On the surface, generative AI tools appear to do more than search for information: they find and present the information for you. You enter your prompt or question, and the response is a unique answer, a source of information generated for your purpose alone. You do not need to follow links to online materials or seek out print sources, because AI has presented you with the answer and the information is right there in the response.

In terms of information-seeking behaviour, this is probably what the majority of us want. Don’t tell me where to find the answer, give me the answer!

However, it is important to recognise that generative AI does not create information in the way you might expect and are familiar with. The way that information is generated has a big impact on the reliability and authority of the response, and how you might use it during your studies.

How is Generative AI creating information?

Generative AI text tools such as ChatGPT and Bard are computer programs built on large language models. These models are trained by consuming large amounts of text from books, articles and websites, which the program analyses to find patterns and relationships between words and longer passages of text. This intelligence is based on recognising patterns rather than interpreting meaning or what the words actually say.

The text responses are built word by word, phrase by phrase, based on what is most likely to come next and what ‘fits’.
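If it helps to see the idea in practice, here is a toy Python sketch of next-word prediction. It is a deliberately simplified illustration, not how ChatGPT or Bard actually work; the tiny corpus and the function names are invented for this example.

    import random
    from collections import Counter, defaultdict

    # A tiny corpus standing in for the vast training text a real model consumes.
    corpus = ("the cat sat on the mat the cat ate the fish "
              "the dog sat on the rug").split()

    # Count which word follows which: a crude stand-in for the statistical
    # patterns a large language model learns during training.
    following = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word][next_word] += 1

    def generate(start, length=6):
        """Build a reply word by word, always choosing a likely next word."""
        words = [start]
        for _ in range(length):
            candidates = following[words[-1]]
            if not candidates:
                break
            # Pick the next word in proportion to how often it followed the
            # previous one in the training text; meaning is never consulted.
            choices, weights = zip(*candidates.items())
            words.append(random.choices(choices, weights=weights)[0])
        return " ".join(words)

    print(generate("the"))  # e.g. "the cat sat on the rug the"

Notice that the sketch only ever asks "which word tended to follow this one?"; at no point does it check whether the sentence it builds is true.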

You may find it useful to think of this text generation like building with Lego blocks. If we build a model without thinking about which Lego set the blocks have come from or what colour they are, the result may be a functional model, but it won’t necessarily be accurate or match the picture on the box.

In the same way, the AI tool fits the pieces of text together to build a response without considering which sources the text comes from or the context of the information. You have what looks like a model answer to your question, but is the information accurate, and will it hold up in assessment against the assignment marking criteria?

What does this mean for you?

At university we are used to reading books and articles that are evidence based, created through the synthesis of multiple sources and data, and written by authors who have developed an understanding of, or expertise in, a discipline. There is an expectation that the types of information you use and the sources you select will be high-quality academic, technical and specialist sources.

You should be cautious about using responses generated by AI as a source of information in your studies, and take time to think about whether they are an appropriate source for the task you have been given. There are some key issues to be aware of:

Generative AI is not an academic source

ChatGPT and similar tools are good at answering questions by giving you an alternative explanation, providing background information or offering an overview of main points. At university level, there is an expectation that the sources you use will be more in-depth and scholarly, and that you will draw on differing perspectives to form your own opinions. Ask yourself, is information created by generative AI good enough for your work? What are the markers of your work expecting you to use as sources of information?

Where are the sources and evidence?

When ChatGPT provides an answer to a question, it will not immediately cite where the information came from. This is because it is building the text block by block, constructing it from multiple sources. You typically cannot trace the response, or any individual part of it, back to a single parent source, or know where the information came from. When prompted to give a citation, one will be included, but you cannot be sure that the reference is real or that it is the source of that idea.

Few AI tools provide details about what was included in the training dataset, which makes it even more challenging to evaluate whether a response is reliable and authoritative.

Is the information up-to-date?

Although it may appear to, ChatGPT is not connected to the internet and does not function like a search engine. The current version was trained on a massive dataset collected prior to 2021, and all AI tools built on top of the ChatGPT model will be similarly time-bound. Bard is reported to use live data, and AI-enhanced search such as Bing will potentially draw on up-to-date information.

Answers are often incorrect or hallucinated

The way AI generates text, by constructing information from a giant dataset using a predictive model, can lead to hallucinated responses. The replies may sound plausible, but on critical reading you may find that a response is factually incorrect or unrelated to the context. You will need to approach any material generated by AI with caution to ensure that you are not using fabricated sources as the basis for your assessed work.

Evaluate the responses as a type of information

To approach the information generated by AI tools critically:

1. Cross-check the response and do lateral reading

As with any information source, don’t accept generative AI responses at face value. Cross-check the information with other sources, your own knowledge, and information provided by your lecturers in learning materials. Read some easily accessible online sources or some reference sources for your subject. Do they contain the same information and conclusions? Do they confirm what the response claims? Looking at multiple sources in this way, to compare and contrast what they are saying, is called lateral reading, and it’s a useful technique that can help you avoid bias and misinformation.

2. Verify that references are real

Search for any references included in the response in Library Search, Scopus, Web of Science or Google Scholar. Does the book or article exist? Are the publication details correct?

3. If they are real, check that they support the response

It is common to find that although the references may be correct, on closer reading the source has been misrepresented in the response. Take time to check that the cited sources are about what the AI tool claims.

4. If you use generative AI responses in your work, you’ll need a reference

You will find guidance on referencing AI as an information source on the Library referencing guide and examples for different referencing styles in Cite Them Right Online.

Citing generative AI guide

An overview guide to citing ChatGPT and other generative AI tools in different styles.