Artificial intelligence (AI) permeates our everyday lives. It also poses new challenges for university teaching and research. How are AI tools such as ChatGPT used in research and teaching, and what new skills are required to use them?
The launch of ChatGPT, a chatbot from the American company OpenAI, caused a major stir at the end of 2022. Although artificial intelligence has long been part of our everyday lives, ChatGPT made new waves. AI tools are already used extensively across industries, including in teaching and research, where they help students, researchers and lecturers organize their work more efficiently. ChatGPT, for example, can help formulate questions, speed up research, or summarize and simplify texts. But AI tools offer much more than these basic organizational tasks.
Noah Bubenhofer is Professor of German Linguistics at the University of Zurich. He researches the role of language in society, focusing particularly on artificial intelligence and large language models, and has long used language models and other machine-based methods to study language use empirically. The linguist explains: “AI systems, such as chatbots, enable more individualized assistance for students. These systems offer a kind of tutor with whom students can work through content. But in my field of linguistics, for example, it is also a huge help because it lowers the barriers to programming. A chatbot can help me implement my ideas for an analysis.”
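To illustrate the kind of small empirical analysis a chatbot might help a linguist implement, here is a hypothetical sketch in Python: it counts which words occur near a given node word in a text (a simple collocation count). The function name, the window size and the sample text are illustrative assumptions, not taken from Bubenhofer's work.

```python
from collections import Counter
import re

def top_collocates(text, node, window=2, n=5):
    """Count words appearing within `window` tokens of `node` (a toy collocation analysis)."""
    tokens = re.findall(r"[a-zäöüß]+", text.lower())
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == node:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:  # skip the node word itself
                    counts[tokens[j]] += 1
    return counts.most_common(n)

# Illustrative sample text
sample = ("Language models learn patterns from text. "
          "Large language models learn statistical patterns of language use.")
print(top_collocates(sample, "language"))
```

Even a short script like this replaces what would otherwise require familiarity with tokenization and frequency counting, which is the sense in which programming barriers drop.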
Challenges of ChatGPT and similar tools
However, this delegation of tasks to AI is drawing increasing criticism, especially with regard to students’ learning processes. There is a fear that students will never acquire certain skills in the first place: learning approaches built on conscious understanding, frequent practice and accumulated experience could gradually fall away. Yet it is not quite that simple, as Bubenhofer emphasizes: “AI does not deliver a finished result. Rather, it’s about the new possibilities that arise from the collaboration between humans and machines.” Particularly when it comes to quality control, users must focus on critically examining the content produced by AI.
Even if chatbots write grammatically flawless texts, their content can be complete nonsense. The tools are not intelligent; they merely simulate intelligence, and they lack human judgment. Systematic errors can also occur if chatbots are steered in a certain direction, intentionally or inadvertently, by their training data, thereby amplifying misinformation, prejudice or discrimination.
Teaching new skills
Just as the introduction of the pocket calculator made new rules necessary, rules will also have to be established for AI tools, and transparent use of these tools is central. Students, researchers and lecturers alike must experiment with and practice new ways of working with AI systems. Where the product was long the focus, attention must now shift to the process. Anyone writing a text, for example, must ask: How can ChatGPT be used in my work? Where have problems occurred in the past, and where are possible sources of future error? It is clear that established teaching approaches, as well as learning materials and performance assessments, will change. “We have to ask ourselves what skills students should have, and where it makes sense for them to use AI help.” Bubenhofer therefore advocates what he calls AI literacy: “What do you need to know about large language models and AI applications to be able to use them sensibly, responsibly and critically?” One thing is certain: the use of AI tools will become standard practice. This makes it doubly important that AI literacy is taught in schools and universities.