Utilising artificial intelligence (AI) in research responsibly and sustainably

Associate Professor (tenure track) Nannan Xi giving her presentation “From control to empowerment” on how researchers can be empowered by and benefit from the use of artificial intelligence.

How can I utilise AI when doing research?

This was discussed at the AI-themed Research Afternoon organised by the Faculty of Management and Business (MAB). The Research Afternoon was held on September 24, 2024 at Sauna Restaurant Kuuma, and almost one hundred researchers participated in the event. The event began with lunch, followed by presentations and discussions on the responsible and sustainable use of AI in research. At the end of the event, the Faculty’s annual ‘highly cited publication’ awards were presented and Research and Innovation Services offered short greetings. After the official programme, there was also an opportunity to continue the informal discussion on the sauna benches. There were four presentations on the use of AI.

Generative models in research: Ethical perspectives

First, Senior Specialist Ville Rantanen from Research and Innovation Services and Doctoral Researcher Otto Sahlgren from the Faculty of Social Sciences (SOC) focused in their joint presentation on the ethical perspectives of generative AI models in research. They began by briefly introducing the most common generative AI models and how they can be utilised in research. Rantanen and Sahlgren also introduced Tampere University’s guidelines for using AI, the Finnish Code of Conduct for Research Integrity by the Finnish National Board on Research Integrity (TENK), the EU’s AI Act, and the principles of research integrity (from the European Code of Conduct for Research Integrity by ALLEA), all of which provide guidance for the ethical and responsible use of AI in research.

In their presentation, Rantanen and Sahlgren gave clear examples of ethical challenges related to the use of AI that every researcher should be aware of. One of these challenges relates to reliability and openness. For example, generative AI models are known to “hallucinate”, i.e. produce seemingly reliable information that may nevertheless be incorrect (e.g. contradictory assertions or non-existent references). In addition, generative AI models are very complex and multilayered, meaning that the parameters behind their outputs cannot be fully explained (the so-called “black box” problem). Large AI models are also inherently stochastic, i.e. they involve some randomness, so they do not always produce the same results when a query is repeated, which is problematic for the reliability of research.

The use of AI also poses challenges from the perspective of data protection, privacy and information flow. For example, the training data used to develop generative AI models may have been collected without the informed consent of data subjects (e.g. by scraping the web). In addition, generative models can reproduce examples from their original training data, which may contain sensitive or copyrighted information. Some AI models may also use the data fed into them to further develop the models and may transfer the data outside the EU. These challenges pose risks of plagiarism and may violate privacy, data protection principles and copyright.

AI tools (or models) may also exhibit bias due to unrepresentative or erroneous training data and harmful content used in model development. This can lead to harmful stereotypes or toxic language. Subtle biases in AI models can also be problematic as they are not easy to detect. From the perspective of research integrity, biases may have a negative impact on the representativeness, validity and reliability of the results, for example when conducting literature reviews.

In their presentation, Rantanen and Sahlgren also emphasised the challenges of social and ecological sustainability related to the use of AI. From the point of view of social sustainability, the problem is that “big tech” AI service providers (e.g. OpenAI) have outsourced their data workforce to countries with weak labour protections. These employees, for example, run the risk of being traumatised by the violent or otherwise inappropriate material that they need to remove from the AI models. From the perspective of ecological sustainability, the problem is that most AI tools consume significant amounts of energy and water and generate carbon dioxide emissions. Researchers should be aware of these challenges.

At the end of their presentation, Rantanen and Sahlgren offered a clear six-point list of things to remember when using AI in research:

  1. Know your AI tools and their makers
  2. Use AI tools responsibly
  3. Protect your and others’ data
  4. Exercise caution
  5. Be transparent
  6. Know when not to use AI.

From control to empowerment: AI for sustainable outcomes

In her presentation, Associate Professor (tenure track) Nannan Xi from the Information and Knowledge Management Unit (MAB Faculty) focused on how researchers can be empowered by and benefit from the use of artificial intelligence. She started her presentation by considering the problems and benefits associated with the use of AI in research. The problems relate to themes also raised by Rantanen and Sahlgren, such as possible misinformation, plagiarism and bias. On the other hand, there are many benefits associated with the use of AI, of which Xi offered four examples.

First, AI is useful for acquiring resources and making efficient use of them. For example, when applying for external funding, AI can be used to analyse successful applications, prepare an application that meets the funding criteria, and practise for a possible interview round (e.g. for ERC funding). Secondly, AI can be utilised to develop one’s own professional competencies; for example, AI can serve as a ‘mentor’ that provides real-time feedback on one’s competences and suggestions on how to develop them. Thirdly, AI provides tools for conducting research, for example for formulating research questions, identifying research gaps, conducting quick literature reviews, identifying new trends and simulating different research designs. Fourthly, Xi emphasised the use of AI when communicating with academic audiences: AI can be used, for example, as a ‘copyeditor’ to improve the clarity and structure of one’s text, organise content, proofread and manage citations. AI also offers new kinds of tools for storytelling and for visualising research results.

At the end of her presentation, Xi, like Rantanen and Sahlgren, emphasised researchers’ own responsibility and openness in the use of AI. According to her, the use of AI offers new opportunities to free up researchers’ time for creative, innovative and aesthetic thinking in collaboration with other researchers.

EVIL-AI – Identification and mitigation of the negative effects of AI agents

In his presentation, Professor Henri Pirkkalainen from the Information and Knowledge Management Unit (MAB Faculty) talked about the EVIL-AI (“evil eye”) project, which focuses on identifying and mitigating the negative effects of AI agents. This recently started project has received EUR 1.4 million in funding from the Jane and Aatos Erkko Foundation. In addition to Pirkkalainen, the project is led by Professor Pekka Abrahamsson and Associate Professor (tenure track) Johanna Virkki from the Faculty of Information Technology and Communication Sciences (ITC). The best-known examples of AI agents are Microsoft’s Copilot and ChatGPT, which are trained to assist with particular tasks, but various care and service robots serving humans, as well as avatars in the metaverse, can also be turned into conversational AI agents. The project investigates the risks associated with the use of AI agents as well as possible manifestations of malicious AI. The topic is important, as we trust technology today more than ever, and our relationship with AI is expected to become even more personalised in the future.

Professor Henri Pirkkalainen giving a presentation; the slide shown is titled “Caricatures, the uncanny valley and full realism”.

Regarding future trends in conducting research, Pirkkalainen emphasised a few things. Firstly, AI agents are getting better at assisting in work, including research activities. Secondly, different AI agents will be available for different purposes (e.g. analysing data, brainstorming research ideas, assisting with Microsoft tools). Thirdly, advancing beyond text-based AI formats (e.g. chatbots) will be inevitable. Finally, Pirkkalainen emphasised that recent developments in memory-equipped AI agents can be a significant leap towards a personalised AI research assistant.

How to benefit from the AI as a researcher?

In his presentation, Professor Pekka Abrahamsson (ITC Faculty) also talked about how the use of AI will change the way we conduct research. In particular, he focused on how researchers can utilise generative AI models. Using memorable and vivid examples, Abrahamsson showed the diverse ways in which AI can assist in different stages of the research process. In addition, he demonstrated how AI can be used to quickly create a ‘podcast’ on a given topic, such as the MAB Faculty’s research strategy. On the other hand, Abrahamsson gave examples of how to identify whether AI has been used in publications (e.g. from the use of certain words and phrases). He also offered tips on how to give effective prompts to an AI model: the prompts should be clear and include constraints, it is important to provide context, and it is good practice to require citations and to re-ask the same question to verify the information. How to give the right kind of prompts also sparked a lively discussion in the audience.
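The prompting tips above can be sketched as a simple template. This is an illustrative sketch only: the helper function, its name and its wording are the author’s assumptions, not from the presentation, but it shows how clarity, constraints, context and a citation requirement can be combined into one prompt.

```python
# Illustrative sketch of a prompt template following the tips from the talk:
# be clear, add constraints, give context, and require citations.
# The function name and wording are hypothetical, not from the presentation.

def build_research_prompt(question: str, context: str, max_words: int = 200) -> str:
    """Assemble a structured prompt for a generative AI model."""
    return (
        f"Context: {context}\n"
        f"Task: {question}\n"
        f"Constraints: answer in at most {max_words} words; "
        "if you are unsure, say so explicitly.\n"
        "Requirement: cite a verifiable source for every factual claim."
    )

prompt = build_research_prompt(
    question="Summarise the main ethical risks of using generative AI in research.",
    context="I am a doctoral researcher preparing a literature review.",
)
print(prompt)
```

The resulting text can be pasted into any chat-based AI tool; per Abrahamsson’s advice, the same question should then be re-asked to verify that the answers are consistent.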

Based on the presentations and discussions at the Research Afternoon, it can be argued that AI already offers many opportunities for conducting research, ranging from applying for funding to writing publications. For example, according to Rantanen and Sahlgren, most scientific publishers allow the use of AI to improve the quality of a manuscript, but the use of AI needs to be clearly described. Both Pirkkalainen and Abrahamsson emphasised in their presentations that we are just getting started with the use of AI. Thus, it can be assumed that AI will revolutionise the way we conduct research in the future. Still, as Xi highlighted in her presentation, it is, after all, people who do research.


The MAB Faculty warmly thanks the DigiSus Research Platform for the financial support for the event!

Text (based on the presentations) and pictures: Hanna Salminen, Research Specialist, PhD, e-mail: hanna.salminen@tuni.fi