Leiser recalls one of his first ChatGPT prompts; he wanted to show his grandparents just how much the technology can do: “Explain the rules of the traditional German card game Binokel in the form of a poem.” Everyone was enthusiastic when the language model responded. Over time, however, Leiser realized that the more precise his prompts became and the more specific the background knowledge they required, the more often the answers he received were inaccurate or partly incorrect. The chatbot simply made these answers up, a process referred to as hallucinating. “Generative AI is very similar to humans in this regard. It is reluctant to admit it doesn’t know something,” says Leiser with a laugh.
From a user’s perspective, formulating ChatGPT prompts remains a balancing act. How precise does a prompt have to be to produce good results? How open does it need to be to give the language model enough room to generate creatively? “If I am deeply involved in the topic, it is easier for me to assess whether answers are plausible. If incorrect information keeps accumulating, I start to doubt the model,” adds Leiser. He likes ChatGPT as an engine for brainstorming. “In that case, small inaccuracies can be overlooked. But once I make decisions based on the information provided to me, blindly trusting the AI, things can become critical quickly, for example, in politics or medicine.”
Knowledge-Guided Machine Learning in Medicine
As part of the team headed by Prof. Ali Sunyaev, Leiser researches hybrid intelligence, the best possible synergy of human and artificial intelligence. Following the approach of knowledge-guided machine learning, the goal is to capture the parameters of human decision-making processes and use them in the development of AI. One major field of research is healthcare. Doctors can detect diseases thanks to their medical expertise, professional experience, and trained judgment. They make diagnoses, define treatment plans, and, when in doubt, rely on ethical principles. How can machines learn to understand these decisions?
“We intend to train AI so that it delivers better results faster, for example, in cancer detection,” Leiser explains. To do this, physicians examine images of tissue. Eye tracking records exactly where a doctor is looking while he or she decides whether the tissue is cancerous. This knowledge about where experts look is then fed into the AI. In addition to images, text sources such as doctors’ reports and medical guidelines can be used for training. Leiser emphasizes that the aim of his research is not to take decision-making away from medical staff: “Decisions about what is right and wrong must remain with humans. AI can only assist.”
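To make the idea concrete: one common way such gaze data can guide training is to add a penalty that rewards the model for attending to the same tissue regions the doctor fixated on. The following is a minimal illustrative sketch, not the research group’s actual method; the function name, the KL-divergence penalty, and the weighting factor `alpha` are all assumptions chosen for illustration:

```python
import numpy as np

def gaze_guided_loss(pred_probs, label, model_saliency, gaze_heatmap, alpha=0.5):
    """Hypothetical combined loss: classification error plus a penalty for
    attending to different regions than the expert did.

    pred_probs:     the model's predicted class probabilities (1D array)
    label:          index of the true class (e.g. 1 = cancerous)
    model_saliency: 2D map of where the model "looks", normalized to sum to 1
    gaze_heatmap:   2D map of the doctor's eye-tracked fixations, same shape
    alpha:          weight of the gaze-alignment term (illustrative choice)
    """
    # Standard cross-entropy on the diagnosis
    ce = -np.log(pred_probs[label] + 1e-12)
    # KL divergence between the expert's gaze distribution and the model's
    # saliency map: zero when they match, larger the more they diverge
    kl = np.sum(gaze_heatmap * np.log((gaze_heatmap + 1e-12) /
                                      (model_saliency + 1e-12)))
    return ce + alpha * kl

# Toy example: two classes, 2x2 attention maps over a tissue image
probs = np.array([0.2, 0.8])                        # model says "cancerous"
gaze = np.array([[0.7, 0.1], [0.1, 0.1]])           # doctor fixated top-left
uniform = np.full((2, 2), 0.25)                     # model attends uniformly
loss = gaze_guided_loss(probs, label=1,
                        model_saliency=uniform, gaze_heatmap=gaze)
```

A model whose saliency matches the doctor’s gaze incurs no extra penalty, so minimizing this loss nudges the network toward the regions human experts consider diagnostically relevant.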
Strengthening Decision-Making Authority Among Youths
Leiser studied Computer Science and Business Information Systems at Karlsruhe Institute of Technology (KIT). As a doctoral student, he became part of the Critical Information Infrastructures research group and followed his group leader Sunyaev to TUM Campus Heilbronn, a home advantage for Heilbronn-born Leiser. “While it is nice to be back in my home town, it is not the only reason I am happy to be here. TUM Campus has an atmosphere of new beginnings; things are happening.” Leiser appreciates the collaboration within the team and with companies in the region. For one of his research projects, he is currently in contact with SLK-Kliniken, one of the Heilbronn-Franken region’s largest providers of healthcare services.
Beyond healthcare, interest in hybrid intelligence extends to a variety of application fields. The research group considers education a focal point; a project application has already been submitted. Leiser says: “Media literacy is a huge issue in education. If we fail to strengthen the next generations’ decision-making authority and allow them to rely entirely on AI for critical thinking, we will have a problem in the long term.” He adds: “Humans have thousands of years of knowledge of cultural techniques and problem-solving strategies. We must preserve this knowledge and find ways to develop alongside AI in the future.”