
Humans Still Call the Shots


At the Bürger-Uni hosted by TUM Campus Heilbronn, the Heilbronner Stimme, and the Dieter Schwarz Foundation, Katharina Zweig, Professor of Computer Science at the Rhineland-Palatinate Technical University of Kaiserslautern, firmly counters the societal fear that artificial intelligence will “kill us humans or take away our jobs.”

On the one hand, machines do not possess intelligence comparable to that of humans and cannot adapt to changing situations, says the computer scientist at the mid-November event on the Bildungscampus Heilbronn, held under the title “Are machines the better decision-makers?”

On the other hand, the working world will continue to change, but not in such a way that machines replace humans in every process. “There will be new jobs, and it won't be possible to automate as much of the old jobs as people think,” said Zweig in conversation with moderator and Stimme editor Tobias Wieland. In her estimation, machines will probably take over the 80 percent of routine processes that have always taken up only 20 percent of the time.

We Need the People Behind the Machines

AI as a useful everyday helper, not as a superintelligence to be trusted blindly – that is how Zweig's position can be summed up. The computer scientist explores the limits of artificial intelligence, especially language models. She gives the audience a clear recommendation: AI should never be used to make decisions, predict risks, or grade school essays. Nor should it be used to evaluate academic papers, a practice already found in academia, as co-host Luise Pufahl, Professor of Information Systems at TUM Campus Heilbronn, mentions in her welcome address.

But why aren't machines the better decision-makers? Mainly because their decisions are not reliable: it is often impossible to trace how they came about. “The computer, the machine, the system. That's technology, there's no entity behind it,” says Zweig, a recipient of the Federal Cross of Merit, explaining her point of view. For her, there must always be a person behind the machine who takes responsibility.

Evaluations Without Understanding of Content

Take language models as an example: if a language model is asked to evaluate a specific essay, it generates text that looks like an evaluation but reflects no actual judgment of the content. The machine has been trained to measure surface structures such as word choice and sentence length, and it grades the text on these criteria, which say nothing about the content. This is also evident from a corresponding experiment in which none of the AI's suggestions for improvement would have been appropriate for the specific text, explains Katharina Zweig.

Another example is a complaint bot: the scientist recounts how she once posed as a customer and, during the complaint process, was asked to enter an American phone number. “I said I was from Europe and that our phone numbers are different,” reports the professor. “The machine then asked me to simply give it the last ten digits.” The exception of a European customer had not been considered in designing the dialogue. Her aim is to prevent poorly made software, and to point out when well-made software is used incorrectly.

Try It Out to Understand

For those who haven't tried it yet, she recommends testing the most popular models, such as ChatGPT or Perplexity. “Artificial intelligence will change our lives, your children's lives, and your grandchildren's lives – for better or for worse. But we need to know how it works in order to understand it.” The expert's advice: “Only use AI systems whose quality you can verify.”

The next edition of the Bürger-Uni takes place on March 25, 2026, when Professor Maximilian Lude addresses the question “Is this the future or can we do without it?”


Watch the event