
Interview: AI, Language, and Ethics with Hal Daumé III

Professor Hal Daumé III is an invited scholar at SCAI and ISIR, as well as the Volpi-Cupal Professor of Computer Science and Language Science at the University of Maryland (UMD). In our interview, he takes us through the nuances of human-AI interaction, the ethical responsibilities of researchers, and how to use AI to enrich the human experience rather than replace it.

What brings you to Sorbonne University’s SCAI and ISIR?

HD: I’m usually based at the University of Maryland, where I've been teaching and researching for about 15 years, but I had never taken a sabbatical. I saw this as a unique chance to take time away from my usual setting, immerse myself somewhere entirely different, and bring a fresh perspective to my work. I knew I wanted to go abroad, and I have a long-standing professional connection with François Yvon at ISIR, whom I've known for around 20 years. I reached out to him, and everything fell into place. It’s also a great fit professionally because several research projects at Sorbonne intersect with what I’m working on, especially in natural language processing (NLP) and human-AI interaction. Working here isn’t a direct replica of my setup at Maryland; it provides fresh challenges and connections, which I’m excited to learn from and, hopefully, contribute to as well.

Interactive AI plays a big part in your work. Why is it important?

HD: Just five years ago, explaining the relevance of human interaction with AI was a challenge. I'd give presentations and spend several minutes justifying why human-AI interaction mattered. Now, with the advent of AI tools like ChatGPT, the need for studying this is obvious. It's clear that AI will continue to affect more aspects of daily life and work. I'm especially interested in enabling AI to learn from interactions naturally. Say you have a simultaneous translation AI: it should be able to "learn" from interaction mistakes. If the translation gets garbled, for example, the AI should recognize that something went wrong from the way the user reacts, just as we do as humans in natural conversation. But many systems aren't learning from these responses; they're just facilitating communication without improving over time. The idea of adaptive, responsive AI is what excites me; it's where I see a big opportunity.
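As a rough illustration of what "learning from implicit cues" could look like, here is a minimal Python sketch: an epsilon-greedy learner that treats an inferred user reaction, rather than an explicit rating, as its reward signal. The strategy names and reaction labels are hypothetical, not drawn from Daumé's systems.

```python
import random
from collections import defaultdict

class ImplicitFeedbackLearner:
    """Epsilon-greedy bandit over output strategies; the 'reward' is
    inferred from the user's reaction instead of an explicit rating."""

    def __init__(self, strategies, epsilon=0.1):
        self.strategies = strategies
        self.epsilon = epsilon            # how often to explore
        self.counts = defaultdict(int)    # times each strategy was used
        self.values = defaultdict(float)  # running mean reward per strategy

    def choose(self):
        # Occasionally explore; otherwise exploit the best-known strategy.
        if random.random() < self.epsilon:
            return random.choice(self.strategies)
        return max(self.strategies, key=lambda s: self.values[s])

    def update(self, strategy, user_reaction):
        # Map an implicit cue to a scalar reward. In a real system this
        # label might come from a classifier over the user's follow-up
        # behavior (a puzzled rephrasing, a long pause, and so on).
        reward = -1.0 if user_reaction == "confused" else 1.0
        self.counts[strategy] += 1
        self.values[strategy] += (reward - self.values[strategy]) / self.counts[strategy]

# Hypothetical usage: pick a translation strategy, observe the reaction,
# and fold that implicit signal back into the learner.
learner = ImplicitFeedbackLearner(["literal", "fluent", "simplified"])
strategy = learner.choose()
learner.update(strategy, user_reaction="confused")
```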

You mentioned that natural interaction is a key part of your research. What does this look like when working with AI?

HD: It's essential to understand and incorporate subtle cues of "success" and "failure" in AI systems, especially when explicit feedback isn't possible. As people, we don't give the person we've just spoken with an explicit rating of the conversation. Humans can communicate dissatisfaction without words: in natural conversation, if I see you look confused, I know I need to back up and clarify. It would be great if AI could learn from those small, implicit cues, rather than only improving from overt actions like a thumbs-up or a thumbs-down.
A lot of my research focuses on language, as it’s the most intuitive way humans interact. I find it fascinating because with language, we can adjust to different levels of detail. Like, if I’m teaching you to make béchamel sauce, I might just say, “make béchamel” if you know what you’re doing. But if not, I’d walk you through it: “melt butter, add flour, stir in milk,” and so on. That kind of layering of instructions through language is something that’s natural for us but could make AI more flexible too. If AI can adjust based on a user’s knowledge level and interact naturally, it could make these kinds of interactions feel seamless.
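The béchamel example can be made concrete with a small sketch. Below is a hypothetical Python snippet that picks between coarse and fine-grained instructions based on an assumed estimate of the user's expertise; the recipe steps and the threshold are invented for illustration.

```python
# The recipe steps and the expertise threshold below are invented for
# illustration; nothing here comes from an actual system.

BECHAMEL_STEPS = {
    "coarse": ["Make a béchamel."],
    "fine": [
        "Melt butter over medium heat.",
        "Whisk in an equal amount of flour and cook briefly.",
        "Gradually stir in milk until the sauce thickens.",
    ],
}

def instruct(steps_by_granularity, expertise):
    """Return coarse steps for experienced users and fine-grained steps
    otherwise. `expertise` is assumed to be in [0, 1], e.g. estimated
    from earlier turns of the dialogue."""
    granularity = "coarse" if expertise > 0.7 else "fine"
    return steps_by_granularity[granularity]

for step in instruct(BECHAMEL_STEPS, expertise=0.3):
    print(step)
```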

Hal Daumé III pictured at SCAI

Beyond the technical aspects, I know that you’re also deeply interested in the societal impact of AI. Can you tell us more about that?
HD: Definitely. I feel like a lot of public discourse around AI is pretty extreme: either AI is going to save us all, or it's going to destroy us all! I think both of those are caricatures. If I didn't think AI had positive potential, I wouldn't work in this field. On the other hand, I recognize the risks. About ten years ago, I started working on what we might call "AI ethics," though it wasn't called that at the time. It became clear that AI was no longer something contained within isolated systems; it was going to be applied in complex, real-world situations. And these real-world applications highlighted new challenges. For instance, AI systems now interact with humans in ways that can reinforce societal biases, particularly in language applications. My early work in AI ethics focused on analyzing these biases and considering how they play out in language. I see it as a core part of our responsibility in this field to be proactive about such potential issues.

Could you give us an example of how you tackle these ethical concerns in your projects?

HD: One example is our work with American Sign Language (ASL) technology, where ethical considerations, especially around data, are critical. ASL is unique because it’s a visual language, so developing AI that can interpret or generate it is complex, requiring huge amounts of representative data. Unfortunately, it’s difficult to collect this data in ways that are both respectful and accurate. We approached this by crowdsourcing video data with explicit consent, ensuring participants’ privacy was protected. This was an important project not only for the technology itself but for the approach it represents. We need to address real community needs with care, recognizing that even small improvements—like adjusting video conference settings to highlight ASL users—can make a meaningful impact.

As AI continues to develop, there’s a growing discussion around regulation. Where do you stand on the regulation of AI? Who is accountable for ensuring AI remains ethical?

HD: It’s a critical conversation, and I think there’s a strong case for AI regulation, but it’s challenging. In the U.S., where I’m from, the federal government is a long way from implementing AI regulation, though states are making incremental progress. The EU AI Act, on the other hand, is promising, if a bit restrictive in some ways. I feel data protection should be the core of AI regulation, as so many issues stem from data misuse rather than the AI itself. Some U.S. states are already tackling this by regulating automated decision-making systems, which is much clearer to define than “AI” as a whole. A lot of harm results from data being used in ways that were never intended, so I think focusing regulation on data use is a step in the right direction.

How do you think AI will affect employment?

HD: I think it's less about replacing jobs and more about shifting certain tasks within jobs. Rather than eliminating jobs, AI could automate specific aspects of a role. For example, with Singapore's self-driving taxis, tasks like cleaning and refueling are still necessary, so new roles emerge around those needs. My worry is that we'll focus too much on efficiency rather than job satisfaction. Efficiency is a corporate priority, but in my view, it shouldn't be the only one. There was an insightful study on GitHub Copilot showing that AI increased efficiency but also made developers feel more satisfied because it handled the tedious parts. This is where AI could be genuinely beneficial, but I'm concerned we may prioritize efficiency at the cost of human autonomy and fulfillment.

What do you hope your work will contribute to society?

HD: My goal is to design AI tools that prioritize experience over efficiency. For instance, a project I did with Reddit looked at ways to support their content moderators, who are volunteers. They’re not focused on doing things quickly; they’re invested in community building. Working with volunteers really pushed us to prioritize improving the experience over speeding up the task. I think this model—focusing on users and their needs rather than just productivity—could be valuable across AI research, especially in academic settings.

I’m hopeful we can shape AI in a way that respects human choice and autonomy, even if that requires new ways of working and collaborating. I’m glad to see this conversation expanding beyond technology circles, too. It’s important that researchers, users, and the broader public stay involved in this journey.