Why Real Autonomy Is So Hard: Insights from the SIX Mobile Work Machines Results Seminar
- Aparajita Chowdhury

On Monday, 5.5.2026, Mixed Fleet researchers participated in the SIX Mobile Work Machines Results Seminar, which brought together researchers, industry partners, and practitioners to reflect on the realities of autonomous and semi-autonomous machines operating in complex, real-world environments. The event was held at the Solo Sokos Torni Hotel in Tampere.
A recurring theme throughout the seminar was that real-world autonomy is fundamentally different from autonomy in controlled settings. Changing terrain, harsh weather conditions, human unpredictability, and safety-critical constraints make full autonomy not only difficult, but often impractical. Instead of aiming for automation for its own sake, many discussions emphasized the importance of understanding where and how autonomy can genuinely add value.
Against this backdrop, we presented results from the Mixed Fleet project, focusing on Work Package 1 (WP1) and Work Package 2 (WP2). Together, these work packages address the challenge of enabling safe, efficient, and trustworthy collaboration between humans, manually operated machines, and autonomous systems.

Work Package 1 centers on human–machine interaction in mixed-fleet environments such as warehouses and industrial worksites. Our findings so far highlight that trust in autonomous systems is closely tied to the quality of interaction and communication. When systems clearly express their intentions, limitations, and state, operators are better able to understand and rely on them. At the same time, overly complex interfaces or excessive information can increase cognitive load and undermine trust. The results underline that operators want machines that support their expertise rather than override it, reinforcing the importance of human-centered design in mixed-autonomy workplaces.
Work Package 2 focuses on what happens once an operator expresses intent. In our presentation, we showed how human commands, potentially expressed in natural language, can be transformed into structured mission plans that machines can execute safely. By combining AI-based planning approaches with formal verification methods and simulation-based validation, WP2 aims to ensure that autonomy remains predictable, safe, and aligned with human goals. This approach allows systems to respect operator intent while verifying, before execution begins, that no safety constraint will be violated.
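To make the WP2 pipeline concrete, here is a minimal sketch of the idea of checking a structured mission plan against safety constraints before any execution. All names, zones, and speed limits below are illustrative assumptions, not the project's actual planner or verifier:

```python
from dataclasses import dataclass

@dataclass
class Step:
    action: str   # e.g. "move" or "lift"
    zone: str     # worksite zone the step takes place in
    speed: float  # commanded speed in m/s

# Assumed per-zone speed limits standing in for real safety constraints.
MAX_SPEED = {"open_yard": 5.0, "shared_aisle": 1.5}

def plan_from_intent(intent: str) -> list[Step]:
    """Toy 'planner': maps a natural-language-like intent to a fixed plan."""
    if "deliver" in intent:
        return [Step("move", "open_yard", 4.0), Step("move", "shared_aisle", 1.2)]
    return []

def verify(plan: list[Step]) -> list[str]:
    """Pre-execution check: report violations instead of executing them."""
    return [
        f"{s.action} in {s.zone}: {s.speed} m/s exceeds {MAX_SPEED[s.zone]} m/s"
        for s in plan
        if s.speed > MAX_SPEED.get(s.zone, 0.0)
    ]

plan = plan_from_intent("deliver the pallet to bay 3")
violations = verify(plan)
print(violations)  # an empty list means the plan may proceed
```

The point of the sketch is the ordering: the plan is derived from intent, then checked as a whole, and only a plan with no reported violations would ever reach the machine.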

Several presentations during the seminar explored the balance between autonomy and operator agency and how to respect human intent while ensuring safety. A human operator issues a command based on their goal, experience, or sense of urgency. However, that command may steer the system toward an unsafe state. Rather than simply rejecting the input or overriding the operator entirely, shared control approaches such as Control Authority Allocation Control (CAAC) aim to make the smallest possible adjustment needed to keep the system within a safe operating envelope. This idea becomes even more interesting when we consider actual user behavior. Operators often become more confident and act more aggressively when they trust the system to manage safety on their behalf. CAAC reduces mental workload, preserves the operator’s intent, and shortens task completion time, but it also changes how people behave. When users feel protected, they push boundaries more readily. This does not mean humans are the problem. Instead, it shows why understanding human behavior is essential when designing autonomous systems.
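The "smallest possible adjustment" idea behind such shared-control schemes can be sketched in one dimension: instead of rejecting an unsafe command, project it onto the safe operating envelope. This is a simplified illustration under my own assumptions, not the CAAC method presented at the seminar:

```python
def minimally_adjust(commanded_speed: float, safe_min: float, safe_max: float) -> float:
    """Project the operator's commanded speed onto the safe interval.

    Clamping is the minimal adjustment in one dimension: a command that is
    already safe passes through untouched, preserving operator intent, while
    an unsafe command is moved only as far as the envelope boundary.
    """
    return max(safe_min, min(commanded_speed, safe_max))

# A safe command is untouched; an aggressive one is nudged to the boundary.
print(minimally_adjust(1.2, 0.0, 1.5))  # 1.2 (intent preserved)
print(minimally_adjust(2.4, 0.0, 1.5))  # 1.5 (smallest change that is safe)
```

Because safe commands pass through unchanged, the operator stays in control most of the time, which is exactly what makes the behavioral side effect interesting: users who trust the clamp may command more aggressively, knowing the system will absorb the excess.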
Autonomy is not just a technical scale from manual to fully autonomous operation. It is also a negotiation of responsibility, trust, and control. In mobile working machines, effective autonomy often lies not at the extremes, but in carefully designed shared control, where machines adapt to humans and humans adapt to machines. Taken together, these insights underline a crucial message from the SIX event: machines may already be capable enough, but the real challenge is designing autonomy that accounts for how people think, decide, and act in real conditions. Human behavior is not noise around an otherwise perfect system. It is a core design input.
The seminar reinforced a central message of the Mixed Fleet project: the future of mobile work machines is not characterized by fully autonomous systems operating in isolation, but by heterogeneous fleets in which humans and machines work together. Mixed autonomy allows systems to handle routine or safety-critical aspects while leaving room for human judgment, experience, and situational awareness.
As the Mixed Fleet project continues, our work remains focused on deepening the understanding of human–machine teaming, building trust through transparency and verification, and bridging advanced AI techniques with the practical constraints of real operational environments. The discussions at the SIX seminar confirmed that while real autonomy is hard, thoughtfully designed collaboration between humans and machines makes it achievable.