The autonomy equation: How agentic AI reshapes trust and workload in routine productivity applications
Geninatti Cossatin A.;Ferrero F.;Ardissono L.;Mauro N.
2026-01-01
Abstract
User experience and trust in AI-assisted technologies are key factors in their adoption. We investigate these aspects in an Agentic AI platform that integrates routine productivity services and exhibits different levels of autonomy: a manual baseline that lacks AI-driven automation, an Agentic AI with medium autonomy that requires user confirmation before acting, and an Agentic AI with high autonomy that acts proactively for low-stakes tasks. The study, involving 230 participants with heterogeneous professional backgrounds, examines how system autonomy affects user activity, user workload, perceived support, and trust. We found that both Agentic AI systems outperformed the baseline in user productivity. In task execution, they achieved a precision of over 82%, higher than the baseline's 65%. The recall of the Agentic AI system with high autonomy was 63%, far above that of the system without AI-driven automation (14%), denoting much higher throughput. The Agentic AI systems outperformed the baseline in workload reduction (NASA-TLX aggregate score) with a statistically significant difference. Both AI-driven systems received equivalent or slightly higher trust than the baseline. However, the system with medium autonomy was the best at balancing productivity gains and user preferences for control. Specifically, the correlations between individual user characteristics (Desirability of Control and Propensity to Trust) and the resulting trust in the systems suggest that the influence of personal traits on system evaluation is least pronounced when automation is combined with explicit user intervention. These results encourage the adoption of user-controllable Agentic AI architectures in multitasking support.
File: LAM-EditorialVersion.pdf (open access, editorial PDF, 5.56 MB, Adobe PDF)
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.