As robots continue to find applications in a wide range of real-world scenarios, the next wave of technological progress in robotics appears to center on their “trustworthiness”: the ability to interact effectively with complex environments and execute tasks autonomously with minimal human intervention.
But what exactly does it mean for a robot to be trustworthy? From my perspective, a fundamental set of criteria, which I encapsulate in the “3R principle”, outlines the essential attributes:
- Reliable: Competent Operation in Unknown Environments.
- Receptive: Empathetic Interaction with Humans.
- Rational: Adaptive Autonomy for Human-Robot Teaming.
Numerous researchers have approached these challenges from diverse viewpoints, yet in my view, the greatest obstacle impeding the practical application of robots in real-world settings is uncertainty. Factors such as noisy observations, dynamic environmental conditions, ambiguous human instructions, and evolving human-robot team structures render deterministic approaches fragile over extended real-world operation. This leads me to a fundamental question: Can we establish the trustworthiness of robots through a probabilistic lens?
This inquiry serves as the driving force behind my research, which aims to pave the way toward trustworthy robots with Bayesian learning. As an embodiment of common-sense reasoning, Bayesian learning provides a formal and consistent way to reason in the presence of uncertainty, empowering robots to handle the variety of uncertainties they encounter when navigating unknown, unstructured, and dynamic environments, especially in the presence of human partners. More specifically, Bayesian learning benefits the creation of trustworthy robots across multiple dimensions:
- Safe: Bayesian learning provides principled modeling and quantification of uncertainty, which facilitates risk-aware decision-making for safety-critical robotic applications such as autonomous driving and surgical manipulation.
- Data-efficient: Bayesian learning allows for the incorporation of prior knowledge, which is critical in intricate situations where only limited data are available but domain expertise can inform the learning process.
- Adaptive: Bayesian learning naturally supports incremental, sequential learning, which enables the accumulation of evidence and the refinement of beliefs with newly collected data during the real-time execution of robotic systems (see the sketch after this list).
- Explainable: Bayesian learning offers interpretability through the explicit handling of uncertainty, integration of prior knowledge, and posterior analysis, which contributes to more transparent modeling and planning processes.
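To make the adaptive and data-efficient properties above concrete, the following is a minimal, purely illustrative sketch of sequential Bayesian updating: a robot refines a Gaussian belief over its distance to an obstacle as noisy range readings arrive, starting from a prior supplied by domain knowledge. The prior, noise levels, and readings are hypothetical values chosen only for illustration.

```python
import numpy as np

def gaussian_update(prior_mean, prior_var, measurement, noise_var):
    """Conjugate update of a Gaussian belief given one noisy scalar measurement."""
    gain = prior_var / (prior_var + noise_var)      # how much to trust the new reading
    post_mean = prior_mean + gain * (measurement - prior_mean)
    post_var = (1.0 - gain) * prior_var
    return post_mean, post_var

# Hypothetical prior from domain knowledge: obstacle roughly 2.0 m away, +/- 0.5 m.
mean, var = 2.0, 0.5 ** 2

# Hypothetical noisy range readings arriving during execution.
for z in [2.3, 2.1, 2.25]:
    mean, var = gaussian_update(mean, var, z, noise_var=0.2 ** 2)
    print(f"belief: {mean:.3f} m +/- {np.sqrt(var):.3f} m")
```

Each update both shifts the mean toward the evidence and shrinks the posterior variance, and it is exactly this quantified, continually refined uncertainty that a risk-aware planner can consume.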
However, while Bayesian learning is promising and intuitive, integrating it into robotic techniques poses significant challenges, the most notable of which arise from two perspectives:
- Inference: How can we specify proper likelihood models that accommodate diverse data sources and structures while ensuring consistency?
- Decision: How can we utilize incrementally updated probabilistic models to enhance the decision-making process of robotic systems (see the sketch below)?
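As a deliberately simplified illustration of the decision side, the sketch below draws samples from a posterior over the obstacle distance (such as the one maintained in the previous sketch) and selects the fastest speed whose estimated collision risk stays below a chance constraint. The braking-distance model, the 1% risk threshold, and all numbers are hypothetical assumptions rather than part of any particular method.

```python
import numpy as np

rng = np.random.default_rng(0)

def collision_risk(speed, dist_samples, stop_factor=0.8):
    """Fraction of posterior samples in which the (hypothetical) braking
    distance speed * stop_factor exceeds the available clearance."""
    braking_dist = speed * stop_factor
    return np.mean(braking_dist >= dist_samples)

# Samples from the posterior over obstacle distance (values are illustrative).
dist_samples = rng.normal(loc=2.18, scale=0.11, size=5000)

# Choose the fastest candidate speed whose estimated risk stays below 1%.
candidate_speeds = np.linspace(0.5, 3.0, 26)
safe_speeds = [v for v in candidate_speeds
               if collision_risk(v, dist_samples) < 0.01]
chosen = max(safe_speeds) if safe_speeds else min(candidate_speeds)
print(f"chosen speed: {chosen:.2f} m/s")
```

The fallback to the slowest candidate when no speed satisfies the constraint is itself a design choice; richer formulations replace it with expected-cost minimization under the same posterior.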
As such, my research centers on seamlessly incorporating Bayesian learning into a range of robotic techniques by rigorously addressing the associated challenges in inference and decision, culminating in a unified framework that advances the development of trustworthy robots in real-world scenarios.