Effective human-robot communication is one of the main concerns in modern robotics. The systems involved should be highly robust, leaving little room for misunderstanding user commands. The main purpose of this work is to develop a general framework for multimodal human-robot communication that allows users to interact with robots using speech and gestures, integrated into unified commands. The proposed architecture relies on separate modules that analyse the low-level inputs, together with a fusion module able to extract semantics from these multiple channels. In this paper, we introduce our general approach and provide a case study in which the gesture and speech modalities are combined.
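The kind of fusion the abstract describes — unimodal modules producing low-level hypotheses that a fusion module merges into a single command — can be illustrated with a minimal sketch. All names here (`SpeechEvent`, `GestureEvent`, `fuse`, the 1.5 s window, the product combination of confidences) are illustrative assumptions, not the paper's actual design:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SpeechEvent:
    intent: str            # e.g. "GO", recognised from "go there"
    slot: Optional[str]    # deictic slot; None if the target is unresolved
    timestamp: float       # seconds
    confidence: float

@dataclass
class GestureEvent:
    kind: str              # e.g. "POINT"
    target: str            # symbolic referent of the gesture, e.g. "table"
    timestamp: float
    confidence: float

def fuse(speech: SpeechEvent, gesture: GestureEvent,
         window: float = 1.5) -> Optional[dict]:
    """Late fusion sketch: merge a speech and a gesture hypothesis into
    one command when they are temporally close and the gesture can fill
    the deictic slot left open by the speech (e.g. "go *there*")."""
    if abs(speech.timestamp - gesture.timestamp) > window:
        return None        # too far apart: likely different communicative acts
    return {
        "intent": speech.intent,
        "target": speech.slot or gesture.target,
        # naive product combination of the two unimodal confidences
        "confidence": speech.confidence * gesture.confidence,
    }

cmd = fuse(SpeechEvent("GO", None, 10.2, 0.9),
           GestureEvent("POINT", "table", 10.6, 0.8))
# cmd resolves "go there" + pointing into {'intent': 'GO', 'target': 'table', ...}
```

A real system would of course score competing hypotheses and handle disagreement between channels; this sketch only shows the shape of the semantic merge.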
Interacting with robots via speech and gestures, an integrated architecture / Cutugno, Francesco; Finzi, Alberto; Fiore, M.; Leone, E.; Rossi, Silvia. - Proc. of INTERSPEECH 2013 (2013), pp. 3727-3731. (Paper presented at Interspeech 2013, held in Lyon, France, 25-29 August.)