Brain-Robot Interface-Based Sound Source Navigation of an Assistive Robot in Industry 4.0
Abstract
This paper proposes a bi-modal Brain-Robot Interface (BRI)-based framework for localizing a sound source in an Industry 4.0 environment with a semi-autonomous mobile robot. The online BRI paradigm introduces a technique that incorporates human motion intentions, via Motor Imagery (MI), and intended sound directions, via the Auditory Steady State Response (ASSR), with a mobile robot in audio-aware industrial test beds. A Common Spatial Pattern (CSP)-based Linear Discriminant Analysis (LDA) classification algorithm is used to localize the direction of the intended sound source from electroencephalography (EEG) signals recorded by electrodes mounted over the temporal, occipital, and parietal cortices. The complete system was tested under different scenarios (with and without background noise) and evaluated with the proposed reliability metrics. The BRI system achieves an overall accuracy of 90.47% with the MI LDA classifier, an ASSR SNR difference of over 6 dB, and reliability scores of 4.07 m⁻² and 2.87 m⁻² for source distances varying from 2 m to 6 m under the influence of background noise.
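The abstract names a CSP-based LDA pipeline for classifying EEG trials. As a hedged illustration only (the authors' actual code, channel montage, and preprocessing are not given here), the sketch below shows the standard two-class CSP + LDA pattern on synthetic multi-channel data: compute per-class covariance matrices, solve the generalized eigenproblem for spatial filters, extract log-variance features, and classify with LDA.

```python
# Minimal, generic CSP + LDA sketch on synthetic EEG-like data.
# NOT the paper's implementation; data shapes and parameters are assumptions.
import numpy as np
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

def csp_filters(trials_a, trials_b, n_pairs=2):
    """Common Spatial Patterns: spatial filters that maximize the
    variance ratio between two classes.
    trials_*: arrays of shape (n_trials, n_channels, n_samples)."""
    cov = lambda t: np.mean([x @ x.T / np.trace(x @ x.T) for x in t], axis=0)
    Ca, Cb = cov(trials_a), cov(trials_b)
    # Generalized eigendecomposition of (Ca, Ca + Cb)
    _, vecs = eigh(Ca, Ca + Cb)
    # Keep filters from both ends of the eigenvalue spectrum
    idx = np.concatenate([np.arange(n_pairs), np.arange(-n_pairs, 0)])
    return vecs[:, idx].T  # shape (2 * n_pairs, n_channels)

def log_var_features(W, trials):
    """Log-variance of CSP-filtered trials (the standard CSP feature)."""
    proj = np.einsum('fc,tcs->tfs', W, trials)  # apply spatial filters
    var = proj.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))

# Synthetic 8-channel trials; class B carries extra power on channel 0.
def make_trials(n, boost):
    t = rng.standard_normal((n, 8, 256))
    t[:, 0, :] *= boost
    return t

Xa, Xb = make_trials(40, 1.0), make_trials(40, 3.0)
W = csp_filters(Xa, Xb)
X = np.vstack([log_var_features(W, Xa), log_var_features(W, Xb)])
y = np.array([0] * 40 + [1] * 40)

clf = LinearDiscriminantAnalysis().fit(X, y)
acc = clf.score(X, y)
print(f"training accuracy: {acc:.2f}")
```

The same feature-plus-classifier structure applies whether the two classes are MI states or ASSR-tagged sound directions; only the electrode selection and frequency bands would differ.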