A Memory Augmented Architecture for Continuous Speaker Identification in Meetings
We introduce and analyze a novel approach to the problem of speaker identification in multi-party recorded meetings. Given a speech segment and a set of candidate speaker profiles, we propose a data-driven way to model the distance relations between them, aiming to identify the speaker label corresponding to that segment. To achieve this, we employ a recurrent, memory-based architecture, since this class of neural networks has been shown to yield strong performance on problems requiring relational reasoning. The proposed encoding of distance relations is shown to outperform traditional distance metrics, such as the cosine distance. Further improvements are obtained when the temporal continuity of the audio signal and of the speaker changes is explicitly modeled. We evaluate our method on two different tasks, namely scripted and real-world business meeting scenarios, where we report relative reductions in speaker error rate of 39.28% and 51.84%, respectively, over the baseline.
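As context for the baseline the abstract refers to, a cosine-distance speaker identification scheme assigns a segment to the enrolled profile whose embedding is closest in cosine distance. The following is a minimal sketch under that assumption; the function names, the toy embeddings, and the speaker labels are illustrative only and not part of the paper's method.

```python
import numpy as np

def cosine_distance(a, b):
    # Cosine distance = 1 - cosine similarity between two embedding vectors.
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def identify_speaker(segment_embedding, profiles):
    # profiles: dict mapping a speaker label to an enrollment embedding.
    # Return the label whose profile is nearest to the segment embedding.
    distances = {label: cosine_distance(segment_embedding, emb)
                 for label, emb in profiles.items()}
    return min(distances, key=distances.get)

# Toy example with 3-dimensional embeddings (real speaker embeddings
# such as d-vectors or x-vectors are typically hundreds of dimensions).
profiles = {
    "spk1": np.array([1.0, 0.1, 0.0]),
    "spk2": np.array([0.0, 1.0, 0.2]),
}
segment = np.array([0.9, 0.2, 0.1])
print(identify_speaker(segment, profiles))  # → spk1
```

The paper's contribution replaces this fixed metric with a learned, memory-based encoding of the distance relations, so the decision rule above serves only as the point of comparison.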