Mask Estimator Approaches For Audio Beamforming
Abstract
Beamforming is a family of algorithms that perform a spatial filtering operation, making it possible to map the distribution of sources at a certain distance from the microphones and thereby locate the strongest source. The state-of-the-art methods for acoustic beamforming in multi-channel automatic speech recognition (ASR) are based on a neural mask estimator that predicts the presence of speech and noise, which is in turn used to determine the spatial filter coefficients. These models are trained using a paired corpus of clean and noisy recordings (the teacher model). In this thesis, we attempt to move away from the requirement of supervised clean recordings for training the mask estimator. Instead, masks obtained from signal enhancement and beamforming based on multi-channel linear prediction serve as the required mask estimates. In this way, model training can also be carried out on real recordings of noisy speech rather than only on the simulated recordings used in a typical teacher model. We propose two models in this thesis, both based on unsupervised mask estimation, and several experiments performed in the noisy and reverberant environments of the CHiME-3 corpus and the REVERB challenge corpus highlight the effectiveness of the proposed approaches. Both methods are novel: the first model operates only on real-valued data, while the second operates on complex data, i.e., complex short-time Fourier transform (STFT) features, to obtain the mask estimate.