Wav2Vec2-BERT+LM: Transcribing Speech and Evaluating Models using Huggingface Transformers

What is Wav2Vec2-BERT? Wav2Vec2-BERT is a successor to the popular Wav2Vec2 model, a pre-trained model for Automatic Speech Recognition (ASR). Wav2Vec2-BERT is a 580M-parameter audio model pre-trained on 4.5M hours of unlabeled audio data covering more than 143 languages. Following the basic architecture of Wav2Vec2, various models (XLSR, XLS-R and MMS) were released with pretrained checkpoints, using increased pretraining data and slightly different training objectives. The pretrained Wav2Vec2-BERT model was introduced in the SeamlessM4T paper by Meta in August 2023. [Read More]
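As a minimal sketch of what the full post walks through, a fine-tuned Wav2Vec2-BERT checkpoint can be used for transcription through the Hugging Face Transformers pipeline API. The checkpoint name below is a placeholder, not a specific model from the post; substitute whichever fine-tuned model you want to transcribe with or evaluate.

```python
# Minimal sketch: transcribe a local audio file with a fine-tuned
# Wav2Vec2-BERT checkpoint using the Transformers ASR pipeline.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="your-username/w2v-bert-2.0-finetuned",  # placeholder checkpoint name
)

result = asr("sample.wav")  # path to a local audio file
print(result["text"])       # decoded transcription
```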

Releasing Malayalam Speech Corpus

Originally published in the SMC Blog. SMC announces the release of the Malayalam Speech Corpus (MSC). It is a repository of curated speech samples collected using the MSC web application. Speech samples are included only if they have at least three positive reviews. MSC is a project launched by SMC to crowdsource Malayalam speech samples from any contributor who can read out sentences and record them as speech samples. The MSC web app has provisions for recording voices and reviewing them. [Read More]

Talks on Speech Recognition Research and Malayalam Computing

Sharing the videos of two informal interviews I did during the past few months. In the first video I talk with Hrishikesh Bhaskaran about my involvement with SMC and my projects. This was part of an interview series hosted by the Tinker Hub Foundation. In the following video I talk about speech recognition systems in general and the voice corpus initiative by SMC. This interview is hosted by Mujeeb for the IB Computing YouTube channel. [Read More]