---
library_name: transformers
license: mit
language: es
metrics:
- per
tags:
- audio
- automatic-speech-recognition
- speech
- phonemize
- phoneme
datasets:
- facebook/multilingual_librispeech
model-index:
- name: Wav2Vec2-base Spanish finetuned for phonemes by LMSSC
  results:
  - task:
      type: automatic-speech-recognition
      name: Speech Recognition
    dataset:
      name: Multilingual Librispeech
      type: facebook/multilingual_librispeech
      args: es
    metrics:
    - type: per
      value: 2.94
      name: Test PER on Multilingual Librispeech ES | Trained
    - type: per
      value: 2.66
      name: Val PER on Multilingual Librispeech ES | Trained
---

# Fine-tuned Spanish VoxPopuli v2 wav2vec2-base model for the speech-to-phoneme task in Spanish

Fine-tuned [facebook/wav2vec2-base-es-voxpopuli-v2](https://huggingface.co/facebook/wav2vec2-base-es-voxpopuli-v2) for **Spanish speech-to-phoneme** (without language model) using the train and validation splits of [Multilingual Librispeech](https://huggingface.co/datasets/facebook/multilingual_librispeech).

## Audio sample rate

When using this model, make sure that your speech input is **sampled at 16 kHz**.
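
If your audio is recorded at a different rate, resample it before inference. Below is a minimal sketch using torchaudio (the file name is a placeholder, not part of this model card):

```python
import torchaudio

# Placeholder file name; use your own recording
waveform, sample_rate = torchaudio.load("my_recording.wav")

# Resample to the 16 kHz rate expected by the model
if sample_rate != 16_000:
    waveform = torchaudio.functional.resample(
        waveform, orig_freq=sample_rate, new_freq=16_000
    )
```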

## Output

As this model is specifically trained for a speech-to-phoneme task, the output is a sequence of [IPA-encoded](https://en.wikipedia.org/wiki/International_Phonetic_Alphabet) words, without punctuation.

If you don't read the phonetic alphabet fluently, you can use this excellent [IPA reader website](http://ipa-reader.xyz) to convert the transcript back to synthetic speech, in order to check the quality of the phonetic transcription.

## Training procedure

The model has been fine-tuned on Multilingual Librispeech (Spanish) for 30 epochs on a single Ada 6000 GPU at Cnam/LMSSC, using a DDP strategy and gradient accumulation (256 audio clips per update, corresponding roughly to 25 minutes of speech per update, i.e. about 2k updates per epoch).

- Learning rate schedule: double tri-state schedule (a code sketch is given after this list)
  - Warmup from 1e-5 for 7% of total updates
  - Constant at 1e-4 for 28% of total updates
  - Linear decrease to 1e-6 for 36% of total updates
  - Second warmup boost to 3e-5 for 3% of total updates
  - Constant at 3e-5 for 12% of total updates
  - Linear decrease to 1e-7 for the remaining 14% of updates
- The set of hyperparameters used for training is the same as the one detailed in Annex B and Table 6 of the [wav2vec 2.0 paper](https://arxiv.org/pdf/2006.11477.pdf).
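
As a reading aid, here is a minimal sketch of the double tri-state schedule above as a plain Python function mapping an update index to a learning rate. The phase fractions and endpoint values come from the list above; the linear interpolation inside each ramp is an assumption about how the schedule was implemented:

```python
def double_tristate_lr(step: int, total_steps: int) -> float:
    """Learning rate for a given update, per the schedule above."""
    # (fraction of total updates, lr at phase start, lr at phase end)
    phases = [
        (0.07, 1e-5, 1e-4),  # first warmup
        (0.28, 1e-4, 1e-4),  # first plateau
        (0.36, 1e-4, 1e-6),  # first linear decrease
        (0.03, 1e-6, 3e-5),  # second warmup boost
        (0.12, 3e-5, 3e-5),  # second plateau
        (0.14, 3e-5, 1e-7),  # final linear decrease
    ]
    t = step / total_steps
    start = 0.0
    for frac, lr_from, lr_to in phases:
        if t <= start + frac:
            # linear interpolation inside the current phase (assumed)
            return lr_from + (lr_to - lr_from) * (t - start) / frac
        start += frac
    return 1e-7  # after the last scheduled update
```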

## Usage (using the online Inference API)

Just record your voice with the ⚡ Inference API widget on this page, then click on "Compute". That's all!
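
If you prefer to query the hosted model from code instead of the widget, here is a minimal sketch using huggingface_hub (the audio file name is a placeholder, and this assumes the model is reachable through the serverless Inference API):

```python
from huggingface_hub import InferenceClient

client = InferenceClient(model="Cnam-LMSSC/wav2vec2-spanish-phonemizer")

# Placeholder file name; use your own recording
result = client.automatic_speech_recognition("my_recording.wav")
print(result.text)
```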

## Usage (with the HuggingSound library)

The model can be used directly with the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:

```python
import pandas as pd
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("Cnam-LMSSC/wav2vec2-spanish-phonemizer")
audio_paths = ["./prueba_revision_texto.wav", "./10179_11051_000021.flac"]

# No need for the audio files to be sampled at 16 kHz here:
# they are automatically resampled by HuggingSound

transcriptions = model.transcribe(audio_paths)

# (Optional) Display the results in a table.
# transcriptions is a list of dicts that also contains timestamps and probabilities!
df = pd.DataFrame(transcriptions)
df['Audio file'] = audio_paths
df.set_index('Audio file', inplace=True)
df[['transcription']]
```

**Output**:

| **Audio file** | **Phonetic transcription (IPA)** |
|:----------------------------|:---------------------------------|
| ./prueba_revision_texto.wav | paɾeθia un tiβuɾon kompleto ðe βeɾas ke si asi si aoɾa koxemos a aθɛntwaða este βlak ðoɡ ʝa tendɾemos notiθjas ke embjaɾ a aθɛntwaða nwestɾo βwem patɾon el kaβaʎeɾo |
| ./10179_11051_000021.flac | pestaɲeaðo keðose en donde estaβa apoʝandose apenas en su muleta i kon los oxos klaβaðos en su kompaɲeɾo komo una βiβoɾa lista paɾa aβalanθaɾse |

## Inference script (if you do not want to use the HuggingSound library)

```python
import numpy as np
import torch
import soundfile as sf  # or librosa if you prefer
from transformers import AutoModelForCTC, Wav2Vec2Processor

MODEL_ID = "Cnam-LMSSC/wav2vec2-spanish-phonemizer"

model = AutoModelForCTC.from_pretrained(MODEL_ID)
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)

# Make sure you have a 16 kHz sampled audio file, or resample it!
audio, sample_rate = sf.read('example.wav')

inputs = processor(np.array(audio), sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)

print("Phonetic transcription:", transcription)
```

**Output**:

'esˈtoj ˈmuj konˈtento ðe pɾesenˈtaɾles ˈnwestɾa soluˈsjon ˈpaɾa fonemiˈsaɾ ˈawðjos ˈfasilˈmente | funˈsjona βasˈtante ˈβjen'

## Test results

In the table below, we report the Phoneme Error Rate (PER) of the model on Multilingual Librispeech (using the Spanish config of the dataset):

| Model | Test Set | PER |
| ------------- | ------------- | ------------- |
| Cnam-LMSSC/wav2vec2-spanish-phonemizer | Multilingual Librispeech (Spanish) | **2.94%** |
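
For reference, PER can be computed like a character error rate over predicted and reference phoneme strings. A minimal sketch using jiwer is given below; the strings are placeholders, and the reference phonemization pipeline behind the score above is not reproduced here. Treating each IPA character as a token only approximates a true phoneme error rate when some phonemes span several characters:

```python
from jiwer import cer

# Placeholder reference / hypothesis IPA strings
reference = "paɾeθia un tiβuɾon kompleto"
hypothesis = "paɾeθia un tiβuɾom kompleto"

# Character error rate over the IPA strings, used as a PER proxy
per = cer(reference, hypothesis)
print(f"PER: {per:.2%}")
```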

## Citation

If you use this fine-tuned model in any publication, please cite our work:

```bibtex
@misc{lmssc-wav2vec2-base-phonemizer-spanish_2026,
  author    = { Olivier, Malo },
  title     = { wav2vec2-spanish-phonemizer (Revision 4c60fe7) },
  year      = 2026,
  url       = { https://huggingface.co/Cnam-LMSSC/wav2vec2-spanish-phonemizer },
  doi       = { 10.57967/hf/8136 },
  publisher = { Hugging Face }
}
```