Initialize project; model provided by the ModelHub XC community

Model: Cnam-LMSSC/wav2vec2-spanish-phonemizer
Source: Original Platform
ModelHub XC
2026-05-08 11:35:49 +08:00
commit 27535b64fd
10 changed files with 416 additions and 0 deletions

.gitattributes vendored Normal file

@@ -0,0 +1,35 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text

README.md Normal file

@@ -0,0 +1,149 @@
---
library_name: transformers
license: mit
language: es
metrics:
- per
tags:
- audio
- automatic-speech-recognition
- speech
- phonemize
- phoneme
datasets:
- facebook/multilingual_librispeech
model-index:
- name: Wav2Vec2-base Spanish finetuned for phonemes by LMSSC
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
name: Multilingual Librispeech
type: facebook/multilingual_librispeech
args: es
metrics:
- type: per
value: 2.94
name: Test PER on Multilingual Librispeech ES | Trained
- type: per
value: 2.66
name: Val PER on Multilingual Librispeech ES | Trained
---
# Fine-tuned Spanish Voxpopuli v2 wav2vec2-base model for speech-to-phoneme task in Spanish
Fine-tuned [facebook/wav2vec2-base-es-voxpopuli-v2](https://huggingface.co/facebook/wav2vec2-base-es-voxpopuli-v2) for **Spanish speech-to-phoneme** (without language model) using the train and validation splits of [Multilingual Librispeech](https://huggingface.co/datasets/facebook/multilingual_librispeech).
## Audio samplerate for usage
When using this model, make sure that your speech input is **sampled at 16 kHz**.
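If your recordings use a different sample rate, resample them before inference. A minimal sketch using `librosa` (the file name is a hypothetical placeholder):
```python
import librosa

# Load any audio file and resample it to the 16 kHz the model expects.
# "my_recording.wav" is a hypothetical path used for illustration.
speech, sr = librosa.load("my_recording.wav", sr=16_000)  # mono float32 at 16 kHz
```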
## Output
As this model is specifically trained for a speech-to-phoneme task, the output is a sequence of [IPA-encoded](https://en.wikipedia.org/wiki/International_Phonetic_Alphabet) words, without punctuation.
If you don't read the phonetic alphabet fluently, you can use this excellent [IPA reader website](http://ipa-reader.xyz) to convert the transcript back into synthetic speech and check the quality of the phonetic transcription.
## Training procedure
The model was fine-tuned on Multilingual Librispeech (ES) for 30 epochs on a 1× ADA 6000 GPU at Cnam/LMSSC, using a DDP strategy and gradient accumulation (256 audio clips per update, corresponding roughly to 25 minutes of speech per update, i.e. about 2k updates per epoch).
- Learning rate schedule: double tri-state (sketched in code after this list)
- Warmup from 1e-5 for 7% of total updates
- Constant at 1e-4 for 28% of total updates
- Linear decrease to 1e-6 for 36% of total updates
- Second warmup boost to 3e-5 for 3% of total updates
- Constant at 3e-5 for 12% of total updates
- Linear decrease to 1e-7 for remaining 14% of updates
- The hyperparameters used for training are the same as those detailed in Annex B and Table 6 of the [wav2vec2 paper](https://arxiv.org/pdf/2006.11477.pdf).
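For clarity, here is a minimal sketch of that double tri-state schedule as a function of training progress. It is an illustrative re-implementation of the percentages listed above (assuming each warmup/decay phase targets the rate of the phase that follows), not the original training code:
```python
def double_tristate_lr(step: int, total_steps: int) -> float:
    """Illustrative learning rate for the double tri-state schedule above."""
    f = step / total_steps  # fraction of training completed
    if f < 0.07:  # warmup from 1e-5 towards 1e-4 over the first 7% of updates
        return 1e-5 + (1e-4 - 1e-5) * f / 0.07
    if f < 0.35:  # constant at 1e-4 for 28% of updates
        return 1e-4
    if f < 0.71:  # linear decrease from 1e-4 to 1e-6 over 36% of updates
        return 1e-4 + (1e-6 - 1e-4) * (f - 0.35) / 0.36
    if f < 0.74:  # second warmup boost from 1e-6 to 3e-5 over 3% of updates
        return 1e-6 + (3e-5 - 1e-6) * (f - 0.71) / 0.03
    if f < 0.86:  # constant at 3e-5 for 12% of updates
        return 3e-5
    # linear decrease from 3e-5 to 1e-7 over the remaining 14% of updates
    return 3e-5 + (1e-7 - 3e-5) * (f - 0.86) / 0.14
```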
## Usage (using the online Inference API)
Just record your voice in the ⚡ Inference API widget on this page, then click on "Compute". That's all!
## Usage (with HuggingSound library)
The model can be used directly with the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
import pandas as pd
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("Cnam-LMSSC/wav2vec2-spanish-phonemizer")
audio_paths = ["./prueba_revision_texto.wav", "./10179_11051_000021.flac"]
# The audio files do not need to be sampled at 16 kHz here:
# HuggingSound resamples them automatically.
transcriptions = model.transcribe(audio_paths)
# (Optional) Display the results in a table.
# `transcriptions` is a list of dicts that also contains timestamps and probabilities.
df = pd.DataFrame(transcriptions)
df['Audio file'] = audio_paths
df.set_index('Audio file', inplace=True)
df[['transcription']]
```
**Output** :
| **Audio file** | **Phonetic transcription (IPA)** |
|:---------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------|
| ./prueba_revision_texto.wav | paɾeθia un tiβuɾon kompleto ðe βeɾas ke si asi si aoɾa koxemos a aθɛntwaða este βlak ðoɡ ʝa tendɾemos notiθjas ke embjaɾ a aθɛntwaða nwestɾo βwem patɾon el kaβaʎeɾo |
| ./10179_11051_000021.flac | pestaɲeaðo keðose en donde estaβa apoʝandose apenas en su muleta i kon los oxos klaβaðos en su kompaɲeɾo komo una βiβoɾa lista paɾa aβalanθaɾse |
## Inference script (if you do not want to use the HuggingSound library)
```python
import torch
import soundfile as sf  # or librosa, if you prefer
from transformers import AutoModelForCTC, Wav2Vec2Processor

MODEL_ID = "Cnam-LMSSC/wav2vec2-spanish-phonemizer"

model = AutoModelForCTC.from_pretrained(MODEL_ID)
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)

# Make sure the audio file is sampled at 16 kHz, or resample it first!
speech, sample_rate = sf.read("example.wav")

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
print("Phonetic transcription:", transcription)
```
**Output** :
'esˈtoj ˈmuj konˈtento ðe pɾesenˈtaɾles ˈnwestɾa soluˈsjon ˈpaɾa fonemiˈsaɾ ˈawðjos ˈfasilˈmente | funˈsjona βasˈtante ˈβjen'
## Test Results:
In the table below, we report the Phoneme Error Rate (PER) of the model on Multilingual Librispeech (using the Spanish configuration of the dataset):
| Model | Test Set | PER |
| ------------- | ------------- | ------------- |
| Cnam-LMSSC/wav2vec2-spanish-phonemizer | Multilingual Librispeech (Spanish) | **2.94%** |
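PER is a word-error-rate computed over phoneme tokens rather than words. As an illustration only (not the authors' evaluation code), here is one way to compute it with the `jiwer` library, splitting each transcript into individual IPA symbols (the strings below are made-up examples):
```python
import jiwer

def phoneme_tokens(transcript: str) -> str:
    # Every symbol in this model's vocabulary is a single character,
    # so splitting per character yields one token per phoneme.
    return " ".join(ch for ch in transcript if ch != " ")

reference  = phoneme_tokens("ke embjaɾ")  # -> "k e e m b j a ɾ"
hypothesis = phoneme_tokens("ke embiaɾ")  # -> "k e e m b i a ɾ"
per = jiwer.wer(reference, hypothesis)    # one substitution over 8 tokens
print(f"PER = {per:.2%}")                 # PER = 12.50%
```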
## Citation
If you use this fine-tuned model for any publication, please use the following BibTeX entry to cite our work:
```bibtex
@misc {lmssc-wav2vec2-base-phonemizer-spanish_2026,
author = { Olivier, Malo },
title = { wav2vec2-spanish-phonemizer (Revision 4c60fe7) },
year = 2026,
url = { https://huggingface.co/Cnam-LMSSC/wav2vec2-spanish-phonemizer },
doi = { 10.57967/hf/8136 },
publisher = { Hugging Face }
}
```

added_tokens.json Normal file

@@ -0,0 +1,4 @@
{
"</s>": 39,
"<s>": 38
}

config.json Normal file

@@ -0,0 +1,117 @@
{
"activation_dropout": 0.0,
"adapter_attn_dim": null,
"adapter_kernel_size": 3,
"adapter_stride": 2,
"add_adapter": false,
"apply_spec_augment": true,
"architectures": [
"Wav2Vec2ForCTC"
],
"attention_dropout": 0.1,
"bos_token_id": 1,
"classifier_proj_size": 256,
"codevector_dim": 256,
"contrastive_logits_temperature": 0.1,
"conv_bias": false,
"conv_dim": [
512,
512,
512,
512,
512,
512,
512
],
"conv_kernel": [
10,
3,
3,
3,
3,
2,
2
],
"conv_stride": [
5,
2,
2,
2,
2,
2,
2
],
"ctc_loss_reduction": "sum",
"ctc_zero_infinity": false,
"diversity_loss_weight": 0.1,
"do_stable_layer_norm": false,
"dtype": "float32",
"eos_token_id": 2,
"feat_extract_activation": "gelu",
"feat_extract_norm": "group",
"feat_proj_dropout": 0.1,
"feat_quantizer_dropout": 0.0,
"final_dropout": 0.0,
"freeze_feat_extract_train": true,
"hidden_act": "gelu",
"hidden_dropout": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-05,
"layerdrop": 0.0,
"mask_channel_length": 10,
"mask_channel_min_space": 1,
"mask_channel_other": 0.0,
"mask_channel_prob": 0.0,
"mask_channel_selection": "static",
"mask_feature_length": 10,
"mask_feature_min_masks": 0,
"mask_feature_prob": 0.0,
"mask_time_length": 10,
"mask_time_min_masks": 2,
"mask_time_min_space": 1,
"mask_time_other": 0.0,
"mask_time_prob": 0.05,
"mask_time_selection": "static",
"model_type": "wav2vec2",
"no_mask_channel_overlap": false,
"no_mask_time_overlap": false,
"num_adapter_layers": 3,
"num_attention_heads": 12,
"num_codevector_groups": 2,
"num_codevectors_per_group": 320,
"num_conv_pos_embedding_groups": 16,
"num_conv_pos_embeddings": 128,
"num_feat_extract_layers": 7,
"num_hidden_layers": 12,
"num_negatives": 100,
"output_hidden_size": 768,
"pad_token_id": 37,
"proj_codevector_dim": 256,
"tdnn_dilation": [
1,
2,
3,
1,
1
],
"tdnn_dim": [
512,
512,
512,
512,
1500
],
"tdnn_kernel": [
5,
3,
3,
1,
1
],
"transformers_version": "4.57.3",
"use_weighted_layer_sum": false,
"vocab_size": 40,
"xvector_output_dim": 512
}
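This configuration can also be inspected programmatically; a minimal sketch using `transformers` (field names taken from the file above):
```python
from transformers import Wav2Vec2Config

# Load the configuration above directly from the Hub and inspect a few fields.
config = Wav2Vec2Config.from_pretrained("Cnam-LMSSC/wav2vec2-spanish-phonemizer")
print(config.vocab_size)         # 40
print(config.num_hidden_layers)  # 12
print(config.hidden_size)        # 768
```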

model.safetensors Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7908db3be13eb7c0012a9479dea3746d25152c6ea24d041e07b1a28f6e20fa2f
size 377635736

preprocessor_config.json Normal file

@@ -0,0 +1,10 @@
{
"do_normalize": true,
"feature_extractor_type": "Wav2Vec2FeatureExtractor",
"feature_size": 1,
"padding_side": "right",
"padding_value": 0.0,
"processor_class": "Wav2Vec2Processor",
"return_attention_mask": false,
"sampling_rate": 16000
}
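For illustration, this preprocessor configuration drives the feature extractor that turns raw waveforms into normalized model inputs; a minimal sketch:
```python
import numpy as np
from transformers import Wav2Vec2FeatureExtractor

extractor = Wav2Vec2FeatureExtractor.from_pretrained("Cnam-LMSSC/wav2vec2-spanish-phonemizer")
dummy = np.zeros(16_000, dtype=np.float32)  # one second of silence at 16 kHz
inputs = extractor(dummy, sampling_rate=16_000, return_tensors="pt")
print(inputs.input_values.shape)  # torch.Size([1, 16000]): normalized waveform
```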

pytorch_model.bin Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:40e48162bf60d7443258212f4c2e7fec4363f1f879b8664a369a57e156e30701
size 377681271

special_tokens_map.json Normal file

@@ -0,0 +1,6 @@
{
"bos_token": "<s>",
"eos_token": "</s>",
"pad_token": "[PAD]",
"unk_token": "[UNK]"
}

tokenizer_config.json Normal file

@@ -0,0 +1,49 @@
{
"added_tokens_decoder": {
"36": {
"content": "[UNK]",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": false
},
"37": {
"content": "[PAD]",
"lstrip": true,
"normalized": false,
"rstrip": true,
"single_word": false,
"special": false
},
"38": {
"content": "<s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"39": {
"content": "</s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
}
},
"bos_token": "<s>",
"clean_up_tokenization_spaces": false,
"do_lower_case": false,
"eos_token": "</s>",
"extra_special_tokens": {},
"model_max_length": 1000000000000000019884624838656,
"pad_token": "[PAD]",
"processor_class": "Wav2Vec2Processor",
"replace_word_delimiter_char": " ",
"target_lang": null,
"tokenizer_class": "Wav2Vec2CTCTokenizer",
"unk_token": "[UNK]",
"word_delimiter_token": "|"
}

vocab.json Normal file

@@ -0,0 +1,40 @@
{
"[PAD]": 37,
"[UNK]": 36,
"a": 1,
"b": 2,
"d": 3,
"e": 4,
"f": 5,
"i": 6,
"j": 7,
"k": 8,
"l": 9,
"m": 10,
"n": 11,
"o": 12,
"p": 13,
"r": 14,
"s": 15,
"t": 16,
"u": 17,
"w": 18,
"x": 19,
"|": 0,
"ð": 20,
"ŋ": 21,
"ɛ": 22,
"ɡ": 23,
"ɣ": 24,
"ɪ": 25,
"ɲ": 26,
"ɾ": 27,
"ʃ": 28,
"ʊ": 29,
"ʎ": 30,
"ʒ": 31,
"ʝ": 32,
"ː": 33,
"β": 34,
"θ": 35
}
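The vocabulary above maps each IPA symbol to the integer id used by the CTC tokenizer. For illustration, a minimal sketch decoding a few ids (taken from the file above) back to phonemes:
```python
from transformers import Wav2Vec2CTCTokenizer

tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("Cnam-LMSSC/wav2vec2-spanish-phonemizer")
# Ids from vocab.json above: 13 -> "p", 4 -> "e", 27 -> "ɾ", 12 -> "o"
print(tokenizer.convert_ids_to_tokens([13, 4, 27, 12]))  # ['p', 'e', 'ɾ', 'o']
```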