Initialize project; model provided by the ModelHub XC community
Model: jfarray/Model_paraphrase-multilingual-mpnet-base-v2_1_Epochs
Source: Original Platform
.gitattributes (vendored, new file, 29 lines)
@@ -0,0 +1,29 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text
pytorch_model.bin filter=lfs diff=lfs merge=lfs -text
1_Pooling/config.json (new file, 7 lines)
@@ -0,0 +1,7 @@
{
  "word_embedding_dimension": 768,
  "pooling_mode_cls_token": false,
  "pooling_mode_mean_tokens": true,
  "pooling_mode_max_tokens": false,
  "pooling_mode_mean_sqrt_len_tokens": false
}
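This configuration enables only mean pooling over the token embeddings. As a minimal sketch, the same module can be constructed with the sentence-transformers models API (argument names as in sentence-transformers 2.x, the version recorded in config_sentence_transformers.json below):

```python
from sentence_transformers import models

# Mean pooling over 768-dim token embeddings, mirroring 1_Pooling/config.json.
pooling = models.Pooling(
    word_embedding_dimension=768,
    pooling_mode_cls_token=False,
    pooling_mode_mean_tokens=True,
    pooling_mode_max_tokens=False,
    pooling_mode_mean_sqrt_len_tokens=False,
)
```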
README.md (new file, 125 lines)
@@ -0,0 +1,125 @@
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---

# {MODEL_NAME}

This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.

<!--- Describe your model here -->

## Usage (Sentence-Transformers)

Using this model is straightforward once you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
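The returned vectors can be compared directly for semantic similarity. A brief sketch using the pytorch_cos_sim helper from sentence_transformers.util (the '{MODEL_NAME}' placeholder is kept as in this README):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(["This is an example sentence", "Each sentence is converted"])

# Cosine similarity between the two 768-dim sentence embeddings.
score = util.pytorch_cos_sim(embeddings[0], embeddings[1])
print(f"Cosine similarity: {score.item():.4f}")
```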

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch


# Mean Pooling - take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```
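Continuing from the snippet above: if you plan to compare these embeddings with plain dot products, they can be L2-normalized first (standard PyTorch, not specific to this model):

```python
import torch.nn.functional as F

# After L2 normalization, a dot product equals cosine similarity.
normalized = F.normalize(sentence_embeddings, p=2, dim=1)
print(normalized[0] @ normalized[1])
```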

## Evaluation Results

<!--- Describe how your model was evaluated -->

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})

## Training

The model was trained with the following parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`

Parameters of the fit() method:
```
{
    "epochs": 1,
    "evaluation_steps": 1,
    "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'transformers.optimization.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 2,
    "weight_decay": 0.01
}
```
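Read together, these dumps imply a training call along the following lines. This is a hedged reconstruction, not the author's script: the training pairs and dev data are hypothetical placeholders, and the base checkpoint name is taken from _name_or_path in config.json below.

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

# Hypothetical training pairs with similarity labels in [0, 1].
# A DataLoader of length 11 at batch size 15 implies roughly 150-165 pairs.
train_examples = [
    InputExample(texts=["sentence a", "sentence b"], label=0.8),
    # ... more pairs ...
]
# Hypothetical dev data for the similarity evaluator.
dev_sent1, dev_sent2, dev_scores = ["a sample"], ["another sample"], [0.5]

model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-mpnet-base-v2")
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=15)
train_loss = losses.CosineSimilarityLoss(model)
evaluator = EmbeddingSimilarityEvaluator(dev_sent1, dev_sent2, dev_scores)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    evaluator=evaluator,
    epochs=1,
    evaluation_steps=1,
    warmup_steps=2,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
    scheduler="WarmupLinear",
)
```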

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

<!--- Describe where people can find more information -->
config.json (new file, 29 lines)
@@ -0,0 +1,29 @@
{
  "_name_or_path": "/root/.cache/torch/sentence_transformers/sentence-transformers_paraphrase-multilingual-mpnet-base-v2/",
  "architectures": [
    "XLMRobertaModel"
  ],
  "attention_probs_dropout_prob": 0.1,
  "bos_token_id": 0,
  "classifier_dropout": null,
  "eos_token_id": 2,
  "gradient_checkpointing": false,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-05,
  "max_position_embeddings": 514,
  "model_type": "xlm-roberta",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "output_past": true,
  "pad_token_id": 1,
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
  "transformers_version": "4.16.2",
  "type_vocab_size": 1,
  "use_cache": true,
  "vocab_size": 250002
}
config_sentence_transformers.json (new file, 7 lines)
@@ -0,0 +1,7 @@
{
  "__version__": {
    "sentence_transformers": "2.0.0",
    "transformers": "4.7.0",
    "pytorch": "1.9.0+cu102"
  }
}
eval/similarity_evaluation_results.csv (new file, 13 lines)
@@ -0,0 +1,13 @@
epoch,steps,cosine_pearson,cosine_spearman,euclidean_pearson,euclidean_spearman,manhattan_pearson,manhattan_spearman,dot_pearson,dot_spearman
0,1,0.24428484492337774,0.1856406898421688,0.2726824721314313,0.17419708567381592,0.20191195613958324,0.18182615511938452,-0.10491629431153585,-0.05976104399028721
0,2,0.00991846103061972,-0.02924476620801289,0.055113311084578745,0.020344185188182883,-0.05181919251571324,-0.11443604168352871,-0.20918413072312658,-0.20471336345609026
0,3,-0.19133977745814687,-0.23395812966410312,-0.1856055404353524,-0.20979940975313596,-0.24658701621299942,-0.23777266438688743,-0.21513752432557787,-0.31279184726831183
0,4,-0.24274072096547497,-0.35729475236746183,-0.25872924721883545,-0.3661953333872919,-0.30538405938267577,-0.40306916904087337,-0.2054266670808891,-0.2937191736543904
0,5,-0.2563185872329178,-0.47173079405099055,-0.28779967197621187,-0.4539296320113305,-0.3280240424319306,-0.4806313750708206,-0.19666857622618755,-0.20471336345609026
0,6,-0.26296224516835887,-0.4933464908134349,-0.30283612369964674,-0.4259563773775791,-0.3397828930846645,-0.4984325371104806,-0.1916226842622694,-0.27210347689194603
0,7,-0.26454599320825356,-0.4068837037636577,-0.30621843565996715,-0.4157842847834876,-0.34418735335978196,-0.4246848658033177,-0.17494633948767138,-0.3000767315256975
0,8,-0.2603208222351698,-0.3789104491299062,-0.3035021360723016,-0.3331360324564947,-0.3419157730097211,-0.4195988195062719,-0.16297896476114715,-0.2708319653176846
0,9,-0.2585585288062718,-0.3331360324564947,-0.30151014158894635,-0.32677847458518755,-0.34241836134017023,-0.40306916904087337,-0.1453743538475858,-0.24285871068393314
0,10,-0.25675462725255527,-0.319149405139619,-0.29929263031051323,-0.2746465000404689,-0.3439065220633915,-0.40306916904087337,-0.11985896733483366,-0.18691220141643022
0,11,-0.2568531796813742,-0.30134824309995895,-0.2991856239826185,-0.28609010420882175,-0.34494696558683446,-0.38526800700121333,-0.11293098364248208,-0.19454127086199882
0,-1,-0.2568531796813742,-0.30134824309995895,-0.2991856239826185,-0.28609010420882175,-0.34494696558683446,-0.38526800700121333,-0.11293098364248208,-0.19454127086199882
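These are the evaluator's correlation scores after each of the 11 training steps (steps = -1 is the end-of-epoch run). A quick, hedged way to inspect them, assuming pandas is available:

```python
import pandas as pd

# Per-step similarity correlations logged during training.
df = pd.read_csv("eval/similarity_evaluation_results.csv")
print(df[["epoch", "steps", "cosine_spearman"]])
print("Best cosine_spearman:", df["cosine_spearman"].max())
```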
modules.json (new file, 14 lines)
@@ -0,0 +1,14 @@
[
  {
    "idx": 0,
    "name": "0",
    "path": "",
    "type": "sentence_transformers.models.Transformer"
  },
  {
    "idx": 1,
    "name": "1",
    "path": "1_Pooling",
    "type": "sentence_transformers.models.Pooling"
  }
]
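This file tells sentence-transformers to chain a Transformer module with the Pooling module stored under 1_Pooling/. As a sketch, an equivalent two-module pipeline can be assembled by hand (the checkpoint name is an assumption based on _name_or_path in config.json):

```python
from sentence_transformers import SentenceTransformer, models

# Module 0: the XLM-RoBERTa encoder (modules.json, idx 0).
word_embedding_model = models.Transformer(
    "sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
    max_seq_length=128,
)
# Module 1: mean pooling (modules.json, idx 1; config in 1_Pooling/).
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),  # 768
    pooling_mode_mean_tokens=True,
)
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
```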
pytorch_model.bin (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ed8e9b9e918767fcfef496711be850edbe6a884a9f229ad6a3369a3c69638ea5
size 1112255985
sentence_bert_config.json (new file, 4 lines)
@@ -0,0 +1,4 @@
{
  "max_seq_length": 128,
  "do_lower_case": false
}
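The 128-token cap means longer inputs are truncated during encoding. On a loaded model it is exposed as the max_seq_length attribute (a small sketch; '{MODEL_NAME}' placeholder as in the README):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('{MODEL_NAME}')
print(model.max_seq_length)  # 128, from sentence_bert_config.json
```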
sentencepiece.bpe.model (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cfc8146abe2a0488e9e2a0c56de7952f7c11ab059eca145a0a727afce0db2865
size 5069051
similarity_evaluation_sts-test_results.csv (new file, 2 lines)
@@ -0,0 +1,2 @@
epoch,steps,cosine_pearson,cosine_spearman,euclidean_pearson,euclidean_spearman,manhattan_pearson,manhattan_spearman,dot_pearson,dot_spearman
-1,-1,0.7582899517647347,0.3071817111321424,0.7409359563757848,0.2805960221191717,0.7477312895052887,0.2993652585555905,0.7297027377479415,0.33232111266213366
|
1
special_tokens_map.json
Normal file
1
special_tokens_map.json
Normal file
@@ -0,0 +1 @@
|
|||||||
|
{"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>", "sep_token": "</s>", "pad_token": "<pad>", "cls_token": "<s>", "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": false}}
|
||||||
tokenizer.json (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3a3313815c3d2e1b78b5182b09e66e6cd4cdd54df67a35c4a318c23d461821a4
size 17082913
tokenizer_config.json (new file, 1 line)
@@ -0,0 +1 @@
{"bos_token": "<s>", "eos_token": "</s>", "sep_token": "</s>", "cls_token": "<s>", "unk_token": "<unk>", "pad_token": "<pad>", "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "model_max_length": 512, "special_tokens_map_file": null, "name_or_path": "/root/.cache/torch/sentence_transformers/sentence-transformers_paraphrase-multilingual-mpnet-base-v2/", "tokenizer_class": "XLMRobertaTokenizer"}