Initialize project; model provided by the ModelHub XC community
Model: Muennighoff/SBERT-base-nli-v2 · Source: Original Platform
.gitattributes (vendored) · Normal file · 27 lines
@@ -0,0 +1,27 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
1_Pooling/config.json · Normal file · 7 lines
@@ -0,0 +1,7 @@
{
  "word_embedding_dimension": 768,
  "pooling_mode_cls_token": false,
  "pooling_mode_mean_tokens": true,
  "pooling_mode_max_tokens": false,
  "pooling_mode_mean_sqrt_len_tokens": false
}
README.md · Normal file · 141 lines
@@ -0,0 +1,141 @@
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# SBERT-base-nli-v2

This model is used in "SGPT: GPT Sentence Embeddings for Semantic Search" and "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning".

## Usage

For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt

## Evaluation Results

For evaluation results, refer to our paper: https://arxiv.org/abs/2202.08904

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Muennighoff/SBERT-base-nli-v2)

## Usage (Sentence-Transformers)

Using this model is straightforward once you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```
Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('Muennighoff/SBERT-base-nli-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
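Since the card targets sentence similarity, the embeddings are typically compared afterwards. The original card stops at `print(embeddings)`; the following is a minimal follow-on sketch using the library's `util.cos_sim` helper:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('Muennighoff/SBERT-base-nli-v2')
embeddings = model.encode(["This is an example sentence", "Each sentence is converted"])

# Cosine similarity between the two sentence embeddings
score = util.cos_sim(embeddings[0], embeddings[1])
print(score.item())
```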
## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch


# Mean pooling: take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Muennighoff/SBERT-base-nli-v2')
model = AutoModel.from_pretrained('Muennighoff/SBERT-base-nli-v2')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```
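The pooled embeddings above are not length-normalized, so dot products and cosine similarities differ. If you want to compare them with plain matrix products, you can L2-normalize first; a short sketch (an addition to the card, not part of it) that continues from the code above:

```python
import torch.nn.functional as F

# L2-normalize so that a dot product equals cosine similarity
normalized = F.normalize(sentence_embeddings, p=2, dim=1)
cosine_scores = normalized @ normalized.T  # (2, 2) similarity matrix
print(cosine_scores)
```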
## Training

The model was trained with the parameters:

**DataLoader**:

`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 8807 with parameters:
```
{'batch_size': 64}
```

**Loss**:

`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```

Parameters of the fit() method:
```
{
    "epochs": 1,
    "evaluation_steps": 880,
    "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'transformers.optimization.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 881,
    "weight_decay": 0.01
}
```
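These parameters map directly onto the sentence-transformers 2.x training API. The sketch below illustrates that mapping; the `train_examples` list is a hypothetical stand-in for the real NLI pairs, and this is not the author's actual training script:

```python
from sentence_transformers import SentenceTransformer, InputExample, losses, util
from sentence_transformers.datasets import NoDuplicatesDataLoader

# Base checkpoint, per "_name_or_path" in config.json
model = SentenceTransformer('bert-base-uncased')

# Hypothetical stand-in for the NLI training pairs; NoDuplicatesDataLoader
# needs at least batch_size unique texts to assemble one batch.
train_examples = [
    InputExample(texts=["A man plays a guitar.", "Someone plays an instrument."]),
    # ... full training set goes here
]

train_dataloader = NoDuplicatesDataLoader(train_examples, batch_size=64)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=881,
    scheduler='WarmupLinear',
    optimizer_params={'lr': 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```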
## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
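The same two-module stack can be assembled by hand, which is useful if you want to tweak a setting such as `max_seq_length`. A sketch of that assembly (not part of the original card):

```python
from sentence_transformers import SentenceTransformer, models

# Module 0: BERT encoder that truncates inputs at 75 tokens
word_embedding_model = models.Transformer('bert-base-uncased', max_seq_length=75)

# Module 1: mean pooling over token embeddings, as in 1_Pooling/config.json
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),  # 768
    pooling_mode_mean_tokens=True,
)

model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
print(model)  # should mirror the architecture printed above
```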
## Citing & Authors

```bibtex
@article{muennighoff2022sgpt,
  title={SGPT: GPT Sentence Embeddings for Semantic Search},
  author={Muennighoff, Niklas},
  journal={arXiv preprint arXiv:2202.08904},
  year={2022}
}
```
config.json · Normal file · 26 lines
@@ -0,0 +1,26 @@
{
  "_name_or_path": "bert-base-uncased",
  "architectures": [
    "BertModel"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "gradient_checkpointing": false,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
  "transformers_version": "4.12.3",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 30522
}
config_sentence_transformers.json · Normal file · 7 lines
@@ -0,0 +1,7 @@
{
  "__version__": {
    "sentence_transformers": "2.1.0",
    "transformers": "4.12.3",
    "pytorch": "1.10.0+cu113"
  }
}
eval/similarity_evaluation_sts-dev_results.csv · Normal file · 12 lines
@@ -0,0 +1,12 @@
epoch,steps,cosine_pearson,cosine_spearman,euclidean_pearson,euclidean_spearman,manhattan_pearson,manhattan_spearman,dot_pearson,dot_spearman
0,880,0.8402229328824593,0.8479467809066611,0.8377041384307301,0.8386509773385028,0.8378590570672684,0.8387485930642314,0.7936930445866539,0.7883115493602922
0,1760,0.8428672519635304,0.8515722638356917,0.835594254785157,0.838648438190843,0.8355923366615887,0.8386451766227558,0.7996520160554057,0.7965590603900657
0,2640,0.8382482909241696,0.8455151084100325,0.8309445337370904,0.8343986078558546,0.8312975926771572,0.8349184142644006,0.7956012409926939,0.7907794556809262
0,3520,0.8385906990432744,0.8446911950300531,0.8285807356962958,0.8315702340178425,0.8288299273100047,0.8316724049068571,0.8004503869606524,0.7964016688223735
0,4400,0.8421381036184302,0.847939130409553,0.829537733253374,0.8340129438886361,0.8296438233766957,0.8338376074687022,0.8054344844722731,0.801972300780605
0,5280,0.8385769364841424,0.8442582018486776,0.8250872762548562,0.828924307661778,0.8252835182550359,0.8289756539354802,0.8088539669381685,0.805066411456809
0,6160,0.8424114299161032,0.8481587721561612,0.8306096010920552,0.8338888299239802,0.8309714049583965,0.8341896290639493,0.8159088913031215,0.8124107723971171
0,7040,0.8426682543330776,0.8481541493771704,0.8297680812398768,0.8342678515378603,0.8299587836731918,0.8343830473468955,0.810857517375999,0.8070888310019685
0,7920,0.8447951714867032,0.8502920334148244,0.8308108735914013,0.8349054907679374,0.8310803552367414,0.8351996938179452,0.8159572343328244,0.8121456707906802
0,8800,0.8441476713808787,0.8496473924005559,0.8296510703171643,0.833935377998808,0.8299051522472691,0.8340472446240512,0.8171294745055375,0.8131505611638811
0,-1,0.8441510107784661,0.849658700237541,0.8296538245100681,0.833924778347365,0.8299078097152924,0.8340555328859988,0.8171338278988209,0.8131456536871847
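Each row logs one mid-training evaluation on the STS dev set; a `steps` value of -1 marks the end of training. The best cosine Spearman (about 0.8516) occurs at step 1760. A hypothetical helper, not part of the repo, to pick that checkpoint programmatically:

```python
import csv

with open('eval/similarity_evaluation_sts-dev_results.csv') as f:
    rows = list(csv.DictReader(f))

# Evaluation step with the highest cosine Spearman correlation
best = max(rows, key=lambda r: float(r['cosine_spearman']))
print(best['steps'], best['cosine_spearman'])  # 1760 0.8515722638356917
```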
modules.json · Normal file · 14 lines
@@ -0,0 +1,14 @@
[
  {
    "idx": 0,
    "name": "0",
    "path": "",
    "type": "sentence_transformers.models.Transformer"
  },
  {
    "idx": 1,
    "name": "1",
    "path": "1_Pooling",
    "type": "sentence_transformers.models.Pooling"
  }
]
pytorch_model.bin · Normal file · 3 lines
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f6706ecf66067083e415aa95cf9709f252c1d8f8b6e8bd495ab4344c4c75b03e
size 438010289
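This file is a Git LFS pointer rather than the weights themselves; the actual 438 MB binary is fetched by LFS on checkout. Once downloaded, it can be verified against the `oid` above; a sketch assuming pytorch_model.bin sits in the current directory:

```python
import hashlib

# Stream in 1 MB chunks so the 438 MB file never fully loads into memory
sha256 = hashlib.sha256()
with open('pytorch_model.bin', 'rb') as f:
    for chunk in iter(lambda: f.read(1 << 20), b''):
        sha256.update(chunk)

# Should match the oid recorded in the LFS pointer
print(sha256.hexdigest() == 'f6706ecf66067083e415aa95cf9709f252c1d8f8b6e8bd495ab4344c4c75b03e')
```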
sentence_bert_config.json · Normal file · 4 lines
@@ -0,0 +1,4 @@
{
  "max_seq_length": 75,
  "do_lower_case": false
}
similarity_evaluation_sts-test_results.csv · Normal file · 2 lines
@@ -0,0 +1,2 @@
epoch,steps,cosine_pearson,cosine_spearman,euclidean_pearson,euclidean_spearman,manhattan_pearson,manhattan_spearman,dot_pearson,dot_spearman
-1,-1,0.8250258959167259,0.8387561364496815,0.8248033699667212,0.8283308132069686,0.8241175244348278,0.8282283059807736,0.7497309835582678,0.7283381222424192
special_tokens_map.json · Normal file · 1 line
@@ -0,0 +1 @@
{"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
tokenizer.json · Normal file · 1 line
File diff suppressed because one or more lines are too long
tokenizer_config.json · Normal file · 1 line
@@ -0,0 +1 @@
{"do_lower_case": true, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "model_max_length": 512, "special_tokens_map_file": null, "name_or_path": "bert-base-uncased", "tokenizer_class": "BertTokenizer"}