Initialize the project; model provided by the ModelHub XC community
Model: Ramikan-BR/Qwen2-0.5B-v5 (Source: Original Platform)
.gitattributes (vendored, new file, 36 lines)
@@ -0,0 +1,36 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
unsloth.F16.gguf filter=lfs diff=lfs merge=lfs -text
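For context, each rule above marks a file pattern whose contents are stored through Git LFS, which is why the large weight files later in this commit show up as small pointer files. A minimal sketch of listing the LFS-tracked patterns by parsing this file directly (plain text parsing, not the `git` CLI; the path is assumed to be the repo root):

```python
# List the file patterns that .gitattributes routes through Git LFS.
def lfs_patterns(path=".gitattributes"):
    patterns = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.split()
            # An LFS rule looks like: "<pattern> filter=lfs diff=lfs merge=lfs -text"
            if len(parts) > 1 and "filter=lfs" in parts[1:]:
                patterns.append(parts[0])
    return patterns

if __name__ == "__main__":
    print(lfs_patterns())  # e.g. ['*.7z', '*.arrow', ..., 'unsloth.F16.gguf']
```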
README.md (new file, 68 lines)
@@ -0,0 +1,68 @@
---
base_model: unsloth/qwen2-0.5b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
---

## For the first time, after the fifth refinement with the accumulated LoRA, the AI managed to give me Python code to train an AI offline. We are making progress with this model, Qwen2-0.5B, which is so small and yet so powerful that it looks set to far surpass the TinyLlama 1.1B I had been training for Python code generation. The striking detail is that TinyLlama is more than twice the size of this model, which has only 0.5B parameters!

Setting `pad_token_id` to `eos_token_id`:151643 for open-end generation.
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
I need train a AI offline on my computer, give me a code.

### Input:


### Output:
The task is to write a Python program that trains an AI model on a specific dataset. The program should take the dataset as input and use the training data to train the model. The model should then be tested on a separate dataset to evaluate its performance. The program should output the accuracy of the model on the test dataset.

To accomplish this, you will need to follow these steps:
1. Import the necessary libraries for data manipulation and training.
2. Load the dataset into memory.
3. Split the dataset into training and testing sets.
4. Train the model on the training set.
5. Evaluate the model's performance on the testing set.
6. Calculate the accuracy of the model on the testing set.

Here is a sample code that demonstrates these steps:
```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn import svm

# Load the dataset
data = pd.read_csv('your_dataset.csv')

# Split the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(data.drop('target', axis=1), data['target'], test_size=0.2, random_state=42)

# Train the model
model = svm.SVC(kernel='linear', C=1.0)
model.fit(X_train, y_train)

# Evaluate the model's performance on the testing set
accuracy = model.score(X_test, y_test)
print('Accuracy of the model on the testing set:', accuracy)
```

In this code, we first load the dataset from the CSV file. Then, we split the dataset into training and testing sets. We train the model using the `train_test_split` function from scikit-learn, with a 20% split for the training set and 80% split for the testing set. We then evaluate the model's performance on the testing set and print the accuracy.<|endoftext|>
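The transcript above uses the Alpaca-style Instruction/Input/Output prompt, and the `pad_token_id` notice is emitted by `generate`. A minimal sketch of reproducing this kind of run with the plain transformers API (the repo id `Ramikan-BR/Qwen2-0.5B-v5` is taken from this commit; the decoding settings are illustrative assumptions, not the author's recipe):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Ramikan-BR/Qwen2-0.5B-v5"  # this repository
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.float16)

# Alpaca-style prompt, laid out exactly as in the transcript above.
prompt = (
    "Below is an instruction that describes a task, paired with an input that provides "
    "further context. Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n"
    "I need train a AI offline on my computer, give me a code.\n\n"
    "### Input:\n\n\n"
    "### Output:\n"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=512,                   # illustrative; the card shows no settings
    pad_token_id=tokenizer.eos_token_id,  # mirrors the pad-to-eos behaviour noted above
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
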
# Uploaded model

- **Developed by:** Ramikan-BR
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2-0.5b-bnb-4bit

This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
added_tokens.json (new file, 6 lines)
@@ -0,0 +1,6 @@
{
  "<|PAD_TOKEN|>": 151646,
  "<|endoftext|>": 151643,
  "<|im_end|>": 151645,
  "<|im_start|>": 151644
}
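These entries extend the base vocabulary with the ChatML markers and the dedicated padding token. A quick sanity-check sketch (assuming the standard transformers AutoTokenizer API and that this repo is reachable as `Ramikan-BR/Qwen2-0.5B-v5` or as a local path):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Ramikan-BR/Qwen2-0.5B-v5")

# The printed IDs should line up with added_tokens.json above.
for token in ["<|endoftext|>", "<|im_start|>", "<|im_end|>", "<|PAD_TOKEN|>"]:
    print(token, tokenizer.convert_tokens_to_ids(token))
```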
config.json (new file, 31 lines)
@@ -0,0 +1,31 @@
{
  "_name_or_path": "unsloth/qwen2-0.5b-bnb-4bit",
  "architectures": [
    "Qwen2ForCausalLM"
  ],
  "attention_dropout": 0.0,
  "bos_token_id": 151643,
  "eos_token_id": 151643,
  "hidden_act": "silu",
  "hidden_size": 896,
  "initializer_range": 0.02,
  "intermediate_size": 4864,
  "max_position_embeddings": 131072,
  "max_window_layers": 24,
  "model_type": "qwen2",
  "num_attention_heads": 14,
  "num_hidden_layers": 24,
  "num_key_value_heads": 2,
  "pad_token_id": 151646,
  "rms_norm_eps": 1e-06,
  "rope_scaling": null,
  "rope_theta": 1000000.0,
  "sliding_window": 131072,
  "tie_word_embeddings": true,
  "torch_dtype": "float16",
  "transformers_version": "4.42.4",
  "unsloth_version": "2024.7",
  "use_cache": true,
  "use_sliding_window": false,
  "vocab_size": 151936
}
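A couple of quantities implied by this config: 896 hidden units over 14 attention heads gives a head dimension of 64, and 14 query heads over 2 key/value heads means grouped-query attention with 7 query heads per KV head. A minimal sketch that reads these values back (assuming the config.json above is saved locally):

```python
import json

# Read the architecture config shown above.
with open("config.json") as f:
    cfg = json.load(f)

head_dim = cfg["hidden_size"] // cfg["num_attention_heads"]           # 896 / 14 = 64
kv_groups = cfg["num_attention_heads"] // cfg["num_key_value_heads"]  # 14 / 2 = 7

print(f"head_dim={head_dim}, query heads per KV head={kv_groups}")
print(f"layers={cfg['num_hidden_layers']}, vocab_size={cfg['vocab_size']}")
```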
generation_config.json (new file, 6 lines)
@@ -0,0 +1,6 @@
{
  "bos_token_id": 151643,
  "eos_token_id": 151643,
  "max_new_tokens": 2048,
  "transformers_version": "4.42.4"
}
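These are the defaults `model.generate` falls back to when no arguments are supplied. A minimal sketch of mirroring and overriding them with the transformers GenerationConfig class (the override value is illustrative, not a recommendation from the author):

```python
from transformers import GenerationConfig

# Mirror the defaults shipped in generation_config.json above.
gen_cfg = GenerationConfig(
    bos_token_id=151643,
    eos_token_id=151643,
    max_new_tokens=2048,
)

# Illustrative override for shorter completions; pass gen_cfg to model.generate(...).
gen_cfg.max_new_tokens = 256
print(gen_cfg)
```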
merges.txt (new file, 151388 lines)
File diff suppressed because it is too large
model.safetensors (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ec1ddea90065dd63c75ca42389e07d8f72cdb01ede4c1113a70900919f3de910
size 988097536
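Note that what the diff shows is not the weights but a Git LFS pointer: the spec version, the SHA-256 of the real blob, and its size (about 988 MB, in line with roughly 494M float16 parameters). A minimal sketch of parsing such a pointer and checking a downloaded blob against it (the file paths are illustrative assumptions):

```python
import hashlib

def read_lfs_pointer(path):
    """Parse the 'key value' lines of a Git LFS pointer file."""
    fields = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            key, _, value = line.strip().partition(" ")
            fields[key] = value
    return fields

def matches_pointer(pointer_path, blob_path):
    """Return True if the blob's SHA-256 equals the oid recorded in the pointer."""
    ptr = read_lfs_pointer(pointer_path)
    expected = ptr["oid"].split(":", 1)[1]  # strip the 'sha256:' prefix
    digest = hashlib.sha256()
    with open(blob_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected

# Example call with assumed file names:
# matches_pointer("model.safetensors.pointer", "model.safetensors")
```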
pytorch_model.bin (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:dd1dc23213e89abf3c85664d995961d5032d81d090f0f013b4bda1abde0a834f
size 988162898
special_tokens_map.json (new file, 14 lines)
@@ -0,0 +1,14 @@
{
  "additional_special_tokens": [
    "<|im_start|>",
    "<|im_end|>"
  ],
  "eos_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": "<|PAD_TOKEN|>"
}
tokenizer.json (new file, 303121 lines)
File diff suppressed because it is too large
tokenizer_config.json (new file, 52 lines)
@@ -0,0 +1,52 @@
{
  "add_prefix_space": false,
  "added_tokens_decoder": {
    "151643": {
      "content": "<|endoftext|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151644": {
      "content": "<|im_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151645": {
      "content": "<|im_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151646": {
      "content": "<|PAD_TOKEN|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "additional_special_tokens": [
    "<|im_start|>",
    "<|im_end|>"
  ],
  "bos_token": null,
  "chat_template": "{% for message in messages %}{% if loop.first and messages[0]['role'] != 'system' %}{{ '<|im_start|>system\nYou are a helpful assistant<|im_end|>\n' }}{% endif %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}",
  "clean_up_tokenization_spaces": false,
  "eos_token": "<|endoftext|>",
  "errors": "replace",
  "model_max_length": 131072,
  "pad_token": "<|PAD_TOKEN|>",
  "padding_side": "left",
  "split_special_tokens": false,
  "tokenizer_class": "Qwen2Tokenizer",
  "unk_token": null
}
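The `chat_template` above is the ChatML layout, injecting a default system prompt when none is supplied, and padding is left-sided with the dedicated `<|PAD_TOKEN|>`. A minimal sketch of rendering it via `apply_chat_template` (assuming the repo id from this commit; the message content is illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Ramikan-BR/Qwen2-0.5B-v5")

messages = [{"role": "user", "content": "Write a one-line Python hello world."}]

# Renders the ChatML string produced by the chat_template above,
# including the injected default system prompt and the assistant header.
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(text)
```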
unsloth.F16.gguf (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2df4926a98aebc4a734a4f95f63e446fd535ae4a33c7603551e6ddb68d783870
size 994153920
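This F16 GGUF export (~994 MB, also stored as an LFS pointer) targets llama.cpp-style runtimes rather than transformers. A minimal sketch using the llama-cpp-python bindings (the package, the local path, and the decoding settings are assumptions, not instructions from this repo):

```python
from llama_cpp import Llama  # pip install llama-cpp-python (assumed, not part of this repo)

# Load the F16 GGUF from this commit, downloaded to the working directory.
llm = Llama(model_path="unsloth.F16.gguf", n_ctx=2048)

prompt = (
    "### Instruction:\n"
    "I need train a AI offline on my computer, give me a code.\n\n"
    "### Output:\n"
)
out = llm(prompt, max_tokens=256)
print(out["choices"][0]["text"])
```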
vocab.json (new file, 1 line)
File diff suppressed because one or more lines are too long