Initialize project; model provided by the ModelHub XC community
Model: k050506koch/GPT3-dev-125m-0104 Source: Original Platform
---
license: mit
datasets:
- HuggingFaceFW/fineweb
language:
- en
pipeline_tag: text-generation
library_name: transformers
---

# GPT3

Welcome to the GPT3 repository! This project is an attempt to recreate the architecture and approach from the original OpenAI GPT-3 paper. The repository includes scripts for training, fine-tuning, and inference of a GPT-3-like model using PyTorch and the Hugging Face Transformers library.

This repository hosts the weights of development checkpoints of my models. You can download a checkpoint folder, paste its path into inference.py, and chat with the model.

# **You can find all code on [GitHub](https://github.com/krll-corp/GPT3)**

# Note: This is a model with 125 million parameters (an attempt to replicate GPT-3 Small). It is still very undertrained.

# Note 2: This checkpoint was released on 12/02 2025 (batch size 12, gradient accumulation 4, 512-token sequences, and 65,000 steps with the Lion optimizer). It scores 28.65% on MMLU, slightly above the 25% random-guess baseline.

# Note 3: This model already demonstrates basic text-generation abilities. It is not perfect and I will continue working on it. Expect Instruct models soon.

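For context on the MMLU figure above: MMLU questions have four answer choices, so uniform random guessing scores 25%. A quick sketch of the margin arithmetic (plain Python, no model required):

```python
# MMLU is a four-choice benchmark, so random guessing is expected to hit 1/4.
random_baseline = 1 / 4          # 25%
reported_accuracy = 0.2865       # score reported above

margin = (reported_accuracy - random_baseline) * 100
print(f"{margin:.2f} percentage points above chance")  # → 3.65 percentage points above chance
```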
## Inference

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained('k050506koch/GPT3-dev-125m-0104', trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained('k050506koch/GPT3-dev-125m-0104')
tokenizer.pad_token_id = tokenizer.eos_token_id

prompt = "He is a doctor. His main goal is"
output = model.generate(
    tokenizer.encode(prompt, return_tensors='pt'),
    max_length=128, temperature=0.7, top_p=0.9,
    repetition_penalty=1.2, no_repeat_ngram_size=3,
    num_return_sequences=1, do_sample=True,
)
print("\n", tokenizer.decode(output[0], skip_special_tokens=True))
```

# Note for Instruct models: all Instruct models share the same chat template

```plaintext
User: Here is a user prompt.<|endoftext|>\nAssistant: Here's an answer from model.<|endoftext|>
```
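As a sketch, the template above can be assembled with plain string formatting; the `build_prompt` helper below is hypothetical and not part of this repository:

```python
# Hypothetical helper that formats a single turn in the chat template above.
EOT = "<|endoftext|>"

def build_prompt(user_message: str) -> str:
    # The model is expected to continue the text after "Assistant:".
    return f"User: {user_message}{EOT}\nAssistant:"

print(build_prompt("Here is a user prompt."))
```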

Please note that the model is not trained for continuous chat and has not been tested for it, so it is possible but unlikely that it will stay on topic and act coherently across messages.

## Contributing

Contributions are welcome! I'm just a student interested in AI, so my code may be incorrect or have logical issues. Please open an issue or submit a pull request for any improvements or bug fixes; I will be happy.
## License
This project is licensed under the MIT License. See the LICENSE file for details. Everyone can use and modify this code at their discretion.
## Acknowledgements

Thanks to OpenAI, Hugging Face and PyTorch for making this project possible!

- [OpenAI GPT-3 Paper](https://arxiv.org/abs/2005.14165)
- [Hugging Face Transformers](https://github.com/huggingface/transformers)
- [PyTorch](https://pytorch.org/)