Initialize the project; model provided by the ModelHub XC community
Model: deepseek-ai/deepseek-llm-7b-base Source: Original Platform
---
license: other
license_name: deepseek
license_link: LICENSE
---

<p align="center">
<img width="500px" alt="DeepSeek Chat" src="https://github.com/deepseek-ai/DeepSeek-LLM/blob/main/images/logo.png?raw=true">
</p>
<p align="center"><a href="https://www.deepseek.com/">[🏠Homepage]</a> | <a href="https://chat.deepseek.com/">[🤖 Chat with DeepSeek LLM]</a> | <a href="https://discord.gg/Tc7c45Zzu5">[Discord]</a> | <a href="https://github.com/deepseek-ai/DeepSeek-LLM/blob/main/images/qr.jpeg">[Wechat(微信)]</a> </p>
<hr>

### 1. Introduction of DeepSeek LLM
Introducing DeepSeek LLM, an advanced language model comprising 7 billion parameters. It has been trained from scratch on a vast dataset of 2 trillion tokens in both English and Chinese. In order to foster research, we have made DeepSeek LLM 7B/67B Base and DeepSeek LLM 7B/67B Chat open source for the research community.
### 2. Model Summary

`deepseek-llm-7b-base` is a 7B-parameter model with Multi-Head Attention, trained from scratch on 2 trillion tokens.

- **Home Page:** [DeepSeek](https://deepseek.com/)
- **Repository:** [deepseek-ai/deepseek-LLM](https://github.com/deepseek-ai/deepseek-LLM)
- **Chat With DeepSeek LLM:** [DeepSeek-LLM](https://chat.deepseek.com/)
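
The architectural details above can be checked locally by loading just the model's configuration. This is a minimal sketch assuming the standard `transformers` `AutoConfig` API and LLaMA-style config field names (`hidden_size`, `num_hidden_layers`, `num_attention_heads`); the exact attribute names depend on the checkpoint's config file.

```python
from transformers import AutoConfig

# Fetch only the configuration (no weights) to inspect the architecture.
config = AutoConfig.from_pretrained("deepseek-ai/deepseek-llm-7b-base")

# Field names assume a LLaMA-style config; adjust if the checkpoint differs.
print(config.model_type)
print(config.hidden_size, config.num_hidden_layers, config.num_attention_heads)
```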
### 3. How to Use
Here are some examples of how to use our model.
#### Text Completion
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig

model_name = "deepseek-ai/deepseek-llm-7b-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Load the weights in bfloat16 and place them on the available devices.
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
model.generation_config = GenerationConfig.from_pretrained(model_name)
# The base model defines no pad token, so reuse the EOS token for padding.
model.generation_config.pad_token_id = model.generation_config.eos_token_id

text = "An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is"
inputs = tokenizer(text, return_tensors="pt")
# Move the inputs to the model's device and generate up to 100 new tokens.
outputs = model.generate(**inputs.to(model.device), max_new_tokens=100)

result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
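
For less deterministic completions, sampling parameters can be passed to `generate`. The sketch below reuses the `model` and `tokenizer` loaded above; the prompt and the `temperature`/`top_p` values are illustrative assumptions, not officially recommended settings.

```python
# Sampling-based completion: a minimal sketch reusing `model` and `tokenizer`
# from the snippet above. The sampling values are illustrative assumptions.
prompt = "The key advantage of multi-head attention is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs.to(model.device),
    max_new_tokens=100,
    do_sample=True,     # sample from the distribution instead of greedy decoding
    temperature=0.7,    # soften the next-token distribution
    top_p=0.9,          # nucleus sampling cutoff
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```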
### 4. License
This code repository is licensed under the MIT License. The use of DeepSeek LLM models is subject to the Model License. DeepSeek LLM supports commercial use.

See the [LICENSE-MODEL](https://github.com/deepseek-ai/deepseek-LLM/blob/main/LICENSE-MODEL) for more details.
### 5. Contact
If you have any questions, please raise an issue or contact us at [service@deepseek.com](mailto:service@deepseek.com).