Update README.md
- Prompt template

  - Prompt type for chat: `llama-3-chat`

    - Prompt string
      ```text
      <|begin_of_text|><|start_header_id|>system<|end_header_id|>

      {{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|>

      {{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

      {{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|>

      {{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
      ```
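      For reference, here is the same template with hypothetical values substituted for the placeholders (a single-turn example, ending where the model would generate the assistant reply):

      ```text
      <|begin_of_text|><|start_header_id|>system<|end_header_id|>

      You are a helpful assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>

      What is the capital of France?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
      ```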
  - Prompt type for tool use: `llama-3-tool`

    - Prompt string

      ```text
      <|begin_of_text|><|start_header_id|>system<|end_header_id|>

      {system_message}<|eot_id|><|start_header_id|>user<|end_header_id|>

      Given the following functions, please respond with a JSON for a function call with its proper arguments that best answers the given prompt.

      Respond in the format {"name": function name, "parameters": dictionary of argument name and its value}. Do not use variables.

      [{"type":"function","function":{"name":"get_current_weather","description":"Get the current weather in a given location","parameters":{"type":"object","properties":{"location":{"type":"string","description":"The city and state, e.g. San Francisco, CA"},"unit":{"type":"string","description":"The temperature unit to use. Infer this from the users location.","enum":["celsius","fahrenheit"]}},"required":["location","unit"]}}}]

      Question: {user_message}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
      ```
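      With this template, the model is expected to reply with a single JSON object in the format stated above; for the example weather question, a response might look like:

      ```text
      {"name": "get_current_weather", "parameters": {"location": "San Francisco, CA", "unit": "fahrenheit"}}
      ```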
- Context size: `128000`

- Run as LlamaEdge service
  - Chat

    ```bash
    wasmedge --dir .:. --nn-preload default:GGML:AUTO:Llama-3.2-3B-Instruct-Q5_K_M.gguf \
      llama-api-server.wasm \
      --prompt-template llama-3-chat \
      --ctx-size 128000 \
      --model-name Llama-3.2-3b
    ```
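    The server exposes an OpenAI-compatible API. As a quick sanity check (assuming the server's default port `8080`), you can send a chat request with `curl`:

    ```bash
    curl -X POST http://localhost:8080/v1/chat/completions \
      -H 'Content-Type: application/json' \
      -d '{
        "model": "Llama-3.2-3b",
        "messages": [
          {"role": "system", "content": "You are a helpful assistant."},
          {"role": "user", "content": "What is the capital of France?"}
        ]
      }'
    ```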
  - Tool use

    ```bash
    wasmedge --dir .:. --nn-preload default:GGML:AUTO:Llama-3.2-3B-Instruct-Q5_K_M.gguf \
      llama-api-server.wasm \
      --prompt-template llama-3-tool \
      --ctx-size 128000 \
      --model-name Llama-3.2-3b
    ```
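    Tool-use requests go through the same OpenAI-compatible endpoint, with the function definitions passed in the `tools` field; a sketch (again assuming the default port `8080`, and reusing the `get_current_weather` schema from the prompt string above):

    ```bash
    curl -X POST http://localhost:8080/v1/chat/completions \
      -H 'Content-Type: application/json' \
      -d '{
        "model": "Llama-3.2-3b",
        "messages": [{"role": "user", "content": "What is the weather like in San Francisco?"}],
        "tools": [{"type":"function","function":{"name":"get_current_weather","description":"Get the current weather in a given location","parameters":{"type":"object","properties":{"location":{"type":"string","description":"The city and state, e.g. San Francisco, CA"},"unit":{"type":"string","enum":["celsius","fahrenheit"]}},"required":["location","unit"]}}}]
      }'
    ```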
- Run as LlamaEdge command app
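  A sketch of the command-app invocation, assuming LlamaEdge's `llama-chat.wasm` with the same model and flags as the service invocation above:

  ```bash
  wasmedge --dir .:. --nn-preload default:GGML:AUTO:Llama-3.2-3B-Instruct-Q5_K_M.gguf \
    llama-chat.wasm \
    --prompt-template llama-3-chat \
    --ctx-size 128000
  ```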