---
library_name: transformers
tags:
- unsloth
- trl
- sft
license: mit
datasets:
- neo4j/text2cypher-2024v1
language:
- en
base_model:
- unsloth/Llama-3.1-8B-Instruct
pipeline_tag: text-generation
---

## Model Card for Llama3.1-8B-Cypher

### Model Details

**Model Description**

This is the model card for **Llama3.1-8B-Cypher**, a fine-tuned version of Meta's Llama-3.1-8B-Instruct optimized for generating **Cypher queries** from natural-language input. The model was fine-tuned with **Unsloth** for efficient training and inference.

**Developed by**: Azzedde (GitHub: Azzedde)

**Funded by [optional]**: N/A

**Shared by [optional]**: Azzedde

**Model Type**: Large Language Model (LLM) optimized for Cypher query generation

**Language(s) (NLP)**: English

**License**: MIT

**Finetuned from model [optional]**: unsloth/Llama-3.1-8B-Instruct (Meta-Llama-3.1-8B-Instruct)

### Model Sources

**Repository**: [Hugging Face](https://huggingface.co/Azzedde/llama3.1-8b-text2cypher)

**Paper [optional]**: N/A

**Demo [optional]**: N/A
### Uses

#### Direct Use

This model is designed to generate **Cypher queries** for **Neo4j databases** from natural-language input. It can be used for:

- Database administration
- Knowledge graph construction
- Query automation for structured data retrieval

#### Downstream Use [optional]

- Integrating into **LLM-based database assistants** (see the sketch below)
- Automating **graph database interactions** in enterprise applications
- Enhancing **semantic search and recommendation systems**
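As a rough illustration of the assistant use case, the sketch below runs a model-generated query with the official `neo4j` Python driver; the connection URI, the credentials, and the `run_generated_query` helper are all illustrative assumptions, not part of this model.

```python
# Minimal sketch: execute a model-generated Cypher query with the official
# `neo4j` Python driver. The URI and credentials are placeholder assumptions.
from neo4j import GraphDatabase

URI = "bolt://localhost:7687"   # assumed local Neo4j instance
AUTH = ("neo4j", "<password>")  # placeholder credentials

def run_generated_query(cypher: str) -> list[dict]:
    """Run one generated query and return its records as plain dicts."""
    with GraphDatabase.driver(URI, auth=AUTH) as driver:
        records, _, _ = driver.execute_query(cypher)
        return [record.data() for record in records]
```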
#### Out-of-Scope Use

- General NLP tasks unrelated to graph databases
- Applications requiring strong factual accuracy outside Cypher query generation

### Bias, Risks, and Limitations

- The model may **generate incorrect or suboptimal Cypher queries**, especially for **complex database schemas**.
- The model has not been trained to **validate or optimize queries**, so users should manually **verify generated queries**.
- It is limited to **English-language inputs** and **Neo4j graph database use cases**.

### Recommendations

Users should be aware of:

- The importance of **validating model-generated queries** before execution (see the `EXPLAIN` sketch below).
- The **potential for biases** in database schema interpretation.
- The need for **fine-tuning on domain-specific datasets** for best performance.
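One way to act on the first recommendation is to have Neo4j compile a query without executing it, via the `EXPLAIN` prefix. A minimal sketch, assuming an already-open `neo4j` driver such as the one in the snippet above:

```python
# Sketch: syntax-check generated Cypher with EXPLAIN, which builds a query plan
# without reading or writing any data. `driver` is an assumed open neo4j Driver.
from neo4j.exceptions import CypherSyntaxError

def is_valid_cypher(driver, cypher: str) -> bool:
    """Return True if Neo4j can compile the query; nothing is executed."""
    try:
        driver.execute_query("EXPLAIN " + cypher)
        return True
    except CypherSyntaxError:
        return False
```

Note that `EXPLAIN` only catches syntax and planning errors; it does not guarantee the query answers the original question.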
### How to Get Started with the Model

Use the following code to load and use the model:

```python
from unsloth import FastLanguageModel

# FastLanguageModel.from_pretrained returns the model together with its tokenizer.
model, tokenizer = FastLanguageModel.from_pretrained("Azzedde/llama3.1-8b-text2cypher")
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference mode

# Example inference
cypher_prompt = """Below is a Neo4j database schema and a question related to that database. Write a Cypher query to answer the question.

### Schema:
{schema}

### Question:
{question}

### Cypher:
"""

input_text = cypher_prompt.format(schema="<Your Schema>", question="Find all users with more than 5 transactions")
inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64, use_cache=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
### Training Details

**Training Data**: The model was fine-tuned on the **Neo4j Text2Cypher dataset (2024v1)**.

**Training Procedure**:

- **Preprocessing**: Examples were formatted into **Alpaca-style prompts** before tokenization.
- **Training Hyperparameters** (a training sketch follows this list):
  - `batch_size=2`
  - `gradient_accumulation_steps=4`
  - `num_train_epochs=3`
  - `learning_rate=2e-4`
  - `fp16=True`
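A minimal training sketch under stated assumptions: it reuses `model`, `tokenizer`, and `cypher_prompt` from the other snippets in this card, guesses the dataset's column names (`schema`, `question`, `cypher`), and follows the common Unsloth + TRL `SFTTrainer` recipe (argument names vary across TRL versions).

```python
# Hedged sketch: Alpaca-style formatting plus TRL's SFTTrainer with the
# hyperparameters listed above. Column names are assumptions about the dataset.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

dataset = load_dataset("neo4j/text2cypher-2024v1", split="train")

def to_alpaca(example):
    # Same template as the inference prompt, with the gold query appended.
    example["text"] = (
        cypher_prompt.format(schema=example["schema"], question=example["question"])
        + example["cypher"]
        + tokenizer.eos_token  # mark the end of the target so generation stops
    )
    return example

trainer = SFTTrainer(
    model=model,              # LoRA-wrapped model (see Technical Specifications)
    tokenizer=tokenizer,
    train_dataset=dataset.map(to_alpaca),
    dataset_text_field="text",
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=3,
        learning_rate=2e-4,
        fp16=True,
        output_dir="outputs",
    ),
)
trainer.train()
```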
### Evaluation

#### Testing Data

- Used the **Neo4j Text2Cypher 2024v1 test split**.

#### Factors

- Model performance was measured on the **accuracy of Cypher query generation**.

#### Metrics

- **Exact Match** with ground-truth Cypher queries.
- **Execution Success Rate** on a test Neo4j instance (both metrics are sketched below).
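A simple sketch of how these two metrics can be computed; whitespace normalization for exact match is an assumption, and `driver` is an assumed connection to the test Neo4j instance:

```python
# Sketch of the two metrics: string-level exact match and execution success.
def exact_match(pred: str, gold: str) -> bool:
    """Compare queries after collapsing whitespace (an assumed normalization)."""
    normalize = lambda q: " ".join(q.split())
    return normalize(pred) == normalize(gold)

def execution_success(driver, cypher: str) -> bool:
    """Return True if the query runs on the test instance without raising."""
    try:
        driver.execute_query(cypher)
        return True
    except Exception:
        return False
```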
#### Results

- **High accuracy** on standard database queries.
- **Some errors** on complex queries requiring multi-hop reasoning.

### Environmental Impact

**Hardware Type**: Tesla T4 (Google Colab)

**Hours Used**: ~0.13 (about 7.71 minutes of training)

**Cloud Provider**: Google Colab

**Compute Region**: N/A

**Carbon Emitted**: Estimated using the ML Impact calculator
### Technical Specifications

#### Model Architecture and Objective

- Based on **Llama-3.1 8B** with **LoRA fine-tuning** (a typical adapter configuration is sketched below).
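The exact adapter configuration is not published in this card; the sketch below shows a typical Unsloth LoRA setup, where the rank, alpha, and target modules are assumed defaults rather than confirmed values.

```python
# Hedged sketch of the LoRA wrapping step with Unsloth. Every hyperparameter
# here is a common default, not a confirmed value from this training run.
from unsloth import FastLanguageModel

model = FastLanguageModel.get_peft_model(
    model,  # base model loaded via FastLanguageModel.from_pretrained
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
    lora_dropout=0,
    bias="none",
    use_gradient_checkpointing=True,
)
```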
#### Compute Infrastructure

- Fine-tuned using **Unsloth** for efficient training and inference.

#### Hardware

- **GPU**: Tesla T4
- **Max Reserved Memory**: ~7.922 GB

#### Software

- **Libraries Used**: `unsloth`, `transformers`, `trl`, `datasets`
### Citation [optional]

**BibTeX:**

```
@misc{llama31-8b-cypher,
  author = {Azzedde},
  title  = {Llama3.1-8B-Cypher: A Cypher Query Generation Model},
  year   = {2025},
  url    = {https://huggingface.co/Azzedde/llama3.1-8b-text2cypher}
}
```

**APA:**

Azzedde. (2025). *Llama3.1-8B-Cypher: A Cypher Query Generation Model*. Hugging Face. https://huggingface.co/Azzedde/llama3.1-8b-text2cypher
### More Information

For questions, reach out via **Hugging Face discussions** or GitHub issues.

### Model Card Authors

- **Azzedde** (GitHub: Azzedde)

### Model Card Contact

**Contact**: [Hugging Face Profile](https://huggingface.co/Azzedde)