---
license: apache-2.0
tags:
- text-generation-inference
- transformers
language:
- en
base_model:
- google/gemma-3-1b-it
pipeline_tag: text-generation
---

<p align="center">
  <img src="./AQUA-1B.png" alt="AQUA-1B" width="600" style="border-radius: 6px;"/>
</p>

# Model Information

**AQUA-1B** is Kurma AI's compact and efficient **1-billion-parameter Small Language Model (SLM)**. It is the **first lightweight aquaculture domain-specific model**, purpose-built for real-time aquaculture operations involving IoT sensor data, autonomous systems, and robotic agents.

Designed for edge deployments and low-latency environments, **AQUA-1B** enables on-device decision-making, real-time alert generation, and agentic task execution. It powers intelligent aquaculture systems for water quality monitoring, automated feeding routines, and mobile robotic inspections across ponds, tanks, and recirculating aquaculture systems (RAS).

Learn more about [Kurma AI](https://kurma.ai/company).

---

# Key Features

- **Edge-Ready Intelligence**: Optimized for low-power, real-time inference on embedded devices such as Raspberry Pi, Jetson Nano, and Coral TPU.
- **Agentic Task Execution**: Supports multi-step agent-based workflows such as sensor checks, feeding triggers, water exchange scheduling, and autonomous health checks using instruction-following prompts.
- **IoT-Aware Reasoning**: Natively understands and reasons over sensor data inputs (e.g., temperature, pH, TDS, turbidity, DO), enabling rapid decisions in fluctuating aquaculture environments.
- **Robotic Automation Control**: Designed to interact with robotic systems, including underwater and mobile pond inspectors.
- **Autonomous Alerting Systems**: Powers local alert mechanisms (via SMS, Telegram bots, or MQTT) that notify farmers when water parameters exceed safe thresholds or when interventions are required.
- **Field-Deployable Decision Engine**: Enables fully autonomous operation in remote hatcheries and ponds, ensuring uninterrupted control even in offline or low-connectivity conditions.

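To make the alerting idea concrete, here is a minimal sketch of the kind of local threshold check that could gate an alert or feed the model a remediation prompt. The parameter names and safe ranges below are illustrative assumptions, not values shipped with AQUA-1B:

```python
# Hypothetical safe ranges for a freshwater pond; tune per species and site.
SAFE_RANGES = {
    "temperature_c": (24.0, 30.0),
    "ph": (6.5, 8.5),
    "do_mg_l": (5.0, 12.0),     # dissolved oxygen
    "turbidity_ntu": (0.0, 50.0),
}

def out_of_range(readings: dict) -> list[str]:
    """Return a human-readable alert for every reading outside its safe range."""
    alerts = []
    for name, value in readings.items():
        if name not in SAFE_RANGES:
            continue  # ignore sensors we have no thresholds for
        low, high = SAFE_RANGES[name]
        if not (low <= value <= high):
            alerts.append(f"{name}={value} outside safe range [{low}, {high}]")
    return alerts

readings = {"temperature_c": 31.2, "ph": 7.1, "do_mg_l": 3.8}
alerts = out_of_range(readings)
if alerts:
    # In a deployment, this string could go out over SMS/Telegram/MQTT,
    # or be passed to the model as context for a remediation prompt.
    prompt = "Sensor alerts: " + "; ".join(alerts) + ". What should the farmer do?"
    print(prompt)
```
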
---
# Training Data Highlights

- Extension worker–farmer dialogues and field advisory logs
- FAO, ICAR, NOAA, and peer-reviewed aquaculture research
- Synthetic Q&A from 5,000+ aquaculture-focused topics
- Climate-resilient practices, hatchery SOPs, and water quality datasets
- Carefully curated to support **species-specific culture** methods
- **Scale:** Trained on approximately **3 million real and synthetic Q&A pairs**, totaling around **1 billion tokens** of high-quality, domain-specific data.


---

# Model Specifications

- **Base Model**: Gemma 3 1B (by [Google DeepMind](https://deepmind.google/))
- **Training Tokens**: ~1 billion
- **Released On**: July 4, 2025
- **Data Volume**: 3M+ expert-verified and synthetic instructions
- **Origin**: Made in America by [Kurma AI](https://kurma.ai/)
- **Training Technique**: LoRA-based Supervised Fine-Tuning (SFT)
- **Training Infrastructure**: Trained on an **8× NVIDIA H200 GPU cluster**

Special Thanks to [Nebius](https://nebius.com/)
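The exact LoRA-based SFT setup is not published. As a rough illustration only, a `peft` configuration for a Gemma-style causal LM might look like the following; every hyperparameter here is an assumption, not Kurma AI's actual value:

```python
from peft import LoraConfig

# Illustrative LoRA hyperparameters; Kurma AI's actual values are not published.
lora_config = LoraConfig(
    r=16,                # LoRA rank
    lora_alpha=32,       # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    task_type="CAUSAL_LM",
)
```
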
---
# Quickstart

Transformers (Google Colab / Jupyter):

- Install dependencies

```python
!pip install transformers accelerate
```

- Log in with your Hugging Face access token

```python
from huggingface_hub import login

login()  # paste your Hugging Face access token when prompted
```

- Load the model from Hugging Face

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "KurmaAI/AQUA-1B"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",          # automatically uses a GPU if available
    torch_dtype=torch.float16,  # use torch.float32 if no GPU
    trust_remote_code=True
)
```

- Test Prompt

```python
prompt = "What are the most common diseases in shrimp farming and how can they be prevented?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```

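Note that decoding `outputs[0]` returns the prompt followed by the continuation. A small helper (a convenience sketch, not part of the model's API) keeps only the newly generated text:

```python
def extract_response(decoded: str, prompt: str) -> str:
    """Strip the echoed prompt from a decoded generation, keeping only new text."""
    if decoded.startswith(prompt):
        return decoded[len(prompt):].strip()
    return decoded.strip()

# With the quickstart variables:
# answer = extract_response(tokenizer.decode(outputs[0], skip_special_tokens=True), prompt)
```
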
---
# 🙏 Acknowledgements

This project was made possible thanks to:

- [Nebius](https://nebius.com/) for providing a compute grant and access to NVIDIA H200 GPU servers, which powered the model training process.
- [Google DeepMind](https://deepmind.google/) for sharing their open-source language models, which made this project possible.
- The Kurma AI research team, including the aquaculture experts, machine learning engineers, data annotators, and advisors who collaborated to curate, verify, and refine the domain-specific dataset used for fine-tuning this model.

---
# ⚠️ Disclaimer, Bias & Limitations

- **Domain Bias**: The model may reflect inherent biases present in the aquaculture data sources and industry practices on which it was trained.
- **Temporal Data Limitation**: Climate and environmental recommendations are based on information available up to 2024. Users should cross-check any climate-related advice against the latest advisories (e.g., IMD or NOAA updates).
- **Potential Hallucinations**: Like all large language models, AQUA-1B may occasionally generate inaccurate or misleading responses ("hallucinations").
- **Always validate critical, regulatory, or high-impact decisions with a qualified aquaculture professional.**

---
# Citation

```bibtex
@article{narisetty2025aqua,
  title={AQUA: A Large Language Model for Aquaculture \& Fisheries},
  author={Narisetty, Praneeth and Kattamanchi, Uday Kumar Reddy and Nimma, Lohit Akshant and Karnati, Sri Ram Kaushik and Kore, Shiva Nagendra Babu and Golamari, Mounika and Nageshreddy, Tejashree},
  journal={arXiv preprint arXiv:2507.20520},
  year={2025},
  doi={10.48550/arXiv.2507.20520}
}
```