Model: reaperdoesntknow/Symiotic-14B
Source: Original Platform
2026-04-20 15:21:08 +08:00

---
license: afl-3.0
datasets:
- 0xZee/dataset-CoT-Advanced-Calculus-268
language:
- en
base_model: Qwen/Qwen3-14B
pipeline_tag: text-generation
library_name: transformers
tags:
- qwen3
- symbiotic
- symbioticai
- llm
- Symbols
- convergentintel
---

SymbioticLM-14B

Model Type: Hybrid SymbolicTransformer with Persistent Memory
Base Model: Qwen3-14B (Qwen/Qwen3-14B)
Framework: PyTorch + HuggingFace Transformers
Purpose: Full-scale cognitive reasoning model with self-organizing memory and generative symbolic evolution


Overview

SymbioticLM-14B is a state-of-the-art 17.8-billion-parameter symbolic-transformer hybrid model that tightly couples high-capacity neural representation with structured symbolic cognition. Designed to match or exceed the performance of top-tier LLMs in symbolic domains, it supports persistent memory, entropic recall, multi-stage symbolic routing, and self-organizing knowledge structures.

This model is ideal for advanced reasoning agents, research assistants, and symbolic math/code generation systems.


Architecture Highlights

  • Backbone: Qwen3-14B transformer with rotary embeddings + FlashAttention
  • Symbolic Dim: 8192
  • Symbolic Modules:
    • ThoughtDynamicsLNN (multi-head LSTM attention)
    • LiquidThoughtProcessor
    • CrystallineProcessor (DNAConv GNN)
    • HelicalDNAProcessor (linear helical encoding)
  • Memory: 4096 symbolic states in FP32, retrieved using entropy + contextual similarity
  • Dream Mode: Background symbolic simulation for open-ended cognition
  • Router: Intent classifier + entropy gating for processor path selection
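The memory and routing steps above can be sketched numerically. The snippet below is an illustrative sketch only, not the model's actual API: the function names, the `alpha` weighting, and the toy bank size are all assumptions. It scores a bank of stored state vectors by cosine similarity to a query and penalizes high-entropy states, mimicking retrieval by "entropy + contextual similarity".

```python
# Hypothetical sketch of entropy-weighted memory retrieval.
# All names and parameters here are illustrative assumptions.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def entropy(p):
    """Shannon entropy of a probability vector."""
    p = np.clip(p, 1e-12, 1.0)
    return -(p * np.log(p)).sum()

def retrieve(memory, query, k=3, alpha=0.5):
    """Score each stored state by cosine similarity to the query,
    minus a normalized-entropy penalty, and return the top-k indices."""
    sims = memory @ query / (
        np.linalg.norm(memory, axis=1) * np.linalg.norm(query) + 1e-12
    )
    ents = np.array([entropy(softmax(m)) for m in memory])
    scores = sims - alpha * ents / np.log(memory.shape[1])
    return np.argsort(-scores)[:k]

rng = np.random.default_rng(0)
bank = rng.normal(size=(16, 32))          # toy stand-in for the 4096-slot FP32 bank
q = bank[5] + 0.01 * rng.normal(size=32)  # query close to stored state 5
top = retrieve(bank, q)
print(top[0])
```

In the real model the same entropy signal also gates which processor path (liquid, crystalline, helical) a state is routed through; here it only reranks retrieval.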

Files Included

File                       Description
model.bin                  Transformer weights (LFS)
model.safetensors          Memory-safe weights, optimized for loading
memory.pt                  Bank of 4096 symbolic state vectors
config.json                Model and architecture metadata
generation_config.json     Decoding settings (top-p, temperature, etc.)
tokenizer.json             Full tokenizer with symbolic tag support
added_tokens.json          Symbolic tags such as <D_LIM>, <PROOF>, <BY_MEASURE>
special_tokens_map.json    Special token mapping for the tokenizer

Intended Uses

  • Multi-step conversational agents with true memory
  • Long-form symbolic theorem generation and proof planning
  • Scientific dialogue, symbolic simulations, math/code synthesis
  • Reasoning in fuzzy, discontinuous, or non-smooth problem domains

Limitations

  • Memory requires curation and seeding for maximum utility
  • Symbolic cognition is not instruction-tuned for general QA
  • FlashAttention and symbolic modules increase VRAM usage during generation

Citations

Please cite "SymbioticLM" when using symbolic memory components in research or applications.


Convergent Intelligence Portfolio

Part of the Symbiotic AI Series by Convergent Intelligence LLC: Research Division

Mathematical Foundations: Discrepancy Calculus (DISC)

SymbioticLM's persistent memory and symbolic evolution connect to Discrepancy Calculus through self-generating completeness (Ch. 3 of the DISC monograph) and symbolic-root domains. The discrepancy operator:

Df(x) = \lim_{\varepsilon \downarrow 0} \frac{1}{\varepsilon} \int_x^{x+\varepsilon} \frac{|f(t) - f(x)|}{|t - x|}\, dt

quantifies local mismatch between integration and differentiation. In the symbolic-transformer context, D measures the gap between what the symbolic system encodes (discrete structure) and what the transformer integrates (continuous representation). The self-generating completeness theorem establishes that completeness emerges dynamically via energy computation on symbolic-root domains — the mathematical foundation for why symbolic-neural hybrids can produce structure that neither component generates alone.
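As a numerical illustration of the definition above (a sketch only, not code from the model or the monograph), the operator recovers |f'(x)| at points where f is smooth and blows up where f is not Lipschitz at x:

```python
# Midpoint-rule approximation of the discrepancy operator
#   Df(x) = lim_{eps -> 0} (1/eps) * int_x^{x+eps} |f(t)-f(x)| / |t-x| dt
# with a small fixed eps standing in for the limit (illustrative only).
import numpy as np

def D(f, x, eps=1e-4, n=2_000):
    t = x + eps * (np.arange(n) + 0.5) / n   # midpoints in (x, x+eps)
    return np.mean(np.abs(f(t) - f(x)) / np.abs(t - x))

# Smooth case: f(t) = t^2 at x = 1, so Df(1) should approach |f'(1)| = 2.
print(D(lambda t: t**2, 1.0))

# Non-smooth case: f(t) = sqrt(t) at x = 0, where the integrand behaves
# like t^(-1/2) and the average grows like 2/sqrt(eps) as eps shrinks.
print(D(np.sqrt, 0.0))
```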

The discrepancy energy E_{\text{disc}}[f] = \frac{1}{2}\int w(x)(Df(x))^2 d\mu(x) provides a natural stability criterion for the memory consolidation process: memory states with bounded discrepancy energy are stable; those with divergent energy indicate structural transitions requiring reorganization.
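The stability criterion can be made concrete with a discretized version of E_disc. Everything in the sketch below is an assumption for illustration (grid, weight w ≡ 1, Lebesgue μ, and the test functions); it only shows that a smooth trace has small energy while a trace with a non-smooth spike has large energy:

```python
# Discretized discrepancy energy E_disc[f] = 0.5 * int w(x) (Df(x))^2 dmu(x)
# with w == 1 and mu = Lebesgue measure on a uniform grid (illustrative only).
import numpy as np

def D(f, x, eps=1e-4, n=2_000):
    t = x + eps * (np.arange(n) + 0.5) / n
    return np.mean(np.abs(f(t) - f(x)) / np.abs(t - x))

def disc_energy(f, xs):
    vals = np.array([D(f, x) for x in xs])
    dx = xs[1] - xs[0]
    return 0.5 * np.sum(vals**2) * dx      # simple Riemann sum

xs = np.linspace(1.0, 2.0, 201)
smooth = disc_energy(np.sin, xs)                              # smooth on [1, 2]
rough = disc_energy(lambda t: np.sqrt(np.abs(t - 1.5)), xs)   # kink at t = 1.5
print(smooth < rough)
```

Under the criterion in the text, the first trace would count as a stable memory state and the second as one signalling a structural transition.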

Full theory: "On the Formal Analysis of Discrepancy Calculus" (Colca, 2026; Convergent Intelligence LLC: Research Division).

Model             Downloads    Format
Symbiotic-1B      4            HF
Symbiotic-8B      4            HF
Symbiotic-Beta    3            HF

Top Models from Our Lab

Model                                             Downloads
Qwen3-1.7B-Thinking-Distil                        501
LFM2.5-1.2B-Distilled-SFT                         342
Qwen3-1.7B-Coder-Distilled-SFT                    302
Qwen3-0.6B-Distilled-30B-A3B-Thinking-SFT-GGUF    203
Qwen3-1.7B-Coder-Distilled-SFT-GGUF               194

Total Portfolio: 49 models, 22,598 total downloads

Last updated: 2026-03-28 12:57 UTC


From the Convergent Intelligence Portfolio

DistilQwen Collection — Our only BF16 series. Proof-weighted distillation from Qwen3-30B-A3B → 1.7B and 0.6B on H100. Three teacher variants (Instruct, Thinking, Coder), nine models, 2,788 combined downloads. The rest of the portfolio proves structure beats scale on CPU. This collection shows what happens when you give the methodology real hardware.

Top model: Qwen3-1.7B-Coder-Distilled-SFT — 508 downloads

Full methodology: Structure Over Scale (DOI: 10.57967/hf/8165)

Convergent Intelligence LLC: Research Division
