100% Open Source

The ultimate
local LLM platform

Train, run, and interact with LLMs locally, with full privacy and everything under your control.

Supports any model on 🤗 Hugging Face

Get started

Start running local AI with a single command.

curl -L https://kolosal.ai/install.sh | bash

Get started instantly

Download, install, and use easily, without dealing with complex setup or dependencies.

Works the way you do

Seamlessly integrates into your workflow: GUI, CLI, or scripts, all supported out of the box.

Power that embeds anywhere

Lightweight and modular, making it easy to embed LLMs into any local app or environment.

Light, fast, and ready to work

Fast Inference

LLMs respond instantly with optimized local execution.

Quick Setup

From install to running your model in minutes, with no complex configs.

Smooth Deployment

Deploy locally or across your stack with minimal friction.

Lightweight Runtime

Optimized for low memory and disk usage; runs well on everyday machines.

Energy Efficient

Low power consumption, designed to perform without draining your resources.

Empower your ideas with local AI

Run powerful language models locally: fast, private, and fully yours. Open-source and ready when you are.

Chatting

Chat with LLMs directly from your terminal, private and flexible.

Library

Download, list, and switch between models from your terminal.

Quantization

Easily run any quantization that fits on your system.
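As an illustration, a typical terminal session combining these features might look like the sketch below. The subcommand names (`pull`, `list`, `run`) and the model/quantization identifiers are assumptions for illustration only; check the Kolosal AI documentation for the actual CLI syntax.

```shell
# Download a quantized model from Hugging Face (names illustrative)
kolosal pull qwen2.5-3b:q4

# List the models available locally
kolosal list

# Start an interactive chat with the chosen model
kolosal run qwen2.5-3b:q4
```

Smaller quantizations (e.g. 4-bit) trade a little accuracy for much lower memory use, which is what lets larger models fit on everyday machines.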

Frequently asked questions

Get Started

Download Kolosal AI and unlock the full power of local LLMs: no cloud, no waiting.

Need help or want to share your setup?

Join our community to get support, share feedback, and stay updated.