Andrej Karpathy Releases NanoChat: The Minimal ChatGPT Clone - The Best $100 ChatGPT Alternative

Updated 14 October 2025 06:34 PM

Andrej Karpathy has unveiled nanochat, a groundbreaking open-source project that allows anyone to train and deploy a ChatGPT-like conversational AI for around $100. Nanochat stands out as a minimal, full-stack pipeline, enabling rapid, accessible experimentation with chatbot technology.

What Is NanoChat?

Nanochat is a from-scratch, readable codebase of roughly 8,000 lines that builds a conversational LLM, complete with a web UI reminiscent of ChatGPT. Unlike Karpathy’s earlier nanoGPT, which focused on pretraining, nanochat delivers a full training and inference stack in a single repository.

Key Features and Technical Highlights

  • Minimal Dependencies and Easy Deployment

    • Boot up a cloud GPU instance, run a single script, and in about four hours you’ll have your own functioning ChatGPT clone.

  • Tokenizer Training in Rust

    • Uses a new Rust-based tokenizer for speed and efficiency (a conceptual BPE sketch follows this list).

  • Layered Training Pipeline

    • The system implements pretraining on massive datasets (FineWeb), midtraining on user-assistant data (SmolTalk), knowledge and skill SFT (ARC-E/C, MMLU, GSM8K, HumanEval), and even optional RL with GRPO on math problems.

  • Efficient Inference Engine

    • Includes KV cache, prefill/decode optimizations, and the ability to interact either through CLI or a full-featured web interface.

  • Tool Use Integration

    • The chat model can run Python in a lightweight sandboxed environment, replicating the tool-use capabilities of advanced LLMs (a minimal sandbox sketch also follows this list).

  • Gamified Report Card

    • Generates a markdown summary covering performance benchmarks and progress across tasks, encouraging user competition and improvement.
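
nanochat's tokenizer training is implemented in Rust for speed; purely to illustrate what byte-pair-encoding training does, here is a toy Python sketch of the merge loop (the function name and corpus are made up and unrelated to the repo's code):

```python
# Minimal, conceptual sketch of byte-pair-encoding (BPE) training.
# nanochat's real tokenizer is a Rust implementation; this toy Python
# version only illustrates the merge loop, and all names here are hypothetical.
from collections import Counter

def train_bpe(corpus: str, num_merges: int):
    """Learn `num_merges` byte-pair merges from a toy text corpus."""
    tokens = list(corpus.encode("utf-8"))          # start from raw bytes
    merges = {}                                    # (left, right) -> new token id
    next_id = 256                                  # ids 0..255 are reserved for raw bytes
    for _ in range(num_merges):
        pairs = Counter(zip(tokens, tokens[1:]))   # count adjacent token pairs
        if not pairs:
            break
        best = pairs.most_common(1)[0][0]          # most frequent pair gets merged
        merges[best] = next_id
        # replace every occurrence of the best pair with the new token id
        merged, i = [], 0
        while i < len(tokens):
            if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == best:
                merged.append(next_id)
                i += 2
            else:
                merged.append(tokens[i])
                i += 1
        tokens = merged
        next_id += 1
    return merges

if __name__ == "__main__":
    print(train_bpe("the quick brown fox jumps over the lazy dog", num_merges=10))
```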
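
Similarly, the tool-use sandbox can be pictured as running model-emitted Python in an isolated subprocess with a timeout; the sketch below is one simple way to do that and is an assumption, not nanochat's actual sandbox:

```python
# Illustrative sketch of a lightweight Python "tool" sandbox for a chat model.
# This is NOT nanochat's implementation; a subprocess with a timeout is just
# one simple way to contain model-generated tool calls.
import subprocess
import sys

def run_python_tool(code: str, timeout_s: float = 5.0) -> str:
    """Execute model-generated Python in a separate process and capture its output."""
    try:
        result = subprocess.run(
            [sys.executable, "-I", "-c", code],   # -I: isolated mode, ignores user env/site
            capture_output=True,
            text=True,
            timeout=timeout_s,
        )
        return result.stdout if result.returncode == 0 else result.stderr
    except subprocess.TimeoutExpired:
        return "error: tool call timed out"

if __name__ == "__main__":
    # e.g. the model emits a calculation instead of guessing the answer
    print(run_python_tool("print(17 * 243)"))
```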

Efficiency and Affordability

Nanochat’s most compelling selling point is its low cost: roughly $100 covers about 4 hours of training on an 8xH100 GPU node, resulting in a chat model that can hold conversations, write stories, and answer simple questions. Scaling up the training time increases capabilities, quickly surpassing the GPT-2 quality bar and posting respectable scores on benchmarks like MMLU and ARC-Easy.
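
As a quick sanity check of the price tag (the per-GPU hourly rate here is an assumed ballpark, not a quoted figure), the arithmetic works out roughly as follows:

```python
# Rough cost check for the "~$100 speedrun" claim.
# The $3/GPU-hour rental rate is an assumed ballpark, not an official figure.
gpus = 8                 # 8xH100 node
hours = 4                # ~4 hours of training
rate_per_gpu_hour = 3.0  # assumed cloud rental price in USD
print(f"~${gpus * hours * rate_per_gpu_hour:.0f}")  # ~$96, i.e. roughly $100
```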

Performance Benchmarks

Model Comparison

| Training Time | Model Size/Depth | Cost | MMLU | ARC-Easy | GSM8K |
|---|---|---|---|---|---|
| ~4 hours | Small | ~$100 | Basic | Basic | Basic |
| ~12 hours | Larger | ~$300 | >GPT-2 | >GPT-2 | >GPT-2 |
| ~24 hours | Depth 30 (~GPT-3 Small FLOPs) | ~$600 | 40s | 70s | 20s |

With longer, deeper training, nanochat approaches the strengths of much larger commercial models, all within an affordable budget.

Community and Future Potential

Karpathy aims for nanochat to be maximally forkable, modular, and a strong baseline for anyone learning or researching LLMs. The project is envisioned as the capstone of his LLM101n curriculum and might develop into a lightweight but powerful research harness, much as nanoGPT did before it.

Why NanoChat Matters

Nanochat democratizes LLM development, letting anyone from hobbyists to researchers experiment, hack, and improve their own chatbot within a few hours and a modest budget. The repo’s clarity and scope make it a unique launchpad for future AI chat research and applications.

For a detailed walkthrough and the repository link, refer to Andrej Karpathy’s official channels.

Andrej Karpathy has released nanochat, a clean, minimal, full-stack ChatGPT alternative for roughly $100, making advanced LLM training accessible to individual enthusiasts and researchers.

What Sets NanoChat Apart

Nanochat is radically different from Karpathy’s earlier nanoGPT project, which focused only on model pretraining. Nanochat offers an entire pipeline: tokenizer training (with a new Rust-based implementation), pretraining a transformer LLM on the FineWeb dataset, midtraining on curated conversational data (from SmolTalk, MCQs, tool use logs), supervised fine-tuning (SFT) on knowledge, math, and code datasets (ARC-E/C, MMLU, GSM8K, HumanEval), and optional reinforcement learning (RL) on math tasks. The repo is dependency-light and consists of ~8,000 lines of code.
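
To make the staging concrete, here is a rough outline of how those phases chain together; the stage names, dataset handles, and function shapes below are illustrative placeholders rather than nanochat's actual scripts or APIs:

```python
# Conceptual outline of nanochat's staged pipeline. The names, dataset handles,
# and structure below are illustrative placeholders, not the repo's actual code.
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    data: str
    objective: str

PIPELINE = [
    Stage("tokenizer", "FineWeb sample",                  "train BPE vocab (Rust in the real repo)"),
    Stage("pretrain",  "FineWeb",                         "next-token prediction"),
    Stage("midtrain",  "SmolTalk, MCQs, tool-use logs",   "user-assistant conversations"),
    Stage("sft",       "ARC-E/C, MMLU, GSM8K, HumanEval", "supervised fine-tuning"),
    Stage("rl",        "GSM8K",                           "optional GRPO on math problems"),
]

def run_pipeline():
    # each stage would load the previous checkpoint and continue training from it
    for stage in PIPELINE:
        print(f"[{stage.name}] data={stage.data!r} objective={stage.objective!r}")

if __name__ == "__main__":
    run_pipeline()
```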

Full-Stack Training and Inference

By booting up a cloud GPU box and running a single script, users can assemble a fully working ChatGPT-like model and web UI in as little as four hours. The model supports tool use (via a lightweight Python sandbox), efficient inference with KV cache, and direct interaction over CLI or the web.
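
For readers unfamiliar with why a KV cache and the prefill/decode split speed up generation, the following simplified single-head NumPy sketch illustrates the idea; it is a teaching toy, not nanochat's inference engine:

```python
# Simplified single-head attention with a KV cache, illustrating the
# prefill/decode split used by efficient inference engines.
# This is a teaching sketch, not nanochat's actual engine.
import numpy as np

def attend(q, K, V):
    """Scaled dot-product attention for one query against cached keys/values."""
    scores = q @ K.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V

d = 16
rng = np.random.default_rng(0)

# Prefill: process the whole prompt once and cache its keys/values.
prompt_len = 8
K_cache = rng.normal(size=(prompt_len, d))
V_cache = rng.normal(size=(prompt_len, d))

# Decode: each new token only computes its own key/value and attends to the
# cache, instead of re-running attention over the full sequence every step.
for step in range(4):
    q_new = rng.normal(size=(d,))
    k_new = rng.normal(size=(1, d))
    v_new = rng.normal(size=(1, d))
    K_cache = np.concatenate([K_cache, k_new], axis=0)
    V_cache = np.concatenate([V_cache, v_new], axis=0)
    out = attend(q_new, K_cache, V_cache)
    print(f"decode step {step}: cache length = {len(K_cache)}, output dim = {out.shape}")
```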

Gamified Model Evaluation

Nanochat includes a single markdown report card, presenting scores and metrics in a "speedrun" format. With roughly $100 of compute (about four hours on an 8xH100 node), users can train a conversational AI capable of answering simple questions and writing stories or poems. Investing up to $1,000 in extended training (about 41 hours) yields models that rival GPT-2 and begin to solve math and code problems, scoring in the 40s on MMLU, the 70s on ARC-Easy, and the 20s on GSM8K.
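
The report card itself is just a markdown summary of benchmark scores; as a rough picture of the idea, the snippet below assembles one from a dictionary of metrics (the helper and the numbers are invented for illustration, not nanochat's actual output):

```python
# Toy illustration of a markdown "report card" of benchmark scores.
# The helper and the metric values are invented for illustration and are
# not taken from nanochat's actual report.
def report_card(scores: dict[str, float]) -> str:
    lines = ["| Benchmark | Score |", "|---|---|"]
    lines += [f"| {name} | {value:.1f} |" for name, value in scores.items()]
    return "\n".join(lines)

if __name__ == "__main__":
    print(report_card({"MMLU": 41.2, "ARC-Easy": 72.5, "GSM8K": 22.3}))
```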

Community and Research Potential

Karpathy’s goal is for nanochat to be a strong, readable, hackable baseline for LLM enthusiasts and researchers, and an anchor for his upcoming LLM101n course content. The project hopes to inspire collaborative improvements and might become a standard benchmark or research harness for LLM architectures and conversational agent development.

Nanochat is not yet fully tuned or optimized, and the repo is meant to stimulate community refinement and enhancement.

For repo links and in-depth walkthroughs, see Karpathy’s official announcement and documentation.

Disclaimer: All technical details and cost estimates are based on public information from Andrej Karpathy’s official channels. Users should verify specifications and hardware costs before attempting to replicate results.

Tags: NanoChat, Andrej Karpathy, ChatGPT alternative, $100 chatbot, open-source AI, nanoGPT, LLM101n, conversational AI