CLI tool and Python library for interacting with OpenAI, Anthropic, Google, and dozens of other language models via APIs or locally
LLM is a command-line interface and Python library that provides unified access to dozens of large language models from providers including OpenAI, Anthropic (Claude), Google (Gemini), and Meta (Llama). It supports both remote API-based models and locally installed models through a plugin system. The tool logs every prompt and response to a SQLite database for later analysis and reference.
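A minimal session might look like the following sketch. It assumes you have installed LLM and hold an OpenAI API key; other providers work similarly, and the exact log output depends on your configured default model.

```shell
# Store an API key once; LLM keeps it in its own keys store
llm keys set openai

# Run a one-off prompt against the default model
llm "Ten fun names for a pet pelican"

# Prompts and responses are logged to SQLite; show recent entries
llm logs -n 3
```

The same calls are available from Python via the `llm` library, so scripts and the CLI share one set of models, keys, and logs.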
The CLI enables interactive chat sessions, batch prompt execution against files, and multimodal operations such as describing or extracting text from images, audio, and video. LLM includes a structured data extraction system using schemas that can output JSON conforming to specific formats, making it useful for parsing unstructured content. It also supports a tools system that allows models to execute functions and access external data sources.
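These features map onto CLI flags roughly as follows. This is a sketch, not an exhaustive reference: attachment and schema support depend on the model and on a recent LLM version, and `photo.jpg` is a hypothetical local file.

```shell
# Start an interactive chat session with the default model
llm chat

# Multimodal prompt: attach an image (the chosen model must accept attachments)
llm "Describe this image" -a photo.jpg

# Structured extraction: request JSON matching a concise schema
llm --schema 'name, bio' 'Invent a cool dog'
```

Schema output lands in the same SQLite log as ordinary responses, so extracted records can be queried after the fact.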
LLM's plugin ecosystem extends functionality with integrations for local model runners like Ollama, cloud providers beyond the built-in OpenAI support, and specialized embedding models. The tool includes template and fragment systems for managing reusable prompts and handling long-context scenarios. It targets developers, data analysts, and researchers who need programmatic access to language models with logging and structured output capabilities.
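The plugin and template systems are driven from the same CLI. A sketch, assuming the `llm-ollama` plugin and a locally running Ollama install; the `summary` template name is an arbitrary example.

```shell
# Install a plugin that adds local models served by Ollama
llm install llm-ollama

# Save a reusable system prompt as a named template
llm --system 'Summarize the input in three bullet points' --save summary

# Apply the template to piped input
cat report.txt | llm -t summary
```

Templates keep frequently used prompts out of shell history, and plugins slot new models into the same `llm` commands shown above.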
# via pip
pip install llm
# via Homebrew
brew install llm
# via pipx
pipx install llm
# via uv
uv tool install llm