CLI tool and Python library for interacting with OpenAI, Anthropic, Google, and other LLMs via APIs or local models.
LLM is a command-line interface and Python library for working with large language models from multiple providers, including OpenAI, Anthropic's Claude, Google's Gemini, Meta's Llama, and dozens of others. It supports both remote API calls and locally installed models through a plugin system. The tool automatically logs all prompts and responses to a SQLite database for later analysis.
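As a minimal sketch of the Python API, this loads a model by ID and runs a prompt. The model name here is just an example; it assumes an API key has already been configured (e.g. with `llm keys set openai`):

```python
import llm

# Look up a model by its ID or alias ("gpt-4o-mini" is an example;
# any installed model works)
model = llm.get_model("gpt-4o-mini")

# Send a prompt; calling .text() waits for and returns the full response
response = model.prompt("Five creative names for a pet pelican")
print(response.text())
```

The equivalent one-liner on the command line is `llm -m gpt-4o-mini "Five creative names for a pet pelican"`.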
The CLI provides direct prompt execution, interactive chat sessions, and multi-modal capabilities for processing text, images, audio, and video. Users can extract structured data using JSON schemas, implement custom tools that models can execute, and manage conversation continuity. System prompts can be applied to modify model behavior, and attachments allow processing of various file types.
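A few of these CLI features can be sketched as follows; the file name and schema are illustrative placeholders, and the multi-modal examples assume a model that accepts attachments:

```shell
# Apply a system prompt to steer model behavior
llm -s "You are a pirate" "Describe the moon"

# Attach an image for a multi-modal model to process
llm "Describe this image" -a photo.jpg

# Continue the most recent conversation
llm -c "Now make it shorter"

# Extract structured data using LLM's concise schema syntax
llm --schema "name, age int, bio" "Invent a character"

# Start an interactive chat session
llm chat
```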
LLM includes a comprehensive plugin ecosystem for extending model support. Popular plugins include llm-ollama for local models via Ollama, llm-gemini for Google's Gemini models, and llm-anthropic for Claude access. The tool also supports embeddings generation and storage, a template system for reusable prompts, and fragment management for handling long-context scenarios.
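A rough sketch of those features on the command line; the embedding model alias and template name are examples, and the plugin requires Ollama to be installed separately:

```shell
# Install a plugin to add local model support via Ollama
llm install llm-ollama

# Generate an embedding for a string ("3-small" is an example alias
# for OpenAI's text-embedding-3-small)
llm embed -m 3-small -c "hello world"

# Save a prompt as a reusable template, then apply it to a file
llm "Summarize this text" --save summarize
llm -t summarize < notes.txt
```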
# via pip
pip install llm
# via Homebrew
brew install llm
# via pipx
pipx install llm
# via uv
uv tool install llm
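After installing, a typical first session stores an API key, runs a prompt, and reviews the log. OpenAI is shown here as an example provider:

```shell
# Store an API key for a provider (prompts for the key interactively)
llm keys set openai

# Run a first prompt against the default model
llm "Ten fun names for a pet pelican"

# Browse the prompts and responses logged to SQLite
llm logs
```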