Stop Juggling API Keys: Meet llm-env — One Command, Any LLM Provider


Sam Estrin


TL;DR


If you bounce between multiple AI providers like OpenAI, Gemini, Groq, Cerebras, or local LLMs—and you want an OpenAI-compatible workflow—this tiny Bash environment helper is for you. It simplifies LLM provider switching, keeps your API keys organized, and boosts developer productivity.

llm-env is a tiny Bash script that standardizes your Bash environment around the familiar OPENAI_* variables so OpenAI-compatible tools "just work" across providers.


Code:
# Switch providers in one command
llm-env set openai
llm-env set gemini
llm-env set groq

Result: Your existing AI tools (aider, llm, qwen-code, LiteLLM) immediately pick up the right API key, base URL, and model. No manual edits, no copy/paste, no restarts.
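
For example, switching to Groq immediately repoints the standard variables (illustrative; the URL shown is Groq's OpenAI-compatible endpoint):

Code:
llm-env set groq
echo "$OPENAI_BASE_URL"    # e.g. https://api.groq.com/openai/v1

# Any OpenAI-compatible client now talks to Groq; a quick curl sanity check:
curl -s "$OPENAI_BASE_URL/models" -H "Authorization: Bearer $OPENAI_API_KEY"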

The Problem (You May Have Felt This Today)

  • Multiple providers, each with different endpoints and auth
  • OPENAI_* has become the de facto standard—but not every provider uses those names
  • You end up editing ~/.bashrc or ~/.zshrc over and over
  • Context switching kills flow, and small mistakes cause mysterious 401s/404s

A Developer Story


Sarah, an ML engineer at a fintech startup, prototypes using the Gemini free tier, uses Groq for CI speed, and ships with OpenAI in production. With llm-env, she changes providers with a single command and avoids configuration drift across environments.

The result: faster cycles and fewer “why is this failing?” moments.

The Solution: llm-env


(Screenshot: llm-env --help)

A single script that:

  1. Centralizes provider configuration in one place (~/.config/llm-env/llm-env.conf)
  2. Normalizes every provider to OPENAI_* environment variables (see the sketch below)
  3. Lets you switch providers instantly with llm-env set <provider>
  4. Includes a built-in connectivity test so you know your provider works
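
Under the hood, "normalize to OPENAI_*" just means exporting the right trio of variables for the active provider. A minimal conceptual sketch (not the actual script; the Groq model name is only an example):

Code:
# Map the provider-specific key onto the standard variable names.
export OPENAI_API_KEY="$LLM_GROQ_API_KEY"
export OPENAI_BASE_URL="https://api.groq.com/openai/v1"
export OPENAI_MODEL="llama-3.3-70b-versatile"   # example default model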

What Using It Feels Like


(Screenshot: example llm-env workflow)

Before vs. After:


Code:
# Before (manual OPENAI_* exports)
export OPENAI_API_KEY="sk-••••abcd"
export OPENAI_BASE_URL="https://api.openai.com/v1"
export OPENAI_MODEL="gpt-5"
source ~/.bashrc  # reload to apply changes

# After (one command)
llm-env set openai  # sets OPENAI_API_KEY, OPENAI_BASE_URL, OPENAI_MODEL

Common Commands:


Code:
$ llm-env list         # Browse configured providers
$ llm-env set openai   # Switch instantly
$ llm-env test openai  # Verify connectivity and permissions
$ llm-env show         # See exactly what’s active

Installation (30 seconds)


Quickly install llm-env with:


Code:
curl -fsSL https://raw.githubusercontent.com/samestrin/llm-env/main/install.sh | bash

Add your keys to your shell profile (examples):


Code:
export LLM_OPENAI_API_KEY="your_openai_key"
export LLM_CEREBRAS_API_KEY="your_cerebras_key"
export LLM_GROQ_API_KEY="your_groq_key"
# ...add keys for the providers you use

Start using llm-env right away:


Code:
llm-env list
llm-env set openai
llm-env test openai
llm-env show

Pre‑Configured for the Modern AI Stack


llm-env ships with 20 popular providers ready to go and works with any OpenAI‑compatible API. You can easily add your own providers by editing a single config file.

  • Cloud providers (OpenAI, Groq, Gemini, Cerebras, xAI, and more)
  • OpenRouter presets (including free options)
  • Self‑hosted setups (Ollama, LM Studio, vLLM)
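
The self-hosted entries work the same way; for example, assuming a configured provider named ollama (Ollama exposes an OpenAI-compatible API on port 11434):

Code:
llm-env set ollama
echo "$OPENAI_BASE_URL"    # typically http://localhost:11434/v1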

Why Standardize on OPENAI_*?


Most AI tools already expect these variables:

  • OPENAI_API_KEY
  • OPENAI_BASE_URL
  • OPENAI_MODEL

llm-env embraces that reality. It updates those variables for you—correctly—no matter the provider. Your tools stay unchanged; your provider becomes a one‑line decision.
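
Because every provider ends up behind the same three variables, even a plain curl call works against whatever is active; the request below is the standard OpenAI chat completions shape:

Code:
curl -s "$OPENAI_BASE_URL/chat/completions" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d "{\"model\": \"$OPENAI_MODEL\", \"messages\": [{\"role\": \"user\", \"content\": \"Hello\"}]}"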

Security First


(Screenshot: llm-env show, demonstrating masked keys)

Keys are masked in output (e.g., ••••15x0) to keep secrets safe on screen and in screenshots.

Security is a top priority:

  • Keys live in environment variables—never written to config files
  • Outputs are masked (e.g., ••••abcd) — see the llm-env show output for an example
  • Switching is local; nothing is sent over the network except your own API calls during tests
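
Tail-masking like this is a one-liner in Bash; a minimal sketch of the idea (not llm-env's actual code):

Code:
key="sk-abcdef1234567890"
echo "••••${key: -4}"    # -> ••••7890 (only the last 4 characters are shown)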

Why Bash?


I wrote llm-env in Bash so it runs anywhere Bash runs—macOS, Linux, containers, CI—without asking you to install Python or Node first. It’s intentionally compatible with older shells and includes shims for pre-4.0 behavior.

  • Works out-of-the-box on macOS’s default Bash 3.2 and modern Bash 5.x installations; Linux distros with Bash 4.0+ are covered as well.
  • A backward-compatible layer for older shells ensures features like associative arrays “just work,” even on Bash 3.2 (see the sketch below).
  • Verified by an automated test matrix across Bash 3.2, 4.0+, and 5.x on macOS and Linux (see README → Testing).
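
For the curious, the usual way to shim associative arrays on Bash 3.2 is dynamically named variables; a rough sketch of the pattern (not llm-env's actual implementation; keys must be valid shell identifiers):

Code:
# Emulate assoc[key]=value on Bash 3.2 with dynamically named variables.
set_kv() { eval "kv_$1=\"\$2\""; }
get_kv() { eval "printf '%s\n' \"\$kv_$1\""; }

set_kv openai "https://api.openai.com/v1"
get_kv openai    # -> https://api.openai.com/v1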

Advanced Workflows (Examples)


Cost‑optimized development:


Code:
llm-env set gemini     # take advantage of Gemini's free tier
# ... iterate quickly
llm-env set openai     # switch to OpenAI for final runs

Provider‑specific optimization:


Code:
# Code generation and debugging
llm-env set deepseek
# Generate functions, fix bugs, code reviews

# Real-time applications requiring speed
llm-env set groq
# Chat interfaces, live demos, rapid prototyping

# Complex analysis and reasoning tasks
llm-env set openai
# Strategic planning, research synthesis, complex problem-solving

Environment‑aware deployment:


Code:
# dev → staging → prod with different providers
llm-env set cerebras
llm-env set openrouter2
llm-env set openai
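
The same pattern drops straight into CI scripts; a hedged sketch assuming your pipeline exports an LLM_PROVIDER variable (hypothetical name):

Code:
# Pick the provider from the environment (default: groq) and
# fail the job early if connectivity doesn't check out.
provider="${LLM_PROVIDER:-groq}"
llm-env set "$provider"
llm-env test "$provider" || exit 1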

Try It

Install (Quick)


Code:
curl -fsSL https://raw.githubusercontent.com/samestrin/llm-env/main/install.sh | bash

# Configure your OpenAI key
echo 'export LLM_OPENAI_API_KEY="your_key"' >> ~/.bashrc

# Switch in one line
llm-env set openai

Install (Full Setup)


Code:
curl -fsSL https://raw.githubusercontent.com/samestrin/llm-env/main/install.sh | bash
llm-env config init
llm-env config edit  # Configure your API key variables here
llm-env set openai   # Now you're ready to go!

Repository: https://github.com/samestrin/llm-env

Question for the community: What's your biggest pain point when working with multiple LLM providers? How do you currently manage API keys and environment switching?

Drop a comment below—I'd love to hear about your workflow and how llm-env might fit in.

⭐ Star the repo if this solves a problem you've been facing. The more developers who adopt standardized tooling, the better the entire ecosystem becomes.
