How a Hackathon Rejection Became 6,000+ PyPI Downloads

Sreenath
I was working on a hackathon project - an AI assistant that lets you chat with your infrastructure using RAG + MCP. Think of it as a live conversation with your entire cloud setup.

We built support for multiple LLM providers - Gemini, Watsonx, and Ollama. The switching logic was there, but embedded deep in the project code.

Then I came across a VentureBeat article in which Armand Ruiz, IBM's VP of AI, discussed how enterprise customers use multiple AI providers: "the challenge is matching the LLM to the right use case." That validated what we were building.

During the hackathon, we implemented a config system where users could specify different providers and models, pass API keys through config files or environment variables, and set a default provider. It worked well for our use case.
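A minimal sketch of what that kind of config resolution can look like: provider comes from an environment variable (falling back to a default), and the API key is looked up per provider. The variable names (LLM_PROVIDER, GEMINI_API_KEY, etc.) and model names here are illustrative assumptions, not llmswap's actual settings.

```python
import os

# Illustrative provider -> default model mapping (hypothetical names)
DEFAULT_MODELS = {
    "gemini": "gemini-1.5-flash",
    "watsonx": "granite-13b-chat",
    "ollama": "llama3",
    "anthropic": "claude-3-haiku",
}

def resolve_config(env=None):
    """Resolve provider, model, and API key from environment variables."""
    env = os.environ if env is None else env
    provider = env.get("LLM_PROVIDER", "gemini")  # default provider
    if provider not in DEFAULT_MODELS:
        raise ValueError(f"Unknown provider: {provider}")
    return {
        "provider": provider,
        # Explicit model override wins; otherwise use the provider default
        "model": env.get("LLM_MODEL", DEFAULT_MODELS[provider]),
        # Per-provider key convention, e.g. GEMINI_API_KEY, OLLAMA_API_KEY
        "api_key": env.get(f"{provider.upper()}_API_KEY"),
    }
```

With this shape, switching providers is just `LLM_PROVIDER=ollama` in the environment; no application code changes.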

By the hackathon submission deadline, we supported Anthropic, Gemini, Watsonx, and Ollama. Same app, different brains.

We didn't make the shortlist. But I wasn't ready to end the story there, so I kept improving what was within my control.

I realized we had built something valuable: the provider-switching logic buried in our codebase was solving a real problem. So I extracted it, restructured it properly, added more providers and CLI tools, and open-sourced it as llmswap.


Code:
# Instead of this mess in every project
if provider == "openai":
    from openai import OpenAI
    client = OpenAI(api_key=key)
elif provider == "anthropic":
    from anthropic import Anthropic
    client = Anthropic(api_key=key)
# ... repeat for 7 providers

# Just this
from llmswap import LLMSwap
llm = LLMSwap()  # Reads from config or env vars
response = llm.ask("Your question")

The extracted version is much cleaner than what we had in the hackathon.
Plus, I added CLI tools that became my daily workflow:


Code:
# Quick infrastructure questions
llmswap ask "Which logs should I check to debug Nova VM creation failure?"

# Interactive troubleshooting
llmswap chat

# Debug OpenStack errors
llmswap debug --error "QuotaPoolLimit:"

# Review infrastructure code
llmswap review heat_template.yaml --focus security

These CLI tools alone save us 10+ ChatGPT tab switches daily.

6,000+ downloads on PyPI in the first month.

Sometimes your best open source contributions come from recognizing the valuable pieces in larger projects. What started as embedded hackathon code became a tool helping thousands of developers.

PyPI: https://pypi.org/project/llmswap/
