From a Conversation to a Spark
Recently, I participated in a World Café event organized by CareerTeam. The topic was how AI is being used in HR and beyond. My original curiosity was about the "arms race" between AI-generated CVs and motivation letters on one side, and AI-driven applicant tracking systems on the other.
Unfortunately, that discussion never really happened. But the event was still eye-opening: most participants were from HR, and I got an insider's view into how they're dealing with AI---both for themselves and for their organizations.
One moment stuck with me: an HR professional explained how her company implemented a chatbot that helps employees find internal information. A small company, limited resources---yet they made it work. That was my spark. If they could deploy something useful, then anyone should be able to.
Monday: RAG in 24 Hours
That thought turned into action the following Monday (August 25). I dove into RAG (Retrieval-Augmented Generation). After watching a few YouTube tutorials, I pieced together how it works:
- Split text into chunks using LangChain
- Generate embeddings for retrieval
- Pass context to the LLM
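The three steps above can be sketched in plain TypeScript. This is a minimal, dependency-free illustration, not the playground's actual code: a real setup would use LangChain's text splitters and a hosted embedding model, whereas here chunking is naive and "embeddings" are just word counts, so the end-to-end flow stays visible.

```typescript
// Step 1: split text into overlapping chunks (LangChain does this smarter).
function chunkText(text: string, size = 200, overlap = 40): string[] {
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
  }
  return chunks;
}

// Step 2 (stand-in): bag-of-words "embedding"; real systems call an embedding API.
function embed(text: string): Map<string, number> {
  const vec = new Map<string, number>();
  for (const word of text.toLowerCase().match(/\w+/g) ?? []) {
    vec.set(word, (vec.get(word) ?? 0) + 1);
  }
  return vec;
}

function cosine(a: Map<string, number>, b: Map<string, number>): number {
  let dot = 0, na = 0, nb = 0;
  for (const [w, v] of a) { dot += v * (b.get(w) ?? 0); na += v * v; }
  for (const v of b.values()) nb += v * v;
  return na && nb ? dot / Math.sqrt(na * nb) : 0;
}

// Step 3: retrieve the top-k chunks for the question and build the LLM prompt.
function buildPrompt(docs: string[], question: string, k = 2): string {
  const chunks = docs.flatMap((d) => chunkText(d));
  const q = embed(question);
  const top = chunks
    .map((c) => ({ c, score: cosine(embed(c), q) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map((x) => x.c);
  return `Answer using only this context:\n${top.join("\n---\n")}\n\nQuestion: ${question}`;
}
```

Swapping the toy pieces for real ones (a proper splitter, an embedding endpoint, a vector store) changes the implementations but not this shape.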
I crafted a first prompt and let Claude Code help me build the prototype. Within 24 hours I had something working. Another day of refinement made it reliable enough to demo.
Then I thought---why stop at RAG?
- I added a regular chat interface.
- Integrated vision models.
- Redesigned the whole site to support model selection at runtime via a simple JSON configuration.
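To show what runtime model selection from a JSON configuration can look like, here is a hypothetical sketch. The field names (`id`, `provider`, `supportsVision`) and the model IDs are illustrative assumptions, not the playground's actual schema.

```typescript
interface ModelConfig {
  id: string;
  provider: "anthropic" | "openai";
  supportsVision: boolean;
}

// In the real site this JSON would live in a config file, not inline.
const models: ModelConfig[] = JSON.parse(`[
  { "id": "claude-sonnet-4", "provider": "anthropic", "supportsVision": true },
  { "id": "gpt-4o-mini", "provider": "openai", "supportsVision": true }
]`);

// Pick a model at runtime; adding a model means editing JSON, not code.
function selectModel(id: string): ModelConfig {
  const model = models.find((m) => m.id === id);
  if (!model) throw new Error(`Unknown model: ${id}`);
  return model;
}
```

The point of the design is that the UI and the chat backend both read from the same list, so new models appear everywhere after a config change.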
By midweek, I had a much more flexible system in place.
Friday: Enter MCP
On Friday, I decided to add MCP (Model Context Protocol) support to the chat function. This was trickier. Claude Code struggled at first with the tool-calling flow, so it took multiple iterations. I started simple with a weather API (Open-Meteo). After debugging and refining, we had a working Weather MCP server.
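The tool-calling flow that caused the trouble boils down to a loop: the model either answers in text or asks for a tool, and each tool result is fed back before asking again. The sketch below uses stand-in types rather than the real MCP SDK or LLM API, and the weather tool is a stub where the actual server would call Open-Meteo.

```typescript
type ToolCall = { name: string; args: Record<string, unknown> };
type ModelReply =
  | { type: "text"; text: string }
  | { type: "tool_call"; call: ToolCall };

type Tool = (args: Record<string, unknown>) => string;

// Keep querying the model until it replies with text,
// appending every tool result to the transcript in between.
function runWithTools(
  callModel: (transcript: string[]) => ModelReply,
  tools: Record<string, Tool>,
  userMessage: string,
): string {
  const transcript = [userMessage];
  for (let i = 0; i < 5; i++) { // cap iterations defensively
    const reply = callModel(transcript);
    if (reply.type === "text") return reply.text;
    const tool = tools[reply.call.name];
    const result = tool ? tool(reply.call.args) : "error: unknown tool";
    transcript.push(`tool:${reply.call.name} -> ${result}`);
  }
  return "gave up after too many tool calls";
}
```

Getting this loop right (especially feeding results back in the format the model expects) was exactly where the first attempts went wrong.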
Then Claude suggested: "Why not build a registry of MCP services?"
That was all the encouragement I needed---I spent Saturday afternoon building out five more MCP servers. Here's where it got really interesting.
The AHA Moments
When I looked for publicly available MCP services, I found almost none. Most were designed to run locally in Python, which didn't fit my cloud-centric TypeScript stack. So I asked Claude: "Convert this Python MCP server into TypeScript."
And it just... did. Smoothly. That was my first AHA moment.
The second AHA moment came a few hours later when I realized I could do this over and over. Conversion after conversion, commit after commit. The workflow was smooth, and I wasn't hitting the dreaded token context limit I had learned to work around. Why? Because Anthropic's Sonnet 4 model had just been upgraded with a 1 million token context window. I'd read about it before, even commented on LinkedIn---but now I could actually use it.
It felt transformative.
Wrapping Up
Today is Sunday, August 31st. What started as a casual event turned into a full week of exploration, coding, and a few breakthrough realizations. The result is the AI Playground---a space to illustrate, experiment, and hopefully educate others about how these systems fit together.
🔗 Check it out here: AI Playground
📘 Read the guide here: User Guide