
I've Been Vibe Coding for 2 Months, Here's What I Believe Will Happen


In the past few months, I've embarked on an experiment that has fundamentally changed how I approach software development. I've been "vibe coding" - essentially directing AI to build software for me without writing a single line of code myself. This journey has been eye-opening, and I'd like to share what I've learned and where I think this is all heading.

My Vibe Coding Journey

I started vibe coding with Claude and Anthropic's Sonnet 3.5 model, later upgrading to Sonnet 3.7, Claude Code, and other tools. My goal was straightforward but comprehensive: create a CRM system with all the features I needed:

- Contact management (CRUD operations, relationships, email integration, notes)
- Calendar management (scheduling meetings, avoiding conflicts)
- Group management for organizing contacts
- A campaign system with templates
- A standalone application using the CRM's APIs for external contacts to book meetings directly

The technical evolution of this project was interesting. I changed backends four times:

1. Initially used local browser storage
2. Migrated to Google Drive file storage
3. Settled on Firebase storage
4. Created APIs for all the CRUD functions with Firebase as the backend

I ultimately integrated fully with Google Cloud/Firebase since my domain was already on Google Workspace, allowing me to leverage Google APIs for email and calendar functionality. The entire project took about 5 days of work spread over a 2.5-month period and cost approximately USD 300. And the most surprising part? I haven't written a single line of code myself!
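To give a flavor of what the AI produced, here's a rough sketch of what one of those CRUD endpoints might look like. I'm assuming Cloud Firestore via the Firebase Admin SDK behind an Express server; the routes, names, and fields are illustrative, not the project's actual code.

```typescript
// Illustrative sketch only: one CRUD endpoint for a "contacts" collection,
// assuming Cloud Firestore (Firebase Admin SDK) behind an Express API.
import express from "express";
import { initializeApp, applicationDefault } from "firebase-admin/app";
import { getFirestore } from "firebase-admin/firestore";

initializeApp({ credential: applicationDefault() });
const db = getFirestore();
const app = express();
app.use(express.json());

// Create a contact
app.post("/api/contacts", async (req, res) => {
  const ref = await db.collection("contacts").add(req.body);
  res.status(201).json({ id: ref.id });
});

// Read a contact by id
app.get("/api/contacts/:id", async (req, res) => {
  const snap = await db.collection("contacts").doc(req.params.id).get();
  if (!snap.exists) {
    res.status(404).end();
    return;
  }
  res.json({ id: snap.id, ...snap.data() });
});

// Update and delete follow the same pattern with .update() and .delete().
app.listen(8080);
```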

Key Learnings

Context Window Limitations Matter

The most critical insight was that context window size is everything. Using aggregators like OpenRouter, I could get a 200K-token context window from Anthropic's models. This directly limits what your project can encompass - there's a hard ceiling on how much code and context an AI can process at once.

Project Management Must Adapt

Given token limitations, you need to manage projects by breaking them into smaller, well-insulated components. APIs help tremendously with this, as do clear implementation guidelines about what to build when. The traditional monolithic approach simply doesn't work when vibe coding.
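As an example of what "well-insulated" means in practice, here's a minimal sketch of a narrow contract between two components. The names are hypothetical; the point is that the calendar code only ever needs this interface in the AI's context, never the contacts implementation.

```typescript
// Hypothetical contract between the contacts and calendar components.
// Keeping the boundary this narrow means each side fits into a prompt on its own.
export interface Contact {
  id: string;
  name: string;
  email: string;
}

export interface ContactsApi {
  getContact(id: string): Promise<Contact | null>;
  listContacts(groupId?: string): Promise<Contact[]>;
}
```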

Prompting is a Critical Skill

Providing the right amount of context with each prompt remains hugely important. Every instruction needs to be carefully balanced - not too verbose, not too sparse. Modern coding agents have built-in tools to help with this (like Claude Code's `/init` command), but prompt engineering is still an essential skill.

Version Control is Non-Negotiable

Commit changes frequently, ideally after every successful prompt. Some agents can do this automatically, including relevant context as comments. Without this discipline, you risk the AI corrupting your codebase as it loses context of previous changes.
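If your agent doesn't commit automatically, a tiny helper along these lines can do it after each accepted change. This is just a sketch: it assumes Node with git available on the PATH, and the message format is my own convention, not something any tool prescribes.

```typescript
// Sketch: commit the working tree after every accepted AI change and keep the
// prompt summary in the commit message so later sessions can reconstruct intent.
// Assumes git is installed and the project is already a repository.
import { execSync } from "node:child_process";

export function commitAfterPrompt(promptSummary: string): void {
  execSync("git add -A");
  execSync(`git commit -m ${JSON.stringify("vibe: " + promptSummary)}`);
}

commitAfterPrompt("Add conflict check to meeting scheduler");
```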

Testing Remains Essential

Every change needs thorough testing, ideally with automated unit tests. This is relatively straightforward for frontend components talking to open APIs, but becomes more complex once those APIs sit behind authentication and other security layers.
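To make that concrete, here's a minimal sketch of an automated test for one API wrapper, using Node's built-in test runner. The getContact function, route, and data are made up for illustration; testing endpoints behind auth would additionally require a token or an emulator.

```typescript
// Illustrative unit test: stub fetch so the test runs without a live backend.
import { test } from "node:test";
import assert from "node:assert/strict";

// Hypothetical wrapper under test.
async function getContact(id: string): Promise<{ id: string; name: string }> {
  const res = await fetch(`http://localhost:8080/api/contacts/${id}`);
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}

test("getContact returns the contact for a known id", async () => {
  // Replace the global fetch with a stub that returns a canned response.
  globalThis.fetch = (async () =>
    new Response(JSON.stringify({ id: "42", name: "Ada" }), { status: 200 })) as typeof fetch;

  const contact = await getContact("42");
  assert.equal(contact.name, "Ada");
});
```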

Security Requires Extra Vigilance

Never pass access tokens or other security credentials directly into your codebase. Keep them separate and out of the AI's reach so they can't be accidentally embedded in generated code.
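In practice this mostly means reading credentials from the environment at runtime instead of ever pasting them into source files the AI can see or edit. A minimal sketch (the variable name is illustrative):

```typescript
// Read secrets from the environment at runtime; never hard-code them or
// commit a .env file where the AI (or git) can pick them up.
const apiKey = process.env.FIREBASE_API_KEY;
if (!apiKey) {
  throw new Error("FIREBASE_API_KEY is not set; refusing to start.");
}
```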

Where I Think This is Heading

Vibe Coding is Here to Stay

Vibe coding is here to stay, but we'll still need programmers and architects to make it work effectively. Especially in corporate environments, human experts will remain essential for code review, compliance checking, and security validation. This won't replace programmers; it will dramatically increase their productivity. The main limitation today is that current LLMs don't have large enough context windows to handle extensive codebases, but I expect this to be solved sooner rather than later.

Specialized Coding Models Are Coming

We're just at the beginning of vibe coding. I anticipate many advances with models specifically trained for software development rather than general-purpose use. The speed of progress suggests that vibe coding will become increasingly precise and powerful.

No-Code May Make a Comeback

I was never a big fan of traditional no-code approaches because they inevitably hit limitations. However, I believe GenAI could revitalize this space. Rather than using middleware with components that users connect, AI could generate actual code with specific constraints that non-programmers could work with effectively. This could represent an entirely new category of LLMs - models specifically designed for software development that bridge the gap between traditional programming and no-code solutions.

The future of software development is changing rapidly, and my experience with vibe coding has convinced me we're just seeing the beginning of a major transformation in how software gets built.


 

