Building an AI-Powered Investment Platform — One Week In
What I’ve learned building Bluelabel AIOS (Agent/AI Operating System), and why I haven’t felt this excited in years.
One week into building Bluelabel AIOS — my AI-first investment productivity platform — and I’ve already rewritten half my architecture, tested 12+ tools, thought about writing a whitepaper on pricing models, and briefly fantasized about launching a full-blown AI studio. Oh, and I found myself constantly saving content to my “read later” WhatsApp channel.
It’s been overwhelming, addictive, and more energizing than anything I’ve done in years.
This post is a snapshot of the first week: what I built, what surprised me, where I got stuck, and where I think this is going.
Vibe coding is real — and it’s addictive
I’ve spent more time inside Cursor, Claude Code, and LLM-powered terminals this week than I expected — and I loved it.
I feel like I cheated on my old friend ChatGPT (sorry!), though I still used it to polish things up and to help me decide on tools and architecture for the project.
Even with zero Python experience, I was able to spin up real, working agents and iterate fast. The tools are that powerful.
It wasn’t always smooth (more on that below), but I got lost in the kind of creative flow that only happens when the gap between idea and execution disappears. That’s the magic of vibe coding — and now I’m hooked.
You need more than one model (and interface)
As I started experimenting, I quickly realized you can’t rely on just one model or interface. Running the same prompt through ChatGPT Plus ($20/mo), Claude.ai, Claude Code, and Cursor often gave totally different outcomes.
Here’s how I approached it:
Early in the week, I used Claude.ai as a step-by-step guide, then pasted scripts into Cursor.
As I got more comfortable, I upgraded to Cursor Pro ($192/yr), letting its AI agent act as my copilot (it was the pilot, actually, and I wasn’t even the copilot: I was sitting comfortably in the back seat!). Cursor worked really well for deep iteration with memory directly on my code, but I found it often got stuck in loops.
By midweek, I upgraded to Claude Max ($100/mo, which includes Claude Code + API tokens). It was better at reasoning and executing in code, and I ended up constantly bouncing back and forth between Cursor and Claude Code. That turned out to be a good strategy for breaking out of infinite loops or finding solutions when one of them got stuck.
ChatGPT? Still great for structure and non-code tasks, but for some reason I didn’t trust it to code as well as the others.
The bottom line: treat models and tools like instruments in a band, not a single solo player.
Tokens (the AI pricing kind) are a mess
Coming from Web3, the word “token” instantly triggered a flashback to utility tokens, ICO mechanics, and reward loops.
AI tokens — as in compute cost — aren’t all that different. They’re abstract, often poorly explained, and vary wildly depending on the provider and interface. At least Gary Gensler isn’t all over them (bad joke for my Web3 followers).
I had to stop myself from going full whitepaper mode and mapping out a tokenomics strategy for prompt-based workflows, but I actually think the idea of using a crypto token model for AI pricing is not as crazy as it sounds. Maybe next week I’ll write more about it…
Real talk: if you’re building with LLMs, expect to spend $100–200/month just to explore seriously (right now I’m spending close to $200). Hosting and infra not included. But hey — that’s less than keeping one developer fed on pizza, and this one never gets sick or pushes back on crazy ideas. (Although let’s be honest: no dev has ever said “no” to a wild idea — they just say it’ll take two weeks.)
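To make that concrete, here’s a back-of-the-envelope sketch in Python. The per-token prices are placeholder assumptions for illustration only, not any provider’s actual rates, so check the current pricing pages before you budget anything.

```python
# Back-of-the-envelope LLM API cost estimate.
# NOTE: the prices below are made-up placeholders for illustration only;
# real per-token rates vary by provider and model, and change often.

PRICE_PER_MTOK = {   # USD per 1M tokens (assumed, not real quotes)
    "input": 3.00,
    "output": 15.00,
}

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of a single request."""
    return (
        input_tokens / 1_000_000 * PRICE_PER_MTOK["input"]
        + output_tokens / 1_000_000 * PRICE_PER_MTOK["output"]
    )

# Example: a coding session of ~200 requests/day, each sending ~6k tokens
# of context and getting back ~1k tokens of code and explanation.
per_request = estimate_cost(6_000, 1_000)
print(f"per request:      ${per_request:.3f}")             # ~$0.033
print(f"200 requests/day: ${per_request * 200:.2f}")       # ~$6.60
print(f"~20 working days: ${per_request * 200 * 20:.2f}")  # ~$132
```

Under those (assumed) numbers, a month of serious exploring lands right in that $100–200 range, which matches what I’m seeing on my own bills.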
Sometimes, the best fix is a reset
Vibe coding spoils you.
So when I hit bugs or looped endlessly in Cursor or Claude Code, I got frustrated — until I remembered: I’m programming in a language I didn’t know last week.
That’s insane.
So instead of debugging forever, I laughed, threw out the broken code, and started fresh.
What helped:
Accepting that AI-native workflows are messy by nature
Using version control and writing changelogs (even if just for myself)
Letting the agents read those changelogs so they could understand what I was building and help me better
Lesson learned: don’t debug forever. Back out, reset, and re-vibe. It’s not like you’re the one writing the code from scratch anyway…
Infrastructure first, features second
I knew that before I could scale to the 100+ agent vision, I had to set the right foundations.
This week was all about:
Getting consistent folder and file structures
Defining configs and prompts clearly (a minimal sketch of what I mean comes right after this list)
Researching AI dev tools, including MCP (Model Context Protocol), which feels like the REST API equivalent for agents
Making and unmaking a lot of architectural and tech-stack decisions as I learned more
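As an example of what “defining configs and prompts clearly” means in practice, here’s a minimal sketch of how an agent definition could be laid out. The folder names, the AgentConfig dataclass, and the research_agent example are all hypothetical placeholders, not the actual AIOS code.

```python
# Hypothetical layout (not the actual AIOS structure):
#   agents/
#     research_agent/
#       config.py      <- model, temperature, tools
#       prompt.md      <- system prompt, versioned in git
#   changelogs/        <- notes the agents can read for context

from dataclasses import dataclass, field
from pathlib import Path

@dataclass
class AgentConfig:
    """Everything an agent needs, kept in one explicit, versionable place."""
    name: str
    model: str                    # local or hosted LLM identifier
    system_prompt_path: Path      # the prompt lives in its own file, not in code
    temperature: float = 0.2
    tools: list[str] = field(default_factory=list)

    def system_prompt(self) -> str:
        return self.system_prompt_path.read_text()

# Example (hypothetical) agent definition:
research_agent = AgentConfig(
    name="research_agent",
    model="claude-or-local-model",
    system_prompt_path=Path("agents/research_agent/prompt.md"),
    tools=["web_search", "read_later_inbox"],
)
```

Keeping prompts and configs in versioned files like this is also what lets the agents read their own history instead of me re-explaining the project every session.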
Last week was definitely not about coming up with a beautiful UI/UX. Hopefully, next week I’ll make a bit of progress there.
The chat interfaces are amazing… until they aren’t
ChatGPT and Claude.ai are fantastic for learning and iterating, but once you need:
File system access
Long context windows
GitHub workflows
Persistent memory
…they hit their limits fast.
Initially, I was copy/pasting everything from Claude.ai into the Cursor IDE. That helped me gain confidence because it felt like I was writing the code (I was not!), but soon I needed tools that could read and write code files directly and keep track of the latest changes without me having to explain them first. That’s where IDE-based tools like Cursor’s internal AI agents and Claude Code’s terminal became essential.
Speaking of my new friend Claude Code, I wish it had a more robust UI — the minimalism is clean, but sometimes it’s too abstract.
Some coding background helps — but it’s not required
I had never touched Python before this week.
But my past full-stack bootcamp (which included some JavaScript and Node.js) saved me. I knew just enough to spot a broken variable or misaligned function.
That said, you don’t need to be a dev — but it helps to understand how systems behave: files, dependencies, logic flow.
And remember that LLMs will happily walk you through almost anything. Just try not to say “please” or “thank you” every time; it costs ChatGPT millions!
Vibe designing and “frontend first” thinking
I started this project from the backend — because it felt natural. Build the foundation first, then work on the exterior.
The result is that my UI sucks, and I ended up having quite a few issues with the frameworks recommended by Cursor and Claude (Streamlit and Flask).
In fact, after doing some research, I found that frontend first might also be a good approach — especially for LLM-native tools. The concept of vibe designing focuses on building the user experience first, often in collaboration with AI, and letting that guide the architecture. It’s the designer’s answer to the nerd’s vibe coding.
The battle is on!
Tools like Bolt.new, PRDKit, and UXPilot.ai are interesting early signals in this space. Next week, I may build a UI prototype first and work backwards — just to test the contrast. If I do, I will report back in a future post. In the meantime, here’s a screenshot of the current UI (yes, I know, no bueno!):
Main lesson of the week: try this at home
If you're curious, the best way to understand any of this is to get your hands dirty. A few good places to start:
Cursor – AI-native IDE
Claude Code – minimalistic terminal + assistant
Ollama – local LLM runner
Perplexity – fast, citation-rich research
ChatGPT – of course, I can’t forget my first love in AI… still rocking.
If you still don’t know where to start, just search for “vibe coding” on YouTube. But my recommendation is to download some of these tools, start with a simple prompt, and go from there. You’ll be surprised how far you get.
My plan for next week:
I want to finish the infra, including a robust local LLM setup, so I can start training it to become a “mini me” and help with my investor workflow. I ordered a high-end Mac Mini to start testing local LLM deployment, but it arrives next week. Can’t wait to get my hands on it — more on that in a future post (a rough sketch of what that first local test might look like comes after this list).
I’m evaluating Vercel, Cloudflare Pages, and other options for a lightweight web deployment.
The goal is to launch a working MVP version of Bluelabel AIOS soon, and offer early access to a small circle of founders and operators to get real feedback.
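As a teaser for the local LLM piece, here’s roughly what I expect the first test on the Mac Mini to look like, using Ollama’s local HTTP API. The model name and the prompt are placeholders, and this is a sketch of the approach, not final AIOS code.

```python
# Minimal sketch: query a locally running model through Ollama's HTTP API.
# Assumes Ollama is installed and a model has already been pulled
# (e.g. `ollama pull llama3`); model name and prompt are placeholders.

import requests

OLLAMA_URL = "http://localhost:11434/api/generate"

def ask_local_llm(prompt: str, model: str = "llama3") -> str:
    """Send one prompt to the local model and return its response text."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask_local_llm("Summarize this week's read-later saves in 3 bullets."))
```

If that works well locally, the same pattern should slot behind whatever lightweight web deployment I end up choosing on Vercel or Cloudflare Pages.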
Can’t wait to share what that looks like.
One more thing…
The post I published last week sparked something unexpected:
A wave of conversations with other founders and VCs — many in their 40s and 50s — who told me they felt exactly what I described:
“This changes everything.”
“I’ve never felt the ground move like this before.”
I’ll share more about those conversations — and where I think this is all heading — in my next post.
Cheers!
PS: If you like this content, please share it on your socials!