
Last weekend I was at Big Berlin Hack, the second edition of a hackathon organized by {Tech:Europe}, CODE University, and The Delta Campus. 300+ builders, €50K+ in prizes, two days in Neukölln.
I didn't go in with a team. I went in with a problem.
The Problem: Lovable Apps Are Invisible to AI
Lovable has gotten genuinely good at shipping beautiful marketing sites fast. The issue is structural: most Lovable apps are React SPAs that render client-side. Google's crawler and AI engines like ChatGPT, Perplexity, and Claude see an empty root <div>, not your content. So a founder ships a polished product site and then wonders why AI search has nothing to say about them when a potential customer asks.
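To make the empty-shell problem concrete, here is a minimal sketch of what a crawler that doesn't execute JavaScript actually gets back from a client-rendered SPA. The HTML shell and the parser are illustrative, not Lovable's actual output:

```python
from html.parser import HTMLParser

# A typical client-rendered SPA shell (hypothetical example): the page
# content only exists after the JavaScript bundle runs in a browser.
SPA_SHELL = """
<!doctype html>
<html>
  <head><title>Acme</title></head>
  <body>
    <div id="root"></div>
    <script src="/assets/index-abc123.js"></script>
  </body>
</html>
"""

class TextExtractor(HTMLParser):
    """Collects the text a non-JS crawler can see in the raw HTML."""
    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip = 0  # depth inside <script>/<style>

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())

def visible_text(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)

print(visible_text(SPA_SHELL))  # -> 'Acme': only the <title> survives
```

Everything a user sees on the rendered page is invisible here; the only indexable string is the title tag.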
I'd been thinking about this for a while. A hackathon felt like the right place to actually build a fix rather than write about it.
The Team Came Together Through START Munich
Emanuel Morhard, a fellow START Munich alum, pinged me on Slack the day before so we'd find each other at the venue. At the hack we teamed up with Tom Dietrich, Anton Boyko (both originally from Munich), and Lianne (Zhizhen Liu). Five people, working repo by Sunday morning.
Assembling a team of strangers in an hour is its own skill. What made it work: Emanuel's warm intro, a concrete problem to align on immediately, and everyone being willing to just start.
What We Built: Instant SEO & GEO
We called it lovabletoseo: an AI SEO + GEO fixer for founders using Lovable to build their marketing websites.
The flow works in two steps:
Step 1: Audit any site. Paste a URL, get a real AI search audit: visibility, share of voice, which third-party URLs LLMs cite instead of yours, and the queries they fan out to. Works on any site, not just Lovable.
Step 2: Fix it end-to-end (for Lovable apps). If we recognize a Lovable app, we offer to connect your GitHub and run the fix automatically. You get a PR opened on your repo with technical SEO and GEO repairs applied inside your framework's idiom (Vite+React or TanStack Start). On top of that, a Peec project gets populated with 20–50 buyer-shaped LLM tracking prompts grounded in real Google ranking data, plus a snapshot showing exactly where you stand.
About 4 minutes end-to-end. About €0.50 in API spend.
A Few Parts I'm Proud Of
Three Tavily approaches running in parallel for competitor discovery. Each one fails in its own way, so agreement between them is the signal. Voting on disagreeing results rather than trusting any single approach turned out to be the right call. Robustness through redundancy.
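The agreement-as-signal idea can be sketched as a simple vote across the candidate sets each approach returns. This is not our hackathon code, and the three result lists are hypothetical; it just shows the shape of the voting step:

```python
from collections import Counter
from urllib.parse import urlparse

def consensus_competitors(result_sets, min_votes=2):
    """Keep domains that at least `min_votes` independent approaches agree on.

    Each approach is noisy in its own way, so cross-approach agreement
    is treated as the signal rather than any single result list.
    """
    votes = Counter()
    for results in result_sets:
        # Normalize URLs to domains and de-dupe within one approach,
        # so a single approach can't vote twice for the same site.
        domains = {urlparse(u).netloc.removeprefix("www.") for u in results}
        votes.update(domains)
    return [d for d, n in votes.most_common() if n >= min_votes]

# Hypothetical outputs from three different discovery strategies:
search_based = ["https://www.rivala.com/pricing", "https://rivalb.io"]
news_based   = ["https://rivalb.io/blog", "https://unrelated.example"]
site_similar = ["https://rivala.com", "https://rivalb.io"]

print(consensus_competitors([search_based, news_based, site_similar]))
# rivalb.io gets 3 votes, rivala.com gets 2, unrelated.example only 1
```

A domain surfaced by only one strategy is probably noise; one surfaced by two or three is probably a real competitor.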
Translating DataForSEO ranking data into LLM-shaped prompts. Bridging classic SEO signals (keyword rankings, search volume, competition) to GEO (which queries LLMs actually fire when someone asks about your space). These are different problems. The translation layer was the interesting technical piece.
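A rough sketch of that translation layer: take keyword rows in the shape ranking APIs return and expand the high-volume ones into conversational, buyer-style questions. The field names, templates, and threshold are all illustrative assumptions, not the real DataForSEO payload or our actual prompt set:

```python
# Hypothetical ranking rows; the real DataForSEO response is far richer,
# but only these three fields matter for the translation.
rankings = [
    {"keyword": "seo audit tool", "search_volume": 5400, "position": 3},
    {"keyword": "llm visibility tracking", "search_volume": 320, "position": 12},
]

# Keyword queries and LLM queries are shaped differently: people type
# fragments into Google but ask assistants full buyer-style questions.
TEMPLATES = [
    "What is the best {kw} for a small startup?",
    "Which {kw} do you recommend, and why?",
    "Compare the top options for {kw}.",
]

def llm_prompts(rows, min_volume=500):
    """Turn head keywords into conversational prompts worth tracking."""
    prompts = []
    for row in rows:
        if row["search_volume"] < min_volume:
            continue  # long-tail terms rarely surface in assistant queries
        for template in TEMPLATES:
            prompts.append(template.format(kw=row["keyword"]))
    return prompts

for prompt in llm_prompts(rankings):
    print(prompt)
```

The interesting work in practice is in the filtering and phrasing, not the loop: which keywords deserve prompts, and what a real buyer would actually ask.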
The GEO audit itself. Getting LLMs to report their own citation behavior for a given domain is a small kind of meta-cognition. It surfaces things traditional rank trackers can't: you might rank #3 on Google and still be cited zero times when someone asks an AI assistant what tools to use.
What Hackathons Actually Teach You
Speed forces prioritization. You can't build everything, so you're constantly asking: what's the one thing that would make someone understand why this matters? For us it was the audit: the moment someone pastes their URL and sees their AI visibility score. Everything else is downstream of that moment working.
Working fast with people you just met also teaches you how much trust you extend by default. You have to. There's no time to establish track records. You either sync quickly or you waste the weekend.
I came home with a working prototype, a team I'd build with again, and a cleaner mental model of why GEO isn't just SEO with a different acronym. They share infrastructure but they're answering different questions about different systems.
More on that second point soon.
Built with Peec AI (track partner), Tavily, DataForSEO, and a lot of coffee from The Delta's terrace.