FAILING with OpenClaw so you don't have to
(and how to use it right)
Hello, Hoodies! I’m EXHAUSTED
As of midnight last night, after nearly pulling an all-nighter and spending 18 of the last 24 hours trying to get OpenClaw to do the thing, I finally threw my hands up, scrapped it, and am reinstalling (again).
But what happened??? I heard people were getting rich on Polymarket with that??
Maybe. But that wasn’t my use case.
I wanted to push its limits to see how far I could go. What’s the absolute most I could get it to do? And I was going to run it all through Ollama.
So I made what I can only describe as beautiful architecture to keep it secure, affordable, AND leverage the fat monster of a GPU in my gaming PC.
Let me explain:
OpenClaw is an autonomous AI agent platform. You give it skills and API keys, it uses an LLM to reason, and it goes and does things.
Super powerful. A lot of manual config. And it gets EXPENSIVE if you’re using Opus (Claude’s top model).
So I had a brilliant idea. Why not run it on Ollama? Surely that’ll work!
But I would need some beefy hardware. In comes the gaming PC.
And I don’t want to run it locally. That’s neither safe nor secure. But I still want to leverage my GPU.
So this was my plan (and you can apply it to any AI agent you want to reach somewhere else via API, without configuring external authentication for that API).
This is the basic overview:
a DigitalOcean droplet
Ollama running on my gaming PC
a WireGuard VPN from the droplet to my gaming PC
an updated firewall rule for port 11434 (Ollama’s default API port)
And then I can hit local AI from a cheap droplet with $0 in API fees.
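Here’s a rough sketch of what that wiring looks like, assuming the gaming PC runs Linux with ufw (on Windows you’d use Windows Firewall instead). The VPN addresses (10.8.0.x), interface name (wg0), and keys are all placeholders, not anything OpenClaw-specific:

```shell
# --- On the gaming PC ---
# Ollama only listens on 127.0.0.1 by default, so bind it to the
# WireGuard interface address instead:
OLLAMA_HOST=10.8.0.2:11434 ollama serve

# Allow only the droplet's VPN address through the firewall:
sudo ufw allow from 10.8.0.1 to any port 11434 proto tcp

# --- On the droplet: /etc/wireguard/wg0.conf (placeholders) ---
# [Interface]
# PrivateKey = <droplet-private-key>
# Address = 10.8.0.1/24
#
# [Peer]
# PublicKey = <gaming-pc-public-key>
# AllowedIPs = 10.8.0.2/32
# Endpoint = <home-ip>:51820
# PersistentKeepalive = 25

# Bring the tunnel up and sanity-check that the droplet can see Ollama:
sudo wg-quick up wg0
curl http://10.8.0.2:11434/api/tags   # lists the models on the gaming PC
```

The nice part of this design is that port 11434 is never exposed to the public internet; only the droplet, over the tunnel, can reach it.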
AND IT WORKS! Reliably well.
Just one problem.
Ollama should not be used for OpenClaw.
“BUT IT’S SUPPORTED!” I know, but hear me out.
There’s something that these big providers have that we don’t. Infrastructure.
You can try to run a small model like llama3.2, but after an hour, you’ll quickly run into context window issues. At that point, it becomes almost unusable.
OpenClaw crams A TON of context into the window. The big providers have more efficient algorithms and ways of handling it. Can you run deepseek 30b and get a good result? Yes! But your responses will crawl along at 0.5 characters/second and eat your whole machine in the process.
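If you still want to try, you can at least raise Ollama’s context window, which defaults to a small num_ctx that agent-sized prompts blow past quickly. A minimal sketch using a Modelfile (the model tag and 16384 value are examples, not recommendations):

```shell
# Build a variant of llama3.2 with a larger context window.
cat > Modelfile <<'EOF'
FROM llama3.2
PARAMETER num_ctx 16384
EOF

# Register it under a new name and run it:
ollama create llama3.2-16k -f Modelfile
ollama run llama3.2-16k
```

Fair warning: a bigger num_ctx means more VRAM, so this trades one bottleneck for another on consumer hardware.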
So what’s the takeaway? Should you use OpenClaw?
Yes. If you want to experiment, do the following.
Make a DigitalOcean droplet with 2 cores and 4 GB of RAM for $24/month.
Use an OpenAI or Claude API key and pick your model. You can experiment with 4oMini, but I’d think the new 5.4Mini is likely the sweet spot. I’ll be playing with both.
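Those two steps can be sketched with doctl, DigitalOcean’s CLI. The name, region, and image are examples; s-2vcpu-4gb is the 2-core / 4 GB slug. How OpenClaw actually reads the key depends on its config, so the env vars below are the usual convention, not a documented OpenClaw requirement:

```shell
# Create the droplet (region/image are examples):
doctl compute droplet create openclaw \
  --region nyc1 \
  --image ubuntu-24-04-x64 \
  --size s-2vcpu-4gb \
  --ssh-keys <your-ssh-key-id>

# Then, on the droplet, export whichever provider key you picked:
export OPENAI_API_KEY=sk-...
# or
export ANTHROPIC_API_KEY=sk-ant-...
```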
Finally, I may create a fun little program called LlamaClaw, designed to run a simple, lightweight OpenClaw with Ollama. Stay tuned!
Oh, and I’ll be dropping a new SaaS app soon as well, so keep your eyes peeled for that. A resume tailoring and job tracking tool. A few bucks a month.
Upload a resume, paste a job description, push a button, and away you go.
Love you all. If you want the most up-to-date news on AI and tech, go join the free Skool.
Cheers,
Evan Lutz (BowTiedCyber)