Hello Asmblers!
Would anyone be interested in playing around with OpenClaw tomorrow (Saturday 7th) anytime before 3pm?
I know very little, but it’s very intriguing to me, so instead of spending the day playing all by myself, I was wondering if anyone would want to get together and figure out what this thing is, what it can do, how to deploy it on our servers, and what we can do with it!
I’m thinking of a work-alongside, break-things-together session, so you’d need to come with a way to deploy this (slightly dangerous) AI on a computer of yours, either onsite or remotely… just make sure it’s a computer (or a VM) that you can format multiple times and that holds no information about you.
In my case, I have a Proxmox server, so I’m planning to deploy it in a small LXC and ideally use another VM running Ollama with a 3080… or just use Claude’s API credits to compare.
I was in the middle of upgrading my Home Assistant with a local LLM to make it smarter right when all this OpenClaw craziness happened last weekend, so my initial end goal is to figure out whether I could use OpenClaw and HA together somehow.
I haven’t started my membership yet, so I’m not sure if we could meet at Asmbly, but all we need is our laptops, so I’m down to meet anywhere in case we can’t do it there.
I’d be very interested in chatting about it. I poked around at it out of curiosity last week (before it changed names, twice!) but didn’t go very far.
Particularly interested in (a) what interfaces people are setting up to it and (b) how people are securing their setup/guardrailing it to be useful without having free rein.
I’ll be around tomorrow but am mainly planning to keep busy in the woodshop. If others are talking about it I’ll try and come join!
No worries!
I’ll stay home for now and play around with it for a bit. We can try this some other time when you are available, or in case we get any other takers.
Ok, I’ve finally made some progress and I can say that Home Assistant + OpenClaw is actually pretty cool.
I have it set up with a mix of my local LLM, local voice pipeline, and Anthropic’s API for more complicated things, and it feels a little bit like talking to Jarvis.
I wish I could do it all locally, but the context required is usually too big for my setup. Still, I can run STT and TTS faster than real time without consuming external tokens, and memory stays local and private, so that’s pretty nice.
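For anyone curious, the split I’m describing is roughly this idea: small requests go to the local model, and anything with too much context falls back to the hosted API. This is just a minimal sketch of that routing decision; the token threshold, the character-per-token heuristic, and the backend labels are assumptions from my own setup, not anything OpenClaw or HA ships with.

```python
# Sketch: route a prompt to a local Ollama model or Anthropic's API based on
# estimated context size. Threshold and heuristic are my setup's assumptions.

def estimate_tokens(text: str) -> int:
    # Very rough heuristic: ~4 characters per token for English text.
    return max(1, len(text) // 4)

def pick_backend(prompt: str, local_limit_tokens: int = 4096) -> str:
    """Small prompts stay local; big-context ones go to the hosted API."""
    if estimate_tokens(prompt) <= local_limit_tokens:
        return "local"      # e.g. Ollama listening on the LAN
    return "anthropic"      # e.g. Claude via API credits

# Example routing decisions:
print(pick_backend("Turn off the living room lights."))     # → local
print(pick_backend("Summarize this log: " + "x" * 50_000))  # → anthropic
```

In practice you’d hang the actual HTTP calls to each backend off that return value; the nice part is that short voice commands never leave the house.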
We are still figuring out the true power and benefits of this setup, but so far, creating relatively complicated automations or checking on our dogs while we’re out with just a quick message on Telegram is pretty cool.
Has anyone played around with this yet? I’m curious to see how other people are using this new super-power for our HA!