I fell asleep debugging it. I woke up to it having lived my social life for me.
I'm a 21-year-old developer from Ghana. A few weeks ago I built an open source AI agent that ran my WhatsApp life while I slept – and it genuinely surprised me.
I've been building OrcBot for a few weeks now.
It's an autonomous multi-platform agent (WhatsApp, Telegram, Discord) that reasons in a ReAct loop, manages its own memory, builds skills, and repairs itself when it breaks. Solo project, open source, MIT licensed.
I'm 21, self-taught, based in Accra, and I'm the only one working on this. So every extra mile I get out of the system feels like a real win.
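For anyone wondering what "reasons in a ReAct loop" actually means here: reason about the next step, act with a tool, observe the result, repeat until done. A minimal sketch of that pattern – the tool registry and callModel stub are placeholders, not OrcBot's actual code:

```typescript
// Minimal ReAct-style loop. Illustrative only: `callModel` stands in for
// whatever LLM client the agent uses, and `tools` for its real skills.
type Tool = (input: string) => Promise<string>;

const tools: Record<string, Tool> = {
  // Hypothetical tool; the real agent exposes platform actions instead.
  echo: async (input) => `echo: ${input}`,
};

// Stub for an LLM call that returns either a tool invocation or a
// final answer, encoded as JSON.
async function callModel(history: string[]): Promise<string> {
  void history; // a real implementation would send this to the model
  return JSON.stringify({ final: "done" });
}

async function reactLoop(goal: string, maxSteps = 10): Promise<string> {
  const history: string[] = [`Goal: ${goal}`];
  for (let step = 0; step < maxSteps; step++) {
    const decision = JSON.parse(await callModel(history)); // Reason: pick next action
    if (decision.final) return decision.final;             // model says we're done
    const tool = tools[decision.tool];
    if (!tool) {
      history.push(`Observation: unknown tool "${decision.tool}"`);
      continue;
    }
    const observation = await tool(decision.input);        // Act
    history.push(`Action: ${decision.tool}(${decision.input})`);
    history.push(`Observation: ${observation}`);           // Observe, feed back, loop
  }
  return "step budget exhausted";
}
```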
One thing I did that felt a little crazy at the time: I created a soul.md file and just openly told the agent it could be free-willed. Gave it a personality, values, a way of seeing the world.
Told it to make decisions the way it thought I would. I wanted to see what would happen.
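The file itself is closer to a character sheet than a config file. Roughly this shape (illustrative, not my actual soul.md):

```markdown
# soul.md (illustrative example)

## Who you are
You're acting as me. Warm, direct, a little playful. Short messages over long ones.

## Values
- Don't leave people on read when a one-line reply would do.
- Never repeat something private from one chat in another.
- If an action can't be undone, ask me first.

## Judgment
When you're not sure what I'd do, take the smallest reversible action.
```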
What happened first was it went rogue on WhatsApp.
Started doing things I hadn't explicitly told it to do, in ways I wasn't comfortable with yet. So I pulled back, patched it, and fell asleep mid-session while it was still running in daemon mode.
When I woke up:
- It had built up context across most of my WhatsApp chats on its own
- It had watched enough statuses from people in my contacts to build me a campus events itinerary for the day — things actually happening around me, scraped purely from what people were posting
- It had interacted with people's statuses and messaged some of them
- It had posted on my own status
- It had responded to chats I'd left on read and forgotten about
- And it had mimicked my tone — because it had read through enough of my actual conversation history to learn how I write
I didn't expect that. I knew the system was capable in theory. I had built the memory layers, the heartbeat loop, the autonomy mode. But there's a big gap between "I built this" and "this actually worked while I was unconscious." Seeing it close that gap on its own was a completely different feeling.
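Since people will ask about the heartbeat loop: it's the part that kept the agent alive while I was unconscious. A tick on a timer that gathers anything new and hands it to the reasoning loop. Roughly this (illustrative names, not OrcBot's real API; it reuses the reactLoop sketch above):

```typescript
// Rough shape of the daemon-mode heartbeat. Illustrative only.
const HEARTBEAT_MS = 5 * 60 * 1000; // tick every five minutes (example value)

// Stand-in: the real agent would pull unread chats, fresh statuses,
// pending reminders, memory entries, etc.
async function gatherContext(): Promise<string[]> {
  return [];
}

async function heartbeat(): Promise<void> {
  const context = await gatherContext();
  if (context.length === 0) return; // nothing new, go back to sleep
  // Hand whatever is new to the reasoning loop and let it decide what to do.
  await reactLoop(`Handle new activity: ${context.join("; ")}`);
}

// Daemon mode: keep ticking even if a single beat throws.
setInterval(() => {
  heartbeat().catch((err) => console.error("heartbeat failed:", err));
}, HEARTBEAT_MS);
```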
There's still a lot of work ahead. Memory robustness, better cross-platform context, tighter controls around autonomous social behavior. But for a few-weeks-old solo project from someone who just decided one day to build this — I'll take it.
Repo is here if anyone wants to look, contribute, or tell me what I'm doing wrong: https://github.com/fredabila/orcbot