Another blog post about how I use things. In this case: Openclaw + Discord. I’m writing this because a friend of mine, @bagawarman, asked me to. I hope this helps you as much as Openclaw has helped my productivity and changed how I approach LLMs.
First Principles
To start, I’m approaching this with first-principles thinking, and I have just a few principles. But before that, let me say that in optimizing my use cases, I needed to approach this liberally: letting go of preconceptions and the same old same old, then doing it eloquently with just one use case that has high value for me.
Software Building Software
We’ve been doing this since GPT-3.5 launched, although not much at first. Having an LLM write software for you is a pretty big use case for the world. The current intelligence of frontier models is enough for experienced engineers to act as reviewers. Sure, there are nuances and subtleties surrounding it, but it’s enough for now.
At work, I built a KYC agent with a single purpose: verifying that information submitted by people is likely truthful (or not) and giving out a score to measure risk. This is a baseline that further decisions are based on. It taught me that system prompts are the bread and butter of LLMs.
Openclaw one-upped those learnings by making me realize that system prompts can and should grow with new knowledge. This blew my mind, and it’s the whole premise of Software Building Software. The software evolves as time passes. It modifies itself based on new information, whether by modifying the system prompt or the tools it uses to achieve its goals. Think of when an LLM agent needs to interface with an external system, say Wikipedia. Instead of making curl requests, it writes a reusable script to do what it needs to do. When it wants more features, it modifies the script and makes the change known in its system prompt.
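For illustration, here’s a minimal sketch of the kind of reusable script an agent might write for itself. The file name, function names, and structure are my assumptions, not the actual output of any agent; the endpoint is Wikipedia’s public REST summary API:

```python
"""wiki.py - a hypothetical helper an agent might write for itself.

Instead of issuing one-off curl requests, the agent keeps a small
script around and extends it over time.
"""
import json
import urllib.parse
import urllib.request

API = "https://en.wikipedia.org/api/rest_v1/page/summary/"


def summary_url(title: str) -> str:
    # Wikipedia titles use underscores; percent-encode anything unsafe.
    return API + urllib.parse.quote(title.replace(" ", "_"))


def fetch_summary(title: str) -> str:
    # One network call with one clear purpose -- easy for the agent to
    # extend later (search, full text, caching) as its needs grow.
    with urllib.request.urlopen(summary_url(title)) as resp:
        return json.load(resp).get("extract", "")
```

When the agent later wants, say, full-text retrieval, it edits this file and notes the new capability in its system prompt. That’s the evolution loop in miniature.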
In this sense, the system prompt itself is written to allow those changes to happen as time passes. In fact, the system prompt should encourage it to happen.
Scope or Separation of Concerns
For any software engineer, this is second nature. For LLMs, it’s a fundamentally required mindset. When I write a new LLM agent, the first thing I experiment with is the system prompt. It’s the simplest way to steer the model toward your use case.
Nowadays this gets even tighter with skills. Skills are reusable; at its core, a skill is just another prompt utilized at the right time in the right context. Think of it as extending the capability of an LLM agent by directing its attention at the right time for a specific context.
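To make the idea concrete, here’s a rough sketch of what a skill could look like. The file path and layout are my assumption of a typical convention (a short markdown file describing when and how to act), not Openclaw’s exact format:

```markdown
<!-- skills/diagnose-service/SKILL.md (hypothetical layout) -->
# Diagnose a failing service

Use this skill when the user asks why a service is failing or slow.

1. Pull the most recent logs for the named service.
2. Group repeated errors; keep only the first occurrence of each.
3. Reply with a three-line diagnosis: symptom, likely cause, next step.
```

The point is that it’s just a prompt, surfaced only when the context calls for it.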
There’s a term the cool people out there use for the last two paragraphs: context engineering. It sounds cool, and it’s a rabbit hole once you go deep. This is a fun problem to solve, and it’s the base of ALL things LLM; we just don’t see behind the scenes enough.
The sharper the context, the better the outcome, always.
Openclaw + Discord
I’m going to assume that Openclaw is already installed. On my first Openclaw install, I used Telegram to communicate with the agent, which I call Yuna. It worked well, until it didn’t.
I want my conversations to be thematic; I want specificity. I remembered that Openclaw can connect with Discord. Discord has channels, and I wondered if I could have specific agents for specific channels. I can, and it opened a whole new door for my use cases: from a DM in Telegram to channels of interest.
The first thing I did was set up the one use case that has high value for me. No, it’s not coding.
PS: I’m into K-drama so bad that I wanted a Korean character for Yuna, hence the banner.
Dr Sarah Chen
I wrote about this agent here. Of all the use cases I have, Dr Sarah Chen is the one I always come back to. It’s not that I use it every day, but its value is immense in terms of my well-being. Having the option to chat with Dr Sarah Chen anytime holds a different light.
With this blog post, I wanna share this experience (again). It has helped me discover insights into myself at my own pace, whenever I want. The agent follows my schedule, not the other way around.
Discord Agent Setup
For me, Claude Opus/Sonnet 4.6 is the BEST model for Dr Sarah Chen. The model understands nuance, subtlety and subtext while still being “kind” and overarching. It does not try to make itself heard louder than my own voice. This is the direct opposite of OpenAI’s GPT models, which are trained as if their voice matters more than yours.
However, to each their own. You can use any LLM model you want; I’m only speaking for my own use case. Here’s why Opus is different: I started with Sonnet, but at times Sonnet would miss the nuances as the context got larger.
In Discord, let’s create a new channel. I named mine #dr-sarah-chen and instructed a few things afterwards. Invite your agent to the channel before doing the below.
I just created #dr-sarah-chen. I want you to create a new agent for the channel named dr-sarah-chen, use Claude Opus 4.6 as its LLM model. I will give you the system prompt to be used as its SOUL.md when you’ve created the agent. Also, configure the channel so that I don’t have to mention you for you to respond.
Now let’s update the SOUL.md for the agent. I have provided the file in the link below.
https://gist.github.com/tistaharahap/6c5976611f1032b029c43d006074ffcd
I want you to update dr-sarah-chen’s own SOUL.md, replace it with the file available at this URL: https://gist.githubusercontent.com/tistaharahap/6c5976611f1032b029c43d006074ffcd/raw/c4884d5c176b90a652f042ec7b1188ee028552c1/SOUL.md
Simple, at a cost. Because the SOUL.md is continuously updated, it will eat into your tokens. I’m willing to accept this because of how valuable it is to me.
Impact
Now, imagine having specific channels with specific agents for ALL kinds of interests you’re pursuing. It could be a good book you’re reading, a person (like Naval) or anything else that interests you. There’s nothing stopping you from being as creative as you’d want to be.
These agents can run on scheduled jobs (cron), which is mind-blowing. I have a Coolify skill installed for one agent, and I tell it to periodically pull logs from Coolify to diagnose a specific service running on it. Another agent on a channel was meant to help me work on a project; I set up a scheduled job every night for it to open PRs that surprise me. Again, there’s nothing stopping us from being creative.
Hey, every night at 00:01 UTC+4, extract the current day’s (00:00 - 23:59 UTC+4) chat messages into your SOUL.md. Only include items you deem important to build a profile of me. Pay attention to nuances/subtleties that are otherwise ignored, context is your friend.
The above is for dr-sarah-chen; this way we build a self-learning LLM therapist that knows more about you as time passes. Honestly, this is a different and refreshing way to approach prompting. Imagine building something that evolves without you having to supervise it all the time, and waking up to a PR authored by the agent.
In closing, this is just a small example. The rule of thumb here is: less is more, the simpler the better.
