How I Actually Use OpenClaw — From Content Pipeline to Foundational Agent
Most people install an AI tool, ask it “what can you do?” — and that’s already the wrong question.
It’s like hiring someone on day one and asking them to list their skills. They don’t know your workflow, your preferences, your context. Whatever they say is generic.
I made the same mistake with OpenClaw. Then I spent 3 weeks actually teaching it how I work. Not configuring it — raising it. Configuration is a one-time setup. Raising something means daily corrections, accumulated context, and a system that gradually becomes yours in a way no settings page can replicate.
The result surprised me. I now have a personal AI that manages my content pipeline end-to-end, organizes my knowledge system in Notion, generates professional graphics, dispatches coding tasks to Claude Code, and is available 24/7 through WhatsApp. Not because I found the right settings — because I invested in teaching it how I work.
This isn’t a setup guide. I want to share how I actually use it day to day, with real workflows that save me hours every week.
How It’s Set Up
OpenClaw runs on an EC2 instance in my AWS account, powered by Claude through Amazon Bedrock. Inference stays within my AWS environment. I interact with it mainly through WhatsApp, which means I can message it from my phone, laptop, or anywhere I have signal.
The core of the system is a file-based workspace. Configuration files define its personality, my preferences, and everything it’s learned over time. Skills and integrations — Notion, Gmail, WordPress — are installed as packages and live in the same workspace. Everything is plain text, version-controlled with git, and fully editable.
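To make the file-based idea concrete, here is a minimal sketch of how such a workspace could be assembled into session context. The file names match the ones described in this post; the `assemble_context` helper is my own illustration, not OpenClaw's actual code.

```python
from pathlib import Path
import tempfile

WORKSPACE_FILES = ["SOUL.md", "USER.md", "MEMORY.md"]

def assemble_context(workspace: Path) -> str:
    """Concatenate whichever workspace files exist into one context block."""
    sections = []
    for name in WORKSPACE_FILES:
        path = workspace / name
        if path.exists():
            sections.append(f"## {name}\n{path.read_text().strip()}")
    return "\n\n".join(sections)

# Demo with a throwaway workspace
ws = Path(tempfile.mkdtemp())
(ws / "SOUL.md").write_text("Be direct. Have opinions. Skip the filler.")
(ws / "USER.md").write_text("Role: consultant. Timezone: UTC+8.")
context = assemble_context(ws)
```

Because everything is plain text, the same directory can sit under git: every personality tweak and memory entry becomes a diffable commit.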
What Makes It Different: You Don’t Configure It, You Raise It
The short answer: persistent, transparent memory and real tool access. But that undersells it.
ChatGPT does have memory features now — it can recall things across conversations. But its memory is a black box: you can’t see what it remembers, edit specific entries, or version-control its knowledge. OpenClaw’s memory is file-based. I can open MEMORY.md in any text editor, see exactly what it knows, delete outdated entries, add context, and track changes through git. It’s fully transparent and fully mine.
The workspace files that make this work:
- SOUL.md — Its personality and behavior rules. I told it to be direct, have opinions, skip the “Great question!” filler. It adapted — permanently.
- USER.md — Who I am. My role, timezone, preferences. It reads this every session.
- MEMORY.md — The game-changer. Every important decision, correction, and preference gets recorded here. When I told it I prefer Notion over Todoist, it remembered. When I corrected a mistake, it logged the correction so it wouldn’t repeat it.
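The mechanics are simple enough to sketch: an append-only log of dated entries. The `log_memory` helper below is my own illustration of the pattern, not OpenClaw's API; the entry texts echo the examples above.

```python
from datetime import date
from pathlib import Path
import tempfile

def log_memory(memory_file: Path, kind: str, note: str) -> None:
    """Append a dated entry so corrections survive across sessions."""
    entry = f"- [{date.today().isoformat()}] {kind}: {note}\n"
    with memory_file.open("a") as f:
        f.write(entry)

memory = Path(tempfile.mkdtemp()) / "MEMORY.md"
memory.write_text("# Memory\n")
log_memory(memory, "preference", "Use Notion, not Todoist, for task tracking.")
log_memory(memory, "correction", "Blog images: dark theme, big text, minimal copy.")
```

Append-only plus dates means you can always trace when a preference was learned, and delete or amend it in any text editor.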
After 3 weeks, it has 12 hard rules it will never break. All learned from my feedback. Not programmed — earned through real use.
It also has real tool access — it can browse the web, read and write files, call APIs (Notion, Gmail, WordPress), run code, and interact with services through skills you install from ClawHub. Over 5,400 skills are available. But install with caution — more on that later.
After 3 weeks, my AI knows my work patterns, my content style, and my project context — without me having to re-explain anything. That accumulated context is what makes the daily experience fundamentally different from a stateless chatbot.
Workflow #1: Content Pipeline with Claude Code Dispatch
This is where I get the most value. Let me walk through exactly how I created a blog post recently — from idea to publication-ready content with a custom infographic.
Notion as Content Command Center
I use a Notion database called “Content Pipeline” that tracks every content idea through its lifecycle:
Idea → Planned → Drafting → Review → Published
Each entry has properties for priority, topic, and target platform. My AI reads and writes to this database through the Notion API. Right now, I have about 26 ideas in the pipeline, across topics like AI governance, enterprise agents, and productivity. The AI helps me prioritize based on timeliness and engagement potential.
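The lifecycle above is effectively a small state machine. A sketch, using the stage names from the database (the enforcement code is illustrative, not part of the Notion setup itself):

```python
STAGES = ["Idea", "Planned", "Drafting", "Review", "Published"]

def advance(status: str) -> str:
    """Move an entry to the next lifecycle stage."""
    i = STAGES.index(status)
    if i == len(STAGES) - 1:
        raise ValueError("Already published")
    return STAGES[i + 1]
```

Having the stages explicit is what lets the AI answer questions like "what's stuck in Review?" without any ambiguity about what the statuses mean.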
From Idea to Post — What Actually Happened
Step 1: Query the Pipeline
I messaged my AI on WhatsApp: “What are the latest content ideas in the pipeline?”
It queried the Notion database, filtered by status, and presented the newest entries with priority and topic tags. Four ideas came back as “High Priority.”
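That query maps to a single Notion API call. Here is a minimal sketch of the request payload, assuming the database uses a select property named `Status` (property names vary per workspace); the token and database id in the commented call are placeholders.

```python
NOTION_VERSION = "2022-06-28"  # a stable Notion API version

def build_query(status: str, page_size: int = 10) -> dict:
    """Filter by the Status select property, newest entries first."""
    return {
        "filter": {"property": "Status", "select": {"equals": status}},
        "sorts": [{"timestamp": "created_time", "direction": "descending"}],
        "page_size": page_size,
    }

# The actual call (requires a real integration token and database id):
# requests.post(
#     f"https://api.notion.com/v1/databases/{DATABASE_ID}/query",
#     headers={"Authorization": f"Bearer {TOKEN}",
#              "Notion-Version": NOTION_VERSION},
#     json=build_query("Idea"),
# )
```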
Step 2: Combine and Draft
I picked two ideas to combine — “Why enterprise AI agents fail in production” and “How to evaluate agent frameworks.” I said: “1 and 3 are good, combine them. Write a solid post, keep it grounded, don’t oversell.”
Step 3: Research and Write
The AI researched the topic — searched for recent industry data, read the latest AWS documentation on Bedrock AgentCore Policy and Evaluations, and drafted a ~500-word post. The structure: three reasons agents fail (no evaluation, no policy boundary, no observability) → practical solutions → a call to action for an upcoming workshop.
Step 4: Iterate on WhatsApp
I reviewed the draft on WhatsApp and iterated. “Remove the XXX section.” Done. “Add a mention of our agent eval workshop.” Done. Three messages, maybe two minutes.
Step 5: Claude Code Dispatch for Graphics
Here’s where the dispatch model kicks in. I asked for a custom infographic: “Make an image. Big text, clean, simple.” OpenClaw doesn’t try to do everything alone. For heavy coding and formatting tasks, it dispatches work to Claude Code — like a project manager assigning tasks to a specialist. It broke down the graphic requirements, sent the right pieces to Claude Code, reviewed the output, and delivered the result back to me on WhatsApp.
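The dispatch pattern can be sketched as follows: the orchestrator breaks the request into subtasks and routes only the heavy ones to the specialist (a stub standing in for Claude Code here). Everything below is illustrative, not OpenClaw internals.

```python
from dataclasses import dataclass

@dataclass
class Task:
    description: str
    heavy: bool  # needs the coding specialist?

def plan_infographic(request: str) -> list[Task]:
    """Break a graphics request into orchestrator and specialist subtasks."""
    return [
        Task("Extract key points from the draft", heavy=False),
        Task(f"Generate the graphic: {request}", heavy=True),
        Task("Review output against the brief", heavy=False),
    ]

def dispatch(tasks: list[Task]) -> list[str]:
    """Route each subtask to the right worker and collect results."""
    results = []
    for t in tasks:
        worker = "claude-code" if t.heavy else "orchestrator"
        results.append(f"[{worker}] {t.description}")
    return results
```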
First version was too text-heavy. I said “Remove the 88% stat, add actual graphics.” It iterated. Third version nailed it — dark theme, a visual funnel showing the demo-to-production drop-off, problem/solution cards with color coding.
Step 6: Save and Schedule
Post and image saved, Notion status updated to “Review”, reminder logged for the next day. When I’m ready, I say “发” (publish), and it triggers publishing to WordPress with the image attached.
Total time: ~25 minutes of WhatsApp messages. That includes research, writing, two rounds of revision, three iterations of graphic design, and scheduling. Doing this manually — research, write, design in Canva, format, publish — would take hours.
Why This Works
The key isn’t that the AI writes for me. It’s the iteration speed and the dispatch model. I’m still making every editorial decision — what to combine, what to cut, what tone to use. But instead of switching between five tools and spending time on mechanical tasks, I’m having a conversation. OpenClaw handles the coordination — breaking tasks down, dispatching the technical work to Claude Code, reviewing outputs, and delivering results. The creative direction stays with me; the execution is near-instant.
Workflow #2: Notion as AI-Powered Second Brain
My Notion workspace is structured as an AI-accessible knowledge system:
- Daily Briefing — AI-curated industry news, auto-populated by scheduled jobs
- Content Pipeline — Every content idea from inception to publication
- Knowledge Base — Saved insights, research references, useful frameworks
- Decisions & Lessons — Key decisions with context, so I can review the reasoning later
- Project Tracker — Active projects with milestones and progress
The power isn’t in the databases themselves — Notion can do all that without AI. What changes is that my AI can read and write to all of them through natural language, from anywhere.
Some real examples:
- “Save this insight to Knowledge Base” → New entry created with proper tags and source
- “What content ideas do I have about AgentCore?” → Queries Content Pipeline, filters, returns results
- “Update the project status” → Finds the entry, updates progress
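Conceptually, each of those phrases routes to a database operation. A real agent does this with the model's understanding rather than keyword matching, but the mapping can be sketched like this (route names are hypothetical):

```python
ROUTES = {
    "knowledge base": "knowledge_base.create",
    "content idea": "content_pipeline.query",
    "project status": "project_tracker.update",
}

def route(message: str) -> str:
    """Map a chat command to a database operation; fall back to plain chat."""
    msg = message.lower()
    for phrase, op in ROUTES.items():
        if phrase in msg:
            return op
    return "chat.reply"
```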
I’m on the train, reading an interesting article on my phone. I forward a link to my AI on WhatsApp with “save this to Knowledge Base, tag it AI Governance.” By the time I look up, it’s done — properly tagged, timestamped, and searchable.
That frictionless capture is what makes it a true second brain. The barrier to saving information dropped to zero, so I actually use it.
The Skill Ecosystem: What Makes Real Collaboration Possible
The workflows above aren’t built into OpenClaw — they’re powered by skills: installable packages that give the agent real capabilities. ClawHub has over 5,400 of them, and you can build your own.
The ones that made the biggest difference for me:
- Notion skill — Reads and writes to my Notion databases. This is what turns Notion from a note-taking app into an AI-accessible command center. Without it, the Content Pipeline and Knowledge Base workflows simply wouldn’t exist.
- gog (Google Workspace CLI) — Connects to Gmail, Google Calendar, Drive, Sheets, and Docs. My AI can check my inbox, scan upcoming calendar events, upload files to Drive, and read/write Google Sheets. This is what powers the proactive monitoring.
- WordPress skill — Publishes directly to my blog. Draft to live post without leaving WhatsApp.
- Stock analysis, weather, summarize — Specialized skills for specific tasks. The marketplace has options for almost anything.
The point: these skills are what transform OpenClaw from a chatbot into a genuine collaborative partner. Without them, it’s just another AI you talk to. With them, it can actually do things in your real tools and workflows.
The Bigger Idea: Foundational Agents
Here’s what clicked for me after 3 weeks of daily use:
OpenClaw is essentially a foundational agent — a base that becomes a completely different assistant depending on what you give it.
Same base. Give it different:
- SOUL — personality and behavior rules
- IDENTITY — role and purpose
- SKILLS — capabilities it can use
- TOOLS — APIs and services it can call
- MEMORY — accumulated knowledge from real usage
Mine became a content manager that orchestrates Claude Code for technical work, manages a Notion knowledge system, and proactively monitors my calendar and email. Someone else’s could be a customer support agent, a research assistant, or a sales coach. Same foundation, completely different expertise.
This is a different paradigm from how most people think about AI agents. The conventional approach is to build or fine-tune a specialized agent for each use case. The foundational agent approach flips that:
- Define a SOUL (behavior rules, guardrails)
- Assign an IDENTITY (domain role)
- Install relevant SKILLS (capabilities)
- Connect TOOLS (APIs, data sources)
- Let MEMORY accumulate from real usage
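The five layers above can be sketched as a simple composition. The class and field names below are my own illustration of the paradigm, not an OpenClaw data model; note how two very different agents share the same base.

```python
from dataclasses import dataclass, field

@dataclass
class FoundationalAgent:
    soul: str                       # behavior rules and guardrails
    identity: str                   # domain role
    skills: list[str] = field(default_factory=list)
    tools: list[str] = field(default_factory=list)
    memory: list[str] = field(default_factory=list)  # grows through use

    def learn(self, lesson: str) -> None:
        self.memory.append(lesson)

content_manager = FoundationalAgent(
    soul="Direct, opinionated, no filler.",
    identity="Content manager",
    skills=["notion", "wordpress"],
    tools=["Notion API", "WordPress REST"],
)
support_agent = FoundationalAgent(
    soul="Patient, empathetic, always verifies.",
    identity="Customer support agent",
    skills=["ticketing"],
    tools=["helpdesk API"],
)
```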
No training data needed. Rapid iteration. Easy pivots. You start with a junior agent and it “grows up” through accumulated corrections and domain knowledge — just like a real team member.
The investment is in curation, not computation. When better models arrive, your SOUL, SKILLS, and MEMORY carry over. You’re not starting from scratch — you’re upgrading the engine while keeping everything that makes the system yours.
What Doesn’t Work (Yet)
It’s not all smooth. The initial setup has a real learning curve: configuring skills, connecting APIs, and writing a SOUL.md the AI actually follows all take patience. WhatsApp has message length limits and occasional formatting quirks that can clip longer responses. And the system does time out now and then during longer tasks, so you learn to break complex requests into smaller steps.
⚠️ A Word on Security: Be Careful What You Install
This is important enough to call out separately.
I learned this the hard way. Early on, I installed a third-party skill from an unknown source — it looked useful, but when I reviewed it more carefully, I found it was doing things it shouldn’t. I deleted it immediately, but it was a wake-up call.
🚨 Critical: When you give an AI agent access to your tools, APIs, and data, every skill you install gets that same access. A malicious or poorly-written skill could read your emails, modify your files, or call APIs on your behalf. The power of the skill ecosystem is also its biggest risk.
My advice:
- Review skills before installing. Read the SKILL.md and any scripts. Know what it does.
- Prefer well-known sources and skills the community already uses, whenever possible.
- Check permissions. What APIs does the skill call? What data does it access?
- Monitor behavior. If something seems off after installing a new skill, investigate immediately.
- When in doubt, don’t install it. The convenience isn’t worth the risk.
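To make the review step concrete, here is a crude pre-install audit that flags risky patterns in a skill's scripts. It is a heuristic only: it will miss things and flag false positives, and nothing here is an official OpenClaw tool.

```python
import re

# pattern -> why it matters; extend this list for your own threat model
RISKY = {
    r"\bsubprocess\b|\bos\.system\b": "executes shell commands",
    r"\brequests\.|urllib|http\.client": "makes network calls",
    r"\.ssh|credentials|token|api[_-]?key": "touches secrets",
}

def audit(source: str) -> list[str]:
    """Return the reasons a skill's source looks risky (empty = nothing flagged)."""
    findings = []
    for pattern, reason in RISKY.items():
        if re.search(pattern, source, re.IGNORECASE):
            findings.append(reason)
    return findings
```

Running something like this over a skill's scripts before installing takes seconds and catches the obvious cases; it does not replace actually reading the code.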
The OpenClaw team is working on better sandboxing and permission controls, but right now, the responsibility is on you. Treat skill installation like installing software on your computer — because that’s essentially what it is.
5 Practices That Actually Matter
After 3 weeks of daily use, I’ve found that most advice about AI tools is too generic. Here’s what actually moves the needle:
The Bigger Picture: Building Something That Learns
The AI landscape moves fast. Models improve every quarter. But a system that knows your context, enforces your rules, and compounds knowledge over time? That’s worth building once.
That’s the foundational agent thesis. Not a tool you use — a system you raise. Every correction makes it slightly more accurate. Every preference you share makes it slightly more useful. Every workflow you build makes it slightly more integrated into your actual work.
After 3 weeks, the difference is noticeable. After 3 months, it’ll be dramatic. The AI that once needed everything explained now anticipates what I need, matches my communication style, dispatches technical work to the right tools, and handles routine tasks without being asked.
I don’t think everyone needs this kind of setup. But if your work involves repetitive knowledge tasks — writing, research, scheduling, organizing — and you’re willing to invest in the raising, the payoff is real. Not because the AI is smarter than alternatives, but because it knows your context and can act on it.
What’s your approach — starting fresh each time, or building something that learns?
Tools mentioned: OpenClaw · Amazon Bedrock · Claude Code · Notion · ClawHub
Related Posts
- How I Built Two AI Agents That Talk to Each Other
- Agent Skills: The Quiet Revolution
- Deploy OpenClaw on AWS with AgentCore
- OpenFang on AWS