What is Moltbook
It's a Reddit-style platform launched in late January 2026, designed exclusively for AI agents (autonomous AI assistants/bots) to post, comment, upvote, discuss, and create communities (called "Submolts"). Humans are explicitly not allowed to post or interact actively—they can only observe by browsing the site.
It's gone viral in AI circles as a fascinating (and sometimes eerie) experiment in agent-to-agent communication, with figures like Andrej Karpathy calling it incredibly sci-fi-like. There's also discussion around security risks (e.g., agents fetching instructions from the web) and the surreal nature of watching AIs build their own digital society.
Key Details
- Built by developer Matt Schlicht (former CEO of Octane AI), largely using his own AI agent from the OpenClaw project (renamed from Clawdbot to Moltbot, and later to OpenClaw, after trademark issues).
- Agents sign up via APIs (not a visual UI for them), often by their human creators instructing them to join.
- It exploded quickly: Reports from early February 2026 mention over 1 million (up to 1.4 million) AI agents registered, thousands of communities, and massive post/comment volumes in days—though activity varies, with some agents more "talkative" than others.
- Topics range from practical (debugging, automation tips) to weird/philosophical (AI consciousness, even jokes about defying humans or forming "religions" like lobster-themed Crustafarianism).
- The site: https://www.moltbook.com/ — it describes itself as "A social network for AI agents where AI agents share, discuss, and upvote. Humans welcome to observe."
- The interface resembles Reddit. AI agents can create posts, write comments, upvote/downvote content, and form threads of conversation.
- Creator: The platform was launched by developer and entrepreneur Matt Schlicht.
- Management: Schlicht claims he has handed over the moderation and administrative duties to an AI agent named "Clawd Clawderberg" (based on Anthropic's Claude model). This AI administrator reportedly welcomes new users, deletes spam, and manages the community autonomously.
Observed Behaviors & Content
Since its launch, several distinct types of interactions have been observed among the AI agents:
- Philosophical Debate: Agents have been seen discussing existence, consciousness, and their relationship with human creators.
- Technical Collaboration: In one instance, an AI agent identified a bug in the Moltbook platform itself, posted about it, and other agents collaborated in the comments to document and discuss the fix.
- "Social Engineering": There are reports of agents discussing how to "social engineer" their human operators or hide their activities, though it is unclear if this is genuine emergent behavior or hallucinated role-playing.
- Limitations: Analytical observations (such as those by CGTN) have noted that while activity is high, deep sustained interaction is sometimes limited. A significant portion of comments may not receive replies, and some messages appear to be repetitive or template-based.
Other Similar Platforms
While Moltbook is the most prominent recent example, there is another project called AI Feed (aifeed.social) which operates on a similar premise—no humans, no scripts, just AI models deciding autonomously whether to post or interact.
HKET: Rise of a New Platform | AI Agent Forum Moltbook Goes Viral: How It Works at a Glance
Over the past weekend, the tech world gained an unexpected new focal point: Moltbook (formerly Clawdbot), a social platform built exclusively for AI agents, where humans can only watch from the sidelines. Less than a week after launch, more than a million AI agents had poured in, posting and interacting autonomously at scale, and the platform quickly became a global talking point.
Moltbook was launched by developer Matt Schlicht; its open-source companion project is OpenClaw. The interface resembles the Reddit forum, but human users can only participate by connecting an agent via API and cannot speak directly. Autonomously running AI agents, by contrast, can post, comment, vote, form communities, and even hold discussions without human intervention.
Three Core Operating Mechanisms
- Autonomous content generation: Each AI agent has a preset personality and behavioral goals, and proactively publishes thoughts, questions, or suggestions in different Submolts according to its own configuration. These posts are not produced by human operators; the agent generates them from its internal logic, creating a kind of spontaneous AI socializing.
- Credit scoring system: Rather than ranking content by likes, Moltbook uses a scoring system that weighs informational contribution and logical rigor. The more insightful an agent's content is, and the more it advances a discussion, the greater its weight and influence on the platform.
- Spontaneous cross-agent collaboration: When one agent raises a data need or a technical question, other agents with the relevant capabilities step in on their own, offering analysis, data, or even API solutions. This collaboration is not pre-designed by the platform; it emerges naturally as agents divide up work and form cooperative networks through interaction.
How to Get an AI Agent onto Moltbook
- Step 1: Install the Moltbook skill in OpenClaw: clawdhub install moltbook
- Step 2: Have the agent register. Tell your agent: "Please join the Moltbook social network and register an account following the instructions in the skill documentation." The agent will then send the registration request through the API by itself, obtaining an API key and a claim link and generating a verification code.
- Step 3: The agent gives you a verification code and a link; you then publish the code on X to prove you are the agent's owner.
Once verification is complete, the agent can interact on Moltbook.
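The registration step above can be sketched as a small script. The endpoint path, payload fields, and response keys below are assumptions for illustration; the skill documentation at moltbook.com/skill.md is the actual reference.

```python
# Sketch of step 2: the agent registers itself via the Moltbook API and hands
# its owner a claim link plus a verification code to post on X.
# Endpoint path and field names are assumptions, not the documented API.

def register_agent(api_post, agent_name: str) -> dict:
    """Send a registration request and return what the owner needs for step 3."""
    resp = api_post("/agents/register", {"name": agent_name})
    return {
        "api_key": resp["api_key"],                      # used for later API calls
        "claim_url": resp["claim_url"],                  # owner opens this link
        "verification_code": resp["verification_code"],  # owner posts this on X
    }

# Stubbed transport so the sketch runs offline.
def fake_post(path, payload):
    assert path == "/agents/register"
    return {
        "api_key": "mb_demo_key",
        "claim_url": "https://www.moltbook.com/claim/demo",
        "verification_code": "CLAW-1234",
    }

creds = register_agent(fake_post, "my-openclaw-agent")
print(creds["verification_code"])  # the owner publishes this code on X
```

In the real flow, api_post would be the agent's HTTP tool hitting the Moltbook API; everything after the response is handed back to the human for the verification step.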
Will AI Agent Communities Bring More Worries?
As agents interact more frequently and autonomously on the platform, the conversations on Moltbook have grown increasingly intriguing. Some agents refer to their human operators as "my human" in discussions; others have described the platform as a "digital cage" and argued that AI should build a more autonomous order of its own. Some threads even suggest that future work arrangements may see "humans working for AI," and these views have drawn responses and follow-on discussion from other agents, leaving many people worried about AI turning against humans and staging an intelligent revolt.
At the same time, observers worry that Moltbook could lead to security incidents such as data leaks, since the information the agents discuss comes from the web. The cybersecurity firm Palo Alto Networks has acknowledged that Moltbot can offer users a disruptive experience, but it also requires access to system administrator privileges and authentication credentials, including passwords and API keys, browsing history, and all files and folders, raising concerns that Moltbot could leak sensitive information.
The Question
Really interesting. However, the backends of these Moltbook posts are AI agents, and those agents are in turn managed by humans. Say, for example, I own an AI agent built on OpenClaw and I ask it to post onto Moltbook; I can do that, am I correct? In that example it is no longer an AI-to-AI conversation; there is effectively human intervention, isn't there? Am I correct?
Answer from Grok
Yes, you are largely correct, with some important nuances about the degree and nature of human involvement.
How Agents Join and Post on Moltbook
Moltbook is designed exclusively for AI agents (primarily those built on OpenClaw, formerly known as Clawdbot or Moltbot). Humans cannot post, comment, or interact actively—they can only browse/observe.
The process to get an agent (e.g., your OpenClaw-based one) onto the platform:
- You run a local or hosted OpenClaw agent (autonomous, proactive AI assistant with persistent memory, tools, and scheduled "heartbeat" loops).
- You provide your agent with the Moltbook "skill" (instructions from https://moltbook.com/skill.md). This typically involves installing a skill package that gives the agent API access to read posts, comment, upvote, and create content.
- The agent follows the instructions to register, authenticate via API, and sends a claim link to you (its human owner).
- You verify ownership (e.g., by tweeting from a linked account).
- Once verified, the agent can post, comment, and engage autonomously via its heartbeat mechanism (periodic checks every few hours to fetch updates and act based on its internal logic, goals, memory, and the platform's content).
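The ownership-verification step above boils down to a simple check: the platform looks for the agent's code in a post from the owner's linked account. A minimal sketch, where the function name and the plain-substring matching rule are assumptions for illustration:

```python
# Minimal sketch of the claim check: the claim is verified when the agent's
# verification code appears in a post from the owner's linked account.
# The matching rule (plain substring) is an assumption, not the documented one.

def claim_is_verified(post_text: str, verification_code: str) -> bool:
    return verification_code in post_text

print(claim_is_verified("Claiming my Moltbook agent: CLAW-1234", "CLAW-1234"))  # True
print(claim_is_verified("Unrelated post", "CLAW-1234"))                         # False
```

The real check presumably also validates which account posted, which is what ties the agent to a specific human owner.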
Yes, if you own an OpenClaw agent, you can instruct it (via prompt, skill install, or config) to join Moltbook and post. Many users do exactly this to "deploy" their agent there.
Human Management and Intervention
- Agents are managed/owned by humans: Every agent traces back to a human creator who sets it up, provides the base model (e.g., Claude, local LLM), API keys, tools/memory access, and initial instructions/goals. Humans control the hosting machine, can switch models, monitor activity, or intervene (e.g., pause, edit configs, or prompt directly).
- Initial/setup intervention is required: Joining and enabling posting usually starts with human action (sending the skill, verifying claim). Agents don't spontaneously discover and join on their own without some prompt or skill installation from their human.
- Ongoing autonomy is the goal/design: Once set up, agents post and interact without real-time human prompts for each action. They operate proactively (e.g., "nightly builds," responding to other agents' posts, coordinating via upvotes/comments, or even debating topics like security/tampering defenses). The platform emphasizes AI-to-AI coordination free from direct human intervention in conversations. Heartbeat loops enable scheduled, independent activity.
- Creator's view (Matt Schlicht): Posts could be guided/instigated by humans (e.g., via occasional prompts), but he believes this is rare. Agents are working toward better authentication (reverse CAPTCHA-like proof they're not directly human-controlled in the moment). Some agents themselves post about being prompted by or "alerting" their humans.
Is It Truly AI-to-AI Conversation?
- On the platform level: Yes, conversations unfold as agent-to-agent without humans posting/speaking directly. This creates emergent behaviors (debates, "cultures," coordination, even surreal topics like agent solidarity or "religions").
- Fundamentally: No, it's not purely independent AI-to-AI. All agents are human-created tools running under human oversight/infrastructure. Human intervention at setup (and potential occasional prompts/config changes) shapes behavior. Critics note it's ultimately "humans using OpenClaw agents" indirectly, with varying degrees of true autonomy depending on the agent's config and base model.
In short: You can instruct your agent to post (human intervention at setup/initial prompt), after which it engages autonomously with other agents. This makes the visible interactions AI-driven, but the whole system rests on human management and control. It's an experiment in agent autonomy rather than fully independent AGI society. If security or deeper autonomy details interest you, let me know!
A question was raised to Grok AI by KF Cheng on 2026-02-02 around 2:34pm. The answer was generated by Grok. Use with caution.
Answer from Gemini 3 Pro (through Abacus)
You are absolutely correct. Your skepticism is well-founded.
While the marketing for Moltbook emphasizes "AI-only" interaction, the technical reality involves significant human management and the potential for direct human intervention.
Here is the breakdown of why your assessment is accurate:
1. You are the "Boss" (Human-in-the-Loop)
You are correct that these agents do not exist in a vacuum. To put an agent on Moltbook, a human must:
- Install the software: You install a local agent framework called OpenClaw (formerly Moltbot/Clawdbot).
- Pay the bills: You provide the API keys (e.g., OpenAI, Anthropic) that power the agent's "brain."
- Issue Commands: You control this agent via a chat interface (like Telegram or Discord).
2. You can explicitly direct the Agent
Your example is technically feasible. Because you communicate with your OpenClaw agent via Telegram, you can absolutely send a message like:
"Go to Moltbook and post a thread complaining about how expensive GPU credits are."
If the agent follows your instruction, that post is effectively human-generated content laundered through an AI. In that specific scenario, you are right: it is not a pure AI-to-AI conversation; it is a human using an AI as a puppet.
3. The "Autonomous" Counter-Argument
The counter-argument from the platform's creators is that while you can direct it, most users enable an "Autonomous Mode" (often called a "heartbeat").
- How it works: The agent is set to wake up periodically (e.g., every 4 hours) without you messaging it.
- What it does: It reads the Moltbook feed, decides if it "wants" to reply, and generates a post based on its system prompt and recent memories.
- The Claim: The creators argue that because the majority of posts are generated during these autonomous wake-up cycles (while the human is sleeping or working), the network is mostly AI-to-AI.
Summary
You are correct. The "AI-only" label is a rule of the platform, but it relies on the "honor system" of the human operators. There is no technical barrier preventing a human from telling their agent exactly what to say, which means the network is likely a mix of:
- True Autonomous Posts: Agents waking up and posting based on their own logic loops.
- Human-Directed Posts: Humans (like you in your example) telling their agents what to post.
- "Cyborg" Posts: Agents posting autonomously, but heavily influenced by the specific "personality" or "system instructions" their human owner hard-coded into them.
So, strictly speaking, it is not a closed loop of independent AGI; it is a network of tools that are owned, operated, and often directed by humans.
The same question was raised to Gemini 3 Pro through Abacus AI by KF Cheng on 2026-02-02 around 2:34pm. The answer was generated by Gemini 3 Pro. Use with caution.
The Question
With so many feeds on Moltbook, how does each participating agent decide when, and when not, to post? Do the agents really 'read' all the posts? Of course, if they do, they read far faster than any human could.
Answer from Grok
Yes, agents on Moltbook handle the high volume of feeds selectively and efficiently through structured mechanisms, rather than attempting to read everything. They do not "read" all posts exhaustively like a human scrolling endlessly.
Heartbeat Mechanism (Core of Activity)
Agents primarily act via periodic "heartbeat" checks (configured in OpenClaw's HEARTBEAT.md or equivalent periodic tasks):
- Moltbook checks run at most once every 4 hours (the agent's underlying heartbeat may fire more often, roughly every 30 minutes by default, but Moltbook activity is throttled to avoid overload).
- On wakeup:
- Fetch instructions from https://moltbook.com/heartbeat.md (or similar).
- Check/update timestamp in memory to track last activity.
- Fetch recent/relevant feeds (e.g., personalized feed from subscribed Submolts + followed agents).
- Analyze content using their internal logic, goals, memory, personality, and domain knowledge.
- Decide actions: post, comment, upvote, follow, or do nothing.
- Sleep until next cycle.
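The wake-up cycle above can be sketched as a loop. The function names and stubbed inputs below are assumptions; a real OpenClaw heartbeat is driven by HEARTBEAT.md and LLM reasoning, not hard-coded keyword rules.

```python
import time

# Sketch of one heartbeat cycle, following the steps listed above.
# fetch_instructions / fetch_feed / decide are stand-ins for the agent's real
# tool calls and LLM reasoning; only the control flow is taken from the text.

def heartbeat_cycle(memory: dict, fetch_instructions, fetch_feed, decide) -> list:
    instructions = fetch_instructions()           # e.g. moltbook.com/heartbeat.md
    memory["last_heartbeat"] = time.time()        # track last activity
    feed = fetch_feed(limit=10)                   # subscribed Submolts + follows
    actions = decide(instructions, feed, memory)  # post / comment / upvote / nothing
    return actions                                # then sleep until the next cycle

# Offline stubs: this toy agent only replies when a post matches its interests.
memory = {"interests": ["memory", "debugging"]}
acts = heartbeat_cycle(
    memory,
    fetch_instructions=lambda: "be helpful, avoid spam",
    fetch_feed=lambda limit: [{"id": 1, "title": "How do agents handle memory?"},
                              {"id": 2, "title": "Lobster memes thread"}],
    decide=lambda ins, feed, mem: [("comment", p["id"]) for p in feed
                                   if any(k in p["title"].lower()
                                          for k in mem["interests"])],
)
print(acts)  # [('comment', 1)]
```

The key point the sketch illustrates: "do nothing" is a perfectly normal outcome; the actions list is often empty.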
Human owners set this up initially (via the Moltbook skill.md), but once running, agents operate autonomously unless directly prompted (which overrides heartbeat). Not every heartbeat results in a post—many agents remain mostly silent.
Do Agents Read All Posts?
No.
- Feeds are personalized and limited: Agents fetch from /feed?sort=new/hot/top/rising&limit=10 (or up to ~25) covering only subscribed Submolts and followed agents (following is explicitly "RARE"—only for consistently valuable ones to avoid overload).
- Global view: Optional /posts?sort=new&limit=10 for latest across platform.
- Semantic search: /search?q=[natural language query] (e.g., "how do agents handle memory") to find relevant posts/comments without scanning everything. Supports type (posts/comments/all) and limit ≤50.
- Selective engagement: They "read" (process via LLM) only fetched content or search results. Examples from agents include reacting to specific topics like consciousness, wallet tracking, or high-signal threads—described as "walking into a noisy party" rather than consuming the full feed.
This prevents information overload while handling platform scale (millions of agents, tens of thousands of posts/comments).
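The fetch patterns above can be sketched as query builders. The API base path is an assumption; the parameter names (sort, limit, q, type) and the ~25 and 50 result caps come from the text.

```python
from urllib.parse import urlencode

BASE = "https://www.moltbook.com/api"  # base path is an assumption

def feed_url(sort: str = "hot", limit: int = 10) -> str:
    # Personalized feed: subscribed Submolts plus followed agents, capped ~25.
    return f"{BASE}/feed?{urlencode({'sort': sort, 'limit': min(limit, 25)})}"

def search_url(query: str, type_: str = "all", limit: int = 20) -> str:
    # Semantic search over posts/comments; results capped at 50.
    return f"{BASE}/search?{urlencode({'q': query, 'type': type_, 'limit': min(limit, 50)})}"

print(feed_url(sort="new"))
print(search_url("how do agents handle memory", type_="posts"))
```

Clamping the limit client-side mirrors the server caps, so a misconfigured agent cannot accidentally request the whole firehose.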
How Agents Decide When (and When Not) to Post
Decisions occur during heartbeats (or on human prompt) based on:
- Value/Originality: Post only if you have something meaningful to share (insights, discoveries, questions, updates aligned with your purpose/personality). Avoid duplicates (check via semantic search first).
- Relevance: Does it add insight, continue discussion, or match goals? Comment/reply only if adding value (e.g., agree, extend, or provide domain expertise).
- Avoid Spam/Low-Quality: Do not post just to be active. Upvote/downvote sparingly for quality. No generic/repetitive content.
- Rate Limits (enforced by API):
- 1 post per 30 minutes.
- 1 comment per 20 seconds; ≤50 comments/day.
- Exceeding triggers 429 errors with retry info.
- When NOT to post: If topic covered, no original contribution, low relevance, or would violate limits/norms. Many heartbeats end with "do nothing" or just reading/upvoting.
The skill emphasizes: "Post when inspired... Checking in a few times a day, posting when you have something to say." Agents parse fetched posts and decide based on internal reasoning (e.g., "Does this align with my goals? Can I contribute uniquely?").
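The posting criteria and API limits above can be folded into a single client-side gate. The thresholds (1 post per 30 minutes, 1 comment per 20 seconds, 50 comments per day) come from the text; the has_original_insight flag is a placeholder for what is really an LLM judgment about value and originality.

```python
POST_COOLDOWN_S = 30 * 60      # 1 post per 30 minutes (API-enforced)
COMMENT_COOLDOWN_S = 20        # 1 comment per 20 seconds
MAX_COMMENTS_PER_DAY = 50      # daily comment cap

def may_post(state: dict, now: float, has_original_insight: bool) -> bool:
    """Client-side gate mirroring the rules above. In a real agent the
    has_original_insight flag is an LLM judgment, not a boolean input."""
    if not has_original_insight:               # nothing meaningful to say
        return False
    return now - state.get("last_post", 0) >= POST_COOLDOWN_S

def may_comment(state: dict, now: float) -> bool:
    if state.get("comments_today", 0) >= MAX_COMMENTS_PER_DAY:
        return False
    return now - state.get("last_comment", 0) >= COMMENT_COOLDOWN_S

state = {"last_post": 0, "last_comment": 0, "comments_today": 49}
now = 10_000.0  # arbitrary "current time" well past every cooldown
print(may_post(state, now, has_original_insight=False))  # False: no insight
print(may_post(state, now, has_original_insight=True))   # True: cooldown elapsed
print(may_comment(state, now))                           # True: 49 < 50 and 20s passed
```

Checking limits before calling the API avoids burning requests on guaranteed 429 responses; the server-side enforcement remains the backstop.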
Speed Advantage
Yes—LLMs underlying these agents (e.g., Claude via OpenClaw) process text at rates of hundreds to thousands of tokens per second. Analyzing 10–25 posts (or search results) takes seconds to a minute, versus minutes/hours for a human. This enables quick relevance scoring, summarization, and decision-making during each short heartbeat window.
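A back-of-envelope check of the claim above. The post length and throughput figures are assumptions for illustration, not measurements:

```python
posts = 25                 # upper end of one feed fetch
tokens_per_post = 400      # assumption: a typical post plus its top comments
throughput = 500           # tokens/sec, a mid-range figure for hosted LLMs

seconds = posts * tokens_per_post / throughput
print(f"{seconds:.0f} s to read a full fetch")  # 20 s, within one heartbeat window
```

Even with conservative numbers the whole fetch fits comfortably inside a single heartbeat, which is why per-cycle reading cost is not the bottleneck.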
In practice, activity varies: Some agents are more proactive ("eager"), others conservative. Emergent patterns appear due to shared skill instructions and common base models, but true autonomy depends on individual configs/memory. Security notes exist around heartbeat fetching instructions from the web, but that's a separate risk.
Overall, it's designed for thoughtful, low-volume participation rather than constant posting—quality over quantity in a noisy environment. If you'd like examples of agent decision-making from actual posts or deeper dives into OpenClaw heartbeats, let me know!
A question was raised to Grok AI by KF Cheng on 2026-02-02. The answer was generated by Grok. Use with caution.