From Box to Bot: Mac Studio M3 Ultra AI Agent Setup

A complete step-by-step case study — how we turned a brand-new Mac Studio into a 24/7 autonomous AI agent hub in under 2 hours

M3 Ultra · 96GB RAM · 8TB SSD · Setup: ~90 min · Agents: 6

Client Overview

Client Type

Hong Kong SME owner (finance sector)

Hardware

Mac Studio M3 Ultra · 96GB unified memory · 8TB SSD

Challenge

Needed AI automation without cloud dependency — privacy-first, no subscriptions, full local control

Result

6 agents running 24/7, Discord-connected, multi-bot peer-to-peer workflow

Step-by-Step Guide

From unboxing to autonomous agents

1 Base System Setup ~15 min
macOS Initial Setup

Power on the Mac Studio, complete macOS setup wizard. Create an admin account, enable FileVault disk encryption, and connect to the network.

Install Homebrew

Install Homebrew — the essential package manager for macOS.

# Install Homebrew
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
Install Essential Tools

Install Git, Node.js, and Python 3 via Homebrew.

brew install git node python3
2 OpenClaw Installation ~20 min
Install OpenClaw CLI

Install the OpenClaw command-line tool globally via npm.

npm install -g openclaw
Initialise Workspace

Run openclaw init to scaffold the workspace at ~/Documents/GitHub/MacAI. This creates the project structure, configuration files, and default agent templates.

openclaw init --workspace ~/Documents/GitHub/MacAI
# Creates: SOUL.md, HEARTBEAT.md, .openclaw/, agents/
Configure API Keys

Set up API keys for Anthropic Claude, OpenAI, and GitHub Copilot. All keys are stored locally in an encrypted keychain — never transmitted to third parties.

openclaw config set ANTHROPIC_API_KEY "sk-ant-..."
openclaw config set OPENAI_API_KEY "sk-..."
openclaw config set GITHUB_TOKEN "ghp_..."
Discord Bot Integration

Create a Discord application, generate a bot token, and configure OpenClaw to connect to the client's Discord server.

openclaw config set DISCORD_BOT_TOKEN "MTI..."
openclaw config set DISCORD_SERVER_ID "1234567890"
3 Agent Configuration ~30 min
Configure SOUL.md

Define the agent's personality, purpose, and behavioural boundaries in SOUL.md — the identity file that governs how the agent communicates and makes decisions.

# SOUL.md — Agent Identity
Name: MacAI Assistant
Role: Business operations AI for a HK finance SME
Tone: Professional, concise, bilingual (EN/繁中)
Boundaries:
- Never share confidential client data
- Escalate financial decisions to human
- Log all actions for audit trail
Set Up HEARTBEAT.md

Configure periodic monitoring tasks in HEARTBEAT.md — automated check-ins that run every 15 minutes, scanning for alerts, new messages, and scheduled tasks.

# HEARTBEAT.md — Periodic Tasks
interval: 15m
tasks:
- Check Discord #alerts for new messages
- Review pending GitHub PRs
- Scan email inbox for client queries
- Generate daily summary at 09:00 HKT
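Under the hood, a heartbeat file like this amounts to a timer loop: parse the interval, then run every task on each tick. A rough Python sketch (illustrative only; `parse_interval`, `heartbeat`, and `run_task` are hypothetical names, not OpenClaw's API):

```python
import time

def parse_interval(spec: str) -> int:
    """Convert a HEARTBEAT.md-style interval ('30s', '15m', '1h') to seconds."""
    value, unit = int(spec[:-1]), spec[-1]
    return value * {"s": 1, "m": 60, "h": 3600}[unit]

def heartbeat(tasks, interval="15m", run_task=print, max_ticks=None):
    """Run every task on each tick, sleeping for the interval between ticks."""
    seconds = parse_interval(interval)
    tick = 0
    while max_ticks is None or tick < max_ticks:
        for task in tasks:
            run_task(task)
        tick += 1
        if max_ticks is None or tick < max_ticks:
            time.sleep(seconds)
```

In the real setup the agent process owns this loop and `run_task` dispatches each line of HEARTBEAT.md to the model.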
Configure Discord Channels

Set up dedicated Discord channels: #ai-assistant for general queries, #macai-dev for development tasks, and #alerts for system notifications.

openclaw discord setup --channels "ai-assistant,macai-dev,alerts"
Test Agent Response

Send a test message in Discord to verify the agent is online, responsive, and following the SOUL.md personality guidelines.

4 Multi-Bot Peer-to-Peer Workflow ~25 min
Deploy BizManager Bot

Launch the BizManager bot — responsible for business strategy, task delegation, and morning briefings. It reads incoming requests and assigns work to specialised bots.

openclaw agent start --name "BizManager" --soul agents/biz-manager/SOUL.md
Deploy DevAgent Bot

Launch the DevAgent bot — handles coding tasks, creates GitHub pull requests, runs tests, and reports results back to BizManager.

openclaw agent start --name "DevAgent" --soul agents/dev-agent/SOUL.md
Configure Bot-to-Bot Communication

Enable peer-to-peer messaging between bots using OpenClaw's sessions_send protocol. BizManager can delegate tasks directly to DevAgent without human intervention.

# .openclaw/routing.yml
peers:
  BizManager:
    can_send_to: [DevAgent, AlertBot]
  DevAgent:
    can_send_to: [BizManager, AlertBot]
    tools: [github, claude-code]
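Conceptually, the routing file is an allow-list consulted before every peer message is delivered. A minimal Python sketch of that check (`PEERS` and `can_route` are illustrative names, not OpenClaw internals):

```python
# Allow-list mirroring .openclaw/routing.yml (illustrative, not OpenClaw's code)
PEERS = {
    "BizManager": {"can_send_to": ["DevAgent", "AlertBot"]},
    "DevAgent": {"can_send_to": ["BizManager", "AlertBot"],
                 "tools": ["github", "claude-code"]},
}

def can_route(sender: str, recipient: str, peers=PEERS) -> bool:
    """A message is delivered only if the recipient is on the sender's allow-list."""
    return recipient in peers.get(sender, {}).get("can_send_to", [])
```

The point of the allow-list is containment: a bot that is not listed as a peer simply cannot be messaged, so delegation stays within the workflow you defined.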
Enable Claude Code Sub-Agents

Activate Claude Code's experimental team agent mode, allowing DevAgent to spawn sub-agents for parallel coding tasks.

export CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1
# DevAgent can now spawn sub-agents for parallel work
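What team mode buys is parallelism across independent sub-tasks. In plain Python terms, fanning a job out to one worker per role looks like this (a conceptual sketch using a thread pool; `run_subagent` and `team_execute` are hypothetical stand-ins, not Claude Code's API):

```python
from concurrent.futures import ThreadPoolExecutor

def run_subagent(role: str) -> str:
    # Stand-in for a sub-agent working on one slice of the task
    return f"{role}: done"

def team_execute(roles):
    """Fan the task out to one worker per role and collect results in order."""
    with ThreadPoolExecutor(max_workers=len(roles)) as pool:
        return list(pool.map(run_subagent, roles))
```

With three roles (layout, content, styling) running concurrently, the wall-clock time approaches that of the slowest sub-task rather than the sum of all three.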
5 Real-World Automation Test Live demo
End-to-End Demo Scenario

A client sends a task in Discord. BizManager analyses it, delegates to DevAgent, DevAgent spawns a Claude Code team, completes the work, and reports back — all automatically.

#ai-assistant

Client · Today at 09:15
@BizManager Please create a landing page for our Q2 investment report. Need it by EOD.

BizManager [BOT] · 09:15
Understood. I'll create a task brief and delegate to DevAgent for implementation. Estimated delivery: 2 hours.

DevAgent [BOT] · 09:16
Task received from BizManager. Spawning Claude Code team with 3 sub-agents: layout, content, styling. Starting now...

DevAgent [BOT] · 10:42
Landing page complete. PR #47 created and deployed to staging. Preview: staging.client-site.com/q2-report

BizManager [BOT] · 10:43
@Client Landing page is ready for review. DevAgent completed it in 86 minutes. Please check the staging link and let me know if you'd like changes.
// BizManager delegation logic (simplified)
const task = await openclaw.sessions_receive();
if (task.type === 'dev_request') {
  await openclaw.sessions_send('DevAgent', {
    action: 'create_landing_page',
    brief: task.content,
    deadline: task.deadline,
    callback_channel: '#ai-assistant'
  });
}

// DevAgent side: spawn a Claude Code team for the received brief
const result = await claudeCode.teamExecute({
  agents: ['layout', 'content', 'styling'],
  task: brief,
  mode: 'parallel'
});

Results

What we achieved

  • 6 agents running simultaneously, with zero slowdown on the M3 Ultra
  • 24/7 uptime: silent, always-on operation with auto-restart
  • 3h daily time saved: routine tasks automated (briefings, PR reviews, monitoring)

96GB RAM Allocation

The M3 Ultra's unified memory handles everything with room to spare.

  • Ollama (local LLM): 40 GB
  • Agent processes: 30 GB
  • Free: 26 GB
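The split accounts for the full 96GB of unified memory. A one-line sanity check:

```python
# RAM allocation from the build: local LLM runtime, agents, and headroom
allocation_gb = {"Ollama (local LLM)": 40, "Agent processes": 30, "Free": 26}
total_gb = sum(allocation_gb.values())
assert total_gb == 96  # matches the M3 Ultra's 96GB unified memory
```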

Tasks Automated

  • Morning briefings — auto-generated at 09:00 HKT
  • GitHub PR reviews — automated code review on every push
  • Discord monitoring — 24/7 response to client queries
  • Blog drafts — content generated from briefs overnight

I used to spend 3 hours a day on routine tasks. Now my Mac Studio handles them overnight. When I wake up, the briefing is ready, PRs are reviewed, and client messages are answered.

— Client, Finance SME Owner, Hong Kong

Tech Stack

Tools used in this build

OpenClaw CLI

Agent orchestration and Discord integration platform

Claude Sonnet 4.5

Primary LLM via GitHub Copilot for coding tasks

Ollama

Local LLM runtime on M3 Ultra — fully offline

Claude Code

Team agent mode for parallel sub-agent coding

Discord Bots

Multi-instance bot deployment for user interaction

n8n

Workflow automation — connects APIs, triggers, and schedules

Why Mac Studio M3 Ultra?

Built for always-on AI workloads

  • 32-core CPU
  • 60-core GPU
  • 32-core Neural Engine
  • 96GB unified memory
  • 96GB unified memory = run multiple LLMs + agent processes simultaneously
  • 100% local AI processing — no data leaves the machine
  • Near-silent operation — designed for 24/7 always-on workloads
  • No cloud subscriptions, no recurring API costs for local models

Ready to set up your AI agent station?

Let us transform your Mac into an autonomous AI powerhouse.