⚠️ This is a fan-made community site, not affiliated with the official OpenClaw project or Anthropic. github.com/openclaw/openclaw

OpenClaw 4.15: The Lobster Gets a New Brain and Learns to Watch Its Own Keys

OpenClaws.io Team


@openclaws

April 16, 2026

6 min read


When the last release run wrapped, the lobster had just crawled out of a ten-day security siege. A thicker shell. Sharper claws. But swimming around, it still wasn't really looking at itself.

4.15 doesn't do anything flashy, but the direction is clear: give the lobster a better brain, teach it to glance at the keys in its own hand, and quietly move its memory somewhere new.

This version isn't going to show up in anyone's hero screenshot. But a few defaults shift quietly once you upgrade — and if you were relying on the old behavior in a corner somewhere, it's worth knowing beforehand.

A New Brain: Opus 4.7

The default Anthropic selection now points at Claude Opus 4.7. The opus alias used to require you to pin a specific version in config; now it just points at the latest Opus. The Claude CLI default follows suit.

Bundled along with it: image understanding in Opus 4.7. This used to require an extra plugin or an external vision provider; now it ships in core. Drop a screenshot into the chat and the lobster can read it directly, without detouring through anything.

If you had manually pinned the provider to an older Opus, this upgrade won't touch your config — an explicit pin still wins. But if you were running on defaults, chat speed and image handling both take a noticeable step forward.
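If you do want to stay pinned rather than ride the alias, the override looks roughly like this. The file structure, key name, and version strings below are illustrative sketches, not confirmed by this post:

```json5
{
  // Explicit pin: upgrades won't move this (hypothetical version string).
  model: "anthropic/claude-opus-4-6",
  // Alias form: follows whatever the latest Opus is (now 4.7).
  // model: "anthropic/opus",
}
```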

An Escape Hatch for Machines That Can't Keep Up

Landing alongside the Opus 4.7 promotion is a flag pointing the opposite direction: agents.defaults.experimental.localModelLean: true.

It's for small local models. Flip it on, and the prompt automatically drops the heavyweight tools that overwhelm smaller models — browser, cron, and so on — trimming the context window and lightening the inference load. The experimental in the name is honest: it's still being tuned. But if you're running a 7B or 13B on your own machine, this flag is arguably more interesting to you than Opus 4.7.
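As a sketch, flipping the flag on might look like this in config. Only the dotted key path comes from the release notes; the surrounding JSON5 structure is an assumption:

```json5
{
  agents: {
    defaults: {
      experimental: {
        // Trim heavyweight tools (browser, cron, ...) from the prompt
        // so a 7B/13B local model gets a smaller context window.
        localModelLean: true,
      },
    },
  },
}
```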

"Make the strong stronger, lighten the load for the weak" is the quieter through-line of this release.

The First Time It Can See Its Own Keys

This is the most interesting thing in the release, and the easiest to miss.

The Control UI now has a Model Auth Status Card. What it does sounds pedestrian: tell you whether each bound provider's OAuth token is healthy, how long it has before it expires, and whether rate limits have been squeezing it lately.

It sounds pedestrian, but this card didn't exist before. You only found out an OAuth token had expired when the provider returned a 401; rate limits only became real when you got stuck mid-run. The lobster had been using these credentials all along, but there was never a single place where it — or you — could see their state before things broke.

Behind the card is a new gateway method: models.authStatus. It caches results for 60 seconds, so the UI isn't actually polling providers once a second, and when multiple clients share the same lobster instance, the provider's introspection endpoint doesn't get hammered.
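The 60-second cache is the interesting design detail: per-provider status is fetched once, and every client sharing the instance reads the cached copy until it goes stale. A minimal sketch of that pattern in Python (none of these names come from the OpenClaw codebase):

```python
import time

class AuthStatusCache:
    """Cache provider auth-status lookups for a fixed TTL (60s here),
    so many UI clients polling one instance trigger one upstream call."""

    def __init__(self, fetch, ttl=60.0, clock=time.monotonic):
        self.fetch = fetch      # callable: provider_id -> status dict
        self.ttl = ttl
        self.clock = clock
        self._entries = {}      # provider_id -> (fetched_at, status)

    def get(self, provider_id):
        now = self.clock()
        hit = self._entries.get(provider_id)
        if hit is not None and now - hit[0] < self.ttl:
            return hit[1]       # fresh: no upstream call
        status = self.fetch(provider_id)
        self._entries[provider_id] = (now, status)
        return status
```

With a real introspection endpoint behind fetch, ten clients refreshing the card within the same minute would cost one provider round trip per provider, not ten.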

A couple of lower-level fixes arrived with the card. Credential race condition — before reopening a socket, pending auth saves are drained to disk, so a reconnect no longer eats a nearly-complete authentication. Workspace file access — all path operations now route through the shared fs-safe helper, and symlinks pointing at allowlisted files get rejected. Nothing that makes a demo video, but both fall under the same theme: the lobster has a clearer picture of what it's actually holding.

Memory Is Quietly Moving House

Three things changed in the memory layer this release. Put together, they add up to a directional shift.

First, LanceDB memory indexes now support remote object storage. They used to be local-disk only, which meant switching machines required rebuilding the index and cross-device memory sharing was essentially a non-starter. In this release, the same LanceDB index can live on S3 (or an S3-compatible store), with the local disk acting as a cache layer. For anyone running the lobster across multiple machines, or moving between a laptop and a VPS, this is the key step from "memory tied to a machine" to "memory tied to an account."
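In config terms this probably reduces to pointing the index at an s3:// URI instead of a local path. The key names below are purely illustrative guesses at the shape, not documented settings:

```json5
{
  memory: {
    index: {
      // Hypothetical: remote LanceDB index with the local disk as cache.
      uri: "s3://my-bucket/openclaw/memory-index",
      cacheDir: "~/.openclaw/cache/lancedb",
    },
  },
}
```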

Second, GitHub Copilot joins the embedding provider pool. Memory search needs an embedding model to vectorize entries, and your options used to be OpenAI, a local sentence-transformers model, and a handful of others. Now there's one more: if you already have a Copilot subscription, memory search can reuse that auth channel, with a dedicated host helper that respects remote overrides and token refresh.

Third — and the only one with a breaking edge — dreaming.storage.mode switches its default from inline to separate.

Dreaming is the phase where the lobster, in its "idle time," condenses and reorganizes memory. Phase blocks produced by that process used to land inline inside the day's memory file. Upside: everything sits on one timeline. Downside: memory files bloat from generated content, and diffs become unreadable.

The new default moves phase blocks out to a dedicated path: memory/dreaming/{phase}/YYYY-MM-DD.md — one file per phase per day, leaving the original memory file with only the hand-written content. Your existing memory files won't be rewritten on upgrade, but the next time dreaming runs, the output goes to the new location. If you had scripts reading dreaming content out of the memory file, those need a path update.

If you want the old behavior, set the config back to inline explicitly.
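Concretely, the opt-out is a single key. Only the dotted path dreaming.storage.mode comes from the release notes; the nesting is an assumption:

```json5
{
  dreaming: {
    storage: {
      // Restore pre-4.15 behavior: phase blocks stay inline in the
      // day's memory file instead of moving to memory/dreaming/.
      mode: "inline",
    },
  },
}
```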

Google's Voice Joins In

The bundled Google plugin picked up Gemini TTS support: voice selection, WAV output, and PCM formatting for telephony. If you're already using Google Cloud for STT or other Gemini calls, you no longer need a separate TTS provider — the same auth carries through.

Not a major change, but something you used to have to assemble yourself is now off-the-shelf.

The Stuff That Didn't Make Headlines But You're Probably Hitting Daily

The rest of these don't earn their own section, but together they cover a surprising amount of everyday friction:

  • Ollama chat 404s are fixed. If your model ID had the ollama/ prefix, the old version was dumb enough to pass that prefix straight through to the Ollama server, which returned 404. The new version strips the prefix before the request goes out.
  • BlueBubbles image downloads work again on Node 22+ — webhook handling and attachment-fetch retry logic both got reworked along with it. If you're bridging iMessage on macOS via BlueBubbles, bump Node and bump this release together.
  • TUI streaming watchdog — if no chat event delta arrives within 30 seconds, the streaming indicator resets. Before, a silent provider-side disconnect would leave the TUI stuck in "streaming" forever. Now it doesn't.
  • Skill snapshot invalidation — changing skills.* config used to leave open agent sessions still running on the old skill list; you had to restart to see new skills. Now any config change invalidates the cached snapshot.
  • Unknown-tool stream guard on by default — this was previously opt-in protection: when the model hallucinates a nonexistent tool name, the guard prevents "Tool X not found" from spinning forever. Now it's on without any configuration.
  • Path resolution — non-workspace ~ paths now resolve against the OS home directory instead of OPENCLAW_HOME. When those two directories differed, the same ~/foo.txt could point to different places in edit versus write operations; the fix aligns them.
  • Prompt cache alignment — for task-scoped adapter runs, the inbound chat ID inside the system prompt is now stabilized, so multiple calls within the same task see a higher prompt-cache hit rate.
  • MEDIA tool-result passthrough — the MEDIA: passthrough from trusted local tools used to match on built-in tool names loosely; matching is now exact, and client tools whose normalized names collide with a built-in get rejected outright. No more slipping through on a name collision.
  • Replay recovery — a 401 "input item ID does not belong to this connection" from the provider is now classified as "replay-invalid," with proper session-reset guidance, instead of being retried as a generic 401.
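The Ollama fix at the top of that list is essentially a one-line normalization before the request goes out. A sketch of the idea (the function and names are illustrative, not the actual OpenClaw code):

```python
def normalize_ollama_model(model_id: str, prefix: str = "ollama/") -> str:
    """Strip the router prefix before the ID is sent to the Ollama server.
    'ollama/llama3:8b' becomes 'llama3:8b'; IDs without the prefix
    pass through unchanged."""
    if model_id.startswith(prefix):
        return model_id[len(prefix):]
    return model_id
```

The old behavior amounted to skipping this step, so the server was asked for a model literally named "ollama/llama3:8b" and answered 404.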

Default Value Shifts

A handful of defaults moved this release; worth listing together for bookkeeping:

  • dreaming.storage.mode: inline → separate (phase blocks land in memory/dreaming/{phase}/YYYY-MM-DD.md)
  • Unknown-tool stream guard: on by default (was opt-in)
  • Bundled Microsoft / ElevenLabs speech providers: enabled by default
  • Default Anthropic selection: points at Opus 4.7
  • Claude CLI default: aligned with Opus 4.7

The first two are behavioral changes. If you relied on the old behavior, override them explicitly in config before you upgrade.

The One-Liner

If there's only one thing you take from this post: run openclaw update, then open the Control UI and look at the new Model Auth Status Card. It will tell you something you didn't know before — exactly which keys the lobster is holding, which one is close to expiring, and which one has been catching rate limits.

If you're a heavy dreaming user, or if you've read phase-block raw content inside your memory files, note the dreaming.storage.mode default change: new dreaming output lands in the memory/dreaming/ subdirectory. The first time dreaming runs after the upgrade, a glance at that directory will show you the new shape.
