
Claude Code Routines: How Anthropic Put Claude on a Cron

Anthropic shipped Claude Code routines on April 14, 2026: a research-preview feature that turns any prompt into a cloud-hosted agent triggered by schedule, HTTP POST, or GitHub event. Here is what it is, how to set one up, and where the daily caps actually bite.

Tech Talk News Editorial · 8 min read

If you opened your Claude settings recently and saw a line about "Daily included routine runs," that is the new thing. On April 14, 2026, Anthropic shipped routines in Claude Code as a research preview. A routine is a Claude Code prompt you save once and then run automatically, on a schedule, from a webhook, or in response to a GitHub event, from a cloud session you do not have to keep your laptop open for.

The short version: Claude Code now has cron, and the cron lives on Anthropic's infrastructure instead of yours. That sounds small. It is not. The thing it replaces is the pile of glue code developers have been writing for the last year to get an LLM to do unattended work, and consolidating that glue into a managed product is the actual story.

Call that pile the glue tax: the cron job, the server to run the cron on, the checked-out repo, the secrets store, the MCP server holding your Slack and Linear tokens, the retry logic when the model call fails. Routines absorb all of it. You write a prompt, pick a repo, pick a trigger, and the work runs without you. That is the framework to carry through the rest of this piece.

What a Routine Actually Is

Per Anthropic's official documentation, a routine is "a saved Claude Code configuration: a prompt, one or more repositories, and a set of connectors, packaged once and run automatically." Each routine has at least one trigger attached, and triggers come in three flavors you can mix on the same routine:

  • Scheduled. Recurring cadence: hourly, daily, weekdays, weekly. Minimum interval is one hour. Custom cron expressions are supported via /schedule update in the CLI.
  • API. A per-routine HTTPS endpoint plus a bearer token. POST to it and a session starts. Useful for alerting systems, deploy pipelines, or any tool that can make an authenticated HTTP call.
  • GitHub. Subscribes to repository events (pull requests, pushes, issues, workflow runs, discussions, release events, and more) through the Claude GitHub App.

A single routine can combine all three. A pull-request reviewer can run on every PR, run nightly against the backlog, and also fire from a Slack bot over API. The routine itself is the unit of configuration; the triggers are just ways to start it.

Runs execute as full Claude Code cloud sessions. There is no permission-mode picker, no approval prompts mid-run. The session can run shell commands, use skills committed to the cloned repo, and call any connectors you include. By default Claude can only push to branches prefixed with claude/, which is a sensible guardrail against a routine flattening main at 3am.
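
The branch guardrail is simple enough to express as a predicate. A minimal sketch of the logic as described (the function and enforcement point are illustrative; this is not Anthropic's implementation):

```python
def push_allowed(branch: str, allowed_prefix: str = "claude/") -> bool:
    """Illustrative version of the default guardrail: a routine's session
    may only push to branches under the claude/ prefix, so main and
    release branches cannot be touched by an unattended run."""
    return branch.startswith(allowed_prefix)

# A routine's fix branch passes; a direct push to main does not.
push_allowed("claude/fix-null-deref")  # True
push_allowed("main")                   # False
```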

A routine is a prompt with a cron tab, a webhook, and a repo. The product is Anthropic taking over the glue.

The Three Triggers, Side by Side

Each trigger solves a different problem. The choice is less about preference and more about where the work is actually starting from.

| Trigger | Starts when | Best for | Configured from |
| --- | --- | --- | --- |
| Scheduled | A preset or cron expression fires | Backlog grooming, docs drift, weekly digests | Web, CLI, Desktop |
| API | Authenticated POST to /fire | Alert triage, deploy verification, tool-to-Claude | Web only |
| GitHub | Repository event matches a filter | Custom PR review, library port, label-gated workflows | Web only |

The API trigger ships behind a dated beta header (experimental-cc-routine-2026-04-01) while the preview runs. The docs are explicit that breaking changes will ship under a new header version and the two most recent versions keep working, which is a reasonable contract for a preview surface.
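
Putting the pieces together, a caller fires the API trigger with a POST carrying the bearer token and the dated beta header. A sketch under stated assumptions: the article describes a per-routine /fire endpoint, a bearer token, and the `experimental-cc-routine-2026-04-01` header value, but the exact URL, payload shape, and header name (`anthropic-beta` below) are guesses; take the real values from the routine's settings page.

```python
import json
import urllib.request

def build_fire_request(url: str, token: str, text: str) -> urllib.request.Request:
    """Build (but do not send) the POST that starts a routine run.
    The {"text": ...} body and the anthropic-beta header name are
    assumptions for illustration, not a documented schema."""
    return urllib.request.Request(
        url,  # the per-routine endpoint, copied from the routine's settings page
        data=json.dumps({"text": text}).encode(),
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",  # shown once at creation time
            "anthropic-beta": "experimental-cc-routine-2026-04-01",
            "Content-Type": "application/json",
        },
    )
```

Sending it is one more line (`urllib.request.urlopen(req)`), which is the whole point: anything that can make an authenticated HTTP call can start a session.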

How to Set One Up in Under Five Minutes

The fastest path is the CLI, because you are probably already in a Claude Code session.

  1. Run /schedule inside a session. You can pass a natural-language description, like /schedule nightly PR review at 9pm. Claude walks you through the prompt, repo, and environment. /schedule in the CLI can only create scheduled routines; for API or GitHub triggers, jump to the web.
  2. Or open claude.ai/code/routines. Click New routine. Name it, write the prompt, select a model, pick one or more GitHub repositories, pick an environment (or the default), and pick a trigger.
  3. For an API trigger, save first, then generate the token. The URL and token only exist after the routine has an ID. The token is shown once and cannot be retrieved later. Put it in your alerting tool's secret store immediately.
  4. For a GitHub trigger, install the Claude GitHub App. Running /web-setup in the CLI grants cloning access, but it does not install the GitHub App, and without the app the webhook does not fire. The setup wizard prompts you when needed.
  5. Review connectors. All your connected Model Context Protocol (MCP) servers are included by default. Remove the ones the routine does not need. Routines inherit your identity, so a Slack message sent by a routine looks like it came from you.
  6. Click Create, then Run now. The test run opens as a normal session where you can see every tool call, every file change, and every command the routine executed. Review it before leaving the routine to run unsupervised.

The prompt is the load-bearing part. Routines run unsupervised, so the prompt needs to describe success, not intent. A good routine prompt looks like a runbook: what to check, what to produce, what to skip, what to do on failure. A vague prompt produces a vague nightly session that burns your daily cap. If you have not already internalized the patterns, our prompt engineering advanced techniques piece walks through the structure that holds up in autonomous runs.
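
To make "describe success, not intent" concrete, here is a runbook-shaped prompt held in a string, with a check for the four questions every unattended prompt should answer. The wording and section names are invented for this piece; the docs do not prescribe a structure.

```python
# Illustrative routine prompt. Contents are an example, not a template
# from Anthropic's docs: the point is that each line is checkable.
NIGHTLY_REVIEW_PROMPT = """\
Check: open PRs labeled 'needs-review' with no review activity in 24h.
Produce: one inline review per PR using the checklist in docs/review.md.
Skip: draft PRs and PRs that touch only generated files.
On failure: if the checklist file is missing, post one message to
#eng-reviews and stop; do not guess at review criteria.
"""

# The four questions a runbook-style prompt should answer before it
# is allowed to run unsupervised.
REQUIRED_SECTIONS = ("Check:", "Produce:", "Skip:", "On failure:")
assert all(s in NIGHTLY_REVIEW_PROMPT for s in REQUIRED_SECTIONS)
```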

The Daily Caps Are the Real Ceiling

Routines draw down normal Claude Code usage, but they also have their own per-account daily run cap on top. Per Anthropic's announcement post, confirmed in 9to5Mac's launch coverage, the numbers are:

| Plan | Daily routine runs | Practical ceiling |
| --- | --- | --- |
| Pro | 5 | One nightly routine plus a handful of webhook fires |
| Max | 15 | Hourly-ish cadence during the workday |
| Team | 25 | One active reviewer per repo with room for scheduled jobs |
| Enterprise | 25 | Same cap; organizations lean on metered overage |

GitHub webhook events also have per-routine and per-account hourly caps during the preview, and events beyond the limit are dropped rather than queued. If you wire a routine to pull_request.opened on a busy repo, you will hit those caps before you hit your daily one. Plan trigger filters accordingly.

The caps are the real product constraint. If you are on Pro and you want a routine that reviews every PR on a five-person team that ships thirty PRs a day, routines will not do that. If you want a nightly backlog sweep, a weekly docs-drift run, and an on-call API trigger for alerts, five runs a day is plenty. Fit the use case to the cap before the cap starts rejecting runs.
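
Since over-cap runs are rejected rather than queued, a caller that fires the API trigger can keep its own daily count and fail locally instead of upstream. A client-side sketch (this guard is hypothetical; it is not part of the routines API):

```python
from datetime import date

class DailyRunBudget:
    """Client-side guard that stops a caller from POSTing to a routine's
    API trigger once the plan's daily run cap would be exceeded."""

    def __init__(self, daily_cap: int):
        self.daily_cap = daily_cap
        self._day = date.today()
        self._used = 0

    def try_acquire(self) -> bool:
        today = date.today()
        if today != self._day:            # new day: the cap resets
            self._day, self._used = today, 0
        if self._used >= self.daily_cap:
            return False                  # over cap: skip this fire
        self._used += 1
        return True

budget = DailyRunBudget(daily_cap=5)      # Pro's cap from the table above
```

Wrap every fire in `if budget.try_acquire():` and a noisy alert storm degrades to skipped runs instead of a pile of rejected requests.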

Five routines a day is enough to prove out one workflow. It is not enough to automate a team.

Routines vs /loop vs Desktop Scheduled Tasks

Claude Code now has three scheduling surfaces and they are not interchangeable. Picking the wrong one either wastes tokens or fails quietly at 3am when your laptop is asleep.

| Surface | Runs where | When to reach for it |
| --- | --- | --- |
| /loop | Inside your current CLI session | Polling a build, babysitting a deploy, re-running a prompt until a condition flips |
| Desktop scheduled task | Your local machine | Tasks that need your filesystem, your local credentials, or your dev environment |
| Routine | Anthropic cloud | Unattended work against a repo, fires while your laptop is closed, triggered from outside systems |

The dividing line is where the work has to happen. If it touches your local filesystem, use desktop. If it lives inside one session, use /loop. If it has to survive your laptop being closed and needs to be triggered by something that is not you, use a routine.

What to Actually Build First

The best routines are the boring ones that used to be somebody's Monday-morning task. A few worth stealing from the docs' example list:

  • Nightly backlog grooming. A scheduled routine reads issues opened since the last run, applies labels, assigns owners by code area, and posts a digest to Slack.
  • Alert-to-draft-PR. Your monitoring tool POSTs to the routine endpoint with the alert body as text. Claude pulls the stack trace, correlates it with recent commits, and opens a draft PR with a proposed fix.
  • Opinionated PR review. A GitHub trigger on pull_request.opened runs your team's checklist and leaves inline comments so human reviewers can focus on design instead of mechanical checks.
  • Docs drift. A weekly scheduled routine scans merged PRs, flags documentation that references changed APIs, and opens update PRs against the docs repo.

The pattern that links all of these: a bounded task, a primary source of truth (the repo), a clear success state, and an artifact at the end (a PR, a comment, a Slack message). Routines are good at that shape and bad at open-ended "figure out what to do" prompts.
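
The alert-to-draft-PR case is mostly a formatting problem on the caller's side: turn the monitoring tool's alert into the text body the routine receives. A hedged sketch; the alert dict's keys are invented examples from a generic monitoring tool, not a real schema, and the single `text` field follows the article's description of sending the alert body as text.

```python
import json

def alert_to_routine_payload(alert: dict) -> bytes:
    """Format a monitoring alert as the body for a routine's API trigger.
    Keys like 'title' and 'stack_trace' are illustrative."""
    text = (
        f"Alert: {alert['title']}\n"
        f"Service: {alert['service']}\n"
        f"Stack trace:\n{alert['stack_trace']}\n"
        "Correlate this with commits from the last 24h and open a draft "
        "PR with a proposed fix on a claude/ branch."
    )
    return json.dumps({"text": text}).encode()
```

Note that the prompt-like instructions live in the payload here; the routine's saved prompt should still carry the runbook (what to check, what to skip, what to do on failure) so the payload only has to supply the facts.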

The Take

Routines are not a leap in model capability. They are an infra product wrapped around the existing model. The pitch is that you stop paying the glue tax, which is the right pitch for the audience that was actually paying it. If you have been running a Claude prompt in a cron on a tiny DigitalOcean droplet with a duct-taped MCP server and a secrets file you do not want to think about, routines will feel like someone cleaning your kitchen.

The honest caveats are the caps and the preview status. Five runs a day on Pro is enough to prove out one workflow; it is not enough to run a small team's automation. And "research preview" means the /fire endpoint shape, the token semantics, and the event filters can still move. Build against it, but keep the routines idempotent and assume you will rewire them once before this thing goes GA.

The thing worth watching is what happens to the category below this. A chunk of the tooling people bought or built to run scheduled LLM work just got absorbed into a feature. That is a familiar pattern, and the last time it showed up at this scale was when hosted CI ate self-managed Jenkins. The glue layer moved, and the question is which tools in your automation stack still earn their keep when Anthropic's cloud is on the other side of a webhook (our developer experience scorecard is one way to think about what belongs in the stack and what is just drag).

Written by

Tech Talk News Editorial

Tech Talk News covers engineering, AI, and tech investing for people who build and invest in technology.
