5th April 2026 • 18 min read • Tutorial • Kevin Collins

How to Fully Automate Claude Code: The Ultimate Guide to Autonomous AI Development

Build a production-grade pipeline that takes a Jira ticket as input and produces a reviewed, tested, security-scanned Pull Request as output. Covering hooks, MCP servers, CLAUDE.md, quality gates, SonarQube, Chrome DevTools, and automated PR creation.

[Figure: autonomous AI development pipeline diagram showing hooks, MCP servers, quality gates, and automated PR creation]

How do you fully automate Claude Code?

Fully automating Claude Code requires four interlocking components: hooks for deterministic quality gates that prevent broken code from advancing, MCP servers for external tool integration with Jira and SonarQube, CLAUDE.md files for persistent project rules that survive context compaction, and verification loops that enforce test passage, security scanning, and frontend checks. The result is a pipeline that takes a Jira ticket as input and produces a reviewed, tested, security-scanned Pull Request as output.

TL;DR

The Pipeline

  • PRD to structured Jira tickets via Atlassian MCP
  • CLAUDE.md enforces project rules across sessions
  • Auto Mode enables safe autonomous execution
  • Hooks create deterministic quality gates
  • Subagents with git worktrees enable parallel implementation

The Quality Gates

  • Stop hook enforces test passage before completion
  • Code-simplifier reduces complexity by 20 to 30%
  • SonarQube MCP scans for security vulnerabilities
  • Chrome DevTools verifies frontend rendering
  • Automated PR creation with full documentation

Table of Contents

  1. The Anatomy of a Fully Autonomous Pipeline
  2. Phase 1: PRD to Jira via MCP
  3. Phase 2: CLAUDE.md and Auto Mode
  4. Phase 3: Claude Code Hooks
  5. Phase 4: Autonomous Ticket Execution
  6. Phase 5: Stop Hook and Test Runner
  7. Phase 6: Code Simplification
  8. Phase 7: SonarQube Security Scanning
  9. Phase 8: Frontend Verification with Chrome DevTools
  10. Phase 9: Session-Per-Ticket with the Agent SDK
  11. Phase 10: Automated PR Creation
  12. Common Failure Modes
  13. Complete Configuration Reference
  14. Conclusion
  15. Frequently Asked Questions
  16. References

01. The Anatomy of a Fully Autonomous Pipeline

Fully automating Claude Code is not about writing a single clever prompt. It is about constructing a system of interlocking components where each stage feeds the next and every transition is guarded by a verification step that cannot be bypassed by the AI's optimism [1]. The distinction matters: a prompt can be ignored or misinterpreted, but a hook that blocks completion until tests pass is deterministic. The AI either satisfies the gate or it loops until it does.

The architecture draws from principles that Anthropic themselves have articulated in their best practices documentation [1]: give Claude Code explicit rules, use hooks for enforcement, and keep your project context in CLAUDE.md so that instructions survive across sessions and compaction events. What this guide adds is the full integration layer: connecting Jira for work intake, SonarQube for security scanning, Chrome DevTools for frontend verification, and a set of hooks that tie the entire pipeline together into a self-correcting loop [3].

Think of it as a factory floor for code. Raw material (a Jira ticket) enters at one end. It passes through multiple stations (implementation, testing, simplification, security scanning, visual verification). At each station, a quality inspector (a hook or MCP tool) checks the output. If the output fails inspection, it gets sent back for rework. Only when every inspector has signed off does the finished product (a Pull Request) leave the factory [9].

Pipeline Stages

| Stage | Input | Output | Verification |
|---|---|---|---|
| Planning | PRD document | Structured Jira tickets | Acceptance criteria review |
| Implementation | Jira ticket + CLAUDE.md | Feature branch with code | Stop hook: test suite |
| Simplification | Passing code | Simplified code | SubagentStop hook |
| Security | Simplified code | Scanned code | SonarQube Quality Gate |
| Visual Check | Running application | Verified UI | Chrome DevTools MCP |
| Delivery | Verified branch | Pull Request | Jira transition + PR template |

The key insight is that each gate is enforced by a hook or MCP tool call, not by Claude's good intentions. Claude cannot decide to skip the security scan because it feels confident. The SonarQube MCP server either returns a passing Quality Gate or it does not. This determinism is what makes the pipeline production-grade rather than a clever demo.

02. Phase 1: PRD to Jira via MCP

The pipeline begins with work intake. Rather than manually creating Jira tickets, you feed Claude a PRD (Product Requirements Document) and let the Atlassian Rovo MCP (Model Context Protocol) server handle ticket creation [2]. The MCP server gives Claude full CRUD access to your Jira instance, including creating issues, reading ticket details, updating statuses, and transitioning tickets through your workflow.

Setting up the Atlassian MCP server requires adding it to your Claude Code configuration. The configuration lives in your .claude/settings.json file and points to the Atlassian MCP Docker container [2]:

{
  "mcpServers": {
    "atlassian": {
      "command": "docker",
      "args": [
        "run", "-i", "--rm",
        "-e", "JIRA_URL=https://your-org.atlassian.net",
        "-e", "JIRA_EMAIL=your-email@company.com",
        "-e", "JIRA_API_TOKEN=your-api-token",
        "ghcr.io/anthropics/claude-code-atlassian-mcp:latest"
      ]
    }
  }
}

Once the MCP server is running, you can feed Claude a PRD and ask it to decompose the work into tickets. The prompt should be explicit about the structure you want. Here is an example that produces well-formed tickets with acceptance criteria [3]:

Read @prd.md and create Jira tickets in project MYAPP.
For each feature:
  - Create an Epic with a clear title
  - Break it into Stories with acceptance criteria
  - Each Story must have:
    - A "Definition of Done" checklist
    - Estimated story points (1, 2, 3, 5, or 8)
    - Labels: "ai-generated", "sprint-backlog"
  - Order Stories by dependency (implement foundations first)

Critical: Acceptance Criteria Quality

The quality of your acceptance criteria directly determines how well Claude can autonomously implement each ticket. Vague criteria like "the feature should work well" give Claude no verifiable target. Instead, write criteria that can be checked by automated tests: "POST /api/users returns 201 with valid payload and 400 with missing required fields." Every criterion that can be expressed as an assertion becomes a test that the Stop hook can enforce [3].

The output of this phase is a backlog of well-structured Jira tickets, each with clear acceptance criteria that can be translated into automated tests. This is the raw material for the rest of the pipeline. The better the acceptance criteria, the more reliably the autonomous implementation phase will succeed.
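Well-written criteria translate mechanically into tests. As a hedged sketch (the payload shape and function name here are invented for illustration, not taken from any real project), the example criterion above becomes:

```typescript
// Hypothetical sketch: the acceptance criterion "POST /api/users returns 201
// with a valid payload and 400 with missing required fields" expressed as a
// pure validation function plus assertions a Stop hook's test run can enforce.
interface UserPayload {
  email?: string;
  name?: string;
}

// Returns the HTTP status the endpoint should produce for a given payload.
function createUserStatus(payload: UserPayload): number {
  if (!payload.email || !payload.name) return 400; // missing required field
  return 201; // created
}

console.assert(createUserStatus({ email: "a@b.com", name: "Ada" }) === 201);
console.assert(createUserStatus({ name: "Ada" }) === 400);
```

Every criterion that survives this translation is one the pipeline can verify without human judgment.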

03. Phase 2: CLAUDE.md and Auto Mode

CLAUDE.md is the AI's rulebook. It is a markdown file that Claude Code reads at the start of every session and re-reads after every context compaction event. Anything you put in CLAUDE.md becomes a persistent instruction that shapes Claude's behaviour across the entire project lifecycle [1]. This is where you encode your team's conventions, architecture decisions, and quality standards.

A production CLAUDE.md for an autonomous pipeline should cover branch naming, commit format, quality gates, and architecture rules. Here is an example that demonstrates each category:

# Project Rules

## Branch Naming
- Feature branches: feat/MYAPP-{ticket-number}-{short-description}
- Bug fixes: fix/MYAPP-{ticket-number}-{short-description}
- Always create a new branch from main before starting work

## Commit Format
- Use Conventional Commits: feat:, fix:, refactor:, test:, docs:
- Include the Jira ticket number: feat(MYAPP-42): add user auth endpoint
- Keep the first line under 72 characters

## Quality Gates (Non-Negotiable)
- ALL tests must pass before marking work complete
- Run: npm test -- --coverage
- Minimum coverage: 80% on new code
- No lint errors: npm run lint must exit 0
- No TypeScript errors: npx tsc --noEmit must exit 0

## Architecture Rules
- Use the repository pattern for data access
- All API endpoints must have input validation with Zod
- Error responses follow RFC 7807 (Problem Details)
- No direct database queries in route handlers
- Environment variables accessed only through config.ts

The power of CLAUDE.md is that these rules are injected into every session automatically. You do not need to remind Claude about your commit format or branch naming convention. It reads CLAUDE.md, internalises the rules, and follows them. When compaction occurs (the context window fills up and Claude summarises the conversation to free space), CLAUDE.md is re-read, ensuring rules survive even long sessions [13].

Auto Mode: Safe Autonomous Execution

To run Claude Code autonomously, you need to address the permissions problem. By default, Claude asks for confirmation before writing files, running commands, or making commits. In a fully automated pipeline, there is nobody to click "approve." The --auto flag solves this with a clever approach: it uses a secondary model (Sonnet) as a classifier that evaluates each proposed action before execution [5].

Auto Mode approves routine development actions (writing files, running test suites, making git commits) while blocking potentially dangerous actions (accessing credentials, modifying infrastructure configuration, running destructive commands). This is fundamentally different from --dangerously-skip-permissions, which bypasses all safety checks entirely. Auto Mode retains the safety layer while eliminating the human bottleneck [5].

# Start Claude Code in Auto Mode for autonomous execution
claude --auto

# Or with a specific prompt for headless operation
claude --auto -p "Read Jira ticket MYAPP-42 and implement it following CLAUDE.md rules"

The combination of CLAUDE.md (persistent rules) and Auto Mode (safe autonomous execution) forms the foundation for everything that follows. CLAUDE.md tells Claude what to do; Auto Mode lets it do so without human approval at every step. The hooks we cover in the next section ensure it does so correctly.
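As a rough illustration only (Auto Mode's actual classifier is a model, not a rule list), the approve/block policy it applies might be approximated like this; every pattern below is an assumption made for the sketch:

```typescript
// Illustrative approximation of Auto Mode's policy. The real classifier is a
// secondary model (Sonnet) evaluating each proposed action; this rule-based
// stand-in just makes the approve/block split concrete.
type Verdict = "approve" | "block";

function classifyAction(tool: string, input: string): Verdict {
  // Assumed dangerous patterns: destructive commands, credential access,
  // infrastructure changes.
  const dangerous = [/rm -rf/, /\.env\b/, /credentials/, /terraform (apply|destroy)/];
  if (tool === "Bash" && dangerous.some((p) => p.test(input))) return "block";
  return "approve"; // routine edits, test runs, and commits pass through
}
```

The value of the real classifier over rules like these is that it generalises to dangerous actions nobody thought to enumerate.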

04. Phase 3: Claude Code Hooks

Hooks are the enforcement mechanism of the entire pipeline. They are shell commands, HTTP endpoints, or LLM prompts that execute automatically at specific points in Claude Code's lifecycle [6]. Unlike CLAUDE.md rules (which Claude can theoretically ignore), hooks are deterministic: they run outside Claude's decision-making process and can block, modify, or redirect Claude's behaviour programmatically.

There are ten hook events in the Claude Code lifecycle. Understanding when each fires and what it can do is essential for building a robust autonomous pipeline [6] [7]:

| Hook Event | When It Fires | Primary Use Case |
|---|---|---|
| PreToolUse | Before Claude executes any tool | Block dangerous commands, validate inputs |
| PostToolUse | After a tool completes successfully | Auto-format, auto-lint after file edits |
| Stop | When Claude finishes responding | Enforce test passage, quality gates |
| SessionStart | When a session begins or resumes | Inject context, load sprint state |
| PreCompact | Before context compaction | Save state (fallback for long sessions) |
| PostCompact | After context compaction | Re-inject essential context |
| SubagentStop | When a subagent finishes | Trigger code-simplifier after subagent work |
| Notification | When Claude sends a notification | Route alerts to Slack or webhook |
| PreMessage | Before processing a user message | Inject dynamic context per message |
| PostMessage | After Claude completes a response | Logging, analytics, audit trail |

Hook Configuration Structure

Hooks are configured in .claude/settings.json under the hooks key. Each hook event maps to an array of hook definitions. Here is the general structure [6]:

{
  "hooks": {
    "Stop": [
      {
        "type": "command",
        "command": "bash .claude/hooks/enforce-tests.sh",
        "matcher": "",
        "blocking": true
      }
    ],
    "PreToolUse": [
      {
        "type": "command",
        "command": "bash .claude/hooks/block-dangerous.sh",
        "matcher": "Bash",
        "blocking": true
      }
    ]
  }
}

Three Hook Types

Claude Code supports three hook types, each suited to different use cases. Command hooks run a shell command and use the exit code and stdout to determine the result. Prompt hooks send a text prompt to a lightweight model for classification. Agent hooks spawn a full Claude subagent that can reason about the situation and produce a structured decision [6] [7].

Command hooks are the most common because they are deterministic and fast. The hook receives context as JSON on stdin and must return a JSON result on stdout. Here is what the input looks like for a Stop hook:

// Stop hook input (received on stdin)
{
  "session_id": "abc123",
  "transcript_summary": "Implemented user authentication endpoint...",
  "stop_reason": "end_turn"
}

And here is a blocking response that forces Claude to continue working:

// Stop hook output (returned on stdout)
{
  "decision": "block",
  "reason": "3 tests failed. Fix the failing tests before completing."
}

When the hook returns "decision": "block", Claude does not stop. Instead, it receives the reason as a new message and continues working to address the issue. This creates a self-correcting loop: Claude implements, the hook checks, Claude fixes, the hook re-checks, until the gate passes or a maximum iteration count is exceeded [9].
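The loop can be sketched in a few lines. This is an illustrative simulation, not Claude Code internals; the gate function stands in for a real hook script, and the iteration cap stands in for whatever limit your pipeline enforces:

```typescript
// Minimal simulation of the Stop-hook loop: work, check the gate, repeat
// until the gate allows completion or a maximum iteration count is hit.
type HookDecision = { decision: "allow" } | { decision: "block"; reason: string };

function runUntilAllowed(
  gate: (iteration: number) => HookDecision,
  maxIterations: number
): { passed: boolean; iterations: number } {
  for (let i = 1; i <= maxIterations; i++) {
    const result = gate(i);
    if (result.decision === "allow") return { passed: true, iterations: i };
    // On "block", Claude would receive result.reason as a new message
    // and continue working before the gate runs again.
  }
  return { passed: false, iterations: maxIterations };
}

// Simulated gate: tests fail twice, then pass on the third attempt.
const outcome = runUntilAllowed(
  (i) => (i < 3 ? { decision: "block", reason: "tests failing" } : { decision: "allow" }),
  10
);
```

The iteration cap matters: without it, a gate that can never pass (for example, a test that requires credentials the agent cannot access) would loop forever.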

05. Phase 4: Autonomous Ticket Execution

With Jira integration, CLAUDE.md rules, Auto Mode, and hooks in place, you can now wire them together for autonomous ticket execution. The SessionStart hook is the entry point: it fires when Claude begins a session and can inject the current sprint context so Claude knows exactly which ticket to work on [3].

{
  "hooks": {
    "SessionStart": [
      {
        "type": "command",
        "command": "bash .claude/hooks/load-sprint-context.sh",
        "matcher": "startup",
        "blocking": false
      }
    ]
  }
}

The load-sprint-context.sh script reads a sprint-context.md file that you maintain (or that a previous session generated). This file contains the current sprint goal, the active ticket, and any relevant context that Claude needs to pick up work immediately:

# sprint-context.md

## Current Sprint: Sprint 14 (1-15 April 2026)
## Sprint Goal: Complete user authentication module

## Active Ticket: MYAPP-42
- Title: Implement JWT authentication middleware
- Status: In Progress
- Branch: feat/MYAPP-42-jwt-auth
- Acceptance Criteria:
  1. POST /api/auth/login returns JWT token for valid credentials
  2. Middleware rejects requests with expired or invalid tokens
  3. Refresh token rotation implemented
  4. Rate limiting: max 5 failed attempts per minute per IP

## Completed This Sprint:
- MYAPP-40: Database schema for users table
- MYAPP-41: User registration endpoint

## Dependencies:
- MYAPP-40 must be complete before starting (DONE)
- Uses bcrypt for password hashing (already in package.json)

Subagents for Parallel Implementation

For larger tickets or epics with multiple independent stories, Claude can spawn subagents to work in parallel. Each subagent operates in its own context, working on a separate branch, and reports back to the main agent when complete. This is particularly effective for epics where stories have minimal interdependencies [9].

The main agent acts as an orchestrator: it reads the Jira epic, identifies which stories can be parallelised, spawns a subagent for each, monitors their progress, and merges the results. The SubagentStop hook (covered in Phase 6) fires when each subagent completes, allowing the pipeline to trigger code simplification automatically on the subagent's output.

Git Worktrees: Isolated Parallel Development

The biggest risk of running subagents in parallel is file conflicts. If two subagents edit the same file simultaneously, one will overwrite the other's changes. Git worktrees solve this problem entirely by giving each subagent its own isolated copy of the repository.

A worktree is a lightweight, separate checkout of the same Git repository. Each worktree has its own working directory and its own branch, but they all share the same .git history. When you spawn a subagent with isolation: "worktree", Claude Code automatically:

1. Creates a temporary worktree. Runs git worktree add to create a fresh checkout on a new branch, giving the subagent its own directory with no interference from other agents.

2. Runs the subagent in isolation. The subagent operates entirely within its worktree. It can edit files, run tests, and make commits without affecting the main working directory or other subagents.

3. Returns results with branch info. When the subagent finishes, the result includes the worktree path and branch name. If no changes were made, the worktree is automatically cleaned up.

4. Parent merges the branches. The orchestrating agent merges each subagent's branch back into the feature branch, resolving any conflicts before proceeding to quality gates.

In practice, the orchestration prompt looks like this:

"For each independent Jira story in this epic, spawn a subagent
with isolation: 'worktree'. Each subagent should:
  1. Create a branch named feature/PROJ-{ticket}/{description}
  2. Implement the ticket requirements
  3. Run the full test suite
  4. Commit with the correct message format

After all subagents complete, merge each branch into the
main feature branch and resolve any conflicts."

Why Worktrees Matter for Autonomous Pipelines

Without worktree isolation, parallel subagents operating in the same directory will cause race conditions: one agent writes a file, another overwrites it moments later, and both agents' tests start failing for reasons neither can diagnose. Worktrees eliminate this entire class of failure. They also mean that if a subagent goes off the rails, you can simply delete the worktree and its branch with zero impact on the rest of the pipeline.
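Under the hood, this lifecycle maps onto ordinary git worktree commands. A hypothetical helper (the directory layout and branch-naming scheme are assumptions for the sketch, and executing the commands is left to the orchestrator):

```typescript
// Hypothetical sketch: generate the git commands behind the worktree
// lifecycle for a given ticket. Paths and naming are illustrative.
function worktreeCommands(ticketKey: string, slug: string) {
  const branch = `feature/${ticketKey}/${slug}`;
  const dir = `../worktrees/${ticketKey}`;
  return {
    create: `git worktree add ${dir} -b ${branch}`, // fresh isolated checkout
    merge: `git merge ${branch}`,                   // run from the feature branch
    cleanup: `git worktree remove ${dir}`,          // tear down after merging
  };
}
```

Because each worktree is just a directory plus a branch, "delete the worktree and its branch" really is the full cleanup story for a subagent that goes off the rails.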

06. Phase 5: Stop Hook and Test Runner

The Stop hook is the most important hook in the autonomous pipeline. It fires every time Claude attempts to finish responding and has the power to block that completion, forcing Claude back into a work loop [6]. By attaching a test runner to the Stop hook, you create a system where Claude literally cannot mark a ticket as done until every test passes.

{
  "hooks": {
    "Stop": [
      {
        "type": "command",
        "command": "bash .claude/hooks/enforce-tests.sh",
        "matcher": "",
        "blocking": true
      }
    ]
  }
}

The enforce-tests.sh script runs the test suite and returns a blocking decision if any tests fail [7] [9]:

#!/bin/bash
# .claude/hooks/enforce-tests.sh

# Run the test suite and capture output
TEST_OUTPUT=$(npm test 2>&1)
TEST_EXIT=$?

# Run linter
LINT_OUTPUT=$(npm run lint 2>&1)
LINT_EXIT=$?

# Run type checker
TSC_OUTPUT=$(npx tsc --noEmit 2>&1)
TSC_EXIT=$?

# If everything passes, allow Claude to stop
if [ $TEST_EXIT -eq 0 ] && [ $LINT_EXIT -eq 0 ] && [ $TSC_EXIT -eq 0 ]; then
  echo '{"decision": "allow"}'
  exit 0
fi

# Build a failure message
FAILURES=""
if [ $TEST_EXIT -ne 0 ]; then
  FAILURES="$FAILURES"$'\n'"Test failures:"$'\n'"$TEST_OUTPUT"
fi
if [ $LINT_EXIT -ne 0 ]; then
  FAILURES="$FAILURES"$'\n'"Lint errors:"$'\n'"$LINT_OUTPUT"
fi
if [ $TSC_EXIT -ne 0 ]; then
  FAILURES="$FAILURES"$'\n'"TypeScript errors:"$'\n'"$TSC_OUTPUT"
fi

# Block completion and send failures back to Claude.
# jq escapes quotes and newlines in the captured output, keeping the JSON valid.
jq -n --arg reason "Quality gate failed. Fix these issues:$FAILURES" \
  '{decision: "block", reason: $reason}'
exit 0

PreToolUse Hook: Dangerous Command Prevention

While the Stop hook enforces quality on output, the PreToolUse hook prevents dangerous actions on input. This hook fires before Claude executes any tool (file writes, bash commands, git operations) and can block the action entirely [6]. For an autonomous pipeline, you want to prevent commands that could damage your environment:

#!/bin/bash
# .claude/hooks/block-dangerous.sh

# Read the tool input from stdin
INPUT=$(cat)

# Extract the command being run
COMMAND=$(echo "$INPUT" | jq -r '.tool_input.command // empty')

# Block dangerous patterns
if echo "$COMMAND" | grep -qE '(rm -rf /|DROP TABLE|force push|--force|main.*delete)'; then
  echo '{"decision": "block", "reason": "Blocked: this command matches a dangerous pattern."}'
  exit 0
fi

# Block access to sensitive files
if echo "$COMMAND" | grep -qE '(\.env|credentials|secrets|private\.key)'; then
  echo '{"decision": "block", "reason": "Blocked: access to sensitive files is not permitted."}'
  exit 0
fi

# Allow everything else
echo '{"decision": "allow"}'
exit 0

Together, the Stop hook (enforcing quality on completion) and the PreToolUse hook (preventing dangerous actions) create a safety corridor. Claude can work autonomously within this corridor, writing code, running tests, and iterating on failures, but it cannot escape the boundaries you have defined.
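Because the hook scripts are themselves part of your safety infrastructure, their matching logic deserves tests of its own. A sketch that restates the checks from block-dangerous.sh as a pure TypeScript function so they can be unit-tested (the function name is illustrative):

```typescript
// The pattern checks from block-dangerous.sh, extracted into a pure function.
// The regexes mirror the shell script's grep patterns above.
type Decision = { decision: "allow" } | { decision: "block"; reason: string };

const DANGEROUS = /(rm -rf \/|DROP TABLE|--force)/;
const SENSITIVE = /(\.env|credentials|secrets|private\.key)/;

function checkCommand(command: string): Decision {
  if (DANGEROUS.test(command)) {
    return { decision: "block", reason: "dangerous pattern" };
  }
  if (SENSITIVE.test(command)) {
    return { decision: "block", reason: "sensitive file access" };
  }
  return { decision: "allow" };
}
```

A small test suite over this function catches regex mistakes (like an unescaped dot) before they silently widen or narrow the corridor.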

07. Phase 6: Code Simplification

Once code passes the test suite and lint checks, the next pipeline stage is simplification. Anthropic's official code-simplifier plugin runs on Opus and focuses on recently modified code. It preserves functionality while enhancing clarity [12]. This is not a cosmetic step. Simpler code consumes fewer tokens in subsequent sessions, which means Claude can hold more context and make better decisions downstream.

Installing the code-simplifier is straightforward:

# The code-simplifier is a built-in plugin
# Enable it in your Claude Code settings or run:
/simplify

Five Principles of Simplification

The code-simplifier operates according to five principles [12]:

1. Reduce nesting. Flatten deeply nested conditionals using early returns and guard clauses.

2. Eliminate redundancy. Remove duplicate logic by extracting shared patterns into utility functions.

3. Improve naming. Replace cryptic variable names with descriptive ones that communicate intent.

4. Preserve behaviour. All simplifications must keep the existing test suite passing. No functional changes.

5. Follow project standards. Respect the conventions defined in CLAUDE.md, including naming patterns and architecture rules.

To trigger simplification automatically after subagent work, use the SubagentStop hook:

{
  "hooks": {
    "SubagentStop": [
      {
        "type": "prompt",
        "prompt": "Review the code changes made by the subagent. Run /simplify on all modified files. Ensure tests still pass after simplification.",
        "matcher": "",
        "blocking": true
      }
    ]
  }
}

The practical impact of this step is significant. Simplified code can reduce token consumption in subsequent sessions by 20 to 30 percent [12]. Over an entire sprint of autonomous development, this compounding reduction means Claude can process more complex tasks without hitting context limits.

08. Phase 7: SonarQube Security Scanning

Security scanning is a critical gate in any production pipeline. The SonarQube MCP server creates a closed-loop workflow where Claude writes code, scans it for vulnerabilities, reads the results, fixes any issues, and re-scans until the Quality Gate passes [8]. This is not a one-shot scan. It is an iterative process that mirrors how a human security reviewer would work: find an issue, understand it, fix it, verify the fix.

SonarQube MCP Setup

{
  "mcpServers": {
    "sonarqube": {
      "command": "npx",
      "args": [
        "-y",
        "@anthropic/sonarqube-mcp-server"
      ],
      "env": {
        "SONARQUBE_URL": "http://localhost:9000",
        "SONARQUBE_TOKEN": "your-sonarqube-token",
        "SONARQUBE_ORGANIZATION": "your-org"
      }
    }
  }
}

You also need a sonar-project.properties file in your project root:

# sonar-project.properties
sonar.projectKey=myapp
sonar.projectName=My Application
sonar.sources=src
sonar.tests=src/__tests__
sonar.javascript.lcov.reportPaths=coverage/lcov.info
sonar.exclusions=**/node_modules/**,**/dist/**
sonar.qualitygate.wait=true

CLAUDE.md Quality Gate Instructions

Add these instructions to your CLAUDE.md to direct Claude through the SonarQube workflow [8]:

## Security Scanning (Non-Negotiable)
After all tests pass, run the SonarQube scan:
1. Run: npx sonar-scanner
2. Use MCP tool: sonarqube_check_quality_gate
3. If the gate FAILS:
   a. Use MCP tool: sonarqube_get_issues to list violations
   b. Fix each violation in the source code
   c. Re-run: npx sonar-scanner
   d. Re-check the quality gate
4. Repeat until the gate passes
5. Maximum 5 scan iterations (prevent infinite loops)

The Autonomous Fix Loop

The SonarQube integration creates a seven-step autonomous fix loop [8]:

1. Claude runs npx sonar-scanner to analyse the codebase
2. Claude calls sonarqube_check_quality_gate via MCP
3. If the gate fails, Claude calls sonarqube_get_issues to read violations
4. Claude reads the affected source files
5. Claude applies targeted fixes to resolve each violation
6. Claude re-runs npx sonar-scanner
7. Claude re-checks the Quality Gate, looping back to step 3 if needed

Loop Limit Caveat

Always set a maximum iteration count to prevent infinite loops. If a SonarQube violation requires an architectural change that Claude cannot make autonomously, the loop will cycle indefinitely without the limit. Add this to your CLAUDE.md:

## Loop Limits
- SonarQube scan: max 5 iterations
- If the gate still fails after 5 attempts, commit what you have,
  add a TODO comment on each unresolved violation, and flag the
  ticket for human review in Jira.
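The bounded fix loop from these rules can be sketched as follows. This is an illustrative simulation: scanAndCheck stands in for running sonar-scanner plus the quality-gate MCP call, and the issue counts are invented:

```typescript
// Sketch of the bounded scan loop: scan, check the gate, fix, repeat,
// with the hard iteration cap from the CLAUDE.md rules above.
function runScanLoop(
  scanAndCheck: (iteration: number) => boolean, // true = Quality Gate passed
  maxIterations = 5
): { passed: boolean; iterations: number } {
  for (let i = 1; i <= maxIterations; i++) {
    if (scanAndCheck(i)) return { passed: true, iterations: i };
    // A failing gate would trigger sonarqube_get_issues and targeted fixes here.
  }
  // Cap reached: commit, add TODOs, and flag the ticket for human review.
  return { passed: false, iterations: maxIterations };
}

// Simulation: each scan cycle fixes up to two issues; the gate passes at zero.
let openIssues = 5;
const scanResult = runScanLoop(() => {
  openIssues = Math.max(0, openIssues - 2); // fixes applied this iteration
  return openIssues === 0;
});
```

A violation that never converges (for instance, one requiring an architectural change) hits the cap and escalates instead of burning tokens indefinitely.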

Real-World Example: S3 Bucket Confused-Deputy

SonarQube might flag an S3 bucket policy that lacks a Condition key restricting access by aws:SourceArn. Without this condition, any AWS service could assume the role and access the bucket (a confused-deputy attack). Claude reads the SonarQube issue description, understands the vulnerability class, adds the missing condition to the IAM policy, re-scans, and the Quality Gate passes. The entire fix cycle takes seconds, not the hours a human security review might require [8].

09. Phase 8: Frontend Verification with Chrome DevTools

Tests verify logic. Linters verify syntax. But neither verifies what the user actually sees. The Chrome DevTools MCP server bridges this gap by giving Claude direct access to a running browser, allowing it to inspect rendered pages, check console errors, analyse network requests, and take screenshots for visual verification [10] [11].

Chrome DevTools MCP Tools

| Category | Tools | Purpose |
|---|---|---|
| Page | navigate, screenshot, getContent | Navigate to URLs, capture screenshots, read DOM |
| Console | getLogs, evaluate | Read console errors and warnings, run JS |
| Network | getRequests, getFailedRequests | Inspect API calls, find failed requests |
| Elements | querySelector, getStyles | Inspect DOM elements and computed styles |
| Accessibility | getAccessibilityTree | Check ARIA roles, labels, and tree structure |
| Performance | getMetrics, traceStart, traceStop | Measure load times, identify bottlenecks |

Installation

# Install Chrome DevTools MCP
npx @anthropic/chrome-devtools-mcp@latest

CLAUDE.md Verification Workflow

Add a visual verification workflow to your CLAUDE.md that Claude follows after passing all other quality gates [10]:

## Frontend Verification (after tests pass)
1. Start the dev server: npm run dev
2. Wait for the server to be ready
3. Use Chrome DevTools MCP to:
   a. Navigate to the affected pages
   b. Check for console errors (zero tolerance)
   c. Verify no failed network requests
   d. Take a screenshot and describe what you see
   e. Check accessibility tree for missing labels
4. If any issues found, fix them and re-verify
5. Stop the dev server when done

Practical Example: Debugging a Form Submission

Consider a scenario where Claude has implemented a user registration form. Tests pass (the API endpoint works, validation logic is correct), but the form does not actually work in the browser because a CSRF (Cross-Site Request Forgery) token is missing from the request headers. Here is how Chrome DevTools catches what tests miss:

Claude navigates to the registration page using the Page tools. It fills in the form fields using the evaluate tool to set input values. It clicks the submit button. The Network tools reveal a 403 response on the POST request. Claude reads the response body and finds a "CSRF token missing" error. It traces the issue back to the form component, adds the CSRF token to the request headers, rebuilds, re-navigates, re-submits, and this time sees a 201 response. The Console tools confirm zero errors. The form works [11].

This kind of integration-level bug is invisible to unit tests. Chrome DevTools MCP makes it visible to Claude, closing a gap that previously required human verification.
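The outcome of a verification pass can be reduced to a single gate decision for the pipeline. A hypothetical aggregation, where the report shape is an assumption for the sketch rather than the MCP server's actual return type:

```typescript
// Hypothetical sketch: collapse browser signals into one pass/fail verdict.
// The fields stand in for what the Console and Network tools would report.
interface PageReport {
  consoleErrors: string[];
  failedRequests: { url: string; status: number }[];
}

function frontendGate(report: PageReport): { passed: boolean; reasons: string[] } {
  const reasons: string[] = [];
  if (report.consoleErrors.length > 0) {
    reasons.push(`${report.consoleErrors.length} console error(s)`); // zero tolerance
  }
  for (const req of report.failedRequests) {
    reasons.push(`${req.status} on ${req.url}`);
  }
  return { passed: reasons.length === 0, reasons };
}
```

In the registration-form scenario above, the 403 on the POST request would surface as a failing reason, sending Claude back into the fix loop exactly as a failing test would.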

10. Phase 9: Session-Per-Ticket with the Agent SDK

Long autonomous sessions inevitably hit context compaction. When the context window fills up, Claude summarises the conversation to free space, and that summarisation is lossy. It can drop project conventions, active ticket details, and pipeline state [13]. You can mitigate this with PreCompact and PostCompact hooks, but the better architectural decision is to avoid the problem entirely: give every Jira ticket its own fresh Claude Code session.

The Claude Agent SDK (@anthropic-ai/claude-code) makes this possible by giving you programmatic control over Claude Code sessions from TypeScript or Node.js. Instead of one marathon session that accumulates context and eventually compacts, your orchestrator spawns a clean session per ticket. Each session starts fresh with CLAUDE.md fully loaded, zero accumulated drift, and a focused scope.

Why Session-Per-Ticket Beats Compaction Hooks

Long Session + Compaction

  • Context degrades over time
  • Handoff documents are imperfect
  • One bad compaction derails the entire session
  • Debugging failures across compaction boundaries is painful

Agent SDK + Fresh Sessions

  • Every ticket gets full context window
  • CLAUDE.md is re-read from scratch each time
  • A failed ticket does not poison the next one
  • Logs are clean: one session, one ticket, one PR

Installing the Agent SDK

The Agent SDK is a TypeScript/JavaScript package that wraps Claude Code's CLI into a programmatic interface. Install it as a dependency in your orchestrator project:

npm install @anthropic-ai/claude-code

The Orchestrator Script

The orchestrator is a Node.js script that sits above Claude Code and manages the pipeline. It queries Jira for the sprint backlog, then processes each ticket in a fresh session. Here is a production-ready example:

import { query } from "@anthropic-ai/claude-code";

interface JiraTicket {
  key: string;
  summary: string;
  acceptanceCriteria: string;
}

async function processTicket(ticket: JiraTicket): Promise<void> {
  console.log(`Processing ${ticket.key}: ${ticket.summary}`);

  // Slugify the summary for the branch name (e.g. "Add login" -> "add-login")
  const slug = ticket.summary.toLowerCase().replace(/\s+/g, "-");
  let success = false;

  // query() spawns a fresh Claude Code session and streams its messages;
  // the final message (type "result") reports the session outcome.
  for await (const message of query({
    prompt: `
      You are working on Jira ticket ${ticket.key}: ${ticket.summary}.

      Acceptance criteria:
      ${ticket.acceptanceCriteria}

      Instructions:
      1. Create branch feature/${ticket.key}/${slug}
      2. Implement the requirements
      3. Run all quality gates (tests, lint, SonarQube, Chrome DevTools)
      4. Create a PR with a description referencing ${ticket.key}
    `,
    options: {
      maxTurns: 200,
      allowedTools: ["Bash", "Read", "Write", "Edit", "Glob", "Grep"],
    },
  })) {
    if (message.type === "result") {
      success = message.subtype === "success";
    }
  }

  console.log(`Completed ${ticket.key}: ${success ? "SUCCESS" : "FAILED"}`);
}

async function runPipeline(tickets: JiraTicket[]): Promise<void> {
  for (const ticket of tickets) {
    try {
      await processTicket(ticket);
    } catch (error) {
      console.error(`Failed on ${ticket.key}:`, error);
      // Continue to the next ticket rather than stopping the pipeline
    }
  }
}

Each call to query() spawns a completely fresh session. CLAUDE.md is loaded from scratch. The hooks defined in .claude/settings.json fire normally. There is zero context bleed between tickets. If ticket PROJ-42 fails spectacularly and burns through its entire context window, ticket PROJ-43 starts with a clean slate.

Parallel Ticket Processing

For tickets with no interdependencies, you can process them in parallel. Combined with git worktrees (covered in Phase 4), this means multiple tickets are being implemented simultaneously, each in its own session and its own isolated checkout:

async function runParallelPipeline(tickets: JiraTicket[]): Promise<void> {
  const concurrency = 3; // Run 3 tickets at a time

  for (let i = 0; i < tickets.length; i += concurrency) {
    const batch = tickets.slice(i, i + concurrency);
    await Promise.allSettled(
      batch.map((ticket) => processTicket(ticket))
    );
  }
}
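Combined with the batching above, each parallel ticket needs its own checkout before its session starts. A sketch, assuming a ../worktrees/ layout and feature/&lt;ticket&gt; branch names (both illustrative, not a Claude Code convention):

```shell
#!/usr/bin/env bash
# Sketch: one git worktree per ticket, so parallel sessions never share a
# working directory. The ../worktrees layout and branch naming are
# assumptions to adapt for your repo.

# worktree_for TICKET [BASE]: create an isolated checkout and print its path
worktree_for() {
  local ticket="$1" base="${2:-main}"
  local dir="../worktrees/${ticket}"
  # Send git's progress chatter to stderr so callers can capture the path
  git worktree add "$dir" -b "feature/${ticket}" "$base" 1>&2
  echo "$dir"
}
```

The orchestrator would call this once per ticket and point each session's working directory at the printed path; `git worktree remove` cleans up after the PR is created.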

Quality Gates as Separate Sessions

The Agent SDK also lets you run quality gates as their own sessions. If the implementation session creates a PR but you want an independent review pass, you can spawn a second session focused purely on verification:

import { query } from "@anthropic-ai/claude-code";

async function reviewTicket(branch: string, ticketKey: string): Promise<boolean> {
  // A separate query() call spawns an independent session with no memory
  // of the implementation session's decisions.
  for await (const message of query({
    prompt: `
      Review the code on branch ${branch} for ticket ${ticketKey}.
      1. Run the full test suite
      2. Run SonarQube scan and check the Quality Gate
      3. Use Chrome DevTools to verify all UI changes
      4. If any issues found, fix them and push
      5. If the PR description is incomplete, update it
    `,
    options: {
      maxTurns: 100,
      allowedTools: ["Bash", "Read", "Write", "Edit", "Glob", "Grep"],
    },
  })) {
    if (message.type === "result") {
      return message.subtype === "success";
    }
  }
  return false;
}

This pattern gives you defence in depth: the implementation session writes the code with hooks enforcing quality gates, and a separate review session verifies the result with fresh eyes and a full context window. The review session has no memory of the implementation decisions, so it evaluates the code on its own merits rather than confirming the implementation agent's assumptions.

11. Phase 10: Automated PR Creation

The final phase of the pipeline is delivery. Once all quality gates have passed (tests, lint, TypeScript, code simplification, SonarQube, Chrome DevTools), Claude creates a Pull Request with comprehensive documentation [3]. The PR is not just a code diff. It includes a summary of what was done, which Jira ticket it addresses, the test results, and the SonarQube scan outcome.

The Six-Step Delivery Process

1. Stage and commit all changes with a Conventional Commits message referencing the Jira ticket
2. Push the branch to the remote repository
3. Create the PR using gh pr create with a structured template
4. Link the Jira ticket by including the ticket key in the PR description
5. Transition the Jira ticket to "In Review" using the Atlassian MCP server
6. Add a Jira comment with the PR URL and a summary of what was implemented
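The git and gh side of steps 1 to 4 can be sketched in shell. The helper names (pr_body, deliver) and the PR body sections are illustrative assumptions, not a gh default; steps 5 and 6 run through the Atlassian MCP server inside the session, so they are not shown here:

```shell
#!/usr/bin/env bash
# Sketch of delivery steps 1 to 4. pr_body and deliver are hypothetical
# helper names; adapt the template sections to your own CLAUDE.md rules.

# pr_body TICKET TEST_SUMMARY SONAR_STATUS: render a structured PR description
pr_body() {
  printf '## Summary\nImplements %s.\n\n## Test results\n%s\n\n## SonarQube\n%s\n' \
    "$1" "$2" "$3"
}

# deliver TICKET: commit, push, and open the PR (steps 1 to 4)
deliver() {
  local ticket="$1"
  git add -A
  git commit -m "feat(${ticket}): implement acceptance criteria"      # step 1
  git push -u origin "$(git branch --show-current)"                   # step 2
  gh pr create --title "${ticket}: $(git log -1 --pretty=%s)" \
    --body "$(pr_body "$ticket" 'all tests passing' 'Quality Gate: passed')"  # steps 3 and 4
}
```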

The Jira integration is what closes the loop. The ticket that entered the pipeline as a backlog item now sits in "In Review" with a linked Pull Request, ready for a human reviewer to approve. The entire journey, from ticket to PR, happened without human intervention [3].

You can add instructions to CLAUDE.md that define the PR template Claude should use, ensuring consistent documentation across all autonomously created Pull Requests. Include sections for a summary, the Jira ticket link, test results, SonarQube status, and a list of modified files with brief descriptions of the changes.

12. Common Failure Modes

No autonomous system is perfect. Understanding the most common failure modes helps you build resilience into your pipeline before problems occur [4] [9].

1. Infinite Fix Loop

Claude gets stuck cycling between a SonarQube violation and an attempted fix that introduces a new violation. Each fix creates a new problem, and the loop never converges.

Prevention

Always set maximum iteration counts in your CLAUDE.md. After the limit is reached, commit the current state with TODO comments on unresolved issues and flag the ticket for human review. A counter variable in your hook script can track iterations and force an exit.
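One way to wire that counter into a Stop hook script; the counter file, cap, and GATE_CMD variable are illustrative knobs rather than official Claude Code settings, and exit code 2 is what makes the hook blocking:

```shell
#!/usr/bin/env bash
# Sketch of an iteration cap for a Stop hook. COUNTER_FILE, MAX_ITERATIONS,
# and GATE_CMD are illustrative, not official Claude Code settings.
COUNTER_FILE="${COUNTER_FILE:-.claude/fix-loop-count}"
MAX_ITERATIONS="${MAX_ITERATIONS:-5}"
GATE_CMD="${GATE_CMD:-npm test --silent}"   # the quality gate to enforce

# bump_counter: increment the persisted attempt count and print it
bump_counter() {
  mkdir -p "$(dirname "$COUNTER_FILE")"
  local count
  count=$(cat "$COUNTER_FILE" 2>/dev/null || echo 0)
  count=$((count + 1))
  echo "$count" > "$COUNTER_FILE"
  echo "$count"
}

# run_gate: 0 = allow stop, 2 = block and keep fixing. Once the cap is
# hit, allow the stop so a human can take over.
run_gate() {
  local count
  count=$(bump_counter)
  if [ "$count" -gt "$MAX_ITERATIONS" ]; then
    echo "Max iterations ($MAX_ITERATIONS) reached; flagging for human review." >&2
    rm -f "$COUNTER_FILE"
    return 0
  fi
  if $GATE_CMD; then
    rm -f "$COUNTER_FILE"   # gate passed: reset for the next run
    return 0
  fi
  echo "Gate failed (attempt $count/$MAX_ITERATIONS); continue fixing." >&2
  return 2                   # exit code 2 blocks the Stop hook
}

# As a Stop hook, end the script with:  run_gate; exit $?
```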

2. Context Drift After Compaction

In long sessions, compaction causes Claude to forget project conventions, producing incorrect branch names, skipped quality gates, and ignored commit formats.

Prevention

Use the Agent SDK to run one session per Jira ticket (described in Phase 9). Each session starts with full context, so compaction never occurs for reasonably scoped tickets. For unavoidably long sessions, use PreCompact hooks as a fallback to save state and PostCompact hooks to re-inject essential rules.
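For those fallback hooks, a minimal PreCompact state-saver might look like the following; the state file path and its contents are assumptions, not an official convention:

```shell
#!/usr/bin/env bash
# Sketch of a PreCompact hook script: snapshot the essentials that must
# survive compaction. The file path is an assumption; pair it with a hook
# that re-injects this file after compaction.

# save_compaction_state [FILE]: write branch and standing rules to FILE
save_compaction_state() {
  local state_file="${1:-.claude/compaction-state.md}"
  mkdir -p "$(dirname "$state_file")"
  {
    echo "## State saved before compaction"
    echo "- Branch: $(git branch --show-current 2>/dev/null || echo unknown)"
    echo "- Active ticket: see .claude/sprint-context.md"
    echo "- Re-read CLAUDE.md before continuing."
  } > "$state_file"
  echo "$state_file"
}
```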

3. Scope Creep

Claude starts "improving" code outside the scope of the current ticket, refactoring unrelated modules, or adding features that were not requested.

Prevention

Add explicit scope boundaries in your CLAUDE.md: "Only modify files directly related to the active Jira ticket. Do not refactor unrelated code. Do not add features not listed in the acceptance criteria." The sprint-context.md file reinforces this by listing exactly which files and endpoints are in scope.

4. Credential Exposure

Claude accidentally logs, commits, or outputs sensitive values like API keys, database passwords, or tokens.

Prevention

The PreToolUse hook should block access to .env files and credential stores. Add a PostToolUse hook on git commit operations that runs git diff --cached and scans for patterns matching API keys, tokens, and passwords. Block the commit if any are found. Tools like git-secrets or trufflehog can be integrated into this hook.
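One possible shape for that scan, as a function the hook script could call; the pattern list is illustrative and deliberately minimal, which is why dedicated tools are worth integrating:

```shell
#!/usr/bin/env bash
# Sketch of a staged-diff secret scan for a PostToolUse hook on git
# commits. The pattern list is illustrative; git-secrets or trufflehog
# will catch far more.

# scan_staged: return 2 (block the commit) if an added line in the staged
# diff looks like a credential, 0 otherwise
scan_staged() {
  local patterns='AKIA[0-9A-Z]{16}|ghp_[A-Za-z0-9]{36}|BEGIN (RSA |EC )?PRIVATE KEY|(api[_-]?key|token|password)[[:space:]]*[=:][[:space:]]*[A-Za-z0-9/+_-]{8,}'
  if git diff --cached | grep -E '^\+' | grep -Eiq "$patterns"; then
    echo "Possible secret in staged changes; commit blocked." >&2
    return 2
  fi
  return 0
}

# As a hook script, end with:  scan_staged; exit $?
```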

13. Complete Configuration Reference

Here is a production-ready .claude/settings.json that combines all the hooks and MCP servers described in this guide. You can use this as a starting template and customise it for your project [6] [7]:

{
  "mcpServers": {
    "atlassian": {
      "command": "docker",
      "args": [
        "run", "-i", "--rm",
        "-e", "JIRA_URL=https://your-org.atlassian.net",
        "-e", "JIRA_EMAIL=your-email@company.com",
        "-e", "JIRA_API_TOKEN",
        "ghcr.io/anthropics/claude-code-atlassian-mcp:latest"
      ]
    },
    "sonarqube": {
      "command": "npx",
      "args": ["-y", "@anthropic/sonarqube-mcp-server"],
      "env": {
        "SONARQUBE_URL": "http://localhost:9000",
        "SONARQUBE_TOKEN": "your-sonarqube-token"
      }
    }
  },
  "hooks": {
    "SessionStart": [
      {
        "type": "command",
        "command": "cat .claude/sprint-context.md 2>/dev/null || echo 'No sprint context found.'",
        "matcher": "startup",
        "blocking": false
      }
    ],
    "PostToolUse": [
      {
        "type": "command",
        "command": "npx prettier --write $CLAUDE_FILE_PATH 2>/dev/null || true",
        "matcher": "Edit|Write",
        "blocking": false
      }
    ],
    "PreToolUse": [
      {
        "type": "command",
        "command": "bash .claude/hooks/block-dangerous.sh",
        "matcher": "Bash",
        "blocking": true
      }
    ],
    "Stop": [
      {
        "type": "command",
        "command": "bash .claude/hooks/enforce-tests.sh",
        "matcher": "",
        "blocking": true
      }
    ],
    "SubagentStop": [
      {
        "type": "prompt",
        "prompt": "Run /simplify on all files modified by the subagent. Verify tests still pass.",
        "matcher": "",
        "blocking": true
      }
    ],
    "Notification": [
      {
        "type": "command",
        "command": "bash .claude/hooks/notify-slack.sh",
        "matcher": "",
        "blocking": false
      }
    ]
  }
}

This configuration gives you the full autonomous pipeline: sprint context injection on startup, automatic formatting after edits, dangerous command blocking, test enforcement on completion, code simplification after subagent work, and Slack notifications for monitoring. Whether a hook blocks is controlled by its exit code: a hook that exits with code 2 blocks the action and feeds its stderr back to Claude, while other non-zero exits are non-blocking. Pair this with an Agent SDK orchestrator script (covered in Phase 9) that spawns a fresh session per Jira ticket, and you have a system that processes an entire sprint backlog without human intervention. Customise the paths, matchers, and MCP server credentials for your specific environment.

Conclusion

Key Takeaways

Hooks are the backbone. Every quality gate in the pipeline is enforced by a hook, not by Claude's willingness to follow instructions. Deterministic enforcement is what makes the difference between a demo and a production system.
MCP servers extend reach. Jira for work intake, SonarQube for security scanning, and Chrome DevTools for visual verification turn Claude from a code generator into a full development agent.
The Agent SDK is the orchestration layer. Spawning a fresh session per ticket via @anthropic-ai/claude-code eliminates context drift entirely. Each ticket gets a full context window, clean state, and isolated failure domain.
Loop limits prevent runaway costs. Every iterative process (SonarQube scanning, test fixing, simplification) needs a maximum iteration count to prevent infinite loops and unexpected API bills.

The pipeline described in this guide is not theoretical. Teams are running variations of this architecture today, shipping features overnight while developers sleep [4] [9]. The technology has matured to the point where the limiting factor is no longer the AI's coding ability. It is the quality of the orchestration around it.

Building an autonomous development pipeline is itself an exercise in systems thinking. You are not writing code; you are designing a system of constraints, feedback loops, and safety boundaries within which an AI agent operates. The hooks are your quality inspectors. CLAUDE.md is your operations manual. MCP servers are your supply chain. And the result, when assembled correctly, is a factory that turns Jira tickets into Pull Requests while you focus on the work that requires human judgment: architecture decisions, product strategy, and the creative problems that machines have not yet learned to solve.

14. Frequently Asked Questions

What is the best way to automate Claude Code in 2026?

The most effective approach is building an autonomous pipeline that combines four core components: hooks for deterministic quality gates, MCP (Model Context Protocol) servers for external tool integration (Jira, SonarQube), CLAUDE.md for persistent project rules, and verification loops that prevent broken code from advancing. This pipeline takes a Jira ticket as input and produces a reviewed, tested Pull Request as output.

How do Claude Code hooks enforce quality gates?

Hooks are shell commands, HTTP endpoints, or LLM prompts that execute automatically at specific points in Claude Code's lifecycle. The Stop hook fires when Claude finishes responding and can return a blocking decision that forces Claude to continue working. For example, a Stop hook can run the test suite and block completion until all tests pass, creating a self-correcting loop without human intervention.

Can Claude Code integrate with Jira for autonomous development?

Yes. The Atlassian Rovo MCP server gives Claude Code full CRUD access to Jira, including creating issues, reading ticket details, updating statuses, and transitioning tickets. You can feed Claude a PRD and have it automatically generate structured Jira tickets with acceptance criteria, then implement each ticket autonomously.

How does the SonarQube MCP server work with Claude Code?

The SonarQube MCP server creates a closed-loop security review workflow. Claude writes code, runs sonar-scanner, checks the Quality Gate status via MCP tools, reads any violations, applies targeted fixes, and re-scans. This loop continues until the Quality Gate passes or a maximum iteration count is reached to prevent infinite loops.

What is Claude Code Auto Mode and is it safe?

Auto Mode, enabled with the --auto flag, uses a secondary model (Sonnet) as a classifier that evaluates each proposed action before execution. It approves routine development actions (writing files, running tests, making commits) while blocking potentially dangerous actions (accessing credentials, modifying infrastructure). It is safer than --dangerously-skip-permissions and is recommended for production autonomous workflows.

How do you prevent context drift in long Claude Code sessions?

The best approach is to avoid long sessions entirely. Use the Claude Agent SDK (@anthropic-ai/claude-code) to spawn a fresh Claude Code session for every Jira ticket. Each session starts with full context, CLAUDE.md loaded from scratch, and zero accumulated drift. If one ticket fails, the next starts clean. This session-per-ticket pattern is more reliable than compaction hooks and also enables parallel ticket processing.

What is the code-simplifier plugin for Claude Code?

The code-simplifier is Anthropic's official plugin that runs on Opus and focuses on recently modified code. It preserves functionality while enhancing clarity by reducing nesting, eliminating redundancy, and improving naming. It follows your CLAUDE.md project standards and can reduce token consumption in subsequent sessions by 20 to 30 percent.

Written by Kevin Collins

Founder at Echofold · Claude Ambassador · Manus Fellow

I've won hackathons, trained enterprise teams, and built startups with AI agents. I write about all of it — the wins, the failures, and the stuff nobody talks about.


15. References

[1] Anthropic. "Best Practices for Claude Code."

[2] Builder.io. "How to use Claude Code for Jira."

[3] Kinto Technologies. "Claude Code: Fully Automating from Jira Ticket to PR."

[4] Reddit /r/ClaudeCode. "Has anyone automated a full agentic code solution?"

[5] Anthropic Engineering. "Claude Code auto mode: a safer way to skip permissions."

[6] Anthropic. "Hooks Reference."

[7] MorphLLM. "Claude Code Hooks: Automate Every Edit, Commit, and Tool Call."

[8] SonarSource. "Claude Code + SonarQube MCP: Building an autonomous code review workflow."

[9] ClaudeFast. "Claude Code Autonomous Loops: Ship Features While You Sleep."

[10] Anthropic. "Use Claude Code with Chrome (beta)."

[11] Google Chrome Developers. "Chrome DevTools MCP for your AI agent."

[12] Cyrus. "What is Claude Code's Code-Simplifier Agent?"

[13] Nick Porter. "Claude Code: Post-Compaction Hooks for Context Renewal."
