Exam Guide · March 17, 2026 · 18 min read

The Complete Claude Certified Architect (CCA) Exam Guide for 2026

Everything you need to know about the Claude Certified Architect — Foundations certification: exam format, domain breakdown, study strategy, and how to pass on your first attempt.

1. What is the Claude Certified Architect Exam?

The Claude Certified Architect — Foundations (CCA-F) is Anthropic's official certification for professionals who design and build production applications with Claude. It validates that you can make sound architectural trade-off decisions across the full Claude technology stack.

Unlike generic AI certifications, the CCA is scenario-based. Every question presents a realistic situation — building a customer support agent, integrating Claude Code into CI/CD, designing multi-agent research systems — and asks you to choose the most effective solution. This isn't about memorizing API parameters; it's about understanding when and why to use specific patterns.

The certification covers four core technologies: the Claude API, the Claude Agent SDK, Claude Code, and the Model Context Protocol (MCP).

2. Who Should Take This Exam?

The ideal candidate is a solution architect or senior developer who builds production applications with Claude. Anthropic recommends at least 6 months of hands-on experience with:

  • Claude Agent SDK — multi-agent orchestration, subagent delegation, tool integration, lifecycle hooks
  • Claude Code — CLAUDE.md configuration, MCP servers, Agent Skills, planning mode
  • Model Context Protocol (MCP) — tools, resources, and server architecture
  • Prompt engineering — JSON schemas, few-shot examples, structured output, data extraction
  • Context management — working with long documents, multi-agent context passing, summarization

If you're a developer who has built at least one non-trivial Claude-powered application, you have the foundation to pass this exam with proper preparation.

3. Exam Format and Scoring

Here's what to expect on exam day:

  • Question type: Multiple choice (1 correct out of 4)
  • Scoring: 100–1000 scale
  • Passing score: 720
  • Guessing penalty: None (answer every question)
  • Scenarios: 4 out of 6 possible (randomly selected)

Key insight: Since there's no guessing penalty, never leave a question blank. Even a blind guess gives you a 25% chance, and eliminating just one wrong answer raises that to 33%.

The 6 Possible Scenarios

Your exam will randomly feature 4 of these 6 scenarios. Each scenario provides the context for multiple questions:

  1. Customer Support Agent — Building an agent to handle returns, billing, and account issues using MCP tools
  2. Code Generation with Claude Code — Using Claude Code for development, refactoring, and documentation
  3. Multi-Agent Research System — Coordinator delegates to specialized subagents for research and synthesis
  4. Developer Productivity Tools — Agent that helps explore codebases and automate tasks using built-in tools
  5. Claude Code for CI/CD — Integrating Claude Code into automated pipelines for code review and testing
  6. Structured Data Extraction — Extracting information from unstructured documents with JSON schema validation

4. The 5 Exam Domains (Detailed Breakdown)

Domain 1: Agent Architecture and Orchestration (27%)

This is the heaviest domain — over a quarter of the exam. You need to understand the agentic loop pattern, hub-and-spoke multi-agent architecture, subagent delegation, and when to use single vs. multi-agent systems.

Key concepts:

  • The agentic loop: send request → check stop_reason → execute tools or stop
  • stop_reason values: tool_use, end_turn, max_tokens
  • Hub-and-spoke: coordinator + specialized subagents with isolated context
  • Explicit context passing — subagents do NOT inherit coordinator history
  • Evaluator-optimizer pattern for self-critique and revision
  • Model routing: Haiku for simple tasks, Sonnet/Opus for complex ones
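The agentic loop described above can be sketched in a few lines. This is a minimal illustration, not SDK code: `call_model` and `run_tool` are hypothetical stand-ins for your API client and tool executor, though the `stop_reason` values match the ones listed above.

```python
# Minimal agentic-loop sketch. call_model and run_tool are hypothetical
# stand-ins for an API client and a tool executor.

def agentic_loop(call_model, run_tool, messages, max_iterations=10):
    """Loop: execute tools while stop_reason is tool_use, stop on end_turn."""
    for _ in range(max_iterations):
        response = call_model(messages)
        if response["stop_reason"] != "tool_use":
            # end_turn (finished) or max_tokens (truncated) both exit here.
            return response
        # Execute every requested tool and feed the results back.
        tool_results = [
            {"type": "tool_result",
             "tool_use_id": block["id"],
             "content": run_tool(block["name"], block["input"])}
            for block in response["content"]
            if block["type"] == "tool_use"
        ]
        messages.append({"role": "assistant", "content": response["content"]})
        messages.append({"role": "user", "content": tool_results})
    raise RuntimeError("agent did not terminate within max_iterations")
```

Note that the loop never inspects the text of the response to decide when to stop; that distinction comes up repeatedly on the exam.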

Domain 2: Tool Design and MCP Integration (20%)

This domain tests your understanding of how Claude selects and uses tools, MCP architecture, JSON schema design, and the critical difference between syntax and semantic errors.

Key concepts:

  • Tool descriptions are the primary selection mechanism — make them detailed
  • tool_choice: auto (default), any (must call a tool), tool (forced specific tool)
  • MCP tools vs. resources: tools perform actions, resources provide read-only context
  • Nullable fields: type: ["string", "null"] prevents hallucination for missing data
  • Enum + "other" + "unclear" for edge case categorization
  • Built-in tools vs. MCP tools — agents may prefer built-in tools over custom MCP tools
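Several of these schema patterns can be shown in a single tool definition. The tool name and fields below are illustrative, not from any real MCP server; the schema shape follows standard JSON Schema as used in tool definitions.

```python
# Illustrative tool definition combining three patterns from this domain:
# a detailed description, a nullable field, and an enum with an "other"
# escape hatch. The tool and its fields are hypothetical.

process_refund_tool = {
    "name": "process_refund",
    # Detailed descriptions are the primary tool-selection mechanism.
    "description": (
        "Process a refund for an existing order. Use only after the "
        "customer has confirmed the order ID. Returns a confirmation number."
    ),
    "input_schema": {
        "type": "object",
        "properties": {
            "order_id": {"type": "string"},
            # Nullable: lets the model report "not provided" instead of
            # hallucinating a value for missing data.
            "tracking_number": {"type": ["string", "null"]},
            # Enum + "other"/"unclear" keeps structured categorization
            # while still covering edge cases.
            "reason": {
                "type": "string",
                "enum": ["defective", "wrong_item", "not_as_described",
                         "changed_mind", "other", "unclear"],
            },
            "other_detail": {
                "type": ["string", "null"],
                "description": "Free text when reason is 'other'.",
            },
        },
        "required": ["order_id", "reason"],
    },
}
```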

Domain 3: Claude Code Configuration and Workflows (21%)

Tests your knowledge of Claude Code's configuration system, slash commands, planning mode, headless mode, and CI/CD integration.

Key concepts:

  • CLAUDE.md hierarchy: user → project → directory level
  • Planning mode vs. direct execution: when to use each
  • Custom slash commands: project-level (.claude/commands/) vs. user-level
  • Headless mode for CI/CD: claude --print for non-interactive use
  • Hooks: PreToolUse, PostToolUse, Stop — for validation and quality checks
  • Subagent types: Explore (codebase search), Plan (architecture), custom agents
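To make the hooks bullet concrete, here is a sketch of the decision logic a PreToolUse hook script might contain. It assumes the hook receives a JSON payload on stdin with `tool_name` and `tool_input` fields and that a non-zero exit code blocks the call; verify the exact contract against the Claude Code hooks documentation before relying on it.

```python
# Sketch of a PreToolUse hook's decision logic. Assumes (verify against
# the Claude Code hooks docs) that the hook receives a JSON payload on
# stdin and that a non-zero exit code blocks the tool call.
import json
import sys

PROTECTED = (".env", "secrets/")

def should_block(payload: dict) -> bool:
    """Block file-editing tools that target a protected path."""
    if payload.get("tool_name") not in ("Edit", "Write"):
        return False
    path = payload.get("tool_input", {}).get("file_path", "")
    return any(marker in path for marker in PROTECTED)

def main() -> int:
    payload = json.load(sys.stdin)
    if should_block(payload):
        print("Blocked: protected files require manual review.",
              file=sys.stderr)
        return 2  # non-zero exit signals the block
    return 0

# In the actual hook script you would end with: sys.exit(main())
```

This is the kind of programmatic guardrail the exam favors over prompt instructions when a rule must hold every time.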

Domain 4: Prompt Engineering and Structured Output (16%)

Covers system prompt design, few-shot examples, JSON schema for structured extraction, and the difference between syntax guarantees and semantic correctness.

Key concepts:

  • System prompt: separate from messages, has priority, defines behavior and constraints
  • Few-shot examples: most effective for ambiguous scenarios and output formatting
  • tool_use + JSON schema = guaranteed syntactically valid JSON
  • Semantic errors (wrong values, hallucinations) require separate validation
  • Schema design: required vs. optional fields, nullable types, enum + other
  • Self-critique pattern for improving output quality without human oversight
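The syntax-versus-semantics distinction above is worth seeing in code. A schema guarantees the output's shape; only a separate check can catch wrong values. The function and field names below are illustrative.

```python
# Sketch of semantic validation layered on top of schema-validated
# output: the schema guarantees 'total' is present and well-formed, but
# only a check against the source can catch a hallucinated value.
# Field names and the currency whitelist are illustrative.

def validate_invoice_extraction(extracted: dict, source_text: str) -> list:
    """Return semantic problems a JSON schema cannot catch."""
    problems = []
    total = extracted.get("total")
    if total is not None and str(total) not in source_text:
        problems.append(f"total {total} not found in source text")
    if extracted.get("currency") not in ("USD", "EUR", None):
        problems.append("unexpected currency value")
    return problems
```

An extraction pipeline would run this after parsing the tool call and route any non-empty result to a retry or human review step.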

Domain 5: Context Management and Reliability (16%)

Focuses on managing context windows effectively, handling long conversations, summarization pitfalls, and reliability patterns.

Key concepts:

  • Lost-in-the-middle effect: critical info should go at the start or end of long inputs
  • Progressive summarization loses precise details (numbers, dates, percentages)
  • "Case facts" pattern: extract key data into a persistent block outside summarized history
  • RAG for documents exceeding the context window
  • Subagent context isolation to preserve coordinator context
  • Verbose tool results consuming context — design tools to return minimal output
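The "case facts" pattern can be sketched as a prompt assembler: precise values live in a block that is never summarized, ahead of the summarized history. The structure below is illustrative, not a specific SDK feature.

```python
# Sketch of the "case facts" pattern: precise values (amounts, dates,
# IDs) are copied into a persistent block that survives summarization
# of older turns. The layout is illustrative.

def build_prompt(case_facts: dict, summary: str, recent_turns: list) -> str:
    """Assemble context: facts first (never summarized), then history."""
    facts_block = "\n".join(f"- {k}: {v}" for k, v in case_facts.items())
    return (
        "CASE FACTS (authoritative, do not paraphrase):\n"
        f"{facts_block}\n\n"
        f"SUMMARY OF EARLIER CONVERSATION:\n{summary}\n\n"
        "RECENT TURNS:\n" + "\n".join(recent_turns)
    )
```

Because the facts block is rebuilt from structured data on every turn, a dollar amount mentioned at turn 3 is still verbatim in context at turn 50, regardless of what summarization has done to the transcript.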

5. Study Strategy: How to Prepare

Phase 1: Understand the Fundamentals (Week 1)

Read the official documentation for all four core technologies. Don't try to memorize — focus on understanding the why behind each pattern:

  • Claude API Messages reference and tool use documentation
  • Claude Agent SDK: overview, hooks, subagents, sessions
  • Claude Code: CLAUDE.md, skills, hooks, sub-agents, MCP integration
  • MCP specification: tools, resources, servers

Phase 2: Practice Scenario Questions (Weeks 2–3)

The exam is scenario-based, so rote memorization won't work. You need to practice reasoning through realistic situations:

  • Work through practice questions by domain, not randomly
  • For each wrong answer, understand why the correct answer is better
  • Pay attention to patterns: the exam loves asking about proportional solutions (simplest effective fix wins)
  • Track your accuracy by domain to identify weak areas

Phase 3: Focus on Weak Domains (Weeks 3–4)

Use your domain accuracy scores to prioritize study time. If you're scoring 90% on prompt engineering but 60% on agent architecture, spend your time on agent architecture.

Phase 4: Full Practice Exam (Final Week)

Take a timed, full-length practice exam to simulate the real experience. Review every wrong answer and revisit the underlying concepts.

6. Common Mistakes to Avoid

  1. Choosing the most complex solution. The CCA exam favors proportional solutions. If improving tool descriptions fixes the problem, don't build a routing classifier.
  2. Confusing prompt-based vs. programmatic enforcement. When business-critical logic requires a specific sequence, programmatic guardrails (PreToolUse hooks, tool_choice forcing) beat prompt instructions every time.
  3. Forgetting context isolation. Subagents do NOT inherit the coordinator's conversation history. This is tested heavily — always look for answers that explicitly pass context.
  4. Confusing syntax vs. semantic errors. tool_use + JSON schema eliminates syntax errors (invalid JSON). It does NOT prevent semantic errors (wrong values, hallucinations).
  5. Ignoring the lost-in-the-middle effect. When questions mention that the agent misses information in a long prompt, think about information placement.

7. Sample Practice Questions

Question 1 (Domain 1 — Agent Architecture): Your agentic loop checks if the assistant's response contains "I've completed the task" to decide when to stop. During testing, the agent generates this phrase mid-conversation while still needing to call tools. What should you change?

  • A) Add more termination phrases for robust detection
  • B) Check stop_reason: continue on tool_use, stop on end_turn
  • C) Set a maximum iteration count
  • D) Parse the response for tool call JSON

Answer: B. The stop_reason field is the reliable, API-provided mechanism. Parsing natural language (A, D) is an anti-pattern — the model may include completion-sounding phrases while still intending to call tools.

Question 2 (Domain 2 — Tool Design): Your MCP server's 'process_refund' tool has a 'reason' enum: ['defective', 'wrong_item', 'not_as_described', 'changed_mind']. A customer reports a counterfeit product, and the agent picks 'defective', which is inaccurate. How do you fix this?

  • A) Add 'counterfeit' to the enum
  • B) Remove the enum, use free text
  • C) Add 'other' + an 'other_detail' string field
  • D) Add every possible reason to the enum

Answer: C. Adding 'other' + a detail string captures edge cases without losing structured categorization for common cases. Adding every value (D) is unmaintainable. Removing the enum (B) loses structure.

Question 3 (Domain 5 — Context Management): After summarization kicks in at turn 30, your agent gives incorrect answers about specific dollar amounts mentioned earlier. What is the most effective fix?

  • A) Increase the context window to avoid summarization
  • B) Extract key facts into a persistent "case facts" block outside summarized history
  • C) Improve the summarization prompt to preserve numbers
  • D) Store full history externally and retrieve on demand

Answer: B. Progressive summarization inherently loses precise details. Extracting transactional facts into a persistent block ensures they're always available regardless of summarization state.

Want more? We have 300+ practice questions covering all 5 domains with detailed explanations. View the full practice test bank →

8. Recommended Resources

Official Documentation

Practice Materials

Study Tips Summary

  1. Allocate study time proportional to domain weights (27/20/21/16/16)
  2. Focus on why answers are correct, not just what is correct
  3. Practice with scenarios — the exam never asks isolated facts
  4. Remember: simplest effective solution wins
  5. Never leave a question blank — there's no guessing penalty

Ready to Start Practicing?

Access 300+ practice questions with detailed explanations, covering all 5 CCA exam domains.

View Practice Tests