Elite AI Certification Coaching

Your Path to
AI Expert Status

A strategic, step-by-step certification roadmap built for enterprise consulting, AI product building, and ecosystem partnerships.

7 Certifications · 7-Month Timeline · 30 Min Daily · ~$1,014 Total Cost
Certification Stack

7 certifications. 7 months. Maximum ROI across consulting, product building, and AI ecosystem partnerships. Three clouds + Anthropic native. One coherent story.

⚠ AWS ML Specialty (MLS-C01) — RETIRED March 31, 2026. Cannot be taken. AWS officially directs candidates to ML Engineer Associate (MLA-C01) as the replacement. More practical, more affordable ($150 vs $300), more aligned to consulting. Updated in this plan.
🥇 Must-Have · Week 1 (First)
Claude Certified Architect — Foundations (CCA-F)
ROI: Very High · ~4 weeks · Anthropic Official
First certification, issued directly by Anthropic. Tests agentic architecture, MCP, Claude Code, prompt engineering, and context management across 5 domains. Passing score: 720/1000. Scenario-based: 4 of 6 real-world scenarios are presented at random. Zero overlap with cloud certs; a pure AI builder/architect credential.
🥇 Must-Have · Week 1
GCP Generative AI Leader
ROI: High · 3–6 days · $99–200 USD
Fastest cloud credential in the stack. Maps the entire GCP AI landscape before PMLE prep begins. Designed explicitly for "consultants advising on AI strategies." No coding. 90 minutes. First cloud cert live in Week 1.
🥇 Must-Have · Month 1–2 Primary
Salesforce Agentforce Specialist
ROI: High · 3–4 weeks · $200 USD
Replaces retired AI Associate (Feb 2026). Covers Agentforce, Einstein Copilot, MCP, A2A protocol, Prompt Builder, Trust Layer. Your 4+ SF certs = massive head start. First major cert by end of Month 2.
🥇 Must-Have · Month 2–3
GCP Professional ML Engineer
ROI: High · 8–12 weeks · $200 USD
Strongest enterprise AI signal. Full ML lifecycle on GCP including GenAI, Vertex AI, Model Garden, RAG. Deeply embedded in banking and financial services. Opens Google partnership programs.
🥈 High-Value · Month 1–2 Parallel
Salesforce Data Cloud Consultant
ROI: High · 4–6 weeks · $200 USD
Rebranded to Data 360 (Oct 2025). #1 in-demand Salesforce cert 2026. Run parallel to Agentforce at 10 min/day — 30% content overlap makes this efficient. Completes the Salesforce AI picture: Agentforce = HOW, Data Cloud = FUEL.
🥈 High-Value · Month 4–5
AWS ML Engineer Associate (MLA-C01)
ROI: High · 6–8 weeks · $150 USD
Replaces retired ML Specialty. SageMaker-focused, practical, operational. GCP ML Engineer knowledge transfers ~70% — Vertex AI = SageMaker, Pub/Sub = Kinesis, Dataflow = Glue. Completes two-cloud ML coverage.
🥈 High-Value · Month 6
Azure AI-103 App & Agent Developer
ROI: High · 5–7 weeks · ~$165 USD
Replaces retiring AI-102 (June 2026). Microsoft Foundry, GenAI apps, multi-agent orchestration, RAG, responsible AI. Azure holds 23–25% of the enterprise cloud market and is the fastest growing. Completes three-cloud AI coverage.
7-Month Execution Timeline

Parallel execution rule: Never more than 2 certifications at once. 70/30 focus split. Seven certs, one coherent story.

Week 1 (First)
CCA-F — Claude Certified Architect Foundations
  • 30 min/day → 5 domains: Agentic Architecture, MCP, Claude Code, Prompt Engineering, Context Management
  • Scenario-based exam: 4 of 6 real-world scenarios drawn at random
  • First certification — Anthropic native. Establishes AI builder identity immediately.
Week 1
GCP Generative AI Leader — First Cloud Credential
  • 30 min/day → 3–6 days to exam-ready
  • No code. Scenario-based. Maps full GCP AI landscape.
  • First cloud certification badge live in Week 1
Month 1–2
Agentforce Specialist (Primary 20 min) + Data Cloud Consultant (Parallel 10 min)
  • 20 min/day → Agentforce Specialist — all 4 phases
  • 10 min/day → Data Cloud Consultant — 30% content overlap = efficient
  • Target: both Salesforce certs complete by end of Month 2
Month 2–3
GCP Professional ML Engineer — Full Focus
  • 30 min/day → All 6 GCP phases + MLOps + mock exams
  • GenAI Leader knowledge accelerates Phase 1 prep significantly
  • Book exam when readiness ≥ 80% on mocks
Month 4–5
AWS ML Engineer Associate (MLA-C01) — GCP Knowledge Transfers
  • 30 min/day → All 4 AWS phases
  • ~70% GCP knowledge transfers: SageMaker = Vertex AI, Kinesis = Pub/Sub, Glue = Dataflow
  • Book exam when readiness ≥ 80% — you are now dual-cloud ML certified
Month 6
Azure AI-103 App & Agent Developer — Three-Cloud Complete
  • 30 min/day → Azure AI-103 (live exam June 2026)
  • Do NOT pursue AI-102 — retires June 30, 2026, no renewal path
  • Three-cloud AI architect: GCP + AWS + Azure. Stack complete.
Why This Stack Works: CCA-F is Anthropic's own cert — establishes you as a Claude architect first. GCP + AWS + Azure covers 95%+ of enterprise cloud. Salesforce certs leverage your existing edge. Seven certifications, one coherent story: Anthropic-certified AI architect who can consult, build, and partner across the full AI ecosystem.
Claude Certified Architect — Foundations (CCA-F)

Cert #1. Anthropic official. Passing score 720/1000. 5 domains. 60 questions · 120 min · 6-week plan · 30 min/day.

Your CORA advantage: CORA maps directly to 5 of 6 exam scenarios. S1 (Customer Support Agent) = OrchestratorAgent + MCP tools. S3 (Multi-Agent Research) = OrchestratorAgent → PortfolioAgent. S5 (CI/CD) = Your CI fix. S6 (Structured Extraction) = ReportAgent JSON schemas. You are not starting from zero.
⚠ Access note: Exam exclusive to approved Anthropic partners. Complete all free courses at anthropic.skilljar.com — same material, counts toward readiness.
4 Answer Patterns (memorise these): (1) Programmatic beats prompt-based — when compliance must be deterministic, use code not prompts. (2) Fix root cause not symptom — tool misrouting? fix descriptions, not routing layers. (3) Explicit criteria beats heuristics — escalation off? add criteria + few-shot, not sentiment/ML. (4) Match API to latency — Batch API = overnight jobs only, never pre-merge blocking checks.
DOMAIN 01
Agentic Architecture & Orchestration
27% of exam · Week 1
Weight: 27% — Highest weighted domain. The agentic loop, multi-agent patterns, and hooks must be solid cold.
Agentic loop — stop_reason control flow
stop_reason: "tool_use" → continue loop. stop_reason: "end_turn" → terminate loop. Never use iteration caps as primary stopping mechanism. Never parse natural language signals for termination.
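A minimal sketch of this control flow, with a stubbed client in place of the real Messages API (names like FakeClient and run_tool are illustrative, not SDK classes):

```python
def run_tool(name, args):
    # Hypothetical tool dispatcher for the sketch.
    return {"tool": name, "result": "ok"}

class FakeClient:
    """Stub that emits stop_reason "tool_use" once, then "end_turn"."""
    def __init__(self):
        self.calls = 0

    def create(self, messages):
        self.calls += 1
        if self.calls == 1:
            return {"stop_reason": "tool_use",
                    "tool_call": {"name": "lookup_order", "args": {"id": 7}}}
        return {"stop_reason": "end_turn", "text": "Order 7 shipped."}

def agent_loop(client, messages):
    while True:  # driven by stop_reason, never by an iteration cap
        resp = client.create(messages)
        if resp["stop_reason"] == "tool_use":
            call = resp["tool_call"]
            messages.append({"role": "tool",
                             "content": run_tool(call["name"], call["args"])})
            continue  # "tool_use" -> keep looping
        return resp["text"]  # "end_turn" -> terminate

print(agent_loop(FakeClient(), []))
```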
Multi-agent coordinator-subagent patterns
Hub-and-spoke: coordinator manages all inter-subagent comms. Subagents do NOT inherit coordinator context automatically. Coordinator decomposes, delegates, aggregates. Parallel subagents: multiple Task calls in ONE coordinator response.
Subagent invocation and context passing
Task tool spawns subagents — allowedTools must include "Task". Pass complete findings explicitly in subagent prompt. Use structured data to separate content from metadata. fork_session for parallel exploration branches.
Multi-step workflows with programmatic enforcement
Programmatic enforcement > prompt-based when deterministic compliance needed. Prompt instructions have a non-zero failure rate — unacceptable for financial operations. Use structured handoff summaries for human escalation.
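A sketch of what "programmatic" means here, using the S1 tool names (the SupportWorkflow class itself is invented for illustration): the ordering rule lives in code, so it cannot fail the way a prompt instruction can.

```python
class OrderingViolation(Exception):
    """Raised when a tool is called out of its required order."""

class SupportWorkflow:
    def __init__(self):
        self.customer = None

    def get_customer(self, customer_id):
        self.customer = {"id": customer_id}
        return self.customer

    def lookup_order(self, order_id):
        # Deterministic guard: a prompt can only request this ordering;
        # code guarantees it with a zero failure rate.
        if self.customer is None:
            raise OrderingViolation("get_customer must run before lookup_order")
        return {"order": order_id, "customer": self.customer["id"]}
```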
Agent SDK hooks — PostToolUse & interception
PostToolUse: normalize heterogeneous data formats before model sees it. Tool call interception: block policy-violating actions. Hooks guarantee compliance; prompts provide probabilistic guidance only.
Task decomposition strategies
Prompt chaining for predictable multi-aspect reviews. Dynamic decomposition for open-ended investigation. Split large reviews: per-file passes + cross-file integration pass. Coordinator decomposition failure = most common multi-agent bug.
Session management — resume, fork, restart
--resume <session-name> for named continuation. fork_session for independent branches from shared baseline. Start fresh with summary when prior tool results are stale. Context degradation = inconsistent answers, references to "typical patterns".
DOMAIN 02
Tool Design & MCP Integration
18% of exam · Week 4
Effective tool interface design
Tool descriptions = primary mechanism LLMs use for tool selection. Minimal descriptions → unreliable selection between similar tools. Include: input formats, example queries, edge cases, boundary conditions. Ambiguous descriptions cause misrouting — rename and differentiate first.
Structured error responses — isError, errorCategory, isRetryable
MCP isError flag for tool failures. errorCategory: transient | validation | business | permission. isRetryable: false — prevents wasted retry attempts. Generic "Operation failed" prevents intelligent agent recovery. Distinguish access failures (timeout) from valid empty results (no matches).
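A sketch of a tool handler returning this shape (the search_customers function and its backend parameter are illustrative; the field names follow the text):

```python
def search_customers(query, backend):
    """Return a structured result, distinguishing failure from empty."""
    try:
        rows = backend(query)
    except TimeoutError:
        return {
            "isError": True,
            "errorCategory": "transient",  # transient | validation | business | permission
            "isRetryable": True,
            "message": f"CRM timed out for query {query!r}; retry or narrow the search.",
        }
    # Zero matches is a valid result, NOT an error.
    return {"isError": False, "matches": rows}
```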
Tool distribution across agents
Too many tools (e.g., 18 instead of 4–5) degrades selection reliability. Give each agent only the tools needed for its role. tool_choice: "any" guarantees a tool call instead of conversational text. tool_choice: "auto" = model may return text instead of calling a tool.
MCP server configuration — .mcp.json vs ~/.claude.json
Project scope: .mcp.json — version controlled, shared with team. User scope: ~/.claude.json — personal, not version controlled. Environment variable expansion: ${GITHUB_TOKEN} — never commit secrets. MCP resources: expose content catalogs to reduce exploratory tool calls.
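A minimal project-scoped .mcp.json sketch (the server entry is illustrative; ${GITHUB_TOKEN} expands from the environment at runtime so the secret is never committed):

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "${GITHUB_TOKEN}"
      }
    }
  }
}
```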
Built-in tools — Grep, Glob, Read, Write, Edit, Bash
Grep: search file contents for patterns (function names, errors). Glob: find files by name or extension patterns. Read+Write: fallback when Edit fails due to non-unique anchor text. Build codebase understanding incrementally — Grep entry points, Read to trace dependencies.
DOMAIN 03
Claude Code Configuration & Workflows
20% of exam · Week 2
CLAUDE.md hierarchy — user, project, directory scoping
User-level (~/.claude/CLAUDE.md) — NOT version controlled, not shared. Project-level (.claude/CLAUDE.md or root) — shared via repo. Directory-level (subdirectory) — most specific scope wins. @import syntax for modular CLAUDE.md. .claude/rules/ for topic-specific rule files with YAML frontmatter.
Custom slash commands and skills
Project commands: .claude/commands/ — version controlled, team-wide. Personal commands: ~/.claude/commands/ — not shared. Skills: .claude/skills/ with SKILL.md frontmatter. context: fork → skill runs isolated, no main context pollution. argument-hint → prompts for missing parameters.
Path-specific rules with YAML frontmatter
.claude/rules/ with YAML paths: ["glob/**/*"]. Rules load ONLY when editing matching files. Use for test files spread across codebase — NOT subdirectory CLAUDE.md for scattered files.
Plan mode vs direct execution
Plan mode: complex, multi-file, architectural, large-scale changes — explore before touching code. Direct execution: simple, single-file, clear scope, well-understood problem. Use Explore subagent for verbose discovery to avoid context exhaustion. Monolith-to-microservices = always plan mode first.
CI/CD integration — -p flag, --output-format json
-p / --print flag: non-interactive mode for pipelines — prevents hangs. --output-format json --json-schema for structured CI output. CLAUDE.md provides project context to CI-invoked Claude Code. Independent review instance catches more than self-review (model retains reasoning, less likely to question itself).
DOMAIN 04
Prompt Engineering & Structured Output
20% of exam · Week 3
Explicit criteria to reduce false positives
Explicit categorical criteria > vague "be conservative" instructions. Define which issues to report vs skip with concrete examples. High false positive rates in one category undermine trust in ALL categories — credibility is holistic.
Few-shot prompting — when and how many
Most effective for format consistency and ambiguous case handling. 2-4 targeted examples for ambiguous scenarios. Show reasoning for why one action was chosen over alternatives. Reduces hallucination in extraction by demonstrating structure handling. Few-shot is NOT a substitute for fixing tool descriptions.
Structured output via tool_use and JSON schemas
tool_use + JSON schema = eliminates syntax errors. Does NOT eliminate semantic errors (values in wrong fields). tool_choice: "auto" = model may return text. tool_choice: "any" = must call a tool, picks which. {"type":"tool","name":"..."} = must call specific tool. Nullable fields prevent hallucination when info may be absent.
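A sketch of such a schema plus a structure-only check (tool name, fields, and the validator are invented for illustration). Note what the check does NOT catch: a value of the right type in the wrong field still passes, which is exactly the semantic-error limitation above.

```python
invoice_tool = {
    "name": "record_invoice",
    "input_schema": {
        "type": "object",
        "properties": {
            "vendor": {"type": "string"},
            "stated_total": {"type": "number"},
            # Nullable: the model may return null instead of hallucinating.
            "po_number": {"type": ["string", "null"]},
        },
        "required": ["vendor", "stated_total", "po_number"],
    },
}

def syntactically_valid(payload, schema=invoice_tool["input_schema"]):
    """Check required fields and types only; semantics are out of scope."""
    kinds = {"string": (str,), "number": (int, float), "null": (type(None),)}
    for field in schema["required"]:
        if field not in payload:
            return False
        allowed = schema["properties"][field]["type"]
        if isinstance(allowed, str):
            allowed = [allowed]
        ok_types = ()
        for t in allowed:
            ok_types += kinds[t]
        if not isinstance(payload[field], ok_types):
            return False
    return True
```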
Validation, retry, and feedback loops
Retry-with-error-feedback: include original doc + failed extraction + specific error in prompt. Retries fail when information is simply absent from source. Semantic validation: add calculated_total alongside stated_total to flag discrepancies automatically.
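Both ideas in a short sketch (function names and field layout are illustrative): a semantic check that compares calculated_total against stated_total, and a retry prompt that includes all three pieces of feedback.

```python
def semantic_check(extraction):
    """Return an error string on a total mismatch, else None."""
    calculated_total = round(sum(i["amount"] for i in extraction["line_items"]), 2)
    stated_total = extraction["stated_total"]
    if abs(calculated_total - stated_total) > 0.005:
        return f"stated_total {stated_total} != calculated_total {calculated_total}"
    return None

def retry_prompt(document, failed_extraction, error):
    # Include ALL THREE: source document, failed output, specific error.
    return (
        f"Document:\n{document}\n\n"
        f"Previous extraction (rejected):\n{failed_extraction}\n\n"
        f"Validation error: {error}\n"
        "Return a corrected extraction."
    )
```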
Message Batches API — when to use and when NOT to
50% cost savings, up to 24hr, no guaranteed SLA. Does NOT support multi-turn tool calling. custom_id for correlating request/response pairs. Batch = overnight reports, weekly audits. Real-time API = pre-merge blocking checks. Never swap these — latency requirements are non-negotiable.
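A sketch of why custom_id matters: batch results can arrive in any order, so the id is the only reliable join key. The request shape is simplified; only the custom_id field mirrors the real Batches API.

```python
requests = [
    {"custom_id": f"audit-{n}", "prompt": f"Audit report {n}"} for n in (1, 2, 3)
]

def correlate(requests, responses):
    """Pair each request with its response (None if not yet returned)."""
    by_id = {r["custom_id"]: r for r in responses}
    return [(req, by_id.get(req["custom_id"])) for req in requests]
```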
Multi-instance and multi-pass review
Self-review limitation: model retains reasoning, less likely to question itself. Independent instance (no prior context) catches more subtle issues. Split reviews: per-file local passes + separate cross-file integration pass for 14+ file PRs. Attention dilution = inconsistent depth across files.
DOMAIN 05
Context Management & Reliability
15% of exam · Week 5
Lost-in-the-middle — position-aware ordering
Models reliably process start and end of long inputs, miss middle. Extract transactional facts into a "case facts" block in every prompt. Trim verbose tool outputs to only relevant fields before they accumulate. Place key findings at BEGINNING of aggregated inputs with section headers.
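A sketch of position-aware assembly under these rules (the function and section names are illustrative): case facts and key findings lead, verbose detail is trimmed and placed last.

```python
def assemble_context(case_facts, key_findings, detail_sections):
    """Order sections so critical info sits where models attend best."""
    parts = [
        "## Case facts\n" + "\n".join(case_facts),      # always up front
        "## Key findings\n" + "\n".join(key_findings),  # headers aid retrieval
    ]
    for title, body in detail_sections:
        parts.append(f"## {title}\n{body[:2000]}")  # trim verbose tool output
    return "\n\n".join(parts)
```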
Escalation and ambiguity resolution
Escalate: customer explicitly requests human, policy gap, no meaningful progress after genuine attempts. Honor explicit customer requests immediately — do not investigate first. Sentiment-based escalation is unreliable proxy for complexity. Multiple customer matches → ask for more identifiers, never heuristic selection.
Error propagation in multi-agent systems
Return structured error context: failure type, attempted query, partial results, alternative approaches. Distinguish access failures from valid empty results. Never silently suppress errors or return empty as success. Subagents implement local recovery; propagate only unresolvable errors up to coordinator.
Context in large codebase exploration
Context degradation signs: inconsistent answers, references to "typical patterns" without specifics. Scratchpad files for persisting key findings across context boundaries. Subagent delegation isolates verbose exploration from main coordination context. /compact reduces context during extended exploration sessions.
Human review workflows and confidence calibration
Aggregate accuracy (97% overall) may mask poor performance on specific types. Stratified random sampling to measure high-confidence extraction error rates. Field-level confidence scores calibrated with labeled validation sets. Route low-confidence or ambiguous extractions to human review queue.
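A sketch of the stratified sampling step (function names are illustrative): sampling per stratum, e.g. per document type, so a 97% aggregate cannot hide a weak category.

```python
import random

def stratified_sample(records, stratum_of, n_per_stratum, seed=0):
    """Draw n records from each stratum for human audit."""
    rng = random.Random(seed)  # fixed seed keeps audits reproducible
    strata = {}
    for rec in records:
        strata.setdefault(stratum_of(rec), []).append(rec)
    return {
        stratum: rng.sample(members, min(n_per_stratum, len(members)))
        for stratum, members in strata.items()
    }
```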
Information provenance in multi-source synthesis
Source attribution lost during summarization without structured mappings. Require claim-source mappings: URL, document name, relevant excerpt, date. Conflicting sources: annotate both with attribution, never arbitrarily select one. Include publication dates to prevent temporal differences being misread as contradictions.
PHASE 06
Exam Preparation — Scenarios & Practice
Week 6
6 exam scenarios — 4 drawn at random. Know all 6. Your CORA experience gives you a huge edge on S1, S3, S5, S6.
S1: Customer Support Resolution Agent
Tools: get_customer, lookup_order, process_refund, escalate_to_human. Key: get_customer MUST run before lookup_order — programmatic enforcement, not prompt instruction. Target: 80%+ first-contact resolution. Escalation = explicit request OR policy gap, never sentiment alone.
S2: Code Generation with Claude Code
CLAUDE.md hierarchy, custom slash commands, plan mode vs direct execution. Key decisions: when to use plan mode (architectural changes), how to configure skills (context: fork), CI integration (-p flag).
S3: Multi-Agent Research System
Coordinator → web search + document analysis + synthesis + report generation subagents. Key failure mode: coordinator decomposes too narrowly (visual arts only for "creative industries"). Structured error context when subagents time out. Scratchpad files for cross-boundary findings.
S4: Developer Productivity with Claude
Built-in tools (Grep, Glob, Read, Write, Bash) + MCP servers. Codebase exploration strategy: Grep entry points → Read to trace → incremental understanding. Avoid context exhaustion by scoping subagent exploration tasks.
S5: Claude Code for CI/CD
Automated code review in pipelines. -p flag = non-interactive mode. --output-format json for structured results. Explicit criteria reduce false positives. Independent instance reviews catch more. Batch API NOT suitable for blocking pre-merge checks (no latency SLA).
S6: Structured Data Extraction
tool_use + JSON schema eliminates syntax errors but NOT semantic errors. Retry-with-error-feedback loop. Semantic validation (calculated_total vs stated_total). Nullable fields prevent hallucination. Route low-confidence extractions to human review.
Key Terms — memorise verbatim
stop_reason "tool_use" = continue loop · stop_reason "end_turn" = terminate · allowedTools: ["Task"] = coordinator can spawn subagents · context: fork = isolated skill context · -p/--print = non-interactive CI mode · Message Batches = 50% savings, 24hr max, no multi-turn · tool_choice "any" = must call a tool · isRetryable: false = don't retry · PostToolUse = intercept before model sees result · fork_session = parallel branches · lost-in-the-middle = put key info at start · custom_id = correlates batch pairs · .mcp.json = project-scoped MCP config
GCP Generative AI Leader

First cert. Week 1. No coding. 90 min exam. 3–6 days to exam-ready. Designed for consultants advising on AI strategy.

Why Take This First: Maps the entire GCP AI landscape before PMLE prep begins. Fast credential. Boosts confidence. Explicitly designed for "business leaders and consultants advising on AI strategies" — that's your exact role. Domain breakdown: GenAI fundamentals 30% / GCP AI products 30% / Optimization 25% / Business strategy 15%.
PHASE 01
GenAI Fundamentals & GCP AI Product Map
Day 1–2
Goal: Understand the GCP AI product landscape end-to-end. Know which product maps to which use case. This is the entire foundation of the exam.
GCP AI Product Landscape
The exam tests product knowledge, not technical depth. Know: Vertex AI (end-to-end ML platform), Model Garden (access to foundation models incl. Gemini), Agent Builder (build AI agents with RAG), Gemini (Google's flagship LLM family), Duet AI / Gemini for Workspace (productivity AI), AI APIs (Vision, NLP, Document AI). Map each to a business use case.
Generative AI Core Concepts
LLMs, foundation models, prompt engineering, RAG (Retrieval Augmented Generation), embeddings, vector databases, fine-tuning vs prompting. The exam tests conceptual understanding not implementation details. Know WHY RAG reduces hallucinations, not HOW to code it.
Responsible AI & Google's SAIF Framework
Google's Secure AI Framework (SAIF). 15% of exam. Know the 6 SAIF principles: Expand strong security foundations to AI, Extend detection and response, Automate defenses, Harmonize platform level controls, Adapt controls to address uniqueness of AI, Contextualize AI risks in surrounding business processes. Banking angle: AI governance and bias in regulated environments.
PHASE 02
Vertex AI, Agent Builder & Business Strategy
Day 3–4
Vertex AI Platform — Consultant View
Unified ML platform. Key distinction for this exam (not PMLE): know WHAT it does for the business, not HOW to code pipelines. Vertex AI = train + deploy + monitor ML models at scale. Model Garden = access 100+ foundation models. Agent Builder = build grounded AI agents for enterprise use cases without deep ML expertise.
Optimization: Cost, Performance, Grounding
25% of exam. Know how to optimize GenAI systems: prompt engineering reduces cost vs fine-tuning. Grounding with Google Search or enterprise data reduces hallucinations. Caching reduces latency. Quantization reduces model size. Exam: scenario where a chatbot hallucinates → ground it with Vertex AI Search or RAG.
PHASE 03
Exam Preparation
Day 5–6
Target readiness: 80%+ on practice questions before booking. At $99–200, this exam should be passed first attempt. 2 days of practice questions is sufficient if Phases 1–2 are solid.
GCP GenAI Leader Official Exam Guide & Sample Questions
Download the official exam guide on Day 1 — it lists every topic with weightings. Use GCP Skills Boost practice questions to calibrate. The exam is non-technical — it tests whether you can advise on GenAI strategy, not implement it.
GCP Professional ML Engineer

Primary certification. 6 phases. 8–12 weeks at 20–30 min/day. Exam: $200 USD.

Exam Focus: The GCP ML Engineer exam is heavily scenario-based. It tests your ability to choose the RIGHT GCP service for a given problem — not just know what each service does. Think: "When do I use AutoML vs Custom Training? When is Dataflow better than BigQuery?" That's the mindset.
PHASE 01
GCP Architecture & ML Fundamentals
Week 1–2
Goal: Understand how GCP is structured as a 4-layer ML platform and when to use each layer. This is the mental model everything else builds on.
GCP 4-Layer ML Architecture
The foundation. GCP organizes ML into 4 layers: AI APIs (pre-built), AutoML (low-code), Vertex AI (custom training), and infrastructure (TPUs/GPUs). Every exam question maps to one of these layers. Know when to use which.
ML Problem Types & Framing
The exam tests whether you can frame a business problem as an ML problem. Classification vs Regression vs Clustering vs Recommendation vs Time Series. You must identify the right problem type before selecting a solution.
Build vs Buy Decision Framework
Critical for consulting AND the exam. When do you use a pre-built API (Vision AI, NLP API) vs build a custom model? Rule: Buy when the problem is generic. Build when you have unique data or need domain-specific accuracy.
Batch vs Real-Time Inference
Heavily tested. Batch = process large datasets offline (cheaper, slower). Real-time = low latency predictions on demand (more expensive). Banking use cases: Batch = end-of-day credit scoring. Real-time = fraud detection at transaction time.
PHASE 02
Data Engineering for ML (BigQuery, Dataflow, Pub/Sub)
Week 2–3
Pattern to memorize: Pub/Sub → Dataflow → BigQuery = the canonical GCP streaming data pipeline. This shows up constantly in exam scenarios.
BigQuery & BigQuery ML
GCP's serverless data warehouse. BigQuery ML lets you train ML models directly in SQL — no Python needed. Exam tests: When to use BigQuery ML vs Vertex AI? Answer: BigQuery ML for structured tabular data where you want simplicity and speed. Vertex AI when you need custom architectures or computer vision.
Dataflow (Apache Beam)
Fully managed stream and batch data processing. Built on Apache Beam. Key insight: Dataflow processes data IN TRANSIT. BigQuery stores data AT REST. Exam often presents streaming scenarios — Dataflow is almost always the answer for real-time data transformation.
Pub/Sub (Event Streaming)
GCP's managed message queue. Think of it as the entry point for real-time data. Pattern: Source events → Pub/Sub (ingest) → Dataflow (process) → BigQuery (store) → Vertex AI (train/predict). Pub/Sub decouples producers and consumers — critical for enterprise banking event architectures.
Vertex AI Feature Store
Central repository for ML features. Solves training-serving skew: the same feature values are used in training AND serving. Exam: If a scenario mentions inconsistent model performance between training and production — Feature Store is the solution.
Cloud Storage & Data Formats
GCS = object storage. Training data lives here. Know when to use CSV vs TFRecord vs Avro vs Parquet. TFRecord = optimized for TensorFlow training. Parquet = columnar format, efficient for BigQuery. Avro = schema evolution, good for Pub/Sub.
PHASE 03
Vertex AI — Training, Deployment & Pre-built AI
Week 3–5
Exam weight: Vertex AI is the highest-weighted domain on the exam. Spend the most time here. Know every major component and the scenarios that call for each one.
Vertex AI Training: AutoML vs Custom Training
AutoML = no code, managed, GCP picks the architecture. Custom Training = bring your own code (TF, PyTorch, sklearn). Exam pattern: If you have tabular/image/text data and want speed → AutoML. If you have a unique model architecture or specific framework → Custom Training. AutoML requires minimum data thresholds — exam tests this.
Online vs Batch Prediction Endpoints
Online Endpoints = real-time, low latency, REST API. Batch Prediction = asynchronous, large datasets, cheaper. Always ask: does the business need immediate predictions or can it wait? Fraud detection = Online. End-of-month churn scoring = Batch.
Vertex AI Model Registry
Central store for all trained models — versioning, lineage, metadata. Exam: Every production model MUST be registered before deployment. If a question asks about tracking model versions or promoting models to production — Model Registry is the answer.
Pre-built AI APIs (Vision AI, NLP API, Document AI)
These are plug-and-play AI APIs for common problems. Vision AI = image classification, object detection. NLP API = sentiment, entity extraction. Document AI = extract structured data from scanned documents. Banking use case: Document AI for loan application processing — extract data from PDFs automatically.
Vertex AI Workbench & Notebooks
Managed Jupyter notebooks on GCP. Used for exploration and experimentation. Exam: Workbench is the environment for data scientists to explore data and prototype models before training at scale.
PHASE 04
MLOps — Pipelines, CI/CD & Automation
Week 5–7
Why this matters for consulting: MLOps is what separates toy AI projects from production-grade enterprise AI. This is where your PM background becomes a superpower — you already understand process, governance, and quality gates.
Vertex AI Pipelines (Kubeflow / TFX)
Orchestrate end-to-end ML workflows: data ingestion → preprocessing → training → evaluation → deployment. Built on Kubeflow Pipelines or TFX. Each step is a containerized component. Exam: If a scenario mentions repeatable, automated ML workflows — Vertex AI Pipelines is the answer.
CI/CD for ML with Cloud Build
Apply software engineering practices to ML. Triggered code changes → automated build → unit tests → model training → evaluation gate → deploy if metrics pass. Cloud Build = GCP's CI/CD service. Exam tests understanding of where quality gates go in the ML pipeline.
Cloud Composer (Apache Airflow)
Workflow orchestration for data pipelines. Important distinction: Cloud Composer (Airflow) orchestrates DATA workflows. Vertex AI Pipelines orchestrates ML workflows. Exam trap: Don't confuse these two. If the question is about scheduling data ETL → Composer. If it's about ML pipeline steps → Vertex AI Pipelines.
Container Registry & Artifact Registry
Store and manage Docker container images for ML training jobs. Artifact Registry = newer, recommended. Container Registry = legacy. Custom training jobs run in containers — your model code + dependencies are packaged as a Docker image and pushed here before training.
MLOps Maturity Levels (Level 0, 1, 2)
Google's framework for ML system maturity. Level 0 = manual, ad-hoc. Level 1 = automated training pipeline. Level 2 = fully automated CI/CD ML pipeline. Exam scenario: A bank is manually retraining models every quarter — which MLOps level are they at, and what do they need to advance?
PHASE 05
Model Monitoring, Governance & Explainability
Week 7–9
Banking/Regulated Industry Angle: This phase is CRITICAL for your consulting work. Regulators require explainability and bias monitoring for AI in banking. Know this cold — it's both exam content AND client value.
Data Drift vs Concept Drift
Data drift = the distribution of input features changes over time (e.g., customer demographics shift). Concept drift = the relationship between inputs and outputs changes (e.g., what predicts default risk changes after an economic crisis). Both degrade model accuracy. Exam tests: Which type of drift is described in the scenario?
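A toy data-drift signal to make the idea concrete (hand-rolled for illustration; in production this is what Vertex AI Model Monitoring does for you): compare the live feature mean against the training distribution.

```python
import statistics

def mean_shift_z(train_values, live_values):
    """Z-score of the live mean against the training distribution.
    A large value suggests data drift in this feature."""
    mu = statistics.mean(train_values)
    sd = statistics.stdev(train_values)
    return abs(statistics.mean(live_values) - mu) / sd
```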
Vertex AI Model Monitoring
Automatically monitors deployed models for feature drift and prediction drift. Sends alerts when metrics exceed defined thresholds. Exam: If a question asks how to detect when a model's performance is degrading in production — Model Monitoring is the answer.
Explainable AI (SHAP / Feature Attributions)
Vertex AI Explainable AI provides feature attributions — which features contributed most to each prediction. Based on SHAP (Shapley values). Banking use case: Explaining to a loan officer WHY the model denied a credit application. Exam: Explainability is required for regulated industry models.
Bias Detection & Model Fairness
What-If Tool and Vertex AI Explainable AI help detect demographic bias. Critical in banking for fair lending compliance. Exam: Scenario where a credit model shows different accuracy for different demographic groups — use Explainable AI + What-If Tool to investigate.
PHASE 06
Exam Preparation — Mock Exams & Readiness
Week 9–12
Only book the exam when readiness ≥ 80–85% on mock exams. Do not rush. Two weeks of mock exam practice is worth more than cramming new content.
Official Google Sample Questions
Start here. Google publishes sample questions that reveal the style, tone, and depth of real exam questions. Do these first to calibrate your readiness baseline.
Whizlabs Practice Exams
Best third-party practice exams for GCP ML Engineer. Scenario-based questions that closely mirror the real exam. Use timed mode to simulate exam conditions. Target: 80%+ consistently before booking.
A Cloud Guru / Linux Foundation Practice
Additional practice exam source. Good for exposure to different question styles. Use alongside Whizlabs — not instead of it.
AWS ML Engineer Associate (MLA-C01)

Starts Month 4 after GCP complete. 4 phases. 6–8 weeks. GCP knowledge transfers ~70% directly. $150 USD — replaces retired ML Specialty.

⚠ AWS ML Specialty (MLS-C01) was RETIRED March 31, 2026. Cannot be taken. MLA-C01 is the official replacement. More practical (SageMaker-focused), more aligned to operational consulting, $150 cheaper. Your GCP ML Engineer knowledge does ~70% of the heavy lifting here.
GCP → AWS Concept Map: Vertex AI = SageMaker. Pub/Sub = Kinesis. Dataflow = Kinesis Data Analytics / Glue. BigQuery = Redshift + Athena. Cloud Storage = S3. Model Registry = SageMaker Model Registry. Vertex AI Pipelines = SageMaker Pipelines. You already know the concepts — you're just learning the AWS service names.
PHASE 01
SageMaker Core & AWS AI Services
Week 1–2
Amazon SageMaker — Core Platform
SageMaker is AWS's fully managed ML platform — equivalent to Vertex AI on GCP. Covers everything from data labeling to model deployment. Know: SageMaker Studio (notebook IDE), SageMaker Training, SageMaker Endpoints (online/batch), SageMaker Model Monitor, SageMaker Pipelines.
SageMaker Built-in Algorithms
AWS provides optimized built-in algorithms: XGBoost, Linear Learner, K-Means, DeepAR (time series), BlazingText (NLP), Object Detection. Exam: Know which algorithm fits which problem type. DeepAR = forecasting. XGBoost = tabular classification/regression.
AWS AI Services (Rekognition, Comprehend, Textract)
Pre-built AI APIs. Rekognition = image/video analysis (= Vision AI). Comprehend = NLP, sentiment, entities (= NLP API). Textract = extract text and structured data from documents (= Document AI). Exam: Map business problem → correct AWS service. Don't overthink it.
SageMaker Ground Truth (Data Labeling)
Managed data labeling service. Human labelers + automated labeling. Exam: If a scenario needs labeled training data at scale with human-in-the-loop review — Ground Truth is the answer.
PHASE 02
AWS Data Engineering for ML
Week 2–4
AWS Data Pattern: Kinesis → Lambda/Kinesis Analytics → S3 → Glue → Athena/Redshift → SageMaker. Map this to GCP: Pub/Sub → Dataflow → GCS → Dataproc → BigQuery → Vertex AI.
Amazon Kinesis (Data Streams, Firehose, Analytics)
Real-time data streaming on AWS. Kinesis Data Streams = ingestion (= Pub/Sub). Kinesis Firehose = delivery to S3/Redshift (= Pub/Sub + Dataflow). Kinesis Analytics = SQL on streaming data (= Dataflow). Exam: Match the Kinesis variant to the scenario requirement.
S3, Glue & Athena
S3 = object storage (= GCS). AWS Glue = serverless ETL + data catalog (= Dataflow + Data Catalog). Athena = serverless SQL on S3 (= BigQuery). Exam: When data lives in S3 and you need to query it without loading it into a database — Athena. When you need to transform and prepare data — Glue.
Amazon Redshift
Data warehouse for structured analytics (= BigQuery). Redshift ML allows training models using SageMaker Autopilot directly from Redshift SQL. Exam: Structured analytical queries at scale → Redshift. Ad-hoc queries on S3 data → Athena.
PHASE 03
MLOps, Monitoring & Governance on AWS
Week 4–7
SageMaker Pipelines & Model Registry
End-to-end ML pipeline orchestration on AWS (= Vertex AI Pipelines). Model Registry tracks versions and manages approval workflows for production deployment. Exam: Any scenario about automating the ML lifecycle end-to-end — SageMaker Pipelines.
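The pipeline idea can be sketched without any SDK: ordered, named steps where each step's output feeds the next. SageMaker Pipelines (like Vertex AI Pipelines) expresses the same shape as managed, versioned steps; everything below is a framework-free toy, not the actual SageMaker API.

```python
# Toy linear ML pipeline: preprocess -> train -> evaluate, where each
# step consumes the previous step's artifact. SageMaker Pipelines adds
# managed execution, caching, and a Model Registry approval gate on top
# of this same idea.
def preprocess(data):
    return [x / max(data) for x in data]              # scale to [0, 1]

def train(features):
    return {"weight": sum(features) / len(features)}  # stand-in "model"

def evaluate(model):
    return {"approved": model["weight"] > 0.1}        # gate before deploy

def run_pipeline(raw_data):
    steps = [("preprocess", preprocess), ("train", train), ("evaluate", evaluate)]
    artifact = raw_data
    for name, step in steps:
        artifact = step(artifact)                     # output feeds next step
    return artifact

result = run_pipeline([2, 4, 6, 8])
```

The "approved" flag mirrors the Model Registry pattern: a model version only reaches production after passing an evaluation gate.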
SageMaker Model Monitor
Detects data drift, model quality drift, and bias drift in deployed models (= Vertex AI Model Monitoring). Exam: If a question asks how to detect degrading model performance post-deployment — SageMaker Model Monitor.
SageMaker Clarify (Bias & Explainability)
Detects bias in datasets and trained models. Provides SHAP-based feature attributions (= Vertex AI Explainable AI). Critical for regulated industries. Exam: Fair lending compliance scenario → SageMaker Clarify.
AWS Security for ML (IAM, VPC, KMS)
Heavily tested in AWS exams. IAM = who can access what. VPC = network isolation for training jobs. KMS = encrypt data at rest and in transit. Banking requirement: All ML training data must be encrypted. Training jobs must run inside a private VPC.
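The banking requirement above can be framed as a pre-flight check on a training job config. This is a hypothetical sketch: the field names (`kms_key_id`, `vpc_subnets`) are invented for illustration and do not match the real SageMaker job config keys.

```python
# Hypothetical compliance check mirroring the banking requirement in
# the text: training data must be encrypted (KMS) and the training job
# must run inside a private VPC. Field names are invented; real
# SageMaker configs use different keys.
def compliance_violations(job_config: dict) -> list:
    violations = []
    if not job_config.get("kms_key_id"):
        violations.append("no KMS key: training data not encrypted at rest")
    if not job_config.get("vpc_subnets"):
        violations.append("no VPC config: training job not network-isolated")
    return violations

good = {"kms_key_id": "alias/ml-training", "vpc_subnets": ["subnet-a"]}
bad = {"kms_key_id": None, "vpc_subnets": []}
```

Exam scenarios test exactly this kind of gap-spotting: given a setup, name the missing IAM, VPC, or KMS control.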
PHASE 04
Exam Preparation — Mock Exams & Readiness
Week 7–10
AWS Official Practice Exam
AWS sells an official practice exam ($40) that reflects the real exam style closely. Do this first to calibrate. Then use Tutorials Dojo for volume.
Tutorials Dojo Practice Exams
Best third-party practice exams for AWS ML Associate. Detailed explanations. Timed mode available. Community-maintained and highly accurate. Target 80%+ before booking exam.
Salesforce Agentforce Specialist

Primary Month 1–2 cert at 20 min/day. Replaces retired AI Associate (Feb 2026). Leverages your existing 4+ Salesforce certifications. 4 phases. 60 Q · 105 min · $200 USD.

⚠ Salesforce AI Associate was RETIRED February 2026. Agentforce Specialist is the current Salesforce AI credential. It covers Agentforce, Einstein Copilot, MCP (Model Context Protocol), A2A (Agent-to-Agent), Prompt Builder, and the Einstein Trust Layer. This is exactly what your banking and enterprise clients are asking about in 2026.
Your Advantage: You already understand Salesforce architecture, CRM data models, and platform capabilities. You are learning the AI agent layer on top of what you already know — not starting from scratch. This is why it's only 3–4 weeks at 20 min/day.
PHASE 01
Einstein AI Architecture & Data Cloud Foundation
Week 1
Einstein AI 4-Layer Architecture
Layer 1: Einstein Copilot (conversational AI assistant). Layer 2: Einstein Features (Lead Scoring, Forecasting, etc.). Layer 3: Einstein Platform (build custom AI). Layer 4: Data Cloud (the fuel — unified customer data). Every exam question maps to one of these layers. Know the hierarchy cold.
Data Cloud as AI Foundation
Data Cloud unifies all customer data into a single profile. Einstein AI features get more accurate as Data Cloud provides richer data. Exam: Why is Einstein Lead Scoring inaccurate? Most likely = insufficient or poor quality CRM data. Data Cloud solves this by unifying data sources.
PHASE 02
Einstein Predictions & Scoring Features
Week 1–2
Einstein Lead & Opportunity Scoring
Ranks leads and opportunities by likelihood to convert. Requires a minimum of 1,000 records with historical outcomes to activate. Exam: Most common trap = assuming Einstein Scoring isn't working because it hasn't been configured; in reality it first requires sufficient historical data.
Einstein Prediction Builder
Build custom AI predictions on any Salesforce object without code. Exam: When a business needs a prediction that Einstein doesn't provide out of the box (e.g., predict customer churn on a custom object) → Prediction Builder.
Einstein Next Best Action
Surfaces AI-powered recommendations to sales/service reps at the right moment. Combines predictions + business rules. Exam: When a question asks about surfacing contextual recommendations to reps during a customer interaction → Next Best Action.
PHASE 03
Einstein Copilot, Agentforce & Prompt Builder
Week 2–3
Highest growth area on the exam. Agentforce and Einstein Copilot are Salesforce's biggest AI bets right now. Expect heavy exam weighting here.
Einstein Copilot
Conversational AI assistant embedded across Salesforce. Users ask questions in natural language, Copilot takes actions. Powered by LLMs. Exam: Know what Copilot can and cannot do. It operates within the Salesforce data model — it can't access external data without Prompt Builder + Data Cloud grounding.
Agentforce (Autonomous AI Agents)
Next generation beyond Copilot. Agents can autonomously complete multi-step tasks without human prompting. Built on Topics + Actions framework. Exam: Copilot = assists humans. Agentforce = operates autonomously. Know when each is appropriate.
Prompt Builder
Build, manage, and deploy prompt templates with dynamic data from Salesforce. Uses merge fields to inject live CRM data into prompts. Can be grounded with Data Cloud for richer context. Exam: Any scenario about customizing what an LLM knows or says using Salesforce data → Prompt Builder.
PHASE 04
Einstein Trust Layer & Exam Preparation
Week 3–4
Most heavily tested topic for regulated industries. Banking clients will ask you about this constantly. Know it cold — every component and why it matters for data security and compliance.
Einstein Trust Layer — All 5 Components
Salesforce's security framework for AI. 1) Zero Data Retention — LLM provider never stores your data. 2) Data Masking — PII stripped before data leaves Salesforce. 3) Toxicity Detection — filters harmful outputs. 4) Audit Trail — logs all AI interactions for compliance. 5) Grounding — grounds AI responses in your actual Salesforce data. Exam: Know what each component does and why it exists.
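The Data Masking component can be illustrated with a toy sketch: strip obvious PII before text leaves your system. The Einstein Trust Layer does this inside Salesforce with far more robust detection; these two regexes are a teaching aid, not a compliance tool.

```python
import re

# Toy illustration of the Data Masking idea only: replace obvious PII
# with placeholder tokens before a prompt leaves your environment.
# The real Trust Layer masks inside Salesforce with much stronger
# detection than two regexes.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def mask_pii(text: str) -> str:
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)
```

Masking before the LLM call is exactly why Zero Data Retention plus Data Masking together satisfy banking data-residency concerns.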
Focus on Force Practice Exams
The gold standard for Salesforce exam prep. Questions are style-matched to real Salesforce exams. Detailed explanations. Use when you hit 50% readiness in coaching sessions.
Official Salesforce Agentforce Specialist Exam Guide
Download this on Day 1 and use it as your checklist. Every exam topic is listed with percentage weighting. Study the highest-weighted topics most deeply. Agentforce + Trust Layer = highest weight right now. AI Associate is RETIRED — only use the Agentforce Specialist guide.
Salesforce Data Cloud Consultant

Parallel cert. Runs Month 1–2 at 10 min/day alongside Agentforce. Rebranded to Data 360 (Oct 2025). #1 in-demand Salesforce cert 2026. 4 phases. $200 USD.

Why Run This Parallel: ~30% content overlap with Agentforce Specialist — studying both simultaneously is efficient, not doubled work. Agentforce = HOW AI agents work. Data Cloud = WHAT powers them (unified customer data). Together they make you the complete Salesforce AI expert. Run at 10 min/day while Agentforce gets 20 min/day.
PHASE 01
Data Cloud Architecture & Data Ingestion
Week 1–2
Mental model: Data Cloud = Salesforce's customer data platform (CDP). It ingests data from every source (CRM, website, mobile, external), unifies it into a single customer profile, and makes that unified data available to Einstein AI. Without Data Cloud, Einstein only sees CRM data. With Data Cloud, Einstein sees the full customer picture.
Data Cloud Architecture & Data Streams
Data Cloud ingests data via Data Streams. Know the data stream types: Salesforce CRM connector (native, no code), Cloud Storage connector (S3, GCS), MobileConnect, Marketing Cloud, API-based connectors. Exam: Map the data source to the correct connector type. CRM data = Salesforce CRM connector. External behavioral data = API or Cloud Storage connector.
Data Model Objects (DMOs) & Data Mapping
Data Cloud uses a canonical data model with standard Data Model Objects (DMOs): Individual, Contact Point, Engagement, etc. Ingested data must be mapped to these DMOs. Exam: Understanding the mapping layer is critical — if data isn't mapped correctly, identity resolution fails. Know the difference between Source Objects (raw) and Data Model Objects (standardised).
PHASE 02
Identity Resolution & Unified Profiles
Week 2–3
Most exam-tested concept in Data Cloud. Identity resolution is what makes Data Cloud valuable — it stitches together data from multiple sources into one profile. Know every component.
Identity Resolution Rulesets
Identity resolution matches and merges records from different sources into a single Unified Individual profile. Uses fuzzy matching and exact matching rules (email, phone, name + address). Exam: If two sources have the same customer but different IDs — identity resolution unifies them. A client asking "why are we seeing duplicate customers?" → check identity resolution ruleset configuration.
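The match-and-merge shape can be sketched in a few lines. A toy pass using one exact-match rule (normalised email); real Data Cloud rulesets add fuzzy matching on name + address and phone, and the field names here are invented.

```python
# Toy identity-resolution pass: exact match on normalised email merges
# source records into one unified profile. Real rulesets layer fuzzy
# matching on top; this shows only the match-and-merge shape.
def unify(records: list) -> dict:
    profiles = {}
    for rec in records:
        key = rec["email"].strip().lower()        # match rule: exact email
        profile = profiles.setdefault(key, {"sources": [], "email": key})
        profile["sources"].append(rec["source"])
        # merge non-empty attributes from each source record
        profile.update({k: v for k, v in rec.items()
                        if k not in ("source", "email") and v})
    return profiles

records = [
    {"source": "CRM", "email": "Ana@example.com", "name": "Ana Diaz"},
    {"source": "web", "email": " ana@example.com ", "ltv_tier": "high"},
]
unified = unify(records)
```

Two source records, one Unified Individual: this is the "why are we seeing duplicate customers?" fix in miniature. If the match key is wrong (unnormalised email, missing phone), merging fails and duplicates persist.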
Unified Individual vs Individual
Individual = a single source record. Unified Individual = the merged profile created by identity resolution across multiple sources. Einstein AI and segmentation use the Unified Individual, not raw Individual records. Exam: Always ask "which profile does the AI use?" → Unified Individual.
PHASE 03
Segmentation, Activation & AI Integration
Week 3–4
Segments & Activation Targets
Segments = audiences built from unified profile attributes (e.g., "customers who bought in last 30 days and are in high LTV tier"). Activation = publishing segment membership to a target (Marketing Cloud, Advertising Studio, CRM, etc.). Exam: Segmentation is built on Unified Individuals. Activation targets determine WHERE the segment goes. Banking use case: Segment high-churn-risk customers → activate to service team in Service Cloud.
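The quoted segment definition ("bought in the last 30 days AND high LTV tier") reduces to a predicate over unified profile attributes. A minimal sketch with invented field names; Data Cloud expresses the same logic in its segment builder over Unified Individuals.

```python
from datetime import date, timedelta

# Minimal sketch of the example segment above: purchased in the last
# 30 days AND in the high LTV tier. Field names are illustrative.
def in_segment(profile: dict, today: date) -> bool:
    recent = (today - profile["last_purchase"]) <= timedelta(days=30)
    return recent and profile["ltv_tier"] == "high"

today = date(2026, 3, 1)
members = [p for p in [
    {"id": 1, "last_purchase": date(2026, 2, 20), "ltv_tier": "high"},
    {"id": 2, "last_purchase": date(2025, 11, 5), "ltv_tier": "high"},
    {"id": 3, "last_purchase": date(2026, 2, 25), "ltv_tier": "low"},
] if in_segment(p, today)]
```

Activation then decides where the member list goes: Marketing Cloud, Service Cloud, an ad platform, and so on.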
Data Cloud + Einstein AI Integration
Data Cloud feeds rich, unified customer data to Einstein AI features. Einstein Copilot grounded with Data Cloud = AI that knows the full customer history, not just CRM data. Exam: Why is Einstein Copilot giving incomplete or inaccurate answers? → It's not grounded with Data Cloud. Grounding is the connection between LLM responses and real customer data.
PHASE 04
Calculated Insights, Data Actions & Exam Prep
Week 4–6
Calculated Insights
Computed metrics derived from Data Cloud data using SOQL or SQL. Examples: customer lifetime value, average order frequency, recency score. These become attributes on the Unified Individual profile and can be used in segmentation and Einstein AI. Exam: When a client needs a custom metric for segmentation that doesn't exist as a standard field → Calculated Insights.
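The metric logic behind those examples is simple aggregation. A Python sketch for intuition only; in Data Cloud the exam context is SQL/SOQL, and the field names here are illustrative.

```python
from datetime import date

# Sketch of two Calculated Insight metrics from the examples above:
# lifetime value (sum of order amounts) and recency (days since the
# most recent order). In Data Cloud these are written in SQL/SOQL.
def lifetime_value(orders: list) -> float:
    return sum(o["amount"] for o in orders)

def recency_days(orders: list, today: date) -> int:
    return (today - max(o["date"] for o in orders)).days

orders = [
    {"amount": 120.0, "date": date(2026, 1, 10)},
    {"amount": 80.0, "date": date(2026, 2, 18)},
]
```

Once computed, these values sit on the Unified Individual as attributes, so the segment predicate from the previous phase can filter on them directly.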
Data Actions & Real-Time Triggers
Data Actions allow Data Cloud to trigger actions in other systems when a segment qualification event occurs. E.g., when a customer enters the "at-risk churn" segment → trigger a Service Cloud case creation or a Flow automation. Exam: Real-time response to customer behaviour → Data Action. Batch-scheduled → Activation target.
Data Cloud Consultant Exam Prep
Focus on Force is the gold standard for Salesforce exam prep. Download the official exam guide on Day 1 — use it as your checklist throughout. Identity Resolution, Segmentation, and Data Activation are typically the highest-weighted domains.
Azure AI-103 App & Agent Developer

Month 6. Completes three-cloud coverage. Replaces retiring AI-102. Microsoft Foundry, GenAI apps, multi-agent orchestration. ~$165 USD.

⚠ DO NOT pursue Azure AI-102. AI-102 retires June 30, 2026, with NO renewal path — it's a dead-end credential. AI-103 is the replacement. Beta: April 2026. Live exam: June 2026. Build your plan around AI-103 only.
Why Azure Completes the Stack: Azure holds 23–25% of enterprise cloud (fastest growing). Exclusive OpenAI partnership = Azure is where many enterprises deploy GPT-4 and production AI. AI-103 focuses on what matters in 2026: building GenAI apps, multi-agent systems, and responsible AI — not legacy Azure cognitive services. After GCP + AWS, Azure is the final piece of three-cloud AI architect status.
PHASE 01
Azure AI Foundry & Foundation Models
Week 1–2
Key mindset shift: AI-103 is about BUILDING GenAI applications on Azure, not managing traditional ML pipelines. Think: Azure AI Foundry (model hub + deployment), Azure OpenAI Service (GPT-4/GPT-4o), Semantic Kernel (agent orchestration), Prompt Flow (LLM app development). This is the GenAI application developer exam.
Azure AI Foundry (Model Catalog & Deployments)
Azure AI Foundry = Microsoft's unified hub for foundation models. Access to OpenAI models (GPT-4, Whisper, DALL-E), Meta LLaMA, Mistral, and Microsoft Phi models. Exam: Know which model tier to use for which scenario. GPT-4o = multimodal tasks. Phi-3 = lightweight, cost-efficient. LLaMA = open-source, customisable. Azure OpenAI Service = enterprise-grade deployment with private endpoint.
Azure OpenAI Service
Enterprise access to OpenAI models on Azure infrastructure. Private networking, compliance controls, regional data residency. Key exam distinction: Azure OpenAI ≠ OpenAI.com. Azure OpenAI = same models + Azure security + private endpoints + no data leaving your tenant. Banking use case: GPT-4 for internal document summarisation without data leaving the corporate Azure tenant.
PHASE 02
RAG, Semantic Kernel & Agent Orchestration
Week 2–4
RAG with Azure AI Search
Retrieval Augmented Generation on Azure uses Azure AI Search (formerly Cognitive Search) as the vector store. Pattern: Documents → chunk → embed → store in AI Search → at query time, retrieve relevant chunks → inject into LLM prompt. Exam: Hallucination reduction = RAG. Grounding the LLM in enterprise documents = Azure AI Search + Azure OpenAI. Know the end-to-end RAG pipeline on Azure.
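The end-to-end pipeline above can be shown in miniature: chunk, "embed", retrieve by similarity, inject into the prompt. Everything below is a self-contained stand-in for the pattern; on Azure the embed/store/retrieve steps would be Azure OpenAI embeddings plus an Azure AI Search vector index, not this toy bag-of-words overlap.

```python
# Toy RAG pipeline: "embed" chunks as word sets, retrieve the best
# match by token overlap, then ground the LLM prompt in it. Stands in
# for Azure OpenAI embeddings + an Azure AI Search vector index.
def embed(text: str) -> set:
    return set(text.lower().split())              # toy "embedding"

def retrieve(query: str, chunks: list, k: int = 1) -> list:
    q = embed(query)
    scored = sorted(chunks, key=lambda c: len(q & embed(c)), reverse=True)
    return scored[:k]                             # top-k most similar chunks

def build_prompt(query: str, chunks: list) -> str:
    context = "\n".join(retrieve(query, chunks))
    return f"Answer using ONLY this context:\n{context}\n\nQuestion: {query}"

chunks = [
    "refund policy: refunds are issued within 14 days",
    "shipping policy: orders ship within 2 business days",
]
prompt = build_prompt("how many days for a refund", chunks)
```

The "ONLY this context" instruction is the grounding step: it is what reduces hallucination, which is exactly the exam's cue for RAG.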
Semantic Kernel (Agent Orchestration Framework)
Microsoft's open-source SDK for building AI agents and multi-agent systems. Supports Python, C#, Java. Agents use Plugins (skills/tools), Memory (context persistence), and Planners (reasoning chains). Exam: Building a multi-step AI workflow that requires reasoning, tool use, and memory → Semantic Kernel. Connects to Azure OpenAI, Hugging Face, or any LLM.
Prompt Flow (LLM App Development)
Visual development tool for building, testing, and deploying LLM applications. Defines flows as DAGs (directed acyclic graphs) with nodes for LLM calls, Python code, tools. Exam: When asked how to build, evaluate, and iterate on a prompt-based application workflow → Prompt Flow. Think of it as a CI/CD pipeline specifically for LLM applications.
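The DAG idea behind Prompt Flow can be sketched as nodes with declared inputs, executed in dependency order. The flow and node names below are invented, and plain lambdas stand in for what would be LLM calls, Python tools, or evaluators in Prompt Flow.

```python
# Minimal DAG executor: each node declares its input dependencies and
# runs once they are all available. Prompt Flow nodes would be LLM
# calls, Python tools, etc.; this flow is invented for illustration.
def run_flow(nodes: dict, inputs: dict) -> dict:
    results = dict(inputs)
    pending = dict(nodes)
    while pending:
        ready = [n for n, (deps, _) in pending.items()
                 if all(d in results for d in deps)]
        if not ready:
            raise ValueError("cycle or missing dependency in flow")
        for name in ready:
            deps, fn = pending.pop(name)
            results[name] = fn(*(results[d] for d in deps))
    return results

flow = {
    "rewrite_query": (["query"], lambda q: q.strip().lower()),
    "llm_answer": (["rewrite_query"], lambda q: f"answer to: {q}"),
}
out = run_flow(flow, {"query": "  What Is RAG? "})
```

The acyclic constraint is why Prompt Flow can evaluate and re-run individual nodes: every node's inputs are explicit, so a change re-executes only its downstream subgraph.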
PHASE 03
Responsible AI, Safety & Governance on Azure
Week 4–5
Banking angle: Azure's responsible AI framework is critical for regulated industry clients. This phase directly maps to client conversations about AI risk, compliance, and governance — exam content AND consulting value simultaneously.
Azure Content Safety & Guardrails
Azure Content Safety = API for detecting harmful content (hate speech, violence, sexual content, self-harm). Integrated into Azure OpenAI as content filters. Exam: When building a GenAI app for enterprise — content safety filters MUST be configured. Default filters exist but can be adjusted with approval. Banking: Content filters on internal chatbots prevent reputational and regulatory risk.
Microsoft Responsible AI Principles
6 principles: Fairness, Reliability & Safety, Privacy & Security, Inclusiveness, Transparency, Accountability. Exam: Know each principle and which Azure tool addresses it. Fairness → Azure AI Fairness dashboard. Transparency → Responsible AI dashboard. Reliability → model evaluation tools in Prompt Flow. These map directly to what enterprise clients in banking require from AI governance frameworks.
PHASE 04
Exam Preparation
Week 5–7
Beta exam timing: AI-103 beta opens April 2026, live June 2026. If sitting beta, you earn the full certification + early adopter recognition. If waiting for stable release, sit in June–July 2026 after completing Month 5 AWS cert.
Microsoft Learn — Official Study Path for AI-103
Microsoft Learn is your primary study resource for Azure exams. Free, official, well-structured. Always start with the official exam page to download the skill measurement document (it lists every topic and weighting). Microsoft exams heavily test the Learn documentation, so read it closely rather than just skimming the concepts.
MeasureUp / Whizlabs Practice Exams (Azure)
MeasureUp is Microsoft's official practice exam partner — highest fidelity to real exam. Whizlabs also has strong Azure AI coverage. Use at 60%+ readiness. Target 80%+ before booking. Azure exams are scenario-heavy — practice reading complex multi-condition scenarios and applying elimination technique.
Anthropic Courses — Skilljar

Official Anthropic Academy courses at anthropic.skilljar.com — track completion and depth revisits.

Status key: ■ To Complete  ·  ■ Done  ·  ■ Revisit for depth
To Complete
To Do Claude with the Anthropic API
Core API usage — messages, tool use, streaming, multi-turn, system prompts. Essential foundation for all Claude integrations and XERONE / NemoClaw builds. Complete before MCP series.
To Do Introduction to Model Context Protocol
MCP server/client model, primitives (tools, resources, prompts), transport layer. Gateway to the Advanced MCP course. Complete this before Advanced Topics.
To Do Model Context Protocol — Advanced Topics
Auth, lifecycle management, error handling, connecting MCP servers to Claude for real tool-call flows. Directly applicable to XERONE NemoClaw MCP integration. Complete after Intro MCP.
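The API fundamentals course above centres on the Messages request shape: model, token budget, system prompt, and a multi-turn message history. A minimal sketch of that shape built as a plain dict so nothing is sent; the model id is a placeholder, not a real model name. With the `anthropic` SDK you would pass these same fields to `client.messages.create(...)`.

```python
# Sketch of the Messages API request shape covered in the course.
# "claude-example-model" is a placeholder; check Anthropic's docs for
# current model ids. Built as a dict so no network call is made.
request = {
    "model": "claude-example-model",             # placeholder model id
    "max_tokens": 1024,                          # response token budget
    "system": "You are a concise study coach.",  # system prompt
    "messages": [                                # multi-turn history
        {"role": "user", "content": "Explain MCP in one sentence."},
    ],
}
```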
Completed
Done Claude Code in Action
Hands-on Claude Code usage — agentic coding, project automation, CLI workflows. Foundation for all Claude Code work.
Done Introduction to Claude Cowork
Claude Cowork desktop tool — file and task management automation for non-developers.
Done Claude 101
Core Claude capabilities, use cases, and interaction fundamentals.
Done Claude Code 101
Claude Code foundations — installation, setup, basic agentic coding workflows.
Completed — Revisit for Depth
Revisit Introduction to Agent Skills
Agent skills architecture — how to build reusable skill modules. Surface-level pass completed. Need depth on skill composition and delegation patterns for CCA-F.
Revisit Introduction to Subagents
Subagent orchestration and delegation. First pass done. Revisit for depth on coordinator/subagent handoff patterns and error propagation — directly CCA-F exam relevant.
Revisit AI Capabilities and Limitations
Model capabilities, context windows, hallucinations, failure modes. First pass done. Needs depth for CCA-F exam — especially limitations in regulated/high-stakes contexts.
Daily Coaching Structure

30 minutes. Every session. High-impact and focused.

OPTION A: Split Day (Preferred)
0–7 min
Concept deep-dive — primary cert. Mental models, not memorization.
7–13 min
Real-world application — banking/enterprise scenario mapping.
13–22 min
3–5 exam-style practice questions — scenario-based, elimination technique.
22–27 min
Answer breakdown — correct + incorrect options explained.
27–30 min
Secondary cert check-in (Salesforce, budgeted at 10 min/day overall) OR weakness log update.
OPTION B: Alternating Days
Day 1
Full 30 min → GCP ML Engineer (Primary)
Day 2
Full 30 min → Agentforce / Data Cloud Specialist (Secondary)
Day 3
Full 30 min → GCP (repeat cycle)
Use when
Concept is complex and needs full focus. Better for deep topics.
Special Session Modes

Use these commands to trigger different coaching modes.

💬 "Revision Mode"
Rapid-fire questions on weak areas. No explanations until you answer. Timed. Forces active recall. Use 3+ days before exam.
🧪 "Mock Exam"
20–30 timed questions. Real exam simulation. Score + weak area report delivered after. Use weekly at 60%+ readiness.
🔍 "Deep Dive [Topic]"
Full 30 min on one concept. Maximum depth. Use when a topic is unclear or exam-critical. Example: "Deep Dive MLOps"
📊 "Dashboard Update"
Review readiness %, update weak areas, plan next 5 sessions. Use weekly or when pivoting topics.
Weekly Rhythm

Structure your week to balance depth, practice, and review.

Mon–Wed
New concepts — go deep on 2–3 topics per week maximum.
Thu
Application day — map concepts to real banking/enterprise scenarios.
Fri
Practice questions — 10–15 questions across the week's topics.
Sat/Sun
Optional: 15–20 min Trailhead or GCP Skills Boost lab to reinforce.
Weekly
From Week 9: Add one 20–30 min timed mock exam per week.
All Study Resources

Every resource ranked by priority, cost, and when to use it. Use these alongside daily coaching sessions.

Strategy: Week 1–2 = coaching only. Week 3+ = add official platform (Trailhead / GCP Skills Boost) for the same topic as that day's session. At 60% readiness = start practice exams.
Anthropic Academy Courses
Course Status Cost When to Use Link
Claude with the Anthropic API
Messages, tool use, streaming, multi-turn — full API fundamentals
To Do Free Complete first — before MCP series. Core foundation. Open →
Introduction to Model Context Protocol
MCP server/client model, tools, resources, prompts, transport layer
To Do Free Week 1 of MCP sprint. Do before Advanced Topics. Open →
MCP — Advanced Topics
Auth, lifecycle, error handling, full Claude integration flows
To Do Free Week 2 of MCP sprint. After Intro MCP. Open →
Claude Code in Action
Agentic coding, project automation, CLI workflows
Done Free Complete. Reference as needed. Open →
Introduction to Claude Cowork
Desktop automation tool for non-developers
Done Free Complete. Open →
Claude 101
Core Claude capabilities and interaction fundamentals
Done Free Complete. Open →
Claude Code 101
Claude Code setup, installation, basic workflows
Done Free Complete. Open →
Introduction to Agent Skills
Reusable skill modules, composition, delegation patterns
Revisit Free Revisit for CCA-F depth — skill composition patterns. Open →
Introduction to Subagents
Subagent orchestration, coordinator/subagent handoff, error propagation
Revisit Free Revisit for CCA-F depth — orchestration patterns. Open →
AI Capabilities and Limitations
Model capabilities, context windows, hallucinations, failure modes
Revisit Free Revisit for CCA-F — limitations in regulated/high-stakes contexts. Open →
GCP ML Engineer Resources
Optional. If you prefer video alongside coaching.
Resource Type Cost When to Use Link
GCP ML Engineer Exam Guide
Official exam blueprint — lists every topic and domain weighting
Official Free Download Week 1. Use as checklist throughout. Open →
Google Cloud Skills Boost
Official Google learning platform with ML path, labs, and quizzes
Official Free/Paid Week 3+. Use after each session on the same topic. Open →
Google ML Crash Course
Free ML fundamentals by Google engineers. Covers core concepts.
Free Free Week 1–2. Foundation concepts only. Open →
Coursera: GCP ML Engineer Certificate
Official Google course on Coursera. Structured video learning.
Course ~$50/mo Open →
GCP Architecture Center
Real reference architectures — exactly what the exam scenario tests
Official Free Phase 4–5. When studying MLOps and system design. Open →
Whizlabs Practice Exams
Best third-party practice exam for GCP ML Engineer
Practice ~$30 At 60%+ readiness. Weekly timed practice. Open →
A Cloud Guru — GCP ML Engineer
Video course + practice exams. Good supplementary resource.
Course ~$40/mo Optional. Additional practice exam source at 60%+. Open →
AWS ML Engineer Associate Resources
Resource Type Cost When to Use Link
AWS ML Engineer Associate Exam Guide
Official exam blueprint with domain weightings and topic list (MLA-C01)
Official Free Download at start of Month 4. Use as checklist. Open →
AWS Skill Builder — ML Engineer Path
AWS's official free ML learning content and enhanced exam prep plan
Official Free Phase 1–2 of AWS. Use after sessions on same topic. Open →
AWS Official Practice Exam
Official practice questions from AWS — closest to real exam style
Official $40 At 60%+ readiness. First practice exam to take. Open →
Tutorials Dojo — AWS ML Engineer Associate
Best third-party practice exams for MLA-C01. Detailed explanations.
Practice ~$20 At 60%+ readiness. Weekly timed practice exams. Open →
AWS SageMaker Documentation
Official SageMaker docs — reference when concepts are unclear
Official Free Use as reference throughout. Don't read cover to cover. Open →
Salesforce Agentforce Specialist Resources
Resource Type Cost When to Use Link
Agentforce Specialist Exam Guide
Official exam blueprint — download Week 1 and use as checklist. AI Associate is RETIRED.
Official Free Download Week 1. Check off topics as you cover them. Open →
Trailhead: Agentforce Trail
Getting started with Agentforce — highest exam weighting area
Official Free Week 1+. Primary study material alongside coaching. Open →
Trailhead: Einstein Trust Layer
Deep dive module on all 5 components of Einstein Trust Layer
Official Free Phase 4. Critical for exam and banking clients. Open →
Trailhead: Einstein Copilot & Prompt Builder
Copilot and Prompt Builder modules — new high-weight exam areas
Official Free Phase 3. High exam weighting — prioritize this. Open →
Focus on Force — Agentforce Specialist
Best Salesforce exam prep. Questions closest to real exam style.
Practice ~$40 At 50%+ readiness. Weekly practice before exam. Open →
Salesforce Data Cloud Consultant Resources
Resource Type Cost When to Use Link
Data Cloud Consultant Exam Guide
Official exam blueprint (Data 360 rebranded Oct 2025)
Official Free Download Week 1. Use as parallel checklist. Open →
Trailhead: Data Cloud Path
Official Salesforce learning path for Data Cloud architecture and features
Official Free Week 1+. Primary study material alongside coaching. Open →
Focus on Force — Data Cloud Consultant
Best Salesforce practice exams for Data Cloud. High quality explanations.
Practice ~$40 At 50%+ readiness. Weekly practice before exam. Open →
Azure AI-103 Resources
Resource Type Cost When to Use Link
Azure AI-103 Exam Page
Official exam details. DO NOT use the AI-102 page; it retires June 2026
Official Free Download skill measurement doc Month 5. Use as checklist. Open →
Microsoft Learn — Azure AI Path
Official free learning paths for Azure AI, Foundry, and OpenAI Service
Official Free Primary study resource. Use after each coaching session. Open →
MeasureUp Azure Practice Exams
Microsoft's official practice exam partner. Highest fidelity to real exam.
Practice ~$99 At 60%+ readiness. Best practice exam for Azure. Open →
Azure AI Foundry Documentation
Official docs for Azure AI Foundry, Prompt Flow, Semantic Kernel
Official Free Reference throughout Phase 1–3. Read closely — exam tests docs knowledge. Open →
Certification Progress Tracker

Click any topic checkbox to mark it done. Progress saves in your browser instantly. Use the cert tabs above to study.

How to use: Click any cert tab above (GCP GenAI Leader, Agentforce Specialist, etc.) → open a phase → click any topic name to check it done. Progress is tracked in the overall bar at the top and reflected here. The checkboxes are on every topic in every cert tab.
Jump To Cert
Cost Summary

Full investment to tri-cloud AI architect status.

CCA-F Foundations
TBD · Anthropic partner · 6 weeks · Week 1
GCP GenAI Leader
$99–200 · 1 week · Week 1
Agentforce Specialist
$200 · 4 weeks · Month 1–2
Data Cloud Consultant
$200 · 6 weeks parallel · Month 1–2
GCP ML Engineer
$200 · 12 weeks · Month 2–3
AWS ML Associate
$150 · 8 weeks · Month 4–5
Azure AI-103
~$165 · 7 weeks · Month 6
TOTAL
~$1,014 — saves $150 vs retired AWS Specialty