AWS DevOps Professional Certification

Domain 1 - SDLC Automation: Complete Study Guide
150 Practice Questions | 22% Exam Weight | 14 Key Services

📖 How to Use This Study Guide

Domain Overview

Domain 1 (SDLC Automation) accounts for 22% of the exam. This domain focuses on implementing and managing continuous integration and continuous delivery (CI/CD) pipelines, deploying applications using various strategies, and automating the software development lifecycle.

---

Syllabus Breakdown

Task Statement 1.1: Implement CI/CD Pipelines

  • Design and implement CI/CD pipelines using AWS services
  • Integrate source control, build, test, and deployment stages
  • Manage pipeline artifacts and dependencies
  • Implement pipeline notifications and monitoring

Task Statement 1.2: Integrate Automated Testing

  • Implement unit, integration, and end-to-end testing in pipelines
  • Configure test environments and test data management
  • Implement quality gates and approval processes

Task Statement 1.3: Build and Manage Artifacts

  • Implement artifact repositories and versioning
  • Manage dependencies and package management
  • Implement caching strategies for build optimization

Task Statement 1.4: Implement Deployment Strategies

  • Blue/green deployments
  • Canary deployments
  • Rolling deployments
  • Immutable infrastructure deployments
  • Feature flags and A/B testing

---

Key AWS Services to Master

1. AWS CodeCommit

What it is: Fully managed source control service that hosts Git repositories. Key concepts to memorize:
  • Repositories are encrypted at rest using AWS KMS
  • Supports triggers (SNS, Lambda) for repository events
  • IAM policies control access (Git credentials, SSH keys, HTTPS)
  • Cross-account access via IAM roles
  • Pull requests and approval rules
  • Branch-level permissions using IAM policies
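To illustrate the pull request and approval rules bullet above, here is a hedged CLI sketch of an approval rule template; the template name, repository name, account ID, and role name are hypothetical placeholders:

# Require two approvals from a review role before PRs targeting main can be merged
aws codecommit create-approval-rule-template \
  --approval-rule-template-name require-two-approvals \
  --approval-rule-template-content '{
    "Version": "2018-11-08",
    "DestinationReferences": ["refs/heads/main"],
    "Statements": [{
      "Type": "Approvers",
      "NumberOfApprovalsNeeded": 2,
      "ApprovalPoolMembers": ["arn:aws:sts::111122223333:assumed-role/CodeCommitReview/*"]
    }]
  }'
# Attach the template to a repository so it applies to new pull requests
aws codecommit associate-approval-rule-template-with-repository \
  --approval-rule-template-name require-two-approvals --repository-name my-repo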

2. AWS CodeBuild

What it is: Fully managed build service that compiles code, runs tests, and produces artifacts. Key concepts to memorize:
  • buildspec.yml defines build commands in phases: install, pre_build, build, post_build
  • Build environments: managed images (Ubuntu, Amazon Linux, Windows) or custom Docker images
  • Artifacts stored in S3
  • Caching (S3 cache, local cache) to speed up builds
  • Environment variables (plaintext, Parameter Store, Secrets Manager)
  • VPC support for accessing private resources
  • Build badges for status visualization
  • Batch builds for parallel execution
  • Reports for test results (JUnit, Cucumber, etc.)
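As a sketch of the batch builds bullet above, a buildspec can declare a batch section that runs several builds in parallel (the project must have batch builds enabled; the identifiers and buildspec paths below are hypothetical):

version: 0.2
batch:
  fast-fail: true          # Stop remaining builds as soon as one fails
  build-graph:
    - identifier: unit_tests
      buildspec: buildspecs/unit.yml
    - identifier: integration_tests
      buildspec: buildspecs/integration.yml
      depend-on:
        - unit_tests       # Runs only after unit_tests succeeds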

3. AWS CodeDeploy

What it is: Deployment service that automates deployments to EC2, Lambda, ECS, and on-premises servers. Key concepts to memorize:
  • appspec.yml (YAML for EC2/on-premises, YAML/JSON for Lambda/ECS)
  • Deployment types:
    • In-place (EC2/on-premises only)
    • Blue/green (EC2, Lambda, ECS)
  • Deployment configurations:
    • EC2/on-premises: AllAtOnce, HalfAtATime, OneAtATime
    • Lambda/ECS canary: Canary10Percent5Minutes, Canary10Percent10Minutes, etc.
    • Lambda/ECS linear: Linear10PercentEvery1Minute, Linear10PercentEvery2Minutes, etc.
  • Lifecycle hooks for EC2: ApplicationStop → DownloadBundle → BeforeInstall → Install → AfterInstall → ApplicationStart → ValidateService
  • Rollback configurations (automatic on failure, alarm thresholds)
  • CodeDeploy Agent required on EC2/on-premises
  • Deployment groups define target instances (tags, ASG)
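For the rollback configuration bullet above, automatic rollback and alarm monitoring are attached at the deployment group level; a hedged CLI sketch, assuming a hypothetical application, group, and alarm name:

# Roll back automatically on failure or when the named CloudWatch alarm fires
aws deploy update-deployment-group \
  --application-name MyApp \
  --current-deployment-group-name Prod \
  --auto-rollback-configuration '{"enabled": true, "events": ["DEPLOYMENT_FAILURE", "DEPLOYMENT_STOP_ON_ALARM"]}' \
  --alarm-configuration '{"enabled": true, "alarms": [{"name": "HighErrorRateAlarm"}]}'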

4. AWS CodePipeline

What it is: Continuous delivery service that orchestrates build, test, and deploy phases. Key concepts to memorize:
  • Stages contain action groups with actions
  • Action types: Source, Build, Test, Deploy, Approval, Invoke
  • Artifacts pass between stages via S3
  • Cross-region actions supported
  • Cross-account deployments using IAM roles
  • Manual approval actions with SNS notifications
  • CloudWatch Events/EventBridge for pipeline state changes
  • Pipeline execution modes: Superseded, Queued, Parallel
  • Integrates with third-party tools (Jenkins, GitHub Actions)
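To make the EventBridge bullet above concrete, a sketch of an event pattern that matches failed executions of a hypothetical pipeline; the rule's target could be an SNS topic or a Lambda function:

{
  "source": ["aws.codepipeline"],
  "detail-type": ["CodePipeline Pipeline Execution State Change"],
  "detail": {
    "state": ["FAILED"],
    "pipeline": ["my-app-pipeline"]
  }
}

The same pattern with the "CodePipeline Action Execution State Change" detail type can narrow notifications to a specific stage or action.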

5. AWS CodeArtifact

What it is: Artifact repository service for software packages (npm, PyPI, Maven, NuGet, etc.). Key concepts to memorize:
  • Domains contain repositories
  • Upstream repositories for package resolution chain
  • Can proxy public repositories (npmjs, PyPI, Maven Central)
  • Cross-account access via resource policies
  • Package versioning and retention policies
  • Integration with CodeBuild for dependency caching
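As a sketch of the upstream repository chain described above, a team repository can resolve packages through an upstream that proxies a public registry; the domain and repository names are hypothetical:

# Repository that proxies the public npm registry
aws codeartifact create-repository --domain my-domain --repository npm-store
aws codeartifact associate-external-connection --domain my-domain \
  --repository npm-store --external-connection public:npmjs
# Team repository that resolves missing packages from npm-store
aws codeartifact create-repository --domain my-domain --repository team-repo \
  --upstreams repositoryName=npm-store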

6. AWS CodeStar / CodeCatalyst

What it is: Unified development services for project templates and team collaboration. Note that AWS CodeStar has been discontinued; CodeCatalyst is its successor. Key concepts to memorize:
  • Project templates for common application patterns
  • Integrated IDE support (Cloud9, VS Code)
  • Team member management
  • CodeCatalyst: newer service with blueprints, workflows, and dev environments

7. Amazon ECR (Elastic Container Registry)

What it is: Managed Docker container registry. Key concepts to memorize:
  • Image scanning (basic and enhanced with Inspector)
  • Lifecycle policies for image cleanup
  • Cross-region and cross-account replication
  • Immutable image tags
  • Pull-through cache for public registries
  • Encryption at rest with KMS
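A sketch of the lifecycle policy bullet above: the JSON below expires untagged images older than 14 days (the description and age threshold are illustrative):

{
  "rules": [
    {
      "rulePriority": 1,
      "description": "Expire untagged images after 14 days",
      "selection": {
        "tagStatus": "untagged",
        "countType": "sinceImagePushed",
        "countUnit": "days",
        "countNumber": 14
      },
      "action": { "type": "expire" }
    }
  ]
}

It can be applied with aws ecr put-lifecycle-policy --repository-name my-repo --lifecycle-policy-text file://policy.json (repository name hypothetical).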

8. AWS Lambda (for Automation)

Key concepts for SDLC:
  • Custom actions in CodePipeline
  • CodeDeploy hooks for Lambda deployments
  • Traffic shifting (AllAtOnce, Canary, Linear)
  • Alias and version management
  • Provisioned concurrency for deployment warmup
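For the traffic shifting and alias bullets above, a hedged sketch of weighted alias routing outside CodeDeploy; the function name, alias, and version numbers are hypothetical:

# Keep 90% of "live" traffic on version 1 (the alias target) and send 10% to version 2
aws lambda update-alias \
  --function-name my-function \
  --name live \
  --function-version 1 \
  --routing-config '{"AdditionalVersionWeights": {"2": 0.1}}'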

9. Amazon ECS/EKS Deployments

Key concepts:
  • ECS blue/green with CodeDeploy
  • Task definition versioning
  • Service updates (rolling, blue/green)
  • EKS deployments with CodePipeline and kubectl

10. AWS CloudFormation (for SDLC)

Key concepts:
  • StackSets for multi-account/multi-region deployments
  • Change sets for previewing changes
  • Nested stacks for modular templates
  • Drift detection
  • cfn-init, cfn-signal, cfn-hup for EC2 bootstrapping
  • CreationPolicy and WaitCondition for resource signaling
  • DeletionPolicy (Retain, Snapshot, Delete)
  • UpdatePolicy for ASG rolling updates
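To illustrate the cfn-signal and CreationPolicy bullets above, a minimal template fragment, assuming a hypothetical AMI ID and with most instance properties omitted:

Resources:
  WebServer:
    Type: AWS::EC2::Instance
    CreationPolicy:
      ResourceSignal:
        Count: 1
        Timeout: PT15M   # Fail the stack if no success signal arrives within 15 minutes
    Properties:
      ImageId: ami-0123456789abcdef0
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash
          /opt/aws/bin/cfn-init -v --stack ${AWS::StackName} --resource WebServer --region ${AWS::Region}
          # Report the exit status of cfn-init back to CloudFormation
          /opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource WebServer --region ${AWS::Region}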

11. AWS Elastic Beanstalk

Key concepts:
  • Deployment policies: All at once, Rolling, Rolling with additional batch, Immutable, Traffic splitting
  • .ebextensions for configuration
  • Saved configurations and environment cloning
  • Blue/green via environment swap (CNAME swap)
  • Managed platform updates
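As a sketch of the .ebextensions and deployment policy bullets above, a config file such as .ebextensions/deploy.config could select traffic splitting with a 10% canary evaluated for 5 minutes (values are examples):

option_settings:
  aws:elasticbeanstalk:command:
    DeploymentPolicy: TrafficSplitting
  aws:elasticbeanstalk:trafficsplitting:
    NewVersionPercent: "10"
    EvaluationTime: "5"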

12. AWS Systems Manager

Key concepts for SDLC:
  • Parameter Store for configuration and secrets
  • Run Command for remote execution
  • Automation documents for runbooks
  • State Manager for configuration compliance
  • Session Manager for secure access
  • Patch Manager for automated patching

13. Amazon EventBridge (CloudWatch Events)

Key concepts:
  • Pipeline event rules
  • Scheduled pipeline triggers
  • Cross-account event buses
  • Event patterns for filtering

14. AWS Secrets Manager

Key concepts:
  • Automatic rotation with Lambda
  • Cross-account access
  • Integration with RDS, Redshift, DocumentDB
  • Versioning (AWSCURRENT, AWSPREVIOUS)

---

Key Concepts to Understand

CI/CD Pipeline Patterns

  1. Single-account pipeline: Source → Build → Deploy (same account)
  2. Cross-account pipeline: Build in tools account, deploy to dev/staging/prod accounts
  3. Multi-region pipeline: Deploy to multiple regions with cross-region actions

Deployment Strategies Comparison

Strategy                      | Downtime | Rollback Speed | Risk   | Use Case
All-at-once                   | Yes      | Redeploy       | High   | Dev/test
Rolling                       | Minimal  | Slow           | Medium | Cost-sensitive
Rolling with additional batch | No       | Medium         | Medium | Production
Immutable                     | No       | Fast           | Low    | Production
Blue/Green                    | No       | Instant        | Lowest | Critical apps
Canary                        | No       | Fast           | Low    | Feature testing

AppSpec File Structures

For EC2/On-Premises:
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/html
hooks:
  BeforeInstall:
    - location: scripts/before_install.sh
      timeout: 300
  AfterInstall:
    - location: scripts/after_install.sh
For Lambda:
version: 0.0
Resources:
  - MyFunction:
      Type: AWS::Lambda::Function
      Properties:
        Name: "MyLambdaFunction"
        Alias: "live"
        CurrentVersion: "1"
        TargetVersion: "2"
Hooks:
  - BeforeAllowTraffic: "BeforeAllowTrafficHookFunction"
  - AfterAllowTraffic: "AfterAllowTrafficHookFunction"
For ECS:
version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: "arn:aws:ecs:..."
        LoadBalancerInfo:
          ContainerName: "my-container"
          ContainerPort: 80

BuildSpec File Structure

version: 0.2
env:
  variables:
    KEY: "value"
  parameter-store:
    SECRET: "/path/to/secret"
  secrets-manager:
    DB_PASS: "arn:aws:secretsmanager:..."
phases:
  install:
    runtime-versions:
      nodejs: 18
    commands:
      - npm install
  pre_build:
    commands:
      - npm test
  build:
    commands:
      - npm run build
  post_build:
    commands:
      - echo "Build complete"
artifacts:
  files:
    - '**/*'
  base-directory: dist
cache:
  paths:
    - 'node_modules/**/*'
reports:
  junit-reports:
    files:
      - 'test-results.xml'
    file-format: JUNITXML

---

📝 Practice Questions

Question 1
A company has a CodePipeline that deploys a web application to multiple AWS accounts (development, staging, production). The pipeline is in the tools account. Deployments to the production account are failing with "Access Denied" errors. The cross-account IAM role exists in the production account. What is the MOST likely cause?
A. The CodePipeline service role does not have permission to assume the cross-account role
B. The S3 artifact bucket is not replicated to the production account
C. The cross-account role trust policy does not allow the tools account to assume it
D. CodePipeline does not support cross-account deployments
Answer: A

Explanation:

For cross-account deployments in CodePipeline, the CodePipeline service role in the tools account must have sts:AssumeRole permission for the cross-account role in the target account. Additionally, the cross-account role must have a trust policy allowing the tools account. However, the question states the cross-account role "exists," implying it's configured correctly, so the most likely issue is the CodePipeline service role permissions. The S3 bucket must be accessible cross-account (via bucket policy), but the error message specifically indicates an assume role issue.

Question 2
A development team uses AWS CodeBuild to build their Java application. Build times have increased from 5 minutes to 25 minutes. Investigation shows that Maven downloads all dependencies for every build. What should be implemented to reduce build times?
A. Use a larger compute type for CodeBuild
B. Configure S3 caching for the Maven .m2 directory in the buildspec file
C. Move the build process to EC2 instances with dependencies pre-installed
D. Use CodeArtifact to host dependencies closer to the build environment
Answer: B

Explanation:

CodeBuild supports caching via S3 to persist files between builds. For Maven projects, caching the .m2/repository directory dramatically reduces build times by avoiding repeated downloads. The buildspec.yml would include:
cache:
  paths:
    - '/root/.m2/**/*'
While option D (CodeArtifact) would help, it still requires downloading dependencies each build without local caching. Option A wouldn't reduce dependency download time. Option C introduces operational overhead.
Question 3
A company uses AWS CodeDeploy to deploy applications to EC2 instances in an Auto Scaling group. During a deployment, new instances are launched by the Auto Scaling group but receive the old application version. How should this be resolved?
A. Configure the Auto Scaling group to use a lifecycle hook that triggers CodeDeploy
B. Use an immutable deployment configuration
C. Configure CodeDeploy to use a blue/green deployment type
D. Suspend the Auto Scaling group during deployments
Answer: A

Explanation:

When Auto Scaling launches new instances during or shortly after a CodeDeploy deployment, those instances can come up running the old application revision (whatever is baked into the AMI or launch configuration). To ensure new instances receive the current revision, associate the Auto Scaling group with the deployment group; CodeDeploy installs an Auto Scaling lifecycle hook that automatically deploys the latest successful revision to newly launched instances. This ensures consistency across all instances in the deployment group. Option D would work but creates operational complexity and potential availability issues. Option C changes the deployment model entirely. Option B (immutable) is an Elastic Beanstalk deployment policy, not a CodeDeploy deployment configuration.

Question 4
A team is implementing a CI/CD pipeline using AWS CodePipeline. They need to run integration tests against a deployed application in a test environment before proceeding to production deployment. The test takes 15 minutes to complete. Which approach should they use?
A. Add a CodeBuild action in the test stage that runs the integration tests
B. Add a Lambda invoke action that triggers the test and immediately returns success
C. Add a manual approval action and run tests outside the pipeline
D. Add a CodeBuild action with a custom image that includes test frameworks, configured with appropriate timeout
Answer: D

Explanation:

CodeBuild is ideal for running integration tests within a pipeline. The default timeout is 60 minutes (configurable up to 8 hours), which accommodates the 15-minute test. Using a custom Docker image with pre-installed test frameworks optimizes build time. Option A is partially correct but doesn't mention the custom image optimization. Option B would complete before tests finish. Option C introduces manual intervention unnecessarily.

Question 5
A company wants to implement a deployment strategy that routes 10% of traffic to a new Lambda function version, monitors for errors for 10 minutes, then shifts all traffic if successful. Which configuration achieves this?
A. CodeDeploy with deployment configuration Canary10Percent10Minutes
B. CodeDeploy with deployment configuration Linear10PercentEvery1Minute
C. API Gateway with canary release settings
D. Lambda alias with weighted routing at 10%
Answer: A

Explanation:

CodeDeploy's Canary10Percent10Minutes deployment configuration shifts exactly 10% of traffic to the new version, waits 10 minutes (allowing monitoring and potential rollback), then shifts the remaining 90%. This matches the requirement exactly. Linear10PercentEvery1Minute would incrementally shift 10% every minute, completing in 10 minutes total. Option C requires manual configuration. Option D is manual and doesn't automate the full shift.
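If the function is defined with AWS SAM, the same behavior can be requested declaratively; a hedged sketch, assuming a hypothetical function and alarm:

MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Handler: app.handler
    Runtime: nodejs18.x
    AutoPublishAlias: live            # Publishes a new version and repoints the alias
    DeploymentPreference:
      Type: Canary10Percent10Minutes  # CodeDeploy shifts 10%, waits 10 minutes, then shifts the rest
      Alarms:
        - !Ref ErrorsAlarm            # Roll back automatically if this alarm fires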

Question 6
A DevOps engineer needs to store sensitive database credentials for use in CodeBuild. The credentials must be encrypted and automatically rotated every 30 days. Which solution meets these requirements?
A. Store credentials in CodeBuild environment variables with encryption enabled
B. Store credentials in Systems Manager Parameter Store SecureString with a rotation Lambda function
C. Store credentials in AWS Secrets Manager with automatic rotation enabled
D. Store credentials in an encrypted S3 object and download during build
Answer: C

Explanation:

AWS Secrets Manager is designed for storing and automatically rotating credentials. It provides native integration with RDS, Redshift, and DocumentDB for automatic rotation, and supports custom rotation Lambda functions for other credential types. CodeBuild can reference Secrets Manager secrets in the buildspec using secrets-manager environment variable references. Parameter Store (option B) can store secrets but doesn't have built-in rotation; you'd need to implement custom rotation. Option A doesn't support rotation. Option D is operationally complex.

Question 7
A company's CodePipeline includes a source stage using CodeCommit, a build stage using CodeBuild, and a deploy stage using CodeDeploy. The pipeline should only trigger when changes are made to the 'main' branch, not feature branches. How should this be configured?
A. Configure the CodeCommit trigger in CodePipeline to filter by branch name
B. Create a CloudWatch Events rule with a branch filter pattern
C. Configure a CodeCommit trigger that only fires for the main branch
D. Add a Lambda function as the first action to check the branch name
Answer: A (B also achieves the same result via an explicit EventBridge rule)

Explanation:

CodePipeline's native CodeCommit source action triggers on the specified branch only. When creating the source action, you specify the repository and branch name (e.g., "main"). The underlying mechanism uses CloudWatch Events/EventBridge to detect changes. For explicit filtering, you can create a CloudWatch Events rule with an event pattern filtering by branch:
{
  "source": ["aws.codecommit"],
  "detail-type": ["CodeCommit Repository State Change"],
  "detail": {
    "referenceType": ["branch"],
    "referenceName": ["main"]
  }
}
The question's best answer is A because CodePipeline source action configuration specifies the branch directly.
Question 8
An application team uses AWS CodeBuild with a managed Ubuntu image. The build requires a commercial tool that must be installed during every build, adding 5 minutes to build time. What is the MOST efficient solution?
A. Add installation commands to the install phase of buildspec.yml with caching enabled
B. Create a custom Docker image with the tool pre-installed and use it as the build environment
C. Use an EC2 build fleet with the tool pre-installed
D. Store the tool in S3 and download it with caching
Answer: B

Explanation:

Creating a custom Docker image with pre-installed tools is the most efficient approach. The custom image can be stored in Amazon ECR and referenced in the CodeBuild project configuration. This eliminates installation time completely for each build. Option A with caching might help but still requires initial installation in each new build environment. Option C (EC2 build fleet) introduces unnecessary complexity. Option D still requires extraction/installation time.

Question 9
A company uses CodePipeline with CodeDeploy for EC2 deployments. They need to receive notifications when deployments fail so the on-call team can respond. Which approach provides immediate notification?
A. Configure CodeDeploy to send SNS notifications on deployment failure
B. Create a CloudWatch Events rule for CodeDeploy deployment failure events that triggers SNS
C. Create a CloudWatch alarm on the CodeDeploy failure metric
D. Use CodePipeline notifications feature with SNS
Answer: B or D

Explanation:

Both options B and D work, but the question asks about immediate notification. CloudWatch Events (EventBridge) provides near real-time event-driven notifications. CodePipeline's native notification feature also uses EventBridge under the hood. For CodeDeploy-specific failures, option B with a rule like:
{
  "source": ["aws.codedeploy"],
  "detail-type": ["CodeDeploy Deployment State-change Notification"],
  "detail": {
    "state": ["FAILURE"]
  }
}
This triggers an SNS topic for immediate notification. Option D would catch pipeline-level failures which includes CodeDeploy failures when CodeDeploy is a pipeline stage.
Question 10
A DevOps team needs to implement a blue/green deployment for an application running on EC2 instances behind an Application Load Balancer. The deployment should automatically roll back if CloudWatch alarms indicate increased error rates. Which configuration is required?
A. CodeDeploy with blue/green deployment type, ALB target groups, and alarm-based automatic rollback
B. Elastic Beanstalk with immutable deployment policy
C. CloudFormation with AutoScalingReplacingUpdate policy
D. CodePipeline with parallel deploy actions to blue and green environments
Answer: A

Explanation:

AWS CodeDeploy supports blue/green deployments for EC2 instances using ALB target groups. The deployment configuration includes:
  • Two target groups (blue and green)
  • CodeDeploy shifts traffic between target groups
  • CloudWatch alarms can trigger automatic rollback if error thresholds are exceeded
  • Configurable traffic shifting (all-at-once, canary, linear)

The rollback configuration in CodeDeploy can specify alarms that, when triggered, automatically roll back the deployment by shifting traffic back to the original target group.

Question 11
A team is using AWS CodeArtifact as their npm package repository. Developers need to configure their local npm clients to use CodeArtifact. The authentication token expires every 12 hours. What is the recommended approach for local development?
A. Store the CodeArtifact token in .npmrc permanently
B. Use the aws codeartifact login command before running npm commands
C. Create an IAM user with long-term credentials for CodeArtifact access
D. Configure npm to use the CodeArtifact endpoint without authentication
Answer: B

Explanation:

The aws codeartifact login command retrieves an authentication token and configures npm automatically. The command:
aws codeartifact login --tool npm --repository my-repo --domain my-domain --domain-owner 111122223333
This updates the user's npm configuration with the token. Tokens are valid for 12 hours by default (the duration is configurable, up to a maximum of 12 hours), so developers should run this command regularly or script it. Option A is insecure and tokens expire anyway. Option C creates security risks with long-term credentials. Option D wouldn't work as authentication is required.
Question 12
An organization requires that all production deployments receive approval from the security team before proceeding. The approval must be documented for audit purposes. How should this be implemented in CodePipeline?
A. Add a manual approval action with SNS notification to the security team
B. Implement a Lambda function that checks a ticketing system for approval
C. Use IAM policies to require security team credentials for deployment
D. Add a CodeBuild action that pauses for approval input
Answer: A

Explanation:

CodePipeline's manual approval action is designed for this use case. Configuration includes:
  • SNS topic for notification (emails the security team)
  • Custom approval message with deployment details
  • Optional URL to review changes
  • Comments field for approval documentation
When approved/rejected, CodePipeline records:
  • Who approved/rejected (IAM identity)
  • When the action was taken
  • Comments provided

This information is available in CloudTrail and pipeline history for audit purposes. Option B could work but adds complexity. Options C and D don't provide the workflow CodePipeline approval actions offer.
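A hedged sketch of how the approval action might appear in the pipeline definition; the action name, SNS topic ARN, and review URL are hypothetical:

{
  "name": "SecurityApproval",
  "actionTypeId": { "category": "Approval", "owner": "AWS", "provider": "Manual", "version": "1" },
  "configuration": {
    "NotificationArn": "arn:aws:sns:us-east-1:111122223333:security-approvals",
    "CustomData": "Review the release notes before approving the production deployment",
    "ExternalEntityLink": "https://example.com/release-notes"
  },
  "runOrder": 1
}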

Question 13
A company uses CodeBuild for their CI process. They need to run unit tests and generate a test coverage report. The coverage report must be stored and viewable in the AWS Console. Which CodeBuild feature should be used?
A. Build artifacts uploaded to S3
B. CodeBuild Reports with coverage report type
C. CloudWatch Logs with custom metrics
D. Build badges on the repository
Answer: B

Explanation:

CodeBuild Reports feature supports test and code coverage reports. In buildspec.yml:
reports:
  coverage-report:
    files:
      - 'coverage/clover.xml'
    file-format: CLOVERXML  # or COBERTURAXML, JACOCOXML, etc.
Supported coverage formats include Clover, Cobertura, JaCoCo, and SimpleCov. Reports are viewable in the CodeBuild console with trend analysis across builds. Option A stores files but doesn't provide console visualization. Option C is for logs, not reports. Option D shows build status, not coverage.
Question 14
A development team has a monorepo containing multiple microservices. They want to configure CodePipeline to only build and deploy services that have changed, not all services for every commit. What approach should they implement?
A. Create separate pipelines for each microservice with path-based triggers using CloudWatch Events
B. Use a single pipeline with conditional actions based on file changes
C. Implement a Lambda function that analyzes git changes and triggers appropriate pipelines
D. Use CodeBuild batch builds with dynamic project selection
Answer: A or C

Explanation:

For monorepo patterns with selective builds:

Option A: Create separate pipelines per microservice. Use CloudWatch Events with custom event patterns that a Lambda function enriches with changed file paths, so each pipeline is triggered only when its service's files change.
Option C: A Lambda function can:
  1. Receive CodeCommit events
  2. Use Git APIs to determine changed files
  3. Start only the relevant pipelines using start-pipeline-execution

This is a common pattern because CodePipeline doesn't natively support path-based filtering. The Lambda approach offers more flexibility for complex monorepo structures.

AWS has since added Git-based trigger filters to CodePipeline (for connection-based sources such as GitHub), which can filter pipeline starts by file path and simplify this pattern where supported.

Question 15
An application runs on Amazon ECS with Fargate. The team wants to implement blue/green deployments with traffic shifting and automatic rollback capabilities. Which combination of services should be used?
A. CodePipeline with ECS standard deployment action
B. CodePipeline with CodeDeploy ECS deployment action
C. CodePipeline with CloudFormation deployment action
D. CodePipeline with custom Lambda action for ECS updates
Answer: B

Explanation:

AWS CodeDeploy supports blue/green deployments for Amazon ECS services. The configuration requires:
  • Application Load Balancer with two target groups
  • ECS service configured for CodeDeploy deployment controller
  • appspec.yml defining the task definition and container details
  • CodePipeline action type: Deploy > Amazon ECS (Blue/Green)
CodeDeploy manages:
  • Creating replacement task set
  • Traffic shifting (all-at-once, canary, linear)
  • Health checks and CloudWatch alarm monitoring
  • Automatic rollback on failures

The standard ECS deployment action (option A) only supports rolling updates, not blue/green.

Question 16
A company's CodePipeline uses S3 as the artifact store. Build artifacts from CodeBuild are several gigabytes in size and are causing slow pipeline execution. What optimization should be implemented?
A. Use CodeArtifact instead of S3 for artifacts
B. Enable S3 Transfer Acceleration on the artifact bucket
C. Reduce artifact size by excluding unnecessary files in buildspec artifacts section
D. Move the pipeline to a region closer to development teams
Answer: C

Explanation:

The most effective optimization is reducing artifact size at the source. In buildspec.yml, carefully specify only necessary files:
artifacts:
  files:
    - 'app/**/*'
    - 'config/*.json'
  exclude-paths:
    - 'node_modules/**/*'
    - 'test/**/*'
  base-directory: dist
The exclude-paths and base-directory options shown above keep the artifact small (discard-paths only flattens the directory structure; it does not reduce size). Also consider:
  • Excluding test files, documentation, source maps
  • Compressing artifacts
  • Using artifact caching instead of passing large unchanged files

Option B adds cost and minimal benefit for inter-service transfers. Option A is for packages, not build artifacts.

Question 17
A DevOps engineer is configuring CodeDeploy for an on-premises server fleet. The servers can communicate with AWS over the internet. What must be configured on the servers for CodeDeploy to work?
A. AWS CLI and IAM user credentials
B. CodeDeploy agent and IAM instance profile
C. CodeDeploy agent and IAM user credentials with appropriate permissions
D. SSM agent and IAM role
Answer: C

Explanation:

For on-premises servers with CodeDeploy:

  1. CodeDeploy Agent must be installed and running on each server
  2. IAM User credentials (access key/secret key) must be configured because on-premises servers cannot use IAM instance profiles (those are EC2-only)
The IAM user needs permissions to:
  • Access S3 buckets containing deployment artifacts
  • Communicate with CodeDeploy service

Configuration file location: /etc/codedeploy-agent/conf/codedeploy.onpremises.yml contains the IAM credentials and region.

Option B is incorrect because instance profiles are EC2-specific. Option D (SSM) is not required for CodeDeploy (though it can complement it).
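A hedged sketch of that on-premises configuration file; the credentials, account ID, and user name are placeholders:

# /etc/codedeploy-agent/conf/codedeploy.onpremises.yml
---
aws_access_key_id: AKIAEXAMPLEKEY
aws_secret_access_key: exampleSecretKey
iam_user_arn: arn:aws:iam::111122223333:user/CodeDeployOnPremUser
region: us-east-1

The server is then registered and tagged (for example with aws deploy register-on-premises-instance --instance-name my-server --iam-user-arn <arn>) so a deployment group can target it.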

Question 18
A CodePipeline has source, build, and deploy stages. The team wants to add automated security scanning that checks for vulnerabilities in dependencies before the build stage. The scan results should block the pipeline if critical vulnerabilities are found. Which approach is recommended?
A. Add a CodeBuild action before the build stage that runs security scanning tools
B. Configure Amazon Inspector to scan the source repository
C. Add a Lambda action that triggers AWS SecurityHub analysis
D. Use CodeGuru Security in the source stage
Answer: A

Explanation:

Adding a CodeBuild action for security scanning is the most flexible approach. The CodeBuild project can run tools like:
  • OWASP Dependency-Check
  • Snyk
  • npm audit / pip-audit
  • Trivy for container images
The buildspec can be configured to fail the build (exit code non-zero) if critical vulnerabilities are found:
phases:
  build:
    commands:
      - npm audit --audit-level=critical
      - snyk test --severity-threshold=critical

Pipeline stops if CodeBuild reports failure. Option D (CodeGuru Security) is newer and can be integrated but the question implies more general vulnerability scanning.

Question 19
A company uses Elastic Beanstalk for their web application. They want to update the application with zero downtime and the ability to quickly roll back. They also want to run the new version alongside the old version temporarily to compare performance. Which deployment policy should they use?
A. Rolling deployment
B. Immutable deployment
C. Blue/green deployment using environment swap
D. Traffic splitting deployment
Answer: D

Explanation:

Elastic Beanstalk's Traffic Splitting deployment policy (canary testing) allows:
  • New version deployed to a fresh set of instances
  • Configurable percentage of traffic routed to new version
  • Evaluation period for monitoring
  • Automatic rollback if health checks fail
  • Quick rollback by terminating new instances

This matches the requirements of running both versions simultaneously for comparison. Option C (blue/green with CNAME swap) also works but doesn't allow percentage-based traffic splitting. Option B (immutable) replaces instances but doesn't maintain both versions simultaneously after deployment completes.

Question 20
A development team needs to share build artifacts between a CodeBuild project in the us-east-1 region and a deployment in the eu-west-1 region within the same CodePipeline. How should this be configured?
A. Configure cross-region artifact replication in the CodePipeline settings
B. Manually copy artifacts to S3 in the target region
C. Use a CodeBuild action in each region with separate artifact buckets
D. Enable S3 cross-region replication on the artifact bucket
Answer: A

Explanation:

CodePipeline natively supports cross-region actions. When configuring a cross-region action, CodePipeline automatically:
  • Creates an artifact bucket in the target region (or uses one you specify)
  • Replicates necessary artifacts to the target region's bucket
  • Handles encryption key management across regions
Configuration requires:
  • Specifying the region for each action
  • Ensuring IAM roles have cross-region permissions
  • KMS keys in each region if using customer-managed keys

This is configured in the pipeline structure by setting the region property on actions that need to run in different regions.

Question 21
A company has a CodePipeline that deploys to three environments: dev, staging, and prod. They want staging and prod deployments to wait for a minimum time after the previous environment's deployment before proceeding, to allow for testing. How should this be implemented?
A. Add wait actions between deployment stages
B. Add manual approval actions with estimated wait times in notifications
C. Use Lambda actions that implement wait logic using Step Functions
D. Configure deployment configuration with minimum wait time
Answer: C (or a combination approach)

Explanation:

CodePipeline doesn't have a native "wait" action. Options include:

Lambda + Step Functions (Option C): Create a Lambda action that triggers a Step Functions workflow with a Wait state. The workflow waits the specified time, then signals CodePipeline to continue using put-job-success-result.
Alternative approach: Use CloudWatch Events with scheduled rules so that:
  1. Pipeline pauses at approval action
  2. Scheduled event triggers Lambda after wait period
  3. Lambda approves the pending action via API

Option B with manual approvals works but requires human intervention. There's no "wait action" in CodePipeline (option A), and CodeDeploy configurations don't control inter-stage timing (option D).
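A hedged sketch of the Step Functions state machine for the wait-then-signal pattern: the pipeline's Lambda action would pass its job ID as input to the execution, and the final task (a hypothetical function) reports success back with put-job-success-result:

{
  "StartAt": "SoakPeriod",
  "States": {
    "SoakPeriod": {
      "Type": "Wait",
      "Seconds": 3600,
      "Next": "SignalPipeline"
    },
    "SignalPipeline": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:111122223333:function:SignalCodePipelineJob",
      "End": true
    }
  }
}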

Question 22
An organization wants to prevent direct commits to the main branch of their CodeCommit repository. All changes must go through pull requests that require at least two approvals. How should this be configured?
A. Create an IAM policy denying GitPush to the main branch
B. Configure branch-level permissions and approval rule templates
C. Use a Lambda trigger to reject direct commits
D. Implement pre-commit hooks on developer machines
Answer: B

Explanation:

CodeCommit provides:

  1. Approval Rule Templates: Define rules requiring specific numbers of approvals and optionally specific approvers (by IAM ARN or wildcard patterns)
  2. Branch-level Permissions: IAM policies can restrict push access to specific branches:
{
  "Effect": "Deny",
  "Action": ["codecommit:GitPush"],
  "Resource": "arn:aws:codecommit:*:*:repo-name",
  "Condition": {
    "StringEqualsIfExists": {
      "codecommit:References": ["refs/heads/main"]
    }
  }
}

The combination ensures only merged pull requests (after approval) update the main branch. Option D doesn't work because local hooks can be bypassed.

Question 23
A CodeBuild project needs to access a private RDS database during integration tests. The database is in a private subnet with no internet access. How should CodeBuild be configured?
A. Configure CodeBuild with VPC settings specifying private subnets and security groups
B. Create a VPC endpoint for RDS in the private subnet
C. Use RDS Proxy with public accessibility
D. Set up a NAT gateway for CodeBuild to access RDS
Answer: A

Explanation:

CodeBuild can be configured to run inside a VPC:
  • Specify VPC ID
  • Specify private subnet IDs (CodeBuild runs in these subnets)
  • Specify security group IDs
When running in VPC:
  • CodeBuild can access VPC resources (RDS, ElastiCache, etc.)
  • For internet access (downloading dependencies), you need NAT Gateway or VPC endpoints
  • Consider using S3 and CodeArtifact VPC endpoints for build dependencies

Security group must allow outbound traffic to RDS and RDS security group must allow inbound from CodeBuild security group.

Question 24
A pipeline uses AWS CodeDeploy to deploy a containerized application to Amazon ECS. The team wants to validate the new deployment by running synthetic tests before shifting production traffic. Which CodeDeploy feature supports this?
A. BeforeInstall lifecycle hook
B. AfterInstall lifecycle hook
C. BeforeAllowTraffic lifecycle hook with Lambda function
D. ValidateService lifecycle hook
Answer: C

Explanation:

For ECS blue/green deployments, CodeDeploy supports Lambda-based hooks:

  • BeforeInstall: Runs before replacement task set is created
  • AfterInstall: Runs after replacement task set is created but before traffic shifts
  • AfterAllowTraffic: Runs after traffic has shifted to replacement
BeforeAllowTraffic is ideal for validation testing because:
  • New task set is running and accessible via test target group
  • Production traffic still goes to original task set
  • Lambda function can run synthetic tests against test endpoint
  • If tests fail, Lambda returns failure and deployment rolls back
appspec.yml:
Hooks:
  - BeforeAllowTraffic: "ValidateDeploymentLambda"
Question 25
A team uses CodePipeline with a GitHub source. They want the pipeline to trigger only for pull request merges to the main branch, not for direct pushes. How should this be configured?
A. Use GitHub webhook with event filtering
B. Configure CodePipeline GitHub source action with pull request filter
C. Use AWS CodeStar Connections with trigger filters
D. Add a Lambda function to verify the commit was from a merged PR
Answer: C

Explanation:

AWS CodeStar Connections (which replaced GitHub OAuth tokens for CodePipeline) supports trigger configuration with filters:

When creating a pipeline with GitHub (via CodeStar Connections), you can configure:
  • Push triggers: Trigger on push to specified branches
  • Pull request triggers: Trigger on PR events (opened, updated, merged)
  • Tag triggers: Trigger on tag creation
For the requirement, configure:
  • Pipeline trigger type: Push
  • Branch filter: main
  • This triggers only when commits land on main (which happens after PR merge)

Alternatively, use EventBridge with GitHub events and filter for merged PR events, then trigger the pipeline.

Question 26
A development team wants to implement feature flags to control the rollout of new features without redeploying the application. Which AWS service combination provides this capability?
A. AppConfig with feature flag configuration profile
B. Systems Manager Parameter Store with application polling
C. Lambda@Edge for traffic routing
D. API Gateway with stage variables
Answer: A

Explanation:

AWS AppConfig (part of Systems Manager) provides feature flag functionality:

  • Feature Flag configuration profile type specifically designed for feature flags
  • Supports gradual rollout percentages
  • Built-in validation before deployment
  • Integration with CloudWatch for monitoring
  • Rollback capabilities
  • SDK caching for performance
Configuration includes:
  • Creating a feature flag configuration profile
  • Defining flags with enabled/disabled states
  • Configuring deployment strategy (percentage-based rollout)
  • Application polls AppConfig or uses cached configuration

AppConfig is preferred over Parameter Store for feature flags because it provides deployment strategies, validation, and rollback capabilities specifically designed for configuration changes.
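A hedged sketch of how an application or script might poll for the flag data using the AppConfig Data API; the application, environment, and profile identifiers are hypothetical:

# Start a configuration session, then fetch the latest flag document
TOKEN=$(aws appconfigdata start-configuration-session \
  --application-identifier my-app \
  --environment-identifier prod \
  --configuration-profile-identifier feature-flags \
  --query InitialConfigurationToken --output text)

aws appconfigdata get-latest-configuration \
  --configuration-token "$TOKEN" flags.json
cat flags.json   # JSON document containing the flags and their enabled state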

Question 27
A company runs CodeBuild projects frequently throughout the day. They notice that builds are sometimes delayed waiting for compute capacity. What solution ensures builds start immediately?
A. Increase the build timeout setting
B. Configure reserved capacity for CodeBuild
C. Use larger compute types that have more availability
D. Configure a CodeBuild fleet with persistent instances
Answer: D

Explanation:

CodeBuild supports two capacity types:

  1. On-demand (default): Build environments provisioned on demand; may have slight delays during peak usage
  2. Reserved capacity (fleets): Pre-provisioned build instances that are always available:
  • Eliminate cold start delays
  • Faster build starts
  • Cost-effective for consistent build workloads
  • Instances remain available between builds
Fleet configuration includes:
  • Compute type
  • Number of instances
  • Environment type

For consistent, immediate build starts, reserved capacity fleets are recommended for teams with frequent builds.

Question 28
A CodePipeline needs to deploy the same application to 5 AWS accounts (dev, qa, staging, uat, prod). Creating a 15-stage pipeline is unmanageable. What architecture pattern should be used?
A. Use parallel deployment actions within a single stage for all environments
B. Implement a fan-out pattern using Step Functions to orchestrate parallel pipelines
C. Create separate pipelines per environment triggered by the previous pipeline's success
D. Use CodePipeline stages with multiple parallel actions per environment tier
Answer: B or D (depending on requirements)

Explanation:

For multi-account deployments at scale:

Option D - Grouped stages with parallel actions:
  • Stage 1: Source
  • Stage 2: Build
  • Stage 3: Deploy to Dev + QA (parallel actions, same tier)
  • Stage 4: Manual Approval
  • Stage 5: Deploy to Staging + UAT (parallel)
  • Stage 6: Approval
  • Stage 7: Deploy to Prod

This reduces stages while maintaining logical separation.

Option B - Step Functions orchestration: For more complex scenarios:
  • CodePipeline triggers Step Functions
  • Step Functions manages parallel deployments across accounts
  • More flexibility for conditional logic, retries, error handling

For the exam, option D is the more "AWS-native" approach using CodePipeline's parallel actions feature.

Question 29
A company uses AWS Elastic Beanstalk with a load-balanced environment. During deployments, users sometimes see errors while instances are being updated. The team wants to eliminate any user-facing errors during deployments. Which deployment policy should they select?
A. Rolling
B. Rolling with additional batch
C. Immutable
D. All at once
Answer: C (or Traffic Splitting)

Explanation:

Immutable deployment:
  • Launches a temporary Auto Scaling group with new version
  • Full capacity maintained throughout deployment
  • New instances pass health checks before traffic shifts
  • Original instances only terminated after new ones are healthy
  • If deployment fails, only temporary instances are terminated
This guarantees no user-facing errors because:
  1. Original healthy instances continue serving traffic
  2. New instances are validated before receiving traffic
  3. Traffic only shifts to new instances after health checks pass
Rolling with additional batch maintains full capacity, but some requests can still reach instances that are being updated. Traffic splitting also works and allows percentage-based testing.
Question 30
A DevOps engineer is implementing a deployment pipeline for a Lambda function. The function must be deployed using the existing alias "prod" with traffic shifting. After deployment, a validation Lambda function must verify the new version works correctly. If validation fails, traffic should automatically revert to the previous version. Which services and configuration are required?
A. CodeDeploy with Lambda deployment, appspec.yml with hooks
B. CodePipeline with Lambda deployment action
C. CloudFormation with AWS::Lambda::Alias and CodeDeploy integration
D. Lambda alias with provisioned concurrency and CloudWatch alarms
Answer: A

Explanation:

CodeDeploy Lambda deployments provide:

  1. Alias traffic shifting: Configure in deployment configuration (Canary, Linear, AllAtOnce)
  2. Validation hooks: appspec.yml with BeforeAllowTraffic and AfterAllowTraffic hooks
version: 0.0
Resources:
  - MyFunction:
      Type: AWS::Lambda::Function
      Properties:
        Name: "my-function"
        Alias: "prod"
        CurrentVersion: "1"
        TargetVersion: "2"
Hooks:
  - AfterAllowTraffic: "ValidationFunction"
  3. Automatic rollback: If the validation function reports failure, CodeDeploy automatically shifts traffic back to the previous version

The validation Lambda function receives deployment ID and lifecycle hook information, runs tests, and calls put-lifecycle-event-hook-execution-status with Succeeded or Failed.

Question 31
A company has multiple development teams using CodePipeline. Each team should only be able to view and manage their own pipelines. How should access be controlled?
A. Create separate AWS accounts per team
B. Use resource tags and tag-based IAM policies
C. Create IAM groups per team with pipeline-specific policies
D. Use AWS Organizations SCPs to restrict pipeline access
Answer: B

Explanation:

Tag-based access control in IAM:

  1. Tag resources: Each team's pipelines tagged with Team: team-name
  2. IAM policy with conditions:
{
  "Effect": "Allow",
  "Action": ["codepipeline:*"],
  "Resource": "*",
  "Condition": {
    "StringEquals": {
      "aws:ResourceTag/Team": "${aws:PrincipalTag/Team}"
    }
  }
}
  3. Tag IAM principals: Users/roles tagged with their team identifier

This scales better than creating resource-specific policies (option C) and doesn't require separate accounts (option A). SCPs (option D) don't provide granular resource-level control.

Question 32
A CodeBuild project builds a Docker image and pushes it to Amazon ECR. The build is failing with "no basic auth credentials" error when pushing to ECR. What is the likely cause and solution?
A. The buildspec is missing the ECR login command
B. The CodeBuild service role lacks ECR permissions
C. ECR repository doesn't exist
D. Docker daemon is not running in CodeBuild
Answer: A

Explanation:

ECR requires authentication before pushing images. The buildspec must include the ECR login command:

phases:
  pre_build:
    commands:
      - aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com
  build:
    commands:
      - docker build -t $IMAGE_REPO_NAME:$IMAGE_TAG .
      - docker tag $IMAGE_REPO_NAME:$IMAGE_TAG $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
  post_build:
    commands:
      - docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG

Option B (IAM permissions) would cause a different error message. The "no basic auth credentials" specifically indicates missing login.

Question 33
A company wants to enforce that all CodeBuild projects use VPC configuration and cannot access the public internet during builds. How should this be enforced organization-wide?
A. Create an SCP denying CodeBuild project creation without VPC configuration
B. Use AWS Config rules to check CodeBuild configuration
C. Implement a Lambda function triggered on CodeBuild project creation
D. Use CodeBuild service control policies
Answer: A (or B for monitoring, A for prevention)

Explanation:

Service Control Policies (SCPs) in AWS Organizations can enforce CodeBuild configuration:

{
  "Effect": "Deny",
  "Action": ["codebuild:CreateProject", "codebuild:UpdateProject"],
  "Resource": "*",
  "Condition": {
    "Null": {
      "codebuild:VpcConfig": "true"
    }
  }
}

This prevents creating or updating CodeBuild projects without VPC configuration. For monitoring existing projects, AWS Config rules (option B) complement this by detecting non-compliant resources.

Note: The exact condition key syntax may vary; verify current documentation for precise implementation.

Question 34
A development team's CodePipeline source action uses an S3 bucket. They want the pipeline to trigger when a new object is uploaded to a specific prefix in the bucket. Currently, the pipeline uses polling. What change will provide faster pipeline triggering?
A. Enable S3 event notifications to CloudWatch Events
B. Configure S3 versioning on the bucket
C. Reduce the polling interval in CodePipeline
D. Enable CloudWatch Events detection for the source action
Answer: D

Explanation:

CodePipeline S3 source actions can use:

  1. Polling (periodic check): Default behavior, checks every few minutes
  2. Event-based (CloudWatch Events): Near real-time triggering
To enable event-based triggering:
  1. Enable "CloudWatch Events" option on the S3 source action
  2. CodePipeline automatically creates the necessary CloudWatch Events rule and S3 bucket notification

This configuration detects S3 object creation events and triggers the pipeline within seconds of the upload, compared to polling's multi-minute delay.

Note: The S3 source bucket must have versioning enabled; this is a requirement for CodePipeline S3 source actions regardless of the change-detection method.

Question 35
An application deployed with CodeDeploy on EC2 is experiencing issues after deployment. The DevOps engineer needs to investigate what happened during the deployment on a specific instance. Where should they look?
A. CloudWatch Logs for the CodeDeploy deployment
B. CodeDeploy deployment logs in the console
C. CodeDeploy agent logs on the EC2 instance
D. AWS X-Ray traces for the deployment
Answer: C

Explanation:

CodeDeploy agent logs on EC2 instances contain detailed deployment information:

Log locations:
  • Linux: /var/log/aws/codedeploy-agent/codedeploy-agent.log
  • Windows: C:\ProgramData\Amazon\CodeDeploy\log\codedeploy-agent.log
Additional log files:
  • /opt/codedeploy-agent/deployment-root/{deployment-group-id}/{deployment-id}/logs/scripts.log - lifecycle hook script output
These logs show:
  • File download status
  • Lifecycle hook execution
  • Script output and errors
  • Deployment timing

The CodeDeploy console (option B) provides high-level status but not detailed instance-level debugging information.

Question 36
A company is implementing CI/CD for a microservices architecture. Each service has its own CodePipeline. They need to coordinate deployments across services to ensure compatibility. How should they implement deployment coordination?
A. Use a parent CodePipeline that triggers child pipelines sequentially
B. Implement a Step Functions workflow that orchestrates pipeline executions
C. Use EventBridge to chain pipeline executions based on completion events
D. Create a single pipeline with all services in parallel stages
Answer: B (best for complex coordination) or C (for simpler cases)

Explanation:

For microservices deployment coordination:

Step Functions (Option B) - Best for complex orchestration:
  • Coordinate multiple pipeline executions
  • Handle dependencies between services
  • Implement retry logic and error handling
  • Support parallel and sequential deployments
  • Maintain state across long-running deployments
EventBridge (Option C) - Simpler cases:
Pipeline A completes → EventBridge rule → Triggers Pipeline B
Example Step Functions workflow:
  1. Deploy core services (parallel)
  2. Wait for all core services to complete
  3. Deploy dependent services (parallel)
  4. Run integration tests
  5. Deploy API gateway service

Option D loses independent service deployments. Option A is limited in flexibility.

Question 37
A CodePipeline needs to deploy to Amazon EKS. The deployment should use Kubernetes manifests stored in the source repository. Which approach is recommended?
A. Use CodeBuild to run kubectl commands against the EKS cluster
B. Use CodeDeploy with EKS deployment action
C. Use CloudFormation to deploy Kubernetes manifests
D. Use a Lambda function to interact with EKS API
Answer: A

Explanation:

For EKS deployments from CodePipeline:

CodeBuild with kubectl (Recommended):
  1. CodeBuild project configured with VPC access to EKS cluster
  2. Build environment includes kubectl
  3. Authentication using IAM role mapped to Kubernetes RBAC
buildspec.yml:
phases:
  install:
    commands:
      - curl -LO "https://dl.k8s.io/release/stable.txt"
      - curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
      - chmod +x kubectl && mv kubectl /usr/local/bin/
  build:
    commands:
      - aws eks update-kubeconfig --name my-cluster --region $AWS_REGION
      - kubectl apply -f kubernetes/

Note: CodeDeploy (option B) doesn't natively support EKS. CloudFormation (option C) can work but is more complex. AWS also offers Controllers for Kubernetes (ACK) for IaC approaches.

Question 38
A team uses CodeCommit and needs to implement code review requirements. They want to ensure that at least one person from the security team reviews changes to files in the `/security/` directory. How should this be configured?
A. Create an approval rule template with path-based conditions
B. Create a Lambda trigger that enforces approval based on changed files
C. Use branch protection with required reviewers
D. Implement a custom CodeGuru reviewer for security files
Answer: B (currently the best approach for path-based requirements)

Explanation:

CodeCommit's approval rule templates don't support path-based conditions natively. For path-specific approval requirements:

Lambda Trigger Approach:
  1. Configure CodeCommit trigger for pull request events
  2. Lambda function:
  • Analyzes changed files in the PR
  • If /security/ files are changed, updates required approvals
  • Enforces approval from security team members (by IAM ARN)
  • Blocks merge until requirements met
The Lambda can:
  • Use CodeCommit APIs to get PR details and file changes
  • Create/update approval rules dynamically
  • Comment on PR with requirements

This is more complex but necessary for path-based requirements. Standard approval rule templates only support repository-level and branch-level rules.

Question 39
A CodeBuild project runs integration tests that require access to test data in an S3 bucket. The bucket is encrypted with a customer-managed KMS key. Builds are failing with access denied errors when downloading test data. The CodeBuild service role has S3 permissions. What additional configuration is needed?
A. Grant the CodeBuild service role kms:Decrypt permission for the KMS key
B. Enable S3 bucket versioning
C. Configure the KMS key policy to allow CodeBuild service
D. Use S3 server-side encryption with S3-managed keys instead
Answer: A

Explanation:

When accessing S3 objects encrypted with customer-managed KMS keys, the accessing principal needs:

  1. S3 permissions (already configured per question)
  2. KMS permissions to decrypt the data key
Required KMS permissions:
{
  "Effect": "Allow",
  "Action": ["kms:Decrypt", "kms:GenerateDataKey"],
  "Resource": "arn:aws:kms:region:account:key/key-id"
}
This can be granted via:
  • IAM policy on the CodeBuild service role (Option A)
  • KMS key policy (Option C - also works but typically both are configured)

For downloads, kms:Decrypt is essential. The error message indicates the service role's IAM policy is missing KMS permissions.

Question 40
A company uses CodePipeline with GitHub Enterprise as the source. They need to ensure the pipeline only processes code that has passed branch protection rules in GitHub. How can they verify this?
A. Configure CodeStar Connections to respect GitHub branch protection
B. Add a CodeBuild action that uses GitHub API to verify PR status
C. Use GitHub Actions as an intermediate step before CodePipeline
D. Configure a Lambda source action that validates before proceeding
Answer: A (or B for explicit validation)

Explanation:

CodeStar Connections integrates with GitHub and respects GitHub's authentication and authorization. However, CodePipeline source actions trigger on commits, not on GitHub's internal status.

For explicit verification: Option B - CodeBuild validation:
phases:
  pre_build:
    commands:
      - |
        # Check if commit was from a merged PR that passed branch protection
        COMMIT_SHA="${CODEBUILD_RESOLVED_SOURCE_VERSION}"
        PR_STATUS=$(curl -s -H "Authorization: token $GITHUB_TOKEN" \
          "https://api.github.com/repos/owner/repo/commits/$COMMIT_SHA/status")
        if [[ $(echo $PR_STATUS | jq -r '.state') != "success" ]]; then
          echo "Commit did not pass required checks"
          exit 1
        fi

This explicitly validates the commit's status before proceeding. GitHub's branch protection prevents merging without passing checks, but this adds defense-in-depth in the pipeline.

Question 41
A DevOps team is implementing a multi-region active-active deployment for their application. They want CodePipeline to deploy to both regions simultaneously and only proceed if both deployments succeed. How should this be configured?
A. Create separate pipelines per region triggered by the same source
B. Configure cross-region actions in parallel within the same stage
C. Use CloudFormation StackSets for multi-region deployment
D. Implement a Step Functions workflow for coordinated multi-region deployment
Answer: B

Explanation:

CodePipeline supports cross-region actions within the same stage. Configure:

Stage: Deploy
├── Action: Deploy-us-east-1 (region: us-east-1)
└── Action: Deploy-eu-west-1 (region: eu-west-1)

Both actions run in parallel. The stage only completes successfully if ALL actions succeed. If either region's deployment fails, the stage fails and the pipeline stops.

Requirements:
  • Artifact buckets in each region
  • IAM roles with cross-region permissions
  • KMS keys in each region (if using encryption)

This is the native CodePipeline approach for multi-region parallel deployments with coordinated success/failure.

Question 42
A company's CodeBuild project uses a buildspec.yml file stored in the source repository. Security team wants to ensure developers cannot modify the build commands. How should this be enforced?
A. Store buildspec.yml in a separate secured repository
B. Use CodeBuild project-level buildspec override
C. Insert buildspec commands directly in the CodeBuild project configuration
D. Create a CodeCommit approval requirement for buildspec.yml changes
Answer: C (or B for more flexibility)

Explanation:

To prevent developers from modifying build commands:

Option C - Inline buildspec in the project: Configure the CodeBuild project with the build commands defined in the project settings rather than using a buildspec.yml file from the source. Developers with source access cannot modify the build process.
Option B - Buildspec override: Specify a buildspec file path that points to a location outside the developer-accessible source, or use the inline buildspec feature.
Console configuration:
  • Build specification: "Insert build commands"
  • Enter commands directly in the project configuration

Developers with only source repository access cannot modify the build process. Only users with CodeBuild project modification permissions can change the buildspec.
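One hedged way to express the Option C setup in code, using boto3 to create the project with the buildspec embedded in the project definition instead of read from source (project, repository, and role names below are placeholders):
import boto3

codebuild = boto3.client("codebuild")

# Build commands live in the project definition, not in the repository, so
# only principals allowed to call UpdateProject can change them.
INLINE_BUILDSPEC = """
version: 0.2
phases:
  build:
    commands:
      - ./run-approved-build.sh
"""

codebuild.create_project(
    name="locked-down-build",  # placeholder name
    source={
        "type": "CODECOMMIT",
        "location": "https://git-codecommit.us-east-1.amazonaws.com/v1/repos/my-repo",
        "buildspec": INLINE_BUILDSPEC,  # overrides any buildspec.yml in source
    },
    artifacts={"type": "NO_ARTIFACTS"},
    environment={
        "type": "LINUX_CONTAINER",
        "image": "aws/codebuild/standard:7.0",
        "computeType": "BUILD_GENERAL1_SMALL",
    },
    serviceRole="arn:aws:iam::111111111111:role/codebuild-service-role",  # placeholder
)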

Question 43
An application uses CodeDeploy for EC2 deployments. During the BeforeInstall hook, a script checks if a dependent service is available. If the service is unavailable, the script should wait and retry before failing. The current script immediately fails if the service is unavailable. What change should be made?
A. Increase the hook timeout in appspec.yml
B. Modify the script to implement retry logic with exponential backoff
C. Configure CodeDeploy to automatically retry failed hooks
D. Add a wait condition in the CodeDeploy deployment configuration
Answer: B

Explanation:

The lifecycle hook script should implement retry logic:

#!/bin/bash
MAX_RETRIES=5
RETRY_INTERVAL=10

for i in $(seq 1 $MAX_RETRIES); do
  if check_service_available; then
    echo "Service is available"
    exit 0
  fi
  echo "Attempt $i: Service unavailable, waiting ${RETRY_INTERVAL}s..."
  sleep $RETRY_INTERVAL
  RETRY_INTERVAL=$((RETRY_INTERVAL * 2))  # Exponential backoff
done

echo "Service unavailable after $MAX_RETRIES attempts"
exit 1
Also increase timeout in appspec.yml if needed:
hooks:
  BeforeInstall:
    - location: scripts/check_service.sh
      timeout: 300  # 5 minutes

CodeDeploy doesn't have built-in hook retry (option C). The script must handle retries internally.

Question 44
A team is implementing canary deployments for their Lambda function using CodeDeploy. They want to shift 10% of traffic initially, wait 5 minutes, then shift another 10% every 2 minutes until complete. Which deployment configuration should they use?
A. Canary10Percent5Minutes
B. Linear10PercentEvery2Minutes with initial wait
C. Custom deployment configuration with specified intervals
D. AllAtOnce with CloudWatch-based traffic shifting
Answer: C

Explanation:

The described requirement doesn't match standard AWS deployment configurations:

  • Canary10Percent5Minutes: 10% initially, wait 5 minutes, then 100% (not gradual)
  • Linear10PercentEvery2Minutes: Shifts 10% every 2 minutes from the start (no initial wait)

For custom behavior, create a custom deployment configuration:

aws deploy create-deployment-config \
  --deployment-config-name Custom-Canary-Then-Linear \
  --compute-platform Lambda \
  --traffic-routing-config '{
    "type": "TimeBasedLinear",
    "timeBasedLinear": {
      "linearPercentage": 10,
      "linearInterval": 2
    }
  }'

However, the exact requirement (initial canary with wait, then linear) may require combining approaches or accepting the closest standard configuration. The exam may present this as "Custom deployment configuration."

Question 45
A CodePipeline retrieves source code from CodeCommit and needs to include the Git metadata (history, branches) for the build process. Currently, only the latest commit's files are available in CodeBuild. How can full Git metadata be accessed?
A. Configure the source action to include full clone
B. Clone the repository again in CodeBuild using Git commands
C. Use the "Full clone" option in the CodeCommit source action
D. Enable deep source cloning in CodePipeline settings
Answer: C

Explanation:

CodePipeline's CodeCommit source action supports two clone modes:

  1. Full clone: Includes complete Git history and metadata
  • Source action outputs Git repository with full history
  • Useful for build processes that need: git log, git describe, branch information
  2. Default (zip download): Only current commit files
  • Faster for simple builds
  • No Git metadata
Configuration: In the source action settings, enable "Full clone" output artifact format. CodeBuild then receives the full repository with .git directory, enabling commands like:
git describe --tags
git log --oneline -10
git branch -a
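A hedged sketch of the corresponding source action configuration (repository and artifact names are made up; OutputArtifactFormat set to CODEBUILD_CLONE_REF is the setting that produces the full clone, and the CodeBuild service role also needs codecommit:GitPull to perform the clone):
# Fragment of a CodePipeline stage definition (part of the pipeline passed
# to create_pipeline/update_pipeline); names are placeholders.
source_stage = {
    "name": "Source",
    "actions": [
        {
            "name": "CodeCommitSource",
            "actionTypeId": {
                "category": "Source",
                "owner": "AWS",
                "provider": "CodeCommit",
                "version": "1",
            },
            "configuration": {
                "RepositoryName": "my-repo",  # placeholder
                "BranchName": "main",
                "OutputArtifactFormat": "CODEBUILD_CLONE_REF",  # full clone instead of CODE_ZIP
            },
            "outputArtifacts": [{"name": "SourceOutput"}],
        }
    ],
}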
Question 46
An organization uses AWS Organizations with multiple accounts. They want to standardize CodePipeline creation across accounts using a template that includes all required stages and actions. How should this be implemented?
A. Create a CloudFormation StackSet that deploys the pipeline template
B. Use AWS Service Catalog with a pipeline product
C. Implement a custom CDK construct for pipeline creation
D. Create a CodeCatalyst blueprint for pipelines
Answer: B (or A, both are valid)

Explanation:

Both options work for standardization:

AWS Service Catalog (Option B):
  • Create a portfolio with pipeline product
  • Product defined using CloudFormation template
  • Share portfolio across accounts
  • Users launch standardized pipelines with customizable parameters
  • Governance through constraints and launch roles
  • Version management for pipeline templates
CloudFormation StackSets (Option A):
  • Deploy identical pipelines across multiple accounts
  • Central management of pipeline infrastructure
  • Automatic deployment to new accounts via OU targeting (a boto3 sketch follows after these lists)
Service Catalog is generally preferred for:
  • Self-service pipeline creation by teams
  • Parameterized customization within guardrails
  • Approval workflows for provisioning
StackSets is better for:
  • Identical infrastructure across accounts
  • Central IT-managed deployments
  • Compliance enforcement
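A minimal boto3 sketch of the StackSets route, assuming a service-managed stack set that auto-deploys the pipeline template to an organizational unit (stack set name, OU ID, and template file are placeholders):
import boto3

cfn = boto3.client("cloudformation")

# Create a service-managed stack set so new accounts in the target OU
# automatically receive the standardized pipeline.
with open("pipeline-template.yml") as f:  # placeholder template file
    template_body = f.read()

cfn.create_stack_set(
    StackSetName="standard-pipeline",
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_NAMED_IAM"],
    PermissionModel="SERVICE_MANAGED",
    AutoDeployment={"Enabled": True, "RetainStacksOnAccountRemoval": False},
)

cfn.create_stack_instances(
    StackSetName="standard-pipeline",
    DeploymentTargets={"OrganizationalUnitIds": ["ou-abcd-11111111"]},  # placeholder OU
    Regions=["us-east-1"],
)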
Question 47
A company's deployment process requires that production deployments only happen during a specific maintenance window (Saturday 2AM-6AM UTC). How should this be enforced in CodePipeline?
A. Use a Lambda action that checks the current time before the deploy action
B. Configure deployment windows in CodeDeploy deployment group settings
C. Add an approval action with SNS notification that's only approved during the window
D. Use EventBridge Scheduler to enable/disable the production deploy stage
Answer: A

Explanation:

Option A - Lambda validation:
import datetime
import boto3

codepipeline = boto3.client('codepipeline')

def lambda_handler(event, context):
    now = datetime.datetime.utcnow()
    
    # Check if Saturday between 2AM and 6AM UTC
    if now.weekday() == 5 and 2 <= now.hour < 6:
        # Return success to CodePipeline
        codepipeline.put_job_success_result(jobId=event['CodePipeline.job']['id'])
    else:
        # Return failure - blocks deployment
        codepipeline.put_job_failure_result(
            jobId=event['CodePipeline.job']['id'],
            failureDetails={
                'message': 'Deployments only allowed Saturday 2AM-6AM UTC',
                'type': 'JobFailed'
            }
        )
Option B - CodeDeploy deployment windows: Not a native feature; deployment group settings do not include maintenance windows, so this would have to be built with external automation.

The Lambda approach provides the most flexibility and clear enforcement.

Question 48
A DevOps team is implementing a GitOps workflow where the Git repository is the source of truth for all deployments. When infrastructure or application changes are pushed to the repository, deployments should automatically sync to match the repository state. Which AWS service combination supports this?
A. CodeCommit with CloudFormation deployment action in CodePipeline
B. CodeCommit with AWS App Runner auto-deployment
C. GitHub with AWS Proton
D. CodeCommit with ArgoCD on EKS
Answer: D (for true GitOps), or A/B depending on context

Explanation:

True GitOps requires:
  • Git as single source of truth
  • Declarative infrastructure/application definitions
  • Automated agents that sync actual state to desired state
  • Pull-based deployment model
ArgoCD on EKS (Option D):
  • Continuously monitors Git repository
  • Automatically syncs Kubernetes cluster state to match repository
  • Reconciliation loop maintains desired state
  • True GitOps implementation
AWS-Native Options:
  • CodePipeline (Option A): Push-based CI/CD, not pure GitOps but common
  • App Runner (Option B): Auto-deploys on repository changes (for containers)
  • Proton (Option C): Templates for infrastructure/applications

For exam purposes, understand that GitOps is a methodology. AWS provides building blocks, but tools like ArgoCD/Flux provide pure GitOps implementation on EKS.

Question 49
A CodePipeline execution failed at the deploy stage. The team fixed the issue and wants to restart the pipeline from the failed stage rather than from the beginning. How can this be accomplished?
A. Use the "Retry failed actions" feature in the console
B. Stop and restart the pipeline execution
C. Create a new pipeline execution starting at the deploy stage
D. Manually trigger the deploy stage using AWS CLI
Answer: A

Explanation:

CodePipeline provides the ability to retry failed stages:

Console:
  1. Navigate to the failed pipeline execution
  2. Click "Retry" on the failed stage
  3. Pipeline resumes from that stage using existing artifacts
CLI:
aws codepipeline retry-stage-execution \
  --pipeline-name MyPipeline \
  --stage-name Deploy \
  --pipeline-execution-id abc123 \
  --retry-mode FAILED_ACTIONS

This uses artifacts from the original execution, avoiding the need to rebuild. Note that retry is only available for the most recent execution of a stage and must be initiated before a new pipeline execution processes that stage.

Question 50
A company uses CodeArtifact for npm package management. Developers are experiencing slow installs because packages are being fetched from the upstream public npm registry for every build. How can this be optimized?
A. Configure CodeArtifact to cache packages from upstream repositories
B. Increase the package retention period
C. Enable external connection to npmjs and let CodeArtifact cache packages automatically
D. Pre-populate the CodeArtifact repository with all required packages
Answer: C

Explanation:

CodeArtifact with external connections:

  1. External connection: Links CodeArtifact repository to public registries (npm, PyPI, Maven Central)
  2. Automatic caching: When a package is requested:
  • If cached in CodeArtifact → served immediately
  • If not cached → fetched from upstream, cached, then served
  • Subsequent requests served from cache
Configuration:
aws codeartifact associate-external-connection \
  --domain my-domain \
  --repository my-repo \
  --external-connection public:npmjs
After initial fetch, packages are cached and served from CodeArtifact, providing:
  • Faster installs (closer/faster than public internet)
  • Availability if upstream is down
  • Security scanning of cached packages
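On the consumer side, a hedged boto3 sketch of how a build step can resolve the repository's npm endpoint and an auth token before pointing npm at CodeArtifact (domain and repository names are placeholders; the aws codeartifact login --tool npm CLI command is the more common shortcut):
import boto3

codeartifact = boto3.client("codeartifact")

# Resolve the npm endpoint for the repository (which fronts the npmjs
# external connection) and a short-lived auth token for it.
endpoint = codeartifact.get_repository_endpoint(
    domain="my-domain",      # placeholder
    repository="my-repo",    # placeholder
    format="npm",
)["repositoryEndpoint"]

token = codeartifact.get_authorization_token(
    domain="my-domain",
)["authorizationToken"]

print(f"npm registry: {endpoint}")
# npm would then be configured with this registry URL and token (e.g. via
# .npmrc) so installs hit the CodeArtifact cache first.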
Question 51
A company is implementing blue/green deployments for their EC2 application using CodeDeploy. They want to keep the original (blue) environment running for 24 hours after deployment for potential rollback. How should this be configured?
A. Set the termination wait time to 24 hours in the deployment group
B. Configure a manual termination action in the deployment
C. Use a CloudWatch Events rule to terminate instances after 24 hours
D. Set the BlueGreenDeploymentConfiguration termination wait time
Answer: A or D (same setting)

Explanation:

In CodeDeploy blue/green deployments, the terminateBlueInstancesOnDeploymentSuccess setting controls what happens to original instances:

Configuration options:
  1. Terminate after wait period: Keeps blue instances for specified duration
  2. Terminate immediately: Removes blue instances right after traffic shift
  3. Keep alive: Never automatically terminates blue instances
CLI/API configuration:
{
  "blueGreenDeploymentConfiguration": {
    "terminateBlueInstancesOnDeploymentSuccess": {
      "action": "TERMINATE",
      "terminationWaitTimeInMinutes": 1440  // 24 hours
    }
  }
}

During this window, rollback to blue instances is near-instant (traffic shift). After the window, blue instances are terminated.

Question 52
A CodeDeploy deployment to EC2 instances is failing with "The overall deployment failed because too many individual instances failed deployment." The deployment configuration is HalfAtATime. Investigation shows that the first batch of instances failed during the AfterInstall hook. What should be checked first?
A. The AfterInstall script exit code and script content
B. The instance IAM role permissions
C. The CodeDeploy agent version
D. The deployment group target instances
Answer: A

Explanation:

When lifecycle hook scripts fail:

  1. Non-zero exit code = hook failure = instance deployment failure
  2. Scripts must explicitly exit with status 0 for success
Debugging steps:
  1. Check the script on a failed instance:
/opt/codedeploy-agent/deployment-root/{deployment-group}/{deployment-id}/logs/scripts.log
  2. Common issues:
  • Script doesn't have execute permissions
  • Script has syntax errors
  • Dependencies not installed
  • Script doesn't handle errors properly
  • Missing shebang (#!/bin/bash)
  3. CodeDeploy agent log:
/var/log/aws/codedeploy-agent/codedeploy-agent.log

The deployment group configuration and IAM roles are less likely to cause AfterInstall script failures (those would cause earlier failures).

Question 53
An application uses CodeDeploy with an in-place deployment on EC2 instances. The deployment should automatically roll back if the CPU utilization exceeds 80% after deployment. How should this be configured?
A. Create a CloudWatch alarm for CPU and configure it as a rollback trigger
B. Add a ValidateService hook that checks CPU utilization
C. Configure auto-rollback based on deployment health metrics
D. Use CodeDeploy automatic rollback on failed health checks
Answer: A

Explanation:

CodeDeploy supports CloudWatch alarms as automatic rollback triggers:

Configuration:
  1. Create CloudWatch alarm (a creation sketch follows at the end of this explanation):
  • Metric: CPUUtilization
  • Threshold: 80%
  • Period and evaluation settings appropriate for post-deployment monitoring
  2. Configure deployment group:
{
  "autoRollbackConfiguration": {
    "enabled": true,
    "events": ["DEPLOYMENT_STOP_ON_ALARM"]
  },
  "alarmConfiguration": {
    "alarms": [
      {"name": "HighCPU-Alarm"}
    ],
    "enabled": true
  }
}

If the alarm enters ALARM state during or shortly after deployment, CodeDeploy automatically rolls back.

Note: Alarms are evaluated during deployment and for a period after. The monitoring window depends on alarm configuration and deployment duration.
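The alarm referenced in step 1 could be created roughly like this (the alarm name matches the alarmConfiguration above; the Auto Scaling group dimension is an illustrative assumption):
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="HighCPU-Alarm",  # must match the alarmConfiguration above
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[
        {"Name": "AutoScalingGroupName", "Value": "my-app-asg"}  # placeholder ASG
    ],
    Statistic="Average",
    Period=60,
    EvaluationPeriods=3,  # roughly 3 minutes above threshold before ALARM
    Threshold=80,
    ComparisonOperator="GreaterThanThreshold",
)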
Question 54
A company has 500 EC2 instances across multiple Auto Scaling groups that need to receive the same application deployment. How should CodeDeploy be configured for efficient deployment?
A. Create separate deployment groups for each Auto Scaling group
B. Create a single deployment group using EC2 tag-based targeting
C. Create a deployment group that targets multiple Auto Scaling groups
D. Use CodeDeploy deployment configurations with high parallelism
Answer: C (or B, depending on the scenario)

Explanation:

CodeDeploy deployment groups can target multiple Auto Scaling groups:

Option C - Multiple ASG targeting: A single deployment group can include multiple Auto Scaling groups (a boto3 sketch follows this list). This is ideal when:
  • All ASGs should receive the same deployment
  • You want unified deployment management
  • You need consistent deployment configuration across ASGs
Option B - Tag-based targeting: Useful when instances aren't in ASGs or you need flexible grouping based on tags (e.g., Environment: Production).
Deployment efficiency: Configure the deployment configuration for parallelism:
  • AllAtOnce for fastest deployment (higher risk)
  • Custom configuration specifying percentage or fixed number
For 500 instances, consider:
  • Batch sizes appropriate for risk tolerance
  • Monitoring during deployment
  • Rollback triggers configured
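A hedged sketch of the Option C deployment group referenced above (application, role, and ASG names are placeholders):
import boto3

codedeploy = boto3.client("codedeploy")

codedeploy.create_deployment_group(
    applicationName="my-app",  # placeholder
    deploymentGroupName="all-production-fleets",
    serviceRoleArn="arn:aws:iam::111111111111:role/codedeploy-service-role",  # placeholder
    # One deployment group covering every ASG that should receive the release
    autoScalingGroups=["web-asg-1", "web-asg-2", "web-asg-3"],
    deploymentConfigName="CodeDeployDefault.HalfAtATime",
)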
Question 55
A DevOps engineer is troubleshooting a CodeDeploy deployment that succeeded but the application isn't working correctly. The deployment logs show all lifecycle hooks completed successfully. What should be investigated next?
A. The application health check configuration
B. The ValidateService hook implementation
C. The ApplicationStart hook script
D. The deployment configuration minimum healthy hosts setting
Answer: B

Explanation:

If deployment succeeded but application isn't working:

ValidateService hook analysis (Option B): The ValidateService hook is specifically designed to verify the deployment worked correctly. Issues:
  • Hook might not be implemented (no validation)
  • Hook validation might be insufficient
  • Hook might not exit with failure on actual problems
What ValidateService should do:
#!/bin/bash
# Check if application is responding
RESPONSE=$(curl -s -o /dev/null -w "%{http_code}" http://localhost:8080/health)
if [ "$RESPONSE" != "200" ]; then
  echo "Health check failed"
  exit 1
fi
echo "Application healthy"
exit 0

If no ValidateService hook exists, the deployment succeeds based on file deployment and earlier hooks, not actual application functionality.

Also check:
  • ApplicationStart hook - verify it actually started the application
  • Application logs for startup errors
Question 56
A Lambda function is deployed using CodeDeploy with the Canary10Percent5Minutes configuration. During the canary period, CloudWatch detects an increase in error rate. What happens automatically?
A. Traffic shifts back to the original version immediately
B. The deployment pauses and waits for manual intervention
C. CodeDeploy triggers an automatic rollback based on configured alarms
D. Nothing happens unless alarms are explicitly configured
Answer: D (but C if alarms are configured)

Explanation:

CodeDeploy doesn't automatically monitor CloudWatch metrics for rollback. You must explicitly configure:

  1. CloudWatch alarms that trigger on error conditions:
{
  "alarmConfiguration": {
    "enabled": true,
    "alarms": [{"name": "Lambda-Error-Rate-Alarm"}]
  }
}
  2. Automatic rollback on alarm:
{
  "autoRollbackConfiguration": {
    "enabled": true,
    "events": ["DEPLOYMENT_STOP_ON_ALARM"]
  }
}

Without this configuration, the deployment continues regardless of errors. The canary period gives you TIME to monitor, but doesn't provide automatic metric-based rollback unless configured.

This is a common exam topic: CodeDeploy alarm-based rollback requires explicit configuration.

Question 57
A company uses CodeDeploy for on-premises server deployments. They need to register new servers with CodeDeploy automatically when they're provisioned by their configuration management tool. What approach should they use?
A. Use CodeDeploy API calls from the configuration management tool
B. Configure SSM agent to automatically register with CodeDeploy
C. Use an on-premises instance registration script with IAM user credentials
D. Enable automatic registration in the CodeDeploy deployment group
Answer: A or C

Explanation:

For on-premises instance registration with CodeDeploy:

Automated registration process:
  1. Prerequisite: Create an IAM user for the on-premises instance
  • Generate access keys
  • Attach policy with CodeDeploy permissions
  2. Registration (Option C or A):
# Using CLI/API (can be scripted in config management)
aws deploy register-on-premises-instance \
  --instance-name server-001 \
  --iam-user-arn arn:aws:iam::account:user/codedeploy-user

# Configure instance with credentials
aws deploy install \
  --config-file /etc/codedeploy-agent/conf/codedeploy.onpremises.yml
  3. Add to deployment group:
aws deploy add-tags-to-on-premises-instances \
  --instance-names server-001 \
  --tags Key=Environment,Value=Production

Configuration management tools (Ansible, Puppet, Chef) can execute these commands during server provisioning.

Question 58
An ECS service uses CodeDeploy for blue/green deployments. After a successful deployment, the team notices that the old task definition is still running tasks alongside the new one. What is the likely cause?
A. The deployment is still in progress (traffic shifting)
B. The termination wait time hasn't elapsed
C. ECS service auto-scaling launched tasks from old task definition
D. The deployment succeeded but traffic shift failed
Answer: B

Explanation:

In ECS blue/green deployments:

Deployment lifecycle:
  1. New task set created with new task definition
  2. Traffic gradually shifted to new task set
  3. After successful traffic shift, old task set waits for termination
  4. After termination wait time, old tasks are terminated
Termination wait time:
{
  "blueGreenDeploymentConfiguration": {
    "terminationWaitTimeInMinutes": 60
  }
}
During this period:
  • Old tasks remain running (for potential rollback)
  • New tasks serve production traffic
  • Both task sets exist simultaneously

After wait time expires, old task set is terminated. This is expected behavior, not an error.

Question 59
A company's CodeDeploy deployment is stuck at the Install lifecycle event. The CodeDeploy agent log shows "The specified key does not exist" when attempting to download the deployment bundle. What should be checked?
A. The S3 bucket policy allows access from the EC2 instance role
B. The deployment bundle was correctly uploaded to S3
C. The EC2 instance has internet access to reach S3
D. All of the above
Answer: D

Explanation:

"The specified key does not exist" error indicates S3 access issues:

Check all of the following:
  1. Bundle exists in S3:
  • Verify the artifact key path is correct
  • Check if the revision was successfully uploaded
  • Verify the S3 bucket and key in deployment configuration
  2. IAM permissions:
  • EC2 instance role needs s3:GetObject on the artifact
  • If cross-account, both bucket policy and IAM role needed
  • If KMS encrypted, need kms:Decrypt
  3. Network access:
  • Instance can reach S3 (internet gateway, NAT, or VPC endpoint)
  • Security groups allow outbound HTTPS
  • NACLs allow S3 traffic
  4. S3 bucket configuration:
  • Bucket policy doesn't explicitly deny
  • Bucket isn't in a different region without proper configuration

The error message typically means the object truly doesn't exist at that key, but access issues can produce similar errors.
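A quick hedged check for the first two items from the instance itself (bucket and key are placeholders taken from the deployment revision's S3 location):
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

try:
    # Succeeds only if the object exists AND the instance role can read it
    s3.head_object(Bucket="my-deploy-artifacts", Key="releases/app-1.2.3.zip")
    print("Bundle found and readable")
except ClientError as err:
    code = err.response["Error"]["Code"]
    if code == "404":
        print("Object really is missing at that key")
    elif code == "403":
        print("Object may exist, but the instance role cannot read it")
    else:
        raise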

Question 60
A development team wants to test CodeDeploy deployments locally before pushing to AWS. What tool or approach enables local deployment testing?
A. CodeDeploy Local Deployments feature
B. LocalStack with CodeDeploy support
C. Docker containers simulating EC2 with CodeDeploy agent
D. The codedeploy-local command-line tool
Answer: D

Explanation:

AWS provides codedeploy-local CLI for local testing:

Installation: codedeploy-local is bundled with the CodeDeploy agent, so it is available wherever the agent is installed (typically /opt/codedeploy-agent/bin/codedeploy-local on Linux); the agent source is also available on GitHub (aws/aws-codedeploy-agent) for local setups.
Usage:
codedeploy-local \
  --bundle-location /path/to/application \
  --type directory \
  --deployment-group myDeploymentGroup
Benefits:
  • Tests appspec.yml syntax
  • Executes lifecycle hooks locally
  • Validates deployment bundle structure
  • Debugging without AWS deployment costs/time
Limitations:
  • Simulates deployment process
  • Not all features available locally
  • Network-dependent features may not work

This is useful for rapid iteration on appspec.yml and deployment scripts before actual AWS deployments.

Question 61
An application running on EC2 with CodeDeploy needs to maintain at least 50% capacity during deployments. The deployment should proceed as fast as possible while meeting this requirement. Which deployment configuration should be used?
A. HalfAtATime
B. OneAtATime
C. AllAtOnce with minimum healthy hosts at 50%
D. Custom configuration with minimum healthy percentage of 50%
Answer: A (or D for more control)

Explanation:

HalfAtATime (Option A):
  • Deploys to 50% of instances at a time
  • Maintains 50% healthy capacity throughout
  • Built-in AWS deployment configuration
Custom configuration (Option D):
aws deploy create-deployment-config \
  --deployment-config-name Fast50Percent \
  --minimum-healthy-hosts type=FLEET_PERCENT,value=50

This allows customization while maintaining 50% capacity.

Comparison:
  • HalfAtATime: Exactly 50% deploys at once
  • Custom: Can specify different parallelism while maintaining 50% minimum healthy

For fastest deployment with 50% capacity, HalfAtATime deploys half the fleet simultaneously, which is the maximum parallelism possible while maintaining 50% healthy hosts.

Question 62
A CodeDeploy deployment to an Auto Scaling group includes a BeforeBlockTraffic hook that deregisters instances from the load balancer before stopping the application. However, users are still seeing connection errors. What is the likely issue?
A. The load balancer connection draining timeout is too short
B. The deregistration action isn't waiting for in-flight requests
C. The hook should be AfterBlockTraffic instead
D. The load balancer needs time to propagate deregistration
Answer: A or B (related issues)

Explanation:

When deregistering instances from load balancers:

Connection draining (deregistration delay):
  • ALB/NLB settings specify how long to wait for in-flight requests
  • Default: 300 seconds
  • If too short, existing requests are dropped
BeforeBlockTraffic hook should:
  1. Deregister from target group
  2. Wait for connection draining to complete
  3. Then proceed with deployment
Correct implementation:
#!/bin/bash
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)

# Deregister from target group
aws elbv2 deregister-targets \
  --target-group-arn $TARGET_GROUP_ARN \
  --targets Id=$INSTANCE_ID

# Wait for draining (check deregistration state)
while true; do
  STATE=$(aws elbv2 describe-target-health \
    --target-group-arn $TARGET_GROUP_ARN \
    --targets Id=$INSTANCE_ID \
    --query 'TargetHealthDescriptions[0].TargetHealth.State' \
    --output text)
  if [ "$STATE" = "draining" ]; then
    # Still finishing in-flight requests
    sleep 10
  else
    # "unused" (or no longer registered) means draining is complete
    break
  fi
done
Question 63
A company wants to implement zero-downtime deployments for their EC2 application but doesn't want the overhead of blue/green deployments. They have a fleet of 10 instances. Which approach achieves this with minimal infrastructure?
A. In-place deployment with Rolling update configuration
B. Blue/green deployment with reuse of existing instances
C. Immutable deployment creating temporary instances
D. In-place deployment with OneAtATime configuration
Answer: D

Explanation:

For zero-downtime without blue/green overhead:

OneAtATime configuration (Option D):
  • Deploys to one instance at a time
  • 9 of 10 instances remain healthy throughout
  • Longest deployment time but zero downtime
  • No additional infrastructure required
Rolling update (Option A):
  • Similar concept but may deploy to multiple instances
  • Can be configured similar to OneAtATime
Trade-offs:
  • OneAtATime: Safest, slowest
  • HalfAtATime: Faster, 50% capacity during deployment
  • AllAtOnce: Fastest, but causes downtime

For 10 instances with zero-downtime requirement and no infrastructure overhead, OneAtATime sequentially updates each instance while maintaining 90% capacity.

Question 64
A DevOps engineer is implementing blue/green deployments for an ECS service. The service uses an Application Load Balancer. What must be configured before CodeDeploy can manage the deployments?
A. Two target groups associated with the ALB
B. Two separate ECS services for blue and green
C. CodeDeploy deployment controller on the ECS service
D. Both A and C
Answer: D

Explanation:

ECS blue/green with CodeDeploy requires:

1. Two target groups (Option A):
  • Production traffic target group
  • Test traffic target group
  • Both associated with the ALB (different listener rules or ports)
2. ECS service with CodeDeploy deployment controller (Option C):
{
  "serviceName": "my-service",
  "deploymentController": {
    "type": "CODE_DEPLOY"
  }
}
3. CodeDeploy application and deployment group:
  • Application type: ECS
  • Deployment group linked to ECS service, cluster, target groups
Additional requirements:
  • ALB listener(s) configured for traffic routing
  • Task definition for the service
  • appspec.yml defining the deployment

Without both the target groups and the correct deployment controller type, CodeDeploy cannot manage ECS blue/green deployments.

Question 65
A Lambda function deployment with CodeDeploy is using a Linear10PercentEvery1Minute configuration. The function processes messages from an SQS queue. How does CodeDeploy handle the traffic shifting for this type of invocation?
A. CodeDeploy cannot control traffic for SQS-triggered Lambda functions
B. Traffic shifting applies to new function invocations from SQS
C. SQS continues to invoke the original version until fully shifted
D. You must manually update the SQS event source mapping
Answer: B

Explanation:

For Lambda functions with aliases:

How CodeDeploy traffic shifting works:
  • Lambda alias points to weighted versions
  • During deployment: alias points to original version + new version with weights
  • Example at 10% shift: 90% invocations → v1, 10% → v2
For SQS event sources:
  • Event source mapping triggers the alias
  • Each invocation goes to version based on alias weights
  • Individual SQS messages may invoke different versions
  • This is request-level shifting, not message-level
Important considerations:
  • Eventual consistency in Lambda's traffic shifting
  • Some SQS messages processed by old version, some by new
  • Both versions should be able to handle messages correctly
  • Consider idempotency in message processing

Traffic shifting works for all Lambda invocation types (API Gateway, SQS, EventBridge, etc.) because it operates at the alias level.
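The alias weighting that CodeDeploy manages can be seen (or reproduced manually) with the Lambda API; this hedged sketch shows what a 10% shift to version 2 looks like (function and alias names are placeholders, and in practice CodeDeploy drives this rather than you calling it directly):
import boto3

lambda_client = boto3.client("lambda")

# Alias "live" keeps 90% of invocations on version 1 and routes 10% to version 2.
lambda_client.update_alias(
    FunctionName="my-function",  # placeholder
    Name="live",
    FunctionVersion="1",
    RoutingConfig={"AdditionalVersionWeights": {"2": 0.1}},
)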

Question 66
A company wants to implement feature flags that control which code paths are executed in their Lambda functions deployed via CodeDeploy. The feature flags should be changeable without redeploying the function. Which AWS service should be used?
A. Lambda environment variables updated via CodeDeploy
B. Systems Manager Parameter Store with caching in Lambda
C. AWS AppConfig with feature flag configuration profile
D. DynamoDB table for feature flag storage
Answer: C

Explanation:

AWS AppConfig feature flags provide:

Benefits over alternatives:
  1. Purpose-built for feature flags:
  • Boolean, number, or JSON flag types
  • Percentage-based rollouts
  • User segment targeting
  2. Safe deployments:
  • Validation before deployment
  • Gradual rollout strategies
  • Automatic rollback on errors
  3. Lambda integration:
  • AppConfig Lambda extension for efficient caching
  • Minimal latency impact
  • Automatic refresh of configuration
Implementation:
# Sketch using the AWS AppConfig Lambda extension, which serves the cached
# configuration over a local HTTP endpoint (port 2772 by default).
# Application/environment/profile names are illustrative.
import json
import urllib.request

APPCONFIG_URL = (
    "http://localhost:2772/applications/my-app"
    "/environments/production/configurations/feature-flags"
)

def handler(event, context):
    # The extension handles caching and periodic refresh of the configuration
    with urllib.request.urlopen(APPCONFIG_URL) as response:
        flags = json.loads(response.read())

    if flags.get("new-feature-enabled", {}).get("enabled"):
        return new_feature_code()
    return original_code()

AppConfig is preferred over Parameter Store for feature flags due to deployment strategies and validation capabilities.

Question 67
A CodeDeploy deployment group targets EC2 instances using the tag "Environment: Production". A new instance was launched with this tag but didn't receive the current deployment. What is the most likely reason?
A. The instance was launched after the deployment started
B. The CodeDeploy agent isn't installed on the instance
C. The instance isn't in the same VPC as other instances
D. The deployment group configuration needs to be refreshed
Answer: B

Explanation:

When instances don't receive deployments:

Most common causes:
  1. No CodeDeploy agent (Option B):
  • Agent must be installed and running
  • Check: sudo service codedeploy-agent status
  • Agent communicates with CodeDeploy service
  2. Instance launched after deployment:
  • True, but for Auto Scaling groups, lifecycle hooks can trigger deployment
  • For standalone EC2, needs separate mechanism
  3. AMI without agent:
  • If using custom AMI, agent must be included
  • Or use user data to install agent on launch
Resolution:
  1. Install CodeDeploy agent
  2. For future instances, include agent in AMI or user data
  3. Use Auto Scaling lifecycle hooks for automatic deployment

For instances matching tag criteria but not receiving deployments, verify agent status first.

Question 68
An application requires zero-downtime deployment to a single EC2 instance (no Auto Scaling group). Blue/green deployment isn't possible. What deployment approach should be used?
A. In-place deployment with careful application restart
B. Create a temporary instance, deploy, then swap Elastic IP
C. Use CodeDeploy rolling deployment configuration
D. Implement custom deployment with Route 53 weighted routing
Answer: B or D

Explanation:

For single-instance zero-downtime deployment:

Challenge: In-place deployment to one instance inherently has downtime during application restart.
Solutions:
Option B - Elastic IP swap:
  1. Launch new instance
  2. Deploy to new instance
  3. Test new instance
  4. Swap Elastic IP from old to new
  5. Terminate old instance

This provides near-zero-downtime (seconds for IP reassignment).

Option D - Route 53 weighted routing:
  1. Use weighted routing to current instance
  2. Launch new instance with deployment
  3. Add new instance to Route 53 with weight
  4. Gradually shift traffic
  5. Remove old instance
Limitations:
  • Requires DNS client cache considerations
  • More complex setup
  • Longer transition period

For single-instance scenarios, Option B with Elastic IP is cleaner and faster for cutover.
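The cutover in step 4 of Option B is a single API call once the new instance is validated; a hedged sketch (allocation and instance IDs are placeholders):
import boto3

ec2 = boto3.client("ec2")

# Re-associate the Elastic IP with the freshly deployed instance.
# AllowReassociation lets the address move off the old instance in one call.
ec2.associate_address(
    AllocationId="eipalloc-0123456789abcdef0",  # placeholder EIP allocation
    InstanceId="i-0fedcba9876543210",           # placeholder new instance
    AllowReassociation=True,
)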

Question 69
A company uses CodeDeploy for Lambda deployments. They want the AfterAllowTraffic hook to run automated tests against the deployed function. If tests fail, the deployment should roll back. How should the AfterAllowTraffic hook be implemented?
A. Lambda function that calls the deployed function and validates responses
B. Lambda function that triggers Step Functions for complex testing
C. CodeBuild project that runs the test suite
D. Either A or B, returning success/failure to CodeDeploy
Answer: D (A or B, both with proper CodeDeploy signaling)

Explanation:

AfterAllowTraffic hook for Lambda:

Implementation requirements:
  1. Hook is a Lambda function
  2. Receives deployment lifecycle event
  3. Must call CodeDeploy to report status
import boto3

codedeploy = boto3.client('codedeploy')

def handler(event, context):
    deployment_id = event['DeploymentId']
    lifecycle_event_hook_execution_id = event['LifecycleEventHookExecutionId']
    
    try:
        # Run tests against deployed function
        test_result = run_integration_tests()
        
        if test_result['passed']:
            status = 'Succeeded'
        else:
            status = 'Failed'
            
    except Exception as e:
        status = 'Failed'
    
    # Report back to CodeDeploy
    codedeploy.put_lifecycle_event_hook_execution_status(
        deploymentId=deployment_id,
        lifecycleEventHookExecutionId=lifecycle_event_hook_execution_id,
        status=status
    )

If status is 'Failed', CodeDeploy automatically rolls back the deployment.

Question 70
A team is troubleshooting slow CodeDeploy deployments. The deployment to 50 instances takes over an hour. The deployment configuration is OneAtATime. What change would reduce deployment time while maintaining safety?
A. Change to HalfAtATime configuration
B. Change to AllAtOnce configuration
C. Create a custom configuration with 10% minimum healthy hosts
D. Increase the deployment timeout
Answer: A or C

Explanation:

Deployment speed analysis:

Current (OneAtATime):
  • 50 instances × (deployment time per instance)
  • If each takes ~1 minute, total = ~50 minutes
  • Safest but slowest
HalfAtATime (Option A):
  • 25 instances deploy simultaneously
  • Then remaining 25
  • Roughly 2× faster than OneAtATime
  • Maintains 50% capacity
Custom with 10% minimum healthy (Option C):
  • 90% of instances can deploy simultaneously
  • Only 5 instances must remain healthy
  • Much faster, but higher risk if deployment fails
  • 45 instances deploy at once, then remaining 5
AllAtOnce (Option B):
  • Fastest but causes complete outage if issues
  • No capacity during deployment
Recommendation for exam: Balance speed vs. risk. HalfAtATime is a common safe choice. Custom configurations allow fine-tuning for specific requirements.
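The Option C custom configuration described above could be created roughly like this (the configuration name is a placeholder):
import boto3

codedeploy = boto3.client("codedeploy")

# 10% minimum healthy hosts: up to 90% of the fleet (45 of 50 instances)
# can be taken offline and deployed to simultaneously.
codedeploy.create_deployment_config(
    deploymentConfigName="Custom-Minimum10PercentHealthy",  # placeholder
    computePlatform="Server",
    minimumHealthyHosts={"type": "FLEET_PERCENT", "value": 10},
)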
Question 71
A CodeDeploy deployment for an Auto Scaling group uses blue/green deployment. The deployment is configured to copy Auto Scaling group settings. After deployment, the new Auto Scaling group has different instance types than the original. What happened?
A. The ASG copy process doesn't copy instance type settings
B. The launch template/configuration was modified during deployment
C. CodeDeploy uses default instance types for new ASG
D. The deployment group override settings changed the instance type
Answer: B (most likely) or A

Explanation:

When CodeDeploy creates replacement Auto Scaling group for blue/green:

What gets copied:
  • ASG configuration (min, max, desired, health checks)
  • Tags
  • Load balancer/target group associations
What might differ:
  • Launch template version (if "Latest" was specified)
  • Launch configuration (if modified between deployment creation and execution)
Likely scenario (Option B): If the original ASG uses the "Latest" launch template version, and someone updated the template before the deployment completed, the new ASG gets the updated configuration.
Resolution:
  • Use specific launch template versions, not "Latest"
  • Or use deployment group settings to explicitly specify launch template version

For exam: Understand that blue/green copies ASG at deployment time, and "Latest" version references can cause unexpected changes.

Question 72
An organization wants all CodeDeploy deployments to production to require approval from a specific IAM user before traffic is shifted. How should this be implemented?
A. Add a manual approval in CodeDeploy deployment configuration
B. Add a manual approval action in CodePipeline before CodeDeploy action
C. Configure IAM policies requiring the user to start the deployment
D. Use AfterInstall hook to wait for external approval
Answer: B

Explanation:

CodeDeploy itself doesn't have built-in approval workflows. Implement approvals in CodePipeline:

Configuration:
{
  "stageName": "Production",
  "actions": [
    {
      "name": "ManualApproval",
      "actionTypeId": {
        "category": "Approval",
        "owner": "AWS",
        "provider": "Manual"
      },
      "configuration": {
        "NotificationArn": "arn:aws:sns:...",
        "CustomData": "Please review deployment before approving"
      }
    },
    {
      "name": "DeployToProduction",
      "actionTypeId": {
        "category": "Deploy",
        "provider": "CodeDeploy"
      }
    }
  ]
}
IAM permissions for approval:
{
  "Effect": "Allow",
  "Action": "codepipeline:PutApprovalResult",
  "Resource": "arn:aws:codepipeline:*:*:pipeline-name/Production/*"
}

Only specified users can approve. All approval actions are logged in CloudTrail.

Question 73
A company's ECS blue/green deployment with CodeDeploy is failing. The error indicates "The ECS service cannot be updated because the cluster is in draining state." What is the issue?
A. The ECS cluster is being deleted
B. Container instances are being drained for maintenance
C. The cluster capacity is insufficient for blue/green deployment
D. The cluster doesn't support CodeDeploy
Answer: A or B

Explanation:

ECS cluster "draining" state:

Causes:
  1. Cluster deletion initiated - Cluster is being deleted
  2. Container instance draining - Instances marked for removal
  3. Capacity provider changes - Underlying capacity being modified
Blue/green requirement:
  • Cluster must be able to run both original and replacement task sets
  • Draining state prevents new task placement
Resolution:
  1. Wait for drain operation to complete
  2. If cluster being deleted, cancel or use different cluster
  3. Verify sufficient capacity for both task sets
  4. Check capacity provider status
For exam: Understand ECS states and their impact on CodeDeploy operations. Blue/green needs double capacity during deployment.
Question 74
A development team uses CodeDeploy deployment groups with both EC2 instances and on-premises servers. They want to deploy to EC2 instances first, validate, then deploy to on-premises servers. How should this be configured?
A. Create two deployment groups, one for EC2 and one for on-premises
B. Use deployment group tags to sequence the deployments
C. Configure deployment waves in the deployment configuration
D. Create two separate deployments with a pipeline to sequence them
Answer: D (or A with pipeline orchestration)

Explanation:

CodeDeploy doesn't have built-in deployment sequencing within a deployment group.

Solution: Multiple deployment groups with orchestration
  1. Create separate deployment groups:
  • DeploymentGroup-EC2 targeting EC2 instances
  • DeploymentGroup-OnPrem targeting on-premises servers
  2. Orchestrate with CodePipeline:
Stage: Deploy-EC2
  Action: Deploy to DeploymentGroup-EC2

Stage: Validate
  Action: Run validation tests/approval

Stage: Deploy-OnPrem
  Action: Deploy to DeploymentGroup-OnPrem
This approach:
  • Allows validation between deployments
  • Provides clear deployment sequencing
  • Enables rollback at each stage

Alternative: Manual deployment sequencing via CLI/SDK.

Question 75
A company uses CodeDeploy with EC2 instances. They need to ensure that the application gracefully handles the shutdown sequence before CodeDeploy stops it. Which lifecycle hook should contain this logic?
A. BeforeInstall
B. ApplicationStop
C. BeforeBlockTraffic
D. BeforeInstall or ApplicationStop, depending on deployment type
Answer: B (ApplicationStop)

Explanation:

For graceful shutdown during CodeDeploy:

ApplicationStop hook:
  • Runs before new revision is installed
  • Purpose: Stop running application gracefully
  • Executes scripts from PREVIOUS deployment
  • Ideal for cleanup, connection draining, state saving
Implementation:
#!/bin/bash
# scripts/application_stop.sh

# Signal application to start graceful shutdown
kill -SIGTERM $(cat /var/run/app.pid)

# Wait for application to finish processing
sleep 30

# Verify application stopped
if pgrep -f "myapp" > /dev/null; then
  echo "Application didn't stop gracefully, forcing..."
  kill -9 $(cat /var/run/app.pid)
fi
Important note: ApplicationStop scripts are from the PREVIOUS deployment. If it's the first deployment, this hook is skipped.

BeforeBlockTraffic is for in-place deployments with load balancer integration (removing from LB before stopping).

Question 76
A DevOps engineer needs to configure CodeDeploy to integrate with an external configuration management system. After deployment, configuration from the external system should be applied. Which lifecycle hook is most appropriate?
A. AfterInstall
B. ApplicationStart
C. ValidateService
D. BeforeAllowTraffic
Answer: A (AfterInstall)

Explanation:

Lifecycle hook purposes:

AfterInstall (Option A):
  • Application files deployed but app not yet started
  • Perfect for: configuration application, permissions, external config sync
  • Configuration management (Ansible, Puppet, Chef) integration point
Typical AfterInstall tasks:
#!/bin/bash
# Sync configuration from external system
/opt/configuration-management/sync-config.sh

# Apply environment-specific settings
ansible-playbook /opt/playbooks/configure-app.yml

# Set file permissions
chown -R app:app /var/www/app
Order of operations:
  1. BeforeInstall - pre-deployment tasks
  2. Install - files copied
  3. AfterInstall - configure installed files ← External config here
  4. ApplicationStart - start application
  5. ValidateService - verify application works

Configuration should be applied before the application starts (AfterInstall), not after (ValidateService).

Question 77
A company has 200 EC2 instances in an Auto Scaling group. They want to deploy a new version with a deployment configuration that updates 10 instances at a time, with a 5-minute wait between batches to monitor for issues. Which deployment configuration achieves this?
A. Rolling deployment with batch size of 10
B. HalfAtATime with monitoring pauses
C. Custom configuration with fixed number of healthy hosts
D. CodeDeploy doesn't support batch-with-wait deployments
Answer: C (with caveats; CodeDeploy deploys in one continuous operation, not in batches with pauses)

Explanation:

Important clarification about CodeDeploy behavior:

What CodeDeploy DOES:
  • Deploys to X instances simultaneously (based on minimum healthy hosts)
  • Waits for those to complete
  • Then continues to next instances
  • All within a single deployment operation
What CodeDeploy DOES NOT do natively:
  • Pause between batches for monitoring
  • Time-based delays between instance updates
For batch-with-wait requirements:
Option 1: Multiple deployments with pipeline:
Stage 1: Deploy (10% of instances)
Stage 2: Wait (Lambda with sleep or Step Functions)
Stage 3: Deploy (next 10%)
...
Option 2: Custom deployment with Step Functions:
  • Orchestrate multiple smaller deployments
  • Add wait states between deployments
For exam: Understand that CodeDeploy's minimum healthy hosts controls parallelism but doesn't create distinct batches with pause between them.
Question 78
An application uses CodeDeploy for EC2 deployments. The team wants to automatically run database migrations before deploying the new application version. The migrations must complete successfully before any instance receives the new code. How should this be implemented?
A. Add a BeforeInstall hook on one instance to run migrations
B. Use a CodeBuild action before CodeDeploy in the pipeline
C. Add an AfterInstall hook that runs migrations
D. Use ApplicationStart hook to run migrations
Answer: B

Explanation:

For database migrations that must complete before ANY deployment:

Why CodeBuild (Option B) is correct:
  • Migrations run once, before deployment starts
  • Single execution, not per-instance
  • Can fail pipeline before any CodeDeploy activity
  • Clear separation of concerns
Why not hooks (Options A, C, D):
  • Hooks run on EACH instance
  • Migrations would run multiple times (race conditions, failures)
  • First instance might succeed, subsequent fail on already-migrated DB
  • No rollback capability for partial deployments
Implementation:
Pipeline:
├── Source
├── Build (CodeBuild)
├── Migrate (CodeBuild - run DB migrations)
│   └── Fails here = no deployment
└── Deploy (CodeDeploy - application code)
Migration CodeBuild project:
phases:
  build:
    commands:
      - npm run db:migrate
Question 79
A CodeDeploy deployment to Lambda functions is configured with a BeforeAllowTraffic hook. The hook function runs but the deployment times out. The hook function takes 8 minutes to complete. What is the issue?
A. Lambda functions have a 15-minute maximum timeout
B. The lifecycle hook timeout is not configured
C. BeforeAllowTraffic hook has a 5-minute default timeout
D. The hook function isn't returning a response to CodeDeploy
Answer: D (most likely), or a hook timeout misconfiguration

Explanation:

CodeDeploy Lambda hooks have specific requirements:

Timeout considerations:
  • Lambda deployment lifecycle hooks have their own timeout
  • Default hook timeout: 1 hour (configurable)
  • But the hook Lambda function must RESPOND to CodeDeploy
Common issue (Option D): Hook function might:
  1. Run for 8 minutes
  2. Complete its work
  3. Exit without calling put_lifecycle_event_hook_execution_status
  4. CodeDeploy waits for response until timeout
Required hook response:
import boto3

def handler(event, context):
    # Do validation work (up to 8 minutes)
    perform_validation()
    
    # MUST report status back to CodeDeploy
    codedeploy = boto3.client('codedeploy')
    codedeploy.put_lifecycle_event_hook_execution_status(
        deploymentId=event['DeploymentId'],
        lifecycleEventHookExecutionId=event['LifecycleEventHookExecutionId'],
        status='Succeeded'  # or 'Failed'
    )

Without this callback, CodeDeploy assumes the hook is still running.

Question 80
A company uses CodeDeploy with an Application Load Balancer for blue/green EC2 deployments. During deployments, they notice that new instances become healthy in the target group but deployment still fails. The error mentions "health check failures." What should be investigated?
A. ALB target group health check settings
B. CodeDeploy health check type and thresholds
C. EC2 instance health checks
D. All health check configurations
Answer: D (but specifically B for CodeDeploy-specific behavior)

Explanation:

Blue/green deployments have multiple health check layers:

1. ALB Target Group Health Checks:
  • Path, port, protocol
  • Healthy/unhealthy thresholds
  • Interval and timeout
2. CodeDeploy Health Checks:
  • ELB health check type (instances must pass ALB checks)
  • Or EC2 health check type (basic EC2 status)
3. EC2 Instance Status Checks:
  • System status, instance status
Potential issues:
  • ALB says healthy, but CodeDeploy has different threshold
  • CodeDeploy's evaluation period differs from ALB
  • CodeDeploy waits a configurable duration for instances to become healthy
Investigation:
# Check deployment events
aws deploy get-deployment \
  --deployment-id d-123456 \
  --query 'deploymentInfo.deploymentOverview'

Check CodeDeploy deployment group health check settings and compare with ALB target group settings for consistency.

Question 81
A team needs to implement a CodeDeploy deployment that proceeds only during business hours (9 AM - 5 PM EST). If a deployment is triggered outside these hours, it should wait until the next business hours window. How can this be implemented?
A. Configure deployment windows in CodeDeploy
B. Use EventBridge Scheduler to enable/disable deployments
C. Implement a Lambda function as a pre-deployment gate in CodePipeline
D. CodeDeploy doesn't support scheduled deployment windows
Answer: C

Explanation:

CodeDeploy doesn't have native deployment window scheduling. Implement via pipeline:

Lambda pre-deployment gate:
import datetime
from pytz import timezone

def handler(event, context):
    est = timezone('America/New_York')
    now = datetime.datetime.now(est)
    
    # Check if within business hours
    if now.weekday() < 5 and 9 <= now.hour < 17:
        # Proceed with deployment
        return put_success(event)
    else:
        # Calculate wait time until next window
        wait_message = calculate_next_window(now)
        return put_failure(event, wait_message)
Alternative approaches:
  1. EventBridge + Lambda: Pause pipeline outside hours
  2. Step Functions: Implement wait until business hours
  3. Approval action: Automated approval only during hours

For exam: Know that CodeDeploy itself doesn't have window scheduling; implement at pipeline level.

Question 82
A development team is deploying a serverless application using CodeDeploy with AWS SAM. The SAM template defines a Lambda function with an AutoPublishAlias property set to "live". How does this integrate with CodeDeploy?
A. SAM automatically creates CodeDeploy resources for traffic shifting
B. CodeDeploy must be configured separately from SAM
C. SAM and CodeDeploy are not compatible
D. AutoPublishAlias only creates versions, not CodeDeploy integration
Answer: A

Explanation:

AWS SAM integrates CodeDeploy for gradual deployments:

SAM Template configuration:
MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    AutoPublishAlias: live
    DeploymentPreference:
      Type: Linear10PercentEvery1Minute
      Alarms:
        - !Ref AliasErrorMetricAlarm
      Hooks:
        PreTraffic: !Ref PreTrafficHookFunction
        PostTraffic: !Ref PostTrafficHookFunction
What SAM creates automatically:
  1. New Lambda version on each deployment
  2. CodeDeploy application and deployment group
  3. Alias updates with traffic shifting
  4. Alarm-based rollback configuration
Deployment types supported:
  • Canary10Percent5Minutes, Canary10Percent10Minutes, etc.
  • Linear10PercentEvery1Minute, Linear10PercentEvery2Minutes, etc.
  • AllAtOnce

SAM handles the CodeDeploy resource creation, simplifying serverless CI/CD.

Question 83
An application running on EC2 requires environment-specific configuration. Different configuration files should be deployed to development, staging, and production environments. The deployment bundle is the same for all environments. How should this be handled with CodeDeploy?
A. Create separate deployment bundles per environment
B. Use appspec.yml with conditional file mappings
C. Use environment-specific lifecycle hooks to apply configuration
D. Store configurations in Parameter Store and fetch during deployment
Answer: C or D

Explanation:

Multiple approaches for environment-specific config:

Option C - Lifecycle hooks:
# scripts/configure_environment.sh (AfterInstall)
ENV=$(curl -s http://169.254.169.254/latest/meta-data/tags/instance/Environment)

case $ENV in
  development)
    cp /opt/config/dev.properties /var/app/config/app.properties
    ;;
  staging)
    cp /opt/config/staging.properties /var/app/config/app.properties
    ;;
  production)
    cp /opt/config/prod.properties /var/app/config/app.properties
    ;;
esac
Option D - Parameter Store (preferred):
# scripts/configure_environment.sh
ENV=$(curl -s http://169.254.169.254/latest/meta-data/tags/instance/Environment)

aws ssm get-parameters-by-path \
  --path "/${ENV}/app/config" \
  --with-decryption \
  --query 'Parameters[*].[Name,Value]' \
  --output text > /var/app/config/app.properties
Benefits of Parameter Store:
  • Configuration changes without redeployment
  • Encryption for sensitive values
  • Audit trail of changes
  • Version control
Question 84
A company is using CodeDeploy to deploy Docker containers to EC2 instances. The deployment should pull the latest image from ECR and restart the container. Which lifecycle hooks should be used?
A. ApplicationStop to stop container, ApplicationStart to pull and start new container
B. BeforeInstall to pull image, AfterInstall to start container
C. ApplicationStop to stop container, Install to pull image, ApplicationStart to start container
D. Download bundle handles Docker image pull automatically
Answer: A

Explanation:

For Docker container deployments on EC2 with CodeDeploy:

Lifecycle hook implementation:
ApplicationStop:
#!/bin/bash
# Stop existing container
docker stop my-app-container || true
docker rm my-app-container || true
ApplicationStart:
#!/bin/bash
# Login to ECR
aws ecr get-login-password --region $AWS_REGION | \
  docker login --username AWS --password-stdin $ECR_REGISTRY

# Pull latest image
docker pull $ECR_REGISTRY/my-app:$VERSION

# Start container
docker run -d \
  --name my-app-container \
  -p 80:80 \
  $ECR_REGISTRY/my-app:$VERSION
Important notes:
  • Install phase copies files from S3 (scripts, configs)
  • Docker image pull happens in hooks, not Install
  • Version can be passed via environment variable or appspec

The deployment bundle contains scripts and configuration, not the Docker image itself.

Question 85
A CodeDeploy deployment to an ECS service is failing. The error states "The deployment failed because the ECS service couldn't reach steady state." What are possible causes?
A. Task definition has errors causing container failures
B. Insufficient ECS cluster capacity
C. Container health checks are failing
D. All of the above
Answer: D

Explanation:

ECS "steady state" requires all tasks to be running and healthy. Failure causes:

1. Task definition issues (Option A):
  • Invalid image reference
  • Incorrect environment variables
  • Resource limits too low
  • Missing IAM permissions
2. Capacity issues (Option B):
  • Not enough CPU/memory in cluster
  • No available container instances
  • Fargate capacity not available
3. Health check failures (Option C):
  • Container starts but health check fails
  • Load balancer health check path incorrect
  • Application startup time exceeds health check grace period
Debugging:
# Check ECS service events
aws ecs describe-services \
  --cluster my-cluster \
  --services my-service \
  --query 'services[0].events[:10]'

# Check stopped tasks
aws ecs describe-tasks \
  --cluster my-cluster \
  --tasks <task-id> \
  --query 'tasks[0].stoppedReason'

Check ECS events and stopped task reasons for specific failure cause.

Question 86
A DevOps team wants to test CodeDeploy deployments in a lower environment before production. They want to use the same deployment configuration but with faster rollback detection. Which deployment group settings should differ between environments?
A. Deployment configuration (faster thresholds for non-prod)
B. Alarm configuration (stricter alarms for lower environments)
C. Rollback settings (faster rollback in lower environments)
D. All settings should be identical for accurate testing
Answer: A or B (depends on strategy)

Explanation:

Environment-specific deployment considerations:

Lower environment optimizations:
Option A - Faster deployment configurations:
Production: Linear10PercentEvery10Minutes
Non-prod: Linear10PercentEvery1Minute or AllAtOnce

Faster in non-prod for quick feedback.

Option B - Stricter alarms: Lower thresholds in non-prod to catch issues:
Production alarm: Error rate > 5%
Non-prod alarm: Error rate > 1%

Catch problems earlier in lower environments.

Option D consideration: Some organizations prefer identical settings to accurately simulate production behavior. Trade-off between speed and accuracy. Best practice:
  • Same deployment TYPE (e.g., blue/green)
  • Faster time intervals in non-prod
  • Similar but appropriately-scaled thresholds
  • Test rollback procedures in non-prod
Question 87
A company uses CodeDeploy for Lambda deployments with traffic shifting. They want to implement a "bake time" where the new version runs with production traffic for 30 minutes before the deployment is considered complete, even after traffic is fully shifted. How can this be achieved?
A. Configure the deployment wait time in deployment configuration
B. Use AfterAllowTraffic hook with a Lambda that waits and monitors
C. Extend the deployment with CloudWatch Events and manual completion
D. CodeDeploy automatically waits after traffic shift
Answer: B

Explanation:

For post-traffic-shift bake time:

AfterAllowTraffic hook with monitoring:
import time
import boto3

def handler(event, context):
    deployment_id = event['DeploymentId']
    hook_execution_id = event['LifecycleEventHookExecutionId']
    
    cloudwatch = boto3.client('cloudwatch')
    codedeploy = boto3.client('codedeploy')
    
    # Monitor for 30 minutes
    end_time = time.time() + (30 * 60)
    
    while time.time() < end_time:
        # Check metrics
        if check_error_metrics(cloudwatch):
            # Errors detected - fail deployment
            codedeploy.put_lifecycle_event_hook_execution_status(
                deploymentId=deployment_id,
                lifecycleEventHookExecutionId=hook_execution_id,
                status='Failed'
            )
            return
        time.sleep(60)  # Check every minute
    
    # Bake time completed successfully
    codedeploy.put_lifecycle_event_hook_execution_status(
        deploymentId=deployment_id,
        lifecycleEventHookExecutionId=hook_execution_id,
        status='Succeeded'
    )
Note: The hook Lambda's timeout must cover the bake time, and Lambda's maximum is 15 minutes, so a 30-minute bake as written above needs an external waiter (for example, Step Functions) rather than a single Lambda invocation.
Question 88
A CodeDeploy deployment fails with "No instances found for deployment group." The deployment group is configured to target an Auto Scaling group that has 5 running instances. All instances are tagged correctly and have the CodeDeploy agent running. What should be checked?
A. The Auto Scaling group name in deployment group configuration
B. The deployment group is targeting EC2 tags instead of ASG
C. The ASG instances are in a different region
D. The IAM service role permissions
Answer: A or B

Explanation:

"No instances found" error troubleshooting:

Check deployment group target type:
  1. Auto Scaling group targeting:
  • Verify ASG name is correct in deployment group
  • ASG must exist and have running instances
  • Instances must be InService state
  2. Tag-based targeting:
  • If configured with tags, instances must match tags
  • Multiple tag conditions use AND logic
  • Check tag key/value spelling
Common issues:
  • Deployment group references ASG that was deleted/recreated
  • ASG name changed but deployment group not updated
  • Mixed targeting (expecting ASG but configured for tags)
Verification:
# Check deployment group configuration
aws deploy get-deployment-group \
  --application-name MyApp \
  --deployment-group-name MyDG \
  --query 'deploymentGroupInfo.autoScalingGroups'

# Verify ASG
aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names MyASG
Question 89
An organization wants to standardize CodeDeploy configurations across multiple deployment groups. They want to ensure all deployment groups use specific lifecycle hooks and alarm configurations. What approach should they use?
A. Use AWS CloudFormation templates for deployment group creation
B. Create a custom CodeDeploy deployment configuration
C. Use AWS Config rules to enforce configuration
D. Implement a CI/CD pipeline that validates deployment group settings
Answer: A (with D as complementary)

Explanation:

Standardizing CodeDeploy configurations:

CloudFormation (Option A):
Resources:
  DeploymentGroup:
    Type: AWS::CodeDeploy::DeploymentGroup
    Properties:
      ApplicationName: !Ref Application
      DeploymentConfigName: CodeDeployDefault.AllAtOnce
      AutoRollbackConfiguration:
        Enabled: true
        Events:
          - DEPLOYMENT_FAILURE
          - DEPLOYMENT_STOP_ON_ALARM
      AlarmConfiguration:
        Enabled: true
        Alarms:
          - Name: !Ref ErrorAlarm
Benefits:
  • Version-controlled infrastructure
  • Consistent deployment group creation
  • Parameterized for different environments
Complementary approach (Option D):
  • Pre-deployment validation of deployment group configuration
  • CI/CD check before allowing deployments
  • Audit trail of changes

AWS Config (Option C) can detect drift but not enforce settings proactively.

Question 90
A company's CodeDeploy deployment includes running integration tests during the ValidateService hook. The tests take 10 minutes to complete. The deployment times out. What is the maximum timeout that can be configured for a lifecycle hook?
A. 5 minutes
B. 1 hour
C. 3600 seconds (1 hour)
D. Lifecycle hook timeout is the remaining deployment timeout
Answer: C (3600 seconds = 1 hour)

Explanation:

CodeDeploy lifecycle hook timeout:

appspec.yml timeout configuration:
hooks:
  ValidateService:
    - location: scripts/run_tests.sh
      timeout: 3600  # Maximum: 3600 seconds (1 hour)
Important notes:
  • Default timeout: 3600 seconds
  • Maximum timeout: 3600 seconds
  • If script doesn't complete within timeout, hook fails
  • Hook failure causes deployment failure (unless configured otherwise)
For longer operations:
  • Break tests into smaller scripts
  • Run tests asynchronously (script returns, tests continue)
  • Use external systems for long-running validations
  • Consider moving tests to separate pipeline stage

For 10-minute tests, the default timeout is sufficient. Check if script is hanging or tests are actually taking longer than expected.

Question 91
A team uses CodeDeploy with EC2 instances in multiple Availability Zones. They want deployments to update one AZ at a time to maintain cross-AZ availability. How should this be configured?
A. Create separate deployment groups per AZ
B. Use the AZ-aware deployment configuration option
C. CodeDeploy automatically handles AZ distribution
D. Tag instances by AZ and use multiple deployment groups with pipeline sequencing
Answer: D

Explanation:

CodeDeploy doesn't have native AZ-aware deployment ordering. Implement via:

Solution: Multiple deployment groups with orchestration
Setup:
  1. Tag instances by AZ:
  • Tag: AZ: us-east-1a
  • Tag: AZ: us-east-1b
  2. Create deployment groups per AZ (see the CLI sketch below):
  • DeploymentGroup-AZ-1a (targets AZ: us-east-1a)
  • DeploymentGroup-AZ-1b (targets AZ: us-east-1b)
  3. Pipeline orchestration:
Stage: Deploy-AZ-1a
  Action: Deploy to DeploymentGroup-AZ-1a
  
Stage: Validate-AZ-1a
  Action: Health check / manual approval
  
Stage: Deploy-AZ-1b
  Action: Deploy to DeploymentGroup-AZ-1b
Alternative: Use CodeDeploy's minimum healthy hosts with high threshold to force sequential updates across AZ boundaries (less control).
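A sketch of creating one of the per-AZ deployment groups with a tag filter (application name, group name, tag values, and role ARN are assumptions):
aws deploy create-deployment-group \
  --application-name MyApp \
  --deployment-group-name DeploymentGroup-AZ-1a \
  --deployment-config-name CodeDeployDefault.OneAtATime \
  --ec2-tag-filters Key=AZ,Value=us-east-1a,Type=KEY_AND_VALUE \
  --service-role-arn arn:aws:iam::123456789012:role/CodeDeployServiceRole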
Question 92
A CodeDeploy deployment to Lambda uses a traffic-shifting configuration. The deployment completes successfully, but some clients are still being routed to the old version hours later. What is the cause?
A. Lambda alias caching at the client
B. CloudFront caching the old Lambda responses
C. DNS caching of Lambda endpoints
D. This behavior indicates a failed deployment
Answer: A (Lambda invocations are consistent within a session context)

Explanation:

Traffic shifting behavior clarification:

How Lambda alias traffic shifting works:
  • Each NEW invocation is routed based on current alias weights
  • During traffic shift: some invocations go to v1, some to v2
  • After shift complete: all invocations go to new version
"Old version still receiving traffic" after completion: Possible causes:
  1. Connection reuse: Some SDKs/clients reuse connections
  2. Provisioned concurrency: Old version PC instances still exist
  3. Step Functions/SQS: Messages queued before shift complete
  4. Event source mapping: Takes time to fully shift
Troubleshooting:
# Check alias configuration
aws lambda get-alias \
  --function-name MyFunction \
  --name live

# Should show 100% to new version

If alias shows 100% new version but old version still invoked, check client connection patterns and event sources.

Question 93
An organization needs to deploy the same application to EC2 instances, Lambda functions, and ECS services. They want to use a single deployment pipeline. How should this be architected?
A. Use a single CodeDeploy application with multiple deployment groups
B. Create separate CodeDeploy applications for each compute platform
C. Use CloudFormation StackSets for unified deployment
D. Use a CodePipeline with parallel deploy actions for each platform
Answer: D (with B as supporting detail)

Explanation:

CodeDeploy applications are platform-specific:

CodeDeploy application platforms:
  • EC2/On-premises
  • Lambda
  • ECS
Cannot mix platforms in a single application. Therefore:
Architecture:
CodePipeline:
├── Source Stage
├── Build Stage
├── Deploy Stage (parallel actions):
│   ├── Action: Deploy to EC2 (CodeDeploy EC2 app)
│   ├── Action: Deploy to Lambda (CodeDeploy Lambda app)
│   └── Action: Deploy to ECS (CodeDeploy ECS app)
Implementation:
  1. Create separate CodeDeploy applications:
  • MyApp-EC2 (platform: Server)
  • MyApp-Lambda (platform: Lambda)
  • MyApp-ECS (platform: ECS)
  2. CodePipeline Deploy stage with parallel actions
  3. Build stage produces artifacts for all platforms

This provides unified pipeline with platform-appropriate deployments.
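A minimal sketch of the Deploy stage in a CloudFormation-defined pipeline; the key detail is that actions sharing the same RunOrder run in parallel. Names are placeholders, the ECS action (provider CodeDeployToECS) also needs task definition/appspec artifact configuration not shown, and Lambda deployments are often driven through CloudFormation/SAM actions instead:
Stages:
  - Name: Deploy
    Actions:
      - Name: Deploy-EC2
        RunOrder: 1          # same RunOrder = parallel execution
        ActionTypeId:
          Category: Deploy
          Owner: AWS
          Provider: CodeDeploy
          Version: "1"
        Configuration:
          ApplicationName: MyApp-EC2
          DeploymentGroupName: MyApp-EC2-DG
        InputArtifacts:
          - Name: BuildOutput
      - Name: Deploy-ECS
        RunOrder: 1
        ActionTypeId:
          Category: Deploy
          Owner: AWS
          Provider: CodeDeployToECS
          Version: "1"
        InputArtifacts:
          - Name: BuildOutput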

Question 94
A company uses CodeDeploy for EC2 deployments. They want to automatically notify a Slack channel when deployments start, succeed, or fail. What is the recommended approach?
A. Configure CodeDeploy notifications directly to Slack
B. Use CloudWatch Events to trigger Lambda that posts to Slack
C. Use Amazon SNS with Slack integration
D. Configure lifecycle hooks to post to Slack
Answer: B (or C via Amazon SNS + Lambda/Chatbot)

Explanation:

CodeDeploy notification options:

CloudWatch Events + Lambda (Option B):
import os
import requests  # not in the Lambda runtime; must be packaged with the function

def lambda_handler(event, context):
    deployment_id = event['detail']['deploymentId']
    state = event['detail']['state']
    
    message = f"CodeDeploy deployment {deployment_id}: {state}"
    
    # Post to the Slack incoming webhook
    slack_webhook = os.environ['SLACK_WEBHOOK']
    requests.post(slack_webhook, json={'text': message})
EventBridge rule:
{
  "source": ["aws.codedeploy"],
  "detail-type": ["CodeDeploy Deployment State-change Notification"],
  "detail": {
    "state": ["START", "SUCCESS", "FAILURE"]
  }
}
AWS Chatbot (newer, simpler):
  • Native AWS Chatbot integration with Slack
  • Configure SNS topic notifications
  • Chatbot formats and delivers to Slack

Both approaches work. AWS Chatbot is simpler if you have standard notifications. Lambda provides more customization.
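A sketch of wiring the rule to the Lambda with the CLI (rule name, function name, account, and region are assumptions; pattern.json contains the event pattern shown above):
aws events put-rule \
  --name codedeploy-state-change \
  --event-pattern file://pattern.json

aws lambda add-permission \
  --function-name slack-notifier \
  --statement-id allow-eventbridge \
  --action lambda:InvokeFunction \
  --principal events.amazonaws.com \
  --source-arn arn:aws:events:us-east-1:123456789012:rule/codedeploy-state-change

aws events put-targets \
  --rule codedeploy-state-change \
  --targets "Id"="slack-lambda","Arn"="arn:aws:lambda:us-east-1:123456789012:function:slack-notifier"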

Question 95
A DevOps engineer notices that CodeDeploy deployments to an Auto Scaling group sometimes skip newly launched instances. The ASG is configured with a CodeDeploy lifecycle hook. What should be verified?
A. The lifecycle hook timeout is sufficient for CodeDeploy agent installation
B. The lifecycle hook is on LAUNCHING, not TERMINATING
C. The IAM role allows CodeDeploy to complete the lifecycle action
D. All of the above
Answer: D

Explanation:

ASG lifecycle hooks for CodeDeploy require:

1. Correct hook timing (Option B):
AutoScaling group → Launching → Pending:Wait → CodeDeploy deploys → Complete lifecycle → InService
Hook must be on autoscaling:EC2_INSTANCE_LAUNCHING
2. Sufficient timeout (Option A):
  • Hook timeout must exceed: agent installation + deployment time
  • Default: 3600 seconds (1 hour)
  • If timeout expires, instance transitions based on default action
3. IAM permissions (Option C): CodeDeploy needs permission to:
{
  "Action": [
    "autoscaling:CompleteLifecycleAction",
    "autoscaling:RecordLifecycleActionHeartbeat"
  ],
  "Resource": "*"
}
Verification checklist:
  • Hook exists for EC2_INSTANCE_LAUNCHING
  • Hook HeartbeatTimeout is sufficient
  • CodeDeploy service role has ASG permissions
  • CodeDeploy agent is in AMI or installed via user data
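A quick way to check the first two items on the checklist (ASG name is an assumption):
aws autoscaling describe-lifecycle-hooks \
  --auto-scaling-group-name MyASG \
  --query 'LifecycleHooks[].[LifecycleHookName,LifecycleTransition,HeartbeatTimeout]'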
Question 96
A company wants to implement a deployment strategy where 1% of traffic goes to the new version for 1 hour before proceeding. If any errors occur during this period, the deployment should automatically roll back. Which configuration achieves this with CodeDeploy Lambda deployments?
A. Canary10Percent30Minutes
B. Custom deployment configuration with 1% shift and 60-minute interval
C. Linear1PercentEvery1Minute
D. CodeDeploy doesn't support less than 10% traffic shifts
Answer: B

Explanation:

For 1% canary deployments:

Custom deployment configuration:
aws deploy create-deployment-config \
  --deployment-config-name Canary1Percent1Hour \
  --compute-platform Lambda \
  --traffic-routing-config '{
    "type": "TimeBasedCanary",
    "timeBasedCanary": {
      "canaryPercentage": 1,
      "canaryInterval": 60
    }
  }'
This creates:
  • 1% traffic to new version
  • 60-minute evaluation period
  • Then 100% shift if successful
Combine with alarm-based rollback:
{
  "alarmConfiguration": {
    "enabled": true,
    "alarms": [{"name": "Lambda-Errors"}]
  },
  "autoRollbackConfiguration": {
    "enabled": true,
    "events": ["DEPLOYMENT_STOP_ON_ALARM"]
  }
}

AWS-provided configurations use 10% minimum for canary. Custom configurations allow lower percentages.

Question 97
An ECS service deployed with CodeDeploy blue/green is experiencing connection drops during the traffic shift. Current configuration shifts traffic all at once. How can this be improved?
A. Use linear or canary traffic shifting
B. Increase the deregistration delay on target groups
C. Add connection draining to the deployment configuration
D. Both A and B
Answer: D

Explanation:

Connection drops during traffic shift have two causes:

1. Traffic shifting strategy (Option A): AllAtOnce shifts 100% immediately. Use:
  • Linear10PercentEvery1Minute: Gradual shift
  • Canary10Percent10Minutes: Test with 10% first
2. Connection draining (Option B): Target group deregistration delay allows in-flight requests to complete:
aws elbv2 modify-target-group-attributes \
  --target-group-arn $TG_ARN \
  --attributes Key=deregistration_delay.timeout_seconds,Value=300
Combined solution:
  • Linear traffic shift gives time for monitoring
  • Deregistration delay allows requests on old tasks to complete
  • New requests go to new tasks
Additional considerations:
  • Application graceful shutdown handling
  • Connection timeout settings
  • Health check intervals
Question 98
A team needs to deploy a new version of their application but keep the previous version available for manual rollback for 7 days. They're using CodeDeploy with EC2 instances in an Auto Scaling group. What deployment strategy supports this?
A. In-place deployment with 7-day rollback window
B. Blue/green deployment with termination wait time of 7 days
C. Blue/green deployment with "keep original instances" option
D. Create a separate ASG with previous version manually
Answer: B or C (see the termination wait limit and cost notes below)

Explanation:

Long-term rollback options:

Option B - Extended termination wait:
{
  "blueGreenDeploymentConfiguration": {
    "terminateBlueInstancesOnDeploymentSuccess": {
      "action": "TERMINATE",
      "terminationWaitTimeInMinutes": 2880
    }
  }
}
Note: terminationWaitTimeInMinutes is capped at 2880 minutes (2 days), so it cannot cover a full 7-day window on its own; for 7 days, keep the original instances alive (Option C) or fall back to an AMI/ASG-based rollback.
Cost implication: Running two full environments for the duration of the wait.
Option C - Keep original instances:
{
  "terminateBlueInstancesOnDeploymentSuccess": {
    "action": "KEEP_ALIVE"
  }
}

Instances remain until manually terminated. Rollback by traffic shift or ASG swap.

Alternative (cost-optimized):
  • Keep AMI or snapshot of previous version
  • Maintain previous version in a scaled-down ASG
  • Create rollback deployment if needed

For exam: Understand the cost implications of keeping old environments running.

Question 99
A CodeDeploy deployment to EC2 instances uses the AllAtOnce configuration. The deployment succeeds on 3 of 5 instances but fails on 2. What happens to the deployment?
A. The deployment fails, all instances are rolled back
B. The deployment succeeds, failed instances retain old version
C. The deployment fails, 3 successful instances keep new version
D. Depends on minimum healthy hosts configuration
Answer: D

Explanation:

AllAtOnce behavior depends on minimum healthy hosts:

AllAtOnce default configuration:
  • minimumHealthyHosts: type: HOST_COUNT, value: 0
  • With this setting: Any failures = deployment fails
  • But successfully deployed instances KEEP the new version
Scenario analysis: If minimum healthy = 0:
  • Deployment status: FAILED (not all instances succeeded)
  • 3 instances have new version
  • 2 instances have old version (deployment failed on them)
If minimum healthy = 3:
  • Deployment status: SUCCEEDED (3 meets minimum)
  • 3 instances have new version
  • 2 instances need follow-up deployment
Key point: Failed deployments don't automatically roll back successfully deployed instances unless you have automatic rollback configured:
{
  "autoRollbackConfiguration": {
    "enabled": true,
    "events": ["DEPLOYMENT_FAILURE"]
  }
}
Question 100
A company is migrating from a third-party deployment tool to CodeDeploy. Their current tool supports "stop on first failure" behavior. How can this be achieved in CodeDeploy?
A. Use OneAtATime deployment configuration
B. Configure minimum healthy hosts to (total instances - 1)
C. Enable stop deployment on first failure setting
D. Create a custom deployment configuration with fail-fast behavior
Answer: A

Explanation:

"Stop on first failure" implementation:

OneAtATime (Option A):
  • Deploys to one instance at a time
  • If instance fails, deployment stops immediately
  • No additional instances are attempted
  • Provides stop-on-first-failure behavior
Behavior:
Instance 1: Success ✓ → Continue
Instance 2: Success ✓ → Continue
Instance 3: Failure ✗ → Stop deployment
Instance 4: Not attempted
Instance 5: Not attempted
Why it works:
  • Sequential deployment
  • Failure immediately fails the entire deployment
  • No additional instances are affected
Contrast with other configurations:
  • HalfAtATime: Continues with remaining instances in the batch
  • AllAtOnce: All instances attempted regardless of failures

For strict stop-on-first-failure, OneAtATime is the answer.
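A sketch of selecting this behavior when starting a deployment (application, deployment group, and bundle location are assumptions):
aws deploy create-deployment \
  --application-name MyApp \
  --deployment-group-name MyDG \
  --deployment-config-name CodeDeployDefault.OneAtATime \
  --s3-location bucket=my-deploy-bucket,key=myapp.zip,bundleType=zip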

Question 101
A company uses CloudFormation to deploy infrastructure and CodePipeline for CI/CD. They want CloudFormation stack updates to fail if they would cause resource replacement (potential data loss). How should this be configured?
A. Use stack policies to prevent replacement actions
B. Enable termination protection on the stack
C. Use change sets and implement a Lambda function to analyze changes
D. Configure DeletionPolicy on all resources
Answer: C (or A for specific resources)

Explanation:

Preventing unintended resource replacement:

Option C - Change sets with analysis:
  1. Create change set instead of direct update
  2. Lambda analyzes change set for replacements:
import boto3

cfn = boto3.client('cloudformation')

def analyze_change_set(change_set_id):
    changes = cfn.describe_change_set(ChangeSetName=change_set_id)
    
    for change in changes['Changes']:
        # Replacement can be 'True', 'False', or 'Conditional'
        if change['ResourceChange'].get('Replacement') in ('True', 'Conditional'):
            return 'REJECT'
    
    return 'APPROVE'
  3. Pipeline approval based on analysis
Option A - Stack policies (for known critical resources):
{
  "Statement": [{
    "Effect": "Deny",
    "Action": "Update:Replace",
    "Principal": "*",
    "Resource": "LogicalResourceId/MyDatabase"
  }]
}

Stack policies protect specific resources but require knowing which to protect. Change set analysis provides dynamic checking.

Question 102
A CloudFormation template creates an EC2 instance and installs software using cfn-init. The stack creation succeeds but the software installation fails. How can the template be modified to fail the stack creation if cfn-init fails?
A. Add a CreationPolicy with a signal timeout
B. Add a WaitCondition for the cfn-init completion
C. Use cfn-signal to report success/failure with CreationPolicy
D. Both A and C
Answer: D

Explanation:

Using CreationPolicy and cfn-signal:

CloudFormation template:
Resources:
  MyEC2Instance:
    Type: AWS::EC2::Instance
    CreationPolicy:
      ResourceSignal:
        Count: 1
        Timeout: PT15M  # 15 minutes
    Metadata:
      AWS::CloudFormation::Init:
        config:
          packages:
            yum:
              httpd: []
          services:
            sysvinit:
              httpd:
                enabled: true
                ensureRunning: true
    Properties:
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash -xe
          yum update -y aws-cfn-bootstrap
          /opt/aws/bin/cfn-init -v \
            --stack ${AWS::StackName} \
            --resource MyEC2Instance \
            --region ${AWS::Region}
          /opt/aws/bin/cfn-signal -e $? \
            --stack ${AWS::StackName} \
            --resource MyEC2Instance \
            --region ${AWS::Region}
How it works:
  1. CreationPolicy makes CloudFormation wait for signal
  2. cfn-signal sends success/failure based on cfn-init exit code ($?)
  3. Stack fails if timeout or failure signal received
Question 103
An organization uses CloudFormation StackSets to deploy resources across multiple accounts. A new account is added to the organization. How can the StackSet automatically deploy to the new account?
A. Enable automatic deployment in StackSet configuration
B. Add the new account to the StackSet target accounts
C. Configure the StackSet to target an Organization Unit (OU)
D. Both A and C
Answer: D

Explanation:

StackSets automatic deployment:

Automatic deployment configuration:
StackSet:
  AutoDeployment:
    Enabled: true
    RetainStacksOnAccountRemoval: false
OU targeting:
StackSetDeploymentTargets:
  OrganizationalUnitIds:
    - ou-abc123
Combined behavior:
  1. StackSet targets OU (not individual accounts)
  2. Automatic deployment enabled
  3. When new account joins OU → StackSet automatically deploys
  4. When account leaves OU → stacks optionally removed
Requirements:
  • AWS Organizations integration
  • StackSet created with SERVICE_MANAGED permissions
  • Trusted access enabled for CloudFormation StackSets

This provides automatic governance and baseline deployment for new accounts.
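A sketch with the CLI (stack set name and template are placeholders; the OU ID matches the example above):
aws cloudformation create-stack-set \
  --stack-set-name org-baseline \
  --template-body file://baseline.yaml \
  --permission-model SERVICE_MANAGED \
  --auto-deployment Enabled=true,RetainStacksOnAccountRemoval=false

aws cloudformation create-stack-instances \
  --stack-set-name org-baseline \
  --deployment-targets OrganizationalUnitIds=ou-abc123 \
  --regions us-east-1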

Question 104
A company's CloudFormation template includes a Lambda function that should only be updated when the function code changes, not when other template parameters change. How should this be configured?
A. Use a custom resource to manage Lambda updates
B. Package Lambda code in S3 with versioned keys
C. Use the AWS::Lambda::Version resource
D. Configure the function with UpdateReplacePolicy: Retain
Answer: B (or use SAM/CloudFormation deployment package)

Explanation:

Controlling Lambda update behavior:

Option B - S3 versioning approach:
MyFunction:
  Type: AWS::Lambda::Function
  Properties:
    Code:
      S3Bucket: !Ref DeploymentBucket
      S3Key: !Sub "functions/my-function-${CodeVersion}.zip"

When CodeVersion parameter changes → function code updates When other parameters change → function code doesn't update

Alternative - SAM packaging: SAM CLI packages code with content-based hashes:
sam package --s3-bucket my-bucket
Creates unique S3 keys based on code content.
AWS::Lambda::Version: Creates a new version on each update but doesn't control when updates happen.

The key is controlling the S3 key changes to match code changes only.

Question 105
A DevOps engineer is implementing blue/green deployments using CloudFormation. The template defines an Auto Scaling group and Application Load Balancer. What CloudFormation update policy enables blue/green behavior?
A. AutoScalingReplacingUpdate with WillReplace: true
B. AutoScalingRollingUpdate with custom batch sizes
C. UpdatePolicy with AutoScalingScheduledAction
D. CloudFormation doesn't have native blue/green update policies
Answer: A

Explanation:

CloudFormation ASG update policies:

AutoScalingReplacingUpdate:
MyASG:
  Type: AWS::AutoScaling::AutoScalingGroup
  UpdatePolicy:
    AutoScalingReplacingUpdate:
      WillReplace: true
Behavior:
  1. Creates new ASG with updated launch template
  2. New instances launch and register with target group
  3. Health checks pass → old ASG instances terminate
  4. Old ASG deleted

This provides blue/green behavior within CloudFormation.

Contrast with AutoScalingRollingUpdate:
UpdatePolicy:
  AutoScalingRollingUpdate:
    MinInstancesInService: 1
    MaxBatchSize: 1
    PauseTime: PT10M
Rolling update modifies existing ASG instances in batches.

For true blue/green: Use AutoScalingReplacingUpdate or external tools (CodeDeploy).

Question 106
A CloudFormation stack update fails and rolls back. The DevOps engineer wants to investigate what went wrong before the resources are rolled back. What feature allows this?
A. Enable termination protection before updates
B. Disable rollback in the stack update settings
C. Use the UPDATE_ROLLBACK_FAILED status and continue rollback later
D. Enable detailed stack events logging
Answer: B

Explanation:

Disabling rollback for debugging:

Console: Stack settings → Rollback on failure: Disabled CLI:
aws cloudformation update-stack \
  --stack-name my-stack \
  --template-body file://template.yaml \
  --disable-rollback
Behavior when disabled:
  1. Update fails → stack enters UPDATE_FAILED status
  2. Resources remain in their current state (partially updated)
  3. Engineer can investigate (check logs, resource state)
  4. After investigation, retry the update with a fixed template, or roll back manually:
aws cloudformation rollback-stack --stack-name my-stack
(continue-update-rollback applies only when a rollback itself has failed, i.e. UPDATE_ROLLBACK_FAILED)
Use cases:
  • Development and debugging
  • Understanding complex failure scenarios
  • NOT recommended for production (leaves stack in inconsistent state)
Question 107
An application uses CloudFormation nested stacks for modular infrastructure. Updates to the parent stack sometimes fail because child stack exports are in use. How should this be handled?
A. Use cross-stack references instead of exports
B. Delete dependent stacks before updating exports
C. Use SSM Parameter Store for shared values
D. Use export names that don't change
Answer: C or D

Explanation:

Managing cross-stack dependencies:

The problem:
  • Stack A declares an output with Export Name: VpcId
  • Stack B imports it with !ImportValue VpcId
  • Updating or removing Stack A's exported value fails while Stack B still imports it (see the snippet below)
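For reference, the coupled export/import pair looks like this (resource and export names are illustrative):
# Stack A
Outputs:
  VpcId:
    Value: !Ref VPC
    Export:
      Name: VpcId

# Stack B
Resources:
  MyResource:
    Properties:
      VpcId: !ImportValue VpcId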
Solution Option C - SSM Parameter Store:
# Stack A
Resources:
  VpcIdParameter:
    Type: AWS::SSM::Parameter
    Properties:
      Name: /infra/vpc-id
      Type: String
      Value: !Ref VPC

# Stack B
Resources:
  MyResource:
    Properties:
      VpcId: "{{resolve:ssm:/infra/vpc-id}}"
Benefits:
  • No hard coupling between stacks
  • Values can be updated independently
  • Dynamic resolution at deployment time
Solution Option D - Stable export names: Don't change export names; change export values instead. This requires careful design upfront.

Cross-stack references via SSM provide more flexibility than CloudFormation exports.

Question 108
A company uses Elastic Beanstalk for their web application. They need to customize the Nginx configuration to add custom headers. What is the recommended approach?
A. SSH into instances and modify nginx.conf
B. Use .ebextensions to add configuration files
C. Create a custom AMI with modified Nginx configuration
D. Use .platform hooks to modify Nginx configuration
Answer: D (for newer Amazon Linux 2 platforms) or B

Explanation:

Elastic Beanstalk customization options:

For Amazon Linux 2 platforms - .platform (Option D):
.platform/
  nginx/
    conf.d/
      custom-headers.conf
custom-headers.conf:
add_header X-Custom-Header "MyValue";
add_header X-Frame-Options "DENY";

This automatically merges with Nginx configuration.

For older platforms or additional customization - .ebextensions (Option B):
# .ebextensions/nginx-headers.config
files:
  "/etc/nginx/conf.d/custom-headers.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      add_header X-Custom-Header "MyValue";
.platform is preferred for Amazon Linux 2:
  • Cleaner structure
  • Survives platform updates better
  • Specific directories for known customizations (nginx, hooks)

Option A (SSH) is not reproducible. Option C (custom AMI) adds maintenance burden.

Question 109
An Elastic Beanstalk application needs to run a script every time a new version is deployed, after the application is running. Which hook should be used?
A. appdeploy/pre hook
B. appdeploy/post hook
C. configdeploy/post hook
D. postdeploy hook
Answer: B (or D depending on EB platform version)

Explanation:

Elastic Beanstalk deployment hooks (Amazon Linux 2):

Hook directories:
.platform/hooks/
  prebuild/    # Before application builds
  predeploy/   # After build, before deployment
  postdeploy/  # After deployment complete
For post-deployment scripts (Option D for AL2):
.platform/hooks/postdeploy/99_run_migrations.sh
#!/bin/bash
cd /var/app/current
./run-migrations.sh
Older .ebextensions approach:
container_commands:
  01_run_script:
    command: "./post-deploy-script.sh"
Key difference:
  • container_commands run BEFORE app is live
  • postdeploy hooks run AFTER app is live

For post-deployment with running application, use .platform/hooks/postdeploy/ on Amazon Linux 2.

Question 110
A company wants to implement a deployment pipeline where infrastructure changes and application code changes are deployed together atomically. If either fails, both should roll back. How should this be designed?
A. Separate pipelines for infrastructure and application with manual coordination
B. Single pipeline with CloudFormation deploying both infrastructure and application
C. Single pipeline with infrastructure stage followed by application stage
D. Use CloudFormation StackSets for coordinated deployment
Answer: B

Explanation:

Atomic infrastructure and application deployment:

Option B - Combined CloudFormation template:
Resources:
  # Infrastructure
  MySecurityGroup:
    Type: AWS::EC2::SecurityGroup
    
  MyLoadBalancer:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    
  # Application
  MyLambdaFunction:
    Type: AWS::Lambda::Function
    Properties:
      Code:
        S3Bucket: !Ref DeploymentBucket
        S3Key: !Ref CodeVersion
        
  # Or ECS Service
  MyECSService:
    Type: AWS::ECS::Service
How it achieves atomicity:
  1. CloudFormation creates change set with all changes
  2. If any resource fails → entire stack rolls back
  3. Both infrastructure and application return to previous state
Pipeline integration:
Source → Build → Deploy (CloudFormation with application code packaged)

CloudFormation's native rollback handles the atomicity requirement.

Question 111
An organization uses Elastic Beanstalk with a worker environment processing SQS messages. The worker occasionally processes the same message multiple times. How can this be prevented?
A. Enable FIFO queue for the environment
B. Increase the visibility timeout on the SQS queue
C. Implement idempotent message processing in the application
D. Configure Elastic Beanstalk to delete messages immediately
Answer: B and C

Explanation:

Duplicate message processing causes:

1. Visibility timeout too short (Option B):
  • Message becomes visible again before processing completes
  • Another worker picks up the same message
  • Solution: Increase visibility timeout to exceed maximum processing time
2. Application doesn't handle duplicates (Option C):
  • SQS provides "at least once" delivery (standard queues)
  • Same message might be delivered multiple times
  • Solution: Implement idempotent processing
  def process_message(message):
      message_id = message['MessageId']
      if already_processed(message_id):  # Check DynamoDB/cache
          return
      do_work(message)
      mark_processed(message_id)
Elastic Beanstalk worker behavior:
  • Automatically deletes messages after successful processing (HTTP 200)
  • Keeps messages visible during processing
  • Returns messages to queue on failure

Both visibility timeout adjustment AND idempotent processing are best practices.
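Raising the visibility timeout is a one-line change (queue URL and value are assumptions; set it above the worst-case processing time):
aws sqs set-queue-attributes \
  --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/worker-queue \
  --attributes VisibilityTimeout=900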

Question 112
A CloudFormation template creates an RDS database. The team wants to ensure the database is not deleted even if the stack is deleted. What configuration achieves this?
A. Enable deletion protection on the RDS instance
B. Set DeletionPolicy to Retain on the RDS resource
C. Enable termination protection on the stack
D. Both A and B for defense in depth
Answer: D

Explanation:

Protecting RDS from deletion:

DeletionPolicy: Retain (Option B):
MyDatabase:
  Type: AWS::RDS::DBInstance
  DeletionPolicy: Retain
  Properties:
    ...

When stack is deleted, RDS instance remains (orphaned).

Deletion Protection (Option A):
MyDatabase:
  Type: AWS::RDS::DBInstance
  Properties:
    DeletionProtection: true
    ...

Prevents deletion via API/console. Must disable before deleting.

Defense in depth (Option D):
  1. DeletionPolicy: Retain - CloudFormation won't delete
  2. DeletionProtection: true - Even if someone tries to delete directly, blocked
  3. Stack termination protection - Prevents accidental stack deletion
Best practice for production databases: Use all three protections for critical data.
Question 113
A company uses CloudFormation with nested stacks. They want to update a child stack independently without updating the parent. Is this possible and how?
A. Yes, update the child stack directly
B. No, child stacks must be updated through the parent
C. Yes, but only if the child stack was created with UPDATE capability
D. No, nested stacks don't support independent updates
Answer: A

Explanation:

Nested stack update options:

Direct child stack updates (Option A):
  • Child stacks are regular CloudFormation stacks
  • Can be updated directly using stack name or ID
  • Changes are independent of parent
When to use direct updates:
  • Quick fixes to child stack
  • Independent component updates
  • Parent template doesn't need changes
Considerations:
  • Parent template might become out of sync with child state
  • Next parent update might cause unexpected child changes
  • Drift detection can identify differences
When to use parent updates:
  • Coordinated changes across stacks
  • Version control of complete infrastructure
  • Consistent state management
Best practice: Update through parent for version control and consistency. Direct child updates for emergencies or independent components.
Question 114
An Elastic Beanstalk environment uses environment properties for configuration. The team wants to rotate a database password without redeploying the application. How can this be achieved?
A. Update environment properties, which triggers instance refresh
B. Use Secrets Manager with application-level caching and rotation
C. Store credentials in .ebextensions and update the file
D. Use Parameter Store SecureString with application polling
Answer: B (or D)

Explanation:

Credential rotation without deployment:

Option B - Secrets Manager (preferred for credentials):
# Application code
import boto3
import json

def get_db_credentials():
    client = boto3.client('secretsmanager')
    response = client.get_secret_value(SecretId='prod/db-credentials')
    return json.loads(response['SecretString'])

# Connection with refresh
def get_connection():
    creds = get_db_credentials()  # Gets current credentials
    return connect(creds['username'], creds['password'])
Automatic rotation:
  • Secrets Manager rotates credentials
  • Application fetches new credentials on next call
  • No deployment required
Option D - Parameter Store: Similar pattern but manual rotation:
ssm = boto3.client('ssm')
password = ssm.get_parameter(Name='/prod/db-password', WithDecryption=True)
Why not environment properties (Option A):
  • Updating env properties restarts instances
  • Credentials visible in Beanstalk console
  • No automatic rotation
Question 115
A CloudFormation template uses a custom resource backed by a Lambda function. The Lambda function creates resources that take 10 minutes to complete. CloudFormation shows the stack creation still in progress after 15 minutes. What is likely happening?
A. Lambda function timeout is too short
B. Lambda function isn't sending a response to CloudFormation
C. CloudFormation is waiting for additional resources
D. Custom resource requires more time than standard timeout
Answer: B

Explanation:

Custom resource behavior:

Required response: Custom resource Lambda MUST send response to CloudFormation:
import cfnresponse

def handler(event, context):
    try:
        # Do work (10 minutes)
        result = create_external_resource()
        
        # MUST send response
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {
            'ResourceId': result['id']
        })
    except Exception as e:
        cfnresponse.send(event, context, cfnresponse.FAILED, {
            'Error': str(e)
        })
Common issues:
  1. Lambda timeout before work completes (Lambda dies, no response)
  2. Function completes but doesn't send response (CloudFormation waits)
  3. Response sent to wrong URL (misconfigured)
Troubleshooting:
  • Check Lambda logs for completion
  • Verify cfnresponse.send is called
  • Check Lambda timeout (max 15 minutes)

For long-running tasks, use Step Functions or asynchronous pattern with status polling.

Question 116
A company wants to deploy the same Elastic Beanstalk application to multiple regions with region-specific configuration. What is the recommended approach?
A. Create saved configurations per region and restore in each region
B. Use CloudFormation with parameters for region-specific values
C. Create separate applications per region with .ebextensions containing region configs
D. Use Elastic Beanstalk environment cloning across regions
Answer: B (or A)

Explanation:

Multi-region Beanstalk deployment options:

Option B - CloudFormation (most flexible):
Parameters:
  Region:
    Type: String
  DatabaseEndpoint:
    Type: String

Resources:
  BeanstalkEnvironment:
    Type: AWS::ElasticBeanstalk::Environment
    Properties:
      OptionSettings:
        - Namespace: aws:elasticbeanstalk:application:environment
          OptionName: DB_ENDPOINT
          Value: !Ref DatabaseEndpoint

Deploy with region-specific parameters.

Option A - Saved configurations:
  1. Create environment in one region
  2. Save configuration
  3. Download and modify for other regions
  4. Create environments from saved configs
Why not environment cloning (Option D):
  • Cloning is same-region only
  • Doesn't work across regions
Best practice: Use CloudFormation or Terraform for multi-region with region-specific parameters stored in SSM Parameter Store per region.
Question 117
A CloudFormation stack uses a custom resource to create a DNS record in an external DNS provider. When the stack is deleted, the DNS record should also be deleted. How is this implemented?
A. The custom resource Lambda automatically handles deletions
B. Implement Delete handling in the custom resource Lambda
C. Set DeletionPolicy to Delete on the custom resource
D. Custom resources cannot handle deletions
Answer: B

Explanation:

Custom resource lifecycle handling:

Lambda must handle all request types:
import cfnresponse

def handler(event, context):
    request_type = event['RequestType']
    
    if request_type == 'Create':
        dns_record_id = create_dns_record(event['ResourceProperties'])
        response_data = {'RecordId': dns_record_id}
        # Return the record ID as the PhysicalResourceId for later Update/Delete events
        cfnresponse.send(event, context, cfnresponse.SUCCESS, response_data,
                         physicalResourceId=dns_record_id)
        
    elif request_type == 'Delete':
        record_id = event['PhysicalResourceId']
        delete_dns_record(record_id)
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
        
    elif request_type == 'Update':
        # Every request type must send a response, or CloudFormation waits until timeout
        update_dns_record(event['PhysicalResourceId'], event['ResourceProperties'])
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
Request types:
  1. Create: Stack creation or resource addition
  2. Update: Resource property changes
  3. Delete: Stack deletion or resource removal
PhysicalResourceId:
  • Returned in Create response
  • Provided in Update/Delete events
  • Used to identify the external resource

The Lambda MUST handle Delete requests for proper cleanup.

Question 118
An organization uses CloudFormation for infrastructure deployment. They want to prevent any modifications to production stacks except through the CI/CD pipeline. How should this be enforced?
A. Use IAM policies to deny CloudFormation actions for console users
B. Enable stack policy with deny all updates
C. Use SCPs to restrict CloudFormation access
D. Combine IAM policies restricting CloudFormation with pipeline role exceptions
Answer: D

Explanation:

Enforcing pipeline-only updates:

IAM policy for developers/operators:
{
  "Effect": "Deny",
  "Action": [
    "cloudformation:UpdateStack",
    "cloudformation:DeleteStack",
    "cloudformation:CreateStack"
  ],
  "Resource": "arn:aws:cloudformation:*:*:stack/prod-*/*"
}
Pipeline role (exception):
{
  "Effect": "Allow",
  "Action": ["cloudformation:*"],
  "Resource": "arn:aws:cloudformation:*:*:stack/prod-*/*"
}
Additional protections:
  1. Stack termination protection: Prevents accidental deletion
  2. Stack policies: Control specific resource modifications
  3. SCPs: Organization-level restrictions
Pipeline flow:
Code Change → PR Approval → Pipeline Triggers → 
Pipeline Role → CloudFormation Update

Humans cannot directly modify production stacks; all changes go through pipeline.

Question 119
An Elastic Beanstalk application needs to run commands after instances are replaced during a scaling event. Which hook should be used?
A. .ebextensions container_commands
B. .platform/hooks/postdeploy
C. .ebextensions commands
D. Both A and B during deployments
Answer: B

Explanation:

Scaling event vs deployment hooks:

Scaling events:
  • New instances launched from the platform AMI
  • Not a new application deployment, but Elastic Beanstalk deploys the current version onto each new instance
  • So deployment hooks also run on instances launched by scaling
.platform/hooks/postdeploy (Option B for AL2): Runs after the application is live, including on new instances launched by scaling, because each new instance goes through the full deployment process. For commands that must run earlier during instance setup, use:
# .ebextensions/01-scale-commands.config
commands:
  01_run_on_scale:
    command: "/opt/scripts/scale-setup.sh"

OR User Data in launch template customization.

Clarification:
  • commands: Run during instance launch (before app deployment)
  • container_commands: Run during deployment (leader election available)
  • .platform/hooks/: Run during deployment lifecycle

Because new instances repeat the deployment process, postdeploy hooks cover the "after the application is running" requirement; use the commands section (or launch scripts) for setup that must happen before the application is deployed.

Question 120
A company uses CloudFormation for infrastructure and wants to implement drift detection to identify manual changes. How should this be automated?
A. Schedule Lambda function to run DetectStackDrift API
B. Enable automatic drift detection in CloudFormation
C. Use AWS Config rules for drift detection
D. CloudFormation Events with EventBridge for drift alerts
Answer: A (or C for more comprehensive detection)

Explanation:

CloudFormation drift detection automation:

Option A - Scheduled drift detection:
import boto3

def lambda_handler(event, context):
    cfn = boto3.client('cloudformation')
    
    # List all stacks
    stacks = cfn.list_stacks(StackStatusFilter=['CREATE_COMPLETE', 'UPDATE_COMPLETE'])
    
    for stack in stacks['StackSummaries']:
        # Initiate drift detection
        cfn.detect_stack_drift(StackName=stack['StackName'])

# Schedule with CloudWatch Events (daily)
Follow-up Lambda for results:
import boto3

def check_drift_results(event, context):
    cfn = boto3.client('cloudformation')
    stack_name = event['stackName']  # assumed to be passed in by the scheduling rule/first Lambda
    
    drift_status = cfn.describe_stack_resource_drifts(StackName=stack_name)
    
    drifted = [r for r in drift_status['StackResourceDrifts'] 
               if r['StackResourceDriftStatus'] == 'MODIFIED']
    
    if drifted:
        send_alert(drifted)
Option C - AWS Config: AWS Config rule cloudformation-stack-drift-detection-check can monitor for drift.

CloudFormation doesn't have built-in automatic drift detection (Option B doesn't exist).

Question 121
A CloudFormation template creates an S3 bucket. The team wants to ensure the bucket has encryption enabled and blocks public access, regardless of what the template specifies. How can this be enforced?
A. Use CloudFormation hooks to validate templates
B. Use AWS Config rules to check bucket configuration
C. Use SCPs to deny bucket creation without encryption
D. Use CloudFormation Guard for policy-as-code validation
Answer: A or D (preventive) and B (detective)

Explanation:

Enforcing S3 security standards:

Option A - CloudFormation Hooks (preventive):
# Hook Lambda
def validate_s3_bucket(event):
    resource_properties = event['requestData']['targetLogicalId']['properties']
    
    # Check encryption
    if 'BucketEncryption' not in resource_properties:
        return {'status': 'FAILED', 'message': 'Encryption required'}
    
    # Check public access block
    if 'PublicAccessBlockConfiguration' not in resource_properties:
        return {'status': 'FAILED', 'message': 'Public access block required'}
    
    return {'status': 'SUCCESS'}
Option D - CloudFormation Guard:
AWS::S3::Bucket {
  BucketEncryption.ServerSideEncryptionConfiguration[*].ServerSideEncryptionByDefault.SSEAlgorithm == "aws:kms"
  PublicAccessBlockConfiguration.BlockPublicAcls == true
  PublicAccessBlockConfiguration.BlockPublicPolicy == true
}
Option B - AWS Config (detective):
  • s3-bucket-server-side-encryption-enabled
  • s3-bucket-public-read-prohibited
Best practice: Use hooks/guard for prevention AND Config for continuous monitoring.
Question 122
An Elastic Beanstalk environment has immutable deployment configured. During a deployment, the team notices double the number of instances running. The deployment eventually succeeds. Is this expected behavior?
A. No, immutable deployments should maintain the same instance count
B. Yes, immutable deployments create a temporary Auto Scaling group
C. No, this indicates a deployment failure
D. Yes, but only if health check grace period is enabled
Answer: B

Explanation:

Immutable deployment process:

How immutable deployments work:
  1. Create temporary Auto Scaling group
  2. Launch new instances with new version
  3. New instances pass health checks
  4. New instances added to load balancer
  5. Old instances terminated
  6. Temporary ASG deleted
During deployment:
Original ASG: 3 instances (old version)
Temporary ASG: 3 instances (new version)
Total: 6 instances
After successful deployment:
Original ASG: 3 instances (new version)
Temporary ASG: deleted
Total: 3 instances
Benefits:
  • No capacity reduction during deployment
  • Quick rollback (terminate temporary ASG)
  • Clean instances with new version
Cost consideration: Temporary double capacity means temporary double cost during deployment window.
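For reference, immutable deployments are selected with a deployment policy option setting, e.g. via .ebextensions:
# .ebextensions/deploy-policy.config
option_settings:
  aws:elasticbeanstalk:command:
    DeploymentPolicy: Immutable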
Question 123
A company uses CloudFormation to deploy VPCs and associated resources. They want to ensure that VPC CIDR blocks don't overlap with existing VPCs in the account. How can this be implemented?
A. Use a CloudFormation macro to validate CIDR blocks
B. Create a custom resource that validates CIDR before VPC creation
C. Use CloudFormation Guard to check CIDR blocks
D. Implement a CloudFormation hook for pre-create validation
Answer: B or D

Explanation:

CIDR validation options:

Option B - Custom resource:
Resources:
  CIDRValidation:
    Type: Custom::CIDRValidation
    Properties:
      ServiceToken: !GetAtt ValidationFunction.Arn
      ProposedCIDR: "10.0.0.0/16"
      
  MyVPC:
    Type: AWS::EC2::VPC
    DependsOn: CIDRValidation
    Properties:
      CidrBlock: "10.0.0.0/16"

Lambda checks existing VPCs for overlaps.

Option D - CloudFormation Hook (newer, preferred):
# Pre-create hook for AWS::EC2::VPC
import boto3
import ipaddress

def cidrs_overlap(cidr_a, cidr_b):
    return ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

def validate_cidr(event):
    proposed_cidr = event['requestData']['targetLogicalId']['properties']['CidrBlock']
    
    # Check existing VPCs
    ec2 = boto3.client('ec2')
    vpcs = ec2.describe_vpcs()
    
    for vpc in vpcs['Vpcs']:
        if cidrs_overlap(proposed_cidr, vpc['CidrBlock']):
            return {'status': 'FAILED', 'message': 'CIDR overlap detected'}
    
    return {'status': 'SUCCESS'}

Hooks are cleaner and don't create resources for validation.

Question 124
An Elastic Beanstalk application requires a mounted EFS filesystem for shared storage between instances. How should this be configured?
A. Use .ebextensions to mount EFS
B. Configure EFS in the Beanstalk console storage settings
C. Create a custom AMI with EFS mount configured
D. Use .platform configuration for EFS mounting
Answer: A

Explanation:

EFS mounting in Elastic Beanstalk:

.ebextensions configuration:
# .ebextensions/efs-mount.config
packages:
  yum:
    amazon-efs-utils: []

commands:
  01_mount:
    command: |
      mkdir -p /mnt/efs
      mount -t efs fs-12345678:/ /mnt/efs
      
files:
  "/etc/fstab":
    mode: "000644"
    owner: root
    group: root
    content: |
      fs-12345678:/ /mnt/efs efs defaults,_netdev 0 0
Additional requirements:
  1. Security group allowing NFS (port 2049) from Beanstalk instances
  2. EFS mount targets in same subnets as Beanstalk instances
  3. IAM permissions if using IAM authentication
For Amazon Linux 2 - .platform alternative:
# .platform/hooks/prebuild/01-mount-efs.sh
#!/bin/bash
yum install -y amazon-efs-utils
mkdir -p /mnt/efs
mount -t efs fs-12345678:/ /mnt/efs

Elastic Beanstalk doesn't have native EFS integration in the console; use .ebextensions or .platform hooks.

Question 125
A CloudFormation template uses the AWS::Include transform to incorporate template snippets from S3. During stack updates, the snippets have been modified in S3 but CloudFormation isn't picking up the changes. What is the issue?
A. Include transform only runs during stack creation
B. CloudFormation caches transformed templates
C. S3 objects need versioning for change detection
D. CloudFormation needs stack policy update to detect include changes
Answer: B (effectively a caching issue; force re-processing via cache invalidation)

Explanation:

AWS::Include behavior:

How Include works:
Transform: AWS::Include
Parameters:
  Location: s3://bucket/snippet.yaml
Issue: CloudFormation may cache transformed templates within a session. The include is processed at transform time. Solutions:
  1. Version in URL: s3://bucket/snippet-v2.yaml
  2. S3 versioning + version ID: Reference specific versions
  3. Force template change: Any change to parent template triggers re-transform
  4. Cache busting: Add parameter that changes template hash
Best practice:
Transform: AWS::Include
Parameters:
  Location: !Sub "s3://bucket/snippets/config-${Version}.yaml"

Where ${Version} is a parameter that changes with snippet updates.

CloudFormation processes transforms fresh on each stack operation if the template itself changes.

Question 126
A company uses CloudFormation StackSets to deploy a baseline configuration across 50 accounts. They need to update the StackSet with a new configuration change. The update should complete within 2 hours and minimize concurrent updates per region. What configuration should be used?
A. Use default deployment settings
B. Configure MaxConcurrentPercentage and RegionConcurrencyType
C. Use sequential deployment across all accounts
D. Create multiple StackSets with smaller account subsets
Answer: B

Explanation:

StackSet deployment configuration:

Deployment options:
aws cloudformation update-stack-set \
  --stack-set-name my-stackset \
  --template-body file://template.yaml \
  --operation-preferences '{
    "RegionConcurrencyType": "PARALLEL",
    "MaxConcurrentCount": 10,
    "FailureToleranceCount": 5
  }'
Key settings:
  1. RegionConcurrencyType:
  • SEQUENTIAL: One region at a time
  • PARALLEL: Multiple regions simultaneously
  1. MaxConcurrentCount/Percentage:
  • How many accounts can update concurrently
  • Balance speed vs. impact of failures
  1. FailureToleranceCount/Percentage:
  • How many failures before operation stops
  • Allows graceful handling of individual account issues
For 50 accounts in 2 hours:
  • PARALLEL regions
  • MaxConcurrentCount: 10-15 accounts
  • This allows ~3-4 waves to complete within 2 hours
Question 127
An Elastic Beanstalk environment uses a rolling deployment policy. During deployment, the team notices that the environment becomes unhealthy and instances are repeatedly terminated. What is the likely cause?
A. The new application version fails health checks
B. The rolling batch size is too large
C. The deployment timeout is too short
D. All of the above could cause this behavior
Answer: D

Explanation:

Rolling deployment failures:

Option A - Health check failures:
  • New version has bugs or configuration issues
  • Fails health checks → instance marked unhealthy
  • Auto Scaling terminates and replaces
  • Cycle continues
Option B - Batch size too large:
  • Large batches reduce capacity significantly
  • Remaining instances overwhelmed by traffic
  • Performance degradation → health check failures
Option C - Timeout too short:
  • Application needs longer startup time
  • Times out before reaching healthy state
  • Deployment fails and rolls back
Debugging steps:
  1. Check Beanstalk events for specific error messages
  2. Review application logs in CloudWatch
  3. Test new version in a clone environment
  4. Increase health check grace period
  5. Reduce batch size for safer rollout
Health settings:
# .ebextensions/healthcheck.config
option_settings:
  aws:elasticbeanstalk:command:
    Timeout: 600
  aws:elasticbeanstalk:healthreporting:system:
    HealthCheckSuccessThreshold: Degraded
Question 128
A CloudFormation template creates an Application Load Balancer and Lambda function for a serverless application. The team wants to test changes to the Lambda code without deploying infrastructure changes. How should the template be structured?
A. Separate stacks for infrastructure and Lambda
B. Use nested stacks with Lambda in a child stack
C. Use AWS SAM for Lambda and CloudFormation for ALB
D. Any of the above would work
Answer: D

Explanation:

Separation strategies:

Option A - Separate stacks:
infra-stack: ALB, VPC, Security Groups
lambda-stack: Lambda function (references infra exports)
Update lambda-stack independently.
Option B - Nested stacks:
# parent-stack
Resources:
  InfraStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: s3://bucket/infra.yaml
      
  LambdaStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: s3://bucket/lambda.yaml
Update child stacks independently.
Option C - Mixed tooling:
  • CloudFormation for long-lived infrastructure
  • SAM for Lambda (faster iterations)
  • SAM integrates with CloudFormation
Best practice considerations:
  • Separate things that change at different rates
  • Lambda code changes frequently → separate stack
  • ALB changes rarely → infrastructure stack
  • Use stack outputs/imports or SSM for integration
Question 129
An organization uses Elastic Beanstalk across multiple teams. They want to ensure all environments use specific instance types and are deployed to approved subnets. How should this be enforced?
A. Use saved configurations that all teams must use
B. Implement custom platform with restrictions built-in
C. Use IAM policies to restrict Beanstalk configuration options
D. Use Service Control Policies to restrict EC2 instance types
Answer: C (or D for organization-wide)

Explanation:

Restricting Beanstalk configurations:

Option C - IAM policy conditions:
{
  "Effect": "Allow",
  "Action": "elasticbeanstalk:CreateEnvironment",
  "Resource": "*",
  "Condition": {
    "StringEquals": {
      "elasticbeanstalk:InVPC": "true"
    }
  }
}
Option D - SCPs for EC2 restrictions:
{
  "Effect": "Deny",
  "Action": "ec2:RunInstances",
  "Resource": "arn:aws:ec2:*:*:instance/*",
  "Condition": {
    "ForAnyValue:StringNotLike": {
      "ec2:InstanceType": ["t3.small", "t3.medium"]
    }
  }
}

This prevents Beanstalk from launching unapproved instance types.

Subnet restrictions:
{
  "Effect": "Deny",
  "Action": "ec2:RunInstances",
  "Resource": "arn:aws:ec2:*:*:subnet/*",
  "Condition": {
    "ForAnyValue:StringNotEquals": {
      "ec2:Subnet": ["subnet-approved1", "subnet-approved2"]
    }
  }
}

SCPs provide organization-wide enforcement regardless of which service launches resources.

Question 130
A CloudFormation template uses the Serverless transform (AWS SAM). When deploying changes to a Lambda function, the team wants to implement canary deployments. What needs to be added to the template?
A. Add CodeDeploy application and deployment group resources
B. Add DeploymentPreference configuration to the Lambda function
C. Use AutoPublishAlias with traffic shifting configuration
D. Both B and C
Answer: D

Explanation:

SAM deployment preferences:

Complete configuration:
Transform: AWS::Serverless-2016-10-31

Resources:
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.9
      AutoPublishAlias: live  # Option C - required
      DeploymentPreference:   # Option B - deployment configuration
        Type: Canary10Percent10Minutes
        Alarms:
          - !Ref ErrorAlarm
          - !Ref LatencyAlarm
        Hooks:
          PreTraffic: !Ref PreTrafficHook
          PostTraffic: !Ref PostTrafficHook
Required components:
  1. AutoPublishAlias: Creates new version on each deploy and maintains alias
  2. DeploymentPreference: Configures how traffic shifts to new version
What SAM creates automatically:
  • Lambda versions
  • CodeDeploy application
  • CodeDeploy deployment group
  • Alias traffic shifting configuration

Without AutoPublishAlias, there's no alias for traffic shifting. Without DeploymentPreference, deployment is immediate (AllAtOnce).
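The PreTraffic hook referenced above is a Lambda function that runs validation and reports the result back to CodeDeploy. A minimal sketch of such a hook (the smoke-test logic itself is omitted):
import boto3

codedeploy = boto3.client('codedeploy')

def handler(event, context):
    # CodeDeploy passes the deployment ID and the hook execution ID in the event
    deployment_id = event['DeploymentId']
    execution_id = event['LifecycleEventHookExecutionId']

    # Run smoke tests against the new version here, then report the outcome
    status = 'Succeeded'  # or 'Failed'

    codedeploy.put_lifecycle_event_hook_execution_status(
        deploymentId=deployment_id,
        lifecycleEventHookExecutionId=execution_id,
        status=status,
    )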

Question 131
A company uses CloudFormation and wants to validate that templates follow best practices before deployment. The validation should check for things like encryption requirements, logging enabled, and proper tagging. What solution provides this?
A. Use cfn-lint for template validation
B. Use CloudFormation Guard for policy validation
C. Use AWS Config for resource validation
D. Use CloudFormation hooks for pre-deployment checks
Answer: B (for policy enforcement) and A (for syntax/best practices)

Explanation:

Template validation tools:

cfn-lint (Option A):
  • Validates CloudFormation template syntax
  • Checks against CloudFormation specification
  • Catches errors before deployment
  • Limited policy enforcement
CloudFormation Guard (Option B):
# Rules file
AWS::S3::Bucket {
  BucketEncryption EXISTS
  Tags[*].Key == "Environment"
  LoggingConfiguration EXISTS
}

AWS::EC2::SecurityGroup {
  SecurityGroupIngress[*].CidrIp != "0.0.0.0/0"
}
Usage:
cfn-guard validate -d template.yaml -r rules.guard
CI/CD integration:
# buildspec.yml
phases:
  build:
    commands:
      - cfn-lint template.yaml
      - cfn-guard validate -d template.yaml -r company-rules.guard

Guard is specifically designed for policy-as-code validation against CloudFormation templates.

Question 132
An Elastic Beanstalk application uses a Classic Load Balancer. The team wants to migrate to an Application Load Balancer without recreating the environment. How should this be done?
A. Update the environment configuration to change load balancer type
B. Clone the environment with ALB, then swap CNAMEs
C. Use .ebextensions to change the load balancer type
D. It's not possible to change load balancer type without recreating
Answer: B

Explanation:

Load balancer type migration:

Why direct change isn't possible (Option D is partially correct):
  • Load balancer type is an immutable environment configuration
  • Cannot be changed after environment creation
  • Requires environment recreation
Option B - Blue/Green via environment swap:
  1. Create a new environment that uses an ALB (eb clone copies the existing load balancer type, so create a fresh environment instead):
eb create env-with-alb --elb-type application
  2. Test the new environment
  3. Swap CNAMEs:
eb swap env-with-clb --destination_name env-with-alb
  4. Terminate old environment
Alternative - Saved configuration:
  1. Save current configuration
  2. Modify saved config for ALB
  3. Create new environment from modified config
  4. Swap and terminate

This provides zero-downtime migration with ALB benefits.
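Step 3 (the CNAME swap) can also be done through the API instead of the EB CLI. A minimal boto3 sketch using the environment names from the example above:
import boto3

eb = boto3.client('elasticbeanstalk')

# Swap CNAMEs between the CLB and ALB environments
eb.swap_environment_cnames(
    SourceEnvironmentName='env-with-clb',
    DestinationEnvironmentName='env-with-alb',
)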

Question 133
A CloudFormation template needs to create resources in a specific order due to dependencies that aren't automatically detected. How can explicit dependencies be defined?
A. Use DependsOn attribute on resources
B. Use Ref function to create implicit dependencies
C. Use AWS::CloudFormation::WaitCondition
D. Both A and B
Answer: D

Explanation:

CloudFormation dependency management:

Implicit dependencies (Option B):
Resources:
  MySecurityGroup:
    Type: AWS::EC2::SecurityGroup
    
  MyInstance:
    Type: AWS::EC2::Instance
    Properties:
      SecurityGroupIds:
        - !Ref MySecurityGroup  # Implicit dependency
CloudFormation automatically creates SecurityGroup before Instance.
Explicit dependencies (Option A):
Resources:
  MyDatabase:
    Type: AWS::RDS::DBInstance
    
  MyAppServer:
    Type: AWS::EC2::Instance
    DependsOn: MyDatabase  # Explicit dependency
    Properties:
      # No direct reference to database
When to use DependsOn:
  • Resources don't reference each other
  • Order matters for external reasons
  • Custom resources that depend on other resources
  • Ensure the other resource reaches CREATE_COMPLETE before this one starts creating
DependsOn vs Ref:
  • Ref creates both dependency AND passes value
  • DependsOn only creates dependency order
Question 134
An organization uses CloudFormation StackSets. They want certain accounts to be exempt from StackSet deployments. How can this be configured?
A. Use StackSet account filters with exclusion list
B. Remove accounts from the StackSet target accounts
C. Use SCPs to prevent StackSet deployments in specific accounts
D. Configure StackSet deployment targets with OU and account exclusions
Answer: D

Explanation:

StackSet account exclusions:

Deployment targets configuration:
aws cloudformation update-stack-set \
  --stack-set-name my-stackset \
  --deployment-targets '{
    "OrganizationalUnitIds": ["ou-abc123"],
    "AccountFilterType": "DIFFERENCE",
    "Accounts": ["111111111111", "222222222222"]
  }'
AccountFilterType options:
  • INTERSECTION: Deploy only to specified accounts within OUs
  • DIFFERENCE: Deploy to all accounts in OUs EXCEPT specified accounts
  • UNION: Deploy to OU accounts plus additional accounts
Use case examples:
  1. Exclude management account:
  • Target: All accounts in root OU
  • Exclude: Management account
  2. Exclude sandbox accounts:
  • Target: Production OU
  • Exclude: Test/sandbox accounts within that OU

This is cleaner than managing individual account lists for large organizations.
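The same DIFFERENCE filter applies when adding stack instances. A boto3 sketch with placeholder OU, account IDs, and Region:
import boto3

cfn = boto3.client('cloudformation')

# Deploy to every account in the OU except the two excluded accounts
cfn.create_stack_instances(
    StackSetName='my-stackset',
    DeploymentTargets={
        'OrganizationalUnitIds': ['ou-abc123'],
        'AccountFilterType': 'DIFFERENCE',
        'Accounts': ['111111111111', '222222222222'],
    },
    Regions=['us-east-1'],
)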

Question 135
An Elastic Beanstalk environment runs a Docker application. The Docker image is stored in Amazon ECR. How should authentication to ECR be configured?
A. Store ECR credentials in Beanstalk environment variables
B. Assign an instance profile with ECR permissions to the environment
C. Include docker login commands in Dockerrun.aws.json
D. Configure ECR repository policy to allow public access
Answer: B

Explanation:

ECR authentication for Beanstalk:

Instance profile configuration (Option B):
  1. IAM Policy:
{
  "Effect": "Allow",
  "Action": [
    "ecr:GetAuthorizationToken",
    "ecr:BatchCheckLayerAvailability",
    "ecr:GetDownloadUrlForLayer",
    "ecr:BatchGetImage"
  ],
  "Resource": "*"
}
  2. Assign to the Beanstalk instance profile:
Attach the policy to the EC2 instance profile role (aws-elasticbeanstalk-ec2-role by default).
  3. Dockerrun.aws.json (no Authentication block is needed when pulling from ECR with an instance profile):
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",
    "Update": "true"
  }
}
How it works:
  • EC2 instances assume instance profile role
  • Docker daemon uses instance metadata for ECR authentication
  • No credential management required

Never store credentials (Option A) or make repositories public (Option D).

Question 136
A CloudFormation template creates an Auto Scaling group with instances that need to download application code from S3 during launch. The download occasionally fails because the S3 VPC endpoint isn't ready when instances launch. How can this be resolved?
A. Add DependsOn between ASG and VPC endpoint
B. Add retry logic in the instance user data script
C. Use CreationPolicy on the ASG with proper signaling
D. All of the above are valid approaches
Answer: D

Explanation:

Handling timing dependencies:

Option A - DependsOn:
Resources:
  S3Endpoint:
    Type: AWS::EC2::VPCEndpoint
    Properties:
      ServiceName: com.amazonaws.region.s3
      
  MyASG:
    Type: AWS::AutoScaling::AutoScalingGroup
    DependsOn: S3Endpoint
Ensures the endpoint exists before the ASG, but the endpoint might not be immediately functional.
Option B - Retry logic:
#!/bin/bash
MAX_RETRIES=5
for i in $(seq 1 $MAX_RETRIES); do
  aws s3 cp s3://bucket/app.zip /tmp/app.zip && break
  sleep 10
done
Handles transient failures robustly (a Python equivalent is sketched below).
Option C - CreationPolicy:
MyASG:
  CreationPolicy:
    ResourceSignal:
      Count: !Ref DesiredCapacity
      Timeout: PT15M
The stack waits until instances signal success (via cfn-signal) before proceeding.
Best practice: Combine all approaches for robust deployment:
  • DependsOn for ordering
  • Retry logic for transient issues
  • CreationPolicy for completion verification
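A Python equivalent of the Option B retry loop, assuming the instance profile can read the bucket (bucket, key, and path are placeholders):
import time

import boto3
from botocore.exceptions import BotoCoreError, ClientError

def download_with_retries(bucket, key, dest, attempts=5, delay=10):
    s3 = boto3.client('s3')
    for attempt in range(1, attempts + 1):
        try:
            s3.download_file(bucket, key, dest)
            return
        except (BotoCoreError, ClientError):
            if attempt == attempts:
                raise  # let the failure surface so cfn-signal can report it
            time.sleep(delay)  # give the endpoint/DNS time to become functional

download_with_retries('bucket', 'app.zip', '/tmp/app.zip')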
Question 137
A company wants to share CloudFormation templates across accounts in their AWS Organization. The templates should be version-controlled and teams should be able to deploy approved templates only. What solution provides this?
A. S3 bucket with cross-account access for template storage
B. AWS Service Catalog with portfolios and products
C. CodeCommit repository with cross-account access
D. CloudFormation Registry with public extensions
Answer: B

Explanation:

AWS Service Catalog for template sharing:

Service Catalog components:
  1. Product: CloudFormation template packaged as deployable product
  2. Portfolio: Collection of products
  3. Principal: Users/groups who can access portfolio
Configuration:
# Service Catalog Product
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  Portfolio:
    Type: AWS::ServiceCatalog::Portfolio
    Properties:
      DisplayName: "Approved Infrastructure"
      ProviderName: "Platform Team"
      
  Product:
    Type: AWS::ServiceCatalog::CloudFormationProduct
    Properties:
      Name: "VPC Template"
      Owner: "Platform Team"
      ProvisioningArtifactParameters:
        - Name: "v1.0"
          Info:
            LoadTemplateFromURL: "https://s3.amazonaws.com/bucket/templates/vpc-v1.0.yaml"
Cross-account sharing:
aws servicecatalog create-portfolio-share \
  --portfolio-id port-123 \
  --organization-node Type=ORGANIZATION,Value=o-abc123
Benefits:
  • Version control of templates
  • Approval workflow for new versions
  • Usage tracking and auditing
  • Constraints for parameter validation
  • Launch role configuration
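Once the portfolio is shared, a consuming team launches the approved product rather than the raw template. A boto3 sketch with placeholder product, artifact, and parameter names:
import boto3

sc = boto3.client('servicecatalog')

# Launch the approved VPC product in the consuming account
sc.provision_product(
    ProductName='VPC Template',
    ProvisioningArtifactName='v1.0',
    ProvisionedProductName='team-a-vpc',
    ProvisioningParameters=[
        {'Key': 'Environment', 'Value': 'dev'},
    ],
)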
Question 138
An Elastic Beanstalk environment uses scheduled scaling to handle predictable traffic patterns. The scheduled actions should only run on weekdays. How is this configured?
A. Configure cron expressions in scheduled scaling actions
B. Use CloudWatch Events to trigger scaling
C. Configure scheduled scaling in .ebextensions
D. Use recurrence schedule with day-of-week specification
Answer: D (or A, same mechanism)

Explanation:

Beanstalk scheduled scaling:

Console/CLI configuration:
aws autoscaling put-scheduled-action \
  --auto-scaling-group-name awseb-e-abc123-stack-AWSEBAutoScalingGroup-xyz \
  --scheduled-action-name scale-up-weekdays \
  --recurrence "0 8 * * 1-5" \
  --min-size 5 \
  --max-size 10 \
  --desired-capacity 5
Cron format: minute hour day-of-month month day-of-week
  • 0 8 * * 1-5 = 8:00 AM UTC on Monday-Friday
.ebextensions approach (Option C):
# .ebextensions/scheduled-scaling.config
Resources:
  ScaleUpWeekdays:
    Type: AWS::AutoScaling::ScheduledAction
    Properties:
      AutoScalingGroupName: 
        Ref: AWSEBAutoScalingGroup
      Recurrence: "0 8 * * 1-5"
      MinSize: 5
      MaxSize: 10
      DesiredCapacity: 5

Both approaches use cron expressions; in the day-of-week field, 0 or 7 = Sunday and 1 = Monday through 6 = Saturday, so 1-5 means Monday through Friday.
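The morning scale-up is typically paired with an evening scale-down. A boto3 sketch of the second scheduled action (the ASG name follows the CLI example above):
import boto3

autoscaling = boto3.client('autoscaling')

# Scale back down at 6:00 PM UTC on weekdays
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName='awseb-e-abc123-stack-AWSEBAutoScalingGroup-xyz',
    ScheduledActionName='scale-down-weekdays',
    Recurrence='0 18 * * 1-5',
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=2,
)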

Question 139
A CloudFormation stack creates a VPC with custom DHCP options. The template update changes the DHCP option set, but instances in the VPC aren't using the new DHCP options. What is happening?
A. DHCP options changes require VPC recreation
B. Instances need to renew DHCP lease to get new options
C. CloudFormation drift has occurred
D. DHCP option association wasn't updated
Answer: B

Explanation:

DHCP options propagation:

How DHCP options work:
  1. DHCP option set is associated with VPC
  2. Instances receive options via DHCP lease
  3. Lease renewal happens at specific intervals
When options change:
  • New instances immediately get new options
  • Existing instances keep old options until lease renewal
  • Lease renewal depends on lease duration (typically hours)
To force new options:
# Option 1: Restart instance networking
sudo dhclient -r && sudo dhclient

# Option 2: Stop and start instance
aws ec2 stop-instances --instance-ids i-1234567890abcdef0
aws ec2 start-instances --instance-ids i-1234567890abcdef0
CloudFormation behavior:
  • Successfully updates DHCP option set
  • Successfully associates with VPC
  • Cannot force instance DHCP renewal

For the exam: DHCP option changes propagate only when instances renew their lease, so expect a delay or force renewal with an instance action.

Question 140
A company uses Elastic Beanstalk with a load balanced environment. They want to configure the load balancer to use a custom SSL certificate from ACM. The environment currently uses HTTP only. What changes are needed?
A. Upload certificate to IAM and configure in .ebextensions
B. Configure HTTPS listener with ACM certificate ARN in environment settings
C. Enable HTTPS in the Beanstalk console and select ACM certificate
D. Both B and C are valid approaches
Answer: D

Explanation:

SSL configuration for Beanstalk:

Console approach (Option C):
  1. Environment → Configuration → Load Balancer
  2. Add listener: Port 443, HTTPS
  3. Select ACM certificate from dropdown
Configuration/CLI approach (Option B):
# .ebextensions/https.config
option_settings:
  aws:elb:listener:443:
    ListenerProtocol: HTTPS
    SSLCertificateId: arn:aws:acm:region:account:certificate/id
    InstancePort: 80
    InstanceProtocol: HTTP
For ALB:
option_settings:
  aws:elbv2:listener:443:
    ListenerEnabled: true
    Protocol: HTTPS
    SSLCertificateArns: arn:aws:acm:region:account:certificate/id
  aws:elbv2:listener:default:
    ListenerEnabled: false  # Disable HTTP if needed
ACM vs IAM certificates:
  • ACM: Recommended for ALB/NLB, auto-renewal
  • IAM: Legacy, required for Classic Load Balancer in some regions

Both console and .ebextensions approaches work; choose based on environment management preference.

Question 141
A CloudFormation template creates an Amazon Aurora cluster. During stack updates, the team wants to create a snapshot before any modifications. How can this be automated?
A. Use UpdateReplacePolicy: Snapshot
B. Create a custom resource that takes a snapshot before update
C. Configure DeletionPolicy: Snapshot
D. Use CloudFormation hooks with pre-update snapshot
Answer: B or D

Explanation:

Pre-update snapshots:

DeletionPolicy: Snapshot (Option C): Only takes snapshot when resource is DELETED, not updated.
UpdateReplacePolicy: Snapshot (Option A): Takes snapshot when resource is REPLACED during update, not for all updates.
Custom resource approach (Option B):
Resources:
  PreUpdateSnapshot:
    Type: Custom::Snapshot
    Properties:
      ServiceToken: !GetAtt SnapshotFunction.Arn
      ClusterIdentifier: !Ref AuroraCluster
      
  AuroraCluster:
    Type: AWS::RDS::DBCluster
    DependsOn: PreUpdateSnapshot

Lambda creates snapshot before cluster updates proceed.

CloudFormation hooks (Option D): Configure a hook that triggers on AWS::RDS::DBCluster updates and takes snapshot before proceeding.

For guaranteed pre-update snapshots, custom resources or hooks are necessary; built-in policies don't cover this scenario.
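A sketch of the Lambda behind the Option B custom resource's ServiceToken. It assumes the function is defined inline (so the cfnresponse helper is available) and does not wait for the snapshot to complete:
import boto3
import cfnresponse  # available to Lambda functions defined inline via ZipFile

rds = boto3.client('rds')

def handler(event, context):
    try:
        # Snapshot only on stack updates; skip Create and Delete events
        if event['RequestType'] == 'Update':
            cluster_id = event['ResourceProperties']['ClusterIdentifier']
            rds.create_db_cluster_snapshot(
                DBClusterIdentifier=cluster_id,
                DBClusterSnapshotIdentifier='pre-update-' + event['RequestId'][:8],
            )
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
    except Exception:
        cfnresponse.send(event, context, cfnresponse.FAILED, {})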

Question 142
An organization uses multiple Elastic Beanstalk applications across teams. They want to ensure all applications use the latest platform version. How can this be enforced and monitored?
A. Use AWS Config rule for Beanstalk platform compliance
B. Enable managed platform updates for all environments
C. Use EventBridge to detect platform version changes
D. Create a Lambda function that audits platform versions
Answer: B (for automation) and D (for monitoring)

Explanation:

Platform version management:

Managed platform updates (Option B):
# .ebextensions/managed-updates.config
option_settings:
  aws:elasticbeanstalk:managedactions:
    ManagedActionsEnabled: true
    PreferredStartTime: "Sun:02:00"
  aws:elasticbeanstalk:managedactions:platformupdate:
    UpdateLevel: minor
    InstanceRefreshEnabled: true
Settings:
  • UpdateLevel: patch, minor, or major
  • Automatic updates during maintenance window
  • Instance refresh ensures all instances updated
Monitoring/Auditing (Option D):
import boto3

def audit_platform_versions():
    eb = boto3.client('elasticbeanstalk')
    
    # Get the latest platform version for the branch
    platforms = eb.list_platform_versions(
        Filters=[{'Type': 'PlatformBranchName', 'Values': ['Python 3.9']}]
    )
    latest_arn = platforms['PlatformSummaryList'][0]['PlatformArn']
    
    # Flag every environment that is not on the latest platform
    envs = eb.describe_environments()
    for env in envs['Environments']:
        if env['PlatformArn'] != latest_arn:
            report_outdated(env)  # placeholder: e.g. publish to SNS or write a report

Combine managed updates for automation with auditing for visibility.

Question 143
A CloudFormation template creates an S3 bucket and needs to enable versioning only in production environments. How should this conditional configuration be implemented?
A. Use template conditions and Fn::If
B. Create separate templates for each environment
C. Use CloudFormation parameters with default values
D. Use AWS::NoValue to conditionally omit properties
Answer: A (with D for property handling)

Explanation:

Conditional configuration:

Template with conditions:
Parameters:
  Environment:
    Type: String
    AllowedValues: [dev, staging, production]

Conditions:
  IsProd: !Equals [!Ref Environment, production]

Resources:
  MyBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub "myapp-${Environment}"
      VersioningConfiguration: !If 
        - IsProd
        - Status: Enabled
        - !Ref AWS::NoValue  # Omit property entirely
AWS::NoValue behavior: When Fn::If returns AWS::NoValue, the entire property is omitted from the resource, as if it wasn't specified.
Alternative for complex conditionals:
VersioningConfiguration: !If 
  - IsProd
  - Status: Enabled
  - Status: Suspended

This explicitly sets versioning status in both cases.

Conditions provide single-template solution for environment-specific configurations.

Question 144
An Elastic Beanstalk worker environment processes jobs that can take up to 30 minutes. The default SQS visibility timeout is too short. How should this be configured?
A. Configure SQS queue visibility timeout in worker configuration
B. Modify visibility timeout in .ebextensions
C. Set InactivityTimeout in worker environment settings
D. Create SQS queue separately and configure worker to use it
Answer: A or B

Explanation:

Worker environment SQS configuration:

Option A - Environment configuration:
# Beanstalk configuration
option_settings:
  aws:elasticbeanstalk:sqsd:
    WorkerQueueURL: https://sqs.region.amazonaws.com/123456789/my-queue
    VisibilityTimeout: 1800  # 30 minutes
    HttpPath: /worker
    MimeType: application/json
Key settings:
  • VisibilityTimeout: Time message is hidden after delivery (1800 = 30 min)
  • HttpConnections: Maximum concurrent connections to worker
  • InactivityTimeout: Time to wait for worker response
  • RetentionPeriod: How long messages are kept
Best practice for long jobs:
  1. Set VisibilityTimeout > maximum processing time
  2. Set InactivityTimeout appropriately
  3. Consider implementing heartbeat pattern for very long jobs
If using custom queue (Option D): Create queue with appropriate settings, then configure worker to use that queue URL.

Beanstalk worker settings control how the environment interacts with SQS.
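On the application side, the worker daemon (sqsd) POSTs each SQS message to HttpPath and deletes the message only when it receives an HTTP 200. A minimal Flask sketch (run_long_job is a placeholder for the real processing):
from flask import Flask, request

application = Flask(__name__)

@application.route('/worker', methods=['POST'])
def process_job():
    job = request.get_json(force=True)
    run_long_job(job)  # may take up to 30 minutes; must finish within VisibilityTimeout
    return '', 200     # 200 tells sqsd the message can be deleted

def run_long_job(job):
    ...  # application-specific processing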

Question 145
A company uses CloudFormation to deploy a multi-tier application. The database tier should only be created once and never deleted, even if the entire stack is deleted. The application tier should be updated normally. How should this be structured?
A. Use DeletionPolicy: Retain on database resources
B. Use separate stacks for database and application tiers
C. Use CloudFormation stack policies to protect database resources
D. Both A and B for maximum protection
Answer: D

Explanation:

Protecting database resources:

Option A - DeletionPolicy:
Resources:
  Database:
    Type: AWS::RDS::DBInstance
    DeletionPolicy: Retain
    UpdateReplacePolicy: Retain
  • Retain: Resource survives stack deletion
  • Becomes orphaned (must manage separately)
Option B - Separate stacks:
database-stack:
  - RDS instance
  - Exports: DatabaseEndpoint
  
application-stack:
  - EC2 instances
  - ImportValue: DatabaseEndpoint
Benefits of separate stacks:
  • Independent lifecycle management
  • Different update frequencies
  • Better access control
  • Cleaner separation of concerns
Stack policy (Option C):
{
  "Statement": [{
    "Effect": "Deny",
    "Action": ["Update:Replace", "Update:Delete"],
    "Principal": "*",
    "Resource": "LogicalResourceId/Database"
  }]
}

Prevents accidental updates that would replace/delete database.
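The Option C stack policy can also be applied programmatically as a safety net. A boto3 sketch using the policy shown above (the stack name is a placeholder):
import json

import boto3

policy = {
    "Statement": [{
        "Effect": "Deny",
        "Action": ["Update:Replace", "Update:Delete"],
        "Principal": "*",
        "Resource": "LogicalResourceId/Database",
    }]
}

cfn = boto3.client('cloudformation')
cfn.set_stack_policy(StackName='my-stack', StackPolicyBody=json.dumps(policy))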

Best practice: Use separate stacks (Option B) for true independence, with DeletionPolicy: Retain as safety net.
Question 146
A CloudFormation template uses a macro to generate repetitive resources. The macro runs successfully during template processing but the generated resources have errors. How can the processed template be viewed for debugging?
A. Check CloudFormation events for processed template
B. Use aws cloudformation describe-template command
C. Process the template locally with aws cloudformation transform
D. Use CloudWatch Logs for macro execution output
Answer: C

Explanation:

Debugging CloudFormation macros:

Option C - View the processed template:
# Run the macro without deploying by creating a change set
# (CAPABILITY_AUTO_EXPAND is required for templates that use macros)
aws cloudformation create-change-set \
  --stack-name my-stack \
  --change-set-name macro-debug \
  --template-body file://template.yaml \
  --capabilities CAPABILITY_AUTO_EXPAND

# For a stack that has already been created or updated, fetch the template after macro execution
aws cloudformation get-template \
  --stack-name my-stack \
  --template-stage Processed
Template stages:
  • Original: Template as submitted
  • Processed: Template after macro execution
Debugging macro execution (Option D): If the macro Lambda function logs its input and output:
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def handler(event, context):
    logger.info(f"Input fragment: {event['fragment']}")
    processed = transform(event['fragment'])  # transform() is the macro's own logic
    logger.info(f"Output fragment: {processed}")
    return {'requestId': event['requestId'], 'status': 'success', 'fragment': processed}

CloudWatch Logs show macro input/output for each execution.

Question 147
An Elastic Beanstalk application uses a Classic Load Balancer with connection draining enabled. During deployments, some requests are still failing with connection reset errors. What should be investigated?
A. Connection draining timeout is too short
B. Deployment type isn't compatible with connection draining
C. Health check settings conflict with draining
D. Application doesn't handle graceful shutdown
Answer: A and D

Explanation:

Connection issues during deployment:

Connection draining behavior:
  1. Instance marked for removal
  2. LB stops sending new connections
  3. Existing connections continue for draining timeout
  4. After timeout, connections forcibly closed
Option A - Timeout too short:
option_settings:
  aws:elb:policies:
    ConnectionDrainingEnabled: true
    ConnectionDrainingTimeout: 300  # Increase from default 20s

If requests take longer than timeout, they're terminated.

Option D - Application graceful shutdown: Application should:
  1. Stop accepting new work when SIGTERM received
  2. Complete in-flight requests
  3. Close connections cleanly
  4. Exit when done
# Python example
import signal
import sys

accepting_requests = True

def shutdown_handler(signum, frame):
    global accepting_requests
    accepting_requests = False
    # Wait for in-flight requests to complete before exiting
    wait_for_completion()  # application-specific drain logic
    sys.exit(0)

signal.signal(signal.SIGTERM, shutdown_handler)
Combination issue: If draining timeout is 20s but requests take 30s AND app doesn't handle SIGTERM, connections reset.
Question 148
A CloudFormation template creates an ECS service with a desired count of 10 tasks. Stack creation times out waiting for the service to reach steady state. The ECS events show tasks are starting but being killed after health check failures. What should be configured in CloudFormation?
A. Increase stack timeout in CloudFormation settings
B. Configure service HealthCheckGracePeriodSeconds
C. Add CreationPolicy with longer timeout
D. Use WaitCondition for ECS service stabilization
Answer: B

Explanation:

ECS health check configuration:

The problem:
  1. Task starts
  2. ALB immediately health checks
  3. App isn't ready → health check fails
  4. ECS kills task
  5. Loop continues
Solution - Health check grace period:
Resources:
  MyService:
    Type: AWS::ECS::Service
    Properties:
      ServiceName: my-service
      TaskDefinition: !Ref TaskDef
      DesiredCount: 10
      HealthCheckGracePeriodSeconds: 120  # Wait 2 minutes before health checking
      LoadBalancers:
        - ContainerName: app
          ContainerPort: 80
          TargetGroupArn: !Ref TargetGroup
HealthCheckGracePeriodSeconds:
  • Gives tasks time to start before ECS evaluates health
  • Should exceed application startup time
  • Only applies when using load balancer
Additional considerations:
  • Target group health check interval and threshold
  • Application startup optimization
  • Container health checks vs LB health checks
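If the service already exists, the grace period can also be applied directly before re-running the stack update. A boto3 sketch with placeholder cluster and service names:
import boto3

ecs = boto3.client('ecs')

# Give tasks two minutes to start before load balancer health checks count
ecs.update_service(
    cluster='my-cluster',
    service='my-service',
    healthCheckGracePeriodSeconds=120,
)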
Question 149
A company uses CloudFormation with CodePipeline for infrastructure deployment. They want to implement a process where infrastructure changes are reviewed before deployment, showing exactly what will change. What should be implemented?
A. Add a manual approval action after CloudFormation action
B. Use CloudFormation change sets with review before execution
C. Implement a Lambda function that analyzes CloudFormation templates
D. Use CloudFormation drift detection before updates
Answer: B

Explanation:

Change set workflow:

Pipeline with change sets:
# CodePipeline stage configuration
- Name: CreateChangeSet
  Actions:
    - Name: CreateChangeSet
      ActionTypeId:
        Category: Deploy
        Provider: CloudFormation
      Configuration:
        ActionMode: CHANGE_SET_REPLACE
        StackName: my-stack
        ChangeSetName: pipeline-changeset

- Name: ReviewChanges
  Actions:
    - Name: ManualApproval
      ActionTypeId:
        Category: Approval
        Provider: Manual
      Configuration:
        NotificationArn: !Ref NotifyTopic
        CustomData: "Review CloudFormation changes"

- Name: ExecuteChangeSet
  Actions:
    - Name: ExecuteChangeSet
      ActionTypeId:
        Category: Deploy
        Provider: CloudFormation
      Configuration:
        ActionMode: CHANGE_SET_EXECUTE
        StackName: my-stack
        ChangeSetName: pipeline-changeset
Workflow:
  1. CHANGE_SET_REPLACE creates change set (no deployment)
  2. Manual approval - reviewer checks change set in console
  3. CHANGE_SET_EXECUTE applies changes
Change set shows:
  • Resources to be added, modified, deleted
  • Replacement vs in-place updates
  • Property changes
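During the manual approval, a reviewer inspects the change set in the console or through the API. A boto3 sketch that lists each pending change (names match the pipeline configuration above):
import boto3

cfn = boto3.client('cloudformation')

change_set = cfn.describe_change_set(
    StackName='my-stack',
    ChangeSetName='pipeline-changeset',
)
for change in change_set['Changes']:
    rc = change['ResourceChange']
    print(rc['Action'], rc['LogicalResourceId'], rc.get('Replacement', 'N/A'))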
Question 150
A DevOps team is implementing infrastructure as code for a complex application with dependencies between resources across multiple CloudFormation stacks. They need to manage the deployment order and handle cross-stack references efficiently. What approach should they use?
A. Use nested stacks with all resources in a single parent stack
B. Use independent stacks with exports/imports and deployment scripts
C. Use CloudFormation StackSets for coordinated deployment
D. Use AWS CDK with dependency management between stacks
Answer: D (for new projects) or B (for existing CloudFormation)

Explanation:

Managing complex stack dependencies:

Option D - AWS CDK:
from aws_cdk import App, Stack
from aws_cdk import aws_ec2 as ec2, aws_rds as rds
from constructs import Construct

class NetworkStack(Stack):
    def __init__(self, scope: Construct, id: str, **kwargs):
        super().__init__(scope, id, **kwargs)
        self.vpc = ec2.Vpc(self, "VPC")

class DatabaseStack(Stack):
    def __init__(self, scope: Construct, id: str, vpc: ec2.Vpc, **kwargs):
        super().__init__(scope, id, **kwargs)
        rds.DatabaseInstance(
            self, "DB",
            engine=rds.DatabaseInstanceEngine.POSTGRES,
            vpc=vpc,
        )

class ApplicationStack(Stack):
    def __init__(self, scope: Construct, id: str, vpc: ec2.Vpc, **kwargs):
        super().__init__(scope, id, **kwargs)
        # Use the VPC from the network stack when defining app resources

app = App()
network = NetworkStack(app, "Network")
database = DatabaseStack(app, "Database", vpc=network.vpc)
application = ApplicationStack(app, "Application", vpc=network.vpc)

# CDK handles dependency ordering and cross-stack references
app.synth()
CDK benefits:
  • Automatic dependency detection
  • Type-safe cross-stack references
  • Synthesizes to CloudFormation
  • Deploy in correct order: cdk deploy --all
Option B - Manual CloudFormation:
# deploy.sh
aws cloudformation deploy --stack-name network --template-file network.yaml
aws cloudformation deploy --stack-name database --template-file database.yaml
aws cloudformation deploy --stack-name app --template-file app.yaml

With exports/imports for references. Works but requires manual ordering.

For exam: Understand that CDK provides higher-level abstractions with automatic dependency management, while raw CloudFormation requires explicit management.

---

📚 Summary: Key Points for Domain 1

Critical Services and Concepts

  1. CodePipeline:
  • Stages, actions, artifacts
  • Cross-region and cross-account deployments
  • Manual approvals and automated gates
  • Integration with all AWS deployment services
  2. CodeBuild:
  • buildspec.yml structure (phases, artifacts, cache, reports)
  • VPC support for private resources
  • Custom Docker images
  • Environment variables from Parameter Store/Secrets Manager
  3. CodeDeploy:
  • Three platforms: EC2/On-premises, Lambda, ECS
  • appspec.yml for each platform
  • Deployment configurations (AllAtOnce, HalfAtATime, Canary, Linear)
  • Lifecycle hooks and rollback triggers
  • Blue/green vs in-place deployments
  4. CloudFormation:
  • Stack policies and DeletionPolicy
  • Nested stacks and StackSets
  • cfn-init, cfn-signal, CreationPolicy
  • Change sets for safe updates
  • Drift detection
  5. Elastic Beanstalk:
  • Deployment policies (All at once, Rolling, Immutable, Traffic splitting)
  • .ebextensions and .platform hooks
  • Environment configuration options
  • Blue/green via CNAME swap

Deployment Strategy Selection

Requirement → Recommended approach
  • Zero downtime, quick rollback → Blue/green
  • Gradual rollout with monitoring → Canary or linear
  • Cost-sensitive, acceptable brief downtime → Rolling
  • Test in production with real traffic → Traffic splitting/canary
  • Simple applications, dev/test → All at once

Common Exam Scenarios

  1. Cross-account deployments: IAM roles for cross-account access, artifact bucket policies, assume role permissions
  2. Rollback configurations: CloudWatch alarms, automatic rollback on failure, manual rollback procedures
  3. Security in pipelines: Secrets Manager for credentials, IAM least privilege, encryption at rest and in transit
  4. Pipeline optimization: Caching in CodeBuild, parallel actions, artifact size reduction
  5. Multi-region deployments: Cross-region CodePipeline actions, StackSets for infrastructure

Good luck with your certification exam!