Click "Show Answer & Explanation" to see detailed explanations
All answers are hidden by default to test your knowledge
Review the explanations to understand the reasoning behind each answer
Domain Overview
Domain 1 (SDLC Automation) accounts for 22% of the exam. This domain focuses on implementing and managing continuous integration and continuous delivery (CI/CD) pipelines, deploying applications using various strategies, and automating the software development lifecycle.
---
Syllabus Breakdown
Task Statement 1.1: Implement CI/CD Pipelines
Design and implement CI/CD pipelines using AWS services
Integrate source control, build, test, and deployment stages
Manage pipeline artifacts and dependencies
Implement pipeline notifications and monitoring
Task Statement 1.2: Integrate Automated Testing
Implement unit, integration, and end-to-end testing in pipelines
Configure test environments and test data management
Implement quality gates and approval processes
Task Statement 1.3: Build and Manage Artifacts
Implement artifact repositories and versioning
Manage dependencies and package management
Implement caching strategies for build optimization
Question 1
A company has a CodePipeline that deploys a web application to multiple AWS accounts (development, staging, production). The pipeline is in the tools account. Deployments to the production account are failing with "Access Denied" errors. The cross-account IAM role exists in the production account. What is the MOST likely cause?
A. The CodePipeline service role does not have permission to assume the cross-account role
B. The S3 artifact bucket is not replicated to the production account
C. The cross-account role trust policy does not allow the tools account to assume it
D. CodePipeline does not support cross-account deployments
Answer: A
Explanation:
For cross-account deployments in CodePipeline, the CodePipeline service role in the tools account must have sts:AssumeRole permission for the cross-account role in the target account. Additionally, the cross-account role must have a trust policy allowing the tools account. However, the question states the cross-account role "exists," implying it's configured correctly, so the most likely issue is the CodePipeline service role permissions. The S3 bucket must be accessible cross-account (via bucket policy), but the error message specifically indicates an assume role issue.
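As a sketch, the statement the tools-account CodePipeline service role would need might look like this (the account ID and role name are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AssumeCrossAccountDeployRole",
    "Effect": "Allow",
    "Action": "sts:AssumeRole",
    "Resource": "arn:aws:iam::111111111111:role/CrossAccountDeployRole"
  }]
}
```

The mirror-image trust policy on the production-account role must name the tools account as a trusted principal.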
Question 2
A development team uses AWS CodeBuild to build their Java application. Build times have increased from 5 minutes to 25 minutes. Investigation shows that Maven downloads all dependencies for every build. What should be implemented to reduce build times?
A. Use a larger compute type for CodeBuild
B. Configure S3 caching for the Maven .m2 directory in the buildspec file
C. Move the build process to EC2 instances with dependencies pre-installed
D. Use CodeArtifact to host dependencies closer to the build environment
Answer: B
Explanation:
CodeBuild supports caching via S3 to persist files between builds. For Maven projects, caching the .m2/repository directory dramatically reduces build times by avoiding repeated downloads. The buildspec.yml would include:
cache:
  paths:
    - '/root/.m2/**/*'
While option D (CodeArtifact) would help, it still requires downloading dependencies each build without local caching. Option A wouldn't reduce dependency download time. Option C introduces operational overhead.
Question 3
A company uses AWS CodeDeploy to deploy applications to EC2 instances in an Auto Scaling group. During a deployment, new instances are launched by the Auto Scaling group but receive the old application version. How should this be resolved?
A. Configure the Auto Scaling group to use a lifecycle hook that triggers CodeDeploy
B. Use an immutable deployment configuration
C. Configure CodeDeploy to use a blue/green deployment type
D. Suspend the Auto Scaling group during deployments
Answer: A
Explanation:
When Auto Scaling launches new instances during or between CodeDeploy deployments, those instances come from the existing AMI and therefore run whatever application version is baked into it, not the latest deployment. To ensure new instances receive the current revision, configure an Auto Scaling lifecycle hook that triggers a CodeDeploy deployment to newly launched instances (CodeDeploy registers this hook automatically when the Auto Scaling group is associated with the deployment group). This ensures consistency across all instances in the deployment group. Option D would work but creates operational complexity and potential availability issues. Option C changes the deployment model entirely. Option B (immutable) is an Elastic Beanstalk concept, not a CodeDeploy one.
Question 4
A team is implementing a CI/CD pipeline using AWS CodePipeline. They need to run integration tests against a deployed application in a test environment before proceeding to production deployment. The test takes 15 minutes to complete. Which approach should they use?
A. Add a CodeBuild action in the test stage that runs the integration tests
B. Add a Lambda invoke action that triggers the test and immediately returns success
C. Add a manual approval action and run tests outside the pipeline
D. Add a CodeBuild action with a custom image that includes test frameworks, configured with appropriate timeout
Answer: D
Explanation:
CodeBuild is ideal for running integration tests within a pipeline. The default timeout is 60 minutes (configurable up to 8 hours), which accommodates the 15-minute test. Using a custom Docker image with pre-installed test frameworks optimizes build time. Option A is partially correct but doesn't mention the custom image optimization. Option B would complete before tests finish. Option C introduces manual intervention unnecessarily.
Question 5
A company wants to implement a deployment strategy that routes 10% of traffic to a new Lambda function version, monitors for errors for 10 minutes, then shifts all traffic if successful. Which configuration achieves this?
A. CodeDeploy with deployment configuration Canary10Percent10Minutes
B. CodeDeploy with deployment configuration Linear10PercentEvery1Minute
C. API Gateway with canary release settings
D. Lambda alias with weighted routing at 10%
Answer: A
Explanation:
CodeDeploy's Canary10Percent10Minutes deployment configuration shifts exactly 10% of traffic to the new version, waits 10 minutes (allowing monitoring and potential rollback), then shifts the remaining 90%. This matches the requirement exactly. Linear10PercentEvery1Minute would incrementally shift 10% every minute, completing in 10 minutes total. Option C requires manual configuration. Option D is manual and doesn't automate the full shift.
Question 6
A DevOps engineer needs to store sensitive database credentials for use in CodeBuild. The credentials must be encrypted and automatically rotated every 30 days. Which solution meets these requirements?
A. Store credentials in CodeBuild environment variables with encryption enabled
B. Store credentials in Systems Manager Parameter Store SecureString with a rotation Lambda function
C. Store credentials in AWS Secrets Manager with automatic rotation enabled
D. Store credentials in an encrypted S3 object and download during build
Answer: C
Explanation:
AWS Secrets Manager is designed for storing and automatically rotating credentials. It provides native integration with RDS, Redshift, and DocumentDB for automatic rotation, and supports custom rotation Lambda functions for other credential types. CodeBuild can reference Secrets Manager secrets in the buildspec using secrets-manager environment variable references. Parameter Store (option B) can store secrets but doesn't have built-in rotation; you'd need to implement custom rotation. Option A doesn't support rotation. Option D is operationally complex.
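A minimal buildspec sketch showing the Secrets Manager reference (the secret name and JSON key are placeholders):

```yaml
version: 0.2
env:
  secrets-manager:
    # VARIABLE_NAME: "secret-id:json-key"
    DB_PASSWORD: "prod/db/credentials:password"
phases:
  build:
    commands:
      - ./run-integration-tests.sh   # DB_PASSWORD is available as an env var here
```

The secret value is resolved at build time and never stored in the project configuration.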
Question 7
A company's CodePipeline includes a source stage using CodeCommit, a build stage using CodeBuild, and a deploy stage using CodeDeploy. The pipeline should only trigger when changes are made to the 'main' branch, not feature branches. How should this be configured?
A. Configure the CodeCommit trigger in CodePipeline to filter by branch name
B. Create a CloudWatch Events rule with a branch filter pattern
C. Configure a CodeCommit trigger that only fires for the main branch
D. Add a Lambda function as the first action to check the branch name
Answer: A
Explanation:
CodePipeline's native CodeCommit source action triggers only on the branch specified in its configuration. When creating the source action, you specify the repository and branch name (e.g., "main"). Under the hood, the trigger is implemented with a CloudWatch Events/EventBridge rule that detects changes to that branch, which is what option B describes. The best answer is A because the branch filter is set directly in the CodePipeline source action configuration; option B describes the underlying mechanism rather than the configuration step.
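For reference, an EventBridge event pattern that matches only pushes to the main branch might look like this (the repository ARN is a placeholder):

```json
{
  "source": ["aws.codecommit"],
  "detail-type": ["CodeCommit Repository State Change"],
  "resources": ["arn:aws:codecommit:us-east-1:111111111111:my-repo"],
  "detail": {
    "event": ["referenceUpdated"],
    "referenceType": ["branch"],
    "referenceName": ["main"]
  }
}
```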
Question 8
An application team uses AWS CodeBuild with a managed Ubuntu image. The build requires a commercial tool that must be installed during every build, adding 5 minutes to build time. What is the MOST efficient solution?
A. Add installation commands to the install phase of buildspec.yml with caching enabled
B. Create a custom Docker image with the tool pre-installed and use it as the build environment
C. Use an EC2 build fleet with the tool pre-installed
D. Store the tool in S3 and download it with caching
Answer: B
Explanation:
Creating a custom Docker image with pre-installed tools is the most efficient approach. The custom image can be stored in Amazon ECR and referenced in the CodeBuild project configuration. This eliminates installation time completely for each build. Option A with caching might help but still requires initial installation in each new build environment. Option C (EC2 build fleet) introduces unnecessary complexity. Option D still requires extraction/installation time.
Question 9
A company uses CodePipeline with CodeDeploy for EC2 deployments. They need to receive notifications when deployments fail so the on-call team can respond. Which approach provides immediate notification?
A. Configure CodeDeploy to send SNS notifications on deployment failure
B. Create a CloudWatch Events rule for CodeDeploy deployment failure events that triggers SNS
C. Create a CloudWatch alarm on the CodeDeploy failure metric
D. Use CodePipeline notifications feature with SNS
Answer: B or D
Explanation:
Both options B and D work, but the question asks about immediate notification. CloudWatch Events (EventBridge) provides near real-time event-driven notifications. CodePipeline's native notification feature also uses EventBridge under the hood. For CodeDeploy-specific failures, option B with a rule like:
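A sketch of such an event pattern (field values follow the CodeDeploy event format; verify against your region's events):

```json
{
  "source": ["aws.codedeploy"],
  "detail-type": ["CodeDeploy Deployment State-change Notification"],
  "detail": {
    "state": ["FAILURE"]
  }
}
```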
This triggers an SNS topic for immediate notification. Option D would catch pipeline-level failures which includes CodeDeploy failures when CodeDeploy is a pipeline stage.
Question 10
A DevOps team needs to implement a blue/green deployment for an application running on EC2 instances behind an Application Load Balancer. The deployment should automatically roll back if CloudWatch alarms indicate increased error rates. Which configuration is required?
A. CodeDeploy with blue/green deployment type, ALB target groups, and alarm-based automatic rollback
B. Elastic Beanstalk with immutable deployment policy
C. CloudFormation with AutoScalingReplacingUpdate policy
D. CodePipeline with parallel deploy actions to blue and green environments
Answer: A
Explanation:
AWS CodeDeploy supports blue/green deployments for EC2 instances using ALB target groups. The deployment configuration includes:
Two target groups (blue and green)
CodeDeploy shifts traffic between target groups
CloudWatch alarms can trigger automatic rollback if error thresholds are exceeded
The rollback configuration in CodeDeploy can specify alarms that, when triggered, automatically roll back the deployment by shifting traffic back to the original target group.
Question 11
A team is using AWS CodeArtifact as their npm package repository. Developers need to configure their local npm clients to use CodeArtifact. The authentication token expires every 12 hours. What is the recommended approach for local development?
A. Store the CodeArtifact token in .npmrc permanently
B. Use the aws codeartifact login command before running npm commands
C. Create an IAM user with long-term credentials for CodeArtifact access
D. Configure npm to use the CodeArtifact endpoint without authentication
Answer: B
Explanation:
The aws codeartifact login command retrieves an authentication token and configures npm automatically. The command:
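The domain, owner account, and repository below are placeholders:

```
aws codeartifact login --tool npm \
    --domain my-domain \
    --domain-owner 111111111111 \
    --repository my-repo
```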
This updates the user's npm configuration with the token. Since tokens expire after 12 hours by default (the duration is configurable, from 15 minutes up to a maximum of 12 hours), developers should run this command regularly or script it into their workflow. Option A is insecure, and the token would expire anyway. Option C creates security risks with long-term credentials. Option D wouldn't work because authentication is required.
Question 12
An organization requires that all production deployments receive approval from the security team before proceeding. The approval must be documented for audit purposes. How should this be implemented in CodePipeline?
A. Add a manual approval action with SNS notification to the security team
B. Implement a Lambda function that checks a ticketing system for approval
C. Use IAM policies to require security team credentials for deployment
D. Add a CodeBuild action that pauses for approval input
Answer: A
Explanation:
CodePipeline's manual approval action is designed for this use case. Configuration includes:
SNS topic for notification (emails the security team)
Custom approval message with deployment details
Optional URL to review changes
Comments field for approval documentation
When approved/rejected, CodePipeline records:
Who approved/rejected (IAM identity)
When the action was taken
Comments provided
This information is available in CloudTrail and pipeline history for audit purposes. Option B could work but adds complexity. Options C and D don't provide the workflow CodePipeline approval actions offer.
Question 13
A company uses CodeBuild for their CI process. They need to run unit tests and generate a test coverage report. The coverage report must be stored and viewable in the AWS Console. Which CodeBuild feature should be used?
A. Build artifacts uploaded to S3
B. CodeBuild Reports with coverage report type
C. CloudWatch Logs with custom metrics
D. Build badges on the repository
Answer: B
Explanation:
CodeBuild Reports feature supports test and code coverage reports. In buildspec.yml:
reports:
  coverage-report:
    files:
      - 'coverage/clover.xml'
    file-format: CLOVERXML  # or COBERTURAXML, JACOCOXML, etc.
Supported coverage formats include Clover, Cobertura, JaCoCo, and SimpleCov. Reports are viewable in the CodeBuild console with trend analysis across builds. Option A stores files but doesn't provide console visualization. Option C is for logs, not reports. Option D shows build status, not coverage.
Question 14
A development team has a monorepo containing multiple microservices. They want to configure CodePipeline to only build and deploy services that have changed, not all services for every commit. What approach should they implement?
A. Create separate pipelines for each microservice with path-based triggers using CloudWatch Events
B. Use a single pipeline with conditional actions based on file changes
C. Implement a Lambda function that analyzes git changes and triggers appropriate pipelines
D. Use CodeBuild batch builds with dynamic project selection
Answer: A or C
Explanation:
For monorepo patterns with selective builds:
Option A: Create separate pipelines per microservice, triggered selectively. Because CodeCommit CloudWatch Events don't natively carry changed file paths, a small Lambda enrichment step typically inspects the commit so that each pipeline is triggered only when its service's files change.
Option C: A Lambda function can:
Receive CodeCommit events
Use Git APIs to determine changed files
Start only the relevant pipelines using start-pipeline-execution
This is a common pattern because CodePipeline doesn't natively support path-based filtering. The Lambda approach offers more flexibility for complex monorepo structures.
AWS has since added Git-based trigger filtering to CodePipeline (the V2 pipeline type), which can filter on file paths natively and simplifies this pattern.
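The path-matching core of the Lambda approach can be sketched as plain Python; the service-to-pipeline mapping below is hypothetical, and the actual boto3 call is shown only as a comment so the example stays self-contained:

```python
# Hypothetical monorepo layout: service directory prefix -> pipeline name.
SERVICE_PIPELINES = {
    "services/orders/": "orders-pipeline",
    "services/billing/": "billing-pipeline",
}

def pipelines_to_trigger(changed_files):
    """Return the set of pipeline names whose service paths changed."""
    triggered = set()
    for path in changed_files:
        for prefix, pipeline in SERVICE_PIPELINES.items():
            if path.startswith(prefix):
                triggered.add(pipeline)
    return triggered

changed = ["services/orders/app.py", "services/orders/Dockerfile", "README.md"]
print(sorted(pipelines_to_trigger(changed)))  # -> ['orders-pipeline']

# A real handler would then start each selected pipeline, e.g.:
#   boto3.client("codepipeline").start_pipeline_execution(name=pipeline)
```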
Question 15
An application runs on Amazon ECS with Fargate. The team wants to implement blue/green deployments with traffic shifting and automatic rollback capabilities. Which combination of services should be used?
A. CodePipeline with ECS standard deployment action
B. CodePipeline with CodeDeploy ECS deployment action
C. CodePipeline with CloudFormation deployment action
D. CodePipeline with custom Lambda action for ECS updates
Answer: B
Explanation:
AWS CodeDeploy supports blue/green deployments for Amazon ECS services. The configuration requires:
Application Load Balancer with two target groups
ECS service configured for CodeDeploy deployment controller
appspec.yml defining the task definition and container details
The standard ECS deployment action (option A) only supports rolling updates, not blue/green.
Question 16
A company's CodePipeline uses S3 as the artifact store. Build artifacts from CodeBuild are several gigabytes in size and are causing slow pipeline execution. What optimization should be implemented?
A. Use CodeArtifact instead of S3 for artifacts
B. Enable S3 Transfer Acceleration on the artifact bucket
C. Reduce artifact size by excluding unnecessary files in buildspec artifacts section
D. Move the pipeline to a region closer to development teams
Answer: C
Explanation:
The most effective optimization is reducing artifact size at the source. In buildspec.yml, carefully specify only necessary files:
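A sketch of a trimmed artifacts section (directory names and patterns are placeholders; exclude-paths is available in recent buildspec versions):

```yaml
artifacts:
  base-directory: build/dist     # package only the built output
  files:
    - 'app/**/*'
    - 'appspec.yml'
    - 'scripts/*'
  exclude-paths:
    - '**/*.map'                 # drop source maps from the artifact
```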
Using discard-paths and base-directory options helps minimize artifact size. Also consider:
Excluding test files, documentation, source maps
Compressing artifacts
Using artifact caching instead of passing large unchanged files
Option B adds cost with minimal benefit for transfers between AWS services in the same region. Option A is for packages, not build artifacts. Option D doesn't address artifact size at all.
Question 17
A DevOps engineer is configuring CodeDeploy for an on-premises server fleet. The servers can communicate with AWS over the internet. What must be configured on the servers for CodeDeploy to work?
A. AWS CLI and IAM user credentials
B. CodeDeploy agent and IAM instance profile
C. CodeDeploy agent and IAM user credentials with appropriate permissions
D. SSM agent and IAM role
Answer: C
Explanation:
For on-premises servers with CodeDeploy:
CodeDeploy Agent must be installed and running on each server
IAM User credentials (access key/secret key) must be configured because on-premises servers cannot use IAM instance profiles (those are EC2-only)
The IAM user needs permissions to:
Access S3 buckets containing deployment artifacts
Communicate with CodeDeploy service
Configuration file location: /etc/codedeploy-agent/conf/codedeploy.onpremises.yml contains the IAM credentials and region.
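A minimal codedeploy.onpremises.yml might look like this (the credentials, user ARN, and region are placeholders):

```yaml
---
aws_access_key_id: AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key: <secret-access-key>
iam_user_arn: arn:aws:iam::111111111111:user/CodeDeployOnPremUser
region: us-east-1
```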
Option B is incorrect because instance profiles are EC2-specific. Option D (SSM) is not required for CodeDeploy (though it can complement it).
Question 18
A CodePipeline has source, build, and deploy stages. The team wants to add automated security scanning that checks for vulnerabilities in dependencies before the build stage. The scan results should block the pipeline if critical vulnerabilities are found. Which approach is recommended?
A. Add a CodeBuild action before the build stage that runs security scanning tools
B. Configure Amazon Inspector to scan the source repository
C. Add a Lambda action that triggers AWS SecurityHub analysis
D. Use CodeGuru Security in the source stage
Answer: A
Explanation:
Adding a CodeBuild action for security scanning is the most flexible approach. The CodeBuild project can run tools like:
OWASP Dependency-Check
Snyk
npm audit / pip-audit
Trivy for container images
The buildspec can be configured to fail the build (exit code non-zero) if critical vulnerabilities are found:
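As a sketch, for a Node.js project the failing gate could be as simple as:

```yaml
phases:
  build:
    commands:
      # npm audit exits non-zero when vulnerabilities at or above the
      # given severity are found, which fails the build and stops the pipeline
      - npm audit --audit-level=critical
```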
Pipeline stops if CodeBuild reports failure. Option D (CodeGuru Security) is newer and can be integrated but the question implies more general vulnerability scanning.
Question 19
A company uses Elastic Beanstalk for their web application. They want to update the application with zero downtime and the ability to quickly roll back. They also want to run the new version alongside the old version temporarily to compare performance. Which deployment policy should they use?
Answer: Traffic Splitting
Explanation:
The Traffic Splitting deployment policy provides:
Configurable percentage of traffic routed to new version
Evaluation period for monitoring
Automatic rollback if health checks fail
Quick rollback by terminating new instances
This matches the requirements of running both versions simultaneously for comparison. Option C (blue/green with CNAME swap) also works but doesn't allow percentage-based traffic splitting. Option B (immutable) replaces instances but doesn't maintain both versions simultaneously after deployment completes.
Question 20
A development team needs to share build artifacts between a CodeBuild project in the us-east-1 region and a deployment in the eu-west-1 region within the same CodePipeline. How should this be configured?
A. Configure cross-region artifact replication in the CodePipeline settings
B. Manually copy artifacts to S3 in the target region
C. Use a CodeBuild action in each region with separate artifact buckets
D. Enable S3 cross-region replication on the artifact bucket
Answer: A
Explanation:
CodePipeline natively supports cross-region actions. When configuring a cross-region action, CodePipeline automatically:
Creates an artifact bucket in the target region (or uses one you specify)
Replicates necessary artifacts to the target region's bucket
Handles encryption key management across regions
Configuration requires:
Specifying the region for each action
Ensuring IAM roles have cross-region permissions
KMS keys in each region if using customer-managed keys
This is configured in the pipeline structure by setting the region property on actions that need to run in different regions.
Question 21
A company has a CodePipeline that deploys to three environments: dev, staging, and prod. They want staging and prod deployments to wait for a minimum time after the previous environment's deployment before proceeding, to allow for testing. How should this be implemented?
A. Add wait actions between deployment stages
B. Add manual approval actions with estimated wait times in notifications
C. Use Lambda actions that implement wait logic using Step Functions
D. Configure deployment configuration with minimum wait time
Answer: C (or a combination approach)
Explanation:
CodePipeline doesn't have a native "wait" action. Options include:
Lambda + Step Functions (Option C): Create a Lambda action that starts a Step Functions workflow containing a Wait state. When the wait completes, the workflow signals CodePipeline to continue via put-job-success-result.
Alternative approach using a manual approval action plus automation:
The pipeline pauses at the approval action
A scheduled CloudWatch Events rule triggers a Lambda function after the wait period
The Lambda function approves the pending action via the PutApprovalResult API
Option B with manual approvals works but requires human intervention. There's no "wait action" in CodePipeline (option A), and CodeDeploy configurations don't control inter-stage timing (option D).
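A minimal Step Functions definition for the wait portion might look like this (the Lambda function name is a placeholder; that function would report back to CodePipeline via put-job-success-result or PutApprovalResult):

```json
{
  "StartAt": "WaitForSoakPeriod",
  "States": {
    "WaitForSoakPeriod": {
      "Type": "Wait",
      "Seconds": 3600,
      "Next": "SignalPipeline"
    },
    "SignalPipeline": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:111111111111:function:approve-pipeline",
      "End": true
    }
  }
}
```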
Question 22
An organization wants to prevent direct commits to the main branch of their CodeCommit repository. All changes must go through pull requests that require at least two approvals. How should this be configured?
A. Create an IAM policy denying GitPush to the main branch
B. Configure branch-level permissions and approval rule templates
C. Use a Lambda trigger to reject direct commits
D. Implement pre-commit hooks on developer machines
Answer: B
Explanation:
CodeCommit provides:
Approval Rule Templates: Define rules requiring specific numbers of approvals and optionally specific approvers (by IAM ARN or wildcard patterns)
Branch-level Permissions: IAM policies can restrict push access to specific branches:
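A sketch of such a statement, attached to developer roles (the repository ARN is a placeholder; codecommit:References is the condition key AWS provides for branch-level restrictions):

```json
{
  "Effect": "Deny",
  "Action": [
    "codecommit:GitPush",
    "codecommit:PutFile",
    "codecommit:MergeBranchesByFastForward"
  ],
  "Resource": "arn:aws:codecommit:us-east-1:111111111111:my-repo",
  "Condition": {
    "StringEqualsIfExists": {
      "codecommit:References": ["refs/heads/main"]
    }
  }
}
```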
The combination ensures only merged pull requests (after approval) update the main branch. Option D doesn't work because local hooks can be bypassed.
Question 23
A CodeBuild project needs to access a private RDS database during integration tests. The database is in a private subnet with no internet access. How should CodeBuild be configured?
A. Configure CodeBuild with VPC settings specifying private subnets and security groups
B. Create a VPC endpoint for RDS in the private subnet
C. Use RDS Proxy with public accessibility
D. Set up a NAT gateway for CodeBuild to access RDS
Answer: A
Explanation:
CodeBuild can be configured to run inside a VPC:
Specify VPC ID
Specify private subnet IDs (CodeBuild runs in these subnets)
Specify security group IDs
When running in VPC:
CodeBuild can access VPC resources (RDS, ElastiCache, etc.)
For internet access (downloading dependencies), you need NAT Gateway or VPC endpoints
Consider using S3 and CodeArtifact VPC endpoints for build dependencies
Security group must allow outbound traffic to RDS and RDS security group must allow inbound from CodeBuild security group.
Question 24
A pipeline uses AWS CodeDeploy to deploy a containerized application to Amazon ECS. The team wants to validate the new deployment by running synthetic tests before shifting production traffic. Which CodeDeploy feature supports this?
A. BeforeInstall lifecycle hook
B. AfterInstall lifecycle hook
C. BeforeAllowTraffic lifecycle hook with Lambda function
D. ValidateService lifecycle hook
Answer: C
Explanation:
For ECS blue/green deployments, CodeDeploy supports Lambda-based hooks:
BeforeInstall: Runs before the replacement task set is created
AfterInstall: Runs after the replacement task set is created, before any traffic shifts
BeforeAllowTraffic: Runs after installation, before production traffic shifts to the replacement
AfterAllowTraffic: Runs after traffic has shifted to the replacement
BeforeAllowTraffic is ideal for validation testing because:
New task set is running and accessible via test target group
Production traffic still goes to original task set
Lambda function can run synthetic tests against test endpoint
If tests fail, Lambda returns failure and deployment rolls back
Question 25
A team uses CodePipeline with a GitHub source. They want the pipeline to trigger only for pull request merges to the main branch, not for direct pushes. How should this be configured?
A. Use GitHub webhook with event filtering
B. Configure CodePipeline GitHub source action with pull request filter
C. Use AWS CodeStar Connections with trigger filters
D. Add a Lambda function to verify the commit was from a merged PR
Answer: C
Explanation:
AWS CodeStar Connections (which replaced GitHub OAuth tokens for CodePipeline) supports trigger configuration with filters:
When creating a pipeline with GitHub (via CodeStar Connections), you can configure:
Push triggers: Trigger on push to specified branches
Pull request triggers: Trigger on PR events (opened, updated, merged)
Tag triggers: Trigger on tag creation
For the requirement, configure:
Pipeline trigger type: Push
Branch filter: main
This triggers only when commits land on main (which happens after PR merge)
Alternatively, use EventBridge with GitHub events and filter for merged PR events, then trigger the pipeline.
Question 26
A development team wants to implement feature flags to control the rollout of new features without redeploying the application. Which AWS service combination provides this capability?
A. AppConfig with feature flag configuration profile
B. Systems Manager Parameter Store with application polling
C. Lambda@Edge for traffic routing
D. API Gateway with stage variables
Answer: A
Explanation:
AWS AppConfig (part of Systems Manager) provides feature flag functionality:
Feature Flag configuration profile type specifically designed for feature flags
Application polls AppConfig or uses cached configuration
AppConfig is preferred over Parameter Store for feature flags because it provides deployment strategies, validation, and rollback capabilities specifically designed for configuration changes.
Question 27
A company runs CodeBuild projects frequently throughout the day. They notice that builds are sometimes delayed waiting for compute capacity. What solution ensures builds start immediately?
A. Increase the build timeout setting
B. Configure reserved capacity for CodeBuild
C. Use larger compute types that have more availability
D. Configure a CodeBuild fleet with persistent instances
Answer: D
Explanation:
CodeBuild supports two capacity types:
On-demand (default): Build environments provisioned on-demand; may have slight delays during peak usage
Reserved capacity (Fleets): Pre-provisioned build instances that are always available:
Eliminate cold start delays
Faster build starts
Cost-effective for consistent build workloads
Instances remain available between builds
Fleet configuration includes:
Compute type
Number of instances
Environment type
For consistent, immediate build starts, reserved capacity fleets are recommended for teams with frequent builds.
Question 28
A CodePipeline needs to deploy the same application to 5 AWS accounts (dev, qa, staging, uat, prod). Creating a 15-stage pipeline is unmanageable. What architecture pattern should be used?
A. Use parallel deployment actions within a single stage for all environments
B. Implement a fan-out pattern using Step Functions to orchestrate parallel pipelines
C. Create separate pipelines per environment triggered by the previous pipeline's success
D. Use CodePipeline stages with multiple parallel actions per environment tier
Answer: B or D (depending on requirements)
Explanation:
For multi-account deployments at scale:
Option D - Grouped stages with parallel actions:
Stage 1: Source
Stage 2: Build
Stage 3: Deploy to Dev + QA (parallel actions, same tier)
Stage 4: Manual Approval
Stage 5: Deploy to Staging + UAT (parallel)
Stage 6: Approval
Stage 7: Deploy to Prod
This reduces stages while maintaining logical separation.
Option B - Step Functions orchestration:
For more complex scenarios:
CodePipeline triggers Step Functions
Step Functions manages parallel deployments across accounts
More flexibility for conditional logic, retries, error handling
For the exam, option D is the more "AWS-native" approach using CodePipeline's parallel actions feature.
Question 29
A company uses AWS Elastic Beanstalk with a load-balanced environment. During deployments, users sometimes see errors while instances are being updated. The team wants to eliminate any user-facing errors during deployments. Which deployment policy should they select?
A. Rolling
B. Rolling with additional batch
C. Immutable
D. All at once
Answer: C (or Traffic Splitting)
Explanation:
Immutable deployment:
Launches a temporary Auto Scaling group with new version
Full capacity maintained throughout deployment
New instances pass health checks before traffic shifts
Original instances only terminated after new ones are healthy
If deployment fails, only temporary instances are terminated
This guarantees no user-facing errors because:
Original healthy instances continue serving traffic
New instances are validated before receiving traffic
Traffic only shifts to new instances after health checks pass
Rolling with additional batch maintains full capacity, but requests can still reach instances while they are being updated in place. Traffic Splitting also avoids user-facing errors and additionally allows percentage-based canary testing.
Question 30
A DevOps engineer is implementing a deployment pipeline for a Lambda function. The function must be deployed using the existing alias "prod" with traffic shifting. After deployment, a validation Lambda function must verify the new version works correctly. If validation fails, traffic should automatically revert to the previous version. Which services and configuration are required?
A. CodeDeploy with Lambda deployment, appspec.yml with hooks
B. CodePipeline with Lambda deployment action
C. CloudFormation with AWS::Lambda::Alias and CodeDeploy integration
D. Lambda alias with provisioned concurrency and CloudWatch alarms
Answer: A
Explanation:
CodeDeploy Lambda deployments provide:
Alias traffic shifting: Configure in deployment configuration (Canary, Linear, AllAtOnce)
Validation hooks: appspec.yml with BeforeAllowTraffic and AfterAllowTraffic hooks
Automatic rollback: If validation function returns failure, CodeDeploy automatically shifts traffic back to previous version
The validation Lambda function receives deployment ID and lifecycle hook information, runs tests, and calls put-lifecycle-event-hook-execution-status with Succeeded or Failed.
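A minimal appspec.yml sketch for this setup (the function name, alias, version numbers, and validation function name are placeholders):

```yaml
version: 0.0
Resources:
  - myLambdaFunction:
      Type: AWS::Lambda::Function
      Properties:
        Name: "my-function"        # placeholder function name
        Alias: "prod"
        CurrentVersion: "1"
        TargetVersion: "2"
Hooks:
  - BeforeAllowTraffic: "validateDeployment"   # placeholder validation Lambda name
```

The traffic-shifting pattern (Canary, Linear, AllAtOnce) is set on the deployment configuration, not in the appspec.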
Question 31
A company has multiple development teams using CodePipeline. Each team should only be able to view and manage their own pipelines. How should access be controlled?
A. Create separate AWS accounts per team
B. Use resource tags and tag-based IAM policies
C. Create IAM groups per team with pipeline-specific policies
D. Use AWS Organizations SCPs to restrict pipeline access
Answer: B
Explanation:
Tag-based access control in IAM:
Tag resources: Each team's pipelines tagged with Team: team-name
Tag IAM principals: Users/roles tagged with their team identifier
This scales better than creating resource-specific policies (option C) and doesn't require separate accounts (option A). SCPs (option D) don't provide granular resource-level control.
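A sketch of the tag-based IAM policy (the `Team` tag key is an assumption; some codepipeline List* actions do not support resource tags and would need a separate statement):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "codepipeline:*",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/Team": "${aws:PrincipalTag/Team}"
        }
      }
    }
  ]
}
```

With this single policy, each principal's `Team` tag is matched against the pipeline's `Team` tag at request time, so no per-team policy maintenance is needed.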
Question 32
A CodeBuild project builds a Docker image and pushes it to Amazon ECR. The build is failing with "no basic auth credentials" error when pushing to ECR. What is the likely cause and solution?
A. The buildspec is missing the ECR login command
B. The CodeBuild service role lacks ECR permissions
C. ECR repository doesn't exist
D. Docker daemon is not running in CodeBuild
Answer: A
Explanation:
ECR requires authentication before pushing images. The buildspec must include the ECR login command:
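For example, a pre_build phase along these lines (`AWS_ACCOUNT_ID` and `AWS_DEFAULT_REGION` are assumed to be set as environment variables on the project):

```yaml
pre_build:
  commands:
    - aws ecr get-login-password --region "$AWS_DEFAULT_REGION" | docker login --username AWS --password-stdin "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com"
```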
Option B (IAM permissions) would cause a different error message. The "no basic auth credentials" specifically indicates missing login.
Question 33
A company wants to enforce that all CodeBuild projects use VPC configuration and cannot access the public internet during builds. How should this be enforced organization-wide?
A. Create an SCP denying CodeBuild project creation without VPC configuration
B. Use AWS Config rules to check CodeBuild configuration
C. Implement a Lambda function triggered on CodeBuild project creation
D. Use CodeBuild service control policies
Answer: A (for prevention; B complements it for monitoring)
Explanation:
Service Control Policies (SCPs) in AWS Organizations can enforce CodeBuild configuration:
This prevents creating or updating CodeBuild projects without VPC configuration. For monitoring existing projects, AWS Config rules (option B) complement this by detecting non-compliant resources.
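A sketch of such an SCP; the condition key shown is illustrative only, not a verified CodeBuild condition key, and must be checked against current documentation:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyCodeBuildWithoutVpc",
      "Effect": "Deny",
      "Action": ["codebuild:CreateProject", "codebuild:UpdateProject"],
      "Resource": "*",
      "Condition": {
        "Null": { "codebuild:vpcConfig.vpcId": "true" }
      }
    }
  ]
}
```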
Note: The exact condition key syntax may vary; verify current documentation for precise implementation.
Question 34
A development team's CodePipeline source action uses an S3 bucket. They want the pipeline to trigger when a new object is uploaded to a specific prefix in the bucket. Currently, the pipeline uses polling. What change will provide faster pipeline triggering?
A. Enable S3 event notifications to CloudWatch Events
B. Configure S3 versioning on the bucket
C. Reduce the polling interval in CodePipeline
D. Enable CloudWatch Events detection for the source action
Answer: D
Explanation:
CodePipeline S3 source actions can use:
Polling (periodic check): Default behavior, checks every few minutes
Event-based (CloudWatch Events): Near real-time triggering
To enable event-based triggering:
Enable "CloudWatch Events" option on the S3 source action
When configured through the console, CodePipeline creates the necessary CloudWatch Events rule and the CloudTrail trail that records the S3 object-level API events
This configuration detects S3 object creation events and triggers the pipeline within seconds of the upload, compared to polling's multi-minute delay.
Note: The S3 source bucket must have versioning enabled; CodePipeline requires versioning on S3 source buckets regardless of the change-detection method.
Question 35
An application deployed with CodeDeploy on EC2 is experiencing issues after deployment. The DevOps engineer needs to investigate what happened during the deployment on a specific instance. Where should they look?
A. CloudWatch Logs for the CodeDeploy deployment
B. CodeDeploy deployment logs in the console
C. CodeDeploy agent logs on the EC2 instance
D. AWS X-Ray traces for the deployment
Answer: C
Explanation:
CodeDeploy agent logs on EC2 instances contain detailed deployment information. Key locations are /var/log/aws/codedeploy-agent/codedeploy-agent.log (agent activity and errors) and /opt/codedeploy-agent/deployment-root/<deployment-group-id>/<deployment-id>/logs/scripts.log (output from lifecycle hook scripts).
The CodeDeploy console (option B) provides high-level status but not detailed instance-level debugging information.
Question 36
A company is implementing CI/CD for a microservices architecture. Each service has its own CodePipeline. They need to coordinate deployments across services to ensure compatibility. How should they implement deployment coordination?
A. Use a parent CodePipeline that triggers child pipelines sequentially
B. Implement a Step Functions workflow that orchestrates pipeline executions
C. Use EventBridge to chain pipeline executions based on completion events
D. Create a single pipeline with all services in parallel stages
Answer: B (best for complex coordination) or C (for simpler cases)
Explanation:
For microservices deployment coordination:
Step Functions (Option B) - Best for complex orchestration:
Coordinate multiple pipeline executions
Handle dependencies between services
Implement retry logic and error handling
Support parallel and sequential deployments
Maintain state across long-running deployments
EventBridge (Option C) - Simpler cases:
Pipeline A completes → EventBridge rule → Triggers Pipeline B
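For example, an EventBridge event pattern matching successful executions of a pipeline named pipeline-a (a placeholder name):

```json
{
  "source": ["aws.codepipeline"],
  "detail-type": ["CodePipeline Pipeline Execution State Change"],
  "detail": {
    "state": ["SUCCEEDED"],
    "pipeline": ["pipeline-a"]
  }
}
```

The rule's target would be the downstream pipeline, with an IAM role that allows codepipeline:StartPipelineExecution.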
Example Step Functions workflow:
Deploy core services (parallel)
Wait for all core services to complete
Deploy dependent services (parallel)
Run integration tests
Deploy API gateway service
Option D loses independent service deployments. Option A is limited in flexibility.
Question 37
A CodePipeline needs to deploy to Amazon EKS. The deployment should use Kubernetes manifests stored in the source repository. Which approach is recommended?
A. Use CodeBuild to run kubectl commands against the EKS cluster
B. Use CodeDeploy with EKS deployment action
C. Use CloudFormation to deploy Kubernetes manifests
D. Use a Lambda function to interact with EKS API
Answer: A
Explanation:
For EKS deployments from CodePipeline:
CodeBuild with kubectl (Recommended):
CodeBuild project configured with VPC access to EKS cluster
Build environment includes kubectl
Authentication using IAM role mapped to Kubernetes RBAC
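A buildspec sketch under these assumptions: the build image includes kubectl (per the list above), and the cluster name, deployment name, and manifest path are placeholders:

```yaml
phases:
  build:
    commands:
      - aws eks update-kubeconfig --name my-cluster --region "$AWS_DEFAULT_REGION"
      - kubectl apply -f k8s/
      - kubectl rollout status deployment/my-app --timeout=120s
```

The CodeBuild service role must also be mapped to a Kubernetes RBAC identity (for example via the aws-auth ConfigMap) for kubectl to be authorized.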
Note: CodeDeploy (option B) doesn't natively support EKS. CloudFormation (option C) can work but is more complex. AWS also offers Controllers for Kubernetes (ACK) for IaC approaches.
Question 38
A team uses CodeCommit and needs to implement code review requirements. They want to ensure that at least one person from the security team reviews changes to files in the `/security/` directory. How should this be configured?
A. Create an approval rule template with path-based conditions
B. Create a Lambda trigger that enforces approval based on changed files
C. Use branch protection with required reviewers
D. Implement a custom CodeGuru reviewer for security files
Answer: B (currently the best approach for path-based requirements)
Explanation:
CodeCommit's approval rule templates don't support path-based conditions natively. For path-specific approval requirements:
Lambda Trigger Approach:
Configure CodeCommit trigger for pull request events
Lambda function:
Analyzes changed files in the PR
If /security/ files are changed, updates required approvals
Enforces approval from security team members (by IAM ARN)
Blocks merge until requirements met
The Lambda can:
Use CodeCommit APIs to get PR details and file changes
Create/update approval rules dynamically
Comment on PR with requirements
This is more complex but necessary for path-based requirements. Standard approval rule templates only support repository-level and branch-level rules.
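A sketch of the Lambda logic, assuming an EventBridge pull-request event as the trigger; the rule name and security-team ARN pattern are placeholders, and pagination and error handling are omitted:

```python
try:
    import boto3  # available in the Lambda runtime
except ImportError:  # lets the pure helper below run without the AWS SDK
    boto3 = None

SECURITY_PREFIX = "security/"  # CodeCommit paths have no leading slash

def requires_security_approval(changed_paths):
    """Pure helper: does any changed file fall under the security directory?"""
    return any(p.startswith(SECURITY_PREFIX) for p in changed_paths)

def lambda_handler(event, context):
    codecommit = boto3.client("codecommit")
    pr_id = event["detail"]["pullRequestId"]
    pr = codecommit.get_pull_request(pullRequestId=pr_id)["pullRequest"]
    target = pr["pullRequestTargets"][0]

    # Collect the file paths changed between destination and source commits
    diffs = codecommit.get_differences(
        repositoryName=target["repositoryName"],
        beforeCommitSpecifier=target["destinationCommit"],
        afterCommitSpecifier=target["sourceCommit"],
    )["differences"]
    changed = [
        d.get("afterBlob", d.get("beforeBlob", {})).get("path", "")
        for d in diffs
    ]

    if requires_security_approval(changed):
        # Attach a PR-level rule requiring one approval from the security pool
        codecommit.create_pull_request_approval_rule(
            pullRequestId=pr_id,
            approvalRuleName="security-review",  # placeholder rule name
            approvalRuleContent=(
                '{"Version": "2018-11-08", "Statements": [{'
                '"Type": "Approvers", "NumberOfApprovalsNeeded": 1, '
                '"ApprovalPoolMembers": '
                '["arn:aws:iam::111122223333:user/security-*"]}]}'
            ),
        )
```

The path test is kept as a separate pure function so it can be unit tested independently of the AWS calls.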
Question 39
A CodeBuild project runs integration tests that require access to test data in an S3 bucket. The bucket is encrypted with a customer-managed KMS key. Builds are failing with access denied errors when downloading test data. The CodeBuild service role has S3 permissions. What additional configuration is needed?
A. Grant the CodeBuild service role kms:Decrypt permission for the KMS key
B. Enable S3 bucket versioning
C. Configure the KMS key policy to allow CodeBuild service
D. Use S3 server-side encryption with S3-managed keys instead
Answer: A
Explanation:
When accessing S3 objects encrypted with customer-managed KMS keys, the accessing principal needs:
IAM policy on the CodeBuild service role (Option A)
KMS key policy (Option C - also works but typically both are configured)
For downloads, kms:Decrypt is essential. The error message indicates the service role's IAM policy is missing KMS permissions.
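For example, a statement added to the CodeBuild service role policy (region, account, and key ID are placeholders; uploads would additionally need kms:GenerateDataKey):

```json
{
  "Effect": "Allow",
  "Action": ["kms:Decrypt"],
  "Resource": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"
}
```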
Question 40
A company uses CodePipeline with GitHub Enterprise as the source. They need to ensure the pipeline only processes code that has passed branch protection rules in GitHub. How can they verify this?
A. Configure CodeStar Connections to respect GitHub branch protection
B. Add a CodeBuild action that uses GitHub API to verify PR status
C. Use GitHub Actions as an intermediate step before CodePipeline
D. Configure a Lambda source action that validates before proceeding
Answer: A (or B for explicit validation)
Explanation:
CodeStar Connections integrates with GitHub and respects GitHub's authentication and authorization. However, CodePipeline source actions trigger on commits, not on GitHub's internal status.
For explicit verification, Option B - CodeBuild validation:
phases:
  pre_build:
    commands:
      - |
        # Check if commit was from a merged PR that passed branch protection
        COMMIT_SHA="${CODEBUILD_RESOLVED_SOURCE_VERSION}"
        PR_STATUS=$(curl -s -H "Authorization: token $GITHUB_TOKEN" \
          "https://api.github.com/repos/owner/repo/commits/$COMMIT_SHA/status")
        if [[ $(echo "$PR_STATUS" | jq -r '.state') != "success" ]]; then
          echo "Commit did not pass required checks"
          exit 1
        fi
This explicitly validates the commit's status before proceeding. GitHub's branch protection prevents merging without passing checks, but this adds defense-in-depth in the pipeline.
Question 41
A DevOps team is implementing a multi-region active-active deployment for their application. They want CodePipeline to deploy to both regions simultaneously and only proceed if both deployments succeed. How should this be configured?
A. Create separate pipelines per region triggered by the same source
B. Configure cross-region actions in parallel within the same stage
C. Use CloudFormation StackSets for multi-region deployment
D. Implement a Step Functions workflow for coordinated multi-region deployment
Answer: B
Explanation:
CodePipeline supports cross-region actions within the same stage. Configure:
Both actions run in parallel. The stage only completes successfully if ALL actions succeed. If either region's deployment fails, the stage fails and the pipeline stops.
Requirements:
Artifact buckets in each region
IAM roles with cross-region permissions
KMS keys in each region (if using encryption)
This is the native CodePipeline approach for multi-region parallel deployments with coordinated success/failure.
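A sketch of the stage definition (action names and provider are illustrative, and each action's configuration block is omitted; both actions share runOrder 1, so they run in parallel):

```json
{
  "name": "DeployBothRegions",
  "actions": [
    {
      "name": "DeployUsEast1",
      "actionTypeId": {"category": "Deploy", "owner": "AWS", "provider": "CodeDeploy", "version": "1"},
      "region": "us-east-1",
      "runOrder": 1,
      "inputArtifacts": [{"name": "BuildOutput"}]
    },
    {
      "name": "DeployUsWest2",
      "actionTypeId": {"category": "Deploy", "owner": "AWS", "provider": "CodeDeploy", "version": "1"},
      "region": "us-west-2",
      "runOrder": 1,
      "inputArtifacts": [{"name": "BuildOutput"}]
    }
  ]
}
```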
Question 42
A company's CodeBuild project uses a buildspec.yml file stored in the source repository. Security team wants to ensure developers cannot modify the build commands. How should this be enforced?
A. Store buildspec.yml in a separate secured repository
B. Use CodeBuild project-level buildspec override
C. Insert buildspec commands directly in the CodeBuild project configuration
D. Create a CodeCommit approval requirement for buildspec.yml changes
Answer: C (or B for more flexibility)
Explanation:
To prevent developers from modifying build commands:
Option C - Inline buildspec in project:
Configure the CodeBuild project with buildspec commands defined in the project settings rather than using a file from the source. Developers with source access cannot modify the build process.
Option B - Buildspec override:
Specify a buildspec file path that points to a location outside the developer-accessible source, or use the inline buildspec feature.
Console configuration:
Build specification: "Insert build commands"
Enter commands directly in the project configuration
Developers with only source repository access cannot modify the build process. Only users with CodeBuild project modification permissions can change the buildspec.
Question 43
An application uses CodeDeploy for EC2 deployments. During the BeforeInstall hook, a script checks if a dependent service is available. If the service is unavailable, the script should wait and retry before failing. The current script immediately fails if the service is unavailable. What change should be made?
A. Increase the hook timeout in appspec.yml
B. Modify the script to implement retry logic with exponential backoff
C. Configure CodeDeploy to automatically retry failed hooks
D. Add a wait condition in the CodeDeploy deployment configuration
Answer: B
Explanation:
The lifecycle hook script should implement retry logic:
#!/bin/bash
MAX_RETRIES=5
RETRY_INTERVAL=10

check_service_available() {
  # Placeholder check: replace with the real dependency test,
  # e.g. an HTTP health endpoint of the dependent service
  curl -sf http://localhost:8081/health > /dev/null
}

for i in $(seq 1 $MAX_RETRIES); do
  if check_service_available; then
    echo "Service is available"
    exit 0
  fi
  echo "Attempt $i: Service unavailable, waiting ${RETRY_INTERVAL}s..."
  sleep $RETRY_INTERVAL
  RETRY_INTERVAL=$((RETRY_INTERVAL * 2))  # Exponential backoff
done

echo "Service unavailable after $MAX_RETRIES attempts"
exit 1
CodeDeploy doesn't have built-in hook retry (option C). The script must handle retries internally.
Question 44
A team is implementing canary deployments for their Lambda function using CodeDeploy. They want to shift 10% of traffic initially, wait 5 minutes, then shift another 10% every 2 minutes until complete. Which deployment configuration should they use?
A. Canary10Percent5Minutes
B. Linear10PercentEvery2Minutes with initial wait
C. Custom deployment configuration with specified intervals
D. AllAtOnce with CloudWatch-based traffic shifting
Answer: C
Explanation:
The described requirement doesn't match standard AWS deployment configurations:
Canary10Percent5Minutes: 10% initially, wait 5 minutes, then 100% (not gradual)
Linear10PercentEvery2Minutes: Shifts 10% every 2 minutes from the start (no initial wait)
For custom behavior, create a custom deployment configuration:
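For example, with the AWS CLI (the configuration name is a placeholder; verify the exact traffic-routing shape against current documentation):

```
aws deploy create-deployment-config \
  --deployment-config-name Custom.LambdaLinear10PercentEvery2Minutes \
  --compute-platform Lambda \
  --traffic-routing-config '{"type":"TimeBasedLinear","timeBasedLinear":{"linearPercentage":10,"linearInterval":2}}'
```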
However, the exact requirement (initial canary with wait, then linear) may require combining approaches or accepting the closest standard configuration. The exam may present this as "Custom deployment configuration."
Question 45
A CodePipeline retrieves source code from CodeCommit and needs to include the Git metadata (history, branches) for the build process. Currently, only the latest commit's files are available in CodeBuild. How can full Git metadata be accessed?
A. Configure the source action to include full clone
B. Clone the repository again in CodeBuild using Git commands
C. Use the "Full clone" option in the CodeCommit source action
D. Enable deep source cloning in CodePipeline settings
Answer: C
Explanation:
CodePipeline's CodeCommit source action supports two clone modes:
Full clone: Includes complete Git history and metadata
Source action outputs Git repository with full history
Useful for build processes that need: git log, git describe, branch information
Default (zip download): Only current commit files
Faster for simple builds
No Git metadata
Configuration:
In the source action settings, enable "Full clone" output artifact format.
CodeBuild then receives the full repository with .git directory, enabling commands like:
git describe --tags
git log --oneline -10
git branch -a
Question 46
An organization uses AWS Organizations with multiple accounts. They want to standardize CodePipeline creation across accounts using a template that includes all required stages and actions. How should this be implemented?
A. Create a CloudFormation StackSet that deploys the pipeline template
B. Use AWS Service Catalog with a pipeline product
C. Implement a custom CDK construct for pipeline creation
D. Create a CodeCatalyst blueprint for pipelines
Answer: B (or A, both are valid)
Explanation:
Both options work for standardization:
AWS Service Catalog (Option B):
Create a portfolio with pipeline product
Product defined using CloudFormation template
Share portfolio across accounts
Users launch standardized pipelines with customizable parameters
Governance through constraints and launch roles
Version management for pipeline templates
CloudFormation StackSets (Option A):
Deploy identical pipelines across multiple accounts
Central management of pipeline infrastructure
Automatic deployment to new accounts via OU targeting
Service Catalog is generally preferred for:
Self-service pipeline creation by teams
Parameterized customization within guardrails
Approval workflows for provisioning
StackSets is better for:
Identical infrastructure across accounts
Central IT-managed deployments
Compliance enforcement
Question 47
A company's deployment process requires that production deployments only happen during a specific maintenance window (Saturday 2AM-6AM UTC). How should this be enforced in CodePipeline?
A. Use a Lambda action that checks the current time before the deploy action
B. Configure deployment windows in CodeDeploy deployment group settings
C. Add an approval action with SNS notification that's only approved during the window
D. Use EventBridge Scheduler to enable/disable the production deploy stage
Answer: A
Explanation:
Option A - Lambda validation:
import datetime
import boto3

codepipeline = boto3.client('codepipeline')

def lambda_handler(event, context):
    job_id = event['CodePipeline.job']['id']
    now = datetime.datetime.utcnow()
    # Saturday is weekday() == 5; window is 02:00-05:59 UTC
    if now.weekday() == 5 and 2 <= now.hour < 6:
        # Report success to CodePipeline - deployment may proceed
        codepipeline.put_job_success_result(jobId=job_id)
    else:
        # Report failure - blocks deployment
        codepipeline.put_job_failure_result(
            jobId=job_id,
            failureDetails={
                'message': 'Deployments only allowed Saturday 2AM-6AM UTC',
                'type': 'JobFailed'
            }
        )
Option B - Deployment windows are not a native CodeDeploy deployment group setting; they would have to be implemented via automation.
The Lambda approach provides the most flexibility and clear enforcement.
Question 48
A DevOps team is implementing a GitOps workflow where the Git repository is the source of truth for all deployments. When infrastructure or application changes are pushed to the repository, deployments should automatically sync to match the repository state. Which AWS service combination supports this?
A. CodeCommit with CloudFormation deployment action in CodePipeline
B. CodeCommit with AWS App Runner auto-deployment
C. GitHub with AWS Proton
D. CodeCommit with ArgoCD on EKS
Answer: D (for true GitOps), or A/B depending on context
Explanation:
GitOps core principles:
Git repository as the single source of truth
Automated agents that sync actual state to desired state
Pull-based deployment model
ArgoCD on EKS (Option D):
Continuously monitors Git repository
Automatically syncs Kubernetes cluster state to match repository
Reconciliation loop maintains desired state
True GitOps implementation
AWS-Native Options:
CodePipeline (Option A): Push-based CI/CD, not pure GitOps but common
App Runner (Option B): Auto-deploys on repository changes (for containers)
Proton (Option C): Templates for infrastructure/applications
For exam purposes, understand that GitOps is a methodology. AWS provides building blocks, but tools like ArgoCD/Flux provide pure GitOps implementation on EKS.
Question 49
A CodePipeline execution failed at the deploy stage. The team fixed the issue and wants to restart the pipeline from the failed stage rather than from the beginning. How can this be accomplished?
A. Use the "Retry failed actions" feature in the console
B. Stop and restart the pipeline execution
C. Create a new pipeline execution starting at the deploy stage
D. Manually trigger the deploy stage using AWS CLI
Answer: A
Explanation:
CodePipeline provides the ability to retry failed stages:
Console:
Navigate to the failed pipeline execution
Click "Retry" on the failed stage
Pipeline resumes from that stage using existing artifacts
This uses artifacts from the original execution, avoiding the need to rebuild. Note that retry is only available for the most recent execution of a stage and must be initiated before a new pipeline execution processes that stage.
Question 50
A company uses CodeArtifact for npm package management. Developers are experiencing slow installs because packages are being fetched from the upstream public npm registry for every build. How can this be optimized?
A. Configure CodeArtifact to cache packages from upstream repositories
B. Increase the package retention period
C. Enable external connection to npmjs and let CodeArtifact cache packages automatically
D. Pre-populate the CodeArtifact repository with all required packages
Answer: C
Explanation:
CodeArtifact with external connections:
External connection: Links CodeArtifact repository to public registries (npm, PyPI, Maven Central)
Automatic caching: When a package is requested:
If cached in CodeArtifact → served immediately
If not cached → fetched from upstream, cached, then served
After initial fetch, packages are cached and served from CodeArtifact, providing:
Faster installs (closer/faster than public internet)
Availability if upstream is down
Security scanning of cached packages
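For example (domain and repository names are placeholders):

```
# One-time: attach the public npm registry as an external connection
aws codeartifact associate-external-connection \
  --domain my-domain --repository my-repo \
  --external-connection "public:npmjs"

# Per developer/build: point npm at the CodeArtifact repository
aws codeartifact login --tool npm \
  --domain my-domain --repository my-repo
```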
Question 51
A company is implementing blue/green deployments for their EC2 application using CodeDeploy. They want to keep the original (blue) environment running for 24 hours after deployment for potential rollback. How should this be configured?
A. Set the termination wait time to 24 hours in the deployment group
B. Configure a manual termination action in the deployment
C. Use a CloudWatch Events rule to terminate instances after 24 hours
D. Set the BlueGreenDeploymentConfiguration termination wait time
Answer: A or D (same setting)
Explanation:
In CodeDeploy blue/green deployments, the terminateBlueInstancesOnDeploymentSuccess setting controls what happens to original instances:
Configuration options:
Terminate after wait period: Keeps blue instances for specified duration
Terminate immediately: Removes blue instances right after traffic shift
Keep alive: Never automatically terminates blue instances
During this window, rollback to blue instances is near-instant (traffic shift). After the window, blue instances are terminated.
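The relevant deployment group fragment (1440 minutes = 24 hours), as passed via --blue-green-deployment-configuration:

```json
"blueGreenDeploymentConfiguration": {
  "terminateBlueInstancesOnDeploymentSuccess": {
    "action": "TERMINATE",
    "terminationWaitTimeInMinutes": 1440
  }
}
```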
Question 52
A CodeDeploy deployment to EC2 instances is failing with "The overall deployment failed because too many individual instances failed deployment." The deployment configuration is HalfAtATime. Investigation shows that the first batch of instances failed during the AfterInstall hook. What should be checked first?
A. The AfterInstall script exit code and script content
Answer: A
Explanation:
Start with the AfterInstall script itself: check its exit code, its content, and its output in the CodeDeploy agent logs on a failed instance. The deployment group configuration and IAM roles are less likely to cause AfterInstall script failures (those would cause earlier failures).
Question 53
An application uses CodeDeploy with an in-place deployment on EC2 instances. The deployment should automatically roll back if the CPU utilization exceeds 80% after deployment. How should this be configured?
A. Create a CloudWatch alarm for CPU and configure it as a rollback trigger
B. Add a ValidateService hook that checks CPU utilization
C. Configure auto-rollback based on deployment health metrics
D. Use CodeDeploy automatic rollback on failed health checks
Answer: A
Explanation:
CodeDeploy supports CloudWatch alarms as automatic rollback triggers:
Configuration:
Create CloudWatch alarm:
Metric: CPUUtilization
Threshold: 80%
Period and evaluation settings appropriate for post-deployment monitoring
Associate the alarm with the deployment group and enable automatic rollback when alarm thresholds are met
If the alarm enters ALARM state during or shortly after deployment, CodeDeploy automatically rolls back.
Note: Alarms are evaluated during deployment and for a period after. The monitoring window depends on alarm configuration and deployment duration.
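For example, attaching the alarm and enabling alarm-based rollback via the CLI (application, deployment group, and alarm names are placeholders):

```
aws deploy update-deployment-group \
  --application-name my-app \
  --current-deployment-group-name my-dg \
  --alarm-configuration '{"enabled": true, "alarms": [{"name": "HighCPUAlarm"}]}' \
  --auto-rollback-configuration '{"enabled": true, "events": ["DEPLOYMENT_STOP_ON_ALARM"]}'
```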
Question 54
A company has 500 EC2 instances across multiple Auto Scaling groups that need to receive the same application deployment. How should CodeDeploy be configured for efficient deployment?
A. Create separate deployment groups for each Auto Scaling group
B. Create a single deployment group using EC2 tag-based targeting
C. Create a deployment group that targets multiple Auto Scaling groups
D. Use CodeDeploy deployment configurations with high parallelism
Answer: C (or B, depending on the scenario)
Explanation:
CodeDeploy deployment groups can target multiple Auto Scaling groups:
Option C - Multiple ASG targeting:
A single deployment group can include multiple Auto Scaling groups. This is ideal when:
All ASGs should receive the same deployment
You want unified deployment management
You need consistent deployment configuration across ASGs
Option B - Tag-based targeting:
Useful when instances aren't in ASGs or you need flexible grouping based on tags (e.g., Environment: Production).
Deployment efficiency:
Configure deployment configuration for parallelism:
AllAtOnce for fastest deployment (higher risk)
Custom configuration specifying percentage or fixed number
For 500 instances, consider:
Batch sizes appropriate for risk tolerance
Monitoring during deployment
Rollback triggers configured
Question 55
A DevOps engineer is troubleshooting a CodeDeploy deployment that succeeded but the application isn't working correctly. The deployment logs show all lifecycle hooks completed successfully. What should be investigated next?
A. The application health check configuration
B. The ValidateService hook implementation
C. The ApplicationStart hook script
D. The deployment configuration minimum healthy hosts setting
Answer: B
Explanation:
If deployment succeeded but application isn't working:
ValidateService hook analysis (Option B):
The ValidateService hook is specifically designed to verify the deployment worked correctly. Issues:
Hook might not be implemented (no validation)
Hook validation might be insufficient
Hook might not exit with failure on actual problems
What ValidateService should do:
#!/bin/bash
# Check if application is responding
RESPONSE=$(curl -s -o /dev/null -w "%{http_code}" http://localhost:8080/health)
if [ "$RESPONSE" != "200" ]; then
  echo "Health check failed"
  exit 1
fi
echo "Application healthy"
exit 0
If no ValidateService hook exists, the deployment succeeds based on file deployment and earlier hooks, not actual application functionality.
Also check:
ApplicationStart hook - verify it actually started the application
Application logs for startup errors
Question 56
A Lambda function is deployed using CodeDeploy with the Canary10Percent5Minutes configuration. During the canary period, CloudWatch detects an increase in error rate. What happens automatically?
A. Traffic shifts back to the original version immediately
B. The deployment pauses and waits for manual intervention
C. CodeDeploy triggers an automatic rollback based on configured alarms
D. Nothing happens unless alarms are explicitly configured
Answer: D (but C if alarms are configured)
Explanation:
CodeDeploy doesn't automatically monitor CloudWatch metrics for rollback. You must explicitly configure:
CloudWatch alarms that trigger on error conditions:
Without this configuration, the deployment continues regardless of errors. The canary period gives you TIME to monitor, but doesn't provide automatic metric-based rollback unless configured.
This is a common exam topic: CodeDeploy alarm-based rollback requires explicit configuration.
Question 57
A company uses CodeDeploy for on-premises server deployments. They need to register new servers with CodeDeploy automatically when they're provisioned by their configuration management tool. What approach should they use?
A. Use CodeDeploy API calls from the configuration management tool
B. Configure SSM agent to automatically register with CodeDeploy
C. Use an on-premises instance registration script with IAM user credentials
D. Enable automatic registration in the CodeDeploy deployment group
Answer: A or C
Explanation:
For on-premises instance registration with CodeDeploy:
Automated registration process:
Prerequisite: Create an IAM user for the on-premises instance
Generate access keys
Attach policy with CodeDeploy permissions
Registration (Option C or A):
# Using CLI/API (can be scripted in config management)
aws deploy register-on-premises-instance \
  --instance-name server-001 \
  --iam-user-arn arn:aws:iam::account:user/codedeploy-user

# Configure instance with credentials
aws deploy install \
  --config-file /etc/codedeploy-agent/conf/codedeploy.onpremises.yml
Configuration management tools (Ansible, Puppet, Chef) can execute these commands during server provisioning.
Question 58
An ECS service uses CodeDeploy for blue/green deployments. After a successful deployment, the team notices that the old task definition is still running tasks alongside the new one. What is the likely cause?
A. The deployment is still in progress (traffic shifting)
B. The termination wait time hasn't elapsed
C. ECS service auto-scaling launched tasks from old task definition
D. The deployment succeeded but traffic shift failed
Answer: B
Explanation:
In ECS blue/green deployments:
Deployment lifecycle:
New task set created with new task definition
Traffic gradually shifted to new task set
After successful traffic shift, old task set waits for termination
After the termination wait time expires, the old task set is terminated
Until then, tasks from the old task definition continue running alongside the new ones. This is expected behavior, not an error.
Question 59
A company's CodeDeploy deployment is stuck at the Install lifecycle event. The CodeDeploy agent log shows "The specified key does not exist" when attempting to download the deployment bundle. What should be checked?
A. The S3 bucket policy allows access from the EC2 instance role
B. The deployment bundle was correctly uploaded to S3
C. The EC2 instance has internet access to reach S3
D. All of the above
Answer: D
Explanation:
"The specified key does not exist" error indicates S3 access issues:
Check all of the following:
Bundle exists in S3:
Verify the artifact key path is correct
Check if the revision was successfully uploaded
Verify the S3 bucket and key in deployment configuration
IAM permissions:
EC2 instance role needs s3:GetObject on the artifact
If cross-account, both bucket policy and IAM role needed
If KMS encrypted, need kms:Decrypt
Network access:
Instance can reach S3 (internet gateway, NAT, or VPC endpoint)
Security groups allow outbound HTTPS
NACLs allow S3 traffic
S3 bucket configuration:
Bucket policy doesn't explicitly deny
Bucket isn't in a different region without proper configuration
The error message typically means the object truly doesn't exist at that key, but access issues can produce similar errors.
Question 60
A development team wants to test CodeDeploy deployments locally before pushing to AWS. What tool or approach enables local deployment testing?
A. CodeDeploy Local Deployments feature
B. LocalStack with CodeDeploy support
C. Docker containers simulating EC2 with CodeDeploy agent
D. The codedeploy-local command-line tool
Answer: D
Explanation:
AWS provides codedeploy-local CLI for local testing:
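For example, against a local bundle directory (the path is a placeholder):

```
codedeploy-local --bundle-location /path/to/app --type directory
```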
This is useful for rapid iteration on appspec.yml and deployment scripts before actual AWS deployments.
Question 61
An application running on EC2 with CodeDeploy needs to maintain at least 50% capacity during deployments. The deployment should proceed as fast as possible while meeting this requirement. Which deployment configuration should be used?
A. HalfAtATime
B. OneAtATime
C. AllAtOnce with minimum healthy hosts at 50%
D. Custom configuration with minimum healthy percentage of 50%
Answer: A
Explanation:
HalfAtATime deploys to half the fleet at once, keeping 50% capacity while maximizing parallelism. A custom configuration (option D) also allows maintaining a 50% minimum healthy percentage.
Comparison:
HalfAtATime: Exactly 50% deploys at once
Custom: Can specify different parallelism while maintaining 50% minimum healthy
For fastest deployment with 50% capacity, HalfAtATime deploys half the fleet simultaneously, which is the maximum parallelism possible while maintaining 50% healthy hosts.
Question 62
A CodeDeploy deployment to an Auto Scaling group includes a BeforeBlockTraffic hook that deregisters instances from the load balancer before stopping the application. However, users are still seeing connection errors. What is the likely issue?
A. The load balancer connection draining timeout is too short
B. The deregistration action isn't waiting for in-flight requests
C. The hook should be AfterBlockTraffic instead
D. The load balancer needs time to propagate deregistration
Answer: A or B (related issues)
Explanation:
When deregistering instances from load balancers:
Connection draining (deregistration delay):
ALB/NLB settings specify how long to wait for in-flight requests
Default: 300 seconds
If too short, existing requests are dropped
BeforeBlockTraffic hook should:
Deregister from target group
Wait for connection draining to complete
Then proceed with deployment
Correct implementation:
#!/bin/bash
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
# Deregister from target group
aws elbv2 deregister-targets \
  --target-group-arn "$TARGET_GROUP_ARN" \
  --targets Id="$INSTANCE_ID"
# Wait for connection draining to finish (state moves draining -> unused)
while true; do
  STATE=$(aws elbv2 describe-target-health \
    --target-group-arn "$TARGET_GROUP_ARN" \
    --targets Id="$INSTANCE_ID" \
    --query 'TargetHealthDescriptions[0].TargetHealth.State' \
    --output text)
  if [ "$STATE" = "draining" ]; then
    sleep 10
  else
    break
  fi
done
Question 63
A company wants to implement zero-downtime deployments for their EC2 application but doesn't want the overhead of blue/green deployments. They have a fleet of 10 instances. Which approach achieves this with minimal infrastructure?
A. In-place deployment with Rolling update configuration
B. Blue/green deployment with reuse of existing instances
C. Immutable deployment creating temporary instances
D. In-place deployment with OneAtATime configuration
Answer: D
Explanation:
For zero-downtime without blue/green overhead:
OneAtATime configuration (Option D):
Deploys to one instance at a time
9 of 10 instances remain healthy throughout
Longest deployment time but zero downtime
No additional infrastructure required
Rolling update (Option A):
Similar concept but may deploy to multiple instances
Can be configured similar to OneAtATime
Trade-offs:
OneAtATime: Safest, slowest
HalfAtATime: Faster, 50% capacity during deployment
AllAtOnce: Fastest, but causes downtime
For 10 instances with zero-downtime requirement and no infrastructure overhead, OneAtATime sequentially updates each instance while maintaining 90% capacity.
Question 64
A DevOps engineer is implementing blue/green deployments for an ECS service. The service uses an Application Load Balancer. What must be configured before CodeDeploy can manage the deployments?
A. Two target groups associated with the ALB
B. Two separate ECS services for blue and green
C. CodeDeploy deployment controller on the ECS service
D. Both A and C
Answer: D
Explanation:
ECS blue/green with CodeDeploy requires:
1. Two target groups (Option A):
Production traffic target group
Test traffic target group
Both associated with the ALB (different listener rules or ports)
2. ECS service with CodeDeploy deployment controller (Option C):
Deployment group linked to ECS service, cluster, target groups
Additional requirements:
ALB listener(s) configured for traffic routing
Task definition for the service
appspec.yml defining the deployment
Without both the target groups and the correct deployment controller type, CodeDeploy cannot manage ECS blue/green deployments.
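A minimal ECS appspec.yml for such a deployment might look like this (the task definition ARN, container name, and port are placeholders):

```yaml
version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: "arn:aws:ecs:us-east-1:111122223333:task-definition/web:3"
        LoadBalancerInfo:
          ContainerName: "web"
          ContainerPort: 80
```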
Question 65
A Lambda function deployment with CodeDeploy is using a Linear10PercentEvery1Minute configuration. The function processes messages from an SQS queue. How does CodeDeploy handle the traffic shifting for this type of invocation?
A. CodeDeploy cannot control traffic for SQS-triggered Lambda functions
B. Traffic shifting applies to new function invocations from SQS
C. SQS continues to invoke the original version until fully shifted
D. You must manually update the SQS event source mapping
Answer: B
Explanation:
For Lambda functions with aliases:
How CodeDeploy traffic shifting works:
Lambda alias points to weighted versions
During deployment: alias points to original version + new version with weights
Example at 10% shift: 90% invocations → v1, 10% → v2
For SQS event sources:
Event source mapping triggers the alias
Each invocation goes to version based on alias weights
Individual SQS messages may invoke different versions
This is request-level shifting, not message-level
Important considerations:
Eventual consistency in Lambda's traffic shifting
Some SQS messages processed by old version, some by new
Both versions should be able to handle messages correctly
Consider idempotency in message processing
Traffic shifting works for all Lambda invocation types (API Gateway, SQS, EventBridge, etc.) because it operates at the alias level.
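To illustrate the request-level split, here is a small simulation (not AWS code) of how weighted alias routing distributes invocations:

```python
import random

def route_invocation(weights, rng):
    """Pick a version the way an alias with routing-config weights would:
    each invocation independently lands on a version by weight."""
    # weights: {version: probability}; probabilities sum to 1.0
    r = rng.random()
    cumulative = 0.0
    for version, weight in weights.items():
        cumulative += weight
        if r < cumulative:
            return version
    return version  # fallback for floating-point edge cases

# 10% shifted to v2: roughly 1 in 10 invocations hits the new version
rng = random.Random(42)
weights = {"v1": 0.9, "v2": 0.1}
hits = sum(route_invocation(weights, rng) == "v2" for _ in range(10_000))
print(hits)  # close to 1000
```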
Question 66
A company wants to implement feature flags that control which code paths are executed in their Lambda functions deployed via CodeDeploy. The feature flags should be changeable without redeploying the function. Which AWS service should be used?
A. Lambda environment variables updated via CodeDeploy
B. Systems Manager Parameter Store with caching in Lambda
C. AWS AppConfig with feature flag configuration profile
Answer: C
Explanation:
AppConfig is preferred over Parameter Store for feature flags due to its deployment strategies and validation capabilities.
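As a sketch: inside Lambda, the AppConfig extension serves the configuration over a local HTTP endpoint, and the feature-flag document is plain JSON. The flag names and document shape below are illustrative:

```python
import json

def flag_enabled(config_document: str, flag_name: str) -> bool:
    """Parse an AppConfig feature-flag document (JSON) and report
    whether the named flag is enabled. Missing flags default to off."""
    flags = json.loads(config_document)
    return bool(flags.get(flag_name, {}).get("enabled", False))

# In Lambda, the document would come from the AppConfig extension at
# http://localhost:2772/applications/<app>/environments/<env>/configurations/<profile>
doc = '{"new-checkout": {"enabled": true}, "beta-search": {"enabled": false}}'
print(flag_enabled(doc, "new-checkout"))  # True
```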
Question 67
A CodeDeploy deployment group targets EC2 instances using the tag "Environment: Production". A new instance was launched with this tag but didn't receive the current deployment. What is the most likely reason?
A. The instance was launched after the deployment started
B. The CodeDeploy agent isn't installed on the instance
C. The instance isn't in the same VPC as other instances
D. The deployment group configuration needs to be refreshed
Answer: B
Explanation:
When instances don't receive deployments:
Most common causes:
No CodeDeploy agent (Option B):
Agent must be installed and running
Check: sudo service codedeploy-agent status
Agent communicates with CodeDeploy service
Instance launched after deployment:
True, but for Auto Scaling groups, lifecycle hooks can trigger deployment
For standalone EC2, needs separate mechanism
AMI without agent:
If using custom AMI, agent must be included
Or use user data to install agent on launch
Resolution:
Install CodeDeploy agent
For future instances, include agent in AMI or user data
Use Auto Scaling lifecycle hooks for automatic deployment
For instances matching tag criteria but not receiving deployments, verify agent status first.
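A user-data sketch for installing the agent at launch on Amazon Linux (the region in the S3 URL must match the instance's region):

```shell
#!/bin/bash
# Install and start the CodeDeploy agent at instance launch
yum update -y
yum install -y ruby wget
cd /home/ec2-user
wget https://aws-codedeploy-us-east-1.s3.us-east-1.amazonaws.com/latest/install
chmod +x ./install
./install auto
service codedeploy-agent start
```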
Question 68
An application requires zero-downtime deployment to a single EC2 instance (no Auto Scaling group). Blue/green deployment isn't possible. What deployment approach should be used?
A. In-place deployment with careful application restart
B. Create a temporary instance, deploy, then swap Elastic IP
C. Use CodeDeploy rolling deployment configuration
D. Implement custom deployment with Route 53 weighted routing
Answer: B or D
Explanation:
For single-instance zero-downtime deployment:
Challenge: In-place deployment to one instance inherently has downtime during application restart.
Solutions:
Option B - Elastic IP swap:
Launch new instance
Deploy to new instance
Test new instance
Swap Elastic IP from old to new
Terminate old instance
This provides near-zero-downtime (seconds for IP reassignment).
Option D - Route 53 weighted routing:
Use weighted routing to current instance
Launch new instance with deployment
Add new instance to Route 53 with weight
Gradually shift traffic
Remove old instance
Limitations:
Requires DNS client cache considerations
More complex setup
Longer transition period
For single-instance scenarios, Option B with Elastic IP is cleaner and faster for cutover.
Question 69
A company uses CodeDeploy for Lambda deployments. They want the AfterAllowTraffic hook to run automated tests against the deployed function. If tests fail, the deployment should roll back. How should the AfterAllowTraffic hook be implemented?
A. Lambda function that calls the deployed function and validates responses
B. Lambda function that triggers Step Functions for complex testing
C. CodeBuild project that runs the test suite
D. Either A or B, returning success/failure to CodeDeploy
Answer: D (A or B, both with proper CodeDeploy signaling)
Explanation:
AfterAllowTraffic hook for Lambda:
Implementation requirements:
Hook is a Lambda function
Receives deployment lifecycle event
Must call CodeDeploy to report status
import boto3

codedeploy = boto3.client('codedeploy')

def handler(event, context):
    deployment_id = event['DeploymentId']
    lifecycle_event_hook_execution_id = event['LifecycleEventHookExecutionId']
    try:
        # Run tests against the deployed function
        test_result = run_integration_tests()
        if test_result['passed']:
            status = 'Succeeded'
        else:
            status = 'Failed'
    except Exception:
        status = 'Failed'
    # Report back to CodeDeploy
    codedeploy.put_lifecycle_event_hook_execution_status(
        deploymentId=deployment_id,
        lifecycleEventHookExecutionId=lifecycle_event_hook_execution_id,
        status=status
    )
If status is 'Failed', CodeDeploy automatically rolls back the deployment.
Question 70
A team is troubleshooting slow CodeDeploy deployments. The deployment to 50 instances takes over an hour. The deployment configuration is OneAtATime. What change would reduce deployment time while maintaining safety?
A. Change to HalfAtATime configuration
B. Change to AllAtOnce configuration
C. Create a custom configuration with 10% minimum healthy hosts
D. Increase the deployment timeout
Answer: A or C
Explanation:
Deployment speed analysis:
Current (OneAtATime):
50 instances × (deployment time per instance)
If each takes ~1 minute, total = ~50 minutes
Safest but slowest
HalfAtATime (Option A):
25 instances deploy simultaneously
Then remaining 25
Roughly 2× faster than OneAtATime
Maintains 50% capacity
Custom with 10% minimum healthy (Option C):
90% of instances can deploy simultaneously
Only 5 instances must remain healthy
Much faster, but higher risk if deployment fails
45 instances deploy at once, then remaining 5
AllAtOnce (Option B):
Fastest but causes complete outage if issues
No capacity during deployment
Recommendation for exam:
Balance speed vs. risk. HalfAtATime is a common safe choice. Custom configurations allow fine-tuning for specific requirements.
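The batch arithmetic behind the comparison can be sketched as follows (a hypothetical helper; real deployments also spend time on lifecycle hooks and health checks):

```python
import math

def deployment_minutes(fleet: int, parallel: int, minutes_per_instance: int = 1) -> int:
    """Estimate total deployment time: sequential batches times
    the per-instance deployment time."""
    batches = math.ceil(fleet / parallel)
    return batches * minutes_per_instance

print(deployment_minutes(50, 1))   # OneAtATime: ~50 minutes
print(deployment_minutes(50, 25))  # HalfAtATime: ~2 minutes
print(deployment_minutes(50, 45))  # 10% minimum healthy: ~2 minutes
```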
Question 71
A CodeDeploy deployment for an Auto Scaling group uses blue/green deployment. The deployment is configured to copy Auto Scaling group settings. After deployment, the new Auto Scaling group has different instance types than the original. What happened?
A. The ASG copy process doesn't copy instance type settings
B. The launch template/configuration was modified during deployment
C. CodeDeploy uses default instance types for new ASG
D. The deployment group override settings changed the instance type
Answer: B (most likely) or A
Explanation:
When CodeDeploy creates replacement Auto Scaling group for blue/green:
What gets copied:
ASG configuration (min, max, desired, health checks)
Tags
Load balancer/target group associations
What might differ:
Launch template version (if "Latest" was specified)
Launch configuration (if modified between deployment creation and execution)
Likely scenario (Option B):
If the original ASG uses "Latest" launch template version, and someone updated the template before deployment completed, the new ASG gets the updated configuration.
Resolution:
Use specific launch template versions, not "Latest"
Or use deployment group settings to explicitly specify launch template version
For exam: Understand that blue/green copies ASG at deployment time, and "Latest" version references can cause unexpected changes.
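For example, the ASG can be pinned to a specific launch template version rather than $Latest (the ASG name and template ID below are placeholders):

```shell
# Pin the Auto Scaling group to launch template version 3
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name my-asg \
  --launch-template "LaunchTemplateId=lt-0abc123,Version=3"
```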
Question 72
An organization wants all CodeDeploy deployments to production to require approval from a specific IAM user before traffic is shifted. How should this be implemented?
A. Add a manual approval in CodeDeploy deployment configuration
B. Add a manual approval action in CodePipeline before CodeDeploy action
C. Configure IAM policies requiring the user to start the deployment
D. Use AfterInstall hook to wait for external approval
Answer: B
Explanation:
CodeDeploy itself doesn't have built-in approval workflows. Implement approvals in CodePipeline:
Only specified users can approve. All approval actions are logged in CloudTrail.
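A sketch of an IAM policy granting approval rights on a single pipeline's approval action (the account ID, pipeline, stage, and action names are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "codepipeline:PutApprovalResult",
      "Resource": "arn:aws:codepipeline:us-east-1:111122223333:my-pipeline/Approve-Prod/ManualApproval"
    }
  ]
}
```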
Question 73
A company's ECS blue/green deployment with CodeDeploy is failing. The error indicates "The ECS service cannot be updated because the cluster is in draining state." What is the issue?
A. The ECS cluster is being deleted
B. Container instances are being drained for maintenance
C. The cluster capacity is insufficient for blue/green deployment
D. The cluster doesn't support CodeDeploy
Answer: A or B
Explanation:
ECS cluster "draining" state:
Causes:
Cluster deletion initiated - Cluster is being deleted
Container instance draining - Instances marked for removal
Capacity provider changes - Underlying capacity being modified
Blue/green requirement:
Cluster must be able to run both original and replacement task sets
Draining state prevents new task placement
Resolution:
Wait for drain operation to complete
If cluster being deleted, cancel or use different cluster
Verify sufficient capacity for both task sets
Check capacity provider status
For exam: Understand ECS states and their impact on CodeDeploy operations. Blue/green needs double capacity during deployment.
Question 74
A development team uses CodeDeploy deployment groups with both EC2 instances and on-premises servers. They want to deploy to EC2 instances first, validate, then deploy to on-premises servers. How should this be configured?
A. Create two deployment groups, one for EC2 and one for on-premises
B. Use deployment group tags to sequence the deployments
C. Configure deployment waves in the deployment configuration
D. Create two separate deployments with a pipeline to sequence them
Answer: D (or A with pipeline orchestration)
Explanation:
CodeDeploy doesn't have built-in deployment sequencing within a deployment group.
Solution: Multiple deployment groups with orchestration
Stage: Deploy-EC2
Action: Deploy to DeploymentGroup-EC2
Stage: Validate
Action: Run validation tests/approval
Stage: Deploy-OnPrem
Action: Deploy to DeploymentGroup-OnPrem
This approach:
Allows validation between deployments
Provides clear deployment sequencing
Enables rollback at each stage
Alternative: Manual deployment sequencing via CLI/SDK.
Question 75
A company uses CodeDeploy with EC2 instances. They need to ensure that the application gracefully handles the shutdown sequence before CodeDeploy stops it. Which lifecycle hook should contain this logic?
A. BeforeInstall
B. ApplicationStop
C. BeforeBlockTraffic
D. BeforeInstall or ApplicationStop, depending on deployment type
Answer: B (ApplicationStop)
Explanation:
For graceful shutdown during CodeDeploy:
ApplicationStop hook:
Runs before new revision is installed
Purpose: Stop running application gracefully
Executes scripts from PREVIOUS deployment
Ideal for cleanup, connection draining, state saving
Implementation:
#!/bin/bash
# scripts/application_stop.sh
# Signal application to start graceful shutdown
kill -SIGTERM "$(cat /var/run/app.pid)"
# Wait for application to finish processing
sleep 30
# Verify application stopped
if pgrep -f "myapp" > /dev/null; then
  echo "Application didn't stop gracefully, forcing..."
  kill -9 "$(cat /var/run/app.pid)"
fi
Important note: ApplicationStop scripts are from the PREVIOUS deployment. If it's the first deployment, this hook is skipped.
BeforeBlockTraffic is for in-place deployments with load balancer integration (removing from LB before stopping).
Question 76
A DevOps engineer needs to configure CodeDeploy to integrate with an external configuration management system. After deployment, configuration from the external system should be applied. Which lifecycle hook is most appropriate?
A. AfterInstall
B. ApplicationStart
C. ValidateService
D. BeforeAllowTraffic
Answer: A (AfterInstall)
Explanation:
Lifecycle hook purposes:
AfterInstall (Option A):
Application files deployed but app not yet started
Configuration management (Ansible, Puppet, Chef) integration point
Typical AfterInstall tasks:
#!/bin/bash
# Sync configuration from external system
/opt/configuration-management/sync-config.sh
# Apply environment-specific settings
ansible-playbook /opt/playbooks/configure-app.yml
# Set file permissions
chown -R app:app /var/www/app
Order of operations:
BeforeInstall - pre-deployment tasks
Install - files copied
AfterInstall - configure installed files ← External config here
ApplicationStart - start application
ValidateService - verify application works
Configuration should be applied before the application starts (AfterInstall), not after (ValidateService).
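In appspec.yml terms, the integration point looks like this (script path and timeout are illustrative):

```yaml
hooks:
  AfterInstall:
    - location: scripts/sync_config.sh
      timeout: 300
      runas: root
```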
Question 77
A company has 200 EC2 instances in an Auto Scaling group. They want to deploy a new version with a deployment configuration that updates 10 instances at a time, with a 5-minute wait between batches to monitor for issues. Which deployment configuration achieves this?
A. Rolling deployment with batch size of 10
B. HalfAtATime with monitoring pauses
C. Custom configuration with fixed number of healthy hosts
D. CodeDeploy doesn't support batch-with-wait deployments
Answer: C (partial - CodeDeploy deploys in one wave, not batches with pauses)
Explanation:
Important clarification about CodeDeploy behavior:
What CodeDeploy DOES:
Deploys to X instances simultaneously (based on minimum healthy hosts)
Waits for those to complete
Then continues to next instances
All within a single deployment operation
What CodeDeploy DOES NOT do natively:
Pause between batches for monitoring
Time-based delays between instance updates
For batch-with-wait requirements:
Option 1: Multiple deployments with a pipeline:
Stage 1: Deploy (10% of instances)
Stage 2: Wait (Lambda with sleep or Step Functions)
Stage 3: Deploy (next 10%)
...
Option 2: Custom deployment with Step Functions:
Orchestrate multiple smaller deployments
Add wait states between deployments
For exam: Understand that CodeDeploy's minimum healthy hosts controls parallelism but doesn't create distinct batches with pause between them.
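A minimal Step Functions sketch of the wait-between-batches pattern (state names and the Lambda ARN are placeholders; the Lambda would start one batch's CodeDeploy deployment):

```json
{
  "StartAt": "DeployBatch1",
  "States": {
    "DeployBatch1": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:111122223333:function:StartBatchDeployment",
      "Next": "MonitorWindow"
    },
    "MonitorWindow": {
      "Type": "Wait",
      "Seconds": 300,
      "Next": "DeployBatch2"
    },
    "DeployBatch2": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:111122223333:function:StartBatchDeployment",
      "End": true
    }
  }
}
```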
Question 78
An application uses CodeDeploy for EC2 deployments. The team wants to automatically run database migrations before deploying the new application version. The migrations must complete successfully before any instance receives the new code. How should this be implemented?
A. Add a BeforeInstall hook on one instance to run migrations
B. Use a CodeBuild action before CodeDeploy in the pipeline
C. Add an AfterInstall hook that runs migrations
D. Use ApplicationStart hook to run migrations
Answer: B
Explanation:
For database migrations that must complete before ANY deployment:
Why CodeBuild (Option B) is correct:
Migrations run once, before deployment starts
Single execution, not per-instance
Can fail pipeline before any CodeDeploy activity
Clear separation of concerns
Why not hooks (Options A, C, D):
Hooks run on EACH instance
Migrations would run multiple times (race conditions, failures)
First instance might succeed, subsequent fail on already-migrated DB
No rollback capability for partial deployments
Implementation:
Pipeline:
├── Source
├── Build (CodeBuild)
├── Migrate (CodeBuild - run DB migrations)
│ └── Fails here = no deployment
└── Deploy (CodeDeploy - application code)
Migration CodeBuild project:
phases:
build:
commands:
- npm run db:migrate
Question 79
A CodeDeploy deployment to Lambda functions is configured with a BeforeAllowTraffic hook. The hook function runs but the deployment times out. The hook function takes 8 minutes to complete. What is the issue?
A. Lambda functions have a 15-minute maximum timeout
B. The lifecycle hook timeout is not configured
C. BeforeAllowTraffic hook has a 5-minute default timeout
D. The hook function isn't returning a response to CodeDeploy
Answer: D (most likely), though hook timeout configuration can also contribute
Explanation:
CodeDeploy Lambda hooks have specific requirements:
Timeout considerations:
Lambda deployment lifecycle hooks have their own timeout
Default hook timeout: 1 hour (configurable)
But the hook Lambda function must RESPOND to CodeDeploy
Common issue (Option D):
Hook function might:
Run for 8 minutes
Complete its work
Exit without calling put_lifecycle_event_hook_execution_status
CodeDeploy waits for response until timeout
Required hook response:
import boto3

def handler(event, context):
    # Do validation work (up to 8 minutes)
    perform_validation()
    # MUST report status back to CodeDeploy
    codedeploy = boto3.client('codedeploy')
    codedeploy.put_lifecycle_event_hook_execution_status(
        deploymentId=event['DeploymentId'],
        lifecycleEventHookExecutionId=event['LifecycleEventHookExecutionId'],
        status='Succeeded'  # or 'Failed'
    )
Without this callback, CodeDeploy assumes the hook is still running.
Question 80
A company uses CodeDeploy with an Application Load Balancer for blue/green EC2 deployments. During deployments, they notice that new instances become healthy in the target group but deployment still fails. The error mentions "health check failures." What should be investigated?
A. ALB target group health check settings
B. CodeDeploy health check type and thresholds
C. EC2 instance health checks
D. All health check configurations
Answer: D (but specifically B for CodeDeploy-specific behavior)
Explanation:
Blue/green deployments have multiple health check layers:
1. ALB Target Group Health Checks:
Path, port, protocol
Healthy/unhealthy thresholds
Interval and timeout
2. CodeDeploy Health Checks:
ELB health check type (instances must pass ALB checks)
Or EC2 health check type (basic EC2 status)
3. EC2 Instance Status Checks:
System status, instance status
Potential issues:
ALB says healthy, but CodeDeploy has different threshold
CodeDeploy's evaluation period differs from ALB
CodeDeploy waits a configurable duration for instances to become healthy
Check CodeDeploy deployment group health check settings and compare with ALB target group settings for consistency.
Question 81
A team needs to implement a CodeDeploy deployment that proceeds only during business hours (9 AM - 5 PM EST). If a deployment is triggered outside these hours, it should wait until the next business hours window. How can this be implemented?
A. Configure deployment windows in CodeDeploy
B. Use EventBridge Scheduler to enable/disable deployments
C. Implement a Lambda function as a pre-deployment gate in CodePipeline
D. CodeDeploy doesn't support scheduled deployment windows
Answer: C
Explanation:
CodeDeploy doesn't have native deployment window scheduling. Implement via pipeline:
Lambda pre-deployment gate:
import datetime
from pytz import timezone

def handler(event, context):
    est = timezone('America/New_York')
    now = datetime.datetime.now(est)
    # Check if within business hours (Mon-Fri, 9 AM - 5 PM)
    if now.weekday() < 5 and 9 <= now.hour < 17:
        # Proceed with deployment
        return put_success(event)
    else:
        # Calculate wait time until next window
        wait_message = calculate_next_window(now)
        return put_failure(event, wait_message)
Step Functions: Implement wait until business hours
Approval action: Automated approval only during hours
For exam: Know that CodeDeploy itself doesn't have window scheduling; implement at pipeline level.
Question 82
A development team is deploying a serverless application using CodeDeploy with AWS SAM. The SAM template defines a Lambda function with an AutoPublishAlias property set to "live". How does this integrate with CodeDeploy?
A. SAM automatically creates CodeDeploy resources for traffic shifting
B. CodeDeploy must be configured separately from SAM
C. SAM and CodeDeploy are not compatible
D. AutoPublishAlias only creates versions, not CodeDeploy integration
Answer: A
Explanation:
AWS SAM integrates CodeDeploy for gradual deployments:
Canary10Percent5Minutes, Canary10Percent10Minutes, etc.
Linear10PercentEvery1Minute, Linear10PercentEvery2Minutes, etc.
AllAtOnce
SAM handles the CodeDeploy resource creation, simplifying serverless CI/CD.
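A sketch of the template properties involved (the function name, handler, alarm, and hook references are placeholders):

```yaml
MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Handler: app.handler
    Runtime: python3.12
    AutoPublishAlias: live
    DeploymentPreference:
      Type: Linear10PercentEvery1Minute
      Alarms:
        - !Ref FunctionErrorAlarm
      Hooks:
        PreTraffic: !Ref PreTrafficHookFunction
```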
Question 83
An application running on EC2 requires environment-specific configuration. Different configuration files should be deployed to development, staging, and production environments. The deployment bundle is the same for all environments. How should this be handled with CodeDeploy?
A. Create separate deployment bundles per environment
B. Use appspec.yml with conditional file mappings
C. Use environment-specific lifecycle hooks to apply configuration
D. Store configurations in Parameter Store and fetch during deployment
Answer: C or D
Explanation:
Multiple approaches for environment-specific config:
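A hook-based sketch of Option C: CodeDeploy exports DEPLOYMENT_GROUP_NAME to lifecycle scripts, so an AfterInstall script can pick the right file. The deployment group and file names below are illustrative:

```shell
#!/bin/bash
# AfterInstall: select the configuration file for this environment
select_config() {
  case "$1" in
    *Production*) echo "config.prod.yml" ;;
    *Staging*)    echo "config.staging.yml" ;;
    *)            echo "config.dev.yml" ;;
  esac
}

CONFIG_FILE=$(select_config "${DEPLOYMENT_GROUP_NAME:-dev}")
echo "Would activate: $CONFIG_FILE"
# e.g. cp "/opt/app/config/$CONFIG_FILE" /opt/app/config/active.yml
```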
Question 84
A company is using CodeDeploy to deploy Docker containers to EC2 instances. The deployment should pull the latest image from ECR and restart the container. Which lifecycle hooks should be used?
A. ApplicationStop to stop container, ApplicationStart to pull and start new container
B. BeforeInstall to pull image, AfterInstall to start container
C. ApplicationStop to stop container, Install to pull image, ApplicationStart to start container
D. Download bundle handles Docker image pull automatically
Answer: A
Explanation:
For Docker container deployments on EC2 with CodeDeploy:
Install phase copies files from S3 (scripts, configs)
Docker image pull happens in hooks, not Install
Version can be passed via environment variable or appspec
The deployment bundle contains scripts and configuration, not the Docker image itself.
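A sketch of the two hook scripts (the container name, repository URI, region, and tag are placeholders):

```shell
# scripts/application_stop.sh
docker stop app || true
docker rm app || true

# scripts/application_start.sh
aws ecr get-login-password --region us-east-1 |
  docker login --username AWS --password-stdin 111122223333.dkr.ecr.us-east-1.amazonaws.com
docker pull 111122223333.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
docker run -d --name app -p 80:8080 \
  111122223333.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
```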
Question 85
A CodeDeploy deployment to an ECS service is failing. The error states "The deployment failed because the ECS service couldn't reach steady state." What are possible causes?
A. Task definition has errors causing container failures
B. Insufficient ECS cluster capacity
C. Container health checks are failing
D. All of the above
Answer: D
Explanation:
ECS "steady state" requires all tasks to be running and healthy. Failure causes:
1. Task definition issues (Option A):
Invalid image reference
Incorrect environment variables
Resource limits too low
Missing IAM permissions
2. Capacity issues (Option B):
Not enough CPU/memory in cluster
No available container instances
Fargate capacity not available
3. Health check failures (Option C):
Container starts but health check fails
Load balancer health check path incorrect
Application startup time exceeds health check grace period
Check ECS events and stopped task reasons for specific failure cause.
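The stopped-task reason is usually the fastest signal; a sketch with placeholder cluster and task identifiers:

```shell
# Recently stopped tasks and why they stopped
aws ecs list-tasks --cluster my-cluster --desired-status STOPPED
aws ecs describe-tasks --cluster my-cluster --tasks <task-arn> \
  --query 'tasks[].{reason:stoppedReason,containers:containers[].reason}'
```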
Question 86
A DevOps team wants to test CodeDeploy deployments in a lower environment before production. They want to use the same deployment configuration but with faster rollback detection. Which deployment group settings should differ between environments?
A. Deployment configuration (faster thresholds for non-prod)
B. Alarm configuration (stricter alarms for lower environments)
C. Rollback settings (faster rollback in lower environments)
D. All settings should be identical for accurate testing
Answer: A or B (depends on strategy)
Explanation:
Environment-specific deployment considerations:
Lower environment optimizations:
Option A - Faster deployment configurations:
Production: Linear10PercentEvery10Minutes
Non-prod: Linear10PercentEvery1Minute or AllAtOnce
Faster in non-prod for quick feedback.
Option B - Stricter alarms:
Lower thresholds in non-prod to catch issues:
Option D consideration:
Some organizations prefer identical settings to accurately simulate production behavior. Trade-off between speed and accuracy.
Best practice:
Same deployment TYPE (e.g., blue/green)
Faster time intervals in non-prod
Similar but appropriately-scaled thresholds
Test rollback procedures in non-prod
Question 87
A company uses CodeDeploy for Lambda deployments with traffic shifting. They want to implement a "bake time" where the new version runs with production traffic for 30 minutes before the deployment is considered complete, even after traffic is fully shifted. How can this be achieved?
A. Configure the deployment wait time in deployment configuration
B. Use AfterAllowTraffic hook with a Lambda that waits and monitors
C. Extend the deployment with CloudWatch Events and manual completion
D. CodeDeploy automatically waits after traffic shift
Answer: B
Explanation:
Use an AfterAllowTraffic hook Lambda that monitors metrics during the bake period and then reports Succeeded or Failed to CodeDeploy.
Note: The hook Lambda's timeout must exceed the bake time (maximum 15 minutes). For longer bake times, use Step Functions.
Question 88
A CodeDeploy deployment fails with "No instances found for deployment group." The deployment group is configured to target an Auto Scaling group that has 5 running instances. All instances are tagged correctly and have the CodeDeploy agent running. What should be checked?
A. The Auto Scaling group name in deployment group configuration
B. The deployment group is targeting EC2 tags instead of ASG
C. The ASG instances are in a different region
D. The IAM service role permissions
Answer: A or B
Explanation:
"No instances found" error troubleshooting:
Check deployment group target type:
Auto Scaling group targeting:
Verify ASG name is correct in deployment group
ASG must exist and have running instances
Instances must be InService state
Tag-based targeting:
If configured with tags, instances must match tags
Multiple tag conditions use AND logic
Check tag key/value spelling
Common issues:
Deployment group references ASG that was deleted/recreated
ASG name changed but deployment group not updated
Mixed targeting (expecting ASG but configured for tags)
Question 89
An organization wants to standardize CodeDeploy configurations across multiple deployment groups. They want to ensure all deployment groups use specific lifecycle hooks and alarm configurations. What approach should they use?
A. Use AWS CloudFormation templates for deployment group creation
B. Create a custom CodeDeploy deployment configuration
C. Use AWS Config rules to enforce configuration
D. Implement a CI/CD pipeline that validates deployment group settings
Answer: A
Explanation:
CloudFormation templates (Option A) define deployment groups as code, so every group is created with the required hooks and alarm configurations. A pipeline check (Option D) can complement this:
Pre-deployment validation of deployment group configuration
CI/CD check before allowing deployments
Audit trail of changes
AWS Config (Option C) can detect drift but not enforce settings proactively.
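A CloudFormation sketch of a standardized deployment group (logical names, role ARN, and alarm name are placeholders):

```yaml
ProdDeploymentGroup:
  Type: AWS::CodeDeploy::DeploymentGroup
  Properties:
    ApplicationName: !Ref MyApplication
    DeploymentGroupName: Prod-DG
    ServiceRoleArn: arn:aws:iam::111122223333:role/CodeDeployServiceRole
    AlarmConfiguration:
      Enabled: true
      Alarms:
        - Name: HighErrorRateAlarm
    AutoRollbackConfiguration:
      Enabled: true
      Events:
        - DEPLOYMENT_FAILURE
        - DEPLOYMENT_STOP_ON_ALARM
```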
Question 90
A company's CodeDeploy deployment includes running integration tests during the ValidateService hook. The tests take 10 minutes to complete. The deployment times out. What is the maximum timeout that can be configured for a lifecycle hook?
A. 5 minutes
B. 1 hour
C. 3600 seconds (1 hour)
D. Lifecycle hook timeout is the remaining deployment timeout
Answer: B/C (both describe the same limit: 3600 seconds, i.e., 1 hour)
Explanation:
Each lifecycle event script can run for up to 3600 seconds; this is also the default when no timeout is set in appspec.yml. If validation needs longer than an hour:
Run tests asynchronously (script returns, tests continue)
Use external systems for long-running validations
Consider moving tests to a separate pipeline stage
For 10-minute tests, the default timeout is sufficient. Check whether the script is hanging or the tests are actually taking longer than expected.
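In appspec.yml, the timeout is set per script (values illustrative):

```yaml
hooks:
  ValidateService:
    - location: scripts/run_integration_tests.sh
      timeout: 900
      runas: root
```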
Question 91
A team uses CodeDeploy with EC2 instances in multiple Availability Zones. They want deployments to update one AZ at a time to maintain cross-AZ availability. How should this be configured?
A. Create separate deployment groups per AZ
B. Use the AZ-aware deployment configuration option
C. CodeDeploy automatically handles AZ distribution
D. Tag instances by AZ and use multiple deployment groups with pipeline sequencing
Answer: D
Explanation:
CodeDeploy doesn't have native AZ-aware deployment ordering. Implement via:
Solution: Multiple deployment groups with orchestration
Setup:
Tag instances by AZ:
Tag: AZ: us-east-1a
Tag: AZ: us-east-1b
Create deployment groups per AZ:
DeploymentGroup-AZ-1a (targets AZ: us-east-1a)
DeploymentGroup-AZ-1b (targets AZ: us-east-1b)
Pipeline orchestration:
Stage: Deploy-AZ-1a
Action: Deploy to DeploymentGroup-AZ-1a
Stage: Validate-AZ-1a
Action: Health check / manual approval
Stage: Deploy-AZ-1b
Action: Deploy to DeploymentGroup-AZ-1b
Alternative:
Use CodeDeploy's minimum healthy hosts with high threshold to force sequential updates across AZ boundaries (less control).
Question 92
A CodeDeploy deployment to Lambda uses a traffic-shifting configuration. The deployment completes successfully, but some clients are still being routed to the old version hours later. What is the cause?
A. Lambda alias caching at the client
B. CloudFront caching the old Lambda responses
C. DNS caching of Lambda endpoints
D. This behavior indicates a failed deployment
Answer: A (client-side connection or endpoint reuse can keep some callers pinned to the old version)
Explanation:
Traffic shifting behavior clarification:
How Lambda alias traffic shifting works:
Each NEW invocation is routed based on current alias weights
During traffic shift: some invocations go to v1, some to v2
After shift complete: all invocations go to new version
"Old version still receiving traffic" after completion:Possible causes:
Connection reuse: Some SDKs/clients reuse connections
Provisioned concurrency: Old version PC instances still exist
Step Functions/SQS: Messages queued before shift complete
Event source mapping: Takes time to fully shift
Troubleshooting:
# Check alias configuration
aws lambda get-alias \
--function-name MyFunction \
--name live
# Should show 100% to new version
If alias shows 100% new version but old version still invoked, check client connection patterns and event sources.
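The check can also be done programmatically; a small helper that derives effective version weights from the standard GetAlias response shape (FunctionVersion plus optional RoutingConfig.AdditionalVersionWeights):

```python
def version_weights(alias_response):
    """Return {version: weight} from a lambda get-alias response.
    The primary version gets the remainder after additional weights."""
    primary = alias_response['FunctionVersion']
    additional = (alias_response.get('RoutingConfig') or {}) \
        .get('AdditionalVersionWeights', {})
    weights = dict(additional)
    weights[primary] = 1.0 - sum(additional.values())
    return weights
```

After a completed shift, RoutingConfig is absent or empty, so the new version should carry weight 1.0.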
Question 93
An organization needs to deploy the same application to EC2 instances, Lambda functions, and ECS services. They want to use a single deployment pipeline. How should this be architected?
A. Use a single CodeDeploy application with multiple deployment groups
B. Create separate CodeDeploy applications for each compute platform
C. Use CloudFormation StackSets for unified deployment
D. Use a CodePipeline with parallel deploy actions for each platform
Answer: D (with B as supporting detail)
Explanation:
CodeDeploy applications are platform-specific:
CodeDeploy application platforms:
EC2/On-premises
Lambda
ECS
Cannot mix platforms in single application. Therefore:
Architecture:
One CodePipeline with a deploy stage containing parallel actions, each targeting a platform-specific CodeDeploy application (one for EC2/On-premises, one for Lambda, one for ECS).
This provides a unified pipeline with platform-appropriate deployments.
Question 94
A company uses CodeDeploy for EC2 deployments. They want to automatically notify a Slack channel when deployments start, succeed, or fail. What is the recommended approach?
A. Configure CodeDeploy notifications directly to Slack
B. Use CloudWatch Events to trigger Lambda that posts to Slack
C. Use Amazon SNS with Slack integration
D. Configure lifecycle hooks to post to Slack
Answer: B (or C via Amazon SNS + Lambda/Chatbot)
Explanation:
CodeDeploy notification options:
CloudWatch Events + Lambda (Option B):
import os
import requests

def lambda_handler(event, context):
    deployment_id = event['detail']['deploymentId']
    state = event['detail']['state']
    message = f"CodeDeploy deployment {deployment_id}: {state}"
    # Post to the Slack incoming webhook
    requests.post(os.environ['SLACK_WEBHOOK'], json={'text': message})
Both approaches work. AWS Chatbot (with SNS, Option C) is simpler for standard notifications; a Lambda function provides more customization.
Question 95
A DevOps engineer notices that CodeDeploy deployments to an Auto Scaling group sometimes skip newly launched instances. The ASG is configured with a CodeDeploy lifecycle hook. What should be verified?
A. The lifecycle hook timeout is sufficient for CodeDeploy agent installation
B. The lifecycle hook is on LAUNCHING, not TERMINATING
C. The IAM role allows CodeDeploy to complete the lifecycle action
D. The CodeDeploy agent is installed in the AMI or via user data
Question 96
A company wants to implement a deployment strategy where 1% of traffic goes to the new version for 1 hour before proceeding. If any errors occur during this period, the deployment should automatically roll back. Which configuration achieves this with CodeDeploy Lambda deployments?
A. Canary10Percent30Minutes
B. Custom deployment configuration with 1% shift and 60-minute interval
C. Linear1PercentEvery1Minute
D. CodeDeploy doesn't support less than 10% traffic shifts
Answer: B
Explanation:
AWS-provided configurations use 10% minimum for canary. Custom configurations allow lower percentages.
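A custom configuration of this kind can be sketched as a boto3 CreateDeploymentConfig payload using time-based canary routing (the configuration name is a placeholder):

```python
def canary_config(name, percent, interval_minutes):
    """CreateDeploymentConfig payload: shift `percent` of traffic,
    wait `interval_minutes`, then shift the remainder."""
    return {
        'deploymentConfigName': name,
        'computePlatform': 'Lambda',
        'trafficRoutingConfig': {
            'type': 'TimeBasedCanary',
            'timeBasedCanary': {
                'canaryPercentage': percent,
                'canaryInterval': interval_minutes,
            },
        },
    }

# cd.create_deployment_config(**canary_config('Canary1Percent60Minutes', 1, 60))
```

Pair this with alarm-based automatic rollback on the deployment group so errors during the 1-hour canary window abort the deployment.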
Question 97
An ECS service deployed with CodeDeploy blue/green is experiencing connection drops during the traffic shift. Current configuration shifts traffic all at once. How can this be improved?
A. Use linear or canary traffic shifting
B. Increase the deregistration delay on target groups
C. Add connection draining to the deployment configuration
D. Both A and B
Answer: D
Explanation:
Connection drops during an instant traffic shift have two complementary fixes:
Linear or canary shifting moves traffic gradually instead of all at once
Deregistration delay allows in-flight requests on old tasks to complete
New requests go to new tasks
Additional considerations:
Application graceful shutdown handling
Connection timeout settings
Health check intervals
Question 98
A team needs to deploy a new version of their application but keep the previous version available for manual rollback for 7 days. They're using CodeDeploy with EC2 instances in an Auto Scaling group. What deployment strategy supports this?
A. In-place deployment with 7-day rollback window
B. Blue/green deployment with termination wait time of 7 days
C. Blue/green deployment with "keep original instances" option
D. Create a separate ASG with previous version manually
Answer: C
Explanation:
Blue/green with the "keep original instances" (KEEP_ALIVE) option leaves the old environment in place indefinitely; the termination wait time (Option B) maxes out at 2 days, so it cannot cover a 7-day window.
Instances remain until manually terminated. Rollback by traffic shift or ASG swap.
Alternative (cost-optimized):
Keep AMI or snapshot of previous version
Maintain previous version in a scaled-down ASG
Create rollback deployment if needed
For exam: Understand the cost implications of keeping old environments running.
Question 99
A CodeDeploy deployment to EC2 instances uses the AllAtOnce configuration. The deployment succeeds on 3 of 5 instances but fails on 2. What happens to the deployment?
A. The deployment fails, all instances are rolled back
B. The deployment succeeds, failed instances retain old version
C. The deployment fails, 3 successful instances keep new version
D. Depends on minimum healthy hosts configuration
Answer: D
Explanation:
AllAtOnce behavior depends on minimum healthy hosts:
AllAtOnce default configuration:
minimumHealthyHosts: type: HOST_COUNT, value: 0
With minimumHealthyHosts = 0, the deployment SUCCEEDS if the revision deploys to at least one instance
Successfully deployed instances KEEP the new version
Scenario analysis:
If minimum healthy = 0 (the AllAtOnce default):
Deployment status: SUCCEEDED (3 of 5 deployed meets the threshold)
3 instances have new version
2 instances have old version and need a follow-up deployment
If minimum healthy = 4:
Deployment status: FAILED (only 3 instances are healthy on the new version)
CodeDeploy stops the deployment; rollback depends on configuration
Key point: Failed deployments don't automatically roll back successfully deployed instances unless automatic rollback is configured on the deployment group.
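Enabling that automatic rollback can be sketched as a boto3 UpdateDeploymentGroup payload (application and group names are placeholders):

```python
def enable_auto_rollback(app_name, group_name):
    """UpdateDeploymentGroup payload that rolls back the whole
    deployment group when a deployment fails."""
    return {
        'applicationName': app_name,
        'currentDeploymentGroupName': group_name,
        'autoRollbackConfiguration': {
            'enabled': True,
            'events': ['DEPLOYMENT_FAILURE'],
        },
    }

# cd.update_deployment_group(**enable_auto_rollback('WebApp', 'Prod'))
```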
Question 100
A company is migrating from a third-party deployment tool to CodeDeploy. Their current tool supports "stop on first failure" behavior. How can this be achieved in CodeDeploy?
A. Use OneAtATime deployment configuration
B. Configure minimum healthy hosts to (total instances - 1)
C. Enable stop deployment on first failure setting
D. Create a custom deployment configuration with fail-fast behavior
Answer: A
Explanation:
Deployment configuration failure behavior:
OneAtATime: Stops the deployment as soon as an instance fails
HalfAtATime: Continues with remaining instances in the batch
AllAtOnce: All instances attempted regardless of failures
For strict stop-on-first-failure, OneAtATime is the answer.
Question 101
A company uses CloudFormation to deploy infrastructure and CodePipeline for CI/CD. They want CloudFormation stack updates to fail if they would cause resource replacement (potential data loss). How should this be configured?
A. Use stack policies to prevent replacement actions
B. Enable termination protection on the stack
C. Use change sets and implement a Lambda function to analyze changes
D. Configure DeletionPolicy on all resources
Answer: C (or A for specific resources)
Explanation:
Preventing unintended resource replacement:
Option C - Change sets with analysis:
Create change set instead of direct update
Lambda analyzes change set for replacements:
import boto3

cfn = boto3.client('cloudformation')

def analyze_change_set(change_set_name):
    changes = cfn.describe_change_set(ChangeSetName=change_set_name)
    for change in changes['Changes']:
        if change['ResourceChange'].get('Replacement') == 'True':
            return 'REJECT'
    return 'APPROVE'
Pipeline approval based on analysis
Option A - Stack policies (for known critical resources):
Stack policies protect specific resources but require knowing which to protect. Change set analysis provides dynamic checking.
Question 102
A CloudFormation template creates an EC2 instance and installs software using cfn-init. The stack creation succeeds but the software installation fails. How can the template be modified to fail the stack creation if cfn-init fails?
A. Add a CreationPolicy with a signal timeout
B. Add a WaitCondition for the cfn-init completion
C. Use cfn-signal to report success/failure with CreationPolicy
Answer: C
Explanation:
How it works:
CreationPolicy makes CloudFormation wait for signal
cfn-signal sends success/failure based on cfn-init exit code ($?)
Stack fails if timeout or failure signal received
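The CreationPolicy/cfn-signal pairing can be sketched as a template fragment, expressed here as Python data for brevity (the WebServer resource name is hypothetical; Metadata and Properties omitted):

```python
# User data runs cfn-init, then signals its exit code ($?) to CloudFormation
user_data = """#!/bin/bash
/opt/aws/bin/cfn-init -s ${AWS::StackName} -r WebServer --region ${AWS::Region}
/opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackName} \\
    --resource WebServer --region ${AWS::Region}
"""

web_server = {
    'Type': 'AWS::EC2::Instance',
    'CreationPolicy': {
        # Stack creation fails unless 1 success signal arrives within 15 min
        'ResourceSignal': {'Count': 1, 'Timeout': 'PT15M'},
    },
}
```

With `-e $?`, a non-zero cfn-init exit code becomes a failure signal and the stack rolls back.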
Question 103
An organization uses CloudFormation StackSets to deploy resources across multiple accounts. A new account is added to the organization. How can the StackSet automatically deploy to the new account?
A. Enable automatic deployment in StackSet configuration
B. Add the new account to the StackSet target accounts
C. Configure the StackSet to target an Organization Unit (OU)
Answer: C
Explanation:
When new account joins OU → StackSet automatically deploys
When account leaves OU → stacks optionally removed
Requirements:
AWS Organizations integration
StackSet created with SERVICE_MANAGED permissions
Trusted access enabled for CloudFormation StackSets
This provides automatic governance and baseline deployment for new accounts.
Question 104
A company's CloudFormation template includes a Lambda function that should only be updated when the function code changes, not when other template parameters change. How should this be configured?
A. Use a custom resource to manage Lambda updates
B. Package Lambda code in S3 with versioned keys
C. Use the AWS::Lambda::Version resource
D. Configure the function with UpdateReplacePolicy: Retain
Answer: B (or use SAM/CloudFormation deployment package)
When the versioned S3 key (e.g., driven by a CodeVersion parameter) changes → function code updates
When other parameters change → function code doesn't update
Alternative - SAM packaging:
SAM CLI packages code with content-based hashes:
sam package --s3-bucket my-bucket
Creates unique S3 keys based on code content.
AWS::Lambda::Version:
Creates new version on each update but doesn't control when updates happen.
The key is controlling the S3 key changes to match code changes only.
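The content-hash approach SAM uses can be imitated directly; a minimal sketch (the prefix and naming scheme are hypothetical):

```python
import hashlib

def content_addressed_key(prefix, code_bytes):
    """Derive an S3 key from the code's content hash, so the key
    (and thus the CloudFormation S3Key property) changes only when
    the code itself changes."""
    digest = hashlib.sha256(code_bytes).hexdigest()[:16]
    return f'{prefix}/lambda-{digest}.zip'
```

Uploading the package under this key and passing it as the S3Key means unrelated parameter changes never touch the function's Code property.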
Question 105
A DevOps engineer is implementing blue/green deployments using CloudFormation. The template defines an Auto Scaling group and Application Load Balancer. What CloudFormation update policy enables blue/green behavior?
A. AutoScalingReplacingUpdate with WillReplace: true
B. AutoScalingRollingUpdate with custom batch sizes
C. UpdatePolicy with AutoScalingScheduledAction
D. CloudFormation doesn't have native blue/green update policies
Answer: A
Explanation:
Rolling update modifies existing ASG instances in batches.
For true blue/green: Use AutoScalingReplacingUpdate or external tools (CodeDeploy).
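The replacing-update policy can be sketched as a template fragment, written here as Python data (ASG properties omitted):

```python
asg_resource = {
    'Type': 'AWS::AutoScaling::AutoScalingGroup',
    'UpdatePolicy': {
        # WillReplace: CloudFormation creates a NEW ASG, waits for its
        # instances to signal success, then deletes the old ASG
        'AutoScalingReplacingUpdate': {'WillReplace': True},
    },
}
```

Pair this with a CreationPolicy on the ASG so the cutover only happens after the replacement group's instances signal healthy.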
Question 106
A CloudFormation stack update fails and rolls back. The DevOps engineer wants to investigate what went wrong before the resources are rolled back. What feature allows this?
A. Enable termination protection before updates
B. Disable rollback in the stack update settings
C. Use the UPDATE_ROLLBACK_FAILED status and continue rollback later
D. Enable detailed stack events logging
Answer: B
Explanation:
Disabling rollback for debugging:
Console:
Stack settings → Rollback on failure: Disabled
CLI:
aws cloudformation update-stack --stack-name my-stack --template-body file://template.yaml --disable-rollback
NOT recommended for production (leaves the stack in an inconsistent state); use it to inspect the failed resources, fix the issue, then update again.
Question 107
An application uses CloudFormation nested stacks for modular infrastructure. Updates to the parent stack sometimes fail because child stack exports are in use. How should this be handled?
A. Use cross-stack references instead of exports
B. Delete dependent stacks before updating exports
C. Use SSM Parameter Store for shared values
D. Use export names that don't change
Answer: C or D
Explanation:
Managing cross-stack dependencies:
The problem:
Stack A declares an Output with Export: { Name: VpcId }
Stack B imports !ImportValue VpcId
Trying to update Stack A's export fails because it's in use
Solution Option D - Stable export names:
Don't change export names; change export values instead. This requires careful design upfront.
Cross-stack references via SSM provide more flexibility than CloudFormation exports.
Question 108
A company uses Elastic Beanstalk for their web application. They need to customize the Nginx configuration to add custom headers. What is the recommended approach?
A. SSH into instances and modify nginx.conf
B. Use .ebextensions to add configuration files
C. Create a custom AMI with modified Nginx configuration
D. Use .platform hooks to modify Nginx configuration
Answer: D (for newer Amazon Linux 2 platforms) or B
Explanation:
Elastic Beanstalk customization options:
For Amazon Linux 2 platforms - .platform (Option D):
Drop .conf files into .platform/nginx/conf.d/ to extend the Nginx configuration without replacing nginx.conf
Option A (SSH) is not reproducible. Option C (custom AMI) adds maintenance burden.
Question 109
An Elastic Beanstalk application needs to run a script every time a new version is deployed, after the application is running. Which hook should be used?
A. appdeploy/pre hook
B. appdeploy/post hook
C. configdeploy/post hook
D. postdeploy hook
Answer: B (or D depending on EB platform version)
Explanation:
Elastic Beanstalk deployment hooks (Amazon Linux 2):
Hook directories:
.platform/hooks/
prebuild/ # Before application builds
predeploy/ # After build, before deployment
postdeploy/ # After deployment complete
For post-deployment scripts (Option D for AL2):
.platform/hooks/postdeploy/99_run_migrations.sh
#!/bin/bash
cd /var/app/current
./run-migrations.sh
For post-deployment with running application, use .platform/hooks/postdeploy/ on Amazon Linux 2.
Question 110
A company wants to implement a deployment pipeline where infrastructure changes and application code changes are deployed together atomically. If either fails, both should roll back. How should this be designed?
A. Separate pipelines for infrastructure and application with manual coordination
B. Single pipeline with CloudFormation deploying both infrastructure and application
C. Single pipeline with infrastructure stage followed by application stage
D. Use CloudFormation StackSets for coordinated deployment
Answer: B
Explanation:
CloudFormation creates change set with all changes
If any resource fails → entire stack rolls back
Both infrastructure and application return to previous state
Pipeline integration:
Source → Build → Deploy (CloudFormation with application code packaged)
CloudFormation's native rollback handles the atomicity requirement.
Question 111
An organization uses Elastic Beanstalk with a worker environment processing SQS messages. The worker occasionally processes the same message multiple times. How can this be prevented?
A. Enable FIFO queue for the environment
B. Increase the visibility timeout on the SQS queue
C. Implement idempotent message processing in the application
D. Configure Elastic Beanstalk to delete messages immediately
Answer: B and C
Explanation:
Duplicate message processing causes:
1. Visibility timeout too short (Option B):
Message becomes visible again before processing completes
Another worker picks up the same message
Solution: Increase visibility timeout to exceed maximum processing time
2. How the Beanstalk worker daemon (sqsd) behaves:
Automatically deletes messages after successful processing (HTTP 200)
Keeps messages visible during processing
Returns messages to queue on failure
Both visibility timeout adjustment AND idempotent processing are best practices.
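Idempotent processing (Option C) can be sketched as a wrapper keyed on the SQS message ID. This is an in-memory sketch; production would use a conditional write to a durable store such as DynamoDB:

```python
def make_idempotent_handler(process):
    """Wrap a message handler so redelivered messages become no-ops."""
    seen = set()

    def handler(message_id, body):
        if message_id in seen:
            # Duplicate delivery: skip without reprocessing
            return 'duplicate-skipped'
        result = process(body)
        seen.add(message_id)  # record only after successful processing
        return result

    return handler
```

Recording the ID only after success means a crash mid-processing still allows a clean retry.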
Question 112
A CloudFormation template creates an RDS database. The team wants to ensure the database is not deleted even if the stack is deleted. What configuration achieves this?
A. Enable deletion protection on the RDS instance
B. Set DeletionPolicy to Retain on the RDS resource
Answer: B
Explanation:
DeletionPolicy: Retain keeps the database when the stack is deleted. RDS deletion protection (Option A) blocks delete calls at the API level, and automated backups/final snapshots guard the data itself.
Best practice for production databases:
Use all three protections for critical data.
Question 113
A company uses CloudFormation with nested stacks. They want to update a child stack independently without updating the parent. Is this possible and how?
A. Yes, update the child stack directly
B. No, child stacks must be updated through the parent
C. Yes, but only if the child stack was created with UPDATE capability
D. No, nested stacks don't support independent updates
Answer: A
Explanation:
Nested stack update options:
Direct child stack updates (Option A):
Child stacks are regular CloudFormation stacks
Can be updated directly using stack name or ID
Changes are independent of parent
When to use direct updates:
Quick fixes to child stack
Independent component updates
Parent template doesn't need changes
Considerations:
Parent template might become out of sync with child state
Next parent update might cause unexpected child changes
Drift detection can identify differences
When to use parent updates:
Coordinated changes across stacks
Version control of complete infrastructure
Consistent state management
Best practice:
Update through parent for version control and consistency. Direct child updates for emergencies or independent components.
Question 114
An Elastic Beanstalk environment uses environment properties for configuration. The team wants to rotate a database password without redeploying the application. How can this be achieved?
A. Update environment properties, which triggers instance refresh
B. Use Secrets Manager with application-level caching and rotation
C. Store credentials in .ebextensions and update the file
D. Use Parameter Store SecureString with application polling
Answer: B (or D)
Explanation:
Credential rotation without deployment:
Option B - Secrets Manager (preferred for credentials): the application fetches the secret at runtime (with client-side caching), so rotation takes effect without a redeploy.
Question 115
A CloudFormation template uses a custom resource backed by a Lambda function. The Lambda function creates resources that take 10 minutes to complete. CloudFormation shows the stack creation still in progress after 15 minutes. What is likely happening?
A. Lambda function timeout is too short
B. Lambda function isn't sending a response to CloudFormation
C. CloudFormation is waiting for additional resources
D. Custom resource requires more time than standard timeout
Answer: B
Explanation:
Custom resource behavior:
Required response:
Custom resource Lambda MUST send response to CloudFormation:
import cfnresponse

def handler(event, context):
    try:
        # Do work (may take up to 10 minutes)
        result = create_external_resource()
        # MUST send a response, or CloudFormation waits until timeout
        cfnresponse.send(event, context, cfnresponse.SUCCESS,
                         {'ResourceId': result['id']})
    except Exception as e:
        cfnresponse.send(event, context, cfnresponse.FAILED,
                         {'Error': str(e)})
Common issues:
Lambda timeout before work completes (Lambda dies, no response)
Function completes but doesn't send response (CloudFormation waits)
Response sent to wrong URL (misconfigured)
Troubleshooting:
Check Lambda logs for completion
Verify cfnresponse.send is called
Check Lambda timeout (max 15 minutes)
For long-running tasks, use Step Functions or asynchronous pattern with status polling.
Question 116
A company wants to deploy the same Elastic Beanstalk application to multiple regions with region-specific configuration. What is the recommended approach?
A. Create saved configurations per region and restore in each region
B. Use CloudFormation with parameters for region-specific values
C. Create separate applications per region with .ebextensions containing region configs
D. Use Elastic Beanstalk environment cloning across regions
Answer: B
Explanation:
Best practice:
Use CloudFormation or Terraform for multi-region with region-specific parameters stored in SSM Parameter Store per region.
Question 117
A CloudFormation stack uses a custom resource to create a DNS record in an external DNS provider. When the stack is deleted, the DNS record should also be deleted. How is this implemented?
A. The custom resource Lambda automatically handles deletions
B. Implement Delete handling in the custom resource Lambda
C. Set DeletionPolicy to Delete on the custom resource
Answer: B
Explanation:
CloudFormation sends a request with RequestType: Delete to the custom resource Lambda during stack deletion; the handler must act on it to clean up the external record.
The Lambda MUST handle Delete requests for proper cleanup.
Question 118
An organization uses CloudFormation for infrastructure deployment. They want to prevent any modifications to production stacks except through the CI/CD pipeline. How should this be enforced?
A. Use IAM policies to deny CloudFormation actions for console users
B. Enable stack policy with deny all updates
C. Use SCPs to restrict CloudFormation access
D. Combine IAM policies restricting CloudFormation with pipeline role exceptions
Answer: D
commands: Run during instance launch (before app deployment)
container_commands: Run during deployment (leader election available)
.platform/hooks/: Run during deployment lifecycle
For scaling-only commands (new instances, no deployment), use commands section or instance launch scripts.
Question 120
A company uses CloudFormation for infrastructure and wants to implement drift detection to identify manual changes. How should this be automated?
A. Schedule Lambda function to run DetectStackDrift API
B. Enable automatic drift detection in CloudFormation
C. Use AWS Config rules for drift detection
D. CloudFormation Events with EventBridge for drift alerts
Answer: A (or C for more comprehensive detection)
Explanation:
CloudFormation drift detection automation:
Option A - Scheduled drift detection:
import boto3

def lambda_handler(event, context):
    cfn = boto3.client('cloudformation')
    # List all stable stacks
    stacks = cfn.list_stacks(
        StackStatusFilter=['CREATE_COMPLETE', 'UPDATE_COMPLETE'])
    for stack in stacks['StackSummaries']:
        # Initiate drift detection (asynchronous operation)
        cfn.detect_stack_drift(StackName=stack['StackName'])
# Schedule with EventBridge (e.g., a daily rule)
Follow-up Lambda for results:
def check_drift_results(event, context):
    cfn = boto3.client('cloudformation')
    drift_status = cfn.describe_stack_resource_drifts(
        StackName=event['stack_name'])
    drifted = [r for r in drift_status['StackResourceDrifts']
               if r['StackResourceDriftStatus'] == 'MODIFIED']
    if drifted:
        send_alert(drifted)  # e.g., publish to an SNS topic
Option C - AWS Config:
AWS Config rule cloudformation-stack-drift-detection-check can monitor for drift.
CloudFormation doesn't have built-in automatic drift detection (Option B doesn't exist).
Question 121
A CloudFormation template creates an S3 bucket. The team wants to ensure the bucket has encryption enabled and blocks public access, regardless of what the template specifies. How can this be enforced?
A. Use CloudFormation hooks to validate templates
B. Use AWS Config rules to check bucket configuration
C. Use SCPs to deny bucket creation without encryption
D. Use CloudFormation Guard for policy-as-code validation
Answer: A or D (preventive) and B (detective)
Explanation:
Enforcing S3 security standards:
Option A - CloudFormation Hooks (preventive):
# Hook handler (sketch; request field names simplified)
def validate_s3_bucket(event):
    props = event['requestData']['targetModel']['resourceProperties']
    # Check encryption
    if 'BucketEncryption' not in props:
        return {'status': 'FAILED', 'message': 'Encryption required'}
    # Check public access block
    if 'PublicAccessBlockConfiguration' not in props:
        return {'status': 'FAILED', 'message': 'Public access block required'}
    return {'status': 'SUCCESS'}
Best practice: Use hooks/guard for prevention AND Config for continuous monitoring.
Question 122
An Elastic Beanstalk environment has immutable deployment configured. During a deployment, the team notices double the number of instances running. The deployment eventually succeeds. Is this expected behavior?
A. No, immutable deployments should maintain the same instance count
B. Yes, immutable deployments create a temporary Auto Scaling group
C. No, this indicates a deployment failure
D. Yes, but only if health check grace period is enabled
Answer: B
Explanation:
Immutable deployment process:
How immutable deployments work:
Create temporary Auto Scaling group
Launch new instances with new version
New instances pass health checks
New instances added to load balancer
Old instances terminated
Temporary ASG deleted
During deployment:
Original ASG: 3 instances (old version)
Temporary ASG: 3 instances (new version)
Total: 6 instances
After successful deployment:
Original ASG: 3 instances (new version)
Temporary ASG: deleted
Total: 3 instances
Benefits:
No capacity reduction during deployment
Quick rollback (terminate temporary ASG)
Clean instances with new version
Cost consideration:
Temporary double capacity means temporary double cost during deployment window.
Question 123
A company uses CloudFormation to deploy VPCs and associated resources. They want to ensure that VPC CIDR blocks don't overlap with existing VPCs in the account. How can this be implemented?
A. Use a CloudFormation macro to validate CIDR blocks
B. Create a custom resource that validates CIDR before VPC creation
C. Use CloudFormation Guard to check CIDR blocks
D. Implement a CloudFormation hook for pre-create validation
Elastic Beanstalk doesn't have native EFS integration in the console; use .ebextensions or .platform hooks.
Question 125
A CloudFormation template uses the AWS::Include transform to incorporate template snippets from S3. During stack updates, the snippets have been modified in S3 but CloudFormation isn't picking up the changes. What is the issue?
A. Include transform only runs during stack creation
B. CloudFormation caches transformed templates
C. S3 objects need versioning for change detection
D. CloudFormation needs stack policy update to detect include changes
Answer: B (in effect: the transform isn't reprocessed unless the template itself changes)
Explanation:
If the parent template is unchanged, CloudFormation reports "No updates are to be performed" and never re-fetches the snippet. Workaround: make the snippet's S3 location change with its content, for example by including a ${Version} parameter in the S3 key.
Where ${Version} is a parameter that changes with snippet updates.
CloudFormation processes transforms fresh on each stack operation if the template itself changes.
Question 126
A company uses CloudFormation StackSets to deploy a baseline configuration across 50 accounts. They need to update the StackSet with a new configuration change. The update should complete within 2 hours and minimize concurrent updates per region. What configuration should be used?
A. Use default deployment settings
B. Configure MaxConcurrentPercentage and RegionConcurrencyType
C. Use sequential deployment across all accounts
D. Create multiple StackSets with smaller account subsets
Answer: B
Explanation:
StackSet operation preferences control rollout speed and concurrency:
Allows graceful handling of individual account issues
For 50 accounts in 2 hours:
PARALLEL regions
MaxConcurrentCount: 10-15 accounts
This allows ~3-4 waves to complete within 2 hours
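The operation preferences can be sketched as a boto3 payload for UpdateStackSet (the counts shown are illustrative):

```python
def stackset_op_preferences(max_concurrent, tolerance=2):
    """OperationPreferences: update regions in parallel, cap the
    number of accounts per wave, and tolerate a few failures before
    stopping the whole operation."""
    return {
        'RegionConcurrencyType': 'PARALLEL',
        'MaxConcurrentCount': max_concurrent,
        'FailureToleranceCount': tolerance,
    }

# cfn.update_stack_set(StackSetName='baseline', ...,
#                      OperationPreferences=stackset_op_preferences(12))
```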
Question 127
An Elastic Beanstalk environment uses a rolling deployment policy. During deployment, the team notices that the environment becomes unhealthy and instances are repeatedly terminated. What is the likely cause?
A. The new application version fails health checks
B. The rolling batch size is too large
C. The deployment timeout is too short
D. All of the above could cause this behavior
Answer: D
Explanation:
Rolling deployment failures:
Option A - Health check failures:
New version has bugs or configuration issues
Fails health checks → instance marked unhealthy
Auto Scaling terminates and replaces
Cycle continues
Option B - Batch size too large:
Large batches reduce capacity significantly
Remaining instances overwhelmed by traffic
Performance degradation → health check failures
Option C - Timeout too short:
Application needs longer startup time
Times out before reaching healthy state
Deployment fails and rolls back
Debugging steps:
Check Beanstalk events for specific error messages
Question 128
A CloudFormation template creates an Application Load Balancer and Lambda function for a serverless application. The team wants to test changes to the Lambda code without deploying infrastructure changes. How should the template be structured?
A. Separate stacks for infrastructure and Lambda
B. Use nested stacks with Lambda in a child stack
C. Use AWS SAM for Lambda and CloudFormation for ALB
D. Any of the above would work
Answer: D
Explanation:
Separation strategies:
Option A - Separate stacks:
infra-stack: ALB, VPC, Security Groups
lambda-stack: Lambda function (references infra exports)
Update lambda-stack independently.
Option B - Nested stacks:
Update child stacks independently.
Option C - Mixed tooling:
CloudFormation for long-lived infrastructure
SAM for Lambda (faster iterations)
SAM integrates with CloudFormation
Best practice considerations:
Separate things that change at different rates
Lambda code changes frequently → separate stack
ALB changes rarely → infrastructure stack
Use stack outputs/imports or SSM for integration
Question 129
An organization uses Elastic Beanstalk across multiple teams. They want to ensure all environments use specific instance types and are deployed to approved subnets. How should this be enforced?
A. Use saved configurations that all teams must use
B. Implement custom platform with restrictions built-in
C. Use IAM policies to restrict Beanstalk configuration options
D. Use Service Control Policies to restrict EC2 instance types
Answer: D
SCPs provide organization-wide enforcement regardless of which service launches resources.
Question 130
A CloudFormation template uses the Serverless transform (AWS SAM). When deploying changes to a Lambda function, the team wants to implement canary deployments. What needs to be added to the template?
A. Add CodeDeploy application and deployment group resources
B. Add DeploymentPreference configuration to the Lambda function
C. Use AutoPublishAlias with traffic shifting configuration
Answer: B and C (used together)
Explanation:
AutoPublishAlias: Creates new version on each deploy and maintains alias
DeploymentPreference: Configures how traffic shifts to new version
What SAM creates automatically:
Lambda versions
CodeDeploy application
CodeDeploy deployment group
Alias traffic shifting configuration
Without AutoPublishAlias, there's no alias for traffic shifting. Without DeploymentPreference, deployment is immediate (AllAtOnce).
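As a sketch, the SAM resource can be expressed as a dict (the ErrorAlarm name is hypothetical; Handler/Runtime/CodeUri omitted):

```python
sam_function = {
    'Type': 'AWS::Serverless::Function',
    'Properties': {
        'AutoPublishAlias': 'live',  # publishes a version + alias per deploy
        'DeploymentPreference': {
            'Type': 'Canary10Percent5Minutes',
            'Alarms': ['ErrorAlarm'],  # hypothetical alarm; triggers rollback
        },
    },
}
```

SAM expands this into the CodeDeploy application, deployment group, and alias traffic-shifting wiring listed above.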
Question 131
A company uses CloudFormation and wants to validate that templates follow best practices before deployment. The validation should check for things like encryption requirements, logging enabled, and proper tagging. What solution provides this?
A. Use cfn-lint for template validation
B. Use CloudFormation Guard for policy validation
C. Use AWS Config for resource validation
D. Use CloudFormation hooks for pre-deployment checks
Answer: B (for policy enforcement) and A (for syntax/best practices)
Guard is specifically designed for policy-as-code validation against CloudFormation templates.
Question 132
An Elastic Beanstalk application uses a Classic Load Balancer. The team wants to migrate to an Application Load Balancer without recreating the environment. How should this be done?
A. Update the environment configuration to change load balancer type
B. Clone the environment with ALB, then swap CNAMEs
C. Use .ebextensions to change the load balancer type
D. It's not possible to change load balancer type without recreating
Answer: B
Explanation:
Load balancer type migration:
Why direct change isn't possible (Option D is partially correct):
Load balancer type is an immutable environment configuration
Migration path: clone the environment selecting an ALB, validate it, then swap environment CNAMEs. This provides zero-downtime migration with ALB benefits.
Question 133
A CloudFormation template needs to create resources in a specific order due to dependencies that aren't automatically detected. How can explicit dependencies be defined?
A. Use DependsOn attribute on resources
B. Use Ref function to create implicit dependencies
Answer: A
Explanation:
DependsOn forces an explicit creation order when CloudFormation cannot infer the dependency through Ref or GetAtt.
Docker daemon uses instance metadata for ECR authentication
No credential management required
Never store credentials (Option A) or make repositories public (Option D).
Question 136
A CloudFormation template creates an Auto Scaling group with instances that need to download application code from S3 during launch. The download occasionally fails because the S3 VPC endpoint isn't ready when instances launch. How can this be resolved?
A. Add DependsOn between ASG and VPC endpoint
B. Add retry logic in the instance user data script
C. Use CreationPolicy on the ASG with proper signaling
Stack waits for instances to fully initialize before proceeding.
Best practice:
Combine all approaches for robust deployment:
DependsOn for ordering
Retry logic for transient issues
CreationPolicy for completion verification
Question 137
A company wants to share CloudFormation templates across accounts in their AWS Organization. The templates should be version-controlled and teams should be able to deploy approved templates only. What solution provides this?
A. S3 bucket with cross-account access for template storage
B. AWS Service Catalog with portfolios and products
C. CodeCommit repository with cross-account access
D. CloudFormation Registry with public extensions
Answer: B
Explanation:
AWS Service Catalog for template sharing:
Service Catalog components:
Product: a CloudFormation template packaged as a versioned, deployable product, so teams can launch only approved versions
Portfolio: a collection of products shared with accounts or OUs in the organization, with IAM-controlled access and launch constraints
Question 138
An Elastic Beanstalk environment uses scheduled scaling to handle predictable traffic patterns. The scheduled actions should only run on weekdays. How is this configured?
A. Configure cron expressions in scheduled scaling actions
B. Use CloudWatch Events to trigger scaling
C. Configure scheduled scaling in .ebextensions
D. Use recurrence schedule with day-of-week specification
Answer: A and D
Explanation:
Both approaches use cron expressions whose day-of-week field restricts the schedule to weekdays, for example 0 8 * * 1-5 (in standard cron, 0 and 7 both mean Sunday).
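In .ebextensions, a weekday-only scheduled action might look like this (the action name ScaleUpWeekdays and the sizes are hypothetical):

```yaml
option_settings:
  - namespace: aws:autoscaling:scheduledaction
    resource_name: ScaleUpWeekdays
    option_name: Recurrence
    value: "0 8 * * 1-5"          # 08:00 UTC, Monday through Friday
  - namespace: aws:autoscaling:scheduledaction
    resource_name: ScaleUpWeekdays
    option_name: MinSize
    value: "4"
```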
Question 139
A CloudFormation stack creates a VPC with custom DHCP options. The template update changes the DHCP option set, but instances in the VPC aren't using the new DHCP options. What is happening?
A. DHCP options changes require VPC recreation
B. Instances need to renew DHCP lease to get new options
C. CloudFormation drift has occurred
D. DHCP option association wasn't updated
Answer: B
Explanation:
DHCP options propagation:
How DHCP options work:
DHCP option set is associated with VPC
Instances receive options via DHCP lease
Lease renewal happens at specific intervals
When options change:
New instances immediately get new options
Existing instances keep old options until lease renewal
Lease renewal depends on lease duration (typically hours)
For the exam: understand that DHCP option changes propagate with a delay; existing instances pick up the new options only at lease renewal, which can be forced from the instance (for example sudo dhclient -r && sudo dhclient on Linux) or by rebooting.
Question 140
A company uses Elastic Beanstalk with a load balanced environment. They want to configure the load balancer to use a custom SSL certificate from ACM. The environment currently uses HTTP only. What changes are needed?
A. Upload certificate to IAM and configure in .ebextensions
B. Configure HTTPS listener with ACM certificate ARN in environment settings
C. Enable HTTPS in the Beanstalk console and select ACM certificate
IAM server certificates are a legacy option, required mainly for Classic Load Balancers in regions where ACM is not supported.
Both console and .ebextensions approaches work; choose based on environment management preference.
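For an ALB-based environment, the .ebextensions form might look like this (the certificate ARN is a placeholder):

```yaml
option_settings:
  aws:elbv2:listener:443:
    ListenerEnabled: true
    Protocol: HTTPS
    SSLCertificateArns: arn:aws:acm:us-east-1:123456789012:certificate/example-id
```

A Classic Load Balancer uses the aws:elb:listener namespace with SSLCertificateId instead.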
Question 141
A CloudFormation template creates an Amazon Aurora cluster. During stack updates, the team wants to create a snapshot before any modifications. How can this be automated?
A. Use UpdateReplacePolicy: Snapshot
B. Create a custom resource that takes a snapshot before update
C. Configure DeletionPolicy: Snapshot
D. Use CloudFormation hooks with pre-update snapshot
Answer: B or D
Explanation:
Pre-update snapshots:
DeletionPolicy: Snapshot (Option C):
Only takes snapshot when resource is DELETED, not updated.
UpdateReplacePolicy: Snapshot (Option A):
Takes snapshot when resource is REPLACED during update, not for all updates.
Custom resource approach (Option B):
Lambda creates snapshot before cluster updates proceed.
CloudFormation hooks (Option D):
Configure a hook that triggers on AWS::RDS::DBCluster updates and takes snapshot before proceeding.
For guaranteed pre-update snapshots, custom resources or hooks are necessary; built-in policies don't cover this scenario.
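As a sketch of the custom-resource route (Option B): a Lambda-backed resource could snapshot the cluster on every stack update. ClusterId is a hypothetical resource property, and the cfn-response call that reports success back to CloudFormation is omitted for brevity.

```python
import re
from datetime import datetime, timezone

def make_snapshot_id(cluster_id: str, now: datetime) -> str:
    """Build a valid manual snapshot identifier: letters, digits,
    and hyphens only, no trailing hyphen, at most 255 characters."""
    stamp = now.strftime("%Y-%m-%d-%H-%M-%S")
    base = re.sub(r"[^a-zA-Z0-9-]", "-", cluster_id)
    return f"{base}-pre-update-{stamp}"[:255].rstrip("-")

def handler(event, context):
    # Custom resource sketch: take a snapshot only on stack Update events.
    if event.get("RequestType") == "Update":
        import boto3  # imported lazily so the helper above is testable offline
        rds = boto3.client("rds")
        cluster = event["ResourceProperties"]["ClusterId"]
        rds.create_db_cluster_snapshot(
            DBClusterSnapshotIdentifier=make_snapshot_id(
                cluster, datetime.now(timezone.utc)),
            DBClusterIdentifier=cluster,
        )
    return {"PhysicalResourceId": event.get("PhysicalResourceId",
                                            "pre-update-snapshot")}
```

Pairing this with UpdateReplacePolicy: Snapshot on the cluster still makes sense as a safety net for replacement updates.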
Question 142
An organization uses multiple Elastic Beanstalk applications across teams. They want to ensure all applications use the latest platform version. How can this be enforced and monitored?
A. Use AWS Config rule for Beanstalk platform compliance
B. Enable managed platform updates for all environments
C. Use EventBridge to detect platform version changes
D. Create a Lambda function that audits platform versions
import boto3

def audit_platform_versions():
    eb = boto3.client('elasticbeanstalk')
    # Latest platform version for the branch we standardize on
    platforms = eb.list_platform_versions(
        Filters=[{'Type': 'PlatformBranchName', 'Operator': '=',
                  'Values': ['Python 3.9']}]
    )
    # Assumes the newest version is listed first; sort by PlatformVersion if not
    latest_arn = platforms['PlatformSummaryList'][0]['PlatformArn']
    # Flag every environment not running the latest platform
    envs = eb.describe_environments()
    for env in envs['Environments']:
        if env['PlatformArn'] != latest_arn:
            report_outdated(env)  # reporting helper (SNS, Security Hub, etc.)
Combine managed updates for automation with auditing for visibility.
Question 143
A CloudFormation template creates an S3 bucket and needs to enable versioning only in production environments. How should this conditional configuration be implemented?
A. Use template conditions and Fn::If
B. Create separate templates for each environment
C. Use CloudFormation parameters with default values
D. Use AWS::NoValue to conditionally omit properties
Answer: A
Explanation:
AWS::NoValue behavior:
When Fn::If returns AWS::NoValue, the entire property is omitted from the resource, as if it wasn't specified.
Alternative for complex conditionals:
This explicitly sets versioning status in both cases.
Conditions provide single-template solution for environment-specific configurations.
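A minimal single-template sketch combining a condition with Fn::If and AWS::NoValue:

```yaml
Parameters:
  Environment:
    Type: String
    AllowedValues: [dev, prod]
Conditions:
  IsProd: !Equals [!Ref Environment, prod]
Resources:
  AppBucket:
    Type: AWS::S3::Bucket
    Properties:
      # In prod the property is set; elsewhere it is omitted entirely
      VersioningConfiguration: !If [IsProd, {Status: Enabled}, !Ref "AWS::NoValue"]
```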
Question 144
An Elastic Beanstalk worker environment processes jobs that can take up to 30 minutes. The default SQS visibility timeout is too short. How should this be configured?
A. Configure SQS queue visibility timeout in worker configuration
B. Modify visibility timeout in .ebextensions
C. Set InactivityTimeout in worker environment settings
D. Create SQS queue separately and configure worker to use it
Explanation:
Key worker settings (aws:elasticbeanstalk:sqsd namespace):
VisibilityTimeout: time a message stays hidden after delivery (1800 = 30 min)
HttpConnections: Maximum concurrent connections to worker
InactivityTimeout: Time to wait for worker response
RetentionPeriod: How long messages are kept
Best practice for long jobs:
Set VisibilityTimeout > maximum processing time
Set InactivityTimeout appropriately
Consider implementing heartbeat pattern for very long jobs
If using custom queue (Option D):
Create queue with appropriate settings, then configure worker to use that queue URL.
Beanstalk worker settings control how the environment interacts with SQS.
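A sketch of the .ebextensions settings for a 30-minute job (values illustrative):

```yaml
option_settings:
  aws:elasticbeanstalk:sqsd:
    VisibilityTimeout: 1800    # >= maximum processing time (30 min)
    InactivityTimeout: 1800    # wait up to 30 min for the worker's HTTP response
    HttpConnections: 10        # concurrent deliveries to the worker
    RetentionPeriod: 86400     # keep unprocessed messages for 1 day
```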
Question 145
A company uses CloudFormation to deploy a multi-tier application. The database tier should only be created once and never deleted, even if the entire stack is deleted. The application tier should be updated normally. How should this be structured?
A. Use DeletionPolicy: Retain on database resources
B. Use separate stacks for database and application tiers
C. Use CloudFormation stack policies to protect database resources
Stack policies (Option C) prevent accidental updates that would replace or delete database resources, but they do not protect against stack deletion.
Best practice:
Use separate stacks (Option B) for true independence, with DeletionPolicy: Retain as safety net.
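The safety-net attributes on the database resource might look like this (engine properties omitted):

```yaml
Resources:
  Database:
    Type: AWS::RDS::DBCluster
    DeletionPolicy: Retain         # keep the cluster if the stack is deleted
    UpdateReplacePolicy: Retain    # keep the old cluster if an update replaces it
```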
Question 146
A CloudFormation template uses a macro to generate repetitive resources. The macro runs successfully during template processing but the generated resources have errors. How can the processed template be viewed for debugging?
A. Check CloudFormation events for processed template
B. Use aws cloudformation describe-template command
C. Process the template locally with aws cloudformation transform
CloudWatch Logs show the macro's input and output for each invocation. The fully processed template can also be retrieved with aws cloudformation get-template --stack-name <stack> --template-stage Processed, or through the console's "View processed template" option. Note there is no describe-template command (Option B), and the CLI cannot run transforms locally (Option C).
Question 147
An Elastic Beanstalk application uses a Classic Load Balancer with connection draining enabled. During deployments, some requests are still failing with connection reset errors. What should be investigated?
A. Connection draining timeout is too short
B. Deployment type isn't compatible with connection draining
C. Health check settings conflict with draining
D. Application doesn't handle graceful shutdown
Answer: A and D
Explanation:
Connection issues during deployment:
Connection draining behavior:
Instance marked for removal
LB stops sending new connections
Existing connections continue for draining timeout
If requests take longer than timeout, they're terminated.
Option D - Application graceful shutdown:
Application should:
Stop accepting new work when SIGTERM received
Complete in-flight requests
Close connections cleanly
Exit when done
# Python example
import signal
import sys

accepting_requests = True

def shutdown_handler(signum, frame):
    global accepting_requests
    accepting_requests = False      # stop accepting new work
    wait_for_completion()           # drain in-flight requests (app-specific)
    sys.exit(0)

signal.signal(signal.SIGTERM, shutdown_handler)
Combination issue:
If draining timeout is 20s but requests take 30s AND app doesn't handle SIGTERM, connections reset.
Question 148
A CloudFormation template creates an ECS service with a desired count of 10 tasks. Stack creation times out waiting for the service to reach steady state. The ECS events show tasks are starting but being killed after health check failures. What should be configured in CloudFormation?
A. Increase stack timeout in CloudFormation settings
B. Configure service HealthCheckGracePeriodSeconds
C. Add CreationPolicy with longer timeout
D. Use WaitCondition for ECS service stabilization
Answer: B
Explanation:
HealthCheckGracePeriodSeconds:
Gives tasks time to start before ECS acts on load balancer health check results
Should exceed application startup time
Only applies when the service uses a load balancer
Additional considerations:
Target group health check interval and threshold
Application startup optimization
Container health checks vs LB health checks
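A fragment of the service definition showing the grace period (cluster, task definition, and target group resources omitted; names hypothetical):

```yaml
WebService:
  Type: AWS::ECS::Service
  Properties:
    Cluster: !Ref AppCluster
    TaskDefinition: !Ref WebTaskDef
    DesiredCount: 10
    HealthCheckGracePeriodSeconds: 120   # cover slow startup before checks count
    LoadBalancers:
      - ContainerName: web
        ContainerPort: 8080
        TargetGroupArn: !Ref WebTargetGroup
```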
Question 149
A company uses CloudFormation with CodePipeline for infrastructure deployment. They want to implement a process where infrastructure changes are reviewed before deployment, showing exactly what will change. What should be implemented?
A. Add a manual approval action after CloudFormation action
B. Use CloudFormation change sets with review before execution
C. Implement a Lambda function that analyzes CloudFormation templates
D. Use CloudFormation drift detection before updates
Answer: B
Explanation:
Pipeline structure:
A CloudFormation action with ActionMode CHANGE_SET_REPLACE creates or replaces the change set (no deployment)
Manual approval - reviewer checks change set in console
CHANGE_SET_EXECUTE applies changes
Change set shows:
Resources to be added, modified, deleted
Replacement vs in-place updates
Property changes
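A sketch of the pipeline stage (names hypothetical; the CloudFormation action's RoleArn and Capabilities settings are omitted):

```yaml
- Name: DeployInfra
  Actions:
    - Name: CreateChangeSet
      ActionTypeId: {Category: Deploy, Owner: AWS, Provider: CloudFormation, Version: "1"}
      Configuration:
        ActionMode: CHANGE_SET_REPLACE
        StackName: app-infra
        ChangeSetName: app-infra-changes
        TemplatePath: SourceOutput::template.yaml
      InputArtifacts:
        - Name: SourceOutput
      RunOrder: 1
    - Name: ReviewChanges
      ActionTypeId: {Category: Approval, Owner: AWS, Provider: Manual, Version: "1"}
      RunOrder: 2
    - Name: ExecuteChangeSet
      ActionTypeId: {Category: Deploy, Owner: AWS, Provider: CloudFormation, Version: "1"}
      Configuration:
        ActionMode: CHANGE_SET_EXECUTE
        StackName: app-infra
        ChangeSetName: app-infra-changes
      RunOrder: 3
```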
Question 150
A DevOps team is implementing infrastructure as code for a complex application with dependencies between resources across multiple CloudFormation stacks. They need to manage the deployment order and handle cross-stack references efficiently. What approach should they use?
A. Use nested stacks with all resources in a single parent stack
B. Use independent stacks with exports/imports and deployment scripts
C. Use CloudFormation StackSets for coordinated deployment
D. Use AWS CDK with dependency management between stacks
Answer: D (for new projects) or B (for existing CloudFormation)
Option B uses exports/imports for cross-stack references; it works, but deployment order must be enforced manually (scripts or pipeline stages).
For exam: Understand that CDK provides higher-level abstractions with automatic dependency management, while raw CloudFormation requires explicit management.
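For the export/import style (Option B), a sketch with hypothetical names:

```yaml
# Producer (network) stack
Outputs:
  VpcId:
    Value: !Ref Vpc
    Export:
      Name: network-VpcId

# Consumer (app) stack
Resources:
  AppSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: App tier security group
      VpcId: !ImportValue network-VpcId
```

The producer stack must exist first, and its export cannot be deleted or renamed while a consumer imports it; this is the coupling that CDK's stack dependency management handles automatically.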