
The SAP-C02 exam evaluates advanced technical abilities and experience in designing distributed applications and systems on the AWS platform. If you are planning on getting AWS SAP-C02 certified, try these practice test questions to see if you are ready for the real exam.

QUESTION 1

A financial services company loaded millions of historical stock trades into an Amazon DynamoDB table. The table uses on-demand capacity mode. Once each day at midnight, a few million new records are loaded into the table. Application read activity against the table happens in bursts throughout the day, and a limited set of keys are repeatedly looked up. The company needs to reduce costs associated with DynamoDB.

Which strategy should a solutions architect recommend to meet this requirement?

 

A. Deploy an Amazon ElastiCache cluster in front of the DynamoDB table.

B. Deploy DynamoDB Accelerator (DAX). Configure DynamoDB auto scaling. Purchase Savings Plans in Cost Explorer.

C. Use provisioned capacity mode. Purchase Savings Plans in Cost Explorer.

D. Deploy DynamoDB Accelerator (DAX). Use provisioned capacity mode. Configure DynamoDB auto scaling.

 

Correct Answer: D
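
Explanation: DAX caches the repeatedly read keys, and provisioned capacity with auto scaling is cheaper than on-demand for this predictable workload. As a minimal sketch of the capacity side, assuming a hypothetical table name (`StockTrades`) and placeholder capacity figures, the boto3 calls below switch the table to provisioned mode and attach a target-tracking read scaling policy. The DAX cluster itself would be created separately (for example with the DAX `create_cluster` API) and is omitted here.

```python
import boto3

# Hypothetical table name and capacity values for illustration only.
TABLE = "StockTrades"

dynamodb = boto3.client("dynamodb")
autoscaling = boto3.client("application-autoscaling")

# Switch the table from on-demand to provisioned capacity mode.
dynamodb.update_table(
    TableName=TABLE,
    BillingMode="PROVISIONED",
    ProvisionedThroughput={"ReadCapacityUnits": 100, "WriteCapacityUnits": 100},
)

# Register the table's read capacity as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId=f"table/{TABLE}",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=100,
    MaxCapacity=3000,
)

# Target-tracking policy that keeps read utilization near 70 percent.
autoscaling.put_scaling_policy(
    PolicyName="read-scaling",
    ServiceNamespace="dynamodb",
    ResourceId=f"table/{TABLE}",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
)
```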

 

QUESTION 2

A company has introduced a new policy that allows employees to work remotely from their homes if they connect by using a VPN. The company is hosting internal applications with VPCs in multiple AWS accounts. Currently, the applications are accessible from the company’s on-premises office network through an AWS Site-to-Site VPN connection. The VPC in the company’s main AWS account has peering connections established with VPCs in other AWS accounts.

A solutions architect must design a scalable AWS Client VPN solution for employees to use while they work from home. What is the MOST cost-effective solution that meets these requirements?

 

A. Create a Client VPN endpoint in each AWS account. Configure required routing that allows access to internal applications.

B. Create a Client VPN endpoint in the main AWS account. Configure required routing that allows access to internal applications.

C. Create a Client VPN endpoint in the main AWS account. Provision a transit gateway that is connected to each AWS account. Configure required routing that allows access to internal applications.

D. Create a Client VPN endpoint in the main AWS account. Establish connectivity between the Client VPN endpoint and the AWS Site-to-Site VPN.

 

Correct Answer: B
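
Explanation: a Client VPN endpoint associated with the main VPC can reach the other VPCs over the existing peering connections, so no transit gateway is needed, which makes option B the most cost-effective. A rough boto3 sketch of the endpoint and a route toward one peered VPC follows; every ARN, CIDR, and ID is a hypothetical placeholder.

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical certificate ARNs, CIDRs, and subnet ID for illustration only.
endpoint = ec2.create_client_vpn_endpoint(
    ClientCidrBlock="10.100.0.0/16",
    ServerCertificateArn="arn:aws:acm:us-east-1:111122223333:certificate/example",
    AuthenticationOptions=[{
        "Type": "certificate-authentication",
        "MutualAuthentication": {
            "ClientRootCertificateChainArn":
                "arn:aws:acm:us-east-1:111122223333:certificate/example-root"
        },
    }],
    ConnectionLogOptions={"Enabled": False},
)
endpoint_id = endpoint["ClientVpnEndpointId"]

# Associate the endpoint with a subnet in the main VPC.
ec2.associate_client_vpn_target_network(
    ClientVpnEndpointId=endpoint_id,
    SubnetId="subnet-0example",
)

# Route remote-worker traffic to a peered application VPC and authorize it.
ec2.create_client_vpn_route(
    ClientVpnEndpointId=endpoint_id,
    DestinationCidrBlock="10.2.0.0/16",   # CIDR of a peered VPC
    TargetVpcSubnetId="subnet-0example",
)
ec2.authorize_client_vpn_ingress(
    ClientVpnEndpointId=endpoint_id,
    TargetNetworkCidr="10.2.0.0/16",
    AuthorizeAllGroups=True,
)
```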

 

QUESTION 3

A company wants to migrate a 30 TB Oracle data warehouse from on premises to Amazon Redshift. The company used the AWS Schema Conversion Tool (AWS SCT) to convert the schema of the existing data warehouse to an Amazon Redshift schema. The company also used a migration assessment report to identify manual tasks to complete.

The company needs to migrate the data to the new Amazon Redshift cluster during an upcoming data freeze period of 2 weeks. The only network connection between the on-premises data warehouse and AWS is a 50 Mbps internet connection.

Which migration strategy meets these requirements?

 

A. Create an AWS Database Migration Service (AWS DMS) replication instance. Authorize the public IP address of the replication instance to reach the data warehouse through the corporate firewall. Create a migration task to run at the beginning of the data freeze period.

B. Install the AWS SCT extraction agents on the on-premises servers. Define the extract, upload, and copy tasks to send the data to an Amazon S3 bucket. Copy the data into the Amazon Redshift cluster. Run the tasks at the beginning of the data freeze period.

C. Install the AWS SCT extraction agents on the on-premises servers. Create a Site-to-Site VPN connection. Create an AWS Database Migration Service (AWS DMS) replication instance that is the appropriate size. Authorize the IP address of the replication instance to be able to access the on-premises data warehouse through the VPN connection.

D. Create a job in AWS Snowball Edge to import data into Amazon S3. Install AWS SCT extraction agents on the on-premises servers. Define the local and AWS Database Migration Service (AWS DMS) tasks to send the data to the Snowball Edge device. When the Snowball Edge device is returned to AWS and the data is available in Amazon S3, run the AWS DMS subtask to copy the data to Amazon Redshift.

 

Correct Answer: D
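
Explanation: the bandwidth arithmetic rules out any network-only transfer. At 50 Mbps the link moves roughly 0.54 TB per day, so a 2-week freeze window allows only about 7.5 TB over the wire, far short of 30 TB; an offline transfer with AWS Snowball Edge is required. A minimal sketch of creating the Snowball Edge import job follows, where the bucket ARN, role ARN, and address ID are hypothetical placeholders.

```python
import boto3

snowball = boto3.client("snowball")

# Hypothetical ARNs and address ID for illustration only.
job = snowball.create_job(
    JobType="IMPORT",
    Resources={
        "S3Resources": [
            {"BucketArn": "arn:aws:s3:::example-warehouse-staging"}
        ]
    },
    AddressId="ADID00000000-0000-0000-0000-000000000000",
    RoleARN="arn:aws:iam::111122223333:role/example-snowball-role",
    SnowballType="EDGE",
    ShippingOption="SECOND_DAY",
)
print("Snowball Edge job:", job["JobId"])
```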

 

QUESTION 4

A large company has a business-critical application that runs in a single AWS Region. The application consists of multiple Amazon EC2 instances and an Amazon RDS Multi-AZ DB instance. The EC2 instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones.

A solutions architect is implementing a disaster recovery (DR) plan for the application. The solutions architect has created a pilot light application deployment in a new Region, which is referred to as the DR Region. The DR environment has an Auto Scaling group with a single EC2 instance and a read replica of the RDS DB instance.

The solutions architect must automate a failover from the primary application environment to the pilot light environment in the DR Region. Which solution meets these requirements with the MOST operational efficiency?

 

A. Publish an application availability metric to Amazon CloudWatch in the DR Region from the application environment in the primary Region. Create a CloudWatch alarm in the DR Region that is invoked when the application availability metric stops being delivered. Configure the CloudWatch alarm to send a notification to an Amazon Simple Notification Service (Amazon SNS) topic in the DR Region. Add an email subscription to the SNS topic that sends messages to the application owner. Upon notification, instruct a systems operator to sign in to the AWS Management Console and initiate failover operations for the application.

B. Create a cron task that runs every 5 minutes by using one of the application’s EC2 instances in the primary Region. Configure the cron task to check whether the application is available. Upon failure, the cron task notifies a systems operator and attempts to restart the application services.

C. Create a cron task that runs every 5 minutes by using one of the application’s EC2 instances in the primary Region. Configure the cron task to check whether the application is available. Upon failure, the cron task modifies the DR environment by promoting the read replica and by adding EC2 instances to the Auto Scaling group.

D. Publish an application availability metric to Amazon CloudWatch in the DR Region from the application environment in the primary Region. Create a CloudWatch alarm in the DR Region that is invoked when the application availability metric stops being delivered. Configure the CloudWatch alarm to send a notification to an Amazon Simple Notification Service (Amazon SNS) topic in the DR Region. Use an AWS Lambda function that is invoked by Amazon SNS in the DR Region to promote the read replica and to add EC2 instances to the Auto Scaling group.

 

Correct Answer: D
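
Explanation: option D fully automates the failover. A minimal sketch of the Lambda function that SNS would invoke is shown below, assuming hypothetical resource names (`app-dr-replica`, `app-dr-asg`); a production version would also need error handling and idempotency checks.

```python
import boto3

rds = boto3.client("rds")
autoscaling = boto3.client("autoscaling")

def handler(event, context):
    """Invoked by SNS when the availability metric stops arriving."""
    # Promote the DR read replica to a standalone primary.
    rds.promote_read_replica(DBInstanceIdentifier="app-dr-replica")

    # Scale the pilot-light Auto Scaling group up to serve production load.
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName="app-dr-asg",
        MinSize=2,
        DesiredCapacity=4,
        MaxSize=8,
    )
```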

 

QUESTION 5

A company is developing an application that stores data from more than 5,000 sensors. Each sensor has a unique ID and will send a data point every minute throughout the day. Each data point is 1 KB in size.

At any time, the company may analyze the sensor data for the current day to identify system anomalies. The application never accesses data that is older than 1 week. A solutions architect decides to use Amazon DynamoDB with provisioned throughput for read and write activity. The data must be stored for 5 years to meet compliance requirements.

 

Which storage solution meets these requirements MOST cost-effectively?

 

A. Create a single table with a partition key that is the concatenation of the sensor ID and the timestamp to indicate the data’s time period. Use adaptive capacity to ensure that there are no hot partitions. Enable DynamoDB Streams to export data for archiving. Create an AWS Lambda function that detects every new record that is written to DynamoDB and streams it to Amazon Kinesis Data Streams with Amazon S3 as the source. Expire items that are older than 1 week by setting DynamoDB TTL.

B. Create one table each day. Use write sharding by adding a random number to the end of the partition key values. Create an AWS Lambda function that detects every new record that is written to DynamoDB and streams it to Amazon Kinesis Data Streams with Amazon S3 as the source. Use DynamoDB Accelerator (DAX) to cache the latest sensor readings. Specify a TTL of 1 week in DAX.

C. Create one table each day. Assign the name of the table to indicate the time period. Reduce the provisioned write and read capacity of the table after 1 day. Enable point-in-time recovery on the DynamoDB table and export the daily table content to Amazon S3. Use Amazon CloudWatch Events to invoke a scheduled Lambda function to delete tables after 1 week.

D. Create a single table. Enable DynamoDB Streams on the table. Create an Amazon Kinesis Data Firehose delivery stream to load the data into Amazon S3. Create an AWS Lambda function to poll the DynamoDB stream and deliver batch records from DynamoDB Streams to Kinesis Data Firehose. Expire items that are older than 1 week by setting DynamoDB TTL. Monitor the expired items with the TimeToLiveDeletedItemCount metric in Amazon CloudWatch.

 

Correct Answer: C
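
Explanation: option C follows the DynamoDB best practice for time-series data: keep one table per period, dial back capacity once the period goes cold, archive to Amazon S3 for the 5-year retention requirement, and drop tables past the 1-week access window. A rough sketch of that daily lifecycle is shown below; the table-name pattern, bucket, and capacity values are hypothetical, and the S3 export requires point-in-time recovery to be enabled on the table.

```python
import boto3
from datetime import datetime, timedelta

dynamodb = boto3.client("dynamodb")

yesterday = (datetime.utcnow() - timedelta(days=1)).strftime("sensors-%Y-%m-%d")
week_old = (datetime.utcnow() - timedelta(days=7)).strftime("sensors-%Y-%m-%d")

# Reduce provisioned throughput on yesterday's table, which is no longer hot.
dynamodb.update_table(
    TableName=yesterday,
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 1},
)

# Export yesterday's data to S3 for the 5-year retention requirement.
# Requires point-in-time recovery to be enabled on the table.
table_arn = dynamodb.describe_table(TableName=yesterday)["Table"]["TableArn"]
dynamodb.export_table_to_point_in_time(
    TableArn=table_arn,
    S3Bucket="example-sensor-archive",
)

# Delete the table that has aged past the 1-week access window.
dynamodb.delete_table(TableName=week_old)
```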

QUESTION 6

A company is running an application distributed over several Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer. The security team requires that all application access attempts be made available for analysis. Information about the client IP address, connection type, and user agent must be included.

Which solution will meet these requirements?

 

A. Enable EC2 detailed monitoring, and include network logs. Send all logs through Amazon Kinesis Data Firehose to an Amazon Elasticsearch Service (Amazon ES) cluster that the security team uses for analysis.

B. Enable VPC Flow Logs for all EC2 instance network interfaces. Publish VPC Flow Logs to an Amazon S3 bucket. Have the security team use Amazon Athena to query and analyze the logs.

C. Enable access logs for the Application Load Balancer, and publish the logs to an Amazon S3 bucket. Have the security team use Amazon Athena to query and analyze the logs.

D. Enable Traffic Mirroring and specify all EC2 instance network interfaces as the source. Send all traffic information through Amazon Kinesis Data Firehose to an Amazon Elasticsearch Service (Amazon ES) cluster that the security team uses for analysis.

 

Correct Answer: C
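
Explanation: enabling ALB access logging is a one-call attribute change, sketched below with a hypothetical load balancer ARN and bucket; the bucket must also carry a policy that allows the regional ELB log-delivery service to write to it. Each access log entry records the client IP and port, the connection/request type, and the user agent string, which covers everything the security team asked for.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Hypothetical ARN and bucket; the bucket policy must permit ELB log delivery.
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn=(
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
        "loadbalancer/app/example-alb/0123456789abcdef"
    ),
    Attributes=[
        {"Key": "access_logs.s3.enabled", "Value": "true"},
        {"Key": "access_logs.s3.bucket", "Value": "example-alb-logs"},
        {"Key": "access_logs.s3.prefix", "Value": "prod"},
    ],
)
```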

 

QUESTION 7

A company is building a hybrid solution between its existing on-premises systems and a new backend in AWS. The company has a management application to monitor the state of its current IT infrastructure and automate responses to issues. The company wants to incorporate the status of its consumed AWS services into the application. The application uses an HTTPS endpoint to receive updates.

Which approach meets these requirements with the LEAST amount of operational overhead?

 

A. Configure AWS Systems Manager OpsCenter to ingest operational events from the on-premises systems. Retire the on-premises management application and adopt OpsCenter as the hub.

B. Configure Amazon EventBridge (Amazon CloudWatch Events) to detect and react to changes for AWS Health events from the AWS Personal Health Dashboard. Configure the EventBridge (CloudWatch Events) event to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic and subscribe the topic to the HTTPS endpoint of the management application.

C. Modify the on-premises management application to call the AWS Health API to poll for status events of AWS services.

D. Configure Amazon EventBridge (Amazon CloudWatch Events) to detect and react to changes for AWS Health events from the AWS Service Health Dashboard. Configure the EventBridge (CloudWatch Events) event to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic and subscribe the topic to an HTTPS endpoint for the management application with a topic filter corresponding to the services being used.

 

Correct Answer: B
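
Explanation: option B reacts to account-specific AWS Health events (the feed behind the Personal Health Dashboard) with no polling code to build or maintain. A minimal sketch of the wiring is below; the topic name and endpoint URL are hypothetical. Note that in practice the SNS topic also needs a resource policy that allows EventBridge to publish to it, which is omitted here.

```python
import json
import boto3

events = boto3.client("events")
sns = boto3.client("sns")

# SNS topic that will relay Health events to the management application.
topic_arn = sns.create_topic(Name="aws-health-events")["TopicArn"]
sns.subscribe(
    TopicArn=topic_arn,
    Protocol="https",
    Endpoint="https://mgmt.example.com/aws-health",  # hypothetical endpoint
)

# EventBridge rule that matches all AWS Health events for this account.
events.put_rule(
    Name="aws-health-to-sns",
    EventPattern=json.dumps({"source": ["aws.health"]}),
)
events.put_targets(
    Rule="aws-health-to-sns",
    Targets=[{"Id": "sns-target", "Arn": topic_arn}],
)
```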

 

QUESTION 8

A company is serving files to its customers through an SFTP server that is accessible over the internet. The SFTP server is running on a single Amazon EC2 instance with an Elastic IP address attached. Customers connect to the SFTP server through its Elastic IP address and use SSH for authentication. The EC2 instance also has an attached security group that allows access from all customer IP addresses.

A solutions architect must implement a solution to improve availability, minimize the complexity of infrastructure management, and minimize the disruption to customers who access files. The solution must not change the way customers connect.

Which solution will meet these requirements?

 

A. Disassociate the Elastic IP address from the EC2 instance. Create an Amazon S3 bucket to be used for SFTP file hosting. Create an AWS Transfer Family server. Configure the Transfer Family server with a publicly accessible endpoint. Associate the SFTP Elastic IP address with the new endpoint. Point the Transfer Family server to the S3 bucket. Sync all files from the SFTP server to the S3 bucket.

B. Disassociate the Elastic IP address from the EC2 instance. Create an Amazon S3 bucket to be used for SFTP file hosting. Create an AWS Transfer Family server. Configure the Transfer Family server with a VPC-hosted, internet-facing endpoint. Associate the SFTP Elastic IP address with the new endpoint. Attach the security group with customer IP addresses to the new endpoint. Point the Transfer Family server to the S3 bucket. Sync all files from the SFTP server to the S3 bucket.

C. Disassociate the Elastic IP address from the EC2 instance. Create a new Amazon Elastic File System (Amazon EFS) file system to be used for SFTP file hosting. Create an AWS Fargate task definition to run an SFTP server. Specify the EFS file system as a mount in the task definition. Create a Fargate service by using the task definition, and place a Network Load Balancer (NLB) in front of the service. When configuring the service, attach the security group with customer IP addresses to the tasks that run the SFTP server. Associate the Elastic IP address with the NLB. Sync all files from the SFTP server to the EFS file system.

D. Disassociate the Elastic IP address from the EC2 instance. Create a multi-attach Amazon Elastic Block Store (Amazon EBS) volume to be used for SFTP file hosting. Create a Network Load Balancer (NLB) with the Elastic IP address attached. Create an Auto Scaling group with EC2 instances that run an SFTP server. Define in the Auto Scaling group that instances that are launched should attach the new multi-attach EBS volume. Configure the Auto Scaling group to automatically add instances behind the NLB. Configure the Auto Scaling group to use the security group that allows customer IP addresses for the EC2 instances that the Auto Scaling group launches. Sync all files from the SFTP server to the new multi-attach EBS volume.

 

Correct Answer: B
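
Explanation: option B works because only a VPC-hosted Transfer Family endpoint lets you bring along the existing Elastic IP address and security group; a publicly accessible endpoint supports neither, so option A cannot keep the customer-facing address unchanged. A sketch of the server creation follows, with hypothetical VPC, subnet, allocation, and security group IDs.

```python
import boto3

transfer = boto3.client("transfer")

# Hypothetical IDs; AddressAllocationIds carries the existing Elastic IP
# so customers keep connecting to the same address.
server = transfer.create_server(
    Protocols=["SFTP"],
    Domain="S3",
    IdentityProviderType="SERVICE_MANAGED",
    EndpointType="VPC",
    EndpointDetails={
        "VpcId": "vpc-0example",
        "SubnetIds": ["subnet-0example"],
        "AddressAllocationIds": ["eipalloc-0example"],
        "SecurityGroupIds": ["sg-0example"],
    },
)
print("Transfer Family server:", server["ServerId"])
```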

 

QUESTION 9

A company runs an e-commerce platform with front-end and e-commerce tiers. Both tiers run on LAMP stacks with the front-end instances running behind a load balancing appliance that has a virtual offering on AWS. Currently, the operations team uses SSH to log in to the instances to maintain patches and address other concerns. The platform has recently been the target of multiple attacks, including:

  • A DDoS attack.
  • An SQL injection attack.
  • Several successful dictionary attacks on SSH accounts on the web servers.

The company wants to improve the security of the e-commerce platform by migrating to AWS. The company’s solutions architects have decided to use the following approach:

  • Code review the existing application and fix any SQL injection issues.
  • Migrate the web application to AWS and leverage the latest AWS Linux AMI to address initial security patching.
  • Install AWS Systems Manager to manage patching and allow the system administrators to run commands on all instances, as needed.

What additional steps will address all of the identified attack types while providing high availability and minimizing risk?

 

A. Enable SSH access to the Amazon EC2 instances using a security group that limits access to specific IPs. Migrate on-premises MySQL to Amazon RDS Multi-AZ. Install the third-party load balancer from the AWS Marketplace and migrate the existing rules to the load balancer’s AWS instances. Enable AWS Shield Standard for DDoS protection.

B. Disable SSH access to the Amazon EC2 instances. Migrate on-premises MySQL to Amazon RDS Multi-AZ. Leverage an Elastic Load Balancer to spread the load and enable AWS Shield Advanced for protection. Add an Amazon CloudFront distribution in front of the website. Enable AWS WAF on the distribution to manage the rules.

C. Enable SSH access to the Amazon EC2 instances through a bastion host secured by limiting access to specific IP addresses. Migrate on-premises MySQL to a self-managed EC2 instance. Leverage an AWS Elastic Load Balancer to spread the load, and enable AWS Shield Standard for DDoS protection. Add an Amazon CloudFront distribution in front of the website.

D. Disable SSH access to the EC2 instances. Migrate on-premises MySQL to Amazon RDS Single-AZ. Leverage an AWS Elastic Load Balancer to spread the load. Add an Amazon CloudFront distribution in front of the website. Enable AWS WAF on the distribution to manage the rules.

 

Correct Answer: B
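
Explanation: option B addresses all three attack types (Shield Advanced for DDoS, AWS WAF for SQL injection, disabled SSH for the dictionary attacks) while RDS Multi-AZ and the ELB provide high availability. To illustrate the WAF piece, the sketch below creates a CloudFront-scoped web ACL with the AWS managed SQL-injection rule group; web ACLs for CloudFront must be created in us-east-1, and the names are hypothetical.

```python
import boto3

# Web ACLs for CloudFront distributions must be created in us-east-1.
wafv2 = boto3.client("wafv2", region_name="us-east-1")

acl = wafv2.create_web_acl(
    Name="ecommerce-acl",          # hypothetical name
    Scope="CLOUDFRONT",
    DefaultAction={"Allow": {}},
    Rules=[{
        "Name": "sqli-managed",
        "Priority": 0,
        "Statement": {
            "ManagedRuleGroupStatement": {
                "VendorName": "AWS",
                "Name": "AWSManagedRulesSQLiRuleSet",
            }
        },
        "OverrideAction": {"None": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "sqli-managed",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "ecommerce-acl",
    },
)
```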

 

QUESTION 10

A solutions architect is building a web application that uses an Amazon RDS for PostgreSQL DB instance. The DB instance is expected to receive many more reads than writes. The solutions architect needs to ensure that the large amount of read traffic can be accommodated and that the DB instance is highly available. Which steps should the solutions architect take to meet these requirements? (Select THREE.)

 

A. Create multiple read replicas and put them into an Auto Scaling group.

B. Create multiple read replicas in different Availability Zones.

C. Create an Amazon Route 53 hosted zone and a record set for each read replica with a TTL and a weighted routing policy.

D. Create an Application Load Balancer (ALB) and put the read replicas behind the ALB.

E. Configure an Amazon CloudWatch alarm to detect a failed read replica. Set the alarm to directly invoke an AWS Lambda function to delete its Route 53 record set.

F. Configure an Amazon Route 53 health check for each read replica using its endpoint.

 

Correct Answer: BCF
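
Explanation: an Application Load Balancer only handles HTTP/HTTPS traffic and cannot front PostgreSQL connections, so the standard pattern is replicas in multiple Availability Zones (B) behind weighted Route 53 records (C) guarded by health checks (F). For one replica, the wiring looks roughly like the sketch below; the hosted zone ID, domain, and replica endpoint are hypothetical, and the same calls would be repeated per replica.

```python
import uuid
import boto3

route53 = boto3.client("route53")

# TCP health check against the replica's PostgreSQL endpoint (hypothetical).
check = route53.create_health_check(
    CallerReference=str(uuid.uuid4()),
    HealthCheckConfig={
        "Type": "TCP",
        "FullyQualifiedDomainName": "replica1.example.us-east-1.rds.amazonaws.com",
        "Port": 5432,
    },
)

# Weighted CNAME so readers spread across healthy replicas only.
route53.change_resource_record_sets(
    HostedZoneId="Z0EXAMPLE",
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "reader.example.com",
                "Type": "CNAME",
                "TTL": 60,
                "SetIdentifier": "replica1",
                "Weight": 10,
                "HealthCheckId": check["HealthCheck"]["Id"],
                "ResourceRecords": [{
                    "Value": "replica1.example.us-east-1.rds.amazonaws.com"
                }],
            },
        }]
    },
)
```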

 

Conclusion

If you want to earn the AWS SAP-C02 certification, you can use these SAP-C02 practice tests or our complete SAP-C02 dumps to pass on your first attempt. We are also running a back-to-school offer on all AWS practice tests to help you save more!
