
Question #1
A company stores its sales and marketing data that includes personally identifiable information (PII) in Amazon S3. The company allows its analysts to launch their own Amazon EMR cluster and run analytics reports with the data. To meet compliance requirements, the company must ensure the data is not publicly accessible throughout this process. A data engineer has secured Amazon S3 but must ensure the individual EMR clusters created by the analysts are not exposed to the public internet. Which solution should the data engineer use to meet these requirements?
A. Create an EMR security configuration and ensure the security configuration is associated with the EMR clusters when they are created
B. Check the security group of the EMR clusters regularly to ensure it does not allow inbound traffic from IPv4 0.0.0.0/0
C. Enable the block public access setting for Amazon EMR at the account level before any EMR cluster is created
D. Use AWS WAF to block public internet access to the EMR clusters across the board
Correct answer: C
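Option C refers to the account-level Amazon EMR block public access setting. A minimal boto3 sketch, assuming a region and an illustrative SSH exception (both are placeholders, not part of the original question):

import boto3

emr = boto3.client("emr", region_name="us-east-1")

# Turn on block public access for EMR across the whole account.
# Port 22 is listed as an allowed exception purely for illustration.
emr.put_block_public_access_configuration(
    BlockPublicAccessConfiguration={
        "BlockPublicSecurityGroupRules": True,
        "PermittedPublicSecurityGroupRuleRanges": [
            {"MinRange": 22, "MaxRange": 22}
        ],
    }
)

# Verify the setting before analysts launch their clusters.
print(emr.get_block_public_access_configuration())

Because the setting is account-wide, it applies to every EMR cluster the analysts create afterwards.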
Question #2
A banking company is currently using Amazon Redshift for sensitive data. An audit found that the current cluster is unencrypted. Compliance requires that a database with sensitive data must be encrypted using a hardware security module (HSM) with customer managed keys. Which modifications are required in the cluster to ensure compliance?
A. Create a new HSM-encrypted Amazon Redshift cluster and migrate the data to the new cluster
B. Modify the DB parameter group with the appropriate encryption settings and then restart the cluster
C. Enable HSM encryption in Amazon Redshift using the command line
D. Modify the Amazon Redshift cluster from the console and enable encryption using the HSM option
Correct answer: A
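Option A amounts to launching a new HSM-encrypted cluster and migrating the data into it. A hedged boto3 sketch; the cluster identifier, node type, credentials, and HSM resource names are placeholders:

import boto3

redshift = boto3.client("redshift")

# Create a new cluster encrypted with an HSM. The HSM client certificate and
# HSM configuration must already exist (the names below are placeholders).
redshift.create_cluster(
    ClusterIdentifier="sensitive-data-hsm",
    NodeType="ra3.4xlarge",
    NumberOfNodes=2,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",
    Encrypted=True,
    HsmClientCertificateIdentifier="my-hsm-client-cert",
    HsmConfigurationIdentifier="my-hsm-config",
)

# Data from the existing, unencrypted cluster would then be migrated, for
# example by UNLOADing it to Amazon S3 and COPYing it into the new cluster.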
Question #3
A company hosts an Apache Flink application on premises. The application processes data from several Apache Kafka clusters. The data originates from a variety of sources, such as web applications, mobile apps, and operational databases. The company has migrated some of these sources to AWS and now wants to migrate the Flink application. The company must ensure that data that resides in databases within the VPC does not traverse the internet. The application must be able to process all the data that comes from these sources. Which solution will meet these requirements?
A. Implement Flink on Amazon EC2 within the company's VPC. Create Amazon Managed Streaming for Apache Kafka (Amazon MSK) clusters in the VPC to collect data that comes from applications and databases within the VPC. Use Amazon Kinesis Data Streams to collect data that comes from the public internet. Configure Flink to have sources from Kinesis Data Streams, Amazon MSK, and any on-premises Kafka clusters by using AWS Client VPN or AWS Direct Connect
B. Implement Flink on Amazon EC2 within the company's VPC. Use Amazon Kinesis Data Streams to collect data that comes from applications and databases within the VPC and the public internet. Configure Flink to have sources from Kinesis Data Streams and any on-premises Kafka clusters by using AWS Client VPN or AWS Direct Connect
C. Create an Amazon Kinesis Data Analytics application by uploading the compiled Flink jar file. Use Amazon Kinesis Data Streams to collect data that comes from applications and databases within the VPC and the public internet. Configure the Kinesis Data Analytics application to have sources from Kinesis Data Streams and any on-premises Kafka clusters by using AWS Client VPN or AWS Direct Connect
D. Create an Amazon Kinesis Data Analytics application by uploading the compiled Flink jar file. Create Amazon Managed Streaming for Apache Kafka (Amazon MSK) clusters in the company's VPC to collect data that comes from applications and databases within the VPC. Use Amazon Kinesis Data Streams to collect data that comes from the public internet. Configure the Kinesis Data Analytics application to have sources from Kinesis Data Streams, Amazon MSK, and any on-premises Kafka clusters by using AWS Client VPN or AWS Direct Connect
Correct answer: D
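Options C and D both start from a Kinesis Data Analytics application created from the compiled Flink jar. A rough boto3 sketch, assuming the jar was already uploaded to S3; the application name, runtime version, role, and S3 location are placeholders:

import boto3

kda = boto3.client("kinesisanalyticsv2")

# Create the Flink application from a jar stored in Amazon S3.
kda.create_application(
    ApplicationName="flink-streaming-app",
    RuntimeEnvironment="FLINK-1_15",  # placeholder runtime version
    ServiceExecutionRole="arn:aws:iam::123456789012:role/kda-service-role",
    ApplicationConfiguration={
        "ApplicationCodeConfiguration": {
            "CodeContent": {
                "S3ContentLocation": {
                    "BucketARN": "arn:aws:s3:::my-flink-artifacts",
                    "FileKey": "app/flink-app.jar",
                }
            },
            "CodeContentType": "ZIPFILE",
        }
    },
)

The Kinesis Data Streams, Amazon MSK, and on-premises Kafka sources are then configured inside the Flink job itself.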
Question #4
A bank wants to migrate a Teradata data warehouse to the AWS Cloud. The bank needs a solution for reading large amounts of data and requires the highest possible performance. The solution also must maintain the separation of storage and compute. Which solution meets these requirements?
A. Use Amazon Athena to query the data in Amazon S3
B. Use Amazon Redshift with dense compute nodes to query the data in Amazon Redshift managed storage
C. Use Amazon Redshift with RA3 nodes to query the data in Amazon Redshift managed storage
D. Use PrestoDB on Amazon EMR to query the data in Amazon S3
Correct answer: B
Question #5
A power utility company is deploying thousands of smart meters to obtain real-time updates about power consumption. The company is using Amazon Kinesis Data Streams to collect the data streams from smart meters. The consumer application uses the Kinesis Client Library (KCL) to retrieve the stream data. The company has only one consumer application. The company observes an average of 1 second of latency from the moment that a record is written to the stream until the record is read by a consumer application. The company wants to reduce this latency. Which solution meets this requirement?
A. Use enhanced fan-out in Kinesis Data Streams
B. Increase the number of shards for the Kinesis data stream
C. Reduce the propagation delay by overriding the KCL default settings
D. Develop consumers by using Amazon Kinesis Data Firehose
Correct answer: C
Question #6
A software company hosts an application on AWS, and new features are released weekly. As part of the application testing process, a solution must be developed that analyzes logs from each Amazon EC2 instance to ensure that the application is working as expected after each deployment. The collection and analysis solution should be highly available with the ability to display new information with minimal delays. Which method should the company use to collect and analyze the logs?
A. Enable detailed monitoring on Amazon EC2, use Amazon CloudWatch agent to store logs in Amazon S3, and use Amazon Athena for fast, interactive log analytics
B. Use the Amazon Kinesis Producer Library (KPL) agent on Amazon EC2 to collect and send data to Kinesis Data Streams to further push the data to Amazon Elasticsearch Service and visualize using Amazon QuickSight
C. Use the Amazon Kinesis Producer Library (KPL) agent on Amazon EC2 to collect and send data to Kinesis Data Firehose to further push the data to Amazon Elasticsearch Service and Kibana
D. Use Amazon CloudWatch subscriptions to get access to a real-time feed of logs and have the logs delivered to Amazon Kinesis Data Streams to further push the data to Amazon Elasticsearch Service and Kibana
Correct answer: C
Question #7
A company has developed several AWS Glue jobs to validate and transform its data from Amazon S3 and load it into Amazon RDS for MySQL in batches once every day. The ETL jobs read the S3 data using a DynamicFrame. Currently, the ETL developers are experiencing challenges in processing only the incremental data on every run, as the AWS Glue job processes all the S3 input data on each run. Which approach would allow the developers to solve the issue with minimal coding effort?
A. Have the ETL jobs read the data from Amazon S3 using a DataFrame
B. Enable job bookmarks on the AWS Glue jobs
C. Create custom logic on the ETL jobs to track the processed S3 objects
D. Have the ETL jobs delete the processed objects or data from Amazon S3 after each run
Correct answer: D
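Option B refers to AWS Glue job bookmarks, which persist how far a job has read so that later runs only pick up new S3 objects. A minimal boto3 sketch; the job name, role, and script location are placeholders:

import boto3

glue = boto3.client("glue")

# Enable job bookmarks so each daily run processes only the incremental data.
glue.create_job(
    Name="s3-to-mysql-etl",
    Role="arn:aws:iam::123456789012:role/glue-etl-role",
    Command={
        "Name": "glueetl",
        "ScriptLocation": "s3://my-etl-scripts/s3_to_mysql.py",
    },
    DefaultArguments={"--job-bookmark-option": "job-bookmark-enable"},
)

For the bookmark to be recorded, the ETL script also needs to call job.init() and job.commit() and pass a transformation_ctx when it reads the S3 source.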
Question #8
A hospital uses wearable medical sensor devices to collect data from patients. The hospital is architecting a near-real-time solution that can ingest the data securely at scale. The solution should also be able to remove the patients' protected health information (PHI) from the streaming data and store the data in durable storage. Which solution meets these requirements with the least operational overhead?
A. Ingest the data using Amazon Kinesis Data Streams, which invokes an AWS Lambda function using Kinesis Client Library (KCL) to remove all PHI
B. Ingest the data using Amazon Kinesis Data Firehose to write the data to Amazon S3
C. Ingest the data using Amazon Kinesis Data Streams to write the data to Amazon S3
D. Ingest the data using Amazon Kinesis Data Firehose to write the data to Amazon S3
Correct answer: C
Question #9
A company launched a service that produces millions of messages every day and uses Amazon Kinesis Data Streams as the streaming service. The company uses the Kinesis SDK to write data to Kinesis Data Streams. A few months after launch, a data analyst found that write performance is significantly reduced. The data analyst investigated the metrics and determined that Kinesis is throttling the write requests. The data analyst wants to address this issue without significant changes to the architecture. Which actions should the data analyst take to resolve this issue? (Choose two.)
A. Increase the Kinesis Data Streams retention period to reduce throttling
B. Replace the Kinesis API-based data ingestion mechanism with Kinesis Agent
C. Increase the number of shards in the stream using the UpdateShardCount API
D. Choose partition keys in a way that results in a uniform record distribution across shards
E. Customize the application code to include retry logic to improve performance
Correct answer: AC
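Options C and D correspond to resharding the stream and choosing well-distributed partition keys. A hedged boto3 sketch; the stream name and target shard count are placeholders:

import boto3
import uuid

kinesis = boto3.client("kinesis")

# Option C: scale the stream out so its aggregate write capacity increases.
kinesis.update_shard_count(
    StreamName="service-events",
    TargetShardCount=64,
    ScalingType="UNIFORM_SCALING",
)

# Option D: a high-cardinality partition key spreads records evenly across shards.
kinesis.put_record(
    StreamName="service-events",
    Data=b'{"event": "example"}',
    PartitionKey=str(uuid.uuid4()),
)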
Question #10
A company needs to store objects containing log data in JSON format. The objects are generated by eight applications running in AWS. Six of the applications generate a total of 500 KiB of data per second, and two of the applications can generate up to 2 MiB of data per second. A data engineer wants to implement a scalable solution to capture and store usage data in an Amazon S3 bucket. The usage data objects need to be reformatted, converted to .csv format, and then compressed before they are stored in Amazon S3. Which solution meets these requirements?
A. Configure an Amazon Kinesis Data Firehose delivery stream for each application
B. Configure an Amazon Kinesis data stream with one shard per application
C. Configure an Amazon Kinesis data stream for each application
D. Store usage data objects in an Amazon DynamoDB table
Correct answer: B
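Option A describes one Kinesis Data Firehose delivery stream per application. A rough boto3 sketch of a single such stream that compresses records before landing them in S3; the ARNs are placeholders, and the Lambda function that would reformat the JSON records to .csv is hypothetical:

import boto3

firehose = boto3.client("firehose")

firehose.create_delivery_stream(
    DeliveryStreamName="app1-usage-data",
    DeliveryStreamType="DirectPut",
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-delivery-role",
        "BucketARN": "arn:aws:s3:::usage-data-bucket",
        "CompressionFormat": "GZIP",
        # Hypothetical Lambda transform that converts the JSON records to .csv.
        "ProcessingConfiguration": {
            "Enabled": True,
            "Processors": [
                {
                    "Type": "Lambda",
                    "Parameters": [
                        {
                            "ParameterName": "LambdaArn",
                            "ParameterValue": "arn:aws:lambda:us-east-1:123456789012:function:json-to-csv",
                        }
                    ],
                }
            ],
        },
    },
)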
Question #11
A company has a business unit uploading .csv files to an Amazon S3 bucket. The company's data platform team has set up an AWS Glue crawler to do discovery and create tables and schemas. An AWS Glue job writes processed data from the created tables to an Amazon Redshift database. The AWS Glue job handles column mapping and creating the Amazon Redshift table appropriately. When the AWS Glue job is rerun for any reason in a day, duplicate records are introduced into the Amazon Redshift table. Which solution will update the Redshift table without duplicates when jobs are rerun?
A. Modify the AWS Glue job to copy the rows into a staging table
B. Load the previously inserted data into a MySQL database in the AWS Glue job
C. Use Apache Spark’s DataFrame dropDuplicates() API to eliminate duplicates and then write the data to Amazon Redshift
D. Use the AWS Glue ResolveChoice built-in transform to select the most recent value of the column
Correct answer: B
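Option C relies on Spark's dropDuplicates() to remove repeated rows within the batch before it is written to Amazon Redshift. A hedged AWS Glue (PySpark) fragment; the catalog database, table, connection name, and temporary S3 path are placeholders:

from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from pyspark.context import SparkContext

glue_context = GlueContext(SparkContext.getOrCreate())

# Read the crawled table, drop exact duplicate rows, and write to Redshift.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="daily_csv"
)
deduped = DynamicFrame.fromDF(dyf.toDF().dropDuplicates(), glue_context, "deduped")

glue_context.write_dynamic_frame.from_jdbc_conf(
    frame=deduped,
    catalog_connection="redshift-connection",  # placeholder Glue connection
    connection_options={"dbtable": "public.sales", "database": "dev"},
    redshift_tmp_dir="s3://my-temp-bucket/redshift/",
)

Note that this only removes duplicates inside the current run; deduplicating against rows already loaded into Redshift would still need a staging-table pattern such as the one option A describes.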
Question #12
A company wants to use an automatic machine learning (ML) Random Cut Forest (RCF) algorithm to visualize complex real-world scenarios, such as detecting seasonality and trends, excluding outliers, and imputing missing values. The team working on this project is non-technical and is looking for an out-of-the-box solution that will require the LEAST amount of management overhead. Which solution will meet these requirements?
A. Use an AWS Glue ML transform to create a forecast and then use Amazon QuickSight to visualize the data
B. Use Amazon QuickSight to visualize the data and then use ML-powered forecasting to forecast the key business metrics
C. Use a pre-built ML AMI from the AWS Marketplace to create forecasts and then use Amazon QuickSight to visualize the data
D. Use calculated fields to create a new forecast and then use Amazon QuickSight to visualize the data
Correct answer: A
Question #13
A company has several Amazon EC2 instances sitting behind an Application Load Balancer (ALB). The company wants its IT infrastructure team to analyze the IP addresses coming into the company's ALB. The ALB is configured to store access logs in Amazon S3. The access logs create about 1 TB of data each day, and access to the data will be infrequent. The company needs a solution that is scalable, cost-effective, and has minimal maintenance requirements. Which solution meets these requirements?
A. Copy the data into Amazon Redshift and query the data
B. Use Amazon EMR and Apache Hive to query the S3 data
C. Use Amazon Athena to query the S3 data
D. Use Amazon Redshift Spectrum to query the S3 data
Correct answer: D
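Option C queries the ALB access logs in place with Amazon Athena. A minimal boto3 sketch that submits one query; the database, table, column names, and output location are placeholders, and the external table over the ALB log format would be created separately:

import boto3

athena = boto3.client("athena")

# Count requests per client IP for one day of ALB access logs.
response = athena.start_query_execution(
    QueryString=(
        "SELECT client_ip, COUNT(*) AS requests "
        "FROM alb_access_logs "
        "WHERE day = '2024-01-15' "
        "GROUP BY client_ip "
        "ORDER BY requests DESC LIMIT 100"
    ),
    QueryExecutionContext={"Database": "alb_logs_db"},
    ResultConfiguration={"OutputLocation": "s3://athena-query-results-bucket/"},
)
print(response["QueryExecutionId"])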
Question #14
A company has an encrypted Amazon Redshift cluster. The company recently enabled Amazon Redshift audit logs and needs to ensure that the audit logs are also encrypted at rest. The logs are retained for 1 year. The auditor queries the logs once a month. What is the MOST cost-effective way to meet these requirements?
A. Encrypt the Amazon S3 bucket where the logs are stored by using AWS Key Management Service (AWS KMS)
B. Query the data as required
C. Disable encryption on the Amazon Redshift cluster, configure audit logging, and encrypt the Amazon Redshift cluster
D. Use Amazon Redshift Spectrum to query the data as required
E. Enable default encryption on the Amazon S3 bucket where the logs are stored by using AES-256 encryption
F. Copy the data into the Amazon Redshift cluster from Amazon S3 on a daily basis
Correct answer: B
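Option E refers to default encryption on the S3 bucket that receives the Redshift audit logs. A minimal boto3 sketch, assuming a placeholder bucket name:

import boto3

s3 = boto3.client("s3")

# Default-encrypt every new object in the audit-log bucket with AES-256 (SSE-S3).
s3.put_bucket_encryption(
    Bucket="redshift-audit-logs-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)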
Question #15
A financial services company needs to aggregate daily stock trade data from the exchanges into a data store. The company requires that data be streamed directly into the data store, but also occasionally allows data to be modified using SQL. The solution should integrate complex, analytic queries running with minimal latency. The solution must provide a business intelligence dashboard that enables viewing of the top contributors to anomalies in stock prices. Which solution meets the company’s requirements?
A. Use Amazon Kinesis Data Firehose to stream data to Amazon S3
B. Use Amazon Kinesis Data Streams to stream data to Amazon Redshift
C. Use Amazon Redshift as a data source for Amazon QuickSight to create a business intelligence dashboard
D. Use Amazon Kinesis Data Firehose to stream data to Amazon Redshift
E. Use Amazon Redshift as a data source for Amazon QuickSight to create a business intelligence dashboard
F. Use Amazon Kinesis Data Streams to stream data to Amazon S3
Correct answer: CD
Question #16
A company is building a data lake and needs to ingest data from a relational database that has time-series data. The company wants to use managed services to accomplish this. The process needs to be scheduled daily and bring incremental data only from the source into Amazon S3. What is the MOST cost-effective approach to meet these requirements?
A. Use AWS Glue to connect to the data source using JDBC Drivers
B. Use AWS Glue to connect to the data source using JDBC Drivers
C. Use AWS Glue to connect to the data source using JDBC Drivers and ingest the entire dataset
D. Use AWS Glue to connect to the data source using JDBC Drivers and ingest the full data
Correct answer: B
Question #17
A large university has adopted a strategic goal of increasing diversity among enrolled students. The data analytics team is creating a dashboard with data visualizations to enable stakeholders to view historical trends. All access must be authenticated using Microsoft Active Directory. All data in transit and at rest must be encrypted. Which solution meets these requirements?
A. Amazon QuickSight Standard edition configured to perform identity federation using SAML 2.0
B. Amazon QuickSight Enterprise edition configured to perform identity federation using SAML 2.0
C. Amazon QuickSight Standard edition using AD Connector to authenticate using Active Directory
D. Amazon QuickSight Enterprise edition using AD Connector to authenticate using Active Directory
Correct answer: A
Question #18
A company uses Amazon Redshift as its data warehouse. A new table includes some columns that contain sensitive data and some columns that contain non-sensitive data. The data in the table eventually will be referenced by several existing queries that run many times each day. A data analytics specialist must ensure that only members of the company's auditing team can read the columns that contain sensitive data. All other users must have read-only access to the columns that contain non-sensitive data. Which solution will meet these requirements?
A. Grant the auditing team permission to read from the table
B. Grant all users read-only permissions to the columns that contain non-sensitive data
C. Grant all users read-only permissions to the columns that contain non-sensitive data
D. Grant the auditing team permission to read from the table
Correct answer: D
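The scenario hinges on Amazon Redshift column-level access control. A hedged sketch that issues the grants through the Redshift Data API; the cluster, database, user, table, column, and group names are placeholders:

import boto3

rsd = boto3.client("redshift-data")

# Auditors may read every column; everyone else sees only the non-sensitive ones.
statements = [
    "GRANT SELECT ON customer_events TO GROUP auditing_team;",
    "GRANT SELECT (event_id, event_date, product_code) ON customer_events TO PUBLIC;",
]
for sql in statements:
    rsd.execute_statement(
        ClusterIdentifier="analytics-cluster",
        Database="dev",
        DbUser="admin",
        Sql=sql,
    )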
Question #19
A market data company aggregates external data sources to create a detailed view of product consumption in different countries. The company wants to sell this data to external parties through a subscription. To achieve this goal, the company needs to make its data securely available to external parties who are also AWS users. What should the company do to meet these requirements with the LEAST operational overhead?
A. Store the data in Amazon S3
B. Store the data in Amazon S3
C. Upload the data to AWS Data Exchange for storage
D. Share the data by using presigned URLs for security
E. Upload the data to AWS Data Exchange for storage
F. Share the data by using the AWS Data Exchange sharing wizard
Correct answer: A
Question #20
A team of data scientists plans to analyze market trend data for their company's new investment strategy. The trend data comes from five different data sources in large volumes. The team wants to utilize Amazon Kinesis to support their use case. The team uses SQL-like queries to analyze trends and wants to send notifications based on certain significant patterns in the trends. Additionally, the data scientists want to save the data to Amazon S3 for archival and historical re-processing, and use AWS managed services wherever possible. Which approach meets these requirements?
A. Publish data to one Kinesis data stream
B. Deploy a custom application using the Kinesis Client Library (KCL) for analyzing trends, and send notifications using Amazon SNS
C. Configure Kinesis Data Firehose on the Kinesis data stream to persist data to an S3 bucket
D. Publish data to one Kinesis data stream
E. Deploy Kinesis Data Analytics to the stream for analyzing trends, and configure an AWS Lambda function as an output to send notifications using Amazon SNS
F. Configure Kinesis Data Firehose on the Kinesis data stream to persist data to an S3 bucket
Correct answer: C
Question #21
A manufacturing company has been collecting IoT sensor data from devices on its factory floor for a year and is storing the data in Amazon Redshift for daily analysis. A data analyst has determined that, at an expected ingestion rate of about 2 TB per day, the cluster will be undersized in less than 4 months. A long-term solution is needed. The data analyst has indicated that most queries only reference the most recent 13 months of data, yet there are also quarterly reports that need to query all the data generated since collection began. Which solution should the data analyst use to meet these requirements?
A. Create a daily job in AWS Glue to UNLOAD records older than 13 months to Amazon S3 and delete those records from Amazon Redshift
B. Take a snapshot of the Amazon Redshift cluster
C. Execute a CREATE TABLE AS SELECT (CTAS) statement to move records that are older than 13 months to quarterly partitioned data in Amazon Redshift Spectrum backed by Amazon S3
D. Unload all the tables in Amazon Redshift to an Amazon S3 bucket using S3 Intelligent-Tiering
Correct answer: B
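Options A and C both move the older records to Amazon S3 and keep them queryable through Amazon Redshift Spectrum. A hedged sketch of the UNLOAD step, submitted through the Redshift Data API; the table, column, bucket, and IAM role names are placeholders:

import boto3

rsd = boto3.client("redshift-data")

sql = """
UNLOAD ('SELECT * FROM sensor_readings WHERE reading_ts < DATEADD(month, -13, GETDATE())')
TO 's3://factory-iot-archive/sensor_readings/'
IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-spectrum-role'
FORMAT AS PARQUET
PARTITION BY (reading_quarter);
"""

rsd.execute_statement(
    ClusterIdentifier="iot-cluster",
    Database="dev",
    DbUser="admin",
    Sql=sql,
)

The offloaded data is then registered as an external (Spectrum) table so the quarterly reports can still query the full history while the cluster keeps only the most recent 13 months.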
Question #22
A data analytics specialist is building an automated ETL ingestion pipeline using AWS Glue to ingest compressed files that have been uploaded to an Amazon S3 bucket. The ingestion pipeline should support incremental data processing. Which AWS Glue feature should the data analytics specialist use to meet this requirement?
A. Workflows
B. Triggers
C. Job bookmarks
D. Classifiers
Correct answer: D
