Question #1
A company hosts an on-premises PostgreSQL database that contains historical data. An internal legacy application uses the database for read-only activities. The company's business team wants to move the data to a data lake in Amazon S3 as soon as possible and enrich the data for analytics. The company has set up an AWS Direct Connect connection between its VPC and its on-premises network. A data analytics specialist must design a solution that achieves the business team's goals with the least operational overhead.
A. Upload the data from the on-premises PostgreSQL database to Amazon S3 by using a customized batch upload process
B. Use the AWS Glue crawler to catalog the data in Amazon S3
C. Use Amazon Athena to query the data
D. Create an Amazon RDS for PostgreSQL database and use AWS Database Migration Service (AWS DMS) to migrate the data into Amazon RD
E. Use AWS Data Pipeline to copy and enrich the data from the Amazon RDS for PostgreSQL table and move the data to Amazon S3
F. Configure an AWS Glue crawler to use a JDBC connection to catalog the data in the on-premises database
Correct answer: A
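As a point of reference, here is a minimal boto3 sketch of the Glue-crawler-plus-Athena pattern that the options above describe; the connection name, IAM role, database, include path, and S3 locations are hypothetical placeholders, not values from the question.

```python
import boto3

glue = boto3.client("glue")

# Catalog the on-premises PostgreSQL tables through an existing JDBC connection
# ("onprem-postgres-conn" and the include path are placeholders).
glue.create_crawler(
    Name="historical-data-crawler",
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",  # placeholder role
    DatabaseName="historical_db",
    Targets={"JdbcTargets": [{"ConnectionName": "onprem-postgres-conn",
                              "Path": "historical/public/%"}]},
)
glue.start_crawler(Name="historical-data-crawler")

# After a Glue job has enriched the data and written it to the S3 data lake,
# Athena can query the cataloged result (table and bucket names are placeholders).
athena = boto3.client("athena")
athena.start_query_execution(
    QueryString="SELECT * FROM historical_db.enriched_history LIMIT 10",
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
```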
Question #2
A data analyst runs a large number of data manipulation language (DML) queries by using Amazon Athena with the JDBC driver. Recently, a query failed after it ran for 30 minutes. The query returned the following message: java.sql.SQLException: Query timeout. The data analyst does not immediately need the query results. However, the data analyst needs a long-term solution for this problem. Which solution will meet these requirements?
A. Split the query into smaller queries to search smaller subsets of data
B. In the settings for Athena, adjust the DML query timeout limit
C. In the Service Quotas console, request an increase for the DML query timeout
D. Save the tables as compressed
Correct answer: D
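One long-term way to shrink scan time for such queries is to rewrite the data in a compressed columnar layout. The sketch below is a hypothetical Athena CTAS run through boto3; the database, table, and bucket names are invented for illustration.

```python
import boto3

athena = boto3.client("athena")

# Hypothetical CTAS statement: rewrite a large table as compressed Parquet so that
# subsequent DML queries scan less data and are less likely to hit the timeout.
ctas = """
CREATE TABLE sales_parquet
WITH (format = 'PARQUET', parquet_compression = 'SNAPPY',
      external_location = 's3://example-bucket/sales_parquet/') AS
SELECT * FROM sales_raw
"""

athena.start_query_execution(
    QueryString=ctas,
    QueryExecutionContext={"Database": "analytics_db"},  # placeholder database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
```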
Question #3
A company wants to enrich application logs in near-real-time and use the enriched dataset for further analysis. The application is running on Amazon EC2 instances across multiple Availability Zones and storing its logs using Amazon CloudWatch Logs. The enrichment source is stored in an Amazon DynamoDB table. Which solution meets the requirements for the event collection and enrichment?
A. Use a CloudWatch Logs subscription to send the data to Amazon Kinesis Data Firehose
B. Use AWS Lambda to transform the data in the Kinesis Data Firehose delivery stream and enrich it with the data in the DynamoDB table
C. Configure Amazon S3 as the Kinesis Data Firehose delivery destination
D. Export the raw logs to Amazon S3 on an hourly basis using the AWS CLI
E. Use AWS Glue crawlers to catalog the logs
F. Set up an AWS Glue connection for the DynamoDB table and set up an AWS Glue ETL job to enrich the data
Correct answer: A
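A sketch of the enrichment step described above, assuming a Kinesis Data Firehose transformation Lambda that receives CloudWatch Logs subscription records; the DynamoDB table name, key, and enrichment attribute are hypothetical.

```python
import base64
import gzip
import json

import boto3

dynamodb = boto3.resource("dynamodb")
enrichment_table = dynamodb.Table("device-metadata")  # hypothetical enrichment table


def handler(event, context):
    """Firehose transformation Lambda: decompress CloudWatch Logs records, enrich
    each log event with an attribute from DynamoDB, and return the records."""
    output = []
    for record in event["records"]:
        # CloudWatch Logs subscription data arrives gzip-compressed and base64-encoded.
        payload = json.loads(gzip.decompress(base64.b64decode(record["data"])))
        enriched_events = []
        for log_event in payload.get("logEvents", []):
            message = json.loads(log_event["message"])  # assumes JSON-formatted application logs
            item = enrichment_table.get_item(
                Key={"device_id": message["device_id"]}  # hypothetical key attribute
            ).get("Item", {})
            message["device_location"] = item.get("location")  # hypothetical enrichment attribute
            enriched_events.append(message)
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",
            "data": base64.b64encode(
                ("\n".join(json.dumps(e) for e in enriched_events) + "\n").encode()
            ).decode(),
        })
    return {"records": output}
```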
Question #4
A banking company is currently using an Amazon Redshift cluster with dense storage (DS) nodes to store sensitive data. An audit found that the cluster is unencrypted. Compliance requirements state that a database with sensitive data must be encrypted through a hardware security module (HSM) with automated key rotation. Which combination of steps is required to achieve compliance? (Choose two.)
A. Set up a trusted connection with HSM using a client and server certificate with automatic key rotation
B. Modify the cluster with an HSM encryption option and automatic key rotation
C. Create a new HSM-encrypted Amazon Redshift cluster and migrate the data to the new cluster
D. Enable HSM with key rotation through the AWS CLI
E. Enable Elliptic Curve Diffie-Hellman Ephemeral (ECDHE) encryption in the HSM
Correct answer: C
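In line with option C, a new HSM-encrypted cluster is created and the data is migrated into it. A hedged boto3 sketch follows; the HSM certificate and configuration identifiers, node type, and credentials are placeholders.

```python
import boto3

redshift = boto3.client("redshift")

# Create a new HSM-encrypted cluster to migrate the data into; the existing
# unencrypted cluster cannot simply be switched to HSM encryption in place.
redshift.create_cluster(
    ClusterIdentifier="secure-dw",
    NodeType="ds2.xlarge",
    NumberOfNodes=2,
    MasterUsername="admin",
    MasterUserPassword="ExamplePassw0rd!",  # placeholder credential
    Encrypted=True,
    HsmClientCertificateIdentifier="example-hsm-client-cert",  # placeholder
    HsmConfigurationIdentifier="example-hsm-config",           # placeholder
)
```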
Question #5
A company uses an Amazon EMR cluster with 50 nodes to process operational data and make the data available for data analysts. These jobs run nightly, use Apache Hive with the Apache Tez framework as a processing model, and write results to Hadoop Distributed File System (HDFS). In the last few weeks, jobs have been failing and producing the following error message: "File could only be replicated to 0 nodes instead of 1". A data analytics specialist checks the DataNode logs, the NameNode logs, and network connectivity
A. Monitor the HDFSUtilization metric
B. If the value crosses a user-defined threshold, add task nodes to the EMR cluster
C. Monitor the HDFSUtilization metric
D. Monitor the MemoryAllocatedMB metric
E. If the value crosses a user-defined threshold, add task nodes to the EMR cluster
F. Monitor the MemoryAllocatedMB metric
Correct answer: D
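For the monitoring idea in the options, a boto3 sketch of a CloudWatch alarm on the cluster's HDFSUtilization metric is shown below; the cluster ID, threshold, and SNS topic ARN are placeholders.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when HDFS on the EMR cluster is filling up; the alarm's notification could
# drive a process that adds nodes before replication starts to fail.
cloudwatch.put_metric_alarm(
    AlarmName="emr-hdfs-utilization-high",
    Namespace="AWS/ElasticMapReduce",
    MetricName="HDFSUtilization",
    Dimensions=[{"Name": "JobFlowId", "Value": "j-EXAMPLE1234567"}],  # placeholder cluster ID
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,  # placeholder threshold (percent)
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:emr-capacity-alerts"],  # placeholder topic
)
```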
Question #6
A company is migrating from an on-premises Apache Hadoop cluster to an Amazon EMR cluster. The cluster runs only during business hours. Due to a company requirement to avoid intraday cluster failures, the EMR cluster must be highly available. When the cluster is terminated at the end of each business day, the data must persist. Which configurations would enable the EMR cluster to meet these requirements? (Choose three.)
A. EMR File System (EMRFS) for storage
B. Hadoop Distributed File System (HDFS) for storage
C. AWS Glue Data Catalog as the metastore for Apache Hive
D. MySQL database on the master node as the metastore for Apache Hive
E. Multiple master nodes in a single Availability Zone
F. Multiple master nodes in multiple Availability Zones
Correct answer: D
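A hedged boto3 sketch of a cluster that keeps storage and the Hive metastore outside the cluster (EMRFS on S3 and the Glue Data Catalog) and requests three master nodes; the subnet, roles, instance types, release label, and bucket names are placeholder values.

```python
import boto3

emr = boto3.client("emr")

# Data persists in S3 (EMRFS) and metadata persists in the Glue Data Catalog,
# so terminating the cluster at the end of the day does not lose anything.
emr.run_job_flow(
    Name="nightly-analytics",
    ReleaseLabel="emr-6.10.0",
    Applications=[{"Name": "Hive"}, {"Name": "Tez"}],
    Configurations=[{
        "Classification": "hive-site",
        "Properties": {
            "hive.metastore.client.factory.class":
                "com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory"
        },
    }],
    Instances={
        "InstanceGroups": [
            # Three master nodes (in one subnet, hence one Availability Zone) for HA.
            {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 3},
            {"InstanceRole": "CORE", "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
        "Ec2SubnetId": "subnet-0123456789abcdef0",  # placeholder subnet
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    LogUri="s3://example-emr-logs/",
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
```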
Question #7
A global pharmaceutical company receives test results for new drugs from various testing facilities worldwide. The results are sent in millions of 1 KB-sized JSON objects to an Amazon S3 bucket owned by the company. The data engineering team needs to process those files, convert them into Apache Parquet format, and load them into Amazon Redshift for data analysts to perform dashboard reporting. The engineering team uses AWS Glue to process the objects, AWS Step Functions for process orchestration, and Amazo
A. Use AWS Lambda to group the small files into larger files
B. Write the files back to Amazon S3
C. Use the AWS Glue dynamic frame file grouping option while ingesting the raw input files
D. Process the files and load them into Amazon Redshift tables
E. Use the Amazon Redshift COPY command to move the files from Amazon S3 into Amazon Redshift tables directly
F. Process the files in Amazon Redshift
Correct answer: C
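The file-grouping option mentioned in option C can be expressed in a Glue PySpark job roughly as follows; the S3 paths and group size are assumptions for illustration.

```python
# Glue (PySpark) sketch: read millions of small JSON objects with dynamic-frame
# file grouping so fewer, larger partitions are created, then write Parquet.
from awsglue.context import GlueContext
from pyspark.context import SparkContext

glue_context = GlueContext(SparkContext.getOrCreate())

dyf = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={
        "paths": ["s3://example-raw-results/"],   # placeholder input path
        "groupFiles": "inPartition",
        "groupSize": "134217728",                 # target ~128 MB per group (assumption)
    },
    format="json",
)

glue_context.write_dynamic_frame.from_options(
    frame=dyf,
    connection_type="s3",
    connection_options={"path": "s3://example-curated-results/"},  # placeholder output path
    format="parquet",
)
```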
Question #8
A bank is using Amazon Managed Streaming for Apache Kafka (Amazon MSK) to populate real-time data into a data lake. The data lake is built on Amazon S3, and data must be accessible from the data lake within 24 hours. Different microservices produce messages to different topics in the cluster. The cluster is created with 8 TB of Amazon Elastic Block Store (Amazon EBS) storage and a retention period of 7 days. The customer transaction volume has tripled recently, and disk monitoring has provided an alert that the
A. Use the Amazon MSK console to triple the broker storage and restart the cluster
B. Create an Amazon CloudWatch alarm that monitors the KafkaDataLogsDiskUsed metric. Automatically flush the oldest messages when the value of this metric exceeds 85%
C. Create a custom Amazon MSK configuration. Set the log.retention.hours parameter to 48. Update the cluster with the new configuration file
D. Triple the number of consumers to ensure that data is consumed as soon as it is added to a topic
Correct answer: C
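Option C corresponds to creating a custom MSK configuration and applying it to the cluster; a boto3 sketch follows, with the cluster ARN and current cluster version as placeholders.

```python
import boto3

kafka = boto3.client("kafka")

# Create a custom MSK configuration that lowers retention to 48 hours,
# then apply it to the existing cluster.
config = kafka.create_configuration(
    Name="retention-48h",
    ServerProperties=b"log.retention.hours=48\n",
)

kafka.update_cluster_configuration(
    ClusterArn="arn:aws:kafka:us-east-1:123456789012:cluster/example/abcd1234",  # placeholder
    ConfigurationInfo={"Arn": config["Arn"],
                       "Revision": config["LatestRevision"]["Revision"]},
    CurrentVersion="K3AEGXETSR30VB",  # placeholder current cluster version
)
```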
Question #9
A retail company is building its data warehouse solution using Amazon Redshift. As a part of that effort, the company is loading hundreds of files into the fact table created in its Amazon Redshift cluster. The company wants the solution to achieve the highest throughput and optimally use cluster resources when loading data into the company’s fact table. How should the company meet these requirements?
A. Use multiple COPY commands to load the data into the Amazon Redshift cluster
B. Use S3DistCp to load multiple files into the Hadoop Distributed File System (HDFS) and use an HDFS connector to ingest the data into the Amazon Redshift cluster
C. Use LOAD commands equal to the number of Amazon Redshift cluster nodes and load the data in parallel into each node
D. Use a single COPY command to load the data into the Amazon Redshift cluster
Correct answer: D
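For context, a single COPY command pointed at a key prefix lets Redshift split the input files across slices and load them in parallel. The sketch below uses the Redshift Data API; the cluster, database, table, role, and bucket names are placeholders.

```python
import boto3

redshift_data = boto3.client("redshift-data")

# One COPY over a prefix (or a manifest) loads the hundreds of staged files in parallel.
copy_sql = """
COPY sales_fact
FROM 's3://example-staging-bucket/sales_fact/'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
FORMAT AS CSV
"""

redshift_data.execute_statement(
    ClusterIdentifier="analytics-cluster",  # placeholder cluster
    Database="dev",
    DbUser="admin",
    Sql=copy_sql,
)
```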
Question #10
A company stores Apache Parquet-formatted files in Amazon S3. The company uses an AWS Glue Data Catalog to store the table metadata and Amazon Athena to query and analyze the data. The tables have a large number of partitions. The queries are run only on small subsets of data in the table. A data analyst adds new time partitions into the table as new data arrives. The data analyst has been asked to reduce the query runtime. Which solution will provide the MOST reduction in the query runtime?
A. Convert the Parquet files to the CSV file format
B. Convert the Parquet files to the Apache ORC file format
C. Then attempt to query the data again
D. Use partition projection to speed up the processing of the partitioned table
E. Add more partitions to be used over the table
F. Then filter over two partitions and put all columns in the WHERE clause
Correct answer: A
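Partition projection is switched on through table properties. The sketch below assumes a table partitioned by a dt date column and uses placeholder database, table, date range, and bucket names.

```python
import boto3

athena = boto3.client("athena")

# Enable partition projection so Athena computes partition values instead of
# retrieving a large number of partitions from the Data Catalog.
ddl = """
ALTER TABLE analytics_db.events SET TBLPROPERTIES (
  'projection.enabled' = 'true',
  'projection.dt.type' = 'date',
  'projection.dt.format' = 'yyyy-MM-dd',
  'projection.dt.range' = '2020-01-01,NOW',
  'storage.location.template' = 's3://example-data-lake/events/dt=${dt}/'
)
"""

athena.start_query_execution(
    QueryString=ddl,
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
```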
Question #11
An online retailer is rebuilding its inventory management system and inventory reordering system to automatically reorder products by using Amazon Kinesis Data Streams. The inventory management system uses the Kinesis Producer Library (KPL) to publish data to a stream. The inventory reordering system uses the Kinesis Client Library (KCL) to consume data from the stream. The stream has been configured to scale as needed. Just before production deployment, the retailer discovers that the inventory reordering
A. The producer has a network-related timeout
B. The stream’s value for the IteratorAgeMilliseconds metric is too high
C. There was a change in the number of shards, record processors, or both
D. The AggregationEnabled configuration property was set to true
E. The max_records configuration property was set to a number that is too high
Correct answer: C
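A small diagnostic sketch for this kind of consumer problem: check the current open shard count and whether the consumer is falling behind on GetRecords.IteratorAgeMilliseconds. The stream name is a placeholder.

```python
from datetime import datetime, timedelta

import boto3

kinesis = boto3.client("kinesis")
cloudwatch = boto3.client("cloudwatch")

# How many shards are open right now? A recent change here can reshuffle leases
# across KCL record processors.
summary = kinesis.describe_stream_summary(StreamName="inventory-stream")
print("Open shards:", summary["StreamDescriptionSummary"]["OpenShardCount"])

# Is the consumer falling behind the tip of the stream?
now = datetime.utcnow()
metrics = cloudwatch.get_metric_statistics(
    Namespace="AWS/Kinesis",
    MetricName="GetRecords.IteratorAgeMilliseconds",
    Dimensions=[{"Name": "StreamName", "Value": "inventory-stream"}],
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,
    Statistics=["Maximum"],
)
print(metrics["Datapoints"])
```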
Question #12
A telecommunications company is looking for an anomaly-detection solution to identify fraudulent calls. The company currently uses Amazon Kinesis to stream voice call records in a JSON format from its on-premises database to Amazon S3. The existing dataset contains voice call records with 200 columns. To detect fraudulent calls, the solution would need to look at 5 of these columns only. The company is interested in a cost-effective solution using AWS that requires minimal effort and experience in anomaly detection.
A. Use an AWS Glue job to transform the data from JSON to Apache Parquet
B. Use AWS Glue crawlers to discover the schema and build the AWS Glue Data Catalog
C. Use Amazon Athena to create a table with a subset of columns
D. Use Amazon QuickSight to visualize the data and then use Amazon QuickSight machine learning-powered anomaly detection
E. Use Kinesis Data Firehose to detect anomalies on a data stream from Kinesis by running SQL queries, which compute an anomaly score for all calls and store the output in Amazon RDS
F. Use Amazon Athena to build a dataset and Amazon QuickSight to visualize the results
Correct answer: B
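Narrowing the 200 columns down to the few that matter can be done with an Athena view that QuickSight then uses as its dataset; the column, table, database, and bucket names below are invented for illustration.

```python
import boto3

athena = boto3.client("athena")

# Expose only the handful of columns the anomaly analysis needs; QuickSight can
# then point its dataset (and ML-powered anomaly detection) at this view.
view_sql = """
CREATE OR REPLACE VIEW call_features AS
SELECT caller_id, callee_id, call_start, call_duration_seconds, call_charge
FROM voice_calls_parquet
"""

athena.start_query_execution(
    QueryString=view_sql,
    QueryExecutionContext={"Database": "telecom_db"},  # placeholder database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
```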
Question #13
A company wants to improve user satisfaction for its smart home system by adding more features to its recommendation engine. Each sensor asynchronously pushes its nested JSON data into Amazon Kinesis Data Streams using the Kinesis Producer Library (KPL) in Java. Statistics from a set of failed sensors showed that, when a sensor is malfunctioning, its recorded data is not always sent to the cloud. The company needs a solution that offers near-real-time analytics on the data from the most updated sensors. Which solution meets these requirements?
A. Set the RecordMaxBufferedTime property of the KPL to "1" to disable the buffering on the sensor side
B. Push the enriched data to a fleet of Kinesis data streams and enable the data transformation feature to flatten the JSON file
C. Instantiate a dense storage Amazon Redshift cluster and use it as the destination for the Kinesis Data Firehose delivery stream
D. Update the sensors' code to use the PutRecord/PutRecords call from the Kinesis Data Streams API with the AWS SDK for Java
E. Use Kinesis Data Analytics to enrich the data based on a company-developed anomaly detection SQL script
F. Direct the output of the KDA application to a Kinesis Data Firehose delivery stream, enable the data transformation feature to flatten the JSON file, and set the Kinesis Data Firehose destination to an Amazon Elasticsearch Service cluster
Correct answer: B
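Option D's pattern of calling the Kinesis Data Streams API directly (instead of relying on KPL buffering) looks roughly like this in Python; the stream name and payload fields are placeholders.

```python
import json

import boto3

kinesis = boto3.client("kinesis")

# Write each sensor reading synchronously with PutRecord rather than buffering in the KPL.
reading = {"sensor_id": "sensor-42", "temperature": 21.7,
           "timestamp": "2023-01-01T00:00:00Z"}  # placeholder payload

kinesis.put_record(
    StreamName="smart-home-sensors",  # placeholder stream
    Data=json.dumps(reading).encode("utf-8"),
    PartitionKey=reading["sensor_id"],
)
```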
Question #14
A company with a video streaming website wants to analyze user behavior to make recommendations to users in real time. Clickstream data is being sent to Amazon Kinesis Data Streams, and reference data is stored in Amazon S3. The company wants a solution that can use standard SQL queries. The solution must also provide a way to look up pre-calculated reference data while making recommendations. Which solution meets these requirements?
A. Use an AWS Glue Python shell job to process incoming data from Kinesis Data Streams. Use the Boto3 library to write data to Amazon Redshift
B. Use AWS Glue streaming and Scala to process incoming data from Kinesis Data Streams. Use the AWS Glue connector to write data to Amazon Redshift
C. Use Amazon Kinesis Data Analytics to create an in-application table based upon the reference data. Process incoming data from Kinesis Data Streams. Use a data stream to write results to Amazon Redshift
D. Use Amazon Kinesis Data Analytics to create an in-application table based upon the reference data. Process incoming data from Kinesis Data Streams. Use an Amazon Kinesis Data Firehose delivery stream to write results to Amazon Redshift
Correct answer: B
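Kinesis Data Analytics (SQL) attaches S3 reference data as an in-application table that streaming queries can join against. A hedged boto3 sketch follows; the application name, version, bucket, file key, role, and schema are all assumed for illustration.

```python
import boto3

kda = boto3.client("kinesisanalytics")

# Register the S3 object as the in-application reference table REFERENCE_DATA.
kda.add_application_reference_data_source(
    ApplicationName="recommendations-app",   # placeholder application
    CurrentApplicationVersionId=1,           # placeholder version
    ReferenceDataSource={
        "TableName": "REFERENCE_DATA",
        "S3ReferenceDataSource": {
            "BucketARN": "arn:aws:s3:::example-reference-bucket",
            "FileKey": "reference/products.csv",
            "ReferenceRoleARN": "arn:aws:iam::123456789012:role/KdaReferenceRole",
        },
        "ReferenceSchema": {
            "RecordFormat": {
                "RecordFormatType": "CSV",
                "MappingParameters": {"CSVMappingParameters": {
                    "RecordRowDelimiter": "\n", "RecordColumnDelimiter": ","}},
            },
            "RecordColumns": [
                {"Name": "product_id", "SqlType": "VARCHAR(32)"},
                {"Name": "category", "SqlType": "VARCHAR(64)"},
            ],
        },
    },
)
```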
Question #15
A data analyst is designing an Amazon QuickSight dashboard using centralized sales data that resides in Amazon Redshift. The dashboard must be restricted so that a salesperson in Sydney, Australia, can see only the Australia view and that a salesperson in New York can see only United States (US) data. What should the data analyst do to ensure the appropriate data security is in place?
A. Place the data sources for Australia and the US into separate SPICE capacity pools
B. Set up an Amazon Redshift VPC security group for Australia and the US
C. Deploy QuickSight Enterprise edition to implement row-level security (RLS) to the sales table
D. Deploy QuickSight Enterprise edition and set up different VPC security groups for Australia and the US
Correct answer: BD
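Row-level security in QuickSight Enterprise edition is driven by a separate rules dataset that maps users or groups to the rows they may see. The snippet below only builds a hypothetical rules file (the group names and country column are assumptions); it would then be uploaded as its own dataset and attached to the sales dataset through its row-level permission settings.

```python
import csv

# Hypothetical RLS rules: each group sees only rows whose "country" value matches.
rules = [
    {"GroupName": "sales-sydney", "country": "Australia"},
    {"GroupName": "sales-new-york", "country": "United States"},
]

with open("rls_rules.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["GroupName", "country"])
    writer.writeheader()
    writer.writerows(rules)
```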
Question #16
A media company wants to perform machine learning and analytics on the data residing in its Amazon S3 data lake. There are two data transformation requirements that will enable the consumers within the company to create reports: daily transformations of 300 GB of data with different file formats landing in Amazon S3 at a scheduled time, and one-time transformations of terabytes of archived data residing in the S3 data lake. Which combination of solutions cost-effectively meets the company's requirements for transforming the data?
A. For daily incoming data, use AWS Glue crawlers to scan and identify the schema
B. For daily incoming data, use Amazon Athena to scan and identify the schema
C. For daily incoming data, use Amazon Redshift to perform transformations
D. For daily incoming data, use AWS Glue workflows with AWS Glue jobs to perform transformations
E. For archived data, use Amazon EMR to perform data transformations
F. For archived data, use Amazon SageMaker to perform data transformations
Correct answer: B
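For the scheduled daily transformations, a Glue workflow plus a scheduled trigger that starts an existing Glue job might look like the sketch below; the workflow, trigger, and job names and the cron expression are placeholders.

```python
import boto3

glue = boto3.client("glue")

# A workflow groups the daily crawl/transform steps; a scheduled trigger kicks it off.
glue.create_workflow(Name="daily-landing-transform")

glue.create_trigger(
    Name="daily-0200-utc",
    WorkflowName="daily-landing-transform",
    Type="SCHEDULED",
    Schedule="cron(0 2 * * ? *)",                    # placeholder schedule
    Actions=[{"JobName": "transform-landing-data"}], # placeholder existing Glue job
    StartOnCreation=True,
)
```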
Question #17
A manufacturing company has many IoT devices in different facilities across the world. The company is using Amazon Kinesis Data Streams to collect the data from the devices. The company's operations team has started to observe many WriteThroughputExceeded exceptions. The operations team determines that the reason is the number of records that are being written to certain shards. The data contains device ID, capture date, measurement type, measurement value, and facility ID. The facility ID is used as the partition key
A. Change the partition key from facility ID to a randomly generated key
B. Increase the number of shards
C. Archive the data on the producers' side
D. Change the partition key from facility ID to capture date
Correct answer: A
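Switching to a high-cardinality partition key (a random one in this sketch) spreads writes across shards instead of concentrating them on the shards owned by a few busy facility IDs; the stream name and record fields are placeholders.

```python
import json
import uuid

import boto3

kinesis = boto3.client("kinesis")

# A random partition key distributes records evenly across all shards.
record = {"device_id": "dev-1", "facility_id": "fac-9",
          "measurement_type": "temp", "value": 40.2}  # placeholder payload

kinesis.put_record(
    StreamName="iot-measurements",  # placeholder stream
    Data=json.dumps(record).encode("utf-8"),
    PartitionKey=str(uuid.uuid4()),
)
```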
