Question #1
A company uses Amazon EC2 instances to receive files from external vendors throughout each day. At the end of each day, the EC2 instances combine the files into a single file, perform gzip compression, and upload the single file to an Amazon S3 bucket. The total size of all the files is approximately 100 GB each day. When the files are uploaded to Amazon S3, an AWS Batch job runs a COPY command to load the files into an Amazon Redshift cluster. Which solution will MOST accelerate the COPY process?
A. Upload the individual files to Amazon S3
B. Split the files so that the number of files is equal to a multiple of the number of slices in the Redshift cluster
C. Split the files so that each file uses 50% of the free storage on each compute node in the Redshift cluster
D. Apply sharding by breaking up the files so that the DISTKEY columns with the same values go to the same file
Correct answer: B
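For reference, a minimal boto3 sketch of the approach in option B: the nightly output is split into gzip parts under one S3 prefix (ideally a multiple of the cluster's slice count) and loaded with a single COPY. The cluster, database, bucket, and role names below are hypothetical.

```python
import boto3

client = boto3.client("redshift-data", region_name="us-east-1")

# Splitting the 100 GB nightly file into gzip parts that are a multiple of the
# cluster's slice count lets every slice load part of the data in parallel.
copy_sql = """
    COPY sales_staging
    FROM 's3://example-bucket/daily/2024-01-15/part_'
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
    GZIP
    FORMAT AS CSV;
"""

response = client.execute_statement(
    ClusterIdentifier="example-cluster",   # hypothetical cluster name
    Database="analytics",
    DbUser="etl_user",
    Sql=copy_sql,
)
print(response["Id"])  # poll describe_statement() with this id for completion
```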
Question #2
A company uses Amazon Connect to manage its contact center. The company uses Salesforce to manage its customer relationship management (CRM) data. The company must build a pipeline to ingest data from Amazon Connect and Salesforce into a data lake that is built on Amazon S3. Which solution will meet this requirement with the LEAST operational overhead?
A. Use Amazon Kinesis Data Streams to ingest the Amazon Connect data
B. Use Amazon Kinesis Data Firehose to ingest the Amazon Connect data
C. Use Amazon Kinesis Data Firehose to ingest the Amazon Connect data
D. Use Amazon AppFlow to ingest the Amazon Connect data
Correct answer: B
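As a rough illustration of option B, the sketch below creates a Kinesis Data Firehose delivery stream that writes directly to the S3 data lake with no consumer application to operate; the Salesforce side of the pipeline would typically be a separate Amazon AppFlow flow. All ARNs and names are hypothetical.

```python
import boto3

firehose = boto3.client("firehose", region_name="us-east-1")

# Firehose buffers incoming Amazon Connect records and delivers them to S3,
# so there is no consumer application to run or scale.
firehose.create_delivery_stream(
    DeliveryStreamName="connect-to-datalake",       # hypothetical name
    DeliveryStreamType="DirectPut",
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::123456789012:role/FirehoseDeliveryRole",
        "BucketARN": "arn:aws:s3:::example-data-lake",
        "Prefix": "connect/ctr/",
        "BufferingHints": {"IntervalInSeconds": 300, "SizeInMBs": 64},
    },
)
```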
Question #3
A company wants to use a data lake that is hosted on Amazon S3 to provide analytics services for historical data. The data lake consists of 800 tables but is expected to grow to thousands of tables. More than 50 departments use the tables, and each department has hundreds of users. Different departments need access to specific tables and columns. Which solution will meet these requirements with the LEAST operational overhead?
A. Create an IAM role for each department
B. Create an Amazon Redshift cluster for each department
C. Create an IAM role for each department
D. Create an Amazon EMR cluster for each department
E. Relevant S3 files
Correct answer: C
Question #4
A US-based sneaker retail company launched its global website. All the transaction data is stored in Amazon RDS and curated historic transaction data is stored in Amazon Redshift in the us-east-1 Region. The business intelligence (BI) team wants to enhance the user experience by providing a dashboard for sneaker trends. The BI team decides to use Amazon QuickSight to render the website dashboards. During development, a team in Japan provisioned Amazon QuickSight in ap-northeast-1. The team is having difficulty connecting Amazon QuickSight in ap-northeast-1 to Amazon Redshift in us-east-1. Which solution will resolve this issue?
A. In the Amazon Redshift console, choose to configure cross-Region snapshots and set the destination Region as ap-northeast-1
B. Create a VPC endpoint from the Amazon QuickSight VPC to the Amazon Redshift VPC so Amazon QuickSight can access data from Amazon Redshift
C. Create an Amazon Redshift endpoint connection string with Region information in the string and use this connection string in Amazon QuickSight to connect to Amazon Redshift
D. Create a new security group for Amazon Redshift in us-east-1 with an inbound rule authorizing access from the appropriate IP address range for the Amazon QuickSight servers in ap-northeast-1
Correct answer: D
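A minimal sketch of option D, assuming a placeholder security group ID and CIDR: the inbound rule opens the default Redshift port (5439) to the Amazon QuickSight IP address range for ap-northeast-1.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# The CIDR below is a documentation placeholder; substitute the published
# QuickSight IP address range for ap-northeast-1.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",   # security group attached to the Redshift cluster
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 5439,          # default Amazon Redshift port
            "ToPort": 5439,
            "IpRanges": [
                {
                    "CidrIp": "203.0.113.0/24",
                    "Description": "QuickSight ap-northeast-1 (placeholder range)",
                }
            ],
        }
    ],
)
```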
Question #5
A company wants to improve user satisfaction for its smart home system by adding more features to its recommendation engine. Each sensor asynchronously pushes its nested JSON data into Amazon Kinesis Data Streams using the Kinesis Producer Library (KPL) in Java. Statistics from a set of failed sensors showed that, when a sensor is malfunctioning, its recorded data is not always sent to the cloud. The company needs a solution that offers near-real-time analytics on the data from the most updated sensors. Which solution meets these requirements?
A. Set the RecordMaxBufferedTime property of the KPL to "1" to disable the buffering on the sensor side
B. Push the enriched data to a fleet of Kinesis data streams and enable the data transformation feature to flatten the JSON file
C. Instantiate a dense storage Amazon Redshift cluster and use it as the destination for the Kinesis Data Firehose delivery stream
D. Update the sensors' code to use the PutRecord/PutRecords call from the Kinesis Data Streams API with the AWS SDK for Java
E. Use Kinesis Data Analytics to enrich the data based on a company-developed anomaly detection SQL script
F. Direct the output of KDA application to a Kinesis Data Firehose delivery stream, enable the data transformation feature to flatten the JSON file, and set the Kinesis Data Firehose destination to an Amazon Elasticsearch Service cluster
Correct answer: A
Question #6
A manufacturing company has been collecting IoT sensor data from devices on its factory floor for a year and is storing the data in Amazon Redshift for daily analysis. A data analyst has determined that, at an expected ingestion rate of about 2 TB per day, the cluster will be undersized in less than 4 months. A long-term solution is needed. The data analyst has indicated that most queries only reference the most recent 13 months of data, yet there are also quarterly reports that need to query all the data generated. Which combination of actions will meet these requirements? (Select THREE.)
A. Create a daily job in AWS Glue to UNLOAD records older than 13 months to Amazon S3 and delete those records from Amazon Redshift
B. Create an external table in Amazon Redshift to point to the S3 location
C. Use Amazon Redshift Spectrum to join to data that is older than 13 months
D. Take a snapshot of the Amazon Redshift cluster
E. Restore the cluster to a new cluster using dense storage nodes with additional storage capacity
F. Execute a CREATE TABLE AS SELECT (CTAS) statement to move records that are older than 13 months to quarterly partitioned data in Amazon Redshift Spectrum backed by Amazon S3
Correct answer: ACE
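For reference, a sketch of the combination in options A, B, and C, run through the Redshift Data API with hypothetical table, bucket, and role names: UNLOAD records older than 13 months to S3, delete them from the cluster, and expose the archive through a Redshift Spectrum external schema.

```python
import time
import boto3

client = boto3.client("redshift-data", region_name="us-east-1")

def run(sql: str) -> None:
    """Submit one statement and wait for it to finish before returning."""
    stmt = client.execute_statement(
        ClusterIdentifier="example-cluster", Database="analytics", DbUser="etl_user", Sql=sql
    )
    while True:
        status = client.describe_statement(Id=stmt["Id"])["Status"]
        if status in ("FINISHED", "FAILED", "ABORTED"):
            break
        time.sleep(5)

# 1. Archive records older than 13 months to S3 as Parquet (option A).
run("""
    UNLOAD ('SELECT * FROM sensor_readings WHERE reading_ts < DATEADD(month, -13, GETDATE())')
    TO 's3://example-archive/sensor_readings/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftSpectrumRole'
    FORMAT AS PARQUET;
""")

# 2. Remove the archived rows from the cluster to reclaim local storage (option A).
run("DELETE FROM sensor_readings WHERE reading_ts < DATEADD(month, -13, GETDATE());")

# 3. Expose the S3 archive through Redshift Spectrum for the quarterly reports (options B/C).
run("""
    CREATE EXTERNAL SCHEMA IF NOT EXISTS archive
    FROM DATA CATALOG DATABASE 'sensor_archive'
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftSpectrumRole'
    CREATE EXTERNAL DATABASE IF NOT EXISTS;
""")
```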
Question #7
A company has 10-15 of uncompressed .csv files in Amazon S3. The company is evaluating Amazon Athena as a one-time query engine. The company wants to transform the data to optimize query runtime and storage costs. Which option for data format and compression meets these requirements?
A. CSV compressed with zip
B. JSON compressed with bzip2
C. Apache Parquet compressed with Snappy
D. Apache Avro compressed with LZO
Correct answer: B
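As a reference point, an Athena CTAS statement is the usual way to rewrite a CSV table in another format and compression without extra infrastructure; the sketch below targets Snappy-compressed Parquet as an example, with hypothetical database, table, and bucket names.

```python
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# The CTAS query reads the raw CSV table once and writes it back as
# Snappy-compressed Parquet, which Athena can then scan column by column.
ctas = """
    CREATE TABLE analytics.events_parquet
    WITH (
        format = 'PARQUET',
        parquet_compression = 'SNAPPY',
        external_location = 's3://example-bucket/curated/events_parquet/'
    ) AS
    SELECT * FROM analytics.events_csv;
"""

athena.start_query_execution(
    QueryString=ctas,
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)
```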
Question #8
A company is building an analytical solution that includes Amazon S3 as data lake storage and Amazon Redshift for data warehousing. The company wants to use Amazon Redshift Spectrum to query the data that is stored in Amazon S3. Which steps should the company take to improve performance when the company uses Amazon Redshift Spectrum to query the S3 data files? (Select THREE.)
Use gzip compression with individual file sizes of 1-5 GB
A. Use a columnar storage file format
B. Partition the data based on the most common query predicates
C. Split the data into KB-sized files
D. Keep all files about the same size
E. Use file formats that are not splittable
Correct answer: BCD
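As an illustration of options A and B, the sketch below registers a partitioned, columnar external table that both Athena and Redshift Spectrum can query; all names are hypothetical.

```python
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# The spectrum_db database would live in the Glue Data Catalog and be mapped to
# an external schema in Amazon Redshift, so Athena and Redshift Spectrum see the
# same partitioned Parquet table. Partitioning on the most common query
# predicate (sale_date here) lets both engines prune partitions.
ddl = """
    CREATE EXTERNAL TABLE IF NOT EXISTS spectrum_db.sales (
        order_id string,
        amount   double
    )
    PARTITIONED BY (sale_date string)
    STORED AS PARQUET
    LOCATION 's3://example-data-lake/sales/';
"""

athena.start_query_execution(
    QueryString=ddl,
    QueryExecutionContext={"Database": "spectrum_db"},
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)
```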
Question #9
A central government organization is collecting events from various internal applications using Amazon Managed Streaming for Apache Kafka (Amazon MSK). The organization has configured a separate Kafka topic for each application to separate the data. For security reasons, the Kafka cluster has been configured to only allow TLS encrypted data and it encrypts the data at rest. A recent application update showed that one of the applications was configured incorrectly, resulting in writing data to a Kafka topic that belongs to a different application. The organization wants to prevent applications from writing to topics that are not their own. Which solution will meet this requirement?
A. Create a different Amazon EC2 security group for each application
B. Install Kafka Connect on each application instance and configure each Kafka Connect instance to write to a specific topic only
C. Use Kafka ACLs and configure read and write permissions for each topic
D. Create a different Amazon EC2 security group for each application
Correct answer: B
Question #10
A mobile gaming company wants to capture data from its gaming app and make the data available for analysis immediately. The data record size will be approximately 20 KB. The company is concerned about achieving optimal throughput from each device. Additionally, the company wants to develop a data stream processing application with dedicated throughput for each consumer. Which solution would achieve this goal?
A. Have the app call the PutRecords API to send data to Amazon Kinesis Data Streams
B. Have the app call the PutRecordBatch API to send data to Amazon Kinesis Data Firehose
C. Have the app use Amazon Kinesis Producer Library (KPL) to send data to Kinesis Data Firehose
D. Have the app call the PutRecords API to send data to Amazon Kinesis Data Streams
Correct answer: D
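A rough sketch of the two halves of this scenario with hypothetical names: PutRecords batches up to 500 records per call for high per-device throughput, and registering a stream consumer gives a processing application dedicated (enhanced fan-out) read throughput.

```python
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

# Each record is ~20 KB; PutRecords batches many records per call, which keeps
# per-device throughput high compared with one call per event.
events = [{"device_id": f"device-{i}", "score": i * 10} for i in range(3)]

kinesis.put_records(
    StreamName="gaming-events",                      # hypothetical stream name
    Records=[
        {"Data": json.dumps(e).encode("utf-8"), "PartitionKey": e["device_id"]}
        for e in events
    ],
)

# Registering a consumer gives the processing application its own dedicated
# (enhanced fan-out) read throughput instead of sharing the per-shard limit.
kinesis.register_stream_consumer(
    StreamARN="arn:aws:kinesis:us-east-1:123456789012:stream/gaming-events",
    ConsumerName="recommendation-app",
)
```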
Question #11
A data analytics specialist is building an automated ETL ingestion pipeline using AWS Glue to ingest compressed files that have been uploaded to an Amazon S3 bucket. The ingestion pipeline should support incremental data processing. Which AWS Glue feature should the data analytics specialist use to meet this requirement?
A. Workflows
B. Triggers
C. Job bookmarks
D. Classifiers
Correct answer: B
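For reference, option C refers to AWS Glue job bookmarks, which are enabled through a job argument; a minimal sketch with hypothetical job, role, and script names follows.

```python
import boto3

glue = boto3.client("glue", region_name="us-east-1")

# Job bookmarks make the job remember which S3 objects it has already processed,
# so each run only ingests files uploaded since the previous run.
glue.create_job(
    Name="incremental-ingest",                        # hypothetical job name
    Role="arn:aws:iam::123456789012:role/GlueJobRole",
    Command={
        "Name": "glueetl",
        "ScriptLocation": "s3://example-bucket/scripts/ingest.py",
        "PythonVersion": "3",
    },
    DefaultArguments={"--job-bookmark-option": "job-bookmark-enable"},
    GlueVersion="4.0",
)
```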
Question #12
A retail company is building its data warehouse solution using Amazon Redshift. As a part of that effort, the company is loading hundreds of files into the fact table created in its Amazon Redshift cluster. The company wants the solution to achieve the highest throughput and optimally use cluster resources when loading data into the company’s fact table. How should the company meet these requirements?
A. Use multiple COPY commands to load the data into the Amazon Redshift cluster
B. Use S3DistCp to load multiple files into the Hadoop Distributed File System (HDFS) and use an HDFSconnector to ingest the data into the Amazon Redshift cluster
C. Use LOAD commands equal to the number of Amazon Redshift cluster nodes and load the data in parallel into each node
D. Use a single COPY command to load the data into the Amazon Redshift cluster
Correct answer: B
Question #13
A university intends to use Amazon Kinesis Data Firehose to collect JSON-formatted batches of water quality readings in Amazon S3. The readings are from 50 sensors scattered across a local lake. Students will query the stored data using Amazon Athena to observe changes in a captured metric over time, such as water temperature or acidity. Interest has grown in the study, prompting the university to reconsider how data will be stored. Which data format and partitioning choices will MOST significantly reduce costs?
A. Store the data in Apache Avro format using Snappy compression
B. Partition the data by year, month, and day
C. Store the data in Apache ORC format using no compression
D. Store the data in Apache Parquet format using Snappy compression
E. Partition the data by sensor, year, month, and day
Correct answer: B
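A sketch combining the date-based partitioning of option B with the Parquet/Snappy conversion of option D in a single Firehose delivery stream; the ARNs, bucket, and the Glue table used for the schema are hypothetical.

```python
import boto3

firehose = boto3.client("firehose", region_name="us-east-1")

firehose.create_delivery_stream(
    DeliveryStreamName="lake-water-quality",          # hypothetical name
    DeliveryStreamType="DirectPut",
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::123456789012:role/FirehoseDeliveryRole",
        "BucketARN": "arn:aws:s3:::example-bucket",
        # Dynamic prefix lays objects out as year=/month=/day= partitions for Athena.
        "Prefix": "readings/year=!{timestamp:yyyy}/month=!{timestamp:MM}/day=!{timestamp:dd}/",
        "ErrorOutputPrefix": "errors/!{firehose:error-output-type}/",
        "BufferingHints": {"IntervalInSeconds": 300, "SizeInMBs": 128},
        # Convert the incoming JSON readings to Snappy-compressed Parquet,
        # using a Glue Data Catalog table to supply the schema.
        "DataFormatConversionConfiguration": {
            "Enabled": True,
            "InputFormatConfiguration": {"Deserializer": {"OpenXJsonSerDe": {}}},
            "OutputFormatConfiguration": {"Serializer": {"ParquetSerDe": {"Compression": "SNAPPY"}}},
            "SchemaConfiguration": {
                "RoleARN": "arn:aws:iam::123456789012:role/FirehoseDeliveryRole",
                "DatabaseName": "water_quality",
                "TableName": "readings",
                "Region": "us-east-1",
            },
        },
    },
)
```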
Question #14
A healthcare company uses AWS data and analytics tools to collect, ingest, and store electronic health record (EHR) data about its patients. The raw EHR data is stored in Amazon S3 in JSON format partitioned by hour, day, and year and is updated every hour. The company wants to maintain the data catalog and metadata in an AWS Glue Data Catalog to be able to access the data using Amazon Athena or Amazon Redshift Spectrum for analytics. When defining tables in the Data Catalog, the company has specific requirements for how the tables are created and how their partitions are kept up to date. Which solution meets these requirements with minimal effort?
A. Run an AWS Glue crawler that connects to one or more data stores, determines the data structures, and writes tables in the Data Catalog
B. Use the AWS Glue console to manually create a table in the Data Catalog and schedule an AWS Lambda function to update the table partitions hourly
C. Use the AWS Glue API CreateTable operation to create a table in the Data Catalog
D. Create an AWS Glue crawler and specify the table as the source
E. Create an Apache Hive catalog in Amazon EMR with the table schema definition in Amazon S3, and update the table partition with a scheduled job
F. Migrate the Hive catalog to the Data Catalog
Correct answer: C
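A minimal sketch of option A with hypothetical names: a scheduled AWS Glue crawler infers the JSON schema, writes the table to the Data Catalog, and picks up the new hour/day/year partitions as they land. Option C differs only in that the table is first created explicitly with the Glue CreateTable API and the crawler is then pointed at that existing catalog table instead of at the S3 path.

```python
import boto3

glue = boto3.client("glue", region_name="us-east-1")

# The crawler connects to the raw S3 prefix, infers the JSON schema, writes the
# table to the Data Catalog, and registers new hour/day/year partitions hourly.
glue.create_crawler(
    Name="ehr-raw-crawler",                            # hypothetical names throughout
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",
    DatabaseName="ehr_raw",
    Targets={"S3Targets": [{"Path": "s3://example-bucket/ehr/raw/"}]},
    Schedule="cron(0 * * * ? *)",                      # top of every hour
)
glue.start_crawler(Name="ehr-raw-crawler")             # run once immediately
```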
Question #15
A social media company is using business intelligence tools to analyze data for forecasting. The company is using Apache Kafka to ingest data. The company wants to build dynamic dashboards that include machine learning (ML) insights to forecast key business trends. The dashboards must show recent batched data that is not more than 75 minutes old. Various teams at the company want to view the dashboards by using Amazon QuickSight with ML insights. Which solution will meet these requirements?
A. Replace Kafka with Amazon Managed Streaming for Apache Kafka (Amazon MSK)
B. Replace Kafka with an Amazon Kinesis data stream
C. Configure the Kafka-Kinesis-Connector to publish the data to an Amazon Kinesis Data Firehose delivery stream
D. Configure the Kafka-Kinesis-Connector to publish the data to an Amazon Kinesis Data Firehose delivery stream
Correct answer: C
Question #16
A transportation company uses IoT sensors attached to trucks to collect vehicle data for its global delivery fleet. The company currently sends the sensor data in small .csv files to Amazon S3. The files are then loaded into a 10-node Amazon Redshift cluster with two slices per node and queried using both Amazon Athena and Amazon Redshift. The company wants to optimize the files to reduce the cost of querying and also improve the speed of data loading into the Amazon Redshift cluster. Which solution meets these requirements?
A. Use AWS Glue to convert all the files from
B. Use Amazon EMR to convert each
C. Use AWS Glue to convert the files from
D. Use AWS Glue to convert the files from
Correct answer: D
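The option text above is truncated, but the usual optimization in this scenario is converting the many small .csv files into a columnar format; a hedged sketch assuming Apache Parquet with Snappy compression, pandas plus pyarrow and s3fs installed, and hypothetical bucket and key names.

```python
import pandas as pd

# Rewriting the small .csv objects as larger Snappy-compressed Parquet files
# reduces the bytes Athena scans per query and gives the Redshift COPY fewer,
# better-sized files to load in parallel across the cluster's slices.
df = pd.read_csv("s3://example-bucket/raw/trucks/2024-01-15.csv")

df.to_parquet(
    "s3://example-bucket/curated/trucks/2024-01-15.parquet",
    engine="pyarrow",
    compression="snappy",
    index=False,
)
```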
