Question #1
A company is planning to do a proof of concept for a machine learning (ML) project using Amazon SageMaker with a subset of existing on-premises data hosted in the company's 3 TB data warehouse. For part of the project, AWS Direct Connect is established and tested. To prepare the data for ML, data analysts are performing data curation. The data analysts want to perform multiple steps, including mapping, dropping null fields, resolving choice, and splitting fields. The company needs the fastest solution to curate this data. Which solution meets these requirements?
A. Ingest data into Amazon S3 using AWS DataSync and use Apache Spark scripts to curate the data in an Amazon EMR cluster. Store the curated data in Amazon S3 for ML processing.
B. Create custom ETL jobs on-premises to curate the data. Use AWS DMS to ingest data into Amazon S3 for ML processing.
C. Ingest data into Amazon S3 using AWS DMS. Use AWS Glue to perform data curation and store the data in Amazon S3 for ML processing.
Show Answer
Correct Answer: C
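The four curation steps named in the question correspond directly to AWS Glue's built-in DynamicFrame transforms: ApplyMapping, DropNullFields, ResolveChoice, and SplitFields. A minimal, library-free sketch of what each step does, using a hypothetical record layout (a real Glue job would apply the transforms above to DynamicFrames instead):

```python
# Local sketch of the four curation steps; the column names and records
# are hypothetical, chosen only to illustrate each transform.

records = [
    {"cust_id": "1001", "amount": "19.99", "region": None, "full_name": "Ada Lovelace"},
    {"cust_id": "1002", "amount": 25, "region": "eu-west-1", "full_name": "Alan Turing"},
]

# 1. Mapping (ApplyMapping): rename source columns to target names.
mapping = {"cust_id": "customer_id", "amount": "amount",
           "region": "region", "full_name": "full_name"}
mapped = [{mapping[k]: v for k, v in r.items()} for r in records]

# 2. Dropping null fields (DropNullFields): remove keys whose value is None.
no_nulls = [{k: v for k, v in r.items() if v is not None} for r in mapped]

# 3. Resolving choice (ResolveChoice): force a column whose type is
#    ambiguous (string in one record, int in another) to a single type.
for r in no_nulls:
    r["amount"] = float(r["amount"])

# 4. Splitting fields (SplitFields): carve a subset of columns into a
#    separate collection.
split_out = [{k: r[k] for k in ("customer_id", "amount")} for r in no_nulls]
print(split_out)
```

In a real job, AWS DMS lands the warehouse tables in Amazon S3 over the existing Direct Connect link, and a Glue job chains these transforms before writing the curated output back to S3.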
Question #2
A company wants to provide its data analysts with uninterrupted access to the data in its Amazon Redshift cluster. All data is streamed to an Amazon S3 bucket with Amazon Kinesis Data Firehose. An AWS Glue job that is scheduled to run every 5 minutes issues a COPY command to move the data into Amazon Redshift. The amount of data delivered is uneven throughout the day, and cluster utilization is high during certain periods. The COPY command usually completes within a couple of seconds. However, when load spikes occur, locks can exist and data can be missed. How should the AWS Glue job be configured so that no data is lost when load is high?
A. Increase the number of retries
B. Decrease the timeout value
C. Increase the job concurrency
D. Keep the number of retries at 0
Show Answer
Correct Answer: B
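The settings the answer choices refer to are ordinary AWS Glue job properties: MaxRetries, Timeout, and ExecutionProperty.MaxConcurrentRuns. A sketch of the JobUpdate payload that boto3's `update_job` accepts, with hypothetical job, role, and script names and the live call commented out (it needs AWS credentials):

```python
# JobUpdate payload for glue.update_job; names and ARNs are hypothetical.
job_update = {
    "Role": "arn:aws:iam::123456789012:role/GlueCopyRole",
    "Command": {"Name": "glueetl",
                "ScriptLocation": "s3://my-bucket/copy_job.py"},
    "MaxRetries": 0,       # a retry would re-issue an already-queued COPY
    "Timeout": 5,          # minutes: fail fast instead of waiting on locks
    "ExecutionProperty": {"MaxConcurrentRuns": 5},  # let 5-minute runs overlap
}

# import boto3
# boto3.client("glue").update_job(JobName="redshift-copy-job",
#                                 JobUpdate=job_update)
print(sorted(job_update))
```

A shorter timeout makes a blocked COPY fail quickly rather than hold the job slot, and allowing concurrent runs lets the next scheduled run pick the data up.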
السؤال #3
A data analytics specialist is setting up workload management in manual mode for an Amazon Redshift environment. The data analytics specialist is defining query monitoring rules to manage system performance and user experience of an Amazon Redshift cluster. Which elements must each query monitoring rule include?
A. A unique rule name, a query runtime condition, and an AWS Lambda function to resubmit any failed queries in off hours
B. A queue name, a unique rule name, and a predicate-based stop condition
C. A unique rule name, one to three predicates, and an action
D. A workload name, a unique rule name, and a query runtime-based condition
Show Answer
Correct Answer: C
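The three required elements — a unique rule name, one to three predicates, and an action (log, hop, or abort) — appear verbatim in the cluster's WLM JSON configuration. A sketch of one queue with one monitoring rule, using hypothetical thresholds:

```json
[
  {
    "query_group": [],
    "user_group": [],
    "query_concurrency": 5,
    "rules": [
      {
        "rule_name": "abort_long_scans",
        "predicate": [
          {"metric_name": "query_execution_time", "operator": ">", "value": 120},
          {"metric_name": "scan_row_count", "operator": ">", "value": 1000000000}
        ],
        "action": "abort"
      }
    ]
  },
  {"short_query_queue": true}
]
```

Lambda resubmission (option A) is not part of a query monitoring rule; the rule itself only names a queue's rule, its predicates, and the action to take when all predicates match.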
السؤال #4
A company has a marketing department and a finance department. The departments are storing data in Amazon S3 in their own AWS accounts in AWS Organizations. Both departments use AWS Lake Formation to catalog and secure their data. The departments have some databases and tables that share common names. The marketing department needs to securely access some tables from the finance department. Which two steps are required for this process? (Choose two.)
A. The finance department grants Lake Formation permissions for the tables to the external account for the marketing department
B. The finance department creates cross-account IAM permissions to the table for the marketing department role
C. The marketing department creates an IAM role that has permissions to the Lake Formation tables
Show Answer
Correct Answer: A, C
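Cross-account sharing in Lake Formation is done with Lake Formation permissions, not cross-account IAM policies: the finance account grants table permissions to the marketing account's ID, and the marketing account then accesses the table through a role that has Lake Formation permissions. A sketch of the finance-side grant as the payload for boto3's `grant_permissions` (account IDs, database, and table names are hypothetical; the call is commented out because it needs credentials in the finance account):

```python
# grant_permissions payload from the finance account; all identifiers
# below are hypothetical.
MARKETING_ACCOUNT = "222233334444"  # external (marketing) AWS account ID

grant = {
    "Principal": {"DataLakePrincipalIdentifier": MARKETING_ACCOUNT},
    "Resource": {
        "Table": {
            "CatalogId": "111122223333",   # finance account's Data Catalog
            "DatabaseName": "finance_db",
            "Name": "quarterly_spend",
        }
    },
    "Permissions": ["SELECT", "DESCRIBE"],
}

# import boto3
# boto3.client("lakeformation").grant_permissions(**grant)
print(grant["Principal"]["DataLakePrincipalIdentifier"])
```

Because both departments have databases and tables with common names, the marketing account would typically create a resource link with a distinct name to reference the shared table.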
السؤال #5
A company has an application that uses the Amazon Kinesis Client Library (KCL) to read records from a Kinesis data stream. After a successful marketing campaign, the application experienced a significant increase in usage. As a result, a data analyst had to split some shards in the data stream. When the shards were split, the application started throwing an ExpiredIteratorExceptions error sporadically. What should the data analyst do to resolve this?
A. Increase the number of threads that process the stream records
B. Increase the provisioned read capacity units assigned to the stream’s Amazon DynamoDB table
C. Increase the provisioned write capacity units assigned to the stream’s Amazon DynamoDB table
D. Decrease the provisioned write capacity units assigned to the stream’s Amazon DynamoDB table
Show Answer
Correct Answer: C
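The KCL checkpoints its progress in a DynamoDB lease table named after the application; splitting shards multiplies checkpoint writes, and when those writes are throttled, iterators expire before they are renewed. A sketch of the `update_table` payload that raises write capacity (table name and capacity numbers are hypothetical; the call is commented out because it needs AWS credentials):

```python
# update_table payload for the KCL lease table; name and numbers are
# hypothetical.
params = {
    "TableName": "my-kcl-application",  # KCL names the table after the app
    "ProvisionedThroughput": {
        "ReadCapacityUnits": 10,
        "WriteCapacityUnits": 50,  # more shards => more checkpoint writes
    },
}

# import boto3
# boto3.client("dynamodb").update_table(**params)
print(params["ProvisionedThroughput"]["WriteCapacityUnits"])
```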
Question #6
A technology company is creating a dashboard that will visualize and analyze time-sensitive data. The data will come in through Amazon Kinesis Data Firehose with the buffer interval set to 60 seconds. The dashboard must support near-real-time data. Which visualization solution will meet these requirements?
A. Select Amazon Elasticsearch Service (Amazon ES) as the endpoint for Kinesis Data Firehose. Set up a Kibana dashboard using the data in Amazon ES with the desired analyses and visualizations.
B. Select Amazon S3 as the endpoint for Kinesis Data Firehose. Read data into an Amazon SageMaker Jupyter notebook and carry out the desired analyses and visualizations.
C. Select Amazon Redshift as the endpoint for Kinesis Data Firehose. Connect Amazon QuickSight with SPICE to Amazon Redshift to create the desired analyses and visualizations.
Show Answer
Correct Answer: A
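The 60-second figure in the question is Firehose's buffering-hints interval on the Amazon ES destination. A sketch of that destination configuration as it would be passed to `create_delivery_stream` (stream, domain, index, and role names are hypothetical; the call is commented out because it needs AWS credentials):

```python
# ElasticsearchDestinationConfiguration sketch; ARNs and names below are
# hypothetical.
es_destination = {
    "RoleARN": "arn:aws:iam::123456789012:role/firehose-es-role",
    "DomainARN": "arn:aws:es:us-east-1:123456789012:domain/dashboards",
    "IndexName": "events",
    "BufferingHints": {
        "IntervalInSeconds": 60,  # flush to Amazon ES at most every 60 s
        "SizeInMBs": 5,
    },
}

# import boto3
# boto3.client("firehose").create_delivery_stream(
#     DeliveryStreamName="dashboard-stream",
#     ElasticsearchDestinationConfiguration=es_destination,
# )
print(es_destination["BufferingHints"]["IntervalInSeconds"])
```

With data indexed within about a minute of arrival, a Kibana dashboard on the Amazon ES domain refreshes in near real time, whereas the S3/notebook and Redshift/SPICE paths add batch-load latency.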
