
Question #1
A data analyst is using AWS Glue to organize, cleanse, validate, and format a 200 GB dataset. The data analyst triggered the job to run with the Standard worker type. After 3 hours, the AWS Glue job status is still RUNNING. Logs from the job run show no error codes. The data analyst wants to improve the job execution time without overprovisioning. Which actions should the data analyst take?
A. Enable job bookmarks in AWS Glue to estimate the number of data processing units (DPUs). Based on the profiled metrics, increase the value of the executor-cores job parameter.
B. Enable job metrics in AWS Glue to estimate the number of data processing units (DPUs). Based on the profiled metrics, increase the value of the maximum capacity job parameter.
C. Enable job metrics in AWS Glue to estimate the number of data processing units (DPUs). Based on the profiled metrics, increase the value of the spark.yarn.executor.memoryOverhead job parameter.
D. Enable job bookmarks in AWS Glue to estimate the number of data processing units (DPUs). Based on the profiled metrics, increase the value of the num-executors job parameter.
Correct Answer: B
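
A minimal boto3 sketch of answer B, with a hypothetical job name: "--enable-metrics" is a documented AWS Glue special job parameter, and once the profiled CloudWatch metrics suggest a capacity, a later run can be sized accordingly instead of overprovisioned.

    import boto3

    glue = boto3.client("glue")

    # Run the job with CloudWatch job metrics enabled ("--enable-metrics"
    # is a documented Glue special parameter; the job name is made up).
    glue.start_job_run(
        JobName="cleanse-200gb-dataset",          # hypothetical
        Arguments={"--enable-metrics": "true"},   # emit DPU/executor metrics
    )

    # After inspecting the glue.driver.* metrics in CloudWatch, rerun with
    # capacity estimated from the metrics rather than guessed:
    glue.start_job_run(
        JobName="cleanse-200gb-dataset",
        WorkerType="G.1X",      # example upgrade from the Standard worker
        NumberOfWorkers=10,     # value suggested by the profiled metrics
    )
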
Question #2
An ecommerce company is migrating its business intelligence environment from on premises to the AWS Cloud. The company will use Amazon Redshift in a public subnet and Amazon QuickSight. The tables are already loaded into Amazon Redshift and can be accessed by a SQL tool. The company starts QuickSight for the first time. During the creation of the data source, a data analytics specialist enters all the information and tries to validate the connection. An error with the following message occurs: “Creating a connection to your data source timed out.” How should the data analytics specialist resolve this error?
A. Grant the SELECT permission on Amazon Redshift tables
B. Add the QuickSight IP address range into the Amazon Redshift security group
C. Create an IAM role for QuickSight to access Amazon Redshift
D. Use a QuickSight admin user for creating the dataset
Correct Answer: C
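
Whichever option the key favors, QuickSight first needs network reachability to the cluster before any permission setting matters. A hedged sketch of option B's security-group change: the group ID is hypothetical, and 52.23.63.224/27 is the published QuickSight IP range for us-east-1 (each region has its own range).

    import boto3

    # Open the Redshift port to QuickSight's IP range (us-east-1 shown;
    # the security group ID is hypothetical).
    boto3.client("ec2").authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 5439,   # default Amazon Redshift port
            "ToPort": 5439,
            "IpRanges": [{
                "CidrIp": "52.23.63.224/27",
                "Description": "Amazon QuickSight (us-east-1)",
            }],
        }],
    )
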
Question #3
A financial company uses Amazon S3 as its data lake and has set up a data warehouse using a multi-node Amazon Redshift cluster. The data files in the data lake are organized in folders based on the data source of each data file. All the data files are loaded to one table in the Amazon Redshift cluster using a separate COPY command for each data file location. With this approach, loading all the data files into Amazon Redshift takes a long time to complete. Users want a faster solution with little or no increase in cost while maintaining the segregation of the data files in the S3 data lake. Which solution meets these requirements?
A. Use Amazon EMR to copy all the data files into one folder and issue a COPY command to load the data into Amazon Redshift
B. Load all the data files in parallel to Amazon Aurora, and run an AWS Glue job to load the data into Amazon Redshift
C. Use an AWS Glue job to copy all the data files into one folder and issue a COPY command to load the data into Amazon Redshift
D. Create a manifest file that contains the data file locations and issue a COPY command to load the data into Amazon Redshift
Correct Answer: D
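
A sketch of answer D, with hypothetical bucket, role, and table names: a manifest lists every file location, and a single COPY then loads all of them in parallel across the cluster's slices without moving the files out of their source folders.

    import json
    import boto3

    # Build and upload a manifest pointing at the per-source folders.
    manifest = {
        "entries": [
            {"url": "s3://example-lake/source-a/part-0000.gz", "mandatory": True},
            {"url": "s3://example-lake/source-b/part-0000.gz", "mandatory": True},
        ]
    }
    boto3.client("s3").put_object(
        Bucket="example-lake",
        Key="manifests/load.manifest",
        Body=json.dumps(manifest),
    )

    # One COPY command with the MANIFEST keyword loads every listed file.
    boto3.client("redshift-data").execute_statement(
        ClusterIdentifier="example-cluster",   # hypothetical
        Database="dev",
        DbUser="awsuser",
        Sql="""
            COPY sales FROM 's3://example-lake/manifests/load.manifest'
            IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
            MANIFEST GZIP;
        """,
    )
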
Question #4
A large telecommunications company is planning to set up a data catalog and metadata management for multiple data sources running on AWS. The catalog will be used to maintain the metadata of all the objects stored in the data stores. The data stores are composed of structured sources like Amazon RDS and Amazon Redshift, and semistructured sources like JSON and XML files stored in Amazon S3. The catalog must be updated on a regular basis, be able to detect the changes to object metadata, and require the least amount of effort. Which solution should the company use?
A. Use Amazon Aurora as the data catalog
B. Create AWS Lambda functions that will connect and gather the metadata information from multiple sources and update the data catalog in Aurora
C. Schedule the Lambda functions periodically
D. Use the AWS Glue Data Catalog as the central metadata repository
E. Use AWS Glue crawlers to connect to multiple data stores and update the Data Catalog with metadata changes
F. Schedule the crawlers periodically to update the metadata catalog
Correct Answer: D
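
A sketch of the crawler setup behind options D–F; the crawler name, role, paths, connection, and schedule are all hypothetical. A scheduled Glue crawler re-scans the stores and updates the Data Catalog when object metadata changes, with no custom code to maintain.

    import boto3

    boto3.client("glue").create_crawler(
        Name="lake-metadata-crawler",                              # hypothetical
        Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",     # hypothetical
        DatabaseName="data_lake_catalog",
        Targets={
            "S3Targets": [
                {"Path": "s3://example-lake/json/"},
                {"Path": "s3://example-lake/xml/"},
            ],
            "JdbcTargets": [
                {"ConnectionName": "rds-connection", "Path": "sales_db/%"},
            ],
        },
        Schedule="cron(0 2 * * ? *)",   # nightly metadata refresh
        SchemaChangePolicy={
            "UpdateBehavior": "UPDATE_IN_DATABASE",  # pick up metadata changes
            "DeleteBehavior": "LOG",
        },
    )
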
Question #5
A large ride-sharing company has thousands of drivers globally serving millions of unique customers every day. The company has decided to migrate an existing data mart to Amazon Redshift. The existing schema includes the following tables: a trips fact table for information on completed rides, a drivers dimension table for driver profiles, and a customers fact table holding customer profile information. The company analyzes trip details by date and destination to examine profitability by region. The drivers data rarely changes. The customers data frequently changes. What table design provides optimal query performance?
A. Use DISTSTYLE KEY (destination) for the trips table and sort by date
B. Use DISTSTYLE ALL for the drivers and customers tables
C. Use DISTSTYLE EVEN for the trips table and sort by date
D. Use DISTSTYLE ALL for the drivers table
E. Use DISTSTYLE KEY (destination) for the trips table and sort by date
F. Use DISTSTYLE ALL for the drivers table
G. …
Correct Answer: B
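
Hypothetical DDL illustrating the distribution choices these options describe: a small, rarely changing dimension table replicated to every node with DISTSTYLE ALL, and the large trips fact table distributed on the destination join/filter column and sorted by date. Cluster, database, and column names are made up.

    import boto3

    boto3.client("redshift-data").batch_execute_statement(
        ClusterIdentifier="example-cluster",
        Database="dev",
        DbUser="awsuser",
        Sqls=[
            # Small dimension: copy to every node so joins stay local.
            """CREATE TABLE drivers (
                   driver_id BIGINT,
                   profile   VARCHAR(256)
               ) DISTSTYLE ALL;""",
            # Large fact table: distribute on the join column, sort by date.
            """CREATE TABLE trips (
                   trip_id     BIGINT,
                   driver_id   BIGINT,
                   destination VARCHAR(64),
                   trip_date   DATE
               ) DISTKEY (destination) SORTKEY (trip_date);""",
        ],
    )
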
Question #6
A data analyst is using Amazon QuickSight for data visualization across multiple datasets generated by applications. Each application stores files within a separate Amazon S3 bucket. AWS Glue Data Catalog is used as a central catalog across all application data in Amazon S3. A new application stores its data within a separate S3 bucket. After updating the catalog to include the new application data source, the data analyst created a new Amazon QuickSight data source from an Amazon Athena table, but the import into SPICE failed. How should the data analyst resolve the issue?
A. Edit the permissions for the AWS Glue Data Catalog from within the Amazon QuickSight console
B. Edit the permissions for the new S3 bucket from within the Amazon QuickSight console
C. Edit the permissions for the AWS Glue Data Catalog from within the AWS Glue console
D. Edit the permissions for the new S3 bucket from within the S3 console
Correct Answer: B
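
Answer B is a console-side setting (QuickSight's own S3 access list), so there is no direct API call to show. A quick way to confirm the failure lies on the QuickSight side rather than in the catalog or bucket is to run the same table through Athena directly; all names and the output location below are hypothetical.

    import boto3

    # If this succeeds, Athena and the Glue Data Catalog can read the new
    # bucket, which points the SPICE import failure at QuickSight's own
    # S3 permissions (option B).
    boto3.client("athena").start_query_execution(
        QueryString="SELECT * FROM new_app_table LIMIT 10",
        QueryExecutionContext={"Database": "app_catalog"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )
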
Question #7
A company wants to enrich application logs in near-real-time and use the enriched dataset for further analysis. The application is running on Amazon EC2 instances across multiple Availability Zones and storing its logs using Amazon CloudWatch Logs. The enrichment source is stored in an Amazon DynamoDB table. Which solution meets the requirements for the event collection and enrichment?
A. Use a CloudWatch Logs subscription to send the data to Amazon Kinesis Data Firehose
B. Use AWS Lambda to transform the data in the Kinesis Data Firehose delivery stream and enrich it with the data in the DynamoDB table
C. Configure Amazon S3 as the Kinesis Data Firehose delivery destination
D. Export the raw logs to Amazon S3 on an hourly basis using the AWS CLI
E. Use AWS Glue crawlers to catalog the logs
F. Set up an AWS Glue connection for the DynamoDB table and set up an AWS Glue ETL job to enrich the data
G. …
Correct Answer: A
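
A minimal sketch of the Firehose transformation Lambda from answer A. The DynamoDB table name and lookup key are hypothetical; what is standard is the envelope: CloudWatch Logs subscription data arrives in each Firehose record base64-encoded and gzip-compressed, and the function must return records marked "Ok", "Dropped", or "ProcessingFailed".

    import base64
    import gzip
    import json

    import boto3

    # Hypothetical enrichment table keyed by instance/stream ID.
    table = boto3.resource("dynamodb").Table("enrichment-source")

    def handler(event, context):
        output = []
        for record in event["records"]:
            # CloudWatch Logs payloads are gzipped JSON inside base64.
            payload = json.loads(
                gzip.decompress(base64.b64decode(record["data"]))
            )
            # Pass through control messages untouched.
            if payload.get("messageType") != "DATA_MESSAGE":
                output.append({"recordId": record["recordId"],
                               "result": "Dropped",
                               "data": record["data"]})
                continue
            # One lookup per record; the key choice is an assumption.
            item = table.get_item(
                Key={"instance_id": payload["logStream"]}
            ).get("Item", {})
            enriched = [
                {"message": e["message"], **item}
                for e in payload["logEvents"]
            ]
            output.append({
                "recordId": record["recordId"],
                "result": "Ok",
                "data": base64.b64encode(
                    (json.dumps(enriched, default=str) + "\n").encode()
                ).decode(),
            })
        return {"records": output}
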
Question #8
A company uses Amazon Redshift as its data warehouse. A new table has columns that contain sensitive data. The data in the table will eventually be referenced by several existing queries that run many times a day. A data analyst needs to load 100 billion rows of data into the new table. Before doing so, the data analyst must ensure that only members of the auditing group can read the columns containing sensitive data. How can the data analyst meet these requirements with the lowest maintenance overhead?
A. Load all the data into the new table and grant the auditing group permission to read from the table
B. Load all the data except for the columns containing sensitive data into a second table
C. Grant the appropriate users read-only permissions to the second table
D. Load all the data into the new table and grant the auditing group permission to read from the table
E. Use the GRANT SQL command to allow read-only access to a subset of columns to the appropriate users
F. Load all the data into the new table and grant all users read-only permissions to non-sensitive columns
Correct Answer: C
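
Whichever grouping of these fragments the key intends, the underlying mechanics are Redshift GRANT statements. A sketch with hypothetical table, column, and group names: Redshift supports column-level grants, so the non-sensitive columns can be opened to general users while the full table stays readable only by the auditing group.

    import boto3

    boto3.client("redshift-data").batch_execute_statement(
        ClusterIdentifier="example-cluster",   # hypothetical
        Database="dev",
        DbUser="awsuser",
        Sqls=[
            # Column-level grant on the non-sensitive columns only.
            "GRANT SELECT (txn_id, txn_amount) ON transactions TO GROUP analysts;",
            # Full-table read access reserved for the auditing group.
            "GRANT SELECT ON transactions TO GROUP auditing;",
        ],
    )
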
