
Question #1
A company is planning to create a data lake in Amazon S3. The company wants to create tiered storage based on access patterns and cost objectives. The solution must include support for JDBC connections from legacy clients, metadata management that allows federation for access control, and batch-based ETL using PySpark and Scala. Operational management should be limited. Which combination of components can meet these requirements? (Choose three.)
A. AWS Glue Data Catalog for metadata management
B. Amazon EMR with Apache Spark for ETL
C. AWS Glue for Scala-based ETL
D. Amazon EMR with Apache Hive for JDBC clients
E. Amazon Athena for querying data in Amazon S3 using JDBC drivers
F. Amazon EMR with Apache Hive, using an Amazon RDS MySQL-compatible database as the backing metastore
Correct Answer: A
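To ground option E, here is a minimal boto3 sketch of querying the S3 data lake through Athena (the Athena JDBC driver exposes the same engine to the legacy clients); the database name and results bucket below are hypothetical:

```python
import boto3

athena = boto3.client("athena")

# Run a SQL query against a Glue Data Catalog table that fronts the S3 data lake.
response = athena.start_query_execution(
    QueryString="SELECT * FROM listings LIMIT 10",
    QueryExecutionContext={"Database": "datalake_db"},  # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://athena-results-example/"},  # hypothetical bucket
)
print(response["QueryExecutionId"])
```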
Question #2
A company has developed several AWS Glue jobs to validate and transform its data from Amazon S3 and load it into Amazon RDS for MySQL in batches once every day. The ETL jobs read the S3 data using a DynamicFrame. Currently, the ETL developers are experiencing challenges in processing only the incremental data on every run, as the AWS Glue job processes all the S3 input data on each run. Which approach would allow the developers to solve the issue with minimal coding effort?
A. Have the ETL jobs read the data from Amazon S3 using a DataFrame
B. Enable job bookmarks on the AWS Glue jobs
C. Create custom logic on the ETL jobs to track the processed S3 objects
D. Have the ETL jobs delete the processed objects or data from Amazon S3 after each run
Correct Answer: B
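Option B hinges on two things: job bookmarks enabled on the job (the --job-bookmark-option job-bookmark-enable argument) and a transformation_ctx on each read, which is the key the bookmark uses to track processed input. A minimal PySpark sketch, with hypothetical database and table names:

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# The bookmark remembers which S3 objects this transformation_ctx already
# read, so each run only sees the incremental data.
orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db",         # hypothetical catalog database
    table_name="daily_orders",   # hypothetical table over the S3 input
    transformation_ctx="read_orders",
)

# ... validate, transform, and write to Amazon RDS for MySQL ...

job.commit()  # persists the bookmark state for the next run
```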
Question #3
Once a month, a company receives a 100 MB .csv file compressed with gzip. The file contains 50,000 property listing records and is stored in Amazon S3 Glacier. The company needs its data analyst to query a subset of the data for a specific vendor. What is the most cost-effective solution?
A. Load the data into Amazon S3 and query it with Amazon S3 Select
B. Query the data from Amazon S3 Glacier directly with Amazon Glacier Select
C. Load the data to Amazon S3 and query it with Amazon Athena
D. Load the data to Amazon S3 and query it with Amazon Redshift Spectrum
Correct Answer: B
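Answer B relies on the Select family of APIs, which push a SQL filter down to the storage layer so only matching rows are returned. A minimal S3 Select sketch with boto3 shows the idea (Glacier Select exposes similar SelectParameters through a Glacier retrieval job); the bucket, key, and column name are hypothetical:

```python
import boto3

s3 = boto3.client("s3")

# Push the vendor filter down to S3 so only matching rows leave storage.
response = s3.select_object_content(
    Bucket="property-listings",   # hypothetical bucket
    Key="listings.csv.gz",        # hypothetical key
    ExpressionType="SQL",
    Expression="SELECT * FROM s3object s WHERE s.vendor_id = 'V123'",
    InputSerialization={"CSV": {"FileHeaderInfo": "USE"}, "CompressionType": "GZIP"},
    OutputSerialization={"CSV": {}},
)

for event in response["Payload"]:
    if "Records" in event:
        print(event["Records"]["Payload"].decode())
```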
Question #4
A company uses Amazon Redshift for its data warehousing needs. ETL jobs run every night to load data, apply business rules, and create aggregate tables for reporting. The company's data analysis, data science, and business intelligence teams use the data warehouse during regular business hours. The workload management is set to auto, and separate queues exist for each team with the priority set to NORMAL. Recently, a sudden spike of read queries from the data analysis team has occurred at least twice daily, and queries wait in the queue during these spikes. Which solution would address this issue in the most cost-effective manner?
A. Increase the query priority to HIGHEST for the data analysis queue
B. Configure the data analysis queue to enable concurrency scaling
C. Create a query monitoring rule to add more cluster capacity for the data analysis queue when queries are waiting for resources
D. Use workload management query queue hopping to route the query to the next matching queue
Correct Answer: C
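Both concurrency scaling (option B) and query monitoring rules (option C) are expressed in the cluster's wlm_json_configuration parameter. A rough sketch of enabling concurrency scaling for the data analysis queue, assuming auto WLM and a hypothetical custom parameter group and queue layout:

```python
import json

import boto3

redshift = boto3.client("redshift")

# Hypothetical auto WLM layout: the data analysis queue bursts onto
# transient concurrency scaling clusters during read spikes.
wlm = [
    {"user_group": ["data_analysis"], "priority": "normal",
     "concurrency_scaling": "auto"},
    {"auto_wlm": True},
]

redshift.modify_cluster_parameter_group(
    ParameterGroupName="custom-wlm",  # hypothetical parameter group
    Parameters=[{
        "ParameterName": "wlm_json_configuration",
        "ParameterValue": json.dumps(wlm),
    }],
)
```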
Question #5
An advertising company has a data lake that is built on Amazon S3. The company uses the AWS Glue Data Catalog to maintain the metadata. The data lake is several years old, and its overall size has increased exponentially as additional data sources and metadata are stored in the data lake. The data lake administrator wants to implement a mechanism to simplify permissions management between Amazon S3 and the Data Catalog to keep them in sync. Which solution will simplify permissions management with minimal development effort?
A. Set AWS Identity and Access Management (IAM) permissions for AWS Glue
B. Use AWS Lake Formation permissions
C. Manage AWS Glue and S3 permissions by using bucket policies
D. Use Amazon Cognito user pools
Correct Answer: AC
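A minimal sketch of the Lake Formation model from option B: permissions are granted once against the Data Catalog resource, and Lake Formation enforces the corresponding S3 access, which keeps the two layers in sync. The role ARN, database, and table names are hypothetical:

```python
import boto3

lakeformation = boto3.client("lakeformation")

# Grant SELECT once against the catalog table; Lake Formation enforces the
# matching S3 object access for the same principal.
lakeformation.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": "arn:aws:iam::111122223333:role/analyst"},  # hypothetical role
    Resource={"Table": {"DatabaseName": "datalake_db", "Name": "impressions"}},  # hypothetical table
    Permissions=["SELECT"],
)
```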
Question #6
A streaming application is reading data from Amazon Kinesis Data Streams and immediately writing the data to an Amazon S3 bucket every 10 seconds. The application is reading data from hundreds of shards. The batch interval cannot be changed due to a separate requirement. The data is being accessed by Amazon Athena. Users are seeing degradation in query performance as time progresses. Which action can help improve query performance?
A. Merge the files in Amazon S3 to form larger files
B. Increase the number of shards in Kinesis Data Streams
C. Add more memory and CPU capacity to the streaming application
D. Write the files to multiple S3 buckets
Correct Answer: D
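Option A is typically implemented as a periodic compaction job. A PySpark sketch, assuming JSON records and hypothetical S3 prefixes, that rewrites a day's worth of 10-second micro-batch files into a handful of larger objects:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("compact-stream-output").getOrCreate()

# Read the many tiny batch files and rewrite them as a few larger objects
# that Athena can scan efficiently.
small_files = spark.read.json("s3://stream-sink/2024/01/01/")  # hypothetical prefix
small_files.coalesce(8).write.mode("overwrite").parquet(
    "s3://stream-sink-compacted/dt=2024-01-01/"  # hypothetical target
)
```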
Question #7
An online retail company uses Amazon Redshift to store historical sales transactions. The company is required to encrypt data at rest in the clusters to comply with the Payment Card Industry Data Security Standard (PCI DSS). A corporate governance policy mandates management of encryption keys using an on-premises hardware security module (HSM). Which solution meets these requirements?
A. Create and manage encryption keys using AWS CloudHSM Classic.
B. Launch an Amazon Redshift cluster in a VPC with the option to use CloudHSM Classic for key management.
C. Create a VPC and establish a VPN connection between the VPC and the on-premises network.
D. Create an HSM connection and client certificate for the on-premises HSM.
E. Launch a cluster in the VPC with the option to use the on-premises HSM to store keys.
F. Create an HSM connection and client certificate for the on-premises HSM.
Correct Answer: A
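For orientation, the on-premises HSM route maps onto three Redshift API calls: register a client certificate, register the HSM connection details, and launch an encrypted cluster that references both. A boto3 sketch with hypothetical identifiers and placeholder values:

```python
import boto3

redshift = boto3.client("redshift")

# 1. Client certificate the cluster presents to the on-premises HSM.
redshift.create_hsm_client_certificate(
    HsmClientCertificateIdentifier="onprem-hsm-cert"  # hypothetical
)

# 2. Connection details for the on-premises HSM (placeholder values).
redshift.create_hsm_configuration(
    HsmConfigurationIdentifier="onprem-hsm",
    Description="On-premises HSM for PCI DSS key management",
    HsmIpAddress="10.0.0.10",
    HsmPartitionName="redshift",
    HsmPartitionPassword="example-partition-password",
    HsmServerPublicCertificate="-----BEGIN CERTIFICATE-----...",
)

# 3. Encrypted cluster that stores its keys in that HSM.
redshift.create_cluster(
    ClusterIdentifier="sales-dw",  # hypothetical cluster
    NodeType="ra3.xlplus",
    MasterUsername="admin",
    MasterUserPassword="Example-Passw0rd-1",
    Encrypted=True,
    HsmClientCertificateIdentifier="onprem-hsm-cert",
    HsmConfigurationIdentifier="onprem-hsm",
)
```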
Question #8
An airline has been collecting metrics on flight activities for analytics. A recently completed proof of concept demonstrates how the company provides insights to data analysts to improve on-time departures. The proof of concept used objects in Amazon S3, which contained the metrics in .csv format, and used Amazon Athena for querying the data. As the amount of data increases, the data analyst wants to optimize the storage solution to improve query performance. Which options should the data analyst use to improve performance as the data lake grows? (Choose two.)
A. Add a randomized string to the beginning of the keys in S3 to get more throughput across partitions
B. Use an S3 bucket in the same account as Athena
C. Compress the objects to reduce the data transfer I/O
D. Use an S3 bucket in the same Region as Athena
E. Preprocess the .csv data to JSON format by fetching only the document keys needed by the query
F. Preprocess the .csv data to Apache Parquet format by fetching only the data blocks needed for predicates
Correct Answer: BC
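Options E and F describe preprocessing the raw .csv data; one common way to do that from Athena itself is a CTAS statement that rewrites the table as partitioned Parquet, cutting the bytes scanned per query. A sketch with hypothetical database, table, and bucket names:

```python
import boto3

athena = boto3.client("athena")

# CTAS rewrites the raw .csv table as partitioned Parquet so later queries
# scan far less data.
athena.start_query_execution(
    QueryString="""
        CREATE TABLE flights_parquet
        WITH (format = 'PARQUET',
              external_location = 's3://airline-metrics/parquet/',
              partitioned_by = ARRAY['flight_date'])
        AS SELECT * FROM flights_csv
    """,
    QueryExecutionContext={"Database": "airline_db"},  # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://airline-metrics/athena-results/"},
)
```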
Question #9
An online retail company is migrating its reporting system to AWS. The company’s legacy system runs data processing on online transactions using a complex series of nested Apache Hive queries. Transactional data is exported from the online system to the reporting system several times a day. Schemas in the files are stable between updates. A data analyst wants to quickly migrate the data processing to AWS, so any code changes should be minimized. To keep storage costs low, the data analyst decides to store the data in Amazon S3. Which solution meets these requirements?
A. Create an AWS Glue Data Catalog to manage the Hive metadata.
B. Create an AWS Glue crawler over Amazon S3 that runs when data is refreshed to ensure that data changes are updated.
C. Create an Amazon EMR cluster and use the metadata in the AWS Glue Data Catalog to run Hive processing queries in Amazon EMR.
D. Create an AWS Glue Data Catalog to manage the Hive metadata.
E. Create an Amazon EMR cluster with consistent view enabled.
F. Run emrfs sync before each analytics step to ensure data changes are updated.
Correct Answer: BD
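The Glue Data Catalog becomes the Hive metastore through a single hive-site property on the EMR cluster, which is what lets the existing nested Hive queries run unchanged. A boto3 sketch with hypothetical names, roles, and sizing:

```python
import boto3

emr = boto3.client("emr")

# Point Hive at the Glue Data Catalog as its metastore.
emr.run_job_flow(
    Name="reporting-hive",  # hypothetical cluster name
    ReleaseLabel="emr-6.10.0",
    Applications=[{"Name": "Hive"}],
    Configurations=[{
        "Classification": "hive-site",
        "Properties": {
            "hive.metastore.client.factory.class":
                "com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory"
        },
    }],
    Instances={
        "MasterInstanceType": "m5.xlarge",
        "SlaveInstanceType": "m5.xlarge",
        "InstanceCount": 3,
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
```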
Question #10
A transportation company uses IoT sensors attached to trucks to collect vehicle data for its global delivery fleet. The company currently sends the sensor data in small .csv files to Amazon S3. The files are then loaded into a 10-node Amazon Redshift cluster with two slices per node and queried using both Amazon Athena and Amazon Redshift. The company wants to optimize the files to reduce the cost of querying and also improve the speed of data loading into the Amazon Redshift cluster. Which solution meets these requirements?
A. Use AWS Glue to convert all the files from .csv to a single large Apache Parquet file.
B. COPY the file into Amazon Redshift and query the file with Athena from Amazon S3.
C. Use Amazon EMR to convert each .csv file to Apache Avro.
D. COPY the files into Amazon Redshift and query the file with Athena from Amazon S3.
E. Use AWS Glue to convert the files from .csv to a single large Apache ORC file.
F. COPY the file into Amazon Redshift and query the file with Athena from Amazon S3.
Correct Answer: B
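Worth noting the arithmetic behind the cluster sizing: 10 nodes x 2 slices = 20 slices, and a Redshift COPY loads fastest when the input is split into a multiple of the slice count. A PySpark sketch of the conversion, with hypothetical S3 paths:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("csv-to-parquet").getOrCreate()

df = spark.read.option("header", "true").csv("s3://fleet-data/raw/")  # hypothetical path

# 20 similarly sized files let a COPY feed all 20 slices in parallel, and
# columnar Parquet cuts the bytes Athena scans per query.
df.repartition(20).write.mode("overwrite").parquet("s3://fleet-data/parquet/")
```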
Question #11
A company is building a service to monitor fleets of vehicles. The company collects IoT data from a device in each vehicle and loads the data into Amazon Redshift in near-real time. Fleet owners upload .csv files containing vehicle reference data into Amazon S3 at different times throughout the day. A nightly process loads the vehicle reference data from Amazon S3 into Amazon Redshift. The company joins the IoT data from the device and the vehicle reference data to power reporting and dashboards. Fleet owners want the reference data to be available for reporting as soon as possible after it is uploaded. Which solution meets these requirements?
A. Use S3 event notifications to trigger an AWS Lambda function to copy the vehicle reference data into Amazon Redshift immediately when the reference data is uploaded to Amazon S3
B. Create and schedule an AWS Glue Spark job to run every 5 minutes.
C. The job inserts reference data into Amazon Redshift.
D. Send reference data to Amazon Kinesis Data Streams.
E. Configure the Kinesis data stream to directly load the reference data into Amazon Redshift in real time.
F. Send the reference data to an Amazon Kinesis Data Firehose delivery stream.
Correct Answer: D
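To make option A concrete: an S3 event notification triggers a Lambda function that issues the COPY for the newly uploaded file. A sketch of such a handler using the Redshift Data API; the cluster, database, user, table, and IAM role names are hypothetical:

```python
import boto3

redshift_data = boto3.client("redshift-data")

def handler(event, context):
    # Fired by the S3 ObjectCreated event for a newly uploaded reference file.
    s3_info = event["Records"][0]["s3"]
    source = f"s3://{s3_info['bucket']['name']}/{s3_info['object']['key']}"

    # COPY the new file straight into the reference table.
    redshift_data.execute_statement(
        ClusterIdentifier="fleet-dw",  # hypothetical cluster
        Database="fleet",
        DbUser="loader",
        Sql=(f"COPY vehicle_reference FROM '{source}' "
             "IAM_ROLE 'arn:aws:iam::111122223333:role/redshift-copy' CSV;"),
    )
```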
Question #12
A company analyzes historical data and needs to query data that is stored in Amazon S3. New data is generated daily as .csv files that are stored in Amazon S3. The company’s analysts are using Amazon Athena to perform SQL queries against a recent subset of the overall data. The amount of data that is ingested into Amazon S3 has increased substantially over time, and the query latency also has increased. Which solutions could the company implement to improve query performance? (Choose two.)
A. Use MySQL Workbench on an Amazon EC2 instance, and connect to Athena by using a JDBC or ODBC connector.
B. Run the query from MySQL Workbench instead of Athena directly.
C. Use Athena to extract the data and store it in Apache Parquet format on a daily basis.
D. Query the extracted data.
E. Run a daily AWS Glue ETL job to convert the data files to Apache Parquet and to partition the converted files.
F. Create a periodic AWS Glue crawler to automatically crawl the partitioned data on a daily basis
Correct Answer: B
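The nightly conversion in option E usually writes Parquet partitioned on a date key, so Athena can prune down to the recent subset the analysts actually query. A PySpark sketch with hypothetical paths and a fixed example date:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-parquet").getOrCreate()

# Hypothetical paths; the job runs once per day over that day's .csv drop.
daily = spark.read.option("header", "true").csv("s3://analytics-raw/2024-01-01/")

# Partitioning on the ingest date lets Athena prune partitions instead of
# scanning the whole, ever-growing dataset.
(daily.withColumn("dt", F.lit("2024-01-01"))
      .write.mode("append")
      .partitionBy("dt")
      .parquet("s3://analytics-curated/"))
```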
Question #13
A media analytics company consumes a stream of social media posts. The posts are sent to an Amazon Kinesis data stream partitioned on user_id. An AWS Lambda function retrieves the records and validates the content before loading the posts into an Amazon Elasticsearch cluster. The validation process needs to receive the posts for a given user in the order they were received. A data analyst has noticed that, during peak hours, the social media platform posts take more than an hour to appear in the Elasticsearch cluster. What should the data analyst do to reduce this latency?
A. Migrate the validation process to Amazon Kinesis Data Firehose
B. Migrate the Lambda consumers from standard data stream iterators to an HTTP/2 stream consumer
C. Increase the number of shards in the stream
D. Configure multiple Lambda functions to process the stream
Correct Answer: D
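One knob relevant to options C and D is the event source mapping between the stream and Lambda: a parallelization factor fans out up to 10 concurrent invocations per shard, while records that share a partition key (user_id here) are still processed in order. A boto3 sketch with a hypothetical stream ARN and function name:

```python
import boto3

lam = boto3.client("lambda")

# Up to 10 concurrent Lambda invocations per shard; per-partition-key
# ordering (user_id) is preserved within each invocation series.
lam.create_event_source_mapping(
    EventSourceArn="arn:aws:kinesis:us-east-1:111122223333:stream/posts",  # hypothetical
    FunctionName="validate-posts",  # hypothetical
    StartingPosition="LATEST",
    BatchSize=100,
    ParallelizationFactor=10,
)
```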
