Category
ACA AWS Cloud General

Cloud Architecture Practice Questions (1/2)

Question 1

A company has deployed Amazon RedShift for performing analytics on user data. When using Amazon RedShift, which of the following statements are correct in relation to availability and durability? (choose 2)

Correct Answer

RedShift always keeps three copies of your data

Single-node clusters support data replication

Correct Answer

RedShift provides continuous/incremental backups

RedShift always keeps five copies of your data

Manual backups are automatically deleted when you delete a cluster

• RedShift always keeps three copies of your data and provides continuous/incremental backups
• Single-node clusters do not support data replication
• Manual backups are not automatically deleted when you delete a cluster

Question 2

You are a Solutions Architect. A client from a large multinational corporation is working on a deployment of a significant amount of resources into AWS. The client would like to be able to deploy resources across multiple AWS accounts and regions using a single toolset and template. Which toolset would you suggest to provide this functionality?

This cannot be done, use separate CloudFormation templates per AWS account and region

Correct Answer

Use a CloudFormation StackSet and specify the target accounts and regions in which the stacks will be created

Use a CloudFormation template that creates a stack and specify the logical IDs of each account and region

Use a third-party product such as Terraform that has support for multiple AWS accounts and regions

AWS CloudFormation StackSets extends the functionality of stacks by enabling you to create, update, or delete stacks across multiple accounts and regions with a single operation
• Using an administrator account, you define and manage an AWS CloudFormation template, and use the template as the basis for provisioning stacks into selected target accounts across specified regions. An administrator account is the AWS account in which you create stack sets
• A stack set is managed by signing in to the AWS administrator account in which it was created. A target account is the account into which you create, update, or delete one or more stacks in your stack set
• Before you can use a stack set to create stacks in a target account, you must set up a trust relationship between the administrator and target accounts
• A regular CloudFormation template cannot be used across regions and accounts. You would need to create copies of the template and then manage updates
• You do not need to use a third-party product such as Terraform as this functionality can be delivered through native AWS technology
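As an illustration, deploying a stack set across accounts and regions can be scripted with the AWS SDK for Python (boto3). This is a minimal sketch, assuming the StackSets administration/execution roles already exist; the template file, account IDs, and regions are placeholders.

    import boto3

    cfn = boto3.client('cloudformation', region_name='us-east-1')

    # Create the stack set in the administrator account (template body is a placeholder)
    cfn.create_stack_set(
        StackSetName='baseline-security',
        TemplateBody=open('baseline.yaml').read(),
        Capabilities=['CAPABILITY_NAMED_IAM'],
    )

    # Provision stack instances into the target accounts and regions in a single operation
    cfn.create_stack_instances(
        StackSetName='baseline-security',
        Accounts=['111111111111', '222222222222'],   # placeholder target account IDs
        Regions=['us-east-1', 'eu-west-1'],
    )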

Question 3

A new Big Data application you are developing will use hundreds of EC2 instances to write data to a shared file system. The file system must be stored redundantly across multiple AZs within a region and allow the EC2 instances to concurrently access the file system. The required throughput is multiple GB per second.
From the options presented which storage solution can deliver these requirements?

Amazon Storage Gateway

Correct Answer

Amazon EFS

Amazon S3

Amazon EBS using multiple volumes in a RAID 0 configuration

• Amazon EFS is the best solution as it is the only solution that is a file-level storage solution (not block/object-based), stores data redundantly across multiple AZs within a region and you can concurrently connect up to thousands of EC2 instances to a single filesystem
• Amazon EBS volumes cannot be accessed concurrently by multiple instances
• Amazon S3 is an object store, not a file system
• Amazon Storage Gateway is a range of products used for on-premises storage management and can be configured to cache data locally, back up data to the cloud, and provide a virtual tape backup solution

Question 4

An application running on an external website is attempting to initiate a request to your company’s website on AWS using API calls. A problem has been reported in which the requests are failing with an error that includes the following text:
“Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource”
You have been asked to resolve the problem, what is the most likely solution?

The request is not secured with SSL/TLS

The IAM policy does not allow access to the API

Correct Answer

Enable CORS on the API’s resources using the selected methods under the API Gateway

The ACL on the API needs to be updated

You can enable Cross-Origin Resource Sharing (CORS) for multiple-domain use with JavaScript/AJAX:
– Can be used to enable requests from domains other than the API’s domain
– Allows the sharing of resources between different domains
– The method (GET, PUT, POST etc.) for which you will enable CORS must be available in the API Gateway API before you enable CORS
If CORS is not enabled and an API resource receives requests from another domain, the requests will be blocked. Enable CORS on the API’s resources using the selected methods under the API Gateway
• IAM policies are not used to control CORS and there is no ACL on the API to update
• This error would display whether using SSL/TLS or not
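For illustration, when the API method uses a Lambda proxy integration, the function itself must also return the CORS headers (enabling CORS on the resource adds the OPTIONS preflight method). A minimal sketch of such a handler; the allowed origin is a placeholder:

    import json

    def lambda_handler(event, context):
        # Return CORS headers so browsers on other domains can read the response.
        # 'https://www.example.com' is a placeholder for the calling site's origin.
        return {
            'statusCode': 200,
            'headers': {
                'Access-Control-Allow-Origin': 'https://www.example.com',
                'Access-Control-Allow-Methods': 'GET,POST,OPTIONS',
                'Access-Control-Allow-Headers': 'Content-Type,Authorization',
            },
            'body': json.dumps({'message': 'ok'}),
        }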

Question 5

You need to configure an application to retain information about each user session and have decided to implement a layer within the application architecture to store this information.
Which of the options below could be used? (choose 2)

Correct Answer

Sticky sessions on an Elastic Load Balancer (ELB)

A block storage service such as Elastic Block Store (EBS)

A relational data store such as Amazon RDS

A workflow service such as Amazon Simple Workflow Service (SWF)

Correct Answer

A key/value store such as ElastiCache Redis

• In order to address scalability and to provide a shared data store for sessions that is accessible from any individual web server, you can abstract the HTTP sessions from the web servers themselves. A common solution for this is to leverage an in-memory key/value store such as Redis or Memcached.
• Sticky sessions, also known as session affinity, allow you to route a site user to the particular web server that is managing that individual user’s session. The session’s validity can be determined
by a number of methods, including a client-side cookie or via configurable duration parameters that can be set at the load balancer which routes requests to the web servers. You can configure sticky sessions on Amazon ELBs.
• Relational databases are not typically used for storing session state data due to their rigid schema that tightly controls the format in which data can be stored.
• Workflow services such as SWF are used for carrying out a series of tasks in a coordinated task flow. They are not suitable for storing session state data.
• In this instance the question states that a storage layer for session data is being implemented, and EBS volumes would not be suitable for creating an independent session storage layer as they must be attached to individual EC2 instances.
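A minimal sketch of storing session state in ElastiCache Redis from application code using the open-source redis-py client; the endpoint hostname and TTL are placeholders.

    import json
    import redis

    # Placeholder ElastiCache Redis endpoint
    r = redis.Redis(host='my-cache.abc123.0001.use1.cache.amazonaws.com', port=6379)

    def save_session(session_id, data, ttl_seconds=1800):
        # Store the session with an expiry so abandoned sessions age out
        r.setex(f'session:{session_id}', ttl_seconds, json.dumps(data))

    def load_session(session_id):
        raw = r.get(f'session:{session_id}')
        return json.loads(raw) if raw else None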

Question 6

The data scientists in your company are looking for a service that can process and analyze real-time, streaming data. They would like to use standard SQL queries to query the streaming data.
Which combination of AWS services would deliver these requirements?

ElastiCache and EMR

DynamoDB and EMR

Correct Answer

Kinesis Data Streams and Kinesis Data Analytics

Kinesis Data Streams and Kinesis Firehose

• Kinesis Data Streams enables you to build custom applications that process or analyze streaming data for specialized needs
• Amazon Kinesis Data Analytics is the easiest way to process and analyze real-time, streaming data. Kinesis Data Analytics can use standard SQL queries to process Kinesis data streams and can ingest data from Kinesis Streams and Kinesis Firehose but Firehose cannot be used for running SQL queries
• DynamoDB is a NoSQL database that can be used for storing data from a stream but cannot be used to process or analyze the data or to query it with SQL queries. Elastic Map Reduce (EMR)
is a hosted Hadoop framework and is not used for analytics on streaming data

Question 7

Which of the following approaches provides the lowest cost for Amazon Elastic Block Store (EBS) snapshots while giving you the ability to fully restore data?

Correct Answer

Maintain a single snapshot; the latest snapshot is both incremental and complete

Maintain two snapshots: the original snapshot and the latest incremental snapshot

Maintain the most current snapshot; archive the original to Amazon Glacier

Maintain the original snapshot; subsequent snapshots will overwrite one another

• You can backup data on an EBS volume by periodically taking snapshots of the volume. The scenario is that you need to reduce storage costs by maintaining as few EBS snapshots as possible whilst ensuring you can restore all data when required.
• If you take periodic snapshots of a volume, the snapshots are incremental which means only the blocks on the device that have changed after your last snapshot are saved in the new snapshot. Even though snapshots are saved incrementally, the snapshot deletion process is designed such that you need to retain only the most recent snapshot in order to restore the volume
• You cannot just keep the original snapshot as it will not be incremental and complete
• You do not need to keep the original and latest snapshot as the latest snapshot is all that is needed
• There is no need to archive the original snapshot to Amazon Glacier. EBS copies your data across multiple servers in an AZ for durability
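As a sketch, a periodic backup job only needs to create a new snapshot and can then prune older ones, because the most recent snapshot is always sufficient to restore the full volume. The volume ID is a placeholder; in practice you would wait for the new snapshot to complete before deleting the old ones.

    import boto3

    ec2 = boto3.client('ec2')
    VOLUME_ID = 'vol-0123456789abcdef0'  # placeholder

    # Take a new (incremental) snapshot of the volume
    new_snap = ec2.create_snapshot(VolumeId=VOLUME_ID, Description='nightly backup')
    ec2.get_waiter('snapshot_completed').wait(SnapshotIds=[new_snap['SnapshotId']])

    # Delete older snapshots of the same volume; the latest snapshot still restores all data
    snapshots = ec2.describe_snapshots(
        OwnerIds=['self'],
        Filters=[{'Name': 'volume-id', 'Values': [VOLUME_ID]}],
    )['Snapshots']
    for snap in snapshots:
        if snap['SnapshotId'] != new_snap['SnapshotId']:
            ec2.delete_snapshot(SnapshotId=snap['SnapshotId'])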

Question 8

You are undertaking a project to make some audio and video files that your company uses for onboarding new staff members available via a
mobile application. You are looking for a cost-effective way to convert the files from their current formats into formats that are compatible with smartphones and tablets. The files are currently stored in an S3 bucket.
What AWS service can help with converting the files?

Rekognition

Correct Answer

Elastic Transcoder

Data Pipeline

Amazon Personalize

• Amazon Elastic Transcoder is a highly scalable, easy to use and cost-effective way for developers and businesses to convert (or “transcode”) video and audio files from their source format into versions that will playback on devices like smartphones, tablets and PCs
• Amazon Personalize is a machine learning service that makes it easy for developers to create individualized recommendations for customers using their applications
• Data Pipeline helps you move, integrate, and process data across AWS compute and storage resources, as well as your onpremises resources
• Rekognition is a deep learning-based visual analysis service

Question 9

You are a Solutions Architect. A large multinational client has requested a design for a multi-region, multi-master database. The client has requested that the database be designed for fast, massively scaled applications for a global user base. The database should be a fully managed service including the replication.
Which AWS service can deliver these requirements?

Correct Answer

DynamoDB with Global Tables and Cross Region Replication

S3 with Cross Region Replication

RDS with Multi-AZ

EC2 instances with EBS replication

• Cross-region replication allows you to replicate across regions:
– Amazon DynamoDB global tables provide a fully managed solution for deploying a multi-region, multi-master database
– When you create a global table, you specify the AWS regions where you want the table to be available. DynamoDB performs all of the necessary tasks to create identical tables in these regions, and propagates ongoing data changes to all of them
• RDS with Multi-AZ is not multi-master (only one DB can be written to at a time), and does not span regions
• S3 is an object store not a multi-master database
• There is no such thing as EBS replication. You could build your own database stack on EC2 with DB-level replication but that is not what is presented in the answer
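A sketch of creating a global table with the original (2017) global tables API; it assumes tables with the same name and DynamoDB Streams enabled already exist in each region. Table name and regions are placeholders.

    import boto3

    ddb = boto3.client('dynamodb', region_name='us-east-1')

    # The 'users' table must already exist in both regions with streams enabled
    # (StreamViewType NEW_AND_OLD_IMAGES) before the global table can be created.
    ddb.create_global_table(
        GlobalTableName='users',
        ReplicationGroup=[
            {'RegionName': 'us-east-1'},
            {'RegionName': 'eu-west-1'},
        ],
    )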

Question 10

A customer has asked you to recommend the best solution for a highly available database. The database is a relational OLTP type of database and the customer does not want to manage the operating system the database runs on. Failover between AZs must be automatic.
Which of the below options would you suggest to the customer?

Use RedShift in a Multi-AZ configuration

Use DynamoDB

Correct Answer

Use RDS in a Multi-AZ configuration

Install a relational database on EC2 instances in multiple AZs and create a cluster

• Amazon Relational Database Service (Amazon RDS) is a managed service that makes it easy to set up, operate, and scale a relational database in the cloud. With RDS you can configure Multi-AZ which creates a replica in another AZ and synchronously replicates to it (DR only)
• RedShift is used for analytics OLAP not OLTP
• If you install a DB on an EC2 instance you will need to manage the OS yourself and the customer wants it to be managed for them
• DynamoDB is a managed database of the NoSQL type. NoSQL
DBs are not relational DBs
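For illustration, Multi-AZ is a single flag when creating (or later modifying) an RDS instance. A minimal sketch with placeholder identifiers and credentials:

    import boto3

    rds = boto3.client('rds')

    rds.create_db_instance(
        DBInstanceIdentifier='orders-db',     # placeholder
        Engine='mysql',
        DBInstanceClass='db.m5.large',
        AllocatedStorage=100,
        MasterUsername='admin',
        MasterUserPassword='REPLACE_ME',      # placeholder; use a secrets store in practice
        MultiAZ=True,                         # synchronous standby in another AZ with automatic failover
    )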

Question 11

An application you manage uses Auto Scaling and a fleet of EC2 instances. You recently noticed that Auto Scaling is scaling the number of instances up and down multiple times in the same hour. You need to implement a remediation to reduce the amount of scaling events. The remediation must be cost-effective and preserve elasticity. What design changes would you implement? (choose 2)

Modify the Auto Scaling policy to use scheduled scaling actions

Modify the Auto Scaling group termination policy to terminate the oldest instance first

Correct Answer

Modify the CloudWatch alarm period that triggers your Auto Scaling scale down policy

Modify the Auto Scaling group termination policy to terminate the newest instance first

Correct Answer

Modify the Auto Scaling group cool-down timers

• The cooldown period is a configurable setting for your Auto Scaling group that helps to ensure that it doesn’t launch or terminate additional instances before the previous scaling activity takes effect so this would help. After the Auto Scaling group dynamically scales using a simple scaling policy, it waits for the cooldown period to complete before resuming scaling activities
• The CloudWatch Alarm Evaluation Period is the number of the most recent data points to evaluate when determining alarm state. This would help as you can increase the number of data-points required to trigger an alarm
• The order in which Auto Scaling terminates instances is not the issue here, the problem is that the workload is dynamic and Auto Scaling is constantly reacting to change, and launching or terminating instances
• Using scheduled scaling actions may not be cost-effective and also affects elasticity as it is less dynamic
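A sketch of both remediations: lengthening the group's cooldown and widening the scale-in alarm's evaluation window. The group name, policy values, and thresholds are placeholders.

    import boto3

    autoscaling = boto3.client('autoscaling')
    cloudwatch = boto3.client('cloudwatch')

    # 1. Lengthen the cooldown so a scaling activity settles before the next one starts
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName='web-asg',      # placeholder group name
        DefaultCooldown=600,
    )

    # A simple scale-in policy to attach to the alarm (placeholder values)
    policy = autoscaling.put_scaling_policy(
        AutoScalingGroupName='web-asg',
        PolicyName='scale-down',
        AdjustmentType='ChangeInCapacity',
        ScalingAdjustment=-1,
    )

    # 2. Widen the alarm window so brief dips no longer trigger scale-in
    cloudwatch.put_metric_alarm(
        AlarmName='web-asg-low-cpu',
        Namespace='AWS/EC2',
        MetricName='CPUUtilization',
        Dimensions=[{'Name': 'AutoScalingGroupName', 'Value': 'web-asg'}],
        Statistic='Average',
        Period=300,                # 5-minute periods
        EvaluationPeriods=6,       # 30 minutes below threshold before scaling in
        Threshold=30.0,
        ComparisonOperator='LessThanThreshold',
        AlarmActions=[policy['PolicyARN']],
    )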

Question 12

One of your EC2 instances runs an application process that saves user data to an attached EBS volume. The EBS volume was attached to the EC2 instance after it was launched and is unencrypted. You would like to encrypt the data that is stored on the volume as it is considered sensitive however you cannot shutdown the instance due to other application processes that are running.
What is the best method of applying encryption to the sensitive data without any downtime?

Create an encrypted snapshot of the current EBS volume. Restore the snapshot to the EBS volume

Leverage the AWS Encryption CLI to encrypt the data on the volume

Correct Answer

Create and mount a new encrypted EBS volume. Move the data to the new volume and then delete the old volume

Unmount the volume and enable server-side encryption. Remount the EBS volume

• You cannot restore a snapshot of a root volume without downtime
• There is no direct way to change the encryption state of a volume
• Either create an encrypted volume and copy data to it or take a snapshot, encrypt it, and create a new encrypted volume from the snapshot
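A sketch of the chosen approach: create a new encrypted volume in the same AZ, attach it, and then copy the data across at the OS level (for example with rsync) before deleting the old volume. The AZ, instance ID, and device name are placeholders.

    import boto3

    ec2 = boto3.client('ec2')

    # Create a new encrypted volume in the same AZ as the instance
    vol = ec2.create_volume(
        AvailabilityZone='us-east-1a',   # placeholder: must match the instance's AZ
        Size=100,
        VolumeType='gp2',
        Encrypted=True,                  # uses the default EBS KMS key unless KmsKeyId is given
    )
    ec2.get_waiter('volume_available').wait(VolumeIds=[vol['VolumeId']])

    # Attach it to the running instance; the data copy (e.g. rsync) happens inside the OS
    ec2.attach_volume(
        VolumeId=vol['VolumeId'],
        InstanceId='i-0123456789abcdef0',  # placeholder
        Device='/dev/sdf',
    )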

Question 13

You are planning to launch a RedShift cluster for processing and analyzing a large amount of data. The RedShift cluster will be deployed into a VPC with multiple subnets. Which construct is used when provisioning the cluster to allow you to specify a set of subnets in the VPC that the cluster will be deployed into?

Correct Answer

Cluster Subnet Group

Subnet Group

Availability Zone (AZ)

DB Subnet Group

• You create a cluster subnet group if you are provisioning your cluster in your virtual private cloud (VPC)
• A cluster subnet group allows you to specify a set of subnets in your VPC
• When provisioning a cluster, you provide the subnet group and Amazon Redshift creates the cluster on one of the subnets in the group
• A DB Subnet Group is used by RDS
• A Subnet Group is used by ElastiCache
• Availability Zones are part of the AWS global infrastructure, subnets reside within AZs but in RedShift you provision the cluster into Cluster Subnet Groups
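A sketch of creating the cluster subnet group and referencing it when launching the cluster; subnet IDs, node type, and credentials are placeholders.

    import boto3

    redshift = boto3.client('redshift')

    # Define which VPC subnets the cluster may be placed in
    redshift.create_cluster_subnet_group(
        ClusterSubnetGroupName='analytics-subnets',
        Description='Private subnets for the Redshift cluster',
        SubnetIds=['subnet-0aaa1111', 'subnet-0bbb2222'],   # placeholders
    )

    redshift.create_cluster(
        ClusterIdentifier='analytics-cluster',
        NodeType='dc2.large',
        ClusterType='multi-node',
        NumberOfNodes=2,
        MasterUsername='admin',
        MasterUserPassword='REPLACE_ME',                    # placeholder
        ClusterSubnetGroupName='analytics-subnets',
    )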

Question 14

A Solutions Architect is responsible for a web application that runs on EC2 instances that sit behind an Application Load Balancer (ALB). Auto Scaling is used to launch instances across 3 Availability Zones. The web application serves large image files and these are stored on an Amazon EFS file system. Users have experienced delays in retrieving the files and the Architect has been asked to improve the user experience.
What should the Architect do to improve user experience?

Correct Answer

Cache static content using CloudFront

Reduce the file size of the images

Move the digital assets to EBS

Use Spot instances

• CloudFront is ideal for caching static content such as the files in this scenario and would increase performance
• Moving the files to EBS would not make accessing the files easier or improve performance
• Reducing the file size of the images may result in better retrieval times, however CloudFront would still be the preferable option
• Using Spot EC2 instances may reduce EC2 costs but it won’t improve user experience

Question 15

A Solutions Architect is deploying an Auto Scaling Group (ASG) and needs to determine what CloudWatch monitoring option to use. Which of the statements below would assist the Architect in making his decision? (choose 2)

Correct Answer

Detailed monitoring is enabled by default if the ASG is created from the CLI

Basic monitoring is enabled by default if the ASG is created from the CLI

Detailed monitoring is free and can be manually enabled

Detailed monitoring is chargeable and must always be manually enabled

Correct Answer

Basic monitoring is enabled by default if the ASG is created from the console

• Basic monitoring sends EC2 metrics to CloudWatch about ASG instances every 5 minutes
• Detailed monitoring can be enabled and sends metrics every 1 minute (it is always chargeable)
• When the launch configuration is created from the CLI, detailed monitoring of EC2 instances is enabled by default
• When you enable Auto Scaling group metrics, Auto Scaling sends sampled data to CloudWatch every minute

Question 16

A Linux instance running in your VPC requires some configuration changes to be implemented locally and you need to run some commands. Which of the following can be used to securely connect to the instance?

EC2 password

Correct Answer

Key Pairs

Public key

SSL/TLS certificate

• A key pair consists of a public key that AWS stores, and a private key file that you store
• For Windows AMIs, the private key file is required to obtain the password used to log into your instance
• For Linux AMIs, the private key file allows you to securely SSH into your instance
• The “EC2 password” might refer to the operating system password. By default, you cannot log in this way to Linux and must use a key pair. However, this can be enabled by setting a password and updating the /etc/ssh/sshd_config file
• You cannot log in to an EC2 instance using certificates/public keys

Question 17

Your company would like to restrict the ability of most users to change their own passwords whilst continuing to allow a select group of users within specific user groups.
What is the best way to achieve this? (choose 2)

Create an IAM Policy that grants users the ability to change their own password and attach it to the individual user accounts

Correct Answer

Create an IAM Policy that grants users the ability to change their own password and attach it to the groups that contain the users

Create an IAM Role that grants users the ability to change their own password and attach it to the groups that contain the users

Disable the ability for all users to change their own passwords using the AWS Security Token Service

Correct Answer

Under the IAM Password Policy deselect the option to allow users to change their own passwords

• A password policy can be defined for enforcing password length, complexity etc. (applies to all users)
• You can allow or disallow the ability to change passwords using an IAM policy and you should attach this to the group that contains the users, not to the individual users themselves
• You cannot use an IAM role to perform this function
• The AWS STS is not used for controlling password policies
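A sketch of the group-level piece: an inline policy attached to the group of permitted users that allows each member to change only their own password (the ${aws:username} policy variable scopes the action to the caller). The group name is a placeholder.

    import json
    import boto3

    iam = boto3.client('iam')

    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "iam:ChangePassword",
            # Each member may only change the password of their own user
            "Resource": "arn:aws:iam::*:user/${aws:username}"
        }]
    }

    iam.put_group_policy(
        GroupName='password-self-service',   # placeholder group containing the allowed users
        PolicyName='AllowChangeOwnPassword',
        PolicyDocument=json.dumps(policy),
    )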

Question 18

A colleague from your company’s IT Security team has notified you of an Internet-based threat that affects a certain port and protocol combination. You have conducted an audit of your VPC and found that this port and protocol combination is allowed on an Inbound Rule with a source of 0.0.0.0/0. You have verified that this rule only exists for maintenance purposes and need to make an urgent change to block the access.
What is the fastest way to block access from the Internet to the specific ports and protocols?

Add a deny rule to the security group with a higher priority

Delete the security group

You don’t need to do anything; this rule will only allow access to VPC based resources

Correct Answer

Update the security group by removing the rule

• Security group membership can be changed whilst instances are running
• Any changes to security groups will take effect immediately
• You can only assign permit rules in a security group, you cannot assign deny rules
• If you delete the security group you will remove all rules and potentially cause other problems
• You do need to make the update, as it is the VPC-based resources exposed by this rule that you are concerned about protecting
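A sketch of removing the offending rule via the API; the group ID, port, and protocol are placeholders for the combination reported by the security team. The change takes effect immediately.

    import boto3

    ec2 = boto3.client('ec2')

    # Remove the 0.0.0.0/0 inbound rule for the affected port/protocol (placeholders)
    ec2.revoke_security_group_ingress(
        GroupId='sg-0123456789abcdef0',
        IpProtocol='tcp',
        FromPort=8080,
        ToPort=8080,
        CidrIp='0.0.0.0/0',
    )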

Question 19

You are an entrepreneur building a small company with some resources running on AWS. As you have limited funding, you’re extremely cost conscious. Which AWS service can send you alerts via email or SNS topic when you are forecast to exceed your funding capacity so you can take action?

Correct Answer

AWS Budgets

AWS Billing Dashboard

Cost & Usage reports

Cost Explorer

• AWS Budgets gives you the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount. Budget alerts can be sent via email and/or Amazon Simple Notification Service (SNS) topic
• The AWS Cost Explorer is a free tool that allows you to view charts of your costs
• The AWS Billing Dashboard can send alerts when your bill reaches certain thresholds, but you must use AWS Budgets to create custom budgets that notify you when you are forecast to exceed a budget
• The AWS Cost and Usage report tracks your AWS usage and provides estimated charges associated with your AWS account but does not send alerts
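A sketch of a cost budget that emails you when forecasted spend exceeds the budgeted amount; the account ID, amount, and address are placeholders.

    import boto3

    budgets = boto3.client('budgets')

    budgets.create_budget(
        AccountId='111111111111',                       # placeholder
        Budget={
            'BudgetName': 'monthly-cap',
            'BudgetType': 'COST',
            'TimeUnit': 'MONTHLY',
            'BudgetLimit': {'Amount': '500', 'Unit': 'USD'},
        },
        NotificationsWithSubscribers=[{
            'Notification': {
                'NotificationType': 'FORECASTED',       # alert on forecasted, not just actual, spend
                'ComparisonOperator': 'GREATER_THAN',
                'Threshold': 100.0,                     # percent of the budgeted amount
            },
            'Subscribers': [{'SubscriptionType': 'EMAIL', 'Address': 'owner@example.com'}],
        }],
    )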

Question 20

Your company is starting to use AWS to host new web-based applications. A new two-tier application will be deployed that provides customers with access to data records. It is important that the application is highly responsive and retrieval times are optimized. You’re looking for a persistent data store that can provide the required performance. From the list below what AWS service would you recommend for this requirement?

Kinesis Data Streams

RDS in a multi-AZ configuration

ElastiCache with the Memcached engine

Correct Answer

ElastiCache with the Redis engine

ElastiCache is a web service that makes it easy to deploy and run Memcached or Redis protocol-compliant server nodes in the cloud. The in-memory caching provided by ElastiCache can be used to significantly improve latency and throughput for many read-heavy application workloads or compute-intensive workloads
There are two different database engines with different characteristics as per below:
Memcached
• Not persistent
• Cannot be used as a data store
• Supports large nodes with multiple cores or threads
• Scales out and in, by adding and removing nodes
Redis
• Data is persistent
• Can be used as a datastore
• Not multi-threaded
• Scales by adding shards, not nodes
Kinesis Data Streams is used for processing streams of data, it is not a persistent data store
RDS is not the optimum solution due to the requirement to optimize retrieval times which is a better fit for an in-memory data store such as ElastiCache

Question 21

An organization in the health industry needs to create an application that will transmit protected health data to thousands of service consumers in different AWS accounts. The application servers run on EC2 instances in private VPC subnets. The routing for the application must be fault tolerant.
What should be done to meet these requirements?

Create a virtual private gateway connection between each pair of service provider VPCs and service consumer VPCs

Create a proxy server in the service provider VPC to route requests from service consumers to the application servers

Correct Answer

Create a VPC endpoint service and grant permissions to specific service consumers to create a connection

Create an internal Application Load Balancer in the service provider VPC and put application servers behind it

• What you need to do here is offer the service through a service provider offering. This is a great use case for a VPC endpoint service using AWS PrivateLink (referred to as an endpoint service). Other AWS principals can then create a connection from their VPC to your endpoint service using an interface VPC endpoint. You are acting as the service provider and offering the service to service consumers. This configuration uses a Network Load Balancer and can be made fault tolerant by configuring multiple subnets in which the EC2 instances are running.
• Creating a virtual private gateway connection between each pair of service provider VPCs and service consumer VPCs would be extremely cumbersome and is not the best option.
• Creating an internal ALB would not work as you need consumers from outside your VPC to be able to connect.
• Using a proxy service is possible but would not scale as well and would present a single point of failure unless there is some load balancing to multiple proxies (not mentioned).
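A sketch of publishing the service as a PrivateLink endpoint service behind an existing Network Load Balancer and then whitelisting a consumer account; the NLB ARN and account IDs are placeholders.

    import boto3

    ec2 = boto3.client('ec2')

    # Create the endpoint service fronted by the internal NLB (placeholder ARN)
    svc = ec2.create_vpc_endpoint_service_configuration(
        NetworkLoadBalancerArns=['arn:aws:elasticloadbalancing:us-east-1:111111111111:loadbalancer/net/health-nlb/abc123'],
        AcceptanceRequired=True,
    )
    service_id = svc['ServiceConfiguration']['ServiceId']

    # Grant a specific consumer account permission to create an interface endpoint to it
    ec2.modify_vpc_endpoint_service_permissions(
        ServiceId=service_id,
        AddAllowedPrincipals=['arn:aws:iam::222222222222:root'],   # placeholder consumer account
    )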

Question 22

A call center application consists of a three-tier application using Auto Scaling groups to automatically scale resources as needed. Users report that every morning at 9:00am the system becomes very slow for about 15 minutes. A Solutions Architect determines that a large percentage of the call center staff starts work at 9:00am, so Auto Scaling does not have enough time to scale to meet demand.
How can the Architect fix the problem?

Use Reserved Instances to ensure the system has reserved the right amount of capacity for the scaling events

Correct Answer

Create an Auto Scaling scheduled action to scale out the necessary resources at 8:30am each morning

Permanently keep a steady state of instance that is needed at 9:00am to guarantee available resources, but use Spot Instances

Change the Auto Scaling group’s scale out event to scale based on network utilization

Scheduled scaling: Scaling based on a schedule allows you to
set your own scaling schedule for predictable load changes. To configure your Auto Scaling group to scale based on a schedule, you create a scheduled action. This is ideal for situations where you know when and for how long you are going to need the additional capacity
• Changing the scale-out events to scale based on network utilization may not assist here. We’re not certain the network utilization will increase sufficiently to trigger an Auto Scaling scale out action as the load may be more CPU/memory or number of connections. The main problem however is that we need to ensure the EC2 instances are provisioned ahead of demand not in response to demand (which would incur a delay whilst the EC2 instances “warm up”)
• Using reserved instances ensures capacity is available within an AZ, however the issue here is not that the AZ does not have capacity for more instances, it is that the instances are not being launched by Auto Scaling ahead of the peak demand
• Keeping a steady state of Spot instances is not a good solution. Spot instances may be cheaper, but this is not guaranteed and keeping them online 24hrs a day is wasteful and could prove more expensive
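A sketch of the scheduled action; the group name and capacities are placeholders, and the Recurrence cron expression is evaluated in UTC, so adjust it for the call center's time zone.

    import boto3

    autoscaling = boto3.client('autoscaling')

    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName='callcenter-web-asg',   # placeholder
        ScheduledActionName='scale-out-before-9am',
        Recurrence='30 8 * * MON-FRI',               # 08:30 every weekday (UTC; adjust for local time)
        MinSize=4,
        MaxSize=20,
        DesiredCapacity=10,
    )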

Question 23

Your company keeps unstructured data on a filesystem. You need to provide access to employees via EC2 instances in your VPC. Which storage solution should you choose?

Correct Answer

Amazon EFS

Amazon S3

Amazon EBS

Amazon Snowball

• EFS is the only storage system presented that provides a file system. EFS is accessed by mounting filesystems using the NFS v4.1 protocol from your EC2 instances. You can concurrently connect up to thousands of instances to a single EFS filesystem
• Amazon S3 is an object-based storage system that is accessed over a REST API
• Amazon EBS is a block-based storage system that provides volumes that are mounted to EC2 instances but cannot be shared between EC2 instances
• Amazon Snowball is a device used for migrating very large amounts of data into or out of AWS

Question 24

You are a Solutions Architect at a media company and you need to build an application stack that can receive customer comments from sporting events. The application is expected to receive significant load that could scale to millions of messages within a short space of time following high-profile matches. As you are unsure of the load required for the database layer what is the most cost-effective way to ensure that the messages are not dropped?

Write the data to an S3 bucket, configure RDS to poll the bucket for new messages

Correct Answer

Create an SQS queue and modify the application to write to the SQS queue. Launch another application instance that polls the queue and writes messages to the database

Use DynamoDB and provision enough write capacity to handle the highest expected load

Use RDS Auto Scaling for the database layer which will automatically scale as required

• Amazon Simple Queue Service (Amazon SQS) is a web service that gives you access to message queues that store messages waiting to be processed. SQS offers a reliable, highly-scalable, hosted queue for storing messages in transit between computers and is used for distributed/decoupled applications.
• This is a great use case for SQS as messages are buffered in the queue, so you don’t have to over-provision the database layer or worry about messages being dropped
• RDS Auto Scaling does not exist. With RDS you have to select the underlying EC2 instance type to use and pay for that regardless of the actual load on the DB. Note that a new feature released in
June 2019 does allow Auto Scaling for the RDS storage, but not the compute layer.
With DynamoDB there are now 2 pricing options:
• Provisioned capacity is the original pricing model and is one of the incorrect answers to this question. With provisioned capacity you have to specify the number of read/write capacity units to provision and pay for these regardless of the load on the database.
• With the new on-demand capacity mode, DynamoDB is charged based on the data reads and writes your application performs on your tables. You do not need to specify how much read and write throughput you expect your application to perform because DynamoDB instantly accommodates your workloads as they ramp up or down. It might be a good solution to this question but is not an available option.
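A sketch of the decoupled pattern: the web tier writes each comment to the queue, and a separate worker polls the queue and writes to the database at a rate the database can sustain. The queue URL is a placeholder and the database write is left as a stub.

    import json
    import boto3

    sqs = boto3.client('sqs')
    QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/111111111111/comments'  # placeholder

    def publish_comment(comment):
        # Producer: the web tier just enqueues the message and returns quickly
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(comment))

    def worker_loop(write_to_db):
        # Consumer: drain the queue at a pace the database layer can handle
        while True:
            resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20)
            for msg in resp.get('Messages', []):
                write_to_db(json.loads(msg['Body']))
                sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg['ReceiptHandle'])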

Question 25

A company needs to store data for 5 years. The company will need to have immediate and highly available access to the data at any point in time but will not require frequent access.
Which lifecycle action should be taken to meet the requirements while reducing costs?

Transition objects to expire after 5 years

Correct Answer

Transition objects from Amazon S3 Standard to Amazon S3 Standard-Infrequent Access (S3 Standard-IA)

Transition objects from Amazon S3 Standard to the GLACIER storage class

Transition objects from Amazon S3 Standard to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)

• This is a good use case for S3 Standard-IA which provides immediate access and 99.9% availability.
• Expiring the objects after 5 years is going to delete them at the end of the 5-year period, but you still need to work out the best storage solution to use before then, and this answer does not provide a solution.
• The S3 One Zone-IA tier provides immediate access, but the availability is lower at 99.5% so this is not the best option.
• The Glacier storage class does not provide immediate access.
You can retrieve within hours or minutes, but you do need to submit a job to retrieve the data.
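A sketch of the lifecycle rule: transition objects to S3 Standard-IA after 30 days (the minimum age before Standard-IA) and expire them after roughly 5 years; the bucket name is a placeholder.

    import boto3

    s3 = boto3.client('s3')

    s3.put_bucket_lifecycle_configuration(
        Bucket='records-archive',      # placeholder
        LifecycleConfiguration={
            'Rules': [{
                'ID': 'standard-ia-after-30-days',
                'Filter': {'Prefix': ''},            # apply to the whole bucket
                'Status': 'Enabled',
                'Transitions': [{'Days': 30, 'StorageClass': 'STANDARD_IA'}],
                'Expiration': {'Days': 1825},        # delete after ~5 years
            }],
        },
    )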

Question 26

A Solutions Architect is designing an application that will run on Amazon ECS behind an Application Load Balancer (ALB). For security reasons, the Amazon EC2 host instances for the ECS cluster are in a private subnet.
What should be done to ensure that the incoming traffic to the host instances is from the ALB only?

Enable AWS WAF on the ALB and enable the ECS rule

Correct Answer

Modify the security group used by the EC2 cluster to allow incoming traffic from the security group used by the ALB only

Update the EC2 cluster security group to allow incoming access from the IP address of the ALB only

Create network ACL rules for the private subnet to allow incoming traffic on ports 32768 through 61000 from the IP address of the ALB only

• The best way to accomplish this requirement is to restrict incoming traffic to the Security Group used by the ALB. This will ensure that only the ALB (and its nodes) will be able to connect to the EC2 instances in the ECS cluster.
• You should not use the IP address of the ALB in the Security Group rules as an ALB has multiple nodes in each AZ in which it has subnets defined. Always use security groups whenever you can.
• Network ACLs work at the subnet level. It is preferable to use Security Groups which work at the instance level. Also, you should not use the IP of the ALB as it will have multiple nodes / IPs and it would be cumbersome to setup and administer.
• Enabling a WAF is useful when you need to protect against malicious code. However, this is not a requirement for this solution, you just need to restrict incoming traffic to the ALB.

Question 27

A Solutions Architect is designing a highly-scalable system to track records. Records must remain available for immediate download for three months, and then the records must be deleted.
What’s the most appropriate decision for this use case?

Correct Answer

Store the files on Amazon S3, and create a lifecycle policy to remove the files after three months

Store the files on Amazon EBS, and create a lifecycle policy to remove the files after three months

Store the files on Amazon EFS, and create a lifecycle policy to remove the files after three months

Store the files on Amazon Glacier, and create a lifecycle policy to remove the files after three months

• With S3 you can create a lifecycle action using the “expiration action element” which expires objects (deletes them) at the specified time
• S3 lifecycle actions apply to any storage class, including Glacier, however Glacier would not allow immediate download
• There is no lifecycle policy available for deleting files on EBS and EFS
• NOTE: The new Amazon Data Lifecycle Manager (DLM) feature automates the creation, retention, and deletion of EBS snapshots but not the individual files within an EBS volume.

Question 28

A Solutions Architect needs to allow another AWS account programmatic access to upload objects to his bucket. The Solutions Architect needs to ensure that he retains full control of the objects uploaded to the bucket. How can this be done?

The Architect will need to instruct the user in the other AWS account to grant him access when uploading objects

The Architect will need to take ownership of objects after they have been uploaded

The Architect can use a resource-based ACL with an IAM policy that grants cross-account access and include a conditional statement that only allows uploads if full control access is granted to the Architect

Correct Answer

The Architect can use a resource-based bucket policy that grants cross-account access and include a conditional statement that only allows uploads if full control access is granted to the Architect

• You can use a resource-based bucket policy to allow another AWS account to upload objects to your bucket and use a conditional statement to ensure that full control permissions are granted to a specific account identified by an ID (e.g. email address)
• You cannot use a resource-based ACL with IAM policy as this configuration does not support conditional statements
• Taking ownership of objects is not a concept that is valid in Amazon S3 and asking the user in the other AWS account to grant access when uploading is not a good method as technical controls to enforce this behavior are preferred
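A sketch of such a bucket policy applied from Python; the other account ID and bucket name are placeholders. Uploads from the other account are only allowed when the object grants the bucket owner full control.

    import json
    import boto3

    s3 = boto3.client('s3')

    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowCrossAccountUploadWithFullControl",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::222222222222:root"},   # placeholder uploader account
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::shared-uploads/*",              # placeholder bucket
            "Condition": {
                "StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
            }
        }]
    }

    s3.put_bucket_policy(Bucket='shared-uploads', Policy=json.dumps(policy))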

Question 29

You are creating a design for a web-based application that will be based on a web front-end using EC2 instances and a database back-end. This application is a low priority and you do not want to incur costs in general day to day management. Which AWS database service can you use that will require the least operational overhead?

RDS

RedShift

Correct Answer

DynamoDB

EMR

• Out of the options in the list, DynamoDB requires the least operational overhead as there are no backups, maintenance periods, software updates etc. to deal with
• RDS, RedShift and EMR all require some operational overhead to deal with backups, software updates and maintenance periods

Question 30

A company’s Amazon RDS MySQL DB instance may be rebooted for maintenance and to apply patches. This database is critical and potential user disruption must be minimized.
What should the Solution Architect do in this scenario?

Create an RDS MySQL Read Replica

Set up an Amazon RDS MySQL cluster

Create an Amazon EC2 instance MySQL cluster

Correct Answer

Set the Amazon RDS MySQL to Multi-AZ

• With RDS in multi-AZ configuration system upgrades like OS patching, DB Instance scaling and system upgrades, are applied first on the standby, before failing over and modifying the other DB Instance. This means the database is always available with minimal disruption.
• You cannot create a “RDS MySQL cluster” with Amazon RDS. If you want to create a MySQL cluster you need to install on EC2 (which is another option presented). If you install in EC2 you must manage the whole process of patching and failover yourself as it’s not a managed solution.
• Amazon RDS MySQL Read Replicas can serve reads but not writes so there would be a disruption if the application is writing to the DB while the system updates are taking place.

Question 31

You would like to share some documents with public users accessing an S3 bucket over the Internet. What are two valid methods of granting public read permissions so you can share the documents? (choose 2)

Share the documents using a bastion host in a public subnet

Correct Answer

Grant public read access to the objects when uploading

Share the documents using CloudFront and a static website

Grant public read on all objects using the S3 bucket ACL

Correct Answer

Use the AWS Policy Generator to create a bucket policy for your Amazon S3 bucket granting read access to public anonymous users

• Access policies define access to resources and can be associated with resources (buckets and objects) and users
• You can use the AWS Policy Generator to create a bucket policy for your Amazon S3 bucket. Bucket policies can be used to grant permissions to objects
• You can define permissions on objects when uploading and at any time afterwards using the AWS Management Console.
• You cannot use a bucket ACL to grant permissions to objects within the bucket. You must explicitly assign the permissions to each object through an ACL attached as a subresource to that object
• Using an EC2 instance as a bastion host to share the documents is not a feasible or scalable solution
• You can configure an S3 bucket as a static website and use CloudFront as a front-end however this is not necessary just to share the documents and imposes some constraints on the solution

Question 32

You have created a private Amazon CloudFront distribution that serves files from an Amazon S3 bucket and is accessed using signed URLs. You need to ensure that users cannot bypass the controls provided by Amazon CloudFront and access content directly.
How can this be achieved? (choose 2)

Modify the Edge Location to restrict direct access to Amazon S3 buckets

Correct Answer

Create an origin access identity and associate it with your distribution

Correct Answer

Modify the permissions on the Amazon S3 bucket so that only the origin access identity has read and download permissions

Create a new signed URL that requires users to access the Amazon S3 bucket through Amazon CloudFront

Modify the permissions on the origin access identity to restrict read access to the Amazon S3 bucket

• If you’re using an Amazon S3 bucket as the origin for a CloudFront distribution, you can either allow everyone to have access to the files there, or you can restrict access. If you limit access by using CloudFront signed URLs or signed cookies you also won’t want people to be able to view files by simply using the direct URL for the file. Instead, you want them to only access the files by using the CloudFront URL, so your protections work. This can be achieved by creating an OAI and associating it with your distribution and then modifying the permissions on the S3 bucket to only allow the OAI to access the files
• You do not modify permissions on the OAI – you do this on the S3 bucket
• If users are accessing the S3 files directly, a new signed URL is not going to stop them
• You cannot modify edge locations to restrict access to S3 buckets

Question 33

Your company shares some HR videos stored in an Amazon S3 bucket via CloudFront. You need to restrict access to the private content so users coming from specific IP addresses can access the videos and ensure direct access via the Amazon S3 bucket is not possible.
How can this be achieved?

Correct Answer

Configure CloudFront to require users to access the files using a signed URL, create an origin access identity (OAI) and restrict access to the files in the Amazon S3 bucket to the OAI

Configure CloudFront to require users to access the files using signed cookies, create an origin access identity (OAI) and instruct users to login with the OAI

Configure CloudFront to require users to access the files using a signed URL, and configure the S3 bucket as a website endpoint

Configure CloudFront to require users to access the files using signed cookies, and move the files to an encrypted EBS volume

• A signed URL includes additional information, for example, an expiration date and time, that gives you more control over access to your content. You can also specify the IP address or range of IP addresses of the users who can access your content
• If you use CloudFront signed URLs (or signed cookies) to limit access to files in your Amazon S3 bucket, you may also want to prevent users from directly accessing your S3 files by using Amazon S3 URLs. To achieve this, you can create an origin access identity (OAI), which is a special CloudFront user, and associate the OAI with your distribution. You can then change the permissions either on your Amazon S3 bucket or on the files in your bucket so that only the origin access identity has read permission (or read and download permission)
• Users cannot login with an OAI
• You cannot use CloudFront and an OAI when your S3 bucket is configured as a website endpoint
• You cannot use CloudFront to pull data directly from an EBS volume

Question 34

There is a temporary need to share some video files that are stored in a private S3 bucket. The consumers do not have AWS accounts and you need to ensure that only authorized consumers can access the files. What is the best way to enable this access?

Configure an allow rule in the Security Group for the IP addresses of the consumers

Enable public read access for the S3 bucket

Use CloudFront to distribute the files using authorization hash tags

Correct Answer

Generate a pre-signed URL and distribute it to the consumers

• S3 pre-signed URLs can be used to provide temporary access to a specific object to those who do not have AWS credentials. This is the best option
• Enabling public read access does not restrict the content to authorized consumers
• You cannot use CloudFront as hash tags are not a CloudFront authentication mechanism
• Security Groups do not apply to S3 buckets
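A sketch of generating a time-limited pre-signed URL for one of the video files; the bucket and key are placeholders, and the URL inherits the permissions of the credentials that sign it.

    import boto3

    s3 = boto3.client('s3')

    url = s3.generate_presigned_url(
        'get_object',
        Params={'Bucket': 'private-videos', 'Key': 'training/intro.mp4'},  # placeholders
        ExpiresIn=3600,   # link is valid for one hour
    )
    print(url)  # distribute this URL to the authorized consumers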

Question 35

You would like to provide some on-demand and live streaming video to your customers. The plan is to provide the users with both the media player and the media files from the AWS cloud. One of the features you need is for the content of the media files to begin playing while the file is still being downloaded.
What AWS services can deliver these requirements? (choose 2)

Store the media files on an EC2 instance

Store the media files on an EBS volume

Correct Answer

Store the media files in an S3 bucket

Correct Answer

Use CloudFront with a Web and RTMP distribution

Use CloudFront with an RTMP distribution

• For serving both the media player and media files you need two types of distributions:
– A web distribution for the media player
– An RTMP distribution for the media files
• RTMP:
– Distributes streaming media files using Adobe Flash Media Server’s RTMP protocol
– Allows an end user to begin playing a media file before the file has finished downloading from a CloudFront edge location
• Files must be stored in an S3 bucket (not an EBS volume or EC2 instance)

Question 36

The company you work for is currently transitioning their infrastructure and applications into the AWS cloud. You are planning to deploy an Elastic Load Balancer (ELB) that distributes traffic for a web application running on EC2 instances. You still have some application servers running on-premise and you would like to distribute application traffic across both your AWS and on-premises resources.
How can this be achieved?

Provision an IPSec VPN connection between your on-premises location and AWS and create a CLB that uses cross-zone load balancing to distribute traffic across EC2 instances and on-premises servers

This cannot be done; ELBs are an AWS service and can only distribute traffic within the AWS cloud

Correct Answer

Provision a Direct Connect connection between your on-premises location and AWS and create a target group on an ALB to use IP based targets for both your EC2 instances and on-premises servers

Provision a Direct Connect connection between your on-premises location and AWS and create a target group on an ALB to use Instance ID based targets for both your EC2 instances and on-premises servers

• The ALB (and NLB) supports IP addresses as targets
• Using IP addresses as targets allows load balancing any application hosted in AWS or on-premises using IP addresses of the application back-ends as targets
• You must have a VPN or Direct Connect connection to enable this configuration to work
• You cannot use instance ID based targets for on-premises servers and you cannot mix instance ID and IP address target
types in a single target group
• The CLB does not support IP addresses as targets
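A sketch of an IP-based target group that registers both an EC2 instance's private IP and an on-premises server reachable over the Direct Connect or VPN link; the VPC ID and addresses are placeholders (the 'all' availability zone marks an address outside the VPC).

    import boto3

    elbv2 = boto3.client('elbv2')

    tg = elbv2.create_target_group(
        Name='hybrid-web-targets',
        Protocol='HTTP',
        Port=80,
        VpcId='vpc-0123456789abcdef0',    # placeholder
        TargetType='ip',                  # IP targets allow on-premises back-ends
    )
    tg_arn = tg['TargetGroups'][0]['TargetGroupArn']

    elbv2.register_targets(
        TargetGroupArn=tg_arn,
        Targets=[
            {'Id': '10.0.1.15', 'Port': 80},                                # EC2 private IP in the VPC
            {'Id': '192.168.10.20', 'Port': 80, 'AvailabilityZone': 'all'}, # on-premises server
        ],
    )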

Question 37

A Solutions Architect is developing a mobile web app that will provide access to health-related data. The web apps will be tested on Android and iOS devices. The Architect needs to run tests on multiple devices simultaneously and to be able to reproduce issues, and record logs and performance data to ensure quality before release.
What AWS service can be used for these requirements?

AWS Workspaces

Amazon Appstream 2.0

AWS Cognito

Correct Answer

AWS Device Farm

• AWS Device Farm is an app testing service that lets you test and interact with your Android, iOS, and web apps on many devices at once, or reproduce issues on a device in real time
• Amazon Cognito lets you add user sign-up, sign-in, and access control to your web and mobile apps quickly and easily. It is not used for testing
• Amazon Workspaces is a managed, secure cloud desktop service
• Amazon AppStream 2.0 is a fully managed application streaming service

Question 38

There is a new requirement to implement in-memory caching for a Financial Services application due to increasing read-heavy load. The data must be stored persistently. Automatic failover across AZs is also required.
Which two items from the list below are required to deliver these requirements? (choose 2)

Read replica with failover mode enabled

Correct Answer

Multi-AZ with Cluster mode and Automatic Failover enabled

Multiple nodes placed in different AZs

Correct Answer

ElastiCache with the Redis engine

ElastiCache with the Memcached engine

• Redis engine stores data persistently
• Memcached engine does not store data persistently
• Redis engine supports Multi-AZ using read replicas in another AZ in the same region
• You can have a fully automated, fault tolerant ElastiCache-Redis implementation by enabling both cluster mode and multi-AZ failover
• Memcached engine does not support Multi-AZ failover or replication

Question 39

An application you are designing receives and processes files. The files are typically around 4GB in size and the application extracts metadata from the files which typically takes a few seconds for each file. The pattern of updates is highly dynamic with times of little activity and then multiple uploads within a short period of time.
What architecture will address this workload the most cost efficiently?

Correct Answer

Upload files into an S3 bucket, and use the Amazon S3 event notification to invoke a Lambda function to extract the metadata

Store the file in an EBS volume which can then be accessed by another EC2 instance for processing

Place the files in an SQS queue, and use a fleet of EC2 instances to extract the metadata

Use a Kinesis data stream to store the file, and use Lambda for processing

• Storing the file in an S3 bucket is the most cost-efficient solution, and using S3 event notifications to invoke a Lambda function works well for this unpredictable workload
• Kinesis Data Streams requires you to provision shard capacity, and the consumers typically run on EC2 instances, so you pay for capacity even when the application is not receiving files. This is not as cost-efficient as storing the files in an S3 bucket prior to using Lambda for the processing
• SQS queues have a maximum message size of 256KB. You can use the extended client library for Java to use pointers to a payload on S3, but the maximum payload size is 2GB
• Storing the file in an EBS volume and using EC2 instances for processing is not cost efficient

Question 40

A Solutions Architect needs to transform data that is being uploaded into S3. The uploads happen sporadically and the transformation should be triggered by an event. The transformed data should then be loaded into a target data store.
What services would be used to deliver this solution in the MOST cost-effective manner? (choose 2)

Configure a CloudWatch alarm to send a notification to CloudFormation when data is uploaded

Correct Answer

Configure S3 event notifications to trigger a Lambda function when data is uploaded and use the Lambda function to trigger the ETL job

Configure CloudFormation to provision a Kinesis data stream to transform the data and load it into S3

Configure CloudFormation to provision AWS Data Pipeline to transform the data

Correct Answer

Use AWS Glue to extract, transform and load the data into the target data store

• S3 event notifications triggering a Lambda function is completely serverless and cost-effective
• AWS Glue can trigger ETL jobs that will transform that data and load it into a data store such as S3
• Kinesis Data Streams is used for processing data, rather than
extracting and transforming it. The Kinesis consumers are EC2 instances which are not as cost-effective as serverless solutions
• AWS Data Pipeline can be used to automate the movement and transformation of data, but it relies on other services to actually transform the data
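A sketch of wiring the pieces together: the bucket notification invokes a Lambda function on upload, and the function starts an existing Glue ETL job. The ARNs, bucket name, and job name are placeholders, and the bucket must already have permission to invoke the function (granted via lambda add-permission).

    import boto3

    s3 = boto3.client('s3')

    # Configure the bucket to invoke the Lambda function on new uploads (placeholder ARN)
    s3.put_bucket_notification_configuration(
        Bucket='raw-uploads',
        NotificationConfiguration={
            'LambdaFunctionConfigurations': [{
                'LambdaFunctionArn': 'arn:aws:lambda:us-east-1:111111111111:function:start-etl',
                'Events': ['s3:ObjectCreated:*'],
            }],
        },
    )

    # Inside the Lambda function: trigger the Glue ETL job for each uploaded object
    def lambda_handler(event, context):
        glue = boto3.client('glue')
        for record in event['Records']:
            glue.start_job_run(
                JobName='transform-uploads',                           # placeholder Glue job
                Arguments={'--input_key': record['s3']['object']['key']},
            )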

Question 41

A company has divested a single business unit and needs to move the AWS account owned by the business unit to another AWS Organization. How can this be achieved?

Correct Answer

Migrate the account using the AWS Organizations console

Create a new account in the destination AWS Organization and migrate resources

Create a new account in the destination AWS Organization and share the original resources using AWS Resource Access Manager

Migrate the account using AWS CloudFormation

• Accounts can be migrated between organizations. To do this you must have root or IAM access to both the member and master accounts. Resources will remain under the control of the migrated account.
• You do not need to use AWS CloudFormation. You can use the Organizations API or AWS CLI when there are many accounts to migrate, and you could use CloudFormation for any additional automation, but it is not necessary for this scenario.
• You do not need to create a new account in the destination AWS Organization as you can just migrate the existing account.

Question 42

A new application is to be published in multiple regions around the
world. The Architect needs to ensure only 2 IP addresses need to be whitelisted. The solution should intelligently route traffic for lowest latency and provide fast regional failover.
How can this be achieved?

Launch EC2 instances into multiple regions behind an ALB and use Amazon CloudFront with a pair of static IP addresses

Correct Answer

Launch EC2 instances into multiple regions behind an NLB and use AWS Global Accelerator

Launch EC2 instances into multiple regions behind an ALB and use a Route 53 failover routing policy

Launch EC2 instances into multiple regions behind an NLB with a static IP address

• AWS Global Accelerator uses the vast, congestion-free AWS global network to route TCP and UDP traffic to a healthy application endpoint in the closest AWS Region to the user. This means it will intelligently route traffic to the closest point of presence (reducing latency). Seamless failover is ensured as AWS Global Accelerator uses anycast IP addresses, which means the IPs do not change when failing over between regions, so there are no issues with client caches having incorrect entries that need to expire. This is the only solution that provides deterministic failover.
• An NLB with a static IP is a workable solution as you could configure a primary and secondary address in applications. However, this solution does not intelligently route traffic for lowest latency.
• A Route 53 failover routing policy uses a primary and standby configuration. Therefore, it sends all traffic to the primary until it fails a health check at which time it sends traffic to the secondary. This solution does not intelligently route traffic for lowest latency.
• Amazon CloudFront cannot be configured with “a pair of static IP addresses”.

Question 43

An e-commerce web application needs a highly scalable key-value database. Which AWS database service should be used?

Amazon RedShift

Correct Answer

Amazon DynamoDB

Amazon ElastiCache

Amazon RDS

• A key-value database is a type of nonrelational (NoSQL) database that uses a simple key-value method to store data. A key-value database stores data as a collection of key-value pairs in which a key serves as a unique identifier. Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability – this is the best database for these requirements.
• Amazon RDS is a relational (SQL) type of database, not a key-value / nonrelational database.
• Amazon RedShift is a data warehouse service used for online analytical processing (OLAP) workloads.
• Amazon ElastiCache is an in-memory caching database. This is not a nonrelational key-value database.

UnansweredQuestion 44

0 / 1 pts

A call center application consists of a three-tier application using Auto Scaling groups to automatically scale resources as needed. Users report that every morning at 9:00am the system becomes very slow for about 15 minutes. A Solutions Architect determines that a large percentage of the call center staff starts work at 9:00am, so Auto Scaling does not have enough time to scale to meet demand.
How can the Architect fix the problem?

Change the Auto Scaling group’s scale out event to scale based on network utilization

Permanently keep a steady state of the instances needed at 9:00am to guarantee available resources, but use Spot Instances

Correct Answer

Create an Auto Scaling scheduled action to scale out the necessary resources at 8:30am each morning

Use Reserved Instances to ensure the system has reserved the right amount of capacity for the scaling events

• Scheduled scaling: scaling based on a schedule allows you to set your own scaling schedule for predictable load changes. To configure your Auto Scaling group to scale based on a schedule, you create a scheduled action. This is ideal for situations where you know when and for how long you are going to need the additional capacity
• Changing the scale-out events to scale based on network utilization may not assist here. We’re not certain the network utilization will increase sufficiently to trigger an Auto Scaling scale out action as the load may be more CPU/memory or number of connections. The main problem however is that we need to ensure the EC2 instances are provisioned ahead of demand not in response to demand (which would incur a delay whilst the EC2 instances “warm up”)
• Using reserved instances ensures capacity is available within an AZ, however the issue here is not that the AZ does not have capacity for more instances, it is that the instances are not being launched by Auto Scaling ahead of the peak demand
• Keeping a steady state of Spot instances is not a good solution. Spot instances may be cheaper, but this is not guaranteed and keeping them online 24hrs a day is wasteful and could prove more expensive
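
A minimal boto3 sketch of such a scheduled action. The Auto Scaling group name, capacities and cron expression are assumptions:

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Scale out every day at 08:30 so capacity is ready before the 09:00 rush
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="call-center-web-asg",
        ScheduledActionName="morning-scale-out",
        Recurrence="30 8 * * *",   # cron format; evaluated in UTC by default
        MinSize=4,
        MaxSize=20,
        DesiredCapacity=10,
    )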

UnansweredQuestion 45

0 / 1 pts

Your company keeps unstructured data on a filesystem. You need to provide access to employees via EC2 instances in your VPC. Which storage solution should you choose?

Amazon EBS

Correct Answer

Amazon EFS

Amazon S3

Amazon Snowball

• EFS is the only storage system presented that provides a file system. EFS is accessed by mounting filesystems using the NFS v4.1 protocol from your EC2 instances. You can concurrently connect up to thousands of instances to a single EFS filesystem
• Amazon S3 is an object-based storage system that is accessed over a REST API
• Amazon EBS is a block-based storage system that provides volumes that are mounted to EC2 instances but cannot be shared between EC2 instances
• Amazon Snowball is a device used for migrating very large amounts of data into or out of AWS

UnansweredQuestion 46

0 / 1 pts

You are a Solutions Architect at a media company and you need to build an application stack that can receive customer comments from sporting events. The application is expected to receive significant load that could scale to millions of messages within a short space of time following high-profile matches. As you are unsure of the load required for the database layer what is the most cost-effective way to ensure that the messages are not dropped?

Use DynamoDB and provision enough write capacity to handle the highest expected load

Correct Answer

Create an SQS queue and modify the application to write to the SQS queue. Launch another application instance that polls the queue and writes messages to the database

Use RDS Auto Scaling for the database layer which will automatically scale as required

Write the data to an S3 bucket, configure RDS to poll the bucket for new messages

• Amazon Simple Queue Service (Amazon SQS) is a web service that gives you access to message queues that store messages waiting to be processed. SQS offers a reliable, highly-scalable, hosted queue for storing messages in transit between computers and is used for distributed/decoupled applications.
• This is a great use case for SQS because, with messages buffered in the queue, you don’t have to over-provision the database layer or worry about messages being dropped
• RDS Auto Scaling does not exist. With RDS you have to select the underlying EC2 instance type to use and pay for that regardless of the actual load on the DB. Note that a new feature released in June 2019 does allow Auto Scaling for the RDS storage, but not the compute layer.
With DynamoDB there are now 2 pricing options:
• Provisioned capacity has been around forever and is one of the incorrect answers to this question. With provisioned capacity you have to specify the number of read/write capacity units to provision and pay for these regardless of the load on the database.
• With the new on-demand capacity mode, DynamoDB charges based on the data reads and writes your application performs on your tables. You do not need to specify how much read and write throughput you expect your application to perform because DynamoDB instantly accommodates your workloads as they ramp up or down. This might be a good solution to the question, but it is not an available option.
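
A simplified boto3 sketch of this decoupling pattern. The queue URL is a placeholder and write_to_database is a hypothetical stub standing in for the real persistence step:

    import boto3

    sqs = boto3.client("sqs")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/comments-queue"

    def write_to_database(body: str) -> None:
        # Hypothetical placeholder for the real database write
        print(f"persisting: {body}")

    # Producer: the web tier writes each comment to the queue instead of the database
    def enqueue_comment(comment_body: str) -> None:
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=comment_body)

    # Consumer: a separate instance polls the queue and writes to the database at its own pace
    def drain_queue() -> None:
        while True:
            resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20)
            for msg in resp.get("Messages", []):
                write_to_database(msg["Body"])
                sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])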

UnansweredQuestion 47

0 / 1 pts

A company needs to store data for 5 years. The company will need to have immediate and highly available access to the data at any point in time but will not require frequent access.
Which lifecycle action should be taken to meet the requirements while reducing costs?

Transition objects from Amazon S3 Standard to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)

Transition objects from Amazon S3 Standard to the GLACIER storage class

Transition objects to expire after 5 years

Correct Answer

Transition objects from Amazon S3 Standard to Amazon S3 Standard-Infrequent Access (S3 Standard-IA)

• This is a good use case for S3 Standard-IA which provides immediate access and 99.9% availability.
• Expiring the objects after 5 years is going to delete them at the end of the 5-year period, but you still need to work out the best storage solution to use before then, and this answer does not provide a solution.
• The S3 One Zone-IA tier provides immediate access, but the availability is lower at 99.5% so this is not the best option.
• The Glacier storage class does not provide immediate access. You can retrieve within hours or minutes, but you do need to submit a job to retrieve the data.
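
A hedged boto3 sketch of the lifecycle rule. The bucket name, prefix and 30-day transition point are assumptions (objects must be at least 30 days old before moving to Standard-IA):

    import boto3

    boto3.client("s3").put_bucket_lifecycle_configuration(
        Bucket="company-archive-bucket",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "to-standard-ia",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},  # apply to all objects
                    "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
                }
            ]
        },
    )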

UnansweredQuestion 48

0 / 1 pts

A Solutions Architect is designing an application that will run on Amazon ECS behind an Application Load Balancer (ALB). For security reasons, the Amazon EC2 host instances for the ECS cluster are in a private subnet.
What should be done to ensure that the incoming traffic to the host instances is from the ALB only?

Create network ACL rules for the private subnet to allow incoming traffic on ports 32768 through 61000 from the IP address of the ALB only

Correct Answer

Modify the security group used by the EC2 cluster to allow incoming traffic from the security group used by the ALB only

Update the EC2 cluster security group to allow incoming access from the IP address of the ALB only

Enable AWS WAF on the ALB and enable the ECS rule

• The best way to accomplish this requirement is to restrict incoming traffic to the Security Group used by the ALB. This will ensure that only the ALB (and its nodes) will be able to connect to the EC2 instances in the ECS cluster.
• You should not use the IP address of the ALB in the Security Group rules as an ALB has multiple nodes in each AZ in which it has subnets defined. Always use security groups whenever you can.
• Network ACLs work at the subnet level. It is preferable to use Security Groups which work at the instance level. Also, you should not use the IP of the ALB as it will have multiple nodes / IPs and it would be cumbersome to setup and administer.
• Enabling a WAF is useful when you need to protect against malicious code. However, this is not a requirement for this solution, you just need to restrict incoming traffic to the ALB.
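
A boto3 sketch of the security-group rule described above. The group IDs are placeholders and the port range assumes the classic ECS dynamic host-port range:

    import boto3

    ec2 = boto3.client("ec2")

    ECS_HOSTS_SG = "sg-0ecs0hosts0example"   # security group on the ECS container instances
    ALB_SG = "sg-0alb0nodes0example"         # security group attached to the ALB

    # Allow the ALB's security group (not its IP addresses) to reach the dynamic host ports
    ec2.authorize_security_group_ingress(
        GroupId=ECS_HOSTS_SG,
        IpPermissions=[
            {
                "IpProtocol": "tcp",
                "FromPort": 32768,
                "ToPort": 61000,
                "UserIdGroupPairs": [{"GroupId": ALB_SG}],
            }
        ],
    )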

UnansweredQuestion 49

0 / 1 pts

A Solutions Architect is designing a highly-scalable system to track records. Records must remain available for immediate download for three months, and then the records must be deleted.
What’s the most appropriate decision for this use case?

Store the files on Amazon EBS, and create a lifecycle policy to remove the files after three months

Store the files on Amazon Glacier, and create a lifecycle policy to remove the files after three months

Correct Answer

Store the files on Amazon S3, and create a lifecycle policy to remove the files after three months

Store the files on Amazon EFS, and create a lifecycle policy to remove the files after three months

• With S3 you can create a lifecycle action using the “expiration action element” which expires objects (deletes them) at the specified time
• S3 lifecycle actions apply to any storage class, including Glacier, however Glacier would not allow immediate download
• There is no lifecycle policy available for deleting files on EBS and EFS
• NOTE: The new Amazon Data Lifecycle Manager (DLM) feature automates the creation, retention, and deletion of EBS snapshots but not the individual files within an EBS volume.
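
For completeness, a boto3 sketch of the expiration rule. The bucket name and prefix are assumptions, and 90 days approximates the three-month retention period:

    import boto3

    boto3.client("s3").put_bucket_lifecycle_configuration(
        Bucket="records-tracking-bucket",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "expire-after-three-months",
                    "Status": "Enabled",
                    "Filter": {"Prefix": "records/"},
                    "Expiration": {"Days": 90},  # objects are deleted roughly 90 days after creation
                }
            ]
        },
    )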

UnansweredQuestion 50

0 / 1 pts

A Solutions Architect needs to allow another AWS account programmatic access to upload objects to his bucket. The Solutions Architect needs to ensure that he retains full control of the objects uploaded to the bucket. How can this be done?

The Architect can use a resource-based ACL with an IAM policy that grants cross-account access and include a conditional statement that only allows uploads if full control access is granted to the Architect

The Architect will need to instruct the user in the other AWS account to grant him access when uploading objects

Correct Answer

The Architect can use a resource-based bucket policy that grants cross-account access and include a conditional statement that only allows uploads if full control access is granted to the Architect

The Architect will need to take ownership of objects after they have been uploaded

• You can use a resource-based bucket policy to allow another AWS account to upload objects to your bucket and use a conditional statement to ensure that full control permissions are granted to a specific account identified by an ID (e.g. email address)
• You cannot use a resource-based ACL with an IAM policy as this configuration does not support conditional statements
• Taking ownership of objects is not a concept that is valid in Amazon S3 and asking the user in the other AWS account to grant access when uploading is not a good method as technical controls to enforce this behavior are preferred
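
A sketch of such a bucket policy, applied with boto3. The uploading account ID and bucket name are placeholders:

    import json
    import boto3

    BUCKET = "architect-owned-bucket"
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowCrossAccountPutWithFullControl",
                "Effect": "Allow",
                "Principal": {"AWS": "arn:aws:iam::999988887777:root"},
                "Action": "s3:PutObject",
                "Resource": f"arn:aws:s3:::{BUCKET}/*",
                # Uploads are rejected unless the uploader grants the bucket owner full control
                "Condition": {"StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}},
            }
        ],
    }

    boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))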

UnansweredQuestion 51

0 / 1 pts

You are creating a design for a web-based application that will be based on a web front-end using EC2 instances and a database back-end. This application is a low priority and you do not want to incur costs for general day-to-day management. Which AWS database service can you use that will require the least operational overhead?

RDS

RedShift

Correct Answer

DynamoDB

EMR

• Out of the options in the list, DynamoDB requires the least operational overhead as there are no backups, maintenance periods, software updates etc. to deal with
• RDS, RedShift and EMR all require some operational overhead to deal with backups, software updates and maintenance periods

UnansweredQuestion 52

0 / 1 pts

A company’s Amazon RDS MySQL DB instance may be rebooted for maintenance and to apply patches. This database is critical and potential user disruption must be minimized.
What should the Solution Architect do in this scenario?

Create an RDS MySQL Read Replica

Create an Amazon EC2 instance MySQL cluster

Set up an Amazon RDS MySQL cluster

Correct Answer

Set the Amazon RDS MySQL to Multi-AZ

• With RDS in a Multi-AZ configuration, system upgrades such as OS patching, DB instance scaling and version upgrades are applied first on the standby, before failing over and modifying the other DB instance. This means the database is always available with minimal disruption.
• You cannot create an “RDS MySQL cluster” with Amazon RDS. If you want to create a MySQL cluster you need to install it on EC2 (which is another option presented). If you install on EC2 you must manage the whole process of patching and failover yourself as it is not a managed solution.
• Amazon RDS MySQL Read Replicas can serve reads but not writes so there would be a disruption if the application is writing to the DB while the system updates are taking place.
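
A one-call boto3 sketch that converts an existing instance to Multi-AZ. The instance identifier is a placeholder, and the conversion itself can be deferred to the maintenance window to avoid impact:

    import boto3

    boto3.client("rds").modify_db_instance(
        DBInstanceIdentifier="critical-mysql-db",
        MultiAZ=True,            # provision a synchronous standby in another AZ
        ApplyImmediately=False,  # apply the change during the next maintenance window
    )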

UnansweredQuestion 53

0 / 1 pts

You would like to share some documents with public users accessing an S3 bucket over the Internet. What are two valid methods of granting public read permissions so you can share the documents? (choose 2)

Correct Answer

Use the AWS Policy Generator to create a bucket policy for your Amazon S3 bucket granting read access to public anonymous users

Grant public read on all objects using the S3 bucket ACL

Correct Answer

Grant public read access to the objects when uploading

Share the documents using CloudFront and a static website

Share the documents using a bastion host in a public subnet

• Access policies define access to resources and can be associated with resources (buckets and objects) and users
• You can use the AWS Policy Generator to create a bucket policy for your Amazon S3 bucket. Bucket policies can be used to grant permissions to objects
• You can define permissions on objects when uploading and at any time afterwards using the AWS Management Console.
• You cannot use a bucket ACL to grant permissions to objects within the bucket. You must explicitly assign the permissions to each object through an ACL attached as a subresource to that object
• Using an EC2 instance as a bastion host to share the documents is not a feasible or scalable solution
• You can configure an S3 bucket as a static website and use CloudFront as a front-end however this is not necessary just to share the documents and imposes some constraints on the solution
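
The kind of policy the AWS Policy Generator produces for public read access looks roughly like this (the bucket name is a placeholder), applied here with boto3:

    import json
    import boto3

    BUCKET = "public-documents-bucket"
    public_read_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "PublicReadGetObject",
                "Effect": "Allow",
                "Principal": "*",                      # anonymous/public access
                "Action": "s3:GetObject",
                "Resource": f"arn:aws:s3:::{BUCKET}/*",
            }
        ],
    }

    boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(public_read_policy))

Note that on newer accounts the bucket’s S3 Block Public Access settings must also allow public bucket policies for this to take effect.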

UnansweredQuestion 54

0 / 1 pts

You have created a private Amazon CloudFront distribution that serves files from an Amazon S3 bucket and is accessed using signed URLs. You need to ensure that users cannot bypass the controls provided by Amazon CloudFront and access content directly.
How can this be achieved? (choose 2)

Correct Answer

Modify the permissions on the Amazon S3 bucket so that only the origin access identity has read and download permissions

Create a new signed URL that requires users to access the Amazon S3 bucket through Amazon CloudFront

Correct Answer

Create an origin access identity and associate it with your distribution

Modify the Edge Location to restrict direct access to Amazon S3 buckets

Modify the permissions on the origin access identity to restrict read access to the Amazon S3 bucket

• If you’re using an Amazon S3 bucket as the origin for a CloudFront distribution, you can either allow everyone to have access to the files there, or you can restrict access. If you limit access by using CloudFront signed URLs or signed cookies you also won’t want people to be able to view files by simply using the direct URL for the file. Instead, you want them to only access the files by using the CloudFront URL, so your protections work. This can be achieved by creating an OAI and associating it with your distribution and then modifying the permissions on the S3 bucket to only allow the OAI to access the files
• You do not modify permissions on the OAI – you do this on the S3 bucket
• If users are accessing the S3 files directly, a new signed URL is not going to stop them
• You cannot modify edge locations to restrict access to S3 buckets
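
A sketch of the S3 bucket policy that limits reads to the origin access identity. The OAI ID and bucket name are placeholders, and the OAI is assumed to already be associated with the distribution:

    import json
    import boto3

    BUCKET = "private-content-bucket"
    OAI_ID = "E2EXAMPLE123456"  # the origin access identity associated with the distribution

    oai_only_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowCloudFrontOAIReadOnly",
                "Effect": "Allow",
                "Principal": {
                    "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {OAI_ID}"
                },
                "Action": "s3:GetObject",
                "Resource": f"arn:aws:s3:::{BUCKET}/*",
            }
        ],
    }

    boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(oai_only_policy))

Any existing public-read grants on the bucket or its objects should also be removed so that only the OAI retains read access.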

UnansweredQuestion 55

0 / 1 pts

Your company shares some HR videos stored in an Amazon S3 bucket via CloudFront. You need to restrict access to the private content so users coming from specific IP addresses can access the videos and ensure direct access via the Amazon S3 bucket is not possible.
How can this be achieved?

Correct Answer

Configure CloudFront to require users to access the files using a signed URL, create an origin access identity (OAI) and restrict access to the files in the Amazon S3 bucket to the OAI

Configure CloudFront to require users to access the files using signed cookies, create an origin access identity (OAI) and instruct users to login with the OAI

Configure CloudFront to require users to access the files using signed cookies, and move the files to an encrypted EBS volume

Configure CloudFront to require users to access the files using a signed URL, and configure the S3 bucket as a website endpoint

• A signed URL includes additional information, for example, an expiration date and time, that gives you more control over access to your content. You can also specify the IP address or range of IP addresses of the users who can access your content
• If you use CloudFront signed URLs (or signed cookies) to limit access to files in your Amazon S3 bucket, you may also want to prevent users from directly accessing your S3 files by using Amazon S3 URLs. To achieve this, you can create an origin access identity (OAI), which is a special CloudFront user, and associate the OAI with your distribution. You can then change the permissions either on your Amazon S3 bucket or on the files in your bucket so that only the origin access identity has read permission (or read and download permission)
• Users cannot login with an OAI
• You cannot use CloudFront and an OAI when your S3 bucket is configured as a website endpoint
• You cannot use CloudFront to pull data directly from an EBS volume
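
The IP restriction is expressed in a CloudFront custom policy that is then signed with the distribution’s key pair. A sketch of the policy document in Python, where the domain, path, CIDR range and expiry are placeholders:

    import json
    import time

    custom_policy = {
        "Statement": [
            {
                "Resource": "https://d1234abcd.cloudfront.net/hr-videos/*",
                "Condition": {
                    "IpAddress": {"AWS:SourceIp": "203.0.113.0/24"},            # only these client IPs
                    "DateLessThan": {"AWS:EpochTime": int(time.time()) + 3600},  # URL valid for one hour
                },
            }
        ]
    }

    policy_json = json.dumps(custom_policy)
    # policy_json is then signed (for example with botocore.signers.CloudFrontSigner and the
    # distribution's private key) to produce the Policy, Signature and Key-Pair-Id query
    # parameters appended to the CloudFront URL.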

UnansweredQuestion 56

0 / 1 pts

There is a temporary need to share some video files that are stored in a private S3 bucket. The consumers do not have AWS accounts and you need to ensure that only authorized consumers can access the files. What is the best way to enable this access?

Configure an allow rule in the Security Group for the IP addresses of the consumers

Correct Answer

Generate a pre-signed URL and distribute it to the consumers

Use CloudFront to distribute the files using authorization hash tags

Enable public read access for the S3 bucket

• S3 pre-signed URLs can be used to provide temporary access to a specific object to those who do not have AWS credentials. This is the best option
• Enabling public read access does not restrict the content to authorized consumers
• You cannot use CloudFront as hash tags are not a CloudFront authentication mechanism
• Security Groups do not apply to S3 buckets
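
A minimal boto3 sketch for generating a time-limited pre-signed URL. The bucket, key and expiry are placeholders:

    import boto3

    s3 = boto3.client("s3")

    url = s3.generate_presigned_url(
        ClientMethod="get_object",
        Params={"Bucket": "private-video-bucket", "Key": "videos/launch-event.mp4"},
        ExpiresIn=3600,  # the link stops working after one hour
    )
    print(url)  # distribute this URL to the authorized consumers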

UnansweredQuestion 57

0 / 1 pts

You would like to provide some on-demand and live streaming video to your customers. The plan is to provide the users with both the media player and the media files from the AWS cloud. One of the features you need is for the content of the media files to begin playing while the file is still being downloaded.
What AWS services can deliver these requirements? (choose 2)

Use CloudFront with an RTMP distribution

Correct Answer

Store the media files in an S3 bucket

Correct Answer

Use CloudFront with a Web and RTMP distribution

Store the media files on an EBS volume

Store the media files on an EC2 instance

• For serving both the media player and media files you need two types of distributions:
– A web distribution for the media player
– An RTMP distribution for the media files
. RTMP:
Distribute streaming media files using Adobe Flash Media Server’s RTMP protocol
– Allows an end user to begin playing a media file before the file has finished downloading from a CloudFront edge location Files must be stored in an S3 bucket (not an EBS volume or EC2 instance)

UnansweredQuestion 58

0 / 1 pts

The company you work for is currently transitioning their infrastructure and applications into the AWS cloud. You are planning to deploy an Elastic Load Balancer (ELB) that distributes traffic for a web application running on EC2 instances. You still have some application servers running on-premise and you would like to distribute application traffic across both your AWS and on-premises resources.
How can this be achieved?

Provision a Direct Connect connection between your on-premises location and AWS and create a target group on an ALB to use Instance ID based targets for both your EC2 instances and on-premises servers

Correct Answer

Provision a Direct Connect connection between your on-premises location and AWS and create a target group on an ALB to use IP based targets for both your EC2 instances and on-premises servers

Provision an IPSec VPN connection between your on-premises location and AWS and create a CLB that uses cross-zone load balancing to distribute traffic across EC2 instances and on-premises servers

This cannot be done, ELBs are an AWS service and can only distribute traffic within the AWS cloud

• The ALB (and NLB) supports IP addresses as targets
• Using IP addresses as targets allows load balancing any application hosted in AWS or on-premises using IP addresses of the application back-ends as targets
• You must have a VPN or Direct Connect connection to enable this configuration to work
• You cannot use instance ID based targets for on-premises servers and you cannot mix instance ID and IP address target types in a single target group
• The CLB does not support IP addresses as targets
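
A boto3 sketch of an IP-based target group mixing EC2 and on-premises back-ends. The VPC ID, names and IP addresses are placeholders, and the on-premises address must be reachable over the Direct Connect link:

    import boto3

    elbv2 = boto3.client("elbv2")

    tg = elbv2.create_target_group(
        Name="web-app-ip-targets",
        Protocol="HTTP",
        Port=80,
        VpcId="vpc-0123456789abcdef0",
        TargetType="ip",  # register targets by IP address instead of instance ID
    )["TargetGroups"][0]

    elbv2.register_targets(
        TargetGroupArn=tg["TargetGroupArn"],
        Targets=[
            {"Id": "10.0.1.25", "Port": 80},      # EC2 instance in the VPC
            {"Id": "192.168.10.50", "Port": 80},  # on-premises server reached via Direct Connect
        ],
    )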

UnansweredQuestion 59

0 / 1 pts

A Solutions Architect is developing a mobile web app that will provide access to health-related data. The web apps will be tested on Android and iOS devices. The Architect needs to run tests on multiple devices simultaneously and to be able to reproduce issues, and record logs and performance data to ensure quality before release.
What AWS service can be used for these requirements?

AWS Cognito

AWS Workspaces

Amazon Appstream 2.0

Correct Answer

AWS Device Farm

• AWS Device Farm is an app testing service that lets you test and interact with your Android, iOS, and web apps on many devices at once, or reproduce issues on a device in real time
• Amazon Cognito lets you add user sign-up, sign-in, and access control to your web and mobile apps quickly and easily. It is not used for testing
• Amazon Workspaces is a managed, secure cloud desktop service
• Amazon AppStream 2.0 is a fully managed application streaming service

UnansweredQuestion 60

0 / 1 pts

There is a new requirement to implement in-memory caching for a Financial Services application due to increasing read-heavy load. The data must be stored persistently. Automatic failover across AZs is also required.
Which two items from the list below are required to deliver these requirements? (choose 2)

Read replica with failover mode enabled

Correct Answer

ElastiCache with the Redis engine

ElastiCache with the Memcached engine

Multiple nodes placed in different AZs

Correct Answer

Multi-AZ with Cluster mode and Automatic Failover enabled

• Redis engine stores data persistently
• Memcached engine does not store data persistently
• Redis engine supports Multi-AZ using read replicas in another AZ in the same region
• You can have a fully automated, fault tolerant ElastiCache-Redis implementation by enabling both cluster mode and multi-AZ failover
• Memcached engine does not support Multi-AZ failover or replication
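
A boto3 sketch of a Redis replication group with automatic Multi-AZ failover. The group ID, node type and node count are assumptions:

    import boto3

    boto3.client("elasticache").create_replication_group(
        ReplicationGroupId="finserv-cache",
        ReplicationGroupDescription="Read-heavy cache with persistence and failover",
        Engine="redis",               # Redis persists data; Memcached does not
        CacheNodeType="cache.r5.large",
        NumCacheClusters=2,           # primary plus one replica, placed in different AZs
        AutomaticFailoverEnabled=True,
        MultiAZEnabled=True,
    )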

UnansweredQuestion 61

0 / 1 pts

An application you are designing receives and processes files. The files are typically around 4GB in size and the application extracts metadata from the files which typically takes a few seconds for each file. The pattern of updates is highly dynamic with times of little activity and then multiple uploads within a short period of time.
What architecture will address this workload the most cost efficiently?

Correct Answer

Upload files into an S3 bucket, and use the Amazon S3 event notification to invoke a Lambda function to extract the metadata

Place the files in an SQS queue, and use a fleet of EC2 instances to extract the metadata

Store the file in an EBS volume which can then be accessed by another EC2 instance for processing

Use a Kinesis data stream to store the file, and use Lambda for processing

• Storing the file in an S3 bucket is the most cost-efficient solution, and using S3 event notifications to invoke a Lambda function works well for this unpredictable workload
• With Kinesis Data Streams you must provision shard capacity, and the consumers typically run on EC2 instances, so you pay for capacity even when the application is not receiving files. This is not as cost-efficient as storing the files in an S3 bucket prior to using Lambda for the processing
• SQS queues have a maximum message size of 256KB. You can use the extended client library for Java to use pointers to a payload on S3, but the maximum payload size is 2GB
• Storing the file in an EBS volume and using EC2 instances for processing is not cost efficient
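
A hedged sketch of the Lambda handler for this pattern. The metadata-extraction step is a placeholder, and the function is assumed to be subscribed to the bucket’s ObjectCreated event notifications:

    import boto3

    s3 = boto3.client("s3")

    def handler(event, context):
        """Invoked by an S3 ObjectCreated event notification."""
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]

            # HeadObject is enough for basic metadata (size, content type, custom x-amz-meta-* headers)
            head = s3.head_object(Bucket=bucket, Key=key)
            print(f"{key}: {head['ContentLength']} bytes, {head.get('ContentType')}")

            # Real metadata extraction from the ~4GB file body would stream the object here
            # (placeholder only; not implemented in this sketch)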

UnansweredQuestion 62

0 / 1 pts

A Solutions Architect needs to transform data that is being uploaded into S3. The uploads happen sporadically and the transformation should be triggered by an event. The transformed data should then be loaded into a target data store.
What services would be used to deliver this solution in the MOST cost-effective manner? (choose 2)

Configure CloudFormation to provision a Kinesis data stream to transform the data and load it into S3

Correct Answer

Use AWS Glue to extract, transform and load the data into the target data store

Correct Answer

Configure S3 event notifications to trigger a Lambda function when data is uploaded and use the Lambda function to trigger the ETL job

Configure a CloudWatch alarm to send a notification to CloudFormation when data is uploaded

Configure CloudFormation to provision AWS Data Pipeline to transform the data

• S3 event notifications triggering a Lambda function is completely serverless and cost-effective
• AWS Glue can trigger ETL jobs that will transform that data and load it into a data store such as S3
• Kinesis Data Streams is used for processing data, rather than extracting and transforming it. The Kinesis consumers are EC2 instances, which are not as cost-effective as serverless solutions
• AWS Data Pipeline can be used to automate the movement and transformation of data, but it relies on other services to actually transform the data

UnansweredQuestion 63

0 / 1 pts

A company has divested a single business unit and needs to move the AWS account owned by the business unit to another AWS Organization. How can this be achieved?

Create a new account in the destination AWS Organization and migrate resources

Create a new account in the destination AWS Organization and share the original resources using AWS Resource Access Manager

Correct Answer

Migrate the account using the AWS Organizations console

Migrate the account using AWS CloudFormation

• Accounts can be migrated between organizations. To do this you must have root or IAM access to both the member and master accounts. Resources will remain under the control of the migrated account.
• You do not need to use AWS CloudFormation. When there are many accounts to migrate you can use the Organizations API or AWS CLI, and CloudFormation could add further automation, but it is not necessary for this scenario.
• You do not need to create a new account in the destination AWS Organization as you can just migrate the existing account.

UnansweredQuestion 64

0 / 1 pts

A new application is to be published in multiple regions around the world. The Architect needs to ensure only 2 IP addresses need to be whitelisted. The solution should intelligently route traffic for lowest latency and provide fast regional failover.
How can this be achieved?

Correct Answer

Launch EC2 instances into multiple regions behind an NLB and use AWS Global Accelerator

Launch EC2 instances into multiple regions behind an ALB and use Amazon CloudFront with a pair of static IP addresses

Launch EC2 instances into multiple regions behind an NLB with a static IP address

Launch EC2 instances into multiple regions behind an ALB and use a Route 53 failover routing policy

• AWS Global Accelerator uses the vast, congestion-free AWS global network to route TCP and UDP traffic to a healthy application endpoint in the closest AWS Region to the user. This means it will intelligently route traffic to the closest point of presence (reducing latency). Seamless failover is ensured as AWS Global Accelerator uses anycast IP addresses, which means the IP does not change when failing over between regions, so there are no issues with client caches having incorrect entries that need to expire. This is the only solution that provides deterministic failover.
• An NLB with a static IP is a workable solution as you could configure a primary and secondary address in applications. However, this solution does not intelligently route traffic for lowest latency.
• A Route 53 failover routing policy uses a primary and standby configuration. Therefore, it sends all traffic to the primary until it fails a health check at which time it sends traffic to the secondary. This solution does not intelligently route traffic for lowest latency.
• Amazon CloudFront cannot be configured with “a pair of static IP addresses”.

UnansweredQuestion 65

0 / 1 pts

An e-commerce web application needs a highly scalable key-value database. Which AWS database service should be used?

Amazon RDS

Correct Answer

Amazon DynamoDB

Amazon ElastiCache

Amazon RedShift

• A key-value database is a type of nonrelational (NoSQL) database that uses a simple key-value method to store data. A key-value database stores data as a collection of key-value pairs in which a key serves as a unique identifier. Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability – this is the best database for these requirements.
• Amazon RDS is a relational (SQL) type of database, not a key-value / nonrelational database.
• Amazon RedShift is a data warehouse service used for online analytical processing (OLAP) workloads.
• Amazon ElastiCache is an in-memory caching database. This is not a nonrelational key-value database.