Difference between revisions of "Category:AWS"

From Christoph's Personal Wiki
Jump to: navigation, search
(Simple Workflow Service (SWF))
(CloudWatch)
 
(74 intermediate revisions by the same user not shown)
Line 1: Line 1:
This category/article is (currently) just a random collection of my notes on [http://aws.amazon.com/ Amazon Web Services] (AWS). I will organize each service type into a separate article, when I find the time.
+
This category/article is (currently) just a random collection of my notes on [http://aws.amazon.com/ Amazon Web Services] (AWS). I will organize each service-type into a separate article, when I find the time.
  
==Elastic Block Storage (EBS)==
+
==Identity and Access Management (IAM)==
* EBS Volume types:
+
: SEE: [https://aws.amazon.com/iam/faqs/ AWS IAM FAQs]
** General purpose SSD (GP2): up to 10,000 IOPS
+
** Provisioned IOPS SSD (IO1): more than 10,000 IOPS
+
** Magnetic (Standard): infrequently accessed storage
+
  
NOTE: One can not mount 1 EBS volume to multiple EC2 instances; use EFS instead.
+
'''AWS Identity and Access Management''' ('''IAM''') enables you to securely control access to AWS services and resources for your users. Using IAM, you can create and manage AWS users and groups, and use permissions to allow and deny their access to AWS resources.  
  
In order to enable encryption at rest using EC2 and Elastic Block Store, one needs to configure encryption when creating the EBS volume.
+
IAM is a feature of your AWS account offered at no additional charge. You will be charged only for use of other AWS services by your users.
 +
 
 +
* IAM Roles are more secure than storing your access key and secret access key on individual EC2 instances. This is consider a "best practice".
 +
* Roles are easier to manage
 +
* Roles can only be assigned when that EC2 instance is being provisioned (i.e., after provisioning an EC2 instance and adding an IAM Role to that instance, you are not able to delete the Role or add another Role. You can, however, add to or modify the existing Policies attached to the Role attached to the instance.)
 +
* Roles are universal, you can use them in any AWS region.
  
 
==Elastic Compute Cloud (EC2)==
 
==Elastic Compute Cloud (EC2)==
* EC2 provision types:
+
SEE: [[AWS/EC2]]
** On-Demand
+
** [https://aws.amazon.com/ec2/purchasing-options/reserved-instances/ Reserved]
+
*** Amazon EC2 Reserved Instances allow you to reserve Amazon EC2 computing capacity for 1 or 3 years, in exchange for a significant discount (up to 75%) compared to On-Demand instance pricing.
+
** [https://aws.amazon.com/ec2/spot/ Spot]
+
*** Amazon EC2 Spot instances allow you to bid on spare Amazon EC2 computing capacity. Since Spot instances are often available at a discount compared to On-Demand pricing, you can significantly reduce the cost of running your applications, grow your application's compute capacity and throughput for the same budget, and enable new types of cloud computing applications.
+
*** Example use cases: genomics, drug companies, etc.
+
*** If you terminate, you pay for the hour the instance was terminated
+
*** If Amazon terminates it, you do not pay for the hour the instance was terminated
+
*** [http://ec2price.com/ EC2 Spot prices]
+
  
* [https://aws.amazon.com/ec2/instance-types/ EC2 Instance Types]:
+
==Elastic Load Balancer (ELB)==
** D for Density
+
: SEE: [https://aws.amazon.com/ec2/faqs/ Amazon ELB FAQs]
** I for IOPS
+
** R for RAM
+
** T general purpose (think T2 Micro)
+
** M for Main choice (for general purpose apps)
+
** C for Compute
+
** G for Graphics
+
** mnemonic: <code>DIRTMCG</code>
+
  
* [http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html EC2 metadata]:
+
* Load Balancer types:
$ curl <nowiki>http://169.254.169.254/latest/meta-data/public-ipv4</nowiki> # returns the EC2 instance's public IPv4
+
** Application Load Balancer (ALB)
 +
**: Layer 7 Load Balancer
 +
**: Makes routing decisions at the application layer (HTTP/HTTPS), supports path-based routing, and can route requests to one or more ports on each EC2 instance or container instance in your VPC
 +
** Classic Load Balancer (ELB)
 +
**: Layer 4 Load Balancer
 +
**: Makes routing decisions at either the transport layer (TCP/SSL) or the application layer (HTTP/HTTPS), and support either EC2-Classic or a VPC.
  
NOTE: If one makes an Amazon Machine Image (AMI) public, this AMI is ''not'' immediately available across all regions, by default.
+
* ELBs are ''not'' free; one is charged by the hour and on a per GB basis of usage
 
+
With EC2 you can have 2 types of storage, EBS storage or Instance Store. EBS is persistent and if an EC2 instance is stopped with an EBS volume attached, there will be no data lost. Instance Store is ephemeral and if the EC2 instance is stopped, all data will be lost.
+
 
+
==Elastic Load Balancer (ELB)==
+
* ELBs are not free, you are charged by the hour and on a per GB basis of usage
+
  
 
* ELB supported ports:
 
* ELB supported ports:
Line 49: Line 35:
 
* ELB supported protocols:
 
* ELB supported protocols:
 
** HTTP, HTTPS, TCP, SSL
 
** HTTP, HTTPS, TCP, SSL
 +
 +
Instances monitored by ELBs are reported as either:
 +
* InService
 +
* OutofService
 +
 +
Health Checks check the instance's health by simply "talking" to it over HTTP/HTTPS (looking for specific files on the instance)
  
 
NOTE: One can have multiple SSL certificates (for multiple domain names) on a single Elastic Load Balancer.
 
NOTE: One can have multiple SSL certificates (for multiple domain names) on a single Elastic Load Balancer.
 +
 +
==CloudWatch==
 +
 +
SEE: [https://aws.amazon.com/cloudwatch/faqs/ CloudWatch FAQs]
 +
 +
Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications you run on AWS. You can use Amazon CloudWatch to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes to your AWS resources. Amazon CloudWatch can monitor AWS resources such as Amazon EC2 instances, Amazon DynamoDB tables, and Amazon RDS DB instances, as well as custom metrics generated by your applications and services, and any log files your applications generate. You can use Amazon CloudWatch to gain system-wide visibility into resource utilization, application performance, and operational health. You can use these insights to react and keep your application running smoothly.
 +
 +
* Services CloudWatch can monitor include: EC2, Classic ELB, ALB, EBS, S3, SNS, Lambda, DynamoDB, IoT, etc.
 +
 +
* Standard (free) monitoring = every 5 minutes
 +
* Detailed (not free) monitoring = every 1 minute
 +
 +
* Default CloudWatch EC2 monitoring metrics
 +
** CPU (e.g., CPU utilization, credit usage, credit balance)
 +
** Disk (e.g., read/write bytes/ops)
 +
** Network (e.g., traffic in/out, packets in/out)
 +
** Status Checks (instance-level and host/hypervisor-level)
 +
** Able to create custom metrics
 +
 +
'''CloudWatch Dashboards''' allow you to create customizable dashboards to see what is happening within your AWS account.
 +
* Bashboard widgets
 +
** Line (plot): compare metrics over time
 +
** Stacked area (plot): compare the total over time
 +
** Number: instantly see the latest value for a metric
 +
** Text: free text with [[:wikipedia:markdown|markdown]] formatting. Example:
 +
<pre>
 +
# Heading
 +
## Sub-heading
 +
Paragraphs are separated by a blank line. Text attributes *italic*, **bold**, ~~strikethrough~~ .
 +
 +
A [link](http://amazon.com). A link to this dashboard: [MyWebServer](#dashboards:name=MyWebServer).
 +
 +
[button:Button link](http://amazon.com) [button:primary:Primary button link](http://amazon.com)
 +
 +
Table | Header
 +
----|-----
 +
CloudWatch | Dashboards
 +
 +
```
 +
Text block
 +
ssh my-host
 +
```
 +
List syntax:
 +
 +
* CloudWatch
 +
* Dashboards
 +
  1. Graphs
 +
  1. Text widget
 +
</pre>
 +
 +
'''CloudWatch Alarms''' allow you to set alarms that notify you (e.g., via email) when particular thresholds (you set) are hit.
 +
 +
'''Amazon EventBridge''' (formerly '''CloudWatch Events''') helps you to respond to state changes in your AWS resources. When your resources change state they automatically send events into an event stream. You can create rules that match selected events in the stream and route them to targets to take action. You can also use rules to take action on a pre-determined schedule. For example, you can configure rules to:
 +
* Automatically invoke an AWS Lambda function to update DNS entries when an event notifies you that Amazon EC2 instance enters the Running state
 +
* Direct specific API records from CloudTrail to a Kinesis stream for detailed analysis of potential security or availability risks
 +
* Periodically invoke a built-in target to create a snapshot of an Amazon EBS volume
 +
 +
'''CloudWatch Logs''' helps you to aggregate, monitor, and store logs. Note: You must install an agent on the EC2 instance to use this service. For example, you can:
 +
* Monitor HTTP response codes in Apache logs
 +
* Receive alarms for errors in kernel logs
 +
* Count exceptions in application logs
 +
 +
Note the difference between CloudWatch and CloudTrail.
 +
 +
==CloudTrail==
 +
 +
[https://aws.amazon.com/cloudtrail/ AWS CloudTrail] is a web service that records AWS API calls for your account and delivers log files to you. The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service.
 +
 +
With CloudTrail, you can get a history of AWS API calls for your account, including API calls made via the AWS Management Console, AWS SDKs, command line tools, and higher-level AWS services (such as AWS CloudFormation). The AWS API call history produced by CloudTrail enables security analysis, resource change tracking, and compliance auditing.
 +
 +
==AWS Command Line Interface (CLI)==
 +
 +
The '''AWS Command Line Interface''' ('''CLI''') is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts.
 +
 +
The AWS CLI introduces a new set of simple file commands for efficient file transfers to and from Amazon S3.
  
 
==SDKs==
 
==SDKs==
Line 78: Line 145:
  
 
==Simple Storage Service (S3)==
 
==Simple Storage Service (S3)==
Amazon [https://aws.amazon.com/s3/ Simple Storage Service] (Amazon S3), provides developers and IT teams with secure, durable, highly-scalable cloud storage. Amazon S3 is easy to use object storage, with a simple web service interface to store and retrieve any amount of data from anywhere on the web. With Amazon S3, you pay only for the storage you actually use. There is no minimum fee and no setup cost.
+
SEE: [[AWS/S3]]
  
* Simple key-value store:
+
==Lambda==
** key = name of the object;
+
: SEE: [[AWS/Lambda]]
** value = the actual data (made up of a sequence of bytes);
+
** version ID (important for versioning); and
+
** metadata (data about the data you are storing)
+
** Sub-resources
+
** Access Control Lists (ACLs)
+
* S3 bucket URL: <nowiki>https://s3-<region>.amazonaws.com/<bucket_name></nowiki> (e.g., <nowiki>https://s3-us-west-1.amazonaws.com/foobar</nowiki>)
+
* SLA:
+
** availability: 99.99% (2 nines)
+
** durability: 99.99999999999% (11 nines)
+
* Files can be from 1 byte to 5 TB in size (split files larger than 5 TB into pieces to upload)
+
** Note that the largest size file you can transfer to S3 using a PUT operation is 5 GB
+
* Unlimited storage
+
* Files/objects are stored in "buckets"
+
* S3 is a universal namespace (i.e., bucket names must be unique globally; think domain names)
+
* Read-after-Write consistency for PUTs of new Objects
+
* Eventual Consistency for overwrite PUTs and DELETEs (can take some time to propagate)
+
* Lifecyle management
+
* Versioning
+
* Encryption (default Advanced Encryption Standard (AES) 256bit)
+
** In transit (SSL/TLS)
+
** At Rest:
+
*** Server Side Encryption (SSE):
+
**** S3 Managed Keys (SSE-S3; 256bit);
+
**** AWS Key Management Service, Managed Keys (SSE-KMS)
+
**** Server Siide Encryption with Customer Provided Keys (SSE-C)
+
** Client Side Encryption (user encypts data on their local machine and then upload to AWS S3)
+
* Secure your data with Bucket Policies and ACLs
+
* Storage tiers/classes:
+
** S3 99.99% (durable, immediately available, frequently accessed): stored across multiple devices in multiple facilities and is designed to sustain the loss of 2 facilities concurrently
+
** S3 IA (Infrequently Accessed) (durable, immediately available, infrequently accessed): for data that is accessed less frequently, but requires rapid access when needed. Lower fee than S3, but you are charged a retrieval fee
+
** S3 Reduced Redundancy Storage (RRS): designed to provide 99.99% availability/durability of objects over a given year (for objects where it is not critical if they are lost; e.g., thumbnails of images, as they can be easily regenerated). concurrent facility fault tollerance = 1
+
** Glacier: Very cheap, but used for archival only. It takes 3-5 hours to restore from Glacier.
+
  
* Storage Gateways:
+
==Databases==
** Gateway Stored Volumes (entire dataset is stored on site and is asynchronously backed up to S3)
+
: SEE: [https://aws.amazon.com/rds/faqs/ Amazon RDS FAQs]
** Gateway Cached Volumes (entire dataset is stored on S3 and the most frequently accessed data is cached on site)
+
** Gateway Virtual Tape Library (VTL) (used for backup and uses popular backup applications like NetBackup, Backup Exec, Veam, etc.)
+
  
* Import/Export Disk:
+
;RDS : Relational Database Server
** Import to EBS
+
** Import to S3
+
** Import to Glacier
+
** Export from S3
+
* Import/Export Snowball (only available in North America)
+
** Import to S3
+
** Export from S3
+
  
* S3 bucket names must contain only lower case characters
+
* RDS (OLTP) Relational Database Types:
* S3 stores data in alphabetical order (lexigraphical order)
+
** Aurora
* Largest file size one can transfer to S3 using a PUT operation: 5 GB ([http://docs.aws.amazon.com/AmazonS3/latest/dev/UploadingObjects.html Use multi-part upload for files larger than 5GB])
+
** MySQL Server
 +
** MariaDB
 +
** PostgreSQL
 +
** MS SQL Server
 +
** Oracle
  
* S3 Static Website:
+
;RDS - Backups, multi-AZs, and read replicas
** <nowiki>http://foobar.s3-website-us-west-2.amazonaws.com/index.html</nowiki> (a static website link)
+
** <nowiki>https://s3-us-west-2.amazonaws.com/foobar/demo.jpeg</nowiki> (''not'' a static website link)
+
** Static websites are always HTTP (not HTTPS, for now)
+
  
* S3 Cross Origin Resource Sharing (CORS)
+
* There are two different types of RDS backups:
** One needs to enable CORS for one S3 bucket to reference objects in another S3 bucket
+
# Automated Backups; and
 +
# Snapshots (manual)
  
* S3 Versioning
+
* Automated Backups
** Stores all versions of an object (including all writes, including deleting the object)
+
** Automated Backups allow you to recover your database to any point in time within a "retention period". The retention period can be between 1 and 35 days. Automated Backups will take a full daily snapshot and will also store transaction logs throughout the day. When you do a recovery, AWS will first choose the most recent daily backup, and then apply transaction logs relevant to that day. This allows you to do a point-in-time recovery down to a second, within the retention period.
** Great backup tool
+
** Automated Backups are enabled by default. The backup data is stored in S3 and you get free storage space equal to the size of your database. Example: If you have an RDS instance of 10 GB, you will get 10 GB worth of storage.
** Once enabled, versioning cannot be disabled, only suspended
+
** Backups are taken within a defined window. During the backup window, storage I/O may be suspended while your data is being backed up and you may experience elevated latency.
** Integrates with Lifecycle rules
+
** Versioning's MFA (Multi-Factor Authentication) delete capability, which uses MFA, can be used to provide an additional layer of security.
+
** Cross Region Replication, requires versioning enabled on the source bucket.
+
  
* S3 Lifecyle Management
+
* Snapshots
** Transition objects to Infrequent Access Storage Class (must wait a minimum of 30 days from initial upload for the object to transition to the new storage class; minimum object size 128KB) or Glacier Storage Class after x amount of days.
+
** RDS Snapshots are done ''manually'' (i.e., they are user initiated). They are stored even after you delete the original RDS instance (unlike automated backups).
** Infrequent retrieval: ~milliseconds
+
** Glacier retrieval: 3 - 5 hours
+
** With Versioning: Transition object versions as well (including deleting old/current versions)
+
** Can not use Reduced Durability Storage Class with Lifecycle Management
+
  
* CloudFront (a content delivery network {CDN})
+
* Restoring backups
** [https://aws.amazon.com/about-aws/global-infrastructure/ AWS Global CDN infrastructure]
+
** Whenever you restore either an Automated Backup or a manual Snapshot, the restored version of the database will be a ''new'' RDS instance with a ''new'' DNS endpoint.
** Edge Location: this is the location where content will be cached. This is different from an AWS Region/AZ (over 50 Edge Locations, as of April 2016)
+
** Origin: This is the origin of all the files/objects that the CDN will distribute. This can be an S3 Bucket, an EC2 Instance, an Elastic Load Balancer, or Route53.
+
*** Possible to have multiple origin paths in the same distribution
+
** Distribution: This is the name given to the CDN, which consists of a collection of Edge Locations.
+
*** Web Distribution: Typically used for websites.
+
*** RTMP: Used for media streaming (e.g., Adobe Flash)
+
** Edge location are not just READ only, one can write to them as well (i.e., PUT and object to them)
+
** Objects are cached for the life of the TTL (Time To Live)
+
** One can clear the cache of an object stored on an Edge Location before the TTL expires, but one will be charged for that service.
+
  
* S3 Transfer Acceleration:
+
* Encryption
** Instead of uploading object directly to an S3 bucket in a given region, one can upload a object directly to the nearest Edge Location and AWS will then transfer the object to the S3 bucket.
+
** Encryption at rest is supported for MySQL, Oracle, SQL Server, PostreSQL, and MariaDB. Encryption is done using the AWS Key Management Service (KMS). Once your RDS instance is encrypted, the data stored at rest in the underlying storage is encrypted, as are its automated backups, read replicas, and snapshots.
** Example URL: <bucket_name>.s3-accelerate.amazonaws.com
+
** As of April 2017, encrypting an existing RDS instance is not supported. To use RDS encryption for an existing database, create a new instance with encryption enabled and migrate your data into it.
  
==Databases==
+
* Multi-AZ RDS
: SEE: [https://aws.amazon.com/rds/faqs/ Amazon RDS FAQs]
+
** Allows you to have an exact copy of your production database in another Availability Zone (AZ). AWS handles the replication for you, so when you production database is written to, this write will automatically be synchronised to the standby database.
 +
** In the event of a planned database maintenance, instance failure, or an AZ failure, Amazon RDS will automatically failover to the standby so that database operations can resume quickly without administrative intervention.
 +
** This is meant for Disaster Recovery (DR) only. It is not primarily used for improving performance. For performance improvement, you need Read Replicas.
 +
** You cannot use the secondary database as an independent read node when you have deployed an RDS instance into multiple AZs (use Read Replicas instead).
  
;RDS : Relational Database Server
+
* Read Replicas
 +
** Allow you to have a ''read-only'' copy of your production database. This is achieved by using ''asynchronous'' replication from the primary RDS instance to the read replica. You use read replica's primarily for very read-heavy database workloads.
 +
** Use for ''scaling''! Not for DR!
 +
** You must have automatic backups turned on in order to deploy a read replica.
 +
** You can have up to 5 read replica copies of any database.
 +
** You can have read replicas of read replicas (but watch out for latency).
 +
** Each read replica will have its own DNS endpoint.
 +
** You cannot have Read Replicas that have Multi-AZ.
 +
** You can, however, create Read Replicas of Multi-AZ source databases.
 +
** Read Replicas can be promoted to be their own databases (but this breaks the replication).
 +
** Able to create a Read Replica in a second region for MySQL and MariaDB. Not for PostgreSQL.
 +
** There is no charge associated with data transfer when replicating data from your primary RDS instance to your secondary instance.
 +
** Supported databases:
 +
*** MySQL
 +
*** PostgreSQL
 +
*** MariaDB
  
* RDS (OLTP) Relational Database Types:
+
Provisioned IOPS volumes can range in size from 100 GB to 6 TB for MySQL, MariaDB, PostgreSQL, and Oracle DB engines. SQL Server Express and Web editions can range in size from 100 GB to 4 TB, while SQL Server Standard and Enterprise editions can range in size from 200 GB to 4 TB.
** MS SQL Server
+
 
** Oracle
+
;DynamoDB vs. RDS
** MySQL Server
+
* DynamoDB offers "push button" scaling (i.e., you can scale your database on-the-fly, without any down time).
** PostgreSQL
+
* With RDS, it is not so easy and you usually have to use a bigger instance size or add a Read Replica.
** Aurora
+
 
** MariaDB
+
;DynamoDB
 +
* A fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale. It is a fully managed database and supports both document and key-value data models. Its flexible data model and reliable performance make it a great fit for mobile, web, gaming, ad-tech, IoT, and many other applications.
 +
* Stored on SSD storage.
 +
* Spread across 3 geographically distinct data centres.
 +
* Eventually Consistent Reads (default)
 +
** Consistency across all copies of data is usually reached within a second. Repeating a read after a short time should return the updated data.
 +
** Best read performance.
 +
* Strongly Consistent Reads
 +
** Returns a result that reflects all writes that received a successful response prior to the read.
  
 
* Non-relational Databases:
 
* Non-relational Databases:
Line 190: Line 228:
 
*** Document (=Row)
 
*** Document (=Row)
 
*** Key-value pairs (=Fields)
 
*** Key-value pairs (=Fields)
E.g., JSON/NoSQL document
+
An example (JSON) NoSQL document:
 
<pre>
 
<pre>
 
{
 
{
Line 206: Line 244:
 
</pre>
 
</pre>
  
* Redshift (OLAP)
+
;Data Warehousing
 +
* Used for business intelligence (BI). Tools include: Cognos, Jaspersoft, SQL Server Reporting Services, Oracle Hyperion, SAP NetWeaver, etc.
 +
* Used to pull in very large and complex datasets. Usually used by management to perform queries on data (e.g., current performance vs. targets, etc.).
 +
 
 +
* OLTP vs. OLAP
 +
** Online Transaction Processing (OLTP) differs from Online Analytics Processing (OLAP) in terms of the types of queries performed.
 +
** OLTP Example:
 +
*** Return order number: 1234567
 +
*** Pulls up a row of data (e.g., Name, Data, Address to deliver to, Delivery Status, etc.)
 +
** OLAP Example
 +
*** Return net profit for EMEA and Pacific for the Digital Radio Product
 +
*** Pulls in large numbers of records:
 +
***: Sum of radios sold in EMEA region
 +
***: Sum of radios sold in Pacific region
 +
***: Unit cost of radio in each region
 +
***: Sales price of each radio
 +
***: Sales price - unit cost
 +
 
 +
Data Warehousing databases use a different type of architecture, both from a database perspective and the infrastructure layer.
 +
 
 +
; Redshift (OLAP)
 +
Amazon Redshift is a fast and powerful, fully managed, petabyte-scale data warehouse service in the Cloud. Customers can start small for just $0.25 per hour with no commitment or upfront costs and scale to a petabyte or more for $1,000 per terabyte per year, less than a tenth of most other data warehousing solutions.
 +
 
 +
* Start out: Single Node (160 GB)
 +
* Scale: Use Multi-Node, which consists of:
 +
** Leader Node (manages client connections and receives queries)
 +
** Compute Node (store data and perform queries and computations). Able to have up to 128 Compute Nodes.
 +
 
 +
* Columnar Data Storage
 +
** Instead of storing data as a series of rows, Redshift organizes the data by column. Unlike row-based systems, which are ideal for transaction processing, column-based systems are ideal for data warehousing and analytics, where queries often involve aggregates performed over large datasets. Since only the columns involved in the queries are processed and columnar data is stored sequentially on the storage media, column-based systems require far fewer I/Os, greatly improving query performance (up to 10x faster).
 +
 
 +
* Advanced Compression
 +
** Columnar data stores can be compressed much more than row-based data stores, because similar data is stored sequentially on disk. Redshift employs multiple compression techniques and can often achieve significant compression relative to traditional relational data stores. In addition, Redshift does not require indexes or materialized views and, as such, uses less space than traditional relational database systems. When loading data into an empty table, Redshift automatically samples your data and selects the most appropriate compression scheme.
 +
 
 +
* Massively Parallel Processing (MPP)
 +
** Redshift automatically distributes data and query load across all nodes.
 +
** Redshift makes it easy to add nodes to your data warehouse and enables you to maintain fast query performance as your data warehouse grows.
 +
 
 +
* Redshift pricing (you are charged for the following):
 +
** Compute Node Hours
 +
*** Total number of hours you run across all your compute nodes for the billing period. You are billed for 1 unit per node per hour, so a 3-node data warehouse cluster running persistently for an entire month would incur 2,160 instance hours (60x24x30).
 +
*** You are not charged for the Leader Node hours; only Compute Nodes will incur charges.
 +
** Backup
 +
** Data transfer (only within a VPC, not outside it)
 +
 
 +
* Redshift Security
 +
** Encrypted in transit using SSL
 +
** Encrypted at rest using AES-256 encryption
 +
** By default, Redshift takes care of key management. However you can also,
 +
*** Manage your own keys through Hardware Security Module (HSM); or
 +
*** Use AWS Key Management Service (KMS)
 +
 
 +
* Redshift Availability
 +
** Not designed to be multi-AZ. As of April 2017, only available in 1 AZ.
 +
** However, you can restore snapshots to new AZs in the event of an outage.
  
* Elasticache
+
; ElastiCache
 +
* A web service that makes it easy to deploy, operate, and scale and in-memory cache in the Cloud. The service improves the performance of web applications by allowing you to retrieve information for fast, managed, in-memory caches, instead of relying entirely on slower disk-based databases.
 +
* Amazon ElastiCache can be used to significantly improve latency and throughput for many read-heavy application workloads (e.g., social media, gaming, media sharing, and Q&A portals) or compute-intensive workloads (e.g., a recommendation engine).
 +
* Caching improves application performance by storing critical pieces of data in memory for low-latency access. Cached information may include the results of I/O-intensive database queries or the results of computationally-intensive calculations.
 +
* ElastiCache is a good choice if your database is particularly read-heavy and not prone to frequent changing.
 +
* ElastiCache supports two open-source in-memory caching engines:
 
** Memcached
 
** Memcached
 +
*** A widely adopted memory object caching system. ElastiCache is protocol-compliant with Memcached, so popular tools that you use today with existing Memcached environments will work seamlessly with the service.
 
** Redis
 
** Redis
 +
*** A popular open-source in-memory key-value store that supports data structures such as sorted sets and lists. ElastiCache supports Master / Slave replication and Multi-AZ, which can be used to achieve cross-AZ redundancy.
  
DMS: (Online) Database Migration Service (convert Oracle to MySQL, etc.)
+
;Database Migration Service (DMS)
 +
* Announced at re:Invent 2015.
 +
* Allows you to (live) migrate your production database to AWS.
 +
* Once the migration has started, AWS manages all the complexities of the migration process like data type transformation, compression, and parallel transfer (for faster data transfer) while ensuring that data changes to the source database that occur during the migration process are automatically replicated to the target.
 +
* The AWS schema conversion tool automatically converts the source database schema and a majority of the custom code, including views, stored procedures, and functions, to a format compatible with the target database.
 +
* Example: convert Oracle to MySQL, etc.
  
* Data Warehousing (business intelligence; e.g., Cognos, Jaspersoft, etc.)
+
; Aurora
* Online Transaction Processing (OLTP); vs.
+
Amazon Aurora is a MySQL-compatible, relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. Aurora provides up to five times better performance than MySQL at a price point 1/10th that of a commercial database while delivering similar performance and availability. It was announced at re:Invent 2014.
* Online Analytics Processing (OLAP)
+
 
 +
* Aurora scaling
 +
** Start with 10 GB, scales in 10 GB increments up to 64 TB (Storage Autoscaling; i.e., it autoscales for you)
 +
** Compute resources can scale up to 32 vCPUs and 244 GB of memory.
 +
** 2 copies of your data is maintained in each availability zone, with a minimum of 3 AZs. As such, Aurora maintains 6 copies of your data.
 +
** Aurora is designed to transparently handle the loss of up to two copies of data without affecting database ''write'' availability and up to three copies without affecting ''read'' availability.
 +
** Aurora storage is self-healing. Data blocks and disks are continuously scanned for errors and repaired automatically.
 +
 
 +
* Aurora Replicas
 +
** There are 2 types of Replicas:
 +
**# Aurora Replicas (currently 15). Automatic failover.
 +
**# MySQL Read Replicas (currently 5). No automatic failover.
  
 
==DynamoDB==
 
==DynamoDB==
'''IMPORTANT!''' [https://aws.amazon.com/dynamodb/faqs/ Read FAQ on AWS DynamoDB]
+
: SEE: [https://aws.amazon.com/dynamodb/faqs/ Amazon DynamoDB FAQs] (import to read for the exams!)
  
 
Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent single-digit millisecond latency at any scale. It is a fully managed database and supports both document and key-value data models. Its flexible data model and reliable performance make it a great fit for mobile, web, gaming, ad-tech, IoT, and many other applications.
 
Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent single-digit millisecond latency at any scale. It is a fully managed database and supports both document and key-value data models. Its flexible data model and reliable performance make it a great fit for mobile, web, gaming, ad-tech, IoT, and many other applications.
Line 264: Line 379:
  
 
* Supports up to 35 levels of nesting (JSON {foo,{bar,{baz,...)
 
* Supports up to 35 levels of nesting (JSON {foo,{bar,{baz,...)
 +
* For any AWS account, there is an initial limit of 256 tables per region (one can, however, request an increase from Amazon)
 +
* You can decrease the <code>ReadCapacityUnits</code> or <code>WriteCapacityUnits</code> settings for a table, but no more than ''four times per table in a single UTC calendar day''. In a single operation, you can decrease the provisioned throughput for a table, for any global secondary indexes on that table, or for any combination of these.
  
 
* Pricing
 
* Pricing
Line 270: Line 387:
 
*** Read throughput $0.0065 per hour for every 50 units
 
*** Read throughput $0.0065 per hour for every 50 units
 
** First 25GB stored per month is free
 
** First 25GB stored per month is free
** Storage costs of $0.25 GB per month thereafter
+
** Storage costs of $0.25 per GB per month thereafter
  
 
<div style="margin: 10px; padding: 5px; border: 2px solid #18e; background-color: #eee;">
 
<div style="margin: 10px; padding: 5px; border: 2px solid #18e; background-color: #eee;">
Line 276: Line 393:
  
 
Assume that one's application needs to perform 1 million writes and 1 million reads per day, while storing 28 GB of data.
 
Assume that one's application needs to perform 1 million writes and 1 million reads per day, while storing 28 GB of data.
First, one needs to calculate how many writes and reads per second one needs.
 
  
* 1 million evenly spread writes per data is equivalents to:
+
* First, one needs to calculate how many writes and reads per second one needs.
 +
** 1 million evenly spread writes per data is equivalent to:
  
 
  1,000,000 (writes) / 24 (hours) / 60 (minutes) / 60 (seconds) = 11.6 writes per second
 
  1,000,000 (writes) / 24 (hours) / 60 (minutes) / 60 (seconds) = 11.6 writes per second
Line 298: Line 415:
 
Total Cost = $0.1872 per day + $0.0374 per day + storage of $0.75 per month, thus:
 
Total Cost = $0.1872 per day + $0.0374 per day + storage of $0.75 per month, thus:
  
  (30 * ($0.1872 _ $0.0374)) + $0.75 = $7.488
+
  (30 * ($0.1872 + $0.0374)) + $0.75 = $7.488
  
 
Answer: '''$7.488/month'''
 
Answer: '''$7.488/month'''
Line 314: Line 431:
 
*** Partition Key and Sort Key (Hash & Range) composed of two attributes
 
*** Partition Key and Sort Key (Hash & Range) composed of two attributes
  
; Partition Key : DynamoDB uses the partition key's value as input to an internal hash function. The output from the has function determines the partition (this is simply the physical location in which the data is stored).
+
; Partition Key : DynamoDB uses the partition key's value as input to an internal hash function. The output from the hash function determines the partition (this is simply the physical location in which the data is stored).
 
: No two items in a table can have the same partition key value!
 
: No two items in a table can have the same partition key value!
  
; Composite Key (Partition Key and Sort Key) : DynamoDB uses the partition key's value as input to an internal has function. The output from the has function determines the partition (this is simply the physical location in which the data is stored).
+
; Composite Key (Partition Key and Sort Key) : DynamoDB uses the partition key's value as input to an internal hash function. The output from the hash function determines the partition (this is simply the physical location in which the data is stored).
 
: The two items can have the same partition key, but the ''must'' have a different sort key.
 
: The two items can have the same partition key, but the ''must'' have a different sort key.
 
: All items with the same partition key are stored together, in sorted order by the sort key value.
 
: All items with the same partition key are stored together, in sorted order by the sort key value.
Line 323: Line 440:
 
* Indexes
 
* Indexes
 
** Local Secondary Index
 
** Local Secondary Index
*** Has the ''same'' partition key, different sort key.
+
*** an index that has the same hash key as the table, but a different range key. A local secondary index is "local" in the sense that every partition of a local secondary index is scoped to a table partition that has the same hash key.
*** Can ''only'' be created at table creation. They cannot be removed or modified later.
+
*** has the ''same'' partition key and a ''different'' sort key.
 +
*** can ''only'' be created at table creation. They cannot be removed or modified later.
 +
*** maximum of [http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Limits.html 5 local secondary indexes per table]
 +
*** each table can have up to 20 projected non-key attributes, in total across all local secondary indexes within the table. Each index may also specify that all non-key attributes from the primary index are projected. (note: "projections" are the set of attributes that is copied into a local secondary index.)
 
** Global Secondary Index
 
** Global Secondary Index
*** Has ''different'' partition key and different sort key.
+
*** an index with a hash or a hash-and-range key that can be different from those on the table. A global secondary index is considered "global" because queries on the index can span all items in a table, across all partitions.
*** Can be created at table creation or added later.
+
*** has a ''different'' partition key and a ''different'' sort key.
 +
*** can be created at table creation or added later.
 +
*** maximum of [http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Limits.html 5 global secondary indexes per table]
  
 
* DynamoDB Streams
 
* DynamoDB Streams
Line 352: Line 474:
 
A Scan operation always scans the entire table, then filters out values to provide the desired result, essentially adding the extra step of removing data from the result set. Avoid using a Scan operation on a large table with a filter that removes many results, if possible. Also, as a table grows, the Scan operation slows. The Scan operation examines every item for the requested values, and can use up the provisioned throughput for a large table in a single operation.
 
A Scan operation always scans the entire table, then filters out values to provide the desired result, essentially adding the extra step of removing data from the result set. Avoid using a Scan operation on a large table with a filter that removes many results, if possible. Also, as a table grows, the Scan operation slows. The Scan operation examines every item for the requested values, and can use up the provisioned throughput for a large table in a single operation.
  
For quicker response times, design your tables in a way that can use the Query, Get, or <code>BatchGetItem</code> APIs, instead. Alternatively, design your application to use Scan operations in a way that minimizes the impact on your table's request rate.
+
For quicker response times, design your tables in a way that can use the <code>Query</code>, <code>GetItem</code>, or <code>BatchGetItem</code> APIs, instead. Alternatively, design your application to use Scan operations in a way that minimizes the impact on your table's request rate.
 +
 
 +
What happens if you exceed your throughput? <code>400 HTTP Status Code - ProvisionedThroughputExceededException</code>
 +
You exceeded your maximum allowed provisioned throughput for a table or for one or more global secondary indexes.
 +
 
 +
; DynamoDB Conditional Writes : If item = $10, then update to $12 (conditional writes are idempotent)
 +
: E.g., two users try to update the same item at the same time
 +
 
 +
; DynamoDB Atomic Counters : DynamoDB supports Atomic Counters, where you use the <code>UpdateItem</code> operation to increment or decrement the value of an existing attribute (or "field" in a table) without interfering with other write requests. (All write requests are applied in the order in which the were received.) For example, a web application might with to maintain a counter per visitor to their site, the application would need to increment this counter regardless of its current value.
 +
: Atomic Counters are ''not'' idempotent. This mean that the counter will increment each time you call <code>UpdateItem</code>. If you suspect that a previous request was unsuccessful, your application could retry the <code>UpdateItem</code> operation; however, this would risk updating the counter twice. This might be acceptable for a web site counter, because you can tolerate slightly over- or under-counting visitors. However, in a banking application, it would be safer to use conditional updates rather than atomic counters.
 +
 
 +
If you application needs to read multiple items, you can use the <code>BatchGetItem</code> API. A single <code>BatchGetItem</code> request can retrieve up to 1 MB of data, which can contain as many as 100 items. In addition, a single <code>BatchGetItem</code> request can retrieve items from multiple tables.
 +
 
 +
===DynamoDB API===
 +
''Note: This is an incomplete list. The following are the main API calls one can expect to see on an exam.''
 +
 
 +
;<code>CreateTable</code> : Creates a table and specifies the primary index used for data access.
 +
;<code>UpdateTable</code> : Updates the provisioned throughput values for the given table.
 +
;<code>DeleteTable</code> : Deletes a table.
 +
;<code>DescribeTable</code> : Returns table size, status, and index information.
 +
;<code>ListTables</code> : Returns a list of all tables associated with the current account and endpoint.
 +
;<code>PutItem</code> : Creates a new item, or replaces an old item with a new item (including all the attributes). If an item already exists in the specified table with the same primary key, the new item completely replaces the existing item. You can also use conditional operators to replace an item only if its attribute values match certain conditions, or to insert a new item only if that item does not already exist.
 +
;<code>BatchWriteItem</code> : Inserts, replaces, and deletes multiple items across multiple tables in a single request, but not as a single transaction. Supports batches of up to 25 items to Put or Delete, with a maximum total request size of 16 MB.
 +
;<code>UpdateItem</code> : Edits an existing item's attributes. You can also use conditional operators to perform an update only if the item's attribute values match certain conditions.
 +
;<code>DeleteItem</code> : Deletes a single item in a table by primary key. You can also use conditional operators to perform a delete an item only if the item's attribute values match certain conditions.
 +
;<code>GetItem</code> : The <code>GetItem</code> operation returns a set of Attributes for an item that matches the primary key. The <code>GetItem</code> operation provides an eventually consistent read by default. If eventually consistent reads are not acceptable for your application, use <code>ConsistentRead</code>.
 +
;<code>BatchGetItem</code> : The <code>BatchGetItem</code> operation returns the attributes for multiple items from multiple tables using their primary keys. A single response has a size limit of 16 MB and returns a maximum of 100 items. Supports both strong and eventual consistency.
 +
;<code>Query</code> : Gets one or more items using the table primary key, or from a secondary index using the index key. You can narrow the scope of the query on a table by using comparison operators or expressions. You can also filter the query results using filters on non-key attributes. Supports both strong and eventual consistency. A single response has a size limit of 1 MB.
 +
;<code>Scan</code> : Gets all items and attributes by performing a full scan across the table or a secondary index. You can limit the return set by specifying filters against one or more attributes.
 +
: A <code>Scan</code> operation on a table or secondary index has a limit of 1MB of data per operation. After the 1MB limit, it stops the operation and returns the matching values up to that point, and a <code>LastEvaluatedKey</code> to apply in a subsequent operation, so that you can pick up where you left off.
 +
 
 +
===Using Web Identity Providers with DynamoDB===
 +
 
 +
One can authenticate users using Web Identity providers (e.g., Facebook, Google, Amazon, or any other Open-ID Connect-compatible identity provider). This is done using <code>AssumeRoleWithWebIdentity</code> API.
 +
 
 +
You will need to create a role first.
 +
 
 +
# Authenticate with Identity Provider (e.g., Facebook): Log into Facebook with your username + password
 +
# Facebook returns a Web Identity Token
 +
# AssumeRoleWithWebIdentity request (containing Web Identity Token, App ID of provider, and ARN of role) is sent to AWS Security Token Service
 +
# Amazon then issues Temporary Security Credentials (limit from 15 minutes to 1 hour; default 1 hour)
 +
#: Credentials contain:
 +
## AccessKeyID, SecretAccessKey, SessionToken
 +
## Expiration (time limit)
 +
## AssumeRoleID
 +
## SubjectFromWebIdentity Token (the unique ID that appears in an IAM policy variable for this particular identity provider)
 +
# Using the above credentials, the user is allowed to access DynamoDB
 +
 
 +
===Example exam questions===
  
 
* DynamoDB provisioned throughput calculations
 
* DynamoDB provisioned throughput calculations
Line 360: Line 530:
 
*** Strongly consistent Reads consist of 1 read per second.
 
*** Strongly consistent Reads consist of 1 read per second.
 
** Unit of Write provisioned throughput
 
** Unit of Write provisioned throughput
*** All write are 1 KB in size.
+
*** All writes are 1 KB in size.
 
*** All writes consist of 1 write per second.
 
*** All writes consist of 1 write per second.
  
===Example questions===
 
 
* The Magic Formula
 
* The Magic Formula
  
Line 373: Line 542:
  
 
* First calculate how many READ units per item we need
 
* First calculate how many READ units per item we need
* 1 KB rounded up to nearest 4 KB increment = 8 KB
+
* 5 KB rounded up to nearest 4 KB increment = 8 KB
 
* 8 KB / 4 KB = 2 read units per item
 
* 8 KB / 4 KB = 2 read units per item
 
* 600 / 60 = 10 items per second
 
* 600 / 60 = 10 items per second
Line 415: Line 584:
 
* Using eventual consistency, we get 15 / 2 reads per second = 7.5 => 8
 
* Using eventual consistency, we get 15 / 2 reads per second = 7.5 => 8
  
Answer: '''8 units of read throughput'''
+
Answer: '''8 units of READ throughput'''
 
</div>
 
</div>
  
 
<div style="margin: 10px; padding: 5px; border: 2px solid #18e; background-color: #eee;">
 
<div style="margin: 10px; padding: 5px; border: 2px solid #18e; background-color: #eee;">
Question: You have an application that needs to read 25 items of 13kb in size per second. Your application uses eventually consistent reads. What should you set the read throughput to?
+
Question: You have an application that needs to read 25 items of 13kb in size per second. Your application uses eventually consistent reads. What should you set the READ throughput to?
  
 
* First calculate how many read units per item we need
 
* First calculate how many read units per item we need
Line 427: Line 596:
 
* Using eventual consistency, we get 100 / 2 reads per second = 50
 
* Using eventual consistency, we get 100 / 2 reads per second = 50
  
Answer: '''50 units of read throughput'''
+
Answer: '''50 units of READ throughput'''
 
</div>
 
</div>
  
 
<div style="margin: 10px; padding: 5px; border: 2px solid #18e; background-color: #eee;">
 
<div style="margin: 10px; padding: 5px; border: 2px solid #18e; background-color: #eee;">
Question: You have an application that requires to read 5 items of 10 KB per second using ''strong'' consistency. What should you set the read throughput to?
+
Question: You have an application that requires to read 5 items of 10 KB per second using ''strong consistency''. What should you set the READ throughput to?
  
 
* First calculate how many read units per item we need
 
* First calculate how many read units per item we need
Line 439: Line 608:
 
* Using strong consistency, we do ''not'' divide by 2
 
* Using strong consistency, we do ''not'' divide by 2
  
Answer: '''15 units of read throughput'''
+
Answer: '''15 units of READ throughput'''
 
</div>
 
</div>
  
 
<div style="margin: 10px; padding: 5px; border: 2px solid #18e; background-color: #eee;">
 
<div style="margin: 10px; padding: 5px; border: 2px solid #18e; background-color: #eee;">
Question: You have a motion sensor which writes 600 items of data every minute. Each item consists of 5kb. Your application uses ''strongly consistent'' reads. What should you set the read throughput to?
+
Question: You have a motion sensor which writes 600 items of data every minute. Each item consists of 5kb. Your application uses ''strongly consistent'' reads. What should you set the READ throughput to?
  
 
* First calculate how many read units per item we need
 
* First calculate how many read units per item we need
Line 452: Line 621:
 
* Using strong consistency, we do ''not'' divide by 2
 
* Using strong consistency, we do ''not'' divide by 2
  
Answer: '''20 units of read throughput'''
+
Answer: '''20 units of READ throughput'''
 
</div>
 
</div>
  
 
<div style="margin: 10px; padding: 5px; border: 2px solid #18e; background-color: #eee;">
 
<div style="margin: 10px; padding: 5px; border: 2px solid #18e; background-color: #eee;">
Question: You have an application that needs to read 25 items of 13kb in size per second. Your application uses strongly consistent reads. What should you set the read throughput to?
+
Question: You have an application that needs to read 25 items of 13kb in size per second. Your application uses ''strongly consistent'' reads. What should you set the READ throughput to?
  
 
* First calculate how many read units per item we need
 
* First calculate how many read units per item we need
Line 464: Line 633:
 
* Using strong consistency, we do ''not'' divide by 2
 
* Using strong consistency, we do ''not'' divide by 2
  
Answer: '''100 units of read throughput'''
+
Answer: '''100 units of READ throughput'''
 
</div>
 
</div>
  
 
<div style="margin: 10px; padding: 5px; border: 2px solid #18e; background-color: #eee;">
 
<div style="margin: 10px; padding: 5px; border: 2px solid #18e; background-color: #eee;">
Question: You have a motion sensor which writes 300 items of data every 30 seconds. Each item consists of 5kb. Your application uses eventually consistent reads. What should you set the read throughput to?
+
Question: You have a motion sensor which writes 300 items of data every 30 seconds. Each item consists of 5kb. Your application uses ''eventually consistent'' reads. What should you set the READ throughput to?
  
 
* First calculate how many read units per item we need
 
* First calculate how many read units per item we need
Line 481: Line 650:
  
 
<div style="margin: 10px; padding: 5px; border: 2px solid #18e; background-color: #eee;">
 
<div style="margin: 10px; padding: 5px; border: 2px solid #18e; background-color: #eee;">
Question: You have an application that requires to WRITE 5 items, with each item being 10 KB in size per second. What should you set the write throughput to?
+
Question: You have an application that requires to WRITE 5 items, with each item being 10 KB in size per second. What should you set the WRITE throughput to?
  
 
* Each write unit consists of 1 KB of data. You need to write 5 items per second with each item using 10 KB of data
 
* Each write unit consists of 1 KB of data. You need to write 5 items per second with each item using 10 KB of data
Line 499: Line 668:
  
 
<div style="margin: 10px; padding: 5px; border: 2px solid #18e; background-color: #eee;">
 
<div style="margin: 10px; padding: 5px; border: 2px solid #18e; background-color: #eee;">
Question: You have a motion sensor which writes 600 items of data every minute. Each item consists of 5kb. What should you set the write throughput to?
+
Question: You have a motion sensor which writes 600 items of data every minute. Each item consists of 5kb. What should you set the WRITE throughput to?
  
 
* Each write units consists of 1 KB of data. You need to write 10 items per second with each having 5 KB of data
 
* Each write units consists of 1 KB of data. You need to write 10 items per second with each having 5 KB of data
Line 507: Line 676:
 
</div>
 
</div>
  
What happens if you exceed your throughput? <code>400 HTTP Status Code - ProvisionedThroughputExceededException</code>
+
==Key Management Service (KMS)==
You exceeded your maximum allowed provisioned throughput for a table or for one or more global secondary indexes.
+
  
Using Web Identity Providers with DynamoDB
+
[https://docs.aws.amazon.com/kms/latest/developerguide/overview.html AWS Key Management Service] (KMS) is a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data. KMS is integrated with most other AWS services that encrypt your data with encryption keys that you manage.
  
One can authenticate users using Web Identity providers (e.g., Facebook, Google, Amazon, or any other Open-ID Connect-compatible identity provider). This is done using <code>AssumeRoleWithWebIdentity</code> API.
+
; Examples
  
You will need to create a role first.
+
* Create a KMS key in the Oregon (us-west-2) region:
 +
<pre>
 +
$ aws kms --region=us-west-2 create-key --description="my app assets"
 +
{
 +
    "KeyMetadata": {
 +
        "CreationDate": 1494071487.263,
 +
        "KeyState": "Enabled",
 +
        "Arn": "arn:aws:kms:us-west-2:xxxxxxxxx:key/xxxxxxxxxxxxxxxxxxx",
 +
        "AWSAccountId": "xxxxxxxxxxxxx",
 +
        "Enabled": true,
 +
        "KeyUsage": "ENCRYPT_DECRYPT",
 +
        "KeyId": "xxxxxxxxx",
 +
        "Description": "my app assets"
 +
    }
 +
}
 +
</pre>
  
# Authenticate with Identity Provider (e.g., Facebook): Log into Facebook with your username + password
+
==Route 53==
# Facebook return a Web Identity Token
+
: SEE: [https://aws.amazon.com/route53/faqs/ Route 53 FAQs]
# AssumeRoleWithWebIdentity request (containing Web Identity Token, App ID of provider, and ARN of role) is sent to AWS Security Token Service
+
; DNS 101
# Amazon then issues Temporary Security Credentials (limit from 15 minutes to 1 hour; default 1 hour)
+
* [http://www.iana.org/domains/root/db Root Zone Database] &mdash; by the Internet Assigned Numbers Authority (IANA)
#: Credentials contain:
+
* SEE: [[:wikipedia:List of DNS record types|List of DNS record types]] for a complete list of record types.
## AccessKeyID, SecretAccessKey, SessionToken
+
## Expiration (time limit)
+
## AssumeRoleID
+
## SubjectFromWebIdentity Token (the unique ID that appears in an IAM policy variable for this particular identity provider)
+
# Using the above credentials, the user is allowed to access DynamoDB
+
  
DynamoDB Condition Writes : If item = $10, then update to $12 (conditional writes are idempotent)
+
Route 53 is a global service (i.e., not on a per AWS region).
E.g., two users try to update the same item at the same time
+
  
DynamoDB supports Atomic Counters, where you use the <code>UpdateItem</code> operation to increment or decrement the value of an existing attribute (or "field" in a table) without interfering with other write requests. (All write requests are applied in the order in which the were received.) For example, a web application might with to maintain a counter per visitor to their site, the application would need to increment this counter regardless of its current value.
+
Note: ELBs do not have pre-defined IPv4 addresses. You resolve to them using a DNS name.
  
Atomic Counters are not idempotent. This mean that the counter will increment each time you call <code>UpdateItem</code>. If you suspect that a previous request was unsuccessful, your application could retry the <code>UpdateItem</code> operation; however, this would risk updating the counter twice. This might be acceptable for a web site counter, because you can tolerate slightly over- or under-counting visitors. However, in a banking application, it would be safer to use conditional updates rather than atomic counters.  
+
* The Start of Authority (SOA) record stores information about:
 +
** The name of the server that supplied the data for the zone;
 +
** The administrator of the zone;
 +
** The current version of the data file;
 +
** The number of seconds a secondary name server should wait before checking for updates;
 +
** The number of seconds a secondary name server should wait before retrying a failed zone transfer;
 +
** The maximum number of seconds that a secondary name server can use data before it must either be refreshed or expire; and
 +
** The default number of seconds for the time-to-live (TTL) file on resource records.
  
If you application needs to read multiple items, you can use the <code>BatchGetItem</code> API. A single <code>BatchGetItem</code> request can retrieve up to 1 MB of data, which can contain as many as 100 items. In addition, a single <code>BatchGetItem</code> request can retrieve items from multiple tables.
+
* Name Server (NS) records:
 +
** Used by Top Level Domain servers to direct traffic to the Content DNS server, which contains the authoritative DNS records.
 +
 
 +
* A Records:
 +
** An ''A'' record is the fundamental type of DNS record and the "A" in A record stands for "Address".
 +
** The A record is used by a computer to translate the name of the domain to the IP address (e.g., <nowiki>http://www.example.com</nowiki> => <nowiki>http://1.2.3.4</nowiki>).
 +
 
 +
* TTL
 +
** The length that a DNS record is cached on either the Resolving Server or the user's own local PC is equal to the value of the "Time To Live" (TTL) in seconds. The lower the TTL, the faster changes to DNS records take to propagate throughout the Internet.
 +
 
 +
* CNAMES
 +
** A Canonical Name (CName) can be used to resolve one domain name to another. For example, you may have a mobile website with the domain name <nowiki>http://m.example.com</nowiki> that is used for when users browse to your domain name on their mobile devices. You may also want the name <nowiki>http://mobile.example.com</nowiki> to resolve to this same address.
 +
** CNAME lookups on AWS incur charges.
 +
 
 +
* Alias Records
 +
** Used to map resource record sets in your hosted zone to ELBs, CloudFront distributions, or S3 buckets that are configured as websites.
 +
** Alias records work like a CNAME record, in that you can map one DNS name (<nowiki>www.example.com</nowiki>) to another "target" DNS name (elb1234.elb.amazonaws.com).
 +
** The key difference: A CNAME cannot be used for naked domain names (zone apex; e.g., example.com, not www.example.com). You cannot have a CNAME for <nowiki>http://example.com</nowiki>, it must be either an A record or an Alias.
 +
** Alias resource record sets can save you time because Route 53 automatically recognizes changes in the record sets that the alias resource record set refers to.
 +
** For example, suppose an alias resource record set for example.com points to an ELB at lb1-1234.us-west-2.elb.amazonaws.com. If the IP address of the load balancer changes, Route 53 will automatically reflect those changes in DNS answers for example.com without any changes to the hosted zone that contains resource record sets for example.com.
 +
** Alias Record lookups on AWS are free. Given the choice (on an exam), always choose an Alias Record over a CNAME (if possible).
 +
 
 +
Note: ELBs do not have a pre-defined IPv4 addresses, you always resolve them using a DNS name.
 +
 
 +
; Route 53 Routing Policies
 +
* Simple
 +
** The default routing policy when you create a new record set.
 +
** This is most commonly used when you have a single resource that performs a given function for your domain (e.g., one web server that serves content for <nowiki>http://example.com</nowiki>).
 +
* Weighted
 +
** Allows you to split your traffic based on different weights assigned (e.g., send 20% of your traffic to us-east-1 and 80% to us-west-2).
 +
* Latency
 +
** Allows you to route your traffic based on the lowest network latency for your end user (i.e., which region will give them the fastest response time).
 +
** In order to use latency-based routing, you create a latency resource record set for the EC2 (or ELB) resource in each region that hosts your website. When Route 53 receives a query for your site, it selects the latency resource set for the region that gives the user the lowest latency. Route 53 then responds with the value associated with that resource record set.
 +
* Failover
 +
** Used when you want to create an active/passive setup. For example, you may want you primary site to be in us-west-2 and your secondary DR site in us-east-1.
 +
** Route 53 will monitor the health of your primary site using a health check, which monitors the health of your end points.
 +
* Geolocation
 +
** Lets you choose where your traffic will be sent based on the geographic location of your users (i.e., the location from which DNS queries originate). For example, you might want all queries from Europe to be routed to a fleet of EC2 instances that are specifically configured for your European customers. These servers may have the local language(s) of your European customers and all prices are displayed in Euros.
  
 
==Simple Queue Service (SQS)==
 
==Simple Queue Service (SQS)==
Line 541: Line 762:
  
 
* SQS vs. RabbitMQ:
 
* SQS vs. RabbitMQ:
** SQS is a managed service. So one does not have to worry about operational aspects of running a messaging system including administration, security, monitoring etc. Amazon will do this for you and will provide support if something were to go wrong.
+
** SQS is a managed service. So one does not have to worry about operational aspects of running a messaging system including administration, security, monitoring, etc. Amazon will do this for you and will provide support if something were to go wrong.
 
** SQS is Elastic and can scale to very large rate/volumes (unlimited according to AWS)
 
** SQS is Elastic and can scale to very large rate/volumes (unlimited according to AWS)
 
** Availability of SQS has a lot of 9's in it and is backed by Amazon, which is one less thing to worry about in your application.
 
** Availability of SQS has a lot of 9's in it and is backed by Amazon, which is one less thing to worry about in your application.
Line 559: Line 780:
 
To illustrate, suppose you have a number of images files to encode. In a SQS worker queue, you create a SQS message for each file specifying the command (jpeg-encode) and the location of the file in S3. A pool of EC2 instances running the needed image processing software does the following:
 
To illustrate, suppose you have a number of images files to encode. In a SQS worker queue, you create a SQS message for each file specifying the command (jpeg-encode) and the location of the file in S3. A pool of EC2 instances running the needed image processing software does the following:
  
# Asynchronously pulls the task messages from the queue;
+
# Asynchronously '''pulls''' the task messages from the queue;
# Retrievers the named file;
+
# Retrieves the named file;
 
# Processes the conversion (e.g., create a thumbnail, add a watermark, etc.)
 
# Processes the conversion (e.g., create a thumbnail, add a watermark, etc.)
 
# Write the image back to Amazon S3;
 
# Write the image back to Amazon S3;
Line 571: Line 792:
 
* SQS with Auto-scaling
 
* SQS with Auto-scaling
 
* SQS does not offer FIFO (first in, first out)
 
* SQS does not offer FIFO (first in, first out)
* 12 hour visibility time out
+
* 12 hour visibility time out (by default)
 
* SQS is engineered to provide "at least once" delivery of all messages in its queues. Although most of the time each message will be delivered to your application exactly once, you should design your system so that processing a message more than once does not create any errors or inconsistencies.
 
* SQS is engineered to provide "at least once" delivery of all messages in its queues. Although most of the time each message will be delivered to your application exactly once, you should design your system so that processing a message more than once does not create any errors or inconsistencies.
 
* 256kb message size (as of May 2016)
 
* 256kb message size (as of May 2016)
Line 583: Line 804:
 
** Each 64kb "chunk" of payload is billed as 1 request. For example, a single API call with a 256kb payload will be billed as four requests.
 
** Each 64kb "chunk" of payload is billed as 1 request. For example, a single API call with a 256kb payload will be billed as four requests.
  
* If you see "decouple", think SQS
+
* If you see "decouple" on the exam, think SQS.
  
 
* SQS Delivery:
 
* SQS Delivery:
Line 624: Line 845:
 
SNS is a web service that makes it easy to set up, operate, and send notifications from the cloud. It provides developers with a highly scalable, flexible, and cost-effective capability to publish messages from an application and immediately deliver them to subscribers or other applications.
 
SNS is a web service that makes it easy to set up, operate, and send notifications from the cloud. It provides developers with a highly scalable, flexible, and cost-effective capability to publish messages from an application and immediately deliver them to subscribers or other applications.
  
SQS follows the "publish-subscribe" (pub-sub) messaging paradigm, with notifications being delivered to clients, using a "push" mechanism that eliminates the need to periodically check or "poll" for new information and updates. With simple APIs requiring minimal up-front development effort, no maintenance or management overhead and pay-as-you-go pricing, SNS gives developers an easy mechanism to incorporate a powerful notification system with their applications.
+
SNS follows the "publish-subscribe" (pub-sub) messaging paradigm, with notifications being delivered to clients, using a "push" mechanism that eliminates the need to periodically check or "poll" for new information and updates. With simple APIs requiring minimal up-front development effort, no maintenance or management overhead and pay-as-you-go pricing, SNS gives developers an easy mechanism to incorporate a powerful notification system with their applications.
  
 
Push notifications to Apple, Google, Fire OS, and Windows devices, as well as Android devices in China with Baidu Cloud Push.
 
Push notifications to Apple, Google, Fire OS, and Windows devices, as well as Android devices in China with Baidu Cloud Push.
  
 
Besides pushing cloud notifications directly to mobile devices, SNS can also deliver notifications by SMS text message or email, to Amazon Simple Queue Service (SQS) queues or to any HTTP endpoint.
 
Besides pushing cloud notifications directly to mobile devices, SNS can also deliver notifications by SMS text message or email, to Amazon Simple Queue Service (SQS) queues or to any HTTP endpoint.
 +
 +
SNS notifications can also trigger Lambda functions. When a message is published to an SNS topic that has a Lambda function subscribed to it, the Lambda function is invoked with the payload of the published message. The Lambda function receives the message payload as an input parameter and can manipulate the information in the message, publish the message to other SNS topics, or send the message to other AWS services.
  
 
To prevent messages from being lost, all messages published to SNS are stored redundantly across multiple availability zones.
 
To prevent messages from being lost, all messages published to SNS are stored redundantly across multiple availability zones.
Line 658: Line 881:
  
 
* SNS benefits:
 
* SNS benefits:
** Instantaneous, push-based delivery (no polling)
+
** Instantaneous, ''push''-based delivery (no polling, like SQS)
 
** Simple APIs and easy integration with applications
 
** Simple APIs and easy integration with applications
 
** Flexible message delivery over multiple transport protocols
 
** Flexible message delivery over multiple transport protocols
Line 684: Line 907:
 
** Amazon SQS
 
** Amazon SQS
 
** Application
 
** Application
 +
** AWS Lambda
 
NOTE: Messages can be customized for each protocol
 
NOTE: Messages can be customized for each protocol
  
Line 689: Line 913:
 
: SEE: [https://aws.amazon.com/swf/faqs/ Amazon SWF FAQs]
 
: SEE: [https://aws.amazon.com/swf/faqs/ Amazon SWF FAQs]
  
Amazon Simple Workflow Server (Amazon SWF) is a web service that makes it easy to coordinate work across distributed application components. SWF enables applications for a range of use cases, including media processing, web application back-ends, business process work-flows, and analytics pipelines, to be designed as a coordination of tasks. Task represent invocations of various processing steps in an application, which can be performed by executable code, web service calls, human actions, and scripts.
+
Amazon Simple Workflow Service (Amazon SWF) is a web service that makes it easy to coordinate work across distributed application components. SWF enables applications for a range of use cases, including media processing, web application back-ends, business process work-flows, and analytics pipelines, to be designed as a coordination of tasks. Tasks represent invocations of various processing steps in an application, which can be performed by executable code, web service calls, human actions, and scripts.
  
; SWF Workers : programs that interact with SWF to get tasks, process received tasks, and return the results.
+
*SWF Actors
 +
; Workflow Starters: an application that can initiate (start) a workflow. This could be your e-commerce website when placing an order or a mobile app searching for bus times.
 +
; SWF Workers (Activity Workers) : programs that interact with SWF to get tasks, process received tasks, and return the results.
 
; SWF Decider : a program that controls the coordination of tasks, i.e., their ordering, concurrency, and scheduling according to the application logic.
 
; SWF Decider : a program that controls the coordination of tasks, i.e., their ordering, concurrency, and scheduling according to the application logic.
  
Line 717: Line 943:
 
** Does the service need to run for (much) more than 12 hours? If so, one should use SWF. If less than 12 hours, SQS ''might'' be the correct service to use.
 
** Does the service need to run for (much) more than 12 hours? If so, one should use SWF. If less than 12 hours, SQS ''might'' be the correct service to use.
 
** Maintaining your application's execution state (e.g. which steps have completed, which ones are running, etc.) is a perfect use case for SWF.
 
** Maintaining your application's execution state (e.g. which steps have completed, which ones are running, etc.) is a perfect use case for SWF.
** Amazon SWF is useful for automating work-flows that include long-running human tasks (e.g. approvals, reviews, investigations, etc.) Amazon SWF reliably tracks the status of processing steps that run up to several days or months.
+
** Amazon SWF is useful for automating work-flows that include long-running human tasks (e.g. approvals, reviews, investigations, etc.). Amazon SWF reliably tracks the status of processing steps that run up to several days or months.
 +
** SQS has a retention period of 14 days; SWF up to 1 year for workflow executions.
 +
 
 +
==Elastic Transcoder==
 +
 
 +
Amazon Elastic Transcoder lets you convert digital media stored in Amazon S3 into the audio and video codecs and the containers required by consumer playback devices. For example, you can convert large, high-quality digital media files into formats that users can play back on mobile devices, tablets, web browsers, and connected televisions.
 +
 
 +
It is a media transcoder in the Cloud. It allows you to convert media files from their original source format into different formats that will play on smartphones, tables, PCs, etc. It provides transcoding present for popular output formats, which mean that you do not need to guess which settings work beset on particular devices.
 +
 
 +
Pay based on the minutes that you transcode and the resolution at which you transcode.
 +
 
 +
Elastic Transcoder has three components:
 +
# '''Pipelines''' are queues that manage your transcoding jobs. Elastic Transcoder begins to process jobs in the order in which you add them to a pipeline. Typically, you will create at least two pipelines, one for standard-priority jobs and one for high-priority jobs. Most jobs go into the standard-priority pipeline; you use the high-priority pipeline only when you need a file to be transcoded immediately.
 +
# '''Jobs''' specify the settings that are not included in the preset, for example, the file to transcode and whether to create thumbnails. Each job converts one file into one different format. When you create a job, Elastic Transcoder adds it to the pipeline you specify. If there are already jobs in the pipeline, Elastic Transcoder begins processing the new job when resources are available.
 +
# '''Presets''' are templates that specify most of the settings for the transcoded media file. Elastic Transcoder includes some default presets for common formats. You can also create your own presets. When you create a job, you specify which preset to use.
 +
 
 +
==API Gateway==
 +
 
 +
Amazon API Gateway is a fully managed service that makes it easy for developers to publish, maintain, monitor, and secure APIs at any scale. With a few clicks in the AWS Management Console, you can create an API that acts as a "front door" for applications to access data, business logic, or functionality from your back-end services, such as applications running on EC2, code running on Lambda, or any web application.
 +
 
 +
;API Caching
 +
You can enable API caching in API Gateway to cache your endpoint's response. With caching, you can reduce the number of calls made to your endpoint and also improve the latency of the requests to your API. When you enable caching for a stage, API Gateway caches responses from your endpoint for a specified time-to-live (TTL) period, in seconds. API Gateway then responds to the request by looking up the endpoint response from the cache instead of making a request to your endpoint.
 +
 
 +
* API Gateway provides:
 +
** Low cost and efficient
 +
** Scales effortlessly
 +
** You can throttle requests to prevent attacks
 +
** You can connect to CloudWatch to log all requests
 +
 
 +
* Same origin policy
 +
** In computing, the same-origin policy is an important concept in the web application security model. Under the policy, a web browser permits scripts contained in a first web page to access data in a second web page, but only if both web pages have the same origin.
 +
 
 +
;Cross-Origin Resource Sharing (CORS)
 +
* CORS is one way the server at the other end (not the client code in the browser) can relax the same-origin policy.
 +
* CORS is a mechanism that allows restricted resources (e.g., fonts) on a web page to be requested from another domain outside the domain from which the first resource was served.
 +
* Example error: "Origin policy cannot be read at the remote resource" => You need to enable CORS on API Gateway.
  
 
==CloudFormation==
 
==CloudFormation==
 +
: SEE: [https://aws.amazon.com/cloudformation/faqs/ Amazon CloudFormation FAQs]
 +
 
CloudFormation => Scripted infrastructure (IaaS)
 
CloudFormation => Scripted infrastructure (IaaS)
  
Line 751: Line 1,014:
 
** Remember the "<code>Fn::GetAtt</code>" function
 
** Remember the "<code>Fn::GetAtt</code>" function
  
If a CloudFormation stack creation fails, the default is to terminate and rollback all resources created on failure (i.e., delete all of the resources it was trying to create). One can disable roll back to leave all resources in their current state (failed or not). This is useful for troubleshooting your own templates.
+
If a CloudFormation stack creation fails, the default is to terminate and roll-back all resources created on failure (i.e., delete all of the resources it was trying to create). One can disable roll back to leave all resources in their current state (failed or not). This is useful for troubleshooting your own templates.
  
 
<div style="margin: 10px; padding: 5px; border: 2px solid #18e; background-color: #eee;">
 
<div style="margin: 10px; padding: 5px; border: 2px solid #18e; background-color: #eee;">
Line 761: Line 1,024:
 
==Elastic Beanstalk==
 
==Elastic Beanstalk==
  
Using the AWS Elastic Beanstalk service is free. However, any resources it creates/consumes/provisions are ''not'' free.
+
AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS.
 +
 
 +
You can simply upload your code and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, auto-scaling, to application health monitoring. At the same time, you retain full control over the AWS resources powering your application and can access the underlying resources at any time.
 +
 
 +
Using the AWS Elastic Beanstalk service is free. However, any AWS resources it creates/consumes/provisions to store and run your applications are ''not'' free.
  
 
* Environment tier:
 
* Environment tier:
Line 773: Line 1,040:
  
 
==Virtual Private Cloud (VPC)==
 
==Virtual Private Cloud (VPC)==
 +
: SEE: [https://aws.amazon.com/vpc/faqs/ Amazon VPC FAQs]
  
Think of a VPC as a virtual data centre in the cloud.
+
;Think of a VPC as a virtual data centre in the Cloud.
  
 
* AWS definition of a VPC:
 
* AWS definition of a VPC:
 
** Amazon Virtual Private Cloud (VPC) lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including select of you own IP address range, creation of subnets, and configuration of route tables and network gateways.
 
** Amazon Virtual Private Cloud (VPC) lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including select of you own IP address range, creation of subnets, and configuration of route tables and network gateways.
 
** You can easily customize the network configuration for your AWS virtual Private Cloud. For example, you can create a public-facing subnet for your webservers that has access to the Internet, and place backend systems, such as databases or application service in a private-facing subnet, with no Internet access. You can leverage multiple layers of security, including security groups and network access control lists, to help control access to EC2 instance in each subnet.
 
** You can easily customize the network configuration for your AWS virtual Private Cloud. For example, you can create a public-facing subnet for your webservers that has access to the Internet, and place backend systems, such as databases or application service in a private-facing subnet, with no Internet access. You can leverage multiple layers of security, including security groups and network access control lists, to help control access to EC2 instance in each subnet.
** Additionally, one can create a Hardware Virtual Private Network (VPN) connection between your corporate datacentre and your VPC and leverage the AWS cloud as an extension of your corporate datacentre. "Hybrid Cloud".
+
** Additionally, one can create a Hardware Virtual Private Network (VPN) connection between your corporate datacentre and your VPC and leverage the AWS cloud as an extension of your corporate datacentre (aka a "Hybrid Cloud").
 +
** VPCs consist of IGWs (or Virtual Private Gateways), Route Tables, Network Access Control Lists, Subnets, Security Groups, etc.
  
 
* What can one do with a VPC?
 
* What can one do with a VPC?
** Launch instances into a subnet of your choosing
+
** Launch instances into a subnet of one's choosing
 
** Assign custom IP address ranges in each subnet
 
** Assign custom IP address ranges in each subnet
 
** Configure route tables between subnets
 
** Configure route tables between subnets
** Create internet gateways and attach them to subnets (or not)
+
** Create internet gateways and attach them to subnets (or not). Only one internet gateway per VPC.
 
** Much better security control over your AWS resources
 
** Much better security control over your AWS resources
** Instance security groups
+
** Instance security groups (these are stateful: HTTP in = HTTP out)
** Create subnet network access control lists (ACLs)
+
** Create subnet network access control lists (ACLs). These are stateless: HTTP in != HTTP out (you must create separate ACLs for each).
 +
** Each subnet is mapped directly to an AZ, and only one AZ (you cannot span subnets across AZs) => 1 subnet = 1 AZ.
 +
** Security groups, route tables, and ACLs can span multiple subnets.
 
** Number of allowed VPCs in each AWS Region (by default): 5
 
** Number of allowed VPCs in each AWS Region (by default): 5
  
 
* Default VPC vs. Custom VPC
 
* Default VPC vs. Custom VPC
** Default VPC is user friendly (automatically created when one creates an AWS account). It allows one to immediately deploy instances.
+
** A Default VPC is user-friendly (automatically created when one creates an AWS account). It allows one to immediately deploy instances.
** All subnets in a default VPC have an internet gateway attached (i.e., all subnets are public)
+
** All subnets in a default VPC have an internet gateway attached (i.e., all subnets are public / all subnets have a route out to the Internet).
 
** Each EC2 instance has both a public and private IP address
 
** Each EC2 instance has both a public and private IP address
 
** If one were to delete the default VPC, the only way to get it back is to contact AWS
 
** If one were to delete the default VPC, the only way to get it back is to contact AWS
Line 804: Line 1,075:
 
** Peering is done in a "star configuration", i.e., 1 central VPC peers with 4 others. '''No transitive peering'''!
 
** Peering is done in a "star configuration", i.e., 1 central VPC peers with 4 others. '''No transitive peering'''!
  
* A "star configuration" peering:
+
* A "star configuration" (or hub-and-spoke) peering:
 
<pre>
 
<pre>
 
                 +-------+
 
                 +-------+
Line 823: Line 1,094:
 
</pre>
 
</pre>
  
In the above example, instances on VPC-B can not send/receive traffic on VPC-C via VPC-A. One would need to create a VPC peer directly from VPC-B and VPC-C. That is, with a star configuration (as shown above), there is no transitive peering.
+
In the above example, instances on VPC-B can not send/receive traffic on VPC-C via VPC-A. One would need to create a VPC peer directly from VPC-B and VPC-C. That is, with a star configuration (as shown above), there is ''no transitive peering''.
  
 
* VPC tenancy:
 
* VPC tenancy:
Line 831: Line 1,102:
 
By default, when one creates a VPC, a route tables is automatically created for the VPC.
 
By default, when one creates a VPC, a route tables is automatically created for the VPC.
  
NOTE: If one deletes one's account's default VPC (and/or the associated default subnets), the only way to get them back is to raise a ticket with Amazon.
+
NOTE: If one deletes one's account's default VPC (and/or the associated default subnets), the only way to get it back is to raise a ticket with Amazon.
  
 
* VPC subnets:
 
* VPC subnets:
 
** Use the CIDR format to specify your subnet's IP address block (e.g., 10.0.0.0/24). Note that block sizes must be between a /16 netmask and /28 netmask. Also, note that a subnet can be the same size as your VPC.
 
** Use the CIDR format to specify your subnet's IP address block (e.g., 10.0.0.0/24). Note that block sizes must be between a /16 netmask and /28 netmask. Also, note that a subnet can be the same size as your VPC.
** Subnets are ''always'' mapped to ''one'' availability zone (AZ). Subnets can not be mapped across multipl AZs. 1 subnet = 1 AZ.
+
** Subnets are ''always'' mapped to ''one'' availability zone (AZ). Subnets can not be mapped across multiple AZs. 1 subnet = 1 AZ.
  
 
* VPC Internet gateways:
 
* VPC Internet gateways:
Line 846: Line 1,117:
 
** Allow instances with only private IPs to reach the Internet via the NAT server (via, say, HTTP/HTTPS and all other protocols/ports closed, including SSH)
 
** Allow instances with only private IPs to reach the Internet via the NAT server (via, say, HTTP/HTTPS and all other protocols/ports closed, including SSH)
 
** One must disable Source/Destination Check on NAT instances for them to work properly.
 
** One must disable Source/Destination Check on NAT instances for them to work properly.
 +
** SEE: [http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-nat-comparison.html Comparison of NAT Instances and NAT Gateways]
 +
 +
* NAT instances example:
 +
** Create a custom security group
 +
** Allow inbound traffic to 10.0.1.0/24 and 10.0.2.0/24 on HTTP and HTTPS
 +
** Allow outbound traffic on HTTP and HTTPS to anywhere
 +
** Provision a NAT instance inside the ''public'' subnet (make sure the NAT instance has a public IP)
 +
** Import! Make sure to select "Disabled Source/Destination Check" on this NAT instance!
 +
** Set up a route on the private subnet to route traffic through the NAT instance.
 +
** NAT instance behind a security group
 +
** The amount of traffic that NAT instances support depends on the instance size. If you are reaching a bottleneck, increase the instance size.
 +
** You can create HA using Autoscaling Groups with multiple subnets in different AZs (and use a script to automate failover).
 +
** NOTE: It is better (and easier) to use a NAT Gateway over a NAT Instance.
 +
 +
* NAT Gateways
 +
** Preferred by enterprise organizations
 +
** Scale automatically up to 10Gbps
 +
** No need to patch OS (e.g., no need for `yum update`, etc.)
 +
** ''Not'' associated with security groups
 +
** Automatically assigned a public IP address
 +
** No need to disable Source/Destination Checks.
 +
** Remember to update your VPC route tables after creating the NAT Gateway.
 +
 +
* NAT instances vs. Bastions
 +
** A NAT instance is used to provide Internet traffic to EC2 instances in private subnets.
 +
** A Bastion is used to securely administer EC2 instances (using SSH) in private subnets.
  
 
===Network Access Control Lists (ACLs)===
 
===Network Access Control Lists (ACLs)===
 +
SEE: [http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ACLs.html Network ACLs]
  
* A network ACL is an optional layer of security that acts as a firewall for controlling traffic in and out of a subnet.  
+
* A network ACL is an optional layer of security that acts as a firewall for controlling traffic in and out of a subnet.
* ACLs ~ "firewall"-like rules
+
* ACLs ~ "firewall"-like rules.
 
* ACLs are a numbered list of rules, which are followed in order, starting with the lowest number first. They control network ingress/egress for all AWS resources within a given subnet.
 
* ACLs are a numbered list of rules, which are followed in order, starting with the lowest number first. They control network ingress/egress for all AWS resources within a given subnet.
* The highest ACL number allowed is 32,766
+
* The highest ACL number allowed is 32766.
* ACLs have a default (editable) number list that allows all inbound/outbound traffic
+
* Your VPC automatically comes with a default ACL and by default it allows all inbound/outbound traffic.
* One can create a custom ACL, which starts out with no inbound/outbound traffic allowed, until one adds a rule
+
* ACLs have a default (editable) number list that allows all inbound/outbound traffic.
 +
* One can create a custom ACL, which starts out with no inbound/outbound traffic allowed, until one adds a rule.
 
* ACLs are applied to an entire subnet (and override the security groups associated with a given instance on that subnet). For an example, if a security group applied to a given instance has port 80 allowed, but the ACL for the subnet the instance is on has port 80 denied, the ACL rule takes precedence (i.e., port 80 will be denied on all instances within that subnet, regardless of what the security group allows).
 
* ACLs are applied to an entire subnet (and override the security groups associated with a given instance on that subnet). For an example, if a security group applied to a given instance has port 80 allowed, but the ACL for the subnet the instance is on has port 80 denied, the ACL rule takes precedence (i.e., port 80 will be denied on all instances within that subnet, regardless of what the security group allows).
 
* Unless one creates a custom ACL and associates it with a given subnet, that ACL will use the default role and rules.
 
* Unless one creates a custom ACL and associates it with a given subnet, that ACL will use the default role and rules.
 
* One can not have multiple ACLs associated with the same subnet. However, a given ACL can be associated with multiple subnets.
 
* One can not have multiple ACLs associated with the same subnet. However, a given ACL can be associated with multiple subnets.
* If one dis-associates a ''custom'' ACL from a given subnet(s), the subnet reverts back to the default ACL
+
* When one associates a custom ACL with a subnet, the previous association is removed
 +
* If one dis-associates a ''custom'' ACL from a given subnet(s), the subnet reverts back to the default ACL.
 +
* If one wishes to block a specific IP address, use ACLs not Security Groups.
 +
 
 +
* Security Groups (SGs) vs. Network ACLs (ACLs)
 +
** SGs operate at the instance level (first layer of defense). ACLs operate at the subnet level (second layer of defense).
 +
** SGs allow rules only (everything is denied unless opened). ACLs allow rules and deny rules.
 +
** SGs are stateful (return traffic is automatically allowed, regardless of any rules). ACLs are stateless (return traffic must be explicitly allowed by rules).
 +
** SGs: AWS evaluates all rules before deciding whether to allow traffic. ACLs: AWS process rules in number order when deciding whether to allow traffic.
  
 
===Example labs===
 
===Example labs===
Line 874: Line 1,181:
 
** Provision an EC2 instance with ''only'' a private IP address (on the private subnet)
 
** Provision an EC2 instance with ''only'' a private IP address (on the private subnet)
  
* NAT Lab:
+
==Kinesis==
** Create a custom security group
+
 
** Allow inbound traffic to 10.0.1.0/24 and 10.0.2.0/24 on HTTP and HTTPS
+
Streaming Data is data that is generated continuously by thousands of data sources, which typically send in the data records simultaneously and in small sizes (on the order of kilobytes).
** Allow outbound traffic on HTTP and HTTPS to anywhere
+
 
** Provision a NAT instance inside the public subnet
+
Example streaming data sources:
** Import! Make sure to select "Disabled Source/Destination Check" on this NAT instance!
+
* Purchases from Online stores (e.g., amazon.com)
** Set up a route on the private subnet to route traffic through the NAT instance
+
* Stock prices
 +
* Game data (as the gamer plays)
 +
* Social network data
 +
* Geospatial data (e.g., Uber)
 +
* IoT sensor data
 +
 
 +
Amazon Kinesis is a platform on AWS to send your streaming data too. Kinesis makes it easy to load and analyze streaming data, and also provides the ability for you to build your own custom applications for your business needs.
 +
 
 +
* Core Kinesis services
 +
# Kinesis Streams
 +
#* Collect and stream data for ordered, replayable, real-time processing.
 +
#* Retention: stored by default for 24 hours and up to 7 days.
 +
#* Consists of shards: 5 transactions per second for reads (up to a maximum total data read rate of 2 MB per second) and up to a maximum total data write rate of 1 MB per second (including partition keys).
 +
#* The data capacity of your stream is a function of the number of shards that you specify for the stream. The total capacity of the stream is the sum of the capacities of its shards.
 +
# Kinesis Firehose
 +
#* Continuously deliver streaming data to Amazon S3, Amazon Redshift, and Amazon Elasticsearch Service.
 +
#* Analyze data using Lambda functions.
 +
# Kinesis Analytics
 +
#* Analyze streaming data from Amazon Kinesis Firehose and Amazon Kinesis Streams in real-time using SQL.
 +
 
 +
==AWS Security Hub==
 +
'''[https://docs.aws.amazon.com/securityhub/index.html AWS Security Hub]''' provides you with a comprehensive view of the security state of your AWS resources. Security Hub collects security data from across AWS accounts and services, and helps you analyze your security trends to identify and prioritize the security issues across your AWS environment.
 +
 
 +
AWS Security Hub integrates with other AWS services. One can forward all the findings from those services to Security Hub for a centralized view.
 +
 
 +
The following services are supported:
 +
* [https://aws.amazon.com/inspector/ Amazon Inspector] &mdash; Automated security assessment service to help improve the security and compliance of applications deployed on AWS.
 +
* [https://aws.amazon.com/guardduty/ Amazon GuardDuty] &mdash; Protect your AWS accounts, workloads, and data with intelligent threat detection and continuous monitoring
 +
* [https://docs.aws.amazon.com/IAM/latest/UserGuide/what-is-access-analyzer.html IAM Access Analyzer]
 +
* [https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-patch.html AWS Systems Manager Patch Manager]
 +
* [https://aws.amazon.com/firewall-manager/ AWS Firewall Manager] &mdash; Centrally configure and manage firewall rules across accounts and applications
 +
* [https://aws.amazon.com/macie/ Amazon Macie] &mdash; Discover and protect your sensitive data at scale
  
 
==AWS Shared Responsibility==
 
==AWS Shared Responsibility==
Line 905: Line 1,243:
 
; Shared Responsibility Model for AWS abstracted services
 
; Shared Responsibility Model for AWS abstracted services
 
* Abstracted services include: S3, Glacier, DynamoDB, SQS, and Simple Email Service (SES), Lambda
 
* Abstracted services include: S3, Glacier, DynamoDB, SQS, and Simple Email Service (SES), Lambda
* AWS takes on each more of the responsibility (e.g., network traffic protection provided by the platform; server-side encryption provided by the platform)
+
* AWS takes on even more of the responsibility (e.g., network traffic protection provided by the platform; server-side encryption provided by the platform)
 
* Customer still has the responsibility for client-side data encryption, data integrity authentication, and customer data
 
* Customer still has the responsibility for client-side data encryption, data integrity authentication, and customer data
  
Line 915: Line 1,253:
 
* $150 example registration fee
 
* $150 example registration fee
 
* Conducted Online at an approved centre
 
* Conducted Online at an approved centre
 +
 +
===AWS Certified Solutions Architect - Associate===
 +
* Time allotted: 80 minutes
 +
* 60 questions on the exam
 +
* $150 example registration fee
 +
* Conducted Online at an approved centre
 +
* AWS platforms covered:
 +
** Security & Identity
 +
** Compute
 +
** Storage
 +
** Databases
 +
** Networking & Content Delivery
 +
** Messaging
 +
** Desktop & App Streaming (only at a very high-level)
 +
** Management Tools (only at a very high-level)
 +
* AWS Global Infrastructure (what all of the above platforms/services reside in)
 +
** As of December 2016: 14 Regions and 38 Availability Zones (AZs)
 +
** In 2017: 4 more Regions and 11 more AZs
 +
* Edge Locations are CDN Endpoints for CloudFront (as of December 2016, there are ~66 Edge Locations)
  
 
==AWS Certifications==
 
==AWS Certifications==
  
NOTE: All AWS certification exams are taken on-site and proctored
+
''NOTE: All AWS certification exams are taken on-site and proctored.''
  
* Associate Level (each $150):
+
* Associate Level ($150 each):
 
** AWS Certified Developer - Associate
 
** AWS Certified Developer - Associate
** AWS Certified SysOps Administrator - Associate
 
 
** AWS Certified Solutions Architect - Associate
 
** AWS Certified Solutions Architect - Associate
* Professional Level (each $300):
+
** AWS Certified SysOps Administrator - Associate
 +
* Professional Level ($300 each):
 
** AWS Certified DevOps Engineer - Professional
 
** AWS Certified DevOps Engineer - Professional
 
** AWS Certified Solutions Architect - Professional
 
** AWS Certified Solutions Architect - Professional
 +
* Specialty (Beta, as of January 2017)
 +
** AWS Certified Security - Specialty
 +
** AWS Certified Big Data - Specialty
 +
** AWS Certified Advanced Networking - Specialty
 +
 +
==The AWS Partner Program==
 +
 +
<div style="float:left; margin:0px 20px 20px 0px;">
 +
{| align="center" style="border: 1px solid #999; background-color:#FFFFFF"
 +
|-
 +
! colspan="4" bgcolor="#EFEFEF" | '''The AWS Partner Program'''
 +
|-align="center" bgcolor="#1188ee"
 +
!Partner
 +
!Associate Certs
 +
!Professional Certs
 +
|- align="left"
 +
|'''Standard''' || 2 || 0
 +
|--bgcolor="#eeeeee"
 +
|'''Advanced''' || 4 || 2
 +
|-
 +
|'''Premier''' || 20 || 8
 +
|}
 +
</div>
 +
<br clear="all"/>
  
 
==Glossary==
 
==Glossary==
Line 932: Line 1,313:
 
;[http://docs.aws.amazon.com/general/latest/gr/glos-chap.html#AmazonMachineImage AMI]: Amazon Machine Image
 
;[http://docs.aws.amazon.com/general/latest/gr/glos-chap.html#AmazonMachineImage AMI]: Amazon Machine Image
 
;ARN: Amazon Resource Name
 
;ARN: Amazon Resource Name
;EBS: Elastic Block Storage
+
;EBS: Elastic Block Storage (virtual disks for EC2 instances)
 
;EC2: Elastic Compute Cloud
 
;EC2: Elastic Compute Cloud
 +
;EC2 Container Service (ECS)
 +
;Elastic Beanstalk
 
;EFS: Elastic File System
 
;EFS: Elastic File System
 
;ELB: Elastic Load Balancer
 
;ELB: Elastic Load Balancer
;S3: Simple Storage Service
+
;KRADLE: Kinesis, Redshift, Aurora, Dynamo DB, Lambda, EMR (lock-in services)
 +
;Lambda: Serverless code
 +
;Lightsail: Out-of-the-box Cloud
 +
;[http://docs.aws.amazon.com/general/latest/gr/glos-chap.html#STS STS]: Security Token Service
 +
;VPC: Virtual Private Cloud
 +
;Route53: DNS + ability or register domain names
 +
;CloudFront: Content Delivery Network (CDN) / Edge Locations
 +
;DirectConnect
 +
 
 +
===Storage===
 +
;[http://docs.aws.amazon.com/general/latest/gr/glos-chap.html#AmazonSimpleStorageService S3] : Simple Storage Service (object-based storage)
 +
;Glacier : Data archival (for objects in S3). Low cost.
 +
;EFS: Elastic File Service (block-based storage; shareable)
 +
;Storage Gateway : Connect S3 to on-premise DC
 +
 
 +
===Databases===
 +
;RDS : Relational Database Service (e.g., MySQL, MariaDB, Aurora, Postgres, etc.)
 +
;DynamoDB : NoSQL (non-relational database)
 +
;Redshift: Data warehousing
 +
;Elasticache :
 +
 
 +
===Migration===
 +
;Snowball : Move large amounts of data into the Cloud (e.g., contents of a HDD)
 +
;DMS: Database Migration Service (e.g. in-house Oracle DB into AWS RDS:Aurora)
 +
;Server Migration Service : Virtual machine migration (e.g., on-premise VMware VMs into AWS)
 +
 
 +
===Analytics===
 +
;Athena : Run SQL queries on S3 (e.g., CSV/JSON files)
 +
;EMR : Elastic MapReduce (process large amounts of data {Big Data}; e.g., log files)
 +
;CloudSearch :
 +
;Elastic Search :
 +
;Kinesis : Stream and analyse live/real-time data (e.g., financial data, social media feeds, etc.)
 +
;Data Pipeline : Allows moving data from one location to another (e.g., from S3 to DynamoDB or vice versa, etc.)
 +
;Quick Sight : Business analytics tool
 +
 
 +
===Security & Identity===
 +
;[http://docs.aws.amazon.com/general/latest/gr/glos-chap.html#IAM IAM] : AWS Identity and Access Management
 +
;Inspector : Agent-based service to inspect EC2 instances, etc.
 +
;Certificate Manager : Free SSL certs
 +
;Directory Service : Active Directory in the Cloud
 +
;WAF : Web Application Firewall (e.g., protect against SQL injections, etc.)
 +
;Artifacts : Compliance Reports (e.g., ISO 27001 certification, etc.)
 +
 
 +
===Management Tools===
 +
;Cloud Watch : Monitor performance of AWS (e.g. EC2 => CPU/RAM util)
 +
;Cloud Formation : Infrastructure as Code (document-based; JSON/YAML)
 +
;Cloud Trail : Audit AWS resource usage
 +
;OpsWorks : Chef for AWS
 +
;Config : Monitor AWS environment (e.g., send alert if someone creates an IAM role that breaks company policy)
 +
;Trusted Advisor : Automate performance, security, fault-tolerance, cost, etc.
 +
 
 +
===Application Services===
 +
;Step Functions: Visualize what is going on inside an application (and/or microservice)
 +
;SWF : Simple Workflow Service (coordinate automated vs. human tasks)
 +
;API Gateway : Create, publish, maintain APIs in the Cloud
 +
;AppStream : Stream desktop applications to users
 +
;Elastic Transcoder : Change video format (e.g., for viewing on different devices)
 +
 
 +
===Developer Tools===
 +
;CodeCommit : GitHub in AWS
 +
;CodeBuild : Compile code in the Cloud
 +
;CodeDeploy : Deploy code to EC2 instances
 +
;CodePipeline : Keep track of versions of code (e.g., dev, test, prod, UAT)
 +
 
 +
===Mobile Services===
 +
;Mobile Hub :
 +
;Cognito
 +
;Device Farm
 +
;Mobile Analytics
 +
;Pinpoint : Google Analytics for mobile applications
 +
 
 +
===Business Productivity===
 +
;WorkDocs
 +
;WorkMail : Exchange for AWS
 +
 
 +
===Internet of Things (IoT)===
 +
;IoT
 +
 
 +
===Desktop & App Streaming===
 +
;WorkSpaces : Virtual Desktops in the Cloud / Virtual Desktop Infrastructure (VDI) solutions
 +
;AppStream 2.0 : Stream dekstop applications to users
 +
 
 +
===Artificial Intelligence===
 +
;Lex : Think "Alexa in the Cloud" or Alexa on a RaspberryPi
 +
;Polly : Text-to-Speech (text => mp3 in S3)
 +
;Machine Learning
 +
;Rekognition : Analyse pictures with tagging and facial recognition
 +
 
 +
===Messaging===
 +
;SNS : Simple Notification Service
 +
;SQS : Message Queue Service
 +
;SES : Simple Email Service
  
 
==Links==
 
==Links==
Line 953: Line 1,427:
 
* http://ec2price.com/
 
* http://ec2price.com/
 
** https://github.com/grosskur/ec2price
 
** https://github.com/grosskur/ec2price
 +
* [https://www.expeditedssl.com/aws-in-plain-english AWS in Plain English]
 +
* [https://cloud.google.com/docs/compare/aws#service_comparisons GCP ''vs.'' AWS]
  
 
[[Category:Technical and Specialized Skills]]
 
[[Category:Technical and Specialized Skills]]

Latest revision as of 16:44, 30 April 2021

This category/article is (currently) just a random collection of my notes on Amazon Web Services (AWS). I will organize each service-type into a separate article, when I find the time.

Identity and Access Management (IAM)

SEE: AWS IAM FAQs

AWS Identity and Access Management (IAM) enables you to securely control access to AWS services and resources for your users. Using IAM, you can create and manage AWS users and groups, and use permissions to allow and deny their access to AWS resources.

IAM is a feature of your AWS account offered at no additional charge. You will be charged only for use of other AWS services by your users.

  • IAM Roles are more secure than storing your access key and secret access key on individual EC2 instances. This is consider a "best practice".
  • Roles are easier to manage
  • Roles can only be assigned when that EC2 instance is being provisioned (i.e., after provisioning an EC2 instance and adding an IAM Role to that instance, you are not able to delete the Role or add another Role. You can, however, add to or modify the existing Policies attached to the Role attached to the instance.)
  • Roles are universal, you can use them in any AWS region.

Elastic Compute Cloud (EC2)

SEE: AWS/EC2

Elastic Load Balancer (ELB)

SEE: Amazon ELB FAQs
  • Load Balancer types:
    • Application Load Balancer (ALB)
      Layer 7 Load Balancer
      Makes routing decisions at the application layer (HTTP/HTTPS), supports path-based routing, and can route requests to one or more ports on each EC2 instance or container instance in your VPC
    • Classic Load Balancer (ELB)
      Layer 4 Load Balancer
      Makes routing decisions at either the transport layer (TCP/SSL) or the application layer (HTTP/HTTPS), and support either EC2-Classic or a VPC.
  • ELBs are not free; one is charged by the hour and on a per GB basis of usage
  • ELB supported ports:
    • ec2-vpc: 1-65535
    • ec2-classic: 25, 80, 443, 465, 587, 1024-65535
  • ELB supported protocols:
    • HTTP, HTTPS, TCP, SSL

Instances monitored by ELBs are reported as either:

  • InService
  • OutofService

Health Checks check the instance's health by simply "talking" to it over HTTP/HTTPS (looking for specific files on the instance)

NOTE: One can have multiple SSL certificates (for multiple domain names) on a single Elastic Load Balancer.

CloudWatch

SEE: CloudWatch FAQs

Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications you run on AWS. You can use Amazon CloudWatch to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes to your AWS resources. Amazon CloudWatch can monitor AWS resources such as Amazon EC2 instances, Amazon DynamoDB tables, and Amazon RDS DB instances, as well as custom metrics generated by your applications and services, and any log files your applications generate. You can use Amazon CloudWatch to gain system-wide visibility into resource utilization, application performance, and operational health. You can use these insights to react and keep your application running smoothly.

  • Services CloudWatch can monitor include: EC2, Classic ELB, ALB, EBS, S3, SNS, Lambda, DynamoDB, IoT, etc.
  • Standard (free) monitoring = every 5 minutes
  • Detailed (not free) monitoring = every 1 minute
  • Default CloudWatch EC2 monitoring metrics
    • CPU (e.g., CPU utilization, credit usage, credit balance)
    • Disk (e.g., read/write bytes/ops)
    • Network (e.g., traffic in/out, packets in/out)
    • Status Checks (instance-level and host/hypervisor-level)
    • Able to create custom metrics

CloudWatch Dashboards allow you to create customizable dashboards to see what is happening within your AWS account.

  • Bashboard widgets
    • Line (plot): compare metrics over time
    • Stacked area (plot): compare the total over time
    • Number: instantly see the latest value for a metric
    • Text: free text with markdown formatting. Example:
# Heading
## Sub-heading
Paragraphs are separated by a blank line. Text attributes *italic*, **bold**, ~~strikethrough~~ .

A [link](http://amazon.com). A link to this dashboard: [MyWebServer](#dashboards:name=MyWebServer).

[button:Button link](http://amazon.com) [button:primary:Primary button link](http://amazon.com)

Table | Header
----|-----
CloudWatch | Dashboards

```
Text block
ssh my-host
```
List syntax:

* CloudWatch
* Dashboards
  1. Graphs
  1. Text widget

CloudWatch Alarms allow you to set alarms that notify you (e.g., via email) when particular thresholds (you set) are hit.

Amazon EventBridge (formerly CloudWatch Events) helps you to respond to state changes in your AWS resources. When your resources change state they automatically send events into an event stream. You can create rules that match selected events in the stream and route them to targets to take action. You can also use rules to take action on a pre-determined schedule. For example, you can configure rules to:

  • Automatically invoke an AWS Lambda function to update DNS entries when an event notifies you that Amazon EC2 instance enters the Running state
  • Direct specific API records from CloudTrail to a Kinesis stream for detailed analysis of potential security or availability risks
  • Periodically invoke a built-in target to create a snapshot of an Amazon EBS volume

CloudWatch Logs helps you to aggregate, monitor, and store logs. Note: You must install an agent on the EC2 instance to use this service. For example, you can:

  • Monitor HTTP response codes in Apache logs
  • Receive alarms for errors in kernel logs
  • Count exceptions in application logs

Note the difference between CloudWatch and CloudTrail.

CloudTrail

AWS CloudTrail is a web service that records AWS API calls for your account and delivers log files to you. The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service.

With CloudTrail, you can get a history of AWS API calls for your account, including API calls made via the AWS Management Console, AWS SDKs, command line tools, and higher-level AWS services (such as AWS CloudFormation). The AWS API call history produced by CloudTrail enables security analysis, resource change tracking, and compliance auditing.

AWS Command Line Interface (CLI)

The AWS Command Line Interface (CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts.

The AWS CLI introduces a new set of simple file commands for efficient file transfers to and from Amazon S3.

SDKs

  • HTTP codes:
    • 200 - The request has succeeded
    • 3xx - Redirection
    • 4xx - Client error (think 404 not found)
    • 5xx - Server error (think Apache service not running, etc.)
  • Available SDKs:
    • Android, iOS, JavaScript (browser)
    • Java
    • .Net
    • Node.js
    • PHP
    • Python
    • Ruby
    • Go
    • C++ (preview)
  • SDK default regions:
    • default region (for most SDKs): us-east-1
    • Some SDKs have default regions set (Java)
    • Some SDKs do not (Node.js)

Services that are free include: CloudFormation, Elastic Beanstalk, Autoscaling, Opsworks, etc. (however, the resources they create are not free; e.g., EC2 instances, ELBs)

Simple Storage Service (S3)

SEE: AWS/S3

Lambda

SEE: AWS/Lambda

Databases

SEE: Amazon RDS FAQs
RDS 
Relational Database Server
  • RDS (OLTP) Relational Database Types:
    • Aurora
    • MySQL Server
    • MariaDB
    • PostgreSQL
    • MS SQL Server
    • Oracle
RDS - Backups, multi-AZs, and read replicas
  • There are two different types of RDS backups:
  1. Automated Backups; and
  2. Snapshots (manual)
  • Automated Backups
    • Automated Backups allow you to recover your database to any point in time within a "retention period". The retention period can be between 1 and 35 days. Automated Backups will take a full daily snapshot and will also store transaction logs throughout the day. When you do a recovery, AWS will first choose the most recent daily backup, and then apply transaction logs relevant to that day. This allows you to do a point-in-time recovery down to a second, within the retention period.
    • Automated Backups are enabled by default. The backup data is stored in S3 and you get free storage space equal to the size of your database. Example: If you have an RDS instance of 10 GB, you will get 10 GB worth of storage.
    • Backups are taken within a defined window. During the backup window, storage I/O may be suspended while your data is being backed up and you may experience elevated latency.
  • Snapshots
    • RDS Snapshots are done manually (i.e., they are user initiated). They are stored even after you delete the original RDS instance (unlike automated backups).
  • Restoring backups
    • Whenever you restore either an Automated Backup or a manual Snapshot, the restored version of the database will be a new RDS instance with a new DNS endpoint.
  • Encryption
    • Encryption at rest is supported for MySQL, Oracle, SQL Server, PostreSQL, and MariaDB. Encryption is done using the AWS Key Management Service (KMS). Once your RDS instance is encrypted, the data stored at rest in the underlying storage is encrypted, as are its automated backups, read replicas, and snapshots.
    • As of April 2017, encrypting an existing RDS instance is not supported. To use RDS encryption for an existing database, create a new instance with encryption enabled and migrate your data into it.
  • Multi-AZ RDS
    • Allows you to have an exact copy of your production database in another Availability Zone (AZ). AWS handles the replication for you, so when you production database is written to, this write will automatically be synchronised to the standby database.
    • In the event of a planned database maintenance, instance failure, or an AZ failure, Amazon RDS will automatically failover to the standby so that database operations can resume quickly without administrative intervention.
    • This is meant for Disaster Recovery (DR) only. It is not primarily used for improving performance. For performance improvement, you need Read Replicas.
    • You cannot use the secondary database as an independent read node when you have deployed an RDS instance into multiple AZs (use Read Replicas instead).
  • Read Replicas
    • Allow you to have a read-only copy of your production database. This is achieved by using asynchronous replication from the primary RDS instance to the read replica. You use read replica's primarily for very read-heavy database workloads.
    • Use for scaling! Not for DR!
    • You must have automatic backups turned on in order to deploy a read replica.
    • You can have up to 5 read replica copies of any database.
    • You can have read replicas of read replicas (but watch out for latency).
    • Each read replica will have its own DNS endpoint.
    • You cannot have Read Replicas that have Multi-AZ.
    • You can, however, create Read Replicas of Multi-AZ source databases.
    • Read Replicas can be promoted to be their own databases (but this breaks the replication).
    • Able to create a Read Replica in a second region for MySQL and MariaDB. Not for PostgreSQL.
    • There is no charge associated with data transfer when replicating data from your primary RDS instance to your secondary instance.
    • Supported databases:
      • MySQL
      • PostgreSQL
      • MariaDB

Provisioned IOPS volumes can range in size from 100 GB to 6 TB for MySQL, MariaDB, PostgreSQL, and Oracle DB engines. SQL Server Express and Web editions can range in size from 100 GB to 4 TB, while SQL Server Standard and Enterprise editions can range in size from 200 GB to 4 TB.

DynamoDB vs. RDS
  • DynamoDB offers "push button" scaling (i.e., you can scale your database on-the-fly, without any down time).
  • With RDS, it is not so easy and you usually have to use a bigger instance size or add a Read Replica.
DynamoDB
  • A fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale. It is a fully managed database and supports both document and key-value data models. Its flexible data model and reliable performance make it a great fit for mobile, web, gaming, ad-tech, IoT, and many other applications.
  • Stored on SSD storage.
  • Spread across 3 geographically distinct data centres.
  • Eventually Consistent Reads (default)
    • Consistency across all copies of data is usually reached within a second. Repeating a read after a short time should return the updated data.
    • Best read performance.
  • Strongly Consistent Reads
    • Returns a result that reflects all writes that received a successful response prior to the read.
  • Non-relational Databases:
    • DynamoDB (document-oriented)
      • Collection (=Table)
      • Document (=Row)
      • Key-value pairs (=Fields)

An example (JSON) NoSQL document:

{
  "_id": "345sdf45asdf",
  "firstname": "John",
  "surname": "Smith",
  "age": "23",
  "address": [
    {
      "street": "123 First Street",
      "suburb": "Wallingford"
    }
  ]
}
Data Warehousing
  • Used for business intelligence (BI). Tools include: Cognos, Jaspersoft, SQL Server Reporting Services, Oracle Hyperion, SAP NetWeaver, etc.
  • Used to pull in very large and complex datasets. Usually used by management to perform queries on data (e.g., current performance vs. targets, etc.).
  • OLTP vs. OLAP
    • Online Transaction Processing (OLTP) differs from Online Analytics Processing (OLAP) in terms of the types of queries performed.
    • OLTP Example:
      • Return order number: 1234567
      • Pulls up a row of data (e.g., Name, Data, Address to deliver to, Delivery Status, etc.)
    • OLAP Example
      • Return net profit for EMEA and Pacific for the Digital Radio Product
      • Pulls in large numbers of records:
        Sum of radios sold in EMEA region
        Sum of radios sold in Pacific region
        Unit cost of radio in each region
        Sales price of each radio
        Sales price - unit cost

Data Warehousing databases use a different type of architecture, both from a database perspective and the infrastructure layer.

Redshift (OLAP)

Amazon Redshift is a fast and powerful, fully managed, petabyte-scale data warehouse service in the Cloud. Customers can start small for just $0.25 per hour with no commitment or upfront costs and scale to a petabyte or more for $1,000 per terabyte per year, less than a tenth of most other data warehousing solutions.

  • Start out: Single Node (160 GB)
  • Scale: Use Multi-Node, which consists of:
    • Leader Node (manages client connections and receives queries)
    • Compute Node (store data and perform queries and computations). Able to have up to 128 Compute Nodes.
  • Columnar Data Storage
    • Instead of storing data as a series of rows, Redshift organizes the data by column. Unlike row-based systems, which are ideal for transaction processing, column-based systems are ideal for data warehousing and analytics, where queries often involve aggregates performed over large datasets. Since only the columns involved in the queries are processed and columnar data is stored sequentially on the storage media, column-based systems require far fewer I/Os, greatly improving query performance (up to 10x faster).
  • Advanced Compression
    • Columnar data stores can be compressed much more than row-based data stores, because similar data is stored sequentially on disk. Redshift employs multiple compression techniques and can often achieve significant compression relative to traditional relational data stores. In addition, Redshift does not require indexes or materialized views and, as such, uses less space than traditional relational database systems. When loading data into an empty table, Redshift automatically samples your data and selects the most appropriate compression scheme.
  • Massively Parallel Processing (MPP)
    • Redshift automatically distributes data and query load across all nodes.
    • Redshift makes it easy to add nodes to your data warehouse and enables you to maintain fast query performance as your data warehouse grows.
  • Redshift pricing (you are charged for the following):
    • Compute Node Hours
      • Total number of hours you run across all your compute nodes for the billing period. You are billed for 1 unit per node per hour, so a 3-node data warehouse cluster running persistently for an entire month would incur 2,160 instance hours (60x24x30).
      • You are not charged for the Leader Node hours; only Compute Nodes will incur charges.
    • Backup
    • Data transfer (only within a VPC, not outside it)
  • Redshift Security
    • Encrypted in transit using SSL
    • Encrypted at rest using AES-256 encryption
    • By default, Redshift takes care of key management. However you can also,
      • Manage your own keys through Hardware Security Module (HSM); or
      • Use AWS Key Management Service (KMS)
  • Redshift Availability
    • Not designed to be multi-AZ. As of April 2017, only available in 1 AZ.
    • However, you can restore snapshots to new AZs in the event of an outage.
ElastiCache
  • A web service that makes it easy to deploy, operate, and scale and in-memory cache in the Cloud. The service improves the performance of web applications by allowing you to retrieve information for fast, managed, in-memory caches, instead of relying entirely on slower disk-based databases.
  • Amazon ElastiCache can be used to significantly improve latency and throughput for many read-heavy application workloads (e.g., social media, gaming, media sharing, and Q&A portals) or compute-intensive workloads (e.g., a recommendation engine).
  • Caching improves application performance by storing critical pieces of data in memory for low-latency access. Cached information may include the results of I/O-intensive database queries or the results of computationally-intensive calculations.
  • ElastiCache is a good choice if your database is particularly read-heavy and not prone to frequent changing.
  • ElastiCache supports two open-source in-memory caching engines:
    • Memcached
      • A widely adopted memory object caching system. ElastiCache is protocol-compliant with Memcached, so popular tools that you use today with existing Memcached environments will work seamlessly with the service.
    • Redis
      • A popular open-source in-memory key-value store that supports data structures such as sorted sets and lists. ElastiCache supports Master / Slave replication and Multi-AZ, which can be used to achieve cross-AZ redundancy.
Database Migration Service (DMS)
  • Announced at re:Invent 2015.
  • Allows you to (live) migrate your production database to AWS.
  • Once the migration has started, AWS manages all the complexities of the migration process like data type transformation, compression, and parallel transfer (for faster data transfer) while ensuring that data changes to the source database that occur during the migration process are automatically replicated to the target.
  • The AWS schema conversion tool automatically converts the source database schema and a majority of the custom code, including views, stored procedures, and functions, to a format compatible with the target database.
  • Example: convert Oracle to MySQL, etc.
Aurora

Amazon Aurora is a MySQL-compatible, relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. Aurora provides up to five times better performance than MySQL at a price point 1/10th that of a commercial database while delivering similar performance and availability. It was announced at re:Invent 2014.

  • Aurora scaling
    • Start with 10 GB, scales in 10 GB increments up to 64 TB (Storage Autoscaling; i.e., it autoscales for you)
    • Compute resources can scale up to 32 vCPUs and 244 GB of memory.
    • 2 copies of your data is maintained in each availability zone, with a minimum of 3 AZs. As such, Aurora maintains 6 copies of your data.
    • Aurora is designed to transparently handle the loss of up to two copies of data without affecting database write availability and up to three copies without affecting read availability.
    • Aurora storage is self-healing. Data blocks and disks are continuously scanned for errors and repaired automatically.
  • Aurora Replicas
    • There are 2 types of Replicas:
      1. Aurora Replicas (currently 15). Automatic failover.
      2. MySQL Read Replicas (currently 5). No automatic failover.

DynamoDB

SEE: Amazon DynamoDB FAQs (import to read for the exams!)

Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent single-digit millisecond latency at any scale. It is a fully managed database and supports both document and key-value data models. Its flexible data model and reliable performance make it a great fit for mobile, web, gaming, ad-tech, IoT, and many other applications.

  • Stored on SSD storage
  • Spread across 3 geographically distinct data centres (note: not distinct AZs)
  • Eventual Consistent READs (default)
    • Consistency across all copies of data is usually reached within a second. Repeating a read after a short time should return the update data. (best read performance.)
  • Strongly Consistent READs
    • A strongly consistent read returns a result that reflects all writes that received a successful response prior to the read.
  • The DynamoDB basics:
    • Tables
    • Items (think a row of data in the table)
    • Attributes (think of a column of data in a table)

Example DynamoDB table:

Item 101 (has the following attributes):
{                                        
   Title = "Book 101 Title"
   ISBN = "111-1111111111"
   Authors = "Author 1"
   Price = "-2"
   Dimensions = "8.5 x 11.0 x 0.5"
   PageCount = "500"
   InPublication = true
   ProductCategory = "Book" 
}
Item 201 (has the following attributes):
{
   Title = "18-Bicycle 201"
   Description = "201 description"
   BicycleType = "Road"
   Brand = "Brand-Company A"
   Price = "100"
   Color = [ "Red", "Black" ]
   ProductCategory = "Bike"
}

An item can have any number of attributes, although there is a limit of 400 KB on the item size. An item size is the sum of lengths of its attribute names and values (binary and UTF-8 lengths); it helps if you keep the attribute names short.

  • Supports up to 35 levels of nesting (JSON {foo,{bar,{baz,...)
  • For any AWS account, there is an initial limit of 256 tables per region (one can, however, request an increase from Amazon)
  • You can decrease the ReadCapacityUnits or WriteCapacityUnits settings for a table, but no more than four times per table in a single UTC calendar day. In a single operation, you can decrease the provisioned throughput for a table, for any global secondary indexes on that table, or for any combination of these.
  • Pricing
    • Provisioned Throughput Capacity:
      • Write throughput $0.0065 per hour for every 10 units
      • Read throughput $0.0065 per hour for every 50 units
    • First 25GB stored per month is free
    • Storage costs of $0.25 per GB per month thereafter

Pricing example:

Assume that one's application needs to perform 1 million writes and 1 million reads per day, while storing 28 GB of data.

  • First, one needs to calculate how many writes and reads per second one needs.
    • 1 million evenly spread writes per data is equivalent to:
1,000,000 (writes) / 24 (hours) / 60 (minutes) / 60 (seconds) = 11.6 writes per second

A DynamoDB Write Capacity Unit (WCU) can handle 1 write per second, so you need 12 WCUs (round up 11.6 to 12).

For write throughput, one is charged on $0.0065 per hour for every 10 units, thus:

($0.0065/10) * 12 WCUs * 24 hours = $0.1872 per day

Similarly, to handle 1 million strongly consistent reads per day, one needs 12 Read Capacity Units (RCUs).

For read throughput, one is charged $0.0065 per hour for every 50 units, thus:

So, ($0.0065/50) * 12 RCUs * 24 hours = $0.0374 per day

Storage cost is $0.25 per GB per month. Assume the database is 28 GB. One gets the first 25 GB for free, so one only pays for 3 GB of storage, which is $0.75 per month.

Total Cost = $0.1872 per day + $0.0374 per day + storage of $0.75 per month, thus:

(30 * ($0.1872 + $0.0374)) + $0.75 = $7.488

Answer: $7.488/month

With the Free Tier you get:

  • 25 Read Capacity Units
  • 25 Write Capacity Units
  • DynamoDB Indexes and Streams
  • Primary Keys (two types of primary keys available):
    • Single Attribute (think uniqueID)
      • Partition Key (Hash Key) composed of one attribute
    • Composite (think uniqueID and a date range)
      • Partition Key and Sort Key (Hash & Range) composed of two attributes
Partition Key 
DynamoDB uses the partition key's value as input to an internal hash function. The output from the hash function determines the partition (this is simply the physical location in which the data is stored).
No two items in a table can have the same partition key value!
Composite Key (Partition Key and Sort Key) 
DynamoDB uses the partition key's value as input to an internal hash function. The output from the hash function determines the partition (this is simply the physical location in which the data is stored).
The two items can have the same partition key, but the must have a different sort key.
All items with the same partition key are stored together, in sorted order by the sort key value.
  • Indexes
    • Local Secondary Index
      • an index that has the same hash key as the table, but a different range key. A local secondary index is "local" in the sense that every partition of a local secondary index is scoped to a table partition that has the same hash key.
      • has the same partition key and a different sort key.
      • can only be created at table creation. They cannot be removed or modified later.
      • maximum of 5 local secondary indexes per table
      • each table can have up to 20 projected non-key attributes, in total across all local secondary indexes within the table. Each index may also specify that all non-key attributes from the primary index are projected. (note: "projections" are the set of attributes that is copied into a local secondary index.)
    • Global Secondary Index
      • an index with a hash or a hash-and-range key that can be different from those on the table. A global secondary index is considered "global" because queries on the index can span all items in a table, across all partitions.
      • has a different partition key and a different sort key.
      • can be created at table creation or added later.
      • maximum of 5 global secondary indexes per table
  • DynamoDB Streams
    • used to capture any kind of modification of the DynamoDB tables.
    • If a new item is added to the table, the stream captures an image of the entire item including all of its attributes.
    • If an item in updated, the stream captures the "before" and "after" image of any attributes that were modified in the item.
    • If an item was deleted from the table, the stream captures an image of the entire item before it was deleted.
    • DynamoDB Streams are stored for a maximum of 24 hours.
    • Can trigger a Lambda function (e.g., replicate table in another region and/or create an SES to send an email to the user for, say, when they first register to the website as a "welcome" email)
  • One is able to export DynamoDB tables to a CSV file
  • DyanmoDB allows for push-button scalability (with zero downtime)
DynamoDB Query 
find items in a table using only primary key attribute values. You must provide a partition attribute name and a distinct value to search for.
One can (optionally) provide a sort key attribute name and value, and use a comparison operator to refine the search results.
Be default, a Query returns all of the data attributes for items with the specified primary key(s), however, you can use the ProjectExpression parameter so that the query only returns some of the attributes, rather than all of them.
Query results are always sorted by the sort key. If the data type of the sort key is a number, the results are returned in numeric order; otherwise, the results are returned in order of ASCII character code values. By default, the sort order is ascending. To reverse the order, set the ScanIndexForward parameter to false.
By default, Queries are eventually consistent, but can be changed to be strongly consistent.
DynamoDB Scan 
A Scan operation examines every item in the table. By default, a Scan returns all of the data attributes for every item; however, you can use the ProjectExpression parameter so that the Scan only returns some of the attributes, rather than all of them.

What should one typically use: A Query or a Scan? Generally, a Query operation is more efficient than a Scan operation.

A Scan operation always scans the entire table, then filters out values to provide the desired result, essentially adding the extra step of removing data from the result set. Avoid using a Scan operation on a large table with a filter that removes many results, if possible. Also, as a table grows, the Scan operation slows. The Scan operation examines every item for the requested values, and can use up the provisioned throughput for a large table in a single operation.

For quicker response times, design your tables in a way that can use the Query, GetItem, or BatchGetItem APIs, instead. Alternatively, design your application to use Scan operations in a way that minimizes the impact on your table's request rate.

What happens if you exceed your throughput? 400 HTTP Status Code - ProvisionedThroughputExceededException You exceeded your maximum allowed provisioned throughput for a table or for one or more global secondary indexes.

DynamoDB Conditional Writes 
If item = $10, then update to $12 (conditional writes are idempotent)
E.g., two users try to update the same item at the same time
DynamoDB Atomic Counters 
DynamoDB supports Atomic Counters, where you use the UpdateItem operation to increment or decrement the value of an existing attribute (or "field" in a table) without interfering with other write requests. (All write requests are applied in the order in which the were received.) For example, a web application might with to maintain a counter per visitor to their site, the application would need to increment this counter regardless of its current value.
Atomic Counters are not idempotent. This mean that the counter will increment each time you call UpdateItem. If you suspect that a previous request was unsuccessful, your application could retry the UpdateItem operation; however, this would risk updating the counter twice. This might be acceptable for a web site counter, because you can tolerate slightly over- or under-counting visitors. However, in a banking application, it would be safer to use conditional updates rather than atomic counters.

If you application needs to read multiple items, you can use the BatchGetItem API. A single BatchGetItem request can retrieve up to 1 MB of data, which can contain as many as 100 items. In addition, a single BatchGetItem request can retrieve items from multiple tables.

DynamoDB API

Note: This is an incomplete list. The following are the main API calls one can expect to see on an exam.

CreateTable 
Creates a table and specifies the primary index used for data access.
UpdateTable 
Updates the provisioned throughput values for the given table.
DeleteTable 
Deletes a table.
DescribeTable 
Returns table size, status, and index information.
ListTables 
Returns a list of all tables associated with the current account and endpoint.
PutItem 
Creates a new item, or replaces an old item with a new item (including all the attributes). If an item already exists in the specified table with the same primary key, the new item completely replaces the existing item. You can also use conditional operators to replace an item only if its attribute values match certain conditions, or to insert a new item only if that item does not already exist.
BatchWriteItem 
Inserts, replaces, and deletes multiple items across multiple tables in a single request, but not as a single transaction. Supports batches of up to 25 items to Put or Delete, with a maximum total request size of 16 MB.
UpdateItem 
Edits an existing item's attributes. You can also use conditional operators to perform an update only if the item's attribute values match certain conditions.
DeleteItem 
Deletes a single item in a table by primary key. You can also use conditional operators to perform a delete an item only if the item's attribute values match certain conditions.
GetItem 
The GetItem operation returns a set of Attributes for an item that matches the primary key. The GetItem operation provides an eventually consistent read by default. If eventually consistent reads are not acceptable for your application, use ConsistentRead.
BatchGetItem 
The BatchGetItem operation returns the attributes for multiple items from multiple tables using their primary keys. A single response has a size limit of 16 MB and returns a maximum of 100 items. Supports both strong and eventual consistency.
Query 
Gets one or more items using the table primary key, or from a secondary index using the index key. You can narrow the scope of the query on a table by using comparison operators or expressions. You can also filter the query results using filters on non-key attributes. Supports both strong and eventual consistency. A single response has a size limit of 1 MB.
Scan 
Gets all items and attributes by performing a full scan across the table or a secondary index. You can limit the return set by specifying filters against one or more attributes.
A Scan operation on a table or secondary index has a limit of 1MB of data per operation. After the 1MB limit, it stops the operation and returns the matching values up to that point, and a LastEvaluatedKey to apply in a subsequent operation, so that you can pick up where you left off.

Using Web Identity Providers with DynamoDB

One can authenticate users using Web Identity providers (e.g., Facebook, Google, Amazon, or any other Open-ID Connect-compatible identity provider). This is done using AssumeRoleWithWebIdentity API.

You will need to create a role first.

  1. Authenticate with Identity Provider (e.g., Facebook): Log into Facebook with your username + password
  2. Facebook returns a Web Identity Token
  3. AssumeRoleWithWebIdentity request (containing Web Identity Token, App ID of provider, and ARN of role) is sent to AWS Security Token Service
  4. Amazon then issues Temporary Security Credentials (limit from 15 minutes to 1 hour; default 1 hour)
    Credentials contain:
    1. AccessKeyID, SecretAccessKey, SessionToken
    2. Expiration (time limit)
    3. AssumeRoleID
    4. SubjectFromWebIdentity Token (the unique ID that appears in an IAM policy variable for this particular identity provider)
  5. Using the above credentials, the user is allowed to access DynamoDB

Example exam questions

  • DynamoDB provisioned throughput calculations
    • Unit of Read provisioned throughput
      • All reads are rounded up to increments of 4 KB in size.
      • Eventually consistent Reads (default) consist of 2 reads per second.
      • Strongly consistent Reads consist of 1 read per second.
    • Unit of Write provisioned throughput
      • All writes are 1 KB in size.
      • All writes consist of 1 write per second.
  • The Magic Formula
(size of Read rounded to nearest 4 KB chunk / 4 KB) x (number of items) = read throughput
# divide by 2 if eventually consistent.

Question: You have a motion sensor which writes 600 items of data every minute. Each item consists of 5kb. Your application uses eventually consistent reads. What should you set the read throughput to?

  • First calculate how many READ units per item we need
  • 5 KB rounded up to nearest 4 KB increment = 8 KB
  • 8 KB / 4 KB = 2 read units per item
  • 600 / 60 = 10 items per second
  • 2 x 10 read items = 20
  • Using eventually consistent reads, we get 20 / 2 reads per second = 10

Answer: 10 units of read throughput

Question: You have an application that requires to read 10 items ("rows" in a table) of 1 KB per second using eventual consistency. What should you set the read throughput to?

  • First calculate how many Read units per item we need
  • 1 KB rounded up to nearest 4 KB increment = 4 KB
  • 4 KB / 4 KB = 1 read units per item
  • 1 x 10 read items = 10
  • Using eventual consistency, we get 10 / 2 reads per second = 5

Answer: 5 units of read throughput

Question: You have an application that requires to read 10 items of 6 KB per second using eventual consistency. What should you set the read throughput to?

  • First calculate how many read units per item we need
  • 6 KB round up to nearest increment of 4 KB = 8 KB
  • 8 KB / 4 KB = 2 read units per item
  • 2 x 10 read items = 20
  • Using eventual consistency, we get 20 / 2 reads per second = 10

Answer: 10 units of read throughput

Question: You have an application that requires to read 5 items of 10 KB per second using eventual consistency. What should you set the read throughput to?

  • First calculate how many read units per item we need
  • 10 KB rounded up to the nearest increment of 4 KB = 12 KB
  • 12 KB / 4 KB = 3 read units per item
  • 3 x 5 read items = 15
  • Using eventual consistency, we get 15 / 2 reads per second = 7.5 => 8

Answer: 8 units of READ throughput

Question: You have an application that needs to read 25 items of 13kb in size per second. Your application uses eventually consistent reads. What should you set the READ throughput to?

  • First calculate how many read units per item we need
  • 13 KB rounded up to the nearest increment of 4 KB = 16 KB
  • 16 KB / 4 KB = 4 read units per item
  • 4 x 25 read items = 100
  • Using eventual consistency, we get 100 / 2 reads per second = 50

Answer: 50 units of READ throughput

Question: You have an application that requires to read 5 items of 10 KB per second using strong consistency. What should you set the READ throughput to?

  • First calculate how many read units per item we need
  • 10 KB rounded up to the nearest increment of 4 KB = 12 KB
  • 12 KB / 4 KB = 3 read units per item
  • 3 x 5 read items = 15
  • Using strong consistency, we do not divide by 2

Answer: 15 units of READ throughput

Question: You have a motion sensor which writes 600 items of data every minute. Each item consists of 5kb. Your application uses strongly consistent reads. What should you set the READ throughput to?

  • First calculate how many read units per item we need
  • 5 KB rounded up to the nearest increment of 4 KB = 8 KB
  • 8 KB / 4 KB = 2 read units per item
  • 600 / 60 = 10 reads per second
  • 2 x 10 read items = 20
  • Using strong consistency, we do not divide by 2

Answer: 20 units of READ throughput

Question: You have an application that needs to read 25 items of 13kb in size per second. Your application uses strongly consistent reads. What should you set the READ throughput to?

  • First calculate how many read units per item we need
  • 13 KB rounded up to the nearest increment of 4 KB = 16 KB
  • 16 KB / 4 KB = 4 read units per second
  • 4 x 25 read items = 100
  • Using strong consistency, we do not divide by 2

Answer: 100 units of READ throughput

Question: You have a motion sensor which writes 300 items of data every 30 seconds. Each item consists of 5kb. Your application uses eventually consistent reads. What should you set the READ throughput to?

  • First calculate how many read units per item we need
  • 5 KB rounded up to the nearest increment of 4 KB = 8 KB
  • 8 KB / 4 KB = 2 read units per second
  • 300 items of data every 30 seconds = 10 read items per second
  • 2 x 10 read items = 20
  • Using eventual consistency, we get 20 / 2 reads per second = 10

Answer: 10 units of READ throughput

Question: You have an application that requires to WRITE 5 items, with each item being 10 KB in size per second. What should you set the WRITE throughput to?

  • Each write unit consists of 1 KB of data. You need to write 5 items per second with each item using 10 KB of data
  • 5 x 10 KB = 50 write units

Answer: 50 units of WRITE throughput

Question: You have an application that requires to write 12 items of 100 KB per item each second. What should you set the WRITE throughput to?

  • Each write unit consists of 1 KB of data. You need to write 12 items per second with each item having 100 KB of data
  • 12 x 100 KB = 1200 write units

Answer: 1200 units of WRITE throughput

Question: You have a motion sensor which writes 600 items of data every minute. Each item consists of 5kb. What should you set the WRITE throughput to?

  • Each write units consists of 1 KB of data. You need to write 10 items per second with each having 5 KB of data
  • 10 x 5 KB = 50 write units

Answer: 50 units of WRITE throughput

Key Management Service (KMS)

AWS Key Management Service (KMS) is a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data. KMS is integrated with most other AWS services that encrypt your data with encryption keys that you manage.

Examples
  • Create a KMS key in the Oregon (us-west-2) region:
$ aws kms --region=us-west-2 create-key --description="my app assets"
{
    "KeyMetadata": {
        "CreationDate": 1494071487.263,
        "KeyState": "Enabled",
        "Arn": "arn:aws:kms:us-west-2:xxxxxxxxx:key/xxxxxxxxxxxxxxxxxxx",
        "AWSAccountId": "xxxxxxxxxxxxx",
        "Enabled": true,
        "KeyUsage": "ENCRYPT_DECRYPT",
        "KeyId": "xxxxxxxxx",
        "Description": "my app assets"
    }
}

Route 53

SEE: Route 53 FAQs
DNS 101

Route 53 is a global service (i.e., not on a per AWS region).

Note: ELBs do not have pre-defined IPv4 addresses. You resolve to them using a DNS name.

  • The Start of Authority (SOA) record stores information about:
    • The name of the server that supplied the data for the zone;
    • The administrator of the zone;
    • The current version of the data file;
    • The number of seconds a secondary name server should wait before checking for updates;
    • The number of seconds a secondary name server should wait before retrying a failed zone transfer;
    • The maximum number of seconds that a secondary name server can use data before it must either be refreshed or expire; and
    • The default number of seconds for the time-to-live (TTL) file on resource records.
  • Name Server (NS) records:
    • Used by Top Level Domain servers to direct traffic to the Content DNS server, which contains the authoritative DNS records.
  • A Records:
    • An A record is the fundamental type of DNS record and the "A" in A record stands for "Address".
    • The A record is used by a computer to translate the name of the domain to the IP address (e.g., http://www.example.com => http://1.2.3.4).
  • TTL
    • The length that a DNS record is cached on either the Resolving Server or the user's own local PC is equal to the value of the "Time To Live" (TTL) in seconds. The lower the TTL, the faster changes to DNS records take to propagate throughout the Internet.
  • CNAMES
    • A Canonical Name (CName) can be used to resolve one domain name to another. For example, you may have a mobile website with the domain name http://m.example.com that is used for when users browse to your domain name on their mobile devices. You may also want the name http://mobile.example.com to resolve to this same address.
    • CNAME lookups on AWS incur charges.
  • Alias Records
    • Used to map resource record sets in your hosted zone to ELBs, CloudFront distributions, or S3 buckets that are configured as websites.
    • Alias records work like a CNAME record, in that you can map one DNS name (www.example.com) to another "target" DNS name (elb1234.elb.amazonaws.com).
    • The key difference: A CNAME cannot be used for naked domain names (zone apex; e.g., example.com, not www.example.com). You cannot have a CNAME for http://example.com, it must be either an A record or an Alias.
    • Alias resource record sets can save you time because Route 53 automatically recognizes changes in the record sets that the alias resource record set refers to.
    • For example, suppose an alias resource record set for example.com points to an ELB at lb1-1234.us-west-2.elb.amazonaws.com. If the IP address of the load balancer changes, Route 53 will automatically reflect those changes in DNS answers for example.com without any changes to the hosted zone that contains resource record sets for example.com.
    • Alias Record lookups on AWS are free. Given the choice (on an exam), always choose an Alias Record over a CNAME (if possible).

Note: ELBs do not have a pre-defined IPv4 addresses, you always resolve them using a DNS name.

Route 53 Routing Policies
  • Simple
    • The default routing policy when you create a new record set.
    • This is most commonly used when you have a single resource that performs a given function for your domain (e.g., one web server that serves content for http://example.com).
  • Weighted
    • Allows you to split your traffic based on different weights assigned (e.g., send 20% of your traffic to us-east-1 and 80% to us-west-2).
  • Latency
    • Allows you to route your traffic based on the lowest network latency for your end user (i.e., which region will give them the fastest response time).
    • In order to use latency-based routing, you create a latency resource record set for the EC2 (or ELB) resource in each region that hosts your website. When Route 53 receives a query for your site, it selects the latency resource set for the region that gives the user the lowest latency. Route 53 then responds with the value associated with that resource record set.
  • Failover
    • Used when you want to create an active/passive setup. For example, you may want you primary site to be in us-west-2 and your secondary DR site in us-east-1.
    • Route 53 will monitor the health of your primary site using a health check, which monitors the health of your end points.
  • Geolocation
    • Lets you choose where your traffic will be sent based on the geographic location of your users (i.e., the location from which DNS queries originate). For example, you might want all queries from Europe to be routed to a fleet of EC2 instances that are specifically configured for your European customers. These servers may have the local language(s) of your European customers and all prices are displayed in Euros.

Simple Queue Service (SQS)

SEE: Amazon SQS FAQs

Very first service offered by AWS

  • SQS vs. RabbitMQ:
    • SQS is a managed service. So one does not have to worry about operational aspects of running a messaging system including administration, security, monitoring, etc. Amazon will do this for you and will provide support if something were to go wrong.
    • SQS is Elastic and can scale to very large rate/volumes (unlimited according to AWS)
    • Availability of SQS has a lot of 9's in it and is backed by Amazon, which is one less thing to worry about in your application.

Amazon SQS is a web service that gives you access to a message queue that can be used to store messages while waiting for a computer to process them.

SQS is a distributed queue system that enables web service applications to quickly and reliably queue messages that one component in the application generates to be consumed by another component. A queue is a temporary repository for messages that a awaiting processing.

Using SQS, you can decouple the components of an application so they run independently, with SQS easing message management between components. Any component of a distributed application can store messages in a fail-safe queue. Messages can contain up to 256 KB of text in any format. Any component can later retrieve the messages programmatically using the SQS API.

The queue acts as a buffer between the component producing and saving data, and the component receiving the data for processing. This mean that queue resolves issues that arise if the producer is producing work faster than the consumer can process it, or if the producer or consumer are only intermittently connected to the network.

Amazon SQS ensures delivery of each message at least once, and support multiple readers and writers interacting with the same queue. A single queue can be used simultaneously by many distributed application components, with no need for those components to coordinate with each other to share the queue.

SQS is engineered to always be available and deliver messages. One of the resulting trade-offs is that SQS does not guarantee first in, first out delivery of messages. For many distributed applications, each message can stand on its own, and as long as all messages are delivered, the order is not important. If your system requires that order be preserved, you can place sequencing information in each message, so that you can reorder the messages when the queue returns them.

To illustrate, suppose you have a number of images files to encode. In a SQS worker queue, you create a SQS message for each file specifying the command (jpeg-encode) and the location of the file in S3. A pool of EC2 instances running the needed image processing software does the following:

  1. Asynchronously pulls the task messages from the queue;
  2. Retrieves the named file;
  3. Processes the conversion (e.g., create a thumbnail, add a watermark, etc.)
  4. Write the image back to Amazon S3;
  5. Writes a "task complete" message to another queue;
  6. Deletes the original task message; and then
  7. Checks for more messages in the worker queue

Visibility Timeout Clock only starts when the component server (i.e., EC2 instance) pulls the message from the queue

  • SQS with Auto-scaling
  • SQS does not offer FIFO (first in, first out)
  • 12 hour visibility time out (by default)
  • SQS is engineered to provide "at least once" delivery of all messages in its queues. Although most of the time each message will be delivered to your application exactly once, you should design your system so that processing a message more than once does not create any errors or inconsistencies.
  • 256kb message size (as of May 2016)
  • Billed at 64kb "chunks"
  • A 256kb message will be 4 x 64kb "chunks"
  • SQS Pricing
    • First 1 million SQS requests per month are free
    • $0.50 per 1 million SQS requests per month thereafter ($0.0000005 per SQS request)
    • A single request can have from 1 to 10 messages, up to a maximum total payload of 256kb
    • Each 64kb "chunk" of payload is billed as 1 request. For example, a single API call with a 256kb payload will be billed as four requests.
  • If you see "decouple" on the exam, think SQS.
  • SQS Delivery:
    • SQS messages can be delivered multiple times and in any order (no first in, first out; last in, last out)
  • SQS - Default visibility timeout:
    • Default visibility timeout is 30 seconds
    • Maximum timeout is 12 hours
    • When you receive a message from a queue and begin processing it, you may find the visibility timeout for the queue is insufficient to fully process and delete that message. To give yourself more time to process the message, you can extend its visibility timeout by using the ChangeMessageVisibility action to specify a new timeout value. SQS restarts the timeout period using the new value.
  • SQS Long Polling:
    • SQS long polling is a way to retrieve messages from your SQS queues. While the traditional SQ short polling returns immediately, even if the queue being polled is empty, SQS long polling does not return a response until a message arrives in the queue, or the long poll times out. SQS long polling makes it easy and inexpensive to retrieve messages from your SQS queue as soon as they are available.
    • Maximum Long Poll timeout is 20 seconds
    • Example exam question: Polling in tight loops is burning CPU cycles and costing the company money. How would you fix this? (Answer: Enable SQS Long Polling.)
  • SQS - Fanning Out:
    • Create an SNS topic first using SNS. Then create a and subscribe multiple SQS queues to the SNS topic. Now whenever a message is sent to the SNS topic, the message will be fanned out to the SQS queues (i.e., SNS will deliver the message to all the SQS queues that are subscribed to the topic).

Question: You are designing a new application which involves processing payments and delivering promotional emails to customers. You plan to use SQS to help facilitate this. You need to ensure that the payment process takes priority over the creation and delivery of emails. What is the best way to achieve this?

Answer: Use 2 SQS queues for the platform. Have the EC2 fleet poll the payment SQS queue first. If this queue is empty, then poll the promotional emails queue.

Question: Your EC2 instances download jobs from the SQS queue, however they are taking too long to process them. What API call can you use to extend the length of time to process the jobs?

Answer: ChangeMessageVisibility

Question: You have a fleet of EC2 instances that are constantly polling empty SQS queues which is burning CPU compute cycles and costing your company money. What should you do?

Answer: Enable SQS Long Polling

Simple Notification Service (SNS)

SEE: Amazon SNS FAQs

SNS is a web service that makes it easy to set up, operate, and send notifications from the cloud. It provides developers with a highly scalable, flexible, and cost-effective capability to publish messages from an application and immediately deliver them to subscribers or other applications.

SNS follows the "publish-subscribe" (pub-sub) messaging paradigm, with notifications being delivered to clients, using a "push" mechanism that eliminates the need to periodically check or "poll" for new information and updates. With simple APIs requiring minimal up-front development effort, no maintenance or management overhead and pay-as-you-go pricing, SNS gives developers an easy mechanism to incorporate a powerful notification system with their applications.

Push notifications to Apple, Google, Fire OS, and Windows devices, as well as Android devices in China with Baidu Cloud Push.

Besides pushing cloud notifications directly to mobile devices, SNS can also deliver notifications by SMS text message or email, to Amazon Simple Queue Service (SQS) queues or to any HTTP endpoint.

SNS notifications can also trigger Lambda functions. When a message is published to an SNS topic that has a Lambda function subscribed to it, the Lambda function is invoked with the payload of the published message. The Lambda function receives the message payload as an input parameter and can manipulate the information in the message, publish the message to other SNS topics, or send the message to other AWS services.

To prevent messages from being lost, all messages published to SNS are stored redundantly across multiple availability zones.

  • SNS - Topics:
    • SNS allows you to group multiple recipients using "topics". A topic is an "access point" for allowing recipients to dynamically subscribe to identical copies of the same notification. One topic can support deliveries to to multiple endpoint types. For example, one can group together iOS, Android, and SMS recipients. When you publish once to a topic, SNS delivers appropriately formatted copies of your message to each subscriber.
    • Subscriptions via email require the receiving email owner to confirm the subscriptions in order to receive notifications from the given topic (prevents spam). Subscription requests expire after 3 days, if the receiving email owner does not confirm the subscription.

Example SNS email notification:

{
  "Type" : "Notification",
  "MessageId" : "436d9234-f427-5be8-aa54-dd98ae4e286dba0",
  "TopicArn" : "arn:aws:sns:us-west-2:01234:MyTestSNSTopic",
  "Subject" : "This is a test",
  "Message" : "Hello, world!",
  "Timestamp" : "2016-05-10T21:53:37.981Z",
  "SignatureVersion" : "1",
  "Signature" : "PInj15UDcwMI==",
  "SigningCertURL" : "https://sns.us-west-2.amazonaws.com/SimpleNotificationService-9390147a5624348ee.pem",
  "UnsubscribeURL" : "https://sns.us-west-2.amazonaws.com/?Action=Unsubscribe&SubscriptionArn=arn:aws:sns:us-west-2:01234:MyTestSNSTopic:1a4eead5-110c-4acb-894a-91cdf358aabc",
  "MessageAttributes" : {
    "AWS.SNS.MOBILE.MPNS.Type" : {"Type":"String","Value":"token"},
    "AWS.SNS.MOBILE.MPNS.NotificationClass" : {"Type":"String","Value":"realtime"},
    "AWS.SNS.MOBILE.WNS.Type" : {"Type":"String","Value":"wns/badge"}
  }
}
  • SNS benefits:
    • Instantaneous, push-based delivery (no polling, like SQS)
    • Simple APIs and easy integration with applications
    • Flexible message delivery over multiple transport protocols
    • Inexpensive, pay-as-you-go model with no up-front costs
    • Web-based AWS Management Console offers the simplicity of a point-and-click interface
  • SNS vs. SQS:
    • Both messaging services in AWS
    • SNS => Push
    • SQS => Pulls (Polls)
  • SNS Pricing:
    • Users pay $0.50 per 1 million SNS requests
    • $0.06 per 100,000 notification deliveries over HTTP
    • $0.75 per 100 notification deliveries over SMS
    • $2.00 per 100,000 notification deliveries over email

SNS data format = JSON

  • SNS protocols include:
    • HTTP
    • HTTPS
    • Email
    • Email-JSON
    • Amazon SQS
    • Application
    • AWS Lambda

NOTE: Messages can be customized for each protocol

Simple Workflow Service (SWF)

SEE: Amazon SWF FAQs

Amazon Simple Workflow Service (Amazon SWF) is a web service that makes it easy to coordinate work across distributed application components. SWF enables applications for a range of use cases, including media processing, web application back-ends, business process work-flows, and analytics pipelines, to be designed as a coordination of tasks. Tasks represent invocations of various processing steps in an application, which can be performed by executable code, web service calls, human actions, and scripts.

  • SWF Actors
Workflow Starters
an application that can initiate (start) a workflow. This could be your e-commerce website when placing an order or a mobile app searching for bus times.
SWF Workers (Activity Workers) 
programs that interact with SWF to get tasks, process received tasks, and return the results.
SWF Decider 
a program that controls the coordination of tasks, i.e., their ordering, concurrency, and scheduling according to the application logic.

The workers and the decider can run on cloud infrastructure, such as Amazon EC2, or on machines behind firewalls. SWF brokers the interactions between workers and the decider. It allows the decider to get consistent views into the progress of tasks and to initiate new tasks in an ongoing manner. At the same time, SWF stores tasks, assigns them to workers when they are ready, and monitors their progress. It ensures that a task is assigned only once and is never duplicated. Since SWF maintains the application's state durably, workers and deciders do not have to keep track of execution state. They can run independently and scale quickly.

SWF Domains 
Your work-flow, activity types, and the workflow execution itself are all scoped to a domain. Domains isolate a set of types, executions, and task lists from others within the same account. One can register a domain by using the AWS Management Console or by using the RegisterDomain action in the SWF API.
The parameters of an SWF Domain are specified in JSON format. E.g.:
https://swf.us-west-2.amazonaws.com
RegisterDomain
{
  "name": "123456789",
  "description": "images",
  "workflowExecutionRetentionPeriodInDays": "60"
}

Maximum Workflow can be 1 year and the value is always measured in seconds.

  • SWF vs. SQS:
    • SWF presents a task-oriented API, whereas SQS offers a message-oriented API.
    • SWF ensures that a task is assigned only once and is never duplicated. What SQS, one needs to handle duplicate messages and may also need to ensure that a message is processed only once.
    • SWF keeps track of all the tasks and events in an application. With SQS, one needs to implement one's own application-level tracking, especially if one's application uses multiple queues.
    • Does the service require human interaction? If so, one should use SWF.
    • Does the service need to run for (much) more than 12 hours? If so, one should use SWF. If less than 12 hours, SQS might be the correct service to use.
    • Maintaining your application's execution state (e.g. which steps have completed, which ones are running, etc.) is a perfect use case for SWF.
    • Amazon SWF is useful for automating work-flows that include long-running human tasks (e.g. approvals, reviews, investigations, etc.). Amazon SWF reliably tracks the status of processing steps that run up to several days or months.
    • SQS has a retention period of 14 days; SWF up to 1 year for workflow executions.

Elastic Transcoder

Amazon Elastic Transcoder lets you convert digital media stored in Amazon S3 into the audio and video codecs and the containers required by consumer playback devices. For example, you can convert large, high-quality digital media files into formats that users can play back on mobile devices, tablets, web browsers, and connected televisions.

It is a media transcoder in the Cloud. It allows you to convert media files from their original source format into different formats that will play on smartphones, tables, PCs, etc. It provides transcoding present for popular output formats, which mean that you do not need to guess which settings work beset on particular devices.

Pay based on the minutes that you transcode and the resolution at which you transcode.

Elastic Transcoder has three components:

  1. Pipelines are queues that manage your transcoding jobs. Elastic Transcoder begins to process jobs in the order in which you add them to a pipeline. Typically, you will create at least two pipelines, one for standard-priority jobs and one for high-priority jobs. Most jobs go into the standard-priority pipeline; you use the high-priority pipeline only when you need a file to be transcoded immediately.
  2. Jobs specify the settings that are not included in the preset, for example, the file to transcode and whether to create thumbnails. Each job converts one file into one different format. When you create a job, Elastic Transcoder adds it to the pipeline you specify. If there are already jobs in the pipeline, Elastic Transcoder begins processing the new job when resources are available.
  3. Presets are templates that specify most of the settings for the transcoded media file. Elastic Transcoder includes some default presets for common formats. You can also create your own presets. When you create a job, you specify which preset to use.

API Gateway

Amazon API Gateway is a fully managed service that makes it easy for developers to publish, maintain, monitor, and secure APIs at any scale. With a few clicks in the AWS Management Console, you can create an API that acts as a "front door" for applications to access data, business logic, or functionality from your back-end services, such as applications running on EC2, code running on Lambda, or any web application.

API Caching

You can enable API caching in API Gateway to cache your endpoint's response. With caching, you can reduce the number of calls made to your endpoint and also improve the latency of the requests to your API. When you enable caching for a stage, API Gateway caches responses from your endpoint for a specified time-to-live (TTL) period, in seconds. API Gateway then responds to the request by looking up the endpoint response from the cache instead of making a request to your endpoint.

  • API Gateway provides:
    • Low cost and efficient
    • Scales effortlessly
    • You can throttle requests to prevent attacks
    • You can connect to CloudWatch to log all requests
  • Same origin policy
    • In computing, the same-origin policy is an important concept in the web application security model. Under the policy, a web browser permits scripts contained in a first web page to access data in a second web page, but only if both web pages have the same origin.
Cross-Origin Resource Sharing (CORS)
  • CORS is one way the server at the other end (not the client code in the browser) can relax the same-origin policy.
  • CORS is a mechanism that allows restricted resources (e.g., fonts) on a web page to be requested from another domain outside the domain from which the first resource was served.
  • Example error: "Origin policy cannot be read at the remote resource" => You need to enable CORS on API Gateway.

CloudFormation

SEE: Amazon CloudFormation FAQs

CloudFormation => Scripted infrastructure (IaaS)

Using the CloudFormation service is free. However, any resources it creates/consumes/provisions (e.g., EC2 instances, Load Balancers, etc.) are not free.

CloudFormation templates are written in JSON format. E.g., CloudFormation LAMP stack template (see here for full template):

{
  "AWSTemplateFormatVersion" : "2010-09-09",
  
  "Description" : "AWS CloudFormation Sample Template LAMP_Single_Instance:
Create a LAMP stack using a single EC2 instance and a local MySQL database for
storage. This template demonstrates using the AWS CloudFormation bootstrap
scripts to install the packages and files necessary to deploy the Apache web
server, PHP and MySQL at instance launch time. **WARNING** This template
creates an Amazon EC2 instance. You will be billed for the AWS resources used
if you create a stack from this template.",
  
  "Parameters" : {
      
    "KeyName": {
      "Description" : "Name of an existing EC2 KeyPair to enable SSH access to the instance",
      "Type": "AWS::EC2::KeyPair::KeyName",
      "ConstraintDescription" : "must be the name of an existing EC2 KeyPair."
    }, 
...

If a CloudFormation stack creation fails, the default is to terminate and roll-back all resources created on failure (i.e., delete all of the resources it was trying to create). One can disable roll back to leave all resources in their current state (failed or not). This is useful for troubleshooting your own templates.

Question : You are creating a virtual data centre using cloud formation and you need to output the DNS name of your load balancer. What command/function would you use to achieve this?

Answer: Fn::GetAtt function

Elastic Beanstalk

AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS.

You can simply upload your code and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, auto-scaling, to application health monitoring. At the same time, you retain full control over the AWS resources powering your application and can access the underlying resources at any time.

Using the AWS Elastic Beanstalk service is free. However, any AWS resources it creates/consumes/provisions to store and run your applications are not free.

  • Environment tier:
    • Web Server Environment - Provides resources for an AWS Elastic Beanstalk web server in either a single instance or load-balancing, auto scaling environment.
    • Worker Environment - Provides resources for an AWS Elastic Beanstalk worker application in either a single instance or load-balancing, auto scaling environment.
  • Environment type:
    • Single instance
    • Load balancing, auto-scaling
  • Preconfigured platforms:
    • PHP, Node.js, Python, Ruby, Tomcat, IIS, Java, Go, Docker

Virtual Private Cloud (VPC)

SEE: Amazon VPC FAQs
Think of a VPC as a virtual data centre in the Cloud.
  • AWS definition of a VPC:
    • Amazon Virtual Private Cloud (VPC) lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including select of you own IP address range, creation of subnets, and configuration of route tables and network gateways.
    • You can easily customize the network configuration for your AWS virtual Private Cloud. For example, you can create a public-facing subnet for your webservers that has access to the Internet, and place backend systems, such as databases or application service in a private-facing subnet, with no Internet access. You can leverage multiple layers of security, including security groups and network access control lists, to help control access to EC2 instance in each subnet.
    • Additionally, one can create a Hardware Virtual Private Network (VPN) connection between your corporate datacentre and your VPC and leverage the AWS cloud as an extension of your corporate datacentre (aka a "Hybrid Cloud").
    • VPCs consist of IGWs (or Virtual Private Gateways), Route Tables, Network Access Control Lists, Subnets, Security Groups, etc.
  • What can one do with a VPC?
    • Launch instances into a subnet of one's choosing
    • Assign custom IP address ranges in each subnet
    • Configure route tables between subnets
    • Create internet gateways and attach them to subnets (or not). Only one internet gateway per VPC.
    • Much better security control over your AWS resources
    • Instance security groups (these are stateful: HTTP in = HTTP out)
    • Create subnet network access control lists (ACLs). These are stateless: HTTP in != HTTP out (you must create separate ACLs for each).
    • Each subnet is mapped directly to an AZ, and only one AZ (you cannot span subnets across AZs) => 1 subnet = 1 AZ.
    • Security groups, route tables, and ACLs can span multiple subnets.
    • Number of allowed VPCs in each AWS Region (by default): 5
  • Default VPC vs. Custom VPC
    • A Default VPC is user-friendly (automatically created when one creates an AWS account). It allows one to immediately deploy instances.
    • All subnets in a default VPC have an internet gateway attached (i.e., all subnets are public / all subnets have a route out to the Internet).
    • Each EC2 instance has both a public and private IP address
    • If one were to delete the default VPC, the only way to get it back is to contact AWS
  • VPC Peering
    • Allows one to connect one VPC with another via a direct network route using private IP addresses.
    • Instances behave as if they were on the same private network.
    • One can peer VPCs with other AWS accounts as well as with other VPCs in the same account.
    • One cannot create a VPC larger than /16
    • Peering is done in a "star configuration", i.e., 1 central VPC peers with 4 others. No transitive peering!
  • A "star configuration" (or hub-and-spoke) peering:
                 +-------+
                 | VPC C |
                 +-------+
                     ^
                     |
                     v
+-------+        +-------+        +-------+
| VPC B | <----> | VPC A | <----> | VPC E |
+-------+        +-------+        +-------+
                     ^
                     |
                     v
                 +-------+
                 | VPC D |
                 +-------+

In the above example, instances on VPC-B can not send/receive traffic on VPC-C via VPC-A. One would need to create a VPC peer directly from VPC-B and VPC-C. That is, with a star configuration (as shown above), there is no transitive peering.

  • VPC tenancy:
    • Default - EC2 instances are created on shared hardware
    • Dedicated - EC2 instances are created on dedicated hardware (regardless of the tenancy attribute specified at launch). This the more expensive option.

By default, when one creates a VPC, a route tables is automatically created for the VPC.

NOTE: If one deletes one's account's default VPC (and/or the associated default subnets), the only way to get it back is to raise a ticket with Amazon.

  • VPC subnets:
    • Use the CIDR format to specify your subnet's IP address block (e.g., 10.0.0.0/24). Note that block sizes must be between a /16 netmask and /28 netmask. Also, note that a subnet can be the same size as your VPC.
    • Subnets are always mapped to one availability zone (AZ). Subnets can not be mapped across multiple AZs. 1 subnet = 1 AZ.
  • VPC Internet gateways:
    • By default, when one creates an Internet Gateway, it is detached. One must attach it to a VPC in order to use it.
    • One can only have 1 Internet Gateway per VPC.

Security Groups can span multiple VPCs and VPC subnets.

  • Network Address Translation (NAT) Server:
    • Allow instances with only private IPs to reach the Internet via the NAT server (via, say, HTTP/HTTPS and all other protocols/ports closed, including SSH)
    • One must disable Source/Destination Check on NAT instances for them to work properly.
    • SEE: Comparison of NAT Instances and NAT Gateways
  • NAT instances example:
    • Create a custom security group
    • Allow inbound traffic to 10.0.1.0/24 and 10.0.2.0/24 on HTTP and HTTPS
    • Allow outbound traffic on HTTP and HTTPS to anywhere
    • Provision a NAT instance inside the public subnet (make sure the NAT instance has a public IP)
    • Import! Make sure to select "Disabled Source/Destination Check" on this NAT instance!
    • Set up a route on the private subnet to route traffic through the NAT instance.
    • NAT instance behind a security group
    • The amount of traffic that NAT instances support depends on the instance size. If you are reaching a bottleneck, increase the instance size.
    • You can create HA using Autoscaling Groups with multiple subnets in different AZs (and use a script to automate failover).
    • NOTE: It is better (and easier) to use a NAT Gateway over a NAT Instance.
  • NAT Gateways
    • Preferred by enterprise organizations
    • Scale automatically up to 10Gbps
    • No need to patch OS (e.g., no need for `yum update`, etc.)
    • Not associated with security groups
    • Automatically assigned a public IP address
    • No need to disable Source/Destination Checks.
    • Remember to update your VPC route tables after creating the NAT Gateway.
  • NAT instances vs. Bastions
    • A NAT instance is used to provide Internet traffic to EC2 instances in private subnets.
    • A Bastion is used to securely administer EC2 instances (using SSH) in private subnets.

Network Access Control Lists (ACLs)

SEE: Network ACLs

  • A network ACL is an optional layer of security that acts as a firewall for controlling traffic in and out of a subnet.
  • ACLs ~ "firewall"-like rules.
  • ACLs are a numbered list of rules, which are followed in order, starting with the lowest number first. They control network ingress/egress for all AWS resources within a given subnet.
  • The highest ACL number allowed is 32766.
  • Your VPC automatically comes with a default ACL and by default it allows all inbound/outbound traffic.
  • ACLs have a default (editable) number list that allows all inbound/outbound traffic.
  • One can create a custom ACL, which starts out with no inbound/outbound traffic allowed, until one adds a rule.
  • ACLs are applied to an entire subnet (and override the security groups associated with a given instance on that subnet). For an example, if a security group applied to a given instance has port 80 allowed, but the ACL for the subnet the instance is on has port 80 denied, the ACL rule takes precedence (i.e., port 80 will be denied on all instances within that subnet, regardless of what the security group allows).
  • Unless one creates a custom ACL and associates it with a given subnet, that ACL will use the default role and rules.
  • One can not have multiple ACLs associated with the same subnet. However, a given ACL can be associated with multiple subnets.
  • When one associates a custom ACL with a subnet, the previous association is removed
  • If one dis-associates a custom ACL from a given subnet(s), the subnet reverts back to the default ACL.
  • If one wishes to block a specific IP address, use ACLs not Security Groups.
  • Security Groups (SGs) vs. Network ACLs (ACLs)
    • SGs operate at the instance level (first layer of defense). ACLs operate at the subnet level (second layer of defense).
    • SGs allow rules only (everything is denied unless opened). ACLs allow rules and deny rules.
    • SGs are stateful (return traffic is automatically allowed, regardless of any rules). ACLs are stateless (return traffic must be explicitly allowed by rules).
    • SGs: AWS evaluates all rules before deciding whether to allow traffic. ACLs: AWS process rules in number order when deciding whether to allow traffic.

Example labs

  • VPC Lab:
    • Create a custom VPC
      • Define an IP Address Range (e.g., 10.0.0.0/16)
      • By default, this created a Network ACL and Routing Table
    • Create a custom Route Table
    • Create 3 subnets (e.g., 10.0.1.0/24, 10.0.2.0/24, and 10.0.3.0/24)
      • public subnet: 10.0.1.0/24; private subnets: 10.0.2.0/24 and 10.0.3.0/24
    • Create an Internet Gateway
    • Attach Internet Gateway to the custom VPC
    • Associate the public subnet with the custom Route Table
    • Provision an EC2 instance with an Elastic IP address (in the public subnet)
    • Provision an EC2 instance with only a private IP address (on the private subnet)

Kinesis

Streaming Data is data that is generated continuously by thousands of data sources, which typically send in the data records simultaneously and in small sizes (on the order of kilobytes).

Example streaming data sources:

  • Purchases from Online stores (e.g., amazon.com)
  • Stock prices
  • Game data (as the gamer plays)
  • Social network data
  • Geospatial data (e.g., Uber)
  • IoT sensor data

Amazon Kinesis is a platform on AWS to send your streaming data too. Kinesis makes it easy to load and analyze streaming data, and also provides the ability for you to build your own custom applications for your business needs.

  • Core Kinesis services
  1. Kinesis Streams
    • Collect and stream data for ordered, replayable, real-time processing.
    • Retention: stored by default for 24 hours and up to 7 days.
    • Consists of shards: 5 transactions per second for reads (up to a maximum total data read rate of 2 MB per second) and up to a maximum total data write rate of 1 MB per second (including partition keys).
    • The data capacity of your stream is a function of the number of shards that you specify for the stream. The total capacity of the stream is the sum of the capacities of its shards.
  2. Kinesis Firehose
    • Continuously deliver streaming data to Amazon S3, Amazon Redshift, and Amazon Elasticsearch Service.
    • Analyze data using Lambda functions.
  3. Kinesis Analytics
    • Analyze streaming data from Amazon Kinesis Firehose and Amazon Kinesis Streams in real-time using SQL.

AWS Security Hub

AWS Security Hub provides you with a comprehensive view of the security state of your AWS resources. Security Hub collects security data from across AWS accounts and services, and helps you analyze your security trends to identify and prioritize the security issues across your AWS environment.

AWS Security Hub integrates with other AWS services. One can forward all the findings from those services to Security Hub for a centralized view.

The following services are supported:

AWS Shared Responsibility

Shared Responsibility Model for AWS Infrastructure Services
  • Infrastructure services include: EC2, EBS, Auto-scaling, VPC, etc
  • Amazon responsibility (i.e., managed by AWS):
    • AWS global infrastructure (regions, availability zones, edge locations)
    • Foundation services (compute, storage, databases, networking)
    • AWS endpoints
    • AWS IAM
  • Customer responsibility (i.e., managed by AWD customers):
    • Server-side encryption, network traffic protection
    • Operating system, network, and firewall configuration(s)
    • Platform and application management
    • Customer data
    • Customer IAM
Shared Responsibility Model for AWS Container Services
  • Container services include: Relational Database Services (RDS), Elastic Map Reduce (EMR), and Elastic Beanstalk
  • AWS takes on more of the responsibility (e.g., operating system and network configuration; platform and application management)
  • Customer still has the responsibility for firewall configuration
Shared Responsibility Model for AWS abstracted services
  • Abstracted services include: S3, Glacier, DynamoDB, SQS, and Simple Email Service (SES), Lambda
  • AWS takes on even more of the responsibility (e.g., network traffic protection provided by the platform; server-side encryption provided by the platform)
  • Customer still has the responsibility for client-side data encryption, data integrity authentication, and customer data

Exams overview

AWS Certified Developer - Associate

  • Time allotted: 80 minutes
  • 55 questions on the exam
  • $150 example registration fee
  • Conducted Online at an approved centre

AWS Certified Solutions Architect - Associate

  • Time allotted: 80 minutes
  • 60 questions on the exam
  • $150 example registration fee
  • Conducted Online at an approved centre
  • AWS platforms covered:
    • Security & Identity
    • Compute
    • Storage
    • Databases
    • Networking & Content Delivery
    • Messaging
    • Desktop & App Streaming (only at a very high-level)
    • Management Tools (only at a very high-level)
  • AWS Global Infrastructure (what all of the above platforms/services reside in)
    • As of December 2016: 14 Regions and 38 Availability Zones (AZs)
    • In 2017: 4 more Regions and 11 more AZs
  • Edge Locations are CDN Endpoints for CloudFront (as of December 2016, there are ~66 Edge Locations)

AWS Certifications

NOTE: All AWS certification exams are taken on-site and proctored.

  • Associate Level ($150 each):
    • AWS Certified Developer - Associate
    • AWS Certified Solutions Architect - Associate
    • AWS Certified SysOps Administrator - Associate
  • Professional Level ($300 each):
    • AWS Certified DevOps Engineer - Professional
    • AWS Certified Solutions Architect - Professional
  • Specialty (Beta, as of January 2017)
    • AWS Certified Security - Specialty
    • AWS Certified Big Data - Specialty
    • AWS Certified Advanced Networking - Specialty

The AWS Partner Program

The AWS Partner Program
Partner Associate Certs Professional Certs
Standard 2 0
Advanced 4 2
Premier 20 8


Glossary

see: Official AWS Glossary
AMI
Amazon Machine Image
ARN
Amazon Resource Name
EBS
Elastic Block Storage (virtual disks for EC2 instances)
EC2
Elastic Compute Cloud
EC2 Container Service (ECS)
Elastic Beanstalk
EFS
Elastic File System
ELB
Elastic Load Balancer
KRADLE
Kinesis, Redshift, Aurora, Dynamo DB, Lambda, EMR (lock-in services)
Lambda
Serverless code
Lightsail
Out-of-the-box Cloud
STS
Security Token Service
VPC
Virtual Private Cloud
Route53
DNS + ability or register domain names
CloudFront
Content Delivery Network (CDN) / Edge Locations
DirectConnect

Storage

S3 
Simple Storage Service (object-based storage)
Glacier 
Data archival (for objects in S3). Low cost.
EFS
Elastic File Service (block-based storage; shareable)
Storage Gateway 
Connect S3 to on-premise DC

Databases

RDS 
Relational Database Service (e.g., MySQL, MariaDB, Aurora, Postgres, etc.)
DynamoDB 
NoSQL (non-relational database)
Redshift
Data warehousing
Elasticache 

Migration

Snowball 
Move large amounts of data into the Cloud (e.g., contents of a HDD)
DMS
Database Migration Service (e.g. in-house Oracle DB into AWS RDS:Aurora)
Server Migration Service 
Virtual machine migration (e.g., on-premise VMware VMs into AWS)

Analytics

Athena 
Run SQL queries on S3 (e.g., CSV/JSON files)
EMR 
Elastic MapReduce (process large amounts of data {Big Data}; e.g., log files)
CloudSearch 
Elastic Search 
Kinesis 
Stream and analyse live/real-time data (e.g., financial data, social media feeds, etc.)
Data Pipeline 
Allows moving data from one location to another (e.g., from S3 to DynamoDB or vice versa, etc.)
Quick Sight 
Business analytics tool

Security & Identity

IAM 
AWS Identity and Access Management
Inspector 
Agent-based service to inspect EC2 instances, etc.
Certificate Manager 
Free SSL certs
Directory Service 
Active Directory in the Cloud
WAF 
Web Application Firewall (e.g., protect against SQL injections, etc.)
Artifacts 
Compliance Reports (e.g., ISO 27001 certification, etc.)

Management Tools

Cloud Watch 
Monitor performance of AWS (e.g. EC2 => CPU/RAM util)
Cloud Formation 
Infrastructure as Code (document-based; JSON/YAML)
Cloud Trail 
Audit AWS resource usage
OpsWorks 
Chef for AWS
Config 
Monitor AWS environment (e.g., send alert if someone creates an IAM role that breaks company policy)
Trusted Advisor 
Automate performance, security, fault-tolerance, cost, etc.

Application Services

Step Functions
Visualize what is going on inside an application (and/or microservice)
SWF 
Simple Workflow Service (coordinate automated vs. human tasks)
API Gateway 
Create, publish, maintain APIs in the Cloud
AppStream 
Stream desktop applications to users
Elastic Transcoder 
Change video format (e.g., for viewing on different devices)

Developer Tools

CodeCommit 
GitHub in AWS
CodeBuild 
Compile code in the Cloud
CodeDeploy 
Deploy code to EC2 instances
CodePipeline 
Keep track of versions of code (e.g., dev, test, prod, UAT)

Mobile Services

Mobile Hub 
Cognito
Device Farm
Mobile Analytics
Pinpoint 
Google Analytics for mobile applications

Business Productivity

WorkDocs
WorkMail 
Exchange for AWS

Internet of Things (IoT)

IoT

Desktop & App Streaming

WorkSpaces 
Virtual Desktops in the Cloud / Virtual Desktop Infrastructure (VDI) solutions
AppStream 2.0 
Stream dekstop applications to users

Artificial Intelligence

Lex 
Think "Alexa in the Cloud" or Alexa on a RaspberryPi
Polly 
Text-to-Speech (text => mp3 in S3)
Machine Learning
Rekognition 
Analyse pictures with tagging and facial recognition

Messaging

SNS 
Simple Notification Service
SQS 
Message Queue Service
SES 
Simple Email Service

Links

AWS Whitepapers
Training and certification
Miscellaneous

Pages in category "AWS"

The following 8 pages are in this category, out of 8 total.