AWS Tutorial

Cloud computing has become an integral part of businesses of all sizes. It serves businesses by offering scalability, reliability, and low-cost infrastructure. Amazon Web Services (AWS) powers thousands of businesses across the world. This AWS tutorial covers the crucial concepts necessary to learn about its services, such as S3, EC2, VPC, CloudFormation, etc. It will also help you build the skills and tools to become a successful AWS solutions architect. Now, let us go through the concepts of AWS.

1. Overview of AWS

AWS improves efficiency and provides cost savings for any number of business practices. In the early 2000s, businesses were completely dependent on purchased servers, which offered limited functionality at a high cost. Moreover, a functioning server required countless validations. As businesses grew, they needed more servers and more optimization work, and acquiring those proved inefficient and, sometimes, prohibitively expensive. AWS has resolved many such issues: servers are instantly available to organizations that use it, and it offers different workloads, increased storage options, and enhanced security measures.

2. What is AWS?

AWS is a cloud provider. Among other features, cloud providers permit more storage flexibility and enhanced security measures. AWS also offers features such as local data centers, bolstered security, higher computing capacity, and database construction. Depending on the location, you can also get other features like content caching. One of the benefits of AWS is that you get all 160 cloud services on a pay-as-you-go basis. You pay only for the services you use, and pricing works on a sliding scale: the less you use, the less you pay overall, and the more you use, the less you pay per unit, because the price of each unit goes down as your volume grows. The other benefits of AWS relate to the applications associated with it.
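As an illustration of that sliding scale, here is a small Python sketch of tiered pay-as-you-go pricing. The tier limits and per-unit rates below are hypothetical, not real AWS prices:

```python
# Toy tiered-pricing model: each band of usage is billed at its own rate,
# so the effective per-unit price falls as total usage grows.
def tiered_cost(units, tiers):
    """tiers: list of (cumulative_limit, price_per_unit); last limit is None."""
    cost, remaining, prev = 0.0, units, 0
    for limit, price in tiers:
        band = remaining if limit is None else min(remaining, limit - prev)
        cost += band * price
        remaining -= band
        if limit is not None:
            prev = limit
        if remaining <= 0:
            break
    return cost

TIERS = [(50, 0.023), (450, 0.022), (None, 0.021)]  # hypothetical $/unit tiers

small_rate = tiered_cost(10, TIERS) / 10       # effective rate at low usage
large_rate = tiered_cost(1000, TIERS) / 1000   # effective rate at high usage
```

At 10 units the effective rate equals the top-tier price, while at 1,000 units it drops, matching the "pay less per unit as you use more" model.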

  • The applications are reliable, as they run on a safe and dependable infrastructure.
  • The on-demand infrastructure allows greater scalability.
  • The design options available on the cloud allow greater flexibility.

Let us now understand what AWS is by looking into the services of Amazon Web Services (AWS).

3. AWS Services

Amazon Web Services has become an essential cloud computing technology. A few of AWS's principal offerings are as follows:

Amazon S3

This tool is used for internet backup, and it is an inexpensive storage option in the object-storage category. The best part is that you can retrieve stored data from almost anywhere whenever you need it.

AWS Data Transfer Products

The AWS data transfer products cover migration, data collection, and ongoing data transfer, helping you collect data continuously. They also enable you to monitor and analyze the data in real time.

Amazon EC2 (Elastic Compute Cloud)

This provides a secure and resizable computing capacity based on your needs. The service is designed to make web-scale cloud computing more accessible.

Amazon SNS (Simple Notification Service)

Amazon SNS is a notification service that delivers messages to a large number of subscribers through email or SMS. Using this service, you can send alarms, service notifications, and other messages intended to call attention to important information.

Amazon KMS (Key Management Service)

A key management service is a security tool that uses 256-bit encryption to protect your data and safeguard it from hackers and cyber attacks.

AWS Lambda

This service runs your code in response to specific events and manages the underlying resources for you. You do not need to manage or provision servers, and you pay based on how long your code takes to execute, which can save a lot of money compared with services that charge hourly rates.
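To see how duration-based billing can undercut hourly rates, here is a rough Python sketch of Lambda-style cost arithmetic. The per-request and per-GB-second rates are illustrative assumptions, not quoted AWS prices:

```python
# Duration-based billing sketch: cost = GB-seconds of execution time
# plus a tiny per-request charge (both rates are illustrative).
def lambda_cost(invocations, avg_ms, memory_mb,
                price_per_gb_second=0.0000166667,
                price_per_request=0.0000002):
    gb_seconds = invocations * (avg_ms / 1000.0) * (memory_mb / 1024.0)
    return gb_seconds * price_per_gb_second + invocations * price_per_request

# One million 100 ms invocations at 128 MB cost well under a dollar here,
# whereas an hourly-billed server would run the whole month regardless of load.
monthly = lambda_cost(1_000_000, avg_ms=100, memory_mb=128)
```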

Route 53

Route 53 is a DNS service in the cloud that doesn't require you to maintain a separate DNS account. It's designed to provide businesses with a reliable and cost-effective method to route users to internet applications.

4. AWS Applications

AWS applications span various domains; a few of those applications are as follows:

4.1 Cloud Compute Resources 

Cloud infrastructure or cloud compute resources are the servers that are used to host applications on the cloud. Amazon has several options for varying requirements. Businesses use these servers to run their Web software securely and reliably. 

Amazon's cloud computing offerings are listed below: 

Amazon EC2

Elastic Compute Cloud, or EC2, is primarily used for hosting applications. It offers scalable VMs with granular control over networking, security, migrations, and backups.

AWS Elastic Beanstalk

Elastic Beanstalk is an easy-to-configure alternative to raw EC2. You upload your application files, and it creates the necessary environment automatically.

Amazon Lightsail

Lightsail is a simplified version of EC2 that offers VPS servers with one-click launch, backup, and restore functionality.

AWS Lambda

AWS Lambda is a serverless service that fires snippets of code only on request. Easy to configure, Lambda is excellent for serverless applications. 

4.2 Scalable Cloud Storage

Amazon S3

AWS S3 is used to store data securely and cost-effectively. Businesses can store business data with easy retrieval and worldwide availability.

Amazon Glacier

AWS Glacier is targeted at storing a backup of data for several years. It cold stores data and is used for archival purposes at a low cost.

Amazon Elastic Block Storage

AWS EBS is like a virtual hard disk that you can attach to your server. It's available in various configurations and boots up in seconds for quick usage. 

Amazon Elastic File System (EFS)

AWS EFS provides a scalable file system in the cloud, much like the file explorer on your PC. You can quickly create virtual file systems for storage or application needs.

4.3 Relational and Non-Relational Databases

Amazon RDS

AWS Relational Database Service, or RDS, is a cloud alternative for setting up MySQL/PostgreSQL databases. It allows businesses to connect to databases remotely using APIs, enabling security and portability of data.

Amazon Redshift

AWS Redshift is a data warehousing service with advanced querying functions. It makes business analysis possible using BI and SQL tools in the cloud.

4.4 Other Cloud Domain Applications 


Migration

AWS Application Discovery Service is a unique service that prepares big organizations for migration. It discovers workloads, usage, resources, etc., to plan a smooth move to the AWS cloud.

Networking and Content Delivery

Amazon Route 53 is used for setting up DNS and connecting domain names with AWS-hosted applications. Elastic Load Balancing offers high availability of services by dynamically distributing incoming traffic across multiple targets.

Management Tools

AWS Auto Scaling allows you to compensate for increased traffic by scaling up servers automatically.


AWS Identity and Access Management (IAM) is the control panel for all AWS services. You can add users to your account and give them relevant permissions using IAM. 


5. AWS IAM

AWS Identity and Access Management (IAM) is a web service used to securely control access to AWS resources. It enables you to create and control services for user authentication and to limit access to your AWS resources to a certain set of people. IAM works with the following six elements.

  1. Principal: It is an entity that can perform actions on an AWS resource. A principal is a user, a role, or an application.
  2. Authentication: It is the process of confirming the identity of the principal trying to access an AWS product. The principal must provide its credentials or required keys for authentication.
  3. Request: A principal sends a request to AWS specifying the action to perform and the resource on which to perform it.
  4. Authorization: All resources are denied by default. IAM authorizes a request when all parts of the request are allowed by a matching policy. AWS approves the action after authenticating and authorizing the request.
  5. Actions: These are used to view, create, edit, or delete a resource.
  6. Resources: A set of actions can be performed on a resource related to your AWS account.
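The request/authorization flow in steps 3, 4, and 6 can be sketched as a toy default-deny evaluator in Python. This is a deliberate simplification: real IAM also supports wildcards, conditions, roles, and resource policies.

```python
# Toy model of IAM's evaluation rule: everything is denied unless a policy
# explicitly allows it, and an explicit deny always overrides an allow.
def is_authorized(policies, principal, action, resource):
    allowed = False
    for p in policies:
        matches = (p["principal"] == principal
                   and action in p["actions"]
                   and p["resource"] == resource)
        if not matches:
            continue
        if p["effect"] == "Deny":
            return False          # explicit deny wins immediately
        if p["effect"] == "Allow":
            allowed = True
    return allowed                # no matching allow -> denied by default

# Hypothetical policy set for illustration:
policies = [
    {"principal": "alice", "effect": "Allow",
     "actions": ["s3:GetObject"], "resource": "example-bucket"},
]
```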

5.1 Components of AWS IAM 

The fundamental components of IAM are users, groups, policies, and roles. A user is an individual identity; many users together form a group. Policies are JSON documents that allow or deny access to resources. Roles provide temporary credentials that can be assumed by an instance or user as needed.



IAM Users

An IAM user is an identity with associated credentials and permissions attached to it. A user can be an actual person or an application. With IAM, you can securely manage access to AWS services by creating an IAM user name for each employee in your organization. Each IAM user is associated with only one AWS account. By default, a newly created user is not authorized to perform any action in AWS. The benefit of this one-to-one mapping is that you can individually assign permissions to each user.


IAM Groups

An IAM group is a collection of IAM users. Groups are used to specify permissions for multiple users, so that any permissions applied to the group apply to its individual members as well. Groups are easy to manage: you set the permissions for the group, and those permissions are automatically applied to all the users in it. A user added to the group automatically inherits all the policies and permissions already assigned to that group, which reduces the burden of administration.
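The inheritance behaviour described above can be sketched in a few lines of Python. The group and permission names are invented for illustration:

```python
# A user's effective permissions are their own plus those of every group
# they belong to -- which is why adding a user to a group is enough to
# grant them all of the group's permissions.
def effective_permissions(user, user_perms, groups, memberships):
    perms = set(user_perms.get(user, []))
    for group in memberships.get(user, []):
        perms |= set(groups.get(group, []))
    return perms

# Hypothetical data for illustration:
groups = {"Developers": ["ec2:StartInstances", "s3:GetObject"]}
memberships = {"alice": ["Developers"]}
user_perms = {"alice": ["iam:ChangePassword"]}
```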


IAM Policies

An IAM policy sets permissions and controls access to AWS resources. Policies are stored in AWS as JSON documents. Permissions specify who has access to resources and what actions they can perform. For instance, a policy could allow an IAM user to access one of the buckets in Amazon S3. The policy contains the following information:

  1. Who can access it?
  2. What actions can that user take?
  3. Which AWS resources can the user access?
  4. When can they be accessed?

In JSON format, that would appear as shown below (a minimal illustrative policy allowing read access to a hypothetical bucket):

"Version": "2012-10-17",
"Statement": [
  {
    "Effect": "Allow",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::example-bucket/*"
  }
]
There are two types of policies: managed policies and inline policies. A managed policy is a stand-alone identity-based policy that can be attached to multiple entities, such as users, groups, and roles, in your AWS account; it may be AWS-managed or customer-managed. Inline policies are policies you create that are embedded directly into a single entity such as a user, group, or role.


IAM Roles

An IAM role is a set of permissions that defines what actions are allowed and denied for an entity in the AWS console. It is similar to a user in that it can be assumed by any type of entity (an individual or an AWS service). Role permissions are granted through temporary credentials.

If you would like to provide someone with a service or let someone access resources in your account, you can use roles for that purpose. You can also grant temporary access to your account to a third party, such as a consultant or an auditor. They’re not permanent users, just users with temporary access to your environment.

6. AWS EC2

Amazon EC2 provides scalable computing capacity in the AWS cloud. Leveraging EC2 enables enterprises to develop and deploy applications faster, without any need to invest in hardware upfront. Users can launch virtual servers, configure security and networking, and manage storage from an intuitive dashboard.

6.1 What is AWS EC2?

Amazon offers a vast array of services; among them, EC2 is the core compute component of the technology stack. In practice, EC2 makes things easier for developers by providing secure, resizable compute capacity in the cloud. It can easily scale up or down, integrates with several other services, and comes with a plan where you only pay for what you use.

6.2 Why is AWS EC2 important?

EC2 is important because of the following aspects:

  • You don’t require any hardware units.
  • It is easily scalable (up or down).
  • You only pay for what you use.
  • You have complete control.
  • It is highly secure.
  • You can access your assets from anywhere in the world.

With Amazon Simple Notification Service (SNS), EC2, and Simple Storage Service (S3) together, you can build complete workflows with ease; for example, you can notify users every time the company publishes a newsletter.

7. AWS S3

Amazon S3 (Simple Storage Service) provides object storage. S3 is built for storing and retrieving any amount of data from anywhere over the internet, through a web services interface. It offers durability, low cost, scalability, availability, security, flexibility, and simple data transfer.

7.1 Components of AWS S3 storage service

The components of S3 are buckets and objects. An object consists of data, a key (its assigned name), and metadata. A bucket is used to store objects. When data is added to a bucket with versioning enabled, Amazon S3 creates a unique version ID and assigns it to the object. The below screen is an example of an object, bucket, and link address.
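The bucket/object/version relationship can be modelled with a minimal in-memory Python sketch. This is for illustration only; real S3 is accessed through an SDK such as boto3, and the bucket and key names here are invented:

```python
# Minimal in-memory model of S3's concepts: a bucket holds objects,
# each object is addressed by a key, and every write of a key gets
# its own unique version ID (mimicking versioned buckets).
import uuid

class Bucket:
    def __init__(self, name):
        self.name = name
        self.objects = {}   # key -> list of (version_id, data, metadata)

    def put_object(self, key, data, metadata=None):
        version_id = uuid.uuid4().hex   # unique per write, like S3 versioning
        self.objects.setdefault(key, []).append((version_id, data, metadata or {}))
        return version_id

    def get_object(self, key):
        return self.objects[key][-1]    # latest version wins by default
```

Calling `put_object` twice with the same key keeps both versions, which is the behaviour a versioned S3 bucket exhibits.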


You can log in to AWS as shown on the below screen.


Select S3 from Service offerings as shown on the below screen.


The Amazon S3 bucket list now appears; it is usually empty for first-time users. Create a bucket by clicking the “Create bucket” button.


Create the bucket by setting up its name, region, and other options; finish the process by pressing the “Create” button.


Select the created bucket as shown on the below screen.


Click on upload to select a file to be added to the bucket.



Select a file to be added as shown on the below screen.


The file is now uploaded into the bucket.


7.2 Amazon S3 Storage Classes

The following are the five different storage classes that are used in S3.

  1. Amazon S3 Standard for frequent data access: suitable for use cases where latency should be low.
  2. Amazon S3 Standard for infrequent data access: used where data is long-lived and less frequently accessed.
  3. Amazon Glacier: used where data has to be archived and high performance is not required.
  4. One Zone-IA storage class: used where data is infrequently accessed and stored in a single availability zone.
  5. Amazon S3 Standard Reduced Redundancy storage: suitable for non-critical data that can be reproduced quickly.
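The five classes above can be summarised as a toy selection helper in Python. This is a simplification for illustration, not an official AWS recommendation tool:

```python
# Toy mapping from access pattern to the storage classes described above.
# Real decisions also weigh retrieval fees, minimum storage durations, etc.
def pick_storage_class(access, archival=False, single_zone_ok=False,
                       reproducible=False):
    if archival:
        return "Glacier"                 # long-term archive, slow retrieval
    if reproducible:
        return "Reduced Redundancy"      # non-critical, easily re-created data
    if access == "frequent":
        return "Standard"                # low-latency, frequent access
    return "One Zone-IA" if single_zone_ok else "Standard-IA"
```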


8. AWS VPC

Amazon VPC enables connecting on-premises resources to AWS infrastructure through a virtual private network. This virtual network closely resembles a traditional network operated in your own data center, but lets you leverage the scalable infrastructure of AWS.

Each VPC you create is logically isolated from other virtual networks in the AWS cloud and is fully customizable. You can select the IP address range, create subnets, configure route tables, set up network gateways, and define security settings using security groups and network access control lists.

Default Amazon VPC

Each Amazon account comes with a default VPC that is pre-configured to start using it immediately. A VPC can span multiple availability zones in a region. The below screen illustrates the default VPC.


In the first section, there is a default Amazon VPC. The CIDR block for the default VPC always has a /16 subnet mask, which means the VPC can provide up to 65,536 IP addresses.
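You can verify the address math with Python's standard ipaddress module. The 172.31.0.0/16 range shown is the block AWS typically assigns to default VPCs:

```python
# A /16 CIDR block leaves 16 host bits, so it contains 2**16 = 65,536 addresses.
import ipaddress

vpc_cidr = ipaddress.ip_network("172.31.0.0/16")   # typical default-VPC range
addresses = vpc_cidr.num_addresses                 # 65536

# The VPC can be carved into smaller subnets, e.g. sixteen /20 subnets:
subnets = list(vpc_cidr.subnets(new_prefix=20))
```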

Custom Amazon VPC 

The default VPC is suitable for launching new instances when you're testing AWS, but creating a custom VPC allows you to:

  • Make things more secure.
  • Define your own IP address range by customizing your virtual network.
  • Create your subnets that are both private and public.
  • Strengthen security settings.

Hardware VPN Access

By default, instances that you launch into an Amazon VPC can't communicate with your on-premises network. You can connect your VPCs to your existing data center using hardware VPN access. By doing so, you effectively extend your data center into the cloud and create a hybrid environment. To do this, a virtual private gateway has to be set up.


9. AWS CloudFormation

Managing infrastructure that spans many services is hard. Creating and managing multiple AWS resources is challenging and time-consuming, and can leave you spending more time managing resources than developing applications. AWS CloudFormation helps resolve these problems: it provides a simple way to create and manage a collection of AWS resources by provisioning and updating them in an orderly and predictable manner. CloudFormation allows you to model your infrastructure and applications without performing any manual actions.

AWS CloudFormation enables you to manage your complete infrastructure or AWS resources in a text file, or template. AWS resources are created or updated by using a stack, which is a collection of AWS resources. Using templates, it is easy to deploy all the resources required by an application, and templates can be reused to replicate your infrastructure in multiple environments. The parameters, mappings, and conditions sections of the template make templates reusable, letting you customize your stacks when you create them. The general workflow is:

  • Create a new template or use an existing CloudFormation template using the JSON or YAML format.
  • Save your code template locally or in an S3 bucket.
  • Use AWS CloudFormation to build a stack based on your template.
  • AWS CloudFormation constructs and configures the stack resources that you have specified in your template.
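The "template as a text file" idea in the steps above can be illustrated by assembling a minimal template in Python and dumping it to JSON. The logical ID "MyBucket" and the bucket name are invented for this example:

```python
# Build a minimal CloudFormation template as a Python dict, then serialize it
# to the JSON text form you would save locally or in an S3 bucket.
import json

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Minimal stack that creates a single S3 bucket",
    "Resources": {
        "MyBucket": {                                  # logical ID (invented)
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "my-example-bucket-12345"},
        }
    },
}

body = json.dumps(template, indent=2)   # the text handed to CloudFormation
```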

9.1 Terms used in CloudFormation

An AWS CloudFormation template is a formatted text file in JSON or YAML language that describes your AWS infrastructure. To create, view, and modify templates, you can use AWS CloudFormation Designer or any text editor tool. An AWS CloudFormation template consists of nine main objects:


  1. Format version: The format version defines the capability of a template.
  2. Description: The description is used for specifying any comments about your template.
  3. Metadata: The metadata can be used in the template to provide further information using JSON or YAML objects. 

Example in JSON:

"Metadata": {
  "Instances": {"Description": "Information about the instances"},
  "Databases": {"Description": "Information about the databases"}
}

  4. Parameters: Parameters are used to customize templates. Each time you create or update your stack, parameters let you pass custom values to your template at runtime.

Example in JSON:

"Parameters": {
  "InstanceTypeParameter": {
    "Type": "String",
    "Default": "t2.micro",
    "AllowedValues": ["t2.micro", "m1.small", "m1.large"],
    "Description": "Enter t2.micro, m1.small, or m1.large. Default is t2.micro."
  }
}


  5. Mappings: Mappings enable you to map keys to corresponding named values that you specify in a conditional parameter. You can also retrieve values in a map by using the "Fn::FindInMap" intrinsic function.

Example in JSON:

"Mappings": {
  "RegionMap": {
    "us-east-1": {"HVM64": "ami-0ff8a91507f77f867"},
    "us-west-1": {"HVM64": "ami-0bdb828fd58c52235"},
    "eu-west-1": {"HVM64": "ami-047bb4163c506cd98"}
  }
}

  6. Conditions: In a template, conditions define whether certain resources are created, or whether certain resource properties are assigned a value, during stack creation or updating. Conditions are useful when you want to reuse a template by creating resources in different contexts. You can define conditions using intrinsic functions:


  • Fn::If
  • Fn::Equals
  • Fn::And
  • Fn::Or
  • Fn::Not

During stack creation, all the conditions in your template are evaluated. Any resources associated with a true condition are created, and resources associated with a false condition are ignored automatically.

Syntax in JSON format:

"Conditions": {
  "Logical ID": {Intrinsic function}
}


  7. Transform: The transform section specifies one or more macros that AWS CloudFormation uses to process your template, enabling the reuse of template components. You can declare a single transform or multiple transforms within a template.

Syntax in JSON format:

{
  "Transform": "AWS::Serverless-2016-10-31"
}
  8. Resources: In this section, you declare the AWS resources that you want to create in the stack, such as an Amazon S3 bucket or an AWS Lambda function.

Syntax in JSON format:

"Resources": {
  "Logical ID": {
    "Type": "Resource type",
    "Properties": {
      Set of properties
    }
  }
}

  9. Outputs: In a template, the outputs section declares output values that you can import into other stacks or that are returned when you view your own stack's properties. For instance, you can declare an output for an S3 bucket name and then use the "describe-stacks" command to make the bucket name easier to find.

Syntax in JSON format:

"Outputs": {
  "Logical ID": {
    "Description": "Information about the value",
    "Value": "Value to return"
  }
}

Thus, we have reached the end of this tutorial. Mastering the concepts of AWS makes you a professional in managing and securing the resources of any business by configuring policies, groups, and roles. The various components of AWS, with supporting features like integration, deployment, and storage capabilities, ensure fast delivery of high-performance, high-quality services.

AWS Lambda Developer
I have been working as an AWS Lambda developer since 2014 and have good knowledge of the AWS & DevOps platform. Sharing my knowledge through blogs at OpsTrainerz is a great opportunity for me.
