Opsnetic Blogs

"Blogs allow you to talk about any topic you are interested in and express your opinion"

Our blogs are written solely by our members, who share their views and knowledge in various fields to help the community. We also offer a variety of content in the fields of Cloud, DevOps, and Cybersecurity.

Do check it out on Medium!!

Check Out Recent Blogs

Author: Raj Shah

Introduction

In the ever-evolving landscape of cloud computing, the synergy between Amazon Simple Storage Service (Amazon S3) and Amazon Elastic Compute Cloud (Amazon EC2) stands out as a powerful combination. This guide will delve into the intricacies of leveraging Amazon S3 as an NFS (Network File System) volume for EC2 instances over a private connection, unveiling a strategic approach to achieve substantial cost savings in large-scale storage scenarios.

The Foundation: Amazon S3 and EC2

Amazon S3, renowned for its scalability, durability, and secure object storage capabilities, meets its counterpart, Amazon EC2, a robust service providing scalable compute resources in the cloud. Together, they form the backbone of a versatile infrastructure, capable of handling diverse workloads efficiently.

The Need for NFS: Bridging Object Storage and File-Based Systems

While S3 excels at object storage, there arises a demand for seamlessly integrating file-based systems with cloud resources. The need to mount S3 as an NFS volume becomes evident, offering compatibility, flexibility, and ease of integration for applications that rely on traditional file systems.

Choosing the Right Solution: S3 as NFS vs. EFS

A critical decision point is when to opt for S3 as NFS and when to consider Amazon Elastic File System (EFS). This guide navigates the decision-making process, shedding light on scenarios where the cost-effectiveness of S3 as NFS becomes a strategic advantage over EFS, especially in the realm of large-scale storage.

Advantages of mounting S3 as an NFS volume include:

  • Compatibility: Many applications and tools are designed to work with file systems and may not natively support S3’s object storage interface. Mounting S3 as an NFS volume provides a familiar file system interface.
  • Ease of Integration: NFS enables seamless integration with existing applications and workflows that expect a traditional file system structure. This integration simplifies the migration of applications to the cloud without significant code changes.
  • Flexibility: NFS allows you to access S3 data as if it were a standard file system, providing flexibility for various use cases, such as sharing files across multiple instances, collaborating on data, or supporting legacy applications.
  • Uniformity of Access: Mounting S3 as an NFS volume allows for a unified access method across different storage types, making it easier to manage and maintain.

Solution Overview

Prerequisites

The deployment steps assume that:

  1. You have deployed the Amazon EC2 instance where you will mount Amazon S3 as an NFS volume.
    Note the security group ID of the instance as it will be required for permitting access to the NFS file share.
  2. You have created the S3 bucket that you will mount as an NFS volume in the same account and Region as the instance. The bucket and objects should not be public. I recommend enabling server-side encryption.
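
If you want to enable default encryption from the CLI rather than the console, a minimal sketch (the bucket name is a placeholder) could look like this:

# Hypothetical example: turn on default SSE-S3 encryption for the bucket
aws s3api put-bucket-encryption \
  --bucket <your-bucket-name> \
  --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'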

The figure below illustrates the solution architecture for mounting the Amazon S3 bucket to the Amazon EC2 instance as an NFS volume with private connections.

  1. This EC2 instance is the NFS client where the NFS file share is mounted. You would have set up this EC2 instance as a part of the prerequisites.
  2. This EC2 instance hosts the S3 File Gateway. You will create this instance by installing the S3 File Gateway Amazon Machine Image (AMI).
  3. This VPC interface endpoint provides private connectivity using SSH and HTTPS from your VPC to the AWS Storage Gateway service using AWS PrivateLink.
  4. The S3 File Gateway uses AWS PrivateLink to privately access AWS Storage Gateway, which is an AWS Regional service.
  5. This VPC gateway endpoint for S3 provides private access using HTTPS to the Amazon S3 AWS Regional service using AWS PrivateLink.
  6. The S3 File Gateway uses the VPC gateway endpoint to connect privately to the S3 service and your S3 bucket mounted to your EC2 instance.

Implementation

Step 1: Create the Amazon S3 File Gateway on the EC2 instance

  1. Go to Storage Gateway → Create Gateway
  2. Give a name to the gateway and select the gateway time zone
  3. Select Amazon S3 File Gateway as the gateway option
  4. In the platform options, select Amazon EC2. In the Launch EC2 instance section, choose Customize your settings to launch the gateway EC2 instance in the private subnet. Click Launch Instance to be redirected to the EC2 configuration page.

Set up Gateway on Amazon EC2:

  1. For Instance type, we recommend selecting at least m5.xlarge.
  2. In Network settings, For VPC, select the VPC that you want your EC2 instance to run in.
  3. For Subnet, specify the private subnet that your EC2 instance should be launched in.
  4. For Auto-assign Public IP, select Disable.
  5. Create a Security Group. Amazon S3 File Gateway requires TCP port 80 to be open for inbound traffic for one-time HTTP access during gateway activation. After activation, you can close this port. To create NFS file shares, you must open TCP/UDP port 2049 for NFS access, TCP/UDP port 111 for NFSv3 access, and TCP/UDP port 20048 for NFSv3 access. Set the source to the CIDR of the respective security group.

6. For Configure storage, choose Add new volume to add storage to your gateway instance. You must add at least one Amazon EBS volume for cache storage with a size of at least 150 GiB, in addition to the Root volume.

Now that our gateway server is ready, we can proceed to the Gateway Connection Options. Select the Activation Key based connection, since the launched instance doesn’t have a public IP address.

Step 2: Create the VPC endpoints

We will be creating 2 VPC Endpoints:

  1. For AWS Storage Gateway to allow private access to the AWS Storage Gateway service from your VPC
  2. S3 VPC Gateway endpoint to allow private access to Amazon S3 from your VPC

So, let’s start with the creation of the first VPC endpoint:

Go to AWS VPC → Endpoints → Create Endpoint

  1. Give the endpoint an appropriate Name and select AWS services in the Service Category. In the services, select com.amazonaws.<aws-region>.storagegateway
  2. Select the VPC and verify that Enable Private DNS Name is not checked in Additional Settings.
  3. Choose the relevant Availability Zone and Private Subnet where the S3 File Gateway is deployed.
  4. Create a Security Group with the source as the subnet CIDR range and the following inbound rules:

5. Attach the security group to the VPC Endpoint and then hit Create Endpoint.

6. When the endpoint status is Available, copy the first DNS name that doesn’t specify an Availability Zone.

With the first endpoint created, let’s continue to the second VPC endpoint.

For this, again go to AWS VPC → Endpoints → Create Endpoint. For Service Name, choose com.amazonaws.<region>.s3 of type Gateway. Select the appropriate VPC and the route table in which the chosen private subnet is present.
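
For reference, a rough CLI equivalent of this gateway endpoint (all IDs are placeholders) could look like:

# Hypothetical sketch: create the S3 gateway endpoint and associate the private subnet's route table
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-xxxxxxxx \
  --vpc-endpoint-type Gateway \
  --service-name com.amazonaws.<region>.s3 \
  --route-table-ids rtb-xxxxxxxx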

So, at the end of this you would have these 2 VPC Endpoints listed:

Step 3: Get the Activation Key

Now, connect to the Storage Gateway EC2 instance (either via a bastion host or SSM) that we launched, to extract the activation key.

Then provide the appropriate inputs to the prompted questions, and be sure to provide the DNS name of the VPC endpoint that we copied in the earlier step.

Step 4: Deploying the Storage Gateway

Paste the Activation Key in the required field.

Once done, click Next to verify the configuration.

The Configure Cache Storage step automatically detects the suitable additional disk for cache allocation.

After some time, our Storage Gateway reaches the Running state.

Step 5: Create a File Share

Now, we will create the NFS file share and mount it onto the EC2 instance.

  1. Go to Storage Gateway → File Shares → Create File Share
  2. Select the gateway from the dropdown and the S3 bucket

3. Further, we can name our file share and select the protocol used to access the objects. Don’t forget to enable Automated cache refresh from S3 and set the minimum TTL.

4. Next, I prefer to choose S3 Intelligent-Tiering for the storage class and keep the rest of the options as default.

5. We can also restrict the file share connection to specific allowed client IPs, such as all the clients in the VPC CIDR. Keep the other options as the default. Then, review and create your file share.

6. It takes a few minutes for the status of the file share to change to Available. Once it is in the available state, copy the Linux mount command and run it on the NFS client.

For Linux:

sudo mount -t nfs -o nolock,hard 172.31.79.131:/images [MountPath]

For Windows:

mount -o nolock -o mtype=hard 172.31.79.131:/images [WindowsDriveLetter]:
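
If you want the Linux mount to persist across reboots, one option (a quick sketch; the /mnt/images mount path is hypothetical, and the gateway IP and export path are taken from the example above) is to add an fstab entry:

# Append an fstab entry so the NFS file share is remounted automatically at boot
echo '172.31.79.131:/images /mnt/images nfs nolock,hard,_netdev 0 0' | sudo tee -a /etc/fstab
sudo mount -a   # verify the entry mounts cleanly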

Validation

  1. Upload an image on the S3 bucket

2. Let’s do an ls on the mount directory in the NFS client EC2.

3. Now, let’s create a text file from the EC2 instance and see whether it shows up in S3.

touch images/test.txt

4. In our S3 bucket, we see the file reflected!
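
To double-check the sync from the command line (a quick sketch; the bucket name is a placeholder), compare the mount contents with the bucket listing:

# On the NFS client: list the mounted file share
ls -l images/
# From any machine with AWS credentials: list the backing bucket
aws s3 ls s3://<your-bucket-name>/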

Conclusion

Use S3 as NFS When:

  • Data access patterns are sporadic or infrequent.
  • You want the flexibility of different storage classes.
  • You need to optimize costs for specific access patterns and data types.

Use EFS When:

  • You have a consistent need for file-based storage with dynamic scaling.
  • Data access patterns are frequent and consistent.
  • Simplified management is a priority.

Ultimately, if you have to create petabyte-scale NFS storage and you are looking for cost savings, then this implementation is your best bet rather than the costlier Amazon EFS service.

If you need help with DevOps practices, AWS, or Infrastructure Optimization at your company, feel free to reach out to us at Opsnetic.

Contributed By: Raj Shah


Harnessing Amazon S3 as NFS Volume for EC2 Instances: Achieving Cost Savings in Large-Scale Storage was originally published in Opsnetic on Medium.

Author: Raj Shah

Ever felt like your emails are lost in the vast digital abyss, never reaching their intended recipients? Or maybe you’re tired of clunky email marketing tools that drain your resources and offer little insight into your campaigns’ effectiveness. Well, fret no more! Today, we’re diving headfirst into the world of AWS Simple Email Service (SES), a powerful and scalable solution that will transform your email game.

Imagine this: you’re an e-commerce giant like Amazon, sending out millions of order confirmations, promotional offers, and personalized recommendations every day. SES empowers you to handle this massive volume with ease and efficiency, ensuring your emails reach their destination and spark engagement.

But the benefits extend far beyond behemoths like Amazon. Let’s say you run a local bakery craving more customer interaction. With SES, you can craft eye-catching newsletters announcing new treats, send out personalized birthday coupons, and even gather feedback through interactive surveys. All the while, SES meticulously tracks your email performance, providing valuable data to fine-tune your strategy and maximize your impact.

Whether you’re a budding entrepreneur or a seasoned marketer, SES empowers you to connect with your audience on a deeper level. So buckle up, grab your thinking cap, and let’s embark on a journey to master the art of email communication with AWS SES!

Personalized Cookie Cravings: AWS SES Driving Sales with Immediate Action

Imagine the delicious aroma of freshly baked cookies wafting through the air. Now imagine sending an email that captures this enticing aroma and delivers it directly to your customers’ inboxes. But instead of a generic message, it triggers a personalized action based on their previous interactions.

This is the power of using AWS SES with click tracking and immediate action. Let’s take a closer look:

Scenario: You’re a bakery owner launching a new line of gourmet chocolate chip cookies. When a customer clicks a link in your email, such as “Learn More About Cookies”, SES records this event. The click information is received and triggers an immediate follow-up email with a special offer on the new cookies.

Now that you understand the potential of immediate action with AWS SES, let’s delve into the technical details of setting up this scenario.

Please have a look at the following simple workflow:

Here, we will be using SES for email sending and tracking. After the mail is sent from SES, we monitor the click events using the SES Configuration Set. An event arrives at SNS (the event destination of the Configuration Set) as soon as it is triggered. Then, we send it to SQS. A Lambda function is triggered by the queue and sends the exclusive offer mail to the respective users.

Step 1: Create a SQS Queue

We are adding the SQS queue to our architecture rather than directly integrating SNS with Lambda because SQS persists messages for a configurable period (1 minute to 14 days), while SNS delivers messages immediately and then deletes them.

This makes SQS ideal for situations where message delivery needs to be guaranteed even if the recipient is unavailable when the messages are sent.

Go to AWS SQS → Queues → Create Queue. Choose Standard Queue, add an access policy for SNS-to-SQS communication, and keep the other configurations as default.

{
  "Version": "2012-10-17",
  "Id": "__default_policy_ID",
  "Statement": [
    {
      "Sid": "__owner_statement",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "SQS:*",
      "Resource": "arn:aws:sqs:<aws-region>:<aws-account>:<sqs-queue-name>"
    },
    {
      "Sid": "topic-subscription-arn:aws:sns:<aws-region>:<aws-account>:<sns-topic-name>",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "SQS:SendMessage",
      "Resource": "arn:aws:sqs:<aws-region>:<aws-account>:<sqs-queue-name>",
      "Condition": {
        "ArnLike": {
          "aws:SourceArn": "arn:aws:sns:<aws-region>:<aws-account>:<sns-topic-name>"
        }
      }
    }
  ]
}

Step 2: Create an SNS Topic

Create an SNS topic and configure the “Access Policy” so that SES can access the SNS topic.

Add this access policy in the respective section.

{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "stmt1689931588490",
      "Effect": "Allow",
      "Principal": {
        "Service": "ses.amazonaws.com"
      },
      "Action": "SNS:Publish",
      "Resource": "arn:aws:sns:<aws-region>:<aws-account-number>:<sns-topic-name>",
      "Condition": {
        "StringEquals": {
          "AWS:SourceAccount": "<aws-account-number>"
        },
        "StringLike": {
          "AWS:SourceArn": "arn:aws:ses:*"
        }
      }
    }
  ]
}

Step 3: Create and Configure SNS Subscription

Create an SNS subscription with the SQS queue as the endpoint. Along with that, enable the subscription filter policy so that only the “Click” events are retrieved from all the events we get from SES.
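
For illustration, here is a minimal sketch of how that subscription and filter policy could be created from the CLI. It assumes the filter is scoped to the message body, since SES publishes the eventType field inside the event JSON; all ARNs are placeholders.

# Subscribe the SQS queue to the SNS topic and keep only "Click" events
aws sns subscribe \
  --topic-arn arn:aws:sns:<aws-region>:<aws-account>:<sns-topic-name> \
  --protocol sqs \
  --notification-endpoint arn:aws:sqs:<aws-region>:<aws-account>:<sqs-queue-name> \
  --attributes '{"FilterPolicy":"{\"eventType\":[\"Click\"]}","FilterPolicyScope":"MessageBody"}'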

Step 4: Setting up SES Configuration Set

Create an SES Configuration Set.

After that, go inside the set and configure the event destination. Our scenario requires only click events, so we select only the “Click” event type. There are several other event types, which you can discover here — https://docs.aws.amazon.com/ses/latest/dg/event-publishing-retrieving-sns-contents.html

There are multiple destinations that can receive SES events; we select Amazon SNS and the SNS topic we created earlier.
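
As a rough CLI sketch of the same setup (the set name, destination name, and topic ARN are placeholders), the configuration set and its SNS event destination could be created like this:

# Create the configuration set and wire the CLICK events to the SNS topic
aws sesv2 create-configuration-set --configuration-set-name bakery-click-tracking
aws sesv2 create-configuration-set-event-destination \
  --configuration-set-name bakery-click-tracking \
  --event-destination-name click-to-sns \
  --event-destination '{"Enabled":true,"MatchingEventTypes":["CLICK"],"SnsDestination":{"TopicArn":"arn:aws:sns:<aws-region>:<aws-account>:<sns-topic-name>"}}'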

Step 5: Creating a Verified Identity

We will create two verified identities. One will act as the sender (bakery) and the other as the receiver (consumer).

Note: By default, you can only send to email addresses that have been verified in your SES account. This is called “sandbox mode”. This is to prevent spam.
To enable “production mode”, where you can send emails to non-verified email addresses, you need to request this manually through a support ticket, which will also be examined manually by an AWS employee.
Step 6: Create an event tracking Lambda function

Create an AWS Lambda function with a recent Python runtime and create a role with the AWSLambdaSQSQueueExecutionRole and AmazonSESFullAccess managed policies.

After that add a SQS trigger.

Note: The SQS queue is attached to the Lambda function as a trigger, so Lambda keeps polling SQS for messages to process.
The polling is done by 5 pollers per event source mapping. So, per month there would be roughly (5 * 3 * 60) * 24 * 30 = 0.648 million requests. (The first 1 million SQS requests are free per month.)

recordBakeryOfferClickEvents —https://gist.github.com/rajshah001/714e98922ab4fcef77cde7968f39942e

Along with this create another simple Python 3 Lambda Function which we would use to send a welcome mail.

welcomeEmailSendingFunction — https://gist.github.com/rajshah001/075098dd98b9ca148e7dc3d5b2a4d49a

Step 7: Testing!

Run the welcomeEmailSendingFunction Lambda Function to send an initial welcome mail to the consumer.
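
If you want to exercise the configuration set without the Lambda, a roughly equivalent CLI call (addresses, subject, and names are placeholders; the link must be in an HTML body for click tracking to work) is:

# Send a test mail through the configuration set so clicks get tracked
aws sesv2 send-email \
  --from-email-address bakery@example.com \
  --destination '{"ToAddresses":["consumer@example.com"]}' \
  --configuration-set-name bakery-click-tracking \
  --content '{"Simple":{"Subject":{"Data":"Fresh cookies!"},"Body":{"Html":{"Data":"<a href=\"https://example.com/cookies\">Learn More About Cookies</a>"}}}}'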

Now, click on “Learn More About Cookies”. This will trigger a click event in SES, which will travel from SES → SNS → SQS → Lambda.

And the recordBakeryOfferClickEvents Lambda will automatically get executed (because of the SQS trigger), and we will receive the Exclusive Offer mail from the bakery shop as per our interest.

Conclusion

In this post, we dived deep into AWS SES and other integration tools like SNS and SQS for email sending and tracking. Along with that, we explored a simple bakery shop email workflow implementation.

If you need help with DevOps practices, AWS, or Infrastructure Optimization at your company, feel free to reach out to us at Opsnetic.

Contributed By: Raj Shah


Level Up Your Email Game: Mastering AWS SES for Powerful Email Sending and Tracking was originally published in Opsnetic on Medium.

Author: Raj Shah

Recently, I had the opportunity to read the book “The Phoenix Project: A Novel about IT, DevOps, and Helping Your Business Win”. This book not only gave me a deeper understanding of the DevOps process but also highlighted the philosophy behind the DevOps mindset. This review will focus on the key takeaways from the book that have helped me better understand DevOps as a philosophy.

The Value Stream in DevOps

One of the central concepts in the book is the idea of the value stream. The value stream refers to the flow of work from start to finish in an organization. According to the book, everyone in the value stream should share a culture that values each other’s time and contributions. This culture should also drive continuous improvement and learning by injecting pressure into the work system. The book highlights the use of lean principles, such as reducing batch sizes and shortening feedback loops, which can result in significant increases in productivity, product quality, and customer satisfaction.

The Three Ways

The book also introduces the three ways of DevOps, which provide a framework for understanding the process flow and enabling continuous improvement. The three ways are:

  1. Understanding the process/flow — This way focuses on understanding the flow of work in an organization and identifying bottlenecks or inefficiencies in the process. By understanding the process, organizations can make informed decisions about how to optimize their workflows.
  2. Maintaining a feedback loop mechanism in the value stream — This way focuses on using feedback loops to continuously monitor and improve the process. This can involve using metrics and data to measure performance and identify areas for improvement.
  3. Encouraging continuous experimentation and learning through repetition and practice — This way focuses on continuously experimenting and learning from the results. This can involve using an iterative approach to work, where organizations continually make small improvements and adapt to new information. By practising and repeating these processes, organizations can achieve mastery and drive continuous improvement.

The Four Types of Work

Another essential concept in the book is the idea of the four types of work:

  1. Business Projects — Business projects are those that directly contribute to the success of the organization. These projects should be given the highest priority and receive the necessary resources to ensure their success.
  2. IT Internal Projects — IT internal projects are those that are focused on improving the internal systems and processes of the organization. These projects can help organizations streamline their workflows and improve efficiency, but they should not come at the expense of business projects.
  3. Changes — Changes refer to the implementation of new systems, processes, or policies. These changes can be disruptive to the value stream and should be managed carefully to minimize their impact on the work of the organization.
  4. Unplanned Work or Recovery Work — Unplanned work or recovery work refers to unexpected events, such as outages or unexpected demand for a particular product or service. These types of work can be difficult to manage and can impact the delivery of other projects.

The book explains how each type of work has unique requirements and how a DevOps approach can help organizations manage these different types of work more effectively.

The Steps in the Theory of Constraints (TOC) Methodology

The book also explores the theory of constraints (TOC) methodology and its application in DevOps. TOC is a management approach that focuses on identifying and managing constraints in a system to achieve maximum performance. The five original TOC steps are:

  1. Identify the Constraint — The first step is to identify the constraint in the value stream that is limiting the flow of work and creating bottlenecks.
  2. Exploit the Constraint — Once the constraint has been identified, it should be exploited to its maximum potential to ensure the most efficient use of resources.
  3. Subordinate All Other Activities to the Constraint — All other activities in the value stream should be subordinated to the constraint so that the constraint is given priority and resources are optimized.
  4. Elevate the Constraint to New Levels — The constraint should be elevated to new levels through continuous improvement so that the flow of work can be maintained at the highest possible level.
  5. Find the Next Constraint — The final step is to find the next constraint and repeat the process. This approach helps organizations to continuously identify and overcome constraints, leading to maximum value creation for the customer.

The Five Dysfunctions of a Team

Finally, the book highlights the five dysfunctions of a team, which can negatively impact an organization’s ability to work effectively. These dysfunctions are:

  1. Absence of Trust — Trust is the foundation of any team and when it’s absent, team members are unwilling to be vulnerable with one another, leading to a lack of collaboration and poor communication.
  2. Fear of Conflict — Teams that are afraid of conflict often shy away from open and honest discussions, leading to artificial harmony instead of productive, passionate debate.
  3. Lack of Commitment — When team members lack commitment, they may feign agreement with decisions, causing ambiguity and confusion throughout the organization.
  4. Avoidance of Accountability — Teams that avoid accountability are less likely to call each other out on counterproductive behaviour, leading to low standards and subpar performance.
  5. Inattention to Results — Teams that focus on personal success and ego instead of team success are more likely to prioritize their own goals over the collective good, leading to a lack of attention to results.

Addressing these dysfunctions is critical to building a high-performing team, and can lead to increased collaboration, better communication, and improved results.

Conclusion

In conclusion, “The Phoenix Project” effectively explains DevOps concepts through a compelling story. It highlights the value of adopting a DevOps mindset to improve the entire company, not just IT. The book’s emphasis on a variety of principles provides a comprehensive approach to transforming IT into a business value driver. The Phoenix Project is a must-read for anyone looking to understand and implement DevOps principles in their organization.

Opsnetic LLC | Cloud Consulting

At Opsnetic, we embody the principles of DevOps philosophy learned from “The Phoenix Project” to help your business thrive. Our expert cloud consulting teams will help streamline processes, improve efficiency, and drive growth.


DevOps as a Philosophy: The Phoenix Project Review was originally published in Opsnetic on Medium.

Author: Raj Shah

To keep a cloud architecture accessible, a solution architect needs an overall picture of the assets deployed in their organization. A dashboard that can segregate assets by granular factors enables better decision making and management for the organization.

Architecture Components

  1. CloudQuery — All of your assets from cloud and SaaS applications will be extracted, transformed, and loaded into PostgreSQL using this open-source tool.
  2. Grafana — It is open-source software that specializes in creating graphs and visualizations so users can easily understand time-series data. It can be used to query, visualize, monitor, and alert.

Steps to Configure the Dashboard

Step 1: Create an EC2 instance

Launch an EC2 with Amazon Linux 2 AMI.

EC2 Instance Creation Summary

For the EC2 Instance Security Group, open SSH (22) and default Grafana port (3000) to the internet (0.0.0.0/0).

EC2 Security Group Configuration
Step 2: Install CloudQuery on the EC2
  • SSH into your EC2 Instance.
  • Run the following commands to install CloudQuery on the EC2 Linux machine:
curl -L https://github.com/cloudquery/cloudquery/releases/latest/download/cloudquery_linux_x86_64 -o cloudquery
# Give executable permissions to the downloaded cloudquery file
chmod a+x cloudquery
sudo cp ./cloudquery /bin
Step 3: Create an EC2 Role for giving CloudQuery access to the assets in your AWS account

Amazon has created an IAM Managed Policy named ReadOnlyAccess, which grants read-only access to active resources on most AWS services.

The biggest difference is that we want our read-only roles to be able to see the architecture of our AWS systems and what resources are active, but we would prefer that the role not be able to read sensitive data from DynamoDB, S3, Kinesis, SQS queue messages, CloudFormation template parameters, and the like.

To better protect our data when creating read-only roles, we not only attach the ReadOnlyAccess managed policy from Amazon, but we also attach our own DenyData managed policy that uses Deny statements to take away a number of the previously allowed permissions.

So, we will attach 2 policies to our Role:

  1. AWS managed ReadOnlyAccess Policy
  2. Customer managed cloudquery-deny-data-read Policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyData",
      "Effect": "Deny",
      "Action": [
        "cloudformation:GetTemplate",
        "dynamodb:GetItem",
        "dynamodb:BatchGetItem",
        "dynamodb:Query",
        "dynamodb:Scan",
        "ec2:GetConsoleOutput",
        "ec2:GetConsoleScreenshot",
        "ecr:BatchGetImage",
        "ecr:GetAuthorizationToken",
        "ecr:GetDownloadUrlForLayer",
        "kinesis:Get*",
        "lambda:GetFunction",
        "logs:GetLogEvents",
        "s3:GetObject",
        "sdb:Select*",
        "sqs:ReceiveMessage"
      ],
      "Resource": "*"
    }
  ]
}

CloudQuery Role Creation with minimalistic read-only access

After successfully creating this role, attach it to the created EC2 instance for AWS resource access.
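
If you prefer the CLI over the console for this step, a minimal sketch (the instance ID and profile name are placeholders, and it assumes an instance profile of the same name was created for the role) is:

# Attach the read-only CloudQuery role to the running instance via its instance profile
aws ec2 associate-iam-instance-profile \
  --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=cloudquery-readonly-role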

Step 4: Setup CloudQuery
  • After installing CloudQuery, you need to generate a cloudquery.yml file that will describe which cloud provider you want to use and which resources you want CloudQuery to ETL:
cloudquery init aws
  • By default, cloudquery will try to connect to the database postgres on localhost:5432 with username postgres and password pass. After installing docker, you can create such a local postgres instance with:
# Docker Installation Commands for Amazon Linux 2
yum-config-manager --enable rhui-REGION-rhel-server-extras
yum -y install docker
systemctl start docker
systemctl enable docker
docker version
# Docker Command to create a postgres instance
docker run --name cloudquery_postgres -p 5432:5432 -e POSTGRES_PASSWORD=pass -d postgres
  • If you are running postgres at a different location or with different credentials, you need to edit cloudquery.yml's connection section.

For Example:

cloudquery:
  ...
  ...
  connection:
    type: postgres
    username: postgres
    password: pass
    host: localhost
    port: 5432
    database: postgres
    sslmode: disable

Once cloudquery.yml is generated and you are authenticated with AWS, run the following command to fetch the resources.

# --no-telemetry flag for not sending any telemetry data to CQ
cloudquery fetch --no-telemetry

After this command has run, your Postgres database will be populated with the data fetched from your AWS accounts, segregated into tables according to the services.
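
To spot-check that the fetch worked, you can list the tables CloudQuery created (a quick sketch; it assumes the dockerized Postgres started earlier with the default credentials):

# List the aws_* tables populated by the fetch
docker exec -it cloudquery_postgres psql -U postgres -d postgres -c "\dt aws_*"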

Step 5: Grafana Installation

Add a new YUM repository so the operating system knows where to download Grafana from. The command below uses nano.

sudo nano /etc/yum.repos.d/grafana.repo

Add the lines below to grafana.repo. This configuration will install the Open Source version of Grafana.

[grafana]
name=grafana
baseurl=https://packages.grafana.com/oss/rpm
repo_gpgcheck=1
enabled=1
gpgcheck=1
gpgkey=https://packages.grafana.com/gpg.key
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt

Installation and Configuration commands

sudo yum install grafana
sudo systemctl daemon-reload
sudo systemctl start grafana-server
sudo systemctl status grafana-server
sudo systemctl enable grafana-server.service

Visit the newly installed Grafana server by browsing to the public IP of the EC2 instance on port 3000. The default username and password are both admin. Change the password to a suitably complex one after logging in.

Grafana Login Window
Step 6: Adding Data Source to Grafana

Go to the Configuration -> Data Sources section in Grafana and click Add data source, then select PostgreSQL and configure it in the following manner:

Grafana Data Source Configuration

After this is done, click the Save & Test button to check the connectivity between Postgres and Grafana.

Step 7: Importing Grafana Dashboard
  1. Execute this query in PostgreSQL to add the aws_resources view.
  2. Download the JSON for AWS Asset Inventory Grafana Dashboard.
  3. To import a dashboard click Import under the Dashboards icon in the side menu.
Grafana Import Dashboard

After all 3 steps you should be able to see the asset inventory dashboard:

AWS Asset Inventory Dashboard
Step 8: Customizing the Dashboard

To further customize the asset inventory dashboard, you can make use of the CloudQuery AWS provider schema and write SQL queries of your business interest.

SQL Query to find all S3 buckets that are able to be public

Conclusion

In this post, we have discussed the Grafana and CloudQuery setup. Later, we configured an open-source cloud asset inventory for our AWS account.

If you need help with DevOps practices, AWS, or Kubernetes at your company, feel free to reach out to us at Opsnetic.

Contributed By: Raj Shah


AWS Asset Inventory Dashboard with CloudQuery and Grafana was originally published in Opsnetic on Medium.

Author: Raj Shah

GPU resources are required for graphical rendering, machine learning training, and inferencing scenarios. If you are managing these workloads in Kubernetes, effective utilization of expensive GPU resources can help you reduce the cost of the overall infrastructure!

What is Time-Sharing?

Kubernetes enables applications to precisely request the resource amounts they need to function. While you can request fractional CPU units for applications, you can’t request fractional GPU units.

Time-sharing is a GKE (Google Kubernetes Engine) feature that lets multiple containers share a single physical GPU attached to a node. Using GPU time-sharing in GKE lets you more efficiently use your attached GPUs and save running costs. Time-shared GPUs are ideal for running workloads that don’t need to use high amounts of GPU resources all the time.

Limitations to keep in check

Before using GPUs on GKE, keep in mind the following limitations:
  • You cannot add GPUs to existing node pools.
  • GPU nodes cannot be live migrated during maintenance events.
  • The GPU type you can use depends on the machine series, as follows: A2 machine series — A100 GPUs & N1 machine series — All GPUs except A100.
  • GPUs are not supported in Windows Server node pools.
  • The maximum number of containers that can share a single physical GPU is 48.
  • You can enable time-sharing GPUs on GKE Standard clusters and node pools running GKE version 1.23.7-gke.1400 and later.

Creating a GPU Time-Sharing GKE cluster

Step 1: Create a GKE cluster with the following gcloud command

You can run the gcloud commands in the cloud shell or any shell authorized to interact with your GCP workloads.

gcloud container clusters create gpu-time-sharing \
--zone=us-central1-a \
--cluster-version=1.23.5-gke.1503 \
--machine-type=n1-standard-2 \
--disk-type "pd-standard" \
--disk-size "50" \
--max-pods-per-node "48" \
--enable-ip-alias \
--default-max-pods-per-node "48" \
--spot \
--num-nodes "1" \
--accelerator=type=nvidia-tesla-k80,count=1,gpu-sharing-strategy=time-sharing,max-shared-clients-per-gpu=48

The above command creates a GKE cluster with Spot VMs, which helps cut the cost of the overall infrastructure by more than half.

You can use GPUs with Spot VMs if your workloads can tolerate frequent node disruptions.

You can look at the GPU platforms available to attach to the nodes and choose the one which fits best for your needs.

Successfully deployed GKE GPU Time-Sharing cluster

Step 2: Create an additional node pool (Optional)

gcloud beta container node-pools create "pool-1" \
--cluster "gpu-time-sharing" \
--zone "us-central1-a" \
--node-version "1.23.5-gke.1503" \
--machine-type "n1-standard-2" \
--accelerator "type=nvidia-tesla-p4,count=1" \
--disk-type "pd-standard" \
--disk-size "50" \
--num-nodes "1" \
--enable-autoupgrade \
--enable-autorepair \
--max-pods-per-node "48" \
--spot

Step 3: Get access to the GKE cluster through kubeconfig

The command below adds a kubeconfig entry to the system so that you can switch context to the created cluster and execute kubectl commands against it.

gcloud container clusters get-credentials gpu-time-sharing

Step 4: Testing GPU time-sharing functionality

kubectl get nodes
Displaying Nodes on Cluster

As we have created only a single-node node pool for the cluster, we correctly see one node in the output of the kubectl command.

Now, install the GPU device drivers from NVIDIA that manage the time-sharing division of the physical GPUs. To install the drivers, you deploy a GKE installation DaemonSet that sets the drivers up.

kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/master/nvidia-driver-installer/cos/daemonset-preloaded.yaml

The above command deploys the installation DaemonSet and installs the default GPU driver version. You can find more information regarding this installation here.

kubectl describe nodes gke-gpu-time-sharing-default-pool-6a5e9e79-rh8x
Checking Node Allocatable Capacity

After describing the created node, we can verify that there are 48 allocatable GPUs, which are logical fragments of the single physical GPU we originally attached!

Now, the following Kubernetes YAML file contains the deployment of pods in which the container prints the UUID of the GPU that’s attached to it.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cuda-simple
spec:
  replicas: 3
  selector:
    matchLabels:
      app: cuda-simple
  template:
    metadata:
      labels:
        app: cuda-simple
    spec:
      nodeSelector:
        cloud.google.com/gke-gpu-sharing-strategy: time-sharing
        cloud.google.com/gke-max-shared-clients-per-gpu: "48"
      containers:
      - name: cuda-simple
        image: nvidia/cuda:11.0-base
        command:
        - bash
        - -c
        - |
          /usr/local/nvidia/bin/nvidia-smi -L; sleep 300
        resources:
          limits:
            nvidia.com/gpu: 1

After the successful creation of the above deployment, we can see that 3 pods are in the Running state.

kubectl get pods
Showing Pods under the Deployment

By printing their individual logs, allocation of GPU fragments can be confirmed for each pod as shown below.

$ kubectl logs cuda-simple-749bf54c4d-864zv
GPU 0: Tesla K80 (UUID: GPU-c9cbf47c-b630-d1d3-b79f-421ec976fbc5)
$ kubectl logs cuda-simple-749bf54c4d-csq4d
GPU 0: Tesla K80 (UUID: GPU-c9cbf47c-b630-d1d3-b79f-421ec976fbc5)
$ kubectl logs cuda-simple-749bf54c4d-dnxqx
GPU 0: Tesla K80 (UUID: GPU-c9cbf47c-b630-d1d3-b79f-421ec976fbc5)

Since we have created a single-node cluster with a GPU attached to the node, we are basically dealing with only 1 physical GPU!

All the pods on that node will have logical fragments of that GPU attached to them, and the maximum number of pods that can share it this way is 48 per node.
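
To see this in action, you can scale the sample deployment and watch the shared GPU allocations grow (a quick sketch; the node name is a placeholder):

# Scale to 10 replicas; each pod requests one logical share of the same physical GPU
kubectl scale deployment cuda-simple --replicas=10
# Confirm how many of the 48 shared GPU slots are now allocated on the node
kubectl describe node <node-name> | grep nvidia.com/gpu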

kubectl get pods
Scaling Pods under the Deployment

After seeing the node description, we can confirm that 10 logical GPU fragments of our Tesla K80 have been successfully allocated to the running pods (as each pod requests 1 GPU resource from the node).

Looking at Allocated resources of the node after rescaling deployment

Conclusion

In this post, we have discussed achieving GPU time-sharing in a GKE environment, proper utilization of GPUs, and how to achieve cost benefits with Spot instances.

If you need help with DevOps practices, or Kubernetes at your company, feel free to reach out to us at Opsnetic.


GPU time-sharing with multiple workloads in Google Kubernetes Engine was originally published in Opsnetic on Medium.

Author: Raj Shah

If you are migrating multiple MySQL databases to AWS or creating MySQL databases from scratch for your production workloads, you will find that some of them are a great fit for Amazon RDS while others are better suited to run directly on Amazon EC2. This post will help you choose the best service to accomplish your objectives more quickly and effectively. Let’s compare AWS RDS with AWS EC2.

Service Introduction

Relational Database Service (Amazon RDS): A managed database service that automatically sets up and maintains your database in the cloud.

Elastic Compute Cloud (Amazon EC2): Offers scalable compute capacity, giving you the option to scale up or down in response to shifting needs.

Comparison of Services

  • Note: The pricing of EC2 MySQL and RDS MySQL is mostly on similar lines with little deviations.
For Example
EC2 MySQL Pricing — https://calculator.aws/#/estimate?id=24ff359a6b1c47ddd139e410c1336da705b266f3
RDS MySQL Pricing — https://calculator.aws/#/estimate?id=ef15dcd0a17d1865afa61bea52a2bf83dbd5563e

Who manages what?

When to use what?

While choosing between RDS and EC2, the entire decision goes down to what you want — control or automated processes, cost of time, and the skills to manage. Both Amazon RDS and Amazon EC2 offer different advantages for running MySQL. Amazon RDS is easier to set up, manage, and maintain than running MySQL on Amazon EC2, and lets you focus on other important tasks, rather than the day-to-day administration of MySQL. Alternatively, running MySQL on Amazon EC2 gives you more control, flexibility, and choice. Depending on your application and your requirements, you might prefer one over the other.
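
To make the operational difference concrete, here is a hedged sketch of the two routes (identifiers, instance sizes, and passwords are placeholders, not recommendations):

# Managed route: one API call, and AWS handles provisioning, patching, and backups
aws rds create-db-instance \
  --db-instance-identifier demo-mysql \
  --engine mysql \
  --db-instance-class db.t3.medium \
  --allocated-storage 100 \
  --master-username admin \
  --master-user-password '<strong-password>'

# Self-managed route: you install, tune, back up, and patch MySQL yourself on the EC2 instance
sudo yum install -y mysql-server    # package name varies by distribution
sudo systemctl enable --now mysqld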

Amazon RDS might be a better choice for you if

  1. You want to focus on your business and applications, and have AWS take care of the undifferentiated heavy lifting tasks such as provisioning of the database, management of backup and recovery tasks, management of security patches, minor version upgrades, and storage management.
  2. You need a highly available database solution and want to take advantage of the push-button, synchronous Multi-AZ replication offered by Amazon RDS, without having to manually set up and maintain a standby database.
  3. You would like to have synchronous replication to a standby instance for high availability.
  4. Your database size and IOPS needs are less than the RDS MySQL limits. Refer to Amazon RDS DB instance storage for the current maximum.
  5. You don’t want to manage backups and, most importantly, point-in-time recoveries of your database.
  6. You would rather focus on high-level tasks, such as performance tuning and schema optimization, rather than the daily administration of the database.
  7. You want to scale the instance type up or down based on your workload patterns without being concerned about licensing and the complexity involved.

Amazon EC2 might be a better choice for you if

  1. You need full control over the database, including SYS/SYSTEM user access, or you need access at the operating system level.
  2. Your database size exceeds 80% of the current maximum database size in Amazon RDS.
  3. You need to use MySQL features or options that are not currently supported by Amazon RDS.
  4. Your database IOPS needs are higher than the current IOPS limit.
  5. You need a specific MySQL version that is not supported by Amazon RDS. For more information, refer to AWS RDS MySQL Editions.

Conclusion:

In this post, we have covered AWS EC2 vs. AWS RDS for deploying a MySQL database and discussed how to choose between them for different use cases and scenarios.

If you need help with DevOps practices, or AWS at your company, feel free to reach out to us at Opsnetic.


Deciding between EC2 hosted or RDS managed MySQL was originally published in Opsnetic on Medium.

Author: Raj Shah

When creating RDP connections to Windows servers in the past, clients had to decide which was more important: security or cost. Thanks to Fleet Manager’s newest capability, customers can now easily and securely access Windows servers through RDP using a browser.

Now, you can quickly and easily log in to your instances from the AWS Management Console via the browser. With the help of this functionality, you can establish an RDP connection to your Windows instance without exposing the RDP port to the public, hence minimizing the attack surface. All AWS Regions that support AWS Systems Manager provide console-based access to Windows instances in Fleet Manager.

Operational Excellence is one of the critical pillars of the AWS Well-Architected Framework. Best practices are recommended to help you run workloads effectively, gain insights into workload operations, and continuously improve supporting processes and procedures to deliver business value.
AWS Systems Manager is a service that lets companies automate and manage their operations in the cloud and on-premises.

In particular, Fleet Manager offers a console-based experience, enabling system administrators to view and administer their fleet of instances from a single place. Fleet Manager provides administrators with an aggregated view of their compute resources regardless of their location.

Accessing instances using RDP

Through the Remote Desktop Protocol, system administrators may connect to Windows-based instances using a Graphical User Interface (GUI). One method for achieving this was connecting to the Windows computers through an RDP client. The biggest drawback of this approach is the manual and time-consuming nature of configuring settings like the password and destination endpoint for the RDP session.

Another approach is to proxy the RDP connections and set up bastion hosts, server instances that may safely access other servers on your network. However, this operation requires more manual configuration. Due to the extra provisioning, this design may be more costly and prone to errors, increasing the operating burden on system administrators. Furthermore, while building architectures, security is one of the top objectives: you want to create systems for secure RDP access without assigning public IP addresses or opening inbound ports to the instances.

Security and operational overhead are the key drawbacks of the older RDP systems. It is difficult to access numerous instances that way. Additionally, manually logging into Amazon EC2 instances raises the possibility of mistakes and misconfigurations, which might result in downtime or security threats.

Console-based RDP access to Windows instances

Using an RDP connection, AWS Systems Manager Fleet Manager allows a console-based management interface for Windows instances. Through the NICE DCV protocol, these sessions are accessible through your web browser.

Customers may now manage Windows instances and configure secure connections using a complete GUI thanks to this new functionality. Using console-based access to Windows instances has a number of benefits, such as:

  • Connect, view, and interact with up to four instances side-by-side within a single web browser window.
  • Quickly establish a connection via the AWS Management Console. Fleet Manager uses Session Manager to connect to Windows instances using RDP, so there’s no need to set up additional servers or install additional software and plugins.
  • Use Windows credentials, Amazon Elastic Compute Cloud (Amazon EC2) key pairs, or AWS Single Sign-On (SSO) to securely log in to your instances. System administrators now have the option to RDP into the instance without providing a login or password. Furthermore, there is no need for instance security groups to allow direct inbound access to RDP ports.

Demonstration

Prerequisites

The following requirements must be fulfilled to open an RDP connection to an instance:

  • It must be a Windows instance
  • The SSM Agent must be installed on the instance; it is preinstalled by default on many AMIs
  • Associate an EC2 key pair or Windows User Credentials
  • It must be able to access the public or private SSM endpoints

To use Fleet Manager, a capability of AWS Systems Manager, the instance profile attached to your instance must have the required permissions. It must have the Systems Manager EC2 instance profile and Fleet Manager permissions.
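
For reference, a minimal sketch of granting the baseline Systems Manager permissions to the instance role (the role name is a placeholder; Fleet Manager RDP may need additional permissions per the AWS documentation) is:

# Attach the AWS managed policy that lets Systems Manager manage the instance
aws iam attach-role-policy \
  --role-name <your-ec2-instance-role> \
  --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore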

Connect to the instance via RDP

Open the AWS Systems Manager interface. Select Fleet Manager from the Node Management section on the left pane. This directs you to the Fleet Manager page, where the Managed Instance view lists all of the instances that may be accessed, whether they are on-premises or in the cloud.

Fleet Manager managed nodes view in the console

In this situation, you can see the Windows instance to which you want to establish an RDP connection. Check to see if the SSM Agent’s ping status is online. If it’s not, you can investigate why. Select Node actions after choosing the instance you wish to connect to. Then, choose Connect with Remote Desktop from the drop-down option.

Connect with Remote Desktop selected in console

This takes you to the Remote Desktop connection page.

Remote Desktop Connection authentication page

On this screen, you may select how you wish to log in to the instance. Use the EC2 key pair that was stored when the EC2 instance was launched in this situation. Locate the EC2 key pair on your local system, select it, and click Connect. As an alternative, you can choose to log in to the instance with your Windows credentials.

Connecting to the Windows instance via the EC2 key pair

You are now connected to the instance through RDP. Select End Session in the top right of the panel to exit the instance.

Console view of the Windows instances within the web browser window
Up to four nodes, or Windows instances, can be connected in this view.

Conclusion:

So far in this post, we have discussed what AWS Systems Manager Fleet Manager is, what its uses are, and how to establish a remote session to a Windows instance using SSM Fleet Manager in AWS.

If you need help with DevOps practices, or AWS at your company, feel free to reach out to us at Opsnetic.

Contributed By: Raj Shah


Access Windows instances through Web-Browser using AWS System Manager Fleet Manager was originally published in Opsnetic on Medium.

Author: Raj Shah

How Containerization is different than Virtualization?

Multiple, independent services may be deployed on a single platform using containers and virtualization.

Despite the differences and similarities between them, containers and virtual machines (VMs) both increase IT efficiency, offer application mobility, and improve DevOps and the software development lifecycle.

Virtualization

Virtualization refers to running multiple operating systems on a single machine at the same time. Virtualization is made feasible by a layer of software known as a “hypervisor.”

A virtual machine encapsulates dependencies, libraries, and configurations. It runs its own operating system while making use of a server’s shared resources. VMs are separated from one another and have their own virtual infrastructure, so a VM can run software on a different OS without additional hardware.

Hypervisor Types

Type 1 or Bare-Metal Hypervisor

A Type 1 hypervisor runs directly on the host machine’s physical hardware, and it’s referred to as a bare-metal hypervisor. The Type 1 hypervisor doesn’t have to load an underlying OS. With direct access to the underlying hardware and no other software — such as OSes and device drivers — to contend with for virtualization, Type 1 hypervisors are regarded as the most efficient and best-performing hypervisors available for enterprise computing.

For Example: VMware vSphere/ESXi, Microsoft Hyper-V, etc.…

Type 2 or Hosted Hypervisor

A Type 2 hypervisor is typically installed on top of an existing OS. It is sometimes called a hosted hypervisor because it relies on the host machine’s preexisting OS to manage calls to CPU, memory, storage and network resources.

For Example: VMware Workstation, Oracle VirtualBox, etc.…

Containerization

Containerization packages up code and its dependencies so the application can run quickly and reliably from one environment to another.

Containerization could also be termed OS-level virtualization. Put simply, containers leverage features of the host operating system to isolate processes and control the processes’ access to CPUs, memory, and disk space.

It runs several workloads on a single OS instance, wrapping each application in a container with its own dependencies rather than a full guest OS. It does not require any extra guest operating system to be installed on the host system.
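
A quick way to see this OS-level isolation for yourself (assuming Docker is installed) is to compare what a container shares with the host and what it does not:

# The container reuses the host kernel...
docker run --rm alpine uname -r
# ...but gets its own isolated process tree and filesystem (PID 1 here is ps itself)
docker run --rm alpine ps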

Comparison

When to use what?

  • When opting for a microservice architecture, choosing containerization over virtualization resonates well, because rather than spinning up an entire virtual machine, containerization packages together everything needed to run a single application or microservice.
  • When you require total isolation of the applications running on the server, you should opt for virtualization.
  • Containers are also ideal for automation and DevOps pipelines, including continuous integration and continuous deployment (CI/CD) implementation.
  • In the era of multi-cloud solutions, applications might need to be migrated from one cloud vendor to another. Containerizing applications provides teams the portability they need to handle the many software environments & different platforms of modern IT.
  • Dependency Hell: The dependency issue arises when several packages have dependencies on the same shared packages or libraries, but they depend on different and incompatible versions of the shared packages.
    To avoid dependency hell containerization proves to be the best bet!

Conclusion:

In this post, we have covered what virtualization and containerization are, what their uses are, and how to choose between them for different use cases.

If you need help with DevOps practices, or AWS at your company, feel free to reach out to us at Opsnetic.


How Containerization is different than Virtualization ? was originally published in Opsnetic on Medium.

Author: Raj Shah

Amazon Connect is an easy-to-use, multichannel cloud contact/call center solution that helps businesses provide excellent customer service at a lower cost.

Amazon Connect is built on the same contact center technology that Amazon uses to allow its customer support agents to conduct millions of customer conversations worldwide.

Benefits of Amazon Connect

  1. Make changes in minutes, not months
    Setting up Amazon Connect is easy. With only a few clicks in the Amazon Web Services (AWS) Management Console, agents can begin taking calls within minutes.
  2. Save up to 80% over traditional contact center solutions
    As an on-demand service, you pay for Amazon Connect usage by the minute. No long-term commitments, upfront charges, or minimum monthly fee.
  3. Easily scale to meet unpredictable demand
    With no infrastructure to deploy or manage, you can scale your Amazon Connect contact center up or down.

Features of Amazon Connect

  1. User Administration
    The ability to add users, such as agents or managers, and configure them with permissions that are appropriate to their roles. You can authenticate users through Amazon Connect, an existing AWS Directory Service directory service, or a SAML-based identity provider (IdP).
  2. Contact Control Panel (CCP)
    A customizable interface that agents use to engage with contacts across multiple channels, such as voice and chat.
  3. Contact Flows
    Contact flows contain features that let you define the customer experience with the contact center from start to end. For example, you can play prompts, get input from the customer, branch based on customer input, invoke a Lambda function, or integrate an Amazon Lex bot.
  4. Skills-based Routing
    The routing of contacts based on the skills of the agents.
  5. Metrics and Reporting
    Real-time and historical information about the activity in your contact center.

Core concepts to consider when building an Amazon Connect contact center

Telephony: Amazon Connect provides a variety of choices to enable your company to make and receive telephone calls. A big advantage of Amazon Connect is AWS manages the telephony infrastructure for you: carrier connections, redundancy, and routing. And, it’s designed to scale.

Chat: Amazon Connect allows your customers to start a chat with an agent or Amazon Lex bot, step away from it, and then resume the conversation. They can even switch devices and continue the chat.

Routing profiles and Queue-based routing: A routing profile determines the contacts that an agent receives and routing priority. Amazon Connect uses routing profiles to help you manage your contact center at scale.

Queue-based (or skills-based) routing directs customers to specific agents according to criteria like agent skill.

NLP: The ability for a computer to understand voice or text input, derive meaning, recognize purpose or intent, and collect individual data elements.

NLP reduces the time and effort needed to achieve your contact’s purpose, facilitate self-service, and can increase quality of experience for your contacts. Amazon Connect features a native integration with Amazon Lex for NLP over text and voice.

Channels and concurrency: Agents can be available concurrently on both voice and chat channels. Here’s how this works:

Suppose an agent is configured in their routing profile for voice and up to five chats. When the agent logs in, a chat or voice call can route to them. However, once they are on a voice call, no more voice calls or chats are routed to them until they finish the call.

Contact flows: A contact flow defines how a customer experiences your contact center from start to finish. At the most basic level, contact flows enable you to customize your IVR (interactive voice response) system.

For example, you can give customers a set of menu options, and route customers to agents based on what they enter on their phone. With Amazon Connect, contact flows are even more powerful. You create dynamic, personalized flows to interact with AWS services.

Key Terminologies

  1. Queues
    Queues allow contacts to be routed to the best agents to service them. If you need to route contacts with different priorities or to agents with different skills, you can create multiple queues. Queues can handle voice, chat, or both.
  2. Contact Flows
    Contact flows define a customer’s experience when they contact you. Amazon Connect contact flows can integrate with systems such as CRMs and databases to dynamically adapt the experience by customer and history.
  3. Routing Profile
    A routing profile is a collection of queues from which an agent services contacts. Routing profiles enable agents to service multiple queues with the proper level of priority.

Steps in creating an Amazon Connect instance

Steps in configuring an Amazon Connect instance

The Amazon Connect dashboard provides the following configuration areas:

  1. Communication Channels:
    After you create an Amazon Connect instance, you can claim a phone number to use for your contact center. Amazon Connect allows users to provision their own phone numbers. If you want to keep a phone number you already have, you can port the phone number and use it with Amazon Connect.
  2. Hours of Operations:
    Hours of operation define when Amazon Connect resources, such as queues, are available. These hours may be referenced in contact flows. To build out a holiday closure schedule, many enterprise organizations store the schedule in a DynamoDB table and reference it with a Lambda function (see the sketch after these steps).
  3. Create Queue:
    In Amazon Connect, routing consists of three parts: queues, routing profiles, and contact flows. Contacts are routed through your contact center based on the routing logic you define in your contact flows. You can also use routing profiles to manage how agents are allocated to queues, such as routing specific types of contacts to agents with specific skill sets. If no agent with the required skill set is available, the contact is placed in the queue you define in the contact flow.
  4. Create Prompts:
    Amazon Connect gives customers several options to manage prompts. Users are able to simply type the prompt from within the contact flow and have it play back using Amazon Polly, a text-to-speech service. (Only 8 KHz WAV files that are less than 50 MB are supported for prompts.)
  5. Create contact flows:
    A contact flow determines a series of interactions with the user. Contact flows can play or show prompts to the user, get user inputs, and behave differently depending on conditions. In a way, contact flows are like simple programs, written in a very constrained, visual programming language.
  6. Create routing profiles:
    While queues are a “waiting area” for contacts, a routing profile links queues to agents. When you create a routing profile, you specify which queues it contains and whether one queue should be prioritized over another.
  7. Configure users:
    User management in Amazon Connect enables adding, managing, and deleting users. User-specific settings, such as routing profiles and permissions, can be assigned after the users are created.
Following these steps makes your contact center ready to go!
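
As referenced in step 2, a common pattern is a Lambda function that checks a DynamoDB table of holiday closures and returns a flag the contact flow branches on. Below is a minimal, hypothetical sketch; the table name and item schema are assumptions.

    from datetime import date

    import boto3

    # Hypothetical table keyed on an ISO date string,
    # e.g. {"closure_date": "2024-12-25", "reason": "Christmas"}.
    table = boto3.resource("dynamodb").Table("ContactCenterHolidays")


    def lambda_handler(event, context):
        today = date.today().isoformat()
        item = table.get_item(Key={"closure_date": today}).get("Item")

        # The contact flow branches on "isHoliday" to play a closure prompt or continue routing.
        return {
            "isHoliday": "true" if item else "false",
            "closureReason": item.get("reason", "") if item else "",
        }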

Conclusion:

So far in this post, we have discussed what Amazon Connect is, its uses and components, and the steps to configure it.

If you found this blog useful and want to set up Amazon Connect for your organization, Opsnetic Cloud Consulting provides Amazon Connect setup and support services. We also help companies with DevOps practices and AWS, so feel free to reach out to us at Opsnetic.

Contributed By: Raj Shah


Amazon Connect overview to setup contact center in cloud was originally published in Opsnetic on Medium, where people are continuing the conversation by highlighting and responding to this story.

Author: Raj Shah

The AWS Well-Architected Framework describes key concepts, design principles, and architectural best practices for designing and running workloads in the cloud.

The AWS Well-Architected Framework helps you understand the pros and cons of decisions you make while building systems on AWS.

6 Pillars of the AWS Well-Architected Framework:

  1. Operational Excellence: The ability to support development and run workloads effectively, gain insight into their operations, and to continuously improve supporting processes and procedures to deliver business value.
    Best Practices for Operational Excellence:
    1.1. Perform operations as code
    1.2. Make frequent, small, reversible changes
    1.3. Refine operations procedures frequently
    1.4. Anticipate & learn from failure
  2. Security: The security pillar describes how to take advantage of cloud technologies to protect data, systems, and assets in a way that can improve your security posture.
    Best Practices for Security:
    2.1. Implement a strong identity foundation
    2.2. Enable traceability
    2.3. Protect data in transit & at rest
    2.4. Implement access to data and infrastructure via the principle of least privilege
  3. Reliability: The reliability pillar encompasses the ability of a workload to perform its intended function correctly and consistently when it’s expected to. This includes the ability to operate and test the workload through its total lifecycle. The Well-Architected whitepaper provides in-depth, best-practice guidance for implementing reliable workloads on AWS.
    Best Practices for Reliability:
    3.1. Automatically recover from failure
    3.2. Manage change through automation
    3.3. Stop guessing capacity; automate capacity provisioning instead
    3.4. Scale horizontally to increase the aggregate availability of the workload
  4. Performance Efficiency: The ability to use computing resources efficiently to meet system requirements, and to maintain that efficiency as demand changes and technologies evolve.
    Best Practices for Performance Efficiency:
    4.1. Leverage advanced technologies provided by AWS
    4.2. Use serverless architectures to reduce operational management
    4.3. Understand the systems & their purpose to use them to their best effect
  5. Cost Optimization: The ability to run systems to deliver business value at the lowest price point.
    Best Practices for Cost Optimization:
    5.1. Implement Cloud Financial Management
    5.2. Measure the overall efficiency of services consumed
    5.3. Adopt a consumption model (pay for what you use)
    5.4. Stop spending money on services that AWS already manages and focus on developing the end product
  6. Sustainability: The ability to continually improve sustainability impacts by reducing energy consumption and increasing efficiency across all components of a workload, maximizing the benefits from provisioned resources and minimizing the total resources required.
    Best Practices for Sustainability:
    6.1. Choose Regions near Amazon renewable energy projects
    6.2. Align service levels to customer needs
    6.3. Select efficient architectures and data patterns that minimize the end-user hardware required, reducing overall resource utilization

Read the full AWS Well-Architected whitepaper for more detail

Design Architecture Trade-offs:

When architecting workloads, you make trade-offs between pillars based on your business context. These business decisions can drive your engineering priorities.

  • You might optimize to improve sustainability impact and reduce cost at the expense of reliability in development environments.
  • For mission-critical solutions, you might optimize reliability with increased costs and sustainability impact.
  • In ecommerce solutions, performance can affect revenue and customer propensity to buy.
  • Security and operational excellence are generally not traded off against the other pillars.

Key Terminologies:

  • A component is the code, configuration, and AWS Resources that together deliver against a requirement.
  • A workload is a collection of resources and code (components) that delivers business value, such as a customer-facing application or a backend process.
  • Lenses provide a way for you to consistently measure your architectures against best practices and identify areas for improvement.

Understanding & Using AWS Well-Architected Tool:

The AWS Well-Architected Tool helps you review your workloads against current AWS best practices and provides guidance on how to improve your cloud architectures. This tool is based on the AWS Well-Architected Framework.

Step 1: Define Workload
Step 2: Choose Lenses to evaluate the workload
Step 3: Review & Measure the workload on selected lenses
Step 4: Based on your answers to the lens questions, risks are identified and an improvement plan is created
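
These steps can also be driven through the AWS SDK. Below is a minimal, hypothetical boto3 sketch that defines a workload against the standard Well-Architected lens and then reads back the risk counts once questions have been answered; the workload name, Region, and review owner are placeholders.

    import boto3

    wa = boto3.client("wellarchitected")

    # Steps 1 & 2: define the workload and attach the core Well-Architected lens.
    workload = wa.create_workload(
        WorkloadName="ecommerce-backend",  # hypothetical workload name
        Description="Order processing services",
        Environment="PRODUCTION",
        AwsRegions=["us-east-1"],
        ReviewOwner="platform-team@example.com",
        Lenses=["wellarchitected"],
    )
    workload_id = workload["WorkloadId"]

    # Step 3: answer the lens questions (typically in the console, or via update_answer).

    # Step 4: read back the identified risks for the lens review.
    review = wa.get_lens_review(WorkloadId=workload_id, LensAlias="wellarchitected")
    print(review["LensReview"]["RiskCounts"])  # e.g. counts of HIGH / MEDIUM risks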

Conclusion:

So far in this post, we have discussed what the AWS Well-Architected Framework is, its uses and best practices, and how the Well-Architected Tool enables us to define and evaluate a workload in AWS.

If you need help with DevOps practices or AWS at your company, feel free to reach out to us at Opsnetic. Opsnetic also provides a FREE Well-Architected Review of your company’s workloads, so don’t hesitate to get in touch!

Contributed By: Raj Shah


Understanding AWS Well Architected Framework and Tool was originally published in Opsnetic on Medium, where people are continuing the conversation by highlighting and responding to this story.