Chunk 4: Automation / CI / CD

Part 4 of the Cloud Resume Challenge covers the Automation portion of the project. Step 12 involves working with Infrastructure as Code, Step 14 focuses on implementing Continuous Integration/Continuous Deployment (CI/CD) for the back end, and Step 15 covers CI/CD for the front end.

Step 12: Infrastructure as Code

Now, you might wonder: what exactly is Infrastructure as Code? It simply refers to managing and provisioning cloud resources using code and configuration files instead of manual, point-and-click processes. With Infrastructure as Code, developers can define and maintain their infrastructure in version-controlled code, ensuring consistency and reproducibility across environments. The CRC originally encouraged the use of AWS SAM (Serverless Application Model) for IaC. However, I opted for a different route, leveraging the flexibility and power of Terraform. Unlike AWS SAM, which is limited to AWS environments, Terraform supports multiple cloud providers, making it a versatile choice for managing infrastructure across different platforms. To demonstrate this, I used Terraform to define and provision the AWS resources needed for my Lambda function deployment. Let’s take a closer look at the Terraform script:

resource "aws_lambda_function" "myfunc" {
  filename         = data.archive_file.zip_the_python_code.output_path
  source_code_hash = data.archive_file.zip_the_python_code.output_base64sha256
  function_name    = "myfunc"
  role             = aws_iam_role.iam_for_lambda.arn
  handler          = "func.lambda_handler"
  runtime          = "python3.9"
}

resource "aws_iam_role" "iam_for_lambda" {
  name = "iam_for_lambda"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

resource "aws_iam_policy" "iam_policy_for_resume_project" {
  name        = "aws_iam_policy_for_terraform_resume_project_policy"
  path        = "/"
  description = "AWS IAM Policy for managing the resume project role"

  policy = jsonencode({
    "Version" : "2012-10-17",
    "Statement" : [
      {
        "Action" : [
          "logs:CreateLogGroup",
          "logs:CreateLogStream",
          "logs:PutLogEvents"
        ],
        "Resource" : "arn:aws:logs:*:*:*",
        "Effect" : "Allow"
      },
      {
        "Effect" : "Allow",
        "Action" : [
          "dynamodb:UpdateItem",
          "dynamodb:GetItem",
          "dynamodb:PutItem"
        ],
        "Resource" : "arn:aws:dynamodb:*:*:table/cloudresume"
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "attach_iam_policy_to_iam_role" {
  role       = aws_iam_role.iam_for_lambda.name
  policy_arn = aws_iam_policy.iam_policy_for_resume_project.arn
}

data "archive_file" "zip_the_python_code" {
  type        = "zip"
  source_file = "${path.module}/lambda/func.py"
  output_path = "${path.module}/lambda/func.zip"
}

resource "aws_lambda_function_url" "url1" {
  function_name      = aws_lambda_function.myfunc.function_name
  authorization_type = "NONE"

  cors {
    allow_credentials = true
    allow_origins     = ["*"]
    allow_methods     = ["*"]
    allow_headers     = ["date", "keep-alive"]
    expose_headers    = ["keep-alive", "date"]
    max_age           = 86400
  }
}

  1. resource "aws_lambda_function": This section defines the AWS Lambda function, specifying its runtime (Python 3.9), handler function, and source code (a ZIP file containing the Python code). Additionally, it associates the Lambda function with an IAM role (aws_iam_role.iam_for_lambda.arn).
  2. resource "aws_iam_role": Here, I create an IAM role named “iam_for_lambda” and define its “AssumeRolePolicy” to grant permission for the Lambda service to assume this role.
  3. resource "aws_iam_policy": This block defines an IAM policy named “aws_iam_policy_for_terraform_resume_project_policy” to manage permissions related to the resume project. It allows actions such as writing logs and interacting with the “cloudresume” DynamoDB table.
  4. resource "aws_iam_role_policy_attachment": This section attaches the previously defined IAM policy to the IAM role, ensuring the Lambda function possesses the required permissions.
  5. data "archive_file": This part creates a ZIP archive containing the Python code required for the Lambda function. The source code is located in the lambda/func.py file, and the ZIP archive is stored in lambda/func.zip.
  6. resource "aws_lambda_function_url": Lastly, this segment establishes a URL for the Lambda function and configures CORS settings to allow access from different origins.

By employing Terraform, I could manage my AWS infrastructure as code, resulting in better version control and consistent environments. The script effectively set up the resources needed to deploy my Lambda function and granted it the necessary permissions to interact with AWS services seamlessly.

What Is CI/CD?

CI/CD stands for Continuous Integration and Continuous Deployment (or Continuous Delivery). It is a set of practices and automation techniques used in software development to improve the development, testing, and deployment processes. The main goals of CI/CD are to enhance collaboration, increase the speed of development, and ensure the reliability of software releases.

Step 14: CI/CD Back End

In Step 14 of the Cloud Resume Challenge, participants are tasked with creating a CI/CD pipeline for their backend Lambda function. CI/CD pipelines automate the process of building, testing, and deploying code changes, streamlining the development workflow and ensuring the reliability of the application.

The provided code snippet presents a basic CI/CD pipeline configuration using GitHub Actions for a Lambda function. Let’s break down the steps involved:

name: CI/CD Pipeline for lambda function

on:
  push:
    branches:
      - main

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v2

      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: '3.9'

      - name: Install dependencies
        run: pip install -r requirements.txt

      - name: Run tests
        run: python3 _test.py

      - name: Deploy to AWS Lambda
        run: |
          aws configure set aws_access_key_id ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws configure set aws_secret_access_key ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws lambda update-function-code --function-name Cloudres --zip-file fileb://lambda.zip --region us-east-1

  1. Triggering the Pipeline: The pipeline is triggered automatically whenever there is a push to the main branch of the repository. Every time a developer pushes changes to main, the CI/CD pipeline runs.
  2. Setting up the Environment: The pipeline runs on an Ubuntu runner. It then sets up Python 3.9, the version required for the Lambda function, using the actions/setup-python GitHub Action.
  3. Installing Dependencies: Next, the pipeline installs the necessary Python dependencies by executing pip install -r requirements.txt. This ensures that the Lambda function has all the libraries it needs to run successfully.
  4. Running Tests: The pipeline runs tests for the Lambda function with python3 _test.py (a minimal sketch of such a test file appears after this list). This step is crucial to ensure that the function behaves as expected and to catch potential bugs or issues early in the development process.
  5. Deploying to AWS Lambda: Finally, the pipeline deploys the Lambda function to AWS Lambda. It uses the AWS Command Line Interface (CLI), with the access key and secret access key supplied securely through GitHub Secrets, and updates the function’s code from the lambda.zip archive using the aws lambda update-function-code command.
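
For context, here is a hedged sketch of what a minimal _test.py along these lines could look like. The module name func and the assertion are assumptions for illustration rather than my exact test file; a fuller test would mock DynamoDB (for example with moto) and assert on the returned visitor count.

# _test.py - illustrative smoke test for the visitor-counter Lambda handler.
# Assumes the handler code lives in func.py next to this file.
import func

def test_handler_is_callable():
    # Only checks that the handler is importable and callable; a fuller test
    # would mock DynamoDB and verify the returned count.
    assert callable(func.lambda_handler)

if __name__ == "__main__":
    test_handler_is_callable()
    print("All tests passed.")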

With this CI/CD pipeline in place, developers can have confidence that their changes will be automatically built, tested, and deployed to the Lambda function in a controlled and consistent manner. Any issues that arise during the pipeline execution can be quickly identified and addressed, ensuring that the Lambda function is always in a reliable and functioning state.

It’s worth noting that this is a basic example, and real-world CI/CD pipelines can be much more complex, including additional steps like security scanning, integration testing, and more. The flexibility of CI/CD allows developers to tailor the pipeline to the specific needs of their project and team. Additionally, integrating CI/CD practices helps foster a collaborative and agile development environment where code changes can be quickly and safely delivered to production.
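
As a concrete example of such tailoring: the deploy step above expects a lambda.zip artifact to already exist, so one refinement is to have the pipeline build that archive itself. A hedged sketch of such a step (assuming the handler lives in func.py at the repository root), placed before the Deploy to AWS Lambda step:

      - name: Package Lambda function
        # zip is preinstalled on GitHub's ubuntu-latest runners; -j stores
        # func.py at the archive root so the handler stays func.lambda_handler
        run: zip -j lambda.zip func.py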

Step 15: CI/CD Front End

In Step 15 of the Cloud Resume Challenge, participants are required to create a second GitHub repository for their website code and set up GitHub Actions to automatically update the S3 bucket whenever new website code is pushed to the repository.

The provided code snippet demonstrates a GitHub Actions workflow that handles the automatic upload to the S3 bucket:

name: Upload cloudresume to S3

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@master

      - name: Upload to S3
        uses: jakejarvis/s3-sync-action@master
        with:
          args: --acl public-read --follow-symlinks --delete
        env:
          AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_REGION: 'us-east-1'
          SOURCE_DIR: './'

Let’s understand the workflow:

  1. Triggering the Workflow: The workflow is triggered automatically whenever there is a push to the main branch of the website repository.
  2. Setting up the Environment: The workflow runs on an Ubuntu environment.
  3. Checkout Code: The workflow checks out the latest code from the main branch of the website repository using actions/checkout@master.
  4. Upload to S3: The main part of the workflow is the Upload to S3 step. It uses the jakejarvis/s3-sync-action@master GitHub Action to synchronize the website code with the S3 bucket. The args option passes additional flags to the sync, such as setting the uploaded files’ permissions to public-read, following symbolic links, and deleting files from the S3 bucket that no longer exist in the repository (roughly equivalent to the aws s3 sync command shown after this list).
  5. Environment Variables: The env block defines the environment variables required for the AWS S3 synchronization. The secrets, like the AWS S3 bucket name, AWS access key ID, and AWS secret access key, are stored securely in GitHub Secrets. The SOURCE_DIR specifies the directory from which the website code will be uploaded to the S3 bucket.
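
Under the hood, jakejarvis/s3-sync-action wraps the AWS CLI’s aws s3 sync command, so the step above behaves roughly like the following command run from the repository root (the bucket name is a placeholder):

aws s3 sync ./ s3://<your-bucket-name> --acl public-read --follow-symlinks --delete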

With this GitHub Actions workflow in place, every time new code is pushed to the main branch of the website repository, the workflow will automatically update the corresponding S3 bucket, ensuring that the latest version of the website is available to users.

It’s important to note that using GitHub Secrets to store sensitive information like AWS credentials is a secure practice. By leveraging GitHub Actions, developers can automate deployment tasks and streamline the process of updating their cloud resources, making the development workflow more efficient and reliable.
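
For reference, these repository secrets can be created either from the repository’s Settings page (Actions secrets) or with the GitHub CLI; a quick illustrative sketch using gh, where each command prompts for the secret value:

gh secret set AWS_ACCESS_KEY_ID
gh secret set AWS_SECRET_ACCESS_KEY
gh secret set AWS_S3_BUCKET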

Conclusion

The final part of the Cloud Resume Challenge was undoubtedly my favorite. It introduced me to the concept of Infrastructure as Code (IAC), which was entirely new to me. Prior to this challenge, I had no experience with IAC tools like Terraform. However, through this step, I learned how to define and manage cloud resources programmatically, making the deployment process more efficient and reliable.

With the successful completion of Step 15 and the implementation of the CI/CD pipeline for the Lambda function, I officially finished the Cloud Resume Challenge. It was a rewarding journey, and I now have a comprehensive cloud-based resume that showcases my skills and knowledge in various AWS services and DevOps practices.

One key lesson I learned from this challenge is the importance of not configuring resources such as DynamoDB tables, Function URLs, and Lambda functions manually through the AWS console. Leveraging Infrastructure as Code instead ensures consistency and reproducibility and makes it easier to manage resources in a version-controlled manner. This approach reduces the risk of human error and simplifies the process of maintaining and updating cloud resources.

Another crucial lesson is the value of automating the delivery workflow itself with GitHub Actions. Now, whenever I push updates to my Python code, GitHub Actions automatically runs my Python tests, so I can promptly identify and address any issues that arise during development. As a result, I can maintain a more reliable codebase while significantly streamlining the deployment workflow.

Additionally, I have created a separate GitHub repository for my website code and set up GitHub Actions to automatically update the corresponding S3 bucket whenever new code is pushed. This automation ensures that the latest version of my website is always available to users without any manual intervention.

The Cloud Resume Challenge has not only allowed me to showcase my skills and knowledge to potential employers but has also taught me valuable DevOps practices. Understanding Infrastructure as Code and implementing CI/CD pipelines have been invaluable lessons that I will carry forward in my future projects.

As I wrap up this challenge, I feel more confident in my ability to work with cloud services, manage infrastructure efficiently, and automate essential processes for smoother development and deployment. I am grateful for this learning opportunity, and I look forward to applying these skills in my future endeavors. The Cloud Resume Challenge has undoubtedly been a rewarding and enriching experience, and I’m excited to continue my journey as a cloud developer.
