AWS CodeStar with Fargate

Pooja G Bhat
12 min read · Jun 29, 2021


What’s in the document?

  • Introduction to AWS CodeStar
  • Architecture advantages
  • Creating a simple CodeStar project
  • Integrating CodeStar with ECS Fargate

What is AWS CodeStar?

AWS CodeStar is a service that enables you to quickly develop, build, and deploy projects on AWS. An AWS CodeStar project creates and integrates AWS services for your project's development toolchain. The toolchain consists of source control, build, and deploy stages, which can be configured through a YAML project template that is part of the CodeStar project structure.

A step towards NoOps from DevOps

😃 Architecture Advantages

  1. Fully automated source, build, and deploy environment within minutes.
  2. Automated resource creation with auto scaling, load balancing, VPC and security group creation, health checks, CloudWatch, etc.
  3. Pre-built code templates — CodeStar gives you the option to choose from many pre-built templates that come with sample applications. These can run in any of the following deployment environments: self-managed (Amazon EC2 with AWS CodeDeploy), managed (AWS Elastic Beanstalk), or serverless (AWS Lambda).
  4. Speed to prod — in a matter of minutes.
  5. Templates are available for C#, Go, HTML5, Java, Node.js, PHP, Python, and Ruby.
  6. Very little DevOps experience is required for most environments. A step towards NoOps from DevOps.
  7. Integrates with multiple managed services. When you create a project through CodeStar, AWS automatically provisions the underlying resources, such as an AWS Elastic Beanstalk environment, Amazon EC2 instances, Amazon S3 buckets, and an AWS CodeCommit repository.
  8. Built-in integration with Jira.

Is this service right for my team and project?

Yes, if:

  • You are starting a new project and want to quickly set up a software development project on AWS, whether with a full set of tools for a team-based project or just a trial project with a source repository
  • Using CodeStar with ECS lets you run the application in lightweight containers. With ECS, you can schedule multiple containers on the same node, allowing you to achieve high density on EC2
  • A fully automated CI/CD process is required
  • You need to build a POC for a client pitch that requires a quick application setup

No, if:

  • A migration from or to another cloud provider is planned
  • Limited language and framework templates — though popular languages like Java and Python are supported, less common languages and frameworks (Elixir, for example) have no template support
  • The application is large scale, with servers spread across the globe
  • The team has a dedicated DevOps engineer for the project
  • The technology stack of the project is not supported by CodeStar.

A few things to note

  • Source code repository integration is limited to AWS CodeCommit and GitHub
  • Understanding the CloudFormation template, ALB, security groups, etc. takes some time
  • Migration to other cloud providers is not straightforward; the YAML template is specific to AWS services

CodeStar is being actively developed, and AWS has been pushing the service since August 2020. CodeStar integration with ECS is currently not available in the console, but we can expect more frequent updates to CodeStar's features and the services it supports.

Let's create a CodeStar project first!

  1. We will first create the CodeStar project in the AWS console. Go to the CodeStar service in the AWS console and click on Create project.
  2. Select one of the templates provided by CodeStar based on your requirements. AWS provides multiple filters for an easier selection process: AWS service type, application type, and programming language. We are selecting the following — AWS service: Elastic Beanstalk, Application type: Web service, Programming language: Java.
  3. After selecting the template, provide the project name, repository details, and EC2 configuration. Choose the EC2 instance type, VPC, subnet, and key pair details, then click Next. Review the project setup and click Create project. The project setup is complete!! It will take a few minutes to launch the application.
  4. Select the Pipeline tab in the CodeStar project to see an overview of the sequential stages of the application deployment lifecycle. It mainly has three stages: Source, Build, and Deploy.

Steps to integrate AWS CodeStar with ECS and Fargate

Source stage: We will update the YAML (template.yml) file and the buildspec file. The YAML file is an alternative to the console for adding stages and action items and configuring the project. We also add a Dockerfile to build the image.

Build stage: We are going to build the Docker image and push it to AWS ECR.
Deploy stage: Deploy the Docker image on ECS and run the application.

We will look at the changes for each of these three stages.

1. Build stage changes

  • Open CodePipeline → Build → Build projects → your project → Edit Environment → Override image → enable the Privileged flag. The privileged flag is required to build Docker images.
  • PS: Uncheck "Allow AWS CodeBuild to modify this service role so it can be used with this build project" if you get the error "Role trusts too many services".
https://cdn-images-1.medium.com/max/800/1*o8SsQHTF3CCcSPKUEvT_gQ.png
  • Scroll down to the end to find Environment variables. Add any additional keys here if you don't want to expose those values in the project's buildspec file.

Added keys:
DOCKER_REPOSITORY_URI = <account-id>.dkr.ecr.us-west-2.amazonaws.com
REPOSITORY_NAME = <ecr-repository-name>

  • The build role is named in the format CodeStarWorker-<project-id>-ToolChain. Create an inline policy named AmazonEC2ContainerRegistryPowerUser for the build role (copy the JSON content from the AmazonEC2ContainerRegistryPowerUser managed policy).
  • CodeStar adds a permission boundary to the build role by default, so the policy in the permission boundary must also be updated to allow the ECR actions (docker login and push) used in the build stage.
  • Open the permission boundary: IAM → Permissions → CodeStar_<project-id>_PermissionsBoundary → Edit policy → switch to JSON mode → append the AmazonEC2ContainerRegistryPowerUser actions as a new statement:
{
  ...existing permission boundary statements...
},
{
  "Sid": "7",
  "Effect": "Allow",
  "Action": [
    "ecr:GetAuthorizationToken",
    "ecr:BatchCheckLayerAvailability",
    "ecr:GetDownloadUrlForLayer",
    "ecr:GetRepositoryPolicy",
    "ecr:DescribeRepositories",
    "ecr:ListImages",
    "ecr:DescribeImages",
    "ecr:BatchGetImage",
    "ecr:GetLifecyclePolicy",
    "ecr:GetLifecyclePolicyPreview",
    "ecr:ListTagsForResource",
    "ecr:DescribeImageScanFindings",
    "ecr:InitiateLayerUpload",
    "ecr:UploadLayerPart",
    "ecr:CompleteLayerUpload",
    "ecr:PutImage"
  ],
  "Resource": ["*"]
}
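
The same role changes can also be scripted with the AWS CLI instead of the console. This is a rough sketch: the role and policy names follow the article's conventions, while the local file names (ecr-poweruser.json holding the managed-policy JSON, boundary.json holding the edited boundary document) and the <account-id>/<project-id> placeholders are assumptions to fill in.

# Attach the ECR actions as an inline policy on the build (toolchain) role
aws iam put-role-policy \
  --role-name CodeStarWorker-<project-id>-ToolChain \
  --policy-name AmazonEC2ContainerRegistryPowerUser \
  --policy-document file://ecr-poweruser.json

# Publish the edited permission boundary as the new default version
# (IAM keeps at most five versions of a managed policy; delete an old one first if needed)
aws iam create-policy-version \
  --policy-arn arn:aws:iam::<account-id>:policy/CodeStar_<project-id>_PermissionsBoundary \
  --policy-document file://boundary.json \
  --set-as-default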

2. Deploy stage changes

CodeStar uses CloudFormation templates to create, update, and delete resources.

Role name: CodeStarWorker-<project-name>-CloudFormation

  • Add the following managed policies to the above role: ElasticLoadBalancingFullAccess and CloudWatchLogsFullAccess.
  • Add an inline policy for ECS, IAM, and Application Auto Scaling to the CloudFormation role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "ecs:PutAttributes",
        "ecs:UpdateCluster",
        "ecs:ListAttributes",
        "ecs:StartTask",
        "ecs:DeleteAttributes",
        "ecs:DescribeTaskDefinition",
        "ecs:DeregisterTaskDefinition",
        "ecs:ListServices",
        "ecs:UpdateService",
        "iam:PassRole",
        "ecs:CreateService",
        "ecs:RunTask",
        "ecs:RegisterTaskDefinition",
        "application-autoscaling:DescribeScalingActivities",
        "ecs:StopTask",
        "ecs:DescribeServices",
        "ecs:DescribeContainerInstances",
        "ecs:DescribeTasks",
        "ecs:ListTaskDefinitions",
        "ecs:UpdateTaskSet",
        "ecs:CreateTaskSet",
        "ecs:ListClusters",
        "ecs:UpdateClusterSettings",
        "application-autoscaling:RegisterScalableTarget",
        "ecs:CreateCluster",
        "application-autoscaling:DescribeScalableTargets",
        "ecs:DeleteService",
        "ecs:DeleteCluster",
        "ecs:DeleteTaskSet",
        "application-autoscaling:DeleteScalingPolicy",
        "ecs:DescribeClusters",
        "application-autoscaling:DescribeScalingPolicies",
        "application-autoscaling:PutScalingPolicy",
        "ecs:ListContainerInstances",
        "application-autoscaling:DeregisterScalableTarget"
      ],
      "Resource": "*"
    }
  ]
}
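
If you prefer the CLI here as well, the equivalent commands look roughly like this; the role name is as above, while the inline policy name (ecs-deploy-inline) and the local file ecs-deploy-policy.json holding the JSON above are assumed names.

aws iam attach-role-policy \
  --role-name CodeStarWorker-<project-name>-CloudFormation \
  --policy-arn arn:aws:iam::aws:policy/ElasticLoadBalancingFullAccess
aws iam attach-role-policy \
  --role-name CodeStarWorker-<project-name>-CloudFormation \
  --policy-arn arn:aws:iam::aws:policy/CloudWatchLogsFullAccess
aws iam put-role-policy \
  --role-name CodeStarWorker-<project-name>-CloudFormation \
  --policy-name ecs-deploy-inline \
  --policy-document file://ecs-deploy-policy.json
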
  • Override the parameters in the deploy stage. This is helpful if you want to create pipelines for multiple branches in the CodeStar project with the same template file, or if you don't want to expose sensitive data in the YAML file. Go to the pipeline → open <project-id>-Pipeline → Edit Deploy → scroll down, click Advanced → Parameter overrides.

Append the values below to the JSON to override the parameters of the YAML file. We are using container port 3003 for production; update accordingly.


{
  ...existing key-value pairs...,
  "SubnetB": "subnet-<id>",
  "Image": "<account-id>.dkr.ecr.us-west-2.amazonaws.com/poc-spring-app:latest",
  "Stage": "production",
  "TargetGroupARN": "arn:aws:elasticloadbalancing:us-west-2:<account-id>:targetgroup/codestar-tg-production/<id>",
  "ContainerPort": "3003",
  "LoadBalancerSecurityGroupId": "sg-<security-group-id>"
}
  • Add the above values and save the pipeline.

3. Source stage changes

Now we will look into the code changes.

  • Add a Dockerfile in the root directory, alongside the buildspec file and the YAML file. The port exposed in the Dockerfile should be the same as the container port mentioned in the template file (a sample Dockerfile is shown after this list).
  • Add a health-check endpoint that reports the application's status. The same endpoint URL must be specified in the template file (a sample controller is also shown after this list).
  • Update the pom file: change the packaging to jar
(<packaging>jar</packaging>)
  • Update the build plugins; the Spring Boot Maven plugin repackages the application as an executable jar that goes into the Docker image.
<build>
  <plugins>
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
    </plugin>
  </plugins>
</build>
https://cdn-images-1.medium.com/max/800/1*kvJXaN8HFDjAYWYwvlLuTw.png
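
For reference, a Dockerfile.production along these lines works for the Spring Boot jar produced by the build; the base image, jar path, and Java version here are assumptions to adapt, and the exposed port must match the ContainerPort override (3003 for our production stage).

FROM openjdk:11-jre-slim
# Jar produced by the spring-boot-maven-plugin during the build stage (path is an assumption)
COPY target/*.jar /app.jar
# Should match the ContainerPort passed to the ECS task definition
EXPOSE 3003
ENTRYPOINT ["java", "-jar", "/app.jar"]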
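
A minimal health-check endpoint for the Spring Boot application might look like the following, assuming the default HealthCheckPath of /health from the template (the package and class names are illustrative only).

package com.example.demo; // illustrative package name

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class HealthCheckController {

    // The ALB target group polls this path; it must match HealthCheckPath in template.yml
    @GetMapping("/health")
    public String health() {
        return "OK";
    }
}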

Points to note in the template.yml file:

  • The ECS service has to wait for the listeners to be created first, otherwise service creation will fail. Add a dependency to the service resource: DependsOn: - ListenerHTTP
  • We have already created a few of the resources (task role, autoscaling role, etc.) earlier and used their ARNs in the YAML file.
  • Resource creation for the cluster, listeners, ALB, etc. for production and development will be shown in the next article; all of these resources are also created using a CloudFormation template.
  • We have a container security group for each environment. The container port should be the same as the port exposed in the Dockerfile.
    Sample template.yml file -
AWSTemplateFormatVersion: 2010-09-09
Transform:
- AWS::CodeStar
Conditions:
  UseSubnet: !Not [!Equals [!Ref 'SubnetId', subnet-none]]
  isProduction: !Equals [!Ref Stage, 'production']
  isDevelopment: !Not [!Equals [!Ref Stage, 'production']]
Parameters:
  ProjectId:
    Type: String
    Description: AWS CodeStar project ID used to name project resources and create roles.
  InstanceType:
    Type: String
    Description: The type of Amazon EC2 Linux instances that will be launched for this project.
  KeyPairName:
    Type: String
    Description: The name of an existing Amazon EC2 key pair in the region where the project is created, which you can use to SSH into the new Amazon EC2 Linux instances.
  VpcId:
    Type: String
    Description: The ID of the Amazon Virtual Private Cloud (VPC) used for the new Amazon EC2 Linux instances.
  SubnetId:
    Type: String
    Description: The name of the VPC subnet used for the new Amazon EC2 Linux instances launched for this project.
  SolutionStackName:
    Type: String
    Description: The software stack used to launch environments and configure instances in AWS Elastic Beanstalk.
  EBTrustRole:
    Type: String
    Description: The service role in IAM for AWS Elastic Beanstalk to be created for this project.
  EBInstanceProfile:
    Type: String
    Description: The IAM role that will be created for the Amazon EC2 Linux instances.
  Stage:
    Type: String
    Description: The name for a project pipeline stage, such as Staging or Prod, for which resources are provisioned and deployed.
    AllowedValues:
    - dev
    - integration
    - staging
    - production
    Default: dev
  SubnetB:
    Type: AWS::EC2::Subnet::Id
  ServiceName:
    Type: String
    Default: codestar
  ContainerPort:
    Type: Number
  HealthCheckPath:
    Type: String
    Default: /health
  MinContainers:
    Type: Number
    Default: 1
  MaxContainers:
    Type: Number
    Default: 10
  AutoScalingTargetValue:
    Type: Number
    Default: 50
  LoadBalancerPortHTTP:
    Type: Number
    Default: 80
  LoadBalancerPortHTTPS:
    Type: Number
    Default: 443
  HealthCheckIntervalSeconds:
    Type: Number
    Default: 30
  CPU:
    Type: String
    Description: The number of CPU units
    Default: 256
    AllowedValues:
    - 256
    - 512
    - 1024
    - 2048
    - 4096
  Memory:
    Type: String
    Description: The amount of memory used by the task
    Default: 0.5GB
    AllowedValues:
    - 0.5GB
    - 1GB
    - 2GB
    - 3GB
    - 4GB
    - 5GB
    - 6GB
  ExecutionRole:
    Type: String
    Description: Role needed by ECS and containers
    Default: arn:aws:iam::<id>:role/poc-ecs-task-role
  AutoscalingRole:
    Type: String
    Description: Role needed for autoscaling
    Default: arn:aws:iam::<id>:role/aws-service-role/ecs.application-autoscaling.amazonaws.com/AWSServiceRoleForApplicationAutoScaling_ECSService
  ClusterARN:
    Type: String
    Description: Development cluster ARN
    Default: arn:aws:ecs:us-west-2:<id>:cluster/codestar-development
  Image:
    Type: String
    # Override the value in the deploy stage of the codepipeline
    Description: Docker repository ARN with tag.
  TargetGroupARN:
    Type: String
    # Override the value in the deploy stage of the codepipeline
    Description: Target group ARN for the service
  LoadBalancerSecurityGroupId:
    Type: String
    # Override the value in the deploy stage of the codepipeline
    Description: Security group ID of the load balancer
Resources:
  Cluster:
    Type: AWS::ECS::Cluster
    # Production ECS cluster
    Condition: isProduction
    Properties:
      ClusterName: !Join ['-', [!Ref ServiceName, cluster, production]]
  TaskDefinition:
    Type: AWS::ECS::TaskDefinition
    DependsOn: LogGroup
    Properties:
      Family: !Join ['-', [!Ref ServiceName, td, !Ref Stage]]
      NetworkMode: awsvpc
      RequiresCompatibilities:
      - FARGATE
      Cpu: !Ref CPU
      Memory: !Ref Memory
      ExecutionRoleArn: !Ref ExecutionRole
      TaskRoleArn: !Ref ExecutionRole
      ContainerDefinitions:
      - Name: !Ref ServiceName
        Image: !Ref Image
        PortMappings:
        - ContainerPort: !Ref ContainerPort
        # Send logs to CloudWatch Logs
        LogConfiguration:
          LogDriver: awslogs
          Options:
            awslogs-region: !Ref AWS::Region
            awslogs-group: !Ref LogGroup
            awslogs-stream-prefix: ecs
  ContainerSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupName: !Join ['-', [!Ref ServiceName, sg, !Ref Stage]]
      GroupDescription: !Join [' ', [!Ref ServiceName, sg, !Ref Stage]]
      VpcId: !Ref VpcId
      SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: !Ref LoadBalancerPortHTTP
        ToPort: !Ref LoadBalancerPortHTTP
        CidrIp: 0.0.0.0/0
      - IpProtocol: tcp
        FromPort: !Ref LoadBalancerPortHTTP
        ToPort: !Ref LoadBalancerPortHTTP
        CidrIpv6: ::/0
      - IpProtocol: tcp
        FromPort: !Ref ContainerPort
        ToPort: !Ref ContainerPort
        SourceSecurityGroupId: !Ref LoadBalancerSecurityGroupId
  productionService:
    Type: AWS::ECS::Service
    # This dependency is needed so that the load balancer is set up correctly in time
    Condition: isProduction
    Properties:
      ServiceName: !Join ['-', [!Ref ServiceName, service, production]]
      Cluster: !Ref Cluster
      TaskDefinition: !Ref TaskDefinition
      DeploymentConfiguration:
        MinimumHealthyPercent: 100
        MaximumPercent: 200
      DesiredCount: 1
      # This may need to be adjusted if the container takes a while to start up
      HealthCheckGracePeriodSeconds: !Ref HealthCheckIntervalSeconds
      LaunchType: FARGATE
      NetworkConfiguration:
        AwsvpcConfiguration:
          # Change to DISABLED if you're using private subnets that have access to a NAT gateway
          AssignPublicIp: ENABLED
          Subnets:
          - !Ref SubnetId
          - !Ref SubnetB
          SecurityGroups:
          - !Ref ContainerSecurityGroup
      LoadBalancers:
      - ContainerName: !Ref ServiceName
        ContainerPort: !Ref ContainerPort
        TargetGroupArn: !Ref TargetGroupARN
  developmentService:
    Type: AWS::ECS::Service
    # This dependency is needed so that the load balancer is set up correctly in time
    Condition: isDevelopment
    Properties:
      ServiceName: !Join ['-', [!Ref ServiceName, service, !Ref Stage]]
      Cluster: !Ref ClusterARN
      TaskDefinition: !Ref TaskDefinition
      DeploymentConfiguration:
        MinimumHealthyPercent: 100
        MaximumPercent: 200
      DesiredCount: 1
      # This may need to be adjusted if the container takes a while to start up
      HealthCheckGracePeriodSeconds: !Ref HealthCheckIntervalSeconds
      LaunchType: FARGATE
      NetworkConfiguration:
        AwsvpcConfiguration:
          # Change to DISABLED if you're using private subnets that have access to a NAT gateway
          AssignPublicIp: ENABLED
          Subnets:
          - !Ref SubnetId
          - !Ref SubnetB
          SecurityGroups:
          - !Ref LoadBalancerSecurityGroupId
      LoadBalancers:
      - ContainerName: !Ref ServiceName
        ContainerPort: !Ref ContainerPort
        TargetGroupArn: !Ref TargetGroupARN
  LogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: !Join ['/', [/ecs, !Ref ServiceName, log-group, !Ref Stage]]
  AutoScalingTarget:
    Type: AWS::ApplicationAutoScaling::ScalableTarget
    Properties:
      MinCapacity: !Ref MinContainers
      MaxCapacity: !Ref MaxContainers
      ResourceId: !Join ['/', [service, !If [isProduction, !Ref Cluster, !Ref ClusterARN], !If [isProduction, !GetAtt productionService.Name, !GetAtt developmentService.Name]]]
      ScalableDimension: ecs:service:DesiredCount
      ServiceNamespace: ecs
      # "The Amazon Resource Name (ARN) of an AWS Identity and Access Management (IAM) role that allows Application Auto Scaling to modify your scalable target."
      RoleARN: !Ref AutoscalingRole
  AutoScalingPolicy:
    Type: AWS::ApplicationAutoScaling::ScalingPolicy
    Properties:
      PolicyName: !Join ['-', [!Ref ServiceName, AutoScalingPolicy, !Ref Stage]]
      PolicyType: TargetTrackingScaling
      ScalingTargetId: !Ref AutoScalingTarget
      TargetTrackingScalingPolicyConfiguration:
        PredefinedMetricSpecification:
          PredefinedMetricType: ECSServiceAverageCPUUtilization
        ScaleInCooldown: 10
        ScaleOutCooldown: 10
        TargetValue: !Ref AutoScalingTargetValue
  • Update the buildspec file to build the Docker image and push it to ECR. A sample buildspec.yml is given below:
version: 0.2
phases:
  install:
    commands:
      # Upgrade AWS CLI to the latest version
      - pip install --upgrade awscli
      - nohup /usr/local/bin/dockerd --host=unix:///var/run/docker.sock --host=tcp://127.0.0.1:2375 --storage-driver=overlay2&
      - timeout 15 sh -c 'until docker info; do echo .; sleep 1; done'
  pre_build:
    commands:
      - mvn clean compile test
      - echo Logging in to Amazon ECR....
      - aws --version
      - aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin $DOCKER_REPOSITORY_URI
      - REPOSITORY_URI=$DOCKER_REPOSITORY_URI/$REPOSITORY_NAME
      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - IMAGE_TAG=${COMMIT_HASH:=latest}
  build:
    commands:
      - mvn war:exploded
      - echo Build started on 'date'
      - echo Building the Docker image...
      - docker build -f Dockerfile.production -t $REPOSITORY_URI:latest .
      - docker tag $REPOSITORY_URI:latest $REPOSITORY_URI:$IMAGE_TAG
  post_build:
    commands:
      - echo Build completed on 'date'
      - echo Pushing the Docker image...
      - docker push $REPOSITORY_URI:latest
      - docker push $REPOSITORY_URI:$IMAGE_TAG
      - echo Writing image definitions file...
      - cp -r .ebextensions/ target/ROOT/
      - aws cloudformation package --template template.yml --s3-bucket $S3_BUCKET --output-template-file template-export.yml
      # Do not remove this statement. This command is required for AWS CodeStar projects.
      # Update the AWS Partition, AWS Region, account ID and project ID in the project ARN on template-configuration.json file so AWS CloudFormation can tag project resources.
      - sed -i.bak 's/\$PARTITION\$/'${PARTITION}'/g;s/\$AWS_REGION\$/'${AWS_REGION}'/g;s/\$ACCOUNT_ID\$/'${ACCOUNT_ID}'/g;s/\$PROJECT_ID\$/'${PROJECT_ID}'/g' template-configuration.json
artifacts:
  files:
    - 'template-export.yml'
    - 'template-configuration.json'

Now push the code changes to the branch, which will trigger the pipeline. In the build stage the image is pushed to ECR, and in the deploy stage the Docker image from ECR is deployed on ECS.

Open the load balancer, copy the DNS address, and hit the URL in a browser.
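
You can also verify from a terminal first by curling the health endpoint (this assumes the /health path added earlier and the HTTP listener on port 80):

curl http://<alb-dns-name>/health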

https://cdn-images-1.medium.com/max/800/1*C5qtIL0BM4fjvoct0aQedA.png

Hurrah!! The CodeStar application is successfully deployed on ECS with Fargate. 🥳

💲Cost Benefits — Case study

There is no additional charge for using AWS CodeStar; you pay only for the resources and services your application uses, e.g. EC2 instances, ECR, ECS, SQS, SNS, etc. Refer to https://aws.amazon.com/codestar/pricing/

Conclusion

In this blog you have seen how easily we can create and manage an entire CI/CD pipeline for application development using AWS CodeStar, and deploy the application on containers with ECS, where a commit or code change passes through automated stage gates all the way from building and testing to deployment.

In the next blog, we will see how to create pipelines for each environment (dev, staging) using the same YAML file, and how the CI/CD pipeline is triggered when there is a change in the source branch.
