What’s in the document?

  • Introduction to AWS CodeStar
  • The architecture advantages
  • Creating a simple CodeStar project
  • Integrating CodeStar with ECS Fargate

What is AWS CodeStar?

AWS CodeStar is a service that enables you to quickly develop, build, and deploy applications on AWS. An AWS CodeStar project creates and integrates the AWS services needed for your development toolchain. The toolchain consists of source, build, and deploy stages, which can be configured through a project template in YAML format that is part of the CodeStar project structure.

A step towards NoOps from DevOps

😃 Architecture Advantages

  1. A fully automated source, build, and deploy environment within minutes.
  2. Automated resource creation with autoscaling, load balancing, VPC and security group creation, health checks, CloudWatch, etc.
  3. Pre-built code templates: CodeStar gives you the option to choose from many pre-built templates that come with sample applications. These applications can run on any of the following deployment environments:
     Self-managed: Amazon EC2 with AWS CodeDeploy
     Managed: AWS Elastic Beanstalk
     Serverless: AWS Lambda
  4. Speed to production: a matter of minutes.
  5. Templates support the C#, Go, HTML5, Java, Node.js, PHP, Python, and Ruby languages.
  6. Very little DevOps experience is required for most environments. A step towards NoOps from DevOps.
  7. Integrates with multiple managed services. When you create a project through CodeStar, AWS automatically provisions the underlying resources, such as an AWS Elastic Beanstalk environment, Amazon EC2 instances, Amazon S3 buckets, and an AWS CodeCommit repository.
  8. Built-in integration with Jira.

Is this service right for my team and project?

Yes, if

  • You are starting a new project. CodeStar is useful when you want to quickly set up a software development project on AWS, whether you're starting with a full set of tools for a team-based project or only setting up a trial project with a source repository.
  • You want containers. Using CodeStar with ECS lets you run your application in lightweight containers; with ECS, you can schedule multiple containers on the same node, achieving high density on EC2.
  • A fully automated CI/CD process is required.
  • You need to perform a POC for a client pitch that requires a quick application setup.

No, if

  • A migration from or to another cloud provider is planned.
  • Language and framework templates are limited. Although popular languages like Java and Python are supported, less common choices such as Golang and Elixir have no template support.
  • The application is large-scale, with servers spread across the globe.
  • The team has a dedicated DevOps engineer for the project.
  • The technology stack of the project is not supported by CodeStar.

A few things to note

  • Source repository integration is limited to AWS CodeCommit and GitHub.
  • Understanding the CloudFormation template, ALB, security groups, etc. takes some time.
  • Migration to other cloud service providers is not possible; the YAML file is specific to AWS services.

CodeStar is being developed actively, and AWS has been pushing the service since August 2020. CodeStar integration with ECS is currently not available in the console, but we can expect more frequent updates to CodeStar's features and the services it supports.

Let's create the CodeStar project first!

  1. We will first create the CodeStar project in the AWS console. Go to the CodeStar service in the AWS console and click Create project.
  2. Select one of the templates provided by CodeStar based on your requirements. AWS provides multiple filters for an easier selection: AWS service type, application type, and programming language. We are selecting the following: AWS service: Elastic Beanstalk, Application type: Web service, Programming language: Java.
  3. After selecting the template, provide the project name, repository details, and EC2 configuration. Choose the EC2 instance type, VPC, subnet, and key pair details, then click Next. Review the project setup and click Create project. The project setup is complete!! It will take a few minutes to launch the application.
  4. Select the Pipeline tab in the CodeStar project to see an overview of the sequential stages of the application deployment life cycle. It mainly has three stages: Source, Build, and Deploy.

Steps to integrate AWS CodeStar with ECS and Fargate

Source stage: We will update the YAML (template.yml) file and the buildspec file. The YAML file is an alternative to the console for adding stages and action items and configuring the project. We will also add a Dockerfile to build the image.

Build stage: We will build the Docker image and push it to Amazon ECR.
Deploy stage: We will deploy the Docker image on ECS and run the application.

We will look at the changes in all three stages.

1. Build stage changes

  • Open CodePipeline → Build → Build projects → your project → Edit environment → Override image → enable the Privileged flag. The privileged flag is required if you want to build Docker images.
  • PS: Uncheck "Allow AWS CodeBuild to modify this service role so it can be used with this build project" if you get the error "Role trusts too many services".
  • Scroll down to the end to find Environment variables. Add any additional keys here if you don't want to expose those values in the project's buildspec file.

We added the key DOCKER_REPOSITORY_URI with value <account-id>.dkr.ecr.us-west-2.amazonaws.com
and the key REPOSITORY_NAME with value <ecr-repository-name>

  • The build role is named in the format CodeStarWorker-<project-id>-ToolChain. Create an inline policy for the build role with the permissions of AmazonEC2ContainerRegistryPowerUser (copy the JSON content from the AmazonEC2ContainerRegistryPowerUser managed policy).
  • CodeStar by default adds a permissions boundary to the build role, so the boundary policy must also be updated to allow the Docker login in the build stage. Update the policy in the permissions boundary to allow the AmazonEC2ContainerRegistryPowerUser actions.
  • Now open the permissions boundary: IAM → Permissions → open CodeStar_<project-id>_PermissionsBoundary → Edit policy → switch to JSON mode → allow the AmazonEC2ContainerRegistryPowerUser actions.
…existing permission boundary statements…,
{
    "Sid": "7",
    "Effect": "Allow",
    "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:InitiateLayerUpload",
        "ecr:UploadLayerPart",
        "ecr:CompleteLayerUpload",
        "ecr:PutImage"
    ],
    "Resource": "*"
}

(The action list above is an abridged copy of the AmazonEC2ContainerRegistryPowerUser managed policy; copy the full JSON from that policy.)
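To make the build-stage variables concrete, here is a small shell sketch of how the final Docker image URI is composed from those environment variables (the account ID and repository name below are placeholder values, not ones from this project):

```shell
# Placeholder values; in CodeBuild these come from the environment
# variables configured on the build project.
DOCKER_REPOSITORY_URI="123456789012.dkr.ecr.us-west-2.amazonaws.com"
REPOSITORY_NAME="codestar-app"

# Full image URI used later for docker build and docker push
IMAGE_URI="${DOCKER_REPOSITORY_URI}/${REPOSITORY_NAME}:latest"
echo "${IMAGE_URI}"
```

Keeping the registry host and repository name in separate variables lets the same buildspec serve multiple pipelines.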

2. Deploy stage changes

CodeStar uses CloudFormation templates to create, update, and delete resources.

Role name: CodeStarWorker-<project-name>-CloudFormation

  • Add the following managed policies to the above role: ElasticLoadBalancingFullAccess and CloudWatchLogsFullAccess.
  • Add an inline policy for ECS, IAM, and Application Auto Scaling to the CloudFormation role. The action list below is illustrative; scope it to what your stack needs:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "ecs:*",
                "application-autoscaling:*",
                "iam:PassRole"
            ],
            "Resource": "*"
        }
    ]
}
  • Override the parameters in the deploy stage. This is helpful if you want to create pipelines for multiple branches in the CodeStar project with the same template file, or if you don't want to expose sensitive data in the YAML file. Go to the pipeline → open <project-id>-Pipeline → Edit Deploy → scroll down and click Advanced → Parameter overrides.

Append the values below to the JSON to override the parameters for the YAML file. We are using container port 3003 for production; update accordingly.

..existing key-value pairs.. ,
"TargetGroupARN": "arn:aws:elasticloadbalancing:us-west-2:<account-id>:targetgroup/codestar-tg-production/<id>",
  • Add the above values and save the pipeline.
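For illustration, a complete parameter-overrides object might look like the following. All values are placeholders, and the Image and LoadBalancerSecurityGroupId keys are assumptions based on the template parameters marked for override in template.yml:

```json
{
  "Image": "<account-id>.dkr.ecr.us-west-2.amazonaws.com/<ecr-repository-name>:latest",
  "ContainerPort": "3003",
  "TargetGroupARN": "arn:aws:elasticloadbalancing:us-west-2:<account-id>:targetgroup/codestar-tg-production/<id>",
  "LoadBalancerSecurityGroupId": "<loadbalancer-security-group-id>"
}
```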

3. Source stage changes

Now we will look at the code changes.

  • Add a Dockerfile in the root directory, alongside the buildspec file and the YAML file. The port exposed in the Dockerfile should be the same as the container port mentioned in the template file.
  • Add a health-check endpoint that reports the application status. The same endpoint URL must be specified in the template file.
  • Update the pom file: change the packaging to jar
( <packaging>jar</packaging>)
  • Update the plugins to build the Docker image.
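As a minimal sketch, the Dockerfile for the Java service might look like this. The base image, jar name, and port are assumptions; match the exposed port to the ContainerPort parameter in template.yml:

```dockerfile
# Minimal illustrative Dockerfile (base image and jar name are placeholders)
FROM amazoncorretto:11
WORKDIR /app
# The jar produced by `mvn package` after switching <packaging> to jar
COPY target/app.jar app.jar
# Must match the ContainerPort value in template.yml (3003 for production here)
EXPOSE 3003
ENTRYPOINT ["java", "-jar", "app.jar"]
```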

Points to note in the template.yml file:

  • For the ECS service, you have to wait for the listeners to be created first; otherwise the service will fail. Add the following lines to the service resource: DependsOn: - ListenerHTTP
  • We have already created a few of the resources, such as the task role and the autoscaling role, and used their ARNs in the YAML file.
  • Resource creation for the cluster, listeners, ALB, etc. for production and development will be shown in the next article. All of these resources are created using a CloudFormation template.
  • We have a container security group for each environment. The container port should be the same as the port mentioned in the Dockerfile.
    Sample template.yml file -
AWSTemplateFormatVersion: 2010-09-09
Transform:
- AWS::CodeStar

# Note: parameter and resource names that are not referenced via !Ref
# elsewhere in this template are illustrative.
Conditions:
  UseSubnet: !Not [!Equals [!Ref 'SubnetId', subnet-none]]
  isProduction: !Equals [!Ref Stage, 'production']
  isDevelopment: !Not [!Equals [!Ref Stage, 'production']]

Parameters:
  ProjectId:
    Type: String
    Description: AWS CodeStar project ID used to name project resources and create roles.
  InstanceType:
    Type: String
    Description: The type of Amazon EC2 Linux instances that will be launched for this project.
  KeyPairName:
    Type: String
    Description: The name of an existing Amazon EC2 key pair in the region where the project is created, which you can use to SSH into the new Amazon EC2 Linux instances.
  VpcId:
    Type: String
    Description: The ID of the Amazon Virtual Private Cloud (VPC) used for the new Amazon EC2 Linux instances.
  SubnetId:
    Type: String
    Description: The name of the VPC subnet used for the new Amazon EC2 Linux instances launched for this project.
  SolutionStackName:
    Type: String
    Description: The software stack used to launch environments and configure instances in AWS Elastic Beanstalk.
  EBServiceRole:
    Type: String
    Description: The service role in IAM for AWS Elastic Beanstalk to be created for this project.
  EC2InstanceRole:
    Type: String
    Description: The IAM role that will be created for the Amazon EC2 Linux instances.
  Stage:
    Type: String
    Description: The name for a project pipeline stage, such as Staging or Prod, for which resources are provisioned and deployed.
    AllowedValues:
    - dev
    - integration
    - staging
    - production
    Default: dev
  SubnetB:
    Type: AWS::EC2::Subnet::Id
  ServiceName:
    Type: String
    Default: codestar
  ContainerPort:
    Type: Number
  HealthCheckPath:
    Type: String
    Default: /health
  MinContainers:
    Type: Number
    Default: 1
  MaxContainers:
    Type: Number
    Default: 10
  AutoScalingTargetValue:
    Type: Number
    Default: 50
  LoadBalancerPortHTTP:
    Type: Number
    Default: 80
  LoadBalancerPortHTTPS:
    Type: Number
    Default: 443
  HealthCheckIntervalSeconds:
    Type: Number
    Default: 30
  CPU:
    Type: String
    Description: The number of CPU units
    Default: 256
    AllowedValues:
    - 256
    - 512
    - 1024
    - 2048
    - 4096
  Memory:
    Type: String
    Description: The amount of memory used by the task
    Default: 0.5GB
    AllowedValues:
    - 0.5GB
    - 1GB
    - 2GB
    - 3GB
    - 4GB
    - 5GB
    - 6GB
  ExecutionRole:
    Type: String
    Description: Role needed by ECS and containers
    Default: arn:aws:iam::<id>:role/poc-ecs-task-role
  AutoscalingRole:
    Type: String
    Description: Role needed for autoscaling
    Default: arn:aws:iam::<id>:role/aws-service-role/ecs.application-autoscaling.amazonaws.com/AWSServiceRoleForApplicationAutoScaling_ECSService
  ClusterARN:
    Type: String
    Description: Development cluster ARN
    Default: arn:aws:ecs:us-west-2:<id>:cluster/codestar-development
  Image:
    Type: String
    # Override the value in the deploy stage of the codepipeline
    Description: Docker repository ARN with tag.
  TargetGroupARN:
    Type: String
    # Override the value in the deploy stage of the codepipeline
    Description: Target group ARN for the service
  LoadBalancerSecurityGroupId:
    Type: String
    # Override the value in the deploy stage of the codepipeline
    Description: Security group Id of the loadbalancer

Resources:
  # Production ECS cluster
  Cluster:
    Type: AWS::ECS::Cluster
    Condition: isProduction
    Properties:
      ClusterName: !Join ['-', [!Ref ServiceName, cluster, production]]
  TaskDefinition:
    Type: AWS::ECS::TaskDefinition
    DependsOn: LogGroup
    Properties:
      Family: !Join ['-', [!Ref ServiceName, td, !Ref Stage]]
      NetworkMode: awsvpc
      RequiresCompatibilities:
      - FARGATE
      Cpu: !Ref CPU
      Memory: !Ref Memory
      ExecutionRoleArn: !Ref ExecutionRole
      TaskRoleArn: !Ref ExecutionRole
      ContainerDefinitions:
      - Name: !Ref ServiceName
        Image: !Ref Image
        PortMappings:
        - ContainerPort: !Ref ContainerPort
        # Send logs to CloudWatch Logs
        LogConfiguration:
          LogDriver: awslogs
          Options:
            awslogs-region: !Ref AWS::Region
            awslogs-group: !Ref LogGroup
            awslogs-stream-prefix: ecs
  ContainerSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupName: !Join ['-', [!Ref ServiceName, sg, !Ref Stage]]
      GroupDescription: !Join [' ', [!Ref ServiceName, sg, !Ref Stage]]
      VpcId: !Ref VpcId
      SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: !Ref LoadBalancerPortHTTP
        ToPort: !Ref LoadBalancerPortHTTP
        CidrIp: 0.0.0.0/0
      - IpProtocol: tcp
        FromPort: !Ref LoadBalancerPortHTTP
        ToPort: !Ref LoadBalancerPortHTTP
        CidrIpv6: ::/0
      - IpProtocol: tcp
        FromPort: !Ref ContainerPort
        ToPort: !Ref ContainerPort
        SourceSecurityGroupId: !Ref LoadBalancerSecurityGroupId
  productionService:
    Type: AWS::ECS::Service
    # This dependency is needed so that the load balancer is set up correctly in time
    Condition: isProduction
    Properties:
      ServiceName: !Join ['-', [!Ref ServiceName, service, production]]
      Cluster: !Ref Cluster
      TaskDefinition: !Ref TaskDefinition
      DeploymentConfiguration:
        MinimumHealthyPercent: 100
        MaximumPercent: 200
      DesiredCount: 1
      # This may need to be adjusted if the container takes a while to start up
      HealthCheckGracePeriodSeconds: !Ref HealthCheckIntervalSeconds
      LaunchType: FARGATE
      NetworkConfiguration:
        AwsvpcConfiguration:
          # change to DISABLED if you're using private subnets that have access to a NAT gateway
          AssignPublicIp: ENABLED
          Subnets:
          - !Ref SubnetId
          - !Ref SubnetB
          SecurityGroups:
          - !Ref ContainerSecurityGroup
      LoadBalancers:
      - ContainerName: !Ref ServiceName
        ContainerPort: !Ref ContainerPort
        TargetGroupArn: !Ref TargetGroupARN
  developmentService:
    Type: AWS::ECS::Service
    # This dependency is needed so that the load balancer is set up correctly in time
    Condition: isDevelopment
    Properties:
      ServiceName: !Join ['-', [!Ref ServiceName, service, !Ref Stage]]
      Cluster: !Ref ClusterARN
      TaskDefinition: !Ref TaskDefinition
      DeploymentConfiguration:
        MinimumHealthyPercent: 100
        MaximumPercent: 200
      DesiredCount: 1
      # This may need to be adjusted if the container takes a while to start up
      HealthCheckGracePeriodSeconds: !Ref HealthCheckIntervalSeconds
      LaunchType: FARGATE
      NetworkConfiguration:
        AwsvpcConfiguration:
          # change to DISABLED if you're using private subnets that have access to a NAT gateway
          AssignPublicIp: ENABLED
          Subnets:
          - !Ref SubnetId
          - !Ref SubnetB
          SecurityGroups:
          - !Ref LoadBalancerSecurityGroupId
      LoadBalancers:
      - ContainerName: !Ref ServiceName
        ContainerPort: !Ref ContainerPort
        TargetGroupArn: !Ref TargetGroupARN
  LogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: !Join ['/', [/ecs, !Ref ServiceName, log-group, !Ref Stage]]
  AutoScalingTarget:
    Type: AWS::ApplicationAutoScaling::ScalableTarget
    Properties:
      MinCapacity: !Ref MinContainers
      MaxCapacity: !Ref MaxContainers
      ResourceId: !Join ['/', [service, !If [isProduction, !Ref Cluster, !Ref ClusterARN], !If [isProduction, !GetAtt productionService.Name, !GetAtt developmentService.Name]]]
      ScalableDimension: ecs:service:DesiredCount
      ServiceNamespace: ecs
      # "The Amazon Resource Name (ARN) of an AWS Identity and Access Management (IAM) role that allows Application Auto Scaling to modify your scalable target."
      RoleARN: !Ref AutoscalingRole
  AutoScalingPolicy:
    Type: AWS::ApplicationAutoScaling::ScalingPolicy
    Properties:
      PolicyName: !Join ['-', [!Ref ServiceName, AutoScalingPolicy, !Ref Stage]]
      PolicyType: TargetTrackingScaling
      ScalingTargetId: !Ref AutoScalingTarget
      TargetTrackingScalingPolicyConfiguration:
        PredefinedMetricSpecification:
          PredefinedMetricType: ECSServiceAverageCPUUtilization
        ScaleInCooldown: 10
        ScaleOutCooldown: 10
        TargetValue: !Ref AutoScalingTargetValue
  • Update the buildspec file to build the Docker image and push it to ECR. A sample buildspec.yml is given below. The phase structure was restored, and the image URI is composed from the DOCKER_REPOSITORY_URI and REPOSITORY_NAME environment variables configured on the build project earlier.

version: 0.2
phases:
  install:
    commands:
      # Upgrade AWS CLI to the latest version
      - pip install --upgrade awscli
      - nohup /usr/local/bin/dockerd --host=unix:///var/run/docker.sock --host=tcp:// --storage-driver=overlay2&
      - timeout 15 sh -c 'until docker info; do echo .; sleep 1; done'
  pre_build:
    commands:
      - mvn clean compile test
      - echo Logging in to Amazon ECR....
      - aws --version
      - aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin $DOCKER_REPOSITORY_URI
  build:
    commands:
      - mvn war:exploded
      - echo Build started on 'date'
      - echo Building the Docker image...
      - docker build -f Dockerfile.production -t $DOCKER_REPOSITORY_URI/$REPOSITORY_NAME:latest .
  post_build:
    commands:
      - echo Build completed on 'date'
      - echo Pushing the Docker image...
      - docker push $DOCKER_REPOSITORY_URI/$REPOSITORY_NAME:latest
      - echo Writing image definitions file...
      - cp -r .ebextensions/ target/ROOT/
      - aws cloudformation package --template template.yml --s3-bucket $S3_BUCKET --output-template-file template-export.yml
      # Do not remove this statement. This command is required for AWS CodeStar projects.
      # Update the AWS Partition, AWS Region, account ID and project ID in the project ARN on template-configuration.json file so AWS CloudFormation can tag project resources.
      - sed -i.bak 's/\$PARTITION\$/'${PARTITION}'/g;s/\$AWS_REGION\$/'${AWS_REGION}'/g;s/\$ACCOUNT_ID\$/'${ACCOUNT_ID}'/g;s/\$PROJECT_ID\$/'${PROJECT_ID}'/g' template-configuration.json
artifacts:
  files:
    - 'template-export.yml'
    - 'template-configuration.json'
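The sed command in the buildspec replaces CodeStar's $PARTITION$-style placeholders in template-configuration.json. Here is a quick local sketch of what that substitution does; the file name and all values below are stand-ins, not the real pipeline variables:

```shell
# Create a stand-in file with CodeStar-style placeholders
printf 'arn:$PARTITION$:codestar:$AWS_REGION$:$ACCOUNT_ID$:project/$PROJECT_ID$\n' > tc.json

# Values that CodeBuild would normally supply via environment variables
PARTITION=aws
AWS_REGION=us-west-2
ACCOUNT_ID=123456789012
PROJECT_ID=myproj

# Same substitution the buildspec runs against template-configuration.json
sed -i.bak 's/\$PARTITION\$/'${PARTITION}'/g;s/\$AWS_REGION\$/'${AWS_REGION}'/g;s/\$ACCOUNT_ID\$/'${ACCOUNT_ID}'/g;s/\$PROJECT_ID\$/'${PROJECT_ID}'/g' tc.json
cat tc.json
```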

Now push the code changes to the branch, which will trigger the pipeline. In the build stage the image is pushed to ECR, and in the deploy stage the Docker image from ECR is deployed on ECS.

Open the load balancer, copy the DNS name, and hit the URL in the browser.


Hurrah!! The CodeStar application is successfully deployed on ECS with Fargate. 🥳

💲Cost Benefits — Case study

There is no additional charge for AWS CodeStar itself; you pay only for the resources and services your application uses, e.g. EC2 instances, ECR, ECS, SQS, SNS, etc. Refer to https://aws.amazon.com/codestar/pricing/


In this blog you have seen how easily we can create and manage an entire CI/CD pipeline with AWS CodeStar, using ECS to deploy the application in containers, where a commit or change to code passes through automated stage gates all the way from building and testing to deploying the application.

In the next blog, we will see how to create pipelines for each environment (dev, staging) using the same YAML file, triggering the CI/CD pipeline whenever there is a change in the source branch.




Pooja G Bhat