[cs615asa] AWS meetup summary

Bryana Atkinson batkinso at stevens.edu
Thu Apr 27 22:43:24 EDT 2017


Event Summary


This event was focused on Continuous Delivery and Continuous Integration.
It was hosted by AWS at the AWS Pop-up Loft in New York City. There were
four lectures given, each on various aspects of continuously delivering and
integrating code for production, whether it be software patches or
upgrades. As it was hosted by AWS, it focused on using AWS instances and
tools offered by AWS (such as CodeBuild or CodePipeline) to make this process
easier.


Why I Chose This Event

I chose this event primarily because it was hosted by AWS. Considering that
this course uses AWS very heavily, I thought it would be useful to get more
information on using AWS straight from the source. There were many days to
choose from, with each day focusing on a different topic. I chose
Continuous Delivery and Continuous Integration because it focused on
automating tasks, something that I knew we would be covering in class. I
thought it would be useful to learn how AWS handles this. I am also
interested in automating tasks and hoped there would be some information on
shell scripting.


What I Learned

These lectures were focused on Continuous Delivery and Continuous
Integration - in other words, automation. Because it was hosted by AWS, it
also focused on the various AWS tools provided that help with automation,
including CodeCommit, CodeDeploy, CodeBuild, and CodePipeline. These tools
coincide with three of the four stages of the release process that AWS
named: Source, Build, Test, and Release. AWS hosted these lectures with the
assumption that the product has already been released, and that now the
goal was to release a further version of the same product with the least
disruption to current users. However, the same tools and principles can be
used for a first release. Since our class focuses on command line and not
on GUIs, I won't focus too much on the AWS tools. Instead, I'll focus on
the principles of the automation stages. It's important to note that these
stages are only rough guidelines. There will likely be much movement
between the stages as work progresses.
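For reference, the sketch below is not from the talks themselves; it is a
minimal example, using boto3 (AWS's Python SDK), of listing the CodePipeline
pipelines in an account and printing the stages each one defines. It assumes
AWS credentials are already configured and that at least one pipeline exists.

    # Minimal sketch: show how the Source/Build/Test/Release stages appear
    # as stages of a CodePipeline pipeline. Assumes configured credentials.
    import boto3

    codepipeline = boto3.client('codepipeline')

    # List every pipeline in the account/region, then fetch each definition.
    for summary in codepipeline.list_pipelines()['pipelines']:
        name = summary['name']
        pipeline = codepipeline.get_pipeline(name=name)['pipeline']
        stages = [stage['name'] for stage in pipeline['stages']]
        print(name, '->', ' / '.join(stages))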

                Source

                This is the stage where the code is written, whether it's an
update to existing code or a first release. Typically, this happens in some
sort of shared space, such as GitHub. The code is subject to peer review and
unit testing at this stage.
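As an entirely made-up illustration of the kind of unit test that runs at
this stage, a small Python test could look like the following; the function
under test is hypothetical.

    # A tiny unit test of the sort run against source changes.
    # add_discount() is a hypothetical function, here only for illustration.
    import unittest

    def add_discount(price, percent):
        """Return price reduced by the given percentage."""
        return round(price * (1 - percent / 100), 2)

    class TestAddDiscount(unittest.TestCase):
        def test_ten_percent_off(self):
            self.assertEqual(add_discount(20.00, 10), 18.00)

        def test_zero_percent_is_unchanged(self):
            self.assertEqual(add_discount(20.00, 0), 20.00)

    if __name__ == '__main__':
        unittest.main()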

                Build and Test

                These two stages run side by side. Here, the code from the
previous stage is compiled and built. If this is not a first release, the new
code should be built alongside the current infrastructure. After a build,
further tests can be run, such as integration testing. Multiple builds are
expected at this stage, as each round of testing reveals more things that
need to be fixed. It's likely that changes will need to be made to the
source, which will then be rebuilt and re-tested.
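Here is a rough Python sketch of that build-and-test cycle. The build and
test commands (make, pytest) are only placeholders for whatever a real
project would use; the point is that any failure sends you back to the
source stage to fix the code, rebuild, and re-test.

    # Run the build, then the tests, stopping at the first failure.
    # The commands below are placeholders, not a prescribed toolchain.
    import subprocess
    import sys

    STEPS = [
        ['make', 'build'],                # compile/build the new code
        ['pytest', 'tests/unit'],         # fast checks first
        ['pytest', 'tests/integration'],  # then integration tests on the build
    ]

    for step in STEPS:
        print('running:', ' '.join(step))
        result = subprocess.run(step)
        if result.returncode != 0:
            # A failure means returning to the source stage and rebuilding.
            sys.exit(result.returncode)

    print('build and tests passed')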

                Release

                After the previous stages have taken place and you are
satisfied with your latest build, it is time for the release. AWS asserts
that there are two main parts of a release: deployment and delivery.
Deployment is making your new build live. Delivery is not only deploying your
build, but also releasing it to users. According to AWS, deployment can be
fully automated. Delivery, on the other hand, cannot be fully automated,
although there are tools that can make the process easier. In order to
deliver a build, you have to make the manual decision of how you want it to
be delivered. You have to decide whether delivering the content will disrupt
the users' experience, and how to combat that. For example, you could tell
users that the site will be unavailable during a period of time and use the
downtime to deploy the new content. Or you could reroute traffic to a subset
of servers while the others are taken down to receive the update, creating a
staged delivery. Each option has its advantages and disadvantages. For
example, users would be annoyed if a service is entirely unavailable for a
time, but at least they would know exactly when to expect the outage. On the
other hand, staged delivery keeps the service available, but what if the
reduced number of servers leads to lag and difficulty using the service?
Users would also be annoyed in that case. It is difficult to say for certain
which is the better option.
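To make the staged-delivery option a bit more concrete, below is a minimal
Python sketch of a rolling update: a few servers at a time are taken out of
rotation, updated, health-checked, and put back. The helper functions are
simple stand-ins for whatever load balancer or AWS API calls a real
deployment would use.

    # Staged ("rolling") delivery sketch. All helpers are stand-ins.
    def remove_from_load_balancer(server):
        print('draining ' + server)          # stop routing traffic here

    def deploy_build(server, build):
        print('deploying %s to %s' % (build, server))  # make the build live

    def health_check(server):
        return True                          # verify the server is healthy

    def add_to_load_balancer(server):
        print('restoring ' + server)         # start routing traffic again

    def rolling_delivery(servers, build, batch_size=2):
        """Deliver the build to the servers a few at a time."""
        for i in range(0, len(servers), batch_size):
            batch = servers[i:i + batch_size]
            for server in batch:
                remove_from_load_balancer(server)
                deploy_build(server, build)       # deployment
            for server in batch:
                if not health_check(server):
                    # A bad build is caught before it reaches every server.
                    raise RuntimeError(server + ' failed its health check')
                add_to_load_balancer(server)      # delivery: users see it

    rolling_delivery(['web1', 'web2', 'web3', 'web4'], 'v2.0')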


Bryana Atkinson