Cascadeo AWS Case Study: Meteorcomm
Meteorcomm is the rail industry’s trusted choice for wireless
communications technology to reliably transport mission-critical information.
To enable the safe and efficient operation of the world’s
trains, Meteorcomm helps railroad service organizations embed
kill switches and other technology into their railcars that perform
functions like halting the car in an emergency.
In addition to ensuring passenger and operator safety, these solutions
enable Positive Train Control (PTC) compliance.
The hardware and software employed in their messaging and
systems management solutions are complex and highly integrated by
nature. To help them deliver solutions as quickly as customers need
them, Meteorcomm is committed to agile software development
principles and a continuous integration / continuous delivery (CI/
CD) approach. Meteorcomm’s solutions require that each railcar
have custom hardware and dedicated software onboard. To streamline
the testing of these deployments, Meteorcomm made significant
investments in test automation, running over 3,300 complex
integration tests against nightly builds of their Interoperable Train
Control Messaging (ITCM) application.
Historically, to support this ITCM application, they relied on a
large bank of high-density blade servers running in two private
data centers managed by an internal IT team.
To give their engineers the ability to request compute capacity
on demand, these servers were virtualized into thousands of
VMware virtual machines. This typically resulted in roughly 6,000
virtual machines running concurrently at peak, pushing utilization
beyond a manageable threshold. Yet outside of these peak hours,
the vast majority of their hardware sat idle.
Their development, test, and training teams didn’t want to rebuild
all these instances every time they were needed, so to ensure that
they could meet peak demand, their solution was to keep the
instances running and clean them up after each use. This led to
massive cost inefficiencies. They also lacked the governance and
cost management capabilities they required, further highlighting
the need for a better solution.
The AWS and Cascadeo Solution
Meteorcomm determined that Amazon Web Services would enable
them to better meet dynamically changing demands by provisioning
infrastructure on-demand, paying only for the resources they
use. From there, Meteorcomm engaged AWS Premier Consulting
Partner Cascadeo to design and implement a new infrastructure
solution on AWS to support their regression tests.
The new solution (Figure 1) creates a greenfield test environment
where each test run deploys a new Amazon VPC that houses all of
the Amazon EC2 instances and other infrastructure needed to run
that test, as defined by an AWS CloudFormation template.
All of the AMIs that make up each set of instances required for a
test are configurable at any time. Meteorcomm has also automated
the process of scaling each test set, and the infrastructure that supports
it, up and down as needed; like the AMIs, this template can
be reconfigured at any time to support their changing needs.
A user interface allows simple reconfiguration of both at any
time, and also defines the cost governance rules for the regression
runs to ensure they stay as cost-effective as possible.
Figure 1: Meteorcomm’s Regression Test Infrastructure
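A per-test environment of this kind might be described in a CloudFormation template along these lines. This is a minimal sketch only; the resource names, instance type, and CIDR ranges are illustrative assumptions, not Meteorcomm’s actual template:

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Sketch of a disposable per-test-run environment (illustrative only)

Parameters:
  TestRunAmiId:
    Type: AWS::EC2::Image::Id
    Description: AMI for the test instances; reconfigurable per run

Resources:
  # Each test run gets its own isolated VPC, torn down when the run completes
  TestVpc:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16

  TestSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref TestVpc
      CidrBlock: 10.0.1.0/24

  TestInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref TestRunAmiId
      InstanceType: c5.xlarge
      SubnetId: !Ref TestSubnet
```

Because every resource lives in one stack, deleting the stack removes the entire environment in a single operation, which is what makes a fresh environment per run practical.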
One of the key components of Meteorcomm’s
architecture is the inclusion of a Radio Network
Simulation (RNS) server, which is hosted on an
instance of Microsoft Windows Server running
services on specific ports. When an individual service
starts up, the RNS server determines which ports
the messages need to be relayed to by consulting a
configuration file that maps this out.
The individual RNS servers are created using a
CloudFormation template that provisions a base
Windows Server AMI and contains the user data
needed to pull the necessary files from Amazon S3 and
install the RNS.exe file. A Windows PowerShell
script is then run, which starts all of the services
that correspond to the starting ports and configuration
files included in the package from S3.
Finally, a DSC (Desired State Configuration) script is run to verify that the system
is correctly configured.
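The port-allocation step can be pictured as a simple lookup against the packaged configuration file. The file format, service names, and function names below are hypothetical illustrations, not Meteorcomm’s actual RNS configuration:

```python
# Hypothetical illustration of the RNS port-mapping idea: a packaged
# configuration file maps each service name to the port its messages
# should be relayed to. Format and names are invented for this sketch.
import configparser
import io

# Stand-in for the mapping file pulled from S3 alongside RNS.exe
CONFIG_TEXT = """
[ports]
locomotive-messaging = 5001
wayside-status       = 5002
back-office-relay    = 5003
"""


def load_port_map(text: str) -> dict[str, int]:
    """Parse the [ports] section into a service -> port dictionary."""
    parser = configparser.ConfigParser()
    parser.read_file(io.StringIO(text))
    return {name: int(port) for name, port in parser.items("ports")}


def relay_port_for(service: str, port_map: dict[str, int]) -> int:
    """Return the port a starting service should relay messages to."""
    try:
        return port_map[service]
    except KeyError:
        raise ValueError(f"no relay port configured for {service!r}")


port_map = load_port_map(CONFIG_TEXT)
print(relay_port_for("wayside-status", port_map))  # -> 5002
```

Keeping the mapping in a plain file, as the case study describes, means the same baked AMI can serve different test topologies just by swapping the package pulled from S3.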
In addition to Amazon EC2, Amazon S3, Amazon
VPC, and AWS CloudFormation, the following
services are used:
- Jenkins extensible automation server with Jenkins Pipeline, a suite of plugins to support continuous integration and continuous delivery pipelines
- Amazon CloudWatch, AWS CloudTrail, and Zenoss to support operational requirements
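The nightly lifecycle described above, creating a stack per run, executing the suite, and tearing everything down, can be sketched as follows. The function and stack names are illustrative stubs standing in for the real CloudFormation and test-runner calls, not Meteorcomm’s code:

```python
# Minimal sketch of the per-run lifecycle: ephemeral infrastructure that
# exists only for the duration of one regression run. All names invented.
from dataclasses import dataclass


@dataclass
class TestRun:
    stack_name: str
    passed: int = 0
    failed: int = 0


def create_stack(run_id: str) -> TestRun:
    """Stub: would call CloudFormation CreateStack with the test template."""
    return TestRun(stack_name=f"itcm-regression-{run_id}")


def run_suite(run: TestRun, tests: list[str]) -> None:
    """Stub: would dispatch each test against the stack's instances."""
    for name in tests:
        run.passed += 1  # a real runner would record pass/fail per test


def delete_stack(run: TestRun) -> None:
    """Stub: would call CloudFormation DeleteStack, releasing all capacity."""


def nightly_regression(run_id: str, tests: list[str]) -> TestRun:
    run = create_stack(run_id)
    try:
        run_suite(run, tests)
    finally:
        delete_stack(run)  # infrastructure exists only while tests run
    return run


result = nightly_regression("2024-01-15", ["smoke", "messaging", "failover"])
print(result.passed)  # -> 3
```

The try/finally shape is the important part: teardown happens even when tests fail, so no stack lingers to accrue cost the way the always-on VMware fleet did.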
By leveraging this automated process, Meteorcomm can spin up the
infrastructure they need when they need it, which eliminates
the cost inefficiencies of provisioning for peak demand 24/7.
They can start a regression test almost immediately, scale up to
thousands of instances, each leveraging the exact combination of
interrelated services configured how they choose, and quickly shut
the instances off when the tests are complete.
Also, because provisioning and deployment are automated, every
test runs in an identical environment; tests do not fail due to
environmental differences between manually built instances.
As a result of their migration to AWS, Meteorcomm has reduced
time to completion for each test by 75%, and estimates 50% year-over-year cost savings.
The configurable automation allows Meteorcomm to add other
product test groups to the solution with minimal effort, making
the solution extensible.
Based on the success of their AWS-based regression test solution
for ITCM, Meteorcomm is already looking at other
ways it can leverage AWS.
Most notably, they are looking to expand the configurable automations
they’re currently using to enable other development groups
to move their regression tests out of their on-premises data centers
and into AWS.
They are also looking for ways to leverage Amazon EC2 Spot
Instances to further drive down costs for nightly regression
runs where time is not as critical.
Another initiative they are looking into is how to decrease time to
completion for each regression run by optimizing groups of tests
across test instances.
To learn more about how AWS can help you solve your most complex
technology challenges, visit
For more information about how Cascadeo can help power your
company’s transformation to the AWS Cloud, see Cascadeo’s
Premier Consulting Partner listing in the AWS Partner Directory.
Cascadeo AWS Case Study: MacDonald-Miller
About MacDonald-Miller Facility Solutions
MacDonald-Miller is a full-service design-build mechanical contractor in the Pacific Northwest. With over 1,000 talented professionals, 10 locations and our own prefabrication shop, no project is too big or too small. We’ve helped shape the local landscape for over half a century with buildings that operate in the most efficient manner possible. We like to think we’re saving the planet one building at a time.
Facility managers, owners and tenants can all rest assured that our experience as industry leaders lends itself well to tackling the complexities of their industries: healthcare, biotech/labs, industrial, marine construction, commercial office buildings, and residential projects. We’ve done it all. And if we have our way, we’ll keep doing it for the next 50 years.
Internal IT infrastructure, including an internal datacenter, was built over many years. The company decided to migrate their applications to the AWS cloud for its scalability, flexibility, and support for automation.
Deployment automation and configuration management was fundamental to the project from its inception, as the company’s leadership correctly identified the typical “lift-and-shift” approach as a dead-end strategy for sustainable operations.
The company’s legacy line-of-business applications depend on MSSQL with MSDTC; supporting MSDTC and live-migrating a large database to AWS posed an added challenge.
Why Amazon Web Services
Per the latest Gartner Magic Quadrant report, AWS is a Leader in Cloud Infrastructure as a Service, with high marks for both completeness of vision and ability to execute. On top of this, AWS offers multiple services for automation, including resource provisioning, configuration management, and application deployment. AWS OpsWorks, AWS CodePipeline, AWS Lambda, and AWS CloudFormation were used to automate the company’s network monitoring system (Zenoss), application servers, and VPC in the cloud.
The figure below illustrates VPCs in the cloud:
Figure 1: VPCs in AWS with On-premise Network Integration
The figures below illustrate automated deployment of VPC and Zenoss in the cloud:
Figure 2: VPC in AWS
Figure 3: Automated Zenoss Deployment in AWS
Benefits of Automated AWS Infrastructure
Automated deployment of applications brings several benefits to engineers and managers alike. When deployment is designed to be repeatable and automated, anyone can perform it, allowing engineers to focus on more critical tasks. Repetition makes the whole process less error-prone, which benefits management as well. Moreover, these solutions can be extended to cover other requirements, such as multi-region deployments or disaster recovery.
Live Migration of MSSQL with MSDTC Support
By leveraging the third-party software SIOS DataKeeper, live migration of the production core MS SQL Server database system was made possible. Initially, an EC2 instance was configured as a failover cluster member to the existing production database, with volume-level replication across an AWS Direct Connect private interconnect. Failover and failback between the EC2-based MSSQL and the legacy on-premises MSSQL server infrastructure were exercised. The final outcome was a multi-AZ MS SQL deployment on Windows Server EC2 instances with volume-level replication across AZs and WSFC failover/clustering capabilities, including MSDTC.
Cascadeo AWS Case Study: PayScale
Headquartered in Seattle and launched in January 2002, PayScale has been shedding light on a once dark area by providing salary, benefits, and compensation information. The service not only helps job candidates know what they are worth but also helps companies make competitive salary offers. PayScale has collected the world’s largest salary information database, with over 50 million individual salary profiles. The service works by enabling individuals to submit their job and salary profiles and be compared to other individuals in the market. With this volume of information, the company has been able to statistically derive and deliver real-time salary information.
PayScale hosts several large Microsoft SQL Server databases with high I/O workloads. It is imperative to have measures in place to protect data and to be able to recover from any failure without compromising the performance of production databases. The company wanted to update its Business Continuity and Disaster Recovery Plan for the databases in the event of a full data center outage, and needed a backup site with infrastructure that was secure and could not only keep up with demand but also scale easily. This is where Amazon Web Services (AWS) was clearly the right choice.
Why Amazon Web Services
Due to the workload of the databases and other existing technologies in place, it was determined that native SQL Server database replication would provide the least intrusive way of copying data to the DR site. Amazon EC2 became the natural choice for the DR server. Although the Amazon RDS service is available, running Microsoft SQL Server on EC2 allowed the deeper level of database administration needed to set up this particular replication.
In terms of security, the Microsoft SQL Server DR instance is placed inside a VPC. Amazon Virtual Private Cloud (VPC) is a logically isolated virtual network in AWS in which instances and other resources can be securely provisioned. The VPC service allowed staff to connect to the Microsoft SQL Server DR instance via their existing VPN while protecting the instance from unauthorized access.
Data replicates asynchronously from the datacenter to AWS over the VPN connection. A staging instance was set up to be the publisher for the DR databases. The figure below shows the resulting database DR architecture.
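Setting up native SQL Server transactional replication of this kind typically involves defining a publication on the staging (publisher) instance and subscribing the EC2-based DR instance to it. A rough sketch using the standard replication stored procedures follows; the database, table, publication, and server names are hypothetical, and a real setup also requires distributor configuration and agent scheduling:

```sql
-- Hypothetical sketch: publish a database from the staging (publisher)
-- instance and subscribe the EC2-based DR server. All names illustrative.
USE SalaryData;
GO

-- Define a transactional publication on the staging instance
EXEC sp_addpublication
    @publication = N'SalaryData_DR',
    @repl_freq   = N'continuous',
    @status      = N'active';

-- Add a table (article) to the publication
EXEC sp_addarticle
    @publication   = N'SalaryData_DR',
    @article       = N'SalaryProfiles',
    @source_object = N'SalaryProfiles';

-- Push subscription to the DR instance running in the VPC
EXEC sp_addsubscription
    @publication       = N'SalaryData_DR',
    @subscriber        = N'EC2-DR-SQL01',
    @destination_db    = N'SalaryData',
    @subscription_type = N'Push';
```

As the case study notes, this level of replication control is what motivated SQL Server on EC2 rather than RDS.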
Aside from providing a DR site for the databases, AWS also became a playground for database development and load testing. It became easy to launch a test environment by launching a SQL Server instance from an EBS snapshot of the DR instance.
PayScale has also realized upfront cost savings by using AWS Reserved Instances as opposed to investing in physical hardware. AWS has helped the company’s agility with cost savings and business continuity. Cascadeo continues to support PayScale and provide MS SQL advice to keep the systems running smoothly.
Cascadeo AWS Case Study: beBetter Health
About beBetter Health
beBetter Health has been helping companies deliver successful wellness programs for over 25 years. The beBetter System helps employees take action toward improving their health, provides employers with strategies to reduce health care costs and boost employee productivity, and gives brokers everything needed to implement an effective wellness program.
beBetter Health launched an initiative to build a new Software-as-a-Service (SaaS) product to better meet the changing needs of its wellness customers. For earlier web applications, beBetter Health had served its customers from its own traditional datacenter infrastructure, which had multiple single points of failure and a history of unplanned outages. For the new product, the team wanted to focus its resources and efforts specifically on development and rapid deployment of the new applications without having to simultaneously build and support a new infrastructure.
Given its staff’s limited domain knowledge and the need to support existing internal IT systems, beBetter Health knew it needed a partner with solid AWS experience in order to meet its go-to-market objectives for the product. beBetter Health partnered with Cascadeo Corporation for its ability to quickly design, build, and operate a scalable, highly available AWS infrastructure that could underpin the product’s performance.
Why Amazon Web Services
beBetter Health chose to build its new application services on AWS after evaluating alternatives, including building its own virtualized infrastructure. Several factors drove the decision:
- A low hurdle to pilot AWS features and no need for initial capital investment,
- The richness of the AWS solution, with integrated compute, tiered storage, and security solutions from a single provider,
- The higher performance, reliability, availability, and scalability requirements mandated by the new SaaS product, coupled with low-cost, inter-region failover,
- The business desire to focus on engineering product features more than infrastructure operations, and
- An ability to leverage vendors, such as Cascadeo Corporation, as their IT services provider with deep knowledge of AWS capabilities and operations.
To achieve beBetter Health's business objectives, Cascadeo built a secure AWS VPC connected via a VPN to the legacy datacenter and migrated its production servers to an initial three-node deployment using EC2 instances. Cascadeo scaled up the infrastructure to 14 nodes using ELB and RDS to meet increasing demand across beBetter Health's services. As part of the operational strategy to take full advantage of the AWS product suite, Cascadeo has helped beBetter Health to evaluate, test, and deploy new AWS services as they become available.
beBetter Health has been able to keep its operating costs very low, even with many constituents putting new demand on the AWS environment. The costs do grow from month to month but can be monitored and managed effectively by beBetter Health's technical stakeholder, who holds a global view of both the constituent demand and the AWS deployment. At the same time, beBetter Health has obtained better uptime, significantly reduced unplanned outages, and increased application performance, leading to a better overall user experience. Lastly, AWS has given beBetter Health greater operational flexibility, which has enabled rapid revision of the production environment without wasted time and effort.
According to beBetter Health's technical lead, "Without Cascadeo, beBetter would not have been able to achieve these results in AWS. Anybody can take a Ferrari out for a lap, but to get the most out of AWS, you need someone at the wheel who knows how to drive one."
Chef helps customers automate their infrastructure, accelerating time to market, managing scale and complexity, and safeguarding systems. Whether your network is in the cloud, on-site, or a hybrid, Chef can automate how you configure, deploy, and scale your servers and applications, whether you manage 5 servers, 5,000 servers, or 500,000 servers. It's no wonder that Chef has been chosen by companies like Facebook, GE, and Amazon for mission-critical challenges.
Due to an increase in demand for professional services, Chef created a certification process that links their customers to an ecosystem of qualified Chef engineers to leverage for their respective initiatives. Cascadeo was selected as one of the initial partners to complete the training and certification process.
The Chef Certification Program is designed to thoroughly assess, train, and verify the skills and abilities needed to successfully deploy Chef, as well as to establish a pattern of continuous delivery for customers. Chef takes a hands-on approach with partners on joint projects to ensure best practices are well understood and implemented. Additionally, the partner certification program requires active participation in the Chef and DevOps communities, as well as an internal practice manager to ensure proper knowledge transfer. The certification program is currently invite-only.
As a result of this process, Cascadeo was announced as one of the first Chef certified
partners at ChefConf 2015.
The Certification process:
Receiving the Chef certification required several criteria to be met. Cascadeo's
engineering team needed to attend a number of Chef-led training sessions and
workshops. In addition, Chef shadowed several joint engagements, and also reviewed a
range of projects we had previously completed as validation of our work product,
approach, and methodologies. Lastly, we needed to have one of our engineers lead a
multi-day Chef training seminar.
Cascadeo's DevOps practice enables our clients to focus on developing their products
and services, accelerate time to market, and decrease operational overhead. By
providing our clients with certified Cloud and Chef engineers, Cascadeo empowers our
clients to integrate proper deployment methodologies, change management strategies,
and continuous integration and delivery into their development cycles.
Cascadeo partners with our clients' DevOps, IT, and engineering teams to provide ongoing support, training, infrastructure-as-code scripting, and CI/CD pipeline methodologies. By leveraging Cascadeo's expertise, customers can more effectively deploy, manage, and support all of their environments on a 24x7x365 basis.
As part of our partnership with Chef, our engineers have access to additional support services from Chef, interact with key individuals and product teams at Chef, and continue to receive and participate in ongoing trainings. This ensures we are leveraging the most current methodologies and best practices for mission-critical applications and infrastructure.
"Having Cascadeo as a partner strengthens our ability to work with customers across the board. Combining Cascadeo's Chef expertise with their ability to work with client environments allows us to deliver great solutions together." -- Mahir Lupinacci, Director of Business Development, Chef