Next Generation MSP


Cascadeo has been virtualizing and operating server infrastructure for more than 13 years.  The process has been fairly consistent over time: lead with human-driven discovery, follow with assessment and design, implement and automate, and finally operationalize the system with monitoring and, optionally, a managed services contract.

It turns out that the entire process can be improved significantly with data analytics.  Specifically, the telemetry generated by applications (regardless of how or where they are hosted today) is immensely valuable in planning, executing, and validating cloud migration projects.  Starting with monitoring leads to a virtuous cycle: data leads to new insights and opportunities for improvement, which (if acted upon) yield new data that can be used to compare the effects of the changes and identify the next candidates for optimization.

Cascadeo typically deploys managed services to all new client environments, including those that are not yet optimized or automated in any way.  Because deployment is automated and takes just minutes, it can be done by less-technical staff or by the client rather than the vendor.  The platform discovers the assets in a cloud account and immediately begins collecting available telemetry, which is fed into a cloud analytics back-end built on the InfluxData TICK stack for extreme-scale time-series data analysis.
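To make the ingestion path concrete, here is a minimal sketch of formatting one collected metric as InfluxDB line protocol, the write format the TICK stack ingests. The measurement, tag, and field names are illustrative assumptions, not the platform's actual schema.

```python
def to_line_protocol(measurement, tags, fields, ts_ns):
    """Format one data point as InfluxDB line protocol:
    measurement,tag=val,... field=val,... timestamp"""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

# A hypothetical CPU sample from a discovered instance:
point = to_line_protocol(
    "cpu_usage",
    {"host": "web-01", "region": "us-west-2"},
    {"value": 73.5},
    1700000000000000000,
)
print(point)
# cpu_usage,host=web-01,region=us-west-2 value=73.5 1700000000000000000
```

In practice each discovered asset would stream many such points per minute over the encrypted channel to the analytics back-end.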

Once deployed, operational data is streamed continuously over SSL.  The analytics platform studies this data, looking for both operational issues and patterns in the dataset.  For example, the system can infer a possible relationship between disk activity and network activity based solely on the fact that these measurements appear to correlate.  By understanding the apparent relationships between different data sources, the system enables the creation of a logical application dependency map – something that would previously have been assembled painstakingly by hand, with client and Cascadeo engineers working from often-incomplete documentation or inference.
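The kind of inference described above can be sketched with a plain Pearson correlation between two metric streams. The metric names, sample values, and the 0.8 threshold here are illustrative assumptions, not the platform's actual algorithm.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-minute samples from one host:
disk_iops = [120, 250, 400, 380, 500, 610]
net_bytes_mb = [1.1, 2.4, 3.9, 3.7, 5.0, 6.2]

r = pearson(disk_iops, net_bytes_mb)
if r > 0.8:  # threshold is an assumption for the sketch
    print(f"possible dependency between disk and network (r={r:.2f})")
```

Pairwise scores like this, computed across many metric streams, are the raw material from which a dependency map can be drawn as a graph of strongly correlated components.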

As improvements are made, the patterns in the data evolve and reflect the impact of the changes.  This makes it possible to directly answer questions like “did this change make things better?” and “is the system performing better or worse after the most recent upgrade?”  Rather than relying on subjective human assessment, teams get direct measurements that can be used to evaluate and recalibrate migration and optimization project outcomes.
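A before/after evaluation of this kind can be sketched as a simple comparison of a metric across two windows. The latency figures below are hypothetical samples, not real client data.

```python
from statistics import mean

def compare_windows(before, after):
    """Summarize a metric before and after a change window."""
    delta = mean(after) - mean(before)
    return {
        "before_mean": mean(before),
        "after_mean": mean(after),
        "pct_change": 100 * delta / mean(before),
    }

# Hypothetical p95 latency (ms) sampled before and after an upgrade:
before = [210, 198, 225, 240, 205]
after = [150, 162, 148, 171, 155]

summary = compare_windows(before, after)
print(f"latency changed {summary['pct_change']:+.1f}%")
# latency changed -27.1%
```

A production system would add statistical significance testing and control for load differences between the two windows, but the core idea is the same: the question “did this change make things better?” becomes a computation over telemetry rather than a judgment call.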

In summary, this approach moves away from the traditional waterfall-style project and towards Agile-style iterative development by focusing on operations and telemetry from the very beginning.  The data collected informs and validates the subsequent engineering efforts. As the system evolves sprint by sprint, the progress and impact are measured and evaluated. The data provides insight into the next round of optimization opportunities – better performance, lower cost, increased scalability, etc.  This forms a virtuous cycle between development and operations, aligning even legacy applications more closely with a ‘cloud-native devops’ approach than a conventional corporate IT operations approach.

About Cascadeo

Have questions about how our certified team of cloud deployment and managed services experts can help your business achieve its strategic goals?

Contact Cascadeo