
“Most of the excitement seems to be about the future, when in reality, the future has already arrived.” —Jared Reimer, Cascadeo Founder & CTO

The Future Has Already Arrived: an Interview with Cascadeo Founder & CTO Jared Reimer

Cascadeo: You’ve been traveling in the U.S. and internationally the past few months, giving presentations on generative AI and its implications for cloud. What are the audiences you’ve spoken to saying about generative AI? What are they excited about?

Jared Reimer: Generally speaking, there is enthusiasm and excitement about the promise AI brings. Many people haven’t yet “connected the dots” between their own experimentation with tools like ChatGPT and their jobs, however. It is often seen as a novelty, or as something meant for some other profession, rather than something they must learn to embrace and work with as an essential part of their careers. Most of the excitement seems to be about the future, when in reality, the future has already arrived. This will become much more apparent as existing applications from Microsoft, Adobe, Google, and others suddenly have AI built into them—a rollout that is already underway and will accelerate in the weeks ahead. The technology is much further along, and far more broadly applicable, than many people seem to realize—even within the tech sector.

Cascadeo: What are those audiences worried about? Where have you seen people expressing anxieties about gen AI?  

Jared Reimer: There are the usual concerns that are well-documented and frequently discussed: “hallucinations,” bias in the training dataset, displacement of jobs, and intellectual property concerns. Interestingly, there is little worry about job losses, either because people do not perceive a threat to their specific job function or because they underestimate the current and future capabilities of AI. Few seem to truly appreciate the magnitude of the changes taking place or the unstoppable advances in the technology. I fear that society is not prepared to deal with a period of extreme turmoil and disruption. In the end, AI will be a net positive for humanity, for jobs, for education, and even for areas as diverse as art, music, drug discovery, software development, teaching, and legal work. But before that potential is fully realized and universally appreciated, uneven adoption will create winners and losers in short order. Organizations that ride the wave will generally have a significant competitive advantage over those that ignore it.

Cascadeo: Have you noticed a difference in the energy around gen AI in North America compared to Asia?  

Jared Reimer: There is more awareness in the Seattle-SF Silicon Corridor because that is where the majority of the work is being done, at least among the hyperscalers and companies like OpenAI and Anthropic. I think there is also more willingness to try bleeding-edge technologies in American companies, whereas the trend seems to be a bit more cautious in other places. Some of that is because of competitive pressure, some of it is cultural, and some of it really boils down to fear of job loss for being seen as “wrong.” The tolerance for risk, the forgiveness of failed projects, and the attention given to the latest technology varies by country. Broadly speaking, cloud adoption is years behind in Asia. I don’t think the companies there will have the luxury of being quite as patient with AI adoption. 

Cascadeo: Analysts have produced long lists of likely use cases for generative AI in recent months, many of them focused on customer service applications and synthetic data for modeling. What are some of the most interesting use cases you’re hearing discussed? What are you most excited about?  

Jared Reimer: The application of generative AI to repetitive, white-collar professional work will free people to do more useful things. Some examples that come to mind include paralegal work, contract law, medical transcription, basic software development, QA and testing, code and cost optimization, and troubleshooting routine problems. At Cascadeo, we are already using a combination of AI tools in our cloud management platform and are seeing truly dramatic results. In seconds, we can now take an event from a client’s cloud or on-premises environment and create a detailed understanding of its causes, effects, troubleshooting options, and even ways to fix it if necessary. This is work that previously would have tied up a cloud engineer or operations expert for minutes to hours. Roughly 80% of the time spent on most IT outages goes to the triage work leading up to the repair or fix. If we can compress that 80% to near zero, the cost and disruption of IT service incidents will be dramatically reduced. In many cases, we can now predict failures before they occur, resulting in zero downtime, and systems can even self-correct without operator intervention.
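The event-to-analysis flow described above can be sketched in a few lines. The sketch below is purely illustrative and is not Cascadeo’s platform code: the event fields, function name, and prompt wording are all assumptions, and the generative model call that would actually produce the causes/effects/remediation analysis is omitted.

```python
from dataclasses import dataclass


@dataclass
class CloudEvent:
    """Hypothetical shape of an incoming monitoring event."""
    source: str     # e.g. "aws.ec2" (assumed label, not a real API value)
    resource: str   # identifier of the affected resource
    message: str    # raw alert text from the monitoring system


def build_triage_prompt(event: CloudEvent) -> str:
    """Package an event into a prompt for a generative model.

    In a real pipeline, the model's response to this prompt would
    supply the root-cause, impact, and remediation analysis that an
    engineer would otherwise assemble by hand during triage.
    """
    return (
        f"An alert arrived from {event.source} for resource {event.resource}.\n"
        f"Alert text: {event.message}\n"
        "List the most likely root causes, the expected impact, "
        "and concrete remediation steps."
    )


event = CloudEvent("aws.ec2", "i-0abc123", "CPUUtilization > 95% for 15 minutes")
prompt = build_triage_prompt(event)
print(prompt)
```

The design point is only that the expensive part of an outage, the investigation, becomes a prompt-and-response step rather than hours of manual log reading; the model call and any self-correction logic sit behind this boundary.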

Cascadeo: What are some measures you think all enterprises should be taking to assure their generative AI use is ethical? What does responsible AI look like to you?  

Jared Reimer: Unfortunately, there is no realistic way to regulate or enforce rules around AI. Each organization, government, academic institution, and individual will have to decide what is and is not acceptable to them. This is because the required hardware (GPUs) is readily available, a Cambrian explosion of open-source software is taking place, and there is no way to “put the genie back in the bottle.” The talk of a six-month pause or of regulating AI is, bluntly, not intellectually honest. Even if the US were to create rules and regulations, they would be both unenforceable at home and ignored abroad. As with any new technology, there will be good actors and there will be bad ones. The major tech companies are putting enormous effort into making sure the mainstream AI tools address the long list of very real concerns. The underground, political parties, hostile foreign governments, and others will do the opposite. Those who tell you they have the answers to the “ethical AI” problem space are giving people simple answers to appease and distract them. There are no easy answers here.

This does not mean the problem is intractable or can be ignored, however. Working with subject-matter experts within and outside your organization to shape policy and guide decision-making is essential. Many of the world’s leading thinkers on this subject publish on platforms like LinkedIn and share their wisdom and insights. While we cannot control what others may choose to do with the tools available to them, we certainly can and must make the best choices we can within our own sphere of influence. Cascadeo goes to tremendous lengths to debate, discuss, evangelize, and enforce policies that aim to maximize the benefits of AI while minimizing the risk of harm. We do this with our own software and our staff, and also as an offering to our clients, many of whom are struggling with these same challenging issues. Professional services is about more than technical design and implementation; in the case of AI, much of the work is thinking through how to deploy the technology correctly the first time rather than rushing it to market and dealing with the fallout after the fact. We pride ourselves on ethical engineering and operations, as we have for 17 years, and we carry that work forward into the era of AI, when it is more important than ever before.