Generative AI: What to Ask, and How to Ask It
As any serious researcher, teacher, or journalist can tell you, there’s only one way to get the best answers: ask the right questions. As generative AI takes up an increasing share of our information landscape, learning to ask a machine the right questions becomes essential. The quality, accuracy, and depth of information provided by a generative AI tool depend entirely on what you ask and how you ask it. This has always been true in the information sciences; university libraries teach entire courses to help undergraduates understand that cataloguing systems aren’t as intuitive and humanized as Google can seem to be. ChatGPT and its ilk appear even more human than Google, providing rich, detailed answers in sophisticated language. But we’re not in Nexus-6 territory yet. The value of your interactions with generative AI still depends on learning to talk to the machine in its preferred manner.
The internet recognized (and monetized) this right away. Shortly after ChatGPT’s public release, the web was filled with courses and instructions for becoming a prompt engineer, which is to say, an expert in asking the right questions. Within weeks, a new discipline had developed, complete with its own procedures and jargon (zero-shot prompting, chain-of-thought prompting) and a flurry of consultants and social media personalities ready to guide you through the confusion. In short: you can ask generative AI anything you like and get an answer, but to get the best possible answer, you might want some foundational knowledge about how to query effectively and how LLMs work.
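To make that jargon concrete: a zero-shot prompt simply asks for the answer, while a chain-of-thought prompt asks the model to reason through intermediate steps first. A minimal sketch of the two patterns as plain prompt strings (no real API is called here; the wording is illustrative, not drawn from any particular course):

```python
# Two of the prompt patterns named above, shown as plain strings.
# The review text and phrasing are illustrative examples only.

zero_shot = (
    "Classify the sentiment of this review as positive or negative: "
    "'The battery died after two days.'"
)

chain_of_thought = (
    "Classify the sentiment of this review as positive or negative. "
    "Think through the clues step by step before giving a final answer: "
    "'The battery died after two days.'"
)

def describe(prompt: str) -> str:
    """Label a prompt by the pattern it uses (for illustration only)."""
    return "chain-of-thought" if "step by step" in prompt.lower() else "zero-shot"
```

The only difference is the instruction to reason aloud, but on multi-step questions that small change can noticeably improve answer quality.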
Developers use generative AI a bit differently than the average person: to generate, refine, and complete code, and to support higher-level data tasks, so that instead of a term paper, they produce applications and functions far more quickly and efficiently than would otherwise be possible. Karol See, Cascadeo’s Head of Product for the Cascadeo AI cloud management platform, notes that while she is not functioning as a developer in her current role, she still thinks like one, and for her, “it’s not really about knowing the right questions, but more figuring out the right flow of conversation.” Generative AI prompting in this context might mean asking the AI how to ask, refining and following up on prompts, and cycling through a series of responses to arrive at the desired outcome.
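That “flow of conversation” can be sketched as a loop: keep the running message history, check each response against a goal, and ask for a revision until it passes. The sketch below assumes a generic chat-style API; `call_llm` is a hypothetical stand-in, not a real provider call:

```python
def call_llm(messages):
    """Hypothetical stand-in for a real chat-completion API call."""
    # In practice this would send `messages` to an LLM provider;
    # here it returns a canned reply so the sketch is self-contained.
    return "def add(a, b):\n    return a + b"

def refine_until(is_good_enough, first_prompt, max_rounds=3):
    """Cycle prompt -> response -> follow-up until the answer passes a check."""
    messages = [{"role": "user", "content": first_prompt}]
    answer = ""
    for _ in range(max_rounds):
        answer = call_llm(messages)
        if is_good_enough(answer):
            break
        # Keep the whole conversation so the model sees its prior attempt.
        messages.append({"role": "assistant", "content": answer})
        messages.append({"role": "user", "content": "Close, but please revise that answer."})
    return answer

result = refine_until(lambda a: "def " in a,
                      "Write a Python function that adds two numbers.")
```

The key design choice is appending each attempt back into the history, so every follow-up prompt is grounded in what the model already said.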
This process extends to extremely advanced functions, many of them automated, like the generative AI integration in Cascadeo AI, where it’s used for monitoring, observability, and inventory analysis, and will soon be employed to support cost optimization and various types of internal and customer-facing reporting.
For example, in monitoring and observability, Karol and her team have developed a system that uses prompt templates to take in data like customized alert thresholds and events, query Cascadeo’s own data lake or an LLM for likely root causes and remediation steps, format that information, and deliver it to a member of our managed services team and to the customer’s preferred alert channel. At that point, an engineer needs only to investigate the environment and determine which of the provided suggestions will best remediate the issue, eliminating the time spent on in-depth research. In inventory analysis, a prompt template is being prototyped to instruct the LLM to provide a summary of a customer’s cloud footprint complete with potential security issues.
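The first step of that pipeline, filling a prompt template with alert data before querying for root causes, might look something like the sketch below. The template wording and field names are assumptions for illustration, not Cascadeo AI’s actual implementation:

```python
# A hedged sketch of the alert-triage step described above: event data
# is merged into a prompt template before being sent to a data lake
# query or an LLM. Field names and wording are illustrative only.

ALERT_PROMPT = (
    "An alert fired in a customer cloud environment.\n"
    "Metric: {metric}\n"
    "Threshold: {threshold}\n"
    "Observed value: {observed}\n"
    "List the most likely root causes and concrete remediation steps."
)

def build_alert_prompt(metric, threshold, observed):
    """Fill the template with the event data that triggered the alert."""
    return ALERT_PROMPT.format(metric=metric, threshold=threshold, observed=observed)

prompt = build_alert_prompt("cpu_utilization", "80%", "97%")
```

Templating like this is what lets the flow run automatically: engineers tune the template once, and every new alert produces a consistent, well-formed query.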
The team is also using generative AI prompts to assist in more complex back-end functions: developing security posture scores for customer environments that give a clear view into a deployment’s alignment with security and governance policies, suggesting database labels and graph types, and creating long-form reports that provide customers with deeper knowledge of their IT infrastructures.
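One way a back-end prompt might frame a posture-score request is sketched below. The 0–100 scale, the findings format, and the wording are assumptions for illustration, not Cascadeo’s actual scoring scheme:

```python
# Illustrative only: rendering security findings into a posture-score
# prompt. Scale and phrasing are assumptions, not a real scoring scheme.

POSTURE_PROMPT = (
    "Given these findings for a cloud deployment, assign a security "
    "posture score from 0 (worst) to 100 (best) and briefly justify it.\n"
    "Findings:\n{findings}"
)

def build_posture_prompt(findings):
    """Render the findings into the prompt body, one bullet per line."""
    return POSTURE_PROMPT.format(findings="\n".join("- " + f for f in findings))

posture_prompt = build_posture_prompt(
    ["storage bucket is publicly readable", "root account has no MFA"]
)
```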
For such incredibly complex desired outcomes, generative AI demands sophisticated prompts and a spirit of experimentation. Its best users regularly find new ways to extract, configure, and employ the massive troves of data in LLMs by continuously refining their methods of inquiry. As our relationship with this technology grows, remaining playful and curious can lead to discoveries as yet unimagined.