Journey to the Cloud

Last week, I had the privilege of speaking on a webinar with F5 about the complexities of securing internally created Large Language Models (LLMs) for organizations. This wasn’t about protecting end users from asking ChatGPT how to make apple pie; it was about helping organizations safeguard their internal models from disclosing sensitive information. I was prepared to discuss AI Gateway features, profiles, and processes, but someone asked a question that really stuck with me: “What if we want to deploy this technology, but we haven’t even started our cloud journey?” AI is not a passing fad; it’s ubiquitous, and it’s reshaping cybersecurity. But the question also highlighted an important point: some organizations haven’t yet embraced the cloud. So what steps should be taken when starting that journey?

Top 3 Considerations for Your Cloud Journey

A smooth move to the cloud isn’t just about shifting workloads; it’s about building a secure foundation. Here are three key areas to focus on when transitioning from on-premises to the cloud: IAM, segmentation, and resiliency. Think of it like moving to a new house: before unpacking, validate what you really need, and don’t carry over that old box of shoes lurking under the staircase.

1. IAM (Identity and Access Management)

There are countless guides on configuring IAM roles and policies, but how do you validate who actually needs access? Does a security analyst who is also a cloud administrator need full admin rights? What about a network engineer who occasionally requires elevated privileges? Should you just grant this individual wildcard (*) access? Getting IAM right requires careful planning with your business units. It’s arguably the most important step when moving to the cloud, because overly permissive access can introduce significant risk.

2. Segmentation

When migrating to the cloud, traffic segmentation and policing are critical.
Cloud providers offer many built-in security tools, but sometimes third-party solutions provide better efficacy for controlling and monitoring traffic. Thoughtful segmentation ensures that even if one segment is compromised, the rest of your environment remains secure.

3. Resiliency

In traditional data center design, we built redundancy into power feeds, port-channels, and VM placement to ensure failover in case of a failure. The cloud promises high availability, but if your architecture isn’t designed for failover across multiple availability zones, a major outage can leave you vulnerable. Your most critical data, whether you call it your “crown jewels” or your “honey pot,” deserves protection through resilient designs that account for failover and disaster recovery.

Final Thoughts

Cloud adoption isn’t just a technology shift; it’s an opportunity to rethink security and resiliency from the ground up. Start with IAM, plan your network segmentation carefully, and design for failover. By doing so, you’ll not only protect your data but also ensure a smooth, secure move to the cloud.
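As a concrete illustration of the least-privilege point from the IAM section above: instead of granting wildcard (*) access, scope a role to the handful of actions it actually needs. The sketch below mirrors the AWS-style IAM policy JSON shape, but the role, actions, and resource ARN are hypothetical examples, not a recommendation for any specific environment.

```python
# A minimal sketch of least-privilege IAM, assuming an AWS-style policy
# document. The analyst role gets read-only log actions on one log-group
# prefix -- nothing else. All names here are illustrative.
ANALYST_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["logs:GetLogEvents", "logs:FilterLogEvents"],
            "Resource": "arn:aws:logs:*:*:log-group:/security/*",
        }
    ],
}

def allowed_actions(policy: dict) -> set[str]:
    """Collect every action a policy explicitly allows."""
    actions = set()
    for stmt in policy["Statement"]:
        if stmt["Effect"] == "Allow":
            actions.update(stmt["Action"])
    return actions
```

One practical way to use a helper like this is in a periodic access review: diff the allowed actions against what the role actually exercised (from access logs) and flag anything unused, which is how permission creep gets caught early.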
AI: Protecting end users from themselves.

Every once in a while, a product or technology comes along that is a complete game changer, not only for organizations but for society as a whole. The advent of AI is not new, but the adoption of large language models has exploded over the past seven years, giving everyday people the ability to understand complex topics and even assist with intricate engineering deployments. When used correctly, these tools can help you achieve your goals with nothing more than words on a screen and iterative prompts. If you equate your organization to a mission, then one thing should always be top of mind: “This mission is too important for me to allow you to jeopardize it.”

AI Guardrails

Like any tool that is genuinely useful and helps you get your job done, these tools call for safety precautions. A carpenter can do a job faster with an electric saw; the saw speeds up the work, but it needs guards for safety. LLMs are no different. They need guardrails to serve as an additional protective layer, reducing the risk of harmful or misleading outputs.

Potential Risks:

Data Privacy - Ensure your organization’s privacy requirements are met by protecting against the leak or disclosure of sensitive corporate information, PII (personally identifiable information), and other secrets.

Content Moderation - Guarantee that the AI applications being used adhere to your company’s content guardrails, approved usage guidelines, and policies. Align the application’s output with its intended use and block attempts to divulge information that could damage your organization’s reputation.

General Risks - Safeguard the application’s integrity and performance by preventing misuse, ensuring secure data management, and employing a range of standard tools and proactive measures, such as detecting and blocking insecure code handling or blocking unwanted URLs in prompts and responses.
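The data-privacy risk above is commonly addressed with pattern-based filters applied to prompts and responses before they leave the model boundary. Below is a minimal sketch of that idea; the regex patterns and category names are illustrative assumptions only, and production guardrails use far more robust detection (entity recognizers, secret scanners, and so on).

```python
import re

# Illustrative patterns for data that should never leave an internal LLM.
# These are simplified examples, not production-grade detectors.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact_sensitive(text: str) -> tuple[str, list[str]]:
    """Redact sensitive matches and report which categories fired."""
    findings = []
    for name, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(text):
            findings.append(name)
            text = pattern.sub(f"[REDACTED-{name.upper()}]", text)
    return text, findings

clean, hits = redact_sensitive("Contact jane.doe@example.com, SSN 123-45-6789.")
```

In a gateway-style deployment, a filter like this would run on both the user’s prompt and the model’s response, with the `hits` list feeding audit logs so security teams can see which policies are firing and why.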
Guardrails and AITG

If you haven’t adopted AI in your organization, or you’re wary of enabling your workforce to use it, you’re not alone. AITG works with organizations to help protect against, identify, and manage the risks associated with LLM-based applications. The old way of blocking URLs or app-IDs to stop access is just that: old. At AITG, security remains the foundation of our approach, guiding how we secure and manage AI systems. Let us help your organization and team work more efficiently by enabling them to use AI the right way, securely and responsibly.