AccessIT Group

Journey to the Cloud

Last week, I had the privilege of speaking on a webinar with F5 about the complexities of securing internally created Large Language Models (LLMs) for organizations. This wasn’t about protecting end users from asking ChatGPT how to make apple pie; it was about helping organizations safeguard their internal models from disclosing sensitive information. I was prepared to discuss AI Gateway features, profiles, and processes, but someone asked a question that really stuck with me: “What if we want to deploy this technology, but we haven’t even started our cloud journey?”

AI is not a passing fad; it’s ubiquitous, and it’s reshaping cybersecurity. But the question highlighted an important point: some organizations haven’t yet embraced the cloud. So what steps should be taken when starting that journey?

Top 3 Considerations for Your Cloud Journey

A smooth move to the cloud isn’t just about shifting workloads; it’s about building a secure foundation. Here are three key areas to focus on when transitioning from on-premises to the cloud: IAM, segmentation, and resiliency. Think of it like moving to a new house: before unpacking, validate what you really need, and don’t carry over that old box of shoes lurking under the staircase.

1. IAM (Identity and Access Management)

There are countless guides on configuring IAM roles and policies, but how do you validate who actually needs access? Does a security analyst who is also a cloud administrator need full admin rights? What about a network engineer who occasionally requires elevated privileges? Should you simply grant that individual wildcard (*) access? Getting IAM right requires careful planning with your business units. It’s arguably the most important step when moving to the cloud, because overly permissive access can introduce significant risk.

2. Segmentation

When migrating to the cloud, traffic segmentation and policing are critical. Cloud providers offer many built-in security tools, but third-party solutions sometimes provide better efficacy for controlling and monitoring traffic. Thoughtful segmentation ensures that even if one segment is compromised, the rest of your environment remains secure.

3. Resiliency

In traditional data center design, we built redundancy into power feeds, port channels, and VM placement to ensure failover in case of a failure. The cloud promises high availability, but if your architecture isn’t designed for failover across multiple availability zones, a major outage can leave you vulnerable. Your most critical data, whether you call it your “crown jewels” or your “honey pot,” deserves protection through resilient designs that account for failover and disaster recovery.

Final Thoughts

Cloud adoption isn’t just a technology shift; it’s an opportunity to rethink security and resiliency from the ground up. Start with IAM, plan your network segmentation carefully, and design for failover. By doing so, you’ll not only protect your data but also ensure a smooth, secure move to the cloud.
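To make the least-privilege point concrete, here is a minimal sketch of deny-by-default policy evaluation. The role names, action strings, and pattern syntax are illustrative assumptions, not any provider’s actual IAM model; the point is that access is granted only when explicitly listed, and wildcards are scoped narrowly rather than handed out as a global “*”.

```python
# Minimal deny-by-default policy check (illustrative model, not a real
# cloud provider's IAM API). An action is allowed only if a role's policy
# explicitly lists a matching pattern.
from fnmatch import fnmatch

# Hypothetical role-to-permission mapping. Note the network engineer gets a
# scoped wildcard ("network:*"), never a global "*" grant.
ROLE_POLICIES = {
    "security-analyst": ["logs:Read", "alerts:Read"],
    "network-engineer": ["network:*"],
}

def is_allowed(role: str, action: str) -> bool:
    """Return True only if the role explicitly permits the action."""
    return any(fnmatch(action, pattern) for pattern in ROLE_POLICIES.get(role, []))
```

With this model, the security analyst can read logs but cannot create users, and an unknown role gets nothing at all — the default is deny, and elevation is a deliberate policy change rather than a standing grant.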

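The resiliency point about multi-availability-zone failover can be sketched as a simple priority-ordered health check. The endpoint names and health-check callable below are hypothetical; a real design would lean on your provider’s load balancers and health probes rather than application code, but the failover logic is the same idea.

```python
# Sketch of zone-aware failover: try endpoints in priority order across
# availability zones and use the first healthy one. Endpoint URLs are
# hypothetical placeholders.
def pick_endpoint(endpoints, is_healthy):
    """Return the first healthy endpoint from a priority-ordered list."""
    for zone, endpoint in endpoints:
        if is_healthy(endpoint):
            return endpoint
    raise RuntimeError("no healthy endpoint in any availability zone")

# Endpoints listed in priority order across three availability zones.
ENDPOINTS = [
    ("us-east-1a", "https://app-az-a.example.internal"),
    ("us-east-1b", "https://app-az-b.example.internal"),
    ("us-east-1c", "https://app-az-c.example.internal"),
]
```

If zone A goes dark, traffic lands on zone B without manual intervention — the design choice the post argues for: assume a zone will fail and make failover automatic.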
Is the Cloud Migration Mindset Snafu Reoccurring with Untethered AI Adoption?

Organizations once rushed to the cloud in search of transformation, innovation, reduced cost of ownership, and a competitive advantage. In that haste, they overlooked a hard truth: threat actors thrive in environments filled with misconfigurations and weak security practices. Many enterprises quickly embraced cloud capabilities, but they failed to bring cybersecurity along with them.

Most organizations never thoroughly answered the foundational question of cloud-era security: Where does our critical data reside? Even now, many enterprises lack a complete inventory of sensitive data locations or data flows. That visibility gap did not disappear; it simply shifted. Now, with the rise of GenAI, that same unknown data is being fed into tools outside organizational control. The result was years of avoidable breaches, exposed buckets, overly permissive identities, and reactive security strategies that continue to ripple across the industry today.

We are witnessing the same pattern with Generative AI and LLMs. The rapid introduction of GenAI and large language models has created unprecedented opportunities: rapid innovation, resource optimization, improved productivity, and enhanced decision quality. Yet one issue persists: Where are the guardrails? For most organizations, they are either immature or nonexistent.

AI Governance should have been implemented from day one, with oversight committees established early to set boundaries, evaluate risks, and shape responsible adoption. To bridge this gap, organizations should define clear roles, responsibilities, and processes for these committees to ensure continuous oversight and accountability. This proactive approach embeds governance into AI strategy from the outset, reducing risk and aligning with best practices.

This is not speculation. Recent research shows that employees are adopting Generative AI at extraordinary rates, often without informing IT or leadership.
A supporting perspective can be found in this post by Ian Paul of Check Point, ‘How CIOs Can Turn AI Visibility into Strategy.’

The implications are significant. Hidden or “shadow” AI usage creates an environment in which innovation occurs organically, but without governance, oversight, or security. Yet that same usage data, once observed, can become an invaluable blueprint for an informed AI strategy: organizations can learn exactly which tools employees find valuable and which workflows are ripe for meaningful AI-driven efficiency gains.

But visibility is the prerequisite for strategy. Security leaders need to understand which AI services are being accessed, what types of prompts are being submitted, how much sensitive content is being shared, and where risky behavior is occurring. Monitoring tools such as AI activity dashboards, data flow analysis, and real-time alerting can provide that visibility, enabling organizations to identify unauthorized AI usage, assess data exposure, and enforce security policies.

The gap between organizational intent and real-world usage shows why AI Governance must be a core function, giving leaders confidence in responsible AI management. The lesson is clear: building visibility, governance, and accountability into AI adoption is how organizations avoid repeating past mistakes. Organizations do not need to slow down innovation. They need to ensure that innovation does not outpace cybersecurity’s ability to support it safely.
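As a starting point for that visibility, here is a minimal sketch of surfacing shadow AI usage from outbound proxy logs. The log line format, the user field, and the domain list are assumptions for illustration; a real deployment would use your proxy’s actual export schema and a maintained catalog of GenAI services.

```python
# Sketch: count requests to known GenAI services per user from simple
# proxy log lines. Log format and domain list are illustrative assumptions.
from collections import Counter

# Hypothetical watchlist of GenAI service hostnames.
GENAI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

def shadow_ai_report(log_lines):
    """Tally requests per (user, GenAI domain) pair."""
    usage = Counter()
    for line in log_lines:
        parts = line.split()  # expected: "<timestamp> <user> <destination-host>"
        if len(parts) == 3 and parts[2] in GENAI_DOMAINS:
            usage[(parts[1], parts[2])] += 1
    return usage
```

Even a crude tally like this turns invisible usage into the blueprint the post describes: it shows which teams already depend on which tools, and where policy, training, or sanctioned alternatives are needed first.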