Data Security Truths That’ll Change How You Think About Protecting Data (and Maybe Keep You Up at Night)

Data is the lifeblood of business innovation, customer engagement, and operational efficiency. Yet as organizations generate, store, and process unprecedented volumes of data across cloud, SaaS, and on-premises environments, the risks of data exposure, misuse, and breach have never been higher. Traditional security tools, while essential, are increasingly insufficient for managing the sprawling, dynamic, and complex data landscapes of modern enterprises.

Enter Data Security Posture Management (DSPM): a proactive category of security solutions designed to provide continuous visibility, automated classification, and real-time monitoring of sensitive data, regardless of where it resides. DSPM is rapidly becoming a cornerstone of modern cybersecurity strategies, enabling organizations to proactively manage data risk, ensure compliance, and empower secure business innovation. This article explores the evolution, core principles, challenges, benefits, and best practices of DSPM, drawing on the latest industry research and real-world adoption trends.

The Data Explosion: It's Not Just Hype, It's a Full-Blown Crisis

Let's start with the jaw-dropper: over 90% of all data was created in just the last two years. That's not a typo. And by the beginning of 2026, we're staring down the barrel of 181 zettabytes of data. Digital transformation, cloud adoption, IoT, AI, and the proliferation of SaaS applications fuel this explosion. Data is now scattered across on-premises servers, public and private clouds, SaaS platforms, and edge devices.

The Expanding Attack Surface

As data becomes more distributed, the attack surface expands. Sensitive information, such as customer records, financial data, intellectual property, employee details, and health records, can be found in structured databases, unstructured files, emails, backups, and ephemeral cloud storage. Every new environment and tool compounds the complexity of tracking, classifying, and securing this data.

Visibility: The Blind Spot Nobody Wants to Admit

Here's the kicker: 83% of organizations admit they lack visibility into their data, making manual methods inadequate and underscoring the need for automated solutions to avoid flying blind. You cannot be confident in your security posture if you have no insight into what data you have, how much of it is regulated, which users or identities can access it, or how it has transformed over time. I found that this isn't just a technical problem, it's a trust problem. If you don't know what you have, how can you protect it?

What is Data Security Posture Management (DSPM)?

Definition and Scope

DSPM is a security discipline and technology category focused on providing continuous, automated visibility into the security posture of sensitive data across all environments: on-premises, cloud, SaaS, and hybrid. It encompasses automated data discovery, contextual classification, real-time monitoring, and risk assessment and remediation.

DSPM is not a replacement for existing security tools such as DLP, SIEM, or CSPM; instead, it integrates with them, providing a complementary layer that focuses on the data itself: its location, context, and risk profile. This integration helps security teams leverage their current investments while enhancing data visibility and control.

How DSPM Differs from Other Security Tools

CSPM, SSPM, and DLP are valuable, but DSPM takes a unified, data-centric view, integrating discovery, classification, monitoring, and risk management into a single workflow.
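To make the idea of automated discovery and classification more concrete, here is a minimal Python sketch of the kind of pattern-based scanning a DSPM tool performs at far greater scale. The patterns, categories, and directory path are illustrative assumptions, not the detection logic of any particular product.

```python
import re
from pathlib import Path

# Hypothetical, simplified detection patterns; real DSPM platforms combine
# hundreds of detectors with contextual analysis and validation (e.g., Luhn checks).
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify_file(path: Path) -> dict:
    """Return counts of potential sensitive-data matches per category."""
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return {}
    results = {}
    for name, pattern in PATTERNS.items():
        matches = pattern.findall(text)
        if matches:
            results[name] = len(matches)
    return results

def scan(root: str) -> None:
    """Walk a directory tree and flag files that appear to contain sensitive data."""
    for path in Path(root).rglob("*"):
        if path.is_file():
            hits = classify_file(path)
            if hits:
                print(f"[SENSITIVE] {path}: {hits}")

if __name__ == "__main__":
    scan("./sample_data")  # hypothetical path to scan
```

Real DSPM platforms layer context on top of this raw detection, such as who can access the data, where it flows, and how it is protected, which is what turns classification into posture management.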
Survey Insights

Findings from the 2024 DSPM Adoption Report published by Cyera underscore these trends in real-world adoption.

DSPM: Not Just Another Tool, It's the Nerve Center

Forget the patchwork of point solutions. DSPM is a unified, data-centric approach that brings together discovery, classification, monitoring, and risk management in one place. It's not about adding another dashboard; it's about finally seeing the whole picture. Automated discovery, contextual classification, real-time monitoring, risk assessment: DSPM does it all, and then some. I found that this shift isn't just about technology, it's about mindset. You stop reacting and start anticipating.

Core Components and Features of DSPM

- Data Discovery
- Data Classification
- Real-Time Monitoring and Alerting
- Risk Assessment and Remediation
- Integration and Scalability

Key Challenges Addressed by DSPM

- Excessive Data Access and Overprivileged Accounts
- Lack of Visibility
- Data Management at Scale
- Insider and Third-Party Risk
- Tool Fragmentation

Manual Methods? They're Dead Weight

Still relying on manual data discovery or a jumble of disconnected tools? I found that's a recipe for disaster. Manual methods can't keep up with the scale or speed of today's data sprawl. DSPM's automated, AI-powered classification and monitoring are the only way to stay ahead of threats and compliance headaches.

"DSPM is rapidly becoming a cornerstone of modern cybersecurity strategies, enabling organizations to proactively manage data risk, ensure compliance, and empower secure business innovation."

The Future: AI, Automation, and Unified Platforms

Looking ahead, I found that DSPM is evolving fast. Expect deeper AI integration, more intelligent automation, and platforms that unify data security across every environment: cloud, on-prem, SaaS, and even AI apps. The days of fragmented, reactive security are numbered.

Final Thought: Are You Ready for the Data Security Reality Check?

If you're still treating data security as an afterthought, the numbers and the risks should give you pause. DSPM isn't just another acronym; it's the new foundation for protecting what matters most. The question isn't whether you'll need it, but how soon you'll make your next move. Data security isn't just about more tools; it's about seeing what you've been missing and acting before it's too late.
AI as an Insider Threat: Expanded Risks with Expanded Usage

Next-generation AI models may pose a "high" cybersecurity risk, including the potential to generate sophisticated exploits or assist intrusion operations, according to a warning from OpenAI. This highlights that AI is no longer just a defensive tool; it is a strategic attack surface that organizations must actively govern. Adding to that, 60% of organizations are highly concerned about employee misuse of AI tools enabling insider threats, according to the 2025 Insider Risk Report by Cybersecurity Insiders and Cogility.

The Problem: AI Expands the Attack Surface

AI is now embedded across workflows in engineering, HR, finance, and clinical operations, introducing new and often misunderstood risks. Modern AI systems can generate executable code, identify vulnerabilities, craft personalized phishing messages, and automate reconnaissance. While these tools support security teams, they also enable misuse by anyone with access. Employees, contractors, or third parties may unintentionally or deliberately misuse AI in ways traditional controls can miss. Shadow AI use, unapproved model integrations, and poorly governed prompts increase exposure. Compounding the issue, 62% of organizations faced at least one deepfake attack in the past year, and 32% experienced attacks targeting their AI applications, according to a 2025 Gartner survey.

Why This Matters: Business, Regulatory, and Board Risk

AI-driven insider risk is a strategic business threat. Misuse can corrupt data, create vulnerabilities, disrupt operations, and expose sensitive information, impacting productivity, customer trust, and competitive advantage. Regulatory risks are growing as organizations must comply with HIPAA, SOX, GDPR, and new AI governance requirements focused on transparency and accountability. Noncompliance risks costly enforcement and reputational harm. Boards now expect CISOs and CIOs to clearly explain how AI-enabled insider risks are governed and mitigated. Failure to do so may weaken investor confidence and slow strategic initiatives.

How Organizations Can Prepare

Organizations should adopt a holistic People, Process, Technology approach.

- People: Train employees and leaders on responsible AI use and insider risk awareness.
- Process: Implement strong AI governance frameworks defining acceptable use, model oversight, and ownership. Incorporate AI-specific threat modeling and behavioral analytics, leveraging standards like the NIST AI Risk Management Framework.
- Technology: Enforce identity and access controls around AI tools and sensitive data, along with continuous monitoring to detect misuse early (a simple detection sketch appears at the end of this article).

Finally, investing in ongoing employee education is critical because even as AI evolves, people remain the backbone of any defense strategy. AccessIT Group's Strategy and Transformation practice helps organizations design and implement AI-aware insider risk programs that align strategy, governance, and technology controls to protect sensitive data and enable secure, value-driven AI adoption.
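As one illustration of the Technology pillar above, the hedged Python sketch below shows how a team might surface unapproved (shadow) AI use from proxy or DNS logs. The log format, domain list, and approved-tools set are hypothetical placeholders; a real program would pull these from the organization's own logging pipeline and AI governance register.

```python
from collections import Counter

# Hypothetical list of well-known generative AI endpoints; in practice this would be
# maintained from threat intel feeds, CASB catalogs, or an approved-tools register.
AI_DOMAINS = {"chat.openai.com", "api.openai.com", "claude.ai", "gemini.google.com"}
APPROVED = {"api.openai.com"}  # domains sanctioned by the (assumed) AI usage policy

def flag_shadow_ai(proxy_log_lines):
    """Count requests to AI services that are not on the approved list.

    Each log line is assumed to be 'timestamp user domain' (space-separated),
    a simplification of real proxy/DNS log formats.
    """
    hits = Counter()
    for line in proxy_log_lines:
        parts = line.split()
        if len(parts) < 3:
            continue
        _, user, domain = parts[0], parts[1], parts[2]
        if domain in AI_DOMAINS and domain not in APPROVED:
            hits[(user, domain)] += 1
    return hits

if __name__ == "__main__":
    sample = [
        "2025-11-02T10:15:00 alice chat.openai.com",
        "2025-11-02T10:16:12 bob claude.ai",
        "2025-11-02T10:17:45 alice api.openai.com",
    ]
    for (user, domain), count in flag_shadow_ai(sample).items():
        print(f"Unapproved AI use: {user} -> {domain} ({count} requests)")
```

Detection like this only surfaces activity; the findings still need to feed the governance process (education, policy exceptions, or access controls) rather than serve as a blocklist on their own.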
Breach and Attack Simulations: The Next Step in Cyber Defense

In today's threat landscape, cyberattacks are no longer a matter of if, but when. Traditional security testing methods, like vulnerability scans and penetration tests, are essential, but they often represent only a snapshot in time. Organizations need a more realistic, continuous way to evaluate their defenses, such as simulating how an attacker would operate after compromising a user account. That's where Breach and Attack Simulations (BAS) come in.

What Is a Breach and Attack Simulation?

A Breach and Attack Simulation is a cybersecurity testing process that mimics real-world attack tactics, techniques, and procedures (TTPs). Instead of waiting for a real attacker to exploit your weaknesses, BAS platforms proactively test how your people, processes, and technologies respond to simulated breaches across the entire attack chain, using attacker behaviors drawn from real-world campaigns.

Key Benefits of Breach and Attack Simulations

Validation of Security Controls
Security tools like firewalls, EDRs, and SIEMs need constant tuning. BAS exercises help validate whether these tools are configured correctly and effectively detect and block attacks, without waiting for a real breach to find out.

Realistic, Adversary-Based Testing
BAS exercises leverage real-world attacker behaviors sourced from frameworks like MITRE ATT&CK, ensuring that the simulated attacks mirror the methods used by advanced threat actors.

Measurable Risk Reduction
Each simulation produces actionable data, showing which attack stages succeed or fail, which alerts are triggered, and where gaps exist. Security teams can prioritize remediation based on quantifiable results.

Faster Incident Response
Running BAS exercises helps SOC analysts practice real-world detection and response workflows. This not only improves response times but also strengthens coordination across teams.

A Proactive Approach to Cyber Resilience

With threats evolving daily, security can't be a once-a-year exercise. Breach and Attack Simulations bring realistic, continuous testing to cybersecurity programs, turning assumptions into measurable facts. By integrating BAS into your security operations, your organization gains the insight and confidence needed to stay ahead of attackers, before they strike.
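For a flavor of what a single, very small simulation "technique" can look like, here is a hedged Python sketch that drops the industry-standard, harmless EICAR test file and checks whether endpoint protection removes it. This illustrates the concept only; commercial BAS platforms chain many such safe simulations across the ATT&CK matrix, with scheduling, reporting, and safeguards this sketch does not attempt. The file name, wait time, and technique label are arbitrary assumptions.

```python
import os
import time
import tempfile

# The EICAR string is a standard, benign test file that most antivirus/EDR products
# are designed to detect. Writing it and checking whether it survives is a crude
# stand-in for one simulated "malware drop" step in an attack chain.
EICAR = r"X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"

def simulate_malware_drop(directory: str) -> dict:
    """Drop the EICAR test file and report whether endpoint protection intervened."""
    path = os.path.join(directory, "bas_eicar_test.txt")
    try:
        with open(path, "w") as f:
            f.write(EICAR)
    except PermissionError:
        return {"technique": "malware-drop-test", "result": "blocked_on_write"}
    time.sleep(10)  # give the endpoint agent time to react
    detected = not os.path.exists(path)
    if not detected:
        os.remove(path)  # clean up if nothing intervened
    return {"technique": "malware-drop-test", "result": "detected" if detected else "missed"}

if __name__ == "__main__":
    print(simulate_malware_drop(tempfile.gettempdir()))
```

A "missed" result here would become one data point in the measurable-risk picture described above: a control gap to investigate, tune, and retest.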
Beating the Clock Without Losing Credibility: A CISO’s Guide to Year-End Security Decisions

With only a short window remaining in the year, many CISOs are under direct pressure to deploy remaining security budget before it is lost in the next fiscal cycle. That pressure often comes with increased executive scrutiny, where year-end spend is later evaluated through a straightforward question: what value did this investment deliver, and why is it not fully implemented yet?

In this environment, the risk is not spending the budget. The risk is spending it in a way that creates operational friction or unrealistic expectations in the new year. New tools acquired late in the year frequently enter the organization without adequate time for onboarding, integration, or staffing alignment. Even strong technologies can struggle to demonstrate value when introduced without a clear execution path.

At the same time, year-end is often the point where initiatives that have already been planned, evaluated, and aligned over the course of the year are ready to move forward. For CISOs, executing on these established decisions can improve cost predictability, support budget efficiency, and provide a clearer contractual footing going into the next fiscal year. In these cases, moving ahead is not reactive spending but completion of deliberate planning.

Cloud marketplaces are particularly relevant in this context. When used appropriately, they allow organizations to apply remaining budget in ways that align with existing cloud strategies and procurement models. Marketplace purchases can be executed quickly, integrated directly into current environments, and reduce the perception of introducing new standalone platforms. This often makes them easier to explain and defend to executive stakeholders.

The most effective year-end actions typically fall into two areas. The first is completing purchases that teams are already prepared to operationalize, including technologies or expansions that were evaluated earlier and have a defined implementation plan. The second is strengthening the adoption of capabilities already in place, such as enabling advanced features, expanding coverage, or adding services that improve outcomes without increasing architectural complexity.

Challenges tend to surface in January when there is a gap between what leadership expects and what teams are realistically able to deliver. Acquiring net new technology late in the year without a clear deployment plan often leads to difficult conversations when progress is slower than anticipated. Avoiding this outcome does not require delaying decisions; it requires maintaining alignment between what is purchased and what can be executed.

For CISOs managing year-end budget pressure, the objective is not to spend faster. The objective is to spend in ways that are defensible, operationally sound, and aligned with existing priorities. By executing on established plans, leveraging cloud marketplaces where they fit naturally, and avoiding last-minute additions that lack a clear delivery model, organizations can close out the year responsibly and enter the next fiscal cycle without carrying unnecessary risk.
The Evolution of Cyber Risks in M&A: Rebalancing Approaches and Countermeasures in a Growing Threat Landscape

53% of surveyed organizations report they have encountered a critical cybersecurity issue or incident during an M&A that put the deal into jeopardy, according to ForeScout ("The Role of Cybersecurity in M&A Diligence"). As such, visibility into key risks and determining actionable priorities are critical components of the Mergers and Acquisitions (M&A) lifecycle. Although the role of cybersecurity in M&A, especially during due diligence, is nothing new to the industry, it is too often treated as a check-box activity, leaving many issues underestimated, unidentified, or even unseen. Today, threat actors are increasingly targeting M&A announcements themselves, or indicators of a potential transaction, to extract leverage, using leaked deal data, phishing schemes, and ransomware to exploit periods of organizational transition and distraction. Now more than ever, organizations must proactively evolve their cybersecurity strategies, rebalancing due-diligence approaches and strengthening countermeasures to keep pace with a rapidly growing and increasingly sophisticated threat landscape.

The Pace of Change

While the risk and threat landscape has evolved significantly in recent years, approaches to gaining risk visibility and assessing business-level impacts in M&A have fallen behind. These approaches must evolve in step to position deals for success and manage risk liabilities that are growing in magnitude, with impacts extending beyond cyber breaches into large-scale reputational damage, costly legal affairs, and, for public companies, hits to market capitalization. Several issues warrant heightened concern.

Change Influencers

At a macro scale, heightened geopolitical tensions and geostrategic influences are placing certain industries and demographics at increased risk. This is often the realm of nation-state actors or their "professional" affiliates.

Key Areas to Consider Enhancing

1. Data Ecosystem Leakage and Exfiltration: shadow IT; assets in an under-managed or under-configured state; and data boundaries, operational processes, and behaviors
2. Attack Surface and Reconnaissance
3. Legacy Debt Accumulation
4. Technology Licensing Hangovers
5. The Role of the Security Tech Stack

In Conclusion

In today's rapidly evolving threat landscape, cybersecurity is no longer optional in M&A; it's mission-critical. Organizations must move beyond checkbox due diligence, proactively identifying and addressing risks before they can jeopardize a deal. Only by rebalancing strategies and strengthening defenses can companies protect deal value and emerge more resilient in an era defined by digital risk.
Holiday Phishing Scams: How to Stay Cyber-Safe This Festive Season

The holiday season is upon us, which is usually a time for giving, connecting, and celebrating, but unfortunately, it's also prime time for cybercriminals. Every year, phishing attacks spike during the holidays, starting with Black Friday and Cyber Monday, taking advantage of busy shoppers, generous donors, and distracted employees. Whether you're clicking through online sales or managing year-end finances, knowing how to spot and stop phishing attempts can keep your data, and your holiday spirit, safe.

Why Phishing Increases During the Holidays

Cybercriminals know people are more likely to let their guard down this time of year, with busy shoppers, generous donors, and distracted employees all making attractive targets. According to cybersecurity reports, phishing email volume can increase by up to 80% during the holiday season.

Common Types of Holiday Phishing Scams

A number of recurring scams appear between November and January, typically impersonating the retailers, shippers, charities, and colleagues people hear from most during the season.

How to Protect Yourself and Your Organization

The good news: a few smart habits can protect you from most phishing threats.

A Secure Season Starts with Awareness

The holidays should be a time of joy, not digital danger. By staying alert to phishing tactics and sharing these best practices with your colleagues, friends, and family, you can ensure a safer, stress-free holiday season online. Remember: when something sounds too good to be true, or too urgent to wait, it's probably a phish.
AI: Protecting End Users from Themselves

Every once in a while, a product or technology comes along that is a complete game changer, not only for organizations but for society as a whole. AI itself is not new, but the adoption of large language models has exploded over the past seven years, giving everyday people the ability to understand complex topics and even assist with intricate engineering deployments. When used correctly, these tools can help you achieve your goals with nothing more than words on a screen and iterative prompts. If you equate your organization to a mission, then one thing should always be top of mind: "This mission is too important for me to allow you to jeopardize it."

AI Guardrails

Like any tool that helps you get the job done faster, LLMs call for safety precautions. A carpenter can do a job faster with an electric saw, but the saw needs a guard to be used safely. LLMs are no different: they need guardrails to serve as an additional protective layer, reducing the risk of harmful or misleading outputs.

Potential Risks

- Data Privacy: Ensure organizational privacy requirements are met by protecting against the leak or disclosure of sensitive corporate information, PII (personally identifiable information), and other secrets.
- Content Moderation: Guarantee that the AI applications being used adhere to your company's content guardrails, approved usage guidelines, and policies. Align the application's output with its intended use and block attempts to divulge information that could damage your organization's reputation.
- General Risks: Safeguard the application's integrity and performance by preventing misuse, ensuring secure data management, and employing a range of standard tools and proactive measures, such as detecting and blocking insecure code handling or blocking unwanted URLs in prompts and responses.

Guardrails and AITG

If you haven't adopted AI in your organization, or you're hesitant to enable your workforce to use it, you're not alone. AITG works with organizations to help protect against, identify, and manage the risks associated with LLM-based applications. The old way of blocking URLs or App-IDs to stop access is just that: old. At AITG, security remains the foundation of our approach, guiding how we secure and manage AI systems. Let us help your organization and team work more efficiently by enabling them to use AI the right way, securely and responsibly.
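As a concrete, if simplified, illustration of a data-privacy and content guardrail, the Python sketch below screens prompts for apparent secrets and blocked URLs before they would reach an LLM. The regex patterns and blocklist are hypothetical assumptions; production guardrails rely on much richer detectors, classifiers, and policy engines.

```python
import re

# Hypothetical patterns and blocklist for illustration only.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}
BLOCKED_URL = re.compile(r"https?://(?:[\w.-]*\.)?(pastebin\.com|bit\.ly)\S*", re.I)

def check_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, reasons). Block prompts containing apparent secrets or risky URLs."""
    reasons = []
    for name, pattern in PII_PATTERNS.items():
        if pattern.search(prompt):
            reasons.append(f"possible {name} detected")
    if BLOCKED_URL.search(prompt):
        reasons.append("blocked URL detected")
    return (not reasons, reasons)

if __name__ == "__main__":
    allowed, reasons = check_prompt("Summarize this: my SSN is 123-45-6789")
    print("ALLOW" if allowed else f"BLOCK: {', '.join(reasons)}")
```

The same kind of check can be applied to model responses, which is where content-moderation and reputation-protection policies typically take effect.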
NIST AI RMF vs ISO/IEC 42001

Bridging AI Governance and Risk Management

As artificial intelligence becomes increasingly integral to business operations, regulators and standards bodies are establishing frameworks to promote trustworthy, transparent, and responsible AI. Three of the most influential are the NIST AI Risk Management Framework 100-1 (AI RMF 1.0), its companion resource 600-1 for generative AI, and the ISO/IEC 42001:2023 Artificial Intelligence Management System standard. While these frameworks all aim to foster responsible AI, they differ in scope, structure, and implementation approach. Understanding these similarities and differences helps organizations integrate them into a unified, defensible AI governance strategy.

Purpose and Intent

NIST AI RMF (AI 100-1), released by the U.S. National Institute of Standards and Technology in January 2023, provides a voluntary framework to help organizations identify, manage, and mitigate AI risks throughout the AI lifecycle. It focuses on promoting trustworthiness, ensuring AI systems are valid, reliable, safe, secure, fair, and accountable.

ISO/IEC 42001:2023, by contrast, is a certifiable management system standard, similar in structure to ISO/IEC 27001 for information security. It defines requirements for establishing, implementing, maintaining, and continually improving an AI Management System (AIMS), embedding AI governance directly into organizational structures and operations.

In short: NIST AI RMF is voluntary risk guidance, while ISO/IEC 42001 is an auditable, certifiable management system.

Structural Approach

- NIST AI RMF. Core structure: 4 Functions (Govern, Map, Measure, Manage). Purpose: guides organizations through the lifecycle of identifying and mitigating AI risks.
- ISO/IEC 42001. Core structure: Plan-Do-Check-Act (PDCA) management cycle. Purpose: establishes an operational, auditable AI governance system aligned with other ISO standards.

Both utilize risk-based thinking; however, NIST's approach is functional and descriptive, whereas ISO's is prescriptive and certifiable.

Common Themes and Overlaps

Despite structural differences, both frameworks share strong conceptual alignment and reinforce each other in practice.

1. Risk-Based Approach. Both emphasize risk assessment, treatment, and monitoring.
2. Lifecycle Integration. Both integrate risk management across the AI lifecycle, from data design and model training to deployment and ongoing monitoring. NIST defines AI actors and their roles, while ISO formalizes these within organizational leadership, planning, and accountability structures.
3. Trustworthiness and Ethical Principles. Both promote trustworthy AI, emphasizing accountability, transparency, fairness, safety, and privacy. NIST defines seven core characteristics of trustworthy AI; ISO requires policies and controls that embed these values in corporate governance.
4. Continuous Improvement. NIST encourages regular reviews and updates to adapt to the evolution of AI. ISO mandates continual improvement of the AI management system as a formal clause requirement.

Key Differences

- Nature: NIST AI RMF is voluntary guidance; ISO/IEC 42001 is a certifiable management system.
- Focus: NIST centers on AI risk identification and mitigation; ISO centers on organizational governance and control over AI.
- Intended Users: NIST targets AI developers, deployers, and policymakers; ISO targets organizations seeking formal certification.
- Outcome: NIST yields improved AI trustworthiness and transparency; ISO yields compliance evidence, accountability, and certification readiness.
- Structure: NIST is organized into 4 Functions (Govern, Map, Measure, Manage); ISO into 10 clauses (Context, Leadership, Planning, Operation, etc.).
- Documentation: NIST documentation is recommended; ISO documentation is mandatory (policies, risk register, impact assessments, controls).
- External Alignment: NIST aligns with OECD guidance, ISO 31000, and ISO/IEC 22989; ISO/IEC 42001 aligns with ISO 27001, 9001, 27701, and 23894.
- Auditability: NIST supports informal self-assessment; ISO enables third-party certification.

Consideration for Generative AI (NIST AI 600-1)

In July 2024, NIST introduced NIST AI 600-1, the Generative Artificial Intelligence Profile, a companion to the AI RMF. This document expands on the AI RMF principles to address the unique risks associated with generative AI systems. While NIST AI RMF 100-1 establishes a broad foundation for risk management across all types of AI, NIST AI 600-1 focuses on the risks of generative models, such as large language models (LLMs), image generators, and other foundation models, including model development, data security, and content integrity. Its key aspects address risks that are unique to, or amplified by, generative AI and map suggested actions back to the AI RMF functions.

For organizations already aligned with ISO/IEC 42001, incorporating NIST AI 600-1 controls can strengthen compliance by demonstrating due diligence over the secure development and responsible deployment of generative AI, especially in sectors facing increased regulatory scrutiny, such as finance, healthcare, and education.

Practical Integration Strategy

For organizations already certified under ISO/IEC management systems (such as 27001 or 9001), ISO/IEC 42001 provides a natural extension for AI governance. For organizations earlier in their AI maturity journey, NIST AI RMF serves as an accessible entry point to build foundational risk management processes before scaling toward certification. A combined approach is often most effective: use NIST AI RMF to structure risk identification and measurement, and ISO/IEC 42001 to operationalize and certify the resulting governance program.

Example of Complementary Alignment

- Govern maps to Clauses 4-5 (Context, Leadership, Policy): establishes AI governance culture and accountability.
- Map maps to Clauses 6-7 (Planning, Support): identifies AI risks, opportunities, and required controls.
- Measure maps to Clause 9 (Performance Evaluation): audits and monitors AI performance and risk metrics.
- Manage maps to Clauses 8 and 10 (Operation, Improvement): implements and continuously enhances AI management practices.

AI Governance Through Policy Creation, Dissemination, and Enforcement

AI governance, achieved through policy creation, dissemination, and enforcement, is essential for ensuring that artificial intelligence is developed, deployed, and managed responsibly. Policies establish clear boundaries and expectations around how AI systems should operate, addressing critical aspects such as data privacy, bias mitigation, model transparency, and accountability. Without formalized governance policies, organizations risk deploying AI in ways that amplify bias, expose sensitive data, or create ethical and regulatory liabilities. By codifying principles of fairness, explainability, and human oversight into enforceable frameworks, enterprises can ensure that their AI systems align with their organizational values, legal requirements, and risk tolerance levels.

Enforcement of these policies is equally critical, as governance without implementation is merely aspirational. Active monitoring, auditing, and continuous evaluation of AI systems are necessary to ensure compliance with established policies and to detect deviations early. Enforcement mechanisms, such as automated controls, periodic reviews, and internal AI ethics committees, translate policy intent into operational reality. This not only reduces risks but also builds trust among stakeholders, customers, and regulators.
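To illustrate what such an automated control might look like in practice, here is a minimal, hypothetical Python sketch that checks an AI use-case register against a few governance requirements (a named owner, a completed impact assessment, documented human oversight). The register fields and rules are assumptions for illustration, not requirements quoted from either standard.

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    # Fields are illustrative; a real AI inventory would carry far more metadata.
    name: str
    owner: str | None
    impact_assessment_done: bool
    human_oversight_documented: bool

# Hypothetical policy rules expressed as (description, predicate) pairs.
POLICY_CHECKS = [
    ("named accountable owner", lambda uc: bool(uc.owner)),
    ("completed AI impact assessment", lambda uc: uc.impact_assessment_done),
    ("documented human oversight", lambda uc: uc.human_oversight_documented),
]

def evaluate(register: list[AIUseCase]) -> list[str]:
    """Return a list of policy gaps found across the AI use-case register."""
    findings = []
    for uc in register:
        for description, check in POLICY_CHECKS:
            if not check(uc):
                findings.append(f"{uc.name}: missing {description}")
    return findings

if __name__ == "__main__":
    register = [
        AIUseCase("resume-screening-llm", owner="HR Ops",
                  impact_assessment_done=False, human_oversight_documented=True),
        AIUseCase("support-chatbot", owner=None,
                  impact_assessment_done=True, human_oversight_documented=True),
    ]
    for finding in evaluate(register):
        print("POLICY GAP:", finding)
```

Simple checks like these do not replace audits or ethics review, but they give governance teams an early, repeatable signal that policy intent is (or is not) being carried into practice.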
Effective AI governance through strong policy enforcement ultimately strengthens organizational resilience, enabling innovation with confidence while maintaining ethical integrity and regulatory compliance.

Conclusion

The evolution of AI governance now encompasses three complementary standards: NIST AI RMF (100-1), ISO/IEC 42001:2023, and NIST AI 600-1, each addressing a distinct yet interconnected layer of responsibility. Together, these frameworks form a comprehensive AI governance ecosystem, one that balances innovation with accountability and automation with human oversight. By integrating all three, organizations can demonstrate not only compliance and control, but also confidence and credibility in how they govern and deploy AI.
Families at Risk: Digital Threats to C-Suite Executives Don’t Stop at the Boardroom

Strategy and Transformation Practice

72% of U.S. senior executives were targeted by cyberattacks between February 2023 and August 2024, according to a 2024 report by GetApp. While the success and impact of these attacks vary, one thing is clear: businesses are becoming harder targets. Through stronger employee awareness, governance, and tooling, attackers are being forced to evolve. As a result, they're turning to executives' personal lives, and families, as potential entry points. This includes leveraging personal data about spouses and children from data brokers and social media sites. Cybercriminals are launching SIM-swaps, phishing campaigns, and emotional extortion tactics designed to bypass corporate security through personal channels. In this new threat landscape, protecting executive leadership means protecting their households. Cybersecurity at the top must now extend from the boardroom into the home. In one troubling example, attackers turned to an executive's child to gain access they could not get directly.

While this threat is pervasive among the general population, it's particularly acute for high-profile individuals and their families. "Doxing," as it's commonly referred to, is the malicious act of publicly revealing someone's private information without their consent. This often involves the disclosure and sale of personally identifiable information (PII) on the dark web, where criminals buy and use it for identity theft, fraud, and targeted attacks.

Where is this information found? Unfortunately, it can be found easily in a number of places: public sources like LinkedIn, company bios, press releases, and social media; data broker sites that aggregate public personal information, including home addresses; and breach dumps, including email/password leaks on dark web markets and public breach repositories.

The information can be used in a number of attacks. One is SIM-swapping, where attackers hijack a child's phone number and impersonate them in emotionally charged calls to pressure the executive into approving actions like a Multi-Factor Authentication (MFA) bypass. In some cases, attackers extort an executive's child, threatening to expose personal information, to coerce them into installing malware and compromising the family's home network. Additionally, threat actors use brokered family data to impersonate trusted loved ones via email or phone, executing pretexting attacks designed to trick executives into disclosing credentials or installing malware.

How can you protect yourself, your family, and your business? Start by understanding where the exposure lies. SIM-swapping, spoofing, and phishing attacks often start with a child's or spouse's compromised phone or email. Malware installed on a family member's device can pivot into executive work networks or data. Family members are often the weakest link in security, especially children, and attackers routinely buy executive and family details from data brokers to impersonate or threaten.

As attackers increasingly target executives through their families, personal and household security is critical to reducing risk for the entire business. Securing family data, strengthening account protections, and improving cyber hygiene help close vulnerable entry points that could compromise corporate systems. AccessIT Group offers Digital Executive Protection, providing thorough OSINT reviews to identify exposed personal information and tailored digital security training for executives.
These training courses include take-home materials for families, empowering them to maintain strong defenses and safeguard both personal and business assets.
Inside the 2025 PCI SSC North America Community Meeting: Insights, Myths, and Key Takeaways

This week, the payments security community gathered in Fort Worth, Texas, for the highly anticipated 2025 PCI SSC North America Community Meeting. Held from September 16-18, the event brought together Council staff, industry experts, and stakeholders from across North America to discuss the latest in payment card security, technical updates, and collaborative opportunities.

Setting the Stage: Why the PCI Community Meeting Matters

Every year, the PCI SSC North America Community Meeting is more than just a conference; it's a crucial gathering that wouldn't be the same without the varied perspectives from across the industry, including yours. This event sparks innovation, deepens relationships, and helps ensure that the standards safeguarding cardholder data stay strong and up to date in a rapidly changing environment.

Key Themes and Highlights

1. Technical and Security Updates
A central focus of this year's meeting was the latest technical and security developments in the payments ecosystem. Council staff and industry leaders shared insights on evolving threats, compliance requirements, and best practices for securing payment data. Attendees learned about upcoming changes to PCI standards and how these will impact merchants, service providers, and solution vendors.

2. Engaging Sessions and Expert Speakers
The agenda featured a robust lineup of sessions led by renowned speakers and subject matter experts. Topics ranged from practical guidance on implementing PCI DSS v4.0 to deep dives into emerging technologies such as tokenization, cloud security, and AI-driven fraud prevention. Panel discussions and interactive workshops encouraged lively debate and knowledge sharing among participants.

3. Community Collaboration
Collaboration remains a hallmark of the PCI Community Meeting. This year's event emphasized the importance of active participation within the PCI ecosystem. Attendees were encouraged to join Special Interest Groups (SIGs), contribute to standards development, and network with peers facing similar challenges.

4. Looking Ahead: A Global Perspective
While the focus was on North America, the meeting also previewed upcoming PCI SSC events in Europe and Asia-Pacific, highlighting the global nature of payment security challenges and the need for international cooperation.

My Presentation: Busting PCI Myths

A personal highlight this year came unexpectedly when I was asked at the last minute to fill in for a tech talk slot. I presented "Busting PCI Myths: Practical Truths for Real Security," a topic I'm passionate about after nearly two decades as a QSA and PCI advisor. During my talk, I addressed some of the most persistent misconceptions that continue to circulate in the industry. The key takeaway? Don't let PCI myths lull you into a false sense of security. Real protection comes from understanding your true responsibilities and building strong, layered defenses.

Ongoing Challenges: Requirements 6.4.3 and 11.6.1

Just like last year, there was significant discussion and some confusion around PCI DSS requirements 6.4.3 and 11.6.1. These requirements introduce critical mandates for payment page script management and tamper detection, even for merchants completing the simplest SAQ-A. Many attendees were seeking practical guidance on how to implement these controls effectively, especially in cloud environments and where third-party service providers are involved.
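To ground the discussion of 6.4.3 and 11.6.1, here is a hedged Python sketch of one building block: comparing payment page scripts against an approved hash inventory and alerting on changes. The URL and digest are placeholders, and a real deployment would also need to cover HTTP headers, the script authorization workflow, and a monitoring cadence justified by a targeted risk analysis; this is an illustration, not a complete compliance solution.

```python
import hashlib
import urllib.request

# Hypothetical baseline inventory: script URL -> approved SHA-256 digest.
# In practice this would come from the authorized script inventory kept for 6.4.3.
BASELINE = {
    "https://payments.example.com/checkout.js": "d2f1...placeholder-digest...",
}

def fetch_digest(url: str) -> str:
    """Download a script and return its SHA-256 hex digest."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return hashlib.sha256(resp.read()).hexdigest()

def check_payment_scripts(baseline: dict) -> list[str]:
    """Compare each payment page script against its approved hash and report changes."""
    alerts = []
    for url, expected in baseline.items():
        try:
            actual = fetch_digest(url)
        except Exception as exc:  # network errors, removed scripts, etc.
            alerts.append(f"{url}: could not verify ({exc})")
            continue
        if actual != expected:
            alerts.append(f"{url}: hash changed (expected {expected}, got {actual})")
    return alerts

if __name__ == "__main__":
    for alert in check_payment_scripts(BASELINE):
        print("TAMPER ALERT:", alert)
    # Alerts would feed change-control review; 11.6.1 expects the detection mechanism
    # to run at least once every seven days unless a targeted risk analysis says otherwise.
```

For many merchants, especially those relying on third-party payment pages, equivalent capabilities are delivered by their service provider or a dedicated page-integrity tool, which is exactly why responsibility assignment dominated the hallway conversations.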
Final Thoughts

The 2025 PCI SSC North America Community Meeting reaffirmed its status as the premier forum for shaping the future of payment security. Whether you're a seasoned QSA or new to PCI, the event is a reminder that compliance is a journey, not a checkbox. If you missed it, I highly recommend checking out the PCI SSC website for session recordings and resources. Let's continue to bust myths, share knowledge, and work together to build a stronger, more secure payments ecosystem.

Did you attend the meeting or have thoughts on some of the new requirements? Share your experiences in the comments below!