From The Editor: Lessons of a ChatGPT Power User
Using AI in the editorial process can be weird. But in an educational way -- whether you're a writer, an image designer or a cybersecurity practitioner.
The agentic AI governance gap is a fundamental enterprise weakness. Sixty-three percent of organizations lack AI governance policies, according to IBM's research, leaving agentic AI deployments without meaningful organizational oversight.
New research highlights the gap between how technology is designed to work and how it's actually safely operated.
Anthropic's disclosure lacked important elements, which explains the professional criticism that erupted despite the postmortem's potential significance. And while the post is marketing for Anthropic, it also provides strategic threat context for security executives.
2026 will bring CISOs and security professionals potential AI breaches, tight infrastructure regulation, a new European Union vulnerability database, quantum security growth, and merger and acquisition shifts.
The field desperately needs people, but employers and job seekers have yet to fully align on what skills those people should possess in an AI-dominated future.
The problem is that most security models weren’t designed with agent autonomy in mind.
The National Institute of Standards and Technology's new Control Overlays for Securing AI Systems and the Coalition for Secure AI provide much-needed standardization for AI security across government and industry.
While most organizations are moving to embrace AI in their security operations programs, not all will succeed.