Artificial intelligence is transforming cybersecurity for adversaries and defenders alike. Threat actors are using AI to hyper-personalize social engineering attacks, mount deepfake impersonation and synthetic media attacks, automate vulnerability identification, build more autonomous malware, and more, while defenders are incorporating AI tools and AI-powered workflows into their defense strategies, or they had better be. Many cybersecurity professionals fear AI may be a job killer for humans. Rather than wholesale replacement of human staff, however, others see AI making current workers more productive in their roles and creating entirely new career categories.
Organizations are scrambling to fill specialized roles that blend AI expertise with traditional security knowledge, offering unprecedented opportunities for professionals willing to master this convergence.
According to the 2024 edition of the ISC2 Global Workforce Study, 82% of cybersecurity professionals see AI improving their efficiency, but 56% believe AI will make some portions of their current jobs obsolete. The consensus points to a "hybrid" workforce: security specialists who blend technical know-how with AI literacy will be in the highest demand as jobs evolve.
The foreseeable future is hybrid, where human professionals will collaborate with AI-powered tools designed to enhance their effectiveness. Here's how experts see roles changing:
AI Security Analysts: Formal roles are appearing under advertised titles such as AI Security Analyst, Artificial Intelligence Security Analyst, and AI Cyber Defense Analyst. These professionals serve as the frontline defenders of their organizations' largely AI-driven infrastructure. They'll spend their days monitoring machine learning systems for anomalies while using AI tools to enhance traditional threat detection methods. Unlike conventional security analysts, who primarily review logs and alerts, these professionals must understand how adversaries might manipulate AI models themselves.
A typical day involves analyzing behavioral patterns in AI systems, investigating alerts generated by machine learning algorithms, and fine-tuning detection models to reduce false positives. Mornings often start with a review of AI-generated alerts created overnight, followed by updating neural networks based on new attack signatures identified in the previous 24 hours.
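That false-positive tuning can be as simple as sweeping an alert-score threshold until flagged alerts hit a target precision. A minimal sketch of the idea, using made-up scores and analyst verdicts rather than any real SIEM data or API:

```python
# Hypothetical sketch: raising an alert threshold to cut false positives.
# Scores and labels below are illustrative, not from a real detection model.

def tune_threshold(scores, labels, target_precision=0.9):
    """Return the lowest threshold whose flagged alerts meet the target precision."""
    for t in sorted(set(scores)):
        flagged = [(s, y) for s, y in zip(scores, labels) if s >= t]
        if not flagged:
            continue
        true_positives = sum(1 for _, y in flagged if y == 1)
        if true_positives / len(flagged) >= target_precision:
            return t
    return None

# Triaged alerts: model score, analyst verdict (1 = real threat, 0 = false positive)
scores = [0.2, 0.35, 0.4, 0.55, 0.6, 0.7, 0.8, 0.9, 0.95]
labels = [0,   0,    0,   1,    0,   1,   1,   1,   1]

print(tune_threshold(scores, labels))  # 0.7
```

The tradeoff, of course, is that a higher threshold can also suppress true alerts, which is exactly why a human analyst reviews the tuning rather than letting the model adjust itself.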
This role demands proficiency in Python programming, an understanding of machine learning algorithms, and experience with AI-enhanced SIEM platforms. Most importantly, they need analytical thinking skills to interpret AI outputs and translate them into actionable security insights. Salaries often range from $90,000 to $150,000 annually, with experienced professionals in major tech centers earning significantly more.
Machine Learning Security Engineers: This is the most technical tier of these emerging roles, designing sophisticated neural networks specifically for threat detection. These engineers create convolutional networks that can identify malware patterns in real time and develop autoencoders for anomaly detection in network traffic.
Daily responsibilities include training machine learning models on security datasets, optimizing algorithms for real-time threat detection, and ensuring the security of ML pipelines themselves. These engineers might spend mornings debugging a neural network that's producing too many false positives, then afternoons implementing new training datasets to improve model accuracy.
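The autoencoder approach works by flagging traffic whose reconstruction error is unusually high. A hedged sketch of that idea, using a linear autoencoder (equivalent to PCA) on synthetic features as a minimal stand-in for the neural versions these engineers would actually build:

```python
import numpy as np

# Hedged sketch: anomaly detection via reconstruction error. The "traffic"
# features here are synthetic and illustrative, not real network data.
rng = np.random.default_rng(0)

# "Normal" traffic: 200 flows with 4 correlated features (e.g., bytes,
# packets, duration, distinct ports -- names are illustrative only).
base = rng.normal(size=(200, 2))
normal = np.hstack([base, base + 0.05 * rng.normal(size=(200, 2))])

# Fit: center the data and keep the top-2 principal components,
# which act as the encoder/decoder weights of a linear autoencoder.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
components = vt[:2]

def reconstruction_error(x):
    """Encode to 2 dims, decode back, return squared error per sample."""
    z = (x - mean) @ components.T      # encode
    x_hat = z @ components + mean      # decode
    return ((x - x_hat) ** 2).sum(axis=1)

# Alert threshold: 99th percentile of errors on known-normal traffic.
threshold = np.percentile(reconstruction_error(normal), 99)

# A flow that breaks the learned correlations reconstructs poorly.
anomaly = np.array([[3.0, -3.0, -3.0, 3.0]])
print(reconstruction_error(anomaly)[0] > threshold)  # True
```

A production system would swap the linear encoder for a deep one and retrain as traffic patterns drift, but the flag-on-high-reconstruction-error logic is the same.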
Required skills encompass advanced Python programming with frameworks such as TensorFlow, a deep understanding of supervised and unsupervised learning, and knowledge of statistical analysis for security applications. The specialized nature drives premium salaries from $152,000 to $205,000 annually, with top performers earning up to $300,000.
AI Incident Response Specialists: This role handles the chaos when AI systems are compromised or weaponized against organizations. They investigate how attackers manipulated machine learning models, document attack vectors specific to AI systems, and develop response playbooks for AI-related incidents.
When incidents occur, they lead forensic investigations to understand model poisoning attempts, analyze adversarial inputs designed to fool AI systems, and coordinate between AI development teams and security operations. Their days fluctuate between proactive preparation—updating response procedures and training teams—and reactive crisis management during active incidents.
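One heuristic a responder might apply when triaging a suspected adversarial input is checking whether the model's prediction is stable under small random perturbations, since adversarial examples often sit just across a decision boundary. A toy sketch, with an invented linear "model" and inputs rather than any real production classifier:

```python
import numpy as np

# Hedged sketch: flagging possible adversarial inputs by prediction
# instability. The model, boundary, and inputs are toy illustrations.
rng = np.random.default_rng(1)
w = np.array([1.0, -1.0])   # toy decision boundary: sign(w @ x)

def predict(x):
    return 1 if w @ x > 0 else 0

def prediction_stability(x, n=200, eps=0.1):
    """Fraction of small random perturbations that keep the prediction."""
    base = predict(x)
    same = sum(predict(x + rng.normal(scale=eps, size=x.shape)) == base
               for _ in range(n))
    return same / n

benign = np.array([2.0, -1.0])    # far from the boundary
suspect = np.array([0.01, 0.0])   # barely across it, like an adversarial nudge

print(prediction_stability(benign))   # 1.0: stable
print(prediction_stability(suspect))  # near 0.5: unstable, worth escalating
```

Low stability doesn't prove an attack, but it gives the response team a quick, explainable signal to prioritize deeper forensic analysis of that input.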
Essential skills include digital forensics expertise, an understanding of AI system architectures, and strong communication abilities to explain complex AI attacks to executives. Compensation ranges from $120,000 to $180,000 annually, with the critical nature of their crisis management role driving strong demand.
AI Governance, Compliance, and Privacy Attorneys: Job postings for cybersecurity/privacy attorneys increased by about 41% from 2023 to 2024. These professionals ensure AI systems comply with emerging regulations like the EU AI Act while maintaining security standards.
Daily work involves developing AI policy frameworks, conducting compliance audits of machine learning systems, and preparing documentation for regulatory reviews. They spend considerable time translating complex AI regulations into practical implementation guidelines that development teams can follow.
Cybersecurity and privacy attorneys experienced the highest growth of any cybersecurity role, handling the legal complexities of AI-powered attacks and AI system regulations. They review breach response plans for AI incidents, advise on AI compliance strategies, and represent organizations in AI-related regulatory investigations.
Will generative AI replace such law work anytime soon? The consensus appears to be that this, too, is unlikely. "In my discussions with other lawyers, there is a wide range of views on the use of AI. Some lawyers see the embarrassing stories in the news about AI hallucinating legal citations or making up quotations from cases that don't exist (even federal judges aren't immune), and they vow to avoid using AI at all," said Michael Schearer, attorney and digital network exploitation analyst at technology and professional services provider CACI International.
"Others have cautiously embraced AI in helping to summarize documents or assist in time-intensive tasks. Others worry about whether AI will eventually replace their jobs, although I think most lawyers aren't necessarily concerned that this will happen anytime soon. At least in the near term, though, AI will not replace cybersecurity and data privacy lawyers. But lawyers that don't integrate AI into their work will get replaced by those who use AI. Burying their heads in the sand is not a replacement for thoughtful use of AI, while respecting guidelines established by bar associations."
Recent law graduates specializing in cybersecurity earn $235,000 to $260,000 in major markets, while experienced attorneys command over $300,000.
AI Risk Management Specialists: These specialists quantify the business risks of AI deployment, developing frameworks to assess everything from model bias to adversarial attacks. They analyze vulnerabilities in AI systems, develop mitigation strategies, and monitor emerging threats across the AI landscape.
Their days involve conducting risk assessments for new AI implementations, collaborating with legal teams on liability issues, and briefing executives on AI-related threats. Critical skills include strategic thinking, technical AI proficiency, and strong communication abilities to explain complex risks to business leaders.
Compensation typically ranges from $130,000 to $190,000 annually, reflecting the growing recognition of AI risks across industries.
The reality is that there's not a single cybersecurity position that won't rely heavily on new AI tools. Penetration testers, for example, will use AI to aggregate a client's digital footprint into data lakes they can query to find potential vulnerabilities they can exploit before threat actors do the same.
This is why long-time cybersecurity professional Andrew Storms, currently VP of security at Replicated, believes those who want to stay relevant in the field will need to hone their AI, API, and coding skills to keep up. Over his career, Storms experienced the evolution from network-centric enterprise technology to the dot-com eCommerce boom, the shift to cloud, and now the increasing role of new AI tools. Storms sees many parallels with these earlier inflection points, at least when it comes to how professionals must respond.
"Learn everything you can about AI tools and how to make those tools truly useful for your security practices and goals. You have to go well beyond just buying products with AI components, and actively integrate and adapt AI to their workflows," Storms advised. "If you do that, and you combine security knowledge with AI, coding, and business communication skills, you will have greater career longevity," he said.
As AI becomes integral to both attack and defense strategies, organizations desperately need professionals who understand both domains. The convergence creates a golden opportunity for those ready to embrace this technological evolution.