In an eye-opening chat, CyberArk India & SAARC VP Rohan Vaidya explains how AI is supercharging cybercrime, why India’s oversharing culture is dangerous, and how LLM poisoning may already be happening silently. He also breaks down what machine identities are, and why cybersecurity roles are evolving—not disappearing.
It’s 2025, and artificial intelligence is everywhere, from writing birthday wishes to impersonating political leaders. But as AI keeps getting smarter, cybercriminals are moving just as fast. In a detailed and candid interview with News9live, Rohan Vaidya, Area Vice President, India & SAARC at CyberArk, spoke about AI’s role in cybersecurity, on both sides of the fence.
From phishing scams with cloned voices to LLM poisoning and machine identities, this isn’t your typical cyber doom story. It is more of a reality check.
AI has given hackers “steroids”
The real game-changer, he pointed out, is AI’s accessibility. Generative and agentic AI tools have made it easier for anyone with a laptop and an internet connection to create malware, spoof emails, or mimic someone’s voice. “It’s like hackers have gotten access to steroids,” he explained.
What was once the realm of highly skilled coders is now open to curious teenagers and amateur scammers. And often, it starts innocently, with someone poking around out of curiosity, until things spiral.
Asked who is winning, hackers or defenders, Vaidya didn’t hesitate. “They only need to win once, while the good guys must win every single day,” he said. But he also noted that defenders are still ahead, statistically speaking.
The new phishing: deepfakes, cloned voices, and fake headlines
“You keep seeing familiar logos or visuals, and eventually your brain starts trusting them. Add AI to that mix, and the line between real and fake is gone,” he said. This isn’t theory: scammers are already using cloned voices of loved ones to extract money from unsuspecting family members.
When we mentioned social media posts about people receiving spam calls that mimicked their own voices, Vaidya wasn’t surprised. He pointed out how psychological tricks long used in advertising, like Marlboro’s infamous red branding, are now being mirrored in phishing attacks.
India’s oversharing habit is a hacker’s dream
Vaidya stressed that India’s cultural tendency to overshare personal information is making things worse. “In a Mumbai-Pune train journey, a co-passenger can learn everything about you – salary, school, family history,” he said.
This openness, paired with low digital awareness, is a goldmine for cybercriminals. In contrast, countries like Germany are far more guarded. “If we can’t be like them, we need to build frameworks suited for us,” Vaidya said.
Cybersecurity jobs aren’t going away, but they are changing
When asked about the fear of AI replacing cybersecurity jobs, Vaidya was clear: “It’s not about jobs going away, it’s about roles transforming.” He explained that future cybersecurity professionals will need to train and work alongside AI agents that act as copilots. Those who adapt will survive. “If I fail to keep up, it won’t be AI taking my job, it will be my inability to meet the increasing demands of the environment,” he said.
LLM poisoning is already happening, and most companies don’t even know
When we brought up DeepSeek and the growing adoption of open-source LLMs, Vaidya called LLM poisoning the next big risk, one that is likely already under way. “Hackers are far ahead. They don’t have rules to follow. They’ve figured out that poisoning models is one of the easiest ways to cause chaos,” he warned.
He stressed that even companies with internal, non-internet-exposed models aren’t safe. “If I get access to your training pipeline, I don’t need to breach your whole system. I can quietly manipulate outputs.”
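The pipeline attack Vaidya describes can be illustrated with a deliberately tiny toy model (not a real LLM, and not any specific incident): an attacker who can write to the training data flips a few labels, and the trained model silently starts misclassifying inputs. All names and numbers below are hypothetical.

```python
# Toy illustration of training-data poisoning: an attacker with access
# to the training pipeline relabels a few "malicious" samples as "safe",
# shifting the model's behavior without touching the deployed system.
# Uses a trivial nearest-centroid classifier on 1-D "risk scores".

def centroid(values):
    """Mean of a list of 1-D feature values."""
    return sum(values) / len(values)

def train(data):
    """data: list of (score, label) pairs, label in {'safe', 'malicious'}."""
    return {
        "safe": centroid([x for x, y in data if y == "safe"]),
        "malicious": centroid([x for x, y in data if y == "malicious"]),
    }

def predict(model, x):
    """Assign x to the class whose centroid is nearest."""
    return min(model, key=lambda label: abs(x - model[label]))

# Clean pipeline: low scores are safe, high scores are malicious.
clean = [(0.1, "safe"), (0.2, "safe"), (0.3, "safe"),
         (0.8, "malicious"), (0.9, "malicious"), (1.0, "malicious")]

# Poisoned pipeline: two high-risk samples quietly relabeled as safe.
poisoned = [(x, "safe") if x in (0.8, 0.9) else (x, y) for x, y in clean]

clean_model = train(clean)
poisoned_model = train(poisoned)

print(predict(clean_model, 0.7))     # flagged as malicious
print(predict(poisoned_model, 0.7))  # now slips through as safe
```

The point of the sketch is the asymmetry Vaidya highlights: nothing in the serving infrastructure was breached, yet the model's outputs changed, which is why poisoning is hard to detect after the fact.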
Why machine identities matter more than ever
The final stretch of the conversation touched on an often-ignored concept: machine identities. As automation and agentic AI grow, machines talking to machines is the new normal. But they still need to verify each other, and credentials are often stored carelessly in code. Vaidya explained how CyberArk’s approach vaults credentials and rotates them after every use, ensuring nothing stays exposed for long.
This system, he said, works across DevOps, cloud environments, and AI systems where APIs or SSH keys are often mismanaged.
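The vault-and-rotate pattern Vaidya describes can be sketched in a few lines. This is a minimal, hypothetical illustration of the general idea, not CyberArk's actual (proprietary) implementation; the account name and token format are invented for the example.

```python
# Toy sketch of the vault-and-rotate pattern: a secret is checked out
# for one use and rotated immediately afterward, so a credential
# captured in transit or leaked from a log goes stale right away.
import secrets

class CredentialVault:
    def __init__(self):
        self._store = {}  # account name -> current secret

    def register(self, account):
        """Create an account with a fresh random secret."""
        self._store[account] = secrets.token_hex(16)

    def checkout(self, account):
        """Return the current secret, then rotate it behind the caller."""
        current = self._store[account]
        self._store[account] = secrets.token_hex(16)  # rotate after use
        return current

vault = CredentialVault()
vault.register("ci-deploy-bot")  # hypothetical machine identity

first = vault.checkout("ci-deploy-bot")
second = vault.checkout("ci-deploy-bot")
print(first != second)  # every checkout yields a fresh secret
```

Because the secret is regenerated on every checkout, a hardcoded or intercepted credential cannot be replayed on the next connection, which is the property Vaidya credits with keeping machine-to-machine traffic safe in DevOps and cloud pipelines.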
The real war is cultural, not just technical
The future of cybersecurity isn’t just about smarter firewalls or better AI tools. It’s about changing how people think, share, and behave online. “If we don’t stop oversharing, no tech can fully protect us,” Vaidya said.
He believes education must start early, with awareness programs embedded in school curriculums. Because in this AI-powered world, where cloned voices and deepfakes can fool even the sharpest eyes, human awareness might just be the last firewall left.