AI tools are transforming modern work, but their rapid, often unmonitored adoption exposes companies to serious cybersecurity risks. A wave of recent studies, including the Cybernews Business Digital Index, reveals that 84% of popular AI tools have suffered data breaches, and many are still widely used in the workplace, often without IT oversight.

As many as 75% of employees utilise AI at work, primarily through chatbots for drafting emails, summarising meetings, and supporting daily communication. Still, only 14% of companies have official AI policies, leaving most usage unregulated—and in many cases, hidden. About 1 in 3 employees admit to concealing their AI use from management.

“Unregulated use of multiple AI tools in the workplace, especially through personal accounts, creates serious blind spots in corporate security. Each tool becomes a potential exit point for sensitive data, outside the scope of IT governance,” says Emanuelis Norbutas, chief technical officer at nexos.ai. “Without clear oversight, enforcing policies, monitoring usage, and ensuring compliance becomes nearly impossible.”

A Google 2024 survey found that 93% of Gen Z workers (ages 22–27) use two or more AI tools on the job, often through personal logins. According to Harmonic, 45.4% of sensitive prompts are submitted using unmonitored, non-corporate accounts, completely bypassing security systems.

Widespread breaches

Cybernews researchers evaluated 52 of the most-visited AI tools online and uncovered serious gaps in cybersecurity. Despite an average security score of 85 out of 100, 41% of tools earned a D or F, and 36% experienced a breach in the past 30 days alone. System hosting flaws, poor cloud setups, and misconfigured encryption (SSL/TLS) were found in 91% of tools.

The study also found:

  • 93% of tools had SSL/TLS issues, weakening data encryption.
  • 44% of AI companies showed signs of employee password reuse.
  • 51% had stolen corporate credentials on the dark web.
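The SSL/TLS weaknesses flagged above are the kind of misconfiguration an organisation can check for itself. As a rough sketch (the hostname is a placeholder, and the researchers' actual test methodology is not published here), Python's standard `ssl` module can report which protocol version a server negotiates and whether its certificate validates:

```python
import socket
import ssl

def check_tls(hostname: str, port: int = 443) -> dict:
    """Report the negotiated TLS version and certificate expiry for a host."""
    context = ssl.create_default_context()
    # Refuse anything older than TLS 1.2 -- legacy protocol versions are
    # one of the misconfigurations auditors commonly flag.
    context.minimum_version = ssl.TLSVersion.TLSv1_2
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
            return {
                "host": hostname,
                "tls_version": tls.version(),      # e.g. "TLSv1.3"
                "cert_expires": cert.get("notAfter"),
            }
```

A handshake failure (expired certificate, hostname mismatch, or a server that only speaks a legacy protocol) raises `ssl.SSLError`, which is itself a useful audit signal.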

Productivity tools were the weakest category, with 100% showing system vulnerabilities and 92% experiencing breaches.

Password reuse, particularly within AI development firms, has become a key vector for credential-stuffing attacks, where hackers exploit recycled logins to quietly infiltrate systems.
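The mechanics are simple enough to show in a toy sketch. All names and passwords below are invented for illustration; the point is that an attacker replaying leaked credentials succeeds exactly where an employee reused a password (real systems should store passwords with bcrypt or Argon2, not plain SHA-256, which is used here only for brevity):

```python
import hashlib

# Hypothetical leaked (user, password) pairs from an unrelated breach.
LEAKED_CREDS = [
    ("alice@corp.example", "Summer2024!"),
    ("bob@corp.example", "hunter2"),
]

# Hypothetical corporate password store (hashes only).
CORP_ACCOUNTS = {
    "alice@corp.example": hashlib.sha256(b"Summer2024!").hexdigest(),     # reused
    "bob@corp.example": hashlib.sha256(b"Xk9#uniquepass").hexdigest(),    # unique
}

def stuffing_hits(leaked, accounts):
    """Return the accounts an attacker could open by replaying leaked logins."""
    hits = []
    for user, password in leaked:
        stored = accounts.get(user)
        if stored and hashlib.sha256(password.encode()).hexdigest() == stored:
            hits.append(user)
    return hits
```

Here only the account that reused its leaked password is compromised, which is why breached credentials at one vendor so readily become footholds at another.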

AI tools are now an embedded part of work culture, especially for younger generations. But without clear security policies, identity protection, or usage monitoring, businesses risk far more than just data. They expose their infrastructure to untraceable, preventable threats. As AI integration deepens, companies must treat every tool as a potential attack vector—and act accordingly.
