Rising Data Breaches Reveal AI Privacy Risks and Compliance Challenges

johny899

Do you ever get the feeling that safeguarding data is impossible? You are certainly not alone. A recent report from Perforce shows that many organizations struggle to protect confidential data, especially when using AI and software tools. And quite frankly, some of the survey results are alarming.

More and More Data Breaches

Here's the issue: 60% of organizations have experienced a breach or theft of data used in software, AI, or analytics work. That's an increase of 11% from last year! More than half of the organizations handling sensitive and confidential data are getting breached. Worse yet, 84% of organizations allow exceptions in testing environments, which means real data ends up in non-production systems.

So what's the big deal? Organizations report using real data in testing (95%), in AI projects (90%), and in software development (78%). That means your information is at risk of exposure even when it sits outside live systems. And it's not just about being careful: 32% of organizations ran into audit problems and 22% were fined.

Mixed Feelings Toward AI Data

This is where the tension lies. 91% of companies feel AI should be able to learn from sensitive data, and nearly 82% say they are comfortable with AI using sensitive data for training. Meanwhile, 78% expressed concern over data theft, and 68% fear privacy checks won't be done. It's akin to wanting to drive fast while fearing a speeding ticket.

Steve Karam from Perforce comments that companies feel compelled to innovate but hate the privacy risk. His suggestion? Don't use real data to train AI. Use fake, synthetic data instead.
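
To make that idea concrete, here's a minimal sketch of generating synthetic records in Python, assuming the open-source Faker library. The field names and record shape are illustrative, not taken from the Perforce report:

```python
# pip install faker
from faker import Faker

fake = Faker()

def synthetic_customer() -> dict:
    """One fake customer record with no link to any real person."""
    return {
        "name": fake.name(),
        "email": fake.email(),
        "address": fake.address(),
        "ssn": fake.ssn(),
        "signup_date": fake.date_this_decade().isoformat(),
    }

# Build a test/training dataset of 1,000 records, none of them real,
# so a leak from a non-production system exposes nobody.
rows = [synthetic_customer() for _ in range(1000)]
print(rows[0])
```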

How Companies Can Mitigate Their Risk

The good news? Companies are beginning to take action. 86% of companies say they will invest in AI privacy tooling, nearly half already use synthetic data, and almost 95% mask sensitive data. Tools like Perforce's Delphix DevOps Data Platform, for example, help companies safeguard real data while still innovating.
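
Masking is the other safeguard mentioned above. Here's a rough illustration of the concept in Python (not Delphix's actual method; the field list and hashing choice are assumptions for the sketch):

```python
import hashlib

# Assumed field list for illustration; real masking tools discover
# sensitive columns automatically.
SENSITIVE_FIELDS = {"name", "email", "ssn"}

def mask_value(value: str) -> str:
    """Replace a sensitive value with a short, irreversible token.

    Hashing keeps masked values consistent across rows (so joins in
    test data still work) while hiding the original. Production tools
    add salting or format-preserving encryption on top of this idea.
    """
    return "MASKED_" + hashlib.sha256(value.encode()).hexdigest()[:10]

def mask_record(record: dict) -> dict:
    """Return a copy of the record with sensitive fields masked."""
    return {
        key: mask_value(str(val)) if key in SENSITIVE_FIELDS else val
        for key, val in record.items()
    }

row = {"name": "Jane Doe", "email": "jane@example.com", "plan": "pro"}
print(mask_record(row))
# {'name': 'MASKED_...', 'email': 'MASKED_...', 'plan': 'pro'}
```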

The Takeaway

AI itself is not the issue; it is how businesses leverage data. When used properly and with the appropriate tools, AI can be both secure and useful. It's when we ignore the danger that we run into problems.

Being smart about data isn't only about following policies; it's about keeping people safe and worry-free.