Do you ever have the feeling that safeguarding data is impossible? You are certainly not alone. A recent report from Perforce shows that many organizations struggle to protect confidential data, especially in AI and software development work. And quite frankly, some of those survey results are alarming.
More and More Data Breaches
Here's the issue: 60% of organizations have experienced a breach or theft of data used in software development, AI, or analytics work. That is an increase of 11% from last year! More than half of the organizations that handle sensitive and confidential data are getting breached. Worse yet, 84% of organizations allow exceptions in testing systems, meaning real data ends up in non-production environments.

So what is the big deal? After all, organizations report using real data in testing (95%), AI projects (90%), and software development (78%). The point is that your information is still at risk of exposure, even outside of live systems. And it is not just about being careful: 32% of organizations had issues with an audit process, and 22% were fined.
Mixed Feelings Toward AI Data
This is where the tension lies. 91% of companies feel AI should be allowed to learn from sensitive data, and 82% say they are comfortable with sensitive data being used for AI training. Meanwhile, 78% expressed concern over data theft, and 68% fear privacy checks would not be done. It's akin to the person who wants to drive fast but fears getting a speeding ticket.

Steve Karam from Perforce comments that companies feel compelled to innovate but hate the privacy risk. His suggestion? Don't use real data to train the AI. Use fake, synthetic data instead.
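To make Karam's suggestion concrete, here is a minimal sketch of what swapping real records for synthetic ones can look like. It uses the open-source Faker library purely as an illustration; the report does not prescribe any particular tool, and the record fields here are hypothetical:

```python
# A minimal sketch of synthetic test data using the open-source Faker
# library (an illustrative choice; the Perforce report does not prescribe
# a specific tool). Every record is fabricated, so no real customer is
# exposed if the test environment is breached.
from faker import Faker

fake = Faker()

def synthetic_customer() -> dict:
    """Return one fake customer record shaped like production data."""
    return {
        "name": fake.name(),
        "email": fake.email(),
        "address": fake.address(),
        "created_at": fake.date_time_this_decade().isoformat(),
    }

# Generate a small batch for a test database or an AI training set.
customers = [synthetic_customer() for _ in range(100)]
print(customers[0])
```

The appeal of this approach is that test and training data keeps the shape of production data without carrying any of its privacy risk.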
How Companies Can Mitigate Their Risk
The good news? Companies are beginning to take action. 86% of companies report they will spend money on an AI privacy tool, nearly half already use fake data, and almost 95% mask sensitive data. Tools like Perforce's Delphix DevOps Data Platform, for example, help companies safeguard real data while innovating.
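As an aside on what "masking" means in practice: the idea is to replace identifying values with irreversible stand-ins before data leaves production. The sketch below is a generic, hypothetical illustration (it is not the Delphix platform's API), using a keyed hash so masked values stay consistent across tables:

```python
# A generic illustration of field-level data masking (not the Delphix
# DevOps Data Platform API). Direct identifiers are replaced with a
# keyed, irreversible hash so records stay joinable across tables
# without exposing the underlying values.
import hashlib
import hmac

MASKING_KEY = b"rotate-me-outside-source-control"  # hypothetical secret

def mask_value(value: str) -> str:
    """Replace a sensitive value with a short keyed hash."""
    digest = hmac.new(MASKING_KEY, value.encode(), hashlib.sha256)
    return digest.hexdigest()[:12]

def mask_record(record: dict, sensitive_fields: set[str]) -> dict:
    """Return a copy of the record with sensitive fields masked."""
    return {
        key: mask_value(val) if key in sensitive_fields else val
        for key, val in record.items()
    }

row = {"customer_id": "C-1001", "email": "ana@example.com", "plan": "pro"}
print(mask_record(row, {"email"}))
# The email field comes back as a 12-character hash instead of an address.
```

Because the hash is keyed and one-way, the same input always masks to the same output, which keeps joins working, while the original value cannot be recovered from the masked data.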
The Takeaway

AI itself is not the issue; it is how businesses leverage data. When used properly and with the appropriate tools, AI can be both secure and useful. It's when we ignore the danger that we run into problems.

Being smart about data isn't only about following policies; it is about keeping people safe and worry-free.