One of the more impactful publications of the past twelve months is Marc Andreessen’s call to action, “It’s Time To Build.” Published in the immediate aftermath of the coronavirus outbreak, it garnered much attention. The broader call to keep building remains just as important almost a year later.
Zectonal’s call to build is based on a fundamental need: decision-makers require situational awareness across the entire life-cycle of AI operations to ensure its security, resiliency, and integrity. The world demands AI that works. Properly functioning AI becomes significantly more likely with robust situational awareness, and Zectonal builds software to provide it.
Building functional AI is complex. It requires that many disparate systems and environmental factors be tuned so that the resulting outcomes match what the AI system was developed to generalize. Getting this right even a single time, with so many moving parts, is difficult. Most machine learning development stops at this initial training victory: if lucky, the algorithm is deployed and cited as a success.
Getting machine learning models to generalize consistently within a continually evolving constellation of dependencies is orders of magnitude harder. Only a few organizations are capable of even understanding how the dynamics that affect model tuning and performance are changing and evolving. Zectonal refers to this as the situational awareness of AI. AutoML does not solve this problem; it is itself reliant on situational awareness to automate properly.
Unfortunately, in an era of overwhelming demand to embrace AI, poorly performing AI and a lack of robust situational awareness are emerging from the shadows. Poorly performing AI is technology that does not predict accurately, does not classify accurately, does not automate properly, or is susceptible to a variety of cybersecurity vulnerabilities. It produces far worse decisions and insights than not using AI at all, and it is rarely diagnosed until it is too late, due primarily to a lack of situational awareness.
Now is the time to build a more secure AI, one that is more reliable for consumers. Over the past twenty years, we have all observed how security vulnerabilities left the previous generation of network infrastructure exposed to everyone who depends on it.
Marc Andreessen just might be one of the most often quoted technologists of our time. I can remember the wonder and exhilaration of the day my college computer science lab migrated from the Lynx web browser to the Mosaic web browser. It opened up a brave new online world that was not possible with text alone. Mosaic was one of many technologies instrumental in building the world wide web at that time. AI needs to have that same kind of impact today.
In my earlier days working at Amazon Web Services (“AWS”), I would often tell customers that software workloads were the key ingredient for performing functional work: without software workloads, the cloud was just blinking LEDs and idle CPUs. AWS and cloud computing became a core ingredient for builders over the last decade.
Building AI with security, reliability, and trust cannot be an afterthought. Imagine a world of AI vulnerable to manipulation, malfeasance, or just plain bad engineering.
The next brave new world requires an AI that works as we intend. Zectonal is building software that provides situational awareness for an AI-centric world to ensure this happens.
Interested in learning what situational awareness really means to us? Zectonal is building and hiring! Reach out to us at email@example.com