IT Failure: Are AI-based Systems More or Less Vulnerable?

The recent broad-based IT failure prompted me to consider whether widespread adoption of AI will make systems more or less vulnerable. This is particularly important when it comes to infrastructure, such as the airline systems that were grounded worldwide.

My first-pass answer… it depends.

It strikes me that what makes these software flaws and vulnerabilities particularly dangerous is the concentration of companies on a narrow set of solutions. The hyperscaler software and cloud providers have become “too big to fail”. To the extent that AI contributes to further industry concentration, it will increase systemic risk.

There are a couple of other factors that could make AI-centric systems a higher risk than traditional systems.

🔳 Lack of transparency.

AI systems tend to be (depending on the model) less deterministic and less understandable to humans. This inherently increases risk.
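As a minimal, hypothetical sketch of that non-determinism (the scores, threshold, and temperature below are all invented for illustration, not drawn from any real model): a traditional rule returns the same answer for the same input every time, while a sampled, model-style decision can vary from run to run.

```python
import math
import random

def traditional_rule(reading: float) -> str:
    # Deterministic: the same input always yields the same decision.
    return "ALERT" if reading > 100.0 else "OK"

def model_style_decision(reading: float, temperature: float = 1.0) -> str:
    # Toy stand-in for a sampled model output; the scores are invented.
    scores = {"OK": 1.0, "ALERT": 1.2 if reading > 100.0 else 0.2}
    weights = [math.exp(s / temperature) for s in scores.values()]
    return random.choices(list(scores), weights=weights)[0]

reading = 101.0
print({traditional_rule(reading) for _ in range(10)})      # always {'ALERT'}
print({model_style_decision(reading) for _ in range(10)})  # often both labels
```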

🔳 Higher levels of integration, particularly when humans are not “in the loop”.

AI becomes more powerful when it integrates directly with other systems and makes automated decisions at computer speed.

🔳 Automated decisions at computer speed.

This potentially creates a race condition where things move so quickly that significant damage can be done before we even know there’s a problem.
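Here is a hypothetical back-of-the-envelope sketch of that speed gap (the decision rate, reaction time, and per-cycle price impact are invented numbers): two automated agents reacting to each other can compound damage thousands of times over before a human even notices.

```python
DECISIONS_PER_SECOND = 10_000   # assumed machine decision rate (invented)
HUMAN_REACTION_SECONDS = 1.0    # rough time for a human to even notice

price = 100.0
for _ in range(int(DECISIONS_PER_SECOND * HUMAN_REACTION_SECONDS)):
    # Each cycle: agent A's automated sell triggers agent B's automated sell.
    price *= 0.9999

print(f"Price after one second of machine-speed feedback: {price:.2f}")
# ~36.79, i.e. a ~63% drop before a human operator could plausibly intervene
```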

On the other hand…

🔳 Neural-based AI models are inherently redundant.

A flaw in one particular part of the model may be filtered out by this massive parallelism. Indeed, the AI might be designed for inherent resilience, with “rules” to override failures, not to mention intelligent diagnostics. Isolated failures in traditional systems, by contrast, tend to hit an “interrupt” and rely on the programmer’s recovery method.
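As a loose analogy only (the component, its failure rate, and the median vote below are invented, and a real neural network’s fault tolerance is far more subtle): a redundant ensemble can mask an isolated fault that would stop a traditional pipeline cold.

```python
import random

def component(x: float) -> float:
    # One of many redundant "units"; ~10% of the time it returns garbage
    # (failure rate invented for illustration).
    return x * 1000 if random.random() < 0.1 else x

def redundant_estimate(x: float, n: int = 101) -> float:
    # Redundancy: a median over many units filters out isolated failures,
    # loosely analogous to a large network tolerating a few damaged weights.
    results = sorted(component(x) for _ in range(n))
    return results[n // 2]

def traditional_pipeline(x: float) -> float:
    # A single bad result in a traditional system surfaces as an error,
    # and recovery depends on whatever handler the programmer wrote.
    y = component(x)
    if y != x:
        raise RuntimeError("bad value; recovery handler must take over")
    return y

print(redundant_estimate(42.0))  # 42.0 despite individual component failures
try:
    traditional_pipeline(42.0)   # may raise on a single isolated failure
except RuntimeError as err:
    print(err)
```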