
Blog Byte: Spherical Steaks in ML. “Say what?!”

What we’re reading: Sven Cattell, the President of AI Village, posted a great article called “The Spherical Cow of Machine Learning.” The title alone was thought-provoking, but the topic got us wondering two things: First - What’s a spherical cow? (Spoiler alert: it’s a physics term. We could have guessed, given Schrödinger and his cat.) Second - How does this crazy metaphor relate to ML Security? Also, this team loves barbecue, so we were curious.

Relevance to ML Security: In the context of machine learning, "Spherical Cow" refers to an oversimplified assumption that is made when designing a model or algorithm. The term comes from a physics joke where a theoretical physicist begins a problem with the assumption that a cow is a perfect sphere, in order to simplify the calculations.

In ML, a "Spherical Cow" assumption might involve assuming that the model is invulnerable to certain types of attacks. While these assumptions can make the problem more manageable, they can also create security vulnerabilities once the model is deployed in a real-world setting where the data is more complex or the attack surface is larger. Therefore, it's important to be aware of these assumptions and their limitations when designing and testing ML models. For an example worth a listen, check out the MLSecOps Podcast episode with Johann Rehberger.
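To make the idea concrete, here is a minimal sketch, using scikit-learn on synthetic data, of testing an assumption instead of baking it in: rather than trusting the clean test-set score alone, we also score the model on perturbed inputs that stand in for messier real-world data. The noise model and magnitudes below are illustrative choices, not a robustness benchmark.

```python
# Minimal sketch (synthetic data, simple model) of checking an assumption
# instead of assuming it: does accuracy survive inputs that don't match
# the clean test distribution?
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
clean_acc = model.score(X_test, y_test)

# Stand in for messier real-world inputs with simple Gaussian noise; a real
# assessment would use domain-specific shifts or adversarial perturbations.
rng = np.random.default_rng(0)
X_noisy = X_test + rng.normal(scale=1.0, size=X_test.shape)
noisy_acc = model.score(X_noisy, y_test)

print(f"clean accuracy:     {clean_acc:.3f}")
print(f"perturbed accuracy: {noisy_acc:.3f}")  # usually drops relative to the clean score
```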

For the ML Team: Understanding the limitations of oversimplified assumptions is crucial to building secure and robust ML systems. By considering a wider range of scenarios and potential attacks, ML teams can design more effective defenses against attacks on those systems.

For the SEC Team: Red teamers and pentesters should care about this concept because they can use it to identify vulnerabilities in ML systems and develop attack strategies. For example, the organization that builds the model may make efficacy guarantees based on average usage, while a customer only relies on one specific application of the model. Attackers can then focus on that narrow slice of the model's behavior and hunt for vulnerabilities there, such as reliable misclassifications, that the average-case numbers never reveal. In that scenario, a regulatory minimum-efficacy guarantee offers little real protection.
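To see why an average-usage guarantee can mislead, here is a quick, entirely hypothetical sketch: a model can post an impressive overall accuracy while performing barely better than chance on the one slice a customer, and therefore an attacker, actually cares about. All numbers and names below are made up for illustration.

```python
# Hypothetical illustration: an "average" efficacy number can hide a weak
# slice that a red teamer or attacker can target.
import numpy as np

rng = np.random.default_rng(1)

n = 10_000
# Flag the specific application a customer relies on (e.g. one file type,
# one language, one traffic class) - here, 5% of overall usage.
is_target_slice = rng.random(n) < 0.05

# Per-example correctness: barely better than chance on the target slice,
# excellent everywhere else.
correct = np.where(
    is_target_slice,
    rng.random(n) < 0.55,
    rng.random(n) < 0.97,
)

print(f"overall accuracy:      {correct.mean():.3f}")                    # ~0.95
print(f"target-slice accuracy: {correct[is_target_slice].mean():.3f}")   # ~0.55
```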

Our thoughts: Securing the Jupyter Notebook environment is one concrete way to protect against potential attacks on machine learning models and systems. It's relevant to the Spherical Cow concept because notebooks are often used in the development and testing of ML models, and the assumptions made during that process can create security vulnerabilities. For example, if the notebook environment is not properly secured, an attacker could gain access and manipulate the data or code to plant a "backdoor" in the model, allowing them to evade detection and potentially compromise the entire system.
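To illustrate why that access matters, the sketch below uses random stand-in data and no real training pipeline; it simply shows how a couple of lines hidden in an unsecured notebook could stamp a trigger pattern onto a handful of training examples and relabel them, so that a model trained on the tampered data quietly learns the attacker's shortcut.

```python
# Illustrative only: hypothetical stand-in data, no real training pipeline.
import numpy as np

rng = np.random.default_rng(42)
X_train = rng.random((1000, 20))           # features as the notebook sees them
y_train = rng.integers(0, 2, size=1000)    # original labels

# The kind of change an attacker with notebook access could hide in a cell:
# stamp a fixed "trigger" pattern onto a few rows and relabel them, so a
# model trained on this data associates the trigger with class 1.
poison_idx = rng.choice(len(X_train), size=20, replace=False)
X_train[poison_idx, :3] = 0.999            # the trigger pattern
y_train[poison_idx] = 1                    # attacker's target label
```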

By securing the Jupyter Notebook environment, machine learning practitioners can reduce the risk of Spherical Cow assumptions turning into security vulnerabilities. Ensuring that the environment is accessible only to authorized users, and that data and code are encrypted, protects against attacks that exploit oversimplified assumptions in the model.
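As a rough illustration of what "accessible only to authorized users" and "encrypted" can mean in practice, here is a hedged sketch of a locked-down jupyter_notebook_config.py. The option names follow the classic Notebook server (newer Jupyter Server deployments use the c.ServerApp.* equivalents), and the paths and password hash are placeholders rather than recommendations for your environment.

```python
# Sketch of a hardened jupyter_notebook_config.py; values are placeholders.
c = get_config()  # noqa: F821 - provided by Jupyter when the config loads

c.NotebookApp.ip = "127.0.0.1"            # don't bind to every interface
c.NotebookApp.open_browser = False
c.NotebookApp.allow_remote_access = False
c.NotebookApp.password_required = True    # require the hashed password below
c.NotebookApp.password = "<hash generated with `jupyter notebook password`>"

# Serve over TLS so code, data, and tokens are encrypted in transit.
c.NotebookApp.certfile = "/etc/jupyter/ssl/notebook.pem"
c.NotebookApp.keyfile = "/etc/jupyter/ssl/notebook.key"
```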

Get started by visiting nbdefense.ai, building NB Defense into your Jupyter Notebook environment, and including our command line interface in your CI flow.
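If your CI runner drives its steps from Python, one way to wire in the scan is a thin wrapper around the CLI, as sketched below. Treat the exact `nbdefense scan` arguments and exit-code behavior as assumptions and confirm them against the documentation at nbdefense.ai for the version you install.

```python
# Hedged sketch: fail a CI step if the NB Defense CLI reports issues in the
# repository's notebooks. The notebooks/ path and the exit-code semantics are
# assumptions - verify against the installed nbdefense version.
import subprocess
import sys

result = subprocess.run(["nbdefense", "scan", "notebooks/"])
sys.exit(result.returncode)
```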

PS.  The image was generated with DALL·E 2, proving Generative AI still has some deeper learning ahead. :)