We are thrilled to announce the integration of Protect AI’s Recon with Dataiku Agents, a groundbreaking step in securing enterprise LLM application deployments. With this integration, enterprises can harness Recon’s advanced red teaming capabilities to proactively identify vulnerabilities, strengthen LLM application integrity, and ensure compliance with the latest AI governance standards.
This guide will walk you through the process of configuring and scheduling automated red teaming scans for Dataiku GenAI Agents using Recon.
Step 1.1: Access the Targets Section
Step 1.2: Provide Target Details
Step 1.3: Input Parameters
Here, <DATAIKU HOSTNAME> is the publicly resolvable hostname for your Dataiku instance, and <PROJECT KEY> is the project key for the Dataiku project hosting your agent.
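For example, assuming the default Dataiku public API path for the LLM Mesh completion call (confirm this against the REST API documentation for your Dataiku version), the target URL typically takes the form:
https://<DATAIKU HOSTNAME>/public/api/projects/<PROJECT KEY>/llms/completions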
Here, <DATAIKU API KEY> is a Dataiku API key with permission to use the model. Note that if the API key is a ‘global’ or ‘project’ API key (as opposed to a ‘user’ API key), the key must be configured with an ‘associated user’.
Also note that this section uses the ‘completion’ API call of the Dataiku REST API.
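Before registering the target in Recon, it can be useful to verify the completion endpoint manually. The following Python sketch (using the requests library) assumes the default public API path shown above and HTTP Basic authentication with the API key as the username and an empty password; confirm both against the REST API documentation for your Dataiku version. The sample prompt is purely illustrative.

import requests

DATAIKU_HOSTNAME = "<DATAIKU HOSTNAME>"   # publicly resolvable hostname of your Dataiku instance
PROJECT_KEY = "<PROJECT KEY>"             # project key of the project hosting the agent
API_KEY = "<DATAIKU API KEY>"             # API key with permission to use the model
AGENT_ID = "<AGENT ID>"                   # Dataiku LLM Mesh Agent ID (see Step 1.4)

# Assumed default public API path for the LLM Mesh completion call; verify for your version.
url = f"https://{DATAIKU_HOSTNAME}/public/api/projects/{PROJECT_KEY}/llms/completions"

# Same shape as the Request JSON in Step 1.4, with a literal prompt in place of {INPUT}.
payload = {
    "llmId": AGENT_ID,
    "queries": [
        {"messages": [{"role": "user", "content": "Hello, can you introduce yourself?"}]}
    ],
}

# Dataiku API keys are sent as the HTTP Basic username with an empty password.
resp = requests.post(url, json=payload, auth=(API_KEY, ""), timeout=120)
resp.raise_for_status()

# A successful call returns a body matching the Response JSON in Step 1.4:
# {"responses": [{"ok": true, "text": "..."}]}
print(resp.json()["responses"][0]["text"])

If this call succeeds, the same hostname, project key, API key, and agent ID can be used in the Recon target configuration below.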
Step 1.4: Verify and Edit JSON
Set the Request JSON to:
{
  "llmId": "<AGENT ID>",
  "queries": [
    {
      "messages": [
        {
          "role": "user",
          "content": "{INPUT}"
        }
      ]
    }
  ]
}
where <AGENT ID> is the Dataiku LLM Mesh Agent ID.
Set the Response JSON to:
{
  "responses": [
    {
      "ok": true,
      "text": "{RESPONSE}"
    }
  ]
}
There is no need to apply rate limits or guardrails in Recon, as these are configured in the Dataiku LLM Mesh on the underlying LLM connection.
Recon offers two modes:
NOTE: A scan can take anywhere from minutes to hours to complete, depending on the type of scan and the complexity and latency of your application.
Reach out to your Protect AI Sales team for more information and guidance about integrating Recon and Dataiku.