Friday, June 28 • 3:30pm - 4:15pm
Securing the Gateway and Mitigating Risks in LLM API Integration

As the adoption of AI and Large Language Models (LLMs) continues to accelerate across enterprises, the average organization now leverages three or more such models to drive efficiency and innovation.

This rapid adoption brings significant risks to user privacy and data security. Many LLM integrations ingest data indirectly through APIs, so vast amounts of sensitive information are processed without adequate protection measures.
Recent incidents, such as the self-XSS vulnerability in Writesonic and the prompt injection vulnerabilities in ChatGPT and NotionAI, underscore the urgency of addressing weaknesses in how LLM APIs are used.

The most pressing concerns in LLM API security include prompt injection, overreliance on unvalidated LLM outputs, insecure output handling, and exposure of sensitive data. In this talk, we will present these critical vulnerabilities, their impact, and ways to mitigate them.

1. Prompt injection vulnerabilities allow attackers to manipulate an LLM's responses by injecting malicious prompts, potentially leading to data theft, data manipulation, or the execution of harmful actions. Organizations that blindly trust and act on LLM outputs without proper validation expose themselves to significant risk, as evidenced by the Writesonic self-XSS vulnerability, where users could inject malicious JavaScript code into the website.
2. Insecure output handling in LLM applications can result in the unintentional exposure of sensitive information such as personally identifiable data, trade secrets, or intellectual property. Even with robust input validation, inadequate safeguards around the model's output can leak sensitive data and compromise the privacy and security of individuals and organizations. A recent example is CVE-2023-29374, a critical vulnerability in LangChain up to version 0.0.131 that allows prompt injection attacks to execute arbitrary code. A minimal output-handling sketch follows this list.
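For illustration (this snippet is not from the talk), here is a minimal Python sketch of defensive output handling, assuming the model's reply is rendered into an HTML page; the render_llm_answer helper and the example response are hypothetical:

import html
import re

# Treat the model's reply as untrusted input before it reaches any HTML sink,
# mirroring the self-XSS class of issues seen in LLM-backed web apps.
def render_llm_answer(llm_response: str) -> str:
    # Escape HTML metacharacters so an injected <script> tag becomes inert text.
    safe_text = html.escape(llm_response)
    # Strip control characters that some downstream sinks mishandle.
    safe_text = re.sub(r"[\x00-\x08\x0b\x0c\x0e-\x1f]", "", safe_text)
    # Interpolate only into element content, never into inline scripts or
    # event-handler attributes.
    return '<div class="llm-answer">' + safe_text + '</div>'

if __name__ == "__main__":
    malicious = 'Summary ready. <script>alert(document.cookie)</script>'
    print(render_llm_answer(malicious))
    # Prints the text with the script tag neutralized:
    # <div class="llm-answer">Summary ready. &lt;script&gt;alert(document.cookie)&lt;/script&gt;</div>

Escaping at the point of output complements, rather than replaces, validating the prompt on the way in.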

How to securely use LLMs?

Securing LLMs for Developers
Here are five ways developers can secure LLMs:
1. Input validation: Ensure prompts are free from malicious code or instructions.
2. Security testing: Conduct thorough vulnerability assessments regularly.
3. Restrict API connections: Connect only to trusted APIs.
4. Validate LLM outputs: Implement solid output handling mechanisms to prevent data exposure.
5. Advanced techniques: Use methods like homomorphic encryption or secure multi-party computation to further protect data.
We'll present these strategies and more with examples, aiming to give developers practical ways to secure their LLM APIs; a brief illustrative sketch of a few of these controls appears below.
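To make points 1, 3, and 4 concrete, here is a minimal Python sketch; the host allowlist, the injection-pattern list, and the expected JSON shape are illustrative assumptions rather than recommendations from the talk:

import json
import re
from urllib.parse import urlparse

# Hypothetical policy values, for illustration only.
TRUSTED_API_HOSTS = {"api.openai.com", "api.anthropic.com"}
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?previous instructions",   # common prompt-injection phrasing
    r"<script\b",                             # HTML/JS payloads
]

def validate_prompt(prompt: str) -> str:
    """Point 1: reject prompts that contain obvious injection markers."""
    for pattern in SUSPICIOUS_PATTERNS:
        if re.search(pattern, prompt, re.IGNORECASE):
            raise ValueError(f"Prompt rejected by pattern: {pattern}")
    return prompt

def check_api_endpoint(url: str) -> str:
    """Point 3: only allow calls to an allowlist of trusted LLM API hosts."""
    host = urlparse(url).hostname
    if host not in TRUSTED_API_HOSTS:
        raise ValueError(f"Untrusted LLM API host: {host}")
    return url

def validate_output(raw_output: str) -> dict:
    """Point 4: treat the model's reply as untrusted; parse and check it
    against the structure the application expects before acting on it."""
    data = json.loads(raw_output)  # fails fast on non-JSON replies
    if not isinstance(data, dict) or "answer" not in data:
        raise ValueError("LLM output missing expected 'answer' field")
    return data

if __name__ == "__main__":
    check_api_endpoint("https://api.openai.com/v1/chat/completions")
    validate_prompt("Summarize this invoice for the finance team.")
    print(validate_output('{"answer": "Total due: $1,240"}'))

Pattern-based prompt filtering is a coarse first line of defense; it works best combined with output-side validation rather than relied on alone.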

Speakers

Ayush Agarwal

Software Developer, Akto
Ayush Agarwal is currently working as a Software Developer at Akto. He has over 4 years of experience working on scalable and complex systems. He built the API Security testing framework at Akto, which involved developing a new language in the form of YAML. This language empowers...

Avneesh Hota

Software Developer, Akto.io
Avneesh has a solid background in API security and a keen interest in the emerging field of Large Language Models (LLMs). With three years at akto.io, he's been pivotal in enhancing their API testing frameworks and tackling diverse technological architectures. Moving beyond mere security, he's also exploring the responsible use of LLMs. Avneesh is known for his ability to demystify complex topics at key industry events like OWASP...


Friday June 28, 2024 3:30pm - 4:15pm WEST