A Step-by-Step Guide to Securing Large Language Models (LLMs)

Posted in Presentations

Securing large language models (LLMs) is essential in today's AI landscape. This talk frames LLMs as data compressors, examines the challenges that compressed training data poses, and shows how to trace data back to its origins. Join this session to learn how to keep LLMs from being trained on sensitive or biased data by implementing on-demand scanning, training automation, and a proxy system that withholds sensitive outputs.
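The output-withholding proxy mentioned above can be sketched as a minimal filter that scans a model's response before it reaches the user. This is an illustrative assumption, not the speaker's implementation: the pattern set, function name, and regex-based detection are placeholders for whatever detectors (PII classifiers, entity recognition) a real deployment would use.

```python
import re

# Illustrative patterns for sensitive data; a production proxy would rely
# on far broader detection than a couple of regexes.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def filter_output(text: str, redaction: str = "[REDACTED]") -> str:
    """Replace any detected sensitive spans before the response is returned."""
    for pattern in SENSITIVE_PATTERNS.values():
        text = pattern.sub(redaction, text)
    return text
```

In this sketch the proxy sits between the model and the client, so sensitive spans are redacted even if they slipped into the training data.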

Ravi Ithal


CTO, Normalyze
