PROMPT INJECTION

What Is a Prompt Injection Attack?

A prompt injection attack is a GenAI security threat where an attacker deliberately crafts and inputs deceptive text into a large language model (LLM) to manipulate its outputs. This type of attack exploits the model’s response generation process to achieve unauthorized actions, such as extracting confidential information, injecting false content, or disrupting the model’s intended…

Read More
Home
Courses
Services
Search