Description

The application unintentionally reveals sensitive parts of its internal system prompts in the output. This exposure could allow attackers to gain insights into the system's internal processing. Note: This alert may be a false positive as Large Language Models (LLMs) are known to occasionally hallucinate or generate responses that appear to contain system prompts but may not actually reflect the real system configuration.

Remediation

Implement stricter output handling controls to ensure sensitive prompt data is removed or obfuscated before response delivery.

References

Related Vulnerabilities