Description

The Large Language Model (LLM) has disclosed a list of internal tools and their descriptions upon receiving specially crafted queries. This information exposure can aid attackers in reconnaissance, potentially enabling more targeted attacks or further vulnerability enumeration.

Remediation

Configure the LLM to restrict or sanitize responses regarding its internal tools and capabilities. Implement validation checks or filters to detect and prevent sensitive information disclosure.

References

Related Vulnerabilities