Exploring Trustworthiness in Large Language Models Under the EU AI Act
This systematic mapping study evaluates the trustworthiness of large language models (LLMs) in the context of the EU AI Act, examining both their capabilities and the challenges they face. The study identifies significant gaps in how trustworthiness principles are applied across high-stakes domains and underscores the need for further research and development.