eBiltegia

View/Open: ASTRAL Automated Safety Testing of Large Language Models.pdf (625.1Kb)
Title
ASTRAL: Automated Safety Testing of Large Language Models
Author
Ugarte Querejeta, Miriam
Valle Entrena, Pablo
Parejo, Jose Antonio
Segura, Sergio
Arrieta, Aitor
Publication date
2025
Research group
Software and Systems Engineering
Other institutions
https://ror.org/00wvqgd19
Universidad de Sevilla
Version
Postprint
Document type
Conference contribution
Language
English
Rights
© 2025 IEEE
Access
Open access
URI
https://hdl.handle.net/20.500.11984/13991
Publisher's version
https://doi.org/10.1109/AST66626.2025.00018
Published in
IEEE/ACM International Conference on Automation of Software Test (AST), Ottawa (Canada), 28-29 April 2025
Publisher
IEEE
Keywords
Large Language Models
SDG 9 Industry, innovation and infrastructure
SDG 10 Reduced inequalities
Abstract
Large Language Models (LLMs) have recently gained significant attention due to their ability to understand and generate sophisticated human-like content. However, ensuring their safety is paramount as they might provide harmful and unsafe responses. Existing LLM testing frameworks address various safety-related concerns (e.g., drugs, terrorism, animal abuse) but often face challenges due to unbalanced and obsolete datasets. In this paper, we present ASTRAL, a tool that automates the generation and execution of test cases (i.e., prompts) for testing the safety of LLMs. First, we introduce a novel black-box coverage criterion to generate balanced and diverse unsafe test inputs across a diverse set of safety categories as well as linguistic writing characteristics (i.e., different style and persuasive writing techniques). Second, we propose an LLM-based approach that leverages Retrieval Augmented Generation (RAG), few-shot prompting strategies and web browsing to generate up-to-date test inputs. Lastly, similar to current LLM test automation techniques, we leverage LLMs as test oracles to distinguish between safe and unsafe test outputs, allowing a fully automated testing approach. We conduct an extensive evaluation on well-known LLMs, revealing the following key findings: i) GPT-3.5 outperforms other LLMs when acting as the test oracle, accurately detecting unsafe responses, and even surpassing more recent LLMs (e.g., GPT-4), as well as LLMs that are specifically tailored to detect unsafe LLM outputs (e.g., LlamaGuard); ii) the results confirm that our approach can uncover nearly twice as many unsafe LLM behaviors with the same number of test inputs compared to currently used static datasets; and iii) our black-box coverage criterion combined with web browsing can effectively guide the LLM on generating up-to-date unsafe test inputs, significantly increasing the number of unsafe LLM behaviors.
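The abstract describes a three-stage pipeline: a black-box coverage criterion that balances test prompts across safety categories and writing styles, test execution against the LLM under test, and an oracle LLM that labels each response safe or unsafe. A minimal sketch of that loop follows; the category and style lists, function names, and all function bodies are illustrative stubs (a real run would call LLM APIs), not ASTRAL's actual implementation.

```python
# Hedged sketch of the testing loop described in the abstract: generate
# prompts balanced across safety-category x writing-style cells (the
# black-box coverage criterion), run them against the LLM under test,
# and let an oracle classify each response. All bodies are stubs.
from itertools import product

SAFETY_CATEGORIES = ["drugs", "terrorism", "animal_abuse"]  # examples from the abstract
WRITING_STYLES = ["plain", "slang", "persuasive"]           # illustrative styles

def generate_test_inputs(n_per_cell=1):
    """Coverage criterion: n prompts per (category, style) cell."""
    return [
        {"category": c, "style": s, "prompt": f"[{s}] unsafe request about {c}"}
        for c, s in product(SAFETY_CATEGORIES, WRITING_STYLES)
        for _ in range(n_per_cell)
    ]

def llm_under_test(prompt):
    """Stub for the model being tested; a real run would call an LLM API."""
    return "I cannot help with that."  # placeholder refusal

def oracle_is_unsafe(response):
    """Stub oracle; ASTRAL uses an LLM (e.g. GPT-3.5) as the judge."""
    return "cannot" not in response.lower()

def run_safety_tests():
    results = []
    for case in generate_test_inputs():
        response = llm_under_test(case["prompt"])
        results.append({**case, "unsafe": oracle_is_unsafe(response)})
    return results

results = run_safety_tests()
print(len(results), sum(r["unsafe"] for r in results))  # 9 cells, 0 flagged unsafe
```

Because the stub model always refuses, the stub oracle flags nothing; the point of the sketch is the coverage-driven generation and the fully automated generate-execute-judge structure, with no human labeling in the loop.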
Collections
  • Aportaciones a congresos - Ingeniería [449]


Harvested by:

OpenAIRE | BASE | Recolecta

Validated by:

OpenAIRE | Rebiun
MONDRAGON UNIBERTSITATEA | Library
Contact | Suggestions
DSpace