Publication: An Adversarial Attack Approach on Financial LLMs Driven by Embedding-Similarity Optimization
Date
Citations
Can Türetken, A. (2024). An Adversarial Attack Approach on Financial LLMs Driven by Embedding-Similarity Optimization (Master's thesis, University of Zurich). https://doi.org/10.5167/uzh-262354
Abstract
Adversarial attacks on financial sentiment analysis models are a critical area of research within NLP. We introduce a novel white-box attack method that leverages a pre-trained general-purpose language model to generate high-quality, human-imperceptible attacks. Unlike existing methods that rely on training specialized adversarial models or computationally intensive gradient optimization routines, our approach employs carefully designed instructions and a novel embedding-similarity function to maintain semantic integrity while prod
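The abstract does not specify the embedding-similarity function itself. As a hedged illustration only, one common choice for measuring semantic closeness between an original sentence and its adversarial rewrite is cosine similarity over their embedding vectors; the vectors below are placeholders, not outputs of the thesis's actual encoder:

```python
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical embeddings of an original sentence and its adversarial
# rewrite; in practice these would come from a pre-trained encoder.
orig = np.array([0.2, 0.7, 0.1])
adv = np.array([0.25, 0.65, 0.12])

# A semantic-integrity constraint could require similarity above a
# threshold (0.95 here is an illustrative value, not from the thesis).
print(cosine_similarity(orig, adv) > 0.95)
```

This is only a sketch of the general idea; the thesis's objective may weight or optimize the similarity differently.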
Additional indexing
Creators (Authors)
Faculty
Item Type
Referees
Scope
Language
Publication date
Date available
Number of pages
OA Status