Some of my past work:
AI Projects
- Security of AI Systems: an in-depth study of the security risks of transfer learning, including backdoor and adversarial attacks, in collaboration with the German Federal Office for Information Security (BSI). publication
- BerDiBa (Berlin Digital Rail Operations): a holistic digital twin for rail operations using innovative signal-processing methods and artificial intelligence. My work focused on robustness assessment of AI modules in autonomous railway systems.
- Localized Classmaps: a new Explainable AI method that enables interpretation of the outputs of classification algorithms. code, publication
Talks
- WeAreDevelopers 2023: Machine Learning: Promising, but Perilous
- Automotive Software Development Meetup: Building Safety Cases for AI-based perception, May 2023
- PyData Yerevan 2022: The Explainability Problem. summary, video
Publications
(2024) Structuring a Training Strategy to Robustify Perception Models with Realistic Image Augmentations. arXiv preprint. Ahmed Hammam, Bharathwaj Krishnaswami Sreedhar, Nura Kawa, Tim Patzelt, Oliver De Candido.
(2022) Security of AI-Systems: Fundamentals - Provision or use of external data or trained models. Bundesamt für Sicherheit in der Informationstechnik (BSI).
(2022) Explainable AI (XAI) with Class Maps. Towards Data Science.
(2022) Defining Hydrogeological Site Similarity with Hierarchical Agglomerative Clustering. Groundwater. Kawa, N., Cucchi, K., Rubin, Y., Attinger, S., and Heße, F.
(2021) exPrior: An R Package for the Formulation of Ex-Situ Priors. The R Journal. Falk Heße, Karina Cucchi, Nura Kawa and Yoram Rubin.
(2019) Ex-situ priors: A Bayesian hierarchical framework for defining informative prior distributions in hydrogeology. Advances in Water Resources. Cucchi, K., Heße, F., Kawa, N., Wang, C., & Rubin, Y.