Olga Slizovskaia
Associate Principal AI Scientist
AstraZeneca, Centre for AI
Av. Diagonal, 615
Les Corts, 08028 Barcelona
Olga is a machine learning researcher with expertise in source separation, sound localization, voice conversion, music processing, and speech synthesis using deep learning and machine learning algorithms. In addition, Olga has experience in multimodal and audio-visual learning, time series analysis, and anomaly detection.
About
Olga joined AstraZeneca, Centre for AI as an Associate Principal AI Scientist in January 2023 to develop new and innovative approaches to analysing and interpreting audio data, leveraging the latest techniques in deep learning and machine learning. Her current work focuses on audio biomarker discovery and respiratory sound analysis.
Previously, she served as a Research Engineer at Voctro Labs (acquired by Voicemod) and was awarded a 3-year R&D Torres Quevedo Fellowship in 2022. She interned at Mitsubishi Electric Research Laboratories under the supervision of Gordon Wichern and Jonathan Le Roux, and at Telefonica Research collaborating with Joan Serrà and Ilias Leontiadis.
Olga received her PhD from Pompeu Fabra University in 2020, supervised by Emilia Gómez and Gloria Haro. Her research focused on audio-visual deep learning methods and was supported by the María de Maeztu Unit of Excellence. She earned her Master’s degree in Applied Mathematics and Computer Science from Moscow State University in 2013.
In her free time, Olga enjoys visiting bakeries, mountaineering, spending time with her dog, and Irish dancing.
Career
2023 – Associate Principal AI Scientist - AstraZeneca, Centre for AI
- Audio biomarker discovery and respiratory sound analysis with deep learning and machine learning techniques
2021 – Research Intern - Mitsubishi Electric Research Laboratories
- Research project on conditioned sound event localization and detection under the supervision of Gordon Wichern and Jonathan Le Roux
2020 – Research Engineer - Voctro Labs (acquired by Voicemod)
- Real-time voice processing and voice conversion R&D for Voicemod Voice Changer. Optimized autoregressive models to meet low-latency and low-resource-utilization requirements. Awarded a 3-year R&D Torres Quevedo Fellowship.
2020 – Doctoral Thesis Defense - Pompeu Fabra University
- PhD thesis: “Audio-visual deep learning methods for musical instrument classification and separation” under the supervision of Emilia Gómez and Gloria Haro
2018 – Research Intern - Telefonica Research
- Anomaly detection in time-series data using likelihood-based generative models (normalizing flows) collaborating with Joan Serrà and Ilias Leontiadis
2015 – Data Engineer - Data-Centric Alliance
- Design and development of machine learning models for online advertising
2013 – Data Engineer - Zvooq LLC (now Zvuk)
- Music data ingestion infrastructure development and support
2013 – Master’s degree - Applied Mathematics and Computer Science, Moscow State University
Publications
Complete list
For an up-to-date list of my publications, visit my Google Scholar profile.
Recent
Method and System for Sound Event Localization and Detection
G. Wichern, O. Slizovskaia, J. Le Roux
US Patent App. 17/687,866 (2023)
Locate this, not that: Class-conditioned sound event DOA estimation
O. Slizovskaia, G. Wichern, Z.-Q. Wang, J. Le Roux
In IEEE ICASSP (2022)
[arXiv] [doi]
Conditioned source separation for music instrument performances
O. Slizovskaia, G. Haro, E. Gómez
IEEE/ACM Transactions on Audio, Speech, and Language Processing (2021)
[arXiv] [doi]
Input complexity and out-of-distribution detection with likelihood-based generative models
J. Serrà, D. Álvarez, V. Gómez, O. Slizovskaia, J. F. Núñez, J. Luque
In International Conference on Learning Representations (ICLR) (2020)
[arXiv] [openreview]
Research
Current Research Interests
My research focuses on developing machine learning and deep learning techniques for improving machine perception of audio signals. This includes speech recognition, sound event detection, and music information retrieval.
CV
- CV updated Feb 2024.
Dissemination materials
- PhD Thesis:
Audio-visual deep learning methods for musical instrument classification and separation
[Manuscript] [Slides] [Defense Video]
- Conditioned Source Separation for Music Instrument Performances
Code
- See my GitHub profile for open-source code from some of my projects.
Contacts
For collaboration or any inquiries, you can reach out to me:
- Email: olga [dot] slizovskaia [at] astrazeneca [dot] com
- LinkedIn: Olga Slizovskaia
- GitHub: Veleslavia