Ewa Nowara
Senior Machine Learning Scientist at Prescient Design (Genentech)
Email  / 
Resume  / 
Bio  / 
Google Scholar  / 
LinkedIn
I am a Senior Machine Learning Scientist at Genentech on the
Prescient Design team.
I work on ML research for molecular drug design with a focus on oncology.
Prior to joining Prescient, I was a Research Scientist at Meta Reality Labs where I worked on generative models,
including diffusion models and variational autoencoders for 3D objects and content creation for AR & VR.
Before coming to Meta, I was a Postdoctoral Research Fellow at Johns Hopkins University working with Prof.
Rama Chellappa
on geo-localizing natural images.
I received my Ph.D. from Rice University in 2021, where I was fortunate to work with
Prof. Ashok Veeraraghavan
in the Computational Imaging Group. My Ph.D.
research focused on deep learning, computer vision, and computational imaging for robust unconstrained camera-based vital signs monitoring,
known as imaging photoplethysmography.
Publications
Seeing beneath the skin with computational photography
Ewa M. Nowara,
Daniel McDuff,
Ashok Veeraraghavan
Communications of the ACM, 2022
In this article, we review state-of-the-art physiological and medical imaging modalities that leverage recent advances in computational photography. We explain the principles behind them and discuss their advantages and limitations, with the goal of introducing this emerging field and its research avenues to a broader audience.
|
|
Where in the World is this Image? Transformer-based Geo-localization in the Wild
Shraman Pramanick, Ewa M. Nowara,
Josh Gleason,
Carlos D. Castillo,
Rama Chellappa
ECCV, 2022
arXiv
/
poster
/
video
We propose a novel approach to geo-localize an image taken anywhere in the world in unconstrained settings. Our model, called TransLocator, is a unified dual-branch Vision Transformer (ViT) that uses features from an RGB image and its semantic segmentation representation. The ViT architecture allows it to attend to tiny details over the entire image, and the semantic segmentation representation makes it robust even under extreme appearance variations.
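A minimal PyTorch sketch of the dual-branch idea follows; the module sizes, the shared encoder over concatenated RGB and segmentation tokens, and the mean-pooled geo-cell head are illustrative assumptions, not the published TransLocator architecture.

import torch
import torch.nn as nn

class DualBranchViT(nn.Module):
    # Toy dual-branch encoder: one branch embeds RGB patches, the other embeds
    # semantic-segmentation patches; a shared transformer processes both token
    # streams and a linear head predicts a geo-cell class.
    def __init__(self, patch_dim_rgb=768, patch_dim_seg=256, d_model=256,
                 n_patches=196, n_cells=1000):
        super().__init__()
        self.rgb_proj = nn.Linear(patch_dim_rgb, d_model)
        self.seg_proj = nn.Linear(patch_dim_seg, d_model)
        self.pos = nn.Parameter(torch.zeros(1, 2 * n_patches, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(d_model, n_cells)

    def forward(self, rgb_patches, seg_patches):
        # rgb_patches: (B, n_patches, patch_dim_rgb); seg_patches: (B, n_patches, patch_dim_seg)
        tokens = torch.cat([self.rgb_proj(rgb_patches), self.seg_proj(seg_patches)], dim=1)
        tokens = self.encoder(tokens + self.pos)
        return self.head(tokens.mean(dim=1))  # logits over geo-cells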
|
|
The Benefit of Distraction: Denoising Remote Vitals Measurements using Inverse Attention
Ewa M. Nowara,
Daniel McDuff,
Ashok Veeraraghavan
ICCV, 2021
arXiv
/
video
We exploit the idea that statistics of corruptions may be shared between the video regions that contain the signal of interest and those that do not. We use the inverse of an attention mask to generate a corruption estimate that is then used to denoise temporal observations.
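A rough NumPy sketch of that denoising step, assuming per-region temporal traces and a precomputed attention mask; the simple least-squares removal of the noise estimate is an illustrative simplification of the paper's denoising.

import numpy as np

def inverse_attention_denoise(pixels, attention):
    # pixels: (T, N) temporal traces for N video regions
    # attention: (N,) weights in [0, 1], high where the pulse signal is expected
    attn = attention / (attention.sum() + 1e-8)
    inv = 1.0 - attention
    inv = inv / (inv.sum() + 1e-8)
    signal_est = pixels @ attn   # attention-weighted trace (signal plus corruption)
    noise_est = pixels @ inv     # inverse-attention trace (mostly corruption)
    # remove the component of the signal estimate explained by the corruption estimate
    alpha = np.dot(signal_est, noise_est) / (np.dot(noise_est, noise_est) + 1e-8)
    return signal_est - alpha * noise_est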
|
|
Combining Magnification and Measurement for Non-Contact Cardiac Monitoring
Ewa M. Nowara,
Daniel McDuff,
Ashok Veeraraghavan
CVPR Workshops, 2021
We improve the generalizability of deep learning models for non-contact cardiac monitoring by augmenting the training set videos with "magnified" videos. These data augmentations are specifically geared towards revealing useful features for recovering the physiological signals.
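A simplified sketch of this style of augmentation, amplifying per-pixel temporal variations in a plausible heart-rate band; the Butterworth filter, band limits, and amplification factor are illustrative assumptions, not the magnification method used in the paper.

import numpy as np
from scipy.signal import butter, filtfilt

def magnify_video(frames, fps=30.0, low_hz=0.7, high_hz=3.0, amplification=10.0):
    # frames: (T, H, W, C) float array in [0, 1]; amplify subtle intensity
    # changes in the pulse frequency band and add them back to the video.
    b, a = butter(2, [low_hz / (fps / 2), high_hz / (fps / 2)], btype="band")
    bandpassed = filtfilt(b, a, frames, axis=0)  # per-pixel temporal bandpass
    return np.clip(frames + amplification * bandpassed, 0.0, 1.0)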
|
|
“Warm Bodies”: A Post-Processing Technique for Animating Dynamic Blood Flow on Photos and Avatars
Daniel McDuff,
Ewa M. Nowara
CHI, 2021
arXiv
/
video
We animate blood flow patterns to augment the appearance of synthetic avatars and photo-realistic faces based on a data-driven physiological model.
|
|
Systematic Analysis of Video-Based Pulse Measurement from Compressed Videos
Ewa M. Nowara,
Daniel McDuff,
Ashok Veeraraghavan
Biomedical Optics Express, 2021
video
We show that deep learning models can learn how noise at different video compression levels affects the physiological signals and are able to reliably recover vital signs from highly compressed videos, even in the presence of large motion.
|
|
Near-Infrared Imaging Photoplethysmography During Driving
Ewa M. Nowara,
Tim K. Marks,
Hassan Mansour,
Ashok Veeraraghavan
IEEE Transactions on Intelligent Transportation Systems, 2020
video
We demonstrate that narrow-band near-infrared (NIR) video recordings suppress most outside lighting variations and yield reliable heart rate estimates. We present a novel optimization algorithm, which we call AutoSparsePPG, that leverages the quasi-periodicity of physiological signals and achieves better performance than state-of-the-art methods.
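A toy sketch of how quasi-periodicity can be exploited: keep only a few dominant frequencies in the plausible heart-rate band and reconstruct the trace from them. The band limits and hard top-k selection are illustrative assumptions; the published AutoSparsePPG solves a joint sparse spectral optimization instead.

import numpy as np

def sparse_periodic_denoise(trace, fps=30.0, low_hz=0.7, high_hz=3.0, k=3):
    # trace: (T,) noisy pulse trace; keep the k strongest frequency bins inside
    # the plausible heart-rate band and reconstruct the signal from them.
    spectrum = np.fft.rfft(trace - trace.mean())
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / fps)
    in_band = (freqs >= low_hz) & (freqs <= high_hz)
    mags = np.where(in_band, np.abs(spectrum), 0.0)
    keep = np.argsort(mags)[-k:]          # indices of the dominant in-band peaks
    sparse = np.zeros_like(spectrum)
    sparse[keep] = spectrum[keep]
    return np.fft.irfft(sparse, n=len(trace))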
|
|
A Meta-Analysis of the Impact of Skin Tone and Gender on Non-Contact Photoplethysmography Measurements
Ewa M. Nowara,
Daniel McDuff,
Ashok Veeraraghavan
CVPR Workshops, 2020
video
We evaluate how much gender and skin tone affect vital signs estimation from video. We find that the performance drops significantly on videos of people with very dark skin tones, especially for machine learning algorithms.
|
|
PPG3D: Does 3D head tracking improve camera-based PPG estimation?
Genki Nagamatsu,
Ewa M. Nowara,
Amruta Pai,
Ashok Veeraraghavan,
Hiroshi Kawasaki
EMBC, 2020
We use 3D face tracking to estimate the position of facial landmarks with pixel-level accuracy to improve motion robustness of camera-based vital sign estimation.
|
|
Combating the Impact of Video Compression on Non-Contact Vital Sign Measurement Using Supervised Learning
Ewa M. Nowara,
Daniel McDuff
ICCV, 2019
We demonstrate that supervised deep learning can recover the very small skin intensity variations related to physiological signals even from highly compressed videos.
|
|
SparsePPG: Towards Driver Monitoring Using Camera-Based Vital Signs Estimation in Near-Infrared
Ewa M. Nowara,
Tim K. Marks,
Hassan Mansour,
Ashok Veeraraghavan
CVPR Workshops, 2018
We demonstrate the feasibility of using narrow-bandwidth near-infrared (NIR) active illumination at 940 nm for camera-based vital signs measurements. We develop a novel signal tracking and denoising algorithm (SparsePPG) based on Robust Principal Components Analysis and sparse frequency spectrum estimation.
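A compact sketch of the low-rank plus sparse decomposition behind that idea, using crude alternating singular-value and soft thresholding; the thresholds, iteration count, and this simple alternating scheme are illustrative assumptions, not the tuned algorithm from the paper.

import numpy as np

def robust_pca(M, lam=None, n_iter=50):
    # Decompose M (regions x time) into a low-rank part L (shared pulse structure)
    # and a sparse part S (outliers such as motion artifacts): M ~ L + S.
    lam = lam if lam is not None else 1.0 / np.sqrt(max(M.shape))
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(n_iter):
        # shrink singular values of the residual to update the low-rank component
        U, sig, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U * np.maximum(sig - 1.0, 0.0)) @ Vt
        # soft-threshold the remaining residual to update the sparse component
        R = M - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
    return L, S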
|
|
PPGsecure: Biometric Presentation Attack Detection Using Photoplethysmograms
Ewa M. Nowara,
Ashutosh Sabharwal,
Ashok Veeraraghavan
Face and Gesture, 2017
We develop a machine learning system to prevent face spoofing attacks by detecting and analyzing the heartbeat signal in face videos.
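A minimal sketch of this kind of liveness check: extract a pulse trace from the face region, featurize its frequency spectrum, and train a standard classifier. The green-channel trace, the spectral features, and the SVM are illustrative assumptions, not the exact pipeline from the paper.

import numpy as np
from sklearn.svm import SVC

def spectral_features(trace, n_bins=32):
    # trace: (T,) mean green-channel intensity of the face region over time
    spectrum = np.abs(np.fft.rfft(trace - trace.mean()))
    spectrum = spectrum / (spectrum.sum() + 1e-8)  # normalize the power distribution
    return spectrum[:n_bins]                       # low-frequency bins cover the pulse band

def train_liveness_classifier(traces, labels):
    # traces: list of (T,) face traces; labels: 1 = live face, 0 = spoof (photo/screen)
    X = np.stack([spectral_features(t) for t in traces])
    return SVC(kernel="rbf").fit(X, np.asarray(labels))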
|
Patents
T Marks, H Mansour, E Nowara, Y Nakamura, A Veeraraghavan
System and Method for Remote Measurements of Vital Signs of a Person in a Volatile Environment
US Patent App. 17/199,696, 2021
T Marks, H Mansour, E Nowara, Y Nakamura, A Veeraraghavan
System and Method for Remote Measurements of Vital Signs
US Patent App. 16/167,668, 2019
|
Awards and Honors
- Featured in the RSIP Vision Magazine as a Woman of the Month in Computer Vision (2022)
- Invited speaker, Microsoft Research AI Breakthroughs (2020)
- Best graduate poster and demo award at ECE Corporate Affiliates Day at Rice for “SparsePPG: Towards Driver Monitoring Using Camera-Based Vital Signs Estimation in Near-Infrared.” (2019)
- Ken Kennedy Institute for Information Technology Schlumberger Graduate Fellowship (2017-18)
- Selected attendee, Doctoral Consortium at Automatic Face and Gesture Recognition (2017)
- Selected attendee, CRA-W (Computing Research Association) Grad Cohort (2016)
- Texas Instruments Fellowship (2015)
- Presidential Award (given to top 14 graduating seniors) (2015)
- Best undergraduate talk award in Computational Chemistry at GCURS (Gulf Coast Undergraduate Regional Symposium) at Rice University (2015) [poster].
- Research featured in St. Mary’s University’s ‘Gold and Blue’ magazine: ‘Polish Student’s Research Looks for Clues to Parkinson’s Disease’ by Chris Jarvis (2012, pages 10-11)