Hello! I'm Hayley, and I'm a Computer Science PhD student at the University of Southern California (USC). I am excited about problems concerning human perception of the world and how symbolic representations of knowledge can facilitate learning in a new domain through information transfer and generalization across domains and modalities.
My research lies at the intersection of representation learning and information theory, inspired by the way our perceptual system integrates multimodal sensory inputs by identifying invariant semantics.
My current guiding questions are:
How do we, as intelligent agents, understand observations from multiple modalities (e.g., images, audio signals, and written texts)?
How do we extract and build representations of the semantics that are invariant across these multimodal observations?
How do we formally represent a model's behaviors and measure their characteristics so that we can distinguish one model from another?
I am developing generative models that jointly learn the analysis and synthesis processes of multimodal data. My most recent work introduces a generative model with disentangled representations that learns spatial semantics from map tiles collected from diverse sources such as satellite imagery, Google Street Map, and custom rendering engines. I am also interested in understanding how semantic information flows during the processing of observations from multiple modalities, using tools from deep learning and thermodynamic approaches to information flow.
At USC, I work with Prof. Laurent Itti at iLab and at ISI's VIMAL. I previously worked with Prof. Craig Knoblock at ISI's Center on Knowledge Graphs and with Prof. Yao-Yi Chiang at the Spatial Computing and Informatics Lab.
Before USC, I studied at the Massachusetts Institute of Technology (MIT), where I earned my Bachelor's and Master's degrees in Electrical Engineering and Computer Science (EECS) with a minor in Mathematics. During my Master's, I concentrated on Artificial Intelligence and worked under the joint guidance of Prof. Regina Barzilay, Prof. Wojciech Matusik, and Dr. Julian Straub. My main projects were (1) non-rigid image registration of mammograms for breast cancer detection and (2) 3D reconstruction of human arms for efficient lymphedema screening. You can find out more about them here.
Please see the project page for more details on my work.
I use Python (e.g., PyTorch, NumPy, scikit-learn) for machine learning projects and C++ for hardware systems (e.g., Microsoft Kinect and Intel RealSense), as in this project.
Besides working on my projects, I enjoy being in nature and trying out different sports. Along the way, I became a certified scuba diver and have gone skydiving over the Czech Republic! I also enjoy biking and swimming -- they help me connect with the dimension of life that is not about thinking and analyzing, and remind me that we are more than our thoughts and minds. I enjoy sharing such experiences with friends :)
I occasionally write here to process and share what I learn. I would love to hear from you if you have any questions or thoughts :)