Small Simplicity

Understanding Intelligence from a Computational Perspective

About me


CV | GitHub (@cocoaaa) | Publications | Projects

Contact

Happy to meet new friends and share ideas!
Feel free to contact me at:
courriel [at] domain

Hello! I'm Hayley, a Computer Science PhD student at the University of Southern California (USC). I am excited about problems regarding human perception of the world and how symbolic representations of knowledge can facilitate learning in a new domain via information transfer and generalization across domains and modalities.

My research lies at the intersection of representation learning and information theory, inspired by the way our perceptual system integrates multimodal sensory inputs by identifying invariant semantics.

My current guiding questions are:

  • How do we, as intelligent agents, understand observations from multiple modalities (e.g., images, audio signals, and written text)?
  • How do we extract and build representations of the semantics that are invariant across those multimodal observations?
  • How do we formally represent a model's behaviors and measure their characteristics to distinguish one model from another?

I am developing generative models that jointly learn the analysis and synthesis processes of multimodal data. My most recent work introduces a generative model with a disentangled representation that learns spatial semantics from map tiles collected from diverse sources such as satellites, Google Maps, and custom rendering engines. I am also interested in understanding how semantic information flows as a model processes observations from multiple modalities, using tools from deep learning and thermodynamic approaches to information flow.
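As a rough sketch of what such a model can look like (illustrative only, not the exact architecture from my work; the tile size, layer choices, and latent dimensions below are all assumptions), a VAE-style network can split a map tile's latent code into a "content" part meant to capture the spatial semantics shared across sources and a "style" part specific to each rendering:

```python
# Illustrative sketch, not the published model: a VAE whose latent code is
# split into a style-invariant "content" factor and a source-specific
# "style" factor. All sizes and layer choices here are assumptions.
import torch
import torch.nn as nn

class DisentangledTileVAE(nn.Module):
    def __init__(self, content_dim=32, style_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
        )
        feat_dim = 64 * 16 * 16
        # Separate heads emit (mean, log-variance) for each latent factor.
        self.to_content = nn.Linear(feat_dim, 2 * content_dim)
        self.to_style = nn.Linear(feat_dim, 2 * style_dim)
        self.decoder = nn.Sequential(
            nn.Linear(content_dim + style_dim, feat_dim),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # -> 64x64
            nn.Sigmoid(),
        )

    @staticmethod
    def reparameterize(stats):
        # Standard VAE reparameterization from (mean, log-variance).
        mu, logvar = stats.chunk(2, dim=1)
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

    def forward(self, x):
        h = self.encoder(x)
        z_content = self.reparameterize(self.to_content(h))  # shared semantics
        z_style = self.reparameterize(self.to_style(h))      # source-specific look
        return self.decoder(torch.cat([z_content, z_style], dim=1))

model = DisentangledTileVAE()
recon = model(torch.rand(4, 3, 64, 64))  # a batch of four 64x64 RGB tiles
```

The appeal of this factorization is testability: swapping z_style between two tiles of the same location rendered by different sources should re-render the same content in the other source's style, which is one way to probe whether the content code really captures style-invariant spatial semantics.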

Research Homes

Please see the project page for more details on my work.

Lingering Research Questions

In the bigger scheme, I am excited about problems regarding human perception of the world and how symbolic representations of knowledge can facilitate learning in a new domain via knowledge transfer across various domains and modalities. I'm continuously exploring these questions in my research:
  • How can intelligent agents learn with less supervision, particularly in vision and three-dimensional perception (spatial reasoning)?
    • via autonomously interacting with the environment
    • via incorporating external knowledge
    • via incorporating common sense reasoning
    My [project](#semantic_road_project) on road detection from satellite images explores this question using an external geospatial knowledge base (OpenStreetMap) and transfer learning.
  • How can such knowledge be represented in a more abstract form so that it can be reused for learning in different domains?
    • Knowledge Representation, Transfer Learning, Domain Adaptation

Software

I use Python (e.g., PyTorch, NumPy, scikit-learn) for machine learning projects and C++ for hardware systems (e.g., Microsoft Kinect and Intel RealSense), as in this project.


Besides working on my projects, I enjoy being in nature and trying out different sports. Along the way, I became a certified scuba diver and went skydiving in the Czech sky! I enjoy biking and swimming; they help me connect to the dimension that is not about thinking and analyzing, and remind me that we are more than our thoughts and minds. I enjoy sharing such experiences with friends :)

I occasionally write here to process and share what I learn. I would love to hear from you if you have any questions or thoughts :)