I am a graduate student at the Montreal Institute for Learning Algorithms (MILA), supervised by Aaron Courville and co-supervised by Laurent Charlin. My research centers on deep latent variable models and efficient approximate inference. My recent focus is on improving the expressivity of variational inference (see our ICML 2018 paper, NAF!), the optimization process of inference (our NIPS 2018 paper, AVO!), and understanding the training dynamics of generative models in general. I am also interested in meta-learning, natural language understanding, and reinforcement learning.
Here’s my one-page CV and Google Scholar page.
Publications
Conferences
- Improving Explorability in Variational Inference with Annealed Variational Objectives [arXiv]
- Chin-Wei Huang, Shawn Tan, Alexandre Lacoste, Aaron Courville
- to be presented at NIPS 2018
- Neural Autoregressive Flows [arXiv] [bib] [slides]
- Chin-Wei Huang*, David Krueger*, Alexandre Lacoste, Aaron Courville
- presented at ICML 2018 (LONG TALK!)
- Neural Language Modeling by Jointly Learning Syntax and Lexicon [arXiv] [openreview] [bib]
- Yikang Shen, Zhouhan Lin, Chin-Wei Huang, Aaron Courville
- presented at ICLR 2018
Pre-prints
- Generating Contradictory, Neutral, and Entailing Sentences [arXiv]
- Yikang Shen, Shawn Tan, Chin-Wei Huang, Aaron Courville
- Bayesian Hypernetworks [arXiv] [openreview] [DLRL video] [bib]
- David Krueger*, Chin-Wei Huang*, Riashat Islam, Ryan Turner, Alexandre Lacoste, Aaron Courville
- presented at the Deep Learning and Reinforcement Learning Summer School (’17)
- presented at the Montreal AI Symposium (’17)
- presented at the NIPS (’17) workshop on Bayesian Deep Learning (BDL)
Workshops
- Facilitating Multimodality in Normalizing Flows [BDL]
- Chin-Wei Huang*, David Krueger*, Aaron Courville
- presented at the NIPS (’17) workshop on Bayesian Deep Learning (BDL)
- Sequentialized Sampling Importance Resampling and Scalable IWAE [BDL] [bib]
- Chin-Wei Huang, Aaron Courville
- presented at the NIPS (’17) workshop on Bayesian Deep Learning (BDL)
- Learnable Explicit Density for Continuous Latent Space and Variational Inference [arXiv] [padl] [poster] [bib]
- Chin-Wei Huang, Ahmed Touati, Laurent Dinh, Michal Drozdzal, Mohammad Havaei, Laurent Charlin, Aaron Courville
- presented at the ICML (’17) workshop on Principled Approaches to Deep Learning (PADL)
Symposiums
- Deconstructive Defense Against Adversarial Attacks [poster]
- Chin-Wei Huang*, Nan Rosemary Ke*, Chris Pal
- presented at the Montreal AI Symposium (’17)
- Data Imputation with Latent Variable Models
- Michal Drozdzal, Mohammad Havaei, Chin-Wei Huang, Laurent Charlin, Nicolas Chapados, Aaron Courville
- presented at the Montreal AI Symposium (’17)
Technical reports
- Multilabel Topic Model and User-Item Representation for Personalized Display of Review [report]
- Chin-Wei Huang, Pierre-André Brousseau
- A final project report for IFT6266 (Probabilistic Graphical Models, 2016A)
Talks
- Autoregressive Flows for Image Generation and Density Estimation [slides]
- presented at the 2018 AI Summer School: Vision and Learning @ NTHU
- presented at the Speech Processing and Machine Learning Lab @ NTU