Chenlin Meng

I publish under the name Chenlin Meng.

chenlin AT stanford.edu

I am a CS PhD student at Stanford University, advised by Prof. Stefano Ermon. I am interested in the broad applications of generative AI.

Google Scholar
Twitter

Education

Stanford University
Sep. 2020 - Present, Computer Science PhD
Stanford University
Sep. 2016 - Jun. 2020, Mathematics B.S.
• With distinction

Selected Publications

  • On Distillation of Guided Diffusion Models (CVPR 2023, Award candidate)
    Chenlin Meng, Robin Rombach, Ruiqi Gao, Diederik P. Kingma, Stefano Ermon, Jonathan Ho, and Tim Salimans
    Diffusion models typically require tens to hundreds of model evaluations to generate one image. Using our approach, we only need 2-4 evaluations, drastically reducing the sampling cost. Our approach is also highly effective for video generation and is used in Imagen Video.
    [Paper | Code (Coming soon)]
  • Denoising Diffusion Implicit Models (ICLR 2021)
    Jiaming Song, Chenlin Meng and Stefano Ermon
    Diffusion models typically require hundreds of denoising steps to generate one image, which is expensive and often infeasible in real-world settings. This paper is one of the earliest works to reduce the number of denoising steps of diffusion models from >1000 to <100 (a minimal sketch of the deterministic update appears after this list). It has been widely used in Stable Diffusion, DALL-E 2, Imagen, GLIDE, eDiff-I, and many others.
    [Paper | Code]
  • D2C: Diffusion-Denoising Models for Few-shot Conditional Generation (NeurIPS 2021)
    Abhishek Sinha, Jiaming Song, Chenlin Meng, and Stefano Ermon
    D2C uses a learned diffusion-based prior over the latent representations to improve generation and contrastive self-supervised learning to improve representation quality. D2C can adapt to novel generation tasks conditioned on labels or manipulation constraints, by learning from as few as 100 labeled examples.
    [Paper | Project Page | Code]
  • Concrete Score Matching: Generalized Score Matching for Discrete Data (NeurIPS 2022)
    Chenlin Meng*, Kristy Choi*, Jiaming Song, and Stefano Ermon
    Representing probability distributions by the gradients of their density functions (e.g., in diffusion models) has proven effective in modeling a wide range of continuous data modalities. However, this representation is not applicable in discrete domains where the gradient is undefined. To address this, we propose an analogous score function called the "Concrete score", a generalization of the (Stein) score to discrete settings.
    [Paper | Code (Coming soon)]
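
For readers curious how DDIM cuts the number of denoising steps, below is a minimal sketch of the deterministic (eta = 0) DDIM-style update, not code from the paper; eps_model and alpha_bar are hypothetical stand-ins for a trained noise-prediction network and its cumulative noise schedule.

    import numpy as np

    def ddim_step(x_t, t, t_prev, eps_model, alpha_bar):
        # Predict the noise present in the current sample x_t.
        eps = eps_model(x_t, t)
        a_t, a_prev = alpha_bar[t], alpha_bar[t_prev]
        # Estimate the clean sample x_0 from x_t and the predicted noise.
        x0_pred = (x_t - np.sqrt(1.0 - a_t) * eps) / np.sqrt(a_t)
        # Jump directly to the earlier timestep t_prev without injecting
        # fresh noise, which is what lets the sampler skip most steps.
        return np.sqrt(a_prev) * x0_pred + np.sqrt(1.0 - a_prev) * eps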

Publications

(* denotes equal contribution)

Selected Computer Graphics Projects

  • Rendering Cotton Candy on the Cup

    Grand prize-winning project in the 2018 Rendering Competition. Modeled and built the scene Cotton Candy on the Cup in PBRT, a computer graphics rendering system. All the models in the scene were built from scratch. Composed the final scene and tuned the final image.

  • Rendering and Building the Beauty and the Beast scene

    Modeled and rendered the 3D scene Beauty and the Beast in the provided ray tracer. Built and textured all the 3D models (e.g., violin, rose, cloth, books, and water) from scratch. Composed the final scene and tuned the final image.

  • Hair Simulation in Blender

    Modeled and simulated hair motion in Blender.

Professional Services

  • Journal reviewer:
    JMLR, TMLR
  • Conference reviewer:
    ICLR, NeurIPS, ICML, AISTATS, UAI, CVPR, ICCV, ECCV, ACM SIGGRAPH

Selected Coursework

  • Math Courses:
    Differential Topology, Galois Theory, Combinatorics, Graph Theory, Abstract Algebra, Number Theory, Complex Analysis, Analysis on Manifolds, Mathematical Analysis, Discrete Mathematics, Numerical Analysis.
  • CS Courses:
    Computer Graphics, Machine Learning, Deep Learning, Analysis of Algorithms, Computer Systems.

Honors and Awards

Personal

I am a fan of art and 3D animation. I also love music, especially movie soundtracks.