Junhyuk So

I am a Ph.D. candidate at POSTECH CSE, advised by Prof. Eunhyeok Park.
I work on building efficient machine learning algorithms with real-world applications: vision, language, audio, and robotics.

I am a member of both Efficient Computing Lab and the Machine Learning Lab. Publications from our groups are available here (ECo) and here (ML).

Education. I started my Ph.D. in Computer Science and Engineering at POSTECH in 2022. Previously, I received my B.S. in Electrical and Computer Engineering from the University of Seoul.

Email  /  CV  /  Google Scholar  /  LinkedIn

profile photo

Research

I am primarily interested in enhancing the efficiency of machine learning algorithms, especially generative models. Most of my work involves designing efficient inference algorithms for vision generation. I am also deeply interested in general ML topics such as optimization and numerical methods.

Keywords: [Diffusion], [LLM/VLM], [Efficient ML], [Quantization], [Parallelization], [Multimodal]
Grouped Speculative Decoding for Autoregressive Image Generation
Junhyuk So, Juncheol Shin, Hyunho Kook, and Eunhyeok Park.
ICCV, 2025
arXiv / Code

Keywords: [VLM], [Efficient ML], [Parallelization], [Speculative Decoding]
PCM: Picard Consistency Model for Fast Parallel Sampling of Diffusion Models
Junhyuk So, Jiwoong Shin, Chaeyeon Jang, and Eunhyeok Park.
CVPR, 2025
Paper / arXiv / Code

Keywords: [Diffusion], [Efficient ML], [Parallelization]
FRDiff: Feature Reuse for Universal Training-free Acceleration of Diffusion Models
Junhyuk So*, Jungwon Lee*, and Eunhyeok Park.
ECCV, 2024
Paper / arXiv / Page / Code / Colab

Keywords: [Diffusion], [Efficient ML], [Caching]
Temporal Dynamic Quantization for Diffusion Models
Junhyuk So*, Jungwon Lee*, Daehyun Ahn, Hyungjun Kim, and Eunhyeok Park.
NeurIPS, 2023
Paper / arXiv / Code

Keywords: [Diffusion], [Efficient ML], [Quantization]
Geodesic Multi-Modal Mixup for Robust Fine-Tuning
Changdae Oh*, Junhyuk So*, Hoyoon Byun, YongTaek Lim, Minchul Shin, Jong-June Jeon, and Kyungwoo Song.
NeurIPS, 2023
Paper / arXiv / Code

Keywords: [Multimodal], [Mix-Up]
NIPQ: Noise Proxy-Based Integrated Pseudo-Quantization
Juncheol Shin*, Junhyuk So*, Sein Park, Seungyeop Kang, Sungjoo Yoo, and Eunhyeok Park.
CVPR, 2023
Paper / arXiv / Code

Keywords: [Efficient ML], [Quantization]
Robust Contrastive Learning With Dynamic Mixed Margin
Junhyuk So*, YongTaek Lim*, Yewon Kim*, Changdae Oh, and Kyungwoo Song.
IEEE Access, 2023
Paper

Keywords: [Multimodal], [Mix-Up]
Learning Fair Representation via Distributional Contrastive Disentanglement
Changdae Oh, Heeji Won, Junhyuk So, Taero Kim, Yewon Kim, Hosik Choi, and Kyungwoo Song.
KDD, 2022
Paper / arXiv / Code

Keywords: [VAE], [Fairness]
Exploiting Activation Sparsity for Fast CNN Inference on Mobile GPUs
Chanyoung Oh*, Junhyuk So*, Sumin Kim*, and Youngmin Yi.
ESWeek (CODES+ISSS) and ACM TECS (journal track), 2021
Paper

Keywords: [Efficient ML], [Pruning], [GPGPU]

Miscellanea

  • Academic Services:
  • Talks:
    • Recent Topics on Image Generation Acceleration @ SqueezeBits, June 30, 2025
  • Teaching:
    • TA: Introduction to Artificial Intelligence (CSED105)
    • TA: Implementation and Acceleration of Machine Learning (AIGS510-01)

Template borrowed from Jon Barron