Efficient Computing Lab.

The Efficient Computing Laboratory (ECL) is part of the Department of AI at the UST ETRI Campus, Gajeong-ro 218, Yuseong-gu, Daejeon, South Korea.


7-416 ETRI, Gajeong-ro 218, Yuseong-gu, Daejeon, South Korea

Welcome to the Efficient Computing Lab. We focus on energy efficiency, system optimization, user experience, and sustainable tech solutions. Our research interests are the following:

  • Model Compression: We are dedicated to improving the performance of machine learning models, particularly through model compression techniques. This involves reducing the complexity of existing models while maintaining or even improving their performance. We are interested in exploring different methods of model compression, such as pruning, quantization, and knowledge distillation. By doing so, we aim to make these models practical to deploy, even in resource-constrained environments.
  • AI Compiler: Our interest also extends to the field of AI compilers, with a focus on optimizing and automating the translation of high-level AI algorithms into efficient low-level machine code. We believe that efficient AI compilers can significantly reduce the computational power required, thus lowering energy consumption. Additionally, they can improve execution speed, thereby enhancing the overall user experience. Our aim is to investigate new compiler techniques, optimization methods, and tools that will make AI systems more efficient, sustainable, and widely applicable.
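To make the quantization interest above concrete, here is a minimal sketch of symmetric post-training int8 quantization, one of the compression techniques listed. The function names and the per-tensor scaling choice are illustrative assumptions, not the lab's actual method.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float32 weights to int8 using a single per-tensor scale.

    Illustrative sketch: symmetric quantization maps the largest
    absolute weight to 127, so every value fits in one signed byte.
    """
    scale = float(np.max(np.abs(weights))) / 127.0
    q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights for inference or error checks."""
    return q.astype(np.float32) * scale

w = np.random.randn(64, 64).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
print(q.nbytes, w.nbytes)  # int8 storage is 4x smaller than float32
```

The round trip loses at most half a quantization step per weight, which is why int8 inference often matches float32 accuracy closely while cutting memory traffic fourfold.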

We are looking for highly motivated students. If you are interested, please send your CV to leejaymin_at_etri_dot_re_dot_kr.

News

Aug 19, 2024 Mixed Non-linear quantization paper was accepted at ECCV Workshop-CADL. Congratulations:tada:
Aug 13, 2024 NEST Compiler for AI accelerators paper was accepted at ETRI Journal (IF 1.3). Congratulations:tada:
Jun 30, 2024 Visual Preference Inference paper was accepted at IEEE IROS 2024 as oral pitch and interactive presentation. Congratulations:tada:
May 17, 2024 “Quantization for hybrid vision transformers” was accepted at IEEE IoT Journal (IF:10.6, Top:2.2%):tada:
Apr 5, 2024 “Visual Preference Inference: An Image Sequence-Based Preference Reasoning in Tabletop Object Manipulation” was accepted at IEEE 2024 ICRA Workshop VLMNM:tada:
Nov 2, 2023 “ACLTuner: A Profiling-Driven Fast Tuning to Optimize Deep Learning Inference” was accepted at Machine Learning for Systems workshop (NeurIPS 2023):tada:
Aug 6, 2023 “Pipelining of a Mobile SoC and an External NPU for Accelerating CNN Inference” was accepted at IEEE Embedded Systems Letters:tada:

Selected Publications

2022

  1. CPrune: Compiler-informed model pruning for efficient target-aware DNN execution
    T. Kim, Yongin Kwon, Jemin Lee, Taeho Kim, and Sangtae Ha
    In European Conference on Computer Vision (ECCV), pp. 651–667, Oct. 23–27, 2022, BK-IF 2, Acceptance Rate 28% (1,650 papers accepted out of 5,803 submitted), Oct 2022

2020

  1. PASS: Reducing redundant notifications between a smartphone and a smartwatch for energy saving
    Jemin Lee, Uichin Lee, and Hyungshin Kim
    IEEE Transactions on Mobile Computing, (impact factor: 5.538, JCR20: Top 17%, telecommunications rank #16 out of 91), ISSN: 1536-1233, doi: https://doi.org/10.1109/TMC.2019.2930506, Nov 2020

2019

  1. Fire in your hands: Understanding thermal behavior of smartphones
    Soowon Kang, Hyeonwoo Choi, Sooyoung Park, Chunjong Park, Jemin Lee, Uichin Lee, and Sung-Ju Lee
    In The 25th Annual International Conference on Mobile Computing and Networking, pp. 1–16, Los Cabos, Mexico, 21–25 Oct. 2019, BK-IF 4, Acceptance Rate 19% (55 papers accepted out of 290 submitted), Oct 2019