Computation Efficient Learning Lab.
Efficient AI Technology
We focus on redesigning AI and learning technologies for superior computing efficiency in IoT, Big Data, and edge computing. We explore alternative computing solutions for future learning technology, including near-data computing, which pushes computation beyond traditional processors, and brain-inspired hyperdimensional computing, which closely models the ultimate efficient processor: the human brain.
On-Device Generative AI
We advance state-of-the-art Generative AI by developing sophisticated models, including large language models (LLMs) and diffusion models, and applying them to emerging fields such as system optimization and performance evaluation. Our work emphasizes techniques for model optimization and knowledge compression, facilitating efficient deployment across diverse application domains and driving innovation in modern system design.
Brain-inspired Hyperdimensional Computing
Hyperdimensional (HD) computing is an alternative computing method, grounded in theoretical neuroscience, that processes cognitive tasks in a lightweight and error-tolerant way. We develop neuro-symbolic AI based on HD computing to support various learning tasks, including supervised, unsupervised, and reinforcement learning.
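The lightweight, error-tolerant behavior described above can be illustrated with a minimal HD classification sketch. This is an illustrative example only, not the lab's actual framework: it assumes bipolar (+1/-1) hypervectors, builds class prototypes by bundling (superposition), and classifies a heavily corrupted query by similarity. All names (random_hv, bundle, noisy) are hypothetical.

```python
import numpy as np

D = 10000  # hypervector dimensionality; high D is what gives error tolerance
rng = np.random.default_rng(0)

def random_hv():
    """Draw a random bipolar (+1/-1) hypervector."""
    return rng.choice([-1, 1], size=D)

def bundle(hvs):
    """Superpose (add) hypervectors and re-binarize to form a class prototype."""
    return np.sign(np.sum(hvs, axis=0))

def similarity(a, b):
    """Cosine similarity between two hypervectors."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def noisy(hv, flips=1000):
    """Flip a subset of components to simulate noise/errors."""
    out = hv.copy()
    idx = rng.choice(D, size=flips, replace=False)
    out[idx] *= -1
    return out

# Two classes, each represented by a prototype bundled from noisy samples.
base_a, base_b = random_hv(), random_hv()
proto_a = bundle([noisy(base_a) for _ in range(5)])
proto_b = bundle([noisy(base_b) for _ in range(5)])

# Even with 20% of its components corrupted, the query is classified
# correctly: in high dimensions, random hypervectors are near-orthogonal,
# so similarity to the true class prototype dominates.
query = noisy(base_a, flips=2000)
pred = "A" if similarity(query, proto_a) > similarity(query, proto_b) else "B"
```

Because each hypervector distributes information holographically across thousands of components, flipping a sizable fraction of them only slightly reduces similarity to the correct prototype, which is the basis of HD computing's robustness on unreliable hardware.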
Learning with Alternative Computing
We rethink the role of machine learning in alternative computing paradigms. We explore diverse system-level solutions to seamlessly accelerate AI applications on next-generation computer architectures such as near-data computing, in-memory computing, and CXL (Compute Express Link). We also develop ML-driven system software to enhance the capability of traditional computers.
Advisor: Professor Yeseong Kim
Computation Efficient Learning Lab. Homepage
Recent Submissions
- FlexNeRFer: A Multi-Dataflow, Adaptive Sparsity-Aware Accelerator for On-Device NeRF Rendering
- Exploiting Boosting in Hyperdimensional Computing for Enhanced Reliability in Healthcare
- Late Breaking Results: Hyperdimensional Regression with Fine-Grained and Scalable Confidence-Based Learning
- Diffusion-Based Generative System Surrogates for Scalable Learning-Driven Optimization in Virtual Playgrounds
- Late Breaking Results: Dynamically Scalable Pruning for Transformer-Based Large Language Models
