In this paper, we introduce DiffNEST, a diffusion-based surrogate framework for scalable, learning-driven optimization in complex computing environments. The growing complexity of modern systems often renders traditional optimization techniques inefficient, while reinforcement learning (RL)-based methods struggle with high data collection costs and hardware constraints. DiffNEST employs a diffusion model to generate realistic, continuous system traces that reflect diverse workload characteristics, enabling rapid exploration of large optimization search spaces without reliance on physical hardware. A case study demonstrates that DiffNEST accelerates real-world optimization tasks, achieving up to 50% improvement in task-aware adaptive DVFS and 16% in multi-core cache allocation compared with RL approaches trained directly on physical hardware. Through fine-tuning, we show that DiffNEST can also be reused across multiple optimization tasks and workload domains, indicating its potential as a general-purpose surrogate modeling framework for system-level optimization. The code is publicly available to facilitate further research and development. © 2025 Copyright held by the owner/author(s)
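The abstract does not give implementation details of the trace generator. As a rough illustration of the general idea it describes, the sketch below shows a toy DDPM-style reverse (denoising) sampler that turns Gaussian noise into synthetic "system trace" vectors. All names here are hypothetical, and the linear denoiser is a placeholder standing in for a trained network; this is not the paper's actual method.

```python
import numpy as np

def sample_traces(n_traces, trace_len, n_steps=50, seed=0):
    """Toy DDPM-style reverse process: start from Gaussian noise and
    iteratively denoise toward synthetic trace vectors.
    The linear `eps_hat` below is a stand-in for a trained denoiser."""
    rng = np.random.default_rng(seed)
    betas = np.linspace(1e-4, 0.02, n_steps)        # noise schedule
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    x = rng.standard_normal((n_traces, trace_len))  # x_T ~ N(0, I)
    for t in reversed(range(n_steps)):
        eps_hat = 0.1 * x                           # placeholder for eps_theta(x_t, t)
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps_hat) / np.sqrt(alphas[t])
        noise = rng.standard_normal(x.shape) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise        # stochastic reverse step
    return x

traces = sample_traces(n_traces=4, trace_len=128)
print(traces.shape)  # (4, 128)
```

In a real surrogate setup, sampled traces of this kind would replace hardware-collected traces as the environment data for the downstream RL optimizer.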
Department of Electrical Engineering and Computer Science