Detail View

DC Field Value Language
dc.contributor.author Son, Eungang -
dc.contributor.author Song, Seungeon -
dc.contributor.author Kim, Bong-Seok -
dc.contributor.author Kim, Sangdong -
dc.contributor.author Lee, Jonghun -
dc.date.accessioned 2026-01-21T17:40:15Z -
dc.date.available 2026-01-21T17:40:15Z -
dc.date.created 2025-12-09 -
dc.date.issued 2025-11 -
dc.identifier.issn 2624-6120 -
dc.identifier.uri https://scholar.dgist.ac.kr/handle/20.500.11750/59387 -
dc.description.abstract Foot gesture recognition with a continuous-wave (CW) radar must be implemented on edge hardware under strict latency and memory budgets. Existing structured and unstructured pruning pipelines rely on iterative training–pruning–retraining cycles, which inflates search cost and makes them time-consuming. We propose a NAS-guided bisection hybrid pruning framework for CW-radar foot gesture recognition, built on a weight-shared supernet that encompasses both block and channel options. The method consists of three major steps. In the bisection-guided NAS structured pruning stage, the algorithm identifies the minimum number of retained blocks (equivalently, the maximum achievable sparsity) that satisfies the target accuracy under the specified FLOPs and latency constraints. Next, in the hybrid compression stage, global L1 percentile-based unstructured pruning and channel repacking are applied to further reduce memory usage. Finally, in the low-cost decision protocol stage, each pruning decision is evaluated with short fine-tuning (1–3 epochs) and partial validation (10–30% of the dataset) to avoid repeated full retraining. We further provide a unified theory of hybrid pruning, comprising a resource-aware objective, a logit-perturbation invariance bound for unstructured pruning, INT8 quantization, and repacking, a Hoeffding-based bisection decision margin, and a compression (code-length) generalization bound, which together explain when compressed models match baseline accuracy while meeting edge budgets. Radar return signals are processed with a short-time Fourier transform (STFT) to generate distinctive time–frequency spectrograms for each gesture (kick, swing, slide, tap). The proposed pruning method achieves 20–57% reductions in floating-point operations (FLOPs) and approximately 86% reductions in parameters while preserving recognition accuracy equivalent to the baseline.
Experimental results demonstrate that the pruned model maintains high gesture recognition performance at substantially lower computational cost, making it suitable for real-time deployment on edge devices. -
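The global L1 percentile-based unstructured pruning step described in the abstract can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: weights are plain arrays, and the function name `global_l1_prune` is hypothetical.

```python
import numpy as np

def global_l1_prune(weights, sparsity):
    """Zero out the globally smallest-|w| entries across all layers.

    weights: list of np.ndarray, one per layer
    sparsity: fraction of weights to remove (e.g. 0.86 for ~86%)
    """
    # Pool absolute values from every layer so one global L1
    # percentile threshold is shared by the whole network.
    all_mags = np.concatenate([np.abs(w).ravel() for w in weights])
    threshold = np.percentile(all_mags, sparsity * 100.0)
    # Keep only weights whose magnitude exceeds the global threshold.
    return [np.where(np.abs(w) > threshold, w, 0.0) for w in weights]

# Two toy "layers"; with sparsity=0.5, the 4 smallest of the
# 7 weights by magnitude are zeroed.
layers = [np.array([[0.5, -0.01], [0.2, 0.03]]),
          np.array([0.9, -0.04, 0.15])]
pruned = global_l1_prune(layers, sparsity=0.5)
```

Because the threshold is computed over the pooled magnitudes rather than per layer, layers whose weights are uniformly small end up sparser than layers with large weights, which is the usual motivation for a global (rather than layer-wise) percentile.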
dc.language English -
dc.publisher MDPI -
dc.title Radar Foot Gesture Recognition with Hybrid Pruned Lightweight Deep Models -
dc.type Article -
dc.identifier.doi 10.3390/signals6040066 -
dc.identifier.scopusid 2-s2.0-105025780060 -
dc.identifier.bibliographicCitation Signals, v.6, no.4 -
dc.description.isOpenAccess TRUE -
dc.subject.keywordAuthor gesture recognition -
dc.subject.keywordAuthor RADAR -
dc.subject.keywordAuthor STFT -
dc.subject.keywordAuthor Fourier transform -
dc.subject.keywordAuthor CW -
dc.subject.keywordAuthor network pruning -
dc.subject.keywordAuthor lightweight network -
dc.subject.keywordAuthor bisection-method -
dc.citation.number 4 -
dc.citation.title Signals -
dc.citation.volume 6 -
dc.description.journalRegisteredClass scopus -
dc.type.docType Article -


Related Researcher

Song, Seungeon

Division of Mobility Technology

