<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <title>Repository Collection: null</title>
  <link rel="alternate" href="https://scholar.dgist.ac.kr/handle/20.500.11750/6303" />
  <subtitle />
  <id>https://scholar.dgist.ac.kr/handle/20.500.11750/6303</id>
  <updated>2026-04-04T15:18:27Z</updated>
  <dc:date>2026-04-04T15:18:27Z</dc:date>
  <entry>
    <title>A 3.3-To-11V-Supply-Range 10μW/Ch Arbitrary-Waveform-Capable Neural Stimulator with Output-Adaptive-Self-Bias and Supply-Tracking Schemes in 0.18μm Standard CMOS</title>
    <link rel="alternate" href="https://scholar.dgist.ac.kr/handle/20.500.11750/57274" />
    <author>
      <name>Wie, Jeongyoon</name>
    </author>
    <author>
      <name>Jung, Sangwoo</name>
    </author>
    <author>
      <name>Seol, Taeryoung</name>
    </author>
    <author>
      <name>Kim, Geunha</name>
    </author>
    <author>
      <name>Lee, Sehwan</name>
    </author>
    <author>
      <name>Jang, Homin</name>
    </author>
    <author>
      <name>Kim, Samhwan</name>
    </author>
    <author>
      <name>Shin, Yeon Jae</name>
    </author>
    <author>
      <name>Jang, Jae Eun</name>
    </author>
    <author>
      <name>Kung, Jaeha</name>
    </author>
    <author>
      <name>George, Arup Kocheethra</name>
    </author>
    <author>
      <name>Lee, Junghyup</name>
    </author>
    <id>https://scholar.dgist.ac.kr/handle/20.500.11750/57274</id>
    <updated>2025-07-25T02:42:30Z</updated>
    <published>2024-04-23T15:00:00Z</published>
    <summary type="text">Title: A 3.3-To-11V-Supply-Range 10μW/Ch Arbitrary-Waveform-Capable Neural Stimulator with Output-Adaptive-Self-Bias and Supply-Tracking Schemes in 0.18μm Standard CMOS
Author(s): Wie, Jeongyoon; Jung, Sangwoo; Seol, Taeryoung; Kim, Geunha; Lee, Sehwan; Jang, Homin; Kim, Samhwan; Shin, Yeon Jae; Jang, Jae Eun; Kung, Jaeha; George, Arup Kocheethra; Lee, Junghyup
Abstract: Neurostimulation has emerged as the cornerstone that enables closed-loop brain-machine interfaces and targeted treatments for many neurological disorders. Regardless of the application, neurostimulators employ implanted electrodes to deliver charge pulses to tissues within safety limits to engender desired neural responses. However, as electrode-tissue impedance (ETI) varies widely (Fig. 1 (top)), neurostimulators should operate over a wide supply range to ensure both therapeutic effectiveness and safety [1]. When ETI is large, a higher supply is needed to provide adequate stimulation. However, when ETI is low, a low supply is necessary to minimize tissue damage from an excessive electric field and heat rise [1], [2]. Furthermore, limiting standby-mode power consumption to under 10μW/Ch ensures no tissue necrosis. Lastly, a stimulator capable of delivering arbitrary stimulation waveforms is also desirable for maximal efficiency and therapeutic effectiveness. © 2024 IEEE.</summary>
    <dc:date>2024-04-23T15:00:00Z</dc:date>
  </entry>
  <entry>
    <title>Noise Tolerance of an Energy-Scalable Deep Learning Model with Two Extreme Bit-Precisions</title>
    <link rel="alternate" href="https://scholar.dgist.ac.kr/handle/20.500.11750/47914" />
    <author>
      <name>Jung, Sangwoo</name>
    </author>
    <author>
      <name>Kung, Jaeha</name>
    </author>
    <id>https://scholar.dgist.ac.kr/handle/20.500.11750/47914</id>
    <updated>2025-07-25T04:25:14Z</updated>
    <published>2019-10-06T15:00:00Z</published>
    <summary type="text">Title: Noise Tolerance of an Energy-Scalable Deep Learning Model with Two Extreme Bit-Precisions
Author(s): Jung, Sangwoo; Kung, Jaeha
Abstract: In this paper, we perform a noise analysis on an energy-scalable deep learning model with two extreme bit-precisions, named MixNet. In real-world applications, many noisy inputs may be collected from mobile sensors, and training is performed on those noisy datasets. According to our initial set of experiments, MixNet has lower sensitivity to noise in the training dataset compared to the original high-precision CNN model. As a result, MixNet is expected to train better in a noisy environment than original high-precision deep learning models. © 2019 IEEE.</summary>
    <dc:date>2019-10-06T15:00:00Z</dc:date>
  </entry>
  <entry>
    <title>ZeBRA: Precisely Destroying Neural Networks with Zero-Data Based Repeated Bit Flip Attack</title>
    <link rel="alternate" href="https://scholar.dgist.ac.kr/handle/20.500.11750/47803" />
    <author>
      <name>Park, Dahoon</name>
    </author>
    <author>
<name>Kwon, Kon-Woo</name>
    </author>
    <author>
      <name>Im, Sunghoon</name>
    </author>
    <author>
      <name>Kung, Jaeha</name>
    </author>
    <id>https://scholar.dgist.ac.kr/handle/20.500.11750/47803</id>
    <updated>2025-07-25T02:43:38Z</updated>
    <published>2021-11-21T15:00:00Z</published>
    <summary type="text">Title: ZeBRA: Precisely Destroying Neural Networks with Zero-Data Based Repeated Bit Flip Attack
Author(s): Park, Dahoon; Kwon, Kon-Woo; Im, Sunghoon; Kung, Jaeha
Abstract: In this paper, we present Zero-data Based Repeated bit flip Attack (ZeBRA), which precisely destroys deep neural networks (DNNs) by synthesizing its own attack datasets. Many prior works on adversarial weight attacks require not only the weight parameters but also the training or test dataset when searching for vulnerable bits to attack. We propose to synthesize the attack dataset, named distilled target data, by utilizing the statistics of batch normalization layers in the victim DNN model. Equipped with the distilled target data, our ZeBRA algorithm can search for vulnerable bits in the model without accessing the training or test dataset. Thus, our approach makes the adversarial weight attack more fatal to the security of DNNs. Our experimental results show that, on average, 2.0x (CIFAR-10) and 1.6x (ImageNet) fewer bit flips are required to destroy DNNs compared to the previous attack method. Our code is available at https://github.com/pdh930105/ZeBRA. © 2021. The copyright of this document resides with its authors. It may be distributed unchanged freely in print or electronic forms.</summary>
    <dc:date>2021-11-21T15:00:00Z</dc:date>
  </entry>
  <entry>
    <title>Adaptive Input-to-Neuron Interlink Development in Training of Spike-Based Liquid State Machines</title>
    <link rel="alternate" href="https://scholar.dgist.ac.kr/handle/20.500.11750/46930" />
    <author>
      <name>Hwang, Sangwoo</name>
    </author>
    <author>
      <name>Lee, Junghyup</name>
    </author>
    <author>
      <name>Kung, Jaeha</name>
    </author>
    <id>https://scholar.dgist.ac.kr/handle/20.500.11750/46930</id>
    <updated>2025-07-25T04:10:39Z</updated>
    <published>2021-05-23T15:00:00Z</published>
    <summary type="text">Title: Adaptive Input-to-Neuron Interlink Development in Training of Spike-Based Liquid State Machines
Author(s): Hwang, Sangwoo; Lee, Junghyup; Kung, Jaeha
Abstract: In this paper, we present a novel approach to developing input-to-neuron interlinks to achieve better accuracy in spike-based liquid state machines (LSMs). Energy-efficient spiking neural networks suffer from lower accuracy in image classification compared to deep learning models. Previous LSM models randomly connect input neurons to excitatory neurons in a liquid. This limits the expressive power of a liquid model, as a large portion of excitatory neurons become inactive and never fire. To overcome this limitation, we propose an adaptive interlink development method that achieves 3.2% higher classification accuracy than a static LSM model of 3,200 neurons. Also, our hardware implementation on an FPGA improves performance by 3.16∼4.99× or 1.47∼3.95× over CPU or GPU. © 2021 IEEE</summary>
    <dc:date>2021-05-23T15:00:00Z</dc:date>
  </entry>
</feed>

