<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <title>Repository Collection</title>
  <link rel="alternate" href="https://scholar.dgist.ac.kr/handle/20.500.11750/12473" />
  <subtitle />
  <id>https://scholar.dgist.ac.kr/handle/20.500.11750/12473</id>
  <updated>2026-04-05T02:03:25Z</updated>
  <dc:date>2026-04-05T02:03:25Z</dc:date>
  <entry>
    <title>Optimizing Write Fidelity of MRAMs by Alternating Water-filling Algorithm</title>
    <link rel="alternate" href="https://scholar.dgist.ac.kr/handle/20.500.11750/17287" />
    <author>
      <name>Kim, Yongjune</name>
    </author>
    <author>
      <name>Jeon, Yoocharn</name>
    </author>
    <author>
      <name>Choi, Hyeokjin</name>
    </author>
    <author>
      <name>Guyot, Cyril</name>
    </author>
    <author>
      <name>Cassuto, Yuval</name>
    </author>
    <id>https://scholar.dgist.ac.kr/handle/20.500.11750/17287</id>
    <updated>2025-07-25T03:22:23Z</updated>
    <published>2022-08-31T15:00:00Z</published>
    <summary type="text">Title: Optimizing Write Fidelity of MRAMs by Alternating Water-filling Algorithm
Author(s): Kim, Yongjune; Jeon, Yoocharn; Choi, Hyeokjin; Guyot, Cyril; Cassuto, Yuval
Abstract: Magnetic random-access memory (MRAM) is a promising memory technology due to its high density, non-volatility, and high endurance. However, achieving high memory fidelity incurs high write-energy costs, which must be reduced for large-scale deployment of MRAMs. In this paper, we formulate a biconvex optimization problem to optimize write fidelity under energy and latency constraints. The basic idea is to allocate non-uniform write pulses depending on the importance of each bit position. The fidelity measure we consider is the mean squared error (MSE), for which we optimize write pulses via alternating convex search (ACS). By casting the MRAM’s write operation as communication over parallel channels, we derive analytic solutions and propose an alternating water-filling algorithm. The proposed alternating water-filling algorithm is computationally more efficient than the original ACS while yielding identical solutions. Since the formulated biconvex problem is non-convex, neither the original ACS nor the proposed algorithm guarantees global optimality. However, the MSEs obtained by the proposed algorithm are comparable to those obtained by sophisticated global nonlinear programming solvers. Furthermore, we prove that our algorithm reduces the MSE exponentially with the number of bits per word. For an 8-bit accessed word, the proposed algorithm reduces the MSE by a factor of 21. We also evaluate MNIST classification assuming that the model parameters of deep neural networks are stored in MRAMs. The numerical results show that the optimized write pulses achieve a 40% write-energy reduction at the same classification accuracy.</summary>
    <dc:date>2022-08-31T15:00:00Z</dc:date>
  </entry>
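The alternating water-filling algorithm described in the abstract above builds on the classic water-filling allocation over parallel channels. The following is a minimal illustrative sketch of that classic building block, not the paper's MRAM-specific formulation; the function name, the noise-level/budget parameters, and the bisection tolerance are all assumptions for illustration.

```python
def water_filling(noise, budget, iters=100):
    """Classic water-filling over parallel channels: allocate power
    p_i = max(0, mu - noise_i) so that sum(p_i) equals the budget.
    The water level mu is found by bisection on a bracketing interval."""
    lo, hi = min(noise), max(noise) + budget  # mu is guaranteed to lie here
    for _ in range(iters):
        mu = (lo + hi) / 2.0
        total = sum(max(0.0, mu - n) for n in noise)
        if total > budget:
            hi = mu  # water level too high: allocation exceeds the budget
        else:
            lo = mu  # water level too low: budget not fully used
    return [max(0.0, mu - n) for n in noise]
```

Channels with low noise sit "deeper" and receive more power; channels whose noise exceeds the water level receive none, which mirrors allocating longer write pulses to more important bit positions.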
  <entry>
    <title>On the Efficient Estimation of Min-Entropy</title>
    <link rel="alternate" href="https://scholar.dgist.ac.kr/handle/20.500.11750/13820" />
    <author>
      <name>Kim, Yongjune</name>
    </author>
    <author>
      <name>Guyot, Cyril</name>
    </author>
    <author>
      <name>Kim, Young-Sik</name>
    </author>
    <id>https://scholar.dgist.ac.kr/handle/20.500.11750/13820</id>
    <updated>2025-07-25T02:44:47Z</updated>
    <published>2021-03-31T15:00:00Z</published>
    <summary type="text">Title: On the Efficient Estimation of Min-Entropy
Author(s): Kim, Yongjune; Guyot, Cyril; Kim, Young-Sik
Abstract: Min-entropy is a widely used metric for quantifying the randomness of generated random numbers in cryptographic applications; it measures the difficulty of guessing the most likely output. An important min-entropy estimator is the compression estimator of NIST Special Publication (SP) 800-90B, which relies on Maurer’s universal test. In this paper, we propose two min-entropy estimators that improve computational complexity and estimation accuracy by leveraging two variations of Maurer’s test: Coron’s test (for Shannon entropy) and Kim’s test (for Rényi entropy). First, we propose a min-entropy estimator based on Coron’s test, which is computationally more efficient than the compression estimator while maintaining its estimation accuracy. The second proposed estimator relies on Kim’s test, which computes the Rényi entropy; it improves both estimation accuracy and computational complexity. We analytically characterize the bias-variance tradeoff, which depends on the order of the Rényi entropy. Taking this tradeoff into account, we observe that order two is a suitable choice and focus on min-entropy estimation based on the collision entropy (i.e., Rényi entropy of order two). The min-entropy estimate derived from the collision entropy has a closed-form solution, whereas neither the compression estimator nor the proposed estimator based on Coron’s test does. By leveraging the closed-form solution, we also propose a lightweight estimator that processes data samples in an online manner. Numerical evaluations demonstrate that the first proposed estimator achieves the same accuracy as the compression estimator with much less computation, while the estimator based on the collision entropy further improves the accuracy and reduces the computational complexity.</summary>
    <dc:date>2021-03-31T15:00:00Z</dc:date>
  </entry>
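The closed-form route from collision entropy to min-entropy mentioned in the abstract above can be illustrated for a binary source: estimate the collision probability from sample frequencies, solve for the most-likely symbol probability, and take its negative log. This is an illustrative sketch under that binary assumption, not the estimator specified in the paper or in SP 800-90B; the function name is hypothetical.

```python
import math
from collections import Counter

def min_entropy_from_collision(bits):
    """Illustrative binary-source estimator: estimate the collision
    probability p_c = sum_i p_i^2 via plug-in sample frequencies,
    solve p^2 + (1 - p)^2 = p_c for the most-likely symbol
    probability p, and return the min-entropy H_inf = -log2(p)."""
    n = len(bits)
    counts = Counter(bits)
    p_c = sum((c / n) ** 2 for c in counts.values())
    # Quadratic root; for a valid distribution p_c lies between 0.5 and 1,
    # and max() guards against tiny negative values from rounding.
    p = 0.5 * (1.0 + math.sqrt(max(0.0, 2.0 * p_c - 1.0)))
    return -math.log2(p)
```

For a balanced source the estimate is 1 bit per sample; any bias toward one symbol raises p_c and lowers the estimated min-entropy, matching the intuition that guessing becomes easier.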
  <entry>
    <title>Compression By and For Deep Boltzmann Machines</title>
    <link rel="alternate" href="https://scholar.dgist.ac.kr/handle/20.500.11750/12484" />
    <author>
      <name>Li, Qing</name>
    </author>
    <author>
      <name>Chen, Yang</name>
    </author>
    <author>
      <name>Kim, Yongjune</name>
    </author>
    <id>https://scholar.dgist.ac.kr/handle/20.500.11750/12484</id>
    <updated>2025-07-24T07:30:24Z</updated>
    <published>2020-11-30T15:00:00Z</published>
    <summary type="text">Title: Compression By and For Deep Boltzmann Machines
Author(s): Li, Qing; Chen, Yang; Kim, Yongjune
Abstract: We answer two questions in this work: what Deep Boltzmann Machines (DBMs) can do for compression, and vice versa. We show that (1) DBMs can be applied to learn the rate-distortion-approaching posterior as in the Blahut-Arimoto (BA) algorithm, and to construct a lossy source compression scheme based on the Deep AutoEncoder; (2) compression can improve DBMs&apos; training performance via compression-based denoising algorithms. The implementation of the BA algorithm in the form of DBMs is the foundation of both applications.</summary>
    <dc:date>2020-11-30T15:00:00Z</dc:date>
  </entry>
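The Blahut-Arimoto algorithm referenced in the abstract above alternates between two closed-form updates for the rate-distortion problem. The following is a minimal sketch of that standard alternating iteration, not the paper's DBM-based construction; the function name, the fixed iteration count, and the Lagrange-multiplier parameterization are assumptions for illustration.

```python
import math

def blahut_arimoto_rd(p_x, dist, beta, iters=200):
    """Standard Blahut-Arimoto iteration for rate-distortion: given a
    source distribution p_x, a distortion matrix dist[x][y], and a
    Lagrange multiplier beta, alternate between the optimal test
    channel q(y|x) and the output marginal r(y)."""
    nx, ny = len(p_x), len(dist[0])
    r = [1.0 / ny] * ny  # start from the uniform output marginal
    for _ in range(iters):
        # q(y|x) is proportional to r(y) * exp(-beta * d(x, y))
        q = []
        for x in range(nx):
            row = [r[y] * math.exp(-beta * dist[x][y]) for y in range(ny)]
            z = sum(row)
            q.append([v / z for v in row])
        # r(y) = sum_x p(x) * q(y|x)
        r = [sum(p_x[x] * q[x][y] for x in range(nx)) for y in range(ny)]
    return q, r
```

With a large beta (heavy distortion penalty) and Hamming distortion on a uniform binary source, the test channel approaches the identity, i.e., near-lossless reproduction.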
</feed>

