<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/12474">
    <title>Repository Collection</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/12474</link>
    <description />
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/58941" />
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/46915" />
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/46840" />
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/46827" />
      </rdf:Seq>
    </items>
    <dc:date>2026-04-04T13:37:07Z</dc:date>
  </channel>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/58941">
    <title>Low-Complexity Deep Convolutional Neural Networks on Fully Homomorphic Encryption Using Multiplexed Parallel Convolutions</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/58941</link>
    <description>Title: Low-Complexity Deep Convolutional Neural Networks on Fully Homomorphic Encryption Using Multiplexed Parallel Convolutions
Author(s): Lee, Eunsang; Lee, Joon-Woo; Lee, Junghyun; Kim, Young-Sik; Kim, Yongjune; No, Jong-Seon; Choi, Woosuk
Abstract: Recently, the standard ResNet-20 network was successfully implemented on the residue number system variant Cheon-Kim-Kim-Song (RNS-CKKS) fully homomorphic encryption scheme using bootstrapping, but the implementation lacks practicality due to high latency and a low security level. To improve the performance, we first minimize the total bootstrapping runtime using multiplexed parallel convolution, which collects sparse output data for multiple channels compactly. We also propose imaginary-removing bootstrapping to prevent deep neural networks from catastrophic divergence during approximate ReLU operations. In addition, we optimize level consumption and use lighter and tighter parameters. Simulation results show 4.67x lower inference latency and 134x less amortized runtime (runtime per image) for ResNet-20 compared to the state-of-the-art previous work, while achieving standard 128-bit security. Furthermore, we successfully implement ResNet-110 with high accuracy on the RNS-CKKS scheme for the first time.</description>
    <dc:date>2022-07-18T15:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/46915">
    <title>Boosting for Straggling and Flipping Classifiers</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/46915</link>
    <description>Title: Boosting for Straggling and Flipping Classifiers
Author(s): Cassuto, Yuval; Kim, Yongjune
Abstract: Boosting is a well-known method in machine learning for combining multiple weak classifiers into one strong classifier. When used in a distributed setting, accuracy is hurt by classifiers that flip or straggle due to communication and/or computation unreliability. While unreliability in the form of noisy data is well treated by the boosting literature, the unreliability of the classifier outputs has not been explicitly addressed. Protecting the classifier outputs with an error/erasure-correcting code requires reliable encoding of multiple classifier outputs, which is not feasible in common distributed settings. In this paper we address the problem of training boosted classifiers subject to straggling or flips at classification time. We propose two approaches: one based on minimizing the usual exponential loss, but in expectation over the classifier errors, and one based on defining and minimizing a new worst-case loss for a specified bound on the number of unreliable classifiers. © 2021 IEEE.</description>
    <dc:date>2021-07-16T15:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/46840">
    <title>High-Precision Bootstrapping for Approximate Homomorphic Encryption by Error Variance Minimization</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/46840</link>
    <description>Title: High-Precision Bootstrapping for Approximate Homomorphic Encryption by Error Variance Minimization
Author(s): Lee, Yongwoo; Lee, Joon-Woo; Kim, Young-Sik; Kim, Yongjune; No, Jong-Seon; Kang, HyungChul
Abstract: The Cheon-Kim-Kim-Song (CKKS) scheme (Asiacrypt’17) is one of the most promising homomorphic encryption (HE) schemes, as it enables privacy-preserving computing over real (or complex) numbers. It is known that bootstrapping is the most challenging part of the CKKS scheme. Further, homomorphic evaluation of modular reduction is the core of CKKS bootstrapping. As modular reduction cannot be represented by the addition and multiplication of complex numbers, approximate polynomials for modular reduction must be used. The best-known techniques (Eurocrypt’21) use a polynomial approximation for trigonometric functions and their composition. However, all the previous methods are based on an indirect approximation, and thus require large multiplicative depth to achieve high accuracy. This paper proposes a direct polynomial approximation of modular reduction for CKKS bootstrapping, which is optimal in error variance and depth. Further, we propose an efficient algorithm, namely the lazy baby-step giant-step (BSGS) algorithm, to homomorphically evaluate the approximate polynomial, utilizing the lazy relinearization/rescaling technique. The lazy BSGS algorithm reduces the computational complexity by half compared to the ordinary BSGS algorithm. The performance improvement for the CKKS scheme by the proposed algorithm is verified by implementation using HE libraries. The implementation results show that the proposed method requires a multiplicative depth of 10 for modular reduction to achieve state-of-the-art accuracy, while the previous methods require depths of 11 to 12. Moreover, we achieve higher accuracy within a small multiplicative depth, for example, 93-bit accuracy within multiplicative depth 11. © 2022, International Association for Cryptologic Research.</description>
    <dc:date>2022-05-31T15:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/46827">
    <title>Generalized Longest Repeated Substring Min-Entropy Estimator</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/46827</link>
    <description>Title: Generalized Longest Repeated Substring Min-Entropy Estimator
Author(s): Woo, Jiheon; Yoo, Chanhee; Kim, Young-Sik; Cassuto, Yuval; Kim, Yongjune
Abstract: The min-entropy is a widely used metric to quantify the randomness of generated random numbers; it measures the difficulty of guessing the most likely output. It is difficult to accurately estimate the min-entropy of a non-independent and identically distributed (non-IID) source. Hence, NIST Special Publication (SP) 800-90B adopts ten different min-entropy estimators and then conservatively selects the minimum value among the ten min-entropy estimates. Among these estimators, the longest repeated substring (LRS) estimator estimates the collision entropy instead of the min-entropy by counting the number of repeated substrings. Since the collision entropy is an upper bound on the min-entropy, the LRS estimator inherently provides overestimated outputs. In this paper, we propose two techniques to accurately estimate the min-entropy of a non-IID source. The first technique resolves the overestimation problem by translating the collision entropy into the min-entropy. Next, we generalize the LRS estimator by adopting the general Rényi entropy instead of the collision entropy (i.e., Rényi entropy of order two). We show that adopting a higher order can reduce the variance of min-entropy estimates. By integrating these techniques, we propose a generalized LRS estimator that effectively resolves the overestimation problem and provides stable min-entropy estimates. Theoretical analysis and empirical results support that the proposed generalized LRS estimator significantly improves the estimation accuracy, making it an appealing alternative to the current standard LRS estimator. © 2022 IEEE.</description>
    <dc:date>2022-06-26T15:00:00Z</dc:date>
  </item>
</rdf:RDF>

