<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/12472">
    <title>Repository Community: null</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/12472</link>
    <description />
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/58941" />
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/56724" />
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/46915" />
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/46840" />
      </rdf:Seq>
    </items>
    <dc:date>2026-04-05T02:03:25Z</dc:date>
  </channel>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/58941">
    <title>Low-Complexity Deep Convolutional Neural Networks on Fully Homomorphic Encryption Using Multiplexed Parallel Convolutions</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/58941</link>
    <description>Title: Low-Complexity Deep Convolutional Neural Networks on Fully Homomorphic Encryption Using Multiplexed Parallel Convolutions
Author(s): Lee, Eunsang; Lee, Joon-Woo; Lee, Junghyun; Kim, Young-Sik; Kim, Yongjune; No, Jong-Seon; Choi, Woosuk
Abstract: Recently, the standard ResNet-20 network was successfully implemented on the residue number system variant of the Cheon-Kim-Kim-Song (RNS-CKKS) fully homomorphic encryption scheme using bootstrapping, but the implementation lacks practicality due to high latency and a low security level. To improve performance, we first minimize the total bootstrapping runtime using multiplexed parallel convolution, which compactly collects sparse output data for multiple channels. We also propose imaginary-removing bootstrapping to prevent deep neural networks from catastrophically diverging during approximate ReLU operations. In addition, we optimize level consumption and use lighter and tighter parameters. Simulation results show 4.67x lower inference latency and 134x lower amortized runtime (runtime per image) for ResNet-20 compared to the state-of-the-art previous work, while achieving standard 128-bit security. Furthermore, we successfully implement ResNet-110 with high accuracy on the RNS-CKKS scheme for the first time.</description>
    <dc:date>2022-07-18T15:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/56724">
    <title>Federated Learning Method and Apparatus</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/56724</link>
    <description>Title: Federated Learning Method and Apparatus
Author(s): 김선형; 김용준; 최원정
Abstract: The present invention provides a federated learning method and apparatus that improve the communication efficiency of federated learning by using a loss function and an L1-norm regularization term, thereby enhancing the communication efficiency of federated learning.</description>
  </item>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/46915">
    <title>Boosting for Straggling and Flipping Classifiers</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/46915</link>
    <description>Title: Boosting for Straggling and Flipping Classifiers
Author(s): Cassuto, Yuval; Kim, Yongjune
Abstract: Boosting is a well-known machine learning method for combining multiple weak classifiers into one strong classifier. When used in a distributed setting, accuracy is hurt by classifiers that flip or straggle due to communication and/or computation unreliability. While unreliability in the form of noisy data is well treated in the boosting literature, unreliability of the classifier outputs has not been explicitly addressed. Protecting the classifier outputs with an error/erasure-correcting code requires reliable encoding of multiple classifier outputs, which is not feasible in common distributed settings. In this paper, we address the problem of training boosted classifiers subject to straggling or flips at classification time. We propose two approaches: one minimizes the usual exponential loss in expectation over the classifier errors, and the other defines and minimizes a new worst-case loss for a specified bound on the number of unreliable classifiers. © 2021 IEEE.</description>
    <dc:date>2021-07-16T15:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/46840">
    <title>High-Precision Bootstrapping for Approximate Homomorphic Encryption by Error Variance Minimization</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/46840</link>
    <description>Title: High-Precision Bootstrapping for Approximate Homomorphic Encryption by Error Variance Minimization
Author(s): Lee, Yongwoo; Lee, Joon-Woo; Kim, Young-Sik; Kim, Yongjune; No, Jong-Seon; Kang, HyungChul
Abstract: The Cheon-Kim-Kim-Song (CKKS) scheme (Asiacrypt’17) is one of the most promising homomorphic encryption (HE) schemes, as it enables privacy-preserving computation over real (or complex) numbers. Bootstrapping is known to be the most challenging part of the CKKS scheme, and the homomorphic evaluation of modular reduction is the core of CKKS bootstrapping. Because modular reduction cannot be represented by the addition and multiplication of complex numbers, approximate polynomials for modular reduction must be used. The best-known techniques (Eurocrypt’21) use a polynomial approximation of trigonometric functions and their composition. However, all the previous methods are based on indirect approximation and thus require substantial multiplicative depth to achieve high accuracy. This paper proposes a direct polynomial approximation of modular reduction for CKKS bootstrapping that is optimal in error variance and depth. Further, we propose an efficient algorithm, the lazy baby-step giant-step (BSGS) algorithm, to homomorphically evaluate the approximate polynomial, utilizing the lazy relinearization/rescaling technique. The lazy BSGS algorithm halves the computational complexity of the ordinary BSGS algorithm. The performance improvement for the CKKS scheme is verified by implementation using HE libraries. The implementation results show that the proposed method achieves state-of-the-art accuracy with a multiplicative depth of 10 for modular reduction, while the previous methods require depths of 11 to 12. Moreover, we achieve higher accuracy within a small multiplicative depth, for example, 93-bit precision within multiplicative depth 11. © 2022, International Association for Cryptologic Research.</description>
    <dc:date>2022-05-31T15:00:00Z</dc:date>
  </item>
</rdf:RDF>

