<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <title>Repository Collection</title>
  <link rel="alternate" href="https://scholar.dgist.ac.kr/handle/20.500.11750/4355" />
  <subtitle />
  <id>https://scholar.dgist.ac.kr/handle/20.500.11750/4355</id>
  <updated>2026-04-04T09:03:26Z</updated>
  <dc:date>2026-04-04T09:03:26Z</dc:date>
  <entry>
    <title>Image Broadcasting for Heterogeneous User Devices in MIMO Networks</title>
    <link rel="alternate" href="https://scholar.dgist.ac.kr/handle/20.500.11750/46982" />
    <author>
      <name>Jang, Soyoung</name>
    </author>
    <author>
      <name>Chang, Seok-Ho</name>
    </author>
    <author>
      <name>Kim, Minyeong</name>
    </author>
    <author>
      <name>Cho, Sunghyun</name>
    </author>
    <id>https://scholar.dgist.ac.kr/handle/20.500.11750/46982</id>
    <updated>2025-07-25T03:22:39Z</updated>
    <published>2019-05-21T15:00:00Z</published>
    <summary type="text">Title: Image Broadcasting for Heterogeneous User Devices in MIMO Networks
Author(s): Jang, Soyoung; Chang, Seok-Ho; Kim, Minyeong; Cho, Sunghyun
Abstract: This paper considers a multimedia broadcasting scenario in which two types of heterogeneous users, with different display resolutions and different numbers of antennas, are present in the service area. We propose an image broadcasting scheme that uses image super-resolution (SR) techniques, spatial diversity, and diversity-multiplexing tradeoff (DMT)-achieving codes. The proposed scheme broadcasts a low-resolution (LR) image to both types of users, along with a residual pixel-error map containing the high-frequency details of the high-resolution (HR) image. A user with an HR screen then employs SR to reconstruct an HR image from the received LR image and exploits the residual map to further enhance the image quality. Our scheme trains the neural network models of the deep-learning-based SR by taking the source coding rates of the images into account. Considering the relationship between the number of antennas and the screen resolution, which is constrained by the hardware space of user devices, the proposed scheme encodes the LR image with spatial diversity and encodes the residual map with DMT-achieving codes. Numerical evaluation shows that our scheme significantly outperforms the baseline strategy of broadcasting either HR or LR images. © 2019 IEEE.</summary>
    <dc:date>2019-05-21T15:00:00Z</dc:date>
  </entry>
  <entry>
    <title>Video Upright Adjustment and Stabilization</title>
    <link rel="alternate" href="https://scholar.dgist.ac.kr/handle/20.500.11750/12919" />
    <author>
      <name>Won, Jucheol</name>
    </author>
    <author>
      <name>Cho, Sunghyun</name>
    </author>
    <id>https://scholar.dgist.ac.kr/handle/20.500.11750/12919</id>
    <updated>2025-07-25T03:33:36Z</updated>
    <published>2019-09-09T15:00:00Z</published>
    <summary type="text">Title: Video Upright Adjustment and Stabilization
Author(s): Won, Jucheol; Cho, Sunghyun
Abstract: We propose a novel video upright adjustment method that can reliably correct slanted video content. Our approach combines deep learning and Bayesian inference to estimate accurate rotation angles from video frames. We train a convolutional neural network to obtain initial estimates of the rotation angles of the input video frames. Because these initial estimates are temporally inconsistent and inaccurate, we refine them with Bayesian inference: we analyze the estimation errors of the network and derive an error model. Based on the error model, we formulate video upright adjustment as a maximum a posteriori (MAP) problem in which consistent rotation angles are estimated from the initial estimates. Finally, we propose a joint approach to video stabilization and upright adjustment that minimizes information loss. Experimental results show that our video upright adjustment method can effectively correct slanted video content, and that our joint approach achieves visually pleasing results from shaky and slanted videos. © 2019. The copyright of this document resides with its authors. It may be distributed unchanged freely in print or electronic forms.</summary>
    <dc:date>2019-09-09T15:00:00Z</dc:date>
  </entry>
</feed>