<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
  <channel>
    <title>Repository Collection</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/4355</link>
    <description />
    <pubDate>Sat, 04 Apr 2026 09:03:00 GMT</pubDate>
    <dc:date>2026-04-04T09:03:00Z</dc:date>
    <item>
      <title>Image Broadcasting for Heterogeneous User Devices in MIMO Networks</title>
      <link>https://scholar.dgist.ac.kr/handle/20.500.11750/46982</link>
      <description>Title: Image Broadcasting for Heterogeneous User Devices in MIMO Networks
Author(s): Jang, Soyoung; Chang, Seok-Ho; Kim, Minyeong; Cho, Sunghyun
Abstract: This paper considers a multimedia broadcasting scenario in which two types of heterogeneous users with different display resolutions and different numbers of antennas stay in the service area. We propose an image broadcasting scheme that uses image super-resolution (SR) techniques, spatial diversity, and diversity-multiplexing tradeoff (DMT) achieving codes. The proposed scheme broadcasts a low-resolution (LR) image to both types of users, along with a residual pixel-error map containing the high-frequency details of the high-resolution (HR) image. A user with an HR screen then employs SR to reconstruct an HR image from the received LR image, and exploits the residual map to further enhance the image quality. Our scheme trains the neural network models of the deep learning-based SR by taking into account the source coding rates of the images. Considering the relationship between the number of antennas and the screen resolution, which is constrained by the hardware space of user devices, the proposed scheme encodes the LR image with spatial diversity and encodes the residual map with DMT-achieving codes. Numerical evaluation shows that our scheme significantly outperforms the baseline strategy of broadcasting either HR or LR images alone. © 2019 IEEE.</description>
      <pubDate>Tue, 21 May 2019 15:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://scholar.dgist.ac.kr/handle/20.500.11750/46982</guid>
      <dc:date>2019-05-21T15:00:00Z</dc:date>
    </item>
    <item>
      <title>Video Upright Adjustment and Stabilization</title>
      <link>https://scholar.dgist.ac.kr/handle/20.500.11750/12919</link>
      <description>Title: Video Upright Adjustment and Stabilization
Author(s): Won, Jucheol; Cho, Sunghyun
Abstract: We propose a novel video upright adjustment method that can reliably correct slanted video content. Our approach combines deep learning and Bayesian inference to estimate accurate rotation angles from video frames. We train a convolutional neural network to obtain initial estimates of the rotation angles of input video frames. Since these initial estimates are temporally inconsistent and inaccurate, we refine them using Bayesian inference: we analyze the estimation errors of the network and derive an error model. Based on the error model, we formulate video upright adjustment as a maximum a posteriori problem in which we estimate consistent rotation angles from the initial estimates. Finally, we propose a joint approach to video stabilization and upright adjustment that minimizes information loss. Experimental results show that our video upright adjustment method can effectively correct slanted video content, and that our joint approach achieves visually pleasing results from shaky and slanted videos. © 2019. The copyright of this document resides with its authors. It may be distributed unchanged freely in print or electronic forms.</description>
      <pubDate>Mon, 09 Sep 2019 15:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://scholar.dgist.ac.kr/handle/20.500.11750/12919</guid>
      <dc:date>2019-09-09T15:00:00Z</dc:date>
    </item>
  </channel>
</rss>

