
Full metadata record

DC Field Value Language
dc.contributor.author Kim, Jae-Yeul -
dc.contributor.author Ha, Jong-Eun -
dc.date.accessioned 2024-01-23T11:10:13Z -
dc.date.available 2024-01-23T11:10:13Z -
dc.date.created 2024-01-12 -
dc.date.issued 2023-12 -
dc.identifier.issn 2169-3536 -
dc.identifier.uri http://hdl.handle.net/20.500.11750/47647 -
dc.description.abstract Visual surveillance requires robust detection of foreground objects under challenging conditions such as abrupt lighting variation, stationary foreground objects, dynamic background objects, and severe weather. Most classical algorithms rely on background model images produced by statistical modeling of brightness changes over time. Because they have difficulty using global features, many false detections occur in stationary foreground regions and on dynamic background objects. Recent deep learning-based methods can reflect global characteristics more easily than classical methods, but they still need improvement in utilizing spatiotemporal information. We propose an algorithm that uses spatiotemporal information efficiently by adopting a split-and-merge framework. First, we split the spatiotemporal information of successive images into spatial and temporal parts using two sub-networks: a semantic network and a motion network. The separated information is then fused in a spatiotemporal fusion network. The proposed network thus consists of three sub-networks, which we denote MSF-NET (Motion and Semantic features Fusion NETwork). We also propose a method to train MSF-NET stably. Compared to the latest deep learning algorithms, MSF-NET achieves 9% and 13% higher FM on the LASIESTA and SBI datasets, respectively. We also designed MSF-NET to be lightweight enough to run in real time on a desktop GPU. © 2023 IEEE. -
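The split-and-merge idea from the abstract (a semantic branch for spatial features, a motion branch for temporal features, and a fusion step that merges both) can be sketched with toy numpy operations. This is an illustrative assumption only, not the authors' MSF-NET architecture: the branch computations, shapes, and function names (`semantic_features`, `motion_features`, `fuse`) are all hypothetical stand-ins for the paper's learned sub-networks.

```python
import numpy as np

def semantic_features(frame):
    """Toy spatial branch: normalized intensities of the most recent frame."""
    return frame.astype(np.float32) / 255.0

def motion_features(frames):
    """Toy temporal branch: mean absolute difference between consecutive frames."""
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0))
    return diffs.mean(axis=0) / 255.0

def fuse(sem, mot):
    """Toy fusion step: stack both branch outputs along a channel axis."""
    return np.stack([sem, mot], axis=0)

# A short clip of T=5 grayscale frames, each 4x4 pixels (T x H x W).
frames = np.random.randint(0, 256, size=(5, 4, 4))
fused = fuse(semantic_features(frames[-1]), motion_features(frames))
print(fused.shape)  # (2, 4, 4): one semantic channel + one motion channel
```

In the paper these three stages are learned sub-networks trained jointly; the sketch only mirrors the data flow (split spatiotemporal input into two streams, then merge).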
dc.language English -
dc.publisher Institute of Electrical and Electronics Engineers -
dc.title MSF-NET: Foreground Objects Detection With Fusion of Motion and Semantic Features -
dc.type Article -
dc.identifier.doi 10.1109/ACCESS.2023.3345842 -
dc.identifier.scopusid 2-s2.0-85181543199 -
dc.identifier.bibliographicCitation IEEE Access, v.11, pp.145551 - 145565 -
dc.description.isOpenAccess TRUE -
dc.subject.keywordAuthor Deep learning -
dc.subject.keywordAuthor foreground object detection -
dc.subject.keywordAuthor spatiotemporal information -
dc.subject.keywordAuthor visual surveillance -
dc.subject.keywordPlus BACKGROUND SUBTRACTION -
dc.subject.keywordPlus DETECTION ALGORITHMS -
dc.subject.keywordPlus NETWORK -
dc.citation.endPage 145565 -
dc.citation.startPage 145551 -
dc.citation.title IEEE Access -
dc.citation.volume 11 -
Files in This Item:
001132118200001.pdf

Other data / 0 B / Adobe PDF
Appears in Collections:
ETC 1. Journal Articles

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.
