Human Activity Classification Based on Point Clouds Measured by Millimeter Wave MIMO Radar with Deep Recurrent Neural Networks
Title
Human Activity Classification Based on Point Clouds Measured by Millimeter Wave MIMO Radar with Deep Recurrent Neural Networks
Issued Date
2021-06
Citation
Kim, Youngwook. (2021-06). Human Activity Classification Based on Point Clouds Measured by Millimeter Wave MIMO Radar with Deep Recurrent Neural Networks. IEEE Sensors Journal, 21(12), 13522–13529. doi: 10.1109/JSEN.2021.3068388
Type
Article
Author Keywords
deep convolutional neural networks; deep recurrent neural networks; Feature extraction; FMCW radar; Human activity classification; Millimeter wave radar; MIMO radar; point clouds; Radar; Radar antennas; Radar measurements; Shape; Three-dimensional displays
Keywords
Convolution; Convolutional neural networks; Deep neural networks; Millimeter waves; MIMO radar; Radar measurement; Angular resolution; Classification accuracy; Convolutional networks; Human activities; Human subjects; Time instances; Time varying; Virtual array; Recurrent neural networks
ISSN
1530-437X
Abstract
We investigate the feasibility of classifying human activities measured by a MIMO radar in the form of a point cloud. If a human subject is measured by a radar system with very high angular resolution in azimuth and elevation, scatterers from the body can be localized. When precisely represented, the individual points form a point cloud whose shape resembles that of the human subject. As the subject engages in various activities, the shapes of the point clouds change accordingly. We propose to classify human activities by recognizing these point cloud variations. To construct a dataset, we used an FMCW MIMO radar to measure 19 human subjects performing 7 activities. The radar had 12 TXs and 16 RXs, producing a 33×31 virtual array with approximately 3.5 degrees of angular resolution in azimuth and elevation. To classify human activities, we used a deep recurrent neural network (DRNN) combined with a two-dimensional convolutional network. The convolutional filters captured the point clouds' features at each time instance for sequential input into the DRNN, which recognized the time-varying signatures, producing a classification accuracy exceeding 97%.
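The pipeline the abstract describes — per-frame convolutional feature extraction over a point cloud, followed by a recurrent network over the frame sequence — can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the grid size, single random filter, untrained weights, and the simple tanh RNN (the paper's DRNN likely uses gated cells) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def voxelize(points, grid=(16, 16)):
    """Project a 3-D point cloud onto a 2-D occupancy grid (toy stand-in
    for the azimuth/elevation image formed by the MIMO virtual array)."""
    img = np.zeros(grid)
    # map the first two coordinates from [-1, 1] to grid indices
    idx = ((points[:, :2] + 1.0) / 2.0 * np.array(grid)).astype(int)
    idx = np.clip(idx, 0, np.array(grid) - 1)
    img[idx[:, 0], idx[:, 1]] = 1.0
    return img

def conv2d(img, kernel):
    """Valid-mode 2-D convolution with a single filter."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

class SimpleRNNClassifier:
    """Plain tanh RNN over per-frame feature vectors; softmax over classes."""
    def __init__(self, in_dim, hidden, n_classes):
        self.Wx = rng.normal(0.0, 0.1, (hidden, in_dim))
        self.Wh = rng.normal(0.0, 0.1, (hidden, hidden))
        self.Wo = rng.normal(0.0, 0.1, (n_classes, hidden))

    def forward(self, seq):
        h = np.zeros(self.Wh.shape[0])
        for x in seq:                       # one feature vector per frame
            h = np.tanh(self.Wx @ x + self.Wh @ h)
        logits = self.Wo @ h
        e = np.exp(logits - logits.max())   # numerically stable softmax
        return e / e.sum()

# 30 frames of a toy point cloud (50 random scatterers each)
frames = [rng.uniform(-1, 1, (50, 3)) for _ in range(30)]
kernel = rng.normal(0.0, 0.5, (3, 3))
features = [conv2d(voxelize(f), kernel).ravel() for f in frames]
model = SimpleRNNClassifier(in_dim=len(features[0]), hidden=32, n_classes=7)
probs = model.forward(features)            # class probabilities, 7 activities
```

With untrained random weights the output is meaningless; the sketch only shows the data flow (point cloud → 2-D grid → conv features per frame → recurrent aggregation → 7-way softmax, matching the 7 activities in the dataset).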
URI
http://hdl.handle.net/20.500.11750/15495
DOI
10.1109/JSEN.2021.3068388
Publisher
Institute of Electrical and Electronics Engineers
File Downloads

  • There are no files associated with this item.

Related Researcher

Oh, Daegun (오대건)

Division of Intelligent Robotics
