Papers - TAMAKI Toru
-
On the Performance Evaluation of Action Recognition Models on Transcoded Low Quality Videos International journal
Aoi Otani, Ryota Hashiguchi, Kazuki Omi, Norishige Fukushima, Toru Tamaki
2022.04
Authorship:Last author, Corresponding author Language:English Publishing type:Research paper (other academic)
In the design of action recognition models, the quality of the videos in the dataset is an important issue; however, the trade-off between quality and performance is often ignored. In general, action recognition models are trained and tested on high-quality videos, but in actual situations where action recognition models are deployed, the input videos cannot always be assumed to be of high quality. In this study, we report quantitative evaluations of action recognition models under the quality degradation associated with transcoding by JPEG and H.264/AVC. Experimental results are shown for the performance of pre-trained models on the transcoded validation videos of Kinetics400. The models are also trained on the transcoded training videos. From these results, we quantitatively show the degree of degradation of model performance with respect to the degradation of video quality.
DOI: 10.48550/arXiv.2204.09166
Other Link: https://doi.org/10.48550/arXiv.2204.09166
-
Model-agnostic Multi-Domain Learning with Domain-Specific Adapters for Action Recognition International journal
Kazuki Omi, Toru Tamaki
2022.04
Authorship:Last author, Corresponding author Language:English Publishing type:Research paper (other academic)
In this paper, we propose a multi-domain learning model for action recognition. The proposed method inserts domain-specific adapters between the domain-independent layers of a backbone network. Unlike a multi-head network that switches only the classification heads, our model switches the adapters as well as the heads, facilitating the learning of feature representations universal to multiple domains. Unlike prior works, the proposed method is model-agnostic and does not assume a specific model structure. Experimental results on three popular action recognition datasets (HMDB51, UCF101, and Kinetics-400) demonstrate that the proposed method is more effective than a multi-head architecture and more efficient than separately training a model for each domain.
DOI: 10.48550/arXiv.2204.07270
Other Link: https://doi.org/10.48550/arXiv.2204.07270
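The adapter-switching idea in the entry above can be sketched as follows. This is a minimal illustration, not the paper's actual architecture: the residual linear adapters, tanh activations, random weights, and layer dimensions are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_model(dims, domains, n_classes):
    """Build shared backbone weights plus per-domain adapters and heads."""
    shared = [rng.standard_normal((a, b)) * 0.1 for a, b in zip(dims, dims[1:])]
    adapters = {d: [rng.standard_normal((b, b)) * 0.01 for b in dims[1:]]
                for d in domains}
    heads = {d: rng.standard_normal((dims[-1], n_classes[d])) * 0.1
             for d in domains}
    return shared, adapters, heads

def forward(x, domain, shared, adapters, heads):
    """Run x through the shared layers, switching adapter and head by domain."""
    for W, A in zip(shared, adapters[domain]):
        h = np.tanh(x @ W)       # domain-independent backbone layer
        x = h + h @ A            # domain-specific adapter, applied residually
    return x @ heads[domain]     # domain-specific classification head
```

At inference time only the small per-domain adapters and heads are swapped; the backbone weights stay shared, which is what makes the approach cheaper than one separate model per domain.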
-
Vision Transformer with Cross-attention by Temporal Shift for Efficient Action Recognition International journal
Ryota Hashiguchi, Toru Tamaki
2022.04
Authorship:Last author, Corresponding author Language:English Publishing type:Research paper (other academic)
We propose Multi-head Self/Cross-Attention (MSCA), which introduces a temporal cross-attention mechanism for action recognition, based on the structure of the Multi-head Self-Attention (MSA) mechanism of the Vision Transformer (ViT). Simply applying ViT to each frame of a video can capture frame features but cannot model temporal features, while modeling temporal information with a CNN or Transformer is computationally expensive. TSM, which performs feature shifting, assumes a CNN and cannot take advantage of the ViT structure. The proposed model captures temporal information by shifting the Query, Key, and Value in the MSA calculation of ViT. This is efficient, requires no additional computational effort, and is a suitable structure for extending ViT along the temporal dimension. Experiments on Kinetics400 show the effectiveness of the proposed method and its superiority over previous methods.
DOI: 10.48550/arXiv.2204.00452
Other Link: https://doi.org/10.48550/arXiv.2204.00452
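The Query/Key/Value shifting described above can be sketched as a TSM-style channel shift applied before attention. This is an illustrative single-head numpy sketch under assumed details (fold ratio, zero padding at clip boundaries), not the paper's implementation.

```python
import numpy as np

def temporal_shift(x, fold_div=8):
    """Shift a fraction of channels along the temporal axis (TSM-style).

    x: array of shape (T, N, C) -- frames, tokens per frame, channels.
    The first C//fold_div channels take their values from the previous
    frame, the next C//fold_div from the next frame; the rest stay put.
    Boundary frames are zero-padded.
    """
    T, N, C = x.shape
    fold = C // fold_div
    out = np.zeros_like(x)
    out[1:, :, :fold] = x[:-1, :, :fold]               # shift from past frame
    out[:-1, :, fold:2 * fold] = x[1:, :, fold:2 * fold]  # shift from next frame
    out[:, :, 2 * fold:] = x[:, :, 2 * fold:]          # unshifted channels
    return out

def msca_attention(x, Wq, Wk, Wv):
    """Single-head attention where Q, K, V are temporally shifted,
    so per-frame attention mixes information across neighboring frames."""
    q = temporal_shift(x @ Wq)
    k = temporal_shift(x @ Wk)
    v = temporal_shift(x @ Wv)
    d = q.shape[-1]
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)       # softmax, stabilized
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ v
```

The shift itself is a pure memory movement, which is why the mechanism adds essentially no computation on top of per-frame attention.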
-
ObjectMix: Data Augmentation by Copy-Pasting Objects in Videos for Action Recognition International journal
Jun Kimata, Tomoya Nitta, Toru Tamaki
2022.04
Authorship:Last author, Corresponding author Language:English Publishing type:Research paper (other academic)
In this paper, we propose a data augmentation method for action recognition using instance segmentation. Although many data augmentation methods have been proposed for image recognition, few methods have been proposed for action recognition. Our proposed method, ObjectMix, extracts each object region from two videos using instance segmentation and combines them to create new videos. Experiments on two action recognition datasets, UCF101 and HMDB51, demonstrate the effectiveness of the proposed method and show its superiority over VideoMix, a prior work.
DOI: 10.48550/arXiv.2204.00239
Other Link: https://doi.org/10.48550/arXiv.2204.00239
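The copy-paste operation described above can be sketched per frame as follows. The instance-segmentation masks are assumed to be given (the paper obtains them from a segmentation model), and the pixel-ratio label mixing is a CutMix-style assumption for the sketch, not necessarily the paper's exact labeling rule.

```python
import numpy as np

def object_mix(frame_a, frame_b, mask_b):
    """Paste the object pixels of frame_b (where mask_b is True) onto frame_a.

    frame_a, frame_b: (H, W, 3) uint8 frames from two different videos.
    mask_b: (H, W) boolean instance-segmentation mask for frame_b's objects.
    Returns the mixed frame and lambda, the fraction of pixels still coming
    from frame_a (usable as a label mixing ratio).
    """
    mixed = frame_a.copy()
    mixed[mask_b] = frame_b[mask_b]
    lam = 1.0 - mask_b.mean()
    return mixed, lam

def object_mix_video(video_a, video_b, masks_b):
    """Apply object_mix frame by frame over two clips of equal length."""
    mixed, lams = zip(*(object_mix(a, b, m)
                        for a, b, m in zip(video_a, video_b, masks_b)))
    return np.stack(mixed), float(np.mean(lams))
```

Because whole object regions are pasted rather than rectangular patches (as in VideoMix), the mixed clip keeps object boundaries intact.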
-
On the Instability of Unsupervised Domain Adaptation with ADDA Reviewed International journal
Kazuki Omi and Toru Tamaki
International Workshop on Advanced Image Technology (IWAIT2022) 2022.01
Authorship:Last author, Corresponding author Language:English Publishing type:Research paper (international conference proceedings)
DOI: 10.1117/12.2625953
Other Link: https://doi.org/10.1117/12.2625953
-
Estimating the Number of Table Tennis Rallies in a Match Video Reviewed International journal
Shoma Kato, Akira Kito, Toru Tamaki and Hiroaki Sawano
International Workshop on Advanced Image Technology (IWAIT2022) 2022.01
Language:English Publishing type:Research paper (international conference proceedings)
DOI: 10.1117/12.2625945
Other Link: https://doi.org/10.1117/12.2625945
-
Classification with CNN features and SVM on Embedded DSP Core for Colorectal Magnified NBI Endoscopic Video Image Reviewed International journal
Masayuki Odagawa, Takumi Okamoto, Tetsushi Koide, Toru Tamaki, Shigeto Yoshida, Hiroshi Mieno, Shinji Tanaka
IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E105-A ( 1 ) 25 - 34 2022.01
Language:English Publishing type:Research paper (scientific journal)
DOI: 10.1587/transfun.2021EAP1036
Other Link: https://www.jstage.jst.go.jp/article/transfun/E105.A/1/E105.A_2021EAP1036/_article
-
Development of multi-class computer-aided diagnostic systems using the NICE/JNET classifications for colorectal lesions Reviewed International journal
Yuki Okamoto, Shigeto Yoshida, Seiji Izakura, Daisuke Katayama, Ryuichi Michida, Tetsushi Koide, Toru Tamaki, Yuki Kamigaichi, Hirosato Tamari, Yasutsugu Shimohara, Tomoyuki Nishimura, Katsuaki Inagaki, Hidenori Tanaka, Ken Yamashita, Kyoku Sumimoto, Shiro Oka, Shinji Tanaka
Journal of Gastroenterology and Hepatology 37 ( 1 ) 104 - 110 2022.01
Language:English Publishing type:Research paper (scientific journal) Publisher:Wiley
DOI: 10.1111/jgh.15682
Other Link: https://onlinelibrary.wiley.com/doi/10.1111/jgh.15682
-
Feasibility Study for Computer-Aided Diagnosis System with Navigation Function of Clear Region for Real-Time Endoscopic Video Image on Customizable Embedded DSP Cores Reviewed International journal
Masayuki Odagawa, Tetsushi Koide, Toru Tamaki, Shigeto Yoshida, Hiroshi Mieno, Shinji Tanaka
IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E105-A ( 1 ) 58 - 62 2022.01
Language:English Publishing type:Research paper (scientific journal)
DOI: 10.1587/transfun.2021EAL2044
Other Link: https://www.jstage.jst.go.jp/article/transfun/E105.A/1/E105.A_2021EAL2044/_article
-
Localization of Flying Bats from Multichannel Audio Signals by Estimating Location Map with Convolutional Neural Networks Reviewed
Kazuki Fujimori, Bisser Raytchev, Kazufumi Kaneda, Yasufumi Yamada, Yu Teshima, Emyo Fujioka, Shizuko Hiryu, and Toru Tamaki
Journal of Robotics and Mechatronics 33 ( 3 ) 515 - 525 2021.06
Language:English Publishing type:Research paper (scientific journal) Publisher:Fuji Technology Press Ltd
We propose a method that uses ultrasound audio signals from a multichannel microphone array to estimate the positions of flying bats. The proposed model uses a deep convolutional neural network that takes multichannel signals as input and outputs the probability maps of the locations of bats. We present experimental results using two ultrasound audio clips of different bat species and show numerical simulations with synthetically generated sounds.
Other Link: https://www.fujipress.jp/jrm/rb/robot003300030515/
-
A Hardware Implementation on Customizable Embedded DSP Core for Colorectal Tumor Classification with Endoscopic Video toward Real-Time Computer-Aided Diagnosis System Reviewed
Masayuki Odagawa, Takumi Okamoto, Tetsushi Koide, Toru Tamaki, Bisser Raytchev, Kazufumi Kaneda, Shigeto Yoshida, Hiroshi Mieno, Shinji Tanaka, Takayuki Sugawara, Hiroshi Toishi, Masayuki Tsuji, Nobuo Tamba
IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E104-A ( 4 ) 691 - 701 2021.04
Language:English Publishing type:Research paper (scientific journal) Publisher:IEICE
In this paper, we present a hardware implementation of a colorectal cancer diagnosis support system for colorectal endoscopic video images on a customizable embedded DSP. In an endoscopic video image, color shift, blurring, or reflection of light occurs in a lesion area, which affects the discrimination result by a computer. Therefore, in order to identify lesions robustly and classify them stably despite these video-frame-specific artifacts, we implement a computer-aided diagnosis (CAD) system for colorectal endoscopic images with Narrow Band Imaging (NBI) magnification, using Convolutional Neural Network (CNN) features and Support Vector Machine (SVM) classification. Since CNN and SVM require many multiply-accumulate (MAC) operations, we implement the proposed system on a customizable embedded DSP, which realizes high-speed MAC operations and parallel processing with Very Long Instruction Word (VLIW). Before implementation on the customizable embedded DSP, we profile and analyze the processing cycles of the CAD system and optimize the bottlenecks. We show the effectiveness of the real-time diagnosis support system on the embedded system for endoscopic video images. The prototyped system demonstrated real-time processing at video frame rate (over 30 fps @ 200 MHz) and more than 90% accuracy.
-
Spectral Rendering of Fluorescence on Translucent Materials Reviewed
Masaya Kugita, Kazufumi Kaneda, Bisser Raytchev, Toru Tamaki
The Journal of the Society for Art and Science 20 ( 1 ) 30 - 39 2021.03
Language:Japanese Publishing type:Research paper (scientific journal)
To render fluorescence, a wavelength-dependent phenomenon, we need to take into account the spectral distribution of light; moreover, for a translucent fluorescent medium we need to consider subsurface scattering. We propose a spectral rendering method for fluorescence on translucent materials under a global illumination environment. The proposed method is based on the physical properties of the fluorescence phenomenon and renders images with a Probabilistic Progressive Photon Mapping method. By separating the power of photons into three components (fluorescence, single scattering, and multiple scattering), our method reproduces fluorescence while taking into account the scattering and absorption of light under the surface. We also introduce a Photon Power Table, used for calculating the illuminance efficiently and deciding the outgoing point of light probabilistically. Finally, we show the usefulness of our method with rendered images.
Other Link: https://www.art-science.org/journal/v20n1/v20n1pp30/artsci-v20n1pp30.pdf
-
Rephrasing visual questions by specifying the entropy of the answer distribution Reviewed
Kento Terao, Toru Tamaki, Bisser Raytchev, Kazufumi Kaneda, Shin'ichi Satoh
IEICE Transactions on Information and Systems E103-D ( 11 ) 2362 - 2370 2020.11
Language:English Publishing type:Research paper (scientific journal) Publisher:IEICE
DOI: 10.1587/transinf.2020EDP7089
Other Link: https://search.ieice.org/bin/summary.php?id=e103-d_11_2362&category=D&year=2020&lang=E&abst=
-
An Entropy Clustering Approach for Assessing Visual Question Difficulty Reviewed International journal
Kento Terao, Toru Tamaki, Bisser Raytchev, Kazufumi Kaneda, Shin'ichi Satoh
IEEE Access 8 180633 - 180645 2020.09
Language:English Publishing type:Research paper (scientific journal) Publisher:IEEE
DOI: 10.1109/ACCESS.2020.3022063
Other Link: https://ieeexplore.ieee.org/document/9187418
-
A Study on Methods for Extracting Object and Human Regions in Images (in Japanese)
Toru Tamaki
2001.03