TAMAKI Toru

Affiliation Department

Department of Computer Science

Title

Professor

Homepage

https://sites.google.com/nitech.jp/tamaki-lab/

Degree

  • Doctor of Engineering ( 2001.03   Nagoya University )

Research Areas

  • Informatics / Perceptual information processing / Computer vision

External Career

  • Niigata University   Research Assistant

    2001.04 - 2005.09

    Country:Japan

  • Hiroshima University   Associate Professor

    2005.10 - 2020.10

    Country:Japan

  • ESIEE Paris, France   Laboratoire d'Informatique Gaspard-Monge (LIGM), Équipe Algorithmes, Architectures, Analyse et Synthèse d'images (A3SI)   Associate researcher (chercheur associé)

    2015.05 - 2016.01

    Country:France

Professional Memberships

  • The Institute of Electronics, Information and Communication Engineers (IEICE)

    1996.10

  • Information Processing Society of Japan (IPSJ)

    2005.06

  • IEEE

    2002.03

 

Papers

  • S3Aug: Segmentation, Sampling, and Shift for Action Recognition

    Taiki Sugiura, Toru Tamaki

    arXiv   1 - 9   2023.10

    Authorship:Last author, Corresponding author   Language:English   Publishing type:Research paper (bulletin of university, research institution)  

    Action recognition is a well-established area of research in computer vision. In this paper, we propose S3Aug, a video data augmentation method for action recognition. Unlike conventional video data augmentation methods that involve cutting and pasting regions from two videos, the proposed method generates new videos from a single training video through segmentation and label-to-image transformation. Furthermore, the proposed method modifies certain categories of label images by sampling to generate a variety of videos, and shifts intermediate features to enhance the temporal coherency between frames of the generated videos. Experimental results on the UCF101, HMDB51, and Mimetics datasets demonstrate the effectiveness of the proposed method, particularly for out-of-context videos of the Mimetics dataset.

    DOI: 10.48550/arXiv.2310.14556

    Other Link: https://arxiv.org/abs/2310.14556
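
    The pipeline described in this entry can be illustrated with a minimal sketch (not the authors' implementation). Here segmenter, label2image, and swap_classes are hypothetical stand-ins for a segmentation model, a label-to-image generator, and the set of category ids to resample; the feature-shift step for temporal coherency is omitted.

```python
# Hypothetical sketch of the S3Aug idea: segmentation -> label sampling ->
# label-to-image generation, producing a new video from a single training video.
import random
import torch

def s3aug_video(frames, segmenter, label2image, swap_classes):
    """frames: (T, 3, H, W) video tensor -> newly generated (T, 3, H, W) video."""
    # Sample one category remapping per video so all frames are edited consistently.
    mapping = {c: random.choice(swap_classes) for c in swap_classes}
    out = []
    for x in frames:
        labels = segmenter(x)  # (H, W) integer label map from the segmentation model
        for src, dst in mapping.items():
            # "Sampling": rewrite category `src` into the sampled category `dst`.
            labels = torch.where(labels == src, torch.full_like(labels, dst), labels)
        out.append(label2image(labels))  # label-to-image transformation
    return torch.stack(out)
```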

  • Joint learning of images and videos with a single Vision Transformer Reviewed International journal

    Shuki Shimizu, Toru Tamaki

    Proc. of 18th International Conference on Machine Vision and Applications (MVA)   1 - 6   2023.08

    Authorship:Last author, Corresponding author   Language:English   Publishing type:Research paper (international conference proceedings)  

    In this study, we propose a method for jointly learning images and videos with a single model. In general, images and videos are often trained with separate models. In this paper, we propose a method that takes a batch of images as input to a Vision Transformer (IV-ViT), together with a set of video frames that are temporally aggregated by late fusion. Experimental results on two image datasets and two action recognition datasets are presented.

    DOI: 10.23919/MVA57639.2023.10215661

    Other Link: https://ieeexplore.ieee.org/document/10215661/authors#authors
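
    A minimal sketch of the late-fusion idea in this entry, assuming a timm-style backbone that maps a batch of images (B, 3, H, W) to pooled features (B, D); the actual IV-ViT architecture may differ.

```python
# Sketch: one shared backbone for images and videos; video frames are processed
# frame-wise and temporally aggregated by averaging (late fusion). Illustrative only.
import torch.nn as nn

class JointImageVideoModel(nn.Module):
    def __init__(self, backbone, feat_dim, num_classes):
        super().__init__()
        self.backbone = backbone                 # shared by images and video frames
        self.head = nn.Linear(feat_dim, num_classes)

    def forward_images(self, images):            # images: (B, 3, H, W)
        return self.head(self.backbone(images))

    def forward_videos(self, videos):            # videos: (B, T, 3, H, W)
        b, t = videos.shape[:2]
        feats = self.backbone(videos.flatten(0, 1))   # frame-wise features (B*T, D)
        feats = feats.view(b, t, -1).mean(dim=1)      # late fusion: temporal average
        return self.head(feats)
```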

  • A Vision Transformer with Shift-based Temporal Cross-Attention for Efficient Action Recognition

    Ryota Hashiguchi, Toru Tamaki

    画像ラボ (Image Laboratory)   34 ( 5 )   9 - 16   2023.05

    Authorship:Last author, Corresponding author   Language:Japanese   Publishing type:Research paper (bulletin of university, research institution)  

    We propose Multi-head Self/Cross-Attention (MSCA), which introduces a temporal cross-attention mechanism for efficient action recognition. It is efficient, incurring no additional computation, and its structure is well suited to extending ViT in the temporal direction. Experiments on Kinetics400 demonstrate the effectiveness of the proposed method and its superiority over conventional methods.

    Other Link: https://www.nikko-pb.co.jp/products/detail.php?product_id=5529

  • Object-ABN: Learning to Generate Sharp Attention Maps for Action Recognition Reviewed

    Tomoya Nitta, Tsubasa Hirakawa, Hironobu Fujiyoshi, Toru Tamaki

    IEICE Transactions on Information and Systems   E106-D ( 3 )   391 - 400   2023.03

    Authorship:Last author, Corresponding author   Language:English   Publishing type:Research paper (scientific journal)   Publisher:The Institute of Electronics, Information and Communication Engineers  

    In this paper we propose an extension of the Attention Branch Network (ABN) that uses instance segmentation for generating sharper attention maps for action recognition. Methods for visual explanation such as Grad-CAM usually generate blurry maps which are not intuitive for humans to understand, particularly in recognizing actions of people in videos. Our proposed method, Object-ABN, tackles this issue by introducing a new mask loss that makes the generated attention maps close to the instance segmentation result. Furthermore, the Prototype Conformity (PC) loss and multiple attention maps are introduced to enhance the sharpness of the maps and improve the performance of classification. Experimental results with UCF101 and SSv2 show that the maps generated by the proposed method are much clearer qualitatively and quantitatively than those of the original ABN.

    DOI: 10.1587/transinf.2022EDP7138

    Other Link: https://www.jstage.jst.go.jp/article/transinf/E106.D/3/E106.D_2022EDP7138/_article
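
    The central ingredient is the mask loss pulling the attention map toward the instance segmentation result. A minimal sketch of that idea follows; the L1 form and nearest-neighbor resizing are illustrative assumptions, not the paper's exact definition.

```python
# Sketch of a mask loss that encourages an attention map to match an instance mask.
import torch.nn.functional as F

def mask_loss(attention_map, instance_mask):
    """attention_map: (B, 1, h, w) in [0, 1]; instance_mask: (B, 1, H, W) binary."""
    # Resize the mask to the attention-map resolution, then match the two maps.
    target = F.interpolate(instance_mask.float(),
                           size=attention_map.shape[-2:], mode="nearest")
    return F.l1_loss(attention_map, target)
```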

  • ObjectMix: Data Augmentation by Copy-Pasting Objects in Videos for Action Recognition Reviewed International journal

    Jun Kimata, Tomoya Nitta, Toru Tamaki

    ACM MM 2022 Asia (MMAsia '22)   2022.12

    Authorship:Last author, Corresponding author   Language:English   Publishing type:Research paper (international conference proceedings)  

    DOI: 10.1145/3551626.3564941

    Other Link: https://doi.org/10.1145/3551626.3564941

  • Temporal Cross-attention for Action Recognition Reviewed International journal

    Ryota Hashiguchi, Toru Tamaki

    2022.12

    Authorship:Last author, Corresponding author   Language:English   Publishing type:Research paper (international conference proceedings)  

    Feature shifts have been shown to be useful for action recognition with CNN-based models since the Temporal Shift Module (TSM) was proposed. It is based on frame-wise feature extraction with late fusion, and layer features are shifted along the time direction for temporal interaction. TokenShift, a recent model based on Vision Transformer (ViT), also uses the temporal feature shift mechanism, which, however, does not fully exploit the structure of Multi-head Self-Attention (MSA) in ViT. In this paper, we propose Multi-head Self/Cross-Attention (MSCA), which fully utilizes the attention structure. TokenShift is based on a frame-wise ViT with features temporally shifted with successive frames (at times t+1 and t-1). In contrast, the proposed MSCA replaces MSA in the frame-wise ViT, and some MSA heads attend to successive frames instead of the current frame. The computation cost is the same as for the frame-wise ViT and TokenShift, as it simply changes the target to which the attention is taken. There is a choice of which of key, query, and value are taken from the successive frames, so we experimentally compared these variants on Kinetics400. We also investigate other variants in which the proposed MSCA is used along the patch dimension of ViT instead of the head dimension. Experimental results show that one variant, MSCA-KV, performs best and is better than TokenShift by 0.1% and than ViT by 1.2%.

    Other Link: https://openaccess.thecvf.com/menu_other.html
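
    A minimal sketch of the MSCA-KV variant: in frame-wise multi-head attention, some heads take their keys and values from the neighboring frames (t-1 and t+1) via a temporal roll while queries stay at frame t, so the cost matches ordinary frame-wise attention. The head assignment and wrap-around boundary handling below are illustrative assumptions.

```python
# Sketch of MSCA-KV: head 0 attends to frame t-1, head 1 to frame t+1, and the
# remaining heads stay self-attention within frame t. Illustrative only.
import torch
import torch.nn as nn

class MSCAKV(nn.Module):
    def __init__(self, dim, num_heads):
        super().__init__()
        self.h, self.d = num_heads, dim // num_heads
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                  # x: (B, T, N, C) frame-wise patch tokens
        B, T, N, C = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Reshape to (B, T, heads, N, head_dim).
        q = q.view(B, T, N, self.h, self.d).transpose(2, 3)
        k = k.view(B, T, N, self.h, self.d).transpose(2, 3)
        v = v.view(B, T, N, self.h, self.d).transpose(2, 3)
        # Replace keys/values of head 0 (resp. head 1) with those of frame t-1 (t+1).
        k = torch.cat([k.roll(1, dims=1)[:, :, :1],
                       k.roll(-1, dims=1)[:, :, 1:2], k[:, :, 2:]], dim=2)
        v = torch.cat([v.roll(1, dims=1)[:, :, :1],
                       v.roll(-1, dims=1)[:, :, 1:2], v[:, :, 2:]], dim=2)
        attn = (q @ k.transpose(-2, -1) / self.d ** 0.5).softmax(dim=-1)
        out = (attn @ v).transpose(2, 3).reshape(B, T, N, C)
        return self.proj(out)
```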

  • Model-agnostic Multi-Domain Learning with Domain-Specific Adapters for Action Recognition Reviewed International journal

    Kazuki Omi, Jun Kimata, Toru Tamaki

    IEICE Transactions on Information and Systems   E105-D ( 12 )   2022.12

    Authorship:Last author, Corresponding author   Language:English   Publishing type:Research paper (scientific journal)   Publisher:IEICE  

    In this paper, we propose a multi-domain learning model for action recognition. The proposed method inserts domain-specific adapters between domain-independent layers of a backbone network. Unlike a multi-head network that switches classification heads only, our model switches not only the heads but also the adapters, to facilitate learning feature representations that are universal to multiple domains. Moreover, the proposed method is model-agnostic and, unlike prior works, does not assume specific model structures. Experimental results on three popular action recognition datasets (HMDB51, UCF101, and Kinetics-400) demonstrate that the proposed method is more effective than a multi-head architecture and more efficient than separately training models for each domain.

    DOI: 10.1587/transinf.2022EDP7058

    Other Link: https://search.ieice.org/bin/summary_advpub.php?id=2022EDP7058&category=D&lang=E&abst=
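
    A minimal sketch of the adapter-switching idea: domain-independent blocks are shared, and a per-domain adapter stack and head are selected by a domain index. The residual bottleneck adapter below is an illustrative choice, not necessarily the paper's exact form.

```python
# Sketch of multi-domain learning with shared blocks plus per-domain adapters/heads.
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, dim, bottleneck=64):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, bottleneck), nn.ReLU(),
                               nn.Linear(bottleneck, dim))

    def forward(self, x):
        return x + self.f(x)        # residual, so the adapter can start near identity

class MultiDomainNet(nn.Module):
    def __init__(self, shared_blocks, dim, classes_per_domain):
        super().__init__()
        self.blocks = nn.ModuleList(shared_blocks)       # domain-independent layers
        self.adapters = nn.ModuleList(                   # one adapter stack per domain
            nn.ModuleList(Adapter(dim) for _ in range(len(self.blocks)))
            for _ in classes_per_domain)
        self.heads = nn.ModuleList(nn.Linear(dim, c) for c in classes_per_domain)

    def forward(self, x, domain: int):
        # Switch not only the classification head but also the adapters per domain.
        for block, adapter in zip(self.blocks, self.adapters[domain]):
            x = adapter(block(x))
        return self.heads[domain](x)
```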

  • Frontiers of Action and Behavior Recognition: Methods, Tasks, and Datasets Invited

    Toru Tamaki

    Workshop Report of the Technical Committee on Industrial Application of Image Processing (IAIP)   34 ( 4 )   1 - 20   2022.11

    Authorship:Lead author, Last author, Corresponding author   Language:Japanese   Publishing type:Research paper (conference, symposium, etc.)  

    Other Link: http://www.tc-iaip.org/research/

  • Performance Evaluation of Action Recognition Models on Low Quality Videos Reviewed International journal

    Aoi Otani, Ryota Hashiguchi, Kazuki Omi, Norishige Fukushima, Toru Tamaki

    IEEE Access   10   94898 - 94907   2022.09

    Authorship:Last author, Corresponding author   Language:English   Publishing type:Research paper (scientific journal)   Publisher:IEEE  

    In the design of action recognition models, the quality of videos is an important issue; however, the trade-off between quality and performance is often ignored. In general, action recognition models are trained on high-quality videos, hence it is not known how the model performance degrades when tested on low-quality videos, and how much the quality of training videos affects the performance. The issue of video quality is important; however, it has not been studied so far. The goal of this study is to show the trade-off between the performance and the quality of training and test videos by quantitative performance evaluation of several action recognition models on videos transcoded at different quality levels. First, we show how the video quality affects the performance of pre-trained models. We transcode the original validation videos of Kinetics400 by changing quality control parameters of JPEG (compression strength) and H.264/AVC (CRF). Then we use the transcoded videos to validate the pre-trained models. Second, we show how the models perform when trained on transcoded videos. We transcode the original training videos of Kinetics400 by changing the quality parameters of JPEG and H.264/AVC. Then we train the models on the transcoded training videos and validate them with the original and transcoded validation videos. Experimental results with JPEG transcoding show that there is no severe performance degradation (up to −1.5%) for compression strength smaller than 70, where no quality degradation is visually observed, and that for strength larger than 80 the performance degrades linearly with respect to the quality index. Experiments with H.264/AVC transcoding show that there is no significant performance loss (up to −1%) with CRF30 while the total size of video files is reduced to 30%. In summary, the video quality doesn’t have a large impact on the performance of action recognition models unless the quality degradation is severe and visible. This enables us to transcode the tr...

    DOI: 10.1109/ACCESS.2022.3204755

    Other Link: https://ieeexplore.ieee.org/document/9878331
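
    The H.264/AVC part of this protocol can be reproduced with a standard ffmpeg call; the sketch below re-encodes validation videos at CRF 30 using libx264's -crf option (the directory layout is hypothetical, and JPEG-compression transcoding would be handled analogously).

```python
# Re-encode videos at a fixed H.264 constant rate factor (higher CRF = stronger
# compression, lower quality) to build a degraded copy of a dataset split.
import subprocess
from pathlib import Path

def transcode_crf(src: Path, dst: Path, crf: int = 30) -> None:
    subprocess.run(["ffmpeg", "-y", "-i", str(src),
                    "-c:v", "libx264", "-crf", str(crf), str(dst)], check=True)

out_dir = Path("kinetics400/val_crf30")      # hypothetical paths
out_dir.mkdir(parents=True, exist_ok=True)
for video in Path("kinetics400/val").glob("*.mp4"):
    transcode_crf(video, out_dir / video.name, crf=30)
```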

  • Object-ABN: Learning to Generate Sharp Attention Maps for Action Recognition International journal

    Tomoya Nitta, Tsubasa Hirakawa, Hironobu Fujiyoshi, Toru Tamaki

    2022.07

    Authorship:Last author, Corresponding author   Language:English   Publishing type:Research paper (other academic)  

    In this paper we propose an extension of the Attention Branch Network (ABN) that uses instance segmentation for generating sharper attention maps for action recognition. Methods for visual explanation such as Grad-CAM usually generate blurry maps which are not intuitive for humans to understand, particularly in recognizing actions of people in videos. Our proposed method, Object-ABN, tackles this issue by introducing a new mask loss that makes the generated attention maps close to the instance segmentation result. Furthermore, the PC loss and multiple attention maps are introduced to enhance the sharpness of the maps and improve the performance of classification. Experimental results with UCF101 and SSv2 show that the maps generated by the proposed method are much clearer qualitatively and quantitatively than those of the original ABN.

    DOI: 10.48550/arXiv.2207.13306

Books and Other Publications

Misc

  • A Vision Transformer with Shift-based Temporal Cross-Attention for Efficient Action Recognition Invited

    Ryota Hashiguchi, Toru Tamaki

    2023.05

    Authorship:Last author, Corresponding author   Language:Japanese   Publishing type:Article, review, commentary, editorial, etc. (trade magazine, newspaper, online media)  

  • Data Science of Movement Trajectories Invited

    Toru Tamaki

    74 ( 2 )   236 - 240   2020.03

    Language:Japanese   Publishing type:Article, review, commentary, editorial, etc. (trade magazine, newspaper, online media)   Publisher:NTS Inc.  

    Other Link: http://www.nts-book.co.jp/item/detail/summary/bio/20051225_42bk.html

Presentations

Industrial Property Rights

  • Measuring Device and Construction Machine

    細幸広, 藤原翔, 船原佑介, 玉木徹

    Application no:特願2019-213340  Date applied:2019.11

    Announcement no:特開2021-85178  Date announced:2021.06

    Patent/Registration no:特許第7246294号  Date registered:2023.03  Date issued:2023.03

    Rights holder:Kobelco Construction Machinery Co., Ltd., Hiroshima University   Country of applicant:Domestic   Country of acquisition:Domestic

    DETAILED DESCRIPTION OF THE INVENTION
    TECHNICAL FIELD
    [0001]
    The present invention relates to a technique for measuring the volume of the contents of a container rotatably attached to an arm member.
    BACKGROUND ART
    [0002]
    In a hydraulic excavator, the volume of the material excavated by the bucket is calculated in order to grasp the amount of work done on a working day. In addition, when a hydraulic excavator loads excavated material onto a dump truck, the volume of the excavated material is calculated so that it does not exceed the maximum load capacity of the dump truck. Since the volume of excavated material can thus be used for various purposes, it is desirable to calculate it with high accuracy. Patent Documents 1 and 2 below are known as techniques for calculating the volume of excavated material.
    [0003]
    Patent Document 1 discloses a technique that calculates the work amount of a bucket by computing the difference between the surface shape of the bucket, calculated from an image of the state of the bucket after excavation, and the internal shape of the bucket, calculated from an image of the state inside the bucket after dumping.
    [0004]
    Patent Document 2 discloses a technique that obtains the length from the bottom of the bucket to the surface of the excavated material by adding the length from the opening plane of the bucket to the surface of the excavated material when the bucket holds material to the length from the bottom of the bucket to the opening plane when the bucket is empty, and then calculates the volume of the excavated material based on this length.

  • Cattle Body Diagnosis System and Cattle Body Diagnosis Method

    川村 健介, 玉木 徹, 小櫃 剛人, 黒川 勇三

    Applicant:Hiroshima University

    Application no:特願2014-188656  Date applied:2014.09

    Announcement no:特開2016-059300  Date announced:2016.04

    Country of applicant:Domestic   Country of acquisition:Domestic

    J-GLOBAL

  • Endoscopic Image Diagnosis Support System

    小出 哲士, ホアン アイン トゥワン, 吉田 成人, 三島 翼, 重見 悟, 玉木 徹, 平川 翼, 宮木 理恵, 杉 幸樹

    Applicant:Hiroshima University

    Application no:特願2014-022425  Date applied:2014.02

    Announcement no:特開2015-146970  Date announced:2015.08

    Country of applicant:Domestic   Country of acquisition:Domestic

    J-GLOBAL

  • Object Detection Device and Object Detection Method

    田中 慎也, 土谷 千加夫, 玉木 徹, 栗田 多喜夫

    Applicant:Nissan Motor Co., Ltd., Hiroshima University

    Application no:特願2012-267267  Date applied:2012.12

    Announcement no:特開2014-115706  Date announced:2014.06

    Country of applicant:Domestic   Country of acquisition:Domestic

    J-GLOBAL

  • Method for Correcting Lens Distortion in Images

    玉木 徹, 山村 毅, 大西 昇

    Applicant:RIKEN

    Application no:特願2001-054686  Date applied:2001.02

    Announcement no:特開2002-158915  Date announced:2002.05

    Patent/Registration no:特許第3429280号  Date registered:2003.05  Date issued:2003.05

    Country of applicant:Domestic   Country of acquisition:Domestic

    J-GLOBAL

Awards

  • FY2020 IPSJ-CGVI Excellent Research Presentation Award

    2021.06   IPSJ SIG on Computer Graphics and Visual Informatics (CGVI) Research Meeting   A Method for Converting RGB Images into Spectral Images by Deep Learning Considering Spectral Similarity

    坂本真啓, 金田和文, 玉木徹, Bisser Raytchev

    Award type:International academic award (Japan or overseas)  Country:Japan

  • IEICE Information and Systems Society Distinguished Service Award

    2021.06   IEICE Information and Systems Society  

    Toru Tamaki

    Award type:Award from Japanese society, conference, symposium, etc.  Country:Japan

  • FY2005 Kanamori Encouragement Award

    2006.06   Medical Imaging and Information Sciences Society (医用画像情報学会)  

    Toru Tamaki

    Award type:Honored in official journal of a scientific society, scientific journal 

  • FY1999 Student Encouragement Award

    1999.11   IEICE Tokai Section  

    Toru Tamaki

Scientific Research Funds Acquisition Results

  • Construction of a Methodology for Designing Spatio-temporal Information for Video Understanding

    Grant number:22K12090  2022.04 - 2025.03

    Grant-in-Aid for Scientific Research  Grant-in-Aid for Scientific Research (C)

    Toru Tamaki

    Authorship:Principal investigator  Grant type:Competitive

    Grant amount:\4160000 ( Direct Cost: \3200000, Indirect Cost: \960000 )

    The goal of this research is to construct a new methodology for obtaining spatio-temporal features for video understanding. Many video recognition tasks treat spatial and temporal information together as a single body of spatio-temporal information; this research instead aims at separating spatial and temporal information at a high level. Rather than simply extracting the two kinds of features independently, we propose a framework for designing features that satisfy desired properties, so that temporal and spatial information are kept related yet separated for application to various video recognition tasks.

  • Development of a Real-Time Computer-Aided Diagnosis System Based on Objective Indicators for Gastrointestinal Endoscopic Image Analysis

    Grant number:20H04157  2020.04 - 2023.03

    Grant-in-Aid for Scientific Research  Grant-in-Aid for Scientific Research(B)

    Authorship:Coinvestigator(s)  Grant type:Competitive

    Grant amount:\900000 ( Direct Cost: \900000 )

  • Construction of objective indicators by gastrointestinal endoscopic image analysis and development of general computer-aided diagnosis system

    2017.04 - 2020.03

    Grant-in-Aid for Scientific Research  Grant-in-Aid for Scientific Research(B)

  • Systems Science of Bio-Navigation

    2016.06 - 2021.03

    Grant-in-Aid for Scientific Research  Grant-in-Aid for Scientific Research on Innovative Areas (Research in a proposed research area)

  • Development of an Image Information Analysis Platform for Navigation and Classification of Human Behavior

    2016.06 - 2021.03

    Grant-in-Aid for Scientific Research  Grant-in-Aid for Scientific Research on Innovative Areas (Research in a proposed research area)

    Toru Tamaki, Hironobu Fujiyoshi

    In this research, building on the state-of-the-art video recognition technology developed by the members of this project group, we develop techniques for stably and robustly recognizing videos that contain ego-motion, such as footage from cameras mounted on wild animals or pets and videos taken by humans, which are difficult for conventional video recognition technology to process, and we construct a platform for image and video analysis in this research area. The results for this fiscal year are as follows.
    - In previous years, we developed a method based on inverse reinforcement learning that learns the seabird GPS trajectory data provided by the B01 ecology team and predicts routes to a destination. We extended this into a method that completes missing portions of GPS trajectories: a data-driven model can now output plausible routes for trajectory information that previously could not be obtained for various reasons, and the completed trajectories can be output as probability distributions. Because this method requires enormous computation time and memory, we also devised a method that substantially reduces the computational cost while maintaining accuracy.
    - We further extended our method for clustering the movement trajectories of people in videos into groups and segmenting them according to walking destination, based on the Bayesian inference approach developed in previous years. By visualizing how people reach each destination with kernel density estimation, it became possible to grasp which routes and destinations are frequently used.
    - We developed a method for predicting 3D positions from the bat vocalization data provided by the B01 ecology team. We proposed a deep network that estimates, by regression, the 3D position of a bat flying indoors from audio signals recorded with a 20-channel microphone array, achieving an estimation error (RMSE) of about 20 cm; a rough sketch of this regression setup follows.
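
    The sketch below assumes the 20-channel recording is represented as per-channel spectrograms; the small CNN is an illustrative assumption, not the reported network.

```python
# Sketch: regress a bat's 3D position from a 20-channel microphone-array recording.
import torch.nn as nn

class Mic2Pos(nn.Module):
    def __init__(self, channels: int = 20):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, 3),              # (x, y, z) position
        )

    def forward(self, spectrograms):        # (B, 20, freq_bins, time_frames)
        return self.net(spectrograms)

# Training would minimize MSE against ground-truth positions; the report states
# an RMSE of about 20 cm was achieved with the authors' (different) network.
```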

 

Teaching Experience

  • Scientific Computing

    2022.04 Institution:Nagoya Institute of Technology

    Level:Undergraduate (specialized)  Country:Japan

  • Media Exercises II

    2021.10 Institution:Nagoya Institute of Technology

    Level:Undergraduate (specialized)  Country:Japan

  • Advanced Topics in Image Processing IV

    2021.10 - 2024.02 Institution:Nagoya Institute of Technology

    Level:Postgraduate  Country:Japan

  • Fundamentals of Programming

    2021.10 - 2023.03 Institution:Nagoya Institute of Technology

    Level:Undergraduate (specialized)  Country:Japan

  • Software Engineering

    2021.04 Institution:Nagoya Institute of Technology

    Level:Undergraduate (specialized)  Country:Japan

 

Committee Memberships

  • IEICE   Editorial Committee Member, Transactions on Information and Systems (English edition)

    2022.05 - 2024.05

    Committee type:Academic society

  • IEICE   Member, Technical Committee on Pattern Recognition and Media Understanding (PRMU)

    2020.06 - 2022.06

    Committee type:Academic society

  • IEICE   Vice Chair, Technical Committee on Pattern Recognition and Media Understanding (PRMU)

    2018.06 - 2020.06

    Committee type:Academic society

  • IPSJ   Steering Committee Member, SIG on Computer Vision and Image Media (CVIM)

    2016.04 - 2020.03

    Committee type:Academic society

  • IEICE   Member, Technical Committee on Medical Imaging (MI)

    2014.06 - 2022.06

    Committee type:Academic society

  • IPSJ   Steering Committee Member, SIG on Computer Graphics and Visual Informatics (CGVI)

    2013.04 - 2017.03

    Committee type:Academic society

  • IEICE   Member, Technical Committee on Pattern Recognition and Media Understanding (PRMU)

    2012.05 - 2014.06

    Committee type:Academic society

  • IEICE   Secretary, Technical Committee on Pattern Recognition and Media Understanding (PRMU)

    2011.05 - 2012.05

    Committee type:Academic society

  • IEICE   Reviewer, Society Transactions Editorial Committee

    2010.08

    Committee type:Academic society

Social Activities

  • Computer Vision Paper Reading Group 2023-6

    Role(s): Presenter, Planner, Organizing member

    connpass  2023.10

    Audience: College students, Graduate students, Teachers, Researchers, General, Scientific, Company, Governmental agency

    Type:Seminar, workshop

  • Computer Vision Paper Reading Group 2023-5

    Role(s): Presenter, Planner, Organizing member

    connpass  2023.09

    Audience: College students, Graduate students, Teachers, Researchers, General, Scientific, Company, Governmental agency

    Type:Seminar, workshop

  • Computer Vision Paper Reading Group 2023-4

    Role(s): Presenter, Planner, Organizing member

    connpass  2023.06

    Audience: College students, Graduate students, Teachers, Researchers, General, Scientific, Company, Governmental agency

    Type:Seminar, workshop

  • Computer Vision Paper Reading Group 2023-3

    Role(s): Presenter, Planner, Organizing member

    connpass  2023.06

    Audience: College students, Graduate students, Teachers, Researchers, General, Scientific, Company, Governmental agency

    Type:Seminar, workshop

  • Computer Vision Paper Reading Group 2023-2

    Role(s): Presenter, Planner, Organizing member

    connpass  2023.05

    Audience: College students, Graduate students, Teachers, Researchers, General, Scientific, Company, Governmental agency

    Type:Seminar, workshop

  • Computer Vision Paper Reading Group 2023-1

    Role(s): Presenter, Planner, Organizing member

    connpass  2023.04

    Audience: College students, Graduate students, Teachers, Researchers, General, Scientific, Company, Governmental agency

    Type:Seminar, workshop

  • Computer Vision Paper Reading Group 2022-6

    Role(s): Presenter, Planner, Organizing member

    connpass  2022.11

    Audience: College students, Graduate students, Teachers, Researchers, General, Scientific, Company, Governmental agency

    Type:Seminar, workshop

  • Computer Vision Paper Reading Group 2022-5

    Role(s): Presenter, Planner, Organizing member

    connpass  2022.11

    Audience: College students, Graduate students, Teachers, Researchers, General, Scientific, Company, Governmental agency

    Type:Seminar, workshop

  • Computer Vision Paper Reading Group 2022-4

    Role(s): Presenter, Planner, Organizing member

    connpass  2022.10

    Audience: College students, Graduate students, Teachers, Researchers, General, Scientific, Company, Governmental agency

    Type:Seminar, workshop

  • Computer Vision Paper Reading Group 2022-3

    Role(s): Presenter, Planner, Organizing member

    connpass  2022.10

    Audience: College students, Graduate students, Teachers, Researchers, General, Scientific, Company, Governmental agency

    Type:Seminar, workshop
