Multi-granularity features can be extracted from multi-modal medical images, and how to jointly analyze these features effectively is a critical yet challenging issue for computer-aided diagnosis (CAD). However, most existing multi-modal classification methods have not fully explored the interactions among intra- and inter-granularity features across modalities. To address this limitation, we propose a novel In-depth Integration of Multi-Granularity Features Network (IIMGF-Net) for a typical multi-modal task, i.e., dual-modal CAD. Specifically, the proposed IIMGF-Net consists of two types of key modules: Cross-Modal Intra-Granularity Fusion (CMIGF) and Multi-Granularity Collaboration (MGC). The CMIGF module enhances the attentive interactions between same-granularity features from the two modalities and derives an integrated representation at each granularity. Based on these representations, the MGC module captures inter-granularity interactions among the outputs of CMIGF through a coarse-to-fine and fine-to-coarse collaborative learning mechanism. Extensive experiments on two dual-modal datasets validate the effectiveness of the proposed method, demonstrating its superiority on dual-modal CAD tasks through the integration of multi-granularity information.
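To make the two-module design concrete, the following is a minimal NumPy sketch, not the authors' implementation: it assumes CMIGF can be approximated by bidirectional scaled dot-product cross-attention between two modalities at one granularity, and that MGC's coarse-to-fine and fine-to-coarse collaboration can be approximated by sequential residual passes over the per-granularity representations (here all granularities share the same token count for simplicity). All function names and shapes are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(q_feats, kv_feats):
    """One direction of attentive interaction: tokens of one modality
    attend over the other modality's tokens (scaled dot-product)."""
    d = q_feats.shape[-1]
    scores = q_feats @ kv_feats.T / np.sqrt(d)   # (n_q, n_kv)
    return softmax(scores, axis=-1) @ kv_feats   # (n_q, d)

def cmigf(feats_m1, feats_m2):
    """Sketch of Cross-Modal Intra-Granularity Fusion: bidirectional
    cross-attention at a single granularity, averaged into one
    integrated representation (an assumed simplification)."""
    a = cross_attend(feats_m1, feats_m2)
    b = cross_attend(feats_m2, feats_m1)
    return 0.5 * (a + b)

def mgc(reps):
    """Sketch of Multi-Granularity Collaboration: `reps` is a list of
    per-granularity representations ordered coarse -> fine. A forward
    pass propagates coarse context to finer levels, and a backward pass
    propagates fine detail back to coarser levels (residual additions
    stand in for the paper's collaborative learning mechanism)."""
    reps = [r.copy() for r in reps]
    for i in range(1, len(reps)):            # coarse-to-fine
        reps[i] = reps[i] + reps[i - 1]
    for i in range(len(reps) - 2, -1, -1):   # fine-to-coarse
        reps[i] = reps[i] + reps[i + 1]
    return reps

# Illustrative usage: two modalities, two granularities, 8 tokens of dim 16.
rng = np.random.default_rng(0)
coarse = cmigf(rng.standard_normal((8, 16)), rng.standard_normal((8, 16)))
fine = cmigf(rng.standard_normal((8, 16)), rng.standard_normal((8, 16)))
collab = mgc([coarse, fine])
```

The sketch only conveys the data flow: per-granularity fusion first, then bidirectional exchange across granularities; a real model would add projections, normalization, and resolution alignment between granularities.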