Show simple item record

dc.contributor.author: Martínez-Río, J.
dc.contributor.author: Carmona, E.J.
dc.contributor.author: Cancelas, D.
dc.contributor.author: Novo Buján, Jorge
dc.contributor.author: Ortega Hortas, Marcos
dc.date.accessioned: 2025-09-08T12:24:02Z
dc.date.available: 2025-09-08T12:24:02Z
dc.date.issued: 2023
dc.identifier.citation: Martínez-Río J, Carmona EJ, Cancelas D, Novo J, Ortega M. Deformable registration of multimodal retinal images using a weakly supervised deep learning approach. Neural Computing and Applications. 2023;35(20):14779-97.
dc.identifier.issn: 1433-3058
dc.identifier.other: https://portalcientifico.sergas.gal//documentos/6433d2bfe8f2fa0e62f2b8e0
dc.identifier.uri: http://hdl.handle.net/20.500.11940/21315
dc.description.abstract: There are different retinal vascular imaging modalities widely used in clinical practice to diagnose different retinal pathologies. The joint analysis of these multimodal images is of increasing interest, since each provides common and complementary visual information. However, to facilitate the comparison of two images obtained with different techniques and containing the same retinal region of interest, a prior registration of both images is necessary. Here, we present a weakly supervised deep learning methodology for robust deformable registration of multimodal retinal images, which is applied to implement a method for the registration of fluorescein angiography (FA) and optical coherence tomography angiography (OCTA) images. This methodology is strongly inspired by VoxelMorph, a state-of-the-art general unsupervised deep learning framework for deformable registration of unimodal medical images. The method was evaluated on a public dataset with 172 pairs of FA and superficial plexus OCTA images. The degree of alignment of the common information (blood vessels) and the preservation of the non-common information (image background) in the transformed image were measured using the Dice coefficient (DC) and zero-normalized cross-correlation (ZNCC), respectively. The average values of these metrics, including the standard deviations, were DC = 0.72 ± 0.10 and ZNCC = 0.82 ± 0.04. The time required to obtain each pair of registered images was 0.12 s. These results outperform the rigid and deformable registration methods with which our method was compared.
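For reference, the two evaluation metrics named in the abstract can be computed as in the following minimal NumPy sketch (an illustration, not the authors' code; the inputs mask_a, mask_b, img_a and img_b are hypothetical binary vessel masks and same-size grayscale images):

import numpy as np

def dice_coefficient(mask_a, mask_b):
    # DC = 2|A ∩ B| / (|A| + |B|), on binary vessel masks
    intersection = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * intersection / (mask_a.sum() + mask_b.sum())

def zncc(img_a, img_b):
    # Zero-normalized cross-correlation: mean of the product of the
    # z-scored intensities, giving a value in [-1, 1].
    a = (img_a - img_a.mean()) / img_a.std()
    b = (img_b - img_b.mean()) / img_b.std()
    return float((a * b).mean())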
dc.description.sponsorship: This work was supported by the Ministerio de Ciencia, Innovación y Universidades, Government of Spain, through the RTI2018-095894-B-I00 research project. Some of the authors also receive financial support from the European Social Fund through a predoctoral contract (ref. PEJD-2019-PRE/TIC-17030) and a research assistant contract (ref. PEJ-2019-AI/TIC-13771).
dc.language: eng
dc.rights: Attribution 4.0 International (CC BY 4.0)
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.title: Deformable registration of multimodal retinal images using a weakly supervised deep learning approach
dc.type: Article
dc.authorsophos: Martínez-Río, J.; Carmona, E.J.; Cancelas, D.; Novo, J.; Ortega, M.
dc.identifier.doi: 10.1007/s00521-023-08454-8
dc.identifier.sophos: 6433d2bfe8f2fa0e62f2b8e0
dc.issue.number: 20
dc.journal.title: Neural Computing and Applications
dc.organization: Instituto de Investigación Biomédica de A Coruña (INIBIC)
dc.page.initial: 14779
dc.page.final: 14797
dc.relation.projectID: Ministerio de Ciencia, Innovación y Universidades, Government of Spain [RTI2018-095894-B-I00]
dc.relation.projectID: European Social Fund [PEJD-2019-PRE/TIC-17030, PEJ-2019-AI/TIC-13771]
dc.relation.publisherversion: https://doi.org/10.1007/s00521-023-08454-8
dc.rights.accessRights: openAccess
dc.subject.keyword: INIBIC
dc.typefides: Scientific Article (includes Original, Short Original, Systematic Review and Meta-analysis)
dc.typesophos: Original Article
dc.volume.number: 35


Files in this item

This item appears in the following Collection(s)


Except where otherwise noted, this item's license is described as Attribution 4.0 International (CC BY 4.0)