dc.contributor.author | Miao, Wei | |
dc.contributor.author | Wang, Lijun | |
dc.contributor.author | Lu, Huchuan | |
dc.contributor.author | Huang, Kaining | |
dc.contributor.author | Shi, Xinchu | |
dc.contributor.author | Liu, Bocong | |
dc.date.accessioned | 2024-01-19T07:11:12Z | |
dc.date.available | 2024-01-19T07:11:12Z | |
dc.date.issued | 2024 | |
dc.identifier.citation | Miao, W., Wang, L., Lu, H., Huang, K., Shi, X., & Liu, B. (2024). ITrans: generative image inpainting with transformers. Multimedia Systems, 30(1), Article 21. https://doi.org/10.1007/s00530-023-01211-w | |
dc.identifier.other | CONVID_197929931 | |
dc.identifier.uri | https://jyx.jyu.fi/handle/123456789/92893 | |
dc.description.abstract | Despite significant improvements, convolutional neural network (CNN) based methods struggle to handle long-range global image dependencies due to their limited receptive fields, leading to unsatisfactory inpainting performance under complicated scenarios. To address this issue, we propose the Inpainting Transformer (ITrans) network, which combines the power of both self-attention and convolution operations. The ITrans network augments the convolutional encoder–decoder structure with two novel designs, i.e., the global and local transformers. The global transformer aggregates high-level image context from the encoder in a global perspective and propagates the encoded global representation to the decoder in a multi-scale manner. Meanwhile, the local transformer is intended to extract low-level image details inside the local neighborhood at a reduced computational overhead. By incorporating the above two transformers, ITrans is capable of both global relationship modeling and local detail encoding, which is essential for hallucinating perceptually realistic images. Extensive experiments demonstrate that the proposed ITrans network performs favorably against state-of-the-art inpainting methods both quantitatively and qualitatively. | en
dc.format.mimetype | application/pdf | |
dc.language.iso | eng | |
dc.publisher | Springer | |
dc.relation.ispartofseries | Multimedia Systems | |
dc.rights | CC BY 4.0 | |
dc.subject.other | convolutional neural network | |
dc.subject.other | image inpainting | |
dc.subject.other | global transformer | |
dc.subject.other | local transformer | |
dc.title | ITrans : generative image inpainting with transformers | |
dc.type | article | |
dc.identifier.urn | URN:NBN:fi:jyu-202401191388 | |
dc.contributor.laitos | Informaatioteknologian tiedekunta | fi |
dc.contributor.laitos | Faculty of Information Technology | en |
dc.type.uri | http://purl.org/eprint/type/JournalArticle | |
dc.type.coar | http://purl.org/coar/resource_type/c_2df8fbb1 | |
dc.description.reviewstatus | peerReviewed | |
dc.relation.issn | 0942-4962 | |
dc.relation.numberinseries | 1 | |
dc.relation.volume | 30 | |
dc.type.version | publishedVersion | |
dc.rights.copyright | © The Author(s) 2024 | |
dc.rights.accesslevel | openAccess | fi |
dc.subject.yso | image processing | |
dc.subject.yso | neural networks | |
dc.subject.yso | photographs | |
dc.format.content | fulltext | |
jyx.subject.uri | http://www.yso.fi/onto/yso/p6449 | |
jyx.subject.uri | http://www.yso.fi/onto/yso/p7292 | |
jyx.subject.uri | http://www.yso.fi/onto/yso/p2699 | |
dc.rights.url | https://creativecommons.org/licenses/by/4.0/ | |
dc.relation.doi | 10.1007/s00530-023-01211-w | |
jyx.fundinginformation | Open Access funding provided by University of Jyväskylä (JYU). | |
dc.type.okm | A1 | |