RCAN [Kor]

Yulun Zhang et al. / Image Super-Resolution Using Very Deep Residual Channel Attention Networks / ECCV 2018

English version of this article is available.

1. Problem definition

ė‹Øģ¼ ģ“ėÆøģ§€ ģ“ˆķ•“ģƒķ™” (Single Image Super-Resolution, SISR) źø°ė²•ģ€ ģ“ėÆøģ§€ ė‚“ģ˜ ėø”ėŸ¬ģ™€ ė‹¤ģ–‘ķ•œ ė…øģ“ģ¦ˆė„¼ ģ œź±°ķ•˜ė©“ģ„œ, ė™ģ‹œģ— ģ €ķ•“ģƒė„ (Low Resolution, LR) ģ“ėÆøģ§€ė„¼ ź³ ķ•“ģƒė„ (High Resolution, HR)딜 ė³µģ›ķ•˜ėŠ” ź²ƒģ„ ėŖ©ķ‘œė”œ ķ•œė‹¤. x와 y넼 각각 LRź³¼ HR ģ“ėÆøģ§€ė¼ź³  ķ•  ė•Œ, SRģ„ ģˆ˜ģ‹ģœ¼ė”œ ķ‘œķ˜„ķ•˜ė©“ ė‹¤ģŒź³¼ 같다.

\mathbf{y}=(\mathbf{x} \otimes \mathbf{k})\downarrow_s + \mathbf{n}

Here y is the observed LR image and x is the underlying HR image; k and n denote the blur kernel and additive noise, āŠ— denotes convolution, and ↓s denotes downsampling by scale factor s. Following the observation that CNNs are effective for SR, CNN-based SR has been studied actively. However, CNN-based SR has the following two limitations.
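As a concrete illustration, the degradation model above can be sketched in a few lines of NumPy. This is a toy sketch under simplifying assumptions (a naive "same" convolution with edge padding, and plain subsampling for ↓s, whereas real SR pipelines typically use bicubic resampling); the function name `degrade` is ours, not the paper's.

```python
import numpy as np

def degrade(x, k, s, noise_sigma, rng):
    """Toy degradation y = (x āŠ— k) ↓s + n on a 2-D grayscale image.

    x: HR image (H, W); k: blur kernel (kh, kw); s: downscale factor;
    noise_sigma: std of the additive Gaussian noise n.
    """
    kh, kw = k.shape
    H, W = x.shape
    # "Same" 2-D convolution via edge padding and an explicit sliding window.
    pad_h, pad_w = kh // 2, kw // 2
    xp = np.pad(x, ((pad_h, pad_h), (pad_w, pad_w)), mode="edge")
    blurred = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            blurred[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    down = blurred[::s, ::s]                      # ↓s: keep every s-th pixel
    n = rng.normal(0.0, noise_sigma, down.shape)  # additive noise term
    return down + n
```

With a normalized kernel and zero noise, a constant image stays constant while shrinking by the scale factor, which makes the sketch easy to sanity-check.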

  • ģøµģ“ ź¹Šģ–“ģ§ˆģˆ˜ė” Gradient Vanishing [Note i]ģ“ ė°œģƒķ•˜ģ—¬ ķ•™ģŠµģ“ ģ–“ė ¤ģ›Œģ§

  • LR ģ“ėÆøģ§€ģ— ķ¬ķ•Øėœ ģ €ģ£¼ķŒŒ(low-frequency) 정볓가 ėŖØė“  ģ±„ė„ģ—ģ„œ ė™ė“±ķ•˜ź²Œ ė‹¤ė£Øģ–“ģ§ģœ¼ė”œģØ 각 feature mapģ˜ ėŒ€ķ‘œģ„±ģ“ 약화됨

To achieve the SR goal stated above and overcome these two limitations, the paper proposes the deep Residual Channel Attention Network (RCAN).

[Note i] Gradient vanishing: as an input passes through an activation function it is squeezed into a small output range, so after many stacked activation layers the initial input has almost no influence on the output. Consequently, the gradients of the early layers' parameters with respect to the output become tiny and training stalls.

2. Motivation

ė³ø ė…¼ė¬øģ˜ baselineģø deep-CNNź³¼ attention 기법과 ź“€ė Øėœ paperė“¤ģ€ ė‹¤ģŒź³¼ 같다.

1. CNN-based SR

  • [SRCNN & FSRCNN]: SRCNN was the first method to apply a CNN to SR; with only a three-layer CNN it greatly outperformed previous non-CNN SR methods. FSRCNN simplified SRCNN's network structure to speed up inference and training.

  • [VDSR & DRCN]: By stacking layers much deeper than SRCNN (20 layers), these methods greatly improved performance.

  • [SRResNet & SRGAN]: SRResNet first introduced ResNet to SR. SRGAN then added a GAN on top of SRResNet, reducing blur and achieving photo-realistic SR; however, it sometimes generates unintended artifacts.

  • [EDSR & MDSR]: These removed unnecessary modules from the original ResNet, greatly increasing speed. However, they still cannot realize the very deep networks that are crucial in image processing, and they treat low-frequency information identically across all channels, which wastes computation and fails to represent diverse features.

2. Attention mechanisms

Attentionģ€ ģøķ’‹ ė°ģ“ķ„°ģ—ģ„œ ꓀심 ģžˆėŠ” ķŠ¹ģ • 부분에 처리 ė¦¬ģ†ŒģŠ¤ė„¼ ķŽøķ–„ģ‹œķ‚¤ėŠ” źø°ė²•ģœ¼ė”œģ„œ, 핓당 부분에 ėŒ€ķ•œ 처리 ģ„±ėŠ„ģ„ ģ¦ź°€ģ‹œķ‚Øė‹¤. ķ˜„ģž¬ź¹Œģ§€ attentionģ€ ź°ģ²“ģøģ‹ģ“ė‚˜ ģ“ėÆøģ§€ ė¶„ė„˜ 등 high-level vision task에 ģ¼ė°˜ģ ģœ¼ė”œ ģ‚¬ģš©ė˜ģ—ˆź³ , ģ“ėÆøģ§€ SR ė“±ģ˜ low-level vision taskģ—ģ„œėŠ” ź±°ģ˜ 다루얓지지 ģ•Šģ•˜ė‹¤. ė³ø ė…¼ė¬øģ—ģ„œėŠ” ź³ ķ•“ģƒė„(High-Resolution, HR) ģ“ėÆøģ§€ė„¼ źµ¬ģ„±ķ•˜ėŠ” 고주파(High-Frequency)넼 ź°•ķ™”ķ•˜źø° ģœ„ķ•“, LR ģ“ėÆøģ§€ģ—ģ„œ 고주파 ģ˜ģ—­ģ— attentionģ„ ģ ģš©ķ•œė‹¤.

2.2. Idea

핓당 ė…¼ė¬øģ˜ idea와 ģ“ģ— 따넸 contributionģ€ ģ•„ėž˜ ģ„øź°€ģ§€ė”œ ģš”ģ•½ķ•  수 ģžˆė‹¤.

1. Residual Channel Attention Network (RCAN)

The Residual Channel Attention Network (RCAN) allows the network to be built deeper than previous CNN-based SR models, yielding more accurate SR images.

2. Residual in Residual (RIR)

Residual in Residual (RIR) makes it possible to i) train a much deeper network and ii) design a more efficient network by letting the long and short skip connections inside the RIR block bypass the low-frequency information of the LR image.

3. Channel Attention (CA)

Channel Attention (CA) considers interdependencies among feature channels, enabling adaptive feature rescaling.

3. Residual Channel Attention Network (RCAN)

3.1. Network Architecture

RCANģ˜ ė„¤ķŠøģ›Œķ¬ źµ¬ģ”°ėŠ” 크게 4 ė¶€ė¶„ģœ¼ė”œ źµ¬ģ„±ė˜ģ–“ ģžˆė‹¤: i) Shallow feature extraction, ii) RIR deep feature extraction, iii) Upscale module, iv) Reconstruction part. ė³ø ė…¼ė¬øģ—ģ„œėŠ” i), iii), iv)에 ėŒ€ķ•“ģ„œėŠ” 기씓 źø°ė²•ģø EDSRź³¼ ģœ ģ‚¬ķ•˜ź²Œ 각각 one convolutional layer, deconvolutional layer, L1 lossź°€ ģ‚¬ģš©ė˜ģ—ˆė‹¤. ii) RIR deep feature extractionģ„ ķ¬ķ•Øķ•˜ģ—¬, CA와 RCAB에 ėŒ€ķ•œ contributionģ€ ė‹¤ģŒ ģ ˆģ—ģ„œ ģ†Œź°œķ•œė‹¤.

L(Θ)=1Nāˆ‘Ni=1∄HRCAN(ILRi)āˆ’IHRi∄1L(\Theta )=\frac{1}{N}\sum_{N}^{i=1}\left \| H_{RCAN}(I_{LR}^i)-I_{HR}^i \right \|_1

3.2. Residual in Residual (RIR)

RIRģ—ģ„œėŠ” residual group (RG)ź³¼ long skip connection (LSC)으딜 źµ¬ģ„±ėœ Gź°œģ˜ ėø”ė”ģœ¼ė”œ ģ“ė£Øģ–“ģ ø ģžˆė‹¤. ķŠ¹ķžˆ, 1ź°œģ˜ RGėŠ” residual channel attention block(RCAB)와 short skip connection (SSC)ģ„ ė‹Øģœ„ė”œ ķ•˜ėŠ” Bź°œģ˜ ģ—°ģ‚°ģœ¼ė”œ źµ¬ģ„±ė˜ģ–“ ģžˆė‹¤. ģ“ėŸ¬ķ•œ 구씰딜 400개 ģ“ģƒģ˜ CNN ģøµģ„ ķ˜•ģ„±ķ•˜ėŠ” ź²ƒģ“ ź°€ėŠ„ķ•˜ė‹¤. RGė§Œģ„ 깊게 ģŒ“ėŠ” ź²ƒģ€ ģ„±ėŠ„ ģø”ė©“ģ—ģ„œ ķ•œź³„ź°€ ģžˆźø° ė•Œė¬øģ— LSC넼 RIR ė§ˆģ§€ė§‰ 부에 ė„ģž…ķ•˜ģ—¬ ģ‹ ź²½ė§ģ„ ģ•ˆģ •ķ™”ģ‹œķ‚Øė‹¤. ė˜ķ•œ LSC와 SSC넼 ķ•Øź»˜ ė„ģž…ķ•Øģœ¼ė”œģØ LRģ“ėÆøģ§€ģ˜ ė¶ˆķ•„ģš”ķ•œ ģ €ģ£¼ķŒŒ 정볓넼 ė”ģš± 효율적으딜 ģš°ķšŒģ‹œķ‚¬ 수 ģžˆė‹¤.

3.3. Residual Channel Attention Block (RCAB)

ė³ø ė…¼ė¬øģ—ģ„œėŠ” Channel Attention (CA)넼 Residual Block (RB)에 ė³‘ķ•©ģ‹œķ‚“ģœ¼ė”œģØ, Residual Channel Attention Block (RCAB)넼 ģ œģ•ˆķ•˜ģ˜€ė‹¤. ķŠ¹ķžˆ, CNNģ“ local receptive field만 ź³ ė ¤ķ•Øģœ¼ė”œģØ local region ģ“ģ™øģ˜ ģ „ģ²“ģ ģø 정볓넼 ģ“ģš©ķ•˜ģ§€ ėŖ»ķ•œė‹¤ėŠ” ģ ģ„ ź·¹ė³µķ•˜źø° ģœ„ķ•“ CAģ—ģ„œėŠ” global average pooling으딜 공간적 정볓넼 ķ‘œķ˜„ķ•˜ģ˜€ė‹¤.

Meanwhile, a gating mechanism [Note ii] is additionally introduced to capture dependencies among channels. The gating mechanism should learn nonlinear interactions between channels and, unlike one-hot activation, should learn a non-mutually-exclusive relationship in which multiple channel features can be emphasized at once. To satisfy these criteria, sigmoid gating and ReLU were chosen.
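The squeeze-and-gating pipeline described in this and the previous paragraph can be sketched in NumPy as follows. The weight matrices `w_down` and `w_up` are hypothetical stand-ins for the channel-downscaling and channel-upscaling 1Ɨ1 convolutions (with reduction ratio r implicit in their shapes); the function names are ours.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(feat, w_down, w_up):
    """Channel attention over a feature map feat of shape (H, W, C).

    Squeeze: global average pooling collapses spatial dims -> (C,).
    Excitation: channel downscaling (C -> C/r) with ReLU, then channel
    upscaling (C/r -> C) with sigmoid gating -> per-channel weights in (0, 1).
    Rescale: each channel of feat is multiplied by its weight.
    """
    z = feat.mean(axis=(0, 1))            # squeeze: global spatial statistic
    s = sigmoid(w_up @ relu(w_down @ z))  # excitation: non-exclusive gating
    return feat * s                       # channel-wise rescaling
```

Because sigmoid outputs are independent per channel, several channels can be emphasized simultaneously, matching the non-mutually-exclusive requirement above.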

[Note ii] Gating mechanisms: gating mechanisms were introduced to address the vanishing-gradient problem and are applied effectively to RNNs. They have the effect of smoothing updates. [Gu, Albert, et al. "Improving the gating mechanism of recurrent neural networks." International Conference on Machine Learning. PMLR, 2020.]

4. Experiment & Result

4.1. Experimental setup

1. Datasets and degradation models

For training, 800 images from the DIV2K dataset were used; Set5, B100, Urban100, and Manga109 were used as test sets. Bicubic (BI) and blur-downscale (BD) degradation models were used.

2. Evaluation metrics

The Y channel of the YCbCr color space [Note iii] of the processed images was evaluated with PSNR and SSIM. In addition, the method's advantage was confirmed by comparing its recognition error against the top five ranked SR methods.

[Note iii] YCbCr: YCbCr, also written Y'CbCr, is a family of color spaces used as part of the color image pipeline in video and digital photography systems. Y' is the luma component, and CB and CR are the blue-difference and red-difference chroma components. Y' (with prime) is distinguished from luminance Y, indicating that light intensity is nonlinearly encoded based on gamma-corrected RGB primaries. [Wikipedia]

3. Training settings

Data augmentation such as rotation and vertical flipping was applied to the aforementioned 800 DIV2K training images, and in each training batch 16 LR patches of size 48Ɨ48 were extracted as input. ADAM was used as the optimizer.

4.2. Result

1. Effects of RIR and CA

źø°ģ”“źø°ė²•ģ“ 37.45dBģ˜ ģ„±ėŠ„ģ„ ė³“ģ—¬ģ¤€ė° ė°˜ķ•“, long skip connection (LSC)ź³¼ short skip connection (SSC)ź°€ ķ¬ķ•Øėœ RIRź³¼ CA넼 ģ“ģš©ķ•Øģœ¼ė”œģØ, 37.90dBź¹Œģ§€ ģ„±ėŠ„ģ„ ė†’ģ˜€ė‹¤. (LSC)으딜 źµ¬ģ„±ėœ Gź°œģ˜ ėø”ė”ģœ¼ė”œ ģ“ė£Øģ–“ģ ø ģžˆė‹¤.

2. Model Size Analyses

RCANģ€ ķƒ€ 기법들 (DRCN, FSRCNN, PSyCo, ENet-E)ź³¼ ė¹„źµķ•˜ģ—¬ ź°€ģž„ ź¹Šģ€ ģ‹ ź²½ė§ģ„ ģ“ė£Øė©“ģ„œė„, 전첓 ķŒŒė¼ėÆøķ„° ģˆ˜ėŠ” ź°€ģž„ ģ ģ§€ė§Œ, ź°€ģž„ ė†’ģ€ ģ„±ėŠ„ģ„ ė³“ģ—¬ģ£¼ģ—ˆė‹¤.

5. Conclusion

ė³ø ė…¼ė¬øģ—ģ„œėŠ” ė†’ģ€ ģ •ķ™•ė„ģ˜ SR ģ“ėÆøģ§€ė„¼ ķšė“ķ•˜źø° ģœ„ķ•“ RCANģ“ ģ ģš©ė˜ģ—ˆė‹¤. ķŠ¹ķžˆ, RIR 구씰와 LSC ė° SSC넼 ķ•Øź»˜ ķ™œģš©ķ•Øģœ¼ė”œģØ, ź¹Šģ€ ģøµģ„ ķ˜•ģ„±ķ•  수 ģžˆģ—ˆė‹¤. ė˜ķ•œ RIRģ€ LR ģ“ėÆøģ§€ģ˜ ė¶ˆķ•„ģš”ķ•œ ģ •ė³“ģø ģ €ģ£¼ķŒŒ 정볓넼 ģš°ķšŒģ‹œķ‚“ģœ¼ė”œģØ, ģ‹ ź²½ė§ģ“ 고주파 정볓넼 ķ•™ģŠµķ•  수 ģžˆė„ė” ķ•˜ģ˜€ė‹¤. ė” ė‚˜ģ•„ź°€, CA넼 ė„ģž…ķ•˜ģ—¬ ģ±„ė„ź°„ģ˜ ģƒķ˜øģ¢…ģ†ģ„±ģ„ ź³ ė ¤ķ•Øģœ¼ė”œģØ channel-wise feature넼 ģ ģ‘ģ‹ģœ¼ė”œ rescalingķ•˜ģ˜€ė‹¤. ģ œģ•ˆķ•œ źø°ė²•ģ€ BI, DB degradation ėŖØėøģ„ ģ“ģš©ķ•˜ģ—¬ SR ģ„±ėŠ„ģ„ ź²€ģ¦ķ•˜ģ˜€ģœ¼ė©°, ģ¶”ź°€ė”œ ź°ģ²“ ģøģ‹ģ—ģ„œė„ ģš°ģˆ˜ķ•œ ģ„±ėŠ„ģ„ ė‚˜ķƒ€ė‚“ėŠ” ź²ƒģ„ ķ™•ģøķ•˜ģ˜€ė‹¤.

Take home message

ģ“ėÆøģ§€ ė‚“ģ—ģ„œ ꓀심 ģžˆėŠ” ģ˜ģ—­ģ˜ 정볓넼 ė¶„ķ• ķ•“ė‚“ź³ , 핓당 정볓에 attentionģ„ ģ ģš©ķ•Øģœ¼ė”œģØ ķ•™ģŠµź³¼ģ •ģ—ģ„œ ė¹„ģ¤‘ģ„ ė” ė†’ģ¼ 수 ģžˆė‹¤.

Stacking the network deeper is more effective at improving performance than increasing the total number of parameters.

Author / Reviewer information

1. Author

Seungho Han (ķ•œģŠ¹ķ˜ø)

  • KAIST ME

  • Research Topics: Formation Control, Vehicle Autonomous Driving, Image Super Resolution

  • https://www.linkedin.com/in/seung-ho-han-8a54a4205/

2. Reviewer

  1. Korean name (English name): Affiliation / Contact information

  2. Korean name (English name): Affiliation / Contact information

  3. ...

Reference & Additional materials

  1. [Original Paper] Zhang, Yulun, et al. "Image super-resolution using very deep residual channel attention networks." Proceedings of the European conference on computer vision (ECCV). 2018.

  2. [Github] https://github.com/yulunzhang/RCAN

  3. [Github] https://github.com/dongheehand/RCAN-tf

  4. [Github] https://github.com/yjn870/RCAN-pytorch

  5. [Attention] https://wikidocs.net/22893

  6. [Dataset] Xu, Qianxiong, and Yu Zheng. "A Survey of Image Super Resolution Based on CNN." Cloud Computing, Smart Grid and Innovative Frontiers in Telecommunications. Springer, Cham, 2019. 184-199.

  7. [BSRGAN] Zhang, Kai, et al. "Designing a practical degradation model for deep blind image super-resolution." arXiv preprint arXiv:2103.14006 (2021).

  8. [Google's SR3] https://80.lv/articles/google-s-new-approach-to-image-super-resolution/

  9. [SRCNN] Dai, Yongpeng, et al. "SRCNN-based enhanced imaging for low frequency radar." 2018 Progress in Electromagnetics Research Symposium (PIERS-Toyama). IEEE, 2018.

  10. [FSRCNN] Zhang, Jian, and Detian Huang. "Image Super-Resolution Reconstruction Algorithm Based on FSRCNN and Residual Network." 2019 IEEE 4th International Conference on Image, Vision and Computing (ICIVC). IEEE, 2019.

  11. [VDSR] Hitawala, Saifuddin, et al. "Image super-resolution using VDSR-ResNeXt and SRCGAN." arXiv preprint arXiv:1810.05731 (2018).

  12. [SRResNet] Ledig, Christian, et al. "Photo-realistic single image super-resolution using a generative adversarial network." Proceedings of the IEEE conference on computer vision and pattern recognition. 2017.

  13. [SRGAN] Nagano, Yudai, and Yohei Kikuta. "SRGAN for super-resolving low-resolution food images." Proceedings of the Joint Workshop on Multimedia for Cooking and Eating Activities and Multimedia Assisted Dietary Management. 2018.
