(Template) Implementation [Language]

(Description) 1st author / Paper name / Venue

Guideline

Remove this section when you submit the manuscript

Write the manuscript/draft by editing this file.

Title & Description

Title of an article must follow this form: Title of article [language]

Example

  • Standardized Max Logit [Kor]

  • VITON-HD: High-Resolution Virtual Try-On [Eng]

  • Image-to-Image Translation via GDWCT [Kor]

  • Coloring with Words [Eng]

  • ...

Description of an article must follow this form: <1st author> / <paper name> / <venue>

Example

  • Jung et al. / Standardized Max Logits: A Simple yet Effective Approach for Identifying Unexpected Road Obstacles in Urban-Scene Segmentation / ICCV 2021 Oral

  • Kim et al. / Deep Edge-Aware Interactive Colorization against Color-Bleeding Effects / ICCV 2021 Oral

  • Choi et al. / RobustNet: Improving Domain Generalization in Urban-Scene Segmentation via Instance Selective Whitening / CVPR 2021 Oral

  • ...

(Start your manuscript from here)

If you are writing manuscripts in both Korean and English, add one of these lines.

You need to add a hyperlink to the manuscript written in the other language.

Remove this part if you are writing the manuscript in a single language.

(In the English article) ---> To read the review written in Korean, click here.

(In the Korean article) ---> English version of this article is available.

1. Introduction

Please provide general information about the selected paper / method. This can be a shortened version of your Paper review.

2. Method

In this section, you need to describe the method or algorithm in theory.

3. Implementation

This section covers the actual implementation.

When you write the manuscript, please follow the rules below (a short sketch illustrating them appears after the list):

  • Use code blocks when you write code.

  • Use Python, specifically version 3 (3.8 or higher recommended).

  • Use PyTorch, TensorFlow, or JAX (NumPy is okay) as the deep learning library.

  • Use a manual seed.

  • A module should be implemented in a function or class.

  • Do not use the GPU; use the CPU instead.

  • Use 4 spaces (= 1 tab) for indentation.

  • Type hints are optional.

  • Naming convention

    • class name: CamelCaseNaming

    • function and variable name: snake_case_naming
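For instance, here is a minimal sketch that follows these rules; the file name and the names set_manual_seed and SimpleClassifier are illustrative assumptions, not part of the template.

rules_example.py
import random

import torch
import torch.nn as nn


def set_manual_seed(seed: int) -> None:
    # snake_case function name; type hints are optional but shown here
    random.seed(seed)
    torch.manual_seed(seed)


class SimpleClassifier(nn.Module):
    # CamelCase class name; CPU-only, no .cuda() calls

    def __init__(self, in_dim: int, num_classes: int):
        super().__init__()
        self.fc = nn.Linear(in_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x (batch, in_dim) -> out (batch, num_classes)
        return self.fc(x)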

Environment

You can use a Hint block in this section.

Please provide the dependency information and manual seed for reproducibility.

# Environment setup using conda
conda create -n tutorial python=3.8
conda activate tutorial
conda install ...
# or
pip install ...
example1.py
import os
import sys
import random
from typing import List, Dict, Tuple, Union, Any

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

# please provide version information
print(sys.version)
print(np.__version__)
print(torch.__version__)

# you should set manual seed
my_seed = 7777
random.seed(my_seed)
torch.manual_seed(my_seed)
torch.cuda.manual_seed(my_seed)
torch.cuda.manual_seed_all(my_seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
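As a quick sanity check (illustrative, not required by the template), re-seeding before each draw should yield identical random tensors:

check_seed.py
import torch

my_seed = 7777  # same seed as above
torch.manual_seed(my_seed)
a = torch.randn(2, 3)
torch.manual_seed(my_seed)
b = torch.randn(2, 3)
print(torch.equal(a, b))  # True: seeded runs are reproducible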

Module 1

You can freely change the name of the subsection (Module 1) and add subsections.

Please provide the implementation of each module or algorithm with detailed (line-by-line) comments.

Note that you must specify the shape of input, intermediate, and output tensors.

You can add code blocks with multiple tabs.

example2.py
import torch
import torch.nn as nn

class MyModule(nn.Module):

    def __init__(self, ...):
        super().__init__()
        self.temp = nn.Linear(...)

    def forward(self,
                x: torch.Tensor) -> torch.Tensor:

        # input
        # x (batch, dim1, dim2, ...)
        # return
        # out (batch, ...)

        out = self.temp(x) # fc-layer (batch, ...)
        ...

        return out

if __name__ == '__main__':

    test_x = torch.randn(...)
    test_model = MyModule(...)
    test_out = test_model(test_x)

    print(test_x)
    print(test_out)
    print(test_out.size())


example3.py
import torch
import torch.nn as nn
from typing import Tuple

class MyModule(nn.Module):

    def __init__(self, ...):
        super().__init__()
        self.temp = ...

    def forward(self,
                x: torch.Tensor,
                y: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:

        # input
        # x (batch, dim1, dim2, ...)
        # y (batch, dim1, dim2, ...)
        # output
        # out1 (batch, ...)
        # out2 (batch, ...)

        out = x + y # add two tensors (batch, ...)
        ...

        return out1, out2

if __name__ == '__main__':

    test_x = torch.randn(...)
    test_y = torch.randn(...)
    test_model = MyModule(...)
    test_out1, test_out2 = test_model(test_x, test_y)

    print(test_x)
    print(test_y)
    print(test_out1.size())
    print(test_out2.size())
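For reference, here is a minimal runnable instance of the template above; the module name AddProject, the dimensions, and the projection layer are illustrative assumptions, not part of the template.

concrete_example.py
import torch
import torch.nn as nn
from typing import Tuple

class AddProject(nn.Module):
    # illustrative module: adds two tensors, then projects the sum

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self,
                x: torch.Tensor,
                y: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
        # input
        # x (batch, in_dim)
        # y (batch, in_dim)
        # output
        # out1 (batch, in_dim)
        # out2 (batch, out_dim)
        out1 = x + y            # element-wise sum (batch, in_dim)
        out2 = self.proj(out1)  # fc-layer (batch, out_dim)
        return out1, out2

if __name__ == '__main__':
    torch.manual_seed(7777)
    test_x = torch.randn(4, 8)   # (batch=4, in_dim=8)
    test_y = torch.randn(4, 8)
    test_model = AddProject(8, 16)
    test_out1, test_out2 = test_model(test_x, test_y)
    print(test_out1.size())      # torch.Size([4, 8])
    print(test_out2.size())      # torch.Size([4, 16])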

(Module 2 ...)

hi.py
# you can add subsections if you need

Author / Reviewer information

You don't need to provide the reviewer information at the draft submission stage.

Author

Korean Name (English name)

  • Affiliation (KAIST AI / NAVER)

  • (optional) 1~2 line self-introduction

  • Contact information (Personal webpage, GitHub, LinkedIn, ...)

  • ...

Reviewer

  1. Korean name (English name): Affiliation / Contact information

  2. Korean name (English name): Affiliation / Contact information

  3. ...

Reference & Additional materials

  1. Citation of this paper

  2. Official (unofficial) GitHub repository

  3. Citation of related work

  4. Other useful materials

  5. ...

Also, please provide a working example that describes how the proposed method works. Watch the professor's lecture videos and see how the professor explains.

Note that you can attach images and tables in this manuscript. When you upload those files, please read the How to contribute? section.