
Two Simple Techniques to Improve Deep Learning Model Performance - Input Mixup & Label Smoothing

λ³Έ ν¬μŠ€νŠΈμ—μ„œλŠ” 졜근 λ”₯λŸ¬λ‹ λͺ¨λΈ ν•™μŠ΅μ—μ„œ 많이 μ‚¬μš©λ˜λ©°, λͺ¨λΈμ˜ ꡬ쑰λ₯Ό 바꾸지 μ•Šκ³  κ°„λ‹¨ν•˜κ²Œ λ”₯λŸ¬λ‹ λͺ¨λΈ μ„±λŠ₯을 ν–₯μƒμ‹œν‚¬ 수 μžˆλŠ” 2가지 방법에 λŒ€ν•΄ κ³΅λΆ€ν•˜κ³  μ •λ¦¬ν•˜μ˜€λ‹€.

1. Mixup Training

ν•™μŠ΅μ„ 진행할 λ•Œ λžœλ€ν•˜κ²Œ 두 개의 μƒ˜ν”Œμ„ λ½‘μ•„μ„œ MIXUP ν•œ 뒀에 ν•™μŠ΅μ— μ‚¬μš©ν•œλ‹€. μœ„ κ·Έλ¦Όμ—μ„œλŠ” lambdaκ°€ 0.5둜 μ„€μ •λ˜μ—ˆμ§€λ§Œ, 보톡은 νŠΉμ • 이미지에 더 높은 값이 λΆ€μ—¬λœλ‹€.


πŸ€” Why does mixup improve performance?

1️⃣ Data augmentation
Mixing samples with this kind of randomness is a form of data augmentation, so the model effectively trains on much more data.

2️⃣ Preventing over-fitting
Mixup also acts as a form of regularization.


2. Label Smoothing

Label smoothing is a method that smooths the labels to improve generalization performance. It has similarities to mixup training, but the biggest difference is that the images are left untouched and only the labels are changed.

➑️ The correct label is no longer given 100% of the probability.
Hard label: only the correct label is assigned 1 and every other label is assigned 0.
Soft label: after smoothing, the correct label is assigned a value close to 1 and every other label a value slightly greater than 0.

μ‚¬λžŒμ˜ μ‹€μˆ˜μ— μ˜ν•΄μ„œ 잘λͺ» labeling 된 값이 μ‘΄μž¬ν•  수 있기 λ•Œλ¬Έμ— ν•˜λ‚˜μ˜ λ ˆμ΄λΈ”μ— λŒ€ν•΄μ„œ 높은 Confidence λ₯Ό κ°–κ²Œ ν•˜λŠ” 것은 λ‹€μ–‘ν•œ 문제λ₯Ό μ•ΌκΈ°ν•  수 μžˆλ‹€.


πŸ’» Code practice - Input Mixup & Label Smoothing

Source: https://github.com/ndb796/Deep-Learning-Paper-Review-and-Practice/blob/master/code_practices/ResNet18_CIFAR10_Training_with_Input_Mixup_and_Label_Smoothing.ipynb

Required libraries and implementation of Mixup / Label Smoothing

import os

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

mixup_alpha = 1.0

def mixup_data(x, y):
    # Sample the mixing coefficient from a Beta(alpha, alpha) distribution
    lam = np.random.beta(mixup_alpha, mixup_alpha)
    batch_size = x.size()[0]
    # Random permutation of the batch: each sample is mixed with another sample
    index = torch.randperm(batch_size).cuda()
    mixed_x = lam * x + (1 - lam) * x[index]
    y_a, y_b = y, y[index]
    return mixed_x, y_a, y_b, lam


def mixup_criterion(criterion, pred, y_a, y_b, lam):
    # Loss for a mixed batch: weighted sum of the losses against both original label sets
    return lam * criterion(pred, y_a) + (1 - lam) * criterion(pred, y_b)


class LabelSmoothingCrossEntropy(nn.Module):
    def __init__(self):
        super(LabelSmoothingCrossEntropy, self).__init__()
    def forward(self, y, targets, smoothing=0.1):
        confidence = 1. - smoothing
        log_probs = F.log_softmax(y, dim=-1) # predicted log-probabilities
        true_probs = torch.zeros_like(log_probs)
        true_probs.fill_(smoothing / (y.shape[1] - 1)) # spread the smoothing mass over the non-target classes
        true_probs.scatter_(1, targets.data.unsqueeze(1), confidence) # set the probability of the correct class to confidence
        return torch.mean(torch.sum(true_probs * -log_probs, dim=-1)) # negative log likelihood (cross-entropy with soft targets)
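
As a quick sanity check, the loss can be called just like nn.CrossEntropyLoss, on raw logits and integer class indices. A hypothetical toy example (not part of the original notebook):

# Hypothetical toy example: 4 samples, 10 classes (e.g. CIFAR-10)
logits = torch.randn(4, 10)
labels = torch.tensor([0, 3, 7, 9])

ls_criterion = LabelSmoothingCrossEntropy()
loss = ls_criterion(logits, labels, smoothing=0.1)
print(loss.item())  # a single scalar loss value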

Environment setup and definition of the training (train) / evaluation (test) functions

device = 'cuda'

net = ResNet18() # ResNet18 is defined in the referenced notebook
net = net.to(device)

learning_rate = 0.1
file_name = 'resnet18_cifar10.pth'

criterion = LabelSmoothingCrossEntropy()
optimizer = optim.SGD(net.parameters(), lr=learning_rate, momentum=0.9, weight_decay=0.0002)
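
The train() and test() functions below assume CIFAR-10 data loaders named train_loader and test_loader. A minimal sketch of how they could be built (the exact transforms and batch sizes in the referenced notebook may differ):

import torch
import torchvision
import torchvision.transforms as transforms

transform_train = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
transform_test = transforms.Compose([
    transforms.ToTensor(),
])

train_dataset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform_train)
test_dataset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform_test)

train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=128, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=100, shuffle=False)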


def train(epoch):
    print('\n[ Train epoch: %d ]' % epoch)
    net.train()
    train_loss = 0
    correct = 0
    total = 0

    for batch_idx, (inputs, targets) in enumerate(train_loader):
        inputs, targets = inputs.to(device), targets.to(device)
        inputs, targets_a, targets_b, lam = mixup_data(inputs, targets)
        optimizer.zero_grad()

        outputs = net(inputs)
        loss = mixup_criterion(criterion, outputs, targets_a, targets_b, lam)
        loss.backward()

        optimizer.step()
        train_loss += loss.item()
        _, predicted = outputs.max(1)

        total += targets.size(0)
        # With mixup, a prediction counts toward both original labels, weighted by lam and (1 - lam)
        current_correct = (lam * predicted.eq(targets_a).sum().item() + (1 - lam) * predicted.eq(targets_b).sum().item())
        correct += current_correct

        if batch_idx % 100 == 0:
            print('\nCurrent batch:', str(batch_idx))
            print('Current batch average train accuracy:', current_correct / targets.size(0))
            print('Current batch average train loss:', loss.item() / targets.size(0))

    print('\nTotal average train accuracy:', correct / total)
    print('Total average train loss:', train_loss / total)


def test(epoch):
    print('\n[ Test epoch: %d ]' % epoch)
    net.eval()
    loss = 0
    correct = 0
    total = 0

    for batch_idx, (inputs, targets) in enumerate(test_loader):
        inputs, targets = inputs.to(device), targets.to(device)
        total += targets.size(0)

        outputs = net(inputs)
        loss += criterion(outputs, targets).item()

        _, predicted = outputs.max(1)
        correct += predicted.eq(targets).sum().item()

    print('\nTotal average test accuracy:', correct / total)
    print('Total average test loss:', loss / total)

    state = {
        'net': net.state_dict()
    }
    if not os.path.isdir('checkpoint'):
        os.mkdir('checkpoint')
    torch.save(state, './checkpoint/' + file_name)
    print('Model Saved!')

Run training

import time


def adjust_learning_rate(optimizer, epoch):
    # Step decay: divide the learning rate by 10 at epoch 50 and again at epoch 100
    lr = learning_rate
    if epoch >= 50:
        lr /= 10
    if epoch >= 100:
        lr /= 10
    for param_group in optimizer.param_groups:
        param_group['lr'] = lr
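
For reference, the same step schedule could also be expressed with PyTorch's built-in scheduler instead of the manual function above (an equivalent alternative, not what this post's loop uses):

# Equivalent alternative to adjust_learning_rate: decay the LR by 10x at epochs 50 and 100
scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[50, 100], gamma=0.1)
# then call scheduler.step() once at the end of each epoch instead of adjust_learning_rate(optimizer, epoch)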

start_time = time.time()

for epoch in range(0, 150):
    adjust_learning_rate(optimizer, epoch)
    train(epoch)
    test(epoch)
    if epoch % 10 == 0:
        print('\nTime elapsed:', time.time() - start_time)
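
Once training finishes, the checkpoint that test() writes to ./checkpoint/ can be restored for later evaluation or inference. A minimal sketch, reusing file_name and the 'net' key from the code above:

# Load the saved state dict back into the model for inference
checkpoint = torch.load('./checkpoint/' + file_name)
net.load_state_dict(checkpoint['net'])
net.eval()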

References

[YouTube] Introduction to two simple techniques for improving deep learning model performance (with code practice) - Input Mixup and Label Smoothing
