Project author: omerbsezer

Project description:
Pytorch Tutorial, Pytorch with Google Colab, Pytorch Implementations: CNN, RNN, DCGAN, Transfer Learning, Chatbot, Pytorch Sample Codes
Primary language: Jupyter Notebook
Project address: git://github.com/omerbsezer/Fast-Pytorch.git
Created: 2019-04-09T08:29:14Z
Project community: https://github.com/omerbsezer/Fast-Pytorch

License:



Fast-Pytorch

This repo aims to cover, in a nutshell, Pytorch details, Pytorch example implementations, Pytorch sample codes, and running Pytorch code on Google Colab (with a K80 GPU/CPU).

Running in Colab

  • Two ways:
    • Clone or download the whole repo, upload it to your Drive root folder ('/drive/'), and open the .ipynb files with the 'Colaboratory' application.
    • Download "Github2Drive.ipynb", copy it to your Drive root folder, open it with 'Colaboratory', and run the 3 cells one by one; the repo is then cloned into your Drive folder. (Pytorch with Google Colab)

Table of Contents:

" class="reference-link">Fast Pytorch Tutorial

It's a Python deep learning framework/library developed by Facebook. Pytorch has its own data structure, the tensor, which provides automatic differentiation for all operations on tensors.

Important keys: torch.Tensor, .requires_grad, .backward(), .grad, with torch.no_grad().
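
A minimal sketch of these keys in action (shapes and values are illustrative):

```Python
import torch

# A tensor that tracks operations for automatic differentiation
x = torch.ones(2, 2, requires_grad=True)
y = (x * 3).sum()   # builds a computation graph
y.backward()        # computes dy/dx
print(x.grad)       # gradients accumulate in .grad -> tensor of 3s

# Disable gradient tracking, e.g. during evaluation
with torch.no_grad():
    z = x * 2       # z.requires_grad is False
```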

Pytorch CheatSheet: :fire:Details

" class="reference-link">Pytorch Playground

" class="reference-link">Model (Neural Network Layers)

  • :fire:Details (a short usage sketch follows this list)
    1. torch.nn.RNN(*args, **kwargs)
    2. torch.nn.LSTM(*args, **kwargs)
    3. torch.nn.GRU(*args, **kwargs)
    4. torch.nn.RNNCell(input_size, hidden_size, bias=True, nonlinearity='tanh')
    5. torch.nn.LSTMCell(input_size, hidden_size, bias=True)
    6. torch.nn.GRUCell(input_size, hidden_size, bias=True)
    7. torch.nn.Linear(in_features, out_features, bias=True)
    8. torch.nn.Bilinear(in1_features, in2_features, out_features, bias=True)
    9. torch.nn.Conv1d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)
    10. torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)
    11. torch.nn.Conv3d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)
    12. torch.nn.ConvTranspose1d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1)
    13. torch.nn.ConvTranspose2d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1)
    14. torch.nn.ConvTranspose3d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1)
    15. torch.nn.Unfold(kernel_size, dilation=1, padding=0, stride=1)
    16. torch.nn.Fold(output_size, kernel_size, dilation=1, padding=0, stride=1)
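
A quick sketch wiring a few of these layers together (layer sizes here are illustrative assumptions):

```Python
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
lstm = nn.LSTM(input_size=10, hidden_size=20, num_layers=2, batch_first=True)
fc = nn.Linear(in_features=20, out_features=5)

img = torch.randn(8, 3, 32, 32)  # (batch, channels, height, width)
print(conv(img).shape)           # torch.Size([8, 16, 32, 32])

seq = torch.randn(8, 7, 10)      # (batch, timesteps, features)
out, (h, c) = lstm(seq)          # out: (8, 7, 20)
print(fc(out[:, -1, :]).shape)   # last timestep -> torch.Size([8, 5])
```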

    " class="reference-link">Optimizer

  • :fire:Details (a usage sketch follows this list)
    1. torch.optim.Adadelta(params, lr=1.0, rho=0.9, eps=1e-06, weight_decay=0)
    2. torch.optim.Adagrad(params, lr=0.01, lr_decay=0, weight_decay=0, initial_accumulator_value=0)
    3. torch.optim.Adam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, amsgrad=False)
    4. torch.optim.SparseAdam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08)
    5. torch.optim.Adamax(params, lr=0.002, betas=(0.9, 0.999), eps=1e-08, weight_decay=0)
    6. torch.optim.ASGD(params, lr=0.01, lambd=0.0001, alpha=0.75, t0=1000000.0, weight_decay=0)
    7. torch.optim.LBFGS(params, lr=1, max_iter=20, max_eval=None, tolerance_grad=1e-05, tolerance_change=1e-09, history_size=100, line_search_fn=None)
    8. torch.optim.RMSprop(params, lr=0.01, alpha=0.99, eps=1e-08, weight_decay=0, momentum=0, centered=False)
    9. torch.optim.Rprop(params, lr=0.01, etas=(0.5, 1.2), step_sizes=(1e-06, 50))
    10. torch.optim.SGD(params, lr=<required parameter>, momentum=0, dampening=0, weight_decay=0, nesterov=False) # stochastic gradient descent
    11. torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda, last_epoch=-1)
    12. torch.optim.lr_scheduler.StepLR(optimizer, step_size, gamma=0.1, last_epoch=-1)
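
A minimal sketch of the optimizer-plus-scheduler pattern (the model, learning rate, and schedule are illustrative assumptions):

```Python
import torch

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)

for epoch in range(90):
    optimizer.zero_grad()                   # clear accumulated gradients
    loss = model(torch.randn(4, 10)).sum()  # dummy forward pass / loss
    loss.backward()                         # compute gradients
    optimizer.step()                        # update parameters
    scheduler.step()                        # multiply lr by 0.1 every 30 epochs
```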

    " class="reference-link">Loss Functions

  • :fire:Details (a usage sketch follows this list)
    1. torch.nn.L1Loss(size_average=None, reduce=None, reduction='mean') # L1 Loss
    2. torch.nn.MSELoss(size_average=None, reduce=None, reduction='mean') # Mean square error loss
    3. torch.nn.CrossEntropyLoss(weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean')
    4. torch.nn.CTCLoss(blank=0, reduction='mean') #Connectionist Temporal Classification loss
    5. torch.nn.NLLLoss(weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean') #negative log likelihood loss
    6. torch.nn.PoissonNLLLoss(log_input=True, full=False, size_average=None, eps=1e-08, reduce=None, reduction='mean')
    7. torch.nn.KLDivLoss(size_average=None, reduce=None, reduction='mean') # Kullback-Leibler divergence Loss
    8. torch.nn.BCELoss(weight=None, size_average=None, reduce=None, reduction='mean') # Binary Cross Entropy
    9. torch.nn.MarginRankingLoss(margin=0.0, size_average=None, reduce=None, reduction='mean')
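
A minimal sketch of the most common case, CrossEntropyLoss (batch and class counts are illustrative):

```Python
import torch

criterion = torch.nn.CrossEntropyLoss()  # expects raw logits, not softmax output
logits = torch.randn(4, 10)              # (batch, num_classes)
targets = torch.tensor([1, 0, 4, 9])     # class indices, shape (batch,)
print(criterion(logits, targets).item())
```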

    " class="reference-link">Pooling Layers

  • :fire:Details (a usage sketch follows this list)
    1. torch.nn.MaxPool1d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)
    2. torch.nn.MaxPool2d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)
    3. torch.nn.MaxPool3d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)
    4. torch.nn.MaxUnpool2d(kernel_size, stride=None, padding=0) # Computes a partial inverse of MaxPool2d
    5. torch.nn.AvgPool2d(kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True)
    6. torch.nn.FractionalMaxPool2d(kernel_size, output_size=None, output_ratio=None, return_indices=False, _random_samples=None)
    7. torch.nn.LPPool2d(norm_type, kernel_size, stride=None, ceil_mode=False) # 2D power-average pooling
    8. torch.nn.AdaptiveMaxPool2d(output_size, return_indices=False)
    9. torch.nn.AdaptiveAvgPool2d(output_size)
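
A quick shape sketch for two of the pooling layers above:

```Python
import torch

pool = torch.nn.MaxPool2d(kernel_size=2)       # halves the spatial dims
adaptive = torch.nn.AdaptiveAvgPool2d((1, 1))  # fixed output size for any input

x = torch.randn(1, 8, 28, 28)
print(pool(x).shape)      # torch.Size([1, 8, 14, 14])
print(adaptive(x).shape)  # torch.Size([1, 8, 1, 1])
```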

    " class="reference-link">Non-linear activation functions

  • :fire:Details (a usage sketch follows this list)
    1. torch.nn.ELU(alpha=1.0, inplace=False) # the element-wise function
    2. torch.nn.Hardshrink(lambd=0.5) # hard shrinkage function element-wise
    3. torch.nn.LeakyReLU(negative_slope=0.01, inplace=False)
    4. torch.nn.PReLU(num_parameters=1, init=0.25)
    5. torch.nn.ReLU(inplace=False)
    6. torch.nn.RReLU(lower=0.125, upper=0.3333333333333333, inplace=False) # randomized leaky rectified liner unit function
    7. torch.nn.SELU(inplace=False)
    8. torch.nn.CELU(alpha=1.0, inplace=False)
    9. torch.nn.Sigmoid()
    10. torch.nn.Softplus(beta=1, threshold=20)
    11. torch.nn.Softshrink(lambd=0.5)
    12. torch.nn.Tanh()
    13. torch.nn.Tanhshrink()
    14. torch.nn.Threshold(threshold, value, inplace=False)
    15. torch.nn.Softmax(dim=None)
    16. torch.nn.Softmax2d()
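
A quick element-wise sketch of a few activations above:

```Python
import torch

x = torch.tensor([-1.0, 0.0, 2.0])
print(torch.nn.ReLU()(x))          # tensor([0., 0., 2.])
print(torch.nn.Sigmoid()(x))       # squashed into (0, 1)
print(torch.nn.Softmax(dim=0)(x))  # probabilities summing to 1 along dim 0
```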

    " class="reference-link">Basic 2 Layer NN

  • Basic two-layer feed-forward neural network with an optimizer and a loss:
    ```Python
    import torch

    class TwoLayerNet(torch.nn.Module):
        def __init__(self, D_in, H, D_out):
            super(TwoLayerNet, self).__init__()
            self.linear1 = torch.nn.Linear(D_in, H)
            self.linear2 = torch.nn.Linear(H, D_out)

        def forward(self, x):
            h_relu = self.linear1(x).clamp(min=0)  # ReLU via clamp
            y_pred = self.linear2(h_relu)
            return y_pred

    N_batchsize, D_input, Hidden_size, D_output = 64, 1000, 100, 10
    epoch = 500

    x = torch.randn(N_batchsize, D_input)
    y = torch.randn(N_batchsize, D_output)

    model = TwoLayerNet(D_input, Hidden_size, D_output)

    criterion = torch.nn.MSELoss(reduction='sum')  # loss, mean square error
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)  # optimizer, stochastic gradient descent, lr=learning rate

    for t in range(epoch):
        y_pred = model(x)            # forward pass
        loss = criterion(y_pred, y)  # print(t, loss.item())
        optimizer.zero_grad()        # zero gradients
        loss.backward()              # backward pass
        optimizer.step()             # update the weights
    ```

Fast Torchvision Tutorial

"The torchvision package consists of popular datasets, model architectures, and common image transformations for computer vision."

ImageFolder

  • If you have a special/custom dataset, the ImageFolder function can be used.
    ```Python
    # Example
    import torch
    import torchvision
    imagenet_data = torchvision.datasets.ImageFolder('path/to/imagenet_root/')
    data_loader = torch.utils.data.DataLoader(imagenet_data,
                                              batch_size=4,
                                              shuffle=True,
                                              num_workers=args.nThreads)  # args.nThreads: your worker count
    ```

" class="reference-link">Transforms

  • Transforms are commonly used for image transformations. :fire:Details (a functional-transform sketch follows this list)
    1. # Some of the important functions:
    2. from torchvision import datasets, transforms
    3. transform = transforms.Compose([transforms.Resize((input_size, input_size)), transforms.ToTensor(), transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5))]) # Example
    4. torchvision.transforms.CenterCrop(size)
    5. torchvision.transforms.ColorJitter(brightness=0, contrast=0, saturation=0, hue=0)
    6. torchvision.transforms.Grayscale(num_output_channels=1)
    7. torchvision.transforms.Pad(padding, fill=0, padding_mode='constant')
    8. torchvision.transforms.RandomApply(transforms, p=0.5)
    9. torchvision.transforms.RandomCrop(size, padding=None, pad_if_needed=False, fill=0, padding_mode='constant')
    10. torchvision.transforms.RandomGrayscale(p=0.1)
    11. torchvision.transforms.RandomResizedCrop(size, scale=(0.08, 1.0), ratio=(0.75, 1.3333333333333333), interpolation=2)
    12. torchvision.transforms.RandomRotation(degrees, resample=False, expand=False, center=None)
    13. torchvision.transforms.RandomVerticalFlip(p=0.5)
    14. torchvision.transforms.Resize(size, interpolation=2)
    15. torchvision.transforms.Scale(*args, **kwargs)
    16. torchvision.transforms.LinearTransformation(transformation_matrix)
    17. torchvision.transforms.Normalize(mean, std, inplace=False) # Normalize a tensor image with mean and standard deviation.
    18. torchvision.transforms.ToTensor() # Convert a PIL Image or numpy.ndarray to tensor
    19. # Functional transforms give you fine-grained control of the transformation pipeline. As opposed to the transformations above, functional transforms don’t contain a random number generator for their parameters. That means you have to specify/generate all parameters, but you can reuse the functional transform.
    20. torchvision.transforms.functional.adjust_brightness(img, brightness_factor)
    21. torchvision.transforms.functional.hflip(img)
    22. torchvision.transforms.functional.normalize(tensor, mean, std, inplace=False) # Normalize a tensor image with mean and standard deviation
    23. torchvision.transforms.functional.pad(img, padding, fill=0, padding_mode='constant')
    24. torchvision.transforms.functional.rotate(img, angle, resample=False, expand=False, center=None) # Rotate the image by angle
    25. torchvision.transforms.functional.to_grayscale(img, num_output_channels=1) # Convert image to grayscale version of image.
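
As noted in item 19 above, functional transforms let you fix the random parameters yourself; a minimal sketch where the same rotation angle is applied to a paired image and mask (file names are illustrative assumptions):

```Python
import random
from PIL import Image
import torchvision.transforms.functional as TF

img = Image.open('example.jpg')        # hypothetical input image
mask = Image.open('example_mask.png')  # hypothetical paired segmentation mask

angle = random.uniform(-30, 30)        # draw the parameter once...
img = TF.rotate(img, angle)            # ...and apply the identical transform
mask = TF.rotate(mask, angle)          # to both images of the pair
```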

    " class="reference-link">Datasets

  • The most commonly used datasets in the literature. :fire:Details (a DataLoader sketch follows this list)
    1. torchvision.datasets.MNIST(root='data/mnist', train=True, transform=transform, target_transform=None, download=True) # with example
    2. torchvision.datasets.FashionMNIST(root='data/fashion-mnist', train=True, transform=transform, target_transform=None, download=True) # with example
    3. torchvision.datasets.KMNIST(root, train=True, transform=None, target_transform=None, download=False)
    4. torchvision.datasets.EMNIST(root, split, **kwargs)
    5. torchvision.datasets.FakeData(size=1000, image_size=(3, 224, 224), num_classes=10, transform=None, target_transform=None, random_offset=0)
    6. torchvision.datasets.CocoCaptions(root, annFile, transform=None, target_transform=None)
    7. torchvision.datasets.CocoDetection(root, annFile, transform=None, target_transform=None)
    8. torchvision.datasets.LSUN(root, classes='train', transform=None, target_transform=None)
    9. torchvision.datasets.CIFAR10(root, train=True, transform=None, target_transform=None, download=False)
    10. torchvision.datasets.STL10(root, split='train', transform=None, target_transform=None, download=False)
    11. torchvision.datasets.SVHN(root, split='train', transform=None, target_transform=None, download=False)
    12. torchvision.datasets.PhotoTour(root, name, train=True, transform=None, download=False)
    13. torchvision.datasets.SBU(root, transform=None, target_transform=None, download=True)
    14. torchvision.datasets.Flickr8k(root, ann_file, transform=None, target_transform=None)
    15. torchvision.datasets.VOCSegmentation(root, year='2012', image_set='train', download=False, transform=None, target_transform=None)
    16. torchvision.datasets.Cityscapes(root, split='train', mode='fine', target_type='instance', transform=None, target_transform=None)
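
A minimal loading sketch using the MNIST entry above (batch size is an illustrative assumption):

```Python
import torch
from torchvision import datasets, transforms

transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((0.5,), (0.5,))])
train_set = datasets.MNIST(root='data/mnist', train=True,
                           transform=transform, download=True)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)

images, labels = next(iter(train_loader))
print(images.shape)  # torch.Size([64, 1, 28, 28])
```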

    " class="reference-link">Models

  • :fire:Details (a fine-tuning sketch follows this block)
    ```Python
    # models with random weights
    import torchvision.models as models
    resnet18 = models.resnet18()
    alexnet = models.alexnet()
    vgg16 = models.vgg16()
    squeezenet = models.squeezenet1_0()
    densenet = models.densenet161()
    inception = models.inception_v3()
    googlenet = models.googlenet()

    # with pre-trained weights
    resnet18 = models.resnet18(pretrained=True)
    alexnet = models.alexnet(pretrained=True)
    squeezenet = models.squeezenet1_0(pretrained=True)
    vgg16 = models.vgg16(pretrained=True)
    densenet = models.densenet161(pretrained=True)
    inception = models.inception_v3(pretrained=True)
    googlenet = models.googlenet(pretrained=True)
    ```
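
A common follow-up, sketched under the assumption of a 10-class fine-tune: freeze the pretrained backbone and swap the final layer (the class count and input size are illustrative):

```Python
import torch
import torchvision.models as models

resnet18 = models.resnet18(pretrained=True)
for param in resnet18.parameters():
    param.requires_grad = False                  # freeze the backbone

num_features = resnet18.fc.in_features
resnet18.fc = torch.nn.Linear(num_features, 10)  # new trainable head

resnet18.eval()                                  # inference mode for a quick check
with torch.no_grad():
    out = resnet18(torch.randn(1, 3, 224, 224))  # (1, 10) logits
```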

    " class="reference-link">Utils

    1. torchvision.utils.make_grid(tensor, nrow=8, padding=2, normalize=False, range=None, scale_each=False, pad_value=0) # Make a grid of images.
    2. torchvision.utils.save_image(tensor, filename, nrow=8, padding=2, normalize=False, range=None, scale_each=False, pad_value=0) # Save a given Tensor into an image file
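
A quick sketch of the two utilities (random tensors stand in for real images):

```Python
import torch
import torchvision

batch = torch.rand(16, 3, 32, 32)                        # fake image batch in [0, 1]
grid = torchvision.utils.make_grid(batch, nrow=4)        # single 4x4 grid tensor
torchvision.utils.save_image(batch, 'grid.png', nrow=4)  # write the grid to disk
```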

" class="reference-link">Pytorch with Google Colab

  • If you want to use Google Drive for storage, you have to run the following code for authentication. After running the cell, authentication links appear; click them and copy the token for that session.
    ```Python
    !apt-get install -y -qq software-properties-common python-software-properties module-init-tools
    !add-apt-repository -y ppa:alessandro-strada/ppa 2>&1 > /dev/null
    !apt-get update -qq 2>&1 > /dev/null
    !apt-get -y install -qq google-drive-ocamlfuse fuse
    from google.colab import auth
    auth.authenticate_user()
    from oauth2client.client import GoogleCredentials
    creds = GoogleCredentials.get_application_default()
    import getpass
    !google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} < /dev/null 2>&1 | grep URL
    vcode = getpass.getpass()
    !echo {vcode} | google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret}
    ```
  • Then you can mount your Drive folder and reach the code stored in your Drive (a simpler built-in alternative is sketched below).
    ```Python
    !mkdir -p drive
    !google-drive-ocamlfuse drive
    import sys
    sys.path.insert(0, 'drive/Fast-Pytorch/Learning_Pytorch')  # Example, your drive root: 'drive/'
    !ls drive
    ```
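  • As a side note, a minimal sketch of Colab's built-in Drive helper, which can replace the ocamlfuse steps above (assuming a recent Colab runtime):
    ```Python
    from google.colab import drive
    drive.mount('/content/drive')  # follow the one-time auth prompt
    !ls "/content/drive/My Drive"
    ```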
  • After authentication, the git clone command can also be used to clone the project.
    ```Python
    %cd 'drive/Fast-Pytorch'
    !ls
    !git clone https://github.com/omerbsezer/Fast-Pytorch.git
    ```

    " class="reference-link">Pytorch Example Implementations

  • All code runs on Colab. You can also run it in desktop Jupyter notebooks ([Anaconda](https://www.anaconda.com/distribution/)).

    " class="reference-link">MLP

  • MLP 1 Class with Binary Cross Entropy (BCE) Loss: :green_book:[Colab], :notebook:[Notebook]
  • MLP 2 Classes with Cross Entropy Loss: :green_book:[Colab], :notebook:[Notebook]
  • MLP 3-Layer with MNIST Example: :green_book:[Colab], :notebook:[Notebook]
  ```Python
  import torch
  import torch.nn as nn

  class Model(nn.Module):
      def __init__(self):
          super(Model, self).__init__()
          self.fc1 = torch.nn.Linear(x.shape[1], 5)  # x: input tensor defined outside the class
          self.fc2 = torch.nn.Linear(5, 3)
          self.fc3 = torch.nn.Linear(3, 1)
          self.sigmoid = torch.nn.Sigmoid()

      def forward(self, x):
          out = self.fc1(x)
          out = self.sigmoid(out)
          out = self.fc2(out)
          out = self.sigmoid(out)
          out = self.fc3(out)
          out = self.sigmoid(out)
          return out
  ```
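
A hedged usage sketch for the BCE variant above (data shapes are illustrative assumptions; x must exist before construction because the class reads x.shape[1]):

```Python
x = torch.randn(32, 4)                    # 32 samples, 4 features (assumed)
y = torch.randint(0, 2, (32, 1)).float()  # binary targets

model = Model()
criterion = torch.nn.BCELoss()            # expects sigmoid outputs in (0, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for _ in range(100):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
```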

" class="reference-link">CNN

  ```Python
  import torch.nn as nn

  class CNN(nn.Module):
      def __init__(self):
          super(CNN, self).__init__()
          # input_size: 28; same padding = (filter_size-1)/2 = (3-1)/2 = 1
          self.cnn1 = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3, stride=1, padding=1)
          # (input_size - filter_size + 2*padding)/stride + 1 = (28-3+2*1)/1 + 1 = 28
          self.batchnorm1 = nn.BatchNorm2d(8)  # output_channels: 8
          self.relu = nn.ReLU()
          self.maxpool1 = nn.MaxPool2d(kernel_size=2)  # input_size = 28/2 = 14
          self.cnn2 = nn.Conv2d(in_channels=8, out_channels=32, kernel_size=5, stride=1, padding=2)
          # same padding: (5-1)/2 = 2
          self.batchnorm2 = nn.BatchNorm2d(32)
          self.maxpool2 = nn.MaxPool2d(kernel_size=2)  # input_size = 14/2 = 7
          self.fc1 = nn.Linear(in_features=1568, out_features=600)  # 32*7*7 = 1568
          self.dropout = nn.Dropout(p=0.5)
          self.fc2 = nn.Linear(in_features=600, out_features=10)

      def forward(self, x):
          out = self.cnn1(x)
          out = self.batchnorm1(out)
          out = self.relu(out)
          out = self.maxpool1(out)
          out = self.cnn2(out)
          out = self.batchnorm2(out)
          out = self.relu(out)
          out = self.maxpool2(out)
          out = out.view(-1, 1568)  # flatten to (batch, 32*7*7)
          out = self.fc1(out)
          out = self.relu(out)
          out = self.dropout(out)
          out = self.fc2(out)
          return out
  ```
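
A quick shape check for this network (MNIST-style input assumed):

```Python
import torch

model = CNN()
logits = model(torch.randn(16, 1, 28, 28))  # batch of 16 grayscale 28x28 images
print(logits.shape)                         # torch.Size([16, 10])
```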

" class="reference-link">CNN Visualization

[Animation: CNN visualization at runtime]

" class="reference-link">RNN

  ```Python
  import torch.nn as nn

  class TextGenerator(nn.Module):
      def __init__(self, vocab_size, embed_size, hidden_size, num_layers):
          super(TextGenerator, self).__init__()
          self.embed = nn.Embedding(vocab_size, embed_size)
          self.lstm = nn.LSTM(embed_size, hidden_size, num_layers, batch_first=True)
          self.linear = nn.Linear(hidden_size, vocab_size)

      def forward(self, x, h):
          # h: (hidden_state, cell_state) tuple
          x = self.embed(x)  # x: (batch_size, timesteps, embed_size)
          out, (h, c) = self.lstm(x, h)
          # out.size(0): batch_size; out.size(1): timesteps; out.size(2): hidden_size
          out = out.reshape(out.size(0) * out.size(1), out.size(2))  # (batch_size*timesteps, hidden_size)
          out = self.linear(out)  # decode hidden states of all time steps
          return out, (h, c)
  ```
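
A hedged usage sketch for the generator above; all sizes are illustrative assumptions:

```Python
import torch

vocab_size, embed_size, hidden_size, num_layers = 5000, 128, 256, 1
model = TextGenerator(vocab_size, embed_size, hidden_size, num_layers)

batch_size, timesteps = 4, 30
tokens = torch.randint(0, vocab_size, (batch_size, timesteps))
h0 = (torch.zeros(num_layers, batch_size, hidden_size),  # initial hidden state
      torch.zeros(num_layers, batch_size, hidden_size))  # initial cell state

out, (h, c) = model(tokens, h0)
print(out.shape)  # torch.Size([120, 5000]) = (batch*timesteps, vocab_size)
```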

" class="reference-link">Transfer Learning

[Image: transfer learning]

" class="reference-link">DCGAN

[Image: DCGAN]

  ```Python
  import torch.nn as nn

  # nz: latent vector size (e.g. 100), ngf: generator feature maps (e.g. 64),
  # ndf: discriminator feature maps (e.g. 64), nc: image channels (e.g. 3)
  class Generator(nn.Module):
      def __init__(self, ngpu):
          super(Generator, self).__init__()
          self.ngpu = ngpu
          self.main = nn.Sequential(
              # input is Z, going into a convolution
              nn.ConvTranspose2d(nz, ngf * 8, 4, 1, 0, bias=False),  # in_channels=100, out_channels=512, kernel=4, stride=1, padding=0
              nn.BatchNorm2d(ngf * 8),
              nn.ReLU(True),
              # state size. (ngf*8) x 4 x 4
              nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),
              nn.BatchNorm2d(ngf * 4),
              nn.ReLU(True),
              # state size. (ngf*4) x 8 x 8
              nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),
              nn.BatchNorm2d(ngf * 2),
              nn.ReLU(True),
              # state size. (ngf*2) x 16 x 16
              nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),
              nn.BatchNorm2d(ngf),
              nn.ReLU(True),
              # state size. (ngf) x 32 x 32
              nn.ConvTranspose2d(ngf, nc, 4, 2, 1, bias=False),
              nn.Tanh()
              # state size. (nc) x 64 x 64
          )

      def forward(self, input):
          return self.main(input)

  class Discriminator(nn.Module):
      def __init__(self, ngpu):
          super(Discriminator, self).__init__()
          self.ngpu = ngpu
          self.main = nn.Sequential(
              # input is (nc) x 64 x 64
              nn.Conv2d(nc, ndf, 4, 2, 1, bias=False),
              nn.LeakyReLU(0.2, inplace=True),
              # state size. (ndf) x 32 x 32
              nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False),
              nn.BatchNorm2d(ndf * 2),
              nn.LeakyReLU(0.2, inplace=True),
              # state size. (ndf*2) x 16 x 16
              nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False),
              nn.BatchNorm2d(ndf * 4),
              nn.LeakyReLU(0.2, inplace=True),
              # state size. (ndf*4) x 8 x 8
              nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1, bias=False),
              nn.BatchNorm2d(ndf * 8),
              nn.LeakyReLU(0.2, inplace=True),
              # state size. (ndf*8) x 4 x 4
              nn.Conv2d(ndf * 8, 1, 4, 1, 0, bias=False),
              nn.Sigmoid()
          )

      def forward(self, input):
          return self.main(input)
  ```
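
A quick sampling sketch; nz, ngf, ndf, and nc are the usual DCGAN-paper choices, defined as module-level globals here because the classes above read them:

```Python
import torch

nz, ngf, ndf, nc, ngpu = 100, 64, 64, 3, 1
netG = Generator(ngpu)
netD = Discriminator(ngpu)

noise = torch.randn(16, nz, 1, 1)  # batch of 16 latent vectors
fake = netG(noise)                 # (16, 3, 64, 64) generated images
print(netD(fake).shape)            # torch.Size([16, 1, 1, 1]) real/fake scores
```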

" class="reference-link">ChatBot

  ```
  > what is your name?
  Bot: berger .
  > are you married?
  Bot: no .
  > how old are you?
  Bot: i m not hungry .
  > how are you?
  Bot: okay .
  > where are you from?
  Bot: i m travelling .
  > do you know me?
  Bot: yes .
  > who am i?
  Bot: i don t know .
  > what is your job?
  Bot: i m not going to tell you .
  > what is your problem?
  Bot: i m not afraid of anything .
  > are you robot?
  Error: Encountered unknown word.
  > what is my name?
  Bot: berger .
  > ai?
  Error: Encountered unknown word.
  > what do you want to me?
  Bot: i m going to kill you .
  > how do you kill me?
  Bot: i told you .
  > what is your plan?
  Bot: i m not going to tell you .
  > are you live?
  Bot: yes .
  > where?
  Bot: the zoo .
  > what is zoo?
  Bot: the sheets . . .
  > where is the zoo?
  Bot: i don t know .
  ```

" class="reference-link">Pytorch Sample Codes

  • CycleGAN [github], [github2]
  • [Project] A simple PyTorch Implementation of Generative Adversarial Networks, focusing on anime face drawing, [github]
  • Wiseodd/generative-models, both pytorch and tensorflow [github]
  • GAN, LSGAN, WGAN, DRAGAN, CGAN, infoGAN, ACGAN, EBGAN, BEGAN [github]
  • CartoonGAN [github]
  • Pix2Pix [github]

" class="reference-link">References