
AI&ML/DL algorithms (3)

AlexNet Code — AlexNet
# import library
import tensorflow as tf
from tensorflow.keras import Sequential, layers
from tensorflow.keras.layers import BatchNormalization, Dropout

def lrn(x, depth_radius=5, bias=1.0, alpha=1e-4, beta=0.75):
    return tf.nn.local_response_normalization(x,
                                              depth_radius=depth_radius,
                                              bias=bias..
2024. 8. 14.
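The excerpt above wraps `tf.nn.local_response_normalization`, which the post truncates. As a framework-free sketch of the same LRN formula, b[i] = a[i] / (bias + alpha * Σ a[j]²)^beta summed over a window of `depth_radius` neighboring channels, here is a NumPy version (the function name and loop structure are my own, not from the post):

```python
import numpy as np

def lrn(x, depth_radius=5, bias=1.0, alpha=1e-4, beta=0.75):
    """Local Response Normalization over the channel axis.

    x: NHWC array (batch, height, width, channels), matching TensorFlow's layout.
    """
    out = np.empty_like(x)
    c = x.shape[-1]
    for i in range(c):
        # window of channels [i - depth_radius, i + depth_radius] clipped to bounds
        lo, hi = max(0, i - depth_radius), min(c, i + depth_radius + 1)
        sqr_sum = np.sum(x[..., lo:hi] ** 2, axis=-1)
        out[..., i] = x[..., i] / (bias + alpha * sqr_sum) ** beta
    return out
```

With the default hyperparameters the denominator stays close to 1, so LRN only mildly suppresses activations with strong same-position neighbors across channels.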
Transformer Code — Transformer Self-Attention
Note: the code below does not include positional embedding!!
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F

# Scaled Dot-Product Attention
class ScaledDotProductAttention(nn.Module):
    def __init__(self):
        super(ScaledDotProductAttention, self).__init__()

    def forward(self, Q, K, V, mask=None):
        d_k = Q.size(-1)
    ..
2024. 7. 24.
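The post's module is in PyTorch and is cut off right after reading `d_k`. As a framework-free sketch of the computation it names, softmax(QKᵀ/√d_k)·V, here is a NumPy version (function names and the mask convention are my own assumptions, since the excerpt ends before the mask logic):

```python
import numpy as np

def softmax(x, axis=-1):
    # subtract the max for numerical stability
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V, mask=None):
    """Q, K, V: (..., seq_len, d_k) arrays.

    mask: optional boolean array, True where attention is allowed (an assumption;
    the post truncates before showing how the mask is applied).
    """
    d_k = Q.shape[-1]
    # attention scores, scaled by sqrt(d_k) to keep logits well-conditioned
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)
    if mask is not None:
        scores = np.where(mask, scores, -1e9)  # masked positions get ~zero weight
    weights = softmax(scores, axis=-1)
    return weights @ V, weights
```

Each row of `weights` sums to 1, so the output is a convex combination of the value vectors.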
VGGNET Code — VGGNET 16 weight
# import library
import tensorflow as tf
from tensorflow.keras import Sequential, layers

# build vgg16 model
def vgg16():
    model = Sequential()

    model.add(layers.Conv2D(64, kernel_size=(3, 3), padding='same', activation='relu', input_shape=(224, 224, 3)))
    model.add(layers.Conv2D(64, kernel_size=(3, 3), padding='same', activation='relu'))
    model.add(lay..
2024. 7. 10.
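The excerpt builds VGG16 layer by layer in Keras before being cut off. As a compact pure-Python sketch of the same architecture, here is its standard configuration (all 3×3 same-padded convs, 2×2 max-pools, then three fully connected layers) used to count trainable parameters; the config-list encoding is my own, not from the post:

```python
# VGG16 configuration: conv output channels per block ('M' = 2x2 max-pool)
cfg = [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M',
       512, 512, 512, 'M', 512, 512, 512, 'M']

def vgg16_param_count(in_channels=3, num_classes=1000):
    params, c = 0, in_channels
    spatial = 224  # input images are 224x224
    for v in cfg:
        if v == 'M':
            spatial //= 2  # max-pool halves the feature map
        else:
            params += 3 * 3 * c * v + v  # 3x3 conv weights + biases
            c = v
    # flatten -> FC-4096 -> FC-4096 -> FC-num_classes
    flat = c * spatial * spatial  # 512 * 7 * 7 = 25088
    for out_f in (4096, 4096, num_classes):
        params += flat * out_f + out_f
        flat = out_f
    return params

print(vgg16_param_count())  # 138357544 trainable parameters
```

Most of those parameters sit in the first fully connected layer (25088 × 4096 ≈ 103M), which is why the convolutional stack itself is comparatively cheap.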