Day 26: Regularization and Normalization


Regularization

Regularization vs. Normalization

Regularization : one of the techniques for addressing overfitting, and the most important topic we will cover today. L1 and L2 regularization, dropout, and batch normalization are all examples, and every one of them is a way of fighting overfitting.

Normalization : the process of preprocessing data into a more meaningful form, or into one better suited to training. Converting data to z-scores, or using a min-max scaler to squeeze the distribution into values between 0 and 1, are typical examples.
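Both transforms are easy to write out by hand. A minimal sketch on a made-up toy array (plain numpy, rather than sklearn's scalers):

import numpy as np

data = np.array([1.0, 2.0, 3.0, 4.0, 100.0])

# z-score: shift to zero mean, scale to unit standard deviation
z = (data - data.mean()) / data.std()

# min-max: map the smallest value to 0 and the largest to 1
mm = (data - data.min()) / (data.max() - data.min())

print(z)   # mean ≈ 0, std ≈ 1
print(mm)  # every value lies in [0, 1]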

from sklearn.datasets import load_iris
import pandas as pd 
import matplotlib.pyplot as plt

iris = load_iris()
iris_df = pd.DataFrame(data=iris.data, columns=iris.feature_names)
target_df = pd.DataFrame(data=iris.target, columns=['species'])

# The target data is coded as 0, 1, 2;
# replace it with the readable labels 'setosa', 'versicolor', 'virginica'.
def converter(species):
    if species == 0:
        return 'setosa'
    elif species == 1:
        return 'versicolor'
    else:
        return 'virginica'

target_df['species'] = target_df['species'].apply(converter)

iris_df = pd.concat([iris_df, target_df], axis=1)
iris_df.head()
   sepal length (cm)  sepal width (cm)  petal length (cm)  petal width (cm) species
0                5.1               3.5                1.4               0.2  setosa
1                4.9               3.0                1.4               0.2  setosa
2                4.7               3.2                1.3               0.2  setosa
3                4.6               3.1                1.5               0.2  setosa
4                5.0               3.6                1.4               0.2  setosa
X = [iris_df['petal length (cm)'][a] for a in iris_df.index if iris_df['species'][a]=='virginica']
Y = [iris_df['sepal length (cm)'][a] for a in iris_df.index if iris_df['species'][a]=='virginica']

print(X)
print(Y)
[6.0, 5.1, 5.9, 5.6, 5.8, 6.6, 4.5, 6.3, 5.8, 6.1, 5.1, 5.3, 5.5, 5.0, 5.1, 5.3, 5.5, 6.7, 6.9, 5.0, 5.7, 4.9, 6.7, 4.9, 5.7, 6.0, 4.8, 4.9, 5.6, 5.8, 6.1, 6.4, 5.6, 5.1, 5.6, 6.1, 5.6, 5.5, 4.8, 5.4, 5.6, 5.1, 5.1, 5.9, 5.7, 5.2, 5.0, 5.2, 5.4, 5.1]
[6.3, 5.8, 7.1, 6.3, 6.5, 7.6, 4.9, 7.3, 6.7, 7.2, 6.5, 6.4, 6.8, 5.7, 5.8, 6.4, 6.5, 7.7, 7.7, 6.0, 6.9, 5.6, 7.7, 6.3, 6.7, 7.2, 6.2, 6.1, 6.4, 7.2, 7.4, 7.9, 6.4, 6.3, 6.1, 7.7, 6.3, 6.4, 6.0, 6.9, 6.7, 6.9, 5.8, 6.8, 6.7, 6.7, 6.3, 6.5, 6.2, 5.9]
plt.figure(figsize=(5,5))
plt.scatter(X,Y)
plt.title('petal-sepal scatter before normalization') 
plt.xlabel('petal length (cm)')
plt.ylabel('sepal length (cm)')
plt.grid()
plt.show()

(figure: petal-sepal scatter before normalization)

After normalization

from sklearn.preprocessing import minmax_scale

X_scale = minmax_scale(X)
Y_scale = minmax_scale(Y)

plt.figure(figsize=(5,5))
plt.scatter(X_scale,Y_scale)
plt.title('petal-sepal scatter after normalization') 
plt.xlabel('petal length (cm)')
plt.ylabel('sepal length (cm)')
plt.grid()
plt.show()

(figure: petal-sepal scatter after normalization)

from sklearn.linear_model import LinearRegression
import numpy as np 

X = np.array(X)
Y = np.array(Y)

# Fit a linear regression to the iris data.
linear = LinearRegression()
linear.fit(X.reshape(-1,1), Y)

# Check the slope and intercept of the fitted line.
a, b = linear.coef_, linear.intercept_
print("slope : %0.2f, intercept : %0.2f" % (a, b))
slope : 1.00, intercept : 1.06
plt.figure(figsize=(5,5))
plt.scatter(X,Y)
plt.plot(X,linear.predict(X.reshape(-1,1)),'-b')
plt.title('petal-sepal scatter with linear regression') 
plt.xlabel('petal length (cm)')
plt.ylabel('sepal length (cm)')
plt.grid()
plt.show()

(figure: petal-sepal scatter with linear regression)

# L1 regularization is provided by the Lasso class.
from sklearn.linear_model import Lasso

L1 = Lasso()
L1.fit(X.reshape(-1,1), Y)
a, b = L1.coef_, L1.intercept_
print("slope : %0.2f, intercept : %0.2f" % (a, b))

plt.figure(figsize=(5,5))
plt.scatter(X,Y)
plt.plot(X,L1.predict(X.reshape(-1,1)),'-b')
plt.title('petal-sepal scatter with L1 regularization(Lasso)') 
plt.xlabel('petal length (cm)')
plt.ylabel('sepal length (cm)')
plt.grid()
plt.show()
slope : 0.00, intercept : 6.59

(figure: petal-sepal scatter with L1 regularization (Lasso))

With the default settings, Lasso estimates the slope as 0, so L1 regularization fails to fit this data properly.

Perhaps L2 will give us a more sensible fit…

# L2 regularization is provided by the Ridge class.
from sklearn.linear_model import Ridge

L2 = Ridge()
L2.fit(X.reshape(-1,1), Y)
a, b = L2.coef_, L2.intercept_
print("slope : %0.2f, intercept : %0.2f" % (a, b))

plt.figure(figsize=(5,5))
plt.scatter(X,Y)
plt.plot(X,L2.predict(X.reshape(-1,1)),'-b')
plt.title('petal-sepal scatter with L2 regularization(Ridge)') 
plt.xlabel('petal length (cm)')
plt.ylabel('sepal length (cm)')
plt.grid()
plt.show()
slope : 0.93, intercept : 1.41

(figure: petal-sepal scatter with L2 regularization (Ridge))

Linear regression is itself tied to the L2 norm: ordinary least squares minimizes a squared (L2) error. That is why Ridge, which uses L2 regularization, produces a result not much different from the plain linear regression above.
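Spelling this out with the standard formulas (added here for reference), ordinary least squares and Ridge solve

$$\hat{β}_{OLS} = \arg\min_{β} \|y - Xβ\|_2^2, \qquad \hat{β}_{Ridge} = \arg\min_{β} \|y - Xβ\|_2^2 + λ\|β\|_2^2,$$

so Ridge merely adds an L2 penalty to a loss that is already an L2 norm; with a modest $λ$ the penalty term barely moves the solution.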

L1 Regularization


from sklearn.datasets import load_iris
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

iris = load_iris()
iris_df = pd.DataFrame(data=iris.data, columns=iris.feature_names)
target_df = pd.DataFrame(data=iris.target, columns=['species'])

def converter(species):
    if species == 0:
        return 'setosa'
    elif species == 1:
        return 'versicolor'
    else:
        return 'virginica'

target_df['species'] = target_df['species'].apply(converter)

iris_df = pd.concat([iris_df, target_df], axis=1)
iris_df.head()
   sepal length (cm)  sepal width (cm)  petal length (cm)  petal width (cm) species
0                5.1               3.5                1.4               0.2  setosa
1                4.9               3.0                1.4               0.2  setosa
2                4.7               3.2                1.3               0.2  setosa
3                4.6               3.1                1.5               0.2  setosa
4                5.0               3.6                1.4               0.2  setosa
X = [iris_df['petal length (cm)'][a] for a in iris_df.index if iris_df['species'][a]=='virginica']
Y = [iris_df['sepal length (cm)'][a] for a in iris_df.index if iris_df['species'][a]=='virginica']

X = np.array(X)
Y = np.array(Y)

plt.figure(figsize=(5,5))
plt.scatter(X,Y)
plt.xlabel('petal length (cm)')
plt.ylabel('sepal length (cm)')
plt.grid()
plt.show()

(figure: petal-sepal scatter, virginica only)

from sklearn.linear_model import Lasso

L1 = Lasso()
L1.fit(X.reshape(-1,1), Y)
a, b = L1.coef_, L1.intercept_
print("기울기 : %0.2f, 절편 : %0.2f" %(a,b))

plt.figure(figsize=(5,5))
plt.scatter(X,Y)
plt.plot(X,L1.predict(X.reshape(-1,1)),'-b')
plt.xlabel('petal length (cm)')
plt.ylabel('sepal length (cm)')
plt.grid()
plt.show()
slope : 0.00, intercept : 6.59

(figure: petal-sepal scatter with the Lasso fit)


When $p=1$, differentiating the penalty term with respect to $β$ leaves only a constant: the gradient of the penalty no longer depends on $β$ itself, so the smooth shrinking effect we expect from regularization cannot appear. For a linear regression on one-dimensional $X$ like this one, L1 regularization is therefore not meaningful; it shows its real effect only on data with two or more feature columns.
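For clarity, here is the comparison spelled out (a standard derivation, not in the original post). For a single coefficient $β$,

$$\frac{\partial}{\partial β}\, λβ^2 = 2λβ \quad (p=2), \qquad \frac{\partial}{\partial β}\, λ|β| = λ\,\mathrm{sign}(β) \quad (p=1).$$

The $p=2$ gradient shrinks along with $β$ and pulls the coefficient smoothly toward 0, while the $p=1$ gradient is a fixed-size push: it either snaps a small coefficient straight to 0, as Lasso did above, or has little relative effect on a large one.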

Another example of L1…

from sklearn.datasets import load_wine

wine = load_wine()
wine_df = pd.DataFrame(data=wine.data, columns=wine.feature_names)
target_df = pd.DataFrame(data=wine.target, columns=['Y'])
wine_df.head(5)
   alcohol  malic_acid   ash  alcalinity_of_ash  magnesium  total_phenols  flavanoids
0    14.23        1.71  2.43               15.6      127.0           2.80        3.06
1    13.20        1.78  2.14               11.2      100.0           2.65        2.76
2    13.16        2.36  2.67               18.6      101.0           2.80        3.24
3    14.37        1.95  2.50               16.8      113.0           3.85        3.49
4    13.24        2.59  2.87               21.0      118.0           2.80        2.69

   nonflavanoid_phenols  proanthocyanins  color_intensity   hue  od280/od315_of_diluted_wines  proline
0                  0.28             2.29             5.64  1.04                          3.92   1065.0
1                  0.26             1.28             4.38  1.05                          3.40   1050.0
2                  0.30             2.81             5.68  1.03                          3.17   1185.0
3                  0.24             2.18             7.80  0.86                          3.45   1480.0
4                  0.39             1.82             4.32  1.04                          2.93    735.0
target_df.head(5)
   Y
0  0
1  0
2  0
3  0
4  0

Solving with linear regression…

from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error

# Prepare the data.
X_train, X_test, y_train, y_test = train_test_split(wine_df, target_df, test_size=0.3, random_state=101)

# Train the model.
model = LinearRegression()
model.fit(X_train, y_train)

# Predict on the test set.
pred = model.predict(X_test)

# And here are the test results!
print("result of linear regression")
print('Mean Absolute Error:', mean_absolute_error(y_test, pred))
print('Mean Squared Error:', mean_squared_error(y_test, pred))
print('Root Mean Squared Error:', np.sqrt(mean_squared_error(y_test, pred)))

print("\n\n coefficient linear regression")
print(model.coef_)
result of linear regression
Mean Absolute Error: 0.25128973939722626
Mean Squared Error: 0.1062458740952556
Root Mean Squared Error: 0.32595379134971814


 coefficient linear regression
[[-8.09017190e-02  4.34817880e-02 -1.18857931e-01  3.65705449e-02
  -4.68014203e-04  1.41423581e-01 -4.54107854e-01 -5.13172664e-01
   9.69318443e-02  5.34311136e-02 -1.27626604e-01 -2.91381844e-01
  -5.72238959e-04]]

with L1 regularization

from sklearn.linear_model import Lasso
from sklearn.metrics import mean_absolute_error, mean_squared_error

# Prepare and train the model.
L1 = Lasso(alpha=0.05)
L1.fit(X_train, y_train)

# Predict on the test set.
pred = L1.predict(X_test)

# How well does the model do?
print("result of Lasso")
print('Mean Absolute Error:', mean_absolute_error(y_test, pred))
print('Mean Squared Error:', mean_squared_error(y_test, pred))
print('Root Mean Squared Error:', np.sqrt(mean_squared_error(y_test, pred)))

print("\n\n coefficient of Lasso")
print(L1.coef_)
result of Lasso
Mean Absolute Error: 0.24233731936122138
Mean Squared Error: 0.0955956894578189
Root Mean Squared Error: 0.3091855259513597


 coefficient of Lasso
[-0.          0.01373795 -0.          0.03065716  0.00154719 -0.
 -0.34143614 -0.          0.          0.06755943 -0.         -0.14558153
 -0.00089635]

Looking at the coefficients given by L1, only 7 out of 13 are nonzero while the rest are set to 0. Even though the errors were not much different, we can now observe which columns had a bigger effect on the output.
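To read off exactly which columns survived, we can pair each coefficient with its feature name. A small sketch, reusing the fitted L1 model and wine_df from above:

# Map the Lasso coefficients onto the wine feature names and keep the nonzero ones.
coef = pd.Series(np.ravel(L1.coef_), index=wine_df.columns)
print(coef[coef != 0])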

L2 Regularization


L2 regularization pushes coefficients close to 0 but, unlike L1, not exactly to 0. It also converges faster than the L1 norm, as the next experiment shows.

from sklearn.datasets import load_wine
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error

wine = load_wine()
wine_df = pd.DataFrame(data=wine.data, columns=wine.feature_names)
target_df = pd.DataFrame(data=wine.target, columns=['Y'])
X_train, X_test, y_train, y_test = train_test_split(wine_df, target_df, test_size= 0.3, random_state=101)
print('=3')
=3
from sklearn.linear_model import Lasso

# max_iter is kept deliberately tiny (5) to compare how quickly L1 and L2 converge.
L1 = Lasso(alpha=0.05, max_iter=5)
L1.fit(X_train, y_train)
pred = L1.predict(X_test)

print("result of Lasso")
print('Mean Absolute Error:', mean_absolute_error(y_test, pred))
print('Mean Squared Error:', mean_squared_error(y_test, pred))
print('Root Mean Squared Error:', np.sqrt(mean_squared_error(y_test, pred)))

print("\n\n coefficient of Lasso")
print(L1.coef_)
result of Lasso
Mean Absolute Error: 0.24845768841769436
Mean Squared Error: 0.10262989110341268
Root Mean Squared Error: 0.32035900346862844


 coefficient of Lasso
[-0.          0.         -0.          0.03295564  0.00109495  0.
 -0.4027847   0.          0.          0.06023131 -0.         -0.12001119
 -0.00078971]


/opt/conda/lib/python3.7/site-packages/sklearn/linear_model/_coordinate_descent.py:529: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations. Duality gap: 3.924145836492522, tolerance: 0.007479838709677421
  positive)
Note the ConvergenceWarning above: within 5 iterations Lasso failed to converge. Ridge, given the same budget, has no such trouble.

from sklearn.linear_model import Ridge

# Same tiny iteration budget as the Lasso run above.
L2 = Ridge(alpha=0.05, max_iter=5)
L2.fit(X_train, y_train)
pred = L2.predict(X_test)

print("result of Ridge")
print('Mean Absolute Error:', mean_absolute_error(y_test, pred))
print('Mean Squared Error:', mean_squared_error(y_test, pred))
print('Root Mean Squared Error:', np.sqrt(mean_squared_error(y_test, pred)))

print("\n\n coefficient of Ridge")
print(L2.coef_)
result of Ridge
Mean Absolute Error: 0.2511466959936429
Mean Squared Error: 0.10568076460795563
Root Mean Squared Error: 0.3250857803841251


 coefficient of Ridge
[[-8.12456257e-02  4.35541496e-02 -1.21661565e-01  3.65979773e-02
  -3.94014013e-04  1.39168707e-01 -4.50691113e-01 -4.87216747e-01
   9.54111059e-02  5.37077039e-02 -1.28602933e-01 -2.89832790e-01
  -5.73136185e-04]]

To summarize: L1 regularization sends the coefficients of low-weight features to 0, so it acts much like dimensionality reduction, while L2 regularization pushes coefficients toward 0 without reaching it, and thanks to its squared term it converges faster than L1. For example, for $A=[1,1,1,1,1]$ and $B=[5,0,0,0,0]$ the L1 norms are equal but the L2 norms are not: the squared term weighs values with a large influence on the result more heavily and values with little influence less, which is what speeds up convergence.
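We can check the $A$/$B$ example directly with numpy (a quick sanity check, not part of the original exercise):

A = np.array([1, 1, 1, 1, 1])
B = np.array([5, 0, 0, 0, 0])

# L1 norms are equal (5.0 and 5.0); L2 norms differ (sqrt(5) ≈ 2.24 vs 5.0).
print(np.linalg.norm(A, 1), np.linalg.norm(B, 1))
print(np.linalg.norm(A, 2), np.linalg.norm(B, 2))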

Extra : Lp norm

(image: definition of the Lp norm, $\|x\|_p = \left(\sum_i |x_i|^p\right)^{1/p}$)

x=np.array([1,10,1,1,1])
p=5
norm_x=np.linalg.norm(x, ord=p)
making_norm = (sum(x**p))**(1/p)
print("result of numpy package norm function : %0.5f "%norm_x) 
print("result of making norm : %0.5f "%making_norm)
result of numpy package norm function : 10.00008 
result of making norm : 10.00008 
norm_x=np.linalg.norm(x, ord=np.inf)
print("result of infinite norm : %0.5f "%norm_x)
result of infinite norm : 10.00000 
A=np.array([[1,2,3],[1,2,3],[4,6,8]])
inf_norm_A=np.linalg.norm(A, ord=np.inf)
print("result inf norm of A :", inf_norm_A)
one_norm_A=np.linalg.norm(A, ord=1)
print("result one norm of A :", one_norm_A)
result inf norm of A : 18.0
result one norm of A : 14.0
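For matrices, these ord values mean something different from the vector case: ord=np.inf is the maximum absolute row sum and ord=1 is the maximum absolute column sum, which is where 18.0 and 14.0 come from. A quick check by hand:

# Row sums of A are 6, 6, 18 -> inf-norm = 18; column sums are 6, 10, 14 -> 1-norm = 14.
print(np.abs(A).sum(axis=1).max(), np.abs(A).sum(axis=0).max())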

Dropout

Before dropout was introduced, neural networks were fully connected architectures: every neuron was wired to every neuron in the next layer. Dropout instead randomly selects, with a given probability, only some of the neurons to pass information forward during training. Dropout is one of the regularization layers used to prevent overfitting. If the drop probability is set too high, too little information gets through and the network trains poorly; if it is set too low, the layer behaves just like a fully connected layer. A dropout layer is therefore usually added when a fully connected model starts to overfit.
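To see the mechanics, here is a minimal sketch (the rate and tensor shape are arbitrary choices) that applies a Dropout layer to a tensor of ones. During training each unit is zeroed with probability rate and the survivors are scaled by 1/(1-rate) so the expected activation stays the same; at inference the layer passes inputs through unchanged.

import tensorflow as tf

layer = tf.keras.layers.Dropout(rate=0.5)
x = tf.ones((1, 10))

print(layer(x, training=True))   # about half the entries zeroed, the rest scaled to 2.0
print(layer(x, training=False))  # identical to x: dropout is inactive at inference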

when not overfitting

import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split

fashion_mnist = keras.datasets.fashion_mnist
print('=3')
=3
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

train_images = train_images / 255.0
test_images = test_images / 255.0
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation='relu'),
    # A dropout layer is added here (with an extreme rate of 0.9); the other layers are the same as in the run below.
    keras.layers.Dropout(0.9),
    keras.layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam',loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

history= model.fit(train_images, train_labels, epochs=5)
Epoch 1/5
1875/1875 [==============================] - 4s 2ms/step - loss: 1.6178 - accuracy: 0.3913
Epoch 2/5
1875/1875 [==============================] - 3s 2ms/step - loss: 1.1984 - accuracy: 0.5197
Epoch 3/5
1875/1875 [==============================] - 3s 2ms/step - loss: 1.1308 - accuracy: 0.5441
Epoch 4/5
1875/1875 [==============================] - 3s 2ms/step - loss: 1.1052 - accuracy: 0.5581
Epoch 5/5
1875/1875 [==============================] - 3s 2ms/step - loss: 1.0924 - accuracy: 0.5649
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    # This time there is no dropout layer.
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam',loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

history = model.fit(train_images, train_labels, epochs=5)
Epoch 1/5
1875/1875 [==============================] - 3s 1ms/step - loss: 0.6280 - accuracy: 0.7828
Epoch 2/5
1875/1875 [==============================] - 3s 1ms/step - loss: 0.3892 - accuracy: 0.8602
Epoch 3/5
1875/1875 [==============================] - 3s 1ms/step - loss: 0.3314 - accuracy: 0.8788
Epoch 4/5
1875/1875 [==============================] - 3s 1ms/step - loss: 0.3143 - accuracy: 0.8861
Epoch 5/5
1875/1875 [==============================] - 3s 1ms/step - loss: 0.2909 - accuracy: 0.8910

when overfitting

X_train, X_valid, y_train, y_valid = train_test_split(train_images, train_labels, test_size=0.01, random_state=101)
# train_images was already scaled to [0, 1] above, so we do not divide by 255 again.

# A classification model built from Dense layers only.
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(256, activation='relu'),
    keras.layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam',loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

history= model.fit(X_train, y_train, epochs=200, batch_size=512, validation_data=(X_valid, y_valid))
Epoch 1/200
117/117 [==============================] - 1s 7ms/step - loss: 2.2003 - accuracy: 0.4001 - val_loss: 1.6401 - val_accuracy: 0.6367
Epoch 2/200
117/117 [==============================] - 0s 3ms/step - loss: 1.5164 - accuracy: 0.6190 - val_loss: 1.1601 - val_accuracy: 0.6817
Epoch 3/200
117/117 [==============================] - 0s 3ms/step - loss: 1.1129 - accuracy: 0.6734 - val_loss: 0.9331 - val_accuracy: 0.7383
Epoch 4/200
117/117 [==============================] - 0s 3ms/step - loss: 0.9106 - accuracy: 0.7183 - val_loss: 0.7990 - val_accuracy: 0.7617
Epoch 5/200
117/117 [==============================] - 0s 3ms/step - loss: 0.7943 - accuracy: 0.7362 - val_loss: 0.7276 - val_accuracy: 0.7650
Epoch 6/200
117/117 [==============================] - 0s 3ms/step - loss: 0.7301 - accuracy: 0.7470 - val_loss: 0.6737 - val_accuracy: 0.7767
Epoch 7/200
117/117 [==============================] - 0s 3ms/step - loss: 0.6801 - accuracy: 0.7614 - val_loss: 0.6466 - val_accuracy: 0.7783
Epoch 8/200
117/117 [==============================] - 0s 3ms/step - loss: 0.6527 - accuracy: 0.7667 - val_loss: 0.6172 - val_accuracy: 0.7833
Epoch 9/200
117/117 [==============================] - 0s 3ms/step - loss: 0.6228 - accuracy: 0.7750 - val_loss: 0.6004 - val_accuracy: 0.7950
Epoch 10/200
117/117 [==============================] - 0s 3ms/step - loss: 0.6057 - accuracy: 0.7845 - val_loss: 0.5812 - val_accuracy: 0.7967
Epoch 11/200
117/117 [==============================] - 0s 3ms/step - loss: 0.5771 - accuracy: 0.7965 - val_loss: 0.5675 - val_accuracy: 0.7950
Epoch 12/200
117/117 [==============================] - 0s 3ms/step - loss: 0.5650 - accuracy: 0.7999 - val_loss: 0.5553 - val_accuracy: 0.8017
Epoch 13/200
117/117 [==============================] - 0s 3ms/step - loss: 0.5546 - accuracy: 0.8031 - val_loss: 0.5454 - val_accuracy: 0.8083
Epoch 14/200
117/117 [==============================] - 0s 3ms/step - loss: 0.5461 - accuracy: 0.8059 - val_loss: 0.5373 - val_accuracy: 0.8000
Epoch 15/200
117/117 [==============================] - 0s 3ms/step - loss: 0.5348 - accuracy: 0.8113 - val_loss: 0.5246 - val_accuracy: 0.8150
Epoch 16/200
117/117 [==============================] - 0s 3ms/step - loss: 0.5214 - accuracy: 0.8161 - val_loss: 0.5140 - val_accuracy: 0.8100
Epoch 17/200
117/117 [==============================] - 0s 3ms/step - loss: 0.5205 - accuracy: 0.8173 - val_loss: 0.5101 - val_accuracy: 0.8133
Epoch 18/200
117/117 [==============================] - 0s 3ms/step - loss: 0.5097 - accuracy: 0.8202 - val_loss: 0.5052 - val_accuracy: 0.8200
Epoch 19/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4964 - accuracy: 0.8267 - val_loss: 0.4989 - val_accuracy: 0.8183
Epoch 20/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4915 - accuracy: 0.8285 - val_loss: 0.4921 - val_accuracy: 0.8200
Epoch 21/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4882 - accuracy: 0.8270 - val_loss: 0.4837 - val_accuracy: 0.8200
Epoch 22/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4793 - accuracy: 0.8311 - val_loss: 0.4826 - val_accuracy: 0.8200
Epoch 23/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4756 - accuracy: 0.8326 - val_loss: 0.4803 - val_accuracy: 0.8233
Epoch 24/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4721 - accuracy: 0.8325 - val_loss: 0.4716 - val_accuracy: 0.8267
Epoch 25/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4659 - accuracy: 0.8379 - val_loss: 0.4741 - val_accuracy: 0.8333
Epoch 26/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4673 - accuracy: 0.8359 - val_loss: 0.4630 - val_accuracy: 0.8233
Epoch 27/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4555 - accuracy: 0.8392 - val_loss: 0.4602 - val_accuracy: 0.8267
Epoch 28/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4541 - accuracy: 0.8414 - val_loss: 0.4581 - val_accuracy: 0.8233
Epoch 29/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4566 - accuracy: 0.8412 - val_loss: 0.4548 - val_accuracy: 0.8317
Epoch 30/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4500 - accuracy: 0.8404 - val_loss: 0.4541 - val_accuracy: 0.8333
Epoch 31/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4443 - accuracy: 0.8447 - val_loss: 0.4458 - val_accuracy: 0.8317
Epoch 32/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4499 - accuracy: 0.8432 - val_loss: 0.4438 - val_accuracy: 0.8300
Epoch 33/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4430 - accuracy: 0.8437 - val_loss: 0.4412 - val_accuracy: 0.8300
Epoch 34/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4404 - accuracy: 0.8463 - val_loss: 0.4391 - val_accuracy: 0.8367
Epoch 35/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4307 - accuracy: 0.8500 - val_loss: 0.4379 - val_accuracy: 0.8333
Epoch 36/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4320 - accuracy: 0.8490 - val_loss: 0.4343 - val_accuracy: 0.8350
Epoch 37/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4289 - accuracy: 0.8514 - val_loss: 0.4310 - val_accuracy: 0.8317
Epoch 38/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4251 - accuracy: 0.8514 - val_loss: 0.4306 - val_accuracy: 0.8317
Epoch 39/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4284 - accuracy: 0.8529 - val_loss: 0.4281 - val_accuracy: 0.8317
Epoch 40/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4207 - accuracy: 0.8528 - val_loss: 0.4213 - val_accuracy: 0.8433
Epoch 41/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4168 - accuracy: 0.8540 - val_loss: 0.4232 - val_accuracy: 0.8400
Epoch 42/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4141 - accuracy: 0.8558 - val_loss: 0.4216 - val_accuracy: 0.8367
Epoch 43/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4174 - accuracy: 0.8547 - val_loss: 0.4186 - val_accuracy: 0.8383
Epoch 44/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4134 - accuracy: 0.8550 - val_loss: 0.4181 - val_accuracy: 0.8333
Epoch 45/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4137 - accuracy: 0.8547 - val_loss: 0.4130 - val_accuracy: 0.8383
Epoch 46/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4123 - accuracy: 0.8563 - val_loss: 0.4139 - val_accuracy: 0.8350
Epoch 47/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4119 - accuracy: 0.8558 - val_loss: 0.4109 - val_accuracy: 0.8333
Epoch 48/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4121 - accuracy: 0.8543 - val_loss: 0.4092 - val_accuracy: 0.8300
Epoch 49/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4096 - accuracy: 0.8574 - val_loss: 0.4066 - val_accuracy: 0.8367
Epoch 50/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4066 - accuracy: 0.8568 - val_loss: 0.4073 - val_accuracy: 0.8350
Epoch 51/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4037 - accuracy: 0.8596 - val_loss: 0.4044 - val_accuracy: 0.8350
Epoch 52/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3992 - accuracy: 0.8598 - val_loss: 0.4100 - val_accuracy: 0.8350
Epoch 53/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4051 - accuracy: 0.8576 - val_loss: 0.4044 - val_accuracy: 0.8417
Epoch 54/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3968 - accuracy: 0.8599 - val_loss: 0.4021 - val_accuracy: 0.8367
Epoch 55/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3968 - accuracy: 0.8612 - val_loss: 0.3989 - val_accuracy: 0.8350
Epoch 56/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3917 - accuracy: 0.8593 - val_loss: 0.3960 - val_accuracy: 0.8383
Epoch 57/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3895 - accuracy: 0.8629 - val_loss: 0.3957 - val_accuracy: 0.8400
Epoch 58/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3915 - accuracy: 0.8622 - val_loss: 0.3947 - val_accuracy: 0.8367
Epoch 59/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3928 - accuracy: 0.8631 - val_loss: 0.3918 - val_accuracy: 0.8400
Epoch 60/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3890 - accuracy: 0.8627 - val_loss: 0.3950 - val_accuracy: 0.8433
Epoch 61/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3909 - accuracy: 0.8637 - val_loss: 0.3885 - val_accuracy: 0.8417
Epoch 62/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3799 - accuracy: 0.8683 - val_loss: 0.3894 - val_accuracy: 0.8350
Epoch 63/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3836 - accuracy: 0.8689 - val_loss: 0.3929 - val_accuracy: 0.8383
Epoch 64/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3827 - accuracy: 0.8659 - val_loss: 0.3889 - val_accuracy: 0.8417
Epoch 65/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3866 - accuracy: 0.8658 - val_loss: 0.3871 - val_accuracy: 0.8417
Epoch 66/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3803 - accuracy: 0.8654 - val_loss: 0.3853 - val_accuracy: 0.8417
Epoch 67/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3751 - accuracy: 0.8662 - val_loss: 0.3832 - val_accuracy: 0.8417
Epoch 68/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3740 - accuracy: 0.8697 - val_loss: 0.3873 - val_accuracy: 0.8450
Epoch 69/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3756 - accuracy: 0.8684 - val_loss: 0.3818 - val_accuracy: 0.8367
Epoch 70/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3743 - accuracy: 0.8703 - val_loss: 0.3840 - val_accuracy: 0.8450
Epoch 71/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3770 - accuracy: 0.8677 - val_loss: 0.3790 - val_accuracy: 0.8400
Epoch 72/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3717 - accuracy: 0.8698 - val_loss: 0.3828 - val_accuracy: 0.8483
Epoch 73/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3725 - accuracy: 0.8698 - val_loss: 0.3779 - val_accuracy: 0.8450
Epoch 74/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3704 - accuracy: 0.8713 - val_loss: 0.3779 - val_accuracy: 0.8483
Epoch 75/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3670 - accuracy: 0.8714 - val_loss: 0.3825 - val_accuracy: 0.8400
Epoch 76/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3677 - accuracy: 0.8705 - val_loss: 0.3786 - val_accuracy: 0.8450
Epoch 77/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3688 - accuracy: 0.8702 - val_loss: 0.3779 - val_accuracy: 0.8483
Epoch 78/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3654 - accuracy: 0.8726 - val_loss: 0.3723 - val_accuracy: 0.8533
Epoch 79/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3670 - accuracy: 0.8728 - val_loss: 0.3744 - val_accuracy: 0.8467
Epoch 80/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3637 - accuracy: 0.8712 - val_loss: 0.3766 - val_accuracy: 0.8500
Epoch 81/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3603 - accuracy: 0.8741 - val_loss: 0.3718 - val_accuracy: 0.8500
Epoch 82/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3616 - accuracy: 0.8730 - val_loss: 0.3704 - val_accuracy: 0.8500
Epoch 83/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3600 - accuracy: 0.8754 - val_loss: 0.3683 - val_accuracy: 0.8483
Epoch 84/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3603 - accuracy: 0.8724 - val_loss: 0.3697 - val_accuracy: 0.8450
Epoch 85/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3563 - accuracy: 0.8749 - val_loss: 0.3704 - val_accuracy: 0.8533
Epoch 86/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3580 - accuracy: 0.8745 - val_loss: 0.3687 - val_accuracy: 0.8500
Epoch 87/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3578 - accuracy: 0.8755 - val_loss: 0.3655 - val_accuracy: 0.8533
Epoch 88/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3536 - accuracy: 0.8776 - val_loss: 0.3692 - val_accuracy: 0.8550
Epoch 89/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3579 - accuracy: 0.8730 - val_loss: 0.3655 - val_accuracy: 0.8533
Epoch 90/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3504 - accuracy: 0.8772 - val_loss: 0.3657 - val_accuracy: 0.8567
Epoch 91/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3531 - accuracy: 0.8754 - val_loss: 0.3647 - val_accuracy: 0.8550
Epoch 92/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3563 - accuracy: 0.8750 - val_loss: 0.3720 - val_accuracy: 0.8517
Epoch 93/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3615 - accuracy: 0.8699 - val_loss: 0.3642 - val_accuracy: 0.8567
Epoch 94/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3546 - accuracy: 0.8752 - val_loss: 0.3639 - val_accuracy: 0.8517
Epoch 95/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3510 - accuracy: 0.8769 - val_loss: 0.3615 - val_accuracy: 0.8567
Epoch 96/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3491 - accuracy: 0.8778 - val_loss: 0.3609 - val_accuracy: 0.8533
Epoch 97/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3479 - accuracy: 0.8769 - val_loss: 0.3598 - val_accuracy: 0.8550
Epoch 98/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3433 - accuracy: 0.8796 - val_loss: 0.3627 - val_accuracy: 0.8567
Epoch 99/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3420 - accuracy: 0.8792 - val_loss: 0.3609 - val_accuracy: 0.8550
Epoch 100/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3462 - accuracy: 0.8773 - val_loss: 0.3607 - val_accuracy: 0.8583
Epoch 101/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3437 - accuracy: 0.8799 - val_loss: 0.3585 - val_accuracy: 0.8483
Epoch 102/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3431 - accuracy: 0.8786 - val_loss: 0.3586 - val_accuracy: 0.8583
Epoch 103/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3386 - accuracy: 0.8810 - val_loss: 0.3606 - val_accuracy: 0.8550
Epoch 104/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3428 - accuracy: 0.8782 - val_loss: 0.3559 - val_accuracy: 0.8483
Epoch 105/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3433 - accuracy: 0.8778 - val_loss: 0.3569 - val_accuracy: 0.8550
Epoch 106/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3391 - accuracy: 0.8805 - val_loss: 0.3556 - val_accuracy: 0.8533
Epoch 107/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3357 - accuracy: 0.8813 - val_loss: 0.3558 - val_accuracy: 0.8567
Epoch 108/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3403 - accuracy: 0.8801 - val_loss: 0.3547 - val_accuracy: 0.8533
Epoch 109/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3391 - accuracy: 0.8799 - val_loss: 0.3581 - val_accuracy: 0.8517
Epoch 110/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3443 - accuracy: 0.8806 - val_loss: 0.3556 - val_accuracy: 0.8550
Epoch 111/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3316 - accuracy: 0.8834 - val_loss: 0.3535 - val_accuracy: 0.8500
Epoch 112/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3357 - accuracy: 0.8803 - val_loss: 0.3564 - val_accuracy: 0.8533
Epoch 113/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3319 - accuracy: 0.8832 - val_loss: 0.3513 - val_accuracy: 0.8533
Epoch 114/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3348 - accuracy: 0.8826 - val_loss: 0.3527 - val_accuracy: 0.8533
Epoch 115/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3364 - accuracy: 0.8815 - val_loss: 0.3529 - val_accuracy: 0.8550
Epoch 116/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3384 - accuracy: 0.8797 - val_loss: 0.3510 - val_accuracy: 0.8567
Epoch 117/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3324 - accuracy: 0.8810 - val_loss: 0.3586 - val_accuracy: 0.8550
Epoch 118/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3345 - accuracy: 0.8817 - val_loss: 0.3521 - val_accuracy: 0.8567
Epoch 119/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3349 - accuracy: 0.8809 - val_loss: 0.3482 - val_accuracy: 0.8533
Epoch 120/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3264 - accuracy: 0.8841 - val_loss: 0.3495 - val_accuracy: 0.8517
Epoch 121/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3292 - accuracy: 0.8836 - val_loss: 0.3488 - val_accuracy: 0.8483
Epoch 122/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3274 - accuracy: 0.8840 - val_loss: 0.3498 - val_accuracy: 0.8500
Epoch 123/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3255 - accuracy: 0.8842 - val_loss: 0.3472 - val_accuracy: 0.8600
Epoch 124/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3288 - accuracy: 0.8847 - val_loss: 0.3471 - val_accuracy: 0.8617
Epoch 125/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3263 - accuracy: 0.8837 - val_loss: 0.3471 - val_accuracy: 0.8533
Epoch 126/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3228 - accuracy: 0.8856 - val_loss: 0.3507 - val_accuracy: 0.8617
Epoch 127/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3267 - accuracy: 0.8843 - val_loss: 0.3466 - val_accuracy: 0.8583
Epoch 128/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3252 - accuracy: 0.8849 - val_loss: 0.3468 - val_accuracy: 0.8600
Epoch 129/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3174 - accuracy: 0.8887 - val_loss: 0.3481 - val_accuracy: 0.8567
Epoch 130/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3248 - accuracy: 0.8837 - val_loss: 0.3448 - val_accuracy: 0.8567
Epoch 131/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3271 - accuracy: 0.8837 - val_loss: 0.3482 - val_accuracy: 0.8600
Epoch 132/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3201 - accuracy: 0.8857 - val_loss: 0.3433 - val_accuracy: 0.8600
Epoch 133/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3266 - accuracy: 0.8850 - val_loss: 0.3492 - val_accuracy: 0.8583
Epoch 134/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3196 - accuracy: 0.8865 - val_loss: 0.3472 - val_accuracy: 0.8600
Epoch 135/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3263 - accuracy: 0.8838 - val_loss: 0.3435 - val_accuracy: 0.8583
Epoch 136/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3168 - accuracy: 0.8892 - val_loss: 0.3427 - val_accuracy: 0.8583
Epoch 137/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3184 - accuracy: 0.8871 - val_loss: 0.3419 - val_accuracy: 0.8633
Epoch 138/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3141 - accuracy: 0.8897 - val_loss: 0.3389 - val_accuracy: 0.8583
Epoch 139/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3183 - accuracy: 0.8879 - val_loss: 0.3430 - val_accuracy: 0.8650
Epoch 140/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3173 - accuracy: 0.8871 - val_loss: 0.3413 - val_accuracy: 0.8650
Epoch 141/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3110 - accuracy: 0.8903 - val_loss: 0.3386 - val_accuracy: 0.8600
Epoch 142/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3188 - accuracy: 0.8865 - val_loss: 0.3398 - val_accuracy: 0.8600
Epoch 143/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3160 - accuracy: 0.8887 - val_loss: 0.3408 - val_accuracy: 0.8650
Epoch 144/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3118 - accuracy: 0.8892 - val_loss: 0.3376 - val_accuracy: 0.8617
Epoch 145/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3106 - accuracy: 0.8903 - val_loss: 0.3383 - val_accuracy: 0.8667
Epoch 146/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3093 - accuracy: 0.8894 - val_loss: 0.3389 - val_accuracy: 0.8500
Epoch 147/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3084 - accuracy: 0.8900 - val_loss: 0.3356 - val_accuracy: 0.8667
Epoch 148/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3120 - accuracy: 0.8891 - val_loss: 0.3387 - val_accuracy: 0.8633
Epoch 149/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3122 - accuracy: 0.8885 - val_loss: 0.3402 - val_accuracy: 0.8633
Epoch 150/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3108 - accuracy: 0.8876 - val_loss: 0.3374 - val_accuracy: 0.8583
Epoch 151/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3088 - accuracy: 0.8906 - val_loss: 0.3396 - val_accuracy: 0.8600
Epoch 152/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3040 - accuracy: 0.8911 - val_loss: 0.3357 - val_accuracy: 0.8533
Epoch 153/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3097 - accuracy: 0.8908 - val_loss: 0.3386 - val_accuracy: 0.8617
Epoch 154/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3045 - accuracy: 0.8910 - val_loss: 0.3384 - val_accuracy: 0.8633
Epoch 155/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3058 - accuracy: 0.8915 - val_loss: 0.3390 - val_accuracy: 0.8600
Epoch 156/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3037 - accuracy: 0.8927 - val_loss: 0.3375 - val_accuracy: 0.8583
Epoch 157/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3103 - accuracy: 0.8893 - val_loss: 0.3382 - val_accuracy: 0.8650
Epoch 158/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3057 - accuracy: 0.8920 - val_loss: 0.3363 - val_accuracy: 0.8617
Epoch 159/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3015 - accuracy: 0.8920 - val_loss: 0.3369 - val_accuracy: 0.8633
Epoch 160/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3081 - accuracy: 0.8910 - val_loss: 0.3340 - val_accuracy: 0.8600
Epoch 161/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3083 - accuracy: 0.8903 - val_loss: 0.3395 - val_accuracy: 0.8633
Epoch 162/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3050 - accuracy: 0.8899 - val_loss: 0.3359 - val_accuracy: 0.8617
Epoch 163/200
117/117 [==============================] - 0s 4ms/step - loss: 0.3029 - accuracy: 0.8926 - val_loss: 0.3367 - val_accuracy: 0.8650
Epoch 164/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3010 - accuracy: 0.8941 - val_loss: 0.3319 - val_accuracy: 0.8650
Epoch 165/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3006 - accuracy: 0.8932 - val_loss: 0.3341 - val_accuracy: 0.8650
Epoch 166/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3006 - accuracy: 0.8940 - val_loss: 0.3296 - val_accuracy: 0.8633
Epoch 167/200
117/117 [==============================] - 0s 3ms/step - loss: 0.2987 - accuracy: 0.8937 - val_loss: 0.3326 - val_accuracy: 0.8583
Epoch 168/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3003 - accuracy: 0.8922 - val_loss: 0.3308 - val_accuracy: 0.8600
Epoch 169/200
117/117 [==============================] - 0s 3ms/step - loss: 0.2964 - accuracy: 0.8946 - val_loss: 0.3312 - val_accuracy: 0.8717
Epoch 170/200
117/117 [==============================] - 0s 3ms/step - loss: 0.2998 - accuracy: 0.8933 - val_loss: 0.3307 - val_accuracy: 0.8583
Epoch 171/200
117/117 [==============================] - 0s 3ms/step - loss: 0.2968 - accuracy: 0.8945 - val_loss: 0.3338 - val_accuracy: 0.8650
Epoch 172/200
117/117 [==============================] - 0s 3ms/step - loss: 0.2960 - accuracy: 0.8961 - val_loss: 0.3336 - val_accuracy: 0.8567
Epoch 173/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3003 - accuracy: 0.8926 - val_loss: 0.3309 - val_accuracy: 0.8650
Epoch 174/200
117/117 [==============================] - 0s 3ms/step - loss: 0.2905 - accuracy: 0.8979 - val_loss: 0.3371 - val_accuracy: 0.8667
Epoch 175/200
117/117 [==============================] - 0s 3ms/step - loss: 0.2972 - accuracy: 0.8931 - val_loss: 0.3310 - val_accuracy: 0.8600
Epoch 176/200
117/117 [==============================] - 0s 3ms/step - loss: 0.2931 - accuracy: 0.8962 - val_loss: 0.3322 - val_accuracy: 0.8700
Epoch 177/200
117/117 [==============================] - 0s 3ms/step - loss: 0.2954 - accuracy: 0.8947 - val_loss: 0.3303 - val_accuracy: 0.8667
Epoch 178/200
117/117 [==============================] - 0s 3ms/step - loss: 0.2959 - accuracy: 0.8960 - val_loss: 0.3263 - val_accuracy: 0.8650
Epoch 179/200
117/117 [==============================] - 0s 3ms/step - loss: 0.2912 - accuracy: 0.8952 - val_loss: 0.3291 - val_accuracy: 0.8567
Epoch 180/200
117/117 [==============================] - 0s 3ms/step - loss: 0.2921 - accuracy: 0.8962 - val_loss: 0.3339 - val_accuracy: 0.8617
Epoch 181/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3008 - accuracy: 0.8921 - val_loss: 0.3315 - val_accuracy: 0.8650
Epoch 182/200
117/117 [==============================] - 0s 3ms/step - loss: 0.2924 - accuracy: 0.8931 - val_loss: 0.3280 - val_accuracy: 0.8667
Epoch 183/200
117/117 [==============================] - 0s 3ms/step - loss: 0.2901 - accuracy: 0.8958 - val_loss: 0.3308 - val_accuracy: 0.8700
Epoch 184/200
117/117 [==============================] - 0s 3ms/step - loss: 0.2963 - accuracy: 0.8940 - val_loss: 0.3260 - val_accuracy: 0.8633
Epoch 185/200
117/117 [==============================] - 0s 3ms/step - loss: 0.2906 - accuracy: 0.8965 - val_loss: 0.3298 - val_accuracy: 0.8633
Epoch 186/200
117/117 [==============================] - 0s 2ms/step - loss: 0.2886 - accuracy: 0.8967 - val_loss: 0.3308 - val_accuracy: 0.8667
Epoch 187/200
117/117 [==============================] - 0s 2ms/step - loss: 0.2862 - accuracy: 0.8977 - val_loss: 0.3298 - val_accuracy: 0.8600
Epoch 188/200
117/117 [==============================] - 0s 2ms/step - loss: 0.2900 - accuracy: 0.8970 - val_loss: 0.3251 - val_accuracy: 0.8700
Epoch 189/200
117/117 [==============================] - 0s 2ms/step - loss: 0.2899 - accuracy: 0.8958 - val_loss: 0.3257 - val_accuracy: 0.8667
Epoch 190/200
117/117 [==============================] - 0s 2ms/step - loss: 0.2873 - accuracy: 0.8971 - val_loss: 0.3302 - val_accuracy: 0.8667
Epoch 191/200
117/117 [==============================] - 0s 2ms/step - loss: 0.2922 - accuracy: 0.8950 - val_loss: 0.3348 - val_accuracy: 0.8617
Epoch 192/200
117/117 [==============================] - 0s 3ms/step - loss: 0.2894 - accuracy: 0.8984 - val_loss: 0.3262 - val_accuracy: 0.8683
Epoch 193/200
117/117 [==============================] - 0s 2ms/step - loss: 0.2936 - accuracy: 0.8969 - val_loss: 0.3282 - val_accuracy: 0.8683
Epoch 194/200
117/117 [==============================] - 0s 2ms/step - loss: 0.2794 - accuracy: 0.9014 - val_loss: 0.3350 - val_accuracy: 0.8683
Epoch 195/200
117/117 [==============================] - 0s 3ms/step - loss: 0.2921 - accuracy: 0.8947 - val_loss: 0.3273 - val_accuracy: 0.8700
Epoch 196/200
117/117 [==============================] - 0s 3ms/step - loss: 0.2906 - accuracy: 0.8964 - val_loss: 0.3236 - val_accuracy: 0.8683
Epoch 197/200
117/117 [==============================] - 0s 3ms/step - loss: 0.2849 - accuracy: 0.8972 - val_loss: 0.3253 - val_accuracy: 0.8650
Epoch 198/200
117/117 [==============================] - 0s 3ms/step - loss: 0.2818 - accuracy: 0.8993 - val_loss: 0.3233 - val_accuracy: 0.8700
Epoch 199/200
117/117 [==============================] - 0s 3ms/step - loss: 0.2847 - accuracy: 0.8994 - val_loss: 0.3257 - val_accuracy: 0.8700
Epoch 200/200
117/117 [==============================] - 0s 3ms/step - loss: 0.2808 - accuracy: 0.9008 - val_loss: 0.3238 - val_accuracy: 0.8650
# Let's plot the loss values.
y_vloss = history.history['val_loss']
y_loss = history.history['loss']
x_len = np.arange(len(y_loss))

plt.plot(x_len, y_vloss, marker='.', c='red', label="Validation-set Loss")
plt.plot(x_len, y_loss, marker='.', c='blue', label="Train-set Loss")
plt.legend(loc='upper right')
plt.grid()
plt.title('Loss graph without dropout layer') 
plt.ylim(0,1)
plt.xlabel('epoch')
plt.ylabel('loss')
plt.show()

(figure: loss graph without dropout layer)

# Let's plot the accuracy values.
y_vacc = history.history['val_accuracy']
y_acc = history.history['accuracy']
x_len = np.arange(len(y_acc))

plt.plot(x_len, y_vacc, marker='.', c='red', label="Validation-set accuracy")
plt.plot(x_len, y_acc, marker='.', c='blue', label="Train-set accuracy")
plt.legend(loc='lower right')
plt.grid()
plt.ylim(0.5,1) 
plt.title('Accuracy graph without dropout layer') 
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.show()

(figure: accuracy graph without dropout layer)

In the run above, without a dropout layer, the training accuracy kept rising and the training loss kept falling, while the validation accuracy and loss leveled off: the model keeps fitting the training set after it has stopped improving on held-out data, i.e. it is overfitting.

Now we will add a dropout layer.

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(256, activation='relu'),
    # A dropout layer is added here; the other layers are the same as above.
    keras.layers.Dropout(0.5),
    keras.layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam',loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

history= model.fit(X_train, y_train, epochs=200, batch_size=512, validation_data=(X_valid, y_valid))
Epoch 1/200
117/117 [==============================] - 1s 5ms/step - loss: 2.2092 - accuracy: 0.3967 - val_loss: 1.6753 - val_accuracy: 0.5900
Epoch 2/200
117/117 [==============================] - 0s 3ms/step - loss: 1.5650 - accuracy: 0.5476 - val_loss: 1.2124 - val_accuracy: 0.6417
Epoch 3/200
117/117 [==============================] - 0s 3ms/step - loss: 1.2047 - accuracy: 0.6121 - val_loss: 0.9983 - val_accuracy: 0.7133
Epoch 4/200
117/117 [==============================] - 0s 3ms/step - loss: 1.0160 - accuracy: 0.6633 - val_loss: 0.8701 - val_accuracy: 0.7367
Epoch 5/200
117/117 [==============================] - 0s 3ms/step - loss: 0.9046 - accuracy: 0.6899 - val_loss: 0.7861 - val_accuracy: 0.7650
Epoch 6/200
117/117 [==============================] - 0s 3ms/step - loss: 0.8260 - accuracy: 0.7109 - val_loss: 0.7284 - val_accuracy: 0.7717
Epoch 7/200
117/117 [==============================] - 0s 3ms/step - loss: 0.7801 - accuracy: 0.7249 - val_loss: 0.6874 - val_accuracy: 0.7733
Epoch 8/200
117/117 [==============================] - 0s 3ms/step - loss: 0.7375 - accuracy: 0.7376 - val_loss: 0.6596 - val_accuracy: 0.7883
Epoch 9/200
117/117 [==============================] - 0s 3ms/step - loss: 0.7096 - accuracy: 0.7462 - val_loss: 0.6411 - val_accuracy: 0.7717
Epoch 10/200
117/117 [==============================] - 0s 3ms/step - loss: 0.6828 - accuracy: 0.7540 - val_loss: 0.6209 - val_accuracy: 0.7867
Epoch 11/200
117/117 [==============================] - 0s 3ms/step - loss: 0.6651 - accuracy: 0.7598 - val_loss: 0.6049 - val_accuracy: 0.7900
Epoch 12/200
117/117 [==============================] - 0s 3ms/step - loss: 0.6497 - accuracy: 0.7652 - val_loss: 0.5874 - val_accuracy: 0.7917
Epoch 13/200
117/117 [==============================] - 0s 3ms/step - loss: 0.6276 - accuracy: 0.7726 - val_loss: 0.5782 - val_accuracy: 0.7917
Epoch 14/200
117/117 [==============================] - 0s 4ms/step - loss: 0.6233 - accuracy: 0.7759 - val_loss: 0.5655 - val_accuracy: 0.7983
Epoch 15/200
117/117 [==============================] - 0s 3ms/step - loss: 0.6080 - accuracy: 0.7790 - val_loss: 0.5539 - val_accuracy: 0.7950
Epoch 16/200
117/117 [==============================] - 0s 3ms/step - loss: 0.5936 - accuracy: 0.7873 - val_loss: 0.5471 - val_accuracy: 0.7983
Epoch 17/200
117/117 [==============================] - 0s 3ms/step - loss: 0.5802 - accuracy: 0.7904 - val_loss: 0.5361 - val_accuracy: 0.8000
Epoch 18/200
117/117 [==============================] - 0s 3ms/step - loss: 0.5815 - accuracy: 0.7910 - val_loss: 0.5295 - val_accuracy: 0.7983
Epoch 19/200
117/117 [==============================] - 0s 3ms/step - loss: 0.5582 - accuracy: 0.8019 - val_loss: 0.5204 - val_accuracy: 0.7967
Epoch 20/200
117/117 [==============================] - 0s 3ms/step - loss: 0.5522 - accuracy: 0.8026 - val_loss: 0.5133 - val_accuracy: 0.8083
Epoch 21/200
117/117 [==============================] - 0s 3ms/step - loss: 0.5461 - accuracy: 0.8053 - val_loss: 0.5071 - val_accuracy: 0.8117
Epoch 22/200
117/117 [==============================] - 0s 4ms/step - loss: 0.5378 - accuracy: 0.8078 - val_loss: 0.5029 - val_accuracy: 0.8133
Epoch 23/200
117/117 [==============================] - 0s 3ms/step - loss: 0.5341 - accuracy: 0.8102 - val_loss: 0.4943 - val_accuracy: 0.8167
Epoch 24/200
117/117 [==============================] - 0s 3ms/step - loss: 0.5239 - accuracy: 0.8160 - val_loss: 0.4905 - val_accuracy: 0.8150
Epoch 25/200
117/117 [==============================] - 0s 3ms/step - loss: 0.5188 - accuracy: 0.8158 - val_loss: 0.4855 - val_accuracy: 0.8200
Epoch 26/200
117/117 [==============================] - 0s 3ms/step - loss: 0.5151 - accuracy: 0.8178 - val_loss: 0.4793 - val_accuracy: 0.8233
Epoch 27/200
117/117 [==============================] - 0s 3ms/step - loss: 0.5121 - accuracy: 0.8208 - val_loss: 0.4781 - val_accuracy: 0.8200
Epoch 28/200
117/117 [==============================] - 0s 3ms/step - loss: 0.5024 - accuracy: 0.8227 - val_loss: 0.4707 - val_accuracy: 0.8183
Epoch 29/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4987 - accuracy: 0.8236 - val_loss: 0.4692 - val_accuracy: 0.8200
Epoch 30/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4935 - accuracy: 0.8243 - val_loss: 0.4631 - val_accuracy: 0.8233
Epoch 31/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4902 - accuracy: 0.8273 - val_loss: 0.4587 - val_accuracy: 0.8200
Epoch 32/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4878 - accuracy: 0.8270 - val_loss: 0.4547 - val_accuracy: 0.8200
Epoch 33/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4818 - accuracy: 0.8288 - val_loss: 0.4519 - val_accuracy: 0.8200
Epoch 34/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4889 - accuracy: 0.8285 - val_loss: 0.4471 - val_accuracy: 0.8233
Epoch 35/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4737 - accuracy: 0.8318 - val_loss: 0.4436 - val_accuracy: 0.8217
Epoch 36/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4736 - accuracy: 0.8314 - val_loss: 0.4424 - val_accuracy: 0.8300
Epoch 37/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4705 - accuracy: 0.8334 - val_loss: 0.4394 - val_accuracy: 0.8217
Epoch 38/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4672 - accuracy: 0.8372 - val_loss: 0.4371 - val_accuracy: 0.8333
Epoch 39/200
117/117 [==============================] - 0s 4ms/step - loss: 0.4575 - accuracy: 0.8404 - val_loss: 0.4329 - val_accuracy: 0.8333
Epoch 40/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4571 - accuracy: 0.8396 - val_loss: 0.4316 - val_accuracy: 0.8300
Epoch 41/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4579 - accuracy: 0.8411 - val_loss: 0.4287 - val_accuracy: 0.8367
Epoch 42/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4559 - accuracy: 0.8399 - val_loss: 0.4289 - val_accuracy: 0.8317
Epoch 43/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4547 - accuracy: 0.8384 - val_loss: 0.4238 - val_accuracy: 0.8383
Epoch 44/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4499 - accuracy: 0.8413 - val_loss: 0.4224 - val_accuracy: 0.8367
Epoch 45/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4529 - accuracy: 0.8413 - val_loss: 0.4193 - val_accuracy: 0.8367
Epoch 46/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4482 - accuracy: 0.8401 - val_loss: 0.4181 - val_accuracy: 0.8333
Epoch 47/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4427 - accuracy: 0.8427 - val_loss: 0.4135 - val_accuracy: 0.8400
Epoch 48/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4443 - accuracy: 0.8421 - val_loss: 0.4150 - val_accuracy: 0.8383
Epoch 49/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4422 - accuracy: 0.8426 - val_loss: 0.4124 - val_accuracy: 0.8350
Epoch 50/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4339 - accuracy: 0.8458 - val_loss: 0.4130 - val_accuracy: 0.8317
Epoch 51/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4334 - accuracy: 0.8486 - val_loss: 0.4086 - val_accuracy: 0.8367
Epoch 52/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4330 - accuracy: 0.8480 - val_loss: 0.4096 - val_accuracy: 0.8400
Epoch 53/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4332 - accuracy: 0.8452 - val_loss: 0.4062 - val_accuracy: 0.8383
Epoch 54/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4247 - accuracy: 0.8528 - val_loss: 0.4056 - val_accuracy: 0.8383
Epoch 55/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4316 - accuracy: 0.8464 - val_loss: 0.4042 - val_accuracy: 0.8350
Epoch 56/200
117/117 [==============================] - 0s 4ms/step - loss: 0.4264 - accuracy: 0.8481 - val_loss: 0.4013 - val_accuracy: 0.8400
Epoch 57/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4223 - accuracy: 0.8500 - val_loss: 0.3996 - val_accuracy: 0.8367
Epoch 58/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4224 - accuracy: 0.8513 - val_loss: 0.3984 - val_accuracy: 0.8433
Epoch 59/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4262 - accuracy: 0.8512 - val_loss: 0.3968 - val_accuracy: 0.8483
Epoch 60/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4201 - accuracy: 0.8492 - val_loss: 0.3946 - val_accuracy: 0.8433
Epoch 61/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4214 - accuracy: 0.8506 - val_loss: 0.3968 - val_accuracy: 0.8400
Epoch 62/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4199 - accuracy: 0.8515 - val_loss: 0.3981 - val_accuracy: 0.8383
Epoch 63/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4186 - accuracy: 0.8521 - val_loss: 0.3928 - val_accuracy: 0.8400
Epoch 64/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4110 - accuracy: 0.8553 - val_loss: 0.3883 - val_accuracy: 0.8433
Epoch 65/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4097 - accuracy: 0.8554 - val_loss: 0.3920 - val_accuracy: 0.8417
Epoch 66/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4082 - accuracy: 0.8565 - val_loss: 0.3893 - val_accuracy: 0.8450
Epoch 67/200
117/117 [==============================] - 0s 4ms/step - loss: 0.4175 - accuracy: 0.8544 - val_loss: 0.3902 - val_accuracy: 0.8383
Epoch 68/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4110 - accuracy: 0.8570 - val_loss: 0.3841 - val_accuracy: 0.8450
Epoch 69/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4109 - accuracy: 0.8535 - val_loss: 0.3838 - val_accuracy: 0.8517
Epoch 70/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4027 - accuracy: 0.8581 - val_loss: 0.3824 - val_accuracy: 0.8500
Epoch 71/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4090 - accuracy: 0.8538 - val_loss: 0.3863 - val_accuracy: 0.8467
Epoch 72/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4022 - accuracy: 0.8582 - val_loss: 0.3826 - val_accuracy: 0.8450
Epoch 73/200
117/117 [==============================] - 0s 4ms/step - loss: 0.4036 - accuracy: 0.8574 - val_loss: 0.3821 - val_accuracy: 0.8450
Epoch 74/200
117/117 [==============================] - 0s 3ms/step - loss: 0.4027 - accuracy: 0.8585 - val_loss: 0.3791 - val_accuracy: 0.8483
Epoch 75/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3995 - accuracy: 0.8588 - val_loss: 0.3829 - val_accuracy: 0.8500
Epoch 76/200
117/117 [==============================] - 0s 4ms/step - loss: 0.3976 - accuracy: 0.8592 - val_loss: 0.3780 - val_accuracy: 0.8550
Epoch 77/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3911 - accuracy: 0.8601 - val_loss: 0.3757 - val_accuracy: 0.8533
Epoch 78/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3956 - accuracy: 0.8591 - val_loss: 0.3778 - val_accuracy: 0.8517
Epoch 79/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3913 - accuracy: 0.8603 - val_loss: 0.3770 - val_accuracy: 0.8500
Epoch 80/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3914 - accuracy: 0.8624 - val_loss: 0.3736 - val_accuracy: 0.8567
Epoch 81/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3890 - accuracy: 0.8622 - val_loss: 0.3750 - val_accuracy: 0.8500
Epoch 82/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3896 - accuracy: 0.8640 - val_loss: 0.3728 - val_accuracy: 0.8500
Epoch 83/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3898 - accuracy: 0.8626 - val_loss: 0.3729 - val_accuracy: 0.8533
Epoch 84/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3894 - accuracy: 0.8620 - val_loss: 0.3713 - val_accuracy: 0.8467
Epoch 85/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3926 - accuracy: 0.8610 - val_loss: 0.3725 - val_accuracy: 0.8517
Epoch 86/200
117/117 [==============================] - 0s 4ms/step - loss: 0.3904 - accuracy: 0.8641 - val_loss: 0.3695 - val_accuracy: 0.8533
Epoch 87/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3860 - accuracy: 0.8633 - val_loss: 0.3662 - val_accuracy: 0.8483
Epoch 88/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3895 - accuracy: 0.8622 - val_loss: 0.3671 - val_accuracy: 0.8517
Epoch 89/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3805 - accuracy: 0.8646 - val_loss: 0.3662 - val_accuracy: 0.8467
Epoch 90/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3864 - accuracy: 0.8626 - val_loss: 0.3641 - val_accuracy: 0.8500
Epoch 91/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3853 - accuracy: 0.8637 - val_loss: 0.3658 - val_accuracy: 0.8567
Epoch 92/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3799 - accuracy: 0.8630 - val_loss: 0.3638 - val_accuracy: 0.8533
Epoch 93/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3769 - accuracy: 0.8673 - val_loss: 0.3658 - val_accuracy: 0.8567
Epoch 94/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3833 - accuracy: 0.8660 - val_loss: 0.3648 - val_accuracy: 0.8567
Epoch 95/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3826 - accuracy: 0.8646 - val_loss: 0.3634 - val_accuracy: 0.8483
Epoch 96/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3809 - accuracy: 0.8655 - val_loss: 0.3619 - val_accuracy: 0.8550
Epoch 97/200
117/117 [==============================] - 0s 4ms/step - loss: 0.3787 - accuracy: 0.8660 - val_loss: 0.3647 - val_accuracy: 0.8533
Epoch 98/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3790 - accuracy: 0.8673 - val_loss: 0.3603 - val_accuracy: 0.8567
Epoch 99/200
117/117 [==============================] - 0s 4ms/step - loss: 0.3771 - accuracy: 0.8663 - val_loss: 0.3589 - val_accuracy: 0.8567
Epoch 100/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3764 - accuracy: 0.8659 - val_loss: 0.3600 - val_accuracy: 0.8600
Epoch 101/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3775 - accuracy: 0.8649 - val_loss: 0.3564 - val_accuracy: 0.8533
Epoch 102/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3749 - accuracy: 0.8684 - val_loss: 0.3560 - val_accuracy: 0.8517
Epoch 103/200
117/117 [==============================] - 0s 4ms/step - loss: 0.3742 - accuracy: 0.8680 - val_loss: 0.3554 - val_accuracy: 0.8600
Epoch 104/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3698 - accuracy: 0.8697 - val_loss: 0.3550 - val_accuracy: 0.8550
Epoch 105/200
117/117 [==============================] - 0s 4ms/step - loss: 0.3705 - accuracy: 0.8690 - val_loss: 0.3589 - val_accuracy: 0.8550
Epoch 106/200
117/117 [==============================] - 0s 4ms/step - loss: 0.3769 - accuracy: 0.8659 - val_loss: 0.3527 - val_accuracy: 0.8533
Epoch 107/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3649 - accuracy: 0.8706 - val_loss: 0.3563 - val_accuracy: 0.8550
Epoch 108/200
117/117 [==============================] - 0s 4ms/step - loss: 0.3705 - accuracy: 0.8662 - val_loss: 0.3544 - val_accuracy: 0.8517
Epoch 109/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3716 - accuracy: 0.8691 - val_loss: 0.3543 - val_accuracy: 0.8533
Epoch 110/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3640 - accuracy: 0.8687 - val_loss: 0.3522 - val_accuracy: 0.8583
Epoch 111/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3654 - accuracy: 0.8709 - val_loss: 0.3524 - val_accuracy: 0.8567
Epoch 112/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3644 - accuracy: 0.8713 - val_loss: 0.3527 - val_accuracy: 0.8550
Epoch 113/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3690 - accuracy: 0.8700 - val_loss: 0.3501 - val_accuracy: 0.8567
Epoch 114/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3657 - accuracy: 0.8709 - val_loss: 0.3507 - val_accuracy: 0.8567
Epoch 115/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3667 - accuracy: 0.8702 - val_loss: 0.3502 - val_accuracy: 0.8550
Epoch 116/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3579 - accuracy: 0.8717 - val_loss: 0.3486 - val_accuracy: 0.8533
Epoch 117/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3610 - accuracy: 0.8702 - val_loss: 0.3462 - val_accuracy: 0.8533
Epoch 118/200
117/117 [==============================] - 0s 4ms/step - loss: 0.3601 - accuracy: 0.8726 - val_loss: 0.3462 - val_accuracy: 0.8567
Epoch 119/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3628 - accuracy: 0.8725 - val_loss: 0.3469 - val_accuracy: 0.8533
Epoch 120/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3569 - accuracy: 0.8735 - val_loss: 0.3489 - val_accuracy: 0.8550
Epoch 121/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3652 - accuracy: 0.8712 - val_loss: 0.3468 - val_accuracy: 0.8567
Epoch 122/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3553 - accuracy: 0.8741 - val_loss: 0.3461 - val_accuracy: 0.8567
Epoch 123/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3558 - accuracy: 0.8742 - val_loss: 0.3458 - val_accuracy: 0.8567
Epoch 124/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3548 - accuracy: 0.8743 - val_loss: 0.3421 - val_accuracy: 0.8567
Epoch 125/200
117/117 [==============================] - 0s 4ms/step - loss: 0.3535 - accuracy: 0.8750 - val_loss: 0.3439 - val_accuracy: 0.8583
Epoch 126/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3563 - accuracy: 0.8728 - val_loss: 0.3412 - val_accuracy: 0.8517
Epoch 127/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3606 - accuracy: 0.8742 - val_loss: 0.3438 - val_accuracy: 0.8550
Epoch 128/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3520 - accuracy: 0.8766 - val_loss: 0.3433 - val_accuracy: 0.8567
Epoch 129/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3529 - accuracy: 0.8741 - val_loss: 0.3440 - val_accuracy: 0.8567
Epoch 130/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3570 - accuracy: 0.8718 - val_loss: 0.3394 - val_accuracy: 0.8517
Epoch 131/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3540 - accuracy: 0.8738 - val_loss: 0.3381 - val_accuracy: 0.8550
Epoch 132/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3497 - accuracy: 0.8770 - val_loss: 0.3417 - val_accuracy: 0.8583
Epoch 133/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3528 - accuracy: 0.8753 - val_loss: 0.3412 - val_accuracy: 0.8600
Epoch 134/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3493 - accuracy: 0.8758 - val_loss: 0.3404 - val_accuracy: 0.8567
Epoch 135/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3435 - accuracy: 0.8780 - val_loss: 0.3406 - val_accuracy: 0.8600
Epoch 136/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3529 - accuracy: 0.8750 - val_loss: 0.3407 - val_accuracy: 0.8583
Epoch 137/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3493 - accuracy: 0.8745 - val_loss: 0.3394 - val_accuracy: 0.8583
Epoch 138/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3506 - accuracy: 0.8765 - val_loss: 0.3375 - val_accuracy: 0.8617
Epoch 139/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3473 - accuracy: 0.8755 - val_loss: 0.3398 - val_accuracy: 0.8583
Epoch 140/200
117/117 [==============================] - 0s 4ms/step - loss: 0.3478 - accuracy: 0.8763 - val_loss: 0.3398 - val_accuracy: 0.8583
Epoch 141/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3479 - accuracy: 0.8756 - val_loss: 0.3363 - val_accuracy: 0.8583
Epoch 142/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3504 - accuracy: 0.8747 - val_loss: 0.3374 - val_accuracy: 0.8567
Epoch 143/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3416 - accuracy: 0.8777 - val_loss: 0.3351 - val_accuracy: 0.8567
Epoch 144/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3446 - accuracy: 0.8783 - val_loss: 0.3386 - val_accuracy: 0.8583
Epoch 145/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3487 - accuracy: 0.8747 - val_loss: 0.3347 - val_accuracy: 0.8617
Epoch 146/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3461 - accuracy: 0.8781 - val_loss: 0.3350 - val_accuracy: 0.8567
Epoch 147/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3449 - accuracy: 0.8778 - val_loss: 0.3339 - val_accuracy: 0.8583
Epoch 148/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3467 - accuracy: 0.8775 - val_loss: 0.3327 - val_accuracy: 0.8550
Epoch 149/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3408 - accuracy: 0.8789 - val_loss: 0.3366 - val_accuracy: 0.8567
Epoch 150/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3475 - accuracy: 0.8768 - val_loss: 0.3353 - val_accuracy: 0.8583
Epoch 151/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3415 - accuracy: 0.8784 - val_loss: 0.3316 - val_accuracy: 0.8667
Epoch 152/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3454 - accuracy: 0.8775 - val_loss: 0.3315 - val_accuracy: 0.8617
Epoch 153/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3397 - accuracy: 0.8805 - val_loss: 0.3301 - val_accuracy: 0.8600
Epoch 154/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3429 - accuracy: 0.8776 - val_loss: 0.3314 - val_accuracy: 0.8617
Epoch 155/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3381 - accuracy: 0.8795 - val_loss: 0.3320 - val_accuracy: 0.8567
Epoch 156/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3395 - accuracy: 0.8779 - val_loss: 0.3324 - val_accuracy: 0.8583
Epoch 157/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3373 - accuracy: 0.8784 - val_loss: 0.3339 - val_accuracy: 0.8583
Epoch 158/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3382 - accuracy: 0.8799 - val_loss: 0.3275 - val_accuracy: 0.8600
Epoch 159/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3356 - accuracy: 0.8806 - val_loss: 0.3328 - val_accuracy: 0.8567
Epoch 160/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3366 - accuracy: 0.8807 - val_loss: 0.3305 - val_accuracy: 0.8633
Epoch 161/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3309 - accuracy: 0.8833 - val_loss: 0.3274 - val_accuracy: 0.8667
Epoch 162/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3328 - accuracy: 0.8799 - val_loss: 0.3283 - val_accuracy: 0.8550
Epoch 163/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3377 - accuracy: 0.8807 - val_loss: 0.3276 - val_accuracy: 0.8617
Epoch 164/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3339 - accuracy: 0.8811 - val_loss: 0.3262 - val_accuracy: 0.8567
Epoch 165/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3375 - accuracy: 0.8795 - val_loss: 0.3250 - val_accuracy: 0.8633
Epoch 166/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3295 - accuracy: 0.8836 - val_loss: 0.3288 - val_accuracy: 0.8683
Epoch 167/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3368 - accuracy: 0.8795 - val_loss: 0.3285 - val_accuracy: 0.8683
Epoch 168/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3330 - accuracy: 0.8815 - val_loss: 0.3261 - val_accuracy: 0.8617
Epoch 169/200
117/117 [==============================] - 0s 4ms/step - loss: 0.3274 - accuracy: 0.8830 - val_loss: 0.3254 - val_accuracy: 0.8650
Epoch 170/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3292 - accuracy: 0.8836 - val_loss: 0.3236 - val_accuracy: 0.8667
Epoch 171/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3350 - accuracy: 0.8806 - val_loss: 0.3221 - val_accuracy: 0.8633
Epoch 172/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3286 - accuracy: 0.8834 - val_loss: 0.3226 - val_accuracy: 0.8700
Epoch 173/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3357 - accuracy: 0.8813 - val_loss: 0.3225 - val_accuracy: 0.8633
Epoch 174/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3239 - accuracy: 0.8826 - val_loss: 0.3243 - val_accuracy: 0.8683
Epoch 175/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3352 - accuracy: 0.8806 - val_loss: 0.3220 - val_accuracy: 0.8617
Epoch 176/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3295 - accuracy: 0.8834 - val_loss: 0.3258 - val_accuracy: 0.8667
Epoch 177/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3245 - accuracy: 0.8844 - val_loss: 0.3229 - val_accuracy: 0.8667
Epoch 178/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3255 - accuracy: 0.8855 - val_loss: 0.3209 - val_accuracy: 0.8683
Epoch 179/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3314 - accuracy: 0.8815 - val_loss: 0.3256 - val_accuracy: 0.8650
Epoch 180/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3267 - accuracy: 0.8844 - val_loss: 0.3211 - val_accuracy: 0.8717
Epoch 181/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3251 - accuracy: 0.8846 - val_loss: 0.3226 - val_accuracy: 0.8683
Epoch 182/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3261 - accuracy: 0.8841 - val_loss: 0.3210 - val_accuracy: 0.8617
Epoch 183/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3273 - accuracy: 0.8824 - val_loss: 0.3205 - val_accuracy: 0.8683
Epoch 184/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3221 - accuracy: 0.8847 - val_loss: 0.3205 - val_accuracy: 0.8617
Epoch 185/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3241 - accuracy: 0.8853 - val_loss: 0.3172 - val_accuracy: 0.8633
Epoch 186/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3259 - accuracy: 0.8843 - val_loss: 0.3220 - val_accuracy: 0.8717
Epoch 187/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3229 - accuracy: 0.8837 - val_loss: 0.3189 - val_accuracy: 0.8683
Epoch 188/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3215 - accuracy: 0.8854 - val_loss: 0.3193 - val_accuracy: 0.8733
Epoch 189/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3223 - accuracy: 0.8862 - val_loss: 0.3183 - val_accuracy: 0.8700
Epoch 190/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3233 - accuracy: 0.8853 - val_loss: 0.3234 - val_accuracy: 0.8667
Epoch 191/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3236 - accuracy: 0.8848 - val_loss: 0.3167 - val_accuracy: 0.8683
Epoch 192/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3193 - accuracy: 0.8848 - val_loss: 0.3191 - val_accuracy: 0.8700
Epoch 193/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3168 - accuracy: 0.8880 - val_loss: 0.3161 - val_accuracy: 0.8717
Epoch 194/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3228 - accuracy: 0.8850 - val_loss: 0.3174 - val_accuracy: 0.8683
Epoch 195/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3229 - accuracy: 0.8860 - val_loss: 0.3159 - val_accuracy: 0.8733
Epoch 196/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3202 - accuracy: 0.8863 - val_loss: 0.3168 - val_accuracy: 0.8717
Epoch 197/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3196 - accuracy: 0.8861 - val_loss: 0.3173 - val_accuracy: 0.8717
Epoch 198/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3221 - accuracy: 0.8854 - val_loss: 0.3174 - val_accuracy: 0.8750
Epoch 199/200
117/117 [==============================] - 0s 2ms/step - loss: 0.3198 - accuracy: 0.8855 - val_loss: 0.3192 - val_accuracy: 0.8683
Epoch 200/200
117/117 [==============================] - 0s 3ms/step - loss: 0.3170 - accuracy: 0.8871 - val_loss: 0.3123 - val_accuracy: 0.8750
# Plot the loss values.
y_vloss = history.history['val_loss']
y_loss = history.history['loss']
x_len = np.arange(len(y_loss))

plt.plot(x_len, y_vloss, marker='.', c='red', label="Validation-set Loss")
plt.plot(x_len, y_loss, marker='.', c='blue', label="Train-set Loss")
plt.legend(loc='upper right')
plt.grid()
plt.ylim(0,1)
plt.title('Loss graph with dropout layer') 
plt.xlabel('epoch')
plt.ylabel('loss')
plt.show()

png

# Plot the accuracy values.
y_vacc = history.history['val_accuracy']
y_acc = history.history['accuracy']
x_len = np.arange(len(y_acc))

plt.plot(x_len, y_vacc, marker='.', c='red', label="Validation-set accuracy")
plt.plot(x_len, y_acc, marker='.', c='blue', label="Train-set accuracy")
plt.legend(loc='lower right')
plt.grid()
plt.ylim(0.5,1) 
plt.title('Accuracy graph with dropout layer') 
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.show()

png

Adding a dropout layer allowed us to prevent overfitting to some extent: the validation loss now tracks the training loss closely instead of diverging as training continues.
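For reference, here is a minimal sketch of a dense classifier with a Dropout layer, in the style of the experiments in this post. The exact architecture trained above was defined earlier in the notebook; the layer widths and the 0.3 drop rate below are illustrative assumptions, not the trained model.

from tensorflow import keras

# Sketch only: layer sizes and the 0.3 rate are assumptions.
model_dropout = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation='relu'),
    # During training, randomly zeroes 30% of the activations;
    # at inference time the layer passes values through unchanged.
    keras.layers.Dropout(0.3),
    keras.layers.Dense(10, activation='softmax')
])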

Batch normalization

Batch normalization normalizes the activations of each mini-batch, which mitigates the vanishing and exploding gradient problems and stabilizes training.

image
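Concretely, for every feature, batch normalization subtracts the mini-batch mean and divides by the mini-batch standard deviation, then applies a learnable scale (gamma) and shift (beta). Below is a minimal NumPy sketch of that computation; it is a hand-rolled illustration, not the Keras implementation (which also keeps running statistics for inference).

import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # x has shape (batch_size, features): normalize each feature over the batch,
    # then apply the learnable scale (gamma) and shift (beta).
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

batch = np.random.randn(32, 4) * 10 + 3   # a wide, shifted mini-batch distribution
out = batch_norm(batch, gamma=np.ones(4), beta=np.zeros(4))
print(out.mean(axis=0).round(4))  # approximately 0 for every feature
print(out.std(axis=0).round(4))   # approximately 1 for every feature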

We will compare two experiments: a plain fully connected network with nothing added, and the same network with a Batch Normalization layer added.

import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt

fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

train_images = train_images / 255.0
test_images = test_images / 255.0
from sklearn.model_selection import train_test_split

X_train, X_valid, y_train, y_valid = train_test_split(train_images, train_labels, test_size=0.3, random_state=101)

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam',loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

history= model.fit(X_train, y_train, epochs=20, batch_size=2048, validation_data=(X_valid, y_valid))
Epoch 1/20
21/21 [==============================] - 1s 16ms/step - loss: 1.6579 - accuracy: 0.4697 - val_loss: 0.7620 - val_accuracy: 0.7326
Epoch 2/20
21/21 [==============================] - 0s 7ms/step - loss: 0.7025 - accuracy: 0.7607 - val_loss: 0.5999 - val_accuracy: 0.8003
Epoch 3/20
21/21 [==============================] - 0s 7ms/step - loss: 0.5750 - accuracy: 0.8115 - val_loss: 0.5328 - val_accuracy: 0.8205
Epoch 4/20
21/21 [==============================] - 0s 7ms/step - loss: 0.5222 - accuracy: 0.8278 - val_loss: 0.5006 - val_accuracy: 0.8273
Epoch 5/20
21/21 [==============================] - 0s 7ms/step - loss: 0.4861 - accuracy: 0.8349 - val_loss: 0.4720 - val_accuracy: 0.8393
Epoch 6/20
21/21 [==============================] - 0s 7ms/step - loss: 0.4547 - accuracy: 0.8477 - val_loss: 0.4527 - val_accuracy: 0.8462
Epoch 7/20
21/21 [==============================] - 0s 7ms/step - loss: 0.4365 - accuracy: 0.8536 - val_loss: 0.4413 - val_accuracy: 0.8489
Epoch 8/20
21/21 [==============================] - 0s 7ms/step - loss: 0.4220 - accuracy: 0.8573 - val_loss: 0.4284 - val_accuracy: 0.8527
Epoch 9/20
21/21 [==============================] - 0s 7ms/step - loss: 0.4042 - accuracy: 0.8642 - val_loss: 0.4232 - val_accuracy: 0.8552
Epoch 10/20
21/21 [==============================] - 0s 7ms/step - loss: 0.4056 - accuracy: 0.8619 - val_loss: 0.4081 - val_accuracy: 0.8604
Epoch 11/20
21/21 [==============================] - 0s 7ms/step - loss: 0.3882 - accuracy: 0.8683 - val_loss: 0.4040 - val_accuracy: 0.8601
Epoch 12/20
21/21 [==============================] - 0s 7ms/step - loss: 0.3795 - accuracy: 0.8685 - val_loss: 0.3985 - val_accuracy: 0.8629
Epoch 13/20
21/21 [==============================] - 0s 7ms/step - loss: 0.3688 - accuracy: 0.8738 - val_loss: 0.3907 - val_accuracy: 0.8651
Epoch 14/20
21/21 [==============================] - 0s 7ms/step - loss: 0.3658 - accuracy: 0.8734 - val_loss: 0.3873 - val_accuracy: 0.8646
Epoch 15/20
21/21 [==============================] - 0s 7ms/step - loss: 0.3504 - accuracy: 0.8793 - val_loss: 0.3841 - val_accuracy: 0.8677
Epoch 16/20
21/21 [==============================] - 0s 7ms/step - loss: 0.3414 - accuracy: 0.8816 - val_loss: 0.3795 - val_accuracy: 0.8674
Epoch 17/20
21/21 [==============================] - 0s 7ms/step - loss: 0.3540 - accuracy: 0.8777 - val_loss: 0.3749 - val_accuracy: 0.8704
Epoch 18/20
21/21 [==============================] - 0s 7ms/step - loss: 0.3414 - accuracy: 0.8821 - val_loss: 0.3741 - val_accuracy: 0.8691
Epoch 19/20
21/21 [==============================] - 0s 7ms/step - loss: 0.3388 - accuracy: 0.8839 - val_loss: 0.3708 - val_accuracy: 0.8693
Epoch 20/20
21/21 [==============================] - 0s 7ms/step - loss: 0.3280 - accuracy: 0.8871 - val_loss: 0.3688 - val_accuracy: 0.8703
# Plot the loss values.
y_vloss = history.history['val_loss']
y_loss = history.history['loss']
x_len = np.arange(len(y_loss))

plt.plot(x_len, y_vloss, marker='.', c='red', label="Validation-set Loss")
plt.plot(x_len, y_loss, marker='.', c='blue', label="Train-set Loss")
plt.legend(loc='upper right')
plt.grid()
plt.ylim(0,1)
plt.title('Loss graph without batch normalization') 
plt.xlabel('epoch')
plt.ylabel('loss')
plt.show()

png

# Plot the accuracy values.
y_vacc = history.history['val_accuracy']
y_acc = history.history['accuracy']
x_len = np.arange(len(y_acc))

plt.plot(x_len, y_vacc, marker='.', c='red', label="Validation-set accuracy")
plt.plot(x_len, y_acc, marker='.', c='blue', label="Train-set accuracy")
plt.legend(loc='lower right')
plt.grid()
plt.ylim(0.5,1)
plt.title('Accuracy graph without batch normalization') 
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.show()

png

Below is the same experiment with a BatchNormalization layer added.

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation='relu'),
    # A BatchNormalization layer is added here; the rest of the layers are the same as in the experiment above.
    keras.layers.BatchNormalization(),
    keras.layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam',loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

history= model.fit(X_train, y_train, epochs=20, batch_size=2048, validation_data=(X_valid, y_valid))
Epoch 1/20
21/21 [==============================] - 1s 20ms/step - loss: 1.4464 - accuracy: 0.5297 - val_loss: 1.0967 - val_accuracy: 0.6723
Epoch 2/20
21/21 [==============================] - 0s 11ms/step - loss: 0.5474 - accuracy: 0.8135 - val_loss: 0.8827 - val_accuracy: 0.7428
Epoch 3/20
21/21 [==============================] - 0s 11ms/step - loss: 0.4668 - accuracy: 0.8399 - val_loss: 0.7481 - val_accuracy: 0.8027
Epoch 4/20
21/21 [==============================] - 0s 9ms/step - loss: 0.4191 - accuracy: 0.8564 - val_loss: 0.6874 - val_accuracy: 0.8221
Epoch 5/20
21/21 [==============================] - 0s 9ms/step - loss: 0.3881 - accuracy: 0.8679 - val_loss: 0.6410 - val_accuracy: 0.8418
Epoch 6/20
21/21 [==============================] - 0s 9ms/step - loss: 0.3723 - accuracy: 0.8711 - val_loss: 0.5916 - val_accuracy: 0.8451
Epoch 7/20
21/21 [==============================] - 0s 10ms/step - loss: 0.3495 - accuracy: 0.8796 - val_loss: 0.5484 - val_accuracy: 0.8530
Epoch 8/20
21/21 [==============================] - 0s 9ms/step - loss: 0.3363 - accuracy: 0.8830 - val_loss: 0.5123 - val_accuracy: 0.8587
Epoch 9/20
21/21 [==============================] - 0s 9ms/step - loss: 0.3185 - accuracy: 0.8889 - val_loss: 0.4914 - val_accuracy: 0.8589
Epoch 10/20
21/21 [==============================] - 0s 7ms/step - loss: 0.3090 - accuracy: 0.8932 - val_loss: 0.4609 - val_accuracy: 0.8645
Epoch 11/20
21/21 [==============================] - 0s 7ms/step - loss: 0.2976 - accuracy: 0.8960 - val_loss: 0.4397 - val_accuracy: 0.8594
Epoch 12/20
21/21 [==============================] - 0s 7ms/step - loss: 0.2874 - accuracy: 0.9004 - val_loss: 0.4233 - val_accuracy: 0.8696
Epoch 13/20
21/21 [==============================] - 0s 7ms/step - loss: 0.2807 - accuracy: 0.9008 - val_loss: 0.4177 - val_accuracy: 0.8644
Epoch 14/20
21/21 [==============================] - 0s 7ms/step - loss: 0.2697 - accuracy: 0.9071 - val_loss: 0.3944 - val_accuracy: 0.8693
Epoch 15/20
21/21 [==============================] - 0s 7ms/step - loss: 0.2639 - accuracy: 0.9075 - val_loss: 0.3832 - val_accuracy: 0.8719
Epoch 16/20
21/21 [==============================] - 0s 7ms/step - loss: 0.2538 - accuracy: 0.9107 - val_loss: 0.3777 - val_accuracy: 0.8707
Epoch 17/20
21/21 [==============================] - 0s 7ms/step - loss: 0.2483 - accuracy: 0.9133 - val_loss: 0.3728 - val_accuracy: 0.8713
Epoch 18/20
21/21 [==============================] - 0s 7ms/step - loss: 0.2478 - accuracy: 0.9127 - val_loss: 0.3619 - val_accuracy: 0.8752
Epoch 19/20
21/21 [==============================] - 0s 7ms/step - loss: 0.2329 - accuracy: 0.9183 - val_loss: 0.3743 - val_accuracy: 0.8661
Epoch 20/20
21/21 [==============================] - 0s 7ms/step - loss: 0.2274 - accuracy: 0.9212 - val_loss: 0.3673 - val_accuracy: 0.8709
# Plot the loss values.
y_vloss = history.history['val_loss']
y_loss = history.history['loss']
x_len = np.arange(len(y_loss))

plt.plot(x_len, y_vloss, marker='.', c='red', label="Validation-set Loss")
plt.plot(x_len, y_loss, marker='.', c='blue', label="Train-set Loss")
plt.legend(loc='upper right')
plt.grid()
plt.ylim(0,1)
plt.title('Loss graph with batch normalization') 
plt.xlabel('epoch')
plt.ylabel('loss')
plt.show()

png

# Plot the accuracy values.
y_vacc = history.history['val_accuracy']
y_acc = history.history['accuracy']
x_len = np.arange(len(y_acc))

plt.plot(x_len, y_vacc, marker='.', c='red', label="Validation-set accuracy")
plt.plot(x_len, y_acc, marker='.', c='blue', label="Train-set accuracy")
plt.legend(loc='lower right')
plt.grid()
plt.ylim(0.5,1) 
plt.title('Accuracy graph with batch normalization') 
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.show()

png

With batch normalization, training loss falls faster and training accuracy climbs slightly higher than in the plain network. By normalizing the activations of each mini-batch to a more even distribution, batch normalization keeps the inputs to each layer well scaled, allowing more stable learning.
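One design note: in the model above, the BatchNormalization layer sits after the ReLU activation. The original batch normalization paper places it between the linear transform and the activation; a sketch of that ordering, assuming the same data pipeline as above, is shown below. In practice both orderings appear in the wild and often perform similarly.

model_bn_pre = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128),                 # linear transform, no activation yet
    keras.layers.BatchNormalization(),       # normalize the pre-activations
    keras.layers.Activation('relu'),         # apply the activation after BN
    keras.layers.Dense(10, activation='softmax')
])

model_bn_pre.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
                     metrics=['accuracy'])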

