Part 4: Hierarchical PCA Autoencoder (continued)

In this notebook, we will implement a hierarchical PCA autoencoder with two sub-networks added on top of the first autoencoder (i.e., a total of three latent codes in the end) to decompose a synthetic dataset of binary ellipses governed by three variables (the two semi-axes and the rotation angle). The dataset is generated with the same method as in Part 2. The effectiveness of the added hierarchy will be compared against the plain autoencoder with latent covariance loss. For this part, we will also rewrite the custom layer for the latent covariance loss so that it can handle latent spaces of different dimensions.

Setup

import itertools
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
from skimage.transform import rotate

np.random.seed(42)
tf.random.set_seed(42)

Standard encoder and decoder as used in Part 3.

def encoder_gen(inputs):
    x = keras.layers.Conv2D(4, (3, 3), padding='same')(inputs)
    x = keras.layers.LeakyReLU()(x)
    x = keras.layers.MaxPool2D((2, 2))(x)
    x = keras.layers.Conv2D(8, (3, 3), padding='same')(x)
    x = keras.layers.LeakyReLU()(x)
    x = keras.layers.MaxPool2D((2, 2))(x)
    x = keras.layers.Conv2D(16, (3, 3), padding='same')(x)
    x = keras.layers.LeakyReLU()(x)
    x = keras.layers.MaxPool2D((2, 2))(x)
    x = keras.layers.Conv2D(32, (3, 3), padding='same')(x)
    x = keras.layers.LeakyReLU()(x)
    x = keras.layers.MaxPool2D((2, 2))(x)
    x = keras.layers.Conv2D(64, (3, 3), padding='same')(x)
    x = keras.layers.LeakyReLU()(x)
    x = keras.layers.MaxPool2D((2, 2))(x)
    x = keras.layers.Flatten()(x)
    x = keras.layers.Dense(1)(x)

    return x

def decoder_gen(inputs):
    x = keras.layers.Dense(16)(inputs)
    x = keras.layers.LeakyReLU()(x)
    x = keras.layers.Reshape((2, 2, 4))(x)
    x = keras.layers.Conv2DTranspose(32, (3, 3), strides=2, padding='same')(x)
    x = keras.layers.LeakyReLU()(x)
    x = keras.layers.Conv2DTranspose(16, (3, 3), strides=2, padding='same')(x)
    x = keras.layers.LeakyReLU()(x)
    x = keras.layers.Conv2DTranspose(8, (3, 3), strides=2, padding='same')(x)
    x = keras.layers.LeakyReLU()(x)
    x = keras.layers.Conv2DTranspose(4, (3, 3), strides=2, padding='same')(x)
    x = keras.layers.LeakyReLU()(x)
    x = keras.layers.Conv2DTranspose(1, (3, 3), strides=2, padding='same')(x)

    return x
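
As a quick, optional sanity check (not part of the original notebook; the throwaway names below are mine), we can confirm that the encoder maps a 64x64x1 image to a single latent code and that the decoder maps that code back to the input shape:

_inp = keras.layers.Input(shape=(64, 64, 1))
_code = encoder_gen(_inp)
_recon = decoder_gen(_code)
print(_code.shape, _recon.shape)  # expected: (None, 1) (None, 64, 64, 1)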

A helper function for assigning unique names to the layers and weights of each sub-network, so that the sub-networks can later be combined into one model without name collisions.

def edit_name(model, name):
    # Prefix every layer and weight name with `name`.
    # Note: this relies on private Keras attributes (_name and _handle_name).
    for layer in model.layers:
        layer._name = name + '_' + layer._name
    for i in range(len(model.weights)):
        model.weights[i]._handle_name = name + '_' + model.weights[i].name
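
A minimal usage sketch (mine; the model and the prefix 'enc1' are hypothetical) of how the helper would be applied to a sub-network before combining several of them into one model:

sub_input = keras.layers.Input(shape=(64, 64, 1))
sub_encoder = keras.models.Model(sub_input, encoder_gen(sub_input))
edit_name(sub_encoder, 'enc1')
print([layer.name for layer in sub_encoder.layers[:3]])  # every name now starts with 'enc1_'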

Synthesize the dataset of ellipses. To be consistent with Part 2, we use a total of 32 batches of 500 images each.

def phantomEllipse_ang(n, a, b, ang):
    # Evaluate the ellipse equation on an n x n grid (built via broadcasting),
    # centred at (R, R) with semi-axes a and b.
    x = np.arange(n)
    R = n // 2
    y = x[:, np.newaxis]
    img = (x - R)**2 / a**2 + (y - R)**2 / b**2
    # Threshold: points inside the ellipse (value <= 1) become 1, the rest 0.
    # The order of the two assignments matters.
    img[img <= 1] = 1
    img[img > 1] = 0
    # Rotate the binary ellipse by `ang` degrees.
    return rotate(img, angle=ang)

n = 64
num_batch = 32
batch_size = 500
N = int(num_batch * batch_size)
random_gen = np.random.default_rng(42)  # seeded so that the dataset is reproducible
a = random_gen.uniform(1, n//2, N)
b = random_gen.uniform(1, n//2, N)
ang = random_gen.uniform(0, 90, N)
dataset = np.array([phantomEllipse_ang(n, _a, _b, _ang) for _a, _b, _ang in zip(a, b, ang)])
dataset = dataset[..., np.newaxis]

frames = np.random.choice(np.arange(N), 8)
_, ax = plt.subplots(1, 8, figsize=(12, 3))
for i in range(8):
    ax[i].imshow(dataset[frames[i], ..., 0], cmap=plt.get_cmap('gray'))
    ax[i].axis("off")
plt.show()
[Figure: eight randomly chosen samples from the ellipse dataset]

Latent covariance loss

The custom layer is modified below to accommodate a variable number of latent dimensions. Here I compute the covariance between all possible pairs of latent codes. In the original work, the authors only accounted for the covariances between the newly added code and the existing (already trained) codes, since the covariances among the trained codes have already been minimized. My intention, however, is to also be able to use this custom layer in non-hierarchical autoencoders.
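
Concretely, for a batch of B latent vectors with d components, the quantity that the layer below adds to the total loss can be written as (note that the per-component means are not subtracted, so these are uncentred covariances):

$$\mathcal{L}_{\text{cov}} = \lambda \left| \sum_{1 \le i < j \le d} \frac{1}{B} \sum_{b=1}^{B} z_{b,i}\, z_{b,j} \right|$$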

class LatentCovarianceLayer(keras.layers.Layer):
    def __init__(self, lam=0.1, **kwargs):
        super().__init__(**kwargs)
        self.lam = lam

    def call(self, inputs):
        # Sum the (uncentred) covariances over all pairs of latent dimensions.
        _sum = 0.0
        for i, j in itertools.combinations(range(inputs.shape[-1]), 2):
            _sum += tf.math.reduce_mean(inputs[:, i] * inputs[:, j])
        cov_loss = tf.math.abs(self.lam * _sum)
        self.add_loss(cov_loss)
        self.add_metric(cov_loss, name='cov_loss')
        return inputs

    def get_config(self):
        base_config = super().get_config()
        return {**base_config, "lam": self.lam}
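
As a quick check of the arithmetic (mine, not part of the original notebook), the penalty the layer would add for a toy batch of two 3-dimensional latent vectors, with the default lam=0.1, can be reproduced by hand:

toy = tf.constant([[1.0, 2.0, 3.0],
                   [4.0, 5.0, 6.0]])
pairs = itertools.combinations(range(toy.shape[-1]), 2)
penalty = 0.1 * abs(sum(tf.reduce_mean(toy[:, i] * toy[:, j]).numpy() for i, j in pairs))
print(penalty)  # 0.1 * |11.0 + 13.5 + 18.0| ≈ 4.25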

First autoencoder

Train the first network

# SCROLL
keras.backend.clear_session()
input_img = keras.layers.Input(shape=[64, 64, 1])
encoded = encoder_gen(input_img)
decoded = decoder_gen(encoded)
pca_ae = keras.models.Model(input_img, decoded)

optimizer = tf.keras.optimizers.Adam(learning_rate=0.002)
pca_ae.compile(optimizer=optimizer, loss='mse')

tempfn = './model_0.hdf5'
model_cb = keras.callbacks.ModelCheckpoint(tempfn, monitor='loss', save_best_only=True, verbose=1)
early_cb = keras.callbacks.EarlyStopping(monitor='loss', patience=50, verbose=1)
learning_rate_reduction = keras.callbacks.ReduceLROnPlateau(monitor='loss',
                                                            patience=25,
                                                            verbose=1,
                                                            factor=0.5,
                                                            min_lr=0.00001)
cb = [model_cb, early_cb, learning_rate_reduction]

history = pca_ae.fit(dataset, dataset,
                     epochs=1000,
                     batch_size=500,
                     shuffle=True,
                     callbacks=cb)
Epoch 1/1000
32/32 [==============================] - 31s 53ms/step - loss: 0.1781

Epoch 00001: loss improved from inf to 0.17812, saving model to ./model_0.hdf5
Epoch 2/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0935

Epoch 00002: loss improved from 0.17812 to 0.09349, saving model to ./model_0.hdf5
Epoch 3/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0637

Epoch 00003: loss improved from 0.09349 to 0.06374, saving model to ./model_0.hdf5
Epoch 4/1000
32/32 [==============================] - 2s 50ms/step - loss: 0.0604

Epoch 00004: loss improved from 0.06374 to 0.06038, saving model to ./model_0.hdf5
Epoch 5/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0580

Epoch 00005: loss improved from 0.06038 to 0.05798, saving model to ./model_0.hdf5
Epoch 6/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0556

Epoch 00006: loss improved from 0.05798 to 0.05560, saving model to ./model_0.hdf5
Epoch 7/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0535

Epoch 00007: loss improved from 0.05560 to 0.05354, saving model to ./model_0.hdf5
Epoch 8/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0516

Epoch 00008: loss improved from 0.05354 to 0.05161, saving model to ./model_0.hdf5
Epoch 9/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0500

Epoch 00009: loss improved from 0.05161 to 0.05000, saving model to ./model_0.hdf5
Epoch 10/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0480

Epoch 00010: loss improved from 0.05000 to 0.04802, saving model to ./model_0.hdf5
Epoch 11/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0479

Epoch 00011: loss improved from 0.04802 to 0.04794, saving model to ./model_0.hdf5
Epoch 12/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0452

Epoch 00012: loss improved from 0.04794 to 0.04519, saving model to ./model_0.hdf5
Epoch 13/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0435

Epoch 00013: loss improved from 0.04519 to 0.04353, saving model to ./model_0.hdf5
Epoch 14/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0426

Epoch 00014: loss improved from 0.04353 to 0.04259, saving model to ./model_0.hdf5
Epoch 15/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0420

Epoch 00015: loss improved from 0.04259 to 0.04201, saving model to ./model_0.hdf5
Epoch 16/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0416

Epoch 00016: loss improved from 0.04201 to 0.04156, saving model to ./model_0.hdf5
Epoch 17/1000
32/32 [==============================] - 2s 50ms/step - loss: 0.0412

Epoch 00017: loss improved from 0.04156 to 0.04121, saving model to ./model_0.hdf5
Epoch 18/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0409

Epoch 00018: loss improved from 0.04121 to 0.04089, saving model to ./model_0.hdf5
Epoch 19/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0406

Epoch 00019: loss improved from 0.04089 to 0.04062, saving model to ./model_0.hdf5
Epoch 20/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0404

Epoch 00020: loss improved from 0.04062 to 0.04036, saving model to ./model_0.hdf5
Epoch 21/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0403

Epoch 00021: loss improved from 0.04036 to 0.04033, saving model to ./model_0.hdf5
Epoch 22/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0399

Epoch 00022: loss improved from 0.04033 to 0.03994, saving model to ./model_0.hdf5
Epoch 23/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0398

Epoch 00023: loss improved from 0.03994 to 0.03983, saving model to ./model_0.hdf5
Epoch 24/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0399

Epoch 00024: loss did not improve from 0.03983
Epoch 25/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0397

Epoch 00025: loss improved from 0.03983 to 0.03968, saving model to ./model_0.hdf5
Epoch 26/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0396

Epoch 00026: loss improved from 0.03968 to 0.03962, saving model to ./model_0.hdf5
Epoch 27/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0396

Epoch 00027: loss improved from 0.03962 to 0.03960, saving model to ./model_0.hdf5
Epoch 28/1000
32/32 [==============================] - 2s 50ms/step - loss: 0.0395

Epoch 00028: loss improved from 0.03960 to 0.03951, saving model to ./model_0.hdf5
Epoch 29/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0395

Epoch 00029: loss improved from 0.03951 to 0.03945, saving model to ./model_0.hdf5
Epoch 30/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0394

Epoch 00030: loss improved from 0.03945 to 0.03940, saving model to ./model_0.hdf5
Epoch 31/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0395

Epoch 00031: loss did not improve from 0.03940
Epoch 32/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0394

Epoch 00032: loss did not improve from 0.03940
Epoch 33/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0393

Epoch 00033: loss improved from 0.03940 to 0.03934, saving model to ./model_0.hdf5
Epoch 34/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0394

Epoch 00034: loss did not improve from 0.03934
Epoch 35/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0393

Epoch 00035: loss improved from 0.03934 to 0.03933, saving model to ./model_0.hdf5
Epoch 36/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0393

Epoch 00036: loss improved from 0.03933 to 0.03931, saving model to ./model_0.hdf5
Epoch 37/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0394

Epoch 00037: loss did not improve from 0.03931
Epoch 38/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0393

Epoch 00038: loss improved from 0.03931 to 0.03928, saving model to ./model_0.hdf5
Epoch 39/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0392

Epoch 00039: loss improved from 0.03928 to 0.03923, saving model to ./model_0.hdf5
Epoch 40/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0392

Epoch 00040: loss improved from 0.03923 to 0.03920, saving model to ./model_0.hdf5
Epoch 41/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0393

Epoch 00041: loss did not improve from 0.03920
Epoch 42/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0392

Epoch 00042: loss did not improve from 0.03920
Epoch 43/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0392

Epoch 00043: loss improved from 0.03920 to 0.03917, saving model to ./model_0.hdf5
Epoch 44/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0391

Epoch 00044: loss improved from 0.03917 to 0.03915, saving model to ./model_0.hdf5
Epoch 45/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0391

Epoch 00045: loss improved from 0.03915 to 0.03914, saving model to ./model_0.hdf5
Epoch 46/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0393

Epoch 00046: loss did not improve from 0.03914
Epoch 47/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0391

Epoch 00047: loss improved from 0.03914 to 0.03912, saving model to ./model_0.hdf5
Epoch 48/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0390

Epoch 00048: loss improved from 0.03912 to 0.03904, saving model to ./model_0.hdf5
Epoch 49/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0391

Epoch 00049: loss did not improve from 0.03904
Epoch 50/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0391

Epoch 00050: loss did not improve from 0.03904
Epoch 51/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0390

Epoch 00051: loss improved from 0.03904 to 0.03903, saving model to ./model_0.hdf5
Epoch 52/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0390

Epoch 00052: loss improved from 0.03903 to 0.03900, saving model to ./model_0.hdf5
Epoch 53/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0390

Epoch 00053: loss did not improve from 0.03900
Epoch 54/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0390

Epoch 00054: loss improved from 0.03900 to 0.03899, saving model to ./model_0.hdf5
Epoch 55/1000
32/32 [==============================] - 2s 50ms/step - loss: 0.0389

Epoch 00055: loss improved from 0.03899 to 0.03890, saving model to ./model_0.hdf5
Epoch 56/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0390

Epoch 00056: loss did not improve from 0.03890
Epoch 57/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0390

Epoch 00057: loss did not improve from 0.03890
Epoch 58/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0389

Epoch 00058: loss did not improve from 0.03890
Epoch 59/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0388

Epoch 00059: loss improved from 0.03890 to 0.03879, saving model to ./model_0.hdf5
Epoch 60/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0388

Epoch 00060: loss improved from 0.03879 to 0.03876, saving model to ./model_0.hdf5
Epoch 61/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0389

Epoch 00061: loss did not improve from 0.03876
Epoch 62/1000
32/32 [==============================] - 2s 50ms/step - loss: 0.0388

Epoch 00062: loss did not improve from 0.03876
Epoch 63/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0388

Epoch 00063: loss improved from 0.03876 to 0.03876, saving model to ./model_0.hdf5
Epoch 64/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0386

Epoch 00064: loss improved from 0.03876 to 0.03862, saving model to ./model_0.hdf5
Epoch 65/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0388

Epoch 00065: loss did not improve from 0.03862
Epoch 66/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0386

Epoch 00066: loss did not improve from 0.03862
Epoch 67/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0385

Epoch 00067: loss improved from 0.03862 to 0.03854, saving model to ./model_0.hdf5
Epoch 68/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0385

Epoch 00068: loss improved from 0.03854 to 0.03851, saving model to ./model_0.hdf5
Epoch 69/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0386

Epoch 00069: loss did not improve from 0.03851
Epoch 70/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0386

Epoch 00070: loss did not improve from 0.03851
Epoch 71/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0384

Epoch 00071: loss improved from 0.03851 to 0.03845, saving model to ./model_0.hdf5
Epoch 72/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0383

Epoch 00072: loss improved from 0.03845 to 0.03833, saving model to ./model_0.hdf5
Epoch 73/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0384

Epoch 00073: loss did not improve from 0.03833
Epoch 74/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0384

Epoch 00074: loss did not improve from 0.03833
Epoch 75/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0383

Epoch 00075: loss improved from 0.03833 to 0.03826, saving model to ./model_0.hdf5
Epoch 76/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0381

Epoch 00076: loss improved from 0.03826 to 0.03810, saving model to ./model_0.hdf5
Epoch 77/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0385

Epoch 00077: loss did not improve from 0.03810
Epoch 78/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0380

Epoch 00078: loss improved from 0.03810 to 0.03801, saving model to ./model_0.hdf5
Epoch 79/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0380

Epoch 00079: loss improved from 0.03801 to 0.03796, saving model to ./model_0.hdf5
Epoch 80/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0380

Epoch 00080: loss did not improve from 0.03796
Epoch 81/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0379

Epoch 00081: loss improved from 0.03796 to 0.03795, saving model to ./model_0.hdf5
Epoch 82/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0379

Epoch 00082: loss improved from 0.03795 to 0.03787, saving model to ./model_0.hdf5
Epoch 83/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0380

Epoch 00083: loss did not improve from 0.03787
Epoch 84/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0377

Epoch 00084: loss improved from 0.03787 to 0.03774, saving model to ./model_0.hdf5
Epoch 85/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0377

Epoch 00085: loss did not improve from 0.03774
Epoch 86/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0377

Epoch 00086: loss improved from 0.03774 to 0.03768, saving model to ./model_0.hdf5
Epoch 87/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0374

Epoch 00087: loss improved from 0.03768 to 0.03740, saving model to ./model_0.hdf5
Epoch 88/1000
32/32 [==============================] - 2s 50ms/step - loss: 0.0374

Epoch 00088: loss did not improve from 0.03740
Epoch 89/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0375

Epoch 00089: loss did not improve from 0.03740
Epoch 90/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0375

Epoch 00090: loss did not improve from 0.03740
Epoch 91/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0371

Epoch 00091: loss improved from 0.03740 to 0.03713, saving model to ./model_0.hdf5
Epoch 92/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0374

Epoch 00092: loss did not improve from 0.03713
Epoch 93/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0372

Epoch 00093: loss did not improve from 0.03713
Epoch 94/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0373

Epoch 00094: loss did not improve from 0.03713
Epoch 95/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0372

Epoch 00095: loss did not improve from 0.03713
Epoch 96/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0368

Epoch 00096: loss improved from 0.03713 to 0.03678, saving model to ./model_0.hdf5
Epoch 97/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0369

Epoch 00097: loss did not improve from 0.03678
Epoch 98/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0367

Epoch 00098: loss improved from 0.03678 to 0.03667, saving model to ./model_0.hdf5
Epoch 99/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0366

Epoch 00099: loss improved from 0.03667 to 0.03659, saving model to ./model_0.hdf5
Epoch 100/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0369

Epoch 00100: loss did not improve from 0.03659
Epoch 101/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0364

Epoch 00101: loss improved from 0.03659 to 0.03643, saving model to ./model_0.hdf5
Epoch 102/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0365

Epoch 00102: loss did not improve from 0.03643
Epoch 103/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0365

Epoch 00103: loss did not improve from 0.03643
Epoch 104/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0363

Epoch 00104: loss improved from 0.03643 to 0.03630, saving model to ./model_0.hdf5
Epoch 105/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0362

Epoch 00105: loss improved from 0.03630 to 0.03617, saving model to ./model_0.hdf5
Epoch 106/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0361

Epoch 00106: loss improved from 0.03617 to 0.03613, saving model to ./model_0.hdf5
Epoch 107/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0364

Epoch 00107: loss did not improve from 0.03613
Epoch 108/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0360

Epoch 00108: loss improved from 0.03613 to 0.03597, saving model to ./model_0.hdf5
Epoch 109/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0358

Epoch 00109: loss improved from 0.03597 to 0.03582, saving model to ./model_0.hdf5
Epoch 110/1000
32/32 [==============================] - 2s 50ms/step - loss: 0.0358

Epoch 00110: loss did not improve from 0.03582
Epoch 111/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0357

Epoch 00111: loss improved from 0.03582 to 0.03570, saving model to ./model_0.hdf5
Epoch 112/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0359

Epoch 00112: loss did not improve from 0.03570
Epoch 113/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0357

Epoch 00113: loss improved from 0.03570 to 0.03565, saving model to ./model_0.hdf5
Epoch 114/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0355

Epoch 00114: loss improved from 0.03565 to 0.03555, saving model to ./model_0.hdf5
Epoch 115/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0364

Epoch 00115: loss did not improve from 0.03555
Epoch 116/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0353

Epoch 00116: loss improved from 0.03555 to 0.03531, saving model to ./model_0.hdf5
Epoch 117/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0352

Epoch 00117: loss improved from 0.03531 to 0.03521, saving model to ./model_0.hdf5
Epoch 118/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0354

Epoch 00118: loss did not improve from 0.03521
Epoch 119/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0352

Epoch 00119: loss improved from 0.03521 to 0.03516, saving model to ./model_0.hdf5
Epoch 120/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0352

Epoch 00120: loss did not improve from 0.03516
Epoch 121/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0358

Epoch 00121: loss did not improve from 0.03516
Epoch 122/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0358

Epoch 00122: loss did not improve from 0.03516
Epoch 123/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0354

Epoch 00123: loss did not improve from 0.03516
Epoch 124/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0349

Epoch 00124: loss improved from 0.03516 to 0.03486, saving model to ./model_0.hdf5
Epoch 125/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0351

Epoch 00125: loss did not improve from 0.03486
Epoch 126/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0347

Epoch 00126: loss improved from 0.03486 to 0.03473, saving model to ./model_0.hdf5
Epoch 127/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0348

Epoch 00127: loss did not improve from 0.03473
Epoch 128/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0349

Epoch 00128: loss did not improve from 0.03473
Epoch 129/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0347

Epoch 00129: loss improved from 0.03473 to 0.03469, saving model to ./model_0.hdf5
Epoch 130/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0348

Epoch 00130: loss did not improve from 0.03469
Epoch 131/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0346

Epoch 00131: loss improved from 0.03469 to 0.03461, saving model to ./model_0.hdf5
Epoch 132/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0349

Epoch 00132: loss did not improve from 0.03461
Epoch 133/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0348

Epoch 00133: loss did not improve from 0.03461
Epoch 134/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0344

Epoch 00134: loss improved from 0.03461 to 0.03444, saving model to ./model_0.hdf5
Epoch 135/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0349

Epoch 00135: loss did not improve from 0.03444
Epoch 136/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0346

Epoch 00136: loss did not improve from 0.03444
Epoch 137/1000
32/32 [==============================] - 2s 57ms/step - loss: 0.0345

Epoch 00137: loss did not improve from 0.03444
Epoch 138/1000
32/32 [==============================] - 2s 56ms/step - loss: 0.0343

Epoch 00138: loss improved from 0.03444 to 0.03432, saving model to ./model_0.hdf5
Epoch 139/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0347

Epoch 00139: loss did not improve from 0.03432
Epoch 140/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0343

Epoch 00140: loss improved from 0.03432 to 0.03429, saving model to ./model_0.hdf5
Epoch 141/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0343

Epoch 00141: loss did not improve from 0.03429
Epoch 142/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0342

Epoch 00142: loss improved from 0.03429 to 0.03425, saving model to ./model_0.hdf5
Epoch 143/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0340

Epoch 00143: loss improved from 0.03425 to 0.03404, saving model to ./model_0.hdf5
Epoch 144/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0340

Epoch 00144: loss improved from 0.03404 to 0.03398, saving model to ./model_0.hdf5
Epoch 145/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0340

Epoch 00145: loss did not improve from 0.03398
Epoch 146/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0343

Epoch 00146: loss did not improve from 0.03398
Epoch 147/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0339

Epoch 00147: loss improved from 0.03398 to 0.03394, saving model to ./model_0.hdf5
Epoch 148/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0342

Epoch 00148: loss did not improve from 0.03394
Epoch 149/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0344

Epoch 00149: loss did not improve from 0.03394
Epoch 150/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0339

Epoch 00150: loss did not improve from 0.03394
Epoch 151/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0340

Epoch 00151: loss did not improve from 0.03394
Epoch 152/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0338

Epoch 00152: loss improved from 0.03394 to 0.03381, saving model to ./model_0.hdf5
Epoch 153/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0337

Epoch 00153: loss improved from 0.03381 to 0.03370, saving model to ./model_0.hdf5
Epoch 154/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0342

Epoch 00154: loss did not improve from 0.03370
Epoch 155/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0341

Epoch 00155: loss did not improve from 0.03370
Epoch 156/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0337

Epoch 00156: loss did not improve from 0.03370
Epoch 157/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0341

Epoch 00157: loss did not improve from 0.03370
Epoch 158/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0352

Epoch 00158: loss did not improve from 0.03370
Epoch 159/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0339

Epoch 00159: loss did not improve from 0.03370
Epoch 160/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0341

Epoch 00160: loss did not improve from 0.03370
Epoch 161/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0335

Epoch 00161: loss improved from 0.03370 to 0.03351, saving model to ./model_0.hdf5
Epoch 162/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0335

Epoch 00162: loss improved from 0.03351 to 0.03345, saving model to ./model_0.hdf5
Epoch 163/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0335

Epoch 00163: loss did not improve from 0.03345
Epoch 164/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0335

Epoch 00164: loss did not improve from 0.03345
Epoch 165/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0336

Epoch 00165: loss did not improve from 0.03345
Epoch 166/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0333

Epoch 00166: loss improved from 0.03345 to 0.03332, saving model to ./model_0.hdf5
Epoch 167/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0334

Epoch 00167: loss did not improve from 0.03332
Epoch 168/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0337

Epoch 00168: loss did not improve from 0.03332
Epoch 169/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0338

Epoch 00169: loss did not improve from 0.03332
Epoch 170/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0334

Epoch 00170: loss did not improve from 0.03332
Epoch 171/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0334

Epoch 00171: loss did not improve from 0.03332
Epoch 172/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0334

Epoch 00172: loss did not improve from 0.03332
Epoch 173/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0334

Epoch 00173: loss did not improve from 0.03332
Epoch 174/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0334

Epoch 00174: loss did not improve from 0.03332
Epoch 175/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0332

Epoch 00175: loss improved from 0.03332 to 0.03323, saving model to ./model_0.hdf5
Epoch 176/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0332

Epoch 00176: loss did not improve from 0.03323
Epoch 177/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0337

Epoch 00177: loss did not improve from 0.03323
Epoch 178/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0335

Epoch 00178: loss did not improve from 0.03323
Epoch 179/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0331

Epoch 00179: loss improved from 0.03323 to 0.03310, saving model to ./model_0.hdf5
Epoch 180/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0332

Epoch 00180: loss did not improve from 0.03310
Epoch 181/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0333

Epoch 00181: loss did not improve from 0.03310
Epoch 182/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0332

Epoch 00182: loss did not improve from 0.03310
Epoch 183/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0334

Epoch 00183: loss did not improve from 0.03310
Epoch 184/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0335

Epoch 00184: loss did not improve from 0.03310
Epoch 185/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0329

Epoch 00185: loss improved from 0.03310 to 0.03288, saving model to ./model_0.hdf5
Epoch 186/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0329

Epoch 00186: loss did not improve from 0.03288
Epoch 187/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0329

Epoch 00187: loss did not improve from 0.03288
Epoch 188/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0334

Epoch 00188: loss did not improve from 0.03288
Epoch 189/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0332

Epoch 00189: loss did not improve from 0.03288
Epoch 190/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0335

Epoch 00190: loss did not improve from 0.03288
Epoch 191/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0332

Epoch 00191: loss did not improve from 0.03288
Epoch 192/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0330

Epoch 00192: loss did not improve from 0.03288
Epoch 193/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0328

Epoch 00193: loss improved from 0.03288 to 0.03278, saving model to ./model_0.hdf5
Epoch 194/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0331

Epoch 00194: loss did not improve from 0.03278
Epoch 195/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0330

Epoch 00195: loss did not improve from 0.03278
Epoch 196/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0330

Epoch 00196: loss did not improve from 0.03278
Epoch 197/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0332

Epoch 00197: loss did not improve from 0.03278
Epoch 198/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0328

Epoch 00198: loss improved from 0.03278 to 0.03276, saving model to ./model_0.hdf5
Epoch 199/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0327

Epoch 00199: loss improved from 0.03276 to 0.03273, saving model to ./model_0.hdf5
Epoch 200/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0326

Epoch 00200: loss improved from 0.03273 to 0.03261, saving model to ./model_0.hdf5
Epoch 201/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0331

Epoch 00201: loss did not improve from 0.03261
Epoch 202/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0327

Epoch 00202: loss did not improve from 0.03261
Epoch 203/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0329

Epoch 00203: loss did not improve from 0.03261
Epoch 204/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0329

Epoch 00204: loss did not improve from 0.03261
Epoch 205/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0326

Epoch 00205: loss did not improve from 0.03261
Epoch 206/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0330

Epoch 00206: loss did not improve from 0.03261
Epoch 207/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0331

Epoch 00207: loss did not improve from 0.03261
Epoch 208/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0327

Epoch 00208: loss did not improve from 0.03261
Epoch 209/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0327

Epoch 00209: loss did not improve from 0.03261
Epoch 210/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0327

Epoch 00210: loss did not improve from 0.03261
Epoch 211/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0328

Epoch 00211: loss did not improve from 0.03261
Epoch 212/1000
32/32 [==============================] - 2s 56ms/step - loss: 0.0327

Epoch 00212: loss did not improve from 0.03261
Epoch 213/1000
32/32 [==============================] - 2s 50ms/step - loss: 0.0326

Epoch 00213: loss did not improve from 0.03261
Epoch 214/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0325

Epoch 00214: loss improved from 0.03261 to 0.03254, saving model to ./model_0.hdf5
Epoch 215/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0325

Epoch 00215: loss improved from 0.03254 to 0.03251, saving model to ./model_0.hdf5
Epoch 216/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0327

Epoch 00216: loss did not improve from 0.03251
Epoch 217/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0331

Epoch 00217: loss did not improve from 0.03251
Epoch 218/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0326

Epoch 00218: loss did not improve from 0.03251
Epoch 219/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0329

Epoch 00219: loss did not improve from 0.03251
Epoch 220/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0326

Epoch 00220: loss did not improve from 0.03251
Epoch 221/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0323

Epoch 00221: loss improved from 0.03251 to 0.03229, saving model to ./model_0.hdf5
Epoch 222/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0326

Epoch 00222: loss did not improve from 0.03229
Epoch 223/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0324

Epoch 00223: loss did not improve from 0.03229
Epoch 224/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0324

Epoch 00224: loss did not improve from 0.03229
Epoch 225/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0326

Epoch 00225: loss did not improve from 0.03229
Epoch 226/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0340

Epoch 00226: loss did not improve from 0.03229
Epoch 227/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0347

Epoch 00227: loss did not improve from 0.03229
Epoch 228/1000
32/32 [==============================] - 2s 50ms/step - loss: 0.0324

Epoch 00228: loss did not improve from 0.03229
Epoch 229/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0323

Epoch 00229: loss improved from 0.03229 to 0.03227, saving model to ./model_0.hdf5
Epoch 230/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0322

Epoch 00230: loss improved from 0.03227 to 0.03223, saving model to ./model_0.hdf5
Epoch 231/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0323

Epoch 00231: loss did not improve from 0.03223
Epoch 232/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0322

Epoch 00232: loss improved from 0.03223 to 0.03223, saving model to ./model_0.hdf5
Epoch 233/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0322

Epoch 00233: loss improved from 0.03223 to 0.03217, saving model to ./model_0.hdf5
Epoch 234/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0324

Epoch 00234: loss did not improve from 0.03217
Epoch 235/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0325

Epoch 00235: loss did not improve from 0.03217
Epoch 236/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0322

Epoch 00236: loss improved from 0.03217 to 0.03216, saving model to ./model_0.hdf5
Epoch 237/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0324

Epoch 00237: loss did not improve from 0.03216
Epoch 238/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0324

Epoch 00238: loss did not improve from 0.03216
Epoch 239/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0323

Epoch 00239: loss did not improve from 0.03216
Epoch 240/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0321

Epoch 00240: loss improved from 0.03216 to 0.03206, saving model to ./model_0.hdf5
Epoch 241/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0321

Epoch 00241: loss did not improve from 0.03206
Epoch 242/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0322

Epoch 00242: loss did not improve from 0.03206
Epoch 243/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0321

Epoch 00243: loss did not improve from 0.03206
Epoch 244/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0325

Epoch 00244: loss did not improve from 0.03206
Epoch 245/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0321

Epoch 00245: loss did not improve from 0.03206
Epoch 246/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0322

Epoch 00246: loss did not improve from 0.03206
Epoch 247/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0322

Epoch 00247: loss did not improve from 0.03206
Epoch 248/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0324

Epoch 00248: loss did not improve from 0.03206
Epoch 249/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0322

Epoch 00249: loss did not improve from 0.03206
Epoch 250/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0321

Epoch 00250: loss improved from 0.03206 to 0.03206, saving model to ./model_0.hdf5
Epoch 251/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0323

Epoch 00251: loss did not improve from 0.03206
Epoch 252/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0322

Epoch 00252: loss did not improve from 0.03206
Epoch 253/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0322

Epoch 00253: loss did not improve from 0.03206
Epoch 254/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0322

Epoch 00254: loss did not improve from 0.03206
Epoch 255/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0319

Epoch 00255: loss improved from 0.03206 to 0.03192, saving model to ./model_0.hdf5
Epoch 256/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0320

Epoch 00256: loss did not improve from 0.03192
Epoch 257/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0322

Epoch 00257: loss did not improve from 0.03192
Epoch 258/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0321

Epoch 00258: loss did not improve from 0.03192
Epoch 259/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0321

Epoch 00259: loss did not improve from 0.03192
Epoch 260/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0326

Epoch 00260: loss did not improve from 0.03192
Epoch 261/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0320

Epoch 00261: loss did not improve from 0.03192
Epoch 262/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0321

Epoch 00262: loss did not improve from 0.03192
Epoch 263/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0322

Epoch 00263: loss did not improve from 0.03192
Epoch 264/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0317

Epoch 00264: loss improved from 0.03192 to 0.03168, saving model to ./model_0.hdf5
Epoch 265/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0326

Epoch 00265: loss did not improve from 0.03168
Epoch 266/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0320

Epoch 00266: loss did not improve from 0.03168
Epoch 267/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0319

Epoch 00267: loss did not improve from 0.03168
Epoch 268/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0318

Epoch 00268: loss did not improve from 0.03168
Epoch 269/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0318

Epoch 00269: loss did not improve from 0.03168
Epoch 270/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0318

Epoch 00270: loss did not improve from 0.03168
Epoch 271/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0323

Epoch 00271: loss did not improve from 0.03168
Epoch 272/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0345

Epoch 00272: loss did not improve from 0.03168
Epoch 273/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0329

Epoch 00273: loss did not improve from 0.03168
Epoch 274/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0319

Epoch 00274: loss did not improve from 0.03168
Epoch 275/1000
32/32 [==============================] - 2s 55ms/step - loss: 0.0318

Epoch 00275: loss did not improve from 0.03168
Epoch 276/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0317

Epoch 00276: loss did not improve from 0.03168
Epoch 277/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0315

Epoch 00277: loss improved from 0.03168 to 0.03155, saving model to ./model_0.hdf5
Epoch 278/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0316

Epoch 00278: loss did not improve from 0.03155
Epoch 279/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0318

Epoch 00279: loss did not improve from 0.03155
Epoch 280/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0317

Epoch 00280: loss did not improve from 0.03155
Epoch 281/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0320

Epoch 00281: loss did not improve from 0.03155
Epoch 282/1000
32/32 [==============================] - 2s 55ms/step - loss: 0.0316

Epoch 00282: loss did not improve from 0.03155
Epoch 283/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0325

Epoch 00283: loss did not improve from 0.03155
Epoch 284/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0329

Epoch 00284: loss did not improve from 0.03155
Epoch 285/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0319

Epoch 00285: loss did not improve from 0.03155
Epoch 286/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0314

Epoch 00286: loss improved from 0.03155 to 0.03141, saving model to ./model_0.hdf5
Epoch 287/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0317

Epoch 00287: loss did not improve from 0.03141
Epoch 288/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0315

Epoch 00288: loss did not improve from 0.03141
Epoch 289/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0316

Epoch 00289: loss did not improve from 0.03141
Epoch 290/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0316

Epoch 00290: loss did not improve from 0.03141
Epoch 291/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0315

Epoch 00291: loss did not improve from 0.03141
Epoch 292/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0315

Epoch 00292: loss did not improve from 0.03141
Epoch 293/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0315

Epoch 00293: loss did not improve from 0.03141
Epoch 294/1000
32/32 [==============================] - 2s 55ms/step - loss: 0.0319

Epoch 00294: loss did not improve from 0.03141
Epoch 295/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0317

Epoch 00295: loss did not improve from 0.03141
Epoch 296/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0318

Epoch 00296: loss did not improve from 0.03141
Epoch 297/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0314

Epoch 00297: loss improved from 0.03141 to 0.03139, saving model to ./model_0.hdf5
Epoch 298/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0316

Epoch 00298: loss did not improve from 0.03139
Epoch 299/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0316

Epoch 00299: loss did not improve from 0.03139
Epoch 300/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0318

Epoch 00300: loss did not improve from 0.03139
Epoch 301/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0318

Epoch 00301: loss did not improve from 0.03139
Epoch 302/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0322

Epoch 00302: loss did not improve from 0.03139
Epoch 303/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0315

Epoch 00303: loss did not improve from 0.03139
Epoch 304/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0316

Epoch 00304: loss did not improve from 0.03139
Epoch 305/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0317

Epoch 00305: loss did not improve from 0.03139
Epoch 306/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0317

Epoch 00306: loss did not improve from 0.03139
Epoch 307/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0316

Epoch 00307: loss did not improve from 0.03139
Epoch 308/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0314

Epoch 00308: loss improved from 0.03139 to 0.03138, saving model to ./model_0.hdf5
Epoch 309/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0314

Epoch 00309: loss did not improve from 0.03138
Epoch 310/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0315

Epoch 00310: loss did not improve from 0.03138
Epoch 311/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0315

Epoch 00311: loss did not improve from 0.03138

Epoch 00311: ReduceLROnPlateau reducing learning rate to 0.0010000000474974513.
Epoch 312/1000
32/32 [==============================] - 2s 55ms/step - loss: 0.0310

Epoch 00312: loss improved from 0.03138 to 0.03101, saving model to ./model_0.hdf5
Epoch 313/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0309

Epoch 00313: loss improved from 0.03101 to 0.03087, saving model to ./model_0.hdf5
Epoch 314/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0309

Epoch 00314: loss did not improve from 0.03087
Epoch 315/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0308

Epoch 00315: loss improved from 0.03087 to 0.03084, saving model to ./model_0.hdf5
Epoch 316/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0309

Epoch 00316: loss did not improve from 0.03084
Epoch 317/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0309

Epoch 00317: loss did not improve from 0.03084
Epoch 318/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0310

Epoch 00318: loss did not improve from 0.03084
Epoch 319/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0310

Epoch 00319: loss did not improve from 0.03084
Epoch 320/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0308

Epoch 00320: loss improved from 0.03084 to 0.03080, saving model to ./model_0.hdf5
Epoch 321/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0308

Epoch 00321: loss did not improve from 0.03080
Epoch 322/1000
32/32 [==============================] - 2s 55ms/step - loss: 0.0308

Epoch 00322: loss improved from 0.03080 to 0.03078, saving model to ./model_0.hdf5
Epoch 323/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0309

Epoch 00323: loss did not improve from 0.03078
Epoch 324/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0309

Epoch 00324: loss did not improve from 0.03078
Epoch 325/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0308

Epoch 00325: loss did not improve from 0.03078
Epoch 326/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0308

Epoch 00326: loss did not improve from 0.03078
Epoch 327/1000
32/32 [==============================] - 2s 55ms/step - loss: 0.0310

Epoch 00327: loss did not improve from 0.03078
[Epochs 328-420 omitted: per-epoch loss hovers between 0.0302 and 0.0310; the best (checkpointed) loss improves slowly from 0.03078 to 0.03022.]

Epoch 00420: ReduceLROnPlateau reducing learning rate to 0.0005000000237487257.
[Epochs 421-459 omitted: best loss improves from 0.03022 to 0.02979, then plateaus.]

Epoch 00459: ReduceLROnPlateau reducing learning rate to 0.0002500000118743628.
[Epochs 460-506 omitted: best loss improves from 0.02979 to 0.02952, then plateaus.]

Epoch 00506: ReduceLROnPlateau reducing learning rate to 0.0001250000059371814.
[Epochs 507-533 omitted: best loss improves from 0.02952 to 0.02941, then plateaus.]

Epoch 00533: ReduceLROnPlateau reducing learning rate to 6.25000029685907e-05.
[Epochs 534-568 omitted: best loss improves from 0.02941 to 0.02933, then plateaus.]

Epoch 00568: ReduceLROnPlateau reducing learning rate to 3.125000148429535e-05.
[Epochs 569-593 omitted: best loss improves from 0.02933 to 0.02929.]

Epoch 00593: ReduceLROnPlateau reducing learning rate to 1.5625000742147677e-05.
[Epochs 594-618 omitted: best loss improves from 0.02929 to 0.02927.]

Epoch 00618: ReduceLROnPlateau reducing learning rate to 1e-05.
[Epochs 619-772 omitted: with the learning rate at 1e-05, the per-epoch loss flattens at 0.0292-0.0293; best loss improves marginally to 0.02923.]
Epoch 773/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00773: loss did not improve from 0.02923
Epoch 774/1000
32/32 [==============================] - 2s 55ms/step - loss: 0.0292

Epoch 00774: loss improved from 0.02923 to 0.02923, saving model to ./model_0.hdf5
Epoch 775/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00775: loss did not improve from 0.02923
Epoch 776/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0292

Epoch 00776: loss improved from 0.02923 to 0.02923, saving model to ./model_0.hdf5
Epoch 777/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00777: loss did not improve from 0.02923
Epoch 778/1000
32/32 [==============================] - 2s 55ms/step - loss: 0.0292

Epoch 00778: loss did not improve from 0.02923
Epoch 779/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00779: loss improved from 0.02923 to 0.02923, saving model to ./model_0.hdf5
Epoch 780/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00780: loss improved from 0.02923 to 0.02923, saving model to ./model_0.hdf5
Epoch 781/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0292

Epoch 00781: loss did not improve from 0.02923
Epoch 782/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0292

Epoch 00782: loss did not improve from 0.02923
Epoch 783/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0292

Epoch 00783: loss improved from 0.02923 to 0.02923, saving model to ./model_0.hdf5
Epoch 784/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0292

Epoch 00784: loss improved from 0.02923 to 0.02923, saving model to ./model_0.hdf5
Epoch 785/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00785: loss did not improve from 0.02923
Epoch 786/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00786: loss did not improve from 0.02923
Epoch 787/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0292

Epoch 00787: loss did not improve from 0.02923
Epoch 788/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00788: loss did not improve from 0.02923
Epoch 789/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00789: loss improved from 0.02923 to 0.02923, saving model to ./model_0.hdf5
Epoch 790/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0292

Epoch 00790: loss did not improve from 0.02923
Epoch 791/1000
32/32 [==============================] - 2s 55ms/step - loss: 0.0292

Epoch 00791: loss did not improve from 0.02923
Epoch 792/1000
32/32 [==============================] - 2s 56ms/step - loss: 0.0292

Epoch 00792: loss did not improve from 0.02923
Epoch 793/1000
32/32 [==============================] - 2s 55ms/step - loss: 0.0292

Epoch 00793: loss improved from 0.02923 to 0.02923, saving model to ./model_0.hdf5
Epoch 794/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00794: loss did not improve from 0.02923
Epoch 795/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00795: loss improved from 0.02923 to 0.02923, saving model to ./model_0.hdf5
Epoch 796/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00796: loss improved from 0.02923 to 0.02922, saving model to ./model_0.hdf5
Epoch 797/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00797: loss improved from 0.02922 to 0.02922, saving model to ./model_0.hdf5
Epoch 798/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00798: loss did not improve from 0.02922
Epoch 799/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00799: loss did not improve from 0.02922
Epoch 800/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00800: loss did not improve from 0.02922
Epoch 801/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00801: loss improved from 0.02922 to 0.02922, saving model to ./model_0.hdf5
Epoch 802/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00802: loss did not improve from 0.02922
Epoch 803/1000
32/32 [==============================] - 2s 55ms/step - loss: 0.0292

Epoch 00803: loss did not improve from 0.02922
Epoch 804/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00804: loss improved from 0.02922 to 0.02922, saving model to ./model_0.hdf5
Epoch 805/1000
32/32 [==============================] - 2s 55ms/step - loss: 0.0292

Epoch 00805: loss improved from 0.02922 to 0.02922, saving model to ./model_0.hdf5
Epoch 806/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00806: loss improved from 0.02922 to 0.02922, saving model to ./model_0.hdf5
Epoch 807/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00807: loss did not improve from 0.02922
Epoch 808/1000
32/32 [==============================] - 2s 55ms/step - loss: 0.0292

Epoch 00808: loss did not improve from 0.02922
Epoch 809/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0292

Epoch 00809: loss improved from 0.02922 to 0.02922, saving model to ./model_0.hdf5
Epoch 810/1000
32/32 [==============================] - 2s 56ms/step - loss: 0.0292

Epoch 00810: loss did not improve from 0.02922
Epoch 811/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00811: loss did not improve from 0.02922
Epoch 812/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00812: loss improved from 0.02922 to 0.02922, saving model to ./model_0.hdf5
Epoch 813/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00813: loss did not improve from 0.02922
Epoch 814/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00814: loss did not improve from 0.02922
Epoch 815/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00815: loss improved from 0.02922 to 0.02922, saving model to ./model_0.hdf5
Epoch 816/1000
32/32 [==============================] - 2s 55ms/step - loss: 0.0292

Epoch 00816: loss did not improve from 0.02922
Epoch 817/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00817: loss improved from 0.02922 to 0.02922, saving model to ./model_0.hdf5
Epoch 818/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00818: loss improved from 0.02922 to 0.02922, saving model to ./model_0.hdf5
Epoch 819/1000
32/32 [==============================] - 2s 55ms/step - loss: 0.0292

Epoch 00819: loss did not improve from 0.02922
Epoch 820/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00820: loss did not improve from 0.02922
Epoch 821/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00821: loss did not improve from 0.02922
Epoch 822/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0292

Epoch 00822: loss did not improve from 0.02922
Epoch 823/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00823: loss did not improve from 0.02922
Epoch 824/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00824: loss improved from 0.02922 to 0.02922, saving model to ./model_0.hdf5
Epoch 825/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00825: loss did not improve from 0.02922
Epoch 826/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0292

Epoch 00826: loss improved from 0.02922 to 0.02922, saving model to ./model_0.hdf5
Epoch 827/1000
32/32 [==============================] - 2s 55ms/step - loss: 0.0292

Epoch 00827: loss improved from 0.02922 to 0.02922, saving model to ./model_0.hdf5
Epoch 828/1000
32/32 [==============================] - 2s 56ms/step - loss: 0.0292

Epoch 00828: loss did not improve from 0.02922
Epoch 829/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00829: loss did not improve from 0.02922
Epoch 830/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00830: loss did not improve from 0.02922
Epoch 831/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00831: loss did not improve from 0.02922
Epoch 832/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00832: loss improved from 0.02922 to 0.02922, saving model to ./model_0.hdf5
Epoch 833/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00833: loss did not improve from 0.02922
Epoch 834/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00834: loss did not improve from 0.02922
Epoch 835/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00835: loss improved from 0.02922 to 0.02921, saving model to ./model_0.hdf5
Epoch 836/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0292

Epoch 00836: loss did not improve from 0.02921
Epoch 837/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00837: loss did not improve from 0.02921
Epoch 838/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00838: loss improved from 0.02921 to 0.02921, saving model to ./model_0.hdf5
Epoch 839/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00839: loss improved from 0.02921 to 0.02921, saving model to ./model_0.hdf5
Epoch 840/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00840: loss improved from 0.02921 to 0.02921, saving model to ./model_0.hdf5
Epoch 841/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00841: loss did not improve from 0.02921
Epoch 842/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00842: loss did not improve from 0.02921
Epoch 843/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00843: loss did not improve from 0.02921
Epoch 844/1000
32/32 [==============================] - 2s 55ms/step - loss: 0.0292

Epoch 00844: loss did not improve from 0.02921
Epoch 845/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00845: loss did not improve from 0.02921
Epoch 846/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00846: loss did not improve from 0.02921
Epoch 847/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00847: loss improved from 0.02921 to 0.02921, saving model to ./model_0.hdf5
Epoch 848/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00848: loss did not improve from 0.02921
Epoch 849/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00849: loss improved from 0.02921 to 0.02921, saving model to ./model_0.hdf5
Epoch 850/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0292

Epoch 00850: loss did not improve from 0.02921
Epoch 851/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0292

Epoch 00851: loss did not improve from 0.02921
Epoch 852/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00852: loss did not improve from 0.02921
Epoch 853/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0292

Epoch 00853: loss did not improve from 0.02921
Epoch 854/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00854: loss did not improve from 0.02921
Epoch 855/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00855: loss did not improve from 0.02921
Epoch 856/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0292

Epoch 00856: loss did not improve from 0.02921
Epoch 857/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00857: loss did not improve from 0.02921
Epoch 858/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00858: loss did not improve from 0.02921
Epoch 859/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00859: loss improved from 0.02921 to 0.02921, saving model to ./model_0.hdf5
Epoch 860/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00860: loss did not improve from 0.02921
Epoch 861/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00861: loss did not improve from 0.02921
Epoch 862/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00862: loss improved from 0.02921 to 0.02921, saving model to ./model_0.hdf5
Epoch 863/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00863: loss did not improve from 0.02921
Epoch 864/1000
32/32 [==============================] - 2s 55ms/step - loss: 0.0292

Epoch 00864: loss did not improve from 0.02921
Epoch 865/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00865: loss did not improve from 0.02921
Epoch 866/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00866: loss did not improve from 0.02921
Epoch 867/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00867: loss did not improve from 0.02921
Epoch 868/1000
32/32 [==============================] - 2s 55ms/step - loss: 0.0292

Epoch 00868: loss did not improve from 0.02921
Epoch 869/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00869: loss did not improve from 0.02921
Epoch 870/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00870: loss did not improve from 0.02921
Epoch 871/1000
32/32 [==============================] - 2s 55ms/step - loss: 0.0292

Epoch 00871: loss did not improve from 0.02921
Epoch 872/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00872: loss improved from 0.02921 to 0.02920, saving model to ./model_0.hdf5
Epoch 873/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00873: loss improved from 0.02920 to 0.02920, saving model to ./model_0.hdf5
Epoch 874/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00874: loss did not improve from 0.02920
Epoch 875/1000
32/32 [==============================] - 2s 55ms/step - loss: 0.0292

Epoch 00875: loss did not improve from 0.02920
Epoch 876/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00876: loss did not improve from 0.02920
Epoch 877/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00877: loss did not improve from 0.02920
Epoch 878/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00878: loss did not improve from 0.02920
Epoch 879/1000
32/32 [==============================] - 2s 55ms/step - loss: 0.0292

Epoch 00879: loss improved from 0.02920 to 0.02920, saving model to ./model_0.hdf5
Epoch 880/1000
32/32 [==============================] - 2s 55ms/step - loss: 0.0292

Epoch 00880: loss improved from 0.02920 to 0.02920, saving model to ./model_0.hdf5
Epoch 881/1000
32/32 [==============================] - 2s 56ms/step - loss: 0.0292

Epoch 00881: loss did not improve from 0.02920
Epoch 882/1000
32/32 [==============================] - 2s 55ms/step - loss: 0.0292

Epoch 00882: loss did not improve from 0.02920
Epoch 883/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00883: loss did not improve from 0.02920
Epoch 884/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00884: loss did not improve from 0.02920
Epoch 885/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00885: loss did not improve from 0.02920
Epoch 886/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00886: loss improved from 0.02920 to 0.02920, saving model to ./model_0.hdf5
Epoch 887/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0292

Epoch 00887: loss improved from 0.02920 to 0.02920, saving model to ./model_0.hdf5
Epoch 888/1000
32/32 [==============================] - 2s 55ms/step - loss: 0.0292

Epoch 00888: loss did not improve from 0.02920
Epoch 889/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00889: loss improved from 0.02920 to 0.02920, saving model to ./model_0.hdf5
Epoch 890/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00890: loss did not improve from 0.02920
Epoch 891/1000
32/32 [==============================] - 2s 55ms/step - loss: 0.0292

Epoch 00891: loss did not improve from 0.02920
Epoch 892/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00892: loss did not improve from 0.02920
Epoch 893/1000
32/32 [==============================] - 2s 55ms/step - loss: 0.0292

Epoch 00893: loss did not improve from 0.02920
Epoch 894/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0292

Epoch 00894: loss did not improve from 0.02920
Epoch 895/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00895: loss did not improve from 0.02920
Epoch 896/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00896: loss did not improve from 0.02920
Epoch 897/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00897: loss did not improve from 0.02920
Epoch 898/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00898: loss improved from 0.02920 to 0.02920, saving model to ./model_0.hdf5
Epoch 899/1000
32/32 [==============================] - 2s 55ms/step - loss: 0.0292

Epoch 00899: loss did not improve from 0.02920
Epoch 900/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00900: loss did not improve from 0.02920
Epoch 901/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00901: loss did not improve from 0.02920
Epoch 902/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00902: loss did not improve from 0.02920
Epoch 903/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00903: loss did not improve from 0.02920
Epoch 904/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00904: loss did not improve from 0.02920
Epoch 905/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00905: loss did not improve from 0.02920
Epoch 906/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00906: loss did not improve from 0.02920
Epoch 907/1000
32/32 [==============================] - 2s 55ms/step - loss: 0.0292

Epoch 00907: loss did not improve from 0.02920
Epoch 908/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00908: loss did not improve from 0.02920
Epoch 909/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00909: loss improved from 0.02920 to 0.02920, saving model to ./model_0.hdf5
Epoch 910/1000
32/32 [==============================] - 2s 55ms/step - loss: 0.0292

Epoch 00910: loss improved from 0.02920 to 0.02919, saving model to ./model_0.hdf5
Epoch 911/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00911: loss did not improve from 0.02919
Epoch 912/1000
32/32 [==============================] - 2s 55ms/step - loss: 0.0292

Epoch 00912: loss did not improve from 0.02919
Epoch 913/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00913: loss did not improve from 0.02919
Epoch 914/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00914: loss did not improve from 0.02919
Epoch 915/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00915: loss improved from 0.02919 to 0.02919, saving model to ./model_0.hdf5
Epoch 916/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00916: loss did not improve from 0.02919
Epoch 917/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0292

Epoch 00917: loss did not improve from 0.02919
Epoch 918/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00918: loss did not improve from 0.02919
Epoch 919/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00919: loss improved from 0.02919 to 0.02919, saving model to ./model_0.hdf5
Epoch 920/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00920: loss did not improve from 0.02919
Epoch 921/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00921: loss did not improve from 0.02919
Epoch 922/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00922: loss did not improve from 0.02919
Epoch 923/1000
32/32 [==============================] - 2s 55ms/step - loss: 0.0292

Epoch 00923: loss improved from 0.02919 to 0.02919, saving model to ./model_0.hdf5
Epoch 924/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00924: loss improved from 0.02919 to 0.02919, saving model to ./model_0.hdf5
Epoch 925/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0292

Epoch 00925: loss did not improve from 0.02919
Epoch 926/1000
32/32 [==============================] - 2s 55ms/step - loss: 0.0292

Epoch 00926: loss did not improve from 0.02919
Epoch 927/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0292

Epoch 00927: loss improved from 0.02919 to 0.02919, saving model to ./model_0.hdf5
Epoch 928/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00928: loss did not improve from 0.02919
Epoch 929/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0292

Epoch 00929: loss did not improve from 0.02919
Epoch 930/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00930: loss did not improve from 0.02919
Epoch 931/1000
32/32 [==============================] - 2s 55ms/step - loss: 0.0292

Epoch 00931: loss improved from 0.02919 to 0.02919, saving model to ./model_0.hdf5
Epoch 932/1000
32/32 [==============================] - 2s 55ms/step - loss: 0.0292

Epoch 00932: loss did not improve from 0.02919
Epoch 933/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00933: loss did not improve from 0.02919
Epoch 934/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0292

Epoch 00934: loss improved from 0.02919 to 0.02919, saving model to ./model_0.hdf5
Epoch 935/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00935: loss improved from 0.02919 to 0.02919, saving model to ./model_0.hdf5
Epoch 936/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00936: loss did not improve from 0.02919
Epoch 937/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00937: loss did not improve from 0.02919
Epoch 938/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0292

Epoch 00938: loss did not improve from 0.02919
Epoch 939/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00939: loss did not improve from 0.02919
Epoch 940/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00940: loss improved from 0.02919 to 0.02919, saving model to ./model_0.hdf5
Epoch 941/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0292

Epoch 00941: loss did not improve from 0.02919
Epoch 942/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00942: loss did not improve from 0.02919
Epoch 943/1000
32/32 [==============================] - 2s 55ms/step - loss: 0.0292

Epoch 00943: loss did not improve from 0.02919
Epoch 944/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00944: loss improved from 0.02919 to 0.02919, saving model to ./model_0.hdf5
Epoch 945/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0292

Epoch 00945: loss did not improve from 0.02919
Epoch 946/1000
32/32 [==============================] - 2s 55ms/step - loss: 0.0292

Epoch 00946: loss improved from 0.02919 to 0.02919, saving model to ./model_0.hdf5
Epoch 947/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00947: loss did not improve from 0.02919
Epoch 948/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0292

Epoch 00948: loss improved from 0.02919 to 0.02919, saving model to ./model_0.hdf5
Epoch 949/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0292

Epoch 00949: loss improved from 0.02919 to 0.02919, saving model to ./model_0.hdf5
Epoch 950/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00950: loss did not improve from 0.02919
Epoch 951/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0292

Epoch 00951: loss did not improve from 0.02919
Epoch 952/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00952: loss did not improve from 0.02919
Epoch 953/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00953: loss did not improve from 0.02919
Epoch 954/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00954: loss did not improve from 0.02919
Epoch 955/1000
32/32 [==============================] - 2s 55ms/step - loss: 0.0292

Epoch 00955: loss improved from 0.02919 to 0.02918, saving model to ./model_0.hdf5
Epoch 956/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0292

Epoch 00956: loss did not improve from 0.02918
Epoch 957/1000
32/32 [==============================] - 2s 55ms/step - loss: 0.0292

Epoch 00957: loss improved from 0.02918 to 0.02918, saving model to ./model_0.hdf5
Epoch 958/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00958: loss did not improve from 0.02918
Epoch 959/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00959: loss did not improve from 0.02918
Epoch 960/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0292

Epoch 00960: loss did not improve from 0.02918
Epoch 961/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0292

Epoch 00961: loss did not improve from 0.02918
Epoch 962/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00962: loss did not improve from 0.02918
Epoch 963/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00963: loss did not improve from 0.02918
Epoch 964/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00964: loss did not improve from 0.02918
Epoch 965/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00965: loss did not improve from 0.02918
Epoch 966/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0292

Epoch 00966: loss did not improve from 0.02918
Epoch 967/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00967: loss improved from 0.02918 to 0.02918, saving model to ./model_0.hdf5
Epoch 968/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00968: loss did not improve from 0.02918
Epoch 969/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00969: loss did not improve from 0.02918
Epoch 970/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0292

Epoch 00970: loss did not improve from 0.02918
Epoch 971/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00971: loss did not improve from 0.02918
Epoch 972/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00972: loss improved from 0.02918 to 0.02918, saving model to ./model_0.hdf5
Epoch 973/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00973: loss did not improve from 0.02918
Epoch 974/1000
32/32 [==============================] - 2s 51ms/step - loss: 0.0292

Epoch 00974: loss did not improve from 0.02918
Epoch 975/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0292

Epoch 00975: loss did not improve from 0.02918
Epoch 976/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0292

Epoch 00976: loss did not improve from 0.02918
Epoch 977/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0292

Epoch 00977: loss improved from 0.02918 to 0.02918, saving model to ./model_0.hdf5
Epoch 978/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0292

Epoch 00978: loss improved from 0.02918 to 0.02918, saving model to ./model_0.hdf5
Epoch 979/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0292

Epoch 00979: loss improved from 0.02918 to 0.02918, saving model to ./model_0.hdf5
Epoch 980/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00980: loss did not improve from 0.02918
Epoch 981/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00981: loss did not improve from 0.02918
Epoch 982/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00982: loss did not improve from 0.02918
Epoch 983/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00983: loss improved from 0.02918 to 0.02918, saving model to ./model_0.hdf5
Epoch 984/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0292

Epoch 00984: loss did not improve from 0.02918
Epoch 985/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00985: loss did not improve from 0.02918
Epoch 986/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0292

Epoch 00986: loss did not improve from 0.02918
Epoch 987/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0292

Epoch 00987: loss did not improve from 0.02918
Epoch 988/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00988: loss improved from 0.02918 to 0.02918, saving model to ./model_0.hdf5
Epoch 989/1000
32/32 [==============================] - 2s 55ms/step - loss: 0.0292

Epoch 00989: loss improved from 0.02918 to 0.02917, saving model to ./model_0.hdf5
Epoch 990/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00990: loss did not improve from 0.02917
Epoch 991/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00991: loss improved from 0.02917 to 0.02917, saving model to ./model_0.hdf5
Epoch 992/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00992: loss did not improve from 0.02917
Epoch 993/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00993: loss did not improve from 0.02917
Epoch 994/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0292

Epoch 00994: loss did not improve from 0.02917
Epoch 995/1000
32/32 [==============================] - 2s 54ms/step - loss: 0.0292

Epoch 00995: loss did not improve from 0.02917
Epoch 996/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0292

Epoch 00996: loss did not improve from 0.02917
Epoch 997/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00997: loss did not improve from 0.02917
Epoch 998/1000
32/32 [==============================] - 2s 52ms/step - loss: 0.0292

Epoch 00998: loss did not improve from 0.02917
Epoch 999/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 00999: loss did not improve from 0.02917
Epoch 1000/1000
32/32 [==============================] - 2s 53ms/step - loss: 0.0292

Epoch 01000: loss did not improve from 0.02917
# reload the best checkpoint of the first autoencoder and give its layers
# unique names so they do not clash with the layers of the second sub-network
model_0 = keras.models.load_model('model_0.hdf5')
edit_name(model_0, 'model_0')

Examine the result

# reconstruct one sample and show the input (left) next to the reconstruction (right)
img = dataset[1430, ...]
img_rec = model_0.predict(img[np.newaxis, ...])
_, ax = plt.subplots(1, 2)
ax[0].imshow(img[..., 0], cmap=plt.get_cmap('gray'), vmin=0, vmax=1)
ax[0].axis('off')
ax[1].imshow(img_rec[0, ..., 0], cmap=plt.get_cmap('gray'), vmin=0, vmax=1)
ax[1].axis('off')
plt.show()
../_images/pca_ae_hierarchy_rotation_15_0.png
# split the trained model: the last 12 layers form the decoder, and the encoder
# maps an image to the output of the latent covariance layer (the single latent code)
decoder_0 = keras.models.Sequential(model_0.layers[-12:])
encoder_0 = keras.models.Model(inputs=model_0.input, outputs=model_0.get_layer('latent_covariance_layer').output)

In my tests, the first autoencoder sometimes failed to capture the rotation of the ellipses at all. Overall, the reconstruction quality is visibly worse than in the earlier parts, where the dataset had only two variables.
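
One quick way to probe what the single latent code has learned is to encode the dataset and correlate the code with the generative variables used to synthesize it. The snippet below is only a rough diagnostic sketch: it assumes the arrays a, b and ang generated above are still in scope, and since the rotation enters the images nonlinearly and periodically, a weak linear correlation with ang is only an indication; the numbers will vary between training runs.

# encode every image with the first encoder and flatten to a 1-D array of codes
codes_0 = encoder_0.predict(dataset, batch_size=500).ravel()
# rough linear-correlation check against the known generative variables
for name, var in zip(['a', 'b', 'ang'], [a, b, ang]):
    print(f'corr(code_0, {name}) = {np.corrcoef(codes_0, var)[0, 1]:.3f}')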

Second autoencoder

I tried different values of \(\lambda\) and settled on 0.3, which offered a good compromise between the separation of the features and the interpretability of the latent codes.
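
For context on what \(\lambda\) is weighting: the covariance layer penalizes correlations between the latent dimensions and reports that penalty as the cov_loss metric in the training log below. The snippet that follows is only an illustrative sketch of one common form of such a penalty (\(\lambda\) times the sum of squared off-diagonal entries of the latent covariance matrix); the model below uses the LatentCovarianceLayer rewritten earlier in this notebook, which may differ in detail.

class CovariancePenaltySketch(keras.layers.Layer):
    """Illustrative lambda-weighted penalty on the off-diagonal latent covariance."""
    def __init__(self, lam, **kwargs):
        super().__init__(**kwargs)
        self.lam = lam

    def call(self, z):
        # z has shape (batch, latent_dim); centre it and form the covariance matrix
        z_centered = z - tf.reduce_mean(z, axis=0, keepdims=True)
        n = tf.cast(tf.shape(z)[0], z.dtype)
        cov = tf.matmul(z_centered, z_centered, transpose_a=True) / n
        # penalize only the correlations between different latent dimensions
        off_diag = cov - tf.linalg.diag(tf.linalg.diag_part(cov))
        cov_loss = tf.reduce_sum(tf.square(off_diag))
        self.add_loss(self.lam * cov_loss)
        self.add_metric(cov_loss, name='cov_loss')
        return z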

keras.backend.clear_session()
encoder_0.trainable = False  # freeze the first sub-network
input_img = keras.layers.Input(shape=[64, 64, 1])
encoded_1 = encoder_gen(input_img)   # new 1-D latent code
encoded_0 = encoder_0(input_img)     # frozen latent code from the first autoencoder
concat = keras.layers.Concatenate()([encoded_0, encoded_1])
batchnorm = keras.layers.BatchNormalization(center=False, scale=False)(concat)
add_loss = LatentCovarianceLayer(0.3)(batchnorm)  # covariance penalty with lambda = 0.3
decoded_1 = decoder_gen(add_loss)
pca_ae = keras.models.Model(input_img, decoded_1)

optimizer = tf.keras.optimizers.Adam(learning_rate=0.002)
pca_ae.compile(optimizer=optimizer, loss='mse')

tempfn = './model_1.hdf5'
# keep the best weights, stop after 50 stagnant epochs, and halve the learning rate after 25
model_cb = keras.callbacks.ModelCheckpoint(tempfn, monitor='loss', save_best_only=True, verbose=1)
early_cb = keras.callbacks.EarlyStopping(monitor='loss', patience=50, verbose=1)
learning_rate_reduction = keras.callbacks.ReduceLROnPlateau(monitor='loss',
                                                            patience=25,
                                                            verbose=1,
                                                            factor=0.5,
                                                            min_lr=0.00001)
cb = [model_cb, early_cb, learning_rate_reduction]

history = pca_ae.fit(dataset, dataset,
                     epochs=1000,
                     batch_size=500,
                     shuffle=True,
                     callbacks=cb)
Epoch 1/1000
32/32 [==============================] - 3s 65ms/step - loss: 0.2513 - cov_loss: 0.0829

Epoch 00001: loss improved from inf to 0.25126, saving model to ./model_1.hdf5
Epoch 2/1000
32/32 [==============================] - 2s 62ms/step - loss: 0.1083 - cov_loss: 0.0172

Epoch 00002: loss improved from 0.25126 to 0.10832, saving model to ./model_1.hdf5
Epoch 3/1000
32/32 [==============================] - 2s 62ms/step - loss: 0.0767 - cov_loss: 0.0173

Epoch 00003: loss improved from 0.10832 to 0.07675, saving model to ./model_1.hdf5
Epoch 4/1000
32/32 [==============================] - 2s 63ms/step - loss: 0.0631 - cov_loss: 0.0127

Epoch 00004: loss improved from 0.07675 to 0.06311, saving model to ./model_1.hdf5
Epoch 5/1000
32/32 [==============================] - 2s 62ms/step - loss: 0.0660 - cov_loss: 0.0200

Epoch 00005: loss did not improve from 0.06311
Epoch 6/1000
32/32 [==============================] - 2s 62ms/step - loss: 0.0597 - cov_loss: 0.0164

Epoch 00006: loss improved from 0.06311 to 0.05969, saving model to ./model_1.hdf5
Epoch 7/1000
32/32 [==============================] - 2s 63ms/step - loss: 0.0546 - cov_loss: 0.0134

Epoch 00007: loss improved from 0.05969 to 0.05458, saving model to ./model_1.hdf5
Epoch 8/1000
32/32 [==============================] - 2s 63ms/step - loss: 0.0594 - cov_loss: 0.0192

Epoch 00008: loss did not improve from 0.05458
Epoch 9/1000
32/32 [==============================] - 2s 62ms/step - loss: 0.0561 - cov_loss: 0.0171

Epoch 00009: loss did not improve from 0.05458
Epoch 10/1000
32/32 [==============================] - 2s 63ms/step - loss: 0.0551 - cov_loss: 0.0167

Epoch 00010: loss did not improve from 0.05458
Epoch 11/1000
32/32 [==============================] - 2s 63ms/step - loss: 0.0518 - cov_loss: 0.0135

Epoch 00011: loss improved from 0.05458 to 0.05180, saving model to ./model_1.hdf5
Epoch 12/1000
32/32 [==============================] - 2s 63ms/step - loss: 0.0494 - cov_loss: 0.0119

Epoch 00012: loss improved from 0.05180 to 0.04940, saving model to ./model_1.hdf5
Epoch 13/1000
32/32 [==============================] - 2s 63ms/step - loss: 0.0516 - cov_loss: 0.0146

Epoch 00013: loss did not improve from 0.04940
Epoch 14/1000
32/32 [==============================] - 2s 63ms/step - loss: 0.0544 - cov_loss: 0.0178

Epoch 00014: loss did not improve from 0.04940
Epoch 15/1000
32/32 [==============================] - 2s 63ms/step - loss: 0.0454 - cov_loss: 0.0091

Epoch 00015: loss improved from 0.04940 to 0.04538, saving model to ./model_1.hdf5
Epoch 16/1000
32/32 [==============================] - 2s 63ms/step - loss: 0.0487 - cov_loss: 0.0120

Epoch 00016: loss did not improve from 0.04538
Epoch 17/1000
32/32 [==============================] - 2s 64ms/step - loss: 0.0472 - cov_loss: 0.0101

Epoch 00017: loss did not improve from 0.04538
Epoch 18/1000
32/32 [==============================] - 2s 63ms/step - loss: 0.0438 - cov_loss: 0.0081

Epoch 00018: loss improved from 0.04538 to 0.04377, saving model to ./model_1.hdf5
Epoch 19/1000
32/32 [==============================] - 2s 64ms/step - loss: 0.0471 - cov_loss: 0.0116

Epoch 00019: loss did not improve from 0.04377
Epoch 20/1000
32/32 [==============================] - 2s 63ms/step - loss: 0.0484 - cov_loss: 0.0125

Epoch 00020: loss did not improve from 0.04377
Epoch 21/1000
32/32 [==============================] - 2s 63ms/step - loss: 0.0481 - cov_loss: 0.0120

Epoch 00021: loss did not improve from 0.04377
Epoch 22/1000
32/32 [==============================] - 2s 63ms/step - loss: 0.0470 - cov_loss: 0.0115

Epoch 00022: loss did not improve from 0.04377
Epoch 23/1000
32/32 [==============================] - 2s 63ms/step - loss: 0.0480 - cov_loss: 0.0127

Epoch 00023: loss did not improve from 0.04377
Epoch 24/1000
32/32 [==============================] - 2s 63ms/step - loss: 0.0438 - cov_loss: 0.0090

Epoch 00024: loss did not improve from 0.04377
Epoch 25/1000
32/32 [==============================] - 2s 62ms/step - loss: 0.0495 - cov_loss: 0.0146

Epoch 00025: loss did not improve from 0.04377
Epoch 26/1000
32/32 [==============================] - 2s 63ms/step - loss: 0.0474 - cov_loss: 0.0129

Epoch 00026: loss did not improve from 0.04377
Epoch 27/1000
32/32 [==============================] - 2s 64ms/step - loss: 0.0441 - cov_loss: 0.0091

Epoch 00027: loss did not improve from 0.04377
Epoch 28/1000
32/32 [==============================] - 2s 63ms/step - loss: 0.0459 - cov_loss: 0.0111

Epoch 00028: loss did not improve from 0.04377
Epoch 29/1000
32/32 [==============================] - 2s 62ms/step - loss: 0.0458 - cov_loss: 0.0108

Epoch 00029: loss did not improve from 0.04377
Epoch 30/1000
32/32 [==============================] - 2s 64ms/step - loss: 0.0416 - cov_loss: 0.0071

Epoch 00030: loss improved from 0.04377 to 0.04157, saving model to ./model_1.hdf5
Epoch 31/1000
32/32 [==============================] - 2s 63ms/step - loss: 0.0420 - cov_loss: 0.0079

Epoch 00031: loss did not improve from 0.04157
Epoch 32/1000
32/32 [==============================] - 2s 62ms/step - loss: 0.0445 - cov_loss: 0.0106

Epoch 00032: loss did not improve from 0.04157
Epoch 33/1000
32/32 [==============================] - 2s 63ms/step - loss: 0.0433 - cov_loss: 0.0096

Epoch 00033: loss did not improve from 0.04157
Epoch 34/1000
32/32 [==============================] - 2s 63ms/step - loss: 0.0430 - cov_loss: 0.0091

Epoch 00034: loss did not improve from 0.04157
Epoch 35/1000
32/32 [==============================] - 2s 63ms/step - loss: 0.0418 - cov_loss: 0.0082

Epoch 00035: loss did not improve from 0.04157
Epoch 36/1000
32/32 [==============================] - 2s 63ms/step - loss: 0.0460 - cov_loss: 0.0121

Epoch 00036: loss did not improve from 0.04157
Epoch 37/1000
32/32 [==============================] - 2s 63ms/step - loss: 0.0445 - cov_loss: 0.0111

Epoch 00037: loss did not improve from 0.04157
Epoch 38/1000
32/32 [==============================] - 2s 63ms/step - loss: 0.0431 - cov_loss: 0.0096

Epoch 00038: loss did not improve from 0.04157
Epoch 39/1000
32/32 [==============================] - 2s 63ms/step - loss: 0.0436 - cov_loss: 0.0101

Epoch 00039: loss did not improve from 0.04157
Epoch 40/1000
32/32 [==============================] - 2s 64ms/step - loss: 0.0440 - cov_loss: 0.0095

Epoch 00040: loss did not improve from 0.04157
Epoch 41/1000
32/32 [==============================] - 2s 62ms/step - loss: 0.0423 - cov_loss: 0.0091

Epoch 00041: loss did not improve from 0.04157
Epoch 42/1000
32/32 [==============================] - 2s 64ms/step - loss: 0.0473 - cov_loss: 0.0136

Epoch 00042: loss did not improve from 0.04157
Epoch 43/1000
32/32 [==============================] - 2s 63ms/step - loss: 0.0457 - cov_loss: 0.0124

Epoch 00043: loss did not improve from 0.04157
Epoch 44/1000
32/32 [==============================] - 2s 64ms/step - loss: 0.0424 - cov_loss: 0.0094

Epoch 00044: loss did not improve from 0.04157
Epoch 45/1000
32/32 [==============================] - 2s 65ms/step - loss: 0.0442 - cov_loss: 0.0111

Epoch 00045: loss did not improve from 0.04157
Epoch 46/1000
32/32 [==============================] - 2s 64ms/step - loss: 0.0411 - cov_loss: 0.0085

Epoch 00046: loss improved from 0.04157 to 0.04111, saving model to ./model_1.hdf5
Epoch 47/1000
32/32 [==============================] - 2s 64ms/step - loss: 0.0408 - cov_loss: 0.0083

Epoch 00047: loss improved from 0.04111 to 0.04080, saving model to ./model_1.hdf5
Epoch 48/1000
32/32 [==============================] - 2s 65ms/step - loss: 0.0417 - cov_loss: 0.0091

Epoch 00048: loss did not improve from 0.04080
Epoch 49/1000
32/32 [==============================] - 2s 64ms/step - loss: 0.0425 - cov_loss: 0.0091

Epoch 00049: loss did not improve from 0.04080
Epoch 50/1000
32/32 [==============================] - 2s 64ms/step - loss: 0.0420 - cov_loss: 0.0097

Epoch 00050: loss did not improve from 0.04080
Epoch 51/1000
32/32 [==============================] - 2s 65ms/step - loss: 0.0410 - cov_loss: 0.0090

Epoch 00051: loss did not improve from 0.04080
Epoch 52/1000
32/32 [==============================] - 2s 64ms/step - loss: 0.0409 - cov_loss: 0.0084

Epoch 00052: loss did not improve from 0.04080
Epoch 53/1000
32/32 [==============================] - 2s 64ms/step - loss: 0.0426 - cov_loss: 0.0100

Epoch 00053: loss did not improve from 0.04080
Epoch 54/1000
32/32 [==============================] - 2s 65ms/step - loss: 0.0409 - cov_loss: 0.0091

Epoch 00054: loss did not improve from 0.04080
Epoch 55/1000
32/32 [==============================] - 2s 65ms/step - loss: 0.0414 - cov_loss: 0.0092

Epoch 00055: loss did not improve from 0.04080
Epoch 56/1000
32/32 [==============================] - 2s 64ms/step - loss: 0.0405 - cov_loss: 0.0082

Epoch 00056: loss improved from 0.04080 to 0.04049, saving model to ./model_1.hdf5
Epoch 57/1000
32/32 [==============================] - 2s 63ms/step - loss: 0.0380 - cov_loss: 0.0058

Epoch 00057: loss improved from 0.04049 to 0.03804, saving model to ./model_1.hdf5
Epoch 58/1000
32/32 [==============================] - 2s 65ms/step - loss: 0.0416 - cov_loss: 0.0091

Epoch 00058: loss did not improve from 0.03804
Epoch 59/1000
32/32 [==============================] - 2s 65ms/step - loss: 0.0410 - cov_loss: 0.0083

Epoch 00059: loss did not improve from 0.03804
Epoch 60/1000
32/32 [==============================] - 2s 65ms/step - loss: 0.0425 - cov_loss: 0.0100

Epoch 00060: loss did not improve from 0.03804
Epoch 61/1000
32/32 [==============================] - 2s 65ms/step - loss: 0.0400 - cov_loss: 0.0079

Epoch 00061: loss did not improve from 0.03804
Epoch 62/1000
32/32 [==============================] - 2s 64ms/step - loss: 0.0392 - cov_loss: 0.0075

Epoch 00062: loss did not improve from 0.03804
Epoch 63/1000
32/32 [==============================] - 2s 64ms/step - loss: 0.0387 - cov_loss: 0.0064

Epoch 00063: loss did not improve from 0.03804
Epoch 64/1000
32/32 [==============================] - 2s 65ms/step - loss: 0.0386 - cov_loss: 0.0072

Epoch 00064: loss did not improve from 0.03804
Epoch 65/1000
32/32 [==============================] - 2s 64ms/step - loss: 0.0390 - cov_loss: 0.0071

Epoch 00065: loss did not improve from 0.03804
Epoch 66/1000
32/32 [==============================] - 2s 65ms/step - loss: 0.0370 - cov_loss: 0.0058

Epoch 00066: loss improved from 0.03804 to 0.03700, saving model to ./model_1.hdf5
Epoch 67/1000
32/32 [==============================] - 2s 64ms/step - loss: 0.0383 - cov_loss: 0.0070

Epoch 00067: loss did not improve from 0.03700
Epoch 68/1000
32/32 [==============================] - 2s 64ms/step - loss: 0.0380 - cov_loss: 0.0054

Epoch 00068: loss did not improve from 0.03700
Epoch 69/1000
32/32 [==============================] - 2s 64ms/step - loss: 0.0382 - cov_loss: 0.0072

Epoch 00069: loss did not improve from 0.03700
Epoch 70/1000
32/32 [==============================] - 2s 64ms/step - loss: 0.0392 - cov_loss: 0.0070

Epoch 00070: loss did not improve from 0.03700
Epoch 71/1000
32/32 [==============================] - 2s 63ms/step - loss: 0.0388 - cov_loss: 0.0071

Epoch 00071: loss did not improve from 0.03700
Epoch 72/1000
32/32 [==============================] - 2s 64ms/step - loss: 0.0395 - cov_loss: 0.0082

Epoch 00072: loss did not improve from 0.03700
Epoch 73/1000
32/32 [==============================] - 2s 63ms/step - loss: 0.0379 - cov_loss: 0.0067

Epoch 00073: loss did not improve from 0.03700
Epoch 74/1000
32/32 [==============================] - 2s 64ms/step - loss: 0.0388 - cov_loss: 0.0077

Epoch 00074: loss did not improve from 0.03700
Epoch 75/1000
32/32 [==============================] - 2s 64ms/step - loss: 0.0378 - cov_loss: 0.0062

Epoch 00075: loss did not improve from 0.03700
Epoch 76/1000
32/32 [==============================] - 2s 64ms/step - loss: 0.0364 - cov_loss: 0.0057

Epoch 00076: loss improved from 0.03700 to 0.03641, saving model to ./model_1.hdf5
Epoch 77/1000
32/32 [==============================] - 2s 65ms/step - loss: 0.0384 - cov_loss: 0.0064

Epoch 00077: loss did not improve from 0.03641
Epoch 78/1000
32/32 [==============================] - 2s 64ms/step - loss: 0.0359 - cov_loss: 0.0049

Epoch 00078: loss improved from 0.03641 to 0.03586, saving model to ./model_1.hdf5
Epoch 79/1000
32/32 [==============================] - 2s 64ms/step - loss: 0.0375 - cov_loss: 0.0067

Epoch 00079: loss did not improve from 0.03586
Epoch 80/1000
32/32 [==============================] - 2s 64ms/step - loss: 0.0375 - cov_loss: 0.0064

Epoch 00080: loss did not improve from 0.03586
Epoch 81/1000
32/32 [==============================] - 2s 65ms/step - loss: 0.0360 - cov_loss: 0.0055

Epoch 00081: loss did not improve from 0.03586
Epoch 82/1000
32/32 [==============================] - 2s 65ms/step - loss: 0.0391 - cov_loss: 0.0082

Epoch 00082: loss did not improve from 0.03586
Epoch 83/1000
32/32 [==============================] - 2s 65ms/step - loss: 0.0377 - cov_loss: 0.0070

Epoch 00083: loss did not improve from 0.03586
Epoch 84/1000
32/32 [==============================] - 2s 66ms/step - loss: 0.0360 - cov_loss: 0.0054

Epoch 00084: loss did not improve from 0.03586
Epoch 85/1000
32/32 [==============================] - 2s 65ms/step - loss: 0.0405 - cov_loss: 0.0090

Epoch 00085: loss did not improve from 0.03586
Epoch 86/1000
32/32 [==============================] - 2s 65ms/step - loss: 0.0365 - cov_loss: 0.0052

Epoch 00086: loss did not improve from 0.03586
Epoch 87/1000
32/32 [==============================] - 2s 66ms/step - loss: 0.0363 - cov_loss: 0.0052

Epoch 00087: loss did not improve from 0.03586
Epoch 88/1000
32/32 [==============================] - 2s 65ms/step - loss: 0.0369 - cov_loss: 0.0058

Epoch 00088: loss did not improve from 0.03586
Epoch 89/1000
32/32 [==============================] - 2s 64ms/step - loss: 0.0361 - cov_loss: 0.0054

Epoch 00089: loss did not improve from 0.03586
Epoch 90/1000
32/32 [==============================] - 2s 64ms/step - loss: 0.0360 - cov_loss: 0.0052

Epoch 00090: loss did not improve from 0.03586
Epoch 91/1000
32/32 [==============================] - 2s 64ms/step - loss: 0.0350 - cov_loss: 0.0046

Epoch 00091: loss improved from 0.03586 to 0.03497, saving model to ./model_1.hdf5
Epoch 92/1000
32/32 [==============================] - 2s 64ms/step - loss: 0.0356 - cov_loss: 0.0053

Epoch 00092: loss did not improve from 0.03497
Epoch 93/1000
32/32 [==============================] - 2s 66ms/step - loss: 0.0379 - cov_loss: 0.0064

Epoch 00093: loss did not improve from 0.03497
Epoch 94/1000
32/32 [==============================] - 2s 64ms/step - loss: 0.0389 - cov_loss: 0.0082

Epoch 00094: loss did not improve from 0.03497
Epoch 95/1000
32/32 [==============================] - 2s 64ms/step - loss: 0.0368 - cov_loss: 0.0061

Epoch 00095: loss did not improve from 0.03497
Epoch 96/1000
32/32 [==============================] - 2s 63ms/step - loss: 0.0378 - cov_loss: 0.0056

Epoch 00096: loss did not improve from 0.03497

...(training log abridged: over the next several hundred epochs the loss creeps down from about 0.035, and the checkpoint ./model_1.hdf5 is rewritten each time the loss improves)...

Epoch 00272: ReduceLROnPlateau reducing learning rate to 0.0010000000474974513.
...
Epoch 00348: ReduceLROnPlateau reducing learning rate to 0.0005000000237487257.
...
Epoch 00397: ReduceLROnPlateau reducing learning rate to 0.0002500000118743628.
...
Epoch 00436: ReduceLROnPlateau reducing learning rate to 0.0001250000059371814.
...
Epoch 457/1000
32/32 [==============================] - 2s 63ms/step - loss: 0.0245 - cov_loss: 0.0016

Epoch 00457: loss improved from 0.02502 to 0.02445, saving model to ./model_1.hdf5
...
Epoch 00482: ReduceLROnPlateau reducing learning rate to 6.25000029685907e-05.
...
Epoch 00507: loss did not improve from 0.02445

Epoch 00507: ReduceLROnPlateau reducing learning rate to 3.125000148429535e-05.
Epoch 00507: early stopping
model_1 = keras.models.load_model('model_1.hdf5', custom_objects={"LatentCovarianceLayer": LatentCovarianceLayer})

Examine the result

img = dataset[1430, ...]
img_rec = model_1.predict(img[np.newaxis,...])
_, ax = plt.subplots(1, 2)
ax[0].imshow(img[...,0], cmap=plt.get_cmap('gray'), vmin=0, vmax=1)
ax[0].axis('off')
ax[1].imshow(img_rec[0,...,0], cmap=plt.get_cmap('gray'), vmin=0, vmax=1)
ax[1].axis('off')
plt.show()
../_images/pca_ae_hierarchy_rotation_22_0.png
decoder_1 = keras.models.Sequential(model_1.layers[-12:])
encoder_1 = keras.models.Model(inputs=model_1.input, outputs=model_1.get_layer('latent_covariance_layer').output)

The improvement in reconstruction is clear once the second latent code is introduced.
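
To put a number on this, a minimal sketch (added here for illustration, not part of the original run) is to compute the mean per-pixel reconstruction error of model_1 over the full dataset; it relies only on model_1 and dataset as defined above.

# Average per-pixel squared error of model_1 over the whole dataset.
# batch_size only limits memory use during prediction.
recon = model_1.predict(dataset, batch_size=500)
print(f"mean reconstruction MSE: {np.mean((dataset - recon) ** 2):.4f}")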

pca_ae_decoder = keras.models.Sequential(model_1.layers[-12:])

We can have a look at how the latent codes behave, just as we did in previous parts of this series.

vals = [-1, -0.5, 0, 0.5, 1, 2]
_, ax = plt.subplots(1, len(vals), figsize=(12, 3))
for i in range(len(vals)):
    img_dec = pca_ae_decoder.predict([[vals[i], 0]])
    ax[i].imshow(img_dec[0,...,0], cmap=plt.get_cmap('gray'), vmin=0, vmax=1)
    ax[i].axis("off")
    ax[i].text(0, 5, f"z=({vals[i]}, 0)", c='w')
plt.show()
../_images/pca_ae_hierarchy_rotation_27_0.png
vals = [-2, -1, 0, 1, 2, 3]
_, ax = plt.subplots(1, len(vals), figsize=(12, 3))
for i in range(len(vals)):
    img_dec = pca_ae_decoder.predict([[0, vals[i]]])
    ax[i].imshow(img_dec[0,...,0], cmap=plt.get_cmap('gray'), vmin=0, vmax=1)
    ax[i].axis("off")
    ax[i].text(0, 5, f"z=(0, {vals[i]})", c='w')
plt.show()
../_images/pca_ae_hierarchy_rotation_28_0.png

It is quite interesting to observe that the autoencoder separates the primary features (size and axes) just as it did before with the dataset of 2-variable ellipses. The rotation, however, appears to be encoded in both latent codes.
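
To check this observation more directly, here is a hedged sketch (added for illustration; it uses only encoder_1, dataset and the generator array ang defined earlier) that encodes the whole dataset and scatters each latent code against the true rotation angle. The code that carries the rotation information should show a visible trend.

# Scatter each of the two latent codes against the ground-truth rotation angle.
# Assumes `encoder_1`, `dataset` and `ang` from the cells above.
codes = encoder_1.predict(dataset, batch_size=500)  # shape (N, 2)
_, ax = plt.subplots(1, 2, figsize=(10, 4))
for i in range(2):
    ax[i].scatter(ang, codes[:, i], s=1, alpha=0.3)
    ax[i].set_xlabel("rotation angle (deg)")
    ax[i].set_ylabel(f"latent code {i}")
plt.show()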

Third autoencoder

Now we can add one more code to the latent space.

keras.backend.clear_session()
encoder_1.trainable = False
input_img = keras.layers.Input(shape=[64, 64, 1])
encoded_2 = encoder_gen(input_img)
encoded_1 = encoder_1(input_img)
concat = keras.layers.Concatenate()([encoded_1, encoded_2])
batchnorm = keras.layers.BatchNormalization(center=False, scale=False)(concat)
add_loss = LatentCovarianceLayer(0.3)(batchnorm)
decoded_2 = decoder_gen(add_loss)
pca_ae = keras.models.Model(input_img, decoded_2)
optimizer = tf.keras.optimizers.Adam(learning_rate=0.002)
pca_ae.compile(optimizer=optimizer, loss='mse')

tempfn = './model_2.hdf5'
model_cb = keras.callbacks.ModelCheckpoint(tempfn, monitor='loss', save_best_only=True, verbose=1)
early_cb = keras.callbacks.EarlyStopping(monitor='loss', patience=50, verbose=1)
learning_rate_reduction = keras.callbacks.ReduceLROnPlateau(monitor='loss',
                                                            patience=25,
                                                            verbose=1,
                                                            factor=0.5,
                                                            min_lr=0.00001)
cb = [model_cb, early_cb, learning_rate_reduction]

history=pca_ae.fit(dataset, dataset,
                   epochs=1000,
                   batch_size=500,
                   shuffle=True,
                   callbacks=cb)
Epoch 1/1000
32/32 [==============================] - 4s 76ms/step - loss: 0.2266 - cov_loss: 0.0565

Epoch 00001: loss improved from inf to 0.22656, saving model to ./model_2.hdf5
Epoch 2/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.1157 - cov_loss: 0.0148

Epoch 00002: loss improved from 0.22656 to 0.11567, saving model to ./model_2.hdf5
Epoch 3/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0817 - cov_loss: 0.0212

Epoch 00003: loss improved from 0.11567 to 0.08168, saving model to ./model_2.hdf5
Epoch 4/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0687 - cov_loss: 0.0185

Epoch 00004: loss improved from 0.08168 to 0.06870, saving model to ./model_2.hdf5
Epoch 5/1000
32/32 [==============================] - 2s 73ms/step - loss: 0.0561 - cov_loss: 0.0107

Epoch 00005: loss improved from 0.06870 to 0.05610, saving model to ./model_2.hdf5
Epoch 6/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0592 - cov_loss: 0.0161

Epoch 00006: loss did not improve from 0.05610
Epoch 7/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0534 - cov_loss: 0.0128

Epoch 00007: loss improved from 0.05610 to 0.05337, saving model to ./model_2.hdf5
Epoch 8/1000
32/32 [==============================] - 2s 73ms/step - loss: 0.0505 - cov_loss: 0.0114

Epoch 00008: loss improved from 0.05337 to 0.05046, saving model to ./model_2.hdf5
Epoch 9/1000
32/32 [==============================] - 2s 73ms/step - loss: 0.0481 - cov_loss: 0.0104

Epoch 00009: loss improved from 0.05046 to 0.04814, saving model to ./model_2.hdf5
Epoch 10/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0510 - cov_loss: 0.0139

Epoch 00010: loss did not improve from 0.04814
Epoch 11/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0487 - cov_loss: 0.0119

Epoch 00011: loss did not improve from 0.04814
Epoch 12/1000
32/32 [==============================] - 2s 73ms/step - loss: 0.0473 - cov_loss: 0.0110

Epoch 00012: loss improved from 0.04814 to 0.04727, saving model to ./model_2.hdf5
Epoch 13/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0456 - cov_loss: 0.0112

Epoch 00013: loss improved from 0.04727 to 0.04558, saving model to ./model_2.hdf5
Epoch 14/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0446 - cov_loss: 0.0107

Epoch 00014: loss improved from 0.04558 to 0.04464, saving model to ./model_2.hdf5
Epoch 15/1000
32/32 [==============================] - 2s 73ms/step - loss: 0.0424 - cov_loss: 0.0097

Epoch 00015: loss improved from 0.04464 to 0.04244, saving model to ./model_2.hdf5
Epoch 16/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0435 - cov_loss: 0.0105

Epoch 00016: loss did not improve from 0.04244
Epoch 17/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0452 - cov_loss: 0.0112

Epoch 00017: loss did not improve from 0.04244
Epoch 18/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0406 - cov_loss: 0.0092

Epoch 00018: loss improved from 0.04244 to 0.04056, saving model to ./model_2.hdf5
Epoch 19/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0429 - cov_loss: 0.0116

Epoch 00019: loss did not improve from 0.04056
Epoch 20/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0411 - cov_loss: 0.0097

Epoch 00020: loss did not improve from 0.04056
Epoch 21/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0394 - cov_loss: 0.0077

Epoch 00021: loss improved from 0.04056 to 0.03942, saving model to ./model_2.hdf5
Epoch 22/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0397 - cov_loss: 0.0090

Epoch 00022: loss did not improve from 0.03942
Epoch 23/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0386 - cov_loss: 0.0092

Epoch 00023: loss improved from 0.03942 to 0.03864, saving model to ./model_2.hdf5
Epoch 24/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0433 - cov_loss: 0.0136

Epoch 00024: loss did not improve from 0.03864
Epoch 25/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0412 - cov_loss: 0.0113

Epoch 00025: loss did not improve from 0.03864
Epoch 26/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0408 - cov_loss: 0.0117

Epoch 00026: loss did not improve from 0.03864
Epoch 27/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0388 - cov_loss: 0.0093

Epoch 00027: loss did not improve from 0.03864
Epoch 28/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0407 - cov_loss: 0.0117

Epoch 00028: loss did not improve from 0.03864
Epoch 29/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0395 - cov_loss: 0.0103

Epoch 00029: loss did not improve from 0.03864
Epoch 30/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0369 - cov_loss: 0.0083

Epoch 00030: loss improved from 0.03864 to 0.03689, saving model to ./model_2.hdf5
Epoch 31/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0383 - cov_loss: 0.0106

Epoch 00031: loss did not improve from 0.03689
Epoch 32/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0375 - cov_loss: 0.0094

Epoch 00032: loss did not improve from 0.03689
Epoch 33/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0357 - cov_loss: 0.0084

Epoch 00033: loss improved from 0.03689 to 0.03573, saving model to ./model_2.hdf5
Epoch 34/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0350 - cov_loss: 0.0077

Epoch 00034: loss improved from 0.03573 to 0.03498, saving model to ./model_2.hdf5
Epoch 35/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0370 - cov_loss: 0.0104

Epoch 00035: loss did not improve from 0.03498
Epoch 36/1000
32/32 [==============================] - 2s 73ms/step - loss: 0.0357 - cov_loss: 0.0077

Epoch 00036: loss did not improve from 0.03498
Epoch 37/1000
32/32 [==============================] - 2s 73ms/step - loss: 0.0362 - cov_loss: 0.0096

Epoch 00037: loss did not improve from 0.03498
Epoch 38/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0339 - cov_loss: 0.0073

Epoch 00038: loss improved from 0.03498 to 0.03388, saving model to ./model_2.hdf5
Epoch 39/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0338 - cov_loss: 0.0071

Epoch 00039: loss improved from 0.03388 to 0.03383, saving model to ./model_2.hdf5
Epoch 40/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0368 - cov_loss: 0.0091

Epoch 00040: loss did not improve from 0.03383
Epoch 41/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0349 - cov_loss: 0.0088

Epoch 00041: loss did not improve from 0.03383
Epoch 42/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0378 - cov_loss: 0.0104

Epoch 00042: loss did not improve from 0.03383
Epoch 43/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0338 - cov_loss: 0.0073

Epoch 00043: loss improved from 0.03383 to 0.03376, saving model to ./model_2.hdf5
Epoch 44/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0331 - cov_loss: 0.0078

Epoch 00044: loss improved from 0.03376 to 0.03315, saving model to ./model_2.hdf5
Epoch 45/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0371 - cov_loss: 0.0110

Epoch 00045: loss did not improve from 0.03315
Epoch 46/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0347 - cov_loss: 0.0094

Epoch 00046: loss did not improve from 0.03315
Epoch 47/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0335 - cov_loss: 0.0085

Epoch 00047: loss did not improve from 0.03315
Epoch 48/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0349 - cov_loss: 0.0091

Epoch 00048: loss did not improve from 0.03315
Epoch 49/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0352 - cov_loss: 0.0086

Epoch 00049: loss did not improve from 0.03315
Epoch 50/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0344 - cov_loss: 0.0092

Epoch 00050: loss did not improve from 0.03315
Epoch 51/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0373 - cov_loss: 0.0122

Epoch 00051: loss did not improve from 0.03315
Epoch 52/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0335 - cov_loss: 0.0086

Epoch 00052: loss did not improve from 0.03315
Epoch 53/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0349 - cov_loss: 0.0098

Epoch 00053: loss did not improve from 0.03315
Epoch 54/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0324 - cov_loss: 0.0082

Epoch 00054: loss improved from 0.03315 to 0.03243, saving model to ./model_2.hdf5
Epoch 55/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0337 - cov_loss: 0.0085

Epoch 00055: loss did not improve from 0.03243
Epoch 56/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0341 - cov_loss: 0.0094

Epoch 00056: loss did not improve from 0.03243
Epoch 57/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0326 - cov_loss: 0.0076

Epoch 00057: loss did not improve from 0.03243
Epoch 58/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0358 - cov_loss: 0.0105

Epoch 00058: loss did not improve from 0.03243
Epoch 59/1000
32/32 [==============================] - 2s 73ms/step - loss: 0.0318 - cov_loss: 0.0079

Epoch 00059: loss improved from 0.03243 to 0.03177, saving model to ./model_2.hdf5
Epoch 60/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0343 - cov_loss: 0.0095

Epoch 00060: loss did not improve from 0.03177
Epoch 61/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0319 - cov_loss: 0.0072

Epoch 00061: loss did not improve from 0.03177
Epoch 62/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0318 - cov_loss: 0.0081

Epoch 00062: loss did not improve from 0.03177
Epoch 63/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0339 - cov_loss: 0.0095

Epoch 00063: loss did not improve from 0.03177
Epoch 64/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0345 - cov_loss: 0.0104

Epoch 00064: loss did not improve from 0.03177
Epoch 65/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0324 - cov_loss: 0.0079

Epoch 00065: loss did not improve from 0.03177
Epoch 66/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0294 - cov_loss: 0.0067

Epoch 00066: loss improved from 0.03177 to 0.02939, saving model to ./model_2.hdf5
Epoch 67/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0301 - cov_loss: 0.0067

Epoch 00067: loss did not improve from 0.02939
Epoch 68/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0334 - cov_loss: 0.0090

Epoch 00068: loss did not improve from 0.02939
Epoch 69/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0342 - cov_loss: 0.0111

Epoch 00069: loss did not improve from 0.02939
Epoch 70/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0311 - cov_loss: 0.0074

Epoch 00070: loss did not improve from 0.02939
Epoch 71/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0343 - cov_loss: 0.0100

Epoch 00071: loss did not improve from 0.02939
Epoch 72/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0321 - cov_loss: 0.0082

Epoch 00072: loss did not improve from 0.02939
Epoch 73/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0300 - cov_loss: 0.0065

Epoch 00073: loss did not improve from 0.02939
Epoch 74/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0300 - cov_loss: 0.0066

Epoch 00074: loss did not improve from 0.02939
Epoch 75/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0334 - cov_loss: 0.0092

Epoch 00075: loss did not improve from 0.02939
Epoch 76/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0313 - cov_loss: 0.0077

Epoch 00076: loss did not improve from 0.02939
Epoch 77/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0319 - cov_loss: 0.0088

Epoch 00077: loss did not improve from 0.02939
Epoch 78/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0309 - cov_loss: 0.0081

Epoch 00078: loss did not improve from 0.02939
Epoch 79/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0316 - cov_loss: 0.0082

Epoch 00079: loss did not improve from 0.02939
Epoch 80/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0302 - cov_loss: 0.0075

Epoch 00080: loss did not improve from 0.02939
Epoch 81/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0287 - cov_loss: 0.0060

Epoch 00081: loss improved from 0.02939 to 0.02867, saving model to ./model_2.hdf5
Epoch 82/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0322 - cov_loss: 0.0090

Epoch 00082: loss did not improve from 0.02867
Epoch 83/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0301 - cov_loss: 0.0073

Epoch 00083: loss did not improve from 0.02867
Epoch 84/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0299 - cov_loss: 0.0067

Epoch 00084: loss did not improve from 0.02867
Epoch 85/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0355 - cov_loss: 0.0110

Epoch 00085: loss did not improve from 0.02867
Epoch 86/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0332 - cov_loss: 0.0086

Epoch 00086: loss did not improve from 0.02867
Epoch 87/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0332 - cov_loss: 0.0091

Epoch 00087: loss did not improve from 0.02867
Epoch 88/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0332 - cov_loss: 0.0088

Epoch 00088: loss did not improve from 0.02867
Epoch 89/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0306 - cov_loss: 0.0079

Epoch 00089: loss did not improve from 0.02867
Epoch 90/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0333 - cov_loss: 0.0102

Epoch 00090: loss did not improve from 0.02867
Epoch 91/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0305 - cov_loss: 0.0080

Epoch 00091: loss did not improve from 0.02867
Epoch 92/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0288 - cov_loss: 0.0063

Epoch 00092: loss did not improve from 0.02867
Epoch 93/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0306 - cov_loss: 0.0066

Epoch 00093: loss did not improve from 0.02867
Epoch 94/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0302 - cov_loss: 0.0078

Epoch 00094: loss did not improve from 0.02867
Epoch 95/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0306 - cov_loss: 0.0081

Epoch 00095: loss did not improve from 0.02867
Epoch 96/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0340 - cov_loss: 0.0095

Epoch 00096: loss did not improve from 0.02867
Epoch 97/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0304 - cov_loss: 0.0082

Epoch 00097: loss did not improve from 0.02867
Epoch 98/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0307 - cov_loss: 0.0079

Epoch 00098: loss did not improve from 0.02867
Epoch 99/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0296 - cov_loss: 0.0071

Epoch 00099: loss did not improve from 0.02867
Epoch 100/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0308 - cov_loss: 0.0073

Epoch 00100: loss did not improve from 0.02867
Epoch 101/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0298 - cov_loss: 0.0066

Epoch 00101: loss did not improve from 0.02867
Epoch 102/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0315 - cov_loss: 0.0080

Epoch 00102: loss did not improve from 0.02867
Epoch 103/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0302 - cov_loss: 0.0073

Epoch 00103: loss did not improve from 0.02867
Epoch 104/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0302 - cov_loss: 0.0079

Epoch 00104: loss did not improve from 0.02867
Epoch 105/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0306 - cov_loss: 0.0082

Epoch 00105: loss did not improve from 0.02867
Epoch 106/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0317 - cov_loss: 0.0084

Epoch 00106: loss did not improve from 0.02867

Epoch 00106: ReduceLROnPlateau reducing learning rate to 0.0010000000474974513.
Epoch 107/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0289 - cov_loss: 0.0067

Epoch 00107: loss did not improve from 0.02867
Epoch 108/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0300 - cov_loss: 0.0072

Epoch 00108: loss did not improve from 0.02867
Epoch 109/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0289 - cov_loss: 0.0071

Epoch 00109: loss did not improve from 0.02867
Epoch 110/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0276 - cov_loss: 0.0065

Epoch 00110: loss improved from 0.02867 to 0.02761, saving model to ./model_2.hdf5
Epoch 111/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0281 - cov_loss: 0.0070

Epoch 00111: loss did not improve from 0.02761
Epoch 112/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0306 - cov_loss: 0.0083

Epoch 00112: loss did not improve from 0.02761
Epoch 113/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0278 - cov_loss: 0.0069

Epoch 00113: loss did not improve from 0.02761
Epoch 114/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0298 - cov_loss: 0.0075

Epoch 00114: loss did not improve from 0.02761
Epoch 115/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0287 - cov_loss: 0.0071

Epoch 00115: loss did not improve from 0.02761
Epoch 116/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0279 - cov_loss: 0.0059

Epoch 00116: loss did not improve from 0.02761
Epoch 117/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0279 - cov_loss: 0.0067

Epoch 00117: loss did not improve from 0.02761
Epoch 118/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0299 - cov_loss: 0.0077

Epoch 00118: loss did not improve from 0.02761
Epoch 119/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0287 - cov_loss: 0.0075

Epoch 00119: loss did not improve from 0.02761
Epoch 120/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0289 - cov_loss: 0.0072

Epoch 00120: loss did not improve from 0.02761
Epoch 121/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0299 - cov_loss: 0.0078

Epoch 00121: loss did not improve from 0.02761
Epoch 122/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0289 - cov_loss: 0.0053

Epoch 00122: loss did not improve from 0.02761
Epoch 123/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0283 - cov_loss: 0.0061

Epoch 00123: loss did not improve from 0.02761
Epoch 124/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0278 - cov_loss: 0.0058

Epoch 00124: loss did not improve from 0.02761
Epoch 125/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0288 - cov_loss: 0.0074

Epoch 00125: loss did not improve from 0.02761
Epoch 126/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0294 - cov_loss: 0.0075

Epoch 00126: loss did not improve from 0.02761
Epoch 127/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0290 - cov_loss: 0.0069

Epoch 00127: loss did not improve from 0.02761
Epoch 128/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0282 - cov_loss: 0.0069

Epoch 00128: loss did not improve from 0.02761
Epoch 129/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0295 - cov_loss: 0.0076

Epoch 00129: loss did not improve from 0.02761
Epoch 130/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0287 - cov_loss: 0.0069

Epoch 00130: loss did not improve from 0.02761
Epoch 131/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0296 - cov_loss: 0.0072

Epoch 00131: loss did not improve from 0.02761
Epoch 132/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0312 - cov_loss: 0.0085

Epoch 00132: loss did not improve from 0.02761
Epoch 133/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0278 - cov_loss: 0.0065

Epoch 00133: loss did not improve from 0.02761
Epoch 134/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0294 - cov_loss: 0.0072

Epoch 00134: loss did not improve from 0.02761
Epoch 135/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0303 - cov_loss: 0.0075

Epoch 00135: loss did not improve from 0.02761

Epoch 00135: ReduceLROnPlateau reducing learning rate to 0.0005000000237487257.
Epoch 136/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0288 - cov_loss: 0.0072

Epoch 00136: loss did not improve from 0.02761
Epoch 137/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0273 - cov_loss: 0.0058

Epoch 00137: loss improved from 0.02761 to 0.02733, saving model to ./model_2.hdf5
Epoch 138/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0290 - cov_loss: 0.0067

Epoch 00138: loss did not improve from 0.02733
Epoch 139/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0273 - cov_loss: 0.0060

Epoch 00139: loss improved from 0.02733 to 0.02726, saving model to ./model_2.hdf5
Epoch 140/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0265 - cov_loss: 0.0054

Epoch 00140: loss improved from 0.02726 to 0.02652, saving model to ./model_2.hdf5
Epoch 141/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0267 - cov_loss: 0.0052

Epoch 00141: loss did not improve from 0.02652
Epoch 142/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0288 - cov_loss: 0.0075

Epoch 00142: loss did not improve from 0.02652
Epoch 143/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0272 - cov_loss: 0.0062

Epoch 00143: loss did not improve from 0.02652
Epoch 144/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0273 - cov_loss: 0.0061

Epoch 00144: loss did not improve from 0.02652
Epoch 145/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0251 - cov_loss: 0.0047

Epoch 00145: loss improved from 0.02652 to 0.02512, saving model to ./model_2.hdf5
Epoch 146/1000
32/32 [==============================] - 2s 77ms/step - loss: 0.0290 - cov_loss: 0.0072

Epoch 00146: loss did not improve from 0.02512
Epoch 147/1000
32/32 [==============================] - 2s 77ms/step - loss: 0.0299 - cov_loss: 0.0083

Epoch 00147: loss did not improve from 0.02512
Epoch 148/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0284 - cov_loss: 0.0066

Epoch 00148: loss did not improve from 0.02512
Epoch 149/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0264 - cov_loss: 0.0054

Epoch 00149: loss did not improve from 0.02512
Epoch 150/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0286 - cov_loss: 0.0066

Epoch 00150: loss did not improve from 0.02512
Epoch 151/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0283 - cov_loss: 0.0072

Epoch 00151: loss did not improve from 0.02512
Epoch 152/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0264 - cov_loss: 0.0056

Epoch 00152: loss did not improve from 0.02512
Epoch 153/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0289 - cov_loss: 0.0064

Epoch 00153: loss did not improve from 0.02512
Epoch 154/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0272 - cov_loss: 0.0054

Epoch 00154: loss did not improve from 0.02512
Epoch 155/1000
32/32 [==============================] - 2s 77ms/step - loss: 0.0291 - cov_loss: 0.0082

Epoch 00155: loss did not improve from 0.02512
Epoch 156/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0278 - cov_loss: 0.0070

Epoch 00156: loss did not improve from 0.02512
Epoch 157/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0278 - cov_loss: 0.0064

Epoch 00157: loss did not improve from 0.02512
Epoch 158/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0267 - cov_loss: 0.0053

Epoch 00158: loss did not improve from 0.02512
Epoch 159/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0278 - cov_loss: 0.0058

Epoch 00159: loss did not improve from 0.02512
Epoch 160/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0265 - cov_loss: 0.0056

Epoch 00160: loss did not improve from 0.02512
Epoch 161/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0278 - cov_loss: 0.0064

Epoch 00161: loss did not improve from 0.02512
Epoch 162/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0268 - cov_loss: 0.0060

Epoch 00162: loss did not improve from 0.02512
Epoch 163/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0268 - cov_loss: 0.0053

Epoch 00163: loss did not improve from 0.02512
Epoch 164/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0263 - cov_loss: 0.0048

Epoch 00164: loss did not improve from 0.02512
Epoch 165/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0249 - cov_loss: 0.0046

Epoch 00165: loss improved from 0.02512 to 0.02489, saving model to ./model_2.hdf5
Epoch 166/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0272 - cov_loss: 0.0056

Epoch 00166: loss did not improve from 0.02489
Epoch 167/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0277 - cov_loss: 0.0067

Epoch 00167: loss did not improve from 0.02489
Epoch 168/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0273 - cov_loss: 0.0064

Epoch 00168: loss did not improve from 0.02489
Epoch 169/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0284 - cov_loss: 0.0063

Epoch 00169: loss did not improve from 0.02489
Epoch 170/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0287 - cov_loss: 0.0071

Epoch 00170: loss did not improve from 0.02489
Epoch 171/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0284 - cov_loss: 0.0076

Epoch 00171: loss did not improve from 0.02489
Epoch 172/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0270 - cov_loss: 0.0062

Epoch 00172: loss did not improve from 0.02489
Epoch 173/1000
32/32 [==============================] - 2s 77ms/step - loss: 0.0252 - cov_loss: 0.0045

Epoch 00173: loss did not improve from 0.02489
Epoch 174/1000
32/32 [==============================] - 2s 77ms/step - loss: 0.0260 - cov_loss: 0.0048

Epoch 00174: loss did not improve from 0.02489
Epoch 175/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0290 - cov_loss: 0.0071

Epoch 00175: loss did not improve from 0.02489
Epoch 176/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0274 - cov_loss: 0.0057

Epoch 00176: loss did not improve from 0.02489
Epoch 177/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0288 - cov_loss: 0.0074

Epoch 00177: loss did not improve from 0.02489
Epoch 178/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0275 - cov_loss: 0.0060

Epoch 00178: loss did not improve from 0.02489
Epoch 179/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0272 - cov_loss: 0.0059

Epoch 00179: loss did not improve from 0.02489
Epoch 180/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0288 - cov_loss: 0.0079

Epoch 00180: loss did not improve from 0.02489
Epoch 181/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0280 - cov_loss: 0.0063

Epoch 00181: loss did not improve from 0.02489
Epoch 182/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0274 - cov_loss: 0.0066

Epoch 00182: loss did not improve from 0.02489
Epoch 183/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0275 - cov_loss: 0.0061

Epoch 00183: loss did not improve from 0.02489
Epoch 184/1000
32/32 [==============================] - 2s 77ms/step - loss: 0.0264 - cov_loss: 0.0051

Epoch 00184: loss did not improve from 0.02489
Epoch 185/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0280 - cov_loss: 0.0064

Epoch 00185: loss did not improve from 0.02489
Epoch 186/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0270 - cov_loss: 0.0055

Epoch 00186: loss did not improve from 0.02489
Epoch 187/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0272 - cov_loss: 0.0065

Epoch 00187: loss did not improve from 0.02489
Epoch 188/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0277 - cov_loss: 0.0064

Epoch 00188: loss did not improve from 0.02489
Epoch 189/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0289 - cov_loss: 0.0075

Epoch 00189: loss did not improve from 0.02489
Epoch 190/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0263 - cov_loss: 0.0056

Epoch 00190: loss did not improve from 0.02489

Epoch 00190: ReduceLROnPlateau reducing learning rate to 0.0002500000118743628.
Epoch 191/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0265 - cov_loss: 0.0052

Epoch 00191: loss did not improve from 0.02489
Epoch 192/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0270 - cov_loss: 0.0060

Epoch 00192: loss did not improve from 0.02489
Epoch 193/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0268 - cov_loss: 0.0062

Epoch 00193: loss did not improve from 0.02489
Epoch 194/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0276 - cov_loss: 0.0065

Epoch 00194: loss did not improve from 0.02489
Epoch 195/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0268 - cov_loss: 0.0062

Epoch 00195: loss did not improve from 0.02489
Epoch 196/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0269 - cov_loss: 0.0066

Epoch 00196: loss did not improve from 0.02489
Epoch 197/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0273 - cov_loss: 0.0060

Epoch 00197: loss did not improve from 0.02489
Epoch 198/1000
32/32 [==============================] - 2s 77ms/step - loss: 0.0275 - cov_loss: 0.0065

Epoch 00198: loss did not improve from 0.02489
Epoch 199/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0258 - cov_loss: 0.0053

Epoch 00199: loss did not improve from 0.02489
Epoch 200/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0250 - cov_loss: 0.0041

Epoch 00200: loss did not improve from 0.02489
Epoch 201/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0273 - cov_loss: 0.0070

Epoch 00201: loss did not improve from 0.02489
Epoch 202/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0248 - cov_loss: 0.0041

Epoch 00202: loss improved from 0.02489 to 0.02480, saving model to ./model_2.hdf5
Epoch 203/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0279 - cov_loss: 0.0068

Epoch 00203: loss did not improve from 0.02480
Epoch 204/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0263 - cov_loss: 0.0059

Epoch 00204: loss did not improve from 0.02480
Epoch 205/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0291 - cov_loss: 0.0073

Epoch 00205: loss did not improve from 0.02480
Epoch 206/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0279 - cov_loss: 0.0067

Epoch 00206: loss did not improve from 0.02480
Epoch 207/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0263 - cov_loss: 0.0054

Epoch 00207: loss did not improve from 0.02480
Epoch 208/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0263 - cov_loss: 0.0057

Epoch 00208: loss did not improve from 0.02480
Epoch 209/1000
32/32 [==============================] - 2s 77ms/step - loss: 0.0264 - cov_loss: 0.0057

Epoch 00209: loss did not improve from 0.02480
Epoch 210/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0261 - cov_loss: 0.0052

Epoch 00210: loss did not improve from 0.02480
Epoch 211/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0271 - cov_loss: 0.0058

Epoch 00211: loss did not improve from 0.02480
Epoch 212/1000
32/32 [==============================] - 2s 77ms/step - loss: 0.0282 - cov_loss: 0.0066

Epoch 00212: loss did not improve from 0.02480
Epoch 213/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0285 - cov_loss: 0.0069

Epoch 00213: loss did not improve from 0.02480
Epoch 214/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0275 - cov_loss: 0.0068

Epoch 00214: loss did not improve from 0.02480
Epoch 215/1000
32/32 [==============================] - 2s 77ms/step - loss: 0.0274 - cov_loss: 0.0063

Epoch 00215: loss did not improve from 0.02480

Epoch 00215: ReduceLROnPlateau reducing learning rate to 0.0001250000059371814.
Epoch 216/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0256 - cov_loss: 0.0052

Epoch 00216: loss did not improve from 0.02480
Epoch 217/1000
32/32 [==============================] - 2s 78ms/step - loss: 0.0262 - cov_loss: 0.0048

Epoch 00217: loss did not improve from 0.02480
Epoch 218/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0259 - cov_loss: 0.0051

Epoch 00218: loss did not improve from 0.02480
Epoch 219/1000
32/32 [==============================] - 2s 77ms/step - loss: 0.0256 - cov_loss: 0.0048

Epoch 00219: loss did not improve from 0.02480
Epoch 220/1000
32/32 [==============================] - 2s 77ms/step - loss: 0.0278 - cov_loss: 0.0060

Epoch 00220: loss did not improve from 0.02480
Epoch 221/1000
32/32 [==============================] - 2s 77ms/step - loss: 0.0242 - cov_loss: 0.0042

Epoch 00221: loss improved from 0.02480 to 0.02424, saving model to ./model_2.hdf5
Epoch 222/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0266 - cov_loss: 0.0055

Epoch 00222: loss did not improve from 0.02424
Epoch 223/1000
32/32 [==============================] - 2s 77ms/step - loss: 0.0249 - cov_loss: 0.0045

Epoch 00223: loss did not improve from 0.02424
Epoch 224/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0255 - cov_loss: 0.0052

Epoch 00224: loss did not improve from 0.02424
Epoch 225/1000
32/32 [==============================] - 2s 77ms/step - loss: 0.0254 - cov_loss: 0.0055

Epoch 00225: loss did not improve from 0.02424
Epoch 226/1000
32/32 [==============================] - 2s 77ms/step - loss: 0.0272 - cov_loss: 0.0062

Epoch 00226: loss did not improve from 0.02424
Epoch 227/1000
32/32 [==============================] - 2s 77ms/step - loss: 0.0273 - cov_loss: 0.0063

Epoch 00227: loss did not improve from 0.02424
Epoch 228/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0270 - cov_loss: 0.0059

Epoch 00228: loss did not improve from 0.02424
Epoch 229/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0262 - cov_loss: 0.0052

Epoch 00229: loss did not improve from 0.02424
Epoch 230/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0267 - cov_loss: 0.0058

Epoch 00230: loss did not improve from 0.02424
Epoch 231/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0259 - cov_loss: 0.0052

Epoch 00231: loss did not improve from 0.02424
Epoch 232/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0269 - cov_loss: 0.0065

Epoch 00232: loss did not improve from 0.02424
Epoch 233/1000
32/32 [==============================] - 2s 77ms/step - loss: 0.0252 - cov_loss: 0.0042

Epoch 00233: loss did not improve from 0.02424
Epoch 234/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0262 - cov_loss: 0.0057

Epoch 00234: loss did not improve from 0.02424
Epoch 235/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0275 - cov_loss: 0.0062

Epoch 00235: loss did not improve from 0.02424
Epoch 236/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0248 - cov_loss: 0.0042

Epoch 00236: loss did not improve from 0.02424
Epoch 237/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0265 - cov_loss: 0.0057

Epoch 00237: loss did not improve from 0.02424
Epoch 238/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0257 - cov_loss: 0.0055

Epoch 00238: loss did not improve from 0.02424
Epoch 239/1000
32/32 [==============================] - 2s 77ms/step - loss: 0.0271 - cov_loss: 0.0060

Epoch 00239: loss did not improve from 0.02424
Epoch 240/1000
32/32 [==============================] - 2s 78ms/step - loss: 0.0261 - cov_loss: 0.0058

Epoch 00240: loss did not improve from 0.02424
Epoch 241/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0242 - cov_loss: 0.0041

Epoch 00241: loss improved from 0.02424 to 0.02420, saving model to ./model_2.hdf5
Epoch 242/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0254 - cov_loss: 0.0050

Epoch 00242: loss did not improve from 0.02420
Epoch 243/1000
32/32 [==============================] - 2s 77ms/step - loss: 0.0262 - cov_loss: 0.0054

Epoch 00243: loss did not improve from 0.02420
Epoch 244/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0257 - cov_loss: 0.0054

Epoch 00244: loss did not improve from 0.02420
Epoch 245/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0247 - cov_loss: 0.0043

Epoch 00245: loss did not improve from 0.02420
Epoch 246/1000
32/32 [==============================] - 2s 77ms/step - loss: 0.0260 - cov_loss: 0.0057

Epoch 00246: loss did not improve from 0.02420

Epoch 00246: ReduceLROnPlateau reducing learning rate to 6.25000029685907e-05.
Epoch 247/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0263 - cov_loss: 0.0067

Epoch 00247: loss did not improve from 0.02420
Epoch 248/1000
32/32 [==============================] - 2s 77ms/step - loss: 0.0255 - cov_loss: 0.0045

Epoch 00248: loss did not improve from 0.02420
Epoch 249/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0264 - cov_loss: 0.0059

Epoch 00249: loss did not improve from 0.02420
Epoch 250/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0265 - cov_loss: 0.0054

Epoch 00250: loss did not improve from 0.02420
Epoch 251/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0249 - cov_loss: 0.0045

Epoch 00251: loss did not improve from 0.02420
Epoch 252/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0255 - cov_loss: 0.0049

Epoch 00252: loss did not improve from 0.02420
Epoch 253/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0252 - cov_loss: 0.0048

Epoch 00253: loss did not improve from 0.02420
Epoch 254/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0241 - cov_loss: 0.0041

Epoch 00254: loss improved from 0.02420 to 0.02414, saving model to ./model_2.hdf5
Epoch 255/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0276 - cov_loss: 0.0068

Epoch 00255: loss did not improve from 0.02414
Epoch 256/1000
32/32 [==============================] - 2s 77ms/step - loss: 0.0263 - cov_loss: 0.0063

Epoch 00256: loss did not improve from 0.02414
Epoch 257/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0257 - cov_loss: 0.0053

Epoch 00257: loss did not improve from 0.02414
Epoch 258/1000
32/32 [==============================] - 2s 77ms/step - loss: 0.0254 - cov_loss: 0.0053

Epoch 00258: loss did not improve from 0.02414
Epoch 259/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0247 - cov_loss: 0.0043

Epoch 00259: loss did not improve from 0.02414
Epoch 260/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0259 - cov_loss: 0.0057

Epoch 00260: loss did not improve from 0.02414
Epoch 261/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0260 - cov_loss: 0.0057

Epoch 00261: loss did not improve from 0.02414
Epoch 262/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0268 - cov_loss: 0.0060

Epoch 00262: loss did not improve from 0.02414
Epoch 263/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0260 - cov_loss: 0.0050

Epoch 00263: loss did not improve from 0.02414
Epoch 264/1000
32/32 [==============================] - 2s 77ms/step - loss: 0.0256 - cov_loss: 0.0051

Epoch 00264: loss did not improve from 0.02414
Epoch 265/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0270 - cov_loss: 0.0064

Epoch 00265: loss did not improve from 0.02414
Epoch 266/1000
32/32 [==============================] - 2s 78ms/step - loss: 0.0256 - cov_loss: 0.0047

Epoch 00266: loss did not improve from 0.02414
Epoch 267/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0261 - cov_loss: 0.0054

Epoch 00267: loss did not improve from 0.02414
Epoch 268/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0252 - cov_loss: 0.0048

Epoch 00268: loss did not improve from 0.02414
Epoch 269/1000
32/32 [==============================] - 2s 78ms/step - loss: 0.0264 - cov_loss: 0.0055

Epoch 00269: loss did not improve from 0.02414
Epoch 270/1000
32/32 [==============================] - 2s 77ms/step - loss: 0.0253 - cov_loss: 0.0049

Epoch 00270: loss did not improve from 0.02414
Epoch 271/1000
32/32 [==============================] - 2s 77ms/step - loss: 0.0260 - cov_loss: 0.0057

Epoch 00271: loss did not improve from 0.02414
Epoch 272/1000
32/32 [==============================] - 2s 77ms/step - loss: 0.0255 - cov_loss: 0.0056

Epoch 00272: loss did not improve from 0.02414
Epoch 273/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0262 - cov_loss: 0.0050

Epoch 00273: loss did not improve from 0.02414
Epoch 274/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0271 - cov_loss: 0.0060

Epoch 00274: loss did not improve from 0.02414
Epoch 275/1000
32/32 [==============================] - 2s 77ms/step - loss: 0.0277 - cov_loss: 0.0070

Epoch 00275: loss did not improve from 0.02414
Epoch 276/1000
32/32 [==============================] - 2s 77ms/step - loss: 0.0254 - cov_loss: 0.0048

Epoch 00276: loss did not improve from 0.02414
Epoch 277/1000
32/32 [==============================] - 2s 77ms/step - loss: 0.0250 - cov_loss: 0.0046

Epoch 00277: loss did not improve from 0.02414
Epoch 278/1000
32/32 [==============================] - 2s 77ms/step - loss: 0.0253 - cov_loss: 0.0051

Epoch 00278: loss did not improve from 0.02414
Epoch 279/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0261 - cov_loss: 0.0054

Epoch 00279: loss did not improve from 0.02414

Epoch 00279: ReduceLROnPlateau reducing learning rate to 3.125000148429535e-05.
Epoch 280/1000
32/32 [==============================] - 2s 77ms/step - loss: 0.0264 - cov_loss: 0.0052

Epoch 00280: loss did not improve from 0.02414
Epoch 281/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0254 - cov_loss: 0.0053

Epoch 00281: loss did not improve from 0.02414
Epoch 282/1000
32/32 [==============================] - 2s 77ms/step - loss: 0.0251 - cov_loss: 0.0045

Epoch 00282: loss did not improve from 0.02414
Epoch 283/1000
32/32 [==============================] - 2s 77ms/step - loss: 0.0265 - cov_loss: 0.0060

Epoch 00283: loss did not improve from 0.02414
Epoch 284/1000
32/32 [==============================] - 2s 77ms/step - loss: 0.0257 - cov_loss: 0.0048

Epoch 00284: loss did not improve from 0.02414
Epoch 285/1000
32/32 [==============================] - 2s 77ms/step - loss: 0.0251 - cov_loss: 0.0046

Epoch 00285: loss did not improve from 0.02414
Epoch 286/1000
32/32 [==============================] - 2s 75ms/step - loss: 0.0247 - cov_loss: 0.0045

Epoch 00286: loss did not improve from 0.02414
Epoch 287/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0264 - cov_loss: 0.0061

Epoch 00287: loss did not improve from 0.02414
Epoch 288/1000
32/32 [==============================] - 2s 78ms/step - loss: 0.0261 - cov_loss: 0.0055

Epoch 00288: loss did not improve from 0.02414
Epoch 289/1000
32/32 [==============================] - 2s 77ms/step - loss: 0.0256 - cov_loss: 0.0052

Epoch 00289: loss did not improve from 0.02414
Epoch 290/1000
32/32 [==============================] - 2s 74ms/step - loss: 0.0263 - cov_loss: 0.0062

Epoch 00290: loss did not improve from 0.02414
Epoch 291/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0268 - cov_loss: 0.0061

Epoch 00291: loss did not improve from 0.02414
Epoch 292/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0251 - cov_loss: 0.0048

Epoch 00292: loss did not improve from 0.02414
Epoch 293/1000
32/32 [==============================] - 2s 76ms/step - loss: 0.0252 - cov_loss: 0.0049

Epoch 00293: loss did not improve from 0.02414
Epoch 294/1000
32/32 [==============================] - 2s 78ms/step - loss: 0.0257 - cov_loss: 0.0057

Epoch 00294: loss did not improve from 0.02414
Epoch 295/1000
32/32 [==============================] - 2s 77ms/step - loss: 0.0262 - cov_loss: 0.0056

Epoch 00295: loss did not improve from 0.02414
Epoch 296/1000
32/32 [==============================] - 2s 77ms/step - loss: 0.0259 - cov_loss: 0.0048

Epoch 00296: loss did not improve from 0.02414
Epoch 297/1000
32/32 [==============================] - 2s 77ms/step - loss: 0.0238 - cov_loss: 0.0041

Epoch 00297: loss improved from 0.02414 to 0.02384, saving model to ./model_2.hdf5
Epoch 298/1000
32/32 [==============================] - 2s 77ms/step - loss: 0.0259 - cov_loss: 0.0058

Epoch 00298: loss did not improve from 0.02384
Epoch 299/1000
32/32 [==============================] - 2s 77ms/step - loss: 0.0262 - cov_loss: 0.0051

Epoch 00299: loss did not improve from 0.02384
...
[Epochs 300-346 omitted: the loss plateaus around 0.024-0.028 and cov_loss around 0.004-0.007 with no
improvement over the best loss of 0.02384; ReduceLROnPlateau lowers the learning rate to 1.5625e-05
after epoch 322.]
...
Epoch 347/1000
32/32 [==============================] - 2s 77ms/step - loss: 0.0257 - cov_loss: 0.0050

Epoch 00347: loss did not improve from 0.02384

Epoch 00347: ReduceLROnPlateau reducing learning rate to 1e-05.
Epoch 00347: early stopping
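
The messages above come from the training callbacks. A configuration consistent with this output is sketched below; the checkpoint file name, the monitored metric and the learning-rate floor follow from the log, but the patience values and the reduction factor are assumptions.

# Sketch of callbacks consistent with the log above (patience and factor values are assumptions).
callbacks = [
    # saves the best weights to 'model_2.hdf5' ("loss did not improve from ..." messages)
    keras.callbacks.ModelCheckpoint('model_2.hdf5', monitor='loss',
                                    save_best_only=True, verbose=1),
    # halves the learning rate on a plateau, down to the 1e-05 floor reached at epoch 347
    keras.callbacks.ReduceLROnPlateau(monitor='loss', factor=0.5,
                                      patience=25, min_lr=1e-05, verbose=1),
    # stops training once the loss stops improving (triggered here at epoch 347)
    keras.callbacks.EarlyStopping(monitor='loss', patience=50, verbose=1),
]
# model_2.fit(dataset, dataset, batch_size=batch_size, epochs=1000, callbacks=callbacks)
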
# Reload the best checkpoint saved during training; the custom layer must be registered via custom_objects.
model_2 = keras.models.load_model('model_2.hdf5', custom_objects={"LatentCovarianceLayer": LatentCovarianceLayer})

Examine the result

img = dataset[1430, ...]
img_rec = model_2.predict(img[np.newaxis,...])
_, ax = plt.subplots(1, 2)
ax[0].imshow(img[...,0], cmap=plt.get_cmap('gray'), vmin=0, vmax=1)
ax[0].axis('off')
ax[1].imshow(img_rec[0,...,0], cmap=plt.get_cmap('gray'), vmin=0, vmax=1)
ax[1].axis('off')
plt.show()
../_images/pca_ae_hierarchy_rotation_34_0.png

There is no significant improvement in reconstruction with the addition of the third latent code.
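
To back the visual comparison up with a number, a rough check (a sketch; the sample size is arbitrary) is to compute the mean squared reconstruction error over a random subset of the dataset:

# Rough quantitative check (sketch): mean squared reconstruction error on a random sample.
sample_idx = np.random.choice(N, 500, replace=False)
sample = dataset[sample_idx]
sample_rec = model_2.predict(sample)
mse = np.mean((sample - sample_rec) ** 2)
print(f"mean squared reconstruction error: {mse:.4f}")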

model_2.summary()
Model: "model"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            [(None, 64, 64, 1)]  0                                            
__________________________________________________________________________________________________
conv2d (Conv2D)                 (None, 64, 64, 4)    40          input_1[0][0]                    
__________________________________________________________________________________________________
leaky_re_lu (LeakyReLU)         (None, 64, 64, 4)    0           conv2d[0][0]                     
__________________________________________________________________________________________________
max_pooling2d (MaxPooling2D)    (None, 32, 32, 4)    0           leaky_re_lu[0][0]                
__________________________________________________________________________________________________
conv2d_1 (Conv2D)               (None, 32, 32, 8)    296         max_pooling2d[0][0]              
__________________________________________________________________________________________________
leaky_re_lu_1 (LeakyReLU)       (None, 32, 32, 8)    0           conv2d_1[0][0]                   
__________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D)  (None, 16, 16, 8)    0           leaky_re_lu_1[0][0]              
__________________________________________________________________________________________________
conv2d_2 (Conv2D)               (None, 16, 16, 16)   1168        max_pooling2d_1[0][0]            
__________________________________________________________________________________________________
leaky_re_lu_2 (LeakyReLU)       (None, 16, 16, 16)   0           conv2d_2[0][0]                   
__________________________________________________________________________________________________
max_pooling2d_2 (MaxPooling2D)  (None, 8, 8, 16)     0           leaky_re_lu_2[0][0]              
__________________________________________________________________________________________________
conv2d_3 (Conv2D)               (None, 8, 8, 32)     4640        max_pooling2d_2[0][0]            
__________________________________________________________________________________________________
leaky_re_lu_3 (LeakyReLU)       (None, 8, 8, 32)     0           conv2d_3[0][0]                   
__________________________________________________________________________________________________
max_pooling2d_3 (MaxPooling2D)  (None, 4, 4, 32)     0           leaky_re_lu_3[0][0]              
__________________________________________________________________________________________________
conv2d_4 (Conv2D)               (None, 4, 4, 64)     18496       max_pooling2d_3[0][0]            
__________________________________________________________________________________________________
leaky_re_lu_4 (LeakyReLU)       (None, 4, 4, 64)     0           conv2d_4[0][0]                   
__________________________________________________________________________________________________
max_pooling2d_4 (MaxPooling2D)  (None, 2, 2, 64)     0           leaky_re_lu_4[0][0]              
__________________________________________________________________________________________________
flatten (Flatten)               (None, 256)          0           max_pooling2d_4[0][0]            
__________________________________________________________________________________________________
model_1 (Functional)            (None, 2)            49798       input_1[0][0]                    
__________________________________________________________________________________________________
dense (Dense)                   (None, 1)            257         flatten[0][0]                    
__________________________________________________________________________________________________
concatenate (Concatenate)       (None, 3)            0           model_1[0][0]                    
                                                                 dense[0][0]                      
__________________________________________________________________________________________________
batch_normalization (BatchNorma (None, 3)            6           concatenate[0][0]                
__________________________________________________________________________________________________
latent_covariance_layer (Latent (None, 3)            0           batch_normalization[0][0]        
__________________________________________________________________________________________________
dense_1 (Dense)                 (None, 16)           64          latent_covariance_layer[0][0]    
__________________________________________________________________________________________________
leaky_re_lu_5 (LeakyReLU)       (None, 16)           0           dense_1[0][0]                    
__________________________________________________________________________________________________
reshape (Reshape)               (None, 2, 2, 4)      0           leaky_re_lu_5[0][0]              
__________________________________________________________________________________________________
conv2d_transpose (Conv2DTranspo (None, 4, 4, 32)     1184        reshape[0][0]                    
__________________________________________________________________________________________________
leaky_re_lu_6 (LeakyReLU)       (None, 4, 4, 32)     0           conv2d_transpose[0][0]           
__________________________________________________________________________________________________
conv2d_transpose_1 (Conv2DTrans (None, 8, 8, 16)     4624        leaky_re_lu_6[0][0]              
__________________________________________________________________________________________________
leaky_re_lu_7 (LeakyReLU)       (None, 8, 8, 16)     0           conv2d_transpose_1[0][0]         
__________________________________________________________________________________________________
conv2d_transpose_2 (Conv2DTrans (None, 16, 16, 8)    1160        leaky_re_lu_7[0][0]              
__________________________________________________________________________________________________
leaky_re_lu_8 (LeakyReLU)       (None, 16, 16, 8)    0           conv2d_transpose_2[0][0]         
__________________________________________________________________________________________________
conv2d_transpose_3 (Conv2DTrans (None, 32, 32, 4)    292         leaky_re_lu_8[0][0]              
__________________________________________________________________________________________________
leaky_re_lu_9 (LeakyReLU)       (None, 32, 32, 4)    0           conv2d_transpose_3[0][0]         
__________________________________________________________________________________________________
conv2d_transpose_4 (Conv2DTrans (None, 64, 64, 1)    37          leaky_re_lu_9[0][0]              
==================================================================================================
Total params: 82,062
Trainable params: 32,258
Non-trainable params: 49,804
__________________________________________________________________________________________________

Notice that the non-trainable parameters come from the first two frozen encoders (the 49,798 parameters wrapped in model_1), plus the 6 batch-normalization statistics.
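
A quick way to confirm where the frozen parameters sit is to count the weights per top-level layer; a minimal sketch:

# Count trainable vs. frozen parameters per top-level layer of model_2 (sketch).
for layer in model_2.layers:
    n_train = sum(keras.backend.count_params(w) for w in layer.trainable_weights)
    n_frozen = sum(keras.backend.count_params(w) for w in layer.non_trainable_weights)
    if n_train or n_frozen:
        print(f"{layer.name:35s} trainable: {n_train:6d}  frozen: {n_frozen:6d}")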

# Split the trained model: the encoder maps an image to its 3 latent codes,
# and the decoder (the last 12 layers) maps a 3-D latent vector back to an image.
encoder_2 = keras.models.Model(inputs=model_2.input, outputs=model_2.get_layer('latent_covariance_layer').output)
pca_ae_decoder = keras.models.Sequential(model_2.layers[-12:])
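
As a sanity check (a sketch using the img and img_rec variables from above), chaining the extracted encoder and decoder should reproduce the full model's reconstruction:

# Sanity check (sketch): encoder followed by decoder should match model_2's output.
z = encoder_2.predict(img[np.newaxis, ...])   # the 3 latent codes for the ellipse above
img_chain = pca_ae_decoder.predict(z)
print(z)
print(np.abs(img_chain - img_rec).max())      # should be ~0 up to floating-point error
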
vals = [-2, -1, 0, 1, 2, 3]
_, ax = plt.subplots(1, len(vals), figsize=(12, 3))
for i in range(len(vals)):
    img_dec = pca_ae_decoder.predict([[vals[i], 0, 0]])
    ax[i].imshow(img_dec[0,...,0], cmap=plt.get_cmap('gray'), vmin=0, vmax=1)
    ax[i].axis("off")
    ax[i].text(0, 5, f"z=({vals[i]}, 0, 0)", c='w')
plt.show()
../_images/pca_ae_hierarchy_rotation_40_0.png
vals = [-2, -1, 0, 1, 2, 3]
_, ax = plt.subplots(1, len(vals), figsize=(12, 3))
for i in range(len(vals)):
    img_dec = pca_ae_decoder.predict([[0, vals[i], 0]])
    ax[i].imshow(img_dec[0,...,0], cmap=plt.get_cmap('gray'), vmin=0, vmax=1)
    ax[i].axis("off")
    ax[i].text(0, 5, f"z=(0, {vals[i]}, 0)", c='w')
plt.show()
../_images/pca_ae_hierarchy_rotation_41_0.png
vals = [-2, -1, 0, 1, 2, 3]
_, ax = plt.subplots(1, len(vals), figsize=(12, 3))
for i in range(len(vals)):
    img_dec = pca_ae_decoder.predict([[0, 0, vals[i]]])
    ax[i].imshow(img_dec[0,...,0], cmap=plt.get_cmap('gray'), vmin=0, vmax=1)
    ax[i].axis("off")
    ax[i].text(0, 5, f"z=(0, 0, {vals[i]})", c='w')
plt.show()
../_images/pca_ae_hierarchy_rotation_42_0.png
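
The three cells above differ only in which latent dimension is swept; an equivalent single loop (a compact sketch, not what was originally run) would be:

# Equivalent traversal of all three latent dimensions in a single loop (sketch).
vals = [-2, -1, 0, 1, 2, 3]
_, ax = plt.subplots(3, len(vals), figsize=(12, 6))
for dim in range(3):
    for i, v in enumerate(vals):
        z = np.zeros((1, 3))
        z[0, dim] = v
        img_dec = pca_ae_decoder.predict(z)
        ax[dim, i].imshow(img_dec[0, ..., 0], cmap=plt.get_cmap('gray'), vmin=0, vmax=1)
        ax[dim, i].axis("off")
        ax[dim, i].text(0, 5, f"z[{dim}]={v}", c='w')
plt.show()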

With 3 latent codes in the system, the three generative variables are now separated reasonably well among the codes: the first represents the size, the second the axis ratio and the third the rotation. I think the results can be improved further by tuning the sub-networks and the value of \(\lambda\).
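
A crude way to quantify this (a sketch; it assumes encoder_2 and the generating parameters a, b and ang are still in scope, and linear correlation only gives a rough indication of the separation) is to look at the latent covariance and at how each code correlates with size-, ratio- and rotation-like proxies of the generating variables:

# Crude quantitative check (sketch): latent covariance and correlation with the generating factors.
codes = encoder_2.predict(dataset, batch_size=500)            # shape (N, 3)
print(np.cov(codes, rowvar=False))                            # off-diagonal terms should be small

factors = np.stack([a * b, np.log(a / b), ang], axis=1)       # size-, ratio- and rotation-like proxies
for i in range(3):
    corr = [abs(np.corrcoef(codes[:, i], factors[:, j])[0, 1]) for j in range(3)]
    print(f"code {i}: |corr| with (size, ratio, angle) =",
          ", ".join(f"{c:.2f}" for c in corr))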

Conclusions

This series is coming to an end: one last comparison remains for the next part, against standard PCA. From the preceding notebooks we can conclude that the PCA autoencoders distinguish the patterns encoded in the dataset differently from how the data are mathematically generated. The dataset uses two variables \(a\) and \(b\) to control the size and orientation (horizontal/vertical) of the ellipses. The autoencoders “cleverly” identified two different features: the size (represented by a circle) and the ratio between the two axes, which controls how elongated the vertical or horizontal ellipses are. One advantage of this parameterization is perhaps that, while \(a\) and \(b\) in the mathematical formula cannot be zero, both the size and the ratio are well defined at 0.

At least based on my tests, the hierarchy scheme does not seem to offer a significant advantage over a plain PCA autoencoder, which is much easier to implement. The method may prove more useful when the features in the dataset have very different levels of “prominence”, for example when one factor dominates the variance.