!sudo apt-get install git-lfs
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  git-lfs
0 upgraded, 1 newly installed, 0 to remove and 37 not upgraded.
Need to get 2,129 kB of archives.
After this operation, 7,662 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu bionic/universe amd64 git-lfs amd64 2.3.4-1 [2,129 kB]
Fetched 2,129 kB in 2s (955 kB/s)
debconf: unable to initialize frontend: Dialog
debconf: (No usable dialog-like program is installed, so the dialog based frontend cannot be used. at /usr/share/perl5/Debconf/FrontEnd/Dialog.pm line 76, <> line 1.)
debconf: falling back to frontend: Readline
debconf: unable to initialize frontend: Readline
debconf: (This frontend requires a controlling tty.)
debconf: falling back to frontend: Teletype
dpkg-preconfigure: unable to re-open stdin:
Selecting previously unselected package git-lfs.
(Reading database ... 155222 files and directories currently installed.)
Preparing to unpack .../git-lfs_2.3.4-1_amd64.deb ...
Unpacking git-lfs (2.3.4-1) ...
Setting up git-lfs (2.3.4-1) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
%tensorflow_version 1.x
TensorFlow 1.x selected.
import tensorflow
print(tensorflow.__version__)
1.15.2
!pip install keras==2.2.4
Collecting keras==2.2.4
Using cached Keras-2.2.4-py2.py3-none-any.whl (312 kB)
Requirement already satisfied: pyyaml in /usr/local/lib/python3.7/dist-packages (from keras==2.2.4) (3.13)
Requirement already satisfied: keras-applications>=1.0.6 in /tensorflow-1.15.2/python3.7 (from keras==2.2.4) (1.0.8)
Requirement already satisfied: numpy>=1.9.1 in /usr/local/lib/python3.7/dist-packages (from keras==2.2.4) (1.19.5)
Requirement already satisfied: scipy>=0.14 in /usr/local/lib/python3.7/dist-packages (from keras==2.2.4) (1.4.1)
Requirement already satisfied: h5py in /usr/local/lib/python3.7/dist-packages (from keras==2.2.4) (3.1.0)
Requirement already satisfied: six>=1.9.0 in /usr/local/lib/python3.7/dist-packages (from keras==2.2.4) (1.15.0)
Requirement already satisfied: keras-preprocessing>=1.0.5 in /usr/local/lib/python3.7/dist-packages (from keras==2.2.4) (1.1.2)
Requirement already satisfied: cached-property in /usr/local/lib/python3.7/dist-packages (from h5py->keras==2.2.4) (1.5.2)
Installing collected packages: keras
Attempting uninstall: keras
Found existing installation: keras 2.7.0
Uninstalling keras-2.7.0:
Successfully uninstalled keras-2.7.0
Successfully installed keras-2.2.4
!pip install 'h5py==2.10.0' --force-reinstall
Collecting h5py==2.10.0
Downloading h5py-2.10.0-cp37-cp37m-manylinux1_x86_64.whl (2.9 MB)
|████████████████████████████████| 2.9 MB 4.3 MB/s
Collecting six
Downloading six-1.16.0-py2.py3-none-any.whl (11 kB)
Collecting numpy>=1.7
Downloading numpy-1.21.4-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (15.7 MB)
|████████████████████████████████| 15.7 MB 55.0 MB/s
Installing collected packages: six, numpy, h5py
Attempting uninstall: six
Found existing installation: six 1.15.0
Uninstalling six-1.15.0:
Successfully uninstalled six-1.15.0
Attempting uninstall: numpy
Found existing installation: numpy 1.19.5
Uninstalling numpy-1.19.5:
Successfully uninstalled numpy-1.19.5
Attempting uninstall: h5py
Found existing installation: h5py 3.1.0
Uninstalling h5py-3.1.0:
Successfully uninstalled h5py-3.1.0
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
lucid 0.3.10 requires umap-learn, which is not installed.
tensorflow 1.15.2 requires gast==0.2.2, but you have gast 0.4.0 which is incompatible.
lucid 0.3.10 requires numpy<=1.19, but you have numpy 1.21.4 which is incompatible.
yellowbrick 1.3.post1 requires numpy<1.20,>=1.16.0, but you have numpy 1.21.4 which is incompatible.
kapre 0.3.6 requires tensorflow>=2.0.0, but you have tensorflow 1.15.2 which is incompatible.
google-colab 1.0.0 requires six~=1.15.0, but you have six 1.16.0 which is incompatible.
datascience 0.10.6 requires folium==0.2.1, but you have folium 0.8.3 which is incompatible.
albumentations 0.1.12 requires imgaug<0.2.7,>=0.2.5, but you have imgaug 0.2.9 which is incompatible.
Successfully installed h5py-2.10.0 numpy-1.21.4 six-1.16.0
Restart the runtime and continue from this point. Restarting ensures the freshly installed module versions (h5py 2.10.0, Keras 2.2.4) are the ones actually loaded.
%tensorflow_version 1.x
TensorFlow 1.x selected.
import h5py
print("h5py Version: ", h5py.__version__) # needs to be 2.10.0
assert h5py.__version__ == "2.10.0"
h5py Version: 2.10.0
import keras
print("Keras Version: ", keras.__version__) # needs to be 2.2.4
assert keras.__version__ == "2.2.4"
Using TensorFlow backend.
Keras Version: 2.2.4
import sys
import shutil
import os
import configparser
import io
from collections import defaultdict
import numpy as np
import keras.backend as K
from keras.layers import Input, Lambda
from keras.models import Model
from keras.optimizers import Adam
from keras.callbacks import TensorBoard, ModelCheckpoint, ReduceLROnPlateau, EarlyStopping
from keras.layers import (Conv2D, Input, ZeroPadding2D, Add,
                          UpSampling2D, MaxPooling2D, Concatenate)
from keras.layers.advanced_activations import LeakyReLU
from keras.layers.normalization import BatchNormalization
from keras.regularizers import l2
from keras.utils.vis_utils import plot_model as plot
!wget https://pjreddie.com/media/files/yolov3.weights
--2021-12-05 14:58:11--  https://pjreddie.com/media/files/yolov3.weights
Resolving pjreddie.com (pjreddie.com)... 128.208.4.108
Connecting to pjreddie.com (pjreddie.com)|128.208.4.108|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 248007048 (237M) [application/octet-stream]
Saving to: ‘yolov3.weights’

yolov3.weights      100%[===================>] 236.52M  23.3MB/s    in 11s

2021-12-05 14:58:23 (21.6 MB/s) - ‘yolov3.weights’ saved [248007048/248007048]
#!git clone https://github.com/roboflow-ai/keras-yolo3.git
!rm -rf yolov3
!git clone https://github.com/awells-uva/yolov3.git
Cloning into 'yolov3'...
remote: Enumerating objects: 67, done.
remote: Counting objects: 100% (67/67), done.
remote: Compressing objects: 100% (46/46), done.
remote: Total 67 (delta 33), reused 49 (delta 18), pack-reused 0
Unpacking objects: 100% (67/67), done.
%ls
sample_data/ yolov3/ yolov3.weights
"""
Reads Darknet config and weights and creates Keras model with TF backend.
"""
def unique_config_sections(config_file):
    """Convert all config sections to have unique names.
    Adds unique suffixes to config sections for compatibility with configparser.
    """
    section_counters = defaultdict(int)
    output_stream = io.StringIO()
    with open(config_file) as fin:
        for line in fin:
            if line.startswith('['):
                section = line.strip().strip('[]')
                _section = section + '_' + str(section_counters[section])
                section_counters[section] += 1
                line = line.replace(section, _section)
            output_stream.write(line)
    output_stream.seek(0)
    return output_stream
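A quick sanity check of the renaming (an illustrative sketch, not from the original run; the /tmp path is arbitrary):
sample_cfg = '/tmp/sample.cfg'
with open(sample_cfg, 'w') as f:
    f.write('[net]\nwidth=416\n[convolutional]\nfilters=32\n[convolutional]\nfilters=64\n')

# Duplicate [convolutional] sections become convolutional_0, convolutional_1,
# which configparser (which rejects duplicate section names) can then read.
print(unique_config_sections(sample_cfg).read())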
def build_Keras_model(config_path, weights_path, output_path, weights_only, plot_model=False):
    assert config_path.endswith('.cfg'), '{} is not a .cfg file'.format(config_path)
    assert weights_path.endswith('.weights'), '{} is not a .weights file'.format(weights_path)
    assert output_path.endswith('.h5'), 'output path {} is not a .h5 file'.format(output_path)
    output_root = os.path.splitext(output_path)[0]

    # Load weights and config.
    print('Loading weights.')
    weights_file = open(weights_path, 'rb')
    major, minor, revision = np.ndarray(
        shape=(3, ), dtype='int32', buffer=weights_file.read(12))
    if (major * 10 + minor) >= 2 and major < 1000 and minor < 1000:
        seen = np.ndarray(shape=(1,), dtype='int64', buffer=weights_file.read(8))
    else:
        seen = np.ndarray(shape=(1,), dtype='int32', buffer=weights_file.read(4))
    print('Weights Header: ', major, minor, revision, seen)

    print('Parsing Darknet config.')
    unique_config_file = unique_config_sections(config_path)
    cfg_parser = configparser.ConfigParser()
    cfg_parser.read_file(unique_config_file)

    print('Creating Keras model.')
    input_layer = Input(shape=(None, None, 3))
    prev_layer = input_layer
    all_layers = []

    weight_decay = float(cfg_parser['net_0']['decay']
                         ) if 'net_0' in cfg_parser.sections() else 5e-4
    count = 0
    out_index = []
    for section in cfg_parser.sections():
        print('Parsing section {}'.format(section))
        if section.startswith('convolutional'):
            filters = int(cfg_parser[section]['filters'])
            size = int(cfg_parser[section]['size'])
            stride = int(cfg_parser[section]['stride'])
            pad = int(cfg_parser[section]['pad'])
            activation = cfg_parser[section]['activation']
            batch_normalize = 'batch_normalize' in cfg_parser[section]

            padding = 'same' if pad == 1 and stride == 1 else 'valid'

            # Setting weights.
            # Darknet serializes convolutional weights as:
            # [bias/beta, [gamma, mean, variance], conv_weights]
            prev_layer_shape = K.int_shape(prev_layer)

            weights_shape = (size, size, prev_layer_shape[-1], filters)
            darknet_w_shape = (filters, weights_shape[2], size, size)
            weights_size = np.product(weights_shape)

            print('conv2d', 'bn' if batch_normalize else ' ', activation, weights_shape)

            conv_bias = np.ndarray(
                shape=(filters, ),
                dtype='float32',
                buffer=weights_file.read(filters * 4))
            count += filters

            if batch_normalize:
                bn_weights = np.ndarray(
                    shape=(3, filters),
                    dtype='float32',
                    buffer=weights_file.read(filters * 12))
                count += 3 * filters

                bn_weight_list = [
                    bn_weights[0],  # scale gamma
                    conv_bias,      # shift beta
                    bn_weights[1],  # running mean
                    bn_weights[2]   # running var
                ]

            conv_weights = np.ndarray(
                shape=darknet_w_shape,
                dtype='float32',
                buffer=weights_file.read(weights_size * 4))
            count += weights_size

            # Darknet conv_weights are serialized Caffe-style:
            # (out_dim, in_dim, height, width)
            # We would like to set these to TensorFlow order:
            # (height, width, in_dim, out_dim)
            conv_weights = np.transpose(conv_weights, [2, 3, 1, 0])
            conv_weights = [conv_weights] if batch_normalize else [
                conv_weights, conv_bias
            ]

            # Handle activation.
            act_fn = None
            if activation == 'leaky':
                pass  # Add advanced activation later.
            elif activation != 'linear':
                raise ValueError(
                    'Unknown activation function `{}` in section {}'.format(
                        activation, section))

            # Create Conv2D layer
            if stride > 1:
                # Darknet uses left and top padding instead of 'same' mode
                prev_layer = ZeroPadding2D(((1, 0), (1, 0)))(prev_layer)
            conv_layer = (Conv2D(
                filters, (size, size),
                strides=(stride, stride),
                kernel_regularizer=l2(weight_decay),
                use_bias=not batch_normalize,
                weights=conv_weights,
                activation=act_fn,
                padding=padding))(prev_layer)

            if batch_normalize:
                conv_layer = (BatchNormalization(
                    weights=bn_weight_list))(conv_layer)
            prev_layer = conv_layer

            if activation == 'linear':
                all_layers.append(prev_layer)
            elif activation == 'leaky':
                act_layer = LeakyReLU(alpha=0.1)(prev_layer)
                prev_layer = act_layer
                all_layers.append(act_layer)

        elif section.startswith('route'):
            ids = [int(i) for i in cfg_parser[section]['layers'].split(',')]
            layers = [all_layers[i] for i in ids]
            if len(layers) > 1:
                print('Concatenating route layers:', layers)
                concatenate_layer = Concatenate()(layers)
                all_layers.append(concatenate_layer)
                prev_layer = concatenate_layer
            else:
                skip_layer = layers[0]  # only one layer to route
                all_layers.append(skip_layer)
                prev_layer = skip_layer

        elif section.startswith('maxpool'):
            size = int(cfg_parser[section]['size'])
            stride = int(cfg_parser[section]['stride'])
            all_layers.append(
                MaxPooling2D(
                    pool_size=(size, size),
                    strides=(stride, stride),
                    padding='same')(prev_layer))
            prev_layer = all_layers[-1]

        elif section.startswith('shortcut'):
            index = int(cfg_parser[section]['from'])
            activation = cfg_parser[section]['activation']
            assert activation == 'linear', 'Only linear activation supported.'
            all_layers.append(Add()([all_layers[index], prev_layer]))
            prev_layer = all_layers[-1]

        elif section.startswith('upsample'):
            stride = int(cfg_parser[section]['stride'])
            assert stride == 2, 'Only stride=2 supported.'
            all_layers.append(UpSampling2D(stride)(prev_layer))
            prev_layer = all_layers[-1]

        elif section.startswith('yolo'):
            out_index.append(len(all_layers) - 1)
            all_layers.append(None)
            prev_layer = all_layers[-1]

        elif section.startswith('net'):
            pass

        else:
            raise ValueError(
                'Unsupported section header type: {}'.format(section))

    # Create and save model.
    if len(out_index) == 0:
        out_index.append(len(all_layers) - 1)
    model = Model(inputs=input_layer, outputs=[all_layers[i] for i in out_index])
    print(model.summary())
    if weights_only:
        model.save_weights('{}'.format(output_path))
        print('Saved Keras weights to {}'.format(output_path))
    else:
        model.save('{}'.format(output_path))
        print('Saved Keras model to {}'.format(output_path))

    # Check to see if all weights have been read.
    remaining_weights = len(weights_file.read()) / 4
    weights_file.close()
    print('Read {} of {} from Darknet weights.'.format(count, count + remaining_weights))
    if remaining_weights > 0:
        print('Warning: {} unused weights'.format(remaining_weights))

    if plot_model:
        plot(model, to_file='{}.png'.format(output_root), show_shapes=True)
        print('Saved model plot to {}.png'.format(output_root))
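The axis reordering in the loop above can be checked in isolation; a minimal sketch with arbitrary values, only the axis order matters:
demo_filters, demo_in_dim, demo_size = 4, 3, 3
# Darknet layout: (out_dim, in_dim, height, width)
darknet_kernel = np.zeros((demo_filters, demo_in_dim, demo_size, demo_size), dtype='float32')
# Keras/TensorFlow layout: (height, width, in_dim, out_dim)
keras_kernel = np.transpose(darknet_kernel, [2, 3, 1, 0])
assert keras_kernel.shape == (demo_size, demo_size, demo_in_dim, demo_filters)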
def train_model(annotation_path, log_dir, classes_path, anchors_path, batch_size, num_epochs1, num_epochs2):
    class_names = get_classes(classes_path)
    print("-------------------CLASS NAMES-------------------")
    print(class_names)
    print("-------------------CLASS NAMES-------------------")
    num_classes = len(class_names)
    anchors = get_anchors(anchors_path)

    input_shape = (416, 416)  # multiple of 32, hw

    model = create_model(input_shape, anchors, num_classes,
                         freeze_body=2, weights_path='yolo.h5')  # make sure you know what you freeze

    logging = TensorBoard(log_dir=log_dir)
    checkpoint = ModelCheckpoint(log_dir + 'ep{epoch:03d}-loss{loss:.3f}-val_loss{val_loss:.3f}.h5',
                                 monitor='val_loss', save_weights_only=True, save_best_only=True, period=3)
    reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=3, verbose=1)
    early_stopping = EarlyStopping(monitor='val_loss', min_delta=0, patience=5, verbose=1)

    val_split = 0.2  # set the size of the validation set
    with open(annotation_path) as f:
        lines = f.readlines()
    np.random.seed(10101)
    np.random.shuffle(lines)
    np.random.seed(None)
    num_val = int(len(lines) * val_split)
    num_train = len(lines) - num_val

    # Train with frozen layers first, to get a stable loss.
    # Adjust the number of epochs to your dataset; this stage alone yields a reasonable model.
    if True:
        model.compile(optimizer=Adam(lr=1e-3), loss={
            # use custom yolo_loss Lambda layer.
            'yolo_loss': lambda y_true, y_pred: y_pred})

        print('Train on {} samples, val on {} samples, with batch size {}.'.format(num_train, num_val, batch_size))
        model.fit_generator(data_generator_wrapper(lines[:num_train], batch_size, input_shape, anchors, num_classes),
                            steps_per_epoch=max(1, num_train // batch_size),
                            validation_data=data_generator_wrapper(lines[num_train:], batch_size, input_shape, anchors, num_classes),
                            validation_steps=max(1, num_val // batch_size),
                            epochs=num_epochs1,
                            callbacks=[logging, checkpoint])
        # model.fit_generator(data_generator_wrapper(lines[:num_train], batch_size, input_shape, anchors, num_classes),
        #                     steps_per_epoch=max(1, num_train // batch_size),
        #                     validation_data=data_generator_wrapper(lines[num_train:], batch_size, input_shape, anchors, num_classes),
        #                     validation_steps=max(1, num_val // batch_size),
        #                     epochs=500,
        #                     initial_epoch=0,
        #                     callbacks=[logging, checkpoint])
        model.save_weights(log_dir + 'trained_weights_stage_1.h5')

    # Unfreeze and continue training, to fine-tune.
    # Train longer if the result is not good.
    if True:
        for i in range(len(model.layers)):
            model.layers[i].trainable = True
        model.compile(optimizer=Adam(lr=1e-4),
                      loss={'yolo_loss': lambda y_true, y_pred: y_pred})  # recompile to apply the change
        print('Unfreeze all of the layers.')

        # note that more GPU memory is required after unfreezing the body
        print('Train on {} samples, val on {} samples, with batch size {}.'.format(num_train, num_val, batch_size))
        model.fit_generator(data_generator_wrapper(lines[:num_train], batch_size, input_shape, anchors, num_classes),
                            steps_per_epoch=max(1, num_train // batch_size),
                            validation_data=data_generator_wrapper(lines[num_train:], batch_size, input_shape, anchors, num_classes),
                            validation_steps=max(1, num_val // batch_size),
                            epochs=num_epochs2,
                            initial_epoch=int(num_epochs1 // 2),
                            callbacks=[logging, checkpoint, reduce_lr, early_stopping])
        # model.fit_generator(data_generator_wrapper(lines[:num_train], batch_size, input_shape, anchors, num_classes),
        #                     steps_per_epoch=max(1, num_train // batch_size),
        #                     validation_data=data_generator_wrapper(lines[num_train:], batch_size, input_shape, anchors, num_classes),
        #                     validation_steps=max(1, num_val // batch_size),
        #                     epochs=100,
        #                     initial_epoch=50,
        #                     callbacks=[logging, checkpoint, reduce_lr, early_stopping])
        model.save_weights(log_dir + 'trained_weights_final.h5')

    return model
# Further training if needed.
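For orientation, a sketch of how train_model is invoked once the paths are configured further below; the batch size and epoch counts here are illustrative placeholders, not values taken from the original run:
# Illustrative call only -- the annotation/classes/anchors paths are set up
# later in this notebook, and the numeric hyperparameters are placeholders.
trained = train_model(annotation_path='_updated_annotations.txt',
                      log_dir='/content/logs/000/',
                      classes_path='/content/_classes.txt',
                      anchors_path='/content/yolov3/yolo_anchors.txt',
                      batch_size=32,
                      num_epochs1=50,
                      num_epochs2=100)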
def get_classes(classes_path):
    '''loads the classes'''
    with open(classes_path) as f:
        class_names = f.readlines()
    class_names = [c.strip() for c in class_names]
    return class_names
def get_anchors(anchors_path):
    '''loads the anchors from a file'''
    with open(anchors_path) as f:
        anchors = f.readline()
    anchors = [float(x) for x in anchors.split(',')]
    return np.array(anchors).reshape(-1, 2)
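get_anchors expects a single comma-separated line of width,height pairs. An illustrative check using the standard YOLOv3 anchors, which are assumed here to match yolo_anchors.txt:
with open('/tmp/demo_anchors.txt', 'w') as f:
    f.write('10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326')

demo_anchors = get_anchors('/tmp/demo_anchors.txt')
print(demo_anchors.shape)  # (9, 2): nine (width, height) anchor boxes, three per scale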
def create_model(input_shape, anchors, num_classes, load_pretrained=True, freeze_body=2,
                 weights_path='model_data/yolo.h5'):
    '''create the training model'''
    K.clear_session()  # get a new session
    image_input = Input(shape=(None, None, 3))
    h, w = input_shape
    num_anchors = len(anchors)

    y_true = [Input(shape=(h // {0: 32, 1: 16, 2: 8}[l], w // {0: 32, 1: 16, 2: 8}[l],
                           num_anchors // 3, num_classes + 5)) for l in range(3)]

    model_body = yolo_body(image_input, num_anchors // 3, num_classes)
    print('Create YOLOv3 model with {} anchors and {} classes.'.format(num_anchors, num_classes))

    if load_pretrained:
        model_body.load_weights(weights_path, by_name=True, skip_mismatch=True)
        print('Load weights {}.'.format(weights_path))
        if freeze_body in [1, 2]:
            # Freeze darknet53 body or freeze all but 3 output layers.
            num = (185, len(model_body.layers) - 3)[freeze_body - 1]
            for i in range(num):
                model_body.layers[i].trainable = False
            print('Freeze the first {} layers of total {} layers.'.format(num, len(model_body.layers)))

    model_loss = Lambda(yolo_loss, output_shape=(1,), name='yolo_loss',
                        arguments={'anchors': anchors, 'num_classes': num_classes, 'ignore_thresh': 0.5})(
        [*model_body.output, *y_true])
    model = Model([model_body.input, *y_true], model_loss)

    return model
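To make the y_true shapes concrete, a small sketch of the arithmetic for the (416, 416) input used below, assuming 9 anchors and 2 classes:
demo_h, demo_w, demo_anchor_count, demo_classes = 416, 416, 9, 2
for level, stride in enumerate([32, 16, 8]):
    # 4 box coordinates + 1 objectness score + one score per class
    print('scale {}: ({}, {}, {}, {})'.format(level, demo_h // stride, demo_w // stride,
                                              demo_anchor_count // 3, demo_classes + 5))
# scale 0: (13, 13, 3, 7)
# scale 1: (26, 26, 3, 7)
# scale 2: (52, 52, 3, 7)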
def data_generator(annotation_lines, batch_size, input_shape, anchors, num_classes):
    '''data generator for fit_generator'''
    n = len(annotation_lines)
    i = 0
    while True:
        image_data = []
        box_data = []
        for b in range(batch_size):
            if i == 0:
                np.random.shuffle(annotation_lines)
            image, box = get_random_data(annotation_lines[i], input_shape, random=True)
            image_data.append(image)
            box_data.append(box)
            i = (i + 1) % n
        image_data = np.array(image_data)
        box_data = np.array(box_data)
        y_true = preprocess_true_boxes(box_data, input_shape, anchors, num_classes)
        yield [image_data, *y_true], np.zeros(batch_size)
def data_generator_wrapper(annotation_lines, batch_size, input_shape, anchors, num_classes):
    n = len(annotation_lines)
    if n == 0 or batch_size <= 0:
        return None
    return data_generator(annotation_lines, batch_size, input_shape, anchors, num_classes)
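To inspect what the generator yields, one batch can be pulled manually. An illustrative sketch that assumes annotation_path, anchors_path, and classes_path as configured later in this notebook, with the image files on disk:
with open(annotation_path) as f:
    sample_lines = f.readlines()
gen = data_generator_wrapper(sample_lines, 4, (416, 416),
                             get_anchors(anchors_path), len(get_classes(classes_path)))
batch_inputs, dummy_targets = next(gen)  # targets are dummy zeros; the real loss lives in the yolo_loss layer
print([x.shape for x in batch_inputs])   # image batch plus the three y_true tensors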
#!python keras-yolo3/convert.py keras-yolo3/yolov3.cfg yolov3.weights yolo.h5
# data
!rm -rf DS6050
!git lfs clone https://github.com/awells-uva/DS6050.git
WARNING: 'git lfs clone' is deprecated and will not be updated
with new flags from 'git clone'
'git clone' has been updated in upstream Git to have comparable
speeds to 'git lfs clone'.
Cloning into 'DS6050'...
remote: Enumerating objects: 9695, done.
remote: Counting objects: 100% (9695/9695), done.
remote: Compressing objects: 100% (9682/9682), done.
remote: Total 9695 (delta 26), reused 9674 (delta 10), pack-reused 0
Receiving objects: 100% (9695/9695), 15.36 MiB | 10.63 MiB/s, done.
Resolving deltas: 100% (26/26), done.
Git LFS: (9609 of 9609 files) 209.52 MB / 209.52 MB
%ls DS6050
data/                                      mask_detection.v8-test-v3.yolokeras/
mask_detection.v10-test-v5.yolokeras/      mask_detection.v9-test-v4.yolokeras/
mask_detection.v12-faster-r-cnn.tfrecord/  README.md
mask_detection.v2-test-v1.yolokeras/       testimages/
mask_detection.v5-test-v2.yolokeras/
#data_version = 'mask_detection.v5-test-v2.yolokeras'
data_version = 'mask_detection.v10-test-v5.yolokeras'
datapath = '/content/DS6050/{}/'.format(data_version)
!rm -rf /content/logs/000/
!mkdir -p /content/logs/000/
config_path = '/content/yolov3/yolov3.cfg'
weights_path = '/content/yolov3.weights'
output_path = '/content/yolo.h5'
annotation_path = datapath + 'train/_annotations.txt' # path to Roboflow data annotations
log_dir = '/content/logs/000/' # where we're storing our logs
classes_path = datapath + 'train/_classes.txt' # path to Roboflow class names
anchors_path = '/content/yolov3/yolo_anchors.txt'
weights_only = False
shutil.copy2(classes_path,'/content/_classes.txt')
'/content/_classes.txt'
%ls
_classes.txt DS6050/ logs/ sample_data/ yolov3/ yolov3.weights
file1 = open(annotation_path, 'r')
file2 = open('_updated_annotations.txt', 'w')

# Prefix each annotation line with the absolute path to the training images.
Lines = file1.readlines()
for line in Lines:
    file2.writelines(datapath + 'train/' + line)

file2.close()
file1.close()
annotation_path = '_updated_annotations.txt'
!head _updated_annotations.txt
/content/DS6050/mask_detection.v10-test-v5.yolokeras/train/image-from-rawpixel-id-2273093-jpeg_jpg.rf.bd8ef0fcf003bb7907d8423fe7d357c1.jpg 111,24,254,290,0
/content/DS6050/mask_detection.v10-test-v5.yolokeras/train/file-20200612-153849-1ugbxy6_jpg.rf.bc44eb76965a37d9bc92fdde984c70ef.jpg 225,126,303,284,0
/content/DS6050/mask_detection.v10-test-v5.yolokeras/train/newFile-4_jpg.rf.bd7fc038e33675ea1b3b53123f4a64f0.jpg 150,12,256,215,1
/content/DS6050/mask_detection.v10-test-v5.yolokeras/train/106913050-16267190092021-07-19t180553z_126445762_rc2uno9mhy9g_rtrmadp_0_health-coronavirus-usa_jpeg.rf.bebd75106fe53b6c8e8928e42905223f.jpg 338,53,388,155,0 85,76,136,174,0 248,97,309,204,0
/content/DS6050/mask_detection.v10-test-v5.yolokeras/train/gettyimages-1217679928_jpg.rf.c3347ca71b9858d012440418799202e0.jpg 110,22,294,356,0
/content/DS6050/mask_detection.v10-test-v5.yolokeras/train/_121176522_gettyimages-1265082017_jpg.rf.bf117a7a98ca8a430acb011be3dbe323.jpg 203,40,286,187,0
/content/DS6050/mask_detection.v10-test-v5.yolokeras/train/GettyImages-1202826896_jpg.rf.becc7faca5d80e1ce2e19124492b5f2e.jpg 180,39,275,199,1
/content/DS6050/mask_detection.v10-test-v5.yolokeras/train/yell3_png.rf.c61a4a7a57c5940fa14aa72b93beeab9.jpg 116,0,349,411,0
/content/DS6050/mask_detection.v10-test-v5.yolokeras/train/2c4c0663-be3d-4820-bd99-9a7e12f6c8ef-Razer_Hazel_mask_concept--1-_jpg.rf.c3715814eb4e7c094ecc01dcfe717c48.jpg 145,59,241,222,0
/content/DS6050/mask_detection.v10-test-v5.yolokeras/train/with_mask_622_jpg.rf.bf5a88dad93f51e431f1387d5d072ea9.jpg 0,0,415,415,0
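Each line is an image path followed by whitespace-separated boxes in x_min,y_min,x_max,y_max,class_id form; a short sketch parsing the first record:
with open('_updated_annotations.txt') as f:
    first = f.readline().split()
image_path, boxes = first[0], [tuple(map(int, b.split(','))) for b in first[1:]]
print(image_path)
print(boxes)  # e.g. [(111, 24, 254, 290, 0)]; the last value indexes into _classes.txt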
ls
_classes.txt  _updated_annotations.txt  DS6050/  logs/  sample_data/  yolov3/  yolov3.weights
import sys
sys.path.append("/content/yolov3/")
from yolo3.model import preprocess_true_boxes, yolo_body, tiny_yolo_body, yolo_loss
from yolo3.utils import get_random_data
!rm -rf /content/yolo.h5
build_Keras_model(config_path, weights_path, output_path, weights_only)  # ~10 mins runtime
Loading weights.
Weights Header: 0 2 0 [32013312]
Parsing Darknet config.
Creating Keras model.
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/keras/backend/tensorflow_backend.py:74: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/keras/backend/tensorflow_backend.py:517: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.
Parsing section net_0
Parsing section convolutional_0
conv2d bn leaky (3, 3, 3, 32)
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/keras/backend/tensorflow_backend.py:4138: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/keras/backend/tensorflow_backend.py:174: The name tf.get_default_session is deprecated. Please use tf.compat.v1.get_default_session instead.
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/keras/backend/tensorflow_backend.py:181: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/keras/backend/tensorflow_backend.py:186: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/keras/backend/tensorflow_backend.py:190: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/keras/backend/tensorflow_backend.py:199: The name tf.is_variable_initialized is deprecated. Please use tf.compat.v1.is_variable_initialized instead.
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/keras/backend/tensorflow_backend.py:206: The name tf.variables_initializer is deprecated. Please use tf.compat.v1.variables_initializer instead.
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/keras/backend/tensorflow_backend.py:1834: The name tf.nn.fused_batch_norm is deprecated. Please use tf.compat.v1.nn.fused_batch_norm instead.
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/keras/backend/tensorflow_backend.py:133: The name tf.placeholder_with_default is deprecated. Please use tf.compat.v1.placeholder_with_default instead.
Parsing section convolutional_1
conv2d bn leaky (3, 3, 32, 64)
Parsing section convolutional_2
conv2d bn leaky (1, 1, 64, 32)
Parsing section convolutional_3
conv2d bn leaky (3, 3, 32, 64)
Parsing section shortcut_0
Parsing section convolutional_4
conv2d bn leaky (3, 3, 64, 128)
Parsing section convolutional_5
conv2d bn leaky (1, 1, 128, 64)
Parsing section convolutional_6
conv2d bn leaky (3, 3, 64, 128)
Parsing section shortcut_1
Parsing section convolutional_7
conv2d bn leaky (1, 1, 128, 64)
Parsing section convolutional_8
conv2d bn leaky (3, 3, 64, 128)
Parsing section shortcut_2
Parsing section convolutional_9
conv2d bn leaky (3, 3, 128, 256)
Parsing section convolutional_10
conv2d bn leaky (1, 1, 256, 128)
Parsing section convolutional_11
conv2d bn leaky (3, 3, 128, 256)
Parsing section shortcut_3
Parsing section convolutional_12
conv2d bn leaky (1, 1, 256, 128)
Parsing section convolutional_13
conv2d bn leaky (3, 3, 128, 256)
Parsing section shortcut_4
Parsing section convolutional_14
conv2d bn leaky (1, 1, 256, 128)
Parsing section convolutional_15
conv2d bn leaky (3, 3, 128, 256)
Parsing section shortcut_5
Parsing section convolutional_16
conv2d bn leaky (1, 1, 256, 128)
Parsing section convolutional_17
conv2d bn leaky (3, 3, 128, 256)
Parsing section shortcut_6
Parsing section convolutional_18
conv2d bn leaky (1, 1, 256, 128)
Parsing section convolutional_19
conv2d bn leaky (3, 3, 128, 256)
Parsing section shortcut_7
Parsing section convolutional_20
conv2d bn leaky (1, 1, 256, 128)
Parsing section convolutional_21
conv2d bn leaky (3, 3, 128, 256)
Parsing section shortcut_8
Parsing section convolutional_22
conv2d bn leaky (1, 1, 256, 128)
Parsing section convolutional_23
conv2d bn leaky (3, 3, 128, 256)
Parsing section shortcut_9
Parsing section convolutional_24
conv2d bn leaky (1, 1, 256, 128)
Parsing section convolutional_25
conv2d bn leaky (3, 3, 128, 256)
Parsing section shortcut_10
Parsing section convolutional_26
conv2d bn leaky (3, 3, 256, 512)
Parsing section convolutional_27
conv2d bn leaky (1, 1, 512, 256)
Parsing section convolutional_28
conv2d bn leaky (3, 3, 256, 512)
Parsing section shortcut_11
Parsing section convolutional_29
conv2d bn leaky (1, 1, 512, 256)
Parsing section convolutional_30
conv2d bn leaky (3, 3, 256, 512)
Parsing section shortcut_12
Parsing section convolutional_31
conv2d bn leaky (1, 1, 512, 256)
Parsing section convolutional_32
conv2d bn leaky (3, 3, 256, 512)
Parsing section shortcut_13
Parsing section convolutional_33
conv2d bn leaky (1, 1, 512, 256)
Parsing section convolutional_34
conv2d bn leaky (3, 3, 256, 512)
Parsing section shortcut_14
Parsing section convolutional_35
conv2d bn leaky (1, 1, 512, 256)
Parsing section convolutional_36
conv2d bn leaky (3, 3, 256, 512)
Parsing section shortcut_15
Parsing section convolutional_37
conv2d bn leaky (1, 1, 512, 256)
Parsing section convolutional_38
conv2d bn leaky (3, 3, 256, 512)
Parsing section shortcut_16
Parsing section convolutional_39
conv2d bn leaky (1, 1, 512, 256)
Parsing section convolutional_40
conv2d bn leaky (3, 3, 256, 512)
Parsing section shortcut_17
Parsing section convolutional_41
conv2d bn leaky (1, 1, 512, 256)
Parsing section convolutional_42
conv2d bn leaky (3, 3, 256, 512)
Parsing section shortcut_18
Parsing section convolutional_43
conv2d bn leaky (3, 3, 512, 1024)
Parsing section convolutional_44
conv2d bn leaky (1, 1, 1024, 512)
Parsing section convolutional_45
conv2d bn leaky (3, 3, 512, 1024)
Parsing section shortcut_19
Parsing section convolutional_46
conv2d bn leaky (1, 1, 1024, 512)
Parsing section convolutional_47
conv2d bn leaky (3, 3, 512, 1024)
Parsing section shortcut_20
Parsing section convolutional_48
conv2d bn leaky (1, 1, 1024, 512)
Parsing section convolutional_49
conv2d bn leaky (3, 3, 512, 1024)
Parsing section shortcut_21
Parsing section convolutional_50
conv2d bn leaky (1, 1, 1024, 512)
Parsing section convolutional_51
conv2d bn leaky (3, 3, 512, 1024)
Parsing section shortcut_22
Parsing section convolutional_52
conv2d bn leaky (1, 1, 1024, 512)
Parsing section convolutional_53
conv2d bn leaky (3, 3, 512, 1024)
Parsing section convolutional_54
conv2d bn leaky (1, 1, 1024, 512)
Parsing section convolutional_55
conv2d bn leaky (3, 3, 512, 1024)
Parsing section convolutional_56
conv2d bn leaky (1, 1, 1024, 512)
Parsing section convolutional_57
conv2d bn leaky (3, 3, 512, 1024)
Parsing section convolutional_58
conv2d linear (1, 1, 1024, 255)
Parsing section yolo_0
Parsing section route_0
Parsing section convolutional_59
conv2d bn leaky (1, 1, 512, 256)
Parsing section upsample_0
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/keras/backend/tensorflow_backend.py:2018: The name tf.image.resize_nearest_neighbor is deprecated. Please use tf.compat.v1.image.resize_nearest_neighbor instead.
Parsing section route_1
Concatenating route layers: [<tf.Tensor 'up_sampling2d_1/ResizeNearestNeighbor:0' shape=(?, ?, ?, 256) dtype=float32>, <tf.Tensor 'add_19/add:0' shape=(?, ?, ?, 512) dtype=float32>]
Parsing section convolutional_60
conv2d bn leaky (1, 1, 768, 256)
Parsing section convolutional_61
conv2d bn leaky (3, 3, 256, 512)
Parsing section convolutional_62
conv2d bn leaky (1, 1, 512, 256)
Parsing section convolutional_63
conv2d bn leaky (3, 3, 256, 512)
Parsing section convolutional_64
conv2d bn leaky (1, 1, 512, 256)
Parsing section convolutional_65
conv2d bn leaky (3, 3, 256, 512)
Parsing section convolutional_66
conv2d linear (1, 1, 512, 255)
Parsing section yolo_1
Parsing section route_2
Parsing section convolutional_67
conv2d bn leaky (1, 1, 256, 128)
Parsing section upsample_1
Parsing section route_3
Concatenating route layers: [<tf.Tensor 'up_sampling2d_2/ResizeNearestNeighbor:0' shape=(?, ?, ?, 128) dtype=float32>, <tf.Tensor 'add_11/add:0' shape=(?, ?, ?, 256) dtype=float32>]
Parsing section convolutional_68
conv2d bn leaky (1, 1, 384, 128)
Parsing section convolutional_69
conv2d bn leaky (3, 3, 128, 256)
Parsing section convolutional_70
conv2d bn leaky (1, 1, 256, 128)
Parsing section convolutional_71
conv2d bn leaky (3, 3, 128, 256)
Parsing section convolutional_72
conv2d bn leaky (1, 1, 256, 128)
Parsing section convolutional_73
conv2d bn leaky (3, 3, 128, 256)
Parsing section convolutional_74
conv2d linear (1, 1, 256, 255)
Parsing section yolo_2
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) (None, None, None, 3 0
__________________________________________________________________________________________________
conv2d_1 (Conv2D) (None, None, None, 3 864 input_1[0][0]
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, None, None, 3 128 conv2d_1[0][0]
__________________________________________________________________________________________________
leaky_re_lu_1 (LeakyReLU) (None, None, None, 3 0 batch_normalization_1[0][0]
__________________________________________________________________________________________________
zero_padding2d_1 (ZeroPadding2D (None, None, None, 3 0 leaky_re_lu_1[0][0]
__________________________________________________________________________________________________
conv2d_2 (Conv2D) (None, None, None, 6 18432 zero_padding2d_1[0][0]
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, None, None, 6 256 conv2d_2[0][0]
__________________________________________________________________________________________________
leaky_re_lu_2 (LeakyReLU) (None, None, None, 6 0 batch_normalization_2[0][0]
__________________________________________________________________________________________________
conv2d_3 (Conv2D) (None, None, None, 3 2048 leaky_re_lu_2[0][0]
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, None, None, 3 128 conv2d_3[0][0]
__________________________________________________________________________________________________
leaky_re_lu_3 (LeakyReLU) (None, None, None, 3 0 batch_normalization_3[0][0]
__________________________________________________________________________________________________
conv2d_4 (Conv2D) (None, None, None, 6 18432 leaky_re_lu_3[0][0]
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, None, None, 6 256 conv2d_4[0][0]
__________________________________________________________________________________________________
leaky_re_lu_4 (LeakyReLU) (None, None, None, 6 0 batch_normalization_4[0][0]
__________________________________________________________________________________________________
add_1 (Add) (None, None, None, 6 0 leaky_re_lu_2[0][0]
leaky_re_lu_4[0][0]
__________________________________________________________________________________________________
zero_padding2d_2 (ZeroPadding2D (None, None, None, 6 0 add_1[0][0]
__________________________________________________________________________________________________
conv2d_5 (Conv2D) (None, None, None, 1 73728 zero_padding2d_2[0][0]
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, None, None, 1 512 conv2d_5[0][0]
__________________________________________________________________________________________________
leaky_re_lu_5 (LeakyReLU) (None, None, None, 1 0 batch_normalization_5[0][0]
__________________________________________________________________________________________________
conv2d_6 (Conv2D) (None, None, None, 6 8192 leaky_re_lu_5[0][0]
__________________________________________________________________________________________________
batch_normalization_6 (BatchNor (None, None, None, 6 256 conv2d_6[0][0]
__________________________________________________________________________________________________
leaky_re_lu_6 (LeakyReLU) (None, None, None, 6 0 batch_normalization_6[0][0]
__________________________________________________________________________________________________
conv2d_7 (Conv2D) (None, None, None, 1 73728 leaky_re_lu_6[0][0]
__________________________________________________________________________________________________
batch_normalization_7 (BatchNor (None, None, None, 1 512 conv2d_7[0][0]
__________________________________________________________________________________________________
leaky_re_lu_7 (LeakyReLU) (None, None, None, 1 0 batch_normalization_7[0][0]
__________________________________________________________________________________________________
add_2 (Add) (None, None, None, 1 0 leaky_re_lu_5[0][0]
leaky_re_lu_7[0][0]
__________________________________________________________________________________________________
conv2d_8 (Conv2D) (None, None, None, 6 8192 add_2[0][0]
__________________________________________________________________________________________________
batch_normalization_8 (BatchNor (None, None, None, 6 256 conv2d_8[0][0]
__________________________________________________________________________________________________
leaky_re_lu_8 (LeakyReLU) (None, None, None, 6 0 batch_normalization_8[0][0]
__________________________________________________________________________________________________
conv2d_9 (Conv2D) (None, None, None, 1 73728 leaky_re_lu_8[0][0]
__________________________________________________________________________________________________
batch_normalization_9 (BatchNor (None, None, None, 1 512 conv2d_9[0][0]
__________________________________________________________________________________________________
leaky_re_lu_9 (LeakyReLU) (None, None, None, 1 0 batch_normalization_9[0][0]
__________________________________________________________________________________________________
add_3 (Add) (None, None, None, 1 0 add_2[0][0]
leaky_re_lu_9[0][0]
__________________________________________________________________________________________________
zero_padding2d_3 (ZeroPadding2D (None, None, None, 1 0 add_3[0][0]
__________________________________________________________________________________________________
conv2d_10 (Conv2D) (None, None, None, 2 294912 zero_padding2d_3[0][0]
__________________________________________________________________________________________________
batch_normalization_10 (BatchNo (None, None, None, 2 1024 conv2d_10[0][0]
__________________________________________________________________________________________________
leaky_re_lu_10 (LeakyReLU) (None, None, None, 2 0 batch_normalization_10[0][0]
__________________________________________________________________________________________________
conv2d_11 (Conv2D) (None, None, None, 1 32768 leaky_re_lu_10[0][0]
__________________________________________________________________________________________________
batch_normalization_11 (BatchNo (None, None, None, 1 512 conv2d_11[0][0]
__________________________________________________________________________________________________
leaky_re_lu_11 (LeakyReLU) (None, None, None, 1 0 batch_normalization_11[0][0]
__________________________________________________________________________________________________
conv2d_12 (Conv2D) (None, None, None, 2 294912 leaky_re_lu_11[0][0]
__________________________________________________________________________________________________
batch_normalization_12 (BatchNo (None, None, None, 2 1024 conv2d_12[0][0]
__________________________________________________________________________________________________
leaky_re_lu_12 (LeakyReLU) (None, None, None, 2 0 batch_normalization_12[0][0]
__________________________________________________________________________________________________
add_4 (Add) (None, None, None, 2 0 leaky_re_lu_10[0][0]
leaky_re_lu_12[0][0]
__________________________________________________________________________________________________
conv2d_13 (Conv2D) (None, None, None, 1 32768 add_4[0][0]
__________________________________________________________________________________________________
batch_normalization_13 (BatchNo (None, None, None, 1 512 conv2d_13[0][0]
__________________________________________________________________________________________________
leaky_re_lu_13 (LeakyReLU) (None, None, None, 1 0 batch_normalization_13[0][0]
__________________________________________________________________________________________________
conv2d_14 (Conv2D) (None, None, None, 2 294912 leaky_re_lu_13[0][0]
__________________________________________________________________________________________________
batch_normalization_14 (BatchNo (None, None, None, 2 1024 conv2d_14[0][0]
__________________________________________________________________________________________________
leaky_re_lu_14 (LeakyReLU) (None, None, None, 2 0 batch_normalization_14[0][0]
__________________________________________________________________________________________________
add_5 (Add) (None, None, None, 2 0 add_4[0][0]
leaky_re_lu_14[0][0]
__________________________________________________________________________________________________
conv2d_15 (Conv2D) (None, None, None, 1 32768 add_5[0][0]
__________________________________________________________________________________________________
batch_normalization_15 (BatchNo (None, None, None, 1 512 conv2d_15[0][0]
__________________________________________________________________________________________________
leaky_re_lu_15 (LeakyReLU) (None, None, None, 1 0 batch_normalization_15[0][0]
__________________________________________________________________________________________________
conv2d_16 (Conv2D) (None, None, None, 2 294912 leaky_re_lu_15[0][0]
__________________________________________________________________________________________________
batch_normalization_16 (BatchNo (None, None, None, 2 1024 conv2d_16[0][0]
__________________________________________________________________________________________________
leaky_re_lu_16 (LeakyReLU) (None, None, None, 2 0 batch_normalization_16[0][0]
__________________________________________________________________________________________________
add_6 (Add) (None, None, None, 2 0 add_5[0][0]
leaky_re_lu_16[0][0]
__________________________________________________________________________________________________
conv2d_17 (Conv2D) (None, None, None, 1 32768 add_6[0][0]
__________________________________________________________________________________________________
batch_normalization_17 (BatchNo (None, None, None, 1 512 conv2d_17[0][0]
__________________________________________________________________________________________________
leaky_re_lu_17 (LeakyReLU) (None, None, None, 1 0 batch_normalization_17[0][0]
__________________________________________________________________________________________________
conv2d_18 (Conv2D) (None, None, None, 2 294912 leaky_re_lu_17[0][0]
__________________________________________________________________________________________________
batch_normalization_18 (BatchNo (None, None, None, 2 1024 conv2d_18[0][0]
__________________________________________________________________________________________________
leaky_re_lu_18 (LeakyReLU) (None, None, None, 2 0 batch_normalization_18[0][0]
__________________________________________________________________________________________________
add_7 (Add) (None, None, None, 2 0 add_6[0][0]
leaky_re_lu_18[0][0]
__________________________________________________________________________________________________
conv2d_19 (Conv2D) (None, None, None, 1 32768 add_7[0][0]
__________________________________________________________________________________________________
batch_normalization_19 (BatchNo (None, None, None, 1 512 conv2d_19[0][0]
__________________________________________________________________________________________________
leaky_re_lu_19 (LeakyReLU) (None, None, None, 1 0 batch_normalization_19[0][0]
__________________________________________________________________________________________________
conv2d_20 (Conv2D) (None, None, None, 2 294912 leaky_re_lu_19[0][0]
__________________________________________________________________________________________________
batch_normalization_20 (BatchNo (None, None, None, 2 1024 conv2d_20[0][0]
__________________________________________________________________________________________________
leaky_re_lu_20 (LeakyReLU) (None, None, None, 2 0 batch_normalization_20[0][0]
__________________________________________________________________________________________________
add_8 (Add) (None, None, None, 2 0 add_7[0][0]
leaky_re_lu_20[0][0]
__________________________________________________________________________________________________
conv2d_21 (Conv2D) (None, None, None, 1 32768 add_8[0][0]
__________________________________________________________________________________________________
batch_normalization_21 (BatchNo (None, None, None, 1 512 conv2d_21[0][0]
__________________________________________________________________________________________________
leaky_re_lu_21 (LeakyReLU) (None, None, None, 1 0 batch_normalization_21[0][0]
__________________________________________________________________________________________________
conv2d_22 (Conv2D) (None, None, None, 2 294912 leaky_re_lu_21[0][0]
__________________________________________________________________________________________________
batch_normalization_22 (BatchNo (None, None, None, 2 1024 conv2d_22[0][0]
__________________________________________________________________________________________________
leaky_re_lu_22 (LeakyReLU) (None, None, None, 2 0 batch_normalization_22[0][0]
__________________________________________________________________________________________________
add_9 (Add) (None, None, None, 2 0 add_8[0][0]
leaky_re_lu_22[0][0]
__________________________________________________________________________________________________
conv2d_23 (Conv2D) (None, None, None, 1 32768 add_9[0][0]
__________________________________________________________________________________________________
batch_normalization_23 (BatchNo (None, None, None, 1 512 conv2d_23[0][0]
__________________________________________________________________________________________________
leaky_re_lu_23 (LeakyReLU) (None, None, None, 1 0 batch_normalization_23[0][0]
__________________________________________________________________________________________________
conv2d_24 (Conv2D) (None, None, None, 2 294912 leaky_re_lu_23[0][0]
__________________________________________________________________________________________________
batch_normalization_24 (BatchNo (None, None, None, 2 1024 conv2d_24[0][0]
__________________________________________________________________________________________________
leaky_re_lu_24 (LeakyReLU) (None, None, None, 2 0 batch_normalization_24[0][0]
__________________________________________________________________________________________________
add_10 (Add) (None, None, None, 2 0 add_9[0][0]
leaky_re_lu_24[0][0]
__________________________________________________________________________________________________
conv2d_25 (Conv2D) (None, None, None, 1 32768 add_10[0][0]
__________________________________________________________________________________________________
batch_normalization_25 (BatchNo (None, None, None, 1 512 conv2d_25[0][0]
__________________________________________________________________________________________________
leaky_re_lu_25 (LeakyReLU) (None, None, None, 1 0 batch_normalization_25[0][0]
__________________________________________________________________________________________________
conv2d_26 (Conv2D) (None, None, None, 2 294912 leaky_re_lu_25[0][0]
__________________________________________________________________________________________________
batch_normalization_26 (BatchNo (None, None, None, 2 1024 conv2d_26[0][0]
__________________________________________________________________________________________________
leaky_re_lu_26 (LeakyReLU) (None, None, None, 2 0 batch_normalization_26[0][0]
__________________________________________________________________________________________________
add_11 (Add) (None, None, None, 2 0 add_10[0][0]
leaky_re_lu_26[0][0]
__________________________________________________________________________________________________
zero_padding2d_4 (ZeroPadding2D (None, None, None, 2 0 add_11[0][0]
__________________________________________________________________________________________________
conv2d_27 (Conv2D) (None, None, None, 5 1179648 zero_padding2d_4[0][0]
__________________________________________________________________________________________________
batch_normalization_27 (BatchNo (None, None, None, 5 2048 conv2d_27[0][0]
__________________________________________________________________________________________________
leaky_re_lu_27 (LeakyReLU) (None, None, None, 5 0 batch_normalization_27[0][0]
__________________________________________________________________________________________________
conv2d_28 (Conv2D) (None, None, None, 2 131072 leaky_re_lu_27[0][0]
__________________________________________________________________________________________________
batch_normalization_28 (BatchNo (None, None, None, 2 1024 conv2d_28[0][0]
__________________________________________________________________________________________________
leaky_re_lu_28 (LeakyReLU) (None, None, None, 2 0 batch_normalization_28[0][0]
__________________________________________________________________________________________________
conv2d_29 (Conv2D) (None, None, None, 5 1179648 leaky_re_lu_28[0][0]
__________________________________________________________________________________________________
batch_normalization_29 (BatchNo (None, None, None, 5 2048 conv2d_29[0][0]
__________________________________________________________________________________________________
leaky_re_lu_29 (LeakyReLU) (None, None, None, 5 0 batch_normalization_29[0][0]
__________________________________________________________________________________________________
add_12 (Add) (None, None, None, 5 0 leaky_re_lu_27[0][0]
leaky_re_lu_29[0][0]
__________________________________________________________________________________________________
conv2d_30 (Conv2D) (None, None, None, 2 131072 add_12[0][0]
__________________________________________________________________________________________________
batch_normalization_30 (BatchNo (None, None, None, 2 1024 conv2d_30[0][0]
__________________________________________________________________________________________________
leaky_re_lu_30 (LeakyReLU) (None, None, None, 2 0 batch_normalization_30[0][0]
__________________________________________________________________________________________________
conv2d_31 (Conv2D) (None, None, None, 5 1179648 leaky_re_lu_30[0][0]
__________________________________________________________________________________________________
batch_normalization_31 (BatchNo (None, None, None, 5 2048 conv2d_31[0][0]
__________________________________________________________________________________________________
leaky_re_lu_31 (LeakyReLU) (None, None, None, 5 0 batch_normalization_31[0][0]
__________________________________________________________________________________________________
add_13 (Add) (None, None, None, 5 0 add_12[0][0]
leaky_re_lu_31[0][0]
__________________________________________________________________________________________________
conv2d_32 (Conv2D) (None, None, None, 2 131072 add_13[0][0]
__________________________________________________________________________________________________
batch_normalization_32 (BatchNo (None, None, None, 2 1024 conv2d_32[0][0]
__________________________________________________________________________________________________
leaky_re_lu_32 (LeakyReLU) (None, None, None, 2 0 batch_normalization_32[0][0]
__________________________________________________________________________________________________
conv2d_33 (Conv2D) (None, None, None, 5 1179648 leaky_re_lu_32[0][0]
__________________________________________________________________________________________________
batch_normalization_33 (BatchNo (None, None, None, 5 2048 conv2d_33[0][0]
__________________________________________________________________________________________________
leaky_re_lu_33 (LeakyReLU) (None, None, None, 5 0 batch_normalization_33[0][0]
__________________________________________________________________________________________________
add_14 (Add) (None, None, None, 5 0 add_13[0][0]
leaky_re_lu_33[0][0]
__________________________________________________________________________________________________
conv2d_34 (Conv2D) (None, None, None, 2 131072 add_14[0][0]
__________________________________________________________________________________________________
batch_normalization_34 (BatchNo (None, None, None, 2 1024 conv2d_34[0][0]
__________________________________________________________________________________________________
leaky_re_lu_34 (LeakyReLU) (None, None, None, 2 0 batch_normalization_34[0][0]
__________________________________________________________________________________________________
conv2d_35 (Conv2D) (None, None, None, 5 1179648 leaky_re_lu_34[0][0]
__________________________________________________________________________________________________
batch_normalization_35 (BatchNo (None, None, None, 5 2048 conv2d_35[0][0]
__________________________________________________________________________________________________
... [remainder of the model.summary() table truncated by the notebook's column width: residual blocks add_15 through add_23 complete the Darknet-53 backbone; two upsample-and-concatenate branches (up_sampling2d_1 + concatenate_1 joining add_19, and up_sampling2d_2 + concatenate_2 joining add_11) build the feature pyramid; and three 1x1 detection convolutions close the graph — conv2d_59 (261,375 params), conv2d_67 (130,815 params), and conv2d_75 (65,535 params), each emitting the 255 channels of the COCO-pretrained heads] ...
==================================================================================================
Total params: 62,001,757
Trainable params: 61,949,149
Non-trainable params: 52,608
__________________________________________________________________________________________________
None
Saved Keras model to /content/yolo.h5
Read 62001757 of 62001757.0 from Darknet weights.
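The converted graph ends in three 1x1 detection convolutions sized for COCO, and the 52,608 non-trainable parameters reported above are the BatchNormalization moving means and variances (two per channel), which are updated during the forward pass rather than by gradient descent. As a quick sanity check, the head widths and the conv2d_59 parameter count follow directly from YOLOv3's 3-anchors-per-scale layout; the helper below is hypothetical (not part of the conversion script):

```python
def head_channels(num_classes, anchors_per_scale=3):
    # Hypothetical helper: each anchor predicts 4 box offsets + 1 objectness
    # score + one score per class, with 3 anchors at each of the 3 scales.
    return anchors_per_scale * (num_classes + 5)

print(head_channels(80))  # 255 -> the COCO heads conv2d_59/67/75 above
print(head_channels(2))   # 21  -> the head width our 2-class mask model needs

# 1x1 conv over the 1024-channel stride-32 feature map: weights + biases.
print(1024 * head_channels(80) + head_channels(80))  # 261375, matching the summary
```

The later "Skipping loading of weights" warnings are exactly this 255-vs-21 mismatch: the three COCO heads cannot be loaded into the 2-class model and are re-initialized instead.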
!rm -rf logs/000/trained_weights_stage_1.h5
!rm -rf logs/000/trained_weights_final.h5
batch_size = 4  # setting this too high (>16) causes out-of-memory errors
num_epochs_initial = 10
num_epochs_hyper = 25
model = train_model(annotation_path, log_dir, classes_path, anchors_path, batch_size, num_epochs_initial, num_epochs_hyper)
-------------------CLASS NAMES-------------------
['mask', 'no_mask']
-------------------CLASS NAMES-------------------
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/keras/backend/tensorflow_backend.py:95: The name tf.reset_default_graph is deprecated. Please use tf.compat.v1.reset_default_graph instead.
Create YOLOv3 model with 9 anchors and 2 classes.
/usr/local/lib/python3.7/dist-packages/keras/engine/saving.py:1140: UserWarning: Skipping loading of weights for layer conv2d_59 due to mismatch in shape ((1, 1, 1024, 21) vs (255, 1024, 1, 1)).
  weight_values[i].shape))
/usr/local/lib/python3.7/dist-packages/keras/engine/saving.py:1140: UserWarning: Skipping loading of weights for layer conv2d_59 due to mismatch in shape ((21,) vs (255,)).
  weight_values[i].shape))
/usr/local/lib/python3.7/dist-packages/keras/engine/saving.py:1140: UserWarning: Skipping loading of weights for layer conv2d_67 due to mismatch in shape ((1, 1, 512, 21) vs (255, 512, 1, 1)).
  weight_values[i].shape))
/usr/local/lib/python3.7/dist-packages/keras/engine/saving.py:1140: UserWarning: Skipping loading of weights for layer conv2d_67 due to mismatch in shape ((21,) vs (255,)).
  weight_values[i].shape))
/usr/local/lib/python3.7/dist-packages/keras/engine/saving.py:1140: UserWarning: Skipping loading of weights for layer conv2d_75 due to mismatch in shape ((1, 1, 256, 21) vs (255, 256, 1, 1)).
  weight_values[i].shape))
/usr/local/lib/python3.7/dist-packages/keras/engine/saving.py:1140: UserWarning: Skipping loading of weights for layer conv2d_75 due to mismatch in shape ((21,) vs (255,)).
  weight_values[i].shape))
Load weights yolo.h5.
Freeze the first 249 layers of total 252 layers.
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/keras/backend/tensorflow_backend.py:1521: The name tf.log is deprecated. Please use tf.math.log instead.
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/keras/backend/tensorflow_backend.py:3080: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/keras/optimizers.py:790: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead.
Train on 557 samples, val on 139 samples, with batch size 4.
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/keras/backend/tensorflow_backend.py:986: The name tf.assign_add is deprecated. Please use tf.compat.v1.assign_add instead.
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/keras/backend/tensorflow_backend.py:973: The name tf.assign is deprecated. Please use tf.compat.v1.assign instead.
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/keras/callbacks.py:850: The name tf.summary.merge_all is deprecated. Please use tf.compat.v1.summary.merge_all instead.
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/keras/callbacks.py:853: The name tf.summary.FileWriter is deprecated. Please use tf.compat.v1.summary.FileWriter instead.
Epoch 1/10
139/139 [==============================] - 45s 322ms/step - loss: 1222.3702 - val_loss: 127.7135
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/keras/callbacks.py:995: The name tf.Summary is deprecated. Please use tf.compat.v1.Summary instead.
Epoch 2/10
139/139 [==============================] - 39s 282ms/step - loss: 99.0727 - val_loss: 70.8478
Epoch 3/10
139/139 [==============================] - 39s 282ms/step - loss: 61.9392 - val_loss: 52.7887
Epoch 4/10
139/139 [==============================] - 39s 277ms/step - loss: 48.7122 - val_loss: 43.7561
Epoch 5/10
139/139 [==============================] - 39s 280ms/step - loss: 41.6988 - val_loss: 40.9459
Epoch 6/10
139/139 [==============================] - 39s 280ms/step - loss: 37.8581 - val_loss: 37.0810
Epoch 7/10
139/139 [==============================] - 39s 278ms/step - loss: 34.6459 - val_loss: 33.2931
Epoch 8/10
139/139 [==============================] - 38s 276ms/step - loss: 32.9988 - val_loss: 35.5727
Epoch 9/10
139/139 [==============================] - 39s 279ms/step - loss: 31.5315 - val_loss: 32.6325
Epoch 10/10
139/139 [==============================] - 39s 278ms/step - loss: 30.1777 - val_loss: 32.6869
Unfreeze all of the layers.
Train on 557 samples, val on 139 samples, with batch size 4.
[epochs 1-5 of the fine-tuning stage missing from the captured output]
Epoch 6/25
139/139 [==============================] - 49s 350ms/step - loss: 22.8506 - val_loss: 23.2341
Epoch 7/25
139/139 [==============================] - 40s 289ms/step - loss: 19.9743 - val_loss: 20.2192
Epoch 8/25
139/139 [==============================] - 41s 292ms/step - loss: 19.2252 - val_loss: 21.0395
Epoch 9/25
139/139 [==============================] - 40s 286ms/step - loss: 18.4752 - val_loss: 18.9935
Epoch 10/25
139/139 [==============================] - 40s 290ms/step - loss: 18.3767 - val_loss: 19.4963
Epoch 11/25
139/139 [==============================] - 40s 287ms/step - loss: 18.0468 - val_loss: 20.7277
Epoch 12/25
139/139 [==============================] - 40s 291ms/step - loss: 17.5654 - val_loss: 18.6538
Epoch 13/25
139/139 [==============================] - 40s 288ms/step - loss: 17.3950 - val_loss: 18.8702
Epoch 14/25
139/139 [==============================] - 40s 291ms/step - loss: 17.1694 - val_loss: 21.5211
Epoch 15/25
139/139 [==============================] - 41s 292ms/step - loss: 17.0529 - val_loss: 18.1147
Epoch 16/25
139/139 [==============================] - 40s 288ms/step - loss: 17.3251 - val_loss: 18.9349
Epoch 17/25
139/139 [==============================] - 40s 288ms/step - loss: 16.8210 - val_loss: 18.0090
Epoch 18/25
139/139 [==============================] - 40s 288ms/step - loss: 16.9153 - val_loss: 16.8069
Epoch 19/25
139/139 [==============================] - 40s 288ms/step - loss: 16.2392 - val_loss: 17.4608
Epoch 20/25
139/139 [==============================] - 41s 293ms/step - loss: 16.4715 - val_loss: 17.6285
Epoch 21/25
139/139 [==============================] - 40s 289ms/step - loss: 16.3811 - val_loss: 18.0646
Epoch 00021: ReduceLROnPlateau reducing learning rate to 9.999999747378752e-06.
Epoch 22/25
139/139 [==============================] - 41s 291ms/step - loss: 15.6396 - val_loss: 16.5627
Epoch 23/25
139/139 [==============================] - 41s 293ms/step - loss: 15.4715 - val_loss: 16.7291
Epoch 24/25
139/139 [==============================] - 40s 289ms/step - loss: 15.3050 - val_loss: 16.3573
Epoch 25/25
139/139 [==============================] - 40s 289ms/step - loss: 14.9949 - val_loss: 18.1343
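The log reflects the standard keras-yolo3 two-stage schedule: first the pretrained backbone is frozen (249 of 252 layers) so only the three re-initialized detection heads learn, then everything is unfrozen for fine-tuning at a lower learning rate with ReduceLROnPlateau. A minimal sketch of that flow, assuming placeholder generators `train_gen`/`val_gen` and the model compiled against the yolo_loss Lambda head — not the exact `train_model` source defined earlier:

```python
from keras.optimizers import Adam
from keras.callbacks import ModelCheckpoint, ReduceLROnPlateau, EarlyStopping

# Sketch only -- not the exact train_model implementation; train_gen/val_gen
# are placeholders for the data generators built from annotation_path.
# This checkpoint pattern produces the ep###-loss#-val_loss#.h5 files seen below.
checkpoint = ModelCheckpoint(
    log_dir + 'ep{epoch:03d}-loss{loss:.3f}-val_loss{val_loss:.3f}.h5',
    monitor='val_loss', save_weights_only=True, save_best_only=True, period=3)
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=3, verbose=1)
early_stopping = EarlyStopping(monitor='val_loss', min_delta=0, patience=10, verbose=1)

# Stage 1: train only the new heads on top of the frozen pretrained backbone.
for layer in model.layers[:249]:
    layer.trainable = False
model.compile(optimizer=Adam(lr=1e-3),
              loss={'yolo_loss': lambda y_true, y_pred: y_pred})
model.fit_generator(train_gen, steps_per_epoch=557 // batch_size,
                    validation_data=val_gen, validation_steps=139 // batch_size,
                    epochs=num_epochs_initial, callbacks=[checkpoint])

# Stage 2: unfreeze everything and fine-tune at a lower learning rate.
for layer in model.layers:
    layer.trainable = True
model.compile(optimizer=Adam(lr=1e-4),
              loss={'yolo_loss': lambda y_true, y_pred: y_pred})
model.fit_generator(train_gen, steps_per_epoch=557 // batch_size,
                    validation_data=val_gen, validation_steps=139 // batch_size,
                    epochs=num_epochs_hyper,
                    callbacks=[checkpoint, reduce_lr, early_stopping])
model.save_weights(log_dir + 'trained_weights_final.h5')
```

The loss drop from ~1222 to ~30 in stage 1 is the new heads converging; stage 2 then roughly halves the loss again, with the learning-rate reduction at epoch 21 consistent with `factor=0.1` on a 1e-4 fine-tuning rate.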
import matplotlib.pyplot as plt

# model.history only reflects the most recent fit call (the fine-tuning stage).
plt.figure(figsize=(10, 6))
plt.plot(range(1, len(model.history.history['loss']) + 1), model.history.history['loss'], label='loss')
plt.plot(range(1, len(model.history.history['val_loss']) + 1), model.history.history['val_loss'], label='val_loss')
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.title("Model Loss")
plt.legend()
plt.show()
%ls logs/000/
ep003-loss61.939-val_loss52.789.h5
ep006-loss37.858-val_loss37.081.h5
ep007-loss19.974-val_loss20.219.h5
ep009-loss31.532-val_loss32.632.h5
ep010-loss18.377-val_loss19.496.h5
ep013-loss17.395-val_loss18.870.h5
ep019-loss16.239-val_loss17.461.h5
ep022-loss15.640-val_loss16.563.h5
events.out.tfevents.1638716834.8e85ffe35ed2
events.out.tfevents.1638717268.8e85ffe35ed2
trained_weights_final.h5
trained_weights_stage_1.h5
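The listing is sparse because ModelCheckpoint only wrote every third epoch, and only when val_loss improved. A small helper to recover the best checkpoint by parsing val_loss out of the filenames — hypothetical, not part of the notebook's pipeline:

```python
import os
import re

def best_checkpoint(log_dir='logs/000/'):
    # Hypothetical helper: match the ep###-loss#-val_loss#.h5 pattern and
    # return the path with the lowest encoded validation loss.
    pattern = re.compile(r'ep(\d+)-loss([\d.]+)-val_loss([\d.]+)\.h5')
    candidates = []
    for name in os.listdir(log_dir):
        m = pattern.match(name)
        if m:
            candidates.append((float(m.group(3)), os.path.join(log_dir, name)))
    return min(candidates)[1] if candidates else None

print(best_checkpoint())  # expected here: logs/000/ep022-loss15.640-val_loss16.563.h5
```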
model.save('my_model')  # writes the full model (architecture + weights + optimizer state) as an HDF5 file
%tensorflow_version 1.x
import h5py
print("h5py Version: ", h5py.__version__) # needs to be 2.10.0
h5py Version: 2.10.0
import keras
print("Keras Version: ", keras.__version__) # needs to be 2.2.4
Keras Version: 2.2.4
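Since a Colab runtime restart can silently pull newer Keras/h5py builds and break weight loading, an optional guard cell (my addition, not in the original notebook) can fail fast instead of failing later with an obscure HDF5 error:

```python
import h5py
import keras

# Abort early if the runtime drifted from the versions this notebook is pinned to.
assert h5py.__version__ == '2.10.0', 'expected h5py 2.10.0, got ' + h5py.__version__
assert keras.__version__ == '2.2.4', 'expected Keras 2.2.4, got ' + keras.__version__
```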
import sys
import os
import configparser
import io
from collections import defaultdict
import numpy as np
import keras.backend as K
from keras.layers import Input, Lambda
from keras.models import Model
from keras.optimizers import Adam
from keras.callbacks import TensorBoard, ModelCheckpoint, ReduceLROnPlateau, EarlyStopping
!rm -rf yolov3
!git clone https://github.com/awells-uva/yolov3.git
Cloning into 'yolov3'...
remote: Enumerating objects: 67, done.
remote: Counting objects: 100% (67/67), done.
remote: Compressing objects: 100% (46/46), done.
remote: Total 67 (delta 33), reused 49 (delta 18), pack-reused 0
Unpacking objects: 100% (67/67), done.
#!python /content/yolov3/yolo_video.py --model="/content/logs/000/trained_weights_stage_1.h5" --classes="/content/DS6050/mask_detection.v5-test-v2.yolokeras/train/_classes.txt" --image
import h5py
h5py.File('/content/logs/000/trained_weights_final.h5','r')
<HDF5 file "trained_weights_final.h5" (mode r)>
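The handle above confirms the checkpoint is readable HDF5, but it is never closed. A safer variant (a minimal sketch, assuming the checkpoint was written by `save_weights`, so the top-level HDF5 groups are layer names) uses a context manager:

```python
import h5py

# Open read-only, peek at the layer groups, and release the handle automatically.
with h5py.File('/content/logs/000/trained_weights_final.h5', 'r') as f:
    print(len(f.keys()), 'top-level groups')
    print(sorted(f.keys())[:5])  # first few layer names
```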
#!python /content/yolov3/yolo_video.py --model="/content/logs/000/trained_weights_stage_1.h5" --classes="/content/_classes.txt" --image --testdir='/content/DS6050/testimages/'
!python /content/yolov3/yolo_video.py --model="/content/logs/000/trained_weights_final.h5" --classes="/content/_classes.txt" --image --testdir='/content/DS6050/testimages/'
Using TensorFlow backend.
Image detection mode
Ignoring remaining command line arguments: ./path2your_video,
[TensorFlow 1.15 startup log omitted: tf.compat.v1 deprecation warnings, CUDA library loading, and device discovery — XLA initialized for the Host and CUDA platforms, and a Tesla P100-PCIE-16GB (compute capability 6.0) registered as /device:GPU:0 with 6931 MB of memory]
/content/logs/000/trained_weights_final.h5 model, anchors, and classes loaded.
WARNING:tensorflow:From /tensorflow-1.15.2/python3.7/tensorflow_core/python/ops/array_ops.py:1475: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
(416, 416, 3)
[non-fatal grappler optimizer errors ("Subshape must have computed start >= end since stride is negative") and cuDNN/cuBLAS library-load messages omitted]
Found 1 boxes for img
no_mask 0.68 (765, 168) (1317, 962)
1.9709622589998617
Saving: /content/DS6050/testimages/boundedImages/_MG_0723_color_web_model_out.jpg
(416, 416, 3)
Found 2 boxes for img
mask 0.42 (750, 130) (843, 238)
mask 0.91 (320, 201) (402, 292)
0.06912714500003858
Saving: /content/DS6050/testimages/boundedImages/facemasks-5_model_out.jpg
(416, 416, 3)
Found 6 boxes for img
no_mask 0.98 (62, 36) (177, 195)
mask 0.82 (305, 260) (415, 435)
mask 0.90 (295, 30) (428, 201)
mask 0.94 (547, 259) (661, 434)
mask 0.94 (62, 269) (175, 446)
mask 0.94 (543, 28) (662, 202)
0.054203372999836574
Saving: /content/DS6050/testimages/boundedImages/DoubleRow_model_out.jpg
(416, 416, 3)
Found 2 boxes for img
mask 0.97 (123, 0) (358, 366)
mask 0.98 (450, 163) (700, 457)
0.045568737999929
Saving: /content/DS6050/testimages/boundedImages/mother_baby_homemade_masks_model_out.jpg
(416, 416, 3)
Found 3 boxes for img
no_mask 0.56 (375, 175) (432, 247)
mask 0.81 (449, 99) (527, 191)
mask 0.89 (212, 168) (280, 264)
0.046059320000040316
Saving: /content/DS6050/testimages/boundedImages/210728-new-york-pedestrians-mask-ac-818p_model_out.jpg
(416, 416, 3)
Found 3 boxes for img
mask 0.85 (162, 52) (241, 137)
mask 0.88 (450, 79) (510, 147)
mask 0.93 (314, 94) (359, 163)
0.04055269300033615
Saving: /content/DS6050/testimages/boundedImages/merlin_169651932_5e1345b4-6c9b-4308-89fa-7294c4efaa04-articleLarge_model_out.jpg
(416, 416, 3)
Found 1 boxes for img
no_mask 0.96 (232, 85) (642, 492)
0.049160357999880944
Saving: /content/DS6050/testimages/boundedImages/GettyImages-1232561615_model_out.jpg
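Each detection line follows the pattern `<class> <score> (<left>, <top>) (<right>, <bottom>)` in original-image pixel coordinates, and the trailing float is the per-image inference time in seconds (the first image includes GPU warm-up, hence ~2 s vs ~0.05 s after). A one-off parser for lines in that format — my sketch, not part of yolo_video.py:

```python
import re

# Hypothetical parser for the "<class> <score> (x1, y1) (x2, y2)" lines above.
line = 'no_mask 0.68 (765, 168) (1317, 962)'
m = re.match(r'(\w+) ([\d.]+) \((\d+), (\d+)\) \((\d+), (\d+)\)', line)
label, score = m.group(1), float(m.group(2))
left, top, right, bottom = (int(m.group(i)) for i in range(3, 7))
print(label, score, (left, top), (right, bottom))
```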
%ls logs/000
from IPython.display import Image, display
for image in os.listdir('/content/DS6050/testimages/boundedImages/'):
    display(Image(os.path.join('/content/DS6050/testimages/boundedImages/', image)))
from google.colab import files
def gen_bounding_box(uploaded):
    import os
    import sys
    import shutil
    from io import BytesIO
    from PIL import Image
    import matplotlib.pyplot as plt

    # Start from a clean scratch directory for the uploaded image.
    outdir = '/content/testimage/'
    if os.path.isdir(outdir):
        shutil.rmtree(outdir)
    os.mkdir(outdir)

    # files.upload() returns a dict mapping filename -> raw bytes.
    im = Image.open(BytesIO(uploaded[list(uploaded.keys())[0]]))
    plt.imshow(im)
    plt.show()

    # Spaces in the filename would break the shell command below.
    imgName = list(uploaded.keys())[0].replace(" ", "_")
    im.save(os.path.join(outdir, imgName))

    # Run detection over the scratch directory; annotated copies are written
    # to <outdir>/boundedImages/ by yolo_video.py.
    os.system('{} /content/yolov3/yolo_video.py --model="/content/logs/000/trained_weights_final.h5" --classes="/content/_classes.txt" --image --testdir={} > /dev/null 2>&1'.format(sys.executable, outdir))

    # Re-import IPython's Image here because the PIL import above shadows it.
    from IPython.display import Image, display
    for image in os.listdir('{}boundedImages/'.format(outdir)):
        display(Image(os.path.join('{}boundedImages/'.format(outdir), image)))
uploaded = files.upload()
gen_bounding_box(uploaded)
Saving Screen Shot 2021-12-05 at 11.11.14 AM.png to Screen Shot 2021-12-05 at 11.11.14 AM (1).png