Outdoor Parking Lot Spot Detection

2023-10-19 17:50
Tags: recognition, parking lot, parking spot, outdoor

This article walks through spot detection for an outdoor parking lot; hopefully it offers a useful reference for developers tackling similar problems - follow along and let's work through it together!


Parking Lot Spot Detection

    • 1. Inventory of available resources
    • 2. Implementation plan
      • Image preprocessing
      • Cropping the region of interest
      • Locating the parking spots
      • Detecting vacant spots
    • 3. Code
      • Project download link
    • Closing remarks

1. Inventory of available resources

Several overhead photos of the parking lot:

[Figures: two overhead photos of the parking lot]

OS: Windows 10, 64-bit
Language: Python 3.6
Runtime: TensorFlow 2.1.0 + Keras 2.3.1 + h5py 2.8.0
(I started out with the wrong h5py version and hit all kinds of errors.)
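
To avoid that kind of version trouble, it helps to pin all three packages up front when setting up the environment (assuming pip and a Python 3.6 install):

pip install tensorflow==2.1.0 keras==2.3.1 h5py==2.8.0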

2. Implementation plan

Image preprocessing

  1. Filter the background, keeping only the subject
  2. Convert to grayscale
  3. Remove noise
  4. Detect edges (a quick sketch of all four steps follows below)
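
A minimal sketch of these four steps with OpenCV, assuming an RGB input; the threshold and kernel values are illustrative, and the full version in parking.py below skips the explicit denoising step:

import cv2
import numpy as np

def preprocess(image):
    # 1. keep bright, white-ish pixels to filter out the background
    mask = cv2.inRange(image, np.uint8([120, 120, 120]), np.uint8([255, 255, 255]))
    masked = cv2.bitwise_and(image, image, mask=mask)
    # 2. convert to grayscale
    gray = cv2.cvtColor(masked, cv2.COLOR_RGB2GRAY)
    # 3. remove noise with a small Gaussian blur
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    # 4. Canny edge detection
    return cv2.Canny(blurred, 50, 200)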

Cropping the region of interest

  1. Take the five largest contours in the image
  2. Crop out the region that needs processing (one possible sketch follows below)
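
A rough sketch of the contour idea, assuming the edge map from the previous step as input; note that the final code in parking.py instead selects the region with a hand-picked polygon (see select_region):

import cv2

def crop_largest_regions(edges, image, n=5):
    # OpenCV 4 returns (contours, hierarchy); OpenCV 3 returns three values
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # keep the n largest contours by area
    biggest = sorted(contours, key=cv2.contourArea, reverse=True)[:n]
    crops = []
    for c in biggest:
        x, y, w, h = cv2.boundingRect(c)
        crops.append(image[y:y+h, x:x+w])
    return crops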

Locating the parking spots

  1. Detect straight lines with the Hough transform
  2. Filter out lines that are not spot boundaries
  3. Sort the lines into columns by their (x, y) coordinates (lines with similar x belong to the same column)
  4. Draw each column's rectangle from the averaged x1, x2 and from y_max, y_min
  5. Process several photos and keep the best result as the position parameters
  6. Crop out each spot to use as the training dataset (steps 1 and 2 are sketched below)
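
The core of steps 1 and 2, assuming the edge map as input; the parameter values mirror the ones used in parking.py below:

import cv2
import numpy as np

def detect_spot_lines(edges):
    # probabilistic Hough transform on the edge-detected image
    lines = cv2.HoughLinesP(edges, rho=0.1, theta=np.pi / 10, threshold=15,
                            minLineLength=9, maxLineGap=4)
    cleaned = []
    for line in lines:
        for x1, y1, x2, y2 in line:
            # keep near-horizontal segments of plausible spot width
            if abs(y2 - y1) <= 1 and 25 <= abs(x2 - x1) <= 55:
                cleaned.append((x1, y1, x2, y2))
    return cleaned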

Detecting vacant spots

  1. Classify each cropped spot image with the trained model (a minimal sketch follows below)
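
In essence, each cropped spot is resized to the model's 48x48 input and classified as empty or occupied; a minimal sketch, matching make_prediction in parking.py:

import cv2
import numpy as np
from keras.models import load_model

model = load_model('car1.h5')
class_dictionary = {0: 'empty', 1: 'occupied'}

def classify_spot(spot_img):
    # resize to the training input size and normalize to [0, 1]
    img = cv2.resize(spot_img, (48, 48)) / 255.
    # add the batch dimension -> shape (1, 48, 48, 3)
    batch = np.expand_dims(img, axis=0)
    return class_dictionary[np.argmax(model.predict(batch)[0])]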

3. Code

train.py

import os
from keras import applications, optimizers
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Model
from keras.layers import Flatten, Dense
from keras.callbacks import ModelCheckpoint, EarlyStopping

# count the training and validation images
files_train = 0
files_validation = 0

folder = 'train_data/train'
for sub_folder in os.listdir(folder):
    path, dirs, files = next(os.walk(os.path.join(folder, sub_folder)))
    files_train += len(files)

folder = 'train_data/test'
for sub_folder in os.listdir(folder):
    path, dirs, files = next(os.walk(os.path.join(folder, sub_folder)))
    files_validation += len(files)

print(files_train, files_validation)

img_width, img_height = 48, 48
train_data_dir = "train_data/train"
validation_data_dir = "train_data/test"
nb_train_samples = files_train
nb_validation_samples = files_validation
batch_size = 32
epochs = 15
num_classes = 2

# VGG16 pretrained on ImageNet, without the top classifier
model = applications.VGG16(weights='imagenet', include_top=False,
                           input_shape=(img_width, img_height, 3))

# freeze the first 10 layers
for layer in model.layers[:10]:
    layer.trainable = False

x = model.output
x = Flatten()(x)
predictions = Dense(num_classes, activation="softmax")(x)

model_final = Model(inputs=model.input, outputs=predictions)

model_final.compile(loss="categorical_crossentropy",
                    optimizer=optimizers.SGD(lr=0.0001, momentum=0.9),
                    metrics=["accuracy"])

train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    horizontal_flip=True,
    fill_mode="nearest",
    zoom_range=0.1,
    width_shift_range=0.1,
    height_shift_range=0.1,
    rotation_range=5)

test_datagen = ImageDataGenerator(
    rescale=1. / 255,
    horizontal_flip=True,
    fill_mode="nearest",
    zoom_range=0.1,
    width_shift_range=0.1,
    height_shift_range=0.1,
    rotation_range=5)

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode="categorical")

validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_height, img_width),
    class_mode="categorical")

# Keras 2.3 logs the accuracy metric as 'val_accuracy'
checkpoint = ModelCheckpoint("car1.h5", monitor='val_accuracy', verbose=1,
                             save_best_only=True, save_weights_only=False,
                             mode='auto', period=1)
early = EarlyStopping(monitor='val_accuracy', min_delta=0, patience=10,
                      verbose=1, mode='auto')

# steps_per_epoch/validation_steps replace the Keras 1 names
# samples_per_epoch/nb_val_samples and are counted in batches
history_object = model_final.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size,
    callbacks=[checkpoint, early])

park_test.py

from __future__ import division
import matplotlib.pyplot as plt
import cv2
import os, glob
import numpy as np
from PIL import Image
from keras.applications.imagenet_utils import preprocess_input
from keras.models import load_model
from keras.preprocessing import image
from Parking import Parking
import pickle
cwd = os.getcwd()

def img_process(test_images, park):
    # map() runs each preprocessing step over every test image
    white_yellow_images = list(map(park.select_rgb_white_yellow, test_images))
    park.show_images(white_yellow_images)

    gray_images = list(map(park.convert_gray_scale, white_yellow_images))
    park.show_images(gray_images)

    edge_images = list(map(lambda image: park.detect_edges(image), gray_images))
    park.show_images(edge_images)

    roi_images = list(map(park.select_region, edge_images))
    park.show_images(roi_images)

    list_of_lines = list(map(park.hough_lines, roi_images))

    line_images = []
    for image, lines in zip(test_images, list_of_lines):
        line_images.append(park.draw_lines(image, lines))
    park.show_images(line_images)

    rect_images = []
    rect_coords = []
    for image, lines in zip(test_images, list_of_lines):
        new_image, rects = park.identify_blocks(image, lines)
        rect_images.append(new_image)
        rect_coords.append(rects)
    park.show_images(rect_images)

    delineated = []
    spot_pos = []
    for image, rects in zip(test_images, rect_coords):
        new_image, spot_dict = park.draw_parking(image, rects)
        delineated.append(new_image)
        spot_pos.append(spot_dict)
    park.show_images(delineated)

    final_spot_dict = spot_pos[1]
    print(len(final_spot_dict))

    with open('spot_dict.pickle', 'wb') as handle:
        pickle.dump(final_spot_dict, handle, protocol=pickle.HIGHEST_PROTOCOL)

    park.save_images_for_cnn(test_images[0], final_spot_dict)
    return final_spot_dict

def keras_model(weights_path):
    model = load_model(weights_path)
    return model

def img_test(test_images, final_spot_dict, model, class_dictionary):
    # park is the module-level instance created under __main__
    for i in range(len(test_images)):
        predicted_images = park.predict_on_image(test_images[i], final_spot_dict, model, class_dictionary)

def video_test(video_name, final_spot_dict, model, class_dictionary):
    name = video_name
    cap = cv2.VideoCapture(name)
    park.predict_on_video(name, final_spot_dict, model, class_dictionary, ret=True)

if __name__ == '__main__':
    # test images
    test_images = [plt.imread(path) for path in glob.glob('test_images/*.jpg')]
    # trained model weights
    weights_path = 'car1.h5'
    # input video
    video_name = 'parking_video.mp4'
    # class-index to label mapping
    class_dictionary = {}
    class_dictionary[0] = 'empty'
    class_dictionary[1] = 'occupied'
    # instantiate the helper class
    park = Parking()
    park.show_images(test_images)
    final_spot_dict = img_process(test_images, park)
    model = keras_model(weights_path)
    img_test(test_images, final_spot_dict, model, class_dictionary)
    video_test(video_name, final_spot_dict, model, class_dictionary)

parking.py

import matplotlib.pyplot as plt
import cv2
import os, glob
import numpy as np
import operator


class Parking:

    def show_images(self, images, cmap=None):
        cols = 2
        rows = (len(images) + 1) // cols
        plt.figure(figsize=(15, 12))
        for i, image in enumerate(images):
            plt.subplot(rows, cols, i + 1)
            cmap = 'gray' if len(image.shape) == 2 else cmap
            plt.imshow(image, cmap=cmap)
            plt.xticks([])
            plt.yticks([])
        plt.tight_layout(pad=0, h_pad=0, w_pad=0)
        plt.show()

    def cv_show(self, name, img):
        cv2.imshow(name, img)
        cv2.waitKey(0)
        cv2.destroyAllWindows()

    # the preprocessing below could be tuned further
    def select_rgb_white_yellow(self, image):
        # filter out the background
        lower = np.uint8([120, 120, 120])
        upper = np.uint8([255, 255, 255])
        # pixels below lower or above upper become 0; values in between become 255,
        # which keeps the subject and filters out the background
        white_mask = cv2.inRange(image, lower, upper)
        self.cv_show('white_mask', white_mask)

        masked = cv2.bitwise_and(image, image, mask=white_mask)
        self.cv_show('masked', masked)
        return masked

    def convert_gray_scale(self, image):
        # grayscale conversion
        return cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)

    def detect_edges(self, image, low_threshold=50, high_threshold=200):
        # Canny edge detection with min and max thresholds
        return cv2.Canny(image, low_threshold, high_threshold)

    def filter_region(self, image, vertices):
        """Mask out everything outside the polygon built from the vertices."""
        mask = np.zeros_like(image)
        if len(mask.shape) == 2:
            cv2.fillPoly(mask, vertices, 255)
            self.cv_show('mask', mask)
        return cv2.bitwise_and(image, mask)

    def select_region(self, image):
        """
        Manually select the overall parking-lot region;
        the polygon points are chosen by eye as fractions of the image size.
        """
        # first, define the polygon by vertices
        rows, cols = image.shape[:2]
        pt_1 = [cols * 0.05, rows * 0.90]
        pt_2 = [cols * 0.05, rows * 0.70]
        pt_3 = [cols * 0.30, rows * 0.55]
        pt_4 = [cols * 0.6,  rows * 0.15]
        pt_5 = [cols * 0.90, rows * 0.15]
        pt_6 = [cols * 0.90, rows * 0.90]

        vertices = np.array([[pt_1, pt_2, pt_3, pt_4, pt_5, pt_6]], dtype=np.int32)
        point_img = image.copy()
        point_img = cv2.cvtColor(point_img, cv2.COLOR_GRAY2RGB)
        for point in vertices[0]:
            cv2.circle(point_img, (point[0], point[1]), 10, (0, 0, 255), 4)
        self.cv_show('point_img', point_img)

        return self.filter_region(image, vertices)

    def hough_lines(self, image):
        # the input must be the edge-detected image
        # minLineLength: segments shorter than this are ignored
        # maxLineGap: gaps smaller than this merge two segments into one line
        # rho: distance resolution; theta: angle resolution; threshold: minimum votes to accept a line
        # the P in HoughLinesP stands for probabilistic, which is faster
        return cv2.HoughLinesP(image, rho=0.1, theta=np.pi / 10, threshold=15, minLineLength=9, maxLineGap=4)

    def draw_lines(self, image, lines, color=[255, 0, 0], thickness=2, make_copy=True):
        # filter the lines detected by the Hough transform
        if make_copy:
            image = np.copy(image)
        cleaned = []
        for line in lines:
            for x1, y1, x2, y2 in line:
                # drop slanted lines and lines that are too long or too short
                if abs(y2 - y1) <= 1 and abs(x2 - x1) >= 25 and abs(x2 - x1) <= 55:
                    cleaned.append((x1, y1, x2, y2))
                    cv2.line(image, (x1, y1), (x2, y2), color, thickness)
        print("Lines detected: ", len(cleaned))
        return image

    def identify_blocks(self, image, lines, make_copy=True):
        if make_copy:
            new_image = np.copy(image)

        # Step 1: filter some of the lines
        cleaned = []
        for line in lines:
            for x1, y1, x2, y2 in line:
                if abs(y2 - y1) <= 1 and abs(x2 - x1) >= 25 and abs(x2 - x1) <= 55:
                    cleaned.append((x1, y1, x2, y2))

        # Step 2: sort the lines by x1
        list1 = sorted(cleaned, key=operator.itemgetter(0, 1))

        # Step 3: cluster into columns; each column is one lane of spots
        clusters = {}
        dIndex = 0
        clus_dist = 10

        for i in range(len(list1) - 1):
            distance = abs(list1[i + 1][0] - list1[i][0])
            if distance <= clus_dist:
                if not dIndex in clusters.keys():
                    clusters[dIndex] = []
                clusters[dIndex].append(list1[i])
                clusters[dIndex].append(list1[i + 1])
            else:
                dIndex += 1

        # Step 4: compute a bounding rectangle for each column
        rects = {}
        i = 0
        for key in clusters:
            all_list = clusters[key]
            cleaned = list(set(all_list))
            if len(cleaned) > 5:
                cleaned = sorted(cleaned, key=lambda tup: tup[1])
                avg_y1 = cleaned[0][1]
                avg_y2 = cleaned[-1][1]
                avg_x1 = 0
                avg_x2 = 0
                for tup in cleaned:
                    avg_x1 += tup[0]
                    avg_x2 += tup[2]
                avg_x1 = avg_x1 / len(cleaned)
                avg_x2 = avg_x2 / len(cleaned)
                rects[i] = (avg_x1, avg_y1, avg_x2, avg_y2)
                i += 1
        print("Num Parking Lanes: ", len(rects))

        # Step 5: draw the column rectangles
        buff = 7
        for key in rects:
            tup_topLeft = (int(rects[key][0] - buff), int(rects[key][1]))
            tup_botRight = (int(rects[key][2] + buff), int(rects[key][3]))
            cv2.rectangle(new_image, tup_topLeft, tup_botRight, (0, 255, 0), 3)
        return new_image, rects

    def draw_parking(self, image, rects, make_copy=True, color=[255, 0, 0], thickness=2, save=True):
        if make_copy:
            new_image = np.copy(image)
        gap = 15.5  # vertical spacing between spots
        spot_dict = {}  # dictionary: one entry per spot position
        tot_spots = 0

        # hand-tuned offsets for each of the 12 detected columns
        adj_y1 = {0: 20, 1: -10, 2: 0, 3: -11, 4: 28, 5: 5, 6: -15, 7: -15, 8: -10, 9: -30, 10: 9, 11: -32}
        adj_y2 = {0: 30, 1: 50, 2: 15, 3: 10, 4: -15, 5: 15, 6: 15, 7: -20, 8: 15, 9: 15, 10: 0, 11: 30}
        adj_x1 = {0: -8, 1: -15, 2: -15, 3: -15, 4: -15, 5: -15, 6: -15, 7: -15, 8: -10, 9: -10, 10: -10, 11: 0}
        adj_x2 = {0: 0, 1: 15, 2: 15, 3: 15, 4: 15, 5: 15, 6: 15, 7: 15, 8: 10, 9: 10, 10: 10, 11: 0}

        for key in rects:
            tup = rects[key]
            x1 = int(tup[0] + adj_x1[key])
            x2 = int(tup[2] + adj_x2[key])
            y1 = int(tup[1] + adj_y1[key])
            y2 = int(tup[3] + adj_y2[key])
            cv2.rectangle(new_image, (x1, y1), (x2, y2), (0, 255, 0), 2)
            num_splits = int(abs(y2 - y1) // gap)
            for i in range(0, num_splits + 1):
                y = int(y1 + i * gap)
                cv2.line(new_image, (x1, y), (x2, y), color, thickness)
            if key > 0 and key < len(rects) - 1:
                # vertical divider: inner columns hold two rows of spots
                x = int((x1 + x2) / 2)
                cv2.line(new_image, (x, y1), (x, y2), color, thickness)
            # count the spots
            if key == 0 or key == (len(rects) - 1):
                tot_spots += num_splits + 1
            else:
                tot_spots += 2 * (num_splits + 1)
            # record each spot in the dictionary
            if key == 0 or key == (len(rects) - 1):
                for i in range(0, num_splits + 1):
                    cur_len = len(spot_dict)
                    y = int(y1 + i * gap)
                    spot_dict[(x1, y, x2, y + gap)] = cur_len + 1
            else:
                for i in range(0, num_splits + 1):
                    cur_len = len(spot_dict)
                    y = int(y1 + i * gap)
                    x = int((x1 + x2) / 2)
                    spot_dict[(x1, y, x, y + gap)] = cur_len + 1
                    spot_dict[(x, y, x2, y + gap)] = cur_len + 2

        print("total parking spaces: ", tot_spots, cur_len)
        if save:
            filename = 'with_parking.jpg'
            cv2.imwrite(filename, new_image)
        return new_image, spot_dict

    def assign_spots_map(self, image, spot_dict, make_copy=True, color=[255, 0, 0], thickness=2):
        if make_copy:
            new_image = np.copy(image)
        for spot in spot_dict.keys():
            (x1, y1, x2, y2) = spot
            cv2.rectangle(new_image, (int(x1), int(y1)), (int(x2), int(y2)), color, thickness)
        return new_image

    def save_images_for_cnn(self, image, spot_dict, folder_name='cnn_data'):
        for spot in spot_dict.keys():
            (x1, y1, x2, y2) = spot
            (x1, y1, x2, y2) = (int(x1), int(y1), int(x2), int(y2))
            # crop out each spot
            spot_img = image[y1:y2, x1:x2]
            spot_img = cv2.resize(spot_img, (0, 0), fx=2.0, fy=2.0)
            spot_id = spot_dict[spot]
            filename = 'spot' + str(spot_id) + '.jpg'
            print(spot_img.shape, filename, (x1, x2, y1, y2))
            cv2.imwrite(os.path.join(folder_name, filename), spot_img)

    def make_prediction(self, image, model, class_dictionary):
        # normalize
        img = image / 255.
        # expand to a 4D tensor
        image = np.expand_dims(img, axis=0)
        # classify with the trained model
        class_predicted = model.predict(image)
        inID = np.argmax(class_predicted[0])
        label = class_dictionary[inID]
        return label

    def predict_on_image(self, image, spot_dict, model, class_dictionary, make_copy=True, color=[0, 255, 0], alpha=0.5):
        if make_copy:
            new_image = np.copy(image)
            overlay = np.copy(image)
        self.cv_show('new_image', new_image)
        cnt_empty = 0
        all_spots = 0
        for spot in spot_dict.keys():
            all_spots += 1
            (x1, y1, x2, y2) = spot
            (x1, y1, x2, y2) = (int(x1), int(y1), int(x2), int(y2))
            spot_img = image[y1:y2, x1:x2]
            spot_img = cv2.resize(spot_img, (48, 48))

            label = self.make_prediction(spot_img, model, class_dictionary)
            if label == 'empty':
                cv2.rectangle(overlay, (int(x1), int(y1)), (int(x2), int(y2)), color, -1)
                cnt_empty += 1

        cv2.addWeighted(overlay, alpha, new_image, 1 - alpha, 0, new_image)

        cv2.putText(new_image, "Available: %d spots" % cnt_empty, (30, 95),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 255, 255), 2)
        cv2.putText(new_image, "Total: %d spots" % all_spots, (30, 125),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 255, 255), 2)
        save = False
        if save:
            filename = 'with_marking.jpg'
            cv2.imwrite(filename, new_image)
        self.cv_show('new_image', new_image)
        return new_image

    def predict_on_video(self, video_name, final_spot_dict, model, class_dictionary, ret=True):
        cap = cv2.VideoCapture(video_name)
        count = 0
        while ret:
            ret, image = cap.read()
            if image is None:  # stop once the video ends
                break
            count += 1
            if count == 5:  # only classify every 5th frame
                count = 0
                new_image = np.copy(image)
                overlay = np.copy(image)
                cnt_empty = 0
                all_spots = 0
                color = [0, 255, 0]
                alpha = 0.5
                for spot in final_spot_dict.keys():
                    all_spots += 1
                    (x1, y1, x2, y2) = spot
                    (x1, y1, x2, y2) = (int(x1), int(y1), int(x2), int(y2))
                    spot_img = image[y1:y2, x1:x2]
                    spot_img = cv2.resize(spot_img, (48, 48))

                    label = self.make_prediction(spot_img, model, class_dictionary)
                    if label == 'empty':
                        cv2.rectangle(overlay, (int(x1), int(y1)), (int(x2), int(y2)), color, -1)
                        cnt_empty += 1

                cv2.addWeighted(overlay, alpha, new_image, 1 - alpha, 0, new_image)
                cv2.putText(new_image, "Available: %d spots" % cnt_empty, (30, 95),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 255, 255), 2)
                cv2.putText(new_image, "Total: %d spots" % all_spots, (30, 125),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 255, 255), 2)
                cv2.imshow('frame', new_image)
                if cv2.waitKey(10) & 0xFF == ord('q'):
                    break
        cv2.destroyAllWindows()
        cap.release()

Putting these files together gives you the complete project.

Project download link

The project is a bit large, so I've put it on Baidu Cloud.
Project package download
Extraction code: 520y

Closing remarks

Give my little fox a like!
Not much else to say - if you found this helpful, please like, favorite, and follow!

OK, that's it for now~

Feel free to leave comments or reach out for a deeper discussion~
