This article introduces deep learning semantic segmentation with HALCON 20.11. It is intended as a practical reference for developers facing similar programming problems; if that is you, follow along below.
1. Preface: A deep learning object detection model already covers a large share of inspection requirements, but it cannot deliver pixel-accurate segmentation. For that, a deep semantic segmentation model has to be trained. The labeling tool is again MVTec Deep Learning Tool 24.05. The workflow is the same as before: first label all images in the tool, then export the labeled dataset; that dataset is then used to train the model in HALCON code.
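Before running the preprocessing script below, it can be worth a quick sanity check that the exported .hdict loads and contains what you expect. The following HDevelop snippet is only a minimal sketch; the path is the one used later in this article, and the keys 'samples', 'class_names', and 'class_ids' are the standard entries of a DLDataset dictionary, so adapt them if your export differs.

* Minimal sanity check of the dataset exported by the Deep Learning Tool (sketch).
read_dict ('E:/HalconDL_Project/瓶盖划伤语义分割/标注/Bottle.hdict', [], [], DLDatasetCheck)
* List all top-level keys of the dataset dictionary.
get_dict_param (DLDatasetCheck, 'keys', [], DatasetKeys)
* A DLDataset normally provides 'samples', 'class_names', and 'class_ids'.
get_dict_tuple (DLDatasetCheck, 'class_names', ClassNamesCheck)
get_dict_tuple (DLDatasetCheck, 'samples', SamplesCheck)
NumLabeledSamples := |SamplesCheck|
* Pause here and inspect DatasetKeys, ClassNamesCheck, and NumLabeledSamples in the variable window.
stop ()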
2. Now to the good stuff: the full source code for training the deep semantic segmentation model.
Model training: preprocessing and preparation stage ********************
dev_update_off ()
*
* *** Set Input/Output paths. ***
TotalPath := 'E:/HalconDL_Project/瓶盖划伤语义分割/标注/'
* Directory with image data.
ImageDir := TotalPath + 'Images'
* Directory with ground truth segmentation images.
SegmentationDir := TotalPath + 'Bottle_labels'
* All example data is written to this folder.
ExampleDataDir := TotalPath + 'segment_Bottle_defects_data'
* Dataset directory basename for any outputs written by preprocess_dl_dataset.
* This name will be extended by the image dimensions the dataset will have after preprocessing.
DataDirectoryBaseName := ExampleDataDir + '/dldataset_Bottle_'
* Store the preprocess params separately in order to use them, e.g., during inference.
PreprocessParamFileBaseName := '/dl_preprocess_param.hdict'
* Path of the labeled dataset exported from the Deep Learning Tool.
DirectPath_Bottle := TotalPath + 'Bottle.hdict'
* *** Set parameters. ***
* Class names.
ClassNames := ['Background','Dirty','HuaShang']
* Class IDs.
ClassIDs := [0,1,2]
* Percentages for splitting the dataset.
TrainingPercent := 70
ValidationPercent := 15
* Image dimensions the images are rescaled to during preprocessing.
ImageWidth := 400
ImageHeight := 400
ImageNumChannels := 3
* Gray value range for gray value normalization of the images.
ImageRangeMin := -127
ImageRangeMax := 128
* Further parameters for image preprocessing.
NormalizationType := 'none'
DomainHandling := 'full_domain'
IgnoreClassIDs := []
SetBackgroundID := []
ClassIDsBackground := []
* In order to get a reproducible split we set a random seed.
* This means that re-running the script results in the same split of DLDataset.
SeedRand := 42
* *** Read the labeled data and split it into train/validation and test. ***
* Set the random seed.
set_system ('seed_rand', SeedRand)
* Read the dataset that was exported by the Deep Learning Tool.
read_dict (DirectPath_Bottle, [], [], DLDataset)
* Alternative: build the dataset directly from label images instead of the exported dict:
* read_dl_dataset_segmentation (ImageDir, SegmentationDir, ClassNames, ClassIDs, [], [], [], DLDataset)
* Generate the split.
split_dl_dataset (DLDataset, TrainingPercent, ValidationPercent, [])
* *** Preprocess the dataset. ***
* Create the output directory if it does not exist yet.
file_exists (ExampleDataDir, FileExists)
if (not FileExists)
    make_dir (ExampleDataDir)
endif
* Create the preprocess parameters.
create_dl_preprocess_param ('segmentation', ImageWidth, ImageHeight, ImageNumChannels, ImageRangeMin, ImageRangeMax, NormalizationType, DomainHandling, IgnoreClassIDs, SetBackgroundID, ClassIDsBackground, [], DLPreprocessParam)
* Dataset directory for any outputs written by preprocess_dl_dataset.
DataDirectory := DataDirectoryBaseName + ImageWidth + 'x' + ImageHeight
* Preprocess the dataset. This might take a few minutes.
create_dict (GenParam)
set_dict_tuple (GenParam, 'overwrite_files', true)
preprocess_dl_dataset (DLDataset, DataDirectory, DLPreprocessParam, GenParam, DLDatasetFilename)
* Store the preprocess params separately in order to use them, e.g., during inference.
PreprocessParamFile := DataDirectory + PreprocessParamFileBaseName
write_dict (DLPreprocessParam, PreprocessParamFile, [], [])
* *** Preview the preprocessed dataset. ***
* Before moving on to training, it is recommended to check the preprocessed dataset.
* Display the DLSamples for 10 randomly selected train images.
get_dict_tuple (DLDataset, 'samples', DatasetSamples)
find_dl_samples (DatasetSamples, 'split', 'train', 'match', SampleIndices)
tuple_shuffle (SampleIndices, ShuffledIndices)
read_dl_samples (DLDataset, ShuffledIndices[0:9], DLSampleBatchDisplay)
create_dict (WindowHandleDict)
for Index := 0 to |DLSampleBatchDisplay| - 1 by 1
    * Loop over samples in DLSampleBatchDisplay.
    dev_display_dl_data (DLSampleBatchDisplay[Index], [], DLDataset, ['image','segmentation_image_ground_truth'], [], WindowHandleDict)
    get_dict_tuple (WindowHandleDict, 'segmentation_image_ground_truth', WindowHandleImage)
    dev_set_window (WindowHandleImage[1])
    Text := 'Press Run (F5) to continue'
    dev_disp_text (Text, 'window', 400, 40, 'black', [], [])
    stop ()
endfor
*
* Close the windows that have been used for visualization.
dev_close_window_dict (WindowHandleDict)
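Preprocessing also applies the train/validation/test split, so before starting the training it can be useful to confirm how many samples ended up in each split. This is only an optional sketch; it reuses the find_dl_samples call from the preview above with the standard split values 'train', 'validation', and 'test'.

* Optional: count the samples per split after preprocessing (sketch).
get_dict_tuple (DLDataset, 'samples', DatasetSamples)
find_dl_samples (DatasetSamples, 'split', 'train', 'match', TrainIndices)
find_dl_samples (DatasetSamples, 'split', 'validation', 'match', ValidationIndices)
find_dl_samples (DatasetSamples, 'split', 'test', 'match', TestIndices)
* With TrainingPercent = 70 and ValidationPercent = 15, roughly 70/15/15 percent of the
* samples should land in |TrainIndices|, |ValidationIndices|, and |TestIndices|.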
Model training stage *****************
dev_update_off ()
*
* Training can be performed on a GPU or CPU.
* See the respective system requirements in the Installation Guide.
* If possible a GPU is used in this example.
* In case you explicitly wish to run this example on the CPU,
* choose the CPU device instead.
query_available_dl_devices (['runtime','runtime'], ['gpu','cpu'], DLDeviceHandles)
if (|DLDeviceHandles| == 0)
    throw ('No supported device found to continue this example.')
endif
* Due to the filter used in query_available_dl_devices, the first device is a GPU, if available.
DLDevice := DLDeviceHandles[0]
get_dl_device_param (DLDevice, 'type', DLDeviceType)
if (DLDeviceType == 'cpu')
    * The number of used threads may have an impact
    * on the training duration.
    NumThreadsTraining := 4
    set_system ('thread_num', NumThreadsTraining)
endif
* *** Set Input/Output paths. ***
* All example data is written to this folder.
TotalPath := 'E:/HalconDL_Project/瓶盖划伤语义分割/标注/'
ExampleDataDir := TotalPath + 'segment_Bottle_defects_data'
* File path of the preprocessed DLDataset.
* Note: Adapt DataDirectory after preprocessing with another image size.
DataDirectory := ExampleDataDir + '/dldataset_Bottle_400x400'
DLDatasetFileName := DataDirectory + '/dl_dataset.hdict'
* Output path for the final trained model.
FinalModelBaseName := ExampleDataDir + '/final_dl_model_segmentation'
* Output path of the best evaluated model.
BestModelBaseName := ExampleDataDir + '/best_dl_model_segmentation'
* *** Set basic parameters. ***
* The following parameters need to be adapted frequently.
* Model parameters.
* The segmentation model to be retrained.
ModelFileName := 'pretrained_dl_segmentation_enhanced.hdl'
* Batch size.
* If set to 'maximum', the batch size is set by set_dl_model_param_max_gpu_batch_size
* if the runtime 'gpu' is given.
BatchSize := 'maximum'
* Initial learning rate.
InitialLearningRate := 0.0001
* Momentum should be high if batch size is small.
Momentum := 0.99
* Parameters used by train_dl_model.
* Number of epochs to train the model.
NumEpochs := 10
* Evaluation interval (in epochs) to calculate evaluation measures on the validation split.
EvaluationIntervalEpochs := 1
* Change the learning rate in the following epochs, e.g. [15, 30].
* Set it to [] if the learning rate should not be changed.
ChangeLearningRateEpochs := []
* Change the learning rate to the following values, e.g. InitialLearningRate * [0.1, 0.01].
* The tuple has to be of the same length as ChangeLearningRateEpochs.
ChangeLearningRateValues := []
* *** Set advanced parameters. ***
* The following parameters might need to be changed in rare cases.
* Model parameter.
* Use [] for the default weight prior.
WeightPrior := []
* Parameters of train_dl_model.
* Control whether training progress is displayed (true/false).
EnableDisplay := true
* Set a random seed for training.
RandomSeed := 42
set_system ('seed_rand', RandomSeed)
* In order to obtain nearly deterministic training results on the same GPU
* (system, driver, cuda-version) you could specify 'cudnn_deterministic' as
* 'true'. Note that this could slow down training a bit.
* set_system ('cudnn_deterministic', 'true')
* Set generic parameters of create_dl_train_param.
* Please see the documentation of create_dl_train_param for an overview of all available parameters.
GenParamName := []
GenParamValue := []
* Change strategies.
* It is possible to change model parameters during training.
* Here, we change the learning rate if specified above.
if (|ChangeLearningRateEpochs| > 0)
    create_dict (ChangeStrategy)
    * Specify the model parameter to be changed.
    set_dict_tuple (ChangeStrategy, 'model_param', 'learning_rate')
    * Start the parameter value at 'initial_value'.
    set_dict_tuple (ChangeStrategy, 'initial_value', InitialLearningRate)
    * Change the parameter value at each 'epochs' step.
    set_dict_tuple (ChangeStrategy, 'epochs', ChangeLearningRateEpochs)
    * Change the parameter value to the corresponding value in 'values'.
    set_dict_tuple (ChangeStrategy, 'values', ChangeLearningRateValues)
    * Collect all change strategies as input.
    GenParamName := [GenParamName,'change']
    GenParamValue := [GenParamValue,ChangeStrategy]
endif
* Serialization strategies.
* There are several options for saving intermediate models to disk (see create_dl_train_param).
* Here, the best and final model are saved to the paths set above.
create_dict (SerializationStrategy)
set_dict_tuple (SerializationStrategy, 'type', 'best')
set_dict_tuple (SerializationStrategy, 'basename', BestModelBaseName)
GenParamName := [GenParamName,'serialize']
GenParamValue := [GenParamValue,SerializationStrategy]
create_dict (SerializationStrategy)
set_dict_tuple (SerializationStrategy, 'type', 'final')
set_dict_tuple (SerializationStrategy, 'basename', FinalModelBaseName)
GenParamName := [GenParamName,'serialize']
GenParamValue := [GenParamValue,SerializationStrategy]
* Display parameters.
* In this example, the evaluation measure for the training split is not displayed during
* training (default). If you want to do so, select a certain percentage of the training
* samples used to evaluate the model during training. A lower percentage helps to speed
* up the evaluation.
SelectedPercentageTrainSamples := 0
* Set the x-axis argument of the training plots.
XAxisLabel := 'epochs'
create_dict (DisplayParam)
set_dict_tuple (DisplayParam, 'selected_percentage_train_samples', SelectedPercentageTrainSamples)
set_dict_tuple (DisplayParam, 'x_axis_label', XAxisLabel)
GenParamName := [GenParamName,'display']
GenParamValue := [GenParamValue,DisplayParam]
* *** Read the model and dataset. ***
* Check if all necessary files exist.
check_data_availability (ExampleDataDir, DLDatasetFileName)
* Read the preprocessed DLDataset file.
read_dict (DLDatasetFileName, [], [], DLDataset)
* Read the pretrained segmentation model that will be retrained.
read_dl_model (ModelFileName, DLModelHandle)
set_dl_model_param (DLModelHandle, 'device', DLDevice)
*
* *** Set model parameters. ***
* Set model parameters according to the preprocessing parameters.
get_dict_tuple (DLDataset, 'preprocess_param', DLPreprocessParam)
get_dict_tuple (DLDataset, 'class_ids', ClassIDs)
set_dl_model_param_based_on_preprocessing (DLModelHandle, DLPreprocessParam, ClassIDs)
* Set the model hyperparameters as specified above.
set_dl_model_param (DLModelHandle, 'learning_rate', InitialLearningRate)
set_dl_model_param (DLModelHandle, 'momentum', Momentum)
if (BatchSize == 'maximum' and DLDeviceType == 'gpu')
    set_dl_model_param_max_gpu_batch_size (DLModelHandle, 100)
else
    if (BatchSize == 'maximum' and DLDeviceType == 'cpu')
        * Please set a suitable batch size in case of 'cpu'
        * training before continuing.
        stop ()
    endif
    set_dl_model_param (DLModelHandle, 'batch_size', 1)
endif
if (|WeightPrior| > 0)
    set_dl_model_param (DLModelHandle, 'weight_prior', WeightPrior)
endif
set_dl_model_param (DLModelHandle, 'runtime_init', 'immediately')
* *** Train the model. ***
* Create the generic train parameter dictionary.
create_dl_train_param (DLModelHandle, NumEpochs, EvaluationIntervalEpochs, EnableDisplay, RandomSeed, GenParamName, GenParamValue, TrainParam)
* Start the training by calling the training operator
* train_dl_model_batch () within the following procedure.
train_dl_model (DLDataset, DLModelHandle, TrainParam, 0.0, TrainResults, TrainInfos, EvaluationInfos)
* Stop after the training has finished, before closing the windows.
dev_disp_text ('Press Run (F5) to continue', 'window', 'bottom', 'right', 'black', [], [])
stop ()
* Close the training windows.
dev_close_window ()
dev_close_window ()
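The two serialization strategies above write the best and the final model to ExampleDataDir; the evaluation script below expects best_dl_model_segmentation.hdl there. The following sketch simply confirms the files exist before moving on, under the assumption that the serialized models get the usual '.hdl' extension.

* Sketch: check that training actually wrote the serialized models.
file_exists (BestModelBaseName + '.hdl', BestModelExists)
file_exists (FinalModelBaseName + '.hdl', FinalModelExists)
if (not BestModelExists or not FinalModelExists)
    throw ('Training did not write the expected model files.')
endif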
Model evaluation stage ************************
dev_update_off ()
*
* By default, this example uses a model pretrained by MVTec. To use the model
* which was trained in part 2 of this example series, set the following
* variable to false.
UsePretrainedModel := true
* Evaluation can be performed on a GPU or CPU.
* See the respective system requirements in the Installation Guide.
* If possible a GPU is used in this example.
* In case you explicitly wish to run this example on the CPU,
* choose the CPU device instead.
query_available_dl_devices (['runtime','runtime'], ['gpu','cpu'], DLDeviceHandles)
if (|DLDeviceHandles| == 0)
    throw ('No supported device found to continue this example.')
endif
* Due to the filter used in query_available_dl_devices, the first device is a GPU, if available.
DLDevice := DLDeviceHandles[0]
* *** Set paths and parameters for the evaluation. ***
* Paths.
TotalPath := 'E:/HalconDL_Project/瓶盖划伤语义分割/标注/'
* Project directory for any outputs written by HALCON.
ExampleDataDir := TotalPath + 'segment_Bottle_defects_data'
* File path of the preprocessed DLDataset.
* Note: Adapt DataDirectory after preprocessing with another image size.
DataDirectory := ExampleDataDir + '/dldataset_Bottle_400x400'
DLDatasetFileName := DataDirectory + '/dl_dataset.hdict'
* Path of the retrained segmentation model.
RetrainedModelFileName := ExampleDataDir + '/best_dl_model_segmentation.hdl'
* Evaluation parameters.
* Evaluation measures.
SegmentationMeasures := ['mean_iou','pixel_accuracy','class_pixel_accuracy','pixel_confusion_matrix']
* Batch size used during evaluation.
BatchSize := 1
* Display some segmentation results after determining the best model.
NumDisplay := 3
* *** Evaluation of the model. ***
* Check if all necessary files exist.
check_data_availability_COPY_1 (ExampleDataDir, DLDatasetFileName, RetrainedModelFileName, UsePretrainedModel)
* Read the retrained model.
read_dl_model (RetrainedModelFileName, DLModelHandle)
set_dl_model_param (DLModelHandle, 'batch_size', BatchSize)
*
set_dl_model_param (DLModelHandle, 'device', DLDevice)
*
* Read the preprocessed DLDataset file.
read_dict (DLDatasetFileName, [], [], DLDataset)
* Set parameters for the evaluation.
create_dict (GenParamEval)
set_dict_tuple (GenParamEval, 'measures', SegmentationMeasures)
set_dict_tuple (GenParamEval, 'show_progress', 'true')
* Evaluate the retrained model.
evaluate_dl_model (DLDataset, DLModelHandle, 'split', 'test', GenParamEval, EvaluationResult, EvalParams)
* *** Display the results. ***
* Display measures.
create_dict (WindowHandleDict)
create_dict (GenParamEvalDisplay)
set_dict_tuple (GenParamEvalDisplay, 'display_mode', ['measures','absolute_confusion_matrix'])
dev_display_segmentation_evaluation (EvaluationResult, EvalParams, GenParamEvalDisplay, WindowHandleDict)
dev_disp_text ('Press Run (F5) to continue', 'window', 'bottom', 'right', 'black', 'box', 'true')
stop ()
*
* Close the window handles.
dev_close_window_dict (WindowHandleDict)
* *** Visual inspection of images. ***
* Evaluate the performance of the model qualitatively
* by visual inspection of images.
* Select test images randomly.
get_dict_tuple (DLDataset, 'samples', DatasetSamples)
find_dl_samples (DatasetSamples, 'split', 'test', 'match', SampleIndices)
tuple_shuffle (SampleIndices, ShuffledIndices)
* Read the selected samples.
read_dl_samples (DLDataset, ShuffledIndices[0:NumDisplay - 1], DLSampleBatch)
* Set parameters for visualization of sample images.
create_dict (WindowHandleDict)
create_dict (GenParamDisplay)
set_dict_tuple (GenParamDisplay, 'segmentation_exclude_class_ids', 0)
set_dict_tuple (GenParamDisplay, 'segmentation_transparency', '80')
* Set the batch size of the model to 1.
set_dl_model_param (DLModelHandle, 'batch_size', 1)
* Apply the retrained model and visualize the results.
for SampleIndex := 0 to NumDisplay - 1 by 1
    * Apply the model.
    apply_dl_model (DLModelHandle, DLSampleBatch[SampleIndex], [], DLResult)
    * Display the result.
    dev_display_dl_data (DLSampleBatch[SampleIndex], DLResult, DLDataset, ['segmentation_image_ground_truth','segmentation_image_result'], GenParamDisplay, WindowHandleDict)
    stop ()
endfor
* Close the windows.
dev_close_window_dict (WindowHandleDict)
* Optimize the memory consumption and write the optimized model back to disk.
set_dl_model_param (DLModelHandle, 'optimize_for_inference', 'true')
write_dl_model (DLModelHandle, RetrainedModelFileName)
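If you also want to log the numbers instead of only looking at the measure windows, the values can be read back from the EvaluationResult dictionary returned by evaluate_dl_model. The sketch below only assumes that the requested measure names (e.g. 'mean_iou') show up as keys of that dictionary; listing the keys first keeps it safe if the structure differs in your HALCON version.

* Sketch: read evaluation measures back from EvaluationResult (key names are an assumption).
get_dict_param (EvaluationResult, 'keys', [], EvalKeys)
if (find(EvalKeys, 'mean_iou') != -1)
    get_dict_tuple (EvaluationResult, 'mean_iou', MeanIOU)
endif
* EvalKeys and MeanIOU can now be inspected, or the whole EvaluationResult
* can be saved to disk with write_dict for later comparison between trainings.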
Model testing (inference) *****************************
dev_update_off ()
*
* By default, this example uses a model pretrained by MVTec. To use the model
* which was trained in part 2 of this example series, set the following
* variable to false.
UsePretrainedModel := true
* Inference can be done on a GPU or CPU.
* See the respective system requirements in the Installation Guide.
* If possible a GPU is used in this example.
* In case you explicitly wish to run this example on the CPU,
* choose the CPU device instead.
query_available_dl_devices (['runtime','runtime'], ['gpu','cpu'], DLDeviceHandles)
if (|DLDeviceHandles| == 0)
    throw ('No supported device found to continue this example.')
endif
* Due to the filter used in query_available_dl_devices, the first device is a GPU, if available.
DLDevice := DLDeviceHandles[0]
* *** Set paths and parameters for inference. ***
* We will demonstrate the inference on the example images.
* In a real application newly incoming images (not used for training or evaluation)
* would be used here.
* In this example, we read the images from file.
* Directory with the images to be segmented.
TotalPath := 'E:/HalconDL_Project/瓶盖划伤语义分割/标注/'
* Example data folder containing the outputs of the previous example series.
ExampleDataDir := TotalPath + 'segment_Bottle_defects_data'
* File name of the dict containing the parameters used for preprocessing.
* Note: Adapt DataDirectory after preprocessing with another image size.
DataDirectory := ExampleDataDir + '/dldataset_Bottle_400x400'
PreprocessParamFileName := DataDirectory + '/dl_preprocess_param.hdict'
* Path of the retrained segmentation model.
RetrainedModelFileName := ExampleDataDir + '/best_dl_model_segmentation.hdl'
* Provide the class names and IDs.
* Class names.
ClassNames := ['Background','Dirty','HuaShang']
* Respective class IDs.
ClassIDs := [0,1,2]
* Batch size used during inference.
BatchSizeInference := 1
* *** Inference. ***
* Check if all necessary files exist.
check_data_availability_COPY_2 (ExampleDataDir, PreprocessParamFileName, RetrainedModelFileName, UsePretrainedModel)
* Read in the retrained model.
read_dl_model (RetrainedModelFileName, DLModelHandle)
* Set the batch size.
set_dl_model_param (DLModelHandle, 'batch_size', BatchSizeInference)
* Initialize the model for inference.
set_dl_model_param (DLModelHandle, 'device', DLDevice)
* Get the parameters used for preprocessing.
read_dict (PreprocessParamFileName, [], [], DLPreprocessParam)
* Set parameters for the visualization of results.
create_dict (WindowHandleDict)
create_dict (DatasetInfo)
set_dict_tuple (DatasetInfo, 'class_ids', ClassIDs)
set_dict_tuple (DatasetInfo, 'class_names', ClassNames)
create_dict (GenParamDisplay)
set_dict_tuple (GenParamDisplay, 'segmentation_exclude_class_ids', 0)
set_dict_tuple (GenParamDisplay, 'segmentation_transparency', '80')
set_dict_tuple (GenParamDisplay, 'font_size', 16)
* Image Acquisition 01: Code generated by Image Acquisition 01.
list_files ('E:/HalconDL_Project/瓶盖划伤语义分割/标注/Images', ['files','follow_links'], ImageFiles)
tuple_regexp_select (ImageFiles, ['\\.(tif|tiff|gif|bmp|jpg|jpeg|jp2|png|pcx|pgm|ppm|pbm|xwd|ima|hobj)$','ignore_case'], ImageFiles)
for Index := 0 to |ImageFiles| - 1 by 1
    read_image (ImageBatch, ImageFiles[Index])
    * Generate the DLSampleBatch.
    gen_dl_samples_from_images (ImageBatch, DLSampleBatch)
    * Preprocess the DLSampleBatch.
    preprocess_dl_samples (DLSampleBatch, DLPreprocessParam)
    * Apply the DL model on the DLSampleBatch.
    apply_dl_model (DLModelHandle, DLSampleBatch, ['segmentation_image','segmentation_confidence'], DLResultBatch)
    * Postprocessing and visualization.
    * Loop over each sample in the batch.
    for SampleIndex := 0 to BatchSizeInference - 1 by 1
        * Get the preprocessed image.
        get_dict_object (Image, DLSampleBatch[SampleIndex], 'image')
        * Get the result image.
        get_dict_object (SegmentationImage, DLResultBatch[SampleIndex], 'segmentation_image')
        * Postprocessing: Get the segmented region for each class.
        threshold (SegmentationImage, ClassRegions, ClassIDs, ClassIDs)
        * Display the results.
        dev_display_dl_data (DLSampleBatch[SampleIndex], DLResultBatch[SampleIndex], DatasetInfo, 'segmentation_image_result', GenParamDisplay, WindowHandleDict)
        get_dict_tuple (WindowHandleDict, 'segmentation_image_result', WindowHandles)
        dev_set_window (WindowHandles[0])
        * Separate the components of the class regions
        * and display the result regions as well as their area.
        * Get the area of the class regions.
        region_features (ClassRegions, 'area', Areas)
        * Here, we do not display the first class, since it is the background class,
        * and we only want to display the defect regions.
        for ClassIndex := 1 to |Areas| - 1 by 1
            if (Areas[ClassIndex] > 0)
                select_obj (ClassRegions, ClassRegion, ClassIndex + 1)
                * Get the connected components of the segmented class region.
                connection (ClassRegion, ConnectedRegions)
                area_center (ConnectedRegions, Area, Row, Column)
                for ConnectIndex := 0 to |Area| - 1 by 1
                    select_obj (ConnectedRegions, CurrentRegion, ConnectIndex + 1)
                    dev_disp_text (ClassNames[ClassIndex] + '\narea: ' + Area[ConnectIndex] + ' px', 'image', Row[ConnectIndex] - 10, Column[ConnectIndex] + 10, 'black', [], [])
                endfor
            endif
        endfor
        * Display whether the part is OK or not.
        dev_display_ok_nok (Areas, WindowHandles[0])
        * Pause so the result for this image can be inspected.
        stop ()
    endfor
endfor
*
* Close the windows.
dev_close_window_dict (WindowHandleDict)
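One thing the loop above does not do is bring the result back to the original camera resolution: the DLSample image and the segmentation_image both live at the 400x400 network input size. If you need class regions in the coordinates of the raw image, one option is to zoom the class image with nearest-neighbor interpolation and threshold again. The snippet below is only a sketch, assuming ImageBatch still holds the image read by read_image and SegmentationImage the model output from the loop.

* Sketch: map the 400x400 segmentation result back to the original image size.
get_image_size (ImageBatch, OrigWidth, OrigHeight)
zoom_image_size (SegmentationImage, SegmentationImageFull, OrigWidth, OrigHeight, 'nearest_neighbor')
* Nearest-neighbor zooming does not create new (interpolated) class IDs.
threshold (SegmentationImageFull, ClassRegionsFull, ClassIDs, ClassIDs)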
That wraps up this article on the HALCON 20.11 deep learning semantic segmentation model. I hope it proves helpful to fellow developers!