First Experience Developing with stable-diffusion.cpp, an Open-Source C++ AI Image-Generation Framework

This article walks through a first hands-on experience developing with stable-diffusion.cpp, an open-source C++ framework for AI image generation; hopefully it offers a useful reference for developers who want to try it themselves.

stable-diffusion.cpp is a lightweight open-source AIGC-style model framework written in C++. It can run image-generation models locally on ordinary consumer-grade hardware, and it can also be embedded as a library to give an application functionality similar to the web version of Stable Diffusion.

Below, we use the stable-diffusion.cpp C++ API to build a demo that loads a local model file and generates an image from a text prompt. CUDA GPU acceleration is used here; if you have no discrete GPU, the CPU backend works as well.

Project structure

stable_diffusion_cpp_starter
- stable-diffusion.cpp
- src
  |- main.cpp
- CMakeLists.txt

There are two prerequisites:

  • Install the CUDA Toolkit on your system
  • In the CMakeLists.txt at the root of the stable-diffusion.cpp source tree, set the SD_CUBLAS option to ON

If you do not have a CUDA-capable GPU, computation defaults to the CPU and both steps can be skipped.

CMakeLists.txt

cmake_minimum_required(VERSION 3.15)

project(stable_diffusion_cpp_starter)

set(CMAKE_CXX_STANDARD 14)
set(CMAKE_CXX_STANDARD_REQUIRED ON)

add_subdirectory(stable-diffusion.cpp)

include_directories(
    ${CMAKE_CURRENT_SOURCE_DIR}/stable-diffusion.cpp
    ${CMAKE_CURRENT_SOURCE_DIR}/stable-diffusion.cpp/thirdparty
)

file(GLOB SRC
    src/*.h
    src/*.cpp
)

add_executable(${PROJECT_NAME} ${SRC})

target_link_libraries(${PROJECT_NAME}
    stable-diffusion
    ${CMAKE_THREAD_LIBS_INIT} # means pthread on unix
)
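With the layout and CMakeLists.txt above, a typical out-of-source build might look like the following sketch. The clone URL is the official stable-diffusion.cpp GitHub repository; passing -DSD_CUBLAS=ON on the command line is an alternative to editing the vendored CMakeLists.txt by hand.

```shell
# Fetch stable-diffusion.cpp (with its ggml submodule) into the project tree
git clone --recursive https://github.com/leejet/stable-diffusion.cpp

# Configure; -DSD_CUBLAS=ON enables the CUDA backend (omit it for CPU-only)
cmake -B build -DSD_CUBLAS=ON

# Build the demo executable
cmake --build build --config Release
```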

main.cpp

#include <stdio.h>
#include <string.h>
#include <time.h>
#include <iostream>
#include <random>
#include <string>
#include <vector>

#include "stable-diffusion.h"

#define STB_IMAGE_IMPLEMENTATION
#define STB_IMAGE_STATIC
#include "stb_image.h"

#define STB_IMAGE_WRITE_IMPLEMENTATION
#define STB_IMAGE_WRITE_STATIC
#include "stb_image_write.h"

#define STB_IMAGE_RESIZE_IMPLEMENTATION
#define STB_IMAGE_RESIZE_STATIC
#include "stb_image_resize.h"

const char* rng_type_to_str[] = {
    "std_default",
    "cuda",
};

// Names of the sampler method, same order as enum sample_method in stable-diffusion.h
const char* sample_method_str[] = {
    "euler_a",
    "euler",
    "heun",
    "dpm2",
    "dpm++2s_a",
    "dpm++2m",
    "dpm++2mv2",
    "lcm",
};

// Names of the sigma schedule overrides, same order as sample_schedule in stable-diffusion.h
const char* schedule_str[] = {
    "default",
    "discrete",
    "karras",
    "ays",
};

const char* modes_str[] = {
    "txt2img",
    "img2img",
    "img2vid",
    "convert",
};

enum SDMode {
    TXT2IMG,
    IMG2IMG,
    IMG2VID,
    CONVERT,
    MODE_COUNT
};

struct SDParams {
    int n_threads = -1;
    SDMode mode   = TXT2IMG;

    std::string model_path;
    std::string vae_path;
    std::string taesd_path;
    std::string esrgan_path;
    std::string controlnet_path;
    std::string embeddings_path;
    std::string stacked_id_embeddings_path;
    std::string input_id_images_path;
    sd_type_t wtype = SD_TYPE_COUNT;
    std::string lora_model_dir;
    std::string output_path = "output.png";
    std::string input_path;
    std::string control_image_path;

    std::string prompt;
    std::string negative_prompt;
    float min_cfg     = 1.0f;
    float cfg_scale   = 7.0f;
    float style_ratio = 20.f;
    int clip_skip     = -1;  // <= 0 represents unspecified
    int width         = 512;
    int height        = 512;
    int batch_count   = 1;

    int video_frames         = 6;
    int motion_bucket_id     = 127;
    int fps                  = 6;
    float augmentation_level = 0.f;

    sample_method_t sample_method = EULER_A;
    schedule_t schedule           = DEFAULT;
    int sample_steps              = 20;
    float strength                = 0.75f;
    float control_strength        = 0.9f;
    rng_type_t rng_type           = CUDA_RNG;
    int64_t seed                  = 42;
    bool verbose                  = false;
    bool vae_tiling               = false;
    bool control_net_cpu          = false;
    bool normalize_input          = false;
    bool clip_on_cpu              = false;
    bool vae_on_cpu               = false;
    bool canny_preprocess         = false;
    bool color                    = false;
    int upscale_repeats           = 1;
};

static std::string sd_basename(const std::string& path) {
    size_t pos = path.find_last_of('/');
    if (pos != std::string::npos) {
        return path.substr(pos + 1);
    }
    pos = path.find_last_of('\\');
    if (pos != std::string::npos) {
        return path.substr(pos + 1);
    }
    return path;
}

std::string get_image_params(SDParams params, int64_t seed) {
    std::string parameter_string = params.prompt + "\n";
    if (params.negative_prompt.size() != 0) {
        parameter_string += "Negative prompt: " + params.negative_prompt + "\n";
    }
    parameter_string += "Steps: " + std::to_string(params.sample_steps) + ", ";
    parameter_string += "CFG scale: " + std::to_string(params.cfg_scale) + ", ";
    parameter_string += "Seed: " + std::to_string(seed) + ", ";
    parameter_string += "Size: " + std::to_string(params.width) + "x" + std::to_string(params.height) + ", ";
    parameter_string += "Model: " + sd_basename(params.model_path) + ", ";
    parameter_string += "RNG: " + std::string(rng_type_to_str[params.rng_type]) + ", ";
    parameter_string += "Sampler: " + std::string(sample_method_str[params.sample_method]);
    if (params.schedule == KARRAS) {
        parameter_string += " karras";
    }
    parameter_string += ", ";
    parameter_string += "Version: stable-diffusion.cpp";
    return parameter_string;
}

/* Enables Printing the log level tag in color using ANSI escape codes */
void sd_log_cb(enum sd_log_level_t level, const char* log, void* data) {
    SDParams* params = (SDParams*)data;
    int tag_color;
    const char* level_str;
    FILE* out_stream = (level == SD_LOG_ERROR) ? stderr : stdout;

    if (!log || (!params->verbose && level <= SD_LOG_DEBUG)) {
        return;
    }

    switch (level) {
        case SD_LOG_DEBUG:
            tag_color = 37;
            level_str = "DEBUG";
            break;
        case SD_LOG_INFO:
            tag_color = 34;
            level_str = "INFO";
            break;
        case SD_LOG_WARN:
            tag_color = 35;
            level_str = "WARN";
            break;
        case SD_LOG_ERROR:
            tag_color = 31;
            level_str = "ERROR";
            break;
        default: /* Potential future-proofing */
            tag_color = 33;
            level_str = "?????";
            break;
    }

    if (params->color == true) {
        fprintf(out_stream, "\033[%d;1m[%-5s]\033[0m ", tag_color, level_str);
    } else {
        fprintf(out_stream, "[%-5s] ", level_str);
    }
    fputs(log, out_stream);
    fflush(out_stream);
}

int main(int argc, const char* argv[]) {
    // set sd params
    const std::string model_path      = "./v1-5-pruned-emaonly.ckpt";
    const std::string img_output_path = "./gen_img.png";
    const std::string prompt          = "a cute little dog with flowers";

    SDParams params;
    params.model_path  = model_path;
    params.output_path = img_output_path;
    params.prompt      = prompt;

    sd_set_log_callback(sd_log_cb, (void*)&params);

    if (params.mode == CONVERT) {
        bool success = convert(params.model_path.c_str(), params.vae_path.c_str(), params.output_path.c_str(), params.wtype);
        if (!success) {
            fprintf(stderr,
                    "convert '%s'/'%s' to '%s' failed\n",
                    params.model_path.c_str(),
                    params.vae_path.c_str(),
                    params.output_path.c_str());
            return 1;
        } else {
            printf("convert '%s'/'%s' to '%s' success\n",
                   params.model_path.c_str(),
                   params.vae_path.c_str(),
                   params.output_path.c_str());
            return 0;
        }
    }

    if (params.mode == IMG2VID) {
        fprintf(stderr, "SVD support is broken, do not use it!!!\n");
        return 1;
    }

    // prepare image buffer
    bool vae_decode_only          = true;
    uint8_t* input_image_buffer   = NULL;
    uint8_t* control_image_buffer = NULL;
    if (params.mode == IMG2IMG || params.mode == IMG2VID) {
        vae_decode_only = false;

        int c              = 0;
        int width          = 0;
        int height         = 0;
        input_image_buffer = stbi_load(params.input_path.c_str(), &width, &height, &c, 3);
        if (input_image_buffer == NULL) {
            fprintf(stderr, "load image from '%s' failed\n", params.input_path.c_str());
            return 1;
        }
        if (c < 3) {
            fprintf(stderr, "the number of channels for the input image must be >= 3, but got %d channels\n", c);
            free(input_image_buffer);
            return 1;
        }
        if (width <= 0) {
            fprintf(stderr, "error: the width of image must be greater than 0\n");
            free(input_image_buffer);
            return 1;
        }
        if (height <= 0) {
            fprintf(stderr, "error: the height of image must be greater than 0\n");
            free(input_image_buffer);
            return 1;
        }

        // Resize input image ...
        if (params.height != height || params.width != width) {
            printf("resize input image from %dx%d to %dx%d\n", width, height, params.width, params.height);
            int resized_height = params.height;
            int resized_width  = params.width;

            uint8_t* resized_image_buffer = (uint8_t*)malloc(resized_height * resized_width * 3);
            if (resized_image_buffer == NULL) {
                fprintf(stderr, "error: allocate memory for resize input image\n");
                free(input_image_buffer);
                return 1;
            }
            stbir_resize(input_image_buffer, width, height, 0,
                         resized_image_buffer, resized_width, resized_height, 0, STBIR_TYPE_UINT8,
                         3 /*RGB channel*/, STBIR_ALPHA_CHANNEL_NONE, 0,
                         STBIR_EDGE_CLAMP, STBIR_EDGE_CLAMP,
                         STBIR_FILTER_BOX, STBIR_FILTER_BOX,
                         STBIR_COLORSPACE_SRGB, nullptr);

            // Save resized result
            free(input_image_buffer);
            input_image_buffer = resized_image_buffer;
        }
    }

    // init sd context
    sd_ctx_t* sd_ctx = new_sd_ctx(params.model_path.c_str(),
                                  params.vae_path.c_str(),
                                  params.taesd_path.c_str(),
                                  params.controlnet_path.c_str(),
                                  params.lora_model_dir.c_str(),
                                  params.embeddings_path.c_str(),
                                  params.stacked_id_embeddings_path.c_str(),
                                  vae_decode_only,
                                  params.vae_tiling,
                                  true,
                                  params.n_threads,
                                  params.wtype,
                                  params.rng_type,
                                  params.schedule,
                                  params.clip_on_cpu,
                                  params.control_net_cpu,
                                  params.vae_on_cpu);

    if (sd_ctx == NULL) {
        printf("new_sd_ctx_t failed\n");
        return 1;
    }

    sd_image_t* control_image = NULL;
    if (params.controlnet_path.size() > 0 && params.control_image_path.size() > 0) {
        int c                = 0;
        control_image_buffer = stbi_load(params.control_image_path.c_str(), &params.width, &params.height, &c, 3);
        if (control_image_buffer == NULL) {
            fprintf(stderr, "load image from '%s' failed\n", params.control_image_path.c_str());
            return 1;
        }
        control_image = new sd_image_t{(uint32_t)params.width,
                                       (uint32_t)params.height,
                                       3,
                                       control_image_buffer};
        if (params.canny_preprocess) {  // apply preprocessor
            control_image->data = preprocess_canny(control_image->data,
                                                   control_image->width,
                                                   control_image->height,
                                                   0.08f,
                                                   0.08f,
                                                   0.8f,
                                                   1.0f,
                                                   false);
        }
    }

    // generate image
    sd_image_t* results;
    if (params.mode == TXT2IMG) {
        results = txt2img(sd_ctx,
                          params.prompt.c_str(),
                          params.negative_prompt.c_str(),
                          params.clip_skip,
                          params.cfg_scale,
                          params.width,
                          params.height,
                          params.sample_method,
                          params.sample_steps,
                          params.seed,
                          params.batch_count,
                          control_image,
                          params.control_strength,
                          params.style_ratio,
                          params.normalize_input,
                          params.input_id_images_path.c_str());
    } else {
        sd_image_t input_image = {(uint32_t)params.width,
                                  (uint32_t)params.height,
                                  3,
                                  input_image_buffer};

        if (params.mode == IMG2VID) {
            results = img2vid(sd_ctx,
                              input_image,
                              params.width,
                              params.height,
                              params.video_frames,
                              params.motion_bucket_id,
                              params.fps,
                              params.augmentation_level,
                              params.min_cfg,
                              params.cfg_scale,
                              params.sample_method,
                              params.sample_steps,
                              params.strength,
                              params.seed);
            if (results == NULL) {
                printf("generate failed\n");
                free_sd_ctx(sd_ctx);
                return 1;
            }
            size_t last            = params.output_path.find_last_of(".");
            std::string dummy_name = last != std::string::npos ? params.output_path.substr(0, last) : params.output_path;
            for (int i = 0; i < params.video_frames; i++) {
                if (results[i].data == NULL) {
                    continue;
                }
                std::string final_image_path = i > 0 ? dummy_name + "_" + std::to_string(i + 1) + ".png" : dummy_name + ".png";
                stbi_write_png(final_image_path.c_str(), results[i].width, results[i].height, results[i].channel,
                               results[i].data, 0, get_image_params(params, params.seed + i).c_str());
                printf("save result image to '%s'\n", final_image_path.c_str());
                free(results[i].data);
                results[i].data = NULL;
            }
            free(results);
            free_sd_ctx(sd_ctx);
            return 0;
        } else {
            results = img2img(sd_ctx,
                              input_image,
                              params.prompt.c_str(),
                              params.negative_prompt.c_str(),
                              params.clip_skip,
                              params.cfg_scale,
                              params.width,
                              params.height,
                              params.sample_method,
                              params.sample_steps,
                              params.strength,
                              params.seed,
                              params.batch_count,
                              control_image,
                              params.control_strength,
                              params.style_ratio,
                              params.normalize_input,
                              params.input_id_images_path.c_str());
        }
    }

    if (results == NULL) {
        printf("generate failed\n");
        free_sd_ctx(sd_ctx);
        return 1;
    }

    int upscale_factor = 4;  // unused for RealESRGAN_x4plus_anime_6B.pth
    if (params.esrgan_path.size() > 0 && params.upscale_repeats > 0) {
        upscaler_ctx_t* upscaler_ctx = new_upscaler_ctx(params.esrgan_path.c_str(),
                                                        params.n_threads,
                                                        params.wtype);

        if (upscaler_ctx == NULL) {
            printf("new_upscaler_ctx failed\n");
        } else {
            for (int i = 0; i < params.batch_count; i++) {
                if (results[i].data == NULL) {
                    continue;
                }
                sd_image_t current_image = results[i];
                for (int u = 0; u < params.upscale_repeats; ++u) {
                    sd_image_t upscaled_image = upscale(upscaler_ctx, current_image, upscale_factor);
                    if (upscaled_image.data == NULL) {
                        printf("upscale failed\n");
                        break;
                    }
                    free(current_image.data);
                    current_image = upscaled_image;
                }
                results[i] = current_image;  // Set the final upscaled image as the result
            }
        }
    }

    size_t last            = params.output_path.find_last_of(".");
    std::string dummy_name = last != std::string::npos ? params.output_path.substr(0, last) : params.output_path;
    for (int i = 0; i < params.batch_count; i++) {
        if (results[i].data == NULL) {
            continue;
        }
        std::string final_image_path = i > 0 ? dummy_name + "_" + std::to_string(i + 1) + ".png" : dummy_name + ".png";
        stbi_write_png(final_image_path.c_str(), results[i].width, results[i].height, results[i].channel,
                       results[i].data, 0, get_image_params(params, params.seed + i).c_str());
        printf("save result image to '%s'\n", final_image_path.c_str());
        free(results[i].data);
        results[i].data = NULL;
    }
    free(results);

    free_sd_ctx(sd_ctx);
    free(control_image_buffer);
    free(input_image_buffer);
    return 0;
}

Output

ggml_cuda_init: GGML_CUDA_FORCE_MMQ:   no
ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce GTX 1060 with Max-Q Design, compute capability 6.1, VMM: yes
[INFO ] stable-diffusion.cpp:169  - loading model from './v1-5-pruned-emaonly.ckpt'
[INFO ] model.cpp:736  - load ./v1-5-pruned-emaonly.ckpt using checkpoint format
[INFO ] stable-diffusion.cpp:192  - Stable Diffusion 1.x
[INFO ] stable-diffusion.cpp:198  - Stable Diffusion weight type: f32
[INFO ] stable-diffusion.cpp:419  - total params memory size = 2719.24MB (VRAM 2719.24MB, RAM 0.00MB): clip 469.44MB(VRAM), unet 2155.33MB(VRAM), vae 94.47MB(VRAM), controlnet 0.00MB(VRAM), pmid 0.00MB(VRAM)
[INFO ] stable-diffusion.cpp:423  - loading model from './v1-5-pruned-emaonly.ckpt' completed, taking 18.72s
[INFO ] stable-diffusion.cpp:440  - running in eps-prediction mode
[INFO ] stable-diffusion.cpp:556  - Attempting to apply 0 LoRAs
[INFO ] stable-diffusion.cpp:1203 - apply_loras completed, taking 0.00s
ggml_gallocr_reserve_n: reallocating CUDA0 buffer from size 0.00 MiB to 1.40 MiB
ggml_gallocr_reserve_n: reallocating CUDA0 buffer from size 0.00 MiB to 1.40 MiB
[INFO ] stable-diffusion.cpp:1316 - get_learned_condition completed, taking 514 ms
[INFO ] stable-diffusion.cpp:1334 - sampling using Euler A method
[INFO ] stable-diffusion.cpp:1338 - generating image: 1/1 - seed 42
ggml_gallocr_reserve_n: reallocating CUDA0 buffer from size 0.00 MiB to 559.90 MiB
|==================================================| 20/20 - 1.40s/it
[INFO ] stable-diffusion.cpp:1381 - sampling completed, taking 35.05s
[INFO ] stable-diffusion.cpp:1389 - generating 1 latent images completed, taking 35.07s
[INFO ] stable-diffusion.cpp:1392 - decoding 1 latents
ggml_gallocr_reserve_n: reallocating CUDA0 buffer from size 0.00 MiB to 1664.00 MiB
[INFO ] stable-diffusion.cpp:1402 - latent 1 decoded, taking 3.03s
[INFO ] stable-diffusion.cpp:1406 - decode_first_stage completed, taking 3.03s
[INFO ] stable-diffusion.cpp:1490 - txt2img completed in 38.64s
save result image to './gen_img.png'

Notes:

  • You need to download a model file supported by stable-diffusion.cpp yourself; a ckpt-format file from the official Hugging Face site is recommended
  • Prompts must be written in English
  • Both text-to-image and image-guided generation are supported; there are many parameters, so it is worth experimenting with them
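For example, the v1-5-pruned-emaonly.ckpt checkpoint used in the demo has historically been hosted in the runwayml/stable-diffusion-v1-5 repository on Hugging Face; the repository path below is an assumption and may have moved, so verify it on huggingface.co first.

```shell
# Download the SD 1.5 checkpoint next to the demo executable
# (repository path is assumed; check huggingface.co for the current mirror)
curl -L -o v1-5-pruned-emaonly.ckpt \
  "https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt"
```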

Source code

stable_diffusion_cpp_starter

That concludes this first look at developing with the open-source C++ image-generation framework stable-diffusion.cpp; hopefully it is a helpful starting point.


