[brpc Learning Practice 7] dummy server and DynamicPartitionChannel

2023-11-23 19:36

This article introduces brpc's dummy server and DynamicPartitionChannel. Hopefully it provides some useful reference for developers dealing with these problems; let's learn it together!

dummy server

If your program only uses the baidu-rpc (brpc) client, or does not use baidu-rpc at all, but you still want access to baidu-rpc's built-in services, just start an empty server inside the program. We call this kind of server a dummy server.
Once it is running, the built-in service pages (such as /status, /vars and /flags) become reachable on its port. A dummy server can also be used as a simple HTTP server for prototyping and development, though most scenarios won't need that.

How to enable the dummy server in brpc

When the program uses the brpc client

Method 1 (recommended)
Declare the dummy server port via a gflag, e.g. --dummy_server_port=8888. The service must be restarted for it to take effect, and it has lower priority than Method 2.
This method is cheaper to use; it only has lower priority than Method 2 because it appeared later and therefore has to stay compatible with Method 2.
Method 2
Create a dummy_server.port file in the program's working directory and write a port number into it (e.g. 8888). It takes effect without restarting the service and has higher priority than Method 1.

When the program does not use the baidu-rpc client

You must add the dummy server manually:

#include <baidu/rpc/server.h>
...
int main() {
    ...
    baidu::rpc::Server dummy_server;
    baidu::rpc::ServerOptions dummy_server_options;
    dummy_server_options.num_threads = 0;  // Don't change the host program's thread count.
    if (dummy_server.Start(8888/*port*/, &dummy_server_options) != 0) {
        LOG(FATAL) << "Fail to start dummy server";
        return -1;
    }
    ...
}

Since r31803, adding a dummy server only takes one line:

#include <baidu/rpc/server.h>
...
int main() {
    ...
    baidu::rpc::StartDummyServerAt(8888/*port*/);
    ...
}

Dummy server example using the brpc client

Note that we use bvar::LatencyRecorder g_latency_recorder("client"); here to record latency (a short sketch isolating this usage follows the full example below); bvar itself will be covered in detail in the next article.

#include <gflags/gflags.h>
#include <butil/logging.h>
#include <butil/time.h>
#include <bthread/bthread.h>
#include <brpc/channel.h>
#include <brpc/server.h>
#include "echo.pb.h"
#include <bvar/bvar.h>

DEFINE_int32(thread_num, 4, "Number of threads to send requests");
DEFINE_bool(use_bthread, false, "Use bthread to send requests");
DEFINE_string(attachment, "foo", "Carry this along with requests");
DEFINE_string(connection_type, "", "Connection type. Available values: single, pooled, short");
DEFINE_string(server, "0.0.0.0:8000", "IP Address of server");
DEFINE_string(load_balancer, "", "The algorithm for load balancing");
DEFINE_int32(timeout_ms, 100, "RPC timeout in milliseconds");
DEFINE_int32(max_retry, 3, "Max retries(not including the first RPC)"); 
DEFINE_string(protocol, "baidu_std", "Protocol type. Defined in src/brpc/options.proto");
DEFINE_int32(depth, 0, "number of loop calls");
// Don't send too frequently in this example
DEFINE_int32(sleep_ms, 100, "milliseconds to sleep after each RPC");
DEFINE_int32(dummy_port, -1, "Launch dummy server at this port");

bvar::LatencyRecorder g_latency_recorder("client");

void* sender(void* arg) {
    brpc::Channel* chan = (brpc::Channel*)arg;
    // Normally, you should not call a Channel directly, but instead construct
    // a stub Service wrapping it. stub can be shared by all threads as well.
    example::EchoService_Stub stub(chan);

    // Send a request and wait for the response every 1 second.
    int log_id = 0;
    while (!brpc::IsAskedToQuit()) {
        // We will receive response synchronously, safe to put variables
        // on stack.
        example::EchoRequest request;
        example::EchoResponse response;
        brpc::Controller cntl;

        request.set_message("hello world");
        if (FLAGS_depth > 0) {
            request.set_depth(FLAGS_depth);
        }

        cntl.set_log_id(log_id ++);  // set by user
        // Set attachment which is wired to network directly instead of
        // being serialized into protobuf messages.
        cntl.request_attachment().append(FLAGS_attachment);

        // Because `done'(last parameter) is NULL, this function waits until
        // the response comes back or error occurs(including timedout).
        stub.Echo(&cntl, &request, &response, NULL);
        if (cntl.Failed()) {
            //LOG_EVERY_SECOND(WARNING) << "Fail to send EchoRequest, " << cntl.ErrorText();
        } else {
            g_latency_recorder << cntl.latency_us();
        }
        if (FLAGS_sleep_ms != 0) {
            bthread_usleep(FLAGS_sleep_ms * 1000L);
        }
    }
    return NULL;
}

int main(int argc, char* argv[]) {
    // Parse gflags. We recommend you to use gflags as well.
    GFLAGS_NS::SetUsageMessage("Send EchoRequest to server every second");
    GFLAGS_NS::ParseCommandLineFlags(&argc, &argv, true);

    // A Channel represents a communication line to a Server. Notice that
    // Channel is thread-safe and can be shared by all threads in your program.
    brpc::Channel channel;
    brpc::ChannelOptions options;
    options.protocol = FLAGS_protocol;
    options.connection_type = FLAGS_connection_type;
    options.timeout_ms = FLAGS_timeout_ms/*milliseconds*/;
    options.max_retry = FLAGS_max_retry;
    // Initialize the channel, NULL means using default options.
    // For more options, see `brpc/channel.h'.
    if (channel.Init(FLAGS_server.c_str(), FLAGS_load_balancer.c_str(), &options) != 0) {
        LOG(ERROR) << "Fail to initialize channel";
        return -1;
    }

    std::vector<bthread_t> bids;
    std::vector<pthread_t> pids;
    if (!FLAGS_use_bthread) {
        pids.resize(FLAGS_thread_num);
        for (int i = 0; i < FLAGS_thread_num; ++i) {
            if (pthread_create(&pids[i], NULL, sender, &channel) != 0) {
                LOG(ERROR) << "Fail to create pthread";
                return -1;
            }
        }
    } else {
        bids.resize(FLAGS_thread_num);
        for (int i = 0; i < FLAGS_thread_num; ++i) {
            if (bthread_start_background(&bids[i], NULL, sender, &channel) != 0) {
                LOG(ERROR) << "Fail to create bthread";
                return -1;
            }
        }
    }

    if (FLAGS_dummy_port >= 0) {
        brpc::StartDummyServerAt(FLAGS_dummy_port);
    }

    while (!brpc::IsAskedToQuit()) {
        sleep(1);
        LOG(INFO) << "Sending EchoRequest at qps=" << g_latency_recorder.qps(1)
                  << " latency=" << g_latency_recorder.latency(1);
    }

    LOG(INFO) << "EchoClient is going to quit";
    for (int i = 0; i < FLAGS_thread_num; ++i) {
        if (!FLAGS_use_bthread) {
            pthread_join(pids[i], NULL);
        } else {
            bthread_join(bids[i], NULL);
        }
    }
    return 0;
}
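Since the full client above mixes the bvar calls with RPC plumbing, here is a minimal standalone sketch that isolates just the bvar::LatencyRecorder usage: feeding latency samples with operator<< and reading the windowed qps()/latency(), the same calls the client's logging loop uses. The loop count and the fake latency values are only for illustration.

#include <unistd.h>
#include <butil/logging.h>
#include <bvar/bvar.h>

// Same declaration as in the client; the "client" prefix names the exported variables on /vars.
bvar::LatencyRecorder g_latency_recorder("client");

int main() {
    for (int i = 0; i < 5; ++i) {
        // Pretend an RPC just finished and took (1000 + i * 100) microseconds.
        g_latency_recorder << (1000 + i * 100);
        sleep(1);
        // Read the statistics over the last 1-second window,
        // exactly like the client's logging loop above.
        LOG(INFO) << "qps=" << g_latency_recorder.qps(1)
                  << " latency=" << g_latency_recorder.latency(1);
    }
    return 0;
}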

DynamicPartitionChannel

Its main purpose is to let you migrate servers from one partitioning scheme to another while keeping the client code unchanged, enabling seamless switching between server deployments and improving availability and stability. Example code follows. Since I haven't used it at work yet, this is only a rough record; I'll fill in details once I actually need it.

Client example:

#include <gflags/gflags.h>
#include <bthread/bthread.h>
#include <butil/logging.h>
#include <butil/string_printf.h>
#include <butil/time.h>
#include <butil/macros.h>
#include <brpc/partition_channel.h>
#include <deque>
#include "echo.pb.h"DEFINE_int32(thread_num, 50, "Number of threads to send requests");
DEFINE_bool(use_bthread, false, "Use bthread to send requests");
DEFINE_int32(attachment_size, 0, "Carry so many byte attachment along with requests");
DEFINE_int32(request_size, 16, "Bytes of each request");
DEFINE_string(connection_type, "", "Connection type. Available values: single, pooled, short");
DEFINE_string(protocol, "baidu_std", "Protocol type. Defined in src/brpc/options.proto");
DEFINE_string(server, "file://server_list", "Mapping to servers");
DEFINE_string(load_balancer, "rr", "Name of load balancer");
DEFINE_int32(timeout_ms, 100, "RPC timeout in milliseconds");
DEFINE_int32(max_retry, 3, "Max retries(not including the first RPC)"); 
DEFINE_bool(dont_fail, false, "Print fatal when some call failed");

std::string g_request;
std::string g_attachment;
pthread_mutex_t g_latency_mutex = PTHREAD_MUTEX_INITIALIZER;
struct BAIDU_CACHELINE_ALIGNMENT SenderInfo {
    size_t nsuccess;
    int64_t latency_sum;
};
std::deque<SenderInfo> g_sender_info;

static void* sender(void* arg) {
    // Normally, you should not call a Channel directly, but instead construct
    // a stub Service wrapping it. stub can be shared by all threads as well.
    example::EchoService_Stub stub(static_cast<google::protobuf::RpcChannel*>(arg));

    SenderInfo* info = NULL;
    {
        BAIDU_SCOPED_LOCK(g_latency_mutex);
        g_sender_info.push_back(SenderInfo());
        info = &g_sender_info.back();
    }

    int log_id = 0;
    while (!brpc::IsAskedToQuit()) {
        // We will receive response synchronously, safe to put variables
        // on stack.
        example::EchoRequest request;
        example::EchoResponse response;
        brpc::Controller cntl;

        request.set_message(g_request);
        cntl.set_log_id(log_id++);  // set by user

        if (!g_attachment.empty()) {
            // Set attachment which is wired to network directly instead of
            // being serialized into protobuf messages.
            cntl.request_attachment().append(g_attachment);
        }

        // Because `done'(last parameter) is NULL, this function waits until
        // the response comes back or error occurs(including timedout).
        stub.Echo(&cntl, &request, &response, NULL);
        if (!cntl.Failed()) {
            info->latency_sum += cntl.latency_us();
            ++info->nsuccess;
        } else {
            CHECK(brpc::IsAskedToQuit() || !FLAGS_dont_fail)
                << "error=" << cntl.ErrorText() << " latency=" << cntl.latency_us();
            // We can't connect to the server, sleep a while. Notice that this
            // is a specific sleeping to prevent this thread from spinning too
            // fast. You should continue the business logic in a production
            // server rather than sleeping.
            bthread_usleep(50000);
        }
    }
    return NULL;
}

class MyPartitionParser : public brpc::PartitionParser {
public:
    bool ParseFromTag(const std::string& tag, brpc::Partition* out) {
        // "N/M" : #N partition of M partitions.
        size_t pos = tag.find_first_of('/');
        if (pos == std::string::npos) {
            LOG(ERROR) << "Invalid tag=" << tag;
            return false;
        }
        char* endptr = NULL;
        out->index = strtol(tag.c_str(), &endptr, 10);
        if (endptr != tag.data() + pos) {
            LOG(ERROR) << "Invalid index=" << butil::StringPiece(tag.data(), pos);
            return false;
        }
        out->num_partition_kinds = strtol(tag.c_str() + pos + 1, &endptr, 10);
        if (endptr != tag.c_str() + tag.size()) {
            LOG(ERROR) << "Invalid num=" << tag.data() + pos + 1;
            return false;
        }
        return true;
    }
};

int main(int argc, char* argv[]) {
    // Parse gflags. We recommend you to use gflags as well.
    GFLAGS_NS::ParseCommandLineFlags(&argc, &argv, true);

    // A Channel represents a communication line to a Server. Notice that
    // Channel is thread-safe and can be shared by all threads in your program.
    brpc::DynamicPartitionChannel channel;
    brpc::PartitionChannelOptions options;
    options.protocol = FLAGS_protocol;
    options.connection_type = FLAGS_connection_type;
    options.succeed_without_server = true;
    options.fail_limit = 1;
    options.timeout_ms = FLAGS_timeout_ms/*milliseconds*/;
    options.max_retry = FLAGS_max_retry;

    if (channel.Init(new MyPartitionParser(),
                     FLAGS_server.c_str(), FLAGS_load_balancer.c_str(),
                     &options) != 0) {
        LOG(ERROR) << "Fail to init channel";
        return -1;
    }

    if (FLAGS_attachment_size > 0) {
        g_attachment.resize(FLAGS_attachment_size, 'a');
    }
    if (FLAGS_request_size <= 0) {
        LOG(ERROR) << "Bad request_size=" << FLAGS_request_size;
        return -1;
    }
    g_request.resize(FLAGS_request_size, 'r');

    std::vector<bthread_t> bids;
    std::vector<pthread_t> pids;
    if (!FLAGS_use_bthread) {
        pids.resize(FLAGS_thread_num);
        for (int i = 0; i < FLAGS_thread_num; ++i) {
            if (pthread_create(&pids[i], NULL, sender, &channel) != 0) {
                LOG(ERROR) << "Fail to create pthread";
                return -1;
            }
        }
    } else {
        bids.resize(FLAGS_thread_num);
        for (int i = 0; i < FLAGS_thread_num; ++i) {
            if (bthread_start_background(&bids[i], NULL, sender, &channel) != 0) {
                LOG(ERROR) << "Fail to create bthread";
                return -1;
            }
        }
    }

    int64_t last_counter = 0;
    int64_t last_latency_sum = 0;
    std::vector<size_t> last_nsuccess(FLAGS_thread_num);
    while (!brpc::IsAskedToQuit()) {
        sleep(1);
        int64_t latency_sum = 0;
        int64_t nsuccess = 0;
        pthread_mutex_lock(&g_latency_mutex);
        CHECK_EQ(g_sender_info.size(), (size_t)FLAGS_thread_num);
        for (size_t i = 0; i < g_sender_info.size(); ++i) {
            const SenderInfo& info = g_sender_info[i];
            latency_sum += info.latency_sum;
            nsuccess += info.nsuccess;
            if (FLAGS_dont_fail) {
                CHECK(info.nsuccess > last_nsuccess[i]) << "i=" << i;
            }
            last_nsuccess[i] = info.nsuccess;
        }
        pthread_mutex_unlock(&g_latency_mutex);
        const int64_t avg_latency = (latency_sum - last_latency_sum) /
            std::max(nsuccess - last_counter, (int64_t)1);
        LOG(INFO) << "Sending EchoRequest at qps=" << nsuccess - last_counter
                  << " latency=" << avg_latency;
        last_counter = nsuccess;
        last_latency_sum = latency_sum;
    }

    LOG(INFO) << "EchoClient is going to quit";
    for (int i = 0; i < FLAGS_thread_num; ++i) {
        if (!FLAGS_use_bthread) {
            pthread_join(pids[i], NULL);
        } else {
            bthread_join(bids[i], NULL);
        }
    }
    return 0;
}

Server partition configuration example (the server_list file read by the client). Each line is a server address followed by a tag N/M, meaning that instance serves partition #N of an M-partition scheme; the line starting with # is commented out:

0.0.0.0:8004  0/3
0.0.0.0:8004  1/3
0.0.0.0:8004  2/3
# 0.0.0.0:8008 0/3
0.0.0.0:8005  0/4
0.0.0.0:8005  1/4
0.0.0.0:8005  2/4
0.0.0.0:8005  3/4
0.0.0.0:8006  0/4
0.0.0.0:8006  1/4
0.0.0.0:8006  2/4
0.0.0.0:8006  3/4
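To make the tag format concrete, here is a small standalone sketch (not part of the brpc example; it only mirrors the "N/M" parsing done by MyPartitionParser above, with a plain struct standing in for brpc::Partition) that parses a few tags and prints the partition index and the number of partitions:

#include <cstdio>
#include <cstdlib>
#include <string>

// Stand-in for brpc::Partition: just the two fields the parser fills in.
struct Partition {
    int index;                // which partition this server holds (the N in "N/M")
    int num_partition_kinds;  // how many partitions the scheme has (the M in "N/M")
};

// Same "N/M" parsing logic as MyPartitionParser::ParseFromTag above.
static bool ParseTag(const std::string& tag, Partition* out) {
    const size_t pos = tag.find_first_of('/');
    if (pos == std::string::npos) {
        return false;
    }
    char* endptr = NULL;
    out->index = strtol(tag.c_str(), &endptr, 10);
    if (endptr != tag.data() + pos) {
        return false;
    }
    out->num_partition_kinds = strtol(tag.c_str() + pos + 1, &endptr, 10);
    return endptr == tag.c_str() + tag.size();
}

int main() {
    const char* tags[] = { "0/3", "2/3", "3/4", "bad-tag" };
    for (const char* t : tags) {
        Partition p;
        if (ParseTag(t, &p)) {
            printf("tag=%s -> partition #%d of %d\n", t, p.index, p.num_partition_kinds);
        } else {
            printf("tag=%s -> invalid\n", t);
        }
    }
    return 0;
}

With the list above, the 3-partition scheme on port 8004 and the 4-partition scheme on ports 8005/8006 coexist in the same file, which is exactly the situation DynamicPartitionChannel is meant to handle: the client keeps running unchanged while servers migrate from one scheme to the other.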

Server-side example

// Headers used by this server example (omitted in the original post):
#include <gflags/gflags.h>
#include <butil/logging.h>
#include <butil/string_printf.h>
#include <butil/string_splitter.h>
#include <butil/rand_util.h>
#include <butil/time.h>
#include <bthread/bthread.h>
#include <bvar/bvar.h>
#include <brpc/server.h>
#include "echo.pb.h"

DEFINE_bool(echo_attachment, true, "Echo attachment as well");
DEFINE_int32(port, 8004, "TCP Port of this server");
DEFINE_int32(idle_timeout_s, -1, "Connection will be closed if there is no ""read/write operations during the last `idle_timeout_s'");
DEFINE_int32(logoff_ms, 2000, "Maximum duration of server's LOGOFF state ""(waiting for client to close connection before server stops)");
DEFINE_int32(max_concurrency, 0, "Limit of request processing in parallel");
DEFINE_int32(server_num, 1, "Number of servers");
DEFINE_string(sleep_us, "", "Sleep so many microseconds before responding");
DEFINE_bool(spin, false, "spin rather than sleep");
DEFINE_double(exception_ratio, 0.1, "Percentage of irregular latencies");
DEFINE_double(min_ratio, 0.2, "min_sleep / sleep_us");
DEFINE_double(max_ratio, 10, "max_sleep / sleep_us");

// Your implementation of example::EchoService
class EchoServiceImpl : public example::EchoService {
public:
    EchoServiceImpl() : _index(0) {}
    virtual ~EchoServiceImpl() {};
    void set_index(size_t index, int64_t sleep_us) {
        _index = index;
        _sleep_us = sleep_us;
    }
    virtual void Echo(google::protobuf::RpcController* cntl_base,
                      const example::EchoRequest* request,
                      example::EchoResponse* response,
                      google::protobuf::Closure* done) {
        brpc::ClosureGuard done_guard(done);
        brpc::Controller* cntl = static_cast<brpc::Controller*>(cntl_base);

        if (_sleep_us > 0) {
            double delay = _sleep_us;
            const double a = FLAGS_exception_ratio * 0.5;
            if (a >= 0.0001) {
                double x = butil::RandDouble();
                if (x < a) {
                    const double min_sleep_us = FLAGS_min_ratio * _sleep_us;
                    delay = min_sleep_us + (_sleep_us - min_sleep_us) * x / a;
                } else if (x + a > 1) {
                    const double max_sleep_us = FLAGS_max_ratio * _sleep_us;
                    delay = _sleep_us + (max_sleep_us - _sleep_us) * (x + a - 1) / a;
                }
            }
            if (FLAGS_spin) {
                int64_t end_time = butil::gettimeofday_us() + (int64_t)delay;
                while (butil::gettimeofday_us() < end_time) {}
            } else {
                bthread_usleep((int64_t)delay);
            }
        }

        // Echo request and its attachment
        response->set_message(request->message());
        if (FLAGS_echo_attachment) {
            cntl->response_attachment().append(cntl->request_attachment());
        }
        _nreq << 1;
    }

    size_t num_requests() const { return _nreq.get_value(); }

private:
    size_t _index;
    int64_t _sleep_us;
    bvar::Adder<size_t> _nreq;
};

int main(int argc, char* argv[]) {
    // Parse gflags. We recommend you to use gflags as well.
    GFLAGS_NS::ParseCommandLineFlags(&argc, &argv, true);

    if (FLAGS_server_num <= 0) {
        LOG(ERROR) << "server_num must be positive";
        return -1;
    }

    // We need multiple servers in this example.
    brpc::Server* servers = new brpc::Server[FLAGS_server_num];

    // For more options see `brpc/server.h'.
    brpc::ServerOptions options;
    options.idle_timeout_sec = FLAGS_idle_timeout_s;
    options.max_concurrency = FLAGS_max_concurrency;

    butil::StringSplitter sp(FLAGS_sleep_us.c_str(), ',');
    std::vector<int64_t> sleep_list;
    for (; sp; ++sp) {
        sleep_list.push_back(strtoll(sp.field(), NULL, 10));
    }
    if (sleep_list.empty()) {
        sleep_list.push_back(0);
    }

    // Instance of your services.
    EchoServiceImpl* echo_service_impls = new EchoServiceImpl[FLAGS_server_num];

    // Add the service into servers. Notice the second parameter, because the
    // service is put on stack, we don't want server to delete it, otherwise
    // use brpc::SERVER_OWNS_SERVICE.
    for (int i = 0; i < FLAGS_server_num; ++i) {
        int64_t sleep_us = sleep_list[(size_t)i < sleep_list.size() ? i : (sleep_list.size() - 1)];
        echo_service_impls[i].set_index(i, sleep_us);
        // will be shown on /version page
        servers[i].set_version(butil::string_printf("example/dynamic_partition_echo_c++[%d]", i));
        if (servers[i].AddService(&echo_service_impls[i],
                                  brpc::SERVER_DOESNT_OWN_SERVICE) != 0) {
            LOG(ERROR) << "Fail to add service";
            return -1;
        }
        // Start the server.
        int port = FLAGS_port + i;
        if (servers[i].Start(port, &options) != 0) {
            LOG(ERROR) << "Fail to start EchoServer";
            return -1;
        }
    }

    // Service logic are running in separate worker threads, for main thread,
    // we don't have much to do, just spinning.
    std::vector<size_t> last_num_requests(FLAGS_server_num);
    while (!brpc::IsAskedToQuit()) {
        sleep(1);

        size_t cur_total = 0;
        for (int i = 0; i < FLAGS_server_num; ++i) {
            const size_t current_num_requests =
                echo_service_impls[i].num_requests();
            size_t diff = current_num_requests - last_num_requests[i];
            cur_total += diff;
            last_num_requests[i] = current_num_requests;
            LOG(INFO) << "S[" << i << "]=" << diff << ' ' << noflush;
        }
        LOG(INFO) << "[total=" << cur_total << ']';
    }

    // Don't forget to stop and join the server otherwise still-running
    // worker threads may crash your program. Clients will have at most
    // `FLAGS_logoff_ms' to close their connections. If some connections
    // still remains after `FLAGS_logoff_ms', they will be closed by force.
    for (int i = 0; i < FLAGS_server_num; ++i) {
        servers[i].Stop(FLAGS_logoff_ms);
    }
    for (int i = 0; i < FLAGS_server_num; ++i) {
        servers[i].Join();
    }
    delete [] servers;
    delete [] echo_service_impls;
    return 0;
}

This concludes the article on brpc's dummy server and DynamicPartitionChannel. Hopefully it is helpful to fellow programmers!


