Language APIs SDKs-C++-Exporters

2024-04-19 18:36
Tags: c++ apis language exporters sdks

This article introduces Language APIs SDKs-C++-Exporters, in the hope that it provides some useful reference for developers working through this topic. Interested developers are welcome to follow along!

Exporters

Send telemetry to the OpenTelemetry Collector to make sure it's exported correctly. Using the Collector in production environments is a best practice. To visualize your telemetry, export it to a backend such as Jaeger, Zipkin, Prometheus, or a vendor-specific backend.

Available exporters

The registry contains a list of exporters for C++.

Among exporters, OpenTelemetry Protocol (OTLP) exporters are designed with the OpenTelemetry data model in mind, emitting OTel data without any loss of information. Furthermore, many tools that operate on telemetry data support OTLP (such as Prometheus, Jaeger, and most vendors), providing you with a high degree of flexibility when you need it. To learn more about OTLP, see the OTLP Specification.

This page covers the main OpenTelemetry C++ exporters and how to set them up.

OTLP

Collector Setup

Note
If you have an OTLP collector or backend already set up, you can skip this section and set up the OTLP exporter dependencies for your application.

To try out and verify your OTLP exporters, you can run the collector in a docker container that writes telemetry directly to the console.

In an empty directory, create a file called collector-config.yaml with the following content:

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
exporters:
  debug:
    verbosity: detailed
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [debug]
    metrics:
      receivers: [otlp]
      exporters: [debug]
    logs:
      receivers: [otlp]
      exporters: [debug]

Now run the collector in a docker container:

docker run -p 4317:4317 -p 4318:4318 --rm -v $(pwd)/collector-config.yaml:/etc/otelcol/config.yaml otel/opentelemetry-collector

This collector is now able to accept telemetry via OTLP. Later you may want to configure the collector to send your telemetry to your observability backend.

Dependencies

If you want to send telemetry data to an OTLP endpoint (like the OpenTelemetry Collector, Jaeger or Prometheus), you can choose between two different protocols to transport your data:

  • HTTP/protobuf
  • gRPC

Make sure that you have set the right cmake build variables while building OpenTelemetry C++ from source:

  • -DWITH_OTLP_GRPC=ON: To enable building OTLP gRPC exporter.
  • -DWITH_OTLP_HTTP=ON: To enable building OTLP HTTP exporter.

Usage

Next, configure the exporter to point at an OTLP endpoint in your code.

// HTTP/proto
#include "opentelemetry/exporters/otlp/otlp_http_exporter_factory.h"
#include "opentelemetry/exporters/otlp/otlp_http_exporter_options.h"
#include "opentelemetry/sdk/trace/processor.h"
#include "opentelemetry/sdk/trace/batch_span_processor_factory.h"
#include "opentelemetry/sdk/trace/batch_span_processor_options.h"
#include "opentelemetry/sdk/trace/tracer_provider_factory.h"
#include "opentelemetry/trace/provider.h"
#include "opentelemetry/sdk/trace/tracer_provider.h"

#include "opentelemetry/exporters/otlp/otlp_http_metric_exporter_factory.h"
#include "opentelemetry/exporters/otlp/otlp_http_metric_exporter_options.h"
#include "opentelemetry/metrics/provider.h"
#include "opentelemetry/sdk/metrics/aggregation/default_aggregation.h"
#include "opentelemetry/sdk/metrics/export/periodic_exporting_metric_reader.h"
#include "opentelemetry/sdk/metrics/export/periodic_exporting_metric_reader_factory.h"
#include "opentelemetry/sdk/metrics/meter_context_factory.h"
#include "opentelemetry/sdk/metrics/meter_provider.h"
#include "opentelemetry/sdk/metrics/meter_provider_factory.h"

#include "opentelemetry/exporters/otlp/otlp_http_log_record_exporter_factory.h"
#include "opentelemetry/exporters/otlp/otlp_http_log_record_exporter_options.h"
#include "opentelemetry/logs/provider.h"
#include "opentelemetry/sdk/logs/logger_provider_factory.h"
#include "opentelemetry/sdk/logs/processor.h"
#include "opentelemetry/sdk/logs/simple_log_record_processor_factory.h"

namespace trace_api   = opentelemetry::trace;
namespace trace_sdk   = opentelemetry::sdk::trace;

namespace metric_sdk  = opentelemetry::sdk::metrics;
namespace metrics_api = opentelemetry::metrics;

namespace otlp        = opentelemetry::exporter::otlp;

namespace logs_api    = opentelemetry::logs;
namespace logs_sdk    = opentelemetry::sdk::logs;

void InitTracer()
{
  trace_sdk::BatchSpanProcessorOptions bspOpts{};
  otlp::OtlpHttpExporterOptions opts;
  opts.url = "http://localhost:4318/v1/traces";
  auto exporter  = otlp::OtlpHttpExporterFactory::Create(opts);
  auto processor = trace_sdk::BatchSpanProcessorFactory::Create(std::move(exporter), bspOpts);
  std::shared_ptr<trace_api::TracerProvider> provider =
      trace_sdk::TracerProviderFactory::Create(std::move(processor));
  trace_api::Provider::SetTracerProvider(provider);
}

void InitMetrics()
{
  otlp::OtlpHttpMetricExporterOptions opts;
  opts.url = "http://localhost:4318/v1/metrics";
  auto exporter = otlp::OtlpHttpMetricExporterFactory::Create(opts);

  metric_sdk::PeriodicExportingMetricReaderOptions reader_options;
  reader_options.export_interval_millis = std::chrono::milliseconds(1000);
  reader_options.export_timeout_millis  = std::chrono::milliseconds(500);
  auto reader =
      metric_sdk::PeriodicExportingMetricReaderFactory::Create(std::move(exporter), reader_options);

  auto context = metric_sdk::MeterContextFactory::Create();
  context->AddMetricReader(std::move(reader));

  auto u_provider = metric_sdk::MeterProviderFactory::Create(std::move(context));
  std::shared_ptr<metrics_api::MeterProvider> provider(std::move(u_provider));
  metrics_api::Provider::SetMeterProvider(provider);
}

void InitLogger()
{
  otlp::OtlpHttpLogRecordExporterOptions opts;
  opts.url = "http://localhost:4318/v1/logs";
  auto exporter  = otlp::OtlpHttpLogRecordExporterFactory::Create(opts);
  auto processor = logs_sdk::SimpleLogRecordProcessorFactory::Create(std::move(exporter));
  std::shared_ptr<logs_api::LoggerProvider> provider =
      logs_sdk::LoggerProviderFactory::Create(std::move(processor));
  logs_api::Provider::SetLoggerProvider(provider);
}
// gRPC
#include "opentelemetry/exporters/otlp/otlp_grpc_exporter_factory.h"
#include "opentelemetry/exporters/otlp/otlp_grpc_exporter_options.h"
#include "opentelemetry/sdk/trace/processor.h"
#include "opentelemetry/sdk/trace/batch_span_processor_factory.h"
#include "opentelemetry/sdk/trace/batch_span_processor_options.h"
#include "opentelemetry/sdk/trace/tracer_provider_factory.h"
#include "opentelemetry/trace/provider.h"
#include "opentelemetry/sdk/trace/tracer_provider.h"

#include "opentelemetry/exporters/otlp/otlp_grpc_metric_exporter_factory.h"
#include "opentelemetry/exporters/otlp/otlp_grpc_metric_exporter_options.h"
#include "opentelemetry/metrics/provider.h"
#include "opentelemetry/sdk/metrics/aggregation/default_aggregation.h"
#include "opentelemetry/sdk/metrics/export/periodic_exporting_metric_reader.h"
#include "opentelemetry/sdk/metrics/export/periodic_exporting_metric_reader_factory.h"
#include "opentelemetry/sdk/metrics/meter_context_factory.h"
#include "opentelemetry/sdk/metrics/meter_provider.h"
#include "opentelemetry/sdk/metrics/meter_provider_factory.h"

#include "opentelemetry/exporters/otlp/otlp_grpc_log_record_exporter_factory.h"
#include "opentelemetry/exporters/otlp/otlp_grpc_log_record_exporter_options.h"
#include "opentelemetry/logs/provider.h"
#include "opentelemetry/sdk/logs/logger_provider_factory.h"
#include "opentelemetry/sdk/logs/processor.h"
#include "opentelemetry/sdk/logs/simple_log_record_processor_factory.h"

namespace trace_api   = opentelemetry::trace;
namespace trace_sdk   = opentelemetry::sdk::trace;

namespace metric_sdk  = opentelemetry::sdk::metrics;
namespace metrics_api = opentelemetry::metrics;

namespace otlp        = opentelemetry::exporter::otlp;

namespace logs_api    = opentelemetry::logs;
namespace logs_sdk    = opentelemetry::sdk::logs;

void InitTracer()
{
  trace_sdk::BatchSpanProcessorOptions bspOpts{};
  otlp::OtlpGrpcExporterOptions opts;
  opts.endpoint = "localhost:4317";
  opts.use_ssl_credentials = true;
  opts.ssl_credentials_cacert_as_string = "ssl-certificate";
  auto exporter  = otlp::OtlpGrpcExporterFactory::Create(opts);
  auto processor = trace_sdk::BatchSpanProcessorFactory::Create(std::move(exporter), bspOpts);
  std::shared_ptr<trace_api::TracerProvider> provider =
      trace_sdk::TracerProviderFactory::Create(std::move(processor));
  // Set the global trace provider
  trace_api::Provider::SetTracerProvider(provider);
}

void InitMetrics()
{
  otlp::OtlpGrpcMetricExporterOptions opts;
  opts.endpoint = "localhost:4317";
  opts.use_ssl_credentials = true;
  opts.ssl_credentials_cacert_as_string = "ssl-certificate";
  auto exporter = otlp::OtlpGrpcMetricExporterFactory::Create(opts);

  metric_sdk::PeriodicExportingMetricReaderOptions reader_options;
  reader_options.export_interval_millis = std::chrono::milliseconds(1000);
  reader_options.export_timeout_millis  = std::chrono::milliseconds(500);
  auto reader =
      metric_sdk::PeriodicExportingMetricReaderFactory::Create(std::move(exporter), reader_options);

  auto context = metric_sdk::MeterContextFactory::Create();
  context->AddMetricReader(std::move(reader));

  auto u_provider = metric_sdk::MeterProviderFactory::Create(std::move(context));
  std::shared_ptr<metrics_api::MeterProvider> provider(std::move(u_provider));
  metrics_api::Provider::SetMeterProvider(provider);
}

void InitLogger()
{
  otlp::OtlpGrpcLogRecordExporterOptions opts;
  opts.endpoint = "localhost:4317";
  opts.use_ssl_credentials = true;
  opts.ssl_credentials_cacert_as_string = "ssl-certificate";
  auto exporter  = otlp::OtlpGrpcLogRecordExporterFactory::Create(opts);
  auto processor = logs_sdk::SimpleLogRecordProcessorFactory::Create(std::move(exporter));
  std::shared_ptr<logs_api::LoggerProvider> provider =
      logs_sdk::LoggerProviderFactory::Create(std::move(processor));
  logs_api::Provider::SetLoggerProvider(provider);
}
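As a quick orientation, here is a minimal sketch of how the helpers above might be wired into an application; main(), the instrumentation scope name, and the span name below are illustrative and not part of the official example:

#include "opentelemetry/trace/provider.h"

int main()
{
  InitTracer();
  InitMetrics();
  InitLogger();

  // Obtain a tracer from the globally registered provider and emit one span.
  auto provider = opentelemetry::trace::Provider::GetTracerProvider();
  auto tracer   = provider->GetTracer("example-app");
  auto span     = tracer->StartSpan("example-operation");
  span->End();

  return 0;
}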

Console

To debug your instrumentation or see the values locally in development, you can use exporters writing telemetry data to the console (stdout).

When building OpenTelemetry C++ from source, the OStreamSpanExporter is included in the build by default.

#include "opentelemetry/exporters/ostream/span_exporter_factory.h"
#include "opentelemetry/sdk/trace/exporter.h"
#include "opentelemetry/sdk/trace/processor.h"
#include "opentelemetry/sdk/trace/simple_processor_factory.h"
#include "opentelemetry/sdk/trace/tracer_provider_factory.h"
#include "opentelemetry/trace/provider.h"

#include "opentelemetry/exporters/ostream/metrics_exporter_factory.h"
#include "opentelemetry/sdk/metrics/export/periodic_exporting_metric_reader.h"
#include "opentelemetry/sdk/metrics/export/periodic_exporting_metric_reader_factory.h"
#include "opentelemetry/sdk/metrics/meter_provider.h"
#include "opentelemetry/sdk/metrics/meter_provider_factory.h"
#include "opentelemetry/metrics/provider.h"

#include "opentelemetry/exporters/ostream/log_record_exporter_factory.h"
#include "opentelemetry/logs/provider.h"
#include "opentelemetry/sdk/logs/logger_provider_factory.h"
#include "opentelemetry/sdk/logs/processor.h"
#include "opentelemetry/sdk/logs/simple_log_record_processor_factory.h"

namespace trace_api      = opentelemetry::trace;
namespace trace_sdk      = opentelemetry::sdk::trace;
namespace trace_exporter = opentelemetry::exporter::trace;

namespace metrics_sdk      = opentelemetry::sdk::metrics;
namespace metrics_api      = opentelemetry::metrics;
namespace metrics_exporter = opentelemetry::exporter::metrics;

namespace logs_api      = opentelemetry::logs;
namespace logs_sdk      = opentelemetry::sdk::logs;
namespace logs_exporter = opentelemetry::exporter::logs;

void InitTracer()
{
  auto exporter  = trace_exporter::OStreamSpanExporterFactory::Create();
  auto processor = trace_sdk::SimpleSpanProcessorFactory::Create(std::move(exporter));
  std::shared_ptr<opentelemetry::trace::TracerProvider> provider =
      trace_sdk::TracerProviderFactory::Create(std::move(processor));
  trace_api::Provider::SetTracerProvider(provider);
}

void InitMetrics()
{
  auto exporter = metrics_exporter::OStreamMetricExporterFactory::Create();

  // The OStream metric exporter is push-based, so wrap it in a periodic
  // exporting reader before registering it with the provider.
  metrics_sdk::PeriodicExportingMetricReaderOptions reader_options;
  reader_options.export_interval_millis = std::chrono::milliseconds(1000);
  reader_options.export_timeout_millis  = std::chrono::milliseconds(500);
  auto reader =
      metrics_sdk::PeriodicExportingMetricReaderFactory::Create(std::move(exporter), reader_options);

  auto u_provider = metrics_sdk::MeterProviderFactory::Create();
  auto *p = static_cast<metrics_sdk::MeterProvider *>(u_provider.get());
  p->AddMetricReader(std::move(reader));

  std::shared_ptr<opentelemetry::metrics::MeterProvider> provider(std::move(u_provider));
  metrics_api::Provider::SetMeterProvider(provider);
}

void InitLogger()
{
  auto exporter  = logs_exporter::OStreamLogRecordExporterFactory::Create();
  auto processor = logs_sdk::SimpleLogRecordProcessorFactory::Create(std::move(exporter));
  std::shared_ptr<logs_api::LoggerProvider> provider =
      logs_sdk::LoggerProviderFactory::Create(std::move(processor));
  logs_api::Provider::SetLoggerProvider(provider);
}

Jaeger

Jaeger natively supports OTLP to receive trace data. You can run Jaeger in a docker container with the UI accessible on port 16686 and OTLP enabled on ports 4317 and 4318:

docker run --rm \
  -e COLLECTOR_ZIPKIN_HOST_PORT=:9411 \
  -p 16686:16686 \
  -p 4317:4317 \
  -p 4318:4318 \
  -p 9411:9411 \
  jaegertracing/all-in-one:latest

Now follow the instructions to set up the OTLP exporters.
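For example, reusing the OTLP HTTP trace exporter from the OTLP section, only the endpoint needs to point at Jaeger; the URL below assumes the default local ports from the command above:

opentelemetry::exporter::otlp::OtlpHttpExporterOptions opts;
// Jaeger's OTLP/HTTP receiver when running the all-in-one image locally
opts.url = "http://localhost:4318/v1/traces";
auto exporter = opentelemetry::exporter::otlp::OtlpHttpExporterFactory::Create(opts);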

Prometheus

To send your metric data to Prometheus, you can either enable Prometheus' OTLP Receiver and use the OTLP exporter, or you can use the PrometheusHttpServer, a MetricReader that starts an HTTP server which collects metrics and serializes them to Prometheus text format on request.

Backend Setup

Note
If you have Prometheus or a Prometheus-compatible backend already set up, you can skip this section and set up the Prometheus or OTLP exporter dependencies for your application.

You can run Prometheus in a docker container, accessible on port 9090 by following these instructions:

Create a file called prometheus.yml with the following content:

scrape_configs:
  - job_name: dice-service
    scrape_interval: 5s
    static_configs:
      - targets: [host.docker.internal:9464]

Run Prometheus in a docker container with the UI accessible on port 9090:
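docker run --rm \
  -v ${PWD}/prometheus.yml:/etc/prometheus/prometheus.yml \
  -p 9090:9090 \
  prom/prometheus --enable-feature=otlp-write-receiver

This invocation is a sketch: it assumes prometheus.yml sits in the current directory, and the --enable-feature=otlp-write-receiver flag is only needed if you want to use Prometheus' OTLP Receiver.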

Note
When using Prometheus’ OTLP Receiver, make sure that you set the OTLP endpoint for metrics in your application to http://localhost:9090/api/v1/otlp.
Not all docker environments support host.docker.internal. In some cases you may need to replace host.docker.internal with localhost or the IP address of your machine.
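With the OTLP HTTP metric exporter shown earlier, that amounts to something like the sketch below; the full URL path is an assumption based on Prometheus' default OTLP receiver route:

opentelemetry::exporter::otlp::OtlpHttpMetricExporterOptions opts;
// Prometheus' OTLP receiver endpoint (assumed default path for the local instance above)
opts.url = "http://localhost:9090/api/v1/otlp/v1/metrics";
auto exporter = opentelemetry::exporter::otlp::OtlpHttpMetricExporterFactory::Create(opts);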

Dependencies

To send your metric data to Prometheus, make sure that you have set the right cmake build variables while building OpenTelemetry C++ from source:

cmake -DWITH_PROMETHEUS=ON ...

Update your OpenTelemetry configuration to use the Prometheus Exporter:

#include "opentelemetry/exporters/prometheus/exporter_factory.h"
#include "opentelemetry/exporters/prometheus/exporter_options.h"
#include "opentelemetry/metrics/provider.h"
#include "opentelemetry/sdk/metrics/meter_provider.h"
#include "opentelemetry/sdk/metrics/meter_provider_factory.h"

namespace metrics_sdk      = opentelemetry::sdk::metrics;
namespace metrics_api      = opentelemetry::metrics;
namespace metrics_exporter = opentelemetry::exporter::metrics;

void InitMetrics()
{
  metrics_exporter::PrometheusExporterOptions opts;
  opts.url = "localhost:9464";
  auto prometheus_exporter = metrics_exporter::PrometheusExporterFactory::Create(opts);

  auto u_provider = metrics_sdk::MeterProviderFactory::Create();
  auto *p = static_cast<metrics_sdk::MeterProvider *>(u_provider.get());
  p->AddMetricReader(std::move(prometheus_exporter));

  std::shared_ptr<metrics_api::MeterProvider> provider(std::move(u_provider));
  metrics_api::Provider::SetMeterProvider(provider);
}

With the above you can access your metrics at http://localhost:9464/metrics. Prometheus or an OpenTelemetry Collector with the Prometheus receiver can scrape the metrics from this endpoint.
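Note that metrics only show up once the application records something through the registered provider; a minimal sketch follows (the meter name, version, and counter name are illustrative):

// Record a value so that something appears on the /metrics endpoint.
auto provider = opentelemetry::metrics::Provider::GetMeterProvider();
auto meter    = provider->GetMeter("dice-service", "1.0");
auto counter  = meter->CreateUInt64Counter("dice.rolls");
counter->Add(1);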

Zipkin

Backend Setup

Note
If you have Zipkin or a Zipkin-compatible backend already set up, you can skip this section and set up the Zipkin exporter dependencies for your application.

You can run Zipkin in a Docker container by executing the following command:

docker run --rm -d -p 9411:9411 --name zipkin openzipkin/zipkin

Dependencies

To send your trace data to Zipkin, make sure that you have set the right cmake build variables while building OpenTelemetry C++ from source:

cmake -DWITH_ZIPKIN=ON ...

Update your OpenTelemetry configuration to use the Zipkin Exporter and to send data to your Zipkin backend:

#include "opentelemetry/exporters/zipkin/zipkin_exporter_factory.h"
#include "opentelemetry/sdk/resource/resource.h"
#include "opentelemetry/sdk/trace/processor.h"
#include "opentelemetry/sdk/trace/simple_processor_factory.h"
#include "opentelemetry/sdk/trace/tracer_provider_factory.h"
#include "opentelemetry/trace/provider.h"

namespace trace     = opentelemetry::trace;
namespace trace_sdk = opentelemetry::sdk::trace;
namespace zipkin    = opentelemetry::exporter::zipkin;
namespace resource  = opentelemetry::sdk::resource;

void InitTracer()
{
  zipkin::ZipkinExporterOptions opts;
  resource::ResourceAttributes attributes = {{"service.name", "zipkin_demo_service"}};
  auto resource  = resource::Resource::Create(attributes);
  auto exporter  = zipkin::ZipkinExporterFactory::Create(opts);
  auto processor = trace_sdk::SimpleSpanProcessorFactory::Create(std::move(exporter));
  std::shared_ptr<opentelemetry::trace::TracerProvider> provider =
      trace_sdk::TracerProviderFactory::Create(std::move(processor), resource);
  // Set the global trace provider
  trace::Provider::SetTracerProvider(provider);
}

Other available exporters

There are many other exporters available. For a list of available exporters, see the registry.

Finally, you can also write your own exporter. For more information, see the SpanExporter Interface in the API documentation.
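As a rough illustration of what such an exporter involves, here is a minimal sketch modeled on the bundled OStreamSpanExporter. The class and variable names are made up, and exact virtual-function signatures can differ between SDK versions, so treat it as a starting point rather than a reference implementation:

// Hypothetical custom exporter, sketched after the OStream exporter (illustrative only).
#include <chrono>
#include <iostream>
#include <memory>
#include <string>

#include "opentelemetry/sdk/trace/exporter.h"
#include "opentelemetry/sdk/trace/span_data.h"

class MySpanExporter final : public opentelemetry::sdk::trace::SpanExporter
{
public:
  std::unique_ptr<opentelemetry::sdk::trace::Recordable> MakeRecordable() noexcept override
  {
    return std::unique_ptr<opentelemetry::sdk::trace::Recordable>(
        new opentelemetry::sdk::trace::SpanData());
  }

  opentelemetry::sdk::common::ExportResult Export(
      const opentelemetry::nostd::span<std::unique_ptr<opentelemetry::sdk::trace::Recordable>>
          &spans) noexcept override
  {
    for (auto &recordable : spans)
    {
      auto span = std::unique_ptr<opentelemetry::sdk::trace::SpanData>(
          static_cast<opentelemetry::sdk::trace::SpanData *>(recordable.release()));
      // This is where a real exporter would hand the span off to its backend.
      std::cout << "exporting span: "
                << std::string(span->GetName().data(), span->GetName().size()) << "\n";
    }
    return opentelemetry::sdk::common::ExportResult::kSuccess;
  }

  bool Shutdown(std::chrono::microseconds timeout =
                    std::chrono::microseconds::max()) noexcept override
  {
    return true;  // Nothing to flush or tear down in this sketch.
  }
};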

Batching span and log records

The OpenTelemetry SDK provides a set of default span and log record processors that allow you to emit spans either one by one ("simple") or batched. Using batching is recommended, but if you do not want to batch your spans or log records, you can use a simple processor instead, as follows:

// Batch
#include "opentelemetry/exporters/otlp/otlp_http_exporter_factory.h"
#include "opentelemetry/exporters/otlp/otlp_http_exporter_options.h"
#include "opentelemetry/sdk/trace/processor.h"
#include "opentelemetry/sdk/trace/batch_span_processor_factory.h"
#include "opentelemetry/sdk/trace/batch_span_processor_options.h"

opentelemetry::sdk::trace::BatchSpanProcessorOptions options{};
auto exporter  = opentelemetry::exporter::otlp::OtlpHttpExporterFactory::Create(opts);
auto processor = opentelemetry::sdk::trace::BatchSpanProcessorFactory::Create(std::move(exporter), options);
// Simple
#include "opentelemetry/exporters/otlp/otlp_http_exporter_factory.h"
#include "opentelemetry/exporters/otlp/otlp_http_exporter_options.h"
#include "opentelemetry/sdk/trace/processor.h"
#include "opentelemetry/sdk/trace/simple_processor_factory.h"

auto exporter  = opentelemetry::exporter::otlp::OtlpHttpExporterFactory::Create(opts);
auto processor = opentelemetry::sdk::trace::SimpleSpanProcessorFactory::Create(std::move(exporter));
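The log pipeline has analogous factories; below is a sketch of the batching variant for log records, assuming the OTLP HTTP log record exporter used earlier (variable names are illustrative):

// Batch (log records)
#include "opentelemetry/exporters/otlp/otlp_http_log_record_exporter_factory.h"
#include "opentelemetry/sdk/logs/batch_log_record_processor_factory.h"
#include "opentelemetry/sdk/logs/batch_log_record_processor_options.h"

opentelemetry::sdk::logs::BatchLogRecordProcessorOptions log_options{};
auto log_exporter  = opentelemetry::exporter::otlp::OtlpHttpLogRecordExporterFactory::Create();
auto log_processor = opentelemetry::sdk::logs::BatchLogRecordProcessorFactory::Create(std::move(log_exporter), log_options);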

That wraps up this article on Language APIs SDKs-C++-Exporters. We hope it is helpful to fellow programmers!


