A Roundup of Remote Sensing Multimodal Foundation Models (Continuously Updated)

2024-08-29 23:52

This post collects remote sensing multimodal foundation models and related resources, and is updated continuously.

The content below comes from the repository linked here; since many readers cannot access it directly, it is mirrored on this platform.
Remote Sensing Foundation Models

Table of Contents

  • Models
  • Remote Sensing Vision Foundation Models
  • Remote Sensing Vision-Language Foundation Models
  • Remote Sensing Generative Foundation Models
  • Remote Sensing Vision-Location Foundation Models
  • Remote Sensing Vision-Audio Foundation Models
  • Remote Sensing Task-specific Foundation Models
  • Remote Sensing Agents
  • Datasets & Benchmarks
  • Benchmarks for RSFMs
  • (Large-scale) Pre-training Datasets
  • Others
  • Relevant Projects
  • Survey Papers

Remote Sensing Vision Foundation Models

| Abbreviation | Title | Publication | Paper | Code & Weights |
| --- | --- | --- | --- | --- |
| GeoKR | Geographical Knowledge-Driven Representation Learning for Remote Sensing Images | TGRS2021 | GeoKR | link |
| - | Self-Supervised Learning of Remote Sensing Scene Representations Using Contrastive Multiview Coding | CVPRW2021 | Paper | link |
| GASSL | Geography-Aware Self-Supervised Learning | ICCV2021 | GASSL | link |
| SeCo | Seasonal Contrast: Unsupervised Pre-Training From Uncurated Remote Sensing Data | ICCV2021 | SeCo | link |
| DINO-MM | Self-supervised Vision Transformers for Joint SAR-optical Representation Learning | IGARSS2022 | DINO-MM | link |
| SatMAE | SatMAE: Pre-training Transformers for Temporal and Multi-Spectral Satellite Imagery | NeurIPS2022 | SatMAE | link |
| RS-BYOL | Self-Supervised Learning for Invariant Representations From Multi-Spectral and SAR Images | JSTARS2022 | RS-BYOL | null |
| GeCo | Geographical Supervision Correction for Remote Sensing Representation Learning | TGRS2022 | GeCo | null |
| RingMo | RingMo: A remote sensing foundation model with masked image modeling | TGRS2022 | RingMo | Code |
| RVSA | Advancing plain vision transformer toward remote sensing foundation model | TGRS2022 | RVSA | link |
| RSP | An Empirical Study of Remote Sensing Pretraining | TGRS2022 | RSP | link |
| MATTER | Self-Supervised Material and Texture Representation Learning for Remote Sensing Tasks | CVPR2022 | MATTER | null |
| CSPT | Consecutive Pre-Training: A Knowledge Transfer Learning Strategy with Relevant Unlabeled Data for Remote Sensing Domain | RS2022 | CSPT | link |
| - | Self-supervised Vision Transformers for Land-cover Segmentation and Classification | CVPRW2022 | Paper | link |
| BFM | A billion-scale foundation model for remote sensing images | Arxiv2023 | BFM | null |
| TOV | TOV: The original vision model for optical remote sensing image understanding via self-supervised learning | JSTARS2023 | TOV | link |
| CMID | CMID: A Unified Self-Supervised Learning Framework for Remote Sensing Image Understanding | TGRS2023 | CMID | link |
| RingMo-Sense | RingMo-Sense: Remote Sensing Foundation Model for Spatiotemporal Prediction via Spatiotemporal Evolution Disentangling | TGRS2023 | RingMo-Sense | null |
| IaI-SimCLR | Multi-Modal Multi-Objective Contrastive Learning for Sentinel-1/2 Imagery | CVPRW2023 | IaI-SimCLR | null |
| CACo | Change-Aware Sampling and Contrastive Learning for Satellite Images | CVPR2023 | CACo | link |
| SatLas | SatlasPretrain: A Large-Scale Dataset for Remote Sensing Image Understanding | ICCV2023 | SatLas | link |
| GFM | Towards Geospatial Foundation Models via Continual Pretraining | ICCV2023 | GFM | link |
| Scale-MAE | Scale-MAE: A Scale-Aware Masked Autoencoder for Multiscale Geospatial Representation Learning | ICCV2023 | Scale-MAE | link |
| DINO-MC | DINO-MC: Self-supervised Contrastive Learning for Remote Sensing Imagery with Multi-sized Local Crops | Arxiv2023 | DINO-MC | link |
| CROMA | CROMA: Remote Sensing Representations with Contrastive Radar-Optical Masked Autoencoders | NeurIPS2023 | CROMA | link |
| Cross-Scale MAE | Cross-Scale MAE: A Tale of Multiscale Exploitation in Remote Sensing | NeurIPS2023 | Cross-Scale MAE | link |
| DeCUR | DeCUR: decoupling common & unique representations for multimodal self-supervision | Arxiv2023 | DeCUR | link |
| Presto | Lightweight, Pre-trained Transformers for Remote Sensing Timeseries | Arxiv2023 | Presto | link |
| CtxMIM | CtxMIM: Context-Enhanced Masked Image Modeling for Remote Sensing Image Understanding | Arxiv2023 | CtxMIM | null |
| FG-MAE | Feature Guided Masked Autoencoder for Self-supervised Learning in Remote Sensing | Arxiv2023 | FG-MAE | link |
| Prithvi | Foundation Models for Generalist Geospatial Artificial Intelligence | Arxiv2023 | Prithvi | link |
| RingMo-lite | RingMo-lite: A Remote Sensing Multi-task Lightweight Network with CNN-Transformer Hybrid Framework | Arxiv2023 | RingMo-lite | null |
| - | A Self-Supervised Cross-Modal Remote Sensing Foundation Model with Multi-Domain Representation and Cross-Domain Fusion | IGARSS2023 | Paper | null |
| EarthPT | EarthPT: a foundation model for Earth Observation | NeurIPS2023 CCAI workshop | EarthPT | link |
| USat | USat: A Unified Self-Supervised Encoder for Multi-Sensor Satellite Imagery | Arxiv2023 | USat | link |
| FoMo-Bench | FoMo-Bench: a multi-modal, multi-scale and multi-task Forest Monitoring Benchmark for remote sensing foundation models | Arxiv2023 | FoMo-Bench | link |
| AIEarth | Analytical Insight of Earth: A Cloud-Platform of Intelligent Computing for Geospatial Big Data | Arxiv2023 | AIEarth | link |
| - | Self-Supervised Learning for SAR ATR with a Knowledge-Guided Predictive Architecture | Arxiv2023 | Paper | link |
| Clay | Clay Foundation Model | - | null | link |
| Hydro | Hydro–A Foundation Model for Water in Satellite Imagery | - | null | link |
| U-BARN | Self-Supervised Spatio-Temporal Representation Learning of Satellite Image Time Series | JSTARS2024 | Paper | link |
| GeRSP | Generic Knowledge Boosted Pre-training For Remote Sensing Images | Arxiv2024 | GeRSP | GeRSP |
| SwiMDiff | SwiMDiff: Scene-wide Matching Contrastive Learning with Diffusion Constraint for Remote Sensing Image | Arxiv2024 | SwiMDiff | null |
| OFA-Net | One for All: Toward Unified Foundation Models for Earth Vision | Arxiv2024 | OFA-Net | null |
| SMLFR | Generative ConvNet Foundation Model With Sparse Modeling and Low-Frequency Reconstruction for Remote Sensing Image Interpretation | TGRS2024 | SMLFR | link |
| SpectralGPT | SpectralGPT: Spectral Foundation Model | TPAMI2024 | SpectralGPT | link |
| S2MAE | S2MAE: A Spatial-Spectral Pretraining Foundation Model for Spectral Remote Sensing Data | CVPR2024 | S2MAE | null |
| SatMAE++ | Rethinking Transformers Pre-training for Multi-Spectral Satellite Imagery | CVPR2024 | SatMAE++ | link |
| msGFM | Bridging Remote Sensors with Multisensor Geospatial Foundation Models | CVPR2024 | msGFM | link |
| SkySense | SkySense: A Multi-Modal Remote Sensing Foundation Model Towards Universal Interpretation for Earth Observation Imagery | CVPR2024 | SkySense | Coming soon |
| MTP | MTP: Advancing Remote Sensing Foundation Model via Multi-Task Pretraining | Arxiv2024 | MTP | link |
| DOFA | Neural Plasticity-Inspired Foundation Model for Observing the Earth Crossing Modalities | Arxiv2024 | DOFA | link |
| PIS | Pretrain A Remote Sensing Foundation Model by Promoting Intra-instance Similarity | - | null | link |
| MMEarth | MMEarth: Exploring Multi-Modal Pretext Tasks For Geospatial Representation Learning | Arxiv2024 | MMEarth | link |
| SARATR-X | SARATR-X: A Foundation Model for Synthetic Aperture Radar Images Target Recognition | Arxiv2024 | SARATR-X | link |
| LeMeViT | LeMeViT: Efficient Vision Transformer with Learnable Meta Tokens for Remote Sensing Image Interpretation | IJCAI2024 | LeMeViT | link |
| SoftCon | Multi-Label Guided Soft Contrastive Learning for Efficient Earth Observation Pretraining | Arxiv2024 | SoftCon | link |
| RS-DFM | RS-DFM: A Remote Sensing Distributed Foundation Model for Diverse Downstream Tasks | Arxiv2024 | RS-DFM | null |
| A2-MAE | A2-MAE: A spatial-temporal-spectral unified remote sensing pre-training method based on anchor-aware masked autoencoder | Arxiv2024 | A2-MAE | null |
| HyperSIGMA | HyperSIGMA: Hyperspectral Intelligence Comprehension Foundation Model | Arxiv2024 | HyperSIGMA | link |
| SelectiveMAE | Scaling Efficient Masked Autoencoder Learning on Large Remote Sensing Dataset | Arxiv2024 | SelectiveMAE | link |
| OmniSat | OmniSat: Self-Supervised Modality Fusion for Earth Observation | ECCV2024 | OmniSat | link |
| MM-VSF | Towards a Knowledge guided Multimodal Foundation Model for Spatio-Temporal Remote Sensing Applications | Arxiv2024 | MM-VSF | null |
| MA3E | Masked Angle-Aware Autoencoder for Remote Sensing Images | ECCV2024 | MA3E | link |
| SpectralEarth | SpectralEarth: Training Hyperspectral Foundation Models at Scale | Arxiv2024 | SpectralEarth | null |
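
Many of the vision models above (e.g. SatMAE, Scale-MAE, RingMo, SatMAE++, SelectiveMAE) build on masked image modeling: most image patches are hidden, the encoder sees only the visible ones, and a decoder reconstructs the rest. Below is a minimal NumPy sketch of just the patch-masking step; the function name and the 75% mask ratio are illustrative choices, not taken from any specific paper.

```python
import numpy as np

def random_mask_patches(num_patches: int, mask_ratio: float, rng: np.random.Generator):
    """Choose which patch indices are kept vs. hidden, MAE-style.

    Returns (keep_idx, mask_idx): the encoder only processes `keep_idx`
    patches; the decoder reconstructs the pixels at `mask_idx`.
    """
    num_keep = int(num_patches * (1.0 - mask_ratio))
    perm = rng.permutation(num_patches)   # random shuffle of patch positions
    keep_idx = np.sort(perm[:num_keep])   # visible patches fed to the encoder
    mask_idx = np.sort(perm[num_keep:])   # hidden patches to be reconstructed
    return keep_idx, mask_idx

# 196 patches = a 224x224 image split into 16x16 patches (14 x 14 grid)
rng = np.random.default_rng(0)
keep, mask = random_mask_patches(num_patches=196, mask_ratio=0.75, rng=rng)
```

The high mask ratio is what makes this style of pretraining affordable at remote sensing scale: the encoder processes only about a quarter of the tokens per image.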

Remote Sensing Vision-Language Foundation Models

| Abbreviation | Title | Publication | Paper | Code & Weights |
| --- | --- | --- | --- | --- |
| RSGPT | RSGPT: A Remote Sensing Vision Language Model and Benchmark | Arxiv2023 | RSGPT | link |
| RemoteCLIP | RemoteCLIP: A Vision Language Foundation Model for Remote Sensing | Arxiv2023 | RemoteCLIP | link |
| GeoRSCLIP | RS5M: A Large Scale Vision-Language Dataset for Remote Sensing Vision-Language Foundation Model | Arxiv2023 | GeoRSCLIP | link |
| GRAFT | Remote Sensing Vision-Language Foundation Models without Annotations via Ground Remote Alignment | ICLR2024 | GRAFT | null |
| - | Charting New Territories: Exploring the Geographic and Geospatial Capabilities of Multimodal LLMs | Arxiv2023 | Paper | link |
| - | Remote Sensing ChatGPT: Solving Remote Sensing Tasks with ChatGPT and Visual Models | Arxiv2024 | Paper | link |
| SkyEyeGPT | SkyEyeGPT: Unifying Remote Sensing Vision-Language Tasks via Instruction Tuning with Large Language Model | Arxiv2024 | Paper | link |
| EarthGPT | EarthGPT: A Universal Multi-modal Large Language Model for Multi-sensor Image Comprehension in Remote Sensing Domain | Arxiv2024 | Paper | null |
| SkyCLIP | SkyScript: A Large and Semantically Diverse Vision-Language Dataset for Remote Sensing | AAAI2024 | SkyCLIP | link |
| GeoChat | GeoChat: Grounded Large Vision-Language Model for Remote Sensing | CVPR2024 | GeoChat | link |
| LHRS-Bot | LHRS-Bot: Empowering Remote Sensing with VGI-Enhanced Large Multimodal Language Model | Arxiv2024 | Paper | link |
| H2RSVLM | H2RSVLM: Towards Helpful and Honest Remote Sensing Large Vision Language Model | Arxiv2024 | Paper | link |
| RS-LLaVA | RS-LLaVA: Large Vision Language Model for Joint Captioning and Question Answering in Remote Sensing Imagery | RS2024 | Paper | link |
| SkySenseGPT | SkySenseGPT: A Fine-Grained Instruction Tuning Dataset and Model for Remote Sensing Vision-Language Understanding | Arxiv2024 | Paper | link |
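
The CLIP-style models in this table (RemoteCLIP, GeoRSCLIP, SkyCLIP) are trained with a symmetric contrastive objective that aligns image and caption embeddings. Below is a minimal NumPy sketch of that loss; the function name and temperature value are illustrative, and this is not the implementation of any particular model.

```python
import numpy as np

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings.

    Matching pairs sit on the diagonal of the similarity matrix; the loss
    pulls each image toward its own caption and pushes it away from others.
    """
    # L2-normalize so dot products become cosine similarities
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature      # (batch, batch) similarity matrix
    labels = np.arange(len(img))            # i-th image matches i-th caption

    def cross_entropy(l):
        # row-wise softmax cross-entropy against the diagonal labels
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # average the image->text and text->image directions
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

With perfectly aligned pairs the loss approaches zero; shuffling the captions against the images drives it up, which is the signal the encoders learn from.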

Remote Sensing Generative Foundation Models

| Abbreviation | Title | Publication | Paper | Code & Weights |
| --- | --- | --- | --- | --- |
| Seg2Sat | Seg2Sat - Segmentation to aerial view using pretrained diffuser models | Github | null | link |
| - | Generate Your Own Scotland: Satellite Image Generation Conditioned on Maps | NeurIPSW2023 | Paper | link |
| GeoRSSD | RS5M: A Large Scale Vision-Language Dataset for Remote Sensing Vision-Language Foundation Model | Arxiv2023 | Paper | link |
| DiffusionSat | DiffusionSat: A Generative Foundation Model for Satellite Imagery | ICLR2024 | DiffusionSat | link |
| CRS-Diff | CRS-Diff: Controllable Generative Remote Sensing Foundation Model | Arxiv2024 | Paper | null |
| MetaEarth | MetaEarth: A Generative Foundation Model for Global-Scale Remote Sensing Image Generation | Arxiv2024 | Paper | link |

Remote Sensing Vision-Location Foundation Models

| Abbreviation | Title | Publication | Paper | Code & Weights |
| --- | --- | --- | --- | --- |
| CSP | CSP: Self-Supervised Contrastive Spatial Pre-Training for Geospatial-Visual Representations | ICML2023 | CSP | link |
| GeoCLIP | GeoCLIP: Clip-Inspired Alignment between Locations and Images for Effective Worldwide Geo-localization | NeurIPS2023 | GeoCLIP | link |
| SatCLIP | SatCLIP: Global, General-Purpose Location Embeddings with Satellite Imagery | Arxiv2023 | SatCLIP | link |
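
These models pair imagery with geographic coordinates, which requires a learnable location encoder. A common ingredient (used in spirit by GeoCLIP's random Fourier feature encoder) is mapping latitude/longitude to periodic sin/cos features before any MLP, since raw coordinates are a poor input representation. A minimal sketch; the frequency schedule and feature layout here are illustrative assumptions, not taken from any of the papers above.

```python
import numpy as np

def fourier_location_features(lat_deg: float, lon_deg: float, num_freqs: int = 4):
    """Encode a (lat, lon) pair as sin/cos features at several frequencies.

    Each frequency doubles the previous one, so low frequencies capture
    coarse position and high frequencies capture fine-grained location.
    """
    lat = np.radians(lat_deg)
    lon = np.radians(lon_deg)
    feats = []
    for k in range(num_freqs):
        f = 2.0 ** k  # geometric frequency schedule: 1, 2, 4, 8, ...
        feats += [np.sin(f * lat), np.cos(f * lat),
                  np.sin(f * lon), np.cos(f * lon)]
    return np.array(feats)

# Example: Wuhan, roughly (30.5 N, 114.3 E) -> a 16-dim feature vector
vec = fourier_location_features(30.5, 114.3)
```

The resulting bounded, periodic vector is what a small MLP would consume to produce the final location embedding.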

Remote Sensing Vision-Audio Foundation Models

| Abbreviation | Title | Publication | Paper | Code & Weights |
| --- | --- | --- | --- | --- |
| - | Self-supervised audiovisual representation learning for remote sensing data | JAG2022 | Paper | link |

Remote Sensing Task-specific Foundation Models

| Abbreviation | Title | Publication | Paper | Code & Weights | Task |
| --- | --- | --- | --- | --- | --- |
| SS-MAE | SS-MAE: Spatial-Spectral Masked Auto-Encoder for Multi-Source Remote Sensing Image Classification | TGRS2023 | Paper | link | Image Classification |
| TTP | Time Travelling Pixels: Bitemporal Features Integration with Foundation Model for Remote Sensing Image Change Detection | Arxiv2023 | Paper | link | Change Detection |
| CSMAE | Exploring Masked Autoencoders for Sensor-Agnostic Image Retrieval in Remote Sensing | Arxiv2024 | Paper | link | Image Retrieval |
| RSPrompter | RSPrompter: Learning to Prompt for Remote Sensing Instance Segmentation based on Visual Foundation Model | TGRS2024 | Paper | link | Instance Segmentation |
| BAN | A New Learning Paradigm for Foundation Model-based Remote Sensing Change Detection | TGRS2024 | Paper | link | Change Detection |
| - | Change Detection Between Optical Remote Sensing Imagery and Map Data via Segment Anything Model (SAM) | Arxiv2024 | Paper | null | Change Detection (Optical & OSM data) |
| AnyChange | Segment Any Change | Arxiv2024 | Paper | null | Zero-shot Change Detection |
| RS-CapRet | Large Language Models for Captioning and Retrieving Remote Sensing Images | Arxiv2024 | Paper | null | Image Caption & Text-image Retrieval |
| - | Task Specific Pretraining with Noisy Labels for Remote Sensing Image Segmentation | Arxiv2024 | Paper | null | Image Segmentation (Noisy labels) |
| RSBuilding | RSBuilding: Towards General Remote Sensing Image Building Extraction and Change Detection with Foundation Model | Arxiv2024 | Paper | link | Building Extraction and Change Detection |
| SAM-Road | Segment Anything Model for Road Network Graph Extraction | Arxiv2024 | Paper | link | Road Extraction |

Remote Sensing Agents

| Abbreviation | Title | Publication | Paper | Code & Weights |
| --- | --- | --- | --- | --- |
| GeoLLM-QA | Evaluating Tool-Augmented Agents in Remote Sensing Platforms | ICLR 2024 ML4RS Workshop | Paper | null |
| RS-Agent | RS-Agent: Automating Remote Sensing Tasks through Intelligent Agents | Arxiv2024 | Paper | null |

Benchmarks for RSFMs

| Abbreviation | Title | Publication | Paper | Link | Downstream Tasks |
| --- | --- | --- | --- | --- | --- |
| - | Revisiting pre-trained remote sensing model benchmarks: resizing and normalization matters | Arxiv2023 | Paper | link | Classification |
| GEO-Bench | GEO-Bench: Toward Foundation Models for Earth Monitoring | Arxiv2023 | Paper | link | Classification & Segmentation |
| FoMo-Bench | FoMo-Bench: a multi-modal, multi-scale and multi-task Forest Monitoring Benchmark for remote sensing foundation models | Arxiv2023 | FoMo-Bench | Coming soon | Classification & Segmentation & Detection for forest monitoring |
| PhilEO | PhilEO Bench: Evaluating Geo-Spatial Foundation Models | Arxiv2024 | Paper | link | Segmentation & Regression estimation |
| SkySense | SkySense: A Multi-Modal Remote Sensing Foundation Model Towards Universal Interpretation for Earth Observation Imagery | CVPR2024 | SkySense | Coming soon | Classification & Segmentation & Detection & Change detection & Multi-Modal Segmentation (Time-insensitive LandCover Mapping; Time-sensitive Crop Mapping) & Multi-Modal Scene Classification |
| VLEO-Bench | Good at captioning, bad at counting: Benchmarking GPT-4V on Earth observation data | Arxiv2024 | VLEO-bench | link | Location Recognition & Captioning & Scene Classification & Counting & Detection & Change detection |
| VRSBench | VRSBench: A Versatile Vision-Language Benchmark Dataset for Remote Sensing Image Understanding | Arxiv2024 | VRSBench | link | Image Captioning & Object Referring & Visual Question Answering |

(Large-scale) Pre-training Datasets

| Abbreviation | Title | Publication | Paper | Attribute | Link |
| --- | --- | --- | --- | --- | --- |
| fMoW | Functional Map of the World | CVPR2018 | fMoW | Vision | link |
| SEN12MS | SEN12MS – A Curated Dataset of Georeferenced Multi-Spectral Sentinel-1/2 Imagery for Deep Learning and Data Fusion | - | SEN12MS | Vision | link |
| BEN-MM | BigEarthNet-MM: A Large Scale Multi-Modal Multi-Label Benchmark Archive for Remote Sensing Image Classification and Retrieval | GRSM2021 | BEN-MM | Vision | link |
| MillionAID | On Creating Benchmark Dataset for Aerial Image Interpretation: Reviews, Guidances, and Million-AID | JSTARS2021 | MillionAID | Vision | link |
| SeCo | Seasonal Contrast: Unsupervised Pre-Training From Uncurated Remote Sensing Data | ICCV2021 | SeCo | Vision | link |
| fMoW-S2 | SatMAE: Pre-training Transformers for Temporal and Multi-Spectral Satellite Imagery | NeurIPS2022 | fMoW-S2 | Vision | link |
| TOV-RS-Balanced | TOV: The original vision model for optical remote sensing image understanding via self-supervised learning | JSTARS2023 | TOV | Vision | link |
| SSL4EO-S12 | SSL4EO-S12: A Large-Scale Multi-Modal, Multi-Temporal Dataset for Self-Supervised Learning in Earth Observation | GRSM2023 | SSL4EO-S12 | Vision | link |
| SSL4EO-L | SSL4EO-L: Datasets and Foundation Models for Landsat Imagery | Arxiv2023 | SSL4EO-L | Vision | link |
| SatlasPretrain | SatlasPretrain: A Large-Scale Dataset for Remote Sensing Image Understanding | ICCV2023 | SatlasPretrain | Vision (Supervised) | link |
| CACo | Change-Aware Sampling and Contrastive Learning for Satellite Images | CVPR2023 | CACo | Vision | Coming soon |
| SAMRS | SAMRS: Scaling-up Remote Sensing Segmentation Dataset with Segment Anything Model | NeurIPS2023 | SAMRS | Vision | link |
| RSVG | RSVG: Exploring Data and Models for Visual Grounding on Remote Sensing Data | TGRS2023 | RSVG | Vision-Language | link |
| RS5M | RS5M: A Large Scale Vision-Language Dataset for Remote Sensing Vision-Language Foundation Model | Arxiv2023 | RS5M | Vision-Language | link |
| GEO-Bench | GEO-Bench: Toward Foundation Models for Earth Monitoring | Arxiv2023 | GEO-Bench | Vision (Evaluation) | link |
| RSICap & RSIEval | RSGPT: A Remote Sensing Vision Language Model and Benchmark | Arxiv2023 | RSGPT | Vision-Language | Coming soon |
| Clay | Clay Foundation Model | - | null | Vision | link |
| SATIN | SATIN: A Multi-Task Metadataset for Classifying Satellite Imagery using Vision-Language Models | ICCVW2023 | SATIN | Vision-Language | link |
| SkyScript | SkyScript: A Large and Semantically Diverse Vision-Language Dataset for Remote Sensing | AAAI2024 | SkyScript | Vision-Language | link |
| ChatEarthNet | ChatEarthNet: A Global-Scale, High-Quality Image-Text Dataset for Remote Sensing | Arxiv2024 | ChatEarthNet | Vision-Language | link |
| LuoJiaHOG | LuoJiaHOG: A Hierarchy Oriented Geo-aware Image Caption Dataset for Remote Sensing Image-Text Retrieval | Arxiv2024 | LuoJiaHOG | Vision-Language | null |
| MMEarth | MMEarth: Exploring Multi-Modal Pretext Tasks For Geospatial Representation Learning | Arxiv2024 | MMEarth | Vision | link |
| SeeFar | SeeFar: Satellite Agnostic Multi-Resolution Dataset for Geospatial Foundation Models | Arxiv2024 | SeeFar | Vision | link |
| FIT-RS | SkySenseGPT: A Fine-Grained Instruction Tuning Dataset and Model for Remote Sensing Vision-Language Understanding | Arxiv2024 | Paper | Vision-Language | link |
| RS-GPT4V | RS-GPT4V: A Unified Multimodal Instruction-Following Dataset for Remote Sensing Image Understanding | Arxiv2024 | Paper | Vision-Language | link |
| RS-4M | Scaling Efficient Masked Autoencoder Learning on Large Remote Sensing Dataset | Arxiv2024 | RS-4M | Vision | link |
| Major TOM | Major TOM: Expandable Datasets for Earth Observation | Arxiv2024 | Major TOM | Vision | link |
| VRSBench | VRSBench: A Versatile Vision-Language Benchmark Dataset for Remote Sensing Image Understanding | Arxiv2024 | VRSBench | Vision-Language | link |

Relevant Projects

(TODO. This section is dedicated to recommending more relevant and impactful projects, with the hope of promoting the development of the RS community. 😄 🚀)

| Title | Link | Brief Introduction |
| --- | --- | --- |
| RSFMs (Remote Sensing Foundation Models) Playground | link | An open-source playground to streamline the evaluation and fine-tuning of RSFMs on various datasets. |

Survey Papers

| Title | Publication | Paper | Attribute |
| --- | --- | --- | --- |
| Self-Supervised Remote Sensing Feature Learning: Learning Paradigms, Challenges, and Future Works | TGRS2023 | Paper | Vision & Vision-Language |
| The Potential of Visual ChatGPT For Remote Sensing | Arxiv2023 | Paper | Vision-Language |
| Large Models in Remote Sensing: Progress and Prospects (遥感大模型:进展与前瞻) | Geomatics and Information Science of Wuhan University 2023 | Paper | Vision & Vision-Language |
| GeoAI Samples: Models, Quality, and Services (地理人工智能样本:模型、质量与服务) | Geomatics and Information Science of Wuhan University 2023 | Paper | - |
| Brain-Inspired Remote Sensing Foundation Models and Open Problems: A Comprehensive Survey | JSTARS2023 | Paper | Vision & Vision-Language |
| Revisiting pre-trained remote sensing model benchmarks: resizing and normalization matters | Arxiv2023 | Paper | Vision |
| An Agenda for Multimodal Foundation Models for Earth Observation | IGARSS2023 | Paper | Vision |
| Transfer learning in environmental remote sensing | RSE2024 | Paper | Transfer learning |
| A Review of Remote Sensing Foundation Models and Future Prospects (遥感基础模型发展综述与未来设想) | National Remote Sensing Bulletin 2023 | Paper | - |
| On the Promises and Challenges of Multimodal Foundation Models for Geographical, Environmental, Agricultural, and Urban Planning Applications | Arxiv2023 | Paper | Vision-Language |
| Vision-Language Models in Remote Sensing: Current Progress and Future Trends | IEEE GRSM2024 | Paper | Vision-Language |
| On the Foundations of Earth and Climate Foundation Models | Arxiv2024 | Paper | Vision & Vision-Language |
| Towards Vision-Language Geo-Foundation Model: A Survey | Arxiv2024 | Paper | Vision-Language |
| AI Foundation Models in Remote Sensing: A Survey | Arxiv2024 | Paper | Vision |

Citation

If you find this repository useful, please consider giving a star ⭐️ and citation:

@inproceedings{guo2024skysense,
  title={Skysense: A multi-modal remote sensing foundation model towards universal interpretation for earth observation imagery},
  author={Guo, Xin and Lao, Jiangwei and Dang, Bo and Zhang, Yingying and Yu, Lei and Ru, Lixiang and Zhong, Liheng and Huang, Ziyuan and Wu, Kang and Hu, Dingxiang and others},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={27672--27683},
  year={2024}
}



