Optimizing Full-Text Index Construction for Hundred-Million-Scale Data in Neo4j

2024-02-25 06:38



  • 1. Data Volume (Hundreds of Millions)
  • 2. How the Index Was Built
  • 3. Exception During Index Construction
  • 4. Optimizing the Full-Text Index Code
    • 4.1 java.lang.OutOfMemoryError
    • 4.2 Locks and Memory During Database Access
    • 4.3 Optimization Approach
    • 4.4 Optimized Code
    • 4.5 Execution Efficiency Test

If Neo4j full-text search is the main entry point into the knowledge graph, optimizing the graph's search engine is critical.

1. Data Volume (Hundreds of Millions)

count(relationships):500584016

count(nodes):765485810

2. How the Index Was Built

The index is built by running the build script in the background on the server:

index.sh:

    #!/usr/bin/env bash
    nohup /neo4j-community-3.4.9/bin/neo4j-shell -file build.cql >>indexGraph.log 2>&1 &

build.cql:

    CALL zdr.index.addChineseFulltextIndex('IKAnalyzer', ['description','fullname','name','lnkurl'], 'LinkedinID') YIELD message RETURN message;

3. Exception During Index Construction

ERROR (-v for expanded information): TransactionFailureException: The database has encountered a critical error, and needs to be restarted. Please see database logs for more details.

Caused by: java.lang.OutOfMemoryError: Java heap space | GB+Tree[file:/u02/isi/zdr/graph/neo4j-community-3.4.9/data/databases/graph.db/schema/index/lucene_native-2.0/134/string-1.0/index-134, layout:StringLayout[version:0.1, identifier:24016946018123776], generation:16587/16588]
    at org.neo4j.io.pagecache.impl.muninn.CursorFactory.takeWriteCursor(CursorFactory.java:62)
    at org.neo4j.io.pagecache.impl.muninn.MuninnPagedFile.io(MuninnPagedFile.java:186)
    at org.neo4j.index.internal.gbptree.FreeListIdProvider.releaseId(FreeListIdProvider.java:217)
    at org.neo4j.index.internal.gbptree.InternalTreeLogic.createSuccessorIfNeeded(InternalTreeLogic.java:1289)
    at org.neo4j.index.internal.gbptree.InternalTreeLogic.insertInLeaf(InternalTreeLogic.java:513)
    at org.neo4j.index.internal.gbptree.InternalTreeLogic.insert(InternalTreeLogic.java:356)
    at org.neo4j.index.internal.gbptree.GBPTree$SingleWriter.merge(GBPTree.java:1234)
    at org.neo4j.kernel.impl.index.schema.NativeSchemaIndexUpdater.processAdd(NativeSchemaIndexUpdater.java:132)
    at org.neo4j.kernel.impl.index.schema.NativeSchemaIndexUpdater.processUpdate(NativeSchemaIndexUpdater.java:86)
    at org.neo4j.kernel.impl.index.schema.NativeSchemaIndexUpdater.process(NativeSchemaIndexUpdater.java:61)
    at org.neo4j.kernel.impl.index.schema.fusion.FusionIndexUpdater.process(FusionIndexUpdater.java:41)
    at org.neo4j.kernel.impl.api.index.updater.DelegatingIndexUpdater.process(DelegatingIndexUpdater.java:40)
    at org.neo4j.kernel.impl.api.index.IndexingService.processUpdate(IndexingService.java:516)
    at org.neo4j.kernel.impl.api.index.IndexingService.apply(IndexingService.java:479)
    at org.neo4j.kernel.impl.api.index.IndexingService.apply(IndexingService.java:463)
    at org.neo4j.kernel.impl.transaction.command.IndexUpdatesWork.apply(IndexUpdatesWork.java:63)
    at org.neo4j.kernel.impl.transaction.command.IndexUpdatesWork.apply(IndexUpdatesWork.java:42)
    at org.neo4j.concurrent.WorkSync.doSynchronizedWork(WorkSync.java:231)
    at org.neo4j.concurrent.WorkSync.tryDoWork(WorkSync.java:157)
    at org.neo4j.concurrent.WorkSync.apply(WorkSync.java:91)

Java implementation of the index build

    /**
     * Build the index and return a message (does not support automatic updates).
     */
    private String chineseFulltextIndex(String indexName, String labelName, List<String> propKeys) {
        Label label = Label.label(labelName);
        // Find all nodes carrying this label
        ResourceIterator<Node> nodes = db.findNodes(label);
        System.out.println("nodes:" + nodes.toString());
        int nodesSize = 0;
        int propertiesSize = 0;
        // Problematic loop: after ~30 million nodes the program starts to stall
        while (nodes.hasNext()) {
            nodesSize++;
            Node node = nodes.next();
            System.out.println("current node:" + node.toString());
            // Properties on this node that should be indexed
            Set<Map.Entry<String, Object>> properties =
                    node.getProperties(propKeys.toArray(new String[0])).entrySet();
            System.out.println("current node properties:" + properties);
            // If the node already has index entries, remove them first
            if (db.index().existsForNodes(indexName)) {
                Index<Node> oldIndex = db.index().forNodes(indexName);
                System.out.println("current node index:" + oldIndex);
                oldIndex.remove(node);
            }
            // Add a full-text index entry for each indexed property of the node
            Index<Node> nodeIndex = db.index().forNodes(indexName, FULL_INDEX_CONFIG);
            for (Map.Entry<String, Object> property : properties) {
                propertiesSize++;
                nodeIndex.add(node, property.getKey(), property.getValue());
            }
            // Timing would go here
        }
        String message = "IndexName:" + indexName + ",LabelName:" + labelName
                + ",NodesSize:" + nodesSize + ",PropertiesSize:" + propertiesSize;
        return message;
    }

4. Optimizing the Full-Text Index Code

4.1 java.lang.OutOfMemoryError

java.lang.OutOfMemoryError is a subclass of java.lang.VirtualMachineError; it is thrown when the JVM has broken down or exhausted the resources available to it.
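That class relationship can be confirmed directly against the JDK (a minimal check; both class names are standard java.lang types):

```java
public class OomHierarchy {
    public static void main(String[] args) {
        // OutOfMemoryError's direct superclass in java.lang
        System.out.println(OutOfMemoryError.class.getSuperclass().getName());
        // prints "java.lang.VirtualMachineError"
    }
}
```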

4.2 Locks and Memory During Database Access

While accessing the database, the program acquires locks and memory, and neither is released until the transaction completes. This makes the cause of the error above easy to understand: in the index program from Section 3, all nodes are fetched and indexed inside a single WHILE loop, so the transaction is only closed, and its memory reclaimed, after the entire index has been built. With a huge amount of data, the heap inevitably overflows.

4.3 Optimization Approach

Commit the transaction in batches.
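The batching mechanism can be sketched independently of Neo4j: a counter triggers a commit every batchSize writes, plus one final commit for the leftover partial batch. The commit counter below is a hypothetical stand-in for the real tx.success(); tx.close(); db.beginTx() sequence.

```java
public class BatchCommitSketch {
    // Count how many commits a batched loop performs:
    // one per full batch of batchSize writes, plus one for the remainder.
    static int countCommits(int totalNodes, int batchSize) {
        int commits = 0;
        int batch = 0;
        for (int i = 0; i < totalNodes; i++) {
            // ... index one node here, inside the open transaction ...
            if (++batch == batchSize) {
                batch = 0;
                commits++;   // stand-in for tx.success(); tx.close(); tx = db.beginTx();
            }
        }
        if (batch > 0) {
            commits++;       // final commit for the partial batch
        }
        return commits;
    }

    public static void main(String[] args) {
        // 160,000 nodes in batches of 50,000: three full batches + one partial
        System.out.println(countCommits(160_000, 50_000));
        // prints "4"
    }
}
```

Because each commit releases the locks and memory the transaction has accumulated, heap usage is bounded by one batch rather than by the whole dataset.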

4.4 Optimized Code

    /**
     * Build the index and return a message (does not support automatic updates).
     */
    private String chineseFulltextIndex(String indexName, String labelName, List<String> propKeys) {
        Label label = Label.label(labelName);
        int nodesSize = 0;
        int propertiesSize = 0;
        // Find all nodes carrying this label
        ResourceIterator<Node> nodes = db.findNodes(label);
        Transaction tx = db.beginTx();
        try {
            int batch = 0;
            long startTime = System.nanoTime();
            while (nodes.hasNext()) {
                nodesSize++;
                Node node = nodes.next();
                boolean indexed = false;
                // Properties on this node that should be indexed
                Set<Map.Entry<String, Object>> properties =
                        node.getProperties(propKeys.toArray(new String[0])).entrySet();
                // If the node already has index entries, remove them first
                if (db.index().existsForNodes(indexName)) {
                    Index<Node> oldIndex = db.index().forNodes(indexName);
                    oldIndex.remove(node);
                }
                // Add a full-text index entry for each indexed property of the node
                Index<Node> nodeIndex = db.index().forNodes(indexName, FULL_INDEX_CONFIG);
                for (Map.Entry<String, Object> property : properties) {
                    indexed = true;
                    propertiesSize++;
                    nodeIndex.add(node, property.getKey(), property.getValue());
                }
                // Commit the transaction in batches of 50,000 nodes
                if (indexed) {
                    if (++batch == 50_000) {
                        batch = 0;
                        tx.success();
                        tx.close();
                        tx = db.beginTx();
                        // Log the time spent on this batch
                        startTime = indexConsumeTime(startTime, nodesSize, propertiesSize);
                    }
                }
            }
            tx.success();
            // Log the time spent on the final partial batch
            indexConsumeTime(startTime, nodesSize, propertiesSize);
        } finally {
            tx.close();
        }
        String message = "IndexName:" + indexName + ",LabelName:" + labelName
                + ",NodesSize:" + nodesSize + ",PropertiesSize:" + propertiesSize;
        return message;
    }

4.5 Execution Efficiency Test

Commits happen in batches of 50,000 nodes; nodeSize and propertieSize in the log accumulate across batches, while consume is the time taken by each individual batch commit.
The first few commits are slow, after which throughput stabilizes at roughly 2–5 s per batch of 50,000 nodes. At that rate, 1 billion nodes would take an estimated 11–28 hours.
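As a sanity check on that estimate (a back-of-the-envelope calculation, not a measurement): 1 billion nodes at 50,000 nodes per batch is 20,000 batches, so 2–5 s per batch gives roughly 11–28 hours of wall-clock time.

```java
public class IndexTimeEstimate {
    // Rough wall-clock estimate: (nodes / batchSize) batches × seconds per batch
    static double hours(long nodes, int batchSize, double secondsPerBatch) {
        double batches = (double) nodes / batchSize;
        return batches * secondsPerBatch / 3600.0;
    }

    public static void main(String[] args) {
        long nodes = 1_000_000_000L;  // one billion nodes
        System.out.printf("best case: %.1f h, worst case: %.1f h%n",
                hours(nodes, 50_000, 2.0), hours(nodes, 50_000, 5.0));
        // prints "best case: 11.1 h, worst case: 27.8 h"
    }
}
```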

Build index-nodeSize:50000,propertieSize:148777,consume:21434ms
Build index-nodeSize:100000,propertieSize:297883,consume:18493ms
Build index-nodeSize:150000,propertieSize:446936,consume:17140ms
Build index-nodeSize:200000,propertieSize:595981,consume:17323ms
Build index-nodeSize:250000,propertieSize:745039,consume:19680ms
Build index-nodeSize:300000,propertieSize:894026,consume:18451ms
Build index-nodeSize:350000,propertieSize:1042994,consume:20266ms
Build index-nodeSize:400000,propertieSize:1160186,consume:12787ms
Build index-nodeSize:450000,propertieSize:1210186,consume:1946ms
Build index-nodeSize:500000,propertieSize:1260186,consume:3174ms
Build index-nodeSize:550000,propertieSize:1310186,consume:3090ms
Build index-nodeSize:600000,propertieSize:1360186,consume:3063ms
Build index-nodeSize:650000,propertieSize:1410186,consume:1868ms
Build index-nodeSize:700000,propertieSize:1460186,consume:2036ms
Build index-nodeSize:750000,propertieSize:1510186,consume:3784ms
Build index-nodeSize:800000,propertieSize:1560186,consume:3037ms
Build index-nodeSize:850000,propertieSize:1610186,consume:2627ms
Build index-nodeSize:900000,propertieSize:1660186,consume:1900ms
Build index-nodeSize:950000,propertieSize:1710186,consume:2944ms
Build index-nodeSize:1000000,propertieSize:1760186,consume:3369ms
Build index-nodeSize:1050000,propertieSize:1810186,consume:3289ms
Build index-nodeSize:1100000,propertieSize:1860186,consume:2763ms
Build index-nodeSize:1150000,propertieSize:1910186,consume:3237ms
Build index-nodeSize:1200000,propertieSize:1960186,consume:3408ms
Build index-nodeSize:1250000,propertieSize:2010186,consume:3644ms
Build index-nodeSize:1300000,propertieSize:2060186,consume:3661ms
Build index-nodeSize:1350000,propertieSize:2110186,consume:2964ms
Build index-nodeSize:1400000,propertieSize:2160186,consume:3219ms
Build index-nodeSize:1450000,propertieSize:2210186,consume:3356ms
Build index-nodeSize:1500000,propertieSize:2260186,consume:4115ms
Build index-nodeSize:1550000,propertieSize:2310186,consume:3188ms
Build index-nodeSize:1600000,propertieSize:2360186,consume:3364ms
Build index-nodeSize:1650000,propertieSize:2410186,consume:3799ms
Build index-nodeSize:1700000,propertieSize:2460186,consume:4301ms
Build index-nodeSize:1750000,propertieSize:2510186,consume:3772ms
Build index-nodeSize:1800000,propertieSize:2560186,consume:3692ms
Build index-nodeSize:1850000,propertieSize:2610186,consume:3428ms
Build index-nodeSize:1900000,propertieSize:2660186,consume:2930ms

Note: after two hours of index construction on this dataset, 14.95 million nodes had been indexed and throughput had dropped sharply; further optimization is needed.

Build index-nodeSize:13850000,propertieSize:14610186,consume:97290ms
Build index-nodeSize:13900000,propertieSize:14660186,consume:7441ms
Build index-nodeSize:13950000,propertieSize:14710186,consume:3730ms
Build index-nodeSize:14000000,propertieSize:14760186,consume:3512ms
Build index-nodeSize:14050000,propertieSize:14810186,consume:4545ms
Build index-nodeSize:14100000,propertieSize:14860186,consume:12100ms
Build index-nodeSize:14150000,propertieSize:14910186,consume:83071ms
Build index-nodeSize:14200000,propertieSize:14960186,consume:7417ms
Build index-nodeSize:14250000,propertieSize:15010186,consume:3579ms
Build index-nodeSize:14300000,propertieSize:15060186,consume:64841ms
Build index-nodeSize:14350000,propertieSize:15110186,consume:7553ms
Build index-nodeSize:14400000,propertieSize:15160186,consume:63141ms
Build index-nodeSize:14450000,propertieSize:15210186,consume:64316ms
Build index-nodeSize:14500000,propertieSize:15260186,consume:187510ms
Build index-nodeSize:14550000,propertieSize:15310186,consume:247571ms
Build index-nodeSize:14600000,propertieSize:15360186,consume:224611ms
Build index-nodeSize:14650000,propertieSize:15410186,consume:244539ms
Build index-nodeSize:14700000,propertieSize:15460186,consume:354684ms
Build index-nodeSize:14750000,propertieSize:15510186,consume:236970ms
Build index-nodeSize:14800000,propertieSize:15560186,consume:308532ms
Build index-nodeSize:14850000,propertieSize:15610186,consume:429815ms
Build index-nodeSize:14900000,propertieSize:15660186,consume:409451ms
Build index-nodeSize:14950000,propertieSize:15710186,consume:456980ms

After four hours, 15.3 million nodes had been indexed and construction had slowed to an almost unacceptable rate. Optimization is ongoing…

Build index-nodeSize:15000000,propertieSize:15760186,consume:447474ms
Build index-nodeSize:15050000,propertieSize:15810186,consume:580270ms
Build index-nodeSize:15100000,propertieSize:15860186,consume:840488ms
Build index-nodeSize:15150000,propertieSize:15910186,consume:573554ms
Build index-nodeSize:15200000,propertieSize:15960186,consume:748670ms
Build index-nodeSize:15250000,propertieSize:16010186,consume:1305363ms
Build index-nodeSize:15300000,propertieSize:16060186,consume:2495139ms

Location of the source code for the test case above




http://www.chinasem.cn/article/744666
