Optimizing Full-Text Index Construction for Hundred-Million-Scale Data in Neo4j

2024-02-25 06:38



  • 1. Data volume (hundred-million scale)
  • 2. How the index is built
  • 3. Exceptions during index construction
  • 4. Optimizing the full-text index code
    • 1. java.lang.OutOfMemoryError
    • 2. Locks and memory during database access
    • 3. Optimization approach
    • 4. Optimized code
    • 5. Throughput test

If Neo4j-based full-text retrieval is the main entry point to the graph, getting the graph search engine right is critical.

1. Data volume (hundred-million scale)

count(relationships):500584016

count(nodes):765485810

2. How the index is built

The full-text index is built by a script that runs in the background on the server:

index.sh:

#!/usr/bin/env bash
nohup /neo4j-community-3.4.9/bin/neo4j-shell -file build.cql >> indexGraph.log 2>&1 &

build.cql:

CALL zdr.index.addChineseFulltextIndex('IKAnalyzer', ['description','fullname','name','lnkurl'], 'LinkedinID')
YIELD message RETURN message;
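Once the build has finished, the resulting explicit index can be queried with Lucene query syntax. A minimal sketch using Neo4j 3.x's built-in explicit-index procedure; the index name 'IKAnalyzer' comes from build.cql above, while the search term is purely illustrative:

// Query the explicit full-text index with Lucene syntax (term is an example only)
CALL db.index.explicit.searchNodes('IKAnalyzer', 'name:刘*')
YIELD node, weight
RETURN node, weight
LIMIT 10;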

3. Exceptions during index construction

ERROR (-v for expanded information):
TransactionFailureException: The database has encountered a critical error, and needs to be restarted. Please see database logs for more details.

-host      Domain name or IP of host to connect to (default: localhost)
-port      Port of host to connect to (default: 1337)
-name      RMI name, i.e. rmi://<host>:<port>/<name> (default: shell)
-pid       Process ID to connect to
-c         Command line to execute. After executing it the shell exits
-file      File containing commands to execute, or '-' to read from stdin. After executing it the shell exits
-readonly  Connect in readonly mode (only for connecting with -path)
-path      Points to a neo4j db path so that a local server can be started there
-config    Points to a config file when starting a local server

Example arguments for remote:
-port 1337
-host 192.168.1.234 -port 1337 -name shell
-host localhost -readonly
...or no arguments for default values

Example arguments for local:
-path /path/to/db
-path /path/to/db -config /path/to/neo4j.config
-path /path/to/db -readonly
Caused by: java.lang.OutOfMemoryError: Java heap space | GB+Tree[file:/u02/isi/zdr/graph/neo4j-community-3.4.9/data/databases/graph.db/schema/index/lucene_native-2.0/134/string-1.0/index-134, layout:StringLayout[version:0.1, identifier:24016946018123776], generation:16587/16588]
    at org.neo4j.io.pagecache.impl.muninn.CursorFactory.takeWriteCursor(CursorFactory.java:62)
    at org.neo4j.io.pagecache.impl.muninn.MuninnPagedFile.io(MuninnPagedFile.java:186)
    at org.neo4j.index.internal.gbptree.FreeListIdProvider.releaseId(FreeListIdProvider.java:217)
    at org.neo4j.index.internal.gbptree.InternalTreeLogic.createSuccessorIfNeeded(InternalTreeLogic.java:1289)
    at org.neo4j.index.internal.gbptree.InternalTreeLogic.insertInLeaf(InternalTreeLogic.java:513)
    at org.neo4j.index.internal.gbptree.InternalTreeLogic.insert(InternalTreeLogic.java:356)
    at org.neo4j.index.internal.gbptree.GBPTree$SingleWriter.merge(GBPTree.java:1234)
    at org.neo4j.kernel.impl.index.schema.NativeSchemaIndexUpdater.processAdd(NativeSchemaIndexUpdater.java:132)
    at org.neo4j.kernel.impl.index.schema.NativeSchemaIndexUpdater.processUpdate(NativeSchemaIndexUpdater.java:86)
    at org.neo4j.kernel.impl.index.schema.NativeSchemaIndexUpdater.process(NativeSchemaIndexUpdater.java:61)
    at org.neo4j.kernel.impl.index.schema.fusion.FusionIndexUpdater.process(FusionIndexUpdater.java:41)
    at org.neo4j.kernel.impl.api.index.updater.DelegatingIndexUpdater.process(DelegatingIndexUpdater.java:40)
    at org.neo4j.kernel.impl.api.index.IndexingService.processUpdate(IndexingService.java:516)
    at org.neo4j.kernel.impl.api.index.IndexingService.apply(IndexingService.java:479)
    at org.neo4j.kernel.impl.api.index.IndexingService.apply(IndexingService.java:463)
    at org.neo4j.kernel.impl.transaction.command.IndexUpdatesWork.apply(IndexUpdatesWork.java:63)
    at org.neo4j.kernel.impl.transaction.command.IndexUpdatesWork.apply(IndexUpdatesWork.java:42)
    at org.neo4j.concurrent.WorkSync.doSynchronizedWork(WorkSync.java:231)
    at org.neo4j.concurrent.WorkSync.tryDoWork(WorkSync.java:157)
    at org.neo4j.concurrent.WorkSync.apply(WorkSync.java:91)

The Java implementation behind the index build:

/**
 * @Description: Build the index and return a message (automatic index updates are not supported)
 */
private String chineseFulltextIndex(String indexName, String labelName, List<String> propKeys) {
    Label label = Label.label(labelName);
    // Find all nodes that carry the given label
    ResourceIterator<Node> nodes = db.findNodes(label);
    System.out.println("nodes:" + nodes.toString());
    int nodesSize = 0;
    int propertiesSize = 0;
    // Problematic loop: after roughly 30 million nodes the program starts to stall
    while (nodes.hasNext()) {
        nodesSize++;
        Node node = nodes.next();
        System.out.println("current nodes:" + node.toString());
        // Properties on this node that need to be indexed
        Set<Map.Entry<String, Object>> properties =
                node.getProperties(propKeys.toArray(new String[0])).entrySet();
        System.out.println("current node properties" + properties);
        // If the node already has index entries, remove them first
        if (db.index().existsForNodes(indexName)) {
            Index<Node> oldIndex = db.index().forNodes(indexName);
            System.out.println("current node index" + oldIndex);
            oldIndex.remove(node);
        }
        // Add a full-text index entry for each property that needs indexing
        Index<Node> nodeIndex = db.index().forNodes(indexName, FULL_INDEX_CONFIG);
        for (Map.Entry<String, Object> property : properties) {
            propertiesSize++;
            nodeIndex.add(node, property.getKey(), property.getValue());
        }
        // timing goes here
    }
    String message = "IndexName:" + indexName + ",LabelName:" + labelName
            + ",NodesSize:" + nodesSize + ",PropertiesSize:" + propertiesSize;
    return message;
}
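Two pieces referenced here are not shown in the post: the FULL_INDEX_CONFIG map and the wrapper that exposes this method as the zdr.index.addChineseFulltextIndex procedure called from build.cql. A minimal sketch of both, assuming the IK Analyzer class name and the standard explicit-index configuration keys; only the procedure name and its YIELD message column are confirmed by the post:

import java.util.List;
import java.util.Map;
import java.util.stream.Stream;

import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.helpers.collection.MapUtil;
import org.neo4j.procedure.Context;
import org.neo4j.procedure.Mode;
import org.neo4j.procedure.Name;
import org.neo4j.procedure.Procedure;

public class ChineseFulltextIndex {

    // Assumed explicit-index configuration: a Lucene full-text index with a
    // lower-casing Chinese analyzer. The analyzer class is an assumption;
    // substitute whichever IK Analyzer build is actually on the classpath.
    private static final Map<String, String> FULL_INDEX_CONFIG = MapUtil.stringMap(
            "provider", "lucene",
            "type", "fulltext",
            "to_lower_case", "true",
            "analyzer", "org.wltea.analyzer.lucene.IKAnalyzer");

    @Context
    public GraphDatabaseService db;

    // Exposes the private chineseFulltextIndex method shown above as the
    // procedure invoked from build.cql.
    @Procedure(name = "zdr.index.addChineseFulltextIndex", mode = Mode.WRITE)
    public Stream<MessageResult> addChineseFulltextIndex(
            @Name("indexName") String indexName,
            @Name("properties") List<String> propKeys,
            @Name("labelName") String labelName) {
        return Stream.of(new MessageResult(chineseFulltextIndex(indexName, labelName, propKeys)));
    }

    public static class MessageResult {
        public String message; // surfaces as "YIELD message"

        public MessageResult(String message) {
            this.message = message;
        }
    }

    // private String chineseFulltextIndex(...) { ... } -- the method shown above
}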

4. Optimizing the full-text index code

1. java.lang.OutOfMemoryError

java.lang.OutOfMemoryError is a subclass of java.lang.VirtualMachineError: it is thrown when the JVM is broken or has run out of the resources it needs to continue operating, in this case heap space.
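A bigger heap only delays this failure while a single transaction is allowed to grow without bound; still, for reference, the JVM heap of a Neo4j 3.x server is sized in neo4j.conf. The values below are illustrative, not from the post:

# neo4j.conf (Neo4j 3.x) -- illustrative sizes, tune to your hardware
dbms.memory.heap.initial_size=16g
dbms.memory.heap.max_size=16g
# The page cache lives outside the heap and competes for the same RAM
dbms.memory.pagecache.size=32g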

2. Locks and memory during database access

Accessing the database acquires locks and memory, and neither is released until the transaction completes. With that in mind, the cause of the error above is easy to see: in the index program of Section 3, all nodes are fetched and indexed inside a single WHILE loop, and the transaction is only closed, with its memory reclaimed, once the entire build finishes. With a data set this large, an out-of-memory error is inevitable.

3. Optimization approach

Use a batched transaction-commit mechanism: instead of one giant transaction, mark the current transaction successful and close it every fixed number of nodes, then open a fresh one, as sketched below. The full optimized method follows in the next subsection.
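A minimal, generic sketch of the pattern in isolation; the names (processInBatches, BATCH_SIZE, the dummy write) are placeholders, not taken from the post:

import java.util.Iterator;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Transaction;

class BatchCommitSketch {
    private static final int BATCH_SIZE = 50_000;

    // Close and reopen the transaction every BATCH_SIZE writes so that locks
    // and accumulated transaction state are released periodically.
    void processInBatches(GraphDatabaseService db, Iterator<Node> work) {
        Transaction tx = db.beginTx();
        try {
            int batch = 0;
            while (work.hasNext()) {
                Node node = work.next();
                node.setProperty("touched", true); // stand-in for any write
                if (++batch == BATCH_SIZE) {
                    batch = 0;
                    tx.success();      // mark the transaction as committable
                    tx.close();        // commit: releases locks and memory
                    tx = db.beginTx(); // start a fresh transaction
                }
            }
            tx.success(); // commit the final partial batch
        } finally {
            tx.close();
        }
    }
}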

4. Optimized code

/**
 * @Description: Build the index and return a message (automatic index updates are not supported)
 */
private String chineseFulltextIndex(String indexName, String labelName, List<String> propKeys) {
    Label label = Label.label(labelName);
    int nodesSize = 0;
    int propertiesSize = 0;
    // Find all nodes that carry the given label
    ResourceIterator<Node> nodes = db.findNodes(label);
    Transaction tx = db.beginTx();
    try {
        int batch = 0;
        long startTime = System.nanoTime();
        while (nodes.hasNext()) {
            nodesSize++;
            Node node = nodes.next();
            boolean indexed = false;
            // Properties on this node that need to be indexed
            Set<Map.Entry<String, Object>> properties =
                    node.getProperties(propKeys.toArray(new String[0])).entrySet();
            // If the node already has index entries, remove them first
            if (db.index().existsForNodes(indexName)) {
                Index<Node> oldIndex = db.index().forNodes(indexName);
                oldIndex.remove(node);
            }
            // Add a full-text index entry for each property that needs indexing
            Index<Node> nodeIndex = db.index().forNodes(indexName, FULL_INDEX_CONFIG);
            for (Map.Entry<String, Object> property : properties) {
                indexed = true;
                propertiesSize++;
                nodeIndex.add(node, property.getKey(), property.getValue());
            }
            // Commit the transaction in batches
            if (indexed) {
                if (++batch == 50_000) {
                    batch = 0;
                    tx.success();
                    tx.close();
                    tx = db.beginTx();
                    // Log the time taken by this batch
                    startTime = indexConsumeTime(startTime, nodesSize, propertiesSize);
                }
            }
        }
        tx.success();
        // Log the time taken by the final partial batch
        indexConsumeTime(startTime, nodesSize, propertiesSize);
    } finally {
        tx.close();
    }
    String message = "IndexName:" + indexName + ",LabelName:" + labelName
            + ",NodesSize:" + nodesSize + ",PropertiesSize:" + propertiesSize;
    return message;
}
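The timing helper indexConsumeTime is called above but never shown in the post. A minimal sketch that matches the log format of the next subsection, assuming it prints the batch cost and returns a fresh timestamp:

// Logs the time spent on the batch that started at startTime and returns a
// new timestamp for the next batch. Output matches the logs shown below.
private long indexConsumeTime(long startTime, int nodesSize, int propertiesSize) {
    long consumeMs = (System.nanoTime() - startTime) / 1_000_000; // ns -> ms
    System.out.println("Build index-nodeSize:" + nodesSize
            + ",propertieSize:" + propertiesSize
            + ",consume:" + consumeMs + "ms");
    return System.nanoTime();
}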

5. Throughput test

Commits are made in batches of 50,000 nodes; nodeSize and propertieSize in the logs accumulate across batches, while consume is the time spent on each individual batch commit.
The first few batches are expensive, after which the cost settles at roughly 2-5 s per batch of 50,000 nodes. At that rate, 1 billion nodes means 20,000 batches, for an estimated total of about 11-28 hours.

Build index-nodeSize:50000,propertieSize:148777,consume:21434ms
Build index-nodeSize:100000,propertieSize:297883,consume:18493ms
Build index-nodeSize:150000,propertieSize:446936,consume:17140ms
Build index-nodeSize:200000,propertieSize:595981,consume:17323ms
Build index-nodeSize:250000,propertieSize:745039,consume:19680ms
Build index-nodeSize:300000,propertieSize:894026,consume:18451ms
Build index-nodeSize:350000,propertieSize:1042994,consume:20266ms
Build index-nodeSize:400000,propertieSize:1160186,consume:12787ms
Build index-nodeSize:450000,propertieSize:1210186,consume:1946ms
Build index-nodeSize:500000,propertieSize:1260186,consume:3174ms
Build index-nodeSize:550000,propertieSize:1310186,consume:3090ms
Build index-nodeSize:600000,propertieSize:1360186,consume:3063ms
Build index-nodeSize:650000,propertieSize:1410186,consume:1868ms
Build index-nodeSize:700000,propertieSize:1460186,consume:2036ms
Build index-nodeSize:750000,propertieSize:1510186,consume:3784ms
Build index-nodeSize:800000,propertieSize:1560186,consume:3037ms
Build index-nodeSize:850000,propertieSize:1610186,consume:2627ms
Build index-nodeSize:900000,propertieSize:1660186,consume:1900ms
Build index-nodeSize:950000,propertieSize:1710186,consume:2944ms
Build index-nodeSize:1000000,propertieSize:1760186,consume:3369ms
Build index-nodeSize:1050000,propertieSize:1810186,consume:3289ms
Build index-nodeSize:1100000,propertieSize:1860186,consume:2763ms
Build index-nodeSize:1150000,propertieSize:1910186,consume:3237ms
Build index-nodeSize:1200000,propertieSize:1960186,consume:3408ms
Build index-nodeSize:1250000,propertieSize:2010186,consume:3644ms
Build index-nodeSize:1300000,propertieSize:2060186,consume:3661ms
Build index-nodeSize:1350000,propertieSize:2110186,consume:2964ms
Build index-nodeSize:1400000,propertieSize:2160186,consume:3219ms
Build index-nodeSize:1450000,propertieSize:2210186,consume:3356ms
Build index-nodeSize:1500000,propertieSize:2260186,consume:4115ms
Build index-nodeSize:1550000,propertieSize:2310186,consume:3188ms
Build index-nodeSize:1600000,propertieSize:2360186,consume:3364ms
Build index-nodeSize:1650000,propertieSize:2410186,consume:3799ms
Build index-nodeSize:1700000,propertieSize:2460186,consume:4301ms
Build index-nodeSize:1750000,propertieSize:2510186,consume:3772ms
Build index-nodeSize:1800000,propertieSize:2560186,consume:3692ms
Build index-nodeSize:1850000,propertieSize:2610186,consume:3428ms
Build index-nodeSize:1900000,propertieSize:2660186,consume:2930ms

Note: two hours into this test run, about 14.95 million nodes had been indexed and throughput had dropped sharply; further optimization was needed.

Build index-nodeSize:13850000,propertieSize:14610186,consume:97290ms
Build index-nodeSize:13900000,propertieSize:14660186,consume:7441ms
Build index-nodeSize:13950000,propertieSize:14710186,consume:3730ms
Build index-nodeSize:14000000,propertieSize:14760186,consume:3512ms
Build index-nodeSize:14050000,propertieSize:14810186,consume:4545ms
Build index-nodeSize:14100000,propertieSize:14860186,consume:12100ms
Build index-nodeSize:14150000,propertieSize:14910186,consume:83071ms
Build index-nodeSize:14200000,propertieSize:14960186,consume:7417ms
Build index-nodeSize:14250000,propertieSize:15010186,consume:3579ms
Build index-nodeSize:14300000,propertieSize:15060186,consume:64841ms
Build index-nodeSize:14350000,propertieSize:15110186,consume:7553ms
Build index-nodeSize:14400000,propertieSize:15160186,consume:63141ms
Build index-nodeSize:14450000,propertieSize:15210186,consume:64316ms
Build index-nodeSize:14500000,propertieSize:15260186,consume:187510ms
Build index-nodeSize:14550000,propertieSize:15310186,consume:247571ms
Build index-nodeSize:14600000,propertieSize:15360186,consume:224611ms
Build index-nodeSize:14650000,propertieSize:15410186,consume:244539ms
Build index-nodeSize:14700000,propertieSize:15460186,consume:354684ms
Build index-nodeSize:14750000,propertieSize:15510186,consume:236970ms
Build index-nodeSize:14800000,propertieSize:15560186,consume:308532ms
Build index-nodeSize:14850000,propertieSize:15610186,consume:429815ms
Build index-nodeSize:14900000,propertieSize:15660186,consume:409451ms
Build index-nodeSize:14950000,propertieSize:15710186,consume:456980ms

Four hours in, about 15.3 million nodes had been indexed and the build had become unacceptably slow. Optimization is ongoing…

Build index-nodeSize:15000000,propertieSize:15760186,consume:447474ms
Build index-nodeSize:15050000,propertieSize:15810186,consume:580270ms
Build index-nodeSize:15100000,propertieSize:15860186,consume:840488ms
Build index-nodeSize:15150000,propertieSize:15910186,consume:573554ms
Build index-nodeSize:15200000,propertieSize:15960186,consume:748670ms
Build index-nodeSize:15250000,propertieSize:16010186,consume:1305363ms
Build index-nodeSize:15300000,propertieSize:16060186,consume:2495139ms

Source code location for the test case above
