Python CRUD Operations on Elasticsearch with the zdppy_es Framework

This article shows how to use zdppy_es, a Chinese-developed framework, to perform create, read, update, and delete (CRUD) operations on Elasticsearch from Python. Hopefully it offers a useful reference for developers working through similar problems; if that is you, follow along.

This tutorial also comes with recorded video lessons and one-on-one coaching; feel free to message me privately.

Deploying Elasticsearch 7 with Docker

Create the base container

docker run -itd --name elasticsearch -p 9200:9200 -e "discovery.type=single-node" -e ES_JAVA_OPTS="-Xms2g -Xmx2g"  elasticsearch:7.17.17
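
Before moving on, it is worth confirming that the container came up cleanly. A quick way to do that (plain Docker CLI, nothing specific to this tutorial) is to follow the container logs until Elasticsearch reports that it has started:

# Follow the Elasticsearch container logs; press Ctrl+C to stop
docker logs -f elasticsearch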

Configure a username and password

Path of the configuration file inside the container:

/usr/share/elasticsearch/config/elasticsearch.yml

Copy the configuration file out to the host:

# Prepare the directories
sudo mkdir -p /docker
sudo chmod 777 -R /docker
mkdir -p /docker/elasticsearch/config
mkdir -p /docker/elasticsearch/data
mkdir -p /docker/elasticsearch/log

# Copy out the configuration file
docker cp elasticsearch:/usr/share/elasticsearch/config/elasticsearch.yml /docker/elasticsearch/config/elasticsearch.yml

Edit the configuration file so that it contains the following:

cluster.name: "docker-cluster"
network.host: 0.0.0.0
http.cors.enabled: true
http.cors.allow-origin: "*"
# Enable X-Pack security here
xpack.security.enabled: true

Copy the modified configuration file from the host back into the container:

docker cp /docker/elasticsearch/config/elasticsearch.yml elasticsearch:/usr/share/elasticsearch/config/elasticsearch.yml

Restart the Elasticsearch service:

docker restart elasticsearch

Enter the container and set the Elasticsearch passwords:

docker exec -it elasticsearch bash
/usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive

After running the command above, type y to confirm. You will then see the following prompts; enter zhangdapeng520 for every one of them:

Please confirm that you would like to continue [y/N] y
Enter password for [elastic]: 
Reenter password for [elastic]: 
Enter password for [apm_system]: 
Reenter password for [apm_system]: 
Passwords do not match.
Try again.
Enter password for [apm_system]: 
Reenter password for [apm_system]: 
Enter password for [kibana_system]: 
Reenter password for [kibana_system]: 
Enter password for [logstash_system]: 
Reenter password for [logstash_system]: 
Enter password for [beats_system]: 
Reenter password for [beats_system]: 
Enter password for [remote_monitoring_user]: 
Reenter password for [remote_monitoring_user]: 
Changed password for user [apm_system]
Changed password for user [kibana_system]
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [remote_monitoring_user]
Changed password for user [elastic]

You end up with the following usernames and passwords:

elastic                  zhangdapeng520
apm_system               zhangdapeng520
kibana_system            zhangdapeng520
logstash_system          zhangdapeng520
beats_system             zhangdapeng520
remote_monitoring_user   zhangdapeng520

Test from the host machine that it worked:

# Without username and password
curl localhost:9200

# With username and password
curl localhost:9200 -u elastic
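
If you want the authenticated check to run non-interactively (for example inside a script), curl also accepts the credentials in a single flag; the password below is simply the one configured in the previous step:

# Supply username and password together instead of being prompted
curl -u elastic:zhangdapeng520 http://localhost:9200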

Establish a connection

from es import Elasticsearch

auth = ("elastic", "zhangdapeng520")
es = Elasticsearch("http://192.168.234.128:9200/", basic_auth=auth)
print(es.info())
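
Hard-coding credentials is fine for a demo, but in practice you may prefer to read them from environment variables. Below is a minimal sketch that uses only the standard library plus the same Elasticsearch constructor shown above; the variable names ES_URL, ES_USER, and ES_PASSWORD are my own convention, not something defined by zdppy_es:

import os

from es import Elasticsearch

# Read connection settings from the environment, falling back to the demo values
url = os.environ.get("ES_URL", "http://192.168.234.128:9200/")
auth = (
    os.environ.get("ES_USER", "elastic"),
    os.environ.get("ES_PASSWORD", "zhangdapeng520"),
)

es = Elasticsearch(url, basic_auth=auth)
print(es.info())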

Create an index

from es import Elasticsearch

# Connect to Elasticsearch
auth = ("elastic", "zhangdapeng520")
edb = Elasticsearch("http://192.168.234.128:9200/", basic_auth=auth)

# Create the index
index = "user"
mappings = {
    "properties": {
        "id": {"type": "integer"},
        "name": {"type": "text"},
        "age": {"type": "integer"},
    }
}
edb.indices.create(index=index, mappings=mappings)

# Delete the index
edb.indices.delete(index=index)
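
If the index already exists (for example because an earlier run stopped before the delete step), indices.create raises an error. Assuming zdppy_es exposes the same indices API as the official Elasticsearch Python client, which the calls above suggest, you can guard the creation with an existence check; treat indices.exists here as an assumption rather than a documented zdppy_es feature:

# Only create the index if it does not exist yet
# (indices.exists is assumed to mirror the official Python client)
if not edb.indices.exists(index=index):
    edb.indices.create(index=index, mappings=mappings)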

Insert a document

from es import Elasticsearch

# Connect to Elasticsearch
auth = ("elastic", "zhangdapeng520")
edb = Elasticsearch("http://192.168.234.128:9200/", basic_auth=auth)

# Create the index
index = "user"
mappings = {
    "properties": {
        "id": {"type": "integer"},
        "name": {"type": "text"},
        "age": {"type": "integer"},
    }
}
edb.indices.create(index=index, mappings=mappings)

# Insert a document
edb.index(
    index=index,
    id="1",
    document={
        "id": 1,
        "name": "张三",
        "age": 23,
    },
)

# Delete the index
edb.indices.delete(index=index)
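
The index call also returns a response describing what happened to the document. As a small usage note (these response fields come from Elasticsearch itself, not from zdppy_es), you can capture it to confirm the write:

# Capture the write response: "result" is "created" on the first insert
# and "updated" if a document with the same id already existed
resp = edb.index(
    index=index,
    id="1",
    document={"id": 1, "name": "张三", "age": 23},
)
print(resp["result"], resp["_id"])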

Query a document by ID

from es import Elasticsearch

# Connect to Elasticsearch
auth = ("elastic", "zhangdapeng520")
edb = Elasticsearch("http://192.168.234.128:9200/", basic_auth=auth)

# Create the index
index = "user"
mappings = {
    "properties": {
        "id": {"type": "integer"},
        "name": {"type": "text"},
        "age": {"type": "integer"},
    }
}
edb.indices.create(index=index, mappings=mappings)

# Insert a document
edb.index(
    index=index,
    id="1",
    document={
        "id": 1,
        "name": "张三",
        "age": 23,
    },
)

# Query the document by ID
resp = edb.get(index=index, id="1")
print(resp["_source"])

# Delete the index
edb.indices.delete(index=index)
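
edb.get will raise an error when the requested ID does not exist (at least that is how the official Python client behaves). The official client offers an exists call with the same arguments for exactly this case; assuming zdppy_es passes it through, a defensive lookup could look like this sketch:

# Check for the document before fetching it
# (exists() is assumed to mirror the official client's API)
if edb.exists(index=index, id="1"):
    resp = edb.get(index=index, id="1")
    print(resp["_source"])
else:
    print("document 1 not found")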

Bulk-insert documents

from es import Elasticsearch

# Connect to Elasticsearch
auth = ("elastic", "zhangdapeng520")
edb = Elasticsearch("http://192.168.234.128:9200/", basic_auth=auth)

# Create the index
index = "user"
mappings = {
    "properties": {
        "id": {"type": "integer"},
        "name": {"type": "text"},
        "age": {"type": "integer"},
    }
}
edb.indices.create(index=index, mappings=mappings)

# Bulk-insert documents
data = [
    {"id": 1, "name": "张三1", "age": 23},
    {"id": 2, "name": "张三2", "age": 23},
    {"id": 3, "name": "张三3", "age": 23},
]
new_data = []
for u in data:
    new_data.append({"index": {"_index": index, "_id": f"{u.get('id')}"}})
    new_data.append(u)
edb.bulk(
    index=index,
    operations=new_data,
    refresh=True,
)

# Query the documents
resp = edb.get(index=index, id="1")
print(resp["_source"])
resp = edb.get(index=index, id="2")
print(resp["_source"])
resp = edb.get(index=index, id="3")
print(resp["_source"])

# Delete the index
edb.indices.delete(index=index)
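
Building the operations list by hand gets repetitive once you bulk-insert from several places. Since every document just needs an action line followed by the document itself, a small helper keeps the call sites tidy; this is plain Python wrapped around the same bulk call used above, with to_bulk_operations being a name of my own choosing:

def to_bulk_operations(index_name, docs, id_field="id"):
    """Interleave an {"index": ...} action line with each document."""
    ops = []
    for doc in docs:
        ops.append({"index": {"_index": index_name, "_id": str(doc.get(id_field))}})
        ops.append(doc)
    return ops


# Equivalent to the hand-written loop above, reusing the same data list
edb.bulk(index=index, operations=to_bulk_operations(index, data), refresh=True)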

Query all documents

from es import Elasticsearch

# Connect to Elasticsearch
auth = ("elastic", "zhangdapeng520")
edb = Elasticsearch("http://192.168.234.128:9200/", basic_auth=auth)

# Create the index
index = "user"
mappings = {
    "properties": {
        "id": {"type": "integer"},
        "name": {"type": "text"},
        "age": {"type": "integer"},
    }
}
edb.indices.create(index=index, mappings=mappings)

# Bulk-insert documents
data = [
    {"id": 1, "name": "张三1", "age": 23},
    {"id": 2, "name": "张三2", "age": 23},
    {"id": 3, "name": "张三3", "age": 23},
]
new_data = []
for u in data:
    new_data.append({"index": {"_index": index, "_id": f"{u.get('id')}"}})
    new_data.append(u)
edb.bulk(
    index=index,
    operations=new_data,
    refresh=True,
)

# Query all documents
r = edb.search(
    index=index,
    query={"match_all": {}},
)
print(r)
print(r.get("hits").get("hits"))

# Delete the index
edb.indices.delete(index=index)
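
match_all returns every document; in practice you usually filter. The query argument takes the standard Elasticsearch query DSL, so, for example, an exact-value term query on the integer age field (with a size limit) can be passed to the same search call. A minimal sketch:

# Return at most 10 documents whose age is exactly 23
r = edb.search(
    index=index,
    query={"term": {"age": 23}},
    size=10,
)
print(r.get("hits").get("hits"))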

Extract the search results

from es import Elasticsearch

# Connect to Elasticsearch
auth = ("elastic", "zhangdapeng520")
edb = Elasticsearch("http://192.168.234.128:9200/", basic_auth=auth)

# Create the index
index = "user"
mappings = {
    "properties": {
        "id": {"type": "integer"},
        "name": {"type": "text"},
        "age": {"type": "integer"},
    }
}
edb.indices.create(index=index, mappings=mappings)

# Bulk-insert documents
data = [
    {"id": 1, "name": "张三1", "age": 23},
    {"id": 2, "name": "张三2", "age": 23},
    {"id": 3, "name": "张三3", "age": 23},
]
new_data = []
for u in data:
    new_data.append({"index": {"_index": index, "_id": f"{u.get('id')}"}})
    new_data.append(u)
edb.bulk(
    index=index,
    operations=new_data,
    refresh=True,
)

# Query all documents
r = edb.search(
    index=index,
    query={"match_all": {}},
)


def get_search_data(data):
    """Extract the _source of every hit from a search response."""
    result = []
    # First level: the outer "hits" object
    hits = data.get("hits")
    if hits is None:
        return result
    # Second level: the list of hits
    hits = hits.get("hits")
    if hits is None:
        return result
    # Third level: each hit's _source
    for hit in hits:
        result.append(hit.get("_source"))
    return result


print(get_search_data(r))

# Delete the index
edb.indices.delete(index=index)
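
The same extraction can be written more compactly. The sketch below is equivalent to get_search_data above and relies only on plain dictionary access over the response:

def get_search_data_compact(data):
    """Return the _source of every hit, or an empty list if there are none."""
    hits = (data.get("hits") or {}).get("hits") or []
    return [hit.get("_source") for hit in hits]


print(get_search_data_compact(r))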

Update a document by ID

import time

from es import Elasticsearch

# Connect to Elasticsearch
auth = ("elastic", "zhangdapeng520")
edb = Elasticsearch("http://192.168.234.128:9200/", basic_auth=auth)

# Create the index
index = "user"
mappings = {
    "properties": {
        "id": {"type": "integer"},
        "name": {"type": "text"},
        "age": {"type": "integer"},
    }
}
edb.indices.create(index=index, mappings=mappings)

# Bulk-insert documents
data = [
    {"id": 1, "name": "张三1", "age": 23},
    {"id": 2, "name": "张三2", "age": 23},
    {"id": 3, "name": "张三3", "age": 23},
]
new_data = []
for u in data:
    new_data.append({"index": {"_index": index, "_id": f"{u.get('id')}"}})
    new_data.append(u)
edb.bulk(
    index=index,
    operations=new_data,
    refresh=True,
)

# Update the document with ID 1
edb.update(
    index=index,
    id="1",
    doc={
        "id": "1",
        "name": "张三333",
        "age": 23,
    },
)

# Query all documents
time.sleep(1)  # wait a moment for the update to become visible to search
r = edb.search(
    index=index,
    query={"match_all": {}},
)


def get_search_data(data):
    """Extract the _source of every hit from a search response."""
    result = []
    hits = data.get("hits")
    if hits is None:
        return result
    hits = hits.get("hits")
    if hits is None:
        return result
    for hit in hits:
        result.append(hit.get("_source"))
    return result


print(get_search_data(r))

# Delete the index
edb.indices.delete(index=index)
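
The time.sleep is only needed because a change becomes visible to search after the next index refresh. The official Python client's update call accepts a refresh parameter for exactly this situation; assuming zdppy_es forwards it to the underlying client, the sleep can be dropped:

# Ask Elasticsearch to refresh immediately so the change is searchable at once
# (refresh=True on update is assumed to be forwarded to the underlying client)
edb.update(
    index=index,
    id="1",
    doc={"id": "1", "name": "张三333", "age": 23},
    refresh=True,
)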

Delete a document by ID

import time

from es import Elasticsearch

# Connect to Elasticsearch
auth = ("elastic", "zhangdapeng520")
edb = Elasticsearch("http://192.168.234.128:9200/", basic_auth=auth)

# Create the index
index = "user"
mappings = {
    "properties": {
        "id": {"type": "integer"},
        "name": {"type": "text"},
        "age": {"type": "integer"},
    }
}
edb.indices.create(index=index, mappings=mappings)

# Bulk-insert documents
data = [
    {"id": 1, "name": "张三1", "age": 23},
    {"id": 2, "name": "张三2", "age": 23},
    {"id": 3, "name": "张三3", "age": 23},
]
new_data = []
for u in data:
    new_data.append({"index": {"_index": index, "_id": f"{u.get('id')}"}})
    new_data.append(u)
edb.bulk(
    index=index,
    operations=new_data,
    refresh=True,
)

# Delete the document with ID 1
edb.delete(index=index, id="1")

# Query the remaining documents
time.sleep(1)  # wait a moment for the deletion to become visible to search
r = edb.search(
    index=index,
    query={"match_all": {}},
)


def get_search_data(data):
    """Extract the _source of every hit from a search response."""
    result = []
    hits = data.get("hits")
    if hits is None:
        return result
    hits = hits.get("hits")
    if hits is None:
        return result
    for hit in hits:
        result.append(hit.get("_source"))
    return result


print(get_search_data(r))

# Delete the index
edb.indices.delete(index=index)
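
Deleting one ID at a time is fine for individual documents. To remove every document matching a condition, the official Python client provides delete_by_query, which accepts the same query DSL as search; assuming zdppy_es exposes it unchanged, a sketch looks like this:

# Delete every document whose age is exactly 23
# (delete_by_query is assumed to mirror the official client's API)
edb.delete_by_query(
    index=index,
    query={"term": {"age": 23}},
    refresh=True,
)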

That wraps up this article on using the zdppy_es framework to perform CRUD operations on Elasticsearch from Python. I hope it proves helpful to fellow developers!


