PostgreSQL | Database | Deploying and Using the pg_repack Extension

2024-02-02 05:44

This article walks through deploying and using the pg_repack extension for PostgreSQL; hopefully it serves as a useful reference.

I. Table and index bloat

PostgreSQL's MVCC implementation differs from the undo-tablespace mechanism of Oracle and MySQL InnoDB. Updates and deletes on a table do not physically remove or overwrite rows; the old row versions are merely marked as dead tuples. As a result, long-running transactions that insert, update, and delete on a large table make its on-disk footprint grow steadily, and the table's read and write performance degrades as the bloat deepens.

In plain terms, table and index bloat has two consequences. First, disk usage: deleting rows from a table of several hundred GB does not release disk space (and the DELETE itself also inflates the WAL), while the database server's disk is finite. Second, performance: queries slow down and inserts/updates become noticeably slower as the table bloats.

Therefore, when operating a database we should keep table bloat under control, or at least within a reasonable, acceptable range; avoiding it entirely is simply impossible.

PostgreSQL offers several ways to deal with table bloat:

First, configure autovacuum in the main configuration file, i.e. let PostgreSQL itself decide when to treat bloat.

Second, run VACUUM manually.

Third, use the CLUSTER command.

Fourth, use an external extension such as pg_repack.

Fifth, recreate the table or reindex, i.e. rebuild the table and its indexes.

  • Rebuilding the table means something like create table tab_new as select * from tab_old, then recreating the indexes, and finally swapping the table names with a rename. Also mind the table's privileges: they have to be granted again (see the sketch after this list).
  • This approach also needs an application maintenance window.
  • Rebuilding an index means something like reindex CONCURRENTLY index_name, an online rebuild; note that it needs twice the index's storage space.
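A minimal sketch of the rebuild-and-swap approach, with all object names (tab_old, tab_new, app_user) purely illustrative:

BEGIN;
CREATE TABLE tab_new AS SELECT * FROM tab_old;                -- bulk copy, no dead tuples
CREATE INDEX tab_new_id_idx ON tab_new (id);                  -- recreate every index by hand
ALTER TABLE tab_old RENAME TO tab_old_bak;                    -- keep the old table as a fallback
ALTER TABLE tab_new RENAME TO tab_old;
GRANT SELECT, INSERT, UPDATE, DELETE ON tab_old TO app_user;  -- CTAS does not copy privileges
COMMIT;

Any writes that hit the old table after the copy starts are not carried over, which is why this approach needs a maintenance window.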

The code behind CLUSTER is the same as VACUUM (FULL), with a sort added on top. CLUSTER therefore has the same problems as VACUUM (FULL):

  • CLUSTER locks the table in ACCESS EXCLUSIVE mode, blocking all operations while it runs; VACUUM (FULL) does the same
  • it needs twice the table's size in free space

For a large table, depending on the data volume, VACUUM (FULL) or CLUSTER often runs for hours or even more than ten hours, and keeping the table locked that long is unacceptable: neither reads nor writes get through, so the lock amounts to a business outage.
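For reference, the manual commands look roughly like this; bloated_table and bloated_table_pkey are placeholder names, and both statements take an ACCESS EXCLUSIVE lock, so schedule a window:

VACUUM (FULL, VERBOSE, ANALYZE) bloated_table;     -- rewrite the table compactly; needs ~2x the table's space
CLUSTER bloated_table USING bloated_table_pkey;    -- same rewrite, but rows sorted by the given index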

pg_repack holds locks for far less time than VACUUM (FULL) or CLUSTER, roughly 20% of the lock time of a vacuum, which makes it a good choice. Still, even with the much shorter lock time, running pg_repack during off-peak hours is recommended.

Under the hood, pg_repack creates a temporary table, installs a trigger on the original table to capture data changes and replay them into the temporary table, rebuilds the indexes on the temporary table, and finally swaps the temporary table in for the original.
The working principle is very similar to MySQL's pt-online-schema-change tool.

The following sections cover deploying pg_repack and its basic usage.

II. When to treat table bloat

Before treating bloat, we need to know which tables need treatment; after treating it, we need to know exactly how much bloat was reclaimed.

Monitoring bloat at the database level: Show database bloat - PostgreSQL wiki

The SQL is below. Its estimate is fairly rough; focus on the tbloat and wastedbytes columns: the larger wastedbytes is, the more bloated the table.

SELECT
  current_database(), schemaname, tablename, /*reltuples::bigint, relpages::bigint, otta,*/
  ROUND((CASE WHEN otta=0 THEN 0.0 ELSE sml.relpages::float/otta END)::numeric,1) AS tbloat,
  CASE WHEN relpages < otta THEN 0 ELSE bs*(sml.relpages-otta)::BIGINT END AS wastedbytes,
  iname, /*ituples::bigint, ipages::bigint, iotta,*/
  ROUND((CASE WHEN iotta=0 OR ipages=0 THEN 0.0 ELSE ipages::float/iotta END)::numeric,1) AS ibloat,
  CASE WHEN ipages < iotta THEN 0 ELSE bs*(ipages-iotta) END AS wastedibytes
FROM (
  SELECT
    schemaname, tablename, cc.reltuples, cc.relpages, bs,
    CEIL((cc.reltuples*((datahdr+ma-
      (CASE WHEN datahdr%ma=0 THEN ma ELSE datahdr%ma END))+nullhdr2+4))/(bs-20::float)) AS otta,
    COALESCE(c2.relname,'?') AS iname, COALESCE(c2.reltuples,0) AS ituples, COALESCE(c2.relpages,0) AS ipages,
    COALESCE(CEIL((c2.reltuples*(datahdr-12))/(bs-20::float)),0) AS iotta -- very rough approximation, assumes all cols
  FROM (
    SELECT
      ma, bs, schemaname, tablename,
      (datawidth+(hdr+ma-(case when hdr%ma=0 THEN ma ELSE hdr%ma END)))::numeric AS datahdr,
      (maxfracsum*(nullhdr+ma-(case when nullhdr%ma=0 THEN ma ELSE nullhdr%ma END))) AS nullhdr2
    FROM (
      SELECT
        schemaname, tablename, hdr, ma, bs,
        SUM((1-null_frac)*avg_width) AS datawidth,
        MAX(null_frac) AS maxfracsum,
        hdr+(
          SELECT 1+count(*)/8
          FROM pg_stats s2
          WHERE null_frac<>0 AND s2.schemaname = s.schemaname AND s2.tablename = s.tablename
        ) AS nullhdr
      FROM pg_stats s, (
        SELECT
          (SELECT current_setting('block_size')::numeric) AS bs,
          CASE WHEN substring(v,12,3) IN ('8.0','8.1','8.2') THEN 27 ELSE 23 END AS hdr,
          CASE WHEN v ~ 'mingw32' THEN 8 ELSE 4 END AS ma
        FROM (SELECT version() AS v) AS foo
      ) AS constants
      GROUP BY 1,2,3,4,5
    ) AS foo
  ) AS rs
  JOIN pg_class cc ON cc.relname = rs.tablename
  JOIN pg_namespace nn ON cc.relnamespace = nn.oid AND nn.nspname = rs.schemaname AND nn.nspname <> 'information_schema'
  LEFT JOIN pg_index i ON indrelid = cc.oid
  LEFT JOIN pg_class c2 ON c2.oid = i.indexrelid
) AS sml
ORDER BY wastedbytes DESC;

Monitoring bloat at the table level:

https://github.com/ioguix/pgsql-bloat-estimation/blob/master/table/table_bloat.sql

Note: slightly modified to exclude the system schemas.

After running this query, focus on the bloat_pct column: the larger bloat_pct is, the more bloated the table.

/* WARNING: executed with a non-superuser role, the query inspect only tables and materialized view (9.3+) you are granted to read.
* This query is compatible with PostgreSQL 9.0 and more
*/
SELECT current_database(), schemaname, tblname, bs*tblpages AS real_size,
  (tblpages-est_tblpages)*bs AS extra_size,
  CASE WHEN tblpages > 0 AND tblpages - est_tblpages > 0
    THEN 100 * (tblpages - est_tblpages)/tblpages::float
    ELSE 0
  END AS extra_pct, fillfactor,
  CASE WHEN tblpages - est_tblpages_ff > 0
    THEN (tblpages-est_tblpages_ff)*bs
    ELSE 0
  END AS bloat_size,
  CASE WHEN tblpages > 0 AND tblpages - est_tblpages_ff > 0
    THEN 100 * (tblpages - est_tblpages_ff)/tblpages::float
    ELSE 0
  END AS bloat_pct, is_na
  -- , tpl_hdr_size, tpl_data_size, (pst).free_percent + (pst).dead_tuple_percent AS real_frag -- (DEBUG INFO)
FROM (
  SELECT ceil( reltuples / ( (bs-page_hdr)/tpl_size ) ) + ceil( toasttuples / 4 ) AS est_tblpages,
    ceil( reltuples / ( (bs-page_hdr)*fillfactor/(tpl_size*100) ) ) + ceil( toasttuples / 4 ) AS est_tblpages_ff,
    tblpages, fillfactor, bs, tblid, schemaname, tblname, heappages, toastpages, is_na
    -- , tpl_hdr_size, tpl_data_size, pgstattuple(tblid) AS pst -- (DEBUG INFO)
  FROM (
    SELECT
      ( 4 + tpl_hdr_size + tpl_data_size + (2*ma)
        - CASE WHEN tpl_hdr_size%ma = 0 THEN ma ELSE tpl_hdr_size%ma END
        - CASE WHEN ceil(tpl_data_size)::int%ma = 0 THEN ma ELSE ceil(tpl_data_size)::int%ma END
      ) AS tpl_size, bs - page_hdr AS size_per_block, (heappages + toastpages) AS tblpages, heappages,
      toastpages, reltuples, toasttuples, bs, page_hdr, tblid, schemaname, tblname, fillfactor, is_na
      -- , tpl_hdr_size, tpl_data_size
    FROM (
      SELECT
        tbl.oid AS tblid, ns.nspname AS schemaname, tbl.relname AS tblname, tbl.reltuples,
        tbl.relpages AS heappages, coalesce(toast.relpages, 0) AS toastpages,
        coalesce(toast.reltuples, 0) AS toasttuples,
        coalesce(substring(
          array_to_string(tbl.reloptions, ' ')
          FROM 'fillfactor=([0-9]+)')::smallint, 100) AS fillfactor,
        current_setting('block_size')::numeric AS bs,
        CASE WHEN version()~'mingw32' OR version()~'64-bit|x86_64|ppc64|ia64|amd64' THEN 8 ELSE 4 END AS ma,
        24 AS page_hdr,
        23 + CASE WHEN MAX(coalesce(s.null_frac,0)) > 0 THEN ( 7 + count(s.attname) ) / 8 ELSE 0::int END
           + CASE WHEN bool_or(att.attname = 'oid' and att.attnum < 0) THEN 4 ELSE 0 END AS tpl_hdr_size,
        sum( (1-coalesce(s.null_frac, 0)) * coalesce(s.avg_width, 0) ) AS tpl_data_size,
        bool_or(att.atttypid = 'pg_catalog.name'::regtype)
          OR sum(CASE WHEN att.attnum > 0 THEN 1 ELSE 0 END) <> count(s.attname) AS is_na
      FROM pg_attribute AS att
        JOIN pg_class AS tbl ON att.attrelid = tbl.oid
        JOIN pg_namespace AS ns ON ns.oid = tbl.relnamespace
        LEFT JOIN pg_stats AS s ON s.schemaname=ns.nspname
          AND s.tablename = tbl.relname AND s.inherited=false AND s.attname=att.attname
        LEFT JOIN pg_class AS toast ON tbl.reltoastrelid = toast.oid
      WHERE NOT att.attisdropped
        AND tbl.relkind in ('r','m')
        AND schemaname not IN ('pg_catalog','information_schema','repack')
      GROUP BY 1,2,3,4,5,6,7,8,9,10
      ORDER BY 2,3
    ) AS s
  ) AS s2
) AS s3
-- WHERE NOT is_na
--   AND tblpages*((pst).free_percent + (pst).dead_tuple_percent)::float4/100 >= 1
ORDER BY schemaname, tblname;

Monitoring bloat at the index level:

https://github.com/ioguix/pgsql-bloat-estimation/blob/master/btree/btree_bloat.sql

Note: slightly modified to exclude the system schemas.

-- WARNING: executed with a non-superuser role, the query inspect only index on tables you are granted to read.
-- WARNING: rows with is_na = 't' are known to have bad statistics ("name" type is not supported).
-- This query is compatible with PostgreSQL 8.2 and after
SELECT current_database(), nspname AS schemaname, tblname, idxname, bs*(relpages)::bigint AS real_size,
  bs*(relpages-est_pages)::bigint AS extra_size,
  100 * (relpages-est_pages)::float / relpages AS extra_pct,
  fillfactor,
  CASE WHEN relpages > est_pages_ff
    THEN bs*(relpages-est_pages_ff)
    ELSE 0
  END AS bloat_size,
  100 * (relpages-est_pages_ff)::float / relpages AS bloat_pct,
  is_na
  -- , 100-(pst).avg_leaf_density AS pst_avg_bloat, est_pages, index_tuple_hdr_bm, maxalign, pagehdr, nulldatawidth, nulldatahdrwidth, reltuples, relpages -- (DEBUG INFO)
FROM (
  SELECT coalesce(1 +
      ceil(reltuples/floor((bs-pageopqdata-pagehdr)/(4+nulldatahdrwidth)::float)), 0 -- ItemIdData size + computed avg size of a tuple (nulldatahdrwidth)
    ) AS est_pages,
    coalesce(1 +
      ceil(reltuples/floor((bs-pageopqdata-pagehdr)*fillfactor/(100*(4+nulldatahdrwidth)::float))), 0
    ) AS est_pages_ff,
    bs, nspname, tblname, idxname, relpages, fillfactor, is_na
    -- , pgstatindex(idxoid) AS pst, index_tuple_hdr_bm, maxalign, pagehdr, nulldatawidth, nulldatahdrwidth, reltuples -- (DEBUG INFO)
  FROM (
    SELECT maxalign, bs, nspname, tblname, idxname, reltuples, relpages, idxoid, fillfactor,
      ( index_tuple_hdr_bm +
          maxalign - CASE -- Add padding to the index tuple header to align on MAXALIGN
            WHEN index_tuple_hdr_bm%maxalign = 0 THEN maxalign
            ELSE index_tuple_hdr_bm%maxalign
          END
        + nulldatawidth + maxalign - CASE -- Add padding to the data to align on MAXALIGN
            WHEN nulldatawidth = 0 THEN 0
            WHEN nulldatawidth::integer%maxalign = 0 THEN maxalign
            ELSE nulldatawidth::integer%maxalign
          END
      )::numeric AS nulldatahdrwidth, pagehdr, pageopqdata, is_na
      -- , index_tuple_hdr_bm, nulldatawidth -- (DEBUG INFO)
    FROM (
      SELECT n.nspname, i.tblname, i.idxname, i.reltuples, i.relpages,
        i.idxoid, i.fillfactor, current_setting('block_size')::numeric AS bs,
        CASE -- MAXALIGN: 4 on 32bits, 8 on 64bits (and mingw32 ?)
          WHEN version() ~ 'mingw32' OR version() ~ '64-bit|x86_64|ppc64|ia64|amd64' THEN 8
          ELSE 4
        END AS maxalign,
        /* per page header, fixed size: 20 for 7.X, 24 for others */
        24 AS pagehdr,
        /* per page btree opaque data */
        16 AS pageopqdata,
        /* per tuple header: add IndexAttributeBitMapData if some cols are null-able */
        CASE WHEN max(coalesce(s.null_frac,0)) = 0
          THEN 8 -- IndexTupleData size
          ELSE 8 + (( 32 + 8 - 1 ) / 8) -- IndexTupleData size + IndexAttributeBitMapData size ( max num filed per index + 8 - 1 /8)
        END AS index_tuple_hdr_bm,
        /* data len: we remove null values save space using it fractionnal part from stats */
        sum( (1-coalesce(s.null_frac, 0)) * coalesce(s.avg_width, 1024)) AS nulldatawidth,
        max( CASE WHEN i.atttypid = 'pg_catalog.name'::regtype THEN 1 ELSE 0 END ) > 0 AS is_na
      FROM (
        SELECT ct.relname AS tblname, ct.relnamespace, ic.idxname, ic.attpos, ic.indkey, ic.indkey[ic.attpos], ic.reltuples, ic.relpages, ic.tbloid, ic.idxoid, ic.fillfactor,
          coalesce(a1.attnum, a2.attnum) AS attnum, coalesce(a1.attname, a2.attname) AS attname, coalesce(a1.atttypid, a2.atttypid) AS atttypid,
          CASE WHEN a1.attnum IS NULL
            THEN ic.idxname
            ELSE ct.relname
          END AS attrelname
        FROM (
          SELECT idxname, reltuples, relpages, tbloid, idxoid, fillfactor, indkey,
            pg_catalog.generate_series(1,indnatts) AS attpos
          FROM (
            SELECT ci.relname AS idxname, ci.reltuples, ci.relpages, i.indrelid AS tbloid,
              i.indexrelid AS idxoid,
              coalesce(substring(
                array_to_string(ci.reloptions, ' ')
                from 'fillfactor=([0-9]+)')::smallint, 90) AS fillfactor,
              i.indnatts,
              pg_catalog.string_to_array(pg_catalog.textin(
                pg_catalog.int2vectorout(i.indkey)),' ')::int[] AS indkey
            FROM pg_catalog.pg_index i
            JOIN pg_catalog.pg_class ci ON ci.oid = i.indexrelid
            WHERE ci.relam=(SELECT oid FROM pg_am WHERE amname = 'btree')
              AND ci.relpages > 0
          ) AS idx_data
        ) AS ic
        JOIN pg_catalog.pg_class ct ON ct.oid = ic.tbloid
        LEFT JOIN pg_catalog.pg_attribute a1 ON
          ic.indkey[ic.attpos] <> 0
          AND a1.attrelid = ic.tbloid
          AND a1.attnum = ic.indkey[ic.attpos]
        LEFT JOIN pg_catalog.pg_attribute a2 ON
          ic.indkey[ic.attpos] = 0
          AND a2.attrelid = ic.idxoid
          AND a2.attnum = ic.attpos
      ) i
      JOIN pg_catalog.pg_namespace n ON n.oid = i.relnamespace
      JOIN pg_catalog.pg_stats s ON s.schemaname = n.nspname
        AND s.tablename = i.attrelname
        AND s.attname = i.attname
        AND schemaname not IN ('pg_catalog','information_schema','repack')
      GROUP BY 1,2,3,4,5,6,7,8,9,10,11
    ) AS rows_data_stats
  ) AS rows_hdr_pdg_stats
) AS relation_stats
ORDER BY nspname, tblname, idxname;

III. Deploying pg_repack

pg_repack currently supports PostgreSQL up to version 16. Download: https://github.com/reorg/pg_repack/releases/tag/ver_1.5.0

Upload the tarball to the server, unpack it, and enter the source directory.

Install the build dependencies first:

yum install openssl openssl-devel readline readline-devel -y

Then build and install:

make && make install
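pg_repack builds through PGXS, so make has to find the pg_config of the PostgreSQL installation you are building against. A minimal sketch, assuming PostgreSQL lives under /data/pgsql as in the build log below:

export PATH=/data/pgsql/bin:$PATH   # pick up the right pg_config first
cd pg_repack-ver_1.5.0
make && make install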

The build log looks like this:

[root@centos10 pg_repack-ver_1.5.0]# make
make[1]: Entering directory `/root/pg_repack-ver_1.5.0/bin'
gcc -std=gnu99 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -O2 pg_repack.o pgut/pgut.o pgut/pgut-fe.o  -L/data/pgsql/lib   -Wl,--as-needed -Wl,-rpath,'/data/pgsql/lib',--enable-new-dtags  -L/data/pgsql/lib -lpq -L/data/pgsql/lib -lpgcommon -lpgport -lpthread -lz -lreadline -lrt -lcrypt -ldl -lm -o pg_repack
make[1]: Leaving directory `/root/pg_repack-ver_1.5.0/bin'
make[1]: Entering directory `/root/pg_repack-ver_1.5.0/lib'
gcc -std=gnu99 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -O2 -fPIC -DREPACK_VERSION=1.5.0 -I. -I./ -I/data/pgsql/include/server -I/data/pgsql/include/internal  -D_GNU_SOURCE   -c -o repack.o repack.c
gcc -std=gnu99 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -O2 -fPIC -DREPACK_VERSION=1.5.0 -I. -I./ -I/data/pgsql/include/server -I/data/pgsql/include/internal  -D_GNU_SOURCE   -c -o pgut/pgut-spi.o pgut/pgut-spi.c
( echo '{ global:'; gawk '/^[^#]/ {printf "%s;\n",$1}' exports.txt; echo ' local: *; };' ) >exports.list
gcc -std=gnu99 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -O2 -fPIC -shared -Wl,--version-script=exports.list -o pg_repack.so repack.o pgut/pgut-spi.o -L/data/pgsql/lib    -Wl,--as-needed -Wl,-rpath,'/data/pgsql/lib',--enable-new-dtags  
sed 's,REPACK_VERSION,1.5.0,g' pg_repack.sql.in \
| sed 's,relhasoids,false,g'> pg_repack--1.5.0.sql;
sed 's,REPACK_VERSION,1.5.0,g' pg_repack.control.in > pg_repack.control
make[1]: Leaving directory `/root/pg_repack-ver_1.5.0/lib'
make[1]: Entering directory `/root/pg_repack-ver_1.5.0/regress'
make[1]: Nothing to be done for `all'.
make[1]: Leaving directory `/root/pg_repack-ver_1.5.0/regress'
[root@centos10 pg_repack-ver_1.5.0]# echo $?
0
[root@centos10 pg_repack-ver_1.5.0]# make install
make[1]: Entering directory `/root/pg_repack-ver_1.5.0/bin'
/usr/bin/mkdir -p '/data/pgsql/bin'
/usr/bin/install -c  pg_repack '/data/pgsql/bin'
make[1]: Leaving directory `/root/pg_repack-ver_1.5.0/bin'
make[1]: Entering directory `/root/pg_repack-ver_1.5.0/lib'
/usr/bin/mkdir -p '/data/pgsql/lib'
/usr/bin/mkdir -p '/data/pgsql/share/extension'
/usr/bin/mkdir -p '/data/pgsql/share/extension'
/usr/bin/install -c -m 755  pg_repack.so '/data/pgsql/lib/pg_repack.so'
/usr/bin/install -c -m 644 .//pg_repack.control '/data/pgsql/share/extension/'
/usr/bin/install -c -m 644  pg_repack--1.5.0.sql pg_repack.control '/data/pgsql/share/extension/'
make[1]: Leaving directory `/root/pg_repack-ver_1.5.0/lib'
make[1]: Entering directory `/root/pg_repack-ver_1.5.0/regress'
make[1]: Nothing to be done for `install'.
make[1]: Leaving directory `/root/pg_repack-ver_1.5.0/regress'
[root@centos10 pg_repack-ver_1.5.0]# 
[root@centos10 pg_repack-ver_1.5.0]# cd
[root@centos10 ~]# whereis pg_repack
pg_repack: /data/pgsql/bin/pg_repack

Log in to the PostgreSQL command line and activate the extension. Switch into whichever database needs it:

For example, to use the extension in a database named test:

\c test
create extension pg_repack;

Once the extension is activated, you will see a schema named repack containing a set of related functions and two views:
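A few psql meta-commands to verify the installation (a sketch; the output varies by version):

\dx pg_repack    -- the extension and its version
\dn              -- the new repack schema should be listed
\df repack.*     -- the functions installed by the extension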

IV. Testing pg_repack's bloat treatment

1. Create a new database and a large table

The new database is named test. The table, testpg, has only two columns and is loaded with 20 million rows:

create or replace function gen_id(
  a date,
  b date
)
returns text as $$
select lpad((random()*99)::int::text, 3, '0') ||
       lpad((random()*99)::int::text, 3, '0') ||
       lpad((random()*99)::int::text, 3, '0') ||
       to_char(a + (random()*(b-a))::int, 'yyyymmdd') ||
       lpad((random()*99)::int::text, 3, '0') ||
       random()::int ||
       (case when random()*10 > 9 then 'xy' else (random()*9)::int::text end);
$$ language sql strict;

CREATE SEQUENCE test START 1;

create table if not exists testpg (
  "id" int8 not null DEFAULT nextval('test'::regclass),
  "suijishuzi" VARCHAR(255) COLLATE "pg_catalog"."default",
  CONSTRAINT "user_vendorcode_pkey" PRIMARY KEY ("id")
);

insert into testpg
SELECT generate_series(1,20000000) AS xm,
       gen_id('1949-01-01', '2023-10-16') AS num;

2. Check the table's size and bloat with the SQL below:

SELECT
  table_schema,
  table_name,
  pg_size_pretty(table_size) AS table_size,
  pg_size_pretty(indexes_size) AS indexes_size,
  pg_size_pretty(total_size) AS total_size
FROM (
  SELECT
    table_schema,
    table_name,
    pg_table_size(table_name) AS table_size,
    pg_indexes_size(table_name) AS indexes_size,
    pg_total_relation_size(table_name) AS total_size
  FROM (
    SELECT table_schema, ('"' || table_schema || '"."' || table_name || '"') AS table_name
    FROM information_schema.tables
  ) AS all_tables
  ORDER BY total_size DESC
) AS pretty_sizes
WHERE table_schema NOT IN ('pg_catalog','information_schema','repack');

Run it to record the table's current size, then run the table-level bloat query from section II to record the baseline bloat.

3. Update testpg heavily to create bloat deliberately:

UPDATE testpg SET suijishuzi = '123456789' WHERE suijishuzi LIKE '%1%';

Checking the table size and bloat rate again shows the bloat rate has reached roughly 45%.

4. Treat the bloat with pg_repack:

[root@centos10 ~]# su - postgres -c "/data/pgsql/bin/pg_repack -d test  -t public.testpg"
INFO: repacking table "public.testpg"
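Beyond -d and -t, a few other switches are often useful. A sketch (verify against pg_repack --help for your build):

# show what would be repacked without touching anything
pg_repack -d test -t public.testpg --dry-run
# rebuild the table's indexes with several parallel workers
pg_repack -d test -t public.testpg -j 4
# repack only the table's indexes, leaving the heap alone
pg_repack -d test -t public.testpg -x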

While the repack runs, the table can still be read and written normally. Once it finishes, check the table size and bloat rate again: the treatment succeeded completely.

Disk usage agrees with the result, dropping from 4.0G before the repack to 2.7G after:

[root@centos10 ~]# du -sh /data/pgsql/data/
4.0G	/data/pgsql/data/
[root@centos10 ~]# du -sh /data/pgsql/data/
2.7G	/data/pgsql/data/

To be continued!
