
A Small Problem Encountered During PostgreSQL Streaming Replication Failover

2023-11-13 · Source: 要发发知识网

Streaming replication with primary/standby switchover was set up on PostgreSQL 10 (pg10).

Configuration: primary 10.10.10.13 (pghost4), standby 10.10.10.14 (pghost5).

To simulate a primary failure, the primary was stopped and the standby automatically took over as the new primary. The old primary should now act as the standby, so I started it to verify its state — and it would not start. It was brought up again with the following steps.

  1. Starting the new standby with pg_ctl start produced the following error log.

[root@pghost4 ~]# su - pg10
pg10@pghost4->pg_ctl start
waiting for server to start....
2019-12-20 21:44:10.004 CST [23416] LOG:  listening on IPv4 address "0.0.0.0", port 1921
2019-12-20 21:44:10.004 CST [23416] LOG:  listening on IPv6 address "::", port 1921
2019-12-20 21:44:10.006 CST [23416] LOG:  listening on Unix socket "/tmp/.s.PGSQL.1921"
2019-12-20 21:44:10.014 CST [23417] LOG:  database system was shut down in recovery at 2019-12-20 21:33:31 CST
2019-12-20 21:44:10.014 CST [23417] LOG:  entering standby mode
2019-12-20 21:44:10.014 CST [23417] FATAL:  requested timeline 2 is not a child of this server's history
2019-12-20 21:44:10.014 CST [23417] DETAIL:  Latest checkpoint is at 0/11000028 on timeline 1, but in the history of the requested timeline, the server forked off from that timeline at 0/C000098.
2019-12-20 21:44:10.015 CST [23416] LOG:  startup process (PID 23417) exited with exit code 1
2019-12-20 21:44:10.015 CST [23416] LOG:  aborting startup due to startup process failure
2019-12-20 21:44:10.016 CST [23416] LOG:  database system is shut down
 stopped waiting
pg_ctl: could not start server
Examine the log output.

From the message above, the 00000002.history file under the new standby's pg_wal directory is inconsistent with the one on the new primary.

pg10@pghost4->ll
total 273M
-rw------- 1 pg10 pg10  16M Dec 18 22:40 000000010000000000000001
-rw------- 1 pg10 pg10  16M Dec 18 23:01 000000010000000000000002
-rw------- 1 pg10 pg10  16M Dec 18 23:02 000000010000000000000003
-rw------- 1 pg10 pg10  16M Dec 18 23:06 000000010000000000000004
-rw------- 1 pg10 pg10  16M Dec 18 23:39 000000010000000000000005
-rw------- 1 pg10 pg10  16M Dec 18 23:51 000000010000000000000006
-rw------- 1 pg10 pg10  16M Dec 18 23:55 000000010000000000000007
-rw------- 1 pg10 pg10  16M Dec 18 23:58 000000010000000000000008
-rw------- 1 pg10 pg10  16M Dec 19 00:15 000000010000000000000009
-rw------- 1 pg10 pg10  16M Dec 19 00:15 00000001000000000000000A
-rw------- 1 pg10 pg10  16M Dec 19 00:27 00000001000000000000000B
-rw------- 1 pg10 pg10  16M Dec 19 00:29 00000001000000000000000C
-rw------- 1 pg10 pg10  16M Dec 19 00:34 00000001000000000000000D
-rw------- 1 pg10 pg10  16M Dec 19 00:36 00000001000000000000000E
-rw------- 1 pg10 pg10  16M Dec 19 00:36 00000001000000000000000F
-rw------- 1 pg10 pg10  302 Dec 19 00:36 00000001000000000000000F.00000028.backup
-rw------- 1 pg10 pg10  16M Dec 19 00:38 000000010000000000000010
-rw------- 1 pg10 pg10  16M Dec 20 21:33 000000010000000000000011
-rw------- 1 pg10 pg10   41 Dec 19 00:30 00000002.history
drwx------ 2 pg10 pg10 4.0K Dec 20 21:31 archive_status
pg10@pghost4->more 00000002.history
1	0/C000098	no recovery target specified
pg10@pghost4->

Then look at the same path on the new primary:

pg10@pghost4->ll
total 132K
-rw------- 1 pg10 pg10  206 Dec 19 00:36 backup_label.old
drwx------ 5 pg10 pg10 4.0K Dec 19 00:36 base
drwx------ 2 pg10 pg10 4.0K Dec 20 21:27 global
drwx------ 2 pg10 pg10 4.0K Dec 19 00:36 pg_commit_ts
drwx------ 2 pg10 pg10 4.0K Dec 19 00:36 pg_dynshmem
-rw------- 1 pg10 pg10 4.6K Dec 19 00:36 pg_hba.conf
-rw------- 1 pg10 pg10 1.6K Dec 19 00:36 pg_ident.conf
drwx------ 4 pg10 pg10 4.0K Dec 19 00:38 pg_logical
drwx------ 4 pg10 pg10 4.0K Dec 19 00:36 pg_multixact
drwx------ 2 pg10 pg10 4.0K Dec 19 00:37 pg_notify
drwx------ 2 pg10 pg10 4.0K Dec 19 00:36 pg_replslot
drwx------ 2 pg10 pg10 4.0K Dec 19 00:36 pg_serial
drwx------ 2 pg10 pg10 4.0K Dec 19 00:36 pg_snapshots
drwx------ 2 pg10 pg10 4.0K Dec 19 00:36 pg_stat
drwx------ 2 pg10 pg10 4.0K Dec 20 21:48 pg_stat_tmp
drwx------ 2 pg10 pg10 4.0K Dec 19 00:38 pg_subtrans
drwx------ 2 pg10 pg10 4.0K Dec 19 00:36 pg_tblspc
drwx------ 2 pg10 pg10 4.0K Dec 19 00:36 pg_twophase
-rw------- 1 pg10 pg10    3 Dec 19 00:36 PG_VERSION
drwx------ 3 pg10 pg10 4.0K Dec 19 00:38 pg_wal
drwx------ 2 pg10 pg10 4.0K Dec 19 00:36 pg_xact
-rw------- 1 pg10 pg10   88 Dec 19 00:36 postgresql.auto.conf
-rw------- 1 pg10 pg10  23K Dec 19 00:36 postgresql.conf
-rw------- 1 pg10 pg10   31 Dec 19 00:37 postmaster.opts
-rw------- 1 pg10 pg10   74 Dec 19 00:38 postmaster.pid
-rw-r--r-- 1 pg10 pg10 5.8K Dec 19 00:37 recovery.done
pg10@pghost4->cd pg_wal/
pg10@pghost4->ll
-rw------- 1 pg10 pg10  16M Dec 19 00:36 00000001000000000000000F
-rw------- 1 pg10 pg10  16M Dec 19 00:38 000000010000000000000010
-rw------- 1 pg10 pg10  16M Dec 19 00:38 000000010000000000000011.partial
-rw------- 1 pg10 pg10  16M Dec 19 00:38 000000020000000000000011
-rw------- 1 pg10 pg10   42 Dec 19 00:38 00000002.history
drwx------ 2 pg10 pg10 4.0K Dec 19 00:38 archive_status
pg10@pghost4->pwd
pg10@pghost4->more 00000002.history
1	0/11000098	no recovery target specified

Back up and then delete the 00000002.history file on the new standby, copy the file over from the new primary, and start the standby again.
pg10@pghost4->scp 00000002.history 10.10.10.13:/data/pg10/data/pg_wal/
pg10@10.10.10.13's password:
00000002.history                              100%   42     0.0KB/s   00:00
pg10@pghost4->

pg10@pghost4->pg_ctl start
waiting for server to start....
2019-12-20 21:51:25.142 CST [23486] LOG:  listening on IPv4 address "0.0.0.0", port 1921
2019-12-20 21:51:25.142 CST [23486] LOG:  listening on IPv6 address "::", port 1921
2019-12-20 21:51:25.144 CST [23486] LOG:  listening on Unix socket "/tmp/.s.PGSQL.1921"
2019-12-20 21:51:25.155 CST [23487] LOG:  database system was shut down in recovery at 2019-12-20 21:33:31 CST
2019-12-20 21:51:25.155 CST [23487] LOG:  entering standby mode
2019-12-20 21:51:25.171 CST [23487] LOG:  consistent recovery state reached at 0/11000098
2019-12-20 21:51:25.171 CST [23487] LOG:  invalid record length at 0/11000098: wanted 24, got 0
2019-12-20 21:51:25.171 CST [23486] LOG:  database system is ready to accept read only connections
2019-12-20 21:51:25.179 CST [23491] LOG:  started streaming WAL from primary at 0/11000000 on timeline 2
2019-12-20 21:51:25.225 CST [23487] LOG:  redo starts at 0/11000098
 done
server started
pg10@pghost4->

The standby now starts successfully.

pg10@pghost4->pg_controldata | grep cluster
Database cluster state:               in archive recovery
pg10@pghost4->

The new standby is indeed running in standby (archive recovery) mode.
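
As an extra check (a small sketch, not part of the original session; the port 1921 comes from the log above), the standby role can also be confirmed directly:

psql -p 1921 -c "select pg_is_in_recovery();"   # returns t on a standby
pg_controldata | grep -i timeline               # shows the timeline of the latest checkpoint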


The editor has also compiled the following related content, which may be helpful:

A question about using PostgreSQL's COPY from Perl

 

Example 1. Copy an entire table to standard output

test=# copy tbl_test1 to stdout;
1	HA	12
2	ha	543

 

Example 2. Copy selected columns to standard output, with a header row and ',' as the column delimiter

test=# copy tbl_test1(a,b) to stdout delimiter ',' csv header;
a,b
1,HA
2,ha

 

Example 3. Copy a query result to standard output

test=# copy (select a,b from tbl_test1 except select e,f from tbl_test2) to stdout delimiter ',' quote '"' csv header;
a,b
2,ha

 

A few things to note when copying from standard input into a table:
1. The default column delimiter is a tab.
2. Rows are separated by newlines (press Enter after each row).
3. Input is terminated with a backslash followed by a period (\.) on a line of its own.
4. It is best to specify the column list explicitly, otherwise values may end up in the wrong columns.

 

Example 4. Copy from standard input into a table

test=# copy tbl_test1(a,b,c) from stdin;
Enter data to be copied followed by a newline.
End with a backslash and a period on a line by itself.
>> 1	公举	公主
>> 2	万岁	万万岁
>> \.
COPY 2
test=# select * from tbl_test1;
 a |  b   |   c
---+------+--------
 1 | HA   | 12
 2 | ha   | 543
 1 | 公举 | 公主
 2 | 万岁 | 万万岁
(4 rows)

 

Example 5. Copy from standard input into a table, treating the first input line as a header (it does not have to match the table's column names; with the header option COPY simply skips the first line), with ',' as the delimiter

test=# copy tbl_test1(a,b,c) from stdin delimiter ',' csv header;
Enter data to be copied followed by a newline.
End with a backslash and a period on a line by itself.
>> a,b,c
>> 3,你好,hello
>> 4,超人,super
>> \.
COPY 2
test=# select * from tbl_test1;
 a |  b   |   c
---+------+--------
 1 | HA   | 12
 2 | ha   | 543
 1 | 公举 | 公主
 2 | 万岁 | 万万岁
 3 | 你好 | hello
 4 | 超人 | super
(6 rows)

 

The examples above copy between a table and standard output/input. Copying between a table and a file works exactly the same way, simply replacing stdout/stdin with a file path. Note the following:
1. The database server user must have write permission on the directory containing the file.
2. If the table contains Chinese characters, set the encoding to GBK when exporting to CSV; otherwise the Chinese text appears garbled when the file is opened in Excel.
3. Encoding must likewise be considered when importing the file back into a table.
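
When the database user has no filesystem access on the server, psql's client-side \copy is an alternative worth knowing. A sketch reusing the tbl_test1 table from these examples (the database name test is an assumption):

psql -d test -c "\copy tbl_test1 to '/tmp/tbl_test1.csv' with (format csv, header, delimiter ',', encoding 'GBK')"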

 

Example 6. Copy a table to a CSV file

test=# copy tbl_test1 to '/tmp/tbl_test1.csv' delimiter ',' csv header;
COPY 6

Opening the file in Excel, the Chinese text is garbled.


 

Example 7. Copy a table to a CSV file using GBK encoding

 

test=# copy tbl_test1 to '/tmp/tbl_test1.csv' delimiter ',' csv header encoding 'GBK';
COPY 6

 

Opened in Excel, the Chinese text now displays correctly.


 

Example 8. Copy the exported file back into the table using the default UTF8 encoding

test=# copy tbl_test1(a,b,c) from '/tmp/tbl_test1.csv' delimiter ',' csv header;
ERROR:  invalid byte sequence for encoding "UTF8": 0xb9
CONTEXT:  COPY tbl_test1, line 4

 

Example 9. Copy the exported file back into the table using GBK encoding

test=# copy tbl_test1(a,b,c) from '/tmp/tbl_test1.csv' delimiter ',' csv header encoding 'GBK';
COPY 6

 

Hi, about the MySQL query I just asked you — I ran into a problem with it in pgsql, could you take a look?

For PostgreSQL, use this:

select * from test where id in (select max(id) from test group by concat(name,code));
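
For reference, the same "latest row per (name, code)" result can also be written with PostgreSQL's DISTINCT ON — a sketch that assumes the same test table with id, name and code columns (the database name mydb is a placeholder):

psql -d mydb <<'SQL'
-- keep one row per (name, code), choosing the row with the largest id
select distinct on (name, code) *
from test
order by name, code, id desc;
SQL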

PHP querying pgsql: results only come back if the SQL ends with a semicolon

    pgsql requires a statement terminator (;).
    When using psql with PostgreSQL, a command must be followed by ; (a semicolon) or by \g to mark the end of the statement before the query is executed.
    SQL statements in general use a semicolon to mark their end — MySQL and SQL Server do as well, it is just often optional there; when several statements are executed together they must be separated.
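
To illustrate, both terminators behave the same way in an interactive psql session (a minimal example; the test database is an assumption):

psql -d test
test=# select now();      -- the semicolon ends the statement, so it executes immediately
test=# select now() \g    -- \g also sends the current query buffer for execution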

Follow-up: The odd part is that some SQL statements return results even without the trailing semicolon — I suspect pgsql's query cache.

PostgreSQL: error importing a .backup file with pgAdmin III

Open Start, click Run, type cmd and press Enter. At the command prompt, copy the following command, then right-click the command prompt window and click Paste:
for %1 in (%windir%\system32\*.ocx) do regsvr32 /s %1
Press Enter, and once the output stops scrolling, enter:
for %1 in (%windir%\system32\*.dll) do regsvr32.exe /s %1
Press Enter, wait until the screen stops scrolling, then reboot the machine. Follow-up: Still not working — after selecting the .backup file, the object list and the Restore button are greyed out and cannot be clicked. The file was backed up by someone else and sent to me; backups I make on my own machine restore fine.
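
If pgAdmin keeps the Restore button greyed out, restoring the file from the command line with pg_restore is worth trying — a sketch only, where the host, user, target database and file path are assumptions:

pg_restore --host localhost --port 5432 --username postgres --dbname targetdb --verbose "C:\received\databasename.backup"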

A question about PostgreSQL + Access

Option 1:
If the size of the data before base64 encoding is below the Memo field's limit, it can still be inserted directly. The text contains many ' characters, but each one can be escaped as two quotes ('') so the SQL statement inserts it into the table successfully.

Option 2:
If the size before base64 encoding already exceeds the Memo field's limit, try one of the following data types:
OLE Object
An object linked into a Microsoft Access table (an OLE/DDE link: a connection between an OLE object and its OLE server, or between the source and target documents of a Dynamic Data Exchange) or embedded in it (embedding: inserting a copy of an OLE object from another application; the source object, called the OLE server, can be any application that supports linking and embedding, and changes to the embedded copy are not reflected in the original) — for example a Microsoft Excel spreadsheet, a Microsoft Word document, graphics, sound or other binary data. Up to 1 GB (limited by available disk space).

Hyperlink
Text, or a combination of text and numbers stored as text, used as a hyperlink address (a hyperlink address is a path to a target such as an object, a document or a web page; it can be a URL — the address of an Internet or intranet site — or a UNC network path — the address of a file on a local network). A hyperlink address has up to three parts:
Display text: the text shown in the field or control.
Address: the path to a file (a UNC path — Universal Naming Convention, a machine-independent way of locating a file, written as \\server\share\path\filename instead of a drive letter and path) or a page (a URL — Uniform Resource Locator, an address that specifies a protocol such as HTTP or FTP and the location of a target on the Internet or an intranet, e.g. http://www.microsoft.com/).
Subaddress: a location within the file or page.
ScreenTip: the text displayed as a tooltip.
Each part of the Hyperlink data type (an Access field type that stores hyperlink addresses written as displaytext#address#subaddress#) can contain at most 2048 characters.
Attachment: any supported file type. Images, spreadsheet files, documents, charts and other supported file types can be attached to a record, much like attaching files to an e-mail; the attached files can also be viewed and edited, depending on how the database designer set up the attachment field. Compared with an OLE Object field, an Attachment field is more flexible and uses storage more efficiently, because it does not create a bitmap image of the original file.

Postgres fails to connect to the local database: "Connection refused — check the host name and port, and make sure the postmaster accepts TCP/IP connections"

I ran into the same thing. In my case the map viewer had frozen, so I killed the postgres.exe process from Task Manager; when the project was restarted it reported this error. My fix was to find PostgreSQL in the Start menu and restart the service (Start Server); after that the project ran fine. Hope this helps.
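
For reference, the service can also be restarted and checked from a command prompt — a sketch, since the data directory path depends on the installation:

pg_ctl -D "C:\Program Files\PostgreSQL\9.5\data" start   # start the server (path is an assumption)
pg_isready -h localhost -p 5432                          # confirms the server accepts TCP/IP connections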

PostgreSQL streaming replication: how do I verify a backup?

PostgreSQL ships with the pgAdmin client, which has Backup and Restore options that can back up and restore a database. But I found that once the database grows large it fails frequently — the chance of an error during the backup is considerable and there is little to tune by hand — so I kept looking for a solid backup and restore approach.

After much searching it turned out the answer was already built in: PostgreSQL bundles the tools pg_dump and psql. Both live under the installation directory; on my local install the path looks like C:\Program Files\PostgreSQL\9.5\. In the bin folder there are many .exe files — these are PostgreSQL's built-in tools, among them pg_dump.exe and psql.exe. How do we use them?
Usage:
To back up a database:
pg_dump -h 164.82.233.54 -U postgres databasename > C:\databasename.bak

Open Start → Run, type cmd to open a command prompt, then change into the PostgreSQL bin directory:
cd C:\Program Files\PostgreSQL\9.0\bin
and run the backup command:
pg_dump -h 164.82.233.54 -U postgres databasename > C:\databasename.bak

Explanation: pg_dump is the backup command; 164.82.233.54 is the database server's IP address (the database must allow external access — for a local database use localhost); postgres is the database user; databasename is the database name. > redirects the output into C:\databasename.bak. If you give only a file name such as databasename.bak without a path, the backup file ends up in C:\Program Files\PostgreSQL\9.0\bin.

To restore a database:
psql -h localhost -U postgres -d databasename < C:\databasename.bak   (this did not work in my test)

pg_restore.exe --host localhost --port 5432 --username "postgres" --dbname "symbolmcnew" --no-password --verbose "databasename.backup"   (this worked in my test)

Explanation: psql is the restore command; localhost is the address of the database to restore into (an IP address works as well, so a remote restore is possible, provided the database allows external access); postgres is the user of the target database; databasename is the target database. < feeds C:\databasename.bak into that database.
Everything above is for Windows — does it also work on Linux?

It does. One thing to note: if you simply cd into PostgreSQL's bin directory and run the commands, pg_dump or psql may not be found, so call them with their full path:
Backup:
/opt/PostgreSQL/9.5/bin/pg_dump -h 164.82.233.54 -U postgres databasename > databasename.bak

Restore:
/opt/PostgreSQL/9.5/bin/psql -h localhost -U postgres -d databasename < databasename.bak
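
For completeness, a dump made with pg_dump's custom format pairs naturally with pg_restore — a sketch in which host, user and database name are placeholders:

pg_dump -h localhost -U postgres -Fc -f /tmp/databasename.dump databasename
pg_restore -h localhost -U postgres -d databasename --verbose /tmp/databasename.dump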

A query question in PostgreSQL

In this situation the best approach is to add one extra table per device that stores only the most recent row: every time new data arrives, delete the old row and insert the new one. Queries then read the "latest" table, which contains a single row, so they are very fast. Follow-up: I still feel there ought to be a better way.
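
A minimal sketch of the suggested approach (table and column names are assumptions; ON CONFLICT needs PostgreSQL 9.5 or later): keep a one-row-per-device "latest" table maintained by an upsert instead of a delete-then-insert pair.

psql -d mydb <<'SQL'
-- hypothetical table holding only the newest reading per device
create table if not exists device_latest (
    device_id  int primary key,
    reading    numeric,
    created_at timestamptz not null default now()
);
-- on every new reading, replace the previous row for that device
insert into device_latest (device_id, reading, created_at)
values (42, 17.5, now())
on conflict (device_id)
do update set reading = excluded.reading, created_at = excluded.created_at;
SQL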

A question about users when backing up and restoring PostgreSQL

I tested this locally and looked at the exported SQL file.

After each CREATE TABLE ..... statement there is an
ALTER TABLE public....... OWNER TO <user of the source database>;

The same is true for functions:

after each CREATE FUNCTION ..... statement there is an
ALTER FUNCTION public....... OWNER TO <user of the source database>;

If this process of yours is not fully automated,
you could simply edit that SQL file after exporting:

replace every OWNER TO Auser with OWNER TO Buser,
then save and try the import again.
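
That edit can be done in one line — a sketch where Auser, Buser and the dump file name are placeholders; alternatively, pg_dump's --no-owner option skips the OWNER TO statements altogether:

sed -i 's/OWNER TO Auser/OWNER TO Buser/g' dump.sql   # rewrite ownership in the plain-SQL dump
psql -U Buser -d targetdb -f dump.sql                 # then import as the new owner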

PostgreSQL cursor errors with pgpool and PHP — how to solve them?

PostgreSQL + pgpool-II + Keepalived two-node HA solution

(Note: unless stated otherwise, every operation in this document should be performed as the postgres user; only operations the postgres user cannot perform should be run as root. Throughout this plan the postgres user and its password may be used in place of the replica user and its password.)

(Figure: overall architecture of the solution)
(Figure: failure of pgpool on 192.168.100.3)
(Figure: failure of pgpool on 192.168.100.4)
(Figure: failure of the master on 192.168.100.3)

 

1. Environment information

OS: CentOS 6.6
Database: PostgreSQL 9.3.5
Two physical machines: node1, node2
Keepalived: keepalived-1.2.15
pgpool: pgpool-II-3.3.3

2. Install the Linux OS, CentOS 6.6 (on both physical machines)

Primary node: node1, IP 192.168.100.3
Standby node: node2, IP 192.168.2.4
VIP: 192.168.100.4

3. System adjustments after installing Linux (on both machines)

3.1 Synchronize the clocks of the primary and standby nodes:

/usr/sbin/ntpdate asia.pool.ntp.org && /sbin/hwclock --systohc

 

3.2 Name the two machines node1 and node2

Name the first machine node1:

3.2.1 Edit the hostname in /etc/sysconfig/network
vi /etc/sysconfig/network
HOSTNAME=node1   # change localhost.localdomain to node1

3.2.2 Edit /etc/hosts
vi /etc/hosts
127.0.0.1 node1 localhost   # change localhost.localdomain to node1
shutdown -r now   # finally, reboot the server

Name the second machine node2:

3.2.3 Edit the hostname in /etc/sysconfig/network
vi /etc/sysconfig/network
HOSTNAME=node2   # change localhost.localdomain to node2

3.2.4 Edit /etc/hosts
vi /etc/hosts
127.0.0.1 node2 localhost   # change localhost.localdomain to node2
shutdown -r now   # finally, reboot the server

 

3.3 OS resource tuning

Time zone adjustment (skip if the clocks are already synchronized):

vi /etc/sysconfig/clock
ZONE="Asia/Shanghai"
UTC=false
ARC=false

vi /etc/sysconfig/i18n
LANG="en_US.UTF-8"

 

vi /etc/sysctl.conf
kernel.shmmni = 4096
kernel.sem = 50100 64128000 50100 1280
fs.file-max = 7672460
net.ipv4.ip_local_port_range = 9000 65000
net.core.rmem_default = 1048576
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_max_syn_backlog = 4096
net.core.netdev_max_backlog = 10000
vm.overcommit_memory = 0
net.ipv4.ip_conntrack_max = 655360
fs.aio-max-nr = 1048576
net.ipv4.tcp_timestamps = 0

Apply the changes:

sysctl -p

 

vi /etc/security/limits.conf
* soft nofile 131072
* hard nofile 131072
* soft nproc 131072
* hard nproc 131072
* soft core unlimited
* hard core unlimited
* soft memlock 50000000
* hard memlock 50000000

vi /etc/sysconfig/selinux
SELINUX=disabled
setenforce 0

 

Configure the firewall as needed:

vi /etc/sysconfig/iptables
-A INPUT -s 192.168.0.0/16 -j ACCEPT
-A INPUT -s 10.0.0.0/8 -j ACCEPT
-A INPUT -s 172.16.0.0/16 -j ACCEPT
# or
-A INPUT -m state --state NEW -m tcp -p tcp --dport 5432 -j ACCEPT

service iptables restart

 

3.4 Create the postgres user on both nodes and set its password

useradd postgres
passwd postgres   # set the password (e.g. postgres)

 

3.5 Set up passwordless SSH login between the two machines

[postgres@node1]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/var/lib/pgsql/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /var/lib/pgsql/.ssh/id_rsa.
Your public key has been saved in /var/lib/pgsql/.ssh/id_rsa.pub.
The key fingerprint is:
[postgres@node1]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[postgres@node1]$ chmod go-rwx ~/.ssh/*

[postgres@node2]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/var/lib/pgsql/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /var/lib/pgsql/.ssh/id_rsa.
Your public key has been saved in /var/lib/pgsql/.ssh/id_rsa.pub.
The key fingerprint is:
[postgres@node2]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[postgres@node2]$ chmod go-rwx ~/.ssh/*

[postgres@node1]$ cd ~/.ssh
[postgres@node1]$ scp id_rsa.pub postgres@node2:/home/postgres/.ssh/id_rsa.pub1
[postgres@node2]$ cd ~/.ssh
[postgres@node2]$ cat id_rsa.pub1 >> authorized_keys
[postgres@node2]$ scp id_rsa.pub postgres@node1:/home/postgres/.ssh/id_rsa.pub2
[postgres@node1]$ cat id_rsa.pub2 >> authorized_keys
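
A quick way to confirm that key-based login now works in both directions (using the node names from this document):

[postgres@node1]$ ssh node2 hostname   # should print node2 without prompting for a password
[postgres@node2]$ ssh node1 hostname   # should print node1 without prompting for a password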

3.6 Create a directory for the source packages

On both nodes create /opt/soft_bak under /opt/ to hold the installation packages, e.g. postgresql-9.3.5.tar.gz:

mkdir /opt/soft_bak

Set the owner of the relevant directories. PostgreSQL must be installed as the postgres user, so change the ownership of the target directories before installing:

chown -R postgres:postgres /opt/soft_bak
chown -R postgres:postgres /usr/local/

 

4. Install PostgreSQL's build dependencies:

yum -y install lrzsz sysstat e4fsprogs ntp readline-devel zlib zlib-devel openssl openssl-devel pam-devel libxml2-devel libxslt-devel python-devel tcl-devel gcc make flex bison

 

5. Download PostgreSQL 9.3.5, pgpool-II-3.3.3.tar.gz and keepalived-1.2.15

Before installing PostgreSQL, /opt/soft_bak should contain postgresql-9.3.5.tar.gz, pgpool-II-3.3.3.tar.gz, etc.

 

6. Install PostgreSQL

On the primary (install as the postgres user):

cd /opt/soft_bak/
tar zxvf postgresql-9.3.5.tar.gz
cd /opt/soft_bak/postgresql-9.3.5
./configure --with-pgport=5432 --with-perl --with-tcl --with-python --with-openssl --with-pam --without-ldap --with-libxml --with-libxslt --with-blocksize=8
gmake world
gmake install-world

On the standby (install as the postgres user; the standby only needs the software installed, the database must not be initialized):

cd /opt/soft_bak/
tar zxvf postgresql-9.3.5.tar.gz
cd /opt/soft_bak/postgresql-9.3.5
./configure --with-pgport=5432 --with-perl --with-tcl --with-python --with-openssl --with-pam --without-ldap --with-libxml --with-libxslt --with-blocksize=8
gmake world
gmake install-world

 

7. Set the environment variables on both nodes (they should be identical)

vi /home/postgres/.bash_profile
export PGPORT=5432
export PGDATA=/file/data
export LANG=en_US.utf8
export PGHOME=/usr/local/pgsql
export LD_LIBRARY_PATH=$PGHOME/lib:/lib64:/usr/lib64:/usr/local/lib64:/lib:/usr/lib:/usr/local/lib:$LD_LIBRARY_PATH
export DATE=`date +"%Y%m%d%H%M"`
export PATH=$PGHOME/bin:$PATH:.
export MANPATH=$PGHOME/share/man:$MANPATH
export PGUSER=postgres
export PGHOST=$PGDATA
export PGDATABASE=postgres
alias rm='rm -i'
alias ll='ls -lh'

 

Create .pgpass in /home/postgres/ (for the streaming-replication user that carries data between primary and standby).

Format:
IP:port:replication:user:password

On the master (if no replica user is created, the postgres user and its password can be used instead):
192.168.100.3:5432:replication:replica:replica
192.168.2.4:5432:replication:replica:replica

On the standby (same note applies):
192.168.100.3:5432:replication:replica:replica
192.168.2.4:5432:replication:replica:replica
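
libpq ignores a .pgpass file whose permissions are too open, so after creating it (a standard libpq requirement, not specific to this plan):

chmod 600 /home/postgres/.pgpass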

 

8. Initialize the PostgreSQL instance on the primary node

su - postgres
initdb -D $PGDATA -E UTF8 --locale=C -U postgres -W

9. Adjust the PostgreSQL configuration file

cd $PGDATA

vi postgresql.conf

listen_addresses = ‘0.0.0.0‘

max_connections = 1000

superuser_reserved_connections = 13

unix_socket_directory = ‘$PGDATA‘

unix_socket_permissions = 0700

tcp_keepalives_idle = 60

tcp_keepalives_interval = 10

tcp_keepalives_count = 10

shared_buffers = 2GB

maintenance_work_mem = 512MB

max_stack_depth = 8MB

vacuum_cost_delay = 10ms

vacuum_cost_limit = 10000

bgwriter_delay = 10ms

wal_level = hot_standby

wal_buffers = 16384kB
checkpoint_segments = 64

checkpoint_timeout = 5min

archive_mode = on

archive_command = ‘/bin/date‘

max_wal_senders = 32

hot_standby = on

random_page_cost = 2.0

effective_cache_size = 12000MB

log_destination = ‘csvlog‘

logging_collector = on

log_directory = ‘pg_log‘

log_truncate_on_rotation = on

log_rotation_age = 1d

log_rotation_size = 10MB

log_min_duration_statement = 1s

log_checkpoints = on

log_lock_waits = on

deadlock_timeout = 1s

log_connections = on

log_disconnections = on

log_error_verbosity = verbose

log_statement = ‘ddl‘

track_activity_query_size = 2048

autovacuum = on

log_autovacuum_min_duration = 0

log_timezone = ‘PRC‘

datestyle = ‘iso, mdy‘

timezone = ‘PRC‘

lc_messages = ‘C‘                               

lc_monetary = ‘C‘                               

lc_numeric = ‘C‘                                

lc_time = ‘C‘                                   

default_text_search_config =‘pg_catalog.english‘

shared_preload_libraries =‘pg_stat_statements,auto_explain‘

pg_stat_statements.max = 1000

pg_stat_statements.track = all

pg_stat_statements.track_utility = off

track_io_timing = off

custom_variable_class= ‘pg_stat_statements,auto_explain’

pg_stat_statements.max= 1000

pg_stat_statements.track= all

auto_explain.log_min_duration= 10ms

auto_explain.log_analyze= false

auto_explain.log_verbose= false

auto_explain.log_nested_statements= false

 

10. Enable SQL monitoring

su - postgres
pg_ctl -D $PGDATA start
psql -h node1 -p 5432 -U postgres -d postgres
postgres=# create extension pg_stat_statements;
postgres=# select pg_stat_statements_reset();
-- select * from pg_stat_statements;
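
Once the extension is created, a quick look at the heaviest statements might look like this — a sketch; the column names match the PostgreSQL 9.3-era pg_stat_statements view:

psql -h node1 -p 5432 -U postgres -d postgres <<'SQL'
-- the five most expensive statements by total execution time
select calls, round(total_time::numeric, 2) as total_ms, query
from pg_stat_statements
order by total_time desc
limit 5;
SQL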

 

11. postgresql.conf and pg_hba.conf settings for the primary/standby setup

vim postgresql.conf
  listen_addresses = '*'
  wal_level = 'hot_standby'
  archive_mode = on
  archive_command = 'cd ./'  # we can also use exit 0
  max_wal_senders = 10
  wal_keep_segments = 5000   # 80 GB required on pg_xlog
  hot_standby = on

vim pg_hba.conf
host    all             all               192.168.2.4/32          trust
host    postgres        postgres          192.168.2.4/32          trust
host    replication     replica           192.168.2.4/32          trust
host    all             all               192.168.100.3/32        trust
host    postgres        postgres          192.168.100.3/32        trust
host    replication     replica           192.168.100.3/32        trust

 

12. Configure the standby-marker (recovery) file on the primary and standby nodes

su - postgres
cp /usr/local/pgsql/share/recovery.conf.sample /file/data/recovery.done
standby_mode = on
recovery_target_timeline = 'latest'                         # follow the latest timeline
primary_conninfo = 'host=node2 port=5432 user=postgres'     # connection info for the primary
trigger_file = '/file/data/trigger_file'

13. Create the standby instance (run on node2)

su - postgres
pg_basebackup -D $PGDATA -Fp -Xs -v -P -h node1 -p <primary port> -U replica   # or the postgres user

14. Edit the standby's recovery file

cd /file/data
mv recovery.done recovery.conf
vim recovery.conf
standby_mode = on
recovery_target_timeline = 'latest'                         # follow the latest timeline
primary_conninfo = 'host=node1 port=5432 user=postgres'     # connection info for the primary
trigger_file = '/file/data/trigger_file'

Start the standby:
chmod 0700 /file/data
pg_ctl -D $PGDATA start

Check the standby processes:
ps -ef | grep postgres

Check the primary processes:
ps -ef | grep postgres
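
To verify that streaming replication is actually flowing (a sketch; run the first command against the primary and the second against the standby):

psql -h node1 -p 5432 -U postgres -d postgres -c "select client_addr, state, sync_state from pg_stat_replication;"
psql -h node2 -p 5432 -U postgres -d postgres -c "select pg_is_in_recovery();"   # returns t on the standby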

 

15. Install pgpool-II 3.3.3 (on both nodes)

Download pgpool-II 3.3.3 to /opt/soft_bak.
Create the installation directory:
mkdir /opt/pgpool
su - postgres
tar zxvf pgpool-II-3.3.3.tar.gz
cd pgpool-II-3.3.3
mkdir -p /opt/pgpool
./configure --prefix=/opt/pgpool --with-pgsql=/usr/local/pgsql
make
make install

16. Install the pgpool helper functions

su - postgres
cd pgpool-II-3.3.3/sql
make
make install
cd pgpool-recovery
make install
cd ../pgpool-regclass
make install

echo "export PATH=$PATH:/opt/pgpool/bin" >> /home/postgres/.bashrc
source /home/postgres/.bashrc

 

After installation, /usr/local/pgsql/share/extension/ should contain the following files:

pgpool_recovery--1.0.sql
pgpool_recovery.control
pgpool-recovery.sql
pgpool_regclass--1.0.sql
pgpool_regclass.control
pgpool-regclass.sql

su - postgres
psql template1    (the standby node does not need pgpool_regclass or pgpool_recovery created)
template1=# create extension pgpool_regclass;
CREATE EXTENSION
template1=# create extension pgpool_recovery;
CREATE EXTENSION

Check the newly added recovery management functions:
template1=# \df
                                                    List of functions
 Schema |        Name         | Result data type |                     Argument data types                        |  Type
--------+---------------------+------------------+-----------------------------------------------------------------+--------
 public | pgpool_pgctl        | boolean          | action text, stop_mode text                                     | normal
 public | pgpool_recovery     | boolean          | script_name text, remote_host text, remote_data_directory text  | normal
 public | pgpool_remote_start | boolean          | remote_host text, remote_data_directory text                    | normal
 public | pgpool_switch_xlog  | text             | archive_dir text                                                | normal
(4 rows)

 

17. Configure pgpool-II (on both nodes)
-- Configure pcp.conf

cd /opt/pgpool/etc
cp pcp.conf.sample pcp.conf
pg_md5 -u postgres -p
password: postgres
ba777e4c2f15c11ea8ac3be7e0440aa0

vim pcp.conf

-- write the following into pcp.conf

#USERID:MD5PASSWD
postgres:ba777e4c2f15c11ea8ac3be7e0440aa0

-- Give ifconfig and arping execute (setuid) permission (run as root)

# chmod u+s /sbin/ifconfig
# chmod u+s /usr/sbin/

-- Configure pgpool.conf

cd /opt/pgpool/etc
cp pgpool.conf.sample pgpool.conf

-- pgpool.conf on the primary node (where possible use the postgres user for the users in this file, to keep the number of users small)
listen_addresses = ‘*‘
port = 9999
socket_dir = ‘/opt/pgpool‘
pcp_port = 9898
pcp_socket_dir = ‘/opt/pgpool‘
backend_hostname0 = 'node1'   ## data node node1
backend_port0 = 5432
backend_weight0 = 1
backend_flag0 = ‘ALLOW_TO_FAILOVER‘
backend_hostname1 = 'node2'   ## data node node2
backend_port1 = 5432
backend_weight1 = 1
backend_flag1 = ‘ALLOW_TO_FAILOVER‘
enable_pool_hba = on
pool_passwd = ‘pool_passwd‘
authentication_timeout = 60
ssl = off
num_init_children = 32
max_pool = 4
child_life_time = 300
child_max_connections = 0
connection_life_time = 0
client_idle_limit = 0
log_destination = ‘syslog‘
print_timestamp = on
log_connections = on
log_hostname = on
log_statement = on
log_per_node_statement = off
log_standby_delay = ‘none‘
syslog_facility = ‘LOCAL0‘
syslog_ident = ‘pgpool‘
debug_level = 0
pid_file_name = ‘/opt/pgpool/pgpool.pid‘
logdir = ‘/tmp‘
connection_cache = on
reset_query_list = ‘ABORT; DISCARD ALL‘
replication_mode = off
replicate_select = off
insert_lock = on
lobj_lock_table = ‘‘
replication_stop_on_mismatch = off
failover_if_affected_tuples_mismatch = off
load_balance_mode = on
ignore_leading_white_space = on
white_function_list = ‘‘
black_function_list = ‘nextval,setval‘
master_slave_mode = on             # streaming-replication mode
master_slave_sub_mode = 'stream'   # streaming-replication mode
sr_check_period = 5
sr_check_user = ‘replica‘
sr_check_password = ‘replica‘
delay_threshold = 16000
follow_master_command = ‘‘
parallel_mode = off
pgpool2_hostname = ‘‘
system_db_hostname  = ‘localhost‘
system_db_port = 5432
system_db_dbname = ‘pgpool‘
system_db_schema = ‘pgpool_catalog‘
system_db_user = ‘pgpool‘
system_db_password = ‘‘
health_check_period = 5
health_check_timeout = 20
health_check_user = ‘replica‘
health_check_password = ‘replcia‘
health_check_max_retries = 3
health_check_retry_delay = 1
failover_command = ‘/opt/pgpool/failover_stream.sh  %d %H /file/data/trigger_file‘ 
failback_command = ‘‘
fail_over_on_backend_error = on
search_primary_node_timeout = 10
recovery_user = ‘nobody‘
recovery_password = ‘‘
recovery_1st_stage_command = ‘‘
recovery_2nd_stage_command = ‘‘
recovery_timeout = 90
client_idle_limit_in_recovery = 0
use_watchdog = on
trusted_servers = ‘‘
ping_path = ‘/bin‘
wd_hostname = ‘node1‘
wd_port = 9000
wd_authkey = ‘‘
delegate_IP = ‘‘ 
ifconfig_path = ‘/sbin‘  
if_up_cmd = ‘ifconfig eth0:0 inet $_IP_$ netmask 255.255.255.0‘
if_down_cmd = ‘ifconfig eth0:0 down‘
arping_path =‘/usr/sbin‘           #arping command path
arping_cmd = ‘arping -U $_IP_$ -w 1‘
clear_memqcache_on_escalation = on
wd_escalation_command = ‘‘
wd_lifecheck_method = ‘heartbeat‘
wd_interval = 10
wd_heartbeat_port = 9694
wd_heartbeat_keepalive = 2
wd_heartbeat_deadtime = 30
heartbeat_destination0 = 'node2'   # hostname of the peer node
heartbeat_destination_port0 = 9694
heartbeat_device0 = ‘eth0‘
wd_life_point = 3
wd_lifecheck_query = ‘SELECT 1‘
wd_lifecheck_dbname = ‘template1‘
wd_lifecheck_user = ‘nobody‘
wd_lifecheck_password = ‘‘
other_pgpool_hostname0 = 'node2'   ## the peer pgpool
other_pgpool_port0 = 9999
other_wd_port0 = 9000
relcache_expire = 0
relcache_size = 256
check_temp_table = on
memory_cache_enabled = off
memqcache_method = ‘shmem‘
memqcache_memcached_host = ‘localhost‘
memqcache_memcached_port = 11211
memqcache_total_size = 67108864
memqcache_max_num_cache = 1000000
memqcache_expire = 0
memqcache_auto_cache_invalidation = on
memqcache_maxcache = 409600
memqcache_cache_block_size = 1048576
memqcache_oiddir = '/opt/pgpool/oiddir'   # the oiddir directory must be created under /opt/pgpool first
white_memqcache_table_list = ‘‘
black_memqcache_table_list = ‘‘

 

-- pgpool.conf on the standby node

pgpool.conf
listen_addresses = ‘*‘
port = 9999
socket_dir = ‘/opt/pgpool‘
pcp_port = 9898
pcp_socket_dir = ‘/opt/pgpool‘
backend_hostname0 = ‘node1‘
backend_port0 = 5432
backend_weight0 = 1
backend_flag0 = ‘ALLOW_TO_FAILOVER‘
backend_hostname1 = ‘node2‘
backend_port1 = 5432
backend_weight1 = 1
backend_flag1 = ‘ALLOW_TO_FAILOVER‘
enable_pool_hba = on
pool_passwd = ‘pool_passwd‘
authentication_timeout = 60
ssl = off
num_init_children = 32
max_pool = 4
child_life_time = 300
child_max_connections = 0
connection_life_time = 0
client_idle_limit = 0
log_destination = ‘syslog‘
print_timestamp = on
log_connections = on
log_hostname = on
log_statement = on
log_per_node_statement = off
log_standby_delay = ‘none‘
syslog_facility = ‘LOCAL0‘
syslog_ident = ‘pgpool‘
debug_level = 0
pid_file_name = ‘/opt/pgpool/pgpool.pid‘
logdir = ‘/tmp‘
connection_cache = on
reset_query_list = ‘ABORT; DISCARD ALL‘
replication_mode = off
replicate_select = off
insert_lock = on
lobj_lock_table = ‘‘
replication_stop_on_mismatch = off
failover_if_affected_tuples_mismatch = off
load_balance_mode = on
ignore_leading_white_space = on
white_function_list = ‘‘
black_function_list = ‘nextval,setval‘
master_slave_mode = on
master_slave_sub_mode = ‘stream‘
sr_check_period = 0
sr_check_user = ‘replica‘
sr_check_password = ‘replica‘
delay_threshold = 16000
follow_master_command = ‘‘
parallel_mode = off
pgpool2_hostname = ‘‘
system_db_hostname  = ‘localhost‘
system_db_port = 5432
system_db_dbname = ‘pgpool‘
system_db_schema = ‘pgpool_catalog‘
system_db_user = ‘pgpool‘
system_db_password = ‘‘
health_check_period = 0
health_check_timeout = 20
health_check_user = ‘nobody‘
health_check_password = ‘‘
health_check_max_retries = 0
health_check_retry_delay = 1
failover_command = ‘/opt/pgpool/failover_stream.sh  %d %H /file/data/trigger/file‘
failback_command = ‘‘
fail_over_on_backend_error = on
search_primary_node_timeout = 10
recovery_user = ‘nobody‘
recovery_password = ‘‘
recovery_1st_stage_command = ‘‘
recovery_2nd_stage_command = ‘‘
recovery_timeout = 90
client_idle_limit_in_recovery = 0
use_watchdog = off
trusted_servers = ‘‘
ping_path = ‘/bin‘
wd_hostname = ‘ ‘
wd_port = 9000
wd_authkey = ‘‘
delegate_IP = ‘ ‘
ifconfig_path = ‘/sbin‘
if_up_cmd = ‘ifconfig eth0:0 inet $_IP_$ netmask 255.255.255.0‘
if_down_cmd = ‘ifconfig eth0:0 down‘
arping_path = ‘/usr/sbin‘          # arping command path
arping_cmd = ‘arping -U $_IP_$ -w 1‘
clear_memqcache_on_escalation = on
wd_escalation_command = ‘‘
wd_lifecheck_method = ‘heartbeat‘
wd_interval = 10
wd_heartbeat_port = 9694
wd_heartbeat_keepalive = 2
wd_heartbeat_deadtime = 30
heartbeat_destination0 = ‘node1‘
heartbeat_destination_port0 = 9694
heartbeat_device0 = ‘eth0‘
wd_life_point = 3
wd_lifecheck_query = ‘SELECT 1‘
wd_lifecheck_dbname = ‘template1‘
wd_lifecheck_user = ‘nobody‘
wd_lifecheck_password = ‘‘
other_pgpool_hostname0 = ‘node1‘
other_pgpool_port0 = 9999
other_wd_port0 = 9000
relcache_expire = 0
relcache_size = 256
check_temp_table = on
memory_cache_enabled = off
memqcache_method = ‘shmem‘
memqcache_memcached_host = ‘localhost‘
memqcache_memcached_port = 11211
memqcache_total_size = 67108864
memqcache_max_num_cache = 1000000
memqcache_expire = 0
memqcache_auto_cache_invalidation = on
memqcache_maxcache = 409600
memqcache_cache_block_size = 1048576
memqcache_oiddir = ‘/var/log/pgpool/oiddir‘
white_memqcache_table_list = ‘‘
black_memqcache_table_list = ‘‘

Contents of failover_stream.sh:

#! /bin/sh
# Failover command for streaming replication.
# This script assumes that DB node 0 is primary, and 1 is standby.
#
# If standby goes down, do nothing. If primary goes down, create a
# trigger file so that standby takes over primary node.
#
# Arguments: $1: failed node id. $2: new master hostname. $3: path to
# trigger file.

failed_node=$1
new_master=$2
trigger_file=$3

# Do nothing if standby goes down.
#if [ $failed_node = 1 ]; then
#        exit 0;
#fi

/usr/bin/ssh -T $new_master /bin/touch $trigger_file
exit 0;

 

Primary node pool_hba.conf configuration

cd /opt/pgpool/etc

cp pool_hba.conf.sample pool_hba.conf

vim pool_hba.conf

host   all             all           192.168.2.4/32        trust

host   replication     replica       192.168.2.4/32        trust

host   postgres        postgres      192.168.2.4/32        trust

host   all             all           192.168.100.3/32      trust

host   replication     replica       192.168.100.3/32      trust

host   postgres        postgres      192.168.100.3/32      trust

host   postgres        postgres      192.168.100.3/32      trust

host   all             all           192.168.100.4/32      trust

host   replication     replica       192.168.100.4/32      trust

host   postgres        postgres      192.168.100.4/32      trust

host   postgres        postgres      192.168.100.4/32      trust

Standby node pool_hba.conf configuration

cd /opt/pgpool/etc

cp pool_hba.conf.sample pool_hba.conf

vim pool_hba.conf

host    all             all           192.168.2.4/32        trust

host    replication     replica       192.168.2.4/32        trust

host    postgres        postgres      192.168.2.4/32        trust

host    all             all           192.168.100.3/32      trust

host    replication     replica       192.168.100.3/32      trust

host    postgres        postgres      192.168.100.3/32      trust

host    postgres        postgres      192.168.100.3/32      trust

host    all             all           192.168.100.4/32      trust

host    replication     replica       192.168.100.4/32      trust

host    postgres        postgres      192.168.100.4/32      trust

host    postgres        postgres      192.168.100.4/32      trust

Configure the password file (on both nodes)

su - postgres
psql
select rolname, rolpassword from pg_authid;

vim pool_passwd
postgres:<md5 password of the postgres user>
(format: rolname:rolpassword)

Start pgpool:            pgpool -n &
Stop pgpool:             pgpool -m fast stop
Connect through pgpool:  psql -h <node or VIP> -p 9999 -U postgres -d postgres

Check the status of the nodes in the pgpool cluster:
show pool_nodes;
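
After pgpool is running on both nodes, backend status can also be queried through pgpool itself — a sketch; pgpool-II 3.3 uses positional arguments for the pcp commands, and the password is the one registered in pcp.conf:

psql -h node1 -p 9999 -U postgres -d postgres -c "show pool_nodes;"
pcp_node_info 10 node1 9898 postgres postgres 0   # timeout, host, pcp port, user, password, node id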

 

 

 

18. Install keepalived-1.2.15 (on both nodes)

Download keepalived-1.2.15 to /opt/soft_bak:

tar zxvf keepalived-1.2.15.tar.gz
cd keepalived-1.2.15
./configure
make
make install

Contents of keepalived.conf on node1:

 

global_defs {

  notification_email {

    acassen@firewall.loc

    failover@firewall.loc

    sysadmin@firewall.loc

   }

  notification_email_from Alexandre.Cassen@firewall.loc

  smtp_server 127.0.0.1

  smtp_connect_timeout 30

  router_id LVS_DEVEL

}

 

vrrp_script chk_pgpool {

       script "killall -0 pgpool"

       interval 3

       weight 2

}

 

vrrp_instance VI_1 {

   state MASTER

    interface eth0

   virtual_router_id 51

   priority 100

   advert_int 1

   authentication {

       auth_type PASS

       auth_pass 1111

    }

   virtual_ipaddress {

       192.168.100.4

    }

         track_script{

       chk_pgpool

}

 

}

Contents of keepalived.conf on node2:

 

global_defs {

  notification_email {

    acassen@firewall.loc

    failover@firewall.loc

    sysadmin@firewall.loc

   }

  notification_email_from Alexandre.Cassen@firewall.loc

  smtp_server 127.0.0.1

  smtp_connect_timeout 30

  router_id LVS_DEVEL

}

 

vrrp_script chk_pgpool {

       script "killall -0 pgpool"

       interval 3

       weight 2

}

 

vrrp_instance VI_1 {

   state BACKUP

   interface eth0

   virtual_router_id 51

   priority 99

   advert_int 1

   authentication {

       auth_type PASS

       auth_pass 1111

    }

   virtual_ipaddress {

       192.168.100.4

    }

       track_script {

       chk_pgpool

}

}

 
