Articles from September 2015

Deploying a Shadowsocks Server on CentOS 6

References:
http://www.shadowsocks.org

Background:
Compared with a full VPN, standing up a Shadowsocks service and using it through a browser proxy is much more convenient.
The principle is similar to an SSH tunnel: the Shadowsocks server establishes an encrypted tunnel with its dedicated Shadowsocks client, and the client listens on a local port, 1080 by default; all traffic sent through that local port travels over the encrypted tunnel.

Environment:
OS: CentOS 6.4 x86_64 Minimal

1. Install the Shadowsocks server
# pip install shadowsocks

2. Configure /etc/shadowsocks.json
# vim /etc/shadowsocks.json

{
  "server": "0.0.0.0",
  "server_port": 443,
  "local_address": "127.0.0.1",
  "local_port": 1080,
  "password": "shadowsockspass",
  "timeout": 600,
  "method": "aes-256-cfb",
  "fast_open": false,
  "workers": 1
}

Notes on the configuration above:
Listen on all addresses of the server: "server": "0.0.0.0",
Server listening port 443: "server_port": 443,
Address the client listens on locally: "local_address": "127.0.0.1",
Port the client listens on locally: "local_port": 1080,
The shared password: "password": "shadowsockspass",
Connection timeout of 600 seconds: "timeout": 600,
Encryption method: "method": "aes-256-cfb",
TCP fast open left disabled by default: "fast_open": false,
Number of worker processes: "workers": 1
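Before starting the service, it is worth making sure the file actually parses, since ssserver fails on malformed JSON. A minimal sketch, written against a temporary copy so it can run anywhere; in practice point the same check at /etc/shadowsocks.json (python3 is assumed to be available here, any Python with json.tool works):

```shell
#!/bin/sh
# Write a shortened copy of the config to a temp file and check it parses.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{"server": "0.0.0.0", "server_port": 443, "password": "shadowsockspass", "method": "aes-256-cfb"}
EOF
if python3 -m json.tool "$cfg" > /dev/null 2>&1; then
  result="config OK"
else
  result="invalid JSON"
fi
rm -f "$cfg"
```

If the file is broken, running `python3 -m json.tool` without redirecting output prints the parse error and line number.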

3. Configure /etc/sysctl.conf, appending the following:
# vim /etc/sysctl.conf

# For shadowsocks
fs.file-max = 65535
net.core.rmem_max = 67108864
net.core.wmem_max = 67108864
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_keepalive_time = 1200
net.ipv4.tcp_max_syn_backlog = 8192
net.ipv4.tcp_max_tw_buckets = 5120
net.ipv4.tcp_mem = 25600 51200 102400
net.ipv4.tcp_rmem = 4096 87380 67108864
net.ipv4.tcp_wmem = 4096 65536 67108864
net.ipv4.tcp_mtu_probing = 1
net.ipv4.tcp_congestion_control = hybla

# sysctl -p
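Note that sysctl -p aborts at the first malformed line, and the hybla line additionally requires the tcp_hybla kernel module to be loadable. A quick format check can be run before applying; this sketch inlines a shortened fragment to stay self-contained (run the same pipeline against /etc/sysctl.conf on a real host):

```shell
#!/bin/sh
# Count non-comment lines that do not look like "key = value";
# any such line would make "sysctl -p" error out.
frag=$(mktemp)
cat > "$frag" <<'EOF'
# For shadowsocks
fs.file-max = 65535
net.core.rmem_max = 67108864
net.ipv4.tcp_congestion_control = hybla
EOF
bad=$(grep -v '^#' "$frag" | grep -cvE '^[a-z0-9._-]+ = [^ ]' || true)
rm -f "$frag"
```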

4. Start the Shadowsocks service
# ssserver -c /etc/shadowsocks.json -d start

# netstat -lntp | grep 443

tcp      0      0      0.0.0.0:443      0.0.0.0:*      LISTEN      11037/python

5. Download a Shadowsocks client
Windows:https://github.com/shadowsocks/shadowsocks-csharp/releases/download/2.5.6/Shadowsocks-win-2.5.6.zip
Mac OS X:https://github.com/shadowsocks/shadowsocks-iOS/releases/download/2.6.3/ShadowsocksX-2.6.3.dmg

6. Configure the client
Create a server connection and enter:
Server address, e.g.: heylinux.com
Port: 443
Encryption method: aes-256-cfb
Password: shadowsockspass

Start the client and keep it running. Stay on the default Auto Proxy Mode and run Update PAC from GFWList once, as shown below:
(screenshot: shadowsocks_client)

7. Configure the browser extension
Install the Proxy SwitchySharp extension: https://chrome.google.com/webstore/detail/dpplabbmogkhghncfbfdeeokoefdjegm

Configure the extension as shown below:
(screenshot: proxy_switchsharp)

Enable the proxy profile you just configured: shadowsocks

8. Done


Deploying a PPTP VPN Server on CentOS 6

References:
https://www.digitalocean.com/community/tutorials/how-to-setup-your-own-vpn-with-pptp

Background:
Setting up a PPTP VPN server should be very easy, yet quite a few friends around me still came asking for help after following various articles, having taken plenty of detours.
So I figured I should write up the process myself. My habit is to take notes while I operate, so the steps below can be followed one by one to completion, which is the kind of article everyone likes to read.

Environment:
OS: CentOS 6.4 x86_64 Minimal

1. Install the EPEL repository
# yum install http://dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm

2. Install the PPTP repository
# yum install http://poptop.sourceforge.net/yum/stable/rhel6/pptp-release-current.noarch.rpm

3. Install the PPTP VPN server
# yum install pptpd

4. Edit /etc/pptpd.conf
# vim /etc/pptpd.conf

###############################################################################
# $Id: pptpd.conf,v 1.11 2011/05/19 00:02:50 quozl Exp $
#
# Sample Poptop configuration file /etc/pptpd.conf
#
# Changes are effective when pptpd is restarted.
###############################################################################

# TAG: ppp
#	Path to the pppd program, default '/usr/sbin/pppd' on Linux
#
#ppp /usr/sbin/pppd

# TAG: option
#	Specifies the location of the PPP options file.
#	By default PPP looks in '/etc/ppp/options'
#
option /etc/ppp/options.pptpd

# TAG: debug
#	Turns on (more) debugging to syslog
#
debug

# TAG: stimeout
#	Specifies timeout (in seconds) on starting ctrl connection
#
stimeout 120

# TAG: noipparam
#       Suppress the passing of the client's IP address to PPP, which is
#       done by default otherwise.
#
#noipparam

# TAG: logwtmp
#	Use wtmp(5) to record client connections and disconnections.
#
#logwtmp

# TAG: vrf <vrfname>
#	Switches PPTP & GRE sockets to the specified VRF, which must exist
#	Only available if VRF support was compiled into pptpd.
#
#vrf test

# TAG: bcrelay <if>
#	Turns on broadcast relay to clients from interface <if>
#
#bcrelay eth1

# TAG: delegate
#	Delegates the allocation of client IP addresses to pppd.
#
#       Without this option, which is the default, pptpd manages the list of
#       IP addresses for clients and passes the next free address to pppd.
#       With this option, pptpd does not pass an address, and so pppd may use
#       radius or chap-secrets to allocate an address.
#
#delegate

# TAG: connections
#       Limits the number of client connections that may be accepted.
#
#       If pptpd is allocating IP addresses (e.g. delegate is not
#       used) then the number of connections is also limited by the
#       remoteip option.  The default is 100.
#connections 100

# TAG: localip
# TAG: remoteip
#	Specifies the local and remote IP address ranges.
#
#	These options are ignored if delegate option is set.
#
#       Any addresses work as long as the local machine takes care of the
#       routing.  But if you want to use MS-Windows networking, you should
#       use IP addresses out of the LAN address space and use the proxyarp
#       option in the pppd options file, or run bcrelay.
#
#	You can specify single IP addresses separated by commas or you can
#	specify ranges, or both. For example:
#
#		192.168.0.234,192.168.0.245-249,192.168.0.254
#
#	IMPORTANT RESTRICTIONS:
#
#	1. No spaces are permitted between commas or within addresses.
#
#	2. If you give more IP addresses than the value of connections,
#	   it will start at the beginning of the list and go until it
#	   gets connections IPs.  Others will be ignored.
#
#	3. No shortcuts in ranges! ie. 234-8 does not mean 234 to 238,
#	   you must type 234-238 if you mean this.
#
#	4. If you give a single localIP, that's ok - all local IPs will
#	   be set to the given one. You MUST still give at least one remote
#	   IP for each simultaneous client.
#
# (Recommended)
#localip 192.168.0.1
#remoteip 192.168.0.234-238,192.168.0.245
# or
#localip 192.168.0.234-238,192.168.0.245
#remoteip 192.168.1.234-238,192.168.1.245
localip 10.192.168.1
remoteip 10.192.168.100-200

Notes on the configuration above:
Path to the PPP options file: option /etc/ppp/options.pptpd
Debug logging enabled: debug
Timeout for starting the control connection, 120 seconds: stimeout 120
The PPTP VPN server's local address, which clients pick up as their gateway: localip 10.192.168.1
The address range assigned to clients: remoteip 10.192.168.100-200
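The remoteip range also caps the number of simultaneous clients (together with the connections setting). A small helper, the function name is ours rather than part of pptpd, that counts the addresses in a pptpd-style A.B.C.X-Y range:

```shell
#!/bin/sh
# Count the addresses covered by a "A.B.C.X-Y" remoteip range.
range_size() {
  last=${1##*.}       # e.g. "100-200"
  lo=${last%-*}
  hi=${last#*-}
  echo $((hi - lo + 1))
}
pool=$(range_size "10.192.168.100-200")
echo "client pool size: $pool"
```

So remoteip 10.192.168.100-200 allows up to 101 concurrent clients.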

5. Edit /etc/ppp/options.pptpd

###############################################################################
# $Id: options.pptpd,v 1.11 2005/12/29 01:21:09 quozl Exp $
#
# Sample Poptop PPP options file /etc/ppp/options.pptpd
# Options used by PPP when a connection arrives from a client.
# This file is pointed to by /etc/pptpd.conf option keyword.
# Changes are effective on the next connection.  See "man pppd".
#
# You are expected to change this file to suit your system.  As
# packaged, it requires PPP 2.4.2 and the kernel MPPE module.
###############################################################################


# Authentication

# Name of the local system for authentication purposes
# (must match the second field in /etc/ppp/chap-secrets entries)
name ec2-tokyo

# Strip the domain prefix from the username before authentication.
# (applies if you use pppd with chapms-strip-domain patch)
#chapms-strip-domain


# Encryption
# (There have been multiple versions of PPP with encryption support,
# choose which of the following sections you will use.)


# BSD licensed ppp-2.4.2 upstream with MPPE only, kernel module ppp_mppe.o
# {{{
refuse-pap
refuse-chap
refuse-mschap
# Require the peer to authenticate itself using MS-CHAPv2 [Microsoft
# Challenge Handshake Authentication Protocol, Version 2] authentication.
require-mschap-v2
# Require MPPE 128-bit encryption
# (note that MPPE requires the use of MSCHAP-V2 during authentication)
require-mppe-128
# }}}


# OpenSSL licensed ppp-2.4.1 fork with MPPE only, kernel module mppe.o
# {{{
#-chap
#-chapms
# Require the peer to authenticate itself using MS-CHAPv2 [Microsoft
# Challenge Handshake Authentication Protocol, Version 2] authentication.
#+chapms-v2
# Require MPPE encryption
# (note that MPPE requires the use of MSCHAP-V2 during authentication)
#mppe-40	# enable either 40-bit or 128-bit, not both
#mppe-128
#mppe-stateless
# }}}


# Network and Routing

# If pppd is acting as a server for Microsoft Windows clients, this
# option allows pppd to supply one or two DNS (Domain Name Server)
# addresses to the clients.  The first instance of this option
# specifies the primary DNS address; the second instance (if given)
# specifies the secondary DNS address.
#ms-dns 10.0.0.1
#ms-dns 10.0.0.2
ms-dns 172.31.0.2

# If pppd is acting as a server for Microsoft Windows or "Samba"
# clients, this option allows pppd to supply one or two WINS (Windows
# Internet Name Services) server addresses to the clients.  The first
# instance of this option specifies the primary WINS address; the
# second instance (if given) specifies the secondary WINS address.
#ms-wins 10.0.0.3
#ms-wins 10.0.0.4

# Add an entry to this system's ARP [Address Resolution Protocol]
# table with the IP address of the peer and the Ethernet address of this
# system.  This will have the effect of making the peer appear to other
# systems to be on the local ethernet.
# (you do not need this if your PPTP server is responsible for routing
# packets to the clients -- James Cameron)
proxyarp

# Normally pptpd passes the IP address to pppd, but if pptpd has been
# given the delegate option in pptpd.conf or the --delegate command line
# option, then pppd will use chap-secrets or radius to allocate the
# client IP address.  The default local IP address used at the server
# end is often the same as the address of the server.  To override this,
# specify the local IP address here.
# (you must not use this unless you have used the delegate option)
#10.8.0.100


# Logging

# Enable connection debugging facilities.
# (see your syslog configuration for where pppd sends to)
debug

# Print out all the option values which have been set.
# (often requested by mailing list to verify options)
dump


# Miscellaneous

# Create a UUCP-style lock file for the pseudo-tty to ensure exclusive
# access.
lock

# Disable BSD-Compress compression
nobsdcomp

# Disable Van Jacobson compression
# (needed on some networks with Windows 9x/ME/XP clients, see posting to
# poptop-server on 14th April 2005 by Pawel Pokrywka and followups,
# http://marc.theaimsgroup.com/?t=111343175400006&r=1&w=2 )
novj
novjccomp

# turn off logging to stderr, since this may be redirected to pptpd,
# which may trigger a loopback
nologfd

# put plugins here
# (putting them higher up may cause them to sent messages to the pty)

logfile /var/log/pptpd.log
multilink

Notes on the configuration above:
The server name used for authentication: name ec2-tokyo
The encryption rules:
refuse-pap
refuse-chap
refuse-mschap
require-mschap-v2
require-mppe-128
The DNS address pushed to clients: ms-dns 172.31.0.2 (I usually use the default DNS settings of the server the PPTP VPN server runs on)
Let hosts on the same LAN see each other through the PPTP VPN server: proxyarp
Debug output enabled: debug
Some common settings enabled:
dump
lock
nobsdcomp
novj
novjccomp
nologfd
The log file location: logfile /var/log/pptpd.log
Allow bundling multiple physical channels into one logical channel: multilink

6. Edit the user credentials file /etc/ppp/chap-secrets
# vim /etc/ppp/chap-secrets

# Secrets for authentication using CHAP
# client	server	secret			IP addresses
"username"  *       "password"        *
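Each chap-secrets line has four whitespace-separated fields: client, server, secret, and allowed IP addresses; the server field must be * or match the name directive from options.pptpd (ec2-tokyo above). A quick sketch that checks the field count on a temporary copy (the real file is /etc/ppp/chap-secrets):

```shell
#!/bin/sh
# Append a sample entry to a temp copy and verify it has four fields.
secrets=$(mktemp)
printf '%s\n' '"username"  *  "password"  *' > "$secrets"
fields=$(awk 'NF{print NF; exit}' "$secrets")
rm -f "$secrets"
echo "fields: $fields"
```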

7. Edit /etc/sysconfig/iptables-config
Change IPTABLES_MODULES="" to IPTABLES_MODULES="ip_nat_pptp" so that the module is loaded automatically when the iptables service starts.

8. Edit /etc/sysconfig/iptables (eth0 is assumed to be the interface holding the public IP)
# vim /etc/sysconfig/iptables

*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p gre -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 1723 -j ACCEPT
-A INPUT -s 10.192.168.0/255.255.255.0 -m state --state NEW -m tcp -p tcp -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT
*nat
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A POSTROUTING -s 10.192.168.0/255.255.255.0 -o eth0 -j MASQUERADE
COMMIT

Notes on the iptables rules above:
Accept all GRE protocol packets;
Accept TCP port 1723;
Accept the entire PPTP VPN subnet 10.192.168.0/24;
NAT the entire PPTP VPN subnet 10.192.168.0/24 out through eth0 (masquerading), so clients share the server's Internet access.
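iptables accepts the subnet either as 10.192.168.0/255.255.255.0, as written in the rules, or as 10.192.168.0/24, as written in the notes. A small converter, a helper of ours for illustration only, showing the two forms are the same:

```shell
#!/bin/sh
# Convert a dotted-quad netmask to a CIDR prefix length.
mask_to_prefix() {
  bits=0
  old_ifs=$IFS
  IFS=.
  for o in $1; do
    case $o in
      255) bits=$((bits+8)) ;;
      254) bits=$((bits+7)) ;;
      252) bits=$((bits+6)) ;;
      248) bits=$((bits+5)) ;;
      240) bits=$((bits+4)) ;;
      224) bits=$((bits+3)) ;;
      192) bits=$((bits+2)) ;;
      128) bits=$((bits+1)) ;;
    esac
  done
  IFS=$old_ifs
  echo "$bits"
}
prefix=$(mask_to_prefix 255.255.255.0)
echo "/$prefix"
```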

9. Enable IP forwarding: edit /etc/sysctl.conf
Change net.ipv4.ip_forward = 0 to net.ipv4.ip_forward = 1
Run sysctl -p
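The one-line change can also be scripted with sed; the sketch below does it on a temporary copy so it is safe to run anywhere (substitute /etc/sysctl.conf on the real host; GNU sed's -i is assumed):

```shell
#!/bin/sh
# Flip ip_forward from 0 to 1 in a temp copy of the config.
conf=$(mktemp)
echo 'net.ipv4.ip_forward = 0' > "$conf"
sed -i 's/^net\.ipv4\.ip_forward = 0/net.ipv4.ip_forward = 1/' "$conf"
forward=$(awk -F' = ' '/ip_forward/{print $2}' "$conf")
rm -f "$conf"
echo "ip_forward=$forward"
```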

10. Start the PPTP VPN server
# /etc/init.d/pptpd restart
# /etc/init.d/iptables restart

11. Enable the pptpd and iptables services at boot
# chkconfig pptpd on
# chkconfig iptables on

12. Configure a client on your local PC and connect to the PPTP VPN server

13. Done


A simple web app I built with Flask and Bootstrap

Without further ado, the address: https://github.com/mcsrainbow/devopsdemo


Backing up and restoring MySQL with innobackupex

References:
https://www.percona.com/doc/percona-xtrabackup/2.2/innobackupex/privileges.html
https://www.percona.com/doc/percona-xtrabackup/2.1/innobackupex/streaming_backups_innobackupex.html
https://www.percona.com/doc/percona-xtrabackup/2.1/howtos/recipes_ibkx_compressed.html
https://www.percona.com/doc/percona-xtrabackup/2.1/innobackupex/incremental_backups_innobackupex.html

Background:
In some technical chat groups I still see ops engineers backing up MySQL with commands like mysqldump, so it seems worth introducing innobackupex.
Nowadays the vast majority of MySQL deployments use a Master-Slave architecture. Compared with mysqldump, backing up with innobackupex has the following advantages:
1. It operates on the data files themselves, i.e. file-level backup, which is fast and especially suited to scenarios where all data must be backed up;
2. It is a hot backup and does not disturb existing database traffic;
3. It records the binlog and replication positions, which is very useful when creating or restoring a Slave;
4. It supports parallel compression while backing up, saving considerable disk space.

At present, our production database is about 500G uncompressed and about 90G compressed.
In Feng's environment the database has already passed 1T; he adds a few points:
1. innobackupex can back up online without stopping the business, provided the tables use the InnoDB engine; MyISAM tables are still locked;
2. The backup drives IO very high, so back up on a slave (often a slave dedicated to backups) rather than on the master;
3. innobackupex can combine incremental and full backup modes.

Also, after studying ITIL, Yang adds: for the most recent full backup, besides keeping an off-site copy, try to keep an uncompressed, unarchived copy on the database server itself; that saves a great deal of time during disaster recovery.

Usage:
Environment

Architecture: Master-Slave
Servers: idc1-master1, idc1-slave1
MySQL port: 3308
Config file: /etc/my_3308.cnf
Backup directory: /mysql-backup/3308
MySQL data directory: /opt/mysql_3308/data
Service script: /etc/init.d/mysql_3308

1. Install xtrabackup on both the Master and the Slave:
Note: if you are not running the Percona build of MySQL, or your MySQL is older than 5.5, install percona-xtrabackup-20 instead and skip all compression-related steps and flags below.

# yum install http://www.percona.com/downloads/percona-release/redhat/0.1-3/percona-release-0.1-3.noarch.rpm
# yum install -y percona-xtrabackup qpress
# yum install -y percona-xtrabackup-20 # for non-Percona MySQL, or MySQL older than 5.5

2. Create a backup user backup-user on both the Master and the Slave:

mysql> CREATE USER 'backup-user'@'localhost' IDENTIFIED BY 'backup-pass';
mysql> GRANT SUPER, RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'backup-user'@'localhost';
mysql> FLUSH PRIVILEGES;
mysql> EXIT;

Note: for a non-Percona build of MySQL, or MySQL older than 5.5 (e.g. 5.1), the SELECT privilege must also be granted, otherwise backing up the mysql system database fails for lack of privileges.

mysql> GRANT SELECT, SUPER, RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'backup-user'@'localhost';

3. Back up on the Master
Plain backup

[root@idc1-master1 ~]# innobackupex --defaults-file=/etc/my_3308.cnf --user=backup-user --password=backup-pass --use-memory=4G /mysql-backup/3308

[root@idc1-master1 ~]# ls -rt1 /mysql-backup/3308/ | tail -n 1
2015-10-26_03-00-10

Compressed and streamed backup

[root@idc1-master1 ~]# innobackupex --defaults-file=/etc/my_3308.cnf --user=backup-user --password=backup-pass --use-memory=4G --compress --compress-threads=8 --stream=xbstream --parallel=4 /mysql-backup/3308 > /mysql-backup/3308/$(date +%Y-%m-%d_%H-%M-%S).xbstream

[root@idc1-master1 ~]# ls -rt1 /mysql-backup/3308/ | tail -n 1
2015-10-26_03-05-05.xbstream
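The $(date +%Y-%m-%d_%H-%M-%S) in the command above gives the streamed file the same timestamp pattern innobackupex uses for its backup directories, so plain and streamed backups sort together by name. A tiny sketch of the pattern check:

```shell
#!/bin/sh
# Generate a backup timestamp and verify it matches the expected pattern.
stamp=$(date +%Y-%m-%d_%H-%M-%S)
case $stamp in
  [0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]_[0-9][0-9]-[0-9][0-9]-[0-9][0-9]) fmt=ok ;;
  *) fmt=bad ;;
esac
echo "$stamp ($fmt)"
```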

4. Back up on the Slave
Plain backup

[root@idc1-slave1 ~]# innobackupex --defaults-file=/etc/my_3308.cnf --user=backup-user --password=backup-pass --use-memory=4G --slave-info --safe-slave-backup /mysql-backup/3308

[root@idc1-slave1 ~]# ls -rt1 /mysql-backup/3308/ | tail -n 1
2015-10-26_03-11-03

Compressed and streamed backup

[root@idc1-slave1 ~]# innobackupex --defaults-file=/etc/my_3308.cnf --user=backup-user --password=backup-pass --use-memory=4G --slave-info --safe-slave-backup --compress --compress-threads=8 --stream=xbstream --parallel=4 /mysql-backup/3308 > /mysql-backup/3308/$(date +%Y-%m-%d_%H-%M-%S).xbstream

[root@idc1-slave1 ~]# ls -rt1 /mysql-backup/3308/ | tail -n 1
2015-10-26_03-15-03.xbstream

5. Restore on the Master

[root@idc1-master1 ~]# /etc/init.d/mysql_3308 stop

[root@idc1-master1 ~]# mv /opt/mysql_3308/data /opt/mysql_3308/data_broken
[root@idc1-master1 ~]# mkdir /opt/mysql_3308/data

# Plain backup
[root@idc1-master1 ~]# innobackupex --apply-log --use-memory=4G /mysql-backup/3308/2015-10-26_03-00-10
[root@idc1-master1 ~]# innobackupex --defaults-file=/etc/my_3308.cnf --copy-back --use-memory=4G /mysql-backup/3308/2015-10-26_03-00-10 

# Compressed and streamed backup
[root@idc1-master1 ~]# mkdir -p /mysql-backup/3308/2015-10-26_03-05-05
[root@idc1-master1 ~]# xbstream -x < /mysql-backup/3308/2015-10-26_03-05-05.xbstream -C /mysql-backup/3308/2015-10-26_03-05-05
[root@idc1-master1 ~]# innobackupex --decompress --parallel=4 /mysql-backup/3308/2015-10-26_03-05-05
[root@idc1-master1 ~]# find /mysql-backup/3308/2015-10-26_03-05-05 -name "*.qp" -delete
[root@idc1-master1 ~]# innobackupex --apply-log --use-memory=4G /mysql-backup/3308/2015-10-26_03-05-05
[root@idc1-master1 ~]# innobackupex --defaults-file=/etc/my_3308.cnf --copy-back --use-memory=4G /mysql-backup/3308/2015-10-26_03-05-05 

[root@idc1-master1 ~]# chown -R mysql:mysql /opt/mysql_3308/data

[root@idc1-master1 ~]# /etc/init.d/mysql_3308 start

6. Restore on the Slave

[root@idc1-slave1 ~]# /etc/init.d/mysql_3308 stop

[root@idc1-slave1 ~]# mv /opt/mysql_3308/data /opt/mysql_3308/data_broken
[root@idc1-slave1 ~]# mkdir /opt/mysql_3308/data

# Plain backup
[root@idc1-slave1 ~]# innobackupex --apply-log --use-memory=4G /mysql-backup/3308/2015-10-26_03-11-03
[root@idc1-slave1 ~]# innobackupex --defaults-file=/etc/my_3308.cnf --copy-back --use-memory=4G /mysql-backup/3308/2015-10-26_03-11-03 

# Compressed and streamed backup
[root@idc1-slave1 ~]# mkdir -p /mysql-backup/3308/2015-10-26_03-15-03
[root@idc1-slave1 ~]# xbstream -x < /mysql-backup/3308/2015-10-26_03-15-03.xbstream -C /mysql-backup/3308/2015-10-26_03-15-03 
[root@idc1-slave1 ~]# innobackupex --decompress --parallel=4 /mysql-backup/3308/2015-10-26_03-15-03 
[root@idc1-slave1 ~]# find /mysql-backup/3308/2015-10-26_03-15-03 -name "*.qp" -delete
[root@idc1-slave1 ~]# innobackupex --apply-log --use-memory=4G /mysql-backup/3308/2015-10-26_03-15-03
[root@idc1-slave1 ~]# innobackupex --defaults-file=/etc/my_3308.cnf --copy-back --use-memory=4G /mysql-backup/3308/2015-10-26_03-15-03 

[root@idc1-slave1 ~]# chown -R mysql:mysql /opt/mysql_3308/data 
[root@idc1-slave1 ~]# /etc/init.d/mysql_3308 start 

[root@idc1-slave1 ~]# cd /opt/mysql_3308/data 

# When restoring from a Master backup, check xtrabackup_binlog_pos_innodb
[root@idc1-slave1 data]# cat xtrabackup_binlog_pos_innodb
./bin-log-mysqld.000222 222333

# When restoring from a Slave backup, check xtrabackup_slave_info
[root@idc1-slave1 data]# cat xtrabackup_slave_info 
CHANGE MASTER TO MASTER_LOG_FILE='bin-log-mysqld.000222', MASTER_LOG_POS=222333 

[root@idc1-slave1 data]# mysql_3308 -uroot -p 
mysql> change master to
master_host='idc1-master1',
master_port=3308,
master_user='backup-user',
master_password='backup-pass',
master_log_file='bin-log-mysqld.000222',
master_log_pos=222333;
mysql> start slave;
mysql> show slave status\G
mysql> exit;

7. Incremental backup and restore
Incremental backups take an existing full backup as the base; for InnoDB-based tables only the changed portion is backed up, while MyISAM tables are still copied in full each time.

Backup host:
run on the Slave server

Backup policy:
1 full backup per day + 2 incremental backups per day
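One way to drive this policy from a single cron entry is to pick the backup type from the current hour. A sketch under that assumption, with hours that are illustrative rather than from this article:

```shell
#!/bin/sh
# Decide the backup type from the hour of day:
# one full run, two incremental runs, nothing otherwise.
backup_type() {
  case $1 in
    03) echo full ;;
    11|19) echo incremental ;;
    *) echo none ;;
  esac
}
t1=$(backup_type 03)
t2=$(backup_type 11)
echo "$t1 $t2"
```

A single cron line such as `0 3,11,19 * * * /usr/local/bin/mysql-backup.sh` (a hypothetical script path) could then call backup_type "$(date +%H)" to choose which innobackupex command to run.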

Steps:
7.1 Incremental backup
7.1.1 Take the base full backup (compressed but not streamed):

[root@idc1-slave1 ~]# innobackupex --defaults-file=/etc/my_3308.cnf --user=backup-user --password=backup-pass --use-memory=4G --slave-info --safe-slave-backup --compress --compress-threads=8 /mysql-backup/3308

[root@idc1-slave1 ~]# ls -rt1 /mysql-backup/3308/ | tail -n 1
2015-10-26_06-48-33

[root@idc1-slave1 ~]# cat /mysql-backup/3308/2015-10-26_06-48-33/xtrabackup_checkpoints
backup_type = full-backuped
from_lsn = 0
to_lsn = 1631145
last_lsn = 1631145
compact = 0
recover_binlog_info = 0

7.1.2 Take the first incremental backup (compressed but not streamed):

[root@idc1-slave1 ~]# innobackupex --defaults-file=/etc/my_3308.cnf --user=backup-user --password=backup-pass --use-memory=4G --slave-info --safe-slave-backup --compress --compress-threads=8 --incremental /mysql-backup/3308 --incremental-basedir=/mysql-backup/3308/2015-10-26_06-48-33

[root@idc1-slave1 ~]# ls -rt1 /mysql-backup/3308/ | tail -n 1
2015-10-26_06-55-12

[root@idc1-slave1 ~]# cat /mysql-backup/3308/2015-10-26_06-55-12/xtrabackup_checkpoints
backup_type = incremental
from_lsn = 1631145
to_lsn = 1635418
last_lsn = 1635418
compact = 0
recover_binlog_info = 0

7.1.3 Take the second incremental backup (compressed but not streamed):

[root@idc1-slave1 ~]# innobackupex --defaults-file=/etc/my_3308.cnf --user=backup-user --password=backup-pass --use-memory=4G --slave-info --safe-slave-backup --compress --compress-threads=8 --incremental /mysql-backup/3308 --incremental-basedir=/mysql-backup/3308/2015-10-26_06-55-12

[root@idc1-slave1 ~]# ls -rt1 /mysql-backup/3308/ | tail -n 1
2015-10-26_06-59-49

[root@idc1-slave1 ~]# cat /mysql-backup/3308/2015-10-26_06-59-49/xtrabackup_checkpoints
backup_type = incremental
from_lsn = 1635418
to_lsn = 1639678
last_lsn = 1639678
compact = 0
recover_binlog_info = 0

7.2 Incremental restore:
7.2.1 Prepare the full backup (the --redo-only flag is required)

[root@idc1-slave1 ~]# innobackupex --decompress --parallel=4 /mysql-backup/3308/2015-10-26_06-48-33
[root@idc1-slave1 ~]# find /mysql-backup/3308/2015-10-26_06-48-33 -name "*.qp" -delete
[root@idc1-slave1 ~]# innobackupex --apply-log --redo-only --use-memory=4G /mysql-backup/3308/2015-10-26_06-48-33

7.2.2 Merge the first incremental (the --redo-only flag is required)

[root@idc1-slave1 ~]# innobackupex --decompress --parallel=4 /mysql-backup/3308/2015-10-26_06-55-12
[root@idc1-slave1 ~]# find /mysql-backup/3308/2015-10-26_06-55-12 -name "*.qp" -delete
[root@idc1-slave1 ~]# innobackupex --apply-log --redo-only --use-memory=4G /mysql-backup/3308/2015-10-26_06-48-33 --incremental-dir=/mysql-backup/3308/2015-10-26_06-55-12

7.2.3 Merge the second incremental (do not pass --redo-only when merging the last incremental)

[root@idc1-slave1 ~]# innobackupex --decompress --parallel=4 /mysql-backup/3308/2015-10-26_06-59-49
[root@idc1-slave1 ~]# find /mysql-backup/3308/2015-10-26_06-59-49 -name "*.qp" -delete
[root@idc1-slave1 ~]# innobackupex --apply-log --use-memory=4G /mysql-backup/3308/2015-10-26_06-48-33 --incremental-dir=/mysql-backup/3308/2015-10-26_06-59-49

7.2.4 Prepare the full backup (do not pass --redo-only when finalizing the full backup)

[root@idc1-slave1 ~]# innobackupex --apply-log --use-memory=4G /mysql-backup/3308/2015-10-26_06-48-33

7.2.5 Restore the full backup (follow the plain-backup restore above, from the --copy-back step onward)
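The --redo-only rules above are easy to get wrong once more incrementals accumulate: the flag goes on the full base and on every incremental except the last. The sketch below only prints the apply-log commands it would run, in order, using the directory names from this example; it does not execute anything:

```shell
#!/bin/sh
# Build the apply-log command sequence for a full backup plus incrementals,
# adding --redo-only everywhere except the final merge.
full=/mysql-backup/3308/2015-10-26_06-48-33
incs="/mysql-backup/3308/2015-10-26_06-55-12 /mysql-backup/3308/2015-10-26_06-59-49"
last=${incs##* }
plan="innobackupex --apply-log --redo-only --use-memory=4G $full"
for inc in $incs; do
  if [ "$inc" = "$last" ]; then
    flag=""
  else
    flag="--redo-only "
  fi
  plan="$plan
innobackupex --apply-log ${flag}--use-memory=4G $full --incremental-dir=$inc"
done
printf '%s\n' "$plan"
```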


Linux tip: rescuing an accidentally deleted file before its process exits

Background:
Today, chatting with some folks in an ops group, I was mainly complaining about how hard data recovery is on the EXT4 file system. I had tried recovery on both EXT3 and EXT4 before, mostly with ext3grep and extundelete; on EXT3 it basically succeeded every time, but on EXT4 it never did, and the recovered files were always corrupted or incomplete.

Some experienced members said they had recovered EXT4 files successfully, and that people who complain about Linux file-system recovery are usually just unfamiliar with the basics; study how Linux file systems handle blocks, inodes, and superblocks, and you will see file deletion quite differently.

I felt rather ashamed hearing that; I had indeed stayed at the tool level without digging into those fundamentals.

Then I brought up a scenario: a process keeps writing to a log file while it runs, and someone deletes that log by mistake. The group said recovery in that case is very easy, because the file still exists on the system; the only requirement is that the process keep running, for once it stops or restarts, the file is gone for good.

You have probably seen the related symptom: while freeing disk space you delete some large log files, yet no space is actually released until the service is restarted or the process is killed.

A quick search turned up this article: http://unix.stackexchange.com/questions/101237/how-to-recover-files-i-deleted-now-by-running-rm
and I successfully reproduced the scenario by writing a log with the ping command and then deleting it.

The steps are as follows:

[dong@idc1-dong1 ~]$ ping heylinux.com &> ping.output.log &
[1] 22672

[dong@idc1-dong1 ~]$ tail -n 5 ping.output.log
64 bytes from 54.238.131.140: icmp_seq=14 ttl=47 time=176 ms
64 bytes from 54.238.131.140: icmp_seq=15 ttl=47 time=126 ms
64 bytes from 54.238.131.140: icmp_seq=16 ttl=47 time=205 ms
64 bytes from 54.238.131.140: icmp_seq=17 ttl=47 time=121 ms
64 bytes from 54.238.131.140: icmp_seq=18 ttl=47 time=121 ms

[dong@idc1-dong1 ~]$ rm -f ping.output.log
[dong@idc1-dong1 ~]$ ls ping.output.log
ls: cannot access ping.output.log: No such file or directory

[dong@idc1-dong1 ~]$ sudo lsof | grep ping.output
ping 22672 dong 1w REG 253,0 2666 2016 /home/dong/ping.output.log (deleted)
ping 22672 dong 2w REG 253,0 2666 2016 /home/dong/ping.output.log (deleted)

[dong@idc1-dong1 ~]$ sudo -i
[root@idc1-dong1 ~]# cd /proc/22672/fd

[root@idc1-dong1 fd]# ll
total 0
lrwx------ 1 root root 64 Sep  1 11:23 0 -> /dev/pts/0
l-wx------ 1 root root 64 Sep  1 11:23 1 -> /home/dong/ping.output.log (deleted)
l-wx------ 1 root root 64 Sep  1 11:23 2 -> /home/dong/ping.output.log (deleted)
lrwx------ 1 root root 64 Sep  1 11:23 3 -> socket:[26968949]

[root@idc1-dong1 fd]# tail -n 5 1
64 bytes from 54.238.131.140: icmp_seq=119 ttl=47 time=161 ms
64 bytes from 54.238.131.140: icmp_seq=120 ttl=47 time=125 ms
64 bytes from 54.238.131.140: icmp_seq=121 ttl=47 time=198 ms
64 bytes from 54.238.131.140: icmp_seq=122 ttl=47 time=151 ms
64 bytes from 54.238.131.140: icmp_seq=123 ttl=47 time=135 ms
[root@idc1-dong1 fd]# tail -n 5 2
64 bytes from 54.238.131.140: icmp_seq=121 ttl=47 time=198 ms
64 bytes from 54.238.131.140: icmp_seq=122 ttl=47 time=151 ms
64 bytes from 54.238.131.140: icmp_seq=123 ttl=47 time=135 ms
64 bytes from 54.238.131.140: icmp_seq=124 ttl=47 time=135 ms
64 bytes from 54.238.131.140: icmp_seq=125 ttl=47 time=134 ms

[root@idc1-dong1 fd]# cp 1 /root/ping.output.log.recover

[root@idc1-dong1 fd]# cd
[root@idc1-dong1 ~]# head -n 5 ping.output.log.recover
PING heylinux.com (54.238.131.140) 56(84) bytes of data.
64 bytes from 54.238.131.140: icmp_seq=1 ttl=47 time=227 ms
64 bytes from 54.238.131.140: icmp_seq=2 ttl=47 time=196 ms
64 bytes from 54.238.131.140: icmp_seq=3 ttl=47 time=157 ms
64 bytes from 54.238.131.140: icmp_seq=4 ttl=47 time=235 ms
[root@idc1-dong1 ~]# tail -n 5 ping.output.log.recover
64 bytes from 54.238.131.140: icmp_seq=146 ttl=47 time=172 ms
64 bytes from 54.238.131.140: icmp_seq=147 ttl=47 time=132 ms
64 bytes from 54.238.131.140: icmp_seq=148 ttl=47 time=212 ms
64 bytes from 54.238.131.140: icmp_seq=149 ttl=47 time=172 ms
64 bytes from 54.238.131.140: icmp_seq=150 ttl=47 time=132 ms

[root@idc1-dong1 ~]# pkill -kill -f ping
[root@idc1-dong1 ~]# cd /proc/22672/fd
-bash: cd: /proc/22672/fd: No such file or directory
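The whole session above can be replayed as a self-contained sketch: a short-lived background writer stands in for ping, the paths are temporary, and the content is copied back out of /proc/<pid>/fd after the delete (a Linux /proc is assumed):

```shell
#!/bin/sh
# A background writer keeps a deleted file open; recover it via /proc.
tmpdir=$(mktemp -d)
( exec > "$tmpdir/out.log"; echo "still here"; sleep 30 ) &
pid=$!
sleep 1                                   # let the writer flush its line
rm -f "$tmpdir/out.log"                   # the "accidental" delete
cp "/proc/$pid/fd/1" "$tmpdir/recovered"  # fd 1 still points at the data
recovered=$(cat "$tmpdir/recovered")
kill "$pid" 2>/dev/null
echo "$recovered"
```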
