Sharing an Oozie Job Debug Script

References:
https://oozie.apache.org/docs/4.0.0/WebServicesAPI.html

Background:
Our production Hadoop cluster uses Oozie for workflow management, and quite a few workflows fail during execution for various reasons.
We used to troubleshoot them through the Oozie Web Console, but the process is cumbersome. After studying the Oozie API, I wrote a script that automates most of the troubleshooting steps for us.

How it works:
The script mimics the following troubleshooting procedure:
1. Fetch the information for every action in the workflow; the usual statuses are OK, RUNNING, FAILED, KILLED, and ERROR
2. For actions in the FAILED, KILLED, or ERROR state, first fetch the consoleUrl, then follow it to the more valuable logsLinks; print the relevant debugging information along the way and export the action's XML configuration
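The filtering part of the procedure can be sketched against the Oozie Web Services API (GET /v1/job/&lt;job-id&gt;?show=info returns the workflow's actions as JSON). The server URL, job id, and the simplified response below are illustrative placeholders, not real output:

```shell
# Placeholders -- point these at a real Oozie server and workflow job:
#OOZIE_URL="http://oozie-server:11000/oozie"
#JOB_ID="0000123-160415123456789-oozie-oozi-W"
#job_info=$(curl -s "${OOZIE_URL}/v1/job/${JOB_ID}?show=info")

# Simplified sample response, for illustration only:
job_info='{"actions":[
{"name":"step-1","status":"OK"},
{"name":"step-2","status":"FAILED","consoleUrl":"http://jt:50030/jobdetails.jsp?jobid=job_1"},
{"name":"step-3","status":"KILLED","consoleUrl":"http://jt:50030/jobdetails.jsp?jobid=job_2"}
]}'

# Keep only the actions whose status needs troubleshooting
printf '%s\n' "$job_info" | grep -E '"status":"(FAILED|KILLED|ERROR)"'
```

The actual debug_oozie_job.py goes further, following each consoleUrl to the logsLinks and exporting the action's XML; the grep here only illustrates the filtering step.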

Script: https://github.com/mcsrainbow/python-demos/blob/master/demos/debug_oozie_job.py

Execution example:


Sharing a RAID Disk Health Monitoring Script

References:
http://blog.irq1.com/megacli-commands-to-storcli-command-conversion/
https://github.com/mcsrainbow/shell-scripts/blob/master/scripts/MegaRAID_SUM

Background:
Our production environment has a large number of physical servers, mainly used for Hadoop clusters with demanding hardware requirements.
These servers typically have a RAID controller and 16 disks of at least 3 TB each. Because Hadoop workloads are I/O-intensive, disks frequently fail under the load, so checking RAID disk health is essential.

How it works:
The script's approach:
1. Use MegaCli64 to collect abnormal-status information; the usual states are Degraded, Offline, Critical, and Failed
2. Aggregate the abnormal states and extract the slot numbers of the problem disks
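The two steps can be sketched as below. The MegaCli64 path and the sample controller output are assumptions for illustration; on a real server you would feed the actual MegaCli64 output through the same pipeline:

```shell
# On a real server (the MegaCli64 path may vary):
#MEGACLI=/opt/MegaRAID/MegaCli/MegaCli64
#pd_info=$(${MEGACLI} -PDList -aALL -NoLog)

# Illustrative sample of the fields we care about:
pd_info='Slot Number: 4
Firmware state: Failed
Slot Number: 5
Firmware state: Online, Spun Up
Slot Number: 6
Firmware state: Offline'

# Step 1: collect abnormal states
echo "$pd_info" | grep -E 'Degraded|Offline|Critical|Failed'

# Step 2: extract the slot numbers of the problem disks
echo "$pd_info" | grep -B1 -E 'Firmware state: (Failed|Offline)' | grep 'Slot Number'
```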

Script: https://github.com/mcsrainbow/shell-scripts/blob/master/scripts/check_megaraid_status

Execution example:


A Roundup of Ops Tools: Performance Tuning, Monitoring, and Benchmarking

Background:
On the subject of ops tools, a predecessor has already used three charts to summarize performance tuning, monitoring, and benchmarking across every layer of the system.
I think they are well worth studying again, so I am posting the three charts here and plan to keep improving this article over time, adding a short introduction and example for each command.

The charts:
Performance tuning:
linux_tuning_tools

Performance monitoring:
linux_observability_tools

Performance benchmarking:
linux_benchmarking_tools

Tool details:
To be continued…


Sharing an HAProxy RPM SPEC and an HTTPS Load Balancing Configuration

Without further ado, here are the files:
haproxy-1.5.17.spec

haproxy.cfg


Giving the rm Command a Recycle Bin

Background:
In chat groups people often swap stories about the worst mistakes they have ever made, and the rm command always comes up, the most famous example being rm -rf /*.
Inspired by the HDFS trash mechanism, I wrote a quick shell script that provides a similar safety net.

Setup:
[dong@localhost ~]$ sudo touch /usr/bin/delete
[dong@localhost ~]$ sudo chmod +x /usr/bin/delete
[dong@localhost ~]$ sudo vim /usr/bin/delete
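The script body is not reproduced here; below is a minimal sketch consistent with the usage shown in the session that follows (a per-file prompt, a -f flag to skip the prompt, and moving targets into ~/.Trash/&lt;timestamp&gt;/&lt;original absolute path&gt;). The author's actual script may differ:

```shell
#!/bin/bash
# Sketch of /usr/bin/delete: a trash-style replacement for rm.
delete() {
  local trash_dir=${HOME}/.Trash/$(date +%Y%m%d%H%M%S)
  local force=false
  if [ "$1" = "-f" ]; then
    force=true
    shift
  fi
  local target dest answer
  for target in "$@"; do
    # Skip names that do not exist (but keep symlinks, even broken ones)
    [ -e "$target" ] || [ -L "$target" ] || continue
    # Recreate the original absolute path under the trash directory
    dest=${trash_dir}$(cd "$(dirname "$target")" && pwd)
    if ! $force; then
      read -p "Move $target to ${dest}/${target##*/}? [y/n] " answer
      [ "$answer" = "y" ] || continue
    fi
    mkdir -p "$dest"
    mv "$target" "$dest"/ && echo "Moved $target to ${dest}/${target##*/}"
  done
}
delete "$@"
```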

[dong@localhost ~]$ mkdir tmp
[dong@localhost ~]$ cd tmp
[dong@localhost tmp]$ mkdir 1 2 3
[dong@localhost tmp]$ echo 1 > 1/1.txt
[dong@localhost tmp]$ echo 2 > 2/2.txt
[dong@localhost tmp]$ echo 3 > 3/3.txt
[dong@localhost tmp]$ touch a b c
[dong@localhost tmp]$ ln -s a d
[dong@localhost tmp]$ delete 1

Move 1 to /home/dong/.Trash/20160415114210/home/dong/tmp/1? [y/n] y
Moved 1 to /home/dong/.Trash/20160415114210/home/dong/tmp/1

[dong@localhost tmp]$ delete -f *

Moved 2 to /home/dong/.Trash/20160415114217/home/dong/tmp/2
Moved 3 to /home/dong/.Trash/20160415114217/home/dong/tmp/3
Moved a to /home/dong/.Trash/20160415114217/home/dong/tmp/a
Moved b to /home/dong/.Trash/20160415114217/home/dong/tmp/b
Moved c to /home/dong/.Trash/20160415114217/home/dong/tmp/c
Moved d to /home/dong/.Trash/20160415114217/home/dong/tmp/d


Installing and Deploying Graphite on CentOS 6

References:
http://centoshowtos.org/monitoring/graphite/

Background:
Normally we put the important metrics into our monitoring system and graph them there.
Sometimes, though, we need ad-hoc graphs of some specific data, usually a pile of historical records being analyzed after the fact.
For example, our log-shipping system had recently been slow on certain nodes, and we wanted to find out during which time windows. We pulled the transfer details of the last 4 days of logs on those nodes from the history, including log sizes and transfer times; with Graphite it was very easy to import the data and graph it.

Setup:
Environment:
OS: CentOS 6.5 x86_64 Minimal

1. Install the EPEL repository
# yum install -y epel-release

# sed -i s/#baseurl=/baseurl=/g /etc/yum.repos.d/epel.repo
# sed -i s/mirrorlist=/#mirrorlist=/g /etc/yum.repos.d/epel.repo

# yum clean all

2. Install the required system packages
# yum install -y python python-devel python-pip
# yum groupinstall -y 'Development Tools'

3. Install and configure Graphite and its related packages (the MySQL part can be set up separately on a dedicated server)
# yum install -y graphite-web graphite-web-selinux mysql mysql-server MySQL-python

# mysql_secure_installation

Set root password? [Y/n] Y
New password: graphite
Re-enter new password: graphite
Remove anonymous users? [Y/n] Y
Disallow root login remotely? [Y/n] Y
Remove test database and access to it? [Y/n] Y
Reload privilege tables now? [Y/n] Y

# mysql -uroot -pgraphite

mysql> CREATE DATABASE graphite;
mysql> GRANT ALL PRIVILEGES ON graphite.* TO 'graphite'@'localhost' IDENTIFIED BY 'graphite';
mysql> FLUSH PRIVILEGES;
mysql> exit;

# vi /etc/graphite-web/local_settings.py

DATABASES = {
    'default': {
        'NAME': 'graphite',
        'ENGINE': 'django.db.backends.mysql',
        'USER': 'graphite',
        'PASSWORD': 'graphite',
    }
}

# /usr/lib/python2.6/site-packages/graphite/manage.py syncdb

Would you like to create one now? (yes/no): yes
Username (leave blank to use 'root'): root
E-mail address: guosuiyu@gmail.com
Password: graphite
Password (again): graphite

4. Start Apache to serve as Graphite's web server
# /etc/init.d/httpd start

5. Install the underlying graphing and data-collection packages
# yum install -y python-carbon python-whisper

6. Start the data-collection daemon
# /etc/init.d/carbon-cache start

7. Update the configuration to keep all data under the metrics directory for 90 days (by default only 1 day is kept, meaning no graphs older than 1 day). In a retention rule such as 60:90d, 60 is the resolution in seconds and 90d is how long data points are kept.
# vi /etc/carbon/storage-schemas.conf

[carbon]
priority = 101
pattern = ^carbon\.
retentions = 60:90d

[default_1min_for_90days]
priority = 100
pattern = .*
retentions = 60:90d

Send some test data:
# python /usr/share/doc/graphite-web-0.9.12/example-client.py

sending message
--------------------------------------------------------------------------------
system.loadavg_1min 0.01 1446086849
system.loadavg_5min 0.03 1446086849
system.loadavg_15min 0.05 1446086849

8. Check the processes now listening on the server
# netstat -lntp

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address     Foreign Address     State       PID/Program name
tcp        0      0 0.0.0.0:80        0.0.0.0:*           LISTEN      2131/httpd
tcp        0      0 0.0.0.0:2003      0.0.0.0:*           LISTEN      2210/python
tcp        0      0 0.0.0.0:2004      0.0.0.0:*           LISTEN      2210/python
tcp        0      0 0.0.0.0:22        0.0.0.0:*           LISTEN      1566/sshd
tcp        0      0 127.0.0.1:25      0.0.0.0:*           LISTEN      976/master
tcp        0      0 0.0.0.0:7002      0.0.0.0:*           LISTEN      2210/python
tcp        0      0 0.0.0.0:3306      0.0.0.0:*           LISTEN      2063/mysqld

9. Generate 24 hours of simulated data to try out Graphite's graphing
Install the nc command:
# yum install -y nc

Create a script that generates the simulated data:
# vi feed_random_data.sh

#!/bin/bash
#
# Generate random pageview data and feed Graphite

tree_path=metrics.random.pageview
time_period_hours=24

now_timestamp=$(date +%s)
minutes_number=$((${time_period_hours}*60))

echo ${minutes_number}
timestamp=$((${now_timestamp}-${minutes_number}*60))
for i in $(seq 1 ${minutes_number}); do
  echo "echo ${tree_path} $(($RANDOM%5000)) ${timestamp} | nc localhost 2003"
  echo ${tree_path} $(($RANDOM%5000)) ${timestamp} | nc localhost 2003
  let timestamp+=60
done

Run the script to feed the data to Graphite. When using nc, the line format is fixed:

echo <metric path> <value> <timestamp> | nc <server address> <server port>

For example:

echo metrics.random.pageview 3680 1446095415 | nc localhost 2003

# chmod +x feed_random_data.sh
# ./feed_random_data.sh

Of course, you can also feed the data with Python, following the example-client.py script mentioned above.
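Carbon's plaintext protocol is simple enough to wrap in a tiny helper; the function name below is just an illustration:

```shell
# Format one datapoint in Carbon's plaintext protocol:
# "<metric path> <value> <timestamp>" (timestamp defaults to now)
format_datapoint() {
  printf '%s %s %s\n' "$1" "$2" "${3:-$(date +%s)}"
}

# Pipe the result to the carbon-cache line receiver on port 2003:
#format_datapoint metrics.random.pageview 3680 1446095415 | nc localhost 2003
format_datapoint metrics.random.pageview 3680 1446095415
```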

Then open the Graphite web UI and you will see a graph like this:
Graphite

After logging in as root/graphite, you can also create a dashboard that puts several graphs side by side for easy viewing:
graphite_dashboard

Graphite can also render images through an API, which makes them easy to fetch programmatically, as shown below:
API URL: http://graphite.readthedocs.org/en/latest/render_api.html
graphite_api


Recommending a Great Way to Learn English

Learning English well is not just about passing exams; listening and speaking matter even more.
Here I want to share a great method a foreign teacher once taught me. I have tried it myself and it works wonderfully; you even get the humor that only exists in the original English.
The steps:
1. Search online for the script of an English-language movie
2. Read the script and learn all the unfamiliar words
3. Find the movie
4. Watch it with the subtitles turned off (including the English ones)


Measuring Network Bandwidth Between Hosts with iperf

References:
https://blogs.oracle.com/mandalika/entry/measuring_network_bandwidth_using_iperf

Background:
When debugging a network you often need to measure the maximum bandwidth between two hosts. I have always used the iperf command, which works well and is accurate, but I noticed that some fellow ops people do not know about it, so I decided to write a short introduction.

Steps:
OS: CentOS 6.5 x86_64 Minimal
Servers:
192.168.10.11
192.168.10.12

[root@192.168.10.11 ~]# yum install http://dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
[root@192.168.10.11 ~]# yum install iperf

[root@192.168.10.12 ~]# yum install http://dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
[root@192.168.10.12 ~]# yum install iperf

[root@192.168.10.12 ~]# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)

[root@192.168.10.11 ~]# iperf -c 192.168.10.12
------------------------------------------------------------
Client connecting to 192.168.10.12, TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.10.11 port 23351 connected with 192.168.10.12 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 744 MBytes 624 Mbits/sec

[root@192.168.10.12 ~]# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 192.168.10.12 port 5001 connected with 192.168.10.11 port 23351
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 744 MBytes 623 Mbits/sec


Deploying a Shadowsocks Server on CentOS 6

References:
http://www.shadowsocks.org

Background:
Compared with a VPN, standing up a Shadowsocks service and using it through a browser proxy is much more convenient.
The principle is similar to an SSH tunnel: the Shadowsocks server establishes an encrypted tunnel with its dedicated Shadowsocks client, and the client listens on a local port, 1080 by default; all traffic through that local port goes over the encrypted tunnel.

Setup:
OS: CentOS 6.4 x86_64 Minimal

1. Install the Shadowsocks server
# pip install shadowsocks

2. Configure /etc/shadowsocks.json
# vim /etc/shadowsocks.json

{
  "server": "0.0.0.0",
  "server_port": 443,
  "local_address": "127.0.0.1",
  "local_port": 1080,
  "password": "shadowsockspass",
  "timeout": 600,
  "method": "aes-256-cfb",
  "fast_open": false,
  "workers": 1
}

Notes on the configuration above:
"server": "0.0.0.0" makes the server listen on any address
"server_port": 443 sets the server's listening port to 443
"local_address": "127.0.0.1" sets the client's local listening address to 127.0.0.1
"local_port": 1080 sets the client's local listening port to 1080
"password": "shadowsockspass" sets the password to shadowsockspass
"timeout": 600 sets the connection timeout to 600 seconds
"method": "aes-256-cfb" sets the encryption method to aes-256-cfb
"fast_open": false leaves the fast_open feature disabled (the default)
"workers": 1 sets the number of worker processes to 1

3. Edit /etc/sysctl.conf and append the following:
# vim /etc/sysctl.conf

# For shadowsocks
fs.file-max = 65535
net.core.rmem_max = 67108864
net.core.wmem_max = 67108864
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_keepalive_time = 1200
net.ipv4.tcp_max_syn_backlog = 8192
net.ipv4.tcp_max_tw_buckets = 5120
net.ipv4.tcp_mem = 25600 51200 102400
net.ipv4.tcp_rmem = 4096 87380 67108864
net.ipv4.tcp_wmem = 4096 65536 67108864
net.ipv4.tcp_mtu_probing = 1
net.ipv4.tcp_congestion_control = hybla

# sysctl -p

4. Start the Shadowsocks service
# ssserver -c /etc/shadowsocks.json -d start

# netstat -lntp | grep 443

tcp      0      0      0.0.0.0:443      0.0.0.0:*      LISTEN      11037/python

5. Download a Shadowsocks client
Windows: https://github.com/shadowsocks/shadowsocks-csharp/releases/download/2.5.6/Shadowsocks-win-2.5.6.zip
Mac OS X: https://github.com/shadowsocks/shadowsocks-iOS/releases/download/2.6.3/ShadowsocksX-2.6.3.dmg

6. Configure the client
Create a server connection and enter:
Server address, e.g. heylinux.com
Port: 443
Encryption method: aes-256-cfb
Password: shadowsockspass

Start the client and keep it running. Choose Auto Proxy Mode (the default) and run Update PAC from GFWList once, as shown below:
shadowsocks_client

7. Configure the browser plugin
Install the Proxy SwitchySharp extension: https://chrome.google.com/webstore/detail/dpplabbmogkhghncfbfdeeokoefdjegm

Configure the extension as shown below:
proxy_switchsharp

Enable the proxy profile you just configured: shadowsocks

8. Done


Deploying a PPTP VPN Server on CentOS 6

References:
https://www.digitalocean.com/community/tutorials/how-to-setup-your-own-vpn-with-pptp

Background:
Setting up a PPTP VPN server should be very easy, yet quite a few friends still come to me for help after following various articles and taking plenty of detours.
So I figured I should write my own walkthrough. My habit is to write as I operate, so you can follow the steps one by one to completion, which is what everyone prefers to read.

Setup:
OS: CentOS 6.4 x86_64 Minimal

1. Install the EPEL repository
# yum install http://dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm

2. Install the PPTP repository
# yum install http://poptop.sourceforge.net/yum/stable/rhel6/pptp-release-current.noarch.rpm

3. Install the PPTP VPN server
# yum install pptpd

4. Edit /etc/pptpd.conf
# vim /etc/pptpd.conf

###############################################################################
# $Id: pptpd.conf,v 1.11 2011/05/19 00:02:50 quozl Exp $
#
# Sample Poptop configuration file /etc/pptpd.conf
#
# Changes are effective when pptpd is restarted.
###############################################################################

# TAG: ppp
#	Path to the pppd program, default '/usr/sbin/pppd' on Linux
#
#ppp /usr/sbin/pppd

# TAG: option
#	Specifies the location of the PPP options file.
#	By default PPP looks in '/etc/ppp/options'
#
option /etc/ppp/options.pptpd

# TAG: debug
#	Turns on (more) debugging to syslog
#
debug

# TAG: stimeout
#	Specifies timeout (in seconds) on starting ctrl connection
#
stimeout 120

# TAG: noipparam
#       Suppress the passing of the client's IP address to PPP, which is
#       done by default otherwise.
#
#noipparam

# TAG: logwtmp
#	Use wtmp(5) to record client connections and disconnections.
#
#logwtmp

# TAG: vrf <vrfname>
#	Switches PPTP & GRE sockets to the specified VRF, which must exist
#	Only available if VRF support was compiled into pptpd.
#
#vrf test

# TAG: bcrelay <if>
#	Turns on broadcast relay to clients from interface <if>
#
#bcrelay eth1

# TAG: delegate
#	Delegates the allocation of client IP addresses to pppd.
#
#       Without this option, which is the default, pptpd manages the list of
#       IP addresses for clients and passes the next free address to pppd.
#       With this option, pptpd does not pass an address, and so pppd may use
#       radius or chap-secrets to allocate an address.
#
#delegate

# TAG: connections
#       Limits the number of client connections that may be accepted.
#
#       If pptpd is allocating IP addresses (e.g. delegate is not
#       used) then the number of connections is also limited by the
#       remoteip option.  The default is 100.
#connections 100

# TAG: localip
# TAG: remoteip
#	Specifies the local and remote IP address ranges.
#
#	These options are ignored if delegate option is set.
#
#       Any addresses work as long as the local machine takes care of the
#       routing.  But if you want to use MS-Windows networking, you should
#       use IP addresses out of the LAN address space and use the proxyarp
#       option in the pppd options file, or run bcrelay.
#
#	You can specify single IP addresses seperated by commas or you can
#	specify ranges, or both. For example:
#
#		192.168.0.234,192.168.0.245-249,192.168.0.254
#
#	IMPORTANT RESTRICTIONS:
#
#	1. No spaces are permitted between commas or within addresses.
#
#	2. If you give more IP addresses than the value of connections,
#	   it will start at the beginning of the list and go until it
#	   gets connections IPs.  Others will be ignored.
#
#	3. No shortcuts in ranges! ie. 234-8 does not mean 234 to 238,
#	   you must type 234-238 if you mean this.
#
#	4. If you give a single localIP, that's ok - all local IPs will
#	   be set to the given one. You MUST still give at least one remote
#	   IP for each simultaneous client.
#
# (Recommended)
#localip 192.168.0.1
#remoteip 192.168.0.234-238,192.168.0.245
# or
#localip 192.168.0.234-238,192.168.0.245
#remoteip 192.168.1.234-238,192.168.1.245
localip 10.192.168.1
remoteip 10.192.168.100-200

Notes on the configuration above:
option /etc/ppp/options.pptpd specifies the path of the PPP options file
debug turns on debug logging
stimeout 120 sets a 120-second timeout for establishing the control connection
localip 10.192.168.1 is the PPTP VPN server's local address, i.e. the gateway address clients will automatically receive
remoteip 10.192.168.100-200 is the address range assigned to clients

5. Edit /etc/ppp/options.pptpd

###############################################################################
# $Id: options.pptpd,v 1.11 2005/12/29 01:21:09 quozl Exp $
#
# Sample Poptop PPP options file /etc/ppp/options.pptpd
# Options used by PPP when a connection arrives from a client.
# This file is pointed to by /etc/pptpd.conf option keyword.
# Changes are effective on the next connection.  See "man pppd".
#
# You are expected to change this file to suit your system.  As
# packaged, it requires PPP 2.4.2 and the kernel MPPE module.
###############################################################################


# Authentication

# Name of the local system for authentication purposes
# (must match the second field in /etc/ppp/chap-secrets entries)
name ec2-tokyo

# Strip the domain prefix from the username before authentication.
# (applies if you use pppd with chapms-strip-domain patch)
#chapms-strip-domain


# Encryption
# (There have been multiple versions of PPP with encryption support,
# choose with of the following sections you will use.)


# BSD licensed ppp-2.4.2 upstream with MPPE only, kernel module ppp_mppe.o
# {{{
refuse-pap
refuse-chap
refuse-mschap
# Require the peer to authenticate itself using MS-CHAPv2 [Microsoft
# Challenge Handshake Authentication Protocol, Version 2] authentication.
require-mschap-v2
# Require MPPE 128-bit encryption
# (note that MPPE requires the use of MSCHAP-V2 during authentication)
require-mppe-128
# }}}


# OpenSSL licensed ppp-2.4.1 fork with MPPE only, kernel module mppe.o
# {{{
#-chap
#-chapms
# Require the peer to authenticate itself using MS-CHAPv2 [Microsoft
# Challenge Handshake Authentication Protocol, Version 2] authentication.
#+chapms-v2
# Require MPPE encryption
# (note that MPPE requires the use of MSCHAP-V2 during authentication)
#mppe-40	# enable either 40-bit or 128-bit, not both
#mppe-128
#mppe-stateless
# }}}


# Network and Routing

# If pppd is acting as a server for Microsoft Windows clients, this
# option allows pppd to supply one or two DNS (Domain Name Server)
# addresses to the clients.  The first instance of this option
# specifies the primary DNS address; the second instance (if given)
# specifies the secondary DNS address.
#ms-dns 10.0.0.1
#ms-dns 10.0.0.2
ms-dns 172.31.0.2

# If pppd is acting as a server for Microsoft Windows or "Samba"
# clients, this option allows pppd to supply one or two WINS (Windows
# Internet Name Services) server addresses to the clients.  The first
# instance of this option specifies the primary WINS address; the
# second instance (if given) specifies the secondary WINS address.
#ms-wins 10.0.0.3
#ms-wins 10.0.0.4

# Add an entry to this system's ARP [Address Resolution Protocol]
# table with the IP address of the peer and the Ethernet address of this
# system.  This will have the effect of making the peer appear to other
# systems to be on the local ethernet.
# (you do not need this if your PPTP server is responsible for routing
# packets to the clients -- James Cameron)
proxyarp

# Normally pptpd passes the IP address to pppd, but if pptpd has been
# given the delegate option in pptpd.conf or the --delegate command line
# option, then pppd will use chap-secrets or radius to allocate the
# client IP address.  The default local IP address used at the server
# end is often the same as the address of the server.  To override this,
# specify the local IP address here.
# (you must not use this unless you have used the delegate option)
#10.8.0.100


# Logging

# Enable connection debugging facilities.
# (see your syslog configuration for where pppd sends to)
debug

# Print out all the option values which have been set.
# (often requested by mailing list to verify options)
dump


# Miscellaneous

# Create a UUCP-style lock file for the pseudo-tty to ensure exclusive
# access.
lock

# Disable BSD-Compress compression
nobsdcomp

# Disable Van Jacobson compression
# (needed on some networks with Windows 9x/ME/XP clients, see posting to
# poptop-server on 14th April 2005 by Pawel Pokrywka and followups,
# http://marc.theaimsgroup.com/?t=111343175400006&r=1&w=2 )
novj
novjccomp

# turn off logging to stderr, since this may be redirected to pptpd,
# which may trigger a loopback
nologfd

# put plugins here
# (putting them higher up may cause them to sent messages to the pty)

logfile /var/log/pptpd.log
multilink

Notes on the configuration above:
name ec2-tokyo sets the PPTP VPN server's service name
The encryption rules are:
refuse-pap
refuse-chap
refuse-mschap
require-mschap-v2
require-mppe-128
ms-dns 172.31.0.2 is the DNS address pushed to clients (I usually use the default DNS of the server the PPTP VPN runs on)
proxyarp lets hosts on the same LAN see each other through the PPTP VPN server
debug turns on debug output
Some common settings are enabled:
dump
lock
nobsdcomp
novj
novjccomp
nologfd
logfile /var/log/pptpd.log sets the log file location
multilink allows bundling multiple physical channels into a single logical channel

6. Edit the account and password file /etc/ppp/chap-secrets
# vim /etc/ppp/chap-secrets

# Secrets for authentication using CHAP
# client	server	secret			IP addresses
"username"  *       "password"        *

7. Edit /etc/sysconfig/iptables-config
Change IPTABLES_MODULES="" to IPTABLES_MODULES="ip_nat_pptp" so that the module is loaded automatically when the iptables service starts.
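This change can also be made with sed; the helper below is a sketch that takes the file path as an argument and keeps a .bak backup (the function name is just an illustration):

```shell
# Switch IPTABLES_MODULES="" to IPTABLES_MODULES="ip_nat_pptp" in the
# given iptables-config file, keeping a .bak backup of the original.
enable_pptp_nat_module() {
  sed -i.bak 's/^IPTABLES_MODULES=""$/IPTABLES_MODULES="ip_nat_pptp"/' "$1"
}

# On the server itself:
#enable_pptp_nat_module /etc/sysconfig/iptables-config
```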

8. Edit /etc/sysconfig/iptables (assuming eth0 is the interface with the public IP)
# vim /etc/sysconfig/iptables

*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p gre -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 1723 -j ACCEPT
-A INPUT -s 10.192.168.0/255.255.255.0 -m state --state NEW -m tcp -p tcp -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT
*nat
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A POSTROUTING -s 10.192.168.0/255.255.255.0 -o eth0 -j MASQUERADE
COMMIT

Notes on the iptables rules above:
Allow all GRE protocol packets;
Allow TCP port 1723;
Allow the entire PPTP VPN LAN segment 10.192.168.0/24;
NAT the PPTP VPN LAN segment 10.192.168.0/24 out through eth0 (MASQUERADE) so that clients share the server's Internet access.

9. Enable IP forwarding by editing /etc/sysctl.conf
Change net.ipv4.ip_forward = 0 to net.ipv4.ip_forward = 1
Then run sysctl -p

10. Start the PPTP VPN server
# /etc/init.d/pptpd restart
# /etc/init.d/iptables restart

11. Enable the pptpd and iptables services at boot
# chkconfig pptpd on
# chkconfig iptables on

12. Configure a client on your local PC and connect to the PPTP VPN server

13. Done
