Articles from April, 2014

A Simple Script to Automatically Block Web Attack Source IPs

Recently my VPS load has often been quite high. After taking a closer look, I found that a number of IPs were sending requests to the blog at a very high frequency, most of them aimed at the comment and admin login pages.
This kept the FastCGI processes extremely busy, driving the VPS's already feeble CPU to constantly high utilization.
So I wrote a simple script that runs every minute, extracts these IPs from the server logs, and blocks them directly with iptables. It has been working quite well.

Judging from my server's current access logs, if more than 500 out of the last 10,000 requests come from a single IP, the attack signature is very obvious.
In addition, for the admin login and comment pages I set a much stricter rule: any IP with more than 50 attempts among the last 10,000 POST requests gets blocked.

The script is as follows:
$ vim /home/rainbow/sbin/block_attack_ips.sh

#!/bin/bash

logfiles=(
/webserver/blog/logs/rainbow_access.log
/webserver/blog/logs/eric_access.log
)

whitelist=$(last | awk '{print $3}' | grep "^[1-9]" | sort | uniq | xargs)
whitelist+=" 127.0.0.1 172.31.23.107 52.69.213.155"

function check_root(){
  if [ $EUID -ne 0 ]; then
    echo "This script must be run as root"
    exit 1
  fi
}

function block_ips(){
  blacklist="$@"
  if [ ! -z "${blacklist}" ]; then
    for ip in ${blacklist}
    do
      if ! echo "${whitelist}" | grep -wq "${ip}"; then
        if ! /sbin/iptables-save | grep -wq "${ip}"; then
          echo "Blocked ${ip}"
          /sbin/iptables -I INPUT -s ${ip}/32 -p tcp -m tcp --dport 80 -j DROP
        fi
      fi
    done
  fi
}

function check_post(){
  page=$1
  tailnum=$2
  retry=$3

  command="grep -w POST ${logfile} |tail -n ${tailnum} |grep -w ${page} |awk '{print \$1}' |sort |uniq -c |awk '(\$1 > ${retry}){print \$2}'"
  blacklist=$(eval ${command})
  block_ips ${blacklist}
}

function check_all(){
  tailnum=$1
  retry=$2

  command="tail -n ${tailnum} ${logfile} |awk '{print \$1}' |sort |uniq -c |awk '(\$1 > ${retry}){print \$2}'"
  blacklist=$(eval ${command})
  block_ips ${blacklist}
}

check_root
for logfile in "${logfiles[@]}"
do
  check_post wp-login.php 10000 50
  check_post wp-comments-post.php 10000 50
  check_all 10000 500
done

$ chmod +x /home/rainbow/sbin/block_attack_ips.sh

Set up crontab jobs to run the check every minute, and to restart the iptables service monthly to clear out old entries:
$ sudo crontab -e

*/1 * * * * /home/rainbow/sbin/block_attack_ips.sh
00 01 1 * * /etc/init.d/iptables restart
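
To spot-check the result, the DROP rules the script has inserted can be listed directly (a quick sanity check, matching the rule format used above):

$ sudo /sbin/iptables -nL INPUT | grep DROP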


Automatically Installing and Deploying a CDH5 Cluster with Ansible

Background:
I previously wrote two series of articles about installing and deploying CDH3 and CDH4:
http://heylinux.com/archives/1980.html
http://heylinux.com/archives/2827.html

But as CDH5 has evolved, especially with the maturing of Namenode high availability, the project decided to migrate all existing Hadoop clusters to CDH5.
For easier management, I used Ansible to automate the installation and deployment of the entire CDH5 cluster, and it has been running well in production.

Configuration:
I have put all of the configuration on GitHub, in the following repository:
https://github.com/mcsrainbow/ansible-playbooks-cdh5
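
A typical run simply points ansible-playbook at the repository's inventory and entry playbook (the file names here are illustrative; see the repository's README for the actual ones):

$ ansible-playbook -i hosts site.yml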



Monitoring Processes and System Status with Monit

Background:
As the number of production servers grows and all kinds of open-source software and tools come into wide use, services occasionally stop on their own or become unresponsive.
A large share of these incidents come from the stability of the software itself or from hardware resource limits. In principle, each should be traced to its root cause so it never happens again.

But reality is harsh: a lot of software still has stability problems, and adding hardware costs money. So in a clustered environment with redundancy, simply restarting the affected service is often the most practical choice.

This is not hard to do, and there are many ways to achieve it, such as adding Actions or Commands to Zabbix or Nagios alerts, or writing your own script and running it from cron.

This article, however, introduces a tool built precisely for this job: Monit.
Its biggest strengths are a simple, readable configuration file and support for monitoring both processes and system status. It offers flexible check types and intervals, along with alerting and automatic responses (restarting a service, executing a command, and so on).

System environment:
OS: CentOS 6.4 x86_64 Minimal

Configuration:
1. Install the EPEL repository
# yum install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

2. Install the Monit package
# yum install monit

3. Configure Monit's general settings, including the built-in HTTP status page, email alerts, etc.
# vim /etc/monit.conf

###############################################################################
## Monit control file
###############################################################################
##
## Comments begin with a '#' and extend through the end of the line. Keywords
## are case insensitive. All path's MUST BE FULLY QUALIFIED, starting with '/'.
##
## Below you will find examples of some frequently used statements. For 
## information about the control file and a complete list of statements and 
## options, please have a look in the Monit manual.
##
##
###############################################################################
## Global section
###############################################################################
##
## Start Monit in the background (run as a daemon):
#
set daemon 120            # check services at 2-minute intervals
    with start delay 60   # optional: delay the first check by 1 minute (by
#                         # default Monit checks immediately after Monit start)
#
#
## Set syslog logging with the 'daemon' facility. If the FACILITY option is
## omitted, Monit will use 'user' facility by default. If you want to log to 
## a standalone log file instead, specify the full path to the log file
#
# set logfile syslog facility log_daemon                       
#
#
### Set the location of the Monit id file which stores the unique id for the
### Monit instance. The id is generated and stored on first Monit start. By 
### default the file is placed in $HOME/.monit.id.
#
set idfile /var/run/monit/.monit.id
#
### Set the location of the Monit state file which saves monitoring states
### on each cycle. By default the file is placed in $HOME/.monit.state. If
### the state file is stored on a persistent filesystem, Monit will recover
### the monitoring state across reboots. If it is on temporary filesystem, the
### state will be lost on reboot which may be convenient in some situations.
#
set statefile /var/run/monit/.monit.state
#
## Set the list of mail servers for alert delivery. Multiple servers may be 
## specified using a comma separator. By default Monit uses port 25 - it is
## possible to override this with the PORT option.
#
set mailserver localhost
# set mailserver mail.bar.baz,               # primary mailserver
#                backup.bar.baz port 10025,  # backup mailserver on port 10025
#                localhost                   # fallback relay
#
#
## By default Monit will drop alert events if no mail servers are available. 
## If you want to keep the alerts for later delivery retry, you can use the 
## EVENTQUEUE statement. The base directory where undelivered alerts will be 
## stored is specified by the BASEDIR option. You can limit the maximal queue
## size using the SLOTS option (if omitted, the queue is limited by space 
## available in the back end filesystem).
#
set eventqueue
    basedir /var/run/monit  # set the base directory where events will be stored
#   slots 100               # optionally limit the queue size
#
#
## Send status and events to M/Monit (for more informations about M/Monit 
## see http://mmonit.com/).
#
# set mmonit http://monit:monit@192.168.1.10:8080/collector
#
#
## Monit by default uses the following alert mail format:
##
## --8< --
## From: monit@$HOST                                     # sender
## Subject: Monit Alert - Event:$EVENT Service:$SERVICE  # subject
## 
## Event:$EVENT Service:$SERVICE             #
##                                           #
## 	Date:        $DATE                   #
## 	Action:      $ACTION                 #
## 	Host:        $HOST                   # body
## 	Description: $DESCRIPTION            #
##                                           #
## Your faithful employee,                   #
## Monit                                     #
## --8<--
##
## You can override this message format or parts of it, such as subject
## or sender using the MAIL-FORMAT statement. Macros such as $DATE, etc.
## are expanded at runtime. For example, to override the sender, use:
#
set mail-format { 
  from: monit@heylinux.com
  subject: [$SERVICE] $EVENT
  message:
[$SERVICE] $EVENT

  Date:        $DATE
  Action:      $ACTION
  Host:        heylinux.com
  Description: $DESCRIPTION

Your faithful employee,                   
Monit }
#
#
## You can set alert recipients whom will receive alerts if/when a 
## service defined in this file has errors. Alerts may be restricted on 
## events by using a filter as in the second example below. 
#
set alert guosuiyu@foxmail.com
# set alert sysadm@foo.bar                       # receive all alerts
# set alert manager@foo.bar only on { timeout }  # receive just service-
#                                                # timeout alert
#
#
## Monit has an embedded web server which can be used to view status of 
## services monitored and manage services from a web interface. See the
## Monit Wiki if you want to enable SSL for the web server. 
#
set httpd port 2812 and
    use address localhost  # only accept connection from localhost
    allow localhost        # allow localhost to connect to the server and
#     allow admin:monit      # require user 'admin' with password 'monit'
#     allow @monit           # allow users of group 'monit' to connect (rw)
#     allow @users readonly  # allow users of group 'users' to connect readonly
#
#
###############################################################################
## Services
###############################################################################
##
## Check general system resources such as load average, cpu and memory
## usage. Each test specifies a resource, conditions and the action to be
## performed should a test fail.
#
#  check system myhost.mydomain.tld
#    if loadavg (1min) > 4 then alert
#    if loadavg (5min) > 2 then alert
#    if memory usage > 75% then alert
#    if cpu usage (user) > 70% then alert
#    if cpu usage (system) > 30% then alert
#    if cpu usage (wait) > 20% then alert
#
#    
## Check a file for existence, checksum, permissions, uid and gid. In addition
## to alert recipients in the global section, customized alert can be sent to 
## additional recipients by specifying a local alert handler. The service may 
## be grouped using the GROUP option. More than one group can be specified by
## repeating the 'group name' statement.
#    
#  check file apache_bin with path /usr/local/apache/bin/httpd
#    if failed checksum and 
#       expect the sum 8f7f419955cefa0b33a2ba316cba3659 then unmonitor
#    if failed permission 755 then unmonitor
#    if failed uid root then unmonitor
#    if failed gid root then unmonitor
#    alert security@foo.bar on {
#           checksum, permission, uid, gid, unmonitor
#        } with the mail-format { subject: Alarm! }
#    group server
#
#    
## Check that a process is running, in this case Apache, and that it respond
## to HTTP and HTTPS requests. Check its resource usage such as cpu and memory,
## and number of children. If the process is not running, Monit will restart 
## it by default. In case the service is restarted very often and the 
## problem remains, it is possible to disable monitoring using the TIMEOUT
## statement. This service depends on another service (apache_bin) which
## is defined above.
#    
#  check process apache with pidfile /usr/local/apache/logs/httpd.pid
#    start program = "/etc/init.d/httpd start" with timeout 60 seconds
#    stop program  = "/etc/init.d/httpd stop"
#    if cpu > 60% for 2 cycles then alert
#    if cpu > 80% for 5 cycles then restart
#    if totalmem > 200.0 MB for 5 cycles then restart
#    if children > 250 then restart
#    if loadavg(5min) greater than 10 for 8 cycles then stop
#    if failed host www.tildeslash.com port 80 protocol http
#       and request "/somefile.html"
#       then restart
#    if failed port 443 type tcpssl protocol http
#       with timeout 15 seconds
#       then restart
#    if 3 restarts within 5 cycles then timeout
#    depends on apache_bin
#    group server
#    
#    
## Check filesystem permissions, uid, gid, space and inode usage. Other services,
## such as databases, may depend on this resource and an automatically graceful
## stop may be cascaded to them before the filesystem will become full and data
## lost.
#
#  check filesystem datafs with path /dev/sdb1
#    start program  = "/bin/mount /data"
#    stop program  = "/bin/umount /data"
#    if failed permission 660 then unmonitor
#    if failed uid root then unmonitor
#    if failed gid disk then unmonitor
#    if space usage > 80% for 5 times within 15 cycles then alert
#    if space usage > 99% then stop
#    if inode usage > 30000 then alert
#    if inode usage > 99% then stop
#    group server
#
#
## Check a file's timestamp. In this example, we test if a file is older 
## than 15 minutes and assume something is wrong if its not updated. Also,
## if the file size exceed a given limit, execute a script
#
#  check file database with path /data/mydatabase.db
#    if failed permission 700 then alert
#    if failed uid data then alert
#    if failed gid data then alert
#    if timestamp > 15 minutes then alert
#    if size > 100 MB then exec "/my/cleanup/script" as uid dba and gid dba
#
#
## Check directory permission, uid and gid.  An event is triggered if the 
## directory does not belong to the user with uid 0 and gid 0.  In addition, 
## the permissions have to match the octal description of 755 (see chmod(1)).
#
#  check directory bin with path /bin
#    if failed permission 755 then unmonitor
#    if failed uid 0 then unmonitor
#    if failed gid 0 then unmonitor
#
#
## Check a remote host availability by issuing a ping test and check the 
## content of a response from a web server. Up to three pings are sent and 
## connection to a port and an application level network check is performed.
#
#  check host myserver with address 192.168.1.1
#    if failed icmp type echo count 3 with timeout 3 seconds then alert
#    if failed port 3306 protocol mysql with timeout 15 seconds then alert
#    if failed url http://user:password@www.foo.bar:8080/?querystring
#       and content == 'action="j_security_check"'
#       then alert
#
#
###############################################################################
## Includes
###############################################################################
##
## It is possible to include additional configuration parts from other files or
## directories.
#
#  include /etc/monit.d/*
#
#

# Include all files from /etc/monit.d/
include /etc/monit.d/*
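
As a concrete example of such an included file, here is a minimal per-service check (a hypothetical sketch for sshd on CentOS 6; adjust the pidfile and init script paths to your system):

# vim /etc/monit.d/sshd

check process sshd with pidfile /var/run/sshd.pid
  start program = "/etc/init.d/sshd start"
  stop program  = "/etc/init.d/sshd stop"
  if failed port 22 protocol ssh then restart
  if 5 restarts within 5 cycles then timeout

After editing, reload Monit so it picks up the new file:

# monit reload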



A Handy Ops Trick: Adding Auto-Completion to ssh Commands

Background:
As the number of servers in the project keeps growing, now close to 1,000, using PuTTY or XShell on Windows has become very inefficient.
With auto-scaling in production, the set of servers changes constantly, so it is simply impossible to create them all in these client tools and keep them in sync, and memorizing all those hostnames is hard too.

So I went back to the Linux command-line terminal. I created several aliases to cover the different ssh parameter combinations, for example some hosts need a key while others need a password. I then keep all the servers in a text file maintained as a list, and use its contents as the auto-completion list for these aliases. It works very nicely.

Configuration:
Define the aliases
[dong.guo@heydevops ~]$ vim .bashrc

alias sshdevops='ssh -t -i /home/dong.guo/workspace/sshkeys/key_devops -l devops'
alias sshrootpass='sshpass -p "password" ssh -l root'
alias sshrootkey='ssh -t -i /home/dong.guo/workspace/sshkeys/key_root -l root'

Create the server list file and fill in all the server hostnames
[dong.guo@heydevops ~]$ head -n 10 /home/dong.guo/workspace/autocomp/servers.list

idc1-server1
idc1-server2
idc1-server3
idc2-server1
idc2-server2
idc2-server3
idc3-server1
idc3-server2
idc3-server3

Add auto-completion to the aliases
[dong.guo@heydevops ~]$ vim .bashrc

# Enable auto-completion of servers via sshdevops, sshrootpass and sshrootkey
function _servers() {
  COMPREPLY=()
  local cur=${COMP_WORDS[COMP_CWORD]}
  local com=${COMP_WORDS[COMP_CWORD-1]}
  case $com in
    'sshdevops'|'sshrootpass'|'sshrootkey')
      local servers=($(cat /home/dong.guo/workspace/autocomp/servers.list))
      COMPREPLY=($(compgen -W '${servers[@]}' -- $cur))
      ;;
  esac
}

complete -F _servers sshdevops
complete -F _servers sshrootpass
complete -F _servers sshrootkey

Apply the configuration
[dong.guo@heydevops ~]$ source .bashrc
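
To confirm that the completions are registered, complete -p lists the active bindings:

[dong.guo@heydevops ~]$ complete -p sshdevops
complete -F _servers sshdevops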

Gracefully hit the TAB key and enjoy the pleasure of auto-completion, as shown in the screenshot below:
[Screenshot: hostnames being tab-completed after the sshdevops alias]


Installing and Configuring GlusterFS on CentOS 6.4

References:
http://www.sohailriaz.com/glusterfs-howto-on-centos-6-x/
http://navyaijm.blog.51cto.com/4647068/1258250

Background:
The project currently uses rsync for file synchronization. While looking into replacing it with a distributed filesystem, I tried MooseFS, but the results were underwhelming. Then I learned about GlusterFS and decided to give it a try: compared with MooseFS it seems simpler to deploy, and having no metadata server means there is no single point of failure, which is very appealing.

Environment:
OS: CentOS 6.4 x86_64 Minimal
Servers: idc1-server1,idc1-server2,idc1-server3,idc1-server4
Client: idc1-server5

Steps:
1. Install the GlusterFS packages on idc1-server{1-4}:
# wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
# yum install -y glusterfs glusterfs-server glusterfs-fuse

# /etc/init.d/glusterd start
# chkconfig glusterd on

2. Configure the whole GlusterFS cluster from idc1-server1:
[root@idc1-server1 ~]# gluster peer probe idc1-server1

peer probe: success: on localhost not needed

[root@idc1-server1 ~]# gluster peer probe idc1-server2

peer probe: success

[root@idc1-server1 ~]# gluster peer probe idc1-server3

peer probe: success

[root@idc1-server1 ~]# gluster peer probe idc1-server4

peer probe: success

Note:
In some cases, idc1-server1 may be recorded in the peer list as an IP address, which can cause communication problems.
Assuming idc1-server1's IP address is 10.100.1.11, the following steps fix this manually.
[root@idc1-server2 ~]# gluster peer detach 10.100.1.11

peer detach: success

[root@idc1-server2 ~]# gluster peer probe idc1-server1

peer probe: success

[root@idc1-server2 ~]# gluster peer status

Number of Peers: 3
  
Hostname: idc1-server3
Uuid: 01f25251-9ee6-40c7-a322-af53a034aa5a
State: Peer in Cluster (Connected)
  
Hostname: idc1-server4
Uuid: 212295a6-1f38-4a1e-968c-577241318ff1
State: Peer in Cluster (Connected)
  
Hostname: idc1-server1
Port: 24007
Uuid: ed016c4e-7159-433f-88a5-5c3ebd8e36c9
State: Peer in Cluster (Connected)

3. Create the GlusterFS volumes on idc1-server1:
[root@idc1-server1 ~]# gluster volume create datavolume1 replica 2 transport tcp idc1-server1:/usr/local/share/datavolume1 idc1-server2:/usr/local/share/datavolume1 idc1-server3:/usr/local/share/datavolume1 idc1-server4:/usr/local/share/datavolume1 force

volume create: datavolume1: success: please start the volume to access data

[root@idc1-server1 ~]# gluster volume create datavolume2 replica 2 transport tcp idc1-server1:/usr/local/share/datavolume2 idc1-server2:/usr/local/share/datavolume2 idc1-server3:/usr/local/share/datavolume2 idc1-server4:/usr/local/share/datavolume2 force

volume create: datavolume2: success: please start the volume to access data

[root@idc1-server1 ~]# gluster volume create datavolume3 replica 2 transport tcp idc1-server1:/usr/local/share/datavolume3 idc1-server2:/usr/local/share/datavolume3 idc1-server3:/usr/local/share/datavolume3 idc1-server4:/usr/local/share/datavolume3 force

volume create: datavolume3: success: please start the volume to access data
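
The output reminds us that each volume must be started before it can be accessed. A sketch of the remaining steps, starting a volume on a server and mounting it on the client idc1-server5 (the mount point here is illustrative):

[root@idc1-server1 ~]# gluster volume start datavolume1

[root@idc1-server5 ~]# yum install -y glusterfs glusterfs-fuse
[root@idc1-server5 ~]# mkdir -p /mnt/datavolume1
[root@idc1-server5 ~]# mount -t glusterfs idc1-server1:/datavolume1 /mnt/datavolume1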


