Automated Deployment of an nginx + keepalived Highly Available Load Balancer with Ansible

Table of Contents


  • 1. Preparation
  • 2. SSH key authentication between the Ansible host and the remote hosts
  • 3. Installing and configuring Ansible
  • 4. Writing the web role
  • 5. Writing the roles to deploy nginx + keepalived
  • 6. Writing the ha_proxy role
  • 7. Other notes

This article documents an automated deployment of a highly available nginx load balancer with Ansible. The front-end proxy layer runs nginx + keepalived, and the back end runs three nginx web servers so the load-balancing effect is visible. The host layout is as follows:

1. Preparation

Host plan

  • Ansible : 192.168.214.144
  • Keepalived-node-1 : 192.168.214.148
  • Keepalived-node-2 : 192.168.214.143
  • web1 : 192.168.214.133
  • web2 : 192.168.214.135
  • web3 : 192.168.214.139
2. SSH key authentication between the Ansible host and the remote hosts
#!/bin/bash
# Generate a key pair on the Ansible host and push it to every host in iplist.txt.
keypath=/root/.ssh
[ -d ${keypath} ] || mkdir -p ${keypath}
rpm -q expect &> /dev/null || yum install expect -y
ssh-keygen -t rsa -f /root/.ssh/id_rsa -P ""
password=centos
while read ip;do
expect <<EOF
set timeout 5
spawn ssh-copy-id $ip
expect {
    "yes/no" { send "yes\n";exp_continue }
    "password" { send "$password\n" }
}
expect eof
EOF
done < /home/iplist.txt

iplist.txt

192.168.214.148
192.168.214.143
192.168.214.133
192.168.214.135
192.168.214.139
192.168.214.134

Run the script

[[email protected] script]# ./autokey.sh  

Verify

[[email protected] script]# ssh 192.168.214.148 'date'
Address 192.168.214.148 maps to localhost, but this does not map back to the address - POSSIBLE BREAK-IN ATTEMPT!
Sat Jul 14 11:35:21 CST 2018

Configure name-based access on the Ansible host so individual remote hosts are easier to manage:

vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.214.148 node-1
192.168.214.143 node-2
192.168.214.133 web-1
192.168.214.135 web-2
192.168.214.139 web-3
3. Installing and configuring Ansible
# Install Ansible
[[email protected] ~]# yum install ansible -y

# Configure the Ansible inventory
[[email protected] ~]# vim /etc/ansible/hosts
[all]
192.168.214.148
192.168.214.143
192.168.214.133
192.168.214.135
192.168.214.139

[node]
192.168.214.148
192.168.214.143

[web]
192.168.214.133
192.168.214.135
192.168.214.139

# Ping all hosts through Ansible
[[email protected] ~]# ansible all -m ping
4. Writing the web role

First, the directory layout of the web role:

[[email protected] ~]# tree /opt/roles/web
/opt/roles/web
.
├── tasks
│   ├── install_nginx.yml
│   ├── main.yml
│   ├── start.yml
│   ├── temps.yml
│   └── user.yml
└── templates
    ├── index.html.j2
    └── nginx.conf.j2

2 directories, 7 files

The task files below are written in the order the role executes them.

Write user.yml

- name: create group nginx
  group: name=nginx
- name: create user nginx
  user: name=nginx group=nginx system=yes shell=/sbin/nologin

Write install_nginx.yml

- name: install nginx webserver
  yum: name=nginx

Create the template for the nginx configuration file

Since this is a test setup, the back-end nginx.conf stays mostly at its defaults; only the worker process count is adjusted to each host, using the ansible_processor_vcpus variable from Ansible's setup module to obtain the remote host's CPU count.

# Turn the configuration file into a template
[[email protected] conf]# cp nginx.conf /opt/roles/web/templates/nginx.conf.j2
# The only line changed:
worker_processes {{ ansible_processor_vcpus }};

# Write a test page in the templates directory
vim index.html.j2
{{ ansible_hostname }} test page.
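The ansible_processor_vcpus fact carries the remote host's logical CPU count, which typically matches what nproc reports on that host; a quick local sanity check:

```shell
# nproc prints the number of logical CPUs available to the process --
# the same count ansible_processor_vcpus reports for a host, so the
# rendered template line becomes e.g. "worker_processes 2;"
nproc
```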

Write temps.yml

- name: copy nginx.conf.j2 to the web servers as nginx.conf
  template: src=/opt/roles/web/templates/nginx.conf.j2 dest=/etc/nginx/nginx.conf
- name: copy the test index page to the web servers
  template: src=/opt/roles/web/templates/index.html.j2 dest=/usr/share/nginx/html/index.html

Write start.yml

- name: start nginx
  service: name=nginx state=started

Write main.yml

- import_tasks: user.yml
- import_tasks: install_nginx.yml
- import_tasks: temps.yml
- import_tasks: start.yml

Write the playbook web_install.yml that applies the role. The playbook must not live inside the web role directory; it normally sits in the roles directory itself:

[[email protected] ~]# vim /opt/roles/web_install.yml

---
- hosts: web
  remote_user: root
  roles:
    - web

Dry-run before installing; the -C option runs the playbook in check mode:

[[email protected] ~]# ansible-playbook -C /opt/roles/web_install.yml

If no problems show up, run the actual installation:

[[email protected] ~]# ansible-playbook /opt/roles/web_install.yml  

Test access (flush iptables first so the requests are not blocked):

[[email protected] ~]# ansible web -m shell -a 'iptables -F'
192.168.214.139 | SUCCESS | rc=0 >>

192.168.214.135 | SUCCESS | rc=0 >>

192.168.214.133 | SUCCESS | rc=0 >>

[[email protected] ~]# curl 192.168.214.133
web-1 test page.
5. Writing the roles to deploy nginx + keepalived

When deploying a high-availability cluster, pay attention to the clocks on all nodes, including the back-end hosts; keep the time consistent across every machine.

[[email protected] ~]# ansible all -m shell -a 'yum install ntpdate -y'
[[email protected] ~]# ansible all -m shell -a 'ntpdate gudaoyufu.com'
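A one-shot ntpdate only fixes the clocks once; to keep them from drifting apart again, a recurring sync could be pushed with Ansible's cron module. A hypothetical extra task (not part of the original setup), reusing the same NTP source:

```yaml
- name: periodic ntp sync
  cron: name="ntp sync" minute="*/30" job="/usr/sbin/ntpdate gudaoyufu.com &>/dev/null"
```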
6. Writing the ha_proxy role

Write user.yml

- name: create nginx group
  group: name=nginx
- name: create nginx user
  user: name=nginx group=nginx system=yes shell=/sbin/nologin

Write install_server.yml

- name: install nginx and keepalived
  yum: name={{ item }} state=latest
  with_items:
    - nginx
    - keepalived
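with_items works here, but on Ansible 2.5 and later the same task is usually written with loop (or by passing the package list directly to yum's name); a style note only:

```yaml
- name: install nginx and keepalived
  yum: name={{ item }} state=latest
  loop:
    - nginx
    - keepalived
```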

Write temps.yml

- name: copy nginx proxy conf and rename
  template: src=/opt/roles/ha_proxy/templates/nginx.conf.j2 dest=/etc/nginx/nginx.conf

- name: copy master_keepalived.conf.j2 to MASTER node
  when: ansible_hostname == "node-1"
  template: src=/opt/roles/ha_proxy/templates/master_keepalived.conf.j2 dest=/etc/keepalived/keepalived.conf

- name: copy backup_keepalived.conf.j2 to BACKUP node
  when: ansible_hostname == "node-2"
  template: src=/opt/roles/ha_proxy/templates/backup_keepalived.conf.j2 dest=/etc/keepalived/keepalived.conf

Create the nginx proxy configuration template

[[email protected] ~]# cp /opt/conf/nginx.conf /opt/roles/ha_proxy/templates/nginx.conf.j2
[[email protected] ~]# vim /opt/roles/ha_proxy/templates/nginx.conf.j2

user nginx;
worker_processes {{ ansible_processor_vcpus }};
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

# Load dynamic modules. See /usr/share/nginx/README.dynamic.

events {
    worker_connections  1024;
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    include /etc/nginx/conf.d/*.conf;

    upstream web {
        server 192.168.214.133:80 max_fails=3 fail_timeout=30s;
        server 192.168.214.135:80 max_fails=3 fail_timeout=30s;
        server 192.168.214.139:80 max_fails=3 fail_timeout=30s;
    }

    server {
        listen       80 default_server;
        server_name  {{ ansible_hostname }};
        root         /usr/share/nginx/html;
        index index.html index.php;

        location / {
            proxy_pass http://web;
        }

        error_page 404 /404.html;
    }
}

Create the keepalived configuration template

[[email protected] ~]# cp /opt/conf/keepalived.conf /opt/roles/ha_proxy/templates/master_keepalived.conf.j2  

[[email protected] templates]# vim master_keepalived.conf.j2

! Configuration File for keepalived

global_defs {
    notification_email {
        [email protected]
        [email protected]
        [email protected]
    }
    notification_email_from [email protected]
    smtp_server 192.168.214.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
    vrrp_skip_check_adv_addr
    vrrp_strict
    vrrp_garp_interval 0
    vrrp_gna_interval 0
    vrrp_iptables
    vrrp_mcast_group4 224.17.17.17
}

vrrp_script chk_nginx {
    script "killall -0 nginx"
    interval 1
    weight -20
    fall 2
    rise 1
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 55
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 12345678
    }
    virtual_ipaddress {
        192.168.214.100
    }
    track_script {
        chk_nginx
    }
}

Likewise, take master_keepalived.conf.j2 as the base, save it as backup_keepalived.conf.j2, and change only the role (state) and the priority. Note: the weight by which the health check in master_keepalived.conf.j2 demotes the MASTER on failure must be large enough that the demoted MASTER priority falls below the BACKUP priority.
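A sketch of the lines that differ in backup_keepalived.conf.j2 (priority 90 matches the BACKUP advertisements in the tcpdump capture later; with weight -20 the failed MASTER drops to 100 - 20 = 80 < 90, so failover triggers):

```
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 55
    priority 90
    ...
}
```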
Write start.yml

- name: start nginx proxy server
  service: name=nginx state=started

Write main.yml

- import_tasks: user.yml
- import_tasks: install_server.yml
- import_tasks: temps.yml
- import_tasks: start.yml

Write the playbook

[[email protected] ~]# vim /opt/roles/ha_proxy_install.yml

---
- hosts: node
  remote_user: root
  roles:
    - ha_proxy

Check-mode run of the role:

[[email protected] ~]# ansible-playbook -C /opt/roles/ha_proxy_install.yml  

If the dry run is clean, run the actual deployment.

The run looks like this:

[[email protected] ~]# ansible-playbook /opt/roles/ha_proxy_install.yml

PLAY [node] **********************************************************************************************************************

TASK [Gathering Facts] ***********************************************************************************************************
ok: [192.168.214.148]
ok: [192.168.214.143]

TASK [ha_proxy : create nginx group] *********************************************************************************************
changed: [192.168.214.148]
ok: [192.168.214.143]

TASK [ha_proxy : create nginx user] **********************************************************************************************
changed: [192.168.214.148]
ok: [192.168.214.143]

TASK [ha_proxy : install nginx and keepalived] ***********************************************************************************
changed: [192.168.214.143] => (item=[u'nginx', u'keepalived'])
changed: [192.168.214.148] => (item=[u'nginx', u'keepalived'])

TASK [ha_proxy : copy nginx proxy conf and rename] *******************************************************************************
changed: [192.168.214.148]
changed: [192.168.214.143]

TASK [ha_proxy : copy master_keepalived.conf.j2 to MASTER node] ******************************************************************
skipping: [192.168.214.143]
changed: [192.168.214.148]

TASK [ha_proxy : copy backup_keepalived.conf.j2 to BACKUP node] ******************************************************************
skipping: [192.168.214.148]
changed: [192.168.214.143]

TASK [ha_proxy : start nginx proxy server] ***************************************************************************************
changed: [192.168.214.143]
changed: [192.168.214.148]

PLAY RECAP ***********************************************************************************************************************
192.168.214.143            : ok=7    changed=4    unreachable=0    failed=0
192.168.214.148            : ok=7    changed=6    unreachable=0    failed=0

That completes the automated deployment of the nginx + keepalived highly available load balancer.

Finally, the full structure of the roles directory:

[[email protected] ~]# tree /opt/roles/
/opt/roles/
├── ha_proxy
│   ├── tasks
│   │   ├── install_server.yml
│   │   ├── main.yml
│   │   ├── start.yml
│   │   ├── temps.yml
│   │   └── user.yml
│   └── templates
│       ├── backup_keepalived.conf.j2
│       ├── master_keepalived.conf.j2
│       └── nginx.conf.j2
├── ha_proxy_install.retry
├── ha_proxy_install.yml
├── web
│   ├── tasks
│   │   ├── install_nginx.yml
│   │   ├── main.yml
│   │   ├── start.yml
│   │   ├── temps.yml
│   │   └── user.yml
│   └── templates
│       ├── index.html.j2
│       └── nginx.conf.j2
├── web_install.retry
└── web_install.yml

6 directories, 19 files

Now test the services. keepalived is not started automatically by the Ansible role; start it manually on each keepalived node.
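Alternatively, the role could start keepalived itself; a hypothetical extra task for the ha_proxy start.yml (enabled=yes also makes it start on boot):

```yaml
- name: start keepalived
  service: name=keepalived state=started enabled=yes
```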

Test the proxy nodes

[[email protected] ~]# for i in {1..10};do curl 192.168.214.148;done
web-3 test page.
web-1 test page.
web-2 test page.
web-3 test page.
web-1 test page.
web-2 test page.
web-3 test page.
web-1 test page.
web-2 test page.
web-3 test page.

Stop the MASTER's service on node-1 to test failover, watching node-2's state change at the same time.

Run: nginx -s stop

Watch the VRRP advertisements; the master/backup switchover works correctly:

[[email protected] ~]# tcpdump -i ens33 -nn host 224.17.17.17
listening on ens33, link-type EN10MB (Ethernet), capture size 262144 bytes
16:55:20.804327 IP 192.168.214.148 > 224.17.17.17: VRRPv2, Advertisement, vrid 55, prio 100, authtype simple, intvl 1s, length 20
16:55:25.476397 IP 192.168.214.148 > 224.17.17.17: VRRPv2, Advertisement, vrid 55, prio 0, authtype simple, intvl 1s, length 20
16:55:26.128474 IP 192.168.214.143 > 224.17.17.17: VRRPv2, Advertisement, vrid 55, prio 90, authtype simple, intvl 1s, length 20
16:55:27.133349 IP 192.168.214.143 > 224.17.17.17: VRRPv2, Advertisement, vrid 55, prio 90, authtype simple, intvl 1s, length 20

Test access again:

[[email protected] ~]# for i in {1..10};do curl 192.168.214.148;done
web-1 test page.
web-2 test page.
web-3 test page.
web-1 test page.
web-2 test page.
web-3 test page.
web-1 test page.
web-2 test page.
web-3 test page.
web-1 test page.

node-1 recovers and takes back the MASTER role

Start nginx again on node-1; the VIP floats back to node-1. Test access:

[[email protected] ~]# for i in {1..10};do curl 192.168.214.148;done
web-1 test page.
web-2 test page.
web-3 test page.
web-1 test page.
web-2 test page.
web-3 test page.
web-1 test page.
web-2 test page.
web-3 test page.
web-1 test page.
7. Other notes

The deployment above still has room for improvement. For example, many of the parameters in the keepalived configuration could be defined once as variables in the role and then referenced from the template files.
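A sketch of that refactoring, with hypothetical variable names (not taken from the original roles):

```yaml
# ha_proxy/defaults/main.yml (hypothetical)
vrrp_router_id: 55
vrrp_auth_pass: "12345678"
vrrp_vip: 192.168.214.100
```

Both keepalived templates would then reference {{ vrrp_router_id }}, {{ vrrp_auth_pass }}, and {{ vrrp_vip }} instead of literal values, so the MASTER and BACKUP configs stay in sync from a single place.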

One more caveat: the keepalived configuration uses the killall command to check the local nginx status, and when the check exits non-zero the vrrp_script triggers the priority demotion. Make sure the command actually exists on the system (it is provided by the psmisc package and is sometimes not installed); if killall is missing, no state change will happen even when the MASTER's nginx fails.
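killall -0 nginx sends signal 0, which delivers nothing and only asks the kernel whether a process with that name exists; keepalived keys off the exit status. The same semantics can be demonstrated with kill and a PID (a sketch; killall itself comes from psmisc):

```shell
# Signal 0 performs an existence/permission check only; no signal is delivered.
kill -0 $$ && echo "process exists"          # the current shell certainly exists
kill -0 99999999 2>/dev/null || echo "gone"  # an invalid PID fails the check
```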

Original article: gudaoyufu -> http://gudaoyufu.com/?p=1183
