nginx+lua in an Account System

Our account system serves multiple products, so it has to handle high concurrency. It also has to propagate state across products: if a user changes their password in product A, product B must log that user out the next time it hits an authenticated endpoint.

We did not adopt OAuth 2.0 for the account system; instead we handle user authentication with plain JWTs (JSON Web Tokens). The account system therefore has to provide an API that checks whether a user's password has changed.

I won't go into JWT in depth here; if you're unfamiliar with it, a web search will turn up plenty of material. A JWT has three segments: xxx.yyy.zzz. The important claims go in the payload (the yyy segment), which is base64url-encoded and plays roughly the role of the user information we would otherwise keep in a session. The payload can also be encrypted if needed.
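As a quick illustration of that base64 decoding, here is a short Python sketch using only the standard library. The token is fabricated on the spot (the signature is a fake placeholder), so this only shows how the payload segment is read, not how a signature is verified:

```python
import base64
import json

def decode_jwt_payload(token):
    """Base64url-decode the middle (payload) segment of a JWT.
    Note: this only READS the claims; it does NOT verify the signature."""
    payload_b64 = token.split(".")[1]
    # base64url encoders often strip the '=' padding; restore it
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# build a made-up token whose payload is {"sub": 2, "passwd_seq": 1}
header  = base64.urlsafe_b64encode(b'{"typ":"JWT","alg":"HS256"}').rstrip(b"=")
payload = base64.urlsafe_b64encode(b'{"sub":2,"passwd_seq":1}').rstrip(b"=")
token   = b".".join([header, payload, b"fakesig"]).decode()

print(decode_jwt_payload(token))  # → {'sub': 2, 'passwd_seq': 1}
```

This is exactly why sensitive data must never go into an unencrypted payload: anyone holding the token can decode it.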

The payload carries some registered (standard) claims. sub identifies the subject, so the user's id (or phone number) is a natural value for it. Beyond the registered claims we can define private ones: for example seq, which a single application can use to enforce single-device login. Here we add a global field representing the password state, or more broadly the user state, so we can freeze or unfreeze an account or kick a user out of their logged-in sessions. Let's solve the password-state problem first: add a field passwd_seq with an initial value of 1, and increment it every time the password is updated. Every authenticated endpoint in every application has to validate the password state, so they all call the validation endpoint (/token). Once the token is stale, the endpoint returns 401 and the user must log in again.
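The check itself is tiny. A minimal Python sketch of the scheme just described (the `passwd_seq` claim and `user:<id>` key layout follow the text above; Redis is stubbed out with a plain dict for illustration):

```python
def check_password_state(token_payload, stored_seq):
    """A token is still valid only if the passwd_seq claim baked into
    it matches the current value stored server-side, which is
    incremented on every password change."""
    return token_payload.get("passwd_seq") == stored_seq

# simulate the Redis store: user 2 changed their password once, seq is now 2
store = {"user:2": 2}

old_token = {"sub": 2, "passwd_seq": 1}   # issued before the change
new_token = {"sub": 2, "passwd_seq": 2}   # issued after re-login

print(check_password_state(old_token, store["user:2"]))  # → False, so 401
print(check_password_state(new_token, store["user:2"]))  # → True
```

Because the comparison happens on every authenticated call, the lookup has to be cheap, which is why the counter lives in Redis rather than the main database.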

A first survey of project A shows roughly 100,000 API calls per day across a few hundred endpoints, with registration a negligible fraction. In other words, the /token endpoint alone will be called at least 100,000 times a day.

I ran the tests on my own machine; the hardware configuration is shown in the screenshot:

Redis benchmark numbers:

$ redis-benchmark -t set,get
====== SET ======
  100000 requests completed in 2.65 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

32.18% <= 1 milliseconds
98.80% <= 2 milliseconds
99.62% <= 3 milliseconds
99.71% <= 4 milliseconds
99.79% <= 5 milliseconds
99.93% <= 6 milliseconds
99.95% <= 10 milliseconds
99.95% <= 11 milliseconds
99.96% <= 12 milliseconds
99.97% <= 13 milliseconds
99.98% <= 14 milliseconds
99.99% <= 15 milliseconds
99.99% <= 16 milliseconds
100.00% <= 16 milliseconds
37664.79 requests per second

====== GET ======
  100000 requests completed in 2.60 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

34.93% <= 1 milliseconds
99.62% <= 2 milliseconds
99.91% <= 3 milliseconds
100.00% <= 3 milliseconds
38491.14 requests per second

Next, a test of the nginx+lua endpoint reading from Redis, run on a single core.

The nginx configuration used for the test:

worker_processes  1;   # we could enlarge this setting on a multi-core machine
error_log  logs/error.log warn;

events {
    worker_connections  2048;
}

http {
    lua_package_path 'conf/?.lua;;';

    server {
        listen       8080;
        server_name  localhost;

        #lua_code_cache on;

        location /token {
            access_by_lua_block {
                local jwt = require "resty.jwt"
                local foo = require "foo"

                local err_msg = {
                    x_token  = {err = "set x-token in request, please!"},
                    payload  = {err = "payload not found"},
                    user_key = {err = "user configuration is invalid"},
                    password = {err = "password was changed, please log in again"},
                    ok       = {ok = "this is my own error page content"},
                }

                -- make sure the token header was sent at all
                local req_headers = ngx.req.get_headers()
                if req_headers.x_token == nil then
                    foo:response_json(422, err_msg.x_token)
                    return
                end

                local jwt_obj = jwt:load_jwt(req_headers.x_token)
                if jwt_obj.payload == nil then
                    foo:response_json(422, err_msg.payload)
                    return
                end

                local redis = require "resty.redis"
                local red = redis:new()

                red:set_timeout(1000) -- 1 sec

                local ok, err = red:connect("127.0.0.1", 6379)
                if not ok then
                    ngx.say("failed to connect: ", err)
                    return
                end

                -- note how AUTH is only sent on fresh connections,
                -- never on ones reused from the pool
                local count
                count, err = red:get_reused_times()
                if 0 == count then
                    ok, err = red:auth("test123456")
                    if not ok then
                        ngx.say("failed to auth: ", err)
                        return
                    end
                elseif err then
                    ngx.say("failed to get reused times: ", err)
                    return
                end

                local sub = jwt_obj.payload.sub
                local user_key
                user_key, err = red:get('user:' .. sub)

                -- put the connection back into the pool (size 200, max idle
                -- time 10 seconds) before responding, so that none of the
                -- response branches below can skip this call
                ok, err = red:set_keepalive(10000, 200)
                if not ok then
                    ngx.log(ngx.WARN, "failed to set keepalive: ", err)
                end

                if user_key == ngx.null then
                    foo:response_json(500, err_msg.user_key)
                elseif tonumber(user_key) > 3 then
                    foo:response_json(401, err_msg.password)
                else
                    foo:response_json(200, err_msg.ok)
                end
            }
        }
    }
}


Test results:

$ ab -c10 -n5000 -H 'x-token: eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJodHRwOi8vYWNjb3VudC1hcGkuc3VwZXJtYW4yMDE0LmNvbS9sb2dpbiIsImlhdCI6MTUwNTQ2Njg5OSwiZXhwIjoxNTA1NDcwNDk5LCJuYmYiOjE1MDU0NjY4OTksImp0aSI6ImJ0TWFISmltYmtxSGVUdTEiLCJzdWIiOjIsInBydiI6Ijg3MTNkZTA0NTllYTk1YjA0OTk4NmFjNThlYmY1NmNkYjEwMGY4NTUifQ.yiXqkHBZrYXuxtUlSiy5Ialle--q_88G32lxUsDZO0k' http://127.0.0.1:8080/token
This is ApacheBench, Version 2.3 <$Revision: 1757674 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient)
Completed 500 requests
Completed 1000 requests
Completed 1500 requests
Completed 2000 requests
Completed 2500 requests
Completed 3000 requests
Completed 3500 requests
Completed 4000 requests
Completed 4500 requests
Completed 5000 requests
Finished 5000 requests


Server Software:        openresty/1.11.2.5
Server Hostname:        127.0.0.1
Server Port:            8080

Document Path:          /token
Document Length:        175 bytes

Concurrency Level:      10
Time taken for tests:   0.681 seconds
Complete requests:      5000
Failed requests:        0
Non-2xx responses:      5000
Total transferred:      1655000 bytes
HTML transferred:       875000 bytes
Requests per second:    7344.73 [#/sec] (mean)
Time per request:       1.362 [ms] (mean)
Time per request:       0.136 [ms] (mean, across all concurrent requests)
Transfer rate:          2374.13 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    1   0.2      1       4
Processing:     0    1   0.4      1       5
Waiting:        0    1   0.4      1       4
Total:          1    1   0.5      1       6

Percentage of the requests served within a certain time (ms)
  50%      1
  66%      1
  75%      1
  80%      1
  90%      2
  95%      2
  98%      3
  99%      4
 100%      6 (longest request)

So a single core sustains over 7k qps, with the Lua code essentially unoptimized. An earlier PHP implementation of the same check measured around 60 qps (feel free to reproduce the comparison yourself).

At that rate, the daily ceiling for a single core on a single machine is about 7,000 req/s x 86,400 s = 604,800,000, i.e. roughly 600 million requests per day.
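The arithmetic behind that estimate, spelled out:

```python
qps = 7000                      # measured single-core throughput, rounded down
seconds_per_day = 24 * 60 * 60  # 86,400 seconds in a day

requests_per_day = qps * seconds_per_day
print(requests_per_day)  # → 604800000, i.e. ~600 million requests/day
```

Against project A's measured load of roughly 100,000 calls a day, that leaves several orders of magnitude of headroom on a single core.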

After a little tuning (enabling lua_code_cache in nginx and running 2 workers), the results look like this:

$ ab -c10 -n5000 -H 'x-token: eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJodHRwOi8vYWNjb3VudC1hcGkuc3VwZXJtYW4yMDE0LmNvbS9sb2dpbiIsImlhdCI6MTUwNTQ2Njg5OSwiZXhwIjoxNTA1NDcwNDk5LCJuYmYiOjE1MDU0NjY4OTksImp0aSI6ImJ0TWFISmltYmtxSGVUdTEiLCJzdWIiOjIsInBydiI6Ijg3MTNkZTA0NTllYTk1YjA0OTk4NmFjNThlYmY1NmNkYjEwMGY4NTUifQ.yiXqkHBZrYXuxtUlSiy5Ialle--q_88G32lxUsDZO0k' http://127.0.0.1:8080/token
This is ApacheBench, Version 2.3 <$Revision: 1757674 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient)
Completed 500 requests
Completed 1000 requests
Completed 1500 requests
Completed 2000 requests
Completed 2500 requests
Completed 3000 requests
Completed 3500 requests
Completed 4000 requests
Completed 4500 requests
Completed 5000 requests
Finished 5000 requests


Server Software:        openresty/1.11.2.5
Server Hostname:        127.0.0.1
Server Port:            8080

Document Path:          /token
Document Length:        175 bytes

Concurrency Level:      10
Time taken for tests:   0.608 seconds
Complete requests:      5000
Failed requests:        0
Non-2xx responses:      5000
Total transferred:      1655000 bytes
HTML transferred:       875000 bytes
Requests per second:    8217.29 [#/sec] (mean)
Time per request:       1.217 [ms] (mean)
Time per request:       0.122 [ms] (mean, across all concurrent requests)
Transfer rate:          2656.18 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    1   0.2      1       2
Processing:     0    1   0.2      1       2
Waiting:        0    1   0.2      1       2
Total:          1    1   0.3      1       3

Percentage of the requests served within a certain time (ms)
  50%      1
  66%      1
  75%      1
  80%      1
  90%      1
  95%      1
  98%      2
  99%      3
 100%      3 (longest request)

Besides ab, you can also benchmark with boom (written in Python) or hey (written in Go), both available on GitHub. In my experience those two tools give more stable results than ab.

For deployment you can use the open-source walle project (https://github.com/meolu/walle-web), although it does not support Docker deployments out of the box. With Docker there are generally two approaches: bake the code into the image, or mount the code directory into the container. I wrote a patch against walle v1.2.0, as follows:

For the development and deployment environment we can use the official openresty image: either clone its repository and adapt it, or simply build on top of (inherit from) the image.
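A minimal sketch of the inherit-from-the-image approach. The target paths here are assumptions based on a typical OpenResty install and the project layout shown later; adjust them to your own tree:

```dockerfile
# build on top of the official OpenResty image
FROM openresty/openresty:alpine

# copy our nginx config and Lua sources into the image
# (the alternative approach is to bind-mount these directories at run time)
COPY config/nginx.conf /usr/local/openresty/nginx/conf/nginx.conf
COPY lua/    /usr/local/openresty/nginx/lua/
COPY lualib/ /usr/local/openresty/nginx/lualib/
```

Baking the code in gives immutable, versioned images; bind-mounting is more convenient during development.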

Within the project, it is best to require Lua files via absolute paths.
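For example, instead of a relative `lua_package_path`, pin the search path to fixed locations (the `/data/app` prefix here is an assumption; use whatever directory your deployment lands in):

```nginx
# resolve require "luka.response" and friends against fixed, absolute
# locations instead of paths relative to nginx's working directory
lua_package_path "/data/app/lualib/?.lua;/data/app/lua/?.lua;;";
```

This avoids surprises when nginx is started from a different working directory in production than in development.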

A typical project layout:

.
├── config
│   ├── domain
│   │   └── nginx_token.conf
│   ├── nginx.conf
│   └── resources.properties.json
├── lua
│   ├── init.lua
│   └── token_controller.lua
└── lualib
    └── luka
        └── response.lua

5 directories, 6 files

Lua turns out to be a very pleasant scripting language: its performance is not far behind compiled languages, while development is much faster. Next I plan to port the middleware currently written in Laravel to nginx+lua. Handing all of the request validation to Lua should buy a good amount of performance, and some of the simpler endpoints could even be rewritten in Lua entirely.

Lua really is a fine tool. Riding on nginx's performance, it is among the fastest scripting environments around, and building web applications with it is now quite convenient: OpenResty ships libraries for mysql, redis, memcached, jwt, json, and more.

Original source: superman2014 -> https://www.superman2014.com/2017/09/19/nginxlua%E5%9C%A8%E5%B8%90%E5%8F%B7%E7%B3%BB%E7%BB%9F%E4%B8%AD%E7%9A%84%E5%BA%94%E7%94%A8/
