This article walks through deploying asynchronous tasks with Flask+Celery+Redis+Gunicorn+Nginx+Supervisor. I hope it serves as a useful reference for developers solving this kind of deployment problem; follow along if you need it!
Environment:
System: Ubuntu 16.04
Language: Python 3.6.5
Installation:
Install Redis:
sudo apt-get install redis-server
Verify Redis:
ideal@ideal196:~$ redis-cli
127.0.0.1:6379>
Install Flask, Celery, Redis (the Python client), Gunicorn, and Supervisor. I installed Anaconda 3.5.2, which already ships with Flask; assuming you are not using Anaconda, install everything with pip:
python -m pip install flask redis celery gunicorn supervisor
Install Nginx:
sudo apt-get install nginx -y
Asynchronous tasks:
Create the Flask service:
Edit app.py:
from flask import Flask

app = Flask(__name__)

@app.route("/mul/<arg1>/<arg2>")
def mul_(arg1, arg2):
    return str(int(arg1) * int(arg2))
Run the service:
export FLASK_APP=app.py
flask run -h 0.0.0.0 -p 8000
Test the endpoint:
ideal@ideal196:~$ curl http://127.0.0.1:8000/mul/2/3
6
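The same route can also be exercised without a running server, using Flask's built-in test client. This is a minimal sketch; the route body mirrors the app.py above:

```python
# Minimal sketch: the /mul route from app.py, exercised with Flask's test client.
from flask import Flask

app = Flask(__name__)

@app.route("/mul/<arg1>/<arg2>")
def mul_(arg1, arg2):
    return str(int(arg1) * int(arg2))

client = app.test_client()          # no server process needed
resp = client.get("/mul/2/3")
body = resp.get_data(as_text=True)  # "6"
```

This is handy for quick checks before wiring up gunicorn and nginx.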
Deploy the Flask service with Gunicorn
Start command:
gunicorn --workers=4 --bind=0.0.0.0:8000 app:app
If it starts cleanly with no errors, requesting the endpoint above should again return the product of the two numbers. --workers sets the number of worker processes; a common rule of thumb is (2 × num_cores) + 1. We use 4 here, and the config file below computes this value automatically.
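The rule of thumb can be computed directly with the standard library (a small sketch; `recommended_workers` is just an illustrative helper name, not part of gunicorn):

```python
# Gunicorn's suggested worker count: (2 x num_cores) + 1.
import multiprocessing

def recommended_workers(num_cores=None):
    if num_cores is None:
        num_cores = multiprocessing.cpu_count()
    return 2 * num_cores + 1

print(recommended_workers())  # e.g. 9 on a 4-core machine
```

The gunicorn config file in the next step uses exactly this expression.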
We can also deploy gunicorn through a config file. Create gunicorn_flask.py:
import multiprocessing

bind = '0.0.0.0:8000'
workers = multiprocessing.cpu_count() * 2 + 1  # derive the worker count from the CPU count
backlog = 2048
# worker_class = "gevent"  # default is sync; eventlet, gevent, tornado, gthread also work
worker_connections = 1000
daemon = False
loglevel = 'debug'  # verbose logging (the old `debug = True` setting was removed from gunicorn)
proc_name = 'app'  # name of the main Flask module
pidfile = './logs/gunicorn.pid'   # the ./logs directory must exist before starting
errorlog = './logs/gunicorn.log'
Start command:
gunicorn -c gunicorn_flask.py app:app
Test the endpoint again; if everything still works, continue with the deployment.
Deploy gunicorn with Supervisor
Generate a config file:
ideal@ideal196:~/$ echo_supervisord_conf > supervisord.conf
ideal@ideal196:~/$ sudo mv supervisord.conf /etc/supervisord.conf
Configure gunicorn in /etc/supervisord.conf:
; "gunicorn" below is the name of the managed process
[program:gunicorn]
; user to run the process as
user=ideal
; project directory
directory=/home/ideal/workspace/tilyp
; command that starts the flask service
command=/home/ideal/anaconda3/bin/gunicorn -c gunicorn_flask.py app:app
; consider the start successful if the process survives for 5 seconds
startsecs=5
; start automatically when supervisord starts
autostart=true
; restart the program if it exits unexpectedly
autorestart=true
; redirect stderr into the stdout log
redirect_stderr=true
; process log file
stdout_logfile=/home/ideal/workspace/tilyp/logs/gunicorn.log
Load the config file:
supervisord -c /etc/supervisord.conf
Start gunicorn:
supervisorctl start gunicorn
Common supervisor commands:
supervisorctl status             # show the status of all processes
supervisorctl stop gunicorn      # stop a process
supervisorctl start gunicorn     # start a process
supervisorctl restart gunicorn   # restart a process (does not reload the config file)
supervisorctl reread             # reload the config file without adding or removing processes
supervisorctl update             # reload the config file, add/remove process groups, restart affected programs
supervisorctl shutdown           # stop supervisord itself
supervisorctl stop all           # stop all processes
Now hit the endpoint again; if it responds normally, everything is working.
Deploy asynchronous tasks with Celery
Create celery_task.py:
from celery import Celery
from time import sleep

CELERY_BROKER_URL = 'redis://127.0.0.1:6379/0'
CELERY_RESULT_BACKEND = 'redis://127.0.0.1:6379/0'

app = Celery("app", broker=CELERY_BROKER_URL, backend=CELERY_RESULT_BACKEND)

@app.task  # an asynchronous task waiting to be called
def mul(arg1, arg2):
    sleep(10)
    result = arg1 * arg2
    return result

def get_result(task_id):  # fetch a task's result by its id
    result = app.AsyncResult(task_id)
    return result.result
Modify app.py:
from flask import Flask
from celery_task import *

app = Flask(__name__)

@app.route("/mul/<arg1>/<arg2>")
def mul_(arg1, arg2):
    result = mul.delay(int(arg1), int(arg2))  # call the async task with its arguments
    return result.id

@app.route("/get_result/<result_id>")
def result_(result_id):
    result = get_result(result_id)
    return str(result)
Restart the flask service:
supervisorctl restart gunicorn
Start the celery worker:
ideal@ideal196:~/workspace/tilyp$ celery -A celery_task worker --loglevel=info

 -------------- celery@ideal196 v4.3.0 (rhubarb)
 ---- **** -----
 --- * ***  * -- Linux-4.10.0-28-generic-x86_64-with-debian-stretch-sid 2019-08-09 14:11:06
 -- * - **** ---
 - ** ---------- [config]
 - ** ---------- .> app:         app:0x7f6793813668
 - ** ---------- .> transport:   redis://127.0.0.1:6379/0
 - ** ---------- .> results:     redis://127.0.0.1:6379/0
 - *** --- * --- .> concurrency: 32 (prefork)
 -- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
 --- ***** -----
 --------------  [queues]
                 .> celery           exchange=celery(direct) key=celery

[tasks]
  . celery_task.mul

[2019-08-09 14:11:07,278: INFO/MainProcess] Connected to redis://127.0.0.1:6379/0
[2019-08-09 14:11:07,285: INFO/MainProcess] mingle: searching for neighbors
[2019-08-09 14:11:08,302: INFO/MainProcess] mingle: all alone
[2019-08-09 14:11:08,315: INFO/MainProcess] celery@ideal196 ready.
If you see output like this, the worker is running. Test the endpoints:
ideal@ideal196:~$ curl http://127.0.0.1:8000/mul/4/5
4de1a42c-194d-45fe-98d9-dfdcb8363ee6
ideal@ideal196:~$ curl http://127.0.0.1:8000/get_result/4de1a42c-194d-45fe-98d9-dfdcb8363ee6
20
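Because mul sleeps for 10 seconds, /get_result returns the string "None" until the task finishes (app.py wraps the pending result.result of None in str()). A client can poll until a real result appears. Below is a small polling sketch with the HTTP fetch injected as a callable so it can run without a live server; `poll_result` and `fetch` are illustrative names, not part of the article's code:

```python
# Poll /get_result/<id> until the task has finished.
import time

def poll_result(fetch, task_id, interval=1.0, max_tries=15):
    """Call fetch(task_id) until it returns something other than "None"."""
    for _ in range(max_tries):
        body = fetch(task_id)
        if body != "None":  # the task has finished and stored a real result
            return body
        time.sleep(interval)
    raise TimeoutError("task %s did not finish in time" % task_id)
```

Against a live deployment, `fetch` could be something like `lambda tid: urllib.request.urlopen("http://127.0.0.1:8000/get_result/%s" % tid).read().decode()`.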
Next, deploy celery under supervisor as well by adding the following to /etc/supervisord.conf:
; "celeryworker" is the process name; pick anything you like
[program:celeryworker]
command=celery -A celery_task worker --loglevel=info
; project path
directory=/home/ideal/workspace/tilyp
user=ideal
numprocs=1
; log paths
stdout_logfile=/home/ideal/workspace/tilyp/logs/celeryworker.log
stderr_logfile=/home/ideal/workspace/tilyp/logs/celeryworker.log
autostart=true
autorestart=true
startsecs=10
stopwaitsecs=600
priority=15
Update the supervisor config:
ideal@ideal196:~$ supervisorctl update
celeryworker: added process group
ideal@ideal196:~$
At this point the asynchronous task stack is up, but it is not complete yet: gunicorn should sit behind nginx, so let's configure nginx next.
Configure nginx
nginx was installed earlier, so we can go straight to configuring it:
ideal@ideal196:~$ cd /etc/nginx/sites-enabled/
ideal@ideal196:/etc/nginx/sites-enabled$ sudo rm default
ideal@ideal196:/etc/nginx/sites-enabled$ sudo vim app
server {
    listen 80;
    server_name _;  # put your domain here if you have one
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location / {
        proxy_pass http://127.0.0.1:8000/;  # address of the upstream service
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Check that the config file is valid:
sudo nginx -t
If there are no errors, reload nginx:
sudo nginx -s reload
or
sudo service nginx restart
Test the endpoint once more; if everything responds without errors, the deployment is complete.
That concludes this walkthrough of deploying asynchronous tasks with Flask+Celery+Redis+Gunicorn+Nginx+Supervisor. I hope the article is helpful to fellow developers!