This article describes how to use Python's logging module so that multiple processes, possibly running on multiple machines, can write to a single log file.
A note on MemoryHandler performance: if the target is a subclass of StreamHandler, flushing has a serious I/O problem. MemoryHandler.flush() is just a loop that hands each buffered record to the target handler, and StreamHandler.emit() does an io.write() followed by an io.flush() for every record. Each record therefore hits the disk immediately, and the many small flushes make I/O performance poor.
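This per-record flushing can be observed directly. The sketch below (Python 3, unlike the article's Python 2 code; CountingStream is a hypothetical helper, not from the article) wraps a StreamHandler in a MemoryHandler and counts how often the underlying stream is flushed:

```python
import io
import logging
import logging.handlers

# Hypothetical helper: a stream that counts flush() calls.
class CountingStream(io.StringIO):
    def __init__(self):
        super().__init__()
        self.flushes = 0

    def flush(self):
        self.flushes += 1
        super().flush()

stream = CountingStream()
target = logging.StreamHandler(stream)
# Buffer up to 100 records before flushing them to the target.
mem = logging.handlers.MemoryHandler(100, target=target)

logger = logging.getLogger("flush-demo")
logger.setLevel(logging.DEBUG)
logger.propagate = False  # keep the demo self-contained
logger.addHandler(mem)

for i in range(10):
    logger.info("record %d", i)

# Nothing has been written yet: the 10 records sit in the MemoryHandler buffer.
mem.flush()
# MemoryHandler.flush() called target.handle() once per record, and
# StreamHandler.emit() called flush() after every single write:
print(stream.flushes)  # → 10
```

So buffering in a MemoryHandler reduces how often the target is invoked, but once it flushes, a StreamHandler-based target still performs one disk flush per record.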
For multi-process scenarios such as tornado's multi-process mode or django under fastcgi (flup), logging can be handled with the code below, built as a client/server structure.

The server code is as follows (Python 2, probably from a 2009 project):
```python
#coding:utf8
#author: TooNTonG 2011-11-07

from SocketServer import ThreadingTCPServer, StreamRequestHandler
import logging.config
import logging.handlers as lhandlers
import os
import struct
import cPickle

LOG_BIND_PORT = 20001


class LogRequestHandler(StreamRequestHandler):

    def handle(self):
        while 1:
            chunk = self.connection.recv(4)
            if len(chunk) < 4:
                break
            slen = struct.unpack(">L", chunk)[0]
            chunk = self.connection.recv(slen)
            while len(chunk) < slen:
                chunk = chunk + self.connection.recv(slen - len(chunk))
            obj = self.unPickle(chunk)
            # Packets sent by a SocketHandler have to be unpickled back
            # into a LogRecord; see the SocketHandler documentation.
            record = logging.makeLogRecord(obj)
            self.handleLogRecord(record)

    def unPickle(self, data):
        return cPickle.loads(data)

    def handleLogRecord(self, record):
        logger = logging.getLogger(record.name)
        logger.handle(record)


def startLogSvr(bindAddress, requestHandler):
    svr = ThreadingTCPServer(bindAddress, requestHandler)
    svr.serve_forever()


def addHandler(name, handler):
    logger = logging.getLogger(name)
    logger.addHandler(handler)
    fmt = logging.Formatter('%(asctime)s - %(levelname)s - %(name)s - %(message)s')
    handler.setFormatter(fmt)
    logger.setLevel(logging.NOTSET)


def memoryWapper(handler, capacity):
    hdlr = lhandlers.MemoryHandler(capacity, target=handler)
    hdlr.setFormatter(handler.formatter)
    return hdlr


def main():
    path, dirname = os.path, os.path.dirname
    pth = dirname(path.realpath(__file__))
    filename = path.join(dirname(pth), 'log', 'logging.log')
    # logging.config.fileConfig(pth + r'/logging.conf')
    # The records are finally written to a rotating file.
    hdlr = lhandlers.RotatingFileHandler(filename,
                                         maxBytes=1024,
                                         backupCount=3)
    # Optionally wrap it in a MemoryHandler, so records are flushed to disk
    # only when enough have accumulated or an ERROR-level record arrives.
    hdlr = memoryWapper(hdlr, 1024)
    addHandler('core', hdlr)
    print 'OK: logger server running...'
    startLogSvr(('0.0.0.0', LOG_BIND_PORT), LogRequestHandler)


if __name__ == "__main__":
    main()
```
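The 4-byte length prefix that the server's handle() loop reads is exactly what SocketHandler produces on the wire. A minimal Python 3 sketch of that framing (pickle stands in for the server's cPickle; no connection is actually opened):

```python
import logging
import logging.handlers
import pickle
import struct

# SocketHandler.makePickle() frames each record as a 4-byte big-endian
# length prefix followed by a pickled dict of the record's attributes.
handler = logging.handlers.SocketHandler("127.0.0.1", 20001)  # connects lazily, not here
record = logging.LogRecord("core", logging.INFO, __file__, 1, "hello", None, None)
payload = handler.makePickle(record)

slen = struct.unpack(">L", payload[:4])[0]  # same unpack as the server's handle()
obj = pickle.loads(payload[4:])
print(slen == len(payload) - 4, obj["msg"])  # → True hello
```

The unpickled object is a plain dict, which is why the server must pass it through logging.makeLogRecord() to get a LogRecord back.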
And the client code (also Python 2):
```python
#coding:utf8
#author: TooNTonG 2012-11-07

import logging
import logging.handlers as handlers

APP_NAME = 'app1'
LOG_SVR_HOST = '127.0.0.1'
LOG_SVR_PORT = 20001
# This logger name must match a logger on the server that has a handler
# attached; if the server's logging.getLogger() finds nothing for it, the
# record is handled by the root logger.
LOGGER_NAME = 'core'


def getSocketLogger(name, level, host, port, memoryCapacity):
    target = handlers.SocketHandler(host, port)
    if memoryCapacity > 0:
        hdlr = handlers.MemoryHandler(memoryCapacity,
                                      logging.ERROR,  # flush at once on records of this level
                                      target)
    else:
        hdlr = target
    hdlr.setLevel(level)
    logger = logging.getLogger(name)
    logger.addHandler(hdlr)
    logger.setLevel(level)
    return logger


def main():
    logger = getSocketLogger(LOGGER_NAME,
                             logging.DEBUG,  # NOTSET here would behave like WARNING
                             host=LOG_SVR_HOST,
                             port=LOG_SVR_PORT,
                             memoryCapacity=1024)
    for i in range(10):
        logger.info('run %s main' % APP_NAME)
        logger.debug('this is the debug log by %s' % APP_NAME)
        logger.warning('this is the warning log by %s' % APP_NAME)
        logger.error('this is the error log by %s' % APP_NAME)
        logger.critical('this is the critical log by %s' % APP_NAME)
    print 'end main'


if '__main__' == __name__:
    main()
```
If you don't create named loggers, everything is handled by one logger. The benefit of named loggers is that N processes with different roles, even running on different machines, can all share a single logger server, which routes each record by its name.
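That server-side routing can be sketched without any sockets (Python 3; the logger names and the in-memory BufferingHandler targets are illustrative, standing in for the per-service file handlers a real server would attach):

```python
import logging
import logging.handlers

# Two named loggers, each with its own handler (in-memory buffers here,
# purely for illustration).
core_buf = logging.handlers.BufferingHandler(capacity=100)
logging.getLogger("core").addHandler(core_buf)

app2_buf = logging.handlers.BufferingHandler(capacity=100)
logging.getLogger("app2").addHandler(app2_buf)

# A record "received from a client" is dispatched purely by its name
# attribute, exactly as handleLogRecord() does in the server code above.
record = logging.makeLogRecord({"name": "core", "msg": "hello from app1",
                                "levelno": logging.INFO, "levelname": "INFO"})
logging.getLogger(record.name).handle(record)

print(len(core_buf.buffer), len(app2_buf.buffer))  # → 1 0
```

Only the "core" logger's handler sees the record, so each client service can be given its own destination just by choosing a logger name.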
logging.handlers provides many more handlers, and you can compose them yourself in the same way.
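As one example of such a composition, Python 3's logging.handlers adds QueueHandler and QueueListener, which chain onto any other handler much like the MemoryHandler wrapper above, moving the actual I/O onto a background thread (the BufferingHandler target here is only to keep the sketch self-contained; in practice it could be a RotatingFileHandler or SocketHandler):

```python
import logging
import logging.handlers
import queue

q = queue.Queue()
final = logging.handlers.BufferingHandler(capacity=1000)  # stand-in final target
listener = logging.handlers.QueueListener(q, final)
listener.start()  # background thread that feeds queued records to `final`

logger = logging.getLogger("compose-demo")
logger.setLevel(logging.DEBUG)
logger.propagate = False
logger.addHandler(logging.handlers.QueueHandler(q))  # producers only enqueue

logger.info("routed through the queue")
listener.stop()  # drains the queue and joins the worker thread

print(len(final.buffer))  # → 1
```

The calling thread only pays the cost of a queue put; the listener's thread does the slow handling, which is the same decoupling idea as the socket-based server above, but within a single process.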