I saw this on the Tornado mailing list; since the page is often hard to reach, I am recording it here.
"Streaming request body handler (for uploading large files)"
https://groups.google.com/forum/#!topic/python-tornado/eRxny4bC3qI
Hi,
I really like Tornado, but I am missing a useful feature that nodejs has, namely the ability to stream the request body (directly to disk or to a socket) instead of handling the whole request body in server memory. This feature is essential if, for example, you want to receive/proxy large files or run the server on a system with limited resources.
After searching this discussion forum I have found several posts discussing the lack of this feature in Tornado. The default response is to look into the file upload features in nginx. But considering the native support in nodejs, I really feel that Tornado should also offer a similar feature.
I have implemented experimental support for streaming request body handling. The code is on GitHub: https://github.com/nephics/tornado/commit/1bd964488926aac9ef6b52170d5bec76b36df8a6
Here is an example demonstrating the use of this feature: https://gist.github.com/1134964
I am sure that my implementation can be improved. But maybe this experimental branch can inspire Ben and others to get this feature implemented in the Tornado main branch. Please fork and improve my code!
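(For context: streaming request bodies later landed in mainline Tornado 4.0 as the tornado.web.stream_request_body decorator. Below is a minimal sketch of an upload handler using that modern API, not the experimental branch above; the file name, port, and byte counter are illustrative choices.)

import tornado.ioloop
import tornado.web

@tornado.web.stream_request_body
class UploadHandler(tornado.web.RequestHandler):
    def prepare(self):
        # The body never has to fit in memory: chunks are appended to a
        # local file as they arrive. (File name chosen for illustration.)
        self.bytes_read = 0
        self.file = open("upload.tmp", "wb")

    def data_received(self, chunk):
        # Called repeatedly while the request body is being received.
        self.bytes_read += len(chunk)
        self.file.write(chunk)

    def post(self):
        # Called once the complete body has been streamed in.
        self.file.close()
        self.write("received %d bytes" % self.bytes_read)

if __name__ == "__main__":
    tornado.web.Application([(r"/upload", UploadHandler)]).listen(8888)
    tornado.ioloop.IOLoop.current().start()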
The key part of the thread:
So it should be okay if the server has memory to handle a bytestring of 100 MB. Note that if nginx is used in front of Tornado, the entire request will be buffered in nginx before it is passed upstream. Hence, you may need (at least) double the amount of memory to handle the request. (Read more here: http://wiki.nginx.org/HttpProxyModule)
Implementing streaming body handling may reduce Tornado's memory requirements when uploading large files. (But there is still a memory issue if nginx is used as a proxy, so in that case it is better to use the nginx file upload module: http://www.grid.net.ru/nginx/upload.en.html)
You can switch off proxy buffering in nginx:
proxy_buffering off;
According to the documentation (http://wiki.nginx.org/HttpProxyModule), if buffering is switched off, the response is synchronously transferred to the client as soon as it is received. This is a pure streaming feature of nginx, which I think is better than storing the upload in a file. You can also have the proxied server set a specific response header (X-Accel-Buffering) to control this feature per response.
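(If the upstream application is Tornado, one way to use this per-response override is to emit the X-Accel-Buffering header from the handler. The sketch below assumes a modern Tornado with native coroutines; the handler name and the chunked payload are purely illustrative.)

import tornado.web

class StreamingDownloadHandler(tornado.web.RequestHandler):
    async def get(self):
        # Per-response override of nginx proxy_buffering: "no" disables buffering.
        self.set_header("X-Accel-Buffering", "no")
        self.set_header("Content-Type", "text/plain")
        for i in range(10):
            self.write("chunk %d\n" % i)
            # Push each chunk through nginx to the client immediately.
            await self.flush()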
Secondly, when nginx buffers the request, not all of the data goes into memory (see http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_busy_buffers_size), just some segments.
So I think it is inexact to say that nginx doubles the amount of memory needed to handle the request.