Suppose I am using OpenERP behind a load balancer to handle concurrency.
How many users could one instance handle concurrently?
I assume that PostgreSQL would never run into performance problems? (i.e. the OpenERP application server is the bottleneck?)
DO NOT scale the number of workers to the number of clients you expect to have. Gunicorn should only need 4-12 worker processes to handle hundreds or thousands of requests per second.
Gunicorn relies on the operating system to provide all of the load balancing when handling requests. Generally we recommend (2 x $num_cores) + 1 as the number of workers to start off with. While not overly scientific, the formula is based on the assumption that for a given core, one worker will be reading or writing from the socket while the other worker is processing a request.
Obviously, your particular hardware and application are going to affect the optimal number of workers. Our recommendation is to start with the above guess and tune using TTIN and TTOU signals while the application is under load.
Always remember, there is such a thing as too many workers. After a point, your worker processes will start thrashing system resources, decreasing the throughput of the entire system.
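The TTIN/TTOU tuning mentioned above can be scripted: Gunicorn's master process adds one worker on TTIN and retires one on TTOU. A minimal sketch in Python (the pid-file path in the usage comment is hypothetical; adjust it to your deployment):

```python
import os
import signal

# Map a desired change in worker count to the Gunicorn control signal:
# SIGTTIN tells the master to spawn one extra worker, SIGTTOU to retire one.
def worker_signal(delta):
    return signal.SIGTTIN if delta > 0 else signal.SIGTTOU

def scale_workers(master_pid, delta):
    """Nudge a running Gunicorn master by |delta| workers, one signal at a time."""
    for _ in range(abs(delta)):
        os.kill(master_pid, worker_signal(delta))

# Usage (hypothetical pid-file location):
# with open("/var/run/gunicorn.pid") as f:
#     scale_workers(int(f.read()), +2)   # add two workers while under load
```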
For more details about Gunicorn's worker model, see the Gunicorn design documentation.
More information from Opendays 2014:
The recommended practice is to use `gunicorn` to handle concurrency. It deploys several workers serving requests on the same port. The recommended number of workers is `[number of cores] * 2 + 1`.
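As a quick sketch of that rule of thumb (the function name is mine, not part of Gunicorn):

```python
import multiprocessing

def recommended_workers(cores=None):
    """Gunicorn's starting-point heuristic: (number of cores) * 2 + 1.

    Falls back to the local machine's core count if none is given.
    """
    cores = cores or multiprocessing.cpu_count()
    return cores * 2 + 1

# e.g. a 4-core server:
# recommended_workers(4)  # -> 9 workers
```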
You still have to take care of your PostgreSQL configuration and tuning, but usually the bottleneck comes from the application server's ORM.
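If you do need to tune Postgres, the usual starting points are the memory and connection settings in `postgresql.conf`. The values below are illustrative only, not recommendations for your hardware:

```ini
# postgresql.conf -- illustrative starting points, size to your RAM
max_connections = 100          # keep well above (gunicorn workers x connections per worker)
shared_buffers = 2GB           # commonly ~25% of available RAM
effective_cache_size = 6GB     # rough estimate of RAM available for OS/page cache
work_mem = 16MB                # per sort/hash operation, per connection
```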
|Asked: 5/19/14, 6:15 AM|
|Last updated: 3/16/15, 8:10 AM|