This happens very often on a high-volume database on a system with many users! Does anyone have a good suggestion for keeping CPU usage from hitting 100%?
Is moving the database to a separate system the only alternative? We can see that it is the Python process that causes the issue.
We are on OpenERP 6.1. Thanks.
That's clearly not a Python problem, but an OpenERP one. We have the same problem here, working on big databases with billions of rows, and performance is just as bad.
That's mainly because the OpenERP ORM basically sucks:
- You have to write raw SQL every time you want performance: you can't even do a basic JOIN with OpenERP.
- Model methods take ids as parameters. This means that every function does its own call to browse(). For example, if you define 3 _constraints on an object, each constraint function will issue a browse() (or a read()) and make its own query to the DB. The same applies to method overloading.
- OpenERP's queries are just bad. For example, if you want the list of res_partner records whose name matches "Thibaut", OpenERP will issue 2 queries (yes, very strange):
SELECT id FROM res_partner WHERE name ILIKE 'Thibaut'
SELECT <huge list of fields> FROM res_partner WHERE id IN (<ids from the 1st query>)
- When you use browse(), which is much easier than read(), you have absolutely no way to specify which fields to fetch. This means that even if you just want to check 5 fields of an object, you will get all of them. Some will tell you "use read() instead!", but no thanks: code made of successive read() calls is unreadable. Moreover, getting the name of M2O objects every time you do a read() sucks too. What if I just want the id?
- Function fields are a bad idea that doesn't scale. Many function fields could have been implemented in SQL, but there is absolutely no way to do that in OpenERP, even though it is entirely doable. SQLAlchemy, for example, handles this very well.
All of these are just examples of things that suck, and that kill performance in real-business apps.
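To make the two-query pattern above concrete, here is a minimal standalone sketch using sqlite3 (the schema and data are invented for illustration; res_partner here is just a stand-in, not OpenERP's real table):

```python
import sqlite3

# Toy stand-in for res_partner; schema and rows are made up for this example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE res_partner (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")
conn.executemany("INSERT INTO res_partner (name, city) VALUES (?, ?)",
                 [("Thibaut", "Paris"), ("Alice", "Lyon"), ("Thibaut Dupont", "Lille")])

# What the ORM does: first fetch the matching ids...
ids = [r[0] for r in conn.execute(
    "SELECT id FROM res_partner WHERE name LIKE ?", ("%Thibaut%",))]

# ...then fetch every column for those ids in a second round trip.
placeholders = ",".join("?" * len(ids))
orm_rows = conn.execute(
    "SELECT id, name, city FROM res_partner WHERE id IN (%s)" % placeholders,
    ids).fetchall()

# The same result in a single round trip, selecting only the needed fields.
direct_rows = conn.execute(
    "SELECT id, name, city FROM res_partner WHERE name LIKE ?",
    ("%Thibaut%",)).fetchall()

assert orm_rows == direct_rows
print(len(direct_rows))  # 2 matching partners, fetched in one query
```

On two rows the difference is invisible, but on a busy server every extra round trip per request adds up.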
By the way, you can try using Gunicorn on 6.1 to improve all of this a little, or the multiprocessing options on 7.0. It won't solve the database issues, but it might help...
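For the 7.0 multiprocessing option mentioned above, the server can be started with worker processes; a minimal sketch (the config path is an assumption, and you should tune the worker count to your CPU core count):

```shell
# OpenERP 7.0 multiprocess mode: spawn 4 worker processes.
# /etc/openerp/openerp-server.conf is a hypothetical path, adjust to yours.
./openerp-server -c /etc/openerp/openerp-server.conf --workers=4
```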
It's unusual for the Python code to be the bottleneck; usually it's the database. If Python is slow, it's probably due to heavy computation in custom code with a poor implementation (all methods must be O(1), i.e. a fixed number of operations regardless of how many ids are passed, to avoid an explosion of computational complexity).
In most of the instances where we analysed performance, complexity worse than O(1) was due to:
- A bad implementation of function fields in custom code (function fields must be O(1): compute all the IDs in a fixed number of operations, or be stored)
- Wrong usage of browse() (don't call browse() inside a FOR loop; call it once before any loop or method call)
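The browse-outside-the-loop advice can be illustrated with a self-contained sketch (FakeORM is a made-up stand-in that just counts round trips, not OpenERP's real API):

```python
# Hypothetical mini-ORM that counts "database" round trips, to show why
# browse() belongs before the loop: one batched fetch vs. one query per record.
class FakeORM:
    def __init__(self, rows):
        self.rows = rows
        self.query_count = 0  # counts round trips to the fake database

    def browse(self, ids):
        self.query_count += 1  # each browse() call costs one query
        return [self.rows[i] for i in ids]

orm = FakeORM({i: {"id": i, "name": "partner %d" % i} for i in range(100)})
ids = list(range(100))

# Anti-pattern: browse() inside the loop -> one query per record.
for i in ids:
    rec = orm.browse([i])[0]
slow_queries = orm.query_count

# Better: browse() once before the loop -> a single batched query.
orm.query_count = 0
for rec in orm.browse(ids):
    pass
fast_queries = orm.query_count

print(slow_queries, fast_queries)  # 100 1
```

The same O(n)-queries-vs-O(1)-queries shape is what makes badly written function fields so expensive on large recordsets.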
May I request that you share your database size, number of concurrent users and server configuration?
Some time back I asked a question and got no response. Here is the link
|Asked: 6/20/13, 4:17 AM|
|Seen: 5869 times|
|Last updated: 9/8/15, 6:32 AM|