CPU usage goes to 100% and the system hangs!

This happens very often on a high-volume database with many concurrent users! Does anyone have a good suggestion for keeping CPU usage from reaching 100%?

Is moving the database to a separate server the only alternative? We see that it is the Python process that causes this issue.

We are on OpenERP 6.1. Thanks.

4 Answers
Thibaut DIRLIK
Best Answer

That's clearly not a Python problem, but an OpenERP one. We have the same problem here, working on big databases with billions of rows, and performance is just so bad.

That's mainly because the OpenERP ORM basically sucks:

  1. You have to write raw SQL every time you want performance: you can't even do a basic JOIN with OpenERP.
  2. Model methods take ids as parameters. This means that every function does its own call to browse(). For example, if you define 3 _constraints on an object, each constraint function will issue its own browse() (or read()) and hit the database. The same applies to method overloading.
  3. OpenERP's queries are just bad. For example, if you want the list of res_partner records whose name matches "Thibaut", OpenERP issues 2 queries (yes, very strange):
    • SELECT id FROM res_partner WHERE name ILIKE '%Thibaut%'
    • SELECT <huge list of fields> FROM res_partner WHERE id IN (<results of the 1st query>)
  4. When you use browse(), which is much easier than read(), you have absolutely no way to specify which fields to fetch. So even if you only want to check 5 fields of an object, you get all of them. Some will say "use read() instead!", but no, thanks: code littered with successive read() calls just sucks. Moreover, getting the name of M2O objects every time you do a read() sucks too. What if I just want the id?
  5. Function fields are a bad idea that doesn't scale. Many function fields could have been implemented in SQL, but there is absolutely no way to do that in OpenERP, even though it's totally doable; SQLAlchemy, for example, handles this very well.
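The two-query pattern from point 3 is easy to reproduce. Here is a self-contained sketch using SQLite as a stand-in for PostgreSQL (the table layout and data are invented for illustration): the id-then-fields round trip returns exactly what a single hand-written query would.

```python
import sqlite3

# Stand-in for PostgreSQL: a tiny res_partner table with invented data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE res_partner (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")
conn.executemany("INSERT INTO res_partner (name, city) VALUES (?, ?)",
                 [("Thibaut", "Paris"), ("Alice", "Lyon"), ("Thibaut Jr", "Lille")])

# ORM-style: two round trips -- first fetch the matching ids...
ids = [row[0] for row in conn.execute(
    "SELECT id FROM res_partner WHERE name LIKE ?", ("%Thibaut%",))]
# ...then fetch the fields for those ids.
placeholders = ",".join("?" * len(ids))
rows_two_step = conn.execute(
    "SELECT id, name, city FROM res_partner WHERE id IN (%s)" % placeholders,
    ids).fetchall()

# Hand-written SQL: one round trip does the same job.
rows_one_step = conn.execute(
    "SELECT id, name, city FROM res_partner WHERE name LIKE ?",
    ("%Thibaut%",)).fetchall()

assert sorted(rows_two_step) == sorted(rows_one_step)
```

Both paths return the same two rows; the ORM just pays an extra round trip (and query-planning cost) for each one.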

All of these are just examples of things that suck and that kill performance in real-world business apps.
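Point 4 above is essentially about column selection. A minimal illustration, again with SQLite as a stand-in and an invented table: fetching every column browse()-style, versus asking only for what you need, read()-style.

```python
import sqlite3

# Invented table: imagine many wide columns, most of which you don't need.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE res_partner (id INTEGER PRIMARY KEY, name TEXT, "
             "street TEXT, city TEXT, phone TEXT, email TEXT)")
conn.execute("INSERT INTO res_partner (name, city) VALUES ('Thibaut', 'Paris')")

# browse()-style: every column comes back even if you only need one.
all_cols = conn.execute("SELECT * FROM res_partner").fetchone()
assert len(all_cols) == 6

# read(ids, ['name'])-style: ask the database for exactly what you need.
just_name = conn.execute("SELECT id, name FROM res_partner").fetchone()
assert just_name == (1, "Thibaut")
```

On a 6-column toy table the difference is invisible; on a real table with dozens of columns (some of them function fields computed in Python), it is not.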

By the way, you can try Gunicorn on 6.1 to improve all of this a little, or the multiprocessing options on 7.0. It won't solve all the database issues, but it might help...


Your anger is obvious! You replied perfectly! We will look into ways to get rid of these issues. Thanks a ton! Looking for solutions!!

OpenERP's ORM is not perfect, but it is clearly better than other Python ORMs (not in syntax, but in performance):

  1. The ORM supports JOINs in the WHERE clause, just not in the data retrieved, because that is not its goal.
  2. That one is a real problem; we are porting methods to the new API so that they receive objects as arguments.
  3. This is not that time-consuming.
  4. That's because browse() is smart and prefetches low-cost fields (not all fields), which is normal, since you may not know which fields will be needed if you pass the object to other methods.
  5. Wrong: nearly all function fields in the official addons are O(1); if implemented correctly, they scale.

Best Answer

It's unusual for the Python code to be the bottleneck; usually it's the database. If Python is slow, it's probably due to heavy computation in custom code with a wrong implementation (all methods must be O(1) in complexity to avoid an explosion of computation).

In most of the instances where we analysed performance, complexity worse than O(1) was due to:

  • A bad implementation of function fields in custom code (function fields must be O(1): compute the values for all ids in a fixed number of operations, or be stored)
  • Wrong usage of browse() (don't call browse() inside a for loop; call it once, before any loop or method call)
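The second bullet can be illustrated without a real OpenERP instance. In this sketch, FakeModel is a hypothetical stand-in that merely counts round trips; the real browse() batches reads in a similar way when given a list of ids:

```python
# Hypothetical stand-in for an OpenERP model: browse() costs one
# round trip per call, regardless of how many ids it receives.
class FakeModel:
    def __init__(self, records):
        self.records = records      # {id: {'name': ...}}
        self.query_count = 0

    def browse(self, cr, uid, ids, context=None):
        self.query_count += 1       # one query per browse() call
        return [self.records[i] for i in ids]

model = FakeModel({i: {"name": "partner %d" % i} for i in range(100)})
ids = list(model.records)

# Wrong: browse() inside the loop -> one query per record, O(n).
model.query_count = 0
for i in ids:
    (rec,) = model.browse(None, 1, [i])
assert model.query_count == 100

# Right: browse() once, before the loop -> a single batched query, O(1).
model.query_count = 0
for rec in model.browse(None, 1, ids):
    pass
assert model.query_count == 1
```

With 100 records the wrong version already costs 100 round trips instead of 1; on a high-volume database the difference is exactly the 100% CPU described in the question.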

Thank you Fabien sir!

Best Answer

Greetings :)

May I request you to share your database size, number of concurrent users and server configuration?

Some time back I asked a question and got no response; here is the link.

Thank you