That's clearly not a Python problem, but an OpenERP one. We have the same problem here: we work on big databases, with billions of rows, and performance is just as bad.
That's mainly because the OpenERP ORM basically sucks:
- You have to write raw SQL every time you want performance: you can't even do a basic JOIN with OpenERP.
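To illustrate, a minimal sketch of the usual workaround under the old (6.1/7.0) API: drop to raw SQL through the cursor every method receives. The model and the query are made up; cr.execute() and dictfetchall() are the standard cursor calls.

from openerp.osv import osv

class partner_report(osv.osv_memory):
    _name = 'partner.report'

    def partners_with_draft_orders(self, cr, uid, context=None):
        # One JOIN, one round-trip: something search() + browse() cannot express.
        cr.execute("""
            SELECT p.id, p.name, COUNT(o.id) AS order_count
              FROM res_partner p
              JOIN sale_order o ON o.partner_id = p.id
             WHERE o.state = %s
             GROUP BY p.id, p.name
        """, ('draft',))
        return cr.dictfetchall()

partner_report()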
- Models' methods take ids as parameters. This means that every function does its own call to browse(). For example, if you define 3 _constraints on an object, each constraint function will issue a browse() (or a read()) and make its own query to the DB. The same applies to method overloading.
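A hedged sketch of that pattern (hypothetical model, old API): three _constraints on one object means three browse() calls, hence three round-trips to the DB, to validate a single write.

from openerp.osv import osv, fields

class my_line(osv.osv):
    _name = 'my.line'
    _columns = {
        'quantity': fields.float('Quantity'),
        'price': fields.float('Price'),
        'discount': fields.float('Discount'),
    }

    def _check_quantity(self, cr, uid, ids):
        # browse() -> query #1
        return all(l.quantity > 0 for l in self.browse(cr, uid, ids))

    def _check_price(self, cr, uid, ids):
        # browse() -> query #2, on the very same ids
        return all(l.price >= 0 for l in self.browse(cr, uid, ids))

    def _check_discount(self, cr, uid, ids):
        # browse() -> query #3
        return all(0 <= l.discount <= 100 for l in self.browse(cr, uid, ids))

    _constraints = [
        (_check_quantity, 'Quantity must be positive.', ['quantity']),
        (_check_price, 'Price cannot be negative.', ['price']),
        (_check_discount, 'Discount must be between 0 and 100.', ['discount']),
    ]

my_line()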
- OpenERP's queries are just bad. For example, if you want the list of res_partner records whose name matches "Thibaut", OpenERP will issue 2 queries (yeah, very strange):
SELECT id FROM res_partner WHERE name ILIKE 'Thibaut'
SELECT <huge list of fields> FROM res_partner WHERE id IN (<ids from the 1st query>)
- When you use browse(), which is much easier than read(), you have absolutely no way to specify which fields to fetch. Even if you just want to check 5 fields of an object, you will get all of them. Some will tell you "use read() instead!" Yeah, but no, thanks: code littered with successive read() calls is unreadable. And read() fetches the display name of every many2one field each time. What if I just want the id?
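To make that concrete, a small sketch assumed to run inside a model method (so cr, uid and ids already exist; do_something() is a placeholder):

partner_obj = self.pool.get('res.partner')

# browse(): readable, but fetches every stored field of each record,
# even if you only need two of them.
for partner in partner_obj.browse(cr, uid, ids):
    do_something(partner.name, partner.website)

# read(): you pick the fields, but the code degrades into dict juggling...
for vals in partner_obj.read(cr, uid, ids, ['name', 'website']):
    do_something(vals['name'], vals['website'])

# ...and a many2one such as parent_id comes back as (id, display_name),
# so you pay for a name_get() even when you only wanted the id:
for vals in partner_obj.read(cr, uid, ids, ['parent_id']):
    parent_id = vals['parent_id'] and vals['parent_id'][0]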
- Function fields are a bad idea that doesn't scale. A lot of function fields could have been implemented in SQL, but there is absolutely no way to do that in OpenERP, even though it's totally doable: SQLAlchemy, for example, handles it very well.
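For contrast, a minimal SQLAlchemy sketch (current declarative syntax, illustrative table names): the computed column is plain SQL, evaluated by the database in the same SELECT as the rest of the row.

from sqlalchemy import Column, ForeignKey, Integer, Numeric, func, select
from sqlalchemy.orm import column_property, declarative_base

Base = declarative_base()

class OrderLine(Base):
    __tablename__ = 'order_line'
    id = Column(Integer, primary_key=True)
    order_id = Column(Integer, ForeignKey('sale_order.id'))
    amount = Column(Numeric)

class Order(Base):
    __tablename__ = 'sale_order'
    id = Column(Integer, primary_key=True)
    # The equivalent of a "function field", but computed in SQL:
    # fetched with the row, no Python loop, no extra query per record.
    total = column_property(
        select(func.coalesce(func.sum(OrderLine.amount), 0))
        .where(OrderLine.order_id == id)
        .correlate_except(OrderLine)
        .scalar_subquery()
    )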
All of these are just examples of things that suck and kill performance in real-business apps.
By the way, you can try running 6.1 under Gunicorn, or the multiprocessing options on 7.0, to improve things a little. It won't solve all the database issues, but it might help...
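For 6.1, a hedged sketch of a gunicorn.conf.py, to be adapted from the sample file shipped in the OpenERP source tree (which also wires the OpenERP startup hooks); the values and paths below are only examples. The server is then started with something like gunicorn openerp:wsgi.core.application -c gunicorn.conf.py, while on 7.0 openerp-server --workers=4 enables the multiprocessing mode directly.

import openerp

# Standard Gunicorn knobs: several worker processes instead of one
# GIL-bound Python process.
bind = '127.0.0.1:8069'
workers = 4
timeout = 240        # long-running reports need a generous timeout
max_requests = 2000  # recycle workers to keep memory leaks in check

# openerp.tools.config is the regular OpenERP configuration object,
# read by the WSGI entry point.
conf = openerp.tools.config
conf['addons_path'] = '/path/to/addons'
conf['db_user'] = 'openerp'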