
I'm synchronising my backend DB to OpenERP using openerplib and xmlrpclib in a Python script. Going line by line through my source database result (row) and executing one write per line works as it should, but it is extremely slow for a thousand items:

    if row:
        for x in range(len(row)):
            # Look up the model and the matching partner for every single row
            partner_model = openerp_db.get_model('res.partner')
            partner_id = partner_model.search(['&', ('ref', '=', autoid),
                                               ('is_company', '=', True)])
            # One write call per row
            openerp_class.execute('res.partner', 'write', partner_id,
                                  {'ref': row[x][1], 'is_company': True,
                                   'name': row[x][3], 'street': row[x][4],
                                   'city': row[x][5], 'zip': row[x][6]})

I assume this is slow because I make 1000 calls to get the model and the partner_id, which seems unavoidable, but on top of that I also make 1000 execute calls to save the new information to OpenERP, which I hope can be done differently.

Do you have a recommendation, or a link with more information, on how to do this more efficiently? For example, by doing only one execute and passing a list/dictionary of partner_ids and data as a parameter?

Thank you in advance


You might like to do a quick test with OERPLib, https://pypi.python.org/pypi/OERPLib, to see whether openerplib itself is the bottleneck.
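
Independently of the library, you can roughly halve the number of RPC round trips by fetching the model proxy once and resolving all the partner ids in a single search/read, instead of one search per row. A separate write per record is still needed when the values differ, because write applies one values dictionary to every id you pass it. Below is a minimal sketch of that approach, assuming openerplib, the same row layout as in your question, and that autoid corresponds to the ref column row[x][1]; the connection details are placeholders:

    import openerplib

    # Hypothetical connection details; replace with your own server and DB.
    openerp_db = openerplib.get_connection(
        hostname='localhost', database='mydb',
        login='admin', password='admin')

    # Fetch the model proxy once, outside any loop.
    partner_model = openerp_db.get_model('res.partner')

    # One search + one read for all rows, instead of one search per row.
    # Assumes `row` is your source result and row[x][1] is the ref (autoid).
    refs = [r[1] for r in row]
    ids = partner_model.search([('ref', 'in', refs),
                                ('is_company', '=', True)])
    ref_to_id = dict((rec['ref'], rec['id'])
                     for rec in partner_model.read(ids, ['ref']))

    # One write per record is still required since each record gets
    # different values, but the search overhead is now a single round trip.
    for r in row:
        partner_id = ref_to_id.get(r[1])
        if partner_id:
            partner_model.write([partner_id], {
                'ref': r[1], 'is_company': True,
                'name': r[3], 'street': r[4],
                'city': r[5], 'zip': r[6]})

If all records received identical values, a single write with the full id list would do. For per-record values, one write per record is the floor on the client side; going below that means moving the logic server-side, e.g. a custom model method that accepts the whole dataset in one execute call.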