Sometimes I need to read data from PostgreSQL tables (pre-upgrade scripts, raw imported data, etc.) and verify it against the corresponding object values. The following code chunk is a simplified example:

env = ...
field_name = 'my_res_partner_field'

# the column name cannot be passed as a query parameter, hence the string formatting
sql_fmt = 'SELECT id AS res_id, %(column)s AS value FROM res_partner_import'
sql = sql_fmt % {'column': field_name}
env.cr.execute(sql)
values = dict(env.cr.fetchall())

# browse only the partners that appear in the imported table
for obj in env['res.partner'].browse(values.keys()):
    _field = obj._fields[field_name]
    sql_value = values[obj.id]
    obj_value = getattr(obj, field_name)
    # normalize values to compare
    sql_value = _field.convert_to_cache(sql_value, obj, False)
    v1 = _field.convert_to_read(sql_value, True)
    v2 = _field.convert_to_read(obj_value, True)
    assert v1 == v2, 'Value mismatch "%s" != "%s"' % (v1, v2)

The problem is comparing both values in a general way, regardless of the field type. That's why I try to normalize the values from the query result with convert_to_cache, since that is what the field implementations seem to do. After that, both values are converted to simple values with convert_to_read so that I compare plain types and avoid comparing browse records.

It seems to work except for two cases:

1) mismatch of empty values from fields.Text (v1=u'' & v2=False)

my current workaround is:

 v1 = _field.convert_to_read(sql_value, True) or False

2) type mismatch of selection values (v1='1' & v2=1)

my current workaround is:

assert str(v1) == str(v2), 'Value mismatch "%s" != "%s"' % (v1, v2)


With both workarounds applied, it seems to work.
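For completeness, here is a minimal sketch that folds both workarounds into a single check (compare_field is a hypothetical helper name, and it assumes the same field-conversion API used in the code above):

def compare_field(obj, field_name, sql_value):
    _field = obj._fields[field_name]
    obj_value = getattr(obj, field_name)
    # normalize the raw SQL value the way the ORM caches it
    cache_value = _field.convert_to_cache(sql_value, obj, False)
    # workaround 1: coerce empty strings (e.g. from fields.Text) to False
    v1 = _field.convert_to_read(cache_value, True) or False
    v2 = _field.convert_to_read(obj_value, True) or False
    # workaround 2: compare string representations so '1' matches 1
    assert str(v1) == str(v2), 'Value mismatch "%s" != "%s"' % (v1, v2)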

Does anybody know a way to normalize the SQL values to object values that works for all fields? What am I missing?

 
