Community mailing list archives

Re: Calculating "QTY available to MAKE" - Products with a BoM ?? (v7 or 8)

Open For Small Business Ltd, Graeme Gellatly
- 03/03/2015 04:56:25
I only have a few multilevel BoMs but a lot of products with single-level BoMs.  But the algorithmic complexity is pretty straightforward.

This assumes no caching of intermediate results, and that your server is using the (now) very low default of computing 40 function fields at a time, so limit this to products with no more than 40 BoM lines.

Product available is a lengthy function, but the hard work is done with direct SQL calls and all products go into one query, so the time to execute product_available for each of n products with fewer than 40 lines approaches n * product_available. (Note this is very different from reading a list of products, which would approach _product_available.)
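To make the arithmetic behind the function field concrete, here is a minimal single-level sketch in plain Python (hypothetical names, not the Odoo implementation): the component with the fewest complete sets on hand is the binding constraint.

```python
def qty_available_to_make(bom_lines, stock):
    """Single-level "qty available to make".

    bom_lines: list of (component_id, qty_per_finished_unit)
    stock: dict of component_id -> qty on hand
    """
    if not bom_lines:
        return 0
    # One integer division per BoM line; the minimum over all lines wins.
    return min(stock.get(cid, 0) // per_unit for cid, per_unit in bom_lines)


# 10 units of component 1 (needs 2 each) and 12 of component 2 (needs 5
# each) limit production to min(5, 2) = 2 finished units.
print(qty_available_to_make([(1, 2), (2, 5)], {1: 10, 2: 12}))
```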

product_available performance variance compared to other functions mostly depends on Postgres query planning and whether it needs to hit disk, which depends on many things.  Query plan below for a product with a lot of stock moves, for both it and its components, in a stock_move table with a few dozen locations and 9 companies, done in the busiest warehouse.  Note this is just on my laptop on a test database with roughly 1m stock moves.

"HashAggregate  (cost=17605.61..17607.65 rows=204 width=14) (actual time=4.916..4.922 rows=30 loops=1)"
"  ->  Bitmap Heap Scan on stock_move  (cost=2749.13..17574.75 rows=4114 width=14) (actual time=1.547..3.996 rows=2403 loops=1)"
"        Recheck Cond: ((product_id = ANY ('{108559,4582,108560,4583,162185,153969,153970,108561,4584,108562,4585,108563,4586,108569,4592,108564,4587,108565,4588,108648,102391,108566,4589,108568,4591,108567,4590,108645,102388,119473,119474,108570,4593,16257 (...)"
"        Filter: (location_dest_id <> ALL ('{24,44}'::integer[]))"
"        ->  Bitmap Index Scan on stock_move_location_id_location_dest_id_product_id_state  (cost=0.00..2748.10 rows=5785 width=0) (actual time=1.287..1.287 rows=2403 loops=1)"
"              Index Cond: ((product_id = ANY ('{108559,4582,108560,4583,162185,153969,153970,108561,4584,108562,4585,108563,4586,108569,4592,108564,4587,108565,4588,108648,102391,108566,4589,108568,4591,108567,4590,108645,102388,119473,119474,108570,4593,1 (...)"
"Total runtime: 4.972 ms"
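What the plan computes is, in effect, a per-product sum over stock_move rows restricted to a product set (the bitmap index condition) and excluding certain destination locations (the filter).  An in-memory analogue, for illustration only:

```python
def sum_moves(moves, product_ids, excluded_dests):
    """In-memory analogue of the HashAggregate above: sum quantities per
    product, keeping only the products of interest and dropping moves
    whose destination location is excluded.

    moves: iterable of (product_id, location_dest_id, qty) tuples
    """
    wanted, excluded = set(product_ids), set(excluded_dests)
    totals = {}
    for product_id, dest, qty in moves:
        if product_id in wanted and dest not in excluded:
            totals[product_id] = totals.get(product_id, 0.0) + qty
    return totals
```

Doing this as one grouped query per batch of products, rather than one query per product, is what keeps the per-product cost roughly constant.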

That is with a custom composite index on a warmed database.  The query needs to run twice and this was the first run; the second was a bit quicker.  I didn't create the index just for kicks: this query showed up a lot in our performance monitoring, consistently around 35-40ms with thousands of calls/day.  Until just now I'd forgotten about it, and I see that for the big warehouse it is filtering on location rather than using the index.  But in any case the stock lookup is roughly equivalent in cost to looking up access rights.

Looking at a list of 80 products where I know there are varying amounts of stock and commonality of underlying BoM components, but which are otherwise identical, I get a fairly consistent result of 200-300ms max for really common products down to about 15ms for products with no moves.  That is the entire product read transaction, not just the stock lookup.  Easily scrollable in real time.  I then also traced lookups and loads of the raw materials that made them up.  The results were a bit more variable but roughly in line, and about 20% cheaper than above: 11ms for no stock up to 180-300ms for common products.

I'm not saying there is no performance impact, but when you consider all the other things that happen when loading product records (translations, product template fields, property fields, in addition to the standard relational fields), the well-optimised stock_available function isn't one I'd be worrying about, even without custom indexes.  Mostly I think the performance depends on avoiding disk, but that is true of everything in Postgres.


On Tue, Mar 3, 2015 at 7:48 PM, Nhomar Hernández <> wrote:

2015-03-02 22:37 GMT-04:30 Graeme Gellatly <>:
Sure, the more complex the BoMs and kits, the longer it takes.  Simple cases take pretty much no time at all, not more than a few percent of total time.  It is basically identical in performance to calculating the BoM requirements (in fact the OCA version only goes one level deep, so even less), plus some basic arithmetic operations to determine the maximum qty.  I've always found it fast enough to be used in real time, even on a full list of products.
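For what "calculating the BoM requirements" means here, a one-level sketch in plain Python (hypothetical names; the OCA code differs):

```python
def bom_requirements(bom_lines, qty_to_make):
    """One-level BoM explosion: total component quantities needed to
    make qty_to_make finished units (no recursion into sub-BoMs).

    bom_lines: list of (component_id, qty_per_finished_unit)
    """
    return {cid: per_unit * qty_to_make for cid, per_unit in bom_lines}
```

Determining the maximum makeable quantity is then just the inverse arithmetic: divide each component's stock by its per-unit requirement and take the minimum.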

An analysis of the algorithm would be cool.

We have a customer this feature should help, but with 10,000 SKUs with BoMs of 2/3 levels, it could be a pain.

Maybe with some triggers which record such quantities on the product itself, it can be handled correctly.
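One way to read that suggestion is as denormalisation: store the computed quantity on the product and refresh it only when one of its components moves.  A toy Python sketch of the idea (purely illustrative; not Odoo code and not an actual Postgres trigger):

```python
class CachedAvailability:
    """Keep a per-product cached "qty available to make" and invalidate
    it when a component moves, instead of recomputing on every read."""

    def __init__(self, boms, stock):
        self.boms = boms    # product_id -> list of (component_id, qty_per_unit)
        self.stock = stock  # component_id -> qty on hand
        self.cache = {}

    def available_to_make(self, product_id):
        # Recompute only on a cache miss.
        if product_id not in self.cache:
            lines = self.boms.get(product_id, [])
            self.cache[product_id] = (
                min(self.stock.get(c, 0) // q for c, q in lines) if lines else 0
            )
        return self.cache[product_id]

    def record_move(self, component_id, delta):
        """The "trigger": adjust stock, then drop stale cache entries
        for every product whose BoM uses this component."""
        self.stock[component_id] = self.stock.get(component_id, 0) + delta
        for pid, lines in self.boms.items():
            if any(c == component_id for c, _ in lines):
                self.cache.pop(pid, None)
```

This trades write-time work (cache invalidation on every move) for read-time speed, which is the usual trade-off with trigger-maintained quantities.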

But a good and useful feature!

Thanks for sharing.

Best regards
Nhomar Hernandez
