Community mailing list archives
Re: sales order confirmation v9: speed issue
I think you are right, Nhomar. Looking at individual app server transactions we are seeing a very modest reduction in postgres time, 5-10%. Same with average browser response time. But what we are no longer seeing is the outliers from serialization failures and the like in browser or app server response times. It is a really flat graph now, whereas before you'd get some massive spikes throughout the day.
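The serialization-failure spikes mentioned above come from PostgreSQL aborting one of two conflicting transactions so the client can retry. A minimal sketch of that retry pattern is below; `SerializationFailure` stands in for psycopg2's `TransactionRollbackError` (SQLSTATE 40001), and `run_with_retry`/`MAX_RETRIES` are illustrative names, not Odoo's actual dispatcher API.

```python
import random
import time

class SerializationFailure(Exception):
    """Stand-in for psycopg2's TransactionRollbackError (SQLSTATE 40001)."""

MAX_RETRIES = 5

def run_with_retry(do_work):
    """Run do_work(), retrying with jittered backoff on serialization failures.

    Each retry waits a little longer so the competing transaction gets a
    chance to commit before we try again; after MAX_RETRIES the error is
    re-raised to the caller.
    """
    for attempt in range(MAX_RETRIES):
        try:
            return do_work()
        except SerializationFailure:
            if attempt == MAX_RETRIES - 1:
                raise
            time.sleep(random.uniform(0, 0.1 * (attempt + 1)))
```

Under this scheme an occasional conflict costs one short retry instead of a long lock wait, which is consistent with the spikes flattening out.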
On Wed, Feb 3, 2016 at 11:58 AM, Nhomar Hernández <firstname.lastname@example.org> wrote:
Hello Graeme. You are absolutely right. Just for the record, and for the sake of history, I want to expand our test results in just a few points.

1. On huge transactions (let's say a batch update of whatever kind), Postgres 9.3 locked some rows incorrectly, and the error shown was not the programmer's error (on the Python side) but the Postgres error (ir_sequence blocked, for example). A huge operation with more than 100 lines and 100% automation enabled (real-time valuation, automatic invoicing, 3-step picking, automatic procurements) took something like 60 minutes. Because Postgres 9.5 manages these cases more efficiently, it lets us see the actual programming error (Odoo issues and customization issues) in less than 5 minutes; we made the fix and then let Postgres do its job of managing transactions securely. So the "perception" was that just fixing Postgres gave an improvement of almost 90% of the time (even though we know that is not true mathematically speaking, as you explained perfectly). I hope that clarifies my point.

2. Managing the ORM in a safer way with the lazy evaluation work "also" helped a lot, because those remaining 10 minutes (the 10% I just mentioned in terms of perception) came down by another 90%: now a sale order of 100 lines takes only 1-4 minutes, depending on whether the procurements generate purchase or manufacturing orders.

3. Now we have 100% of our repositories up to date and we faced "another" issue, which was an incorrect approach in programming terms. Before, because of the way Postgres 9.3 handled blocking points (I do not remember the correct technical term for that), a computation of the cost of 5460 SKUs took 9 hours for the Python part, then spent 2 more hours committing to Postgres (and stopped silently). With the same two fixes, the same inefficient algorithm took 3.5 hours, plus 20 minutes committing 15k SKUs with new costs and logging messages per product.
And NO error. That allowed us to rethink the algorithm, and we built a new one that computes by splitting the work into separate parts: less memory, fewer silent errors.

Those were our 3 situations that can be considered BIG; last week we had 4 or 5 more, but those match perfectly with the ones you explained, dude! Thanks and happy hacking.

2016-02-02 15:57 GMT-06:00 Graeme Gellatly <email@example.com>:

Anyway, once I've got a week or so of full logs I'll see what happens and post again. The performance stats I really want to see are the improvements with Python 2.7.11 and computed gotos. I'll give that a go next week once I understand what 9.5 really does.

The abbreviated keys indexing I imagine will also help massively if you aren't already using something like pg_trgm for indexing varchars and are relying on the standard btree indexes the ORM creates. That would mostly be seen in searching in large databases, though, not so much in confirming workflow-related documents.

Hi Nhomar,

If you are running 9.3 then that database would be a couple of years old. Merely the process of upgrading, compacting pages, and recreating indexes would account for a little of that, far more than any regular full vacuum, which seems to save disk space at the expense of slightly slower queries. As an example, our database went from 31GB to 20GB on disk with the upgrade (attachments in the file system). As always, the worst offender was the workflow tables.
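Nhomar's "splitting into separate parts" fix in point 3 can be sketched as committing in small batches instead of one giant transaction, so memory stays bounded and a failure only loses one batch instead of hours of work. The names `recompute_cost` and `commit` below are illustrative stand-ins (in Odoo the commit would be something like `cr.commit()` between batches), not the actual code from that migration.

```python
def recompute_in_batches(skus, recompute_cost, commit, batch_size=500):
    """Recompute costs batch by batch, committing between batches.

    Returns the number of SKUs processed. Keeping each transaction small
    bounds memory use and limits how much work a late failure can lose.
    """
    done = 0
    for start in range(0, len(skus), batch_size):
        batch = skus[start:start + batch_size]
        for sku in batch:
            recompute_cost(sku)
        commit()  # flush this batch's work to the database
        done += len(batch)
    return done
```

The trade-off is that intermediate states become visible to other transactions after each commit, which is usually acceptable for a cost recomputation but worth keeping in mind.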
I think how much improvement you get from upgrading postgres is going to depend on your setup.
As for the extra workers capability, I think that is going to depend too. We spent a lot of time optimizing postgres, and for most transactions have it down to roughly an 80/20 split between python and postgres. In rough terms, then, even with a more efficient postgres running 100% faster you'd be talking about a 10% time saving in server processing. Even a standard database with no extra tuning work is roughly a 70/30 split, so 90% is simply not possible, I think, unless something is really wrong with your 9.3 postgres setup (or you've skipped the ORM and are using SQL for everything). Allowing for browser response and latency drops that saving even further. For small databases, or with very few users, the savings would be smaller still.
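The 80/20 argument above is just Amdahl's law: the overall saving is capped by the database's share of total time, no matter how much faster the database gets. A quick back-of-envelope check:

```python
def overall_saving(db_share, db_speedup):
    """Fraction of total server time saved when the database portion
    (db_share of the total) runs db_speedup times faster; the python
    portion (1 - db_share) is unchanged."""
    new_total = (1 - db_share) + db_share / db_speedup
    return 1 - new_total

# 80/20 python/postgres split, postgres running 100% faster (2x):
print(round(overall_saving(0.20, 2.0), 2))            # -> 0.1 (a 10% saving)

# 70/30 split with an infinitely fast postgres still caps at 30%:
print(round(overall_saving(0.30, float("inf")), 2))   # -> 0.3
```

So a 90% end-to-end improvement from the database engine alone would require the database to dominate total time, which the measured splits rule out.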
A check of our historical stats shows that pre-upgrade we had an average load of roughly 4.0 (12 cores/128GB RAM); post-upgrade it is also 4.0, so the extra workers are not exactly setting the world on fire. Postgres response times as a percentage of total transaction time actually appear to have increased slightly (65ms vs 71ms per transaction on average), although that is likely just because it takes a day or so to fully warm the database, so there are lots of cache misses.

On Mon, Feb 1, 2016 at 7:21 PM, Nhomar Hernández <firstname.lastname@example.org> wrote:

2016-01-31 22:46 GMT-06:00 Nils Hamerlinck <email@example.com>:

Can you give us the commit id?

Just informative. We improved speed by 90% in aaaaaalll our processes just by updating the db engine to 9.5. We ran huge tests this weekend with huge DBs and everything works like a charm, all on v8.0. The speed improvements are not related to Odoo itself; we didn't apply any update. Psql 9.3 had performance problems blocking some tables.

Regards. (Working on updating our production environments now....)

--