Community mailing list archives


Re: About Attachment File Location (again)

- 03/02/2015 05:22:14
How do you keep a total sync between what is in the NAS and the ir.attachment metadata in the database?
The containing directory is set to world readonly. Revisions are uploaded as different attachments.
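One way to audit that kind of sync is to store a checksum in the attachment metadata and periodically compare it against the file on the NAS. A minimal sketch, assuming each `ir.attachment` row carries a `store_fname` (path relative to the store root) and a SHA-1 `checksum` (both names are assumptions here, not a confirmed schema):

```python
import hashlib
import os

def file_sha1(path, chunk_size=64 * 1024):
    """Hash the file in chunks so large binaries never load fully into memory."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def find_out_of_sync(rows, nas_root):
    """rows: iterable of (store_fname, checksum) pairs read from the database.

    Returns the store_fname of every attachment whose file is missing on the
    NAS or whose content no longer matches the recorded checksum.
    """
    bad = []
    for store_fname, checksum in rows:
        path = os.path.join(nas_root, store_fname)
        if not os.path.exists(path) or file_sha1(path) != checksum:
            bad.append(store_fname)
    return bad
```

This only detects drift; deciding which side is authoritative when they differ is a separate policy question.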
>it would incur unnecessary database overhead
>every time the files need to be accessed or manipulated.

With the default 7.0 storage, maybe; with large objects (blobs) there is no more overhead than just reading a file from disk.
I am not an expert in PostgreSQL, but I have come across the claim a few times that storing very large binaries in the database is not recommended. I can't recall the sources, but I found this: 
>For instance, we needed to generate previews and thumbnails for each of
>the known types. We also need to add watermark on-the-fly. These type
>of operations are easier and faster when files are saved on local

It seems easier at first (but not faster). It's easier to modify a file in the filesystem with a filesystem tool, indeed, but it's harder to do so while keeping it truly in sync with what is in the database. And ultimately I would bet it's easier in the long run to rely on the database's security and transactionality.

Presumably, I would need to repeatedly read, modify, and update the database column for every binary I need to manipulate, compared to passing the files to an external tool or library that modifies them in place. I'd assume direct manipulation is faster.
It all depends on the importance you give to your data.
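The transactionality argument can be shown in a few lines. This is a minimal sketch using the standard library's sqlite3 as a stand-in for PostgreSQL (the table and column names are made up for the example): because the binary lives in the database, a failed update rolls back atomically and can never leave a half-written file behind.

```python
import sqlite3

def make_store():
    # Stand-in attachment store: one row per binary, committed up front.
    conn = sqlite3.connect(":memory:")
    with conn:
        conn.execute("CREATE TABLE attachment (name TEXT PRIMARY KEY, body BLOB)")
        conn.execute("INSERT INTO attachment VALUES (?, ?)", ("doc", b"v1"))
    return conn

def replace_body(conn, name, new_body, fail=False):
    try:
        with conn:  # one transaction: commits on success, rolls back on error
            conn.execute("UPDATE attachment SET body = ? WHERE name = ?",
                         (new_body, name))
            if fail:  # simulate a crash in the middle of the update
                raise RuntimeError("simulated failure")
    except RuntimeError:
        pass  # the rollback already restored the previous body
```

With a file on a NAS, getting the same guarantee requires write-to-temp-then-rename plus cleanup logic that the database gives you for free.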

However, one of the other perks of storing the binaries on the filesystem is that it is easier to use a message broker like RabbitMQ to manipulate the binaries asynchronously (on a different machine).
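The pattern is simple either way: the producer only enqueues a reference (the file path), and a worker consumes it asynchronously. RabbitMQ itself needs a running server, so this sketch substitutes the standard library's `queue.Queue` plus a thread as the broker; the hashing step is a hypothetical stand-in for real work such as watermarking or thumbnailing.

```python
import hashlib
import queue
import threading

tasks = queue.Queue()
results = {}

def worker():
    while True:
        path = tasks.get()
        if path is None:          # sentinel: shut the worker down
            break
        with open(path, "rb") as f:
            # stand-in for real processing (watermark, thumbnail, ...)
            results[path] = hashlib.sha1(f.read()).hexdigest()
        tasks.task_done()

def process_async(paths):
    t = threading.Thread(target=worker)
    t.start()
    for p in paths:
        tasks.put(p)              # producer: enqueue references, not blobs
    tasks.join()                  # wait until every file was processed
    tasks.put(None)
    t.join()
    return results
```

Note that this only works cleanly across machines if the worker can reach the same storage, which is exactly the shared-NAS setup discussed above; with database storage the worker would need a database connection instead.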
>Having these binaries saved on local filesystem also benefit from
>quicker streaming when served via web server such as nginx.

This is simply wrong: you can stream large blobs directly from the database to the browser; even PHP offers such features. As for small files, they are supposed to be cached correctly anyway, so they end up never being served from the filesystem nor from the database.
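Mechanically, streaming is the same loop regardless of the source: read a chunk, yield a chunk. A sketch, with `io.BytesIO` standing in for the file-like handle a database large object would give you:

```python
import io

def stream_blob(fileobj, chunk_size=8192):
    """Yield the blob in fixed-size chunks; never holds it all in memory."""
    while True:
        chunk = fileobj.read(chunk_size)
        if not chunk:
            return
        yield chunk

# Usage: the web framework iterates the generator to write the response.
# for chunk in stream_blob(handle): response.write(chunk)
```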
With nginx at least, resumable downloads work out of the box. If the binaries are stored in the database, I assume you need to provide some custom code for that to work?
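For what it's worth, the "custom code" mostly amounts to honoring the HTTP `Range` request header and slicing the blob accordingly. A hedged sketch handling only single `bytes=start-end` ranges (a real server must also send the 206 status and the `Content-Range` response header, and handle multi-range requests or reject them):

```python
import re

def slice_for_range(blob, range_header):
    """Return (payload, content_range) for a single bytes=start-end request."""
    m = re.fullmatch(r"bytes=(\d*)-(\d*)", range_header.strip())
    if not m or m.group(1) == m.group(2) == "":
        raise ValueError("unsupported Range header")
    size = len(blob)
    if m.group(1) == "":                      # suffix form: last N bytes
        start = max(size - int(m.group(2)), 0)
        end = size - 1
    else:
        start = int(m.group(1))
        end = int(m.group(2)) if m.group(2) else size - 1
    end = min(end, size - 1)
    return blob[start:end + 1], "bytes %d-%d/%d" % (start, end, size)
```

In practice you would combine this with chunked reads from the database rather than slicing an in-memory bytes object, but the header arithmetic is the same.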