Synology NAS + Moments + Syncthing: dealing with @eaDir directories that prevent directory removal – Problems and Solutions

I have a photo directory on one of the HDDs in my desktop PC, I:\Fotos, where I always download the photos from my camera's SD card and copy the photos taken with my phone. It currently holds 84,790 files, totaling 274 GB (yes, I take a lot of photos).

Through Syncthing I synchronize this directory with another one on my Synology DS918+ NAS, already reviewed here on Skooter Blog, more precisely in the shared folder \\DS918\photo\Fotos, which lives at /volume1/photo/Fotos. This way the photos are accessible from any device on the network, including the Minix TV Box, also reviewed here. On the NAS, besides the redundancy provided by SHR RAID, the photos are also synchronized to Google Drive for extra safety. There are still other machines synchronizing, and other backup processes on the PC.

Also on the DS918+, the photos are all indexed by the NAS and made available in Moments, an application similar to Google Photos but running locally on the NAS, accessible via the browser and via Android and iPhone apps. It also groups pictures of people by their faces and identifies subjects and places, among other features.

Synology's system saves metadata, including Moments' own, in subdirectories created inside every directory that contains photos or other media files. These subdirectories are named @eaDir.
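These directories are easy to spot with find(1); the sketch below demonstrates the pattern in a throwaway tree (the year-named subfolders are made up for illustration):

```shell
#!/bin/sh
# Demo in a temporary tree: Synology's indexer drops an @eaDir next to
# the media files in each folder; find locates them all by name.
tmp=$(mktemp -d)
mkdir -p "$tmp/Fotos/2019/@eaDir" "$tmp/Fotos/2020/@eaDir"
find "$tmp/Fotos" -type d -name '@eaDir'   # prints both @eaDir paths
rm -rf "$tmp"
```

On the NAS itself you would point the same find invocation at /volume1/photo/Fotos instead of the temporary tree.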

Hence my first problem: this metadata was being replicated to the other machines, which ended up with several @eaDir directories that are useless on Windows. The solution I found was to put a file called .stignore in \\DS918\photo\Fotos (and also in the photo directory of each other device) with the following contents:
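As a minimal sketch, the file would look something like this (the @eaDir line is Syncthing's plain ignore pattern; the `.tmp.drive*` pattern for Google Backup & Sync temporary files is an assumption — adjust it to the temp-file names you actually see):

```
@eaDir
.tmp.drive*
```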


The first line instructs Syncthing to ignore @eaDir during synchronization, so these directories are not replicated to the other machines. The second line is unrelated to this problem; it keeps Syncthing from synchronizing the temporary files created by Google Backup & Sync.

But a new problem emerged: occasionally I delete some subdirectory while reorganizing the photos, and Syncthing then reported an error saying it could not replicate the deletion on the NAS, because of the @eaDir left behind in those directories, which therefore were not empty and could not be removed.

The solution was to change .stignore and leave it like this:
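As a sketch, using Syncthing's documented `(?d)` prefix, which marks a pattern as deletable when it is the only thing blocking the removal of its parent directory (the second line is the same assumed Google Backup & Sync pattern as before):

```
(?d)@eaDir
.tmp.drive*
```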


This way Syncthing understands that it should not refuse to delete a directory because of content that is being ignored; in other words, in deleted directories it should go ahead and remove the @eaDir as well. Note that the @eaDir directories are necessary for Moments to work properly, but there seems to be no problem deleting the ones in empty directories.

Finally, yet another problem emerged: now Syncthing could not delete the @eaDir because they are created by a process that runs as root, so every @eaDir has root as its owner. The solution I found was to write a script that periodically changes the owner of all files under \\DS918\photo\Fotos, handing ownership to the user sc-syncthing, which is the user the Syncthing process runs as. Note that this causes no problem for Moments, because the group of the @eaDir directories remains root, and root has unrestricted access anyway. With the @eaDir owned by sc-syncthing, Syncthing can delete them when they sit in directories that should be deleted.

I named the script that makes this change chownSyncthing and put it in /volume1/photo. It has the following content:

chown -R sc-syncthing /volume1/photo/Fotos/
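To verify the result, a quick find run should print nothing once the script has done its job; this is a sketch assuming the path and user from this article (check-eadir-owner is a hypothetical name, and the defaults are only illustrative):

```shell
#!/bin/sh
# check-eadir-owner: print any @eaDir NOT owned by the given user.
# Empty output means the chown script has already covered everything.
root=${1:-/volume1/photo/Fotos}
user=${2:-sc-syncthing}
find "$root" -type d -name '@eaDir' ! -user "$user"
```

Note that `chown -R sc-syncthing …` (with no `:group` suffix) changes only the owner, which is why the group of each @eaDir stays root, as described above.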

In the DS918+'s Control Panel I used the Task Scheduler to create a task that runs this command daily as root:

bash /volume1/photo/chownSyncthing
Task Scheduler

Now everything runs perfectly. Occasionally an error may still occur in Syncthing, if some @eaDir needs to be deleted again right after being created, before the script has had a chance to act; but since Syncthing keeps retrying, the problem resolves itself within at most 24 hours.

Share this article with your friends if you liked it 😉. Skooter Blog needs your help spreading the word to keep existing.
