Discussion:
[unison-users] Keep backups for at least N days? / Prefer to keep older backups instead of creating many backups in a short time?
Eduard Braun Eduard.Braun2@gmx.de [unison-users]
2018-02-05 10:32:07 UTC
Permalink
Hi all,

we have the "maxbackups" preference to control how many backups of a
file are kept.
Is there a possibility to override this setting and keep backups for at
least N days (or similar)?

My use case is that I use unison with the "repeat=watch" option. As I
sometimes modify a single file in quick succession, this can result in
many very recent backups, while the backup from some time back (which is
sometimes the one I'm looking for) is lost.

Alternatively I could imagine a setting like "at least N minutes between
backups" that makes Unison not store a new backup if a file changes
twice within N minutes, but this might cause other issues...

Best Regards,
Eduard
worley@alum.mit.edu [unison-users]
2018-02-06 02:13:25 UTC
Permalink
Post by Eduard Braun ***@gmx.de [unison-users]
we have the "maxbackups" preference to control how many backups of a
file are kept.
Is there a possibility to override this setting and keep backups for at
least N days (or similar)?
One way to achieve this effect might be to have Unison keep all backups
(rather than deleting ones beyond maxbackups). Then, periodically run a
process to delete all backups that are old enough, such as:

find [root directory of the destination] -type f -atime +[days] -delete

This assumes that Unison copies a file's modification date but not its
access date; I believe the access date is left at the moment the backup
was written, but you'd need to check that.
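For instance, a rough Python equivalent of that cleanup (a sketch only;
BACKUP_ROOT and MAX_AGE_DAYS are hypothetical placeholders, and it keys
on the access time for the same reason as the find command) might be:

  import os, time

  BACKUP_ROOT = "/path/to/backup/root"   # hypothetical destination root
  MAX_AGE_DAYS = 30                      # keep backups for at least N days

  cutoff = time.time() - MAX_AGE_DAYS * 86400
  for dirpath, _dirnames, filenames in os.walk(BACKUP_ROOT):
      for name in filenames:
          path = os.path.join(dirpath, name)
          # st_atime mirrors find's -atime test; verify that Unison really
          # leaves the access time at backup-creation time before relying
          # on this.
          if os.stat(path).st_atime < cutoff:
              os.remove(path)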

Dale
Dave Warren davew@hireahit.com [unison-users]
2018-02-06 23:53:30 UTC
Permalink
Post by ***@alum.mit.edu [unison-users]
Post by Eduard Braun ***@gmx.de [unison-users]
we have the "maxbackups" preference to control how many backups of a
file are kept.
Is there a possibility to override this setting and keep backups for at
least N days (or similar)?
One way to achieve this effect might be to have Unison keep all backups
(rather than deleting ones beyond maxbackups). Then, periodically run a
process to delete all backups that are old enough, such as:
find [root directory of the destination] -type f -atime +[days] -delete
This assumes that Unison copies a file's modification date but not its
access date; I believe the access date is left at the moment the backup
was written, but you'd need to check that.
I've been experimenting with "timegaps" <https://gehrcke.de/timegaps/>
for this type of thing, description from their website:

"This is useful for implementing backup retention policies with the goal
to keep backups "logarithmically" distributed in time, e.g. one for each
of the last 24 hours, one for each of the last 30 days, one for each of
the last 8 weeks, and so on."

However, whether this applies to Unison or could clean up Unison
backups, I have no idea; I haven't used Unison's backup feature recently
enough to remember exactly how it works. I'm using timegaps with zbackup,
which generates index files describing each backup. I then use timegaps
to remove the index files, and zbackup itself is responsible for removing
the blocks of data.
Alan Schmitt alan.schmitt@polytechnique.org [unison-users]
2018-02-07 14:09:39 UTC
Permalink
Post by Dave Warren ***@hireahit.com [unison-users]
I've been experimenting with "timegaps" <https://gehrcke.de/timegaps/>
This looks like a great tool, thanks a lot for the link.
Post by Dave Warren ***@hireahit.com [unison-users]
However, whether this applies to Unison or could clean up Unison
backups, I have no idea; I haven't used Unison's backup feature recently
enough to remember exactly how it works. I'm using timegaps with zbackup,
which generates index files describing each backup. I then use timegaps
to remove the index files, and zbackup itself is responsible for removing
the blocks of data.
One would need to experiment to use it with unison, but I see the
following potential issues. First, unison keeps multiple backups of the
same file by adding a VERSION number at the end, where 1 is the newest
version. Thus each time there is a backup, all the versions of the file
are renamed. Could this change their modified date? Also, if there are
many backups, that's a lot of renaming. Second, there is no way to say
that one wants an unbounded number of backups (but one can always use a
very large number). And third, one needs a way to pass sets of backup
files to timegaps for it to say which ones to delete (even with local
backups, there are backups of many files in each directory, each of the
form .bak.VERSION.FILENAME, which can be configured).

Best,
Alan
--
OpenPGP Key ID : 040D0A3B4ED2E5C7
Monthly Atmospheric CO₂, Mauna Loa Obs. 2018-01: 407.98, 2017-01: 406.13
Eduard Braun Eduard.Braun2@gmx.de [unison-users]
2018-02-08 17:23:42 UTC
Permalink
First of all thank you all for your ideas!

You gave me some interesting pointers. The main question now will be
whether I find a solution convenient enough to not simply choose the "do
not care about disk space and keep a huge amount of backups" approach ;-)


On 07.02.2018 at 15:09, Alan Schmitt wrote:
Post by Alan Schmitt ***@polytechnique.org [unison-users]
One would need to experiment to use it with unison, but I see the
following potential issues. First, unison keeps multiple backups of the
same file by adding a VERSION number at the end, where 1 is the newest
version. Thus each time there is a backup, all the versions of the file
are renamed. Could this change their modified date? Also, if there are
many backups, that's a lot of renaming. Second, there is no way to say
that one wants an unbounded number of backups (but one can always use a
very large number). And third, one needs a way to pass sets of backup
files to timegaps for it to say which ones to delete (even with local
backups, there are backups of many files in each directory, each of the
form .bak.VERSION.FILENAME, which can be configured).
timegaps indeed sounds like the ideal solution. Regarding the issues
Alan mentioned:

1. is not an issue; modification times are not changed by renaming.
2. As long as Unison only renames, it's not ideal, but I guess it would
   not be a limiting factor. After all, the aim is to keep the total
   number of backups low by deleting intermediate versions.
   One potential issue I'm not sure about: does Unison handle the case
   where there are "gaps" in the numbering of the backup files
   gracefully? (E.g. if backups are numbered "1 2 4 7" and I add a file
   (or two), will it simply become "1 2 3 4 7" (or "1 2 3 4 5 7")?)
3. is indeed the biggest problem, as one would basically need to run
   timegaps on each file, which would be inefficient and/or a lot of
   work to implement. Is there any functionality in Unison to "hook"
   into file operations? I.e. run a script for each file that is about
   to / was just synchronized?

Best Regards,
Eduard
Alan Schmitt alan.schmitt@polytechnique.org [unison-users]
2018-02-09 07:42:26 UTC
Permalink
timegaps indeed sounds like the ideal solution. Regarding the issues
Alan mentioned:
1. is not an issue; modification times are not changed by renaming.
Good.
2. As long as Unison only renames, it's not ideal, but I guess it would
   not be a limiting factor. After all, the aim is to keep the total
   number of backups low by deleting intermediate versions.
   One potential issue I'm not sure about: does Unison handle the case
   where there are "gaps" in the numbering of the backup files
   gracefully? (E.g. if backups are numbered "1 2 4 7" and I add a file
   (or two), will it simply become "1 2 3 4 7" (or "1 2 3 4 5 7")?)
Yes, it works with gaps as you expect. The logic at
https://github.com/bcpierce00/unison/blob/master/src/stasher.ml#L311
is this:
- if the backup with version i exists
  - if i is smaller than maxbackups, then recursively clear the slot at
    i+1, rename version i to i+1, and return the path at i (saying it
    is available for backup)
  - otherwise (i is equal to maxbackups), delete the backup and return
    the path at i
- otherwise (there is no backup with version i), return that path

so it will basically push the backups along until there is a gap.
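A rough Python rendering of that rotation (a sketch of the logic as
described above, not the actual OCaml in stasher.ml; path_for is a
hypothetical helper that builds the .bak.VERSION.FILENAME path for a
given version number) would be:

  import os

  def free_backup_slot(path_for, i, maxbackups):
      # Ensure the slot for version i is free and return its path.
      p = path_for(i)
      if os.path.exists(p):
          if i < maxbackups:
              # Recursively free slot i+1, then shift this version up.
              os.rename(p, free_backup_slot(path_for, i + 1, maxbackups))
          else:
              # i == maxbackups: drop the oldest allowed version.
              os.remove(p)
      return p

  # A new backup would then be written to free_backup_slot(path_for, 1, maxbackups).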
3. is indeed the biggest problem, as one would basically need to run
timegaps on each file, which would be inefficient and/or a lot of work
to implement.
I'm not sure it would be a lot of work to implement. Since you can
choose the prefix and suffix of a backup (with the file name in the
middle), if you use centrally located backups it's "just" a question of
going through the tree of backups, extracting the file name from each
backup (using some regexp), and remembering which files you have
already dealt with.

Performance would be bad, however.
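Concretely, such a pass could look like the following hypothetical
Python sketch (it assumes the default ".bak.VERSION.FILENAME" naming in
a central backup directory; the regexp would need adjusting if the
backupprefix/backupsuffix preferences are configured differently):

  import os, re
  from collections import defaultdict

  BACKUP_DIR = "/path/to/central/backups"        # hypothetical location
  pattern = re.compile(r"^\.bak\.(\d+)\.(.+)$")  # VERSION, FILENAME

  groups = defaultdict(list)
  for dirpath, _dirs, files in os.walk(BACKUP_DIR):
      for name in files:
          m = pattern.match(name)
          if m:
              # Collect all backup versions of the same original file.
              groups[(dirpath, m.group(2))].append(os.path.join(dirpath, name))

  # Each group is one file's set of backup versions; a retention tool
  # such as timegaps could then be applied to each group separately.
  for (_dirpath, original), paths in groups.items():
      print(original, "has", len(paths), "backup versions")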
Is there any functionality in Unison to "hook" into
file operations? I.e. run a script for each file that is about to /
was just synchronized?
I don't know. If it's there, it's probably in stasher.ml.

Best,

Alan
--
OpenPGP Key ID : 040D0A3B4ED2E5C7
Monthly Atmospheric CO₂, Mauna Loa Obs. 2018-01: 407.98, 2017-01: 406.13
'Benjamin C. Pierce' bcpierce@cis.upenn.edu [unison-users]
2018-02-09 13:38:36 UTC
Permalink
Post by Alan Schmitt ***@polytechnique.org [unison-users]
Post by Eduard Braun ***@gmx.de [unison-users]
Is there any functionality in Unison to "hook" into
file operations? I.e. run a script for each file that is about to /
was just synchronized?
I don't know. If it's there, it's probably in stasher.ml.
We don’t currently have any such functionality. Would not be too hard to provide, I think — we can discuss designs if you (or someone) wants to have a go at it.

- B
