Discussion:
[unison-users] Handling Conflicts and Visibility in Unison while running via ssh (client/server)
stef stef_204@yahoo.com [unison-users]
2017-04-08 12:01:10 UTC
Hi,

I posted to this list a few months ago and got some great feedback from
several users.

I was trying to work out how to back up a large amount of data (200 to 500 GB)
from a local box to a NAS, and then do this regularly/incrementally after the
original dump.

At the time, I asked what the advantage would be of using Unison's
client/server model over mounting locally via cifs or nfs, etc.

I received this reply from Alan Schmitt, which makes sense.
Performance. Unison detects changes to files by hashing them and
comparing the hash to its previous value. If you have unison running on
the NAS, then the hash is computed locally and only the new hash is sent
on the network. If you mount the NAS as a file system, then the hash
will be computed on the main computer, which means sending the whole
file over the network then computing the hash.
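
To make the difference concrete, here is a rough sketch of the two invocations
(/data, /mnt/nas/backup and the ssh://nas//share/backup root are hypothetical
paths); only the ssh form lets the NAS compute its own hashes:

  # client/server over ssh: hashing of the NAS replica happens on the NAS
  unison /data ssh://nas//share/backup

  # cifs/nfs mount: file contents must cross the network just to be hashed
  unison /data /mnt/nas/backup
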
I could not get the binary built to run a server on the NAS at the time, so
I used cifs to mount it locally. That does work quite nicely, but it can be
slow, especially on the original sync/dump.

I ended up using rsync for the original dump to the NAS, which was not great,
probably due to my lack of experience with it. I really had to dig into the
man pages, find the right options, test, etc.

Fast forward a few months: I bricked the NAS, so I had to zero out its HDD
(using "dd") and rebuild it completely. That is now done.

I am now looking at starting this sync or "dump" again, and then continuing
to propagate changes as I go along.

I have been able to build a binary for the NAS, for Unison 2.48.4, by
following this link:

<http://mybookworld.wikidot.com/get-unison-working>

For those interested, it works on many types of NAS, I believe; you just
have to install Optware, etc.

I have tested it and all is fine--amazingly.
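
In case it helps others: once the binary is on the NAS, the local side needs
to be told where to find it if it is not on the default PATH; a sketch, with
/opt/bin/unison as a hypothetical install location and made-up roots:

  unison /data ssh://nas//share/backup -servercmd /opt/bin/unison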

Back to the sync.

I am assuming that the original dump will take just as long using ssh
client/server as it would using cifs. True?
(If so, that doesn't bode well.)

Where I might benefit most is on changes _after_ the initial dump,
which will propagate faster with client/server than with cifs/nfs.

Note: this is not really a backup per se. I am diverting Unison from its
primary use, which is to _sync_, to doing a _backup_ (for the task at hand).
So I will use the -force option from left to right.
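
Concretely, I expect the invocation to look something like this (the paths
are placeholders); giving the left root to -force means the NAS copy is
always made to match the local one:

  unison /data ssh://nas//share/backup -force /data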

While this should remove the possibility of conflicts on the initial dump,
I would still like to monitor the process in case something odd happens:
conflicts at some point, e.g. if I have to restart, or if something gets
changed on the destination root. I want some visibility, which, to some
degree, the GTK GUI usually offers.

(I really could not afford to have anything deleted from the source root.)

In terms of interface, I see this when testing:

changed <-?-> changed test3.txt []
No default command [type '?' for help]
changed <-?-> changed test3.txt [] ?
Commands:
f follow unison's recommendation (if any)
I ignore this path permanently
E permanently ignore files with this extension
N permanently ignore paths ending with this name
m merge the versions
d show differences
x show details
L list all suggested changes tersely
l list all suggested changes with details
p or b go back to previous item
g proceed immediately to propagating changes
q exit unison without propagating any changes
/ skip
> or . propagate from from local to destB
< or , propagate from from destB to local
changed <-?-> changed test3.txt [] d

All of this is OK on a few files; it iterates through them, asking how to
resolve each one, etc.

But how does one handle it if there are hundreds of files?
It looks pretty tricky in text mode.

It seems easier in the GUI, since you can pick and choose, skip items where
needed, use multiple selection, etc.

Looking forward to feedback.

Sorry for the verbosity of this post.
Alan Schmitt alan.schmitt@polytechnique.org [unison-users]
2017-04-10 07:11:06 UTC
Post by stef ***@yahoo.com [unison-users]
I am assuming that the original dump will take just as long using ssh
client/server as it would using cifs. True?
(If so, that doesn't bode well.)
I'm not sure; it will really depend on the speed of the transfers. In
particular, the first sync will use the rsync algorithm for new files, and
I'm not sure it does that with the filesystem mounted locally.
Post by stef ***@yahoo.com [unison-users]
Where I might benefit most is on changes _after_ the initial dump,
which will propagate faster with client/server than with cifs/nfs.
Yes.
Post by stef ***@yahoo.com [unison-users]
Note: this is not really a backup per se. I am diverting Unison from its
primary use, which is to _sync_, to doing a _backup_ (for the task at hand).
So I will use the -force option from left to right.
I do this as well, but instead of using -force, I use these options
(with “/First/Root” being the actual local root):

noupdate = /First/Root
nocreation = /First/Root

This way, I see when there is a problem (i.e., some file on the backup
has changed).
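
For context, a minimal profile sketch built around those two preferences (the
roots and the profile name are made up for illustration):

  # ~/.unison/nas-backup.prf
  root = /First/Root
  root = ssh://nas//share/backup
  # never let Unison modify or create anything on the local root
  noupdate = /First/Root
  nocreation = /First/Root

There is also a nodeletion preference of the same form, should deletions on
the local root need to be blocked as well.
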
Post by stef ***@yahoo.com [unison-users]
While this should remove the possibility of conflicts on the initial dump,
I would still like to monitor the process in case something odd happens:
conflicts at some point, e.g. if I have to restart, or if something gets
changed on the destination root. I want some visibility, which, to some
degree, the GTK GUI usually offers.


Post by stef ***@yahoo.com [unison-users]
But how does one handle it if there are hundreds of files?
It looks pretty tricky in text mode.
The way I do it is by monitoring the unison.log file for problems (I
basically echo the last line), and when there is a problem I run Unison
with a different profile where I can choose what to do (i.e., without
the “silent=true” option).
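
A sketch of that kind of wrapper (the profile name "nas-backup" and the log
path are placeholders; Unison exits non-zero when files were skipped or
transfers failed):

  unison nas-backup -batch -silent -logfile ~/unison.log
  if [ $? -ne 0 ]; then
    echo "unison reported a problem:"
    tail -n 1 ~/unison.log
    # at this point, re-run interactively with a profile that does not
    # set silent=true and resolve the differences by hand
  fi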

Best,

Alan
--
OpenPGP Key ID : 040D0A3B4ED2E5C7
Monthly Atmospheric CO₂, Mauna Loa Obs. 2017-03: 407.18, 2016-03: 404.83
stef stef_204@yahoo.com [unison-users]
2017-04-10 10:51:38 UTC
Alan,


Thanks a lot for your helpful feedback.
I will look into the options you mentioned to replace -force.
Post by stef ***@yahoo.com [unison-users]
I am assuming that the original dump will take just as long using ssh
client/server as it would using cifs. True?
(If so, that doesn't bode well.)
I'm not sure; it will really depend on the speed of the transfers. In
particular, the first sync will use the rsync algorithm for new files, and
I'm not sure it does that with the filesystem mounted locally.
Post by stef ***@yahoo.com [unison-users]
Where I might benefit most is on changes _after_ the initial dump,
which will propagate faster with client/server than with cifs/nfs.
Yes.
Post by stef ***@yahoo.com [unison-users]
Note: this is not really a backup per se. I am diverting Unison from its
primary use, which is to _sync_, to doing a _backup_ (for the task at hand).
So I will use the -force option from left to right.
I do this as well, but instead of using -force, I use these options
(with “/First/Root” being the actual local root):

noupdate = /First/Root
nocreation = /First/Root

This way, I see when there is a problem (i.e., some file on the backup
has changed).
