stef stef_204@yahoo.com [unison-users]
2017-04-08 12:01:10 UTC
Hi,
I posted to this list a few months ago and got some great feedback from
several users.
I was trying to work out how to back up a lot of data (200 to 500 GB) from a
local box to a NAS, and then do this regularly/incrementally after the original dump.
At the time, I asked what the advantage would be of using Unison's client/
server model over mounting locally via cifs or nfs, etc.
I received this reply from Alan Schmitt, which makes sense:

> Performance. Unison detects changes to files by hashing them and
> comparing the hash to its previous value. If you have unison running on
> the NAS, then the hash is computed locally and only the new hash is sent
> on the network. If you mount the NAS as a file system, then the hash
> will be computed on the main computer, which means sending the whole
> file over the network then computing the hash.

I could not get the binary built to use a server on the NAS at the time, so I
used cifs to mount locally. It does work quite nicely, but, especially
on the original sync/dump, it can be slow.
I ended up using rsync for the original dump to the NAS, which was not great,
probably due to my lack of experience with it. I really had to dig into the
man pages, find the right options, test, etc.
Fast forward a few months and I bricked the NAS so had to zero out the NAS
HDD (using "dd") and rebuild it completely. That is now done.
I am looking at starting this sync or "dump" again, and then continuing to
propagate changes as I go along.
I have been able to build a binary for the NAS, for Unison 2.48.4, by
following this link:
<http://mybookworld.wikidot.com/get-unison-working>
For those interested: it works, on many types of NAS I believe; you
just have to install Optware, etc.
I have tested it and, amazingly, all is fine.
Back to the sync.
I am assuming that the original dump will take just as long using the ssh
client/server model as it would using cifs. True?
(If so, that doesn't bode well.)
Where I might benefit most is on any changes _after_ initial dump
which will propagate faster with client/server than with cifs/nfs.
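For what it's worth, a minimal profile for the client/server setup might look like the sketch below. The hostname `nas` and both paths are hypothetical; adjust to taste.

```
# ~/.unison/nas-backup.prf  (hypothetical example)
root = /home/stef/data
root = ssh://nas//mnt/backup/data
```

With this in place, `unison nas-backup` contacts the 2.48.4 binary on the NAS over ssh, so only fingerprints travel over the network during change detection.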
Note: this is not really a backup per se. I am diverting Unison's
primary use, which is to _sync_, to doing a _backup_ (for the task at hand).
So I will use the -force option from left to right.
While this should remove the possibility of conflicts on the initial dump,
I would still like to monitor the process in case something odd happens:
conflicts at some point, e.g. if I have to restart, or if
something gets changed on the destination root. I want some visibility,
which, to some degree, the GTK GUI usually offers.
(I really cannot afford, whatsoever, to have anything deleted from the
source root.)
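That safety requirement can be stated in the profile itself. The preference names below are from the Unison manual; whether you want all of them is a judgment call, and the paths are the same hypothetical ones as above.

```
# safety additions to the hypothetical nas-backup.prf
force = /home/stef/data        # always prefer the local replica (left root)
nodeletion = /home/stef/data   # refuse to perform any deletion on the source root
backup = Name *                # keep backup copies of files before overwriting
```

With `nodeletion` set, Unison will refuse (and flag) any operation that would delete something on the named root, which gives a hard guarantee rather than relying on watching the run.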
In terms of interface, I see this when testing:
changed <-?-> changed test3.txt []
No default command [type '?' for help]
changed <-?-> changed test3.txt [] ?
Commands:
f follow unison's recommendation (if any)
I ignore this path permanently
E permanently ignore files with this extension
N permanently ignore paths ending with this name
m merge the versions
d show differences
x show details
L list all suggested changes tersely
l list all suggested changes with details
p or b go back to previous item
g proceed immediately to propagating changes
q exit unison without propagating any changes
/ skip
> or . propagate from local to destB
< or , propagate from destB to local
changed <-?-> changed test3.txt [] d
All of this is OK: on a few files, it iterates through them, asking how to
resolve each, etc.
But how does one handle it if there are hundreds of files?
It looks pretty tricky in text mode.
It seems easier in the GUI, since you can pick and choose, skip items
where needed, use multiple selection, etc.
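For hundreds of files, the usual text-mode escape hatches are the `batch` and `auto` preferences (again assuming the hypothetical profile above):

```
# add to nas-backup.prf for unattended runs
batch = true    # ask no questions; skip anything unison cannot resolve itself
# auto = true   # alternative: accept all non-conflicting actions, prompt only on conflicts
```

So one workable pattern is: run with `batch` for the bulk, then do a second interactive pass (or a GUI pass) over whatever was skipped.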
Looking forward to feedback.
Sorry for verbosity of this post.