Discussion:
[unison-users] Unison from multiple sources to single target
'Stull, James' jamesstull@samsondentalpartners.com [unison-users]
2016-11-28 02:44:03 UTC
Permalink
Greetings, I'm trying to set up Unison for copies from multiple Windows servers to a single Linux server. I need these to run nightly, and I'm expecting the replication to be up to a couple of GB per Windows server.

I tried setting this up with multiple profiles and having them all start at the same time via cron, but this didn't work. I keep getting failures. However, if I run a single server with the same command (i.e. "unison profilename") it works great.

What am I doing wrong? I looked through the documentation and searched online but can't find a similar scenario to what I am doing. Essentially I'm just performing a backup.

Thanks


--James
Adrian Klaver adrian.klaver@aklaver.com [unison-users]
2016-11-28 14:40:43 UTC
Permalink
On 11/27/2016 06:44 PM, 'Stull, James' wrote:
> Greetings, I'm trying to set up Unison for copies from multiple Windows
> servers to a single Linux server. I need these to run nightly, and I'm
> expecting the replication to be up to a couple of GB per Windows server.
>
> I tried setting this up with multiple profiles and having them all start
> at the same time via cron, but this didn't work. I keep getting
> failures. However, if I run a single server with the same command (i.e.
> "unison profilename") it works great.
>
> What am I doing wrong? I looked through the documentation and searched
> online but can't find a similar scenario to what I am doing. Essentially
> I'm just performing a backup.

Without the error messages or the profiles you are using this is pretty
much unsolvable.

> Thanks
> --James
--
Adrian Klaver
***@aklaver.com


------------------------------------

Yahoo Groups Links

<*> To visit your group on the web, go to:
http://groups.yahoo.com/group/unison-users/

<*> Your email settings:
Individual Email | Traditional

<*> To change settings online go to:
http://groups.yahoo.com/group/unison-users/join
(Yahoo! ID required)

<*> To change settings via email:
unison-users-***@yahoogroups.com
unison-users-***@yahoogroups.com

<*> To unsubscribe from this group, send an email to:
unison-users-***@yahoogroups.com

<*> Your use of Yahoo Groups is subject to:
https://info.yahoo.com/legal/us/yahoo/utos/terms/
'Stull, James' jamesstull@samsondentalpartners.com [unison-users]
2016-11-28 16:47:52 UTC
Permalink
Glad to know it is possible.

When the cron job kicks off I see several unison instances start, then they all stop. I have the profiles set up to send logs, but all logs are empty.

Here is one of my profiles:

root = ssh://***@server//folder1
root = /shares/path/Folder1
sshargs = -i
batch = true
fastcheck = yes
prefer = newer
confirmmerge = false
silent = true
times = true
logfile = /var/log/unison/Servername.log

I did verify that unison can write to /var/log/unison/*. I'm running this as a normal user on CentOS 7. As I said, if I kick off a single job it works fine.

Let me know what else you may need. Thank you in advance for your help.

--James
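[One detail in the profile above worth double-checking: "sshargs = -i" normally takes a key path as its argument, and under cron there is no ssh-agent or TTY to fall back on; if the path was lost, ssh can misparse the rest of the command line. A minimal profile sketch, where the key path and the "-o BatchMode=yes" option are illustrative additions, not part of the original:]

```
# ~/.unison/profile1.prf -- sketch; key path is hypothetical
root = ssh://user@server//folder1
root = /shares/path/Folder1
sshargs = -i /home/user/.ssh/unison_key -o BatchMode=yes
batch = true
fastcheck = yes
prefer = newer
times = true
logfile = /var/log/unison/Servername.log
```

[BatchMode=yes makes ssh fail immediately instead of hanging on a password prompt when run non-interactively from cron.]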

Adrian Klaver adrian.klaver@aklaver.com [unison-users]
2016-11-28 18:04:53 UTC
Permalink
James wrote:
> Glad to know it is possible.

That has not been established yet, as what 'it is' is still a matter of
conjecture.

> When the cron job kicks off I see several unison instances start, then
> they all stop. I have the profiles set up to send logs, but all logs
> are empty.
>
> root = /shares/path/Folder1
> sshargs = -i
> batch = true
> fastcheck = yes
> prefer = newer
> confirmmerge = false
> silent = true
> times = true
> logfile = /var/log/unison/Servername.log
>
> I did verify that unison can write to /var/log/unison/*. I'm running
> this as a normal user on CentOS 7. As I said, if I kick off a single
> job it works fine.
Does the single job log to the logfile?

What is the command you are using in the cron job?

Have you used -debug? See:

http://www.cis.upenn.edu/~bcpierce/unison/download/releases/stable/unison-manual.html#prefs

debug xxx
This preference is used to make Unison print various sorts of
information about what it is doing internally on the standard error
stream. It can be used many times, each time with the name of a module
for which debugging information should be printed. Possible arguments
for debug can be found by looking for calls to Util.debug in the sources
(using, e.g., grep). Setting -debug all causes information from all
modules to be printed (this mode of usage is the first one to try, if
you are trying to understand something that Unison seems to be doing
wrong); -debug verbose turns on some additional debugging output from
some modules (e.g., it will show exactly what bytes are being sent
across the network).
--
Adrian Klaver
***@aklaver.com


Adrian Klaver adrian.klaver@aklaver.com [unison-users]
2016-11-29 22:57:13 UTC
Permalink
By 'it is' I mean that you can run multiple instances of unison
simultaneously.

> 22 21 * * * /usr/bin/unison profilename >/dev/null 2>&1

But this is showing only one instance of Unison running. Also, with

> >/dev/null 2>&1

you are swallowing any output and errors, which probably explains why you
are not seeing anything in the logs.

> I have not used -debug, but when I run a single instance it runs fine.
> --James
--
Adrian Klaver
***@aklaver.com


'Stull, James' jamesstull@samsondentalpartners.com [unison-users]
2016-11-30 00:30:48 UTC
Permalink
Sorry about the confusion. The only way I know to run Unison is to set up each profile. So I have 27 profiles to sync with 27 different servers and directories on the Linux server.

When the cron job starts, it has 27 lines just like the one I showed you, for example:
22 21 * * * /usr/bin/unison profilename >/dev/null 2>&1
22 21 * * * /usr/bin/unison profilename >/dev/null 2>&1
22 21 * * * /usr/bin/unison profilename >/dev/null 2>&1
22 21 * * * /usr/bin/unison profilename >/dev/null 2>&1
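[Since ">/dev/null 2>&1" discards the very errors being hunted here, a variant of the crontab above that keeps output per profile might look like the following; the profile names, log paths, and staggered minutes are illustrative:]

```
# m  h  dom mon dow  command -- capture output instead of discarding it
22 21 *   *   *      /usr/bin/unison profile1 >>/var/log/unison/profile1.cron.log 2>&1
24 21 *   *   *      /usr/bin/unison profile2 >>/var/log/unison/profile2.cron.log 2>&1
26 21 *   *   *      /usr/bin/unison profile3 >>/var/log/unison/profile3.cron.log 2>&1
```

[Staggering the start times by a couple of minutes also avoids 27 ssh handshakes landing at the same instant.]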



--James


'Stull, James' jamesstull@samsondentalpartners.com [unison-users]
2016-11-30 01:42:30 UTC
Permalink
What would you recommend then? I have all these that need to sync each night back to this server. Ideally to sync throughout the day.

--James

________________________________
From: Adrian Klaver <***@aklaver.com>
Sent: Tuesday, November 29, 2016 8:34:46 PM
To: Stull, James; unison-***@yahoogroups.com
Subject: Re: [unison-users] Unison from multiple sources to single target
> Sorry for the confusion. The cron job is kicking off unison for each
> profile I made. So for example, I have 27 servers I want to sync back
> to this single server. I have this server kicking off 27 unison jobs
> at once via cron.
>
> 22 21 * * * /usr/bin/unison profile1 >/dev/null 2>&1
> 22 21 * * * /usr/bin/unison profile2 >/dev/null 2>&1
> 22 21 * * * /usr/bin/unison profile3 >/dev/null 2>&1
>
> And so on.
>
> ------------
> root = /shares/path/Folder1
> sshargs = -i
> batch = true
> fastcheck = yes
> prefer = newer
> confirmmerge = false
> silent = true
> times = true
> logfile = /var/log/unison/Server1.log
> ------------
>
> ------------
> root = /shares/path/Folder2
> sshargs = -i
> batch = true
> fastcheck = yes
> prefer = newer
> confirmmerge = false
> silent = true
> times = true
> logfile = /var/log/unison/Server2.log
> ------------
>
> And so on.
>
> Again, if I start a single job by running "unison profile1" then it
> runs fine. But when I try to run these from my cron job, it fails.
>
> Let me know if this makes sense.
No it does not:

1) >/dev/null 2>&1 is destroying any useful troubleshooting information.

2) Starting 27 concurrent Unison instances, each of which is syncing
GBs of data, is probably going to peg/overload network bandwidth and/or
storage I/O even if it works.

To figure out what is going on:

1) Lose the >/dev/null 2>&1

2) Run a cron job with a single instance of Unison

3) If that does not tell you anything, change the command in the cron
job to include -debug.

4) If the single job cron works, then try two jobs and see what happens.


My personal opinion is that trying 27 jobs at a time will be a
continuous disaster.
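[A command-line sketch of steps 1-3 above; the profile name and output paths are illustrative:]

```
# Steps 1+2: one profile at a time, with output kept rather than discarded
/usr/bin/unison profile1 > /tmp/unison-profile1.out 2>&1

# Step 3: add debugging if the output is still silent
/usr/bin/unison profile1 -debug all 2> /tmp/unison-profile1.debug
```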
--
Adrian Klaver
***@aklaver.com
Adrian Klaver adrian.klaver@aklaver.com [unison-users]
2016-11-30 02:11:35 UTC
Permalink
James wrote:
> What would you recommend then? I have all these that need to sync each
> night back to this server. Ideally to sync throughout the day.

Without knowing what the problem is I can make no recommendation. So the
first thing to do is to get a successful cron run. This means following
the steps I outlined in the previous post.

> --James
--
Adrian Klaver
***@aklaver.com


sesc03@web.de [unison-users]
2016-12-02 19:51:32 UTC
Permalink
CC:ing the list after noticing that the default reply only was sent to James...




Hello James,

my reply yesterday seems not to have made it through to the group? At any rate I can't see it.


The gist: As Adrian says, running that many syncs in parallel is an invitation for disaster. I have a Unison wrapper script in bash that can run many profiles serially (used daily by myself); you are welcome to use it if it fits your needs: https://bitbucket.org/sesc/us/overview
Best,
Sebastian


---In unison-***@yahoogroups.com, <***@...> wrote :
> What would you recommend then? I have all these that need to sync each night back to this server. Ideally to sync throughout the day.
> --James
Adrian Klaver adrian.klaver@aklaver.com [unison-users]
2016-12-05 17:37:37 UTC
Permalink
James wrote:
> My apologies about the delay in response. Thank you both for your
> replies. It sounds like either I'm going about this the wrong way and/or
> Unison may not be the best fit for me. Let me explain exactly what I'm
> doing so you have a complete picture.
>
> I currently have 27 remote offices, each with its own Windows file
> server. Previously we used Windows DFS replication back to a single
> Windows server to essentially back these files up from the remote
> offices to a central server. This worked very well because all changes
> were replicated instantly. However, we continue to grow, both as a
> company adding more offices and also in data. I typically see growth of
> ~10GB a day in total, organization wide. Last year our central server
> started to run out of space. I ended up replacing this single server
> with a GlusterFS two-node system. Gluster runs great for this scenario.
> However, now I need to find a good way to replicate all changes/new
> files quickly and efficiently from my Windows servers to a Linux system.
> In my research I found Unison to be a good solution, since it indexes
> everything locally and only copies changes. Great for slow WAN links,
> which is exactly what I need.
>
> I'm only going to grow in the number of servers I replicate from, so I
> need a replication solution that can handle this. Ideally I want to be
> able to replicate throughout the day, not just once at night.
>
> Also, here are the results that Adrian asked me for: nothing was output
> to stdout. I even redirected stdout to a file to ensure nothing was
> missed. However, I did find the solution. Sebastian pointed me in the
> right direction. I had a single server in the list that is having
> issues. This would cause the entire cron job to fail. I am now
> successfully running multiple jobs at once. Unfortunately, as you and
> Sebastian pointed out, this may not be a good way due to system
> requirements.
>
> I would love to hear if you have any suggestions on how I should run
> Unison in a different way, or perhaps a different solution entirely that
> would work better for my environment/needs.

Well, whatever you do is going to have the same restrictions, with a lot
of data being processed at one time. It is not clear to me from the
above whether you are doing your original plan of kicking all the Unison
jobs off at one time, or whether you are using Sebastian's script to run
them sequentially.

Is there a reason all the transfers have to be done at one time?

It seems having the jobs spread out over the day would reduce the
instantaneous load at any point in time.

> Thanks
--
Adrian Klaver
***@aklaver.com
'Stull, James' jamesstull@samsondentalpartners.com [unison-users]
2016-12-05 17:54:37 UTC
Permalink
All these servers are connected via T1s. If I try to run them sequentially, I fear the jobs would take longer than 24 hours to complete.

I am considering spinning up one or two more servers to split these jobs up between. I imagine running nine or so jobs at once is far better than trying to run 27 or more.
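[A middle ground that needs no extra servers is bounding concurrency on the existing one, e.g. with xargs -P. In this sketch the profile names are the illustrative ones from this thread, and echo stands in for the real /usr/bin/unison invocation so the pipeline is safe to dry-run:]

```shell
#!/bin/sh
# Run up to 3 sync jobs at a time instead of all 27 at once.
# To use for real, replace the echo with:
#   /usr/bin/unison "$0" >>"/var/log/unison/$0.cron.log" 2>&1
printf '%s\n' profile1 profile2 profile3 profile4 profile5 profile6 |
  xargs -n1 -P3 sh -c 'echo "would run: /usr/bin/unison $0"'
```

[Each profile name is handed to one sh invocation as $0; -P3 keeps at most three running concurrently, so a slow or stuck server delays only its own slot.]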

As a side question, I noticed Unison has an fsmonitor utility. I see notes about it in the changelog but nothing in the documentation on how to use it. Could I use this more effectively? If I understand correctly, it would upload changes as they happen instead of performing a single backup job daily. If so, how do I use it? Can you point me to any documentation?

Thanks!



--James

Adrian Klaver adrian.klaver@aklaver.com [unison-users]
2016-12-05 17:59:54 UTC
Permalink
James wrote:
> All these servers are connected via T1s. If I try to run them
> sequentially, I fear the jobs would take longer than 24 hours to
> complete.

Not following. If you launch 27 jobs at one time to all servers, how is
that less resource intensive than hitting each server one at a time?

> I am considering spinning up one or two more servers to split these
> jobs up between. I imagine running nine or so jobs at once is far
> better than trying to run 27 or more.
>
> As a side question, I noticed Unison has an fsmonitor utility. I see
> notes about it in the changelog but nothing in the documentation on how
> to use it. Could I use this more effectively? If I understand
> correctly, it would upload changes as they happen instead of performing
> a single backup job daily. If so, how do I use it? Can you point me to
> any documentation?
Search for watch in the docs:

http://www.cis.upenn.edu/~bcpierce/unison/download/releases/stable/unison-manual.html
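[For reference, continuous syncing is driven by Unison's "repeat" preference; setting it to "watch" makes Unison keep running and pick up changes as they happen, but it depends on a unison-fsmonitor helper being available on both ends. Support varies by Unison version and platform, so treat this as a sketch to verify against your build:]

```
# In a profile -- keep syncing as changes are detected
repeat = watch

# Equivalent command line:
#   unison profile1 -repeat watch
```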
Post by 'Stull, James' ***@samsondentalpartners.com [unison-users]
Thanks!
--James
-----Original Message-----
Sent: Monday, December 5, 2016 12:38 PM
Subject: Re: Unison from multiple sources to single target
My apologies about the delay in response. Thank you both for your
replies. It sounds like either I’m going about this the wrong way
and/or Unison may not be the best fit for me. Let me explain what
exactly I’m doing so you have a complete picture.
I currently have 27 remote offices, each have their own Windows file
server. Previously we used Windows DFS replication back to a single
windows server to essentially back these files up from the remote
offices to a central server. This worked very well because all changes
were replicated instantly. However, we continue to grow, both as a
company adding more offices and also in data. I typically see a growth
of ~10GB a day in total organization wide. Last year our central
server started to run out of space. I ended up replacing this single
server with a GlusterFS two node system. Gluster runs great for this scenario.
However, now I need to find a good way to replicate all changes/new
files quickly and efficiently from my Windows servers to a Linux system.
In my research I found Unison as a good solution since it indexes
everything locally and only copies changes. Great for slow WAN links
which is exactly what I need.
I’m only going to grow in the number of servers I replicate from. So I
need a replication solution that can handle this. Ideally I want to be
able to replicate throughout the day, not just once at night.
Also, here are the results that Adrian asked me for: Nothing was
output to Stdout. I even redirected Stdout to a file to ensure nothing
was missed. However, I did find the solution. Sebastian pointed me in
the right direction. I had a single server in the list that is having
issues. This would cause the entire cronjob to fail. I am now
successfully running multiple jobs at once. Unfortunately, as you and
Sebastian pointed out, this may not be a good way due to system
requirements.
I would love to hear if you have any suggestions on how I should run
Unison in a different way, or perhaps a different solution entirely
that would work better for my environment/needs.
--
Adrian Klaver
***@aklaver.com
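For what it's worth, the failure isolation James mentions (one bad server aborting the entire cron job) can be sketched as a sequential driver that logs and moves on when a profile fails; the profile names and log path below are placeholders, not from the thread:

```shell
#!/bin/sh
# Sequential nightly driver: run each Unison profile in turn so that one
# failing office does not abort the remaining jobs.
# Profile names and the log path are placeholders.
LOG="${LOG:-/tmp/unison-nightly.log}"

run_profile() {
    # $1 = profile name; -batch avoids interactive prompts under cron
    unison "$1" -batch -silent >>"$LOG" 2>&1 || echo "profile $1 failed" >>"$LOG"
}

for profile in office01 office02 office03; do
    run_profile "$profile"
done
echo "nightly run complete, see $LOG"
```

Run from cron, a nonzero exit from one profile is recorded in the log and the loop continues, which is the behaviour James was missing when a single bad server killed the whole job.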
'Stull, James' jamesstull@samsondentalpartners.com [unison-users]
2016-12-05 19:33:41 UTC
Permalink
Ah, watch, got it. Any recommendations on how to set this up properly for my scenario? I assume I could use it without a separate nightly replication job, or does the watch option only pick up files that are changed, and not new file creation?

Each job has a few hundred MB to over a GB of data to transfer. If I run them one at a time my worry isn't the resources of the central server, it's the bandwidth. The bandwidth really limits what I can do and how long those jobs run.



--James
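For reference, in Unison releases that ship the watcher, the feature is enabled from the profile with a `repeat = watch` preference, with the fsmonitor helper findable on both ends; a sketch with placeholder roots (behaviour may differ in older releases, and as far as the manual describes, the watcher reports new files as well as modifications):

```
# ~/.unison/office01.prf -- hypothetical profile
root = C:/shares/office01
root = ssh://backup.example.com//srv/backup/office01
batch = true
# stay running and re-sync whenever the file watcher reports a change;
# this covers newly created files as well as modifications
repeat = watch
```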


Adrian Klaver adrian.klaver@aklaver.com [unison-users]
2016-12-05 21:08:54 UTC
Permalink
Post by 'Stull, James' ***@samsondentalpartners.com [unison-users]
Ah, watch, got it. Any recommendations on how to set this up properly for my scenario? I assume I could use it without a separate nightly replication job, or does the watch option only pick up files that are changed, and not new file creation?
Have no idea, I have not used it. Maybe someone else can chime in on
what the best way to use it is.
Post by 'Stull, James' ***@samsondentalpartners.com [unison-users]
Each job has a few hundred MB to over a GB of data to transfer. If I run them one at a time my worry isn't the resources of the central server, it's the bandwidth. The bandwidth really limits what I can do and how long those jobs run.
Still not seeing how running 27 jobs at one time is less bandwidth
intensive than running the jobs sequentially.

Seems when you run them together you are more likely to hit a bandwidth
limit and experience throttling that slows the whole process down.
Running jobs in smaller increments would seem to be more bandwidth
friendly. The question comes down to what you expect from the process. Do you
want all the remote office backups to fall within some time frame of each
other, or does it not matter that the backups may differ in point in time?
--
Adrian Klaver
***@aklaver.com
'Stull, James' jamesstull@samsondentalpartners.com [unison-users]
2016-12-06 15:00:12 UTC
Permalink
I need the replication to happen at least once a day. My offices are open from 9 to 9, so I only have 12 hours for them to complete the copy. If I run the jobs sequentially, the backups need to happen in less than 30 minutes each, and with only a 1.544 Mbps connection I know each will take, at minimum, a couple of hours. This means with 27 locations I can't even complete them all within the 24-hour day, much less the 12-hour night.

I'm not worried about running them all at once. The T1s will limit their rate. The only thing I need to make sure of is that the main system they are syncing with has enough memory/CPU, as you pointed out earlier.

I could run multiple batches of locations at once; then they are not all running at the same time and they could get completed before the next day. I may be able to use Sebastian's script to help me accomplish that. But I'm really hopeful I can use "watch" so it will replicate as soon as new files are created.



--James
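James's arithmetic can be sanity-checked; a rough estimate, assuming the full T1 line rate is available and ignoring protocol overhead:

```python
# Rough transfer-time estimate for the offices' T1 links.
# Assumes the full 1.544 Mbit/s is usable; real throughput will be lower.

T1_BPS = 1.544e6   # T1 line rate in bits per second
GB = 1e9           # decimal gigabyte

def hours_to_transfer(size_bytes: float, line_bps: float = T1_BPS) -> float:
    """Transfer time in hours for size_bytes over a line of line_bps."""
    return size_bytes * 8 / line_bps / 3600

print(round(hours_to_transfer(1 * GB), 1))        # ~1.4 h for 1 GB at one office
print(round(27 * hours_to_transfer(2 * GB), 1))   # ~77.7 h for 27 offices of 2 GB, back to back
```

So a strictly sequential run genuinely cannot fit the 12-hour window, while running the offices in parallel costs little extra, since each office's own T1, not the central server, is the bottleneck.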


shouldbe q931 shouldbeq931@gmail.com [unison-users]
2016-12-12 22:29:25 UTC
Permalink
On Tue, Dec 6, 2016 at 3:00 PM, 'Stull, James'
If you're just replicating in one direction, then why not just run
rsync initiated from each remote office?

Cheers
'Stull, James' jamesstull@samsondentalpartners.com [unison-users]
2016-12-14 21:52:40 UTC
Permalink
All I really need is rsync since, as you say, I only need things copied one way. The issue I run into is I don't know of any good rsync clients for Windows.



--James


Adrian Klaver adrian.klaver@aklaver.com [unison-users]
2016-12-14 21:58:23 UTC
Permalink
On 12/14/2016 01:52 PM, 'Stull, James'
Post by 'Stull, James' ***@samsondentalpartners.com [unison-users]
All I really need is rsync since, as you say, I only need things
copied one way. The issue I run into is I don't know of any good rsync
clients for Windows.
I have used:

http://www.aboutmyip.com/AboutMyXApp/DeltaCopy.jsp
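DeltaCopy is, as I understand it, a Windows wrapper around rsync, so the receiving side would just be an rsync daemon module on the central Linux server; a minimal sketch (module name, path, and network below are placeholders):

```
# /etc/rsyncd.conf on the central Linux server (illustrative values)
[office01]
    path = /srv/backup/office01
    read only = false          # the office pushes into this module
    hosts allow = 10.1.0.0/16  # restrict to the WAN ranges
```

Each office would then push one-way on a schedule with something along the lines of `rsync -a --delete <localshare>/ rsync://backup.example.com/office01/`, which is essentially what DeltaCopy automates.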
--
Adrian Klaver
***@aklaver.com
'Stull, James' jamesstull@samsondentalpartners.com [unison-users]
2016-12-14 22:36:24 UTC
Permalink
Thanks, I will take a look at that.



--James

