I'm going to make some assumptions here and you can let me (us) know which of them are OK and which may run into problems.
First, these are all Unix/Linux servers, right? And you are the admin with root access for all of them? If we don't meet these initial criteria, the problem gets a lot more complicated fast. I am also going to assume that the job needs to run unattended, e.g., in the middle of the night from a cron job.
> I am not understanding from where will this script run?
As usual, the correct answer is "It depends." Do you need it to run from a single master location or can you just fire it off on all 7 of your servers independently? The answer to this question may depend on the quality of your internal network and the capacity of your NAS. If you have no worries about your network and have enough physical spindles on your NAS, then you might even be able to run the same transfers on all of the seven servers at the same time. OTOH, if you have just one large physical drive on your NAS, then you will want to run the jobs sequentially rather than simultaneously. I cannot answer those questions for you.
The simplest case is where you don't really need to worry about collisions and you can just run the jobs independently on all seven of your servers. You could choose to run them all at the same time or, at least as likely, stagger the starting times to reduce the demands on the NAS. If you can just drop the same script onto all of the servers, possibly with some minimal configuration changes, and then put an entry into each server's crontab, the problem becomes nearly trivial.
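For concreteness, the staggered-start version could be as simple as one crontab entry per server. The times and script path below are illustrative assumptions, not a recommendation:

```shell
# Example root crontab entries (times and the script path are assumptions).
# Each entry goes into its OWN server's crontab; staggering the start
# times spreads the load on the NAS.
#
#   ServerOne's crontab:    0 18 * * *   /usr/local/sbin/home-backup.sh
#   ServerTwo's crontab:    45 18 * * *  /usr/local/sbin/home-backup.sh
#   ServerThree's crontab:  30 19 * * *  /usr/local/sbin/home-backup.sh
#
# Edit with 'crontab -e' as root on each server.
```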
However, if you need to make sure that each one is finished before the next one starts, then you need to set up some kind of signalling. As usual, "there is more than one way to do it" (the Perl motto). About the simplest way I can think of to set this up (off the top of my head) would be to create an account on each machine, with minimal privileges and basically no access to anything else, where the previous server can drop a message saying that it is done.
Here's an example. Suppose the first server, ServerOne, starts off at 6pm and you expect it to take about an hour. Note that I'm not saying that you are going to guarantee that it will be done in an hour; I'm just picking something reasonable as a starting point. In that case, the root crontab on ServerTwo might point to a script something like this (very high-level pseudocode), set to start at, say, 6:45, or shortly before you expect ServerOne to finish:
BEGIN
    open log file
    LOOP until you can find the file /home/dummy/ServerOne.isdone
        ...(sleep between checks)
    mount NAS drive
    copy (or better, rsync) /home to NAS
    unmount NAS drive
    log in to dummy@ServerThree and write the file /home/dummy/ServerTwo.isdone
    rm /home/dummy/ServerOne.isdone locally
        ...(so it won't be there too soon tomorrow)
    close log file
END
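Fleshed out as a shell script, ServerTwo's job might look something like the sketch below. It is only a sketch: the marker path, NAS mount point, log file, and the serverthree hostname are all assumptions you would adjust, and it assumes passwordless ssh keys for the dummy account and an /etc/fstab entry for the NAS mount.

```shell
#!/bin/sh
# Sketch of ServerTwo's nightly job. Every name and path here is an
# assumption to adjust: the marker file, the NAS mount point, the log
# file, and the next server's hostname.

DONE_FILE=/home/dummy/ServerOne.isdone
NAS_MOUNT=/mnt/nas
LOG=/var/log/home-backup.log

# Poll for a marker file, sleeping between checks so we don't spin.
wait_for_marker() {
    until [ -f "$1" ]; do
        sleep "${2:-60}"
    done
}

run_backup() {
    wait_for_marker "$DONE_FILE"
    mount "$NAS_MOUNT"                               # assumes an fstab entry
    rsync -a /home/ "$NAS_MOUNT/ServerTwo/home/"     # or cp -a, but see below
    umount "$NAS_MOUNT"
    # Signal the next server (assumes ssh keys for the dummy account).
    ssh dummy@serverthree 'touch /home/dummy/ServerTwo.isdone'
    rm -f "$DONE_FILE"    # so it won't be there too soon tomorrow
}

# Only do the real work when invoked with "run"; that way the file can
# be sourced harmlessly if you want to exercise the pieces by hand.
if [ "${1:-}" = run ]; then
    run_backup >> "$LOG" 2>&1
fi
```

The wait loop is the whole signalling mechanism: each server blocks until its predecessor drops the marker file, then removes the marker after a successful run so it is not mistaken for tomorrow's signal.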
Obviously ServerThree through ServerSeven (and, for that matter, ServerOne) will have essentially the same script, just starting at different times.
This, of course, depends on the assumption that it is ok to log the operations locally for each server. If you want to log all of them in a single location, there are still a number of ways to handle it, including NFS mounting a single location to all of them or scp'ing all the logs to a single location or ...
Do look up rsync, by the way. This looks more or less like a backup scheme, and rsync would be a much less intensive way to back up your /home filesystems than cp.
There are reasons I can think of why this scheme might not work for some situations, but when it comes to setting it up, this is probably about as simple as you can get. In order for one system to log in to another, look up ssh. DON'T use rsh! (Unless your circumstances are right for it; if you don't know whether they are, they almost certainly are not.)
Clearly I've left out lots of details, but I hope that this has given you enough of an idea to get you going.
Best Regards,
Paul