Performance Issues and rsync

I’m upgrading my external hard drive from 1.5 TB to 2.0 TB. Not a big jump, no, but it’s a good opportunity to throw the 1.5 TB in the closet for backup/storage.

My 1.5 TB external was hooked up to my Dell Mini 9 netbook and used as a media server, since I found the netbook far too small (dimensions-wise) to use for programming or school work. As the old external was already attached, I plugged in the new external, formatted it to ext4 using GParted and started transferring the data.

Here’s the command I used:
sudo rsync -avrz /media/external/ /media/external_/

D’oh! Can anyone catch what I did?

My Dell Mini 9, equipped with a less-than-powerful Intel Atom N270, was gobbled up by the command, almost making the netbook unusable.

You see, I’m used to the tar command, tar -xvzf filename.tar.gz, which extracts a compressed archive of files.

For tar, ‘-z’ means

-z, --gzip, --gunzip, --ungzip

meaning that the archive is compressed (or decompressed) with gzip.

For rsync, however, ‘-z’ means

-z, --compress    compress file data during the transfer

Yikes! My little Intel Atom N270 was trying to compress all my files before transferring them! Even on a purely local copy, rsync runs separate sender and receiver processes, and ‘-z’ compresses everything passed between them, so you pay the full compression cost for zero benefit. Neither good nor efficient for a local transfer.
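
If you want to see the cost yourself, a quick (and unscientific) check is to time a copy of a small subset of the data both ways; the paths here are just placeholders:

time rsync -av /media/external/videos/ /tmp/test_plain/
time rsync -avz /media/external/videos/ /tmp/test_compressed/

On a weak CPU, the second run’s user time should balloon even though the same amount of data is copied.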

If you’re transferring megabytes or gigabytes of data over a network (and have the time and CPU power to compress files), then by all means, use ‘-z’ with rsync. But don’t use the ‘-z’ flag when copying data from one local hard drive to another, especially when using a netbook. That mistake is reserved for people by the name of Brett Alton.
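
For contrast, here is the sort of transfer where ‘-z’ earns its keep; user@remotehost and the destination path are placeholders:

rsync -avz /media/external/ user@remotehost:/backup/external/

Over a slow link, the CPU time spent compressing usually costs less than the bandwidth it saves; over a fast LAN, or locally, it usually doesn’t.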

So I switched the transfer over to my new laptop, sporting a Core i5 430M, and it moved the 1.5 TB of data without breaking a sweat.

CPU breakdown:

CPU                  CPU usage   Cores/Threads   Speed      Cache (L2/L3)     rsync flags
Intel Atom N270      97%         1/2             1.6 GHz    512 kB / none     -avrz
Intel Core i5 430M   3%          2/4             2.26 GHz   2×256 kB / 3 MB   -av
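
If you want to watch these numbers yourself while a transfer runs, something like the following works (pidstat comes from the sysstat package; plain top will do in a pinch):

pidstat -p "$(pgrep -x rsync | head -n 1)" 2

This reports the CPU usage of the first rsync process every 2 seconds.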

Here is the suggested command to use (changing the path names, of course). Note that ‘-a’ already implies ‘-r’, so the ‘-r’ in my original command was redundant anyway:
sudo rsync -av /media/external/ /media/external_/
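
For long transfers, ‘-h’ (human-readable sizes) and ‘--progress’ are handy additions to the same command:

sudo rsync -avh --progress /media/external/ /media/external_/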

Good luck!

  • Biophysics

    btw I read somewhere that the fastest way of moving data is

    cd /oldplace
    tar -cf - source | tar -C destination/ -xvf -

    • Anonymous

      That’s interesting, but isn’t that as effective as cp -p olddir/ newdir/ ??

    • Guest

      Not necessarily: tar will properly handle special devices and files such as /dev, in case you’re moving an entire root filesystem to a subdirectory or vice-versa. cp will not handle this situation properly. One possibly easier way to do it remotely is to tar -c the entire filesystem tree (with or without -z) as you describe above and then scp the tarball.
      Also… if you know no files are different and/or it’s a completely new copy, scp is faster than rsync over remote links, and cp is faster than either of them (of course!)
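
      And if you’d rather stream than stage a tarball, the same idea works over ssh (user@remotehost and the paths are placeholders):

      tar -czf - -C /oldplace . | ssh user@remotehost 'tar -xzf - -C /newplace'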

  • Rich

    Yo, you should note that over most local networks, compression doesn’t help at all. It depends on the speed of your network and your CPUs, of course, but I know that on a 1 Gbps network with >2 GHz, >2-core CPUs it doesn’t help.

    • Anonymous

      That’s very true. Any time you’re working over a local network, you don’t need compression. The only time you /should use/ (not need) compression is over a WAN.