A command line utility to visualize how fast a file is growing?

I want to grok how fast a particular file is growing.

I could do

watch ls -l file

And deduce this information from the rate of change.

Is there something similar that would directly output the rate of growth of the file over time?



Method 1

tail -f file | pv > /dev/null

But beware that this involves actually reading the file, so it may consume more resources than something that watches only the file size.

Method 2

progress (Coreutils progress viewer) or recent versions of pv can watch a file descriptor of a particular process. So you can do:

lsof your-file

to see which process ($pid) is writing to it and on which file descriptor ($fd), then run either:

pv -d "$pid:$fd"

or:

progress -mp "$pid"
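The manual lsof step can be scripted. Here is a hedged sketch, assuming lsof's `-F` field-output mode, in which `p` lines carry the PID and `f` lines the descriptor, possibly with a trailing mode letter such as `w` for write:

```shell
# parse_lsof_pf: read `lsof -F pf` output on stdin and print "PID FD" for the
# first descriptor found, stripping any trailing mode letter (e.g. "3w" -> "3").
parse_lsof_pf() {
    awk '/^p/ { pid = substr($0, 2) }
         /^f/ { fd = substr($0, 2); sub(/[a-z]+$/, "", fd); print pid, fd; exit }'
}

# Intended use (sketch, not tested here): attach pv to whatever writes your-file.
# lsof -F pf -- your-file | parse_lsof_pf | { read pid fd; pv -d "$pid:$fd"; }
```

Note that lsof may list several processes or descriptors for one file; this sketch simply takes the first.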

Method 3

I have a little perl script that I put in my bash environment as a function:

fileSizeChange <file> [seconds]

Sleep seconds defaults to 1.

fileSizeChange() {
  perl -e '
  $file = shift; die "no file [$file]" unless -f $file;
  $sleep = shift; $sleep = 1 unless $sleep =~ /^[0-9]+$/;
  $format = "%0.2f %0.2f\n";
  while (1) {
    $size = ((stat($file))[7]);
    $change = $size - $lastsize;
    printf $format, $size/1024/1024, $change/1024/1024/$sleep;
    sleep $sleep;
    $lastsize = $size;
  }' "$1" "$2"
}

Method 4

The following shell function monitors a file or directory and shows an estimate of throughput / write speed. Execute with monitorio <target_file_or_directory>. If your system doesn’t have du, which could be the case if you are monitoring io throughput on an embedded system, then you can use ls instead (see comment in code)

monitorio () {
# show write speed for file or directory
    target="$1"
    interval=10
    size=$(du -ks "$target" | awk '{print $1}')
    firstrun=1
    echo ""
    while true; do
        prevsize=$size
        size=$(du -ks "$target" | awk '{print $1}')
        #size=$(ls -l "$1" | awk '{print $5/1024}')
        kb=$((size - prevsize))
        kbmin=$((kb * (60 / interval)))
        kbhour=$((kbmin * 60))
        # exit if this is not the first loop & file size has not changed
        if [ $firstrun -ne 1 ] && [ $kb -eq 0 ]; then break; fi
        echo -e "\e[1A $target changed ${kb}KB ${kbmin}KB/min ${kbhour}KB/hour size: ${size}KB"
        firstrun=0
        sleep $interval
    done
}
example use:

user@host:~$ dd if=/dev/zero of=/tmp/zero bs=1 count=50000000 &
user@host:~$ monitorio /tmp/zero
/tmp/zero changed 4KB 24KB/min 1440KB/hour size: 4164KB
/tmp/zero changed 9168KB 55008KB/min 3300480KB/hour size: 13332KB
/tmp/zero changed 9276KB 55656KB/min 3339360KB/hour size: 22608KB
/tmp/zero changed 8856KB 53136KB/min 3188160KB/hour size: 31464KB
user@host:~$ killall dd; rm /tmp/zero

Method 5

tail -f -c 1 file | pv > /dev/null

A variation on Method 1: the -c 1 means start from the last byte of the file, which avoids having to read in the last ten lines first (which can take a while on binary files).
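If neither pv nor progress is available, the same idea can be approximated with nothing but stat in a loop. A minimal sketch, assuming GNU stat's `-c %s` for the size in bytes (on BSD/macOS the equivalent is `stat -f %z`); the function names here are illustrative, not from any of the answers above:

```shell
# growth_sample FILE PREV_SIZE INTERVAL
# Prints "<current-size-bytes> <bytes-per-second>" for one polling step.
growth_sample() {
    cur=$(stat -c %s "$1")
    echo "$cur $(( (cur - $2) / $3 ))"
}

# watch_growth FILE [INTERVAL]: poll forever, printing size and growth rate.
watch_growth() {
    file=$1 interval=${2:-1}
    prev=$(stat -c %s "$file")
    while sleep "$interval"; do
        sample=$(growth_sample "$file" "$prev" "$interval")
        prev=${sample%% *}
        echo "$file: ${sample#* } B/s (size: $prev bytes)"
    done
}
```

Unlike the tail | pv approach, this never reads the file's contents, only its metadata, so it is cheap even for very large files.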

All methods were sourced from stackoverflow.com or stackexchange.com and are licensed under cc by-sa 2.5, cc by-sa 3.0 and cc by-sa 4.0.
