Useful One Liners

From ThomasAdam:

Rename all files in a directory to lower case

mmv '*' '#l1'

Or, for Debian users, assuming all files are in the same directory:

rename 'y/A-Z/a-z/' *

and assuming that they are in subdirectories under the current directory:

find -type f | xargs rename 'y/A-Z/a-z/'

(Note that the Debian version of “rename” is not the same as the version shipped with many other distributions. The above won’t work on Red Hat, for example. It is perl based.)

From ThomasAdam:

This example also is potentially dangerous, as it assumes *all* files in every subdirectory below the one you invoke it from are to be renamed. To change just those files in the current working directory:

find . -maxdepth 1 -type f -exec rename 'y/A-Z/a-z/' {} \;

or to be consistent with Hugo’s original style:

find -maxdepth 1 -type f | xargs rename 'y/A-Z/a-z/'

There is no difference in the result between the two commands, but xargs batches many file names into each invocation of rename, whereas -exec ... \; runs a separate rename process per file.

Of course, what happens when a file name — or indeed a directory name — contains a space? In instances such as this, you *could* do:

for i in *; do rename 's/some/expression/' "$i"; done

But this is cumbersome, since it invokes a new ‘rename’ process each time the loop is iterated. With find though, one can do:

find . -maxdepth 1 -type f -print0 | xargs -0 rename 'y/A-Z/a-z/'

The -print0 option to find separates the file names with a null character ('\0') instead of the usual newline, so names containing spaces (or even newlines) pass through intact. The matching -0 option tells xargs to split its input on those null characters rather than on whitespace.
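As a quick illustration, here is a self-contained sketch using a throwaway directory and a made-up filename; tr stands in for rename so that nothing non-standard is needed:

```shell
# Create a scratch directory containing a file whose name has a space.
tmpdir=$(mktemp -d)
touch "$tmpdir/MY FILE.TXT"

# Null-delimited pipeline: the space in the name survives intact.
# Each file name arrives in the sh -c script as $0.
find "$tmpdir" -maxdepth 1 -type f -print0 |
  xargs -0 -I{} sh -c 'mv "$0" "$(dirname "$0")/$(basename "$0" | tr "[:upper:]" "[:lower:]")"' {}
```

A plain `find | xargs` pipeline would have split "MY FILE.TXT" into two separate arguments and failed.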

From: DanPope

There is also a perl script called chcase which can do pretty much all of these things even more simply:

chcase *                    # change all files in current directory to lowercase
chcase -u *                 # change all files in current directory to uppercase
chcase -r *                 # also rename directories in the current directory
chcase -r '*.JPG'           # change all jpegs to lowercase recursively
chcase -x 's/foo/bar/' *    # run a regular expression on all files

Renaming upper-case directories to lowercase

From: ThomasAdam:

Following on from the example above, often it is the case that the directories are also in uppercase. Ugh. This is horrid, and not at all Unix-like. The following will take care of that:

cd /somewhere
find . -depth -type d -name '*[A-Z]*' -print | while read dir; do
    dirn="$(dirname "$dir")"
    basen="$(basename "$dir")"
    newbasen="$(echo "$basen" | tr '[:upper:]' '[:lower:]')"
    mv "$dir" "$dirn/$newbasen"
done


Use at your own risk, as it is imperfect. It will fail if the directory names contain spaces or other unusual characters. For the inquisitive, you might be wondering why I am piping the output to a shell loop. Quite simply, it is because the ‘-exec’ option to find (which I would otherwise have used) does not run its command through a shell, and a shell is exactly what we need here to do the dirname/basename/tr manipulation.
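Here is the same loop as a self-contained sketch, run against a throwaway tree (the FOO/BAR names are invented for the demonstration):

```shell
# Build a scratch tree with uppercase directory names.
root=$(mktemp -d)
mkdir -p "$root/work/FOO/BAR"

# -depth reports children before their parents, so the innermost
# directories are renamed first and the parent paths stay valid.
find "$root/work" -depth -type d -name '*[A-Z]*' -print | while read -r dir; do
    dirn="$(dirname "$dir")"
    basen="$(basename "$dir")"
    newbasen="$(echo "$basen" | tr '[:upper:]' '[:lower:]')"
    mv "$dir" "$dirn/$newbasen"
done
```

Without -depth, FOO would be renamed to foo before find descended into it, and the recorded path FOO/BAR would no longer exist.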

Tracking down large files

From: ThomasAdam:

You may find that your filesystem is full, in which case the following shows how much space each entry in a directory consumes:

cd /directory && du -sk *

Here’s another way:

find /some_directory -size +5000k -ls 

which will list files over 5000KB (roughly 5MB). Change the size as appropriate.
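Piping du's output through sort makes the culprit easy to spot, since the largest entries come last. A sketch against a scratch directory (the file names here are invented):

```shell
tmpdir=$(mktemp -d)
# Two files of very different sizes.
dd if=/dev/zero of="$tmpdir/big"   bs=1024 count=64 2>/dev/null
dd if=/dev/zero of="$tmpdir/small" bs=1024 count=1  2>/dev/null

# Sort du's per-entry totals numerically; the biggest entry comes last.
biggest=$(cd "$tmpdir" && du -sk -- * | sort -n | tail -1)
```

For real use, `cd /directory && du -sk * | sort -n` is usually all you need.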

Clock Skew during make?

From: ThomasAdam:

Sometimes, when you are compiling an application from source (this may be more applicable to Gentoo users) you may well get a message like:

make: warning:  Clock skew detected.  Your build may be incomplete.

This means that the files you are using have a timestamp that is in the future, relative to the time and date that your system clock is set to. This can be avoided at the untarring stage, by specifying the “m” flag to tar, for instance:

$ tar xzvfm ./some_file.tar.gz

which tells tar *not* to restore the archived modification times of the files untarred; they are stamped with the current time instead (often tar will report “modification time set in the future”, which is an indication that you should use the “m” flag). Often though one has already unpacked without it, and so the following command will get things back on track:

find . -type f -print0 | xargs -0 touch -c && make
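A quick sketch of the fix, simulating the skew with a deliberately future-dated file (GNU touch's -d option is assumed for the set-up step, with a plain touch as fallback):

```shell
tmpdir=$(mktemp -d)
# Simulate clock skew: one object file stamped a day in the future.
touch -d 'tomorrow' "$tmpdir/stale.o" 2>/dev/null || touch "$tmpdir/stale.o"

# Re-stamp every file with the current time, as make expects.
find "$tmpdir" -type f | xargs touch -c

# A freshly created reference file must now be at least as new.
touch "$tmpdir/ref"
```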

Finding files containing a string in a directory hierarchy

From ThomasAdam:

In this example, all .php files are being checked for the string mysql:

find . -name '*.php' -type f | xargs grep -H 'mysql'

As a side-note, one interesting feature of find that isn’t that well documented is that for a command such as:

find . -type f -exec grep -l "$search" {} \;

…it’s possible to tell find to fit as many file names as possible into each grep invocation before running it. Somewhat faster, and supported by GNU find (the form is also specified by POSIX), is to replace the ‘\;’ with a ‘+’:

find . -type f -exec grep -l "$search" {} +

From AndyRansom:

In this example, line numbers are returned (using -n) and the search is case-insensitive (using -i):

find . -name '*.php' -type f | xargs grep -n -i 'mysql'

Bulk image resize

From DavidRamsden:

In this example, all jpg files in the current directory only will be resized to 800×600 and placed in a directory called resized:

find . -maxdepth 1 -name '*.jpg' -type f -exec convert -resize 800x600 {} resized/{} \;

In this example, all jpg files in the directory and sub-directories will be resized to 800×600 and placed in a directory called ../resized:

find . -follow -name '*.jpg' -type f -exec convert -resize 800x600 {} ../resized/{} \;

Very useful for when you return from a day trip with your digital camera full of large high-quality images!

convert is a program that’s part of the ImageMagick suite.

Testing Samba Share Access

From StephenDavies:

Having problems with access to shares on your network? The testparm command can do more than just list your shares.

testparm /etc/samba/smb.conf <hostname> <host-IP>

This will check, against the hosts allow/hosts deny rules in your configuration, whether the named host with the given IP address would be granted access to your shares. Especially useful if you are running a firewall.

Alternative to the find command

From StephenDavies:

Instead of running find from /, there is an easier way (this may differ according to your distro). On Red Hat, just after the install, do the following:

cd /etc/cron.daily; ./slocate.cron &

Then the locate command works:

locate <string> | grep <substring>

It’s fast and easy, and much less demanding on the system than find. Then,

crontab -e

and add

59 23 * * * /etc/cron.daily/slocate.cron 

There are other good scripts such as logrotate, etc. also in the /etc/cron* directories

Selecting random lines from a file

From: ThomasAdam

If you wanted to print a random line from a file, then the following works:

FILE="/some/file_name"; nlines=$(wc -l < "$FILE"); IFS=$'\n'; array=($(<"$FILE")); echo "${array[$((RANDOM % nlines))]}"

Here, ‘nlines’ holds the total number of lines in the file. The file is read into an array (note the use of IFS — setting it to $'\n' splits the input into array elements on newlines rather than on every run of whitespace). Then, once the array has been populated, a random element is printed from it.
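On systems with GNU coreutils, shuf does the same job without the array juggling (the sample file below is invented for the demonstration):

```shell
# A small sample file standing in for /some/file_name.
FILE=$(mktemp)
printf 'first\nsecond\nthird\n' > "$FILE"

# Pick one line uniformly at random.
line=$(shuf -n 1 "$FILE")
```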

Printing a block of text from a file

From: ThomasAdam

Sometimes, there are situations where one might want to extract a block of text (be it a paragraph, or somesuch) possibly between a pair of delimiters. The traditional way might be to use sed, as in:

sed -n -e "${start},${end}p"

… where $start and $end are line numbers. So for instance:

sed -n -e '1,45p' < ./myfile.txt

would print out the chunk of text between line numbers 1 and 45 inclusive (sed numbers lines from 1). But that’s not much use if what you actually have is a delimiter. So you can use a pair of regexps:

sed -n -e '/start/,/end/p' < ./myfile.txt

This searches the file for the start and end regexps and prints the lines up to and including both. You should be careful if you get unexpected results when using sed. Remember that sed is a Stream EDitor: it doesn’t look ahead to see where the /end/ delimiter might be (if it did, it wouldn’t be a stream editor). Choose your regexps with caution.
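One consequence of this streaming behaviour is that the range restarts after each /end/: every start..end block in the file is printed, not just the first. A quick check with invented markers:

```shell
# Two delimited blocks in one stream; sed prints both of them.
blocks=$(printf 'a\nSTART\nb\nEND\nc\nSTART\nd\nEND\n' |
    sed -n '/START/,/END/p')
echo "$blocks"
```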

You can use awk to accomplish the same thing:

awk '/start/,/end/' ./myfile.txt

If you care for it, you can also use the shell, effectively glob between two shell-patterns:

while read x; do case $x in $1) flag=true;; $2) flag="";; esac; [ -n "$flag" ] && echo "$x"; done < ./myfile.txt

So you could save that script above and run it as:

./myscript '[b]egin' '[e]nd'

Remember though that in doing so, you can only use the pattern-matching facilities inherent with bourne-shells (if you’re using ksh or zsh, you have some additional flexibility, admittedly. At least ksh handles globbing in a saner fashion than bash does). Consult the relevant man page for your shell, if you are unsure.

Contextual std{out,err} (advanced)

From: ThomasAdam

Often, if you’re compiling a program and it errors, you might do something like:

foo 2>&1 | less

… which redirects stderr to stdout. That’s fine, but it won’t tell you where the errors occurred with respect to the normal output. So some other means are necessary. What needs to be done is to redirect stdout to one file, stderr to another, and both of them to a third, combined file.

For this to happen, one basic concept needs to be understood: shell redirection is nothing more than manipulating file descriptors. File descriptors 1 and 2 are well known and used actively, but there is nothing stopping you from using others, as in the command below, which achieves the task:

( (foo 2>&1 1>&3 | tee ~/error.log) 3>&1 1>&2 | tee ~/out.log) > ~/stdouterr.log 2>&1

All that is happening here is that file descriptor three is being used to carry stdout past the first tee so that it isn’t polluted at each stage. tee allows us both to see the output and to siphon copies off into the individual files. If we had used plain redirection, there would have been a race condition in waiting for the commands to run, despite the explicit subshell created.
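With a toy stand-in for foo (two echo statements instead of a real build), the three log files come out as expected; the /tmp log paths here are placeholders:

```shell
# A stand-in for the failing build: one line each to stdout and stderr.
foo() { echo "building"; echo "error: oops" >&2; }

# fd 3 carries the real stdout past the first tee: stderr goes through
# tee to error.log, stdout through tee to out.log, and both tees'
# output lands in the combined stdouterr.log.
( (foo 2>&1 1>&3 | tee /tmp/error.log) 3>&1 1>&2 | tee /tmp/out.log) \
    > /tmp/stdouterr.log 2>&1
```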

You can achieve a similar split (though without the combined log) via process substitution:

foo > >(tee ~/out.log) 2> >(tee ~/error.log >&2)

Correct all files that have a specific incorrect file extension (advanced)

From: SimonCapstick

So you have thousands of word processing files (could be any other file format) that must all be in the same format, in this example Rich Text Format. However someone has saved some of the files in another format, e.g. MS Word format, but still used the .rtf extension.

Here’s a one liner to fix all those incorrectly named Word documents to .doc . It can cope with spaces and apostrophes in filenames.

Important: Remove the -vn after rename to actually perform the renaming. The -vn makes it a test run and will just print what it would have renamed.

find /path-to-my-files/ -maxdepth 1 -iname '*.rtf' -print0 | xargs -0 -I{} sh -c 'file -b "$0" | grep -qi "Microsoft Office Document" && rename -vn "s/\.rtf$/.doc/" "$0"' {}

Notes: to customise this one-liner, simply change the parts that mention ‘*.rtf’, ‘Microsoft Office Document’, ‘.rtf’ and ‘.doc’.

The above uses the rename command as supplied with the Debian Linux distribution.

Many different file formats can be detected by the file command, not just ‘Microsoft Office Document’ !

The grep pattern should really be more specific in case a file is called “Microsoft Office Document.rtf”

Change maxdepth to a higher number to recurse into sub directories.
