Miscellaneous commands

From Christoph's Personal Wiki

Revision as of 08:09, 11 April 2007

This article will present various miscellaneous commands tested in Linux (running SuSE 10.1 and Mandriva). Most of these will be simple command-line tools. Eventually, many of these will have their own article. For now, they are presented as-is with absolutely no guarantee and zero responsibility on my part if they cause loss of information or files. Use at your own risk.

Encrypt CDs/DVDs

mkisofs -r backup | aespipe -e aes256 > backup.iso
modprobe aes         # as root
modprobe cryptoloop  # as root

  • Mount the ISO (see: ISO Images):

mount -t iso9660 backup.iso /mnt/iso -o loop=/dev/loop0,encryption=aes256

  • Mount the burnt CD/DVD:

mount -t iso9660 /dev/cdrom /mnt/iso -o loop=/dev/loop0,encryption=aes256

See: http://loop-aes.sourceforge.net/

Copying files through tar filter

(cd /path/dir/from && tar -cvf - .) | (cd /path/dir/to && tar -xvf -)
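
An end-to-end check with throw-away directories (the names from/ and to/ are just for the demonstration):

```shell
# Set up a source tree and an empty destination.
mkdir -p from to
echo "hello" > from/file.txt

# Copy the tree through the tar filter, preserving permissions and timestamps.
(cd from && tar -cf - .) | (cd to && tar -xf -)

cat to/file.txt   # → hello
```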

Google tricks

  • Finding files:
-inurl:(htm|html|php) intitle:"index of" +"last modified" 
+"parent directory" +description +size +(jpg|png) "Lion"

Note: See Google Search Manual for more tricks.

Bulk image resize

If you are like me and have a high resolution digital camera, it is often necessary to resize the images before emailing them to friends and family. It is, of course, possible to manually resize them using Adobe Photoshop, The Gimp, or any other image editing programme. However, it is possible to automate this task using simple command line tools.

For example, say you want to resize all of the JPEG images in your current directory to 800x600 and place them in a sub-directory called "resized" (create it first with mkdir resized). Then you would execute the following command:

find . -maxdepth 1 -name '*.jpg' -type f -exec convert -resize 800x600 {} resized/{} \;

It is also possible to have the above command run recursively through a directory and its sub-directories. Note that find substitutes each file's relative path for {}, so a matching directory structure must already exist under ../resized:

find . -follow -name '*.jpg' -type f -exec convert -resize 800x600 {} ../resized/{} \;

Note that the programme convert is part of the ImageMagick suite and you will need to have it installed to use the above commands (it is, by default, in SuSE Linux).

Tracking down large files

Sometimes it is necessary to find files over a certain size and it can be somewhat tedious ls-ing through your many directories. The following command will list only those files over a certain size and only within the specified directory (and sub-directories):

find some_directory/ -size +2000k -ls

which will only list files over 2000 KB (roughly 2 MB).
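
A quick sanity check with a couple of throw-away files (the directory name demo/ is just for the demonstration):

```shell
# Create a scratch directory with one small and one large file.
mkdir -p demo
head -c 10240   /dev/zero > demo/small.dat   # 10 KB
head -c 3000000 /dev/zero > demo/large.dat   # ~3 MB

# Only the file over 2000 KB is listed.
find demo/ -size +2000k -ls
```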

Finding files containing a string in a directory hierarchy

In this example, all .php files will be searched for the string "MySQL" (case-insensitive with -i) and the line numbers will also be returned (using -n):

find . -name '*.php' -type f | xargs grep -n -i 'MySQL'
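
A minimal run against throw-away files (the filenames under src/ are invented for the demonstration); -print0 and xargs -0 keep filenames containing spaces intact:

```shell
mkdir -p src
printf '<?php // connect to MySQL\n' > src/db.php
printf '<?php // no database here\n' > src/util.php

# -i matches "MySQL" case-insensitively; -n prints line numbers.
find src -name '*.php' -type f -print0 | xargs -0 grep -n -i 'mysql'
# → src/db.php:1:<?php // connect to MySQL
```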

Selecting random lines from a file

This example could be used for printing random quotes from a file (note: the following should be issued as a single command):

FILE="/some/file_name"; 
nlines=$(wc -l < "$FILE"); 
IFS=$'\n'; 
array=($(<"$FILE")); 
echo "${array[$((RANDOM%nlines))]}"

Here, nlines holds the total number of lines in the file. The file is read into an array (note the use of IFS — this splits the lines based on '\n'). Then, once the array has been populated, print a random line from it.
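
On systems with GNU coreutils, shuf achieves the same thing in one line (quotes.txt below is just an example file):

```shell
# Create an example quotes file.
printf 'first quote\nsecond quote\nthird quote\n' > quotes.txt

# Print one line chosen at random.
shuf -n 1 quotes.txt
```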

Printing a block of text from a file

Say you have a file, foo, and it contains the following lines (note the capital letters and the full stop in line six):

one blah blah
Two blah blah
three blah blah 3, 4
Four blah blah
five blah blah 5, 6
six blah blah.

If you only want to print out lines 3, 4, and 5, execute the following command:

awk "/three/,/five/" < foo

If you only want to print out lines starting with a capital "F", execute the following command:

awk "/^F/" < foo

If you only want to print out lines ending in a full stop, execute the following command:

awk "/\.$/" < foo

Finally, if you only want to print out lines containing the digit "5" or "6", execute the following command:

awk "/[5-6]/" < foo
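
Reproducing the example file and the range query above:

```shell
# Re-create the example file 'foo'.
cat > foo <<'EOF'
one blah blah
Two blah blah
three blah blah 3, 4
Four blah blah
five blah blah 5, 6
six blah blah.
EOF

# Print everything from the line matching /three/ through the line matching /five/:
awk '/three/,/five/' < foo
# → three blah blah 3, 4
#   Four blah blah
#   five blah blah 5, 6
```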

Multiple unzip

The following command will unzip all zip files in the current directory (looping over the shell glob, rather than parsing ls output, keeps filenames with spaces intact):

for i in *.zip; do unzip "$i"; done

Linux I/O redirection

  • The following command saves stdout and stderr to the files "out.txt" and "err.txt", respectively:
./cmd 1>out.txt 2>err.txt
  • The following command appends stdout and stderr to the files "out.txt" and "err.txt", respectively:
./cmd 1>>out.txt 2>>err.txt
  • The following command functions similarly to the above two commands, but also copies stdout and stderr to the files "stdout.txt" and "stderr.txt", respectively:
(((./cmd | tee stdout.txt) 3>&1 1>&2 2>&3\
| tee stderr.txt) 3>&1 1>&2 2>&3) 1>out.txt 2>err.txt

Note: The above should be entered as one command (i.e., the line ending with a backslash is only continued on the next line because of the formatting constraints of this page).

Also note that Linux uses the following redirection codes/handles (see: redirection):

  • 0 = stdin
  • 1 = stdout
  • 2 = stderr
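
A quick way to see handles 1 and 2 in action (the msg function below is just a stand-in for any command that writes to both streams):

```shell
# A command that writes to both streams.
msg() { echo "to stdout"; echo "to stderr" >&2; }

# Send each stream to its own file (descriptor 1 = stdout, 2 = stderr).
msg 1>out.txt 2>err.txt

cat out.txt   # → to stdout
cat err.txt   # → to stderr
```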

Count number of n-word lengths

for i in `seq 1 32` {     # zsh-style loop syntax
     egrep '^.{'$i'}$' /usr/share/dict/words | wc -l
}
# OR, in Bourne-style shells (e.g. bash):
for i in `seq 1 32`; do egrep '^.{'$i'}$' /usr/share/dict/words | wc -l; done

Then paste the counts together with their word lengths (assuming the loop's output was saved to tmp_n-word_lengths.dat; or just print both in the above for-loop):

seq 1 32 | paste - tmp_n-word_lengths.dat >n-word_lengths.dat
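
Scanning the dictionary 32 times is slow; a single awk pass produces the same table. Sketched here against a small stand-in word list, since /usr/share/dict/words may not be present on every system:

```shell
# Stand-in word list; use /usr/share/dict/words on a real system.
printf 'a\nto\ncat\ndog\nbird\n' > words.txt

# Tally words of each length in one pass, then print lengths 1-32.
awk '{ n[length($0)]++ } END { for (l = 1; l <= 32; l++) print l, n[l]+0 }' words.txt
# → 1 1
#   2 1
#   3 2
#   4 1
#   (then zeros up to 32)
```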

Edit a file from the CLI using Perl

perl -p -i.bak -e 's|before|after|' filename  # backup of original file 'filename.bak'
perl -p -i -e 's|before|after|ig' filename
  • -p : Execute the command for every line of the file.
  • -i : Invokes in-place editing. If a backup extension is provided (e.g. -i.bak), a backup of the original file is made with that extension.
  • -e : Invoke the command (expression) that follows.

Command-line calculator

% echo "111111111 * 111111111" | bc
  12345678987654321
% echo -e "sqrt(25)\nquit\n" | bc -q -i
  5
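
If bc is not installed, simple integer arithmetic can be done with the shell itself via arithmetic expansion (integer-only, so there is no sqrt):

```shell
echo $((111111111 * 111111111))   # → 12345678987654321
```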

Split large files into small pieces

% ls -lh mylargefile
  -rw-r--r--  1 foo users 800M Feb 18 11:17 mylargefile
% split -b 2M mylargefile mylargefile_
% ls -lh mylargefile_* | head -3
  -rw-r--r--  1 foo users 2.0M Feb 18 11:19 mylargefile_aa
  -rw-r--r--  1 foo users 2.0M Feb 18 11:19 mylargefile_ab
  -rw-r--r--  1 foo users 2.0M Feb 18 11:19 mylargefile_ac
  ...
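
The pieces can be re-assembled with cat. A small end-to-end check, using a throw-away 5 KB file and 2 KB pieces instead of megabytes:

```shell
# Create a 5 KB test file and split it into 2 KB pieces.
head -c 5120 /dev/urandom > bigfile
split -b 2k bigfile bigfile_

# Re-assemble and verify the result is byte-for-byte identical.
cat bigfile_* > rejoined
cmp bigfile rejoined && echo "files match"
# → files match
```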

Save man pages as plain text

% man grep | col -b > grep.txt

Stego trick using built-in Linux utilities

cat foo.zip >> bar.gif  # "hides" 'foo.zip' inside 'bar.gif'
xv bar.gif     # views just fine
unzip bar.gif  # extracts 'foo.zip'

Misc

Display the total number of files in the current working directory and all of its subdirectories:

find . -type f -print | wc -l

Display a list of directories and how much space they consume, sorted from the largest to the smallest:

du | sort -nr

Format text for printing:

cat poorly_formatted_report.txt | fmt | pr | lpr
cat unsorted_list_with_dupes.txt | sort | uniq | pr | lpr

Delete files older than n days:

find /path/to/files* -mtime +5 -exec rm {} \;
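
Before deleting anything, it is worth previewing what -mtime +5 matches (files modified more than 5 days ago). The check below uses GNU touch -d to fabricate an old file; the scratch/ directory is just for the demonstration:

```shell
mkdir -p scratch
touch scratch/new.txt
touch -d '10 days ago' scratch/old.txt   # GNU touch

# Preview: only the file older than 5 days is listed.
find scratch/ -type f -mtime +5
# → scratch/old.txt
```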

See also

External links