Create Floppy Disk Images from within Linux

Submitted by jbreland on Sat, 06/05/2010 - 20:49

It's possible to create floppy disk images (IMG files) from within Linux using native Linux utilities. Although you most likely won't have a very frequent need for this these days, one place where it can come in handy is when dealing with virtual machines. Virtualization software such as VirtualBox and VMware Player can mount virtual floppy images and present them to guest machines as physical disks, just as they can mount CD-ROM ISO images and present them as physical CDs.

Now again, there probably isn't a very widespread need to do this, but in my case I needed to be able to create floppy disk images for my Windows installation CD. I use a heavily customized installation CD with an answer file to automate Windows installation. Unfortunately, Windows XP is only capable of reading answer files from the CD itself (which doesn't work for me because I need to be able to change the file) or from a floppy disk. Newer versions of Windows, I believe, can read from USB drives, but as I only (and infrequently) run Windows inside a virtual machine, I don't have any great need to upgrade. Being able to easily generate floppy disk images containing updated answer files, etc. has been a huge help compared to keeping up with physical floppy disks, especially since my current desktop no longer supports a floppy drive. Now, I just point VirtualBox to the appropriate IMG files, and when I boot Windows (or the Windows installer) it'll see it as a normal floppy drive. Very handy.

In order to create floppy disk images, you'll need a copy of dosfstools installed. It should be available in most package repositories. Once installed, the following command does all the magic:

mkfs.vfat -C "floppy.img" 1440

You now have an empty, but valid, floppy disk image. In order to copy files to the image, you need to mount the image using the loop device:

sudo mount -o loop,uid=$UID -t vfat floppy.img /mnt/floppy

Note that the mount command must be run as root or via sudo; the uid option makes the mount point owned by the current user rather than root, so that you have permission to copy files into it. (If the mount point, e.g. /mnt/floppy, doesn't already exist, create it first.)

After you're finished copying files, unmount the image and you're done. You can now attach it to your emulator of choice as a floppy disk image. W00t.
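Putting those steps together, the complete manual round trip looks something like this. This is just a sketch of the commands discussed above; winnt.sif stands in for whatever file you want on the disk, and the mount/umount steps need root:

```shell
mkfs.vfat -C floppy.img 1440      # blank 1.44 MB image (size given in 1 KB blocks)
sudo mkdir -p /mnt/floppy         # create the mount point if needed
sudo mount -o loop,uid=$UID -t vfat floppy.img /mnt/floppy
cp winnt.sif /mnt/floppy/         # copy in your files (winnt.sif is just an example)
sudo umount /mnt/floppy           # floppy.img is now ready to attach to a VM
```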

To make things even easier, the following script automates the entire process; just pass it the directory containing all of the files you want copied to the floppy disk and it'll do the rest.

#!/bin/bash
 
# Setup environment
FORMAT=$(which mkfs.vfat 2>/dev/null)
MOUNT=$(which mount 2>/dev/null)
TMP='/tmp'
shopt -s dotglob
 
# Verify binaries exist
MISSING=''
[ ! -e "$FORMAT" ] && MISSING+='mkfs.vfat, '
[ ! -e "$MOUNT" ] && MISSING+='mount, '
if [ -n "$MISSING" ]; then
   echo "Error: cannot find the following binaries: ${MISSING%%, }"
   exit 1
fi
 
# Verify arguments
if [ ! -d "$1" ]; then
   echo "Error: You must specify a directory containing the floppy disk files"
   exit 1
else
   DISK=$(basename "${1}")
   IMG="${TMP}/${DISK}.img"
   TEMP="${TMP}/temp_${DISK}"
fi
 
# Load loopback module if necessary
if [ ! -e /dev/loop0 ]; then
   sudo modprobe loop
   sleep 1
fi
 
# Create disk image
${FORMAT} -C "${IMG}" 1440
mkdir "${TEMP}"
sudo $MOUNT -o loop,uid=$UID -t vfat "${IMG}" "${TEMP}"
cp -f "${1}"/* "${TEMP}"/
sudo umount "${TEMP}"
rmdir "${TEMP}"
mv "${IMG}" .

Universal Extractor 1.6.1 Released

Submitted by jbreland on Wed, 05/12/2010 - 03:15

After a nearly two year hiatus, I finally got around to updating Universal Extractor. This release focuses heavily on bug fixes, reliability improvements, and component updates, so the "new features" list is rather short. It is, however, an important update and I recommend all Universal Extractor users upgrade when they get the chance. It also includes several new and updated translations. Please check out the changelog for all the details.

For more information:
Universal Extractor home page and downloads
Universal Extractor ChangeLog
Universal Extractor feedback and support

Quick Domain Name / IP Address / MX Record Lookup Functions

Submitted by jbreland on Fri, 05/07/2010 - 16:06

Today's tip is once again focused on Bash functions (I have a whole bunch to share; they're just too useful :-) ). These are three quick and easy functions for performing DNS lookups:

ns - perform standard resolution of hostnames or IP addresses using nslookup; only resolved names/addresses are shown in the results

mx - perform MX record lookup to determine mail servers (and priority) for a particular domain

mxip - perform MX record lookup, but return mail server IP addresses instead of host names

Here are the functions:

# Domain and MX record lookups
#   $1 = hostname, domain name, or IP address
function ns() {
    nslookup "$1" | tail -n +4 | sed -e 's/^Address:[[:space:]]\+//;t;' -e 's/^.*name = \(.*\)\.$/\1/;t;d;'
}
function mx() {
    nslookup -type=mx "$1" | grep 'exchanger' | sed 's/^.* exchanger = //'
}
function mxip() {
    nslookup -type=mx "$1" | grep 'exchanger' | awk '{ print $NF }' | nslookup 2>/dev/null | grep -A1 '^Name:' | sed 's/^Address:[[:space:]]\+//;t;d;'
}

And finally, some examples:

$ ns mail.legroom.net # forward lookup
64.182.149.164
$ ns 64.182.149.164   # reverse lookup
mail.legroom.net
$ ns www.legroom.net  # cname example
legroom.net
64.182.149.164
$ mx legroom.net      # mx lookup
10 mail.legroom.net.
$ mxip legroom.net    # mx->ip lookup
64.182.149.164

Bash Random Password Generator

Submitted by jbreland on Thu, 05/06/2010 - 17:50

Random password generators are certainly nothing new, but they, of course, come in handy from time to time. Here's a quick and easy Bash function to do the job:

# Generate a random password
#  $1 = number of characters; defaults to 32
#  $2 = include special characters; 1 = yes, 0 = no; defaults to 1
function randpass() {
    [ "$2" == "0" ] && CHAR="[:alnum:]" || CHAR="[:graph:]"
    tr -cd "$CHAR" < /dev/urandom | head -c "${1:-32}"
    echo
}

I use this a good bit myself; it can be as strong (or weak) as you need, and only uses core Linux/UNIX commands, so it should work anywhere. Here are a few examples to demonstrate the flags:

$ randpass
UEJ1#QgdFbiJDvCiG*WbQoM:yM'y*[5d
$ randpass 10
4y8jsp#}&(
$ randpass 20 0
RT3Q3SJEgvnQDgz616RJ

Get BIOS/Motherboard Info from within Linux

Submitted by jbreland on Wed, 05/05/2010 - 19:31

It's possible to read the BIOS version and motherboard information (plus more) from a live Linux system using dmidecode. This utility "reports information about your system's hardware as described in your system BIOS according to the SMBIOS/DMI standard (see a sample output). This information typically includes system manufacturer, model name, serial number, BIOS version, asset tag as well as a lot of other details of varying level of interest and reliability depending on the manufacturer." It can be handy if you want to check the BIOS version of your desktop and you're too lazy to reboot, but it's far more useful when trying to get information about production servers that you simply cannot take down.

Simply run dmidecode (as root) to get a dump of all available information. You can specify --string or --type to filter the results. The dmidecode man page is quite thorough, so I won't rehash it here.
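To give a flavor of the filtering options, here are a few example invocations (run as root or via sudo; output naturally varies by machine):

```shell
sudo dmidecode -s bios-version            # --string: print a single value
sudo dmidecode -t bios                    # --type: dump one section (numeric types like -t 0 also work)
sudo dmidecode -s system-serial-number    # handy for support calls, as noted below
```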

One extremely useful application that may not be immediately obvious is the ability to pull the system serial number. Let's say you need to call support for a particular server that can't be taken down, or that you may not even have physical access to. A vendor like Dell will always want the system serial number, and as long as you can log in to the server you can obtain the serial number with dmidecode -s system-serial-number. This has saved me on a couple of occasions with remotely hosted servers.

A lot more information is available through dmidecode, so I definitely encourage you to check it out. To wrap things up, I'll leave you with this obnoxiously long alias:

alias bios='[ -f /usr/sbin/dmidecode ] && sudo -v && echo -n "Motherboard" && sudo /usr/sbin/dmidecode -t 1 | grep "Manufacturer\|Product Name\|Serial Number" | tr -d "\t" | sed "s/Manufacturer//" && echo -ne "\nBIOS" && sudo /usr/sbin/dmidecode -t 0 | grep "Vendor\|Version\|Release" | tr -d "\t" | sed "s/Vendor//"'

This will spit out a nicely formatted summary of the BIOS and motherboard information, using sudo so it can be run as a normal user. Example output:

$ bios
Motherboard: Dell Inc.
Product Name: Latitude D620
Serial Number: XXXXXXXX
 
BIOS: Dell Inc.
Version: A10
Release Date: 05/16/2008

Enjoy.

Generic Method to Determine Linux (or UNIX) Distribution Name

Submitted by jbreland on Wed, 05/05/2010 - 01:58

A while back I needed to programmatically determine which Linux distribution is running, in order to have some scripts do the right thing depending on the distro. Unfortunately, there doesn't appear to be one completely foolproof method to do so, so what I ended up with is a combination of techniques: querying the LSB utilities, checking distro release info files, and falling back on kernel info from uname. It takes the most specific distro name it can find, falling back to a generic Linux identifier if necessary. It also identifies UNIX variants, such as Solaris or AIX.

Here's the code:

# Determine OS platform
UNAME=$(uname | tr "[:upper:]" "[:lower:]")
# If Linux, try to determine specific distribution
if [ "$UNAME" == "linux" ]; then
    # If available, use LSB to identify distribution
    if [ -f /etc/lsb-release -o -d /etc/lsb-release.d ]; then
        export DISTRO=$(lsb_release -i | cut -d: -f2 | sed s/'^\t'//)
    # Otherwise, use release info file
    else
        export DISTRO=$(ls -d /etc/[A-Za-z]*[_-][rv]e[lr]* | grep -v "lsb" | cut -d'/' -f3 | cut -d'-' -f1 | cut -d'_' -f1)
    fi
fi
# For everything else (or if above failed), just use generic identifier
[ "$DISTRO" == "" ] && export DISTRO=$UNAME
unset UNAME

I include this code in my ~/.bashrc file so that it always runs when I login and sets the $DISTRO variable to the appropriate distribution name. I can then use that variable at any later time to perform actions based on the distro. If preferred, this could also easily be adapted into a function by having it return instead of export $DISTRO.
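For the function variant mentioned above, one possible sketch looks like this. The name get_distro is my own; the logic mirrors the snippet, just printing the result on stdout instead of exporting a variable:

```shell
# Same detection logic as above, wrapped in a function.
get_distro() {
    local uname distro
    uname=$(uname | tr '[:upper:]' '[:lower:]')
    if [ "$uname" = "linux" ]; then
        if [ -f /etc/lsb-release ] || [ -d /etc/lsb-release.d ]; then
            # lsb_release prints e.g. "Distributor ID:<tab>Gentoo"
            distro=$(lsb_release -i 2>/dev/null | cut -d: -f2 | sed 's/^[[:space:]]*//')
        else
            # Fall back on release info files such as /etc/gentoo-release
            distro=$(ls -d /etc/[A-Za-z]*[_-][rv]e[lr]* 2>/dev/null \
                | grep -v lsb | cut -d/ -f3 | cut -d- -f1 | cut -d_ -f1 | head -n1)
        fi
    fi
    # Fall back on the generic uname identifier if nothing more specific was found
    echo "${distro:-$uname}"
}
```

Usage is then simply `DISTRO=$(get_distro)` wherever you need it.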

I've tested this on a pretty wide range of Linux and UNIX distributions, and it works very well for me, so I figured I'd share it. Hope you find it useful.

Delete Old Files ONLY If Newer Files Exist

Submitted by jbreland on Tue, 05/04/2010 - 18:17

I discovered recently that one of my automated nightly backup processes had failed. I didn't discover this until about a week after it happened, and though I was able to fix it easily enough, I discovered another problem in the process: all of my backups for those systems had been wiped out. The cause turned out to be a nightly cron job that deletes old backups:

find /home/backup -type f -mtime +2 -exec rm -f {} +

This is pretty basic: find all files under /home/backup/ that are more than two days old and remove them. When new backups are added each night, this is no problem; even though all old backups get removed, newer backups are uploaded to replace them. However, when the backup process failed, the cron job kept happily deleting the older backups until, three days later, I had none left. Oops.

Fortunately, this didn't end up being an issue as I didn't need those specific backups, but nevertheless I wanted to fix the process so that the cleanup cron job would only delete old backups if newer backups exist. After a bit of testing, I came up with this one-liner:

for i in /home/backup/*; do [[ -n $(find "$i" -type f -mtime -3) ]] && find "$i" -type f -mtime +2 -exec rm -f {} +; done

That line will work great as a cron job, but for the purpose of discussion let's break it down a little more:

1. for i in /home/backup/*; do
2.     if [[ -n $(find "$i" -type f -mtime -3) ]]; then
3.         find "$i" -type f -mtime +2 -exec rm -f {} +
4.     fi
5. done

So, there are three key parts involved. Beginning with step 2 (ignore the for loop for now), I want to make sure "new" backups exist before deleting the older ones. I do this by checking for any files that are younger than the cutoff date; if at least one file is found, then we can proceed with step 3. The -n test verifies that the output of the find command is "not null", hence files were found.

Step 3 is pretty much exactly what I was doing previously, i.e., deleting all files older than two days. However, this time it only gets executed if the previous test was true, and it only operates on each subdirectory of /home/backup instead of the whole thing.

This brings us neatly back to step 1. In order for this part to make sense, you must first understand that I back up multiple systems to this directory, each under its own subdirectory. So, I have:

/home/backup/server1
/home/backup/server2
/home/backup/server3
etc.

If I just used steps 2 and 3 to operate on /home/backup directly, I could still end up losing backups. E.g., let's say backups for everything except server1 began failing. New backups for server1 would continue to get added to /home/backup/server1, which means a find command on /home/backup (such as my test in step 2) would see those new files and assume everything is just dandy. Meanwhile, server2, server3, etc. would not be getting any new backups, and once we crossed the three-day threshold all of their backups would be removed.

So, in step 1 I loop through each subdirectory under /home/backup and run the find operations independently for each server's backups. This way, if everything but server1 stops backing up, the test in step 2 will succeed on server1/ but fail on server2/, server3/, etc., thus retaining the old backups until new backups are generated.
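To see the per-directory guard in action, here's a quick sandbox test you can run anywhere; it uses a throwaway directory and GNU touch -d to backdate files:

```shell
# Simulate two servers: server1 keeps getting fresh backups, server2 stopped.
base=$(mktemp -d)
mkdir "$base/server1" "$base/server2"
touch -d '5 days ago' "$base/server1/old.tar" "$base/server2/old.tar"
touch "$base/server1/new.tar"   # only server1 has a backup newer than 3 days

# The cleanup one-liner from above, pointed at the sandbox
for i in "$base"/*; do
    [[ -n $(find "$i" -type f -mtime -3) ]] && find "$i" -type f -mtime +2 -exec rm -f {} +
done

ls "$base/server1"   # new.tar only: the stale backup was removed
ls "$base/server2"   # old.tar survives, because no fresh backup exists
```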

And there you go: a safer way to clean up old files and backups.

Make GTK+ Apps Look Better Under KDE (plus mini GTK+ rant)

Submitted by jbreland on Mon, 05/03/2010 - 21:01

Anyone who knows me knows that I'm not a fan of GTK+ applications, the GTK+ toolkit itself, or indeed even the entire GNOME desktop. I don't hide this. I love Linux and open source software, but I've always thought GTK+ applications look ugly and feel wrong. I can't even fully explain why I have this averse reaction to GTK+, but I've literally always felt this way for as long as I've used Linux, going back to the Red Hat 6.0 days. Granted, GTK+ has come a long way since then, but I still don't think it can hold a candle to Qt, in terms of both look and feel and overall attractiveness. This is one of the (admittedly geeky) reasons I've long preferred KDE over GNOME.

(Interesting aside that I just noticed: I think a comparison between the GTK+/Qt and GNOME/KDE websites also says a lot.)

Unfortunately (for me, at least), many of the "best of breed" Linux applications are built on GTK+. I consider Firefox to be the best general purpose web browser, I think Thunderbird (despite some stagnation over the last few years) is still the best e-mail client, Pidgin is the best IM client, GIMP (even though it pains me to say it) is the best image editor, etc. Despite my bias against GTK+, these are great applications that I use every single day.

Running GTK+ applications under KDE, however, can be an unpleasant experience; aside from the whole "feeling wrong" thing mentioned above, they look absolutely horrendous by default. If you use a KDE-based distribution, such as openSUSE, Kubuntu, or Mandriva, the maintainers usually apply special themes to the GTK+ applications to make them fit in better on the KDE desktop. For users of desktop neutral distributions, or even (gasp!) GNOME-based distributions, though, you'll need to do some extra work to spruce up GTK+ applications.

There are several options for doing this, but the easiest method I've found is to install a high-quality GTK+ theme that mimics the Oxygen icons and widgets. For a long time I used QtCurve for this, which is mature and works very well. More recently I've switched over to Oxygen-Molecule, which looks a bit more accurate under KDE 4.4. In either case, once you get the theme set up it'll be very difficult to distinguish GTK+ applications from Qt applications based on appearance alone.

Many distros already include packages for these themes, which makes installation extremely easy. For example, Gentoo names the packages x11-themes/oxygen-molecule and x11-themes/gtk-engines-qtcurve; Arch includes qtcurve-gtk2 in the base repositories, and oxygen-molecule-theme is available as an AUR package. If your distribution doesn't provide a package, you can install the themes manually by downloading them from the previous links; installation instructions are included in the download.

Once the theme is installed, you need to instruct your GTK+ applications to use it. Sadly, this can be tricky if you don't already have GNOME installed, as the KDE control panel only applies themes to KDE and Qt-based applications. The easiest way I've found to do this is to install gtk-chtheme. It's a lightweight theme switcher specifically for GTK+ applications, and should be packaged by most distributions. Run gtk-chtheme after it's installed and you should see a list of available GTK+ themes. Raleigh is the default "horrendous" theme that I described earlier. Assuming Oxygen-Molecule or QtCurve has already been installed, you should also see it in the list. Select the new theme and hit OK. You'll need to restart your GTK+ applications for the change to take effect. After that... voilà! Enjoy the attractive new look of your GTK+ applications.
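If you'd rather not install another tool, GTK+ 2.x also reads the theme name from ~/.gtkrc-2.0, so a one-line file is enough. The theme name must match the installed theme's name; "QtCurve" here is just an example:

```
gtk-theme-name = "QtCurve"
```

As with gtk-chtheme, restart your GTK+ applications after changing the file.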

A lot more helpful information can be found in the Arch Linux KDE wiki page.

New Navigation Feature: News Categories

Submitted by jbreland on Sun, 05/02/2010 - 15:55

This is something I've been meaning to add to the site for quite a long time. In the Navigation menu on the left side of the screen, you'll find a new News Categories link. Click on that and you'll see a list of all terms used to categorize posts on this site. Click on any term and you'll see a list of all posts in that category. This provides an easy way to, for example, see posts relating to all of my software projects or tips and tricks.

It's also possible to grab RSS feeds for specific categories. For example, if you're only interested in posts about software updates, browse to the Software category, then select the RSS feed icon provided through that page.

Please report any problems in the comments. Thanks.