Display Colored Output in Shell Scripts

Submitted by jbreland on Fri, 06/18/2010 - 04:10

Most modern terminals* (xterm, Linux desktop environment terminals, Linux console, etc.) support ANSI escape sequences for providing colorized output. While I'm not a fan of flash for flash's sake, a little splash of color here and there in the right places can greatly enhance script output.

In Bash, I include the following functions in any script where I want colored output:

# Display colorized information output
function cinfo() {
	COLOR='\033[01;33m'	# bold yellow
	RESET='\033[00;00m'	# normal white
	MESSAGE=${@:-"${RESET}Error: No message passed"}
	echo -e "${COLOR}${MESSAGE}${RESET}"
}
# Display colorized warning output
function cwarn() {
	COLOR='\033[01;31m'	# bold red
	RESET='\033[00;00m'	# normal white
	MESSAGE=${@:-"${RESET}Error: No message passed"}
	echo -e "${COLOR}${MESSAGE}${RESET}"
}

This allows me to easily output yellow (cinfo) or red (cwarn) text with a single line in a script. Eg.:

cwarn "Error: operation failed"

If this message was output normally with echo and it was surrounded by a lot of other text, it might be overlooked by the user. By making it red, however, it's significantly more likely to stand out from any surrounding, "normal" output.
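The two functions differ only in the color code, so the same idea generalizes easily. Here's a minimal sketch; the `cecho` name and its argument convention are my own, not part of the functions above:

```shell
# Print a message in an arbitrary ANSI color; $1 = SGR code, rest = message
# (hypothetical generalization of cinfo/cwarn)
function cecho() {
    local code=$1; shift
    echo -e "\033[${code}m${@}\033[00;00m"
}

cecho '01;32' "Success: operation completed"    # bold green
cecho '01;34' "Info: five steps remaining"      # bold blue
```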

My most common use for these functions is simple status output messages. Eg., if I have a script or function that's going to do five different things and display output for each of those tasks, I'd like an easy way to visually distinguish each of the steps, as well as easily determine which step the script is on. So, I'll do something like this (from one of my system maintenance scripts):

# Rebuild packages with broken dependencies
cinfo "\nChecking for broken reverse dependencies\n"
revdep-rebuild -i -- -av
# Rebuild packages with new use flags
cinfo "\nChecking for updated ebuild with new USE flags\n"
emerge -DNav world

For more details, the Advanced Bash-Scripting Guide provides a detailed discussion on using ANSI escape sequences in scripts, both for color and other purposes. You can also find some additional info in the Bash Prompt HOWTO, as well as useful color charts on the Wikipedia page.

*Note: Traditional (read: old) Unixes generally don't support useful modern conveniences like this. If you regularly work with AIX or Solaris and the like, you may want to skip this tip.

Create Floppy Disk Images from within Linux

Submitted by jbreland on Sat, 06/05/2010 - 20:49

It's possible to create floppy disk images (IMG files) from within Linux using native Linux utilities. Although you most likely won't have a very frequent need for this these days, one place where it can come in handy is when dealing with virtual machines. Emulators such as VirtualBox and VMware Player can mount virtual floppy images and present them to guest machines as physical disks, just as they can mount CD-ROM ISO images and present them as physical CDs.

Now again, there probably isn't a very widespread need to do this, but in my case I needed to be able to create floppy disk images for my Windows installation CD. I use a heavily customized installation CD with an answer file to automate Windows installation. Unfortunately, Windows XP is only capable of reading answer files from the CD itself (which doesn't work for me because I need to be able to change the file) or from a floppy disk. Newer versions of Windows, I believe, can read from USB drives, but as I only (and infrequently) run Windows inside a virtual machine, I don't have any great need to upgrade. Being able to easily generate floppy disk images containing updated answer files, etc. has been a huge help compared to keeping up with physical floppy disks, especially since my current desktop no longer supports a floppy drive. Now, I just point VirtualBox to the appropriate IMG files, and when I boot Windows (or the Windows installer) it'll see it as a normal floppy drive. Very handy.

In order to create floppy disk images, you'll need a copy of dosfstools installed. It should be available in most package repositories. Once installed, the following command does all the magic:

mkfs.vfat -C "floppy.img" 1440
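The trailing 1440 is the block count in 1 KB units, which is the capacity of a standard 3.5" high-density floppy. A quick check of the math:

```shell
# 1440 blocks of 1024 bytes = 1,474,560 bytes, which is the size of the
# image file that mkfs.vfat -C creates
echo $(( 1440 * 1024 ))    # prints 1474560
```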

You now have an empty, but valid, floppy disk image. In order to copy files to the image, you need to mount the image using the loop device:

sudo mount -o loop,uid=$UID -t vfat floppy.img /mnt/floppy

Note that the mount command must be run either as root or via sudo; the uid argument makes the mount point owned by the current user so that you have permission to copy files into it.

After you're finished copying files, unmount the image and you're done. You can now attach it to your emulator of choice as a floppy disk image. W00t.

To make things even easier, the following script automates the entire process; just pass it the directory containing all of the files you want copied to the floppy disk and it'll do the rest.

#!/bin/bash

# Setup environment
FORMAT=$(which mkfs.vfat 2>/dev/null)
MOUNT=$(which mount 2>/dev/null)
shopt -s dotglob

# Verify binaries exist
[ ! -e "$FORMAT" ] && MISSING+='mkfs.vfat, '
[ ! -e "$MOUNT" ] && MISSING+='mount, '
if [ -n "$MISSING" ]; then
   echo "Error: cannot find the following binaries: ${MISSING%%, }"
   exit 1
fi

# Verify arguments
if [ ! -d "$1" ]; then
   echo "Error: You must specify a directory containing the floppy disk files"
   exit 1
fi
DISK=$(basename "${1}")
IMG="/tmp/${DISK}.img"
TEMP="/tmp/${DISK}.$$"

# Load loopback module if necessary
if [ ! -e /dev/loop0 ]; then
   sudo modprobe loop
   sleep 1
fi

# Create disk image, mount it, and copy in the files
${FORMAT} -C "${IMG}" 1440
mkdir "${TEMP}"
sudo $MOUNT -o loop,uid=$UID -t vfat "${IMG}" "${TEMP}"
cp -f "${1}"/* "${TEMP}"/
sudo umount "${TEMP}"
rmdir "${TEMP}"
mv "${IMG}" .

Quick Domain Name / IP Address / MX Record Lookup Functions

Submitted by jbreland on Fri, 05/07/2010 - 16:06

Today's tip is once again focused on Bash functions (I have a whole bunch to share; they're just too useful :-) ). These are three quick and easy functions for performing DNS lookups:

ns - perform standard resolution of hostnames or IP addresses using nslookup; only resolved names/addresses are shown in the results

mx - perform MX record lookup to determine mail servers (and priority) for a particular domain

mxip - perform MX record lookup, but return mail server IP addresses instead of host names

Here are the functions:

# Domain and MX record lookups
#   $1 = hostname, domain name, or IP address
function ns() {
    nslookup $1 | tail -n +4 | sed -e 's/^Address:[[:space:]]\+//;t;' -e 's/^.*name = \(.*\)\.$/\1/;t;d;'
}
function mx() {
    nslookup -type=mx $1 | grep 'exchanger' | sed 's/^.* exchanger = //'
}
function mxip() {
    nslookup -type=mx $1 | grep 'exchanger' | awk '{ print $NF }' | nslookup 2>/dev/null | grep -A1 '^Name:' | sed 's/^Address:[[:space:]]\+//;t;d;'
}

And finally, some examples (the hostnames here are just placeholders; substitute your own):

$ ns www.example.com     # forward lookup
$ ns 192.0.2.10          # reverse lookup
$ ns www.example.org     # cname example
$ mx example.com         # mx lookup
$ mxip example.com       # mx->ip lookup
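If nslookup isn't available, getent (part of glibc on Linux) can serve as a rough substitute for the forward-lookup case. This variant is my own sketch, not one of the functions above, and it has no MX support since getent doesn't do MX queries:

```shell
# Resolve a hostname via the system resolver (NSS), printing addresses only
# (hypothetical nslookup-free variant of ns)
function nsg() { getent hosts "$1" | awk '{ print $1 }'; }

nsg localhost    # typically 127.0.0.1 and/or ::1
```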

Bash Random Password Generator

Submitted by jbreland on Thu, 05/06/2010 - 17:50

Random password generators are certainly nothing new, but they, of course, come in handy from time to time. Here's a quick and easy Bash function to do the job:

# Generate a random password
#  $1 = number of characters; defaults to 32
#  $2 = include special characters; 1 = yes, 0 = no; defaults to 1
function randpass() {
    [ "$2" == "0" ] && CHAR="[:alnum:]" || CHAR="[:graph:]"
    cat /dev/urandom | tr -cd "$CHAR" | head -c ${1:-32}
}

I use this a good bit myself; it can be as strong (or weak) as you need, and only uses core Linux/UNIX commands, so it should work anywhere. Here are a few examples to demonstrate the flags:

$ randpass
$ randpass 10
$ randpass 20 0
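A quick way to convince yourself the arguments behave as described (re-defining the function here so the snippet stands alone):

```shell
# Same function as above
function randpass() {
    [ "$2" == "0" ] && CHAR="[:alnum:]" || CHAR="[:graph:]"
    cat /dev/urandom | tr -cd "$CHAR" | head -c ${1:-32}
}

randpass 16 | wc -c                          # prints 16
randpass 64 0 | tr -d '[:alnum:]' | wc -c    # prints 0: no special chars
```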

Get BIOS/Motherboard Info from within Linux

Submitted by jbreland on Wed, 05/05/2010 - 19:31

It's possible to read the BIOS version and motherboard information (plus more) from a live Linux system using dmidecode. This utility "reports information about your system's hardware as described in your system BIOS according to the SMBIOS/DMI standard (see a sample output). This information typically includes system manufacturer, model name, serial number, BIOS version, asset tag as well as a lot of other details of varying level of interest and reliability depending on the manufacturer." It can be handy if you want to check the BIOS version of your desktop and you're too lazy to reboot, but it's far more useful when trying to get information about production servers that you simply cannot take down.

Simply run dmidecode (as root) to get a dump of all available information. You can specify --string or --type to filter the results. The dmidecode man page is quite thorough, so I won't rehash it here.

One extremely useful application that may not be immediately obvious is the ability to pull the system serial number. Let's say you need to call support for a particular server that can't be taken down, or that you may not even have physical access to. A vendor like Dell will always want the system serial number, and as long as you can login to the server you can obtain the serial number with dmidecode -s system-serial-number. This has saved me on a couple of occasions with remotely hosted servers.

A lot more information is available through dmidecode, so I definitely encourage you to check it out. To wrap things up, I'll leave you with this obnoxiously long alias:

alias bios='[ -f /usr/sbin/dmidecode ] && sudo -v && echo -n "Motherboard" && sudo /usr/sbin/dmidecode -t 1 | grep "Manufacturer\|Product Name\|Serial Number" | tr -d "\t" | sed "s/Manufacturer//" && echo -ne "\nBIOS" && sudo /usr/sbin/dmidecode -t 0 | grep "Vendor\|Version\|Release" | tr -d "\t" | sed "s/Vendor//"'

This will spit out a nicely formatted summary of the BIOS and motherboard information, using sudo so it can be run as a normal user. Example output:

$ bios
Motherboard: Dell Inc.
Product Name: Latitude D620
Serial Number: XXXXXXXX
BIOS: Dell Inc.
Version: A10
Release Date: 05/16/2008


Generic Method to Determine Linux (or UNIX) Distribution Name

Submitted by jbreland on Wed, 05/05/2010 - 01:58

A while back I had a need to programmatically determine which Linux distribution is running in order to have some scripts do the right thing depending on the distro. Unfortunately, there doesn't appear to be one completely foolproof method to do so. What I ended up with is a combination of techniques: querying the LSB utilities, checking distro release info files, and falling back to kernel info from uname. It'll take the most specific distro name it can find, falling back to generic Linux if necessary. It'll also identify UNIX variants, such as Solaris or AIX.

Here's the code:

# Determine OS platform
UNAME=$(uname | tr "[:upper:]" "[:lower:]")
# If Linux, try to determine specific distribution
if [ "$UNAME" == "linux" ]; then
    # If available, use LSB to identify distribution
    if [ -f /etc/lsb-release -o -d /etc/lsb-release.d ]; then
        export DISTRO=$(lsb_release -i | cut -d: -f2 | sed s/'^\t'//)
    else
        # Otherwise, use release info file
        export DISTRO=$(ls -d /etc/[A-Za-z]*[_-][rv]e[lr]* | grep -v "lsb" | cut -d'/' -f3 | cut -d'-' -f1 | cut -d'_' -f1)
    fi
fi
# For everything else (or if above failed), just use generic identifier
[ "$DISTRO" == "" ] && export DISTRO=$UNAME
unset UNAME

I include this code in my ~/.bashrc file so that it always runs when I login and sets the $DISTRO variable to the appropriate distribution name. I can then use that variable at any later time to perform actions based on the distro. If preferred, this could also easily be adapted into a function by having it return instead of export $DISTRO.
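For example, a later script might branch on the variable like this. The distro strings below are illustrative only; the exact values depend on what lsb_release or the release files report on your systems, so check what $DISTRO actually contains before relying on them:

```shell
# Pick a package-manager command based on $DISTRO (names are examples only)
DISTRO=${DISTRO:-linux}
case "$DISTRO" in
    Ubuntu|Debian*) PKG_INSTALL='apt-get install' ;;
    Gentoo)         PKG_INSTALL='emerge' ;;
    sunos)          PKG_INSTALL='pkgadd -d' ;;
    *)              PKG_INSTALL='' ;;   # unknown platform; handle manually
esac
```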

I've tested this on a pretty wide range of Linux and UNIX distributions, and it works very well for me, so I figured I'd share it. Hope you find it useful.

Delete Old Files ONLY If Newer Files Exist

Submitted by jbreland on Tue, 05/04/2010 - 18:17

I discovered recently that one of my automated nightly backup processes had failed. I didn't discover this until about a week after it happened, and though I was able to fix it easily enough, I discovered another problem in the process: all of my backups for those systems had been wiped out. The cause turned out to be a nightly cron job that deletes old backups:

find /home/backup -type f -mtime +2 -exec rm -f {} +

This is pretty basic: find all files under /home/backup/ that are more than two days old and remove them. When new backups are added each night, this is no problem; even though all old backups get removed, newer backups are uploaded to replace them. However, when the backup process failed, the cron job kept happily deleting the older backups until, three days later, I had none left. Oops.

Fortunately, this didn't end up being an issue as I didn't need those specific backups, but nevertheless I wanted to fix the process so that the cleanup cron job would only delete old backups if newer backups exist. After a bit of testing, I came up with this one-liner:

for i in /home/backup/*; do [[ -n $(find "$i" -type f -mtime -3) ]] && find "$i" -type f -mtime +2 -exec rm -f {} +; done

That line will work great as a cron job, but for the purpose of discussion let's break it down a little more:

1. for i in /home/backup/*; do
2.     if [[ -n $(find "$i" -type f -mtime -3) ]]; then
3.         find "$i" -type f -mtime +2 -exec rm -f {} +
4.     fi
5. done

So, there are three key parts involved. Beginning with step 2 (ignore the for loop for now), I want to make sure "new" backups exist before deleting the older ones. I do this by checking for any files that are younger than the cutoff date; if at least one file is found, then we can proceed with step 3. The -n test verifies that the output of the find command is "not null", hence files were found.

Step 3 is pretty much exactly what I was doing previously, ie., deleting all files older than two days. However, this time it only gets executed if the previous test was true, and only operates on each subdirectory of /home/backup instead of the whole thing.

This brings us neatly back to step 1. In order for this part to make sense, you must first understand that I backup multiple systems to this directory, each under their own directory. So, I have:

/home/backup/server1/
/home/backup/server2/
/home/backup/server3/

If I just let steps 2 and 3 operate on /home/backup directly, I could still end up losing backups. Eg., let's say backups for everything except server1 began failing. New backups for server1 would continue to get added to /home/backup/server1, which means a find command on /home/backup (such as my test in step 2) would see those new files and assume everything is just dandy. Meanwhile, server2, server3, etc. have not been getting any new backups, and once we cross the three-day threshold all of their backups would be removed.

So, in step 1 I loop through each subdirectory under /home/backup, and then have the find operations run independently for each server's backups. This way, if all but server1 stop backing up, the test in step 2 will succeed on server1/, but fail on server2/, server3/, etc., thus retaining the old backups until new backups are generated.

And there you go: a safer way to clean up old files and backups.
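One last suggestion: when adapting this for your own directory layout, it's worth testing the selection logic before letting cron delete anything. This dry-run variant (the function name and argument are my own, not part of the cron job above) prints the candidates instead of removing them:

```shell
# Print files that WOULD be deleted: old files in subdirectories that also
# contain newer files; pass the backup root (e.g. /home/backup) as $1.
# Swap -print back to -exec rm -f {} + once the output looks right.
function clean_dryrun() {
    local i
    for i in "$1"/*; do
        [[ -n $(find "$i" -type f -mtime -3) ]] && find "$i" -type f -mtime +2 -print
    done
    return 0
}
```

Run it as `clean_dryrun /home/backup` and inspect the list before re-arming the real cron job.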

Port Testing (and Scanning) with Bash

Submitted by jbreland on Sun, 05/02/2010 - 14:53

Posts on my site have been rather... slow, to be generous. To try to change that, I'm going to begin posting neat tips and tricks that I discover as I go about my daily activities. Normally I just mention these to whoever happens to be on IM at the time, but I figure I can post here instead to share the information with a much wider audience and breathe some life back into my site. So, it's a win-win for everyone. :-)

I should note that many of these tips will likely be rather technical, and probably heavily Linux-focused, since that's my primary computing environment. Today's tip definitely holds true on both counts.

One of the neat features supported by Bash is socket programming. Using this, you can connect to any TCP or UDP port on any remote system. Of course, this is of rather limited usefulness as Bash won't actually do anything once connected unless specific protocol instructions are sent as well. As a relatively simple example of how this works:

exec 3<>/dev/tcp/www.google.com/80
echo -e "GET / HTTP/1.1\n\n" >&3
cat <&3

(Note: Example taken from Dave Smith's Blog.)

This will establish a connection to www.google.com on port 80 (the standard HTTP port), send an HTTP GET command requesting the home page, and then display the response on your terminal. The &3 stuff is necessary to create a new file descriptor used to pass the input and output back and forth. The end result is that Google's home page (or the raw HTML for it, at least) will be downloaded and displayed on your terminal.

That's pretty slick, but like I said above, it's of rather limited usefulness. Not many people would be interested in browsing the web in this manner. However, we can use these same concepts for various other tasks and troubleshooting, including port scanning.

To get started, try running this command:

(echo >/dev/tcp/www.google.com/80) && echo "open"

This will attempt to send an empty string to www.google.com on port 80, and if it receives a successful response it will display "open". Conversely, if you attempt to connect to a server/port that is not open, Bash will respond with a connection refused error.

Let's expand this a bit into a more flexible and robust function:

# Test remote host:port availability (TCP-only as UDP does not reply)
#   $1 = hostname
#   $2 = port
function port() {
    (echo >/dev/tcp/$1/$2) &>/dev/null
    if [ $? -eq 0 ]; then
        echo "$1:$2 is open"
    else
        echo "$1:$2 is closed"
    fi
}

Now, we can run port www.google.com 80 and get back "www.google.com:80 is open". Conversely, try something like port localhost 80. Unless you're running a webserver on your local computer, you should get back "localhost:80 is closed". This can provide a quick and dirty troubleshooting technique to test whether a server is listening on a given port, and to ensure you can reach that port (eg., traffic is not being dropped by a firewall, etc.).
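One caveat: if a firewall silently drops packets rather than rejecting them, the /dev/tcp redirection can hang for a long time waiting for the connection to time out. Where coreutils' timeout command is available, a bounded variant helps; this is my own sketch, not the function above:

```shell
# Like port(), but give up after 3 seconds on filtered/unreachable hosts
function tport() {
    if timeout 3 bash -c "echo >/dev/tcp/$1/$2" 2>/dev/null; then
        echo "$1:$2 is open"
    else
        echo "$1:$2 is closed"
    fi
}

tport localhost 1    # almost certainly reports closed
```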

To take this another step further, we can use this function as a basic port scanner as well. For example:

for i in $(seq 1 1023); do port localhost $i; done | grep open

This will check all of the well-known ports on your local computer and report any that are open. I should note that this will be slower and less efficient than "real" port scanners such as Nmap. However, for one-off testing situations where Nmap isn't available (or can't be installed), using Bash directly can really be quite handy.

Additional information on Bash socket programming can be found in the Advanced Bash-Scripting Guide.

I hope you find this tip useful. Future tips will likely be shorter and more to the point, but I figured some additional explanation would be useful for this one. Feel free to post any questions or feedback in the comments.

Bash (Shell) Aliases and Functions

Submitted by jbreland on Tue, 08/18/2009 - 19:14

I started using Linux 10 years ago this month (actually, my very first Linux install would've been around 10 years ago today, though I'm not sure of the exact date). Throughout all those years, I've compiled a number of useful Bash functions and aliases that I use on a daily basis to save me time and help get things done. I figure that some of these would be useful to others as well, so I'm posting a list of them here, along with commentary where appropriate.

For those of you who either don't know what I'm talking about, or just aren't very familiar with this topic, Bash stands for the "Bourne-again shell", and is the standard command line interface for Linux. There are plenty of other shells available, but Bash is the most common and is the default on most Linux distributions. Bash aliases and functions allow you to define shortcuts for longer or more complicated commands.

Bash aliases are used for substituting a long/complicated string for a much shorter one that you type on the command line. As a simple example, consider the following alias that is defined by default on most distributions:

alias ls='ls --color'

This simply means that anytime you use the command ls, bash will automatically substitute ls --color for you. So, if you entered the command ls /home, bash will treat this as ls --color /home.

Bash functions provide the same essential concept, but allow for much more complicated functionality through the use of shell scripting. Here's an example of a function:

function l() { locate "$1" | grep --color "$1"; }

This defines a function named l that will:

  1. Pass the supplied argument (search term) to the locate command, then
  2. pipe the output to the grep command to highlight the matched results

So, if you entered the command l filename, bash would actually run locate "filename" | grep --color "filename". This will search for all files on your computer named "filename", then use grep to highlight the word "filename" in the results. These are two fairly simple examples of aliases and functions, but when used frequently they can lead to significant time savings.

I'm including a full list of my personal aliases and functions below. Note: Some of these commands are rather obscure, but I'm including them anyway just for reference. At the very least, it may inspire similar shortcuts that make sense for you.

To use any of these, simply add them to your ~/.bashrc file.

# Aliases

# Show filetype colors and predictable date/timestamps
alias ls="ls --color=auto --time-style=long-iso"

# Highlight matched pattern
alias grep='grep --color'

# Common shortcuts and typos
alias c=clear
alias x=startx
alias m=mutt
alias svi='sudo vim'
alias ci='vim'
alias reboot='sudo /sbin/reboot'
alias halt='sudo /sbin/halt'

# Clear and lock console (non-X) terminal
alias lock="clear && vlock -c"

# If in a directory containing a symlink in the path, change to the "real" path
alias rd='cd "`pwd -P`"'

# Useful utility for sending files to trash from command line instead of
#   permanently deleting with rm - see
alias tp='trash-put'

# Generic shortcut for switching to root user depending on system
#alias root='su -'
#alias root='sudo -i'
alias root='sudo bash -l'

# Compile kernel, install modules, display kernel version and current date
#   useful for building custom kernels; version and date are for the filename
alias kernbuild='make -j3 && make modules_install && ls -ld ../linux && date'

# Shortcut for downloading a torrent file on the command line
alias bt='aria2c --max-upload-limit=10K --seed-time=60 --listen-port=8900-8909'

# Only show button events for xev
alias xevs="xev | grep 'keycode\|button'"

# Launch dosbox with a preset configuration for Daggerfall
alias daggerfall='dosbox -conf ~/.dosbox.conf.daggerfall'

# Functions

# Search Gentoo package database (portage) using eix
#   $1 = search term (package name)
function s() { eix -Fc "$1"; }     # Search all available; show summary
function sd() { eix -FsSc "$1"; }  # Search all available w/ desc.; show summary
function se() { eix -F "^$1\$"; }  # Search exact available; show details
function si() { eix -FIc "$1"; }   # Search installed; show summary

# Search Debian package database (apt) using dpkg
#   $1 = search term (package name)
#function s() { apt-cache search "$1" | grep -i "$1"; }  # search all available

# Search Arch package database using pacman
#   $1 = search term (package name)
#function s() {
#    echo -e "$(pacman -Ss "$@" | sed \
#        -e 's#^core/.*#\\033[1;31m&\\033[0;37m#g' \
#        -e 's#^extra/.*#\\033[0;32m&\\033[0;37m#g' \
#        -e 's#^community/.*#\\033[1;35m&\\033[0;37m#g' \
#        -e 's#^.*/.* [0-9].*#\\033[0;36m&\\033[0;37m#g' ) \
#        \033[0m"
#}

# Mount/unmount CIFS shares; pseudo-replacement for smbmount
#   $1 = remote share name in form of //server/share
#   $2 = local mount point
function cifsmount() { sudo mount -t cifs -o username=${USER},uid=${UID},gid=${GROUPS} $1 $2; }
function cifsumount() { sudo umount $1; }

# Generate a random password
#   $1 = number of characters; defaults to 32
#   $2 = include special characters; 1 = yes, 0 = no; defaults to 1
function randpass() {
    if [ "$2" == "0" ]; then
        cat /dev/urandom | tr -cd '[:alnum:]' | head -c ${1:-32}
    else
        cat /dev/urandom | tr -cd '[:graph:]' | head -c ${1:-32}
    fi
}

# Display text of ODF document in terminal
#   $1 = ODF file
function o3() { unzip -p "$1" content.xml | o3totxt | utf8tolatin1; }

# Search all files on system using locate database
#   $1 = search term (file name)
function li() { locate -i "$1" | grep -i --color "$1"; }  # case-insensitive
function l() { locate "$1" | grep --color "$1"; }         # case-sensitive

# View latest installable portage ebuild for specified package
#   $1 = package name
function eview() {
    FILE=$(equery which $1)
    if [ -f "$FILE" ]; then
        view $FILE
    fi
}

# View portage changelog for specified package
#   $1 = package name
function echange() {
    PACKAGE="$(eix -e --only-names $1)"
    if [ "$PACKAGE" != "" ]; then
        view /usr/portage/$PACKAGE/ChangeLog
    fi
}

# Displays metadata for specified media file
#   $1 = media file name
function i() {
    EXT=`echo "${1##*.}" | sed 's/\(.*\)/\L\1/'`
    if [ "$EXT" == "mp3" ]; then
        id3v2 -l "$1"
        mp3gain -s c "$1"
    elif [ "$EXT" == "flac" ]; then
        metaflac --list --block-type=STREAMINFO,VORBIS_COMMENT "$1"
    else
        echo "ERROR: Not a supported file type."
    fi
}

# Sets custom Catalog Number ID3 tag for all MP3 files in current directory
#   $1 = catalog number
function cn() { for i in *.mp3; do id3v2 --TXXX "Catalog Number":"$1" "$i"; done; }

Linux Kernel Newbies website

Submitted by jbreland on Mon, 07/14/2008 - 11:37

I came across a new (to me) Linux-related website a couple months ago that rather impressed me (which is something that doesn't happen all that often). The name of the site is Linux Kernel Newbies, and it's located at

I stumbled across the site while looking for a good kernel changelog. Most changelogs that I've been able to find discuss the changes in one of three formats:

  • List changes/commits made in each release candidate
  • List all individual commits made during the release cycle
  • Briefly summarize major changes or new features

None of these really provided the information that I was looking for. Documenting changes for each release candidate is fine if you're actually using/testing -rc kernels, but it's a pain when looking for changes from version to version because it requires looking through multiple posts or documents. The commit list approach is also fine for the gritty details, but unfortunately the summaries of each change are rather cryptic and often don't mean a lot to people not actively involved in the development. The new feature and major change approach is nice in that it's easily digestible and hits the highlights, but unfortunately it usually doesn't cover enough detail for me.

While searching for a decent changelog that was something in between the detailed commit list and a high-level summary, I found the LinuxChanges page on the Linux Kernel Newbies wiki. This is almost exactly what I've been looking for. They do a great job of describing all of the new/important features of the given kernel release, including providing links to the actual commit records if you really want the full details. They also provide a list of all individual commits, logically grouped and sorted, which makes it much easier to understand what was changed. Finally, they even cover the highlights of new/upcoming patches that are actively being developed for succeeding kernel releases.

The LinuxChanges link always displays the changelog for the most recent stable kernel release (currently 2.6.26 as I write this). Changelogs for older releases can be found on the Linux26Changes page.

While the changelog is what keeps me coming back every couple of months, Linux Kernel Newbies also offers a few other useful resources that may be of interest to Linux users, such as the KernelGlossary, FAQ, and Forum. The homepage also provides links to other content on the site.

I don't have any affiliation with the site, and to be honest I haven't spent much time on the site outside of the changelog pages, but even so I found it so useful that I wanted to mention it here. Hopefully some others can benefit from this site as well.