Tag Archives: bash

A while-read Loop

I found myself in need of a way to optionally repeat the same chunk of code before a script exited.  Well… need… it’s what I wanted.  Anyway.

After some trouble sorting out syntax issues and some confusion about how to make a while loop depend upon user input, I came up with this code snippet, which will do just what I wanted.

#!/usr/bin/env bash
# by JamesIsIn

variable="y"

while [ "${variable}" = "y" ]; do

     ## What you want repeated goes here. ##

     read -e -p "Do you want to go again? [y/n] "
     echo "$REPLY"
     ## Uncomment the line for your version of bash: ##
     # variable="${REPLY,,}" ## after bash v4 ##
     # variable="$( echo "${REPLY}" | tr '[A-Z]' '[a-z]' )" ## for Mac and bash < v4 ##
     echo "$variable"

     if [[ "${variable}" = y ]]; then
          echo "Ok!"
     elif [[ "${variable}" = yes ]]; then
          echo "Ok!"
          variable="y"
     else
          echo "Damn!"
     fi

done


As you can see, the line which takes the REPLY and pushes it into variable appears twice.  This is because there is a simpler way to make the reply lower-case starting with bash v4.  I included both lines in case anyone wants to use this on older versions of bash (including on Macs, which still ship with an archaic version of bash).  Just uncomment the line for your system.
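For what it’s worth, the two lowercasing approaches can be compared side by side.  This little sketch (standalone, not part of the loop) shows they produce the same result:

```shell
#!/usr/bin/env bash
# Compare the two ways of lowercasing a reply.

REPLY="YeS"

# bash v4 and later: the ,, parameter expansion
v4="${REPLY,,}"

# Mac and bash < v4: pipe through tr
portable="$( echo "${REPLY}" | tr '[A-Z]' '[a-z]' )"

echo "${v4}"        # yes
echo "${portable}"  # yes
```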



Delete Keychain Folder

We run the Casper Suite to control our Macs at work, and we are using folder re-direction for our conference room machines (for the users’ home directories).  Since all of these machines are Active Directory members and users do change their passwords (quite frequently per policy), we have issues when folks attempt to log into a conference room machine after they have changed their passwords.

The real trouble seems to be that Apple hasn’t quite readied the Mac OS for full enterprise AD integration.  Though the Macs are members and though a user is able to log in using network credentials, once those credentials are cached the OS doesn’t like to check with AD when the credentials offered by the user are not matched with those cached in the keychain.

I created a Self Service script which simply removes the entire keychain folder for the then logged-in user.  If there is a less heavy-handed solution to this matter I have not yet found it.  Here is that script for entertainment.

## Conference Room Keychain Fix

## Delete user's Keychains folder (located in redirected home directory's Library folder)

username=$(stat -f %Su /dev/console)
rm -R /home/"$username"/Library/Keychains/
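Since the script removes a directory tree unconditionally, a slightly more defensive variant checks that the folder exists first.  This sketch demonstrates the guard against a throwaway directory; in the real script ${home} would be /home/${username} as derived from stat:

```shell
#!/usr/bin/env bash
# Defensive variant of the keychain removal, demonstrated on a temp dir.
# In the real script, ${home} would be /home/${username} from stat.

home="$(mktemp -d)"
mkdir -p "${home}/Library/Keychains"

keychains="${home}/Library/Keychains"
if [ -d "${keychains}" ]; then
     rm -R "${keychains}"
fi
```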


I hope you find this useful and expedient.


Fix for Firefox’s Profile Manager Error

There is a bug in Firefox (as near as I can tell) and it has been present for many versions (more than a dozen at least). It only affects users in a particular configuration on Macs, so it is not very likely to get any love any time soon. (I filed a bug report here ages ago.)

In short, Firefox is able to create its Profiles folder under /path/to/home/[username]/Library/Application Support/Firefox/ and it is able to create the associated profiles.ini file next to it.  However, Firefox is not able to add the information pointing the profiles.ini file at the newly created profile folder.

If you try to launch Firefox you will only get the Profile Manager and it will not be able to see any profiles, nor will it be able to create one.  Instead it throws an error:

Profile Manager Error

Anyway, perhaps one day Mozilla will fix it.  In the meantime I need to be able to fix this for users.  I know I can add a known-good profile and profiles.ini pair, so I figured I could just build my own profiles.ini file based on what I saw in the Profiles folder.  That worked so I just needed to create a way to use that information.

We use the Casper Suite to manage the Macs in our environment, so I was bent on doing something through Casper.  Additionally I wanted to use Casper’s Self Service application so I could just point a user to a single button to fix the problem.

Here is the script I added for users to invoke through Self Service.

#! /usr/bin/env bash 
# Fix Firefox profile manager error on machines with re-directed home directories. 
# by JamesIsIn 
# Do something nice today. 
# https://github.com/jamesisin/slop-bucket/blob/master/popit/ConfRm_Replace_Firefox_profiles_ini.sh

#Get current logged-in username. 
username=$( stat -f %Su /dev/console ) 

# Get first profile name in user's Library folder. 
profile="$( ls /home/"${username}"/Library/Application\ Support/Firefox/Profiles/ | head -1 )"

# Empty and populate user's Firefox profiles file. 
printf "[General]\nStartWithLastProfile=1\n\n[Profile0]\nName=Default User\nIsRelative=1\n" 1>/home/"${username}"/Library/Application\ Support/Firefox/profiles.ini 
printf "Path=Profiles/%s\n" "${profile}" 1>>/home/"${username}"/Library/Application\ Support/Firefox/profiles.ini

exit 0 

First I get the username of whoever happens to be logged in at the time Self Service is run on that machine and save that in a variable (called username).

Then I get the name of the first profile located in the user’s Firefox folder (under the user’s Library folder).  It doesn’t matter which one I use; I just arbitrarily chose the first one.  This way, if there is only one, I’ll be ok too.  I store that in a separate variable (called profile).

Finally I use those two variables to construct the appropriate profiles.ini file (using printf and standard output redirection).
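To see what those printf lines produce, here is a standalone sketch using a hypothetical profile folder name (abcd1234.default) and a temporary file in place of the real profiles.ini:

```shell
#!/usr/bin/env bash
# Demonstrate the profiles.ini construction with a made-up profile name.

profile="abcd1234.default"   # hypothetical; the real script derives this
ini="$(mktemp)"

printf "[General]\nStartWithLastProfile=1\n\n[Profile0]\nName=Default User\nIsRelative=1\n" 1>"${ini}"
printf "Path=Profiles/%s\n" "${profile}" 1>>"${ini}"

cat "${ini}"
```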

Hope that helps you.


Using find and rm in Conjunction

I downloaded a bunch of music or pictures or somesuch beyond memory, and in each folder the generous sharer included two or three dozen rich text files (.rtf) with some advertising for something about which I couldn’t have cared less.  In the end this accounted for a few hundred unwanted files all ending in .rtf.

In order to find them I used this command:

find /path/to/containing/folder/ -type f -name \*.[Rr][Tt][Ff]

That’s all well and good, but I want to delete them (using rm presumably).

I tried a few guesses…

rm `find /path/to/containing/folder/ -type f -name \*.[Rr][Tt][Ff]`
rm: missing operand

rm "`find /path/to/containing/folder/ -type f -name \*.[Rr][Tt][Ff]`"
rm: cannot remove `': No such file or directory

rm '`find /path/to/containing/folder/ -type f -name \*.[Rr][Tt][Ff]`'

None of these worked: the unquoted substitution divided the file names on spaces, the double quotes collapsed the whole list into one argument, and the single quotes suppressed the command substitution entirely. At this point I thought about changing the internal field separator ($IFS) to the line break (\n) instead of the default space ( ) but instead decided to look for something simpler.

I found it here. In the end my code looked like this:

find /path/to/containing/folder/ -type f -name \*.[Rr][Tt][Ff] -exec rm -f {} \;

You can read quite a bit about the -exec argument in the find manpage (in your terminal type man find).

Warning: This will delete all rich text files located under the containing folder. I was safe to do this because there is nothing on my server stored in the rich text format. If you have files of the same type you are seeking to remove, you will want to take appropriate precautions.
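For what it’s worth, the word-splitting problem from the earlier attempts can also be solved by having find emit NUL-separated names and handing them to xargs -0, which keeps names containing spaces intact.  Here is a sketch against a throwaway directory (so nothing real gets deleted):

```shell
#!/usr/bin/env bash
# find -print0 | xargs -0 survives spaces in file names.

dir="$(mktemp -d)"
touch "${dir}/keep.txt" "${dir}/ad one.rtf" "${dir}/ad two.RTF"

find "${dir}" -type f -name \*.[Rr][Tt][Ff] -print0 | xargs -0 rm -f

ls "${dir}"   # only keep.txt remains
```

GNU find also offers a -delete action, which avoids spawning rm at all.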

Hope this helps you out.


Script for Skipping Tracks

I use Rhythmbox to play music on my hi-fi system.  There is a command line element for Rhythmbox called rhythmbox-client.  This element can be used to initiate a series of commands in Rhythmbox.  The problem is that these commands end up being a bit longish if you are attached to that machine via ssh.  Here is an example:

DISPLAY=:0.0 rhythmbox-client --play-pause

That’s a lot of typing especially when you are attached via ssh from your Android phone.  It was just too tedious to type all that out on that on-screen keyboard.  So I wrote a script that would manage the arguments I felt were most important.

#!/usr/bin/env bash
# by JamesIsIn from JamesIsIn.com
# Do something nice today.

if [ "$1" = "n" ]; then
   argument="--next --print-playing"
   printf "\nI am skipping to the next track:\n\n"
elif [ "$1" = "p" ]; then
   argument="--play-pause"
   printf "\nI am toggling between play and pause.\n\n"
elif [ "$1" = "s" ]; then
   argument="--print-playing"
   printf "\nThis is what is currently playing:\n\n"
# elif [ "$1" = "q" ]; then
#    argument="--enqueue"
#    # I could add --enqueue but I have to figure out how to make it work
else
   printf "\nI'm sorry.  I only understand the following commands:\n\n"
   printf "p (play/pause)\n\n"
   printf "s (show what's playing)\n\n"
   printf "n (skip to next track)\n\n"
   printf "Please try again.\n\n"
   exit 1
fi

# DISPLAY is needed so rhythmbox-client can reach the running Rhythmbox
# (this assumes a single-monitor configuration)

DISPLAY=:0.0 rhythmbox-client $argument



The use of DISPLAY is required so that rhythmbox-client knows on which monitor Rhythmbox is present.  The configuration above is for a single monitor arrangement.  You’re on your own to figure out what you need in there if you have a different monitor configuration.

So, using my script, if you wanted to issue the same command I mentioned above you would merely type:

Rb p

(Now I just need to streamline the ssh command and this whole procedure will be much easier and thereby more impressive.)

I placed this script in /usr/local/bin on the hi-fi machine. I called it merely Rb. That way the script is available to all users on that machine. Be sure to make the file executable:

sudo chmod a+x /usr/local/bin/Rb

(Since I use my server as a proper server now I moved my script to the share: /media/[share]/Rb so that it would be available across the network.)

Have fun with that.


Conquer Giant FLAC

I have already written posts on splitting and converting from either album-length APE files or album-length FLAC files into track-length FLAC files.

(My post on converting APE files; my post on splitting APE files; my post on splitting FLAC files. You will want to go over these other articles if you have not done any of this before as there are some dependencies which you will not likely have installed.)

That was rather satisfying, but I then found that I would locate entire discographies where every album was a single APE or FLAC.  I did not enjoy the prospect of running each command on each pair of files.

That’s why scripting languages exist.  It’s a good thing I bought that BASH book.

Anyway, this script is designed to recursively locate APE, FLAC, and CUE files under a user-supplied root directory; then it runs shntool and cuetag on each APE/FLAC and CUE pair.  (You can read my previous posts linked above for more details about how these two commands work.)

It even cleans up after itself.  Since it deletes not only the temporary files it creates but also all the APE, FLAC, and CUE files it uses, you will want to pay close attention when the script tells you it is about to clean up all those files.  If you tell it to proceed (by hitting ENTER) without checking its work all those files will be deleted.  As such you should work from copies and you should check that it did what you wanted it to do before hitting ENTER when it asks about cleaning up all files.

Feel free to suggest any improvements.  I hope you find that it works well and improves your lives.

One important thing to note is that it is not an intelligent script.  It cannot determine for you whether a given APE or FLAC file is actually album length.  It assumes that you are offering it a folder (with or without sub-folders) which contains only album-length APE or album-length FLAC files with an accompanying CUE file (one matching pair per containing folder).

Sometimes a CUE file will be formatted incorrectly or have some other issue.  Sometimes they can be fixed by opening them in a text editor.  Probably they can always be fixed in a text editor, but it might be difficult to determine what’s wrong with the CUE file.  If a CUE file has issues you will see errors on the command line (if you are staring at your monitor) and the album-length file will not be split/tagged.  You’ll have to work on that pair separately, but since you are working from copies it won’t matter if they get deleted.

If you throw something unusual at it, it’s hard to imagine how it might behave.  It’s likely not dangerous but you should always work from copies and preserve the originals until you are satisfied.

A couple of things have changed since I first wrote this script and article.  First, the script no longer accepts ape files as input.  You will want to convert them directly into flac files before splitting.  This is very easy and at some point I’ll add notes on how to do that.  Second, the script no longer lives here because it’s easier for me to maintain and share it over at GitHub.

splitFLAC.sh at GitHub

Have a great time!


BASH Examples

I have been earnestly studying BASH for a while now, but I recently came across something that will be very useful and I thought I’d pass it along.

Famously enough I run Ubuntu (a Debian-based distribution of Linux).  This is where I do my scripting in BASH, though I also use BASH on my Mac (native) and Windows (through Cygwin).

I have discovered a package called bash-doc which contains a plethora of BASH example scripts and code chunks.

To install bash-doc you just mark it for installation in Synaptic and Apply that change.

  • System —> Administration —> Synaptic Package Manager
  • Scroll down to locate bash-doc
  • Right-click on bash-doc and select Mark for Installation
  • Click the Apply button
  • Double-check the package you are installing and click OK

Alternatively you could enter this simple line of code directly into BASH via the terminal: sudo apt-get install bash-doc (your password will be required either way).

Once you have installed this package you will find a host of examples (on Ubuntu they should land under /usr/share/doc/bash/examples/).


Have fun with that.  I know I will.


Backup Script: A Love Story

I have written a few backup scripts by now. Every time I do I find a new interesting challenge somewhere in the task. As such I’d like to talk a little about my most recent backup script and offer my script to the community at large.

First of all, I tend to use rsync for backups. It’s powerful and it works well. You can use rsync to backup any file system and so it is also very flexible in a network or on a virtual machine. I’m not going to dive into the man page for rsync, but you will want to take a look there (man rsync) because there is a wealth of information about the various switches available for that command. I have selected the switches that fit my purposes and that is what is displayed below.

Next, where possible I prefer to leave my backup drives unmounted until they are actually needed for the backup process. I consider my current system for dealing with this imperfect, but again that is what you will see in my current script.

Finally, the server I have written this script for is running the desktop version of Ubuntu. This is not likely important in any way except that (as you will see) if I had been running the server version I would not likely have had the same problems when I did my test run.

Ok, so here is my current script.

# Simple Back Up Script
# by JamesIsIn from JamesIsIn.com
# Do something nice today.

# run in cron Mondays at 4 AM
# [0 4 * * 1 /home/[scriptpath] >/dev/null 2>&1]

# redirect from script --> sends all STDERR to log file

exec 2> /home/[username]/Desktop/$(date +%Y%m%d)
## exec 2> sends all subsequent standard error to a dated log file at the specified location

## backup [DriveA]

if ! mountpoint -q /media/[BUDriveA]; then
   mount -t [filesystem] -U [UUID goes here; no brackets or quotes] /media/[BUDriveA]
fi

# Note rsync -a copies permissions but will not copy owner:group if not run as root

rsync -ailS --delete --progress /media/[DriveA]/[Folder] /media/[BUDriveA]

umount /media/[BUDriveA]

## backup [DriveB]

if ! mountpoint -q /media/[BUDriveB]; then
   mount -t [filesystem] -U [UUID goes here; no brackets or quotes] /media/[BUDriveB]
fi

# Note rsync -a copies permissions but will not copy owner:group if not run as root

rsync -ailS --delete --progress /media/[DriveB]/[Y] /media/[BUDriveB]
rsync -ailS --delete --progress /media/[DriveB]/[Z] /media/[BUDriveB]

umount /media/[BUDriveB]


As you can see I like to keep my scripts well documented. I encourage you to do the same. Memory is fallible, after all.

(Note: Everything in square brackets [] will be replaced in your script.)
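The exec 2> line near the top of the script is what does the logging; once it runs, everything any later command writes to standard error lands in the dated file. A minimal standalone demo (using a temp file rather than a Desktop path):

```shell
#!/usr/bin/env bash
# Demonstrate exec 2> : it redirects stderr for the rest of the script.

log="$(mktemp)"
exec 2>"${log}"

echo "this goes to stdout"
echo "this goes to the log" >&2
```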


[filesystem]: This is where you specify the file system type you are using; it is called out by the -t argument. For my script it was ext3.


[UUID]: This is a unique identifier for a drive; it is called out by the -U argument. I prefer using UUIDs because other drive/partition designations can change (for instance, sda1 can become sdb1 if you add a new drive). (For information on adding drives to your system see this post.)

My backup script manages the backups for three directories (recursive) over two drives. I first test to see if a backup drive is mounted and if it is not I mount it. That’s the job of the if statements. (This is what I consider imperfect and will seek to improve as time moves forward.) It works well enough, and if you don’t want your backup drives to be mounted all the time this is a decent way of dealing with the matter.

You will also note that I unmount each drive as I finish with it (umount).

You will also see that in the DriveB example I back up (synchronize) two directories (Y & Z). This also helps to make it clear that you do not need to specify the name of the directory at the backup location.

Lastly, you see my note about -a copying permissions and requiring root to also copy the owner and group information. As such I put this script into a cron job as root.


It’s easy enough to do. Just open cron as root:

sudo crontab -e

You will be prompted for your password. You can learn more here. You can see my cron entry for this script in a comment in my script above. The man page for cron (man cron) will help you understand how 0 4 * * 1 means every Monday at four in the morning.
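Spelled out, the crontab entry from the script’s comment breaks down like this (note the 2>&1 at the end, which sends standard error along with standard output to /dev/null):

```
# minute hour day-of-month month day-of-week command
# 0      4    *            *     1           => 04:00 every Monday
0 4 * * 1 /home/[scriptpath] >/dev/null 2>&1
```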

Let’s talk about some of the mistakes I made.

The first big snag was not having the if statements correct. I left out the UUID’s and so the script did not mount the drive for the first sync operation. The if statement tested to see if DriveA was mounted and found that it was not. Then it ran the mount command which failed because no valid drive (UUID) was specified.

Because the drive was not mounted at the mount point rsync began synchronizing data to the mount folder and not the mounted drive. This caused the script to fail (once the OS drive containing / was full—about 105 GB later) and borked the / partition.

I was not able to restart Gnome (Gnome, the desktop environment, requires some free space on your / partition to function and mine was 100% full). I ssh’d into the machine from a Windows box nearby (using Cygwin) and maybe two hours later I sorted out what I had done. I was able to remove (using the rm command) the offending folder after making absolutely certain DriveA was in fact not mounted. After that I was able to reboot and get back into Gnome. (However, I did have to run fsck on the drive probably due also to the drive having filled itself. If you boot your system and get a shell stating you cannot login try running fsck and answering y to all the fixit questions.)

Oops. This is why we make test runs, right?

So I fixed the if statements (specifically the mount and umount commands) and that took care of that.

Then the script ran fine through the backup for DriveA but finished DriveB in a few seconds. Not possible. I looked back at the script and realized that I had specified the backup location for both the source and the destination. Damn it. Fixed that and DriveB was synchronizing properly.

I hope this helps you out.

(Thanks to Ian over at the Ubuntu Forums for his suggestions while I was troubleshooting.)

Happy scripting.


BAShing sudo through a Pipe

I fixed my dad’s Windows box and after getting it all set up I figured I would create a backup image based on the completed build, installations, and updates.  A nice best-of restore point, eh?

So I booted the machine into an Ubuntu 10.04 Live CD, created the necessary partition using gparted, and mounted the partition onto which I would create the backup.

I tried using dd to create the backup image, but found that my backup space was too small for the full OS partition.  To build the backup I ran this:

# dd command from partition to file

sudo dd if=/dev/hdb of=/path/to/image


Not such a problem because I could of course pipe my dd output into gzip and compress my backup image.  Typically if you want to compress the image you would merely pipe the dd output into gzip like this:

# normal format for piping dd into gzip

dd if=/dev/hdb | gzip > /path/to/image.gz


The problem is that when I used sudo on this full command I hit a permission-denied error.  This is because sudo only applies to the first command (dd) and not to any subsequent commands (in this case gzip)—sudo does not flow through the pipe.  However, I pulled out my trusty BASH Cookbook and found a way to run the whole thing under sudo:

# proper way to manage using dd, gzip, and sudo

sudo bash -c 'dd if=/dev/hdb | gzip > /path/to/image.gz'


This runs a bash terminal as root (non-interactive) for the duration of the quoted command and then exits back into the interactive terminal I had been using.  Worked great.
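The same single-shell pattern works for any pipeline whose final redirect needs to happen inside the child shell.  Here is a harmless sketch of it, using printf in place of dd and a temp file in place of the disk image:

```shell
#!/usr/bin/env bash
# The whole pipeline, redirect included, runs inside one child shell.
# (With sudo in front, that entire child shell would run as root.)

out="$(mktemp)"
bash -c "printf 'hello' | gzip > ${out}"

gunzip -c "${out}"   # hello
```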

(The drive I backed up was /dev/hdb; adjust your code accordingly.)

Thanks, O’Reilly.


Update to My Renaming Script


I have vastly improved my renaming script and published it to GitHub for your convenience.


Original article:

A while back I wrote a post on renaming FLAC files to include their associated disc numbers.  For example, if you have a file from disc 1 named “01 - Track Name.flac” my script would change it to “01.01 - Track Name.flac”.

This updated version of the script allows the user to enter the path to the folder needing files renamed and the associated disc number.

I think this is a slightly simpler method than my previous method for using this script.

Here is the current script:

# Please refer to note at top of article for direction to current script.

This version also handles FLAC extensions in any case (flac=FLAC=FlAc &c).
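The transformation itself is simple enough to sketch in a few lines.  This is not the published script, just a minimal illustration of the renaming (disc number prefixed, any-case flac extension matched) run against a throwaway folder:

```shell
#!/usr/bin/env bash
# Prefix each "NN - Name.flac" with a disc number:
# "01 - Track Name.flac" becomes "01.01 - Track Name.flac"

disc="01"
dir="$(mktemp -d)"
touch "${dir}/01 - Track Name.FlAc" "${dir}/02 - Other Track.flac"

# the glob matches the extension in any case (flac=FLAC=FlAc &c)
for f in "${dir}"/*.[Ff][Ll][Aa][Cc]; do
     mv "${f}" "${dir}/${disc}.$(basename "${f}")"
done

ls "${dir}"
```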

If I make any updates, I’ll likely just change this post.  As such, I should say that this current version is current as of 8 March 2010.

Have fun with that.