Battery Powered Altoids LED Lamp


I've been interested in LED lamps for some time now, and finally bought a batch of bright white LEDs. I got these from LED Supply. The LEDs run at a maximum of 4.5 V at 20 mA and have a luminous intensity of 6400 mcd. They aren't the brightest LEDs I could find on the net, but they are brighter than most, and the price isn't too bad: $13.50 plus shipping for 10 of them. In contrast, Radio Shack wants over $5 each for their bright white LEDs (at about half the luminosity). LED Supply shipped two days after I ordered, and I received them by priority mail the next day, about 72 hours after placing my order.

I had a chat over IM with my father last night about the LED lamps, and he made a couple of suggestions that helped with the design. I was originally going to hack up my halogen desk lamp, but decided against that since I sometimes need the intense white light it provides, like when I was building this lamp. He suggested using household copper wire and wrapping the leads around it. I still think that's a good idea, and may pursue it for my next light, but I ended up using some 20 gauge wire, twisted tightly, for the stalk and an Altoids tin for the base.

Continue reading

using a bash ‘for loop’ to wget

One of the ways that I frequently fetch files from the internet is with wget. It is a very useful command-line utility, capable of fetching anything from a single file to a mirror of an entire site.

When looking for new and interesting music I often find myself on a page with several MP3 URLs. Using Firefox to download them all, even with a good download manager, is tedious. The following bash one-liner will read from stdin and download each URL it sees.

# while read url ; do wget "${url}" ; done
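Since wget actually hits the network, here is a dry-run sketch of the same loop with echo standing in for wget; the example.com URLs are made up for illustration.

```shell
# Dry run of the stdin-reading loop: echo stands in for wget, so nothing
# is actually downloaded. The example.com URLs are hypothetical.
printf '%s\n' \
  'http://example.com/track1.mp3' \
  'http://example.com/track2.mp3' |
while read url ; do echo wget "${url}" ; done
```

Drop the echo (and pipe a real list of URLs in) to fetch for real.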

Continue reading

read a file with bash

I'm going to try to post a few how-to articles for some of the simpler tasks. Many of these will be things that I use daily.

It is often necessary to read a file with bash, and act upon the entire line. There are many different ways to do this, but I'll outline two of the simpler methods, both suitable for stacking on a single command line.

For this exercise I'll assume the file is a list of files that we need to execute a command on.

# cat file.lst | while read line; do echo "${line}"; done
/tmp/file1.txt
/tmp/file with space.txt
#
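A common variant, sketched here, redirects the file into the loop instead of piping through cat; in bash this also keeps variables set inside the loop visible after it finishes. The file.lst contents below are fabricated so the example is self-contained.

```shell
# Variant: redirect the file into the loop instead of piping through cat.
# file.lst is created inline here so the sketch is self-contained.
printf '%s\n' '/tmp/file1.txt' '/tmp/file with space.txt' > file.lst
while read line; do echo "${line}"; done < file.lst
rm -f file.lst
```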

Continue reading

tagging mp3s

Final installment of the podcast bash script for a while. I will eventually add the ability to auto-sync with my iPod, but I'm still undecided on which command-line iPod tools I want to use. Currently gtkpod is doing a great job of keeping my iPod fed, and it's simple to load it up, add my podcast folder, and sync. gtkpod weeds out the dupes and automatically transfers just what's needed.

I'm just going to describe the tagmp3 function, much like I did for the updated getmp3 function last time. Read the script to see how I hook the tagmp3 function into getmp3. It could all use a bit more error handling, but it works well enough now to be useful.

I am using id3v2 from here. It is easy to compile and only has one dependency, id3lib. I'm using the id3lib and id3lib-devel RPMs for FC3 from FreshRPMs.
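The real function is behind the link, but a minimal sketch of a tagmp3-style wrapper might look like this. The function name, argument order, and RUN dry-run variable are my own inventions; only id3v2's basic -a (artist), -A (album), and -t (title) flags are assumed.

```shell
# Hypothetical sketch of a tagmp3-style function wrapping id3v2.
# RUN=echo makes this a dry run; set RUN= to actually tag the file.
RUN=echo
tagmp3 () {
    local file="$1" artist="$2" album="$3" title="$4"
    ${RUN} id3v2 -a "${artist}" -A "${album}" -t "${title}" "${file}"
}
tagmp3 'show.mp3' 'Some Podcast' 'podCast' 'Episode 1'
```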

Continue reading

fetching mp3s

So far so good. The next exercise is to actually make this script useful by fetching the MP3s and arranging them into easy to understand directories.

The first order of business is to determine the name for the folder. I chose to use the channel name and, if that fails, the title of the episode. The latter isn't a very good choice, as we could easily end up with a directory for each item in the feed.
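That fallback can be sketched with bash's default-value expansion; CHANNEL and TITLE here are hypothetical variables standing in for values parsed from the feed.

```shell
# Sketch of the directory-name fallback using ${var:-default} expansion.
# CHANNEL and TITLE stand in for values parsed out of the feed.
CHANNEL=''                 # empty: pretend the channel name wasn't found
TITLE='Episode 42'
DIR="${CHANNEL:-${TITLE}}"
echo "${DIR}"
```

When CHANNEL is empty or unset, the expansion falls back to TITLE; otherwise DIR gets the channel name.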

Continue reading

reading a list of feeds

Tonight I'm going to start off the script with reading a list of feeds, and fetching them for parsing.

#!/bin/bash
BASEDIR="/mnt/usb0/mp3/podCast"
FEEDS="${BASEDIR}/feeds.lst"
while read URL ; do
    while read LINE ; do
        echo $LINE | sed -n 's/.*<link>\([^<]*\)<\/link>.*/\1/p'
    done < <(wget -q -O - $URL)
done < <(grep -v -e '^[;#]' -e '^$' $FEEDS)

podcast.001.sh

We're using grep to filter out lines starting with ; and #, as well as blank lines. We could get fancy and validate the URL, but this will suffice for now.
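Here is that filter run against a made-up feeds.lst:

```shell
# Demo of the comment/blank-line filter; this feeds.lst is fabricated.
printf '%s\n' \
  '; a comment' \
  '# another comment' \
  '' \
  'http://example.com/feed.xml' > feeds.lst
grep -v -e '^[;#]' -e '^$' feeds.lst
rm -f feeds.lst
```

Only the URL line survives; the two comment styles and the blank line are dropped.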

If all we really wanted was a list of mp3 URLs, we could pipe wget directly through the sed command, but I have plans to parse out more than just the mp3 links. To keep our files organized and minimize network traffic, I plan to also parse out the feed and show titles and the pubdates. We'll delve into the parser more tomorrow; for now, good night, and happy bashing.

using sed to parse a file


sed -n 's/.*href="\([^"]*\)".*/\1/p'

-n suppresses the automatic printing of each input line.

's/.../.../p' is a command. s is search and replace: s/pattern/replacement/. The trailing p is a flag that prints the result of the substitution. Since we used -n to suppress normal printing, this causes sed to print only the lines where a substitution was made. In most cases the replacement text will be static, but you can also use \1 through \9 to insert the text matched by the parenthesized groups in the pattern.

The pattern in this case is: .*href="([^"]*)".*
.*href=" matches the beginning of the line, including the href="
([^"]*) matches everything except a quote (the url itself).
".* matches the quote and the rest of the line.

Using \1 as the replacement text causes the URL, and only the URL, to be printed. If the line doesn't contain a matching pattern, sed silently continues on to the next line.

This method only catches one URL per line (the last one, in fact, since the leading .* is greedy), ignoring the rest. I will attempt to address that in a later article.

Here's a simple example:


# wget -q http://lr2.com/ -O - |sed -n 's/.*href="\([^"]*\)".*/\1/p'
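If that page is unreachable, the same extraction can be tried offline against a fabricated snippet of HTML:

```shell
# Offline version of the same extraction; these HTML lines are made up.
printf '%s\n' \
  '<a href="http://example.com/a.mp3">A</a>' \
  'no link on this line' \
  '<a href="http://example.com/b.mp3">B</a>' |
sed -n 's/.*href="\([^"]*\)".*/\1/p'
```

The two href values are printed on their own lines; the middle line matches nothing and produces no output.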

Notice the parens are escaped: in sed's basic regular expression syntax, \( and \) are what form the capture group, and the single quotes keep bash from interpreting anything inside the expression as a subshell.

Tomorrow I'll start to build this into a smarter parser that can be used to harvest both web pages and xml feeds for mp3 links. The end goal will be a simple script to fetch podcasts, add them to my library, and automatically dump them on my iPod if it's connected. Along the way I expect to learn a few more bash tricks.