


FTP Retrieval

Create a shell script, retrieve_one, with the contents of Listing 22.3 and with execution enabled (that is, command chmod +x retrieve_one).

Listing 22.3. retrieve_one—automating FTP retrieval.


#!/bin/sh
# Usage:  "retrieve_one HOST:FILE" uses anonymous FTP to connect
#     to HOST and retrieve FILE into the local directory.

MY_ACCOUNT=myaccount@myhost.com

HOST=`echo $1 | sed -e "s/:.*//"`
FILE=`echo $1 | sed -e "s/.*://"`
LOCAL_FILE=`basename $FILE`

     # -v:  report all statistics.
     # -n:  connect without interactive user authentication.
ftp -v -n $HOST << SCRIPT
     user anonymous $MY_ACCOUNT
     get $FILE $LOCAL_FILE
     quit
SCRIPT

retrieve_one is useful for such purposes as pulling a current copy of a FAQ into your local directory; start experimenting with it by making a request like the following:


retrieve_one rtfm.mit.edu:/pub/usenet-by-hierarchy/comp/os/linux/answers/linux/faq/part1
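The parsing step in Listing 22.3 is easy to verify on its own. This stand-alone sketch (the HOST:FILE string is just a sample) shows what the two sed substitutions and basename extract:

```shell
# Stand-alone check of the HOST:FILE parsing used in retrieve_one.
spec="rtfm.mit.edu:/pub/usenet-by-hierarchy/comp/os/linux/answers/linux/faq/part1"

HOST=`echo $spec | sed -e "s/:.*//"`     # everything before the colon
FILE=`echo $spec | sed -e "s/.*://"`     # everything after the colon
LOCAL_FILE=`basename $FILE`              # strip the leading directories

echo $HOST          # rtfm.mit.edu
echo $LOCAL_FILE    # part1
```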

HTTP Retrieval

For an HTTP interaction, let the Lynx browser do the bulk of the work. The Lynx browser that comes as part of the Red Hat distribution is adequate for all but the most specialized purposes. In those cases, pick up a binary executable of the latest Lynx and simple installation directions at http://www.crl.com/~subir/lynx/bin. Although most Lynx users think of Lynx as an interactive browser, it's also handy for dropping a copy of the latest headlines, with live links, in a friend's mailbox with


lynx -source http://www.cnn.com | mail someone@somewhere.com

To create a primitive news-update service, use a script along these lines:


#!/bin/sh
NEW=/tmp/news.new
OLD=/tmp/news.old
URL=http://www.cnn.com

     # Give the first "mv" below something to move.
touch $NEW
while true
do
     mv $NEW $OLD
     lynx -dump -nolist $URL >$NEW
          # Show any lines that changed since the last retrieval.
     diff $NEW $OLD
          # Wait ten minutes before starting the next comparison.
     sleep 600
done


and launch it in the background. Any changes in the appearance of CNN's home page will come to your screen every ten minutes. This simple approach is less practical than you might first expect because CNN periodically shuffles the content without changing the information. It's an instructive example, though, and a starting point from which you can elaborate your own scripts.
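Because of that shuffling, one easy refinement is to act only when the two snapshots actually differ; cmp -s gives a quiet equality test. A small sketch of the comparison step, with throwaway file names:

```shell
# Run the reporting step only when the new snapshot differs from the old.
OLD=/tmp/demo.old.$$
NEW=/tmp/demo.new.$$

echo "headline: markets steady" > $OLD
printf "headline: markets steady\nheadline: rates cut\n" > $NEW

if cmp -s $NEW $OLD
then
     echo "no change"
else
          # A real monitor might mail the diff here instead of printing it.
     diff $OLD $NEW
fi

rm -f $OLD $NEW
```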

Conclusions on Shell Programming

Shells are "glue"; if there's a way to get an application to perform an action from the command line, there's almost certainly a way to "wrap" it in a shell script that gives you power over argument validation, iteration, and input-output redirection. These are powerful techniques and well worth the few minutes of study and practice it takes to begin learning them.
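As a minimal sketch of that wrapping pattern, here is an invented countlines wrapper (the name and behavior are purely illustrative) that adds argument validation and iteration around input redirection into wc:

```shell
# countlines: illustrative wrapper showing argument validation,
# iteration, and input redirection.  The name is invented.
countlines () {
     if [ $# -eq 0 ]
     then
          echo "usage: countlines FILE ..." >&2
          return 1
     fi
     for f in "$@"
     do
          wc -l < "$f"      # redirection keeps wc from echoing the name
     done
}

printf "one\ntwo\nthree\n" > /tmp/demo.$$
countlines /tmp/demo.$$     # line count of the demo file
rm -f /tmp/demo.$$
```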

Even small automations pay off. My personal rule of thumb is to write tiny disposable one-line shell scripts when I expect to use a sequence even twice during a session. For example, although I have a sophisticated set of reporting commands for analyzing World Wide Web server logs, I also find myself going to the trouble of editing a disposable script such as /tmp/r9,


grep claird `ls -t /usr/cern/log/* | head -1` | grep -v $1 | wc -l

to do quick, ad hoc queries on recent hit patterns; this particular example reports on the number of requests for pages that include the string claird and exclude the first argument to /tmp/r9, in the most recent log.
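It's easy to rehearse such a pipeline against a synthetic log before pointing it at live data; the log format below is made up for the demonstration:

```shell
# Rehearse the /tmp/r9 pipeline on a made-up log file.
LOG=/tmp/demo.log.$$
cat > $LOG <<'EOF'
host1 GET /claird/resume.html
host2 GET /other/index.html
host3 GET /claird/images/photo.gif
EOF

     # Requests mentioning "claird", excluding those matching "images":
grep claird $LOG | grep -v images | wc -l
rm -f $LOG
```

Here the count comes out to 1: the resume.html request matches claird, and the photo.gif request is excluded by the second grep.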

cron and at Jobs

Linux comes with several utilities that manage the rudiments of job scheduling. at schedules a process for later execution, and cron (or crontab—it has a couple of interfaces, and different engineers use both these names) periodically launches a process.

cron and find—Exploring Disk Usage

One eternal reality of system administration is that there's never enough disk space. The following sections offer a couple of simple expedients for keeping on top of what's happening with your system.

Cores

cron use always involves a bit of setup. Although Appendix B gives more details on cron's features and options, I'll go carefully through an example here, one that helps track down core clutter.

You need at least one external file to start using the cron facility. Practice cron concepts by first entering


echo "0,5,10,15,20,25,30,35,40,45,50,55 * * * * date > `tty`" >/tmp/experiment

then,


crontab /tmp/experiment


and finally,


crontab -l

The last of these gives you a result that looks something like the following:


0,5,10,15,20,25,30,35,40,45,50,55 * * * * date > /dev/ttyxx

Every five minutes, the current time will appear in the window from which you launched this experiment.
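For reference, the five leading fields of a crontab entry select minute, hour, day of month, month, and day of week, in that order; the experiment above constrains only the minute field:

```
# minute  hour  day-of-month  month  day-of-week  command
# 0-59    0-23  1-31          1-12   0-7 (0 and 7 are Sunday)
0,5,10,15,20,25,30,35,40,45,50,55 * * * * date > /dev/ttyxx
```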

For a more useful example, create a /tmp/entry file with the single line


0 2 * * * find / -name "core*" -exec ls -l {} \;

Next, use the command


crontab /tmp/entry

The result is that each morning, at 2:00, cron launches the core-searching job and e-mails you the results when finished. This is quite useful because Linux creates files core* under certain error conditions. These core images are often large and can easily fill up a distressing amount of real estate on your disk. With the preceding sequence, you'll have a report in your e-mail inbox each morning, listing exactly the locations and sizes of a collection of files that are likely doing you no good.
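You need not wait until 2:00 a.m. to check the find invocation itself; it can be rehearsed against a scratch directory (the path here is arbitrary):

```shell
# Dry run of the core-hunting job against a scratch tree.
SCRATCH=/tmp/coredemo.$$
mkdir -p $SCRATCH/project
: > $SCRATCH/project/core      # a fake core file
: > $SCRATCH/project/notes.txt

find $SCRATCH -name "core*" -exec ls -l {} \;
rm -rf $SCRATCH
```

Only the fake core file shows up in the listing; under cron, that same output is what arrives in your mailbox.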

User Space

Suppose you've experimented a bit and have accumulated an inventory of cron jobs to monitor the health of your system. Now, along with your other jobs, you want your system to tell you every Monday morning at 2:10 which ten users have the biggest home directory trees (/home/*). First, enter


crontab -l >/tmp/entries

to capture all the jobs you've scheduled, and append the line


10 2 * * 1 du -s /home/* | sort -nr | head -10

to the bottom of /tmp/entries. Make the request


crontab /tmp/entries

and cron will e-mail the reports you seek.
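This pipeline, too, can be tried out on scratch directories before it's installed; the directory names and file sizes below are invented:

```shell
# Try the du | sort | head pipeline on two scratch "home" directories.
BASE=/tmp/dudemo.$$
mkdir -p $BASE/alice $BASE/bob
dd if=/dev/zero of=$BASE/alice/big bs=1024 count=200 2>/dev/null
dd if=/dev/zero of=$BASE/bob/small bs=1024 count=10 2>/dev/null

du -s $BASE/* | sort -nr | head -10      # biggest tree first
rm -rf $BASE
```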

at: Scheduling Future Events

Suppose you write a weekly column on cycles in the material world, which you deliver by e-mail. To simplify legal ramifications involving financial markets, you make a point of delivering it at 5:00 Friday afternoon. It's Wednesday now, you've finished your analysis, and you're
