Linux Gazette... making Linux just a little more fun!

Copyright © 1996-97 Specialized Systems Consultants, Inc.


Welcome to Linux Gazette!(tm)

Linux Gazette, a member of the Linux Documentation Project, is an on-line WWW publication that is dedicated to two simple ideas:


Table of Contents Issue #13


Weekend Mechanic
will return next month.


TWDT 1 (text)
TWDT 2 (HTML)
are files containing the entire issue: one in text format, one in HTML. They are provided strictly as a way to save the contents as one file for later printing in the format of your choice; there is no guarantee of working links in the HTML version.


Got any great ideas for improvements? Send your


This page written and maintained by the Editor of Linux Gazette,


"Linux Gazette...making Linux just a little more fun!"


 The Mailbag!

Write the Gazette at

Contents:


Help Wanted -- Article Ideas


 Date: Sun, 17 Nov 1996 18:49:56 -0600
Subject: a reply type thing...
From: Glenn E. Satan,

 
> Subject: Xwindows depth 
> From: James Amendolagine uq274@freenet.victoria.bc.ca 
>
> I have recently been messing with my x-server, and have managed
> to get a depth of 16, ie 2^16 colors. This works
> really nice with Netscape, but some programs (doom, abuse, and
> other games) wont work with this many colors. Do
> you know of a fix? I have tried to get X to support multiple
> depths--to no avail. The man-page suggests that some
> video cards support multiple depths and some don't. How do I know
> if mine does. 
>
> I would really like to see an article on this subject, 
I would like to say, yes, please, someone help... I thought maybe a reply would motivate someone a little more to write an article on this.
(All right, a second request for help in this area. Anybody out there with suggestions and/or wanting to write an article? --Editor)


 Date: Sun, 01 Dec 1996 00:20:12 +1000
Subject: Quilting and geometry
From: Chris Hennessy,

I liked your comment about quilting being an interest. We tend to forget that people have interests outside of computers in general (and linux in particular).

Just like to say thanks for what is obviously an enormous effort you are putting into the gazette. I'm new(ish) to linux and I find it a great resource, not to say entertaining.

Has anyone suggested an article on the use of Xresources? As I said I'm fairly new and find this a bit confusing... maybe someone would be interested in an example or three?

Oh and with the quilting and geometry ... better make sure it's not the 80x25+1-1 variety.

(Thanks, LG is a lot of work, as well as a lot of fun. And yes, I do have a life outside of Linux. Anyone interested in writing about Xresources? Thanks for writing. It's always nice to know we are attracting new readers. --Editor)


 Date: Wed, 4 Dec 1996 13:33:26 +0200 (EET)
Subject: security issue!
From: Arto Repola,

Hi there!

I was wondering whether you could write something in the Gazette about Linux security: how to improve it, how to set up a firewall, shadow password systems, etc.

I'm considering building my own Linux server and I really would like to make it as secure as possible!

Nothing more this time!

http://raahenet.ratol.fi/~arepola

(And another great idea for an article. Any takers? --Editor)


 Date: Wed, 04 Dec 1996 08:08:06 -0700
Subject: Reader Response
From: James Cannon,
Organization: JADS JTF

Great Resource,

I really like the resource Linux offers new users. I have already applied a few tricks to my PC. I wish someone would explain how to use the GNU C/C++ compiler with Linux. It is a tool resting on my hard drive. With commercial compilers, there is a programming environment that links libraries automatically. Are there any tricks to command-line C/C++ programming with Linux? Stay online!

James Cannon

(Thanks for the tip. Online is the best place to be. Anyone out there got some C++ help for this guy? --Editor)
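As a rough sketch of the kind of command-line usage being asked about (the file names and the math-library example below are purely illustrative, assuming a standard GNU toolchain):

  # compile and link one C++ source file into an executable
  g++ -Wall -O2 -o hello hello.cc

  # compile C and link an extra library by hand (here the math library)
  gcc -Wall -o calc calc.c -lm

  # larger projects are usually driven by a Makefile, so "make" handles
  # the compile/link steps and library flags for you
  make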


 Date: Mon, 9 Dec 1996 23:27:21 +0000 (GMT)
Subject: Linux InfraRed Support
From: Hong Kim,

Hi,

I have been so far unsuccessful in finding information for InfraRed support on Linux.

I am particularly interested in hooking up Caldera Linux on a Thinkpad 560 using Extended Systems JetEye Net Plus. Caldera on Thinkpad I can handle but the JetEye allows connection to ethernet or token ring networks via IR.

My searches of the Linux Resources pages come up negative. I have posted to Usenet and also e-mailed any web master that has any mention of ThinkPad or IR on their pages. Still no answer.

Can you help me find this information? If I am successful, I would be willing to write an article about it.

Hong

(I have sent your question on to Linux Journal's Tech Support Column. Answers from this source can be slow, as the author contacts the companies involved. Sounds like you have covered all the bases in your search -- can anyone out there help him? If you write the article, I'll be happy to post it in the LG so the next person who needs this information will have a quicker answer. --Editor)


 Date: Thu, 5 Dec 96 13:00:01 MET
Subject: Linux networking problem ...
From:

Hi there,

First I have to apologize for writing to this address with my problem, but I don't know where to search for an answer, and the university's network is so damned slow that surfing the net for an answer is no fun. Another reason is that I've got no access to Usenet... which means I can't post in comp.os.linux.networking... 8-((

I tried to find a news server near Germany which allows posting without using that damned -> identd <- but found none; maybe you know where to find a list of (free) news servers?

Here's the problem:
I want to set up Linux on our university's LAN but ran into problems, because the LAN is VINES-IP based, so normal TCP/IP packet drivers won't work. The admin says I need a driver which can tunnel the normal Linux TCP/IP packets inside those VINES-IP packets, so that they can be sent over the LAN to the box which has the Internet connection...

Maybe you know if such a thing is available and/or where I can get it. Or maybe you can give me some email addresses of people with real knowledge about Linux (maybe even that of Linus T. himself) and its drivers.

Hope you can help me 8-))

Thanks in advance
Stefan 8-))

(I've sent your problem on to Linux Journal's Technical Support column and will post it in Linux Gazette's Mailbag next month. Neither one will give you a fast answer.

I did a search of LG, LJ and SSC's Linux Resources using VINES as the keyword. I found only one entry, from an author's biography. It's old -- March 1995 -- and the guy was in the Marine Corps then, so it may or may not be a good address. Anyway, here's what it said:
"Jon Frievald ... manages Wide Area Network running Banyan VINES. ... e-mail to jaf@jaflrn.liii.com"

Anyway, you might give him a try for ideas.

For faster access to LG have you tried any of LG's mirror sites in Germany:

Please note that mirror sites won't help search time -- all searching is done on the SSC site. --Editor)


General Mail


 Date: Sat, 30 Nov 1996 20:35:17 -0600 (CST)
Subject: Re: Slang Applications for Linux
From: Duncan Hill,
To: Larry Ayers,

On Sat, 30 Nov 1996, Duncan Hill wrote:

Greetings. I was reading your article in the Linux Gazette, and thought you might be interested to know that Lynx also has its own web site now at:
http://lynx.browser.org/
It's up to version 2.6 now, and is rather nice, especially with slang included :)

Duncan Hill, Student of the Barbados Community College

(Thanks for the tip! I really appreciate responses from readers; confirms that there are really readers out there! --Larry Ayers)


 Date: Sat, 30 Nov 96 16:42:58 0200
Subject: Linux Gazette
From: Paul Beard,

Hello from Zimbabwe.

Very nice production. Keep up the good work.

Regards,
Paul Beard.

(Thanks. --Editor)


 Date: Thu, 28 Nov 1996 23:54:38 +0000
Subject: Thanks!
From: Russ Spooner,
Organization: Kontagx

Hi,
I have been an avid reader of Linux Gazette since its inception! I would just like to say that it has helped me a lot and that I am really glad that it has become more regular :-)

The image you have developed has come a long way, and it is now one of the best-organized sites I visit!

Also I would like to thank you for the link to my site :-) it was a real surprise to "see myself up in lights" :)

Best regards!
Russ Spooner, http://www.pssltd.co.uk/kontagx


 Date: Thu, 28 Nov 1996 12:49:12 -0500
Subject: LG Width
From: frank haynes,
Organization: The Vatmom Organization

Re: LG page width complaint, LG looks great here, and I don't think my window is particularly large. Keep up the fine work.

--Frank, http://www.mindspring.com/~fmh

(Good to hear. --Editor)


 Date: Fri, 29 Nov 1996 10:30:32 +0000
Subject: LG #12
From: Adam D. Moss.

Nice job on the Gazette, as usual. :)

Adam D. Moss / Consulting

( :-) --Editor)


 Date: Tue, 3 Dec 1996 12:55:18 -0800 (PST)
Subject: Re: images in tcsh article
From: Scott Call,

Most of the images in the TCSH article in issue 12 are broken

-Scott

(You must be looking at one of the mirror sites. I inadvertently left those images out of the issue12 tar file that I made for the mirror sites. When I discovered it yesterday, I made an update file for the mirrors. Unfortunately, I have found that not all the mirrors are willing to update LG more than once a month, so my mistakes remain until the next month. Sorry for the inconvenience and thanks for writing. --Editor)


 Date: Fri, 06 Dec 1996 21:21:00 +0600
Subject: 12? why can you make so bad distributive?????????????
From: Sergey A. Panskih,

i ftpgeted lg12 and untar.gz it as made with lg11. lg11 was read as is: with graphics and so, but lg12... all graphics was loosed. i've verified hrefs and found out that href was written with principial errors : i must copy all it to /images in my httpd server!!!!

this a pre-alpha version!!!

i can't do so unfixed products!!!

i'm sorry, but you forgotten how make a http-ready distrbutions... :)

Sergey Panskih

P.S. email me if i'm not true.

(I'm having a little trouble with your English and don't quite understand what "all graphics was loosed" means. You shouldn't have to copy anything anywhere: what are you copying to /images?

There is one problem I had that may apply to you. Are you throwing away previous issues and only getting the current one? If so, I apologize most humbly. I was not aware until this month that people were doing this and when I made the tar file I included only new files and those that had been changed since the last month. To correct this problem I put a new tar file on the ftp site called standard_gifs.html. It's not that I've forgotten how to make http-ready distributions, it's that I'm just learning all the complexities. In the future I will make the tar file to include all files needed for the current single issue, whether they were changed or not.

I am very sorry to have caused you such problems and distress. --Editor)


 Date: Mon, 02 Dec 96 18:13:48
Subject: spiral trashes letters
From:

It's clever and pretty, but the spiral notebook graphic still trashes the left edge of letters printed in the issue 12 Mailbag.

Problem occurs using OS/2's Web Explorer version 1.2 (comes with OS/2 Warp 4.0). Problem does NOT occur using Netscape 2.02 for OS/2 beta 2 (the latest beta for OS/2).

Problem occurs even while accessing www.ssc.com/lg

Jep Hill

(Problem will always occur with versions of either Microsoft Explorer or Netscape before 2.0. It is caused by a bug in TABLES that was fixed in the 2.0 versions. I don't have access to OS/2's Web Explorer, so I can only guess that it's the same problem. I'd recommend always using the latest version of your browser. --Editor)


 Date: Mon, 9 Dec 1996 10:14:04 -0800 (PST)
Subject: Background
From:

I run at a resolution of 1152x846 (a bit odd I suppose) and although the Gazette pages look very nice indeed, it is a bit hard to read when I have my Netscape window maximized. The bindings part of the background seems to be optimized for a width of 1024 and thus tiles over again on the right side of the page. This makes reading a bit difficult as some of the text now overlaps the bindings on the far right.

I'm not sure if that's a great description of the problem, but I can easily make you a screenshot if you want to see what I mean.

Anyhow, this is only a minor annoyance--certainly one I'm willing to live with in order to read your great 'zine. :)

Ray Van Dolson -=-=- Bludgeon Creations (Web Design) - DALnet #Bludgeon -- http://www.shocking.com/~rayvd/

(A screen shot won't be necessary. When the web master first put the spiral out there, the same thing happened to me -- I use a large window too, but not as large as yours. He was able to expand it to fix it at that time. I notified him of your problem, but I'm not sure whether he can expand it even more. We'll see. Glad it's a problem you can live with. :-) --Editor)


 Date: Sat, 7 Dec 1996 22:16:55 +0100 (MET)
Subject: Problem with Printing.
From:

Hi,

This is just to let you people know, that there might be a slight problem. I want to point out and make it perfectly clear that this is NOT a complaint. I feel perfectly satisfied with the Linux Gazette as it is.

However sometimes I prefer to have a printed copy to take with me. Therefore I used to print the LG. from Netscape. I'm using the new 3.1 version now. With the last two issues I have difficulties doing so. All the pages with this new nice look don't print too well. The graphics show up at all the wrong places and only one page is printed on the paper. The rest is swallowed. Did you ever try to print it?

I had to use an ancient copy of Mosaic, that doesn't know anything about tables, to print these pages. They don't look too good this way either, and never did. I know this old Mosaic is buggy. At least it doesn't swallow half of the stuff. This could as well be a bug in Netscape. I know next to nothing about HTML.

Anyway, have fun.
Regards Friedhelm

(No, I don't try to print it, but will look into it. Are you printing out "TWDT" from the TOC or trying to do it page by page? It is out there in multi-file format and so if you print from say the Front Page, the front page is all you'll get. "TWDT" is one single file containing the whole issue, and the spiral and table stuff are removed so it should print out for you okay. Let me know if this is already what you are printing, so I'll know where to look for the problem. --Editor)


 Date: Wed, 18 Dec 1996 04:02:37 +0200
Subject: Greetings
From: Trucza Csaba, ctrucza@cemc.soroscj.roi
To: fiskjm@ctrvax.Vanderbilt.Edu

Well, Hi there!

Amazing. I've just read the Linux Gazette from the first issue to this one, the 12th (actually I read just the first 7 issues through, because the others were not downloaded correctly).

It's 4 in the morning and I'm enthusiastic. I knew Linux was good, I'm using it for a year (this is because of the lack of my english grammar, I mean the previous sentence, well...), so I knew it was good, but I didn't expect to see something so nice like this Gazette.

It's good to see that there are a WHOLE LOT of people with huge will to share.

I think we owe You a lot of thanks for starting it.

Merry Christmas, a Happy New Year, and keep it up!

Trucza Csaba, Romania

(Thanks, I will. -- Editor)


 Date: Mon, 23 Dec 1996 12:16:30 -0800 (PST)
Subject: lg issue 12 via ftp?
From: schwarz@monet.m.isar.de (Christian Schwarz)

I just saw that issue #12 is out and accessible via WWW, but I can't find the file on your ftp server nor on any mirrors.

(Sorry for the problems. We changed web servers and I went on vacation. Somehow in the web server change, some of the December files got left behind. I didn't realize until today that this had happened. Sorry for the inconvenience. --Editor)


 Date: Fri, 20 Dec 1996 00:31:45 -0500
Subject: Great IDEA
From: Pedro A Cruz, pcruz@panixc.com

Hi:

I visited your site recently and was astounded by the wealth of information there. I have lots of bandwidth to read your site. I noticed that you have issues for download. I think it would be a great service to the Linux community if you considered publishing a CD-ROM (maybe from Walnut Creek CDROM) as a subscription item.

pedro

(Yes, that is a good idea. I'll talk to my publisher about it. --Editor)


 Date: Sun, 22 Dec 1996 20:24:51 -0600
Subject: Linux as router
From: Robert Binz, rbinz@swconnect.net

I have found myself trying to learn how to use Linux as a Usenet server to provide news feeds to people, and to use Linux as an IRC server. Information on these topics is hard to come by. If you have any sources on these subjects that you can point me to, I would be most appreciative.

But anyhow, I found an article in SysAdmin (Jan 96 (5.1)) titled "Using Linux as a Router," by Johnathon Feldman. Is it possible to reprint this article or get the author to write a new one for you?

TIA
Robert Binz

(I'll look into it. In the meantime, I've forwarded your letter to a guy I think may be able to help you. --Editor)


 Date: Fri, 13 Dec 1996 03:57:09 -0500 (EST)
Subject: Correction for LG #12
From: Joe Hohertz, jhohertz@golden.net
Organization: Golden Triangle On-Line

Noticed the following in the News section.

A couple of new Linux Resources sites:

(Seems I had Joe's address wrong. Sorry. --Editor)


 Date: Tue, 17 Dec 1996 01:19:43 -0500
Subject: One-shot downloads
From: David M. Razler, david.razler@postoffice.worldnet.att.net

Folks:

While I realize that the economies of the LINUX biz require that there be some method of making money even on the distribution of free and "free" software, I have a request for those of us who 1) are currently scraping for the cash for our Internet accounts and 2) would like to try LINUX.

How about a one-shot download? I mean, oh, everything needed to establish a LINUX system in one ZIP'ed (or tar/gz'd, though zip is a more compatible format) file, one for each distribution?

I'm currently looking to establish LINUX on my "spare" PC, a 386DX-16 w/4 meg and a scavenged 2500MB IDE drive, etc. It will be relatively slow and limited, and lacks a CD-ROM drive, but it's free, since the machine is currently serving as a paperweight.

I could go out and buy a used CD-rom for the beast, or run a bastard connection from my primary, indispensable work machine and buy the CDs. But I am currently disabled and spending for these things has to be weighed against other expenses (admittedly, I am certainly lucky and not destitute, it would just be better)

I could get a web robot and download umpteen little files, puzzle them out and put them together, though the load on your server would be higher.

Or, under my proposed system, I could download Distribution Code, Documents, and Major accessories in one group, then go back for the individual bits and pieces I need to build my system.

Again, I realize that running your site costs money, and that people make money, admirably little money, distributing LINUX on CDs, with the big bucks (grin) of LINUX coming in non-free software, support and book sales.

But if the system is to spread, consider providing a series of one-shot downloads, possibly available only to individuals (I believe one could copyright the *package* and require someone downloading to agree to use it only on a single non-commercial system and not to redistribute it, but I am not an intellectual-property lawyer), to make life easier for those of us who need to learn a UNIX-style system and build one on the cheap.

dmr



This page written and maintained by the Editor of Linux Gazette,
Copyright © 1997 Specialized Systems Consultants, Inc.

"Linux Gazette...making Linux just a little more fun! "


More 2¢ Tips!


Send Linux Tips and Tricks to


Contents:


Another 2cent Script for LG

Date: Wed, 11 Dec 1996 23:34:58 +0100
From: Hans Zoebelein,

Hello LG people,

Here is a short script which will check from time to time that there is enough free space available on anything which shows up in mount (disks, CD-ROM, floppy...).

If space runs out, a message is printed to the screen every X seconds, and one mail message per filled device is fired off.

Enjoy!
Hans

 
#!/bin/sh

# 
# $Id: issue13.html,v 1.1.1.1 1997/09/14 15:01:39 schwarz Exp $
#

#
# Since I got mysterious error messages during compile when
# tmp files filled up my disks, I wrote this to get a warning 
# before disks are full.
#
# If this stuff saved your servers from exploding, 
# send praising email to zocki@goldfish.cube.net.
# If your site burns down because of this, sorry but I 
# warned you: no comps.
# If you really know how to handle sed, please forgive me :)
#

#
# Shoot and forget: Put 'check_hdspace &' in rc.local.
# Checks for free space on devices every $SLEEPTIME sec. 
# You even might check your floppies or tape drives. :)
# If free space is below $MINFREE (kb), it will echo a warning 
# and send one mail for each triggering device to $MAIL_TO_ME.
# If there is more free space than trigger limit again, 
# mail action is also armed again. 
#

# TODO: Different $MINFREE for each device. 
# Free /*tmp dirs securely from old junk stuff if no more free space.


DEVICES='/dev/sda2 /dev/sda8 /dev/sda9'         # devices; put your disks here
MINFREE=20480                                   # kb; below this do warning
SLEEPTIME=10                                    # sec; sleep between checks
MAIL_TO_ME='root@localhost'                     # fool; to whom mail warning


# ------- no changes needed below this line (hopefully :) -------

MINMB=0
ISFREE=0
MAILED=""
let MINMB=$MINFREE/1024         # yep, we are strict :)

while [ 1 ]; do
        DF="`/bin/df`"
        for DEVICE in $DEVICES ; do
                ISFREE=`echo $DF | sed s#.\*$DEVICE" "\*[0-9]\*" "\*[0-9]\*" "\*## | sed s#" ".\*##`
                
                if [ $ISFREE -le $MINFREE ] ; then
                        let ISMB=$ISFREE/1024
                        echo  "WARNING: $DEVICE only $ISMB mb free." >&2
                        #echo "more stuff here" >&2
                        echo -e "\a\a\a\a"
                        
                        if [ -z  "`echo $MAILED | grep -w $DEVICE`" ] ; then 
                                echo "WARNING: $DEVICE only $ISMB mb free.
(Trigger is set to $MINMB mb)" \
                                | mail -s "WARNING: $DEVICE only $ISMB mb free!" $MAIL_TO_ME
                                MAILEDH="$MAILED $DEVICE"
                                MAILED=$MAILEDH
                                # put further action here like cleaning 
                                # up */tmp dirs...
                        fi
                elif [ -n  "`echo $MAILED | grep -w $DEVICE`" ] ; then 
                        # Remove mailed marker if enough disk space 
                        # again. So we are ready for new mailing action.  
                        MAILEDH="`echo $MAILED  | sed s#$DEVICE##`"
                        MAILED=$MAILEDH
                fi
        done
        sleep $SLEEPTIME

done


Console Trick Follow-up

Date: Wed, 27 Nov 1996 16:20:06 -0500 (EST)
From: Elliot Lee,

Just finished reading issue #12, nice work.

A followup to the "Console Tricks" 2-cent tip:
What I like to do is have a line in /etc/syslog.conf that says:

  
*.*                                                     /dev/tty10
that sends all messages to VC 10, so I know what's going on whether I'm in X or text mode. Very useful IMHO.

-- Elliot, http://www.redhat.com/


GIF Animations

Date: Thu, 28 Nov 1996 20:41:22 -0600 (CST)
From: Greg Roelofs,

I too thought WhirlGIF (Graphics Muse, issue 12) was the greatest thing since sliced bread (well, aside from PNG) when I first discovered it, but for creating animations, it's considerably inferior to Andy Wardley's MultiGIF. The latter can specify tiny sprite images as parts of the animation, not just full images. For my PNG-balls animation (see http://quest.jpl.nasa.gov/PNG/), this resulted in well over a factor-of-two reduction in size (577k to 233k). For another animation with a small, horizontally oscillating (Cylon eyes) sprite, the savings was more than a factor of 20(!).

MultiGIF is available as source code, of course. (And I had nothing to do with it, but I do find it darned handy.)

Regards,
Greg Roelofs, http://pobox.com/~newt/
Newtware, Info-ZIP, PNG Group, U Chicago, Philips Research, ...


Re: How to close and reopen a new /var/adm/messages file

Date: Thu, 05 Dec 1996 01:09:27 -0800
From: CyberTech,

Regarding the posting in issue #12 of your gazette, how to backup the current messages file & recreate, here is an alternative method...

Place the lines at the end of this message in a shell script (/root/cron/swaplogs in this example). Don't forget to make it +x! Execute it with 'sh scriptname', or by adding the following lines to your (root's) crontab:

 
# Swap logfiles every day at 1 am, local time
0 01 * * *       /root/cron/swaplogs
The advantage of this method over renaming the logfile and creating a new one is that syslogd does not need to be restarted.
 
#!/bin/sh
cp /var/adm/messages /var/adm/messages.`date +%d-%m-%y_%T`
cat /dev/null >/var/adm/messages

cp /var/adm/syslog /var/adm/syslog.`date +%d-%m-%y_%T`
cat /dev/null >/var/adm/syslog

cp /var/adm/debug /var/adm/debug.`date +%d-%m-%y_%T`
cat /dev/null >/var/adm/debug


How to truncate /var/adm/messages

Date: Mon, 02 Dec 1996 16:47:20 +0100
From: Eje Gustafsson,
 
>In answer to the question: 
>
>            What is the proper way to close and reopen a new >/var/adm/messages
>            file from a running system? 
>
>       Step one: rename the file. Syslog will still be writing in it >after renaming so you don't
>       lose messages. Step two: create a new one. After re-initializing >syslogd it will be used.
>just re-initialize. 
>
>          1.mv /var/adm/messages /var/adm/messages.prev 
>          2.touch /var/adm/messages 
>          3.kill -1 pid-of-syslogd 
>
>       This should work on a decent Unix(like) system, and I know Linux >is one of them. 
This is NOT a proper way to truncate /var/adm/messages.

It is better to do:

  1. cp /var/adm/messages /var/adm/messages.prev
  2. >/var/adm/messages or cp /dev/null /var/adm/messages (either of them makes the file empty).
  3. No more.
The problem is that when you remove /var/adm/messages, syslogd gets confused and unhappy and you have to send syslogd a HUP signal; but if you just set the file length to zero without removing the file, syslogd doesn't complain. And if you are really unlucky, your system will go down because you didn't create /var/adm/messages quickly enough, or forgot to.

Best of regards,
Eje Gustafsson, System Administrator
THE AERONAUTICAL RESEARCH INSTITUTE OF SWEDEN


Info-ZIP encryption code

Date: Thu, 28 Nov 1996 20:58:39 -0600 (CST)
From: Greg Roelofs,

This is a relatively minor point, but Info-ZIP's Zip/UnZip encryption code is *not* DES as reported in Robert Savage's article (LG issue 12). It's actually considerably weaker, so much so that Paul Kocher has published a known-plaintext attack (the existence of which is undoubtedly the reason PKWARE was granted an export license for the code). While the encryption is good enough to keep your mom and probably your boss from reading your files, those who desire *real* security should look to PGP (which is also based on Info-ZIP code, but only for compression).

And while I'm at it, Linux users will be happy to learn that the upcoming releases of UnZip 5.3 and Zip 2.2 will be noticeably faster than the current publicly released code. In Zip's case this is due to a work-around for a gcc bug that prevented a key assembler routine from being used--Zip is now 30-40% faster on large files. In UnZip's case the improvement is due to a couple of things, one of which is simply better-optimized CRC code. UnZip 5.3 is about 10-20% faster than 5.2, I believe. The new versions should be released in early January, if all goes well. And then... we start working on multi-part archives. :-)

Greg Roelofs, http://pobox.com/~newt/
Newtware, Info-ZIP, PNG Group, U Chicago, Philips Research, ...


Kernel Compile Woes

Date: Mon, 2 Dec 1996 21:35:29 +0400 (GMT-4)
From: Duncan Hill,

Greetings. Having been through hell after a recompile of my kernel, I thought I'd pass this on.

It all started with me compiling a kernel for Java binary support... who told me to do that? Somehow I think I got experimental code in... even worse :> Anyway, it resulted in a crash, and I couldn't recompile after that.

Well, after several cries for help, and trying all sorts of stuff, I upgraded binutils to 2.7.0.3, and told the kernel to build ELF support and in ELF format, and hey presto. I'd been wrestling with the problem for well over a week, and every time I'd get an error. Unfortunately, I had to take out sound support, so I'm going to see if I can add it back in.

I have to say thank you to the folks on the linux-kernel mailing list at vger.rutgers.edu. I posted there once, and had back at least 5 replies in an hour. (One came back in 10 minutes).

As for the LG, it looks very nice seen through Lynx 2.6 (no graphics to get messed up :>). I love the Weekend Mechanic and the 2 Cent Tips mainly. Perhaps one day I'll contribute something.

Duncan Hill, Student of the Barbados Community College http://www.sunbeach.net/personal/dhill/dhill.htm http://www.sunbeach.net/personal/dhill/lynx/lynx-main.html


 Letter 1 to the LJ Editor re Titlebar

Date: Sat, 21 Dec 1996 15:18:01 -0600
From: Roger Booth,
To: linux@ssc.com

The Jan97 Issue 33 of Linux Journal contained the "Linux Gazette Two Cent Tips". I was interested in the tip "X Term Titlebar Function". Although the text of the tip stated that it would work in ksh-based systems, I could not get it to work as shown. I think there are three problems. First, I think there are a few transcription errors in the script. Second, I believe the author is using embedded control characters, and it was not obvious to me which character sequences are representations of control characters and which characters should be typed verbatim. Third, the author uses a command-line option to the echo command which is not available on all Unix platforms.

I finally used the following script:

    if [ ${SHELL##/*/} = "ksh" ] ; then
        if [[ $TERM = x"term" ]] ; then
            HOSTNAME=`uname -n`
            label () { echo "\\033]2;$*\\007\\c"; }
            alias stripe='label $LOGNAME on $HOSTNAME - ${PWD#$HOME/}'
            cds () { "cd" $*; eval stripe; }
            alias cd=cds
            eval stripe
        fi
    fi
I don't use vi, so I left out that functionality.

The functional changes I made are all in the arguments to the echo command. The changes are to use \\033 rather than what was shown in the original tip as ^[, to use \\007 rather than ^G, and to terminate the string with \\c rather than use the option -n.

On AIX 4.1, the command "echo -n hi" echoes "-n hi"; in other words, -n is not a portable command-line option to the echo command. I tested the above script on AIX 3.2, AIX 4.1, HPUX 9.0, HPUX 10.0, Solaris 2.4 and Solaris 2.5. I'm still trying to get Linux and my Wintel box mutually configured, so I haven't tested it on Linux.

I have noticed a problem with this script. I use the rlogin command to log in to a remote box. When I exit from the remote box, the caption is not updated, and still shows the hostname and path that was valid just before I exited. I tried adding

    exits () { "exit" $*; eval stripe; }
    alias exit=exits
and
    rlogins () { "rlogin" $*; eval stripe; }
    alias rlogin=rlogins
Neither addition updated the caption to the host/path returned to. Any suggestions?

Roger Booth, rbooth@bmc.com
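(A small aside on the echo portability point above, and only a sketch: on shells whose printf built-in accepts octal escapes in the format string -- bash and ksh93 do -- printf sidesteps the echo -n versus \c question entirely. The label function could then be written as:

    label () { printf '\033]2;%s\007' "$*"; }

which behaves the same way without relying on any echo options.)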


 Letter 2 to the LJ Editor re Titlebar

 Date: Fri, 13 Dec 1996 23:03:37 -0700 (MST)
From: Gary Masters,

Some further clarification is needed with respect to the X Term Titlebar Function tip in the Linux Gazette Two Cent Tips column of the January 1997 issue. With regard to the -print option to find, Michael Hammel says, "Linux does require this." This is yet another example of "Your mileage may vary." Some versions of Linux do not require the -print option. And, although Solaris may not, SunOS 4.1.3_U1 and 4.1.4 do require the -print option. Also, if running csh or tcsh, remember to escape wildcards in the file specification ( e.g. find ./ -name \*txt\* ) so that the shell doesn't attempt to expand them.

Second, for those tcsh fans out there, here is an xterm title bar function for tcsh.

NOTE: This works on Slackware 3.0 with tcsh version 6.04.00, under the tab, fvwm, and OpenLook window managers. Your mileage may vary.

if ( $TERM == xterm ) then
  set prompt="%h> "
  alias cwdcmd 'echo -n "^[]2;`whoami` on ${HOST} - $cwd^G^[]1;${HOST}^G"'
  alias vi 'echo -n "^[]2;${HOST} - editing file-> \!*^G" ; vim \!* ; cwdcmd'
  alias telnet '/bin/telnet \!* ; cwdcmd'
  alias rlogin '/usr/bin/rlogin \!* ; cwdcmd'
  cwdcmd
else
  set prompt="[%m]%~% "
endif
  1. Check to see if tcsh is running in an xterm.
  2. Set the prompt to show the current history event number.
  3. Set the alias cwdcmd to display the user, host, and current path in the xterm title bar, and set the icon name to the host name. cwdcmd is a special tcsh alias, which if set holds a command that will be executed after changing the value of $cwd.
  4. Set a vi alias to display the user, host, and file name under edit in the xterm title bar. And run cwdcmd on exit to restore the xterm title bar and icon name.
  5. Alias telnet and rlogin to restore the xterm title bar and icon name upon exit. NOTE: Paths to telnet and rlogin may vary.
  6. Run the alias cwdcmd to set the initial xterm title bar and icon name.
  7. If this wasn't an xterm, set the prompt to show the hostname and path.

Gary Masters


    PPP redialer script--A Quick Hack

    Date: Sun, 08 Dec 1996 13:20:25 +0200
    From: Markku J. Salama,

    This here is the way I do it, but don't use it if your area has some regulations about redialing the same phone numbers over and over:

     
    #!/bin/sh
    
    # A quick hack for redialing with ppp by 
    # Tries 2 numbers sequentially until connected
    # Takes 1 cmdline parm, the interface (ppp0, ppp1...)
    
    # You need 2 copies of the ppp-on script (here called modemon{1,2}) with
    # different telephone numbers for the ISP. These scripts should be slightly
    # customized so that the passwd is _not_ written in them, but is taken
    # separately from the user in the main (a.k.a. this) script.
    
    # Here's how (from the customized ppp-on a.k.a. modemon1):
    # ...
    # TELEPHONE=your.isp.number1  # Then make a copy of this script -> modemon2
    #                             # and change this to your.isp.number2
    # ACCOUNT=your.account
    # PASSWD=$1                   # This gets the passwd from the main script.
    # ...
    
    # /sbin/ifconfig must be user-executable for this hack to work.
    
    wd1=1                                          # counter start
    stty -echo                                     # echo off
    echo -n "Password: "                           # for the ISP account
    read wd2
    stty echo                                      # back on
    echo
    echo "Trying..."
    echo 'ATE V1 M0 &K3 &C1 ^M' > /dev/modem       # modem init, change as needed
    
    /usr/sbin/modemon1 $wd2                        # first try
    flag=1                                         # locked
    
    while [ 1 ]; do                                # just keep on going
    
           if [ "$flag" = 1 ]; then                # locked?
    
                  bar=$(ifconfig | grep -c $1)     # check for a link
    
                  if [ "$bar" = 1 ]; then          # connected?
                         echo "Connected!"         # if so, then
                         exit 0                    # get outta here
                  else
                         foo=$(ps ax)              # already running?
                         blaat=$(echo $foo | grep "/usr/sbin/pppd")
    
                         if [ "$blaat" = "" ]; then   # if not, then
                                flag=0                # unset lock
                         fi
                  fi
    
           else                                    # no lock, ready to continue
                  wd1=$[wd1+1]
                  echo "Trying again... $wd1"
    
                  if [ $[wd1%2] = 1 ]; then        # this modulo test
                         /usr/sbin/modemon1 $wd2   # does the switching
                  else                             # between the 2 numbers
                         /usr/sbin/modemon2 $wd2   # we are using
                  fi
    
                  flag=1                           # locked again
    
           fi
    
    done                                           # All done!
    
    There. Customize as needed & be an excellent person. And DON'T break any laws if redialing is illegal in your area!

    Mark


    TABLE tags in HTML

    Date: Fri, 20 Dec 1996 11:51:22 -0500
    From: Michael O'Keefe,
    Organization: Ericsson Research Canada

    G'day,

    Just browsing through the mailbox, and I noticed your reply to a user about HTML standard compliance and long download times. You replied that you use the spiral image (a common thing these days) inside a <TABLE>.

    I hope you are aware that a browser cannot display any contents of a <TABLE> until it has received the </TABLE> tag (no matter what version of any browser - it is a limitation of the HTML tag) because the browser cannot run its algorithm until it has received all of the <TR> and <TD> tags, and it can't be sure of that until the </TABLE> tag comes through. I have seen many complex sites, using many images (thankfully they at least used the HEIGHT and WIDTH tags on those images to tell the browser how big the image will be so it didn't have to download it to find out) but still, putting it in a table nullifies much of the speediness that users require.

    A solution I often offer the HTML designers under me is to use a <DL><DD> combination. Though this doesn't technically fit the HTML DTD (certain elements are not allowed in a <DL>), and I use an editor that will not allow illegal HTML, so I can't do it myself (without going via a backdoor - but that's bad quality in my opinion). The downside of this is of course that you don't know what size FONT the user has set on the browser, and the FONT size affects the indentation width of the <DD> element. But if your spiral image is not too wide, then that could be made a null factor. The plus of the <DL><DD> is that the page can be displayed instantly as it comes down (again... providing the developer uses the HEIGHT and WIDTH attributes on *ALL* images, so that the browser doesn't have to pause its display to get the image and work out how to lay out around the image).

    Michael O'Keefe


    Text File undelete

    Date: Sat, 7 Dec 1996 15:00:58 +1300 (NZDT)
    From: Michael Hamilton,

    Here's a trick I've had to use a few times.

    Desperate person's text file undelete.

    If you accidentally remove a text file, for example some email or the results of a late-night programming session, all may not be lost. If the file ever made it to disk, i.e. it was around for more than 30 seconds, its contents may still be in the disk partition.

    You can use the grep command to search the raw disk partition for the contents of the file.

    For example, recently, I accidentally deleted a piece of email. So I immediately ceased any activity that could modify that partition: in this case I just refrained from saving any files or doing any compiles, etc. On other occasions, I've actually gone to the trouble of bringing the system down to single-user mode and unmounting the filesystem.

    I then used the egrep command on the disk partition: in my case the email message was in /usr/local/home/michael/, so from the output from df, I could see this was in /dev/hdb5

     
      sputnik3:~ % df
      Filesystem         1024-blocks  Used Available Capacity Mounted on
      /dev/hda3              18621    9759     7901     55%   /
      /dev/hdb3             308852  258443    34458     88%   /usr
      /dev/hdb5             466896  407062    35720     92%   /usr/local
    
      sputnik3:~ % su
      Password:
      [michael@sputnik3 michael]# egrep -50 'ftp.+COL' /dev/hdb5 > /tmp/x
    
    Now I'm ultra careful when fooling around with disk partitions, so I paused to make sure I understood the command syntax BEFORE pressing return. In this case the email contained the word 'ftp' followed by some text followed by the word 'COL'. The message was about 20 lines long, so I used -50 to get all the lines around the phrase. In the past I've used -3000 to make sure I got all the lines of some source code. I directed the output from the egrep to a different disk partition - this prevented it from overwriting the message I was looking for.

    I then used strings to help me inspect the output

     
      strings /tmp/x | less
    
    Sure enough the email was in there.

    This method can't be relied on; all, or some, of the disk space may have already been re-used.

    This trick is probably only useful on single-user systems. On multi-user systems with high disk activity, the space you freed up may have already been reused. And most of us can't just rip the box out from under our users whenever we need to recover a file.

    On my home system this trick has come in handy on about three occasions in the past few years - usually when I accidentally trash some of the day's work. If what I'm working on survives to a point where I feel I've made significant progress, it gets backed up onto floppy, so I haven't needed this trick very often.

    Michael


     Truncating /var/adm/messages

    Date: Tue, 31 Dec 1996 15:32:57 GMT+100
    From: Michel Vanaken,
    Organization: IDtech

    Hi !

    About the topic "How to truncate /var/adm/messages", here's the way to do it with a shell script:

    mv /var/adm/messages /var/adm/messages.prev
    touch /var/adm/messages
    mv /var/adm/syslog /var/adm/syslog.prev
    touch /var/adm/syslog
    kill -1 `ps x | grep syslog | grep -v grep | awk '{ print $1 }'`
    
    Happy new year !
    Michel


    2c Host Trick

    Date: Tue, 10 Dec 1996 17:27:46 +0300
    From: Paul Makeev,

    In order to make the ISC/Vixie DHCPD run under Linux, you need a route to host 255.255.255.255. The standard "route" from the Slackware distribution does not like the string "route add -host 255.255.255.255 dev eth0". But you can add a hostname to your /etc/hosts file with the address 255.255.255.255, and use "route add hostname dev eth0" instead. It works.

    Paul.
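    For reference, the two pieces described above might look something like this; the hostname "bcast" is purely an illustrative placeholder:

      # /etc/hosts -- map an arbitrary name to the limited broadcast address
      255.255.255.255   bcast

      # then add the route by name instead of by numeric address
      route add bcast dev eth0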


    Use of TCSH's :e and :r Extensions

    Date: Mon, 02 Dec 1996 23:25:23 -0500
    From: Bill C. Riemers,

    I'd like to congratulate Jesper Pedersen on his article on tcsh tricks. Tcsh has long been my favorite shell. But most of the features Jesper hit upon are also found in bash. Tcsh's most useful and unique features are its variable/history suffixes.

    For example, if after applying a patch one wishes to undo things by moving the *.orig files to their base names, the :r extension, which strips the extension, comes in handy. e.g.

     
     foreach a ( *.orig )
        mv $a $a:r
     end
    
    The same loop for ksh looks like:
     
      for a in *.orig; do
        mv $a `echo $a|sed -e 's,\.orig$,,g'`
      done
    
    Even better, one can use the :e extension to extract the file extension. For example, let's say we want to do the same thing on compressed files:
     
      foreach a ( *.orig.{gz,Z} )
        mv $a $a:r:r.$a:e
      end
    
    The $a:r:r is the filename without .orig.gz or .orig.Z; we tack the .gz or .Z back on with .$a:e.

    Bill


    Various notes on 2c tips, Gazette 12

    Date: Wed, 04 Dec 1996 15:30:21 -0600
    From: Justin Dossey,

    I noticed a few overly difficult or unnecessary procedures recommended in the 2c Tips section of Issue 12. Since there is more than one, I'm sending them to you:

     
    #!/bin/sh
    # lowerit
    # convert all file names in the current directory to lower case
    # only operates on plain files--does not change the name of directories
    # will ask for verification before overwriting an existing file
    for x in `ls`
      do
      if [ ! -f $x ]; then
        continue
        fi
      lc=`echo $x  | tr '[A-Z]' '[a-z]'`
      if [ $lc != $x ]; then
        mv -i $x $lc
      fi
      done
    
    Wow. That's a long script. I wouldn't write a script to do that; instead, I would use this command:
     
    for i in * ; do [ -f $i ] && mv -i $i `echo $i | tr '[A-Z]' '[a-z]'`; done;
    
    on the command line.

    The contributor says he wrote the script the way he did for understandability (see below).

    On the next tip, this one about adding and removing users, Geoff is doing fine until that last step. Reboot? Boy, I hope he doesn't reboot every time he removes a user. All you have to do is the first two steps. What sort of processes would that user have going, anyway? An IRC bot? Kill the processes with a simple

     
    kill -9 `ps -aux |grep ^username |tr -s " " |cut -d " " -f2`
    
    Example, username is foo
     
    kill -9 `ps -aux |grep ^foo |tr -s " " |cut -d " " -f2`
    
    That taken care of, let us move to the forgotten root password.

    The solution given in the Gazette is the most universal one, but not the easiest one. With both LILO and loadlin, one may provide the boot parameter "single" to boot directly into the default shell with no login or password prompt. From there, one may change or remove any passwords before typing ``init 3`` to start multiuser mode. Number of reboots: 1. The other way: number of reboots: 2.
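    As a sketch, the "single" parameter is simply appended at the boot prompt; the image label "linux" and the loadlin file names below are only illustrative and depend on your setup:

      boot: linux single
      C:\> loadlin zimage root=/dev/hda2 ro single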

    That's just about it. Thanks for the great magazine and continuing contribution to the Linux community. The Gazette is a needed element for many linux users on the 'net.

    Justin Dossey

    Date: Wed, 4 Dec 1996 08:46:24 -0800 (PST)
    Subject: Re: lowerit shell script in the LG
    From: Phil Hughes,

    The amazing Justin Dossey wrote:

     
    > #!/bin/sh
    > for i in * ; do [ -f $i ] && mv -i $i `echo $i | tr '[A-Z]' '[a-z]'`; done;
    > 
    > may be more cryptic than 
    ...
    > 
    > but it is a lot nicer to the system (speed & memory-wise) too.
    
    Can't argue. If I had written it for what I considered a high usage situation I would have done it more like you suggested. The intent, however, was to make something that could be easily understood.

    Phil Hughes


     Viewing HOWTO Documents

    Date: Sun, 22 Dec 1996 09:43:40 -0800
    From: Didier Juges,

    From one newbie to another, here is a short script that eases looking for and viewing HOWTO documents. My HOWTOs are in /usr/doc/faq/howto/ and are gzipped. The file names are XXX-HOWTO.gz, XXX being the subject. I created the following script called "howto" in the /usr/local/sbin directory:

    #!/bin/sh
    if [ "$1" = "" ]; then
        ls /usr/doc/faq/howto | less
    else
        gunzip -c /usr/doc/faq/howto/$1-HOWTO.gz | less
    fi
    
    When called without an argument, it displays a directory of the available HOWTOs. When called with the first part of the file name (before the hyphen) as an argument, it unzips the document (keeping the original intact) and then displays it.

    For instance, to view the Serial-HOWTO.gz document, enter: $ howto Serial

    Keep up the good work.

    Didier


     Xaw-XPM .Xresources troubleshooting tip.

    Date: Wed, 18 Dec 1996 17:02:07 +0100 (GMT+0100)
    From: Robin Smidsroed,

    I'm sure a lot of you folks out there have installed the new Xaw-XPM and like it a lot. But I've had some trouble with it. If I don't install the supplied .Xresources file, xcalc and some other apps (ghostview is one) segfault whenever I try to use them.

    I found out that the entry involved is this one:

    *setPixmap: /path/to/an/xpm-file
    
    If this entry isn't in your .Xresources, xcalc and ghostview won't work. Hope some of you out there find this useful.

    And while you're at ghostview, remember to upgrade ghostscript to the latest version to get the new and improved fonts, they certainly look better on paper than the old versions.

    Ciao!
    Robin

    PS: Great mag, now I'm just waiting for the arrival of my copy of LJ


     xterm title bar

    Date: Wed, 18 Dec 1996 21:21:47 -0800 (PST)
    From: bradshaw@nlc.com (Lee Bradshaw)

    Hi Guys,

    I noticed the "alias for cd xterm title bar tip" from Michael Hammel in the Linux Gazette and wanted to offer a possible improvement for your .bashrc file. A similar solution might work for ksh, but you may need to substitute $HOSTNAME for \h, etc:

    if [ "x$TERM" = "xxterm" ]; then
       PS1='\h \w-> \[\033]0;\h \w\007\]'
    else
       PS1='\h \w-> '
    fi
    
    PS1 is an environment variable used in bash and ksh for storing the normal prompt. \h and \w are shorthand for hostname and working directory in bash. The \[ and \] strings enclose non-printing characters from the prompt so that command line editing will work correctly. The \033]0; and \007 strings enclose a string which xterm will use for the title bar and icon name. Sorry, I don't remember the codes for setting these independently. (ksh users note: \033 is octal for ESC and \007 is octal for CTRL-G.) This example just changes the title bar and icon names to match the prompt before the cursor.

    Any program which changes the xterm title will cause inconsistencies if you try an alias for cd instead of PS1. Consider rlogin to another machine which changes the xterm title. When you quit rlogin, there is nothing to force the xterm title back to the correct value when using the cd alias (at least not until the next cd). This is not a problem when using PS1.

    You could still alias vi to change the xterm title bar, but it may not always be correct. If you use ":e filename" to edit a new file, vi will not update the xterm title. I would suggest upgrading to vim (VI iMproved). It has many nice new features in addition to displaying the current filename on the xterm title.

    Hopefully this tip is a good starting point for some more experimenting. Good luck!

    Lee Bradshaw, bradshaw@nlc.com
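    (For reference, regarding the codes Lee couldn't recall: xterm accepts them separately -- ]0; sets both the icon name and the title, ]1; the icon name only, and ]2; the title only. A title-bar-only variant of the prompt above would be, as a sketch:

      PS1='\h \w-> \[\033]2;\h \w\007\]'

    )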




    This page maintained by the Editor of Linux Gazette,
    Copyright © 1997 Specialized Systems Consultants, Inc.

    "Linux Gazette...making Linux just a little more fun!"


    News Bytes

    Contents:


    News in General


     SECURITY: (linux-alert) LSF Update#14: Vulnerability of the lpr program.

    Date: Sat, 26 Nov 1996
    Linux Security FAQ Update -- lpr Vulnerability
    A vulnerability exists in the lpr program version 0.06. If installed suid to root, the lpr program allows local users to gain access to a super-user account.

    Local users can gain root privileges. Exploits that exercise this vulnerability have been made available.

    The lpr utility from lpr 0.06 suffers from a buffer overrun problem. Installing lpr suid-to-root is needed to allow print spooling.
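    Until a fixed package is installed, a common stop-gap (a sketch only; the path varies by distribution, so check where your system keeps lpr) is to drop the setuid bit, at the cost of ordinary users losing local print spooling:

      # locate lpr and remove the setuid bit
      ls -l /usr/bin/lpr
      chmod u-s /usr/bin/lpr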

    This LSF Update is based on the information originally posted to linux-security mailing list.

    For additional information and distribution corrections:
    Linux Security WWW: http://bach.cis.temple.edu/linux/linux-security linux-security & linux-alert mailing list archives: ftp://linux.nrao.edu/pub/linux/security/list-archive


     LINUXEXPO '97 TECHNICAL CONFERENCE

    Durham, N.C., December 31, 1996 -- It was announced today that the third annual LinuxExpo Technical Conference will be held at the N.C. Biotechnology Center in Research Triangle Park, NC on April 4-5, 1997. The conference will feature fourteen elite developers who will give technical talks on various topics, all related to the development of Linux. This year the event is expected to draw 1,000 attendees who will be coming not only for the conference, but to visit the estimated 30 Linux companies and organizations that will be selling their own Linux products and giving demonstrations. The event will also include a Linux User's Group meeting, an install fair, and a job fair for all of the computer programming hopefuls. LinuxExpo '97 will be complete with refreshments and entertainment from the Class Action Jugglers.

    For additional information: Anna Selvia,
    LinuxExpo '97 Technical Conference,
    3201 Yorktown Ave. Suite 113
    Durham, NC 27713


     WWW: Linux Archive Search Site

    Date: Thu, 21 Nov 1996
    Tired of searching sunsite or tsx-11 for some program you heard about on IRC? Well, the Linux Archive Search (LAS) is here. It is a search engine that searches an updated database of the files contained on sunsite.unc.edu, tsx-11.mit.edu, ftp.funet.fi, and ftp.redhat.com. You can now quickly find out where the files are hiding! The LAS is living at http://torgo.ml.org/las (it may take a second to respond; it's on a slow link). So give it a whirl; who knows, you may use it a lot!

    For additional information:
    Jeff Trout,
    The Internet Access Company, Inc.


     Netherlands - Linux Book On-line

    Date: Thu, 05 Dec 1996
    The very first book to appear in Holland on the Linux operating system has gone on-line and can be found at:

    http://www.cv.ruu.nl/~eric/linux/boek/

    And of course from every (paper) copy sold, one dollar is sent to the Free Software Foundation.

    For additional information:
    Hans Paijmans, KUB-University, Tilburg, the Netherlands
    , http://purl.oclc.org/NET/PAAI/


     New O'Reilly Linux WWW Site

    Date: 26 Nov 1996

    Check out the new O'Reilly & Associates, Inc. Linux web site at http://www.ora.com/info/linux/

    It has:

    For additional information:
    O'Reilly & Associates, Inc.,


     PCTV Reminder

    The "Unix III - Linux" show will air on the Jones Computer Network (JCN) and the Mind Extension University Channel (MEU) the week of January 20, 1997.

    The scheduled times are:

    This show will also air on the NBC Superchannel (CNBC) January 25, 1997.

    It is best to call your local cable operator to find the appropriate channel.

    Tom Schauer, Production Assoc. PCTV


    Software Announcements


     daVinci V2.0.2 - Graph Visualization System

    November 20, 1996 (Bremen, Germany) - The University of Bremen announces daVinci V2.0.2, the new edition of the noted visualization tool for generating high-quality drawings of directed graphs, with more than 2000 installations worldwide. Users in the commercial and educational domains have already integrated daVinci as a user interface for their application programs to visualize hierarchies, dependency structures, networks, configuration diagrams, dataflows, etc. daVinci combines hierarchical graph layout with powerful interactive capabilities and an API for remote access from a connected application. Based on user feedback, daVinci V2.0.2 adds a few extensions over the previous V2.0.1 release that improve performance and usability.

    daVinci V2.0.2 is licensed free of charge for non-profit use and is immediately available for Linux. The daVinci system can be downloaded via this form:

    http://www.informatik.uni-bremen.de/~davinci/daVinci_get_daVinci.html

    For additional information:
    Michael Froehlich, daVinci Graph Visualization Project
    Computer Science Department, University of Bremen, Germany
    http://www.informatik.uni-bremen.de/~davinci ,


     WWW: getwww 1.3 - download an entire HTML source tree

    Date: Wed, 04 Dec 1996
    Getwww is designed to download an entire HTML source tree from a remote URL, recursively changing image and hypertext links.

    From the LSM:
    Primary-site: ftp.kaist.ac.kr /incoming/www 25kB getwww++-1.3.tar.gz
    Alternate-site: sunsite.unc.edu /pub/Linux/system/Network/info-systems/www 25kB getwww++-1.3.tar.gz
    Platform: Linux-2.0.24
    Copying-policy: GPL

    For additional information:
    In-sung Kim, Network Tool Group,


     Motif Interface Builder on Unifix 2.0

    Date: Sun, 01 Dec 1996
    Unifix Software GmbH is proud to announce View Designer/X, a new Motif interface builder available for Linux. A demo version of VDX is included on Unifix Linux 2.0.

    With its object-oriented, interactive application development tools, software developers can design higher-quality applications in less time.

    For more information and to download the latest demo version, see:

    http://www.unifix.de/products/vdx

    For additional information: Unifix Software GmbH,


     View Designer/X

    Date: Fri, 13 Dec 1996

    View Designer/X, a new Motif interface builder for Linux, has been released. It enables application developers to design user interfaces with Motif 2.0 widgets and to generate C and C++ code. VDX provides an interactive WYSIWYG view and a Widget Tree Browser which can be used to modify the structure of the user interface. All resources are adjustable with the Widget Resource Editor, and through the use of template files VDX's code generation is more flexible than that of other interface builders.

    Bredex GmbH, Germany is distributing View Designer/X via the Web. Please see the following web page for more information and to download the free demo version:

    http://www.bredex.de/EN/vdx/

    Dirk Laessig,


     X-Files 1.21 - graphical file manager in tcl/tk

    Date: Sun, 01 Dec 1996

    X-Files is a graphical file management program for the Unix/X Window environment, developed on Linux.

    For more information and packages see:
    http://pinhead.tky.hut.fi/~xf_adm/
    http://www.hut.fi/~mkivinie/xfindex.html
    ftp://java.inf.tu-dresden.de:/pub/unix/x-files/

    For questions:

    For additional information:
    Mikko Kiviniemi, ,
    Helsinki University of Technology


    [ TABLE OF CONTENTS ] [ FRONT PAGE ]  Back  Next


    This page written and maintained by the Editor of Linux Gazette,
    Copyright © 1997 Specialized Systems Consultants, Inc.


    "Linux Gazette...making Linux just a little more fun! "


    The Answer Guy


    By James T. Dennis,


    Contents:

  8. Dialup Problem
  9. File Referencing
  10. Combining Modems for More Speed
  11. WWW Server


     Combining modems for more speed

    Date: Mon, 23 Dec 1996 23:37:00 -0800 (PST)
    From: liberty@pe.net (Keith)

    Thanks for reading this post. I have heard that it's possible to set up Linux to combine two analog modems into one so as to double the speed of a connection. Is this true? How does it work, and where can I get more info, guidance, how-tos, etc.? I have Slackware 96 from Infomagic. Yours truly, Keith Bell

    I've heard of this as well. I've never used it but let's look it up...

    Ahh... that would be the EQL option in the kernel. Here's an excerpt from the 'make menuconfig' help pages (in the 2.0.27 kernel sources):

    Linux Kernel v2.0.27 Configuration

    EQL (serial line load balancing) support:
    If you have two serial connections to some other computer (this usually requires two modems and two telephone lines) and you use SLIP (= the protocol for sending internet traffic over telephone lines) or PPP (= a better SLIP) on them, you can make them behave like one double speed connection using this driver. Naturally, this has to be supported at the other end as well, either with a similar EQL Linux driver or with a Livingston Portmaster 2e. Say Yes if you want this and read drivers/net/README.eql.

    So that file is:
    EQL Driver: Serial IP Load Balancing HOWTO
    Simon "Guru Aleph-Null" Janes, simon@ncm.com
    v1.1, February 27, 1995

    (After reading this you'll know about as much on this subject as I do -- after using any of this you'll know *much* more).


     Dialup Problem

    Date: Tue, 31 Dec 1996 05:13:51 -0800 (PST)
    From:

    I don't know if you can, or are even willing to, help me with a problem i have. I'm running redhat 4.0, on a p120 w/24 megs of ram, kernel 2.0.18

    I'm willing.
    anyway...i have this ppp connection problem and no one i know knows what the problem is, i've looked through the FAQS, HOWTO's, tried #linux on irc, etc etc...no one knows what my problem is, so now i'm desperate.

    When i try to dial my isp, i get logged in fine, but its REALLY slow. i'm using the 'network module' ppp thing in control panel on X. mru=1500, asyncmap=0,speed=115000, i couldn't find a place to insert mtu, and when i tried putting that in /etc/ppp/options the script this program was using wouldn't work.

    Usually I see these symptoms when there is an IRQ conflict. Some of the data gets through -- with lots of errors and lots of retransmits but any activity on the rest of the machine -- or even just sitting there -- and you get really bad throughput and very unreliable connections.
    I noticed that after i input something and then move the cursor off of the windows, it runs at a much faster speed, and it gets annoying moving the cursor back and forth. I tried dip, minicom, and this 'network module' thing...all are slow
    I would do all of your troubleshooting from outside of X. Just use the virtual consoles until everything else works right. (Fewer layers of things to conflict with one another).
    if you can shed any light on this, it would be much appreciated. thanks
    Take a really thorough look at the hardware settings for everything in the machine. Make a list of all the cards and interfaces -- go through the docs for each one and map out which ones are using which interfaces.

    I ended up going through several combinations of video cards and I/O cards before I got my main system all integrated. Luckily newer systems are getting better. (This is a 386DX33 with 32Mb of RAM and a 2Mb video card -- two IDE drives, two floppy drives, two SCSI hard disks, an internal CD-ROM, an external magneto-optical drive, a serial mouse, a modem (used for dial-in, dial-out, uucp, and ppp), a null modem (I hook a laptop to it as a terminal for my wife) and an ethernet card.)

    Another thing to check is the cabling between your serial connector and your modem. If you're configured for XON/XOFF you're in trouble. If you're configured for hardware flow control and you don't have the appropriate wires in your cable then you're in worse trouble.

    Troubleshooting of this sort really is best done over voice or in person. There are too many steps to the troubleshooting and testing to do effectively via e-mail.
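    Still, a couple of quick checks from a text console will help with that hardware inventory before you get that far. This is only a sketch -- the serial device names are assumptions, so substitute whichever ports your modem (and mouse, if it's serial) actually use:

    cat /proc/interrupts                  # which IRQs the kernel thinks are in use
    setserial -g /dev/ttyS0 /dev/ttyS1    # port address, IRQ and UART type for each serial port
    cat /proc/net/dev                     # error counts on the ppp/slip interfaces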


     File Referencing

    Date: Wed, 18 Dec 1996 00:16:42 -0800 (PST)
    > "A month of sundays ago L.U.S.T List wrote:"
    >>      1. I do not know why on Linux some program could not run
    >>      correctly.
    >>              for example
    >>      #include <stdio.h>
    >>      main()
    >>      {
    >>              printf("test\n");
    >>              fflush(stdout);
    >>      }
    >>      They will not echo what I print.
    > 
    > Oh yes it will. I bet you named the executable "test" ... :-)
    > (this is a UNIX faq).
    > 
    
    I really suggest that people learn the tao of "./"

    This is easy -- any time you mean to refer to any file in the current directory precede it with "./" -- this forces all common Unix shells to refer to the file in THIS directory. It solves all the problems with files that start with dashes and it allows you to remove :.: from your path (which *all* sysadmins should do right NOW).

    That is the tao of "./" -- the two keystrokes that can save you many hours of grief and maybe save your whole filesystem too.
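    A two-minute experiment shows why. The classic trap is naming a program "test" (which is also a shell built-in); a quick sketch:

    cd /tmp
    cat > test
    #!/bin/sh
    echo "it works"
    <ctrl>-D
    chmod +x test
    test          # runs the shell's built-in test command -- prints nothing
    ./test        # runs the file in the current directory -- prints "it works"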


     WWW Server?

    Date: Tue, 31 Dec 1996 05:19:11 -0800 (PST)
    From: (Paulo Marcio Villaca Veiga)

    Where can I get (or buy) a WWW server for LINUX?
    Please, help me.

    Web servers are included with most distributions of Linux. The most popular one right now is called Apache. You can look on your CD's (if you bought a set) or you can point a web client (browser) at http://www.apache.org for more information and for an opportunity to download a copy.

    There are several others available -- however Apache is the most well known -- so it will be the best for you to start with. It is also widely considered to offer the best performance and feature set (of course that is a matter of considerable controversy among "connoisseurs", just as is the ongoing debate about 'vi' vs. 'emacs').
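    If you just want to check whether a web server is already up on your own machine, a quick test from the shell will tell you (a rough sketch -- it assumes the standard ps and telnet clients are installed):

    ps ax | grep httpd       # look for running httpd processes
    telnet localhost 80      # then type:  HEAD / HTTP/1.0   and press Return twice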

    thank you
    You're welcome.


    Copyright © 1997, James T. Dennis
    Published in Issue 13 of the Linux Gazette


    [ TABLE OF CONTENTS ] [ FRONT PAGE ]  Back  Next


    "Linux Gazette...making Linux just a little more fun! "


    COMDEX '96

    By Belinda Frazier and Kevin Pierce


    Comdex/Fall '96 has come and gone once again. COMDEX is the second largest computer trade show in the world, offering multiple convention floors with 2000 exhibitors plying their new computer products to approximately 220,000 attendees in Las Vegas, Nevada in November of 1996.

    This year's show was a great success for Linux in general. The first ever ``Linux Pavilion'' was organized at the Sands Convention Center and Linux vendors from all over the country participated. The Linux International (LI) booth was in the center, giving away literature and information for all the Linux vendors. Linux International is a not-for-profit organization formed to promote Linux to computer users and organizations. Staffed by volunteers including Jon ``Maddog'' Hall and Steve Harrington, the LI booth was a great place for people to go to have their questions answered. Needless to say, the Linux International booth was never empty. Surrounding LI were Red Hat Software and WorkGroup Solutions.

    Other vendors in the pavilion included Craftwork Solutions, DCG Computers, Digital Equipment Corporation, Frank Kasper & Associates, Infomagic, Linux Hardware Solutions, SSC (publishers of Linux Journal), and Yggdrasil Computing. Caldera, Pacific HiTech, and Walnut Creek also exhibited at Comdex, but not as part of the Linux Pavilion.

    SSC gave out Linux Journals at the show and actually ran out of magazines early Thursday morning. Luckily, we were able to have some more shipped to us, but we still ran out again on Friday, the last day of the show. Comdex ran five full days and the Sands pavilion was open from 8:30 to 6 most show days which meant long days for all the exhibitors there.

    Show management put up signs, directing attendees to the Linux Pavilion and to "more Linux vendors". The show was so large that it was easy to get lost.

    At the LI booth and at SSC's booth, the response to Linux was overwhelmingly positive. Questions ranged from ``I've heard a lot about Linux, but I'm not sure what it is, can you enlighten me?'' to ``I haven't checked for a few days---what is the latest development kernel?''

    For next year's Comdex in November '97, Linux vendors, coordinated by Linux International, are already working to put together a Linux pavilion at least three times as big as the one this year.

    Vendors interested in being part of the Linux pavilion in November '97 may contact Softbank, who put on Comdex, at mandino@comdex.com, or, to go through Linux International, contact ``Jon Maddog'' Hall via e-mail at .


    Copyright © 1997, Belinda Frazier & Kevin Pierce
    Published in Issue 13 of the Linux Gazette


    [ TABLE OF CONTENTS ] [ FRONT PAGE ]  Back  Next



    "Linux Gazette...making Linux just a little more fun! "


    Filtering Advertisements from Web Pages using IPFWADM

    By David Rudder


    Lately, a lot of Web pages have begun selling ad space "banners." Wasting valuable bandwidth, these banners often hawk products I don't care to hear about. I'd rather not see them, and not have to download their contents.

    There are two ways of filtering out these banners. The first is to deny all pictures that are wider than tall and generally towards the top or bottom of the page. The second is to simply block all the accesses to and from the web sites that are the notorious advertisers. This second approach is the one I'm going to take.

    When searching around the web, you will see that many of the banners come from the site ad.linkexchange.com. This is the site we will want to ban.


    Setting Up Your Firewall

    Our first order of business is to set up our firewall. We won't be using it for security here, although nothing stops you from using the same firewall for security as well. First, we recompile the kernel, saying "Yes" to CONFIG_FIREWALL. This enables the built-in kernel firewalling.

    Then, we need to get the IPFWADM utility. You can find it at: http://www.xos.nl/linux/ipfwadm . Untar, compile and install this utility.

    Since we are doing no other firewalling, this should be sufficient.


    Blocking Unwanted Sites

    Now, we come to the meat of the maneuver. We need to block access to our machine from ad.linkexchange.com. First, block outgoing access to the site, so that our requests don't even make it there:

    ipfwadm -O -a reject -P tcp -S 0.0.0.0/0 -D ad.linkexchange.com 80

    This tells ipfwadm to append a rule to the Output filter. The rule says to reject all packets of protocol TCP from anywhere to ad.linkexchange.com on port 80. If you don't get this, read Chris Kostick's excellent article on IP firewalling at http://www.ssc.com/lj/issue24/1212.html.

    The next rule is to keep any stuff from ad.linkexchange.com from coming in. Technically, this shouldn't be necessary. If we haven't requested it, it shouldn't come. But, better safe than sorry:

    ipfwadm -I -a reject -P tcp -S ad.linkexchange.com 80 -D 0.0.0.0/0

    Now, all access to and from ad.linkexchange.com is rejected.
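    You can double check that both rules made it into the kernel by asking ipfwadm to list them:

    ipfwadm -O -l            # list the output (outgoing) rules
    ipfwadm -I -l            # list the input rules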

    Note: this will only work when web browsing from that machine. To filter for a whole network, do the same but with -F (the forwarding rules) instead of -O and -I.


    Testing It Out

    To test, visit the site http://www.reply.net. They have a banner on top which should either not appear or appear as a broken icon. Either way, no network bandwidth will be wasted downloading the picture, and all requests will be rejected immediately.


    Filling It Out

    Not all banners are so easily dealt with. Many companies, like Netscape, host their own banners. You don't want to block access to Netscape, so this approach won't work. But, you will find a number of different advertisers set up like linkexchange. As you find more, add them to the list of rejected sites.
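    One way to keep that growing list manageable is a short shell script that re-applies the same pair of rules for every advertiser. This is only a sketch -- the /etc/ad-hosts file name is made up for illustration, and it assumes one host name per line:

    #!/bin/sh
    # re-apply the reject rules for every host listed in /etc/ad-hosts (hypothetical file)
    for host in `cat /etc/ad-hosts`
    do
        ipfwadm -O -a reject -P tcp -S 0.0.0.0/0 -D $host 80
        ipfwadm -I -a reject -P tcp -S $host 80 -D 0.0.0.0/0
    done
    # on a gateway filtering for a whole network, use -F in place of -O and -I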

    Good luck, and happy filtering!


    Copyright © 1997, David Rudder
    Published in Issue 13 of the Linux Gazette


    [ TABLE OF CONTENTS ] [ FRONT PAGE ]  Back  Next



    "Linux Gazette...making Linux just a little more fun! "


    Floppy Disk Tips

    By Bill Duncan, VE3IED,


    Although more computers are becoming network connected every day, there are many instances where you need to transfer files by the ol' sneaker-net method. Here are some hints, tips and short-cuts for doing this, aimed at users who are new to Linux or Unix. (There may even be some information useful to old-timers...)

    What do I use floppies for? As a consultant, I frequently do contract work for companies which, because of security policies, do not connect to the 'Net. So, FTP'ing files which I need from my network at home is out of the question.

    To take my current contract as an example, I am using Linux as an X-Windows terminal for developing software on their DEC Alphas running OSF. (I blew away the Windoze '95 which they had loaded on the computer they gave me.) I often need to bring files with me from my office at home, or back up my work to take back home for work in the evening. (Consultants sometimes work flex-hours, which generally means more hours...)

    Why use cpio(1) or tar(1) when copying files? Because it is a portable method of transferring files from a group of subdirectories with the file dates left intact. The cp(1) command may or may not do the job, depending on the operating systems and versions you are dealing with. In addition, specifying certain options will copy only files which are new or have changed.


    Formatting, Filesystems and Mounting


    The first thing you need to do to make the floppies useful is to format them, and usually lay down a filesystem. There are also some preliminary steps which make using floppy disks much easier, which is the point of this article.

    I find it useful to make my username part of the floppy group in the /etc/group file. This saves you from needing to su to root much of the time. (You will need to log out and log back in again for this to take effect.) I also use the same username both on the client's machine and my home office which saves time. The line should now look like this:

    floppy::11:root,username
    

    The following setup is assumed for the examples I present here. The root user must have the system directories in the PATH environment variable. Add the following to the .profile file in /root (if it is not already there) by su'ing to root:

    su -   # this should ask for the root password.
    cat >> .profile
    PATH=/sbin:/usr/sbin:$PATH
    <ctrl>-D
    
    You can also use your favorite editor to do this... I prefer vim(1) and have this symlinked to /usr/bin/vi instead of elvis(1) which is usually the default on many distributions. VIM has online help, and multiple window support which is very useful! (A symlink is created with a -s option to ln(1), and is actually called a symbolic link.)
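    That symlink is a one-liner as root (this assumes vim was installed as /usr/bin/vim on your distribution -- adjust the path to suit):

    ln -sf /usr/bin/vim /usr/bin/vi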

    Next, add the following lines to the /etc/fstab file: (I have all the user mountable partitions in one place under /mnt. You may want a different convention, but this is useful. I also have /mnt/cdrom symlinked to /cd for convenience.)

    /dev/fd0    /mnt/fd0  ext2    noauto,user 1 2
    

    Still logged in as root, make the following symlink: (If you have more than one floppy drive, then add the floppy number as well.)

    ln  -s  /mnt/fd0  /fd
    
        -or-
    
    ln  -s  /mnt/fd0  /fd0
    
    These two things make mounting and unmounting floppies a cinch. The mount(8) command follows the symlink and accesses the /etc/fstab file for any missing parameters, making it a useful shortcut.

    To make the floppy usable as an ext2fs Linux filesystem, do the following as root: (The username is whatever username you use regularly on the system. You, of course, should not use the root user for normal use!)

    export PATH=/sbin:/usr/sbin:$PATH   # not needed if you set environment
    fdformat /dev/fd0
    mke2fs /dev/fd0
    mount /dev/fd0 /mnt/fd0
    chown username /mnt/fd0
    
    You may need to specify the geometry of the floppy you are using. If it is the standard 3.5 inch double sided disk, you may need to substitute /dev/fd0H1440 for the device name (in 1.2.x kernels). If you have a newer 2.xx kernel and superformat(1), you may want to substitute this for fdformat. See the notes in the Miscellaneous section below, or look at the man page. You may now exit out of su(1) by typing:
    exit
    

    From this point on, you may use the mount(8) and umount(8) commands logged in as your normal username by typing the following:

    mount /fd
    umount /fd
    


    Backups, Cpio and Gzip


    For backing up my work to take home or to take back to the office I use cpio(1) instead of tar(1) as it is far more flexible, and better at handling errors etc. To use this on a regular basis, first create all the files you need by specifying the command below without the -mtime -1 switch. Then you can make daily backups from the base directory of your work using the following commands:

    cd directory
    mount /fd
    find . -mtime -1 -print | cpio -pmdv /fd
    sync
    umount /fd
    

    When the floppy stops spinning, and the light goes out, you have your work backed up. The -mtime option to find(1) specifies files which have been modified (or created) within one day (the -1 parameter). The options for cpio(1) specify copy-pass mode, retain previous file modification times, create directories where needed, and do so verbosely. Without a -u (unconditional) flag, it will not overwrite files which are the same age or newer.

    This operation may also be done over a network, either from NFS mounted filesystems, or by using a remote shell as the next example shows.

    mount /fd
    cd /fd
    rsh  remotesystem '(cd directory; find . -mtime -1 -print | cpio -oc)' |
         cpio -imdcv
    sync
    cd
    umount /fd
    
    This example uses cpio(1) to send files from the remote system, and update the files on the floppy disk mounted on the local system. Note the pipe (or vertical bar) symbol at the end of the remote shell line. The arguments which are enclosed in quotes are executed remotely, with everything enclosed in parentheses happening in a subshell. The archive is sent as a stream across the network, and used as input to the cpio(1) command executing on the local machine. (If both systems are using a recent version of GNU cpio, then specify -Hcrc instead of c for the archive type. This will do error checking, and won't truncate inode numbers.)
    The remote system would have: cpio -oHcrc
    and the local side would have: cpio -imdvHcrc

    To restore the newer files to the other computer, change directories to the base directory of your work, and type the following:

    cd directory
    mount -r /fd
    cd /fd
    find . -mtime -1 -print | cpio -pmdv ~-
    cd -
    umount /fd
    

    If you needed to restore the files completely, you would of course leave out the -mtime parameter to find(1).

    The previous examples assume that you are using the bash(1) shell, and uses a few quick tricks for specifying directories. The "~-" parameter to cpio is translated to the previous default directory. In other words, where you were before cd'ing to the /fd directory. (Try typing: echo ~- to see the effect, after you have changed directories at least once.) The cd ~- or just cd - command is another shortcut to switch directories to the previous default. These shortcuts often save a lot of time and typing, as you frequently need to work with two directories, using this command to alternate between them or reference files from where you were.

    If the directory which you are transferring or backing up is larger than a single floppy disk, you may need to resort to using a compressed archive. I still prefer using cpio(1) for this, although tar(1) will work too. Change directories to your work directory, and issue the following commands:

    cd directory
    mount /fd
    find . -mtime -1 -print | cpio -ovHcrc | gzip -v > /fd/backup.cpio.gz
    sync
    umount /fd
    
    The -Hcrc option to cpio(1) is a new type of archive which older versions of cpio might not understand. This allows error checking, and inode numbers with more than 16 bits.

    Of course, your original archive should be created using find(1) without the -mtime -1 options.
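    Restoring from the compressed archive on the other machine is just the mirror image of the command above, using the same cpio(1) options (a quick sketch):

    cd directory
    mount -r /fd
    gzip -dc /fd/backup.cpio.gz | cpio -imdvHcrc
    umount /fd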


    Floppy as a Raw Device for Large Files or Directories


    Sometimes it is necessary to backup or transfer a file or directories which are larger than a floppy disk, even when compressed. For this, we finally need to resort to using tar.

    Prepare as many floppies as you think you'll need by using the fdformat(8) command. You do not need to make filesystems on them however, as you will be using them in raw mode.

    If you are backing up a large set of subdirectories, switch to the base subdirectory and issue the following command:

    cd directory
    tar  -cv -L 1440 -M -f /dev/fd0  .
    
    This command will prompt you when to change floppies. Wait for the floppy drive light to go out of course!

    If you need to backup or transfer multiple files or directories, or just a single large file, then specify them instead of the period at the end of the tar command above.

    Unpacking the archive is similar to the above command:

    cd directory
    tar  -xv -L 1440 -M -f /dev/fd0
    


    Miscellaneous


    Finally, here are some assorted tips for using floppies.

    The mtools(1) package is great for dealing with MS-DOG floppies, as we sometimes must. You can also mount(8) them as a Linux filesystem with either the msdos or umsdos filesystem type. Add another entry below the /etc/fstab entry you made before, so that the two lines look like this:

    /dev/fd0    /mnt/fd0  ext2    noauto,user 1 2
    /dev/fd0    /mnt/dos  msdos   noauto,user 1 2
    
    You can now mount an MS-DOS floppy using the command:
    mount /mnt/dos
    
    You can also symlink this to another name as a further shortcut.
    ln -s /mnt/dos /dos
    mount /dos
    

    The danger of using the mount(8) commands rather than mtools(1), for users who are more familiar with MS-DOS, is that you need to explicitly unmount floppies before taking them out of the drive using umount(8). Forgetting this step can make the floppy unusable! If you are in the habit of forgetting, a simple low-tech yellow Post-it note in a strategic place beside your floppy drive might save you a few headaches. If your version of Post-it notes has the <BLINK> tag, use it!   ;-)

    "umount me first!"

    Newer systems based on the 2.xx kernel are probably shipped with fdutils. Check to see if you have a /usr/doc/fdutils-xxx directory, where xxx is a version number. (Mine is 4.3). Also check for the superformat(1) man page. This supersedes fdformat(1) and gives you options for packing much more data on floppies. If you have an older system, check the ftp://ftp.imag.fr/pub/Linux/ZLIBC/fdutils/ ftp site for more information.

    The naming convention for floppies in newer 2.xx kernels has also changed, although the fd(4) man page has not been updated in my distribution. If you do not have a /dev/fd0H1440 device, then you probably have the newer system.


    Copyright © 1997, Bill Duncan
    Published in Issue 13 of the Linux Gazette


    [ TABLE OF CONTENTS ] [ FRONT PAGE ]  Back  Next



    "Linux Gazette...making Linux just a little more fun! "


    Welcome to the Graphics Muse
    © 1996 by

    muse:
    1. v; to become absorbed in thought
    2. n; [ fr. Any of the nine sister goddesses of learning and the arts in Greek Mythology ]: a source of inspiration
    Welcome to the Graphics Muse! Why a "muse"? Well, except for the sisters aspect, the above definitions are pretty much the way I'd describe my own interest in computer graphics: it keeps me deep in thought and it is a daily source of inspiration.

    [Graphics Mews] [Musings] [Resources]
    This column is dedicated to the use, creation, distribution, and discussion of computer graphics tools for Linux systems.
          Last month I introduced a new format to this column. The response was mixed, but generally positive. I'm still getting more comments on the format of the column than on the content. I don't know if this means I'm covering all the issues people want to hear about or people just aren't reading the column. Gads. I hope it's not the latter.
          This month's issue will include another book review, a discussion on adding fonts to your system, a Gimp user's story, and a review of the AC3D modeller. The holiday season is always a busy one for me. I would have liked to do a little more, but there just never seems to be enough time in the day.
    Graphics Mews


          Disclaimer: Before I get too far into this I should note that any of the news items I post in this section are just that - news. Either I happened to run across them via some mailing list I was on, via some Usenet newsgroup, or via email from someone. I'm not necessarily endorsing these products (some of which may be commercial), I'm just letting you know I'd heard about them in the past month.
          I went wandering through a local computer book store this month and scanned the graphics texts section. I found a few new tidbits that might be of interest to some of you.

    3D Graphics File Formats: A Programmer's Reference

          Keith Rule has written a new book on 3D Graphics File Formats. The book, which contains over 500 pages, has been published by Addison-Wesley Developers Press and is listed at $39.95. It includes a CD-ROM with a software library for processing various 3D file formats (both reading and writing), but the code is written for MS systems. Keith states there isn't any reason why the code shouldn't be portable to other platforms such as Linux. Any takers out there?
    ISBN 0-201-48835-3

    OpenGL Programming for the X Window System

          I noticed a new text on the shelf of a local book store (Softpro, in Englewood, Colorado) this past month - Mark J. Kilgard's OpenGL Programming for the X Window System. This book, from Addison-Wesley Developers Press, appears to have very good coverage of how to write OpenGL applications that make use of the X Window System APIs. I haven't read it yet (or even purchased it yet, but I will) so I can't say how good it is. Mark is the author of the GLUT toolkit for OpenGL. GLUT is to OpenGL what Xt or Motif is to Xlib. Well, sort of.

    Fast Algorithms for 3D-Graphics

          This book, by Georg Glaeser and published by Springer, includes a 3.5" diskette of source code for Unix systems. The diskette, however, is DOS formatted. All the algorithms in the text are written in pseudocode, so readers can convert them to the language of their choice.

    ImageMagick 3.7.8 released, including a new set of image library plug-ins

          A new release of ImageMagick is available from . This release includes a "Plug In" library containing the various image libraries ImageMagick needs to run. Alexander has uploaded this new release to Sunsite as well as ImageMagick's Web site.

    MpegTV Player v0.9

          A new version of the MpegTV Player has been released. This version now includes audio support.

    Imaging Technology Inc. IC-PCI frame grabber board driver

          The second public release (v 0.2.0) of a kernel module for the Imaging Technology Inc. IC-PCI frame grabber board (rev 1) and the AM-VS acquisition module has been released. This driver is maintained by GOM mbH (Gesellschaft fuer optische Messtechnik) TU Braunschweig, Institute for Experimental Mechanics. A full motion video test program and a read function for original camera files are included.
    Author:
    Maintained by:
    This software is not really free (see the LICENSE file).

    Viewmol 2.0 released

          I don't know much about this tool, but it appears to have a lot of graphics-related features, so I thought I'd mention it here. The LSM gives the following information:

    Viewmol is a program for the visualization of outputs from quantum chemical as well as from molecular mechanics programs. Currently supported are Gaussian 9x, Discover, DMol/DSolid, Gulp, Turbomole, and PDB files. Properties visualized include geometry (with various drawing modes), vibrations (animated or with arrows), optimization history/MD trajectories, MO energy level diagram, MOs, basis functions, and electron density. Drawings can be saved as TIFF, HPGL, PostScript, or input files for Rayshade.

    ftp://ftp.ask.uni-karlsruhe.de/pub/education/chemistry/viewmol-2.0.tgz


    Did You Know?

          3D Site (http://www.3dsite.com/3dsite) is a Web site devoted to everything 3D. There are job postings, links to free repositories of 3D models and lots of other valuable information.

          3D Cafe (http://156.46.199.2/3dcafe/) is another Web site with various useful 3D information. It also maintains a large collection of DXF and 3DS model files.

    An Important Survey

          I've been talking to a couple of publishers about doing a book aimed at Linux users. I'd like to write a User's Guide for the Gimp but the publisher feels a more general text on doing Web-based graphics might have a wider appeal (face it - the Linux audience just isn't the size of the MS audience - yet - but the publishers are considering both types of books). I told them I'd ask my readers which type of text they'd like to see. The Gimp book would include details on how to use each of the applications features as well as a number of tutorials for doing various types of effects. The book on doing graphics for Web pages would include discussions on using HTML, information on tools besides the Gimp and a few case studies (including something on animation). However, the Web book wouldn't go into as much detail for each of the tools. That information would be more general in nature.
          I don't have a server to run any CGI scripts to register votes, so simply send me e-mail with your opinions. Thanks!

    A Call for Help

          I plan on covering more 3D tools in the future, but I have to learn to use them first. The next tool I'm going to look into is BMRT. If you use BMRT and want to help me get started, please drop me a note. I'd like to do an introduction to BMRT in the March issue if possible, but I want to make sure I know what I'm talking about first. Thanks!

    Musings

         

    A Gimp User's Story (or "Why I Use the Gimp")

    The following piece was posted on the Gimp User's Mailing list by .

          At work, we have a "Library News Network", which is actually a 386 pc running a TV via a video converter in a continual slideshow with information about upcoming events in the law library and the law school. Last year, my boss did some stuff in Freelance Graphics which, quite frankly, was rather limited in effect.
          This year, it's my baby, and I'm making the slideshow (640x480x256 GIF files, run by a simple DOS program and looped by a batch file) in the GIMP. Here are some things I've done to make the text more readable and make the display reasonably eye-catching. Nothing fancy, but hopefully the tricks will give other people ideas to play with on their own.
          First, don't use a plain background. The blend tool is very nice for this, and shaded green or blue with bright text is rather nice looking. Start with a color and add some noise. Create a blend image of the same size and multiply it by the image with noise. This creates a very cool background for a slide. Better yet, if there's an appropriate photograph, use it! (I used a gorgeous picture of Yosemite Park to announce an environmental law symposium, and a decent photo of the U.S. Supreme Court justices to announce our Supreme Court Preview.)
          On the subject of backgrounds, since I don't remember seeing this tip, here's a quickie for clouds: Make a plasma of the appropriate size, grayscale it, convert it back to color, and Brightness/Contrast/Gamma it into submission. I usually knock the brightness up about 75-100, the blue up to around 5, and the green to about 2. Instant pretty sky. (Obviously, skies from other planets could be done with reds and greens and whatnot.)
          For the text, nothing beats some good fonts. Hit a font archive, or buy a $10-$15 CD filled with fonts. Granted, I have the Caldera Network Desktop, so I can use some fonts that (I think) XFree can't, thanks to the font server, but it's worth a shot. I got a CD with 1250 fonts for $13. [Ed. Next month I'll cover how to add fonts to your system so you can use them with the Gimp. mjh]
          Here's a variation on the rounded-text tips: work out your text, then Duplicate it once and Offset it once (say 4x4). Edge Detect then Invert the duplicate and Gaussian Blur the offset twice. Multiply the resulting images, and use the original as a mask to composite something else over the image resulting from the multiplication. Very nice, edged & floating/shadowed text. Shows up great on a TV monitor.
          For the text, use any appropriate single color. Bright colors and high contrast work very well for what I do, although I've played with textures, rippled blends, plasma clouds, and what-not.
          Of course, it can be spiced up with all sorts of clipart (I heartily recommend Barry's Clipart Server (www.barrysclipart.com), from which I shamelessly borrow), and voila, instant slideshow!
          I have left our Fall Break edition of the LNN at http://www.lawlib.wm.edu/LNN-old/ if you want to see some of what can be done with it. You might be better off watching the show itself, where the graphics aren't resized to 320x240. Also, the latest version of these is available at http://www.lawlib.wm.edu/LNN/.


    [Ed. Later Mike posted another message that included some interesting effects. I thought it might be appropriate to include them with his other posting.]


          Recently, while wandering through the plug-ins available, I found the charcoal plug-in. Compiled it, added it, used it. Rather nifty, actually. However, it got me thinking and experimenting, and I produced two potentially interesting effects:

    (1) Pastel sketch: Take a color (RGB) image, Edge-detect it, Invert, and (optionally) contrast autostretch. On many images, this will produce a nifty pastel sketch. If the image is too detailed, degrade the color or pixelize it first; otherwise you may end up with too many extraneous lines.

    (2) Watercolor sketch: Take a color (RGB) image, make a grayscale of it. Edge Detect the grayscale (this will give you the sketch lines); this can be hard to balance the way you want, so you may want to threshold it or pixelize the image first. Then, pixelize and degrade the main image to 32 colors (16 or 20 works even better). Eliminate the background you don't want, Gaussian blur it a few times, and brighten it some. Multiply the edging onto it. Voila; (nearly) instant watercolor, akin to the court sketches on news shows.


    Jim Blinn's Corner - A Trip Down the Graphics Pipeline

    I am not formally trained in computer graphics (1). Everything I know I've learned in the last year or so by reading, examining source code, and through the kind assistance of many members of the Net. So my ability to understand some of the more formal texts on computer graphics is limited.
          Given this limitation, I found I was still able to read and comprehend a good portion of Jim Blinn's book Jim Blinn's Corner - A Trip Down the Graphics Pipeline, which is a collection of articles taken from his column in the IEEE Computer Graphics and Applications journal. This book is the first of what may be two books, assuming there is sufficient interest in the first book. The second will cover a set of pixel arithmetic articles taken from the same column.
          In the preface Jim describes how he used a writing style that is "certainly lighter than a typical SIGGRAPH paper, both in depth and in attitude." I can't agree more. Computer graphics should be a fun subject and, despite the math, this book does provide a giggle here and there.
          Don't get me wrong, though. There is plenty of technical detail on how to compute positions in 3D space, perspective shadows, and subpixelic particles. Hefty stuff for the beginner. Nearly incomprehensible to the person who hasn't used matrix arithmetic in the past 8 years. Still, chapters like The Ultimate Design Tool (which talks about how an idea should start), and Farewell to Fortran (which talks about using various languages in computer graphics) provided enough non-mathematical discussion to let my brain recover while still keeping my interest piqued.
          I haven't read the book front to back yet. I'm saving what's left (about half the book) for my 16 days of freedom scheduled to start later this month. It's first on my reading list. Second will be my college Linear Algebra text. The first half of Jim's book reminded me of how much I'd forgotten in 8 years. Like the saying goes, one must strive for the impossible before they know what is possible.

    The IRTC - A raytracing competition for the fun of it

    For the past few months, I've been helping to administer an Internet-based competition for users of raytracing software. This competition, the Internet Ray Tracing Competition or IRTC, is open to anyone interested in creating 3D images using software on any platform as long as the software falls within a few basic guidelines. It is based on another competition started back in 1994 by Matt Kruse. Matt eventually had to close down the contest due to the enormous amount of time it takes to run such a contest. At the time, he was more or less doing all the work himself.
          Earlier this year Chip Richards started to organize the contest once again. A group of interested individuals signed up to help out. In the end, most of us (myself included) provide only organizational input - ideas for rules or input on rulings regarding cheating (yes, there has been some of that), helping to select topics, and so forth. Most of the real work has been done by Chip, Bill Marrs, and Jon Peterson (although Jon has since had to move on to other things).
          The contest is made up of independent rounds that last two months. Each round has a topic which entrants must use as the basis for their images. Entries are supposed to be new images, created during the span of the contest; however, most people use bits and pieces of older models that they or someone else created. The tools allowed vary, but raytracing tools are preferred and no post-processing is allowed (for example, you can't add a lens flare after the image has been rendered). Anyone is allowed to vote (currently) on the images, and winners receive small prizes like CDs and prints of their images.

    more IRTC... (same page as AC3D review)

    Resources

    The following links are just starting points for finding more information about computer graphics and multimedia in general for Linux systems. If you have some application-specific information for me, I'll add it to my other pages, or you can contact the maintainer of some other web site. I'll consider adding other general references here, but application- or site-specific information needs to go into one of the general references below rather than be listed here.

    Linux Graphics mini-Howto
    Unix Graphics Utilities
    Linux Multimedia Page

    Future Directions

    Next month:


    1. Anyone having an extra, unclaimed scholarship in computer graphics is encouraged to contact me. I give preference to those who have them within commuting distance of Denver, where I live.


    Copyright © 1997, Michael J. Hammel
    Published in Issue 13 of the Linux Gazette


    [ TABLE OF CONTENTS ] [ FRONT PAGE ]  Back  Next



    More...


    Musings

    © 1996

    Review: The AC3D Modeller


          There are only a few 3D modellers available for Linux: AMAPI (which may now only be available for the Mac, based on one report I've received), Midnight Modeller, SCED/SCEDA, and AC3D. Each of these has its advantages and disadvantages. I've tried each of them at least briefly. A couple, SCED and AC3D, I've used to actually create scenes. Let's take a quick look at one of these - AC3D.
          AC3D comes from . It is a shareware modeller that comes in binary format only. It is available for Linux, SGIs and Suns (both SunOS and Solaris). Once registered you have access to a private Web site from which you can download the full version of the software. Documentation is a bit sparse (a common problem with much of the software available for Linux, in this author's opinion), consisting of about 12 or 13 pages formatted in either HTML or PostScript. The distribution package contains the binary, the HTML manual with a few images, and a set of object files that are necessary for packages which use the Mesa Graphics Library, which AC3D does.
    Figure 1: Example AC3D session (this is taken from the AC3D Web site).

          The interface consists of 4 view windows and a control panel. There are 3 orthographic views and a 3D view. Changes in one of the orthographic view windows are reflected in the other views. Edits are not allowed in the 3D view. The 3D view can provide wireframe, filled or textured surfaces. My system is not quite fast enough to handle anything but the wireframe surface so I won't be able to say much about the texturing features of the modeller.
          AC3D supports a number of import file formats, including DXF and Lightwave files. It can export POV, RIB, VRML and a couple of other formats. Since the modeller is based on vertices (as opposed to primitives like spheres or boxes) it is quite easy to manipulate basic shapes into more complex ones. This can be a disadvantage to those used to the CSG aspects of SCED or to users of POV-Ray, but it really doesn't take that long to get used to. Even though the modeller bases its shapes on vertices, there is still a collection of basic shapes provided: disk, line, box, sphere, and mesh are just some of these. These shapes are displayed with a given number of vertices. The number of vertices can be configured and it's possible to add more vertices where necessary.
         
          One of the nicest features is the ability to extrude a 2D shape into 3 dimensions. Let's follow an example of this. First, select the "ellipse" drawing function from the control panel and create a stretched out ellipse in the XY orthographic view.
    Figure 2: An ellipse
          Next, change the Edit type to "vertex" in the control panel and select all the vertices on the lower half of the ellipse (but not the ones on the end of the ellipse). Delete these with the "delete" function in the control panel. Then select the two lowest vertices and insert some new vertices (as shown in Figure 3).
    Figure 3: The ellipse has been halved and some new vertices added.
          This next part is a little tricky. What you want to do is select the vertices on each end of the object and move them, one at a time, until you get a slightly rounded effect. Then select 4 or 5 of the vertices on the back end (the left side in Figure 4) and stretch them out a little, to flatten the wing's trailing edge. Then get rid of the extra vertices along the bottom of the wing's edge by selecting them and using the "delete" function in the control panel. The result should look something like Figure 4.
    Figure 4: The wing edge takes shape
          This isn't bad, but a wing should be smoother around the top and edges, so change the Edit type to "Object" and select the wing. Then use the "Spline Surfaces" option from the Object pull-down menu. This adds a bunch of new vertices to the object and creates a smoother line around the wing.
    Figure 5: Smoothing the wing edge.
          Now let's take this simple shape and extrude it. Make sure the Edit type is still "Object" and select the wing edge. We need to change to the XZ orthographic view window. Up to this point we've been using the XY viewport. Click on "Extrude" under the Mouse options in the control panel. Grab the object in the XZ viewport and drag up. The original points stay put and a new set of points is moved to wherever you drag. Once you let go of the mouse button you'll see the new points get connected to their corresponding points on the original object. Notice how the connecting lines aren't quite straight. This would be bad for a real wing, so we'll straighten them out.
    Figure 6: The wing edge gets extruded.
          In this next figure, the control panel was used to change the Edit mode to "vertices" and the vertices on the new side of the extruded object have been selected. Once selected, the "move" option under the Mouse features allows the selected vertices to be moved as a group. When these vertices are moved the lines connecting them to the opposite end are redrawn. You can play with this a bit in order to get the connecting lines to become straight. Note that it's not absolutely necessary to correctly align the two ends of the extruded object, but this is one way to do so if you feel it necessary.
    Figure 7: The ends of the extruded object are aligned.
          Now switch back to the "object" Edit mode and select the object. The bounding box (in green) has handles that can be grabbed and dragged to resize the object. Use the middle top and bottom handles to make the object wider in the XZ view.
    Figure 8: Stretch the object a bit.
          Next, switch back to "vertices" Edit mode and select the vertices on one end of the object. Click on "Create Surface" under Functions in the control panel. If the surface (as viewed in the 3D view) does not appear solid you can select "Poly" under Surface to create a solid surface to close the end of the object. Repeat this process for the other end.

          Voila! You've got a solid-surfaced wing, just like the one in Figure 9. (The grid is an option for the 3D view window and not part of the image.) Of course, this is a pretty simplistic example, but you should get the idea of how easy it is to create shapes using AC3D. You'll need to export the file to POV or RIB format and add some real textures to finish up the project, of course.

    Figure 9: The solid surfaced wing.

          When I first started examining modellers I got my hands on SCED, a nifty modeller from Stephen Chenney. One of the nice features of SCED is that it is constraint based - you can join objects using CSG and then constrain them to certain points. This allows you to create an arm, for example, that can bend only at the elbow. AC3D works similarly in that you can rotate any set of points around a single point within one of the orthographic views. For example, if I created an arm I could select "Rotate" from the control panel and then use the mouse to rotate the arm around a single point, such as the elbow, in one of the 2D view windows. If I need it to move in 3D I need to do this type of rotation in 2 or more of the 2D view windows. This process is a little different than SCED, which can move objects in 3 dimensions, but the result is the same. In fact, at times it can be a little easier to keep your bearings using multiple rotations in 2D.
          Since I've never used any of the modellers available for other systems (such as high end modellers on SGIs or any of the modellers available for Microsoft or Mac systems) I can't say how well AC3D compares to them. I do know that I found the modeller fairly easy to learn, but I tend to be more motivated than some folks. AC3D could use some online help (what's the difference between a "Poly" and a "Polyline", for example) and more detailed documentation in general. It would also be nice to be able to unhide selected objects instead of all hidden objects. Andy has told me that a new version coming soon will include the ability to specify the exact dimensions of a selected object or set of vertices. This is a very important feature in my eyes. I tend to like to use modellers to create individual objects and then use the conditional constructs of POV-Ray to position multiple copies of them, such as trees or rocks or houses. Constraining an object to a unit size makes it easier to position and resize objects using POV-Ray.
          Of all the modellers I've tried AC3D is probably the easiest to use. Its ability to import formats like DXF gives it a step up on SCED, although I really like the latter too. I don't particularly mind that you don't get the source to AC3D since I'm mostly interested in just using the modeller and not in developing new features for it. It would be nice if there were a plug-in interface, but I'm not such a power user yet that I need that feature. Aside from a lack of detailed documentation, and a few keystrokes that are already used by fvwm, I find the AC3D modeller worth the registration price.


    more IRTC...


          The contest started in earnest in May/June of 1996. The topic then was Time and there were some stunning entries. In July/August we had fewer entries, but the topic - Summer - was a little tougher to nail down. In September/October we hit the jackpot with Science Fiction. Well over 200 entries were submitted for this round. That's quite a difference from the 20-30 submitted during Matt's original contests. Fortunately much of the work for viewing, voting, and tabulating information has been automated. The contest has been great fun and has been accompanied by lively discussions on the associated irtc-l mailing list.
          Unfortunately, there are always those who have to try to ruin things for everyone else. We had some people submit images that were fair but not likely winners. They then submitted multiple votes. A vote consists of ratings of all the images, with values between 1-20 for each in 3 categories - needless to say this takes awhile to accomplish. Any vote that does not include ratings for all images is not counted in the final tally. The multiple votes were done offline (which is permissible) and submitted from different email accounts. The artists' images received very high marks while all the rest received very low (within a very small range) ratings. Then they got some of their friends to do the same thing. Beyond this, others have submitted numerous entries that they had made in the past (prior to the contest) that just happen to fit the category (how many 3D artists, for fun or profit, have *never* made a space scene?) in order to turn the contest into their own private gallery. The spirit of the competition is lost on some people, I'm afraid.
          I haven't done any of the automation nor have I worked on the very nice web site for the contest. But I've watched Chip and Bill do so. It's very frustrating knowing how much effort they put into this, trying very hard not to give themselves or anyone else unfair advantages and still make the contest fun for everyone, only to see someone still try to cheat the system. Why? For a couple of CDs? Remember when the Internet was a friendly, honest place?
          Still, the contest continues and the Admin Team is working on ways of keeping the contest fun, open to participation, and fair. The guys could use a new host for the contest. Walnut Creek, the current host, appears to be limiting ftp connections, and the amount of disk space required for 200+ images can get rather large.
          If you are into 3D rendering for the fun of it, you owe it to yourself to take a shot at the IRTC. It's fun to see how your images stack up against others. Many of the voters offer comments on the images, which can be very useful for any future images you render. Check out the IRTC Web Site to get more details and join in!

    © 1996 by


    More...


    Musings

    History of the Portable Network Graphics (PNG) format

    by

    Prehistory
          The Story of PNG actually begins way back in 1977 and 1978 when two Israeli researchers, Jacob Ziv and Abraham Lempel, first published a pair of papers on a new class of lossless data-compression algorithms, now collectively referred to as ``LZ77'' and ``LZ78.'' Some years later, in 1983, Terry Welch of Sperry (which later merged with Burroughs to form Unisys) developed a very fast variant of LZ78 called LZW. Welch also filed for a patent on LZW, as did two IBM researchers, Victor Miller and Mark Wegman. The result was...you guessed it...the USPTO granted both patents (in December 1985 and March 1989, respectively).
          Meanwhile CompuServe--specifically, Bob Berry--was busily designing a new, portable, compressed image format in 1987. Its name was GIF, for ``Graphics Interchange Format,'' and Berry et al. blithely settled on LZW as the compression method. Tim Oren, Vice President of Future Technology at CompuServe (now with Electric Communities), wrote: ``The LZW algorithm was incorporated from an open publication, and without knowledge that Unisys was pursuing a patent. The patent was brought to our attention, much to our displeasure, after the GIF spec had been published and passed into wide use.'' There are claims [1] that Unisys was made aware of this as early as 1989 and chose to ignore the use in ``pure software''; the documents to substantiate this claim have apparently been lost. In any case, Unisys for years limited itself to pursuit of hardware vendors--particularly modem manufacturers implementing V.42bis in silicon.
          All of that changed at the end of 1994. Whether due to ongoing financial difficulties or as part of the industry-wide bonk on the head provided by the World Wide Web, Unisys in 1993 began aggressively pursuing commercial vendors of software-only LZW implementations. CompuServe seems to have been its primary target at first, culminating in an agreement--quietly announced on 28 December 1994, right in the middle of the Christmas holidays--to begin collecting royalties from authors of GIF-supporting software. The spit hit the fan on the Internet the following week; what was then the comp.graphics newsgroup went nuts, to use a technical term. As is the way of Usenet, much ire was directed at CompuServe for making the announcement, and then at Unisys once the details became a little clearer; but mixed in with the noise was the genesis of an informal Internet working group led by Thomas Boutell [2]. Its purpose was not only to design a replacement for the GIF format, but a successor to it: better, smaller, more extensible, and FREE.

    The Early Days (All Seven of 'Em)
          The very first PNG draft--then called ``PBF,'' for Portable Bitmap Format--was posted by Tom to comp.graphics, comp.compression and comp.infosystems.www.providers on Wednesday, 4 January 1995. It had a three-byte signature, chunk numbers rather than chunk names, maximum pixel depth of 8 bits and no specified compression method, but even at that stage it had more in common with today's PNG than with any other existing format.
          Within one week, most of the major features of PNG had been proposed, if not yet accepted: delta-filtering for improved compression (Scott Elliott); deflate compression (Tom Lane, the Info-ZIP gang and many others); 24-bit support (many folks); the PNG name itself (Oliver Fromme); internal CRCs (myself); gamma chunk (Paul Haeberli) and 48- and 64-bit support (Jonathan Shekter). The first proto-PNG mailing list was also set up that week; Tom released the second draft of the specification; and I posted some test results that showed a 10% improvement in compression if GIF's LZW method was simply replaced with the deflate (LZ77) algorithm. Figure 1 is a timeline listing many of the major events in PNG's history.

    4 Jan 95 PBF draft 1 (Thomas Boutell)
    4 Jan 95 delta-filtering (Scott Elliott)
    4 Jan 95 deflate compression (Tom Lane et al.)
    4 Jan 95 24-bit support (many)
    5 Jan 95 TeleGrafix LZHUF proposal (same or slightly larger)
    6 Jan 95 PNG name (Oliver Fromme)
    7 Jan 95 PBF draft 2 (Thomas Boutell)
    7 Jan 95 ZIF early results (Greg Roelofs)
    7 Jan 95 internal CRC(s) (Greg Roelofs)
    8 Jan 95 gamma chunk (Paul Haeberli)
    8 Jan 95 48-, 64-bit support (Jonathan Shekter)
    9 Jan 95 FGF proposal, implementation (Jeremy Wohl)
    10 Jan 95 first NGF/PBF/proto-PNG mailing list (Jeremy Wohl)
    15 Jan 95 PBF draft 3 (Thomas Boutell)
    16 Jan 95 CompuServe announces GIF24 development (Tim Oren)
    16 Jan 95 spec available on WWW (Thomas Boutell)
    16 Jan 95 PBF draft 4 (Thomas Boutell)
    23 Jan 95 PNG draft 5 (Thomas Boutell)
    24 Jan 95 PNG draft 6 (Thomas Boutell)
    26 Jan 95 final 8-byte signature (Tom Lane)
    1 Feb 95 PNG draft 7 (Thomas Boutell)
    2 Feb 95 Adam7 interlacing scheme (Adam Costello)
    7 Feb 95 CompuServe announces PNG == GIF24 (Tim Oren)
    13 Feb 95 PNG draft 8 (Thomas Boutell)
    7 Mar 95 PNG draft 9 (Thomas Boutell)
    11 Mar 95 first working PNG viewer (Oliver Fromme)
    13 Mar 95 first valid PNG images posted (Glenn Randers-Pehrson)
    1 May 95 pnglib 0.6 released (Guy Eric Schalnat)
    1 May 95 zlib 0.9 released (Jean-loup Gailly, Mark Adler)
    5 May 95 PNG draft 10 (Thomas Boutell)
    13 Jun 95 PNG home page (Greg Roelofs)
    8 Dec 95 PNG spec 0.92 released as W3C Working Draft
    23 Feb 96 PNG spec 0.95 released as IETF Internet Draft
    28 Mar 96 deflate and zlib approved as Informational RFCs (IESG)
    22 May 96 deflate and zlib released as Informational RFCs (IETF)
    1 Jul 96 PNG spec 1.0 released as W3C Proposed Recommendation
    11 Jul 96 PNG spec 1.0 approved as Informational RFC (IESG)
    4 Aug 96 VRML 2.0 spec released with PNG as requirement (VAG)
    1 Oct 96 PNG spec 1.0 approved as W3C Recommendation
    14 Oct 96 image/png approved (IANA)
    Figure 1: a PNG timeline

          Perhaps equally interesting are some of the proposed features and design suggestions that ultimately were not accepted: the Amiga IFF format; uncompressed bitmaps either gzip'd or stored inside zipfiles; thumbnail images and/or generic multi-image support; little-endian byte order; Unicode UTF-8 character set for text; YUV and other lossy image-encoding schemes; and so forth. Many of these topics produced an amazing amount of discussion--in fact, the main proponent of the zipfile idea is still making noise two years later.

    Onward, Frigidity
          One of the real strengths of the PNG group was its ability to weigh the pros and cons of various issues in a rational manner (well, most of the time, anyway), reach some sort of consensus and then move on to the next issue without prolonging discussion on ``dead'' topics indefinitely. In part this was probably due to the fact that the group was relatively small, yet possessed of a sufficiently broad range of graphics and compression expertise that no one felt unduly ``shut out'' when a decision went against him. (All of the PNG authors were male. Most of them still are. I'm sure there's a dissertation in there somewhere...) But equally important was Tom Boutell, who, as the initiating force behind the PNG project, held the role of benevolent dictator--much the way Linus Torvalds does with Linux kernel development. When consensus was impossible, Tom would make a decision, and that would settle the matter. (On one or two rare occasions he might later have been persuaded to reverse the decision, but this generally only happened if new information came to light.)
          In any case, the development model worked: by the beginning of February 1995, seven drafts had been produced, and the PNG format was settling down. (The PNG name was adopted in Draft 5.) The next month was mainly spent working out the details: chunk-naming conventions, CRC size and placement, choice of filter types, palette-ordering, specific flavors of transparency and alpha-channel support, interlace method, etc. CompuServe was impressed enough by the design that on the 7th of February they announced support for PNG as the designated successor to GIF, supplanting what they had initially referred to as the GIF24 development project. [3] By the beginning of March, PNG Draft 9 was released and the specification was officially frozen--just over two months from its inception. Although further drafts followed, they merely added clarifications, some recommended behaviors for encoders and decoders, and a tutorial or two. Indeed, Glenn Randers-Pehrson has kept some so-called ``paleo PNGs'' that were created at the time of Draft 9; they are still readable by any PNG decoder today. [4]

    Oy, My Head Hurts
          But specifying a format is one thing; implementing it is quite another. Although the original intent was to create a "lightweight" format--and, compared to TIFF or even JPEG, PNG is fairly lightweight--even a completely orthogonal feature set can introduce substantial complications. For example, consider progressive display of an image in a web browser. First comes straight decoding of the compressed data; no problems there. Then any line-filtering must be inverted to get the actual image data. Oops, it's an interlaced image: now pixels are appearing here and there within each 8x8 block, so they must be rendered appropriately (and possibly buffered). The image also has transparency and is being overlaid on a background image, adding a bit more complexity. So far we're not much worse off than we would be with an interlaced, transparent GIF; the line filters and 2D interlacing scheme are pretty straightforward extensions to what programmers have already dealt with. Even adding gamma correction to the foreground image isn't too much trouble.
          But wait, it's not just simple transparency; we have an alpha channel! And we don't want sparse display--we really like the replicating progressive method Netscape Navigator uses. Now things are tricky: each replicated pixel-block has some percentage of the fat foreground pixel mixed in with complementary amounts of the background pixels in the block. And just because the current fat pixel is 65% transparent (or, even worse, completely opaque) doesn't mean later ones in the same block will be, too: thus we have to remember all of the original background pixel-values until their final foreground pixels are composited and overlaid. Toss in the ability to render all of this nicely on an 8-bit, colormapped display, and most programmers' heads will explode.
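
          For readers who like to see the arithmetic spelled out, here is a rough little sketch of compositing a single foreground pixel over a background pixel. It is my own illustration, not code from any PNG library, and the RGB values and the 35% opacity are invented; each output channel is just a weighted average of the two inputs.

        #!/bin/sh
        # Composite one RGB foreground pixel over a background pixel.
        # Fields: fg_r fg_g fg_b  bg_r bg_g bg_b  alpha   (alpha = foreground opacity, 0..1)
        echo "200 30 30  20 20 200  0.35" | gawk '{
            a = $7
            for (i = 1; i <= 3; i++)                  # red, green, blue
                printf "%d ", a * $i + (1 - a) * $(i + 3)
            print ""
        }'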

    Make It So!
          Of course, some of these things are application (presentation or front-end) issues, not general PNG-decoding (back-end) issues. Nevertheless, a good PNG library should allow for the possibility of such applications--which is another way of saying that it should be general enough not to place undue restrictions on any programmer who wants to implement such things.
          Once Draft 9 was released, many people set about writing PNG encoders and/or decoders. The true glory is really reserved for three people, however: Info-ZIP's Jean-loup Gailly and Mark Adler (both also of gzip fame), who originally wrote Zip's deflate() and UnZip's inflate() routines and then, for PNG, rewrote them as a portable library called zlib [5]; and Guy Eric Schalnat of Group 42, who almost single-handedly wrote the libpng reference implementation (originally ``pnglib'') from scratch. [6] The first truly usable versions of the libraries were released two months after Draft 9, on the first of May, 1995. Although both libraries were missing some features required for full implementation, they were sufficiently complete to be used in various freeware applications. (Draft 10 of the specification was released at the same time, with clarifications resulting from these first implementations.)

    Fast-Forward to the Present
          The pace of subsequent developments slowed at that point. This was partly due to the fact that, after four months of intense development and dozens of e-mail messages every day, everyone was burned out; partly because Guy controlled libpng's development and became busy with other things at work; and partly because of the perception that PNG was basically ``done.'' The latter point was emphasized by a CompuServe press release to that effect in mid-June (and one, I might add, in which their PR guys claimed much of the credit for PNG's development, sigh).
          Nevertheless, progress continued. In June of 1995 I set up the PNG home page, now grown to roughly a dozen pages [7]; Kevin Mitchell officially registered the ``PNGf'' Macintosh file ID with Apple Computer. In August Alexander Lehmann and Willem van Schaik released a fine pair of additions to the NetPBM image-manipulation suite, particularly handy under Linux: pnmtopng and pngtopnm version 2.0. And in December at the Fourth International World Wide Web Conference, the World Wide Web Consortium (W3C) released the PNG Specification version 0.92 as an official standards-track Working Draft.
          1996 saw the February release of version 0.95 as an Internet Draft by the Internet Engineering Task Force (IETF), followed in July by the Internet Engineering Steering Group's (IESG) approval of version 1.0 as an official Informational RFC. (However, the IETF secretary still hasn't issued the actual RFC number at the time of this writing, five months later. Sigh.) The Virtual Reality Modeling Language (VRML) Architecture Group in early August adopted PNG as one of the two required image formats for minimal VRML 2.0 conformance. [8] Meanwhile the W3C promoted the spec to Proposed Recommendation status in July and then to full Recommendation status on the first of October. [9] Finally, in mid-October the Internet Assigned Numbers Authority (IANA) formally approved ``image/png'' as an official Internet Media Type, joining image/gif and image/jpeg as non-experimental image formats for the Web. Much of this standardization would not have happened nearly as quickly without the tireless efforts of Tom Lane and Glenn Randers-Pehrson, who took over editing duties of the spec from Thomas Boutell.

    Current Status
          So where are we today? The future is definitely bright for PNG, and the present isn't looking too bad, either. I now have over 125 applications listed [10] with PNG support either current or planned (mostly current); among the ones available for Linux are:


          Discerning readers will note the conspicuous absence of Netscape Navigator. Despite the fact that Netscape was aware of the PNG project from the beginning and unofficially indicated ``probable support''; despite the nice benefits gamma correction, alpha support and 2D interlacing bring to WWW applications; despite the fact that the WWW Consortium, of which Netscape is a member, released the PNG spec as its first official Recommendation; despite the requirement to support PNG in VRML 2.0 viewers like Netscape's own Live3D plug-in; and despite considerable pestering by members of the PNG group and the Internet community at large, Netscape is still only ``considering'' future support of PNG. Until Netscape either supports PNG natively or gets swept away by Microsoft or someone else, PNG's usefulness as an image format for the Web is considerably diminished.
          On the other hand, our buds at Microsoft recognized the benefits of PNG and apparently embraced it wholeheartedly. They have not only made it the native image format of the Office97 application suite but have also repeatedly promised to put it into Internet Explorer (theoretically by the time of the 4.0 betas--we'll see about that). Assuming they do, Netscape is almost certain to follow suit. (See? Microsoft is good for something!) At that point PNG should enjoy a real burst of WWW interest and usage.
          In the meantime, PNG viewing actually is possible with Linux Netscape; it's just not very useful. Rasca Gmelch is working on a Unix plug-in with (among other things) PNG support. Although it's still an alpha version and requires ImageMagick's convert utility to function, that's not the problem; Netscape's brain-damaged plug-in architecture is. Plug-ins have no effect on HTML's IMG tag: if there's no native support for the image format and no helper app defined, the image is ignored regardless of whether an installed plug-in supports it. Instead you must use Netscape's EMBED extension. That means anyone who wants universally viewable web pages loses either way: PNG with IMG doesn't work under Netscape, and PNG with EMBED doesn't work under much of anything except Netscape and MSIE (and those only if the user has installed a working PNG plug-in).
          But support by five or six other Linux web browsers ain't bad, and even mainstream applications like Adobe's Photoshop now do PNG natively. More are showing up every week, too. Life is good.

    The Future
          As VRML takes off--which it almost certainly will, especially with the advent of truly cheap, high-performance 3D accelerators--PNG will go along for the ride. (JPEG, which is the other required VRML 2.0 image format, doesn't support transparency.) Graphic artists will use PNG as an intermediate format because of its lossless 24-bit (and up) compression and as a final format because of its ability to store gamma and chromaticity information for platform-independence. Once the ``big-name'' browsers support PNG natively, users will adopt it as well--for the 2D interlacing method, the cross-platform gamma correction, and the ability to make anti-aliased balls, buttons, text and other graphic elements that look good on *any* color background (no more ``ghosting,'' thanks to the alpha-channel support).
          Indeed, the only open issue is support for animations and other multi-image applications. In retrospect, the principal failure of the PNG group was its delay in extending PNG to MNG, the "Multi-image Network Graphics" format. As noted earlier, everyone was pretty burned out by May 1995; in fact, it was a full year before serious discussion of MNG resumed. As (bad) luck would have it, October 1995 is when the first Netscape 2.0 betas arrived with animation support, giving the (dying?) GIF format a huge resurgence in popularity.
          At the time of this writing (mid-December 1996), the MNG specification has undergone some 27 drafts--almost entirely written by Glenn Randers-Pehrson--and is close to being frozen. A couple of special-purpose MNG implementations have been written, as well. But MNG is too late for the VRML 2.0 spec, and despite some very compelling features, it may never be perceived as anything more than PNG's response to GIF animations. Time will tell.

    At Last...
          It's always difficult for an insider to render judgment on a project like PNG; that old forest-versus-trees thing tends to get in the way of objectivity. But it seems to me that the PNG story, like that of Linux, represents the best of the Internet: international cooperation, rapid development and the production of a Good Thing that is not only useful but also freely available for everyone to enjoy.
          Then again, maybe I'm just a shameless egotist (nyuk nyuk nyuk). You decide....

    Acknowledgments
          I'd like to thank Jean-loup Gailly for his excellent comp.compression FAQ, which was the source for much of the patent information given above. [11] Thanks also to Mark Adler and JPL, who have been the fine and generous hosts for the PNG home pages, zlib home pages, Info-ZIP home pages and my own, personal home pages. (Through no fault of Mark's, that will all come to an end as of the new year; oddly enough, JPL has decided that none of it is particularly relevant to planetary research. Go figure.)

    References

    [1] Raymond Gardner, rgardner@teal.csn.org, 8 Jan 1995 23:11:58 GMT, comp.graphics/comp.compression, Message-ID <3eprfu$jqs@news-2.csn.net>. See also Michael Battilana's article discussing the legal history of the GIF/LZW controversy:
          http://www.cloanto.com/users/mcb/19950127giflzw.html
    [2] http://www.boutell.com/boutell/
    [3] http://www.w3.org/pub/WWW/Graphics/PNG/CS-950214.html
    [4] http://www.rpi.edu/~randeg/paleo_pngs.html
    [5] http://quest.jpl.nasa.gov/zlib/
    [6] ftp://swrinde.nde.swri.edu/pub/png/src/
    [7] http://quest.jpl.nasa.gov/PNG/ (but probably moved to http://www.wco.com/~png/ by 1 January 1997)
    [8] http://vag.vrml.org/VRML2.0/FINAL/spec/part1/conformance.html
    [9] http://www.w3.org/pub/WWW/TR/REC-png.html
    [10] http://quest.jpl.nasa.gov/PNG/pngapps.html
    [11] http://www.cis.ohio-state.edu/hypertext/faq/usenet/compression-faq/top.html
    © 1996 by


    "Linux Gazette...making Linux just a little more fun! "


    Indexing Texts with Smart

    By Hans Paijmans


    1. The uses of Linux and MS-DOS

    Although my colleagues here at Tilburg University may think that I spend my time fiddling with Linux on a PC that could be put to better uses, they are wrong. The 'fiddling with Linux' I do at home; at work I only do the bare minimum necessary to keep Linux fed and happy. As most readers of this journal know, this involves making the occasional backup and, for the rest, nothing.

    When I sit in front of my PC, I work (well, mostly). Linux makes it possible to do my work with a minimum of fuss, and a big part of the credit for this goes to Jacques Gelinas, the man who wrote Umsdos: a layer between the Unix operating system and the vanilla MS-DOS 8+3 FAT system. This makes it possible to access the DOS partition of my hard disk from either operating system. This is good news, because I am totally dependent on two programs: SMART, an indexing and retrieval system, and SPSS for Windows to twiddle the data I obtain from SMART. SMART only runs under Unix (and not all Unixes for that matter) and SPSS4Windows, obviously, runs under MS-Windows, and whatever the virtues of that operating system may be, you emphatically do not want to use it in any kind of experimental environment.

    I suppose that SPSS (Statistical Package for the Social Sciences) will be familiar to most Linux users. If not: SPSS is just what it says, a statistical package, not only for the 'social sciences' but for just about everyone who needs statistical analysis of his data. SMART, however, is an indexing and retrieval program for text. What is more: it does not just index the words, it also adds weights to them. It also allows the user to compare the indexed documents in the so-called Vector Space Model and to compute the distances between the documents, or between documents and queries. To understand why this is special we must delve a bit into the typical problems of Information Retrieval, i.e. the storage of books, articles etcetera and the retrieval of those by content.

    1.1 Why indexing is not enough

    When, at the end of the sixties, automatic indexing of texts became a viable option, many people thought that the problems of information retrieval were solved. Programs like STAIRS (IBM, 1972) enabled users to file documents and rapidly retrieve them on any word in the text or on boolean combinations (AND, OR, NOT) of those words, and who could ask for more? Then, in 1985, a famous article was published by two researchers in the field [1]. In this article they reported on the performance of STAIRS in real life, and they showed that the efficiency of STAIRS and similar systems was, in fact, much lower than assumed. Even experienced users could not obtain a recall of more than 20-40% of the relevant documents in a database of 100,000 documents and, worse, they were not aware of the fact.

    The problem with all retrieval systems of this type is that human language is so fuzzy. There may be as many as a dozen different terms and words pointing to one and the same object, whereas one word may have widely different meanings. In Information Retrieval this leads to one of two situations. Either you try to obtain high precision, where almost all the retrieved documents are relevant (but an unknown number of other relevant documents are not included), or you go for high recall, but then a lot of irrelevant documents will be included in the result. When the proportion of irrelevant documents in a retrieved set is high, the user will probably stop looking at the documents before he or she has found all the relevant ones: in fact his futility-point has been reached. In such a case the net result is the same as if the relevant records that would have been presented after that futility-point had never been retrieved at all. Therefore the concept of ranking, i.e. the ordering of retrieved documents by relevance, is very important in Information Retrieval.
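
    To put rough numbers on that trade-off (the figures below are invented purely for illustration): suppose a query against the 100,000-document collection returns 200 documents, 120 of them relevant, while the collection actually contains 500 relevant documents. A throwaway gawk calculation of the two measures:

        #!/bin/sh
        # precision = relevant retrieved / retrieved;  recall = relevant retrieved / all relevant
        gawk 'BEGIN {
            retrieved = 200; relevant_retrieved = 120; relevant_total = 500
            printf "precision = %.0f%%   recall = %.0f%%\n",
                   100 * relevant_retrieved / retrieved,
                   100 * relevant_retrieved / relevant_total
        }'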

    2. SMART

    Modern (and not so modern) research has offered a number of possible solutions to this dilemma. Some of those solutions use the concept of weighted keywords. This means that every keyword-document combination has a weight attached that (hopefully) is an indication of the relevance of that particular keyword for that particular document. SMART does just that: it creates indexes for a database of documents and attaches weights to them. The way that happens may be expressed intuitively as 'the more often a word occurs in the fewer documents, the higher the weight'. Or: if the word 'dog' occurs twenty times in a given document, but in no other documents, you may be relatively certain that this document is about dogs. Information Retrieval addicts like me talk about the tf.idf weight.

    Smart offers several options as to how that weight should be arrived at: I generally prefer the so-called atc-variation, because it adjusts for the length of the individual documents.

    It calculates the tf.idf weight in three steps. The first step creates the normalized term frequency $ntf_t$ from the raw term frequency (tf) as

        \[ ntf_t = 0.5 + 0.5 \cdot \frac{tf_t}{tf_{\max}} \]

    where $tf_{\max}$ is the frequency of the term with the highest frequency in the document. This adjusts for the document-length and the number of terms. Then the weight $w_t$ is calculated as

        \[ w_t = ntf_t \cdot \log \frac{N}{F_t} \]

    where N is as before the number of documents and $F_t$ the document frequency of term t (the number of documents in which term t occurs). Finally the cosine normalization is applied by

        \[ w'_t = \frac{w_t}{\sqrt{\sum_{i=1}^{T} w_i^2}} \]

    where T is the number of terms in the document vector. Now we have a number between zero and one that hopefully correlates with the importance of the word as a keyword for that document. For a detailed discussion of these and similar techniques see e.g. Salton and McGill [2]. You will love it!
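
    As a sanity check on the formulas, here is a throwaway gawk calculation of the first two steps for a single, entirely hypothetical term: it occurs 20 times in a document whose most frequent term occurs 25 times, in a collection of 1,000 documents of which 3 contain the term. The cosine step is left out because it needs the weights of all T terms in the document.

        #!/bin/sh
        # atc weighting, steps one and two, for one made-up term
        gawk 'BEGIN {
            tf = 20; maxtf = 25; N = 1000; F = 3
            ntf = 0.5 + 0.5 * tf / maxtf        # augmented term frequency
            w   = ntf * log(N / F)              # multiplied by the idf factor
            printf "ntf = %.2f, unnormalized weight = %.2f\n", ntf, w
        }'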

    This is not all. When SMART has constructed the index in one of the various ways available, it also can retrieve the documents for you. This is done according to something called ``the Vector Space Model''. This model is best explained using a three-dimensional example of a vector-space; you can add another few thousand dimensions in your own imagination.

    Imagine you want to index your documents according to three keywords 'cat', 'dog' and 'horse'; keywords that may or may not occur in your documents. So you draw three axes to get a normal three-dimensional coordinate system. One dimension can be used to indicate the ``cat-ness'' of every document, the second its ``dog-ness'' and the third its ``horse-ness''. To make things easy we only use the binary values 0 and 1, although SMART can cope with floats (the 'weights' mentioned before). So if a document is about cats, it scores a one on the corresponding axis; otherwise it scores a zero. Any document may now be drawn in that space according to the occurrence of one or more of the keywords, and now we have a relatively easy way to compute the difference between those documents. Moreover, a query consisting of one or more of the keywords can be drawn in the same space, and the documents can be ranked according to their distance to that query. Of course a typical document database has thousands of keywords and accordingly thousands of dimensions, but the arithmetic involved in multi-dimensional distances does not matter much to modern computers, and if it bothers you, you just have to smoke something illegal and matters will rapidly become clear. If only till the next morning.
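
    A toy version of that ranking, with three imaginary documents, binary coordinates on the cat/dog/horse axes and the query ``cat and dog'' (the real thing of course uses the weighted vectors and proper cosine distances described above), might look like this:

        #!/bin/sh
        # Each input line: document name, then its cat, dog and horse coordinates.
        # Score each document by its dot product with the query vector (1, 1, 0).
        printf '%s\n' 'doc1 1 1 0' 'doc2 0 0 1' 'doc3 1 0 1' \
            | gawk '{ print 1*$2 + 1*$3 + 0*$4, $1 }' \
            | sort -rn          # highest score = nearest to the query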

    So SMART accepts queries, ranks the documents according to their ``nearness'' to the query and returns them to you in that order. It is still one of the best retrieval systems ever written, although it lacks the bells and whistles of its more expensive counterparts on some operating systems I could mention. And although it is not really optimized for speed, it typically runs 10-30 times faster than the fastest indexing program I ever saw under MS-Windows.

    3. The DOS connection

    But I am not using SMART for bread-and-butter retrieval, but for the weights it computes and the indexes it creates. At this point I want to do some other manipulations of these data and again I have to offer my thanks to the developers of unix in general and to Linux in particular. A whole string of ever more complicated and sophisticated shell scripts, the standard unix tools and a few of My Very Own utilities suffice to process the SMART output to a file that is ready for importing in SPSS.

    Nevertheless now I have to quit Linux and boot MS-DOS, start MS-Windows and finally enter SPSS to do the statistics and create some graphs. I am a newcomer to Unix (indeed it was the fact that Linux offered a way to use SMART that pulled me over the line two years ago), but already I am wondering how people can live in the stifling atmosphere of MS-Windows. The fact that you can't really run two applications at the same time is not even the worst thing. But who is responsible for the idea that Icons and Popups were better and more efficient than the plain old command line? And what happened to pipes and filters? And a sensible command language? Be that as it may, SPSS gets the job done and when the output is written to disk I immediately escape back to Linux to write the final article, report or whatever with LaTeX.

    4. The bad news

    At this point I have two messages: one is good, the other bad. I'll start with the good news. SMART is obtainable by anonymous ftp from Cornell University and may be used for free for scientific and experimental purposes. Better yet: it compiles under Linux without much tweaking and twiddling. There is also a fairly active mailing list for people who use SMART (smart-people@cs.cornell.edu).

    The bad news: the manual. What manual? SMART is not for the faint of heart; after unpacking and compilation you'll find some extremely obscure notes and examples and that is it. Nevertheless, if you have more than just a few megabytes of text to manage AND the stamina to learn SMART, it certainly is the best solution for your information retrieval needs. But don't I wish somebody would write a comprehensive manual! In the meantime you may perhaps be helped by my ``tutorial for newbees'', to be found at http://pi0959.kub.nl:2080/Paai/Onderw/Smart/hands.html.


    Bibliography

    1
    Blair, D.C.; Maron, M.E., An evaluation of retrieval effectiveness for a full-text document retrieval system. Communications of the ACM 28:3 (1985), pp. 289-299.

    2
    Salton, G.; McGill, M.J., Introduction to Modern Information Retrieval. New York [etc.]: McGraw-Hill, 1983.


    Copyright © 1997, Hans Paijmans
    Published in Issue 13 of the Linux Gazette


    [ TABLE OF CONTENTS ] [ FRONT PAGE ]  Back  Next



    "Linux Gazette...making Linux just a little more fun! "


    About Linux text editors / product announcement

    By Oleg L. Machulskiy



    Generally about UNIX text editors

    All of my friends who use Linux have always told me that text editors are a very important problem under this OS (I mean text-mode editors with a 100K executable, for writing programs). As I understand it, this problem is not a problem of Linux itself but of every UNIX. What's more, I know what causes this trouble. There are two reasons: putc output to the screen and getc input from the keyboard. In other OSes the user (programmer) can catch Alt+<letter> or Shift+<letter> keystrokes and use them as editor commands. In UNIX the user has only 256 ASCII codes available (really, far fewer than 256), and because of this every UNIX editor either uses very long sequences of keystrokes as editing commands (like emacs or joe) or has two editing modes (command mode and typing mode in vi, for example). Under X everything is better, because there we can get the scan code of the key pressed (not the real scan code, but this code is enough for all my needs) and the status of the shift keys (Alt, Caps, Shift, Ctrl and the mouse buttons), so we can use function keys, arrow keys and everything else You can find on Your keyboard (everybody knows how to do that).

    But even with text-mode editors under Linux everything is not so bad: You can switch the keyboard to RAW mode and do whatever You want with it (don't forget to keep another console open from which You can execute the shutdown -r now command while beta-testing Your program). But it's very important to understand that RAW-keyboard programs will not work through telnet. It is also very important to set SIGSEGV and SIGTERM signal handlers (SIGKILL itself cannot be caught) so that they switch the keyboard back from RAW to normal mode when something goes wrong. Once I heard about a kernel patch that lets You use the ScrollLock key to switch between raw and normal mode, but I don't know how to apply it.


    What I'd like to have in text editor

    Caution: this section is very personal. Maybe someone will find something useful here, but probably not. This section is mostly about my own text editor, so if You are used to Turbo-Vision-like editors and are satisfied with them, You will probably not find the following interesting. So don't read it; don't waste Your time.

    1. Editor must be the same on every operating system (multi-platform).

    2. Editor must handle advanced search features: I need not only case-sensitive/insensitive search but, at a minimum, wildcard search or regular-expression search (this type of search includes wildcards).

    3. Editor must support projects: the user must be able to create a list of files (the sources of some program) and walk through these files freely (enter a file, quit a file, switch to another file, ...). One possible solution is to assign some keystroke as an "enter-into-file" command and then, when the user invokes "enter-into-file", open the file whose name is similar to the word under the cursor (for example, you can enter an h-file from the text of a c-program; just move the cursor to the #include "..." statement and press the "enter-into-file" key).

    4. Editor must handle many files opened at the same time, so that the user can move freely from one text to another (very often I need to read declarations of functions in .h files).

    5. Editor must support compiling, make, etc. from within the text editor (generally: execution of OS commands from within the editor).

    6. Commands should be simultaneous presses of a few keys rather than sequences of keystrokes. It seems to me it isn't comfortable to type (F10, 'F', 'O', filename) every time You need to open a new file. Besides, with such a keyboard layout it's impossible to work fast. This requirement causes a problem: the text editor cannot work through telnet, because the telnet protocol transfers only ASCII codes, not scan-codes.

    7. The text editor must treat a text not as a sequence of chars but as a sequence of lines, where each line is a sequence of chars. There are a lot of text editors in which the text is a vector (for example ME (MS-DOS MultiEdit), Turbo Editor (Borland programming environments), JOE (Linux), etc.), but I don't know how to work with tables in these editors or how to put // (a C++ comment) at the beginning of 10 lines of a program on the screen.

    8. Editor must support macro commands as recordable sequences of keystrokes.

    9. If the editor supported a programming language, so that I could write my own commands, that would be fine.

    10. I think program structuring is a very useful feature. I'll explain: I'd like to have the text of my program in pre-hypertext form, so that I see a list of functions on the screen, put the cursor on the name of the desired function, press the "open" key, and then edit the source of that function. But besides that, I must be able to edit this file with an ordinary text editor and to compile it without errors; hence all additional information about the hypertext structure of the program must be placed in comments (comments are specific to each file type, which in most cases depends on the file extension).

    11. The keyboard layout must be as tunable as possible (if my End key is broken, I can use the F11 key instead). Besides, I often need a keyboard layout for a second language (Cyrillic).

    12. I don't like it when an editor wastes screen space on window frames or other useless things (80x25 isn't very roomy).

    13. The font on the screen must be fixed-width (I hate proportional fonts).

    If You're interested in all that, You can try our example of such a text editor. I think it isn't the best editor, but I got used to it. Maybe it will be useful for someone. To get it, go to http://shade.msu.ru/~machulsk and download the 330K zip file, which contains sources and 5 executables: for the Linux console, Linux X11, OS/2, DOS and Win32 (95/NT). Docs are also included in HTML / plain TeX format.


    Example of switching to RAW keyboard mode (C++ syntax)

      
       #include <stdio.h>
       #include <stdlib.h>
       #include <unistd.h>
       #include <fcntl.h>
       #include <errno.h>
       #include <linux/kd.h>
       #include <signal.h>
       #include <sys/ioctl.h>
       /*.................*/
       static int om;      /* old keyboard mode (file scope so the signal handler can see it) */
       static int con_fd;  /* console descriptor */

       void Terminate(int a) {
           if( ioctl(con_fd,KDSKBMODE,om)<0 ) {puts("Press RESET?");exit(-1);}
                      /*trying to restore the old keyboard mode*/
           if( a ) exit(a);   /*after a fatal signal, leave the program*/
           }
       /*.................*/
      class TKeyboard{
      public:
         TKeyboard(){
             if( (con_fd=open("/dev/tty",O_RDWR)) < 0 ) {puts("error");exit(-1);}
                               /*getting console descriptor*/
             if( ioctl(con_fd,KDGKBMODE,&om)<0 ) {puts("error");exit(-1);}
                               /*saving old keyboard mode*/
             /* SIGKILL cannot be caught, so catch SIGTERM/SIGINT instead */
             signal(SIGTERM, Terminate ); /*setting SIGTERM signal handler*/
             signal(SIGINT,  Terminate ); /*setting SIGINT signal handler*/
             signal(SIGQUIT, Terminate ); /*setting SIGQUIT signal handler*/
             signal(SIGSEGV, Terminate ); /*setting SIGSEGV signal handler*/
             if( ioctl(con_fd,KDSKBMODE,K_RAW)<0 ) {puts("error");exit(-1);}
                               /*setting RAW keyboard mode*/
             }
         ~TKeyboard(){
             Terminate(0);
             }
         int GetScanCode(){
             int n = 0;
             unsigned char c = 0;
             ioctl(con_fd,TIOCINQ,&n); /*how many bytes are waiting?*/
             if(n==0) return 0;        /*keyboard buffer is empty*/
             read(con_fd,&c,1);        /*get next scancode from console*/
             return c;
             }
         } KBD;
       /*.................*/
       int main() {
           /*.................*/
           /*................. program body */
           /*.................*/
           return 0;
           }
    

    Thank You!

    My addresses:
    homepage on Shade

    Excuse me for my bad English; my native language is Russian.


    Copyright © 1997, Oleg L. Machulskiy
    Published in Issue 13 of the Linux Gazette


    [ TABLE OF CONTENTS ] [ FRONT PAGE ]  Back  Next



    "Linux Gazette...making Linux just a little more fun! "


    Two New X-windows Mail Clients

    Copyright (c) 1996

    Published in Issue 13 of the Linux Gazette

    Introduction

    There are several full-featured text-mode mail clients available for Linux, and these programs (such as Pine and Elm) are probably the most commonly used mailers in the Linux/unix world. One reason for this tendency is that they run equally well in both console and X sessions (in an xterm). They also have a longer development history than their X-windows counterparts which results in the accretion of more features and options. There has been ample time for the developers to deal with bugs as well.

    Many of the X-windows mailers I've tried have either been too basic, too beta, or awkward to use. I've always returned to Pine, my standby. Recently two X-based mailers have been released (in late beta versions), both of which are stable and well-provided with features and options. When I say stable I mean that they have functioned well for me, I haven't lost any mail, and they both have been through several releases in which the most egregious bugs seem to have been ironed out.

    Mail programs are a rather personal sort of software. I've found it to be prudent to copy any existing mailbox files or directories to a safe location before installing any new mail client. You never know until you try just what a new mailer will do with your existing mail messages the first time it is run. As an example, mbox-style mail "folders" (which are just single files with messages concatenated) might be willy-nilly transformed into MH-style directories, with each message becoming an individual numbered file. I suppose there may exist a technique to reverse this metamorphosis, but I don't know what it might be, aside from manually using an editor.
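
    A minimal precaution along these lines (the paths below are only examples; adjust them to wherever your mailer keeps its files) is something like:

        # stash copies of everything a new mailer might touch
        cp -a ~/Mail ~/Mail.backup
        cp /var/spool/mail/$USER ~/inbox.backup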


    TkMail

    Paul Raines has been working on a Tcl/Tk mail client for some time now. I'll let him describe its origin:

    
    
       I began the project at the beginning of '92 while a physics
      grad student at the Univ. of Penn.  I had been put in charge
      of several SGI workstations and was disappointed in the X
      window mail readers I had found. I had recently got into
      Tcl/Tk programming and played around with putting Tk
      interfaces on top of command line programs for physics
      simulation.  I figured it would be easy to do one to sit on
      top of the mailx command and did. That produced tkmail 1.x.
      Eventually I decided I was too limited by the mailx command
      and wrote a Perl backend to serve as an extensible
      equivalent. That produced tkmail 2.x.  Perl was used because
      its text processing features were much faster than Tcl but I
      wanted to keep the whole program as scripts for portability.
      This proved a lost cause as Perl proved as hard to port as C
      code.  For my update to work with Tk4.0, I decided to drop
      Perl in favor of writing my own C code as a module extension
      to Tcl. The past year was the last of my graduate career and
      mostly devoted to finishing my thesis leaving little time
      for work on tkmail. It is sort of behind in some of the
      features out there today (MIME, POP, IMAP, etc) but I hope
      to rectify that soon.
    
                The most important future plans are: 
    
                       * better MIME support
    
                       * better key binding customization 
                        
                       * an "auto-filing" feature
    
                       * better search support 

    TkMail is very customizable; Paul Raines includes with the distribution an alternate Tk text-manipulation library which allows the use of emacs-style key-bindings in the compose window. This library can be used with other Tk programs as well. Colors and fonts can be independently selected for the folder-view and compose window. Much of the configuration can be done from menu-selections.

    Here is a screenshot of the main folder-view window:

    Tkmail Main Window

    And here is one of the composition window:

    the Composition Window

    TkMail, like many other Linux mailers, in effect acts as a front-end to sendmail. Luckily most recent Linux distributions come with sendmail preconfigured. If your inbox is on a POP server you will need to use popclient, fetchmail, or fetchpop to retrieve your messages and leave them in a mailbox file on your local disk, where mail clients can find them.
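
    For POP users, a minimal retrieval step might look something like the line below; the server name and login are placeholders, and your particular version of fetchmail or popclient may prefer different options, so check its man page first.

        # pull waiting mail from a POP3 server into the local mailbox,
        # where TkMail (or any other local mailer) will find it
        fetchmail --protocol POP3 --username yourlogin pop.your-isp.com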

    Among the many features of this beta release are:

    • easy inclusion of files into message compositions with automatic uuencoding and compression, if desired
    • ability to access an alternate editor from the composition window
    • spell check compositions using an X windows interface (using ispell)
    • reads aliases from either standard .mailrc or elm aliases.txt
    • sorting of messages on any field and the ability to write out the folder physically in that order
    • simple MIME reading and composition tool
    • built in 'biff' icon for notification of new mail on multiple folders
    • dynamic (at startup) menus for quick access to mail folders for reading, copying, and moving messages

    TkMail is set up initially to open a small debugging window from which the main program can be started. Once it becomes evident that the program is working to your satisfaction this can be disabled by editing the main tkmail4 script and changing the line set mfp(debug) 1 to set mfp(debug) 0, or just start it with the -nodebug option.

    I have found TkMail 4.0b8 to be easy to learn and use, and its interface is nice-looking. With a little more work on the MIME abilities it will be as effective an X mail client as any available.

    Paul Raines maintains a home page for TkMail; the source for the 4.0b8 version is available here.


    XFMail

    Some months ago John Fisk wrote about the XFmail program in the Gazette. His account inspired me to try it out, but I had quite a few problems with the message editing window, so much so that when I tried to mail the developers a comment on their program, the message was corrupted and I doubt that it was legible to them. I gave up and deleted it soon after, making a mental note to check it out later when perhaps it had become more usable.

    Recently I did just that, and found that a new editing module had been contributed which really makes a difference in the usability of the mailer. No longer is there a limit to the amount of text in the editing window. This change, I believe, makes XFMail a credible choice as a Linux mail client.

    XFMail requires the XForms library. This is available from the XForms web-site, which will always have the latest version and news. If you obtain the archive be aware that the package includes a GUI designer as well as many samples. All you need to keep if you're not a programmer is the XForms shared and static libraries (libforms.so.81 and libforms.a) and the header file (forms.h). These three files will enable you to compile XForms applications, such as XFMail from source.

    In order to try the current beta (which I recommend) you'll need to obtain the source archive from the XFMail home FTP site. As long as you have the XForms library files installed it should compile for you, notwithstanding the warning message at the FTP site. If your current mailbox is in the common mailx format (a single file), you might want to copy the file (INBOX or whatever) to another location before installing XFMail. The default behaviour is for XFMail to transform your messages into the multiple-file MH format; after installation you can disable this and move your mailbox back. If you already store your mail in the MH manner the program will load your messages without moving them.

    Even though XFMail reads and stores messages in MH format, it doesn't require that you have the MH system installed.

    This mail client can handle all mail fetching and delivery needs for a single-user machine. The user is given the option of using sendmail for delivery (either on- or off-line), or using XFMail to contact the SMTP server directly and deliver outgoing mail. Fetching new mail can be done externally (popclient et al.) or via XFMail directly. These features could be helpful for new users who would rather not deal with sendmail; all functions can be handled by the mailer.

    XFMail has the recognizable XForms look, familiar to users of the LyX front-end program for TeX/LaTeX. The XForms library gives programs a unique look, unlike standard X or Motif. The user interface is perhaps not quite as fancy as some, but it's not hard to become accustomed to it. There are some limitations in the choice of colors; the selection available is greater than that of console-ANSI programs, but less than the amount available to standard X clients.

    Here are some screenshots of the various XFMail windows:

    The Main Window

    The Composition Window

    And here is the logging window:


    Among the other features of this mailer are an internal address book, full MIME support, and support for faces and picons. Support is planned for compatibility with mailx-style mail-folders.

    XFMail is quite an ambitious programming project; if you do try out the beta version I'm sure the authors would appreciate hearing any comments you may have. There also exists an XFMail mailing list; send a message to: with "subscribe xfmail" in the message body.

    Visit the XFMail homepage for the latest news; by the time you read this, beta 0.5 may well have been released.

    XFMail is being developed by and .


    Larry Ayers<layers@vax2.rainis.net>
    Last modified: Tue Dec 17 19:05:43 CST 1996


    Copyright © 1997, Larry Ayers
    Published in Issue 13 of the Linux Gazette


    [ TABLE OF CONTENTS ] [ FRONT PAGE ]  Back  Next



    "Linux Gazette...making Linux just a little more fun! "


    Miscellaneous Notes

    Copyright (c) 1996

    Published in Issue 13 of the Linux Gazette

    Fun with Little Shell-Scripts

    After typing the same command or editing the same rc-file for the dozenth time, the idea of a short executable shell-script will gradually rise to the surface of my mind. As an example, last year after much trial-and-error I figured out how to start my S3 X-server in 16-bit mode. This was great, but I found that there were a few programs which preferred to be run in 8-bit mode. Typing startx -- -bpp 16 and startx -- -bpp 8 soon became tiresome; then it dawned on me that I could write a shell script for each color-depth which would do the typing for me. One of them looks like this:

    
            #!/bin/sh
            # x16: starts x in 16-bit mode
            startx -- -bpp 16
    

    Just a simple little script (made executable with chmod +x x16) but so handy!

    Encouraged by this, it occurred to me that changing window-managers could be done in a similar way. I normally use fvwm2, but lately I've been fooling around with one of fvwm's hacked offspring, the Afterstep window-manager. Since I didn't have Afterstep's configuration quite as usefully customised as my mainstay fvwm2's, I didn't want to use it the majority of the time. Rather than editing ~/.xinitrc each time I wanted to switch to Afterstep, then again to switch back, I copied ~/.xinitrc twice. The first copy is .xinitrc-f and it's just my normal copy. The second, .xinitrc-a starts Afterstep instead. The scripts which control this are as follows:

    
            #!/bin/sh
            # xa: starts x with afterstep
            cp ~/.xinitrc-a ~/.xinitrc ; startx 
    

    and

    
            #!/bin/sh
            # xf: starts x with fvwm2
            cp ~/.xinitrc-f ~/.xinitrc ; startx
    

    Of course, while in an X-session another window-manager can be easily started from a menu. I spend a fair amount of time working in a console session without X running, in which case the above scripts are useful.

    It just occurred to me as I write this that these tasks could be as easily done using aliases or functions in ~/.bashrc. The only difference I suppose would be that shell-functions are memory-resident whereas the scripts aren't.
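
    For what it's worth, the ~/.bashrc equivalents of the four little scripts above would look roughly like this:

        # same effect as the x16, x8, xa and xf scripts
        alias x16='startx -- -bpp 16'
        alias x8='startx -- -bpp 8'
        xa () { cp ~/.xinitrc-a ~/.xinitrc ; startx ; }
        xf () { cp ~/.xinitrc-f ~/.xinitrc ; startx ; }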

    These examples may seem self-evident or trivial to the unix-gurus out there, but they were part of the learning process for me. Perhaps this piece will encourage the beginners out there to try some similar scripting.


    Keyboards and RXVT

    Here's a discovery I made recently concerning rxvt, the memory-saving alternative to xterm. I received an email message recently in response to my article last month concerning S-lang applications, in which I opinionated about rxvt vs. xterm. The poster of the message wondered whether there is any way to use shift-page-up and shift-page-down to scroll the rxvt window, similar to the way console screens (and xterms) scroll. I had tried to get this to work without success, and some usenet messages had led me to believe that without patching the source rxvt just wouldn't scroll from the keyboard.

    Recently I installed the S.u.S.E. distribution, but didn't install the supplied rxvt package. I recompiled rxvt version 2.19 in this new environment, and to my surprise the above-mentioned scrolling keys worked! This piqued my curiosity, so I began prowling through the directory hierarchy searching for the difference in config files which made this behaviour possible. I came up with two differences: first, there was a new entry in the ~/.Xmodmap file. The lines

     
    
               keycode 64 = Meta_L
               keycode 0x6D = Multi_key
    

    had been added to the "keycode 22 = BackSpace" line which I had in my previous installation. Second, the /etc/termcap file was different than the ones I'd seen before; a new rxvt stanza had been included which looks like this:

    
    rxvt|rxvt terminal emulator:\
            :am:km:mi:ms:xn:xo:\
            :co#80:it#8:li#65:\
            :AL=\E[%dL:DC=\E[%dP:DL=\E[%dM:DO=\E[%dB:IC=\E[%d@:\
            :LE=\E[%dD:RI=\E[%dC:UP=\E[%dA:ae=^O:al=\E[L:as=^N:bl=^G:\
            :cd=\E[J:ce=\E[K:cl=\E[H\E[2J:cm=\E[%i%d;%dH:cr=^M:\
            :cs=\E[%i%d;%dr:ct=\E[3k:dc=\E[P:dl=\E[M:do=^J:ei=\E[4l:\
            :ho=\E[H:ic=\E[@:im=\E[4h:\
            :is=\E[r\E[m\E[2J\E[H\E[?7h\E[?1;3;4;6l\E[4l:\
            :k1=\E[11~:k2=\E[12~:k3=\E[13~:k4=\E[14~:k5=\E[15~:\
            :k6=\E[17~:k7=\E[18~:k8=\E[19~:k9=\E[20~:kI=\E[2~:\
            :kN=\E[6~:kP=\E[5~:kb=\177:kd=\EOB:ke=\E[?1l\E>:kh=\E[H:\
            :kl=\EOD:kr=\EOC:ks=\E[?1h\E=:ku=\EOA:le=^H:md=\E[1m:\
            :me=\E[m:mr=\E[7m:nd=\E[C:rc=\E8:sc=\E7:se=\E[m:sf=^J:\
            :so=\E[7m:sr=\EM:ta=^I:te=\E[2J\E[?47l\E8:ti=\E7\E[?47h:\
            :ue=\E[m:up=\E[A:us=\E[4m:
    

    I have noticed, though, that if I type the command echo $TERM in an rxvt window the result is xterm-color, so perhaps the above rxvt termcap entry isn't being used at all.
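
    One way to find out (I haven't tried this myself) is to force TERM inside the rxvt window and see whether termcap-aware programs behave any differently:

        TERM=rxvt
        export TERM
        echo $TERM       # should now report rxvt instead of xterm-color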

    I'd love to know if anyone else has any luck transplanting either or both of these two changes into their system. The rxvt termcap entry can be pasted right into your /etc/termcap file; in mine it is right after the xterm stanzas. I don't believe the order of stanzas is important, though.


    Partitions and Directories

    After using linux for a while you tend to take for granted the supple flexibility inherent in the Linux manner of dealing with files, partitions, and mount-points. Recently I began to feel constrained by a relatively small /usr partition, so I thought I'd do some experimenting.

    I happened to have an unused 100 mb. partition on my disk, so I created an ext-2 filesystem on it and mounted it on an empty directory, /new, created for this purpose. Then I ran this command: cp -a /usr/X11R6 /new. Using cp with the -a switch is really handy, as it copies all subdirectories, links, and files, and also saves permissions.

    The next step was modifying the /etc/fstab file, inserting the following entry which causes /usr/X11R6 to be mounted on the new partition:

    
             /dev/hda11     /usr/X11R6   ext2     defaults   1   2
    

    Before rebooting I dropped back to a console and deleted the entire contents of the /usr/X11R6 directory.

    I was reasonably certain this would work, but I must confess I was surprised when (after rebooting) X started up without comment, as if nothing had changed.
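
    For anyone who wants to repeat the experiment, the whole sequence boils down to something like the following; the partition and mount point are of course specific to my disk, and note the trailing /. on the copy, which puts the contents of /usr/X11R6 at the top of the new partition rather than one directory down.

        #!/bin/sh
        # move /usr/X11R6 onto its own partition -- run as root, at your own risk
        mke2fs /dev/hda11                  # create an ext2 filesystem on the spare partition
        mkdir /new
        mount -t ext2 /dev/hda11 /new      # mount it temporarily
        cp -a /usr/X11R6/. /new            # copy contents, preserving links and permissions
        # ...add the fstab entry shown above, then:
        rm -rf /usr/X11R6/*
        umount /new
        reboot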

    Linux doesn't really care, after all, where files are located, as long as there is a congruence between the partition table and the contents of the /etc/fstab file. One benefit of this laxity is that repartitioning (with all of the attendant backing up, restoring, etc.) should seldom be necessary.


    Larry Ayers<layers@vax2.rainis.net>
    Last modified: Tue Dec 17 21:31:27 CST 1996


    Copyright © 1997, Larry Ayers
    Published in Issue 13 of the Linux Gazette


    [ TABLE OF CONTENTS ] [ FRONT PAGE ]  Back  Next



    "Linux Gazette...making Linux just a little more fun! "


    Petition to Cancel Filed Against Linux Trademark

    Members of the LINUX community have been up in arms during the past six months over the efforts of an individual named William R. Della Croce, Jr. from the Boston area to collect 10% royalties on sales from businesses marketing Linux products. He bases his written demands on a US trademark which he claims to hold on the name "LINUX" for a computer operating system. He, in fact, holds such a registered trademark, based on his claim made under penalty of perjury that he is the owner and first user of the mark for operating systems, and that he was not aware in 1994 or 1995 of any other person who might claim or be using this name and mark for an operating system. This claim is absurd on its face.

    WorkGroup Solutions, Yggdrasil Computing, Linux International, SSC/Linux Journal, and Linus Torvalds have retained an internationally known software industry attorney, G. Gervaise Davis III, of the Davis & Schroeder law firm in Monterey, CA to seek cancelation of this registration on the grounds that it is fraudulent and obtained under false pretenses. Mr. Davis and his firm are handling the case on a vastly reduced fee basis, because of their long standing relationship with the U.S. software industry. Davis was the original attorney for Gary Kildall and Digital Research of CP/M fame in the 1980s.

    A Petition to Cancel was in fact filed with the Trademark Trial and Appeals Board in Washington, DC. on November 27, 1996, detailing the improper actions of Della Croce and setting out the true facts with a number of exhibits and attachments. Mr. Davis advises us that we can expect to have further steps taken by TTAB, under their complex procedural rules over the next few months. TTAB will first notify Della Croce of the filing and permit him time to respond, then evidence can be collected and depositions taken, and then the parties can file briefs and other responses. Often these cases take more than a year to be resolved by a TTAB decision.

    All of our industry is fully aware that Linus Torvalds developed Linux and that it has become one of the world's most popular operating systems during the past six years. The participants in this proceeding expect the TTAB to cancel the registration, after hearing and seeing the massive evidence demonstrating that Della Croce had no conceivable legal basis for his claim to the mark.

    The petition itself is available on the websites of each of the petitioners and of Mr. Davis' law firm at http://www.iplawyers.com. We urge interested persons to read it, and to distribute it and this message to all members of the LINUX community so that they will be aware of what is being done about this outrageous trademark claim. We will try to keep everyone posted on developments in the case through user groups and webpages.

    We will continue to keep you updated on the happenings in this action. Check the Linux Hot News Button for the latest updates.





    "Linux Gazette...making Linux just a little more fun! "


    SLEW: Space Low Early Warning

    By James T. Dennis


    One of the worst things you can do to your Linux (or other Unix-like) system is to allow any of its filesystems to get full.

    System performance and stability will suffer noticeable degradation when you pass about 95%, and programs will begin failing and dying at 100%. Processes that run as 'root' (like sendmail and syslog) will actually fill their filesystem past 100%, since the kernel will allocate some of the reserved space for them.

    (Yes, you read that right -- when you format a filesystem, a bit of space is reserved for root's exclusive use; see the mke2fs, e2fsck, and tune2fs man pages for more on that.)
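    If you're curious, something like the following should show (and, if you want, adjust) that reserved space on an ext2 filesystem. The device name is only an example, and you'd run these as root:

            tune2fs -l /dev/sda5 | grep -i 'reserved block count'
            tune2fs -m 5 /dev/sda5    # set reserved space to 5% (the usual default)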

    Considering the importance of this issue you might think that our sophisticated distributions would come with a pre-configured way to warn you long before there was a real problem.

    Sadly this is one of those things that is "too easy" to bother with. Any professional Unix developer, system administrator or consultant would estimate a total time for writing, installing and testing such an application at about 15 minutes (I took my time and spent an hour on it).

    Here's the script:

    
        #! /bin/bash
        ## SLEW: Space Low Early Warning
        ##      by James T. Dennis,
        ##      Starshine Technical Services
        ##
        ## Warns if any filesystem in df's output
        ## is over a certain percentage full --
        ## mails a short report -- listing just
        ## the "full" filesystems.
        ## Additions can be made to specify
        ## *which* host is affected for
        ## admins that manage multiple hosts.

        adminmail="root"
                ## who to mail the report to

        threshold=${1:?"Specify a numeric argument"}
                ## a percentage -- *just the digits*

        # first catch the output in a variable
        fsstat=`/bin/df`

        echo "$fsstat" \
                | gawk '$5 + 0 > '$threshold' {exit 1}' \
           || echo "$fsstat" \
                | { echo -e "\n\n\t Warning: some of your" \
                        "filesystems are almost full \n\n" ;
                    gawk '$5 + 0 > '${threshold}' + 0 { print $NF, $5, $4 }' ; } \
                | /bin/mail -s "SLEW Alert" $adminmail

        exit
    
    
    
    That's twelve lines of code and a mess of comments (counting each of the "continued" lines as a separate line).

    Here's my crontab entry:

            ## jtd: antares /etc/crontab
            ## SLEW: Space Low Early Warning
            ##      Warn me of any filesystems that fill past 90%
            30 1 * * * nobody /root/bin/slew 90
    

    Note that the only parameter is a one- to three-digit percentage. slew will silently ignore (fail without an error message on) any parameter that doesn't "look like" a number to gawk.
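    Before trusting it to cron, you might want to run it by hand once. Assuming you saved the script as /root/bin/slew (the path used in the crontab entry above), something like this should do:

            chmod +x /root/bin/slew
            /root/bin/slew 90    # mails $adminmail only if something is over 90% full
            /root/bin/slew       # aborts with "Specify a numeric argument"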

    Here's some typical output from the 'df' command:

    Filesystem         1024-blocks  Used Available Capacity Mounted on
    /dev/sda5              96136   25684    65488     28%   /
    /dev/sda6              32695      93    30914      0%   /tmp
    /dev/sda10            485747  353709   106951     77%   /usr
    /dev/sda7              48563   39150     6905     85%   /var
    /dev/sda8              96152    1886    89301      2%   /var/log
    /dev/sda9             254736  218333    23246     90%   /var/spool
    /dev/sdb5             510443  229519   254557     47%   /usr/local
    

    Note that I will be getting a message from slew tomorrow if my news expire run doesn't clean off some space on /var/spool. The rest of my filesystems are fine.

    Obviously you can set the cron job to run more or less often, and at any time you like -- this script takes almost no time, memory, or other resources.

    The message generated can be easily modified -- just add more "continuation" lines after the 'echo -e' command like:

    
               || echo "$fsstat" \
                    | { echo -e "\n\n\t Warning: some of your" \
                            "filesystems are almost full \n\n" \
                            "You should add your custom text here.\n"\
                            "Don't forget to move the ';' semicolon "\
                            "and don't put whitespace\n" \
                            "after the backslash at the ends of these lines\n\n";
    
    
    Note how the first echo feeds into the grouping (enclosed by the braces) so that the contents of $fsstat are appended after the message. This is a trick that might not work under other shells.

    Also, if you plan on writing similar shell scripts, note that the double quotes around the variable names (like "$fsstat") preserve the linefeeds in their values. If you leave the quotes out your text will look ugly.
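    A quick illustration you can try at a shell prompt:

            fsstat=`/bin/df`
            echo $fsstat        # unquoted: the linefeeds collapse into spaces
            echo "$fsstat"      # quoted: df's original layout is preserved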

    The obvious limitation of this script is that you can only specify one threshold value for all of the filesystems. While it would be possible (and probably quite easy) to support a separate threshold for each filesystem, it doesn't matter to 90% of us. I suspect that almost anyone who does install this script will set the threshold to 85, 90 or 95 and forget about it.
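    For the curious, here's a rough sketch of one way that might be done -- a separate limit per mount point, with everything else falling through to a default. This is not part of SLEW proper, and the mount points and limits below are only examples:

            #! /bin/bash
            ## hypothetical per-filesystem variant -- a sketch, not part of SLEW itself
            report=`/bin/df | sed 1d | while read fs blocks used avail capacity mount
            do
                    pct=${capacity%\%}           # strip the trailing '%'
                    case "$mount" in
                            /var/spool)  limit=95 ;;
                            /usr)        limit=90 ;;
                            *)           limit=85 ;;
                    esac
                    [ "$pct" -gt "$limit" ] && echo "$mount $capacity (limit ${limit}%)"
            done`
            [ -n "$report" ] && echo "$report" | /bin/mail -s "SLEW Alert" root
            exit 0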

    One could also extend this script to do some groping (using various complex find commands) to list things like:

    • Who is the biggest disk hog (which user is taking up all the space, and what are his or her largest files)? (There's a sketch of one approach after this list.)

    • What are the oldest, least accessed files on that filesystem?

      This last question could be answered using something like

              
                    'find . -xdev -printf "%A@ %p\n" | sort -n | head' --
      
      
      which reads something like "find everything on this filesystem and print the time it was last accessed (expressed as seconds since 1970) along with its name; sort that numerically and just give me a few of the entries from the top of the sorted list." As you can see, find commands can get very complex very quickly.
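    And for the first question -- the disk hog -- a hedged sketch, assuming user home directories live under /home (the paths and the username below are only examples):

            ## which home directory is using the most space:
            du -s /home/* | sort -rn | head
            ## ...and that user's largest files:
            find /home/someuser -xdev -type f -printf "%s %p\n" | sort -rn | head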

    I chose to keep this script very simple and will develop specific scripts to suggest file migrations and deletions as needed.

    As you can see, it is possible to do quite a bit in Linux/Unix using high-level tools and very terse commands. Certainly the hardest part of writing a script like this is figuring out minor details about quoting and syntax (when to enclose blocks in braces or parentheses) and determining how to massage the text that's flowing through your pipes.

    The first time I wrote slew was while standing in a bookstore a couple of years ago. A woman near me was perusing Unix books in my favorite section of the store and I asked if I could help her find something in particular. She described the problem as it was presented to her in a job interview. I suggested a 'df | grep && df | mail' type of approach and later, at home, fleshed it out and got it working.

    Over the years I lost the original (which was a one-liner) and eventually had one of the systems I was working with hiccup. That made me re-write this. I've left it on all of my systems ever since.

    I'd like to encourage anyone who develops or maintains a distribution (Linux, FreeBSD, or whatever) to add this, or something like it, to the default configuration on your systems. Naturally it is free for any use (it's too trivial to copyright, in my personal opinion; so that there is no doubt, I hereby place SLEW -- comments and code -- into the public domain).


    Copyright © 1997, James T. Dennis, Starshine Technical Services
    Published in Issue 13 of the Linux Gazette





    Linux Gazette Back Page

    Copyright © 1997 Specialized Systems Consultants, Inc.
    For information regarding copying and distribution of this material see the Copying License.


    Contents:


    About This Month's Authors


    Larry Ayers

    Larry Ayers lives on a small farm in northern Missouri, where he is currently engaged in building a timber-frame house for his family. He operates a portable band-saw mill, does general woodworking, plays the fiddle and searches for rare prairie plants, as well as growing shiitake mushrooms. He is also struggling with configuring a Usenet news server for his local ISP.

    James T. Dennis

    Jim Dennis is the proprietor of Starshine Technical Services. His professional experience includes work in the technical support, quality assurance, and information services (MIS) departments of software companies like Quarterdeck, Symantec/Peter Norton Group, and McAfee Associates -- as well as positions (field service rep) with smaller VARs. He's been using Linux since version 0.99p10 and is an active participant on an ever-changing list of mailing lists and newsgroups. He's just started collaborating on the second edition of a book on Unix systems administration. Jim is an avid science fiction fan -- and recently got married at the World Science Fiction Convention in Anaheim.

    Bill Duncan

    Bill has worked with Unix systems since the early Version 7 days on PDP-11s. He worked with Xenix throughout most of the eighties and has also worked with many other flavors of Unix over the years, but his operating system of choice is now Linux. When not working or fiddling with his four Linux systems at home (which is rare), he might have some time left over for his other hobbies: his dog (Daisy), photography and amateur radio.

    Michael J. Hammel

    Michael J. Hammel is a transient software engineer with a background in everything from data communications to GUI development to interactive cable systems -- all based in Unix. His interests outside of computers include 5K/10K races, skiing, Thai food and gardening. He suggests that if you have any serious interest in finding out more about him, you visit his home pages at http://www.csn.net/~mjhammel. You'll find out more there than you really wanted to know.

    Oleg Machulski

    Oleg Machulski is a student in the Laboratory of Computing Methods at the Faculty of Mechanics and Mathematics, Moscow State University. He has been a Linux enthusiast (as well as an OS/2 enthusiast) since September 1996. After receiving, from his scientific advisor Andrey V. Astreling, the source of a very unusual DOS text editor in which the program was structured in a hypertext manner, he wrote and ported features such as search, macro commands, and multiple pages. A brief history of that freeware project can be found at http://shade.msu.ru/~machulsk/mmm/mmm.html. He also likes to play guitar and listen to jazz; additional information can be found at his home page, http://shade.msu.ru/~machulsk.

    Hans Paijmans

    Hans "Paai" Paijmans is a university lecturer and researcher at Tilburg University and a regular contributor to several Dutch journals. Together with E. Maryniak he wrote the first Dutch book on Linux -- already two years ago. My, doesn't the time fly. His homepage is at http://pi0959.kub.nl:2080/paai.html.

    Greg Roelofs

    Greg Roelofs, aka Cave Newt, is best described as a phenomenon, as in, ``Captain, we're picking up strange readings from that unexplained phenomenon over there.'' Greg's job description is appropriately schizoid, given his interest in far too many things for his own good. He's a full-time researcher in multimedia/image-compression/WWW stuff at Philips Research in Palo Alto, having made the switch from Unix system administrator in August 1995. He likes to fancy himself a software developer; among other things, he has been a member of the Info-ZIP team for six years and the principal author of UnZip for most of that time. (He's also a member of the Portable Network Graphics Development Group and the maintainer of the PNG and zlib home pages.) As for recreational interests, he likes to amuse himself with cycling (often to work); skiing -- any flavor, although snow is preferred, especially if it means he can drive in it; scuba diving -- for 18 years now, from the shores of Lake Superior to the coast of Venezuela to the kelp beds of Monterey; hiking/backpacking, particularly in the Sierra Nevada range; and amateur photography.

    James Shelburne

    James Shelburne currently lives in Waco, Texas where he spends most of his free time working on various Linux networking projects. Some of his interests include Perl + CGI, Russian, herbal medicine and the Ramones (yes, you heard right, the Ramones). He is also a staunch Linux advocate and tries to convert every MacOS/MS Windows/AMIGA user he comes into contact with. Needless to say, only other Linux users can stand him.


    Not Linux


    Thanks to all our authors, not just the ones above, but also those who wrote giving us their tips and tricks and making suggestions. Thanks also to our new mirror sites.

    I'd like to apologize for being later than usual getting LG posted. The weather in Seattle has been more than a little bizarre lately. My neighborhood got about 20 inches of snow from December 26 to December 29. Since the normal yearly snowfall is about 4 inches, everything stopped, including the buses for the first time in Metro history. SSC had a portion of its roof give way under the weight of snow and water (the rains started December 29 and haven't quit yet). As a result of the flooding, things are in quite a mess around the office. Yearly rainfall in Seattle is usually 31 inches; this year we had 55 inches. I thought I was back in Houston!

    Actually, I was back in Houston during my vacation week before Christmas. The weather wasn't great there either -- rainy and cold, and I was counting on sunshine. However, I still had a good time visiting with family and friends. My grandchildren, Sarah and Rebecca, are a delight to be with -- I miss them a lot.

    Have fun!


    Marjorie L. Richardson
    Editor, Linux Gazette




    Linux Gazette, http://www.ssc.com/lg/
    This page written and maintained by the Editor of Linux Gazette,