Linux Gazette... making Linux just a little more fun!

Copyright © 1996-97 Specialized Systems Consultants, Inc.


Welcome to Linux Gazette!(tm)

Sponsored by:

InfoMagic

Our sponsors make financial contributions toward the costs of publishing Linux Gazette. If you would like to become a sponsor of LG, e-mail us at .


"Linux Gazette...making Linux just a little more fun!"


 The Mailbag!

Write the Gazette at

Contents:


Help Wanted -- Article Ideas


 Date: Mon, 23 Jun 1997 22:40:39 -0500
From: Tom Cannon
Subject: COBOL

Are there any COBOL compilers that will run under Linux? I have a serious need to move some code to the Linux platform if there is something available. Thanks.

(Check with Acucobol Inc., info@acucobol.com, http://www.acucobol.com --Editor)


 Date: Sat, 21 Jun 1997 16:02:04 -0400
From: Linda Brooks
Subject: Packard Bell SOUND16A Soundcard

I have a Packard Bell Pack-Mate 4990CD, which has a soundcard apparently called a "SOUND16A" (the documentation doesn't make it clear whether PB or Aztech made it, or if it was a joint production). It is a 16-bit sound card, which I can use under Windows 95 as such. However, in Linux, the best I can do is 8-bit sound (via Sound Blaster Pro 2.0 emulation). The card claims to support MSS, but nowhere in the documentation or setup program does it specify which IRQ this runs at, although it does tell what port. I have contacted Packard Bell's tech support, but they say they only support Windows software for free, and that if I wanted to talk about Linux or some such operating system, I would need to get "special support", which would cost a ridiculously high amount.

As a struggling college student, I don't have much money to spend on the computer (it is actually my family's that I scratched up enough space to install Linux on), so I can't get a new sound card, and I am not even sure if the commercial sound drivers support this particular sound card.

I'm probably spoiled by Windows, but surely it's not asking too much to have 16-bit sound so I can listen to 44kHz samples in stereo (I'm quite a MOD fan), listen to MP2s or MP3s, etc...

I'm not much of a coder, so I can't go about writing my own drivers. If anyone knows how to set up this sound card for full 16-bit sound, please inform me. Or, if you know of any 8-bit .MP? players, that would work too =)


 Date: Thu, 19 Jun 1997 12:03:00 -0400
From: Albert Race
Subject: Linux HELP!

I would like to install Linux on a Sun 386i machine with 16MB of RAM, two 350MB SCSI drives, a color video adapter, a tape drive, and network support. When I try to install using a boot disk, I get the following message.

   Boot: Device fd(0,0,0): Invalid Boot Block

This occurs with any boot disk except for Sun's. Is there a way I can get Linux to install on this system? Any suggestions would be greatly appreciated. If you cannot help me, please redirect this message to someone who can. I don't know where to get this type of information. I received these machines for free and would like to put them to use running Linux.

Thank you Albert F. Race


 Date: Mon, 16 Jun 1997 11:44:43 +0200
From: Claudio
Subject: Matrox

Is there any chance of correctly configuring a Matrox Mystique with 4MB RAM under X, or must I throw it away?


 Date: Mon, 2 Jun 1997 00:11:40 -0300 (EST)
From: Rildo Pragana
Subject: Interfacing Genius Color Page-CS Scanner

Hello, Please help me interface my Color Page-CS desktop scanner to Linux. Right now I can scan only from Windows (Argh!!), and it would be fine to have The Gimp accessing my scanned material. I can program in C and Tcl/Tk if I at least have the information on its SCSI card and the scanner itself. Any information you may have is precious to me. When I have this job done, of course, I'll be happy to publish my adventures in the Gazette.

best regards,
Rildo Pragana
Greetings from Recife, the Brazilian Venice


 Date: Thu, 12 Jun 1997 14:36:09 -0400 (EDT)
From: David Bubar
Subject: Q: How do you un virtual a virtual screen?

My screen may be 800x600, but my virtual screen is set at something like 1600x1200. How do I change this? Note:

  1. This is not about virtual desktops; I like using the pager.
  2. I wish you would put out a configuration guide for X that does NOT have to be a TOME but a small book(let) that helps users customize X to work the way they want.


 Date: Mon Jun 16 13:46:14
From: Ade Bellini,
Subject: *2+ Processing

Sir, I am 35, from Sweden, at present using a dual 90MHz Pentium with NT4, Slackware 1.2.13 and Red Hat 3.0 (and DOS 6.22!) all on the same machine. (Paranoid!) I am interested in knowing how to take advantage of the two CPUs on a Linux-based machine. Anything regarding dual-CPU processing is of interest to me, as I use NT4 as a server and would like to try using Linux instead. Many thanks in advance.

Ade.


 Date: Tue, 3 Jun 1997 07:33:50 -0700 (PDT)
From: David Mandel
Subject: CD Burners, Scanners, Digital Cameras, etc.

I have a mess of family photographs and possibly 35mm slides that I want to preserve. One idea I'm considering is scanning these and putting them on CDs. So I have a few questions.

  1. Will a Sony CDU926S burner work with xcdroast? The documentation says a Sony CDU920S will work, but I don't know the differences between the CDU920S and CDU926S. A bare bones (no docs, drivers, software) CDU926S is only $265. The MS ready version is $350, but who would want that?
  2. What is a good, but cheap flatbed scanner to use? (Good means 24 bit color and >= 300dpi optical resolution.) What software (in Linux) supports the scanner?
  3. I can't afford one, but... Are there any 35mm slide scanners on the market with Linux support?
  4. And as long as I'm asking dumb questions... Does Linux have support for any digital cameras yet? Someday many of us will want to change to digital photography, and it would be awful to have to learn Windows to do this.

Thank you for your time and help,
Dave Mandel

(We'll have to depend on our readers for 1 and 3. As to 2, we use the HP 5P flatbed scanner, which fits your qualifications for good. As to cheap, it depends on your definition--it sells for around $400. The Linux software that supports all HP scanners is XVscan, and a very nice program it is. As to 4, the answer is yes; the Hitachi MP-EG1A, http://www.mpegcam.net/. --Editor)


 Date: Tue, 3 Jun 1997 09:01:06 +0100 (BST)
From: Andrew Philip Crook
Subject: Ascii Problems with FTP

When I use a DOS ftp program (in ASCII mode) to download a Linux script (because Linux is not running yet), the script fails to work when installed. This is because a ^M is appended to every line; take them out and it works.

What's happening?

How can I stop it?

Or how can I filter all the ^M's out?

Many Thanks
Andrew Crook.

(In a couple of last year's issues, there are several Tips & Tricks for getting rid of ^M. You can't stop them from happening. I personally get rid of them in vi using a global replace (e.g., :%s/^M//g); one command and they're gone forever. --Editor)
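For those who would rather not fire up vi, the same cleanup can be done from the shell with the standard tr tool; a small sketch (the file names are made up):

```shell
# Create a file with DOS line endings (^M = carriage return, \r)
printf 'echo hello\r\necho world\r\n' > script.dos

# Delete every CR byte; sed 's/\r$//' is an equivalent alternative on GNU sed
tr -d '\r' < script.dos > script.clean
```

The cleaned copy can then replace the original, or the pipe can be used on the fly when unpacking a download.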


 Date: Thu, 05 Jun 1997 23:08:08 -0400
From: Steve Malenfant,
Subject: Problems with XFree86

I'm a new user of Linux, and the problem is still XFree86! So I tried to find out what I could do for the Linux community. In Issue #16, you said that the problem is not the video card but monitor timing. So why does Windows 95 have all these presets for monitors while Linux doesn't? Why can't we take the data in the Microsoft library and transfer it into the database of XF86Setup or something like that? Because it's true that the dotclock and all this is very scrambled! Why not just resolution and vertical refresh? That's all we need to know; the program could do the rest! We shouldn't have to know what the horizontal frequency and dotclock are!

Steve Malenfant


 Date: Thu, 19 Jun 1997 15:39:58 -0700
From: Kevin Hartman
Subject: Afterstep

Would anyone be interested in an Afterstep customization how-to/where to get?

Kevin

(Have you got one set up, or are you just trying to find out if there's a need? --Editor)


 Date: Sat, 07 Jun 1997 02:34:57 -0400
From: sinyz,
Subject: Need Help

Hi, If you happen to have time on your end please be so kind as to answer a few questions for a newbie!

Well, here is the situation, and I need to get some serious advice from people like you. I have been reading the newsgroups and HOWTOs; they have been quite informative, and increasingly so as I continue. Now, thank GOD, I got my Linux (Red Hat 4.1) box set up and running on my slave drive with Win95 on the master. It detected my CD-ROM, and I also configured X (X11R6).

But there are a couple of questions:

  1. I have a video card of type Diamond S3 ViRGE 3D 2000. The S3 driver was a choice in XF86Setup, which I chose, and everything seems to work fine. Also, I chose the 800x600 resolution SVGA monitor. I have been hearing rumors from friends that the video card, when being used by X, may mess up the monitor. This has been troubling me quite a bit. What's up with this?
  2. I read using the dmesg command that Linux at boot time does not notice that there is a device on tty01. The specific lines read:
    Serial driver version 4.13 with no serial options enabled
     tty00 at 0x03f8 (irq=4) is a 16550A
     tty03 at 0x02e8 (irq=3) is a 16550A
    There seems to be no mention of tty01 (COM2, IRQ 3), where my modem is installed! How do I fix this? By the way, my modem happens to be a plug-and-play modem, a Supra 28.8bps. I have heard that PnP modems have problems with Linux and that there are fixes for PnP types; please recommend any. (In effect, how do I get my modem to work?)
  3. Also, I did not notice anything in the boot-time messages to do with the PPP protocol, which I definitely need to dial up to an ISP. Does that mean recompiling the kernel? How? (If the Red Hat distribution has a specific or simpler way of doing things, please let me know.) Thanks a lot in anticipation.


 Date: Wed, 11 Jun 1997 11:06:57 +0200 (MET DST)
From: Martin Lersch
Subject: User-Level Driver For HP ScanJet 5p?

Hello! Can you please point me in some direction where I can find a user-level driver for the HP ScanJet 5P? There is the HPSCANPBM driver, which works in part but does not support the -width and -height options for the ScanJet 5P. I guess it was written for a ScanJet 4C or something like that. BTW: The home page of HP does not give much support for Linux users. They do not publish the escape sequences of the scanners.

Regards, Martin Lersch


General Mail


 Date: Sun, 01 Jun 1997 00:56:52 -0500
From: Piotr Mitros
Subject: WordPerfect for Linux

Before more users spend many hours downloading the 50 megabyte (!) WordPerfect for Linux, you may want to note that the beta download only gets you a demo version that times out after just 15 days. They seem to have demo versions of WordPerfect 6 available anyway, so it is not that big a deal.

However, I would like to see a comparison of WordPerfect for Linux, StarOffice's word processor and what is planned for GNU WP.

Piotr

(I'd like to see that comparison too. --Editor)


 Date: Thu, 12 Jun 1997 06:42:47 -0400
From: Stephen L. Cito
Subject: Question about downloading the archive

Hello, I'd like to download the past issues of LG (having enjoyed LJ now since last fall), but I don't think I could even get an 11 meg file downloaded over my 14.4 modem within the 1 hour that I have before my local Internet connection (the Greater Detroit Free Net) times out on me. Is there any way to download the past issues in smaller "chunks"?

Thanks and have a real nice day...

SC, Novi, MI

(Hmmm, that is a problem. No, I don't save the individual tar files of previous issues separately. There is, of course, the TWDT option for each issue, which gives you the issue as one great big file. It's not as nice as the normal multi-file format, but it's very popular, so it must work for some. --Editor)
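One possible workaround, assuming you have shell access on some intermediate machine: the standard split(1) utility can cut a large archive into pieces small enough to fetch one session at a time, and cat reassembles them afterwards. A sketch with invented file names:

```shell
# Stand-in for the big archive (about 300KB of numbers)
seq 1 50000 > bigfile

# Cut it into 100KB pieces named chunk_aa, chunk_ab, ...
split -b 100k bigfile chunk_

# After fetching all the pieces, put them back together and verify
cat chunk_* > bigfile.rejoined
cmp bigfile bigfile.rejoined && echo "reassembled OK"
```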


 Date: Wed, 04 Jun 1997 22:52:36 -0700
From: James Zubb
Subject: ActiveX for Linux

Hi, I read the ActiveX for Linux question in the Answer Guy's article, I did a little looking and came up with a web site: http://www.sagus.com/Prod-i~1/Net-comp/dcom/index.htm

I don't know if this is actually the ActiveX port for Linux or not, I didn't feel like trying to figure it out, but there is a Beta for Linux there. Beats me what it does or how it does it...

-- Jim Zubb


 Date: Fri, 6 Jun 1997 19:01:40 +0100 (BST)
From: Adrian Bridgett
Subject: Re: X Color Depth (In response to the message by Roland Smith)

Normally 8-bit displays use 256 colours chosen from 2^24 (16,777,216), and 15/16/24/32 bits displays just use a fixed number of colours spread "evenly" throughout the colour spectrum.

16-bit displays use 5 bits for red, 6 bits for green and 5 bits for blue; however, the 65,536 colours cannot be changed, so the overall "resolution" of colour is lower than on an 8-bit (256-colour) display. For instance, you can only have 2^5 different shades of red, rather than 2^8.

Adrian


 Date: Thu, 12 June 1997 08:39:19 PDT
From: Timothy Gray
Subject: CNE Certification for Linux?

Oh, no not a certification suggestion......

Linux was developed as a better and free version of UNIX. Now someone wants to make a CNE for Linux? As a successful Linux network administrator (and business owner that proudly states: no Microsoft here!), I am appalled at charging tens of thousands of dollars for a piece of paper that states I can do my job. As an Internet service provider and an avid supporter of Linux, freeware, and the Free Software Foundation, I hire my network administrators and engineers (we call them system administrators) based on their abilities and trainability. A CNE paper does not and never will impress me. Even suggesting such an idea for Linux is appalling. Let's keep our last bastion of freedom from the clutches of corporate greed! If we must have a Linux CNE, make it 100% free and available to everyone on the planet.

Thank you, Timothy Gray


Published in Linux Gazette Issue 19, July 1997



This page written and maintained by the Editor of Linux Gazette,
Copyright © 1997 Specialized Systems Consultants, Inc.

"Linux Gazette...making Linux just a little more fun! "


More 2¢ Tips!


Send Linux Tips and Tricks to


Contents:


Rude Getty

Date: Mon, 23 June 1997 21:12:23
From: Heather Stern

I have a fairly important UNIX box at work, and I have come across a good trick to keep around.

Set one of your console gettys to a very rude nice value, -17 or lower. That way if a disaster comes up and you have to use the console, it won't take forever to respond to you (because of whatever went wrong).
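On a SysV-style Linux box, one way to do this is to wrap the getty in nice(1) from /etc/inittab; a hedged sketch (the getty program, its arguments, and the runlevels vary between systems, so check your own inittab first):

```shell
# /etc/inittab fragment -- respawn the first console getty at nice -17
# so it stays responsive even when the machine is badly overloaded
c1:2345:respawn:/bin/nice -n -17 /sbin/agetty 38400 tty1
```

After editing inittab, `init q` (or `telinit q`) makes init reread it without a reboot.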


Keeping Track of File Size

Date: Mon, 16 June 1997 13:34:24
From: Volker Hilsenstein

Hello everyone, I just read Bob Grabau's 2¢ tip for keeping track of the size of a file. Since it is a bit inconvenient to type all those lines each time you download something, I wrote this little script:

#!/bin/bash
# This script monitors the size of the files given
# on the command line.
while :
do
  clear
  for i in "$@"; do
    echo File "$i" has the size `ls -l "$i" | tr -s " " | cut -f 5 -d " "` bytes
  done
  sleep 1
done

Bye ... Volker


Reply to "What Packages do I Need?"

Date: Tue 24 June 1997 11:15:56
From: Michael Hammel,

You asked about what packages you could get rid of and mentioned that you had AcceleratedX and that because of this you "can get rid of a lot of the X stuff". Well, that's not really true. AcceleratedX provides the X server, but you still need to hang onto the X applications (/usr/X11R6/bin/*) and the libraries and include files (/usr/X11R6/lib and /usr/X11R6/include) if you wish to compile X applications or run X binaries that require shared libraries.

Keep in mind that X is actually made up of three distinct parts: the clients (the X programs you run like XEmacs or Netscape or xterm), the server (the display driver that talks to your video adapter), and the development tools (the libs, header files, imake, etc). General users (non-developers) can forego installation of the development tools but need to make sure to install the runtime libraries. Each Linux distribution packages these differently, so just be careful about which ones you remove.

One caveat: I used to work for Xi Graphics, but that was over a year and a half ago. Although I keep in touch with them, I haven't really looked at the product line lately. It's possible they ship the full X distributions now, but I kind of doubt it. If they are shipping the full X distributions (clients, server, development tools), then disregard what I've said.

Hope this helps.
-- Michael J. Hammel


Sound Card Support

Date: Mon, 24 June 1997 11:16:34
From: Michael Hammel,

With regard to your question in the LG about support for the MAD16 Pro from Shuttle Sound System under Linux, you might consider the OSS/Linux product from 4Front Technologies. The sound drivers they supply support a rather wide range of adapters. The web page http://www.4front-tech.com/osshw.html gives a list of what is and isn't supported. The Shuttle Sound System 48 is listed as supported, as is generic support for the OPTi 82C929 chipset (which you listed as the chipset on this adapter).

This is commercial software, but it's only $20. I've been thinking of getting it myself. I have used its free predecessor, known at times as OSS/Lite or OSS/Free, and found it rather easy to use. I just haven't gotten around to ordering (mostly because I never seem to have time for installation or any other kind of admin work). I will eventually.

4Front's web site is at http://www.4front-tech.com.

Hope this helps.

-- Michael J. Hammel


InstallNTeX is Dangerous

Date: Fri 06 June 1997 12:31:14
From: Frank Langbein

Dear James:
On Fri, 6 Jun 1997, James wrote:

You still have

 make_dir "       LOG" "$VARDIR/log"       $DOU 1777
 
   make_dir " TMP-FONTS" "$VARDIR/fonts"     $DOU 1777

If I hadn't (now) commented-out your

(cd "$2"; $RM -rf *)
then both my /var/log/* and /var/fonts/* files and directories would have been deleted!

Actually, VARDIR should also be a directory reserved for NTeX only (something like /var/lib/texmf). Deleting VARDIR/log is not really necessary unless someone has some MakeTeX* logs in there which are not user-writable. Any pk or tfm files from older or non-NTeX installations could cause trouble later: sometimes the font metrics change, and if some old metrics are used with a new bitmap or similar, the resulting document might look rather strange. Furthermore, log and fonts have to be world-writable (there are ways to prevent this, but I haven't implemented a wrapper for the MakeTeX* scripts yet), so placing them directly under /var is not really a good idea. I am aware that the documentation of the installation procedure is minimal, which makes it especially hard to select the directories freely.

The real problem is allowing the directories to be chosen freely. Selecting the TDS or the Linux filesystem standard is rather safe; at most, other TeX files are deleted. The only really secure option would be to remove the free choice and offer only the Linux filesystem standard, the layout from web2c 7.0 (which is also TDS-conformant), and a TDS-conformant structure in a special NTeX directory. The free selection would not be accessible to a new user; I could add some expert option which still allows a totally free selection. Additionally, instead of deleting the directories, they could be renamed.

There are plans for a new installation procedure, also supporting such things as read-only volumes/AFS, better support for multiple-platform installation, etc. This new release will not be available before I have managed to implement all the things that were planned for 2.0. But that also means there will probably be no new release this year, as I have to concentrate on my studies. Nevertheless, I will add a warning to the free selection in InstallNTeX. That's currently the only thing I can do without risking adding further bugs to InstallNTeX. Considering that my holiday starts next week, I can't do more this month.

BTW, on another point, I had difficulty finding what directory was searched for the packages to be installed. Only in the ntex-guide, seemingly buried, is there:

This is caused by the different ways NTeX-install, the text version of InstallNTeX, and the Tcl/Tk version of InstallNTeX look for the packages. Therefore you get some warnings even if NTeX-install would be able to install the packages. The minimal documentation is one of the really big drawbacks of NTeX. I'm currently working on a complete specification for the next release, which will turn into real documentation.

Thanks for pointing out the problems with the free selection of the paths. So far I have concentrated on setting the installation paths to non-existing directories.

Regards,
Frank


Reply to Dangerous InstallNTeX Letter

To: Frank Langbein,
Date: Sat, 07 Jun 1997 10:11:06 -0600
From: James

Dear Frank:
The hidden application of the operation

rm -rf *
to the unpredictable and unqualified input from a broad base of naive users is highly likely to produce unexpected and undesired results for some of these users. This is the kind of circumstance more usually associated with a "prank". If this is _not_ your intent, then further modifications to the script "InstallNTeX" are required.

The script functions at issue include: mk_dirchain() ($RM -f $P), make_dir() ($RM -rf * and $RM -f "$2"), make_tds_ln() ($RM -f "$3"), and link_file() ($RM -rf "$2"). The impact of the operations when using unexpected parameters, from misspellings or misinterpretations, for instance, should be considered.

You might simply replace these operations with an authorization dialog, or you could create a dialog with several recovery options. (For the moment, I have replaced them with `echo "<some warning>"'.)
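One way to implement such an authorization dialog is a small wrapper that shows exactly what would be deleted and requires an explicit "yes" before doing anything destructive. A sketch only, not InstallNTeX's actual code (confirm_rm is an invented name):

```shell
# Guarded recursive delete: print the command, ask for confirmation,
# and do nothing unless the user types exactly "yes".
confirm_rm () {
    dir="$1"
    echo "About to run: rm -rf $dir/*"
    printf 'Type yes to continue: '
    read answer
    if [ "$answer" = "yes" ]; then
        rm -rf "$dir"/*
    else
        echo "Skipped deleting $dir"
    fi
}
```

Run non-interactively (or with any answer other than "yes"), the wrapper leaves the directory untouched, which is exactly the failure mode you want when a misspelled variable points it at /var.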

James G. Feeney


Monitoring An FTP Download

Date: Tue, 10 Jun 1997 19:54:25 +1000 (EST)
From: Nathan Hand

I saw the recent script someone posted in the 2c column to monitor an ftp download using the clear ; ls -l ; sleep trick. I'd just like to point out there's A Better Way.

Some systems will have the "watch" command installed. This command works pretty much like the script, except it uses curses and buffers for lightning fast updates. You use it something like

   watch -n 1 ls -l

And it prints out the current time, the file listing, and it does the refreshes so fast that you don't see the ls -l redraws. I think it looks a lot slicker, but otherwise it's the same as the script.

I don't know where the watch command comes from. I'm using a stock standard Red Hat system (4.0) so hopefully people with similar setups will also have a copy of this nifty little tool.


Programming Serial Ports

Date: Wed 18 June 1997 14:15:23
From: Tom Verbeure

Hello, A few days ago, I had to communicate using the serial port of a Sun workstation. A lot of information can be found here: http://www.stokely.com/stokely/unix.serial.port.resources and here: http://www.easysw.com/~mike/serial

Reading chapters 3 and 4 of that last page can do wonders. It took me about 30 minutes to get communication going with the machine connected to the serial port. The code should work on virtually any UNIX machine.

Hope this helps, Tom Verbeure


Another Way of Grepping Files in a Directory Tree

Date: Thu 12 June 15:34:12
From: Danny Yarbrough

That's a good tip. To work around the command line length limitation, you can use xargs(1):

find . -name "\*.c" -print | xargs grep foo
this builds a command line containing "grep foo" (in this case), plus as many arguments (one argument for each line of its standard input) as it can to make the largest (but not too long) command line it can. It then executes the command. It continues to build command lines and executing them until it reaches the end of file on standard input.

(Internally, I suppose xargs doesn't build command lines, but an array of arguments to pass to one of the exec*(2) family of system calls. The concept, however, is the same.)

xargs has a number of other useful options for inserting arguments into the middle of a command string, running a command once for each line of input, echoing each execution, etc. Check out the man page for more.
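For instance, the -I option substitutes each input line into the middle of the command and runs the command once per line; a quick sketch:

```shell
# -I{} replaces every {} with the current input line, one
# command invocation per line, so arguments can land mid-command
printf 'alpha\nbeta\n' | xargs -I{} echo "found {} here"
# prints:
#   found alpha here
#   found beta here
```

The same idiom is handy for commands like mv or cp, where the file name has to appear before the final argument.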

Cheers! Danny


More Grepping Files

Date: Mon 16 June 1997 08:45:56
From: Alec Clews

grep foo `find . -name \*.c -print`

The only caveat here is that UNIX limits the maximum number of characters in a command line, and the find command may generate a list of files too big for the shell to digest when it tries to run the grep portion as a command line. Typically this limit is 1024 characters per command line.

You can get around this with

find . -type f -name \*.c -exec grep foo {} /dev/null \;

Notes: The -type f skips directories (and soft links; use -follow if needed), so only regular files whose names end in .c are passed to grep.

The /dev/null is required to make grep display the name of the file it's searching: grep only displays file names when there is more than one file to search, and /dev/null is a zero-length file.

Regards,
Alec


Still More On Grepping Files

Date: Sat 14 June 1997 10:57:34
From: Rick Bronson

Here is a similar way to grep for files in a directory tree. This method uses xargs and as such does not suffer from the max-chars-in-a-command-line limit.

sea () 
{ 
    find . -name "$2" -print | xargs grep -i "$1"
}

I've defined it as a function in my .bashrc file; you would use it like:

sea "search this string" '*.[ch]'

Rick


Grepping

Date: Thu 19 June 1997 09:29:12
From: David Kastrup
Reply to "Grepping Files in a Tree Directory"

Well right. That's why most solutions to this problem are given using the xargs command which will construct command lines of appropriate size.

You'd write

find . -name \*.c -print | xargs grep foo

for this. This can be improved somewhat, however. If you suspect that you have files containing newlines or otherwise strange characters in their names, try

find . -name \*.c -print0 | xargs -0 grep foo --

This uses a special format for passing the file list from find to xargs which can properly identify all valid filenames. The -- tells grep that even strange file names like "-s" are to be interpreted as file names.

Of course, we would want the corresponding file name listed even if xargs passes only a single file to grep in one of its invocations. We can manage this with

find . -name \*.c -print0 | xargs -0 grep foo -- /dev/null

This guarantees at least two file names for grep (/dev/null and one given by xargs), so grep will print the file name for found matches.

The -- is a good thing to keep in mind when writing shell scripts. Most of the shell scripts searching through directories you find flying around get confused by file names like "-i" or "xxx\ yyy" and similar perversities.

David Kastrup


More on Grepping Files in a Tree

Date: Mon 02 June 1997 15:34:23
From: Chris Cox

My favorite trick for looking for a string (or strings--egrep) in a tree:

$ find . -type f -print | xargs file | grep -i text |
   cut -f1 -d: | xargs grep pattern

This is a useful technique for other things...not just grepping.


Untarring/Zip

Date: Sun 22 June 1997 13:23:14
From: Mark Moran

I read the following 2-cent tip and was excited to think that I've finally reached a point in my Linux expertise where I COULD contribute a 2-cent tip! I typically run:

tar xzf foo.tar.gz

to unzip and untar a program. But, as Paul mentions, sometimes the directory structure isn't included in the archive, and everything dumps into your current directory. Well, before I do the above, I run:
tar tzf foo.tar.gz

This will dump out to your console what's going to be unarchived, easily allowing you to see whether there's a directory structure!

Mark


An Addition to Hard Disk Duplication (LG #18)

Date: Thu 12 June 1997 15:34:32
From: Andreas Schiffler

Not surprisingly, Linux can of course do this for free, even from a floppy boot image (e.g., the Slackware boot-disk console).

For identical harddrives the following will do the job:

cat /dev/hda >/dev/hdb

For non-identical harddrives one has to repartition the target first:

fdisk /dev/hda              # record the partitions (size, type)
fdisk /dev/hdb              # create the same partitions
cat /dev/hda1 >/dev/hdb1    # copy each partition
cat /dev/hda2 >/dev/hdb2
...

To create an image file, simply redirect the source device to a file.

cat /dev/hda >image-file

To reinstall the MBR and lilo, just boot with a floppy using parameters that point to the root partition (as in LILO> linux root=/dev/hda1) and rerun lilo from within Linux.
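If you want to rehearse this before pointing it at real devices, the same idiom works on ordinary files; a sketch with invented file names, using dd and cmp to verify the copy:

```shell
# Rehearse the image-and-restore cycle on ordinary files
# (source.bin stands in for /dev/hda; all names are made up)
seq 1 20000 > source.bin
dd if=source.bin of=image-file bs=64k 2>/dev/null
dd if=image-file of=target.bin bs=64k 2>/dev/null
cmp source.bin target.bin && echo "copy verified"
```

cmp exits non-zero at the first differing byte, so it is a cheap sanity check after any raw copy.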

Have fun
Andreas


Reply to ncftp (LG #18)

Date: Fri 20 June 1997 14:23:12
From: Andrew M. Dyer,

To monitor an ftp session I like to use ncftp, which puts up a nice status bar. It comes in many Linux distributions. When using the standard ftp program, you can also use the "hash" command, which prints a "#" for every 1K bytes received. Some ftp clients also have the "bell" command, which sends a bell character to your console for every file transferred.

For grepping files in a directory tree I like to use the -exec option to find. The syntax is cryptic, but there is no problem with overflowing the shell argument list. A version of the command shown in #18 would be like this:

find . -name \*.c -exec grep foo {} /dev/null \;
(Note: the /dev/null forces grep to print the filename of the matched file.) Another way to do this is with the mightily cool xargs program, which also solves the overflow problem and is a bit easier to remember:
find . -name \*.c -print | xargs grep foo /dev/null
(This last one is stolen from "UNIX Power Tools" by Jerry Peek, Tim O'Reilly and Mike Loukides--a whole big book of 2-cent tips.)

For disk duplication we sometimes use a Linux box with a secondary IDE controller and use dd to copy the data over:

dd if=/dev/hdc of=/dev/hdd bs=1024k

This copies the contents of /dev/hdc to /dev/hdd. The bs=1024k tells dd to use a large block size to speed up the transfer.


Sockets and Pipes

Date: Thu, 12 Jun 1997 23:22:38 +1000 (EST)
From: Waye-Ian Cheiw,

Hello!

Here's a tip!

Ever tried to pipe things, then realised that what you want to pipe to is on another machine?

spiffy $ sort < file 
sh: sort: command not found 
spiffy $ # no sort installed here! gahck!

Try "socket", a simple utility that's included in the Debian distribution. Socket is a tool which can treat a network connection as part of a pipe.

spiffy $ cat file
c 
b
a
spiffy $ cat file | socket -s 7000 &   # Make pipe available at port 7000.
spiffy $ rlogin taffy
taffy $ socket spiffy 7000 | sort      # Continue pipe by connecting to spiffy.
a
b
c

It's also very handy for transferring files and directories in a snap.

spiffy $ ls -F 
mail/   project/
spiffy $ tar cf - mail project | gzip | socket -qs 6666 &
spiffy $ rlogin taffy
taffy $ socket spiffy 6666 | gunzip | tar xf - 
taffy $ ls -F
mail/   project/

The -q switch will close the connection on an end-of-file and conveniently terminate the pipes on both sides after the transfer.

It can also connect a shell command's input and output to a socket. There is also a switch, -l, which restarts that command every time someone connects to the socket.

spiffy $ socket -s 9999 -l -p "fortune" &
spiffy $ telnet localhost 9999
"Baseball is ninety percent mental.  The other half is physical." 
Connection closed by foreign host. 
This makes a cute service on port 9999 that spits out fortunes.

-- Ian!!


Hex Dump

Date: Tue 24 June 1997 22:54:12
From: Arne Wichmann

Hi.

One of my friends once wrote a small vi-compatible hex-editor. It can be found (as source) under vieta.math.uni-sb.de:/pub/misc/hexer-0.1.4c.tar.gz


More on Hex Dump

Date: Wed, 18 Jun 1997 10:15:26 -0700
From: James Gilb

I liked your gawk solution for displaying hex data. Two things (which people have probably already pointed out to you):

  1. If you don't want runs of identical lines to be replaced by a single *, use the -v option to hexdump. From the man page:

    -v The -v option causes hexdump to display all input data. Without the -v option, any number of groups of output lines, which would be identical to the immediately preceding group of output lines (except for the input offsets), are replaced with a line comprised of a single asterisk.

  2. In emacs, you can get a similar display using ESC-x hexl-mode. The output looks something like this:
    00000000: 01df 0007 30c3 8680 0000 334e 0000 00ff  ....0.....3N....
    00000010: 0048 1002 010b 0001 0000 1a90 0000 07e4  .H..............
    00000020: 0000 2724 0000 0758 0000 0200 0000 0000  ..'$...X........
    00000030: 0000 0760 0004 0002 0004 0004 0007 0005  ...`............
    00000040: 0003 0003 314c 0000 0000 0000 0000 0000  ....1L..........
    00000050: 0000 0000 0000 0000 0000 0000 2e70 6164  .............pad
    00000060: 0000 0000 0000 0000 0000 0000 0000 0014  ................
    00000070: 0000 01ec 0000 0000 0000 0000 0000 0000  ................
    00000080: 0000 0008 2e74 6578 7400 0000 0000 0200  .....text.......
    00000090: 0000 0200 0000 1a90 0000 0200 0000 2a98  ..............*.
    
    (I don't suppose it is surprising that emacs does this; after all, emacs is not just an editor, it is its own operating system.)
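The -v behavior in point 1 is easy to see for yourself; a quick sketch (any repetitive input will do):

```shell
# 64 identical bytes: without -v, hexdump collapses the repeated
# output lines into a single "*"; with -v it prints every line.
printf 'A%.0s' $(seq 64) | hexdump
printf 'A%.0s' $(seq 64) | hexdump -v
```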


Reply to Z Protocol

Date: Mon 09 June 1997 19:34:54
From: Gregor Gerstmann

In reply to my remarks regarding file transfer with the Z protocol in Linux Gazette issue 17, April 1997, I received an e-mail that may be interesting to others too:

Hello!

I noticed your article in the Linux Gazette about the sz command, and really don't think you need to split up your downloads into smaller chunks.

The sz command uses the ZMODEM protocol, which is built to handle transmission errors. If sz reports a CRC error or a bad packet, it does not mean that the file produced by the download will be tainted. sz automatically retransmits bad packets.

If you have an old serial UART chip (8250), then you might be getting intermittent serial errors. If the link is unreliable, then sz may spend most of its time tied up in retransmission loops.

In this case, you should use a ZMODEM window to force the sending end to expect an `OK' acknowledgement every few packets.

  sz -w1024
This will specify a window of 1024 bytes.

-- Ian!!


Published in Linux Gazette Issue 19, July 1997




This page maintained by the Editor of Linux Gazette,
Copyright © 1997 Specialized Systems Consultants, Inc.



News Bytes

Contents:


News in General


 SPAM Counter Attack!

If you'd like to have your voice heard regarding SPAM mail, why don't you consider writing a letter to your representative?

If you're not sure of who your representatives are, check the Congressional websites:

The postal addresses for your members are:

Senate: The Honorable (Senator name), Washington, DC 20510
House: The Honorable (Rep. name), Washington, DC 20515

The letter doesn't have to be long... two paragraphs is as effective as 10 pages. And you don't need to write different letters; the same one can be sent to each Member. (Just remember to change the mailing address!)


 Linux-Access Web Pages

The Center for Disabled Student Services at the University of Utah in Salt Lake City, Utah, today announced its newly re-designed linux-access web pages. linux-access is a mailing list hosted by CDSS which is used by both developers and users of the Linux operating system in order to aid development and integration of access-related technology into the Linux OS and available software.

Both users and developers of Linux are encouraged to join the mailing list and help Linux become more accessible to everyone. Among those encouraged to subscribe to the list are companies making Linux distributions so that they can incorporate access technology into their products as well as get valuable feedback from users.

Location of the new pages is at: http://ssv1.union.utah.edu/linux-access/.
Location of the blinux FTP mirror is at ftp://ssv1.union.utah.edu/pub/mirrors/blinux/.

An archive of the mailing list can be found on the Linux v2 Information HQ site at: http://www.linuxhq.com/lnxlists/linux-access/.


 Supreme Court Ruling

The U.S. Supreme Court extended free-speech rights to cyberspace in its recent ruling striking down a federal law that restricted indecent pictures and words on the Internet computer network.

The court declared the law that bans the dissemination of sexually explicit material to anyone younger than 18 unconstitutional.

"Notwithstanding the legitimacy and importance of the congressional goal of protecting children from harmful materials, we agree ... that the statute abridges 'freedom of speech' protected by the First Amendment," Justice John Paul Stevens said for the court majority in the 40-page opinion.

The ruling represented a major victory for the American Civil Liberties Union (ACLU) and groups representing libraries, publishers and the computer on-line industry, which brought the lawsuit challenging the law.


 The Power OS

Matthew Borowski has created a new website featuring Linux information. Entitled "Linux - THE POWER OS", and featuring Linux links, software, help, and a discussion forum, Linux - THE POWER OS is also a member of the Linux Webring.

The software listing is top-of-the-line, featuring a list of powerful applications that will change the way you make use of Linux. The modem setup section will help you get your modem working under Linux, and the StarOffice-miniHOWTO will help fix Libc problems when installing Staroffice under Linux.

If you have a chance, visit "Linux - THE POWER OS" at: http://www.jnpcs.com/mkb/linux or http://www.mkb.home.ml.org/linux/

For more information write to


 June 1997 PowerPC Project

The Linux for PowerPC project announces its June 1997 CD of the Linux operating system for the PowerPC. The CD is the second release, following the first one in January 1997. The June release is significantly faster and has improved memory handling. It now contains over 400 different software packages and everything needed to install and run Linux on any of the PowerPC machines manufactured by Be Inc, Apple Computer, IBM, Motorola and most other manufacturers of PowerPC computers. Go to http://www.linuxppc.org/ to order your own CD or to find out more about the project.


 Sunsite Link

Check out http://sunsite.unc.edu/paulc/liv

This lets you view the contents of SunSITE's /pub/Linux/Incoming directory, but extracts all the descriptions out of the map files (.lsm) and displays them in a table. It has links for 24 hour, 7 day, 14 day, and 28 day lists.
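Incidentally, an .lsm map file is just a series of "Field: value" lines, so pulling the descriptions out yourself is a one-liner. A rough sketch (the sample entry is invented, and multi-line Description fields would need more work):

```shell
# A sample LSM entry (contents invented for illustration):
cat > package.lsm <<'EOF'
Begin3
Title: liv
Version: 1.0
Description: lists Incoming uploads
End
EOF

# Pull the Title and Description fields out of the map file:
awk -F': *' '/^(Title|Description):/ { print $2 }' package.lsm
```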


 GLUE Announcement

Caldera has announced that it will give a free copy of OpenLinux Lite on CD-ROM for each group member of GLUE. Caldera, Inc. (http://www.caldera.com/) is located in Provo, Utah. For full details on GLUE and to register your group as a member, visit the GLUE web site at http://www.ssc.com/glue.


Software Announcements


 Woven Goods for LINUX

World-Wide Web (WWW) Applications and Hypertext-based Information about LINUX. It comes ready-configured for the Slackware Distribution and is currently tested with Version 3.2 (ELF). The Power Linux LST Distribution contains this collection as an integral part, with some changes.

The Collection consists of five Parts, so it can be used for multiple purposes depending on which Parts are installed.

The five Parts of Woven Goods for LINUX are:

  1. World-wide Web Browser The World-wide Web Browser from Netscape for X11 and Lynx for ASCII terminals.
  2. LINUX Documents The LINUX Documents contain the HTML Pages of Woven Goods for LINUX, FAQs, HOWTOs, LDP Documents and more in different formats like Hypertext Markup Language (HTML), Text, PDF and Postscript.
  3. World-wide Web Server The Apache World-wide Web Server with additional CGI Scripts for Statistics, viewing MAN Pages and Counters, Glimpse Search Engine and the Documentation for Apache Server. Furthermore the Apache Module PHP/FI as well as the BSCW system and the necessary Python interpreter are included.
  4. Hypertext Markup Language The HTML-Editor asWedit allows the creation of HTML-Pages. Some Graphic Tools allow the creation and modification of GIFs.
  5. External Viewers The external Viewers are necessary to present Information which cannot be viewed by the WWW Browsers. Only the useful Viewers (xanim, acroread, ia, raplayer, str, splay, swplayer, vrweb, etc.) are included which are not part of the Slackware Distribution (xv, ghostview, showaudio).

Availability & Download

Woven Goods for LINUX is available via anonymous FTP from: ftp://ftp.fokus.gmd.de/pub/Linux/woven

Installation

For Installation Instructions see the Installation Guide: ftp://ftp.fokus.gmd.de/pub/Linux/woven/README.install or http://www.fokus.gmd.de/linux/install.html


 Qbib Version 1.1

Qbib is a bibliography management system based on Qddb. Features include the Qddb database, import of BibTeX .bib files, custom export options and a friendly user interface, just to name a few.

For more information about Qbib (including an on-line manual), see http://www.hsdi.com/qddb/commercial

To order Qbib or other Qddb products/services, visit the Qddb store: http://www.hsdi.com/qddb/orders


 WipeOut Version 1.07

WipeOut is an integrated development environment for C++ and Java. It contains a project manager, a class browser, a make tool, a central text editor with syntax highlighting and a debugger frontend. WipeOut is available for Linux and SunOS/Solaris, both under XView.

For the new release we have especially extended the class browser and the text editor. Check out the changes list for all new features and fixed bugs.

You can obtain the software and documentation at: http://www.softwarebuero.de/ndex-eng.html




The Answer Guy


By James T. Dennis,
Starshine Technical Services, http://www.starshine.org/


Contents:


 Mounting Disks Under Red Hat 4.0

From: Bigby, Bruce W.

Hi. The RedHat 4.0 control-panel has an interesting problem. I have two entries in my /etc/fstab file for my SCSI Zip Drive--one for mounting a Win95 Zip removable disk and another for mounting a removable Linux ext2fs disk--

/dev/sda4 /mnt/zip   ext2fs rw,noauto 0 0
/dev/sda4 /mnt/zip95 vfat   rw,noauto 0 0
I do this so that I can easily mount a removable zip disk by supplying only the appropriate mount point to the mount command--for example, by supplying
mount /mnt/zip
when I want to mount a Linux ext2fs disk, and
mount /mnt/zip95
when I want to mount a Windows 95 Zip disk.

 Yes, I do this all the time (except that I use the command line for all of this -- and vi to edit my fstab). I also add the "user" and a bunch of "nosuid,nodev,..." parameters to my options field. This allows me or my wife (the only two users with console access to the machine) to mount a new magneto-optical, floppy, or CD without having to 'su'.
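For illustration, the two lines from the question with those extra options added might look like this (a sketch only -- the exact option list is a matter of taste, and most mount implementations spell the ext2 filesystem type "ext2"):

```
/dev/sda4  /mnt/zip    ext2  rw,noauto,user,nosuid,nodev  0 0
/dev/sda4  /mnt/zip95  vfat  rw,noauto,user,nosuid,nodev  0 0
```

With "user" in the options, any user can run mount /mnt/zip or mount /mnt/zip95 without su'ing to root first.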

 Unfortunately, the control-panel's mount utility treats the two lines as duplicates and removes the additional lines that begin with /dev/sda4. Consequently, the control panel's mount utility only sees the first line,

/dev/sda4 /mnt/zip   ext2fs rw,noauto 0 0
In addition, the utility also modifies my original /etc/fstab. I do not

 Bummer! Since I don't use the GUI controls I never noticed that.

 desire this behavior. I prefer that the utility be fairly dumb and not modify my original /etc/fstab. Has RedHat fixed this problem in 4.2?

 I don't know. There are certainly enough other fixes and upgrades to be worth installing it (although -- with a .1 version coming out every other month -- maybe you want to just download selective fixes and wait for the big 5.0).

(My current guess -- totally unsubstantiated by even an inside rumor -- is that they'll shoot for integrating glibc -- the GNU C library -- into their next release. That would be a big enough job to warrant a jump in release numbers).

 Can I obtain the sources and modify the control-panel's mount utility so that it does not remove these "so-called" duplicates?

 Last I heard the control-panel was all written in Python (I think they converted all the TCL to Python by 4.0). In any event I'm pretty sure that it's TCL, Python and Tk (with maybe some bash for some parts). So you already have the sources.

The really important question here is why you aren't asking the support team at RedHat (or at least posting to their "bugs@" address). This 'control-panel' is certainly specific to Red Hat's package.

According to the bash man page, bash is supposed to source the .profile, or .profile_bash, in my home directory. However, when I login, bash does not source my .profile. How can I ensure that bash sources the .profile of my login account--$HOME/.profile?

 The man page and the particular configuration (compilation) options in your binary might not match.

You might have an (empty?) ~/.bash_profile or ~/.bash_login (bash looks for these in that order -- with .profile being the last -- and sources only the first of them that it finds).

You might have something weird in your /etc/profile or /etc/bashrc that's preventing your ~/.bash_* or ~/.profile from being sourced.

Finally you might want to double check that you really are running bash as your login shell. There could be all sorts of weird bugs in your configuration that effectively start bash and fail to signal to it that this is a "login" shell.

Normally login exec()'s bash with an argv[0] of "-bash" (preceding the name with a dash). I won't get into the gory details -- but if you were logging in with something that failed to do this, bash wouldn't "know" that it was a login shell -- and would behave as though it were a "secondary" shell (as if you had invoked it from your editor).
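A quick way to check this from a running shell (note: the login_shell shopt is my assumption here -- it exists in newer bash versions and may not in older builds):

```shell
# A login shell is exec()'d with a leading dash, so $0 shows "-bash".
echo "$0"

# Newer bash can be asked directly:
if shopt -q login_shell; then
    echo "login shell"
else
    echo "not a login shell"
fi
```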

If all else fails go over to prep.ai.mit.edu and grab the latest version of the GNU bash sources. Compile them yourself.

-- Jim


 Weird LILO Problem

From: David Runnels

Hi Jim. I read your column in the Linux Gazette and I have a question. (If I should have submitted it some other way I apologize.)

 I recommend using the tag@starshine.org address for now. At some point I hope to have SSC set up a tag@gazette.ssc.com address -- or maybe get linux.org to give me an account and set up some custom mail scripts.

 I've been using Linux casually for the last couple of years and several months ago I installed RedHat 4.0 on the second IDE drive of a Win95 system. Though I've used System Commander in the past I don't like using it with Win95, so I had the RedHat install process create a boot floppy. This has always worked fine, and I made a second backup floppy using dd, which I also made sure booted fine.

 This probably isn't really a "boot" floppy. It sounds like a "lilo" floppy to me. The difference is that a boot floppy has a kernel on it -- a "lilo" floppy just has the loader on it.

The confusing thing about Linux is that it can be booted in so many ways. In a "normal" configuration you have Lilo as the master boot program (on the first hard drive -- in the first sector of track 0 -- with the partition table). Another common configuration places lilo in the "superblock" (logical boot record) of the Linux "root" partition (allowing the DOS boot block, or the OS/2 or NT boot manager -- or some third party package like System Commander) to process the partition table and select the "active" partition -- which *might* be the Linux root partition.

Less common ways of loading Linux: use LOADLIN.EXE (or SYSLINUX.EXE) -- which are DOS programs that can load a Linux kernel (kicking DOS out from under them, so to speak), put Lilo on a floppy (which is otherwise blank) -- or on a non-Linux boot block (which sounds like your situation).

Two others: You can put Lilo on a floppy *with* a Linux kernel -- or you can even write a Linux kernel to a floppy with no lilo. That last option is rarely used.

The point of confusion is this: LILO loads the Linux kernel using BIOS calls. It offers one the opportunity to pass parameters to the kernel (compiled into its boot image via the "append" directive in /etc/lilo.conf -- or entered manually at boot time at the lilo prompt).

Another source of confusion is the concept that LILO is a block of code and data that's written to a point that's outside the filesystems on a drive -- /sbin/lilo is a program that writes this block of boot code according to a set of directives in the /etc/lilo.conf. It's best to think of the program /sbin/lilo as a "compiler" that "compiles" a set of boot images according to the lilo.conf and writes them to some place outside of your filesystem.
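As a sketch, a minimal /etc/lilo.conf might read (the device names here are invented -- substitute your own):

```
boot=/dev/hda          # where the boot block is written; /dev/fd0 makes a "lilo floppy"
prompt
image=/boot/vmlinuz    # kernel to load
  label=linux
  root=/dev/hdb1       # root filesystem -- need not be where the kernel lives
  read-only
```

Remember that editing this file changes nothing by itself; you must re-run /sbin/lilo to "recompile" the boot sector.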

Yet another source of confusion is that the Linux kernel has a number of default parameters compiled into it. These can be changed using the 'rdev' command (which was originally used to set the "root device" flags in a kernel image file). 'rdev' basically patches values into a file. It can be used to set the "root device," the "initial video mode" and a number of other things. Some of these settings can be overridden via the LILO prompt and append lines. LOADLIN.EXE can also pass parameters to the kernel that it loads.

There's a big difference between using a kernel image written directly on a floppy -- and a LILO that's built to load an image that's located on a floppy filesystem (probably minix or ext2fs). With LILO the kernel must be located on some device that is accessible with straight BIOS calls.

This usually prevents one from using LILO to boot off of a third IDE or SCSI disk drive (since most systems require a software driver to allow DOS or other OS' to "see" these devices). I say "usually" because there are some BIOS' and especially some BIOS extensions on some SCSI and EIDE controllers that may allow LILO to access devices other than the first two floppies and the first two hard drives. However, those are rare. Most PC hardware can only "see" two floppy drives and two hard drives -- which must be on the same controller -- until an OS loads some sort of drivers.

In the case where a kernel is directly located on the raw floppy -- and in the case where the kernel is located on the floppy with LILO -- the kernel has the driver code for your root device (and controllers) built in. (There are also complex new options using 'initrd' -- an "initial RAM disk" -- which allows a modular kernel to load the drivers for its root devices.)

Yet another thing that's confusing to the DOS user -- and most transplants from other forms of Unix -- is that the kernel doesn't have to be located on the root device. In fact LOADLIN.EXE requires that the kernel be located on a DOS filesystem.

To make matters more complicated you can have multiple kernels on any filesystem, any of them might use any filesystem as their root device, and these relationships (between kernel and root device/filesystem) can be set in several ways -- i.e. by 'rdev' or at compile time, vs. via the LOADLIN or LILO command lines.

I recommend that serious Linux users reserve a small (20 or 30 Mb) partition with just a minimal installation of the root/base Linux software on it. This should be on a separate device from your main Linux filesystems.

Using this you have an alternative (hard drive based) boot method which is much faster and more convenient than digging out the installation boot/root floppies (or having to go to a working machine and build a new set!). I recommend the same thing for most Solaris and FreeBSD installations. If you have a DOS filesystem on the box -- at least stash a copy of LOADLIN.EXE and a few copies of your favorite kernels in C:\LINUX\ (or wherever).

Now that more PC SCSI cards support booting off of CD-ROM's (a feature that's been long overdue!) you can get by without heeding my advice -- IF YOU HAVE SUCH A CONTROLLER AND A CD TO MATCH.

(Incidentally -- I found out quite by accident that the Red Hat 4.1 CD is "bootable" on Adaptec 2940 controllers -- if you have the Adaptec configured to allow it. I've also heard that the NCR SymBIOS PCI controller supports this -- though I haven't tested that yet).

In any event we should all make "rescue disks" -- unfortunately these are trickier than they should be. Look for the Bootdisk HOWTO for real details about this.

 About a week ago I put the Linux floppy in the diskette drive, reset the machine and waited for the LILO prompt. Everything went fine, but all I got were the letters LI and everything stopped. I have tried several times, using the original and the backup diskette, with the same results.

 Did you add a new drive to the system?

 I have done nothing (that I can think of!) to my machine and I'm at a loss as to what might be causing this. Just to ensure that the floppy drive wasn't acting funny, I've booted DOS from it and that went fine.

 When you booted DOS were you able to see the drive? I'd get out your installation floppy (or floppies -- I don't remember whether Red Hat 4.0 had a single floppy system or not -- 4.1 and 4.2 only require one for most hardware). Boot from that and choose "rescue" or switch out of the installation script to a shell prompt. You should then be able to attempt mounting your root filesystem.

If that fails you can try to 'fsck' it. After that it's probably a matter of reinstallation and restoring from backups.

 Any ideas you have would be appreciated. Thanks for your time.

Dave Runnels

 Glad I could help.


 Running FileRunner

From: David E. Stern

I wanted to let you know that you were right about relying too heavily on rpm. In the distant past, I used text-based file compression utilities, so I tried it again and tarballs are actually quite nice. I also found that rpm --nodeps will help. Tarballs are also nice because not all apps are distributed with rpm. (bonus! :-) I'm also told that multiple versions of tcl/tk can peaceably coexist, although rpm won't allow it by default. Another ploy with rpm which I didn't see documented: to avoid circular dependencies, update multiple rpms at the same time, i.e. rpm -Uvh app1.rpm app2.rpm app3.rpm. Another thing I learned about was that there are some non-standard (contributed) libraries that are required for certain apps, like afio and xpm. Thanks for the great ideas and encouragement.

The end goal: to install FileRunner, I simply MUST have it! My intermediate goal is to install Tcl/Tk 7.6/4.2, because FileRunner needs these to install, and I only have 7.5/4.1. However, when I try to upgrade tcl/tk, other apps rely on older tcl/tk libraries, at least that's what the messages allude to:

libtcl7.5.so is needed by some-app
libtk4.1.so is needed by some-app

(where some-app is python, expect, blt, ical, tclx, tix, tk, tkstep,...)

I have enough experience to know that apps may break if I upgrade the libraries they depend on. I've tried updating some of those other apps, but I run into further and circular dependencies -- like a cat chasing its tail.

In your opinion, what is the preferred method of handling this scenario? I must have FileRunner, but not at the expense of other apps.

 It sounds like you're relying too heavily on RPM's. If you can't afford to risk breaking your current stuff, and you "must" have the upgrade, you'll have to do some stuff beyond what the RPM system seems to do.

One method would be to grab the sources (SRPM or tarball) and manually compile the new TCL and tk into /usr/local (possibly with some changes to their library default paths, etc). Now you'll probably need to grab the FileRunner sources and compile that to force it to use the /usr/local/wish or /usr/local/tclsh (which, in turn, will use the /usr/local/lib/tk if you've compiled it all right).

Another approach is to set up a separate environment (separate disk, a large subtree of an existing disk -- into which you chroot, or a separate system entirely) and test the upgrade path where it won't inconvenience you by failing. A similar approach is to do a backup, test your upgrade plan -- (if the upgrade fails, restore the backup).

 Thanks, -david

 You're welcome. This is a big problem in all computing environments (and far worse in DOS, Windows, and NT systems than in most multi-user operating systems). At least with Unix you have the option of installing a "playpen" (accessing it with the chroot call -- or by completely rebooting on another partition if you like).

Complex interdependencies are unavoidable unless you require that every application be statically linked and completely self-sufficient (without even allowing their configuration files to be separate). So this will remain an aspect of system administration where experience and creativity are called for (and a good backup may be the only thing between you and major inconvenience).

-- Jim


 Adding Linux to a DEC XLT-366

From: Alex Pikus

I have a DEC XLT-366 with NTS4.0 and I would like to add Linux to it. I have been running Linux on an i386 for a while.

I have created 3 floppies:

I have upgraded AlphaBIOS to v5.24 (latest from DEC) and added a Linux boot option that points to a:\

 You have me at a severe disadvantage. I've never run Linux on an Alpha. So I'll have to try answering this blind.

 When I load MILO I get the "MILO>" prompt without any problem. When I do

show
or
boot ...
at the MILO prompt I get the following result ...

SCSI controller gets identified as NCR810 on IRQ 28 ... test1 runs and gets stuck "due to a lost interrupt" and the system hangs ...

In WinNTS4.0 the NCR810 appears on IRQ 29.

 My first instinct is to ask if the autoprobe code in Linux (Alpha) is broken. Can you use a set of command-line (MILO) parameters to pass information about your SCSI controller to your kernel? You could also see about getting someone else with an Alpha-based system to compile a kernel for you -- and make sure that it has values in its scsi.h file that are appropriate to your system -- as well as ensuring that the correct drivers are built in.

 How can I make further progress here?

 It's a tough question. Another thing I'd look at is to see if the Alpha system allows booting from a CD-ROM. Then I'd check out Red Hat's (or Craftworks') Linux for Alpha CD's -- asking each of them if they support this sort of boot.

(I happened to discover that the Red Hat Linux 4.1 (Intel) CD-ROM was bootable when I was working with one system that had an Adaptec 2940 controller where that was set as an option. This feature is also quite common on other Unix platforms such as SPARC and PA-RISC systems -- so it is a rather late addition to the PC world).

 Thanks!
Alex.


 Disk Support

From: Andrew Ng

Dear Sir, I have a question to ask: Does Linux support disks with density 2048bytes/sector?

 Apparently not. This is a common size for CD-ROM's -- but it is not at all normal for any other media.

 I have bought a Fujitsu MO drive which support up to 640MB MO disks with density 2048bytes/sector. The Slackware Linux system does not support access to disks with this density. Windows 95 and NT support this density and work very well. Is there any version of Linux which support 2048bytes/sector? If not, is there any project working on that?

 I believe the drive ships with drivers for DOS, Windows, Windows '95 and NT. The OS' don't "support it" -- the manufacturer supports these OS'.

Linux, on the other hand, does support most hardware (without drivers being supplied by the hardware manufacturers). Granted, we get some co-operation from many manufacturers. Some even contribute code to the main kernel development.

We prefer the model where the hardware manufacturer releases free code to drive their hardware -- whether that code is written for Linux, FreeBSD or any other OS. Release it once and all OS' can port and benefit by it.

 I hear a lot of praise about Linux. Is Linux superior to Windows NT in all aspects?

 That's a controversial question. Any statement like: Is "foo" superior to "bar" in all aspects? ... is bound to cause endless (and probably acrimonious) debate.

Currently NT has a couple of advantages: Microsoft is a large company with lots of money to spend on marketing and packaging. They are very aggressive in making "partnerships" and building "strategic relationships" with the management of large companies.

Microsoft has slowly risen to dominance in the core applications markets (word processors, spreadsheets, and databases). Many industry "insiders" (myself included) view this as being the result of "trust"-worthy business practices (a.k.a. "verging on monopolistic").

In other words many people believe that MS Word isn't the dominant word processor because it is technically the superior product -- but because MS was able to supply the OS features they needed when they wanted (and perhaps able to slip the schedules of certain releases during the critical development phases of their competitors).

The fact that the OS, the principal programming tools, and the major applications are all from the same source has generated an amazing amount of market antagonism towards Microsoft. (Personally I think it's a bit extreme -- but I can understand how many people feel "trapped" and understand the frustration of thinking that there's "no choice").

Linux doesn't have a single dominant applications suite. There are several packages out there -- Applixware, StarOffice, Caldera's Internet Office Suite. Hopefully Corel's Java Office will also be useful to Linux, FreeBSD and other users (including Windows and NT).

In addition to these "suites" there are also several individual applications like Wingz (a spreadsheet system), Mathematica (the premier symbolic mathematics package), LyX (the free word processor -- LaTeX front-end -- that's under development), Empress, /rdb (database systems), Flagship and dbMan IV (xBase database development packages), Postgres '95, mSQL, InfoFlex, Just Logic's SQL, MySQL (database servers) and many more. (Browse through the Linux Journal _Buyer's_Guide_ for a large list -- also waltz around the web a bit).

Microsoft's SQL Server for NT is getting to be pretty good. Also, there are a lot of people who program for it -- more than you'll find for InfoFlex, Postgres '95, etc. A major problem with SQL is that the servers are all different enough to call for significant differences in the front-end applications -- which translates to lots of programmer time (and money!) if you switch from one to another. MS has been very successful getting companies to adopt NT Servers for their "small" SQL projects (which has been hurting the big three -- Oracle, Sybase and Informix). Unfortunately for Linux -- database programmers and administrators are very conservative -- they are a "hard sell."

So Linux -- despite the excellent stability and performance -- is not likely to make a significant impact as a database server for a couple of years at least. Oracle, Sybase and Informix have "strategic relationships" with SCO, Sun, and other Unix companies.

The established Unix companies viewed Linux as a threat until recently. They now seem to see it as a mixed blessing. On the up side Linux has just about doubled the number of systems running Unix-like OS', attracted somewhere between two and eight million new converts away from the "Wintel" paradigm, and even wedged a little bit of "choice" into the minds of the industry media. On the down side SCO can no longer charge thousands of dollars for the low end of their systems. This doesn't really affect Sun, DEC, and HP so much -- since they are primarily hardware vendors who only got into the OS business to keep their iron moving out the door. SCO and BSDI have the tough fight since the bulk of their business is OS sales.

(Note: BSDI is *not* to be confused with the FreeBSD, NetBSD, OpenBSD, or 386BSD (Jolix) packages. They are a company that produces a commercial Unix, BSDI/OS. The whole Free|Net|Open-BSD set of programming projects evolved out of the work of Mr. and Mrs. Jolitz -- which was called 386BSD -- and which I call "Jolix" -- a name which I also spotted in the _Using_C-Kermit_ book from Digital Press).

So there don't seem to be any Oracle, Sybase, or Informix servers available for Linux. The small guys like JustLogic and InfoFlex have an opportunity here -- but it's a small crack in a heavy door, and some of them are likely to get their toes broken in the process.

Meanwhile NT will keep gaining market share -- because its entry-level price is still a tiny fraction of that of any of the "big guys."

I've just barely scratched the tip of the iceberg (to thoroughly blend those metaphors). There are so many other aspects of comparison that it's hard to even list them -- let alone talk about how Linux and NT measure up to them.

It's also important to realize that it's not just NT vs. Linux. There are many forms of Unix -- most of them are quite similar to Linux from a user's, and even from an administrator's, point of view. There are also many operating systems that are vastly different from both NT (which is supposed to be fundamentally based on VMS) and the various Unix variants.

There are things like Sprite (a Berkeley research project), Amoeba and Chorus (distributed network operating systems), EROS, and many others.

Here's a link where you can find out more about operating systems in general: Yahoo! Computers and Internet: Operating Systems: Research

-- Jim


 Legibility

From: Robert E Glacken

I use a 256 shade monochrome monitor. The QUESTIONS are invisible.

 What questions? What OS? What GUI? (I presume that the normal text is visible in text mode, so you must be using a GUI of some sort.)

I wouldn't expect much from a monochrome monitor set to show 256 (or even 127) shades of grey. Almost no one in the PC/Linux world uses those -- so almost no one tunes their color tables and applications to support them.

Suggestions -- get a color screen -- or drop the GUI and use text mode.

-- Jim


 MetroX Problems

From: Allen Atamer

I am having trouble setting up my X server. Whether I use MetroX or XFree86 to set it up, it's still not working.

When I originally chose MetroX to install, I got to the setup screen, chose my card and resolution, saved and exited. Then I started up X, and my screen loaded the X server, but the graphics were all messed up. I exited, then changed some settings, and now I can't even load the X server. The Xerrors file says it had problems loading the 'core'.

 Hmm. You don't mention what sort of video card you're using or what was "messed up." As I've said many times in my column -- I'm not much of an "Xpert" (or much of a "TeXpert" for that matter).

MetroX and XFree86 each have their own support pages on the web -- and there are several X specific newsgroups where you'd find people who are much better with X than I.

Before you go there to post, I'd suggest that you write up the exact type of video card and monitor you have in excruciating detail -- and make sure you go through the X HOWTOs and the Red Hat manual. Also be sure to check the errata page at Red Hat (http://www.redhat.com/errata.html) -- this will let you know about any problems that were discovered after the release of 4.1.

One other thing you might try is getting the new version (4.2 -- Biltmore) -- and checking its errata sheet. You can buy a new set of CDs (http://www.cheapbytes.com is one inexpensive source) or you can use up a bunch of bandwidth by downloading it all. The middle road is to download just the parts you need.

I notice (looking at the errata sheets as I type this) that XFree86 is up to version 3.3.1 (at least). This upgrade is apparently primarily to fix some buffer overflow (security) problems in the X libraries.

 By the way, how do I mount what's on the second cd and read it? (vanderbilt 4.1)

 First umount the first CD with a command like: umount /cdrom Remove it. Then 'mount' the other one with a command like: mount -t iso9660 -o ro /dev/scd0 /cdrom ... where /cdrom is some (arbitrary but extant) mount point and /dev/scd0 is the device node that points to your CD drive (that would be the first SCSI CD-ROM on your system -- IDE and various other CDs have different device names).

To find out the device name for your CD, use the mount command BEFORE you unmount the other CD. It will show each mounted device and its current mount point.
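A hedged sketch of that lookup, in case you want to script it (the mount point below is just an example -- substitute whatever 'mount' shows for your CD):

```shell
#!/bin/sh
# Pick the device node out of 'mount' output, which looks like:
#   /dev/scd0 on /cdrom type iso9660 (ro)
# Field 1 is the device, field 3 the mount point.
mountpoint=/                # e.g. /cdrom or /mnt/cd for your CD drive
device=$(mount | awk -v mp="$mountpoint" '$2 == "on" && $3 == mp {print $1}')
echo "$device"
```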

Personally I use /mnt/cd as my mount point for most CD's. I recommend adding an entry to your /etc/fstab file (the "filesystems table" for Unix/Linux) that looks something like this:

# /etc/fstab
/dev/scd0      /mnt/cd            iso9660 noauto,ro,user,nodev,nosuid 0 0

This will allow you to use the mount and umount commands as a normal user (without the need to su to 'root').

I also recommend changing the permissions of the mount command to something like:

-rwsr-x---   1 root     console            26116 Jun  3  1996 /bin/mount
(chgrp console `which mount` && chmod 4750 `which mount`)

... so that only members of the group "console" can use the mount command. Then add your normal user account to that group.
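If you want to see what that mode looks like without touching the real /bin/mount (which requires root), here's a throwaway sketch using an arbitrary scratch copy:

```shell
#!/bin/sh
# Demonstrate the 4750 (-rwsr-x---) mode on a scratch copy of a binary;
# nothing here needs root, and the real mount command is left alone.
cp /bin/ls /tmp/demo-mount
chmod 4750 /tmp/demo-mount   # 4 = setuid bit, 750 = rwxr-x---
ls -l /tmp/demo-mount        # first column reads -rwsr-x---
```

On the real system you'd apply the chgrp and chmod to /bin/mount itself, as root.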

The idea of all this is to strike a balance between the convenience and reduced "fumblefingers" exposure of running the privileged command as a normal user -- and the potential for (as yet undiscovered) buffer overflows to be used by "guest" users to compromise the system.

(I recommend similar procedures for ALL SUID binaries -- but this is an advanced issue that goes *WAY* beyond the scope of this question).

Allen, You really need to get a copy of the "Getting Started" guide from the Linux Documentation Project. This can be downloaded and printed (there's probably a copy on your CD's) or you can buy the professionally bound editions from any of several publishers -- my favorite being O'Reilly & Associates (http://www.ora.com).

Remember that the Linux Gazette "Answer Guy" is no substitute for reading the manuals and participating in Linux newsgroups and mailing lists.

-- Jim


 Installing Linux

From: Aryeh Goretsky

 [ Aryeh, I'm copying my Linux Gazette editor on this since I've put in enough explanation to be worth publishing it ]

 ..... why ... don't they just call it a disk boot sector . .... Okay, I've just got to figure out what the problem is, then. Are there any utilities like NDD for Linux I can run that will point out any errors I made when entering the superblock info?

 Nothing with a simple, colorful interface. 'fsck' is at least as good with ext2 filesystems as NDD is with FAT (MS-DOS) partitions. However 'fsck' (or, more specifically, e2fsck) has a major advantage since the ext2fs was designed to be robust. The FAT filesystem was designed to be simple enough that the driver code and the rest of the OS could fit on a 48K (yes, forty-eight kilobytes) PC (not XT, not AT, and not even close to a 386). So, I'm not knocking NDD when I say that fsck works "at least" as well.

However, fsck doesn't touch your MBR -- it will check your superblock and recommend a command to restore the superblock from one of the backups if yours is damaged. Normally newfs (like MS-DOS' FORMAT) or mke2fs (basically the same thing) will scatter extra copies of the superblock across the filesystem (every 8192 blocks or so). So there are usually plenty of backups.
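If you'd like to see those backups in action without risking a real disk, you can build a scratch, file-backed ext2 filesystem -- a sketch that assumes the mke2fs and e2fsck tools are installed; the file name and sizes are arbitrary:

```shell
#!/bin/sh
# Make a 16MB file and put an ext2 filesystem on it (no real disk involved).
dd if=/dev/zero of=/tmp/demo.img bs=1024 count=16384 2>/dev/null
mke2fs -F -q -b 1024 /tmp/demo.img    # -F: operate on a plain file
# With 1K blocks the first backup superblock sits at block 8193.
# Ask e2fsck to use that backup instead of the primary:
e2fsck -fy -B 1024 -b 8193 /tmp/demo.img || true
# (an exit code of 1 just means "filesystem was modified" -- normal when
# restoring from a backup superblock)
```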

So, usually, you'd just run fdisk to check your partitions and /sbin/lilo to write a new MBR (or other boot sector). /sbin/lilo will also update its own "map" file -- and may (optionally) make a backup of your original boot sector or MBR.

(Note: There was an amusing incident on one of the mailing lists or newsgroups -- in which a user complained that Red Hat had "infected his system with a virus." It turns out that lilo had moved the existing (PC/MBR) virus from his MBR to a backup file -- where it was finally discovered. So, lilo had actually *cured* his system of the virus).

Actually when you run /sbin/lilo you're "compiling" the information in the /etc/lilo.conf file and writing that to the "boot" location -- which you specify in the .conf file.

You can actually call your lilo.conf anything you like -- and you can put it anywhere you like -- you'd just have to call /sbin/lilo with a -C switch and a path/file name. /etc/lilo.conf is just the built-in default which the -C option over-rides.

Here's a copy of my lilo.conf (which I don't actually use -- since I use LOADLIN.EXE on this system). As with many (most?) Unix configuration files the comments start with hash (#) signs.

boot=/dev/hda
# write the resulting boot block to my first IDE hard drive's MBR.
# if this was /dev/hdb4 (for example) /sbin/lilo would write the 
# resulting block to the logical boot record on the fourth partition
# of my second IDE hard drive.   /dev/sdc would mean to write it to
# the MBR of the third SCSI disk.
# /sbin/lilo will print a warning if the boot location is likely to 
# be inaccessible to most BIOS' (i.e. would require a software driver
# for DOS to access it).

## NOTE:  Throughout this discussion I use /sbin/lilo to refer to the 
## Linux executable binary program and LILO to refer to the resulting
## boot code that's "compiled" and written by /sbin/lilo to whatever
## boot sector your lilo.conf calls for.  I hope this will minimize the
## confusion -- though I've liberally re-iterated this with parenthetical
## comments as well.

# The common case is to put boot=/dev/fd0H1440 to specify that the
# resulting boot code should be written to a floppy in the 1.44Mb
# "A:" drive when /sbin/lilo is run.  Naturally this would require
# that you use this diskette to boot any of the images and "other"
# stanzas listed in the rest of this file.  Note that the floppy
# could be completely blank -- no kernel or files are copied to it
# -- just the boot sector!


map=/boot/map
        # This is where /sbin/lilo will store a copy of the map file --
        # which contains the cylinder/sector/side address of the images
        # and message files  (see below)
        # It's important to re-run /sbin/lilo to regenerate the map
        # file any time you've done anything that might move any of 
        # these image or message files (like defragging the disk,
        # restoring any of these images from a backup -- that sort
        # of thing!).


install=/boot/boot.b
        # This file contains code for LILO (the boot loader) -- this is 
        # an optional directive -- and necessary in this case since it 
        # simply specifies the default location.
        
prompt
        # This instructs the LILO boot code to prompt the user for 
        # input.  Without this directive  LILO would just wait
        # up to "delay" time (default 0 tenths of a second -- none)
        # and boot using the default stanza.
        # if you leave this and the "timeout" directives out --
        # but you put in a delay=X directive -- then LILO won't 
        # prompt the user -- but will wait for X tenths of a second
        # (600 is 10 seconds).  During that delay the user can hit a 
        # shift key, or any of the NumLock, Scroll Lock type keys to 
        # request a LILO prompt.

timeout=50
        # This sets the amount of time LILO (the boot code) will 
        # wait at the prompt before proceeding to the default
        # 0 means 'wait forever'

message=/etc/lilo.message
        # this directive tells /sbin/lilo (the conf. "compiler") to 
        # include the contents of this message in the prompt which LILO
        # (the boot code) displays at boot time.  It is a handy place to
        # put some site specific help/reminder messages about what
        # you call your kernels and where you put your alternative bootable
        # partitions and what you're going to do to people who reboot your 
        # Linux server without a very good reason.

other=/dev/hda1
        label=dos
        table=/dev/hda
        # This is a "stanza"
        # the keyword "other" means that this is referring to a non-Linux
        # OS -- the location tells LILO (boot code) where to find the 
        # "other" OS' boot code (in the first partition of the first IDE --
        # that's a DOS limitation rather than a Linux constraint).
        # The label directive is an arbitrary but unique name for this stanza
        # to allow one to select this as a boot option from the LILO 
        # (boot code) prompt.

        # Because it is the first stanza it is the default OS --
        # LILO will boot this partition if it reaches timeout or is 
        # told not to prompt.  You could also over-ride that using a 
        # default=$labelname$ directive up in the "global" section of the
        # file.

image=/vmlinuz
        label=linux
        root=/dev/sda5
        read-only
        # This is my "normal" boot partition and kernel.
        # the "root" directive is a parameter that is passed to the 
        # kernel as it loads -- to tell the kernel where its root filesystem
        # is located.  The "read-only" is a message to the kernel to initially
        # mount the root filesystem read-only -- so the rc (AUTOEXEC.BAT) 
        # scripts can fsck (do filesystem checks -- like CHKDSK) on it.  
        # Those rc scripts will then normally remount the fs in "read/write" 
        # mode.

image=/vmlinuz.old
        label=old
        root=/dev/sda5
        append= single
        read-only
        # This example is the same except that it loads a different kernel
        # (presumably and older one -- duh!).  The append= directive allows
        # me to pass arbitrary directives on to the kernel -- I could use this
        # to tell the kernel where to find my Ethernet card in I/O, IRQ, and 
        # DMA space -- here I'm using it to tell the kernel that I want to come
        # up in "single-user" (fix a problem, don't start all those networking
        # gizmos) mode.

image=/mnt/tmp/vmlinuz
        label=alt
        root=/dev/sdb1
        read-only

        # This last example is the most confusing.  My image is on some other
        # filesystem (at the time that I run /sbin/lilo to "compile" this 
        # stanza). The root fs is on the first partition of the 2nd SCSI drive.
        # It is likely that /dev/sdb1 would be the filesystem mounted under 
        # /mnt/tmp when I would run /sbin/lilo.  However it's not "required."
        # My kernel image file could be on any filesystem that was mounted.
        # /sbin/lilo will warn me if the image is likely to be inaccessible
        # by the BIOS -- it can't say for sure since there are a lot of 
        # BIOS' out there -- some of the newer SCSI BIOS' will boot off of a 
        # CD-ROM!

I hope that helps. The lilo.conf man page (in section 5) gives *lots* more options -- like the one I just saw while writing this that allows you to have a password for each of your images -- or for the whole set. Also there are a number of kernel options described in the BootPrompt-HOWTO. One of the intriguing ones is panic= -- which allows you to tell the Linux kernel how long to sit there displaying a kernel panic. The default is "forever" -- but you can use the append= line in your lilo.conf to pass a panic= parameter to your kernel -- telling it how many seconds to wait before attempting to reboot.
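For instance, a hypothetical stanza (reusing the image and root device from the example above) that reboots 30 seconds after a panic:

```
image=/vmlinuz
        label=panic30
        root=/dev/sda5
        append="panic=30"
        read-only
        # The append= directive passes "panic=30" to the kernel --
        # telling it to wait 30 seconds after a kernel panic and
        # then attempt a reboot (the default is to wait forever).
```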

In the years that I've used Linux I've only seen a couple (like two or three) kernel panics (that could be identified as such). Perhaps a dozen times I've had a Linux system freeze or go comatose enough that I hard reset it. (Most of those involve very bad hardware IRQ conflicts). Once I've even tricked my kernel into scribbling garbage all over one of my filesystems (don't play with linear and membase in your XConfig file -- and, in particular don't specify a video memory base address that's inside of your system's RAM address space).

So I'm not sure if setting a panic= switch would help much. I'd be much more inclined to get a hardware watchdog timer card and enable the existing support for that in the kernel. Linux is the only PC OS that I know of that comes with this support "built-in."

For those who aren't familiar with them, a watchdog timer card is a card (typically taking an ISA slot) that implements a simple count-down and reset (strobing the reset line on the system bus) feature. This is activated by a driver (which could be a DOS device driver, a Netware Loadable Module, or a little chunk of code in the Linux kernel). Once started, the card must be updated periodically (the period is set as part of the activation/update). So -- if the software hangs -- the card *will* strobe the reset line.

(Note: this isn't completely fool-proof. Some hardware states might require a complete power cycle, and some sorts of critical server failures will render the system's services unavailable without killing the timer driver software. However it is a good sight better than just hanging.)

These cards cost about $100 (U.S.) -- which is a pity since there's only about $5 worth of hardware there. I think most Sun workstations have this feature designed into the motherboard -- which is what PC manufacturers should scramble to do.


 AG

At 11:43 AM 6/10/97 -0700, you wrote: Subject: Once again, I try to install Linux... ...and fail miserably. This is getting depressing. Someone wanna explain this whole superblock concept to me? Use small words....

 Aryeh, Remember master boot records (MBRs)? Remember "logical" boot records -- or volume boot records?

A superblock is the Unix term for a logical boot record. Linux uses normal partitions that are compatible with the DOS, OS/2, NT (et al) hard disk partitioning scheme.

To boot Linux you can use LILO (the Linux loader) which can be written to your MBR (most common), to your "superblock" or to the "superblock" of a floppy. This little chunk of code contains a reference (or "map") to the device and logical sector of one or more Linux kernels or DOS (or OS/2) bootable partitions.

There is a program called "lilo" which "compiles" a lilo.conf (configuration file) into this LILO "boot block" and puts it onto the MBR, superblock, or floppy boot block for you. This is the source of most of the confusion about LILO. I can create a boot floppy with nothing but this boot block on it -- no kernel, no filesystems, nothing. LILO doesn't care where I put any of my Linux kernels -- so long as it can get to them using BIOS calls (which usually limits you to putting the kernel on one of the first two drives connected to the first drive controller on your system).

Another approach is to use LOADLIN.EXE -- this is a DOS program that loads a Linux (or FreeBSD) kernel. The advantage of this is that you can have as many kernel files as you like, and they can be located on any DOS-accessible device (even if you had to load various weird device drivers to be able to see that device).
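A typical invocation from the DOS prompt looks something like this (a sketch -- the kernel path and root partition are placeholders for your own layout):

```
C:\> LOADLIN C:\LINUX\VMLINUZ root=/dev/hda2 ro
```

The arguments after the kernel image are ordinary kernel boot parameters -- the same sort of thing you'd put on an append= line in lilo.conf.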

LOADLIN.EXE is used by some CD-ROM based installation packages -- avoiding the necessity of using a boot floppy.

The disadvantages of LOADLIN include the fact that you may have loaded some device drivers and memory managers that have re-mapped (hooked into) critical BIOS interrupt vectors. LOADLIN often needs a "boot time hardware vector table" (which it usually writes as C:\REALBIOS.INT -- a small hidden/system file). Creating this file involves booting from a "stub" floppy (which saves the table) and rebooting/restarting the LOADLIN configuration to tell it to copy the table from the floppy to your HD. This must be done whenever you change video cards, add any controller with a BIOS extension (a ROM) or otherwise play with the innards of your machine.

Call me and we can go over your configuration to narrow down the discussion. If you like, you can point your web browser at www.ssc.com/lg and look for articles by "The Answer Guy" there. I've described this at greater length in some of my articles there.

-- Jim


 Adding Programs to the Pull Down Menus

From: Ronald B. Simon

Thank you for responding to my request. By the way I am using RedHat release 4 and I think TheNextLevel window manager. I did find a .fvwm2rc.programs tucked away in...

 Ronald, TheNextLevel is an fvwm derivative.

 /etc/X11/TheNextLevel/. I added a define ProgramCM(Title,,,program name) and under the start/applications menu I saw Title. When I put the cursor over it and pressed the mouse button, everything froze. I came to the conclusion that I am in way over my head and that I probably need to open a window within the program that I am trying to execute. Anyway, I will search for some 'C' code that shows me how to do that. Thanks again!

 I forgot to mention that any non-X program should be run through an xterm. This is normally done with a line in your rc file like:

Exec "Your Shell App" exec xterm -e /path/to/your/app &

... (I'm using fvwm syntax here -- I'll trust you to translate to TNL format). Try that -- it should fix you right up.

Also -- when you think your X session is locked up -- try the Ctrl-Alt-Fx key (where Fx is the function key that corresponds to one of your virtual consoles). This should switch you out of GUI mode and into your normal console environment. You might also try Alt-SysReq (Print Screen on most keyboards) followed by a digit from the alphanumeric portion of your keyboard (i.e. NOT from the numeric keypad). This is an alternative binding for VC switching that might be enabled on a few systems. If all of that fails, you can try Ctrl-Alt-Backspace. This should (normally) signal the X server to shut down.

Mostly I doubt that your server actually hung. I suspect that you confused it a bit by running a non-X program not "backgrounded" (you DO need those trailing ampersands) and failing to supply it with a communications channel back to X (an xterm).

Please remember that my knowledge of X is very weak. I hardly ever use it and almost never administer/customize it. So you'll want to look at the L.U.S.T. mailing list, or the comp.windows.x or (maybe) the comp.os.linux.x newsgroups (although there is nothing to these questions which is Linux specific). I looked extensively for information about TheNextLevel on the web (in Yahoo! and Alta Vista). Unfortunately, the one page that almost all of the references pointed to was down.

The FVWM home page is at: http://www3.hmc.edu/~tkelly/docs/proj/fvwm.html

-- Jim


 Linux Skip

From: Jesse Montrose

 Time warp: This message was lost in my drafts folder while I was looking up some of the information. As it turns out the wait was to our advantage. Read on.

 Date: Sun, 16 Mar 1997 13:54:34 -0800

Greetings, this question is intended for the Answer Guy associated with the Linux Gazette..

I've recently discovered and enjoyed your column in the Linux Gazette, I'm hoping you might have news about a linux port of sun's skip ip encryption protocol.

Here's the blurb from skip.incog.com: SKIP secures the network at the IP packet level. Any networked application gains the benefits of encryption, without requiring modification. SKIP is unique in that an Internet host can send an encrypted packet to another host without requiring a prior message exchange to set up a secure channel. SKIP is particularly well-suited to IP networks, as both are stateless protocols. Some of the advantages of SKIP include:

 I heard a bit about SKIP while I was at a recent IETF conference. However I must admit that it got lost in the crowd of other security protocols and issues.

So far I've paid a bit more attention to the FreeS/WAN project that's being promoted by John Gilmore of the EFF. I finally got ahold of a friend of mine (Hugh Daniel -- one of the architects of Sun's NeWS project -- and a well-known cypherpunk and computer security professional).

He explained that SKIP is the "Secure Key Interchange Protocol" -- that it is a key management protocol (incorporated in ISAKMP/Oakley).

For secure communications you need:

 My employer is primarily an NT shop (with sun servers), but since I develop in Java, I'm able to do my work in linux. I am one of about a dozen telecommuters in our organization, and we use on-demand ISDN to dial in directly to the office modem bank, in many cases a long distance call.

 I'm finally working on configuring my dial-on-demand ISDN line here at my place. I've had diald (dial-on-demand over a 28.8 modem) running for about a month now. I just want to cut down on that dial time.

 We're considering switching to public Internet connections, using skip to maintain security. Skip binaries are available for a few platforms (windows, freebsd, sunos), but not linux. Fortunately the source is available (http://skip.incog.com/source.html) but it's freebsd, and I don't know nearly enough deep linux to get it compiled (I tried making source modifications).

 If I understand it correctly SKIP is only a small part of the solution.

Hopefully FreeS/WAN will be available soon. You can do quite a bit with ssh (and I've heard of people who are experimenting with routing through some custom made tunnelled interface). FreeBSD and Linux both support IP tunneling now.

For information on using ssh and IP tunnels to build a custom VPN (virtual private network) look in this month's issue of Sys Admin Magazine (July '97). (Shameless plug: I have an article about C-Kermit appearing in the same issue).

Another method might be to get NetCrypto. Currently the package isn't available for Linux -- however McAfee is working on a port. Look at http://www.mcafee.com

 After much time with several search engines, the best I could come up with was another fellow also looking for a linux version of skip :) Thanks! jesse montrose

 Jesse, Sorry I took so long to answer this question. However, as I say, this stuff has changed considerably -- even in the two months between the time I started this draft message and now.

-- Jim


 ActiveX for Linux

From: Gerald Hewes

Jim, I read your response on ActiveX in the Linux Gazette. At ../issue18/lg_answer18.html#active

Software AG is porting the non-GUI portions of ActiveX, called DCOM, to Linux. Their US site, where it should be hosted, appears down as I write this e-mail message, but there is a link on their home page to a Linux DCOM beta: http://www.softwareag.com

 I believe the link ought to be http://www.sagus.com/prod-i~1/net-comp/dcom/index.htm

 As for DCOM, its main value for the Linux community is in making Microsoft Distributed Object Technology available to the Linux community. Microsoft is trying to push DCOM over CORBA.

 I know that MS is "trying to push DCOM over CORBA" (and OpenDoc, and now, JavaBeans). I'm also aware that DCOM stands for "distributed component object model," CORBA is the "common object request broker architecture," and SOM is IBM's "system object model" (OS/2).

The media "newshounds" have dragged these little bones around and gnawed on them until we've all seen them. Nonetheless I don't see its "main value to the Linux community."

These "components" or "reusable objects" will not make any difference so long as significant portions of their functionality are tied to specific OS (GUI) semantics. However, this coupling between specific OS' has been a key feature of each of these technologies.

It's Apple's OpenDoc, IBM's DSOM, and Microsoft's DCOM!

While I'm sure that each has its merits from the programmer's point of view (and I'm in no position to comment on their relative technical pros and cons) -- I have yet to see any *benefit* from a user or administrative point of view.

So I suppose the question here becomes:

Is there any ActiveX (DCOM) control (component) that delivers any real benefit to any Linux user? Do any of the ActiveX controls not have a GUI component to them? What does it mean to make the "non-GUI portions" of DCOM available? Is there any new network protocol that this gives us? If so, what is that protocol good for?

For more information, check out http://www.microsoft.com/oledev

While I encourage people to browse around -- I think I'll wait until someone can point out one DCOM component, one JavaBean, one CORBA object, or one whatever-buzzword-you-want-to-call-it-today and can explain in simple "Duh! I'm a user!" terms what the *benefit* is.

Some time ago -- in another venue -- I provided the net with an extensive commentary on the difference between "benefits" and "features." The short form is this:

A benefit is relevant to your customer. To offer a benefit requires that you understand your customer. "Features" bear no relation to a customer's needs. However, mass marketing necessitates the promotion of features -- since the *mass* marketer can't address individual and niche needs.

Example: Microsoft operating systems offer "easy-to-use graphical interfaces" -- first, "easy to use" is highly subjective. In this case it means that there are options listed on menus and buttons, and the user can guess at which ones apply to their need and experiment until something works. That is a feature -- one I personally loathe. To me "easy to use" means having documentation that includes examples that are close to what I'm trying to do -- so I can "fill in the blanks." Next there is the ubiquitously touted "GUI." That's another *feature*. To me it's of no benefit -- I spend 8 to 16 hours a day looking at my screen. Text-mode screens are far easier on the eyes than any monitor in graphical mode.

To some people, such as the blind, GUIs are a giant step backward in accessibility. The GUI literally threatens to cut these people off from vital employment resources.

I'm not saying that the majority of the world should abandon GUI's just because of a small minority of people who can't use them and a smaller, crotchety contingent of people like me that just don't like them. I'm merely trying to point out the difference between a "feature" and a "benefit."

The "writing wizards" offered by MS Word are another feature that I eschew. My writing isn't perfect and I make my share of typos, as well as spelling and grammatical errors. However Most of what I write goes straight from my fingers to the recipient -- no proofreading and no editing. When I've experimented with spell checkers and "fog indexes" I've consistently found that my discourse is beyond their capabilities -- much too specialized and involving far too much technical terminology. So I have to over-ride more than 90% of the "recommendations of these tools.

Although my examples have highlighted Microsoft products, we can turn this around and talk about Linux' famed "32-bit power" and "robust stability." These, too, are *features*. Stability is a benefit to someone who manages a server -- particularly a co-located server at a remote location. However, the average desktop applications user couldn't care less about stability. So long as their applications manage to autosave the last three versions of his/her documents, the occasional reboot is just a good excuse to go get a cup of coffee.

Multi-user is a feature. Most users don't consider this to be a benefit -- and the idea of sharing "their" system with others is thoroughly repugnant to most modern computer users. On top of that, the network services features which implement multi-user access to Linux (and other Unix systems) and NT are gaping security problems so far as most IS users are concerned. So having a multi-user system is not a benefit to most of us. This is particularly true of the shell access that most people identify as *the* multi-user feature of Unix (as opposed to the file sharing and multiple user profiles, accounts, and passwords that pass for "multi-user" under Windows for Workgroups and NT).

So, getting back to ActiveX/DCOM -- I've heard of all sorts of features. I'd like to hear about some benefits. Keep in mind that any feature may be a benefit to someone -- so benefits generally have to be expressed in terms of *who* is the beneficiary.

Allegedly programmers are the beneficiary of all these competing component and object schema. "Use our model and you'll be able to impress your boss with glitzy results in a fraction of the time it would take to do any programming" (that seems to be the siren song to seduce people to any of these).

So, who else benefits?

-- Jim


 Bash String Manipulations

From: Niles Mills

Oddly enough -- while it is easy to redirect the standard error of processes under bash -- there doesn't seem to be an easy, portable way to explicitly generate messages on (or redirect output to) stderr. The best method I've come up with is to use the /proc/ filesystem (process table) like so:

function error { echo "$*" > /proc/self/fd/2 ; }

Hmmmm...how about good old

>&2
?

$ cat example
#!/bin/bash
echo normal
echo error >&2
$ ./example
normal
error
$ ./example > file
error
$ cat ./file
normal
$ bash -version
GNU bash, version 1.14.4(1)

Best Regards, Niles Mills

 I guess that works. I don't know why I couldn't come up with that on my own. But my comment worked -- a couple of people piped right up with the answer.

 Amigo, that little item dates back to day zero of Unix and works on all known flavors. Best of luck in your ventures.

Niles Mills
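Putting the thread together, the /proc trick can be replaced with a plain >&2 redirection inside the function itself -- a minimal sketch, portable to any Bourne-style shell:

```shell
#!/bin/sh
# error(): send its arguments to standard error using the >&2
# redirection discussed above, instead of /proc/self/fd/2.
error () {
    echo "$*" >&2
}

error "something went wrong"   # appears on stderr
echo "normal output"           # appears on stdout
```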


 Blinking Underline Cursor

From: Joseph Hartmann

I know an IBM compatible PC is "capable" of having a blinking underline cursor, or a blinking block cursor.

My Linux system "came" with a blinking underline, which is very difficult to see. But I have not been able (for the past several hours) to make *any* headway toward finding out how to change the cursor to a blinking block.

 You got me there. I used to know about five lines of x86 assembly language to call the BIOS routine that sets the size of your cursor. Of course that wouldn't work under Linux since the BIOS is mapped out of existence during the trip into protected mode.

I had a friend who worked with me back at Peter Norton Computing -- he wrote a toy program that provided an animated cursor -- and had several neat animated sequences to show with it -- a "steaming coffee cup," a "running man," and a "spinning galaxy" are the ones I remember.

If you wanted to do some kernel hacking it looks like you'd change the value of the "currcons" structure in one of the /usr/src/linux/drivers/char/ files -- maybe it would be "vga.c"

On the assumption that you are not interested in that approach (I don't blame you), I've copied the author of SVGATextMode (a utility for providing text-console access to the advanced features of most VGA video cards).

Hopefully he doesn't mind the imposition. Perhaps he can help.

I've also copied Eugene Crosser and Andries Brouwer the authors of the 'setfont' and 'mapscrn' programs (which don't seem to do cursors -- but do some cool console VGA stuff). 'setfont' lets you pick your text mode console font.

Finally I've copied Thomas Koenig who maintains the Kernel "WishList" in the hopes that he'll add this as a possible entry to that.

Any hints? Best Regards,

 Joe, As you can see I don't feel stumped very often -- and now that I think about it -- I think this would be a neat feature for the Linux console. This is especially true since the people who are most likely to stay away from X Windows are laptop users -- and those are precisely the people who are most likely to need this feature.

-- Jim


 File Permissions

From: John Gotschall

Hi! I was wondering if anyone there knew how I might actually change the file permissions on one of my Linux box's DOS partitions.

I have Netscape running on one box on our local network, but it can't write to another Linux box's MS-DOS filesystem when that filesystem is NFS-mounted. It can write to various Linux directories that have proper permissions, but the MS-DOS directory won't keep a permissions setting; it stays stuck as owned, readable and executable by root.

What you're bumping into is two different issues. The first is the default permissions under which a DOS FAT filesystem is mounted ("root.root 755", that is: owned by user root, group root, rwx for owner, r-x for group and other).

You can change that with options to the mount(8) command. Specifically you want to use something like:

mount -t msdos -o uid=??,gid=??,umask=002

... where you pick suitable values for the UID and GID from your /etc/passwd and /etc/group files (respectively). Note that the umask= option specifies the permission bits to *mask out*, not the permissions themselves: umask=002 yields 775 (rwxrwxr-x) permissions on the mounted files.

The other culprit in this is the default behavior of NFS. For your own protection, NFS defaults to using a feature called "root squash" (which is not a part of a vegetable). This prevents someone who has root access on some other system (as allowed by your /etc/exports file) from accessing your files with the same permissions as your own local root account.

If you pick a better set of mount options (and put them in your /etc/fstab in the fourth field) then you won't have to worry about this feature. I DO NOT recommend that you override that setting with the NFS no_root_squash option in the /etc/exports file (see 'man 5 exports' for details). I personally would *never* use that option with any export that was mounted read-write -- not even in my own home between two systems that have no live connection to the net! (I do use the no_root_squash option together with the read-only option -- but that's a minor risk in my case.)
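For instance, a hypothetical /etc/fstab line (the device, mount point, UID and GID shown are assumptions -- substitute your own) might look like:

```
/dev/hda1   /dosc   msdos   uid=500,gid=100,umask=002   0 0
```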

 Is there a way to change the MS-DOS permissions somehow?

Yes. See the mount(8) options for uid=, gid=, and umask=. I think you can also use the umsdos filesystem type and effectively change the permissions on your FAT-based filesystem mount points.

This was a source of some confusion for me and I've never really gotten it straight to my satisfaction. Luckily I find that I hardly ever use my DOS partitions any more.


Copyright © 1997, James T. Dennis
Published in Issue 19 of the Linux Gazette July 1997


[ TABLE OF CONTENTS ] [ FRONT PAGE ]  Back  Next

"Linux Gazette...making Linux just a little more fun!"


Adventures in Linux: A Redhat Newbie Boldly Treks Onto the Internet Frontier

By A. Cliff Seruntine,


Ever tried using chat to dial out with your modem? If you have, then after a few hours of mind-numbing unproductivity you may have found yourself developing an odd, convulsive twitch and banging your head against your monitor. Another dozen hours of typing in reworded chat scripts and you will find yourself wishing the program was a living, tangible entity so you could delete it once and for all out of the known universe, and thus gain a measure of relief knowing that you have spared others the terrible ordeal of sitting in front of their monitors for perhaps days on end coding pleas for chat to just dial the #!%$ telephone. Truthfully, few programs under any of the operating systems I am familiar with give me the jitters the way chat does.

I recall one frosty summer morning (I live in Alaska, so I can honestly describe some summer mornings as being frosty) when I boldly set off where no Microsoft hacker has gone before -- Linux, the final frontier. Well, that's a bit extreme. Many Microsoft hackers have seen the light and made the transition. Anyway, I had decided I was going to resist Bill Gatus of Borg and not be assimilated, so I put a new hard drive in my computer, downloaded Red Hat Linux 4.1 from Red Hat's ftp server (a two-day ordeal with a 33.6 modem, I might add) and read enough of the install documentation to get started.

Now friends already familiar with the Linux OS offered to come by and help me set it up. But I'd have none of it. After all, I owned a computer and electronics service center. I was the expert. And I was firmly convinced that the best way to truly learn something is to plow through it yourself. So I sat down in front of my PC with a cup of tea, made the two required floppy disks for a hard drive install, and began my voyage into Linux gurudom.

About 45 minutes later I was surprised to discover that I was done. Linux had been installed on my system and little fishies were swimming around my monitor in X windows. Well, I was impressed with myself. "Hah!" I said to the walls. "They said it couldn't be done. Not without background. Not without experience. But I've showed them. I've showed them all! Hah! Hah! Hah!" And then, being the compulsive hacker that I am, I began to do what comes naturally. I hacked. And being the Net buff that I am, the first thing I decided to do was get on the Internet through Linux. And all the stuff I'd read about in my printed copy of the works of the Linux Documentation Project said that the way to dial out with Linux was through chat.

Four days later I found myself on my knees in front of my computer, wearily typing in yet another reworded script for chat, half plea, half incantation, hoping beyond reason that this time chat would perform the miracle I had so long sought and just dial the $#%! phone. Yes, I was by that time a broken man. Worse, a broken hacker. My spirit was crushed. My unique identity was in peril. I could hear Bill Gatus in the distance, but getting closer, closer, saying, "Resistance is futile. You will be assimilated." Resigned to my fate, I wrung my hands, achy and sore from writing enough script variants to fill a novel the size of War and Peace, and prepared to type halt and reboot into Windows 95.

Then a voice said, "Luke. Luke! Use the X, Luke!" I don't know why the voice was calling me "Luke" since my name is Cliff, but somehow I knew to trust that voice. I moved the cursor onto the background, clicked, and opened up the applications menu. There I found a nifty little program called Minicom. I clicked on Minicom, it opened, initialized the modem, and a press of [CTRL-a, d] brought up the dial out options. I selected the edit option with the arrow keys, and at the top entered the name and number of my server. Then I selected the dial option with the arrow keys, and pressed [RETURN]. The X was with me, the modem dialed out, logged into my server, and with a beep announced that I should press any button. Minicom then asked me to enter my login name and password. I breathed a sigh of relief, opened up Arena, typed in an address, and . . . nothing happened. Worse, after about a minute, the modem hung up.

"What?" I wondered aloud, squinting into my monitor, certain that behind the phosphorescent glow I could see little Bill Gatuses frantically chewing away the inner workings of my computer. "Join me, Cliff," they were saying. "It is your destiny."

"I'll never join you," I cried out and whipped out my Linux Encyclopedia. I couldn't find anything in the index on how to avoid assimilation, but I did find out that I needed to activate the ppp daemon and give control of the connection from Minicom to the daemon. The command line that worked best was:

pppd /dev/cua2 115200 -detach crtscts modem defaultroute

-detach is the most important option to include here; it keeps the daemon attached to your terminal, so it takes over the modem line from Minicom instead of forking into the background. pppd activates the Point-to-Point Protocol daemon. /dev/cua* should be given whatever number corresponds to the serial port your modem is attached to, as long as you have a serial modem. 115200 is the max speed of my modem with compression; you should set this to the max speed of your own modem. crtscts tells your modem to use hardware flow control for high-speed transmissions. modem simply indicates the daemon should use the modem control lines. It is a default setting, but I like to set it anyway to remind me what's going on. And defaultroute tells the daemon to make this link the default route for incoming and outgoing data.

The trick is to enter all this before the Minicom connection times out. You could go to the trouble of typing it out every time you log on, but a better way is to set up an alias in .bashrc. Go to the /root directory, type emacs .bashrc (or whatever your preferred editor is) and add the line below:

alias daemon='pppd /dev/cua* <your modem speed> -detach crtscts modem defaultroute'

(Do not forget the quotes, or your alias will not function -- and note there are no spaces around the = sign.)

Finally, go into the control panel, double click on the networking icon, and select the last tab that appears. There you will find near the top the option to set your default gateway and your device. Set your default gateway to whatever your Internet server specifies. Specify your device as /dev/cua (whatever serial port your modem is attached to). Sometimes simply /dev/modem will work if it has been symbolically linked in your system. (By the way, if you haven't already done it, in X you also need to double click the modem icon in the control panel and set your modem to the correct /dev/cua(serial port number) there too). And if you have a SLIP account (rare these days) add the pertinent info while setting up your gateway.

Reboot your system so your new alias and settings take effect. Now just invoke Minicom and dial out. Then, at an xterm prompt, type daemon. Minicom will beep at you for taking away its control of the modem. To be on the safe side, I like to kill Minicom to make sure it stops fighting with the daemon for control of the modem; occasionally it will win the fight and weird things will happen. Then invoke your browser and you are on the World Wide Web.

As a final note, Arena's HTML is kind of weak, and you may find it locking up with newer, more powerful web code. It is a good idea to download a more capable browser such as Netscape 3.01, which makes a fine Linux browser, and install and use that as soon as possible.

And that's all there is to taking your Linux webship onto the Information frontier. Well, I'm enjoying my time on the web. I think I'll build a new site dedicated to stopping the assimilation.


Copyright © 1997, Cliff Seruntine
Published in Issue 19 of the Linux Gazette, July 1997


[ TABLE OF CONTENTS ] [ FRONT PAGE ]  Back  Next


"Linux Gazette...making Linux just a little more fun!"


Atlanta Showcase Report

By Phil Hughes,


The Atlanta Linux Showcase is over, and everyone is beginning to recover. Recover, that is, from being awake too long, being on a plane too long and stuffing more Linux than will fit into one weekend.

ALS was put together by the Atlanta Linux Enthusiasts, the local Linux user's group in Atlanta, Georgia. The show began on Friday evening, June 6 and ran through Sunday afternoon. More than 500 people attended. The report following this one by Todd Shrider covers much of the show, including the talks.

I want to thank Amy Ayers and Karen Bushaw for making their photos available to us with a special thank you to Amy for getting them scanned and uploaded to the SSC ftp site.

I spent most of my time in the Linux Journal booth giving away magazines and talking to show attendees. One aspect that made this show special for me is that I didn't spend most of my time explaining that Linux is a Unix-like operating system to the attendees. Instead, I got to discuss Linux with experienced people with thoughtful questions, letting them know in the process how LJ could help them. Each attendee was truly interested in Linux and stopped at each booth in the show. I expect attendees appreciated the low signal-to-noise ratio in the booths; that is, conversations were solely about Linux.

The Roast

On Saturday night there was a roast--no, I didn't change from a vegetarian into a meat eater overnight--we were roasting Linus. That is, a group of people presented interesting stories about Linus, intended to only slightly embarrass him. At the end of the evening, I felt that the roast had been successful in every way.

In front of a crowd of about 115 people, Eric Raymond, David Miller, Jon "maddog" Hall and I got to pick on this Linus character. Topics varied from Linus almost being hit by a car in Boston because he was so engrossed in talking about a particular aspect of kernel code, to the evolution of the top-half/bottom-half concept in interrupt handlers and to why Linus was apparently moving from geekdom to becoming a "hunk" sportswear model. (See the cover of the San Jose Metro, May 8-14, 1997.)

Maddog finished the roasting by telling a few Helsinki stories and showing a video that included Tove's parents talking about Linus. A good time was had by the roasters and the audience and, as Linus' closing comment was "I love you all," we assume he had a good time too and wasn't offended by our gentle ribbing.

The Future

The show came off very well. I consider this success an amazing feat for an all-volunteer effort. The ALE members plan to write an article for Linux Gazette about how they made this happen. We'll also make this information available on the GLUE web site. I would like to see more shows put on by user groups. The local involvement, the enthusiasm of the attendees and the all-Linux flavor of the show made this weekend a great experience. We are already thinking about a Seattle or Portland show and would like to help others make regional shows a reality.


Take a look at the ALS Photo Album.

More on ALS

by Todd M. Shrider,


I first started writing this article in my hotel room late Sunday evening (or early Monday morning) planning to get just enough sleep that I would wake up in time to catch my plane. The plan didn't work--I missed my 6:00 AM flight out of Atlanta. I did the second draft while waiting for my new 9:45 AM flight. The third draft came (yes, you guessed it) while waiting for my 1:30 PM connection from Detroit to Dayton, also having missed the previous connection because of my first flight's late arrival. Suffice it to say, I'm now back home in Indiana and still enjoying the high received from the Atlanta Linux Showcase.

Thanks to all the sponsors and to our host, the Atlanta Linux Enthusiasts user group, the conference started with a bang and went off without a hitch. The conference was a three-day event, starting with registration Friday and ending Sunday with a kernel-hacking session led by none other than Linus himself. In between there were numerous talks in both a business track and a technical track, several birds-of-a-feather (BoF) sessions and a floor show. These events were broken up with frequent trips to local pubs and very little sleep.

This was my first (but not last) Linux conference, and I found that an added benefit of ALS was meeting all the people who use Linux as a viable business platform/tool. (These same people tend to be doing very cool things with Linux on the side.) From companies such as Red Hat and Caldera to others such as MessageNet, Cyclades and DCG Computers, it was obvious that many people have found very creative ways to make money with Linux. This wasn't limited, by any means, to the vendors. Many of the conference speakers talked of ways to make money with Linux or of their experiences with Linux in a professional environment.

All of these efforts seemed to complement the keynote address, World Domination 101, in which Linus Torvalds called for applications, applications, applications. Did I say he thought Linux needed a few more useful applications? Anyway, he pointed out the more or less obvious fact that, if Linux is going to be a success in a world of commercial operating systems, we need every application type you find in other commercial operating systems. In other words, if you're thinking about doing -- don't think -- just do it. Another thing that Linus pointed out, and that I was glad to hear echoed throughout the conference, was that Linux needs to be easy to use. It needs to be so easy that a secretary or corporate executive could sit down and be as productive as they would be with Windows 95. We need to make people realize that Linux has gotten rid of the high learning curve usually associated with Unix.

Something pointed out by Don Rosenberg, while speaking on the "how-to" and "what's needed next" of commercial Linux, was that we are now in a stage where the innovators (that's us) and the early adopters (that's us as well as the people using Linux in the business world today) must continue to push forward so that we can get the next wave of adopters (the old DOS users) to take us seriously. In Maddog's closing remarks he urged us all to find two DOS users, convert them to Linux and then tell them to do the same. As a step in this direction, today I introduced a local corporate computer sales firm to Linux; whether they take my advice and run with it remains to be seen, but believe me, I'm pushing.

The rest of the conference was filled with business and technical talks. The business talks included things such as Eric Raymond's "The Cathedral and the Bazaar", talks on OpenLinux by both Jeff Farnsworth and Steve Webb and "Linux Connectivity for Humans" by none other than Phil Hughes. Lloyd Brodsky was on hand to talk about Intranet Support of Collaborative Planning while Lester Hightower brought us the story of PCC and their efforts to bring Linux to the business world. Mark Bolzern spoke of the significance of Linux and Bob Young talked of the "process" not the "product" of Linux.

The technical track started with Richard Henderson's discussion of shared libraries and their function across several architectures. Michael Maher gave a HOWTO of Red Hat's RPM package management system, and Jim Paradis discussed EM86 and what remains to be done so that one can run Intel/Linux binaries under Alpha Linux. David Miller then followed, giving a boost of enthusiasm with his discussion of the tasks involved in porting Linux to SPARC, and Miguel de Icaza took us on a trip to the world of RAID and Linux. We convened the next day to hear David Mandelstam discuss what is involved with wide-area networks and Mike Warfield's anatomy of a cracker's intrusion.

All in all, the conference was a huge success. What I might suggest as an improvement for next year is more involvement from the vendors (or maybe just more vendors), a possible sale from the vendors of their special Linux wares to the conference attendees and a possible tutorial session like the ones seen at Uselinux (Anaheim, California, January 1997). Other than that, a few virtual beers (I owe you Maddog) and lots of great geek conversation made for one wild weekend.


Copyright © 1997, Phil Hughes and Todd M. Shrider
Published in Issue 19 of the Linux Gazette, July 1997


[ TABLE OF CONTENTS ] [ FRONT PAGE ]  Back  Next


"Linux Gazette...making Linux just a little more fun!"


SSC is expanding Matt Welsh's Linux Installation & Getting Started by adding chapters about each of the major distributions. Each chapter is being written by a different author in the Linux community. Here's a sneak preview--the Caldera chapter by Evan Leibovitch.--editor


Caldera OpenLinux

By Evan Leibovitch,


This section deals with issues specific to the Caldera releases of Linux, how to install the current release (Caldera OpenLinux) and prepare for the steps outlined in the following chapters. It is intended to be a complement to, not a replacement for, the "Getting Started Guides" Caldera ships with all of its Linux-based products. References to the Getting Started Guide for Caldera Open Linux Base will be indicated throughout this chapter simply as "the Guide".

What is Caldera?

The beginnings of Caldera the company come from an internal Novell project called "Corsair". While Novell owned Unix System V in the early 1990s, Corsair was formed to see if there were things Novell could learn from Linux.

Corsair was a casualty of the changing of the guard at Novell that also caused it to sell off Unix to SCO and WordPerfect to Corel. Novell founder Ray Noorda gave startup capital to this group with the intention of making Linux available in a manner that would be as acceptable to business users and corporate MIS as commercial versions of Unix. Caldera is a privately-held company based in Orem, Utah.

The implementation of this goal has resulted in a series of Linux-based products that have "broken the mold" in a number of ways. Caldera was the first Linux distribution to bundle in commercial software such as premium X servers, GUI desktops, backup software and web browsers; at the time of writing, Caldera is the only Linux distribution officially supported by Netscape.

The Caldera Network Desktop

Caldera's first product, the Caldera Network Desktop (CND), was released to the public in early 1995 in a $29 "preview" form (a rather unusual manner to run a beta test), and in final release version in early 1996. The CND was based on the 1.2.13 Linux kernel, and included Netscape Navigator, Accelerated-X, CrispLite, and the Looking Glass GUI desktop. It also was the first Linux release to offer NetWare client capabilities, being able to share servers and printers on existing Novell networks. Production and sale of CND ceased in March 1997.

Caldera OpenLinux

In late 1996, Caldera announced its releases based on the Linux 2.0.25 kernel would be named Caldera Open Linux (COL) and would be made available at three levels.

As this is written, only the COL Base release is shipping, and feature sets of the other packages are still being determined. For specific and up-to-date lists of the comparative features of the three levels, check the Caldera web site http://www.caldera.com.

Because all three levels of COL build on the Base release, all three are installed the same way. The only difference is in the different auxiliary packages available; their installation and configuration issues are beyond the scope of this guide. Most of COL's add-on packages contain their own documentation; check the /doc directory of the Caldera CD-ROM for more details.

Obtaining Caldera

Unlike most other Linux distributions, COL is not available for downloading from the Internet, nor can it be freely distributed or passed around. This is because of the commercial packages which are part of COL; while most of the components of COL are under the GNU General Public License, the commercial components, such as Looking Glass and Metro-X, are not. In the list of packages included on the COL media starting on page 196 of the Guide, the commercial packages are noted by an asterisk.

COL is available directly from Caldera, or through a network of Partners around the world who have committed to supporting Caldera products. These Partners can usually provide professional assistance, configuration and training for Caldera users. For a current list of Partners, check the Caldera web site.

Preparing to Install Caldera Open Linux

Caldera supports the same hardware as any other release based on Linux 2.0 kernels. Appendix A of the Guide (p145) lists most of the supported SCSI host adapters and the configuration parameters necessary for many hardware combinations.

Taking a page out of the Novell manual style, Caldera's Guide provides an installation worksheet (page 2) that assists you in having at hand all the details of your system that you'll need for installation. It is highly recommended you complete this before starting installation; while some parameters, such as setting up your network, are not required for installation, doing it all at one time is usually far easier than having to come back to it. Sometimes this can't be avoided, but do as much at installation time as possible.

Creating boot/modules floppies

The COL distribution does not come with the floppy disks required for installation. There are two floppies involved; one is used for booting, the other is a "modules" disk which contains many hardware drivers.

While the Guide recommends you create the floppies by copying them from the CD-ROM, it is better to get newer versions of the disks from the Caldera web site. The floppy images on some CD-ROMs have errors that cause problems, especially with installations using SCSI disks and large partitions.

To get newer versions of the floppy images, download them from Caldera's FTP site. In directory pub/col-1.0/updates/Helsinki, you'll find a bunch of numbered directories. Check out the directories in descending order -- that will make sure you get the latest versions.

If you find one of these directories has a subdirectory called bootdisk, the contents of that directory are what you want.

You should find two files:

install-2.0.25-XXX.img
modules-2.0.25-XXX.img

The XXX is replaced by the version number of the disk images. At the time of writing, the current images are 034 and located in the 001 directory.

Once you have these images, transfer them onto two floppies using the methods described on page 4 of the Guide: RAWRITE from the Caldera CD-ROM if copying from a DOS/Windows system, or dd from a Linux system.
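Under Linux, the dd invocation looks something like this (the floppy device /dev/fd0 is an assumption; the 034 image names come from the directory listing described above):

```shell
# Write the downloaded boot image, sector by sector, onto a blank
# floppy in the first drive. WARNING: this overwrites the floppy.
dd if=install-2.0.25-034.img of=/dev/fd0 bs=512

# Then repeat for the modules disk, with a second floppy inserted:
dd if=modules-2.0.25-034.img of=/dev/fd0 bs=512
```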

While Caldera's CD-ROM is bootable (if your system's BIOS allows it), if possible use the downloaded floppies anyway, since they are newer and will contain bug-fixes that won't be in the CD versions.

Preparing the hard disks

This procedure is no different from that of other Linux distributions. You must use fdisk on your booted hard disk to allocate at least two Linux partitions, one for the swap area and one for the root file system. If you are planning to make your system dual-boot COL with another operating system such as MS Windows or DOS or even OS/2, it's usually preferable to install COL last; its "fdisk" recognizes "foreign" OS types better than the disk partitioning tools of most other operating systems.

To run the Linux fdisk, you'll need to start your system using the boot (and maybe the modules) floppy mentioned above. That's because you need to tell COL what kind of disk and disk controller you have; you can't even get as far as entering fdisk if Linux doesn't recognize your hard disk!

To do this, follow the bootup instructions in the Guide, from step 2 on page 33 to the end of page 36. Don't bother going through the installation or detection of CD-ROMs or network cards at this time; all that matters at this point is that Linux sees the boot hard disk so you can partition it using fdisk. A brief description of the use of the Linux fdisk is provided on page 28 of the Guide.

Remember that when running fdisk, you need to set up both your root file system (type 83) and your swap space (type 82) as new partitions. A brief discussion of how much swap space to allocate is offered on page 10 of the Guide.
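Inside fdisk, the session goes roughly like this (the partition number shown is hypothetical; your layout will differ):

```
Command (m for help): n          <- create a new partition (repeat for root and swap)
Command (m for help): t          <- change a partition's type
Partition number (1-4): 2
Hex code (type L to list codes): 82    <- 82 = Linux swap, 83 = Linux native
Command (m for help): w          <- write the table to disk and exit
```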

As soon as you have completed this and written the partition table information to make it permanent, you will need to reboot.


Copyright © 1997, Evan Leibovitch
Published in Issue 19 of the Linux Gazette, July 1997


[ TABLE OF CONTENTS ] [ FRONT PAGE ]  Back  Next


"Linux Gazette...making Linux just a little more fun!"


CLUELESS at the Prompt: A new column for new users

by Mike List,


Welcome to installment 6 of Clueless at the Prompt: a new column for new users.


This time let's take a quick look at the XF86Setup utility, then at X window managers, concentrating on FVWM: adding popup menus, adding and subtracting apps from existing popups, and other relatively easy ways to get a custom appearance and feel.


Using XF86Setup to configure X

Judging from the posts I've seen on Usenet, a lot of people aren't aware that there's an easier way to get X up and running than configuring it the old, confusing way (at least I found it to be that way): a Tcl/Tk script called XF86Setup. While it doesn't totally eliminate the need to manually edit your XF86Config, it does provide a method of getting a usable configuration for most common video cards and monitors. XF86Setup first appeared in the XFree86 3.2 distribution. It uses the lowest-common-denominator VGA 16-color server and a Tcl/Tk (corrections welcome) script to start the config process in X, and thanks to the graphical nature of this utility you can be almost certain to have X running in a couple of tries. If worst comes to worst you can run in 16-color mode until you can work out the details to optimize it for your video hardware. Current downloads of XFree86 all seem to have this included, and if your CD-ROM distribution has X 3.2 or better you already have it available to install to your hard drive. If you download it from xf86.org, be sure to read the release notes for the component files necessary to ensure a successful install. You'll need:

, where ? = the level of the distribution you're using, i.e. 3.2, 3.3, etc., for all installations; read the release notes for any other files your specific hardware might need. Since the 3.3 version just came out, if you are just getting around to setting up X you will most likely want to get that distribution, since every successive version has support for more hardware and often better support for hardware already supported.

OK, you have the files you need: the ones listed above, plus the server for your particular video card (in my case the SVGA server). You may need to do a little detective work to determine which server to use. If you are using the X version that comes on your CD-ROM, you can probably install all the servers (assuming there's space on your hard drive) and let XF86Setup make the choice. To install, type:

       cd /usr/X11R6
Next, copy the preinst.sh and postinst.sh scripts to /var/tmp, then, back in /usr/X11R6, type:

        cd /usr/X11R6
        sh /var/tmp/preinst.sh
The script will remove some symbolic links and check that all the files you need are available; it may print a message listing any files that are needed but not present. Assuming you have followed the above, everything should be in place, and you should get a generally encouraging message when the script exits.

Now for the installation itself,type:

       tar -zxvf /wherever/you/have/X3?files.tgz
You'll have to repeat this step for each of the required files, although if you have the files in a directory by themselves, you might be tempted to type:
       tar -zxvf /wherever/youhavethem/*.tgz
It's been a while and I can't recall whether that works, but it won't hurt anything to try, since the alternative is to unpack each .tgz file separately.
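In fact, tar reads only one archive per invocation; with a wildcard, the second and later .tgz files would be taken as names of members to extract rather than as archives. A small shell loop does the job reliably. Here is a self-contained sketch using a throwaway stand-in archive in /tmp; with the real XFree86 files you would just point srcdir at wherever you downloaded them:

```shell
# Demonstration setup: create a stand-in for the X3? archives
# mentioned above (the directory names here are examples only).
srcdir=/tmp/xfiles
mkdir -p "$srcdir" /tmp/unpacked
echo demo > /tmp/demo.txt
tar -C /tmp -zcf "$srcdir/X33demo.tgz" demo.txt

# Unpack every archive in turn; each tar run handles exactly one file.
cd /tmp/unpacked
for f in "$srcdir"/*.tgz; do
    tar -zxvf "$f"
done
```

For the real installation you would run the loop from /usr/X11R6 as root, exactly where the single-file tar command above is run.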

Next, run the postinst.sh script in the same manner as preinst.sh above; this will make sure all the X components are in the correct places. Then run ldconfig, something like:

       ldconfig -m /usr/X11R6/lib
or reboot to run ldconfig automatically. This links in the libraries necessary to run X. At this point you should be able to start the actual setup by typing, naturally:
       XF86Setup
which will present a dialog box asking if you want to start in graphical mode, or tell you it will start momentarily. At this point you'll be in X, using the 16-color VGA server. Read all the instructions and follow the routine, which I found to be pretty self-explanatory. You will probably have the most trouble finding the right mouse device and protocol, but try each one in turn if you aren't sure. You'll probably also want to change the keyboard to the 102-key US International keyboard. Specify the video card and monitor info; don't worry if you don't know the salient monitor details, you can start at the top of the list and work your way down until you reach a good setting. It's much easier if you have your monitor manual available, so have it on hand if you can. Finish the routine when you think it's right and that should do it. Congratulations on your (hopefully) valid X configuration. If you muff it, just try again with slightly different settings until you get it right.
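For reference, the mouse and keyboard choices you make in XF86Setup end up in your XF86Config file (usually /etc/XF86Config or /usr/X11R6/lib/X11/XF86Config) as sections like the following. The values here are examples only, not a configuration to copy verbatim:

```
Section "Pointer"
    Protocol    "PS/2"          # or "Microsoft", "MouseSystems", ...
    Device      "/dev/mouse"
EndSection

Section "Keyboard"
    Protocol    "Standard"
    XkbModel    "pc102"
    XkbLayout   "us"
EndSection
```

If XF86Setup leaves you close but not quite right, hand-editing these sections is often quicker than rerunning the whole program.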

Window Managers

Most Linux distributions that I'm familiar with use the FVWM window manager by default, and the rest of them should include it, unless you downloaded the files directly from xf86.org, in which case the default is TWM.

FVWM is highly configurable by editing the /var/X11R6/lib/fvwm/system.fvwmrc file. You can use the file as it is, since the most common installed features are already configured, but you can comment out programs you don't have installed by adding a "#" at the beginning of the lines you wish to drop, change colors, add popup menus, and more, just by following the examples. Just be sure to save a copy of system.fvwmrc first by typing:

       cp /var/X11R6/lib/fvwm/system.fvwmrc \
          /var/X11R6/lib/fvwm/system.fvwmrc.old
or something similar, so if you do mess up your customization you can always start from scratch by cp'ing the .old file back to system.fvwmrc. A couple of months ago the Weekend Mechanic column had some very cool ideas on wallpapering the root window, so you might want to check them out.

I made "Internet" and "PPP" popup menus that include lynx, Netscape and a couple of telnet sites, as well as an IRC client, plus an entry to run my chat script from X. You may have other ideas more to your liking; don't be afraid to try, since you can always start over if you don't like the results.
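For the curious, a popup of that sort looks roughly like this in a system.fvwmrc for the FVWM 1.x series; the menu name and program entries here are hypothetical, so substitute whatever you actually run:

```
# Define the popup itself.
Popup "Internet"
        Title   "Internet"
        Exec    "Netscape"      exec netscape &
        Exec    "Lynx"          exec xterm -e lynx &
        Exec    "IRC"           exec xterm -e irc &
EndPopup
```

To make it reachable, add a line such as `Popup "Internet" Internet` inside one of the existing menu blocks (or bind it to a mouse button on the root window), then restart FVWM.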

Take a look at my system.fvwmrc, nothing too sophisticated, but if you compare it to the original you should get the idea. I commented the changes that I made so you can see some of the ways in which you can customize yours.


Copyright © 1997, Mike List
Published in Issue 19 of the Linux Gazette, July 1997


[ TABLE OF CONTENTS ] [ FRONT PAGE ]  Back  Next




Welcome to the Graphics Muse
© 1997 by

muse:
  1. v; to become absorbed in thought
  2. n; [ fr. Any of the nine sister goddesses of learning and the arts in Greek Mythology ]: a source of inspiration
Welcome to the Graphics Muse! Why a "muse"? Well, except for the sisters aspect, the above definitions are pretty much the way I'd describe my own interest in computer graphics: it keeps me deep in thought and it is a daily source of inspiration.

[Graphics Mews] [Musings] [Resources]
This column is dedicated to the use, creation, distribution, and discussion of computer graphics tools for Linux systems.
      This month has been even more hectic than most. I finished the first pass of an article on the 1.0 release of the GIMP and submitted it to the Linux Journal editors. That will be out in the November graphics issue. I'll probably have to do some updates after I get back the marked-up version. I'm also working on the cover art for that issue, using the developers release (currently at 0.99.10) of the GIMP. I've also had quite a bit of regular work (the kind that pays the rent), since I'm getting very close to my code freeze date. This weekend I'll be writing up documentation for it so I can give an introductory class to testers, other developers, Tech Pubs, Tech Support, and Marketing on Monday. I think I picked a bad time to start lifting weights again.
      In this month's column I'll be covering ...
  • More experiences with printing using the Epson Stylus Color 500
  • A brief discussion about DPI, LPI, and halftoning
  • An even briefer discussion about 3:2 pulldown: transferring film to video.
Next month may not be much better. I don't know exactly what I'll be writing about, although I do have a wide list from which to choose. Mostly I'm looking forward to my trip to SIGGRAPH in August. Anyone else going? I should have plenty to talk about after that. I plan on going to at least two of the OpenGL courses being taught at the conference, though I haven't completely decided which ones to take.
      I'm also looking forward to a trip to DC in August as well. A real vacation. No computers. Just museums and monuments. I may need to take some sort of anti-depressant. Nah. I need the break.

Graphics Mews


      Disclaimer: Before I get too far into this I should note that any of the news items I post in this section are just that - news. Either I happened to run across them via some mailing list I was on, via some Usenet newsgroup, or via email from someone. I'm not necessarily endorsing these products (some of which may be commercial), I'm just letting you know I'd heard about them in the past month.


Announcing bttv version 0.4.0

      BTTV is a device driver for Brooktree Bt848 based frame grabber cards like the Hauppauge Win/TV pci, Miro PCTV, STB TV PCI, Diamond DTV2000, and AverMedia. Major new features in version 0.4.0 are rudimentary support for grabbing into user memory and for decoding VBI data like teletext, VPS, etc. in software.

The Motif application xtvscreen now has better support for selecting channels and also works in the dual visual modes (255+24 mil. colors) of Xi Graphics AcceleratedX 3.1 X server.

Author:
      Ralph Metzler
      Marcus Metzler
Web Site:
      http://www.thp.uni-koeln.de/~rjkm/linux/bttv.html


OpenGL4Java 0.3

      This is an initial developer's release of an (unofficial) port of OpenGL(tm) for Java. Leo Chan's original package has been ported to both WindowsNT/95 and to Linux. Several features have been added, the main one being that OpenGL now draws into a Java Frame. What advantage does this provide? Well, you can now add menus to the OpenGL widget as well as receive all normal events such as MouseMotion and Window events. You could very simply have a user rotate an OpenGL object by moving the mouse around in the Frame (the demo for the next release will have this feature).

You can grab it from the developers web page at http://www.magma.ca/~aking/java.


WebMagick Image Web Generator - Version 1.29

WebMagick is a package which makes putting images on the Web as easy as magick. You want WebMagick if you:
  • Have access to a Unix system
  • Have a collection of images you want to put on the Web
  • Are tired of editing page after page of HTML by hand
  • Want to generate sophisticated pages to showcase your images
  • Want to be in control
  • Are not afraid of installing sophisticated software packages
  • Want to use well-documented software (33 page manual!)
  • Support free software
After nine months of development, WebMagick is chock-full of features. WebMagick recurses through directory trees, building HTML pages, imagemap files, and client-side/server-side maps to allow the user to navigate through collections of thumbnail images (somewhat similar to xv's Visual Schnauzer) and select the image to view with a mouse click. In fact, WebMagick supports xv's thumbnail cache format so it can be used in conjunction with xv.

The primary focus of WebMagick is performance. Image thumbnails are reduced and composed into a single image to reduce client accesses, reducing server load and improving client performance. Everything is pre-computed. During operation WebMagick employs innovative caching and work-avoidance techniques to make successive executions much faster. WebMagick has been successfully executed on directory trees containing many tens of directories and thousands of images ranging from tiny icons to large JPEGs or PDF files.

Here is a small sampling of the image formats that WebMagick supports:

  • Windows Bitmap image (BMP)
  • Postscript (PS)
  • Encapsulated Postscript (EPS)
  • Acrobat (PDF)
  • JPEG
  • GIF (including animations)
  • PNG
  • MPEG
  • TIFF
  • Photo CD
WebMagick is written in PERL and requires the ImageMagick (3.8.4 or later) and PerlMagick (1.0.3 or later) packages as well as a recent version of PERL 5 (5.002 or later). Installation instructions are provided in the WebMagick distribution.

Obtain WebMagick from the WebMagick page at http://www.cyberramp.net/~bfriesen/webmagick/dist/. WebMagick can also be obtained from the ImageMagick distribution site at ftp://ftp.wizards.dupont.com/pub/ImageMagick/perl.


EasternGraphics announces public release of `opengl' widget

      EasternGraphics announces the public release of the `opengl' widget, which allows windows with three-dimensional graphics output produced by OpenGL to be integrated into Tk applications. The widget is available for Unix and MS-Windows platforms.

You can download the package from ftp://ftp.EasternGraphics.com/pub/egr/tkopengl/tkopengl1.0.tar.gz

Email:
WWW: http://www.EasternGraphics.com/


ELECTROGIG's GIG 3DGO 3.2 for Linux for $99.

      There is a free demo package for Linux. It's roughly 36M tarred and compressed. A 9M demos file is also available for download. I had placed a notice about this package in the May Muse column, but I guess ELECTROGIG had missed that, so they sent me another announcement (I got the first one from comp.os.linux.announce). Anyway, one thing I didn't mention in May was the price for the full Linux product: $99. This is the complete product, although I'm not sure whether it includes any documentation (it doesn't appear to). The Linux version does not come with any product support, however. You need a 2.0 Linux kernel to run GIG 3DGO.

I also gave a URL that takes you to an FTP site for downloading the demo. A slightly more informative page for downloading the demo and its associated files is at http://www.gig.nl/support/indexftp.html


Type1Inst updated

uploaded version 0.5b of his type1inst font installation utility to sunsite.unc.edu. If it's not already there, it will end up in /pub/Linux/X11/xutils.

Type1inst is a small perl script which generates the "fonts.scale" file required by an X11 server to use any Type 1 PostScript fonts which exist in a particular directory. It gathers this information from the font files themselves, a task which previously was done by hand. The script is also capable of generating the similar "Fontmap" file used by ghostscript. It can also generate sample sheets for the fonts.
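The generated fonts.scale file is plain text: a count on the first line, then one "fontfile XLFD-name" pair per line. A hypothetical two-font example might look like this (the file and font names are made up for illustration):

```
2
arial.pfb -monotype-arial-medium-r-normal--0-0-0-0-p-0-iso8859-1
cour.pfb -adobe-courier-medium-r-normal--0-0-0-0-m-0-iso8859-1
```

Building those XLFD names by reading each font file is exactly the tedious part type1inst automates.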

FTP: ftp://sunsite.unc.edu/pub/Linux/X11/xutils/type1inst-0.5b.tar.gz

Editor's note: I highly recommend this little utility if you are intent on doing any graphic-arts style work, such as with the GIMP.


libgr-2.0.13 has been updated to png-0.96

      It seems the interface to png-0.96 is not binary compatible with png-0.89, so the major version of the shared library was bumped to libpng.so.2.0.96 (the last version was libpng.so.1.0.89).

WHAT IS LIBGR?
Libgr is a collection of graphics libraries, based on libgr-1.3, by Rob Hooft (hooft@EMBL-Heidelberg.DE), that includes:

  • fbm
  • jpeg
  • pbm
  • pgm
  • png
  • pnm
  • ppm
  • rle
  • tiff
  • zlib, for compression
These are configured to build ELF static and shared libraries. This collection (libgr2) is being maintained by <neal@ctd.comsat.com>.

FTP: ftp.ctd.comsat.com:/pub/linux/ELF


Did You Know?

...there is a site devoted to setting up Wacom tablets under XFree86? http://www.dorsai.org/~stasic/wacomx.htm The page's maintainer, , says:
So far, nobody has told me that he or she couldn't follow the instructions.

Fred Lepied is the man who actually created the support for the Wacom tablets under XFree86. He gave me instructions on setting my ArtPad II up and I repeated this, periodically, on Usenet. When the requests for help there turned into a steady stream, I decided to put up a web page (mainly to show that I can make one but not use it for a lame ego trip).

<adam@uunet.pipex.com> has said he's also gotten this to work and offered to help others who might need assistance getting things set up.

...there is rumored work being done on 3Dfx support for Linux? writes:

I was looking around for info about the 3Dfx based cards and came across a guy's page that said he is working on a full OpenGL driver for 3Dfx boards for NT. What does this have to do with Linux? Well, he says that after the NT driver is done, he is going to start work on 3Dfx drivers for Linux and an OpenGL driver for XFree86/3Dfx.

The guy's name is Zanshin and the address of his site is: http://www.planetquake.com/gldojo/

Most of this stuff is in the News Archives section under 4/18/97. Oh yeah, he also mentions hacking SGIQuake to work with Linux, so we may get to see a hardware-accelerated version of Quake for Linux.

...the MindsEye Developers mailing list has moved to . Unsubscribing can be done by sending a body of

         unsubscribe
                
to and a body of
         unsubscribe mindseye@luna.nl
                
to . Other majordomo commands should be sent to majordomo@luna.nl; a body of 'help' gives an overview. Users who are subscribed to the old mindseye@ronix.ptf.hro.nl address do not need to unsubscribe; the old list will be removed shortly. They will get this message twice: once from mindseye@luna.nl and once from mindseye@ronix.ptf.hro.nl. An HTML interface using hypermail is under construction.

Q and A

Q: Forgive what might be a dumb question, but what exactly is meant by "overlays"?

A: Imagine a 24bpp image plane, that can be addressed by 24bpp visuals. Imagine an 8bpp plane in front of the 24bpp image plane, addressed by 8bpp visuals.

One or more of the 8bpp visuals, preferably the default visual, should offer a 'transparent pixel' index. When the 8bpp image plane is painted with the transparent pixel, you can see through to the 24bpp plane. You can call an arrangement like this a 24bpp underlay, or refer to the 8bpp visuals as an overlay.

Strictly, we call this "multiple concurrent visuals with different color depths", but that's rather a mouthful. Hence, as shorthand we refer to it as "24+8" or "overlays", with "24+8" as the preferred description.

From Jeremy Chatfield @ Xi Graphics, Inc.


Musings

Microstation update

      After last month's 3D Modeller update I received email from Mark Hamstra at Bentley Systems, Inc. Mark is the man responsible for the ports of Bentley's MicroStation and Masterpiece products that are available for Linux. I've included his response below. The stuff in italics is what I had originally written:
Thanks for the mention in Gazette #18 --it's kinda fun watching where MicroStation/Linux info pops up. Being the guy that actually did the ports of MicroStation and Masterpiece, I'll lay claim to knowing the most about these products. Unfortunately, you've got a few errors in Gazette #18; allow me to correct them:

Includes programming support with a BASIC language and linkages to various commercial databases such as Oracle and Informix.

Programming support in the current product includes the MicroStation Development Language (C syntax code that compiles to platform-independent byte-code), BASIC, and support for linking MDL with both MDL shared libraries and native code shared libraries (i.e., Linux .so ELF libraries). For a look at the future direction of Bentley and MicroStation, take a look on our web site at the recent announcement by Keith Bentley at the AEC Systems tradeshow of MicroStation/J and our licensing agreement with Javasoft.

Because of the lack of commercial database support for Linux, there are no database linkage facilities in the current Linux port of MicroStation.

This looks like the place to go for a commercial modeller, although I'm not certain if they'll sell their educational products to the general public or not.

Nope, academic-only at this time; although we're collecting requests for commercial licensing (at our normal commercial prices) at http://www.bentley.com/products/change-request.html. The only thing preventing MicroStation from being available commercially for Linux is a lack of adequate expressed interest.

Note that the Linux ports have not been released (to my knowledge; I'm going by what's on the web pages).

The first two of our new Engineering Academic Suites that contain the Linux ports, the Building Engineering and GeoEngineering Suites, have been available in North America since the middle of February. European and worldwide distribution should be underway now too, although it took a little longer. Incidentally, the web pages you list are for our Europe, Middle East, and Africa (ema) division; you probably actually want http://www.bentley.com/academic.

[output formats] Unknown

We output a wide range of formats (and import a wider range than you give us credit for). I always forget just which ones are actually in the product and which are only in my current builds from the most recent source, so I'll just refer you to http://www.bentley.com/products/microstation95 and http://www.bentley.com/products/masterpiece, and note that my copy of MicroStation/Linux currently lists DGN, DWG, DXF, IGES, CGM, SVF, GRD, RIB, VRML, Postscript, HPGL, PCL, TIFF, TGA, BMP, and a couple other raster and animation formats as output options -- and I know I haven't currently got some of our soon-to-be-released translators compiled. Like I said, probably not all of these are in the current Linux port, but it's a simple matter to add whatever's not there to future versions of the Linux products, provided there's enough demand to keep the project going.

I wasn't sure what a few of these formats were, so I wrote Mark back to ask about them. He informed me on the following (which were the ones I had asked specifically about):
  • DGN is MicroStation-native design file format and has its ancestry in the Intergraph IGDS file format.
  • SVF is the Simple Vector Format (see http://www.softsource.com), which works pretty good for web browser plug-ins.
  • GRD is used by our MicroStation Field product.
  • CGM is the Computer Graphics Metafile format, a vendor-independent standard supported in various software packages, browser plug-ins, printers/plotters, etc.
I want to thank Mark for offering updated information so quickly. My information is only as good as what I can find or am fed, and it helps when vendors, developers or end users provide me with useful info like this. Many thanks Mark.

If you've used this product on MS platforms feel free to drop me a line and let me know what you thought of it. I'm always out to support commercial ports of graphics-related products to Linux.


Printing with an Epson Stylus Color 500

      I bought an Epson Stylus Color 500 printer back in December of last year so I could print in color. I had done some research into which printers would be best, based in part on reviews in online PC magazines and also on the support available in the Ghostscript 4.03 package. The Epson Stylus Color 500 was rated very highly by the reviews and I found a web page which provided information on how to configure Ghostscript for use with the printer. I bought the printer and got Ghostscript working in a very marginal way (that is to say, it printed straight text in black and white). But that's as far as it went. I had gotten some minor printing in color done, but nothing very impressive, and most of it was downright bad.
      Earlier this month I was given the opportunity to work on the cover art for an issue of the Linux Journal. A few trial runs were given the preliminary OK, but they were too small: the image needed to be more than twice as big as the original I had created. Also, because the conversion of an image from the monitor's display to printed paper is not straightforward (see the discussion on LPI/DPI elsewhere in this month's column), it became apparent that I needed to print my artwork to sample how it would really look on paper. I had to get my printer configuration working properly.
      Well, it turned out to be easier than I thought. The hardest part is to get Ghostscript compiled properly. The first thing to do is to be sure to read the text files that accompany the source code. There are 3 files to read:
  • make.txt - general compiling and installation instructions
  • drivers.txt - configuration information for support of the various devices you'll need for your system.
  • unix-lpr.txt - help on setting up a print spooler for Unix systems.
The first two are the ones that made the most difference to me. I didn't really use the last one, as my solution isn't very elegant. However, what it lacks in grace it makes up for in simplicity.
      Building the drivers was fairly simple for me: I took most of the defaults, except I added support for the Epson Stylus Color printers. There is a section in make.txt devoted specifically to compiling on Unix systems (search for "How to build Ghostscript from source (Unix version)" in that file). In most cases you'll just be able to type "make" after linking the correct compiler-specific makefile to makefile. However, I needed to configure in the Epson printers first.
      What I did was to edit the unix-gcc.mak file to change one line. The line that begins
      DEVICE_DEVS=
was modified to add
      stcolor.dev
right after the equal sign. I also didn't need support for any of the HP DeskJet (DEVICE_DEVS3 and DEVICE_DEVS4) or Bubble Jet (DEVICE_DEVS6) devices so I commented out those lines. Now, once this file had been linked to makefile I could just run
      make
      make install
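For illustration, the relevant lines of unix-gcc.mak would have ended up looking something like the following. The exact device lists in the stock file differ between Ghostscript releases, so treat this as a sketch rather than a copy of the author's file:

```
# stcolor.dev added right after the equal sign (x11.dev, the X display
# device, is assumed to have been there already):
DEVICE_DEVS=stcolor.dev x11.dev

# DeskJet and Bubble Jet device lines commented out, as described:
#DEVICE_DEVS3=deskjet.dev djet500.dev
#DEVICE_DEVS4=cdeskjet.dev
#DEVICE_DEVS6=bj10e.dev bj200.dev
```

Leaving unneeded drivers out keeps the gs binary smaller and the build faster.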

At this point the Ghostscript package was ready for use. Note that many of the current distributions already include Ghostscript, but may not have the 4.03 release. Run
      gs -v
to find out if you have Ghostscript 4.03. You'll need it to work with the Epson Stylus Color 500.
      Now I needed to set up my print spooler. This turned out to be rather easy. First, you need to know that the stcolor driver (which is the name of the driver Ghostscript uses to talk to Epson Stylus printers) has a pre-built Postscript file that is used to prepare the printer for printing. This file, called stcolor.ps, is included with the 4.03 distribution. The file contains special commands that are interpreted by the printer, however it does not actually cause anything to be printed.

When you want to print something you need to first print this file followed by the file or files you want to print. Don't worry about how to do this just yet - I have a set of scripts to make this easier.
      There were a number of options I could use with Ghostscript for my printer, but I found I only needed to work with one: display resolution or Dots Per Inch (DPI). In order to handle the two resolutions I simply created two scripts which could be used as input filters for lpr (the print spooler). The scripts are almost exactly the same, except one is called stcolor and one is called stcolor-high, the latter being for the higher resolution. Both of these were installed under /var/spool/lpd/lp and given execute permissions.
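A minimal filter in that spirit might look like the following. Everything here is my assumption rather than the author's actual script: the stcolor.ps path, the 360 DPI figure for the high-resolution variant, and the file name. lpd hands the job to the filter on stdin and expects printer data on stdout.

```shell
# Write the hypothetical filter to /tmp for inspection; installed for
# real it would live in /var/spool/lpd/lp/stcolor-high, mode 755.
cat > /tmp/stcolor-high <<'EOF'
#!/bin/sh
# Prepend Epson's stcolor.ps setup file, then render the incoming
# PostScript job with Ghostscript's stcolor driver at 360 DPI,
# writing raw printer data to stdout for lpd.
exec gs -q -dNOPAUSE -dSAFER -sDEVICE=stcolor -r360x360 \
        -sOutputFile=- /usr/local/share/ghostscript/stcolor.ps -
EOF
chmod +x /tmp/stcolor-high
```

The lower-resolution stcolor script would be identical apart from -r180x180.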
      Next came the configuration for lpr. I needed to edit the /etc/printcap file to create entries for the new printer filters. I decided to give the printers different names than the standard, non-filtered printer name. In this way I could print ordinary text files (which I do more than anything else) using the default printer and use the other printer names for various draft or final prints of images, like the cover art.
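An /etc/printcap along those lines might read as follows; the printer names, device, and spool directories are hypothetical stand-ins, not the author's actual entries:

```
# "lp": plain text straight to the printer, no filter.
lp|Epson Stylus Color 500, raw text:\
        :lp=/dev/lp1:sd=/var/spool/lpd/lp:sh:
# "lpps": PostScript through Ghostscript at the lower resolution.
lpps|Epson via Ghostscript, 180 dpi:\
        :lp=/dev/lp1:sd=/var/spool/lpd/lpps:sh:\
        :if=/var/spool/lpd/lp/stcolor:
# "lphigh": the high-resolution variant for final image prints.
lphigh|Epson via Ghostscript, 360 dpi:\
        :lp=/dev/lp1:sd=/var/spool/lpd/lphigh:sh:\
        :if=/var/spool/lpd/lp/stcolor-high:
```

A draft print then goes through the filter with something like `lpr -Plpps file.ps`.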
      Now the system was ready to print my images, but I still needed to do a couple more things. First, I wanted a script to handle printing my images in the most common formats I create. I wrote a script to do this which I named print-tga.sh, and made symbolic links from this file to variations on the name. The script uses the name by which it was invoked to determine which conversions to run before printing the file. It converts the various formats, using the tools in the NetPBM kit, to PostScript files and then prints them to the high-resolution printer set up in the previously mentioned printcap file.
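The dispatch-by-name trick can be sketched like this. The converter choices and printer name are my guesses, not the author's actual print-tga.sh, though the NetPBM tools named do exist:

```shell
# Hypothetical reconstruction: one script, symlinked as print-tga.sh,
# print-gif.sh, and print-tif.sh; $0 selects the NetPBM converter.
cat > /tmp/print-tga.sh <<'EOF'
#!/bin/sh
case `basename "$0"` in
    print-tga.sh) topnm=tgatoppm  ;;
    print-gif.sh) topnm=giftopnm  ;;
    print-tif.sh) topnm=tifftopnm ;;
    *) echo "don't know how to print $1" >&2; exit 1 ;;
esac
# Convert to a portable bitmap, wrap it as PostScript, and send it
# to the high-resolution printcap entry (name assumed here).
$topnm "$1" | pnmtops | lpr -Plphigh
EOF
chmod +x /tmp/print-tga.sh
```

Adding another format is then just another case branch plus a symlink.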
      Once I got all this done I was able to print full page images on high-gloss paper. They come out beautifully. The images I created for the cover art are far bigger than the paper, so Ghostscript resizes them to fit. It wasn't until I got this working that I realized just how good Ghostscript is. Or just how good the Epson Stylus Color 500 is.
      As a side bonus, I also discovered that I could now print pages from my Netscape browser to my printer. I configured the print command in the Print dialog to be lpr -Plpps (using the lower-resolution printer from the /etc/printcap file). Since Netscape passes the page to the filter as a PostScript file, there is no need to do any conversions like I do with my images. I now get full-color prints of the pages I wish to save (like SIGGRAPH's registration forms). I can also print directly from Applixware using the same printer configurations; I just had to set the print options to output PostScript, which was simple enough to do.
      There are a number of other settings that can be set using the filters. If you are interested in using these you should consult the devices.txt file for information on the stcolor driver. There are probably some better settings than what I'm using for other types of printing needs.
      Well, that's about it. I hope this was of some use to you. I was really thankful when I got it working. My setup is probably not exactly like anyone else's, but if you have the Epson Stylus Color 500 you should be able to get similar results. Don't forget: if you plan on printing high-resolution images at 360 DPI (as opposed to the 180 DPI the printer also supports) then you'll probably want to print on high-gloss paper. This paper can be rather expensive; the high-gloss paper Epson sells specifically for this printer is about $36US for 15 sheets. Also, I should note that I recently heard Epson now has a model 600 that is to replace the model 500 as their entry-level color printer. I haven't heard whether the 600 will work with the stcolor driver in Ghostscript, so you may want to contact the driver's author (who is listed in the devices.txt file, along with a web site for more info) if you plan on getting the model 600.

Resources

The following links are just starting points for finding more information about computer graphics and multimedia in general for Linux systems. If you have some application specific information for me, I'll add them to my other pages or you can contact the maintainer of some other web site. I'll consider adding other general references here, but application or site specific information needs to go into one of the following general references and not listed here.

Linux Graphics mini-Howto
Unix Graphics Utilities
Linux Multimedia Page

Some of the Mailing Lists and Newsgroups I keep an eye on and where I get alot of the information in this column:

The Gimp User and Gimp Developer Mailing Lists.
The IRTC-L discussion list
comp.graphics.rendering.raytracing
comp.graphics.rendering.renderman
comp.graphics.api.opengl
comp.os.linux.announce

Future Directions

Next month:
I have no idea. I have a ton of things that need doing, but I just haven't had time to figure out what I *should* do. I still have part 3 of the BMRT series to do, which I plan on doing as part of the process of creating an animation. The animation is another topic I'd like to do. I've also had requests for a number of other topics. One good one was to cover the various Image Libraries that are available (libgr or its individual components, for example). I have a review of Image Alchemy to do (long ago promised and still not done *sigh*). Well, at least I'll never be short a topic.


Copyright © 1997, Michael J. Hammel
Published in Issue 19 of the Linux Gazette, July 1997






Musings

© 1997


    Raster images are always discussed in terms of pixels, the number of dots on the screen that make up the height and width of the image. As long as the image remains on the computer there is no need to worry about converting this to some other resolution using some other terminology. Pixel dimensions work well for Web pages, for example.
    The reality is that many images aren't very useful if they remain on the computer. Their real usefulness lies in their transfer to film, video tape or printed paper such as magazines or posters. The trouble with this is that printing an image is much different than simply viewing it on the screen. There are problems related to color conversions (RGB to CMYK), for example. We'll have to deal with that some other time (like when I learn something about it). Printing also requires a different set of dimensions because of the way they work. Printed images are handled by the number of Dots Per Inch that the printer can handle. In order to get the image to look the way you want it on the printer, you'll need to understand how printers work.
First, some background information: LPI comes from the world of photography while DPI comes from the world of design. Whether it makes sense to speak of DPI resolution for a raster image depends on what you'll be using that image for. Most magazines, such as Time, are printed with 153 LPI or less. Newspapers such as the Wall Street Journal are printed at 45-120 LPI.
    Halftoning masks are the patterns used to create the shades of color or levels of gray seen in the lines per inch on the printed media. Most masks are square. Let's say you have a printer which can do 300 DPI, that is, it can print 300 dots in an inch. If the halftoning mask is 4 pixels wide, then you'll have 300/4 = 75 lines per inch (LPI) for the halftones. That is the effective resolution of the device, since you are interested in nice shaded printouts and not in single bilevel dots.
    An ultra-expensive 1200 DPI typesetter will be able to do 300 LPI if you use 4-pixel wide halftone masks. Of course, the larger the halftone size, the more shades you'll get, but the lower the effective resolution will be.
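The arithmetic above is simple enough to sketch in a few lines of Python (the function names here are my own, just for illustration):

```python
def effective_lpi(printer_dpi, mask_size):
    """Effective halftone resolution: lines per inch of shaded output,
    given a square halftone mask that is mask_size dots wide."""
    return printer_dpi / mask_size

def shades_per_mask(mask_size):
    """A square mask of N x N bilevel dots can render N*N + 1 levels,
    from no dots inked up to all dots inked."""
    return mask_size * mask_size + 1

# The examples from the text:
print(effective_lpi(300, 4))   # 75.0 LPI on a 300 DPI printer
print(effective_lpi(1200, 4))  # 300.0 LPI on a 1200 DPI typesetter
print(shades_per_mask(4))      # 17 shades from a 4x4 mask
```

This also shows the trade-off in the paragraph above: a larger mask gives more shades but a lower effective LPI.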
    If you are only going to display an image on your screen, then perhaps speaking of DPIs in the image is pointless. You'll be mapping one pixel in the image to one pixel on your display, so how physically big the image looks will depend only on the size of your monitor. This makes sense; when you create images for display on a monitor, you usually only think in terms of available screen space (in pixels), not about final physical displayed size. For example, when you create a web page you try to make your images so that they'll fit in the browser's window, regardless of the size of your monitor.
    The story is a bit different when you are creating images for output on a hardcopy device. You see, sheets of paper have definite physical sizes and people do care about them. That's why everyone tries to print Letter-sized documents on A4 paper and vice-versa.
    The simplest thing to do is to just create images considering the physical output resolution of your printer. Let's say you have a 300 DPI printer and you create an image which is 900 pixels wide. If you map one image pixel to one device pixel (or dot), you'll get a 3-inch wide image:

900 pixels in image / 300 dots per inch for printing = 3 inches of image.

    That sucks, because most likely your printer uses bilevel dots and you'll get very ugly results if you print a photograph with one image pixel mapped to one device pixel. You can get only so many color combinations for a single dot on your printer --- if it uses three inks, Cyan/Magenta/Yellow (CMY) and if it uses bilevel dots (spit ink or do not spit ink, and that's it), you'll only be able to get a maximum of 2*2*2 = 8 colors on that printer. Obviously 8 colors is not enough for a photograph.
    So you decide to do the Right Thing and use halftoning. A halftone block is usually a small square of pixels which sets different dot patterns depending on which shade you want to create. Let's say you use 4-pixel square halftones like in the previous paragraphs. If you map one image pixel to one halftone block, then your printed image will be four times as large as if you had simply mapped one image pixel to one printer dot.
    A good rule of thumb for deciding at what size to create images is the following. Take the number of lines per inch (LPI) that your printer or printing software will use, that is, the number of halftone blocks per inch that it will use, and multiply that by 2. Use that as the number of dots per inch (DPI) for your image.
    Say you have a 600 DPI color printer that uses 4-pixel halftone blocks. That is, it will use 600/4 = 150 LPI. You should then create your images at 150*2 = 300 DPI. So, if you want an image to be 5 inches wide, you'll have to make it 300*5 = 1500 pixels wide. Your printing software should take all that into account to create the proper halftoning mask. For example, when you use PostScript, you can tell the interpreter to use a certain halftone size and it will convert images appropriately. However, most Linux software doesn't do this yet. If you need to create an image destined for print, you should check with the printer to get either the LPI, or the DPI plus the halftone mask size, that will be used. You can then compute the number of pixels you'll need in your image.
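The rule of thumb from the last two paragraphs can be expressed as a small Python sketch (the names are mine, not from any printing library):

```python
def image_dpi_for_print(printer_dpi, mask_size):
    """Rule of thumb from the text: create the image at 2 x LPI,
    where LPI = printer DPI / halftone mask size."""
    lpi = printer_dpi / mask_size
    return 2 * lpi

def pixels_needed(width_inches, printer_dpi, mask_size):
    """Pixel width needed for a print of the given physical width."""
    return int(width_inches * image_dpi_for_print(printer_dpi, mask_size))

# The article's example: 600 DPI printer, 4-dot halftone blocks,
# a 5-inch-wide image -> 150 LPI, 300 image DPI, 1500 pixels.
print(pixels_needed(5, 600, 4))  # 1500
```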
    The story is very different if you do not use regular halftoning masks. If you use a stochastic (based on randomness) dithering technique, like Floyd-Steinberg dithering, then it may be a good idea to design images with the same resolution as the physical (DPI) resolution on your output device. Stochastic screening is based on distributing the dithering error over all the image pixels, so you (usually) get output without ugly Moire patterns and such. Then again, using the same physical resolution as your output device can result in really big images (in number of bytes), so you may want to use a lower resolution. Since the dithering is more or less random, most people won't notice the difference.
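For the curious, here is a minimal sketch of Floyd-Steinberg dithering on a grayscale image, just to show how the error-diffusion idea works; it's an illustration, not production code:

```python
def floyd_steinberg(pixels, width, height):
    """Dither a grayscale image (values 0-255, row-major list) to
    bilevel (0 or 255), diffusing each pixel's quantization error to
    its neighbours with the classic 7/16, 3/16, 5/16, 1/16 weights."""
    img = [float(p) for p in pixels]
    for y in range(height):
        for x in range(width):
            old = img[y * width + x]
            new = 255.0 if old >= 128 else 0.0
            img[y * width + x] = new
            err = old - new
            if x + 1 < width:                       # pixel to the right
                img[y * width + x + 1] += err * 7 / 16
            if y + 1 < height:                      # pixels in the next row
                if x > 0:
                    img[(y + 1) * width + x - 1] += err * 3 / 16
                img[(y + 1) * width + x] += err * 5 / 16
                if x + 1 < width:
                    img[(y + 1) * width + x + 1] += err * 1 / 16
    return [int(p) for p in img]

# A flat 50% gray patch dithers to a mix of black and white dots.
print(floyd_steinberg([128] * 16, 4, 4))
```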

    My thanks to Federico Mena Quintero for the majority of this discussion. He summarized the discussion for the GIMP Developers Mailing List quite some time back. Fortunately, I happened to hang onto his posting.


How many frames make a movie?


      The following comes from Larry Gritz in response to a question I posed to him regarding something I noticed while framing through my copy of Toy Story one day. I thought his explanation was so good it deserved a spot in the Muse. So here it is.

BTW: I noticed, as I framed through various scenes, that I had 4 frames of movement and one frame of "fill" (exactly the same as the previous frame). Standard video is 30 frames/sec and I've read that 15 or 10 animated frames per second are acceptable for film, but that this requires some fill frames. Let's see, if you did 15 frames per second you could actually render 12 frames with 3 fill frames. Is this about right?

      No, we render and record film at a full 24 frames a second. We do not "render on two's", as many stop motion animators do.
      When 24 fps film is converted to video, something called 3:2 pulldown is done. Video is 30 frames, but actually 60 fields per second -- alternating even and odd scanlines. The 3:2 pulldown process records one frame for three fields of video, then the next frame for 2 fields of video. So you get something like this:

video frame   video field   film frame
     1         1 (even)         1
     1         2 (odd)          1
     2         1 (even)         1
     2         2 (odd)          2
     3         1 (even)         2
     3         2 (odd)          3
     4         1 (even)         3
     4         2 (odd)          3
     5         1 (even)         4
     5         2 (odd)          4
So every 4 film frames get expanded into 5 video frames, and hey, 30/24 == 5/4 ! This is how all films are transferred to video in a way that doesn't mess up the original timing.
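The 3:2 pattern in the table can be generated mechanically; here's a small Python sketch (the function name is mine) that reproduces the mapping above:

```python
def pulldown_3_2(film_frames):
    """Map film frames to video (frame, field) pairs using 3:2 pulldown:
    film frames alternately occupy 3 video fields, then 2."""
    fields = []
    for i in range(film_frames):
        repeat = 3 if i % 2 == 0 else 2
        fields.extend([i + 1] * repeat)
    # Group consecutive fields into video frames, 2 fields per frame.
    mapping = []
    for n, film in enumerate(fields):
        video_frame = n // 2 + 1
        field = n % 2 + 1
        mapping.append((video_frame, field, film))
    return mapping

# 4 film frames expand into 10 fields = 5 video frames, as in the table.
for row in pulldown_3_2(4):
    print(row)
```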
      Your video probably only shows the first field when you're paused, which makes it look like 1 in 5 frames is doubled, but it's actually just a bit more complicated than that.


"Linux Gazette...making Linux just a little more fun!"


Intranet Hallways Systems Based on Linux

By Justin Seiferth,


[Update 27-Dec-1999: Author's e-mail address. -Ed.]
Using Linux: An Intranet Hallways System
Like many of you, I like to use Unix, especially Linux, whenever and wherever it seems to be the best fit for the job. This means I have to work fast and be creative, making opportunities whenever and wherever I can. I had just such an opportunity recently when I put together a system that allows my workplace to publish the common file-sharing areas of its Microsoft Windows NT based desktops. I thought others might be interested in this system, so I created a distribution you can use to build your own Intranet Hallways system, or as the popular press would put it, an "enterprise information warehouse".  Don't let on how easy it is and you'll be able to make a bundle reselling the system.  Here's what you need to do to make it happen: