Editor: Michael Orr
Technical Editor: Heather Stern
Senior Contributing Editor: Jim Dennis
Contributing Editors: Michael "Alex" Williams, Don Marti, Ben Okopnik
TWDT 1 (gzipped text file) and TWDT 2 (HTML file) are files containing the entire issue: one in text format, one in HTML. They are provided strictly as a way to save the contents as one file for later printing in the format of your choice; there is no guarantee of working links in the HTML version.
This page maintained by the Editor of Linux Gazette, Copyright © 1996-2000 Specialized Systems Consultants, Inc.
The Mailbag!
Contents:
These questions have been selected from among the hundreds the Gazette receives each month. Article submissions on these topics will be eagerly accepted at , and posted in the next issue.
Answers to these questions should be sent directly to the e-mail address of the inquirer with or without a copy to gazette@ssc.com. Answers that are copied to LG will be printed in the next issue -- in the Tips column if simple, the Answer Gang if more complex and detailed.
Before asking a question, please check the Linux Gazette FAQ to see if it has been answered there. The AnswerGuy "past answers index" may also be helpful (if a bit dusty).
Hi!
Nice article on window managers, although it would have been nice to see a few more tried out.
There are many people out there who have their favorite window manager and won't try others. Sort of like the which-editor-is-better (vi versus emacs) battle. But what might actually be interesting, for the X% (90+?) of us who don't have any strong feelings about window managers (or editors, for that matter), is if some kind of "popularity contest" could be run. We log in, and instead of calling twm, fvwm, ... in some RC file, we make a call to wm, which picks one of the N window managers on our system at random. When we go to log out, or to force a change of window manager, we are asked our feelings on how the session went, and then this report (along with a list of what window managers are on our system) goes someplace for compilation. Maybe even a list of what programs were launched during the session (and whether from the command line or via the window manager/icon/however you want to describe it). People who are interested in window managers could get feedback from the guinea pigs (so to speak) as to how useful their wm was. I'm sure some interesting statistics might show up. It might turn out that ZWM was most liked by left-handed Carpathians who do GIS work.
Sorry, I don't want to write any code to do this. I've got enough things to do.
Gord
Well, there is always Debian's statistics package so you can let them know what you liked - perhaps someone will do something similar for X. -- Heather
Is there a place on the Internet where I can pick up some documentation on the latest version of xdm (the one released with XFree86 4.0.1)? There's some stuff in the Xresources and config files that just isn't mentioned in the man page.
I've looked everywhere I can think of, hit every search engine I know of, and even gotten flamed as a lamer on linux IRC channels trying to find this information.
Your help would really be appreciated.
Doug
And speaking of X... Readers! This is your chance to Make Linux A Little More Fun -- namely, to adopt a man page today, starting with most of the XFree86 Project... or, some articles on XFree86 4.0 would be cheerfully accepted here, too. -- Heather
Saying 'thank you' seems passé these days, but I feel I have to do it anyway.
I'd been trying to do this for about a week until I eventually found [at a gazette mirror] ../issue52/tag/14.html
Thanks, Much obliged, Padraig
FYI, re your 'and NSGINA' article, there is a 95/98/NT/2000 GINA solution now provided by Digital Privacy, Inc.
Cheers,
Tue, 05 Sep 2000 10:55:40 +0200
From: Henk Langeveld <>
Subject: gazette navigation
I've been browsing the gazette several times, but I find the navigation buttons rather awkward. The small size of the [Next] and [Previous] icons makes it difficult to get a quick impression of an issue. I think the last time I checked the Gazette before today was at least 6 months back.
Please consider making these two buttons at least as large as the 'FAQ' icon, and you may have at least one more regular reader.
Note: I'm reading this on a 20" screen - the buttons are about 5x10 mm for me, while the Netscape buttons at the top of this window are approx 1/2 inch on each side; that's about three times the area.
Tue, 5 Sep 2000 12:23:41 -0400
From: Linux Gazette <>
Subject: Monthly FAQ roundup
Here are the answers to this month's FAQ questions sent to LG:
Wed, 13 Sep 2000 18:31:34 -0500
From: eL JoRgItO <>
Subject: Escogiendo un Administrador de Ventanas
hola, una critica, :(
constructiva espero ;)
Es sobre el articulo "Escogiendo un Administrador de Ventanas...", no conozco muy bien vuestra publicacion y tal, pero me parece un poco flojo, el autor solo conoce generalidades de cada gestor de ventanas y enumera algunos reconociendo que ni siquiera los ha probado... Me parece muy mal, pues creo que lo minimo antes de escribir sobre algo es probar todo lo que por lo menos conozcas y nunca hablar de oidas. Yo sin ser nadie del otro mundo he probado esos gestores y ochocientos mil mas y solo por curiosidad, sin el proposito de escribir un articulo sobre ello.
na mas, el resto de articulos que he visto tenian mejor pinta, y por eso os comento este. gracias.
saludos de un cachupin desde la peninsula :)
Hello, here is a criticism :( a constructive one, I hope ;)
It is related to the article "Choosing your Window Manager...." I don't know your magazine very well, but the author of this article seems a bit lousy to me. He only knows generalities of each window manager, names a few, and even acknowledges that he hasn't tried them...
It looks bad to me, since I believe that the least someone has to do before writing about something is try out everything you know, and never talk from hearsay.
Me, being nobody special have tried those window managers and hundred other ones just out of curiosity and without the specific purpose of writing an article about it.
That's it. The rest of the articles I have seen look much better and that's why I comment on this specific one.
Greetings from a "cachupin" from the peninsula :)
[Translator's note: cachupin refers to people native to Spain. Peninsula refers to the Iberian Peninsula; that is, Spain]
The article may be weak on details, but it does contain some information some readers might need. We have all levels of readers, from newbies to experienced sysadmins/programmers, so we try to provide a wide variety of articles.
If you have anything you wish to write for Linux Gazette in English, send it to if it's article length, or as a 2-Cent Tip if it's shorter. If you prefer to write it in Spanish, send it to and Felipe will translate it.
-Mike
Aunque ese artículo sea quizás un poco débil de detalles, contiene algo de información de interés a unos usuarios. Hay cada nivel de lectores, tanto usuarios nuevos como administradores de sistemas y programadores experimentados. Pues, tratamos de publicar artículos a cada nivel.
Si Vd quiere escribir algo por Linux Gazette en inglés, envíelo a si es largo de artículo, o a como Consejo de 2 Centavos si es más corto. Si Vd prefiere escribirlo en español, envíelo a y Felipe lo traducirá.
Readers, is there anyone else willing to translate articles from other languages?
Mon, 18 Sep 2000 10:54:29 -0400
From: Aurelio Martinez Dalis <>
Subject: Information required
My name is Aurelio Martinez. I am a Linux beginner and I do not have Internet access (WWW). Is it possible to receive Linux Gazette by e-mail in text file format? Thanks.
[Not currently. The files would be too big (2-4 MB, and many mail systems reject mail over 1 MB). In the future we may be able to offer this, especially since it's such a highly-requested feature. -Mike]
Contents:
Submitters, send your News Bytes items in PLAIN TEXT format. Other formats may be rejected without reading. You have been warned! Send a one- or two-paragraph summary plus URL rather than an entire press release.
The October issue of Linux Journal is on newsstands now. This issue focuses on Security. Click here to view the table of contents, or here to subscribe.
All articles through December 1999 are available for public reading at http://www.linuxjournal.com/lg-issues/mags.html. Recent articles are available on-line for subscribers only at http://interactive.linuxjournal.com/.
KDE 2 will be in Debian
Comparison of Linux distributions. (LinuxNewbie.org)
The Duke of URL has new reviews of several distributions. See the links below in the "Linux Links" section of General News.
Listing courtesy Linux Journal. Red indicates shows LJ staff will be attending.
September 26-28, 2000: Atlanta, GA. www.key3media.com/linuxbizexpo
October 3, 2000: Burlington, MA, and October 5, 2000: Vienna, VA. www.computerio.com/news/seminar.html
October 10-14, 2000: Atlanta, GA. www.linuxshowcase.org
Building Dependability in Global Infrastructures (Host of the Real-Time and Embedded Systems Forum). October 23-24, 2000: Washington, DC. www.opengroup.org, www.opengroup.org/rtforum
Embedded Linux Expo & Conference. October 27, 2000: Westborough, MA. www.rtcgroup.com/elinuxexpo/index2.html
ISPCON. November 8-10, 2000: San Jose, CA. www.ispcon.com
November 13-17, 2000: Las Vegas, NV. www.key3media.com/linuxbizexpo
USENIX Winter - LISA 2000. December 3-8, 2000: New Orleans, LA. www.usenix.org
Pluto Meeting 2000. December 9-11, 2000: Terni, Italy. meeting.pluto.linux.it
LinuxWorld Conference & Expo. January 30 - February 2, 2001: New York, NY. www.linuxworldexpo.com
ISPCON. February 5-8, 2001: Toronto, Canada. events.internet.com
Internet World Spring. March 12-16, 2001: Los Angeles, CA. events.internet.com
Game Developers Conference. March 20-24, 2001: San Jose, CA. www.cgdc.com
CeBit. March 22-28, 2001: Hannover, Germany. www.cebit.de
Linux Business Expo. April 2-5, 2001: Chicago, IL. www.linuxbusinessexpo.com
Strictly e-Business Solutions Expo. May 23-24, 2001: location unknown at present. www.stricltyebusinessexpo.com
USENIX Annual Technical Conference. June 25-30, 2001: Boston, MA. www.usenix.org
PC Expo. June 26-29, 2001: New York, NY. www.pcexpo.com
Internet World. July 10-12, 2001: Chicago, IL. events.internet.com
O'Reilly Open Source Convention. July 23-26, 2001: San Diego, CA. conferences.oreilly.com
LinuxWorld Conference & Expo. August 10-14, 2001: New York, NY. www.linuxworldexpo.com
Linux Lunacy (Co-Produced by Linux Journal and Geek Cruises). October 21-28, 2001: Eastern Caribbean. www.geekcruises.com
Speaking about a kernel patch, Linus said: "Thanks, and THIS time it really is fixed. I mean, how many times can we get it wrong? At some point, we just have to run out of really bad ideas." (Linux Weekly News)
MOUNTAIN VIEW, Calif. (Sept. 12, 2000)--SGI Education Services announced the availability of its new "eLearning for Linux" program, a suite of Linux courses delivered over the Internet. This allows users to study Linux operating system fundamentals in a way that is flexible, self-paced and available on any desktop.
The courses, based on Linux Professional Institute (LPI) exam objectives, include "Introduction to Linux," "Linux System Administration I," "Linux System Administration II" and "Linux Network Administration."
http://www.sgi.com/support/custeducation/
San Jose, CA--Yggdrasil Computing has shipped the world's first Linux DVD9-ROM, a successor format to CD-ROMs with over twelve times the capacity. Linux DVD Archives (MSRP $24.95) contains over eight billion bytes of open source software (over 23 billion bytes uncompressed), giving Linux users a new level of convenience and access to open source software.
Linux DVD Archives is the first DVD-ROM made from dvdtape, a program released under the GNU General Public License by Yggdrasil. Because of the high level of technology risk inherent in building the first DVD-9, and building it from a new development system, we chose to begin with a very simple product. A user cannot install Linux from Linux DVD Archives, so it is only useful as an accessory for current Linux users. What the product does provide is the largest collection of software from the metalab.unc.edu and ftp.gnu.org archives ever assembled on a single mass produced medium.
DVD-9 is the state of the art in disc manufacturing, requiring equipment that can bond together layers with two different metals: the conventional aluminum used on CD's and single-layer DVD's, with a layer of gold, giving the discs their distinctive look: silvery on top and gold underneath. Although the manufacturing process may be more exotic than with smaller DVD's, Linux DVD Archives should be compatible with all DVD-ROM drives.
There have been other DVD-ROMs produced for Linux, but these have been "DVD-5" discs, which consist of a single aluminum layer like a conventional CD and have only 55% of the capacity available per side with DVD technology. Yggdrasil's DVD-9 product enables use of the full capacity. The bigger difference, in terms of which event will improve products available to end users, is that Yggdrasil has released its internally developed software for making DVDs under the GNU General Public License, eliminating an expensive proprietary barrier to DVD production throughout the Linux industry, an act which will likely presage more widespread development of DVD-ROMs.
Linux DVD Archives Product Information
How we made a Linux DVD-9 Archive
[The first document cited contains an interesting quote: "...you must be running Linux kernel 2.2.14, 2.3.28 or later in order to access files located more than four gigabytes into the DVD." -Mike.]
The Good Morning Server has its Linux operating system and 20 applications embedded on a flash memory card. This separates the OS from the hard drive, one of the most failure-prone components of any computer.
Five products are currently offered, each fulfilling a different deployment niche: general Internet server, DNS server only, mail server only (includes mailing lists and virtual accounts), e-commerce server, BBS server. Additional products are coming. Each product is designed for maximum security; e.g., unnecessary services have been eliminated. Configuration is via a web interface or telnet.
To upgrade the system, merely change the flash card and reboot, or download a patch file. This upgrade method is patent-pending.
The Good Morning Server is made by Duli Network Corporation, Ltd. in Seoul, South Korea.
http://www.duli.net (Korean language)
September 7, 2000 - DENVER - Jabber.com Inc., a subsidiary of Webb Interactive Services Inc., today announced the addition of two new Technical Advisory Board members - Michael Tiemann, current CTO of Red Hat, and Dr. David P. Reed, former vice president of research and development and chief scientist for Lotus Development Corporation and one of the early members of the committee that defined the Internet protocol suite TCP/IP.
Meanwhile, downloads of the Jabber server have reached the 10,000 mark.
Note that Jabber itself (an XML-based instant-messaging program that can intercommunicate with proprietary systems, in case you've been asleep), is open source, and its development happens at Jabber.org. Jabber.com provides commercial support for it, and JabberCentral provides end-user support.
Jabber is now integrated with Open3's e-business integration platform. This XML-based, open-source platform helps companies move from paper-oriented office procedures to digital.
Do you know all the services (=daemons) your Linux box is running? If not, you'd better find out now and turn off the ones you don't want. Every extra service gives the script kiddies another opening to try to sabotage or commandeer your computer. OSOpinion contributor Joeri Sebrechts rakes the Linux distributions over the coals for shipping default installations that leave optional services on and for not setting the default access policy to "DENY". This, he argues, is a ticking time bomb for users who just migrated from Windows and don't know that these services--which are running now on their computer--even exist.
"You left them alone in a room with a penguin?! Mr Gates, your men are already dead."
-Seen on a Slashdot posting by Tough Love.
Duke of URL articles:
Anchordesk UK (ZDnet) articles:
The differences between the various BSDs. (ZDnet Inter@ctive Week)
Mojolin.com is a free Linux job site.
Linux2order.com offers over 5,000 pieces of Linux software in its web site, which can be downloaded for free or ordered on a custom CD-ROM for US$12.95 plus shipping. The site also includes software reviews. The company plans to offer at least 2,000 additional titles by the year's end.
A review of the Matrox Millennium G450 Under Linux (Slashdot)
KANATA, Canada, September 7, 2000 - CRYPTOCard Corp. has launched CRYPTOAdmin 5.0 for protecting web sites, email and remote access. CRYPTOAdmin 5.0 protects Apache, iPlanet and Microsoft IIS Web servers from unauthorized access - right down to the page level.
CRYPTOAdmin 5.0 with WEBGuard ensures access to protected web pages is only permitted with the correct one-time password generated from a CRYPTOCard hardware or software token. Web servers communicate with CRYPTOAdmin, enabling ASP (Active Server Page) or JSP (Java Server Page) security. WEBGuard offers seamless and transparent integration without the need for browser configuration, plug-ins or additional software.
CRYPTOAdmin 5.0 provides strong user authentication in the Linux environment. Used in conjunction with freely available facilities such as PAM and Kerberos, CRYPTOAdmin enhances the stability, versatility and networkability of Linux. A new Graphical User Interface (GUI) makes administration an easy and welcome task for Red Hat, SuSE and Caldera network administrators.
CRYPTOAdmin 5.0 server runs on Windows NT, Windows 2000, Linux and Solaris.
CRYPTOCard's Authentication Server Software license is $7,495 compared with RSA's server license of $57,512 - a savings of $50K. And, unlike RSA, CRYPTOCard's tokens are purchased only once, are not time limited, and have replaceable batteries.
SimCity 3000 Unlimited for Linux is now in production. The first copies will roll off the assembly lines late next week for shipment to our online store and other distributors. [demo]
Version 428 of Unreal Tournament for Linux is now available. [README and download locations]
New and improved FAQs on Loki games.
Descent3 reference card
Loki will partner with Timegate Studios to bring Kohan: Immortal Sovereigns to Linux. Kohan will be the first title of the immensely popular real-time strategy genre to be commercially available for Linux. The Linux version of this masterpiece will be released near-simultaneously with the Windows version in Q1 2001.
VANCOUVER, British Columbia - Friday, September 15, 2000 - It's fragtime for the Dust Puppy. Today kicks off the UserFriendly.org and Loki Software Quake III Arena Contest.
Contestants will create and modify "skins" and levels based on the cast of characters from the UserFriendly.org episodic comic strip for use with Quake III Arena, the blockbuster 3D, first-person perspective, shooter video game developed by id Software. Skins dictate the appearance of the player within the game environment, while the levels define the appearance and layout of the space used by the battling players inside the game. "I am categorically terrified by what the UserFriendly.org community might come up with," explains J.D. "Illiad" Frazer, the comic strip's creator and Founder of UserFriendly.org. The contest runs until October 11. For contest information, visit www.userfriendly.org/community/quake.
BORG 0.2.90 is a graphics-rendering tool. [news] [download]
Want an alternative to KDE and Gnome? Try XFce, a GTK+-based "light" cousin to Gnome that has fewer features, but therefore uses fewer resources and is faster. It includes a Gnome compatibility module for xfwm (XFce's window manager), enabling you to run the Gnome panel, pager and tasklist integrated with xfwm. There's no such compatibility module for KDE, so you can't run the KDE panel, but KDE applications run fine. Of course, xfwm also has its own panel.... Here's a LinuxOrbit interview (Slashdot). Readers, anybody wanna review it?
Hello everyone, it's the month for trick-or-treating and we have some real treats for you this month.
The feast of All Hallows' Eve is a time when the spirits of the past and the present cross the borders between each other's worlds. One might even say they're passing into a new security context.
As you don your costumes (hey, those devil horns can double for BSDcon this month, October 14 to 20 in Monterey - www.bsdcon.com) and plot what kinds of eye candy to scatter across your web pages, don't forget to consider security.
Now, security is a tricky thing: many people think it just means locking stuff down. But that's not really the case - you also want to continue to provide whatever resources you normally do. Otherwise we'd all lock ourselves in closets with our teddy bears and an IV drip of Jolt cola and call ourselves secure.
It is as important to establish our rights and continued power to do things -- to be secure in our abilities and privilege -- as it is to establish our privacy -- the confidentiality of our data and thoughts, whether we're talking about GPG keys and email, or business plans, or schematics and algorithms. We also need to avoid squelching the abilities of others -- since it's by increasing the products of our community that we grow more capable and self-sustaining. So a real sense of security lies in defining all of the requirements and all the constraints of what we want to make sure to serve as well as what we want to make sure to protect. Otherwise, we may have failed to protect our future, in the name of present security.
Well, that's it for now. Onward to some fun answers from the Gang!
-- Heather Stern
From Maenard Martinez on Mon, 04 Sep 2000
Answered by: Jim Dennis, Heather Stern
i have a dual boot pc (linux and win98). i want to use my modem in linux (which has no driver for linux) so that i can connect to my isp. i tried using gnome, i found out that it detects the com# of the modem but it is id as ms-dos.
[JimD] What does "it is id as ms-dos" mean? Does it mean that GNOME identifies the COM# as "MS-DOS?" Which GNOME utility are you using? GNOME is a suite of utilities and a set of programming libraries and interfaces (and CORBA objects). So there are several different programs that you might be running under GNOME in your attempts to configure this modem.
i used kppp of kde to dial-out. i configured the kppp so that it will use the same com the modem is using in win98. the error message is "modem is busy". how do i configure my modem w/o linux driver?
[JimD] It sounds like you TRIED to use KPPP to dial out. Did it actually dial?
thanks, maenard
[JimD] I suspect that you are talking about an internal/winmodem here. In that case, rip it out, throw it away and buy a real modem. How do you know if it's a "real modem?" Basically the easy answer is: spend a little extra money and get an external modem that plugs into your serial port. Eventually USB might be supported as well. If it's internal then the chances are good that it is a "winmodem" or a "softmodem" --- which are not supported under Linux.
You could always try waiting for the support to become available. There is a project that may eventually support some win/soft-modems (linmodems.org). However, that is likely to take a long time and is likely to require considerable technical expertise for the foreseeable future. It is not a practical alternative for you.
Search our back issues on the term "winmodem" for discussions about why winmodems are not supported by Linux (or any other decent operating system).
[Heather] Hey, don't forget that winmodems are mentioned in the Linux Gazette FAQ
A couple of things I'd look at before ripping it out in disgust:
- run lspci and if it is detected and says it's a winmodem, well, junk it. If you feel like tearing your hair a bit more and it's one of the few supported at linmodems.org, more power to you.
- when you say "same com port as DOS" do you mean com3=ttyS3? If so, you've thought wrong, since in Linux we count from 0 ... the example I gave might be on ttyS2 (because that's the 3rd comport) or ttyS0 (maybe it's the only live one).
- Install wvdial and see if its config script autodetects your modem. It's fairly configurable for strange conditions on the ISP's end of the connection too. (A sample config is sketched below.)
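If wvdial does find it, a minimal /etc/wvdial.conf is only a few lines. This is a sketch; the device, phone number and account details below are made-up placeholders:

[Dialer Defaults]
Modem = /dev/ttyS2
Baud = 115200
Init1 = ATZ
Phone = 5551234
Username = yourlogin
Password = yourpassword

With that in place, running wvdial as root should negotiate the PPP link.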
From Stamatis Sarlis on Mon, 04 Sep 2000
Answered by: Jim Dennis
Dear Answer Guy,
I wonder if Linux (and XFree) supports 2 or more VGA cards in the same PC. If not, is there any commercial XServer that can support more than 1 VGA? Where can I find more information about this issue?
Thank you in advance for your help
The latest version of XFree (version 4.x) supports "Xinerama", which includes multi-headed support. However, I haven't tried that yet. XFree86 version 4.x will probably be included in the next major releases of each mainstream Linux distribution; so look for it in a few months if you don't want to try downloading, building and installing it yourself.
The two major commercial X server packages for Linux support multiple heads as well. I forget which is which but in one of them the multi-headed support is part of the main package, in the other it is a separate version of the package. These two X packages are:
- X-inside: Xig Inc
- http://www.xig.com
- Metro-X:
- http://www.metrolink.com
... note: I think it is the latter of these that offers multi-head support (as well as support for some touch screens and many proprietary laptop chipsets) in their base package. I think that Xig's products are separated among multi-head, laptop and desktop versions.
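If you go the XFree86 4.x route, the multi-head wiring lives in XF86Config-4. A minimal sketch follows; the identifiers here are hypothetical, and each Screen would reference its own Device and Monitor sections:

Section "ServerLayout"
    Identifier "Multihead"
    Screen 0 "Screen0"
    Screen 1 "Screen1" RightOf "Screen0"
EndSection

Starting the server with 'startx -- +xinerama' then asks the two screens to act as one large desktop.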
From Jonathan Hutchins on Mon, 04 Sep 2000
Answered by: Les Catterall, Anthony E . Greene
Hi Jonathan,
You say:
So far, I have yet to figure out a way to implement this kind of feature on Linux workstations. The internal address scheme could probably be handled using Netscape as a mail client and an LDAP server, but I don't know how we would handle the external address book.
The functionality you refer to is of course implemented on mail servers, not upon the workstations (client PCs). For example the SMTP server program Sendmail, has "aliases" which provides the functionality you seek. See the on-line manual entry: "man 5 aliases".
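For instance, a couple of hypothetical /etc/aliases entries (run 'newaliases' after editing so Sendmail rebuilds its database):

# /etc/aliases -- the names here are made up
support: alice, bob
sales: carol, dave@elsewhere.example.com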
Cheers - Les Catterall
Anthony E . Greene suggested:
So far, I have yet to figure out a way to implement this kind of feature on Linux workstations. The internal address scheme could probably be handled using Netscape as a mail client and an LDAP server, but I don't know how we would handle the external address book.
I used LDAP both at home and at my office to create a shared address book. I update the address book using a browser and the "ldap-abook" package. Ldap-abook is a perl CGI script and module that makes it easy to update an LDAP address book.
I use Red Hat Linux 6.2, which comes with OpenLDAP almost ready to run.
The hardest part was exporting the data from whatever format it was in to an LDIF file for import by the LDAP server. After that, I customized the CGI script that came with ldap-abook to improve the appearance of its HTML output. It works just fine.
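As a rough illustration, a single address-book entry in LDIF might look like the following; the DN suffix is hypothetical, and the exact attributes depend on the schema your ldap-abook setup expects:

dn: cn=Jane Doe,ou=addressbook,dc=example,dc=com
objectClass: inetOrgPerson
cn: Jane Doe
sn: Doe
mail: jane@example.com
telephoneNumber: +1 555 0100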
Tony
Jonathan and Les wrote each other again:
I don't think I'd want to maintain a 300-entry "Aliases" database (and that's just my personal address book!),...
But someone's got to maintain the details somewhere?
........ nor does this provide the address look-up capabilities that Outlook and Exchange do together. I can even use my Address Book from the Exchange Server to insert addresses in postal letters in Word, but what I'm really after is the ability that Outlook and Lotus Notes have to automatically look up (and optionally complete) names and/or addresses during message composition (or immediately on send, keeping the message open in the event of a lookup failure).
It seems I misunderstood what you are trying to do. You are looking for something which is tightly integrated with MS applications. Indeed, sendmail e-mail aliases won't help you there.
Anthony Greene has suggested ways to connect Outlook and Netscape/Linux clients to an LDAP database as a partial solution. I'll have to look more carefully at how such an interface manages addresses (things such as adding and updating), but Anthony's pointers may supply at least part of the solution.
Thanks for the ideas,
Jonathan
You're welcome. Good luck with LDAP.
Cheers - Les Catterall
From Lady Wistfulee on Wed, 6 Sep 2000
Answered by: Mike Orr, Heather Stern, Dan Wilder, Don Marti
I know this is a very newbie question, but a link to your URL came up in the HTML Writers' Guild List today & from the looks of your site, you MUST be experts on Linux & I "should" be able to get a straight answer from a "definitive" source.
How the heck does one pronounce "Linux"? I have heard "line-ux" & "len-ux". Which is it??
[Mike Orr] There are at least three common pronunciations, and none are authoritative. Because Linus is an ethnic Swede from Finland, he pronounces it with a short "ee" sound we don't have in English. Some people pronounce it "Line-ix" like the English version of Linus. Others pronounce it "linnix", trying to imitate the Swedish.
(The last vowel, being unstressed, can sound like a short i, short u, or schwa, per the normal rules of English.)
Now that Linus has been in the US a few years, he's starting to adopt an American pronunciation. When I saw him speak at LinuxWorld in '98, he pronounced his name "Line-us" and his OS "linnix". Even when he used both in the same sentence.
However, he has said he doesn't care how people pronounce it. He just wants people to use it.
Mike Orr, Editor, Linux Gazette
Please settle this if you can, I have two "gurus" arguing over it & they have me completely confused.
Thank you.
Céline Kapiolani
"He who asks is a fool for 5 minutes.
He who doesn't ask remains a fool forever."
Please, let us definitively argue over it... This is the point where everyone chimes in.
[Heather] A reasonable case could be made for the 3 following:
- line-ux
- per English standard rules for the spelling.
- lin-ix
- rhymes with "minix" which it was designed to resemble
[Mike Orr] I forgot about this. Yes, that was the original reason Linux was pronounced that way in 1991. Minix was a Unix-like operating system that ran on PCs (XTs in those days), which Linux was based on.
I have long disliked this pronunciation (linnix) because I don't see why Linux should be tied down to the name of an OS that is inferior, not free, and practically ceased to be used by 1993 or so.
[Heather]
- lin-ooks
- rhymes with Linus, its core man but by no means its only programmer, even for the kernel.
There is a definitive soundbite recorded by Linus Torvalds, available from kernel.org: http://www.kernel.org/pub/linux/kernel/SillySounds
It's available in either English or Swedish, and he says "My name is Linus Torvalds, and I pronounce Linux, Linux." Thus it's also definitive for how his own name is pronounced. Some of the major distributions also come with a .wav of the sound, as a sample.
Hope this made Linux a little more fun!
* Heather Stern *
Around the same time...
[Don Marti] When I met him, he introduced himself as LINE-us. And in his talks, he says LIN-ucks (last I heard anyway.)
The "Hello this is LEE-nus Torvalds, and I pronounce LEE-nucks as LEE-nucks" (as heard on the old .au file) does not match current practice.
And...
Dear Mike,
This morning I received an email in response to the email I sent yesterday:
[Dan Wilder] If you can play an .au file: http://www.kernel.org/pub/linux/kernel/SillySounds/english.au has Linus Torvalds himself explaining how to pronounce "linux".
And the answer is, "lee-nooks".
The nearest common English is "li-nucks" with a short "i". It certainly isn't "line-ux" with a long "i" or "len-ux" like the water heater manufacturer, either!
Summing up...
[Mike Orr] I should tread lightly here because Dan's my boss, but this sound was recorded many years ago, around 1992 or 93, and a lot has happened since then. For instance, the entire period of Linux ascendancy, the loud debates about how to pronounce Linux, Linus' reluctance to be the pronunciation police (quite consistent with his pragmatic character), and his move to the US and subsequent inconsistencies in his own pronunciation.
After 1 1/2 years of working at a company that officially pronounces it "linnix", I've been half browbeaten into submission. Now I end up saying "line-ux" and "linnix" inconsistently in the same sentence. For instance, "This month's Linnix Journal has an article about Line-ix sound drivers."
...but she says it best herself.
Now I am still confused...but at least no one truly cares how it is pronounced, so all is well & good.
Thanks for your quick response!
much alohas,
Céline Kapiolani
"You have the right to remain silent. Anything you say will be misquoted, then used against you."
From Gaurav on Fri, 8 Sep 2000
Answered by: Ben Okopnik
On Fri, Sep 08, 2000 at 12:01:38AM +0530, root wrote:
Might want to check out your mail setup as well... See my article on setting up Sendmail under Redhat this coming month (if Mike doesn't shoot it on sight, that is).
I have been trying to get my EPSON Stylus COLOR IIs (TRIGEM) printer working on redhat 6.1 for about three months on and off, "UNSUCCESSFULLY" of course.
The problem is that I want to print using the black ink cartridge, as most of my printing is black-and-white documents.
I have tried various Ghostscript devices and uniprint drivers, read the docs over and over again, posted on the net... and done many unmentionable things, but to no avail.
The unmentionable things are probably the ones worth mentioning, just to avoid repetition.
The closest i have come to getting some sane output is with the following GS options
gs -sDEVICE=stcolor -r360x360 -dnoWeave -descp_Band=1 -sOutputFile=\|lpr fileName.ps
and
gs -sDEVICE=stcolor -sModel=st800 -sOutputFile=\|lpr fileName.ps
They both give me output that is vertically elongated, and the vertical lines are misaligned.
Please Please Please Please Please help me out here, because it is a real bother booting into windows again and again just to get printouts... and I can't buy a new printer just yet.
Gaurav
I have an Epson Stylus 720, myself (can't see where it would be very different). Try installing "magicfilter" and modifying the "gs" lines in "/etc/magicfilter/stylus_color_720_filter" as follows:
/usr/bin/gs -sDEVICE=stcolor -r720 -q -dSAFER -dNOPAUSE -dSpotSize='{2.6 2.4 2.6 2}' -sOutputFile=- -
Remove the resolution ("-rXXX") switch.
That one took a while of experimenting to find (with it in place, "magicfilter" swallowed the output without a trace), but it works fine. I haven't tried just a black cartridge, but if it works with Windows, it should work with Linux.
By the way - as far as I know, the horizontal and the vertical resolution on my printer (and probably on yours) are not the same; I believe the numbers are something like 360x720. This is probably the reason for the "elongation" problem.
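If you want to test that guess directly before setting up magicfilter, a variation on your own command line with the asymmetric geometry (a sketch only; check which resolutions your printer actually supports) would be:

gs -sDEVICE=stcolor -r360x720 -dnoWeave -descp_Band=1 -sOutputFile=\|lpr fileName.ps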
Ben Okopnik
From dwayne.bilka on Sun, 10 Sep 2000
Answered by: Ben Okopnik, Dan Wilder
Can I run xdm in the background without the console switching into the GUI? One of our RedHat 6.0 servers has a SiS620 AGP built into the motherboard, and I have not been able to figure out how to get it to work. Anyways, I don't need a GUI on the server (our servers are in the basement, my office is on the 10th floor), however I do have the occasional need to bring up the GUI. I use X-Win on NT to connect to our Solaris/Linux servers.
[Ben] Try vncserver and svncviewer/xvncviewer instead. You can run the server on your server, and connect to it from your desktop. Clients are available for just about any platform, and you can even log into it from your server machine locally (X Windows without running an X server... the mind boggles). Nice GUI, excellent clients for the Windows world (small enough to carry with you and run directly from a floppy, or you can actually use a browser (!) as a viewer).
The program itself is available under the GPL from the good folx at http://www.uk.research.att.com/vnc - there's also a very good FAQ/troubleshooting guide on the site.
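In rough outline (the display number here is arbitrary), the round trip looks like this:

vncserver :1          # on the Linux box; prompts you to set a password the first time
vncviewer server:1    # on the viewing machine, substituting your server's hostname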
[Dan] Has vnc been maintained recently?
I tried it a couple of years ago and found it terribly buggy. Broken images, crashes, the like. Good enough at the time for no more than extremely casual use.
Watched it for a year or so, no apparent maintenance work being done, gave up on it.
[Ben] The last release for Win clients was 5/26/00; Unix, 2/9/00. I used it in a business environment almost two years ago on a regular basis and found it stable and bug-free.
From Joseph Wilkicki on Wed, 13 Sep 2000
Answered by: Heather Stern
Hi!
I have a question for the Linux Gazette Answer gang, but didn't see an address for submission, so I'll direct it to you.
I'm trying to harden my machine and to that end, I ran Bastille-Linux on my machine when installed, added ssh, and disabled as many services as I can.
When I ran saint and nmap, however I saw I have a few ports open which I don't recognize. They are
listen,
sounds like a verb, not the name of a service
miroconnect,
A brief Google! search implies this may be something to do with a sound card.
and an unknown service running on port 1024.
1024 is in the user-available range ... it is probably the second connection of some other protocol you have running. Try running
netstat -a
on the system's console to see what connections are currently up, and look at what is connecting to it.
Saint didn't seem to think they were a problem, but I didn't explicitly turn them on, so I'm concerned they are a risk.
What are these services, and should I (and how do I) turn them off?
This can't be readily determined until you know what they are; once you do, you can look for the offending service(s) in either your inetd.conf or among your init scripts. lsof (list open files) might also be useful for determining the culprits.
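As a concrete sketch, assuming net-tools' netstat with the -p flag and lsof are both installed (run them as root so other users' processes are named too):

netstat -anp | grep ':1024'    # shows the PID/program behind the socket
lsof -i :1024                  # lists any process holding port 1024 open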
Also, can I secure lpd? I need to print to a local printer, but I don't need to print to network printers.
It's possible albeit unusual to run lpd from inetd - in there, you could protect it with tcpwrappers.
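If you do run it that way, a minimal tcpwrappers sketch might look like the lines below; the service name must match whatever your inetd.conf hands to tcpd:

# /etc/hosts.deny
lpd: ALL
# /etc/hosts.allow
lpd: 127.0.0.1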
Thanks!
Joseph Wilkicki
You're welcome!
From Wilf on Sat, 2 Sep 2000
There've been a number of requests about ISPs offering "free" (but paid by ads) services and accepting rather than rejecting Linux users. If you're in Europe, Wilf's answer may be handy... and if not, his warnings are still worth regarding. -- Heather
Hello LGs,
Users in France may now choose quite a few ISPs which offer 1, 2, 3, ... 12 hours free of charge (that's to say connection, personal web space, phone costs within a local area...).
Alas, some inconveniences are:
- (having to) accept extensive advertising which (once your surfing is analyzed) will be tailored accordingly (someone MUST pay, mustn't s/he?). I skip details on restrictions and services which vary from one ISP to another.
- most of 'em are aimed at (un)lucky Windows (Windoze?) users. Win 3.11 is almost always left out, and Linux even more so.
Exactly why everyone has been asking...
Having called an ISP (Free, to be precise), I received a letter and a CD in a surprisingly short time which included the necessary information (login, password, POP, SMTP, DNS1 and DNS2, News and Email addresses and some example scripts for Linux which I strangely did not need at all) to configure programmes (under Linux, you've guessed it) which worked fine... after two nights spent configuring kppp.
Users in France may check www.free.fr for information regarding the "points d'accès" within their region of residence. They, at Free, do not impose such and such OS: you are free to use Windows and/or Linux or any other OS capable of handling Internet's protocols and some such programmes.
This leads me to furnish a piece of information users of OLITEC modems might find to be just what they needed to get at last connected:
I use (under Red Hat 6.0) an Olitec modem, the Self Memory 56000 V90/K56Flex, which kppp recognized straight away. However, hours and quite a couple of beers later, the connection was still not established, raising not only the phone bill but also resulting in what now appears to be a somewhat bald head, until I checked out the configuration file of kppp (under the influence of suggestions made by a friend who cared to spend the nights in front of my compy, and not, as you might have suspected, under the influence of the brew; honest, we weren't blotto at all). Besides, fancy having a good look at the personal (French) pages he painfully created under Linux, and want to learn (or indulge yourself in reminiscences) about that famous "racing car" Gordini or R8? Check out www.amicale1134.cigale.net, it is for you! Well, on then: in kppp, the option "Periphery / End of Line" is set to "CR" by default, which for the above-named modem needs to be set to "CR/LF". This done, the connection was immediate and I could surf (which I did, in fact)!!!
Worthwhile noting, though: kppp runs without a problem when used as root; other users, however, need to be in a group especially created and given the rights to use kppp! Being new to Linux, I haven't yet figured out how to create a group and get my current "non-root user" into it. Anyone care to help, please?
There is a file called /etc/group which declares what groups exist and which users are members of them. You can also be a member of a group if its number is your gid in the /etc/passwd file (that's your default group). So add your own username to the right group line in /etc/group (multiple users are separated by commas; spaces are not needed).
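For example, a hypothetical line placing wilf in a "dialout" group (the fields are name:password:gid:member-list; the group name and gid here are made up) would read:

dialout:x:20:wilf,otheruser

-- Heather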
Another piece of information regards reducing phone bills: again, users in France should contact France Telecom and subscribe to "Primaliste Internet" at 9,91 FRF (or 1,51 Euro) per month, which gives a discount of 50% on local calls using ONE chosen phone number and made between the (nightly) hours 22:00 and 8:00 (that's 10 p.m. to 8 a.m., if you prefer) at 0,07 cts (or 0,01 Euro) per minute (I guess "Night time is the Right time", but not really an excuse to neglect your Missis!).
Finally, a personal request to those writing articles or documentation for Linux Gazette in English (be that British or American or in general): I have taken up translating some of them (articles, that is) and have run into, not often though, quite some difficulties when it comes to translating (or even guessing) what the author wanted to say. Please be precise! Phrases like "I youz, but it no working, pleaz hep!" don't mean much, particularly when translated. Never mind spelling mistakes (even God left out where/how/why he lives...), but use nouns or name "the things" instead. And, keep your articles coming in! (Hmmm, I hope the editors of LG aren't grumbling so much about this invitation.)
Quite the contraire, mon ami! We love to see new translations. And, while encouraging our readers to clearer sentences, I'd also like to encourage a bit more that helps all of us help you:
Wishing you all, at LG and 'round the globe, good continuation (and "bon apétit / enjoy your meal" in case you're dining...) and, of course: "ETAHI, ERUA, ETORU !!!",
Sincerely Yours, Wilf (as opposed to Howling Wolf, as is ham to bacon)
From hazmouz on Thu, 21 Sep 2000
can you please help me to find the driver for linux (or a linux-like OS) for the SiS 6326 graphics card
thanks a lot
PS: excuse me for this sudden intrusion in your life
...and...
From Cataldo Pellegrino on Mon, 28 Aug 2000
Now as it turns out, none of the usual Gang answered, but it had been discussed aboard my Star Trek shuttle mailing list a few months ago...
Answered by: The Armadillo with the Mask
- Politely:
- It's a real bother to get working under XFree86.
- Frankly:
- [ Hey, we can't print that sort of punctuated language here! not without entities anyway. ]... #$@%!!!!!!!
In fact, if you have this or a derivative card, don't even try. Go out immediately and get XFree86 4.0. This is not a matter of compatibility; it is a matter that it just doesn't work at all under anything but. You need at least 3.3.6 to get even faltering support, let alone anything that actually works.
Seeing as 3.3.6 is still the current version, it's probably still the case. Feel free to build it yourself from the source repository if you feel up to it, but, then you may as well try the new version instead and get real support. -- Heather
So what is it?---A very popular chipset on the low-end $30-50 SVGA card market. I've got one in komodo for precisely that reason. When I first put the machine together, I just needed a card that would let me stand the box up until I could put it on a TTY and decide what I wanted to do for a real card/monitor combo. As such I never tried to bring X up on it. They are in a number of the testbed boxen at [my work] for similar reasons.
I kept running into the same problem. I could run 'startx' by hand, but whenever I ran /etc/init.d/xdm start, 'parse_xf86config' would complain that there was an error in the XF86Config in /etc/X11 and refuse to start xdm. First, I noticed that the variable PROBLEM in /etc/init.d/xdm is automatically set to "yes" with the intent that if the parse_xf86config went well, PROBLEM would get set to null and all would be well.
It turns out this never happens and even when the output of parse_xf86config is clean, PROBLEM doesn't get reset. I was having other problems too. The screen interlaced horribly and occasionally blanked out and then came back. The cursor was a big huge blob. I was really just about to toss the card across the room. A final search on RedHat under "all Linux sites" turned up the answer. If you add the following under XF86Config, it works.
Option "no_bitblt"
Option "no_imageblt"
Option "no_accel"
Option "sw_cursor"
This also fixed the issue I was having with PROBLEM in /etc/init.d/xdm which is the REALLY bizarre part. From all appearances, it looks like parse_xf86config is not only doing a syntactical check on the config file but that it is also checking to see if the options that you are specifying actually WORK on the card or if it's going to cause trouble.
I have the exact same /etc/init.d/xdm script on chameleon, which is also running Debian, and it runs without a hitch (it is, however, using the vastly more ubiquitous C&T 65550 chipset).
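For context, here is a sketch of where those options live in XF86Config; the Identifier string is arbitrary, as long as your Screen section references it:

Section "Device"
    Identifier "SiS 6326"
    Option "no_bitblt"
    Option "no_imageblt"
    Option "no_accel"
    Option "sw_cursor"
EndSection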
- Moral of the story:
- Avoid cheapy video cards and when you can't or REALLY don't want to, avoid the SiS 6326. Dropping an extra $50 is far more worth your time.
Also, startup scripts aren't infallible. I'm still wondering if there are maybe some bugs in the Debian /bin/sh that were causing the evaluation problems with /etc/init.d/xdm I was seeing.
Note: Probably resolved if you stay on Potato; he was looking at Debian/frozen in mid July, and it didn't release until mid August. -- Heather
"That's my story and I'm stickin' to it!"
'dillo
From Curtis J Blank on Mon, 04 Sep 2000
Answered by: Jim Dennis
Thanks for the answer, that did not dawn on me, I'm perfectly aware of how things exist in an environment and the need to export them. I guess I'd have to say it didn't dawn on me because of the fact that it works in a ksh environment on Solaris and Tru64 UNIX and I wasn't thinking along the lines of forked processes.
You were observing the behavior without understanding the underlying mechanisms.
I'm curious as to why it does work there though, what magic is the shell doing so that the variables exist that were used in the read when the forked read process no longer exists and control returns to the parent? Is the shell transposing the two commands and doing the read in the context of the parent and forking the function so that the variables remain? ...
You still don't understand.
A pipe operator (|) indicates a fork(). However, it doesn't necessitate an exec*(). External commands require an exec*().
In the cases of newer ksh (Korn '93 and later?) and zsh, the fork() is on the left of the pipe operator. That is to say that the commands on the left of the operator are performed by a child process. In the other cases the commands on the right are performed by the child. In either case the child executes the commands and exits. Meanwhile the parent executes the other set of commands and continues to live.
Thus the question is whether the parent (current process) or a child will be sending data into pipes or reading data from each pipe.
Arguably it makes more sense for the parent to receive data from the children, as the data is likely to be of persistent use. Of course it also stands to reason that we may want to read the data into a variable --- or MORE IMPORTANTLY into a list of variables. This is why the Korn shell (and zsh) model is better.
In the case of a single variable we can always just restructure the command into a set of backtick operators (also known as a "command substitution expression"). For example:
foo | read bar
... can always be expressed as:
bar=$( foo ) # (or bar=`foo` in older shells and csh)
However this doesn't work for multiple variables:
foo | read bar bang
... cannot be written in any command substitution form. Thus we end up trying to execute the rest of our script inside of the subshell (enclosing the 'read bar bang' command in a set of braces or parentheses to group it with a series of other commands in our subshell), or we resort to saving all of command 'foo's output into one variable and fussing with it. That greatly limits the flexibility of the 'read' command and makes the IFS (inter-field separator: a list of characters on which token splitting will be done for the read command) variable almost worthless.
One way to handle this would be to write the output of 'foo' to a temporary file, and then read it with simple redirection:
foo > /tmp/somefile.$$ ; read bar bang < /tmp/somefile.$$
... but this introduces a host of potential race conditions and security issues; requires that we clean up the temp file, suggests that we should create a 'trap' (signal handler) to perform the cleanup even if we are hit with a deadly signal, and is generally inelegant. We could also create a named pipe, but that has most of the same problems as a temporary file.
So we end up using the process subsitution expression as I described (and as you mention below):
... The real use of this technique is in the example script given that includes the function. I was able to get it to work when I did it per your suggestion:
read a b c < <( dafunc )
-Curt Blank
Of course. This is the same as 'read a b c < /tmp/somefile.$$' except that we are substituting a file for a filename. Thus the <( ... ) expression returns a filename. That file is a virtual file --- that is to say that it is a file descriptor connected to another process (just like a pipe) but it can be represented (on many UNIX systems, including Linux) as an entry under /dev/fd/ (the "file descriptor" directory). Under Linux /dev/fd/ is a symlink to /proc/self/fd. Under other forms of UNIX it might have different underlying mechanics. It might actually appear as a directory with a bunch of character mode device nodes or it might be some sort of virtual filesystem (like /proc, /dev/pts, etc).
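A tiny bash script makes the difference visible (behavior as of bash 2.x; under ksh '93 the pipe case would keep the variables too):

#!/bin/bash
# In bash the right-hand side of a pipe runs in a subshell,
# so the variables die with that child:
echo one two | read bar bang
echo "pipe:    bar='$bar' bang='$bang'"    # both come out empty

# Process substitution keeps the read in the current shell:
read bar bang < <(echo one two)
echo "procsub: bar='$bar' bang='$bang'"    # bar='one' bang='two'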
I still think that bash should switch to the Korn shell semantics. The <(...) is sufficient to provide the features. However, it seems to be unique to bash. For bash to offer the best portability it seems that it should conform to the latest Korn shell design. (BTW: If the switch were to break some script that depended on the old semantics, on the subshell "leaning" to the right --- then that script was already broken under different versions of ksh. However, I could certainly see a good argument for having a shell option (shopt?) that would set this back to the old semantics if that was necessary.) I have yet to see a case where the old semantics are actually more desirable than the new ones --- but I haven't really tried to find one either.
From Kurt Radecke on Mon, 18 Sep 2000
Answered by: Heather Stern
Help, saw some info you posted on www.linuxdoc.org.
I am new to Linux.
I have a 15Gig drive. What is the best way to partition it? I am using Redhat 6.2 and the manual says to set up a swap, boot and "/" partition.
I like to recommend:
- /tmp
- 100 to 300 Mb, depending on kinds of things you do that might flood tempspace. Even 400 is not unreasonable, if you have lots of disk to burn.
- /var
This place holds system logs, the packaging system databases, your incoming mail spool, and your outgoing mail and print spools. That means it can overflow pretty quickly if not kept separate. Even on a small system I don't like this to be too small (usually 250 Mb is okay for a minimum - but if space is that cramped I also turn off a lot of logging).
- /boot
- 10 or 20 Mb near the front of disk for your kernel(s). This way you can mount 'em read only
- /
- distros vary quite a bit about how small you can get away with this being, but I don't advise less than 200 Mb (unless you're putting something together by hand, or only installing "base" without all the cool stuff). If you are crafting something by hand you can get this fairly tiny by using an initial ramdisk. 500 or 600 Mb is about as large as I'd go.
- swap
- This is where the changeable portion of working processes are kept when they're not the active critter and there's no room to keep them live anyway. How much you need depends on how serious a multitasker you are. Personally I set it around 100 Mb per drive in my system.
- /usr
can be separate if you like... in which case I usually make /home a symlink to /usr/local/home so it goes there. If I were going to use "grow" to use up the rest of the disk - this would be where.
You can stretch any of these to be larger but /tmp and /var grow useless after a while - and swap is usually dog slow after about twice RAM, so I wouldn't use more than 1.5 times RAM.
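Pulling that together for your 15 Gig drive, one plausible starting point (the sizes are only a sketch; adjust for your RAM and habits):

/boot    20 Mb
/        500 Mb
swap     128 Mb (roughly 1 to 1.5 times RAM)
/tmp     300 Mb
/var     500 Mb
/usr     the rest, about 13.5 Gb, with /home as a symlink to /usr/local/home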
Also, what are your thoughts on installing GNOME or KDE. I have a P75 machine.
You can use both; I mix and match them with afterstep apps, GTK apps that don't use GNOME, and tcl stuff fairly freely. Sometimes K has a better tool, sometimes GNOME... but both environments can eat a surprising amount of RAM because their core libraries are large, and it's possible they may be affected by your older processor, too.
If you find yourself strapped for memory space use a lightweight theme (or switch to a wm that doesn't use themes) and avoid massive tools like netscape or emacs in favor of lighter ones like gzilla and lynx for browsing, nedit for editing (hey, it's under GPL now, that's cool), and an occasional TCL/TK app.
Thanks for your help.
Kurt
From Philipp on Tue, 19 Sep 2000
Answered by: Heather Stern
Hi mr. answerguy,
I have installed Linux on my IBM thinkpad and have found quite a few curiosities you might help me with:
This is normal. MSwin looks for its drives sequentially and when it runs out of stuff it understands, it stops.
For fairly annoying values of "works" it can be forced to work - you need to either stick with 2.2.14 or force 2.2.14's ppp service into a 2.2.16 setup with a shoehorn... then load the ltmodem driver binary they provide (you can find it at linmodems.org)
It still sucks up lots of CPU under load though. And it's sort of doomed, unless they wake up and smell the coffee before the 2.4 kernel ships -- there isn't any open code, and there isn't even a binary link kit like Aureal did for their soundcards.
Speaking of which -- cheers to Aureal for a good middle ground solution, and especially for setting up on aureal.sourceforge.net. May it encourage other vendors to do the same!
Can't really help unless you can tell what the sound is. Run lspci and see if it says. If you're lucky it's something normal, like an IRQ conflict. Try telling your CMOS Setup program that we are not a plug and play OS, so it will fix the IRQs for your builtin devices. Our idea of "plug and play" is under our control, not the BIOS'...
Maybe Kenneth Harker's Linux on Laptops resource page (http://www.cs.utexas.edu/users/kharker/linux-laptop/) has a link to someone else with the same model?
Positive surprises:
Well, we try, anyway.
Luckily most configurators have a pick for LCD screen these days.
Mice and many other input widgets, we do pretty well.
Anyway, mr. answerguy, if you know how I could get my soundcard or modem to work, I'd be really happy. (PS, I've been through the HowTo's and am currently using an external modem, which is not too nifty on a mobile computer)
I highly recommend a PCMCIA based modem instead of a clunky box-model. Ambicom makes nice solid cardbus modems which are not "winmodems" either.
Regards and Shalom,
Philipp Schlüter
Best of luck! -- Heather
Hi there,
Regarding the 'File With Device Information' in the September Issue, you should also mention that if you have the pciutils package installed, /sbin/lspci is much more useful than eyeballing /proc/pci. Just my thoughts....
Regards,
- Brendon.
Dear Editor,
here is another 2cent tip.
I'm forced to print to a printer connected to a Windows box. This box is networked with my Linux box, and I wanted to set up a working Samba printer. I had problems setting up the printer under SuSE 6.4: the passwords and resource names were correct, but smbclient couldn't connect to the Windows box. I simply added the command-line option "-I 192.168.0.2" in the file /etc/apsfilterrc.stcolor, in the line REMOTE_PRINTER=. (I'm using the stcolor driver from GhostScript. Check for the file corresponding to the printer driver you've selected in YaST.)
The option above specifies the IP address of the machine the printer is connected to. It now works without any problems.
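In other words, the line ends up looking something like this (a sketch - keep whatever value your apsfilterrc already has for REMOTE_PRINTER, and just add the option):

REMOTE_PRINTER="your_existing_value -I 192.168.0.2"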
cheers, Matthias
First off - the guy was right, it IS illegal pretty much anywhere in the world.
Second - it won't work because the modems are engineered for a different kind of communications channel - RF connections have different characteristics compared to phone lines.
Amateur radio (which requires a license, obtained by taking a test) has had a low-speed connection mode called packet radio for around 15 years. There is quite a bit of support for this technology within Linux (including modems implemented using sound cards...). These will work over an FM radio channel (CB is AM/SSB, which has much lower quality) and usually only work at 1200 to 9600 baud. Not blazingly fast.
Hope this clears up some points.
Steve Wilson, KA6S
Hi! I would like to access my graphical Linux desktop from my Windows box; I was able to do it using a program called Reflection (I guess you know it). Is there any (free) software with the same capabilities as Reflection?
Regards,
Toshiro.
There's MI/X, the MicroImages X server. Some people like it. I can't personally vouch for it. MicroImages provides it as part of the Windows support of its GIS software.
http://www.microimages.com/freestuff/mix
X-WinPro is shareware. License is a lot less than Reflection/X. Personally I've found it useful.
http://www.lab-pro.com
-- Dan Wilder
James,
I am a Linux newbie with a significant MS Windows background (several MS certs, etc) and I am trying to wean myself off of my Windows partition at work. The only thing holding me back is MS Exchange/Outlook. Is there an Exchange client for Linux? I have looked high and low and can't find one. I have configured the server (I administer it) to be a POP3 server, so I can get my mail that way, but I lose a lot of the functionality that way. Any help would be appreciated.
Duane Tackett
According to a message on the linux-admin list, TradeSuite (they have a server, too) can be found at http://www.bynari.com. Looks like the client is free though the Exchange support might not be. Oh yeah, and Exchange can be told to serve its mail up as webmail... maybe that will do until your company can transition to a more flexible mail system -- Heather
Got more than just a tip in the "dealing with MS Exchange" category? feel free to send us an article ... -- Heather
Is there any easy-to-read FAQ or HOWTO about how to type international characters in XFree86? I am a Spanish student, and I would like to write my essays in LyX. I would just like an easy way to learn about things like `compose keys' and `dead keys.' Can you help me? Thanx.
The answer to that would be "yes". Quoting from my own "Introduction to Shell Scripting":
...for a fairly decent and simple explanation, see Hans de Goede's "fixkeys.tgz", which contains a neat little "HOWTO". For a more in-depth study, the "Keyboard-and-Console-HOWTO" is an awesome reference on the subject.
Ben Okopnik
Don't be scared off by the fact it hasn't changed since 1998; the console itself doesn't change much. For even more fun, the Danish-HOWTO was just updated in March and covers all sorts of other aspects about international needs. There are a few other specific nationalities covered at linuxdocs.org too. -- Heather
I am looking for some software that will talk to my tape library. I have looked at several different commercial packages, but none of them really work the way I need them to. I would like to find an application that would tell me what tapes are in the library (by reporting back the bar codes), then load the tapes that I select into the drives I want. Then I can run taper/tar/cpio/mt to my heart's content. I could write my own software, but I am lazy.
Thank you!
Charles H. Deling
Perhaps amanda would do the trick. If not, perhaps its simplistic shell script mentality will make it easier to adjust to your needs. Gentle readers: any more suggestions? -- Heather
Just read a reply about a true modem. I've been searching for one ever since a friend mentioned it to me. He has an ISA true modem, and wouldn't you know it, mine is not. It's PCI, and so far no luck finding a PCI true modem. Any ideas on where I might score one?
Hoping for the right answer,
Kookaberra
Yes, at least one of the links from linmodems.org is the homepage of a Wallace and Gromit fan who keeps track of which cheap store brand modems are complete "hard" or crippled "soft" modems, including, to my great annoyance, the fact that some pccards are software controlled. Arrgh -- Heather
Hi all,
I was wondering if anyone knew of an Emulator for Palm OS. I would like to write apps for Palm OS and test them before installing. Is this possible?
Kind regards
Andrew Higgs
Look at freshmeat.net for the app copilot. It needs a ROM though, which you can either upload from your real Pilot, or get from the Palm Computing developers' site (once you agree to their restrictions for using the debug ROM, of course). -- Heather
Dear James,
I used your answer for the 'telnet - connection closed by foreign host' to get telnet working on a custom red hat install I did.
I begin to think that I left off a package that I really needed when I selected packages in the 'select package' window during the install.
I haven't yet found a description of which individual programs (inetd, in.telnetd) are rolled up into which of the packages selectable in the install screen's package selection window. Any help you can give will be much appreciated.
Andrew Wilkes
If you know which file you want, but not what RPM it's in, this script will do the trick ($1 = directory full of RPMs, $2 = file you are seeking):
#!/bin/bash
cd $1
for i in *.rpm ; do
  rpm -qpl $i | grep -q $2 && echo $i
done
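Assuming you save it as, say, "findrpm" (the name is made up), make it executable, and point it at a directory of packages, you'd use it like so:

findrpm /mnt/cdrom/RedHat/RPMS in.telnetd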
Hope that helps a few folks out there! -- Heather
I have a BeOS partition on hda3. I also have windows on another partition. linux "sees" the windows partition and mounting it is no problem. But how can I make linux "see" the BeOS partition? I've been in /etc/fstab to no avail.
Readonly support for the BeOS filesystem is available in the 2.4.0 test kernels. I have no idea how safe it is yet. -- Heather
Hello!
I have a small query. I want to log into a Linux machine, set a process running, and log out again, leaving the process running. It has been suggested that I can do this by simply using 'nohup command &', but this didn't work, because the process was killed as soon as I logged out again.
Any help would be greatly appreciated.
Andy
screen with autodetach mode turned on would work nicely. We use it here all the time. -- Heather
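For instance (the session and command names here are made up):

screen -dmS longjob mycommand    # start it inside an already-detached session
screen -r longjob                # reattach later; logging out in between is safe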
Courtesy Linux Today, where you can read all the latest Help Dex cartoons.
First of all, check out his web site, DiBona.com. Now delight yourself, as OLinux did, while getting to know the personality of Chris DiBona, President of the Silicon Valley Linux Users Group, Chief Linux Evangelist at VA Linux, and Grant Chair at Linux International.
Olinux: Tell us about your career: college, jobs, personal life (age, birth place, hobbies)
Olinux: When did you start working with Linux? What was your initial motivation, and how do you see it nowadays?
DiBona: I first discovered linux when I was a computer science student at George Mason University. I had to write a client-server application under linux that used IPC. I started development in the school's Sun lab and found that, when I could get a station, they were very slow.
I installed linux on my machine at home (a 486-25) and went to town. Linux was responsive and beautiful and I was able to complete the homework very quickly and have a good time doing it. I also learned a ton about my computer by doing this.
This was in late 1994, I think. Nowadays, I use linux for everything from email (I get 500+ emails a day) to surfing, and to a lesser degree, video games. I still program under linux for fun, too. I see it as a complete system now. I have a machine on the net now that hosts my personal site (DiBona.com) and a number of sites for my friends, all running linux of course.
Olinux: How does it feel to be a Linux Evangelist and live professionally for the cause?
DiBona: It's undoubtedly the coolest job I've ever had. I also get to work with some of the coolest people in computing, both here at VA and in the Linux world at large. I consider myself very lucky.
Olinux: What are the main personal achievements of your career? Can you cite some highlights? Did you get any awards, as an individual or representing a company?
DiBona: Great career achievements would include my administration of VA's community outreach program during our public offering. I felt it went really well and am proud of my part in it. At VA I've had the chance to work with the different departments and help staff them with talented, smart people. Other than that, I received an award from Linux Journal for my role as co-editor of the book "Open Sources", and I was able to help the EFF with the DVD cases in California and New York. Also, I've really enjoyed my work with Linux International.
Olinux: How was SVLUG created? Who made up the group, and what ideas guided SVLUG's start? How do you explain SVLUG's fast growth? Tell us briefly about some of the facts, work, and people that contributed to this progress.
DiBona: SVLUG started 12 years ago as a unix-on-PC group, concentrating on SCO and XENIX and the rest. It was started by a fellow named Dan Kioka, who was the president of the group. Dan ran the group for 10 years, when Ben Spade took over as the president and I as the Vice President, about three years ago. When Ben took over, Ian Kluft found us space at Cisco to meet, and the larger venue, combined with the growth of linux and the speakers we had access to in the valley, all contributed to the growth of the group. About 1 year ago I took over the presidency from Ben and it's been pretty easy going ever since. The biggest challenge in running such a large group is mostly the venue; Cisco has been very good about this, though.
Olinux: What are your responsibilities at VA Linux and SVLUG? How did you become Linux International's webmaster, and what were your main accomplishments as webmaster?
DiBona: I became the LI webmaster, and then the grant chair, mostly because I was willing to do the work. There are a lot of jobs in the linux community that can be done by anyone, provided they are willing to put the time in. John Mark Walker is now the webmaster for LI, and I get to concentrate on making the grant system at LI work better now that LI is incorporating and such.
Olinux: How is SVLUG organized? Try to give us an idea of how SVLUG works. How is it coordinated and managed (servers, directories, contributions, staff payment)? How many people are involved? What are the main problems? Does SVLUG have a central office or HQ somewhere?
DiBona: SVLUG is all volunteer; the servers were donated by VA back before I even worked for them. There is no treasury, and no membership fees. Our group's insurance comes from our parent group, the Silicon Valley Computer Society. As far as coordination goes, it's just a mailing list that we all subscribe to, and we all basically work together to get the meeting happening. The installfests are run by Brian, I'm responsible for facilities, Sam handles speakers, Michael and Marc handle the machine and mailing lists, and we have a team of web people (Amy, Lisa and Heather) who handle site updates. The main problem is that everyone is very busy with our day jobs, which can lead to some frustrating times, but the meetings still come off, so I'm happy.
Olinux: How many people have subscribed to the mailing lists? How do users help SVLUG, and how are they motivated to help? Is the staff made up entirely of volunteers?
DiBona: There are about a thousand on the announce list and 200+ on the discussion list. About 250 people come to each meeting.
Olinux: In your opinion, what are the most notable results of SVLUG's or LI.org's work promoting the Linux platform?
DiBona: Putting a friendly face on linux is the important job of LUGs and LI in general. It's a great thing to be able to refer anyone who emails me or calls me regarding Linux to a local person who just wants to help out. That's one of the things that really elevates Linux.
Olinux: What are the companies that sponsor or maintain SVLUG.org? What is VALinux's role on the site?
DiBona: The full list of sponsors and their roles can be found here: sponsors. Quite a who's who! Anyhow, VA's role is donating my time, bandwidth and a machine and a ton of t-shirts now and then. Any usergroup on the planet should contact me and we'll get boxes of stuff for you to give away at your meetings.
Olinux: What programs (database and scripting languages) are used for SVLUG.org's development? How difficult is it to manage this database?
DiBona: Mailman for the mailing list, apache and perl for the web site. It's pretty easy to handle.
Olinux: How many daily page views do you get, and what is the number and type of servers used to keep SVLUG.org online?
DiBona: Gosh, I'm really not sure about the page views. SVLUG.org runs on one PIII 500MHz system with 128MB of RAM. The old machine, a 486, was slashdotted three times with no problems. The current machine has an uptime of 210 days!
Olinux: In your opinion, what improvements and support are needed to make Linux a worldwide platform for end users?
DiBona: More video-games :-) Well, I'd say further development of the desktop metaphors like gnome and KDE and then we'll get the desktop the way we own the internet server market.
Olinux: IDC has shown that, despite the tendency for Linux to become the next dominant OS by 2004, the expected revenues generated are still regarded as extremely low. Does this mean that Linux won't ever play a major role as a commercial and profitable option for companies?
DiBona: Well, VA just completed a 50.7 million dollar quarter, and we're not going to stop. I can't comment for the Red Hats and such of the world, but we intend to do very well. Linux will continue to grow, and the linux industry will continue to grow with it.
Olinux: What are your forecasts for Linux's growth? Do you have any breaking news about Linux mass deployment in China or any other country?
DiBona: Nope. Linux is, and will continue to be, everywhere - more so with every day. Like Jon "maddog" Hall says: "Linux is inevitable".
Quentin Cregan is one of the key developers of VA Linux's SourceForge project. Currently there are 7,559 ongoing projects and more than 50,000 registered users, building a powerful networked community of programmers. SourceForge hosts Open Source projects and supports thousands of users by providing many tools for collaborative work.
Olinux: Tell us about your career: college, jobs, personal life (age, birth place)
Quentin: I currently live in Brisbane, Australia - and work via the 'net.
Olinux: What are your responsibilities at SourceForge? Are there any full-time workers?
Quentin: Currently I'm a cross between support monkey, FTP admin and developer. The staff are all reasonably multitalented =) - fairly able to turn their hands to doing whatever is necessary. There are currently six core staff members who are paid full time.
Olinux: How was SourceForge created? What were the main ideas in the beginning? How did the initial group get together? Is there a physical headquarters?
Quentin: SourceForge was initially conceived as a project called "ColdStorage". CS was targeted at making permanent archive of every CVS tree for every Open Source project on the planet. That idea got slightly modified by the original founding members of the SourceForge project, along with input from VA Linux Systems' staff. The result is what you have today. The initial group was Tony Guntharp, Drew Streib, Tim Perdue and Uriah Welcome - who are all based out of SourceForge's official HQ - VA Linux Systems in Sunnyvale, California.
Olinux: How is SourceForge organized? Give us an idea of how it works. In terms of the division of responsibilities, what are the main groups involved?
Quentin: SourceForge is made up of developers, sysadmins and community contacts.
Dan Bressler and Jim Kingdon try to make sure that we don't stray too far from the needs and wishes of the Open Source community. Uriah Welcome and Chris Endsley take care of the systems administration, and make sure that everything is working up to scratch. If not, their pagers wake them up at disgusting hours of the morning. Tim Perdue is the main PHP developer, along with some contributions from me. I handle most of the support requests, and any other odd job that seems to crop up.
Olinux: How often and where does the group responsible for key decisions meet? Do those meetings take place in a specific place, or over the Internet?
Quentin: While we're in constant dialogue via both IRC and email, we also have weekly telephone conferences. Although, these days, most people are physically in California, these meetings seem to be more face to face pizza eating events with one poor Australian on the phone, than a "teleconference".
Olinux: How is the work coordinated and managed (servers, directories, funding, staff payment)? How many people are involved? What are the main operating problems?
Quentin: The work, machines and bandwidth are sponsored by VA Linux Systems. The main problems we're facing are growth and ensuring total redundancy. We're currently undergoing a process of making sure that even if we lose our main fileserver (1 TB, yes, 1 TB), we can keep going with only a momentary loss of service.
Olinux: How many projects are currently open? How is project development evaluated? Is there any special policy for shutting down and clearing out old projects?
Quentin: There are currently 7,450 or so hosted projects, shared between 50,442 registered developers. Old projects are deleted on request. However, our deletion process is more "archival" - in keeping with the original ideas of Cold Storage. While development may freeze, the project itself is not actually physically deleted, merely shelved.
Olinux: What facilities are offered to the developers (accounts, machines, lists, email, links)?
Quentin: All developers on projects are offered: access to our compile farm (server cluster) for compiling and testing across clusters, a shell account on our main development server, an @users.sourceforge.net email alias, as well as access to some great project management tools. We're trying to make everything that a developer could possibly need available, to remove any and all overhead from software development.
Olinux: What are the steps for a project to be accepted as part of SourceForge? Are there any special criteria, such as being open source or non-commercial? Are all projects accepted?
Quentin: A project and an Open Source license. For a list of accepted Open Source licenses, please check out http://www.opensource.org. We're aiming to be a development host for as much of the Open Source community as possible.
Olinux: Why should a developer put his project on SourceForge instead of somewhere else?
Quentin: Apart from the developer services listed above, SourceForge has some fantastic web based tools. With SourceForge, you don't have to worry about finding webspace, or if your FTP server will be flooded with downloads. We provide the projects with a high capacity download server, which we're yet to see flooded. We also provide CVS trees to every project so that they can have their own revision control in their code. On top of all this - there are the web based and collaborative tools. The site itself allows you to manage news releases about your project, task management, document management, bug tracking, support management and much more. You can also receive code patches through the site, and set up mailing lists and news forums for discussion about your project.
Olinux: Does SourceForge have any key strategic alliances with companies? Does any private company besides VA Linux support SourceForge? Are there any profitable activities?
Quentin: SourceForge has helped numerous groups and companies with both Open Sourcing their code, and helping out with hosting bigger projects that have outgrown their main developer's DSL connection. For example, we're helping out Hewlett Packard with their moves toward Open Sourcing their printer drivers. We've also helped out projects like Mozilla, KDE, XFree86, and MySQL by supplying some extra hardware and support to help get their code out to everyone.
VA Linux is our primary supporter, and I believe they offer some value added services to private companies that wish to implement SourceForge locally. You'd really need to speak to one of the cool guys in VA corporate to ask about that.
Olinux: What main projects are under way? Are there any commercial projects paid for by companies? What role does SourceForge play in the Open Source world these days?
Quentin: Some of the most active projects on SourceForge include Crystal Space (a 3D engine), Mesa3D, FreeCraft, Python, FreeNet and more. These are listed on the front page of the site. SourceForge's role in Open Source appears to be becoming (hopefully) the base carrier of content - Geocities for Open Source, if you like =) We think it's great that so many developers can come to one place, and find so much freely downloadable and modifiable software.
Olinux: How is development coordinated? How are deadlines and guidelines established? Is there a special testing procedure before changes are added to the core code? Is there any special quality control or auditing of the code produced? What analysis and programming tools are used?
Quentin: Development of the SourceForge codebase is coordinated through a central CVS tree. All the code is fairly thoroughly checked by the developers, and goes through a testing process on our staging server before it is pushed live. Of course, there is sometimes the odd bug left in that gets found by one of the site's users.
Olinux: What operating system is used to run the project? Just Red Hat? Why did SourceForge pick PHP and MySQL as its software tools, instead of others like Perl and Postgres? What factors most influenced this decision?
Quentin: With the exception of the BSD machines in the compile farm, the servers all run VA Linux Systems' customised version of RedHat. It has a few slightly modified versions of software to work better on VA's servers.
PHP and MySQL were picked for different reasons. PHP was chosen because it was the right tool for the job, and requires little machine overhead. If we'd run the site as a Perl CGI, the footprint of loading a Perl parser for every hit would be a tad large, to say the least.
MySQL was chosen mainly because of its speed. Although this required the sacrifice of subselects and transactions - we've managed to work around this. There is a good article by Tim Perdue at http://www.phpbuilder.com/columns/tim20000705.php3 that outlines the benefits and detriments of using PostgreSQL and MySQL. This article also covers some of the reasons why MySQL was chosen for SourceForge.
Olinux: What are the main steps toward better software for the project that are still under way? Are there any expected turning points in terms of future technology, better output, or procedures?
Quentin: We're always listening to user feedback and wishes through our feature request forum. From here, we get a lot of ideas as to what users really wish to see in the site, and we try our best to make as much of that happen within our schedule.
Olinux: Has the project received any special awards? What did they represent?
Quentin: I believe we've won a few awards for being a cool site. The list needs to be updated; there's currently a link at http://sourceforge.net/docs/site/awards.php
Olinux: What is the project's security policy for protecting its servers? Tell us about major problems in keeping your servers secure. Is the project constantly exposed to hacker attacks, or do most of the hackers already belong to the project?
Quentin: We're always working on improving monitoring and security tools. We've been the recipients of numerous attacks (DoS, hack attempts, etc.). This is where we really rely on our sysadmins, as well as some of the great security tools that are available for Linux.
Olinux: In your opinion, how much has the Linux/OSS community grown, and how do you foresee its future?
Quentin: I started using Linux back around kernel 1.0.something and haven't looked back. Since then, the community has exploded around the project, which has been great to see. We've now got decent looking window managers, more features and greater acceptance.
Olinux: What are the main Internet technologies that you consider extremely interesting or relevant advances in information technology?
Quentin: I think the advent of the Internet as a collaborative community has been fantastic, and unprecedented. I personally can't wait for further advances in voice recognition.
Olinux: Could you send a short message to programmers in Brazil who work on Free Software/Open Source projects, and to OLinux's users?
Quentin: Thanks - to all developers in all countries; they've helped to make SourceForge what it is. Not only that, they've no doubt helped to bring inspiration to people learning how to code around the planet. As for people working on those projects, if they aren't already hosted on SourceForge, why not?! Let us know what we can do to make SourceForge better for the community as a whole and we'll do it!
Encryption is the transformation of data into a form that is (hopefully) impossible to read without the knowledge of a key. Its purpose is to ensure privacy by keeping information hidden from anyone for whom it is not intended.
Decryption is the reverse of encryption; it is the transformation of encrypted data back into an intelligible form.
Encryption and decryption generally require the use of some secret information, referred to as a key. Some encryption mechanisms use the same key for both encryption and decryption; others use different keys for the two processes.
Cryptography is fundamentally based on so-called hard problems, i.e. problems that can be solved only at a large computational cost. Some examples are factoring, theorem-proving, and the "travelling salesman problem" (finding the route through a given collection of cities which minimizes the total length of the path).
There are two types of cryptosystems: secret key and public key.
In secret key cryptography (or symmetric cryptography) the same key is used for both encryption and decryption. The most popular secret-key cryptosystem in use today is DES (the Data Encryption Standard), developed by IBM in the mid-1970s.
In public key cryptography, each user has a public key and a private key. The first one is made public and the second one remains secret. The public key is used during encryption, while decryption is done with the private key. Today the RSA public key cryptosystem is the most popular form of public key cryptography. RSA stands for Rivest, Shamir, and Adleman, the inventors of the RSA cryptosystem.
Another popular public key technique is the Digital Signature Algorithm (DSA), though it can only be used for signatures.
In secret key (or symmetric) cryptography the sender and receiver of a message know and use the same secret key: the sender uses the secret key to encrypt the message, and the receiver uses the same secret key to decrypt the message.
With such a system, the main problem to solve is the key management problem: getting the sender and receiver to agree on the secret key without anyone else finding out. Anyone who intercepts the key in transit can later read, modify, and forge all messages encrypted or authenticated using that key.
In order to solve this problem, Whitfield Diffie and Martin Hellman introduced the concept of public key cryptography in 1976. In their system, each person gets a pair of keys, one called the public key and the other called the private key. The public key is published, while the private key is kept secret. The sender and the receiver don't need to share any secret information because all communications involve only public keys: no private key is ever transmitted or shared.
Anyone can send a confidential message by just using public information, but the message can only be decrypted with a private key, which is in the sole possession of the intended recipient.
The communication scheme is the following: when A wishes to send a secret message to B he uses B's public key to encrypt the message and sends it. B then uses his private key to decrypt the message and read it. Anyone can send an encrypted message to B, but only B can read it (because only B knows B's private key).
In a public key cryptosystem the private key is always linked mathematically to the public key. Therefore, it is always possible to attack a public key system by deriving the private key from the public key. Typically, the defense against this is to make the problem of deriving the private key from the public key as difficult as possible. Some public key cryptosystems are designed such that deriving the private key from the public key requires the attacker to factor a large number; in this case performing the derivation is computationally infeasible, because multiplying two prime integers together is easy but, as far as we know, factoring the product of two large primes is much more difficult.
That is why factoring is the underlying, presumably hard, problem upon which several public key cryptosystems are based, including the RSA algorithm.
It has not been proven that factoring must be difficult, and it remains a possibility that a quick factoring method might be discovered, though this possibility is today considered remote.
In general, the larger the number the more time it takes to factor it. This is why the size of the modulus in RSA determines how secure an actual use of RSA is; the larger the modulus, the longer it would take an attacker to factor, and thus the more resistant the RSA modulus is to an attack.
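A toy example makes this concrete (the numbers are absurdly small, chosen only for illustration): take p=5 and q=11, so n=55 and phi=40; pick the public exponent e=3, which gives the private exponent d=27 (3*27=81, which is 1 mod 40). You can follow along with bc:

echo '(7^3) % 55' | bc     # encrypt the message 7 with the public key (3,55): gives 13
echo '(13^27) % 55' | bc   # decrypt with the private key (27,55): gives 7 again

An attacker who knows only n=55 and e=3 must factor 55 back into 5 and 11 to compute d - trivial here, but hopeless for a modulus hundreds of digits long.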
PGP is a program developed by Phil R. Zimmermann that allows you to communicate securely over an insecure channel. Using PGP you can easily and securely protect the privacy of your data by encrypting it so that only the intended individuals can read it.
PGP is based on public key cryptography: two complementary keys, called a key pair, are used to maintain secure communications. One of the keys is designated as a private key to which only you have access and the other is a public key which you freely exchange with other PGP users. Both your private and your public keys are stored in keyring files.
Before you begin using PGP, you need to generate this key pair.
After you have created a key pair, you can begin corresponding with other PGP users. You will need a copy of their public key and they will need yours. The public key is just a block of text, so it's quite easy to trade keys with someone. Some standard techniques are including your public key in an email message, copying it to a file, or posting it on a public or corporate key server where anyone can get a copy when they need it. Once you have generated your key pair and exchanged public keys, you can begin encrypting and signing email messages and files.
The following information and commands refer to PGP 5.0i. Some details may differ with a different PGP release. Getting and installing the program is not covered in this article.
In order to use PGP features, the first operation you must accomplish is generating a key pair. From the command line enter:
pgpk -g
You must reply to some questions in order to generate your keys:
The algorithm to use in encrypting messages (DSS/DH or RSA).
The key size, or the number of bits used to construct your digital key. A larger key is stronger but it takes more time to encrypt and decrypt. Unless you are exchanging extremely sensitive information you are safe using a key composed of 1024 bits.
Enter your user ID. It's not absolutely necessary to enter your real name or even your email address. However, using your real name makes it easier for others to identify you as the owner of your public key. For example:
Matteo Dell'Omodarme <matt@martine2.difi.unipi.it>
If you do not have an email address, use your phone number or some other unique information that would help ensure that your user ID is unique.
Enter a passphrase, a string of characters or words you want to use to maintain exclusive access to your private key.
The generated key pair is placed on your public and secret keyrings in your $HOME/.pgp directory. There you will find the file pubring.pkr, containing the public keys, and the file secring.skr, containing your secret key.
pgpk is the command used to manage public and private keys for PGP. You can extract your public key from your keyring like this:
pgpk -x my_username@my_hostname > my_key
To add a new public key, stored in keyfile, to your database:
pgpk -a keyfile
and, to remove a key:
pgpk -r newuser@new_hostname
pgpe encrypts and signs files using public key cryptography, or encrypts files using conventional cryptography.
The simplest use of the command is the following:
pgpe text_file newuser@new_hostname
which encrypts the plaintext file text_file using the public key of the intended receiver. Many options are available (see the pgpe manual page); some of them are reported here:
-a, --armor:
Turn on "ASCII Armoring". This outputs a text-only version of your encrypted text. This makes the result safe for mailing, but about 30% larger.
-f:
Stream mode. Accepts input on stdin and places output on stdout. If no files are specified as arguments, PGP executes in this mode by default.
-o outfile:
Specifies that output should go to outfile. If not specified, output goes to the default filename. The default filename for each input file is the input filename with ".pgp" appended, unless ASCII Armoring is turned on, in which case it is ".asc". It is an error to specify multiple input files with this option.
-t:
Turns on text mode. This causes PGP to convert your input message to a platform-independent form. It is primarily for use when moving files from one operating system to another.
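Putting a few of these together - for example, to encrypt text_file with ASCII armoring and write the result to message.asc, following the same form as the basic example above (the file names are just placeholders):

pgpe -a -o message.asc text_file newuser@new_hostname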
pgpv decrypts and verifies files encrypted and/or signed by PGP.
In order to decrypt a message encrypted using your public key enter the command:
pgpv text_file.pgp
Some options are available; among them are:
-f:
Stream mode. Accepts input on stdin and places output on stdout. If no files are specified as arguments, PGP executes in this mode by default.
-o outfile:
Specifies that output should go to outfile. If not specified, output goes to the default filename. The default filename for each input file is the input filename with the ".pgp" ".asc" or ".sig" removed. It is an error to specify multiple input files with this option.
A useful option of the Pine mailer makes it possible to automatically handle the encryption of outgoing messages and the decryption of received ones. In the file $HOME/.pinerc, search for the lines starting with display-filters and sending-filters, and make the following insertions:
# This variable takes a list of programs that message text is piped into
# after MIME decoding, prior to display.
display-filters=_BEGINNING("-----BEGIN PGP MESSAGE-----")_ /usr/bin/pgpv

# This defines a program that message text is piped into before MIME
# encoding, prior to sending
sending-filters=/usr/bin/pgpe -taf _RECIPIENTS_
The display-filters line says: "when a received mail starts with the given string (i.e. -----BEGIN PGP MESSAGE-----), process its contents using the program /usr/bin/pgpv". Since all PGP messages start that way, all PGP encrypted messages are automatically trapped by pgpv and decrypted (obviously, only if they were encrypted with your public key).
The sending-filters line sets /usr/bin/pgpe as the program processing outgoing messages, using the email address (i.e. _RECIPIENTS_) of the intended receiver to select which public key the PGP encryption mechanism must use.
Once you have made these modifications, you can send either encrypted or plaintext messages, choosing between them at sending time. A question is asked before your mail is sent out:
Send message (unfiltered)?
Replying Y to the question forces Pine to send the mail in plaintext form, while hitting the Ctrl-N sequence (i.e. the Next Filter option) lets you choose among the different filters.
If pgpe is the sole filter defined, the following message is displayed:
Send message (filtered thru "pgpe")?
Replying Y to that question makes Pine encrypt the message with the appropriate public key and send it.
[Eric also draws the Sun Puppy comic strip at http://www.sunpuppy.com. It's about, um, puppies. -Ed.]
The problem with making books inexpensively is that the process gets cumbersome. It can be a real pain in the butt. My whole goal is to make it very simple and very fast. There is no point in making books if it becomes an unpleasant event. This document will address several problems Mark and Rick have had in the past and present one solution. This article will also list ongoing problems.
Rick and I are in the process of making a HOWTO for book binding, and if anyone wishes to add to the HOWTO or wishes to contribute thoughts, please send email to zing@gnujobs.com.
The problem with that setup was that you had to keep your book in the book press for 30 minutes before letting it go. Also, I wasn't using a ruler when Rick said I should.
With my current setup, we can replace the book-binding press with just a straight-edge. Here are the tools,
Here are the steps,
This is very nice if you want to create a virtual office. Many an independent consultant might be interested in this.
Once you have extracted the tiff files out of the email messages, you can do with them whatever you want. For this article, we will convert them to pdf files and put them in a web directory for easy download.
### Copy the mail over to a temporary file.
cp /var/spool/mail/Username File.mail
### Extract the tiff files.
uudeview File.mail
### Let us assume the tiff file is extracted as the name MyFile.tiff
### Convert it to postscript
tiff2ps MyFile.tiff > TempFile.ps
### Convert it to pdf
ps2pdf TempFile.ps TempFile.pdf
### move it
mv TempFile.pdf /www/docs/pdf/TempFile.pdf

That is how you can do it manually. However, we want to automate the process. Two scripts in the next section will do that.
#!/usr/bin/perl
## We assume you have uudeview installed.
## We assume you have a public_html directory which your webserver has been
## properly configured to see.
### This perl script is not properly secured since it is possible to make
### a weird configuration for the name of the fax file, which in theory
### could mess up the command line statements. Use at your own risk.

my $User = "My_Username";
my $Temp = "/home/$User/Temp/fax";

### Copy the mail spool to the temporary fax file, then empty the spool.
system "cp /var/spool/mail/$User $Temp";
system "cp /dev/null /var/spool/mail/$User";
### Extract the tiff attachments into /home/$User/tiff/, then clear the temp file.
system "/usr/bin/uudeview -o -i -d -p /home/$User/tiff/ /home/$User/Temp/fax";
system "cp /dev/null /home/$User/Temp/fax";

### Number new pdfs starting after the ones already in public_html/pdf/.
my @Old_Pdfs = </home/$User/public_html/pdf/*.pdf>;
my $No = @Old_Pdfs;

foreach my $File (</home/$User/tiff/*.tif>) {
    $No++;
    ### Build the .ps name: swap the extension, and the tiff/ directory for ps/
    ### (the ps/ directory must already exist).
    my $Ps = $File;
    $Ps =~ s/\.tif/\.ps/g;
    $Ps =~ s/tiff/ps/;
    system "/usr/bin/tiff2ps $File > $Ps";
    ### If you want to print this, uncomment
    # system "lpr $Ps";
    my $Pdf = $Ps;
    $Pdf =~ s/\.ps/\.pdf/g;
    system "/usr/bin/ps2pdf $Ps $Pdf";
    ### Either choose to keep the default name of the file or number it
    # system "mv $Pdf /home/$User/public_html/pdf/";
    system "mv $Pdf /home/$User/public_html/pdf/$No.pdf";
    system "rm $Ps $File";
}

Here is the crontab file you will need. Run the command
crontab Crontab
in order to have it installed and run automatically.
#!/bin/sh
0,15,30,45 * * * * /home/UserName/Cron.pl >> /home/UserName/cron_log 2>&1
Phil Hunter from COLUG first told me about this a year or two ago. He just dumps the faxes to a printer. It wasn't of much use then when I had an office and a fax machine, but I have found it useful since I moved out to California. My next goal is to send a fax through a modem, and then I will be able to send and receive faxes when I am not in my office in the Bay Area.
"You wouldn't believe how many managers believe that you can get a baby in one month by making nine women pregnant."
-- Marc Wilson
Well, this should be the last article in the "Introduction to Shell Scripting" series - I've had great feedback from a number of readers (and thank you all for your kind comments!), but we've covered most of the basics of shell scripting; that was the original purpose of the series. I may yet pop up at some point in the future ("Oh, rats, I forgot to explain XYZ!"), but those of you who've been following along should now consider yourselves Big-Time Experts, qualified to carry a briefcase and sound important... <grin> Well, at least you should have a pretty good idea of how to write a script and make it work - and that's a handy skill.
Quite a while ago, I found myself in a quandary while writing a script (NO-O-O! How unusual! <grins>); I had an array that contained a list of command lines that I needed to execute based on certain conditions. I could read the array easily enough, or print out any of the variables - but what I needed was to execute them! What to do, what to do... as I remember, I gave up for lack of that one capability, and rewrote the whole (quite large) script (it was not a joyful experience). "eval" would have been the solution.
Here's how it works - create a variable called $cmd, like so:
cmd='cat .bashrc|sort'
It's just an example - you could use any valid command(s). Now, you can echo the thing -
Odin:~$ echo $cmd
cat .bashrc|sort
Odin:~$
- but how do you execute it? Just running "cmd" produces an error:
Odin:~$ $cmd
cat: .bashrc|sort: No such file or directory
Odin:~$
This is where "eval" comes into its own: "eval $cmd" would evaluate the content of the variable as if it had been entered at the command line. This is not something that comes up too often... but it is a capability of the shell that you need to be aware of.
Note that "bash" has no problem executing a single command that is stored as a variable, something like:
Odin:~$ N="cat .bashrc"
Odin:~$ $N
# ~/.bashrc: executed by bash(1) for non-login shells.
export PS1='\h:\w\$ '
umask 022

works fine. It's only when more complex commands, e.g., those that involve aliases or operators ("|", ">", ">>", etc.) are used that you would encounter problems - and for those times, "eval" is the answer.
One of the standard techniques in scripting (and in programming in general) is that of writing data to temporary files - there are many reasons to do this. But, and this is a big one, what happens when your users interrupt that script halfway through execution? (For those of you who have scripts like that and haven't thought of the issue, sorry to give you material for nightmares. At least I'll show you the solution as well.)
You guessed it: a mess. Lots of files in "/tmp", perhaps important data left hanging in the breeze (to be deleted at next reboot), files thought to be updated that are not... Yuck. How about a way for us to exit gracefully, despite a frantic keyboard-pounding user who just has to run "Quake" RIGHT NOW?
The "trap" command provides an answer of sorts (shooting said user is far more effective and enjoyable, but may get you talked about).
function cleanup ()
{
stty intr "" # Ignore 'Ctrl-C'; let him pound away...
echo "Wake up, Neo."
sleep 2; clear
echo "The Matrix has you."
echo "He's at it again."|mail admin -s "Update stopped by $USER"
# Restore the original data
tar xvzf /mnt/backup/accts_recvbl -C /usr/local/acct
# Delete 'tmp' stuff
rm -rf /tmp/in_process/
# OK, we've taken care of the cleanup. Now, it's REVENGE time!!!
rm /usr/games/[xs]quake
# Give him a nice new easy-to-remember password...
chpasswd $USER:~X%y!Z@zF%HG72F8b@Idiot&(~64sfgrnntQwvff########^
# We'll back up all his stuff... Oh, what's "--remove-files" do?
tar cvz --remove-files -f /mnt/timbuktu/bye-bye.tgz /home/$USER
# Heh-heh-heh...
umount /mnt/timbuktu
stty intr ^C # Back to normal
exit # Yep, I meant to do that... Kill/hang the shell.
}
trap 'cleanup' 2
...
There's a little of the BOfH inside every admin. <grin> (For those of you not familiar with the "BOfH Saga", this is a must read for every Unix admin; appalling and hideously funny. Search the Web.)
DON'T run this script... yes, I know it's tempting. The point of "trap" is, we can define a behavior whenever the user hits `Ctrl-C' (or for that matter, any time the script exits or is killed) that is much more useful to us than just crashing out of the program; it gives us a chance to clean up, generate warnings, etc.
"trap" can also catch other signals; the fact is that even "kill", despite its name, does not of itself `kill' a process - it sends it a signal. The process then decides what to do with that signal (a crude description, but generally correct). If you wish to see the entire list of signals, just type "trap -l" or "kill -l" or even "killall -l" (which does not list the signal numbers, just names). The ones most commonly used are 1)SIGHUP, 2)SIGINT, 3)SIGQUIT, 9)SIGKILL, and 15)SIGTERM.
[But SIGKILL is untrappable. -Ed.]
There are also the `special' signals. They are: 0)EXIT, which traps on any exit from the shell, and DEBUG (no number assigned), which can - here's a nifty thing! - be used to troubleshoot shell scripts (it traps every time a simple command is executed). DEBUG is actually more of an "info only" item: you can have this exact action without writing any "trap"s, simply by adding "-x" to your "hash-bang" (see "IN CASE OF TROUBLE..." below).
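As a minimal sketch of the EXIT trap earning its keep with the temp-file problem from above (the file name is made up):

#!/bin/bash
TMPFILE=/tmp/myscript.$$
trap 'rm -f $TMPFILE' 0    # runs on any exit - normal, "Ctrl-C", or "kill"
echo "scratch data" > $TMPFILE
sort $TMPFILE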
"trap" is a powerful little tool. In LG#37, Jim Dennis had a short script fragment that created a secure directory under "/tmp" for just this sort of thing - temp files that you don't want exposed to the world. Pretty cool gadget; I've used it a number of times already.
Speaking of troubleshooting, "bash" provides several very useful tools that can help you find the errors in your script. These are switches - part of the "set" command syntax - that are used in the "hash-bang" line of the script itself. These switches are:
-n Read the shell script lines, but do not execute
-v Print the lines as they're read
-x Print $PS4 (the "level of indirection" prompt) and the command just executed
I've found that "-nv" and "-x" are the most useful invocations: one gives you the exact location of a "bad" line (you can see where the script would crash); the other, `noisy' though it is, is handy for seeing where things aren't happening quite the right way (when, even though the syntax is right, the action is not what you want). Good troubleshooting tools both. As time passes and you get used to the quirks of error reporting, you'll probably use them less and less, but they're invaluable to a new shell script writer.
To use them, simply modify the initial "hash-bang":
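For example (a minimal sketch - the switches can be combined to taste):

#!/bin/bash -nv

for the syntax check with each line echoed as it's read, or "#!/bin/bash -x" for the execution trace.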
Here's a line familiar to every "C" programmer:
#include <stdio.h>
- a very useful concept, that of sourcing external files. What that means is that a "C" programmer can write routines (functions) that he'll use over and over again, store them in a `library' (an external file), and bring them in as he needs them. Well - have I not said that shell scripting is a mature, capable programming language? - we can do the same thing! The file doesn't even have to be executable; the syntax that we use in bringing it in takes care of that. The example below is a snippet of the top of my function library, "Funky". Currently, it is a single file, a couple of kB long, and growing apace. I try to keep it down to the most useful functions, as I don't want to garbage up the environment space (is the concept even applicable in Linux? Must find out...)
There's a tricky little bit of "bash" maneuvering that's worth knowing: if you create a variable called BASH_ENV in your .bash_profile, like so:
export BASH_ENV="~/.bash_env"
then create a file called ".bash_env" in your home directory, that file will be re-read every time you start a `non-login non-interactive shell', i.e., a shell script. A good place to put initialization stuff that is shell-script specific; that's where I source "Funky" from - that way, any changes in it are immediately available to any shell script.
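A minimal ".bash_env" along those lines might contain nothing more than:

# ~/.bash_env - read by every non-interactive shell
. Funky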
cat /usr/local/bin/Funky|grep \(\)
}
getch () # gets one char from kbd, no "Enter" necessary
{
OLD_STTY=`stty -g`
stty cbreak -echo
GETCH=`dd if=/dev/tty bs=1 count=1 2>/dev/null`
stty $OLD_STTY
}
...
Not too different from a script, is it? No "hash-bang" is necessary, since this file does not get executed by itself. So, how do we use it in a script? Here it is (we'll pretend that I don't source "Funky" in ".bash_env"):
. Funky
declare -i Total=0
leave ()
{
echo "So youse are done shoppin'?"
[ $Total -ne 0 ] && echo "Dat'll be $Total bucks, pal."
echo "Have a nice day."
exit
}
trap 'leave' 0
clear
while [ 1 ]
do
echo
echo "Whaddaya want? I got Cucumbers, Tomatoes, Lettuce, Onions,"
echo "and Radishes today."
echo
# Here's where we call a sourced function...
getch
# ...and reference a variable created by that function.
case $GETCH
in
C|c) Total=$Total+1; echo "Them are good cukes." ;;
T|t) Total=$Total+2; echo "Ripe tomatoes, huh?" ;;
L|l) Total=$Total+2; echo "I picked da lettuce myself." ;;
O|o) Total=$Total+1; echo "Fresh enough to make youse cry!" ;;
R|r) Total=$Total+2; echo "Real crispy radishes." ;;
*) echo "Ain't got nuttin' like that today, mebbe tomorra." ;;
esac
sleep 2
clear
done
Note the period before "Funky": that's an alias for the "source" command. When sourced, "Funky" acquires an interesting property: just as if we had asked "bash" to execute a file, it goes out and searches the path listed in $PATH. Since I keep "Funky" in "/usr/local/bin" (part of my $PATH), I don't need to give an explicit path to it.
If you're going to be writing shell scripts, I strongly suggest that you start your own `library' of functions. (HINT: Steal the functions from the above example!) Rather than typing them over and over again, a single "source" argument will get you lots and lots of `canned' goodies.
Well - overall, lots of topics covered, some "quirks" explained; all good stuff, useful shell scripting info. There's a lot more to it - remember, this series was only an introduction to shell scripting - but anyone who's stuck with me from the beginning and persevered in following my brand of pretzel-bending logic (poor fellows! irretrievably damaged, not even the best psychologist in the world can help you now... :) should now be able to design, write, and troubleshoot a fairly decent shell script. The rest of it - understanding and writing the more complex, more involved scripts - can only come with practice, otherwise known as "making lots of mistakes". In that spirit, I wish you all lots of "mistakes"!
Happy Linuxing!
``Communities will fight to defend themselves. People will fight harder and more bitterly to defend their communities, than they will fight to defend their own individual selves.''
-- Bruce Sterling, "Hacker Crackdown"
The "man" pages for 'bash', 'builtins', 'stty' "Introduction to Shell Scripting - The Basics", LG #53
"Introduction to Shell Scripting", LG #54
"Introduction to Shell Scripting", LG #55
"Introduction to Shell Scripting", LG #57
*** NOTE: This may not necessarily be the best way to configure Sendmail; I'm certain that it isn't the only way. It worked for me; if you are in the same situation - home machine, intermittent Net connection, possibly multiple users on one machine - it will probably work for you... but there are no guarantees: if it breaks, you get to keep both pieces. ***
This weekend, I installed RedHat 6.2 on my brother's PC - just to give an idea of how far we've come, I didn't even have to convince him (well, a few hints about "Oh, your machine crashed again? Gee, mine doesn't..." over the years may have helped.) I'm a Debian guy, myself, but he had a RedHat CD, and I wanted the experience of completely configuring an RH system (Famous Last Words: "After all, how different could it be?")
As a matter of fact, the RH install died a few times, until I figured out that one of the non-critical files on the CD was damaged (my brother was very impressed by the fact that I could customize the installation to the extent of eliminating a single file). So, no desktop pictures for the moment - I got them later from ftp.redhat.com - but everything else went well. In a couple of hours, I had his machine up and working away.
The first problem came from the fact that his ISP, AT&T, uses CHAP authentication; not the easiest thing in the world to handle (for those of you who are curious: on the "Advanced" tab of the account properties, select "Let PPP do all authentication" ; close the Configuration Tool; in "/etc/ppp/chap-secrets", put the password in double quotes. That cost me a couple of hours.) Once that was done, everything went smoothly... until I wanted to send mail without using Netscape (I far prefer Mutt). Then, the circus pulled into town, clowns and jugglers and magicians and all...
"He who has never configured `sendmail.cf' has no courage. He who has configuredI've always considered hacking sendmail config files to be the province of ÜberHackers, the people who read raw binary code and laugh about it. A "smail" installation - a one-line change in a simple file - MTA setup the easy way! Well... I figured I'd at least give it a shot; I already have lots of scars, what else could I lose? (I hear a chorus of voices: "Your sanity!" Never had any; can't be a problem.)
it more than once has no brain."
-- Unknown
I'll lightly skip over the gnashing of teeth and the anguished screaming at the total lack of useful info on the Net (every Sendmail expert, everywhere and everywhen, thinks that you're configuring a 50,000-user MTA. There are no exceptions.), and go on to the actual things that worked. Here they are, step by step - note that you'll need to be `root' for all of this:
1. Install the "sendmail-cf" package. It's on the RedHat CD, but does not get installed by default; you'll need it to make any configuration changes.
2. In `/etc/mail', create two files - "genericsdomain" and "genericstable" (we'll be using them in just a minute); in `/etc/mail/Makefile', add "genericsdomain.db" and "genericstable.db" to the "all:" line.
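Hypothetically - the existing targets in your Makefile may differ - that line would end up looking something like:

all: virtusertable.db access.db genericsdomain.db genericstable.db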
3. Write your FQDN (Fully Qualified Domain Name - run "hostname -f" to see what it is) in "genericsdomain". Adding "localhost" doesn't hurt anything either, and seems like a good idea - this is the file used by Sendmail to determine if the mail it sees is coming from the local domain.
4. Write (this is the good part) your mail aliases in "genericstable", in a "local_login_name remote_account_name@mail_domain" format, like so:
joe big_time@yahoo.com
zelda gorgeous@cheerful.com
walter walter@worldnet.att.net
5. Run "make" in `/etc/mail'; this will create the ".db" versions of what you've just created. Re-run it whenever you change those files.
OK, we're done with the simple part. Now, before you do the stuff that follows, dance naked widdershins around your computer three times while chanting, "I shall not fear; fear is the mindkiller..." Oops - sorry, that part is optional for anyone but me...
6. Edit `/etc/sendmail.mc'. Add the following lines (I prefer to put them at the end of the other "FEATURE" statements, just for neatness' sake):
FEATURE(masquerade_envelope)
FEATURE(genericstable, `hash -o /etc/mail/genericstable')
GENERICS_DOMAIN_FILE(`/etc/mail/genericsdomain')
This tells Sendmail to use those files you've just created, and to modify the "envelope" (The "From " header, etc.) as well as the visible headers ("From:", etc.)
7. Run "m4 /etc/sendmail.mc > /etc/sendmail.cf". This processes your newly modified "sendmail.mc" into a form that Sendmail actually reads - the "sendmail.cf" file.
Now, we're almost ready, except for one last thing -
8. Type "killall -HUP sendmail" or "kill -HUP <PID>", using the Sendmail PID from "ps -ax". This will restart Sendmail which forces it to re-read the new config file.
Whew. Well, I'm still alive, and <patting pockets absentmindedly> still have my mind. Somewhere.
The system seems to work - I've sent mail to a number of people I know, and their servers didn't choke; sending mail to myself and examining the headers in "/var/spool/mail/ben" with a text editor confirmed that there was nothing horrendously unusual about them. I've rebooted the system, and everything still seems OK - now, a day later, I've stopped expecting things to go "BOOM". Still, you never know...
It's true that Netscape will handle both SMTP and POP services, one user at a time; for most people, this is good enough. On the other hand, if you're one of those folks (like me) who hates the idea of waiting several minutes for a mail client complete with Web browser, news client, GUI, point-and-click, and lots of confusing options - when all you need is to send some mail - Sendmail may well provide a good answer.
Happy Linuxing to all!
The incredibly confusing and unbelievably complex Sendmail man page
Ditto the /usr/doc/sendmail directory
Double ditto most Net resources
A slightly smaller ditto for RedHat's "Where's Everything?" page,
...and one semi-decent resource from RedHat-Europe, the Sendmail-Address-Rewrite mini-HOWTO.
In this article I will explain how to make your Linux box secure by taking basic security measures. It should enable anybody to tighten the security of a RedHat Linux box.
BIOS Security
Always set a BIOS password, and disallow booting from floppy by changing the BIOS settings. This will block undesired people from trying to boot your Linux system with a special boot disk, and will protect you from people trying to change BIOS features such as allowing boot from the floppy drive or booting the server without a password prompt.
LILO Security
Add three parameters to the "/etc/lilo.conf" file: timeout, restricted and password. These options make LILO ask for a password if boot-time options (such as "linux single") are passed to the boot loader.
Step 1
Edit the lilo.conf file (vi /etc/lilo.conf) and add or change these three options:
boot=/dev/hda
map=/boot/map
install=/boot/boot.b
timeout=00 #change this line to 00
prompt
default=linux
restricted #add this line
password=<password> #add this line and put your password
image=/boot/vmlinuz-2.2.14-12
label=linux
initrd=/boot/initrd-2.2.14-12.img
root=/dev/hda6
read-only
Step 2
The "/etc/lilo.conf" file should be readable by only root because it contains unencrypted passwords.
[root@kapil /]# chmod 600 /etc/lilo.conf (it will no longer be world-readable).
Step 3
Run LILO so the change in "/etc/lilo.conf" takes effect.
[root@kapil /]# /sbin/lilo -v (to install the updated configuration).
Step 4
One more security measure you can take to secure the "/etc/lilo.conf" file is to set it immutable, using the chattr command.
* To set the file immutable, simply use the command:
[root@kapil /]# chattr +i /etc/lilo.conf
This will prevent any changes (accidental or otherwise) to the "lilo.conf" file.
For more information about lilo security, read my article on LILO.
Disable all special accounts
You should delete all default user and group accounts that you don't use on your system, like lp, sync, shutdown, halt, news, uucp, operator, games, gopher, etc.
To delete a user account :
[root@kapil /]# userdel lp
To delete a group:
[root@kapil /]# groupdel lp
Choose the right password
You should follow these guidelines when choosing a password.
Password length: The minimum acceptable password length by default when you install your Linux system is 5 characters. This is not enough; it should be 8. To change it, edit the login.defs file (vi /etc/login.defs) and change the line that reads:
PASS_MIN_LEN 5
to read:
PASS_MIN_LEN 8
Disable all console-equivalent access for regular users
You should disable all console-equivalent access to programs like shutdown, reboot, and halt for regular users on your server.
To do this, run the following command:
[root@kapil /]# rm -f /etc/security/console.apps/<servicename>
Where <servicename> is the name of the program to which you wish to disable console-equivalent access.
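For example, to remove console-equivalent access to the shutdown, halt, and reboot programs named above:
[root@kapil /]# rm -f /etc/security/console.apps/shutdown
[root@kapil /]# rm -f /etc/security/console.apps/halt
[root@kapil /]# rm -f /etc/security/console.apps/reboot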
Disable & uninstall all unused services
You should disable and uninstall all services that you do not use, so that you have one less thing to worry about. Look at your "/etc/inetd.conf" file and disable what you do not need by commenting it out (adding a # at the beginning of the line), then send your inetd process a SIGHUP signal to make it re-read the updated "inetd.conf" file. To do this:
Step 1
Change the permissions on "/etc/inetd.conf" file to 600, so that only root can read or write to it.
[root@kapil /]# chmod 600 /etc/inetd.conf
Step 2
Ensure that the owner of the "/etc/inetd.conf" file is root.
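If it isn't, you can set it with, for example:
[root@kapil /]# chown root.root /etc/inetd.conf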
Step 3
Edit the inetd.conf file (vi /etc/inetd.conf) and disable any services you don't plan to use, such as: ftp, telnet, shell, login, exec, talk, ntalk, imap, pop-2, pop-3, finger, auth, etc. A service that's turned off is much less of a risk.
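For example, to disable telnet, change the line:
telnet stream tcp nowait root /usr/sbin/tcpd in.telnetd
to:
#telnet stream tcp nowait root /usr/sbin/tcpd in.telnetd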
Step 4
Send a HUP signal to your inetd process
[root@kapil /]# killall -HUP inetd
Step 5
Set "/etc/inetd.conf" file immutable, using the chattr command so that nobody can modify that file
* To set the file immutable simply, execute the following command:
[root@kapil /]# chattr +i /etc/inetd.conf
This will prevent any changes (accidental or otherwise) to the "inetd.conf" file. The only person that can set or clear this attribute is the super-user root. To modify the inetd.conf file you will need to unset the immutable flag:
* To unset the immutable flag, simply execute the following command:
[root@kapil /]# chattr -i /etc/inetd.conf
TCP_WRAPPERS
By using TCP_WRAPPERS you can make your server secure against outside intrusion. The best policy is to deny all hosts by putting "ALL: ALL@ALL, PARANOID" in the "/etc/hosts.deny" file, and then explicitly list the trusted hosts that are allowed to access your machine in the "/etc/hosts.allow" file. TCP_WRAPPERS is controlled by two files, and the search stops at the first match.
/etc/hosts.allow
/etc/hosts.deny
Step 1
Edit the hosts.deny file (vi /etc/hosts.deny) and add the following lines:
# Deny access to everyone.
ALL: ALL@ALL, PARANOID
This means all services from all locations are blocked, unless they are permitted access by entries in the allow file.
Step 2
Edit the hosts.allow file (vi /etc/hosts.allow) and add, for example, the following line:
ftp: 202.54.15.99 foo.com
Here 202.54.15.99 is the IP address and foo.com the host name of one of your client machines that is allowed to use ftp.
Step 3
The tcpdchk program is the tcpd wrapper configuration checker. It examines your tcp wrapper configuration and reports all potential and real problems it can find.
* After your configuration is done, run the program tcpdchk.
[root@kapil /]# tcpdchk
Don't let the system issue file be displayed
You should not display your system issue file when people log in remotely. To do this, change the telnet line in your "/etc/inetd.conf":
telnet stream tcp nowait root /usr/sbin/tcpd in.telnetd
to look like:
telnet stream tcp nowait root /usr/sbin/tcpd in.telnetd -h
Adding the "-h" flag on the end will cause the daemon to not display any system information and just hit the user with a login: prompt. I will recommend to use sshd instead.
Change the "/etc/host.conf" file
The "/etc/host.conf" file specifies how names are resolved.
Edit the host.conf file (vi /etc/host.conf) and add the following lines:
# Lookup names via DNS first then fall back to /etc/hosts.
order bind,hosts
# We have machines with multiple IP addresses.
multi on
# Check for IP address spoofing.
nospoof on
The first option says to resolve the host name through DNS first, and then through the hosts file. The multi option determines whether a host in the "/etc/hosts" file can have multiple IP addresses (multiple interfaces, ethN).
The nospoof option says to guard against IP spoofing on this machine.
Immunize the "/etc/services" file
You must immunize the "/etc/services" file to prevent unauthorized deletion or addition of services.
* To immunize the "/etc/services" file, use the command:
[root@kapil /]# chattr +i /etc/services
Disallow root login from different consoles
The "/etc/securetty" file allows you to specify which TTY devices the "root" user is allowed to login . Edit the "/etc/securetty" file to disable any tty that you do not need by commenting them out (# at the beginning of the line).
Restrict who can su to root
The su (Substitute User) command allows you to become any other existing user on the system. If you don't want anyone to su to root, or want to restrict the "su" command to certain users, add the following two lines to the top of your "su" configuration file in the "/etc/pam.d/" directory.
Step 1
Edit the su file (vi /etc/pam.d/su) and add the following two lines to the top of the file:
auth sufficient /lib/security/pam_rootok.so debug
auth required /lib/security/pam_wheel.so group=wheel
Which means only members of the "wheel" group can su to root; it also includes logging. You can add the users to the group wheel so that only those users will be allowed to su as root.
Shell logging
The bash shell stores up to 500 old commands in the "~/.bash_history" file (where "~/" is your home directory) to make it easy for you to repeat long commands. Each user with an account on the system has a ".bash_history" file in their home directory. The bash shell should store a smaller number of commands, and the file should be deleted when the user logs out.
Step 1
The HISTFILESIZE and HISTSIZE lines in the "/etc/profile" file determine the number of old commands the ".bash_history" files for all users on your system can hold. I would highly recommend setting HISTFILESIZE and HISTSIZE in the "/etc/profile" file to a low value such as 30.
Edit the profile file (vi /etc/profile) and change the lines to:
HISTFILESIZE=30
HISTSIZE=30
Which mean, the "Bash_history" file in each users home directory can store 20 old commands
and no more.
Step 2
The administrator should also add the line "rm -f $HOME/.bash_history" to the "/etc/skel/.bash_logout" file, so that each time a user logs out, their ".bash_history" file will be deleted.
Edit the .bash_logout file (vi /etc/skel/.bash_logout) and add the following line:
rm -f $HOME/.bash_history
Disable the Control-Alt-Delete keyboard shutdown command
To do this, comment out the line listed below (with a "#") in your "/etc/inittab" file. Edit the inittab file (vi /etc/inittab) and change the line:
ca::ctrlaltdel:/sbin/shutdown -t3 -r now
To read:
#ca::ctrlaltdel:/sbin/shutdown -t3 -r now
Now, for the change to take effect type in the following at a prompt:
[root@kapil /]# /sbin/init q
Fix the permissions under "/etc/rc.d/init.d" directory for script files
Fix the permissions of the script files that are responsible for starting and stopping all your normal processes that need to run at boot time. To do this:
[root@kapil /]# chmod -R 700 /etc/rc.d/init.d/*
This means only root is allowed to read, write, and execute script files in this directory.
Hide your system information
By default, when you log in to a Linux box, it tells you the Linux distribution name, version, kernel version, and the name of the server. That's plenty of information for a cracker to start an attack on your server. You should just show users a "Login:" prompt.
Step 1
To do this, edit the "/etc/rc.d/rc.local" file and place a "#" in front of the following lines, as shown:
# This will overwrite /etc/issue at every boot. So, make any changes you
# want to make to /etc/issue here or you will lose them when you reboot.
#echo "" > /etc/issue
#echo "$R" >> /etc/issue
#echo "Kernel $(uname -r) on $a $(uname -m)" >> /etc/issue
#
#cp -f /etc/issue /etc/issue.net
#echo >> /etc/issue
Step 2
Then remove the "issue" and "issue.net" files from the "/etc" directory:
[root@kapil /]# rm -f /etc/issue
[root@kapil /]# rm -f /etc/issue.net
Disable unused SUID/SGID programs
A regular user will be able to run a program as root if it is set SUID root. A system administrator should minimize the use of SUID/SGID programs and disable those which are not needed.
Step 1
* To find all files with the `s' bits from root-owned programs, use the command:
[root@kapil /]# find / -type f \( -perm -04000 -o -perm -02000 \) -exec ls -lg {} \;
* To disable the suid bits on selected programs above, type the following commands:
[root@kapil /]# chmod a-s [program]
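For example, if ordinary users on your system never need to mount floppies or CD-ROMs themselves, you might (this is a judgment call, and these are just hypothetical candidates) remove the bits from mount and umount:
[root@kapil /]# chmod a-s /bin/mount /bin/umount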
After following the above security guidelines, a system administrator can maintain a basic level of system security. Some of these tasks are a continuous process: the administrator has to keep following the guidelines to keep the system secure.
Written by: Kapil Sharma
Email:
Website: http://www.linux4biz.net
[Kapil Sharma is a Linux and Internet security consultant. He has been working on various Linux/Unix systems and Internet security for more than 2 years. He maintains the web site http://www.linux4biz.net, providing free as well as commercial support for web, Linux and Unix solutions.]
Writing code to access the hardware under Linux is quite a bit more difficult, since (in most cases) a separate device driver must be coded and installed into the kernel. The protection mechanisms that prevent misbehaving user processes from harming the system also stymie the diagnostic developer. This article explains porting 16-bit MS-DOS diagnostics source code, developed using Visual C++ 1.52, to the GNU C++ compiler and the Linux OS environment.
A user process running with root privilege can access I/O ports and memory, but can't disable or handle interrupts. PCI configuration space access isn't safe either because consecutive writes to the configuration address and data ports are required. Tight control of time delays isn't possible either since the user process can be put to sleep at any time. See Linux Device Drivers section "Doing it in User Space" in [RUB] Chapter 2 on page 36.
The GCC (GNU C Compiler) that comes with the Linux distribution handles inline assembly code, and both the CPUID and RDTSC instructions will execute correctly in a user context. GCC also offers 64-bit signed and unsigned integer math with the long long types. These capabilities cover the last bullet above. A single Linux module will handle all of the others except the interrupt handlers.
Modules may also dynamically register nodes in the /proc filesystem. The most common use of a /proc node is to deliver a buffer of data to the reader.
The number of modules required can be cut down considerably by creating a single module to provide general-purpose access to the kernel for each of the resource classes. The Linux Wormhole driver module provides the following services:
ioctl (int fd, int request, char *argp);
The fd parameter is the file descriptor for the module obtained by a previous open(2) call.
The request parameter identifies the service required of the module. A symbol for example WORM_IOP_R is defined for each request. Each request also has an associated structure type. The argp parameter points to the structure provided by the caller. Data is passed to and from the module through this structure. The kernel calls copy_from_user and get_user_ret are used by the module to get data from the user. The kernel call put_user_ret is used to write data back to user space. See asm/uaccess.h.
The Wormhole module source code uses the macros inb, outb, inw, outw, inl, outl provided by asm/io.h to implement the I/O port access. See the section "Using I/O Ports" in [RUB] Chapter 8 on page 164.
The first DOS diagnostics code delay function is based on the 18.2 Hz (54.94ms) DOS system clock. Linux modules can provide delays in increments of the system timer interrupt, which is currently 10ms. The Wormhole ioctl request WORM_DELAY_MS takes the number of milliseconds to delay as the argument. The driver determines the smallest system timer value that will occur after the delay has expired, sets a timeout, and sleeps. The driver will wake up and return to the user process when the timeout occurs.
The second DOS diagnostics code delay function performs microsecond resolution delays based on the time it takes to write to a non-decoded ISA bus port. This is somewhere in the neighborhood of 700-1000ns. Linux offers the kernel function udelay defined in <linux/delay.h>. See the section "Short Delays" in [RUB] Chapter 6 on page 137. This function bases the delay time on a software loop calibrated at boot time. Experiments show the delay time to be accurate with the best accuracy for delays below 100us. This function is only suitable for small delays (up to around 1ms) since it busy waits, preventing other tasks from running. The Wormhole ioctl request WORM_DELAY_US passes the number of microseconds to delay to the kernel function udelay.
In Linux the processor is running in protected mode and the memory management unit is enabled. The desired physical memory location must be mapped via the page tables and its virtual address must be known. Linux offers the kernel function vremap, which will create the virtual to physical mapping for a block of memory. The physical address must be above the top of DRAM memory. Kernel function ioremap can be used to map in memory-mapped devices and PCI memory. The Wormhole requests WORM_PCIMEM_R and WORM_PCIMEM_W will map a page, perform one 32-bit or 8-bit read or write access then unmap the page. See the section "High PCI Memory" in [RUB] Chapter 8 on page 175.
The Wormhole requests WORM_BIOSMEM_R and WORM_BIOSMEM_W access the System BIOS area below 1M. They use kernel macros readb, readl, writeb, and writel to perform the memory access. See the section "ISA Memory Below 1M" in [RUB] Chapter 8 on page 171.
The Wormhole driver performance is limited by the context switch overhead of the ioctl call. If thousands of operations are required the total time will be significantly longer than the time consumed by a dedicated module.
The Wormhole driver only does one access per call. If several accesses must be done atomically, with no intervening task switches, the Wormhole driver is unsuitable.
The Wormhole driver and Linux user process cannot offer real time response. In the diagnostics environment this problem can be limited by running with no users logged in. Otherwise a dedicated module is required.
The Wormhole driver does not control access to kernel resources. It is the responsibility of the caller not to break anything in the kernel, or change the state of device registers belonging to devices for which driver modules are running.
I'm writing this article after a couple of weeks of messing around getting our local school network hooked up to the net. Our problem was similar, I guess, to that of many schools: how do you give students' boxes access to the net, while restricting both certain types of content and certain services altogether (IRC)?
A minor extra point was that I wanted to separate our two computer labs' networks into different segments with some kind of packet filtering in between. As the number of computers went up, so did the collision rate.
I had the following material considerations to take into account:
By now, the network hardware setup was more or less clear, thusly:
The filtering server also runs our local web server (Apache).
Now came the interesting part: how was I to configure the lot into a working setup?
The Linux built-in firewall
My first idea was to use Linux's routing capabilities. You can set up just about any Linux box as a router to separate two or more ethernet segments. It just needs a card for each segment - not even necessarily running at the same speed. You then configure the kernel's built-in firewall to ignore packets that have source and destination addresses within the same segment, but to forward packets with source and destination addresses in different segments.
This can be a definite gain of speed as the number of collisions on an ethernet network goes up with the number of nodes on each segment - and each collision requires a time-out to retry sending the packet. So, for example, three segments with ten nodes each and a Linux firewall in between outperforms a single segment with all thirty nodes under normal and heavy traffic loads.
For more information, read the Firewall-HOWTO and the ipchains manual page.
A simple setup would then be to program the clients to use the net access server as their web proxy, and use the filtering server as a firewall. This is just about the most classical distribution of roles imaginable.
So why couldn't this work for me? The answer lies in the fact that to enable routing, both the client boxes and the net access server had to have the filtering server as their gateway. This worked fine as long as the ISDN wasn't up. But when ISDN went up, the default gateway on the net access server (running under Windows, remember?) became our ISP.
So a request emanating from a client box goes into the filter, and is forwarded to the net access server. The WinGate proxy does its stuff, and replies to our local client - but this reply is routed back off to our ISP ... and the client gets no reply.
The Squid proxy
As a second approach, I thought of using the squid proxy, cascaded under WinGate. This way, a client request goes to squid on the filter. Squid then determines if the request goes to the local server, or has to be forwarded to the WinGate machine:
And did this work? Yes, very well ... as long as the client requested either the local server or an internet website by giving its IP address. The problem was with DNS.
The squid proxy has to determine where to send each request. So even if you give it a default cascaded proxy, it still tries to perform DNS address resolution on each URL it receives.
I then tried the following: set up WinGate as a DNS proxy as well as a www proxy, and tell the filter to use the net access box as its main DNS. The requests went through to WinGate, but got no reply from the 'net. Confounding ... and the client box gets a message from squid complaining it can't proceed with address resolution. Needless to say, the net access server's DNS setup works well on its own.
Another approach was to use the Apache webserver's proxy capabilities. This worked just as well - and just as badly - as squid.
Recommended reading: all 1907 lines of /etc/squid/squid.conf . Same for /etc/httpd/conf/httpd.conf .
Homebuilt Java proxy
As you may imagine, I was at this time fresh out of ideas. And school-in was 48 hours away. So I took the only reasonable decision - write my own proxy daemon in Java, to be installed on the filter.
This may take a bit of explaining. First of all, why is writing a proxy daemon reasonable? In this case, the proxy just had to:
There is no caching, no address resolution, nothing else to be done.
Secondly, why is it reasonable to write such a program in Java, when network programming is traditionally done in C? Mainly because programming sockets in C is a pain, and doing it in Java is painless. All the relevant classes are available in java.net.*: Socket, ServerSocket, DataInputStream and PrintStream are about all you need.
It is also as easy in Java as in C to fork off a process to handle each client connection separately. The difference is that in Java, one usually uses a thread, not a separate process. This has some advantages over the typical C solution: a process has its own memory allocation, etc., and so takes relatively long to establish, while a thread is an altogether lighter structure.
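To give an idea of how little code this takes, here is a minimal sketch of such a relay. Note that this is not the actual proxy.java, just an illustration of the approach; the class name, the upstream host "netaccess.example.lan" and the port numbers are all hypothetical placeholders.

import java.io.*;
import java.net.*;

// Minimal TCP relay sketch: accept client connections and pipe bytes
// to and from an upstream proxy, one thread per connection.
public class RelayProxy {
    static final String UPSTREAM_HOST = "netaccess.example.lan"; // the WinGate box (placeholder)
    static final int UPSTREAM_PORT = 8080;  // upstream proxy port (placeholder)
    static final int LISTEN_PORT = 3128;    // port the client boxes point at (placeholder)

    public static void main(String[] args) throws IOException {
        ServerSocket server = new ServerSocket(LISTEN_PORT);
        while (true) {
            final Socket client = server.accept();
            // One lightweight thread per client connection, as described above.
            new Thread() {
                public void run() {
                    try {
                        Socket upstream = new Socket(UPSTREAM_HOST, UPSTREAM_PORT);
                        // Relay client -> upstream in a second thread...
                        pump(client.getInputStream(), upstream.getOutputStream());
                        // ...and wait here for the upstream -> client reply to finish.
                        pump(upstream.getInputStream(), client.getOutputStream()).join();
                        upstream.close();
                        client.close();
                    } catch (Exception e) {
                        // A dropped connection simply ends this thread.
                    }
                }
            }.start();
        }
    }

    // Copy one direction of the conversation until EOF; returns the started thread.
    static Thread pump(final InputStream in, final OutputStream out) {
        Thread t = new Thread() {
            public void run() {
                byte[] buf = new byte[4096];
                int n;
                try {
                    while ((n = in.read(buf)) != -1) {
                        out.write(buf, 0, n);
                        out.flush();
                    }
                } catch (IOException e) {
                    // Socket closed by the other side; fall through.
                }
            }
        };
        t.start();
        return t;
    }
}

Restricting services is then just a matter of deciding, in the accept loop, which connections to relay at all; content filtering would hook into the byte-copying loop.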
Finally, it works. To be quite honest, it works more quickly than I thought, and what was initially conceived as a quick solution looks to stay as a permanent one. In fact, with 20-30 clients going full steam on Internet, the limiting factor is ... our ISP.
Future improvements
Just one on my TODO list: as it stands, there is no page content filtering. I will work on that later on (and cut out web-based chats at the same time).
My source code is here: proxy.java, naturally under GPL. Please send me any comments you may have.
First there was a company called NaviSoft, founded by a couple of wizard Unix programmers. They set out to create the best web publishing system on the planet, and they were well on their way. It was 1994 and their product, a web server called NaviServer, was multi-threaded from the ground up (Apache is still trying to catch up with this particular feature), had a tightly integrated scripting language, a good extension API and database connectivity built in. It was so good that AOL decided in 1996 to use it to power the core of their business - the multiple AOL web properties. But buying programs is not how AOL works - they bought the whole company to make sure that the software would grow as fast as AOL's needs. Thus NaviServer was renamed to AOLserver. In 1999 Philip Greenspun - MIT researcher, accomplished photographer and web developer in one person - convinced AOL that it should open-source AOLserver for the mutual benefit of AOL and the public at large. So they did. After a few months of frantic code cleanup, AOLserver 3.0 debuted as an open-source web server whose development is largely community driven. AOLserver 3.1 was released in September 2000.
AOLserver is robust, stable and scalable; after all, it has been perfected for years by a tight group of programming wizards. Fortunately, you don't have to take my word for it: AOLserver has been battle-tested in the most demanding environments. It is known to serve 30 thousand hits per second on AOL sites. ArsDigita, a web development company, has built numerous web sites (www.photo.net, www.scorecard.com, www.away.com) that routinely serve millions of hits per day. The bottom line is: AOLserver has proved to be extremely stable and scalable by serving some of the most popular sites on the Internet today.
You want a web server to support the four most popular programming paradigms. You can think of a web service in object-oriented terms: a web server is an object whose methods are URLs. Users invoke methods by requesting a URL from within their web browser. The simplest way a web server can respond to such a request is to fetch a static HTML file from the file system and send its content to the user. Doing anything more complicated means running a computer program which will generate an HTML page. One way to do that is by extending the web server itself using its C extension API. Every popular web server (Apache, IIS, Netscape) has some implementation of this idea. AOLserver provides a well-thought-out C extension API that makes writing modules to extend its functionality really easy. As an example: the source code embedding PHP in AOLserver is half the size of the similar code that embeds PHP in Apache (this "benchmark" should be taken with an appropriate grain of salt, but it gives a rough idea of extension API quality). Having praised the API, it should be noted that this is a very slow and error-prone way of writing web services (on any web server): you have to code in a very low-level language (C), debugging is a nightmare, and the smallest mistake can crash the whole server.
To remedy those shortcomings, the CGI protocol was created. This protocol is supported by all major web servers, including AOLserver. The idea is very simple: upon a page request, the web server executes a program, and whatever the program sends to its standard output is sent back to the browser. The greatest advantage is that it can significantly shorten development time, since the programmer can use the best tool for the job (which usually means a scripting language like Perl, Tcl or Lisp, or simply the language he is most familiar with). The disadvantage is that it's slow (for each request a program needs to be executed, which is a very expensive operation) and thus doesn't scale well.
To improve performance, web servers started to directly embed scripting languages; the most popular examples are mod_perl in Apache and PHP. Since the interpreter is linked into the web server executable, it's no longer necessary to fork an external program to execute the script, which saves a lot of time. This approach preserves the fast-development advantage of CGI scripts, but the cost is that it limits you to one particular scripting language (which may or may not be a disadvantage, depending on how well the developer knows that language). AOLserver's story in this department is very compelling: it is the only major web server that comes with a tightly integrated scripting language (Tcl) out of the box. If for some reason you dislike Tcl, you can use PHP, Python or Java (and there is work in progress to add Perl support).
The last programming paradigm is server-side includes, i.e., code embedded in HTML pages. When a page is served, it is parsed by the server: the HTML code is left untouched, the embedded code chunks are executed and replaced by their output, and the resulting page is sent to the browser. The most popular example of this paradigm is ASP pages in IIS. AOLserver provides developers with a similar feature: ADP pages, which allow you to embed Tcl code inside HTML.
You want your web service to be fast and efficient. AOLserver's multi-threaded architecture gives you a performance advantage over process-based web servers (e.g., Apache 1.3.x; Apache 2.0 is being rewritten as a multi-threaded server but hasn't yet reached maturity). If a web server is based on processes, it has to create a new process to serve each HTTP request; AOLserver only has to spawn a new thread, which is much faster.
You want the ability to share data between scripts. There are many uses for such a feature; the simplest example would be to count how many times a given page/script has been called. It's not easy to achieve this in a process-based web server, because processes do not share dynamically allocated memory. As an example, if you use mod_perl scripts in Apache, you can have a global variable and increment it, but if you think that this will tell you the total number of times the script has been executed, you're in for a surprise: this value is actually per-script-per-process, and since Apache pre-forks multiple processes and (as far as the programmer is concerned) unpredictably assigns them to execute scripts, your counter won't give you an accurate number. It's possible to overcome this using shared memory (or by storing the data in a database), but it is so cumbersome and non-standard that it's not popular among Apache developers. In AOLserver it's child's play, thanks to the fact that threads share dynamic memory, and to the excellent built-in nsv interface.
Since dynamic web services usually have to store data in, and retrieve data from, a database, you want your web server to be able to talk to databases. AOLserver comes with an exceptionally good, standardized and fast database connectivity API. It's fast because it uses connection pooling, i.e., database connections are opened when the web server starts and subsequently reused among scripts. The alternative (used, e.g., in PHP3) is to open a connection at the beginning of a script and close it at the end; this approach is much slower.
Standardized means that you use the same API to send SQL commands regardless of the database server used (by contrast, in PHP each database has its own set of APIs). It's easier to port the code when switching databases (SQL statements still need to be ported, but that just shows how standard the SQL standard is).
To top it off, drivers exist for the most popular databases: Oracle, PostgreSQL, MySQL, Informix, Interbase, Solid, DB2.
AOLserver provides developers with more basic building blocks for dynamic web services than any other web server. Most web services have to solve many similar problems and provide:
To get more information, check these sites:
Two years ago, in the October 1998 Linux Gazette, there appeared a brief article that started a process that I do not believe any of us had any idea would go as far as it has.
As I mentioned in my article last October, my original article outlined the reasons why I felt a professional certification program would benefit Linux. It concluded with several questions and asked how I could join the discussion:
If you agree that a certification program can be beneficial for the growth of Linux, how do we as a community go about addressing the points I made above about creating a certification program? Do we create another mailing-list or newsgroup? (Does such a group or list already exist? If so, I have so far failed to find it.) Do we meet at a conference? ...
...I don't necessarily have the answers - but I would like to participate in the discussion. If someone can suggest the appropriate forum in which this discussion should take place (or is currently taking place!), please let me know.
Two years ago, we had no idea that what we were beginning would become the Linux Professional Institute. We had no clue of the tremendous support we would receive, not only from members of the Linux community, but also from the larger IT, training and publishing communities. Nor did we know of the significant financial support we would receive. We had no idea how incredibly expensive all of this would be to pull off. Nor did we know how significantly LPI would change some of our lives.
And yet... two years later, we have deployed the two exams of Level 1. As I write this, the exams are completing the final stages of the incredibly long and comprehensive development process our effort has involved. And our approach of NOT endorsing or approving any single way of preparing for our exams has paid off, with many different ways for people to prepare now available.
There are so many people to thank that it is next to impossible to even begin to list them all. We have tried with web pages thanking people who have assisted us in 1999 and 2000, but even those lists fall short. I would refer you to the articles listed below to understand both what we have gone through and also who should be thanked. We have been extremely grateful for all the support of people within the Linux community and also within the larger IT world. We would also like to thank the Linux Gazette for providing the forum that helped launch our effort, and for continuing to help get our message out. They, and so many other Linux magazines, journals and web sites, have been instrumental in helping the world learn about our program.
LPI began with a fairly simple idea - if there is to be certification for Linux, which the larger IT and training industry pretty much determined would be inevitable, then that certification should be controlled by the actual Linux professionals working with the operating system, and not by any one Linux vendor or any publishers or training/courseware providers. Furthermore, candidates should have the freedom to choose how they prepare for the exams, including the option of not taking any classes at all and simply studying the exam objectives. The exams should be available globally and as inexpensively as possible, and should use standard industry practices to ensure that they are legally defensible, statistically valid, and able to stand as equals to, or better than, other existing IT certifications.
It's been a very long road with plenty of joyous moments and plenty of rough spots. But working together, we have done it! Yes, there is still a long way yet to go, and there is much ahead for us to do. There are many more challenges ahead, and we will need the active support and participation of many more people to meet those challenges (please contact Wilma Silbermann < > if you would like to volunteer). We will need help in many different areas... we will need new people providing leadership... we will need new financial sponsors... we will need more people to write and speak about LPI. But based on what I have seen in the past two years, I am more confident than ever that we will continue to build LPI to be a premier certification program.
Yet on this October day, I believe we should take this moment to pause, sit back and appreciate all that has been done by so many different people. We thank everyone who has been involved for your past and continued support and look forward to continuing to work with you all to move this program on to even greater heights!
And so much of it began here, with a little article and, most importantly, all of the people who responded back to say that they, too, wanted to help...
by Dan York
Linux Certification Part #1, October 1998
Linux Certification Part #2, November 1998
Linux Certification Part #3, December 1998
Linux Certification Part #4, February 1999
Linux Certification Part #5, Mid-April 1999
Linux Certification Part #6, July 1999
Linux Certification Part #7, October 1999
by Ray Ferrari
Linux Certification Part #8, February 2000
Linux Certification Part #9, June 2000
Linux Certification Part #10, September 2000
Top 15 of 1071 Total User Agents

# | Hits | % of Hits | User Agent
---|---|---|---
1 | 880501 | 38.63% | MSIE |
2 | 829419 | 36.39% | Netscape |
3 | 269986 | 11.85% | Wget/1.5.3 |
4 | 44209 | 1.94% | Teleport Pro/1.29 |
5 | 29842 | 1.31% | WebCopier |
6 | 25202 | 1.11% | testspider |
7 | 11080 | 0.49% | HTTrack 2.0 |
8 | 8966 | 0.39% | AVSearch-3.0(EoExchange/Liberty) |
9 | 8432 | 0.37% | Opera 4.0 |
10 | 7927 | 0.35% | AvantGo 3.2 |
11 | 7438 | 0.33% | Slurp/2.0-BigOwlWeekly (spider@aeneid.com; http://www.inktomi |
12 | 7107 | 0.31% | GETWWW-ROBOT/2.0 |
13 | 5985 | 0.26% | Slurp/2.0-RedtailCrawl (slurp@inktomi.com; http://www.inktomi |
14 | 5160 | 0.23% | Konqueror/1.1.2 |
15 | 4717 | 0.21% | sitescooper/3.0.0beta (http://sitescooper.cx) libwww-perl/5.4 |
Usage has been hovering at 80,000-90,000 readers (= unique IPs) per month, or 135,000-155,000 visits (counting hits from the same IP within a short time period as one visit). Of course, some IPs represent multiple readers, but on the other hand, readers with dynamic IP addresses are counted once for each address they visit from, so it probably evens out. This includes only www.linuxgazette.com, not the mirrors.
Why would somebody ask you to write "VOID" on the check you're sending them as payment? Doesn't that mean he won't get paid?

It's no secret that the Internet has very recently developed into the hottest marketing medium since television! Those with the skills to take advantage of this new medium are starting to make some very serious money.
My offer to you is this. You send me $39 bucks I will mail You my E-mail Marketing Kit CD Disk containing the following information and software.
Bla bla bla The Bulk E-mail Handbook bla bla bla The Targeted Direct E-mail Marketing E-Book bla bla bla...
As if this wasn't enough, hold on to your seat, I am even going to give you absolutely FREE the following software:
- 1.) A copy of CHECKER CHECKS BY FAX SOFTWARE... This is a fully functional program not a crippled demo. Now you can take payment from your customers by Fax, Phone or E-mail simply by taking their Checking account information.
- 2.) A copy of smtp lookup software that will find you foreign mail servers you can use to send your mail completely undetected. This software will test mail through hundreds of servers an hour. I will even Supply you with a list of 16000 foreign domains to insert into your testing.
And I will throw in 500,000 Fresh E-mail addresses. If you are selling a product or service these are the addresses you want.
SPECIAL BONUS if you order your CD By 9/30/00 I will supply you with a copy of E-mail Software you can use to send your message to millions and a copy of E-mail list extracting software you can use to extract E-mail Addresses from all over the Internet.
Just make out your check for $39 payable to X and write void across the face then tape or glue it to the order form at the bottom of this page and you will have your E-mail Marketing Information Kit mailed the same day.
Read paragraph "1." again. He'll send you software which allows you to "take payment from your customers ... simply by taking their Checking accunt information". Thus, he has software which allows him to take payment from you simply by taking your checking account information.
When do people normally ask for a voided check? When you're setting up automatic payment with them, of course! So you can pay the monthly electric or ISP bill without having to write a check. But this is a one-time payment, isn't it? Or is it?
No wonder he tells you to write "VOID" on the check. He doesn't plan on cashing it anyway. Now the question becomes, will he put the order through at $39? Or $399?
Happy Linuxing.
Michael Orr
Editor, Linux Gazette,