Table of Contents:
Editor: Michael Orr
Technical Editor: Heather Stern
Senior Contributing Editor: Jim Dennis
Contributing Editors: Ben Okopnik, Dan Wilder, Don Marti
TWDT 1 (gzipped text file) TWDT 2 (HTML file) are files containing the entire issue: one in text format, one in HTML. They are provided strictly as a way to save the contents as one file for later printing in the format of your choice; there is no guarantee of working links in the HTML version.
This page maintained by the Editor of Linux Gazette, Copyright © 1996-2001 Specialized Systems Consultants, Inc.
Send tech-support questions, answers and article ideas to The Answer Gang <>. Other mail (including questions or comments about the Gazette itself) should go to <>. All material sent to either of these addresses will be considered for publication in the next issue. Please send answers to the original querent too, so that s/he can get the answer without waiting for the next issue.
Unanswered questions might appear here. Questions with answers -- or answers only -- appear in The Answer Gang, 2-Cent Tips, or here, depending on their content. There is no guarantee that questions will ever be answered, especially if not related to Linux.
Before asking a question, please check the Linux Gazette FAQ to see if it has been answered there.
Bryan Anderson wrote an article in the August 2001 Linux Gazette titled 'Make your Virtual Console Log In Automatically'.
Many years ago, before the Web when terminals mattered a lot more, I spent many hours combing through kernel code and experimenting to figure out process groups, sessions, controlling terminals, job control, SIGINT, SIGHUP, and the like. I could write a long article on it, but I think it's really arcane information.
Thu, 2 Aug 2001 16:39:28 -0700
brad harder ()
I'd be interested to read what Bryan has to say about this subject ...
-bch
Thu, 16 Aug 2001 13:27:50 +0200
Yann Droneaud ()
Hi,
I read the article written by Bryan Anderson in the August 2001 Linux Gazette titled 'Make your Virtual Console Log In Automatically'. The last section, about process groups and controlling terminals, was too short for me.
I would be happy if Bryan could write a technical article about this subject, as suggested by him and the editor. I'm wondering whether his knowledge could help me.
PS: my current knowledge is based on an approximate reading of the bash source code and the GNU libc manual (info).
-- Yann Droneaud
What combination of open source software should be used to create a portal site? How could a beginner build and test such a site?
A handful of the Answer Gang are just starting to give him links to some related software, but an article from someone out there who has already had the experience would be even better.
I work as a technology consultant for a small University Centre in the South of Brazil ... we have migrated all of our academic/administrative systems to Free Software, developing the SAGU system.
BTW, I am a guest speaker at the Annual Linux Showcase, where I will be presenting our SAGU system.
Well, let me know if you like the idea and I will produce an article.
Thanks, Cesar, we'd love to see your article. It falls solidly into the "real life experiences" category defined in our author guidelines. You should look there for the upcoming deadlines, and submit to .
You may also find it interesting that we host a "Source Forge" site at "http://codigoaberto.org.br", where we have more than 80 hosted projects, from people all over Brazil.
Cesar Brod
Univates/Brod Tecnologia
Gentle Readers: If you have broad-reaching projects that you think make Linux fun and more useful, we encourage you to consider submitting an article too!
This is an exchange regarding CUP, the Java parser generator covered by Christopher Lopes in issue 41.
On Thu, 28 Jun 2001 18:16:59 +0100 Xavier wrote:
I just looked at your issue 41 (I know it is not really recent ...) but in the article by Christopher Lopes which talks about CUP, there is a mistake...
I tested it and saw that it didn't work correctly in all cases. In fact, it is necessary to give greater priority to the operator ' - '; if not, we have 8-6+9 = -7, because your parser computes (6+9 = 15) first and then (8-15 = -7). To solve this problem it is enough to create a state between expr and factor which represents the fact that the operator - has higher priority than +.
Cordially.
Xavier Prat.
On Wed, Aug 01, 2001 at 05:56:21PM -0500, Michael P. Plezbert wrote:
I just couldn't let this slip by.
You do NOT want to give the minus operator a greater priority than the plus operator, because then expressions like a+b-c would parse as a+(b-c), which generally is not what you want. (Algebraically, plus and minus are usually given the same priority, so a+b-c means (a+b)-c.)
In fact, giving the minus operator a higher priority in the CUP file (using CUP's priority capability) will not change anything given the grammar as written in the original article, since the grammar is unambiguous with regard to plus and minus.
The problem is that the lines in the grammar
expr ::= factor PLUS expr | factor MINUS expr
cause the plus and minus operators to be right-associative, when we want them to be left-associative.
The fix is to change the lines to
expr ::= expr PLUS factor | expr MINUS factor
This will make the grammar associate the plus and minus operators in the usual way.
(This may have been what the author of the previous mail meant, but the text was unclear and the link to the CUP file was broken.)
Michael
That broken link had been my fault (sorry) but it was fixed immediately when you let us know. Thanks! -- Heather
Michael is right... The fix is just to transform the rules of expr so that PLUS and MINUS become left-associative. That is what I had done in my preceding fix, but it's true that giving a higher priority to MINUS is, in fact, totally useless...
thanks.
Xavier PRAT.
Eh folks !!
Why don't you just remove all the factor productions (which are clearly schoolboy junk ...) and leave nothing between <expressions> and <terms>, so that the precedence directives can work freely, and there will be no problem:
ex.
precedence left MINUS, PLUS;
precedence left TIMES, DIVIDE;
and
expr ::= term
       | expr MINUS expr
       | expr PLUS expr
       | expr TIMES expr
       | expr DIVIDE expr
We needed a bit more clarity; originally we weren't sure what he was replying to:
Generally, the examples given along with development packages or teaching manuals should be considered merely as simple hints; if used 'as-is', extreme care should be taken ...
In the case of modern LALR parser generators with the feature of precedence-directives :
- the factor-type productions often present in examples (in grammars with expression productions) are error-prone and needlessly clutter grammars.
- thus factor-type productions should simply be left out, so that precedence rules can work freely, as expected.
Enjoy
Waldemar
On Fri, Aug 10, 2001 at 01:34:54PM -0700, Lindsey Seaton wrote:
Thank you everyone who helped to answer my question. The web page that was linked in one of the e-mails was very helpful, and has been added to my "favorites" list for future reference.
Thanks for letting us know. And when you know a bit more about Linux and are able to answer this question for somebody else, please do so. That's what keeps the Linux community active.
I was reading your article in the Linux Gazette about programming perl and I have a little problem in a simple script. This is the script that should open /var/log/messages and search for some text:
#!/usr/bin/perl -w
use strict

open(MESS, "</var/log/messages") or die "Cannot open file: $!\n";
while(<MESS>) {
    print "$_\n" if /(fail|terminat(ed|ing)|no)/i;
}
close MESS;
when I run the script the result is the following:
$ ./logs.pl
syntax error at ./logs.pl line 4, near ") or"
Execution of ./logs.pl aborted due to compilation errors.
Do you have a clue about what's going on?
I have a RedHat Linux with perl 5.6.0
I believe I've actually mentioned this type of error in one of the articles. It's a very deceptive one... and yet shared by all languages that ignore whitespace, due to the way the parser has to look at the code.
Look at line 4 carefully. Look at it again. Can't find anything wrong? That's because there isn't anything. Instead, take a look at the previous line of code, line 2 - it's missing a semicolon at the end! When that happens, Perl figures that you simply continued your statement further down - so, what it sees is
use strict open(MESS, "</var/log/messages")
at which point it realizes "Uh-oh. We've gone past anything that looks like valid syntax for the 'use' function - PANIC TIME!"
The lack of a terminator on a previous line always shows up as an error on the current line.
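For the record, here is the same script with the statement terminator restored; the semicolon after "use strict" is the only change:

#!/usr/bin/perl -w
use strict;

open(MESS, "</var/log/messages") or die "Cannot open file: $!\n";
while(<MESS>) {
    print "$_\n" if /(fail|terminat(ed|ing)|no)/i;
}
close MESS;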
Hey,
Just wanted to drop a quick line and say thank you for your Learning Perl series in Linux Gazette. I very much enjoyed your writing style, technical depth, and approach ... I picked up a lot of useful tips, and I've been using Perl for quite a while.
Keep up the excellent work.
-- Walt Stoneburner
Per the request of one of our mirrors in Germany, I have added a provision for our mirror sites who want to run their own search engine. Starting with this issue, the Search link on the home page and the TOC page has changed from "Search" to "Search (www.linuxgazette.com)".
Mirrors with their own search engine may replace the text between
<!-- *** BEGIN mirror site search link *** -->
and
<!-- *** END mirror site search link *** -->
with a link to "(SITE.COM mirror)" on the TOC page, and "Search (SITE.COM mirror)" on the home page.
Contents:
Submitters, send your News Bytes items in PLAIN TEXT format. Other formats may be rejected without reading. You have been warned! A one- or two-paragraph summary plus URL gets you a better announcement than an entire press release.
The September issue of Linux Journal is on newsstands now. This issue focuses on Security. Click here to view the table of contents, or here to subscribe.
All articles through December 1999 are available for public reading at http://www.linuxjournal.com/lj-issues/mags.html. Recent articles are available on-line for subscribers only at http://interactive.linuxjournal.com/.
Click here to view the table of contents. US residents can subscribe to ELJ for free; just click here. Paid subscriptions outside the US are also available; click on the above link for more information.
Listings courtesy Linux Journal. See LJ's Events page for the latest goings-on.
As LG went to press, several events unfolded in the Sklyarov/DMCA case:
The Electronic Frontier Foundation (EFF) is speaking out against industry attempts to get controversial provisions from the US Digital Millennium Copyright Act (DMCA) put into the Free Trade of the Americas (FTAA) agreement. (The FTAA is a still-unfinished treaty that would create a free-trade zone covering North and South America.) "The FTAA organization is considering treaty language that mandates nations pass anti-circumvention provisions similar to the DMCA, except the FTAA treaty grants even greater control to publishers than the DMCA." If you feel strongly about this, the EFF invites you to try to change the situation and provides suggestions for the sort of letters you could write.
Because LG is a monthly publication, we cannot adequately address all the developments in the DMCA controversy. We refer you instead to the Linux Weekly News editorials, the EFF home page, and the various activist sites such as nodmca.org and freesklyarov.org.
LWN's August 30 editorial raises the irony of Dmitry possibly getting a longer prison sentence than "mere armed robbers, rapists and child molesters". It states, "One way or another, we are now seeing the degree of repression that the US is willing to apply to ensure that certain kinds of software are not written.... It takes very little imagination to picture a future where the general-purpose computer has been replaced by a 'trusted computing platform', and systems which do not 'seal data within domains' are treated as 'circumvention devices'. At what point, exactly, does Linux become an illegal device under the DMCA? In a world where programmers face 25-year sentences for code that was legal where they wrote it, this vision should not be seen as overly paranoid."
An older LWN editorial discusses attempts in Canada to insert DMCA-like provisions into its copyright law.
Meanwhile, Slashdot reports on an NPR article saying that many US radio stations are pulling the plug on their webcasting "due to concerns about advertising, royalties and the DMCA". Slashdot then reports on a CNN article about a study saying "people don't and won't purchase heavily restricted music online at higher prices for a less useful item." Slashdot then adds, "This is apparently a revelation to the music industry."
Total Impact has also just announced availability of its new Centricity line of Render Engines; beta tests are "creating anticipation that Centricity systems will revolutionize high performance computing with their small size, high processing speeds, low power requirements and ease of use".
MEN Micro's new PC-MIP mezzanine card, featuring a 48-bit TTL I/O interface, may allow embedded system designers to quickly implement basic digital I/O without an involved development process. For simple digital I/O, such as a control switch or an actuator, the new PC-MIP card can be easily added to a single-board computer (SBC) or a PC-MIP carrier card, assuring rapid completion of the system's development. Through the MEN Driver Interface System (MDIS), the P13 is supported by drivers for a wide range of operating systems, including VxWorks, OS-9, WindowsNT and Linux.
Keyspan has announced new versions of its USB PDA Adapter and its High Speed USB Serial Adapter. In addition to "off-the-shelf" support for Linux 2.4, Keyspan's Serial-to-USB Adapters also support Windows 98, Windows Me and Windows 2000, as well as Mac OS 8.6 or higher. Beta drivers for Mac OS X are also available.
SAIR Linux and GNU Certification's quarterly newsletter, SAIR Linux and GNews issue 9, is available for you to view online.
IBM has announced the new IBM "Start Now" Solutions for e-business, a family of offerings to help small and medium businesses (SMB) rapidly implement powerful, cost-effective, e-business solutions. The eight Start Now Solutions, including three Linux-based solutions, "fulfill the requirements of e-business--from initial Internet access, through e-mail, research and information, Web site management, simple and complex e-commerce, business intelligence, integrated activities and new business opportunities". For more information on IBM Start Now solutions, visit http://www.ibm.com/software/smb.
The book "Advanced Linux 3D Graphics Programming" is now available for purchase. It is the follow-up volume to the first book "Linux 3D Graphics Programming". This second volume provides programmers who are experienced in both Linux and fundamental 3D graphics concepts with a well-rounded perspective on 3D theory and practice within the context of programming larger interactive 3D applications such as games. It covers such topics as texture and light mapping, creating compatible morph targets in Blender, creating and importing IK animations into a 3D engine, BSP trees (node and leaf based), portals, level editing, particle systems, collision detection, digital sound, content creation systems, and more. A table of contents is viewable online and if you like what you see, purchase online.
UnixBoulevard.com is a free, up-and-coming site designed to be a choice web location for individuals and organizations that use or manage Unix-based servers or networks. The site provides product and technical support information, as well as a forum for UNIX community members to interact.
Linux NetworX, a provider of powerful and easy-to-manage cluster computing solutions, announced today that seismic imaging solutions company GX Technology has purchased an 84-processor Evolocity computer cluster to be used in its oil and gas exploration efforts. This is the third cluster computer system provided to GX Technology by Linux NetworX.
Linux NetworX optimized the Evolocity cluster to work with GX Technology's seismic imaging applications to perform processes such as wave equation and Kirchhoff pre-stack depth migration and prestack time migration. The 42-node Evolocity system includes 84 1.2 GHz AMD Athlon MP processors, with each node containing 1.5 GB of memory, and two 10/100 Ethernet networks for redundancy. GX Technology also utilizes the Linux NetworX ClusterWorX management software tools, and signed an on-going service agreement to ensure system stability.
Linux project in Mexican schools (Red Escolar) fails, largely due to "winmodem" issues it seems. More positively, Linux seems to be finding a role in a Colorado school district. News courtesy Slashdot.
CanadaComputes.com have a round-up of the Linux web browsers currently available.
Linux Journal web articles:
Suite101.com have added a new Linux site aimed at explaining to Windows users what it might be like if they changed to Linux.
The Register have reported that several Red Hat 6.2 systems with default installation were cracked in 72 hours during a security research project that intentionally left them online for intruders to find.
Evaluation of Windows XP beta compared to Linux Mandrake 8.0 from the point of view of usability and aesthetics. The review says Windows is getting better than it used to be; Microsoft is learning some of Linux's tricks.
RPM Search page on the User Friendly site.
Slashdot had a recent talkback thread on which is the best Linux distribution for a newbie.
The State of Corporate IT: A case for Linux. "By many accounts, the largest cost of ownership increases that corporations have faced have been licensing related. As NT has become a mainstay, licensing terms have become more specific and more expensive."
This story traces a 7,000-employee company that switched from Unix/Novell to NT for "ease of administration and a lower cost of ownership, but years into the transition, administering and licensing costs soared.... While the previous Unix and Novell platforms had handled file, print and mail servers on a single server, NT now needed one machine for each service plus a dedicated backup for each..... Red Hat brought a single Pentium class system for a site visit and thanks to the early legwork their engineers had done, were able to integrate the box into the network and take over all file and print server requests for one busy segment within four hours. The system ran for the next 10 business days without any downtime, something NT machines had not been able to do very often.... Red Hat had proven to be a helpful ally. Instead of trying to push a whole-scale replacement of the infrastructure, they had worked to supplement it.... Some months later, with the market still soft and the bottom line increasingly important to shareholders, the team feels they made the right decision." Courtesy Slashdot.
The Los Angeles Times have a science fiction story about a future world in which Windows is everywhere, causing worldwide catastrophe. Courtesy Slashdot.
TimeGate Studios, Inc. and Loki Software are excited to announce that the demo for Kohan: Immortal Sovereigns on the Linux platform is now available for free download at http://www.lokigames.com/products/demos.php3. For more information, please visit the official game site. Pre-orders can be placed from the Loki webstore.
No Starch Press and Loki Software have announced the launch of the complete and authoritative guide to developing games for Linux. PROGRAMMING LINUX GAMES: LEARN TO WRITE THE GAMES LINUX PEOPLE PLAY (August 2001, 1-886411-49-2, $39.95, paperback, 432 pp., http://www.nostarch.com/?plg) guides readers through important Linux development tools and gaming APIs, with a special focus on Simple DirectMedia Layer (SDL). Written by the gaming masters at Loki Software, this book is the ultimate resource for Linux game developers. Available in bookstores, from Loki Software (http://www.lokigames.com/orders), or from No Starch Press (1-800-420-7240, http://www.nostarch.com).
eVision is excited to announce the release of version 2.1 public beta of the eVe visual search Java-based SDK for Linux. The toolkit lets Linux developers create search applications that use images and visual similarity rather than keywords and text. The user selects a sample query image or partial image, then the search engine finds and ranks other images that are visually similar with respect to the objects in the image and attributes such as color, texture, shape and 3D shading. This technology can be applied to image content, video content, audio content and any other digital pattern. You can sign up to download a free 500-image limited version of the SDK at http://www.evisionglobal.com/developers/sdk/
Great Bridge, a provider of commercial service and support for the open source database PostgreSQL, announced this morning an open source application development platform that uses the world's most advanced open source tools. Great Bridge WebSuite is an integrated open source platform that combines the PostgreSQL database, the PHP scripting language and the Apache Web server for building high-performance Web-based applications.
Appligent, Inc. is offering a new utility free of charge. APStripFiles is a command line application that removes attached or embedded files from PDF documents. It enables you to protect your systems from malicious unwanted PDF file attachments.
APStripFiles for AIX, HP-UX, Sun Solaris and Red Hat Linux can be downloaded free from http://www.appligent.com/newpages/freeSoftware_Unix.html
There is no guarantee that your questions here will ever be answered. Readers at confidential sites must provide permission to publish. However, you can be published anonymously - just let us know!
From Mike Orr
Answered By Ben Okopnik
Just got a disturbing disk error. It was on my 486 laptop, which I've only used for reading and writing text files over the past few years because of its limited capacity (16 MB RAM, 512 MB HD).
1) I was in vi, and it caught a SEGV. Fortunately, it was able to save its recovery file. I restarted vi, recovered the file, saved it, deleted the recovery file and went on typing. Then,
[Ben] Could be memory, could be HD...
2) I got an oops. Something about paging. I figured, common enough oops,
[Ben] Ah. This sounds like memory.
even though it's never happened on that computer, so I pulled out the power cable for a second and rebooted. (The battery had long ago stopped holding any charge.) Linux found that the HD had been mounted uncleanly (no duh) and started fsck. Fsck found two deleted files with zero dtime and fixed them. I was glad I had saved the file after recovering it since I'd deleted the recovery file. Then--
3) "Kernel panic: free list corrupted". I rebooted. Again the same error. What do you run when fsck doesn't work?? Is all my data gone bye-bye? Not that it was that much, and I was about to blast away the current (Debian) installation anyway and practice installing Rock Linux. (If, of course, the disk is good enough to be reformattable.)
4) A happy ending. I rebooted again to make sure I had the panic message right, and this time fsck completed and I got a login prompt. Quickly I tarred up my data and copied it onto a floppy.
I wonder if this will make Wacky Topic of the Month.
[Ben] Had that happen... oh, can't even remember now. Something crunchy happened, and required multiple fsck's. It would get a little further every time, and finally got it straightened out. IIRC, it took three or four reboots to get it - and I had exactly the same "if the salt have lost his savour, wherewith shall it be seasoned?" moment. Pretty scary to think that "fsck" doesn't work, just at the moment when it's the only thing that _can._ As far as I'm concerned, "fsck" should have a default "auto-restart" mode that can be interrupted with a 'Ctrl-C'; when it stops like that, the typical user's response isn't going to be "reboot and try again" - it's "Ohmygawd, MY MACHINE IS BROKEN!"
Doesn't fsck automatically restart sometimes? I know I've seen it do this, although the last time was early in the kernel 2.2 days. Is it an ex-feature? Or maybe Debian did it with a 'while' loop or something.
[Ben] Can't say. I've only had "fsck" run in 'repair mode' three times, all in the dim dark past; never saw it restart. I'm pretty sure all three were in, or before, the 2.0 days.
Of course, you can't interrupt an oops with a Ctrl-C. When an oops happens, the machine halts and must be reset.
[Ben] Hmm. Normal disk repair (fixing up inode dtimes and such) shouldn't produce an oops; theoretically, there is a large but fixed number of things that can be wrong, and there is supposed to be a programmatic response to each of them. The only reasons I could see for an oops to occur while "fsck" is running are 1) bad memory - which is an unrelated issue - or 2) the inode that contains "fsck" itself is damaged. Other than those, I can't see why a loop of the sort I suggested can't be written... really, I can't see ANY reason for "fsck" to freeze in the first place. It just sounds like some unaccounted-for cases that come up - and even that should be "catchable".
Sorry, I wasn't thinking clearly. An oops is most likely bad memory, a bad disk or cosmic rays. A kernel panic (in my experience) is more likely to be a programming, configuration or environment issue. In either case, the machine halts and you can't recover except by resetting it. What is curious is, is there a certain moment during disk activity where a SEGV or oops would leave the filesystem in a "free list corrupted" state? Intuitively, there must be.
[Ben] Mmmm... sure. I'm not a kernel expert by any means, but if the machine crashes while the free list is being updated, that would make it corrupt. Not that it's really a big deal, the way it would be if individual inode pointers got fried - but it's certainly a much better mechanism than FAT, where a couple of K worth of mis-written data can fry your entire drive contents.
The next question is, is it possible to retrieve the data after such an error (short of running a sector-by-sector analysis)? Apparently there is, and fsck does it, although it takes a couple runs to finish the repair.
[Ben] Sure; it would be an inode-by-inode analysis ("anything that's not a superblock, and is not owned by a file, and <a few other considerations that I can't think of at the moment> must be free space"), but a corrupted free list isn't that big of a thing. It's much easier to find out which blocks are really free, rather than trying to find which ones aren't _and_ how they're connected to the rest of the structure.
Too bad fsck can't somehow avoid causing a kernel panic or that the kernel can't figure out the situation enough to provide a more reassuring error message.
[Ben] Agreed. Tools of that kind, the "fall back if all else fails" kind, should run flawlessly.
The worst fsck case Jim Dennis ever had required him to run fsck 6 times, but it did eventually succeed in cleaning up the mess he had made. (He had told his video controller to use the address range which the hard disk controller actually owned. Typos can be really bad for you at that level.) The moral here is, if at first fsck does not succeed, don't give up all hope. You may prefer to reformat afterwards anyway, but you should get a decent chance to rescue your important data first. -- Heather
From Lindsey Seaton
Answered By Frank Rodolf, Madeline, Thomas Adam
Excuse me. I have a question
As a computer project, I was assigned to get on the computer and find out what Linux is and what it is used for. I don't know if it's an organization or part of HTML script or anything. Please e-mail me back with the answer. I just know so little about computers, and one name can mean so many different things on the internet. Until just now I had been spelling it wrong (linex), before I found out it is spelled linux.
[Frank] There are so many possible answers to that question, I won't even start to try to answer it.
What I can do, is send you to the list of Frequently Asked Questions (FAQ). The question you ask is the very first question in there. You can find it here:
http://www.linuxdoc.org/FAQ/Linux-FAQ/index.html
Thank you for your help.
[Frank] I hope the link helps you!
[Madeline] I just looked at the FAQ and noticed that they're really not too helpful for a beginner. So here's a more straightforward answer:
Like Windows and Mac OS, Linux is an operating system, which is a program that is in charge of organizing and running everything on your computer. Here is a definition of operating system: http://www.webopedia.com/TERM/o/operating_system.html
Unlike Windows and Mac OS, Linux is free, and the programming code that was used to create it is available to everyone. As a result, there are many versions of Linux (such as Red Hat, Debian, and SuSE) which are somewhat different but share the same foundation (called a "kernel"--this kernel is updated every so often by the creator of Linux, Linus Torvalds, and company). Linux is usually the operating system of choice for computer programmers and scientists because it is very stable and well-designed (not crashing randomly and often as Windows tends to do).
I hope this helps.
Madeline
[Mike] Thanks, Madeline, I was about to say something similar.
Many people also find Linux and other Unix derivatives more flexible than other operating systems.
[Thomas Adam] I don't really remember this address as being advertised as a "do your research/homework" one. Nevertheless, I can try and answer your question....
Firstly, your question is far too broad. There have been numerous books written about the history and use of Linux, and it is beyond the scope of my knowledge to tell you everything.
Considering that Thomas is "The Weekend Mechanic" and has written several articles for the Linux Gazette over the years, that's saying something significant. -- Heather
Linux was created from scratch in ~1991 by Linus Torvalds, a very gifted person from Finland. His goal was to create a Unix-like operating system. He was assisted by numerous loosely-knit programmers all over the world in producing the kernel, the "heart" of the operating system. Essentially, this is what "Linux" refers to.
Linux is an operating system, and is an alternative to the de facto operating system, "MS-Windows". Linux is a Unix-like operating system (as I have already said). There are many different "distributions" of Linux, which use different means of packaging data, either in RPM format, .tgz format, etc.
If you are interested, you could try Linux out (by using a floppy-based distribution, such as HAL91, available from the following:
http://www.itm.tu-clausthal.de/~perle/hal91
and then you can run Linux off a floppy disk. Bear in mind, however, that this will offer no GUI frontend.
I hope this has answered a little of your question, even if it is brief.
Answered By Jim Dennis
Problem: You're using a system at work that's on an internal (non-routable) IP address (as per RFC 1918), or that's behind a set of proxy servers or IP masquerading routers. You want to work from home, but you can't get into your system.
WARNING: This hack may be a violation of the usage policies of either of the networks involved! I'm describing how to use the tool; you assume all responsibility for HOW you use it. (In my case I'm the one who sets the policy; this is just a convenient trick until I get around to setting up a proper FreeS/WAN IPSec gateway.)
Let's assume that you have a Linux desktop or server "inside" and another one "at home" (obviously this trick will work regardless of where "inside" and "at home" really are). Let's also assume that you have OpenSSH installed at both ends. (It should work with any version of SSH/Unix and possibly with some Windows or other clients, I don't know).
As root on your internal machine, issue the following command:
ssh -f -R $SOMEPORT:localhost:22 $YOURSELF@$HOME 'while :; do sleep 86400; done'
... this will authenticate you as $YOURSELF on your machine $HOME, and will forward tcp traffic to $SOMEPORT on $HOME back through the tunnel to port 22 (the SSH daemon) on localhost (your "inside" machine at work). You could forward the traffic to any other port, such as telnet, but that would involve configuring your "inside" machine to allow telnet and (to be prudent) configuring its TCP wrappers, ipchains etc., to disable all telnet that didn't come through (one of) your tunnels.
The fluff on the end is just a command for ssh to run; it will loop around forever (the : shell built-in command is always "true"), sleeping for a whole day (86400 seconds) at a time. The -f causes this whole command to fork into the background (becoming a daemon) after performing any authentication (allowing you to enter passwords, if you like).
To use this tunnel (later, say from home) you'd log into $HOME as yourself (or any other user!) and run a command like:
ssh -p $SOMEPORT $WORKSELF@localhost ...
or:
ssh -p $SOMEPORT -l $WORKSELF localhost
... Notice that you use the -p to force the ssh client to connect to your arbitrarily chosen port (I use 1022, 2022, etc., since they end in "22", which is the IANA recognized ssh protocol port). The -l (login as) option and the form $WORKSELF@ are equivalent. Note that your user name at work needn't match your name at home, but you must use the "REMOTE" username to connect to the forwarded port.
That bears repeating since it looks weird! You have to use the login name for the remote system even though the command looks like you're connecting to the local host (your connection is being FORWARDED).
If you use these commands you can log into a shell and work interactively. You can add additional arguments to execute non-interactive commands, and you can set up your ssh keys (run ssh-keygen on $HOME, then append $HOME's ~/.ssh/identity.pub to ~/.ssh/authorized_keys at work) so that you can gain access without typing your password (though you should configure your ssh key with a passphrase and use ssh-agent to manage that for you; then you only have to enter your passphrase once per login session to access all of your ssh keyed accounts).
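For instance, the key setup might look like this (a sketch reusing the tunnel, and assuming the SSH protocol 1 file names that OpenSSH used by default at the time):

ssh-keygen                      # run on $HOME; creates ~/.ssh/identity and ~/.ssh/identity.pub
cat ~/.ssh/identity.pub | ssh -p $SOMEPORT $WORKSELF@localhost 'cat >> ~/.ssh/authorized_keys'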
You can also copy files over this tunnel using the scp command like so:
scp -P $SOMEPORT $WORKSELF@localhost:$SOURCEPATH $TARGET
... note that this is an uppercase "P" to select the port, a niggling difference between the syntax of the ssh client and that of the scp utility. Of course this can be done in either direction; this example copies a remote file to a local directory or filename. Reverse the arguments to copy a local file to the remote system.
As I hinted before, you are actually double encrypting this session. Your tunnel to the remote system is encrypted, and in this case the connections coming back are to a copy of sshd on your originating machine, which does its encryption anyway. However, the double encryption doesn't cost enough CPU time to be worth installing a non-encrypting telnet or rsh and configuring it to only respond to requests "from" localhost (from the tunnels).
One important limitation of this technique: Only one remote user/session can connect through this tunnel at a time. Of course you can set up multiple tunnels to handle multiple connections.
This is all in the man pages, and there are many references on the net to using ssh port forwarding, but finding an example of this simple trick was surprisingly difficult, and it is a bit tricky to "grok" which arguments go where. Hopefully you can follow this recipe to pierce the corporate (firewall) veil and get more work done. Just be sure you clear it with your local network and system administrators!
From sunge
Answered By Karl-Heinz Herrmann, Frank Rodolf
Dear TAG members,
When I use the ppp-on script to connect to my ISP, almost EVERY time the modem hangs up when the connect time reaches 3.3 minutes:
$ tail -n 10 /var/log/messages
...
Jul 15 19:37:37 localhost pppd[1703]: Hangup (SIGHUP)
Jul 15 19:37:37 localhost pppd[1703]: Modem hangup
Jul 15 19:37:37 localhost pppd[1703]: Connection terminated.
[K.-H.] this is what you would get by a modem-initiated hang up. pppd just gets told that the connection is closed.
Jul 15 19:37:37 localhost pppd[1703]: Connect time 3.3 minutes.
Jul 15 19:37:37 localhost pppd[1703]: Sent 4656 bytes, received 6655 bytes.
Jul 15 19:37:37 localhost pppd[1703]: Exit.
$
But if I use Kppp, the modem will NOT hang up.
Thank you.
Regards,
--
sunge
[K.-H.] kppp and ppp-on will probably set up the modem differently. In particular, there is one register, Sx, which contains the time in minutes(?) after which the modem will hang up if no data transfer occurs.
I guess your startup causes about 0.3 min of traffic, after which no further traffic occurs, and your timeout with ppp-on is set to 3 minutes. kppp may have it set to a longer time.
The init string is something like AT ..... Sx=3. I'm not sure anymore, but the register number x was something like 6 or 9... see the modem manual for details.
K.-H.
[Frank] Hi there!
Just a small addition to what Karl-Heinz wrote.
The register (at least in a standard Hayes-compatible register set) would be number 19, and the number after the = does indeed indicate the number of minutes of inactivity before disconnecting.
Grtz,
Frank
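For example, if your modem really does use S19 for the inactivity timer (check the manual; plenty of modems use a different register), the dialer section of a chat script such as the one ppp-on calls might include something like this hypothetical excerpt (the phone number is made up):

'' 'AT&F S19=10'
OK 'ATDT5551234'

...where 10 is the number of minutes of idle time to allow before the modem hangs up.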
From Chris Twinn
Answered By Ben Okopnik
I am trying to write a little bash script to update the crontab on RH7. Problem is that when I put
linetext = $1" * * * * " root bash /etc/cron.hourly/myscript or
[Ben] Don't do that; you can't have any spaces around the '=' sign in variable assignment.
linetext=$1" * * * * " root bash /etc/cron.hourly/myscript
I get back "2 configure ipchaser 2 configure ipchaser", which is an ls of the current directory fronted by the number 2 from my variable, at each position of the stars.
[Ben] Sure; it's doing exactly what you've asked it to do. Text in the weak (double) quotes is interpreted/interpolated by the shell; "*" does indeed mean "all files in the current directory". However, strong (single) quotes treat the enclosed text as a literal string; so does quoting it on assignment and output.
linetext=$1' * * * * root bash /etc/cron.hourly/myscript'
linetext="$1 * * * * root bash /etc/cron.hourly/myscript"
Either one of the above would result in "$linetext" containing
2 * * * * root bash /etc/cron.hourly/myscript
(this assumes that "$1" contains '2'.) Note that you have to echo it as
echo "$linetext"
not
echo $linetext
Otherwise, "bash" will still interpret those '*'s.
... he cheerfully reported back, his problem is solved ...
Wicked.
[Ben] On this side of the pond, the expression is "Duuuuude."
Many Many Thanks.
[Ben] Good to know you found it useful, Chris.
From Anonymous
Answered By Mike Orr, Nick Moffitt
I have a question about the "finger" option on telnet. I know that you can find out when someone has logged in by entering "finger name". But I was wondering if it is possible to find out who has tried to finger your e-mail account?
Please keep my name anonymous.
[Mike] The short answer:
If you are the sysadmin, you can run "fingerd" with the "-l" option to log incoming requests; see "man fingerd". Otherwise, if you have Unix programming experience, it may be possible to write a script that logs information about the requests you get. If you're merely concerned about security, the correct answer is to turn off the "fingerd" daemon or read the "finger" and "fingerd" manpages to learn how to limit what information your computer is revealing about you and about itself. However, you have some misconceptions about the nature of "finger" which we should also address.
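(Regarding "-l": on most inetd-based systems of this vintage, turning on logging means editing the finger line in /etc/inetd.conf -- a sketch, since the daemon's name and path vary by distribution:

finger  stream  tcp  nowait  nobody  /usr/sbin/in.fingerd  in.fingerd -l

and then sending inetd a HUP, e.g. "killall -HUP inetd", so it rereads its configuration.)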
The long answer:
"finger" and "telnet" are two distinct Internet services. "http" (WWW) and "smtp" (sending e-mail) are two other Internet services. Each service is completely independent of the others.
Depending on the command-line options given and the cooperation of the remote site, "finger" may tell you:
(1) BASIC USER INFORMATION: the user's login name, real name, terminal name and write status, idle time, login time, office location and office phone number.
(2) EXTENDED USER INFORMATION: home directory, home phone number, login shell, mail status (whether they have any mail or any unread mail), and the contents of their "~/.plan" and "~/.project" and "~/.forward" files.
(3) SERVER INFORMATION: a "Welcome to ..." banner which also shows some information (e.g. uptime, operating system name and release)--similar to what the "uname -a" and "uptime" commands reveal on the remote system.
Normally, ".plan", ".project" and ".forward" are regular text files. ".plan" is normally a note about your general work, ".project" is a note about the status of your current project(s), and ".forward" shows whether your incoming mail is being forwarded somewhere else or whether you're using a mail filter (it also shows where it's being forwarded to and what your mail filter program is, scary).
I've heard it's possible to make one of these files a named pipe connected to a script. I'm not exactly sure how it's done. (Other TAG members, please help.) You use "mkfifo" or "mknod <name> p" to create the special file, then somehow have a script running whose standard output is redirected to the file. Supposedly, whenever "finger" tries to read the file, it will read your script's output. But I don't know how your script would avoid a "broken pipe" error if it writes when there's nobody to read it, how it would know when there's a reader, or how the reader would pass identifying information to the script. Each Internet connection reveals the requestor's IP, and if the remote machine is running the "identd" daemon, one can find out the username. But how your "finger" script would access that information, I don't know, since it's not running as a subprocess of "finger", so there's no way for "finger" to pass it the information in environment variables or command-line arguments.
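For what it's worth, the usual description of this trick runs something like the following (an untested sketch; the key point is that a write to a FIFO blocks until a reader opens the other end, which keeps the loop in step with incoming requests and avoids writing when there's nobody to read):

mkfifo ~/.plan              # or: mknod ~/.plan p
while :; do
    date > ~/.plan          # blocks here until a finger request reads the pipe
done &

That still leaves the reader unidentified, of course, for exactly the reasons above; the script only learns that someone read the pipe.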
However, "finger" is much less useful nowadays than it was ten years ago. Part of this is due to security paranoia and part to the fact that we use servers differently nowadays.
(1) Re security, many sysadmins have rightly concluded that "finger" is a big security risk and have disabled "fingerd" on their servers, or enable it only for intranet requests (which are supposedly more trustworthy). Not only is the host information useful to crackerz and script kiddiez, but users may not realize how much information they're revealing.
[Nick] The notion that fingerd is a security risk because it reveals usernames is a bit misleading. While it's true that having information about login status can be useful (don't try to hack in while root is on, and don't crack jack242's account while he's logged in, either!), the real problem is in the implementations of many finger servers.
Part of this lay in the fact that finger daemons ran as the superuser, or root. On systems that have shadow passwords enabled, only root can read the file that has the encrypted password data. A malicious user wishing to obtain the superuser's password data could simply create a symbolic link from ~/.plan to /etc/shadow, and finger his or her own account (stolen or otherwise) to display the information!
This is due to the fact that fingerd was written in an era when most computers on the Internet were run by research institutions. The security was lax, and people didn't write software with resilience to mischief in mind. In fact, adding features was the main push behind most software development, and programs like fingerd contain some extremely dangerous features as a result.
There are, however, some modern implementations that take security into consideration. I personally use cfingerd, and have it configured with most of the options off. Furthermore, I restrict it to local traffic only, as was suggested earlier. I also know that my file security is maintained, since cfingerd will not follow symbolic links from .plan or .project files, and it runs as "nobody" (the minimal-privilege account that owns no files).
[Mike] (2) Re how we use servers, in 1991 at my university, we had one Unix computer (Sequent/Dynix) that any student could get an account on. Users were logged in directly from hardwired text terminals, dialup or telnet. You could use "finger" to see whether your friends were logged in. Since you knew where your friends normally logged in from, you had a fair idea where they were at the moment and could meet them to hack side-by-side with them or to read (Usenet) news or to play games together. (Actually, you didn't even need to use "finger". "tcsh" and "zsh" would automatically tell you when certain "watched" users logged in and out.) You could even use "w" to find out which interactive program they were currently running. But soon demand went above 350 simultaneous users, especially when the university decided to promote universal e-mail use among its 35,000 students and 15,000 staff. The server was replaced by a cluster of servers, and every user logging in to the virtual host was automatically placed on one of the servers at random. Since "finger" and "w" information--as well as the tcsh/zsh "watch" service--are specific to a certain server, it was a pain to check all the servers to see if your friends were on any of them. About this time, people started using X-windows, and each "xterm" window would show up in "finger" as a separate logged-in user. Also, finger access became disabled outside the intranet. "finger" became a lot less convenient, so it fell into disuse.
(3) "finger" only monitors login sessions. This includes the "login" program, "telnet", "xterm", "ssh" (and its insecure cousins "rsh" and "rlogin"). It does not include web browsing, POP mail reading, irc or interactive chat, or instant messaging. These servers could write login entries, but they don't. Most users coming from the web-browser-IS-my-shell background never log in, wouldn't know what to do at the shell prompt if they did log in, don't think they're missing anything, and their ISPs probably don't even have shell access anyway. That was the last nail in the coffin for "finger".
So in short, "finger" still works, but its usefulness is debatable. Linus used to use his ".plan" file to inform people of the current version of Linux and where to download it. SSC used to use it to propagate its public PGP key. There are a thousand other kinds of useful information it could be used for. However, now that everybody and his dog has a home page, this ".plan" information can just as easily be put on the home page, and it's just as easy (or easier for some people) to access it via the web than via "finger".
From Anthony Amaro Jr
Answered By Heather Stern
I have 2 computers currently: one running RedHat 6.2 with a 2.4.5 kernel (compiled from source) and another running RedHat 7.1 stock. Why is it that after I do an almost identical install on both machines package-wise, I am able to successfully compile and install the 2.4.5 kernel (from kernel.org) on the 6.2 machine, but when I try to compile on the RedHat 7.1 machine the compiler stops with errors? It seems hard to believe that a newer version of Red Hat would be incompatible with the kernel that makes it Linux!!!
Thanks!
Anthony Amaro Jr.
[Heather] Well, it used to be a Well Known Answer that RH had shipped a gcc which was too d*** new to successfully build kernels. What that obviously means is the folks back in the RedHat labs prepared their kernel RPMs on another machine, one which wasn't running their distro-to-be.
answer 1: you can compile a kernel on a different system, then copy it, the matching System.map and modules across to your misbehaving one.
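A sketch of answer 1, assuming a 2.4.5 source tree, lilo, and ssh access between the two boxes (all paths and hostnames here are illustrative):

# on the machine that compiles successfully (the 2.4-era sequence):
make dep && make bzImage && make modules && make modules_install
# copy the results to the misbehaving box:
scp arch/i386/boot/bzImage root@rh71:/boot/vmlinuz-2.4.5
scp System.map root@rh71:/boot/System.map-2.4.5
tar cf - /lib/modules/2.4.5 | ssh root@rh71 'tar xf - -C /'
# then add a lilo.conf entry on the target and rerun /sbin/lilo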
However, I don't know if this 7.0 problem remains in 7.1. (I'd bet they got a lot of complaints about it.) Soooo... with you having said nothing about what kind of error messages... how would we know either?
answer 2: "it's broken" is not enough detail for us to help "make it work".
Good luck, tho...
From Alan Maddison (published in 2c Tips, Issue 68)
Answered By Anthony E. Greene
I hope that you can help me find a solution before I'm forced back to NT. I have to find a Linux solution that will allow me to connect to an Exchange server over the WAN and then sync address books.
[Anthony] The closest thing I can think of for this is to configure your standards-compliant mail client to access the Exchange Global Address List (GAL) via LDAP. This is a built-in capability of Exchange server that often goes unused. If the LDAP interface is enabled, you can get to the Exchange GAL using the LDAP abilities in Netscape, Pine, Balsa, Eudora, Outlook, Outlook Express, Windows Address Book (part of Outlook Express). The latest version of Mozilla may also support LDAP.
If you want to export the GAL for use in an LDAP server, you will need both Outlook and Outlook Express installed.
- Open Outlook.
- Open the Address Book and select the Global Address List
- In the Global Address List, select all the addresses you want to export and copy them to your Personal Address Book. This is a memory and CPU intensive process. I would advise selecting 100-200 or so at a time. Do not select distribution lists; they are not exportable.
- After all the desired addresses have been copied to your Personal Address Book, leave Outlook open and open Outlook Express.
- Select File->Import to import addresses from your Outlook Personal Address Book.
- After the import is complete, close Outlook.
- Select File-> Export to export your address book to a comma separated values (CSV) formatted text file. I will assume you exported the following fields: Name, First Name, Last Name, Title, Organization, Department, Email Address, Phone, and Fax.
- After the export is complete, copy the CSV file to a box with Perl installed and run the following script (csv2ldif.pl):
See attached csv2ldif.pl.txt
Take the resulting LDIF file and import it into your LDAP server using its import tools.
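The records in the resulting LDIF will look roughly like this (a made-up entry for illustration; the exact attribute names depend on the schema the script targets):

dn: cn=Jane Doe, o=Example Corp
objectclass: inetOrgPerson
cn: Jane Doe
givenname: Jane
sn: Doe
title: Systems Engineer
o: Example Corp
ou: Engineering
mail: jdoe@example.com
telephonenumber: +1 555 555 0100
facsimiletelephonenumber: +1 555 555 0101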
Tony
From gianni palermo
Answered By Heather Stern, Huibert Alblas
Dear sir,
please send me by email the details of how to set up an internet cafe using Red Hat Linux and Windows NT, because I am planning to set one up. I got some tips from my friends, but I want to consult a professional like you. Hoping you'll send me the details. Thank you sir...
Gianni Palermo
[Heather] We've had this question asked of us a few times before. I even popped it into the "Help Wanted" section in Issue 61: http://www.linuxgazette.com/issue61/lg_mail61.html
...but nobody gave us any hints beyond what I had there. Maybe you can get away with very minimal services, like running all the stations from CD-based Linux distros. There are a bunch of them listed at LWN, but some of them are more of a giant rescue disc than a usable system. You might try these:
- Knoppix
- http://www.knopper.net/knoppix
- RunOnCD
- http://my.netian.com/~cgchoi
- DemoLinux
- http://www.demolinux.org
- Virtual Linux
- http://sourceforge.net/projects/virtual-linux
...or only offering web access:
- Public Web Browser mini-HOWTO
- http://www.chuvakin.org/kiodoc/Public-Web-Browser.html
If you want to get more serious you'll need to look harder. Sadly Coffeenet was forced out of business by his landlord, so you can't get his codebase easily (besides, it would be a moderately ancient Linux by now). Since VA Linux is now going into the consultancy and software biz instead of hardware, maybe you can buy some of their E-mail Garden expertise.
Of course you wanted to know where to get started. So I'll give you a bunch of pointers, but for the rest you'll have to do your own homework. If you really want to you could start up an "Internet Coffee House HOWTO" and add it to the LDP. I'd sure enjoy pointing to it if it existed.
There are other important points beyond merely the technical setup to consider but I'll have to assume you're making business plans and selecting a good location on your own.
Here's what seem to be the most helpful HOWTOs right now for the topic. Most of them are also available at the Linux Documentation Project home page.
For being diskless, if you want to go that route:
- Diskless HOWTO
- http://www.linuxdoc.org/HOWTO/Diskless-HOWTO.html
- Thinclient HOWTO
- http://www.linuxdoc.org/HOWTO/Thinclient-HOWTO.html
- Network Boot HOWTO
- http://www.linuxdoc.org/HOWTO/Network-boot-HOWTO/index.html
- Kiosk HOWTO
- http://www.linuxdoc.org/HOWTO/Kiosk-HOWTO.html
Getting the connection going:
- ISP Setup RedHat HOWTO
- http://www.chuvakin.org/ispdoc/ISP-Setup-RedHat.html
- Domain mini-HOWTO
- http://caliban.physics.utoronto.ca/neufeld/Domain.HOWTO
- DSL HOWTO
- http://www.linuxdoc.org/HOWTO/DSL-HOWTO/index.html
- DSL HOWTO "prerelease version"
- http://feenix.burgiss.net/ldp/adsl
- DHCP mini-HOWTO
- http://www.oswg.org/oswg-nightly/oswg/en_US.ISO_8859-1/articles/DHCP/DHCP.html
Protecting yourself from abuse:
- The Bandwidth Limiting HOWTO
- http://www.linuxdoc.org/HOWTO/Bandwidth-Limiting-HOWTO/index.html
- Security HOWTO
- http://www.linuxsecurity.com/Security-HOWTO
- Advocacy HOWTO
- http://www.datasync.com/~rogerspl/Advocacy-HOWTO.html
Maybe some things that might make your stations more attractive:
- Sound HOWTO
- http://www.linuxdoc.org/HOWTO/Sound-HOWTO/index.html
- XFree86 Touchscreen HOWTO
- http://www.linuxdoc.org/HOWTO/XFree86-Touch-Screen-HOWTO.html
- Printing HOWTO
- http://www.linuxprinting.org/howto
Last, but certainly not least:
Coffee HOWTO http://www.linuxdoc.org/HOWTO/mini/Coffee.html
It's a lot to read, but I hope that helps!
[Halb] Ok, I don't know if this is exactly what you mean, but try: http://www.dnalounge.com/backstage/src/kiosk/
Its description:
One of the things I want to do here at the DNA Lounge is have public kiosks that people can use for web browsing, IRC, AIM, and so on. When most people set up kiosks, they tend to try and lock them down so that you can only run a web browser, but that's a little too limiting, since I want people to be able to run other applications too (telnet, ssh, irc, and so on.) So really, I wanted to give access to a complete desktop system. But do so safely and reliably.
I decided to set them up as Linux systems running the GNOME desktop, preconfigured with all the common applications people might want to run. However, I needed to figure out a way to make the system robust enough that one user couldn't screw it up for another, on purpose or accidentally. The system would need to be locked down enough that it was easy to reset it to a working state.
So, I had the following goals:
- When the machine boots up, it should automatically log itself in as "guest", and go to the desktop without requiring a login dialog.
- It should be possible to pull the plug on the machine at any time without loss of data: at no time should fsck need to run.
- Logging out or rebooting should reset the machine to a default state, clearing out any changes a previous user might have made.
- Small form factor: I wanted flat screens, and I wanted them without spending a fortune.
It's not using WinNT, but it looks like you don't need to...
Have fun:
Halb
From Trevor Lauder
Answered By Mike Ellis, Ben Okopnik, Heather Stern
How do I disable password aging without the shadow suite?
[Mike Ellis] Are you sure password aging is turned on without the shadow suite? AFAIK, password aging is only supported under Linux when shadow passwords are used. I also believe that most recent (post '99 ???) distributions come with shadow passwords enabled by default, although I've only really played with RedHat and Suse so I may be wrong here.
So - have you got shadow passwords? The easiest way to tell is to look at the password and shadow files. These are both colon-delimited data files. If you don't have shadow passwords enabled, the file /etc/passwd will look like this:
root:HTf2f4YWjnASU:0:0:root:/root:/bin/bash
The first field gives you the user name - I've only quoted the root user here; your password file will have many more users in it, but each line should follow the pattern shown above. The second field contains the user's password, encrypted ...
[Ben] Let's go for "... encrypted with the standard Unix 'crypt' function."
There. That's better. When the choice is
a) give extra info that may be unnecessary or
b) shroud everything in mystery as a true High Priest should, I go with the Open Source version...
[Mike Ellis] The remaining fields specify the user's UID, GID, real name, home directory and default shell - nothing for password aging.
If you have shadow passwords enabled, the /etc/passwd file will look more like this:
root:x:0:0:root:/root:/bin/bash
Notice that the second field, which used to contain the password crypt, now has the single letter 'x'. The password crypt is now stored in the /etc/shadow file, which might look like this:
root:$1$17yvt96W$HO11W48wZuy0w9cPtQJdt0:11284:0:99999:7:::
Again, the first field gives the user name, and the second is the password crypt. These two examples use different crypt algorithms, hence the different length of the password field - this is not relevant to this discussion.
The remaining fields in the shadow file enable the password aging - according to "man 5 shadow", these fields are (in order)
Days since Jan 1, 1970 that password was last changed
Days before password may be changed
Days after which password must be changed
Days before password is to expire that user is warned
Days after password expires that account is disabled
Days since Jan 1, 1970 that account is disabled
A reserved field
The manual page also reads:
"The date of the last password change is given as the number of days since Jan 1, 1970. The password may not be changed again until the proper number of days have passed, and must be changed after the maximum number of days. If the minimum number of days required is greater than the maximum number of day allowed, this password may not be changed by the user."
So, to disable password aging (as in the example) set the fourth field to zero and the fifth to a large number (e.g. 99999). This says that the password can be changed after no time at all, and must be changed after 274 years, effectively disabling the aging.
[Ben] To actually _disable_ password aging, make all the fields after the fourth one null, i.e.
ben:ShHh!ItSaSeCrEt!:11504:0:::::
If you do that, "chage -l" reports the following:
ben@Baldur:~$ chage -l ben
Minimum:        0
Maximum:        -1
Warning:        -1
Inactive:       -1
Last Change:            Jul 01, 2001
Password Expires:       Never
Password Inactive:      Never
Account Expires:        Never
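(If you do have the shadow suite installed, the chage tool can write those fields for you rather than hand-editing - a hedged sketch, since older versions may not accept -1 values; check "man chage" first:

chage -m 0 -M -1 -I -1 -E -1 ben

This sets the minimum to 0 and clears the maximum, inactive and expiry fields, matching the "disabled" output shown above.)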
[Mike Ellis] You can edit the shadow file directly (e.g. using vi/emacs) which is only really recommended for expert users. A safer alternative, although less flexible, is to use a tool to do the work for you, such as the usermod command, or linuxconf. Unfortunately usermod doesn't allow you to disable aging, only to change the dates on which the password expires. linuxconf is better, and should probably be your first port of call unless you are quite experienced.
[Ben] The "proper" tool for modifying "/etc/passwd" and "/etc/shadow" is 'vipw' ("vipw -s" edits "/etc/shadow".) You might want to define the EDITOR variable before using it, though - it uses "vi" by default, and that can be pretty ugly if you're not used to it...
[Heather Stern] I certainly hope Linuxconf has gotten more stable; when it first came out, about half the people I knew who had tried it (to be fair, not very many) had managed to get burned by it - either by major config files eaten if a failure occurred while it was doing something (it wasn't "idempotent" as Debian says, able to be interrupted gracefully), or by features that needed to be tweaked, not being revealed by it or handled incorrectly because the tool's author hadn't thought of them. Like my "doesn't start at 0" address range of less than 255 addresses.
On the other hand, if you edit the file directly you MUST get the number of colons right. Otherwise nobody whose login is described after the line you get wrong will be able to get in... unless by chance you have more than one error and your other mistakes make the fields line up properly again, in which case there will be a block of people who cannot log in. This can be very hard to debug if you don't know to look for it...
[Mike Ellis] Before attempting any modifications on your system, make sure you've read the manual pages for the password file (man 5 passwd), the shadow file (man 5 shadow) and the usermod command (man usermod). It is quite easy to leave yourself in a situation where it is impossible to log in after one small typo... The examples I've shown are from RedHat systems I happen to have laying around - your system may have a different version of the password system which is subtly different and which blind copying of my examples would break.
Hope it helps!
[Ben] Amen to that. Also, make sure that you have your boot floppy close to hand, or at least know how to boot with the 'single' option.
[Heather] Or at least glance at the "Root password" Tip in this month's 2c Tips column before making your changes.
From Nick Moffitt
Answered By Ben Okopnik, Heather Stern, Don Marti
I run a server machine, and I have telnet disabled in favor of OpenSSH. What I have done is add the following line to my /etc/inetd.conf:
telnet stream tcp nowait nobody.nogroup /usr/sbin/tcpd /usr/bin/figlet Unauthorized access prohibited. Go away.
The idea is to print out a "NO TRESSPASSING" sign in big block letters using the figlet utility. It works great, and when I run "telnet localhost" from this machine, I see:
----8<----
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.

[figlet prints "Unauthorized access prohibited. Go away." in large ASCII block letters]

Connection closed by foreign host.
----8<----
This is all well and good, but when I try telnetting from a remote machine, it's a crap shoot. Sometimes I'll get the whole banner, and sometimes I'll get nothing. One machine reliably prints out the correct text up until it ends as follows:
----8<----
[the figlet banner prints only partway through before being cut off]

Connection closed by foreign host.
----8<----
What could be causing this, and how should I fix it?
[Ben] Arrgh. I haven't looked at the actual code of "inetd", but I'm cringing at the idea of running a text-printing app from /etc/inetd.conf (vs. spawning a listener process, which is what it's supposed to do.) It seems to me that you're bound to run into problems with gross hackage of that sort.
[Heather] I thought I recalled this is what the fingerd was for. In this case it'd be wickedly apropos (wicked being the operative word) to twist finger to doing what you want... so you can give some poor telnet-using sap "the finger" as it were.
If you are going to hack source anyway, hack source of something that's closer to doing the right job, I'd think.
[Ben] If I was going to do something like that, I think I would leave in.telnetd running - there isn't even a process other than inetd until someone requests one - have "/etc/hosts.deny" set up to deny everyone, and set up my "BANNER" line in "/etc/default/telnetd" to print out that message.
[Heather] Does that give you the message before, or after it offers a login attempt? If before, then surely he can hack a copy of telnetd whose login prompt is completely bogus, and that will never let anyone in.
[Ben] Actually, I found something that might be even better for the purpose. These days, "telnetd" is actually "in.telnetd" - Wietse Venema's wonderful wrapper - and uses "/usr/lib/telnetd/login" to negotiate the login process. It's something that's _supposed_ to do real-time interaction with the user. Move "login" to "login.old"; replace it with
#!/bin/sh
figlet 'Go away!'
It should work fine. Should be fairly secure, too.
[Don] When I try this telnetting from ssc.com to my test machine I get nothing, and using this figlet_wrapper script instead of calling figlet directly fixes it for me.
#! /bin/sh
/usr/bin/figlet $* && sleep 1
Aha, yeah. That seems to do the trick.
[Don] I tried rebuilding figlet with a bunch of fflush(0)s in it, and it seems like I'm getting more text but not all of it.
Yeah, I got the same thing when I tried that. I had considered doing something to tcpd that would make it handle leftover buffers more correctly, but putting in the sleep seems to work well enough for me.
Thanks!
Sometimes you'd like to configure an application so that it starts for any user who uses 'startx' (or logs in through xdm?). For example, I have a policy on my systems that all users should be running xautolock (a program that invokes an xscreensaver or xlock module after a period of mouse/keyboard inactivity).
On a Debian Woody/Sid (2.2 or later) system this can be done by copying or linking a file into /etc/X11/Xsession.d/. This would be a script similar to one you'd add to /etc/init.d/. For example I added a file called 60xautolock consisting of the single line:
/usr/bin/X11/xautolock -time 2 -corners 00-+ -cornerdelay 2 &
I suspect it should be marked as executable; I just set the perms on mine to match the others therein.
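If yours turns out not to be executable, matching the example above is a one-liner:

chmod 755 /etc/X11/Xsession.d/60xautolock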
(BTW: this xautolock command enables a "blank now" hot spot in the lower right corner of the screen, and a "never blank" hot spot in the lower left; so a user can blank the screen with a 2 second delay by shoving their mouse pointer far into the corner; it also sets the automatic blanking to occur after 2 minutes: the default of 10 min. is way too long!)
Here's another Debian tip:
Debian normally configures xdm to invoke the X server with the -auth argument. This allows one to configure their X session to allow remote clients, or local clients under other user IDs to connect to the X server (to run in your X session).
This is useful even if you've accepted the recommendation to configure XFree86 4.x with the "-nolisten tcp" option (to disable remote clients from direct X protocol access). It allows you to run X under your own user ID while allowing root to open programs on your display (particularly handy if you want to run ethereal, which will refuse to run SUID/root but which needs access to X and root permission to sniff on your network interfaces).
The problem is that Debian doesn't normally invoke X with the -auth option when you use the startx script. Of course you could use xhost +localhost; but this allows any local user to access your X session; rather than allowing you to control it in a more fine-grained fashion.
The solution is to edit the /etc/X11/xinit/xserverrc file, inserting one command and adding an option to another:
#!/bin/sh
/usr/bin/X11/xauth add :0 . $(dd if=/dev/urandom count=2 2> /dev/null | md5sum)
exec /usr/bin/X11/X -dpi 100 -nolisten tcp -auth $HOME/.Xauthority
##  . . . . . . . . . . . . . . . . . . . . . .  ^^^^^^^^^^^^^^^^^^^^^^^
... last comment line (starting with ##) underscores the addition to that command. The xauth command is being used to create the ~/.Xauthority file.
For root to gain access to this session you'd issue a command like:
xauth -f ~$YOU/.Xauthority extract - `hostname`/unix:0 | xauth merge -
... from a root shell (perhaps by opening an xterm and using the su or sudo commands). (Hint: obviously anyone who can read your .Xauthority file can use it to gain access to your X sessions; so maintaining these on NFS home directories is BAD; yet another reason why NFS stands for "no freakin' security").
That's the easiest and most secure means available for supporting remote X clients: call the OpenSSH client with -X (enable/request X11 forwarding). If the remote ssh daemon allows it, and if you have your DISPLAY variable set (which is always the case when you start an xterm under X, since that's how the X libraries linked into xterm "found" your X server), then the remote daemon will spawn a proxy --- an instance of the daemon that "pretends" to be an X server on display number 10, 11, or higher. That daemon automatically relays X protocol events to your client, which relays them through the local Unix domain socket to your server. This is all automatic with most versions of ssh (except the newer OpenSSH client, which defaults to disabling X11 forwarding and thus requires the -X switch).
Please make sure you use capital X, as -x in lowercase tells it to disable this feature, even if the local sysadmin has chosen to okay a tunneled X connection by default. -- Heather
This allows you to run X with ports 6000 (and up) closed; (preventing remote systems from even seeing that you're running it; much less giving them the opportunity to attack your X server) and still allows you to easily support remote X clients.
SSH X11 forwarding also works through NAT/IP masquerading and any firewall that allows other ssh traffic.
This matter has come up many times before, and will surely come up many times in the future. I hope by putting Yan-Fa's crisp description and our extra notes in Tips, that more people who need it, will find it easily. -- Heather
There's a simpler way to put a new root password on a Linux system if you've forgotten it and have physical access -- which I have to assume this person has, since they're messing with partitions.
If you have lilo installed, interrupt the boot up process at the lilo prompt and type:
kernelImageName single
(one example would be linux as your kernelImageName.) -- Heather
This will boot you up in single user mode and allow you to change the password. This has the added advantage of running all the standard runlevel 1 processes, including mounting of partitions.
Yan-Fa Li
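[Once the system comes up in single user mode and hands you a root shell, resetting the password is simply a matter of running:

passwd

answering the prompts, and rebooting normally.]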
Things to look out for, however:
If you like to get your hands dirty you can also edit the /etc/sysconfig/desktop file (or create it if it doesn't exist) and put in the line: DESKTOP=KDE
This has the added advantage of changing the display manager to KDM instead of GDM.
Y
Hi,
From the Department of Scripting Newbieville, here's a tiny function I've added to my .bashrc and ended up using quite often:
addy () {
    if [ $# -eq 1 ]
    then
        grep -i "$1" "$HOME/.mail_aliases" | mawk '{ print($3) }'
    else
        echo "Usage: addy <searchstring>"
    fi
}
Given a search string (part of a name, nickname or address) as input, it'll output any matching email addresses it finds in an email aliases file (~/.mail_aliases, in this case). The alias file contains lines in the format used by mutt - for example:
alias nickname whoever@wherever (Real Name)
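With the example alias line above, a lookup goes something like this (names hypothetical):

$ addy wherever
whoever@wherever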
If you use WindowMaker and have xmessage, you can add something similar to a menu by adding the following, as a single line, to the menu config file of your choice:
"Find email address..." SHEXEC "xmessage -nearmouse `grep -i \'%a(Email address finder,Enter search string:)\' .mail_aliases | mawk '{ print($3) }'`"
Thanks to everyone involved with Linux Gazette - you're great!
Tim
Hmm, Answer Gang recommended djbdns without mentioning that it's proprietary software? Ouch. Bad gang, no biscuit.
I said "some" and I didn't mention how many people are currently signed onto TAG. It's more than two. Maybe next time I'll gather the whole flaming thread from across its 3 mailing lists.
However I've cc'd the Gang at large so a few more people can take a bushwhack at me
I ragged on his philosophy a tiny bit and noted that I won't use it -- even giving a technical, rather than religious/copyright, reason not to.
But I was also slaving over hot perl scripts and HTML mashed taters trying to get the mailbag and tips sections cooked. If you smell smoke coming out of my ears, that's surely my melted brain.
-- Heather
Thanks Rick! Everyone else, I hope you find this particular 2c tip especially handy. I'd enjoy hearing from folks about how useful or annoying they find these things.
I see no signs that they want any money from me. Can you point me to a URL that wants payment?
Sure.
Here's the subscription policy page, clarifying that their stuff is subscription-only now, and why:
http://www.mail-abuse.org/subscription.html
Here's the Fee Structure page that it points to:
http://www.mail-abuse.org/feestructure.html
(note, you really want tables support to read that)
... so it merely depends on who you are.
Which tool must I now use to set up a printer? It used to be printtool on older systems (RedHat/Mandrake).
Please !
Danie Robberts
The Answer Gang replied with a few distro specific notes:
Not really sure how to get this where it needs to go.
This is the right place. It will be published in next month's 2-Cent Tips, unless Heather has too much other material. -- Mike
I have recently had the same problem with random seg faults that you addressed in August TAG.
I bought a new computer, pieced it together, and put 384M in it. When I initially installed Linux, it was dog slow, and running top I noticed that only 64M was visible (I think -- certainly far less than 384). I did a little checking and learned that the motherboard has a known problem of not seeing all the memory. So I entered the line "mem=384M". I then started getting random seg faults. I couldn't figure it out for a long time.
Even though I had a graphics card with on-board memory, my BIOS still allotted 64M to the AGP device on the motherboard. I reduced this (I couldn't get rid of it or set it to 0), accounted for it in my lilo.conf mem= entry, and all is wonderful now.
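[For the curious, the relevant lilo.conf fragment would look something like this -- numbers hypothetical, assuming 384M total minus a 32M AGP aperture:

append="mem=352M"

That line goes in the image section of /etc/lilo.conf; remember to rerun /sbin/lilo after editing.]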
Sorry about the verbosity.
-Tom
Hi,
I have done some reading and searching but the solution to our problem still eludes me.
I volunteer for a non-profit freenet, ccil.org, and would like to set up SMTP authentication so that CCIL users who buy connectivity from other ISPs will continue to use our stable and reliable mail services. The system our mail runs on is a Debian potato box running the default SMTP server, exim.
Can you point me to a HOWTO?
Thanks,
Chuck
Are you asking how to allow users of your systems to access mail on your system even though they are not in your domain? If so, you want a program called pop-before-smtp (here's one URL I found over on google: http://rpmfind.net/linux/RPM/PLDtest/i686/pop-before-smtp-1.21-3.noarch.html ).
It's easy to setup and allows your users to access their email from anywhere in the world.
-- Sincerely, Faber Fedor
Has anybody tried Subversion? According to the web page (http://subversion.tigris.org), it's at Milestone 2 alpha development, and aims to have all CVS features plus a number of improvements.
It was recommended by someone on the Cheetah (http://www.cheetahtemplate.org) mailing list.
At press time, it had reached Milestone 3 and is now self-hosted (they use their own code and not CVS anymore), and they hope to be feature-complete in early October.
Compare also Bitkeeper, (www.bitmover.com), a project by Larry McVoy and others aimed toward successful source control of big, complicated projects. -- Heather
Hi,
Can you use the same source for compiling a kernel on both an Intel based machine as well as a Sun?
I would like to know before I break my Sun
thanx
Danie
It should automatically detect the architecture it's compiling on and produce the right kernel.
However, whenever you install a new kernel, you always want to have a plan of escape in case the new kernel doesn't boot. That means making sure your old kernel is still ready to go and you know how to switch back to it. Popular ways to do this are to put the new kernel on a boot floppy, leaving the hard-disk setup alone, or arranging for LILO to boot one or the other from its menu. I'm not sure if Sun computers have LILO (Alphas use a multi-OS program called MILO instead), but they should have something equivalent. -- Mike
I can answer that. They use SILO, which works a little differently from LILO, but in a way, it makes it much easier to have multiple kernels.
Booting a Sparc takes more code than a PC does, but the disk partitioning utilities available to linux are not real clear on that concept. So SILO installs a tiny first stage loader whose only job in the whole world is to find the second stage. The second stage has more room than LILO does, so it is also smart enough to read its own config file. Thus SILO doesn't need to be re-invoked all the time when you make configuration changes.
But I wouldn't change what you let the bootprom memorize, until you are dead certain the new one works.
I'll add that the Sparc Debian disc might make an acceptable rescue disc if you get really screwed up, but it's still better to be careful. -- Heather
What combination of open source software should be used to create a portal site? How could a beginner build and test such a site?
The Gang replies:
Thank you for the reply. It is very helpful. Gives me a lot of new places to look.
peace
Hello there, dear readers. How have you all been? Not too busy I trust. I on the other hand have been busy over the last month or so. I have just completed my A-level exams, which I found to be quite tiring. That was why I was unable to write the Weekend Mechanic last month. For those of you who are currently doing, or are thinking of taking A-levels, I would advise you that although they are good fun they require lots of hard work.
As a result of completing my A-levels (these are university entry exams) I have also left school. Although for me this is rather sad, it does mean that I shall have lots of time to develop my Linux ideas. Thus, over the holidays I am hopefully going to be teaching an evening class about using Linux. It is something that I am looking forward to.
But I don't wish to delve too much into the future. Going back to the world of computing, one thing that happened recently which I found quite amusing was that a young computer cracker (aged 19, whose name I cannot remember) from Wales (UK) had gotten hold of a load of credit-card details and posted them on another website. Amongst the credit-card details obtained were Bill Gates's. This young cracker then used Gates's card to order a consignment of Viagra, which he had sent to Bill Gates!!!
You'd have thought that the Welshman would have had something better to do.........
The internet is growing at an alarming rate. Indeed, nearly every ISP now offers you the chance to publish your own web pages. The means of doing so is a computer (the host) running a webserver program such as Apache. Although there are other webservers, Apache is the most widely used on the internet and the most stable.
"But why would you want to use it on a local machine?", I hear you cry. Well running the Apache httpd daemon on your Linux box means that it is a great way of storing information, especially if you have a lot of HTML pages. I happen to have Apache running because I have a local copy of all the LDP Howto's, and of course a copy of the Linux Gazette archives!!
So the first thing to do is to test whether or not you have Apache installed. If you are using a distribution that uses the RPM file format, type in the following:
rpm -qa | grep -i apache
If successful you should see a line similar to:
apache-1.3.12-95
This means that the Apache webserver has been installed. If you do not have Apache on your system then you must install it. Many distributions come with Apache, so the chances are it is on your distribution CD. If it is not, or your distribution does not support the rpm format, then you must download the source files in tarred/gzipped format (*.tar.gz), available from www.apache.org. Once you have downloaded the files you can usually install Apache in the following way:
1. Log in as Root
2. Gunzip/untar the file:
tar xzvf /path/to/tarfile/apache*.tar.gz
3. cd to the newly created Apache directory:
cd Apache*
4. Run the "configure" script:
./configure
5. That will take a minute. Hopefully it will succeed, and a makefile called "Makefile" should exist in the directory. If not, it is likely that you do not have a compiler (such as gcc or g++) installed, or that your header files or kernel source files are missing. It might also be that the make utility itself is not installed. If so, you must install them.
So once configure has finished the thing you have to do now is to "make" the file, by typing in the following:
make
This step may take some time, especially if you have an old machine.
Assuming there were no errors from the make, the last thing you have to do is to install the compiled files by typing:
make install
And hopefully that should have installed Apache. If you do encounter errors while installing/compiling Apache read the documentation that comes with it. One caveat that I will mention is that during the "make" process it is normal for the information to be echoed to the screen. If you find that you are getting repeated errors while compiling Apache, one work around is to issue the following command:
make -k all
The above command will force make to continue, even if it encounters errors en route. Although I only recommend using it as an absolute last resort. Invariably reading Apache's documentation should solve most compiler issues.
Now that everything has been installed, the next thing to do is to start Apache. This is accomplished by starting the "httpd" daemon. By default (or at least for me, anyway) Apache is started automatically with your default runlevel, so if you have not rebooted your machine since installing, type what follows, still as user "root":
httpd
Hopefully your prompt should have been returned with nothing echoed to the screen. To check that the "httpd" daemon is running, we can use our old friend "ps", thus:
ps aux | grep -i httpd
What the above command does is list all the processes (including those that are not attached to a tty), and then filter the list (pipe "|") to the grep command, which matches the expression "httpd". The -i switch makes the match case-insensitive.
You should see a number of lines, but one which looks similar to the following:
wwwrun 1377 0.0 2.0 4132 1340 ? S 11:09 0:00 httpd
This means that Apache is up and running. If the result of that command is simply a line like "root blah blah grep -i httpd", then httpd is not running and you must start it again. If you keep getting the same message, reboot (init 6).
OK, now were are getting somewhere. Having ensured that the "httpd" daemon is active, we can actually start playing with it. Open up a copy of a Web browser (such as Netscape) and enter the following URL:
http://localhost
Hopefully you should see a web page of sorts. This usually differs between different Linux distributions. On my SuSE box I am presented with the SuSE Support Database with the SuSE chameleon mascot in the top middle of the page!
The page that you are looking at is the main page at the site "localhost". This page is stored in the following directory:
/usr/local/httpd/htdocs
This directory has a special name: it is called the DocumentRoot. The actual path may vary slightly on some systems, but invariably it is similar to the above. In this directory you should notice some files, in particular *.html files. The file that is loaded when you go to "http://localhost/" is index.html. What I have done is created a sub-directory in "htdocs" called "oldhtdocs", and copied all the original files into it. That way I can start afresh, and know that I have the originals if I need them.
You may find that reading and writing to the DocumentRoot directory has been disallowed to non-root users. To get around this, issue the following command as root, replacing "/path/to/htdocs" with the correct path:
chmod +rw /path/to/htdocs
Knowing where the files for "http://localhost/" are located is all very well, but how do you configure Apache? Hang on there, reader......the file you are looking for is called httpd.conf and is usually located in "/etc/httpd", or it may be in the directory "/usr/local/apache". On SuSE and Mandrake systems, the latter is the default place. In the sections that follow I shall be using the "httpd.conf" file to carry out various customisations.
How many of you have gone to URLs that contain the tilde symbol (~) followed by a name and then a (/)? I would imagine that nearly everyone has, at some time. But how many of you were aware of what it meant?? The tilde symbol within a URL indicates the personal web space of a user on the computer system, under the main domain name. Thus, at school, I had my own webserver, with a valid URL:
http://linuxservertom.purbeck.dorset.local/~thomas_adam/
What this was doing was actually retrieving files stored in a special folder under the account of user "thomas_adam". This gives users on a network a space in which to house their own web pages. So how is all this achieved? Well, it is quite simple really....
All users who are allowed their own webspace have to be put in the group nogroup (or www-data under Debian, etc). This can be done by editing the file "/etc/group" (as root) and locating the line for "nogroup". Then at the end of the line, add the user's name, separating multiple names with commas. Then save the file.
In a user's home directory, a directory called public_html has to be created, thus (as user root type):
cd /home/auser/ && mkdir public_html
Where "auser" is the name of a valid user on the system. Then the permissions have to be set. This is done by typing in the following:
chmod 755 /home/auser/public_html
Then the last thing to do, is to set the group of that newly created folder to nogroup. This can be done, by typing:
chown auser.nogroup /home/auser/public_html
Where "auser" is the name of the valid user.....substitute as appropriate. The same procedure can be applied to all users. It might also be an idea to play about with "useradd" so that when you add new users, the "public_html" directory with the permissions are set automatically.
[Actually, you don't have to do all that user and group stuff, if you instead make sure the public_html directory and all its files are world readable:
chmod -R a+r /home/auser/public_html

The important thing is that Apache has read access to the files. -- Mike Orr]
So the next thing to do, is to make sure that Apache is aware of what we have done. Open up the "httpd.conf" file, and lets take a look......
By default, I think the configuration that tells Apache about the public_html directory is commented out; at least it was in mine. From the beginning of the document, search for the keyword UserDir. You should find something that looks like the following:
<IfModule mod_userdir.c>
    UserDir public_html
</IfModule>
If any of the above lines have a hash (#) symbol preceding them, delete it. The above lines tell Apache that the directory "public_html" is to be used for each user's HTML files.
Directly below this are more related lines that tell Apache what sort of restrictions to apply. In the case of the following lines, the directories are read-only. If any of these lines are commented out, uncomment them.
<Directory /home/*/public_html>
    AllowOverride FileInfo AuthConfig Limit
    Options MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec
    <Limit GET POST OPTIONS PROPFIND>
        Order allow,deny
        Allow from all
    </Limit>
    <LimitExcept GET POST OPTIONS PROPFIND>
        Order deny,allow
        Deny from all
    </LimitExcept>
</Directory>
So now all that remains is to start writing the web pages. One other thing you will find extremely useful: notice that in my example earlier:
http://linuxservertom.purbeck.dorset.local/~thomas_adam/
I had not specified a ".html" file to load. This is because I had already told Apache a default file to look for within it. Such a file is known as a DirectoryIndex, and you can specify default files to load. Locate the following in your "httpd.conf" file:
<IfModule mod_dir.c>
    DirectoryIndex index.html index.shtml lwm.html home.html
</IfModule>
What this tells Apache is that when a URL such as the example above is given with no file name after it (*.htm*), it will look for the default file(s) specified after the DirectoryIndex flag. Thus, if your "public_html" directory contains a file called "index.html", this will be loaded by default. You are able to specify multiple files, as in my example above. If Apache cannot find any one of the above files then a directory listing is displayed instead.
One thing that I would like to mention at this point: if you have specified a hostname in "/etc/hosts", you can substitute that name in place of "http://localhost". It is for convenience that I use localhost here. Furthermore, in "httpd.conf" I would recommend that you find the following flag and substitute the first part of your host name for localhost:
ServerName grangedairy
Thus, although my host name is grangedairy.laptop, I have simply put grangedairy. The reasons for doing this will become apparent from reading the Alias section.
The last thing to remember: whenever you make changes to "httpd.conf", you have to stop and restart the httpd daemon. This can be achieved by typing the following (as root):
killall httpd
httpd
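If your installation includes the apachectl helper script (a source build as described above puts it in /usr/local/apache/bin), that is a gentler way to do the same thing:

apachectl restart

and "apachectl configtest" will check your httpd.conf for syntax errors before you restart.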
In this section, I shall be covering the rather short topic of Aliases. Using the "httpd.conf" file, we can see a list of aliases if we search for the keyword "alias". Hopefully you should see a list which looks similar to the following:
Alias /howto        /usr/share/doc/howto/en/html/
Alias /mini         /usr/share/doc/howto/en/html/mini/
Alias /lg           /usr/share/doc/lg/
Alias /hilfe        /usr/share/doc/susehilf/
Alias /doc          /usr/share/doc/
Alias /cgi-bin-sdb  /usr/local/httpd/cgi-bin/
Alias /sdb          /usr/share/doc/sdb/
Alias /manual       /usr/share/doc/packages/apache/manual/
Alias /htdig        /opt/www/htdocs/htdig/
Alias /opt/kde/share/doc/HTML     /opt/kde/share/doc/HTML/
Alias /opt/gnome/share/gnome/help/  /opt/gnome/share/gnome/help/
Alias /errors/      /usr/local/httpd/errors/
Alias /icons/       /usr/local/httpd/icons/
Alias /admin        /usr/local/httpd/admin/
Alias /lwm          /usr/share/doc/lg/lwm/
As you can see, what the above says is that if the URL ends in "/howto", for example, then Apache gets its web pages from the directory "/usr/share/doc/howto/en/html". Once again the default web page it loads is taken from DirectoryIndex, as we saw earlier. For example:
http://grangedairy/howto
You may remember that earlier I said you should specify a ServerName flag in "httpd.conf". This was done so that when you type in a URL with one of the above aliases, you do not need to put an extra forward slash at the end. You see, originally the above aliases were aliased thus:
Alias /howto/  /usr/share/doc/howto/en/html/
Alias /mini/   /usr/share/doc/howto/en/html/mini/
with extra forward slashes after the alias name. I soon got tired of having to add these myself, so I told Apache to do it for me. With the ServerName flag set, Apache knows the name of my machine, so that when I go to:
http://grangedairy/howto
It automatically appends the forward slash at the end. Cool, eh?? So if you have done the same as me you can delete the trailing forward slashes from the alias name because hopefully, you should not need them!
The final part to my Apache tutorial is how to set up and create "secure directories", i.e. those that require user authentication before they are loaded. You will have noticed earlier that in my listing examples of Aliases, there was one for "/admin". This is in fact a secure directory.
You can set up secure directories in the same way as an ordinary alias, except that this time you have to tell Apache a little about the directory itself and how it is to be parsed. So say you wanted to set up a secure directory mysecuredir at location "/usr/local/httpd/mysecuredir/". You would do the following:
1. Add "/mysecuredir" to alias list:
Alias /mysecuredir /usr/local/httpd/mysecuredir
2. Change to the location of the folder that you have specified in the alias list, thus:
cd /usr/local/httpd
3. Create the directory "mysecuredir" by typing in:
mkdir mysecuredir && cd mysecuredir
This has created the directory, and changed to it.
4. Now the work begins. There are two files that we shall be using: .htaccess and htpasswd. The first file (.htaccess) is the one we shall set up first. It is this file that stores the information about how "mysecuredir" is to be used.
So at the console, use an editor such as nano (a pico clone), jed, emacs, etc, to create the .htaccess file, and enter the following information, exactly as shown because apache is case-sensitive in parsing commands!:
AuthType Basic
AuthName "Restricted Directory"
AuthUserFile /usr/local/httpd/admin/htpasswd
require valid-user
(Since .htaccess starts with a period, it won't show up in ordinary directory listings. Use "ls -a" to see it.)
The commands above are the most common ones used to create a secure directory. The table below will give a short description of the commands and how you can customise them.
Option Tag | Meaning |
---|---|
AuthType | This sets the authentication type. Basic is almost always used. |
AuthName | Sets the name on the "login" box of the directory that you are trying to connect to (see the screenshot below). |
AuthUserFile | This is the file that is used to check for authentication, i.e. it stores your username and password (encrypted of course). You must ensure that you use the full path to the htpasswd file. |
require valid-user | This says that access is only allowed to those who have a valid entry in the htpasswd file. |
Note: for additional security, put the htpasswd file somewhere that is not accessible via URL--somewhere outside your web directory and outside your alias directories. A .htaccess file must be in the URL-accessible directory it's protecting, but the htpasswd file may be anywhere. You may also share the same htpasswd file among several .htaccess directories if desired.
Ok, now that we have told apache how to handle the directory we now need to create the password file:
5. To create the htpasswd file, type in the following command (in the same directory as the ".htaccess" file):
htpasswd -c htpasswd username
Whereby you replace "username" with your username. To keep adding users to the file, issue the same command, but remove the "-c" flag.
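For example (usernames hypothetical):

htpasswd -c htpasswd thomas      # first user: -c creates the file
htpasswd htpasswd heather        # later users: no -c, or you'll overwrite the file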
6. Now edit our friend /etc/httpd/httpd.conf and at the bottom of the alias list, add the following:
<Directory /usr/local/httpd/*>
    AllowOverride AuthConfig
</Directory>
You may have to modify it slightly, but that will ensure that if Apache meets any ".ht*" files it will use them to apply access control. To turn this off, change AllowOverride AuthConfig to AllowOverride None.
Now stop and restart the httpd daemon
Ok now you are all set to try it out. Having saved the files go to your web browser and type in the following:
http://servername/mysecuredir
Making sure that you replace "servername" with either your hostname, or "localhost". If successful you should see a dialog box similar to this screenshot.
Once you have entered the correct details you should be off and away. You may find, however, that you can connect to the "mysecuredir" directory without having to supply any credentials. If this happens, check the following in your "/etc/httpd/httpd.conf" file.....
It may be that Apache has not been configured to recognise ".ht*" files. You can tell Apache to use them by setting the AccessFileName tag, thus:
AccessFileName .htaccess

Well, that concludes this entire section. I did consider writing a few words about the use of Perl and CGI, but I decided that Mark Nielsen has done a better job over the last few months. Furthermore, Ben Okopnik has been creating yet another excellent tutorial, this time on Perl, so if you are interested in CGI programming, I would start by reading these two series of articles :-)
I stumbled across this program quite by accident. I was originally doing some research at school for the acting network administrator (hi Dave!) which involved the use of power management, as we were having some problems with monitors "sleeping (room D25)" but I digress.....
UPX (the Ultimate Packer for eXecutables) is a compression program. What it actually does is compress binary executables into self-contained files that do not suffer in execution speed or memory performance. This type of program is best suited to laptop users, where hard drive space is of enormous concern. Since I use my laptop for most things and only have a 3.2GB hard drive, I have found that compressing the files stored in "/usr/bin" has cut the size of that directory in half!
Since it will only compress binary files, it is no good trying to compress the files stored in "/etc" for example. I have found that compressing the following directories is ok:
/usr/bin /usr/X11R6/bin /usr/local/bin
One caveat that I should mention: I would NEVER use "upx" to compress the files stored in "/bin" and "/usr/sbin". When I rebooted my computer, I found that init would not run. Out came "Tom's root/boot", and I later discovered that the compression of these files was causing the main init program problems for some reason........
So to use the program, download the package from http://wildsau.idv.uni-linz.ac.at/mfx/upx.html. I think you have the choice of either downloading the source packages, or a pre-compiled executable.
I simply downloaded the pre-compiled package, unpacked it, and copied the main upx program to "/usr/bin". Then you are ready to start compressing files.
To compress a file, you have to type in the following:
upx /path/to/program/progname
and that will compress the program specified. You can also compress all files in the directory, by typing:
upx /path/to/programs/*
and UPX will happily go through all files, and instantly disregard those which are not Linux/386 format.
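For example, to squeeze a single program as hard as possible while keeping a backup copy (the path is hypothetical; the -9 and -k flags appear in the option list below):

upx -9 -k /usr/local/bin/someprog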
Here's a screenshot of UPX in action.
To decompress files, you have to use the "-d" flag, thus:
upx -d /path/to/prog/*
A common list of command-line options, are:
Usage: upx [-123456789dlthVL] [-qvfk] [-o file] file..

Commands:
  -1      compress faster
  -9      compress better
  --best  compress best (can be very slow for big files)
  -d      decompress
  -l      list compressed file
  -t      test compressed file
  -V      display version number
  -h      give this help
  -L      display software license

Options:
  -q      be quiet
  -v      be verbose
  -oFILE  write output to `FILE'
  -f      force compression of suspicious files
  --no-color, --mono, --color, --no-progress   change look

Backup options:
  -k, --backup     keep backup files
  --no-backup      no backup files [default]

Overlay options:
  --overlay=copy   copy any extra data attached to the file [default]
  --overlay=strip  strip any extra data attached to the file [dangerous]
  --overlay=skip   don't compress a file with an overlay
Overall, the performance of the compressed files has been OK, and I have not noticed any loss in functionality. The only program that took a long time to load once it had been compressed was Netscape, but that did not bother me too much (Netscape uses so much memory, I am used to waiting for it to load).
In issue 67 of the Linux Gazette, Mike Orr, reviewed cowsay/cowthink, a configurable talking cow that displays messages in speech bubbles. Everything is written in Perl (my second-favourite scripting language, after bash) and is displayed in ASCII. I was so impressed with the cows that I decided to look for more ASCII programs. Out came my SuSE distribution CD's and I found the program bb.......
bb is a fully-working ASCII demo, written in ANSI C and SVGA compatible. bb makes use of the aalib package (the ASCII art library), so you will have to install that along with the main package. The demo produces a range of different simulated pictures, from random tumbling characters (going through different shades of grey) to an ASCII-rendered Mandelbrot fractal!! (which incidentally inspired the colour version of Xaos).
You can get bb from ftp://ftp.bonn.linux.de/pub/misc/bb-1.2.tar.gz.
bb used to have a home page, but unfortunately it's gone. However, project aa (the ASCII Art library) is what bb is based on, and it has a home page at http://aa-project.sourceforge.net/. The aa page also discusses aview (an ASCII art viewer), aatv (to view TV programs on your text console), ttyquake (a text version of Quake), Dumb (a Doom clone), apron (an mpeg1 player), and other programs. ttyquake does require the graphical Quake to be installed, so it uses the original Quake game files. One commentator writes of ttyquake, "people are starving to death in this world... and somebody had time for this....."
bb is best run from the console, but it can be run from within an X-terminal window, as shown by this screenshot.
The valid command-line options for bb are:
Usage: bb [aaoptions] [number]

Options:
  -loop            play demo in infinite loop

AAlib options:
  -driver          select driver (available drivers: linux curses X11 stdout stderr)
  -kbddriver       select keyboard driver (available drivers: curses X11 stdin)
  -mousedriver     select mouse driver (available drivers: X11 gpm curses dos)

Size options:
  -width           set width
  -height          set height
  -minwidth        set minimal width
  -minheight       set minimal height
  -maxwidth        set maximal width
  -maxheight       set maximal height
  -recwidth        set recommended width
  -recheight       set recommended height

Attributes:
  -dim             enable usage of dim (half bright) attribute
  -bold            enable usage of bold (double bright) attribute
  -reverse         enable usage of reverse attribute
  -normal          enable usage of normal attribute
  -boldfont        enable usage of boldfont attribute
  -no<attr>        disable (i.e. -nobold)

Font rendering options:
  -extended        use all 256 characters
  -eight           use eight bit ascii
  -font <font>     select font (this option only has effect on hardware where
                   aalib is unable to determine the current font; available
                   fonts: vga8 vga9 mda14 vga14 X8x13 X8x16 X8x13bold vgagl8 line)

Rendering options:
  -inverse         enable inverse rendering
  -noinverse       disable inverse rendering
  -bright <val>    set brightness (0-255)
  -contrast <val>  set contrast (0-255)
  -gamma <val>     set gamma correction value (0-1)

Dithering options:
  -nodither        disable dithering
  -floyd_steinberg Floyd-Steinberg dithering
  -error_distribution  error distribution dithering
  -random <val>    set random dithering value (0-inf)

Monitor parameters:
  -dimmul <val>    multiply factor for dim color (5.3)
  -boldmul <val>   multiply factor for bold color (2.7)

The default parameters are set to fit the author's monitor (a 15" Goldstar)
with contrast set to maximum and brightness set to make black black. These
values depend on the quality of your monitor (and the setting of its controls).
Default settings should be OK for most PC monitors, but an ideal monitor needs
dimmul=1.71 boldmul=1.43 (the monitor used by SGI, for example, is very close
to these values; old 14" VGA monitors need higher values).
I really do think that if you're into ASCII art, you should give this demo a go. It lasts for approximately 5 minutes.
Well, you've made it to the end of this month's article. Looking ahead to next month, I am going to write an article about how to write efficient manual pages (anyone remember groff processing??) and whatever else I can think of. However, it would be nice to hear from anyone who has article suggestions, as I am running out of ideas.....slowly. If there is anything you feel would be good to include in the LWM, drop me a note :-)
Also, in case anyone is interested, all the screenshots that have appeared in this document were made using the "GNU Image Manipulation Program", and are of the FVWM2 window manager running the M4-preprocessed AnotherLevel configuration.
As a final notice, I would like to say that as I am no longer at school anymore, my "[email protected]. dorset.sch.uk" account is invalid, and I now have a new account (see below).
So until next time....happy linuxing!
Send Your Comments |
Any comments, suggestions, ideas, etc can be mailed to me by clicking the e-mail address link below:
This article should help you with connecting your old trusty ST to your Linux box as a terminal.
Before I start, many things mentioned in this article apply to other boxes than an Atari ST as well. For sure, you could use an Amiga or a Sinclair QL as a terminal as well. (Linus had a Sinclair QL, by the way, before he got his PC.)
Actually, I started computing on the Atari ST more than 10 years ago, when my brother bought an Acorn Archimedes and I got his old 520STM. This is why I love ST emulators and the good old games today. I still have a functioning ST (actually a newer 1040STFM I bought secondhand) and I thought about using it for more than a video game system.
This led me to use it as a terminal on my Linux box. I must admit I do not have any real need to connect a second terminal to my Linux box, as I'm its only user. It's simply experimenting and ST nostalgia :).
Now this solution can be used to transfer files and programs to the ST and finally to give the ST limited Internet access. If the terminal emulator is good enough, you can use lynx and w3m to surf the web, read mail with mutt and read news with tin or slrn. You can even play 2-player games for the console like Nettris with this solution.
But now, let's face it.
A terminal is simply a display with keyboard, only capable of displaying incoming text, perhaps with special attributes coded into so-called escape sequences and capable of sending the keystrokes to the remote end.
In general a hardware terminal is dumb. It cannot do anything more than that.
In ancient times of computing, terminals were used to connect multiple users to a mainframe.
If you have such an old-style terminal or you can get one, you can connect it in the same way as described here.
The functions of a terminal--receiving, transmitting and displaying--can easily be achieved using software. And this is the way we go here. We use special terminal software to make the ST acting as an old-time terminal.
You need the following hardware for this project:
- an Atari ST with a monitor and some terminal software
- a null modem cable (plus a DB25-to-DB9 adaptor if your PC has the small serial port)
- a Linux box with a free serial port
As stated above, in general you can substitute the Atari ST with any other computer that has an RS232 socket and an 80-column display.
You can use real terminals in the same fashion, although you cannot download or upload software then.
The kernel shouldn't be a problem. If you can use your external serial modem to connect to the net, everything should work out of the box.
In most cases, the kernel will have serial support compiled in or supplied as a kernel module. If not, you must compile a new kernel. I will not handle this in detail here, there are several HOWTOs on this subject available.
This step is required to give the ST a login prompt over the serial line.
First you need a suitable getty. Such a program displays a prompt for the username on the line. It then invokes the program /bin/login to log the user in to the system.
The getty processes are all spawned by the init process. Init knows from the file /etc/inittab which getty processes to spawn.
Most distributions ship either agetty or mgetty, or both. I use agetty and so this focuses on using agetty.
Now become root and open /etc/inittab with your favourite editor.
The next step is to add a line to the file to spawn a new getty process. This looks like this:
S0:12345:respawn:/sbin/agetty -w -h -L 19200 ttyS0 ansi
Looks ugly, huh? No fear, I'll tell you the meaning of its components.
I number the parts of the line from the left, with 1 being the leftmost (S0). Now in numbered order:
1. S0 -- a unique identifier for this inittab entry (serial line 0).
2. 12345 -- the runlevels in which the process should be spawned.
3. respawn -- tells init to restart the getty whenever it exits, so a new login prompt appears after each logout.
4. /sbin/agetty ... -- the command to run; its arguments are explained in the next section.
Now save the file and leave your editor. Type init q in your shell to tell init to reread its inittab file.
First, give the full path to agetty. If you don't know where it is located, try a locate bin/agetty in a shell.
Then you may have one or more of the following command-line arguments. (See the previous section for an example.)
-w tells agetty to wait for a CR (ASCII 13) on the line before displaying the prompt
-h tells agetty to use hardware flow control on the line (aka RTS/CTS)
-L tells agetty that this is a local line. It will not monitor the carrier then.
## This is the baud rate to be used. 19200, 9600 and 2400 are good values. The ST cannot handle more than 19200.
ttyS? This is the serial device to be used. Use ttyS0 for COM1, ttyS1 for COM2 and so forth. Make sure not to use your modem port. If you only have one serial port, you'll have to switch between modem and terminal. In such a case it is better to use mgetty, as it can handle both incoming and outgoing calls at once on one line (intended for modem usage, however).
ansi is the terminal type to used. You could try vt100 or atari as well depending on the capability of your terminal software.
When in doubt, running man agetty in your shell will help you.
First of all, connect both machines with the null modem cable. The ST has a socket with a phone symbol next to it; this is the serial port.
You may need an adapter cable that converts DB25 to DB9 or vice versa because the ST has a broad port while most PCs have a small one. Null modems may be found in your local computer store. Buy one that fits to your PCs serial port and an adaptor that connects the null modem to the ST.
Now load the terminal program on the ST. Make sure to set it to the same baud rate as given to agetty and to 8N1. Press Return several times. You should get the login prompt of your Linux box on the ST screen and you should be able to login and to use line oriented shells and programs.
You can try curses-based programs to check the capabilities of your terminal software. With good terminal software, you should be able to use lynx, w3m, mutt and vi. Some terminal emulators are even able to display the Midnight Commander correct and with colours.
The VT52 emulator supplied with your ST can be used for simple tasks and for testing. It lacks a decent ANSI terminal and file transfer options.
If your ST is equipped with more than 1MB of RAM, you should give Rufus or Connect a try.
ST Term works well with half-meg STs. The VT52 emulator together with an ANSI enhancer is also a good choice where memory is tight.
ANSITERM features full ANSI support including colours and 80 column display in low resolution. However you'd better use a good monitor or your eyes will be ruined.
TAZ runs in medium and monochrome resolution. It features even 16 colours with page flipping and palette switching technology in medium resolution. However this mode requires a monitor capable of 60Hz.
The interface looks much like minicom or telix and is pleasing.
I recommend this program because the terminal emulation is very good and it has nifty features. It may even run on half-meg STs, though I haven't tried it.
Make sure to use 80 columns because most programs do not work well with less.
If you have one, the monochrome monitor is definitely better. However you'll miss colours. With TAZ you shouldn't have many problems.
The following FTP server has several ST programs available for download. ftp://wuarchive.wustl.edu/systems/atari/umich.edu/
This feature is one of the main reasons to connect the ST via null modem. Either to save your old ST files or to use software downloaded from the net.
You should use the ZModem protocol because it's:
- fast
- reliable (it does CRC error checking)
- able to resume interrupted transfers
Make sure you have a ZModem receiver and sender on the ST end.
To transfer a file from the Linux box to the ST, simply type:
sz filename

at your shell prompt. Now invoke the receiving process on the ST end. Some terminals are able to autostart a ZModem download.
The other way round is as easy. Type:
rz

at your shell prompt. Then activate the ZModem upload function of your terminal software.
If it hangs, press Ctrl-C several times. If all else fails, kill the rz/sz process on the Linux box.
If you have the right hardware handy, this is a straightforward thing and pretty easy to setup. I found that the trickiest part was to find suitable terminal software for the ST end.
You are not limited to connecting an Atari ST as a terminal. Of course, you can use a Commodore Amiga or a Sinclair QL as well. The system used should match the following pattern:
- it has an RS232 serial port
- it can display 80 columns
- terminal software is available for it
You can do the same things with them as with the ST.
I hope this all helped you to have some more fun with your old machine and to learn a little bit more about pre-cheap-Ethernet remote working.
Ever had to download a file so huge over a link so slow that you'd need to keep the web browser open for hours or days? What if you had 40 files linked from a single web page, all of which you needed -- will you tediously click on each one? What if the browser crashes before it can finish? GNU/Linux comes equipped with a handy set of tools for downloading in the background, independent of the browser. This allows you to log out, resume interrupted downloads, and even schedule them to occur during off-peak Net usage hours.
Web browsers are designed to make the Web interactive -- click and expect results within seconds. But there are still many files that take longer than a few seconds to download, even over the quickest of connections. An example is the ISO images popular among those burning their own GNU/Linux CD-ROM distros. Some web browsers, especially poorly coded ones, do not behave well over long durations, leaking memory or crashing at the most inopportune moment. Despite the fusion of some browsers with file managers, many still do not support the multi-selection and rubber-banding operations that make it easy to transfer several files in one go. You also have to stay logged in until the entire file has arrived. Finally, you have to be present at the office to click the link initiating the download, thus angering coworkers with whom office bandwidth is being shared.
Downloading large files is a task better suited to a different suite of tools. This article will discuss how to combine various GNU/Linux utilities, namely lynx, wget, at and crontab, to solve a variety of file-transfer situations. A small amount of simple scripting will also be employed, so a little knowledge of the bash shell will help.
All the major distributions include the wget downloading tool.
bash$ wget http://place.your.url/here
wget can also handle FTP and date stamps, and can recursively mirror entire web-site directory trees -- and, if you're not careful, entire websites and whatever other sites they link to:
bash$ wget -m http://target.web.site/subdirectory
Because mirroring can place high loads on servers, wget obeys the "robots.txt" protocol when mirroring. Several command options control what exactly gets mirrored, limiting the types of links followed and the file types downloaded. For example, to follow only relative links and skip GIF images:
bash$ wget -m -L --reject=gif http://target.web.site/subdirectory
wget can also resume interrupted downloads ("-c" option) when given the incomplete file to which to append the remaining data. This operation needs to be supported by the server.
bash$ wget -c http://the.url.of/incomplete/file
The resumption and mirroring can be combined, allowing one to mirror a large collection of files over the period of many separate download sessions. More on how to automate this later.
If you're experiencing download interruptions as often as I do in my office, you can tell wget to retry the URL several times:
bash$ wget -t 5 http://place.your.url/here
Here we give up after 5 attempts. Use "-t inf" to never give up.
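The options combine, so an unattended transfer that both resumes and retries indefinitely might look like this (same placeholder URL as above):

bash$ wget -c -t inf http://place.your.url/here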
What about proxy firewalls? Use the http_proxy environment variable or the .wgetrc configuration file to specify a proxy via which to download. One problem with proxied downloads over an intermittent connection is that resumption can sometimes fail. If a proxied download is interrupted, the proxy server will cache only an incomplete copy of the file. When you try to use "wget -c" to get the remainder of the file, the proxy checks its cache and erroneously reports that you have the entire file already. You can coax most proxies to bypass their cache by adding a special header to your download request:
bash$ wget -c --header="Pragma: no-cache" http://place.your.url/here
The "--header" option can add any number and manner of headers, by which one can modify the behaviour of web servers and proxies. Some sites refuse to serve files via externally sourced links; content is delivered to browsers only if they access it via some other page on the same site. You can get around this by appending a "Referer:" header:
bash$ wget --header="Referer: http://coming.from.this/page" http://surfing.to.this/page
Some particularly anti-social web sites will only serve content to a specific brand of browser. Get around this with a "User-Agent:" header:
bash$ wget --header="User-Agent: Mozilla/4.0 (compatible; MSIE 5.0; Windows NT; DigExt)" http://msie.only.url/here
(Warning: the above tip may be considered circumventing a content licensing mechanism and there exist anti-social legal systems that have deemed these actions to be illegal. Check your local legislature. Your mileage may vary.)
If you are downloading large files on your office computer over a connection shared with easily angered coworkers who don't like their streaming media slowed to a crawl, you should consider starting your file transfers in the off-peak hours. You do not have to stay in the office after everyone has left, nor remember to do a remote login from home after dinner. Make use of the at job scheduler:
bash$ at 2300
warning: commands will be executed using /bin/sh
at> wget http://place.your.url/here
at> press Ctrl-D
Here, we want to begin downloading at 11.00pm. Make sure that the atd scheduling daemon is running in the background for this to work.
When there is a lot of data to download in one or several files, and your bandwidth is comparable to the carrier pigeon protocol, you will often find that the download you scheduled to occur has not yet completed when you arrive at work in the morning. Being a good neighbour, you kill the job and submit another at job, this time using "wget -c", repeating as necessary over as many days as it'll take. It is better to automate this using a crontab. Create a plain text file, called "crontab.txt", containing something like the following:
0 23 * * 1-5 wget -c -N http://place.your.url/here
0 6 * * 1-5 killall wget
This will be your crontab file which specifies what jobs to execute at periodic intervals. The first five columns say when to execute the command, and the remainder of each line says what to execute. The first two columns indicate the time of day -- 0 minutes past 11pm to start wget, 0 minutes past 6am to killall wget. The * in the 3rd and 4th columns indicates that these actions are to occur every day of every month. The 5th column indicates on which days of the week to schedule each operation -- "1-5" is Monday to Friday.
So every weekday at 11pm your download will begin, and at 6am every weekday any wget still in progress will be terminated. To activate this crontab schedule you need to issue the command:
bash$ crontab crontab.txt
The "-N" option for wget will check the timestamp of the target file and halt downloading if they match, which is an indication that the entire file has been transferred. So you can just set it and forget it. "crontab -r" will remove this schedule. I've downloaded many an ISO image over shared dial-up connections using this approach.
Some web pages are generated on demand, since they are subject to frequent changes, sometimes several times a day. Because the target is technically not a file, there is no file length, and resuming a download becomes meaningless: the "-c" option fails to work. For example, a PHP-generated page at Linux Weekly News:
bash$ wget http://lwn.net/bigpage.php3

If you interrupt the download and try to resume, it starts over from scratch. My office Net connection is at times so poor that I've written a simple script to detect when a dynamic HTML page has been delivered completely:
#!/bin/bash
# create it if absent
touch bigpage.php3
# check if we got the whole thing
while ! grep -qi '</html>' bigpage.php3
do
    rm -f bigpage.php3
    # download LWN in one big page
    wget http://lwn.net/bigpage.php3
done
The above bash script keeps downloading the document until the string "</html>" can be found, which marks the end of the file.
URLs beginning with "https://" must access remote files through the Secure Sockets Layer. You will find another download utility, called curl, handy in these situations.
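For instance, a basic secure download with curl might look like this (the URL is invented for illustration):

bash$ curl -o report.pdf https://secure.example.com/report.pdf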
Some web sites force-feed cookies to the browser before serving the requested content. One must add a "Cookie:" header with the correct information which can be obtained from your web browser's cookie file. For lynx and Mozilla cookie file formats:
bash$ cookie=$( grep nytimes ~/.lynx_cookies |awk '{printf("%s=%s;",$6,$7)}' )
will construct the required cookie for downloading stuff from http://www.nytimes.com, assuming that you have already registered with the site using this browser. w3m uses a slightly different cookie file format:
bash$ cookie=$( grep nytimes ~/.w3m/cookie |awk '{printf("%s=%s;",$2,$3)}' )
Downloading can now be carried out thus:
bash$ wget --header="Cookie: $cookie" http://www.nytimes.com/reuters/technology/tech-tech-supercomput.html
or using the curl tool:
bash$ curl -v -b $cookie -o supercomp.html http://www.nytimes.com/reuters/technology/tech-tech-supercomput.html
So far, we've only been downloading single files or mirroring entire website directories. Sometimes one is interested in downloading a large number of files whose URLs are listed on a web page, without performing a full-scale mirror of the entire site. An example would be downloading the top 20 music files on a site that displays the top 100 in order. Here the "--accept" and "--reject" options won't work, since they only operate on file extensions. Instead, make use of "lynx -dump".
bash$ lynx -dump ftp://ftp.ssc.com/pub/lg/ |grep 'gz$' |tail -10 |awk '{print $2}' > urllist.txt
The output from lynx can then be filtered using the various GNU text processing utilities. In the above example, we extract URLs ending in "gz" and store the last 10 of these in a file. A tiny bash scripting command will automatically download any URLs listed in this file:
bash$ for x in $(cat urllist.txt)
> do
> wget $x
> done
We've succeeded in downloading the last 10 issues of Linux Gazette.
If you're one of the select few to be drowning in bandwidth, and your file downloads are slowed only by bottlenecks at the web server end, this trick can help "shotgun" the file transfer process. It requires the use of curl and several mirror web sites where identical copies of the target file are located. For example, suppose you want to download the Mandrake 8.0 ISO from the following three locations:
url1=http://ftp.eecs.umich.edu/pub/linux/mandrake/iso/Mandrake80-inst.iso
url2=http://ftp.rpmfind.net/linux/Mandrake/iso/Mandrake80-inst.iso
url3=http://ftp.wayne.edu/linux/mandrake/iso/Mandrake80-inst.iso
The length of the file is 677281792, so initiate three simultaneous downloads using curl's "--range" option:
bash$ curl -r 0-199999999 -o mdk-iso.part1 $url1 &
bash$ curl -r 200000000-399999999 -o mdk-iso.part2 $url2 &
bash$ curl -r 400000000- -o mdk-iso.part3 $url3 &
This creates three background download processes, each transferring a different part of the ISO image from a different server. The "-r" option specifies a subrange of bytes to extract from the target file. When completed, simply cat all three parts together -- cat mdk-iso.part? > mdk-80.iso. (Checking the md5 hash before burning to CD-R is strongly recommended.) Launching each curl in its own window while using the "--verbose" option allows one to track the progress of each transfer.
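The byte ranges above were worked out by hand; a small script can compute them from the file length instead. A minimal sketch, assuming the url1, url2 and url3 variables from above are already set:

#!/bin/bash
# total file length in bytes, taken from the example above
length=677281792
# size of each of the three pieces, rounded up
chunk=$(( (length + 2) / 3 ))

curl -r 0-$((chunk - 1)) -o mdk-iso.part1 $url1 &
curl -r $chunk-$((2 * chunk - 1)) -o mdk-iso.part2 $url2 &
curl -r $((2 * chunk))- -o mdk-iso.part3 $url3 &
wait    # block until all three background transfers finish

# stitch the pieces together and verify before burning
cat mdk-iso.part? > mdk-80.iso
md5sum mdk-80.iso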
Every day, millions of Linux users all over the world switch on their computers, wait a few seconds (or minutes, depending on their CPU speed) to watch their favorite operating system boot, and finally get the "login" prompt. That's it. It is a real pleasure just to log into your favorite operating system and work, isn't it? Well, for me it surely is. (Though I only need to switch on my own computer every two months or so, because I leave it running all the time!)
As many readers must have noticed, when the computer is bootstrapping itself, a lot of messages come up on the screen. These can be viewed later by issuing the command cat /var/log/dmesg | more (it usually produces a lot of output). Now, the question is: what do these messages really mean? That's easy to answer: look into any Linux textbook and you will find something like "they refer to the kernel boot messages" and so on. But is that all? And what exactly is meant by "kernel boot messages"?
Life has taught me a lot of things; patience is one of them. Understanding the internal workings of Linux requires a lot of patience and sacrifice, because it requires a proper understanding of the Linux kernel architecture. Most Linux users don't have that much time to spend on the kernel architecture, or maybe are not that interested in it, while some may have other important things to do in life.
I am NOT going to explain the Linux kernel architecture in this article, because that would require a whole book. Rather, I explain in detail one of the most fundamental operations of a computer system: bootstrapping a computer running Linux. In other words, I will explain (or at least try to) the entire process from the moment you switch on the computer until the "login" prompt appears on the screen (assuming you are using CLI mode). In short, we will see how the Linux kernel, and thus the whole system, is "bootstrapped".
1. Bootstrapping. What's that?
Traditionally, the term "bootstrap" evokes the image of a person pulling himself up by his own bootstraps. In operating systems, it refers to the process by which a part of the operating system is brought into main memory and executed by the processor. During bootstrap the internal data structures of the Linux kernel are initialized, values are assigned to their constituent variables, and processes are created (which usually spawn other significant processes later). Bootstrapping a computer is a long and complicated task, because when the computer is switched on all the hardware devices are in an unpredictable state, while the RAM is inactive and holds random data. The thing to keep in mind is that the bootstrapping process is highly dependent on the computer architecture.
2. BIOS. What's that? What does it do?
When a computer is powered on, it is initially practically useless: the RAM chips contain random data and are uninitialized, and there is no operating system present. To begin the bootstrapping, a special hardware circuit raises the logical value of the RESET pin of the CPU. Then some CPU registers, including cs (the code segment register, a segmentation register which points to a segment containing program instructions) and eip (the instruction pointer, which holds the address of the next instruction to be executed), are set to fixed values, and the code found at physical address 0xfffffff0 is executed. This address is mapped by the hardware to a read-only, permanent memory chip, a special kind of memory usually called ROM (Read-Only Memory). The BIOS (Basic Input/Output System) is a set of programs stored in ROM. It consists of several interrupt-driven low-level procedures used by various operating systems to handle the hardware devices that constitute the computer system. Microsoft DOS is one such OS.
The question that now comes up is: then, does Linux use the BIOS to initialize the hardware devices attached to the computer system? Or does something else perform the same task, and if so, what? Well, the answer is not that simple and needs to be understood carefully. Starting with the 80386 model, Intel microprocessors perform address translation (from logical address to linear address to physical address) in two different modes, called "Real mode" and "Protected mode". Real mode exists mainly to maintain processor compatibility with older models; in fact, all BIOS procedures are executed in Real mode. But the Linux kernel executes in Protected mode, NOT in Real mode. Thus, once initialized, Linux does NOT make any use of the BIOS but provides its own device drivers for every hardware device on the computer.
The question that now comes up is: if Linux uses Protected mode, why can't the BIOS use the same mode? The BIOS uses Real mode because Real mode addresses are the only ones available when the computer is switched on. A Real mode address consists of a segment seg and an offset off; the corresponding physical address is given by seg*16+off. For example, seg 0x1234 with off 0x0010 yields the physical address 0x12340 + 0x0010 = 0x12350. (Please note: this is unrelated to Protected mode segment descriptors, which are 8 bytes long, so a descriptor's relative address inside the GDT or the LDT is obtained by multiplying the most significant 13 bits of the Segment Selector by 8.)
So, does this mean Linux never uses the BIOS during the entire bootstrapping process? Well, no: Linux is forced to use the BIOS in the bootstrapping phase, when it has to retrieve the kernel image from disk or some other external device.
To sum up this section, let's look closely at the main operations that the BIOS performs during the bootstrapping sequence. They are as follows:
3. Boot Loader. What's that? What does it do?
The BIOS invokes (note: NOT executes) a special program whose main (indeed only) task is to load the image of an operating system kernel into RAM. This program is called the boot loader. Before we proceed any further, let's take a brief look at the different ways a system can be booted:
1. Booting Linux from a floppy disk: the instructions stored in the first sector of the floppy disk are loaded into RAM and executed. These instructions then copy all the remaining sectors containing the kernel image into RAM.
2. Booting Linux from a hard disk: here the booting procedure is different. The first sector of the hard disk, called the Master Boot Record (MBR), includes the partition table and a small program, which loads the first sector of the partition containing the operating system to be started. Linux, being a highly flexible and sophisticated piece of software, replaces this small program in the MBR with a sophisticated program called LILO (the LInux LOader). LILO allows users to select the operating system to be booted.
Now let's take a deeper and more detailed look at these two ways of booting a system.
4. Booting Linux from Floppy Disk.
The Linux kernel fits on a single 1.44-MB floppy disk. (In fact, there exists a stripped-down type of Red Hat Linux installation which requires approximately 2 MB of physical RAM and approximately 1.44 MB of hard disk space to run. That's what Linux is all about, after all. Isn't it?) But the only way to store a Linux kernel on a single floppy disk is to compress the kernel image. The point to remember here is that compression is done at compile time, while decompression is done at boot time by the loader.
The boot loader used when booting Linux from a floppy disk is very simple. It is coded in the assembly-language file /usr/src/linux-2.4.2/arch/i386/boot/bootsect.S. When we compile the Linux kernel source and obtain a new kernel image, the executable code yielded by this assembly-language file is placed at the beginning of the kernel image file. This makes it easy to produce a floppy disk containing the Linux kernel, as the sketch below shows.
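As a concrete illustration, such a boot floppy is traditionally written with dd. A minimal sketch; the image path shown is the usual location for a 2.4.2 build, but treat it as an assumption for your own tree:

# copy the kernel image to the raw floppy device, starting at sector 0
dd if=/usr/src/linux-2.4.2/arch/i386/boot/zImage of=/dev/fd0 bs=512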
The floppy is created by copying the kernel image onto the disk starting from its first sector. When the BIOS loads the first sector of the floppy disk, it is actually copying the code of the boot loader. The boot loader, which is invoked by the BIOS (by jumping to the physical address 0x00007c00), performs the following operations:
5. Booting Linux from Hard Disk.
Most often, the Linux kernel is loaded from a hard disk. This requires a two-stage boot loader. On Intel systems the most commonly used Linux boot loader is named LILO; for other architectures other Linux boot loaders exist. LILO may be installed either in the MBR (please note: during Red Hat Linux installation there comes a step where the user must either write LILO to the MBR or put it in the boot sector) or in the boot sector of an active disk partition.
LILO is broken into two parts; otherwise it would be too large to fit into the MBR. The MBR or the disk partition boot sector includes a small boot loader, which is loaded into RAM starting from address 0x00007c00 by the BIOS. This small program moves itself to address 0x0009a000, sets up the Real mode stack, and finally loads the second part of the LILO boot loader. (Please note: the Real mode stack ranges from 0x0009b000 down to 0x0009a200.)
The second part of LILO reads the list of available operating systems from disk and offers the user a prompt from which to choose one. (On my system, one can opt for any one of eight custom Linux kernels!) After the user has chosen a kernel to be loaded, the boot loader may either copy the boot sector of the corresponding partition into RAM and execute it, or directly copy the kernel image into RAM.
Since the Linux kernel image must be booted, the Linux boot loader performs essentially the same operations as the boot loader integrated into the kernel image. The boot loader, which is invoked by the BIOS (by jumping to the physical address 0x00007c00), performs the following operations:
6. The setup( ) function. What does it do?
Now the time has come to take a deeper look at some of the essential assembly-language functions that are indispensable to the bootstrapping process. Here we look at the setup( ) function.
The setup( ) function can be found in the file /usr/src/linux-2.4.2/arch/i386/boot/setup.S. Its code is placed by the linker immediately after the integrated boot loader of the kernel, that is, at offset 0x200 of the kernel image file. This allows the boot loader to locate the code easily and copy it into RAM starting from the physical address 0x00090200.
Now the question that comes up is: What does this setup( ) function do? As its name suggests, it's supposed to set up something. But what? And how?
As we all know, for the kernel to operate properly, all the hardware devices in the computer must be detected and then initialized in an orderly fashion. The setup( ) function initializes the hardware devices and thus creates an environment for the kernel to operate in.
But hang on a second. Didn't we see a few minutes earlier that the BIOS was supposed to do all this? Yes, you are right, 100%. But although the BIOS has already initialized most of the hardware, the Linux kernel does NOT rely on it and initializes everything in its own fashion. Why does Linux operate this way? The answer is both very easy and extremely difficult to explain: the Linux kernel was designed this way to enhance portability and robustness. This is one of the many features that make the Linux kernel stand out among the Unix and Unix-like kernels available, and that make it unique in so many ways. A proper understanding of why and exactly how the Linux kernel implements this feature is beyond the scope of this article and would require an extremely detailed coverage of the Linux kernel architecture.
The setup( ) code mainly performs the following tasks:
From here on the going gets a bit tougher, as the bootstrap process gets more complicated. I hope you put everything else aside and follow carefully from here on.
7. The startup_32( ) function - 1st function. What does it do?
Okay, let's get to the confusing point straight away. There exist two functions called startup_32( ). Although both are assembly-language functions and both are required for the bootstrap process, they are totally different functions. The one we refer to here is coded in the file /usr/src/linux-2.4.2/arch/i386/boot/compressed/head.S. By the time the setup( ) code has executed, this function has been moved either to physical address 0x00100000 or to physical address 0x00001000, depending on whether the kernel image was loaded "high" or "low" in RAM.
When this function executes, it performs the following operations:
Now, after the fourth operation mentioned above is over, code execution is taken over by the other startup_32( ) function. In other words, the second one takes over the bootstrapping process.
8. The startup_32( ) function - 2nd function. What does it do?
The decompressed Linux kernel image begins with another startup_32( ) function. This function lives in the file /usr/src/linux-2.4.2/arch/i386/kernel/head.S.
The question that must come up here is: hey, doesn't using two different functions with the same name cause a problem? The answer is: no, not at all, because both functions are executed by jumping to their initial physical addresses, and hence each runs in its own execution environment. No problem at all!
Now let's look at what the second startup_32( ) function does. Essentially, it sets up the execution environment for the first Linux process (process 0). The function performs the following operations:
9. The start_kernel( ) function. What does it do?
The start_kernel( ) function completes the initialization of the Linux kernel. All the essential kernel components are initialized when this function executes; this is in fact the last step of the entire bootstrapping process.
The following takes place when this function executes:
The "Linux version 2.4.2 &" message is displayed right after the beginning of start_kernel( ). Many other messages are displayed also. At the very end, the very familiar login prompt appears on the console. This tells the user that the Linux Kernel is up and running, and just raring to go&. And dominate the world!
10. Conclusion
This sums up our long journey through the entire bootstrapping process of a Linux system, or in other words, of a computer running the Linux operating system. As readers will rightly note, I have NOT explained most of the other components and terms that I have used, such as the IDT, the GDT, and the eip and cs registers. A full explanation of all these terms would make it impossible to complete the article in just a few pages and would make the whole topic rather boring. So I hope readers will understand that in this article I have provided a glimpse of the processes and various other things that take place when a Linux system boots. In-depth coverage of the associated functions, such as paging_init( ) and mem_init( ), is beyond the scope of this topic.
This article provides an overview of GNOME programming in Linux using the GTK+ toolkit. Please note: it is assumed that the reader knows the basics of getting around in Linux, knows how to use the GNOME environment, and possesses the required level of C and/or C++ programming experience.
The code samples provided with the text have been checked on a computer with the following configuration: Compaq Presario 4010 Series, 15.5 GB hard disk, 96 MB RAM, 400 MHz Intel Celeron processor, Red Hat Linux 7.1 distribution (kernel 2.4.2-2).
This article has been divided into the following sections for easy understanding of the subject matter:
1. What is GNOME all about? An Introduction.
2. The GNOME Architecture.
3. GTK+ - An Introduction
4. A basic program.
5. Signals & Callbacks
6. Containers
7. Buttons
8. Entry Widgets
9. List boxes & Combo boxes
10. Menus & Toolbars
11. Dialog boxes
12. Conclusion & Links for Further study
1. What is GNOME all about? An Introduction.
Before entering the exciting world of GNOME programming in Linux, let's try to understand what GNOME actually is. GNOME is the acronym for "GNU Network Object Model Environment" (where GNU itself stands for "GNU's Not Unix"). Though it sounds a bit complicated, GNOME is a software project with a simple aim: to provide all Linux users with an extremely user-friendly, yet powerful and complete, desktop environment. GNOME is currently the default desktop installed with the latest Red Hat and Debian releases of Linux.
For more specific information on GNOME and its various wonderful features, make sure you check out the GNOME Project home page at http://www.gnome.org, which provides readers with a wealth of information on GNOME, including online documentation and news; one can also download the binaries and source code of GNOME, compatible with most Linux systems.
Now let's look at GNOME from both a Linux programmer's and a Linux system administrator's point of view. The basic question that comes to mind is: do they think and feel the same when they talk about GNOME? This is not easy to answer, since most Linux system administrators are, or have been, Linux programmers. For the average Linux system administrator, the GNOME environment provides a wealth of tools that makes his or her administrative job simpler. The GNOME programmer, meanwhile, has the responsibility of continuing to provide these facilities by designing ever better programs. So they are in perfect harmony with each other as far as their respective work is concerned.
Now let's take a closer look at GNOME's functionality. GNOME is actually a programming layer that sits between the X Window System (or X) and the window manager. Thus, as mentioned earlier, it provides Linux GUI programmers with enormous functionality that they can harness to design Linux-based programs. Most significant of all, the reason GNOME is nearly indispensable for Linux/Unix developers is that it provides them with an integrated framework specifically designed for building open-source applications with a consistent graphical user interface.
The GNOME Project started in August 1997. Some of the initial founders included, amongst others, Peter Mattis, Spencer Kimball, Richard Stallman, and Erik Troan and Marc Ewing of Red Hat, Inc.
GNOME's extremely powerful yet flexible architecture is what provides its terrific functionality. The base toolkit in GNOME is GTK+ (the GIMP toolkit), originally written for use in the GIMP (GNU Image Manipulation Program). A proper understanding of GTK+ is essential for understanding GNOME programming. GTK+ is an object-oriented, cross-platform, language-neutral toolkit that is primarily used for creating applications independently of GNOME. The question that comes up, then, is: why was GTK+ chosen as the toolkit for GNOME? The answer is simple: for its support of many programming languages, including C, C++, Perl, Python, Ada and others. But keep in mind that both GNOME and GTK+ are written in C, so we will deal here with C only.
Another question that should come up in the reader's mind is: Hey, what do these things called "Toolkits" contain? Toolkits like GTK+, Qt (the KDE Environment is based on Qt) are collections of widgets. Which brings us to the question: What are "Widgets"?
Widgets are GUI objects like buttons, menus, dialog boxes and other such objects or object-related general functions. This can be compared with Active Template Library (ATL 3.0) on the Microsoft Platform, which provides Component Object Model (COM) developers with a ready-made framework for creating COM Objects and Components (ActiveX EXEs & ActiveX DLLs).
Now let's take a closer look into some of the features of GTK+:
GTK+ is built on two supporting libraries: GLIB and GDK (the GIMP Drawing Kit).
GLIB defines data types and provides functions that deal with error handling and memory routines.
GDK is the platform dependent layer that is present in between the native graphics API and GTK+.
That's not all. GNOME adds further functionality to GTK+ by adding a separate layer of GNOME specific widgets and libraries.
Thus, GNOME comes with a full-featured, object-oriented, extensible widget-set architecture.
Beyond the functionality of GTK+, the GNOME architecture also includes a custom implementation of the CORBA system, called ORBit, allowing software objects to communicate easily and effectively.
GLIB defines its own set of basic data types. Most of these are equivalent to the standard C data types.
GLIB data type | C language type
gchar | char
gshort | short
glong | long
gint | int
gboolean | gint (an int used as a boolean)
gpointer | void*
A vital requirement for a proper understanding of GTK+ is the concept of the "widget hierarchy". Widgets in GTK+ belong to a hierarchy, so that functions common to a set of widgets need only be implemented once.
The function gtk_widget_show, for example, works on any widget. This removes duplicate code, leading to better and faster program development. New widgets are derived from existing higher-level widgets, so that only the unique features of a widget have to be written by the programmer. For example, let's look closely at this particular widget hierarchy:
GtkObject --> GtkWidget --> GtkContainer --> GtkBin --> GtkWindow --> GnomeApp
Thus, if you look carefully, you can see that the GnomeApp widget is derived from the higher-level GtkWindow, which is itself derived from GtkBin, and so on. If we take into consideration the essential features of the C++ programming language, this reminds us of the concept of inheritance, doesn't it? Well, surely it does. And it is this feature of the widget hierarchy that gives GTK+ its derived functionality.
Let's now take a brief look at the widget creation functions. For these functions to operate correctly, one must make sure that all the GNOME and GTK+ libraries are correctly installed. Another important thing to be kept in mind is that the library's path must be correctly set before trying to compile any code.
Let's first consider the widget-creation function gnome_app_new(). As shown below, this function returns a GtkWidget pointer, the generic widget type:
GtkWidget *ghosh;
ghosh = gnome_app_new(...);
Please note that this also means that if we want to call a GnomeApp-specific function such as gnome_app_set_menus(), we have to use a macro to cast from the GtkWidget type to the GnomeApp type; this is only possible because GnomeApp is derived from GtkWidget (see the hierarchy above).
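As an illustration, the cast might look like this. A minimal sketch: GNOME_APP() is the real casting macro, while the menubar variable is assumed to be a menu-bar widget defined elsewhere:

GtkWidget *ghosh = gnome_app_new("sample", "My Window");
/* GNOME_APP() casts the generic GtkWidget pointer to a GnomeApp
   pointer; legal because GnomeApp derives from GtkWidget */
gnome_app_set_menus(GNOME_APP(ghosh), GTK_MENU_BAR(menubar));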
The best way to learn Linux programming is to understand how things work internally and to do some programming yourself. So let's now look at a small program, to understand the subject matter better.
Boot your system into Linux; if you are in CLI (command-line interface) mode, switch over to GNOME using the command "switchdesk gnome" and then issue a "startx" command to enter the X Window System GUI. Once in the GNOME environment, open the GNOME Terminal, create a file named myapp.c using vi, and type in the following:
/* A sample GNOME program
   Created By: Subhasish Ghosh
   Date: 8th August, 2001 */

#include <gnome.h>

int main(int argc, char *argv[])
{
    GtkWidget *ghosh;

    gnome_init("sample", "0.1", argc, argv);
    ghosh = gnome_app_new("sample", "My Window");
    gtk_widget_show(ghosh);
    gtk_main();
    return 0;
}
Now, to compile the program myapp.c, make sure you type in: (note the back-ticks carefully)
# gcc myapp.c -o myapp `gnome-config --cflags --libs gnomeui`
Note, GNOME comes with a shell script named gnome-config that supplies the compiler with the correct flags required for compilation. Once compiled, run the program using the command:
# ./myapp &
and press enter.
An empty window will appear on the screen, which you can move, resize and close. Now let's take a closer look at the code. At the top we have a few commented lines describing the program, its creator and its date of creation. Though not necessary, it is good programming practice to include these in each and every program. Next we include the header file gnome.h, which takes care of all the necessary GNOME and GTK+ library functions and definitions. Then comes "ghosh", a GtkWidget pointer, which will point to our new window object. The function gnome_init is then called: it initializes the libraries and is needed for correct session management. The ID passed to gnome_init is "sample", the version number is "0.1", followed by the usual command-line arguments of main. These are necessary for the internal workings of GNOME. Then comes the function gnome_app_new(), which, when executed, creates our window. It takes two arguments, as shown in the sample code: "sample" is the application name and "My Window" is the window title. Please note: though the function is named gnome_app_new(), it does NOT create a new application of any sort; it creates a top-level window, that's all. The next function called is gtk_widget_show(), which makes our window visible. Finally comes gtk_main(), a very important function: it hands control over to GNOME, which makes sure that events such as button presses are handled.
So, that's the internal workings of our first GNOME program.
Now let's take a deeper look at two elements of the GNOME programming environment: signals and callbacks. What are they, and what are they used for? Do we really need them? Every time the mouse moves, enters or leaves a widget, a button is pressed, a toggle button is toggled on or off, and so on, a signal is sent to the application. This signal can be passed to a callback function. So, though not always, applications at times need to connect to these events in order to take certain actions. In GNOME/GTK+ we use the function gtk_signal_connect to connect signals to handler functions.
The gtk_signal_connect function has the following 4 parameters:
GtkObject *object -- Which widget the callback is associated with.
const gchar *name -- The signal to be handled.
GtkSignalFunc func -- The function to be called when the signal is sent.
gpointer data -- Any arbitrary data to be given to the signal handling function.
It should be noted that various kinds of widgets emit different signals. The signals from buttons are as follows:
clicked -- Button clicked (pressed & released).
pressed -- Button pressed down by mouse.
released -- Button released.
enter -- Mouse moved over the Button area.
leave -- Mouse moved out of the Button area.
Signals and callbacks will play a vital role in the applications we develop later; a minimal sketch follows below.
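To make the mechanism concrete, here is a minimal, self-contained sketch of connecting a button's "clicked" signal to a handler; the application name and labels are invented for illustration:

#include <gnome.h>

/* handler invoked whenever the button emits "clicked" */
static void on_clicked(GtkWidget *button, gpointer data)
{
    g_print("button was clicked\n");
}

int main(int argc, char *argv[])
{
    GtkWidget *app, *button;

    gnome_init("sigdemo", "0.1", argc, argv);
    app = gnome_app_new("sigdemo", "Signal Demo");

    button = gtk_button_new_with_label("Press me");
    gtk_signal_connect(GTK_OBJECT(button), "clicked",
                       GTK_SIGNAL_FUNC(on_clicked), NULL);
    gtk_signal_connect(GTK_OBJECT(app), "delete_event",
                       GTK_SIGNAL_FUNC(gtk_main_quit), NULL);

    gnome_app_set_contents(GNOME_APP(app), button);
    gtk_widget_show_all(app);
    gtk_main();
    return 0;
}

Compile it with the same gnome-config command line shown earlier.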
6. Containers
Next, we look into another vital component of GNOME programming: containers. GTK+ uses containers a great deal, because GTK+ is actually a "container-based" toolkit: we have a parent container within which we place our other widgets. Windows are single-widget containers. The important point to keep in mind is that GTK+ uses invisible "packing boxes", which can hold multiple widgets, to create window layouts. These packing boxes come in two kinds, horizontal and vertical, created with the functions gtk_hbox_new and gtk_vbox_new respectively. We will see these functions in action in the applications we create later; a short sketch also follows this list. For now, let's look at the parameters of these two functions:
homogeneous : type --> gboolean : Forces all widgets in the box to occupy the same area as the largest widget in the box.
spacing : type --> gint : Determines the space between adjacent widgets.
expand : type --> gboolean : Allows the packing box to expand to fill the remaining space.
fill : type --> gboolean : Allows that particular widget to expand to fill the remaining space.
padding : type --> gint : Determines the width of a frame surrounding the widget.
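A hedged sketch of how these parameters appear in practice (the widget names are invented; the login example later in this article uses the same calls):

GtkWidget *hbox, *button;

/* homogeneous = FALSE, spacing = 0 */
hbox = gtk_hbox_new(FALSE, 0);

button = gtk_button_new_with_label("OK");
/* expand = FALSE, fill = FALSE, padding = 0 */
gtk_box_pack_start(GTK_BOX(hbox), button, FALSE, FALSE, 0);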
7. Buttons
Next we come to another very vital component: Buttons. GTK+ provides 4 different kinds of buttons:
Simple push buttons --> To perform an action on clicking.
Toggle buttons --> With a particular state: Up/Down
Check boxes --> With a particular state: On/Off
Radio buttons --> For making only one selection from a group of options.
Creating radio buttons is very similar to creating check boxes; all we need to do extra is specify the group the radio button belongs to. Radio buttons are derived from check buttons, which are derived from toggle buttons, so we have the same set of functions for reading and modifying their state, and the same events, as shown in the sketch below. Please note: for more information on specific functions, consult the GTK+ reference documentation at: http://www.gtk.org
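For instance, a pair of radio buttons sharing one group might be created like this (a minimal sketch; the labels are invented):

GtkWidget *radio1, *radio2;

/* the first button starts a new group (NULL group argument) */
radio1 = gtk_radio_button_new_with_label(NULL, "Option A");
/* the second button joins the group of the first */
radio2 = gtk_radio_button_new_with_label(
             gtk_radio_button_group(GTK_RADIO_BUTTON(radio1)),
             "Option B");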
For creating single-line text widgets, commonly called "entry widgets", we use the function gtk_entry_new(). Entry widgets are mainly used to enter small amounts of information. Let's now look at a program that creates a login window and prints the contents of the password field when the activate signal occurs, i.e. when Enter is pressed in the entry field. Type in the following, then compile and run it as explained above.
/* Creating a Login GNOME-style using GTK+ Toolkit:
Created By: Subhasish Ghosh
Date: Wednesday, August 8, 2001
*/
#include <gnome.h>
static void enter_pressed(GtkWidget *button, gpointer data)
{
GtkWidget *text_entry = data;
char *string = gtk_entry_get_text(GTK_ENTRY(text_entry));
g_print("%s\n", string);
}
int main(int argc, char *argv[])
{
GtkWidget *app;
GtkWidget *text_entry;
GtkWidget *label;
GtkWidget *hbox;
gchar *text;
gnome_init("example", "0.1", argc, argv);
app = gnome_app_new("example", "entry widget");
gtk_container_border_width(GTK_CONTAINER(app), 5);
hbox = gtk_hbox_new(FALSE, 0);
/* we now create a Label: */
label = gtk_label_new("Password: ");
gtk_misc_set_alignment(GTK_MISC(label), 0, 1.0);
gtk_box_pack_start(GTK_BOX(hbox), label, FALSE, FALSE, 0);
text_entry = gtk_entry_new();
gtk_entry_set_visibility(GTK_ENTRY(text_entry), FALSE);
gtk_box_pack_start(GTK_BOX(hbox), text_entry, FALSE, FALSE, 0);
gtk_signal_connect(GTK_OBJECT(app), "delete_event", GTK_SIGNAL_FUNC(gtk_main_quit), NULL);
gtk_signal_connect(GTK_OBJECT(text_entry), "activate", GTK_SIGNAL_FUNC(enter_pressed), text_entry);
gnome_app_set_contents(GNOME_APP(app), hbox);
gtk_widget_show_all(app);
gtk_main( );
return 0;
}
When this program is executed, a login window appears on the screen. Type in any text (standing in for a password), press Enter and observe what happens.
List boxes and combo boxes play the same role as they do on the Microsoft platform. List-box widgets hold a list of strings from which users can select one or more entries, provided the widget is so configured. Combo boxes are entry widgets with an added pull-down menu from which users can also select options. A short sketch follows.
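A combo box might be filled along these lines in GTK+ 1.2 (a minimal sketch; the colour strings are invented):

GList *items = NULL;
GtkWidget *combo;

items = g_list_append(items, "red");
items = g_list_append(items, "green");
items = g_list_append(items, "blue");

combo = gtk_combo_new();
/* attach the list of strings to the combo's pull-down menu */
gtk_combo_set_popdown_strings(GTK_COMBO(combo), items);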
10. Menus & Toolbars
The various widgets that we have come across until now are simple widgets that don't provide any extraordinary functionality. We now look at some specific GNOME programming libraries that offer more complicated widgets with rich functionality.
Just hang on for a second, someone may ask: "Hey, we were doing pretty well with ordinary code and all the stuff that you discussed earlier. What's the use of this so-called "specific GNOME programming libraries"? Are they indeed useful? Or are you just including them here for making your article a bit longer?"
Well, here's the reason for considering the specific GNOME programming libraries. With plain GTK+ code nearly everything can be done that we would usually do with the GNOME libraries, but doing so often leads to code repetition, inefficient code blocks and similar problems, making the whole program bloated. To prevent this, we use the GNOME programming libraries, which provide a great deal of extra functionality with much lower programming overhead.
So, let's talk about "Menus" and "Toolbars". GNOME lets us create menus and toolbars for our GnomeApp widgets that can be docked and undocked from the window. First you fill up arrays with the necessary information, then call gnome_app_create_menus or gnome_app_create_toolbar.
The menus and toolbar items each have properties, defined in arrays. A few such properties include type, string, callback pointer, etc. Most of the time the menu entries are pretty simple, and we can just use one of a set of macros provided by GNOME to create the structure for us. So let's check out a few of the most used top-level macros.
Please note: These macros are the ones that create top-level menus when passed an array containing any or all of the following GnomeUIInfo structures.
Menu | Macro
File | GNOMEUIINFO_MENU_FILE_TREE(tree)
Edit | GNOMEUIINFO_MENU_EDIT_TREE(tree)
View | GNOMEUIINFO_MENU_VIEW_TREE(tree)
Settings | GNOMEUIINFO_MENU_SETTINGS_TREE(tree)
Windows | GNOMEUIINFO_MENU_WINDOWS_TREE(tree)
Help | GNOMEUIINFO_MENU_HELP_TREE(tree)
Game | GNOMEUIINFO_MENU_GAME_TREE(tree)
Within the top-level menus there exist over thirty macros for creating common menu items. The macros associate small images (pixmaps) and accelerator keys with each menu item. Each takes a callback function to be called when the item is selected, and a data pointer that is passed to that function.
Let's look at some of these common menu items and their respective macros.
File -->>
New --> GNOMEUIINFO_MENU_NEW_ITEM (label, hint, cb, data)
Open --> GNOMEUIINFO_MENU_OPEN_ITEM (cb, data)
Save --> GNOMEUIINFO_MENU_SAVE_ITEM (cb, data)
Print --> GNOMEUIINFO_MENU_PRINT_ITEM (cb, data)
Exit --> GNOMEUIINFO_MENU_EXIT_ITEM (cb, data)
Edit -->>
Cut --> GNOMEUIINFO_MENU_CUT_ITEM (cb, data)
Copy --> GNOMEUIINFO_MENU_COPY_ITEM (cb, data)
Paste --> GNOMEUIINFO_MENU_PASTE_ITEM (cb, data)
Settings -->>
Preferences --> GNOMEUIINFO_MENU_PREFERENCES_ITEM (cb, data)
Help -->>
About --> GNOMEUIINFO_MENU_ABOUT_ITEM (cb, data)
Like menu bars, toolbars require an array using the GNOMEUIINFO_ITEM_STOCK (label, tooltip, callback, stock_id) macro. Here, "stock_id" is the id of a predefined icon that we want to use for that item.
Let's look at this example, and see how the arrays and macros work in reality.
#include <gnome.h>
static void callback (GtkWidget *button, gpointer data)
{
g_print("Item Selected");
}
GnomeUIInfo file_menu[ ] = {
GNOMEUIINFO_ITEM_NONE ("A menu item", "This is the Status bar info", callback),
GNOMEUIINFO_MENU_EXIT_ITEM (gtk_main_quit, NULL),
GNOMEUIINFO_END
};
GnomeUIInfo menubar[ ] = {
GNOMEUIINFO_MENU_FILE_TREE (file_menu),
GNOMEUIINFO_END
};
GnomeUIInfo toolbar[ ] = {
GNOMEUIINFO_ITEM_STOCK ("Print", "This is another tooltip", callback, GNOME_STOCK_PIXMAP_PRINT),
GNOMEUIINFO_ITEM_STOCK ("Exit", "Exit the application", gtk_main_quit, GNOME_STOCK_PIXMAP_EXIT),
GNOMEUIINFO_END
};
int main (int argc, char *argv[ ])
{
GtkWidget *app;
gnome_init ("example", "0.1", argc, argv);
app = gnome_app_new ("example", "A Sample Toolbar and Menu");
gnome_app_create_menus (GNOME_APP (app), menubar);
gnome_app_create_toolbar (GNOME_APP (app), toolbar);
gtk_widget_show_all (app);
gtk_main();
return 0;
}
This program creates a small window with an embedded menu and toolbar. You can click, dock, undock and drag it around the screen.
11. Dialog boxes
Let's now look at the widget used to display textual information to the user in the GNOME environment: yes, we are referring to the dialog box. To create a dialog box, we call the gnome_message_box_new function and pass it the message text, the type of dialog box we need, and the buttons we want on it, all in a NULL-terminated list. Then we bind the "clicked" signal of the dialog widget to a handling function, which is passed the button the user pressed as an integer. Finally, we call gtk_widget_show to display a non-modal box.
Let's look at this code extract from a program which creates a simple question dialog box, adds three buttons, and responds to the user's choice.
static void messagebox_clicked(GnomeDialog *dlg, gint button, gpointer data)
{
switch (button)
{
case 1: /* user pressed apply */
return;
case 0: /* user pressed ok */
case 2: /* user pressed close */
gnome_dialog_close(dlg);
}
}
GtkWidget *dlg;
dlg = gnome_message_box_new("Hi, pal, how are you doing??? I am fine!",
GNOME_MESSAGE_BOX_QUESTION,
GNOME_STOCK_BUTTON_OK,
GNOME_STOCK_BUTTON_APPLY,
GNOME_STOCK_BUTTON_CLOSE,
NULL);
gtk_signal_connect (GTK_OBJECT(dlg), "clicked", GTK_SIGNAL_FUNC(messagebox_clicked), NULL);
gtk_widget_show (dlg);
12. Conclusion & Links for Further study
This sums up our journey of the exciting world of GNOME programming using GTK+ toolkit.
Please note: GNOME programming is not at all difficult. Once you have a little understanding, it's really easy to grasp. There is still much more to learn beyond this article, but with diligence it can definitely be mastered.
For more information and detailed coverage of this topic, check out the following links:
http://www.linuxheadquarters.com/howto/programming/gtk_examples/index.shtml
http://www.ibiblio.org/pub/Linux/docs/HOWTO/mini/other-formats/html_single/Programming-Languages.html
http://linuxheadquarters.com/howto/programming/gtk_examples/window/window.shtml
http://developer.gnome.org/doc/GGAD/ggad.html
http://wolfpack.twu.net/docs/gtkdnd/index.html
Today we are continuing along on the same open-source journey Homer set out upon three thousand years ago, when he shared the words of The Odyssey with an audience and enriched them with the knowledge of a classic's ineffable truths. The story was passed along from generation to generation as part of an oral tradition for a few hundred years, before it was transcribed around 700 BC. The invention of the printing press and movable type by Gutenberg circa 1445 aided in the sharing of classical information, and suddenly the Bible, as well as works such as The Odyssey, found a far greater audience.
With the advent of the Internet, both the content and the audience have grown vastly. Of even greater significance, with the new paradigms afforded by information technology, classical computing has joined the ranks of immortal art, science and literature. In the past few years we have borne witness to a revolutionary era in humanity's cultural journey, wherein technology and ideas have merged in a brave new digital world, rendering knowledge as affordable as it is eternal.
Software is labor immortalized, as a programmer's algorithm, once written, may continue to function for eternity. Thus now, in addition to inheriting the cultural riches of our predecessors, we may also inherit the functionality of their programs. In a world where commerce is defined by the movement of information, that machinery--the hardware and software--which moves the information embodies work, and thus the innovations of one's predecessors will not only bestow aesthetic riches, but they shall also provide a wellspring of eternal labor. A hundred years from now Hamlet shall still be contemplating the correct course of action, and the Linux kernel, along with Apache, shall still be providing the fundamental labor which transports Hamlet all about the watery globe.
In software, language has become action. Never before has an individual commanded so much wealth, so many man-hours of innovation. In the past decade, those man-hours have increased geometrically, as the network has enabled the collaboration of thousands of the best and brightest programmers.
Whereas in Jefferson's day it took three days and a horse's labor to deliver a letter from Philadelphia to Washington, today one can instantaneously send a message to the far corners of the watery globe by utilizing the inherited wealth born of the millions of hours that millions of scientists have spent theorizing, millions of innovators have spent innovating, and millions of engineers have spent engineering--one can use this collective wealth for free. All one has to do is log on to the new paradigm of classical computing.
Over two-thousand-five-hundred years ago the Greeks developed an architecture which was passed along to the Romans via open-source methods. In 1675, about seventy years after Shakespeare penned Hamlet, Newton claimed he saw further because he "stood upon the shoulders of giants," and he invented calculus. A hundred years later a poet by the name of William Blake penned the verse wherein he saw the world in a grain of sand and found eternity within an hour. Since then, via the open source of modern physics, space has become time, time has become space, and Blake's grain of sand has become a silicon chip, which holds not only entire worlds, but also all of the art, music, and poetry ever known to humanity. And these vast open-source riches, from condensed matter physics, to the complete works of Shakespeare, are free to all. Thus it is that those who open books or log on are granted an inheritance as never before.
The very freedoms which are so fundamental to our everyday existence were passed down by a classical open-source method. The Declaration of Independence has inspired the likes of Gandhi and the students in Tiananmen Square, and the noble document's author, Thomas Jefferson, once stated that there was nothing new within its words, but that he had merely edited the better parts of history. Concerning the Declaration of Independence, Jefferson wrote:
Neither aiming at originality of principle or sentiment, nor yet copied from any particular and previous writing, it was intended to be an expression of the American mind, and to give to that expression the proper tone and spirit called for by the occasion. All its authority rests then on the harmonizing sentiments of the day, whether expressed in conversation, in letters, printed essays, or in the elementary books of public right, as Aristotle, Cicero, Locke, Sidney, &c.
The classics represent the center and circumference of humanity's open-source movement. Like calculus, the transistor, the microprocessor, C, and Linux, they were created for little in the way of stock options, and shared not so much for fame and fortune, but because they had to be, because they worked and accomplished the task of helping us find words for our thoughts, music for our feelings, solutions for our technical hurdles, and meaning for our lives.
Not too long ago John Doerr of Kleiner Perkins said that the Internet age had fostered the greatest legal creation of wealth. Instead I would argue that it has afforded the greatest inheritance of wealth, for on the Internet we are standing upon the shoulders of giants with names like Shockley, Bohr, Faraday, Einstein, Jefferson, Dirac, Aristotle, Moses, Copernicus, Shakespeare, and Newton. And as Newton himself acknowledged that he had stood upon the shoulders of giants, so it is that today we are standing upon the shoulders of giants who stood upon the shoulders of giants.
Recorded culture is humanity's single greatest invention, and it is a tower built from the open source of the ages, with foundations thousands of years deep, reaching back to the dawn of civilization and language itself. Today we are standing upon the shoulders of countless innovators and educators: all the typesetters and teachers throughout the ages who kept the language alive and the aesthetic beacon lit, all the prophets and poets, all the inventors and innovators who built the first presses, who pioneered quantum mechanics, and who selflessly pushed forward the open-source technology, philosophy, and software of the Internet age. Venture Capital is a very recent innovation, and because individuals, rather than money, invent new technologies, VC has played little if any role in the development of the internet, as it was used primarily for seeding pyramid schemes wherein savvy MBAs could momentarily pretend they were high-tech entrepreneurs.
We are standing upon the shoulders of the Founding Fathers who humbly recognized our fundamental freedom in the face of mysteries greater than ourselves, who penned an open-source Constitution in homage to those higher laws which grant us our natural freedoms. An open-source Constitution which could be amended by the people, and which has been freely distributed about the globe, and adapted and adopted in country after country, in city after city, in heart after heart. An open-source Constitution in which they set in words the laws which today encourage innovators by allowing them to own their ideas via copyrights, patents, and trademarks.
In fact, the only place where the word "right" is mentioned in the Constitution is in relation to intellectual property:
The Congress shall have Power To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries;
But when more and more intellectual property is inherited rather than created, when more and more lawyers and hypesters are employed by corporations to convince judges and juries of the grandiose merits of some trifling innovation, is the true innovator benefiting?
As the contemporary innovator stands upon more and more giants, perhaps the patenting process devolves into a game of semantics, wherein some "innovators" attempt to claim credit for others' monuments by calling a rose by a different name. For instance, when Jeff Bezos patented "one-click shopping", he was in essence giving a new name to the cookie technology which is intrinsic to the browser, which maintains state and stores the identity of the user. Jeff Bezos had nothing to do with the development of that technology, yet he was still awarded the patent.
Patents are supposed to encourage innovation by protecting the inventor's right to profit from their inventions, but it is hard to imagine how far along we'd be today if every aspect of the C language had been patented as it evolved, if every new subroutine or algorithm was handed to the lawyers before it was presented to other programmers, or if Tim Berners-Lee had patented the fundamentals of the World Wide Web. With hundreds of Internet companies penning patents and creating dubious boundaries, erecting fences on a wide-open frontier which they neither discovered nor created, it is more likely that lawyers, rather than innovators, will profit.
The realm of open source and "classical computing" may represent a hybrid paradigm, wherein programming is closer in essence to physics and mathematics than it is to inventing the world's first functional airplane, or the first light bulb. One cannot patent scientific laws nor mathematical concepts, and thus physics and mathematics have always been open-source endeavours.
In programming the fundamental algorithms are immutable ideals, and though they may be used as machines to ferry information about the globe, when one attempts to patent the machine, one is perhaps trying to take too much credit for the algorithms developed by others, or for immutable ideals which were always there. It seems that more and more innovations in contemporary information technology are dwarfed by the giants upon which they are based, for what sole inventor or invention can be greater than the open platform upon which it is invented, such as Linux and C++?
The GNU General Public License takes the "standing upon the shoulders of giants" aspect of software development into account, as it states:
Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all.

If one builds upon code developed under the GNU License, the new code inherits the GNU Copyleft, thereby keeping the source open, and acknowledging the former giants of innovation and good will.
When the division between the legal mind and the innovating mind grows, as has been encouraged by the fact that only lawyers can practice law (except for the cases where one represents oneself), what can quickly happen is that the laws founded to encourage innovation begin to encourage lawyers at the innovator's expense, as it is difficult for the typical inventor to keep up with the ever-evolving game of legal semantics. Indeed, the men who penned the Constitution believed that the common man would be capable of comprehending the law--otherwise what good could laws be in a democratic republic? Perhaps innovators should be made to file and defend their own patents, or patent nothing at all, and lawyers should only be allowed to file patents for that which they themselves have invented. This would keep well-funded corporations from hiring legions of lawyers to file ambiguous patents with sweeping claims. For if the legal system can determine that Microsoft has a monopoly in the arena of the desktop operating system, then certainly that same legal system should recognize that lawyers have monopolized the legal system, taxing all innovation as the arbiters of others' copyrights, patents, and trademarks.
All the major innovations upon which the Internet is based were made before 1995, from TCP/IP to Sendmail, Apache, Perl, Mosaic, and Netscape. None of these innovations were patented. After 1995 we encountered the irony that although the Internet was built by individuals seeking truth and beauty in functionality, it was hyped by hundreds of corporations led by MBAs and "visionary" CEOs who had very little to do with true innovation, who registered thousands of trademarks and patented spurious technological innovations, and who ultimately created thousands of worthless companies which lost far more than they ever made, except for the insiders and the bankers.
And yet, it is a misconception that the open-source movement in general opposes intellectual property rights, although at times a few adherents or government bureaucrats seem to be drawn to the open-source movement because they believe it supports a form of communism. Rather, most open-sourcers are opposed to the patenting of others' innovations, and to the attempt to pass them off as one's own in a game of legal semantics.
Benjamin Franklin, an open-sourcer who was certainly not a communist, turned down the opportunity to patent the Franklin Stove, "on the principle that 'as we enjoy great advantages from the inventions of others, we should be glad of an opportunity to serve others by any invention of ours; and this we should do freely and generously.'" But at the same time, he didn't believe that the government should fund the development of the Franklin stove, nor did he ever speak out against the rights which inventors should have to their own innovations, nor did he ever contend that the government should have the role of redistributing his Franklin stove. Open source is about the individual--it is about the innovator, the end-user, not about the administrators nor the hypesters, who so often seek to ride on the coattails of others' achievements, whether they reside in a corporate or government bureaucracy.
Regarding the ownership of intellectual property via copyrights, Mark Twain once addressed the United States Congress with:
I am aware that copyright must have a limit, because that is required by the Constitution of the United States, which sets aside the earlier Constitution, which we call the decalogue. The decalogue says you shall not take away from any man his profit. I don't like to be obliged to use the harsh term. What the decalogue really says is, "Thou shalt not steal," but I am trying to use more polite language.

Twain goes on to offer a good defense of the protection of "ideas which did not exist before" as property:
I put a supposititious case, a dozen Englishmen who travel through South Africa and camp out, and eleven of them see nothing at all; they are mentally blind. But there is one in the party who knows what this harbor means and what the lay of the land means. To him it means that some day a railway will go through here, and there on that harbor a great city will spring up.

That is his idea. And he has another idea, which is to go and trade his last bottle of Scotch whiskey and his last horse-blanket to the principal chief of that region and buy a piece of land the size of Pennsylvania. That was the value of an idea that the day would come when the Cape to Cairo Railway would be built.
Every improvement that is put upon the real estate is the result of an idea in somebody's head. The skyscraper is another idea; the railroad is another; the telephone and all those things are merely symbols which represent ideas. An andiron, a wash-tub, is the result of an idea that did not exist before.
So if, as that gentleman said, a book does consist solely of ideas, that is the best argument in the world that it is property, and should not be under any limitation at all.
Although Twain would like to keep his intellectual property in this case, while Franklin aims to give his away, they both seem to agree that intellectual property is property, and that individuals should have the right to choose what they do with it. And as patents and copyrights have limits, eventually the source of all intellectual property becomes open.
The Wright brothers' names are still on the fundamental patents which describe the design of the navigational systems on all modern airplanes. Such fundamental patents help inspire innovators in their lifetimes, allowing them to reap the benefits of what they develop; and when the patents expire, the open knowledge, upon which one can improvise without fear of a lawsuit, allows for further innovation. To determine the "right" duration of a patent or a copyright will always be a difficult task, and perhaps modern technological innovations, most of which are based on yesteryear's far greater monuments of innovation, should be granted patents of shorter duration.
Another common misconception is that Red Hat Linux and Microsoft are at opposite ends of the open-source spectrum, but they are in fact very similar. Both operating systems developed by the publicly-traded companies were mostly written in the open source of the C computing language (the language itself is an open specification), both were built upon the open-source science and technology found within the silicon chip, and both benefit from intellectual property rights to their respective trademarks, copyrights, and patents. Both use the open source of the English language, and both openly share volumes of useful information on their web sites. Microsoft chooses to keep more of their coding proprietary, thus guaranteeing better pay for their programmers, while Red Hat opens the source, thereby allowing anyone to contribute, but lowering the direct monetary compensation of those who do.
Also, Microsoft has offered a far better return for the common investor and worker, not just for the insiders. Perhaps there is not as much money to be made from a global network of open-source programmers as Red Hat and other public Linux companies once trumpeted. Perhaps the true wealth of the open-source movement is inherited by the webmasters who utilize the code, by the entrepreneurs who download the open-source tools and applications to power entire portals of their own creation. Like the free market, it seems that in the long run the Internet favors the rugged individual, the renaissance man, over the bureaucracy led by the administrator and hypester.
For certain user-friendly applications, such as office suites and other software used by non-programmers, Microsoft has the upper hand, as paying programmers to write word-processing applications and office suites makes sense. The majority of hard-core programmers probably don't care about font colors and integration with PowerPoint and spreadsheets quite as much as they care about streamlining Apache or enhancing Linux security.
But when it comes to servers, the open-source paradigm provides a superior system, as the more technically-inclined--the ones who actually build and configure the servers--are allowed to get under the hood and enhance the performance. Whereas a typical author or MBA would probably never want to hack away at PowerPoint or Microsoft Word to get cooler fonts, those who have built and configured their own servers don't mind spending a few sleepless nights to add functionality. And by sharing their accomplishments on the Internet, they may receive that priceless respect from fellow gurus, and benefit themselves while benefiting others, as the improvements that they bestow upon their fellow programmers may in turn be improved upon, while the bugs may be fixed by any one of thousands of experts.
There is a beauty in efficiency and functionality, and the programmer's aesthetic is very similar to Einstein's, who once said, "Everything should be made as simple as possible, but not more so." There is room for both Microsoft and Linux, and as Eric Raymond pointed out in a recent Wall Street Journal article, there is little need for the government to interfere--Linux will continue to spread throughout the server market, as it contains all the inherent advantages of open source.
If history has demonstrated anything, it is that truth, beauty, and freedom are the favored traditions, and thus classical computing, born upon the ancient open-source paradigm, shall prosper throughout the rest of eternity.
In Part I , we looked at the most basic operations of the numerical workbenches GNU/Octave 2.1.34, Scilab 2.6, and Tela 1.32. This time we will talk about matrices, have a look at some of the predefined functions, learn how to write our own functions, and introduce flow control statements. The article closes with a brief discussion of the applications' input and output facilities.
Vectors help a lot if data depend on a single parameter. The different parameter values are reflected by different index values. If data depend on two parameters, vectors are a clumsy container, and a more general structure, which allows for two independent indices, is needed. Such a structure is called a matrix. Matrices are packed like a fresh six-pack: they are rectangular storage containers and no bottle -- oops -- element is missing.
Matrices are, for example, built from scalars as the next transcript of a GNU/Octave session demonstrates.
octave:1> # temperature  rain  sunshine
octave:1> #    degF     inches   hours
octave:1> weather_data = [ 73.4, 0.0, 10.8; ...
>                          70.7, 0.0,  8.5; ...
>                          65.2, 1.3,  0.7; ...
>                          68.2, 0.2,  4.1]
weather_data =

  73.40000   0.00000  10.80000
  70.70000   0.00000   8.50000
  65.20000   1.30000   0.70000
  68.20000   0.20000   4.10000
Three new ideas appear in the example. First, we have introduced some comments to label the columns of our matrix. A comment starts with a pound sign ``#'' and extends until the end of the line. Second, the rows of a matrix are separated by semi-colons ``;'', and third, if an expression stretches across two or more lines, the unfinished lines must end with the line-continuation operator ``...''.
Similar to vectors, matrices can be constructed not only from scalars, but also from vectors or other matrices. If we had some variables holding the weather data of each day, like
weather_mon = [73.4, 0.0, 10.8]
weather_tue = [70.7, 0.0,  8.5]
weather_wed = [65.2, 1.3,  0.7]
weather_thu = [68.2, 0.2,  4.1]
we would have defined weather_data with

weather_data = [weather_mon; weather_tue; weather_wed; weather_thu]
or, on the other hand, if we had the data from the various instruments as
temperature = [73.4; 70.7; 65.2; 68.2]
rain        = [0.0; 0.0; 1.3; 0.2]
sunshine    = [10.8; 8.5; 0.7; 4.1]
we would have defined weather_data with

weather_data = [temperature, rain, sunshine]
The fundamental rule is: Commas separate columns, semi-colons separate rows.
The scalars living in matrix m are accessed by applying two indices: m(row, column), where row is the row-index and column is the column-index. Thus, the amount of rain fallen on Wednesday is fetched with the expression
octave:10> weather_data(3, 2)
ans = 1.3000
Entries are changed by assigning to them:
octave:11> weather_data(3, 2) = 1.1
weather_data =

  73.40000   0.00000  10.80000
  70.70000   0.00000   8.50000
  65.20000   1.10000   0.70000
  68.20000   0.20000   4.10000
Now that we have defined weather_data, we want to work with it. We can apply all binary operations that we have seen in last month's article on vectors. However, for this particular example, computing

rain_forest_weather_data = weather_data + 2.1
siberian_summer_weather_data = weather_data / 3.8

does not make much sense, though the computer will not complain at all. In the first example it would dutifully add 2.1 to every element of weather_data; in the second it would -- obedient like a sheepdog -- divide each element by 3.8.
Say we want to do something meaningful to weather_data and convert all temperatures from degrees Fahrenheit to degrees Celsius. To that end, we need to access all elements in the first column. The vector of interest is

octave:16> temp = [weather_data(1, 1); ...
>                  weather_data(2, 1); ...
>                  weather_data(3, 1); ...
>                  weather_data(4, 1)]
temp =

  73.400
  70.700
  65.200
  68.200
Obviously, the row-indices [1, 2, 3, 4] form a vector themselves. We can use a shortcut and write this vector of indices as

temp = weather_data([1, 2, 3, 4], 1)
In general, any vector may be used as an index vector. Just watch out that no index is out of range. The ordering of the indices does matter (for example, weather_data([2, 1, 4, 3], 1) puts Tuesday's temperature in front), and repeated indices are permitted (for example, weather_data([3, 3, 3, 3, 3, 3, 3], 1) holds Wednesday's temperature seven times).
In our example, the index vector can be generated by a special built-in, the range generation operator ``:''. To make a vector that starts at low and contains all integers from low up to high, we say

low:high
For example
octave:1> -5:2
ans =

  -5  -4  -3  -2  -1   0   1   2
Our weather data example now simplifies to
temp = weather_data(1:4, 1)
Accessing a complete column or row is so common that further shortcuts exist. If we drop both low and high from the colon operator, it generates all valid indices for us. Thus we arrive at the shortest form to get all elements in the first column:
octave:17> temp = weather_data(:, 1)
temp =

  73.400
  70.700
  65.200
  68.200
With our new knowledge, we extract the sunshine hours on Tuesday, Wednesday, and Thursday
octave:19> sunnyhours = weather_data(2:4, 3)
sunnyhours =

  8.50000
  0.70000
  4.10000
and Tuesday's weather record
octave:20> tue_all = weather_data(2, :)
tue_all =

  70.70000   0.00000   8.50000
Now it is trivial to convert the rain data from inches to millimeters: multiply the second column of weather_data by 25.4 (millimeters per inch) to get the amount of rain in metric units:
octave:21> rain_in_mm = 25.4 * weather_data(:, 2)
rain_in_mm =

   0.00000
   0.00000
  27.94000
   5.08000
We have already seen that vectors are compatible with scalars
1.25 + [0.5, 0.75, 1.0]
or
[-4.49, -4.32, 1.76] * 2
Scalars are also compatible with matrices.
octave:1> 1.25 + [  0.5,  0.75,  1.0; ...
>                 -0.75,  0.5,   1.25; ...
>                 -1.0,  -1.25,  0.5]
ans =

  1.75000  2.00000  2.25000
  0.50000  1.75000  2.50000
  0.25000  0.00000  1.75000
octave:2> [-4.49, -4.32, 1.76; ...
>           9.17,  6.35, 3.27] * 2
ans =

  -8.9800  -8.6400   3.5200
  18.3400  12.7000   6.5400
In each case the result is the scalar applied to every element in the vector or matrix.
How about vectors and matrices? Obviously, expressions like

[7, 4, 9] + [3, 2, 7, 6, 6]
[2, 4; 1, 6] - [1, 1, 9, 4]
do not make any sense. In the first line the vectors disagree in size (3 vs. 5 elements), in the second line they have different shapes (2 columns and 2 rows vs. 4 columns and 1 row). To make sense, vectors or matrices that are used in an addition or subtraction must have the same shape, which means the same number of rows and the same number of columns. The technical term for ``shape'' in this context is dimension. We can query the dimension of anything with the built-in function size().
octave:22> size(weather_data)
ans =

  4  3

octave:23> size(sunnyhours)
ans =

  3  1
The answer is a vector whose first element is the number of rows, and whose second element is the number of columns of the argument.
Multiplication and division of matrices can be defined in two flavors, both of which are implemented in the numerical workbenches.
a = [3, 3; ...
     6, 4; ...
     6, 3]
b = [9, 3; ...
     8, 2; ...
     0, 3]
octave:1> a .* b
ans =

  27   9
  48   8
   0   9
The element-by-element operators are preceded by a dot: element-by-element multiplication ``.*'' and element-by-element division ``./''.
The second flavor is true matrix multiplication ``*'' as defined in linear algebra. Example:
a = [3, 3; ...
     6, 4; ...
     6, 3]

b = [-4,  0, 1, -4; ...
     -1, -3, 2,  0]
octave:1> a * b
ans =

  -15   -9    9  -12
  -28  -12   14  -24
  -27   -9   12  -24
Although we have not seen for-loops yet (they will be discussed farther down), I would like to write out the code behind the matrix multiplication operator ``*'' to give the reader an impression of the operations involved. For a p-times-q matrix a and a q-times-r matrix b, c = a * b computes
for i = 1:p
  for j = 1:r
    sum = 0
    for k = 1:q
      sum = sum + a(i, k)*b(k, j)
    end
    c(i, j) = sum
  end
end
Compare these triply nested for-loops with the simple expression c = a * b.
The division operator ``/'' is also defined for vectors and matrices. But writing x = b / a, where a and b are matrices or vectors, has nothing to do with division at all! It means: please solve the system of linear equations

x * a = b

for x, given matrix a and the right-hand side(s) b. Here ``*'' denotes matrix multiplication as defined in the previous item, and the same rules for compatible dimensions of a and b apply.
a = [-2, 3,  1; ...
      7, 8,  6; ...
      2, 0, -1]

b = [-26,  5, -6; ...
      24, 53, 26]
octave:1> x = b / a
x =

  7.00000  -2.00000   1.00000
  7.00000   4.00000   5.00000
Isn't that an easy way to solve a system of linear equations? Imagine you had to write the code which does exactly that.
Finally, let us verify the result by multiplying with a again
octave:2> x*a
ans =

  -26.0000    5.0000   -6.0000
   24.0000   53.0000   26.0000
which, as expected, recovers b.
Details:

There also is a left-division operator ``\''. x = a \ b solves the linear system of equations

a * x = b

for x, given matrix a and the right-hand side(s) b. This is the form most users prefer, because here x is a column vector, whereas operator ``/'' returns x as a row vector. ``\'' has the dotted cousin ``.\'', and the relation a ./ b == b .\ a holds.
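To see the left-division operator in action, here is a minimal sketch (the small system and its variable names are made up for this illustration):

octave:1> a = [2, 1; 1, 3];
octave:2> b = [5; 10];
octave:3> x = a \ b
x =

  1
  3

Indeed, a * x recovers the column vector b.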
Differences:

Scilab and Tela mark comments with a double slash:

// This is a Scilab or a Tela comment

Tela does not know the line-continuation operator ``...''; its matrices are delimited by ``#('' and '')'':

weather_data = #(73.4, 0.0, 10.8; 70.7, 0.0, 8.5; 65.2, 1.3, 0.7; 68.2, 0.2, 4.1)

In interactive mode, Tela does not handle multi-line expressions as the above. Multi-line expressions must be read from a file (with source("filename.t")).
In Tela, ``*'' and ``/'' work element by element, that is, they work like ``.*'' and ``./'' do in GNU/Octave and Scilab. Matrix multiplication (a * b in GNU/Octave or Scilab) is written as

a ** b

or

matmul(a, b)

and solving systems of linear equations (b / a in GNU/Octave or Scilab) as

linsolve(a, b)
Ugh -- far too many to mention! The workbenches supply dozens of predefined functions. Here I can only whet the reader's appetite.
Functions for constructing matrices: m-times-n matrices filled entirely with zeros: zeros(m, n), or ones: ones(m, n); n-times-n diagonal matrices where the diagonal consists entirely of ones: eye(n), or where the diagonal is set to the numbers supplied in a vector: diag([a1, a2, ..., an]).

Functions for reducing matrices, like finding the minimum or maximum element of matrix a: min(a), max(a), or totaling matrix a: sum(a).
Differences: GNU/Octave's min(a), max(a), and sum(a) return the column-wise result as a row vector. To get the minimum, maximum, and sum of all elements in matrix a, use min(min(a)), max(max(a)), sum(sum(a)).
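A quick interactive sketch of some of these built-ins (GNU/Octave syntax; the exact output formatting depends on the version, and matrix a is made up for the illustration):

octave:1> eye(3)
ans =

  1  0  0
  0  1  0
  0  0  1

octave:2> a = [3, 1; 2, 5];
octave:3> min(min(a))
ans = 1
octave:4> sum(sum(a))
ans = 11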
Linear algebra functions: we have already met matrix multiplication ``*'' and the solution of linear systems with ``/''. But many more linear algebra functions exist, for example singular value decomposition: svd(a), or eigenvalue computation: eig(a).
Differences: Tela uses SVD(a) instead of svd(a), and instead of eig(a), Scilab uses spec(a) to compute the eigenvalue spectrum.
One note on performance: basically, all three applications are interpreters. This means that each expression is first parsed, then the interpreter performs desired computations, finally calling the functions inside of the expressions -- all in all a relatively slow process in comparison to a compiled program. However, functions like those shown above are used in their compiled form! They execute almost at top speed. What the interpreter does in these cases is to hand over the complete matrix to a compiled Fortran, C, or C++ function, let it do all the work, and then pick up the result.
Thus we deduce one of the fundamental rules for successful work with numerical workbenches: prefer compiled functions over interpreted code. It makes a tremendous difference in execution speed.
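As an illustration of the rule, the two following GNU/Octave fragments compute the same total of a vector x (a sketch, assuming x already exists), but the second delegates all the work to a single compiled function:

## interpreted: the loop is executed element by element
## by the interpreter
total = 0;
for i = 1:length(x)
  total = total + x(i);
endfor

## compiled: one call, all the work is done by compiled code
total = sum(x);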
No matter how many functions a program may provide its users, they are never enough. Users always need specialized functions to deal with their problems, or they simply want to group repeated, yet predefined operations. In other words, there always is a need for user-defined functions.
User functions are best defined in files, so that they can be used again in later sessions. For GNU/Octave, function files end in .m, and are loaded either automagically or with source("filename.m"). Scilab calls its function files .sci, and requires them to be loaded with getf("filename.sci"). Tela functions are stored in .t-files and loaded with source("filename.t"). As big as the differences are in loading functions, all workbenches use quite similar syntax for the definition of functions.
GNU/Octave and Scilab
function [res1, res2, ..., resM] = foo(arg1, arg2, ..., argN)
  # function body
endfunction
Tela
function [res1, res2, ..., resM] = foo(arg1, arg2, ..., argN)
{
  // function body
};
where arg1 to argN are the function's arguments (also known as parameters), and res1 to resM are the return values. Yes, trust your eyes: multiple return values are permitted, which might come as a surprise to readers who are acquainted with popular programming languages. However, this is a necessity, as no function is allowed to change any of its input arguments.
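As a tiny demonstration of multiple return values, here is a made-up GNU/Octave function and a call to it:

function [s, p] = sum_and_product(a, b)
  ## Return both the sum S and the product P
  ## of the arguments A and B.
  s = a + b;
  p = a * b;
endfunction

octave:1> [s, p] = sum_and_product(2, 3)
s = 5
p = 6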
Enough theory! Let us write a function that takes a matrix as input and returns a matrix of the same dimensions, with the entries rescaled to lie in the interval [0, 1].
### Octave

function y = normalize(x)
  ## Return matrix X rescaled to the interval [0, 1].

  minval = min(min(x))
  maxval = max(max(x))

  y = (x - minval) / (maxval - minval)
endfunction
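A quick check of the new function (the exact output formatting may differ):

octave:1> normalize([2, 4; 6, 10])
ans =

  0.00000  0.25000
  0.50000  1.00000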
Now define a Scilab function that returns the spectral radius of a matrix. We use abs(), which returns the magnitude of its (possibly complex) argument.
// Scilab

function r = spectral_radius(m)
  // Return the spectral radius R of matrix M.

  r = max(abs(spec(m)))
endfunction
Finally, we write a Tela function which computes the Frobenius norm of a matrix.
// Tela

function x = frobenius(m)
// Return the Frobenius norm X of matrix M.
{
  x = sqrt(sum(abs(m)^2))
};
Details:

GNU/Octave's ``automagical'' function file loading works the following way: if Octave runs into an undefined function name, it searches the list of directories specified by the built-in variable LOADPATH for files ending in .m that have the same base name as the undefined function; for example, x = my_square_root(2.0) looks for the file my_square_root.m in the directories listed in LOADPATH.
All code we have written thus far executes strictly top-to-bottom; we have not used any flow control statements such as conditionals or loops.
Before we manipulate the flow of control, we should look at logical expressions because the conditions used in conditionals and loops depend on them. Logical expressions are formed from (1.) numbers, (2.) comparisons, and (3.) logical expressions catenated with logical operators.
Comparisons are formed with less-than ``<'', less-or-equal ``<='', greater-than ``>'', greater-or-equal ``>='', and equal ``==''.
Differences: The inequality operator varies quite a bit among the programs. (Octave cannot decide whether it feels like C, Smalltalk, or Pascal. Scilab wants to be Smalltalk and Pascal at the same time. :-)
!=  ~=  <>     # Octave
~=  <>         // Scilab
!=             // Tela
Differences:
and   or   not
---   ---  ---
&     |    !  ~   # Octave
&     |    ~      // Scilab
&&    ||   !      // Tela
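For example, in GNU/Octave (where true is represented as 1 and false as 0; Scilab and Tela behave analogously with their respective operators):

octave:1> (3 < 5) & (2 == 2)
ans = 1
octave:2> !(3 < 5)
ans = 0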
We are all set now for the first conditional, the if-statement. Note that the parentheses around the conditions are mandatory (as they are in C). The else-branches are optional in any case.
# Octave:
if (cond)
  # then-body
else
  # else-body
endif

// Scilab:
if cond then
  // then-body
else
  // else-body
end

// Tela:
if (cond) {
  // then-body
} else {
  // else-body
};
cond is a logical expression as described above.
while-statements:

# Octave:
while (cond)
  # body
endwhile

// Scilab:
while cond
  // body
end

// Tela:
while (cond) {
  // body
};
Again, cond is a logical expression.
for-statements in Octave and Scilab walk through the columns of expr one by one. Most often expr will be a vector generated with the range operator ``:'', like for i = 1:10. Tela's for-statement is the same as C's.

# Octave:
for var = expr
  # body
endfor

// Scilab:
for var = expr
  // body
end

// Tela:
for (init; cond; step) {
  // body
};
Here are some examples which use only things we have discussed so far.
Octave
function n = catch22(x0)
  ## The famous catch-22 function: it is
  ## impossible to compute whether it will
  ## stop for a specific input. Returns
  ## the number of loops.

  n = 0
  x = x0
  while (x != 1)
    if (x - floor(x/2)*2 == 0)
      x = x / 2
    else
      x = 3*x + 1
    endif
    n = n + 1
  endwhile
endfunction
Scilab
function m = vandermonde(v)
  // Return the Vandermonde matrix M based on
  // vector V.

  [rows, cols] = size(v)
  m = []                       // empty matrix
  if rows < cols then
    for i = 0 : (cols-1)
      m = [m; v^i]
    end
  else
    for i = 0 : (rows-1)
      m = [m, v^i]
    end
  end
endfunction
Tela
function vp = sieve(n)
// Sieve of Eratosthenes; returns vector of
// all primes VP that are strictly less than
// 2*N.  1 is not considered to be a prime
// number in sieve().
{
  vp = #();                    // empty vector
  if (n <= 2) { return };

  vp = #(2);
  flags = ones(1, n + 1);
  for (i = 0; i <= n - 2; i = i + 1)
  {
    if (flags[i + 1])
    {
      p = i + i + 3;
      vp = #(vp, p);
      for (j = p + i; j <= n; j = j + p)
      {
        flags[j + 1] = 0
      }
    }
  }
};
We have been working with the workbenches a lot. At some point we would like to call it a day, but we do not want to lose all of our work. Our functions are already stored in files; it is time to see how to make our data persist.
All three applications have at least one input/output (I/O) model that borrows heavily from the C programming language. This model allows close control of the items read or written. Often, though, it is unnecessary to take direct control over the file format written. If variables must be saved just to be restored later, simplified I/O commands will do.
GNU/Octave provides the save/load command pair.

save filename varname1 varname2 ... varnameN

saves the variables named varname1, varname2, ..., varnameN in file filename. The complementary

load filename varname1 varname2 ... varnameN

command restores them from filename. If load is given no variable names, all variables from filename are loaded. Handing over names to load selects only the named variables for loading.
Note that the save and load commands do not have parentheses, and their arguments are separated by spaces, not commas. Filenames and variable names are strings.
save "model.oct-data" "prantl" "reynolds" "grashoff" load "model.oct-data" "reynolds"
By default load does not overwrite existing variables, but complains with an error if the user tries to do so. When it is safe to discard the values of existing variables, add the option ``-force'' to load, like

load -force "model.oct-data" "reynolds"

and variable reynolds will be loaded from file model.oct-data no matter whether it existed before or not.
Scilab uses functions instead of commands:

save(filename, var1, var2, ..., varN)

However, the variables var1, ..., varN are not strings, but appear literally. This means that the name of a variable is not stored in the file. The association between name and contents is lost!

The complementary function

load(filename, varname1, varname2, ..., varnameN)

restores the contents of filename in the variables named varname1, varname2, ..., varnameN.
Tela stores variables with the

save(filename, varname1, varname2, ..., varnameN)

function, preserving the association between variable name and variable contents. The complementary

load(filename)

function loads all variables stored in filename. It is not possible to select specific variables.
As we use matrices so often, specialized functions exist to load and save whole matrices. Especially loading a matrix with a single command is a convenient and efficient way to read data from experiments or other programs.
Let us assume we have the ASCII file datafile.ascii, which contains the lines
# run 271
# 2000-4-27
#
#  P/bar      T/K        R/Ohm
#  ======     ======     ======
   19.6       0.118352   0.893906e4
   15.9846    0.1        0.253311e5
   39.66      0.378377   0.678877e4
   13.6       0.752707   0.00622945e4
   12.4877    0.126462   0.61755e5
and sits in the current working directory. The file's five leading lines are non-numeric. They are skipped by the workbenches, but possibly aid the user in identifying her data. I have intentionally taken a data set which is not neatly formatted, as, indeed, most data files are not. Matrix-loading functions split the input at whitespace, not at specific columns; thus they are happy with datafile.ascii.
We load the data into GNU/Octave with
octave:1> data = load("datafile.ascii")
data =

  1.9600e+01  1.1835e-01  8.9391e+03
  1.5985e+01  1.0000e-01  2.5331e+04
  3.9660e+01  3.7838e-01  6.7888e+03
  1.3600e+01  7.5271e-01  6.2294e+01
  1.2488e+01  1.2646e-01  6.1755e+04
or into Scilab
-->data = fscanfMat("datafile.ascii")
data =

!  19.6      0.118352   8939.06  !
!  15.9846   0.1        25331.1  !
!  39.66     0.378377   6788.77  !
!  13.6      0.752707   62.2945  !
!  12.4877   0.126462   61755.   !
or into Tela
>data1 = import1("datafile.ascii")
>data1
#( 19.6, 0.118352, 8939.06;
   15.9846, 0.1, 25331.1;
   39.66, 0.378377, 6788.77;
   13.6, 0.752707, 62.2945;
   12.4877, 0.126462, 61755)
In all three examples data will contain a 5-times-3 matrix with all the values from datafile.ascii.
The complementary commands for saving a single matrix in ASCII format are
save("data.ascii", "data") # GNU/Octave fprintfMat("data.ascii", data, "%12.6g") // Scilab export_ASCII("data.ascii", data) // Tela
Note that Scilab's fprintfMat() requires a third parameter that defines the output format with a C-style template string.
Of course, none of the above save commands writes the original header of datafile.ascii, the lines starting with hash symbols. To write these, we need the ``low-level'', C-like input/output functions, which feature in each of the three workbenches.
For precise control of the input and the output, C-like I/O models are offered. All three applications implement the function

printf(format, ...)
Moreover, GNU/Octave and Tela follow the C naming scheme with their C-style file I/O:
handle = fopen(filename)
fprintf(handle, format, ...)
fclose(handle)
whereas Scilab prefixes these functions with an ``m'' instead of an ``f'':

handle = mopen(filename)
mprintf(handle, format, ...)
mclose(handle)
Whether the function is called fprintf() or mprintf(), they work the same way.
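For instance, the hash-mark header of datafile.ascii could be re-created with these functions. This is only a sketch in GNU/Octave syntax: it assumes the matrix data as loaded above, and it passes an explicit write mode to fopen(), as plain C would require.

## Write the data with a comment header, one matrix row per line.
fid = fopen("data-with-header.ascii", "w");
fprintf(fid, "# run 271\n");
fprintf(fid, "#  P/bar     T/K        R/Ohm\n");
for i = 1:rows(data)
  fprintf(fid, "%12.6g %12.6g %12.6g\n", ...
          data(i, 1), data(i, 2), data(i, 3));
endfor
fclose(fid);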
Next Month: Graphics, function plotting and data plotting.
Your game is looking good, so you pass a copy to a friend to try out. Strangely, the sound doesn't work. It turns out your friend runs the GNOME desktop, and under GNOME the sound device is taken over by the esd sound server. GNOME applications are supposed to talk to esd, not directly to the sound device. So you go back, learn the esd API, and add an option to your program to work with esd.
You now pass the game over to a second friend to test, and again sound doesn't work for her. She likes to run the KDE desktop, and under KDE the sound device is managed by the artsd sound server. So you spend a couple more evenings learning the artsd sound API and adding support to your game so it can work with KDE too.
A third friend has heard about this great game and wants to try it too. He is running Solaris on a Sun workstation, but that's okay, he can just recompile the code for his architecture. Unfortunately sound does not work properly, because the Solaris sound device and APIs are not the same as on Linux. You could work on adding support for Solaris too (if you had access to a Solaris machine), but what about the friend who runs AIX, and the one who uses the Network Audio Server (NAS), and the one who uses the ALSA kernel drivers? Just providing basic support for sound is getting to be a lot of work. It's too bad you can't just write the program once and have it work on all platforms.
Enter CSL, the Common Sound Layer.
It currently works on Linux systems using the OSS sound drivers and the aRts sound library (which runs on both KDE and GNOME desktop environments). Applications that use CSL have no further dependencies on any libraries or other components.
CSL was designed to provide similar performance to platform-specific code, with support for latency management and full duplex. This makes it particularly suitable for real-time type applications such as games.
CSL is unusual in that, despite all the talk of "desktop wars" between KDE and GNOME, it is being co-operatively developed by Stefan Westerfeld, a KDE developer, and Tim Janik, a GNOME developer.
CSL has some limitations, mostly by design: it is a C API only, so don't expect an object-oriented interface. It is a low-level API for digital audio only (no MIDI, no mixer, no codecs for complex formats like MP3). If you want features like 3D sound, you should look at something like OpenAL.
Currently, it supports only the OSS sound drivers or the aRts sound server. (aRts ships with KDE, and there is a GNOME version. Many multimedia applications like xmms and the RealVideo player will work under aRts as well).
While CSL is not finished yet, the API is quite stable and most of the functionality is there.
CSL is currently only provided in source format. You need to download the tar archive from http://www.arts-project.org/download/csl-0.1.2.tar.gz (there may be a newer version available by the time you read this).
Building and installing CSL follows the usual GNU build procedure, documented in the file INSTALL. Briefly, you need to run the commands:
% ./configure
% make
% make install
The last command must be run as root. To test CSL, you can run two of the included utility programs. The testsine program generates raw samples for a 440 Hertz sine wave, and cslcat accepts raw sound samples from standard input and sends them to the audio output device. Piping them together like this
% tests/testsine | csl/cslcat
should produce one second of a 440 Hertz sine wave tone (the above command assumes you are in the main CSL source directory).
If you have any raw sound files, you can try playing one with the cslcat utility, for example:
% cslcat -r 44100 -w 8 -c 1 /usr/lib/games/koules/start.raw
You can also try the programs in the examples directory.
 1  #include <unistd.h>
 2  #include <stdio.h>
 3  #include <fcntl.h>
 4  #include <csl/csl.h>
 5
 6  int main (int argc, char **argv)
 7  {
 8    const int size = 1024;
 9    CslDriver *driver;
10    CslPcmStream *stream;
11    CslOptions options;
12    short buffer[size];
13    int i, j, fd;
14
15    options.n_channels = 2;
16    options.rate = 44100;
17    options.pcm_format = CSL_PCM_FORMAT_S16_LE;
18    csl_driver_init (NULL, &driver);
19    csl_pcm_open_output (driver, "cslpcm1", options.rate,
                           options.n_channels, options.pcm_format, &stream);
20    fd = open("/dev/urandom", O_RDONLY);
21    for (i = 0; i < 500; i++)
22    {
23      read(fd, buffer, size);
24      for (j = 0; j < size; j++)
25        buffer[j] = CLAMP(buffer[j], -4000, 4000);
26      csl_pcm_write (stream, size, buffer);
27    }
28    csl_pcm_close (stream);
29    csl_driver_shutdown (driver);
30    return 0;
31  }
In line 4 we include <csl/csl.h>, the header file that defines all of the CSL API functions.
In lines 9-11 we declare variables to hold some of the important CSL data types. A CslDriver is a handle associated with a particular backend driver. A CslPcmStream is a PCM audio stream, associated with a CslDriver, opened for either input or output, and with specific sampling parameters. It is used much like a file descriptor. The type CslOptions stores options for a CslPcmStream. For convenience, CSL can parse standard command-line options for sampling parameters and put them in a CslOptions variable.
Lines 15-17 set the PCM options: number of channels, sampling rate, and data format. In this case two channels (stereo), at a 44100 sample per second rate, using 16-bit signed little-endian samples.
Line 18 obtains a handle to a CslDriver. We could have specified the backend to use (e.g. "oss" or "arts") but the special value of NULL instructs CSL to select a driver automatically. You can also ask CSL to return a list of available drivers.
In line 19, using the driver handle, we pass the sampling options and receive a handle to the CslPcmStream, in this case an output stream for sound playback. If we wanted to perform sound recording we would have opened an input stream.
In line 20 of the example we open the Linux random device. We are going to use it to obtain random numbers which we will send to the sound device.
Lines 21-27 form a loop in which we read data from /dev/urandom into a buffer and then write the data to the PCM stream using csl_pcm_write. Because the data is random, it can contain large sample values which may be quite loud. We use the convenience macro CLAMP, provided by CSL, to constrain each value to a smaller range (recall that we are working with 16-bit signed values here). The result of writing the random data to the sound device should be a hissing sound from the speaker. This white noise of no particular frequency confirms that the random number generator device is indeed a good source of random data.
In lines 28-29, after looping 500 times (which corresponds to about 3 seconds), we clean up by closing the stream and shutting down the driver.
By studying this and the other examples, and looking at the HTML API documentation, you should quickly get a feel for how to use the library.
There is a mailing list for CSL. You can join the list by sending a message with the word subscribe as the message body to [email protected]. The mailing list is archived at http://www.mail-archive.com/[email protected].
Preface:
This article assumes the reader can do basic SELECT, INSERT, UPDATE, and DELETE queries to and from a SQL database. If you are not sure how these work, please read a tutorial on these types of queries first. If you can use a SELECT query, then you are armed with enough information to read through this document with a high level of understanding. That said, let's get on to aggregate functions!
Summary:
In the beginning of this rather extensive article, I will cover how to use the five most common and basic aggregate functions in PostgreSQL: count(), min(), max(), avg(), and sum(). Then I will cover how to use several common operators that exist for your use in PostgreSQL. Finally, I will cover how to combine those operators with their aggregate-function counterparts. Depending on your development environment, a good philosophy to practice is letting your DataBase Management System (DBMS) craft your results so that they are immediately usable in your code with little or no processing; aggregates offer good examples of the reasoning behind this philosophy. Throughout, I will demonstrate how to use some simple operators in your queries to craft data exactly as you need it.
What is an aggregate function?
An aggregate function is a function such as count() or sum() that you can use to calculate totals. In writing expressions and in programming, you can use SQL aggregate functions to determine various statistics and values. Aggregate functions can greatly reduce the amount of coding that you need to do in order to get information from your database.
(Excerpt from the PostgreSQL 7.1 manual)
aggregate_name (expression)
aggregate_name (ALL expression)
aggregate_name (DISTINCT expression)
aggregate_name ( * )
where aggregate_name is a previously defined aggregate, and expression is any expression that does not itself contain an aggregate expression.
The first form of aggregate expression invokes the aggregate across all input rows for which the given expression yields a non-NULL value. (Actually, it is up to the aggregate function whether to ignore NULLs or not --- but all the standard ones do.) The second form is the same as the first, since ALL is the default. The third form invokes the aggregate for all distinct non-NULL values of the expression found in the input rows. The last form invokes the aggregate once for each input row regardless of NULL or non-NULL values; since no particular input value is specified, it is generally only useful for the count() aggregate function.
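For example, with the sale table introduced below, the plain and DISTINCT forms can give different answers (the column aliases here are invented for this sketch):

SELECT count(book_title) AS books_sold,
       count(DISTINCT book_title) AS different_titles
FROM sale;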
Consider this example. You are writing a program which tracks sales of books. You have a table called "sale" that contains the book title, book price, and date of purchase. You want to know the total amount of money you made by selling books in the month of March 2001. Without aggregate functions, you would have to select all the rows with a date of purchase in March 2001 and iterate through them one by one to calculate the total. Now if you only have 10 rows, this does not make a big difference (and if you only sell 10 books a month, you should hope those are pretty high dollar!). But consider a bookstore that sells on average 2000 books a month. Now iterating through each row one by one does not sound so efficient, does it?
With aggregate functions you can simply select the sum() of the book price column for the month of March 2001. Your query will return one value and you will not have to iterate through them in your code!
The SUM() function.
The sum() function is very useful as described in the above example. Based on our fictitious table, consider the following.
table sale (
    book_title        varchar(200),
    book_price        real,
    date_of_purchase  datetime
)
Without aggregate functions:
SELECT * FROM sale WHERE date_of_purchase BETWEEN '03/01/2001' AND '04/01/2001';
This returns all rows which correspond to a sale in the month of March 2001.
With aggregate functions:
SELECT SUM(book_price) AS total FROM sale WHERE date_of_purchase BETWEEN '03/01/2001' AND '04/01/2001';
This returns a single row with a single column called total, containing the total value of the books sold in the month of March 2001.
You can also use mathematical operators within the context of the sum() function to add additional functionality. Say for instance, you wanted to get the value of 20% of your sum of book_price as all of your books have a 20% markup built in to the price. Your aggregate would look like:
SELECT SUM(book_price) AS total, SUM(book_price * .2) AS profit FROM sale WHERE date_of_purchase BETWEEN '03/01/2001' AND '04/01/2001';
If you look on a grander scale, you will see even more uses for the sum() function. For example calculating commissions, generating detailed reports, and generating running statistical totals. When writing a report, it is much easier to have SQL do the math for you and simply display the results than attempting to iterate through thousands or millions of records.
The count() function.
Yet another useful aggregate function is count(). This function allows you to return the number of rows that match a given criterion. Say for example you have a database table that contains news items and you want to display your current total of news items in the database without selecting them all and iterating through them one by one. Simply do the following:
SELECT COUNT(*) AS myCount FROM news;
This will return the total number of news articles in your database.
The MAX() and MIN() functions.
These two functions will simply return the maximum or minimum value in a given column. This may be useful if you want to very quickly know the highest-priced book you sold and the lowest-priced book you sold (back to the bookstore scenario). That query would look like this.
SELECT MAX(book_price) AS highestPrice, MIN(book_price) AS lowestPrice FROM sale WHERE date_of_purchase BETWEEN '03/01/2001' AND '04/01/2001';
Again, this simply prevents you from having to select EVERYTHING from the database, iterate through each row one by one, and calculate the result by hand.
The AVG() function.
This particular aggregate is definitely very useful. Any time you would like to generate an average value for any number of fields, you can use the avg() aggregate. Without aggregates, you would once again have to iterate through all rows returned, sum up your column and take a count of the number of rows, then do your math. In our bookstore example, say you would like to calculate the average book price that was sold during March 2001. Your query would look like this.
SELECT AVG(book_price) AS avg_price FROM sale WHERE date_of_purchase BETWEEN '03/01/2001' AND '04/01/2001';
What is an operator?
An operator is something that performs an operation or function on the values around it. For an example of this, let's look at mathematical operators. If you wanted to subtract the values of two fields in a select statement, you would use the subtraction (-) operator.
SELECT salesperson_name, revenue - cost AS commission FROM sales;
What is returned is the revenue each salesperson brought in minus the cost of the products they sold, which yields their commission amount.
 salesperson_name | commission
------------------+------------
 Branden Williams |     234.43
 Matt Springfield |      87.74
Operators can be VERY useful when you have complex calculations or a need to produce the exact results you need without having your script do any text or math based processing.
Let's refer to our bookstore example. You are writing a program which will show you the highest margin books (largest amount of profit per book) so that your marketing monkey can place them closer to the door of the store. Instead of doing your math on the fly while iterating through your result set, you can have the result set display the correct information for you.
table inventory ( book_title varchar(200), book_cost real, selling_price real )
SELECT book_title, selling_price - book_cost AS profit FROM inventory ORDER BY profit DESC;
Which will produce results similar to the following.
                  book_title                  | profit
----------------------------------------------+--------
 How To Scam Customers Into Buying Your Books |  15.01
 How To Crash Windows 2000                    |  13.84
Now your marketing guy can very quickly see which books are the highest margin books.
Another good use for operators is when you are selecting information from one table to another. For example, you may have a temporary table that you select product data into so that it can be proofread before it is sent into some master data table. Shopping Carts make great examples of this. You can take the pertinent information from your production tables and place it in a temporary table to be then removed, quantity increased, or discounts added before it is placed into your master order table.
In an example like this, you would not want to select out your various kinds of information, perform some functions to get them just right, and then insert them back into your temporary table. You can simply do it all in one query by using operators. It also creates less of a headache when you are dealing with very dynamic data. Let the database handle as much of your dynamic data as it can.
Now I would like to go into some specific operators and their functions. To see a complete list of operators, in your pgsql interface window type '\do'.
The +, -, *, and / operators.
These are the basic math operators that you can use in PostgreSQL. See above for good examples of how to use them; a few additional examples follow.
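For instance, against the inventory table from the profit example, all four operators can be mixed freely in the select list (the aliases are invented for this sketch):

SELECT book_title,
       selling_price - book_cost AS profit,
       selling_price * 2 AS double_price,
       (selling_price - book_cost) / selling_price AS margin,
       book_cost + 1.50 AS cost_with_handling
FROM inventory;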
Many more uses for math operators will be revealed in the next article in this series which combines operators with aggregate functions.
Inequality (<, >, <=, >=) operators.
You most likely have used these in the WHERE clause of a specific SQL query. For instance:
SELECT book_title FROM inventory WHERE selling_price >= '30.00';
This query will select all books that have a selling price of $30.00 or more. You could even extend that to our profit example earlier and do the following.
SELECT book_title, selling_price - book_cost AS profit FROM inventory WHERE selling_price - book_cost >= '14.00' ORDER BY profit DESC;
Which will only produce the following results.
                  book_title                  | profit
----------------------------------------------+--------
 How To Scam Customers Into Buying Your Books |  15.01
This can allow you to set thresholds for various kinds of queries which is very useful in reporting.
The || (concatenate) operator.
When doing any sort of text concatenation, this operator comes in handy. Say for instance, you have a product category which has many different products within it. You might want to print out the product category name as well as the product item on the invoice.
SELECT category || CAST(': ' AS VARCHAR) || productname AS title FROM products;
Notice the use of the CAST() function. The concatenation operator requires knowledge about the types of the elements it is operating on; you must tell PostgreSQL that the string ': ' is of type VARCHAR in order for the operator to function.
Your results may look like:
                     title
-----------------------------------------------
 Music CDs: Dave Matthews, Listener Supported
 DVDs: Airplane
In the previous articles, I showed you some simple ways to use operators and aggregate functions to help speed up your applications. The true power of operators and aggregate functions comes when you combine their respective strengths. You can cut down on the lines of code your application needs by simply letting your database handle that work for you. This article will arm you with a plethora of information on this subject.
Our Scenario:
You are hired to create a web-based shopping application. Here is your database layout for your order table.
create table orders (
    orderid     integer (autoincrement),
    customerid  integer,
    subtotal    real,
    tax         real,
    shipping    real
)

create table orderdetail (
    orderid    integer,
    productid  integer,
    price      real,
    qty        integer
)

create table taxtable (
    state  varchar(2),
    rate   real
)

create table products (
    productid    integer,
    description  varchar(100),
    price        real
)

create table cart (
    sessionid  varchar(30),
    productid  integer,
    price      real,
    qty        integer
)
In this example, I will use database driven shopping carts instead of storing the cart information in a session. However, I will need a sessionID to keep up with the changes in the database. Our cart table contains the current pre-checkout shopping cart. Orders and Orderdetail contain the completed order with items. We can calculate each order's Grand Total by adding up the sub parts when needed for tracking or billing. Finally, products is our product table which contains a price and description.
The point of this exercise is to pass as much of the computation back to the database so that your application layer does not have to perform many trips to and from the database, as well as to reduce the lines of code required to complete your task. In this example, several of your items are stored in a database table so they may be dynamic. Those items are the basis of your subtotal, tax and shipping calculations. If you do not use operators and aggregates (and potentially subqueries), you will run the risk of making many trips around the database and putting added overhead into your application layer. I will break down the calculation of each of those items for you, as well as an example of how to put it all together in the end.
The subtotal calculation.
This is a rather simple calculation, and only takes an aggregate function and a simple operator to extract. In our case:
SELECT SUM(price*qty) AS subtotal FROM cart WHERE sessionid = '9j23iundo239new';
All we need is the sum of the results from every price * qty calculation. This shows how you can combine the power of operators and aggregates very nicely. Remember that the SUM aggregate will return the total sum from every calculation that is performed on a PER ROW basis. Don't forget your order of operations!
The tax calculation.
This one can be kind of tricky without some fancy SQL. I will be using COALESCE to determine the actual tax rate. COALESCE takes two arguments. If the results of the first argument are null, it will return the second. It is very handy in situations like this. Below is the query. Note: _subtotal_ is simply a placeholder.
SELECT _subtotal_ * COALESCE(rate, 0) AS tax FROM taxtable WHERE state = 'TX';
In the final query, I will show you how all these will add up so try not to get confused by my nifty placeholders.
The shipping calculation.
For simplicity, we will just assume that you charge shipping based on a $3 fee per item. You could easily expand that to add some fancy calculations in as well. By adding a weight field to your products table, you could easily calculate shipping based on an algorithm. In our instance, we will just count the number of items in our cart and multiply that by 3.
SELECT COUNT(*) * 3 AS shipping FROM cart WHERE sessionid = '9j23iundo239new';
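If you did add such a weight field, a weight-based charge might look like the following sketch (the weight column on products and the $0.50-per-pound rate are hypothetical):

SELECT SUM(p.weight * c.qty) * 0.50 AS shipping
FROM cart c, products p
WHERE c.productid = p.productid
  AND c.sessionid = '9j23iundo239new';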
Tying it all together.
Now that I have shown you how to get the results for those calculations separately, lets tie them all together into one big SQL query. This query will handle all of those calculations, and then place them into the orders table for you.
INSERT INTO orders (customerid, subtotal, tax, shipping)
VALUES (_customerid_,
        (SELECT SUM(price*qty) FROM cart WHERE sessionid = '9j23iundo239new'),
        (SELECT SUM(price*qty) FROM cart WHERE sessionid = '9j23iundo239new')
            * (SELECT COALESCE(rate, 0) FROM taxtable WHERE state = 'TX'),
        (SELECT COUNT(*) * 3 FROM cart WHERE sessionid = '9j23iundo239new'));
Additionally, if you had a Grand Total field in your orders table, you could complete this by adding up the sub items in either a separate query, or inside your INSERT query. The first of those two examples might look like this.
UPDATE orders SET grandtotal = subtotal+tax+shipping WHERE orderid = 29898;
To move the rest of the items from the cart table to the orderdetail table the following two queries can be issued in sequence.
INSERT INTO orderdetail (orderid, productid, price, qty) SELECT _yourorderid_, productid, price, qty FROM cart WHERE sessionid = '9j23iundo239new';
DELETE FROM cart WHERE sessionid = '9j23iundo239new';
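Since these two queries belong together, you may want to wrap them in a transaction so that the cart is emptied only if the order detail rows were written successfully (standard PostgreSQL syntax; _yourorderid_ is still a placeholder):

BEGIN;
INSERT INTO orderdetail (orderid, productid, price, qty)
    SELECT _yourorderid_, productid, price, qty
    FROM cart WHERE sessionid = '9j23iundo239new';
DELETE FROM cart WHERE sessionid = '9j23iundo239new';
COMMIT;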
Conclusion:
Aggregate functions can greatly simplify and speed up your applications by allowing the SQL server to handle these kinds of calculations. In more complex applications they can be used to return customized results from multiple tables for reporting and other functions. Operators can greatly enhance the quality of the results that you return from your database. The correct use of operators and aggregate functions can not only increase the speed and accuracy of your application, but also greatly reduce your code base by removing unneeded lines of code for looping through result sets, simple calculations, and other line hogs.
I hope that you enjoy reading and learning from this article as much as I enjoyed writing it!
buthead is a program to copy all but the first N lines of standard input to standard output. It's a new Debian package. Think Beavis and...
Answered By Don Marti, Heather Stern
Dear sir,
[Don] Cleanliness is important! Wash the coffee pot every day for peak flavor, and wash your hands before serving food and beverages.
To make the cafe inviting to customers, wipe up spills from tables or floors when there are no customers in line.
Shop around at local bakeries to find the best baked goods. If you tell them you are opening an Internet cafe, many bakeries will bring you a free sample plate of breakfast pastries.
Get plenty of change and small bills in the morning in case the first few customers only have large bills.
[Heather] Last, but certainly not least: The Linux Coffee HOWTO.
Answered By Heather Stern
Ben Okopnik says:
I'm gettin' middlin' crazy with the e-mail blahs; I seem to be wandering in this maze of little passages, all alike...
> look trophy
The trophy case has clearly seen better days. It has different sections with small faded labels like "MIME" and "base 64 encoded". It appears to contain a scroll.
> get scroll
It's in the trophy case.
> open trophy
With what, your bare hands?
> open trophy with hands
The trophy case contains a scroll. Its lid is open.
> get scroll
The scroll tube is labeled "Mapping DUNGEO for precocious 6 year olds." It looks like this tube has been opened before.
> s
You have arrived at the SMTP reception area. The host appears to be waiting for something.
> ehlo starshine.org
EHLO starshine.org, pleased to meet you!
Answered By Rory Krause
Quote from Rory as he disappeared into the server room when a server ran out of swap space, "It needs a rootie tootie rebootie."
Answered By Iron, Faber Fedor
... work fine, just as they will in procmail. Note that "pchs.com" will work (without "\.") - but will also match "pchsxcom", etc. I'm so confused!
[Ben] When it comes to regexes, you're not the first... nor will you be the last. <ominous laugh>
[Iron] We really need to do something about that Ben Okopnik... Now he's being diabolical about regexes. I wonder when he's gonna start writing a virus. <flips dark shades down so he can exit incognito>
[Ben] Not meeee! I'm only a part-time ax-murderer!
-Ben Okopnik, white hat firmly in place
-=-=-=-=-=-
[Faber] This man can't hold down *any* job full-time, can he? :-)
[Ben] Ah-hah. I *thought* some of Faber's mannerisms and turns of phrase sounded familiar. Dad, are you using pen names *again*?
[Iron] Son, when are you going to stop playing around with them computer contraptions and get a REAL job?
[Ben] <Jaw hits floor> Mike, you're welcome to give Pop his brain back any time you're done using it. I'm sure he could still get some wear out of it...
The phrasing - even though the original was in Russian - was word-for-word exact. I guess the folks at Alcoholics Anonymous aren't the only ones who get those "you mean my experience isn't unique?" shocks.
<walks off, shaking head>
Answered By Ben Okopnik
Faber Fedor asks:
I was wondering if anyone here could explain how email spoofing occurs. Specifically, email sent from [email protected] TO [email protected]. If it's being sent To: [email protected], how does it show up in my mailbox? Is there a "broadcast" address for email at a site?
[Ben]
From: <[email protected]> To: <random_name@stupid_ISP_that_permits_open_relaying.shmuck> Bcc: <list_of_harvested_addresses> Subject: MONEY!! MONEY!! MONEY!!!!!! Dear <Insert Name Here>: Wouldn't *YOU* like to make a bazillion dollars? This program requires no effort, no time, and NO brain. You don't even have to know the details of what will happen. Simply send me all your money, and I'll take care of everything!!! <List of testimonials follows>
Answered By Iron
I am moving from Canada to XXXXX Illinois and I cant seem to find anywhere the names of telephone companies who do installations and basic service I dont need long distance,I would like to get that information please as soon as possible as i am moving September 4th/2001 and I would like the phone installed in my apartement before I move in.And i know here in Canada you want to know something you call the press,and you can always get your answers.
[Iron] And you think everybody in "the press" knows everything? Have you ever tried e-mailing a random publication in Vancouver or Halifax to see whether they'd tell you how to get in touch with the local phone company?
Try typing "XXXXX Illinois telephone" into a search engine and see what it says. Many cities have a general links page maintained by some public or private organization. Or find the city's Chamber of Commerce and ask them.
You've reached the Linux Gazette Answer Gang....

Linux ::::::::: a modern operating system not much like any of:
--- DOS -- Windows -- Solaris -- MacOS -- alien starships ---
... except occasionally, an ability to run on the same hardware.

Gazette ::::::: published more regularly than "almanac." In our case:
--- a monthly web-based magazine, home: www.linuxgazette.com

Answer Gang ::: Not the "lazy college student's UNstudy group"
--- nor the "hey d00dz help me cRaK my neighBoorZ klub"

We're just a batch of (mostly) cheerful volunteers who want to make LINUX a little more fun. If you want fascinating answers to non-computing questions try asking Cecil Adams, buy a Tarot deck, or run the 'fortune' program on your nearest Linux box and see if it actually has any meaning for you.
Answered By Iron
I am majoring in CIS and i am looking for any grant money i can find to help pay for me to go to school. If you know of any sites or places i could write would you please send me a reply.
[Iron] I know one place you shouldn't write, and that is this address.
Answered By Ben Okopnik
I'm a Red Hat user (don't look at me like that, Ben!)
[Iron] Ben, are you intimidating people again?
[Ben] <ASCII art: a grinning skull and crossbones>
(Urgent and confidential)
(Re: TRANSFER OF ($ 152,000.000.00 USD (ONE HUNDRED AND FIFTY TWO MILLION DOLLARS
Dear sir,
We want to transfer to overseas ($ 152,000.000.00 USD) One hundred and Fifty two million United States Dollars) from a Prime Bank in Africa, I want to ask you to quietly look for a reliable and honest person who will be capable and fit to provide either an existing bankaccount or to set up a new Bank a/c immediately to receive this money,even an empty a/c can serve to receive this money, as long as you will remain honest to me till the end for this important business trusting in you and believing in God that you will never let me down either now or in future.
The amount involved is (USD 152M) One hundred and Fifty two million United States Dollars, only I want to first transfer $52,000.000 [fifty two million United States Dollar from this money into a safe foreigners account abroad before the rest, but I don't know any foreigner,
[Doesn't know any foreigner, huh? -Iron.]
I am only contacting you as a foreigner because this money can not be approved to a local person here, without valid international foreign passport, but can only be approved to any foreigner with valid international passport or drivers license and foreign a/c because the money is in us dollars and the former owner of the a/c Mr. Allan P. Seaman is a foreigner too, [and the money can only be approved into a foreign a/c However, we will sign a binding agreement, to bind us together
With my influence and the position of the bank official we can transfer thismoney to any foreigner's reliable account which youcan provide with assurance that this money will be intact pending our physical arrival in your country forsharing. The bank official will destroy all documents of transaction immediately we receive this money leaving no trace to any place and to build confidence you can come immediately to discuss with me face to face after which I will make this remittance in your presence and three of us will fly to your country at least two days ahead of the money going into the account.
I will apply for annual leave to get visa immediately I hear from you that you are ready to act and receive this fund in your account. I will use my position and influence to obtain all legal approvals for onward transfer of this money to your account with appropriate clearance from the relevant ministries and foreign exchange departments.
I AM AN ACCOUNTANT AND MEMBER OF THE TENDER COMMITTEE OF MY CORPORATION, THE NIGERIA NATIONAL PETROLEUM CORPORATION (NNPC).
AFTER DUE CONSULTATION WITH OTHER MEMBERS OF THE COMMITTEE I HAVE SPECIFICALLY BEEN MANDATED TO ARRANGE WITH YOU FOR A POSSIBLE TRANSFER OF SOME FUNDS ... RESULTING FROM VARIOUS CONTRACTS ...
WE EXPECT TO LOBBY TOP OFFICIALS FOR THEM TO APPROVE THE PAYMENT.
NOTE THAT WE HAVE PUT IN MANY YEARS OF METICULOUS SERVICE TO THE GOVERNMENT
sometime ago, a contract was awarded to a conglomerate of foreign companies in n.n.p.c by my committee. these contracts were over - invoiced to the tune of us$22.35million. this was done delibrately; the over-invoicing was a deal by members of my committee to benefit from the project. we now desire to transfer this money, which is presently in a suspense account of the n.n.p.c in our apex bank into an overseas account
[A suspense account? Does he mean a suspended account? -Iron.]
it does not matter whether or not your company does contract projects of the nature described here. the assumption is that you won a major contract and subcontracted it out to other companies, more often than not, big trading companies or firms of unrelated fields win major contracts and subcontracts to more specialised firms for execution of such contracts.
Subject: Son of Babs
[This was another message similar to the one above, but from Babs' son. It said that Babs was killed on duty several years ago. -Iron.]
I HOPE MY LETTER DOES NOT CAUSE YOU TOO MUCH EMBARRASSMENT AS I WRITE TO YOU IN GOOD FAITH BASED ON THE CONTACT ADDRESS GIVEN TO ME BY A FRIEND WHO ONCE WORKED AT THE NIGERIAN EMBASSYIN YOUR COUNTRY.
I REPRESENT MOHAMMED XXXXX, SON OF THE LATE GEN. XXX XXXXX, WHO WAS THE FORMER MILITARY HEAD OF STATE IN NIGERIA. HE DIED IN 1998. SINCE HIS DEATH, THE FAMILY HAS BEEN LOSING A LOT OF MONEY DUE TO VINDICTIVE GOVERNMENT OFFICIALS WHO ARE BENT ON DEALING WITH THE FAMILY. BASED ON THIS THEREFORE, THE FAMILY HAS ASKED ME TO SEEK FOR A FOREIGN PARTNER WHO CAN WORK WITH US AS TO MOVE OUT THE TOTAL SUM OF US$75,000,000.00 ( SEVENTY FIVE MILLION UNITED STATES DOLLARS ), PRESENTLY IN THEIR POSSESSION.
Subject: Can you handle big bucks?
[Yeah, like I'm really going to open an attachment called Setup.exe. -Iron.]
Just a short little note today from me - just meant to help you since you've been posting to FFA pages like mine.
If you are posting and advertising all over the place and not getting the results you want - THERE IS A GOOD REASON WHY you're not getting results!
If you are ARE "signing up" people - but THEY never do anything that makes you OR them money - THERE IS A GOOD REASON WHY that's happening.
Have you got your internet business up and running yet? Is it making you money? If not then you need to check out this opportunity.
" THE KARMA PROGRAM! "
BULK FRIENDLY offshore website hosting only $500 per month.
Bulk Email advertise your website, RISK FREE!
Never have your website shutdown again!
More hits and business than you can imagine.
Every month over 1,000,000 new web-sites come on-line. Websites from people who want to make a living from internet commerce. But they haven't the slightest idea how to go about it.
Would you like to be able to get a list of these people? Newcomers to the Net are ideal prospects for anyone offering Internet Marketing services and training of any kind. Would you like to get the tools to trace them, take them by the hand, and sell them your unique internet marketing program?
These newbies will welcome your offer with outstretched arms! (Remember your own desperation when you entered the net??)
Imagine! You can offer them a free manual that provides the step-by-step information they have been searching for ever since they came on line, when they buy the necessary software to become successful from you.
Do you think you'll have any problems selling to these people???
When you realize that newcomers to the Internet want and need your help, it should be clear that here is the answer to all of your promotion fantasies! Here is the Mother-lode! The source of more business than you ever dreamed of! If you could reach these people, it would be almost like getting the combination to your bank's safe!
The question is though, how do you find these people?
There is no main gate to the Internet, that everyone goes through when they first get online... There's no "newbie" lounge where they all congregate... or is there? In fact, there is a place where most Internet newcomers gather, if they are hoping to establish a web-based home business. You can find them there any time of the day or night... I can show you where they are!
None of the internet Marketing Gurus has ever been able to tell you where to catch these people when they're entering the net! I can and I will!
[It was, huh? Do you even know who my webspace provider is? -Iron.]
If you would like to receive this excellent offer all you have to do is send your details to the following email address and your username and password will be emailed to you within 5 business days. Then you will be able to login at the members area of XXXXX.com and configure your account (add domains, setup scripts etc.).
Please send all of the following details:
Your Full Name
Your Full Address
Your Phone Number
Your Email Address
Your Credit Card Number
Your Credit Card Expiry Date
Your Credit Card Type (eg. visa, mastercard etc.)
Send all of the above to [email protected]
Are you ready to upgrade your web site for e-commerce or drastically improve your current shopping cart system?...
Don't be the one to waste valuable time with an inferior shopping cart program that isn't tailored to your business. XXXXX was developed and by an experienced web development company that specializes in helping people like you sell and market products over the Internet.
Due to our ISP's terms of service agreement, we are unable to advertise the web address of the Atomicart demonstration web site in this message, so contact us today at XXX XXX XXXX and we will provide you with demonstration web addresses. When you call, you will speak directly with a software engineer not a salesperson.
Happy Linuxing!
Mike ("Iron") Orr
Editor, Linux Gazette,