Date: Wed, 2 Jul 1997 18:18:11 -0400
From: Jon Cox
I saw an article in July's LG that talked about using watch as a better way to monitor ftp downloads -- there's an even BETTER way: check out ncftp. It works much like ftp, but shows a progress bar, estimates the time to completion, and saves bookmarks of where you've been. I think ncftp is pretty standard on all distributions these days.
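For anyone who hasn't tried it, a session looks roughly like this (the host, path and file name are only examples, and prompts vary a little between ncftp versions):

$ ncftp ftp.kernel.org
ncftp> cd /pub/linux
ncftp> get patch-2.0.30.gz      # progress bar and time estimate appear during the transfer
ncftp> quit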
-Enjoy Jon
Date: Wed, 2 Jul 1997 18:18:11 -0400
From: Jon Cox
While grep works as a tool for searching through a big directory tree for a string, it's pretty slow for this kind of thing, and a much better tool exists -- Glimpse. It even has an agrep-style, stripped-down regexp capability for doing "fuzzy searches", and it is astonishingly fast. Roughly speaking:
glimpse is to grep as
locate is to find
I believe the latest rpm version is glimpse-4.0-4.i386.rpm. You can find it on any site that mirrors Red Hat's contrib directory.
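A rough sketch of how it's used, assuming the option names haven't changed between versions (check the man pages): glimpseindex builds the index, glimpse searches it.

$ glimpseindex ~/src             # index a directory tree (index files land in your home directory)
$ glimpse -i 'read_command'      # case-insensitive search of the indexed files
$ glimpse -2 'aproximate'        # agrep-style fuzzy search allowing up to 2 errors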
Enjoy!
-Jon
Date: Wed, 2 Jul 1997 18:18:11 -0400
From: Wim Jongman
I have hacked a helpful utility. Please have a look at it.
Regards,
Wim Jongman
I have been a satisfied diald user for quite some time. One of the things on my wish list was the ability to activate the link from another location. I have written a small shell script that waits for activity on my telephone line.
When activity is detected, the script runs the ping utility, which causes diald to set up a link to my ISP. If the activity comes from the inside (diald does the dialing), the ping is also performed, but there is no harm in that.
My /etc/diald.conf looks like this:
mode cslip
connect /usr/local/bin/connect
device /dev/cua2
speed 115200
modem
lock
crtscts
local local.ip.ad.dres
remote ga.te.way.address
mtu 576
defaultroute
ip-up /usr/local/bin/getmail &
ip-down /usr/local/bin/waitmodem &
include /usr/lib/diald/standard.filter
The first time the link goes down, the waitmodem program is run. The script for /usr/local/bin/waitmodem is:
#!/bin/bash
# This script waits for data entering the modem. If data has arrived,
# then a host is pinged to allow diald to
# setup a connection (and you to telnet in.)
if test -f /var/locks/waitmodem
then
   exit 0
else
   touch /var/locks/waitmodem
   sleep 5
   read myvar < /dev/cua2
   ping -c 10 host.com > /dev/null 2>&1 &
   rm /var/locks/waitmodem
   exit 0
fi
If diald decides to drop the link, the ip-down keyword activates the waitmodem script. This creates a lock file in /var/locks and sleeps for five seconds to allow the modem buffers to flush. Then the modem device is read, and if activity occurs, the ping is run, the lock is removed and diald dials out. This allows you to access your machine. Change the italicized bits in the scripts (the modem device and the host to ping) to match your setup. I guess you have to have a static IP for it to be useful.
Regards,
Wim Jongman
Date: Wed, 2 Jul 1997 18:18:11 -0400
From: Jordi Sanfeliu
hi !
This is my contribution to this beautiful gazette !! :))
tree is a simple tool that allows you to see the whole directory tree on your hard disk.
I think that is very cool, no?
#!/bin/sh
#  @(#) tree  1.1  30/11/95  by Jordi Sanfeliu
#  email: [email protected]
#
#  Initial version:  1.0  30/11/95
#  Next version   :  1.1  24/02/97   Now, with symbolic links
#
#  Tree is a tool for view the directory tree (obvious :-) )
#

search () {
   for dir in `echo *`
   do
      if [ -d $dir ] ; then
         zz=0
         while [ $zz != $deep ]
         do
            echo -n "|   "
            zz=`expr $zz + 1`
         done
         if [ -L $dir ] ; then
            echo "+---$dir" `ls -l $dir | sed 's/^.*'$dir' //'`
         else
            echo "+---$dir"
            cd $dir
            deep=`expr $deep + 1`
            search     # with recursivity ;-)
            numdirs=`expr $numdirs + 1`
         fi
      fi
   done
   cd ..
   if [ $deep ] ; then
      swfi=1
   fi
   deep=`expr $deep - 1`
}

# - Main -
if [ $# = 0 ] ; then
   cd `pwd`
else
   cd $1
fi
echo "Initial directory = `pwd`"
swfi=0
deep=0
numdirs=0
zz=0
while [ $swfi != 1 ]
do
   search
done
echo "Total directories = $numdirs"
Have fun !
Jordi
Date: Wed, 18 Jun 1997 10:15:26 -0700
From: James Gilb
I liked your gawk solution to displaying hex data. Two things (which people have probably already pointed out to you).
-v: The -v option causes hexdump to display all input data. Without the -v option, any number of groups of output lines, which would be identical to the immediately preceding group of output lines (except for the input offsets), are replaced with a line comprised of a single asterisk.
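A quick way to see what -v changes (the file name here is just a placeholder):

hexdump -C somefile | less       # identical runs of lines are collapsed into a single *
hexdump -v -C somefile | less    # every input line is displayed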
00000000: 01df 0007 30c3 8680 0000 334e 0000 00ff  ....0.....3N....
00000010: 0048 1002 010b 0001 0000 1a90 0000 07e4  .H..............
00000020: 0000 2724 0000 0758 0000 0200 0000 0000  ..'$...X........
00000030: 0000 0760 0004 0002 0004 0004 0007 0005  ...`............
00000040: 0003 0003 314c 0000 0000 0000 0000 0000  ....1L..........
00000050: 0000 0000 0000 0000 0000 0000 2e70 6164  .............pad
00000060: 0000 0000 0000 0000 0000 0000 0000 0014  ................
00000070: 0000 01ec 0000 0000 0000 0000 0000 0000  ................
00000080: 0000 0008 2e74 6578 7400 0000 0000 0200  .....text.......
00000090: 0000 0200 0000 1a90 0000 0200 0000 2a98  ..............*.
(I don't suppose it is surprising that emacs does this; after all, emacs is not just an editor, it is its own operating system.)
Date: Tue, 24 Jun 1997 11:54:48 +0200
From: Jerko Golubovic
A comment on the article "HARD DISK DUPLICATION" written by [email protected] in Linux Gazette #18 (June 97).
What I did at my place is the following:
I set up a root-NFS system to boot a usable configuration over the network. I just need a floppy with the appropriate kernel command line and the system comes up.
When the system comes up, I mount as /root the NFS volume where I store the compressed images. That way I have them readily available when I log in.
With dmesg I find out the geometry of the hard disk in the target system. Then, to take a new image, I do:
cat /dev/hda | gzip -9 > <somename>.gz
And for restore:
zcat <somename>.gz > /dev/hda
Of course, I don't have to use such a system. It is enough to prepare one boot floppy containing just an FTP client and the network configuration. I made two shell scripts:
b:
----------------------
#!/bin/sh
cat /dev/hda | gzip -9

r:
----------------------
#!/bin/sh
gzip -d > /dev/hda

Then, in FTP you do:

put |./b <somename>.gz     - to save an image
get <somename>.gz |./r     - to restore an image
ANY FTP server on ANY platform can be used for storage.
Not only that - you don't have to use FTP at all - you can use smbclient instead - and read directly from Win or Lanman shares - doing basically the same thing.
Date: Tue, 1 Jul 1997 13:12:34
From: Gene Gotimer
In Linux Gazette Issue 18, Earl Mitchell ([email protected]) suggested
grep foo `find . -name \*.c -print`
as a way to grep files in a directory tree. He warned about a command line character limit (potentially 1024 characters).
Another way to accomplish this, without the character limit, is to use the xargs command:
find . -name '*.c' -print | xargs grep foo
The xargs command accepts arguments on standard input, and tacks them on the end of the specified command (after any supplied parameters).
You can specify where in the command xargs will place the arguments (rather than just on the end) if you use the -i option and a pair of curly braces wherever you want the substitution:
ls srcdir | xargs -i cp srcdir/{} destdir/{}
xargs has a number of options worth looking at, including -p to confirm each command as it is executed. See the man page.
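One caveat with this pipeline: file names containing spaces or newlines will confuse xargs. If you have the GNU versions of find and xargs, the usual fix is the null-terminated form:

find . -name '*.c' -print0 | xargs -0 grep foo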
-- Gene Gotimer
Date: Mon, 23 Jun 1997 08:45:48 +0200
From: Jean-Philippe CIVADE
I've written a utility under Windows 95 that can copy from disk to disk in a binary fashion. It's called Disk2file, and it can be found on my web site under "tools". The primary purpose of this utility was to make ISO images from a hard disk (proprietary file system) in order to record them on a CD-ROM. I used it yesterday to duplicate a Red Hat 4.1 installed disk with success. The advantage of this method is that it is possible to produce a series of disks very quickly. The utility transfers up to 10 MB/s; the duplication time for a 540 MB disk is about 10 minutes.
The way to use it is:
It's referenced as shareware in the docs, but I grant the Linux community free use of it for disk duplication only.
-- Best Regards Jean-Philippe CIVADE
Date: Fri, 20 Jun 1997 00:05:33 -0500 (CDT)
From: Ralph
Here is a script I hacked together (trust me, after you see it you'll understand why this is my first script hack) to ftp McAfee virus definitions, unzip them, and run a test to make sure they are OK. You need vscan for Linux, located at ftp://ftp.mcafee.com/pub/antivirus/unix/linux
The first script does the work of pulling the definitions down, unzipping them, and testing:
#!/bin/sh
# =====================================================================
# Name: update-vscan
# Goal: Auto-update McAfee's Virus Scan for Linux
# Who:  Ralph Sevy [email protected]
# Date: June 19 1997
# ----------------------------------------------------------------------
# Run this file on the 15th of each month to insure that the file gets
# downloaded
# ======================================================================

datafile=dat-`date +%y%m`.zip
mcafeed=/usr/local/lib/mcafee

ftp -n ftp.mcafee.com << !
user anonymous [email protected]
binary
cd /pub/antivirus/datfiles/2.x
get $datafile
quit
!

if [ -f $mcafeed/*.dat ]; then
    rm $mcafeed/*.dat
fi

unzip $datafile *.DAT -d $mcafeed

for file in $(ls $mcafeed/*.DAT); do
    lconvert $file
done

uvscan $mcafeed/*

exit
--------------------------------------------------------------------------- CUT HERE

lconvert is a 3 line script I stole looking in the gazette

CUT HERE --------------------------------------------------------------------------
#!/bin/tcsh
# script named lconvert
foreach i ($argv)
    mv $i `echo $i | tr '[A-Z]' '[a-z]'`
end
------------------------------------------------------------------------- CUT HERE
The last thing to do is add an entry to crontab to update your files once a month. I prefer the 15th, as it makes sure I get the file (I don't really know how to check for errors yet; that's my next project).
# crontab command line
# update mcafee data files once a month on the 15th at 4am
0 4 15 * * /usr/local/bin/update-vscan
It's not pretty, I'm sure, but it works.
Ralph http://www.kyrandia.com/~ralphs
Date: Thu, 3 Jul 1997 11:13:56 -0400
From: Neil Schemenauer
I have seen a few people wondering what to do with log files that keep growing. The easy solution is to trim them using:
cat </dev/null >some_filename

The disadvantage to this method is that all your logged data is gone, not just the old stuff. Here is a shell script I use to prevent this problem.
#!/bin/sh
#
# usage: logroll [ -d <save directory> ] [ -s <size> ] <logfile>

# where to save old log files
SAVE_DIR=/var/log/roll
# how large should we allow files to grow before rolling them
SIZE=256k

while :
do
    case $1 in
    -d)
        SAVE_DIR=$2
        shift; shift;;
    -s)
        SIZE=$2
        shift; shift;;
    -h|-?)
        echo "usage: logroll [ -d <save directory> ] [ -s <size> ] <logfile>"
        exit;;
    *)
        break;;
    esac
done

if [ $# -ne 1 ]
then
    echo "usage: logroll [ -d <save directory> ] [ -s <size> ] <logfile>"
    exit 1
fi

if [ -z "`find $1 -size +$SIZE -print`" ]
then
    exit 0
fi

file=`basename $1`
if [ -f $SAVE_DIR/$file.gz ]
then
    /bin/mv $SAVE_DIR/$file.gz $SAVE_DIR/$file.old.gz
fi
/bin/mv $1 $SAVE_DIR/$file
/bin/gzip -f $SAVE_DIR/$file

# this last command assumes the PID of syslogd is stored like RedHat
# if this is not the case, "killall -HUP syslogd" should work
/bin/kill -HUP `cat /var/run/syslog.pid`

Save this script as /root/bin/logroll and add the following to your /etc/crontab:
# roll log files
30 02 * * * root /root/bin/logroll /var/log/log.smb
31 02 * * * root /root/bin/logroll /var/log/log.nmb
32 02 * * * root /root/bin/logroll /var/log/maillog
33 02 * * * root /root/bin/logroll /var/log/messages
34 02 * * * root /root/bin/logroll /var/log/secure
35 02 * * * root /root/bin/logroll /var/log/spooler
36 02 * * * root /root/bin/logroll /var/log/cron
38 02 * * * root /root/bin/logroll /var/log/kernel

Now forget about log files. The old log file is stored in /var/log/roll and gzipped to conserve space. You should have lots of old logging information if you have to track down a problem.
Neil
Date: Fri, 27 Jun 1997 15:43:44 +1000 (EST)
From: Damian Haslam
Hi, after searching (to no avail) for a way to display the currently executing process in the xterm's title bar, I resorted to changing the source of bash 2.0 to do what I wanted. Starting at line 117 of eval.c in the source, add the lines marked with # (but don't include the #):
117:       if (read_command () == 0)
118:         {
#119:          if (strcmp(get_string_value("TERM"),"xterm") == 0) {
#120:            printf("^[]0;%s^G",make_command_string(global_command));
#121:            fflush(stdout);
#122:          }
#123:
124:           if (interactive_shell == 0 && read_but_dont_execute)
.....

You can then set PROMPT_COMMAND to reset the xterm title to the pwd, or whatever takes your fancy.
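For the reset side, a typical snippet for ~/.bashrc might look like the following -- the exact title text (user, host, working directory) is just an example; the \033]0;...\007 sequence is the same xterm title escape used in the patch above:

case $TERM in
xterm*|rxvt)
    PROMPT_COMMAND='echo -ne "\033]0;${USER}@${HOSTNAME}: ${PWD}\007"'
    ;;
esac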
cheers - damian
Date: Sun, 29 Jun 1997 10:09:52 -0400 (EDT)
From: Tim Newsome
Another way of getting a file numbered:
grep -n $ <filename>
-n tells grep to number its output, and $ matches the end of a line. Since every line in the file has an end (except possibly the last one), it'll stick a number in front of every line.
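For example (the file contents shown here are made up), the line number is glued onto each line with a colon:

$ grep -n $ /etc/shells
1:/bin/sh
2:/bin/bash
3:/bin/csh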
Tim
Date: Wed, 02 Jul 1997 20:17:26 +0900
From: Matt Gushee
About getting rid of X components, Michael Hammel wrote that "...you still need to hang onto the X applications (/usr/X11R6/bin/*)." We-e-ll, I think that statement needs to be qualified. Although I'm in no sense an X-pert, I've poked around and found quite a few non-essential components: multiple versions of xclock (wristwatches are more accurate and give your eyes a quick break); xedit (just use a text-mode editor in an xterm). Fonts? I could be wrong, but I don't see any reason to have both 75 and 100 dpi fonts; and some distributions include Chinese and Japanese fonts, which are BIG, and which not everyone needs. Anyway, poking around for bits and pieces you can delete may not be the best use of your time, but the point is that X seems to be packaged with a very broad brush. By the way, I run Red Hat, but I just installed the new (non-rpm) XFree86 3.3 distribution -- and I notice that Red Hat packages many of the non-essential client programs in a separate contrib package, while the XFree86 group puts them all in the main bin/ package.
Here's another, maybe better idea for freeing up disk space: do you have a.out shared libraries? If you run only recent software, you may not need them. I got rid of my a.out libs several months ago, and have installed dozens of programs since then, and only one needed a.out (and that one turned out not to have the features I needed anyway). Of course, I have the RedHat CD handy so I can reinstall them in a moment if I ever really need them.
That's my .02 .
--Matt Gushee
Date: Wed, 2 Jul 1997 09:46:33 -0400 (EDT)
From: Clayton L. Hynfield
Don't forget about find's -exec option:
find . -type f -exec grep foo {} \;
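One quirk worth noting: because grep is handed one file at a time, it won't print the name of the matching file. A common workaround (assuming a typical grep) is to pass /dev/null as an extra file, so grep always sees at least two file arguments and prints the file name with each match:

find . -type f -exec grep foo /dev/null {} \;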
Clayton L. Hynfield
Date: Mon, 07 Jul 97 15:08:39 +1000
From: Stuart Lamble
With regards to changing the size of the X screen, I assume you're using XFree86. XFree will make your virtual screen size the larger of:

 * the specified virtual screen size
 * the _largest_ resolution you _might_ use with your video card (specified in 'Section "Screen"').
Open your XF86Config file in any text editor (ae, vi, emacs, jed, joe, ...) _as root_. (You need to be able to write it back out again.) Search for "Screen" (this is, IIRC, case insensitive, so for example, under vi, you'd type:
/[Ss][Cc][Rr][Ee][Ee][Nn]

(yeah, yeah, I know there's some switch somewhere that makes the search case insensitive (or if there isn't, there _should_ be :), but I can't remember it offhand; I don't have much use for such a thing.)
You'll see something like:
Section "Screen" Driver "accel" Device "S3 Trio64V+ (generic)" Monitor "My Monitor" Subsection "Display" Depth 8 Modes "1024x768" "800x600" "640x480" ViewPort 0 0 Virtual 1024 768 EndSubsection Subsection "Display" Depth 16 Modes "800x600" "640x480" ViewPort 0 0 Virtual 800 600 EndSubsection Subsection "Display" Depth 24 Modes "640x480" ViewPort 0 0 Virtual 640 480 EndSubsection EndSection(this is taken from a machine I use on occasion at work.)
The first thing to check is the lines starting with Virtual. If you want the virtual resolution to be the same as the screen size, it's easy to do - just get rid of the Virtual line, and it'll be set to the highest resolution listed in the relevant Modes line. (In this case, for 24bpp, it would be 640x480; at 16bpp, 800x600; at 8bpp, 1024x768.) Just be aware that if you've got a 1600x1200 mode at the relevant depth listed, the virtual screen size will stay at 1600x1200. You'd need to get rid of the higher resolution modes in this case.
I would strongly recommend you make a backup of your XF86Config file before you mess around with it, though. It's working at the moment; you want to keep it that way :-)
All of this is, of course, completely incorrect for MetroX, or any other commercial X server for Linux.
Cheers.
Date: Sun, 6 Jul 1997 13:13:29 -0400 (EDT)
From: Tim Newsome
Since nobody has mentioned it yet: procps (at least version 1.01) comes with a very useful utility named watch. You can give it a command line which it will execute every 2 seconds. So, to keep track of file size, all you really need is:

watch ls -l filename

Or if you're curious as to who's logged on:

watch w

You can change the interval with the -n flag, so to pop up a different fortune every 20 seconds, run:

watch -n 20 fortune

Tim
Date: Fri, 04 Jul 1997 14:50:08 -0400
From: Ian Quick
I don't know if this is very popular, but a friend once told me a way to put your syslog messages on a virtual console. First make sure that you have the device for the console you want. (I run Red Hat 4.0 and it has them up to tty12.) Then edit your syslog.conf file and add *.* <a few tabs for format> /dev/tty12. Reboot and TA DA! Just hit alt-F12 and there are your messages, logged to a console.
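As a sketch, the syslog.conf line described above would look something like this (fields separated by tabs; /dev/tty12 is just the example console). A full reboot shouldn't be necessary -- restarting syslogd, e.g. with killall -HUP syslogd, should be enough to pick up the change:

*.*						/dev/tty12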
-Ian Quick
Date: Mon, 7 Jul 1997 15:59:39 -0600 (CST)
From: Terrence Martin
This is a common problem that occurs with many of our Windows users when they upload html and perl cgi stuff to our web server.
The real fix for this has been available for years in the ftp clients themselves. Every ftp client should have support for both 'binary (type I)' and 'ASCII (type A)' uploads/downloads. By selecting or toggling this option to ASCII mode (say, in ws_ftp), DOS-format text files are automagically translated to Unix style, without the ^M. Note that you definitely do not want to transfer binary files such as apps or programs in ASCII mode, as the translation will corrupt them.
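In a command-line ftp client the same toggle looks roughly like this (file names are only examples):

ftp> ascii                 # text files: CR/LF <-> LF translation on
ftp> put index.html
ftp> binary                # programs, archives, images: transferred verbatim
ftp> put photo.gif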
Regards
Terrence Martin
Date: Fri, 11 Jul 1997 00:27:49 -0400
From: Joey Hess
I use X 99% of the time, and I was getting tired of the routine of CTRL-ALT-F1; log in; run squake; exit; switch back to X that I had to go through every time I wanted to run squake. So I decided to add an entry for squake to my fvwm menus. To make that work, I had to write a script. I hope someone else finds it useful; I call it runvc:
#!/bin/sh
# Run something on a VC, from X, and switch back to X when done.
# GPL Joey Hess, Thu, 10 Jul 1997 23:27:08 -0400
exec open -s -- sh -c "$* ; chvt `getvc`"

Now I can just type runvc squake (or pick my fvwm menu entry that does the same) and instantly be playing squake, and as soon as I quit squake, I'm dumped back into X. Of course, it works equally well for any other program you need to run at the console.
Runvc is a one-liner, but it took me some time to get it working right, so here's an explanation of what's going on. First, the open -s command is used to switch to another virtual console (VC) and run a program. By default, it's going to switch to the next unused VC, which is probably VC 8 or 9. The -s has to be there to make open actually change to that console.
Next, the text after the -- is the command that open runs. I want open to run 2 commands, so I have to make a small shell script, and this is the sh -c "..." part. Inside the quotes, I place $*, which actually handles running squake or whatever program you told runvc to run.
Finally, we've run the command and nothing remains but to switch back to X. This is the hard part. If you're not in X, you can use something like open -w -s -- squake and open will run squake on a new VC, wait for it to exit, and then automatically switch back to the VC you ran it from. But if you try this from inside X, it just doesn't work. So I had to come up with another method to switch back to X. I found that the chvt command was able to switch back from the console to X, so I used it.
Chvt requires that you pass it the number of the VC to switch to. I could just hard code in the number of the VC that X runs on on my system, and do chvt 7, but this isn't portable, and I'd have to update the script if this ever changed. So I wrote a program named 'getvc' that prints out the current VC. Getvc is actually run first, before any of the rest of the runvc command line, because it's enclosed in backticks. So getvc prints out the number of the VC that X is running on and that value is stored, then the rest of the runvc command line gets run, and eventually that value is passed to chvt, which finally switches you back into X.
Well, that's all there is to runvc. Here's where you can get the programs used by it:
/* getvc.c
 * Prints the number of the current VC to stdout. Most of this code
 * was ripped from the open program, and this code is GPL'd
 *
 * Joey Hess, Fri Apr 4 14:58:50 EST 1997
 */

#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/vt.h>

int main ()
{
	int fd = 0;
	struct vt_stat vt;

	if ((fd = open("/dev/console", O_WRONLY, 0)) < 0) {
		perror("Failed to open /dev/console");
		return 2;
	}
	if (ioctl(fd, VT_GETSTATE, &vt) < 0) {
		perror("can't get VT state");
		close(fd);
		return 4;
	}
	printf("%d\n", vt.v_active);
	return 0;
}

/* End of getvc.c */
I hope this tip isn't too long!
-- see shy jo
Date: Fri, 18 Jul 1997 00:33:48 +0200 (SAT)
From:
Hi!
First of all, I want to congratulate you on your fine magazine. Although I've been around for quite some time and have known about the existence of LG, I've never had the time (or should I say I have been too ignorant) to read it. Well, I finally sat down and started reading all the issues and I must say I'm impressed. Therefore I decided I would show my gratitude by sharing some of my 2c tips. Enjoy...
# Quick way to copy a tree of files from one place to another

----< cptree <----
#!/bin/sh
if [ $# = 2 ]
then
    (cd $1; tar cf - .) | (mkdir $2; cd $2; tar xvfp -)
else
    echo "USAGE: "`basename $0`" <source_directory> <dest_directory>"
    exit 1
fi
----< cptree <----

# Quick way to move a tree of files from one place to another

----< mvtree <----
#!/bin/sh
if [ $# = 2 ]
then
    (cd $1; tar cf - .) | (mkdir $2; cd $2; tar xvfp -)
    rm -rf $1
else
    echo "USAGE: "`basename $0`" <source_directory> <dest_directory>"
    exit 1
fi
----< mvtree <----

# Rename numeric files (1.*, 2.*, 3.*, etc.) to their correct numeric
# equivalents (01.*, 02.*, 03.*, etc.). Useful to prevent incorrect wild
# card matching

----< fixnum <----
#!/bin/sh
if [ $# = 0 ]
then
    FILELIST=`ls {1,2,3,4,5,6,7,8,9}.mp3 2> /dev/null`
    MPFILE="empty"
    chmod -x *
    for MPFILE in $FILELIST
    do
        if [ -e $MPFILE ]; then mv $MPFILE "`echo "0$MPFILE"`"; fi
    done
fi
----< fixnum <----

# This one strips the given file name of its extension (i.e. "file.txt"
# would become "file")

----< cutbase <----
#!/bin/sh
if [ $# = 1 ]
then
    dotpos=`expr index $1 "."`
    if [ $dotpos -gt 0 ]
    then
        dotpos=`expr $dotpos - 1`
        stripfile=`expr substr $1 1 $dotpos`
    else
        stripfile=$1
    fi
    echo $stripfile
else
    echo " USAGE: `basename $0` <filename>"
    exit 1
fi
----< cutbase <----

# If you're desperately looking for a file containing something and you
# don't have a clue where to start looking, this one might be for you.
# It greps through all the files in the given directory tree for the given
# keyword and lists all the files. For example: grepall /usr/doc PAP secrets

----< grepall <----
#!/bin/sh
if [ $# = 0 ]
then
    DIR="."
else
    DIR=$1
    shift
    find $DIR -type f -exec grep -lie "$@" {} \; | less
fi
----< grepall <----

# You might have seen some of the xterm titlebar tips posted in LG. Here
# is my variation on the theme. I like my xterm to keep the title that
# I've either specified on the command line or the name of the program,
# and after I've run programs like Midnight Commander, which change the
# titlebar, I want it restored to its old value. Here is my way of doing
# it. Just put the code in /etc/profile or ~/.profile or whatever startup
# file you use...

----< Titlebar 2c tip <----
if [ $TERM = "xterm" -o $TERM = "xterm-color" -o $TERM = "rxvt" ]
then
    function TitlebarString()
    {
        local FOUND=0
        local PIDTXT=`ps | grep $PPID`
        for WORDS in $PIDTXT
        do
            if [ $FOUND = 1 ]; then break; fi
            if [ $WORDS = "-T" ]; then export FOUND=1; fi
        done
        if [ $FOUND = 0 ]
        then
            WORDS=`(for TMP in $PIDTXT; do echo -n $TMP" "; done) | cut -f5 -d" "`
            if [ "`echo $WORDS | grep -i xterm`" != "" ]; then WORDS="xterm"; fi
        fi
        echo -n $WORDS
        unset WORDS
    }

    if [ $COLORTERM -a $COLORTERM = "rxvt-xpm" ]
    then
        alias mc='mc -c;echo -ne "\033[m\033]0;`TitlebarString`\007"'
    else
        alias mc='mc -c;echo -ne "\033]0;`TitlebarString`\007"'
    fi
fi
----< Titlebar 2c tip <----

# This is an add-on for du. It shows the total disk usage in bytes,
# kilobytes, megabytes and gigabytes (I thought terabytes wouldn't be
# necessary (: )

----< space <----
#!/bin/sh
BYTES=`du -bs 2> /dev/null | cut -f1`
if [ $BYTES -lt 0 ]
then
    KBYTES=`du -ks 2> /dev/null | cut -f1`
else
    KBYTES=`expr $BYTES / 1024`
fi
MBYTES=`expr $KBYTES / 1024`
GBYTES=`expr $MBYTES / 1024`
echo ""
if [ $BYTES -gt 0 ]; then echo " $BYTES bytes"; fi
if [ $KBYTES -gt 0 ]; then echo " $KBYTES KB"; fi
if [ $MBYTES -gt 0 ]; then echo " $MBYTES MB"; fi
if [ $GBYTES -gt 0 ]; then echo " $GBYTES GB"; fi
echo ""
----< space <----

# A scripty to unzip all zipfiles specified, or all those in the current
# directory, and remove the original ones (Remember that GNU zip/unzip
# doesn't support wildcards)

----< unzipall <----
#!/bin/sh
if [ $# = 0 ]
then
    ZIPLIST=`ls *.zip 2> /dev/null`
else
    ZIPLIST="$@"
fi
ZIPFILE="garbage"
for ZIPFILE in $ZIPLIST
do
    unzip -L $ZIPFILE
done
rm -f $ZIPLIST 2> /dev/null
----< unzipall <----

# Zip all the files in the current directory separately and wipe the
# original files. Zips them in a DOS style (i.e. hungry.txt would
# be zipped to hungry.zip and not hungry.txt.zip)

----< zipall <----
#!/bin/sh
function stripadd ()
{
    local dotpos=`expr index $1 "."`
    if [ $dotpos -gt 0 ]
    then
        dotpos=`expr $dotpos - 1`
        local stripfile=`expr substr $1 1 $dotpos`
    else
        local stripfile=$1
    fi
    echo $stripfile".zip"
}

function ziplist ()
{
    zipfile="garbage"
    for zipfile in "$@"
    do
        zip -9 `stripadd $zipfile` $zipfile
        rm $zipfile
    done
}

if [ $# -gt 0 ]
then
    ziplist "$@"
else
    ziplist `ls`
fi
----< zipall <----
Okay, now for some window manager tips. Since '95, Microsoft has been pushing its Windows 95 keyboard campaign, and in the process a lot of people (including me) have ended up with keyboards containing those silly, useless buttons. Luckily I've put them to good use. To give them the same functions in your window manager as in doze 95, just follow the instructions:
Edit ~/.Xmodmap and add the following lines:

keycode 115 = F30
keycode 116 = F31
keycode 117 = F32
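If your X session doesn't already read ~/.Xmodmap at startup, a line like the following in ~/.xinitrc or ~/.xsession (before the window manager starts) should load it:

xmodmap ~/.Xmodmap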
Now, edit your window manager configuration file and bind those keys. Here are the proper keybindings for fvwm95 and afterstep, respectively:
# Fvwm95 (edit ~/.fvwm2rc95)
Key F30    A    A    CirculateDown
Key F31    A    A    CirculateUp
Key F32    A    A    PopUp "Utilities"

# Afterstep (edit ~/.steprc)
Key F30    A    A    CirculateDown
Key F31    A    A    CirculateUp
Key F32    A    A    PopUp "HotList"
Just remember that PopUp "Utilities" and PopUp "HotList" should be replaced by your actual popup menus. If you don't know what I'm talking about, just browse through your configuration file and read the comments -- it'll become clear very soon.
I guess that's all for now. I've got some other (more useful) scripts and tips, but they are either system-specific or just too large to include here, and if I don't stop now, you'll need a separate issue just for my tips.
Cheers
ixion
Date: Wed, 23 Jul 1997 09:28:24 -0300
From: Mario Storti
Hi, RCS (see rcs(1)) is a very useful tool that lets you store versions of a file by keeping only the differences between successive versions. This way I can make a large number of backups of my source files with a negligible amount of storage. I use it all the time, even for TeX files!! However, when you are working with a set of source files (*.c, shell or Perl scripts; I work mainly with Fortran .f and Octave .m files), what you want is to back up the whole set of files in such a way that you can recover the state of the whole package at a given time. I know that there is a script called rcsfreeze around, but I know that it has problems; for instance, if you rename, delete or create files, it is not guaranteed to recover the same state of the whole set.
I found a solution that seems simpler and is working for me: I make a `shar' of the files and then keep the shar file under version control (see shar(1)). A shar file packs a set of text files into a single text file; it has long been used to send sets of files by e-mail.
It would be easy to write a script for this, but I prefer to include the shell code in a Makefile. The commands to be issued each time you want to make a backup are:
$ co -l source.shar
$ shar *.m Makefile > source.shar
$ ci -m"save package" source.shar
Here *.m and Makefile are the set of files that I want to back up periodically.
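To get back to an earlier snapshot, something along these lines should work (the revision number is only an example; plain shar archives unpack with sh or unshar):

$ co -p -r1.4 source.shar > old.shar    # check out revision 1.4 to a separate file
$ mkdir old-state && cd old-state
$ sh ../old.shar                        # unpack that snapshot of the package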
(I want to point out that RCS version control goes far beyond simply making backups: it also serves to manage files worked on by different people, etc. Here I'm using only a very small subset of RCS's capabilities.)
I hope this is of use to someone else. It would also be nice to hear of other solutions.
Mario
Date: Wed, 23 Jul 1997 15:53:31 -0500
From: Debie Scholz
If you have a PS/2-style mouse and /dev/psaux gets deleted, you must run MAKEDEV busmice; but it doesn't make a psaux, it makes a psmouse, so you must make a symbolic link to psaux.
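Roughly, assuming the device names mentioned above (check what MAKEDEV actually created on your system):

cd /dev
./MAKEDEV busmice
ln -s psmouse psaux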
Debie Scholz
Sirius Systems Group, Inc.
Date: Wed, 30 Jul 1997 08:35:46 +0200 (MET DST)
From: Werner Fleck
Hi!
I have read all the 2c tips on grepping files in a directory tree, but I think they all missed the ultimate tool for this: a perl script named ``mg''. With this you can:
Although it is written in perl, it is very fast. I have used it for many years now and it works wonderfully for me.
FTP search results
"Exact search" for "mg-2.16"
 1  -r--r--r--  38.8K  1996 Oct  2  ftp.nuie.nagoya-u.ac.jp    /languages/perl/sra-scripts/mg-2.16
 2  -rw-r--r--  38.8K  1995 Nov 16  ftp.et-inf.fho-emden.de    /pub/.mnt2/perl/sra-scripts/mg-2.16
 3  -rw-r--r--  38.8K  1996 Oct  3  ftp.hipecs.hokudai.ac.jp   /pub/LANG/perl/utashiro/mg-2.16
 4  -rw-r--r--  38.8K  1997 Mar  4  ftp.st.ryukoku.ac.jp       /pub/lang/perl/mg-2.16
 5  -r--r--r--  38.8K  1996 Oct  2  ftp.elelab.nsc.co.jp       /pub/lang/perl/scripts.sra/mg-2.16
 6  -r--r--r--  38.8K  1996 Oct  3  ftp.sra.co.jp              /pub/lang/perl/scripts/utashiro-scripts/mg-2.16
 7  -r--r--r--  38.8K  1996 Oct  3  ftp.sra.co.jp              /pub/lang/perl/sra-scripts/mg-2.16
 8  -rw-r--r--  38.8K  1995 Nov 16  ftp.fujitsu.co.jp          /pub/misc/perl/sra-scripts/mg-2.16
 9  -r--r--r--  38.8K  1996 Oct  2  ftp.eos.hokudai.ac.jp      /pub/tools/sra-scripts/mg-2.16
Greetings, Werner