"Linux Gazette...making Linux just a little more fun!"


The Answer Guy


By James T. Dennis, answerguy@ssc.com
Starshine Technical Services, http://www.starshine.org/


(?)Updates: Risks and Rewards

From Frits Hoogland on 08 Oct 1998

Hi almighty answerguy!

I'm a bit confused by all the updates of various system components (like the libc, gcc, etc, etc). Is it advisable to look at ftp.redhat.com for updates to my 5.0 system? Is it advisable to download a new kernel? Can I install, let's say, kernel 2.0.35 (which, as I noticed, nearly everyone uses) or are there things I have to consider, things I have to check, etc.?

(!)That's an excellent question. Using Red Hat's package management system (RPM) does make it faster and easier for most "mere mortals" (myself included) to upgrade most packages and install most new ones.
Debian package management is allegedly at least as good --- but it doesn't seem to be documented nearly as well, so it's harder to learn than RPM. (Hey, Debian dudes: if you write a DPKG/APT Guide for RPM users --- you might win more converts!)
Even Slackware's pkgadd (or is that pkg_add, it's been so long) is somewhat easier than the old "manly" way of upgrading your software (downloading the sources, and building them yourself).
Indeed, even that approach (building from sources) has improved quite a bit over the years, for most packages. The mark of a good modern package is that you can install it from source with the following sequence of commands:
tar tzf /usr/local/from/foo.tar.gz
# look at the contents; ensure that the archive creates
# its own directory tree and puts everything
# in there:
cd /usr/local/src
# extract the sources into our local source
# tree.  (You might need to do a mkdir and cd into
# that if your package extracts into the "current"
# directory.)
tar xzf /usr/local/from/foo.tar.gz
cd $package_source_top_level_dir
view README
# or 'more' or 'less' -- maybe different README's
# This should have some basic instructions
# which ideally will amount to:
./configure
# possibly with options like --prefix=/usr/local
make install
... Note that the really important parts of this are './configure' and 'make install'. After that, a good source package should be ready to configure and run.
(Note that the "configure" command in my examples is a script generated to perform a set of local tests and select various definitions that are to be used by the make file. This, in turn, tells the local system how to "make" the program's binaries, libraries, man pages and other files in a way that's suitable for your system --- and (with the commonly implemented "install" option or "target" in "makefile" terms) tells the 'make' command where to put things. There is a difference between "./configure"-ing the sources to be build and "configuring" the resulting package).
In any event, with RPM's you get the package (for your platform: x86, Alpha, SPARC, PowerPC, etc) and type:
rpm -i foo-$VERSION.$PLATFORM.rpm
... or whatever the file's name is. To upgrade a source package you follow mostly the same procedure as for a fresh source installation (usually saving any configuration files and/or data from the previous versions, and maybe moving or renaming all of the old libs and bins and other files). It would be nice if source package maintainers made upgrades easier by detecting the prior installed version and suggesting a "make upgrade" command --- or something like that.
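A sketch of the sort of pre-upgrade housekeeping I mean (the package name and paths here are just illustrations):

cp -a /usr/local/etc/foo.conf /usr/local/etc/foo.conf.save
# set the old binaries and libraries aside rather than
# letting the new 'make install' clobber them:
mv /usr/local/bin/foo /usr/local/bin/foo.old
mv /usr/local/lib/libfoo.so.1 /usr/local/lib/libfoo.so.1.old
# ... then configure, build, and install the new version, and
# merge your saved settings back in.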
To upgrade a typical RPM you just type:
rpm -U foo.....rpm
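Before and after an upgrade it's worth a couple of sanity checks; these switches are standard rpm fare (the package and file names below are hypothetical):

rpm -q foo                   # what version is installed now?
rpm -qpi foo-2.0.i386.rpm    # what am I about to install?
rpm -U --test foo-2.0.i386.rpm
                             # dry run: report conflicts, install nothing
rpm -V foo                   # afterwards: verify files against the rpm database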
There are similar commands for Debian, but I don't know them off the top of my head and they aren't handy to look up from this S.u.S.E. (RPM) system.
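(From what little I can check from here, the rough Debian equivalents appear to be something like the lines below --- but verify that against the real man pages before trusting me:)

dpkg -i foo_2.0-1.deb        # install or upgrade from a .deb file
dpkg -l foo                  # query the installed version
apt-get update && apt-get upgrade
                             # on newer systems that have APT installed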
(I'm sure I'll get flamed for the perceived slight --- oh well. Comes with the territory. Please include techie info and examples with the flames).
Now, when it comes to major system upgrades (like libc5 to glibc, and from a 1.x kernel to a 2.x kernel) it's a different matter.
If you have a libc5 system and you just install glibc onto it, there's no real problem. The two will co-exist. All of your existing libc5 programs will continue to load their shared libraries, and all of your glibc2 (a.k.a. libc6) linked programs should find the new library. Running a mixture of "typical" programs from both libraries will have no important effects (although you'll be using more memory than you would if all of your binaries were linked against the same libraries).
Notice I said "typical" --- when it comes to system utilities, like the 'login' command, there are some interactions that are significant and even incompatible. I've heard that the formats of the "utmp" and "wtmp" records (these are the user and "who" log files) are different between the two libraries; those files are accessed by a whole suite of utilities (like the 'who' and 'w' commands, the 'login' and 'xdm' commands, 'screen' and other utilities).
So, it's best to upgrade the whole system to glibc at once. (The occasional application, one that is not part of the base "system" and isn't "low level" that uses a different version/set of libraries won't be a problem).
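You can see which C library any given binary will pull in with the 'ldd' command; a libc5 binary lists libc.so.5 and a glibc2 one lists libc.so.6. The output below is illustrative, and the paths will vary by system:

ldd /bin/login
#       libc.so.5 => /lib/libc.so.5.4.46
# a glibc2-linked binary would show libc.so.6 instead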
With most recent kernels you can install the sources under /usr/src/linux and run the following command:
make menuconfig
(go through a long list of options to customize the kernel to your system and preferences)... or copy in your old .config file and type:
make oldconfig
... to focus on just the differences between the options listed/chosen for your previous kernel and the new one.
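Spelled out, that amounts to something like this (the old source tree's path here is just an example):

cp /usr/src/linux-2.0.33/.config /usr/src/linux/.config
cd /usr/src/linux
make oldconfig
# oldconfig re-uses every answer it finds in .config and only
# prompts for the options that are new in this kernel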
Then you'd type something like:
make clean dep install modules modules_install
... and wait awhile.
I've done kernel upgrades that were that easy. Usually I read the "changelog" and other text files, and the help screens on most of the new options (I usually also have to refresh my memory on a couple dozen old options).
These are major upgrades because they can affect the operation of your whole system.
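Which is why I like a safety net: keep the old kernel around and bootable. A sketch, assuming LILO on an x86 box (paths and labels are illustrative):

cp /boot/vmlinuz /boot/vmlinuz.old      # save the known-good kernel
# then add a second stanza to /etc/lilo.conf, something like:
#       image=/boot/vmlinuz.old
#               label=old
#               root=/dev/hda1          # your root partition here
/sbin/lilo      # re-run lilo so the new map takes effect
# if the new kernel misbehaves, type "old" at the LILO prompt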
Recently my father (studying Mathematica) needed a better video card. This was an old VLB (VESA Local Bus) 486 running Red Hat 4.1. So I decided to build a new system (Pentium 166, PCI video card, new 6Gb UDMA hard disk) and upgrade his system to Red Hat 5.1.
So, here's how I did that:
I build the new hardware, boot from a customized copy of Tom Oehser's "root/boot" diskette (http://www.toms.net/rb) and connect to the LAN using a temporary IP address that we have reserved for this purpose.
I then run fdisk on the new drive and issue a command like:
for i in 1 3 5 6 7; do mke2fs -c /dev/hda$i; done
(to make filesystems on all of the partitions: root, rescue root, /usr, /home and /usr/local). I go away to answer e-mail and get some coffee, getting thoroughly side-tracked.
A day or so later I remember to finish work on that (he reminds me that he has some homework to do).
Now I mount all of these filesystems the way I want them to be later (when I reboot from the hard disk). So I mount the new rootfs under /mnt/den (the machine's name is Deneb --- an obscure star), the new usr under /mnt/den/usr, and the new /usr/local under /mnt/den/usr/local (etc).
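In commands, that's roughly the following (using the partition layout from the mke2fs loop above --- your numbering will differ). Note that the mount points have to be created on the freshly made, empty filesystems as you go:

mkdir -p /mnt/den
mount /dev/hda1 /mnt/den
mkdir /mnt/den/usr /mnt/den/home
mount /dev/hda5 /mnt/den/usr
mount /dev/hda6 /mnt/den/home
mkdir /mnt/den/usr/local
mount /dev/hda7 /mnt/den/usr/local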
Then I copy his old /etc/passwd and /etc/group files into the ram disk (see below) and issue a command like the following:
rsh deneb "cd / && find . -print0 | cpio -o0BH crc " \
| ( cd /mnt/den && cpio -ivumd )
... this copies his entire existing system to the new system.
When that's done (it doesn't take long, but I don't time it --- it runs unattended until I get back to it), I edit the /mnt/den/etc/fstab, run a chroot command (cd /mnt/den && chroot . /bin/sh), fix up the lilo.conf file and run /sbin/lilo, and reboot (with the root/boot diskette removed).
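The fix-up step, spelled out (device names are illustrative):

vi /mnt/den/etc/fstab            # point it at the new partitions
cd /mnt/den && chroot . /bin/sh  # now "/" is the new disk
vi /etc/lilo.conf                # boot=/dev/hda, root=/dev/hda1, etc.
/sbin/lilo                       # install the boot loader on the new disk
exit                             # leave the chroot; then reboot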
Now I've replicated his whole system (and accidentally knocked his old machine off of our LAN, because I forgot to reset the IP address of this box). So, I fix that.
I make a last check to make sure that everything *really* did copy over like I wanted:
cd / ; rsh den " cd / && tar cf - . " | tar df -
... this sends all of his files back over the net again (this time using 'tar' instead of cpio), but the receiving copy just compares (diffs) the incoming file "archive" (from its standard input, a.k.a. the pipelined data) rather than extracting the files and writing them into place.
This reports a few differences among some log files and the /etc/ files that I modified, and it gives some errors about "sockets." (Unix domain sockets show up in your file tree as little filenames with an "s" as the leading character in 'ls -l' output; there are about five or six of these on a typical system: one for your printer, one for your syslog, and one or two for any copies of X Windows or 'screen' you may have run. These should not be confused with "internet domain" sockets, which only exist in memory and go through your IP interfaces.)
I presume that the tar diff feature simply doesn't support Unix domain sockets; it's probably a bug, but not a significant one to me.
A different bug, in 'cpio', is a bit irritating, and I have to report it to its maintainer. Remember how I copied over the old passwd and group files before the transfer? There's *supposed* to be an option to "use the numeric UID and GID" (-n or --numeric-uid-gid) in 'cpio' (and a similar one in newer versions of 'tar'). However, my copies (on several machines from several distributions around the house) all just give an error if I try to use that switch. Not a reasonable error message like "option not supported" or "don't do that you moron" --- just a stubborn insistence on printing the "usage" summary, which clearly shows these options as available!
The quickest workaround is to just copy the passwd and group files to the local system before attempting the "restore" (replication). One time when I failed to do this (using a version of 'tar' that didn't support the feature) it squashed the ownership of every file to 'root' --- resulting in a useless system. Luckily I was just playing around that time, so I learned to watch out for that.
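Since the rsh trust between the two boxes is already set up for the replication, that copy is just a couple of 'rcp' commands (hostname as in the examples above):

rcp deneb:/etc/passwd /etc/passwd
rcp deneb:/etc/group /etc/group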
So, now I just slip in my Red Hat 5.1 CD (courtesy of the IBM booth at last week's ISPCon in San Jose --- where they were giving them out). I think IBM's booth got them from the BALUG (Bay Area Linux Users Group) which is still trying to scrape up a few hundred bucks to pay for the 'free' booth they were offered. (Free means no fees to the convention co-ordinators; we went out of pocket for "renting" tables and chairs and paying for a power extension for the demo computer).
From there I just let the RH 5.1 upgrade process run its course.
What!?! All that work just to run the upgrade?
Yep!
I spent years in technical support (MS-DOS and Windows markets). I consider vendor and OS upgrades to be the most dangerous thing you can do on your computer. I'm sure they've caused more downtime and lost data than failed hard drives, virus infections, stupid users (other than the stupidity of blind updates), and disgruntled employees.
So, I like to make sure that I can get back to where I started. For me, that means "start with a copy of the system or a restore of the system's backups." I've been known to take a hard drive out of a system, install a new one, restore the backup to that, and then do the upgrade on the "new" system. (The old hard drive stays on a shelf until the data on it is so out of date that it wouldn't be worth copying back in --- then it gets rolled into some other system, or the same one, as "extra disk space".)
Now, please don't take this as a personal attack on Linux or Red Hat. So far they haven't failed me. I have yet to see an "upgrade" destroy one of my systems. However, my professional experience has proven to me that this is the right approach, even for one of my home systems.
In this case the upgrade was silky smooth. I had to fuss a little to get in a new X server (different video card, remember) and the new system really didn't like the PS/2 mouse that I gave it (which was of no consequence, since my father uses a serial port trackball; the mouse that I had on /dev/psaux was just a temporary one anyway).
Oh, yeah, I had to compile a new kernel to support the ethernet card I'm using (a Tulip-based Netgear). There was probably a module for it laying around on the CD somewhere --- but so what? It's a good test for the system.
At this point the old computer is sitting in the living room, and the new one is in his room running Mathematica. In a week or so (when we're really convinced that everything is really O.K. with the new box) I'll prep that old 486 as a server (my colocated server is due for an upgrade --- so this one will go in for it, and that one will become the spare and test bed).
I can understand how most users don't want to have whole systems around as spares. However, these days, it's not too expensive to keep an extra 6Gb hard drive laying around for these sorts of "major" upgrades. It's also a good way to ensure that your backups are working (if you use the "restore" method rather than a network "replication" trick).
Note that this whole process, as complicated as it sounds, only takes a little more "human" time than just slipping in the CD and doing it blindly. The system keeps pretty busy --- but I don't wait for it; I only issued ten commands or so (I have a couple of scripts for "tardiff" and "replicate" to save some of the typing).
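My actual scripts aren't shown here, but a "tardiff" wrapper needs to be no more than a sketch like this (the argument handling is assumed):

#!/bin/sh
# tardiff: compare a remote tree against the local one.
# usage: tardiff host [dir]
host=$1
dir=${2:-/}
cd "$dir" && rsh "$host" "cd $dir && tar cf - ." | tar df -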
For the daring, you can run a program called 'rpmwatch' (for Red Hat or other RPM-based systems) or 'autoup.sh' (for Debian). You point these at your favorite mirror and they will automatically install updated packages.
Of course this is "daring" --- I personally wouldn't consider it from any public mirror site. I have recommended it to corporate customers, where they point their client systems at an internal server and their staff posts rpm's after testing (limited automated deployment). This is a little easier for some sorts of upgrades than using 'rdist' and home brewed scripts.
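Even the home-brewed version of that idea is short. A sketch using rpm's "freshen" mode against an internal, already-tested package area (the mount point is assumed, and -F requires a reasonably recent rpm):

#!/bin/sh
# run nightly from cron: upgrade only the packages that are
# already installed, using rpms that staff have tested and
# posted to the internal server (NFS-mounted here).
MIRROR=/mnt/updates/RPMS
rpm -F -v $MIRROR/*.rpm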
In terms of upgrades --- my main "gateway" (router, server, mailhost, uucp hub, and internal web server) is an old 386/33 (it's about a decade old, has 32Mb of RAM, and has a single, full SCSI chain with a few Gig of disk space). It runs an old copy of RH 4.2, which is an upgrade (disk replication/swap method) from 3.03, which is an upgrade from Slackware 1.(something), which was an upgrade (wipe and re-install from scratch) from SLS something (running a 0.99p10 kernel).
I used to use that machine (antares) for everything (even X --- it has a 2Mb STB Powergraph video card that cost more than a new motherboard would have). However, these days I mostly use 'canopus' --- a S.u.S.E. 5.1 upgraded to 5.3 (blindly --- I was feeling lazy!). My wife mostly uses her workstation, 'betelgeuse' --- which came from VA Research with some version of Red Hat (read the review I wrote for Linux Journal if you're really curious) --- and was upgraded (new installation on new drive, copy the data) to S.u.S.E. 5.2.
So, you can see that we've used a variety of upgrade strategies around the house over the years.
As for installing a new kernel: do a good backup. Now ask: can I afford a bit of down time if I break the system (while I restore it)? If the answer is "yes" then go get a 2.1.124 (or later) kernel and try that. We're getting really close to 2.2, and only a few thousand people have tried the new kernels. So we want lots of people to at least try the current releases before we finally go to 2.2.
(Linus doesn't want to have 36 "fix" releases to the next kernel series).
The new kernel should be much faster in TCP/IP performance (already one of the fastest on the platform) and much, much faster in filesystem performance (using the new dcache system).
So, try the new kernel. Be sure to get a new copy of pppd if you use that --- the kernel does change some structure or interface that the old version trips on.
This upgrade will not be nearly as big a deal as the 1.2 to 2.0 shift (which was the most disruptive in the history of Linux, as far as I can tell --- the formats of entries under /proc changed, which broke the whole procps suite of utilities, like the 'ps' and 'top' commands). I haven't seen any such problems going from the 2.0 to the 2.1 kernels. (I'm running a 2.1.123 at the moment, on canopus; antares is running 2.0.33 or so --- it is the least frequently upgraded because it is the server.)

(?)Looking forward to your answer. Frits.


Copyright © 1998, James T. Dennis
Published in Linux Gazette Issue 34 November 1998

