installers
Hey!
Why is the RPM-based Linux install process so painfully slow? I have observed Red Hat/Fedora for more than 12 years now, and every time I do an install I see very short, intermittent DVD and disk activity. The rest of the time, I assume, is spent doing some CPU work. I am aware of the fact that packages have dynamic configs. Dependency resolution is another considerable CPU-bound computation, but it is (I guess) a one-time activity.
I had to reinstall both my Linux and my Windows installation last week.
I use Windows to do justice to the GPU powerhouse by feeding it some games. Windows 7 RC is now available on my school's MSDNAA account. Linux had to be reinstalled because I recently got a new hard drive and wanted to set up RAID for my /home and /.
Windows 7 installed in roughly 15 minutes. It took me another half an hour to manually install the rest of the stuff we usually use (except MS Office, because I do not have a licensed version and I did not have OpenOffice for Windows at the time).
Fedora 11, on the other hand, with a feature set similar to that of Windows, plus some dev packages, Eclipse, etc., took more than an hour. The numbers wouldn't differ much if I had used, say, Fedora 10.
I am aware of the fine-grained configurability in Linux, which is absolutely absent in Win7, but if I could make a custom spin of Windows with these apps built into the DVD, it would certainly take less than 45 minutes.
Installing individual packages will always be slower than a simple dump copy. But how much slower?
If Linux wants to compete in the Desktop market, this is one of the issues we have to look into.
Consider this a rant. I do not fully understand how RPM works, and I do not intend to start a flamewar or any such thing.
Re: installers
RPM is dead, dead, dead. DEB and Portage run circles around RPM, at least feature-wise. I never stop-clocked any of them, though.
</rant>
Every good solution is obvious once you've found it.
Re: installers
Hi,
I'd assume that for Windows (after hardware detection and disk formatting) the majority of the installation is pure file copying, and that no dependencies of any kind need to be checked.
For Linux, I'd assume that copying the RPMs to the local disk would take about the same amount of time as the file-copying part of a Windows install; but on top of that, all of the dependencies for every single package need to be checked, the installer needs to find the correct order to install things, and then the files for each package still need to be decompressed and installed. Because of this, I'd expect RPMs to take at least twice as long.
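To make the ordering step concrete, here's a minimal sketch in Python of deriving an install order from dependency information. The package names and the dependency table are invented, and real RPM/yum resolution also deals with versions, conflicts, and cycles; this is only the core idea:

```python
# Toy dependency-ordered install: a depth-first post-order walk
# installs each package only after everything it depends on.
# Package names and the dependency table are made up.

deps = {
    "eclipse": ["jre"],
    "jre": ["libc"],
    "gimp": ["gtk", "libpng"],
    "gtk": ["glib", "libpng"],
    "glib": ["libc"],
    "libpng": ["libc"],
    "libc": [],
}

def install_order(requested, deps):
    order, seen = [], set()
    def visit(pkg):
        if pkg in seen:
            return
        seen.add(pkg)
        for dep in deps[pkg]:   # visit dependencies first
            visit(dep)
        order.append(pkg)       # then the package itself
    for pkg in requested:
        visit(pkg)
    return order

print(install_order(["eclipse", "gimp"], deps))
# -> ['libc', 'jre', 'eclipse', 'glib', 'libpng', 'gtk', 'gimp']
```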
Note: I use Gentoo, which is far worse, as there are typically two additional steps per package: the autoconf configure script (which never caches anything and repeatedly detects everything that it already detected for all the previous packages, and is sometimes the most time-consuming part of installing a package), followed by compiling the package from source.
There are also usually a lot more dependencies in Linux. Something like an image manipulation program on Windows would be one package (with any DLLs, etc. that aren't part of a standard Windows install included in that package, and no need for extra packages), while on Linux each library that's used would be a separate package with its own dependencies; and at times you get a chain reaction - an attempt to install something simple pulls in a bunch of packages that all pull in a bunch more packages that all pull in a bunch more packages, and you end up with a list of 100 packages.
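That chain reaction is just the transitive closure of the dependency graph. A rough illustration, again in Python with invented package names:

```python
# Collect everything transitively required by one application.
# The dependency table is invented for the example.
from collections import deque

deps = {
    "photo-editor": ["libgui", "libimage"],
    "libgui": ["libfont", "librender"],
    "libimage": ["libjpeg", "libpng", "libz"],
    "librender": ["libz"],
    "libfont": [], "libjpeg": [], "libpng": ["libz"], "libz": [],
}

def closure(pkg):
    needed, queue = set(), deque([pkg])
    while queue:
        p = queue.popleft()
        if p not in needed:
            needed.add(p)
            queue.extend(deps.get(p, ()))
    return needed

print(len(closure("photo-editor")))  # -> 8 packages for one "simple" app
```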
To make this worse, there's no standardization - one application might use a library for something and another application might use a different library for the same thing, so you get two different libraries for the same thing (and all the dependencies for those two different libraries), for no practical reason whatsoever.
IMHO part of the dependency problem is that nobody actually writes software for Linux. Instead they write portable software for a range of different Unix clones (including different Linux distributions, and sometimes non-Unix OSs too), and leave the package maintainers for each different OS to sort out dependencies and clean up the mess in the best way they can.
prashant wrote:
If Linux wants to compete in the Desktop market, this is one of the issues we have to look into.

We?

From my perspective, Linux (and other OSs, like FreeBSD and OS X) are just tools to help break Microsoft's monopoly until my OS is ready...
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Re: installers
Brendan wrote:
To make this worse, there's no standardization - one application might use a library for something and another application might use a different library for the same thing, so you get two different libraries for the same thing (and all the dependencies for those two different libraries), for no practical reason whatsoever.

This problem impacts both (and all conceivable) systems - two apps using different libs for the same thing instead of sharing resources.
The only way around this is "strong standards", i.e. one library that is so feature-rich and well-written that there is simply no need for another.
The worst thing that can happen is having two "strong standards", which divide developer and user attention between them (Gnome and KDE spring to mind).
Every good solution is obvious once you've found it.
- Owen
- Member
- Posts: 1700
- Joined: Fri Jun 13, 2008 3:21 pm
- Location: Cambridge, United Kingdom
Re: installers
Just a side note you may wish to know: it's against the license agreement to use any software from MSDN for any purpose other than developing software. I think there's an exception for Office - but not for Windows.
- gravaera
- Member
- Posts: 737
- Joined: Tue Jun 02, 2009 4:35 pm
- Location: Supporting the cause: Use \tabs to indent code. NOT \x20 spaces.
Re: installers
Brendan wrote:
prashant wrote:
If Linux wants to compete in the Desktop market, this is one of the issues we have to look into.
We?
From my perspective, Linux (and other OSs, like FreeBSD and OS X) are just tools to help break Microsoft's monopoly until my OS is ready...
Cheers,
Brendan

Assuming mine doesn't come out first.
But really, I don't even bother myself with the weaknesses of Linux and Windows anymore: they're all just flawed creations. It's not a big deal. The whole 'package' system under *nix is almost ridiculous. It's so completely idiotic.
To install any one program, you have to have LibXYZ + FooUtils + GNUJunk, and all of them are different versions. Every program released for *nix requires a different version of the same library. Sometimes you have to have GCC v2, 3 and 4 installed on one PC. And not only that, but you may have GCC v2.X and v2.Y, as if they're different programs. NO version control.
I'd honestly say that the whole Linux package thing is backward. I understand the 'community' thing, and the one or two 'benefits' that ad-hoc development brings, but this downright irritating situation is caused by all of these distributed sources of software.
Why doesn't Torvalds standardize his distribution interface across Linuxes like he did with the kernel? In fact, why not just keep the whole thing in one central location permanently, and have people submit source to ONE location only? Or even better: keep it open source, but work on it with a dedicated dev team in one location. No more 'submitting source'. It's really dumb.
Version control... I learned a LOT from Linus when I first encountered the sheer frustration that is Linux package management. Whether APT or Synaptic, it's all the same: F.u.b.a.r.
17:56 < sortie> Paging is called paging because you need to draw it on pages in your notebook to succeed at it.
Re: installers
holypanl wrote:
Or even better: keep it open source, but work on it with a dedicated dev team in one location. No more 'submitting source'. It's really dumb.

So we get all the religious arguments that come with open source, but absolutely none of the practical benefits? Truly a wonderful idea.
- Member
- Posts: 524
- Joined: Sun Nov 09, 2008 2:55 am
- Location: Pennsylvania, USA
Re: installers
holypanl wrote:
To install any one program, you have to have LibXYZ + FooUtils + GNUJunk, and all of them are different versions. Every program released for *nix requires a different version of the same library. Sometimes you have to have GCC v2, 3 and 4 installed on one PC. And not only that, but you may have GCC v2.X and v2.Y, as if they're different programs. NO version control.
I'd honestly say that the whole Linux package thing is backward. I understand the 'community' thing, and the one or two 'benefits' that ad-hoc development brings, but this downright irritating situation is caused by all of these distributed sources of software.

GNU software does have issues with new versions breaking things (because stable APIs/ABIs are crap, right?), but most software has much less trouble with new versions. I think that package managers are a great idea, because they're an easy way to keep all your software up to date. Some Windows software will automatically download and install updates (e.g. Adobe), but a lot of my software gets outdated if I don't keep track of it. On Ubuntu (just an example; it's of course not the only distro with package management) I don't have that problem, because I can easily update everything in one step.
Package managers also do what they're meant to: make it easy to find and install software. Would you rather have to find and download LibXYZ + FooUtils + GNUJunk yourself? If package managers didn't exist, I would be pretty angry with the process of obtaining Linux software.
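That "update everything in one step" boils down to comparing each installed version against the repository index. A hedged sketch with invented version data (real package managers use far more elaborate version-comparison rules than plain tuple ordering):

```python
# List packages whose repository version is newer than the
# installed one. Versions are tuples, so tuple comparison gives
# a simple lexicographic ordering; the data below is made up.

installed = {"firefox": (3, 0, 11), "gcc": (4, 4, 0), "vim": (7, 2, 0)}
repo      = {"firefox": (3, 5, 0),  "gcc": (4, 4, 0), "vim": (7, 2, 148)}

def upgrades(installed, repo):
    return [pkg for pkg, ver in installed.items()
            if repo.get(pkg, ver) > ver]

print(upgrades(installed, repo))  # -> ['firefox', 'vim']
```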
Re: installers
holypanl wrote:
To install any one program, you have to have LibXYZ + FooUtils + GNUJunk, and all of them are different versions. Every program released for *nix requires a different version of the same library. Sometimes you have to have GCC v2, 3 and 4 installed on one PC. And not only that, but you may have GCC v2.X and v2.Y, as if they're different programs. NO version control.

A good package manager does handle that situation gracefully, either by having the app available in binary form already (DEB), or by handling the requirements automatically (Portage).
holypanl wrote:
Why doesn't Torvalds standardize his distribution interface across Linuxes like he did with the kernel?

Because Torvalds does not have power beyond the kernel (and I am not sure about the amount of power he still has within the kernel).
Linux is not an operating system. It's a kernel, surrounded by a hodgepodge of libraries, tools, and applications, written by many completely different people, some of them not even written with Linux in mind, held together with spit and duct tape by the distribution maintainers. The whole GNU toolchain (including autotools), for example, is older than Linux.
Every good solution is obvious once you've found it.
- Troy Martin
- Member
- Posts: 1686
- Joined: Fri Apr 18, 2008 4:40 pm
- Location: Langley, Vancouver, BC, Canada
Re: installers
Kabump!
I find RPM to be a pile. It's awkward, hard to work with, and even harder to maintain. DEB is a little better, but at times it can be a little strange. I've never really worked with ports and the BSD stuff, so no comment on those.
holypanl wrote:
To install any one program, you have to have LibXYZ + FooUtils + GNUJunk, and all of them are different versions. Every program released for *nix requires a different version of the same library. Sometimes you have to have GCC v2, 3 and 4 installed on one PC. And not only that, but you may have GCC v2.X and v2.Y, as if they're different programs. NO version control.

Or, as Solar said, just throw a pre-compiled binary for x86 into the file, do up another for x86-64, and so on.
Solar wrote:
Because Torvalds does not have power beyond the kernel (and I am not sure about the amount of power he still has within the kernel).

Torvalds is still the primary maintainer of whatever the current version of the unbranched Linux kernel is (2.6 as of writing). Not sure if he has power anymore, though.
Re: installers
Troy Martin wrote:
DEB is a little better, but at times it can be a little strange.

DEB is a wonderful package format to use. Building the packages is a right royal pain in the you-know-where, especially if you want to set up signing and so on.
- NickJohnson
- Member
- Posts: 1249
- Joined: Tue Mar 24, 2009 8:11 pm
- Location: Sunnyvale, California
Re: installers
JackScott wrote:
holypanl wrote:
Or even better: keep it open source, but work on it with a dedicated dev team in one location. No more 'submitting source'. It's really dumb.
So we get all the religious arguments that come with open source, but absolutely none of the practical benefits? Truly a wonderful idea.

And open source doesn't mean complete anarchy. Every project has a maintainer who is in charge of actually applying those submitted patches to the tree, and making sure they are of good quality. The whole BSD base system is developed together centrally, but the maintainers obviously accept patches. On Linux, the distributions are in charge of the base system, so there is a sort of standard API.