Hi,
Kevin wrote: Antti wrote: Are you wasting your time waiting for the actual compiling, or for the highly frequent starting and stopping of the compiler?
Spawning processes certainly does take some time. However, what you're really waiting for is still the actual compiling.
There's the overhead of checking which pieces need to be updated (make), plus the overhead of spawning processes (and doing dynamic linking, etc.), plus the overhead of finding all the files ("open()" and seek times), plus the overhead of destroying processes. It adds up. Compiling is fast (e.g. for some languages it's limited by disk I/O speed). What takes time isn't compiling, but optimising.
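If you want to see how much of that is pure process churn, something like this rough sketch (POSIX only; "/bin/true" is just a stand-in for whatever the build system keeps launching) times the raw cost of spawning a process and waiting for it:

Code:
/* Rough sketch: measure raw process-creation overhead by spawning a
 * trivial command many times.  "/bin/true" is just a stand-in; point
 * it at your compiler driver to compare against a real compile. */
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    const int runs = 200;
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    for (int i = 0; i < runs; i++) {
        pid_t pid = fork();
        if (pid == 0) {
            execl("/bin/true", "true", (char *)NULL);
            _exit(127);                     /* exec failed */
        }
        waitpid(pid, NULL, 0);              /* don't let spawns overlap */
    }
    clock_gettime(CLOCK_MONOTONIC, &end);

    double secs = (end.tv_sec - start.tv_sec)
                + (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("%d spawns took %.3f s (%.2f ms each)\n",
           runs, secs, secs * 1000.0 / runs);
    return 0;
}

Multiply the per-spawn cost by the number of times a build runs the compiler, the assembler and the linker, and it stops looking free.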
However, when you want to optimise, the "multiple files" method makes it impossible for the compiler to do it properly (whole program optimisation) and you end up with poor quality code. In an attempt to solve this, all modern tool-chains support some form of "link time optimisation"; but if you ask anyone who's used it they'll tell you it's extremely slow (especially for large projects) and that (because the compiler discards a lot of useful information before LTO occurs) it doesn't really help as much as it should.
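To make the "multiple files" problem concrete, here's a rough sketch (the file names and build commands in the comments are only illustrative; "-flto" is the GCC/Clang spelling of link time optimisation):

Code:
/* --- add.c --- */
int add(int a, int b)
{
    return a + b;          /* trivial; worth inlining into every caller */
}

/* --- main.c --- */
int add(int a, int b);     /* all main.c ever sees is this declaration */

int main(void)
{
    /* Built as separate objects ("cc -O2 -c add.c main.c") the compiler
     * only has the declaration here, so this stays a real call.  With
     * "cc -O2 -flto add.c main.c" code generation is deferred to link
     * time, the call gets inlined and the whole thing folds to a
     * constant - at the cost of redoing most of the work at link time. */
    return add(2, 3);
}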
Kevin wrote: Like handling dependencies and rebuilding only these "one hundred" source files that depend on the header file I am modifying instead of all "two hundred". It solves a problem that we should not need to have at all.
You mean because we could just always compile the one whole source file that was created by merging the 200 files - so we don't even have to think about compiling only half of it, because that isn't possible any more?
You're still failing to see anything beyond the lame and broken existing tools that were designed to work around ancient resource constraints (not enough RAM to have the compiler and all the source code in memory at once) instead of being designed to do the job properly. For existing tool-chains, object files are used as a kind of half-baked cache to store the "pre whole program optimisation" results of compiling individual files (note: "half-baked" due to the failure to make it transparent, and the inability to cache more than one version of each object file).

With a tool-chain designed for "single source file", there's no reason why you can't cache "pre whole program optimisation" results with much finer granularity (e.g. individual functions rather than whole "object files"). This can be done properly, so that it's transparent to the user (e.g. avoids the need for external "cache management tools" like 'make'), and it can also handle multiple versions simultaneously. In this way, if someone only changes a few functions you'd compile less with single-source than you would have with multiple source files; and if you regularly build one version with debugging and one without (or several versions with different options) you'd avoid a massive amount of recompiling each time you change from one configuration to another.
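As a rough sketch of the idea (none of this is an existing tool - the hashing and the names are just for illustration), the cache key for one function under one build configuration can be as simple as:

Code:
/* Hypothetical per-function cache key: hash the function's source text
 * together with the build options.  Unchanged functions keep the same
 * key (cache hit); each build configuration gets its own keys, so
 * switching between configurations never throws away cached work. */
#include <stdint.h>
#include <stdio.h>

static uint64_t fnv1a(uint64_t h, const char *s)
{
    while (*s) {
        h ^= (unsigned char)*s++;
        h *= 1099511628211ULL;             /* FNV-1a 64-bit prime */
    }
    return h;
}

static uint64_t cache_key(const char *function_source, const char *options)
{
    uint64_t h = 14695981039346656037ULL;  /* FNV-1a offset basis */
    h = fnv1a(h, function_source);
    h = fnv1a(h, options);
    return h;
}

int main(void)
{
    const char *fn = "int add(int a, int b) { return a + b; }";

    /* Same function, two configurations: two independent cache entries. */
    printf("debug   key: %016llx\n", (unsigned long long)cache_key(fn, "-O0 -g"));
    printf("release key: %016llx\n", (unsigned long long)cache_key(fn, "-O2"));
    return 0;
}

Change one function and only that function's key changes; switch from the debugging configuration back to the optimised one and you land on keys that are still sitting in the cache from last time.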
Kevin wrote: But you do know that these 6000 lines in the configure scripts aren't all nops? A typical configure script does much more - checking which libraries and functions are available, testing for platform-dependent behaviour, etc. - and is an important part of making programs portable?
Agreed - existing tools were "evolved" by morons who can't decide on "standard standards", resulting in 6000-line configure scripts (and hideous hack-fests like "autoconf") to work around the insanity. Any excuse to throw this puke out and establish a "standard standard" that avoids the need for "6000 lines of evidence of poor tool design" is a good excuse.
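For anyone who hasn't looked inside one of those scripts, the bulk of those 6000 lines boil down to probes roughly like this (a sketch - "conftest.c" is just autoconf's conventional scratch name), where a throw-away program gets compiled and the compiler's exit status decides whether a function or library exists on this particular system:

Code:
/* conftest.c - does this system have clock_gettime()?  The script runs
 * something like "cc conftest.c -o conftest" and checks the exit status;
 * then it repeats this for every other function, header and library the
 * project might depend on. */
#include <time.h>

int main(void)
{
    struct timespec ts;
    return clock_gettime(CLOCK_MONOTONIC, &ts);
}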
Note that this only applies to some tool-chains - more modern tool-chains (e.g. Java, .NET) do have effective standards and don't have this problem.
Kevin wrote: Antti wrote: Then we have this IDE with "a tree-like view of the program" and it is easy to see the big picture.
You've never worked on a big project, have you? Can you imagine what the tree-like view of the Linux kernel would look like? Or, because you like to say that kernels are different, let's take qemu, with which I'm more familiar anyway. (And that's still the kind of project that compiles in a few minutes, not hours or days like X, KDE or LibreOffice.)
Let's forget about software and think about building trucks. There are many kinds of trucks, varying by size, purpose and payload (e.g. flat-bed vs. tanker vs. refrigerated vs...). If you were a company building trucks, you'd identify common "truck pieces" (steering wheel, CD player, seats, engine, fuel tanks, chassis, etc.), establish a set of standards that describe how these pieces fit together, and then design several alternatives for each piece. That way, a customer can come to you asking for a truck and choose which of your pieces to combine to create their truck - maybe a diesel engine with an automatic gearbox, red seats with electronic adjustment and a flat-bed on the back; or maybe a natural gas engine with a 6-speed manual gearbox, cheaper seats and a "refrigerated box" on the back; or any combination of many possible pieces.
Now let's think about software. In fact, let's think about building an emulator like Qemu; but (just for a few minutes) assume we aren't typical code monkeys and are capable of *design*. We can start by identifying the types of pieces - we'd need a "CPU emulator" piece, a "chipset" piece, several different types of "PCI device" pieces, etc. Then we can design a set of standards that describe how these pieces fit together (I like my modules to communicate via asynchronous message passing, but you can use shared library "plug-ins" if you like). Now that we've got usable standards for how pieces fit together, I can write a "CPU emulator" piece that interprets instructions (like Bochs), you can write a "CPU emulator" piece that does JIT (like Qemu), and someone else can write a "CPU emulator" piece that uses Intel's "VT-x" hardware virtualisation. Someone else might write a "generic ISA" chipset piece, someone might write an "Intel 945 chipset" piece, and someone else might write an "AMD Opteron chipset" piece. More people might create a "SATA controller" piece, a "Realtek ethernet card" piece, an "OHCI USB controller" piece, etc.
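To give a rough idea of what "pieces communicating via asynchronous message passing" means in practice (everything below is invented for illustration - it is not how Qemu or Bochs actually work), a piece just posts a message into another piece's mailbox and gets on with its own work:

Code:
/* Hypothetical sketch: each piece owns a mailbox (a tiny ring buffer);
 * other pieces post messages to it and never block waiting for a reply. */
#include <stdint.h>
#include <stdio.h>

enum msg_type { MSG_IO_READ, MSG_IO_WRITE, MSG_IRQ };

struct message {
    enum msg_type type;
    uint32_t      port;      /* I/O port or register address */
    uint32_t      value;     /* payload for writes / replies  */
};

#define QUEUE_SIZE 64
struct mailbox {
    struct message queue[QUEUE_SIZE];
    unsigned head, tail;
};

static int post_message(struct mailbox *to, struct message m)
{
    unsigned next = (to->tail + 1) % QUEUE_SIZE;
    if (next == to->head)
        return -1;                 /* mailbox full; sender retries later */
    to->queue[to->tail] = m;
    to->tail = next;
    return 0;
}

static int get_message(struct mailbox *from, struct message *out)
{
    if (from->head == from->tail)
        return 0;                  /* nothing pending */
    *out = from->queue[from->head];
    from->head = (from->head + 1) % QUEUE_SIZE;
    return 1;
}

int main(void)
{
    struct mailbox chipset = { .head = 0, .tail = 0 };
    struct message m;

    /* The "CPU emulator" piece asks the "chipset" piece to do a port
     * write, without knowing or caring how that piece is implemented. */
    post_message(&chipset, (struct message){ MSG_IO_WRITE, 0x70, 0x8F });

    /* The "chipset" piece drains its mailbox on its own schedule. */
    while (get_message(&chipset, &m))
        printf("chipset: msg type %d, port 0x%X, value 0x%X\n",
               (int)m.type, (unsigned)m.port, (unsigned)m.value);
    return 0;
}

The "CPU emulator" piece doesn't know or care whether the thing draining that mailbox is an interpreter, a JIT or a thin wrapper around VT-x; as long as both sides agree on the messages, the pieces are interchangeable.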
The other thing we'd need is a "virtual machine management" piece for starting/stopping virtual machines and configuring them. To configure a virtual machine, you could have a set of icons representing all the different pieces of virtual hardware that have been installed, and the user can drag these icons into a work area to add the virtual hardware to their virtual machine.
Of course no single entity (company, person, group) would be responsible for implementing an entire emulator - it would be many different people who have each implemented one or more individual pieces that fit together. Some of the pieces might be open source, some might be closed source, they can be written in different languages, etc.
It doesn't have to be an emulator though. Something like "LibreOffice" could be a "spreadsheet front-end" piece, a "word processor front-end" piece, a "spell checker" piece, an "expression evaluator" piece, etc.; where several competing versions of all of these pieces are written by different people and are interchangeable.
The point is programmers shouldn't be writing applications to begin with. Designers should be designing standards for "pieces that work together" and programmers should be implementing these "pieces that work together".
If big projects are a problem for "single source file" (which I doubt), then that would be a good thing anyway, because big projects shouldn't exist to begin with.
Cheers,
Brendan