Binary vs. source code
Re: Binary vs. source code
iansjack wrote: Just as a matter of interest, do you guys who are advocating binary-only releases use a cross-compiler to produce your OS binaries?
I have/do.
Every universe of discourse has its logical structure --- S. K. Langer.
Re: Binary vs. source code
And you had no problem finding a pre-compiled binary of your cross-compiler with the appropriate host and target and all the options you required?
As a secondary question, would you write an assembler and compiler from scratch for your OS or would you consider porting the GNU ones? (Same question, obviously, holds for all other tools and utilities.)
Re: Binary vs. source code
iansjack wrote: And you had no problem finding a pre-compiled binary of your cross-compiler with the appropriate host and target and all the options you required?
No. No problems. Chip manufacturers or kernel suppliers often have cross-compilers for sale. I've used DIAB C for PowerPC, Metrowerks CodeWarrior for ARM, and assemblers released as binaries for various microcontrollers. I don't think any of those tools were distributed with source code.
iansjack wrote: As a secondary question, would you write an assembler and compiler from scratch for your OS or would you consider porting the GNU ones? (Same question, obviously, holds for all other tools and utilities.)
In my current project I am actually writing a compiler which will be used for OS development.
I wouldn't port any GNU tools unless a client was paying for it. I just don't see any of the languages supported by GCC in my future personal projects.
Every universe of discourse has its logical structure --- S. K. Langer.
Re: Binary vs. source code
iansjack wrote: I like the idea that a user can modify my code in whatever way they see fit, and can configure the compile to produce a binary suited to their exact needs. And I like to think that many users are capable of doing that.
This is good, and I think most of us agree on it. However, it still implies that we use standard tools to build the binary.
iansjack wrote: Just as a matter of interest, do you guys who are advocating binary-only releases use a cross-compiler to produce your OS binaries?
I am not sure whether I want to be a guy "who is advocating binary-only releases", because it sounds bad, whether it is true or not...
iansjack wrote: would you write an assembler and compiler from scratch for your OS or would you consider porting the GNU ones?
I discarded the whole concept of being dependent on GNU. Currently I use NASM, but my source code is carefully written so that it will be easy to write my own assembler later. I am not a fan of assembly, so I am planning to have my own high-level language and a compiler too. In the distant future, I will have a system that is really stand-alone. I am not expecting much from it, but it is more interesting to me than the "Unix-clone kernel and port all GNU software" OSs.
Re: Binary vs. source code
Antti wrote: I am not a fan of assembly so I am planning to have my own high-level language and a compiler too. In the distant future, I will have a system that is really stand-alone. I am not expecting much from it ...
Why are your expectations so low? Once you start language development you quickly realise that as soon as you get functions(*) and variable scope working, you've got something good enough for pretty much anything. Everything above and beyond this is there to make your life easier. Also, you simply don't need the world's most optimising compiler to write responsive systems --- microprocessor designers make life easier for us. I reckon you'll surprise yourself when you get going.
*) Here I mean functions, procedures, predicates, words, ..... whatever the language calls procedural abstractions.
Every universe of discourse has its logical structure --- S. K. Langer.
Re: Binary vs. source code
Well, I certainly wish you guys luck in triumphing where major companies such as Apple, Sun, and IBM have taken the pragmatic, if easy, way out. Myself, I am not so ambitious and don't feel the need to reinvent the wheel. I am more than happy to build on the work of others when appropriate.
Re: Binary vs. source code
iansjack wrote: Well, I certainly wish you guys luck in triumphing where major companies such as Apple, Sun, and IBM have taken the pragmatic, if easy, way out. Myself, I am not so ambitious and don't feel the need to reinvent the wheel. I am more than happy to build on the work of others when appropriate.
Well, I'm confused. Can you explain your point in another way?
Every universe of discourse has its logical structure --- S. K. Langer.
Re: Binary vs. source code
iansjack wrote: Myself, I am not so ambitious and don't feel the need to reinvent the wheel. I am more than happy to build on the work of others when appropriate.
All we can do here is to have a general discussion about this, because we cannot say what someone should or should not do. If you are more than happy to build on the work of others, that is perfectly fine.
We have had this kind of discussion before, but I am still wondering about the "reinvent the wheel" statement you made. It is hard for me to think of anything that is more "reinventing the wheel" than making an OS that follows the same path as all the others. Also, I do not think someone needs to be particularly ambitious to do something else instead.
Re: Binary vs. source code
I certainly wouldn't wish to prescribe what anyone should do. I'm merely expressing my opinion of what I find most useful in the software I use (and, by extension, what I should fairly supply to others).
Without GNU software I would never have been able to get as far as I have. I simply do not have the capability, or rather the time, to produce from scratch an assembler, compiler, and full toolchain that would be capable of producing an operating system. And this is the course that large companies have taken when producing operating systems. None of them write everything from scratch. Apple, for example, are quite happy to port GNU utilities to OS X.
Using GNU software does not have to mean that you are producing just another POSIX clone. I would like to think that there is still considerable room for creativity.
I have serious doubts that it is possible for one person, or a small team, to produce a useful operating system from scratch for which they can provide binaries that will support the full gamut of current (and future) hardware. Perhaps I lack ambition, or perhaps it is just realism. But the truth is that I see no examples of such operating systems today.
Re: Binary vs. source code
Hi,
From my perspective; source code is only really useful for developers, means larger downloads for users, is slower to install, tends to have more dependencies (the tools used become dependencies), and can be a serious barrier for commercial application and device driver developers who (whether we like it or not) prefer closed source. On the other hand, pre-compiled binaries are inflexible - they can't be portable and can't be optimised for the specific computer; and this leads to having different binaries for different architectures where each binary only uses generic optimisations (e.g. to avoid having several different binaries for 32-bit 80x86 alone, you might not use MMX/SSE/AVX and sacrifice a lot of performance for the sake of reducing the hassle of having a very large number of different binaries to worry about).
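As an aside on that trade-off: one common middle ground on 80x86 is to ship a single generic binary that picks a better code path at run time. Here's a minimal sketch, assuming GCC or Clang on x86 (the function names are made up purely for illustration):

    #include <stdio.h>

    /* Two versions of the same routine: one generic, and one the compiler
       is allowed to build with AVX instructions. */
    __attribute__((target("avx")))
    static long sum_avx(const int *v, long n)
    {
        long s = 0;
        for (long i = 0; i < n; i++)
            s += v[i];
        return s;
    }

    static long sum_generic(const int *v, long n)
    {
        long s = 0;
        for (long i = 0; i < n; i++)
            s += v[i];
        return s;
    }

    int main(void)
    {
        int v[8] = {1, 2, 3, 4, 5, 6, 7, 8};

        __builtin_cpu_init();                /* fill in the CPU feature flags */
        if (__builtin_cpu_supports("avx"))   /* choose the code path at run time */
            printf("%ld\n", sum_avx(v, 8));
        else
            printf("%ld\n", sum_generic(v, 8));
        return 0;
    }

Of course this only covers one hot routine at a time; it doesn't give you a binary that is genuinely optimised for the specific computer as a whole.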
Distributing software as some sort of intermediate code solves almost all of the problems.
To be effective; source code would have to be compiled to intermediate code and pre-optimised (e.g. dead code elimination, constant folding, common sub-expression elimination, etc) to minimise the size of the distributed intermediate code and reduce the work that the "install time" compiler needs to do as much as possible (and minimise the time "intermediate code to native" compiling takes). The source code to intermediate code compiler would also need to detect all potential errors; because you don't want these errors being found in the "intermediate code to native code" step and do want the developer to be confident their intermediate code will work for all users/architectures. Note: this is a lot harder than it sounds - e.g. "unsigned int x = 6 - sizeof(void *);" should be detected as an error by the source code to intermediate code compiler, even though it works fine for some targets.
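To make that last example concrete, here is a minimal sketch of why it is target-dependent (nothing beyond standard C is assumed):

    #include <stdio.h>

    int main(void)
    {
        /* 6 - sizeof(void *) is computed in size_t (unsigned) arithmetic.
           On a 32-bit target: 6 - 4 == 2, as intended.
           On a 64-bit target: 6 - 8 wraps around to a huge unsigned value,
           which is then truncated to UINT_MAX - 1 when stored in x. */
        unsigned int x = 6 - sizeof(void *);
        printf("sizeof(void *) = %zu, x = %u\n", sizeof(void *), x);
        return 0;
    }

The "source to intermediate code" compiler has to flag this even though it can't know which of those targets the "intermediate code to native" step will eventually see.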
If we assume intermediate code is the "least worst" option; then the next problem is finding existing tools that can be used like this. LLVM would probably be the only likely candidate. The only other option is writing your own.
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Re: Binary vs. source code
iansjack wrote: Without GNU software I would never have been able to get as far as I have. I simply do not have the capability, or rather the time, to produce from scratch an assembler, compiler, and full toolchain that would be capable of producing an operating system.
You can build software using third-party proprietary tools; they don't have to be GNU. If GNU didn't exist you could still do it. I've used GCC for C development on Linux and for OS dev on x86 and PowerPC, and I've used nothing GCC-specific.
iansjack wrote: And this is the course that large companies have taken when producing operating systems. None of them write everything from scratch. Apple, for example, are quite happy to port GNU utilities to OS X.
Many of them licensed Unix System V (IBM: AIX, Sun: Solaris, HP: HP-UX, Microsoft: Xenix, Silicon Graphics: IRIX), and I believe all of them provided non-GNU toolchains. From that list I only have direct experience with Solaris, HP-UX, and IRIX. IIRC the non-System V Unix Tru64 also had a non-GNU compiler; I definitely remember its C++ compiler's error reporting being streets ahead of GCC's.
iansjack wrote: I have serious doubts that it is possible for one person, or a small team, to produce a useful operating system from scratch for which they can provide binaries that will support the full gamut of current (and future) hardware. Perhaps I lack ambition or perhaps it is just realism. But the truth is that I see no examples of such operating systems today.
You're making assumptions about hardware support. Some of us have no desire to support anything other than one well-defined hardware platform.
Every universe of discourse has its logical structure --- S. K. Langer.
Re: Binary vs. source code
Hi,
iansjack wrote: Well, I certainly wish you guys luck in triumphing where major companies such as Apple, Sun, and IBM have taken the pragmatic, if easy, way out. Myself, I am not so ambitious and don't feel the need to reinvent the wheel. I am more than happy to build on the work of others when appropriate.
Imagine children riding tricycles in a playground. Some won't care where they're going. Some will head to specific destinations. Some will follow others.
OS developers who are only doing it for fun/education are like the children who don't care where they're going. There's nothing wrong with that (fun and/or education are worthy goals, and it doesn't matter where you end up if you enjoy the ride).
OS developers who spend time doing research, design and implementation are like the children who head to a specific destination. There's a massive advantage being the first to get somewhere specific (e.g. no competing OSs for that niche). If you're trying to make an OS that other people will actually want to use, then this makes a lot of sense.
OS developers who use existing tools and techniques to implement things they've seen in existing OS designs (e.g. *nix clones) are like the children who follow others. They will never be the first to get anywhere because they are following someone that is moving faster and/or had a huge head start. If you're trying to make an OS that other people will actually want to use, this is completely pointless.
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Re: Binary vs. source code
Brendan wrote: OS developers who use existing tools and techniques to implement things they've seen in existing OS designs (e.g. *nix clones) are like the children who follow others. They will never be the first to get anywhere because they are following someone that is moving faster and/or had a huge head start. If you're trying to make an OS that other people will actually want to use, this is completely pointless.
Fine words, but I can't help thinking "What about Linux? What about OS X?" Both are operating systems built with existing tools, implementing things seen in existing OS designs. And both, I would venture to say, are OSs that other people want to use. The truth is that I can't think of many other OSs that people do want to use (not in numbers anywhere near as large as these two) - except, of course, for Windows.
Re: Binary vs. source code
I do not know what to say about the inflexibility of architecture-dependent binaries. Yes, that is true: they are inflexible. If I were to write this thread again, I would change the definition of "binary" to "non-human-readable packed release format" rather than the actual machine code executable (however, I did mention byte code in my third post).
Re: Binary vs. source code
We still have the question that I raised earlier, but which everyone seems to be studiously avoiding: producing binaries that are tailored to a particular user's needs. A good example would be a highly configurable program like Apache. With source code you can produce a version configured to your exact needs, which will be a nice lean, efficient binary. Without the ability to configure the binary, I think we are left with the Windows model of huge do-it-all programs that can be configured to do what you want but contain loads of code that any given user will never need. I believe this leads to the phenomenon known as "bloatware"; the very opposite of what I would like to produce.
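In practice that kind of build-time configuration mostly boils down to conditional compilation. A minimal sketch of the idea (the feature names here are made up, not Apache's actual options):

    #include <stdio.h>

    /* Hypothetical feature switches; a configure script would normally
       define these on the compiler command line (e.g. -DWITH_TLS=1)
       according to what the user asked for. */
    #ifndef WITH_TLS
    #define WITH_TLS 0
    #endif
    #ifndef WITH_PROXY
    #define WITH_PROXY 0
    #endif

    #if WITH_TLS
    static void serve_tls(void) { puts("TLS support compiled in"); }
    #endif

    #if WITH_PROXY
    static void serve_proxy(void) { puts("proxy support compiled in"); }
    #endif

    int main(void)
    {
        puts("core server");
    #if WITH_TLS
        serve_tls();        /* only exists in builds configured with TLS */
    #endif
    #if WITH_PROXY
        serve_proxy();      /* only exists in builds configured with proxying */
    #endif
        return 0;           /* disabled features never make it into the binary */
    }

A binary-only release has to ship with every such feature switched on, because the publisher cannot know in advance which ones any given user will need.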
Another good example is the Linux kernel. That provided with most distributions is a bloated version, with many options enabled that I don't need and a host of modules that will never see the light of day but just lurk hidden somewhere on my hard disk. On the other hand, my Gentoo install has a kernel compiled specifically for my hardware and my needs. There are very few modules as the hardware I need to support is compiled into the kernel; I know I need it, so let's not piss about with modules. It loads far quicker than the bloated, monolithic version, it is more efficient, and it knows how to make the most of my processor - it knows what processor it is going to run on, so it knows exactly what registers and instructions are available and how to optimize the code. It is, without doubt, a better kernel than that supplied with - say - Ubuntu.
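The same effect shows up with ordinary user code. A small sketch, assuming GCC, where identical source compiles to quite different machine code depending on whether the compiler is told the exact target CPU:

    #include <stdio.h>

    /* With a generic build (e.g. gcc -O2) the compiler cannot assume the
       CPU has POPCNT, so it emits a portable bit-twiddling sequence or a
       library call here. Built for the exact machine (e.g. gcc -O2
       -march=native on a POPCNT-capable CPU), the same line becomes a
       single popcnt instruction. */
    static unsigned bits_set(unsigned long long x)
    {
        return (unsigned)__builtin_popcountll(x);
    }

    int main(void)
    {
        printf("%u\n", bits_set(0xF0F0F0F0F0F0F0F0ULL));   /* prints 32 */
        return 0;
    }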