I have an interesting topic on my mind. How do you guys actually interpret documentation? How does your mechanism work? Do you just magically write code out of nowhere?
(Assuming you've heard little or nothing about the subject.) Any guidelines?
Other part of this topic:
@omarrx024 mentioned I should focus on things like multitasking and user-space. Which led me to this question. What should I do next? Without having to build dependencies. (e.g. I need to do something called X, but X requires Y and Z to be implemented already).
How many separate "modules" can I build independently without needing something I haven't implemented yet?
This is what I currently have:
memory category (GDT, PMM, VMM, Kernel Heap, some other abstractions), interrupts category (IDT, IRQ, ISR), timer category (PIT), input category (keyboard, PS2), output category (TUI), debug category (panic screen), hardware category (ports).
What is my next logical step? What to implement next "by the book" & without needing "Y" and "Z", but can do with what I have currently?
Btw, I've read a wiki page on OSDev stereotypes, and unfortunately I found myself to be one of them.
Interpreting documentation & what's next?
OS: Basic OS
About: 32 Bit Monolithic Kernel Written in C++ and Assembly, Custom FAT 32 Bootloader
Re: Interpreting documentation & what's next?
Octacone wrote:I have an interesting topic on my mind. How do you guys actually interpret documentation? How does your mechanism work? Do you just magically write code out of nowhere?
(Assuming you've never heard anything about the subject (or not a lot)). Any guidelines?

There's probably no single way; rather, a combination of multiple things:
- Learn to work with documentation. Use search within the document. Locate all the places where the subject is mentioned. Note its abbreviations, acronyms, synonyms, and word forms (you may need or want to expand this to other words found in the context, e.g. verbs, not just nouns). Read those paragraphs and chapters, but don't expect to understand everything on the first pass; you'll be going back to the same text. Work from there in depth and breadth to see what the subject builds on and depends on (or what it enables). Recursion!
- As you go through the text, write down relevant discoveries (probably include references to where you found them). Write down important facts and questions for further reading/analysis/etc (TBD's, unmarked checkboxes with work items and such). Structure this info (a document with internal links (and URLs too) would be great; OneNote or something similar may be useful to organize your notes). Apply search to your notes too, so you don't miss anything important in what you've collected.
- When you find something unclear, look for other sources: alternative documentation (e.g. intel vs AMD on x86 CPUs), online articles, books, forums, existing code.
- Experiment.
Octacone wrote:Other part of this topic:
@omarrx024 mentioned I should focus on things like multitasking and user-space. Which led me to this question. What should I do next? Without having to build dependencies. (e.g. I need to do something called X, but X requires Y and Z to be implemented already).

You can't avoid dependencies. But you may build and test some parts out of order if you define the right interfaces and fake some dependencies.
Octacone wrote:How many separate "modules" can I build independently without needing something I haven't implemented yet?

Get a book on OS design/implementation to get an idea of what an OS consists of and what it can take advantage of in terms of CPU/hardware functionality. Then dive into the details of MM, interrupts, system calls, scheduling, etc. applicable to your hardware.
Re: Interpreting documentation & what's next?
That's a rather unusual question from you. I mean typically beginners ask that, and you have proven that you are capable of dealing with OS dev.
Anyway, here are my answers:
First I read the documentation, once or maybe more times, to get the big picture. Once I have that, I start experimenting, doing smaller or bigger PoCs. From time to time I pick up the documentation again and check the details.
About the order: I think there's no common answer here. I have seen a lot of systems doing things differently. I suggest just starting; when you hit a dependency, work on that, and then return. Sometimes (in the microkernel world more often than one would anticipate) you'll face chicken-and-egg scenarios. In those cases you'll have to find a workaround; there'll be no "optimal order" at all. For example, you'll need to allocate memory to store the free-memory list, but you cannot do that until you have a list of free memory to allocate from... If you choose a different path, there's a chance you'll never face that problem at all. It's up to you. No one can foresee which problems need solving in YOUR design. Another example: you can develop a storage driver before you develop filesystems (common for monolithic kernels). With microkernels, you'll most probably implement filesystems first, in order to load the storage driver from a ramdisk.
For you, a logical step would be to implement filesystems, but that's just a suggestion, at the end of the day it's totally up to you.
The path I've chosen (I have to stress that it's my personal preference, you don't have to follow it, it's nothing more than an example):
1. milestone: build environment, bootloader, mkfs tool (to create the initrd), hello-world kernel, bootloader tests (here mkfs precedes the kernel because in my design the kernel is located inside the initrd)
2. milestone: kprintf, early filesystem support, load services, parse executable format (as I have a microkernel), tasks, cooperative multitasking.
3. milestone: panic screen, interrupts (as ISR stubs call panic), pmm, timer, scheduler, preemption, messaging.
Here I faced the lack of mmap (to map the initrd into the FS service's memory and the video buffer into the video driver's memory).
4. milestone: vmm (mapping), input/output devices: PS2 keyboard, mouse, lfb driver, UI service, libc.
Here I found that further UI development depends on malloc (to allocate window buffers and the block cache) and on filesystems (to load icons).
5. milestone: malloc implementation in libc, mounting file systems
Here I went back and finished the initialization sequence of my OS (using cooperative multitasking during init, and switching to preemption when the idle task is first scheduled). I've also modified my design, which originally had one task for all drivers, to use a separate task for every driver, as I could route IRQs more effectively that way.
Now I've found that file operations depend on the cache, which in turn lazily depends on storage drivers. (I had to take into account that I want an async mechanism to load blocks from disk into the cache, but for now I'm fine with reading blocks only from the initrd, eliminating a layer in the first implementation.)
Further roadmap: file and directory operations, window management, rescue shell, storage drivers, first beta release. As you can see, I have a basic plan, but I often change it as dependencies show up along the way. It's also possible to partially fulfill a dependency (like I did with ramdisk/storage devices). Although I try to do things the right way the first time, that's not always possible, so I sometimes end up with a quick-and-dirty-but-works solution and later go back to correct it. To keep track of those things, I use comments in my code, and I use a script to create a TODO list. When I'm in doubt about what to do, I pick one from that list.
Re: Interpreting documentation & what's next?
One of the most effective ways is advancing in whatever you can, no matter if it takes years to gather the right information. Then present what you know and ask about the details you don't know. If you need to ask, it's doubtful that you could move faster than that anyway. Once you get the missing details, you still have to spend enough time producing the additional code in a clean way. That lets you complete what you lack without turning off your creativity: you take only what you are clear you need (and what you would figure out without help sooner or later anyway, in case nobody can help). That, in turn, is what lets you think of more and more complex things as the need arises, and implement them no matter how limited the information on a subject is.
The best documentation is the code itself, but only when it's written in a human-readable way: read by a human repeatedly until it becomes increasingly clear, simple, and well commented, without mutilating what it's supposed to do.
Abundant documentation comes after someone understands something so well, and in such a well-tuned way, that writing down the details becomes a gain.
Describing an algorithm step by step in plain verbal terms, and writing comments before code as if the program were an executable document, also helps bring your programming to the same intuitive yet complex level as using the programming language itself: human-language expressiveness on top, with the code implementing mathematically only what the computer needs to model the description. The result is a clear snapshot of human actions reproduced as in a movie, with logic that takes the same actions we would, for example to detect a problem, using the same tools, tests, and timing.
Everyone wants to understand things, and one might think that writing some documentation will supply them with what they want. But the fact is that documentation is much better when it's based on code we write. People want things that work, so what people really want is full code with full documentation explaining how it works. That takes time and work, so the best approach is to start by publishing code snippets integrated into a system, adding something new here and there, at whatever speed the developer can manage.
_______________________________________________________
About the main issue of the question...
We as humans already have a set of common things and concepts in our minds that help us do anything, among them making programs of a certain complexity. Those common things are already fully formed in our minds, so we have no problem using them, and we call them easy.
But there are other concepts, like memory paging, memory management, game-like or window-like graphical environments, data sorting, and compression, that we need to build fully in our own minds. Nobody can really help us unless they write down the detailed mental process, from start to end with results, of how to do something. Very few documents, books, or codebases reflect that, but their authors are the ones we call good.
That's why we study, and that's why there are some people who are able to do more in those areas faster than others. They already have those things fully created in their minds, and will probably have the help of other knowledgeable people to build parts of that mental structure faster.
But the point is that everything starts by trying out ways of doing things that come from ourselves. We just need to try and do our best to implement something. Then, as time passes, we become able to figure out external information.
But trying to learn from external resources, picking up where the whole mental structure of other authors left off, is like trying to understand a movie saga without watching it all: not giving yourself time to figure things out, relying only on what others say, understanding it only partially, and staying mostly stalled for years while time passes without fully engaging your own brain and skills.
For example, ever since I studied programming formally, I wanted to know how to make Windows programs; I just knew I needed to include WINDOWS.H. Many years later, I noticed that scripting languages like JavaScript and PHP let you use all the functions of the language without linking anything. So I thought I could make an EXE skeleton containing only the standard functions present from Windows 9x/98 up to Windows 10, such that the same program could run on all of those systems, using the DLL exports obtained with PEDUMP.EXE (wheaty.net) for the core system DLLs, DirectX DLLs, common dialogs and controls, MFC, and any other fully standard DLL present universally on any Windows system. Then I could call any function as if it were a script, without thinking about importing or dynamically linking anything: the whole WinAPI would already be included in my program. But I could only think of that after my own creative work of learning to build PE EXEs fully in assembly without a linker, and after noticing how the most popular, easy-to-use web scripting languages give access to their whole API with zero configuration. That is one of the things that make those languages so helpful for learning complex programming, and it makes implementing programs for real-world tasks easier than in C++. By including the whole WinAPI in all my Windows programs without exception, I would get the same benefit as those popular scripting languages, and would only need to worry about writing my programs as if they were HTML5.
The main things everyone will notice, and that will make people consider a system interesting, are whether you make all of these device types work:
- Screen. What appears in it is the first thing people will notice, and usually the last step your program will take for the user.
- Keyboard and mouse.
- Disk.
- Sound.
- Network.
Those are the things you have to make work normally for yourself and the user, whatever it takes. The user doesn't distinguish between standards, file systems, or other internally different things with the same function; they just care about being able to access their devices and information readily. So the things above are the very first ones that must work in an interesting system.
I've thought about how, for example, multitasking came to be. It actually started as several programs loaded simultaneously, with or without paging, where only one of them was active. The user could choose which one to activate at any time, and there were also programs that reacted to some events by themselves. From there, and from cooperative multitasking where a program can choose to let another program run, you just need to make the kernel iterate over the loaded programs.
Windowing was another thing that started from a console-only environment. At first there were text-mode window frames that could be dragged, gradually reimplemented in graphics modes. It was something the developers had to think through and explore by themselves. They had to hold the whole picture of how to do it and investigate the best ways to implement such a system efficiently. Games may have been the best help they got, but they had to learn that too, and try to do it their own way, to grasp the full logical path toward a system that was initially unknown to them.
YouTube:
http://youtube.com/@AltComp126
My x86 emulator/kernel project and software tools/documentation:
http://master.dl.sourceforge.net/projec ... 7z?viasf=1
Re: Interpreting documentation & what's next?
@alexfru @bzt @~
Thanks for replying, I was able to collect some valuable information from this.
I've noticed that there are many different paths that people take when it comes to their OS specific timeline.
For example my plan is somewhat similar to @bzt's but in a very different order. Also I found out that using a TODO list is very helpful. +1 for that.
Looks like I will just have to deal with the dependencies. Sooner or later they will show up somewhere.
@alexfru OneNote looks very handy and integrates well with the browser; I wish there were a Linux version of it. Microsoft is really pushing Windows too hard!
Experimenting sounds like something I should do more of, rather than waiting for something to magically appear in my head.
@~ I hope it won't take that long for me to understand a document. Is there an example of your draggable text-mode windows? I would really like to check it out.
Re: Interpreting documentation & what's next?
Octacone wrote:@~ I hope it wont't take that long for me to understand a document. Is there an example of your draggable window text mode thing, I would really like to check it out.

An excellent example is Turbo C++ 1. It even has text hyperlinks, but I think it's closed source.
In the meantime, the most common open-source program is FreeDOS EDIT, which uses text-mode windows to open several files at the same time; it can drag, resize, maximize, minimize, overlap, and scroll them:
http://www.ibiblio.org/pub/micro/pc-stu ... /dos/edit/
The best way to learn from it would be to fill the whole program with debug functions called all over the code we want to study (say, WriteBookFromProgram(FILE *documentHandle, char *message, long messageSize, int messageType, ...)) to generate structured, human-readable text plus hypertext/bitmap or text-screen snapshots as the program executes, producing an ordered document. Then recompile, use the program normally, and inspect the resulting document, with the sequences laid out as if it were a high-quality, automatically generated book: learning from actual code instead of books too far removed from real-world implementations.
It's strange that end-user programs don't produce a large set of demos, for example of text-mode windows; or maybe they do and I don't know how to find such demos.