
Some thoughts and concepts to share on kernel design...

Posted: Thu Apr 07, 2011 12:31 pm
by naf456
(BEEN...READING...TOO...MANY...S****....MANUALS....MUST...TYPE...IN...SOME...TEXT)

Anyway, I've been studying OS development night and day for what seems like forever now (around 8 months), and now it's even eating into all my school work.
I'm 16 years old, so I don't know everything about everything yet - working on it, though.
The main part of my OS is the web browser.
In fact, my web browser is going to replace the desktop (people don't want to spend time on their desktop computers anymore - why waste time on your desktop, on your computer? 8) )

So, first, I was wondering whether Gecko (Mozilla's layout engine) is still open source and free to use. If not, I just need some resources on how to build my own layout engine ( :shock: ).

Anyway, here are some concepts I would like to share:

My (Still imaginary) Fenix Kernel:

Special design: every application will have its own child kernel. Each child kernel (_CKERNEL) will be monitored by the mother kernel (_MKERNEL). Application system calls will be put through a protocol called OSmosis, where every system call an application needs is preprocessed through a Dice Container - a script that holds all the Die.

Die are classes that get filled in with whatever system call you want. The inner workings of OSmosis are still being planned out.
What I want out of OSmosis is a system that is COMPLETELY transparent, but at the same time 100% secure against any virus that wants to get into RAM.
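
To make the idea a bit more concrete, here is a rough C++ sketch of how the Die / Dice Container part might look. Everything in it is hypothetical (OSmosis is still only an idea), so all the names and interfaces are invented purely for illustration:

Code:
// Hypothetical sketch of the OSmosis idea: a Die wraps one filled-in system
// call request, and the Dice Container screens each Die before forwarding it
// to the mother kernel. All names and interfaces are invented for this sketch.
#include <functional>
#include <iostream>
#include <string>
#include <vector>

// One "Die": a system call request filled in by an application.
struct Die {
    std::string name;        // e.g. "open", "read"
    std::vector<long> args;  // raw arguments
};

// The "Dice Container": holds the policy that decides which Die may pass.
class DiceContainer {
public:
    using Handler = std::function<long(const Die&)>;

    explicit DiceContainer(Handler mother_kernel)
        : forward_(std::move(mother_kernel)) {}

    // A child kernel submits a Die here on behalf of its application.
    long submit(const Die& d) {
        if (!allowed(d)) {
            std::cerr << "OSmosis: rejected call " << d.name << "\n";
            return -1;            // the call never reaches the mother kernel
        }
        return forward_(d);       // transparent pass-through when allowed
    }

private:
    // Placeholder policy; a real design would consult per-application
    // permissions held by the mother kernel.
    bool allowed(const Die& d) const { return d.name != "raw_port_io"; }

    Handler forward_;
};

int main() {
    DiceContainer osmosis([](const Die& d) {
        std::cout << "mother kernel services " << d.name << "\n";
        return 0L;                // pretend the call succeeded
    });
    osmosis.submit({"read", {3, 0, 128}});
    osmosis.submit({"raw_port_io", {0x60}});   // filtered out by the policy
}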

Anyway, pairing every application with a child kernel is a great way to do memory management.
The child kernel will have its own AI: if a program is running out of memory on a page, a new page will be added and assigned to the application, making the whole OS modular. Good plan? I think so.
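
The "add a page when the application runs out" part is essentially what existing systems call demand paging. Below is a tiny user-space simulation of that mechanism; all the structures are invented for illustration (real paging is driven by hardware page faults):

Code:
// Toy simulation of "give the application another page when it runs out",
// i.e. demand paging handled by a per-application (child) kernel.
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <unordered_map>

constexpr std::uint64_t kPageSize = 4096;

class ChildKernel {
public:
    // Called when the application touches an address it does not own yet.
    void handle_page_fault(std::uint64_t addr) {
        std::uint64_t page = addr / kPageSize;
        if (page_table_.count(page)) return;    // already mapped, nothing to do
        page_table_[page] = next_frame_++;       // hand out a fresh frame
        std::cout << "mapped page " << page
                  << " -> frame " << page_table_[page] << "\n";
    }

    std::size_t pages_in_use() const { return page_table_.size(); }

private:
    std::unordered_map<std::uint64_t, std::uint64_t> page_table_;
    std::uint64_t next_frame_ = 0;               // pretend physical frames
};

int main() {
    ChildKernel ck;
    ck.handle_page_fault(0x1000);   // first access: a page is mapped on demand
    ck.handle_page_fault(0x1008);   // same page: already mapped
    ck.handle_page_fault(0x5000);   // new page: another frame is handed out
    std::cout << ck.pages_in_use() << " pages in use\n";
}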

Also, child kernels may even (like on exokernels) use different libraries and programming languages - hopefully making any and all developers able to design apps without needing to learn a specific language (my problem with wanting to develop for the Mac and Windows operating systems).

(Note: to anyone who posts snide comments about the fact that I know 'diddly s***' about operating systems - your comments DO NOT HELP ANYONE. STOP BEING MISERABLE BAD TURDS JUST BECAUSE YOU'RE MORE 'INFORMED' THAN I AM. ( [-X ) )

Re: Some thoughts and concepts to share on kernel design...

Posted: Thu Apr 07, 2011 12:48 pm
by NickJohnson
Gecko is still free, although it's not exactly lightweight.

I'm not sure why you're going for such a strange design for a system that's only supposed to run a web browser. In theory, you should be fine with a single multithreaded process on top of a monolithic kernel. You seem to be over-complicating things: even a simple OS is not trivial to implement, and something as complicated as what you're describing would not only be intractable for a newbie, but would probably be overkill in any situation.
naf456 wrote:(Note: to anyone who posts snide comments about the fact that I know 'diddly s***' about operating systems - your comments DO NOT HELP ANYONE. STOP BEING MISERABLE BAD TURDS JUST BECAUSE YOU'RE MORE 'INFORMED' THAN I AM. ( [-X ) )
This is probably the wrong attitude. People here are used to giving harsh (but ultimately constructive) criticism of designs. Ignoring people because you think they know more than you do is just unreasonable anyway.

Re: Some thoughts and concepts to share on kernel design...

Posted: Thu Apr 07, 2011 1:53 pm
by turdus
naf456 wrote:seems like forever now (around 8 months)
You made my day :-) I was your age when I wrote my first kernel, but my "forever" is more than a decade old now :-) There were gaps, sometimes years, but I never lost interest. The best advice I can give: always think first, and think it through well. Do not implement anything until you're absolutely sure there's no better way to do it. It really sucks to redesign basic concepts. And write lots of comments, knowing that they will be read by a psychotic serial killer who knows where you live.
naf456 wrote:If not, I just need some resources on how to build my own layout engine ( :shock: ).
There are other options as well. You can use other open-source projects, like WebKit or Konqueror's engine. At the extreme end, w3m is a text-based browser.

But I suggest writing your own. It's easier than you think: you don't have to be fully W3C compliant at first; it's enough if your browser displays some page (also written by you) okay. No XML parser is needed (although past a certain complexity, life is easier if you have one).
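
To show how small that first step can be, here is a toy sketch along those lines: no XML parser, no CSS, just strip the tags, treat a couple of them as breaks, and word-wrap the rest. It is nowhere near W3C compliance and is only meant as an illustration of how little is needed to put some text on the screen:

Code:
// Toy "layout engine": strip tags, treat <p>, </p> and <br> as paragraph
// breaks, and word-wrap the remaining text to a fixed width. Illustrative only.
#include <iostream>
#include <sstream>
#include <string>

// Wrap one paragraph of plain text to a fixed column width.
std::string wrap(const std::string& para, std::size_t width) {
    std::istringstream words(para);
    std::string word, line, out;
    while (words >> word) {
        if (!line.empty() && line.size() + 1 + word.size() > width) {
            out += line + '\n';
            line.clear();
        }
        line += (line.empty() ? "" : " ") + word;
    }
    if (!line.empty()) out += line + '\n';
    return out;
}

std::string render(const std::string& html, std::size_t width = 60) {
    std::string text, tag;
    bool in_tag = false;
    for (char c : html) {
        if (c == '<') { in_tag = true; tag.clear(); }
        else if (c == '>') {
            in_tag = false;
            if (tag == "p" || tag == "/p" || tag == "br") text += '\n';
        }
        else if (in_tag) tag += c;
        else text += c;
    }
    std::string out, para;
    std::istringstream paras(text);
    while (std::getline(paras, para))
        if (!para.empty()) out += wrap(para, width) + '\n';
    return out;
}

int main() {
    std::cout << render("<html><body><p>Hello, tiny web!</p>"
                        "<p>This is a <b>very</b> small renderer, just enough "
                        "to put some text on the screen.</p></body></html>");
}
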
naf456 wrote:Anyway, here are some concepts I would like to share:
Good concepts - why don't you see for yourself whether they are applicable? :-)

Re: Some thoughts and concepts to share on kernel design...

Posted: Thu Apr 07, 2011 2:15 pm
by xenos
I don't think the OP opposes (maybe harsh) comments from more experienced people in general, but rather the trollish and useless comments that pop up here once in a while, mainly directed at newbies and simply saying "Go away newbie, osdev is not your kind of business".

Regarding the design, I would agree with NickJohnson: it looks utterly complicated to me, and I don't see any advantage in having all those separate kernels. If all you want to display is some kind of web browser, you could run Gecko on pretty much any kind of operating system. Applications could be based on a GUI written in some web language (HTML5 with SVG, MathML, styling with CSS...) and written in JavaScript... Just to share some quick ideas ;)

Re: Some thoughts and concepts to share on kernel design...

Posted: Thu Apr 07, 2011 2:25 pm
by TylerH
Sounds like an overcomplicated, hand-written Chrome OS. (Whereas Chrome OS is just Linux - any semblance of freedom + the Chrome browser.)

Re: Some thoughts and concepts to share on kernel design...

Posted: Thu Apr 07, 2011 3:24 pm
by naf456
Thanks for your replies. My main goal for my operating system is for it to be the most stable and secure operating system - one that has a ton of quirks and is easy to develop for.
My operating system will not only host a web browser (although this is my main objective) but will also hold a collection of applications - maybe (if I make some profit) I could create my own store and the like. Of course, these things are FAR in the future.

+ I do not mind any criticism - in fact, I welcome it (failure is a part of success) -
but I disagree with burying criticism under swear words, which I am kind of guilty of myself in my last post :oops:

Thanks :mrgreen:

Re: Some thoughts and concepts to share on kernel design...

Posted: Fri Apr 08, 2011 9:09 am
by suthers
That's bizarre - I was working on coming up with a new kernel paradigm, and I had a similar idea just an hour before I saw this post, minus the AI and webOS thing.

I have to be a bit wary of trying to integrate AI into an OS, as, for me, reliability is one of the qualities every OS should strive for (I think nobody can disagree with this). With an AI that must "learn" as it goes along from its experiences, one cannot predict its final state, and therefore the decisions it makes after some learning are unpredictable and not necessarily positive if it tries to decide on a new situation based on its experience of a previous event that appears similar, but in fact is not.

For me this is the very opposite of reliability, and it cannot be made reliable without severely limiting the learning outcomes of the AI, which in turn curtails the advantages it brings, probably resulting, in most cases, in it being a waste of time.

Though of course (as a disclaimer), never having attempted this myself, I would be interested in reading up on the subject, and I do not claim that there are no methods that would allow an AI within an operating system to be free enough in its learning outcomes to generate significant advantages while not risking any form of unreliability.

Kind regards,

Jules

Re: Some thoughts and concepts to share on kernel design...

Posted: Fri Apr 08, 2011 10:09 am
by NickJohnson
The only areas I could see benefiting from AI are scheduling and caching: both are all about decisions that can't break the system but can improve performance, and both give good feedback. By caching, I mean both page swapping and disk block caching.
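
A small illustration of that point (not a proposal for anyone's kernel, just a toy with invented names and weights): an epsilon-greedy chooser picks between two eviction policies for a page cache and is rewarded on hits. Whatever it learns, lookups stay correct; only the hit rate changes.

Code:
// Toy illustration of "learning can only affect performance, not correctness":
// an epsilon-greedy chooser selects between two eviction policies (FIFO or
// random) for a small page cache, and is rewarded whenever the cache hits.
// The cache stays correct regardless of which policy is chosen.
#include <cstdlib>
#include <deque>
#include <iostream>
#include <unordered_set>
#include <vector>

class LearningCache {
public:
    explicit LearningCache(std::size_t capacity) : cap_(capacity) {}

    bool access(int page) {
        bool hit = resident_.count(page) > 0;
        if (hit) {
            reward_[last_policy_] += 1.0;        // feedback for the chooser
        } else {
            if (resident_.size() >= cap_) evict();
            resident_.insert(page);
            order_.push_back(page);
        }
        return hit;
    }

private:
    void evict() {
        last_policy_ = choose();                 // 0 = FIFO, 1 = random
        std::size_t idx = (last_policy_ == 0)
            ? 0
            : static_cast<std::size_t>(std::rand()) % order_.size();
        resident_.erase(order_[idx]);
        order_.erase(order_.begin() + idx);
    }

    int choose() const {                         // epsilon-greedy selection
        if (std::rand() % 10 == 0) return std::rand() % 2;    // explore
        return reward_[0] >= reward_[1] ? 0 : 1;              // exploit
    }

    std::size_t cap_;
    std::unordered_set<int> resident_;           // pages currently cached
    std::deque<int> order_;                      // insertion order of pages
    int last_policy_ = 0;
    double reward_[2] = {0.0, 0.0};
};

int main() {
    LearningCache cache(4);
    std::vector<int> trace = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5, 1, 2};
    int hits = 0;
    for (int page : trace) hits += cache.access(page);
    std::cout << hits << " hits out of " << trace.size() << " accesses\n";
}
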
suthers wrote: For me this is the very opposite of reliability, and it cannot be made reliable without severely limiting the learning outcomes of the AI, which in turn curtails the advantages it brings, probably resulting, in most cases, in it being a waste of time.
Machine learning algorithms generally have proven guarantees of (eventual) convergence toward optimal or semi-optimal solutions. Machine learning algorithms are, in fact, algorithms, not heuristics.

Re: Some thoughts and concepts to share on kernel design...

Posted: Fri Apr 08, 2011 3:16 pm
by suthers
I acknowledge that my statement attributed properties of heuristic algorithms to machine learning algorithms and I retract the example I gave.

I have to admit that I didn't make myself clear and that my post was confused. What I meant is that during the convergence to a quasi-optimal solution, it is possible that there will be points at which the machine learning will yield an algorithm that produces a solution worse than even a greedy algorithm would give.

One cannot even produce an OS with the machine learning algorithm already in a state considered close to an optimal solution, as the optimal solution for a given user might be significantly different from the one required by 'lab use' or whatever test conditions were used to produce it. So it is possible that, during the algorithm's search through the function space of the problem, it passes through a local "fitness" minimum, resulting in the situation above.

The confusion here seems to stem from my use of the word 'unreliable': for me, an OS that does a set amount of work slowly, sub-optimally and in a variable amount of time (depending on what stage of its search it is in) is unreliable.

Also, even for scheduling algorithms, there are many variables to take into account, so it can be very hard to produce an appropriate "fitness" function, i.e. a function f : (g : R^n -> R^m) -> R, with n, m ∈ Z, that judges how "good" (i.e. optimal) the current solution is, so that the machine learning algorithm can "know" which way to move in the solution function space.

An example can be found in HDD disk scheduling. I understand that this is not necessarily something an OS must face, but it is an example I have encountered in my own research, so I give it. I was once using a neural network (NN) to derive a servo arm scheduling algorithm, using an emulated disk. The problem that occurred was that after some time it (almost) always picked, from the requests on the queue, the sector closest to the current position of the emulated servo arm.

This at first doesn't seem bad, as it produced very high data throughput. But when I ran a test where many requests were constantly being made for spans of data that were physically close to each other on the disk, as well as for other points, the spans that were close to each other were constantly read, and the requests that were not on this path were seldom serviced.

If these requests were being made by different applications, it is clear that this would not be "fair" to all the applications, even though it produces high throughput. The problem here was that I had chosen a naive measure of the "fitness" of a particular solution, one that gave little or no weight to considerations other than total data throughput. This was particularly stupid of me, and a bit of tweaking gave a much nicer solution, but it is easy to make such mistakes, and some considerations can be subtle and hard to see, and may therefore be disregarded during the design of the machine learning algorithm.
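
For what it's worth, the difference between the naive measure and the tweaked one can be shown in a few lines. The sketch below is not the actual fitness function I used; it is an invented example of the same idea: the fairer score keeps the throughput term but subtracts a penalty that grows with the worst waiting time, so starvation drags the score down.

Code:
// Invented example of a throughput-only fitness versus one with a starvation
// penalty, in the spirit of the disk-scheduling story above.
#include <algorithm>
#include <iostream>
#include <vector>

struct ServicedRequest {
    double bytes;       // data delivered by this request
    double wait_time;   // how long it sat in the queue before being serviced
};

// Naive fitness: raw throughput only. This kind of measure lets a learned
// scheduler starve far-away requests while still looking "good".
double naive_fitness(const std::vector<ServicedRequest>& reqs, double elapsed) {
    double bytes = 0.0;
    for (const auto& r : reqs) bytes += r.bytes;
    return bytes / elapsed;
}

// Fairer fitness: same throughput term, minus a penalty on the worst waiting
// time seen. The weight is arbitrary here; tuning it is the hard part.
double fair_fitness(const std::vector<ServicedRequest>& reqs, double elapsed,
                    double starvation_weight = 1000.0) {
    double worst_wait = 0.0;
    for (const auto& r : reqs) worst_wait = std::max(worst_wait, r.wait_time);
    return naive_fitness(reqs, elapsed) - starvation_weight * worst_wait;
}

int main() {
    // A schedule with great throughput but one badly starved request...
    std::vector<ServicedRequest> greedy = {{4096, 0.1}, {4096, 0.1}, {512, 9.0}};
    // ...versus a slightly slower schedule that serviced everyone promptly.
    std::vector<ServicedRequest> fair   = {{4096, 0.3}, {4096, 0.4}, {512, 0.5}};

    std::cout << "throughput only: greedy=" << naive_fitness(greedy, 1.0)
              << "  fair=" << naive_fitness(fair, 1.2) << "\n";
    std::cout << "with penalty:    greedy=" << fair_fitness(greedy, 1.0)
              << "  fair=" << fair_fitness(fair, 1.2) << "\n";
}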

But the point here is twofold. First, it is clear that a more naive test would simply have seen high data throughput and passed the algorithm as good, without ever testing it against requests concentrated, in number and frequency, on localized points of the disk's surface.

The second point I am making is that it is very hard to understand the workings of a solution given by a machine learning algorithm. I observed this when, some time ago, I used a NN (again) to approximate Euler's totient function (φ(n)) for large numbers during an experiment in using AI for RSA factoring. When I looked at the weightings and shape of the network, it was clear that no human could derive anything from it, nor even begin to understand why it worked, although it gave a fairly close answer in many cases, which I must admit surprised me.

This makes it very hard to "see" any potential weak spots of such a solution and to test accordingly for cases where the algorithm produced by the machine learning process gives a solution that is far from optimal.

So for me this has the potential to produce an "unreliable" OS. I understand that, by the definition of unreliability implied in NickJohnson's post, I am incorrect, but I did not mean to imply that it would "break" an OS in any way; I simply said that people should be wary of it, and meant that it should be seen as more of an experiment in AI than one in OS development. If I were to try such an idea out, I would first try it in an artificial environment before putting it in an OS. If that is the standard definition of reliability, I admit that from this point of view I was wrong, and I am extremely sorry for wasting your time.

Kind regards,

Jules

Re: Some thoughts and concepts to share on kernel design...

Posted: Fri Apr 08, 2011 4:24 pm
by bewing
Sorry for off-topicness, but: Jules -- excellent post! :D

Re: Some thoughts and concepts to share on kernel design...

Posted: Fri Apr 08, 2011 4:50 pm
by Jezze
If a component requires a solution for which no optimal solution exists - discard the component.

That is why my kernel doesn't have multitasking =)

Re: Some thoughts and concepts to share on kernel design...

Posted: Fri Apr 08, 2011 6:47 pm
by TylerH
Hi Jules,
suthers wrote:I acknowledge that my statement attributed properties of heuristic algorithms to machine learning algorithms and I retract the example I gave.

I have to admit that I didn't make myself clear and that my post was confused. What I meant is that during the convergence to a quasi-optimal solution, it is possible that there will be points at which the machine learning will yield an algorithm that produces a solution worse than even a greedy algorithm would give.

One cannot even produce an OS with the machine learning algorithm already in a state considered close to an optimal solution, as the optimal solution for a given user might be significantly different from the one required by 'lab use' or whatever test conditions were used to produce it. So it is possible that, during the algorithm's search through the function space of the problem, it passes through a local "fitness" minimum, resulting in the situation above.
As to the inability to start in an optimal state: it would be acceptable to start in a suboptimal state, given that the state can be retained over time (including across reboots) and that it approaches optimality in a logistic fashion. The ability to keep state could be attained by storing it to some persistent storage, from which it would be restored after getting sufficiently far into the boot sequence.
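
A minimal sketch of the "retain it over reboots" part, assuming the learned state is just a vector of weights: dump it to storage on shutdown and restore it once the boot sequence is far enough along to read it back. The file name and format here are made up; a real kernel would use its own storage mechanism rather than a plain file.

Code:
// Sketch: save a learned weight vector at shutdown, restore it at boot.
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

bool save_weights(const std::vector<double>& w, const std::string& path) {
    std::ofstream out(path, std::ios::binary);
    if (!out) return false;
    std::size_t n = w.size();
    out.write(reinterpret_cast<const char*>(&n), sizeof n);
    out.write(reinterpret_cast<const char*>(w.data()),
              static_cast<std::streamsize>(n * sizeof(double)));
    return static_cast<bool>(out);
}

std::vector<double> load_weights(const std::string& path) {
    std::ifstream in(path, std::ios::binary);
    std::size_t n = 0;
    if (!in.read(reinterpret_cast<char*>(&n), sizeof n))
        return {};                        // no saved state: start from scratch
    std::vector<double> w(n);
    in.read(reinterpret_cast<char*>(w.data()),
            static_cast<std::streamsize>(n * sizeof(double)));
    return in ? w : std::vector<double>{};
}

int main() {
    std::vector<double> learned = {0.12, -0.8, 1.5};   // pretend these were learned
    save_weights(learned, "scheduler_state.bin");      // at shutdown
    std::vector<double> restored = load_weights("scheduler_state.bin");  // at boot
    std::cout << "restored " << restored.size() << " weights\n";
}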

A solution to user differences could be to have compile-time (or even runtime) configurability of the fitness function. This already exists for current kernels.
suthers wrote:The confusion here seems to stem from my use of the word 'unreliable': for me, an OS that does a set amount of work slowly, sub-optimally and in a variable amount of time (depending on what stage of its search it is in) is unreliable.

Also, even for scheduling algorithms, there are many variables to take into account, so it can be very hard to produce an appropriate "fitness" function, i.e. a function f : (g : R^n -> R^m) -> R, with n, m ∈ Z, that judges how "good" (i.e. optimal) the current solution is, so that the machine learning algorithm can "know" which way to move in the solution function space.

An example can be found in HDD disk scheduling. I understand that this is not necessarily something an OS must face, but it is an example I have encountered in my own research, so I give it. I was once using a neural network (NN) to derive a servo arm scheduling algorithm, using an emulated disk. The problem that occurred was that after some time it (almost) always picked, from the requests on the queue, the sector closest to the current position of the emulated servo arm.

This at first doesn't seem bad, as it produced very high data throughput. But when I ran a test where many requests were constantly being made for spans of data that were physically close to each other on the disk, as well as for other points, the spans that were close to each other were constantly read, and the requests that were not on this path were seldom serviced.

If these requests were being made by different applications, it is clear that this would not be "fair" to all the applications, even though it produces high throughput. The problem here was that I had chosen a naive measure of the "fitness" of a particular solution, one that gave little or no weight to considerations other than total data throughput. This was particularly stupid of me, and a bit of tweaking gave a much nicer solution, but it is easy to make such mistakes, and some considerations can be subtle and hard to see, and may therefore be disregarded during the design of the machine learning algorithm.

But the point here is twofold. First, it is clear that a more naive test would simply have seen high data throughput and passed the algorithm as good, without ever testing it against requests concentrated, in number and frequency, on localized points of the disk's surface.
Could you have factored the service time for specific reads into the NN's feedback? As in, if a certain read was left unserviced for x amount of time (where x could even be a function of distance from the read, so as to maintain fast reads for closer data), give error feedback regardless. Or were you already doing just that? Either way, I see the point that an effective fitness function would be difficult to make. But difficult doesn't imply impossible, or even infeasible.
suthers wrote:The second point I am making is that it is very hard to understand the workings of a solution given by a machine learning algorithm. I observed this when, some time ago, I used a NN (again) to approximate Euler's totient function (φ(n)) for large numbers during an experiment in using AI for RSA factoring. When I looked at the weightings and shape of the network, it was clear that no human could derive anything from it, nor even begin to understand why it worked, although it gave a fairly close answer in many cases, which I must admit surprised me.

This makes it very hard to "see" any potential weak spots of such a solution and to test accordingly for cases where the algorithm produced by the machine learning process gives a solution that is far from optimal.
Does it matter that some NN results are incomprehensible, given that educated guess-and-check can lead to satisfactory results? We have little to no understanding of how biological NNs work, but they are obviously superior to simple computation in some fields. The fact that the solution is harder to find (and even to define) doesn't necessarily make the search for it any less worthwhile.

Respect,
Tyler