CIL to Native Compiler

Programming, for all ages and all languages.
dc0d32
Member
Posts: 69
Joined: Thu Jun 09, 2005 11:00 pm
Location: Right here

CIL to Native Compiler

Post by dc0d32 »

Hi,

I am planning to write a CIL to native code - AOT compiler.

The assembly loading is pretty straightforward. Using Mono.Cecil, we can load an individual assembly, load all its modules, load all types in the modules, load all member info in the types, and get the method bodies as IL.

The target code generation is the tricky part -

There are two ways this can be done -
1. Convert IL to gcc RTL/tree and let gcc do the optimization and code gen.
2. Convert IL to basic blocks and DAGs ourselves, do optimizations, and emit the target code (either direct binary, or as x86 assembler source to be assembled by gas).

The first approach is taken by gcj.

I'm thinking of choosing the 2nd option. Well, performing optimization is not a very easy task, and I might even skip it the first time around. I will also have to code some basic things like the memory manager & GC, exception handling glue code, virtual method call stubs, etc. in either unsafe C# or C++.

The primary use of this compiler will be to write kernels for normal and embedded systems, in any language that targets CIL.
Last edited by dc0d32 on Sun Jul 08, 2007 1:43 am, edited 1 time in total.
earlz
Member
Posts: 1546
Joined: Thu Jul 07, 2005 11:00 pm
Contact:

Post by earlz »

I will also have to code some basic things like the memory manager & GC, exception handling glue code, virtual method call stubs, etc. in either unsafe C# or C++.
You MUST write this in native code, so C/C++ (or even assembly)

Also, be aware that in OSDeving there are some low-level bits that I'm not so sure any CIL system could handle well... I don't know what arch you're going for, though...
Colonel Kernel
Member
Posts: 1437
Joined: Tue Oct 17, 2006 6:06 pm
Location: Vancouver, BC, Canada
Contact:

Post by Colonel Kernel »

hckr83 wrote:
I will also have to code some basic things like the memory manager & GC, exception handling glue code, virtual method call stubs, etc. in either unsafe C# or C++.
You MUST write this in native code, so C/C++ (or even assembly)
C# in this context is native code. For implementing such low-level things it would have to be compiled to native code. "Unsafe" C# has pointers and can do just about anything C can do AFAIK.
Top three reasons why my OS project died:
  1. Too much overtime at work
  2. Got married
  3. My brain got stuck in an infinite loop while trying to design the memory manager
Don't let this happen to you!
dc0d32
Member
Posts: 69
Joined: Thu Jun 09, 2005 11:00 pm
Location: Right here

Post by dc0d32 »

Colonel Kernel wrote:C# in this context is native code. For implementing such low-level things it would have to be compiled to native code. "Unsafe" C# has pointers and can do just about anything C can do AFAIK.
Yeah, unsafe C# pointers are just like C pointers [:)].

And about the low-level bits thing - we can do bitwise operations in C#, so even if it is a walk around the Earth, it is possible to write such utility classes, I guess.
Alboin
Member
Posts: 1466
Joined: Thu Jan 04, 2007 3:29 pm
Location: Noricum and Pannonia

Post by Alboin »

What advantages would there be to using C# over just plain C++?
C8H10N4O2 | #446691 | Trust the nodes.
dc0d32
Member
Posts: 69
Joined: Thu Jun 09, 2005 11:00 pm
Location: Right here

Post by dc0d32 »

1. The COMPILER is being written in C#; there is no restriction on the language used to write the kernel, as long as it compiles to CIL [the BrainFuck :!: language, maybe].

2. If you are instead asking about the advantages of CIL -
well, type safety (which inherently includes memory safety), verifiable and hence trustworthy processes, a single address space (or maybe a few, with grouping of multiple processes into one), cheap IPC, and what not.

This and this can tell it much better than I can.

Still, the main question remains - does anyone have experience with gcc RTL? Or should I just go ahead and apply the bare compiler concepts from the books?
Colonel Kernel
Member
Posts: 1437
Joined: Tue Oct 17, 2006 6:06 pm
Location: Vancouver, BC, Canada
Contact:

Post by Colonel Kernel »

I don't have any experience with the innards of gcc, but speaking very generally it is usually a better idea to integrate something that you know works rather than to re-invent the wheel. :) Have you considered using LLVM?

Your project sounds very interesting. I wish I had the time to spend on something like that. :(
niteice
Member
Posts: 59
Joined: Tue Oct 03, 2006 3:49 pm

Post by niteice »

Option #1 would be easier if not for one thing: MSIL is entirely stack-based, while RTL favors registers.
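To illustrate that mismatch, here is a rough sketch (in C, with made-up names) of the usual way around it: each CIL stack slot gets its own virtual register, turning pushes and pops into plain assignments that a later register allocator can map onto real registers.

```c
#include <assert.h>

/* Hypothetical sketch: lowering the stack-based CIL sequence
 *     ldarg.0
 *     ldarg.1
 *     add
 *     ret
 * into register form. Each stack slot becomes a virtual register,
 * so the stack discipline disappears before code generation. */
static int lowered_add(int arg0, int arg1)
{
    int vreg0, vreg1, vreg2;   /* virtual registers, one per stack slot */
    vreg0 = arg0;              /* ldarg.0: push arg0  ->  vreg0 = arg0  */
    vreg1 = arg1;              /* ldarg.1: push arg1  ->  vreg1 = arg1  */
    vreg2 = vreg0 + vreg1;     /* add: pop two, push the sum            */
    return vreg2;              /* ret: pop the result                   */
}
```

Most of these virtual registers are redundant copies; a register allocator (or even simple copy propagation) would coalesce them away.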

Otherwise though, this could be a very useful and interesting project. Could you not also compile the assemblies, ship them as standard DLLs/SOs, and completely remove the Mono/CLR dependency?
jnc100
Member
Posts: 775
Joined: Mon Apr 09, 2007 12:10 pm
Location: London, UK
Contact:

Post by jnc100 »

Mono, with the --aot option, can produce native assemblies from a CIL .exe file. I'd imagine it directly outputs machine code for the current architecture, but I hope you can specify a different architecture if you want.

Regards,
John
dc0d32
Member
Posts: 69
Joined: Thu Jun 09, 2005 11:00 pm
Location: Right here

Post by dc0d32 »

The main reason why I am writing this compiler on my own in C# is to bootstrap it. When (if ever) we succeed in building an OS with this compiler, we can compile the compiler as an installer for our OS. Software packages to be installed will be shipped as CIL assemblies, and the compiled compiler will compile each incoming assembly into native code for execution - what I call 'installation'. The concept is very similar to ngen, but radically different in how the compiled code is executed.

ABOUT LLVM - looks interesting; I'll take a look when I get some good time.

About mono --aot - I've tried it before, and it seems to be a royal pain in the a** to build a kernel out of it because of so many dependencies on Mono and the underlying OS (mostly Linux).

I don't know why, but I do have a bad feeling about mono --aot or even gcj. I once tried the same thing in Java, using gcj to compile to native. But porting libgcj/ulibgcj was a very bad experience. With no documentation, I wasted an entire week porting and debugging; things like exception handling are so bad in there that I just quit. :cry: I said, well, I'll make my own - a clean and simple one, so that others who are interested [and can code better in, say, C# than me] can start coding.

edit - even if we use LLVM - which is totally mind-blowing :D - bootstrapping is still a problem :(
dc0d32
Member
Posts: 69
Joined: Thu Jun 09, 2005 11:00 pm
Location: Right here

Post by dc0d32 »

Can we convert CIL bytecode to LLVM bytecode without losing details? [I'm yet to compare LLVM bytecode with CIL]
Kevin McGuire
Member
Posts: 843
Joined: Tue Nov 09, 2004 12:00 am
Location: United States
Contact:

Post by Kevin McGuire »

I just finished a small project to emit machine instructions during execution - something very much like self-modifying code, except it just rebuilt a function more suited to the task at hand. I succeeded, but I did not get the performance I wanted. GCC is much better at generating efficient machine code, which left mine useless; but I did get close in speed, so it was not awful, just not what my goal was.
2. Convert IL to basic blocks and DAGs ourselves, do optimizations, and emit the target code (either direct binary, or as x86 assembler source to be assembled by gas).
The good side is that it is fairly easy to emit machine code for, ironically, one of the hardest instruction sets: IA-32. Instruction sets for other processors are easier, which is good. The hard side is optimization, but fortunately, for a proof of concept (a beginning) you could slide by very easily.

So if you are looking to just emit functional code, then I would say this route is not as hard as it looks; it just requires some determination and endurance for bug hunting at times (or some clever debugging).

There are two main paths, and only one of them have I actually done myself. The macro assembler is one concept that I have never done. The second I cannot think of a name for, but it turns out to be much simpler, and I can speak from experience.

The latter is mostly about writing support code for a large table that contains all the instructions for the target platform. This table may very well be completely different from platform to platform, but the idea is the same. For each instruction you have certain flags which your support code picks up and uses to generate the correct byte sequences. The data in each table entry also serves to enforce that the correct parameters are passed for generation of that instruction.

I searched up an old post of mine, and the table looked something like this. Making the table, once the support code was written, was exactly like copying the instruction tables straight from the IA-32 instruction set documentation.

Code:

tISet ISet[] = {
        {0xFFFFFFFF, 0, 0, 0, 0},
        {ME_MOV,0, 0x0088, A_RM8, A_R8 | X86_O_R},
        {ME_MOV,P_OSO, 0x0089, A_RM16, A_R16 | X86_O_R},
        {ME_MOV,0, 0x0089, A_RM32, A_R32 | X86_O_R},
        {ME_MOV,0, 0x008A, A_R8 | X86_O_R, A_RM8},
        {ME_MOV,P_OSO, 0x008B, A_R16 | X86_O_R, A_RM16},
        {ME_MOV,0, 0x008B, A_R32 | X86_O_R, A_RM32},
};
I cannot explain much more, as it was quite a while ago when I did it. But after my little project I can already tell you that 0x88, 0x89, 0x8A, and 0x8B are the opcodes, since I just got done dealing with them time and time again until the values got stuck in my head. The flags afterwards allow the support code to generate the correct following bytes, or modifications to the instruction bytes... it will require a more in-depth look if you are lost. I apologize if I've gotten you slightly lost.
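As a concrete (and entirely hypothetical) illustration of the support-code idea, the register-to-register form of the 0x89 entry above could be encoded like this. Real support code would be driven by the table's flags and would also handle memory operands, the P_OSO prefix, immediates, and so on; the names here are made up.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Minimal sketch of encoding MOV r/m32, r32 (opcode 0x89) for the
 * register-to-register case. The ModRM byte packs three fields:
 * mod (11 = register direct), reg (source), and rm (destination). */
enum reg32 { EAX, ECX, EDX, EBX, ESP, EBP, ESI, EDI };

/* Emit "mov dst, src" (both 32-bit registers) into buf; returns the
 * number of bytes written. */
static size_t emit_mov_r32_r32(uint8_t *buf, enum reg32 dst, enum reg32 src)
{
    buf[0] = 0x89;                               /* MOV r/m32, r32 */
    buf[1] = (uint8_t)(0xC0 | (src << 3) | dst); /* mod=11, reg=src, rm=dst */
    return 2;
}
```

For example, `emit_mov_r32_r32(buf, EAX, EBX)` produces the two bytes 0x89 0xD8, which gas would disassemble as `mov %ebx, %eax`.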

I dunno, I am just rambling, hoping to give some insight into the do-it-yourself route. Like I said, it will take more than a handful of determination and time, but it is not as hard as you might think.
dc0d32
Member
Posts: 69
Joined: Thu Jun 09, 2005 11:00 pm
Location: Right here

Post by dc0d32 »

x86 code generation is manageable.

Once the IL has been converted to a DAG, or more precisely an SSA AST, instruction selection is fairly simple using tree matching (with the help of x86 instruction timing and cost information); we could use maximal munch or the dynamic programming approach. Then comes the hard part, i.e. register allocation and instruction rescheduling, of which register allocation is a graph coloring problem, while I personally have no experience at all with instruction rescheduling.
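A minimal sketch of the maximal munch idea, with made-up node names: at each tree node the largest matching tile is tried first, so ADD(reg, const) is covered by a single add-immediate tile instead of a load-constant plus a register-register add. Here the "emitted" output is just an instruction count, to keep the sketch short.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical expression tree for instruction selection. */
enum kind { K_REG, K_CONST, K_ADD };

struct node {
    enum kind kind;
    int value;                 /* register number or constant */
    struct node *left, *right;
};

/* Maximal munch: returns how many instructions cover this tree. */
static int munch(const struct node *n)
{
    switch (n->kind) {
    case K_ADD:
        if (n->right->kind == K_CONST)
            return munch(n->left) + 1;  /* largest tile: add reg, imm */
        return munch(n->left) + munch(n->right) + 1; /* add reg, reg */
    case K_CONST:
        return 1;                       /* tile: mov reg, imm */
    case K_REG:
        return 0;                       /* already in a register */
    }
    return 0;
}
```

The dynamic programming approach differs in that it computes the optimal cost for every node bottom-up instead of greedily taking the biggest tile; for a tile set like x86's the greedy answer is usually close but not always optimal.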

Optimization is the hard part. I am structuring the compiler such that (like LLVM) optimizations will be organized as passes, working at the global, method, basic block, or peephole level. A new optimization algorithm can be added by implementing an interface and registering it with the OptimizationPassManager, which will apply the passes in topological sort order.
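The pass-ordering idea can be sketched roughly like this (the names `struct pass` and `schedule` are made up for illustration; a real pass manager would support multiple prerequisites per pass and detect dependency cycles, which this sketch assumes away):

```c
#include <assert.h>
#include <stddef.h>

#define MAX_PASSES 8

/* Each pass declares at most one prerequisite pass by index. */
struct pass {
    const char *name;
    int depends_on;   /* index of prerequisite pass, or -1 for none */
};

/* Fills order[] with pass indices so that every pass comes after its
 * prerequisite (a simple topological sort; assumes no cycles).
 * Returns the number of passes scheduled. */
static int schedule(const struct pass *p, int n, int *order)
{
    int done[MAX_PASSES] = {0};
    int count = 0;
    while (count < n) {
        for (int i = 0; i < n; i++) {
            if (done[i]) continue;
            if (p[i].depends_on < 0 || done[p[i].depends_on]) {
                order[count++] = i;   /* prerequisite satisfied: run it */
                done[i] = 1;
            }
        }
    }
    return count;
}
```

The manager then just iterates over `order[]`, invoking each pass's entry point on the chosen scope (global, method, block, or peephole window).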

Since I will be emitting assembler source, there are two parts to a 'target'. One is the actual hardware; the other is the assembler - if you ever wished to change your assembler, just change the value of the targetAssemblerEmitter member in ClassEmitter.

As you suggest, proving correctness will be vital and hard. Regarding debugging, I guess I'm used to it [and with one of the best debuggers at hand - the Microsoft Visual C# 2005 Express debugger].
dc0d32
Member
Posts: 69
Joined: Thu Jun 09, 2005 11:00 pm
Location: Right here

Basic block linking...

Post by dc0d32 »

Conversion of IL to basic blocks is almost complete. I'm stuck on a dilemma.

When the last instruction of a basic block is a method call, should the basic block have a link to the first basic block of the called method, or a flag or some other indication that control flows out of the current call graph? It makes little difference in practice, but what about clarity?

I guess it is more of a theoretical problem than an implementation problem.
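For what it's worth, one common convention is to keep the CFG strictly intraprocedural: the block's successor points at the in-method continuation block, and the call records only the callee's name, with any interprocedural edge living in a separate call graph. A hypothetical sketch (all names made up):

```c
#include <assert.h>
#include <stddef.h>

struct basic_block;

enum term_kind { T_FALLTHROUGH, T_BRANCH, T_CALL, T_RETURN };

/* The block terminator: succ always stays inside the current method,
 * so per-method analyses never wander into a callee's blocks. */
struct terminator {
    enum term_kind kind;
    struct basic_block *succ;   /* in-method continuation, or NULL   */
    const char *callee;         /* for T_CALL: callee name; the real
                                   edge lives in the call graph       */
};

struct basic_block {
    struct terminator term;
};
```

Keeping `succ` intraprocedural keeps liveness, dominators, and similar per-method analyses self-contained; only interprocedural passes (inlining, whole-program optimization) consult the callee link.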