Standardized IPC protocol

Discussions on more advanced topics such as monolithic vs micro-kernels, transactional memory models, and paging vs segmentation should go here. Use this forum to expand and improve the wiki!
AndrewAPrice
Member
Posts: 2300
Joined: Mon Jun 05, 2006 11:00 pm
Location: USA (and Australia)

Re: Standardized IPC protocol

Post by AndrewAPrice »

I've come up with a design based on RPCs (function entry points that you can call from another process). These RPCs are grouped into services, and processes can choose which services to implement.

Services would be things such as "VFS", "Disk Driver", "Windowing System". There would be kernel hooks, so a process could say "notify me when a new Disk Driver service appears", "list all Disk Drivers", etc.

It would be a good idea to formalize these services and keep a canonical list somewhere. I'm wondering whether, if I formalized them with a descriptor language, I could then write a code generator that produces stubs for both the caller and the implementor.
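For illustration only, a caller stub and callee interface generated from such a descriptor might look roughly like the C++ below. The service name, the request/response layout, and the SendRpc syscall wrapper are all invented for the sketch.

Code: Select all

// Hypothetical output of a stub generator for a "Disk Driver" service.
// SendRpc stands in for whatever kernel primitive actually delivers the message.
#include <cstddef>
#include <cstdint>

extern "C" int SendRpc(uint64_t service_id, uint64_t function_id,
                       const void* request, size_t request_size,
                       void* response, size_t response_size);

struct ReadSectorsRequest  { uint64_t lba; uint32_t count; };
struct ReadSectorsResponse { int32_t status; };

// Caller side: an ordinary function call that is really an RPC.
inline int DiskDriver_ReadSectors(uint64_t service_id,
                                  const ReadSectorsRequest& request,
                                  ReadSectorsResponse* response) {
    constexpr uint64_t kReadSectorsFunctionId = 1;  // id assigned by the descriptor
    return SendRpc(service_id, kReadSectorsFunctionId,
                   &request, sizeof(request), response, sizeof(*response));
}

// Callee side: the interface an implementing process fills in.
class DiskDriverService {
public:
    virtual ~DiskDriverService() = default;
    virtual ReadSectorsResponse ReadSectors(const ReadSectorsRequest& request) = 0;
};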
My OS is Perception.
MollenOS
Member
Posts: 202
Joined: Wed Oct 26, 2011 12:00 pm

Re: Standardized IPC protocol

Post by MollenOS »

AndrewAPrice wrote: It would be a good idea to formalize these services and keep a canonical list somewhere. I'm wondering whether, if I formalized them with a descriptor language, I could then write a code generator that produces stubs for both the caller and the implementor.
That's exactly what I've done to minimize maintenance and eliminate copy-paste errors, because I often let my guard down when I have to define the same thing a million times. So I formalized all the protocols/functions/events as XML and then built a library that handles the communication. I generalized the library so that others can use it. All RPC is inherently asynchronous, but the library allows synchronous calling using message contexts that can be provided at invoke time and then waited on with _await on that context.

My OS service protocols, for instance, are defined like this:
https://github.com/Meulengracht/MollenO ... tocols.xml
OSwhatever
Member
Posts: 595
Joined: Mon Jul 05, 2010 4:15 pm

Re: Standardized IPC protocol

Post by OSwhatever »

I use a lightweight IPC protocol. It is loosely based on this paper

https://dl.acm.org/doi/pdf/10.1145/1278 ... nload=true

which is very similar to QDataStream in the Qt framework. So there is no type checking, no parameter checking, and no XML description format. The services are responsible for checking their parameters themselves.

The benefits of this approach are:

It is lightweight and fast
No IDL compiler required
Easily implementable in any language
It can easily be made endianness-aware
In C++ you can easily overload the << and >> operators to marshal entire classes.
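A minimal sketch of that operator-overloading approach, assuming a hand-rolled stream class (the names here are mine, not from the paper or Qt). Note the deliberate lack of type checking: the reader must extract fields in exactly the order the writer inserted them.

Code: Select all

// QDataStream-like byte stream; an endianness-aware version would byte-swap
// each scalar in append()/extract().
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

class MessageStream {
public:
    MessageStream& operator<<(uint32_t v) { append(&v, sizeof(v)); return *this; }
    MessageStream& operator<<(const std::string& s) {
        *this << static_cast<uint32_t>(s.size());
        append(s.data(), s.size());
        return *this;
    }
    MessageStream& operator>>(uint32_t& v) { extract(&v, sizeof(v)); return *this; }
    MessageStream& operator>>(std::string& s) {
        uint32_t length = 0;
        *this >> length;
        s.resize(length);
        extract(&s[0], length);
        return *this;
    }
    const std::vector<uint8_t>& bytes() const { return buffer_; }

private:
    void append(const void* p, size_t n) {
        const uint8_t* b = static_cast<const uint8_t*>(p);
        buffer_.insert(buffer_.end(), b, b + n);
    }
    void extract(void* p, size_t n) {
        std::memcpy(p, buffer_.data() + read_pos_, n);
        read_pos_ += n;
    }
    std::vector<uint8_t> buffer_;
    size_t read_pos_ = 0;
};

// User-defined types marshal themselves by overloading the same operators.
struct OpenRequest { uint32_t flags; std::string path; };
inline MessageStream& operator<<(MessageStream& s, const OpenRequest& r) { return s << r.flags << r.path; }
inline MessageStream& operator>>(MessageStream& s, OpenRequest& r)       { return s >> r.flags >> r.path; }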
AndrewAPrice
Member
Posts: 2300
Joined: Mon Jun 05, 2006 11:00 pm
Location: USA (and Australia)

Re: Standardized IPC protocol

Post by AndrewAPrice »

I've been reading about Cap'n Proto and I'm intrigued by its zero-cost serialization. I thought it would be nice to have a similar IDL; if the message is <= 88 bytes (I'm in long mode and accounting for the other registers used by the syscall), I could transfer it in registers. For larger messages, I could create a memory arena (aligned to pages), allocate all messages and sub-messages inside it (with pointers stored as offsets within the arena), and "gift" the pages containing the arena to the receiver.
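A rough sketch of the arena idea, with in-buffer references stored as offsets from the arena base so the gifted pages stay valid at whatever address the receiver happens to map them. Every name here is hypothetical.

Code: Select all

// Hypothetical page-aligned message arena with offset-based "pointers".
#include <cstddef>
#include <cstdint>
#include <cstdlib>

class MessageArena {
public:
    explicit MessageArena(size_t bytes)
        : size_((bytes + 4095) & ~size_t{4095}),
          base_(static_cast<uint8_t*>(std::aligned_alloc(4096, size_))) {}
    ~MessageArena() { std::free(base_); }

    // Bump-allocate space for a (sub)message; the return value is an offset,
    // not an address, so it survives being remapped in the receiver.
    uint32_t Allocate(size_t bytes) {
        size_t offset = used_;
        used_ += (bytes + 7) & ~size_t{7};  // keep 8-byte alignment
        return static_cast<uint32_t>(offset);
    }

    template <typename T>
    T* Resolve(uint32_t offset) { return reinterpret_cast<T*>(base_ + offset); }

    uint8_t* base() const { return base_; }
    size_t used() const { return used_; }

private:
    size_t   size_;
    uint8_t* base_;
    size_t   used_ = 0;
};

// A message whose "pointer" field is an arena offset rather than a raw address.
struct OpenFileRequest {
    uint32_t path_offset;  // offset of a NUL-terminated path within the same arena
    uint32_t flags;
};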
My OS is Perception.
eekee
Member
Posts: 891
Joined: Mon May 22, 2017 5:56 am
Location: Kerbin
Discord: eekee

Re: Standardized IPC protocol

Post by eekee »

Serialization is generally low- to zero-cost outside the OO world. I'm intrigued by Cap'n Proto's dependency tracking, promises, etc, because the problem it shows under "traditional RPC" in the first diagram is a big one for Plan 9 and for what I want to do with networks.
Kaph — a modular OS intended to be easy and fun to administer and code for.
"May wisdom, fun, and the greater good shine forth in all your work." — Leo Brodie
AndrewAPrice
Member
Posts: 2300
Joined: Mon Jun 05, 2006 11:00 pm
Location: USA (and Australia)

Re: Standardized IPC protocol

Post by AndrewAPrice »

Inspired by Cap'n Proto and FlatBuffers, I've been working on my own format (Permebuf).

My goal is to minimize copying - if the buffer is under ~80 bytes then we can keep it on the stack and transfer it via registers. If it grows larger, we'll move it into its own page, and the RPC will move the memory pages over.
Last edited by AndrewAPrice on Mon Jul 06, 2020 7:15 pm, edited 1 time in total.
My OS is Perception.
Korona
Member
Posts: 1000
Joined: Thu May 17, 2007 1:27 pm

Re: Standardized IPC protocol

Post by Korona »

Managarm is also switching to its own serialization format (and away from Protobufs). The most crucial advantage of our own format is that we can have a fixed-size header and variable data in a tail. This allows us to handle most IPC requests without additional syscalls (since most messages are indeed fixed-size). It also means that we do not need to allocate for most messages.
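For illustration (the field names are invented, not Managarm's actual wire format), such a layout can be as simple as:

Code: Select all

// Fixed-size header followed by a variable-length tail. The receiver always
// reads sizeof(IpcHeader) first, learns the tail length, and only needs extra
// work (or allocation) for the minority of messages with a non-empty tail.
#include <cstdint>

struct IpcHeader {
    uint32_t request_type;    // which RPC this is
    uint32_t tail_length;     // bytes of variable data following the header
    uint64_t correlation_id;  // matches the reply to the request
};
// The tail bytes, if any, follow the header directly in the message buffer.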
managarm: Microkernel-based OS capable of running a Wayland desktop (Discord: https://discord.gg/7WB6Ur3). My OS-dev projects: [mlibc: Portable C library for managarm, qword, Linux, Sigma, ...] [LAI: AML interpreter] [xbstrap: Build system for OS distributions].
OSwhatever
Member
Posts: 595
Joined: Mon Jul 05, 2010 4:15 pm

Re: Standardized IPC protocol

Post by OSwhatever »

One thing to note is that any method that involves marshaling is basically a memory copy into a temporary buffer. My kernel supports IOVs (see QNX for what an IOV is), which can be used to copy the parameters between processes without any temporary copy at all. You use a pointer/size pair for each parameter; buffer-like types such as strings can fill the pointer/size pair with the address where the string already lives. The kernel does the rest and copies all the parameters, and on the other side the buffers can be used directly in place. The only downside is that constant simple types like integers must be given a variable that lives on the stack, but this is a minor issue.

There is also the limitation of how many IOVs the kernel can accept, but in general it is on the order of hundreds (QNX can handle 1024). If you want to send loads of parameters and structures, you must fall back to a buffered method that uses marshaling.
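As a sketch of what the sending side could look like, assuming a QNX MsgSendv-style kernel entry point; ipc_sendv, the channel argument, and the opcode are placeholders, not a real API.

Code: Select all

// Gather-send: each parameter is described by a pointer/size pair and the kernel
// copies the pieces into the receiver with no intermediate marshaling buffer.
#include <cstddef>
#include <cstdint>
#include <cstring>

struct IoVec { const void* base; size_t length; };  // mirrors POSIX struct iovec

extern "C" int ipc_sendv(int channel, const IoVec* iov, int iov_count);  // placeholder

int send_open_request(int channel, const char* path, uint32_t flags) {
    uint32_t opcode = 7;  // simple scalars still need an address, so they sit on the stack

    IoVec iov[3];
    iov[0] = { &opcode, sizeof(opcode) };
    iov[1] = { &flags,  sizeof(flags)  };
    iov[2] = { path, std::strlen(path) + 1 };  // the string is used in place, no copy here

    return ipc_sendv(channel, iov, 3);
}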

Just one other way of doing IPC that came to mind.
AndrewAPrice
Member
Posts: 2300
Joined: Mon Jun 05, 2006 11:00 pm
Location: USA (and Australia)

Re: Standardized IPC protocol

Post by AndrewAPrice »

I'm fascinated with the idea of promise pipelining.

The idea I have floating around in my head is that RPCs could immediately return a future, and nothing blocks until you attempt to read that future. Then any message field in my Permebuf format could be assigned either a future or another message in the same buffer, and the C++ API could be agnostic to whether the field holds a future.

But this creates issues such as garbage-collecting unread but owned futures, and deciding what to do when we actually read the future: do we copy the future's value into the local buffer (simplest to forward in follow-up RPCs), or "chain" multiple buffers together (fastest, but sending chained buffers is more complicated)?
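To make the simplest strategy concrete (copy the future's value into the local buffer at read time), here is a sketch; every type and method name is invented, not an actual Permebuf API.

Code: Select all

// A field can hold a concrete value or a not-yet-resolved future; the accessor
// is agnostic to which one was assigned and only resolves (blocks) on read.
#include <cstdint>
#include <variant>

template <typename T>
class RpcFuture {
public:
    T Get() {
        // A real implementation would block here until the RPC response arrives;
        // this stub returns a default value to keep the sketch self-contained.
        return T{};
    }
private:
    uint64_t rpc_id_ = 0;
};

template <typename T>
class MessageField {
public:
    void Set(T value)             { storage_ = value; }
    void Set(RpcFuture<T> future) { storage_ = future; }

    T Read() {
        if (auto* future = std::get_if<RpcFuture<T>>(&storage_)) {
            storage_ = future->Get();  // copy the resolved value into the local buffer
        }
        if (auto* value = std::get_if<T>(&storage_)) return *value;
        return T{};  // field was never assigned
    }
private:
    std::variant<std::monostate, T, RpcFuture<T>> storage_;
};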
My OS is Perception.
AndrewAPrice
Member
Posts: 2300
Joined: Mon Jun 05, 2006 11:00 pm
Location: USA (and Australia)

Re: Standardized IPC protocol

Post by AndrewAPrice »

(edit: double posted)
Last edited by AndrewAPrice on Sat Jul 18, 2020 11:29 am, edited 1 time in total.
My OS is Perception.
AndrewAPrice
Member
Posts: 2300
Joined: Mon Jun 05, 2006 11:00 pm
Location: USA (and Australia)

Re: Standardized IPC protocol

Post by AndrewAPrice »

I'm questioning my original plan of making RPCs act as cross-address-space function calls.

My original plan was this:
- The service would register an entry point for each function- a memory address of the function.
- The caller would issue a system call with the service ID and function ID (small messages could be copied across in registers, large messages could be copied across by 'gifting' the callee system pages).
- I wanted to make it look as if the caller and callee shared the same thread. The kernel would block the caller's thread and create a new thread in the callee; when that function returns (via a system call carrying the response message), I'd destroy the thread and resume execution in the caller's thread.

I wanted to make creating/destroying a thread super lightweight. You want an async RPC? Wrap the call in a thread. You want your service to process RPCs sequentially? Acquire a global lock in the callee. Calling a function exposed by a service in another process would feel very similar to calling a library function (my IDL would even generate C++ stubs for the caller and an interface for the callee to implement).

Can creating threads willy-nilly be as lightweight as I'm imagining? I thought I could make it lightweight by using object pools for the kernel's data structures, along with a pool of stacks in userland (plus some stackless assembly that runs at the start/end of a thread to grab/release a stack). This would avoid dynamic memory allocation most of the time.

But I ported musl, which does TLS initialization and calls TLS destructors on exit. It's not the most difficult thing, but it makes me doubt how lightweight this would be if I created/destroyed a thread for every RPC.

Alternatively, I could abandon this approach and have the receiving process create worker threads that poll/sleep for incoming RPCs. I could still imitate "every RPC has its own thread" by letting the service manage its own pool of worker threads. The service could pick its own scheme, such as the following (a sketch of the first one appears after the list):
- worker threads always make sure that at least one worker thread is left sleeping, waiting for messages (maybe with a maximum limit);
- a worker thread per processor core.
- one worker thread, with all messages handled synchronously.
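A very rough sketch of the first scheme, where whichever worker picks up a message makes sure another worker is left listening before it starts the (possibly slow) handling work. WaitForRpc and HandleRpc are stand-ins for the real receive syscall and the dispatch code.

Code: Select all

#include <atomic>
#include <thread>

struct RpcMessage { /* request payload */ };

RpcMessage WaitForRpc() { return {}; }  // stub: the real call blocks in the kernel
void HandleRpc(const RpcMessage&) {}    // stub: the real code dispatches to handlers

std::atomic<int> sleeping_workers{0};
std::atomic<int> total_workers{1};
constexpr int kMaxWorkers = 16;

void WorkerLoop() {
    for (;;) {
        ++sleeping_workers;
        RpcMessage message = WaitForRpc();
        --sleeping_workers;

        // Keep at least one worker listening while we handle this message,
        // up to a maximum number of workers.
        if (sleeping_workers.load() == 0 && total_workers.load() < kMaxWorkers) {
            ++total_workers;
            std::thread(WorkerLoop).detach();
        }
        HandleRpc(message);
    }
}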

What do people think?
My OS is Perception.
Korona
Member
Posts: 1000
Joined: Thu May 17, 2007 1:27 pm

Re: Standardized IPC protocol

Post by Korona »

Well, if you want asynchronous message processing, an asynchronous IPC primitive will certainly perform better than any emulation based on synchronous primitives.
managarm: Microkernel-based OS capable of running a Wayland desktop (Discord: https://discord.gg/7WB6Ur3). My OS-dev projects: [mlibc: Portable C library for managarm, qword, Linux, Sigma, ...] [LAI: AML interpreter] [xbstrap: Build system for OS distributions].
MollenOS
Member
Posts: 202
Joined: Wed Oct 26, 2011 12:00 pm

Re: Standardized IPC protocol

Post by MollenOS »

I would also stick to making the RPC inherently async and then emulating sync behaviour on top with an await facility.

Code: Select all

int gracht_client_invoke(gracht_client_t*, struct gracht_message_context*, struct gracht_message*);
int gracht_client_await(gracht_client_t*, struct gracht_message_context*);
int gracht_client_await_multiple(gracht_client_t*, struct gracht_message_context**, int, unsigned int);
int gracht_client_status(gracht_client_t*, struct gracht_message_context*, struct gracht_param*);
This is the interface of my protocol client: you can invoke purely async RPC calls, and if the message is synchronous you can await the result. The invoke method is never called directly, but rather by the protocol functions generated by my protocol generator. If you need to await an RPC, you must provide a message context where the result can be stored; it's the temporary storage that's required to hold the response.

An example of calling a synchronous method through an async API:

Code: Select all

struct vali_link_message msg = VALI_MSG_INIT_HANDLE(GetProcessService());

status = svc_process_report_crash(GetGrachtClient(), &msg.base, *GetInternalProcessId(), Context, sizeof(Context_t), Signal->signal);
gracht_client_await(GetGrachtClient(), &msg.base);
svc_process_report_crash_result(GetGrachtClient(), &msg.base, &osStatus);
OSwhatever
Member
Posts: 595
Joined: Mon Jul 05, 2010 4:15 pm

Re: Standardized IPC protocol

Post by OSwhatever »

Korona wrote:Well, if you want asynchronous message processing, an asynchronous IPC primitive will certainly perform better than any emulation based on synchronous primitives.
I would like you to define asynchronous IPC first. IPC that is claimed to be asynchronous is often not completely asynchronous. This is similar to OSes that claim to be real-time but have some unexpected event that makes the real-time claim fall apart.

First question: during send, can the thread be blocked at some point during that operation? If the answer is yes, why go through the trouble of designing an "asynchronous" IPC when it might block during send? It is likely to block in order to acquire some resource, but how does that differ from a synchronous IPC?
Korona
Member
Posts: 1000
Joined: Thu May 17, 2007 1:27 pm

Re: Standardized IPC protocol

Post by Korona »

OSwhatever wrote:I would like you to define asynchronous IPC first. IPC that is claimed to be asynchronous is often not completely asynchronous. This is similar to OSes that claim to be real-time but have some unexpected event that makes the real-time claim fall apart.

First question: during send, can the thread be blocked at some point during that operation? If the answer is yes, why go through the trouble of designing an "asynchronous" IPC when it might block during send? It is likely to block in order to acquire some resource, but how does that differ from a synchronous IPC?
An asynchronous operation consists of two components: (i) submission and (ii) completion. The operation is submitted, operates "in the background" (i.e., usually without blocking the submitting thread), and completes after some time has passed. An appropriate proactor (aka callbacks/coroutines/fibers/futures, e.g., Windows' overlapped I/O or Linux's io_uring) or reactor (e.g., Linux's epoll) is usually used to monitor the progress of the operation.

I'm not sure what the point of your second question is. If you're asking specifically about my OS: no, on Managarm, an asynchronous operation never blocks the current thread. OTOH I could imagine designs where it's sometimes acceptable/useful to block. I have the feeling that arguing the contrary quickly runs into a no-true-Scotsman fallacy.

EDIT: about the benefit of asynchronous I/O: well, a simple state machine that tracks the progress of an operation is much more lightweight than a thread, hence you can have more of them (i.e., more I/O operations) running concurrently.
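To make the state-machine point concrete: something like the record below is all that has to exist for each in-flight operation, versus a full stack plus kernel thread for every blocked caller (the names are invented).

Code: Select all

// Per-operation bookkeeping for an async request is a few dozen bytes, so
// thousands of operations can be outstanding without thousands of threads.
#include <cstdint>
#include <functional>

enum class OpState { Submitted, InFlight, Completed };

struct AsyncOperation {
    OpState  state = OpState::Submitted;
    uint64_t id = 0;                       // matches a completion back to this record
    int      result = 0;                   // filled in on completion
    std::function<void(int)> on_complete;  // proactor-style callback
};

// Called by the event loop when the kernel reports that the operation finished.
void Complete(AsyncOperation& op, int result) {
    op.result = result;
    op.state = OpState::Completed;
    if (op.on_complete) op.on_complete(result);
}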
managarm: Microkernel-based OS capable of running a Wayland desktop (Discord: https://discord.gg/7WB6Ur3). My OS-dev projects: [mlibc: Portable C library for managarm, qword, Linux, Sigma, ...] [LAI: AML interpreter] [xbstrap: Build system for OS distributions].