Well, then you'd be back to passing process IDs to the kernel, and every message would involve a round trip through a "system" process whose job is to pass messages back and forth between other processes. Or, with some added complexity, messages that don't involve handle transfers could be sent directly once the client's identity has been established through the system server. Sounds like it would be awkward, maybe a little too micro.

In practice, handles can just be data as well. What is preventing an operating system from implementing all the capabilities in user space, so that kernel IPC remains simple data transfers?
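Concretely, it might look something like this. A hypothetical sketch (all names invented): the kernel copies messages without interpreting them, handles travel as plain integers in the payload, and a userspace capability server is the only component that can say what a handle means.

Code:
#include <cstdint>
#include <unordered_map>

// Hypothetical sketch: to the kernel a handle is just a number inside a
// message; a userspace capability server owns the table that gives it
// meaning.
struct Message {
  uint64_t sender_pid;  // filled in (or at least verified) by the kernel
  uint64_t handle;      // plain data in transit, no kernel semantics
  uint8_t payload[48];
};

class CapabilityServer {
 public:
  void Grant(uint64_t pid, uint64_t handle) { owner_of_[handle] = pid; }

  // All validation happens here in userspace; the kernel never looks
  // inside the message.
  bool Validate(uint64_t pid, uint64_t handle) const {
    auto it = owner_of_.find(handle);
    return it != owner_of_.end() && it->second == pid;
  }

 private:
  std::unordered_map<uint64_t, uint64_t> owner_of_;  // handle -> owning pid
};

int main() {
  CapabilityServer server;
  server.Grant(/*pid=*/7, /*handle=*/42);
  return server.Validate(7, 42) ? 0 : 1;
}

The obvious catch is the round trip: every handle transfer has to route through (or at least consult) that server, which is exactly the awkwardness above.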
Standardized IPC protocol
Re: Standardized IPC protocol
I haven't fully kept up with the discussion, but just wanted to note:
Using Plan 9, I sometimes found myself wishing services knew the PIDs of clients, but it was impossible because clients could be on the other end of a network link - even a multi-hop link. This will also apply to my OS, and I guess QNX also has the issue because it too has network-transparent services.
Kaph — a modular OS intended to be easy and fun to administer and code for.
"May wisdom, fun, and the greater good shine forth in all your work." — Leo Brodie
"May wisdom, fun, and the greater good shine forth in all your work." — Leo Brodie
Re: Standardized IPC protocol
eekee wrote:Using Plan 9, I sometimes found myself wishing services knew the PIDs of clients, but it was impossible because clients could be on the other end of a network link - even a multi-hop link. This will also apply to my OS, and I guess QNX also has the issue because it too has network-transparent services.

Why not make PIDs know their own hosts? Then any pid-involving call can check if the pid's host is local and, if not, make an RPC.
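Something like this, say (a sketch with invented names):

Code:
#include <cstdint>
#include <iostream>

// Hypothetical host-qualified PID: the host half makes locality checkable.
struct GlobalPid {
  uint32_t host;       // which machine the process lives on
  uint32_t local_pid;  // the PID on that machine
};

constexpr uint32_t kLocalHost = 0;  // assumed id for "this machine"

void SendSignal(GlobalPid pid, int signum) {
  if (pid.host == kLocalHost) {
    std::cout << "deliver signal " << signum << " locally to pid "
              << pid.local_pid << "\n";  // fast local path
  } else {
    // Network path; a multi-hop link would just forward the same RPC again.
    std::cout << "RPC signal " << signum << " to host " << pid.host
              << ", pid " << pid.local_pid << "\n";
  }
}

int main() {
  SendSignal({kLocalHost, 123}, 9);  // local
  SendSignal({3, 456}, 9);           // remote
}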
AndrewAPrice
Re: Standardized IPC protocol
I finally got two-way messaging in. I am happy with the API. Here is my syntax for both sync and async calls:
I built a userspace fiber framework so when sync calls blocked, the thread could keep responding to other messages.
It was a lot of work. It's the sort of work that only people on this forum can appreciate!
Code:
// Async: the callback is invoked when the response arrives.
graphics_driver.CallGetScreenSize(
    GraphicsDriver::GetScreenSizeRequest(), [](
        StatusOr<GraphicsDriver::GetScreenSizeResponse> response) {
      std::cout << "Async screen size is " << response->GetWidth() << " x "
                << response->GetHeight() << std::endl;
    });

// Sync: blocks the calling fiber until the response arrives.
auto screen_size = *graphics_driver.CallGetScreenSize(
    GraphicsDriver::GetScreenSizeRequest());
std::cout << "Sync screen size is " << screen_size.GetWidth() << " x "
          << screen_size.GetHeight() << std::endl;
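Roughly, the sync wrapper parks the current fiber and the message loop resumes it when the matching response arrives. A simplified sketch of that shape (invented names; a condition variable stands in for the real user-level stack switch):

Code:
#include <chrono>
#include <condition_variable>
#include <functional>
#include <iostream>
#include <mutex>
#include <thread>
#include <unordered_map>

// Simplified sketch: a real fiber framework swaps user-level stacks, but a
// condition variable shows the same "park until the response" shape.
std::mutex mu;
std::unordered_map<uint64_t, std::function<void(int)>> pending_calls;

int CallSync(uint64_t message_id) {
  std::condition_variable cv;
  bool done = false;
  int response = 0;
  {
    std::lock_guard<std::mutex> lock(mu);
    pending_calls[message_id] = [&](int r) {  // completion for this call
      std::lock_guard<std::mutex> inner(mu);
      response = r;
      done = true;
      cv.notify_one();  // "resume" the parked caller
    };
  }
  std::unique_lock<std::mutex> lock(mu);
  cv.wait(lock, [&] { return done; });  // "park": the loop keeps running
  return response;
}

void DeliverResponse(uint64_t message_id, int value) {
  std::function<void(int)> callback;
  {
    std::lock_guard<std::mutex> lock(mu);
    callback = std::move(pending_calls[message_id]);
    pending_calls.erase(message_id);
  }
  callback(value);
}

int main() {
  std::thread message_loop([] {
    std::this_thread::sleep_for(std::chrono::milliseconds(50));  // crude
    DeliverResponse(1, 42);  // pretend a response message arrived
  });
  std::cout << "sync response: " << CallSync(1) << "\n";
  message_loop.join();
}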
My OS is Perception.
Re: Standardized IPC protocol
Congrats, Andrew!
moonchild wrote:Why not make PIDs know their own hosts? Then any pid-involving call can check if the pid's host is local, and if not make an RPC.

Interesting question. I think it would help even in tracing a server through multiple hops.
Kaph — a modular OS intended to be easy and fun to administer and code for.
"May wisdom, fun, and the greater good shine forth in all your work." — Leo Brodie
"May wisdom, fun, and the greater good shine forth in all your work." — Leo Brodie
AndrewAPrice
Re: Standardized IPC protocol
My API for sending dynamically sized messages is more verbose than I'd like.
This code:
Code:
// Create two draw calls. One at 10,10 and one at 200,200.
Permebuf<GraphicsDriver::RunCommandsMessage> commands;
auto command_1 = commands->MutableCommands();
auto command_1_oneof = commands.AllocateOneOf<GraphicsCommand>();
command_1.Set(command_1_oneof);
auto command_1_copy_texture_to_position = command_1_oneof.MutableCopyTextureToPosition();
command_1_copy_texture_to_position.SetSourceTexture(texture_id);
command_1_copy_texture_to_position.SetDestinationTexture(0); // The screen
command_1_copy_texture_to_position.SetLeftDestination(10);
command_1_copy_texture_to_position.SetTopDestination(10);
auto command_2 = command_1.InsertAfter();
auto command_2_oneof = commands.AllocateOneOf<GraphicsCommand>();
command_2.Set(command_2_oneof);
auto command_2_copy_texture_to_position = command_2_oneof.MutableCopyTextureToPosition();
command_2_copy_texture_to_position.SetSourceTexture(texture_id);
command_2_copy_texture_to_position.SetDestinationTexture(0); // The screen
command_2_copy_texture_to_position.SetLeftDestination(200);
command_2_copy_texture_to_position.SetTopDestination(200);
// Send the draw calls.
graphics_driver.SendRunCommands(std::move(commands));
Is the equivalent of building a data structure that, if printed to pseudo-JSON, would look like:

Code:
{
  Commands: [
    {
      CopyTextureToPosition: {
        source_texture: 1,
        destination_texture: 0,
        left_destination: 10,
        top_destination: 10
      }
    },
    {
      CopyTextureToPosition: {
        source_texture: 1,
        destination_texture: 0,
        left_destination: 200,
        top_destination: 200
      }
    }
  ]
}
The Permebuf<> is what holds all the data, and the internal data is page aligned. All of the other objects are references into the Permebuf<>'s memory. When I talk to the graphics driver, I use std::move(commands), and the pages owned by the Permebuf<> get sent to the receiving process. Now I can send arbitrarily sized messages between processes.

The slowest part is probably all the dynamic allocation and release when you create and destroy Permebuf<> objects (I also have 32-byte 'mini messages' that can live on the stack and be sent using just registers). I'm hoping to speed this up one day by letting userland programs keep a local pool of recently released pages, so we can avoid syscalls when creating and destroying small Permebufs.
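Something like this, roughly (a sketch; std::aligned_alloc stands in for the real "give me a page" syscall):

Code:
#include <cstddef>
#include <cstdlib>
#include <vector>

// Sketch of a userland page pool: recycle recently released pages so small
// Permebuf-style allocations can skip the kernel entirely.
class PagePool {
 public:
  static constexpr size_t kPageSize = 4096;
  static constexpr size_t kMaxCached = 16;

  void* Allocate() {
    if (!free_pages_.empty()) {
      void* page = free_pages_.back();  // fast path: no syscall
      free_pages_.pop_back();
      return page;
    }
    // Slow path: stand-in for the real page-allocation syscall.
    return std::aligned_alloc(kPageSize, kPageSize);
  }

  void Release(void* page) {
    if (free_pages_.size() < kMaxCached) {
      free_pages_.push_back(page);  // keep it warm for the next Permebuf
    } else {
      std::free(page);  // pool is full: hand the page back
    }
  }

 private:
  std::vector<void*> free_pages_;
};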
My OS is Perception.
Re: Standardized IPC protocol
AndrewAPrice wrote:My API for sending dynamically sized messages is more verbose than I'd like.

Yes, that looks rather complicated. Is there any particular reason why you are using a complex structured IPC transport layer rather than a basic unstructured one, with support for structured messages left to a regular user library? Structured RPC adds overhead that is completely unnecessary for some types of servers (and depending on the security model of your OS, it may make things harder to secure). Also, does your IPC transport layer have a hardcoded set of classes for each message type and server in your system, or are the message classes provided by the individual servers/interface libraries? In addition, is there any reason why you didn't just make the various fields of the message arguments to the constructor, rather than requiring them to be set afterwards with accessors? That would make it less verbose.
Developer of UX/RT, a QNX/Plan 9-like OS
AndrewAPrice
Re: Standardized IPC protocol
andrew_w wrote:Yes, that looks rather complicated.

I want to really clean up my API.
andrew_w wrote:Is there any particular reason why you are using a complex structured IPC transport layer rather than a basic unstructured one, with support for structured messages left to a regular user library? Also, does your IPC transport layer have a hardcoded set of classes for each message type and server in your system, or are the message classes provided by the individual servers/interface libraries?

I built my own IDL called Permebuf. It's mostly userspace, other than the ability to send pages to other processes and to discover named services. I have .permebuf files (here's an example of my graphics device) and a permebuf-to-C++ compiler that generates the classes as well as server and client stubs. I aimed for zero serialization - that is, you read and write to the Permebuf in situ (Permebuf references are just a buffer pointer + offset). Permebufs are meant to be write-once data structures (you can update them, but since memory is always allocated at the end of a Permebuf, you can't release individual objects without releasing the entire Permebuf), but they're super fast. I was inspired by gRPC/Protobufs. The theory is that by building my OS's microkernel ecosystem around Permebufs, I could write a Permebuf-to-JavaScript/Rust/Go/etc. compiler, and any program or server could be implemented in any language that supports Permebufs.
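Conceptually, a reference boils down to something like this (a simplified sketch, not the generated code):

Code:
#include <cstdint>
#include <cstring>
#include <vector>

// Simplified sketch of "zero serialization": typed reads and writes happen
// in place at (buffer, offset); nothing is ever packed or unpacked.
class Buffer {
 public:
  explicit Buffer(size_t size) : bytes_(size) {}
  uint8_t* at(size_t offset) { return bytes_.data() + offset; }

 private:
  std::vector<uint8_t> bytes_;
};

struct Ref {
  Buffer* buffer;  // the backing Permebuf-style memory
  size_t offset;   // where this "object" lives inside it

  void WriteU32(size_t field, uint32_t value) {
    std::memcpy(buffer->at(offset + field), &value, sizeof(value));
  }
  uint32_t ReadU32(size_t field) const {
    uint32_t value;
    std::memcpy(&value, buffer->at(offset + field), sizeof(value));
    return value;
  }
};

int main() {
  Buffer buffer(4096);   // one page of message memory
  Ref ref{&buffer, 64};  // an object allocated at offset 64
  ref.WriteU32(0, 123);  // set a field in place
  return ref.ReadU32(0) == 123 ? 0 : 1;
}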
andrew_w wrote:In addition, is there any reason why you didn't just make the various fields of the message arguments to the constructor rather than requiring them to be set afterwards with accessors? That would make it less verbose.

Permebufs are binary backwards compatible (that is, you can rename, add, and remove fields), even though obviously renaming will break code that calls "SetName" if "Name" no longer exists. I'm thinking about how I could simplify it with constructors like you mentioned, and possibly aggregate initialization.
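For example, something along these lines (hypothetical; this uses C++20 designated initializers, and it's not what the Permebuf compiler generates today):

Code:
#include <cstdint>

// Hypothetical shape of a less verbose message API: fields are set at
// construction instead of through a chain of setters.
struct CopyTextureToPosition {
  uint64_t source_texture = 0;
  uint64_t destination_texture = 0;  // 0 = the screen
  int32_t left_destination = 0;
  int32_t top_destination = 0;
};

int main() {
  uint64_t texture_id = 1;  // stand-in for a real texture handle
  CopyTextureToPosition command{
      .source_texture = texture_id,
      .destination_texture = 0,
      .left_destination = 10,
      .top_destination = 10,
  };
  (void)command;  // would be appended to a Permebuf and sent
  return 0;
}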
My OS is Perception.