Non-intterupt driven OS
Cyao
Non-intterupt driven OS
I feel like x86 is all about interrupts: it uses interrupts to keep track of time, to see when a key is pressed, and to handle exceptions. But in some architectures, like RISC-V, there aren't any interrupts, so I wonder: how do you efficiently handle the things that interrupts do? Or is polling the only option?
Re: Non-intterupt driven OS
Cyao wrote: But in some architectures, like RISC-V, there aren't any interrupts,

Yeah, I highly doubt that. A small amount of research showed me that RISC-V does indeed have interrupt handling. Anything else would have been extremely weird. Now, there is only a single architectural interrupt handler, but all that means is that you need an interrupt controller, and you need to ask it who actually caused the interrupt.
Without interrupts you would have to poll for everything, that is true. Which is why every CPU since the 6502 at least has had them. You couldn't even sleep the CPU without interrupts, but would have to run it at 100% the whole time. That will be hell on the thermals and on power consumption. And that is why no CPU will ever come without interrupts.
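To make the sleeping point concrete, here is a minimal x86 sketch of the two idle strategies; poll_devices() is a made-up placeholder, not any real API:

```c
/* With interrupts enabled, HLT parks the core until the next interrupt
 * arrives, so an idle system draws little power. Without interrupts,
 * the only way to notice events is to spin and poll forever. */
static void poll_devices(void) { /* read each device's status register */ }

static void idle_with_interrupts(void)
{
    for (;;)
        __asm__ volatile ("sti; hlt");  /* sleep until any interrupt */
}

static void idle_without_interrupts(void)
{
    for (;;)
        poll_devices();                 /* 100% CPU load, all the time */
}
```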
Carpe diem!
Re: Non-intterupt driven OS
nullplan wrote: Yeah, I highly doubt that. A small amount of research showed me that RISC-V does indeed have interrupt handling. Anything else would have been extremely weird. Now, there is only a single architectural interrupt handler, but all that means is that you need an interrupt controller, and you need to ask it who actually caused the interrupt.
[...]

Many microcontrollers operate quite well without interrupts. You have a main loop that polls all the relevant hardware and handles any relevant changes. This gives known response times, and you are always certain that things will work. Of course, you would not want to run an operating system this way, or any task with some complexity, but for simple tasks it works well.
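For illustration, a minimal C sketch of such a polling "superloop"; every device name here is invented for the example, standing in for volatile reads of memory-mapped status registers:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Invented stand-ins for memory-mapped status-register reads. */
static bool uart_byte_ready(void)    { return false; }
static uint8_t uart_read_byte(void)  { return 0; }
static bool timer_tick_pending(void) { return false; }

static void handle_byte(uint8_t b) { printf("byte: %u\n", b); }
static void handle_tick(void)      { /* advance the software clock */ }

int main(void)
{
    for (;;) {                       /* the classic superloop */
        if (uart_byte_ready())       /* poll each peripheral in turn */
            handle_byte(uart_read_byte());
        if (timer_tick_pending())
            handle_tick();
        /* worst-case response time = one full trip around the loop */
    }
}
```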
There is another class of much more complicated hardware that processes things in parallel rather than in sequence (FPGAs), and then you don't need interrupts either. The drawback is that you need to define, in Verilog or VHDL, exactly what the hardware should do and then configure the FPGA with that design. However, FPGAs can deliver 1,000 G multiply-add operations per second (for instance, 2,000 DSP slices each completing one multiply-add per cycle at 500 MHz), something that is quite impossible to achieve with any CPU, even multicore versions.
There is also a mixture of FPGA and sequential programming: you can add processor cores to an FPGA that interact with the dedicated hardware. The processor core could take interrupts from the FPGA hardware design, but the FPGA itself would work in parallel, without interrupts.
Going a step further, you could plug an FPGA into a PCIe slot in your PC, and then use interrupts from the FPGA in your operating system driver.
Cyao
Re: Non-intterupt driven OS
nullplan wrote: Yeah, I highly doubt that. A small amount of research showed me that RISC-V does indeed have interrupt handling. [...]

Ah yes! I just found out that interrupts are described in the privileged volume of the spec, which I hadn't looked at (and that I've been spelling "interrupt" wrong the whole time).
nullplan wrote: However, FPGAs can deliver 1,000 G multiply-add operations per second, something that is quite impossible to achieve with any CPU, even multicore versions.

Why can FPGAs deliver 1,000 G multiply-add operations per second when CPUs can't? Can't CPU manufacturers just copy the FPGA designs so that their chips deliver that many operations per second too? Or do you mean "any mainstream CPU"?
Re: Non-intterupt driven OS
Same way GPUs can -- they specialize towards it, and throw an order of magnitude or two more units at it. Rumor is at least one of the big GPUs is basically an FPGA under the hood, or has a large subsection dedicated to that.
Intel has packaged FPGAs into some Xeon parts to serve as reconfigurable coprocessors.
They probably won't show up in desktop chips any time soon; their main purpose tends to be faster logic-design iteration, rapid prototyping, or serving as a highly reconfigurable but also highly specialized processor. They have probably gutted a hefty portion of the mid-end DSP market, though.
Re: Non-intterupt driven OS
reapersms wrote: Same way GPUs can -- they specialize towards it, and throw an order of magnitude or two more units at it. [...]

Even if a general-purpose CPU could implement 1,000 DSP slices, the serial nature of C programs would not be able to use more than one at a time. By adding more cores, more could be used simultaneously, but then you need multiple threads running on those cores to make use of them. I don't see CPUs with 1,000 cores coming anytime soon, and I don't see applications that can spread their work across 1,000 threads either.
The main difference between typical programming languages and FPGA languages is the serial-versus-parallel issue. I can use Verilog to create a design that uses 1,000 DSP slices in parallel, and it can achieve very good performance, but the same cannot easily be done with serial languages like C.
Re: Non-intterupt driven OS
They kind of can, via HLSL and friends, for yon 3000-ish core GPUs, but those definitely don't do well at branchy, general purpose code. If someone did try throwing a thousand or so MAC units on a CPU, it'd certainly be part of some big SIMD thing.
Right tool for the right job and all that. Chainsaws for trees, breadknives for bread, python for glue, C for compute, etc.
As the general purpose CPUs have gotten bigger and better, the various helper coprocessors that were ubiquitous throughout the 8 and 16 bit eras have mostly disappeared from the limelight (GPUs excepted), but there's still plenty of them scattered around. Mostly to offload things so the CPU isn't having to burn instructions and loop overhead polling or bit-banging things.
Audio is probably the weirdest spot: the value-add for anything more complicated than a raw DAC has dropped off enough, and the consumer-side problem domain stopped quadrupling in size every 6 months. Though it's definitely questionable whether much of Creative Labs' excursions past the SB Live were truly worth anything; their use of the stencil-shadow patent to force EAX support into games certainly raises questions.
As for the original question, there are certainly some embedded OSs out there that rely mainly on polling for consistent, predictable (albeit maybe not quite as fast) response times. The CPUs they're running on probably still have some interrupt hardware in them. Handling memory protection violations without exceptions/interrupts is probably not very convenient though.
Also, x86 of late doesn't really use interrupts for either time or keyboards anymore outside of legacy support -- there's a plethora of timers available now, and USB is host-polled.
Re: Non-intterupt driven OS
reapersms wrote: Also, x86 of late doesn't really use interrupts for either time or keyboards anymore outside of legacy support -- there's a plethora of timers available now, and USB is host-polled.

I didn't think it was possible to embody the Dunning-Kruger effect in this succinct a manner, but you succeeded. Bravo.
In actuality, almost all hardware has been moving toward more interrupts. Network cards now get one interrupt vector per queue, and xHCI can have something like 1,000 interrupters. Yes, USB is host-polled, but that is handled in the host controller; the CPU doesn't notice. As for time, of course operating systems still need things to happen in a timely manner, and they use interrupts to do it. The best interrupting time source is probably the LAPIC timer. Yeah, you don't need it for the system time, but you still need certain things to happen at certain times.
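As a taste of how cheap the LAPIC timer is to drive, here is a rough one-shot sketch; it assumes the xAPIC is mapped at its default physical base 0xFEE00000, and the register offsets (LVT Timer 0x320, Initial Count 0x380, Divide Configuration 0x3E0) are the ones from the Intel SDM:

```c
#include <stdint.h>

static volatile uint32_t *const lapic = (volatile uint32_t *)0xFEE00000;

static void lapic_write(unsigned off, uint32_t val)
{
    lapic[off / 4] = val;
}

/* Deliver `vector` once, after `ticks` divided bus clocks. */
void lapic_arm_oneshot(uint8_t vector, uint32_t ticks)
{
    lapic_write(0x3E0, 0x3);     /* divide the timer clock by 16 */
    lapic_write(0x320, vector);  /* one-shot: timer-mode bits left at 0 */
    lapic_write(0x380, ticks);   /* writing Initial Count starts the timer */
}
```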
Carpe diem!
Re: Non-intterupt driven OS
So did that trend on the high-end networking side of moving everything to epoll reverse? Or was that more just a question of the userland interface? (I recall seeing various amounts of listserv traffic about that, though as I think about it more, that *was* a long time ago. IIRC it had to do with high-volume cases, where the interrupt rate would be high enough to start bottlenecking things.)
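For anyone following along, the epoll pattern being referred to looks roughly like this (a minimal sketch, assuming an already-open socket fd). Note that epoll is only the userland readiness interface; underneath, the kernel may still be mixing interrupts with NAPI-style polling:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/epoll.h>

/* Block until `fd` is readable. epoll_wait() sleeps in the kernel --
 * no userland busy-polling -- regardless of how the kernel itself
 * learns about incoming packets. */
static void wait_readable(int fd)
{
    int ep = epoll_create1(0);
    if (ep < 0) { perror("epoll_create1"); exit(1); }

    struct epoll_event ev = { .events = EPOLLIN, .data.fd = fd };
    if (epoll_ctl(ep, EPOLL_CTL_ADD, fd, &ev) < 0) {
        perror("epoll_ctl");
        exit(1);
    }

    struct epoll_event ready[8];
    int n = epoll_wait(ep, ready, 8, -1);  /* -1 = wait forever */
    if (n > 0) {
        /* ready[0].data.fd is readable; go read() it */
    }
}
```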
The USB one gets iffy, in that for at least one of the controllers the CPU still had to queue up the transfers, and the interrupt fired on transfer completion. Granted, all my recent practical experience there has been from the device end. I take it xHCI takes a more active role in things to avoid poking the CPU for do-nothing cases.
So fine, there are still plenty of interrupts, it's complicated, your mileage may vary, precise parameters will vary from manifestation to manifestation, etc.
Re: Non-intterupt driven OS
Cyao wrote: in some architectures, like RISC-V, there aren't any interrupts, so I wonder: how do you efficiently handle the things that interrupts do? Or is polling the only option?

There indeed are some that don't support interrupts.
Take a look at the Parallax Propeller. On a single chip you have several cores, each of which can do I/O and communicate with the other cores via shared RAM with serialized access. So, for example, you could fully dedicate one core to some I/O via polling, while the other core(s) do other tasks, rarely getting distracted from them.
When you have huge continuous data flows through a system, there's little sense in generating an interrupt for every event and handling it the way a general-purpose CPU does, with context switches and such. A system like a big internet router is designed to be very parallel and to do most of its basic work in a very synchronous way, with mostly fixed response times and throughput.
Re: Non-intterupt driven OS
The main CPU of the Cray 1 had no interrupts. Auxiliary processors handled all the IO, and the main CPU polled them. AFAIK the auxiliary processors did have interrupts.
It's worth noting that if you have no interrupts at all, you can't do preemptive multitasking. This was evidently fine for the Cray 1, but modern supercomputers run Linux -- a preemptive multitasking OS. At the other end of the power scale, I wouldn't want an OS without some preemption. I'm targeting hardware where preemption will be very slow thanks to the need to preserve process state with limited RAM and no MMU. My plan is to rely mostly on cooperative multitasking, but to have a preemptive task switch every 250ms or so to handle processes which might occasionally exceed a reasonable time slot. It'll also be useful for library code.
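In outline, the policy I have in mind looks like this; now_ms() and switch_to_next_task() are hypothetical HAL hooks each port would supply:

```c
#include <stdint.h>

/* Hypothetical HAL hooks, supplied by the target port. */
extern uint32_t now_ms(void);           /* free-running millisecond counter */
extern void switch_to_next_task(void);  /* save state, pick the next task   */

static uint32_t slice_start_ms;

/* Well-behaved tasks call this at convenient points. */
void task_yield(void)
{
    slice_start_ms = now_ms();
    switch_to_next_task();
}

/* Called from an infrequent timer tick: the expensive forced switch only
 * happens for tasks that overrun their ~250 ms budget. */
void preempt_if_hogging(void)
{
    if (now_ms() - slice_start_ms >= 250)
        task_yield();
}
```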
If you're after FPGAs, you might want one of the processors which combine ARM cores with an FPGA. The 9front community built an SBC around a Xilinx chip with 2 ARM cores and FPGA which they used for SATA, DisplayPort graphics, Ethernet, and other stuff.
Kaph — a modular OS intended to be easy and fun to administer and code for.
"May wisdom, fun, and the greater good shine forth in all your work." — Leo Brodie
"May wisdom, fun, and the greater good shine forth in all your work." — Leo Brodie
Re: Non-intterupt driven OS
eekee wrote: If you're after FPGAs, you might want one of the processors which combine ARM cores with an FPGA. [...]

Nah, I don't want to run Linux on the FPGA. My idea is that it is better to configure the FPGA over PCIe using BARs than to run a CPU core (MicroBlaze) on the Xilinx chip.
The example design for the ADC board I have does indeed use a CPU core on the FPGA and interfaces with the built-in network, but that solution is unable to stream data in real time or to add filters to the design. Their code configures the system over SPI from the CPU core, but I find it much better to do this in the RDOS device driver through the PCI BAR.
Also, the CPU core takes up quite a bit of the FPGA's resources, which is yet another reason why I prefer to use only Verilog and configure through PCIe.
Re: Non-intterupt driven OS
rdos wrote: Nah, I don't want to run Linux on the FPGA. My idea is that it is better to configure the FPGA over PCIe using BARs than to run a CPU core (MicroBlaze) on the Xilinx chip.
[...]

Oh, I get'cha. It's different for me as I'm using microcontrollers; I have no reason to support PCIe in the first place. As for the 9fronters, they're fine with PCIe, but they wanted to build their own computer as near from scratch as they could. There was little reason for it to have PCIe.
Kaph — a modular OS intended to be easy and fun to administer and code for.
"May wisdom, fun, and the greater good shine forth in all your work." — Leo Brodie
"May wisdom, fun, and the greater good shine forth in all your work." — Leo Brodie
AndrewAPrice
Re: Non-intterupt driven OS
eekee wrote: It's worth noting that if you have no interrupts at all, you can't do preemptive multitasking. [...] My plan is to rely mostly on cooperative multitasking, but to have a preemptive task switch every 250ms or so to handle processes which might occasionally exceed a reasonable time slot.

I've found some creative workarounds to this.
You could write your code very asynchronously, e.g. JavaScript style, where everything is async/await. You would still hang on true infinite loops, but at least your code blocks would be fairly granular.
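In C that style comes out as hand-rolled state machines, which is roughly what async/await lowers to anyway. A toy sketch: each call runs one small step and returns, so a round-robin loop stays responsive as long as nothing truly loops forever:

```c
#include <stdio.h>

struct task { int state; int i; };

/* Runs one small step; returns 0 while there is more work, 1 when done. */
static int count_task_step(struct task *t)
{
    switch (t->state) {
    case 0:
        t->i = 0;
        t->state = 1;
        /* fall through */
    case 1:
        if (t->i < 5) {
            printf("step %d\n", t->i++);
            return 0;            /* the "await": give the CPU back */
        }
        t->state = 2;
        /* fall through */
    default:
        return 1;                /* finished */
    }
}

int main(void)
{
    struct task a = {0}, b = {0};
    int done_a = 0, done_b = 0;
    while (!done_a || !done_b) {           /* cooperative round-robin */
        if (!done_a) done_a = count_task_step(&a);
        if (!done_b) done_b = count_task_step(&b);
    }
    return 0;
}
```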
I also recall an "OS" named GIMI written in QBasic that would achieve multitasking by round-robin interpreting each script.
My OS is Perception.
Schol-R-LEA
Re: Non-intterupt driven OS
Niklaus Wirth got a bee in his bonnet about interrupts and determinism in the mid-1980s, so the Oberon OS and language were designed entirely around cooperative multitasking. The language and the OS are tightly interwoven, with the compiler designed to automatically insert passes to the scheduler at regular intervals - specifically, any loop would end by passing control to the scheduler, or at least do so after a certain number of iterations. Needless to say, this method is highly dependent on that specific compiler being used for everything.
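In C terms, the transformation amounts to something like the following; scheduler_pass() is an invented name standing in for whatever the Oberon runtime actually calls:

```c
enum { YIELD_INTERVAL = 1024 };

extern void scheduler_pass(void);  /* invented: hand the CPU to the scheduler */

void busy_work(int total)
{
    int budget = YIELD_INTERVAL;
    for (int i = 0; i < total; i++) {
        /* ...loop body exactly as the programmer wrote it... */
        if (--budget == 0) {       /* check inserted by the compiler */
            scheduler_pass();
            budget = YIELD_INTERVAL;
        }
    }
}
```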
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
Ordo OS Project
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.