True Streaming File I/O

SpyderTL
Member
Posts: 1074
Joined: Sun Sep 19, 2010 10:05 pm

True Streaming File I/O

Post by SpyderTL »

I'm pretty sure I know what most of you guys are going to say, so let me start off by saying that if you think this idea is a bad idea, in general, don't bother posting a reply. I'll take your silence as a generic "that's stupid" response, by default. :)

On the other hand, if you have a specific concern that you would like to bring up that needs to be taken into consideration, feel free to elaborate.

So, it just so happens that I am working on a file system explorer (in C#) to read the raw file system of a disk and display all of the data tables and file entries in a tree view that can be navigated, similar to Windows Explorer (but displaying the raw data tables instead of just file icons). I also just finished reading a thread by wxwsk8er asking about reading individual bytes for a file system. Of course, the answer is that you can't read individual bytes from a storage device... you have to read one or more blocks.

So, I ran into an issue with the file system explorer where Windows would randomly throw an error when reading individual bytes from a raw file stream. Google turned up a post about this issue where, apparently, Windows may or may not require you to read block-aligned bytes, depending on what mode Windows decided to open the raw file in, the way the device driver handles data transfers, etc. So I changed my code to read bytes in 512- or 2048-byte chunks, which fixed the problem. But it got me thinking...
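
For what it's worth, the fix looks roughly like this in C rather than the C# I'm actually using, since it shows the Win32 calls directly. This is just a sketch: the \\.\C: path is a placeholder, and the 512-byte sector size is an assumption.

Code: Select all

/* Minimal sketch of a sector-aligned raw read on Windows.
   The volume path and the 512-byte sector size are assumptions. */
#include <windows.h>
#include <malloc.h>
#include <stdio.h>

int main(void)
{
    HANDLE h = CreateFileA("\\\\.\\C:", GENERIC_READ,
                           FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                           OPEN_EXISTING, FILE_FLAG_NO_BUFFERING, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return 1;

    /* With FILE_FLAG_NO_BUFFERING, the offset, the length, and the
       buffer address must all be sector-aligned. */
    BYTE *buf = (BYTE *)_aligned_malloc(512, 512);
    DWORD got = 0;
    if (ReadFile(h, buf, 512, &got, NULL))
        printf("read %lu bytes\n", (unsigned long)got);

    _aligned_free(buf);
    CloseHandle(h);
    return 0;
}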

Would it be possible to read individual bytes from the FDC / IDE controllers, and stream them one-by-one to the application? It seems plausible if you are reading in PIO (polling) mode. (DMA mode doesn't make much sense.) The controller itself may complain if the application does not read the data fast enough, but other than that, what would be the downside?

Obviously, multi-threading would be an issue. Only one file could be open at any given time, unless you wanted to time-slice your controller driver data access, and reset and resend read/write commands every time you had a "context-switch" to another "thread". I can see how that would be slow, if not impossible to manage.

What other disadvantages can you guys think of? Are there any compelling advantages to this approach?

For certain scenarios, I can see how this would be faster, overall, than copying a block to memory, then reading that block, if you are only looking for one or two values near the beginning of a block. For example, reading file system tables...

It would also allow the application to read data directly from the controller to the CPU without using any system memory, or at least, it would give you that option.

Of course, most modern devices don't even have the ability to read individual bytes, and rely on DMA transfers or PCI Bus Mastering to transfer data, so this approach would really only be applicable to older controllers, like the FDC and IDE controller.

Still, as a purely academic exercise, is there any reason that this would not work?

(Constructive comments only, please...)
Project: OZone
Source: GitHub
Current Task: LIB/OBJ file support
"The more they overthink the plumbing, the easier it is to stop up the drain." - Montgomery Scott
iansjack
Member
Posts: 4703
Joined: Sat Mar 31, 2012 3:07 am
Location: Chichester, UK

Re: True Streaming File I/O

Post by iansjack »

I think that you have already identified the flaws in this plan. They boil down to horrendous inefficiency.

Bear in mind that the controller has to read a whole sector from the disk before it can start to transfer any bytes out. And this process is orders of magnitude slower than transferring the data from the controller to a buffer in RAM. Now suppose you have two files open at once and are reading records from one, say 8 bytes each, and writing these records to the other file.

So you read 512 bytes from the disk to the controller (SLOOOW); transfer 8 bytes to RAM, throw away the rest of the buffer, write these 8 bytes to the other file, read the same 512 bytes to the controller (SLOW) again, transfer 8 bytes to RAM, throw them away, transfer another 8 bytes to RAM to get the second record, and so on. (And this ignores the fact that in reality you have to write a whole sector to the disk - it's not like reading, where you can throw away the part of the controller's buffer that you are not interested in; it's going to write the whole buffer to the disk, the 8 bytes you want plus the trash that is in the buffer already.)
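
To spell that out as code - and I stress that every function here is a hypothetical stand-in for the port-level work, not a real API:

Code: Select all

/* Hypothetical sketch of the unbuffered copy loop above; the
   controller_* and *_lba calls are made-up stand-ins. */
#include <stdint.h>

void copy_records_unbuffered(uint32_t count)
{
    for (uint32_t rec = 0; rec < count; rec++) {
        uint8_t record[8];
        uint32_t offset = (rec * 8) % 512;

        /* Every record forces a full sector read, because nothing
           was kept in RAM from the previous pass. */
        controller_start_read(source_lba(rec));     /* SLOW: whole sector */
        controller_discard_bytes(offset);           /* bytes we skip      */
        controller_fetch_bytes(record, 8);          /* the 8 we want      */
        controller_discard_bytes(512 - offset - 8); /* rest of the sector */

        /* And the write side still transfers a whole sector: the 8
           bytes we care about plus 504 bytes of whatever is there. */
        controller_start_write(dest_lba(rec));
        controller_send_bytes(record, 8);
        controller_send_padding(504);
    }
}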

It's nothing to do with multiple tasks. Even in a single-tasking system it is natural to have several files open at once. Many programs take input from one file and write output to another one.

So you lose efficiency - you'd have time to go and make a cup of coffee when running a simple file-processing program - and versatility. And you gain absolutely nothing. Apart from that, I can't see anything wrong with the idea.
Candy
Member
Posts: 3882
Joined: Tue Oct 17, 2006 11:33 pm
Location: Eindhoven

Re: True Streaming File I/O

Post by Candy »

It sounds like you're trying to design around a flaw in Windows' way of handling things, when good solutions to that flaw already exist.

For example, did you consider memory mapping the file? You can then read through it byte by byte and the full caching & loading will be hidden from you 100%. No need to even actually parse the file, you can just stream through it. This may also just work on Windows...
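
Something like this minimal sketch (Win32 C; "disk.img" is a placeholder file name, and error handling is kept minimal):

Code: Select all

/* Minimal sketch of a memory-mapped read-through on Windows;
   "disk.img" is a placeholder file name. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE f = CreateFileA("disk.img", GENERIC_READ, FILE_SHARE_READ,
                           NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (f == INVALID_HANDLE_VALUE)
        return 1;

    DWORD size = GetFileSize(f, NULL);
    HANDLE m = CreateFileMappingA(f, NULL, PAGE_READONLY, 0, 0, NULL);
    const unsigned char *p = m ? (const unsigned char *)
        MapViewOfFile(m, FILE_MAP_READ, 0, 0, 0) : NULL;
    if (!p)
        return 1;

    /* Walk the file byte by byte; the kernel pages it in on demand. */
    unsigned long sum = 0;
    for (DWORD i = 0; i < size; i++)
        sum += p[i];
    printf("checksum: %lu\n", sum);

    UnmapViewOfFile(p);
    CloseHandle(m);
    CloseHandle(f);
    return 0;
}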
SpyderTL
Member
Posts: 1074
Joined: Sun Sep 19, 2010 10:05 pm

Re: True Streaming File I/O

Post by SpyderTL »

iansjack wrote:Bear in mind that the controller has to read a whole sector from the disk before it can start to transfer any bytes out. And this process is orders of magnitude slower than transferring the data from the controller to a buffer in RAM. Now suppose you have two files open at once and are reading records from one, say 8 bytes each, and writing these records to the other file.
I don't think this is entirely accurate. At least for the FDC, I'm pretty sure it only has a 16-byte buffer, which must be read before it gets full, or the read operation fails with a buffer overrun error. (I think this is the case.)

But the internal workings of the controller and drive are not important... at least in the sense that there is nothing that you can do from the software side to change the way that these two components work. I'm only worried about the actual "transfer" of bytes from the controller to RAM, and I'm trying to figure out if it is possible to "read" data from the controller to the CPU without writing the data to RAM first. Since, in PIO mode, bytes are read from an I/O port by the CPU, it seems like this should be possible.

So, let's say that you are reading a table, which, at offset 16, contains a 32-bit block address of another table. Is it possible to send the command to the controller, read 16 bytes from the I/O port and throw them away, then read 4 bytes into the EAX register, then reset the controller and use the value in EAX to find the next table?

This is a pretty specific scenario, but if you are writing code to read file system tables, this is something that would be used quite a bit. And if it ended up being 10% faster than loading a 512 byte block into memory, and using no RAM at all, is it worth implementing?
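
In code, the idea would look something like this - a rough sketch in freestanding C with GCC inline assembly, assuming the primary ATA channel at ports 0x1F0-0x1F7, LBA28 addressing, and no error handling:

Code: Select all

/* Sketch of the table-walk idea in PIO mode: discard the first 16
   bytes of the sector, keep the next 4 as a 32-bit block address. */
#include <stdint.h>

static inline uint8_t inb(uint16_t port)
{
    uint8_t v;
    __asm__ volatile("inb %1, %0" : "=a"(v) : "Nd"(port));
    return v;
}
static inline uint16_t inw(uint16_t port)
{
    uint16_t v;
    __asm__ volatile("inw %1, %0" : "=a"(v) : "Nd"(port));
    return v;
}
static inline void outb(uint16_t port, uint8_t v)
{
    __asm__ volatile("outb %0, %1" : : "a"(v), "Nd"(port));
}

uint32_t read_table_pointer(uint32_t lba)
{
    outb(0x1F6, 0xE0 | ((lba >> 24) & 0x0F)); /* drive 0, LBA mode */
    outb(0x1F2, 1);                           /* one sector        */
    outb(0x1F3, lba & 0xFF);
    outb(0x1F4, (lba >> 8) & 0xFF);
    outb(0x1F5, (lba >> 16) & 0xFF);
    outb(0x1F7, 0x20);                        /* READ SECTORS, PIO */

    while (inb(0x1F7) & 0x80)                 /* wait for BSY clear */
        ;
    while (!(inb(0x1F7) & 0x08))              /* wait for DRQ       */
        ;

    for (int i = 0; i < 8; i++)               /* discard bytes 0-15 */
        (void)inw(0x1F0);
    uint32_t ptr = inw(0x1F0);                /* bytes 16-17        */
    ptr |= (uint32_t)inw(0x1F0) << 16;        /* bytes 18-19        */
    for (int i = 10; i < 256; i++)            /* drain the rest...  */
        (void)inw(0x1F0);                     /* (or reset instead) */
    return ptr;
}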
iansjack wrote:It's nothing to do with multiple tasks. Even in a single-tasking system it is natural to have several files open at once. Many programs take input from one file and write output to another one.
In a single-tasking system, you may have several files "open", but you can only read (or write) one file at a time, because one thread can only be executing one function at a time.

As for a multi-tasking system, it may still work if you queue all read/write requests, and the performance may not be too bad if you only read/write one block at a time and "swap" between requests when you reach the end of a block.
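
Roughly like this - the request type and the controller_read_block() call are made up purely for illustration:

Code: Select all

/* Hypothetical request queue: each pass moves every pending request
   forward by one block, so the driver "swaps" between files at block
   boundaries. controller_read_block() is a made-up driver call. */
#include <stdint.h>

typedef struct request {
    uint32_t lba;            /* next block to transfer */
    uint32_t blocks_left;
    uint8_t *dest;           /* where the next block goes */
    struct request *next;
} request_t;

void pump_queue(request_t *head)
{
    /* Round-robin, one block per request per pass, so no request
       starves while another one streams a large file. */
    for (request_t *r = head; r != 0; r = r->next) {
        if (r->blocks_left == 0)
            continue;
        controller_read_block(r->lba, r->dest);
        r->lba++;
        r->dest += 512;
        r->blocks_left--;
    }
}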

For another example of when this might be useful, let's say that you wanted to write a stream of bytes to a file. Let's say you are reading bytes from the sound card (microphone), and writing them to the hard drive. If you have sent the IDE controller a write command, and you are not using DMA mode, it will be waiting for individual bytes to be sent to the appropriate I/O port. As bytes are received from the sound card, they can be immediately written to the hard drive. Of course, this is essentially moving data from one hardware buffer, to the CPU, to another hardware buffer. Getting these synchronized so that you could reliably avoid buffer overruns/underruns would probably be more trouble than it's worth.

So, the only scenario I can think of where buffer-to-buffer transfers would not be an issue is either a) the source data is pre-loaded into memory, and then written one byte at a time to the device, or b) the bytes are generated on the fly, in code, and written out. For example, writing zeros, or random numbers. Not terribly useful, I'll admit.
SpyderTL
Member
Posts: 1074
Joined: Sun Sep 19, 2010 10:05 pm

Re: True Streaming File I/O

Post by SpyderTL »

Candy wrote:It sounds like you're trying to design around a flaw in Windows' way of handling things, when good solutions to that flaw already exist.

For example, did you consider memory mapping the file? You can then read through it byte by byte and the full caching & loading will be hidden from you 100%. No need to even actually parse the file, you can just stream through it. This may also just work on Windows...
No, it's just a coincidence that I ran into that issue a few minutes after discussing reading/writing blocks on OSDev.

I am working on my own OS, and I was considering implementing this functionality that could (optionally) be used to read/write bytes from/to a storage device, rather than reading/writing to a buffer first. I think it would be difficult to implement, error-prone, and difficult to manage from the OS level, but I may do it anyway, just as an academic exercise.

Who knows, it may come in useful some day...
iansjack
Member
Posts: 4703
Joined: Sat Mar 31, 2012 3:07 am
Location: Chichester, UK

Re: True Streaming File I/O

Post by iansjack »

The controller has to read a whole sector into its internal buffers in one go. For starters it would be far too inefficient to wait for the right part of the disk to be under the heads each time it read a byte or two. And, apart from that, how do you think that the controller knows where the heads are on the disk at any time? It reads the synchronization data between sectors so that it knows where the sector starts, then reads the whole sector in one go. Anything else would be error-prone as well as inefficient.

As for the business of multiple files, the problem is not reading two files at once, it is the interleaving of reads. If you read 16 bytes from one file then read or write 16 bytes to another, you have to start from scratch for each read. You don't have multiple input buffers on the chip to store the data read from multiple files. Really, this is so obvious that I can't think of a better way of putting it. There are good reasons for caching data, and you have to cache it somewhere. In any modern computer the 512, or nowadays 4096, bytes that you use to store a sector read from the disk is a drop in the ocean. RAM is just not that scarce.

Of course these arguments go out of the window when you consider SSDs, and it is possible that you might want to treat these as character rather than block devices and develop new controllers accordingly. But I think you then start to run into problems of how you efficiently address those individual bytes without dividing them into larger chunks. And it is very convenient to be able to treat all storage devices in the same way.
Owen
Member
Posts: 1700
Joined: Fri Jun 13, 2008 3:21 pm
Location: Cambridge, United Kingdom

Re: True Streaming File I/O

Post by Owen »

iansjack wrote:Of course these arguments go out of the window when you consider SSDs, and it is possible that you might want to treat these as character rather than block devices and develop new controllers accordingly. But I think you then start to run into problems of how you efficiently address those individual bytes without dividing them into larger chunks. And it is very convenient to be able to treat all storage devices in the same way.
If anything, you want to treat them as block devices with bigger blocks.
iansjack
Member
Posts: 4703
Joined: Sat Mar 31, 2012 3:07 am
Location: Chichester, UK

Re: True Streaming File I/O

Post by iansjack »

I'm not convinced of that. There is a certain attraction in the idea of being able to access any individual byte of any storage in a single operation, if it could be done efficiently. As I see it the main argument for block devices is because of the extreme inefficiency of reading individual bytes from a mechanical device. Once that storage device is, essentially, just RAM why not treat it as such?
Owen
Member
Posts: 1700
Joined: Fri Jun 13, 2008 3:21 pm
Location: Cambridge, United Kingdom

Re: True Streaming File I/O

Post by Owen »

iansjack wrote:I'm not convinced of that. There is a certain attraction in the idea of being able to access any individual byte of any storage in a single operation, if it could be done efficiently. As I see it the main argument for block devices is because of the extreme inefficiency of reading individual bytes from a mechanical device. Once that storage device is, essentially, just RAM why not treat it as such?
Because flash is inherently block based, as it is divided up into erase blocks. On contemporary bulk devices, these erase blocks are roughly 64kB in size. Before you can write to an already written part of the device, you must erase the entire block.
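
Concretely, a one-byte write to an already-programmed location costs something like this - the flash_* helpers are hypothetical primitives, and the 64kB figure is just the ballpark above:

Code: Select all

/* Why byte-granular writes don't map onto NAND: changing one byte in
   a programmed erase block means read-modify-erase-program over the
   whole block. The flash_* helpers are hypothetical primitives. */
#include <stdint.h>

#define ERASE_BLOCK 65536  /* ~64kB, per the figure above */

void flash_write_byte(uint32_t addr, uint8_t value)
{
    static uint8_t shadow[ERASE_BLOCK];     /* RAM copy of the block */
    uint32_t base = addr & ~(uint32_t)(ERASE_BLOCK - 1);

    flash_read_block(base, shadow);         /* 1. read 64kB           */
    shadow[addr - base] = value;            /* 2. change one byte     */
    flash_erase_block(base);                /* 3. erase 64kB (slow,   */
                                            /*    and wears the cells)*/
    flash_program_block(base, shadow);      /* 4. reprogram 64kB      */
}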

Significant work goes into flash translation layers (which let you pretend flash is like a "classical" block device, as found in SD cards, SSDs and USB flash drives), or into flash file systems (such as JFFS, YAFFS, UBIFS, which are used on 'raw'/'bare' NAND/NOR flash) in order to perform wear leveling and such.

Also, NAND flash in particular is built around reading a page at a time. (NOR flash is readable bytewise, but it tops out at around 32MB per device and is expensive, though fast.)
iansjack
Member
Posts: 4703
Joined: Sat Mar 31, 2012 3:07 am
Location: Chichester, UK

Re: True Streaming File I/O

Post by iansjack »

I'm not working on the assumption that persistent solid-state storage will always be based on the same technology that is used today.
thepowersgang
Member
Posts: 734
Joined: Tue Dec 25, 2007 6:03 am
Libera.chat IRC: thePowersGang
Location: Perth, Western Australia

Re: True Streaming File I/O

Post by thepowersgang »

From what I've seen of storage technologies, things are tending more and more towards block-based storage and bursty transmissions (from old slow floppies, to HDDs, and now low-latency SSDs). Networking is slightly different, but even there, traffic is usually bursty (with the exception of real-time video).

Designing your storage subsystem around the assumption that everything is capable of streaming will probably not be the best idea, especially for large files. (What you do at the C library level is completely different; you could quite easily do large read caching on files to simulate streaming from a block device.)

The important thing to realise is that for the foreseeable future, storage will be block-based (and is probably heading for larger block sizes, as latency becomes more of an issue), so the storage subsystem should be built around blocks.
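
As a sketch of the library-level point: simulate streaming on top of a block interface with a one-block read cache, where the read_block callback is an assumed device hook:

Code: Select all

/* Byte-stream reads simulated over a block device with a one-block
   cache; the read_block callback is an assumed device hook. */
#include <stdint.h>

#define BLOCK_SIZE 512

typedef struct {
    int (*read_block)(uint32_t lba, uint8_t *buf); /* device hook */
    uint8_t  cache[BLOCK_SIZE];
    uint32_t cached_lba;
    int      valid;
    uint32_t pos;        /* current byte offset into the device */
} bstream_t;

/* Return the next byte (0-255) or -1 on error, refilling the cache
   only when a block boundary is crossed. */
int bstream_getc(bstream_t *s)
{
    uint32_t lba = s->pos / BLOCK_SIZE;
    if (!s->valid || s->cached_lba != lba) {
        if (s->read_block(lba, s->cache) != 0)
            return -1;
        s->cached_lba = lba;
        s->valid = 1;
    }
    return s->cache[s->pos++ % BLOCK_SIZE];
}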
Kernel Development, It's the brain surgery of programming.
Acess2 OS (c) | Tifflin OS (rust) | mrustc - Rust compiler
Currently Working on: mrustc
bluemoon
Member
Posts: 1761
Joined: Wed Dec 01, 2010 3:41 am
Location: Hong Kong

Re: True Streaming File I/O

Post by bluemoon »

In contrast, it is interesting that even video streaming is shifting toward block-based delivery; for example, HLS is more widely used than RTSP, even for live video streaming. One of the reasons is that it's not cost-effective to maintain a steady bandwidth for a stream of data, especially on mobile. The same reasoning applies to computer internals: it's not cost-effective to maintain a steady bus stream, so one should use bursting mode instead.