digitalmars.D - memory-mapped files
- Andrei Alexandrescu (19/19) Feb 17 2009 Indeed, time and again, "testing is believing".
- grauzone (3/3) Feb 17 2009 Could you post compilable versions for both approaches, so that we can
- bearophile (8/13) Feb 17 2009 I don't like that byLineDirect() too much, it will become one of the mos...
- Brad Roberts (8/14) Feb 17 2009 You can drop the 'sliding' part. mmap tends to help when doing random
- Andrei Alexandrescu (6/18) Feb 17 2009 This all would make perfect sense if the performance was about the same
- Vladimir Panteleev (7/23) Feb 17 2009 Perhaps this may help:
- Sean Kelly (6/23) Feb 18 2009 If I had to guess, I'd say that the OS assumes every file will be read i...
- Benji Smith (3/6) Feb 18 2009 Pessimization? What a great word! I've never heard that before!
- Andrei Alexandrescu (3/11) Feb 18 2009 I've heard it first from Scott Meyers.
- Sergey Gromov (7/18) Feb 19 2009 I've heard this term in connection with premature optimization
- Andrei Alexandrescu (6/18) Feb 18 2009 Hey Brad,
- Walter Bright (3/6) Feb 18 2009 If you can build 4 windows executables, I can time them on my machine,
- Kagamin (2/9) Feb 19 2009 By default windows does random access optimisation simply sucking file i...
- Lionello Lunesu (8/12) Feb 17 2009 Random seeking in large files :)
- BCS (10/21) Feb 17 2009 paging is going to be built to move date in the fastest possible way so ...
- Kagamin (1/1) Feb 18 2009 May be mm scheme results in more calls to HDD?
- Kagamin (6/7) Feb 24 2009 from mac optimizations guidelines:
Indeed, time and again, "testing is believing". I tried a simple line splitting program in D with and without memory mapping against a 140MB file. The program just reads the entire file and does some simple string processing on it. The loop pattern looks like this:

    foreach (line; byLineDirect(stdin))
    {
        auto r = splitter(line, "|||");
        write(r.head, ":");
        r.next;
        writeln(r.head);
    }

The byLineDirect returns a range that uses memory mapped files when possible, or simple fread calls otherwise.

The memory-mapped version takes 2.15 seconds on average. I was fighting against Perl's equivalent 2.45. At some point I decided to try without memory mapping and I consistently got 1.75 seconds.

What the heck is going on? When does memory mapping actually help?

Andrei
Feb 17 2009
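[A minimal sketch of the comparison described above, for anyone wanting to reproduce it: byLineDirect and the head/next range primitives were part of a then-unreleased Phobos design, so this uses today's Phobos equivalents instead. The file path, the "|||" data format and the per-field processing are placeholders.]

    import std.algorithm : splitter;
    import std.mmfile;
    import std.stdio;

    // fread-style variant: buffered, line-by-line reads through File.byLine.
    void viaRead(string path)
    {
        foreach (line; File(path).byLine())
        {
            foreach (field; line.splitter("|||"))
            {
                // ... per-field processing goes here ...
            }
        }
    }

    // Memory-mapped variant: the whole file is exposed as one char[] slice.
    void viaMmap(string path)
    {
        // Keep `mmf` alive while `text` is in use; its destructor unmaps the file.
        auto mmf = new MmFile(path);       // read-only mapping
        auto text = cast(char[]) mmf[];

        foreach (line; text.splitter('\n'))
        {
            foreach (field; line.splitter("|||"))
            {
                // ... per-field processing goes here ...
            }
        }
    }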
Could you post compilable versions for both approaches, so that we can test it ourselves? I guess one would also need some input data.
Feb 17 2009
Andrei Alexandrescu:
> Indeed, time and again, "testing is believing".

Yep. Some time ago I read that the only science in "computer science" is in things like timing benchmarks and the like :-)

> foreach (line; byLineDirect(stdin))

I don't like that byLineDirect() too much; it will become one of the most used functions in scripting-like programs, so it deserves to be short & easy.

> write(r.head, ":");

Something tells me that such .head will become so common in D programs that my fingers will learn to write it while I sleep too :-)

> r.next;

.next is clear, nice, and short. Its only fault is that it doesn't sound much like something that has side effects... I presume it's not possible to improve this situation.

> What the heck is going on? When does memory mapping actually help?

You are scanning the file linearly, and the memory window you use is probably very small. In such a situation memory mapping is probably not the best thing. A memory mapping is useful when you, for example, operate with random access on a wider sliding window over the file.

Bye,
bearophile
Feb 17 2009
bearophile wrote:
>> What the heck is going on? When does memory mapping actually help?
>
> You are scanning the file linearly, and the memory window you use is probably very small. In such a situation memory mapping is probably not the best thing. A memory mapping is useful when you, for example, operate with random access on a wider sliding window over the file.

You can drop the 'sliding' part. mmap tends to help when doing random access (or sequential but non-contiguous, maybe) over a file. Pure streaming is handled pretty well by both patterns.

One nicety with mmap is that you can hint to the OS how you'll be using it via madvise. You can't do that with [f]read.

Later,
Brad
Feb 17 2009
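[A minimal sketch of the madvise hint mentioned above, assuming a POSIX system whose druntime exposes posix_madvise in core.sys.posix.sys.mman; error handling is reduced to asserts and the path is a placeholder.]

    import core.sys.posix.fcntl : O_RDONLY, open;
    import core.sys.posix.sys.mman;
    import core.sys.posix.sys.stat : fstat, stat_t;
    import core.sys.posix.unistd : close;
    import std.string : toStringz;

    // Map `path` read-only and tell the kernel we intend to read it
    // sequentially, so it can read ahead more aggressively.
    ubyte[] mapSequential(string path)
    {
        int fd = open(path.toStringz, O_RDONLY);
        assert(fd >= 0, "open failed");
        scope (exit) close(fd);            // the mapping outlives the descriptor

        stat_t st;
        fstat(fd, &st);
        auto len = cast(size_t) st.st_size;

        void* p = mmap(null, len, PROT_READ, MAP_PRIVATE, fd, 0);
        assert(p != MAP_FAILED, "mmap failed");

        // The actual hint; POSIX_MADV_RANDOM, WILLNEED etc. are the other policies.
        posix_madvise(p, len, POSIX_MADV_SEQUENTIAL);

        return (cast(ubyte*) p)[0 .. len];
    }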
Brad Roberts wrote:
> bearophile wrote:
>>> What the heck is going on? When does memory mapping actually help?
>>
>> You are scanning the file linearly, and the memory window you use is probably very small. In such a situation memory mapping is probably not the best thing. A memory mapping is useful when you, for example, operate with random access on a wider sliding window over the file.
>
> You can drop the 'sliding' part. mmap tends to help when doing random access (or sequential but non-contiguous, maybe) over a file. Pure streaming is handled pretty well by both patterns. One nicety with mmap is that you can hint to the OS how you'll be using it via madvise. You can't do that with [f]read.

This all would make perfect sense if the performance was about the same in the two cases. But in fact memory mapping introduced a large *pessimization*. Why? I am supposedly copying less data and doing less work. This is very odd.

Andrei
Feb 17 2009
On Wed, 18 Feb 2009 06:22:17 +0200, Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> wrote:

> Brad Roberts wrote:
>> bearophile wrote:
>>>> What the heck is going on? When does memory mapping actually help?
>>>
>>> You are scanning the file linearly, and the memory window you use is probably very small. In such a situation memory mapping is probably not the best thing. A memory mapping is useful when you, for example, operate with random access on a wider sliding window over the file.
>>
>> You can drop the 'sliding' part. mmap tends to help when doing random access (or sequential but non-contiguous, maybe) over a file. Pure streaming is handled pretty well by both patterns. One nicety with mmap is that you can hint to the OS how you'll be using it via madvise. You can't do that with [f]read.
>
> This all would make perfect sense if the performance was about the same in the two cases. But in fact memory mapping introduced a large *pessimization*. Why? I am supposedly copying less data and doing less work. This is very odd.

Perhaps this may help:
http://en.wikipedia.org/wiki/Memory-mapped_file#Drawbacks

--
Best regards,
 Vladimir                          mailto:thecybershadow gmail.com
Feb 17 2009
== Quote from Andrei Alexandrescu (SeeWebsiteForEmail erdani.org)'s article
> Brad Roberts wrote:
>> bearophile wrote:
>>>> What the heck is going on? When does memory mapping actually help?
>>>
>>> You are scanning the file linearly, and the memory window you use is probably very small. In such a situation memory mapping is probably not the best thing. A memory mapping is useful when you, for example, operate with random access on a wider sliding window over the file.
>>
>> You can drop the 'sliding' part. mmap tends to help when doing random access (or sequential but non-contiguous, maybe) over a file. Pure streaming is handled pretty well by both patterns. One nicety with mmap is that you can hint to the OS how you'll be using it via madvise. You can't do that with [f]read.
>
> This all would make perfect sense if the performance was about the same in the two cases. But in fact memory mapping introduced a large *pessimization*. Why? I am supposedly copying less data and doing less work. This is very odd.

If I had to guess, I'd say that the OS assumes every file will be read in a linear manner from front to back, and optimizes accordingly. There's no way of knowing how a memory-mapped file will be accessed however, so no such optimization occurs.

Sean
Feb 18 2009
Andrei Alexandrescu wrote:
> This all would make perfect sense if the performance was about the same in the two cases. But in fact memory mapping introduced a large *pessimization*. Why? I am supposedly copying less data and doing less

Pessimization? What a great word! I've never heard that before!

--benji
Feb 18 2009
Benji Smith wrote:
> Andrei Alexandrescu wrote:
>> This all would make perfect sense if the performance was about the same in the two cases. But in fact memory mapping introduced a large *pessimization*. Why? I am supposedly copying less data and doing less
>
> Pessimization? What a great word! I've never heard that before!
>
> --benji

I've heard it first from Scott Meyers.

Andrei
Feb 18 2009
Wed, 18 Feb 2009 20:56:16 -0800, Andrei Alexandrescu wrote:

> Benji Smith wrote:
>> Andrei Alexandrescu wrote:
>>> This all would make perfect sense if the performance was about the same in the two cases. But in fact memory mapping introduced a large *pessimization*. Why? I am supposedly copying less data and doing less
>>
>> Pessimization? What a great word! I've never heard that before!
>>
>> --benji
>
> I've heard it first from Scott Meyers.

I've heard this term in connection with premature optimization discussions. Like, premature optimization is investing time in improving something that doesn't really need to be improved. On the other hand, pessimization is doing something which is easy to avoid and is almost guaranteed to slow you down. Like using post-increment in C++.
Feb 19 2009
Brad Roberts wrote:
> bearophile wrote:
>>> What the heck is going on? When does memory mapping actually help?
>>
>> You are scanning the file linearly, and the memory window you use is probably very small. In such a situation memory mapping is probably not the best thing. A memory mapping is useful when you, for example, operate with random access on a wider sliding window over the file.
>
> You can drop the 'sliding' part. mmap tends to help when doing random access (or sequential but non-contiguous, maybe) over a file. Pure streaming is handled pretty well by both patterns. One nicety with mmap is that you can hint to the OS how you'll be using it via madvise. You can't do that with [f]read.

Hey Brad,

Nice advice on madvise, didn't know about it. Just in case it might be useful to someone, trying madvise with any of the four possible policies did not yield any noticeable change in timing for my particular test.

Andrei
Feb 18 2009
Andrei Alexandrescu wrote:
> Nice advice on madvise, didn't know about it. Just in case it might be useful to someone, trying madvise with any of the four possible policies did not yield any noticeable change in timing for my particular test.

If you can build 4 windows executables, I can time them on my machine, and we can see if windows behaves differently.
Feb 18 2009
Walter Bright Wrote:

> Andrei Alexandrescu wrote:
>> Nice advice on madvise, didn't know about it. Just in case it might be useful to someone, trying madvise with any of the four possible policies did not yield any noticeable change in timing for my particular test.
>
> If you can build 4 windows executables, I can time them on my machine, and we can see if windows behaves differently.

By default Windows does random-access optimisation, simply sucking the file into the cache, which (on XP) is faster than its sequential-access optimisation. It will behave quite well if all 400MB fit in your file cache.
Feb 19 2009
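[A rough Windows-side counterpart of the madvise hint, sketched under the assumption that core.sys.windows.windows declares the usual CreateFile constants: FILE_FLAG_SEQUENTIAL_SCAN asks the cache manager for sequential read-ahead rather than the default behaviour described above. Path handling and error checks are omitted.]

    version (Windows)
    {
        import core.sys.windows.windows;

        // Open a file with a hint that it will be read front to back.
        HANDLE openSequential(const(char)* path)
        {
            return CreateFileA(path, GENERIC_READ, FILE_SHARE_READ, null,
                               OPEN_EXISTING, FILE_FLAG_SEQUENTIAL_SCAN, null);
        }
    }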
> The memory-mapped version takes 2.15 seconds on average. I was fighting against Perl's equivalent 2.45. At some point I decided to try without memory mapping and I consistently got 1.75 seconds. What the heck is going on? When does memory mapping actually help?

Random seeking in large files :)

Sequential read can't possibly gain anything by using MM because that's what the OS will end up doing, but MM is using the paging system, which has some overhead (a page fault has quite a penalty, or so I've heard.)

I use std.mmfile for a simple DB implementation, where the DB file is just a large, >1GB, array of structs, conveniently accessible as a struct[] in D. (Primary key is the index, of course.)

L.
Feb 17 2009
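[A minimal sketch of the std.mmfile pattern Lionello describes; the Record layout and the file name are invented for illustration, and a real database would presumably open the mapping read-write via MmFile's Mode argument.]

    import std.mmfile;
    import std.stdio;

    struct Record
    {
        uint     id;
        double   value;
        char[32] name;
    }

    void main()
    {
        // Keep `mmf` alive while the slice is in use; its destructor unmaps the file.
        auto mmf = new MmFile("records.db");     // read-only mapping
        auto records = cast(Record[]) mmf[];     // the whole file as an array of structs

        // "Primary key is the index": record #1000 is simply records[1000].
        if (records.length > 1000)
            writeln(records[1000].id, " ", records[1000].value);
    }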
Hello Lionello,

>> The memory-mapped version takes 2.15 seconds on average. I was fighting against Perl's equivalent 2.45. At some point I decided to try without memory mapping and I consistently got 1.75 seconds. What the heck is going on? When does memory mapping actually help?
>
> Random seeking in large files :) Sequential read can't possibly gain anything by using MM because that's what the OS will end up doing, but MM is using the paging system, which has some overhead (a page fault has quite a penalty, or so I've heard.)

Paging is going to be built to move data in the fastest possible way, so it would be expected that using MM would be fast. The only things I see getting in the way would be 1) it uses up lots of address space and 2) when you load the file other ways you might be able to lump reads or hint to the OS to preload.

It would be neat to see what happens if you MM a file and force page faults on the whole thing right up front (IIRC there is an asm op that forces a page fault but doesn't wait for it). Even better might be to force a page fault for N pages ahead of where you are processing.
Feb 17 2009
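[A sketch of the touch-ahead idea above using plain loads rather than a prefetch instruction; the page size and look-ahead distance are assumptions, and a real implementation would have to make sure the dummy loads aren't optimized away.]

    // Fault in pages a fixed distance ahead of the read cursor so the disk
    // reads behind those faults overlap with processing of the current page.
    ubyte processMapped(const(ubyte)[] data)
    {
        enum pageSize  = 4096;   // assumed; query the OS in real code
        enum lookAhead = 16;     // pages to touch ahead of the cursor

        ubyte sink = 0;
        foreach (pos; 0 .. data.length)
        {
            auto ahead = pos + lookAhead * pageSize;
            if (ahead < data.length && ahead % pageSize == 0)
                sink ^= data[ahead];   // one early load per page boundary

            sink ^= data[pos];         // stands in for the real processing
        }
        return sink;                   // returning the value keeps the loads live
    }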
Kagamin Wrote:
> May be mm scheme results in more calls to HDD?

From the Mac optimization guidelines:
http://developer.apple.com/documentation/Performance/Conceptual/FileSystem/Articles/FilePerformance.html

# Minimize the number of file operations you perform. For more information, see “Minimize File System Access.”

# Group several small I/O transfers into one large transfer. A single write of eight pages is faster than eight separate single-page writes, primarily because it allows the hard disk to write the data in one pass over the disk surface. For more information, see “Choosing an Optimal Transfer Buffer Size.”

# Perform sequential reads instead of seeking and reading small blocks of data. The kernel transparently clusters I/O operations, which makes sequential reads much faster.
Feb 24 2009
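[A sketch of the "group several small transfers" guideline quoted above, using the raw POSIX write call so that stdio buffering doesn't hide the difference; the page size and the eight-page figure mirror the Apple example.]

    import core.sys.posix.unistd : write;

    enum pageSize = 4096;   // assumed page size, as in the Apple example

    // Eight separate single-page writes: eight syscalls, eight chances to seek.
    void writePagewise(int fd, const(ubyte)[] buf)
    {
        foreach (i; 0 .. 8)
            write(fd, buf.ptr + i * pageSize, pageSize);
    }

    // The same data as one eight-page transfer: a single pass over the disk.
    void writeGrouped(int fd, const(ubyte)[] buf)
    {
        write(fd, buf.ptr, 8 * pageSize);
    }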