digitalmars.D - Are these benchmarks recent and real?
- rempas (8/8) Aug 30 2021 Just trying to compile a sample basic gtk-d project (including
- jfondren (10/19) Aug 30 2021 You can see how recent they are, and yeah they're real, but it's
- rikki cattermole (3/4) Aug 30 2021 "Oh let me recompile dmd" - Stefan
- Ali Çehreli (15/23) Aug 30 2021 The following program takes 10 seconds on my computer. How is that fast?...
- rempas (3/21) Aug 30 2021 Yep! There is and I hope things would be more clear. Your book
- Steven Schveighoffer (23/50) Aug 31 2021 initializers *sometimes* can be computed at compile time. If assigned to...
- Ali Çehreli (29/40) Aug 31 2021 [I change my understanding at the end of this post.]
- Steven Schveighoffer (18/74) Aug 31 2021 What `static const` does is require the execution of the expression at
- Imperatorn (4/12) Sep 02 2021 "Someone" "should" make a D cheat sheet! Like some best
- H. S. Teoh (6/19) Sep 02 2021 Well, there's this: https://p0nce.github.io/d-idioms/
- rempas (8/28) Aug 30 2021 Yeah I checked the last commit which was 11 July so yeah they are
- Chris Katko (14/34) Aug 31 2021 One definite flaw I've had with D's ecosystem, is these issues
- Mike Parker (7/14) Aug 31 2021 That all happens on github. Even the editing. You never have to
- drug (5/13) Aug 31 2021 What's wrong with? Just now I submit a couple of PRs to fix typo in
- bauss (16/23) Aug 31 2021 That's standard for pretty much all open-source documentations.
- russhy (13/22) Sep 01 2021 LDC is based on LLVM, just like rust/zig, so it's gonna be slow
- russhy (3/4) Sep 01 2021 Just to be precise, 0.7 seconds for a FULL REBUILD :)
- user1234 (2/16) Sep 01 2021 same with the `--force` dub argument ?
- russhy (2/21) Sep 01 2021 it is, i used ``-f``
- russhy (4/27) Sep 01 2021 On windows the same project takes 1.8 sec to fully rebuild,
- evilrat (7/10) Sep 01 2021 I could be wrong but Windows has antivirus hooks on file access
- Patrick Schluter (7/35) Sep 02 2021 File system access is significantly slower on Windows because of
Just trying to compile a sample basic gtk-d project (including the libraries themselves) using ldc2 and optimization "-Os" and seeing how much time this takes, I want to ask if the benchmarks found [here](https://github.com/nordlow/compiler-benchmark) about ldc2 are real. Again, seeing the gtk-d project taking so much time, it's hard to believe that ldc2 compiles faster than tcc and go. However, this test probably doesn't use optimizations, but still... Any thoughts?
Aug 30 2021
On Monday, 30 August 2021 at 13:12:09 UTC, rempas wrote:
> Just trying to compile a sample basic gtk-d project (including the libraries themselves) using ldc2 and optimization "-Os" and seeing how much time this takes, I want to ask if the benchmarks found [here](https://github.com/nordlow/compiler-benchmark) about ldc2 are real. Again, seeing the gtk-d project taking so much time, it's hard to believe that ldc2 compiles faster than tcc and go. However, this test probably doesn't use optimizations, but still... Any thoughts?

You can see how recent they are, and yeah they're real, but it's a very specific artificial benchmark and there's a lot more to say about compilation speeds for the various languages. That D can compile amazingly fast can be shown just by building dmd itself. But unlike Go, D also gives you a bunch of tools that you can use to make compile times take a lot longer.

Likewise, you could say that D can make just as careful use of memory as a C program, but that D, unlike C, also gives you a bunch of tools that you can put towards much less careful use of memory.
Aug 30 2021
On 31/08/2021 3:34 AM, jfondren wrote:
> That D can compile amazingly fast can be shown just by building dmd itself.

"Oh let me recompile dmd" - Stefan

A little gag from this BeerConf and yeah, it builds fast.
Aug 30 2021
On 8/30/21 8:46 AM, rikki cattermole wrote:
> On 31/08/2021 3:34 AM, jfondren wrote:
>> That D can compile amazingly fast can be shown just by building dmd itself.
>
> "Oh let me recompile dmd" - Stefan
>
> A little gag from this BeerConf and yeah, it builds fast.

The following program takes 10 seconds on my computer. How is that fast? :p

import std.range;
import std.algorithm;

int main() {
  enum ret = 4_000_000.iota.sum;
  // pragma(msg, ret);
  return ret ^ ret;
}

(Of course I am joking: Replacing 'enum' with e.g. 'const' makes it fast.)

However, TIL: pragma(msg) works with 'const' variables! (At least with that one.) Replace 'enum' with 'const' and pragma(msg) computes it at compile time. But... but... 'const' doesn't really mean compile-time... Is that intended?

There is some semantic confusion there. :/

Ali
Aug 30 2021
On Monday, 30 August 2021 at 17:15:00 UTC, Ali Çehreli wrote:
> [...]
>
> However, TIL: pragma(msg) works with 'const' variables! (At least with that one.) Replace 'enum' with 'const' and pragma(msg) computes it at compile time. But... but... 'const' doesn't really mean compile-time... Is that intended?
>
> There is some semantic confusion there. :/

Yep! There is, and I hope things will become clearer. Your book helps tho ;)
Aug 30 2021
On 8/30/21 1:15 PM, Ali Çehreli wrote:
> On 8/30/21 8:46 AM, rikki cattermole wrote:
> [...]
>
> The following program takes 10 seconds on my computer. How is that fast? :p
>
> import std.range;
> import std.algorithm;
>
> int main() {
>   enum ret = 4_000_000.iota.sum;
>   // pragma(msg, ret);
>   return ret ^ ret;
> }
>
> (Of course I am joking: Replacing 'enum' with e.g. 'const' makes it fast.)
>
> However, TIL: pragma(msg) works with 'const' variables! (At least with that one.) Replace 'enum' with 'const' and pragma(msg) computes it at compile time. But... but... 'const' doesn't really mean compile-time... Is that intended?
>
> There is some semantic confusion there. :/

Initializers *sometimes* can be computed at compile time. If assigned to a const or immutable variable, the compiler is smart enough to know that the item hasn't changed, and so it can go back to the static initializer for what the value actually is.

What is happening here:

`enum ret = 4_000_000.iota.sum;`

In this case, you are requesting a compile-time constant, and so it runs CTFE here to generate the result.

`const ret = 4_000_000.iota.sum;`

In this case, since this is inside a function, and not assigned to a global or static variable, it is generated at runtime.

`pragma(msg, ret);`

However, here we are requesting the value of `ret` at compile time. The compiler knows that since it's const, it should have the value it's initialized with. So it runs the *initializer* expression `4_000_000.iota.sum` at compile time, and now it has access to the value. So actually, the CTFE engine runs here instead of at `ret`'s initialization.

If a const variable depended on an expression that could only be computed at runtime (like, say, the value of an input parameter), then the `pragma(msg)` would NOT work.

-Steve
Aug 31 2021
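A minimal sketch pulling the pieces of this exchange together (it uses the same 4_000_000 figure as above, so compiling it is deliberately slow; variable names are made up for illustration):

```d
import std.range : iota;
import std.algorithm : sum;

void main()
{
    // enum: a manifest constant, so the initializer is evaluated by
    // CTFE during compilation (this is what makes the build slow).
    enum atCompileTime = 4_000_000.iota.sum;

    // const local: initialized at run time; the compiler does not run
    // CTFE just because the variable is const...
    const atRunTime = 4_000_000.iota.sum;

    // ...but pragma(msg) needs a compile-time value, so here the
    // *initializer* of the const variable is run through CTFE after all.
    pragma(msg, atRunTime);

    assert(atCompileTime == atRunTime);
}
```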
On 8/31/21 8:09 AM, Steven Schveighoffer wrote:
> `const ret = 4_000_000.iota.sum;`
>
> In this case, since this is inside a function, and not assigned to a global or static variable, it is generated at runtime.
>
> `pragma(msg, ret);`
>
> However, here we are requesting the value of `ret` at compile time. The compiler knows that since it's const, it should have the value it's initialized with.

[I change my understanding at the end of this post.]

But 'const' is not 'static const'. pragma(msg) is being extra helpful by hoping for the availability of the value. (To me, 'const' means "I promise I will not mutate", which has no relation to compile-time availability.)

> So it runs the *initializer* expression `4_000_000.iota.sum` at compile time, and now it has access to the value. So actually, the CTFE engine runs here instead of at `ret`'s initialization.

It makes sense, but there are two minor disturbances:

1) This is an example of "It went better than I expected".

2) Success is determined by trying. The following program fails compilation when it gets to 'args.length' after 10 seconds of compilation:

import std.range;
import std.algorithm;

void main(string[] args) {
  const ret = 4_000_000.iota.sum + args.length;
  pragma(msg, ret);
}

Well, nothing is *wrong* here, but concepts are muddled. But then I even more TIL that this is the same for templates:

void foo(int i)() {
}

void main() {
  const i = 42;
  foo!i();   // Compiles
}

You know... Now it feels I knew it all along because CTFE is about expressions, not variables. I need not have a "compile-time variable". Makes sense now. :)

Ali
Aug 31 2021
On 8/31/21 11:44 AM, Ali Çehreli wrote:
> On 8/31/21 8:09 AM, Steven Schveighoffer wrote:
>> [...]
>
> [I change my understanding at the end of this post.]
>
> But 'const' is not 'static const'. pragma(msg) is being extra helpful by hoping for the availability of the value. (To me, 'const' means "I promise I will not mutate", which has no relation to compile-time availability.)

What `static const` does is require the execution of the expression at compile time. Why? Because it needs to put that value into the data segment for the linker to use.

The compiler isn't going to do CTFE unless it *has to*, because CTFE is expensive. This is why the cases where CTFE is done are explicit. But whether CTFE will work or not depends on whether the code you are executing at compile time can be executed at compile time. And that isn't decided until the expression is run.

>> So it runs the *initializer* expression `4_000_000.iota.sum` at compile time, and now it has access to the value. So actually, the CTFE engine runs here instead of at `ret`'s initialization.
>
> It makes sense, but there are two minor disturbances:
>
> 1) This is an example of "It went better than I expected".
>
> 2) Success is determined by trying. The following program fails compilation when it gets to 'args.length' after 10 seconds of compilation:
>
> [...]

Which actually makes sense :) CTFE is handed an expression, which is essentially an AST branch that it needs to execute. It executes it until it can't, and then gives you the error. A CTFE error is like a runtime error, except the "runtime" is "compile time".

> Well, nothing is *wrong* here, but concepts are muddled. But then I even more TIL that this is the same for templates:
>
> [...]
>
> You know... Now it feels I knew it all along because CTFE is about expressions, not variables. I need not have a "compile-time variable". Makes sense now. :)

CTFE to me is taking the parsed tree and executing it. But only when it needs to. The only odd magic part here is that it can "see through" the variable to how it was calculated.

-Steve
Aug 31 2021
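A small sketch of the `static const` case described above (the commented-out line is the one CTFE cannot evaluate; names are illustrative only):

```d
import std.range : iota;
import std.algorithm : sum;

void main(string[] args)
{
    // `static const` must live in the data segment, so its initializer
    // is forced through CTFE while compiling.
    static const computed = 1_000.iota.sum;
    pragma(msg, computed); // prints 499500 during compilation

    // This would be a compile error: args.length is only known at run
    // time, so CTFE cannot evaluate the initializer.
    //static const bad = 1_000.iota.sum + args.length;
}
```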
On Tuesday, 31 August 2021 at 15:09:11 UTC, Steven Schveighoffer wrote:
> On 8/30/21 1:15 PM, Ali Çehreli wrote:
> [...]
>
> Initializers *sometimes* can be computed at compile time. If assigned to a const or immutable variable, the compiler is smart enough to know that the item hasn't changed, and so it can go back to the static initializer for what the value actually is.
>
> [...]

"Someone" "should" make a D cheat sheet! Like some best practices, tips n tricks to stay healthy in the D universe
Sep 02 2021
On Thu, Sep 02, 2021 at 07:42:19PM +0000, Imperatorn via Digitalmars-d wrote:
> On Tuesday, 31 August 2021 at 15:09:11 UTC, Steven Schveighoffer wrote:
> [...]
>
> "Someone" "should" make a D cheat sheet! Like some best practices, tips n tricks to stay healthy in the D universe

Well, there's this: https://p0nce.github.io/d-idioms/

But it may be somewhat outdated now.

T

--
Why did the mathematician reinvent the square wheel? Because he wanted to drive smoothly over an inverted catenary road.
Sep 02 2021
On Monday, 30 August 2021 at 15:34:13 UTC, jfondren wrote:
> On Monday, 30 August 2021 at 13:12:09 UTC, rempas wrote:
> [...]
>
> You can see how recent they are, and yeah they're real, but it's a very specific artificial benchmark and there's a lot more to say about compilation speeds for the various languages. [...]

Yeah, I checked the last commit, which was 11 July, so yeah, they are very recent.

My only complaint about D's memory management is that the garbage collector is used in Phobos, so I cannot mark my whole project with `@nogc`. I think the library is a specific thing and we could probably get away with doing manual memory management while still being bug-free, but I don't take it on me. I may be wrong.
Aug 30 2021
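As a rough illustration (not rempas's project, just a sketch), this is what working under `@nogc` tends to look like: lazy Phobos range algorithms are generally usable because their attributes are inferred, while anything that allocates with the GC is rejected at compile time.

```d
import std.algorithm.iteration : map, sum;

// Lazy range calls over an array don't allocate, so the inferred
// attributes let this be @nogc.
@nogc int sumOfSquares(const(int)[] xs)
{
    return xs.map!(x => x * x).sum;
}

// Placed in the data segment at compile time; no GC allocation at run time.
immutable int[] data = [1, 2, 3, 4];

void main() @nogc
{
    assert(sumOfSquares(data) == 30);
    // Something like `data ~ [5]` or std.array.array(...) would not
    // compile here, because it allocates with the GC.
}
```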
On Monday, 30 August 2021 at 15:34:13 UTC, jfondren wrote:
> [...] But unlike Go, D also gives you a bunch of tools that you can use to make compile times take a lot longer. [...]

One definite flaw I've had with D's ecosystem is that these issues aren't made CLEAR. Like, if you dare import regex and use a single regex match, your compile time explodes by 10-15 seconds on my i3 chromebook (from ~3 seconds to over 15!). There are a lot of gotchas in D. Anything that "can" explode your program in terms of memory leaks, or run / compile-time performance, should be clearly documented.

And the barrier-to-entry is annoying even to fix documentation. I went to a D page, clicked "improve this page" and I have to... fork the repo, and then submit a pull request. Even if it's just to correct a typo. Is that kind of lock-and-key audited security over edits to documentation... really necessary? If it takes more than 10 seconds, most people aren't even going to bother.
Aug 31 2021
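For what it's worth, how the regex is constructed matters for compile time; here is a hedged sketch contrasting the two std.regex entry points (the pattern is made up, and the 10-15 second figure above is machine-specific):

```d
import std.regex;

void main()
{
    // Runtime-compiled pattern: the D compiler still has to instantiate
    // the template-heavy std.regex machinery, but the pattern itself is
    // compiled when the program runs.
    auto re = regex(`[a-z]+[0-9]+`);
    assert(!matchFirst("abc123", re).empty);

    // ctRegex! additionally builds the matcher at compile time via
    // CTFE/templates, which is where builds tend to get really slow.
    enum ctre = ctRegex!(`[a-z]+[0-9]+`);
    assert(!matchFirst("abc123", ctre).empty);
}
```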
On Tuesday, 31 August 2021 at 08:27:30 UTC, Chris Katko wrote:
> And the barrier-to-entry is annoying even to fix documentation. I went to a D page, clicked "improve this page" and I have to... fork the repo, and then submit a pull request. Even if it's just to correct a typo. Is that kind of lock-and-key audited security over edits to documentation... really necessary? If it takes more than 10 seconds, most people aren't even going to bother.

That all happens on github. Even the editing. You never have to leave the browser. The whole point of that is that you don't have to fork manually from the command line. I'm not sure how it could be easier. Care to elaborate?

Anyway, if that's too much, then you can always submit an issue on Bugzilla.
Aug 31 2021
On 31.08.2021 11:27, Chris Katko wrote:
[snip]
> And the barrier-to-entry is annoying even to fix documentation. [...] If it takes more than 10 seconds, most people aren't even going to bother.

What's wrong with that? Just now I submitted a couple of PRs to fix typos in the documentation. It's a really trivial thing to do, especially if you contribute to open-source projects. In fact, it is the standard way to work.
Aug 31 2021
On Tuesday, 31 August 2021 at 08:27:30 UTC, Chris Katko wrote:
> And the barrier-to-entry is annoying even to fix documentation. [...] Is that kind of lock-and-key audited security over edits to documentation... really necessary? [...]

That's standard for pretty much all open-source documentation. See examples below:

MDN uses Github for their documentation: https://github.com/mdn/content/blob/main/files/en-us/web/html/element/input/index.html

Rust: https://prev.rust-lang.org/en-US/contribute-docs.html

Visual C++: https://github.com/MicrosoftDocs/cpp-docs/blob/master/CONTRIBUTING.md

And the list goes on. It's the most viable method of having documentation that others can contribute to. What would your alternative be?

There HAS to be some way of audited security; otherwise people WILL insert whatever they want into the documentation, such as malicious content, and that malicious content may not even be discoverable because it could be hidden from a page's users.
Aug 31 2021
On Monday, 30 August 2021 at 13:12:09 UTC, rempas wrote:
> [...]

LDC is based on LLVM, just like Rust/Zig, so it's gonna be slow.

DMD is the reference compiler; it is what you should use to get fast iteration time. In fact, my game with 20+k lines of code compiles in just 0.7 seconds.

![screenshot](https://i.imgur.com/z7vyRtX.png "screenshot")

I only use, and I recommend you use, LDC for your release builds, as LLVM has the best optimizations. On Windows it is a little bit slower, but not that much.

No matter the language, if you abuse templates it's gonna slow down your compile times, since they need to be computed at compilation time.
Sep 01 2021
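For a dub-based project, that split usually looks roughly like this (both flags are standard dub options; adjust to your setup):

```sh
# fast edit-compile-run loop with the reference compiler
dub build --compiler=dmd

# optimized build with LDC when shipping
dub build --compiler=ldc2 --build=release
```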
On Wednesday, 1 September 2021 at 22:49:59 UTC, russhy wrote:
> [...]

Just to be precise, 0.7 seconds for a FULL REBUILD :)

Just don't abuse templates.
Sep 01 2021
On Wednesday, 1 September 2021 at 22:49:59 UTC, russhy wrote:
> [...]
>
> DMD is the reference compiler; it is what you should use to get fast iteration time. In fact, my game with 20+k lines of code compiles in just 0.7 seconds.

Same with the `--force` dub argument?
Sep 01 2021
On Wednesday, 1 September 2021 at 23:23:16 UTC, user1234 wrote:
> Same with the `--force` dub argument?

It is, I used `-f`.
Sep 01 2021
On Wednesday, 1 September 2021 at 23:37:41 UTC, russhy wrote:
> [...]
>
> It is, I used `-f`.

On Windows the same project takes 1.8 sec to fully rebuild; Windows likes to make things slow... I wonder if that's because of the linker, I don't know how to check that.
Sep 01 2021
On Wednesday, 1 September 2021 at 23:56:59 UTC, russhy wrote:
> On Windows the same project takes 1.8 sec to fully rebuild; Windows likes to make things slow... I wonder if that's because of the linker, I don't know how to check that.

I could be wrong, but Windows has antivirus hooks on file access which slow things down; additionally, programs might suffer from malware checks, hence such timings.

An extra thing is dub: it invokes the compiler on each build to probe the environment, and spawning a new process on Windows is slower than on Linux, roughly 100ms vs 10ms.
Sep 01 2021
On Wednesday, 1 September 2021 at 23:56:59 UTC, russhy wrote:
> [...]
>
> On Windows the same project takes 1.8 sec to fully rebuild; Windows likes to make things slow... I wonder if that's because of the linker, I don't know how to check that.

File system access is significantly slower on Windows because of case insensitivity, Unicode, and more metadata accesses per file. This overhead is far from negligible when accessing a lot of small files (on NTFS, afaicr, files smaller than the cluster size are stored inside the special directory structure, requiring special operations to extract, and other such oddities).
Sep 02 2021
On Thursday, 2 September 2021 at 11:05:44 UTC, Patrick Schluter wrote:
> File system access is significantly slower on Windows because of case insensitivity, Unicode, and more metadata accesses per file. [...]

It's definitely more than case insensitivity. Try it on Linux:

dd if=/dev/zero of=casei.img bs=1G count=50
mkfs -t ext4 -O casefold casei.img
mkdir casei
mount -o loop casei.img casei

That creates a 50G case-insensitive ext4 filesystem and then mounts it into a directory that you can build dmd in, etc.
Sep 02 2021
On Thursday, 2 September 2021 at 11:05:44 UTC, Patrick Schluter wrote:
> On Wednesday, 1 September 2021 at 23:56:59 UTC, russhy wrote:
>> On Windows the same project takes 1.8 sec to fully rebuild; Windows likes to make things slow... I wonder if that's because of the linker, I don't know how to check that.
>
> File system access is significantly slower on Windows because of case insensitivity, Unicode, and more metadata accesses per file. [...]

While true, this doesn't even remotely explain the difference. Windows filesystem performance is a disaster. Don't take it from me, take it from the Windows team itself, for instance: https://www.youtube.com/watch?v=yQEgeoabHNo

In that presentation, the engineer explains the challenges of using a git monorepo at Microsoft; process creation performance as well as file system performance were both major issues vs Linux, and to a lesser extent OSX.
Sep 02 2021