digitalmars.D.learn - Maxime's micro allocation benchmark much faster ?
- Laeeth Isharc (35/35) Mar 31 2015 I was curious to see if new DMD had changed speed on Maxime
- Laeeth Isharc (3/39) Mar 31 2015 oops - scratch that. may have made a mistake with versions and
- Laeeth Isharc (10/10) Mar 31 2015 Trying on a different beefier machine with 2.066 and 2.067
- John Colvin (2/12) Mar 31 2015 That's nice news. The recent GC improvements are clearly working.
- weaselcat (2/12) Mar 31 2015 Wow! props to the people that worked on the GC.
- Laeeth Isharc (11/27) Mar 31 2015 Yes - should have said that. And I do appreciate very much all
- FG (3/6) Apr 01 2015 That is great news, thanks!
- John Colvin (2/9) Apr 01 2015 Yeah, what's with that? I've never seen it before.
- Laeeth Isharc (5/16) Apr 01 2015 One cannot entirely escape déformation professionnelle ;)
- John Colvin (5/22) Apr 01 2015 well yes, who doesn't always not want to never avoid mistakes? ;)
- FG (2/19) Apr 01 2015 Yeah, there's that, but at least 1024 and 1000 are still in the same bal...
- Laeeth Isharc (8/8) Apr 01 2015 (not translated into D yet)
I was curious to see if the new DMD had changed speed on Maxime Chevalier-Boisvert's allocation benchmark here: http://pointersgonewild.com/2014/10/26/circumventing-the-d-garbage-collector/

I haven't had time to look at the Phobos test suite to know if this was one of the tests included, but the difference seems striking. I am using two machines in my office - both quite old x64 boxes running Arch Linux (8 GB RAM only). Same manufacturer and similar models, so they should be the same spec CPU-wise. I have not had time to install and compare different versions of dmd on the same machine, so fwiw:

1mm objects
-----------
dmd 2.067 release:        0.56 seconds
dmd 2.067-devel-639bcaa:  0.88 seconds

10mm objects
------------
dmd 2.067 release:        between 4.44 and 6.57 seconds
dmd 2.067-devel-639bcaa:  90 seconds

In case I made a typo in the code:

    import std.conv;

    class Node
    {
        Node next;
        size_t a, b, c, d;
    }

    void main(string[] args)
    {
        auto numNodes = to!size_t(args[1]);
        Node head = null;
        for (size_t i = 0; i < numNodes; i++)
        {
            auto n = new Node();
            n.next = head;
            head = n;
        }
    }
Mar 31 2015
oops - scratch that. may have made a mistake with versions and be comparing 2.067 with some unstable dev version.

On Tuesday, 31 March 2015 at 11:46:41 UTC, Laeeth Isharc wrote:
> I was curious to see if the new DMD had changed speed on Maxime Chevalier-Boisvert's allocation benchmark here: http://pointersgonewild.com/2014/10/26/circumventing-the-d-garbage-collector/ [...]
Mar 31 2015
Trying on a different beefier machine with the 2.066 and 2.067 release versions installed:

1mm allocations:
2.066: 0.844s
2.067: 0.19s

10mm allocations:
2.066: 1m 17.2s
2.067: 0m 1.15s

So the numbers were ballpark right before, and allocation on this micro-benchmark is much faster.
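For anyone wanting to reproduce these timings without relying on the shell's `time` (which also counts process startup and GC teardown), here is a self-timing sketch of the same benchmark. This is an assumption on my part, not the code Maxime or I actually ran; note that `std.datetime.stopwatch` is the module in recent releases, while in 2.066/2.067-era compilers the equivalent `StopWatch` lived in `std.datetime`:

```d
import std.conv : to;
import std.stdio : writefln;
import std.datetime.stopwatch : AutoStart, StopWatch;

class Node
{
    Node next;
    size_t a, b, c, d;
}

void main(string[] args)
{
    auto numNodes = to!size_t(args[1]);

    // Time only the allocation loop, not program startup/teardown.
    auto sw = StopWatch(AutoStart.yes);

    // Build a singly linked list: one GC allocation per node,
    // every node kept live via the head pointer.
    Node head = null;
    for (size_t i = 0; i < numNodes; i++)
    {
        auto n = new Node();
        n.next = head;
        head = n;
    }

    sw.stop();
    writefln("%s allocations in %s ms", numNodes, sw.peek.total!"msecs");
}
```

Compiled with -O -release, this isolates the allocation cost itself, so runs across compiler versions are comparable even if startup overhead differs.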
Mar 31 2015
On Tuesday, 31 March 2015 at 20:56:09 UTC, Laeeth Isharc wrote:
> Trying on a different beefier machine with the 2.066 and 2.067 release versions installed:
>
> 1mm allocations:
> 2.066: 0.844s
> 2.067: 0.19s
> [...]

That's nice news. The recent GC improvements are clearly working.
Mar 31 2015
On Tuesday, 31 March 2015 at 20:56:09 UTC, Laeeth Isharc wrote:
> Trying on a different beefier machine with the 2.066 and 2.067 release versions installed: [...]
>
> So the numbers were ballpark right before, and allocation on this micro-benchmark is much faster.

Wow! props to the people that worked on the GC.
Mar 31 2015
On Tuesday, 31 March 2015 at 22:00:39 UTC, weaselcat wrote:
> On Tuesday, 31 March 2015 at 20:56:09 UTC, Laeeth Isharc wrote:
>> Trying on a different beefier machine with the 2.066 and 2.067 release versions installed: [...]
> Wow! props to the people that worked on the GC.

Yes - I should have said that. And I do appreciate very much all the hard work that has been done on this (and also by the GDC and LDC maintainers, who have to keep up with each release).

Don't trust these numbers till someone else has verified them, as I am not certain I haven't messed up transliterating the code, or done something else stoopid. And of course it's a very specific micro-benchmark, but it's one that matters beyond the direct implications, given the discussion over it when her post came out. I would be really curious to see if Maxime finds the overall performance of her JIT improved.
Mar 31 2015
On 2015-03-31 at 22:56, Laeeth Isharc wrote:
> 1mm allocations
> 2.066: 0.844s
> 2.067: 0.19s

That is great news, thanks!

OT: it's a nasty financier's habit to write 1M and 1MM instead of 1k and 1M. :P
Apr 01 2015
On Wednesday, 1 April 2015 at 10:09:12 UTC, FG wrote:
> OT: it's a nasty financier's habit to write 1M and 1MM instead of 1k and 1M. :P

Yeah, what's with that? I've never seen it before.
Apr 01 2015
On Wednesday, 1 April 2015 at 10:35:05 UTC, John Colvin wrote:
> On Wednesday, 1 April 2015 at 10:09:12 UTC, FG wrote:
>> OT: it's a nasty financier's habit to write 1M and 1MM instead of 1k and 1M. :P
> Yeah, what's with that? I've never seen it before.

One cannot entirely escape déformation professionnelle ;)

[People mostly write 1,000, but 1mm, although 1m is pedantically correct for 1,000.] Better to internalize the conventions if one doesn't want to avoid expensive mistakes under pressure.
Apr 01 2015
On Wednesday, 1 April 2015 at 14:22:57 UTC, Laeeth Isharc wrote:
> Better to internalize the conventions if one doesn't want to avoid expensive mistakes under pressure.

well yes, who doesn't always not want to never avoid mistakes? ;)

Anyway, as I'm sure you know, the rest of the world assumes SI/metric, or binary in special cases (damn those JEDEC guys!): http://en.wikipedia.org/wiki/Template:Bit_and_byte_prefixes
Apr 01 2015
On 2015-04-01 at 16:52, John Colvin wrote:
> Anyway, as I'm sure you know, the rest of the world assumes SI/metric, or binary in special cases (damn those JEDEC guys!): http://en.wikipedia.org/wiki/Template:Bit_and_byte_prefixes

Yeah, there's that, but at least 1024 and 1000 are still in the same ballpark. Bankers are used to the convention and won't mistake M for a million (or you'd read about it in every newspaper if they did), but it does create havoc when you see that convention being used outside of the financial context, or worse, being mixed with SI.
Apr 01 2015
(not translated into D yet)

http://blog.mgm-tp.com/2013/12/benchmarking-g1-and-other-java-7-garbage-collectors/
http://www.mm-net.org.uk/resources/benchmarks.html
http://www.ccs.neu.edu/home/will/GC/sourcecode.html
http://yoda.arachsys.com/csharp/benchmark.html

It's possible we already have better ones in the Phobos test suite, but it might also be interesting to be able to compare against other languages/platforms.
Apr 01 2015