digitalmars.D - Three articles on D
- data pulverizer (19/19) Jun 07 2020 I was doing research for my next article and decided to write
- Paul Backus (12/19) Jun 07 2020 The examples here seem a bit contrived and unrealistic. It might
- data pulverizer (15/30) Jun 07 2020 I've made the change to the article.
- Paul Backus (14/29) Jun 07 2020 See, that's one of the examples I had in mind, because the way
- data pulverizer (12/41) Jun 07 2020 I assumed that arr.length is always run-time for arrays - but I
- tastyminerals (3/9) Jun 09 2020 FYI, I have a couple of Julia benchmarks timed against NumPy here:
- data pulverizer (5/8) Jun 11 2020 Interesting. There is a recent Julia package called
- tastyminerals (3/11) Jun 11 2020 True, a very solid improvement indeed.
- jmh530 (7/21) Jun 11 2020 It sounds like @avx for Julia is a bit like @fastmath [1]. I was
- data pulverizer (8/14) Jun 12 2020 Interesting. I didn't know that fast math vectorized calculations
- data pulverizer (5/12) Jun 12 2020 p.s. @simd in Julia was written by Intel's Arch Robinson the
- Russel Winder (13/39) Jun 08 2020 As well as publishing in the usual online places, I am sure the editors ...
- data pulverizer (4/11) Jun 11 2020 Thanks for your suggestion. I've taken a look at their website
- tastyminerals (5/10) Jun 09 2020 Thanks. Interesting that D is less susceptible to floating types
- data pulverizer (2/6) Jun 11 2020 Updated. Thanks
I was doing research for my next article and decided to write three shorter ones in the interim. Two are about basic aspects of D and one is a small benchmarking exercise of D math functions.

1. Importing modules & scripts in D. It's much more informative than it sounds, especially for beginners, and covers all the import cases, including `mixin(import("script.d"));`
https://github.com/dataPulverizer/ImportingInD/blob/master/README.md

2. Template basics, decomposition and value types. Once you've written your first basic template in D, at some point soon afterwards you start struggling with how to decompose template types and articulate the template pattern you want in order to facilitate template auto-inference by the compiler. That's what this article helps with:
https://github.com/dataPulverizer/DTemplatesDecompositionAndValueTypes/blob/master/README.md

3. Benchmarking of mathematical functions in D: `std.math` vs C (core) vs LLVM (LDC intrinsics):
https://github.com/dataPulverizer/DMathBench/blob/master/report.md

I look forward to your comments.

Thank you
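For anyone who hasn't met string imports before, the `mixin(import("script.d"));` case from the first article works roughly like this (a minimal sketch, not taken from the article; the file names and the -J flag usage are my own illustration):

```d
// main.d -- build with something like: dmd -J. main.d
// (-J tells the compiler which directories string imports may be read from)
void main()
{
    // import("script.d") reads the file as a string at compile time;
    // mixin() then pastes that text here and compiles it as D code.
    mixin(import("script.d"));
}
```

```d
// script.d -- plain statements, pasted verbatim into main() above
import std.stdio;
writeln("hello from script.d");
```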
Jun 07 2020
On Sunday, 7 June 2020 at 22:42:34 UTC, data pulverizer wrote:
> 2. Template basics, decomposition and value types. Once you've written your first basic template in D, at some point soon afterwards you start struggling with how to decompose template types and articulate the template pattern you want in order to facilitate template auto-inference by the compiler. That's what this article helps with:
> https://github.com/dataPulverizer/DTemplatesDecompositionAndValueTypes/blob/master/README.md

The examples here seem a bit contrived and unrealistic. It might be difficult for readers to understand how and why they would apply these techniques to their own code.

Also, the vocabulary used is different from the official vocabulary in the language spec. For example, what the tutorial calls "decomposition" is officially called "template argument deduction" [1]. Using official terminology whenever possible makes it easier for users to find relevant information with search engines, get help from the community, and integrate knowledge from multiple sources.

[1] https://dlang.org/spec/template.html#argument_deduction
Jun 07 2020
On Sunday, 7 June 2020 at 23:34:56 UTC, Paul Backus wrote:

Thank you for your response.

> Also, the vocabulary used is different from the official vocabulary in the language spec. For example, what the tutorial calls "decomposition" is officially called "template argument deduction" ...

I've made the change to the article.

> The examples here seem a bit contrived and unrealistic. It might be difficult for readers to understand how and why they would apply these techniques to their own code.

It would probably be helpful for you to pick an example and describe what you mean. From my point of view, I'm targeting people writing numerical code and people coming from Julia. Someone like that will look at the article and think "hmm, okay, interesting ...". The introductory example is a dot product ... those people will know what that is, and the code itself is simple enough to extrapolate from and get people curious. The second example, getting a static array's size, is from code that I actually use; it is small and simple to follow. The third example, value types, is analogous to Julia's value types (https://docs.julialang.org/en/v1/manual/types/#%22Value-types%22-1) and targets people who have some familiarity with Julia - it makes them feel comfortable knowing that some of the same Julia constructs are available in D.
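A rough sketch of that kind of introductory example (hedged: this is not the article's code, just an illustration of the element type and static length being deduced at the call site):

```d
// Hedged illustration, not the article's code: T and N are template
// parameters deduced from the arguments, so callers never spell them out.
T dot(T, size_t N)(const T[N] a, const T[N] b)
{
    T total = 0;
    foreach (i; 0 .. N)
        total += a[i] * b[i];
    return total;
}

void main()
{
    double[3] x = [1.0, 2.0, 3.0];
    double[3] y = [4.0, 5.0, 6.0];
    assert(dot(x, y) == 32.0);   // T = double, N = 3, both deduced
}
```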
Jun 07 2020
On Monday, 8 June 2020 at 00:25:38 UTC, data pulverizer wrote:
> The second example, getting a static array's size, is from code that I actually use; it is small and simple to follow.

See, that's one of the examples I had in mind, because the way I'd get the length of a static array in real code is arr.length. I assumed you knew about that and were just using the example to demonstrate template argument deduction. :)

> The third example, value types, is analogous to Julia's value types (https://docs.julialang.org/en/v1/manual/types/#%22Value-types%22-1) and targets people who have some familiarity with Julia - it makes them feel comfortable knowing that some of the same Julia constructs are available in D.

I'm not familiar with Julia's value types, and reading the linked page doesn't give me a clear idea what they are used for. Again, the example does a decent job showing how template value parameters [1] work, but it doesn't give me much of a clue about why I'd want to use them. Perhaps that's just me, though, and a programmer with a background in Julia would understand better.

[1] https://dlang.org/spec/template.html#template_value_parameter
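To make the arr.length point concrete (a minimal check, my own example rather than anything from the articles):

```d
// For a static array the length is part of the type, so it's available
// at compile time without any template machinery.
void main()
{
    double[5] arr;
    static assert(arr.length == 5);  // checked at compile time
    enum n = arr.length;             // usable as a compile-time constant
    double[n] another;               // e.g. to declare another static array
}
```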
Jun 07 2020
On Monday, 8 June 2020 at 00:43:34 UTC, Paul Backus wrote:
> See, that's one of the examples I had in mind, because the way I'd get the length of a static array in real code is arr.length. I assumed you knew about that and were just using the example to demonstrate template argument deduction. :)

I assumed that arr.length was always a run-time property of arrays - but I just checked with a simple example and it works for static arrays at compile time. I can now see why you were saying it was contrived - you don't need the template to get the length. Fair enough.

> I'm not familiar with Julia's value types, and reading the linked page doesn't give me a clear idea what they are used for. Again, the example does a decent job showing how template value parameters [1] work, but it doesn't give me much of a clue about why I'd want to use them.

They are more similar to template alias parameters (https://dlang.org/spec/template.html#aliasparameters), except that you can dispatch on anything - you don't have to specify a type (the compiler deduces it). The official template documentation looks as if it has been updated; the last time I saw it, it wasn't this detailed. Great!
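For readers who haven't met either feature, this is roughly the pattern being discussed (a hedged sketch with made-up names; it just shows a template value parameter selecting a code path at compile time, which is about what Julia's Val{x} dispatch is used for):

```d
import std.stdio : writeln;

enum Order { rowMajor, colMajor }

// `order` is a template value parameter: each value instantiates a
// separate function, and the branch is resolved at compile time.
void describe(Order order)()
{
    static if (order == Order.rowMajor)
        writeln("iterating row by row");
    else
        writeln("iterating column by column");
}

void main()
{
    describe!(Order.rowMajor)();
    describe!(Order.colMajor)();
}
```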
Jun 07 2020
On Monday, 8 June 2020 at 00:25:38 UTC, data pulverizer wrote:
> [...]

FYI, I have a couple of Julia benchmarks timed against NumPy here: https://github.com/tastyminerals/mir_benchmarks#general-purpose-multi-thread
Jun 09 2020
On Tuesday, 9 June 2020 at 21:30:24 UTC, tastyminerals wrote:
> FYI, I have a couple of Julia benchmarks timed against NumPy here: https://github.com/tastyminerals/mir_benchmarks#general-purpose-multi-thread

Interesting. There is a recent Julia package called LoopVectorization which by all accounts performs much better than base Julia: https://discourse.julialang.org/t/ann-loopvectorization/32843
Jun 11 2020
On Thursday, 11 June 2020 at 22:11:41 UTC, data pulverizer wrote:
> Interesting. There is a recent Julia package called LoopVectorization which by all accounts performs much better than base Julia: https://discourse.julialang.org/t/ann-loopvectorization/32843

True, a very solid improvement indeed. Sigh, wish D received as much attention as Julia continues to get.
Jun 11 2020
On Thursday, 11 June 2020 at 23:08:45 UTC, tastyminerals wrote:
> True, a very solid improvement indeed. Sigh, wish D received as much attention as Julia continues to get.

It sounds like @avx for Julia is a bit like @fastmath [1]. I was re-reading this [2] recently. You may find it interesting.

[1] https://wiki.dlang.org/LDC-specific_language_changes#.40.28ldc.attributes.fastmath.29
[2] http://johanengelen.github.io/ldc/2016/10/11/Math-performance-LDC.html
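For reference, the LDC attribute behind [1] is applied per function, roughly like this (a hedged sketch, ldc2 only; the dot-product function itself is made up):

```d
import ldc.attributes : fastmath;

// @fastmath relaxes strict IEEE semantics for this one function, which
// lets LLVM reassociate the reduction and vectorize the loop.
@fastmath
double dot(const(double)[] a, const(double)[] b)
{
    double total = 0.0;
    foreach (i; 0 .. a.length)
        total += a[i] * b[i];
    return total;
}
```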
Jun 11 2020
On Friday, 12 June 2020 at 00:24:39 UTC, jmh530 wrote:
> It sounds like @avx for Julia is a bit like @fastmath [1]. I was re-reading this [2] recently. You may find it interesting.
>
> [1] https://wiki.dlang.org/LDC-specific_language_changes#.40.28ldc.attributes.fastmath.29
> [2] http://johanengelen.github.io/ldc/2016/10/11/Math-performance-LDC.html

Interesting. I didn't know that fast math vectorized calculations - automatically using SIMD. That feature isn't mentioned in the LLVM fast-math documentation: https://llvm.org/docs/LangRef.html#fast-math-flags. What seems effective about Julia's approach to SIMD and fast math is being able to label individual statements to direct the compiler to optimize those specific statements.
Jun 12 2020
On Saturday, 13 June 2020 at 05:29:34 UTC, data pulverizer wrote:
> Interesting. I didn't know that fast math vectorized calculations - automatically using SIMD. [...]

p.s. @simd in Julia was written by Intel's Arch Robison, the architect of Intel's Threading Building Blocks. That kind of support is very helpful indeed: https://software.intel.com/content/www/us/en/develop/articles/vectorization-in-julia.html
Jun 12 2020
As well as publishing in the usual online places, I am sure the editors of CVu and/or Overload would love to see this sort of article published in one of those two. ACCU folk love this sort of programming stuff.

On Sun, 2020-06-07 at 22:42 +0000, data pulverizer via Digitalmars-d wrote:
> I was doing research for my next article and decided to write three shorter ones in the interim. Two are about basic aspects of D and one is a small benchmarking exercise of D math functions.
>
> 1. Importing modules & scripts in D. It's much more informative than it sounds, especially for beginners, and covers all the import cases, including `mixin(import("script.d"));`
> https://github.com/dataPulverizer/ImportingInD/blob/master/README.md
>
> 2. Template basics, decomposition and value types. Once you've written your first basic template in D, at some point soon afterwards you start struggling with how to decompose template types and articulate the template pattern you want in order to facilitate template auto-inference by the compiler. That's what this article helps with:
> https://github.com/dataPulverizer/DTemplatesDecompositionAndValueTypes/blob/master/README.md
>
> 3. Benchmarking of mathematical functions in D: `std.math` vs C (core) vs LLVM (LDC intrinsics):
> https://github.com/dataPulverizer/DMathBench/blob/master/report.md
>
> I look forward to your comments.
>
> Thank you

--
Russel.
===========================================
Dr Russel Winder      t: +44 20 7585 2200
41 Buckmaster Road    m: +44 7770 465 077
London SW11 1EN, UK   w: www.russel.org.uk
Jun 08 2020
On Monday, 8 June 2020 at 07:52:58 UTC, Russel Winder wrote:
> As well as publishing in the usual online places, I am sure the editors of CVu and/or Overload would love to see this sort of article published in one of those two. ACCU folk love this sort of programming stuff.
>
> [snip]

Thanks for your suggestion. I've taken a look at their website and publications. It looks like a great place to write articles for. I'll prepare some stuff for that.
Jun 11 2020
On Sunday, 7 June 2020 at 22:42:34 UTC, data pulverizer wrote:
> I was doing research for my next article and decided to write three shorter ones in the interim. Two are about basic aspects of D and one is a small benchmarking exercise of D math functions.
>
> [...]

Thanks. Interesting that D is less susceptible to the floating-point type than C and LLVM. Also, a minor typo in "... to think about [then] considering which ...".
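For anyone skimming the thread, the three implementations the report compares are reached along these lines (a hedged sketch, not the benchmark code; ldc.intrinsics requires ldc2):

```d
import std.stdio : writeln;
import std.math : exp;                // D implementation
import core.stdc.math : c_exp = exp;  // C runtime implementation
import ldc.intrinsics : llvm_exp;     // LLVM intrinsic (ldc2 only)

void main()
{
    immutable x = 1.5;
    writeln(exp(x));       // std.math
    writeln(c_exp(x));     // core.stdc.math
    writeln(llvm_exp(x));  // LLVM intrinsic
}
```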
Jun 09 2020
On Tuesday, 9 June 2020 at 21:26:02 UTC, tastyminerals wrote:
> Thanks. Interesting that D is less susceptible to the floating-point type than C and LLVM. Also, a minor typo in "... to think about [then] considering which ...".

Updated. Thanks.
Jun 11 2020