digitalmars.D - DIP80: phobos additions
- Robert burner Schadek (6/6) Jun 07 2015 Phobos is awesome, the libs of go, python and rust only have
- Dennis Ritchie (6/12) Jun 07 2015 Yes, it's a great DIP's discussion. I just for the expansion of
- weaselcat (5/11) Jun 07 2015 can we discuss the downside of making phobos huge?
- Adam D. Ruppe (8/11) Jun 07 2015 Me too... but that's not actually a problem of huge library. It
- Jonathan M Davis (13/17) Jun 07 2015 Andrei has already stated that we are definitely going to make
- weaselcat (6/24) Jun 07 2015 I wasn't arguing against a large library(in fact, I prefer it.) I
- Tofu Ninja (4/10) Jun 07 2015 Would love some kind of color implementation in Phobos, simple
- Rikki Cattermole (8/19) Jun 07 2015 https://github.com/D-Programming-Language/phobos/pull/2845 Heyyyy Manu,
- Manu via Digitalmars-d (5/23) Jun 07 2015 I've kinda just been working on it on the side for my own use.
- Rikki Cattermole (7/32) Jun 07 2015 Like I said its a blocker for an image library. There's no point
- Manu via Digitalmars-d (9/50) Jun 07 2015 Yeah, that's fine. Is there an initiative for a phobos image library?
- Rikki Cattermole (7/59) Jun 07 2015 I agree that it is. But we will need to move past this for the
- Tofu Ninja (3/11) Jun 07 2015 Personally I would just be happy with a d wrapper for something
- Mike (4/6) Jun 07 2015 That's what Deimos is for
- Tofu Ninja (5/12) Jun 07 2015 I guess I meant to use it as a base for image loading and storing
- Rikki Cattermole (8/22) Jun 07 2015 Atleast my idea behind Devisualization.Image was mostly this.
- Jonathan M Davis (8/14) Jun 07 2015 Yeah. After the problems with linking in curl, I think that we
- Tofu Ninja (4/19) Jun 08 2015 I think that is probably pretty sad if that is actually the
- Mike (3/7) Jun 07 2015 I'm interested in this library as well.
- Mike (3/7) Jun 08 2015 Looks like it's been getting a couple commits monthly, so I think
- weaselcat (7/13) Jun 07 2015 I think a std.bindings or something similar for ubiquitous C
- Manu via Digitalmars-d (10/24) Jun 07 2015 I've been humoring the idea of porting my engine to D. It's about 15
- Rikki Cattermole (7/34) Jun 07 2015 I'm definitely interested. Imagine getting something like that into
- Manu via Digitalmars-d (14/57) Jun 07 2015 I can't really see a place for many parts in phobos...
- Rikki Cattermole (6/65) Jun 07 2015 They would have to be manual tests.
- Walter Bright (2/10) Jun 08 2015 It's a chicken-and-egg thing. Somebody's got to start and not wait for t...
- Jacob Carlborg (4/7) Jun 08 2015 Perhaps you could try using magicport.
- Joakim (4/18) Jun 09 2015 What cross-compilers are you waiting for? Nobody is working on
- Manu via Digitalmars-d (7/20) Jun 09 2015 XBone works. PS4 is probably easy or already working.
- ezneh (12/18) Jun 08 2015 IMHO, Phobos could include thinks like this as a standard :
- Nick Sabalausky (11/13) Jun 13 2015 I see them everywhere, but does anyone ever actually use them? Usually
- ketmar (2/5) Jun 13 2015 same for me.
- Steven Schveighoffer (4/18) Jun 13 2015 A rather cool usage of QR code I saw was a sticker on a device that was
- ketmar (3/5) Jun 13 2015 it's k001, but i'll take a printed URL for it in any time. the old good...
- Joakim (6/34) Jun 19 2015 Then there's always this:
- Steven Schveighoffer (4/8) Jun 21 2015 Oh man. Note to marketing department -- all QR codes must point to
- ponce (7/13) Jun 08 2015 What I'd like in phobos:
- "Per =?UTF-8?B?Tm9yZGzDtnci?= <per.nordlow gmail.com> (6/8) Jun 08 2015 Automatic randomizer for builtins, ranges, etc. Used to generate
- Ilya Yaroshenko (17/23) Jun 08 2015 There are
- Ilya Yaroshenko (2/27) Jun 08 2015 ... probably std.container.Array is good template to start.
- Andrei Alexandrescu (10/30) Jun 08 2015 I think licensing matters would make this difficult. What I do think we
- John Colvin (13/53) Jun 09 2015 I don't think this is quite the right approach. Multidimensional
- Ilya Yaroshenko (28/86) Jun 09 2015 Probably we need both approaches:
- Ilya Yaroshenko (1/10) Jun 09 2015 assert(&matrix[0, 2] is &tensor[0, 1, 2]);
- Ilya Yaroshenko (2/8) Jun 09 2015 I have created Phobos PR. Now we can discuss it at GitHub.
- Dennis Ritchie (12/16) Jun 09 2015 Yes, I really want to D supports multidimensional arrays,
- Ilya Yaroshenko (4/7) Jun 09 2015 D definitely needs BLAS API support for matrix multiplication.
- Dennis Ritchie (8/12) Jun 09 2015 Yes, those programs on D, is clearly lagging behind the
- Dennis Ritchie (3/6) Jun 09 2015 Actually, that's what you need to realize in D:
- Ilya Yaroshenko (7/13) Jun 09 2015 This is very good stuff. However I want to create something more
- Andrei Alexandrescu (2/7) Jun 09 2015 "And finally uBLAS offers good (but not outstanding) performance." -- An...
- Dennis Ritchie (18/29) Jun 09 2015 OK, but...
- Andrei Alexandrescu (4/5) Jun 09 2015 BigInt should use reference counting. Its current approach to allocating...
- Dennis Ritchie (4/10) Jun 09 2015 Done:
- Andrei Alexandrescu (2/12) Jun 09 2015 Thanks! -- Andrei
- Steven Schveighoffer (11/16) Jun 09 2015 Slightly OT, but this reminds me.
- Andrei Alexandrescu (4/21) Jun 09 2015 The obvious solution that comes to mind is adding a Flag!"interlocked".
- Steven Schveighoffer (7/31) Jun 10 2015 If you add an instance of RefCounted to a GC-destructed type (either in
- Andrei Alexandrescu (8/41) Jun 10 2015 That's a problem with the GC. Collected memory must be deallocated in
- Steven Schveighoffer (10/52) Jun 10 2015 I agree it's a problem with the GC, but not that it's a simple fix. It's...
- "Marc =?UTF-8?B?U2Now7x0eiI=?= <schuetzm gmx.net> (4/6) Jun 11 2015 `RefCounted!T` is also thread-local by default, only
- Steven Schveighoffer (10/15) Jun 11 2015 I may have misunderstood Andrei. We can't just use a flag to fix this
- Andrei Alexandrescu (2/14) Jun 11 2015 Yes, we definitely need to fix the GC. -- Andrei
- ixid (7/19) Jun 10 2015 I suspect this is more about who the Mathematica and D users are
- Dennis Ritchie (2/8) Jun 10 2015 OK, if D is at least BLAS, I will try to overtake you :)
- Dennis Ritchie (6/9) Jun 10 2015
- ixid (10/19) Jun 10 2015 You rarely need to use BigInt for heavy lifting though, often
- Dennis Ritchie (7/16) Jun 10 2015 Yes it is. Many are trying to find performance problems D. And
- Manu via Digitalmars-d (15/21) Jun 09 2015 A complication for linear algebra (or other mathsy things in general)
- John Colvin (7/40) Jun 09 2015 Optimising floating point is a massive pain because of precision
- Manu via Digitalmars-d (19/54) Jun 09 2015 We have flags to control this sort of thing (fast-math, strict ieee, etc...
- John Colvin (8/53) Jun 09 2015 If the compiler is free to rewrite by analytical rules then "I
- Manu via Digitalmars-d (4/51) Jun 11 2015 This is fine, those applications would continue not to use it.
- Ilya Yaroshenko (9/45) Jun 09 2015 Simplified expressions would help because
- Ilya Yaroshenko (2/49) Jun 09 2015 EDIT: would NOT help
- Manu via Digitalmars-d (17/57) Jun 11 2015 Perhaps you've never worked with incompetent programmers (in my
- Dennis Ritchie (5/15) Jun 11 2015 But you don't think you need to look up to programmers who are
- Ilya Yaroshenko (15/99) Jun 11 2015 OK, generally you are talking about something we can name MathD.
- Manu via Digitalmars-d (18/105) Jun 12 2015 That's nice... I'm all for it :)
- Ilya Yaroshenko (12/69) Jun 12 2015 ... for example we can optimise matrix chain multiplication
- Manu via Digitalmars-d (2/3) Jun 09 2015 *operators* along with their properties
- Andrei Alexandrescu (8/29) Jun 09 2015 I see. So what would be the primitives necessary? Strides (in the form
- Ilya Yaroshenko (12/31) Jun 09 2015 N-dimensional slices can be expressed as N slices and N shifts.
- jmh530 (70/75) Jun 11 2015 A well-supported matrix math library would definitely lead to me
- Wyatt (11/13) Jun 11 2015 Your post reminds me of two things I've considered attempting in
- jmh530 (6/9) Jun 11 2015 I see your point, but I think it might be a bit risky if you
- Wyatt (9/18) Jun 11 2015 From the outset, my thought was to strictly define the set of
- Tofu Ninja (2/10) Jun 11 2015 What would the new order of operations be for these new operators?
- Wyatt (11/13) Jun 12 2015 Hadn't honestly thought that far. Like I said, it was more of a
- Tofu Ninja (23/32) Jun 17 2015 I actually thought about it more, and D does have a bunch of
- Dominikus Dittes Scherkl (4/8) Jun 23 2015 +* is a specially bad idea, as I would read that as "a + (*b)",
- Tofu Ninja (8/18) Jun 23 2015 Yeah |- does seem like an interesting one, not sure what it would
- Wyatt (13/36) Jun 24 2015 Oh right, meant to respond to this. I'll admit it took me a few
- Tofu Ninja (16/54) Jun 24 2015 I am thinking of writing a mixin that will set up the proxy for
- Timon Gehr (53/109) Jun 24 2015 Obviously you will run into issues with precedence soon, but this should...
- Tofu Ninja (50/51) Jun 25 2015 Heres what I came up with... I love D so much <3
- Rikki Cattermole (3/74) Jun 11 2015 Humm, work on getting gl3n into phobos or work on my ODBC driver
- jmh530 (10/12) Jun 12 2015 I can only speak for myself. I'm sure there's a lot of value in
- Tofu Ninja (5/17) Jun 12 2015 Matrix math is matrix math, it being for ogl makes no real
- jmh530 (14/16) Jun 12 2015 I think it’s a little more complicated than that. BLAS and LAPACK
- Rikki Cattermole (4/17) Jun 12 2015 The reason I am considering gl3n is because it is old solid code. It's
- John Colvin (9/30) Jun 13 2015 The tiny subset of numerical linear algebra that is relevant for
- Tofu Ninja (8/17) Jun 13 2015 I think there is a conflict of interest with what people want.
- Rikki Cattermole (5/22) Jun 13 2015 IMO simple matrix is fine for a standard library. More complex highly
- John Colvin (4/26) Jun 13 2015 Linear algebra for graphics is the specialised case, not the
- "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= (8/11) Jun 13 2015 A geometry library is different, it should be type safe when it
- jmh530 (13/15) Jun 13 2015 Switching representations behind the scenes? Sounds complicated.
- "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= (17/28) Jun 14 2015 You don't have much of a choice if you want it to perform. You
- John Colvin (8/27) Jun 13 2015 Yes, that's what I was trying to point out. Anyway, gl3n or
- Timon Gehr (4/19) Jun 13 2015 I think there's no point to that. Just have dynamically sized and fixed
- weaselcat (9/28) Jun 14 2015 +1
- "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= (10/12) Jun 14 2015 The reason is that C++ didn't provide anything. As a result each
- Ilya Yaroshenko (5/17) Jun 14 2015 The reason is general purpose matrixes allocated at heap, but
- "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= (5/7) Jun 14 2015 No, the reason is that LA-libraries are C-libraries that also
- Ilya Yaroshenko (4/11) Jun 14 2015 We need D own BLAS implementation to do it. Sight, DBLAS will be
- "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= (19/20) Jun 14 2015 Why can't you use "version" for those that want to use a BLAS
- "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= (14/14) Jun 14 2015 I think there might be a disconnection in this thread. D only, or
- Ilya Yaroshenko (3/17) Jun 14 2015 +1
- Ilya Yaroshenko (13/34) Jun 14 2015 I am really don't understand what you mean with "generic" keyword.
- "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= (20/31) Jun 14 2015 Yes, that is what generic programming is about. The type should
- Ilya Yaroshenko (7/16) Jun 14 2015 std.range has a lot of types + D arrays.
- "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= (14/18) Jun 14 2015 Yeah, I agree that templates in C++/D more or less makes those
- Ilya Yaroshenko (9/27) Jun 14 2015 Alignment, strides (windows on a stream - I understand it like
- "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= (8/13) Jun 14 2015 It isn't a problem if you use the best possible abstraction from
- Ilya Yaroshenko (14/27) Jun 14 2015 I am sorry for this trolling:
- "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= (20/31) Jun 14 2015 Even it if was, it does not provide the meta info and alignment
- weaselcat (3/6) Jun 14 2015 https://github.com/solodon4/Mach7
- Ilya Yaroshenko (1/5) Jun 14 2015 A naive std.algorithm and std.range is easy to write too.
- "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= (8/13) Jun 14 2015 I wouldn't know. People have different needs. Builtin
- Ilya Yaroshenko (10/26) Jun 14 2015 Yes, but it would be hard to create SIMD optimised version.
- "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= (23/32) Jun 14 2015 Hmm… I don't know. In general I think the best thing to do is to
- "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= (12/12) Jun 14 2015 Another thing worth noting is that I believe Intel has put some
- anonymous (23/33) Jun 14 2015 See [1] (the Matmul benchmark) Julia Native is probably backed
- "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= (7/13) Jun 14 2015 Sure, but that is what I'd do if I had the time. Get a baseline
- "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= (6/9) Jun 15 2015 In case it isn't obvious: a potential advantage of a simple
- anonymous (10/36) Jun 15 2015 On Sunday, 14 June 2015 at 21:50:02 UTC, Ola Fosheim Grøstad
- Ilya Yaroshenko (2/6) Jun 15 2015 +1
- "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= (13/20) Jun 15 2015 Yes. Well, I think there are some different expectations to what
- John Chapman (3/3) Jun 10 2015 It's a shame ucent/cent never got implemented. But couldn't they
- ponce (3/6) Jun 10 2015 FWIW:
- Andrei Alexandrescu (3/9) Jun 10 2015 Yes, arbitrary fixed-size integrals would be good to have in Phobos.
- ponce (6/19) Jun 23 2015 Sorry for the delay. I wrote this code a while earlier.
- John Chapman (13/16) Jun 10 2015 Other things I often have a need for:
- ketmar (4/12) Jun 10 2015 +inf for including that into Phobos. current implementations are hacks...
- Robert burner Schadek (2/3) Jun 10 2015 std.experimental.logger!?
- John Chapman (3/6) Jun 10 2015 Perfect, he said sheepishly.
- "Marc =?UTF-8?B?U2Now7x0eiI=?= <schuetzm gmx.net> (4/5) Jun 10 2015 https://github.com/D-Programming-Language/phobos/pull/3233
- "Marc =?UTF-8?B?U2Now7x0eiI=?= <schuetzm gmx.net> (3/6) Jun 10 2015 I think the next release of LDC will support it, at least on some
- rsw0x (3/9) Jun 13 2015 std.container.concurrent.*
- Nick Sabalausky (2/8) Jun 13 2015 What are the problems with std.json?
- weaselcat (2/14) Jun 13 2015 slow
- Dennis Ritchie (4/4) Jun 13 2015 Good start:
- Ilya Yaroshenko (6/12) Jun 15 2015 N-dimensional slices is ready for comments!
- Dennis Ritchie (16/17) Jun 15 2015 It seems to me that the properties of the matrix require `row`
- John Colvin (2/19) Jun 15 2015 try .length!0 and .length!1 or .shape[0] and .shape[1]
- Ilya Yaroshenko (3/28) Jun 15 2015 Nitpick: shape contains lengths and strides: .shape.lengths[0]
- Ilya Yaroshenko (21/38) Jun 15 2015 This works:
- Dennis Ritchie (7/10) Jun 15 2015 Here something similar implemented:
Phobos is awesome, the libs of go, python and rust only have better marketing. As discussed on dconf, phobos needs to become big and blow the rest out of the sky. http://wiki.dlang.org/DIP80 lets get OT, please discuss
Jun 07 2015
On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek wrote:Phobos is awesome, the libs of go, python and rust only have better marketing. As discussed on dconf, phobos needs to become big and blow the rest out of the sky. http://wiki.dlang.org/DIP80 lets get OT, please discussYes, it's a great DIP discussion. I'm all for the expansion of Phobos! It would also be worth looking at Hana and copying some of its very useful elements into Phobos: http://ldionne.com/hana/
Jun 07 2015
On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek wrote:Phobos is awesome, the libs of go, python and rust only have better marketing. As discussed on dconf, phobos needs to become big and blow the rest out of the sky. http://wiki.dlang.org/DIP80 lets get OT, please discusscan we discuss the downside of making phobos huge? I actively avoid adding phobos libs to my projects because it bloats my binaries and increases compile times by massive amounts.
Jun 07 2015
On Monday, 8 June 2015 at 00:05:58 UTC, weaselcat wrote:I actively avoid adding phobos libs to my projects because it bloats my binaries and increases compile times by massive amounts.Me too... but that's not actually a problem of huge library. It is more a problem of an interconnected library - if you write independent modules an import should only pull them with little from other ones. There's a difference with classes because of Object.factory, all of them are pulled in, but modules with functions, structs, and templates are cool, shouldn't be a problem.
Jun 07 2015
On Monday, 8 June 2015 at 00:05:58 UTC, weaselcat wrote:can we discuss the downside of making phobos huge? I actively avoid adding phobos libs to my projects because it bloats my binaries and increases compile times by massive amounts.Andrei has already stated that we are definitely going to make Phobos large. We are _not_ going for the minimalistic approach, and pretty much no other language is at this point either. So, Phobos _will_ continue to grow in size. Now, as Adam points out, we can and should do a better job of making it so that different pieces of Phobos don't depend on each other if they don't need to, but it's a given at this point that Phobos is only going to get larger. And if unnecessary dependencies are kept to a minimum, then it really shouldn't hurt your compilation times (and I'm sure that we'll have further compiler improvements in that area anyway). - Jonathan M Davis
Jun 07 2015
On Monday, 8 June 2015 at 01:39:33 UTC, Jonathan M Davis wrote:On Monday, 8 June 2015 at 00:05:58 UTC, weaselcat wrote:I wasn't arguing against a large library(in fact, I prefer it.) I just think the effort should be put towards making phobos more modular before adding more stuff on top of it and making the problem worse. bye,can we discuss the downside of making phobos huge? I actively avoid adding phobos libs to my projects because it bloats my binaries and increases compile times by massive amounts.Andrei has already stated that we are definitely going to make Phobos large. We are _not_ going for the minimalistic approach, and pretty much no other language is at this point either. So, Phobos _will_ continue to grow in size. Now, as Adam points out, we can should do a better job of making it so that different pieces of Phobos don't depend on each other if they don't need to, but it's a given at this point that Phobos is only going to get larger. And if unnecessary dependencies are kept to a minimum, then it really shouldn't hurt your compilation times (and I'm sure that we'll have further compiler improvements in that area anyway). - Jonathan M Davis
Jun 07 2015
On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek wrote:Phobos is awesome, the libs of go, python and rust only have better marketing. As discussed on dconf, phobos needs to become big and blow the rest out of the sky. http://wiki.dlang.org/DIP80 lets get OT, please discussWould love some kind of color implementation in Phobos, simple linear algebra(vectors, matrices), image manipulation.
Jun 07 2015
On 8/06/2015 2:50 p.m., Tofu Ninja wrote:On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek wrote:https://github.com/D-Programming-Language/phobos/pull/2845 Heyyyy Manu, hows it going? Gl3n should be a candidate as it is old code and good one at that. https://github.com/Dav1dde/gl3n But it seems like it is no longer maintained. Can anyone contact the author regarding license to boost? Image manipulation blocked by color.Phobos is awesome, the libs of go, python and rust only have better marketing. As discussed on dconf, phobos needs to become big and blow the rest out of the sky. http://wiki.dlang.org/DIP80 lets get OT, please discussWould love some kind of color implementation in Phobos, simple linear algebra(vectors, matrices), image manipulation.
Jun 07 2015
On 8 June 2015 at 13:08, Rikki Cattermole via Digitalmars-d <digitalmars-d puremagic.com> wrote:On 8/06/2015 2:50 p.m., Tofu Ninja wrote:I've kinda just been working on it on the side for my own use. I wasn't happy with the layout, and restructured it a lot. If there's an active demand for it, I'll give it top priority...?On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek wrote:https://github.com/D-Programming-Language/phobos/pull/2845 Heyyyy Manu, hows it going?Phobos is awesome, the libs of go, python and rust only have better marketing. As discussed on dconf, phobos needs to become big and blow the rest out of the sky. http://wiki.dlang.org/DIP80 lets get OT, please discussWould love some kind of color implementation in Phobos, simple linear algebra(vectors, matrices), image manipulation.
Jun 07 2015
On 8/06/2015 3:48 p.m., Manu via Digitalmars-d wrote:On 8 June 2015 at 13:08, Rikki Cattermole via Digitalmars-d <digitalmars-d puremagic.com> wrote:Like I said its a blocker for an image library. There's no point implementing an image library with a half baked color definition meant for phobos. The long term issue is that we cannot really move forward with anything related to GUI or game development into phobos without it. So preferably we can get it into phobos by the end of the year :)On 8/06/2015 2:50 p.m., Tofu Ninja wrote:I've kinda just been working on it on the side for my own use. I wasn't happy with the layout, and restructured it a lot. If there's an active demand for it, I'll give it top priority...?On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek wrote:https://github.com/D-Programming-Language/phobos/pull/2845 Heyyyy Manu, hows it going?Phobos is awesome, the libs of go, python and rust only have better marketing. As discussed on dconf, phobos needs to become big and blow the rest out of the sky. http://wiki.dlang.org/DIP80 lets get OT, please discussWould love some kind of color implementation in Phobos, simple linear algebra(vectors, matrices), image manipulation.
Jun 07 2015
On 8 June 2015 at 13:54, Rikki Cattermole via Digitalmars-d <digitalmars-d puremagic.com> wrote:On 8/06/2015 3:48 p.m., Manu via Digitalmars-d wrote:Yeah, that's fine. Is there an initiative for a phobos image library? I have said before that I'm dubious about it's worth; the trouble with an image library is that it will be almost impossible to decide on API, whereas a colour is fairly unambiguous in terms of design merits.On 8 June 2015 at 13:08, Rikki Cattermole via Digitalmars-d <digitalmars-d puremagic.com> wrote:Like I said its a blocker for an image library. There's no point implementing an image library with a half baked color definition meant for phobos.On 8/06/2015 2:50 p.m., Tofu Ninja wrote:I've kinda just been working on it on the side for my own use. I wasn't happy with the layout, and restructured it a lot. If there's an active demand for it, I'll give it top priority...?On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek wrote:https://github.com/D-Programming-Language/phobos/pull/2845 Heyyyy Manu, hows it going?Phobos is awesome, the libs of go, python and rust only have better marketing. As discussed on dconf, phobos needs to become big and blow the rest out of the sky. http://wiki.dlang.org/DIP80 lets get OT, please discussWould love some kind of color implementation in Phobos, simple linear algebra(vectors, matrices), image manipulation.The long term issue is that we cannot really move forward with anything related to GUI or game development into phobos without it. So preferably we can get it into phobos by the end of the year :)Yeah, I agree it's a sore missing point, which is why I started working on it ;) ... I'll make it high priority. I recently finished up various work on premake5, so I can work on this now.
Jun 07 2015
On 8/06/2015 4:05 p.m., Manu via Digitalmars-d wrote:On 8 June 2015 at 13:54, Rikki Cattermole via Digitalmars-d <digitalmars-d puremagic.com> wrote:I agree that it is. But we will need to move past this for the betterment of our ecosystem. Without it we will suffer too much. As it is, Devisualization.Image will have a new interface once std.image.color is pulled. So it'll be a contender for std.image.On 8/06/2015 3:48 p.m., Manu via Digitalmars-d wrote:Yeah, that's fine. Is there an initiative for a phobos image library? I have said before that I'm dubious about it's worth; the trouble with an image library is that it will be almost impossible to decide on API, whereas a colour is fairly unambiguous in terms of design merits.On 8 June 2015 at 13:08, Rikki Cattermole via Digitalmars-d <digitalmars-d puremagic.com> wrote:Like I said its a blocker for an image library. There's no point implementing an image library with a half baked color definition meant for phobos.On 8/06/2015 2:50 p.m., Tofu Ninja wrote:I've kinda just been working on it on the side for my own use. I wasn't happy with the layout, and restructured it a lot. If there's an active demand for it, I'll give it top priority...?On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek wrote:https://github.com/D-Programming-Language/phobos/pull/2845 Heyyyy Manu, hows it going?Phobos is awesome, the libs of go, python and rust only have better marketing. As discussed on dconf, phobos needs to become big and blow the rest out of the sky. http://wiki.dlang.org/DIP80 lets get OT, please discussWould love some kind of color implementation in Phobos, simple linear algebra(vectors, matrices), image manipulation.Sounds good, I was getting worried that you had stopped altogether.The long term issue is that we cannot really move forward with anything related to GUI or game development into phobos without it. So preferably we can get it into phobos by the end of the year :)Yeah, I agree it's a sore missing point, which is why I started working on it ;) ... I'll make it high priority. I recently finished up various work on premake5, so I can work on this now.
Jun 07 2015
On Monday, 8 June 2015 at 04:05:23 UTC, Manu wrote:Yeah, that's fine. Is there an initiative for a phobos image library? I have said before that I'm dubious about it's worth; the trouble with an image library is that it will be almost impossible to decide on API, whereas a colour is fairly unambiguous in terms of design merits.Personally I would just be happy with a d wrapper for something like freeimage being included.
Jun 07 2015
On Monday, 8 June 2015 at 04:21:45 UTC, Tofu Ninja wrote:Personally I would just be happy with a d wrapper for something like freeimage being included.That's what Deimos is for (https://github.com/D-Programming-Deimos/FreeImage). Mike
Jun 07 2015
On Monday, 8 June 2015 at 04:22:56 UTC, Mike wrote:On Monday, 8 June 2015 at 04:21:45 UTC, Tofu Ninja wrote:I guess I meant to use it as a base for image loading and storing and to build some kind of d image lib on top of it. I see no point in us trying to implement all the various image formats if we try to make a image lib for phobos.Personally I would just be happy with a d wrapper for something like freeimage being included.That's what Deimos is for (https://github.com/D-Programming-Deimos/FreeImage). Mike
Jun 07 2015
On 8/06/2015 4:34 p.m., Tofu Ninja wrote:On Monday, 8 June 2015 at 04:22:56 UTC, Mike wrote:Atleast my idea behind Devisualization.Image was mostly this. The implementation can be swapped out with another easily. But the actual interface used is well made. So while a Phobos image library might have a few formats such as PNG, it probably wouldn't include a vast array of them. So then its just a matter of allowing 3rd party libraries to add them transparently.On Monday, 8 June 2015 at 04:21:45 UTC, Tofu Ninja wrote:I guess I meant to use it as a base for image loading and storing and to build some kind of d image lib on top of it. I see no point in us trying to implement all the various image formats if we try to make a image lib for phobos.Personally I would just be happy with a d wrapper for something like freeimage being included.That's what Deimos is for (https://github.com/D-Programming-Deimos/FreeImage). Mike
Jun 07 2015
On Monday, 8 June 2015 at 04:22:56 UTC, Mike wrote:On Monday, 8 June 2015 at 04:21:45 UTC, Tofu Ninja wrote:Yeah. After the problems with linking in curl, I think that we more or less decided that including stuff in Phobos which has to link against 3rd party libraries isn't a great idea. Maybe we'll end up doing it again, but in general, it just makes more sense for those to be done as 3rd party libraries and put in code.dlang.org. - Jonathan M DavisPersonally I would just be happy with a d wrapper for something like freeimage being included.That's what Deimos is for (https://github.com/D-Programming-Deimos/FreeImage).
Jun 07 2015
On Monday, 8 June 2015 at 04:34:56 UTC, Jonathan M Davis wrote:On Monday, 8 June 2015 at 04:22:56 UTC, Mike wrote:I think that is probably pretty sad if that is actually the current stance. There are so many great libraries that Phobos could benefit from and is not because of a packaging issue.On Monday, 8 June 2015 at 04:21:45 UTC, Tofu Ninja wrote:Yeah. After the problems with linking in curl, I think that we more or less decided that including stuff in Phobos which has to link against 3rd party libraries isn't a great idea. Maybe we'll end up doing it again, but in general, it just makes more sense for those to be done as 3rd party libraries and put in code.dlang.org. - Jonathan M DavisPersonally I would just be happy with a d wrapper for something like freeimage being included.That's what Deimos is for (https://github.com/D-Programming-Deimos/FreeImage).
Jun 08 2015
On Monday, 8 June 2015 at 03:48:14 UTC, Manu wrote:I've kinda just been working on it on the side for my own use. I wasn't happy with the layout, and restructured it a lot. If there's an active demand for it, I'll give it top priority...?I'm interested in this library as well. Mike
Jun 07 2015
On Monday, 8 June 2015 at 03:08:46 UTC, Rikki Cattermole wrote:Gl3n should be a candidate as it is old code and good one at that. https://github.com/Dav1dde/gl3n But it seems like it is no longer maintained.Looks like it's been getting a couple commits monthly, so I think it's being maintained.
Jun 08 2015
On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek wrote:Phobos is awesome, the libs of go, python and rust only have better marketing. As discussed on dconf, phobos needs to become big and blow the rest out of the sky. http://wiki.dlang.org/DIP80 lets get OT, please discussI think a std.bindings or something similar for ubiquitous C libraries would go a long way - _quality_(not just a wrapper) SDL, OpenGL, etc bindings. D is very attractive to game developers, I think with a little push it would get a lot of traction from this.
Jun 07 2015
On 8 June 2015 at 13:15, weaselcat via Digitalmars-d <digitalmars-d puremagic.com> wrote:On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek wrote:I've been humoring the idea of porting my engine to D. It's about 15 years of development, better/cleaner than most proprietary engines I've used at game studios. I wonder if there would be interest in this? Problem is, I need all the cross compilers to exist before I pull the plug on the C code... a game engine is no good if it's not portable to all the consoles under the sun. That said, I think it would be a good case-study to get the cross compilers working against.Phobos is awesome, the libs of go, python and rust only have better marketing. As discussed on dconf, phobos needs to become big and blow the rest out of the sky. http://wiki.dlang.org/DIP80 lets get OT, please discussI think a std.bindings or something similar for ubiquitous C libraries would go a long way - _quality_(not just a wrapper) SDL, OpenGL, etc bindings. D is very attractive to game developers, I think with a little push it would get a lot of traction from this.
Jun 07 2015
On 8/06/2015 3:53 p.m., Manu via Digitalmars-d wrote:On 8 June 2015 at 13:15, weaselcat via Digitalmars-d <digitalmars-d puremagic.com> wrote:I'm definitely interested. Imagine getting something like that into phobos! Would be utterly amazing for us. Or atleast parts of it, once D-ified. Although might be worth doing tests using e.g. ldc to see how many platforms you can actually get working. Then perhaps an acceptance criteria before you port it?On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek wrote:I've been humoring the idea of porting my engine to D. It's about 15 years of development, better/cleaner than most proprietary engines I've used at game studios. I wonder if there would be interest in this? Problem is, I need all the cross compilers to exist before I pull the plug on the C code... a game engine is no good if it's not portable to all the consoles under the sun. That said, I think it would be a good case-study to get the cross compilers working against.Phobos is awesome, the libs of go, python and rust only have better marketing. As discussed on dconf, phobos needs to become big and blow the rest out of the sky. http://wiki.dlang.org/DIP80 lets get OT, please discussI think a std.bindings or something similar for ubiquitous C libraries would go a long way - _quality_(not just a wrapper) SDL, OpenGL, etc bindings. D is very attractive to game developers, I think with a little push it would get a lot of traction from this.
Jun 07 2015
On 8 June 2015 at 13:59, Rikki Cattermole via Digitalmars-d <digitalmars-d puremagic.com> wrote:On 8/06/2015 3:53 p.m., Manu via Digitalmars-d wrote:I can't really see a place for many parts in phobos... large parts of it are hardware/platform abstraction; would depend on many system library bindings present in phobos.On 8 June 2015 at 13:15, weaselcat via Digitalmars-d <digitalmars-d puremagic.com> wrote:I'm definitely interested. Imagine getting something like that into phobos! Would be utterly amazing for us. Or atleast parts of it, once D-ified.On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek wrote:I've been humoring the idea of porting my engine to D. It's about 15 years of development, better/cleaner than most proprietary engines I've used at game studios. I wonder if there would be interest in this? Problem is, I need all the cross compilers to exist before I pull the plug on the C code... a game engine is no good if it's not portable to all the consoles under the sun. That said, I think it would be a good case-study to get the cross compilers working against.Phobos is awesome, the libs of go, python and rust only have better marketing. As discussed on dconf, phobos needs to become big and blow the rest out of the sky. http://wiki.dlang.org/DIP80 lets get OT, please discussI think a std.bindings or something similar for ubiquitous C libraries would go a long way - _quality_(not just a wrapper) SDL, OpenGL, etc bindings. D is very attractive to game developers, I think with a little push it would get a lot of traction from this.Although might be worth doing tests using e.g. ldc to see how many platforms you can actually get working. Then perhaps an acceptance criteria before you port it?Yeah, it's a lot of work to do unit tests for parallel runtime systems that depend almost exclusively on user input or large bodies of external data... and where many of the outputs don't naturally feedback for analysis (render output, audio output). I can see a unit test framework being more code than most parts of the engine ;) .. not that it would be bad (it would be awesome!), I just can't imagine a simple/acceptable design. The thing I'm most happy about with Fuji is how relatively minimal it is (considering its scope and capability).
Jun 07 2015
On 8/06/2015 4:12 p.m., Manu via Digitalmars-d wrote:On 8 June 2015 at 13:59, Rikki Cattermole via Digitalmars-d <digitalmars-d puremagic.com> wrote:They would have to be manual tests. So e.g. throws exceptions happily and uses threads kind of thing. But where you load it up and run it. It could help the ldc and gdc guys know what is still missing for this use case.On 8/06/2015 3:53 p.m., Manu via Digitalmars-d wrote:I can't really see a place for many parts in phobos... large parts of it are hardware/platform abstraction; would depend on many system library bindings present in phobos.On 8 June 2015 at 13:15, weaselcat via Digitalmars-d <digitalmars-d puremagic.com> wrote:I'm definitely interested. Imagine getting something like that into phobos! Would be utterly amazing for us. Or atleast parts of it, once D-ified.On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek wrote:I've been humoring the idea of porting my engine to D. It's about 15 years of development, better/cleaner than most proprietary engines I've used at game studios. I wonder if there would be interest in this? Problem is, I need all the cross compilers to exist before I pull the plug on the C code... a game engine is no good if it's not portable to all the consoles under the sun. That said, I think it would be a good case-study to get the cross compilers working against.Phobos is awesome, the libs of go, python and rust only have better marketing. As discussed on dconf, phobos needs to become big and blow the rest out of the sky. http://wiki.dlang.org/DIP80 lets get OT, please discussI think a std.bindings or something similar for ubiquitous C libraries would go a long way - _quality_(not just a wrapper) SDL, OpenGL, etc bindings. D is very attractive to game developers, I think with a little push it would get a lot of traction from this.Although might be worth doing tests using e.g. ldc to see how many platforms you can actually get working. Then perhaps an acceptance criteria before you port it?Yeah, it's a lot of work to do unit tests for parallel runtime systems that depend almost exclusively on user input or large bodies of external data... and where many of the outputs don't naturally feedback for analysis (render output, audio output). I can see a unit test framework being more code than most parts of the engine ;) .. not that it would be bad (it would be awesome!), I just can't imagine a simple/acceptable design. The thing I'm most happy about with Fuji is how relatively minimal it is (considering its scope and capability).
Jun 07 2015
On 6/7/2015 8:53 PM, Manu via Digitalmars-d wrote:I've been humoring the idea of porting my engine to D. It's about 15 years of development, better/cleaner than most proprietary engines I've used at game studios. I wonder if there would be interest in this? Problem is, I need all the cross compilers to exist before I pull the plug on the C code... a game engine is no good if it's not portable to all the consoles under the sun. That said, I think it would be a good case-study to get the cross compilers working against.It's a chicken-and-egg thing. Somebody's got to start and not wait for the others.
Jun 08 2015
On 2015-06-08 05:53, Manu via Digitalmars-d wrote:I've been humoring the idea of porting my engine to D. It's about 15 years of development, better/cleaner than most proprietary engines I've used at game studios.Perhaps you could try using magicport. -- /Jacob Carlborg
Jun 08 2015
On Monday, 8 June 2015 at 03:53:52 UTC, Manu wrote:I've been humoring the idea of porting my engine to D. It's about 15 years of development, better/cleaner than most proprietary engines I've used at game studios. I wonder if there would be interest in this? Problem is, I need all the cross compilers to exist before I pull the plug on the C code... a game engine is no good if it's not portable to all the consoles under the sun. That said, I think it would be a good case-study to get the cross compilers working against.What cross-compilers are you waiting for? Nobody is working on XBone or PS4 as far as I know, but Dan's work on iOS seems pretty far along, if you want to try that out.
Jun 09 2015
On 9 June 2015 at 17:32, Joakim via Digitalmars-d <digitalmars-d puremagic.com> wrote:On Monday, 8 June 2015 at 03:53:52 UTC, Manu wrote:XBone works. PS4 is probably easy or already working. Android, iOS are critical. Nintendo platforms also exist. I would hope we'll see Emscripten and NaCl at some point; I could use them at work right now. The phones do appear to be moving recently, which is really encouraging.I've been humoring the idea of porting my engine to D. It's about 15 years of development, better/cleaner than most proprietary engines I've used at game studios. I wonder if there would be interest in this? Problem is, I need all the cross compilers to exist before I pull the plug on the C code... a game engine is no good if it's not portable to all the consoles under the sun. That said, I think it would be a good case-study to get the cross compilers working against.What cross-compilers are you waiting for? Nobody is working on XBone or PS4 as far as I know, but Dan's work on iOS seems pretty far along, if you want to try that out.
Jun 09 2015
On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek wrote:Phobos is awesome, the libs of go, python and rust only have better marketing. As discussed on dconf, phobos needs to become big and blow the rest out of the sky. http://wiki.dlang.org/DIP80 lets get OT, please discussIMHO, Phobos could include things like this as a standard : - OAuth (1 & 2), at least it would be useful for projects like vibe.d - Create / read QR codes, maybe ? It seems we see more and more QR Codes here and there, so it could potentially be worth it - Better / full OS bindings (winapi, x11, etc.), but it would (sadly) require a very large amount of work to do so. - + what has been said already. I guess we can try to think about integrating a large amount of things that are widely used and are considered as "standards"
Jun 08 2015
On 06/08/2015 03:55 AM, ezneh wrote:- Create / read QR codes, maybe ? It seems we see more and more QR Codes here and there, so it could potentially be worth itI see them everywhere, but does anyone ever actually use them? Usually it's just an obvious link to some company's marketing/advertising. It's basically just like the old CueCat, if anyone remembers it: <https://en.wikipedia.org/wiki/CueCat> Only time I've ever seen *anyone* actually using a QR code is when *I* use a "display QR link for this page" FF plugin to send the webpage I'm looking at to my phone. Maybe I'm just not seeing it, but I suspect QR is more someone that companies *want* people to care about, rather than something anyone actually uses.
Jun 13 2015
On Sat, 13 Jun 2015 11:46:41 -0400, Nick Sabalausky wrote:Maybe I'm just not seeing it, but I suspect QR is more someone that companies *want* people to care about, rather than something anyone actually uses.same for me.
Jun 13 2015
On 6/13/15 11:46 AM, Nick Sabalausky wrote:On 06/08/2015 03:55 AM, ezneh wrote:A rather cool usage of QR code I saw was a sticker on a device that was a link to the PDF of the manual. -Steve- Create / read QR codes, maybe ? It seems we see more and more QR Codes here and there, so it could potentially be worth itI see them everywhere, but does anyone ever actually use them? Usually it's just an obvious link to some company's marketing/advertising. It's basically just like the old CueCat, if anyone remembers it: <https://en.wikipedia.org/wiki/CueCat> Only time I've ever seen *anyone* actually using a QR code is when *I* use a "display QR link for this page" FF plugin to send the webpage I'm looking at to my phone. Maybe I'm just not seeing it, but I suspect QR is more someone that companies *want* people to care about, rather than something anyone actually uses.
Jun 13 2015
On Sat, 13 Jun 2015 21:57:42 -0400, Steven Schveighoffer wrote:A rather cool usage of QR code I saw was a sticker on a device that was a link to the PDF of the manual.it's k001, but i'll take a printed URL for it in any time. the old good URL that i can read with my eyes.
Jun 13 2015
On Sunday, 14 June 2015 at 01:57:37 UTC, Steven Schveighoffer wrote:On 6/13/15 11:46 AM, Nick Sabalausky wrote:Then there's always this: http://www.theverge.com/2015/6/19/8811425/heinz-ketchup-qr-code-porn-site-fundorado Not the fault of the QR code of course, just an expired domain name, but still funny. :)On 06/08/2015 03:55 AM, ezneh wrote:A rather cool usage of QR code I saw was a sticker on a device that was a link to the PDF of the manual.- Create / read QR codes, maybe ? It seems we see more and more QR Codes here and there, so it could potentially be worth itI see them everywhere, but does anyone ever actually use them? Usually it's just an obvious link to some company's marketing/advertising. It's basically just like the old CueCat, if anyone remembers it: <https://en.wikipedia.org/wiki/CueCat> Only time I've ever seen *anyone* actually using a QR code is when *I* use a "display QR link for this page" FF plugin to send the webpage I'm looking at to my phone. Maybe I'm just not seeing it, but I suspect QR is more someone that companies *want* people to care about, rather than something anyone actually uses.
Jun 19 2015
On 6/19/15 9:50 PM, Joakim wrote:Then there's always this: http://www.theverge.com/2015/6/19/8811425/heinz-ketchup-qr-code-porn-site-fundorado Not the fault of the QR code of course, just an expired domain name, but still funny. :)Oh man. Note to marketing department -- all QR codes must point to ourcompany.com, you can redirect from there!!! -Steve
Jun 21 2015
On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek wrote:Phobos is awesome, the libs of go, python and rust only have better marketing. As discussed on dconf, phobos needs to become big and blow the rest out of the sky. http://wiki.dlang.org/DIP80 lets get OT, please discussWhat I'd like in phobos: - OS bindings (more complete win32, Cocoa etc) - DerelictUtil - allocators That's about it.
Jun 08 2015
On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek wrote:http://wiki.dlang.org/DIP80 lets get OT, please discussAutomatic randomizer for builtins, ranges, etc. Used to generate data for tests. Here's a start: https://github.com/nordlow/justd/blob/master/random_ex.d
Jun 08 2015
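As a rough illustration of the idea (this is not the API of the linked random_ex.d; the randomize name and its overloads are invented for the example), such a test-data generator for built-ins and arrays can be a thin layer over std.random:

import std.random : Random, uniform, unpredictableSeed;
import std.traits : isIntegral, isFloatingPoint;

// Fill a built-in integral with a value drawn from its whole range.
void randomize(T)(ref T x, ref Random gen) if (isIntegral!T)
{
    x = uniform!T(gen);
}

// Floating point: an arbitrary interval is enough for test data.
void randomize(T)(ref T x, ref Random gen) if (isFloatingPoint!T)
{
    x = cast(T) uniform(-1.0, 1.0, gen);
}

// Arrays (including arrays of arrays): fill every element in place.
void randomize(T)(T[] a, ref Random gen)
{
    foreach (ref e; a)
        randomize(e, gen);
}

unittest
{
    auto gen = Random(unpredictableSeed);
    int i;
    randomize(i, gen);
    auto buf = new double[16];
    randomize(buf, gen);
}

A range version would follow the same pattern, walking the elements and filling each one.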
On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek wrote:Phobos is awesome, the libs of go, python and rust only have better marketing. As discussed on dconf, phobos needs to become big and blow the rest out of the sky. http://wiki.dlang.org/DIP80 lets get OT, please discussThere are https://github.com/9il/simple_matrix and https://github.com/9il/cblas . I will try to rework them for Phobos. Any ideas and suggestions? Some notes about portability: 1. OS X has the Accelerate framework built in. 2. Linux has BLAS by default, or it can be easily installed. However, the default BLAS is very slow; OpenBLAS is preferred. 3. Looks like there is no simple way to have BLAS support on Windows. Should we provide a BLAS library with DMD for Windows and maybe Linux? Regards, Ilya
Jun 08 2015
On Tuesday, 9 June 2015 at 03:26:25 UTC, Ilya Yaroshenko wrote:On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek wrote:... probably std.container.Array is good template to start.Phobos is awesome, the libs of go, python and rust only have better marketing. As discussed on dconf, phobos needs to become big and blow the rest out of the sky. http://wiki.dlang.org/DIP80 lets get OT, please discussThere are https://github.com/9il/simple_matrix and https://github.com/9il/cblas . I will try to rework them for Phobos. Any ideas and suggestions? Some notes about portability: 1. OS X has Accelerated framework builtin. 2. Linux has blast by default or it can be easily installed. However default blast is very slow. The openBLAS is preferred. 3. Looks like there is no simple way to have BLAS support on Windows. Should we provide BLAS library with DMD for Windows and maybe Linux? Regards, Ilya
Jun 08 2015
On 6/8/15 8:26 PM, Ilya Yaroshenko wrote:On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek wrote:I think licensing matters would make this difficult. What I do think we can do is: (a) Provide standard data layouts in std.array for the typical shapes supported by linear algebra libs: row major, column major, alongside with striding primitives. (b) Provide signatures for C and Fortran libraries so people who have them can use them easily with D. (c) Provide high-level wrappers on top of those functions. AndreiPhobos is awesome, the libs of go, python and rust only have better marketing. As discussed on dconf, phobos needs to become big and blow the rest out of the sky. http://wiki.dlang.org/DIP80 lets get OT, please discussThere are https://github.com/9il/simple_matrix and https://github.com/9il/cblas . I will try to rework them for Phobos. Any ideas and suggestions? Some notes about portability: 1. OS X has Accelerated framework builtin. 2. Linux has blast by default or it can be easily installed. However default blast is very slow. The openBLAS is preferred. 3. Looks like there is no simple way to have BLAS support on Windows. Should we provide BLAS library with DMD for Windows and maybe Linux?
Jun 08 2015
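To make point (a) above concrete, here is a minimal sketch of how one flat buffer can be presented as either row-major or column-major purely through lengths and strides. This is illustration only, not a proposed Phobos API, and StridedView is an invented name:

import std.stdio : writeln;

// A 2-D view over a flat buffer, described only by lengths and strides.
struct StridedView(T)
{
    T[] data;
    size_t rows, cols;
    size_t rowStride, colStride; // measured in elements, not bytes

    ref T opIndex(size_t i, size_t j)
    {
        return data[i * rowStride + j * colStride];
    }
}

void main()
{
    auto buf = new double[6];
    foreach (i, ref e; buf) e = i; // 0 1 2 3 4 5

    // Same memory, two layouts: 2x3 row-major vs. 2x3 column-major.
    auto rowMajor = StridedView!double(buf, 2, 3, 3, 1);
    auto colMajor = StridedView!double(buf, 2, 3, 1, 2);

    writeln(rowMajor[0, 1]); // 1: offset 0*3 + 1*1
    writeln(colMajor[0, 1]); // 2: offset 0*1 + 1*2
}

Transposition falls out for free (swap the lengths and the strides), and the leading-dimension arguments of BLAS routines are just a constrained form of the same stride idea.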
On Tuesday, 9 June 2015 at 06:59:07 UTC, Andrei Alexandrescu wrote:On 6/8/15 8:26 PM, Ilya Yaroshenko wrote:I don't think this is quite the right approach. Multidimensional arrays and matrices are about accessing and iteration over data, not data structures themselves. The standard layouts are common special cases.On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek wrote:I think licensing matters would make this difficult. What I do think we can do is: (a) Provide standard data layouts in std.array for the typical shapes supported by linear algebra libs: row major, column major, alongside with striding primitives.Phobos is awesome, the libs of go, python and rust only have better marketing. As discussed on dconf, phobos needs to become big and blow the rest out of the sky. http://wiki.dlang.org/DIP80 lets get OT, please discussThere are https://github.com/9il/simple_matrix and https://github.com/9il/cblas . I will try to rework them for Phobos. Any ideas and suggestions? Some notes about portability: 1. OS X has Accelerated framework builtin. 2. Linux has blast by default or it can be easily installed. However default blast is very slow. The openBLAS is preferred. 3. Looks like there is no simple way to have BLAS support on Windows. Should we provide BLAS library with DMD for Windows and maybe Linux?(b) Provide signatures for C and Fortran libraries so people who have them can use them easily with D. (c) Provide high-level wrappers on top of those functions. AndreiThat is how e.g. numpy works and it's OK, but D can do better. Ilya, I'm very interested in discussing this further with you. I have a reasonable idea and implementation of how I would want the generic n-dimensional types in D to work, but you seem to have more experience with BLAS and LAPACK than me* and of course interfacing with them is critical. *I rarely interact with them directly.
Jun 09 2015
On Tuesday, 9 June 2015 at 08:50:16 UTC, John Colvin wrote:On Tuesday, 9 June 2015 at 06:59:07 UTC, Andrei Alexandrescu wrote:Probably we need both approaches: [1]. Multidimensional random access slices (ranges, not only arrays) We can do it easily: size_t anyNumber; auto ar = new int[3 * 8 * 9 + anyNumber]; auto slice = Slice[0..3, 4..8, 1..9]; assert(ar.canBeSlicedWith(slice)); //checks that ar.length <= 3 * 8 * 9 auto tensor = ar.sliced(slice); tensor[0, 1, 2] = 4; auto matrix = tensor[0..$, 1, 0..$]; assert(matrix[0, 2] == 4); [2]. BLAS Transposed.no/yes and Major.row/column (naming can be changed) flags for plain 2D matrixes based on 2.1 D arrays (both GC and manual memory management) 2.2 std.container.array (RefCounted) RowMajor and RowMinor are not needed if Transposed is already defined. But this stuff helps engineers implement software in terms of corresponding mathematical documentation. I hope to create nogc versions for 2.1 and 2.2 (because GC is not needed for slices). Furthermore [2] can be based on [1].On 6/8/15 8:26 PM, Ilya Yaroshenko wrote:I don't think this is quite the right approach. Multidimensional arrays and matrices are about accessing and iteration over data, not data structures themselves. The standard layouts are common special cases.On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek wrote:I think licensing matters would make this difficult. What I do think we can do is: (a) Provide standard data layouts in std.array for the typical shapes supported by linear algebra libs: row major, column major, alongside with striding primitives.Phobos is awesome, the libs of go, python and rust only have better marketing. As discussed on dconf, phobos needs to become big and blow the rest out of the sky. http://wiki.dlang.org/DIP80 lets get OT, please discussThere are https://github.com/9il/simple_matrix and https://github.com/9il/cblas . I will try to rework them for Phobos. Any ideas and suggestions? Some notes about portability: 1. OS X has Accelerated framework builtin. 2. Linux has blast by default or it can be easily installed. However default blast is very slow. The openBLAS is preferred. 3. Looks like there is no simple way to have BLAS support on Windows. Should we provide BLAS library with DMD for Windows and maybe Linux?John, please describe your ideas and use cases. I think github issues is more convenient place. You have opened https://github.com/kyllingstad/scid/issues/24 . So I think we can past our code examples at SciD issue.(b) Provide signatures for C and Fortran libraries so people who have them can use them easily with D. (c) Provide high-level wrappers on top of those functions. AndreiThat is how e.g. numpy works and it's OK, but D can do better. Ilya, I'm very interested in discussing this further with you. I have a reasonable idea and implementation of how I would want the generic n-dimensional types in D to work, but you seem to have more experience with BLAS and LAPACK than me* and of course interfacing with them is critical. *I rarely interact with them directly.
Jun 09 2015
size_t anyNumber; auto ar = new int[3 * 8 * 9 + anyNumber]; auto slice = Slice[0..3, 4..8, 1..9]; assert(ar.canBeSlicedWith(slice)); //checks that ar.length <= 3 * 8 * 9 auto tensor = ar.sliced(slice); tensor[0, 1, 2] = 4; auto matrix = tensor[0..$, 1, 0..$]; assert(matrix[0, 2] == 4);assert(&matrix[0, 2] is &tensor[0, 1, 2]);
Jun 09 2015
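To see why that aliasing assert holds, here is the index arithmetic such a design implies, written out by hand. Slice and sliced above are names from Ilya's sketch; the code below is only the standard row-major stride calculation (assuming, for simplicity, a packed 3 x 4 x 8 view), not code from the actual PR:

void main()
{
    // Lengths of the view [0..3, 4..8, 1..9]: 3 x 4 x 8.
    size_t[3] lengths = [3, 4, 8];
    // Row-major strides over the flat array: the innermost dimension is contiguous.
    size_t[3] strides = [lengths[1] * lengths[2], lengths[2], 1]; // [32, 8, 1]

    // tensor[0, 1, 2] lives at this flat offset:
    size_t off3 = 0 * strides[0] + 1 * strides[1] + 2 * strides[2]; // 10

    // matrix = tensor[0..$, 1, 0..$] fixes the middle index at 1, giving a
    // 3 x 8 view with strides [32, 1] and base offset 1 * strides[1].
    size_t off2 = 1 * strides[1] + 0 * strides[0] + 2 * 1; // matrix[0, 2] -> 10

    assert(off3 == off2); // same flat element, hence &matrix[0, 2] is &tensor[0, 1, 2]
}

No data is copied anywhere: a sub-slice is just new lengths, new strides and a base offset into the same memory, which is what makes matrix = tensor[0..$, 1, 0..$] cheap.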
Ilya, I'm very interested in discussing this further with you. I have a reasonable idea and implementation of how I would want the generic n-dimensional types in D to work, but you seem to have more experience with BLAS and LAPACK than me* and of course interfacing with them is critical. *I rarely interact with them directly.I have created Phobos PR. Now we can discuss it at GitHub. https://github.com/D-Programming-Language/phobos/pull/3397
Jun 09 2015
On Tuesday, 9 June 2015 at 08:50:16 UTC, John Colvin wrote:I don't think this is quite the right approach. Multidimensional arrays and matrices are about accessing and iteration over data, not data structures themselves. The standard layouts are common special cases.Yes, I really want D to support multidimensional arrays, matrices, rational numbers and quaternions. I believe that Phobos must support some common methods of linear algebra and general mathematics. I have no desire to join D with Fortran libraries :) Rational numbers and quaternions have already been implemented here: https://github.com/k3kaimu/carbon/blob/master/source/carbon/rational.d https://github.com/k3kaimu/carbon/blob/master/source/carbon/quaternion.d Satisfactory work on matrices is here: https://github.com/k3kaimu/carbon/blob/master/source/carbon/linear.d
Jun 09 2015
I believe that Phobos must support some common methods of linear algebra and general mathematics. I have no desire to join D with Fortran libraries :)D definitely needs BLAS API support for matrix multiplication. Best BLAS libraries are written in assembler like openBLAS. Otherwise D will have last position in corresponding math benchmarks.
Jun 09 2015
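For context on how thin the binding layer is: a rough sketch, assuming the standard netlib CBLAS signature and enum values for cblas_dgemm and linking against whichever BLAS is available (OpenBLAS, Accelerate, ...). This is only an illustration, not the proposed Phobos module:

// extern(C) declaration of the standard CBLAS dgemm entry point.
// CBLAS enums are plain C ints, so they are declared as int here.
extern (C) void cblas_dgemm(
    int order, int transA, int transB,
    int m, int n, int k,
    double alpha, const(double)* a, int lda,
    const(double)* b, int ldb,
    double beta, double* c, int ldc);

enum CblasRowMajor = 101;
enum CblasNoTrans  = 111;

// C = A * B for row-major m x k and k x n matrices stored in flat arrays.
void matmul(const(double)[] a, const(double)[] b, double[] c,
            int m, int n, int k)
{
    assert(a.length == m * k && b.length == k * n && c.length == m * n);
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                m, n, k,
                1.0, a.ptr, k,   // lda = k for row-major A
                b.ptr, n,        // ldb = n for row-major B
                0.0, c.ptr, n);  // ldc = n for row-major C
}

The hard part is not the declaration but exactly the portability question raised earlier in the thread: which BLAS to link on each platform and whether to ship one.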
On Tuesday, 9 June 2015 at 15:26:43 UTC, Ilya Yaroshenko wrote:D definitely needs BLAS API support for matrix multiplication. Best BLAS libraries are written in assembler like openBLAS. Otherwise D will have last position in corresponding math benchmarks.Yes, programs in D are clearly lagging behind those of the Wolfram Mathematica programmers :) https://projecteuler.net/language=D https://projecteuler.net/language=Mathematica To solve these problems you need something like Blas. Perhaps BLAS - it's more practical way to enrich D techniques for working with matrices.
Jun 09 2015
On Tuesday, 9 June 2015 at 16:14:24 UTC, Dennis Ritchie wrote:To solve these problems you need something like Blas. Perhaps BLAS - it's more practical way to enrich D techniques for working with matrices.Actually, that's what you need to realize in D: http://www.boost.org/doc/libs/1_58_0/libs/numeric/ublas/doc/index.html
Jun 09 2015
On Tuesday, 9 June 2015 at 16:16:39 UTC, Dennis Ritchie wrote:On Tuesday, 9 June 2015 at 16:14:24 UTC, Dennis Ritchie wrote:This is very good stuff. However I want to create something simpler: [1]. n-dimensional slices (without matrix multiplication, "RowMajor/..." and other math features) [2]. a netlib-like standard CBLAS API at `etc.blas.cblas` [3]. High-level bindings to connect [1] and the 1-2D subset of [2].To solve these problems you need something like BLAS. Perhaps BLAS is the more practical way to enrich D's techniques for working with matrices.Actually, this is the kind of thing that needs to be implemented in D: http://www.boost.org/doc/libs/1_58_0/libs/numeric/ublas/doc/index.html
Jun 09 2015
On 6/9/15 9:16 AM, Dennis Ritchie wrote:On Tuesday, 9 June 2015 at 16:14:24 UTC, Dennis Ritchie wrote:"And finally uBLAS offers good (but not outstanding) performance." -- AndreiTo solve these problems you need something like Blas. Perhaps BLAS - it's more practical way to enrich D techniques for working with matrices.Actually, that's what you need to realize in D: http://www.boost.org/doc/libs/1_58_0/libs/numeric/ublas/doc/index.html
Jun 09 2015
On Tuesday, 9 June 2015 at 17:19:28 UTC, Andrei Alexandrescu wrote:On 6/9/15 9:16 AM, Dennis Ritchie wrote:OK, but... I can say the same thing about BigInt in Phobos. "And finally `std.bigint` offers good (but not outstanding) performance." I solved 17 math problems and for most of them I needed a `BigInt`: http://i.imgur.com/CmOSm7V.png https://projecteuler.net/language=D If D did not have `BigInt`, I probably would have used Boost.Multiprecision in C++: http://www.boost.org/doc/libs/1_58_0/libs/multiprecision/doc/html/index.html Or written some slow Python. Maybe none of this gives huge performance, but for a wide range of mathematical problems it all helps. Thus, it is better to have something than nothing :) And BLAS is more than something...On Tuesday, 9 June 2015 at 16:14:24 UTC, Dennis Ritchie wrote:"And finally uBLAS offers good (but not outstanding) performance." -- AndreiTo solve these problems you need something like BLAS. Perhaps BLAS is the more practical way to enrich D's techniques for working with matrices.Actually, this is the kind of thing that needs to be implemented in D: http://www.boost.org/doc/libs/1_58_0/libs/numeric/ublas/doc/index.html
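For what it is worth, a typical Euler-style computation where `std.bigint` is indispensable even without top performance (a minimal sketch; 648 is the well-known digit sum of 100!):
----
import std.algorithm : map, sum;
import std.bigint : BigInt;
import std.conv : to;

void main()
{
    // Digit sum of 100! -- far beyond ulong, trivial with BigInt.
    auto f = BigInt(1);
    foreach (i; 2 .. 101)
        f *= i;
    auto digits = f.to!string.map!(c => c - '0').sum;
    assert(digits == 648);
}
----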
Jun 09 2015
On 6/9/15 11:42 AM, Dennis Ritchie wrote:"And finally `std.bigint` offers good (but not outstanding) performance."BigInt should use reference counting. Its current approach to allocating new memory for everything is a liability. Could someone file a report for this please. -- Andrei
Jun 09 2015
On Tuesday, 9 June 2015 at 18:58:56 UTC, Andrei Alexandrescu wrote:On 6/9/15 11:42 AM, Dennis Ritchie wrote:Done: https://issues.dlang.org/show_bug.cgi?id=14673"And finally `std.bigint` offers good (but not outstanding) performance."BigInt should use reference counting. Its current approach to allocating new memory for everything is a liability. Could someone file a report for this please. -- Andrei
Jun 09 2015
On 6/9/15 12:21 PM, Dennis Ritchie wrote:On Tuesday, 9 June 2015 at 18:58:56 UTC, Andrei Alexandrescu wrote:Thanks! -- AndreiOn 6/9/15 11:42 AM, Dennis Ritchie wrote:Done: https://issues.dlang.org/show_bug.cgi?id=14673"And finally `std.bigint` offers good (but not outstanding) performance."BigInt should use reference counting. Its current approach to allocating new memory for everything is a liability. Could someone file a report for this please. -- Andrei
Jun 09 2015
On 6/9/15 2:59 PM, Andrei Alexandrescu wrote:On 6/9/15 11:42 AM, Dennis Ritchie wrote:Slightly OT, but this reminds me. RefCounted is not viable when using the GC, because any references on the heap may race against stack-based references. Can we make RefCounted use atomicInc and atomicDec? It will hurt performance a bit, but the current state is not good. I spoke with Erik about this, as he was planning on using RefCounted, but didn't know about the hairy issues with the GC. If we get to a point where we can have a thread-local GC, we can remove the implementation detail of using atomic operations when possible. -Steve"And finally `std.bigint` offers good (but not outstanding) performance."BigInt should use reference counting. Its current approach to allocating new memory for everything is a liability. Could someone file a report for this please. -- Andrei
Jun 09 2015
On 6/9/15 1:53 PM, Steven Schveighoffer wrote:On 6/9/15 2:59 PM, Andrei Alexandrescu wrote:How do you mean that?On 6/9/15 11:42 AM, Dennis Ritchie wrote:Slightly OT, but this reminds me. RefCounted is not viable when using the GC, because any references on the heap may race against stack-based references."And finally `std.bigint` offers good (but not outstanding) performance."BigInt should use reference counting. Its current approach to allocating new memory for everything is a liability. Could someone file a report for this please. -- AndreiCan we make RefCounted use atomicInc and atomicDec? It will hurt performance a bit, but the current state is not good. I spoke with Erik about this, as he was planning on using RefCounted, but didn't know about the hairy issues with the GC. If we get to a point where we can have a thread-local GC, we can remove the implementation detail of using atomic operations when possible.The obvious solution that comes to mind is adding a Flag!"interlocked". -- Andrei
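A minimal sketch of how such a flag could select the counting strategy at compile time (a hypothetical `Count` type for illustration, not the actual std.typecons.RefCounted code):
----
import core.atomic : atomicLoad, atomicOp;
import std.typecons : Flag, No, Yes;

// Hypothetical counter whose increments are interlocked only on request.
struct Count(Flag!"interlocked" interlocked = No.interlocked)
{
    static if (interlocked)
        shared size_t value;
    else
        size_t value;

    void increment()
    {
        static if (interlocked)
            atomicOp!"+="(value, 1);   // safe against other threads
        else
            ++value;                   // cheap thread-local path
    }
}

void main()
{
    Count!(No.interlocked) fast;
    Count!(Yes.interlocked) safe;
    fast.increment();
    safe.increment();
    assert(fast.value == 1);
    assert(atomicLoad(safe.value) == 1);
}
----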
Jun 09 2015
On 6/9/15 5:46 PM, Andrei Alexandrescu wrote:On 6/9/15 1:53 PM, Steven Schveighoffer wrote:If you add an instance of RefCounted to a GC-destructed type (either in an array, or as a member of a class), there is the potential that the GC will run the dtor of the RefCounted item in a different thread, opening up the possibility of races.On 6/9/15 2:59 PM, Andrei Alexandrescu wrote:How do you mean that?On 6/9/15 11:42 AM, Dennis Ritchie wrote:Slightly OT, but this reminds me. RefCounted is not viable when using the GC, because any references on the heap may race against stack-based references."And finally `std.bigint` offers good (but not outstanding) performance."BigInt should use reference counting. Its current approach to allocating new memory for everything is a liability. Could someone file a report for this please. -- AndreiCan you explain it further? It's not obvious to me. -SteveCan we make RefCounted use atomicInc and atomicDec? It will hurt performance a bit, but the current state is not good. I spoke with Erik about this, as he was planning on using RefCounted, but didn't know about the hairy issues with the GC. If we get to a point where we can have a thread-local GC, we can remove the implementation detail of using atomic operations when possible.The obvious solution that comes to mind is adding a Flag!"interlocked".
Jun 10 2015
On 6/10/15 3:52 AM, Steven Schveighoffer wrote:On 6/9/15 5:46 PM, Andrei Alexandrescu wrote:That's a problem with the GC. Collected memory must be deallocated in the thread that allocated it. It's not really that complicated to implement, either - the collection process puts the memory to deallocate in a per-thread freelist; then when each thread wakes up and tries to allocate things, it first allocates from the freelist.On 6/9/15 1:53 PM, Steven Schveighoffer wrote:If you add an instance of RefCounted to a GC-destructed type (either in an array, or as a member of a class), there is the potential that the GC will run the dtor of the RefCounted item in a different thread, opening up the possibility of races.On 6/9/15 2:59 PM, Andrei Alexandrescu wrote:How do you mean that?On 6/9/15 11:42 AM, Dennis Ritchie wrote:Slightly OT, but this reminds me. RefCounted is not viable when using the GC, because any references on the heap may race against stack-based references."And finally `std.bigint` offers good (but not outstanding) performance."BigInt should use reference counting. Its current approach to allocating new memory for everything is a liability. Could someone file a report for this please. -- AndreiThe RefCounted type could have a flag as a template parameter. AndreiCan you explain it further? It's not obvious to me.Can we make RefCounted use atomicInc and atomicDec? It will hurt performance a bit, but the current state is not good. I spoke with Erik about this, as he was planning on using RefCounted, but didn't know about the hairy issues with the GC. If we get to a point where we can have a thread-local GC, we can remove the implementation detail of using atomic operations when possible.The obvious solution that comes to mind is adding a Flag!"interlocked".
Jun 10 2015
On 6/10/15 11:49 AM, Andrei Alexandrescu wrote:On 6/10/15 3:52 AM, Steven Schveighoffer wrote:I agree it's a problem with the GC, but not that it's a simple fix. It's not just a freelist -- the dtor needs to be run in the thread also. But the amount of affected code (i.e. any code that uses GC) makes this a very high risk change, whereas changing RefCounted is a 2-line change that is easy to prove/review. I will make the RefCounted atomic PR if you can accept that.On 6/9/15 5:46 PM, Andrei Alexandrescu wrote:That's a problem with the GC. Collected memory must be deallocated in the thread that allocated it. It's not really that complicated to implement, either - the collection process puts the memory to deallocate in a per-thread freelist; then when each thread wakes up and tries to allocate things, it first allocates from the freelist.On 6/9/15 1:53 PM, Steven Schveighoffer wrote:If you add an instance of RefCounted to a GC-destructed type (either in an array, or as a member of a class), there is the potential that the GC will run the dtor of the RefCounted item in a different thread, opening up the possibility of races.On 6/9/15 2:59 PM, Andrei Alexandrescu wrote:How do you mean that?On 6/9/15 11:42 AM, Dennis Ritchie wrote:Slightly OT, but this reminds me. RefCounted is not viable when using the GC, because any references on the heap may race against stack-based references."And finally `std.bigint` offers good (but not outstanding) performance."BigInt should use reference counting. Its current approach to allocating new memory for everything is a liability. Could someone file a report for this please. -- AndreiOK, thanks for the explanation. I'd do it the other way around: Flag!"threadlocal", since we should be safe by default. -SteveThe RefCounted type could have a flag as a template parameter.Can you explain it further? It's not obvious to me.Can we make RefCounted use atomicInc and atomicDec? It will hurt performance a bit, but the current state is not good. I spoke with Erik about this, as he was planning on using RefCounted, but didn't know about the hairy issues with the GC. If we get to a point where we can have a thread-local GC, we can remove the implementation detail of using atomic operations when possible.The obvious solution that comes to mind is adding a Flag!"interlocked".
Jun 10 2015
On Wednesday, 10 June 2015 at 20:31:52 UTC, Steven Schveighoffer wrote:OK, thanks for the explanation. I'd do it the other way around: Flag!"threadlocal", since we should be safe by default.`RefCounted!T` is also thread-local by default, only `shared(RefCounted!T)` needs to use atomic operations.
Jun 11 2015
On 6/11/15 4:15 AM, "Marc =?UTF-8?B?U2Now7x0eiI=?= <schuetzm gmx.net>" wrote:On Wednesday, 10 June 2015 at 20:31:52 UTC, Steven Schveighoffer wrote:I may have misunderstood Andrei. We can't just use a flag to fix this problem, all allocations are in danger of races (even thread-local ones). But maybe he meant *after* we fix the GC we could add a flag? I'm not sure. A flag at this point would be a band-aid fix, allowing one to optimize if one knows that his code never puts RefCounted instances on the heap. Hard to prove... -SteveOK, thanks for the explanation. I'd do it the other way around: Flag!"threadlocal", since we should be safe by default.`RefCounted!T` is also thread-local by default, only `shared(RefCounted!T)` needs to use atomic operations.
Jun 11 2015
On 6/11/15 5:17 AM, Steven Schveighoffer wrote:On 6/11/15 4:15 AM, "Marc =?UTF-8?B?U2Now7x0eiI=?= <schuetzm gmx.net>" wrote:Yes, we definitely need to fix the GC. -- AndreiOn Wednesday, 10 June 2015 at 20:31:52 UTC, Steven Schveighoffer wrote:I may have misunderstood Andrei. We can't just use a flag to fix this problem, all allocations are in danger of races (even thread-local ones). But maybe he meant *after* we fix the GC we could add a flag? I'm not sure.OK, thanks for the explanation. I'd do it the other way around: Flag!"threadlocal", since we should be safe by default.`RefCounted!T` is also thread-local by default, only `shared(RefCounted!T)` needs to use atomic operations.
Jun 11 2015
On Tuesday, 9 June 2015 at 16:14:24 UTC, Dennis Ritchie wrote:On Tuesday, 9 June 2015 at 15:26:43 UTC, Ilya Yaroshenko wrote:I suspect this is more about who the Mathematica and D users are as Project Euler is mostly mathematical rather than code optimization. More of the Mathematica users would have strong maths backgrounds. I haven't felt held back by D at all, it's only been my own lack of ability. I'm in 2nd place atm for D users.D definitely needs BLAS API support for matrix multiplication. Best BLAS libraries are written in assembler like openBLAS. Otherwise D will have last position in corresponding math benchmarks.Yes, those programs on D, is clearly lagging behind the programmers Wolfram Mathematica :) https://projecteuler.net/language=D https://projecteuler.net/language=Mathematica To solve these problems you need something like Blas. Perhaps BLAS - it's more practical way to enrich D techniques for working with matrices.
Jun 10 2015
On Wednesday, 10 June 2015 at 08:39:12 UTC, ixid wrote:I suspect this is more about who the Mathematica and D users are as Project Euler is mostly mathematical rather than code optimization. More of the Mathematica users would have strong maths backgrounds. I haven't felt held back by D at all, it's only been my own lack of ability. I'm in 2nd place atm for D users.OK, if D at least gets BLAS, I will try to overtake you :)
Jun 10 2015
On Wednesday, 10 June 2015 at 08:39:12 UTC, ixid wrote:I suspect this is more about who the Mathematica and D users are as Project Euler is mostly mathematical rather than code optimization.That is my point: even though BigInt in D is not optimized very well, it helps me solve a wide range of tasks that do not require high performance, so I want BLAS or something similar to be in D. Something is better than nothing!
Jun 10 2015
On Wednesday, 10 June 2015 at 08:50:31 UTC, Dennis Ritchie wrote:On Wednesday, 10 June 2015 at 08:39:12 UTC, ixid wrote:You rarely need to use BigInt for heavy lifting though, often it's just summing, not that I would argue against optimization. I think speed is absolutely vital and one of the most powerful things we could do to promote D would be to run the best benchmarks site for all language comers and make sure D does very well. Every time there's a benchmark contest it seems to unearth D performance issues that can be greatly improved upon. I'm sure you will beat me pretty quickly, as I said my maths isn't very good but it might motivate me to solve some more! =)I suspect this is more about who the Mathematica and D users are as Project Euler is mostly mathematical rather than code optimization.Here and I say that despite the fact that in D BigInt not optimized very well, it helps me to solve a wide range of tasks that do not require high performance, so I want to BLAS or something similar was in D. Something is better than nothing!
Jun 10 2015
On Wednesday, 10 June 2015 at 09:43:47 UTC, ixid wrote:You rarely need to use BigInt for heavy lifting though, often it's just summing, not that I would argue against optimization. I think speed is absolutely vital and one of the most powerful things we could do to promote D would be to run the best benchmarks site for all language comers and make sure D does very well. Every time there's a benchmark contest it seems to unearth D performance issues that can be greatly improved upon.Yes it is. Many people try to find performance problems in D, and sometimes they succeed.I'm sure you will beat me pretty quickly, as I said my maths isn't very good but it might motivate me to solve some more! =)No, I will not be able to start beating you until next year, because, unfortunately, I will not have access to a computer for a full year. We can say that this is something like a long vacation :)
Jun 10 2015
On 10 June 2015 at 01:26, Ilya Yaroshenko via Digitalmars-d <digitalmars-d puremagic.com> wrote:A complication for linear algebra (or other mathsy things in general) is the inability to detect and implement compound operations. We don't declare mathematical operators to be algebraic operations, which I think is a lost opportunity. If we defined the properties along with their properties (commutativity, transitivity, invertibility, etc), then the compiler could potentially do an algebraic simplification on expressions before performing codegen and optimisation. There are a lot of situations where the optimiser can't simplify expressions because it runs into an arbitrary function call, and I've never seen an optimiser that understands exp/log/roots, etc, to the point where it can reduce those expressions properly. To compete with maths benchmarks, we need some means to simplify expressions properly.I believe that Phobos must support some common methods of linear algebra and general mathematics. I have no desire to join D with Fortran libraries :)D definitely needs BLAS API support for matrix multiplication. Best BLAS libraries are written in assembler like openBLAS. Otherwise D will have last position in corresponding math benchmarks.
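To make the point concrete, here is the kind of algebraic rewrite no optimiser will currently perform across transcendental calls, so it has to be done by hand (a hedged illustration only, not a proposal):
----
import std.math : approxEqual, exp, log, sqrt;

// Analytically identical, but the optimiser will not turn one into the other.
double geometricMeanNaive(double a, double b)
{
    return exp((log(a) + log(b)) / 2);   // three transcendental calls
}

double geometricMeanSimplified(double a, double b)
{
    return sqrt(a * b);                  // one multiply and one sqrt
}

void main()
{
    assert(approxEqual(geometricMeanNaive(2.0, 8.0), 4.0));
    assert(geometricMeanSimplified(2.0, 8.0) == 4.0);
}
----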
Jun 09 2015
On Tuesday, 9 June 2015 at 16:18:06 UTC, Manu wrote:On 10 June 2015 at 01:26, Ilya Yaroshenko via Digitalmars-d <digitalmars-d puremagic.com> wrote:Optimising floating point is a massive pain because of precision concerns and IEEE-754 conformance. Just because something is analytically the same doesn't mean you want the optimiser to go ahead and make the switch for you. Of the things that can be done, lazy operations should make it easier/possible for the optimiser to spot.A complication for linear algebra (or other mathsy things in general) is the inability to detect and implement compound operations. We don't declare mathematical operators to be algebraic operations, which I think is a lost opportunity. If we defined the properties along with their properties (commutativity, transitivity, invertibility, etc), then the compiler could potentially do an algebraic simplification on expressions before performing codegen and optimisation. There are a lot of situations where the optimiser can't simplify expressions because it runs into an arbitrary function call, and I've never seen an optimiser that understands exp/log/roots, etc, to the point where it can reduce those expressions properly. To compete with maths benchmarks, we need some means to simplify expressions properly.I believe that Phobos must support some common methods of linear algebra and general mathematics. I have no desire to join D with Fortran libraries :)D definitely needs BLAS API support for matrix multiplication. Best BLAS libraries are written in assembler like openBLAS. Otherwise D will have last position in corresponding math benchmarks.
Jun 09 2015
On 10 June 2015 at 02:32, John Colvin via Digitalmars-d <digitalmars-d puremagic.com> wrote:On Tuesday, 9 June 2015 at 16:18:06 UTC, Manu wrote:We have flags to control this sort of thing (fast-math, strict ieee, etc). I will worry about my precision, I just want the optimiser to do its job and do the very best it possibly can. In the case of linear algebra, the optimiser generally fails and I must manually simplify expressions as much as possible. In the event the expressions emerge as a result of a series of inlines, or generic code (the sort that appears frequently as a result of stream/range based programming), then there's nothing you can do except to flatten and unroll your work loops yourself.On 10 June 2015 at 01:26, Ilya Yaroshenko via Digitalmars-d <digitalmars-d puremagic.com> wrote:Optimising floating point is a massive pain because of precision concerns and IEEE-754 conformance. Just because something is analytically the same doesn't mean you want the optimiser to go ahead and make the switch for you.A complication for linear algebra (or other mathsy things in general) is the inability to detect and implement compound operations. We don't declare mathematical operators to be algebraic operations, which I think is a lost opportunity. If we defined the properties along with their properties (commutativity, transitivity, invertibility, etc), then the compiler could potentially do an algebraic simplification on expressions before performing codegen and optimisation. There are a lot of situations where the optimiser can't simplify expressions because it runs into an arbitrary function call, and I've never seen an optimiser that understands exp/log/roots, etc, to the point where it can reduce those expressions properly. To compete with maths benchmarks, we need some means to simplify expressions properly.I believe that Phobos must support some common methods of linear algebra and general mathematics. I have no desire to join D with Fortran libraries :)D definitely needs BLAS API support for matrix multiplication. Best BLAS libraries are written in assembler like openBLAS. Otherwise D will have last position in corresponding math benchmarks.Of the things that can be done, lazy operations should make it easier/possible for the optimiser to spot.My experience is that they possibly make it harder, although I don't know why. I find the compiler becomes very unpredictable optimising deep lazy expressions. The backend inline heuristics may not be tuned for typical D expressions of this type? I often wish I could address common compound operations myself, by implementing something like a compound operator which I can special case with an optimised path for particular expressions. But I can't think of any reasonable ways to approach that.
Jun 09 2015
On Tuesday, 9 June 2015 at 16:45:33 UTC, Manu wrote:On 10 June 2015 at 02:32, John Colvin via Digitalmars-d <digitalmars-d puremagic.com> wrote:If the compiler is free to rewrite by analytical rules then "I will worry about my precision" is equivalent to either "I don't care about my precision" or "I have checked the codegen". A simple rearrangement of an expression can easily turn a perfectly good result in to complete garbage. It would be great if compilers were even better at fast-math mode, but an awful lot of applications can't use it.On Tuesday, 9 June 2015 at 16:18:06 UTC, Manu wrote:We have flags to control this sort of thing (fast-math, strict ieee, etc). I will worry about my precision, I just want the optimiser to do its job and do the very best it possibly can. In the case of linear algebra, the optimiser generally fails and I must manually simplify expressions as much as possible.On 10 June 2015 at 01:26, Ilya Yaroshenko via Digitalmars-d <digitalmars-d puremagic.com> wrote:Optimising floating point is a massive pain because of precision concerns and IEEE-754 conformance. Just because something is analytically the same doesn't mean you want the optimiser to go ahead and make the switch for you.[...]A complication for linear algebra (or other mathsy things in general) is the inability to detect and implement compound operations. We don't declare mathematical operators to be algebraic operations, which I think is a lost opportunity. If we defined the properties along with their properties (commutativity, transitivity, invertibility, etc), then the compiler could potentially do an algebraic simplification on expressions before performing codegen and optimisation. There are a lot of situations where the optimiser can't simplify expressions because it runs into an arbitrary function call, and I've never seen an optimiser that understands exp/log/roots, etc, to the point where it can reduce those expressions properly. To compete with maths benchmarks, we need some means to simplify expressions properly.
Jun 09 2015
On 10 June 2015 at 03:04, John Colvin via Digitalmars-d <digitalmars-d puremagic.com> wrote:On Tuesday, 9 June 2015 at 16:45:33 UTC, Manu wrote:This is fine, those applications would continue not to use it. Personally, I've never written code in 20 years where I didn't want fast-math.On 10 June 2015 at 02:32, John Colvin via Digitalmars-d <digitalmars-d puremagic.com> wrote:If the compiler is free to rewrite by analytical rules then "I will worry about my precision" is equivalent to either "I don't care about my precision" or "I have checked the codegen". A simple rearrangement of an expression can easily turn a perfectly good result in to complete garbage. It would be great if compilers were even better at fast-math mode, but an awful lot of applications can't use it.On Tuesday, 9 June 2015 at 16:18:06 UTC, Manu wrote:We have flags to control this sort of thing (fast-math, strict ieee, etc). I will worry about my precision, I just want the optimiser to do its job and do the very best it possibly can. In the case of linear algebra, the optimiser generally fails and I must manually simplify expressions as much as possible.On 10 June 2015 at 01:26, Ilya Yaroshenko via Digitalmars-d <digitalmars-d puremagic.com> wrote:Optimising floating point is a massive pain because of precision concerns and IEEE-754 conformance. Just because something is analytically the same doesn't mean you want the optimiser to go ahead and make the switch for you.[...]A complication for linear algebra (or other mathsy things in general) is the inability to detect and implement compound operations. We don't declare mathematical operators to be algebraic operations, which I think is a lost opportunity. If we defined the properties along with their properties (commutativity, transitivity, invertibility, etc), then the compiler could potentially do an algebraic simplification on expressions before performing codegen and optimisation. There are a lot of situations where the optimiser can't simplify expressions because it runs into an arbitrary function call, and I've never seen an optimiser that understands exp/log/roots, etc, to the point where it can reduce those expressions properly. To compete with maths benchmarks, we need some means to simplify expressions properly.
Jun 11 2015
On Tuesday, 9 June 2015 at 16:18:06 UTC, Manu wrote:On 10 June 2015 at 01:26, Ilya Yaroshenko via Digitalmars-d <digitalmars-d puremagic.com> wrote:Simplified expressions would help because 1. On matrix (hight) level optimisation can be done very well by programer (algorithms with matrixes in terms of count of matrix multiplications are small). 2. Low level optimisation requires specific CPU/Cache optimisation. Modern implementations are optimised for all cache levels. See work by KAZUSHIGE GOTO http://www.cs.utexas.edu/users/pingali/CS378/2008sp/papers/gotoPaper.pdfA complication for linear algebra (or other mathsy things in general) is the inability to detect and implement compound operations. We don't declare mathematical operators to be algebraic operations, which I think is a lost opportunity. If we defined the properties along with their properties (commutativity, transitivity, invertibility, etc), then the compiler could potentially do an algebraic simplification on expressions before performing codegen and optimisation. There are a lot of situations where the optimiser can't simplify expressions because it runs into an arbitrary function call, and I've never seen an optimiser that understands exp/log/roots, etc, to the point where it can reduce those expressions properly. To compete with maths benchmarks, we need some means to simplify expressions properly.I believe that Phobos must support some common methods of linear algebra and general mathematics. I have no desire to join D with Fortran libraries :)D definitely needs BLAS API support for matrix multiplication. Best BLAS libraries are written in assembler like openBLAS. Otherwise D will have last position in corresponding math benchmarks.
Jun 09 2015
On Tuesday, 9 June 2015 at 16:40:56 UTC, Ilya Yaroshenko wrote:On Tuesday, 9 June 2015 at 16:18:06 UTC, Manu wrote:EDIT: would NOT helpOn 10 June 2015 at 01:26, Ilya Yaroshenko via Digitalmars-d <digitalmars-d puremagic.com> wrote:Simplified expressions would help because 1. On matrix (hight) level optimisation can be done very well by programer (algorithms with matrixes in terms of count of matrix multiplications are small). 2. Low level optimisation requires specific CPU/Cache optimisation. Modern implementations are optimised for all cache levels. See work by KAZUSHIGE GOTO http://www.cs.utexas.edu/users/pingali/CS378/2008sp/papers/gotoPaper.pdfA complication for linear algebra (or other mathsy things in general) is the inability to detect and implement compound operations. We don't declare mathematical operators to be algebraic operations, which I think is a lost opportunity. If we defined the properties along with their properties (commutativity, transitivity, invertibility, etc), then the compiler could potentially do an algebraic simplification on expressions before performing codegen and optimisation. There are a lot of situations where the optimiser can't simplify expressions because it runs into an arbitrary function call, and I've never seen an optimiser that understands exp/log/roots, etc, to the point where it can reduce those expressions properly. To compete with maths benchmarks, we need some means to simplify expressions properly.I believe that Phobos must support some common methods of linear algebra and general mathematics. I have no desire to join D with Fortran libraries :)D definitely needs BLAS API support for matrix multiplication. Best BLAS libraries are written in assembler like openBLAS. Otherwise D will have last position in corresponding math benchmarks.
Jun 09 2015
On 10 June 2015 at 02:40, Ilya Yaroshenko via Digitalmars-d <digitalmars-d puremagic.com> wrote:On Tuesday, 9 June 2015 at 16:18:06 UTC, Manu wrote:Perhaps you've never worked with incompetent programmers (in my experience, >50% of the professional workforce). Programmers, on average, don't know maths. They literally have no idea how to simplify an algebraic expression. I think there are about 3-4 (being generous!) people in my office (of 30-40) that could do it properly, and without spending heaps of time on it.On 10 June 2015 at 01:26, Ilya Yaroshenko via Digitalmars-d <digitalmars-d puremagic.com> wrote:Simplified expressions would [NOT] help because 1. On matrix (hight) level optimisation can be done very well by programer (algorithms with matrixes in terms of count of matrix multiplications are small).A complication for linear algebra (or other mathsy things in general) is the inability to detect and implement compound operations. We don't declare mathematical operators to be algebraic operations, which I think is a lost opportunity. If we defined the properties along with their properties (commutativity, transitivity, invertibility, etc), then the compiler could potentially do an algebraic simplification on expressions before performing codegen and optimisation. There are a lot of situations where the optimiser can't simplify expressions because it runs into an arbitrary function call, and I've never seen an optimiser that understands exp/log/roots, etc, to the point where it can reduce those expressions properly. To compete with maths benchmarks, we need some means to simplify expressions properly.I believe that Phobos must support some common methods of linear algebra and general mathematics. I have no desire to join D with Fortran libraries :)D definitely needs BLAS API support for matrix multiplication. Best BLAS libraries are written in assembler like openBLAS. Otherwise D will have last position in corresponding math benchmarks.2. Low level optimisation requires specific CPU/Cache optimisation. Modern implementations are optimised for all cache levels. See work by KAZUSHIGE GOTO http://www.cs.utexas.edu/users/pingali/CS378/2008sp/papers/gotoPaper.pdfLow-level optimisation is a sliding scale, not a binary position. Reaching 'optimal' state definitely requires careful consideration of all the details you refer to, but there are a lot of improvements that can be gained from quickly written code without full low-level optimisation. A lot of basic low-level optimisations (like just using appropriate opcodes, or eliding redundant operations; ie, squares followed by sqrt) can't be applied without first simplifying expressions.
Jun 11 2015
On Friday, 12 June 2015 at 00:51:04 UTC, Manu wrote:Perhaps you've never worked with incompetent programmers (in my experience, >50% of the professional workforce). Programmers, on average, don't know maths. They literally have no idea how to simplify an algebraic expression. I think there are about 3-4 (being generous!) people in my office (of 30-40) that could do it properly, and without spending heaps of time on it.But you don't think you need to look up to programmers who are not able to quickly simplify an algebraic expression? :) For example, I'm a little addicted to sports programming. And I could really use matrix and other math in the standard library.
Jun 11 2015
On Friday, 12 June 2015 at 00:51:04 UTC, Manu wrote:On 10 June 2015 at 02:40, Ilya Yaroshenko via Digitalmars-d <digitalmars-d puremagic.com> wrote:OK, generally you are talking about something we can name MathD. I understand the reasons. However I am strictly against algebraic operations (or eliding redundant operations for floating points) for basic routines in system programming language. Even float/double internal conversion to real in math expressions is a huge headache when math algorithms are implemented (see first two comments at https://github.com/D-Programming-Language/phobos/pull/2991 ). In system PL sqrt(x)^2 should compiles as is. Such optimisations can be implemented over the basic routines (pow, sqrt, gemv, gemm, etc). We can use approach similar to D compile time regexp. Best, IlyaOn Tuesday, 9 June 2015 at 16:18:06 UTC, Manu wrote:Perhaps you've never worked with incompetent programmers (in my experience, >50% of the professional workforce). Programmers, on average, don't know maths. They literally have no idea how to simplify an algebraic expression. I think there are about 3-4 (being generous!) people in my office (of 30-40) that could do it properly, and without spending heaps of time on it.On 10 June 2015 at 01:26, Ilya Yaroshenko via Digitalmars-d <digitalmars-d puremagic.com> wrote:Simplified expressions would [NOT] help because 1. On matrix (hight) level optimisation can be done very well by programer (algorithms with matrixes in terms of count of matrix multiplications are small).A complication for linear algebra (or other mathsy things in general) is the inability to detect and implement compound operations. We don't declare mathematical operators to be algebraic operations, which I think is a lost opportunity. If we defined the properties along with their properties (commutativity, transitivity, invertibility, etc), then the compiler could potentially do an algebraic simplification on expressions before performing codegen and optimisation. There are a lot of situations where the optimiser can't simplify expressions because it runs into an arbitrary function call, and I've never seen an optimiser that understands exp/log/roots, etc, to the point where it can reduce those expressions properly. To compete with maths benchmarks, we need some means to simplify expressions properly.I believe that Phobos must support some common methods of linear algebra and general mathematics. I have no desire to join D with Fortran libraries :)D definitely needs BLAS API support for matrix multiplication. Best BLAS libraries are written in assembler like openBLAS. Otherwise D will have last position in corresponding math benchmarks.2. Low level optimisation requires specific CPU/Cache optimisation. Modern implementations are optimised for all cache levels. See work by KAZUSHIGE GOTO http://www.cs.utexas.edu/users/pingali/CS378/2008sp/papers/gotoPaper.pdfLow-level optimisation is a sliding scale, not a binary position. Reaching 'optimal' state definitely requires careful consideration of all the details you refer to, but there are a lot of improvements that can be gained from quickly written code without full low-level optimisation. A lot of basic low-level optimisations (like just using appropriate opcodes, or eliding redundant operations; ie, squares followed by sqrt) can't be applied without first simplifying expressions.
Jun 11 2015
On 12 June 2015 at 15:22, Ilya Yaroshenko via Digitalmars-d <digitalmars-d puremagic.com> wrote:On Friday, 12 June 2015 at 00:51:04 UTC, Manu wrote:That's nice... I'm all for it :) Perhaps if there were some distinction between a base type and an algebraic type? I wonder if it would be possible to express an algebraic expression like a lazy range, and then capture the expression at the end and simplify it with some fancy template... I'd call that an abomination, but it might be possible. Hopefully nobody in their right mind would ever use that ;)On 10 June 2015 at 02:40, Ilya Yaroshenko via Digitalmars-d <digitalmars-d puremagic.com> wrote:OK, generally you are talking about something we can name MathD. I understand the reasons. However I am strictly against algebraic operations (or eliding redundant operations for floating points) for basic routines in system programming language.On Tuesday, 9 June 2015 at 16:18:06 UTC, Manu wrote:Perhaps you've never worked with incompetent programmers (in my experience, >50% of the professional workforce). Programmers, on average, don't know maths. They literally have no idea how to simplify an algebraic expression. I think there are about 3-4 (being generous!) people in my office (of 30-40) that could do it properly, and without spending heaps of time on it.On 10 June 2015 at 01:26, Ilya Yaroshenko via Digitalmars-d <digitalmars-d puremagic.com> wrote:Simplified expressions would [NOT] help because 1. On matrix (hight) level optimisation can be done very well by programer (algorithms with matrixes in terms of count of matrix multiplications are small).A complication for linear algebra (or other mathsy things in general) is the inability to detect and implement compound operations. We don't declare mathematical operators to be algebraic operations, which I think is a lost opportunity. If we defined the properties along with their properties (commutativity, transitivity, invertibility, etc), then the compiler could potentially do an algebraic simplification on expressions before performing codegen and optimisation. There are a lot of situations where the optimiser can't simplify expressions because it runs into an arbitrary function call, and I've never seen an optimiser that understands exp/log/roots, etc, to the point where it can reduce those expressions properly. To compete with maths benchmarks, we need some means to simplify expressions properly.I believe that Phobos must support some common methods of linear algebra and general mathematics. I have no desire to join D with Fortran libraries :)D definitely needs BLAS API support for matrix multiplication. Best BLAS libraries are written in assembler like openBLAS. Otherwise D will have last position in corresponding math benchmarks.2. Low level optimisation requires specific CPU/Cache optimisation. Modern implementations are optimised for all cache levels. See work by KAZUSHIGE GOTO http://www.cs.utexas.edu/users/pingali/CS378/2008sp/papers/gotoPaper.pdfLow-level optimisation is a sliding scale, not a binary position. Reaching 'optimal' state definitely requires careful consideration of all the details you refer to, but there are a lot of improvements that can be gained from quickly written code without full low-level optimisation. 
A lot of basic low-level optimisations (like just using appropriate opcodes, or eliding redundant operations; ie, squares followed by sqrt) can't be applied without first simplifying expressions.Even float/double internal conversion to real in math expressions is a huge headache when math algorithms are implemented (see first two comments at https://github.com/D-Programming-Language/phobos/pull/2991 ). In system PL sqrt(x)^2 should compiles as is.Yeah... unless you -fast-math, in which case I want the compiler to do whatever it can. Incidentally, I don't think I've ever run into a case in practise where precision was lost by doing _less_ operations.Such optimisations can be implemented over the basic routines (pow, sqrt, gemv, gemm, etc). We can use approach similar to D compile time regexp.Not really. The main trouble is that many of these patterns only emerge when inlining is performed. It would be particularly awkward to express such expressions in some DSL that spanned across conventional API boundaries.
Jun 12 2015
On Friday, 12 June 2015 at 11:00:20 UTC, Manu wrote:... for example we can optimise matrix chain multiplication https://en.wikipedia.org/wiki/Matrix_chain_multiplication ---- //calls `this(MatrixExp!double chain)` Matrix!double m = m1*m2*m3*m4; ----That's nice... I'm all for it :) Perhaps if there were some distinction between a base type and an algebraic type? I wonder if it would be possible to express an algebraic expression like a lazy range, and then capture the expression at the end and simplify it with some fancy template... I'd call that an abomination, but it might be possible. Hopefully nobody in their right mind would ever use that ;)Low-level optimisation is a sliding scale, not a binary position. Reaching 'optimal' state definitely requires careful consideration of all the details you refer to, but there are a lot of improvements that can be gained from quickly written code without full low-level optimisation. A lot of basic low-level optimisations (like just using appropriate opcodes, or eliding redundant operations; ie, squares followed by sqrt) can't be applied without first simplifying expressions.OK, generally you are talking about something we can name MathD. I understand the reasons. However I am strictly against algebraic operations (or eliding redundant operations for floating points) for basic routines in system programming language.Mathematical functions require a concrete order of operations http://www.netlib.org/cephes/ (std.mathspecial and a bit of std.math/std.numeric are based on cephes).Even float/double internal conversion to real in math expressions is a huge headache when math algorithms are implemented (see first two comments at https://github.com/D-Programming-Language/phobos/pull/2991 ). In system PL sqrt(x)^2 should compiles as is.Yeah... unless you -fast-math, in which case I want the compiler to do whatever it can. Incidentally, I don't think I've ever run into a case in practise where precision was lost by doing _less_ operations.If I am not wrong, in both LLVM and GCC the `fast-math` attribute can be defined per function. This feature can be implemented in D.Such optimisations can be implemented over the basic routines (pow, sqrt, gemv, gemm, etc). We can use approach similar to D compile time regexp.Not really. The main trouble is that many of these patterns only emerge when inlining is performed. It would be particularly awkward to express such expressions in some DSL that spanned across conventional API boundaries.
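For reference, the optimisation hinted at above is the classic matrix-chain-ordering problem; a minimal dynamic-programming sketch of the cost computation a hypothetical `MatrixExp` constructor could run (the `MatrixExp`/`Matrix` names above are illustrative, not an existing API):
----
import std.algorithm : min;

// Classic matrix-chain ordering cost: dims has length n+1 for n matrices,
// matrix i being dims[i] x dims[i+1]. Returns the minimal multiplication count.
size_t matrixChainCost(const size_t[] dims)
{
    immutable n = dims.length - 1;
    auto cost = new size_t[][](n, n);          // cost[i][j]: cheapest product of i..j
    foreach (len; 2 .. n + 1)                  // chain length
        foreach (i; 0 .. n - len + 1)
        {
            immutable j = i + len - 1;
            cost[i][j] = size_t.max;
            foreach (k; i .. j)                // split position
                cost[i][j] = min(cost[i][j],
                    cost[i][k] + cost[k + 1][j]
                    + dims[i] * dims[k + 1] * dims[j + 1]);
        }
    return cost[0][n - 1];
}

void main()
{
    // (10x30)(30x5)(5x60): ((A B) C) costs 1500 + 3000 = 4500 multiplications.
    assert(matrixChainCost([10, 30, 5, 60]) == 4500);
}
----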
Jun 12 2015
On 10 June 2015 at 02:17, Manu <turkeyman gmail.com> wrote:... If we defined the properties along with their properties ...*operators* along with their properties
Jun 09 2015
On 6/9/15 1:50 AM, John Colvin wrote:On Tuesday, 9 June 2015 at 06:59:07 UTC, Andrei Alexandrescu wrote:I see. So what would be the primitives necessary? Strides (in the form of e.g. special ranges)? What are the things that would make a library vendor or user go, "OK, now I know what steps to take to use my code with D"?(a) Provide standard data layouts in std.array for the typical shapes supported by linear algebra libs: row major, column major, alongside with striding primitives.I don't think this is quite the right approach. Multidimensional arrays and matrices are about accessing and iteration over data, not data structures themselves. The standard layouts are common special cases.Color me interested. This is another of those domains that hold great promise for D, but sadly a strong champion has been missing. Or two :o). Andrei(b) Provide signatures for C and Fortran libraries so people who have them can use them easily with D. (c) Provide high-level wrappers on top of those functions. AndreiThat is how e.g. numpy works and it's OK, but D can do better. Ilya, I'm very interested in discussing this further with you. I have a reasonable idea and implementation of how I would want the generic n-dimensional types in D to work, but you seem to have more experience with BLAS and LAPACK than me* and of course interfacing with them is critical. *I rarely interact with them directly.
Jun 09 2015
On Tuesday, 9 June 2015 at 16:08:40 UTC, Andrei Alexandrescu wrote:On 6/9/15 1:50 AM, John Colvin wrote:N-dimensional slices can be expressed as N slices and N shifts, where a shift is the number of elements in the source range between the front elements of neighboring sub-slices at the corresponding slice level:
private struct Slice(size_t N, Range)
{
    size_t[2][N] slices;
    size_t[N] shifts;
    Range range;
}
On Tuesday, 9 June 2015 at 06:59:07 UTC, Andrei Alexandrescu wrote:I see. So what would be the primitives necessary? Strides (in the form of e.g. special ranges)? What are the things that would make a library vendor or user go, "OK, now I know what steps to take to use my code with D"?(a) Provide standard data layouts in std.array for the typical shapes supported by linear algebra libs: row major, column major, alongside with striding primitives.I don't think this is quite the right approach. Multidimensional arrays and matrices are about accessing and iteration over data, not data structures themselves. The standard layouts are common special cases.
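To illustrate the idea: those per-dimension shifts are what numpy calls strides, and indexing reduces to a dot product of the shifts with the index tuple (hypothetical helper, not part of the PR):
----
// Hypothetical helper: map an N-dimensional index onto a flat row-major array.
size_t flatIndex(const size_t[] shifts, const size_t[] idx)
{
    size_t offset = 0;
    foreach (d, i; idx)
        offset += shifts[d] * i;   // each dimension contributes shift * index
    return offset;
}

void main()
{
    auto data = new int[12];            // a 3 x 4 matrix stored flat
    size_t[2] shifts = [4, 1];          // row-major: step 4 per row, 1 per column
    data[flatIndex(shifts[], [1, 2])] = 42;
    assert(data[1 * 4 + 2] == 42);
}
----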
Jun 09 2015
On Tuesday, 9 June 2015 at 03:26:25 UTC, Ilya Yaroshenko wrote:There are https://github.com/9il/simple_matrix and https://github.com/9il/cblas . I will try to rework them for Phobos. Any ideas and suggestions?A well-supported matrix math library would definitely lead to me using D more. I would definitely applaud any work being done on this subject, but I still feel there are some enhancements (most seemingly minor) that would really make a matrix math library easy/fun to use. Most of what I discuss below is just syntactical sugar for some stuff that could be accomplished with loops or std.algorithm, but having it built-in would make practical use of a matrix math library much easier. I think Armadillo implements some of these as member functions, whereas other languages like R and Matlab have them more built-in. Disclaimer: I don't consider myself a D expert, so I could be horribly wrong on some of this stuff. 1) There is no support for assignment to arrays based on the values of another array. int[] A = [-1, 1, 5]; int[] B = [1, 2]; int[] C = A[B]; You would have to use int[] C = A[1..2];. In this simple example, it’s not really a big deal, but if I have a function that returns B, then I can’t just throw B in there. I would have to loop through B and assign it to C. So the type of assignment is possible, but if you’re frequently doing this type of array manipulation, then the number of loops you need starts increasing. 2) Along the same lines, there is no support for replacing the B above with an array of bools like bool[] B = [false, true, true]; or auto B = A.map!(a => a < 0); Again, it is doable with a loop, but this form of logical indexing is a pretty common idiom for people who use Matlab or R quite a bit. 3) In addition to being able to index by a range of values or bools, you would want to be able to make assignments based on this. So something like A[B] = c; This is a very common operation in R or Matlab. support for array comparison operators. Something like int[3] B; B[] = A[] + 5; works, but bool[3] B; B[] = A[] > 0; doesn’t (I’m also not sure why I can’t just write auto B[] = A[] + 5;, but that’s neither here nor there). Moreover, it seems like only the mathematical operators work in this way. Mathematical functions from std.math, like exp, don’t seem to work. You have to use map (or a loop) with exp to get the result. I don’t have an issue with map, per se, but it seems inconsistent when some things work but not others. 5) You can only assign scalars to slices of arrays. There doesn’t seem to be an ability to assign an array to a slice. For instead of what I had written for C. 6) std.range and std.algorithm seem to have much better support for one dimensional containers than if you want to treat a container as two-dimensional. If you have a two-dimensional array and want to use map on every element, then there’s no issue. However, if you want to apply a function to each column or row, then you’d have to use a for loop (not even foreach). This seems to be a more difficult problem to solve than the others. I’m not sure what the best approach is, but it makes sense to look at other languages/libraries. In R, you have apply, which can operate on any dimensional array. Matlab has arrayfun. Numpy has apply_along_axis. Armadillo has .each_col and .each_row (one other thing about Armadillo is that you can switch between what underlying matrix math library is being used, like OpenBlas vs. Intel MKL).
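Several of these can at least be approximated with what std.range and std.algorithm already offer, although the syntax is nowhere near as terse as R or Matlab. A hedged sketch of items 1, 2 and 6 follows; masked assignment (item 3) still needs an explicit loop or `each`.
----
import std.algorithm : filter, map, sum;
import std.array : array;
import std.range : chunks, indexed, iota;

void main()
{
    int[] A = [-1, 1, 5];

    // 1) index by an array of indices, like A[B] in R/Matlab
    size_t[] B = [1, 2];
    auto C = A.indexed(B).array;                 // [1, 5]
    assert(C == [1, 5]);

    // 2) boolean-mask style selection expressed as a predicate
    auto negatives = A.filter!(a => a < 0).array;
    assert(negatives == [-1]);

    // 6) apply a function to each row of a flat row-major "matrix"
    auto flat = iota(6).array;                   // 2 x 3 matrix stored flat
    auto rowSums = flat.chunks(3).map!(r => r.sum).array;
    assert(rowSums == [3, 12]);
}
----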
Jun 11 2015
On Thursday, 11 June 2015 at 21:30:22 UTC, jmh530 wrote:Most of what I discuss below is just syntactical sugar for some stuff that could be accomplished with loops or std.algorithm,Your post reminds me of two things I've considered attempting in the past: 1) a set of operators that have no meaning unless an overload is specifically provided (for dot product, dyadic transpose, etc.) and 2) a library implementing features of array-oriented languages to the extent it's possible (APL functions, rank awareness, trivial reshaping, aggregate lifting, et al). Syntax sugar can be important. -Wyatt
Jun 11 2015
On Thursday, 11 June 2015 at 22:36:28 UTC, Wyatt wrote:1) a set of operators that have no meaning unless an overload is specifically provided (for dot product, dyadic transpose, etc.) andI see your point, but I think it might be a bit risky if you allow too much freedom for overloading operators. For instance, what if two people implement separate packages for matrix multiplication, one adopts the syntax of R (%*%) and one adopts the new Python syntax ( ). It may lead to some confusion.
Jun 11 2015
On Friday, 12 June 2015 at 00:11:16 UTC, jmh530 wrote:On Thursday, 11 June 2015 at 22:36:28 UTC, Wyatt wrote:From the outset, my thought was to strictly define the set of (eight or so?) symbols for this. If memory serves, it was right around the time Walter's rejected wholesale user-defined operators because of exactly the problem you mention. (Compounded by Unicode-- what the hell is "2 🐵 8" supposed to be!?) I strongly suspect you don't need many simultaneous extra operators on a type to cover most cases. -Wyatt1) a set of operators that have no meaning unless an overload is specifically provided (for dot product, dyadic transpose, etc.) andI see your point, but I think it might be a bit risky if you allow too much freedom for overloading operators. For instance, what if two people implement separate packages for matrix multiplication, one adopts the syntax of R (%*%) and one adopts the new Python syntax ( ). It may lead to some confusion.
Jun 11 2015
On Friday, 12 June 2015 at 01:55:15 UTC, Wyatt wrote:From the outset, my thought was to strictly define the set of (eight or so?) symbols for this. If memory serves, it was right around the time Walter's rejected wholesale user-defined operators because of exactly the problem you mention. (Compounded by Unicode-- what the hell is "2 🐵 8" supposed to be!?) I strongly suspect you don't need many simultaneous extra operators on a type to cover most cases. -WyattWhat would the new order of operations be for these new operators?
Jun 11 2015
On Friday, 12 June 2015 at 03:18:31 UTC, Tofu Ninja wrote:What would the new order of operations be for these new operators?Hadn't honestly thought that far. Like I said, it was more of a nascent idea than a coherent proposal (probably with a DIP and many more words). It's an interesting question, though. Some notes: precedence and fixity are determined by the base operator. In my head, extra operators would be represented in code by some annotation or affix on a built-in operator... say, braces around it or something (e.g. [*] or {+}, though this is just an example that sets a baseline for visibility). -Wyatt
Jun 12 2015
On Friday, 12 June 2015 at 01:55:15 UTC, Wyatt wrote:From the outset, my thought was to strictly define the set of (eight or so?) symbols for this. If memory serves, it was right around the time Walter's rejected wholesale user-defined operators because of exactly the problem you mention. (Compounded by Unicode-- what the hell is "2 🐵 8" supposed to be!?) I strongly suspect you don't need many simultaneous extra operators on a type to cover most cases. -Wyatt I actually thought about it more, and D does have a bunch of binary operators that no one uses. You can make all sorts of weird operators like +*, *~, +++, ---, *--, /++, ~~, ~-, -~, --, &++, ^^+, in++, |-, %~, etc...
----
import std.stdio;

void main(string[] args){
    test a;
    test b;
    a +* b;
}

struct test{
    private struct testAlpha{ test payload; }

    testAlpha opUnary(string s : "*")(){ return testAlpha(this); }

    void opBinary(string op : "+")(test rhs){ writeln("+"); }
    void opBinary(string op : "+")(testAlpha rhs){ writeln("+*"); }
}
----
Jun 17 2015
On Wednesday, 17 June 2015 at 09:28:00 UTC, Tofu Ninja wrote:I actually thought about it more, and D does have a bunch of binary operators that no one uses. You can make all sorts of weird operators like +*, *~, +++, ---, *--, /++, ~~, ~-, -~, --, &++, ^^+, in++, |-, %~, etc...+* is an especially bad idea, as I would read that as "a + (*b)", which is quite usual in C. But in general very cool. I love ~~ and |- the most :-)
Jun 23 2015
On Tuesday, 23 June 2015 at 16:33:29 UTC, Dominikus Dittes Scherkl wrote:On Wednesday, 17 June 2015 at 09:28:00 UTC, Tofu Ninja wrote:Yeah, |- does seem like an interesting one, though I am not sure what it would mean; I get the impression it's a wall or something. Also you can basically combine any binOp and any number of unaryOps to create an arbitrary number of custom binOps. ~+*+*+*+ could be valid! You could probably make something like brainfuck in D's unary operators.I actually thought about it more, and D does have a bunch of binary operators that no one uses. You can make all sorts of weird operators like +*, *~, +++, ---, *--, /++, ~~, ~-, -~, --, &++, ^^+, in++, |-, %~, etc...+* is an especially bad idea, as I would read that as "a + (*b)", which is quite usual in C. But in general very cool. I love ~~ and |- the most :-)
Jun 23 2015
On Wednesday, 17 June 2015 at 09:28:00 UTC, Tofu Ninja wrote:I actually thought about it more, and D does have a bunch of binary operators that no ones uses. You can make all sorts of weird operators like +*, *~, +++, ---, *--, /++, ~~, ~-, -~,Oh right, meant to respond to this. I'll admit it took me a few to really get why that works-- it's fairly clever and moderately terrifying. (I showed it to a friend and he opined it may violate the grammar.) But playing with it a bit...well, it's very cumbersome having to do these overload gymnastics. It eats away at your opUnary space because of the need for private proxy types, and each one needs an opBinary defined to support it explicitly. It also means you can't make overloads for mismatched types or builtin types (at least, I couldn't figure out how in the few minutes I spent poking it over lunch). -Wyattvoid main(string[] args){ test a; test b; a +* b; } struct test{ private struct testAlpha{ test payload; } testAlpha opUnary(string s : "*")(){ return testAlpha(this); } void opBinary(string op : "+")(test rhs){ writeln("+"); } void opBinary(string op : "+")(testAlpha rhs){ writeln("+*"); } }--, &++, ^^+, in++, |-, %~, ect...
Jun 24 2015
On Wednesday, 24 June 2015 at 19:04:38 UTC, Wyatt wrote:On Wednesday, 17 June 2015 at 09:28:00 UTC, Tofu Ninja wrote:I am thinking of writing a mixin that will set up the proxy for you so that you can just write. struct test { mixin binOpProxy("*"); void opBinary(string op : "+*", T)(T rhs){ writeln("+*"); } } The hard part will be to get it to work with arbitrarily long unary proxies. Eg: mixin binOpProxy("~-~"); void opBinary(string op : "+~-~", T)(T rhs){ writeln("+~-~"); }I actually thought about it more, and D does have a bunch of binary operators that no ones uses. You can make all sorts of weird operators like +*, *~, +++, ---, *--, /++, ~~, ~-, -~,Oh right, meant to respond to this. I'll admit it took me a few to really get why that works-- it's fairly clever and moderately terrifying. (I showed it to a friend and he opined it may violate the grammar.) But playing with it a bit...well, it's very cumbersome having to do these overload gymnastics. It eats away at your opUnary space because of the need for private proxy types, and each one needs an opBinary defined to support it explicitly. It also means you can't make overloads for mismatched types or builtin types (at least, I couldn't figure out how in the few minutes I spent poking it over lunch). -Wyattvoid main(string[] args){ test a; test b; a +* b; } struct test{ private struct testAlpha{ test payload; } testAlpha opUnary(string s : "*")(){ return testAlpha(this); } void opBinary(string op : "+")(test rhs){ writeln("+"); } void opBinary(string op : "+")(testAlpha rhs){ writeln("+*"); } }--, &++, ^^+, in++, |-, %~, ect...
Jun 24 2015
On 06/24/2015 11:41 PM, Tofu Ninja wrote:
> [...] The hard part will be to get it to work with arbitrarily long unary proxies.

Obviously you will run into issues with precedence soon, but this should do it:

import std.stdio;

struct Test{
    mixin(binOpProxy("+~+-~*--+++----*"));
    void opBinary(string op : "+~+-~*--+++----*", T)(T rhs){
        writeln("+~+-~*--+++----*");
    }
}

void main(){
    Test a,b;
    a +~+-~*--+++----* b;
}

import std.string, std.algorithm, std.range;

int operatorSuffixLength(string s){
    int count(dchar c){ return 2-s.retro.countUntil!(d=>c!=d)%2; }
    if(s.endsWith("++")) return count('+');
    if(s.endsWith("--")) return count('-');
    return 1;
}

struct TheProxy(T,string s){
    T unwrap;
    this(T unwrap){ this.unwrap=unwrap; }
    static if(s.length){
        alias NextType=TheProxy!(T,s[0..$-operatorSuffixLength(s)]);
        alias FullType=NextType.FullType;
        mixin(`
            auto opUnary(string op : "`~s[$-operatorSuffixLength(s)..$]~`")(){
                return NextType(unwrap);
            }`);
    }else{
        alias FullType=typeof(this);
    }
}

string binOpProxy(string s)in{
    assert(s.length>=1+operatorSuffixLength(s));
    assert(!s.startsWith("++"));
    assert(!s.startsWith("--"));
    foreach(dchar c;s) assert("+-*~".canFind(c));
}body{
    int len=operatorSuffixLength(s);
    return `
        auto opUnary(string op:"`~s[$-len..$]~`")(){
            return TheProxy!(typeof(this),"`~s[1..$-len]~`")(this);
        }
        auto opBinary(string op:"`~s[0]~`")(TheProxy!(typeof(this),"`~s[1..$-1]~`").FullType t){
            return opBinary!"`~s~`"(t.unwrap);
        }
    `;
}
Jun 24 2015
On Thursday, 25 June 2015 at 01:32:22 UTC, Timon Gehr wrote:
> [...]

Here's what I came up with... I love D so much <3

module util.binOpProxy;

import std.algorithm : joiner, map;
import std.array : array;

struct __typeproxy(T, string s)
{
    enum op = s;
    T payload;

    auto opUnary(string newop)()
    {
        return __typeproxy!(T,newop~op)(payload);
    }
}

/**
 * Example:
 * struct test
 * {
 *     mixin(binOpProxy!("~", "*"));
 *
 *     void opBinary(string op : "+~~", T)(T rhs)
 *     {
 *         writeln("hello!");
 *     }
 *
 *     void opBinary(string op : "+~+-~*--+++----*", T)(T rhs)
 *     {
 *         writeln("world");
 *     }
 *
 *     void opBinary(string op, T)(T rhs)
 *     {
 *         writeln("default");
 *     }
 * }
 */
enum binOpProxy(proxies ...) = `
    import ` ~ __MODULE__ ~ ` : __typeproxy;

    auto opBinary(string op, D : __typeproxy!(T, T_op), T, string T_op)
                 (D rhs)
    {
        return opBinary!(op~D.op)(rhs.payload);
    }
` ~ [proxies].map!((string a) => `
    auto opUnary(string op : "` ~ a ~ `")()
    {
        return __typeproxy!(typeof(this),op)(this);
    }
`).joiner.array;
Jun 25 2015
On 12/06/2015 9:30 a.m., jmh530 wrote:
> On Tuesday, 9 June 2015 at 03:26:25 UTC, Ilya Yaroshenko wrote:
>> There are https://github.com/9il/simple_matrix and https://github.com/9il/cblas . I will try to rework them for Phobos. Any ideas and suggestions?
>
> A well-supported matrix math library would definitely lead to me using D more. I would definitely applaud any work being done on this subject, but I still feel there are some enhancements (most seemingly minor) that would really make a matrix math library easy/fun to use. Most of what I discuss below is just syntactical sugar for some stuff that could be accomplished with loops or std.algorithm, but having it built in would make practical use of a matrix math library much easier. I think Armadillo implements some of these as member functions, whereas other languages like R and Matlab have them more built in. Disclaimer: I don't consider myself a D expert, so I could be horribly wrong on some of this stuff.
>
> 1) There is no support for assignment to arrays based on the values of another array.
>    int[] A = [-1, 1, 5];
>    int[] B = [1, 2];
>    int[] C = A[B];
>    You would have to use int[] C = A[1..2];. In this simple example, it's not really a big deal, but if I have a function that returns B, then I can't just throw B in there. I would have to loop through B and assign it to C. So this type of assignment is possible, but if you're frequently doing this type of array manipulation, then the number of loops you need starts increasing.
>
> 2) Along the same lines, there is no support for replacing the B above with an array of bools like bool[] B = [false, true, true]; or auto B = A.map!(a => a < 0); Again, it is doable with a loop, but this form of logical indexing is a pretty common idiom for people who use Matlab or R quite a bit.
>
> 3) In addition to being able to index by a range of values or bools, you would want to be able to make assignments based on this. So something like A[B] = c; This is a very common operation in R or Matlab.
>
> 4) There doesn't seem to be support for array comparison operators. Something like int[3] B; B[] = A[] + 5; works, but bool[3] B; B[] = A[] > 0; doesn't (I'm also not sure why I can't just write auto B[] = A[] + 5;, but that's neither here nor there). Moreover, it seems like only the mathematical operators work in this way. Mathematical functions from std.math, like exp, don't seem to work. You have to use map (or a loop) with exp to get the result. I don't have an issue with map, per se, but it seems inconsistent when some things work but not others.
>
> 5) You can only assign scalars to slices of arrays. There doesn't seem to be a way to assign an array; e.g., I couldn't write A[0..1] = B; or A[0, 1] = B; instead of what I had written for C.
>
> 6) std.range and std.algorithm seem to have much better support for one-dimensional containers than if you want to treat a container as two-dimensional. If you have a two-dimensional array and want to use map on every element, then there's no issue. However, if you want to apply a function to each column or row, then you'd have to use a for loop (not even foreach). This seems to be a more difficult problem to solve than the others. I'm not sure what the best approach is, but it makes sense to look at other languages/libraries. In R, you have apply, which can operate on any dimensional array. Matlab has arrayfun. Numpy has apply_along_axis. Armadillo has .each_col and .each_row (one other thing about Armadillo is that you can switch between what underlying matrix math library is being used, like OpenBlas vs. Intel MKL).

Humm, work on getting gl3n into phobos or work on my ODBC driver manager. Tough choice.
Jun 11 2015
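As a side note on points 1) to 3) in the quoted list above: part of this can already be approximated with today's Phobos. A minimal sketch, illustrative only, using std.range.indexed and std.algorithm (the variable names simply mirror the example above):

import std.stdio;
import std.range : indexed, iota;
import std.algorithm : filter;
import std.array : array;

void main()
{
    int[] A = [-1, 1, 5];
    size_t[] B = [1, 2];

    // 1) index by an array of indices: lazy view over A, copied with .array
    auto C = A.indexed(B).array;               // C == [1, 5]

    // 2) "logical indexing": collect the indices where a predicate holds
    auto idx = iota(A.length).filter!(i => A[i] < 0).array;

    // 3) assignment through the selected indices still needs a loop today
    foreach (i; idx) A[i] = 0;                 // A == [0, 1, 5]

    writeln(C, " ", A);
}

This is noticeably wordier than A[B] or A[A < 0] = c in R/Matlab, which is exactly the kind of sugar the post above is asking for.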
On Friday, 12 June 2015 at 03:35:31 UTC, Rikki Cattermole wrote:Humm, work on getting gl3n into phobos or work on my ODBC driver manager. Tough choice.I can only speak for myself. I'm sure there's a lot of value in solid ODBC support. I use SQL some, but I use matrix math more. I'm not that familiar with gl3n, but it looks like it's meant for the math used in OpenGL. My knowledge of OpenGL is limited. I had some cursory interest in the developments of Vulkan earlier in March, but without much of a background in OpenGL I didn't follow everything they were talking about. I don't think many other languages include OpenGL support in their standard libraries (though I imagine game developers would welcome it).
Jun 12 2015
On Friday, 12 June 2015 at 17:10:08 UTC, jmh530 wrote:On Friday, 12 June 2015 at 03:35:31 UTC, Rikki Cattermole wrote:Matrix math is matrix math, it being for ogl makes no real difference. Also if you are waiting to learn vulkan but have not done any other graphics, don't, learn ogl now, vulkan will be harder.Humm, work on getting gl3n into phobos or work on my ODBC driver manager. Tough choice.I can only speak for myself. I'm sure there's a lot of value in solid ODBC support. I use SQL some, but I use matrix math more. I'm not that familiar with gl3n, but it looks like it's meant for the math used in OpenGL. My knowledge of OpenGL is limited. I had some cursory interest in the developments of Vulkan earlier in March, but without much of a background in OpenGL I didn't follow everything they were talking about. I don't think many other languages include OpenGL support in their standard libraries (though I imagine game developers would welcome it).
Jun 12 2015
On Friday, 12 June 2015 at 17:56:53 UTC, Tofu Ninja wrote:Matrix math is matrix math, it being for ogl makes no real difference.I think it’s a little more complicated than that. BLAS and LAPACK (or variants on them) are low-level matrix math libraries that many higher-level libraries call. Few people actually use BLAS directly. So, clearly, not every matrix math library is the same. What differentiates BLAS from Armadillo is that you can be far more productive in Armadillo because the syntax is friendly (and quite similar to Matlab and others). There’s a reason why people use glm in C++. It’s probably the most productive way to do matrix math with OpenGL. However, it may not be the most productive way to do more general matrix math. That’s why I hear about people using Armadillo, Eigen, and Blaze, but I’ve never heard anyone recommend using glm. Syntax matters.
Jun 12 2015
On 13/06/2015 7:45 a.m., jmh530 wrote:On Friday, 12 June 2015 at 17:56:53 UTC, Tofu Ninja wrote:The reason I am considering gl3n is because it is old solid code. It's proven itself. It'll make the review process relatively easy. But hey, if we want to do it right, we'll never get any implementation in.Matrix math is matrix math, it being for ogl makes no real difference.I think it’s a little more complicated than that. BLAS and LAPACK (or variants on them) are low-level matrix math libraries that many higher-level libraries call. Few people actually use BLAS directly. So, clearly, not every matrix math library is the same. What differentiates BLAS from Armadillo is that you can be far more productive in Armadillo because the syntax is friendly (and quite similar to Matlab and others). There’s a reason why people use glm in C++. It’s probably the most productive way to do matrix math with OpenGL. However, it may not be the most productive way to do more general matrix math. That’s why I hear about people using Armadillo, Eigen, and Blaze, but I’ve never heard anyone recommend using glm. Syntax matters.
Jun 12 2015
On Friday, 12 June 2015 at 17:56:53 UTC, Tofu Ninja wrote:On Friday, 12 June 2015 at 17:10:08 UTC, jmh530 wrote:The tiny subset of numerical linear algebra that is relevant for graphics (mostly very basic operations, 2,3 or 4 dimensions) is not at all representative of the whole. The algorithms are different and the APIs are often necessarily different. Even just considering scale, no one sane calls in to BLAS to multiply a 3*3 matrix by a 3 element vector, simultaneously no one sane *doesn't* call in to BLAS or an equivalent to multiply two 500*500 matrices.On Friday, 12 June 2015 at 03:35:31 UTC, Rikki Cattermole wrote:Matrix math is matrix math, it being for ogl makes no real difference.Humm, work on getting gl3n into phobos or work on my ODBC driver manager. Tough choice.I can only speak for myself. I'm sure there's a lot of value in solid ODBC support. I use SQL some, but I use matrix math more. I'm not that familiar with gl3n, but it looks like it's meant for the math used in OpenGL. My knowledge of OpenGL is limited. I had some cursory interest in the developments of Vulkan earlier in March, but without much of a background in OpenGL I didn't follow everything they were talking about. I don't think many other languages include OpenGL support in their standard libraries (though I imagine game developers would welcome it).
Jun 13 2015
On Saturday, 13 June 2015 at 08:45:20 UTC, John Colvin wrote:The tiny subset of numerical linear algebra that is relevant for graphics (mostly very basic operations, 2,3 or 4 dimensions) is not at all representative of the whole. The algorithms are different and the APIs are often necessarily different. Even just considering scale, no one sane calls in to BLAS to multiply a 3*3 matrix by a 3 element vector, simultaneously no one sane *doesn't* call in to BLAS or an equivalent to multiply two 500*500 matrices.I think there is a conflict of interest with what people want. There seem to be people like me who only want or need simple matrices like glm to do basic geometric/graphics related stuff. Then there is the group of people who want large 500x500 matrices to do weird crazy maths stuff. Maybe they should be kept separate? In which case then we are really talking about adding two different things. Maybe have a std.math.matrix and a std.blas?
Jun 13 2015
On 13/06/2015 10:35 p.m., Tofu Ninja wrote:On Saturday, 13 June 2015 at 08:45:20 UTC, John Colvin wrote:IMO simple matrix is fine for a standard library. More complex highly specialized math library yeah no. Not enough gain for such a complex code. Where as matrix/vector support for e.g. OpenGL now that will have a high visibility to game devs.The tiny subset of numerical linear algebra that is relevant for graphics (mostly very basic operations, 2,3 or 4 dimensions) is not at all representative of the whole. The algorithms are different and the APIs are often necessarily different. Even just considering scale, no one sane calls in to BLAS to multiply a 3*3 matrix by a 3 element vector, simultaneously no one sane *doesn't* call in to BLAS or an equivalent to multiply two 500*500 matrices.I think there is a conflict of interest with what people want. There seem to be people like me who only want or need simple matrices like glm to do basic geometric/graphics related stuff. Then there is the group of people who want large 500x500 matrices to do weird crazy maths stuff. Maybe they should be kept separate? In which case then we are really talking about adding two different things. Maybe have a std.math.matrix and a std.blas?
Jun 13 2015
On Saturday, 13 June 2015 at 10:37:39 UTC, Rikki Cattermole wrote:On 13/06/2015 10:35 p.m., Tofu Ninja wrote:Linear algebra for graphics is the specialised case, not the other way around. As a possible name for something like gl3n in phobos, I like std.math.geometryOn Saturday, 13 June 2015 at 08:45:20 UTC, John Colvin wrote:IMO simple matrix is fine for a standard library. More complex highly specialized math library yeah no. Not enough gain for such a complex code. Where as matrix/vector support for e.g. OpenGL now that will have a high visibility to game devs.[...]I think there is a conflict of interest with what people want. There seem to be people like me who only want or need simple matrices like glm to do basic geometric/graphics related stuff. Then there is the group of people who want large 500x500 matrices to do weird crazy maths stuff. Maybe they should be kept separate? In which case then we are really talking about adding two different things. Maybe have a std.math.matrix and a std.blas?
Jun 13 2015
On Saturday, 13 June 2015 at 11:05:19 UTC, John Colvin wrote:Linear algebra for graphics is the specialised case, not the other way around. As a possible name for something like gl3n in phobos, I like std.math.geometryA geometry library is different, it should be type safe when it comes to units, lengths, distances, areas... I think linear algebra should have the same syntax for small and large matrices and switch representation behind the scenes. The challenge is to figure out what kind of memory layouts you need to support in order to interact with existing frameworks/hardware with no conversion.
Jun 13 2015
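A tiny sketch of the kind of unit safety meant here; the Length/Area types are purely hypothetical, not a proposed Phobos API:

struct Length { double metres; }
struct Area   { double squareMetres; }

// Length * Length yields an Area, never a bare double
Area area(Length w, Length h)
{
    return Area(w.metres * h.metres);
}

void main()
{
    auto a = area(Length(2), Length(3));
    assert(a.squareMetres == 6);
    // a = Length(5);   // would not compile: the units are distinct types
}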
On Saturday, 13 June 2015 at 11:18:54 UTC, Ola Fosheim Grøstad wrote:I think linear algebra should have the same syntax for small and large matrices and switch representation behind the scenes.Switching representations behind the scenes? Sounds complicated. I would think that if you were designing it from the ground up, you would have one general matrix math library. Then a graphics library could be built on top of that functionality. That way, as improvements are made to the matrix math functionality, the graphics library would benefit too. However, given that there already is a well developed math graphics library, I'm not sure what's optimal. I can see the argument for implementing gl3n in the standard library (as a specialized math graphics option) on its own if there is demand for it.
Jun 13 2015
On Sunday, 14 June 2015 at 02:56:04 UTC, jmh530 wrote:On Saturday, 13 June 2015 at 11:18:54 UTC, Ola Fosheim Grøstad wrote:You don't have much of a choice if you want it to perform. You have take take into consideration: 1. hardware factors such as SIMD and alignment 2. what is known at compile time and what is only known at runtime 3. common usage patterns (what elements are usually 0, 1 or a value) 4. when does it pay off to encode the matrix modifications and layout as meta information (like transpose and scalar multiplication or addition) And sometimes you might want to compute the inverse matrix when doing the transforms, rather than as a separate step for performance reasons.I think linear algebra should have the same syntax for small and large matrices and switch representation behind the scenes.Switching representations behind the scenes? Sounds complicated.I would think that if you were designing it from the ground up, you would have one general matrix math library. Then a graphics library could be built on top of that functionality. That way, as improvements are made to the matrix math functionality, the graphics library would benefit too.Yes, but nobody wants to use a matrix library that does not perform close to the hardware limitations, so the representation should be special cased to fit the hardware for common matrix layouts.
Jun 14 2015
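A rough sketch of what "switching representation behind the scenes" could look like: a single matrix template that uses a fixed-size static array when the dimensions are known at compile time and a heap slice otherwise. The Matrix name and layout are assumptions for illustration, not an actual proposal:

struct Matrix(T, size_t rows = 0, size_t cols = 0)
{
    static if (rows != 0 && cols != 0)
        T[rows * cols] data;     // stack storage; alignment/SIMD friendly
    else
        T[] data;                // runtime-sized, heap-allocated storage
}

alias Mat4 = Matrix!(float, 4, 4);   // compile-time sized, graphics style
alias MatD = Matrix!double;          // dynamically sized, BLAS style

The user-facing syntax stays the same; only the storage and the code paths chosen by static if differ.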
On Saturday, 13 June 2015 at 10:35:55 UTC, Tofu Ninja wrote:On Saturday, 13 June 2015 at 08:45:20 UTC, John Colvin wrote:Yes, that's what I was trying to point out. Anyway, gl3n or similar would be great to have in phobos, I've used it quite a bit and think it's great, but it should be very clear that it's not a general purpose matrix/linear algebra toolkit. It's a specialised set of types and operations specifically for low-dimensional geometry, with an emphasis on common graphics idioms.The tiny subset of numerical linear algebra that is relevant for graphics (mostly very basic operations, 2,3 or 4 dimensions) is not at all representative of the whole. The algorithms are different and the APIs are often necessarily different. Even just considering scale, no one sane calls in to BLAS to multiply a 3*3 matrix by a 3 element vector, simultaneously no one sane *doesn't* call in to BLAS or an equivalent to multiply two 500*500 matrices.I think there is a conflict of interest with what people want. There seem to be people like me who only want or need simple matrices like glm to do basic geometric/graphics related stuff. Then there is the group of people who want large 500x500 matrices to do weird crazy maths stuff. Maybe they should be kept separate? In which case then we are really talking about adding two different things. Maybe have a std.math.matrix and a std.blas?
Jun 13 2015
On 06/13/2015 12:35 PM, Tofu Ninja wrote:On Saturday, 13 June 2015 at 08:45:20 UTC, John Colvin wrote:(It's neither weird nor crazy.)The tiny subset of numerical linear algebra that is relevant for graphics (mostly very basic operations, 2,3 or 4 dimensions) is not at all representative of the whole. The algorithms are different and the APIs are often necessarily different. Even just considering scale, no one sane calls in to BLAS to multiply a 3*3 matrix by a 3 element vector, simultaneously no one sane *doesn't* call in to BLAS or an equivalent to multiply two 500*500 matrices.I think there is a conflict of interest with what people want. There seem to be people like me who only want or need simple matrices like glm to do basic geometric/graphics related stuff. Then there is the group of people who want large 500x500 matrices to do weird crazy maths stuff.Maybe they should be kept separate?I think there's no point to that. Just have dynamically sized and fixed sized versions. Why should they be incompatible? It's the same concept.
Jun 13 2015
On Saturday, 13 June 2015 at 10:35:55 UTC, Tofu Ninja wrote:On Saturday, 13 June 2015 at 08:45:20 UTC, John Colvin wrote:+1 nobody uses general purpose linear matrix libraries for games/graphics for a reason, many game math libraries take shortcuts everywhere and are extensively optimized(e.g, for cache lines) for the general purpose vec3/mat4 types. many performance benefits for massive matrices see performance detriments for tiny graphics-oriented matrices. This is just shoehorning, plain and simple.The tiny subset of numerical linear algebra that is relevant for graphics (mostly very basic operations, 2,3 or 4 dimensions) is not at all representative of the whole. The algorithms are different and the APIs are often necessarily different. Even just considering scale, no one sane calls in to BLAS to multiply a 3*3 matrix by a 3 element vector, simultaneously no one sane *doesn't* call in to BLAS or an equivalent to multiply two 500*500 matrices.I think there is a conflict of interest with what people want. There seem to be people like me who only want or need simple matrices like glm to do basic geometric/graphics related stuff. Then there is the group of people who want large 500x500 matrices to do weird crazy maths stuff. Maybe they should be kept separate? In which case then we are really talking about adding two different things. Maybe have a std.math.matrix and a std.blas?
Jun 14 2015
On Sunday, 14 June 2015 at 08:14:21 UTC, weaselcat wrote:nobody uses general purpose linear matrix libraries for games/graphics for a reason,The reason is that C++ didn't provide anything. As a result each framework provide their own and you get N different libraries that are incompatible. There is no good reason for making small-matrix libraries incompatible with the rest of eco-system given the templating system you have in D. What you need is a library that supports multiple representations and can do the conversions. Of course, you'll do better if you also have term-rewriting/AST-macros.
Jun 14 2015
On Sunday, 14 June 2015 at 09:07:19 UTC, Ola Fosheim Grøstad wrote:On Sunday, 14 June 2015 at 08:14:21 UTC, weaselcat wrote:The reason is general purpose matrixes allocated at heap, but small graphic matrices are plain structs. `opCast(T)` should be enough.nobody uses general purpose linear matrix libraries for games/graphics for a reason,The reason is that C++ didn't provide anything. As a result each framework provide their own and you get N different libraries that are incompatible. There is no good reason for making small-matrix libraries incompatible with the rest of eco-system given the templating system you have in D. What you need is a library that supports multiple representations and can do the conversions. Of course, you'll do better if you also have term-rewriting/AST-macros.
Jun 14 2015
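A tiny illustration of the opCast idea mentioned above: converting a small fixed-size matrix struct into a dynamically sized one. Both Mat2 and DynMatrix are made-up types for the sake of the example:

struct DynMatrix
{
    size_t rows, cols;
    float[] data;
}

struct Mat2
{
    float[4] data;   // plain struct, no heap allocation

    // explicit conversion to the heap-backed representation
    DynMatrix opCast(T : DynMatrix)() const
    {
        return DynMatrix(2, 2, data[].dup);
    }
}

void main()
{
    Mat2 m;
    m.data = [1, 2, 3, 4];
    auto d = cast(DynMatrix) m;
    assert(d.rows == 2 && d.data[3] == 4);
}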
On Sunday, 14 June 2015 at 09:19:19 UTC, Ilya Yaroshenko wrote:The reason is general purpose matrixes allocated at heap, but small graphic matrices are plain structs.No, the reason is that LA-libraries are C-libraries that also deal with variable sized matrices. A good generic API can support both. You cannot create a good generic API in C. You can in D.
Jun 14 2015
On Sunday, 14 June 2015 at 09:25:25 UTC, Ola Fosheim Grøstad wrote:On Sunday, 14 June 2015 at 09:19:19 UTC, Ilya Yaroshenko wrote:We need D own BLAS implementation to do it. Sight, DBLAS will be largest part of std.The reason is general purpose matrixes allocated at heap, but small graphic matrices are plain structs.No, the reason is that LA-libraries are C-libraries that also deal with variable sized matrices. A good generic API can support both. You cannot create a good generic API in C. You can in D.
Jun 14 2015
On Sunday, 14 June 2015 at 09:59:22 UTC, Ilya Yaroshenko wrote:We need D own BLAS implementation to do it.Why can't you use "version" for those that want to use a BLAS library for the implementation? Those who want replications of LAPACK/LINPACK APIs can use separate bindings? And those who want to use BLAS directly would not use phobos anyway, but a direct binding so they can switch implementation? I think a good generic higher level linear algebra library for D should aim to be equally useful for 2D Graphics, 3D/4D GPU graphics, CAD solid modelling, robotics, 3D raytracing, higher dimensional fractals, physics sims, image processing, signal processing, scientific computing (which is pretty wide) and more. The Phobos API should be user-level, not library-level like BLAS. IMO. You really want an API that look like this in Phobos? http://www.netlib.org/blas/ BLAS/LAPACK/LINPACK all originate in Fortran with a particular scientific tradition in mind, so I think one should rethink how D goes about this. Fortran has very primitive abstraction mechanisms. This stuff is stuck in the 80s…
Jun 14 2015
I think there might be a disconnection in this thread. D only, or D frontend? There are hardware vendor and commercial libraries that are heavily optimized for particular hardware configurations. There is no way a D-only solution can beat those. As an example Apple provides various implementations for their own machines, so an old program on a new machine can run faster than a static D-only library solution. What D can provide is a unifying abstraction, but to get there one need to analyze what exists. Like Apple's Accelerate framework: https://developer.apple.com/library/prerelease/ios/documentation/Accelerate/Reference/AccelerateFWRef/index.html#//apple_ref/doc/uid/TP40009465 That goes beyond BLAS. We also need to look at vDSP etc. You'll find similar things for Microsoft/Intel/AMD/ARM etc…
Jun 14 2015
On Sunday, 14 June 2015 at 10:43:24 UTC, Ola Fosheim Grøstad wrote:I think there might be a disconnection in this thread. D only, or D frontend? There are hardware vendor and commercial libraries that are heavily optimized for particular hardware configurations. There is no way a D-only solution can beat those. As an example Apple provides various implementations for their own machines, so an old program on a new machine can run faster than a static D-only library solution. What D can provide is a unifying abstraction, but to get there one need to analyze what exists. Like Apple's Accelerate framework: https://developer.apple.com/library/prerelease/ios/documentation/Accelerate/Reference/AccelerateFWRef/index.html#//apple_ref/doc/uid/TP40009465 That goes beyond BLAS. We also need to look at vDSP etc. You'll find similar things for Microsoft/Intel/AMD/ARM etc…+1
Jun 14 2015
On Sunday, 14 June 2015 at 10:15:08 UTC, Ola Fosheim Grøstad wrote:
> On Sunday, 14 June 2015 at 09:59:22 UTC, Ilya Yaroshenko wrote:
>> We need D own BLAS implementation to do it.
> Why can't you use "version" for those that want to use a BLAS library for the implementation? Those who want replications of LAPACK/LINPACK APIs can use separate bindings? And those who want to use BLAS directly would not use phobos anyway, but a direct binding so they can switch implementation? [...]

I really don't understand what you mean by the "generic" keyword. Do you want one matrix type that includes all cases??? I hope you do not. If not, then yes, it should be generic like all the rest of Phobos. But we will have one module for 3D/4D geometry and 3D/4D matrix/vector multiplications, another module for general matrices (std.container.matrix) and another module with a generic BLAS (std.numeric.blas) for general purpose matrices. After all of that we can think about scripting-like "m0 = m1*v*m2" features. I think LAPACK would not be implemented in Phobos, but we can use SciD instead.
Jun 14 2015
On Sunday, 14 June 2015 at 11:43:46 UTC, Ilya Yaroshenko wrote:I am really don't understand what you mean with "generic" keyword. Do you want one matrix type that includes all cases??? I hope you does not.Yes, that is what generic programming is about. The type should signify the semantics, not exact representation. Then you alias common types "float4x4" etc. It does take a lot of abstraction design work. I've done some of it in C++ for sliced views over memory and arrays and I'd say you need many iterations to get it right.If not, yes it should be generic like all other Phobos. But we will have one module for 3D/4D geometric and 3D/4D matrix/vector multiplications, another module for general matrix (std.container.matrix) and another module with generic BLAS (std.numeric.blas) for general purpose matrixes. After all of that we can think about scripting like "m0 = m1*v*m2" features.All I can say is that I have a strong incentive to avoid using Phobos features if D does not automatically utilize the best OS/CPU vendor provided libraries in a portable manner and with easy-to-read high level abstractions. D's strength compared to C++/Rust is that D can evolve to be easier to use than those languages. C++/Rust are hard to use by nature. But usability takes a lot of API design effort, so it won't come easy. D's strength compared to Go is that it can better take advantage of hardware and provide better library abstractions, Go appears to deliberately avoid it. They probably want to stay nimble with very limited hardware-interfacing so that you can easily move it around in the cloud.
Jun 14 2015
On Sunday, 14 June 2015 at 12:01:47 UTC, Ola Fosheim Grøstad wrote:On Sunday, 14 June 2015 at 11:43:46 UTC, Ilya Yaroshenko wrote:std.range has a lot of types + D arrays. The power in unified API (structural type system). For matrixes this API is very simple: operations like m1[] += m2, transposed, etc. IlyaI am really don't understand what you mean with "generic" keyword. Do you want one matrix type that includes all cases??? I hope you does not.Yes, that is what generic programming is about. The type should signify the semantics, not exact representation. Then you alias common types "float4x4" etc.
Jun 14 2015
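For reference, D's built-in array operations already give the element-wise part of such an API on plain arrays and slices; a minimal example:

import std.stdio;

void main()
{
    double[] m1 = [1, 2, 3, 4];
    double[] m2 = [10, 20, 30, 40];

    m1[] += m2[];   // element-wise add over the whole slice
    m1[] *= 2.0;    // element-wise scale
    writeln(m1);    // [22, 44, 66, 88]
}

The harder part of the unified API is what sits on top of this: transposed views, strides and the multi-dimensional cases discussed above.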
On Sunday, 14 June 2015 at 12:18:39 UTC, Ilya Yaroshenko wrote:std.range has a lot of types + D arrays. The power in unified API (structural type system).Yeah, I agree that templates in C++/D more or less makes those type systems structural-like, even though C is using nominal typing. I've also found that although the combinatorial explosion is a possibility, most applications I write have a "types.h" file that define the subset I want to use for that application. So the combinatorial explosion is not such a big deal after all. But one need to be patient and add lots of static_asserts… since the template type system is weak.For matrixes this API is very simple: operations like m1[] += m2, transposed, etc.I think it is a bit more complicated than that. You also need to think about alignment, padding, strides, convolutions, identiy matrices, invertible matrices, windows on a stream, higher order matrices etc…
Jun 14 2015
On Sunday, 14 June 2015 at 12:52:52 UTC, Ola Fosheim Grøstad wrote:On Sunday, 14 June 2015 at 12:18:39 UTC, Ilya Yaroshenko wrote:Alignment, strides (windows on a stream - I understand it like Sliding Windows) are not a problem. Convolutions, identiy matrices, invertible matrices are stuff I don't want to see in Phobos. They are about "MathD" not about (big) standard library. For hight order slices see https://github.com/D-Programming-Language/phobos/pull/3397std.range has a lot of types + D arrays. The power in unified API (structural type system).Yeah, I agree that templates in C++/D more or less makes those type systems structural-like, even though C is using nominal typing. I've also found that although the combinatorial explosion is a possibility, most applications I write have a "types.h" file that define the subset I want to use for that application. So the combinatorial explosion is not such a big deal after all. But one need to be patient and add lots of static_asserts… since the template type system is weak.For matrixes this API is very simple: operations like m1[] += m2, transposed, etc.I think it is a bit more complicated than that. You also need to think about alignment, padding, strides, convolutions, identiy matrices, invertible matrices, windows on a stream, higher order matrices etc…
Jun 14 2015
On Sunday, 14 June 2015 at 13:48:23 UTC, Ilya Yaroshenko wrote:Alignment, strides (windows on a stream - I understand it like Sliding Windows) are not a problem.It isn't a problem if you use the best possible abstraction from the start. It is a problem if you don't focus on it from the start.Convolutions, identiy matrices, invertible matrices are stuff I don't want to see in Phobos. They are about "MathD" not about (big) standard library.I don't see how you can get good performance without special casing identity matrices, transposed matrices and so on. You surely need to support matrix inversion, Gauss-Jordan elimination (or the equivalent) etc?
Jun 14 2015
On Sunday, 14 June 2015 at 14:02:59 UTC, Ola Fosheim Grøstad wrote:
> On Sunday, 14 June 2015 at 13:48:23 UTC, Ilya Yaroshenko wrote:
>> Alignment, strides (windows on a stream - I understand it like Sliding Windows) are not a problem.
> It isn't a problem if you use the best possible abstraction from the start. It is a problem if you don't focus on it from the start.

I am sorry for this trolling: Lisp is the best abstraction, though. Sometimes I find very cool abstract libraries with a relatively small number of users. For example, many programmers don't want to use Boost only because its abstractions make them crazy.

> > Convolutions, identity matrices, invertible matrices are stuff I don't want to see in Phobos. They are about "MathD" not about a (big) standard library.
> I don't see how you can get good performance without special casing identity matrices, transposed matrices and so on. You surely need to support matrix inversion, Gauss-Jordan elimination (or the equivalent) etc?

For daily scientific purposes - yes. For an R/Matlab-like mathematical library - yes. For a real world application - no. An engineer can achieve the best performance without special cases by lowering the "abstraction" down. Simplicity and transparency ("how it works") are more important in this case.
Jun 14 2015
On Sunday, 14 June 2015 at 14:25:11 UTC, Ilya Yaroshenko wrote:I am sorry for this trolling: Lisp is the best abstraction, thought.Even it if was, it does not provide the meta info and alignment type constraints that makes it possible to hardware/SIMD optimize it behind the scenes.For example many programmers don't want to use Boost only because it's abstractions makes them crazy.Yes, C++ templates are a hard nut to crack, if D had added excellent pattern matching to its meta programming repertoire the I think this would be enough to put D in a different league. Application programmers should not have to deal with lots of type parameters, they can use the simplified version (aliases). That's what I do in my C++ libs, using templated aliasing to make a complicated type composition easy to use while still getting the benefits generic pattern matching and generic programming.Getting platform optimized versions of frequently used heavy operations is the primary reason for why I would use a builtin library over rolling my own. Especially if the compiler has builtin high-level optimizations for the algebra. A naive basic matrix library is simple to write, I don't need standard library support for that + I get it to work the way I want by using SIMD registers directly... => I probably would not use it if I could implement it in less than 10 hours.For daily scientific purposes - yes. For R/Matlab like mathematical library - yes. For real world application - no. Engineer can achieve best performance without special cases by lowering "abstraction" down. Simplicity and transparency ("how it works") is more important in this case.Convolutions, identiy matrices, invertible matrices are stuff
Jun 14 2015
On Sunday, 14 June 2015 at 14:46:36 UTC, Ola Fosheim Grøstad wrote:Yes, C++ templates are a hard nut to crack, if D had added excellent pattern matching to its meta programming repertoire the I think this would be enough to put D in a different league.https://github.com/solodon4/Mach7
Jun 14 2015
A naive basic matrix library is simple to write, I don't need standard library support for that + I get it to work the way I want by using SIMD registers directly... => I probably would not use it if I could implement it in less than 10 hours.A naive std.algorithm and std.range is easy to write too.
Jun 14 2015
On Sunday, 14 June 2015 at 15:15:38 UTC, Ilya Yaroshenko wrote:I wouldn't know. People have different needs. Builtin for-each-loops, threads and SIMD support are more important to me than iterators (ranges). But the problem with linear algebra is that you might want to do SIMD optimized versions where you calculate 4 equations at the time, do reshuffling etc. So a library solution has to provide substantial benefits.A naive basic matrix library is simple to write, I don't need standard library support for that + I get it to work the way I want by using SIMD registers directly... => I probably would not use it if I could implement it in less than 10 hours.A naive std.algorithm and std.range is easy to write too.
Jun 14 2015
On Sunday, 14 June 2015 at 18:05:33 UTC, Ola Fosheim Grøstad wrote:
> On Sunday, 14 June 2015 at 15:15:38 UTC, Ilya Yaroshenko wrote:
>> A naive std.algorithm and std.range is easy to write too.
> I wouldn't know. People have different needs. Builtin for-each-loops, threads and SIMD support are more important to me than iterators (ranges). But the problem with linear algebra is that you might want to do SIMD optimized versions where you calculate 4 equations at the time, do reshuffling etc. So a library solution has to provide substantial benefits.

Yes, but it would be hard to create a SIMD optimised version. What do you think about this chain of steps?

1. Create generalised (only type template and maybe flags) BLAS algorithms (probably slow) with a CBLAS-like API.
2. Allow users to use existing CBLAS libraries inside the generalised BLAS.
3. Start to improve the generalised BLAS with SIMD instructions.
4. And then continue the discussion about the types of matrices we want...
Jun 14 2015
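A minimal sketch of what step 1 could look like: a generalised, deliberately naive gemm-like routine over row-major storage. Real code would block for cache and use SIMD, and the gemmNaive name and signature are just illustrative, not a proposed API:

import std.stdio;

// c (m x n) = a (m x k) * b (k x n), all row-major
void gemmNaive(T)(size_t m, size_t n, size_t k,
                  const(T)[] a, const(T)[] b, T[] c)
{
    assert(a.length == m * k && b.length == k * n && c.length == m * n);
    foreach (i; 0 .. m)
        foreach (j; 0 .. n)
        {
            T sum = 0;
            foreach (l; 0 .. k)
                sum += a[i * k + l] * b[l * n + j];
            c[i * n + j] = sum;
        }
}

void main()
{
    double[] a = [1, 2, 3, 4];   // 2x2
    double[] b = [5, 6, 7, 8];   // 2x2
    auto c = new double[4];
    gemmNaive(2, 2, 2, a, b, c);
    writeln(c);                  // [19, 22, 43, 50]
}

Step 2 would then let a version switch forward the same call to an existing CBLAS implementation instead of the triple loop.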
On Sunday, 14 June 2015 at 18:49:21 UTC, Ilya Yaroshenko wrote:Yes, but it would be hard to create SIMD optimised version.Then again clang is getting better at this stuff.What do you think about this chain of steps? 1. Create generalised (only type template and my be flags) BLAS algorithms (probably slow) with CBLAS like API. 2. Allow users to use existing CBLAS libraries inside generalised BLAS. 3. Start to improve generalised BLAS with SIMD instructions. 4. And then continue discussion about type of matrixes we want...Hmm… I don't know. In general I think the best thing to do is to develop libraries with a project and then turn it into something more abstract. If I had more time I think I would have made the assumption that we could make LDC produce whatever next version of clang can do with pragmas/GCC-extensions and used that assumption for building some prototypes. So I would: 1. protoype typical constructs in C, compile it with next version of llvm/clang (with e.g. 4xloop-unrolling and try different optimization/vectorizing options) the look at the output in LLVM IR and assembly mnemonic code. 2. Then write similar code with hardware optimized BLAS and benchmark where the overhead between pure C/LLVM and BLAS calls balance out to even. Then you have a rough idea of what the limitations of the current infrastructure looks like, and can start modelling the template types in D? I'm not sure that you should use SIMD directly, but align the memory for it. Like, on iOS you end up using LLVM subsets because of the new bitcode requirements. Ditto for PNACL. Just a thought, but that's what I would I do.
Jun 14 2015
Another thing worth noting is that I believe Intel has put some effort into next gen (?) LLVM/Clang for autovectorizing into AVX2. It might be worth looking into as it uses a mask that allows the CPU to skip computations that would lead to no change, but I think it is only available on last gen Intel CPUs. Also worth keeping in mind is that future versions of LLVM will have to deal with GCC extensions and perhaps also Clang pragmas. So maybe take a look at: http://clang.llvm.org/docs/LanguageExtensions.html#vectors-and-extended-vectors and http://clang.llvm.org/docs/LanguageExtensions.html#extensions-for-loop-hint-optimizations ?
Jun 14 2015
See [1] (the Matmul benchmark) Julia Native is probably backed with Intel MKL or OpenBLAS. D version was optimized by Martin Nowak [2] and is still _much_ slower.1. Create generalised (only type template and my be flags) BLAS algorithms (probably slow) with CBLAS like API.I think a good interface is more important than speed of default implementation (at least for e.g large matrix multiplication). Just use existing code for speed... Goto's papers about his BLAS: [3][4] Having something a competitive in D would be great but probably a lot of work. Without a good D interface dstep + openBLAS/Atlas header will not look that bad. Note I am not talking about small matrices/graphics.2. Allow users to use existing CBLAS libraries inside generalised BLAS.nice, but not really important. Good interface to existing high quality BLAS seems more important to me than fast D linear algebra implementation + CBLAS like interface.3. Start to improve generalised BLAS with SIMD instructions.+14. And then continue discussion about type of matrixes we want...2. Then write similar code with hardware optimized BLAS and benchmark where the overhead between pure C/LLVM and BLAS calls balance out to even.may there are more important / beneficial things to work on - assuming total time of contributors is fix and used for other D stuff:) [1] https://github.com/kostya/benchmarks [2] https://github.com/kostya/benchmarks/pull/6 [3] http://www.cs.utexas.edu/users/flame/pubs/GotoTOMS2.pdf [4] http://www.cs.utexas.edu/users/pingali/CS378/2008sp/papers/gotoPaper.pdf
Jun 14 2015
On Sunday, 14 June 2015 at 21:31:53 UTC, anonymous wrote:Sure, but that is what I'd do if I had the time. Get a baseline for what kind of NxN sizes D can reasonably be expected to deal with in a "naive brute force" manner. Then consider pushing anything beyond that over to something more specialized. *shrugs*2. Then write similar code with hardware optimized BLAS and benchmark where the overhead between pure C/LLVM and BLAS calls balance out to even.may there are more important / beneficial things to work on - assuming total time of contributors is fix and used for other D stuff:)
Jun 14 2015
On Sunday, 14 June 2015 at 21:50:02 UTC, Ola Fosheim Grøstad wrote:Sure, but that is what I'd do if I had the time. Get a baseline for what kind of NxN sizes D can reasonably be expected to deal with in a "naive brute force" manner.In case it isn't obvious: a potential advantage of a simple algorithm that do "naive brute force" is that the backend might stand a better chance optimizing it, at least when you have a matrix that is known at compile time.
Jun 15 2015
On Sunday, 14 June 2015 at 21:50:02 UTC, Ola Fosheim Grøstad wrote:On Sunday, 14 June 2015 at 21:31:53 UTC, anonymous wrote:On Sunday, 14 June 2015 at 21:50:02 UTC, Ola Fosheim Grøstad wrote:Sure, but that is what I'd do if I had the time. Get a baseline for what kind of NxN sizes D can reasonably be expected to deal with in a "naive brute force" manner. Then consider pushing anything beyond that over to something more specialized. *shrugs*2. Then write similar code with hardware optimized BLAS and benchmark where the overhead between pure C/LLVM and BLAS calls balance out to even.may there are more important / beneficial things to work on - assuming total time of contributors is fix and used for other D stuff:)On Sunday, 14 June 2015 at 21:31:53 UTC, anonymous wrote:sorry, I should read more careful. I understand 'optimize default implementation to the speed of high quality BLAS for _any_/large matrix size'. Great if it is done but imo there is no real pressure to do it and probably needs lot of time of experts. To benchmark when existing BLAS is actually faster is than 'naive brute force' sounds very good and reasonable.Sure, but that is what I'd do if I had the time. Get a baseline for what kind of NxN sizes D can reasonably be expected to deal with in a "naive brute force" manner. Then consider pushing anything beyond that over to something more specialized. *shrugs*2. Then write similar code with hardware optimized BLAS and benchmark where the overhead between pure C/LLVM and BLAS calls balance out to even.may there are more important / beneficial things to work on - assuming total time of contributors is fix and used for other D stuff:)
Jun 15 2015
On Monday, 15 June 2015 at 08:12:17 UTC, anonymous wrote:I understand 'optimize default implementation to the speed of high quality BLAS for _any_/large matrix size'. Great if it is done but imo there is no real pressure to do it and probably needs lot of time of experts.+1
Jun 15 2015
On Monday, 15 June 2015 at 08:12:17 UTC, anonymous wrote:sorry, I should read more careful. I understand 'optimize default implementation to the speed of high quality BLAS for _any_/large matrix size'. Great if it is done but imo there is no real pressure to do it and probably needs lot of time of experts. To benchmark when existing BLAS is actually faster is than 'naive brute force' sounds very good and reasonable.Yes. Well, I think there are some different expectations to what a standard library should include. In my view BLAS is primarily an API that matters because people have existing code bases, therefore it is common to have good implementations for it. I don't really see any reason for why new programs should target it. I think it is a good idea to stay higher level. Provide simple implementations that the optimizer can deal with. Then have a benchmarking program that run on different configurations (os+hardware) to measure when the non-D libraries perform better and use those when they are faster. So I don't think phobos should provide BLAS as such. That's what I would do, anyway.
Jun 15 2015
It's a shame ucent/cent never got implemented. But couldn't they be added to Phobos? I often need a 128-bit type with better precision than float and double.
Jun 10 2015
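For illustration, the core of such a type is mostly carry handling. A minimal sketch of an unsigned 128-bit add built from two ulongs (the UInt128 name is hypothetical and unrelated to any existing library):

struct UInt128
{
    ulong lo, hi;

    UInt128 opBinary(string op : "+")(UInt128 rhs) const
    {
        UInt128 r;
        r.lo = lo + rhs.lo;
        r.hi = hi + rhs.hi + (r.lo < lo ? 1 : 0);   // carry out of the low word
        return r;
    }
}

void main()
{
    auto a = UInt128(ulong.max, 0);
    auto b = UInt128(1, 0);
    auto c = a + b;
    assert(c.lo == 0 && c.hi == 1);   // carry propagated into the high word
}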
On Wednesday, 10 June 2015 at 07:56:46 UTC, John Chapman wrote:It's a shame ucent/cent never got implemented. But couldn't they be added to Phobos? I often need a 128-bit type with better precision than float and double.FWIW: https://github.com/d-gamedev-team/gfm/blob/master/math/gfm/math/wideint.d
Jun 10 2015
On 6/10/15 1:53 AM, ponce wrote:On Wednesday, 10 June 2015 at 07:56:46 UTC, John Chapman wrote:Yes, arbitrary fixed-size integrals would be good to have in Phobos. Who's the author of that code? Can we get something going here? -- AndreiIt's a shame ucent/cent never got implemented. But couldn't they be added to Phobos? I often need a 128-bit type with better precision than float and double.FWIW: https://github.com/d-gamedev-team/gfm/blob/master/math/gfm/math/wideint.d
Jun 10 2015
On Wednesday, 10 June 2015 at 15:44:40 UTC, Andrei Alexandrescu wrote:On 6/10/15 1:53 AM, ponce wrote:Sorry for the delay. I wrote this code a while earlier. I will relicense it anyway that is needed (if needed). Currently lack the time to polish it more (adding custom literals would be the one thing to do).On Wednesday, 10 June 2015 at 07:56:46 UTC, John Chapman wrote:Yes, arbitrary fixed-size integrals would be good to have in Phobos. Who's the author of that code? Can we get something going here? -- AndreiIt's a shame ucent/cent never got implemented. But couldn't they be added to Phobos? I often need a 128-bit type with better precision than float and double.FWIW: https://github.com/d-gamedev-team/gfm/blob/master/math/gfm/math/wideint.d
Jun 23 2015
On Wednesday, 10 June 2015 at 07:56:46 UTC, John Chapman wrote:It's a shame ucent/cent never got implemented. But couldn't they be added to Phobos? I often need a 128-bit type with better precision than float and double.Other things I often have a need for: Weak references Queues, stacks, sets Logging Custom date/time formatting Locale-aware number/currency formatting HMAC (for OAuth) URI parsing Sending email (SMTP) Continuations for std.parallelism.Task Database connectivity (sounds like this is on the cards) HTTP listener
Jun 10 2015
On Wed, 10 Jun 2015 09:12:15 +0000, John Chapman wrote:On Wednesday, 10 June 2015 at 07:56:46 UTC, John Chapman wrote:+inf for including that into Phobos. current implementations are hacks=20 that may stop working when internals will change, but if it will be in=20 Phobos, it will be always up-to-date.=It's a shame ucent/cent never got implemented. But couldn't they be added to Phobos? I often need a 128-bit type with better precision than float and double.=20 Other things I often have a need for: =20 Weak references
Jun 10 2015
On Wednesday, 10 June 2015 at 09:12:17 UTC, John Chapman wrote:Loggingstd.experimental.logger!?
Jun 10 2015
On Wednesday, 10 June 2015 at 09:30:37 UTC, Robert burner Schadek wrote:On Wednesday, 10 June 2015 at 09:12:17 UTC, John Chapman wrote:Perfect, he said sheepishly.Loggingstd.experimental.logger!?
Jun 10 2015
On Wednesday, 10 June 2015 at 09:12:17 UTC, John Chapman wrote:HMAC (for OAuth)https://github.com/D-Programming-Language/phobos/pull/3233 Unfortunately it triggers a module cycle bug on FreeBSD that I can't figure out, so it hasn't been merged yet.
Jun 10 2015
On Wednesday, 10 June 2015 at 07:56:46 UTC, John Chapman wrote:It's a shame ucent/cent never got implemented. But couldn't they be added to Phobos? I often need a 128-bit type with better precision than float and double.I think the next release of LDC will support it, at least on some platforms...
Jun 10 2015
On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek wrote:Phobos is awesome, the libs of go, python and rust only have better marketing. As discussed on dconf, phobos needs to become big and blow the rest out of the sky. http://wiki.dlang.org/DIP80 lets get OT, please discussstd.container.concurrent.*
Jun 13 2015
On 06/07/2015 02:27 PM, Robert burner Schadek wrote:Phobos is awesome, the libs of go, python and rust only have better marketing. As discussed on dconf, phobos needs to become big and blow the rest out of the sky. http://wiki.dlang.org/DIP80 lets get OT, please discussWhat are the problems with std.json?
Jun 13 2015
On Saturday, 13 June 2015 at 16:53:22 UTC, Nick Sabalausky wrote:On 06/07/2015 02:27 PM, Robert burner Schadek wrote:slowPhobos is awesome, the libs of go, python and rust only have better marketing. As discussed on dconf, phobos needs to become big and blow the rest out of the sky. http://wiki.dlang.org/DIP80 lets get OT, please discussWhat are the problems with std.json?
Jun 13 2015
Good start: http://code.dlang.org/packages/dip80-ndslice https://github.com/9il/dip80-ndslice/blob/master/source/std/experimental/range/ndslice.d I miss the function `sliced` in Phobos.
Jun 13 2015
On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek wrote:Phobos is awesome, the libs of go, python and rust only have better marketing. As discussed on dconf, phobos needs to become big and blow the rest out of the sky. http://wiki.dlang.org/DIP80 lets get OT, please discussN-dimensional slices is ready for comments! Announce http://forum.dlang.org/thread/rilfmeaqkailgpxoziuo forum.dlang.org Ilya
Jun 15 2015
On Monday, 15 June 2015 at 10:00:43 UTC, Ilya Yaroshenko wrote:
> N-dimensional slices is ready for comments!

It seems to me that the properties of the matrix require `row` and `col` like this:

import std.stdio, std.experimental.range.ndslice, std.range : iota;

void main() {
    auto matrix = 100.iota.sliced(3, 4, 5);
    writeln(matrix[0]);
    // [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19]]

    // writeln(matrix[0].row); // 4
    // writeln(matrix[0].col); // 5
}

P.S. I'm not exactly sure that these properties should work exactly as in my code :)
Jun 15 2015
On Monday, 15 June 2015 at 13:44:53 UTC, Dennis Ritchie wrote:
> It seems to me that the properties of the matrix require `row` and `col` like this:
> [...]
> // writeln(matrix[0].row); // 4
> // writeln(matrix[0].col); // 5

try .length!0 and .length!1 or .shape[0] and .shape[1]
Jun 15 2015
On Monday, 15 June 2015 at 13:55:16 UTC, John Colvin wrote:
> On Monday, 15 June 2015 at 13:44:53 UTC, Dennis Ritchie wrote:
>> [...]
> try .length!0 and .length!1 or .shape[0] and .shape[1]

Nitpick: shape contains lengths and strides: .shape.lengths[0] and .shape.lengths[1]
Jun 15 2015
On Monday, 15 June 2015 at 13:44:53 UTC, Dennis Ritchie wrote:
> [...]
> P.S. I'm not exactly sure that these properties should work exactly as in my code :)

This works:

unittest
{
    import std.stdio, std.experimental.range.ndslice;
    import std.range : iota;

    auto matrix = 100.iota.sliced(3, 4, 5);
    writeln(matrix[0]);
    writeln(matrix[0].length);   // 4
    writeln(matrix[0].length!0); // 4
    writeln(matrix[0].length!1); // 5
    writeln(matrix.length!2);    // 5
}

Prints:

//[[0, 1, 2, 3, 4], [5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19]]
//4
//4
//5

I am not sure that we need something like `height`/row and `width`/col for nd-slices. These kinds of names can be used after casting to the future `std.container.matrix`.
Jun 15 2015
On Monday, 15 June 2015 at 14:32:20 UTC, Ilya Yaroshenko wrote:
> I am not sure that we need something like `height`/row and `width`/col for nd-slices. These kinds of names can be used after casting to the future `std.container.matrix`.

Here something similar is implemented: https://github.com/k3kaimu/carbon/blob/master/source/carbon/linear.d#L52-L56

In the future I want something like `rows` and `cols`: https://github.com/k3kaimu/carbon/blob/master/source/carbon/linear.d#L156-L157

Waiting for `static foreach`. This design really helps a lot to implement multidimensional slices.
Jun 15 2015