digitalmars.D - Library for Linear Algebra?
- Trass3r (3/3) Mar 20 2009 Is there any working library for Linear Algebra?
- BCS (5/8) Mar 20 2009 I haven't used them but:
- Trass3r (10/12) Mar 20 2009 This project contains BLADE, I didn't test it myself because the author
- Don (5/23) Mar 20 2009 I abandoned it largely because array operations got into the language;
- Trass3r (6/12) Mar 21 2009 Though array operations still only give us SIMD and no multithreading (?...
- Don (7/22) Mar 22 2009 There's absolutely no way you'd want multithreading on a BLAS1
- Fawzi Mohamed (12/35) Mar 22 2009 not true, if your vector is large you could still use several threads.
- Don (16/59) Mar 23 2009 That's surprising. I confess to never having benchmarked it, though.
- Bill Baxter (7/10) Mar 20 2009 For D1 try Gobo and Dflat. Gobo has a bunch of wrappers for Fortran
- Fawzi Mohamed (17/32) Mar 20 2009 Dflat gives you also sparse matrix formats, if all you are interested
Is there any working library for Linear Algebra? I only found BLADE, which seems to be abandoned and quite unfinished, and Helix which only provides 3x3 and 4x4 matrices used in games :(
Mar 20 2009
Reply to Trass3r,

> Is there any working library for Linear Algebra? I only found BLADE,
> which seems to be abandoned and quite unfinished, and Helix which only
> provides 3x3 and 4x4 matrices used in games :(

I haven't used them but:

http://www.dsource.org/projects/mathextra
http://www.dsource.org/projects/lyla

If either works well for you, please comment on it.
Mar 20 2009
BCS schrieb:

> http://www.dsource.org/projects/mathextra

This project contains BLADE. I didn't test it myself because the author
stated on Wed Oct 17, 2007 that "there are still some fairly large issues
to work out", and there haven't been any updates to it since May 2008.

> http://www.dsource.org/projects/lyla

This seems to be something I should look further into, though it is
abandoned as well :(

That's a real pity; having a good scientific computation library is
crucial for D being used at universities. You know, Matlab is fine for
writing clean, intuitive code, but when it comes to real performance
requirements it totally sucks (damn Java ;) )
Mar 20 2009
Trass3r wrote:
> BCS schrieb:
>> http://www.dsource.org/projects/mathextra
> This project contains BLADE. I didn't test it myself because the author
> stated on Wed Oct 17, 2007 that "there are still some fairly large
> issues to work out", and there haven't been any updates to it since
> May 2008.

I abandoned it largely because array operations got into the language;
since then I've been working on getting the low-level math language stuff
working. Don't worry, I haven't gone away!
Mar 20 2009
Don schrieb:
> I abandoned it largely because array operations got into the language;
> since then I've been working on getting the low-level math language
> stuff working. Don't worry, I haven't gone away!

I see. Though array operations still only give us SIMD and no
multithreading (?!).

I think the best approach is lyla's
(http://www.dsource.org/projects/lyla): taking an existing, optimized C
BLAS library and writing some kind of wrapper using operator overloading
etc. to make programming easier and more intuitive.
Mar 21 2009
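The operator-overloading wrapper Trass3r describes could be sketched like
this (a minimal illustration, not lyla's actual code: the `Matrix` type and
its naive triple-loop multiply are stand-ins, where a real wrapper would
forward the loop nest to an optimized BLAS routine such as `cblas_dgemm`):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical wrapper type; row-major dense storage.
struct Matrix {
    std::size_t rows, cols;
    std::vector<double> data;
    Matrix(std::size_t r, std::size_t c) : rows(r), cols(c), data(r * c, 0.0) {}
    double& operator()(std::size_t i, std::size_t j)       { return data[i * cols + j]; }
    double  operator()(std::size_t i, std::size_t j) const { return data[i * cols + j]; }
};

Matrix operator*(const Matrix& a, const Matrix& b) {
    assert(a.cols == b.rows);
    Matrix c(a.rows, b.cols);
    // In a BLAS-backed wrapper this whole loop nest would be a single
    // cblas_dgemm call on the raw data pointers.
    for (std::size_t i = 0; i < a.rows; ++i)
        for (std::size_t k = 0; k < a.cols; ++k)
            for (std::size_t j = 0; j < b.cols; ++j)
                c(i, j) += a(i, k) * b(k, j);
    return c;
}
```

The point of the wrapper is that user code then reads `Matrix c = a * b;`
while the heavy lifting happens in the tuned C library underneath.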
Trass3r wrote:
> Though array operations still only give us SIMD and no multithreading
> (?!).

There's absolutely no way you'd want multithreading on a BLAS1 operation.
It's not until BLAS3 that you become computation-limited.

> I think the best approach is lyla's, taking an existing, optimized C
> BLAS library and writing some kind of wrapper using operator overloading
> etc. to make programming easier and more intuitive.

In my opinion, we actually need matrices in the standard library, with a
very small number of primitive operations built in (much like Fortran
does). Outside those, I agree, wrappers to an existing library should be
used.
Mar 22 2009
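Don's BLAS1-vs-BLAS3 distinction comes down to arithmetic intensity
(flops per byte of memory traffic), which the following back-of-envelope
sketch makes concrete (the traffic counts are the usual idealized ones,
ignoring caching effects):

```cpp
#include <cstddef>

// axpy (BLAS1, y += a*x): 2n flops against roughly 3n doubles of
// traffic (read x, read y, write y) -- a constant, tiny ratio, so the
// operation is memory-bound no matter how large n gets.
double axpy_intensity(std::size_t n) {
    return (2.0 * n) / (3.0 * n * sizeof(double));
}

// gemm (BLAS3, C += A*B): 2n^3 flops against roughly 4n^2 doubles of
// traffic (read A, B, C; write C) -- the ratio grows linearly with n,
// so for large matrices the CPU, not memory, becomes the bottleneck.
double gemm_intensity(std::size_t n) {
    return (2.0 * n * n * n) / (4.0 * n * n * sizeof(double));
}
```

For n = 1000, axpy stays at about 0.08 flops/byte while gemm reaches over
60; that widening gap is why only BLAS3 becomes computation-limited.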
On 2009-03-22 09:45:32 +0100, Don <nospam nospam.com> said:
> There's absolutely no way you'd want multithreading on a BLAS1
> operation. It's not until BLAS3 that you become computation-limited.

Not true: if your vector is large, you could still use several threads.
But you are right that using multiple threads at a low level is a
dangerous thing, because it might be better to use just one thread and
parallelize another operation at a higher level. Thus you need to know,
more or less, how many threads are really available for that operation. I
am trying to tackle that problem in blip, by having a global scheduler,
which I am rewriting.

> In my opinion, we actually need matrices in the standard library, with
> a very small number of primitive operations built in (much like Fortran
> does). Outside those, I agree, wrappers to an existing library should
> be used.

blip.narray.NArray does that if compiled with -version=blas, but I think
that for large vectors/matrices you can do better (exactly by using
multithreading).
Mar 22 2009
Fawzi Mohamed wrote:
> Not true: if your vector is large, you could still use several threads.

That's surprising. I confess to never having benchmarked it, though. If
the vector is large, all threads are competing for the same L2 and L3
cache bandwidth, right? (Assuming a typical x86 situation where every CPU
has an L1 cache and the L2 and L3 caches are shared.) So multiple cores
should never be beneficial whenever the RAM->L3 or L3->L2 bandwidth is
the bottleneck, which will be the case for most BLAS1-style operations at
large sizes. And at small sizes, the thread overhead is significant,
wiping out any potential benefit. What have I missed?

> But you are right that using multiple threads at a low level is a
> dangerous thing, because it might be better to use just one thread and
> parallelize another operation at a higher level. Thus you need to know,
> more or less, how many threads are really available for that operation.

Yes, if you have a bit more context, it can be a clear win.

> I am trying to tackle that problem in blip, by having a global
> scheduler, which I am rewriting.

I look forward to seeing it!

I suspect that with 'shared' and 'immutable' arrays, D can do better than
C, in theory. I hope it works out in practice.
Mar 23 2009
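The multithreaded BLAS1 operation Fawzi and Don are debating can be
sketched as a chunked axpy (shown here in C++ with std::thread rather
than D, purely as an illustration). Correctness is easy because the
chunks are disjoint; whether it is actually *faster* is exactly Don's
memory-bandwidth question, since every thread streams through the same
shared cache levels and RAM bus:

```cpp
#include <cstddef>
#include <thread>
#include <vector>

// y += a*x, split across nthreads by contiguous disjoint chunks.
void axpy_parallel(double a, const std::vector<double>& x,
                   std::vector<double>& y, unsigned nthreads) {
    const std::size_t n = x.size();
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < nthreads; ++t) {
        // Chunk boundaries; rounding makes the chunks cover [0, n) exactly.
        const std::size_t lo = n * t / nthreads;
        const std::size_t hi = n * (t + 1) / nthreads;
        pool.emplace_back([&x, &y, a, lo, hi] {
            for (std::size_t i = lo; i < hi; ++i)
                y[i] += a * x[i];
        });
    }
    for (auto& th : pool) th.join();
}
```

Each thread does 2(hi-lo) flops but also pulls its whole chunk through
the shared memory hierarchy, so the scheme only pays off when per-core
bandwidth, not total bandwidth, is the limiting factor.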
On Sat, Mar 21, 2009 at 3:38 AM, Trass3r <mrmocool gmx.de> wrote:
> Is there any working library for Linear Algebra? I only found BLADE,
> which seems to be abandoned and quite unfinished, and Helix which only
> provides 3x3 and 4x4 matrices used in games :(

For D1 try Gobo and Dflat. Gobo has a bunch of wrappers for Fortran
libraries. Dflat is a higher-level matrix/sparse-matrix interface. Works
with Tango using Tangobos.

See: http://www.dsource.org/projects/multiarray

--bb
Mar 20 2009
On 2009-03-20 20:46:21 +0100, Bill Baxter <wbaxter gmail.com> said:
> For D1 try Gobo and Dflat. Gobo has a bunch of wrappers for Fortran
> libraries. Dflat is a higher-level matrix/sparse-matrix interface.
> Works with Tango using Tangobos.

Dflat also gives you sparse matrix formats. If all you are interested in
are dense matrices, then blip (http://dsource.org/projects/blip, using
the gobo wrappers) has NArray, which gives you a nice interface to
N-dimensional arrays and (compiling with -version=blas -version=lapack)
also gives you access to most of the LAPACK functions. With it you can do
things like these:

import blip.narray.NArray;
auto m=zeros!(float)([10,10]);
auto d=diag(m);
d[]=arange(0.0,1.0,0.1);
m[1,2]=0.4f;
auto v=dot(m,arange(0.1,1.1,0.1));
auto r=solve(m,v.asType!(float));
auto ev=eig(m);

Fawzi
Mar 20 2009