digitalmars.D.announce - ArrayFire, a GPU library, is now open source
- bachmeier (6/6) Nov 12 2014 ArrayFire is open source, as announced on Hacker News and Reddit
- ponce (10/16) Nov 13 2014 Am I the only one to be left completely cold with the new wave of
- "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= (9/14) Nov 13 2014 The way I see it you have two groups looking for GPU support:
- Shehzan (19/28) Dec 16 2014 I am one of the developers of ArrayFire. As we went open source,
- ponce (13/28) Dec 17 2014 Hi, I'm kind of embarrassed by my bitter post, must have been a
- Pavan Yalamanchili (14/44) May 13 2015 I know this is a really old post, but just to add to what Shehzan
- Engine Machine (2/8) Aug 09 2016 But basically it is still C and not D?
- Pavan Yalamanchili (4/17) Aug 15 2016 We have a c++ API as well that can be directly used from D.
ArrayFire is open source, as announced on Hacker News and Reddit:
https://github.com/arrayfire/arrayfire

Overview here: http://www.arrayfire.com/docs/index.htm

There is a C API, so it is easy to call from D. This should help the situation for numerical programming with D.
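As a rough sketch of what calling it from D can look like, here is a minimal example going through the C API. The extern(C) declarations are hand-written from arrayfire.h (af_randu, af_print_array, af_release_array) and should be checked against the installed headers; you also need to link one of the backend libraries (libafcpu, libafcuda or libafopencl).

import std.stdio : writeln;

alias af_array = void*;   // opaque handle to an ArrayFire array
alias dim_t    = long;    // 64-bit dimension type
alias af_err   = int;     // AF_SUCCESS == 0

enum af_dtype : uint { f32 = 0 }

extern (C) nothrow @nogc
{
    // Fill a new array with uniformly distributed random numbers.
    af_err af_randu(af_array* arr, uint ndims, const(dim_t)* dims, af_dtype type);
    // Print an array (values plus dimensions) to stdout.
    af_err af_print_array(af_array arr);
    // Drop our reference to the array.
    af_err af_release_array(af_array arr);
}

void main()
{
    dim_t[2] dims = [3, 3];
    af_array a;
    if (af_randu(&a, 2, dims.ptr, af_dtype.f32) != 0)
    {
        writeln("af_randu failed");
        return;
    }
    af_print_array(a);   // 3x3 matrix of random floats, computed on the device
    af_release_array(a);
}

Building with something like "dmd app.d -L-lafcpu" should be enough for the CPU backend, assuming the ArrayFire libraries are on the linker path.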
Nov 12 2014
On Thursday, 13 November 2014 at 02:06:03 UTC, bachmeier wrote:
> ArrayFire is open source, as announced on Hacker News and Reddit:
> https://github.com/arrayfire/arrayfire
>
> Overview here: http://www.arrayfire.com/docs/index.htm
>
> There is a C API, so it is easy to call from D. This should help the
> situation for numerical programming with D.

Am I the only one left completely cold by the new wave of C++-to-GPU libraries (Bolt/ArrayFire/OpenACC) which take back the control that compute APIs give? For example, this one removes double precision and multiple devices, things that are built in with OpenCL.

These libraries build on the myth that a GPU's power can be harnessed without pain, but at some point you have to expose the multiple levels of parallelism that GPUs have, exploit spatial cache locality, etc. This feels like a 60% solution.
Nov 13 2014
On Thursday, 13 November 2014 at 08:33:57 UTC, ponce wrote:
> Am I the only one left completely cold by the new wave of C++-to-GPU
> libraries (Bolt/ArrayFire/OpenACC) which take back the control that
> compute APIs give? For example, this one removes double precision and
> multiple devices, things that are built in with OpenCL.

The way I see it, you have two groups looking for GPU support: those that do batch computations and those that do realtime. I think the latter group wants "more direct access" to the hardware (like Metal).

I guess this solution is more for the in-between, like desktop apps that need a little boost here and there. People who are not really into coprocessors/hardware, but sometimes need a little extra.
Nov 13 2014
> Am I the only one left completely cold by the new wave of C++-to-GPU
> libraries (Bolt/ArrayFire/OpenACC) which take back the control that
> compute APIs give? For example, this one removes double precision and
> multiple devices, things that are built in with OpenCL.
>
> These libraries build on the myth that a GPU's power can be harnessed
> without pain, but at some point you have to expose the multiple levels
> of parallelism that GPUs have, exploit spatial cache locality, etc.
> This feels like a 60% solution.

I am one of the developers of ArrayFire. When we went open source, we removed all the restrictions that were in place for our older commercial version. That is, double precision and multiple devices are part of the open source project.

We also support CPU and OpenCL backends along with CUDA. This way, you can use the same ArrayFire code to run across any of those technologies without changes. All you need to do is link the correct library. We used a BSD 3-Clause license to make it easy for everyone to use in their own projects.

Here is a blog post I wrote about implementing Conway's Game of Life using ArrayFire: http://arrayfire.com/conways-game-of-life-using-arrayfire/. It demonstrates how easy it is to use ArrayFire.

Our goal is to make it easy for people to get started with GPU programming and break down the barrier for non-programmers to use the hardware efficiently. I agree that complex algorithms require more custom solutions, but once you get started, things become much easier.
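The linked blog post uses the C++ API. As a rough, hedged illustration of the same idea from D, here is one Game of Life generation through the C API: next = (neighbours == 3) | (alive & (neighbours == 2)), with the neighbour count obtained from a 3x3 convolution. The declarations are hand-written from arrayfire.h, the enum values (AF_CONV_DEFAULT, AF_CONV_AUTO, the af_dtype codes) are assumptions to verify against the headers, and error checks plus releases of intermediate handles are omitted.

alias af_array = void*;
alias dim_t    = long;
alias af_err   = int;                      // AF_SUCCESS == 0
enum af_dtype : uint { f32 = 0, b8 = 4 }
enum AF_CONV_DEFAULT = 0;                  // af_conv_mode (assumed value)
enum AF_CONV_AUTO    = 0;                  // af_conv_domain (assumed value)

extern (C) nothrow @nogc
{
    af_err af_create_array(af_array* arr, const(void)* data, uint ndims,
                           const(dim_t)* dims, af_dtype type);
    af_err af_randu(af_array* arr, uint ndims, const(dim_t)* dims, af_dtype type);
    af_err af_constant(af_array* arr, double val, uint ndims,
                       const(dim_t)* dims, af_dtype type);
    af_err af_convolve2(af_array* outp, const af_array signal,
                        const af_array filter, int mode, int domain);
    af_err af_gt (af_array* outp, const af_array lhs, const af_array rhs, bool batch);
    af_err af_eq (af_array* outp, const af_array lhs, const af_array rhs, bool batch);
    af_err af_and(af_array* outp, const af_array lhs, const af_array rhs, bool batch);
    af_err af_or (af_array* outp, const af_array lhs, const af_array rhs, bool batch);
    af_err af_cast(af_array* outp, const af_array arr, af_dtype type);
    af_err af_print_array(af_array arr);
    af_err af_release_array(af_array arr);
}

void main()
{
    enum n = 16;
    dim_t[2] gridDims = [n, n];
    dim_t[2] kernDims = [3, 3];

    // Random initial state: 1.0 where randu > 0.5, else 0.0.
    af_array rnd, half, aliveB8, state;
    af_randu(&rnd, 2, gridDims.ptr, af_dtype.f32);
    af_constant(&half, 0.5, 2, gridDims.ptr, af_dtype.f32);
    af_gt(&aliveB8, rnd, half, false);
    af_cast(&state, aliveB8, af_dtype.f32);

    // 3x3 neighbourhood kernel with a zero centre.
    float[9] k = [1, 1, 1, 1, 0, 1, 1, 1, 1];
    af_array kernel;
    af_create_array(&kernel, k.ptr, 2, kernDims.ptr, af_dtype.f32);

    // One generation: next = (neighbours == 3) | (alive & (neighbours == 2)).
    af_array nbors, three, two, eq3, eq2, survive, nextB8, next;
    af_convolve2(&nbors, state, kernel, AF_CONV_DEFAULT, AF_CONV_AUTO);
    af_constant(&three, 3.0, 2, gridDims.ptr, af_dtype.f32);
    af_constant(&two,   2.0, 2, gridDims.ptr, af_dtype.f32);
    af_eq(&eq3, nbors, three, false);
    af_eq(&eq2, nbors, two, false);
    af_and(&survive, aliveB8, eq2, false);
    af_or(&nextB8, eq3, survive, false);
    af_cast(&next, nextB8, af_dtype.f32);  // feed back in as the new state

    af_print_array(next);
    af_release_array(next);                // remaining releases omitted for brevity
}

The same source builds against the CPU, CUDA, or OpenCL backend depending only on which ArrayFire library it is linked with, which is the portability point made above.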
Dec 16 2014
Hi, I'm kind of embarrassed by my bitter post, must have been a bad day :).

On Tuesday, 16 December 2014 at 19:49:37 UTC, Shehzan wrote:
> We also support CPU and OpenCL backends along with CUDA. This way, you
> can use the same ArrayFire code to run across any of those technologies
> without changes. All you need to do is link the correct library.

Cool, that was reason enough for me to avoid NPP until now. I've certainly found it desirable to be able to target OpenCL, the CPU, or CUDA indifferently from the same codebase.

What I'd like more than a library of functions, though, is an abstracted compute API: a compiler from your own compute language to OpenCL C or CUDA C++, along with an API wrapper. That would probably mean leaving some features behind to keep to the intersection. Similar to bgfx, which has a shader compiler targeting many backends, but for compute APIs.

> We used a BSD 3-Clause license to make it easy for everyone to use in
> their own projects. Here is a blog post I wrote about implementing
> Conway's Game of Life using ArrayFire:
> http://arrayfire.com/conways-game-of-life-using-arrayfire/. It
> demonstrates how easy it is to use ArrayFire. Our goal is to make it
> easy for people to get started with GPU programming and break down the
> barrier for non-programmers to use the hardware efficiently. I agree
> that complex algorithms require more custom solutions, but once you get
> started, things become much easier.

Your example is indeed very simple, so I guess it has its uses.
Dec 17 2014
On Wednesday, 17 December 2014 at 12:58:23 UTC, ponce wrote:
> Hi, I'm kind of embarrassed by my bitter post, must have been a bad
> day :).
>
> On Tuesday, 16 December 2014 at 19:49:37 UTC, Shehzan wrote:
>> We also support CPU and OpenCL backends along with CUDA. This way, you
>> can use the same ArrayFire code to run across any of those
>> technologies without changes. All you need to do is link the correct
>> library.
>
> Cool, that was reason enough for me to avoid NPP until now. I've
> certainly found it desirable to be able to target OpenCL, the CPU, or
> CUDA indifferently from the same codebase.
>
> What I'd like more than a library of functions, though, is an
> abstracted compute API: a compiler from your own compute language to
> OpenCL C or CUDA C++, along with an API wrapper. That would probably
> mean leaving some features behind to keep to the intersection. Similar
> to bgfx, which has a shader compiler targeting many backends, but for
> compute APIs.
>
>> We used a BSD 3-Clause license to make it easy for everyone to use in
>> their own projects. Here is a blog post I wrote about implementing
>> Conway's Game of Life using ArrayFire:
>> http://arrayfire.com/conways-game-of-life-using-arrayfire/. It
>> demonstrates how easy it is to use ArrayFire. Our goal is to make it
>> easy for people to get started with GPU programming and break down the
>> barrier for non-programmers to use the hardware efficiently. I agree
>> that complex algorithms require more custom solutions, but once you
>> get started, things become much easier.
>
> Your example is indeed very simple, so I guess it has its uses.

I know this is a really old post, but just to add to what Shehzan already mentioned: we have had double precision support (both real and complex) since day one (and for quite a long time before that as well). Our documentation does not make this obvious immediately because we have just a single array class. The array class holds the metadata for the data type, and we eventually launch the appropriate kernels. ArrayFire can also integrate with existing CUDA or OpenCL code.

The goal of libraries (be it Thrust or Bolt or ArrayFire) is not to take back control, but to make sure users are not re-inventing the wheel over and over again. Having access to highly optimized, pre-existing GPU kernels for commonly used algorithms can only increase productivity.
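A small, hedged sketch of the "single array class" point through the C API: every array, whatever its element type, is the same opaque af_array handle, and the dtype travels with it as metadata that can be queried back. The declarations (af_constant, af_get_type, af_release_array) and the af_dtype codes are hand-written from arrayfire.h and should be verified against the installed headers.

import std.stdio : writeln;

alias af_array = void*;
alias dim_t    = long;
alias af_err   = int;                      // AF_SUCCESS == 0
enum af_dtype : uint { f32 = 0, c32 = 1, f64 = 2, c64 = 3 }

extern (C) nothrow @nogc
{
    af_err af_constant(af_array* arr, double val, uint ndims,
                       const(dim_t)* dims, af_dtype type);
    af_err af_get_type(af_dtype* type, const af_array arr);
    af_err af_release_array(af_array arr);
}

void main()
{
    dim_t[2] dims = [4, 4];
    af_array a;
    // Same call and same handle type; only the dtype argument changes.
    af_constant(&a, 1.0, 2, dims.ptr, af_dtype.f64);

    af_dtype t;
    af_get_type(&t, a);                    // the handle knows its own type
    writeln(t == af_dtype.f64);            // true: a double-precision array
    af_release_array(a);
}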
May 13 2015
On Thursday, 13 November 2014 at 02:06:03 UTC, bachmeier wrote:
> ArrayFire is open source, as announced on Hacker News and Reddit:
> https://github.com/arrayfire/arrayfire
>
> Overview here: http://www.arrayfire.com/docs/index.htm
>
> There is a C API, so it is easy to call from D. This should help the
> situation for numerical programming with D.

But basically it is still C and not D?
Aug 09 2016
On Tuesday, 9 August 2016 at 15:31:21 UTC, Engine Machine wrote:
> On Thursday, 13 November 2014 at 02:06:03 UTC, bachmeier wrote:
>> ArrayFire is open source, as announced on Hacker News and Reddit:
>> https://github.com/arrayfire/arrayfire
>>
>> Overview here: http://www.arrayfire.com/docs/index.htm
>>
>> There is a C API, so it is easy to call from D. This should help the
>> situation for numerical programming with D.
>
> But basically it is still C and not D?

We have a C++ API as well that can be used directly from D. If anyone is explicitly interested in a D wrapper, ping me on our chat room: https://gitter.im/arrayfire/arrayfire
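A minimal sketch of what "directly from D" can mean, assuming af::info() is declared in af/device.h as "AFAPI void info();". Binding the full af::array class (constructors, operator overloads) takes more work; this only shows that D's extern(C++) can match the C++ mangling of a free function in the af namespace.

extern (C++, af)
{
    // Assumed declaration from af/device.h: AFAPI void info();
    void info();
}

void main()
{
    af.info();   // prints the ArrayFire version and the devices of the linked backend
}

Link it against libaf (the unified backend) or one of libafcpu/libafcuda/libafopencl, the same as for the C API.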
Aug 15 2016