digitalmars.D.learn - GPU's as floating point coprocessors

reply Karen Lanrap <karen digitaldaemon.com> writes:
http://folding.stanford.edu/FAQ-ATI.html declares ATI's X1900-series 
to be the most advanced coprocessor ever (rumour: 375 GFLOPS, as 
opposed to the 25 GFLOPS of the latest Conroes).

How to address this power with D?
Sep 29 2006
next sibling parent Tom S <h3r3tic remove.mat.uni.torun.pl> writes:
Karen Lanrap wrote:
 http://folding.stanford.edu/FAQ-ATI.html declares ATI's X1900-series 
 to be the most advanced coprocessor ever (rumour: 375 GFLOPS, as 
 opposed to the 25 GFLOPS of the latest Conroes).
 
 How to address this power with D?
Make it cooperate with Cg/HLSL or GLSL <g>
Sep 29 2006
prev sibling parent reply "Andrei Khropov" <andkhropov nospam_mtu-net.ru> writes:
Karen Lanrap wrote:

 http://folding.stanford.edu/FAQ-ATI.html declares ATI's X1900-series 
 to be the most advanced coprocessor ever (rumour: 375 GFLOPS, as 
 opposed to the 25 GFLOPS of the latest Conroes).
 
 How to address this power with D?
Have you seen the Accelerator project from MS Research?

Paper:
http://research.microsoft.com/research/pubs/view.aspx?type=technical%20report&id=1040
Video:
http://channel9.msdn.com/showpost.aspx?postid=229585
Download:
http://research.microsoft.com/research/downloads/download.aspx?FUID=50ee362a-c4d7-4fe6-9018-1b7f9c1dd5dc

Maybe it's not a bad idea to create something similar for D.
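As a rough sketch of the direction "something similar for D" could take, here
is a minimal deferred-array type. The names DeferredArray, map and evaluate
are invented for the example, and evaluate() simply loops on the CPU where a
real Accelerator-style back end would generate GPU work.

import std.stdio;

// Invented names, illustration only: DeferredArray records element-wise
// operations instead of executing them; evaluate() is where a GPU back
// end would build and run a shader, but here it just loops on the CPU.
class DeferredArray
{
    float[] data;                  // source data
    float delegate(float)[] ops;   // recorded per-element operations

    this(float[] d) { data = d; }

    // record an operation, don't execute it yet
    DeferredArray map(float delegate(float) f)
    {
        ops ~= f;
        return this;
    }

    // apply all recorded operations and return the result
    float[] evaluate()
    {
        float[] result = data.dup;
        foreach (op; ops)
            foreach (ref x; result)
                x = op(x);
        return result;
    }
}

void main()
{
    auto a = new DeferredArray([1.0f, 2.0f, 3.0f, 4.0f]);
    // nothing is computed here, the operations are only recorded
    a.map(delegate(float x) { return x * 2.0f; })
     .map(delegate(float x) { return x + 1.0f; });
    writefln("%s", a.evaluate());   // [3, 5, 7, 9]
}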
Oct 03 2006
next sibling parent Knud Sørensen <12tkvvb02 sneakemail.com> writes:
On Tue, 03 Oct 2006 19:33:25 +0000, Andrei Khropov wrote:

 Karen Lanrap wrote:
 
 http://folding.stanford.edu/FAQ-ATI.html declares ATI's X1900-series 
 to be the most advanced coprocessor ever (rumour: 375 GFLOPS, as 
 opposed to the 25 GFLOPS of the latest Conroes).
 
 How to address this power with D?
 Have you seen the Accelerator project from MS Research?
 
 Paper:
 http://research.microsoft.com/research/pubs/view.aspx?type=technical%20report&id=1040
 Video:
 http://channel9.msdn.com/showpost.aspx?postid=229585
 Download:
 http://research.microsoft.com/research/downloads/download.aspx?FUID=50ee362a-c4d7-4fe6-9018-1b7f9c1dd5dc
 
 Maybe it's not a bad idea to create something similar for D.
That is the idea behind the vectorization suggestion here:
http://all-technology.com/eigenpolls/dwishlist/index.php?it=10

What is a vectorized expression? Basically, a loop that does not
specify any order of execution. If no order is specified, the compiler
is of course free to choose whichever order is efficient, or even to
distribute the code and execute it in parallel.
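To make the ordering point concrete, here is a small D sketch (illustrative
only): the explicit loop fixes an iteration order, while the array-operation
form, which later versions of D support, only says which elements to combine
and so leaves the compiler free to vectorise or parallelise.

import std.stdio;

void main()
{
    double[] a = [1.0, 2.0, 3.0, 4.0];
    double[] b = [10.0, 20.0, 30.0, 40.0];
    auto c = new double[a.length];

    // ordered loop: iteration i is sequenced after iteration i-1
    for (size_t i = 0; i < a.length; ++i)
        c[i] = a[i] + b[i];

    // order-free array operation: only says which elements to combine,
    // so the compiler may reorder, use SSE, or in principle run the
    // work somewhere else entirely
    c[] = a[] + b[];

    writefln("%s", c);   // [11, 22, 33, 44]
}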
Oct 03 2006
prev sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Andrei Khropov wrote:
 Karen Lanrap wrote:
 
 http://folding.stanford.edu/FAQ-ATI.html declares ATI's X1900-series 
 to be the most advanced coprocessor ever (rumour: 375 GFLOPS, as 
 opposed to the 25 GFLOPS of the latest Conroes).

 How to address this power with D?
 Have you seen the Accelerator project from MS Research?
 
 Paper:
 http://research.microsoft.com/research/pubs/view.aspx?type=technical%20report&id=1040
 Video:
 http://channel9.msdn.com/showpost.aspx?postid=229585
 Download:
 http://research.microsoft.com/research/downloads/download.aspx?FUID=50ee362a-c4d7-4fe6-9018-1b7f9c1dd5dc
 
 Maybe it's not a bad idea to create something similar for D.
Hmm.  It sounds pretty much like OpenMP.
http://www.openmp.org/drupal/

--bb
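For comparison, this is roughly what an OpenMP-style "parallel for" looks like
with std.parallelism, a D library that only appeared years after this thread;
the arrays a, b, c are just placeholders for the example.

import std.parallelism;
import std.range : iota;
import std.stdio;

void main()
{
    auto n = 1_000_000;
    auto a = new double[n];
    auto b = new double[n];
    auto c = new double[n];
    a[] = 1.5;
    b[] = 2.5;

    // each index is handled by some worker thread, order unspecified;
    // Accelerator applies the same model to the GPU instead of a thread pool
    foreach (i; parallel(iota(n)))
        c[i] = a[i] + b[i];

    writefln("%s", c[0]);   // 4
}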
Oct 03 2006
parent reply "nobody_" <spam spam.spam> writes:
 Hmm.  It sounds pretty much like OpenMP.
 http://www.openmp.org/drupal/

 --bb
I might be wrong, but as far as I can see OpenMP doesn't use the GPU.

Using the GPU as a coprocessor isn't that difficult, though: just find
code that can be done faster through the GPU pipelines. Reading through
the OpenGL red book might give you some clues as to which kinds of
operations to look for. Translate the problem 'graphically' et voila,
profit.

I intend to use the GPU for parts of the AI in my upcoming games (just
wait another year or so :)

But yes, it would be nice to have a project going which would set up
everything (in OpenGL) and provide some basic functions that would be
done by the GPU. It shouldn't even be too difficult to let OpenGL stay
your main device for screen output :D
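As an illustration (CPU-only, no actual GPU code here): this is the shape of
computation that translates well to the pipeline. On the GPU the arrays would
become textures, the per-element kernel a fragment shader, and the result a
render target read back with glReadPixels.

import std.stdio;
import std.math;

// per-element "kernel": this is the part that would live in the shader
float kernel(float x, float y)
{
    return sqrt(x * x + y * y);
}

void main()
{
    float[] xs = [3.0f, 5.0f, 8.0f, 7.0f];
    float[] ys = [4.0f, 12.0f, 15.0f, 24.0f];
    auto outp = new float[xs.length];

    // independent per-element work: no element needs another's result
    foreach (i, x; xs)
        outp[i] = kernel(x, ys[i]);

    writefln("%s", outp);   // [5, 13, 17, 25]
}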
Oct 08 2006
parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Ok.  I didn't get that far in watching the channel 9 video.  So it 
sounds like OpenMP for the GPU then.  Neat.  How's the support for 
double precision floating point?  Also what happens when the user has 
crappy built-in Intel chipset graphics?  If they're going to solve those 
issues for us then it will be very useful indeed.

--bb

nobody_ wrote:
 Hmm.  It sounds pretty much like OpenMP.
 http://www.openmp.org/drupal/

 --bb
 I might be wrong, but as far as I can see OpenMP doesn't use the GPU.
 
 Using the GPU as a coprocessor isn't that difficult, though: just find
 code that can be done faster through the GPU pipelines. Reading through
 the OpenGL red book might give you some clues as to which kinds of
 operations to look for. Translate the problem 'graphically' et voila,
 profit.
 
 I intend to use the GPU for parts of the AI in my upcoming games (just
 wait another year or so :)
 
 But yes, it would be nice to have a project going which would set up
 everything (in OpenGL) and provide some basic functions that would be
 done by the GPU. It shouldn't even be too difficult to let OpenGL stay
 your main device for screen output :D
Oct 09 2006
parent "nobody_" <spam spam.spam> writes:
I didn't watch the whole thing either. (well just now I did :)
I was just talking about general GPU computation.
If you want to know the support for x, you should just check the specs of 
OpenGL x.x or Direct3D x.x.
Every card has specs which show the version they are compliant with, and if 
they are cheating, both the card manufacturers and OpenGL/Microsoft will sue 
them.
Thus even the built-in chips should be able to help out.
But you are right, there are probably some precision differences, but isn't 
that also a problem with different CPUs?
(I remember being told not to use different CPUs for distributed rendering 
for just that reason)
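A small illustration of that precision point, nothing GPU-specific: summing
the same series in float and double already gives different answers, and
single-precision results also depend on summation order, which is the sort of
thing that varies between GPUs and different CPU floating-point units.

import std.stdio;

void main()
{
    float  fsum = 0.0f;
    double dsum = 0.0;

    for (int i = 1; i <= 1_000_000; ++i)
    {
        fsum += 1.0f / i;   // 32-bit accumulation
        dsum += 1.0  / i;   // 64-bit accumulation
    }

    writefln("float : %.7f", fsum);
    writefln("double: %.7f", dsum);
    // the two sums disagree after a handful of digits
}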

I think it would be very interesting to start our own gpu-co-proc module.


 Ok.  I didn't get that far in watching the channel 9 video.  So it sounds 
 like OpenMP for the GPU then.  Neat.  How's the support for double 
 precision floating point?  Also what happens when the user has crappy 
 built-in Intel chipset graphics?  If they're going to solve those issues 
 for us then it will be very useful indeed.

 --bb
Oct 09 2006