
D - Non-commuting product operator

reply Norbert Nemec <Norbert.Nemec gmx.de> writes:
Hi there,

the assumption that multiplications are always commutative really is
restricting the use of the language in a rather serious way.

If I were to design a numerical library for linear algebra, the most natural
thing to do would be to use the multiplication operator for matrix
multiplications, allowing one to write
        Matrix A, B;
        Matrix C = A * B;

In the current definition of the language, there would be no way to do such
a thing, forcing library writers to resort to stuff like:
        Matrix X = mult(A,B);
which gets absolutely ugly for large expressions.
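For instance, even a modest expression like A*B + B*C would have to become
(with hypothetical helper names):
        Matrix Y = add(mult(A,B), mult(B,C));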

In this respect, the minimal simplification for library writers (who can
drop one of two opMul definitions in a few cases) actually means a major
inconvenience for the library users!

This question does not affect optimizability at all. One could still say in
the language definition that *for builtin float/int multiplications* the
order of subexpressions is not guaranteed.

Therefore I strongly urge the language designers to reconsider this matter
if they have any interest in creating a serious tool for scientific
computing.

(The question whether the assumed associativity of multiplications/additions
is such a good idea is a completely different matter. I doubt it, but I
cannot yet discuss it.)

Ciao,
Nobbi
Apr 21 2004
next sibling parent reply "Matthew" <matthew.hat stlsoft.dot.org> writes:
I'm afraid I don't follow your argument. Can you be specific about what is
wrong, and about what changes you propose?

:)

"Norbert Nemec" <Norbert.Nemec gmx.de> wrote in message
news:c656ip$2k8e$1 digitaldaemon.com...
Apr 21 2004
parent reply Norbert Nemec <Norbert.Nemec gmx.de> writes:
The problem is that matrices do not commute. That is,
        A*B != B*A

To be more exact: commutativity is a special feature of real and complex
numbers; mathematically there are tons of other objects where it makes
sense to define a product, but this product does not commute.

Many of these objects might not be useful for numerics, but matrices and
vectors are a basic tool for every engineer and scientist. For me, a
powerful and comfortable matrix library is one of the key features of a
good language for scientific computing.

My request, to be specific, would simply be:
--> drop the assumption that opMul is commutative in the language definition  

Of course, the compiler may still optimize code by commuting products of
plain numbers, but not by commuting user-defined objects.

The additional overhead for library authors will only be that they have to
define two mostly identical opMul implementations, and only for
multiplications of two different types; even then, the second one can still
refer to the first one.
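
A rough sketch of what this looks like (hypothetical Matrix/Scalar classes;
the real arithmetic is elided):

class Matrix
{
    Matrix opMul (Scalar s)
    {
        // placeholder: the real implementation would return a new,
        // scaled matrix
        return this;
    }
}

class Scalar
{
    double value;

    Matrix opMul (Matrix m)
    {
        return m * this;  // s*m == m*s for scalars, so defer to the first
    }
}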

Whether this should be done for additions as well, I do not know, but it
might be the safe decision. I know of no practically used mathematical
objects where the addition does not commute, but who knows what
mathematicians may come up with. Also, non-mathematicians might find it
hard to understand why addition and multiplication are handled differently.

Other operators are not affected, since they either are commutative by
mathematical definition (like "==") or have no mathematical meaning beyond
the boolean one ("&").

As I said, the matter of associativity should be considered as well. Most of
the practically used structures in mathematics are associative, but then,
floating point operations are not really associative. (Just try to compare
(1e-30+1e30)-1e30 and 1e-30+(1e30-1e30) and you'll see the problem.)
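A quick check (a complete D program; printf comes from std.c.stdio):

import std.c.stdio;

void main()
{
    printf("%g\n", (1e-30 + 1e30) - 1e30);  // prints 0
    printf("%g\n", 1e-30 + (1e30 - 1e30));  // prints 1e-30
}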

Anyhow: Fortran, which is still the language preferred by many scientists
doing heavy numerics, assumes associativity and demands that programmers
explicitly request a certain order by splitting expressions where this is
necessary.

Ciao,
Nobbi


Matthew wrote:

 I'm afraid I don't follow your argument. Can you be specific about what is
 wrong, and about what changes you propose?
 
 :)
 
 "Norbert Nemec" <Norbert.Nemec gmx.de> wrote in message
 news:c656ip$2k8e$1 digitaldaemon.com...
 Hi there,

 the assumption that multiplications are always commutative really is
 restricting the use of the language in a rather serious way.

 If I were to design a numerical library for linear algebra, the most
natural
 thing to do would be to use the multiplication operator for matrix
 multiplications, allowing to write
         Matrix A, B;
         Matrix C = A * B;

 In the current definition of the language, there would be now way to do
such
 a thing, forcing library writers do resort to stuff like:
         Matrix X = mult(A,B);
 which gets absolutely ugly for large expressions.

 In this aspect the minimal simplification for library writers (who can
drop
 one of two opMul definitions in a few cases) actually means a major
 inconvenience for the library users!

 This question does not affect optimizability at all. One could still say
in
 the language definition that *for builtin float/int multiplications* the
 order of subexpressions is not guaranteed.

 Therefore I strongly urge the language designers to reconsider this
 matter if they have any interest in creating a serious tool for
 scientific computing.

 (The question whether the assumed associativity of
multiplications/additions
 is such a good idea is a completely different matter. I doubt it, but I
 cannot yet discuss it.)

 Ciao,
 Nobbi
Apr 21 2004
next sibling parent reply "Jan-Eric Duden" <jeduden whisset.com> writes:
This is not only true for matrices,
but for all groups and rings that are not commutative.
Matrices are a nice example showing that forcing the operators + and * to be
commutative is an obstacle in the use of the language.

But I guess this is not new to Walter...
-- 
Jan-Eric Duden

"Norbert Nemec" <Norbert.Nemec gmx.de> wrote in message
news:c65981$2p7m$1 digitaldaemon.com...
Apr 21 2004
parent reply Norbert Nemec <Norbert.Nemec gmx.de> writes:
Jan-Eric Duden wrote:

 But i guess this is not new to Walter...
True, but with the discussions happening on a newsgroup without an easily searchable archive, it is hard to avoid bringing up topics again. (And even if the topic has been discussed, I would probably bring it up again...)
Apr 21 2004
parent reply "Jan-Eric Duden" <jeduden whisset.com> writes:
 :)  I like it if those issues pop up again and again.
Maybe it convinces Walter that there is a good reason to change D in that
aspect..

-- 
Jan-Eric Duden
"Norbert Nemec" <Norbert.Nemec gmx.de> wrote in message
news:c65gou$42f$2 digitaldaemon.com...
Apr 21 2004
parent reply Norbert Nemec <Norbert.Nemec gmx.de> writes:
Jan-Eric Duden wrote:

  :)  I like it if those issues pop up again and again.
 Maybe it convinces Walter that there is a good reason to change D in that
 aspect..
 
Well, it will probably pop up again every time somebody tries to implement a
matrix class in D. I don't know whether Walter ever used matrices in
programming, but everybody who did will agree that it is not a minor request
but an urgent recommendation to allow for noncommutative multiplication.
(Especially since the gain from defining opMul commutative is negligible.)
Apr 21 2004
parent "Jan-Eric Duden" <jeduden whisset.com> writes:
Maybe we should give Walter homework. :)
Every time we can think of a use-case where D has drawbacks compared to other
languages, we formulate an exercise and Walter needs to solve it elegantly
with the current D compiler. :)

-- 
Jan-Eric Duden

"Norbert Nemec" <Norbert.Nemec gmx.de> wrote in message
news:c65lgt$bd8$2 digitaldaemon.com...
Apr 21 2004
prev sibling parent reply J Anderson <REMOVEanderson badmama.com.au> writes:
Norbert Nemec wrote:

The problem is, that matrices do not commute. That is
        A*B != B*A

  
Dig (undig on my webpage) has a nice example of matrices. This example is not
a problem for D, because you are multiplying a matrix by a matrix. The
problem comes when you multiply a privative by another type. But I can't
think of any non-commutative scalar/object operations, can you? And if you're
desperate you can wrap (box) the privative in a class.

-- 
-Anderson: http://badmama.com.au/~anderson/
Apr 21 2004
next sibling parent J Anderson <REMOVEanderson badmama.com.au> writes:
privative = primitive (i.e. int, float, etc...)

-- 
-Anderson: http://badmama.com.au/~anderson/
Apr 21 2004
prev sibling parent reply "Jan-Eric Duden" <jeduden whisset.com> writes:
This example
 is not a problem for D, because you are multiplying a matrix by a
 matrix.
No. Think of the following matrix operation:
        Translate(-1.0,-1.0,-1.0)*RotateX(30.0)*Translate(1.0,1.0,1.0)
which is different from
        Translate(1.0,1.0,1.0)*RotateX(30.0)*Translate(-1.0,-1.0,-1.0)
or
        RotateX(30.0)*Translate(1.0,1.0,1.0)*Translate(-1.0,-1.0,-1.0)

-- 
Jan-Eric Duden

"J Anderson" <REMOVEanderson badmama.com.au> wrote in message
news:c65hqm$5d1$2 digitaldaemon.com...
Apr 21 2004
parent reply J Anderson <REMOVEanderson badmama.com.au> writes:
Have you tried dig matrices yet? They work this way. I mean, with a matrix
class you have access to both sides of the equation. It's only with
primitive types that there is this problem. Commute is more than possible
for matrices.

//From dig:
    /** Multiply matrices. */
    mat3 opMul (mat3 mb)
    {
        mat3 mo;
        float [] a = array ();
        float [] b = mb.array ();
        float [] o = mo.array ();

        for (int i; i < 3; i ++)
        {
            o [i + 0] = a [i] * b [0] + a [i + 3] * b [1] + a [i + 6] * b [2];
            o [i + 3] = a [i] * b [3] + a [i + 3] * b [4] + a [i + 6] * b [5];
            o [i + 6] = a [i] * b [6] + a [i + 3] * b [7] + a [i + 6] * b [8];
        }

        return mo;
    }

I'm afraid you've misunderstood what the documentation means.

Try it. Send in a dig example that doesn't compute the correct result.

//(Untested)
mat3 n, o;
...
if (n * o == o * n)
{
    printf("multiplication is equal\n");
}

-- 
-Anderson: http://badmama.com.au/~anderson/
Apr 21 2004
next sibling parent reply Norbert Nemec <Norbert.Nemec gmx.de> writes:
The problem down there just exemplifies what I mean:

If you use that mat3 multiplication:

        mat3 A,B;
        A = ...;
        B = ...;

now A*B is different from B*A. The language definition, though, allows the
compiler to swap factors of a product.

Obviously, as long as you only use matrix*matrix operation, the compiler has
no reason to swap them except for optimization purposes. So obviously, you
never ran into problems with your code. Anyhow: a different compiler might
just transform an A*B into a B*A randomly and your code will break down
completely.

B.t.w.: scalar*matrix = matrix*scalar, but e.g. matrix*columnvector does
not commute, and columnvector*matrix does not even exist!



Apr 21 2004
parent reply J Anderson <REMOVEanderson badmama.com.au> writes:
Norbert Nemec wrote:

The problem down there just exemplifies what I mean:

If you use that mat3 multiplication:

        mat3 A,B;
        A = ...;
        B = ...;

now A*B is different from B*A. The language definition, though, allows the
compiler to swap factors of a product.
  
I think this is only on the primitive level.
Obviously, as long as you only use matrix*matrix operation, the compiler has
no reason to swap them except for optimization purposes. So obviously, you
never ran into problems with your code. Anyhow: a different compiler might
just transform an A*B into a B*A randomly and your code will break down
completely.
  
So now you're arguing that what the spec says is different from what the
compiler does? Walter is fully aware of how matrices work. I don't think he
would let this one fall down.
B.t.w: scalar*matrix=matrix*scalar, but p.e. matrix*columnvector does not,
and actually columnvector*matrix does not even exist!
  
You should overload matrix to include columnvector, i.e. like dig (vec3).

-- 
-Anderson: http://badmama.com.au/~anderson/
Apr 21 2004
parent reply Norbert Nemec <Norbert.Nemec gmx.de> writes:
Sorry about the complaints concerning commuting operators. I just reread the
language definition and found I misunderstood a detail:

The compiler actually is only allowed to swap operands if there is no
routine defined doing the operation in the right order. With this
restriction, it is actually possible to write a correct matrix library.

One has to take care, of course, not to leave out any combination, because
the compiler would produce wrong code instead of complaining, but that is
something I can live with.

Ciao,
Nobbi
Apr 21 2004
next sibling parent J Anderson <REMOVEanderson badmama.com.au> writes:
Much better than C++, because you only need to overload 8 rather than 28
operators (in general).

-- 
-Anderson: http://badmama.com.au/~anderson/
Apr 21 2004
prev sibling parent reply "Jan-Eric Duden" <jeduden whisset.com> writes:
As far as I understand the docs, this is not correct:
opAdd and opMul are supposed to be commutative. There is no opAdd_r or
opMul_r.
see http://www.digitalmars.com/d/operatoroverloading.html : Overloadable
Binary Operators

-- 
Jan-Eric Duden

"Norbert Nemec" <Norbert.Nemec gmx.de> wrote in message
news:c65n80$er1$1 digitaldaemon.com...
Apr 21 2004
next sibling parent reply J Anderson <REMOVEanderson badmama.com.au> writes:
Jan-Eric Duden wrote:

As far as I understand the docs, this is not correct:
opAdd and opMul are supposed to be commutative. There is no opAdd_r or
opMul_r.
see http://www.digitalmars.com/d/operatoroverloading.html : Overloadable
Binary Operators
It's a misunderstanding. Sure, if the other class doesn't define the
opposite overload then D will swap the things around to make it work (this
is a feature rather than a con). But that's just a semantic bug on the
programmer's part.

class A { opMul(B) {} }

class B
{
    opMul(A) {} //If this is omitted then D will make things
                //commutative. Otherwise you essentially have non-commutative.
}

Therefore opMul_r isn't necessary. This question has been asked a couple of
times in the group. So far I've seen no one show a coded example (or a
mathematical type) that defeats this strategy. And matrices definitely
aren't one.

Now one problem I do see is if the programmer doesn't have access to types A
and B to add their own types (of course they can overload) but that's a
different issue. I guess it could be solved using delegates though.

-- 
-Anderson: http://badmama.com.au/~anderson/
Apr 21 2004
next sibling parent reply Norbert Nemec <Norbert.Nemec gmx.de> writes:
OK, now, this actually boils down the problem even further. It only leaves
cases where one of the two operand types is not accessible to add the
multiplication. I can think of two possible reasons for that:

* one of the types is a primitive. (I don't know of any mathematical object
that does not commute with scalars, but who knows?)

* you cannot or do not want to touch the sourcecode of one of the types -
here I have not enough insight whether delegates might solve this and if
this solution is elegant and efficient enough to justify the inhibition of
opMul_r

Allowing opMul_r would cost nothing more than a minor change in the
language definition and the implementation. It would break no existing code
at all and finish this kind of discussion.



Apr 21 2004
parent reply J Anderson <REMOVEanderson badmama.com.au> writes:
Norbert Nemec wrote:

OK, now, this actually boils down the problem even further. It only leaves
cases where one of the two operand types is not accessible to add the
multiplication. I can think of three possible reasons for that:

* one of the types is a primitive. (I don't know of any mathematical object
that does not commute with scalars, but who knows?)
  
 I don't know of any mathematical object that does not commute with
 scalars, but who knows?

Exactly. If you really need to, you can box.
* you cannot or do not want to touch the sourcecode of one of the types -
here I have not enough insight whether delegates might solve this and if
this solution is elegant and efficient enough to justify the inhibition of
opMult_r

Allowing opMult_r would not cost anything than a minor change in the
language definition and the implementation. It would break no existing code
at all and finish this kind of discussion.

  
It would cost something. We would have C++ where operators are used for
anything. Anyhow, I've shown how this can be overcome. It's not impossible,
just hard to do, because most of the time you don't need it (and if you do,
there is probably something wrong in your design).

-- 
-Anderson: http://badmama.com.au/~anderson/
Apr 21 2004
parent reply Norbert Nemec <Norbert.Nemec gmx.de> writes:
J Anderson wrote:
 It would cost something.  We would have C++ where operators are used for
 anything.  Anyhow I've shown how this can be overcome.
If you look at "cost" in terms of language and compiler complexity, it
would not cost anything. The comfort for the programmer would also stay the
same, since current code would just work as now. As long as you don't use
opMul_r, you don't have to worry about it. As long as you use only
commuting objects, there will never be any reason for opMul_r.

If you - as I understand your remark - consider "cost" the increased danger
of abuse of language constructs (meaning evil programmers defining "*"
operators that have nothing to do with multiplication) then the existence
of opMul_r and opAdd_r will change little. Therefore I really don't
understand why you say "Allowing opMul_r would cost something."

So, if opMul_r does not cost anything, I don't see any reason to try to
find out whether you can work around it in any case. If anybody ever needs
to box a scalar or take the pains of using delegates or whatever means just
because someone decided that opMul_r is "unnecessary", then clearly this
decision was wrong.
Apr 21 2004
parent reply J Anderson <REMOVEanderson badmama.com.au> writes:
Problems arise when you have operator clashes. If you have op_r and op
defined in both classes, which one do you pick? You've just killed both
class operators. Walter wants to avoid this problem (which is in C++) as
much as possible. The solution I presented doesn't suffer from that problem.

Sadly, the other operators will suffer from this problem.

-- 
-Anderson: http://badmama.com.au/~anderson/
Apr 21 2004
next sibling parent reply Hauke Duden <H.NS.Duden gmx.net> writes:
J Anderson wrote:
 Problems arrives when you have operator clashes.  If you have op_r and 
 op defined in both classes which one do you pick?  You've just killed 
 both class operators.  Walter wants to avoid this problem (which is in 
 C++) as much as possible.  The solution I presented doesn't suffer from 
 that problem.
That is not a problem at all. "a op b" can be translated to "a.op(b)" or
"b.op_r(a)". It can NOT be translated to "b.op(a)" or "a.op_r(b)". And
since there is a clear precedence rule between op and op_r (op comes first)
there is no ambiguity at all.

Hauke
Apr 21 2004
parent reply Norbert Nemec <Norbert.Nemec gmx.de> writes:
Hauke Duden wrote:

 J Anderson wrote:
 Problems arrives when you have operator clashes.  If you have op_r and
 op defined in both classes which one do you pick?  You've just killed
 both class operators.  Walter wants to avoid this problem (which is in
 C++) as much as possible.  The solution I presented doesn't suffer from
 that problem.
That is not a problem at all. "a op b" can be translated to "a.op(b)" or "b.op_r(a)". It can NOT be translated to "b.op(a)" or "a.op_r(b)".
Well, YES, actually for op="*", it will be translated to b.opMul(a) in step 3. of the precedence rules.
Apr 21 2004
parent Hauke Duden <H.NS.Duden gmx.net> writes:
Norbert Nemec wrote:
That is not a problem at all. "a op b" can be translated to "a.op(b)" or
"b.op_r(a)". It can NOT be translated to "b.op(a)" or "a.op_r(b)".
Well, YES, actually for op="*", it will be translated to b.opMul(a) in step 3. of the precedence rules.
I meant that these would be the reasonable rules for non-commutative operators. I.e. that it is no problem to define something unambiguous. Sorry for not making it clear. Hauke
Apr 21 2004
prev sibling parent Norbert Nemec <Norbert.Nemec gmx.de> writes:
J Anderson wrote:
 Problems arrives when you have operator clashes.  If you have op_r and
 op defined in both classes which one do you pick?  You've just killed
 both class operators.  Walter wants to avoid this problem (which is in
 C++) as much as possible.
That problem would already be cleanly solved by the three rules in the
often cited:
        http://www.digitalmars.com/d/operatoroverloading.html

If the compiler finds an expression a*b, it will
1. first try to interpret it as a.opMul(b)
2. if that does not exist, try b.opMul_r(a)
3. if that also does not exist, try b.opMul(a)
and if that does not exist, give up, issuing an error.

For noncommuting operators like "/", step 3 would be skipped. Currently,
for commuting operators like "*", step 2 is skipped for no good reason. My
request would only be to reintroduce (2.) for "*" and "+".

For any existing code, opMul_r does not exist, so this change would not
matter. For any truly commuting objects, b.opMul_r(a) and b.opMul(a) are
identical, so if the programmer really implements b.opMul_r(a) it is just
redundant.

As you see, the problem of the ambiguity was already solved by putting the
three rules into the language spec. It is just a matter of actually
allowing programmers to implement opMul_r where it is appropriate.
 The solution I presented doesn't suffer from
 that problem.
 
 Sadly of the other operators will suffer from this problem.
Sorry, I lost track of which solution you mean. In any case, by simply
allowing opMul_r and opAdd_r, there is no ambiguity, and nobody feels a
difference unless they really implement it, in which case it just works in
a straightforward way.
Apr 21 2004
prev sibling parent J Anderson <REMOVEanderson badmama.com.au> writes:
Actually I didn't mean delegates. I mean something like:

interface A
{
    B opMyMul(B);
}

class B
{
    B opMul (A a)
    {
        return a.opMyMul(this);
    }
}

Problem solved. But most of the time you shouldn't need that.

-- 
-Anderson: http://badmama.com.au/~anderson/
Apr 21 2004
prev sibling parent reply Norbert Nemec <Norbert.Nemec gmx.de> writes:
Jan-Eric Duden wrote:

 As far as I understand the docs, this is not correct:
 opAdd and opMul are supposed to be commutative. There is no opAdd_r or
 opMul_r.
 see http://www.digitalmars.com/d/operatoroverloading.html : Overloadable
 Binary Operators
 
True!! Ouch!! This actually is a detail that should be fixed! Looking at
the section:

-------------
The following sequence of rules is applied, in order, to determine which
form is used:
(...)
2. If b is a struct or class object reference that contains a member named
opfunc_r and the operator op is not commutative, the expression is
rewritten as:
        b.opfunc_r(a)
-------------

the phrase "and the operator op is not commutative" should be dropped.
b.opMul_r(a) should be checked in any case before b.opMul(a). If the
factors actually do commute, opMul_r need not be defined, but it should
still be possible to define it for cases like Matrix arithmetic.

I guess this request is small enough to convince Walter? The only question
now is, how to get him to read any mail about this subject at all...

B.t.w.: would it be possible to define b.opfunc_r(a) as "illegal" in some
way to create a compiler error when a special combination (like
columnvector*matrix) is used?
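
To illustrate: with opMul_r allowed, a product like rowvector*matrix, where
the rowvector class cannot be touched, could be handled like this (a rough
sketch with hypothetical classes; the real arithmetic is elided):

class RowVector
{
    // imagine this comes from a library you cannot modify
}

class Matrix
{
    RowVector opMul_r (RowVector r)  // handles r * this
    {
        // placeholder: the real rowvector-times-matrix product goes here
        return r;
    }
}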
Apr 21 2004
parent "Jan-Eric Duden" <jeduden whisset.com> writes:
I think this should be done with opAdd too....
-- 
Jan-Eric Duden

"Norbert Nemec" <Norbert.Nemec gmx.de> wrote in message
news:c65opb$hj6$1 digitaldaemon.com...
Apr 21 2004
prev sibling parent reply "Jan-Eric Duden" <jeduden whisset.com> writes:
for example: :)
a:
[ 3    3    1]
[-3    2    2]
[ 5    2    3]
b:
[1    0    4]
[0    1    0]
[0    0    1]

a*b
[ 3    3     13]
[-3    2    -10]
[ 5    2     23]

b*a
[23    11    13]
[-3     2     2]
[ 5     2     3]


-- 
Jan-Eric Duden
"J Anderson" <REMOVEanderson badmama.com.au> wrote in message
news:c65joj$92k$1 digitaldaemon.com...
Apr 21 2004
parent reply J Anderson <REMOVEanderson badmama.com.au> writes:
I don't know why I waste my time proving what you say is nonsense <g>

import std.c.stdio;
import net.BurtonRadons.dig.common.math;
import std.process;

void main()
{
    mat3 a = mat3.create(3, 3, 1, -3, 2, 2, 5, 2, 3);
    mat3 b = mat3.create(1, 0, 4, 0, 1, 0, 0, 0, 1);

    printf("a\n");
    a.print();
    printf("b\n");
    b.print();

    printf("a * b\n");
    mat3 c = a * b;
    c.print();

    printf("b * a\n");
    mat3 d = b * a;
    d.print();

    std.process.system("pause");
}

output:

a
[ 3 3 1 ]
[ -3 2 2 ]
[ 5 2 3 ]
b
[ 1 0 4 ]
[ 0 1 0 ]
[ 0 0 1 ]
a * b
[ 3 3 13 ]
[ -3 2 -10 ]
[ 5 2 23 ]
b * a
[ 23 11 13 ]
[ -3 2 2 ]
[ 5 2 3 ]

-- 
-Anderson: http://badmama.com.au/~anderson/
Apr 21 2004
parent reply "Jan-Eric Duden" <jeduden whisset.com> writes:
Uhm. That depends on how you interpret the matrices - as row vectors or as
column vectors.
In any case, your program proves as well that b*a != a*b !

-- 
Jan-Eric Duden
"J Anderson" <REMOVEanderson badmama.com.au> wrote in message
news:c65mvr$e8h$1 digitaldaemon.com...
Apr 21 2004
next sibling parent "Jan-Eric Duden" <jeduden whisset.com> writes:
I figured out why your output is different:
the matrices are upside down in your output...
I typed them in the standard mathematical way...

-- 
Jan-Eric Duden
"Jan-Eric Duden" <jeduden whisset.com> wrote in message
news:c65nha$fd9$1 digitaldaemon.com...
Apr 21 2004
prev sibling next sibling parent reply Norbert Nemec <Norbert.Nemec gmx.de> writes:
Jan-Eric Duden wrote:

 Uhm. That depends how you interpret the matrices - as row vectors or as
 column vectors.
 In any case, your program proofs as well that b*a != a*b !
Guess we can end this thread. Obviously, everybody knows that matrices do
not commute in general. Obviously, the program by J. Anderson works
correctly.

My question was only whether it is guaranteed to work correctly by the
language definition, or whether it just works for the current
implementation. Now, I found out that I first misunderstood the language
definition and that it actually guarantees correctness (as long as you
don't leave out any operand definition for non-commuting objects).

Hope everyone is satisfied and nobody is mad at me for starting this
pointless thread...

Ciao,
Nobbi
Apr 21 2004
parent reply "Jan-Eric Duden" <jeduden whisset.com> writes:
I don't think so.
opMul is supposed to behave commutatively according to the D specification:

As far as I understand the docs, this is not correct:
opAdd and opMul are supposed to be commutative. There is no opAdd_r or
opMul_r.
see http://www.digitalmars.com/d/operatoroverloading.html : Overloadable
Binary Operators


-- 
Jan-Eric Duden
"Norbert Nemec" <Norbert.Nemec gmx.de> wrote in message
news:c65o7t$gmn$1 digitaldaemon.com...
Apr 21 2004
parent J Anderson <REMOVEanderson badmama.com.au> writes:
Jan-Eric Duden wrote:

I dont think so.
The opMul is supposed behave commutative according to the D specification:

As far as I understand the docs, this is not correct:
opAdd and opMul are supposed to be commutative. There is no opAdd_r or
opMul_r.
see http://www.digitalmars.com/d/operatoroverloading.html : Overloadable
Binary Operators
  
Commutative only if you don't define the other operator in the other class.
Get it?

-- 
-Anderson: http://badmama.com.au/~anderson/
Apr 21 2004
prev sibling parent reply J Anderson <REMOVEanderson badmama.com.au> writes:
Jan-Eric Duden wrote:

Uhm. That depends how you interpret the matrices - as row vectors or as
column vectors.
  
You just make a choice and stick with it (row/col is pretty standard, i.e.
x/y). It all depends on how you see the array. You'll have that *problem*
in any language.

        a * vec3
        vec3 * a
In any case, your program proofs as well that b*a != a*b !
  
And so it should. Sorry, I'm not quite sure if you understand that D works
for matrices yet or not? Really, download undig from my website and look at
the math.d file. It's really quite complete.

-- 
-Anderson: http://badmama.com.au/~anderson/
Apr 21 2004
parent reply "Jan-Eric Duden" <jeduden whisset.com> writes:
The point is that the D compiler is allowed to make B*A out of A*B,
which is not equivalent in the case of matrix multiplication.

-- 
Jan-Eric Duden
"J Anderson" <REMOVEanderson badmama.com.au> wrote in message
news:c65p09$i3u$1 digitaldaemon.com...
Apr 21 2004
next sibling parent reply Norbert Nemec <Norbert.Nemec gmx.de> writes:
Actually, no. I thought so myself (that's why I started the thread) but
        http://www.digitalmars.com/d/operatoroverloading.html
explicitly states the rules by which the compiler determines which code
to use. When writing
        A*B
first A.opMul(B) is checked. Only if this does not exist is B.opMul(A)
called. This solves the problem, if the library author takes care not to
forget any definitions. Only in that case might the compiler swap factors
without a warning.

The only thing that may still be discussed, and will probably pop up over
and over again until Walter gives in, is the question of opMul_r and
opAdd_r. I believe it would make sense to allow those and check them before
commuting factors, but I can live without them.

Jan-Eric Duden wrote:

 The point is that the D compiler is allowed make B*A out of A*B.
 which is not equivalent in the case of matrix multiplication.
Apr 21 2004
next sibling parent reply "Jan-Eric Duden" <jeduden whisset.com> writes:
But then, in the documentation the word "commutative" should not be used
with opMul and opAdd!
Because they don't need to be commutative!

-- 
Jan-Eric Duden
"Norbert Nemec" <Norbert.Nemec gmx.de> wrote in message
news:c660ah$uef$1 digitaldaemon.com...
Apr 21 2004
parent reply J Anderson <REMOVEanderson badmama.com.au> writes:
Jan-Eric Duden wrote:

But then, In the documentation the word "commutative" should not be used
with opMul and opAdd !
Because they don't need to be commuative!
They are commutative by default; that is, they don't need an _r operator. I
feel like an echo.

-- 
-Anderson: http://badmama.com.au/~anderson/
Apr 21 2004
parent Norbert Nemec <Norbert.Nemec gmx.de> writes:
J Anderson wrote:

 Jan-Eric Duden wrote:
 
But then, In the documentation the word "commutative" should not be used
with opMul and opAdd !
Because they don't need to be commuative!
They are commutative by default that is, they don't need an _r operator. I feel like an echo.
Sorry, I feel like repeating myself all the time, too, but this last
statement just was incorrect. It should be:

"They are commutative by default, so if your objects do commute, you can
drop the opfunc_r (because it would be identical to opfunc). For
noncommuting objects, opfunc_r would be different from opfunc, but
unfortunately you are still not allowed to define it and therefore have to
work around it. This is trivial, if you can just put the definition in the
other class as opfunc, but tricky to impossible if you cannot."
Apr 21 2004
prev sibling parent reply "Walter" <newshound digitalmars.com> writes:
"Norbert Nemec" <Norbert.Nemec gmx.de> wrote in message
news:c660ah$uef$1 digitaldaemon.com...
You're right, and they're in.
Jan 28 2005
parent reply Norbert Nemec <Norbert Nemec-online.de> writes:
Walter wrote:

 
 "Norbert Nemec" <Norbert.Nemec gmx.de> wrote in message
 news:c660ah$uef$1 digitaldaemon.com...
 Actually, no. I thought so myself (that's why I started the thread) but
         http://www.digitalmars.com/d/operatoroverloading.html
 explicitely states the rules after which the compiler determines which
code
 to use. Writing
         A*B
 then first A.opMul(B) is checked. Only if this does not exist, B.opMul(A)
is
 called. This solves the problem, if the library author takes care not to
 forget any definitions. Only in that case the compiler might swap factors
 without a warning.

 The only thing that may still be discussed, and will probably pop up over
 and over again until Walter gives in, is the question of opMul_r and
 opAdd_r. I believe it would make sense to allow those and check them
before
 commuting factors, but I can live without them.
You're right, and they're in.
Wow - you really have your newsgroup well-sorted! Having somebody answer to
~9-month-old messages happens rarely these days...

Anyway: I had already noticed from the specs that you had changed that
detail. Thanks a lot!
Jan 30 2005
parent "Walter" <newshound digitalmars.com> writes:
"Norbert Nemec" <Norbert Nemec-online.de> wrote in message
news:ctjilv$1fck$1 digitaldaemon.com...
 Wow - you really have your newgroup well-sorted! Having somebody answer to
 ~9 months old messages happens rarely these days...
Well, it was a long & important thread, and I thought it needed a bit of closure for the archives.
 Anyway: I had already noticed from the specs that you had changed that
 detail. Thanks a lot!
You're welcome.
Feb 04 2005
prev sibling parent J Anderson <REMOVEanderson badmama.com.au> writes:
Jan-Eric Duden wrote:

The point is that the D compiler is allowed make B*A out of A*B.
  
That is not true! Because mat3 is multiplied by itself, you automatically
get the non-commutative behaviour. If for example you had mat3 (A) and mat4
(B), then you'd need to write 2 overloads to make them non-commutative
between the two (of course you can't multiply m3 x m4 unless you pad mat3
to 4x4 with a 1 in the 16th position). In C++ you would also have to write
at least 2 overloads (probably more) to do the same thing.

So all D is doing is putting in a *default* commutative method for all
types, which you can take out by specifying the other side.
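
A minimal sketch of those two overloads (hypothetical Mat3/Mat4 classes;
the promotion and arithmetic are elided):

class Mat4
{
    Mat4 opMul (Mat3 b)  // handles B * A
    {
        // placeholder: promote b to 4x4, multiply, return the product
        return this;
    }
}

class Mat3
{
    Mat4 opMul (Mat4 b)  // handles A * B
    {
        // placeholder: promote this to 4x4, multiply, return the product
        return b;
    }
}

With both overloads present, the compiler never needs to commute the
factors.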
which is not equivalent in the case of matrix multiplication.
  
This is true and is possible in D.

-- 
-Anderson: http://badmama.com.au/~anderson/
Apr 21 2004
prev sibling parent reply Ben Hinkle <bhinkle4 juno.com> writes:
Eventually array arithmetic will act element-wise, so I would design Matrix
so that * means element-wise multiply. That would work fine with
commutativity. I suspect another operator will eventually be added to D to
mean "non-commutative multiply", something like **, so that Matrix multiply
in the regular sense would use **.

For example, the language R uses %*% for matrix mult, %o% for inner
product, %x% for outer product.

-Ben
Apr 21 2004
parent reply Norbert Nemec <Norbert.Nemec gmx.de> writes:
Ben Hinkle wrote:
 Eventually array arithmetic will act element-wise, so I would design
 Matrix so that * mean element-wise multiply. That would work fine with
 commutativity. I suspect another operator will eventually be added to D to
 mean "non-commutative multiply" something like ** so that Matrix multiply
 is the regular sense would use **.
There is no point in elementwise multiplying two matrices. Of course, a Matrix implementation would use some elementwise operation internally on the array that it encapsulates. Anyhow, that is an implementation issue. To the outside, Matrix should only have one multiplication, and that is the Matrix multiplication.
 For example, the language R uses %*% for matrix mult, %o% for inner
 product, %x% for outer product.
You can save all these if you work with column- and rowvectors. r*c means
the inner product, c*r is an outer product, resulting in a matrix. r*r or
c*c don't exist. To convert between the two, you just transpose them (or,
even better, take the adjoint; then you are safe even for complex stuff).

Of course, this all works only for plain 2-d matrices, but for higher
dimensional objects you need to use indices anyway.

Distinguishing between row- and columnvectors seems awkward at first, but
once you are used to it, you realize that things get a lot easier. I would
really hate to see n different multiplication operators, when actually the
types of the operands tell you everything you need.
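
A sketch of that typed approach (hypothetical classes; the arithmetic is
elided):

class Matrix { }

class RowVector
{
    double opMul (ColumnVector c)  // r*c: inner product, a scalar
    {
        return 0.0;  // placeholder for the real sum of products
    }
}

class ColumnVector
{
    Matrix opMul (RowVector r)  // c*r: outer product, a matrix
    {
        return new Matrix;  // placeholder for the real outer product
    }
}

r*r and c*c simply have no overload, so they fail to compile.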
Apr 21 2004
parent reply "Ben Hinkle" <bhinkle4 juno.com> writes:
 There is no point in elementwise multiplying two matrices.
I disagree. Maybe the misunderstanding is about vocabulary. If you think of
the word "matrix" only in the linear algebra sense, then sure, you never
element-wise multiply them; but if you think of the word "matrix" as a 2D
array of data (more like a spreadsheet or something), then you element-wise
multiply all the time. I work at the MathWorks (makers of MATLAB) so I have
seen lots and lots of code that does both. MATLAB uses * for matrix mult
and .* for element-wise mult.

-Ben
Apr 21 2004
parent Norbert Nemec <Norbert.Nemec gmx.de> writes:
True, this is just about vocabulary.

For me, "Matrix" only refers to the mathematical object in the linear
algebra sense. Everything else would be an array. Matlab mixes both and
therefore needs different operators. The cleaner approach is to separate
the data types and define the operators accordingly. For arrays, the
sensible default are elementwise operations, for (mathematical) matrices
these do not make much sense (except maybe somewhere inside the
implementation of certain algorithms) and the plain old "*" operator should
be used for linear algebra multiplications of all sorts, depending on the
type of the objects (inner, outer and matrix product)

The strong point in object oriented programming is, that you can give
semantics to objects. In Matlab, you just have a 2D-array of numbers that
get their meaning only by the operations that you use on them. A specific
linear algebra library gives the internal array far richer semantics. The
first is better for quick and dirty programming, the second for more
disciplined software design. (Like with weak vs. strong typing)

Apr 21 2004