digitalmars.D.learn - Multiplying transposed matrices in mir
- p.shkadzko (12/12) Apr 19 2020 I'd like to calculate XX^T where X is some [m x n] matrix.
- jmh530 (4/17) Apr 19 2020 2 is Contiguous, 0 is Universal, 1 is Canonical. To this day, I
- p.shkadzko (12/32) Apr 19 2020 Ah, I see. There are docs on internal representations of Slices
- jmh530 (11/13) Apr 19 2020 There are elementwise operations on two matrices of the same size
- p.shkadzko (2/17) Apr 19 2020 "assumeContiguous" that's what I was looking for. Thanks!
- p.shkadzko (5/24) Apr 19 2020 well no, "assumeContiguous" reverts the results of the
- jmh530 (17/22) Apr 19 2020 Ah, you're right. I use it in other places where it hasn't been
- p.shkadzko (3/25) Apr 19 2020 Thanks. I somehow missed the whole point of "a * a.transposed"
- jmh530 (6/9) Apr 19 2020 a.transposed is just a view of the original matrix. Even when I
- p.shkadzko (3/14) Apr 20 2020 It is. I was trying to calculate the covariance matrix of some
- jmh530 (8/11) Apr 20 2020 Incorrect. The covariance matrix is calculated with matrix
- 9il (5/30) Apr 19 2020 At the same time, the SliceKind doesn't matter for assignment
- 9il (5/17) Apr 19 2020 BTW for the following operation
- p.shkadzko (2/22) Apr 20 2020 Interesting, thanks for the examples.
I'd like to calculate XX^T where X is some [m x n] matrix.

// create a 3 x 3 matrix
Slice!(double*, 2LU) a = [2.1, 1.0, 3.2, 4.5, 2.4, 3.3, 1.5, 0, 2.1].sliced(3, 3);
auto b = a * a.transposed; // error

Looks like it is not possible due to:

incompatible types for (a) * (transposed(a)): Slice!(double*, 2LU, cast(mir_slice_kind)2) and Slice!(double*, 2LU, cast(mir_slice_kind)0)

I'd like to understand why, and how this operation should be performed in mir. Also, what does the last number, "0" or "2", mean in the type definition "Slice!(double*, 2LU, cast(mir_slice_kind)0)"?
Apr 19 2020
On Sunday, 19 April 2020 at 17:07:36 UTC, p.shkadzko wrote:
> [snip]
> Also, what does the last number "0" or "2" mean in the type definition "Slice!(double*, 2LU, cast(mir_slice_kind)0)"?

2 is Contiguous, 0 is Universal, 1 is Canonical. To this day, I don't have the greatest understanding of the difference.

Try the mtimes function in lubeck.
Apr 19 2020
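For reference, a minimal sketch of what the three SliceKinds look like in code. Roughly: Contiguous stores no strides (dense row-major data), Canonical stores strides for every dimension except the innermost, and Universal stores a stride for every dimension, which is why transposed (a pure stride swap) yields a Universal slice. The universal and canonical conversions below come with mir.ndslice:

import mir.ndslice;
import std.stdio;

void main()
{
    // Contiguous (kind 2): dense row-major data, no strides stored.
    auto c = [1, 2, 3, 4, 5, 6].sliced(2, 3);

    // Canonical (kind 1): strides for all dimensions except the innermost.
    auto k = c.canonical;

    // Universal (kind 0): an explicit stride per dimension; transposed
    // returns this kind because it only swaps lengths and strides.
    auto u = c.universal;

    writeln(typeof(c).stringof);
    writeln(typeof(k).stringof);
    writeln(typeof(u).stringof);
}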
On Sunday, 19 April 2020 at 17:22:12 UTC, jmh530 wrote:
> [snip]
> 2 is Contiguous, 0 is Universal, 1 is Canonical. To this day, I don't have the greatest understanding of the difference. Try the mtimes function in lubeck.

Ah, I see. There are docs on the internal representations of Slices, but nothing about the rationale. It would be nice to have that, since it is pretty much the core of Slice.

"a.mtimes(a.transposed);" works, but the results are different from what NumPy gives for "a * a.transpose()". For example:

a = np.array([[1, 2], [3, 4]])

Slice!(int*, 2LU) a = [1, 2, 3, 4].sliced(2, 2);
writeln(a.mtimes(a.transposed)); // [[5, 11], [11, 25]]

So, lubeck mtimes is equivalent to NumPy "a.dot(a.transpose())".
Apr 19 2020
On Sunday, 19 April 2020 at 17:55:06 UTC, p.shkadzko wrote:
> [snip]
> So, lubeck mtimes is equivalent to NumPy "a.dot(a.transpose())".

There are elementwise operations on two matrices of the same size, and then there is matrix multiplication. Two different things.

You initially said you were using an m x n matrix for the calculation. Elementwise multiplication only works for matrices of the same size, which is only true in your transpose case when they are square. The mtimes function is like dot or @ in Python and does real matrix multiplication, which works for generic m x n matrices.

If you want elementwise multiplication of a square matrix and its transpose in mir, then I believe you need to call assumeContiguous after transposed.
Apr 19 2020
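To make the difference concrete, here is a hedged sketch using a small 2 x 2 example. The .slice call allocates a contiguous copy of the transpose so that the element-wise product compiles (this is discussed further down the thread); the dependency versions are only examples:

/+dub.sdl:
dependency "lubeck" version="~>1.1.7"
dependency "mir-algorithm" version="~>3.7.28"
+/
import mir.ndslice;
import lubeck : mtimes;
import std.stdio;

void main()
{
    auto a = [1.0, 2, 3, 4].sliced(2, 2);

    // Matrix multiplication (NumPy's a.dot(a.T)): rows dotted with columns.
    writeln(a.mtimes(a.transposed)); // [[5, 11], [11, 25]]

    // Element-wise multiplication (NumPy's a * a.T): matching entries are
    // multiplied, so both operands must have exactly the same shape.
    writeln(a * a.transposed.slice); // [[1, 6], [6, 16]]
}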
On Sunday, 19 April 2020 at 18:59:00 UTC, jmh530 wrote:
> [snip]
> If you want elementwise multiplication of a square matrix and its transpose in mir, then I believe you need to call assumeContiguous after transposed.

"assumeContiguous", that's what I was looking for. Thanks!
Apr 19 2020
On Sunday, 19 April 2020 at 19:13:14 UTC, p.shkadzko wrote:
> [snip]
> "assumeContiguous", that's what I was looking for. Thanks!

Well, no: "assumeContiguous" reverts the result of "transposed", so it's effectively "a * a". I would expect it to stay transposed, the way NumPy does: "assert np.all(np.ascontiguousarray(a.T) == a.T)".
Apr 19 2020
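A minimal sketch of that behavior for a square matrix: assumeContiguous only reinterprets the same memory as row-major again, it does not move any elements, so on a transposed square matrix you simply get the original matrix back:

import mir.ndslice;
import std.stdio;

void main()
{
    auto a = [1.0, 2, 3, 4].sliced(2, 2);

    // transposed only swaps lengths and strides; assumeContiguous then
    // reads the same memory in row-major order again, giving back a.
    auto t = a.transposed.assumeContiguous;

    writeln(t);            // [[1, 2], [3, 4]]  (the original a)
    writeln(a.transposed); // [[1, 3], [2, 4]]  (the actual transpose)
}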
On Sunday, 19 April 2020 at 19:20:28 UTC, p.shkadzko wrote:
> [snip]
> Well, no: "assumeContiguous" reverts the result of "transposed", so it's effectively "a * a".

Ah, you're right. I use it in other places where it hasn't been an issue.

I can do it with an allocation (below) using the built-in syntax, but I'm not sure how doable it is without an allocation (Ilya would know better than me).

/+dub.sdl:
dependency "lubeck" version="~>1.1.7"
dependency "mir-algorithm" version="~>3.7.28"
+/
import mir.ndslice;
import lubeck;

void main()
{
    auto a = [2.1, 1.0, 3.2, 4.5, 2.4, 3.3, 1.5, 0, 2.1].sliced(3, 3);
    auto b = a * a.transposed.slice;
}
Apr 19 2020
On Sunday, 19 April 2020 at 20:06:23 UTC, jmh530 wrote:
> [snip]
> auto b = a * a.transposed.slice;

Thanks. I somehow missed the whole point of "a * a.transposed" not working because "a.transposed" is not allocated.
Apr 19 2020
On Sunday, 19 April 2020 at 20:29:54 UTC, p.shkadzko wrote:
> [snip]
> Thanks. I somehow missed the whole point of "a * a.transposed" not working because "a.transposed" is not allocated.

a.transposed is just a view of the original matrix. Even when I tried to do a raw for loop, I ran into issues, because modifying the original a in any way caused all the calculations to be wrong.

Honestly, it's kind of rare that I would do an element-wise multiplication of a matrix and its transpose.
Apr 19 2020
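A tiny sketch of the "view" point: transposed shares memory with the original matrix, so any write to a is visible through the transpose as well:

import mir.ndslice;
import std.stdio;

void main()
{
    auto a = [1.0, 2, 3, 4].sliced(2, 2);
    auto t = a.transposed; // a view: same memory, swapped lengths and strides

    a[0, 1] = 100;
    writeln(t[1, 0]);      // 100, the change shows through the view
}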
On Sunday, 19 April 2020 at 21:27:43 UTC, jmh530 wrote:
> [snip]
> Honestly, it's kind of rare that I would do an element-wise multiplication of a matrix and its transpose.

It is. I was trying to calculate the covariance matrix of some dataset X, which would be XX^T.
Apr 20 2020
On Monday, 20 April 2020 at 19:06:53 UTC, p.shkadzko wrote:
> [snip]
> It is. I was trying to calculate the covariance matrix of some dataset X, which would be XX^T.

Incorrect. The covariance matrix is calculated with matrix multiplication, not element-wise multiplication. For instance, I often work with time series data that is T x N with T > N; you couldn't do that calculation with element-wise multiplication.

Try using Lubeck's covariance function, or check your results against the covariance function in other languages.
Apr 20 2020
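For reference, a hedged sketch of a covariance matrix computed with matrix multiplication rather than element-wise multiplication, under the assumption that rows are variables and columns are observations (so it matches the XX^T form above). The data values and the dependency versions are placeholders; sum comes from mir.math.sum:

/+dub.sdl:
dependency "lubeck" version="~>1.1.7"
dependency "mir-algorithm" version="~>3.7.28"
+/
import mir.ndslice;
import mir.math.sum : sum;
import lubeck : mtimes;
import std.stdio;

void main()
{
    // x: 3 variables (rows), 5 observations (columns).
    auto x = [1.0, 2.0, 3.0, 4.0, 5.0,
              2.0, 1.0, 0.5, 3.5, 2.5,
              0.0, 1.0, 1.0, 2.0, 4.0].sliced(3, 5);

    // Center each row by subtracting its mean.
    auto xc = x.slice;
    foreach (i; 0 .. xc.length!0)
    {
        auto row = xc[i];
        row[] -= row.sum / row.length;
    }

    // cov = Xc * Xc^T / (n - 1): the XX^T form, but as a matrix product.
    auto cov = xc.mtimes(xc.transposed);
    cov[] /= (xc.length!1 - 1);
    writeln(cov);
}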
On Sunday, 19 April 2020 at 20:29:54 UTC, p.shkadzko wrote:
> [snip]
> Thanks. I somehow missed the whole point of "a * a.transposed" not working because "a.transposed" is not allocated.

At the same time, the SliceKind doesn't matter for assignment operations:

auto b = a.slice;    // copy a to b
b[] *= a.transposed; // works well
Apr 19 2020
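Put together as a complete program (reusing the 3 x 3 matrix from the first post), the assignment-based approach sketched above looks like this:

import mir.ndslice;
import std.stdio;

void main()
{
    auto a = [2.1, 1.0, 3.2,
              4.5, 2.4, 3.3,
              1.5, 0.0, 2.1].sliced(3, 3);

    auto b = a.slice;    // one allocation: a copy of a
    b[] *= a.transposed; // element-wise; the SliceKind of the rhs doesn't matter
    writeln(b);
}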
On Monday, 20 April 2020 at 02:42:33 UTC, 9il wrote:
> [snip]
> auto b = a.slice;    // copy a to b
> b[] *= a.transposed; // works well

BTW, for the following operation

auto b = a * a.transposed.slice;

`b` isn't allocated either, because `*` is lazy. So, the assignment operations are preferable anyway.
Apr 19 2020
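In other words, a `*` expression between slices is only a recipe; to force it into freshly allocated memory one can call .slice on the lazy result, as in this sketch:

import mir.ndslice;
import std.stdio;

void main()
{
    auto a = [2.1, 1.0, 3.2,
              4.5, 2.4, 3.3,
              1.5, 0.0, 2.1].sliced(3, 3);

    // Lazy element-wise product: nothing is computed or allocated yet.
    auto lazyProduct = a * a.transposed.slice;

    // .slice evaluates it into a newly allocated contiguous matrix.
    auto b = lazyProduct.slice;
    writeln(b);
}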
On Monday, 20 April 2020 at 02:50:29 UTC, 9il wrote:
> [snip]
> So, the assignment operations are preferable anyway.

Interesting, thanks for the examples.
Apr 20 2020