digitalmars.D - Issues with array expressions
- Norbert Nemec (Jun 26 2010)
- Robert Jacques (Jun 26 2010)
- Andrei Alexandrescu (Jun 26 2010)
Hi there,

I just encountered a few issues with array expressions (2.047).

* Implicit type conversion does not work. On the snippet

-------------------------
auto a = [1,2,3];
auto b = [2.5,3.5,4.5];
auto d = new real[2];
d[] = a[] + b[];
-------------------------

the compiler first complains that int[] and double[] do not mix. When the integers are replaced by floating-point literals, the compiler still complains that double[] cannot be converted to real[]. Writing a loop, which is supposed to be equivalent to the expression, works fine. Is this a known bug?

* According to TDPL, the length of the RHS arrays may be larger than the LHS array. Indeed, the following code compiles and runs:

-------------------------
auto a = [1,2,3];
auto b = [2,3,4];
auto d = new int[2];
d[] = a[] + b[];
-------------------------

I have the feeling that this should be explicitly restricted by the language: both sides of the expression should be required to have the same length. At the moment, the code already needs a run-time check to make sure that the RHS is not shorter than the LHS; changing this check to enforce equal lengths would not cost anything more. Using array expressions every day in Python/NumPy, the exact shape checking has helped me find many bugs.

What do others think?

Greetings,
Norbert
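[For comparison, here is the loop version of the first snippet written out. With per-element assignment, both implicit conversions (int to real, double to real) are accepted, which is exactly the asymmetry the post complains about. A minimal sketch, not taken from the original post:]

```d
import std.stdio;

void main()
{
    auto a = [1, 2, 3];        // int[]
    auto b = [2.5, 3.5, 4.5];  // double[]
    auto d = new real[3];

    // Element-wise: each int and each double implicitly converts
    // to real, so this compiles where d[] = a[] + b[] is rejected.
    foreach (i; 0 .. d.length)
        d[i] = a[i] + b[i];

    writeln(d); // [3.5, 5.5, 7.5]
}
```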
Jun 26 2010
On Sat, 26 Jun 2010 08:47:04 -0400, Norbert Nemec <Norbert nemec-online.de> wrote:

> * Implicit type conversion does not work. [...] the compiler first
> complains that int[] and double[] do not mix. When replacing the
> integers by floating points, the compiler still complains that
> double[] cannot be converted to real[]. Writing a loop, which is
> supposed to be equivalent to the expression, everything is fine.
> Is this a known bug?

I have the vague memory that this used to cause segfaults or ICEs. A lot of those bugs have been fixed. Long term, Don has been talking about generalizing/fixing array ops to allow function calls, etc. (This may be based on bug 3760: http://d.puremagic.com/issues/show_bug.cgi?id=3760.) Essentially,

-------------------------
a[] = b[] + sin(c[]);
-------------------------

would be lowered into (a.k.a. syntactic sugar for)

-------------------------
foreach (i, ref __a; a)
    __a = b[i] + sin(c[i]);
-------------------------

This would also solve the implicit conversion issues.

> * According to TDPL, the length of the rhs arrays may be larger than
> the lhs array. [...] Both sides of the expression should be demanded
> to have the same length. [...] What do others think?

I agree. This is bug 2547: "Array Ops should check length, at least when bounds checking is on" (http://d.puremagic.com/issues/show_bug.cgi?id=2547).
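[A sketch of that lowering written out by hand; the `__a` name just mirrors the compiler-temporary style in the post, and this is not the actual compiler rewrite:]

```d
import std.math : sin;

void main()
{
    auto b = [0.0, 0.5, 1.0];
    auto c = [0.0, 0.5, 1.0];
    auto a = new double[3];

    // a[] = b[] + sin(c[]); would desugar to roughly:
    foreach (i, ref __a; a)
        __a = b[i] + sin(c[i]);

    assert(a[0] == 0.0); // 0.0 + sin(0.0)
}
```

Because the loop applies `sin` per element, array ops defined this way would accept any callable on the element type, not just the built-in operators.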
Jun 26 2010
On 06/26/2010 11:19 AM, Robert Jacques wrote:

> I agree. This is bug 2547: "Array Ops should check length, at least
> when bounds checking is on"
> (http://d.puremagic.com/issues/show_bug.cgi?id=2547)

I think they should always check array lengths. Checks should be elided only when there is a gain from doing so; one check prior to one array operation will not impact performance for any array longer than a couple of elements.

Andrei
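[The check Andrei argues for amounts to a single O(1) comparison in front of the O(n) loop. A hand-rolled sketch; `checkedAdd` is a hypothetical helper, not a druntime function:]

```d
import core.exception : RangeError;

// Hypothetical helper: element-wise add with a mandatory length check.
void checkedAdd(int[] dst, const(int)[] lhs, const(int)[] rhs)
{
    // One comparison before the loop; negligible cost for all but
    // the shortest arrays, and it catches shape bugs immediately.
    if (dst.length != lhs.length || lhs.length != rhs.length)
        throw new RangeError();

    foreach (i; 0 .. dst.length)
        dst[i] = lhs[i] + rhs[i];
}
```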
Jun 26 2010