digitalmars.D - Some lazy code to D
Once in a while I show some comparisons of code translated into different languages. A challenge from Reddit-Dailyprogrammer:
http://www.reddit.com/r/dailyprogrammer/comments/ywm28/8272012_challenge_92_difficult_bags_and_balls/

The purpose of this post of mine is to compare a nice-looking Haskell solution of this problem to its Python and D translations. This means this post is about language design matters and not about problem solving :-)

(I have used a little program from that Reddit thread because this allows me to show complete runnable programs in this post. Usually you can't do this with real-world code. But I can attest that the laziness and the other things shown in this program and this post are present and used/useful in much larger programs too.)

------------------------------

The little problem: compute all the permutations of placing 9 balls into 4 bags such that each bag contains an odd number of balls. Ball counts are transitive when bags are placed within one another. For example, if a bag containing 3 balls is placed inside a bag containing 2 balls, the inner bag contains 3 balls and the outer bag contains 5 balls. Some example permutations:

((((9))))
(8(((1))))
(1)(1)((7))

------------------------------

The Haskell solution written by Ledrug, a little reformatted:


bags :: Int -> Int -> Int -> Int -> [String]
bags 0 _ 0 _ = [""]
bags b c n m
    | b <= 0 || n <= 0 || c <= 0 || m <= 0 = []
    | otherwise = [l ++ r | n1 <- [1 .. m],
                            b1 <- if n1 == m then [1 .. c] else [1 .. b],
                            l <- bag b1 n1,
                            r <- bags (b - b1) b1 (n - n1) n1]

bag :: Int -> Int -> [String]
bag 0 0 = [""]
bag b n
    | b <= 0 || n <= 0 = []
    | even n = []
    | otherwise = ["(" ++ replicate n1 '*' ++ chain ++ ")" |
                   n1 <- [0 .. n],
                   chain <- bags (b - 1) (b - 1) (n - n1) (n - n1)]

pick :: Int -> Int -> [String]
pick b n = bags b b n n

main = do
    mapM_ putStrLn $ (pick 4 9)


------------------------------

A direct translation of the Haskell code to Python2 comes out very similar, it's quite lazy, and it's short enough:


def pick(nbags, nballs):
    def bag(b, n):
        if b == 0 and n == 0:
            return [""]
        if b <= 0 or n <= 0 or n % 2 == 0:
            return []
        return ("(" + '*' * n1 + chain + ")"
                for n1 in xrange(n + 1)
                for chain in bags(b - 1, b - 1, n - n1, n - n1))

    def bags(b, c, n, m):
        if b == 0 and n == 0:
            return [""]
        if b <= 0 or n <= 0 or c <= 0 or m <= 0:
            return []
        return (l + r
                for n1 in xrange(1, m + 1)
                for b1 in (xrange(1, c + 1) if n1 == m else xrange(1, b + 1))
                for l in bag(b1, n1)
                for r in bags(b - b1, b1, n - n1, n1))

    return bags(nbags, nbags, nballs, nballs)

def main():
    for sol in pick(4, 9):
        print sol

main()


I have used inner functions because here they keep the global namespace cleaner. The user needs to see just the pick() function.

------------------------------

This is a direct translation of the Python2 code to D (but it uses eager arrays, which are currently more idiomatic in D):


import std.stdio, std.array, std.range;

string[] pick(in int nbags, in int nballs) /*pure nothrow*/ {
    static struct Namespace {
        static string[] bag(in int b, in int n) /*pure nothrow*/ {
            if (b == 0 && n == 0)
                return [""];
            if (b <= 0 || n <= 0 || n % 2 == 0)
                return [];
            typeof(return) result;
            foreach (n1; 0 .. n + 1)
                foreach (chain; bags(b - 1, b - 1, n - n1, n - n1))
                    result ~= "(" ~ std.array.replicate("*", n1) ~ chain ~ ")";
            return result;
        }

        static string[] bags(in int b, in int c, in int n, in int m) /*pure nothrow*/ {
            if (b == 0 && n == 0)
                return [""];
            if (b <= 0 || n <= 0 || c <= 0 || m <= 0)
                return [];
            typeof(return) result;
            foreach (n1; 1 .. m + 1) // iota is not pure, nor nothrow
                foreach (b1; (n1 == m) ? iota(1, c + 1) : iota(1, b + 1))
                    foreach (l; bag(b1, n1))
                        foreach (r; bags(b - b1, b1, n - n1, n1))
                            result ~= l ~ r;
            return result;
        }
    }

    return Namespace.bags(nbags, nbags, nballs, nballs);
}

void main() {
    writefln("%-(%s\n%)", pick(4, 9));
}


In this D code there are a few temporary problems:
- iota is not pure nor nothrow, so none of the functions can be pure nothrow. This Phobos problem will be fixed.
- I have had to fully qualify the path of std.array.replicate to avoid a name clash. Maybe this little problem will be solved.

There are some small problems:
- I have used a static inner struct Namespace (Don's idea) to allow nested mutual recursion. It's not very nice, but for me it's acceptable, also because mutual recursion among inner functions is not so common.
- "*".replicate(n1) is less nice and less short than the Python syntax "*" * n1, but it's not a big problem.
- The "%-(%s\n%)" formatting string is complex and its syntax isn't so easy to remember; on the other hand here it has avoided one foreach. So I think it's acceptable.
- Currently the DMD front end doesn't contain logic that specifically optimizes std.range.iota. Maybe in some years this will improve. In the meantime this is not a big problem.

A problem with inner functions is that in both D and Python they become hard(er) to unittest. In D I'd like this to be valid code:

void main() {
    int foo() { return 1; }
    unittest {
        assert(foo() == 1);
    }
}

There is a significant problem:
- This program generates a small sequence of solutions, so laziness is not important here. But sometimes the algorithms have to generate tons of things, and then laziness becomes useful. This D program is not lazy, unlike the Haskell and Python programs. Writing this program lazily is not too hard, but the Python designers did the right thing adding yield to their language, because it's a construct that is very often useful.

------------------------------

This is a lazy translation in D, using opApply:


import std.stdio, std.array, std.range;

auto pick(in int nbags, in int nballs) /*pure nothrow*/ {
    static struct Namespace {
        static struct Bag {
            int b, n;

            int opApply(int delegate(ref string) dg) {
                int result;
                string aux;
                if (b == 0 && n == 0) {
                    aux = "";
                    result = dg(aux);
                    if (result)
                        goto END;
                }
                if (b <= 0 || n <= 0 || n % 2 == 0)
                    goto END;
                foreach (n1; 0 .. n + 1)
                    foreach (chain; Bags(b - 1, b - 1, n - n1, n - n1)) {
                        aux = "(" ~ std.array.replicate("*", n1) ~ chain ~ ")";
                        result = dg(aux);
                        if (result)
                            goto END;
                    }
                END:
                return result;
            }
        }

        static struct Bags {
            int b, c, n, m;

            int opApply(int delegate(ref string) dg) {
                int result;
                string aux;
                if (b == 0 && n == 0) {
                    aux = "";
                    result = dg(aux);
                    if (result)
                        goto END;
                }
                if (b <= 0 || n <= 0 || c <= 0 || m <= 0)
                    goto END;
                foreach (n1; 1 .. m + 1)
                    foreach (b1; (n1 == m) ? iota(1, c + 1) : iota(1, b + 1))
                        foreach (l; Bag(b1, n1))
                            foreach (r; Bags(b - b1, b1, n - n1, n1)) {
                                aux = l ~ r;
                                result = dg(aux);
                                if (result)
                                    goto END;
                            }
                END:
                return result;
            }
        }
    }

    return Namespace.Bags(nbags, nbags, nballs, nballs);
}

void main() {
    foreach (sol; pick(4, 9))
        writeln(sol);
}


In main() I have had to use a foreach because currently writeln() is not able to print all the items of an iterable implemented with opApply. I hope this will be fixed (http://d.puremagic.com/issues/show_bug.cgi?id=4264).

Both Walter and Andrei don't like opApply a lot, and I understand why (for uniformity, language simplicity, and the higher flexibility of ranges).
On the other hand, if you try to translate the Haskell code to D using ranges, it becomes long.

------------------------------

You can object that this is not idiomatic D code, because I was translating (beautiful, short, readable, safe) Haskell/Python code. My answer is that this kind of lazy code is so commonly useful that I'd like it to be better supported in D :-)

Bye, bearophile
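P.S. A small usage sketch, only as an illustration (it is not part of the programs above): with the lazy opApply version, a variant of main() can stop after the first few solutions without ever building the whole string[] that the eager version returns.

import std.stdio;

// Assumes the opApply-based pick() shown above is in scope.
void main() {
    int count = 0;
    foreach (sol; pick(4, 9)) { // iteration is driven by Bags.opApply
        writeln(sol);
        if (++count == 5)
            break; // the foreach-body delegate returns nonzero, so the nested
                   // opApply calls stop generating further solutions
    }
}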
Aug 28 2012
Did you time the runs?

Non-lazy D version, compiled as -O -inline -release, and ran with pick(6, 11):

real    0m8.587s
user    0m8.497s
sys     0m0.012s

Lazy D version, compiled as -O -inline -release, and ran with pick(6, 11):

real    0m4.195s
user    0m4.168s
sys     0m0.008s

Haskell version, compiled as -O2, and ran with pick(6, 11):

real    0m0.159s
user    0m0.116s
sys     0m0.028s
Aug 28 2012
Caligo:

> Did you time the runs?

I didn't time them. My timings (compiling with GHC -O3) are similar to yours. If you want a better comparison, this Haskell code is closer to the D/Python versions (the run-time is similar, maybe a bit faster):


pick :: Int -> Int -> [String]
pick nbags nballs = bags nbags nbags nballs nballs
  where
    bag :: Int -> Int -> [String]
    bag b n
        | b == 0 && n == 0 = [""]
        | b <= 0 || n <= 0 || even n = []
        | otherwise = ["(" ++ replicate n1 '*' ++ chain ++ ")" |
                       n1 <- [0 .. n],
                       chain <- bags (b - 1) (b - 1) (n - n1) (n - n1)]

    bags :: Int -> Int -> Int -> Int -> [String]
    bags b c n m
        | b == 0 && n == 0 = [""]
        | b <= 0 || n <= 0 || c <= 0 || m <= 0 = []
        | otherwise = [l ++ r | n1 <- [1 .. m],
                                b1 <- if n1 == m then [1 .. c] else [1 .. b],
                                l <- bag b1 n1,
                                r <- bags (b - b1) b1 (n - n1) n1]

main = do
    mapM_ putStrLn $ (pick 5 10)


> Lazy D version, compiled as -O -inline -release, and ran with pick(6, 11):
> real    0m4.195s
> ...
> Haskell version, compiled as -O2, and ran with pick(6, 11):
> real    0m0.159s

I don't exactly know where the difference comes from, but the GHC Haskell compiler is able to digest (deforestation, etc.) laziness very well.

In the eager D version, if I introduce memoization:


import std.stdio, std.array, std.range, std.functional;

string[] pick(in int nbags, in int nballs) /*pure nothrow*/ {
    static struct Namespace {
        static string[] bag(in int b, in int n) /*pure nothrow*/ {
            if (b == 0 && n == 0)
                return [""];
            if (b <= 0 || n <= 0 || n % 2 == 0)
                return [];
            typeof(return) result;
            foreach (n1; 0 .. n + 1)
                foreach (chain; mbags(b - 1, b - 1, n - n1, n - n1))
                    result ~= "(" ~ std.array.replicate("*", n1) ~ chain ~ ")";
            return result;
        }

        static string[] bags(in int b, in int c, in int n, in int m) /*pure nothrow*/ {
            if (b == 0 && n == 0)
                return [""];
            if (b <= 0 || n <= 0 || c <= 0 || m <= 0)
                return [];
            typeof(return) result;
            foreach (n1; 1 .. m + 1) // iota is not pure, nor nothrow
                foreach (b1; (n1 == m) ? iota(1, c + 1) : iota(1, b + 1))
                    foreach (l; mbag(b1, n1))
                        foreach (r; mbags(b - b1, b1, n - n1, n1))
                            result ~= l ~ r;
            return result;
        }

        alias memoize!bags mbags;
        alias memoize!bag mbag;
    }

    return Namespace.mbags(nbags, nbags, nballs, nballs);
}

void main() {
    foreach (sol; pick(8, 13))
        writeln(sol);
}


It runs the (8, 13) case (40_489 solutions) in less than half a second, about eleven times faster than the Haskell version. I think the Haskell run-time is re-using some thunks of previous lazy computations, so I think Haskell is doing a kind of automatic partial memoization.

Bye, bearophile
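P.S. For readers who haven't used std.functional.memoize, a minimal, self-contained sketch of what the two aliases above do. The fib function is just a stand-in example, not part of the program above; the point is that results are cached per argument tuple, so the same call is never recomputed:

import std.stdio, std.functional;

ulong fib(uint n) {
    // Recursive calls go through the memoized alias, so each n is computed once.
    alias memoize!fib mfib;
    return n < 2 ? n : mfib(n - 1) + mfib(n - 2);
}

void main() {
    writeln(fib(50)); // runs in roughly linear time thanks to the cache
}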
Aug 29 2012
On Wed, 29 Aug 2012 13:56:12 +0200, "bearophile" <bearophileHUGS lycos.com> wrote:

> It runs the (8, 13) case (40_489 solutions) in less than half a second, about eleven times faster than the Haskell version. I think the Haskell run-time is re-using some thunks of previous lazy computations, so I think Haskell is doing a kind of automatic partial memoization.
>
> Bye, bearophile

My experience with another (non-professional) Haskell programmer has been that my first attempts in D are sometimes slower than the Haskell version. He could also show me that there is some low-hanging fruit when it comes to optimization in Haskell that can literally double the speed. I don't know what it was, though - some sort of storage class flag in any case.

In D the optimization possibilities are almost endless, but they also require deeper knowledge of the machine and its caches. I guess the point I want to make is that Haskell isn't half bad with the optimizing compiler. The native efficiency is comparable to D until you really start to optimize. And this thread shows that idiomatic D code can even fall behind when you don't pay attention. Maybe if I wasn't such a control junkie I would use it, too. :D

-- 
Marco
Aug 29 2012
Marco Leise:

> Haskell isn't half bad with the optimizing compiler. The native efficiency is comparable to D until you really start to optimize.

Lots of people say similar things, but saying them again and again doesn't make them true. So far I have seen no evidence of this. The few times I've seen it be true, it was thanks to the well written GNU Multiple Precision Arithmetic Library used by GHC, which isn't written in Haskell :-)

But the purpose of this thread was _not_ run-time performance. In the original post I made no attempt to write efficient D code, and I have said that the D code shown here is not idiomatic.

Bye, bearophile
Aug 29 2012
On Tuesday, 28 August 2012 at 22:53:46 UTC, bearophile wrote:

> The purpose of this post of mine is to compare a nice looking Haskell solution of this problem to its Python/D translation. This means this post is about language design matters and it's not about problem solving :-)
>
> ...
>
> There is a significant problem:
> - This program generates a small sequence of solutions, so laziness is not important here. But sometimes the algorithms have to generate tons of things, and then laziness becomes useful. This D program is not lazy, unlike the Haskell and Python programs. Writing this program lazily is not too hard, but the Python designers did the right thing adding yield to their language, because it's a construct that is very often useful.

In general, it is not possible to have both full expressiveness and full performance. D gains most of its performance by early binding as much as possible. You can delay binding by using the InputRange!T interfaces in D to get a similar level of expressiveness as in Haskell or Python, but you lose some performance.

Yield is interesting, a nice way of turning an algorithm into a data structure. However, I imagine it would be very tricky to implement in D, with various lifetime issues, although maybe I'm wrong. What I do know is that it is far too big a feature to be adding at this stage in D2.
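As a concrete (if toy) sketch of that late-binding style: the oddsUpTo function below is just a placeholder, unrelated to the bags program. The concrete range type is hidden behind the class-based InputRange!int interface from std.range, at the cost of a virtual call per element:

import std.stdio, std.range, std.algorithm;

// The concrete type of iota(...).filter!(...) is erased behind the interface,
// so callers only see virtual empty/front/popFront.
InputRange!int oddsUpTo(int n) {
    return inputRangeObject(iota(1, n + 1).filter!(x => x % 2 == 1));
}

void main() {
    foreach (x; oddsUpTo(9))
        write(x, " "); // prints: 1 3 5 7 9
    writeln();
}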
Aug 29 2012
Peter Alexander:

> In general, it is not possible to have both full expressiveness and full performance.

The ShedSkin compiler has shown me that in some important cases this is not true. But even if in this case you are right, I am OK with that, because I write both performance-critical D code and D code where high performance is not needed. A feature usable only in the second kind of functions is acceptable to me :-)

> What I do know is that it is far too big a feature to be adding at this stage in D2.

Now maybe it's not the right moment to add important features to D, for various reasons. But maybe 2-3 years from now the situation will be different.

Bye, bearophile
Aug 29 2012