digitalmars.D.learn - Optimizing a bigint fibonacci
- helxi (29/29) Dec 06 2017 This is question not directly related to language concepts, it's
- Jonathan M Davis (35/42) Dec 06 2017 On a complete sidenote, if you want to correctly time stuff, you should ...
- codephantom (2/5) Dec 06 2017 Compile it with ldc ;-)
- codephantom (6/12) Dec 06 2017 also, fyi....in my test, when compiling with dmd and using -m32
- Biotronic (36/42) Dec 06 2017 Here's my version:, based on fast squaring:
This is a question not directly related to language concepts; it's got more to do with the application. I would appreciate it if anyone would point out to me how I could optimize this bit of code:

    auto fib(const int n)
    {
        import std.bigint;
        if (n == 0) return BigInt(0);
        if (n == 1) return BigInt(1);
        BigInt next;
        for (BigInt i = 0, j = 1, count = 1; count < n; ++count)
        {
            next = i + j;
            i = j;
            j = next;
        }
        return next;
    }

    void main()
    {
        import std.stdio, std.datetime;
        auto t0 = Clock.currTime;
        writeln(fib(100_000));
        writeln(Clock.currTime - t0);
    }

I also noticed that if I try to compute it at compile time with enum x = fib(100000), the compiler freezes. What could cause this?
Dec 06 2017
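For a sense of why this takes as long as it does: fib(n) grows like φ^n, so fib(100_000) has on the order of twenty thousand decimal digits, and the loop performs n big-integer additions of that size, i.e. roughly O(n^2) work in the digit count. A quick cross-check of the digit count, sketched in Python rather than D since Python's built-in integers are arbitrary-precision like BigInt:

```python
import math

def fib(n):
    # Same iteration as the D version: n big-integer additions.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

n = 100_000
# Binet's formula predicts the digit count of F(n):
# digits = floor(n * log10(phi) - log10(sqrt(5))) + 1
phi = (1 + math.sqrt(5)) / 2
predicted = math.floor(n * math.log10(phi) - math.log10(math.sqrt(5))) + 1
actual = len(str(fib(n)))
print(predicted, actual)  # the two counts agree
```

Each of those additions touches every digit of the operands, which is where the quadratic cost (and the CTFE freeze at n = 100000) comes from.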
On Wednesday, December 06, 2017 09:12:08 helxi via Digitalmars-d-learn wrote:
> void main()
> {
>     import std.stdio, std.datetime;
>     auto t0 = Clock.currTime;
>     writeln(fib(100_000));
>     writeln(Clock.currTime - t0);
> }

On a complete sidenote, if you want to correctly time stuff, you should use a monotonic clock rather than the wall clock, since the wall clock can be shifted by the system, whereas a monotonic clock is guaranteed to always move forward at a fixed rate. So, you'd either want to do something like

    import std.stdio, std.datetime;
    auto t0 = MonoTime.currTime;
    writeln(fib(100_000));
    writeln(MonoTime.currTime - t0);

or

    import std.stdio, std.datetime.stopwatch;
    auto sw = StopWatch(AutoStart.yes);
    writeln(fib(100_000));
    writeln(sw.peek);

That way, you don't have to worry about the timing being wrong due to the clock being shifted. Such shifting happens infrequently enough that it won't _usually_ create a problem, but it can create a serious problem if a shift occurs and you're doing something like waiting for the clock to reach a certain point in time. It's just better to get in the habit of using the monotonic clock for timing stuff.

Note however that for the time being, if you want to use anything in std.datetime.stopwatch, you'll need to either just import it and not std.datetime, or use a selective import such as

    import std.datetime.stopwatch : StopWatch;

This is because the old versions of the benchmarking stuff like benchmark and StopWatch are in std.datetime.package and have been deprecated (they use core.time.TickDuration, which will soon be deprecated) and replaced with nearly identical versions in std.datetime.stopwatch which use core.time.MonoTime and core.time.Duration.
Until the old versions have gone through the full deprecation cycle and have been removed, they'll conflict with the new ones, so std.datetime.stopwatch is not yet publicly imported in std.datetime.package, and if you import both, then selective imports are needed in order to deal with the symbol conflict. - Jonathan M Davis
Dec 06 2017
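The wall-clock vs. monotonic distinction above is not D-specific; every mainstream runtime exposes both kinds of clock. As a minimal illustration outside of D (Python names the pair time.time and time.monotonic), a timing helper built on the monotonic clock:

```python
import time

# time.time() reads the wall clock: the system may shift it (NTP sync,
# manual adjustment), so differences can be wrong or even negative.
# time.monotonic() can only move forward, so differences are reliable.

def timed(f, *args):
    """Return (result, elapsed_seconds) measured with the monotonic clock."""
    t0 = time.monotonic()
    result = f(*args)
    elapsed = time.monotonic() - t0
    return result, elapsed

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

result, elapsed = timed(fib, 100_000)
print(elapsed)  # always non-negative, unlike a wall-clock delta
```

The helper plays the same role as StopWatch above: the caller never touches raw clock values, so there is no way to accidentally mix the two clocks.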
On Wednesday, 6 December 2017 at 09:12:08 UTC, helxi wrote:
> This is a question not directly related to language concepts; it's got more to do with the application. I would appreciate it if anyone would point out to me how I could optimize this bit of code

Compile it with ldc ;-)
Dec 06 2017
On Wednesday, 6 December 2017 at 09:59:12 UTC, codephantom wrote:
> On Wednesday, 6 December 2017 at 09:12:08 UTC, helxi wrote:
>> This is a question not directly related to language concepts; it's got more to do with the application. I would appreciate it if anyone would point out to me how I could optimize this bit of code
>
> Compile it with ldc ;-)

also, fyi....in my test, when compiling with dmd and using -m32 (instead of -m64) it timed *twice as fast* compared to the timing of the 64-bit run; whereas compiling with ldc, both the 32-bit and 64-bit runs timed at about the same speed.

lesson? don't 'just' look at how to optimise your 'code' ;-)
Dec 06 2017
On Wednesday, 6 December 2017 at 09:12:08 UTC, helxi wrote:
> This is a question not directly related to language concepts; it's got more to do with the application. I would appreciate it if anyone would point out to me how I could optimize this bit of code

Here's my version, based on fast squaring:

    auto fib(ulong n)
    {
        import std.bigint : BigInt;
        import std.meta : AliasSeq;
        import std.typecons : tuple;

        BigInt a = 0;
        BigInt b = 1;
        auto bit = 63;
        while (bit >= 0)
        {
            AliasSeq!(a, b) = tuple(
                a * (2*b - a),
                a*a + b*b);
            if (n & (BigInt(1) << bit))
            {
                AliasSeq!(a, b) = tuple(b, a+b);
            }
            --bit;
        }
        return a;
    }

    unittest
    {
        import std.stdio : writeln;
        import std.datetime : MonoTime;
        auto t0 = MonoTime.currTime;
        writeln(fib(10_000_000));
        writeln(MonoTime.currTime - t0);
    }

Takes around 600 milliseconds to compute fib(1_000_000), and 50 seconds for fib(10_000_000). Fast squaring algorithm found and described here: https://www.nayuki.io/page/fast-fibonacci-algorithms

> I also noticed that if I try to compute it at compile time with enum x = fib(100000), the compiler freezes. What could cause this?

That's because the poor compiler isn't as good at optimizing compile-time code as run-time code, and because fib(100000) is frigging ginormous.

-- Biotronic
Dec 06 2017
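The identities behind the fast-doubling step above are F(2k) = F(k)·(2·F(k+1) − F(k)) and F(2k+1) = F(k)² + F(k+1)², which is exactly what the tuple assignment computes. A sketch of the same algorithm in recursive form (Python used here for its arbitrary-precision integers, which behave like BigInt), cross-checked against the simple iteration:

```python
def fib_fast(n):
    """Fast doubling: F(n) in O(log n) big-integer multiplications."""
    def doubling(k):
        # Returns the pair (F(k), F(k+1)).
        if k == 0:
            return (0, 1)
        a, b = doubling(k >> 1)          # (F(m), F(m+1)) with m = k >> 1
        c = a * (2 * b - a)              # F(2m)
        d = a * a + b * b                # F(2m+1)
        if k & 1:
            return (d, c + d)            # odd k: shift the pair up by one
        return (c, d)
    return doubling(n)[0]

def fib_iter(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# The two agree everywhere; fast doubling just gets there in O(log n) steps.
assert all(fib_fast(i) == fib_iter(i) for i in range(200))
print(fib_fast(100))  # 354224848179261915075
```

The loop in the D version walks the bits of n from most to least significant, which is the iterative rendering of the same recursion.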
On Wednesday, 6 December 2017 at 10:00:48 UTC, Biotronic wrote:
> Here's my version, based on fast squaring:
>
>     auto fib(ulong n)
>     {
>         import std.bigint : BigInt;
>         import std.meta : AliasSeq;
>         import std.typecons : tuple;
> [...]

Nice. But why the AliasSeq?

> That's because the poor compiler isn't as good at optimizing compile-time code as run-time code,

Oh I see. We should definitely be careful with that :D
Dec 06 2017
On Wednesday, 6 December 2017 at 10:16:16 UTC, helxi wrote:
> On Wednesday, 6 December 2017 at 10:00:48 UTC, Biotronic wrote:
>>     AliasSeq!(a, b) = tuple(
>>         a * (2*b - a),
>>         a*a + b*b);
> [...]
>
> Nice. But why the AliasSeq?

Just playing around a bit. The alternative is to assign to a temporary:

    auto c = a * (2*b - a);
    b = a*a + b*b;
    a = c;

The resulting behavior is the same, and apart from the cludgy names I kinda like doing (a,b) = (c,d).

-- Biotronic
Dec 06 2017
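The point of the AliasSeq/tuple trick is simultaneous assignment: both right-hand sides are evaluated from the old a and b before either variable is updated, which is exactly what the explicit temporary achieves by hand. A small sketch (in Python, where tuple assignment is built in) showing the two forms agree:

```python
def step_tuple(a, b):
    # Both expressions read the old a and b; the assignment happens at once.
    a, b = a * (2 * b - a), a * a + b * b
    return a, b

def step_temp(a, b):
    # Equivalent with an explicit temporary, as in the D alternative above.
    c = a * (2 * b - a)
    b = a * a + b * b   # still reads the old a, since only c was updated
    a = c
    return a, b

# With (a, b) = (F(4), F(5)) = (3, 5), one step yields (F(8), F(9)) = (21, 34).
print(step_tuple(3, 5), step_temp(3, 5))  # (21, 34) (21, 34)
```

Without the temporary (or the tuple), writing `a = a * (2*b - a); b = a*a + b*b;` would let the second line see the already-updated a and silently compute the wrong pair.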