
digitalmars.D - nextPermutation and ranges

reply "bearophile" <bearophileHUGS lycos.com> writes:
Recently "quickfur" and Andrei have added C++-style functions 
(nextPermutation and nextEvenPermutation) to std.algorithm to 
perform permutations, this is a Phobos improvement and I've 
already used them few times:

https://github.com/D-Programming-Language/phobos/compare/857f1ed87593...61d26e7dcf2f

Such functions take duplicates into account (this is useful), and 
require the items to be comparable.

But in many cases I have a set of items, and I want to find (most 
or all of) their permutations lazily (even if they are not 
comparable). In many such cases I prefer a permutations 
generator that plays well with ranges:

auto result = items
               .permutations()
               .filter!pred()
               .map!foo();


I have other cases in both Python and D where having a lazy 
permutations (or combinations) generator/range is handy.

A simple version of such a range:

http://rosettacode.org/wiki/Permutations#Fast_Lazy_Version

- - - - - - - - - - - - -

A simple speed benchmark seems to show that a permutations Range 
is not bad (0.90 seconds for the range versus 2.76 seconds for 
nextPermutation to fully permute 11 integers):



import std.algorithm, std.conv, std.traits;

struct Permutations(bool doCopy=true, T) if (isMutable!T) {
     private immutable size_t num;
     private T[] items;
     private uint[31] indexes;
     private ulong tot;

     this (/*in*/ T[] items) /*pure nothrow*/
     in {
         static enum string L = text(indexes.length); // impure
         assert(items.length >= 0 && items.length <= indexes.length,
                "Permutations: items.length must be >= 0 && <= " ~ L);
     } body {
         static ulong factorial(in uint n) pure nothrow {
             ulong result = 1;
             foreach (i; 2 .. n + 1)
                 result *= i;
             return result;
         }

         this.num = items.length;
         this.items = items.dup;
         foreach (i; 0 .. cast(typeof(indexes[0]))this.num)
             this.indexes[i] = i;
         this.tot = factorial(this.num);
     }

     @property T[] front() pure nothrow {
         static if (doCopy) {
             //return items.dup; // Not nothrow.
             auto items2 = new T[items.length];
             items2[] = items;
             return items2;
         } else
             return items;
     }

     @property bool empty() const pure nothrow {
         return tot == 0;
     }

     void popFront() pure nothrow {
         tot--;
         if (tot > 0) {
             size_t j = num - 2;

             while (indexes[j] > indexes[j + 1])
                 j--;
             size_t k = num - 1;
             while (indexes[j] > indexes[k])
                 k--;
             swap(indexes[k], indexes[j]);
             swap(items[k], items[j]);

             size_t r = num - 1;
             size_t s = j + 1;
             while (r > s) {
                 swap(indexes[s], indexes[r]);
                 swap(items[s], items[r]);
                 r--;
                 s++;
             }
         }
     }
}

Permutations!(doCopy,T) permutations(bool doCopy=true, T)
                                     (T[] items)
if (isMutable!T) {
     return Permutations!(doCopy, T)(items);
}

auto perms1(T)(T[] items) {
     foreach (p; permutations!false(items)) {}
     return items;
}

auto perms2(T)(T[] items) {
     while (nextPermutation(items)) {}
     return items;
}

int main() {
     auto data = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
     perms1(data);     // 0.90 seconds
     //perms2(data);   // 2.76 seconds
     return cast(int)data.length; // length is size_t; main returns int
}


D code compiled with dmd 2.062alpha, -O -release -inline 
-noboundscheck.

- - - - - - - - - - - - -

Extra note: maybe all such functions should be moved inside a 
std.combinatorics or std.math.comb module or something similar, 
with a combinations range, a binomial coefficient function, etc.

Another handy range is one that yields size_t[2] or 
Tuple!(size_t,size_t) values that represent the swaps needed to 
compute all adjacent permutations (this code is not a range yet):

http://rosettacode.org/wiki/Permutations_by_swapping#D
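
A rough sketch of how such a generator could start out, shown here 
as an eager function that just collects the swap pairs (this uses 
Heap's algorithm rather than the Johnson-Trotter method from the 
link; permutationSwaps is a hypothetical name, not Phobos API):

import std.typecons;

// Collect the n! - 1 index pairs whose successive swaps step
// through all permutations of n items (Heap's algorithm).
Tuple!(size_t, size_t)[] permutationSwaps(in size_t n) {
     Tuple!(size_t, size_t)[] swaps;

     void recurse(in size_t k) {
         if (k < 2)
             return;
         foreach (i; 0 .. k - 1) {
             recurse(k - 1);
             // Even k: swap the i-th and last items;
             // odd k: swap the first and last items.
             swaps ~= (k % 2 == 0) ? tuple(i, k - 1)
                                   : tuple(cast(size_t)0, k - 1);
         }
         recurse(k - 1);
     }

     recurse(n);
     return swaps;
}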

Bye,
bearophile
Feb 07 2013
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Feb 07, 2013 at 07:22:10PM +0100, bearophile wrote:
 Recently "quickfur"
That's my GitHub handle.
 and Andrei have added C++-style functions (nextPermutation and
 nextEvenPermutation) to std.algorithm to perform permutations, this is
 a Phobos improvement and I've already used them few times:
[...]
 But in many cases I have a set of items, and I want to find (most or
 all of) their permutations lazily (even if they are not comparable).
 In many of such cases I prefer a permutations generator that plays
 well with ranges:
I've considered implementing this as a range before, but there are some
considerations:

1) To avoid excessive allocations, it would have to be a transient
range. Which means it may have unexpected results if you use it with an
algorithm that doesn't handle transient ranges correctly.

2) If the starting range is not sorted, then the permutation range
needs to make a copy of the original range so that it knows when all
permutations have been enumerated. But there is no generic way to make
a copy of a range (using .save on a forward range is not enough,
because nextPermutation swaps elements in-place, and .save doesn't
guarantee that the saved range isn't just aliasing the original range
contents).

If transience is acceptable, and the starting range is always sorted,
then it's almost trivial to write a wrapper range:

	auto permutations(R)(R forwardRange)
		if (isForwardRange!R)
	{
		struct Permutations
		{
			R front;
			bool empty = false;
			this(R _src) { front = _src; }
			void popFront() { empty = !nextPermutation(front); }
		}
		return Permutations(forwardRange);
	}

A similar wrapper can be made for nextEvenPermutation.

This actually brings up an interesting point: the current documentation
for SortedRange states that ranges with weaker than random access are
unable to provide interesting functionality in SortedRange, but the
above is a counterexample. :) That is, if SortedRange allowed forward
ranges, then we could make the sorted requirement explicit:

	auto permutations(R)(SortedRange!R forwardRange)
		if (isForwardRange!R)
	{
		...
	}

[...]
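
To illustrate the transience caveat in (1), here is a hypothetical
usage sketch of the wrapper above (permutations refers to the sketch,
not to a Phobos symbol):

	import std.algorithm : map;
	import std.array : array;

	void main() {
		int[] data = [1, 2, 3];   // must start sorted

		// Transient: all six entries alias the same buffer,
		// so they are useless once iteration has moved on.
		auto aliased = permutations(data.dup).array;

		// Safe: copy each front before advancing.
		auto copied = permutations(data.dup).map!(p => p.dup).array;
	}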
 A simple speed benchmark seems to show that a permutations Range is
 not bad (0.90 seconds for the range versus 2.76 seconds for
 nextPermutation to fully permute 11 integers):
 
 
 
 import std.algorithm, std.conv, std.traits;
 
 struct Permutations(bool doCopy=true, T) if (isMutable!T) {
     private immutable size_t num;
     private T[] items;
     private uint[31] indexes;
     private ulong tot;
 
     this (/*in*/ T[] items) /*pure nothrow*/
     in {
         static enum string L = text(indexes.length); // impure
         assert(items.length >= 0 && items.length <= indexes.length,
                "Permutations: items.length must be >= 0 && < " ~ L);
     } body {
         static ulong factorial(in uint n) pure nothrow {
             ulong result = 1;
             foreach (i; 2 .. n + 1)
                 result *= i;
             return result;
         }
[...] I think this is an unfair comparison: using factorial assumes
that all array elements are unique. The advantage of nextPermutation is
that it correctly handles duplicated elements, producing only distinct
permutations of them. It's no surprise that if you sacrifice handling
of duplicated elements, the code will be faster.

(Not to mention, using factorial is limited because its value grows too
quickly, and once your range is large enough, you will need to use
BigInt to deal with the factorial values without overflow. The current
implementation of nextPermutation doesn't suffer from this problem.)
 Extra note: maybe all such functions should be moved inside a
 std.combinatorics or std.math.comb module or something similar, with
 combinations range, binomial coefficient function, etc.
[...]

Someone is already working on std.combinatorics, and when that is
ready, these functions will be folded in there. I guess Andrei decided
it was better to put them into Phobos earlier rather than later. They
can be turned into aliases afterwards, once std.combinatorics is
merged, so no existing code will break.


T

-- 
The volume of a pizza of thickness a and radius z can be described by
the following formula: pi zz a. -- Wouter Verhelst
Feb 07 2013
next sibling parent reply "Peter Alexander" <peter.alexander.au gmail.com> writes:
 Someone is already working on std.combinatorics
That's me. I'm very busy atm with work, so I haven't been able to do much with it lately, but it is progressing.
 1) To avoid excessive allocations, it would have to be a 
 transient
 range. Which means it may have unexpected results if you use it 
 with an
 algorithm that doesn't handle transient ranges correctly.
This has been discussed previously. bearophile suggested a policy to control whether the buffer is permuted in-place, with it defaulting to creating duplicates. The slowdown from duplicates on DMD was ~3x.
 2) If the starting range is not sorted, then the permutation 
 range needs
 to make a copy of the original range so that it knows when all
 permutations have been enumerated.
I sort on range creation (if not already sorted).
 If transience is acceptable, and the starting range is always 
 sorted,
 then it's almost trivial to write a wrapper range:

 	auto permutations(R)(R forwardRange)
 		if (isForwardRange!R)
 	{
 		struct Permutations
 		{
 			R front;
 			bool empty = false;
 			this(R _src) { front = _src; }
 			void popFront() { empty = !nextPermutation(front); }
 		}
 		return Permutations(forwardRange);
 	}
This is missing some features:

- Bidirectionality (this complicates things).
- Length (this complicates things, because it easily overflows).
- Random-access (non-trivial, but useful)

A library solution should address all these.
Feb 07 2013
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Feb 07, 2013 at 08:40:25PM +0100, Peter Alexander wrote:
[...]
1) To avoid excessive allocations, it would have to be a transient
range. Which means it may have unexpected results if you use it with
an algorithm that doesn't handle transient ranges correctly.
This has been discussed previously. bearophile suggested a policy to control whether the buffer is permuted in-place, with it defaulting to creating duplicates. The slowdown from duplicates on DMD was ~3x.
Hmm. There's also the problem that there is no generic way to duplicate a range. Only arrays are guaranteed to support .dup and .idup. So you'd have to allocate an array to store each permutation, and return that instead of the original range. This could be a major problem (one might expect to get elements of the same type as the original range, instead of arrays!).
2) If the starting range is not sorted, then the permutation range
needs to make a copy of the original range so that it knows when all
permutations have been enumerated.
I sort on range creation (if not already sorted).
Good idea! But nevertheless the original range will be modified, so we either have to live with that, or we still need to make a copy somehow.
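
For the array case that copy-and-sort step could be as simple as the
following sketch (a fully generic version over arbitrary ranges is
exactly the missing piece discussed above; sortedCopy is a hypothetical
name):

	import std.algorithm : sort;
	import std.array : array;
	import std.range : isInputRange;

	// Copy the source so the caller's range is left untouched,
	// then sort the copy to get the lexicographically first
	// permutation as the starting point.
	auto sortedCopy(R)(R src) if (isInputRange!R) {
		auto buf = src.array;
		sort(buf);
		return buf;
	}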
If transience is acceptable, and the starting range is always sorted,
then it's almost trivial to write a wrapper range:

	auto permutations(R)(R forwardRange)
		if (isForwardRange!R)
	{
		struct Permutations
		{
			R front;
			bool empty = false;
			this(R _src) { front = _src; }
			void popFront() { empty = !nextPermutation(front); }
		}
		return Permutations(forwardRange);
	}
This is missing some features:

- Bidirectionality (this complicates things).
Well, functionality beyond input ranges can, of course, be added on
top. Bidirectionality is trivial, actually: you just reverse the
predicate to nextPermutation:

	void popBack() { empty = !nextPermutation!"b < a"(front); }
 - Length (this complicates things, because it easily overflows).
Yeah, this one suffers from the same problem as using factorial. The
number of permutations grows factorially (n!, which is O(n^n)). This
means if length is a 64-bit ulong, you will overflow once the input
range is longer than 20 elements (21! > 2^64), which is a laughably
small upper limit for generic code.
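
For illustration, a minimal sketch of an overflow-free count using
std.bigint (note that a range's built-in .length must be a size_t, so
a BigInt count would have to be exposed as a separate property):

	import std.bigint : BigInt;

	// n! without wraparound; usable beyond the 20-element
	// ceiling that a ulong imposes.
	BigInt bigFactorial(in uint n) {
		auto result = BigInt(1);
		foreach (i; 2 .. n + 1)
			result *= i;
		return result;
	}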
 - Random-access (non-trivial, but useful)
 
 A library solution should address all these.
Random-access is certainly non-trivial. In the case of the input having
all unique elements, indexing is not *too* hard (it's just the same as
using factorial to map to the permutations). But if you're going to
support non-unique elements, then you'll need to invent some other
scheme for mapping range elements to indices. (Are there research
papers on how to do this? I assume there's some kind of pattern to
it... but it may be non-trivial to implement!)


T

-- 
What do you get if you drop a piano down a mineshaft? A flat minor.
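
To make the unique-elements indexing mentioned above concrete, here is
a sketch of the factorial-based mapping (the factorial number system,
a.k.a. the Lehmer code; kthPermutation is a hypothetical name, and k
must be less than items.length!):

	import std.algorithm : remove;

	// Map index k to the k-th permutation in lexicographic
	// order, assuming all items are distinct.
	T[] kthPermutation(T)(T[] items, ulong k) {
		auto pool = items.dup;

		// fact[i] == i!
		auto fact = new ulong[pool.length + 1];
		fact[0] = 1;
		foreach (i; 1 .. pool.length + 1)
			fact[i] = fact[i - 1] * i;

		T[] result;
		foreach_reverse (i; 1 .. pool.length + 1) {
			// Next "digit" of k in the factorial number system.
			immutable digit = cast(size_t)(k / fact[i - 1]);
			k %= fact[i - 1];
			result ~= pool[digit];
			pool = pool.remove(digit);  // drop the chosen item
		}
		return result;
	}

For example, kthPermutation([1, 2, 3], 5) yields [3, 2, 1], the last
of the 3! permutations.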
Feb 07 2013
parent Dmitry Olshansky <dmitry.olsh gmail.com> writes:
08-Feb-2013 00:04, H. S. Teoh writes:
 On Thu, Feb 07, 2013 at 08:40:25PM +0100, Peter Alexander wrote:
 [...]
 1) To avoid excessive allocations, it would have to be a transient
 range. Which means it may have unexpected results if you use it with
 an algorithm that doesn't handle transient ranges correctly.
This has been discussed previously. bearophile suggested a policy to control whether the buffer is permuted in-place, with it defaulting to creating duplicates. The slowdown from duplicates on DMD was ~3x.
Hmm. There's also the problem that there is no generic way to duplicate a range. Only arrays are guaranteed to support .dup and .idup. So you'd have to allocate an array to store each permutation, and return that instead of the original range. This could be a major problem (one might expect to get elements of the same type as the original range, instead of arrays!).
I had long thought of a primitive like:

	build!Container(range);

And if Container is not important, then this:

	build(range); // okay, here build might be worse than dup

and it picks the right container inside of it based on the power of
the range. Such as:

	random access - deque (or array)
	forward       - unrolled s-list (or deque)
	bidirectional - deque

After typing this it seems like Phobos is in dire need of a deque ;)

-- 
Dmitry Olshansky
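
A rough sketch of what that build primitive could look like (all names
hypothetical; insertBack stands in for whatever insertion primitive the
chosen container exposes):

import std.array : array;
import std.range : isInputRange, ElementType;

// build!Container: copy a range into an explicitly chosen container.
// Assumes the container is default-constructible and supports
// insertBack (as std.container.Array does).
Container build(Container, R)(R range) if (isInputRange!R) {
    Container c;
    foreach (e; range)
        c.insertBack(e);
    return c;
}

// build: no container given, so fall back to a dynamic array
// (Phobos has no deque to pick yet).
ElementType!R[] build(R)(R range) if (isInputRange!R) {
    return range.array;
}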
Feb 07 2013
prev sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
H. S. Teoh:

 1) To avoid excessive allocations, it would have to be a 
 transient range.
See the doCopy boolean template argument in my code. Note that it's true by default. (D Zen: do the safe thing by default, and the fast thing on request).
 2) If the starting range is not sorted, then the permutation 
 range needs
 to make a copy of the original range so that it knows when all
 permutations have been enumerated. But there is no generic way 
 to make a
 copy of a range
Isn't it possible to call array() on the input range? (Performing array() takes a microscopic amount of time compared to computing its permutations.)
 I think this is an unfair comparison: using factorial assumes 
 that all array elements are unique.
It's a fair comparison because it tests a common usage case in my code. I'm not asking to remove nextPermutation from Phobos. I think nextPermutation is useful, but in many cases my items are unique (example: I make them unique before giving them to permutations()). (And I think it's not good to slow down this very common case just because in general items aren't unique.)
 The advantage of nextPermutation is that it
 correctly handles duplicated elements, producing only distinct
 permutations of them. It's no surprise that if you sacrifice 
 handling of
 duplicated elements, the code will be faster.
 ...
 (Not to mention, using factorial is limited because its value 
 grows too
 quickly, and once your range is large enough, you will be 
 needing to use
 BigInt to be able to deal with the factorial values without 
 overflow.
 The current implementation of nextPermutation doesn't suffer 
 from this
 problem.)
See above.

Bye,
bearophile
Feb 07 2013
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Feb 07, 2013 at 08:47:39PM +0100, bearophile wrote:
 H. S. Teoh:
 
1) To avoid excessive allocations, it would have to be a transient
range.
See the doCopy boolean template argument in my code. Note that it's true by default. (D Zen: do the safe thing by default, and the fast thing on request).
Good point.
2) If the starting range is not sorted, then the permutation range
needs to make a copy of the original range so that it knows when all
permutations have been enumerated. But there is no generic way to
make a copy of a range
Isn't it possible to call array() on the input range? (Performing array() takes a microscopic amount of time compared to computing its permutations.)
array() doesn't work on transient ranges. But I suppose it's OK to forego that, if we're going to be safe by default.
I think this is an unfair comparison: using factorial assumes that
all array elements are unique.
It's a fair comparison because it tests a common usage case in my code. I'm not asking to remove nextPermutation from Phobos. I think nextPermutation is useful, but in many cases my items are unique (example: I make them unique before giving them to permutations()). (And I think it's not good to slow down this very common case just because in general items aren't unique.)
[...]

We could make a variant of nextPermutation (or a range incarnation
thereof) that assumes uniqueness. I think that would be a great
addition to Phobos.

Nevertheless, any implementation based on factorial would have to
address the issue of how to handle the exponential growth. I think it's
unacceptable for the standard library to support permutations only up
to ranges of 20 elements or less, because 21! > 2^64.


T

-- 
"Real programmers can write assembly code in any language. :-)"
-- Larry Wall
Feb 07 2013
parent reply "bearophile" <bearophileHUGS lycos.com> writes:
H. S. Teoh:

 any implementation based on factorial would have to
 address the issue of how to handle the exponential growth. I 
 think it's unacceptable for the standard library to support
 permutations only up to ranges of 20 elements or less,
 because 21! > 2^64.
What are the use cases of generating the permutations of a set of more than about 20 items? (In my code I don't remember having to permute more than a few items.)

One thing I was forgetting: thank you for your work, H. S. Teoh.

Bye,
bearophile
Feb 07 2013
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Feb 07, 2013 at 09:24:24PM +0100, bearophile wrote:
 H. S. Teoh:
 
any implementation based on factorial would have to address the issue
of how to handle the exponential growth. I think it's unacceptable for
the standard library to support permutations only up to ranges of 20
elements or less, because 21! > 2^64.
What are the use cases of generating the permutations of a set of more than about 20 items? (In my code I don't remember having to permute more than a few items.)
Combinatorial puzzles come to mind (Rubik's cube solvers and their ilk, for example). Maybe other combinatorial problems that require some kind of exhaustive state space search. Those things easily go past 20! once you get beyond the most trivial cases.
 One thing I was forgetting: thank you for your work H. S. Teoh.
[...]

You're welcome.


T

-- 
Never ascribe to malice that which is adequately explained by
incompetence. -- Napoleon Bonaparte
Feb 07 2013
parent reply "bearophile" <bearophileHUGS lycos.com> writes:
H. S. Teoh:

 Combinatorial puzzles come to mind (Rubik's cube solvers and 
 their ilk, for example). Maybe other combinatorial problems 
 that require some kind of exhaustive state space search. Those 
 things easily go past 20! once you get beyond the most trivial 
 cases.
I know many situations/problems where you have more than 20! cases, but in those problems you don't iterate all permutations, because the program takes ages to do it. In those programs you don't use nextPermutation, you scan the search space in a different and smarter way. I don't know of any use case for permuting such large sets of items.

Bye,
bearophile
Feb 07 2013
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Feb 07, 2013 at 09:42:34PM +0100, bearophile wrote:
 H. S. Teoh:
 
Combinatorial puzzles come to mind (Rubik's cube solvers and their ilk,
for example).  Maybe other combinatorial problems that require some
kind of exhaustive state space search. Those things easily go past
20!  once you get beyond the most trivial cases.
I know many situations/problems where you have more than 20! cases, but in those problems you don't iterate all permutations, because the program takes ages to do it. In those programs you don't use nextPermutation, you scan the search space in a different and smarter way. I don't know of any use case for permuting such large sets of items.
[...]

It depends; sometimes in complex cases you have no choice but to do
exhaustive search. I agree that it's very rare, though.


T

-- 
If creativity is stifled by rigid discipline, then it is not true
creativity.
Feb 07 2013
parent reply Marco Leise <Marco.Leise gmx.de> writes:
Am Thu, 7 Feb 2013 13:52:01 -0800
schrieb "H. S. Teoh" <hsteoh quickfur.ath.cx>:

 On Thu, Feb 07, 2013 at 09:42:34PM +0100, bearophile wrote:
 H. S. Teoh:
 
Combinatorial puzzles come to mind (Rubik's cube solvers and their ilk,
for example).  Maybe other combinatorial problems that require some
kind of exhaustive state space search. Those things easily go past
20!  once you get beyond the most trivial cases.
I know many situations/problems where you have more than 20! cases, but in those problems you don't iterate all permutations, because the program takes ages to do it. In those programs you don't use nextPermutation, you scan the search space in a different and smarter way. I don't know of any use case for permuting such large sets of items.
[...] It depends; sometimes in complex cases you have no choice but to do exhaustive search. I agree that it's very rare, though. T
So right now we can handle 20! = 2,432,902,008,176,640,000
permutations. If every check took only 20 clock cycles of a 4 GHz CPU,
it would take you ~386 years to go through the list. For the average
human researcher this is plenty of time.

-- 
Marco
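
A quick sanity check of that estimate, with the numbers as given above:

	import std.stdio : writeln;

	void main() {
		enum ulong perms = 2_432_902_008_176_640_000; // 20!
		enum double secs = perms * 20.0 / 4e9;        // 20 cycles at 4 GHz
		writeln(secs / (365.25 * 24 * 3600));         // ~385.6 years
	}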
Feb 07 2013
parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Friday, 8 February 2013 at 06:59:20 UTC, Marco Leise wrote:
 Am Thu, 7 Feb 2013 13:52:01 -0800
 schrieb "H. S. Teoh" <hsteoh quickfur.ath.cx>:

 On Thu, Feb 07, 2013 at 09:42:34PM +0100, bearophile wrote:
 H. S. Teoh:
 
Combinatorial puzzles come to mind (Rubik's cube solvers and
their ilk, for example). Maybe other combinatorial problems that
require some kind of exhaustive state space search. Those things
easily go past 20! once you get beyond the most trivial cases.
I know many situations/problems where you have more than 20! cases, but in those problems you don't iterate all permutations, because the program takes ages to do it. In those programs you don't use nextPermutation, you scan the search space in a different and smarter way. I don't know of any use case for permuting such large sets of items.
[...] It depends; sometimes in complex cases you have no choice but to do exhaustive search. I agree that it's very rare, though. T
So right now we can handle 20! = 2,432,902,008,176,640,000 permutations. If every check took only 20 clock cycles of a 4 GHz CPU, it would take you ~386 years to go through the list. For the average human researcher this is plenty of time.
On a modern supercomputer this would take well under 2 months. (I calculated it as ~44 days on Minerva at Warwick, UK). 19! would take less than 3 days. In a parallel setting, such large numbers are assailable.
Feb 08 2013
parent reply "Era Scarecrow" <rtcvb32 yahoo.com> writes:
On Friday, 8 February 2013 at 12:27:50 UTC, John Colvin wrote:
 On Friday, 8 February 2013 at 06:59:20 UTC, Marco Leise wrote:
 So right now we can handle 20! = 2,432,902,008,176,640,000 
 permutations. If every check took only 20 clock cycles of a 4 
 GHz CPU, it would take you ~386 years to go through the list. 
 For the average human researcher this is plenty of time.
On a modern supercomputer this would take well under 2 months. (I calculated it as ~44 days on Minerva at Warwick, UK). 19! would take less than 3 days. In a parallel setting, such large numbers are assailable.
If we have such a large number of computations, then either cent will have to be implemented, or we use BigInt, or an internal array that handles locational information. That would raise the limit from 20 to either 255 or 65535 (assuming you EVER need that many). Of course, rummaging through the array for the next computation becomes more difficult the larger the number of elements.
Feb 08 2013
parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Friday, 8 February 2013 at 21:07:58 UTC, Era Scarecrow wrote:
 On Friday, 8 February 2013 at 12:27:50 UTC, John Colvin wrote:
 On Friday, 8 February 2013 at 06:59:20 UTC, Marco Leise wrote:
 So right now we can handle 20! = 2,432,902,008,176,640,000 
 permutations. If every check took only 20 clock cycles of a 4 
 GHz CPU, it would take you ~386 years to go through the list. 
 For the average human researcher this is plenty of time.
On a modern supercomputer this would take well under 2 months. (I calculated it as ~44 days on Minerva at Warwick, UK). 19! would take less than 3 days. In a parallel setting, such large numbers are assailable.
If we have such a large number of computations, then either cent will have to be implemented, or we use BigInt, or an internal array that handles locational information. That would raise the limit from 20 to either 255 or 65535 (assuming you EVER need that many). Of course, rummaging through the array for the next computation becomes more difficult the larger the number of elements.
Seeing as 61! is of the order of the number of atoms in the observable universe, I don't think there's much need to plan for any higher than that!
Feb 08 2013
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, Feb 09, 2013 at 01:37:34AM +0100, John Colvin wrote:
 On Friday, 8 February 2013 at 21:07:58 UTC, Era Scarecrow wrote:
On Friday, 8 February 2013 at 12:27:50 UTC, John Colvin wrote:
On Friday, 8 February 2013 at 06:59:20 UTC, Marco Leise wrote:
So right now we can handle 20! = 2,432,902,008,176,640,000
permutations. If every check took only 20 clock cycles of a 4
GHz CPU, it would take you ~386 years to go through the list.
For the average human researcher this is plenty of time.
On a modern supercomputer this would take well under 2 months. (I calculated it as ~44 days on Minerva at Warwick, UK). 19! would take less than 3 days. In a parallel setting, such large numbers are assailable.
If we have such a large number of computations, then either cent will have to be implemented, or we use BigInt, or an internal array that handles locational information. That would raise the limit from 20 to either 255 or 65535 (assuming you EVER need that many). Of course, rummaging through the array for the next computation becomes more difficult the larger the number of elements.
Seeing as 61! is of the order of the number of atoms in the observable universe, I don't think there's much need to plan for any higher than that!
That doesn't prevent mathematicians from grappling with numbers like
Graham's number (see the Wikipedia entry), the magnitude of which
exploded my perception of infinity several times over, and it's still
*finite*. ;-)

Iterating over such inconceivably huge numbers is, of course, a fool's
errand. But being able to *index* a large set of permutations is
actually valuable. In this sense, bearophile's factorial method,
suitably extended to some BigInt index, is superior, not because you
want to iterate over the entire gigantic list of possibilities, but
because using BigInt allows you to index a particular entry in that
list without having to go through them all.


T

-- 
Perhaps the most widespread illusion is that if we were in power we
would behave very differently from those who now hold it---when, in
truth, in order to get power we would have to become very much like
them. -- Unknown
Feb 08 2013
parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Saturday, 9 February 2013 at 01:07:13 UTC, H. S. Teoh wrote:
 On Sat, Feb 09, 2013 at 01:37:34AM +0100, John Colvin wrote:
 On Friday, 8 February 2013 at 21:07:58 UTC, Era Scarecrow 
 wrote:
On Friday, 8 February 2013 at 12:27:50 UTC, John Colvin wrote:
On Friday, 8 February 2013 at 06:59:20 UTC, Marco Leise 
wrote:
So right now we can handle 20! = 2,432,902,008,176,640,000
permutations. If every check took only 20 clock cycles of a 4
GHz CPU, it would take you ~386 years to go through the list.
For the average human researcher this is plenty of time.
On a modern supercomputer this would take well under 2 months. (I calculated it as ~44 days on Minerva at Warwick, UK). 19! would take less than 3 days. In a parallel setting, such large numbers are assailable.
If we have such a large number of computations, then either cent will have to be implemented, use BigInt, or an internal array that handles locational information. That would remove the limitations of 20 to either 255, or 65535 (assuming you EVER need that many). Course rummaging through the array for the next computation becomes more difficult the larger the number of elements.
Seeing as 61! is of the order of the number of atoms in the observable universe, I don't think there's much need to plan for any higher than that!
That doesn't prevent mathematicians from grappling with numbers like
Graham's number (see the Wikipedia entry), the magnitude of which
exploded my perception of infinity several times over, and it's still
*finite*. ;-)

Iterating over such inconceivably huge numbers is, of course, a fool's
errand. But being able to *index* a large set of permutations is
actually valuable. In this sense, bearophile's factorial method,
suitably extended to some BigInt index, is superior, not because you
want to iterate over the entire gigantic list of possibilities, but
because using BigInt allows you to index a particular entry in that
list without having to go through them all.

T
This is a fair point. Being able to obtain the kth permutation of a
large set would indeed be useful, even if iteration is not feasible.
For example, you might want to examine the k=2^n permutations, as you
have some a priori knowledge that they contain the solution you're
looking for.

In that case we'd want to be able to index with an effectively
arbitrary size index. I don't have any experience with BigInt but I
presume it's the correct tool for the job.
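
For what it's worth, the factorial indexing sketched earlier in the
thread seems to carry over directly: BigInt supports / and %, so only
the index and factorial types change (a sketch; lehmerDigits is a
hypothetical name, and toLong brings each digit back to a machine
integer):

	import std.bigint : BigInt;

	// The factorial-number-system digits of a BigInt index k for
	// a set of n distinct items; digit i selects from the
	// remaining pool of items.
	size_t[] lehmerDigits(BigInt k, in size_t n) {
		auto fact = new BigInt[n + 1];
		fact[0] = BigInt(1);
		foreach (i; 1 .. n + 1)
			fact[i] = fact[i - 1] * i;

		size_t[] digits;
		foreach_reverse (i; 1 .. n + 1) {
			digits ~= cast(size_t)((k / fact[i - 1]).toLong);
			k %= fact[i - 1];
		}
		return digits;
	}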
Feb 08 2013
parent "Era Scarecrow" <rtcvb32 yahoo.com> writes:
On Saturday, 9 February 2013 at 01:25:58 UTC, John Colvin wrote:
 This is a fair point. Being able to obtain the kth permutation 
 of a large set would indeed be useful, even if iteration is not 
 feasible. For example, you might want to examine the k=2^n  
 permutations, as you have some a priori knowledge that they 
 contain the solution you're looking for.

 In that case we'd want to be able to index with an effectively 
 arbitrary size index. I don't have any experience with BigInt 
 but I presume it's the correct tool for the job.
BigInt would seem the easiest to implement as things presently stand,
as it should only require you to modify the type(s) for the index
while the remainder stays the same. Using cent would require the
128-bit type to be implemented, and the array approach would take a
lot of work and maybe change the whole algorithm to compensate for it.

Regardless, I'd like to see cent implemented. Hmmm, although there's a
predetermined definition for the 128-bit type, was there one for a
256-bit type? Although that might be getting ahead of what we may need
them for.
Feb 08 2013