digitalmars.D - What complexity have a log(sum) shape?
- Andrei Alexandrescu (10/10) Dec 08 2015 I'm working on the complexity algebra and of course trying to simplify
- tn (8/17) Dec 08 2015 It seems that for a complexity such as log(n + m) to make sense,
- tn (2/3) Dec 08 2015 Should of course be "larger than n".
- Timon Gehr (6/16) Dec 08 2015 You don't need to support those terms in the internal representation,
- Andrei Alexandrescu (2/3) Dec 08 2015 Noice. Yes I did miss it. Thx!! -- Andrei
- cym13 (3/6) Dec 08 2015 Surely I'm missing something obvious but why is it true exactly?
- cym13 (3/10) Dec 08 2015 Damn, I missed Timon's post, here is the link for anybody else:
- Timon Gehr (12/18) Dec 08 2015 n+m ≤ 2·max(n,m). (1)
- Charles (3/24) Dec 08 2015 This seems reasonable, but you have undefined behavior of
- Timon Gehr (6/33) Dec 08 2015 The context is asymptotic runtime bounds. The special cases for small
- Andrei Alexandrescu (6/11) Dec 08 2015 From the cycle "Andrei's esprits d'escalier", the inequality can be
- Timon Gehr (2/13) Dec 08 2015 Which inequality?
- Charles (5/12) Dec 08 2015 Im confident it isn't. I think the rule he was thinking of was
- cym13 (4/18) Dec 08 2015 No, his demonstration stands. This isn't about log algebra, it's
- H. S. Teoh via Digitalmars-d (20/33) Dec 08 2015 There is no contradiction. Big-O notation deals with asymptotic
- Timon Gehr (5/8) Dec 08 2015 The problem is, however, that only m /or/ n could be small. Therefore,
- Algo (5/11) Dec 08 2015 Pretty sure as per property 1 (common, "weak" definition of
- Timon Gehr (15/26) Dec 09 2015 It certainly can be. Property 1 only talks about those values of the
I'm working on the complexity algebra and of course trying to simplify my life :o). One good simplification would be to get rid of log(polynomial_sum) terms such as:

log(n + m)
log(n^^3 + n1^^2 + n2)
etc.

Do any of these occur in some important algorithms? I couldn't think of any nor find any on the various complexity cheat sheets around.

Thanks,

Andrei
Dec 08 2015
On Tuesday, 8 December 2015 at 15:25:28 UTC, Andrei Alexandrescu wrote:
> I'm working on the complexity algebra and of course trying to simplify
> my life :o). One good simplification would be to get rid of
> log(polynomial_sum) terms such as:
>
> log(n + m)
> log(n^^3 + n1^^2 + n2)
> etc.
>
> Do any of these occur in some important algorithms? I couldn't think of
> any nor find any on the various complexity cheat sheets around.

It seems that for a complexity such as log(n + m) to make sense, it should be possible both that n is more than polynomially larger than m and that m is more than polynomially larger than s. Since obviously if, for instance, m < O(n^k), where k is a constant, then O(log(n + m)) = O(log n). I guess that such situations are quite rare.
Dec 08 2015
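Spelling out the "since obviously" step, assuming m is polynomially bounded in n, say m ≤ C·n^k for constants C ≥ 1 and k ≥ 1:

  log(n + m) ≤ log(n + C·n^k) ≤ log((C+1)·n^k) = log(C+1) + k·log(n) = O(log n),

and since log(n) ≤ log(n + m), indeed O(log(n + m)) = O(log n).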
On Tuesday, 8 December 2015 at 16:25:50 UTC, tn wrote:
> ... and that m is more than polynomially larger than s. ...

Should of course be "larger than n".
Dec 08 2015
On 12/08/2015 04:25 PM, Andrei Alexandrescu wrote:
> I'm working on the complexity algebra and of course trying to simplify
> my life :o). One good simplification would be to get rid of
> log(polynomial_sum) terms such as:
>
> log(n + m)
> log(n^^3 + n1^^2 + n2)
> etc.
>
> Do any of these occur in some important algorithms? I couldn't think of
> any nor find any on the various complexity cheat sheets around.
>
> Thanks,
>
> Andrei

You don't need to support those terms in the internal representation, because they don't add anything to the expressiveness of the notation:

O(log(n+m)) = O(log(n)+log(m)).
O(log(n^^3+n1^^2+n2)) = O(log(n)+log(n1)+log(n2)).

(I have already noted this in the other thread, in case you have missed it.)
Dec 08 2015
On 12/08/2015 12:12 PM, Timon Gehr wrote:
> O(log(n+m)) = O(log(n)+log(m)).

Noice. Yes I did miss it. Thx!! -- Andrei
Dec 08 2015
On Tuesday, 8 December 2015 at 17:33:49 UTC, Andrei Alexandrescu wrote:
> On 12/08/2015 12:12 PM, Timon Gehr wrote:
>> O(log(n+m)) = O(log(n)+log(m)).
>
> Noice. Yes I did miss it. Thx!! -- Andrei

Surely I'm missing something obvious but why is it true exactly?
Dec 08 2015
On Tuesday, 8 December 2015 at 20:56:28 UTC, cym13 wrote:
> On Tuesday, 8 December 2015 at 17:33:49 UTC, Andrei Alexandrescu wrote:
>> On 12/08/2015 12:12 PM, Timon Gehr wrote:
>>> O(log(n+m)) = O(log(n)+log(m)).
>>
>> Noice. Yes I did miss it. Thx!! -- Andrei
>
> Surely I'm missing something obvious but why is it true exactly?

Damn, I missed Timon's post, here is the link for anybody else:
http://forum.dlang.org/thread/n3qq6e$2bis$1@digitalmars.com?page=10#post-n42ft2%2498u%241%40digitalmars.com
Dec 08 2015
On 12/08/2015 09:56 PM, cym13 wrote:
> On Tuesday, 8 December 2015 at 17:33:49 UTC, Andrei Alexandrescu wrote:
>> On 12/08/2015 12:12 PM, Timon Gehr wrote:
>>> O(log(n+m)) = O(log(n)+log(m)).
>>
>> Noice. Yes I did miss it. Thx!! -- Andrei
>
> Surely I'm missing something obvious but why is it true exactly?

n+m ≤ 2·max(n,m). (1)
max(n,m) ≤ n+m.   (2)

Hence,

log(n+m) ≤ log(max(n,m)) + log(2)        by (1)
         = max(log(n),log(m)) + log(2)   by monotonicity
         ≤ log(n) + log(m) + log(2)      by (2),

log(n)+log(m) ≤ 2·max(log(n),log(m))     by (1)
              = 2·log(max(n,m))          by monotonicity
              ≤ 2·log(n+m)               by " and (2).

Similar arguments work for any monotone increasing function that does not grow too fast.
Dec 08 2015
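The two bounds are easy to spot-check numerically. Below is a throwaway D sketch, assuming n, m ≥ 1 so that both logarithms are nonnegative; the sample pairs include lopsided ones where one variable dwarfs the other.

import std.math : log;
import std.stdio : writeln;

void main()
{
    // Spot-check: log(n+m) <= log(n) + log(m) + log(2)
    //        and: log(n) + log(m) <= 2*log(n+m)
    foreach (pair; [[2.0, 3.0], [10.0, 1e6], [1e9, 2.0], [1e12, 1e12]])
    {
        immutable n = pair[0], m = pair[1];
        assert(log(n + m) <= log(n) + log(m) + log(2.0));
        assert(log(n) + log(m) <= 2 * log(n + m));
        writeln(n, " ", m, " ok");
    }
}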
On Tuesday, 8 December 2015 at 21:18:01 UTC, Timon Gehr wrote:
> On 12/08/2015 09:56 PM, cym13 wrote:
>> On Tuesday, 8 December 2015 at 17:33:49 UTC, Andrei Alexandrescu wrote:
>>> On 12/08/2015 12:12 PM, Timon Gehr wrote:
>>>> O(log(n+m)) = O(log(n)+log(m)).
>>>
>>> Noice. Yes I did miss it. Thx!! -- Andrei
>>
>> Surely I'm missing something obvious but why is it true exactly?
>
> n+m ≤ 2·max(n,m). (1)
> max(n,m) ≤ n+m.   (2)
>
> Hence,
>
> log(n+m) ≤ log(max(n,m)) + log(2)        by (1)
>          = max(log(n),log(m)) + log(2)   by monotonicity
>          ≤ log(n) + log(m) + log(2)      by (2),
>
> log(n)+log(m) ≤ 2·max(log(n),log(m))     by (1)
>              = 2·log(max(n,m))           by monotonicity
>              ≤ 2·log(n+m)                by " and (2).
>
> Similar arguments work for any monotone increasing function that does
> not grow too fast.

This seems reasonable, but you have undefined behavior of logarithms if n or m is zero.
Dec 08 2015
On 12/08/2015 10:35 PM, Charles wrote:
> On Tuesday, 8 December 2015 at 21:18:01 UTC, Timon Gehr wrote:
>> On 12/08/2015 09:56 PM, cym13 wrote:
>>> Surely I'm missing something obvious but why is it true exactly?
>>
>> n+m ≤ 2·max(n,m). (1)
>> max(n,m) ≤ n+m.   (2)
>>
>> Hence,
>>
>> log(n+m) ≤ log(max(n,m)) + log(2)        by (1)
>>          = max(log(n),log(m)) + log(2)   by monotonicity
>>          ≤ log(n) + log(m) + log(2)      by (2),
>>
>> log(n)+log(m) ≤ 2·max(log(n),log(m))     by (1)
>>              = 2·log(max(n,m))           by monotonicity
>>              ≤ 2·log(n+m)                by " and (2).
>>
>> Similar arguments work for any monotone increasing function that does
>> not grow too fast.
>
> This seems reasonable, but you have undefined behavior of logarithms
> if n or m is zero.

The context is asymptotic runtime bounds. The special cases for small values of continuous logarithms can just be defined away. They have no relevance for asymptotic runtime analysis (even though defining big-O adequately for multiple variables is more tricky than for a single variable).
Dec 08 2015
On 12/08/2015 04:55 PM, Timon Gehr wrote:
> The context is asymptotic runtime bounds. The special cases for small
> values of continuous logarithms can just be defined away. They have no
> relevance for asymptotic runtime analysis (even though defining big-O
> adequately for multiple variables is more tricky than for a single
> variable).

From the cycle "Andrei's esprits d'escalier", the inequality can be derived from the somewhat notorious log sum inequality if you make all b_i equal:

http://ocw.usu.edu/Electrical_and_Computer_Engineering/Information_Theory/lecture3.pdf

Andrei
Dec 08 2015
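For reference, the log sum inequality in the linked notes states that for nonnegative a_i and positive b_i,

  a_1·log(a_1/b_1) + … + a_k·log(a_k/b_k) ≥ (a_1 + … + a_k)·log((a_1 + … + a_k)/(b_1 + … + b_k)),

with the convention 0·log(0) = 0. Taking all b_i = 1 and k = 2 with a_1 = n, a_2 = m gives n·log(n) + m·log(m) ≥ (n+m)·log((n+m)/2); how this is meant to yield the identity above is not spelled out in the thread.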
On 12/08/2015 11:31 PM, Andrei Alexandrescu wrote:
> On 12/08/2015 04:55 PM, Timon Gehr wrote:
>> The context is asymptotic runtime bounds. The special cases for small
>> values of continuous logarithms can just be defined away. They have no
>> relevance for asymptotic runtime analysis (even though defining big-O
>> adequately for multiple variables is more tricky than for a single
>> variable).
>
> From the cycle "Andrei's esprits d'escalier", the inequality can be
> derived from the somewhat notorious log sum inequality if you make all
> b_i equal:
>
> http://ocw.usu.edu/Electrical_and_Computer_Engineering/Information_Theory/lecture3.pdf
>
> Andrei

Which inequality?
Dec 08 2015
On Tuesday, 8 December 2015 at 20:56:28 UTC, cym13 wrote:
> On Tuesday, 8 December 2015 at 17:33:49 UTC, Andrei Alexandrescu wrote:
>> On 12/08/2015 12:12 PM, Timon Gehr wrote:
>>> O(log(n+m)) = O(log(n)+log(m)).
>>
>> Noice. Yes I did miss it. Thx!! -- Andrei
>
> Surely I'm missing something obvious but why is it true exactly?

I'm confident it isn't. I think the rule he was thinking of was O(log(ab)) = O(log(a)+log(b)), which is just a basic property of logarithms. It's pretty easy to get to a contradiction between those two rules.
Dec 08 2015
On Tuesday, 8 December 2015 at 21:30:09 UTC, Charles wrote:
> On Tuesday, 8 December 2015 at 20:56:28 UTC, cym13 wrote:
>> Surely I'm missing something obvious but why is it true exactly?
>
> I'm confident it isn't. I think the rule he was thinking of was
> O(log(ab)) = O(log(a)+log(b)), which is just a basic property of
> logarithms. It's pretty easy to get to a contradiction between those
> two rules.

No, his demonstration stands. This isn't about log algebra, it's about asymptotic notation. I missed the fact that O(a+b) = O(max(a, b)).
Dec 08 2015
On Tue, Dec 08, 2015 at 09:30:09PM +0000, Charles via Digitalmars-d wrote:
> On Tuesday, 8 December 2015 at 20:56:28 UTC, cym13 wrote:
>> Surely I'm missing something obvious but why is it true exactly?
>
> I'm confident it isn't. I think the rule he was thinking of was
> O(log(ab)) = O(log(a)+log(b)), which is just a basic property of
> logarithms. It's pretty easy to get to a contradiction between those
> two rules.

There is no contradiction. Big-O notation deals with asymptotic behaviour as the variables approach infinity, so behaviour at 'small' values of m and n is irrelevant. The idea is that if m grows fundamentally faster than n, then for sufficiently large values of m and n, the actual value will be dominated by m, and the contribution from n will be negligible. The same argument holds for the converse. Hence, O(n+m) = O(max(n,m)).

Now, for O(log(n+m)), at sufficiently large values of n and m what dominates the argument to log will be max(n,m), so O(log(n+m)) = O(log(max(n,m))). But we've already established that O(f(x)+g(x)) = O(max(f(x),g(x))), so we note that O(log(max(n,m))) = O(max(log(n),log(m))) = O(log(n)+log(m)).

Note that the last step only works for slow-growing functions like log, because log(x+y) < log(x) + log(y). This is no longer true for functions that grow too fast, e.g., exp(x+y) < exp(x) + exp(y) is false, so big-O expressions involving exp cannot be simplified in this way.

T

--
We've all heard that a million monkeys banging on a million typewriters will eventually reproduce the entire works of Shakespeare. Now, thanks to the Internet, we know this is not true. -- Robert Wilensky
Dec 08 2015
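To make the exp remark concrete: exp(x+y) = exp(x)·exp(y), and for x = y the ratio exp(x)·exp(y) / (exp(x) + exp(y)) = exp(x)/2 grows without bound; so exp(x+y) eventually exceeds any constant multiple of exp(x) + exp(y), and O(exp(x+y)) ≠ O(exp(x) + exp(y)).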
On 12/09/2015 02:02 AM, H. S. Teoh via Digitalmars-d wrote:
> Big-O notation deals with asymptotic behaviour as the variables
> approach infinity, so behaviour at 'small' values of m and n is
> irrelevant.

The problem is, however, that only m /or/ n could be small. Therefore, formalizing this intuition is somewhat more delicate than one would like it to be. E.g. see http://people.cis.ksu.edu/~rhowell/asymptotic.pdf
Dec 08 2015
On Wednesday, 9 December 2015 at 01:40:12 UTC, Timon Gehr wrote:
> On 12/09/2015 02:02 AM, H. S. Teoh via Digitalmars-d wrote:
>> Big-O notation deals with asymptotic behaviour as the variables
>> approach infinity, so behaviour at 'small' values of m and n is
>> irrelevant.
>
> The problem is, however, that only m /or/ n could be small.

Pretty sure as per property 1 (the common, "weak" definition of a multivariate O class) no variable can be "small": for all n >= n_thresh and m >= m_thresh the inequality must hold, and that's only the weakest property.
Dec 08 2015
On 12/09/2015 03:06 AM, Algo wrote:
> On Wednesday, 9 December 2015 at 01:40:12 UTC, Timon Gehr wrote:
>> On 12/09/2015 02:02 AM, H. S. Teoh via Digitalmars-d wrote:
>>> Big-O notation deals with asymptotic behaviour as the variables
>>> approach infinity, so behaviour at 'small' values of m and n is
>>> irrelevant.
>>
>> The problem is, however, that only m /or/ n could be small.
>
> Pretty sure as per property 1 (the common, "weak" definition of a
> multivariate O class) no variable can be "small".

It certainly can be. Property 1 only talks about those values of the function where this is not the case though.

> For all n >= n_thresh and m >= m_thresh the inequality must hold, and
> that's only the weakest property.

Exactly, it is a rather weak property. Other properties can (and should) also constrain functions in O(f(n,m)) on those values where one of the two variables is small. You could have a function like:

void bar(int n) Time(BigTheta("2^n"));

void foo(int n, int m)
{
    if (n < 10) return bar(m);
    if (m < 10) return bar(n);
    return;
}

If only property 1 is required, one can derive that foo runs in O(1), which is inadequate.
Dec 09 2015
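Timon's example can be made runnable. In the sketch below, the Time(BigTheta("2^n")) annotation (hypothetical notation from the complexity-algebra discussion, not an existing D feature) is replaced by a loop that simulates a Theta(2^n) running time by counting steps:

import std.stdio : writeln;

// Stand-in for `void bar(int n) Time(BigTheta("2^n"))`: simulate a
// Theta(2^n) running time by counting 2^n steps.
ulong bar(int n)
{
    ulong steps = 0;
    foreach (_; 0 .. 1UL << n)
        ++steps;
    return steps;
}

ulong foo(int n, int m)
{
    if (n < 10) return bar(m); // small n: cost grows as 2^m
    if (m < 10) return bar(n); // small m: cost grows as 2^n
    return 0;                  // both at least 10: constant time
}

void main()
{
    // Property 1 only constrains the region n >= 10 && m >= 10, where foo
    // is constant; yet foo(5, 24) already performs about 16 million steps.
    writeln(foo(5, 24));
}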