digitalmars.D - Exponential operator
- Lars T. Kyllingstad (17/17) Aug 07 2009 In the 'proposed syntax change' thread, Don mentioned that an
- Bill Baxter (26/41) Aug 07 2009 ion
- Don (20/66) Aug 07 2009 Yes, it's powers of 2 and 3 that are 90% of the use cases.
- bearophile (7/8) Aug 07 2009 I want to add to two small things to that post of mine:
- Jimbob (7/15) Aug 07 2009 It wont be on x86. Multiplication has a latency of around 4 cycles wheth...
- Andrei Alexandrescu (4/19) Aug 07 2009 Yeah, but what's the throughput? With multiple ALUs you can get several
- Jimbob (7/25) Aug 07 2009 In this case you incur the latency of every mul because each one needs t...
- Andrei Alexandrescu (4/29) Aug 07 2009 Oh, you're right. At least if there were four multiplies in there, I
- BCS (4/27) Aug 07 2009 For constant integer exponents the compiler should be able to choose bet...
- bearophile (6/16) Aug 07 2009 I don't understand what you mean.
- Jimbob (3/19) Aug 07 2009 Oops, my brain didnt parse what you meant by "simple number".
- Moritz Warning (4/8) Aug 07 2009 [..]
- Bill Baxter (3/11) Aug 07 2009 Multiplying by a dereferenced pointer.
- Miles (18/20) Aug 07 2009 I think that a ** b can be used, is not ambiguous except for the
- Michel Fortin (7/19) Aug 07 2009 But to be coherent with a++ which does a+1, shouldn't a** mean a to the
- Simen Kjaeraas (6/18) Aug 08 2009 No. As we can see, ++ is the concatenation of two addition operators, so
- Don (7/35) Aug 10 2009 That doesn't work, because you still get new code being converted from
- grauzone (3/5) Aug 10 2009 int* a, b;
- Don (2/9) Aug 10 2009 Touche. C declaration syntax is dreadful.
- Daniel Keep (4/11) Aug 10 2009 Since that changes the type of b, it's at least likely to give you a
- Lars T. Kyllingstad (17/55) Aug 10 2009 I've been translating a lot of FORTRAN code to D lately, and it's
- Miles (13/17) Aug 10 2009 There are too many languages that support ** as an exponentiation
- Don (13/33) Aug 11 2009 Not at all! I'm attacking the fallacy that "** must be a good choice
- Zhenyu Zhou (4/6) Aug 11 2009 std.math.pow only support floating point number
- Michel Fortin (8/26) Aug 11 2009 HyperTalk used to have <> for inequality, but ≠ worked too. You could
- Michel Fortin (11/18) Aug 07 2009 I always wondered why there isn't an XOR logical operator.
- Daniel Keep (8/25) Aug 07 2009 a | b | a ^^ b | a != b
- Michel Fortin (14/30) Aug 07 2009 For this table to work, a and b need to be boolean values. With && and
- language_fan (9/20) Aug 07 2009 A lot of other built-in operators are missing. I'd like to add opGcd()
- Bill Baxter (4/18) Aug 07 2009 Ha ha. Funny jokes. Can we call it opDarwin instead of opFish? That
- Tim Matthews (10/21) Aug 25 2009 I prefer the *^ syntax because:
In the 'proposed syntax change' thread, Don mentioned that an exponentiation operator is sorely missing from D. I couldn't agree more.

Daniel Keep has proposed the syntax

    a*^b

while my suggestion was

    a^^b

Neither of the natural candidates, a^b and a**b, is an option, as they are, respectively, already taken and ambiguous.

"Why do we need this?" you say. "Isn't pow(a,b) good enough?" And yes, pow(a,b) is just as good as mul(a,b) or div(a,b), but we don't use those, do we? Exponentiation is a very common mathematical operation that deserves its own symbol. Besides, bearophile has pointed out several optimisations that the compiler can/must perform on exponential expressions. He also proposed that the overload be called opPower.

What do you think?

-Lars
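For illustration, here is roughly what such an overload could look like for a user-defined type, written as a plain method call since neither the ^^ token nor an opPow/opPower hook exists yet -- the names below are simply the ones being proposed in this thread:

    import std.math : pow;

    struct MyFloat
    {
        double value;

        // Hypothetical overload, by analogy with opMul(); opPow is one of the
        // names floated in this thread, not an existing language hook.
        MyFloat opPow(MyFloat rhs)
        {
            return MyFloat(cast(double) pow(value, rhs.value));
        }
    }

    void main()
    {
        auto a = MyFloat(2.0), b = MyFloat(10.0);
        auto c = a.opPow(b);   // what a ^^ b would presumably be rewritten to
        assert(c.value > 1023.9 && c.value < 1024.1);
    }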
Aug 07 2009
On Fri, Aug 7, 2009 at 3:50 AM, Lars T. Kyllingstad<public kyllingen.nospamnet> wrote:In the 'proposed syntax change' thread, Don mentioned that an exponentiation operator is sorely missing from D. I couldn't agree more. Daniel Keep has proposed the syntax a*^b while my suggestion was a^^b Neither of the natural candidates, a^b and a**b, are an option, as they are, respectively, already taken and ambiguous. "Why do we need this?" you say. "Isn't pow(a,b) good enough?" And yes, pow(a,b) is just as good as mul(a,b) or div(a,b), but we don't use those, do we? Exponentiation is a very common mathematical operation that deserves its own symbol. Besides, bearophile has pointed out several optimisations that the compiler can/must perform on exponential expressions. He also proposed that the overload be called opPower. What do you think?

I'm all for it.

But if we can't get that, then it might be nice to have at least squaring and cubing template functions in std.math. Squaring numbers is so common it deserves a direct way. It always annoys me when I have to write

    float x = (some expression);
    float x2 = x*x;

When I'd like to be able to just write (some expression)^^2. sqr(some expression) would be ok, too. It's odd that sqrt and cbrt exist in the std C math library but not their inverses.

I found this list of languages with an exponentiation operator:
    ...derivatives), Haskell (for integer exponents), and most computer algebra systems
    ...Haskell (for floating-point exponents), Turing

--bb
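A minimal sketch of the sqr/cube helpers Bill is asking for -- these do not exist in std.math, and the names are just the ones suggested in this post:

    // Squaring and cubing helpers: trivial templates, but they avoid the
    // temporary variable in the float x / x*x pattern above.
    T sqr(T)(T x)  { return x * x; }
    T cube(T)(T x) { return x * x * x; }

    unittest
    {
        assert(sqr(3) == 9);
        assert(cube(2.0) == 8.0);
        assert(sqr(1.5) == 2.25);   // exact in binary floating point
    }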
Aug 07 2009
Bill Baxter wrote:On Fri, Aug 7, 2009 at 3:50 AM, Lars T. Kyllingstad<public kyllingen.nospamnet> wrote:

Yes, it's powers of 2 and 3 that are 90% of the use cases. Squaring is a really common operation (more common than xor). An optimising compiler always needs to recognize it.

In the 'proposed syntax change' thread, Don mentioned that an exponentiation operator is sorely missing from D. I couldn't agree more. Daniel Keep has proposed the syntax a*^b while my suggestion was a^^b Neither of the natural candidates, a^b and a**b, are an option, as they are, respectively, already taken and ambiguous. "Why do we need this?" you say. "Isn't pow(a,b) good enough?" And yes, pow(a,b) is just as good as mul(a,b) or div(a,b), but we don't use those, do we? Exponentiation is a very common mathematical operation that deserves its own symbol. Besides, bearophile has pointed out several optimisations that the compiler can/must perform on exponential expressions. He also proposed that the overload be called opPower. What do you think?

I'm all for it. But if we can't get that, then it might be nice to have at least squaring and cubing template functions in std.math. Squaring numbers is so common it deserves a direct way. It always annoys me when I have to write

    float x = (some expression);
    float x2 = x*x;

When I'd like to be able to just write (some expression)^^2. sqr(some expression) would be ok, too. It's odd that sqrt and cbrt exist in the std C math library but not their inverses.

I found this list of languages with an exponentiation operator:
    ...derivatives), Haskell (for integer exponents), and most computer algebra systems
    ...Haskell (for floating-point exponents), Turing

--bb

I personally don't understand why anyone would like ** as an exponentiation operator. The only thing in its favour is that that's what Fortran did. And Fortran only used it because it had almost no characters to choose from. ↑ is the best (yup, I had a C64). ^ is the next best, but unfortunately C grabbed it for xor. I think ^^ is the best available option.

Aside from the fact that x**y is ambiguous (eg, x***y could be x ** (*y) or x * (*(*y)) ), I just think ^^ looks better:

    assert( 3**3 + 4**3 + 5**3 == 6**3 );
    assert( 3^^3 + 4^^3 + 5^^3 == 6^^3 );

Found an old discussion, pragma proposed ^^ and opPower:
http://www.digitalmars.com/d/archives/digitalmars/D/18742.html#N18742

The fact that exactly the same proposal came up again is encouraging -- it's moderately intuitive.

For overloading, by analogy to opAdd(), opSub(), opMul(), opDiv() it should probably be opPow().
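For anyone wondering what the ambiguity Don mentions looks like in practice, something like the following is already legal D (and C) today, which is why ** cannot simply be given a new meaning without silently changing existing code:

    void main()
    {
        int  b = 7;
        int* p = &b;
        int  a = 6;

        // The lexer produces two '*' tokens, so this parses as a * (*p):
        // multiplication by a dereferenced pointer, not a to the power p.
        int x = a ** p;
        assert(x == 42);
    }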
Aug 07 2009
Lars T. Kyllingstad:He also proposed that the overload be called opPower.

I want to add two small things to that post of mine:
http://www.digitalmars.com/webnews/newsgroups.php?art_group=digitalmars.D&article_id=95123

The name opPow() may be good enough instead of opPower(). And A^^3 may be faster than A*A*A when A isn't a simple number, so always replacing the power with mults may be bad. On the other hand, if double^^2 is compiled as pow(double,2) then I'm not going to use ^^ in most of my code.

Bye,
bearophile
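bearophile's worry is that double^^2 would end up as a pow() call. As a sketch of the alternative, a template taking a compile-time exponent can unroll the small cases into plain multiplications and fall back to pow() only for the rest (nothing like this is in Phobos; it is only an illustration):

    import std.math : pow;

    // Compile-time integer exponent: unroll small constant powers,
    // fall back to the floating-point pow for everything else.
    double powN(int n)(double x)
    {
        static if (n == 2)
            return x * x;                 // x^^2 => x*x
        else static if (n == 3)
            return x * x * x;             // x^^3 => x*x*x
        else static if (n == 4)
        {
            double y = x * x;             // x^^4 => y=x*x; y*y
            return y * y;
        }
        else
            return cast(double) pow(x, cast(double) n);
    }

    unittest
    {
        assert(powN!2(3.0) == 9.0);
        assert(powN!4(2.0) == 16.0);
    }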
Aug 07 2009
"bearophile" <bearophileHUGS lycos.com> wrote in message news:h5h3uf$23sg$1 digitalmars.com...Lars T. Kyllingstad:It wont be on x86. Multiplication has a latency of around 4 cycles whether int or float, so x*x*x will clock around 12 cycles. The main instruction needed for pow, F2XM1, costs anywhere from 50 cycles to 120, depending on the cpu. And then you need to do a bunch of other stuff to make F2XM1 handle different bases.He also proposed that the overload be called opPower.I want to add to two small things to that post of mine: http://www.digitalmars.com/webnews/newsgroups.php?art_group=digitalmars.D&article_id=95123 The name opPow() may be good enough instead of opPower(). And A^^3 may be faster than A*A*A when A isn't a simple number, so always replacing the power with mults may be bad.
Aug 07 2009
Jimbob wrote:"bearophile" <bearophileHUGS lycos.com> wrote in message news:h5h3uf$23sg$1 digitalmars.com...Yeah, but what's the throughput? With multiple ALUs you can get several multiplications fast, even though getting the first one incurs a latency. AndreiLars T. Kyllingstad:It wont be on x86. Multiplication has a latency of around 4 cycles whether int or float, so x*x*x will clock around 12 cycles.He also proposed that the overload be called opPower.I want to add to two small things to that post of mine: http://www.digitalmars.com/webnews/newsgroups.php?art_group=digitalmars.D&article_id=95123 The name opPow() may be good enough instead of opPower(). And A^^3 may be faster than A*A*A when A isn't a simple number, so always replacing the power with mults may be bad.
Aug 07 2009
"Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message news:4A7C5313.10105 erdani.org...Jimbob wrote:In this case you incur the latency of every mul because each one needs the result of the previous mul before it can start. Thats the main reason trancendentals take so long to compute, cause they have large dependancy chains which make it difficult, if not imposible for any of it to be done in parallel."bearophile" <bearophileHUGS lycos.com> wrote in message news:h5h3uf$23sg$1 digitalmars.com...Yeah, but what's the throughput? With multiple ALUs you can get several multiplications fast, even though getting the first one incurs a latency.Lars T. Kyllingstad:It wont be on x86. Multiplication has a latency of around 4 cycles whether int or float, so x*x*x will clock around 12 cycles.He also proposed that the overload be called opPower.I want to add to two small things to that post of mine: http://www.digitalmars.com/webnews/newsgroups.php?art_group=digitalmars.D&article_id=95123 The name opPow() may be good enough instead of opPower(). And A^^3 may be faster than A*A*A when A isn't a simple number, so always replacing the power with mults may be bad.
Aug 07 2009
Jimbob wrote:"Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message news:4A7C5313.10105 erdani.org...Oh, you're right. At least if there were four multiplies in there, I could've had a case :o). AndreiJimbob wrote:In this case you incur the latency of every mul because each one needs the result of the previous mul before it can start. Thats the main reason trancendentals take so long to compute, cause they have large dependancy chains which make it difficult, if not imposible for any of it to be done in parallel."bearophile" <bearophileHUGS lycos.com> wrote in message news:h5h3uf$23sg$1 digitalmars.com...Yeah, but what's the throughput? With multiple ALUs you can get several multiplications fast, even though getting the first one incurs a latency.Lars T. Kyllingstad:It wont be on x86. Multiplication has a latency of around 4 cycles whether int or float, so x*x*x will clock around 12 cycles.He also proposed that the overload be called opPower.I want to add to two small things to that post of mine: http://www.digitalmars.com/webnews/newsgroups.php?art_group=digitalmars.D&article_id=95123 The name opPow() may be good enough instead of opPower(). And A^^3 may be faster than A*A*A when A isn't a simple number, so always replacing the power with mults may be bad.
Aug 07 2009
Reply to Jimbob,"bearophile" <bearophileHUGS lycos.com> wrote in message news:h5h3uf$23sg$1 digitalmars.com...For constant integer exponents the compiler should be able to choose between the multiplication solution and a intrinsic solution. also: http://en.wikipedia.org/wiki/Exponentiation#Efficiently_computing_a_powerLars T. Kyllingstad:It wont be on x86. Multiplication has a latency of around 4 cycles whether int or float, so x*x*x will clock around 12 cycles. The main instruction needed for pow, F2XM1, costs anywhere from 50 cycles to 120, depending on the cpu. And then you need to do a bunch of other stuff to make F2XM1 handle different bases.He also proposed that the overload be called opPower.I want to add to two small things to that post of mine: http://www.digitalmars.com/webnews/newsgroups.php?art_group=digitalma rs.D&article_id=95123 The name opPow() may be good enough instead of opPower(). And A^^3 may be faster than A*A*A when A isn't a simple number, so always replacing the power with mults may be bad.
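The algorithm behind that Wikipedia link is short enough to sketch directly; for a constant exponent, a compiler (or a template) could unroll this loop completely:

    // Exponentiation by squaring: O(log n) multiplications instead of n - 1.
    long ipow(long base, uint exp)
    {
        long result = 1;
        while (exp != 0)
        {
            if (exp & 1)          // low bit set: multiply this power in
                result *= base;
            base *= base;         // square for the next bit
            exp >>= 1;
        }
        return result;
    }

    unittest
    {
        assert(ipow(2, 10) == 1024);
        assert(ipow(3, 0) == 1);
        assert(ipow(5, 3) == 125);
    }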
Aug 07 2009
Jimbob Wrote:bearophile:

I don't understand what you mean. But "when A isn't a simple number" means for example when A is a matrix. In such a case the algorithm for A^3 may be faster than doing two matrix multiplications, and even if it's not faster it may be better numerically, etc. In such cases I'd like to leave the decision about what to do to the matrix power algorithm, and I don't think rewriting the power is good. This means the rewriting rules I have shown (x^^2 => x*x, x^^3 => x*x*x, and maybe x^^4 => y=x*x; y*y) have to be used only when x is a built-in datum.

Bye,
bearophile

And A^^3 may be faster than A*A*A when A isn't a simple number, so always replacing the power with mults may be bad.It won't be on x86. Multiplication has a latency of around 4 cycles whether int or float, so x*x*x will clock around 12 cycles. The main instruction needed for pow, F2XM1, costs anywhere from 50 cycles to 120, depending on the CPU. And then you need to do a bunch of other stuff to make F2XM1 handle different bases.
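The distinction bearophile draws could be expressed with a simple compile-time test: rewrite to multiplications only for built-in numeric types, and hand anything else (a matrix type, say) to its own power routine. A rough sketch, with opPow standing in for whatever hook the language ends up with:

    import std.traits : isNumeric;

    // Cube a value: rewrite to multiplications only for built-in numbers,
    // otherwise defer to the type's own (hypothetical) opPow.
    T power3(T)(T a)
    {
        static if (isNumeric!T)
            return a * a * a;      // cheap, safe rewriting for built-ins
        else
            return a.opPow(3);     // e.g. a matrix type picks its own algorithm
    }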
Aug 07 2009
"bearophile" <bearophileHUGS lycos.com> wrote in message news:h5hvhh$if8$1 digitalmars.com...Jimbob Wrote:Oops, my brain didnt parse what you meant by "simple number".bearophile:I don't understand what you mean. But "when A isn't a simple number" means for example when A is a matrix.And A^^3 may be faster than A*A*A when A isn't a simple number, so always replacing the power with mults may be bad.It wont be on x86. Multiplication has a latency of around 4 cycles whether int or float, so x*x*x will clock around 12 cycles. The main instruction needed for pow, F2XM1, costs anywhere from 50 cycles to 120, depending on the cpu. And then you need to do a bunch of other stuff to make F2XM1 handle different bases.
Aug 07 2009
On Fri, 07 Aug 2009 12:50:25 +0200, Lars T. Kyllingstad wrote:In the 'proposed syntax change' thread, Don mentioned that an exponentiation operator is sorely missing from D. I couldn't agree more.[..]Neither of the natural candidates, a^b and a**b, are an option, as they are, respectively, already taken and ambiguous.[..] What is ** used for?
Aug 07 2009
On Fri, Aug 7, 2009 at 6:02 AM, Moritz Warning<moritzwarning web.de> wrote:On Fri, 07 Aug 2009 12:50:25 +0200, Lars T. Kyllingstad wrote:Multiplying by a dereferenced pointer. --bbIn the 'proposed syntax change' thread, Don mentioned that an exponentiation operator is sorely missing from D. I couldn't agree more.[..]Neither of the natural candidates, a^b and a**b, are an option, as they are, respectively, already taken and ambiguous.[..] What is ** used for?
Aug 07 2009
Lars T. Kyllingstad wrote:Neither of the natural candidates, a^b and a**b, are an option, as they are, respectively, already taken and ambiguous.

I think that a ** b can be used; it is not ambiguous except to the tokenizer of the language. It is the same difference you have with:

    a ++ b  -> identifier 'a', unary operator '++', identifier 'b' (not parseable)
    a + + b -> identifier 'a', binary operator '+', unary operator '+', identifier 'b' (parseable)

I don't know anyone who writes ** to mean multiplication and dereference, except when obfuscating code. People usually prefer adding whitespace between both operators, for obvious readability purposes.

I think it is perfectly reasonable to deprecate the current usage of '**' for the next release, and a few releases later, make '**' a new operator. I doubt anyone would notice.

Other examples:

    a-- - b    a - --b
    a && b     a & &b
Aug 07 2009
On 2009-08-07 12:33:09 -0400, Miles <_______ _______.____> said:Lars T. Kyllingstad wrote:But to be coherent with a++ which does a+1, shouldn't a** mean a to the power 1 ? -- Michel Fortin michel.fortin michelf.com http://michelf.com/Neither of the natural candidates, a^b and a**b, are an option, as they are, respectively, already taken and ambiguous.I think that a ** b can be used, is not ambiguous except for the tokenizer of the language. It is the same difference you have with: a ++ b -> identifier 'a', unary operator '++', identifier 'b' (not parseable) a + + b -> identifier 'a', binary operator '+', unary operator '+', identifier 'b' (parseable)
Aug 07 2009
On Fri, 07 Aug 2009 18:57:12 +0200, Michel Fortin <michel.fortin michelf.com> wrote:On 2009-08-07 12:33:09 -0400, Miles <_______ _______.____> said:No. As we can see, ++ is the concatenation of two addition operators, so the equivalent for exponential would be a****, a^^^^, or a*^*^. :p -- SimenLars T. Kyllingstad wrote:But to be coherent with a++ which does a+1, shouldn't a** mean a to the power 1 ?Neither of the natural candidates, a^b and a**b, are an option, as they are, respectively, already taken and ambiguous.I think that a ** b can be used, is not ambiguous except for the tokenizer of the language. It is the same difference you have with: a ++ b -> identifier 'a', unary operator '++', identifier 'b' (not parseable) a + + b -> identifier 'a', binary operator '+', unary operator '+', identifier 'b' (parseable)
Aug 08 2009
Miles wrote:Lars T. Kyllingstad wrote:That doesn't work, because you still get new code being converted from C. It can't look the same, but behave differently.Neither of the natural candidates, a^b and a**b, are an option, as they are, respectively, already taken and ambiguous.I think that a ** b can be used, is not ambiguous except for the tokenizer of the language. It is the same difference you have with: a ++ b -> identifier 'a', unary operator '++', identifier 'b' (not parseable) a + + b -> identifier 'a', binary operator '+', unary operator '+', identifier 'b' (parseable) I don't know anyone who writes ** to mean multiplication and dereference, except when obfuscating code. People usually prefer adding a whitespace between both operators, for obvious readability purposes. I think it is perfectly reasonable to deprecate current usage of '**' for the next release, and a few releases later, make '**' a new operator. I doubt anyone would notice.Other examples: a-- - b a - --b a && b a & &bYou didn't respond to my assertion: even if you _could_ do it, why would you want to? ** sucks as an exponential operator. I dispute the contention that ** is a natural choice. It comes from the same language that brought you IF X .NE. 2
Aug 10 2009
Don wrote:That doesn't work, because you still get new code being converted from C. It can't look the same, but behave differently.int* a, b; Ooops...
Aug 10 2009
grauzone wrote:Don wrote:

Touché. C declaration syntax is dreadful.

That doesn't work, because you still get new code being converted from C. It can't look the same, but behave differently.int* a, b; Ooops...
Aug 10 2009
grauzone wrote:Don wrote:Since that changes the type of b, it's at least likely to give you a compile error. Although I suppose you could say the same thing about a**b...That doesn't work, because you still get new code being converted from C. It can't look the same, but behave differently.int* a, b; Ooops...
Aug 10 2009
Don wrote:Miles wrote:

I've been translating a lot of FORTRAN code to D lately, and it's amazing what one can get used to reading. Even X.NE.2. :)

The worst part is translating 1-based array indexing to base 0 (it should be a simple transformation, but those old FORTRAN coders have made damn sure that isn't always the case...), and unraveling horrible spaghetti code like this:

    100 if(abserr.eq.oflow) go to 115
        if(ier+ierro.eq.0) go to 110
        if(ierro.eq.3) abserr = abserr+correc
        if(ier.eq.0) ier = 3
        if(result.ne.0.0d+00.and.area.ne.0.0d+00) go to 105
        if(abserr.gt.errsum) go to 115
        if(area.eq.0.0d+00) go to 130
        go to 110
    105 if(abserr/dabs(result).gt.errsum/dabs(area)) go to 115

In the end I just had to admit defeat and define a few labels...

-Lars
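For what it's worth, D keeps goto and labels, so the "admit defeat" translation can follow the original almost line for line. A rough sketch of just the control flow of the fragment above -- the variables are turned into parameters here, and the code at labels 110, 115 and 130 is elided exactly as in the quote:

    import std.math : fabs;

    void fragment(ref double abserr, ref int ier, int ierro, double oflow,
                  double correc, double result, double area, double errsum)
    {
        if (abserr == oflow) goto L115;
        if (ier + ierro == 0) goto L110;
        if (ierro == 3) abserr = abserr + correc;
        if (ier == 0) ier = 3;
        if (result != 0.0 && area != 0.0) goto L105;
        if (abserr > errsum) goto L115;
        if (area == 0.0) goto L130;
        goto L110;
    L105:
        if (abserr / fabs(result) > errsum / fabs(area)) goto L115;

        // The statements at the original labels would go here; placeholders only.
    L110: return;
    L115: return;
    L130: return;
    }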
Aug 10 2009
Don wrote:You didn't respond to my assertion: even if you _could_ do it, why would you want to? ** sucks as an exponential operator. I dispute the contention that ** is a natural choice. It comes from the same language that brought you IF X .NE. 2

There are too many languages that support ** as an exponentiation operator; that is the reason ** is a likely candidate. Your reasoning seemed to be:

- Fortran is bad;
- Fortran had ** as its exponentiation operator;
- So, ** is bad as an exponentiation operator.

I don't care for ** or .NE., really. I don't like * as a multiplication operator, in fact. I'd rather have × as multiplication, ↑ as exponentiation, ∧ as logical and, ∨ as logical or, ¬ as a logical not, = as equality, ≠ as inequality and ← as assignment. I don't know why, but every time I say this, it brings all sorts of controversies and euphoric reactions.
Aug 10 2009
Miles wrote:Don wrote:Not at all! I'm attacking the fallacy that "** must be a good choice because so many languages use it".

* The ONLY reason other languages use ** is because Fortran used it.
* Fortran used ** because it had no choice, not because it was believed to be good.
* We have choices that Fortran did not have. The best choice for Fortran is not necessarily the best choice for D.

Note that there are no C-family languages which use ** for exponentiation, so there isn't really a precedent.

However, the syntax is really not the issue. The issue is, is there sufficient need for a power operator (of any syntax)?

You didn't respond to my assertion: even if you _could_ do it, why would you want to? ** sucks as an exponential operator. I dispute the contention that ** is a natural choice. It comes from the same language that brought you IF X .NE. 2

There are too many languages that support ** as an exponentiation operator, that is the reason ** is a likely candidate. Your reasoning seemed to be: - Fortran is bad; - Fortran had ** as its exponentiation operator; - So, ** is bad as an exponentiation operator.

I don't care for ** or .NE., really. I don't like * as a multiplication operator, in fact. I'd rather have × as multiplication, ↑ as exponentiation, ∧ as logical and, ∨ as logical or, ¬ as a logical not, = as equality, ≠ as inequality and ← as assignment. I don't know why, but every time I say this, it brings all sorts of controversies and euphoric reactions.

Lack of keyboards.
Aug 11 2009
Don Wrote:However, the syntax is really not the issue. The issue is, is there sufficient need for a power operator (of any syntax)?

std.math.pow only supports floating-point numbers:
http://d.puremagic.com/issues/show_bug.cgi?id=2973

If we can't make the power function more powerful, yes, we need a new operator.
Aug 11 2009
On 2009-08-10 13:56:53 -0400, Miles <_______ _______.____> said:Don wrote:HyperTalk used to have <> for inequality, but ≠ worked too. You could write >= for greater or equal, but ≥ worked too. That said, having = as equality in a C-derived language is somewhat problematic. -- Michel Fortin michel.fortin michelf.com http://michelf.com/You didn't respond to my assertion: even if you _could_ do it, why would you want to? ** sucks as an exponential operator. I dispute the contention that ** is a natural choice. It comes from the same language that brought you IF X .NE. 2There are too many languages that support ** as an exponentiation operator, that is the reason ** is a likely candidate. Your reasoning seemed to be: - Fortran is bad; - Fortran had ** as its exponentiation operator; - So, ** is bad as an exponentiation operator. I don't care for ** or .NE., really. I don't like * as a multiplication operator, in fact. I'd rather have × as multiplication, ↑ as exponentiation, ∧ as logical and, ∨ as logical or, ¬ as a logical not, = as equality, ≠ as inequality and ← as assignment.
Aug 11 2009
On 2009-08-07 06:50:25 -0400, "Lars T. Kyllingstad" <public kyllingen.NOSPAMnet> said:Daniel Keep has proposed the syntax a*^b while my suggestion was a^^b

I always wondered why there isn't an XOR logical operator.

    binary  =>  logical
    (a & b) =>  (a && b)
    (a | b) =>  (a || b)
    (a ^ b) =>  (a ^^ b)

--
Michel Fortin
michel.fortin michelf.com
http://michelf.com/
Aug 07 2009
Michel Fortin wrote:On 2009-08-07 06:50:25 -0400, "Lars T. Kyllingstad" <public kyllingen.NOSPAMnet> said:

     a | b | a ^^ b | a != b
    ---+---+--------+--------
     F | F |   F    |   F
     F | T |   T    |   T
     T | F |   T    |   T
     T | T |   F    |   F

That's why.

Daniel Keep has proposed the syntax a*^b while my suggestion was a^^b

I always wondered why there isn't an XOR logical operator.

    binary  =>  logical
    (a & b) =>  (a && b)
    (a | b) =>  (a || b)
    (a ^ b) =>  (a ^^ b)
Aug 07 2009
On 2009-08-07 13:01:55 -0400, Daniel Keep <daniel.keep.lists gmail.com> said:Michel Fortin wrote:

For this table to work, a and b need to be boolean values. With && and ||, you have an implicit conversion to boolean, not with !=. So if a == 1 and b == 2, a hypothetical ^^ would yield false since both are converted to true, while != would yield true.

But I have another explanation now. With && and ||, there's always a chance that the expression on the right won't be evaluated. If that wasn't the case, the only difference in && vs. &, and || vs. | would be the automatic conversion to a boolean value. With ^^, you always have to evaluate both sides, so it's less useful.

--
Michel Fortin
michel.fortin michelf.com
http://michelf.com/

I always wondered why there isn't an XOR logical operator.

    binary  =>  logical
    (a & b) =>  (a && b)
    (a | b) =>  (a || b)
    (a ^ b) =>  (a ^^ b)

     a | b | a ^^ b | a != b
    ---+---+--------+--------
     F | F |   F    |   F
     F | T |   T    |   T
     T | F |   T    |   T
     T | T |   F    |   F

That's why.
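Michel's point can be spelled out in code: a logical ^^ on non-boolean operands would first have to convert each side to a truth value, which is exactly where it stops matching !=. A tiny sketch:

    // What a logical xor would have to mean for integers: compare truth values.
    bool logicalXor(int a, int b)
    {
        return (a != 0) != (b != 0);
    }

    unittest
    {
        assert(!logicalXor(1, 2));   // both operands are "true", so xor is false
        assert(1 != 2);              // ...while plain != on the values is true
        assert( logicalXor(0, 5));
        assert(!logicalXor(0, 0));
    }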
Aug 07 2009
Fri, 07 Aug 2009 12:50:25 +0200, Lars T. Kyllingstad thusly wrote:In the 'proposed syntax change' thread, Don mentioned that an exponentiation operator is sorely missing from D. I couldn't agree more...."Why do we need this?" you say. "Isn't pow(a,b) good enough?" And yes, pow(a,b) is just as good as mul(a,b) or div(a,b), but we don't use those, do we?

A lot of other built-in operators are missing. I'd like to add opGcd() (greatest common divisor), opFactorial (memoizing O(1) implementation, for advertising the terseness of D on reddit), opStar (has been discussed before), opSwap, opFold, opMap, and opFish (><>) to your list. Operations for paraconsistent logic would be nice, too. After all, these are operations I use quite often ==> everyone must need them badly.

He also proposed that the overload be called opPower. What do you think?

Sounds perfect.

-Lars
Aug 07 2009
On Fri, Aug 7, 2009 at 10:43 AM, language_fan<foo bar.com.invalid> wrote:Fri, 07 Aug 2009 12:50:25 +0200, Lars T. Kyllingstad thusly wrote:Ha ha. Funny jokes. Can we call it opDarwin instead of opFish? That would be funnier. --bbIn the 'proposed syntax change' thread, Don mentioned that an exponentiation operator is sorely missing from D. I couldn't agree more...."Why do we need this?" you say. "Isn't pow(a,b) good enough?" And yes, pow(a,b) is just as good as mul(a,b) or div(a,b), but we don't use those, do we?A lot of other built-in operators are missing. I'd like to add opGcd() (greatest common denominator), opFactorial (memoizing O(1) implementation, for advertising the terseness of D on reddit), opStar (has been discussed before), opSwap, opFold, opMap, and opFish (><>) to your list. Operations for paraconsistent logic would be nice, too. After all, these are operations I use quite often ==> everyone must need them badly.
Aug 07 2009
Lars T. Kyllingstad Wrote:In the 'proposed syntax change' thread, Don mentioned that an exponentiation operator is sorely missing from D. I couldn't agree more. Daniel Keep has proposed the syntax a*^b while my suggestion was a^^b

I prefer the *^ syntax because:

1. ^^ looks like we are including a new logical xor syntax.
2. *^ has the asterisk from the multiplication syntax while using the caret from the mathematical exponentiation syntax. Exponentiation is a kind of multiplication, with a default identity of 1.

Also, if this feature does get included, then the code

    a *^ b *^ c

should be evaluated as a *^ (b *^ c), not (a *^ b) *^ c.
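A quick worked example of why the grouping matters, spelled out with plain multiplication since the operator itself doesn't exist yet:

    unittest
    {
        // Right-associative (as proposed):  2 *^ (3 *^ 2) = 2 *^ 9 = 512
        // Left-associative:                (2 *^ 3) *^ 2 = 8 *^ 2 =  64
        int rightAssoc = 2*2*2*2*2*2*2*2*2;    // 2 to the 9th
        int leftAssoc  = (2*2*2) * (2*2*2);    // 8 squared
        assert(rightAssoc == 512);
        assert(leftAssoc  == 64);
    }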
Aug 25 2009