
digitalmars.D - Exponential operator

reply "Lars T. Kyllingstad" <public kyllingen.NOSPAMnet> writes:
In the 'proposed syntax change' thread, Don mentioned that an 
exponentiation operator is sorely missing from D. I couldn't agree more.

Daniel Keep has proposed the syntax

   a*^b

while my suggestion was

   a^^b

Neither of the natural candidates, a^b and a**b, are an option, as they 
are, respectively, already taken and ambiguous.

"Why do we need this?" you say. "Isn't pow(a,b) good enough?" And yes, 
pow(a,b) is just as good as mul(a,b) or div(a,b), but we don't use 
those, do we? Exponentiation is a very common mathematical operation 
that deserves its own symbol. Besides, bearophile has pointed out 
several optimisations that the compiler can/must perform on exponential 
expressions.

He also proposed that the overload be called opPower.
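As a rough illustration only (the lowering and the names are hypothetical,
since neither ^^ nor opPower exist in D at this point), a user-defined type
might hook such an operator by analogy with opAdd/opMul:

   struct MyNum
   {
       double value;

       // under the proposal, x ^^ n would presumably lower to x.opPower(n)
       MyNum opPower(uint n)
       {
           double r = 1.0;
           foreach (i; 0 .. n)   // naive loop; positive integer exponents only
               r *= value;
           return MyNum(r);
       }
   }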

What do you think?

-Lars
Aug 07 2009
next sibling parent reply Bill Baxter <wbaxter gmail.com> writes:
On Fri, Aug 7, 2009 at 3:50 AM, Lars T.
Kyllingstad<public kyllingen.nospamnet> wrote:
 In the 'proposed syntax change' thread, Don mentioned that an exponentiation
 operator is sorely missing from D. I couldn't agree more.

 Daniel Keep has proposed the syntax

    a*^b

 while my suggestion was

    a^^b

 Neither of the natural candidates, a^b and a**b, are an option, as they are,
 respectively, already taken and ambiguous.

 "Why do we need this?" you say. "Isn't pow(a,b) good enough?" And yes,
 pow(a,b) is just as good as mul(a,b) or div(a,b), but we don't use those, do
 we? Exponentiation is a very common mathematical operation that deserves its
 own symbol. Besides, bearophile has pointed out several optimisations that
 the compiler can/must perform on exponential expressions.

 He also proposed that the overload be called opPower.

 What do you think?
I'm all for it. But if we can't get that, then it might be nice to have at
least squaring and cubing template functions in std.math. Squaring numbers is
so common it deserves a direct way. It always annoys me when I have to write

   float x = (some expression);
   float x2 = x*x;

When I'd like to be able to just write (some expression)^^2. sqr(some
expression) would be ok, too. It's odd that sqrt and cbrt exist in the std C
math library but not their inverses.

I found this list of languages with an exponentiation operator:

 derivatives), Haskell (for integer exponents), and most computer
 algebra systems

 Haskell (for floating-point exponents), Turing

--bb
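A minimal sketch of the helpers Bill is asking for; the names sqr and cube
are hypothetical, not existing std.math functions:

   // hypothetical convenience templates, by analogy with sqrt/cbrt
   T sqr(T)(T x)  { return x * x; }
   T cube(T)(T x) { return x * x * x; }

   // usage: avoids naming a temporary just to square an expression
   float x2 = sqr(1.5f + 2.0f);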
Aug 07 2009
parent Don <nospam nospam.com> writes:
Bill Baxter wrote:
 On Fri, Aug 7, 2009 at 3:50 AM, Lars T.
 Kyllingstad<public kyllingen.nospamnet> wrote:
 In the 'proposed syntax change' thread, Don mentioned that an exponentiation
 operator is sorely missing from D. I couldn't agree more.

 Daniel Keep has proposed the syntax

  a*^b

 while my suggestion was

  a^^b

 Neither of the natural candidates, a^b and a**b, are an option, as they are,
 respectively, already taken and ambiguous.

 "Why do we need this?" you say. "Isn't pow(a,b) good enough?" And yes,
 pow(a,b) is just as good as mul(a,b) or div(a,b), but we don't use those, do
 we? Exponentiation is a very common mathematical operation that deserves its
 own symbol. Besides, bearophile has pointed out several optimisations that
 the compiler can/must perform on exponential expressions.

 He also proposed that the overload be called opPower.

 What do you think?
 I'm all for it. But if we can't get that, then it might be nice to have at
 least squaring and cubing template functions in std.math. Squaring numbers is
 so common it deserves a direct way. It always annoys me when I have to write

    float x = (some expression);
    float x2 = x*x;

 When I'd like to be able to just write (some expression)^^2. sqr(some
 expression) would be ok, too. It's odd that sqrt and cbrt exist in the std C
 math library but not their inverses.
Yes, it's powers of 2 and 3 that are 90% of the use cases. Squaring is a really common operation (more common than xor). An optimising compiler always needs to recognize it.
 I found this list of languages with an exponentiation operator:


 derivatives), Haskell (for integer exponents), and most computer
 algebra systems

 Haskell (for floating-point exponents), Turing

 
 --bb
I personally don't understand why anyone would like ** as an exponentiation
operator. The only thing in its favour is that that's what Fortran did. And
Fortran only used it because it had almost no characters to choose from.

↑ is the best (yup, I had a C64). ^ is the next best, but unfortunately C
grabbed it for xor. I think ^^ is the best available option.

Aside from the fact that x**y is ambiguous (eg, x***y could be x ** (*y) or
x * (*(*y)) ), I just think ^^ looks better:

   assert( 3**3 + 4**3 + 5**3 == 6**3 );
   assert( 3^^3 + 4^^3 + 5^^3 == 6^^3 );

Found an old discussion, pragma proposed ^^ and opPower:
http://www.digitalmars.com/d/archives/digitalmars/D/18742.html#N18742

The fact that exactly the same proposal came up again is encouraging -- it's
moderately intuitive.

For overloading, by analogy to opAdd(), opSub(), opMul(), opDiv() it should
probably be opPow().
Aug 07 2009
prev sibling next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Lars T. Kyllingstad:
 He also proposed that the overload be called opPower.
I want to add two small things to that post of mine:
http://www.digitalmars.com/webnews/newsgroups.php?art_group=digitalmars.D&article_id=95123

The name opPow() may be good enough instead of opPower().

And A^^3 may be faster than A*A*A when A isn't a simple number, so always
replacing the power with mults may be bad. On the other hand, if double^^2 is
compiled as pow(double,2) then I'm not going to use ^^ in most of my code.

Bye,
bearophile
Aug 07 2009
parent reply "Jimbob" <jim bob.com> writes:
"bearophile" <bearophileHUGS lycos.com> wrote in message 
news:h5h3uf$23sg$1 digitalmars.com...
 Lars T. Kyllingstad:
 He also proposed that the overload be called opPower.
I want to add to two small things to that post of mine: http://www.digitalmars.com/webnews/newsgroups.php?art_group=digitalmars.D&article_id=95123 The name opPow() may be good enough instead of opPower(). And A^^3 may be faster than A*A*A when A isn't a simple number, so always replacing the power with mults may be bad.
It won't be on x86. Multiplication has a latency of around 4 cycles whether
int or float, so x*x*x will clock around 12 cycles. The main instruction
needed for pow, F2XM1, costs anywhere from 50 to 120 cycles, depending on the
CPU. And then you need to do a bunch of other stuff to make F2XM1 handle
different bases.
Aug 07 2009
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Jimbob wrote:
 "bearophile" <bearophileHUGS lycos.com> wrote in message 
 news:h5h3uf$23sg$1 digitalmars.com...
 Lars T. Kyllingstad:
 He also proposed that the overload be called opPower.
I want to add to two small things to that post of mine: http://www.digitalmars.com/webnews/newsgroups.php?art_group=digitalmars.D&article_id=95123 The name opPow() may be good enough instead of opPower(). And A^^3 may be faster than A*A*A when A isn't a simple number, so always replacing the power with mults may be bad.
It wont be on x86. Multiplication has a latency of around 4 cycles whether int or float, so x*x*x will clock around 12 cycles.
Yeah, but what's the throughput? With multiple ALUs you can get several multiplications fast, even though getting the first one incurs a latency. Andrei
Aug 07 2009
parent reply "Jimbob" <jim bob.com> writes:
"Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
news:4A7C5313.10105 erdani.org...
 Jimbob wrote:
 "bearophile" <bearophileHUGS lycos.com> wrote in message 
 news:h5h3uf$23sg$1 digitalmars.com...
 Lars T. Kyllingstad:
 He also proposed that the overload be called opPower.
I want to add to two small things to that post of mine: http://www.digitalmars.com/webnews/newsgroups.php?art_group=digitalmars.D&article_id=95123 The name opPow() may be good enough instead of opPower(). And A^^3 may be faster than A*A*A when A isn't a simple number, so always replacing the power with mults may be bad.
It wont be on x86. Multiplication has a latency of around 4 cycles whether int or float, so x*x*x will clock around 12 cycles.
Yeah, but what's the throughput? With multiple ALUs you can get several multiplications fast, even though getting the first one incurs a latency.
In this case you incur the latency of every mul because each one needs the
result of the previous mul before it can start. That's the main reason
transcendentals take so long to compute: they have large dependency chains
which make it difficult, if not impossible, for any of it to be done in
parallel.
Aug 07 2009
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Jimbob wrote:
 "Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
 news:4A7C5313.10105 erdani.org...
 Jimbob wrote:
 "bearophile" <bearophileHUGS lycos.com> wrote in message 
 news:h5h3uf$23sg$1 digitalmars.com...
 Lars T. Kyllingstad:
 He also proposed that the overload be called opPower.
I want to add to two small things to that post of mine: http://www.digitalmars.com/webnews/newsgroups.php?art_group=digitalmars.D&article_id=95123 The name opPow() may be good enough instead of opPower(). And A^^3 may be faster than A*A*A when A isn't a simple number, so always replacing the power with mults may be bad.
It wont be on x86. Multiplication has a latency of around 4 cycles whether int or float, so x*x*x will clock around 12 cycles.
Yeah, but what's the throughput? With multiple ALUs you can get several multiplications fast, even though getting the first one incurs a latency.
In this case you incur the latency of every mul because each one needs the result of the previous mul before it can start. Thats the main reason trancendentals take so long to compute, cause they have large dependancy chains which make it difficult, if not imposible for any of it to be done in parallel.
Oh, you're right. At least if there were four multiplies in there, I could've had a case :o). Andrei
Aug 07 2009
prev sibling next sibling parent BCS <ao pathlink.com> writes:
Reply to Jimbob,

 "bearophile" <bearophileHUGS lycos.com> wrote in message
 news:h5h3uf$23sg$1 digitalmars.com...
 
 Lars T. Kyllingstad:
 
 He also proposed that the overload be called opPower.
 
I want to add to two small things to that post of mine: http://www.digitalmars.com/webnews/newsgroups.php?art_group=digitalma rs.D&article_id=95123 The name opPow() may be good enough instead of opPower(). And A^^3 may be faster than A*A*A when A isn't a simple number, so always replacing the power with mults may be bad.
It wont be on x86. Multiplication has a latency of around 4 cycles whether int or float, so x*x*x will clock around 12 cycles. The main instruction needed for pow, F2XM1, costs anywhere from 50 cycles to 120, depending on the cpu. And then you need to do a bunch of other stuff to make F2XM1 handle different bases.
For constant integer exponents the compiler should be able to choose between
the multiplication solution and an intrinsic solution.

Also: http://en.wikipedia.org/wiki/Exponentiation#Efficiently_computing_a_power
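A quick sketch of the multiplication-based approach that page describes
(exponentiation by squaring), for non-negative integer exponents; the name
ipow is made up for illustration:

   // square-and-multiply: O(log n) multiplications instead of n-1
   double ipow(double base, uint n)
   {
       double result = 1.0;
       while (n)
       {
           if (n & 1)
               result *= base;   // fold in the current bit of the exponent
           base *= base;         // square for the next bit
           n >>= 1;
       }
       return result;
   }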
Aug 07 2009
prev sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Jimbob Wrote:

bearophile:
 And A^^3 may be faster than A*A*A when A isn't a simple number, so always 
 replacing the
 power with mults may be bad.
It wont be on x86. Multiplication has a latency of around 4 cycles whether int or float, so x*x*x will clock around 12 cycles. The main instruction needed for pow, F2XM1, costs anywhere from 50 cycles to 120, depending on the cpu. And then you need to do a bunch of other stuff to make F2XM1 handle different bases.
I don't understand what you mean. But "when A isn't a simple number" means,
for example, when A is a matrix. In such a case the algorithm for A^3 may be
faster than doing two matrix multiplications, and even if it's not faster it
may be better numerically, etc. In such cases I'd like to leave the decision
about what to do to the matrix power algorithm, and I don't think rewriting
the power is good.

This means the rewriting rules I have shown (x^^2 => x*x, x^^3 => x*x*x, and
maybe x^^4 => y=x*x; y*y) have to be used only when x is a built-in datum.

Bye,
bearophile
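A rough sketch of that restriction (all names hypothetical, and only a few
fixed exponents shown): do the rewrite at compile time only for built-in
arithmetic types, so a matrix type would not match and would keep using its
own power algorithm:

   // rewrite x^^n into multiplications only for built-in numeric types
   T powFixed(int n, T)(T x)
       if (is(T : real))                  // built-in arithmetic types only
   {
       static if (n == 2)
           return x * x;
       else static if (n == 3)
           return x * x * x;
       else static if (n == 4)
       {
           T y = x * x;                   // bearophile's x^^4 => y=x*x; y*y
           return y * y;
       }
       else
           static assert(false, "only n = 2, 3, 4 sketched here");
   }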
Aug 07 2009
parent "Jimbob" <jim bob.com> writes:
"bearophile" <bearophileHUGS lycos.com> wrote in message 
news:h5hvhh$if8$1 digitalmars.com...
 Jimbob Wrote:

bearophile:
 And A^^3 may be faster than A*A*A when A isn't a simple number, so 
 always
 replacing the
 power with mults may be bad.
It wont be on x86. Multiplication has a latency of around 4 cycles whether int or float, so x*x*x will clock around 12 cycles. The main instruction needed for pow, F2XM1, costs anywhere from 50 cycles to 120, depending on the cpu. And then you need to do a bunch of other stuff to make F2XM1 handle different bases.
I don't understand what you mean. But "when A isn't a simple number" means for example when A is a matrix.
Oops, my brain didn't parse what you meant by "simple number".
Aug 07 2009
prev sibling next sibling parent reply Moritz Warning <moritzwarning web.de> writes:
On Fri, 07 Aug 2009 12:50:25 +0200, Lars T. Kyllingstad wrote:

 In the 'proposed syntax change' thread, Don mentioned that an
 exponentiation operator is sorely missing from D. I couldn't agree more.
[..]
 Neither of the natural candidates, a^b and a**b, are an option, as they
 are, respectively, already taken and ambiguous.
[..] What is ** used for?
Aug 07 2009
parent Bill Baxter <wbaxter gmail.com> writes:
On Fri, Aug 7, 2009 at 6:02 AM, Moritz Warning<moritzwarning web.de> wrote:
 On Fri, 07 Aug 2009 12:50:25 +0200, Lars T. Kyllingstad wrote:

 In the 'proposed syntax change' thread, Don mentioned that an
 exponentiation operator is sorely missing from D. I couldn't agree more.
[..]
 Neither of the natural candidates, a^b and a**b, are an option, as they
 are, respectively, already taken and ambiguous.
[..] What is ** used for?
Multiplying by a dereferenced pointer. --bb
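For illustration (a sketch, not code from the thread), this is the reading
that already compiles today:

   void example()
   {
       int  a = 6;
       int  b = 7;
       int* p = &b;
       int  x = a * *p;   // multiply a by the dereferenced pointer: 42
       int  y = a**p;     // lexes into the same tokens, so it means a * (*p)
   }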
Aug 07 2009
prev sibling next sibling parent reply Miles <_______ _______.____> writes:
Lars T. Kyllingstad wrote:
 Neither of the natural candidates, a^b and a**b, are an option, as they
 are, respectively, already taken and ambiguous.
I think that a ** b can be used; it is not ambiguous except for the tokenizer
of the language. It is the same difference you have with:

   a ++ b  -> identifier 'a', unary operator '++', identifier 'b'
              (not parseable)
   a + + b -> identifier 'a', binary operator '+', unary operator '+',
              identifier 'b' (parseable)

I don't know anyone who writes ** to mean multiplication and dereference,
except when obfuscating code. People usually prefer adding whitespace between
both operators, for obvious readability purposes.

I think it is perfectly reasonable to deprecate current usage of '**' for the
next release, and a few releases later, make '**' a new operator. I doubt
anyone would notice.

Other examples:

   a-- - b
   a - --b

   a && b
   a & &b
Aug 07 2009
next sibling parent reply Michel Fortin <michel.fortin michelf.com> writes:
On 2009-08-07 12:33:09 -0400, Miles <_______ _______.____> said:

 Lars T. Kyllingstad wrote:
 Neither of the natural candidates, a^b and a**b, are an option, as they
 are, respectively, already taken and ambiguous.
I think that a ** b can be used, is not ambiguous except for the tokenizer of the language. It is the same difference you have with: a ++ b -> identifier 'a', unary operator '++', identifier 'b' (not parseable) a + + b -> identifier 'a', binary operator '+', unary operator '+', identifier 'b' (parseable)
But to be coherent with a++, which does a+1, shouldn't a** mean a to the
power 1?

-- 
Michel Fortin
michel.fortin michelf.com
http://michelf.com/
Aug 07 2009
parent "Simen Kjaeraas" <simen.kjaras gmail.com> writes:
On Fri, 07 Aug 2009 18:57:12 +0200, Michel Fortin  
<michel.fortin michelf.com> wrote:

 On 2009-08-07 12:33:09 -0400, Miles <_______ _______.____> said:

 Lars T. Kyllingstad wrote:
 Neither of the natural candidates, a^b and a**b, are an option, as they
 are, respectively, already taken and ambiguous.
I think that a ** b can be used, is not ambiguous except for the tokenizer of the language. It is the same difference you have with: a ++ b -> identifier 'a', unary operator '++', identifier 'b' (not parseable) a + + b -> identifier 'a', binary operator '+', unary operator '+', identifier 'b' (parseable)
But to be coherent with a++ which does a+1, shouldn't a** mean a to the power 1 ?
No. As we can see, ++ is the concatenation of two addition operators, so the equivalent for exponential would be a****, a^^^^, or a*^*^. :p -- Simen
Aug 08 2009
prev sibling parent reply Don <nospam nospam.com> writes:
Miles wrote:
 Lars T. Kyllingstad wrote:
 Neither of the natural candidates, a^b and a**b, are an option, as they
 are, respectively, already taken and ambiguous.
I think that a ** b can be used, is not ambiguous except for the tokenizer of the language. It is the same difference you have with: a ++ b -> identifier 'a', unary operator '++', identifier 'b' (not parseable) a + + b -> identifier 'a', binary operator '+', unary operator '+', identifier 'b' (parseable) I don't know anyone who writes ** to mean multiplication and dereference, except when obfuscating code. People usually prefer adding a whitespace between both operators, for obvious readability purposes. I think it is perfectly reasonable to deprecate current usage of '**' for the next release, and a few releases later, make '**' a new operator. I doubt anyone would notice.
That doesn't work, because you still get new code being converted from C. It can't look the same, but behave differently.
 
 Other examples:
 
   a-- - b
   a - --b
 
   a && b
   a & &b
You didn't respond to my assertion: even if you _could_ do it, why would you want to? ** sucks as an exponential operator. I dispute the contention that ** is a natural choice. It comes from the same language that brought you IF X .NE. 2
Aug 10 2009
next sibling parent reply grauzone <none example.net> writes:
Don wrote:
 That doesn't work, because you still get new code being converted from 
 C. It can't look the same, but behave differently.
int* a, b; Ooops...
Aug 10 2009
next sibling parent Don <nospam nospam.com> writes:
grauzone wrote:
 Don wrote:
 That doesn't work, because you still get new code being converted from 
 C. It can't look the same, but behave differently.
int* a, b; Ooops...
Touche. C declaration syntax is dreadful.
Aug 10 2009
prev sibling parent Daniel Keep <daniel.keep.lists gmail.com> writes:
grauzone wrote:
 Don wrote:
 That doesn't work, because you still get new code being converted from
 C. It can't look the same, but behave differently.
int* a, b; Ooops...
Since that changes the type of b, it's at least likely to give you a compile error. Although I suppose you could say the same thing about a**b...
Aug 10 2009
prev sibling next sibling parent "Lars T. Kyllingstad" <public kyllingen.NOSPAMnet> writes:
Don wrote:
 Miles wrote:
 Lars T. Kyllingstad wrote:
 Neither of the natural candidates, a^b and a**b, are an option, as they
 are, respectively, already taken and ambiguous.
I think that a ** b can be used, is not ambiguous except for the tokenizer of the language. It is the same difference you have with: a ++ b -> identifier 'a', unary operator '++', identifier 'b' (not parseable) a + + b -> identifier 'a', binary operator '+', unary operator '+', identifier 'b' (parseable) I don't know anyone who writes ** to mean multiplication and dereference, except when obfuscating code. People usually prefer adding a whitespace between both operators, for obvious readability purposes. I think it is perfectly reasonable to deprecate current usage of '**' for the next release, and a few releases later, make '**' a new operator. I doubt anyone would notice.
That doesn't work, because you still get new code being converted from C. It can't look the same, but behave differently.
 Other examples:

   a-- - b
   a - --b

   a && b
   a & &b
You didn't respond to my assertion: even if you _could_ do it, why would you want to? ** sucks as an exponential operator. I dispute the contention that ** is a natural choice. It comes from the same language that brought you IF X .NE. 2
I've been translating a lot of FORTRAN code to D lately, and it's amazing
what one can get used to reading. Even X.NE.2. :)

The worst part is translating 1-based array indexing to base 0 (it should be
a simple transformation, but those old FORTRAN coders have made damn sure
that isn't always the case...), and unraveling horrible spaghetti code like
this:

  100 if(abserr.eq.oflow) go to 115
      if(ier+ierro.eq.0) go to 110
      if(ierro.eq.3) abserr = abserr+correc
      if(ier.eq.0) ier = 3
      if(result.ne.0.0d+00.and.area.ne.0.0d+00) go to 105
      if(abserr.gt.errsum) go to 115
      if(area.eq.0.0d+00) go to 130
      go to 110
  105 if(abserr/dabs(result).gt.errsum/dabs(area)) go to 115

In the end I just had to admit defeat and define a few labels...

-Lars
Aug 10 2009
prev sibling parent reply Miles <_______ _______.____> writes:
Don wrote:
 You didn't respond to my assertion: even if you _could_ do it, why would
 you want to? ** sucks as an exponential operator. I dispute the
 contention that ** is a natural choice. It comes from the same language
 that brought you  IF X .NE. 2
There are many languages that support ** as an exponentiation operator; that
is the reason ** is a likely candidate. Your reasoning seemed to be:

- Fortran is bad;
- Fortran had ** as its exponentiation operator;
- So, ** is bad as an exponentiation operator.

I don't care for ** or .NE., really. I don't like * as a multiplication
operator, in fact. I'd rather have × as multiplication, ↑ as exponentiation,
∧ as logical and, ∨ as logical or, ¬ as logical not, = as equality, ≠ as
inequality and ← as assignment.

I don't know why, but every time I say this, it brings all sorts of
controversies and euphoric reactions.
Aug 10 2009
next sibling parent reply Don <nospam nospam.com> writes:
Miles wrote:
 Don wrote:
 You didn't respond to my assertion: even if you _could_ do it, why would
 you want to? ** sucks as an exponential operator. I dispute the
 contention that ** is a natural choice. It comes from the same language
 that brought you  IF X .NE. 2
There are too many languages that support ** as an exponentiation operator, that is the reason ** is a likely candidate. Your reasoning seemed to be: - Fortran is bad; - Fortran had ** as its exponentiation operator; - So, ** is bad as an exponentiation operator.
Not at all! I'm attacking the fallacy that "** must be a good choice because
so many languages use it".

* The ONLY reason other languages use ** is because Fortran used it.
* Fortran used ** because it had no choice, not because it was believed to
  be good.
* We have choices that Fortran did not have. The best choice for Fortran is
  not necessarily the best choice for D.

Note that there are no C-family languages which use ** for exponentiation, so
there isn't really a precedent.

However, the syntax is really not the issue. The issue is: is there
sufficient need for a power operator (of any syntax)?
 I don't care for ** or .NE., really. I don't like * as a multiplication
 operator, in fact. I'd rather have × as multiplication, ↑ as
 exponentiation, ∧ as logical and, ∨ as logical or, ¬ as a logical not, =
 as equality, ≠ as inequality and ← as assignment.
 
 I don't know why, but every time I say this, it brings all sorts of
 controversies and euphoric reactions.
Lack of keyboards.
Aug 11 2009
parent Zhenyu Zhou <rinick gmail.com> writes:
Don Wrote:
 However, the syntax is really not the issue. The issue is, is there 
 sufficient need for a power operator (of any syntax)?
std.math.pow only supports floating point numbers:
http://d.puremagic.com/issues/show_bug.cgi?id=2973

If we can't make the power function more powerful, then yes, we need a new
operator.
Aug 11 2009
prev sibling parent Michel Fortin <michel.fortin michelf.com> writes:
On 2009-08-10 13:56:53 -0400, Miles <_______ _______.____> said:

 Don wrote:
 You didn't respond to my assertion: even if you _could_ do it, why would
 you want to? ** sucks as an exponential operator. I dispute the
 contention that ** is a natural choice. It comes from the same language
 that brought you  IF X .NE. 2
There are too many languages that support ** as an exponentiation operator, that is the reason ** is a likely candidate. Your reasoning seemed to be: - Fortran is bad; - Fortran had ** as its exponentiation operator; - So, ** is bad as an exponentiation operator. I don't care for ** or .NE., really. I don't like * as a multiplication operator, in fact. I'd rather have × as multiplication, ↑ as exponentiation, ∧ as logical and, ∨ as logical or, ¬ as a logical not, = as equality, ≠ as inequality and ← as assignment.
HyperTalk used to have <> for inequality, but ≠ worked too. You could write
>= for greater or equal, but ≥ worked too.

That said, having = as equality in a C-derived language is somewhat
problematic.

-- 
Michel Fortin
michel.fortin michelf.com
http://michelf.com/
Aug 11 2009
prev sibling next sibling parent reply Michel Fortin <michel.fortin michelf.com> writes:
On 2009-08-07 06:50:25 -0400, "Lars T. Kyllingstad" 
<public kyllingen.NOSPAMnet> said:

 Daniel Keep has proposed the syntax
 
    a*^b
 
 while my suggestion was
 
    a^^b
I always wondered why there isn't an XOR logical operator.

   binary     logical
   (a & b) => (a && b)
   (a | b) => (a || b)
   (a ^ b) => (a ^^ b)

-- 
Michel Fortin
michel.fortin michelf.com
http://michelf.com/
Aug 07 2009
parent reply Daniel Keep <daniel.keep.lists gmail.com> writes:
Michel Fortin wrote:
 On 2009-08-07 06:50:25 -0400, "Lars T. Kyllingstad"
 <public kyllingen.NOSPAMnet> said:
 
 Daniel Keep has proposed the syntax

    a*^b

 while my suggestion was

    a^^b
I always wondered why there isn't an XOR logical operator. binary logical (a & b) => (a && b) (a | b) => (a || b) (a ^ b) => (a ^^ b)
   a | b | a ^^ b | a != b
  ---+---+--------+--------
   F | F |   F    |   F
   F | T |   T    |   T
   T | F |   T    |   T
   T | T |   F    |   F

That's why.
Aug 07 2009
parent Michel Fortin <michel.fortin michelf.com> writes:
On 2009-08-07 13:01:55 -0400, Daniel Keep <daniel.keep.lists gmail.com> said:

 Michel Fortin wrote:
 I always wondered why there isn't an XOR logical operator.
 
 binary     logical
 (a & b) => (a && b)
 (a | b) => (a || b)
 (a ^ b) => (a ^^ b)
 a | b | a ^^ b | a != b
---+---+--------+--------
 F | F |   F    |   F
 F | T |   T    |   T
 T | F |   T    |   T
 T | T |   F    |   F

That's why.
For this table to work, a and b need to be boolean values. With && and ||,
you have an implicit conversion to boolean, not with !=. So if a == 1 and
b == 2, a hypothetical ^^ would yield false since both are converted to true,
while != would yield true.

But I have another explanation now. With && and ||, there's always a chance
that the expression on the right won't be evaluated. If that weren't the
case, the only difference in && vs. &, and || vs. | would be the automatic
conversion to a boolean value. With ^^, you always have to evaluate both
sides, so it's less useful.

-- 
Michel Fortin
michel.fortin michelf.com
http://michelf.com/
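A small sketch (not from the thread) of the boolean-conversion difference
Michel points out:

   void demo()
   {
       int a = 1;
       int b = 2;
       bool asLogicalXor = (cast(bool) a) != (cast(bool) b); // false: both nonzero
       bool asInequality = (a != b);                         // true: 1 != 2
   }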
Aug 07 2009
prev sibling next sibling parent reply language_fan <foo bar.com.invalid> writes:
Fri, 07 Aug 2009 12:50:25 +0200, Lars T. Kyllingstad thusly wrote:

 In the 'proposed syntax change' thread, Don mentioned that an
 exponentiation operator is sorely missing from D. I couldn't agree more.
 ...
 "Why do we need this?" you say. "Isn't pow(a,b) good enough?" And yes,
 pow(a,b) is just as good as mul(a,b) or div(a,b), but we don't use
 those, do we?
A lot of other built-in operators are missing. I'd like to add opGcd()
(greatest common divisor), opFactorial (memoizing O(1) implementation, for
advertising the terseness of D on reddit), opStar (has been discussed
before), opSwap, opFold, opMap, and opFish (><>) to your list. Operations for
paraconsistent logic would be nice, too. After all, these are operations I
use quite often ==> everyone must need them badly.
 He also proposed that the overload be called opPower.
 
 What do you think?
Sounds perfect.
 
 -Lars
Aug 07 2009
parent Bill Baxter <wbaxter gmail.com> writes:
On Fri, Aug 7, 2009 at 10:43 AM, language_fan<foo bar.com.invalid> wrote:
 Fri, 07 Aug 2009 12:50:25 +0200, Lars T. Kyllingstad thusly wrote:

 In the 'proposed syntax change' thread, Don mentioned that an
 exponentiation operator is sorely missing from D. I couldn't agree more.
 ...
 "Why do we need this?" you say. "Isn't pow(a,b) good enough?" And yes,
 pow(a,b) is just as good as mul(a,b) or div(a,b), but we don't use
 those, do we?
A lot of other built-in operators are missing. I'd like to add opGcd() (greatest common denominator), opFactorial (memoizing O(1) implementation, for advertising the terseness of D on reddit), opStar (has been discussed before), opSwap, opFold, opMap, and opFish (><>) to your list. Operations for paraconsistent logic would be nice, too. After all, these are operations I use quite often ==> everyone must need them badly.
Ha ha. Funny jokes. Can we call it opDarwin instead of opFish? That would be funnier. --bb
Aug 07 2009
prev sibling parent Tim Matthews <tim.matthews7 gmail.com> writes:
Lars T. Kyllingstad Wrote:

 In the 'proposed syntax change' thread, Don mentioned that an 
 exponentiation operator is sorely missing from D. I couldn't agree more.
 
 Daniel Keep has proposed the syntax
 
    a*^b
 
 while my suggestion was
 
    a^^b
 
I prefer the *^ syntax because:

1. ^^ looks like we are including a new logical xor syntax.
2. *^ has the asterisk from the multiplication syntax while using the caret
   from the mathematical exponentiation syntax. Exponentiation is a kind of
   repeated multiplication, with a default identity of 1.

Also, if this feature does get included, then the code

   a *^ b *^ c

should be evaluated as a *^ (b *^ c), not (a *^ b) *^ c.
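As an illustration of why the grouping matters (using std.math.pow just to
show the two orders; this snippet is not from the thread):

   import std.math;

   void assocDemo()
   {
       real rightAssoc = pow(2.0, pow(3.0, 2.0)); // 2 ^ (3 ^ 2) = 2^9 = 512
       real leftAssoc  = pow(pow(2.0, 3.0), 2.0); // (2 ^ 3) ^ 2 = 8^2 = 64
   }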
Aug 25 2009