
D - Compiler feature: warnings on known exceptions?

reply Russ Lewis <spamhole-2001-07-16 deming-os.org> writes:
The latest printf() discussion gave me an idea: the compiler should
probably throw warnings if it can determine that certain functions (or
function calls) will definitely throw exceptions.  Basically, the idea
is that if you call foo() with certain arguments and the compiler looks
ahead enough to figure out that this will certainly throw an exception,
then it should alert the programmer at compile time.  Ofc, the compiler
isn't responsible for what it didn't detect, so there's no guarantee
that it will give warnings in all cases.

This should not be a feature of the first version, but a whizbang of
later ones, imho.

--
The Villagers are Online! http://villagersonline.com

.[ (the fox.(quick,brown)) jumped.over(the dog.lazy) ]
.[ (a version.of(English).(precise.more)) is(possible) ]
?[ you want.to(help(develop(it))) ]
Oct 09 2001
next sibling parent reply Axel Kittenberger <axel dtone.org> writes:
Russ Lewis wrote:

 The latest printf() discussion gave me an idea: the compiler should
 probably throw warnings if it can determine that certain functions (or
 function calls) will definitely throw exceptions.  Basically, the idea
 is that if you call foo() with certain arguments and the compiler looks
 ahead enough to figure out that this will certainly throw an exception,
 then it should alert the programmer at compile time.  Ofc, the compiler
 isn't responsible for what it didn't detect, so there's no guarantee
 that it will give warnings in all cases.
 
 This should not be a feature of the first version, but a whizbang of
 later ones, imho.
You code it, okay? :o) À la Linus Torvalds: "Talk is cheap, show me the code." :]
Oct 09 2001
next sibling parent Russ Lewis <russ deming-os.org> writes:
LOL

What I mean is a feature similar to "not all control paths return a
value."  That seems doable, from my perspective.  Not that I'm a compiler
coder, ofc...
Oct 09 2001
prev sibling parent reply John Nagle <nagle animats.com> writes:
Axel Kittenberger wrote:
 
 Russ Lewis wrote:
 
 The latest printf() discussion gave me an idea: the compiler should
 probably throw warnings if it can determine that certain functions (or
 function calls) will definitely throw exceptions.  

 This should not be a feature of the first version, but a whizbang of
 later ones, imho.
You code it, okay? :o) À la Linus Torvalds: "Talk is cheap, show me the code." :]
It's not a stupid idea, but it's way beyond ordinary compiler technology. I headed a team that built a proof-of-correctness system for Pascal (see "Practical Program Verification" in POPL 83), and it's quite possible to do far more checking at compile time than is usually done. Back then, verification of a 1000 line program took about 45 minutes on a 1 MIPS machine. Today, with 1000x as much CPU power on the desktop, it would be useful.

This goes with what's now called "design by contract": entry conditions, exit conditions, and class invariants. Think of this as the next step beyond type checking. The basic idea is that if a function makes some assumption about its arguments, that assumption must be expressed as an entry condition, which looks like an "assert". It's the job of the caller to make sure the entry conditions are true. It's the job of the function called to work right for all cases where the entry conditions are true.

So things like "p cannot be NULL", or "the input and output arrays must be different" have to be expressed as entry conditions, not as comments or undocumented. This makes for better-specified APIs.

Of course, vendors who export APIs hate the idea of design by contract. It makes it possible to unambiguously define what it means for a library to have a defect. It tells you whose fault it is. It's common in DoD contracts to enforce interface standards contractually, with financial penalties. Commercial vendors hate that.

John Nagle
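For a concrete sketch of what entry and exit conditions look like as code, here is a minimal example in D's own contract syntax (modern D spelling; the function and conditions are illustrative, not from the verifier described above):

    double mySqrt(double x)
    in
    {
        assert(x >= 0);       // entry condition: the caller's obligation
    }
    out (result)
    {
        assert(result >= 0);  // exit condition: the callee's promise
    }
    do
    {
        import std.math : sqrt;
        return sqrt(x);
    }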
Oct 11 2001
parent reply "Walter" <walter digitalmars.com> writes:
John Nagle wrote in message <3BC5ED4C.87398FAA animats.com>...
   This goes with what's now called "design by contract": entry
conditions, exit conditions, and class invariants.  Think of this
as the next step beyond type checking.  The basic idea is that if
a function makes some assumption about its arguments, that assumption
must be expressed as an entry condition, which looks like an "assert".
It's the job of the caller to make sure the entry conditions are true.
It's the job of the function called to work right for all cases where
the entry conditions are true.

   So things like "p cannot be NULL", or "the input and output arrays
must be different" have to be expressed as entry conditions, not as
comments or undocumented.  This makes for better-specified APIs.

   Of course, vendors who export APIs hate the idea of design by
contract.  It makes it possible to unambiguously define what it means
for a library to have a defect.  It tells you whose fault it is.
It's common in DoD contracts to enforce interface standards
contractually, with financial penalties.  Commercial vendors hate
that.
You're absolutely right. What also becomes possible is that since assert()'s are known to the compiler rather than being a preprocessor hack, if you code:

    assert(p != null);

then the optimizer/code generator is free to take advantage of that fact if it can to generate better code. In effect, contracts can become more than just error checking, they also become hints to the optimizer.
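As a sketch of that (in D, with no promise about what any particular compiler actually generates): once the condition is visible to the compiler, a later check implied by the same condition can be dropped.

    int first(int[] a)
    {
        assert(a.length > 0);  // contract visible to the compiler...
        return a[0];           // ...so the bounds check here may be elided
    }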
Oct 13 2001
next sibling parent Russ Lewis <spamhole-2001-07-16 deming-os.org> writes:
Walter wrote:

 You're absolutely right. What also becomes possible is that since assert()'s
 are known to the compiler rather than being a preprocessor hack, if you
 code:
     assert(p != null);
 then the optimizer/code generator is free to take advantage of that fact if
 it can to generate better code. In effect, contracts can become more than
 just error checking, they also become hints to the optimizer.
It's always nice when a feature that encourages better design also leads to faster runtime speeds :)

Asserts will also be nice as an inline way of documentation:

    if (clause)
        ....
    else if (clause)
        ...
    else if (clause)
        ...
    else
    {
        assert(foo == NULL);
        ....
    };

--
The Villagers are Online! http://villagersonline.com

.[ (the fox.(quick,brown)) jumped.over(the dog.lazy) ]
.[ (a version.of(English).(precise.more)) is(possible) ]
?[ you want.to(help(develop(it))) ]
Oct 13 2001
prev sibling parent reply Axel Kittenberger <axel dtone.org> writes:
 You're absolutely right. What also becomes possible is that since
 assert()'s are known to the compiler rather than being a preprocessor
 hack, if you code:
     assert(p != null);
 then the optimizer/code generator is free to take advantage of that fact
 if it can to generate better code. In effect, contracts can become more
 than just error checking, they also become hints to the optimizer.
Again, to my original statement: if you can code all this, then okay. But I would flinch at doing it the C++ way; in my eyes C++ is a very, very huge and complicated language, not only for the user but also for the compiler writer. Honestly, how many compilers today manage to fulfill the C++ ANSI standard 100%? (None?) Ideas can be all supremely cool, but somebody has to code them.

I like the extreme case that a compiler could also be programmed to use artificial intelligence to transform functional project specifications directly to binary code; well, then all programmers will have to search for new jobs :o) . Actually this kind of "compiler" is called a "programmer", and he uses tools like a machine C compiler and assembler etc. to do this high-level compilation task (.spec -> .bin). Just don't lose sight of what you're able to do :o)

As for my own project, I must admit that lately, because of business reasons, I've found rather limited time/energy to work on my hobby programming language project.

- Axel.
--
|D> http://www.dtone.org
Oct 13 2001
parent "Walter" <walter digitalmars.com> writes:
Axel Kittenberger wrote in message <9qa393$rg3$1 digitaldaemon.com>...
Honestly, how many compilers today manage to fulfill the C++ ANSI
standard 100%? (None?)
None.
Oct 13 2001
prev sibling parent reply "Walter" <walter digitalmars.com> writes:
Since contract errors are defined to be errors, if the compiler can deduce a
contract error at compile time, then it is encouraged to do so (a fatal
compilation error, not a warning). This applies to array bounds errors, etc.
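A minimal sketch of the kind of case that is deducible at compile time (using a D static array; whether a given compiler actually diagnoses it is a quality-of-implementation matter):

    void f()
    {
        int[3] a;
        int x = a[5];  // index provably out of bounds: a compiler is
                       // encouraged to reject this at compile time
    }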

Russ Lewis wrote in message <3BC31E6F.E987F138 deming-os.org>...
The latest printf() discussion gave me an idea: the compiler should
probably throw warnings if it can determine that certain functions (or
function calls) will definitely throw exceptions.  Basically, the idea
is that if you call foo() with certain arguments and the compiler looks
ahead enough to figure out that this will certainly throw an exception,
then it should alert the programmer at compile time.  Ofc, the compiler
isn't responsible for what it didn't detect, so there's no guarantee
that it will give warnings in all cases.

This should not be a feature of the first version, but a whizbang of
later ones, imho.

--
The Villagers are Online! http://villagersonline.com

.[ (the fox.(quick,brown)) jumped.over(the dog.lazy) ]
.[ (a version.of(English).(precise.more)) is(possible) ]
?[ you want.to(help(develop(it))) ]
Oct 13 2001
parent reply "Robert W. Cunningham" <rwc_2001 yahoo.com> writes:
As a real-time embedded programmer, I would kill for a compiler smart enough to
handle some asserts before generating code.  I need fast, tight code that has a
prayer of meeting its spec.  Asserts help do exactly this (as part of an
enforcement mechanism for "Program by Contract" and its ilk), yet there is
generally "no room" for them in the ROM (in the runtime code, that is).  So,
things often work fine under the debugger with asserts present, yet the code
fails when burned to ROM.  Too many times I've seen this happen, and the cost
to the project is always significant.

Give me a language that supports "smarter" asserts, and I'll show you a
language that's sure to win in the real-time embedded market.


-BobC


Walter wrote:

 Since contract errors are defined to be errors, if the compiler can deduce a
 contract error at compile time, then it is encouraged to do so (a fatal
 compilation error, not a warning). This applies to array bounds errors, etc.

 Russ Lewis wrote in message <3BC31E6F.E987F138 deming-os.org>...
The latest printf() discussion gave me an idea: the compiler should
probably throw warnings if it can determine that certain functions (or
function calls) will definitely throw exceptions.  Basically, the idea
is that if you call foo() with certain arguments and the compiler looks
ahead enough to figure out that this will certainly throw an exception,
then it should alert the programmer at compile time.  Ofc, the compiler
isn't responsible for what it didn't detect, so there's no guarantee
that it will give warnings in all cases.

This should not be a feature of the first version, but a whizbang of
later ones, imho.

--
The Villagers are Online! http://villagersonline.com

.[ (the fox.(quick,brown)) jumped.over(the dog.lazy) ]
.[ (a version.of(English).(precise.more)) is(possible) ]
?[ you want.to(help(develop(it))) ]
Oct 13 2001
parent reply Russell Borogove <kaleja estarcion.com> writes:
"Robert W. Cunningham" wrote:
 
 As a real-time embedded programmer, I would kill for a compiler smart enough to
 handle some asserts before generating code.  I need fast, tight code that has a
 prayer of meeting its spec.  Asserts help do exactly this (as part of an
 enforcement mechanism for "Program By Contract" and its many ilk), yet there is
 generally "no room" for them in the ROM (in the runtime code, that is).  So,
 things often work fine under the debugger with asserts present, yet the code
 fails when burned to ROM.  Too many times I've seen this happen, and the cost
 to the project is always significant.
I don't think that's the fault of the assertions. Enabled asserts don't magically make code work that otherwise wouldn't (in fact, nearly the opposite -- they force code to fail consistently when it might sometimes appear to work, or they force code to fail at a predictable point).

If code is working in debug builds with asserts present, but not in "release" builds, then generally one of the following is happening:

(a) You have a bug which manifests or not according to what addresses various code and data appear at (since presence/absence of asserts moves both code and data around) -- which smells like a loose pointer which sometimes hits something important and sometimes doesn't; or

(b) You have a compiler that's too aggressive in optimization, and turns working unoptimized code into differently-working or non-working optimized code; try building without asserts and other debugging features, but with optimizations turned off, and see if the problems go away; or

(c) Code other than assert is either compiled or not on the basis of the debug configuration, and this code is causing a problem; or

(d) The timing of operations is being slightly altered by the presence or absence of assert tests, or by the speed of optimized vs. unoptimized code, and thereby exposing subtle timing problems; or

(e) The runtime environment is different from the testing environment in some way -- the debugger stub, emulation of hardware, or different revisions of hardware are conspiring to make your program act differently.
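Item (c) above in miniature, as a sketch in present-day D (a debug statement is compiled only into debug builds, so the two configurations genuinely run different code):

    import std.stdio;

    int expensiveCheck() { return 1; }  // stands in for debug-only work

    void main()
    {
        int work = 0;
        debug work = expensiveCheck();  // compiled only with dmd -debug
        writeln(work);                  // debug build prints 1, release 0
    }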
 Give me a language that supports "smarter" asserts, and I'll show you a
 language that's sure to win in the real-time embedded market.
None of the above explanations can really be helped by a "smarter" compiler. -Russell B
Oct 14 2001
next sibling parent reply Axel Kittenberger <axel dtone.org> writes:
(f) Your hardware is defective. Unstable hardware can have the coolest
effects, including optimized builds working while debug builds won't, and
vice versa.
Oct 14 2001
parent "Walter" <walter digitalmars.com> writes:
Axel Kittenberger wrote in message <9qd27s$2drp$1 digitaldaemon.com>...
(f) Your hardware is defective. Unstable hardware can have the coolest
effects, including optimized builds working while debug builds won't, and
vice versa.
My first embedded system project had so many milliseconds for the interrupt service routine to run. Taking longer would cause cascading interrupts and a crash. To time it, I set an I/O port bit to 1 on entry, and 0 on exit. Connect an oscilloscope to the bit, and voila!
Oct 14 2001
prev sibling next sibling parent reply "Robert W. Cunningham" <rwc_2001 yahoo.com> writes:
It is almost always item (d), the changed timing between the debug and ROM
environments, that causes most of the runtime problems I see in our systems.
Most of the other problem types are caught far earlier by meticulous design
and exhaustive analysis.

Another parallel problem is simple path coverage testing, where we need to
proceed to the hardware tests in parallel with the coverage tests. This is
exhaustive and time-consuming testing. However, since the real hardware is
often far faster than the debug environment, most of the late bugs revealed
by coverage testing tend to reveal themselves on the real hardware.

Language features that even slightly assist such development processes would
be very welcome.

Now we are adding reconfigurable processing elements and "soft" cores to our
designs, and the situation is getting worse. We really need single languages
for both hardware and software design and implementation. The hardware folks
use VHDL and Verilog, and we use C and C++. Design and modeling tools as
well: they use schematic capture tools, we use UML. Implementation support
tools: they have hardware simulators, we have software simulators and
debuggers. There have been many efforts to bridge this divide at several
levels, yet none have proven practical for use by small teams (3-4 each for
hardware and software, excluding management).

Now I find I have functions that actually have better hardware
implementations than software can ever do (things like state machines and
"straight-line" math sequences), and I need to be able to design and build in
one environment, yet still be able to move to EITHER one for implementation.
Testing done in one domain (hardware or software) should be transferable to
the other at some level, and thus allow the implementation to switch between
domains as needed to meet project goals.

The longer we can put off such decisions, and do so with minimal cost, the
faster and better we can create the product. Ideally, it "shouldn't matter"
if a given algorithm is implemented in hardware or software: at the design
and testing level, it should be the same work for both domains. Either the
algorithm is correct or it is not. If it is correct, then we only need to
verify the implementation. Presently, very little software design is "proven
correct" or even "demonstrated" before being implemented.

Not that D could even be a part of the solution for this situation! But it
may be a step in the right direction. With the right features, it may allow
us to start testing earlier, and end it later, even to the point of shipping
the system with more testing code in it than we normally would (or could).
Compiler-tested asserts would be exactly such a thing: if they can be
identified in advance, then they never need to be "removed" from the
application code.

Sure, you can do some similar stuff using the preprocessor, but that's
tedious and error prone, since the preprocessor language is NOT the
implementation language! They should be one and the same for this to work
cleanly.


-BobC


Russell Borogove wrote:

 "Robert W. Cunningham" wrote:
 As a real-time embedded programmer, I would kill for a compiler smart enough to
 handle some asserts before generating code.  I need fast, tight code that has a
 prayer of meeting its spec.  Asserts help do exactly this (as part of an
 enforcement mechanism for "Program by Contract" and its ilk), yet there is
 generally "no room" for them in the ROM (in the runtime code, that is).  So,
 things often work fine under the debugger with asserts present, yet the code
 fails when burned to ROM.  Too many times I've seen this happen, and the cost
 to the project is always significant.
I don't think that's the fault of the assertions. Enabled asserts don't magically make code work that otherwise wouldn't (in fact, nearly the opposite -- they force code to fail consistently when it might sometimes appear to work, or they force code to fail at a predictable point).

If code is working in debug builds with asserts present, but not in "release" builds, then generally one of the following is happening:

(a) You have a bug which manifests or not according to what addresses various code and data appear at (since presence/absence of asserts moves both code and data around) -- which smells like a loose pointer which sometimes hits something important and sometimes doesn't; or

(b) You have a compiler that's too aggressive in optimization, and turns working unoptimized code into differently-working or non-working optimized code; try building without asserts and other debugging features, but with optimizations turned off, and see if the problems go away; or

(c) Code other than assert is either compiled or not on the basis of the debug configuration, and this code is causing a problem; or

(d) The timing of operations is being slightly altered by the presence or absence of assert tests, or by the speed of optimized vs. unoptimized code, and thereby exposing subtle timing problems; or

(e) The runtime environment is different from the testing environment in some way -- the debugger stub, emulation of hardware, or different revisions of hardware are conspiring to make your program act differently.
 Give me a language that supports "smarter" asserts, and I'll show you a
 language that's sure to win in the real-time embedded market.
None of the above explanations can really be helped by a "smarter" compiler. -Russell B
Oct 14 2001
next sibling parent reply Russell Borogove <kaleja estarcion.com> writes:
"Robert W. Cunningham" wrote:
 
 It is almost always item (d), the changed timing between the debug and ROM
 environments, that causes most of the runtime problems I see in our systems.
 Most of the other problem types are caught far earlier by meticulous design
 and exhaustive analysis.
Are your design and analysis phases capable of catching compiler bugs or bad-pointer implementation bugs, like items (a) and (b)?

If your systems are that sensitive to timing, then aren't you going to run into all sorts of problems down the road, when your model Foo microcontrollers are phased out by the company that makes them and replaced with the Foo-Plus-Turbo models which are binary-compatible but 2 to 5 times faster depending on the instruction mix?

(I'm not really an embedded systems engineer, but I've been programming game consoles and development systems for them, off and on, since '91 or so.)

-RB
Oct 15 2001
next sibling parent reply "Robert W. Cunningham" <rwc_2001 yahoo.com> writes:
Russell Borogove wrote:

 "Robert W. Cunningham" wrote:
 It is almost always item (d), the changed timing between the debug and ROM
 environments, that causes most of the runtime problems I see in our systems.
 Most of the other problem types are caught far earlier by meticulous design
 and exhaustive analysis.
Are your design and analysis phases capable of catching compiler bugs or bad-pointer implementation bugs, like items (a) and (b)?
Yes, sometimes, but we don't rely on them 100%. We prefer to take a defensive approach: we often use two compilers in parallel, and compare the output from both. While rare, I have been on projects that shipped code from more than a single compiler!

We also use lots of lint-like tools to ensure we are using the language features we want, and avoid the ones we don't want. Avoiding known compiler bugs is a very difficult problem, especially on larger projects.

When possible, we also create "reference" algorithm implementations in completely different environments (such as MathCad or Mathematica). We use them to generate independent results, which are then used to test the "real" implementation. I have uncovered many bugs in various floating point libraries and hardware this way.
 If your systems are that sensitive to timing, then aren't you
 going to run into all sorts of problems down the road, when your
 model Foo microcontrollers are phased out by the company that
 makes them and replaced with the Foo-Plus-Turbo models which are
 binary-compatible but 2 to 5 times faster depending on the
 instruction mix?
Yup, we do. When parts are phased out, we typically make "lifetime buys" of timing-critical components, then plan the follow-on product to be ready for market before those supplies run out. We have occasionally been caught using bleeding-edge components that somehow never became popular in the market. They make advanced products possible, but they also tend to shorten the overall product cycle. It's a normal design decision: performance vs price vs time-to-market. Our business model goes for high margins, which means we have to always be first with the best. But it means nothing if we can't deliver repeatably, reliably and on schedule. And early part obsolescence (or an earthquake in Taiwan) can throw a wrench into the most carefully made plans.

We avoid this to a large extent in digital circuits by placing more and more stuff into FPGAs. FPGAs only get larger and faster, and they have an excellent track record for being backward-compatible with prior parts and code. Quite often, we get a "free" upgrade when a new FPGA is pin compatible with an old one, but has several times the number of gates. Unfortunately, it seems almost impossible to get a similar processor upgrade that provides higher performance without other system changes. That's why we will be putting the CPU into an FPGA in many of our upcoming products.

I have visions of a future language (and a compiler for it) that will find the best way to use the resources within an FPGA. When the application is "compiled", part of the code will become software, more of the code will become a CPU custom-crafted to run that software, and the rest will become fixed (or reprogrammable) logic. That is the direction our technology is heading.

Compilers are getting smarter and smarter: only in the past 5 years has VLIW (Itanium, Crusoe, DSPs) become truly practical for non-trivial applications, and it is all due to the compilers. The same goes for parallel processing and clusters: the compilers (and libraries) have made it possible "for the rest of us". On the hardware side, it is now practically impossible for a hardware designer to create an ASIC without "silicon compilers" for his VHDL and Verilog source code.

Both hardware and software folks now have sophisticated code generator front-ends that take much of the drudgery out of implementing the repetitive and simple portions of any design. I use one to create device drivers, and the drivers created with the help of such tools are the best I've ever made. (And I've written lots of device drivers over the years, many in hand-tuned assembler.)

Someday, the compilers and generators will meet, and will be combined into a higher-level tool that will have a correspondingly sophisticated design, development and debug/test environment. I can hardly wait. But for the moment, I'm looking for whatever help I can get. Especially from languages such as D!
 (I'm not really an embedded systems engineer, but I've been
 programming game consoles and development systems for them,
 off and on, since '91 or so.)
Though I wrote my first computer program in '72, I started professionally programming real-time apps for an embedded 8085 target in '83, when 16K RAM and 32K ROM was truly massive for an embedded system (or any micro). I've burned and erased more UV EPROMs than I can count: in-system ROM emulators didn't become practical until the early 90's. Now I can often find a cycle-accurate CPU simulator (actually, I tend to avoid CPUs for which such simulators are not available) and can get the majority of my application implemented, tested and debugged long before the hardware is ready.

Now, if only I could get my CPU simulator tied to the hardware simulator and run them together to do a full system simulation with clock cycle accuracy. One day... Yes, the big boys at General Dynamics and Boeing do this every day. I can't wait for the tools to filter down to those of us working at smaller companies.

-BobC
Oct 15 2001
parent Russell Borogove <kaleja estarcion.com> writes:
"Robert W. Cunningham" wrote:
 We often use two compilers in parallel, and compare the output from both...
 We also use lots of lint-like tools to ensure we are using the language
 features we want, and avoid the ones we don't want...
 When possible, we also create "reference" algorithm implementations in
 completely different environments (such as MathCad or Mathematica). We use
 them to generate independent results, which are then used to test the
 "real" implementation...
Very cool. I wish I could work on projects where we were able to take some of those steps. -RB
Oct 15 2001
prev sibling parent "Walter" <walter digitalmars.com> writes:
Russell Borogove wrote in message <3BCB289D.7375676A estarcion.com>...
If your systems are that sensitive to timing, then aren't you
going to run into all sorts of problems down the road, when your
model Foo microcontrollers are phased out by the company that
makes them and replaced with the Foo-Plus-Turbo models which are
binary-compatible but 2 to 5 times faster depending on the
instruction mix?
Back when I did some hardware design, the rule of thumb was to design it so that replacing parts with faster ones would work. It was ok for it to fail if you plugged in slower ones.
Oct 20 2001
prev sibling parent "Walter" <walter digitalmars.com> writes:
Robert W. Cunningham wrote in message <3BCA2362.8805E797 yahoo.com>...
Sure, you can do some similar stuff using the preprocessor, but that's
tedious and error prone, since the preprocessor language is NOT the
implementation language! They should be one and the same for this to work
cleanly.
Yes, exactly right!
Oct 16 2001
prev sibling parent reply Russ Lewis <spamhole-2001-07-16 deming-os.org> writes:
I would add (f): you put functional code in an assert.  From time to time
I've made the critical mistake of trying to directly check the return code
from a function:

assert(ImportantFunction(...) == true)

At least in my compiler, when I compiled the Release version, the function
was never called.  Oops.
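The usual repair, sketched in D with a hypothetical importantFunction() standing in for the real call: evaluate unconditionally, assert only on the result.

    import std.stdio;

    bool importantFunction(int x)
    {
        writeln("doing real work with ", x);  // side effect we must keep
        return x > 0;
    }

    void main()
    {
        // Wrong: in a release build the whole assert, call included,
        // disappears:
        //     assert(importantFunction(42));

        // Right: the call always runs; only the check is compiled out:
        bool ok = importantFunction(42);
        assert(ok);
    }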

--
The Villagers are Online! http://villagersonline.com

.[ (the fox.(quick,brown)) jumped.over(the dog.lazy) ]
.[ (a version.of(English).(precise.more)) is(possible) ]
?[ you want.to(help(develop(it))) ]
Oct 14 2001
next sibling parent Russell Borogove <kaleja estarcion.com> writes:
Russ Lewis wrote:
 
 I would add (f): you put functional code in an assert.  From time to time
 I've made the critical mistake of trying to directly check the return code
 from a function:

 assert(ImportantFunction(...) == true)

 At least in my compiler, when I compiled the Release version, the function
 was never called.  Oops.
Ah, good point -- this is certainly the easiest way to make behavior different between debug and release versions. -RB
Oct 15 2001
prev sibling parent reply "Walter" <walter digitalmars.com> writes:
In D, it will be an error to have any assert() expression have side effects.
It is not practical for the compiler/runtime to guarantee to diagnose 100%
of the side effects, but they'll be expected to catch the obvious ones like:

    assert(++i);

Russ Lewis wrote in message <3BCA687C.79AAC16B deming-os.org>...
I would add (f): you put functional code in an assert.  From time to time
I've made the critical mistake of trying to directly check the return code
from a function:

assert(ImportantFunction(...) == true)

At least in my compiler, when I compiled the Release version, the function
was never called.  Oops.

--
The Villagers are Online! http://villagersonline.com

.[ (the fox.(quick,brown)) jumped.over(the dog.lazy) ]
.[ (a version.of(English).(precise.more)) is(possible) ]
?[ you want.to(help(develop(it))) ]
Oct 16 2001
next sibling parent reply Russ Lewis <spamhole-2001-07-16 deming-os.org> writes:
Walter wrote:

 In D, it will be an error to have any assert() expression have side effects.
 It is not practical for the compiler/runtime to guarantee to diagnose 100%
 of the side effects, but they'll be expected to catch the obvious ones like:

     assert(++i);
This almost makes me think that you should be able to declare certain functions to have no side effects. A little like you can declare member functions in C++ to be 'const', but applying to side effects.

Obviously, this wouldn't work for any linkages outside of the language... what would be the upsides and downsides of having this in the language for internal functions? Would it be a noticeable increase in compiler complexity? Would it help optimization any?

--
The Villagers are Online! villagersonline.com

.[ (the fox.(quick,brown)) jumped.over(the dog.lazy) ]
.[ (a version.of(English).(precise.more)) is(possible) ]
?[ you want.to(help(develop(it))) ]
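For what it's worth, this is the direction D ultimately took: the pure attribute declares a function free of side effects and lets the compiler enforce it. A minimal sketch (modern D; nothing like it existed when this was posted):

    pure int square(int x)
    {
        return x * x;  // may not read or write mutable global state
    }

    void main()
    {
        assert(square(4) == 16);  // safe in an assert: no side effects
    }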
Oct 17 2001
next sibling parent "Ben Cohen" <bc skygate.co.uk> writes:
In article <3BCDA7EC.3FEBB32B deming-os.org>, "Russ Lewis"
<spamhole-2001-07-16 deming-os.org> wrote:

 Walter wrote:
 
 In D, it will be an error to have any assert() expression have side
 effects. It is not practical for the compiler/runtime to guarantee to
 diagnose 100% of the side effects, but they'll be expected to catch the
 obvious ones like:

     assert(++i);
This almost makes me think that you should be able to declare certain functions to have no side effects. A little like you can declare member functions in C++ to be 'const', but applying to side effects.
Yes, I agree with this idea, and was considering suggesting the same thing. This would be useful as a guide to programmers to show that the function doesn't have any side effects.

gcc has an extension attribute "const" which does just this (plus the function doesn't examine any other values, apparently). (gcc also has a "noreturn" attribute which means that the function doesn't return control to the caller. This might also be useful.)
 Obviously, this wouldn't work for any linkages outside of the
 language...what would be the upsides and downsides of having this in the
 language for internal functions?  Would it be a noticable increase in
 compiler complexity?  Would it help optimization any?
Apparently it can be used in common subexpression elimination and loop optimisation.

Presumably the compiler could work this out itself? In that case, the programmer doesn't have to worry about getting it wrong, and the result of the test for this can be stored in the symbol table for the module.

By the way, could the set of exported interfaces be produced (by some utility) from the symbol table in a compiled module in readable form? (Something like a .h file.)
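As it happens, D's reference compiler grew exactly such a utility: dmd -H emits a readable "D interface" (.di) file from a module, much like a .h file. A sketch (modern dmd; the module is hypothetical):

    // point.d -- `dmd -H -c point.d` writes point.di, the module's
    // exported interface in readable D source form.
    module point;

    struct Point
    {
        int x, y;
        int dot(Point o) { return x*o.x + y*o.y; }
    }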
Oct 17 2001
prev sibling next sibling parent reply "Walter" <walter digitalmars.com> writes:
"Russ Lewis" <spamhole-2001-07-16 deming-os.org> wrote in message
news:3BCDA7EC.3FEBB32B deming-os.org...
 Walter wrote:

 In D, it will be an error to have any assert() expression have side effects.
 It is not practical for the compiler/runtime to guarantee to diagnose 100%
 of the side effects, but they'll be expected to catch the obvious ones like:

     assert(++i);

 This almost makes me think that you should be able to declare certain
 functions to have no side effects.  A little like you can declare member
 functions in C++ to be 'const', but applying to side effects.

 Obviously, this wouldn't work for any linkages outside of the
 language... what would be the upsides and downsides of having this in the
 language for internal functions?  Would it be a noticeable increase in
 compiler complexity?  Would it help optimization any?
I'd rather try and have the compiler figure out if a function has side effects or not - it will be more reliable. The compiler already can reliably figure out if it is "noreturn".
Oct 17 2001
parent reply Axel Kittenberger <axel dtone.org> writes:
 I'd rather try and have the compiler figure out if a function has side
 effects or not - it will be more reliable. The compiler already can
 reliably figure out if it is "noreturn".
But in view of the "programming by contract" paradigm, should the programmer
be allowed to write the contract: "I will not depend on or alter any global
variables"?

- Axel
--
|D) http://www.dtone.org
Oct 17 2001
parent "Walter" <walter digitalmars.com> writes:
That might be a good idea.

Axel Kittenberger wrote in message <9qlrq6$1l62$2 digitaldaemon.com>...
 I'd rather try and have the compiler figure out if a function has side
 effects or not - it will be more reliable. The compiler already can
 reliably figure out if it is "noreturn".
But in view of the "programming by contract" paradigm, should the programmer
be allowed to write the contract: "I will not depend on or alter any global
variables"?

- Axel
--
|D) http://www.dtone.org
Oct 18 2001
prev sibling parent Axel Kittenberger <axel dtone.org> writes:
 This almost makes me think that you should be able to declare certain
 functions
 to have no side effects.  A little like you can declare member functions
 in C++ to be 'const', but applying to side effects.
Look at the gcc manual, Section 5, "Extensions to the C Language Family", attribute 'const' for functions:

-----------------------------------------
const

Many functions do not examine any values except their arguments, and have no effects except the return value. Basically this is just a slightly more strict class than the pure attribute above, since the function is not allowed to read global memory.

Note that a function that has pointer arguments and examines the data pointed to must not be declared const. Likewise, a function that calls a non-const function usually must not be const. It does not make sense for a const function to return void.

The attribute const is not implemented in GCC versions earlier than 2.5. An alternative way to declare that a function has no side effects, which works in the current version and in some older versions, is as follows:
-----------------------------------------
Oct 17 2001
prev sibling parent reply "Sean L. Palmer" <spalmer iname.com> writes:
THANK YOU!!!  <smooooch>

You've just knocked out the entire (f) class of bugs for D programmers!  ;)
Ok maybe 99% of the (f) bugs.

Sean

"Walter" <walter digitalmars.com> wrote in message
news:9qj3h5$2ml$3 digitaldaemon.com...
 In D, it will be an error to have any assert() expression have side effects.
 It is not practical for the compiler/runtime to guarantee to diagnose 100%
 of the side effects, but they'll be expected to catch the obvious ones like:

     assert(++i);
Oct 19 2001
parent reply "Ben Cohen" <bc skygate.co.uk> writes:
In article <9qophg$bku$1 digitaldaemon.com>, "Sean L. Palmer"
<spalmer iname.com> wrote:

 THANK YOU!!!  <smooooch>
 
 You've just knocked out the entire (f) class of bugs for D programmers!
 ;) Ok maybe 99% of the (f) bugs.
 
 Sean
 
 "Walter" <walter digitalmars.com> wrote in message
 news:9qj3h5$2ml$3 digitaldaemon.com...
 In D, it will be an error to have any assert() expression have side effects.
 It is not practical for the compiler/runtime to guarantee to diagnose 100%
 of the side effects, but they'll be expected to catch the obvious ones like:

     assert(++i);
Why stop at assert? Why not have a modifier for function parameters which says that whenever the function is called, the expression used to generate that parameter can't have side-effects. This would be useful for writing your own assert handler, and perhaps other issues such as thread safety and order of expression evaluation. For example:

    int x = 4;

    int modify_var()
    {
        x++;
        return x;
    }

    void use_var(in no_effect my_int)
    {
        printf("%d\n", x + my_int);
    }

    use_var(4);                              /* these are allowed */
    use_var(x);
    use_var(get_current_time_in_seconds());

    use_var(x++);                            /* these are not allowed */
    use_var(modify_var());
Oct 19 2001
parent reply Axel Kittenberger <axel dtone.org> writes:
 Why stop at assert?  Why not have a modifier for function parameters which
 says that whenever the function is called, the expression used to generate
 that parameter can't have side-effects.  This would be useful for writing
 your own assert handler, and perhaps other issues such as thread safety
 and order of expression evaluation.
Why stop even here? Why not make a language that does not allow any stupid side effects at all :)))))

=> ending up in the java-pascal line again? :o)

- Axel
Oct 19 2001
next sibling parent "Ben Cohen" <bc skygate.co.uk> writes:
In article <9qp606$i8b$1 digitaldaemon.com>, "Axel Kittenberger"
<axel dtone.org> wrote:

 Why stop at assert?  Why not have a modifier for function parameters
 which says that whenever the function is called, the expression used to
 generate that parameter can't have side-effects.  This would be useful
 for writing your own assert handler, and perhaps other issues such as
 thread safety and order of expression evaluation.
Why stop even here? Why not make a language that does not allow any stupid side effects at all :)))))

=> ending up in the java-pascal line again? :o)
I am thinking of a cross between Ada, Python and C -- preferably the best bits from each. :) I don't like the overall feel of Java.
Oct 19 2001
prev sibling parent reply Axel Kittenberger <axel dtone.org> writes:
 Why stop even here? Why not make a language that does not allow any stupid
 side effects at all :)))))
 
 => ending up in the java-pascal line again? :o)
When I think more about side effect statements, looking back today on 20? years of C, I think one can say side effects like ++, --, += and all that were stupid ideas, and brought nothing but problems in practice. I think a clean language should either forbid these constructs altogether (like java or the pascal family) or have defined behavior for things like:

    a[i] = i++;
    printf("%d %d", i++, i);
    std::cout << i*=2 << i*=2;
    ... etc...

Or guarantee to produce a compiler error. The problem is only that sometimes the expressions can be so complicated that it is nearly impossible to track all dependencies between the sync-points. GCC nowadays produces a warning in many situations, but still is not able to catch them all :(

--
|D) http://www.dtone.org

- Axel
Oct 19 2001
parent reply a <a b.c> writes:
Axel Kittenberger wrote:
 
 Why stop even here? Why not make a language that does not allow any stupid
 side effects at all :)))))

 => ending up in the java-pascal line again? :o)
When I think more about side effect statements, looking back today on 20? years of C, I think one can say side effects like ++, --, += and all that were stupid ideas, and brought nothing but problems in practice.
Why stop here. Get rid of all operators. There is NO ambiguity in lisp! When you look at the source, you're lookin' at the parse tree.

For what it's worth, there are schools of thought that say there should not be any assignment operator. There are others that believe that x+=y is better than x=x+y since it better resembles an accumulator.

Dan
Oct 19 2001
parent reply Axel Kittenberger <axel dtone.org> writes:
 Why stop here.  Get rid of all operators.  There is NO ambiguity in
 lisp!  
Pah! Sarcasm, that's a bad move in a discussion. :o( Operators are something of convenience, I know you don't need them, but they make things easier to read.
 When you look at the source, you're lookin' at the parse tree.
 For what it's worth, there are schools of thought that say there should
 not be any assignment operator.  There are others that believe that x+=y
 is better than x=x+y since it better resembles an accumulator.
That += fact is so old, and has long since stopped being valid. In K&R C in 1980 maybe it mattered if you wrote x+=y instead of x=x+y, since compilers in those days wrote assembly directly during parsing, with no syntax tree as an intermediate step, but today -every- compiler can at least see from the syntax tree how to compile x=x+y optimally.
Oct 21 2001
parent reply a <a b.c> writes:
Axel Kittenberger wrote:
 
 Why stop here.  Get rid of all operators.  There is NO ambiguity in
 lisp!
Pah! Sarcasm, that's a bad move in a discussion. :o( Operators are something of convenience, I know you don't need them, but they make things easier to read.
I stand by my sarcasm. I suspect some lisp fanatics would back me here. (They might not even be sarcastic when they say it.)
 When you look at the source, you're lookin' at the parse tree.
 For what it's worth, there are schools of thought that say there should
 not be any assignment operator.  There are others that believe that x+=y
 is better than x=x+y since it better resembles an accumulator.
That += fact is so old, and has long since stopped being valid. In K&R C in 1980 maybe it mattered if you wrote x+=y instead of x=x+y, since compilers in those days wrote assembly directly during parsing, with no syntax tree as an intermediate step, but today -every- compiler can at least see from the syntax tree how to compile x=x+y optimally.
As an optimization, += is not needed. That still does not change the fact that it provides the abstraction of an accumulator better than the alternative. Even with optimizers, there is still an important difference between += and the alternative if the lvalue contains a function call:

    *(f(x)) = *(f(x)) + y
    *(f(x)) += y

These two lines could be very different.

Lastly, += style operators may not be needed (and I won't commit to that for now) but at the very least, they are something of convenience and, for those of us who know the language, they make things easier to read.
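A sketch of that difference in D, with a hypothetical f() whose side effect (a call counter) makes the distinction visible; the += form evaluates the lvalue expression exactly once:

    import std.stdio;

    int calls;

    int* f(int* p)
    {
        ++calls;   // visible side effect: count the invocations
        return p;
    }

    void main()
    {
        int x = 1, y = 10;

        *f(&x) = *f(&x) + y;  // f() runs twice
        writeln(calls);       // 2

        calls = 0;
        *f(&x) += y;          // f() runs once
        writeln(calls);       // 1
    }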
Oct 21 2001
next sibling parent reply Axel Kittenberger <axel dtone.org> writes:
 *(f(x)) = *(f(x)) + y
 *(f(x)) += y
 
 These two lines could be very different.
Are they? You've two possibilities to decide between: function f() either depends on global variables that it also changes, or not.

If it does, the first expression is invalid in C either way; you may in that case not rely on any order of f() being called (or on whether it is called twice at all). If it does not depend on global vars it changes (which the compiler could assume, otherwise the statement would be invalid), then it does not matter whether it is called twice or once, and since x is equal in both cases the compiler is also allowed to decide to call f() only once.

Actually, for cleanness purposes f() should have some 'const' attribute or other contract saying it does not depend on or change global vars; in my opinion, if it does an error should be created either way, and if it does not, there is no difference between + and +=.

And after all, what is easier to understand in your opinion?

    *(f(x)) += y;

or, for example:

    int &p = *f(x);
    p = p + y;
 Lastly, += style operators may not be needed (and I won't commit to
 that for now) but at the very least, they are something of convenience
 and, for those of us who know the language, they make things easier to
 read.
I have worked with C for years now, and honestly, in my opinion they do not make things easier to read when reading -others- code. Variables that are changed in places you do not expect can be horrific when trying to understand code, especially when the changes are combined with && and || tokens. Something somewhere in a context like this:

    (a++ == 1) && (x*=2);

What does it do? In clearer text:

    if (a == 1)
    {
        x = x * 2;
    }
    a = a + 1;

I've nothing against shortcut operators if people think it's worth the 3 characters saved in typing. But I spoke against side-effect operations, like assignments inside other expressions or function calls. This is something I would have reservations about no matter how the operator syntax looks:

    fprintf(stdout, "%d", x = x + 1);

- Axel
--
|D) http://www.dtone.org
Oct 21 2001
parent reply Russ Lewis <spamhole-2001-07-16 deming-os.org> writes:
Axel Kittenberger wrote:

 *(f(x)) = *(f(x)) + y
 *(f(x)) += y

 These two lines could be very different.
 Are they? You've two possibilities to decide between: function f() either
 depends on global variables that it also changes, or not.
Careful here. You're thinking single threaded... and for a guy like me, who hopes to use D as the basis for a multithreaded library, that's fatal! :(

f() could depend on volatile global data that f() does not change. It grabs a lock, reads it, and by the time that the 2nd function call comes, another thread has grabbed the lock and changed the underlying data.

--
The Villagers are Online! http://villagersonline.com

.[ (the fox.(quick,brown)) jumped.over(the dog.lazy) ]
.[ (a version.of(English).(precise.more)) is(possible) ]
?[ you want.to(help(develop(it))) ]
Oct 21 2001
parent Russ Lewis <spamhole-2001-07-16 deming-os.org> writes:
Hmmm... thought some more, and that last comment seemed hasty. I apologize
if it wasn't thoughtful enough.

Let me qualify it by noting that in C/C++ you can assume that non-volatile
variables won't change like that. But eventually that means that I, with a
multithreaded library, pretty much have to declare ALL of my variables
volatile, since all of them might be changed by actions on a peer thread.
Thus, a modern language should allow more clarity.

For example, you might want to declare something as "volatile when lock x is
not held," which would cover most of what I'm looking for. It would require
that the language be aware of locks, but would be good because it could
optimize some things (when it knows that the lock is held) and not optimize
others (when the lock is not held). Imagine this pseudo-syntax:

Lock foo;  // this declares a Lock object
volatile-when-not-held(foo) int bar;
foo.Lock();  // this performs the lock action
while(bar != 0)
{
   foo.Unlock();
   // wait on some signal here
   foo.Lock();
};
baz(bar);  // calls the function
foo.Unlock();

In this example, the only time that the compiler optimizes bar is when the
test works; in that case, the lock is held so it sees that it is legal to
consider bar non-volatile. However, every time you cycle the lock-unlock on
foo, it forgets the old value of bar and reloads it, assuming that things
might have changed.

It would also be good to have an "atomic" keyword. Some values are atomic in
some architectures; for places where they are not, you could either throw a
syntax error or, if it was workable and the implementer decided to do it,
you could have the compiler implement a hidden lock to protect that
variable.
--
The Villagers are Online! http://villagersonline.com

.[ (the fox.(quick,brown)) jumped.over(the dog.lazy) ]
.[ (a version.of(English).(precise.more)) is(possible) ]
?[ you want.to(help(develop(it))) ]
Oct 21 2001
prev sibling parent reply "Robert W. Cunningham" <rwc_2001 yahoo.com> writes:
a wrote:

 Axel Kittenberger wrote:
 That += fact is so old, and has long since stopped being valid. In K&R C in
 1980 maybe it mattered if you wrote x+=y instead of x=x+y, since compilers
 in those days wrote assembly directly during parsing, with no syntax tree
 as an intermediate step, but today -every- compiler can at least see from
 the syntax tree how to compile x=x+y optimally.
As an optimization, += is not needed. That still does not change the fact that it provides the abstraction of an accumulator better than the alternative. Even with optimizers, there is still an important difference between += and the alternative if the lvalue contains a function call:

    *(f(x)) = *(f(x)) + y
    *(f(x)) += y

These two lines could be very different.

Lastly, += style operators may not be needed (and I won't commit to that for now) but at the very least, they are something of convenience and, for those of us who know the language, they make things easier to read.
In general, for any LHS that has a side-effect, the extended assignment operators are very useful. They serve to avoid adding intermediate variables merely to bypass the side effects (variables that many compilers fail to optimize away). This especially applies when accessing hardware registers! If you've ever written a low-level device driver, you'd know.

In several compilers, you can flag such expressions as being atomic, so their execution cannot be interrupted part way through. This simple enhancement eliminates the need for many hard and soft mutexes (for SMP and multi-threaded code) as well as minimizing the time that interrupts have to be disabled. I like it when the compiler can help me make low-latency thread-safe and highly reentrant code. Consider it a "wish-list" item for the complex assignment operators in D!

Again, this is all based on D being aimed toward being not just a general-purpose language, but also an excellent low-level systems programming language, all the while being easier, safer and more powerful than C, C++ and Java. I am aware of no other language under active development with these goals, with the possible exception of the EC++ standardization effort (of which I was a minor participant). The goal there was to find the right balance between C and C++ that was optimized for embedded systems. It was initiated by the Japanese car manufacturers, and soon spread world-wide.

The process involved in creating the language had the explicit limitation that it could NOT create a "new" language: it had to be a strict subset of ANSI C++, so it would be guaranteed to compile under any C++ compiler. That means they could only delete things from the C++ spec to create the EC++ language spec, though there was also lots of work on creating versions of the STL and the other standard libraries that were optimized for EC++. P.J. Plauger's Dinkumware company has produced both free and commercial versions of these libraries, and they help make EC++ SCREAM in embedded environments! At least two compilers have provided some form of support for EC++: GCC and Green Hills.

However, the EC++ effort has failed to obtain a huge following, and the reasons are obvious:

1. So much of C++ is "anti-real-time" and "anti-embedded" that some of the most powerful features of C++ had to be left behind. However, as compilers get smarter, and more effective and efficient implementation strategies become available, EC++ is expected to backtrack on some of its earlier "butchering" of the C++ spec. This presently means that it is very hard to train a C++ person to become a good EC++ programmer.

2. Very few real-time and embedded (RT&E) systems are designed and built from an OO perspective. This is due to many reasons: there are few tools that are truly useful for supporting OOA&D for RT&E systems. This is slowly changing, but I have yet to see a tool or suite I'd recommend using for the systems I've had to create. Also, many RT&E engineers were educated before the OO "boom" of the late 80's and early 90's. Furthermore, today's CS curriculum is not generating many of the kind of engineers needed to create RT&E systems. This two-way education gap has to close before OO will become common in RT&E systems. As senior RT&E engineers retire, it is becoming ever harder to fill their positions (believe me, I know!).

So, things need to change. We need to bring the power of OO to systems and RT&E programming. IMHO, the D language may well be the best effort in that direction I've seen since the EC++ effort.
And that's one (long) reason why I want to keep the complex assignment operators! -BobC
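For the record, modern D spells the atomic read-modify-write Bob describes via core.atomic; a sketch (not something that was on the table in 2001):

    import core.atomic;

    shared int hits;

    void bump()
    {
        atomicOp!"+="(hits, 1);  // one indivisible read-modify-write, no mutex
    }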
Oct 21 2001
parent Axel Kittenberger <axel dtone.org> writes:
 They serve to avoid adding intermediate variables merely to bypass the
 side effects (variables that many compilers fail to optimize away).
gcc is able to optimize some of them away, and does: if you debug the
optimized code you'll notice that some of your variables don't exist after
all, and thus can't be inspected.

 This especially applies when accessing hardware registers!  If you've ever
 written a low-level device driver, you'd know.
I did write low-level (linux) drivers, and if I had wanted to I would not
have needed them.

 In several compilers, you can flag such expressions as being atomic, so
 their execution cannot be interrupted part way through.
As I see it, side effects or their absence have no influence on atomicness.

- Axel
--
|D) http://www.dtone.org
Oct 21 2001