digitalmars.D.announce - DMD 1.021 and 2.004 releases
- Walter Bright (5/5) Sep 05 2007 Mostly bug fixes for CTFE. Added library switches at Tango's request.
- Sean Kelly (3/4) Sep 05 2007 Awesome! And great job, as always.
- Lars Ivar Igesund (7/8) Sep 05 2007 Great! And now for GDC to follow suit ;)
- Gregor Richards (3/9) Sep 05 2007 GDC followed suit roughly twenty years before GDC was written.
- Sean Kelly (3/5) Sep 05 2007 Now there's a paradox for you... ;-)
- BLS (8/15) Sep 05 2007 Multiple
- Sean Kelly (8/27) Sep 05 2007 I thought they were already supported, but here's an example:
- Walter Bright (2/11) Sep 05 2007 They were already supported, they just didn't work :-(
- Sean Kelly (3/15) Sep 05 2007 Oh! Then why not make this change to the 1.0 release as well?
- Chris Nicholson-Sauls (3/21) Sep 05 2007 Walter: Yes please! Great job on the latest update, btw. (As if you ha...
- Sean Kelly (3/22) Sep 05 2007 My mistake. I thought this was only in the 2.0 changelog but it's in bo...
- Chris Nicholson-Sauls (3/28) Sep 05 2007 Pardon me while I do my happy dance.
- BLS (9/42) Sep 05 2007 Hm. Okay, I am able to call a static ctor/dtor from a foreign module ..
- Walter Bright (4/5) Sep 05 2007 In a long module, you can organize the static constructor code in a way
- Chad J (5/11) Sep 05 2007 Badass, it is good to see this rough edge get smoothed. Thank you Walte...
- Bill Baxter (3/10) Sep 05 2007 What's std.hiddenfunc for? I looked at the code but it didn't help.
- Walter Bright (3/4) Sep 05 2007 It's an exception thrown when an overridden function that still exists
- Brad Roberts (18/25) Sep 05 2007 The "Download latest D 2.0 alpha D compiler for Win32 and x86 linux" lin...
- Walter Bright (4/18) Sep 05 2007 I think there is still a need, as there's always a risk I break
- Don Clugston (8/21) Sep 06 2007 1.020 seemed to be stable. Like 1.016, it was around for a long time, an...
- Sean Kelly (3/6) Sep 06 2007 Delimited string literals?
- Walter Bright (3/14) Sep 06 2007 Fixed.
- BCS (2/9) Sep 06 2007 where's the docs?
- Nathan Reed (5/18) Sep 06 2007 The docs for delimited string literals are now at
- Sean Kelly (3/20) Sep 06 2007 And the lecture slides have more info, obviously.
- BCS (3/25) Sep 06 2007 I wish Walter would put more links from the change log into the docs (an...
- Nathan Reed (5/25) Sep 06 2007 Actually, the docs on the web go into a bunch more detail than the
- Stewart Gordon (6/8) Sep 09 2007 "Nathan Reed" wrote in message
- Kirk McDonald (9/21) Sep 10 2007 I've already updated the Pygments syntax highlighter with this new
- Stewart Gordon (6/9) Sep 10 2007 "Kirk McDonald" wrote in message
- Walter Bright (5/7) Sep 10 2007 Delimited strings are standard practice in Perl. C++0x is getting
- Kirk McDonald (9/20) Sep 10 2007 Which, since there's no nesting going on, are actually very easy to
- Walter Bright (4/11) Sep 10 2007 I meant the:
- Kirk McDonald (56/74) Sep 10 2007 Those are also fairly easy. The Pygments lexer only highlights the
- Chris Nicholson-Sauls (4/33) Sep 10 2007 That's pretty danged nifty. Any chance, however, that it could apply a ...
- Kirk McDonald (13/52) Sep 11 2007 Not really. It would require defining a new token which highlights the
- Walter Bright (2/5) Sep 11 2007 Sweet!
- Jari-Matti =?ISO-8859-1?Q?M=E4kel=E4?= (13/30) Sep 11 2007 It's great to see Pygments handles so many possible syntaxes. Unfortunat...
- Jascha Wetzel (7/40) Sep 11 2007 D's delimited strings can (luckily) be scanned with regular languages,
- Jari-Matti =?ISO-8859-1?Q?M=E4kel=E4?= (8/14) Sep 11 2007 But e.g. syntax highlighting needs the semantic info to change the style...
- Jascha Wetzel (10/28) Sep 11 2007 before, the lexical structure was context free because of nested
- Jari-Matti =?ISO-8859-1?Q?M=E4kel=E4?= (11/20) Sep 11 2007 Nested comments don't necessarily need much more than a constant size
- Jascha Wetzel (4/20) Sep 11 2007 it makes the lexer context free, though, and it therefore cannot be
- Kirk McDonald (21/75) Sep 11 2007 Is the following a valid string?
- Jascha Wetzel (6/19) Sep 11 2007 what string would that represent?
- Kirk McDonald (8/36) Sep 11 2007 I would expect it to represent foo/bar, in the same way that
- Aziz K. (7/9) Sep 12 2007 '/' is not a nesting delimiter. I think q"/foo/bar/" should be scanned a...
- Kirk McDonald (16/28) Sep 12 2007 When I updated the Pygments lexer, I interpreted it like this: It sees
- Kirk McDonald (22/61) Sep 11 2007 While D now requires a fairly powerful lexer to lex properly, it's still...
- Bruno Medeiros (6/39) Sep 11 2007 Ok, why would syntax highlighting have to be implemented with a regexp
- Stewart Gordon (9/16) Sep 11 2007 But how many editors do a good job of syntax-highlighting Perl anyway,
- BCS (3/14) Sep 06 2007 OK I see DelimitedString and TokenString in the BNF but the docs seem to ...
- Nathan Reed (5/21) Sep 06 2007 What are you referring to? There are two doc sections, "Delimited
- BCS (3/27) Sep 06 2007 oops I read the table heading "Nesting Delimiters" as a section heading
- Reiner Pope (8/25) Sep 06 2007 According to the docs,
- Walter Bright (2/12) Sep 06 2007 It's a typo. Replace the ? with }.
- Ary Manzana (4/17) Sep 07 2007 I also thought it was a ?. Specially since the same example is in the
- Bruno Medeiros (7/24) Sep 07 2007 Speaking of which, what is the purpose of delimiter strings and the like...
- Walter Bright (3/6) Sep 10 2007 Makes it easier to insert arbitrary text as a string without having to
- Robert Fraser (2/9) Sep 05 2007 Wow, thanks! It was definitely worth the wait! Also, thanks for adding a...
- yidabu (5/5) Sep 05 2007 Building every program fails with:
- Sascha Katzner (7/9) Sep 06 2007 I've just encountered the same error ("Error: 'QuadPart' is not a member...
- Stewart Gordon (8/14) Sep 09 2007 Indeed. I might have to go back to 1.020 pending a fix.
- Daniel Keep (10/17) Sep 05 2007 *ahem*
- Chad J (8/15) Sep 05 2007 Sweet, I like it. Thank you!!111
- negerns (3/3) Sep 05 2007 Also, the -defaultlib and -debuglib switches do not appear in the dmd
- Walter Bright (3/7) Sep 05 2007 Try:
- Chad J (3/11) Sep 06 2007 Ah, that works. As negerns mentioned, this doesn't show in the dmd
- Brad Roberts (3/21) Sep 05 2007 Try running dmd by itself and checking the version. I'll bet you
- Max Samukha (3/8) Sep 06 2007 Thanks a lot!
- Aziz K. (18/18) Sep 10 2007 Hello Walter,
- Walter Bright (6/22) Sep 11 2007 No, no.
- Aziz K. (26/30) Sep 11 2007 Thanks for clarifying. While implementing the methods in my lexer for
- Walter Bright (8/44) Sep 12 2007 Yes.
- BCS (4/9) Sep 12 2007 q"EOF
- Bill Baxter (29/42) Sep 13 2007 Or peel off the last line:
Mostly bug fixes for CTFE. Added library switches at Tango's request. http://www.digitalmars.com/d/1.0/changelog.html http://ftp.digitalmars.com/dmd.1.021.zip http://www.digitalmars.com/d/changelog.html http://ftp.digitalmars.com/dmd.2.004.zip
Sep 05 2007
Walter Bright wrote:Mostly bug fixes for CTFE. Added library switches at Tango's request.Awesome! And great job, as always. Sean
Sep 05 2007
Walter Bright wrote:Mostly bug fixes for CTFE. Added library switches at Tango's request.Great! And now for GDC to follow suit ;) -- Lars Ivar Igesund blog at http://larsivi.net DSource, #d.tango & #D: larsivi Dancing the Tango
Sep 05 2007
Lars Ivar Igesund wrote:Walter Bright wrote:GDC followed suit roughly twenty years before GDC was written. - Gregor RichardsMostly bug fixes for CTFE. Added library switches at Tango's request.Great! And now for GDC to follow suit ;)
Sep 05 2007
Gregor Richards wrote:GDC followed suit roughly twenty years before GDC was written.Now there's a paradox for you... ;-) Sean
Sep 05 2007
Walter Bright schrieb:Mostly bug fixes for CTFE. Added library switches at Tango's request. http://www.digitalmars.com/d/1.0/changelog.html http://ftp.digitalmars.com/dmd.1.021.zip http://www.digitalmars.com/d/changelog.html http://ftp.digitalmars.com/dmd.2.004.zipMultiple Module static constructors/destructors allowed. Unfortunately I have no idea what a "multiple module constructor" is. A code snippet showing a multi. module constructor in action would help. Sorry about my ignorance and thanks in advance. Bjoern
Sep 05 2007
BLS wrote:Walter Bright schrieb:I thought they were already supported, but here's an example: module MyModule; static this() { printf( "ctor A\n" ); } static this() { printf( "ctor B\n" ); } static ~this() { printf( "dtor A\n" ); } static ~this() { printf( "dtor B\n" ); } SeanMostly bug fixes for CTFE. Added library switches at Tango's request. http://www.digitalmars.com/d/1.0/changelog.html http://ftp.digitalmars.com/dmd.1.021.zip http://www.digitalmars.com/d/changelog.html http://ftp.digitalmars.com/dmd.2.004.zipMultiple Module static constructors/destructors allowed. Unfortunately I have no idea what a "multiple module constructor" is. A code snippet showing a multi. module constructor in action would help. Sorry about my ignorance and thanks in advance.
Sep 05 2007
Sean Kelly wrote:I thought they were already supported, but here's an example: module MyModule; static this() { printf( "ctor A\n" ); } static this() { printf( "ctor B\n" ); } static ~this() { printf( "dtor A\n" ); } static ~this() { printf( "dtor B\n" ); }They were already supported, they just didn't work :-(
Sep 05 2007
Walter Bright wrote:Sean Kelly wrote:Oh! Then why not make this change to the 1.0 release as well? SeanI thought they were already supported, but here's an example: module MyModule; static this() { printf( "ctor A\n" ); } static this() { printf( "ctor B\n" ); } static ~this() { printf( "dtor A\n" ); } static ~this() { printf( "dtor B\n" ); }They were already supported, they just didn't work :-(
Sep 05 2007
Sean Kelly wrote:Walter Bright wrote:Walter: Yes please! Great job on the latest update, btw. (As if you haven't heard it yet.) -- Chris Nicholson-SaulsSean Kelly wrote:Oh! Then why not make this change to the 1.0 release as well? SeanI thought they were already supported, but here's an example: module MyModule; static this() { printf( "ctor A\n" ); } static this() { printf( "ctor B\n" ); } static ~this() { printf( "dtor A\n" ); } static ~this() { printf( "dtor B\n" ); }They were already supported, they just didn't work :-(
Sep 05 2007
Chris Nicholson-Sauls wrote:Sean Kelly wrote:My mistake. I thought this was only in the 2.0 changelog but it's in both. SeanWalter Bright wrote:Walter: Yes please! Great job on the latest update, btw. (As if you haven't heard it yet.)Sean Kelly wrote:Oh! Then why not make this change to the 1.0 release as well?I thought they were already supported, but here's an example: module MyModule; static this() { printf( "ctor A\n" ); } static this() { printf( "ctor B\n" ); } static ~this() { printf( "dtor A\n" ); } static ~this() { printf( "dtor B\n" ); }They were already supported, they just didn't work :-(
Sep 05 2007
Sean Kelly wrote:Chris Nicholson-Sauls wrote:Pardon me while I do my happy dance. -- Chris Nicholson-SaulsSean Kelly wrote:My mistake. I thought this was only in the 2.0 changelog but it's in both. SeanWalter Bright wrote:Walter: Yes please! Great job on the latest update, btw. (As if you haven't heard it yet.)Sean Kelly wrote:Oh! Then why not make this change to the 1.0 release as well?I thought they were already supported, but here's an example: module MyModule; static this() { printf( "ctor A\n" ); } static this() { printf( "ctor B\n" ); } static ~this() { printf( "dtor A\n" ); } static ~this() { printf( "dtor B\n" ); }They were already supported, they just didn't work :-(
Sep 05 2007
Sean Kelly schrieb:BLS wrote:Hm. Okay, I am able to call a static ctor/dtor from a foreign module, but the semantic association I have regarding static module constructors is different: something like loading one or more modules at compile time, picking up some ctor info from module A (containing A.X, A.Y) and from module B (containing B.C), and initializing ALL the good stuff in A and B from C, which in your example is MyModule. However, I have no idea what advantages this feature really has. BjoernWalter Bright schrieb:I thought they were already supported, but here's an example: module MyModule; static this() { printf( "ctor A\n" ); } static this() { printf( "ctor B\n" ); } static ~this() { printf( "dtor A\n" ); } static ~this() { printf( "dtor B\n" ); } SeanMostly bug fixes for CTFE. Added library switches at Tango's request. http://www.digitalmars.com/d/1.0/changelog.html http://ftp.digitalmars.com/dmd.1.021.zip http://www.digitalmars.com/d/changelog.html http://ftp.digitalmars.com/dmd.2.004.zipMultiple Module static constructors/destructors allowed. Unfortunately I have no idea what a "multiple module constructor" is. A code snippet showing a multi. module constructor in action would help. Sorry about my ignorance and thanks in advance.
Sep 05 2007
BLS wrote:However. I have no idea which advantages this feature really has.In a long module, you can organize the static constructor code in a way that makes sense, rather than being forced to put it all in one place. It also makes it practical to mixin code that requires static construction.
Sep 05 2007
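[Editor's sketch, not part of the thread: an illustration of the mixin case Walter mentions. The template name Register and its printf body are invented, following the D1-style printf usage in Sean's example above. Each instantiation contributes its own static this(), which relies on a module being allowed more than one static constructor.]

// One static constructor per mixin instantiation, plus a hand-written one:
// three static ctors in a single module, now accepted and working.
template Register(char[] name)
{
    static this() { printf("registering %.*s\n", name); }
}

mixin Register!("Foo");
mixin Register!("Bar");                               // second static ctor
static this() { printf("module-level setup\n"); }     // third static ctor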
Walter Bright wrote:BLS wrote:Badass, it is good to see this rough edge get smoothed. Thank you Walter! I think I used to have some hack where I would wrap each static ctor in its own class, and somehow this would make it work. I'm not sure if that's correct or not though.However. I have no idea which advantages this feature really has.In a long module, you can organize the static constructor code in a way that makes sense, rather than being forced to put it all in one place. It also makes it practical to mixin code that requires static construction.
Sep 05 2007
Walter Bright wrote:Mostly bug fixes for CTFE. Added library switches at Tango's request. http://www.digitalmars.com/d/1.0/changelog.html http://ftp.digitalmars.com/dmd.1.021.zip http://www.digitalmars.com/d/changelog.html http://ftp.digitalmars.com/dmd.2.004.zipWhat's std.hiddenfunc for? I looked at the code but it didn't help. --bb
Sep 05 2007
Bill Baxter wrote:What's std.hiddenfunc for? I looked at the code but it didn't help.It's an exception thrown when an overridden function that still exists in the vtbl[] gets called anyway.
Sep 05 2007
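[Editor's sketch, not from the thread: the editor's reading of the hidden-function case Walter describes. Class and method names are invented; the point is that the hidden overload keeps its vtbl[] slot, and calling it on a derived object raises the error defined in std.hiddenfunc instead of silently running the wrong code.]

class Base
{
    void write(int x)  { }
    void write(long x) { }
}

class Derived : Base
{
    void write(int x) { }   // hides Base.write(long); no 'alias Base.write write;'
}

void main()
{
    Base b = new Derived;
    b.write(1L);            // resolves to write(long); for a Derived object that
                            // vtbl[] slot triggers the hidden-function error
}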
On Wed, 5 Sep 2007, Walter Bright wrote:Mostly bug fixes for CTFE. Added library switches at Tango's request. http://www.digitalmars.com/d/1.0/changelog.html http://ftp.digitalmars.com/dmd.1.021.zip http://www.digitalmars.com/d/changelog.html http://ftp.digitalmars.com/dmd.2.004.zipThe "Download latest D 2.0 alpha D compiler for Win32 and x86 linux" link on http://www.digitalmars.com/d/changelog.html still points to 2.002. Similarly, though at least labeled, the 1.0 changelog still points to 1.016, now 5 versions behind? Now that the 1.0 code line is no longer receiving anything other than bug fixes, is there really the need to distinguish between the latest 1.0 release and some other really stable 1.0 release? http://www.digitalmars.com/d/1.0/dcompiler.html#Win32 still lists all the 1.00 (not 1.x) mirrors. The same with http://www.digitalmars.com/d/1.0/dcompiler.html#linux. There's a non 1.0 scoped version of the page, http://www.digitalmars.com/d/dcompiler.html, that at first glance looks identical to the 1.0/dcompiler.html page with the same problems. I know I've brought some of these things up at least a handfull of times in the past.. can they finally be cleaned up, pretty please? Thanks, Brad
Sep 05 2007
Brad Roberts wrote:The "Download latest D 2.0 alpha D compiler for Win32 and x86 linux" link on http://www.digitalmars.com/d/changelog.html still points to 2.002. Similarly, though at least labeled, the 1.0 changelog still points to 1.016, now 5 versions behind? Now that the 1.0 code line is no longer receiving anything other than bug fixes, is there really the need to distinguish between the latest 1.0 release and some other really stable 1.0 release?I think there is still a need, as there's always a risk I break something with a new release, even if it's just bug fixes.http://www.digitalmars.com/d/1.0/dcompiler.html#Win32 still lists all the 1.00 (not 1.x) mirrors. The same with http://www.digitalmars.com/d/1.0/dcompiler.html#linux. There's a non 1.0 scoped version of the page, http://www.digitalmars.com/d/dcompiler.html, that at first glance looks identical to the 1.0/dcompiler.html page with the same problems.I'll fix it.
Sep 05 2007
Walter Bright wrote:Brad Roberts wrote:1.020 seemed to be stable. Like 1.016, it was around for a long time, and therefore particularly well tested. There were some great bug fixes in 1.018 and 1.019. There's that substantive change about .init which happened in 1.017. If that's permanent, it'd be good to stop further development relying on the old behaviour. I think we need a policy for when the 'stable version' should be updated. Also, I don't see any mention of delimited string literals in the changelog. <g>The "Download latest D 2.0 alpha D compiler for Win32 and x86 linux" link on http://www.digitalmars.com/d/changelog.html still points to 2.002. Similarly, though at least labeled, the 1.0 changelog still points to 1.016, now 5 versions behind? Now that the 1.0 code line is no longer receiving anything other than bug fixes, is there really the need to distinguish between the latest 1.0 release and some other really stable 1.0 release?I think their is still a need, as there's always a risk I break something with a new release, even if it's just bug fixes.
Sep 06 2007
Don Clugston wrote:Also, I don't see any mention of delimited string literals in the changelog. <g>Delimited string literals? Sean
Sep 06 2007
Don Clugston wrote:1.020 seemed to be stable. Like 1.016, it was around for a long time, and therefore particularly well tested. There were some great bug fixes in 1.018 and 1.019.Done.There's that substantive change about .init which happened in 1.017. If that's permanent, it'd be good to stop further development relying on the old behaviour. I think we need a policy for when the 'stable version' should be updated. Also, I don't see any mention of delimited string literals in the changelog. <g>Fixed.
Sep 06 2007
Reply to Walter,Don Clugston wrote:where's the docs?Also, I don't see any mention of delimited string literals in the changelog. <g>Fixed.
Sep 06 2007
BCS wrote:Reply to Walter,The docs for delimited string literals are now at http://www.digitalmars.com/d/lex.html Thanks, Nathan ReedDon Clugston wrote:where's the docs?Also, I don't see any mention of delimited string literals in the changelog. <g>Fixed.
Sep 06 2007
Nathan Reed wrote:BCS wrote:And the lecture slides have more info, obviously. SeanReply to Walter,The docs for delimited string literals are now at http://www.digitalmars.com/d/lex.htmlDon Clugston wrote:where's the docs?Also, I don't see any mention of delimited string literals in the changelog. <g>Fixed.
Sep 06 2007
Reply to Sean,Nathan Reed wrote:I wish Walter would put more links from the change log into the docs (and more labels in the docs)BCS wrote:And the lecture slides have more info, obviously. SeanReply to Walter,The docs for delimited string literals are now at http://www.digitalmars.com/d/lex.htmlDon Clugston wrote:where's the docs?Also, I don't see any mention of delimited string literals in the changelog. <g>Fixed.
Sep 06 2007
Sean Kelly wrote:Nathan Reed wrote:Actually, the docs on the web go into a bunch more detail than the lecture slides :) Thanks, Nathan ReedBCS wrote:And the lecture slides have more info, obviously.Reply to Walter,The docs for delimited string literals are now at http://www.digitalmars.com/d/lex.htmlDon Clugston wrote:where's the docs?Also, I don't see any mention of delimited string literals in the changelog. <g>Fixed.
Sep 06 2007
"Nathan Reed" <nathaniel.reed gmail.com> wrote in message news:fbpfek$2qpb$1 digitalmars.com... <snip>The docs for delimited string literals are now at http://www.digitalmars.com/d/lex.htmlOne thing for sure: these things are going to be a nightmare to syntax-highlight! Stewart.
Sep 09 2007
Stewart Gordon wrote:"Nathan Reed" <nathaniel.reed gmail.com> wrote in message news:fbpfek$2qpb$1 digitalmars.com... <snip>I've already updated the Pygments syntax highlighter with this new syntax. They are not fundamentally any harder to highlight than the existing nesting /+ +/ comments. -- Kirk McDonald http://kirkmcdonald.blogspot.com Pyd: Connecting D and Python http://pyd.dsource.orgThe docs for delimited string literals are now at http://www.digitalmars.com/d/lex.htmlOne thing for sure: these things are going to be a nightmare to syntax-highlight! Stewart.
Sep 10 2007
"Kirk McDonald" <kirklin.mcdonald gmail.com> wrote in message news:fc2u8u$21d9$1 digitalmars.com... <snip> [delimited string literals]I've already updated the Pygments syntax highlighter with this new syntax. They are not fundamentally any harder to highlight than the existing nesting /+ +/ comments.Maybe. But still, nested comments are probably likely to be supported by more code editors than such an unusual feature as delimited strings. Stewart.
Sep 10 2007
Stewart Gordon wrote:Maybe. But still, nested comments are probably likely to be supported by more code editors than such an unusual feature as delimited strings.Delimited strings are standard practice in Perl. C++0x is getting delimited strings. Code editors that can't handle them are going to become rapidly obsolete. The more unusual feature is the token delimited strings.
Sep 10 2007
Walter Bright wrote:Stewart Gordon wrote:Which, since there's no nesting going on, are actually very easy to match. The Pygments lexer matches them with the following regex: q"([a-zA-Z_]\w*)\n.*?\n\1" -- Kirk McDonald http://kirkmcdonald.blogspot.com Pyd: Connecting D and Python http://pyd.dsource.orgMaybe. But still, nested comments are probably likely to be supported by more code editors than such an unusual feature as delimited strings.Delimited strings are standard practice in Perl. C++0x is getting delimited strings. Code editors that can't handle them are going to become rapidly obsolete. The more unusual feature is the token delimited strings.
Sep 10 2007
Kirk McDonald wrote:Walter Bright wrote:I meant the: q{ these must be valid D tokens { and brackets nest } /* ignore this } */ };The more unusual feature is the token delimited strings.Which, since there's no nesting going on, are actually very easy to match. The Pygments lexer matches them with the following regex: q"([a-zA-Z_]\w*)\n.*?\n\1"
Sep 10 2007
Walter Bright wrote:Kirk McDonald wrote:Those are also fairly easy. The Pygments lexer only highlights the opening q{ and the closing }. The tokens inside of the string are highlighted normally. Since this lexer is the one used by Dsource, I've thrown together a wiki page showing it off: http://www.dsource.org/projects/dsource/wiki/DelimitedStringHighlighting A note about this lexer: It uses a combination of regular expressions, a state machine, and a stack. When a regex matches, you usually just specify that the matching text should be highlighted as such-and-such a token. In some cases, though, you want to push a particular state onto the stack, which will then swap in a different set of regexes, until such time as this new state pops itself off the stack. Also, it is of course written in Python, so the code below is Python code. For instance, the rule for the "heredoc" strings, which I mentioned previously, looks like this: (r'q"([a-zA-Z_]\w*)\n.*?\n\1"', String), That is, it takes the chunk of text matched by that regex, and highlights it as a string. The entry point for token strings is the following rule: (r'q{', String, 'token_string'), Or: Highlight the token "q{" as a string, then push the 'token_string' state onto the stack. (This third argument is optional, and most of the rules do not have it.) The 'token_string' state looks like this: 'token_string': [ (r'{', Punctuation, 'token_string_nest'), (r'}', String, '#pop'), include('root'), ], 'token_string_nest': [ (r'{', Punctuation, '#push'), (r'}', Punctuation, '#pop'), include('root'), ], include('root') tells it to include the contents of the 'root' state. (Which is the state the D lexer starts out in, which has all of the regular tokens in it.) '#push' means to push the current state onto the stack again, and '#pop' means to pop off of the stack. By putting the rules for '{' and '}' before the 'root' state, we override their default behavior. (Which is just to be highlighted as punctuation.) These two nearly-identical states are needed because we only want to highlight '}' as a string when it is the last one in the token string. When '}' is closing a nested brace, we want to highlight it as regular punctuation, and pop off of the stack. Even if the above is gibberish to you, I still assert that it's quite straightforward, and indeed is very much like how the nesting /+ +/ comments were already highlighted. (Albeit without the include('root') call, and only one extra state.) All of this is built on the Pygments lexer framework. All I had to do was define the big list of regexes, and the occasional extra state (as I've outlined above). -- Kirk McDonald http://kirkmcdonald.blogspot.com Pyd: Connecting D and Python http://pyd.dsource.orgWalter Bright wrote:I meant the: q{ these must be valid D tokens { and brackets nest } /* ignore this } */ };The more unusual feature is the token delimited strings.Which, since there's no nesting going on, are actually very easy to match. The Pygments lexer matches them with the following regex: q"([a-zA-Z_]\w*)\n.*?\n\1"
Sep 10 2007
Kirk McDonald wrote:Walter Bright wrote:That's pretty danged nifty. Any chance, however, that it could apply a slight background color to the token string? -- Chris Nicholson-SaulsKirk McDonald wrote:Those are also fairly easy. The Pygments lexer only highlights the opening q{ and the closing }. The tokens inside of the string are highlighted normally. Since this lexer is the one used by Dsource, I've thrown together a wiki page showing it off: http://www.dsource.org/projects/dsource/wiki/DelimitedStringHighlightingWalter Bright wrote:I meant the: q{ these must be valid D tokens { and brackets nest } /* ignore this } */ };The more unusual feature is the token delimited strings.Which, since there's no nesting going on, are actually very easy to match. The Pygments lexer matches them with the following regex: q"([a-zA-Z_]\w*)\n.*?\n\1"
Sep 10 2007
Chris Nicholson-Sauls wrote:Kirk McDonald wrote:Not really. It would require defining a new token which highlights the background for every existing token, and then updating all of the styles to provide coloring for that background... Pygments simply isn't set up to do that kind of manipulation. In fact, it would even be harder to highlight the whole thing as a string, than to highlight it the way it is now. (Unless I simply ignored the limitation that its contents consist only of valid tokens.) -- Kirk McDonald http://kirkmcdonald.blogspot.com Pyd: Connecting D and Python http://pyd.dsource.orgWalter Bright wrote:That's pretty danged nifty. Any chance, however, that it could apply a slight background color to the token string? -- Chris Nicholson-SaulsKirk McDonald wrote:Those are also fairly easy. The Pygments lexer only highlights the opening q{ and the closing }. The tokens inside of the string are highlighted normally. Since this lexer is the one used by Dsource, I've thrown together a wiki page showing it off: http://www.dsource.org/projects/dsource/wiki/DelimitedStringHighlightingWalter Bright wrote:I meant the: q{ these must be valid D tokens { and brackets nest } /* ignore this } */ };The more unusual feature is the token delimited strings.Which, since there's no nesting going on, are actually very easy to match. The Pygments lexer matches them with the following regex: q"([a-zA-Z_]\w*)\n.*?\n\1"
Sep 11 2007
Kirk McDonald wrote:Those are also fairly easy. The Pygments lexer only highlights the opening q{ and the closing }. The tokens inside of the string are highlighted normally.Sweet!
Sep 11 2007
Kirk McDonald wrote:Walter Bright wrote:It's great to see Pygments handles so many possible syntaxes. Unfortunately backreferences are not part of regular expressions. I've noticed two kinds of problems in tools: a) some can't handle backreferences, but provide support for nested comments as a special case. So comments are no problem then, but all delimited strings are. b) some lexers handles both nested comments and delimited strings, but all delimiters must be enumerated in the language definition. Even worse, some highlighters only handle delimited comments, not strings. Maybe the new features (= one saves on average < 5 characters of typing per string) are more important than tool support? Maybe all tools should be rewritten in Python & Pygments?Stewart Gordon wrote:Which, since there's no nesting going on, are actually very easy to match. The Pygments lexer matches them with the following regex: q"([a-zA-Z_]\w*)\n.*?\n\1"Maybe. But still, nested comments are probably likely to be supported by more code editors than such an unusual feature as delimited strings.Delimited strings are standard practice in Perl. C++0x is getting delimited strings. Code editors that can't handle them are going to become rapidly obsolete. The more unusual feature is the token delimited strings.
Sep 11 2007
Jari-Matti Mäkelä wrote:Kirk McDonald wrote:D's delimited strings can (luckily) be scanned with regular languages, because the enclosing double quotes are required. else the lexical structure wouldn't even be context free and a nightmare for automatically generated lexers. therefore you can match q"[^"]*" and check the delimiters during (context sensitive) semantic analysis.Walter Bright wrote:It's great to see Pygments handles so many possible syntaxes. Unfortunately backreferences are not part of regular expressions. I've noticed two kinds of problems in tools: a) some can't handle backreferences, but provide support for nested comments as a special case. So comments are no problem then, but all delimited strings are. b) some lexers handles both nested comments and delimited strings, but all delimiters must be enumerated in the language definition. Even worse, some highlighters only handle delimited comments, not strings. Maybe the new features (= one saves on average < 5 characters of typing per string) are more important than tool support? Maybe all tools should be rewritten in Python & Pygments?Stewart Gordon wrote:Which, since there's no nesting going on, are actually very easy to match. The Pygments lexer matches them with the following regex: q"([a-zA-Z_]\w*)\n.*?\n\1"Maybe. But still, nested comments are probably likely to be supported by more code editors than such an unusual feature as delimited strings.Delimited strings are standard practice in Perl. C++0x is getting delimited strings. Code editors that can't handle them are going to become rapidly obsolete. The more unusual feature is the token delimited strings.
Sep 11 2007
Jascha Wetzel wrote:D's delimited strings can (luckily) be scanned with regular languages, because the enclosing double quotes are required. else the lexical structure wouldn't even be context free and a nightmare for automatically generated lexers.Right, thanks.therefore you can match q"[^"]*" and check the delimiters during (context sensitive) semantic analysis.But e.g. syntax highlighting needs the semantic info to change the style of the text within the delimiters. The analyser also needs to check whether the two delimiters match. Like I said above, if the tool doesn't provide enough support, you're stuck. I haven't searched for all corner cases, but wasn't the old grammar scannable and highlightable with plain regular expressions (except the nested comments of course).
Sep 11 2007
Jari-Matti Mäkelä wrote:Jascha Wetzel wrote:before, the lexical structure was context free because of nested comments and floats of the form "[0-9]+\.". the latter can be matched with regexps if they support lookaheads, though. if you stick to the specs verbatim, q"EOS...EOS" as a whole is a string literal. assuming that all tokens/lexemes are atomic, a lexer can't "look inside" the string literal. from that point of view, the lexical structure it's still context free. if possible, i'd add a thin wrapper around an automatically generated lexer that checks the delimiters in a postprocess.D's delimited strings can (luckily) be scanned with regular languages, because the enclosing double quotes are required. else the lexical structure wouldn't even be context free and a nightmare for automatically generated lexers.Right, thanks.therefore you can match q"[^"]*" and check the delimiters during (context sensitive) semantic analysis.But e.g. syntax highlighting needs the semantic info to change the style of the text within the delimiters. The analyser also needs to check whether the two delimiters match. Like I said above, if the tool doesn't provide enough support, you're stuck. I haven't searched for all corner cases, but wasn't the old grammar scannable and highlightable with plain regular expressions (except the nested comments of course).
Sep 11 2007
Jascha Wetzel wrote:before, the lexical structure was context free because of nested comments and floats of the form "[0-9]+\.". the latter can be matched with regexps if they support lookaheads, though.Nested comments don't necessarily need much more than a constant size counter, either.if you stick to the specs verbatim, q"EOS...EOS" as a whole is a string literal. assuming that all tokens/lexemes are atomic, a lexer can't "look inside" the string literal. from that point of view, the lexical structure it's still context free.But does a simple tool have to be so complex?if possible, i'd add a thin wrapper around an automatically generated lexer that checks the delimiters in a postprocess.That's a bit harder with e.g. closed source tools. Btw, is this a bug? auto foo = q"EOS EOS EOS"; doesn't compile with dmd 2.004. Or is the " always supposed to follow \n + matching identifier?
Sep 11 2007
Jari-Matti Mäkelä wrote:Jascha Wetzel wrote:it makes the lexer context free, though, and it therefore cannot be implemented with regular expressions only.before, the lexical structure was context free because of nested comments and floats of the form "[0-9]+\.". the latter can be matched with regexps if they support lookaheads, though.Nested comments don't necessarily need much more than a constant size counter, either.Btw, is this a bug? auto foo = q"EOS EOS EOS"; doesn't compile with dmd 2.004. Or is the " always supposed to follow \n + matching identifier?yep, since a non-nesting delimiter may only appear twice.
Sep 11 2007
Jascha Wetzel wrote:Jari-Matti Mäkelä wrote:Is the following a valid string? q"/foo " bar/" The grammar does not make it clear. The Pygments lexer treats it as though it is, under the assumption that the string continues until the first matching /" is found. Walter also said, in another branch of the thread, that this is not valid: q"/foo/bar/" Since it isn't all /that/ hard to match these examples, I wonder why they are disallowed. Just to simplify the lexer that much more? And, ah! I have found a bug in the Pygments lexer already: auto a = q"/foo/"; auto b = q"/bar/"; Everything from the opening of the first string literal to the end of the second is highlighted. Oops. I have a fix for the lexer, dsource will be updated at some point. -- Kirk McDonald http://kirkmcdonald.blogspot.com Pyd: Connecting D and Python http://pyd.dsource.orgKirk McDonald wrote:D's delimited strings can (luckily) be scanned with regular languages, because the enclosing double quotes are required. else the lexical structure wouldn't even be context free and a nightmare for automatically generated lexers. therefore you can match q"[^"]*" and check the delimiters during (context sensitive) semantic analysis.Walter Bright wrote:It's great to see Pygments handles so many possible syntaxes. Unfortunately backreferences are not part of regular expressions. I've noticed two kinds of problems in tools: a) some can't handle backreferences, but provide support for nested comments as a special case. So comments are no problem then, but all delimited strings are. b) some lexers handles both nested comments and delimited strings, but all delimiters must be enumerated in the language definition. Even worse, some highlighters only handle delimited comments, not strings. Maybe the new features (= one saves on average < 5 characters of typing per string) are more important than tool support? Maybe all tools should be rewritten in Python & Pygments?Stewart Gordon wrote:Which, since there's no nesting going on, are actually very easy to match. The Pygments lexer matches them with the following regex: q"([a-zA-Z_]\w*)\n.*?\n\1"Maybe. But still, nested comments are probably likely to be supported by more code editors than such an unusual feature as delimited strings.Delimited strings are standard practice in Perl. C++0x is getting delimited strings. Code editors that can't handle them are going to become rapidly obsolete. The more unusual feature is the token delimited strings.
Sep 11 2007
Kirk McDonald wrote:Jascha Wetzel wrote:oh, you're right of course...therefore you can match q"[^"]*" and check the delimiters during (context sensitive) semantic analysis.Is the following a valid string? q"/foo " bar/"Walter also said, in another branch of the thread, that this is not valid: q"/foo/bar/" Since it isn't all /that/ hard to match these examples, I wonder why they are disallowed. Just to simplify the lexer that much more?what string would that represent? foo/bar foobar foo
Sep 11 2007
Jascha Wetzel wrote:Kirk McDonald wrote:I would expect it to represent foo/bar, in the same way that q"(foo(bar))" represents foo(bar). -- Kirk McDonald http://kirkmcdonald.blogspot.com Pyd: Connecting D and Python http://pyd.dsource.orgJascha Wetzel wrote:oh, you're right of course...therefore you can match q"[^"]*" and check the delimiters during (context sensitive) semantic analysis.Is the following a valid string? q"/foo " bar/"Walter also said, in another branch of the thread, that this is not valid: q"/foo/bar/" Since it isn't all /that/ hard to match these examples, I wonder why they are disallowed. Just to simplify the lexer that much more?what string would that represent? foo/bar foobar foo
Sep 11 2007
Kirk McDonald wrote:I would expect it to represent foo/bar, in the same way that q"(foo(bar))" represents foo(bar).'/' is not a nesting delimiter. I think q"/foo/bar/" should be scanned as:
q"/foo/   // Error: expected '"' after closing delimiter.
          // "foo" would be the actual value of the literal.
bar       // Identifier token
/         // Division token
"         // Start of a new, normal string literal
Sep 12 2007
Aziz K. wrote:Kirk McDonald wrote:When I updated the Pygments lexer, I interpreted it like this: It sees q"/, and matches a string until it sees /". As Pygments is merely a syntax highlighter, it is not really that important for it to correctly flag invalid code as erroneous. Obviously, it /should/ do so in the optimum case, and I may get around to fixing this at some point, but it would be nice for the lexical docs to be a little more clear on this subject. Primarily, I see no reason why q"/foo/bar/" shouldn't be scanned as the string foo/bar. (Though I hasten to add that I recognize we are speaking of edge-cases, probably of interest only to people writing D lexers.) -- Kirk McDonald http://kirkmcdonald.blogspot.com Pyd: Connecting D and Python http://pyd.dsource.orgI would expect it to represent foo/bar, in the same way that q"(foo(bar))" represents foo(bar).'/' is not a nesting delimiter. I think q"/foo/bar/" should be scanned as: q"/foo/ // Error: expected '"' after closing delimiter. "foo" would be the actual value of the literal. bar // Identifier token / // Division token " // Start of a new, normal string literal
Sep 12 2007
Jari-Matti Mäkelä wrote:Kirk McDonald wrote:While D now requires a fairly powerful lexer to lex properly, it's still easier to lex than, for example, Ruby. Ruby's heredoc strings are more complicated than D's. Even Pygments requires some advanced callback trickery to lex them properly. Docs on Ruby's "here document" string literals: http://docs.huihoo.com/ruby/ruby-man-1.4/syntax.html#here_doc Pygments's Ruby lexer: http://trac.pocoo.org/browser/pygments/trunk/pygments/lexers/agile.py#L260 Also, the lexical phase is still entirely independent of the syntactical and semantic phases, even if it is a little more difficult than it was before. My point is simply that any tool capable of lexing Ruby -- and there are a number of these -- is more than powerful enough to lex D. So the bar is high, but quite reachable. I do not think it is extraordinary that a tool written in Python would take advantage of Python's regular expressions' features. -- Kirk McDonald http://kirkmcdonald.blogspot.com Pyd: Connecting D and Python http://pyd.dsource.orgWalter Bright wrote:It's great to see Pygments handles so many possible syntaxes. Unfortunately backreferences are not part of regular expressions. I've noticed two kinds of problems in tools: a) some can't handle backreferences, but provide support for nested comments as a special case. So comments are no problem then, but all delimited strings are. b) some lexers handles both nested comments and delimited strings, but all delimiters must be enumerated in the language definition. Even worse, some highlighters only handle delimited comments, not strings. Maybe the new features (= one saves on average < 5 characters of typing per string) are more important than tool support? Maybe all tools should be rewritten in Python & Pygments?Stewart Gordon wrote:Which, since there's no nesting going on, are actually very easy to match. The Pygments lexer matches them with the following regex: q"([a-zA-Z_]\w*)\n.*?\n\1"Maybe. But still, nested comments are probably likely to be supported by more code editors than such an unusual feature as delimited strings.Delimited strings are standard practice in Perl. C++0x is getting delimited strings. Code editors that can't handle them are going to become rapidly obsolete. The more unusual feature is the token delimited strings.
Sep 11 2007
Jari-Matti Mäkelä wrote:Kirk McDonald wrote:Ok, why would syntax highlighting have to be implemented with a regexp in the first place? -- Bruno Medeiros - MSc in CS/E student http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#DWalter Bright wrote:It's great to see Pygments handles so many possible syntaxes. Unfortunately backreferences are not part of regular expressions. I've noticed two kinds of problems in tools: a) some can't handle backreferences, but provide support for nested comments as a special case. So comments are no problem then, but all delimited strings are. b) some lexers handles both nested comments and delimited strings, but all delimiters must be enumerated in the language definition. Even worse, some highlighters only handle delimited comments, not strings. Maybe the new features (= one saves on average < 5 characters of typing per string) are more important than tool support? Maybe all tools should be rewritten in Python & Pygments?Stewart Gordon wrote:Which, since there's no nesting going on, are actually very easy to match. The Pygments lexer matches them with the following regex: q"([a-zA-Z_]\w*)\n.*?\n\1"Maybe. But still, nested comments are probably likely to be supported by more code editors than such an unusual feature as delimited strings.Delimited strings are standard practice in Perl. C++0x is getting delimited strings. Code editors that can't handle them are going to become rapidly obsolete. The more unusual feature is the token delimited strings.
Sep 11 2007
"Walter Bright" <newshound1 digitalmars.com> wrote in message news:fc45ic$1k04$1 digitalmars.com...Stewart Gordon wrote:But how many editors do a good job of syntax-highlighting Perl anyway, considering the mutual dependence between the lexer and the parser?Maybe. But still, nested comments are probably likely to be supported by more code editors than such an unusual feature as delimited strings.Delimited strings are standard practice in Perl.C++0x is getting delimited strings. Code editors that can't handle them are going to become rapidly obsolete.Maybe. But an editor being obsolete doesn't stop people from using it and even liking it for the features it does have. Take the number of people still using TextPad, for instance.The more unusual feature is the token delimited strings.Indeed. Stewart.
Sep 11 2007
Reply to Benjamin,Reply to Walter,OK I see DelimitedString and TokenString in the BNF but the docs seem to be a bit mangled about naming things (2 types in the BNF and 3 down below?)Don Clugston wrote:where's the docs?Also, I don't see any mention of delimited string literals in the changelog. <g>Fixed.
Sep 06 2007
BCS wrote:Reply to Benjamin,What are you referring to? There are two doc sections, "Delimited Strings" and "Token Strings". Thanks, Nathan ReedReply to Walter,OK I see DelimitedString and TokenString in the BNF but the doc seem to be a bit mangled about naming things (2 type in the BNF and 3 down below?)Don Clugston wrote:where's the docs?Also, I don't see any mention of delimited string literals in the changelog. <g>Fixed.
Sep 06 2007
Reply to Nathan,BCS wrote:oops I read the table heading "Nesting Delimiters" as a section heading and I guess heredoc and delimited are the same thing.Reply to Benjamin,What are you referring to? There are two doc sections, "Delimited Strings" and "Token Strings". Thanks, Nathan ReedReply to Walter,OK I see DelimitedString and TokenString in the BNF but the doc seem to be a bit mangled about naming things (2 type in the BNF and 3 down below?)Don Clugston wrote:where's the docs?Also, I don't see any mention of delimited string literals in the changelog. <g>Fixed.
Sep 06 2007
Walter Bright wrote:Don Clugston wrote:According to the docs, q{/*}*/ } is the same as "/*?*/ " is this a feature to assist macros in parsing strings -- all comments are turned to '?', or is it a mistake? -- Reiner1.020 seemed to be stable. Like 1.016, it was around for a long time, and therefore particularly well tested. There were some great bug fixes in 1.018 and 1.019.Done.There's that substantive change about .init which happened in 1.017. If that's permanent, it'd be good to stop further development relying on the old behaviour. I think we need a policy for when the 'stable version' should be updated. Also, I don't see any mention of delimited string literals in the changelog. <g>Fixed.
Sep 06 2007
Reiner Pope wrote:According to the docs, q{/*}*/ } is the same as "/*?*/ " is this a feature to assist macros in parsing strings -- all comments are turned to '?', or is it a mistake?It's a typo. Replace the ? with }.
Sep 06 2007
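[Editor's note: the docs example with Walter's correction applied.]

q{/*}*/ }    // token string; with the typo fixed, this is the same as "/*}*/ "
             // (the } inside the comment does not terminate the string)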
Walter Bright wrote:Reiner Pope wrote:I also thought it was a ?. Specially since the same example is in the PDF of the conference (slide 36). I also have the doubt of Bruno: what's the use of delimited strings and token strings?According to the docs, q{/*}*/ } is the same as "/*?*/ " is this a feature to assist macros in parsing strings -- all comments are turned to '?', or is it a mistake?It's a typo. Replace the ? with }.
Sep 07 2007
Walter Bright wrote:Don Clugston wrote:Speaking of which, what is the purpose of delimiter strings and the like (token strings, identifier strings) ? Neither the docs or slides go into much detail. So far I can only see use for token strings, in string mixins. -- Bruno Medeiros - MSc in CS/E student http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D1.020 seemed to be stable. Like 1.016, it was around for a long time, and therefore particularly well tested. There were some great bug fixes in 1.018 and 1.019.Done.There's that substantive change about .init which happened in 1.017. If that's permanent, it'd be good to stop further development relying on the old behaviour. I think we need a policy for when the 'stable version' should be updated. Also, I don't see any mention of delimited string literals in the changelog. <g>Fixed.
Sep 07 2007
Bruno Medeiros wrote:Speaking of which, what is the purpose of delimiter strings and the like (token strings, identifier strings) ? Neither the docs or slides go into much detail. So far I can only see use for token strings, in string mixins.Makes it easier to insert arbitrary text as a string without having to worry about an inadvertent delimiter inside the string.
Sep 10 2007
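[Editor's sketch, not from the thread, using the syntax documented at http://www.digitalmars.com/d/lex.html; the variable names and contents are invented.]

// Delimited string: embedded quotes and backslashes need no escaping.
auto sql = q"(SELECT "name" FROM t WHERE path = 'C:\tmp')";

// Token string: suits string mixins, since the contents must already be
// valid D tokens and an editor can keep highlighting them as code.
mixin(q{ int answer = 42; });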
Walter Bright Wrote:Mostly bug fixes for CTFE. Added library switches at Tango's request. http://www.digitalmars.com/d/1.0/changelog.html http://ftp.digitalmars.com/dmd.1.021.zip http://www.digitalmars.com/d/changelog.html http://ftp.digitalmars.com/dmd.2.004.zipWow, thanks! It was definitely worth the wait! Also, thanks for adding a few non-breaking features (multiple module static constructors/destructors) to the 1.x branch to show it's still got life, and for adding the default lib switch!
Sep 05 2007
Building every program fails with:
Compile error: QuadPart is not a member of LARGE_INTEGER
DMD 1.021, Windows XP. I searched the DMD directory and did not find the definition of LARGE_INTEGER
Sep 05 2007
yidabu wrote:build every program, cause: Compile error: QuadPart is not a member of LARGE_INTEGERI've just encountered the same error ("Error: 'QuadPart' is not a member of 'LARGE_INTEGER'") when I tried to compile the WindowsAPI sources. Could be related to: http://d.puremagic.com/issues/show_bug.cgi?id=1473 DMD 1.021, Windows Vista LLAP, Sascha
Sep 06 2007
"Sascha Katzner" <sorry.no spam.invalid> wrote in message news:fbobj2$1tdh$1 digitalmars.com...yidabu wrote:Indeed. I might have to go back to 1.020 pending a fix. There are quite a few regressions. http://d.puremagic.com/issues/buglist.cgi?version=1.021&bug_severity=regression 1485 has broken my utility library. 1484 may have broken a project or two of mine as well. Stewart.build every program, cause: Compile error: QuadPart is not a member of LARGE_INTEGERI've just encountered the same error ("Error: 'QuadPart' is not a member of 'LARGE_INTEGER'") when I tried to compile the WindowsAPI sources. Could be related to: http://d.puremagic.com/issues/show_bug.cgi?id=1473
Sep 09 2007
Walter Bright wrote:Mostly bug fixes for CTFE. Added library switches at Tango's request. http://www.digitalmars.com/d/1.0/changelog.html http://ftp.digitalmars.com/dmd.1.021.zip http://www.digitalmars.com/d/changelog.html http://ftp.digitalmars.com/dmd.2.004.zip*ahem* H A L L E L U J A ! Oh you've made me a very happy boy. :) The multiple module ctors/dtors thing is *very* welcome. I'll have to poke around the new 2.0 stuff, too. The only thing left that would allow me to ditch my current, let's call it, "insane" compiler set up would be a switch to specify a different sc.ini file. But none the less, thanks very much for these! :) -- Daniel
Sep 05 2007
Walter Bright wrote:Mostly bug fixes for CTFE. Added library switches at Tango's request. http://www.digitalmars.com/d/1.0/changelog.html http://ftp.digitalmars.com/dmd.1.021.zip http://www.digitalmars.com/d/changelog.html http://ftp.digitalmars.com/dmd.2.004.zipSweet, I like it. Thank you!!111 One thing though, when I run this: dmd -defaultlib it outputs this: Error: unrecognized switch '-defaultlib' Same with -debuglib. Am I missing something?
Sep 05 2007
Also, the -defaultlib and -debuglib switches do not appear in the dmd usage display on the command line. negerns
Sep 05 2007
Chad J wrote:One thing though, when I run this: dmd -defaultlib it outputs this: Error: unrecognized switch '-defaultlib'Try: dmd -defaultlib=foo test.d
Sep 05 2007
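[Editor's note, not from the thread: the companion -debuglib switch added in this release takes the same name=value form; the library name below is made up, and the pairing with -g reflects the editor's understanding of how dmd selects the debug library.]

dmd -g -debuglib=foo_debug test.d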
Walter Bright wrote:Chad J wrote:Ah, that works. As negerns mentioned, this doesn't show in the dmd usage info. If it did, that would probably help ;)One thing though, when I run this: dmd -defaultlib it outputs this: Error: unrecognized switch '-defaultlib'Try: dmd -defaultlib=foo test.d
Sep 06 2007
Chad J wrote:Walter Bright wrote:Try running dmd by itself and checking the version. I'll bet you downloaded dmd.zip which points to 1.016 still, not 1.021.Mostly bug fixes for CTFE. Added library switches at Tango's request. http://www.digitalmars.com/d/1.0/changelog.html http://ftp.digitalmars.com/dmd.1.021.zip http://www.digitalmars.com/d/changelog.html http://ftp.digitalmars.com/dmd.2.004.zipSweet, I like it. Thank you!!111 One thing though, when I run this: dmd -defaultlib it outputs this: Error: unrecognized switch '-defaultlib' Same with -debuglib. Am I missing something?
Sep 05 2007
On Wed, 05 Sep 2007 12:05:07 -0700, Walter Bright <newshound1 digitalmars.com> wrote:Mostly bug fixes for CTFE. Added library switches at Tango's request. http://www.digitalmars.com/d/1.0/changelog.html http://ftp.digitalmars.com/dmd.1.021.zip http://www.digitalmars.com/d/changelog.html http://ftp.digitalmars.com/dmd.2.004.zipThanks a lot!
Sep 06 2007
Hello Walter, Thanks for the release. Could you clarify a few things regarding the new string literals for me, please?
Example:
q"/abc/def/" // Is this "abc/def" or is this an error?
Token string examples:
q{__TIME__} // Should special tokens be evaluated? Resulting in a different string than "__TIME__"?
q{666, this is super __EOF__} // Should __EOF__ be evaluated here causing the token string to be unterminated?
q{#line 4 "path/to/file" } // Should the special token sequence be evaluated here?
You provided the following example on the lexer page:
q{ 67QQ } // error, 67QQ is not a valid D token
Isn't your comment wrong? I see two valid tokens there: an integer "67" and an identifier "QQ" Regards, Aziz
Sep 10 2007
Aziz K. wrote:Could you clarify a few things regarding the new string literals for me, please? Example: q"/abc/def/" // Is this "abc/def" or is this an error?Error.Token string examples: q{__TIME__} // Should special tokens be evaluated? Resulting in a different string than "__TIME__"?No, no.q{666, this is super __EOF__} // Should __EOF__ be evaluated here causing the token string to be unterminated?Yes (__EOF__ is not a token, it's an end of file)q{#line 4 "path/to/file" } // Should the special token sequence be evaluated here?No.You provided the following example on the lexer page: q{ 67QQ } // error, 67QQ is not a valid D token Isn't your comment wrong? I see two valid tokens there: an integer "67" and an identifier "QQ"I think you're right.
Sep 11 2007
Thanks for clarifying. While implementing the methods in my lexer for scanning the new string literals I found a few other ambiguities: q"∆abcdef∆" // Might be superfluous to ask, but are (non-alpha) Unicode character delimiters allowed? q" abcdef " // "abcdef". Allowed? q" äöüß " // "äöüß". Should leading newlines be skipped or are they allowed as delimiters? q"EOF abcdefEOF" // Valid? Or is \nEOF a requirement? If so, how would you write such a string excluding the last newline? Because you say in the specs that the last newline is part of the string. Maybe it shouldn't be? q"EOF abcdef EOF" // Provided the previous example is an error. Is indenting the matching delimiter allowed (with " \t\v\f")? Walter Bright wrote:Aziz K. wrote:Are you sure you want __EOF__ to really mean end of file like '\0' and 0x1A (^Z)? Every time one encounters '_', one would have to look ahead for "_EOF__" and one would have to make sure it's not followed by a valid identifier character. I have twelve instances where I check for \0 and ^Z. It wouldn't be that hard to adapt the code but I'm sure in general it would impact the speed of a D lexer adversely. Regards, Azizq{666, this is super __EOF__} // Should __EOF__ be evaluated here causing the token string to be unterminated?Yes (__EOF__ is not a token, it's an end of file)
Sep 11 2007
Aziz K. wrote:Thanks for clarifying. While implementing the methods in my lexer for scanning the new string literals I found a few other ambiguities: q"∆abcdef∆" // Might be superfluous to ask, but are (non-alpha) Unicode character delimiters allowed?Yes.q" abcdef " // "abcdef". Allowed?Yes.q" äöüß " // "äöüß". Should leading newlines be skipped or are they allowed as delimiters?Skipped.q"EOF abcdefEOF" // Valid?No.Or is \nEOF a requirement?Yes.If so, how would you write such a string excluding the last newline?Can't.Because you say in the specs that the last newline is part of the string. Maybe it shouldn't be? q"EOF abcdef EOF" // Provided the previous example is an error. Is indenting the matching delimiter allowed (with " \t\v\f")?No.Walter Bright wrote:Aziz K. wrote:Are you sure you want __EOF__ to really mean end of file like '\0' and 0x1A (^Z)? Every time one encounters '_', one would have to look ahead for "_EOF__" and one would have to make sure it's not followed by a valid identifier character. I have twelve instances where I check for \0 and ^Z. It wouldn't be that hard to adapt the code but I'm sure in general it would impact the speed of a D lexer adversely. Regards, Azizq{666, this is super __EOF__} // Should __EOF__ be evaluated here causing the token string to be unterminated?Yes (__EOF__ is not a token, it's an end of file)
Sep 12 2007
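[Editor's sketch, not from the thread: Walter's answers above collected into one annotated example; the abcdef content is filler.]

auto a = q"∆abcdef∆";   // any character, including non-alpha Unicode, may delimit
auto b = q" abcdef ";   // a space works as a delimiter too
auto c = q"EOF
abcdef
EOF";                   // identifier delimiter: the closing EOF must start its
                        // own line, un-indented, and the newline before it stays
                        // part of the string (see the follow-ups below for
                        // trimming it)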
Reply to Aziz K.,q"EOF abcdefEOF" // Valid? Or is \nEOF a requirement? If so, how would you write such a string excluding the last newline?q"EOF abcdef EOF"[0..$-1]
Sep 12 2007
BCS wrote:Reply to Aziz K.,Or peel off the last line: q"EOF abcdef ghijkl mnop qrstuv EOF" "wxyz." Still... Why the draconian limitation that heredocs MUST always have a newline? Seems like allowing escaped newlines would make life easier. Like q"EOF abcdef ghijkl mnop qrstuv wxyz.\ EOF" Or make only the last newline escapable with something prefixing the terminator, like \: q"EOF abcdef ghijkl mnop qrstuv wxyz. \EOF" --bbq"EOF abcdefEOF" // Valid? Or is \nEOF a requirement? If so, how would you write such a string excluding the last newline?q"EOF abcdef EOF"[0..$-1]
Sep 13 2007