
digitalmars.D.learn - Member functions C to D

reply Craig Kuhnert <ckuhnert gmail.com> writes:
Hi
I am trying to convert some code I wrote in C++ to D to give it a try, and I
have come across some code that I don't know how to convert.
I have simplified the code to illustrate the problem I have.
How do I do this in D?

class IFieldSetter
{
public:
	virtual void SetValue(void * object, const void * value) = 0;
};

template <class C, class T>
class FieldSetter : public IFieldSetter
{
private:
	typedef T (C::* MemberField);
	 MemberField field;

public:
	FieldSetter(MemberField afield)
		: field(afield)
	{}

	void SetTypedValue(C * object, const T& value)
	{
		object->*field = value;
	}
	
	void SetValue(void * object, const void * value)
	{
		SetTypedValue((C*) object, (const T&) value);
	}
};

class MySampleClass
{
public:
	int Status;
	std::string Name;
};

void main(void)
{
	IFieldSetter * StatusSetter = new FieldSetter<MySampleClass,int>(&MySampleClass::Status);
	IFieldSetter * NameSetter   = new FieldSetter<MySampleClass,std::string>(&MySampleClass::Name);

	MySampleClass * a = new MySampleClass();
	MySampleClass * b = new MySampleClass();

	StatusSetter->SetValue(a, (void*)20);
	StatusSetter->SetValue(b, (void*)40);

	NameSetter->SetValue(a, "2002");
	NameSetter->SetValue(b, "2002");
}

Thanks
Craig
Oct 07 2009
parent reply downs <default_357-line yahoo.de> writes:
Craig Kuhnert wrote:
 [original C++ code snipped]
If I'm getting this correctly, here's one way to do it ..

module test;

import std.stdio, tools.ctfe: ctReplace; // easy to write your own ctReplace function

template Init(T) { T Init; }

interface IFieldSetter {
  void setValue(Object obj, void* value);
}

class FieldSetter(T: Object, string Name) : IFieldSetter {
  override void setValue(Object obj, void* value) {
    auto tee = cast(T) obj;
    mixin("tee.%NAME = *cast(typeof(tee.%NAME)*) value; ".ctReplace("%NAME", Name));
  }
  void setValue(T obj, typeof(mixin("Init!(T)."~Name)) value) {
    mixin("obj.%NAME = value; ".ctReplace("%NAME", Name));
  }
}

class Sample {
  int status;
  string name;
}

void main() {
  auto statSetter = new FieldSetter!(Sample, "status");
  auto nameSetter = new FieldSetter!(Sample, "name");
  auto sample = new Sample;
  int i = 20;
  statSetter.setValue(sample, &i);
  statSetter.setValue(sample, 40);
  nameSetter.setValue(sample, "Fooblr");
}
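For comparison: on a present-day D2 compiler the same idea can be written without splicing strings at all, by assigning through __traits(getMember). A minimal sketch along the lines of the code above; only that compiler feature is assumed, the type and member names are the ones already used in this thread:

module fieldsetter_traits;

interface IFieldSetter
{
    void setValue(Object obj, void* value);
}

class FieldSetter(T : Object, string Name) : IFieldSetter
{
    // Type of the named field, e.g. int for Sample.status.
    // Assumes a current D2 compiler; the D1 tools library above is not needed.
    alias FieldType = typeof(__traits(getMember, T, Name));

    void setValue(Object obj, void* value)
    {
        auto typed = cast(T) obj;
        __traits(getMember, typed, Name) = *cast(FieldType*) value;
    }

    void setValue(T obj, FieldType value)
    {
        __traits(getMember, obj, Name) = value;
    }
}

class Sample
{
    int status;
    string name;
}

void main()
{
    IFieldSetter setter = new FieldSetter!(Sample, "status");
    auto s = new Sample;

    int v = 20;
    setter.setValue(s, &v);           // through the type-erased interface
    assert(s.status == 20);

    auto nameSetter = new FieldSetter!(Sample, "name");
    nameSetter.setValue(s, "Fooblr"); // through the typed overload
    assert(s.name == "Fooblr");
}

The string-mixin version above stays the more general tool, since it can splice arbitrary code rather than just a member access.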
Oct 07 2009
parent reply Craig Kuhnert <ckuhnert gmail.com> writes:
downs Wrote:

 [quoted question and D solution snipped]
Thanks! That's brilliant! D rocks! I never thought of using mixin for that purpose.
Oct 07 2009
parent reply Don <nospam nospam.com> writes:
Craig Kuhnert wrote:
 downs Wrote:
 
 [quoted question and D solution snipped]
Thanks! That's brilliant! D rocks! I never thought of using mixin for that purpose.
There's almost NOTHING which is impossible with string mixins. With just recursive string mixins, coupled with .stringof and is(typeof()), you can get access to most of the compiler's semantic analysis, and its symbol table. Deep in the final semantic pass, just before code generation, when you have access to all the type information, you can generate new source code for the compiler to start again at the beginning with parsing. It's insanely powerful.
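To make that round trip concrete, here is a minimal sketch, assuming a present-day compiler and the std.traits.FieldNameTuple helper (Point, makeDumper and dump are made-up names): the type information comes out via .stringof and the field-name tuple, ordinary string code builds new source from it, and mixin() hands that source back to the parser.

module mixin_roundtrip;

import std.stdio;
import std.traits : FieldNameTuple;   // post-dates this thread; assumed available

struct Point { int x; int y; }

// Build the source of a field-dumping function for T as an ordinary string.
// .stringof recovers the type's name, FieldNameTuple its field names,
// both at compile time.
string makeDumper(T)()
{
    string code = "void dump(" ~ T.stringof ~ " v) {";
    foreach (name; FieldNameTuple!T)
        code ~= `writeln("` ~ name ~ ` = ", v.` ~ name ~ `);`;
    return code ~ "}";
}

// Hand the generated source back to the parser.
mixin(makeDumper!Point());

void main()
{
    dump(Point(3, 4));   // prints "x = 3" and "y = 4"
}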
Oct 07 2009
parent reply Jarrett Billingsley <jarrett.billingsley gmail.com> writes:
On Wed, Oct 7, 2009 at 8:07 AM, Don <nospam nospam.com> wrote:
 [quoted question and replies snipped]
 There's almost NOTHING which is impossible with string mixins. With just
 recursive string mixins, coupled with .stringof and is(typeof()), you can
 get access to most of the compiler's semantic analysis, and its symbol
 table. Deep in the final semantic pass, just before code generation, when
 you have access to all the type information, you can generate new source
 code for the compiler to start again at the beginning with parsing.
 It's insanely powerful.
It's also insanely kludgy and ugly. Bleh.
Oct 07 2009
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Wed, 07 Oct 2009 09:17:59 -0400, Jarrett Billingsley  
<jarrett.billingsley gmail.com> wrote:

 It's also insanely kludgy and ugly. Bleh.
If all a macro did was translate a scoped normal symbol to a mixin (or other macro) statement, would this take care of the ugliness? (would also be an insanely simple solution)

i.e.

macro doit(x, y, z) mixin("x" ~ "y" ~ "z");
// allow easy syntax for quoting parameters, since mixins are all about stringification.

doit(a, b, c)
=> mixin("abc");

Another example, logging:

class Logger
{
  ...
  macro logError(msg) mixin("{if(this.level >= ERROR) logMessage(this.level.Error, msg);}");
}

usage:

log.logError("bad error occurred with object: " ~ expensiveObjectStringification(obj));

No more lazy parameters, no more stupid delegates :)

-Steve
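For reference, the closest you get today without such a macro is a lazy parameter, which is exactly what the proposal wants to retire; a minimal sketch, with Logger, Level, logMessage and expensiveObjectStringification as illustrative names modeled on the post above rather than an existing API:

module lazy_log_sketch;

import std.stdio;

enum Level { ERROR = 1, INFO = 2 }

class Logger
{
    Level level = Level.ERROR;

    void logError(lazy string msg)
    {
        // msg is only evaluated here, so if the level check fails the
        // expensive string building at the call site never runs.
        if (level >= Level.ERROR)
            logMessage(Level.ERROR, msg);
    }

    void logMessage(Level lvl, string msg)
    {
        writeln(lvl, ": ", msg);
    }
}

string expensiveObjectStringification(Object obj)
{
    return obj.toString();   // stand-in for costly formatting work
}

void main()
{
    auto log = new Logger;
    auto obj = new Object;
    log.logError("bad error occurred with object: " ~ expensiveObjectStringification(obj));
}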
Oct 07 2009
parent reply Don <nospam nospam.com> writes:
Steven Schveighoffer wrote:
 On Wed, 07 Oct 2009 09:17:59 -0400, Jarrett Billingsley 
 <jarrett.billingsley gmail.com> wrote:
 
 It's also insanely kludgy and ugly. Bleh.
Ugly, yes. Kludgy, I don't think so. It's only a syntax issue. The basic concept of passing meta-code to the compiler in the form of raw text is simple:

mixin() if you want to insert something into the parse step.
is(typeof()) if you want to catch it again after the syntax pass.
stringof if you want to catch it again after the semantic pass.

And that's all. The syntax is ugly, but the semantics are beautifully elegant.

By contrast, something like Nemerle macros are a kludge. The idea of providing a 'hook' into the compiler is a horrible hack. It exposes all kinds of compiler internals. Yes, it has nicer syntax.
 If all a macro did was translate a scoped normal symbol to a mixin (or 
 other macro) statement, would this take care of the ugliness? (would 
 also be an insanely simple solution)
I think that's where the majority of the ugliness comes from.
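Spelled out, the three hooks listed above look like this; a minimal sketch, with S, answer, hasN, hasM and typeName as made-up names:

module three_hooks;

struct S { int n; }

// mixin(): splice generated source into the parse step.
mixin("enum answer = 42;");

// is(typeof()): does this expression compile? Errors are swallowed either way.
enum hasN = is(typeof(S.init.n));   // true
enum hasM = is(typeof(S.init.m));   // false - no such field, and no error emitted

// .stringof: pull semantic information back out as a string.
enum typeName = typeof(S.init.n).stringof;   // "int"

static assert(answer == 42 && hasN && !hasM && typeName == "int");

void main() {}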
Oct 07 2009
next sibling parent reply Jarrett Billingsley <jarrett.billingsley gmail.com> writes:
On Wed, Oct 7, 2009 at 11:21 AM, Don <nospam nospam.com> wrote:
 Steven Schveighoffer wrote:
 On Wed, 07 Oct 2009 09:17:59 -0400, Jarrett Billingsley
 <jarrett.billingsley gmail.com> wrote:

 It's also insanely kludgy and ugly. Bleh.
 Ugly, yes. Kludgy, I don't think so. It's only a syntax issue. The basic
 concept of passing meta-code to the compiler in the form of raw text is
 simple:

 mixin() if you want to insert something into the parse step.
 is(typeof()) if you want to catch it again after the syntax pass.
 stringof if you want to catch it again after the semantic pass.

 And that's all. The syntax is ugly, but the semantics are beautifully
 elegant.
It'd be nice if they actually worked. is(typeof()) fails for *any* error, and it eats those errors too, so if your code fails to compile for some reason other than the one you're testing for, welp, good luck figuring that out. And don't even get me started on .stringof. Also, see my post on the "get template and its instantiation parameters" thread for my detailed opinion on them.
 By contrast, something like Nemerle macros are a kludge. The idea of
 providing a 'hook' into the compiler is a horrible hack. It exposes all
 kinds of compiler internals. Yes, it has nicer syntax.
I.. don't even know how to begin to respond to that.
Oct 07 2009
parent reply Don <nospam nospam.com> writes:
Jarrett Billingsley wrote:
 On Wed, Oct 7, 2009 at 11:21 AM, Don <nospam nospam.com> wrote:
 [snip]
 By contrast, something like Nemerle macros are a kludge. The idea of
 providing a 'hook' into the compiler is a horrible hack. It exposes all
 kinds of compiler internals. Yes, it has nicer syntax.
I.. don't even know how to begin to respond to that.
Have you read the Nemerle extended macro tutorial? The compiler's internal structures are completely exposed. That's a hack.
Oct 08 2009
parent reply Bill Baxter <wbaxter gmail.com> writes:
On Thu, Oct 8, 2009 at 1:06 AM, Don <nospam nospam.com> wrote:
 [snip]
Have you read the Nemerle extended macro tutorial? The compiler's internal structures are completely exposed. That's a hack.
It seems macros are implemented as compiler extensions. You compile your macros into DLLs first, that then get loaded into the compiler as plugins. On the plus side, doing things that way you really do have access to any API you need at compile-time, using the same syntax as run-time. All of .NET can be used at compile-time in your macros. No more "can't CTFE that" gotchas.

But it does raise security concerns. I wonder if they have some way to prevent macros from running malicious code. I guess you better run your web-based compiler service in a tightly partitioned VM.

Overall it seems pretty nifty to me, really. Giving macros access to an actual compiler API seems less hackish than throwing in a smattering of diverse functionality under the heading of __traits. And less prone to gotchas than trying to create a separate compile-time D interpreter that runs inside the D compiler.

What do you see as the down sides? Just that some rogue macro might mess up the AST?

--bb
Oct 08 2009
parent reply Christopher Wright <dhasenan gmail.com> writes:
Bill Baxter wrote:
 It seems macros are implemented as compiler extensions.  You compile
 your macros into DLLs first, that then get loaded into the compiler as
 plugins.  On the plus side, doing things that way you really do have
 access to any API you need at compile-time, using the same syntax as
 run-time.  All of .NET can be used at compile-time in your macros.  No
 more "can't CTFE that" gotchas.
 
 But it does raise security concerns.  I wonder if they have some way
 to prevent macros from running malicious code.  I guess you better run
 your web-based compiler service in a tightly partitioned VM.
mode, fewer bad things can happen.
 Overall it seems pretty nifty to me, really.  Giving macros access to
 an actual compiler API seems less hackish than throwing in a
 smattering of diverse functionality under the heading of __traits.
 And less prone to gotchas than trying to create a separate
 compile-time D interpreter that runs inside the D compiler.
 
 What do you see as the down sides?  Just that some rogue macro might
 mess up the AST?
It makes macros highly compiler-specific, or requires the compiler's AST to be part of the language.

Nemerle took the nuclear option, and its macros are all-powerful. That's a reasonable way of doing things. I'd be happy with a more restricted system that's easier to standardize, especially if it got rid of all the hacky string manipulation in current D metaprogramming. (Seriously, even __traits returns string arrays for a lot of stuff. It's ridiculous.)
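For reference, the "string arrays" remark refers to results like this; a minimal sketch, assuming a current D2 compiler and reusing the Sample class from earlier in the thread:

module traits_strings;

class Sample { int status; string name; }

// __traits(allMembers, ...) yields the member *names* - plain strings such as
// "status", "name", "toString", ... - rather than first-class symbols.
enum memberNames = [__traits(allMembers, Sample)];   // an array of plain strings

pragma(msg, memberNames);            // printed at compile time
static assert(memberNames.length >= 2);

void main() {}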
Oct 08 2009
parent reply Yigal Chripun <yigal100 gmail.com> writes:
On 09/10/2009 00:38, Christopher Wright wrote:
 [snip]
It makes macros highly compiler-specific, or requires the compiler's AST to be part of the language. Nemerle took the nuclear option, and its macros are all-powerful. That's a reasonable way of doing things. I'd be happy with a more restricted system that's easier to standardize, especially if it got rid of all the hacky string manipulation in current D metaprogramming. (Seriously, even __traits returns string arrays for a lot of stuff. It's ridiculous.)
It doesn't have to be compiler specific. All that's needed is a standardized API to the compiler. What's so hackish about that? Many large modular systems do exactly that: Eclipse, Firefox, even the OS itself. Unix provides syscalls which *are* an API to the OS. A properly designed API doesn't have to expose internal implementation details.

Btw, in Nemerle they have syntax to compose/decompose the AST specifically so they don't need to expose the internal structure of the AST.
Oct 09 2009
parent reply Christopher Wright <dhasenan gmail.com> writes:
Yigal Chripun wrote:
 On 09/10/2009 00:38, Christopher Wright wrote:
 [snip]
It doesn't have to be compiler specific. All that's needed is a standardized API to the compiler.
Right. It adds something huge that's normally compiler-specific to the language. This makes me uncomfortable. It greatly increases the difficulty of implementation.
 What's so hackish about that?
Reread. Current D metaprogramming is hackish. Nemerle's isn't.
 many large modular systems do exactly that: eclipse, firefox, even the 
 OS itself. Unix provides syscalls which *are* an API to the OS.
 
 a properly designed API doesn't have to expose internal implementation 
 details.
 
 btw, in Nemerle they have syntax to compose/decompose AST specifically 
 so they don't need to expose the internal structure of the AST.
So they have a separate object model for the syntax tree that macros can affect. This is what I would recommend for D.
Oct 09 2009
parent reply Yigal Chripun <yigal100 gmail.com> writes:
On 10/10/2009 00:36, Christopher Wright wrote:
 [snip]
Right. It adds something huge that's normally compiler-specific to the language. This makes me uncomfortable. It greatly increases the difficulty of implementation.
I disagree - a properly designed compiler will have such an API anyway. Look at how Clang is designed - it's a modular compiler where each part has its own library. You can combine its libs in different ways to provide different options: a full command-line compiler, semantic analysis for an IDE, an incremental builder for an IDE, etc. That design obviously requires APIs for the different components.
 What's so hackish about that?
Reread. Current D metaprogramming is hackish. Nemerle's isn't.
I was referring to what Don said that providing a hook into the compiler is hackish.
 many large modular systems do exactly that: eclipse, firefox, even the
 OS itself. Unix provides syscalls which *are* an API to the OS.

 a properly designed API doesn't have to expose internal implementation
 details.

 btw, in Nemerle they have syntax to compose/decompose AST specifically
 so they don't need to expose the internal structure of the AST.
So they have a separate object model for the syntax tree that macros can affect. This is what I would recommend for D.
What do you mean by object model? They have a syntax to manipulate the AST: <[ some code ]> would be parsed by the compiler as the AST of "some code" and would be represented internally by the compiler-specific AST representation. This syntax also handles hygiene and provides means to break it when needed.
Oct 10 2009
next sibling parent reply Don <nospam nospam.com> writes:
Yigal Chripun wrote:
 [snip]
I disagree - a properly designed compiler will have such an API anyway.
Not if you have compilers from different vendors. And that's one of the key problems with making such an API part of the language -- the potential for vendor lock-in.
 What's so hackish about that?
Reread. Current D metaprogramming is hackish. Nemerle's isn't.
I was referring to what Don said that providing a hook into the compiler is hackish.
I stand by that. Look, I was a Forth guy back in the day. Forth and Lisp both have hack-free macros. Particularly in the case of Forth, the language is largely defined in the library; you can even make the case that the compiler is part of the library. So there's no problem with the library extending the language. But in the case of Nemerle, it's a conventional compiler with hooks for library code.

I just feel that Nemerle's approach is diametrically opposed to Forth/Lisp. It's personal opinion. To me, that looks like a hack.

To make one thing clear: D's compile-time reflection is a hack. And that makes most current 'D macros' hackish. I just feel that most of the problems lie on the reflection side.
Oct 10 2009
parent reply Yigal Chripun <yigal100 gmail.com> writes:
On 10/10/2009 10:50, Don wrote:
 [snip]
I disagree - a properly designed compiler will have such an API anyway.
Not if you have compilers from different vendors. And that's one of the key problems with making such an API part of the language -- the potential for vendor lock-in.
If each compiler has its own API then you're correct, but what I was talking about was a standard API that is part of the stdlib, which the different vendors need to implement in order to be considered compliant with the language spec. The compiler internals need not be identical, only the API as defined in the spec.
 What's so hackish about that?
Reread. Current D metaprogramming is hackish. Nemerle's isn't.
I was referring to what Don said that providing a hook into the compiler is hackish.
I stand by that. Look, I was Forth guy back in the day. Forth and Lisp both have hack-free macros. Particularly in the case of Forth, the language is largely defined in the library; you can even make the case that the compiler is part of the library. So there's no problem with the library extending the language. But in the case of Nemerle, it's a conventional compiler with hooks for library code.
I don't know how deep you looked into Nemerle, but from my understanding that description is false. Nemerle is much closer to your description of Forth than you'd think. Nemerle supports syntax extensions, and parts of the language are already implemented as macros. They are now considering generalizing this construct further so they could implement more of Nemerle as macros.
 I just feel that Nemerle's approach is diametrically opposed to Forth/Lisp.
 It's personal opinion. To me, that looks like a hack.

 To make one thing clear:
 D's compile-time reflection is a hack. And that makes most current 'D
 macros' hackish. I just feel that most of the problems lie on the
 reflection side.
Oct 10 2009
parent reply Don <nospam nospam.com> writes:
Yigal Chripun wrote:
 [snip]
I don't know how deep you looked into Nemerle, but from my understanding that description is false. Nemerle is much closer to your description of Forth than you'd think. Nemerle supports syntax extensions, and parts of the language are already implemented as macros. They are now considering generalizing this construct further so they could implement more of Nemerle as macros.
Ah, OK. My cursory glance at Nemerle just screamed "hack". But first impressions can be misleading. No doubt as a C-family language, they have some useful ideas. But if Christopher's analysis is correct, the "macro" bit is different to the "plugin" bit. I think allowing the ASTs to be _modified_ by plugins is the path to madness, but a read-only ABI is OK (it's hard to see how compile-time reflection is possible without creating some kind of API).
Oct 12 2009
parent reply Yigal Chripun <yigal100 gmail.com> writes:
On 12/10/2009 10:47, Don wrote:
 Ah, OK. My cursory glance at Nemerle just screamed "hack". But first
 impressions can be misleading.
 No doubt as a C-family language, they have some useful ideas.
 But if Christopher's analysis is correct, the "macro" bit is different
 to the "plugin" bit. I think allowing the ASTs to be _modified_ by
 plugins is the path to madness, but a read-only ABI is OK (it's hard to
 see how compile-time reflection is possible without creating some kind
 of API).
Modifying the AST is dangerous, but how would you do things like making a class implement an interface without modifying the list of interfaces the class implements?

[serialize]
class Foo {...}

The Nemerle macro above transforms this into:

class Foo : Serializable { ... }

What would be your design for this?
Oct 13 2009
parent reply Don <nospam nospam.com> writes:
Yigal Chripun wrote:
 [snip]
 Modifying the AST is dangerous, but how would you do things like making a class implement an interface without modifying the list of interfaces the class implements?

 [serialize]
 class Foo {...}

 The Nemerle macro above transforms this into:

 class Foo : Serializable { ... }

 What would be your design for this?
I don't think it should be done that way. Either it should be intrusive (eg, require you to derive from Serializable), or else entirely external (and operate via reflection). Eg, possibly by declaring Serializable!(Foo) after the definition of Foo. It's an excellent question though, we might need some language changes to get a good solution. But I think it's very important to enforce that the only way to modify the AST is indirect, through code.

One of the strengths I see of the string mixins, despite their syntactic ugliness, is that metaprogramming transformations are only ever syntax sugar: there is _always_ a source-code equivalent; moreover, it's trivially available, simply by expanding all of the mixins (and Descent will show it to you). It's not clear to me that that remains true if you can manipulate the AST directly.

Of course, if you allow direct access to the AST, you have unlimited power. Capturing as much of the power as possible, without the danger, is the challenge.
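The "entirely external (and operate via reflection)" option can already be approximated today; a minimal sketch, where toCsv is a made-up name and a real serializer would also need recursion, escaping and base-class fields:

module external_serialization;

import std.conv : to;

class Foo
{
    int    id;
    string name;
}

// A free generic function, written after (and outside) Foo: it walks the
// fields through .tupleof, so Foo itself never has to opt in or inherit.
string toCsv(T)(T obj)
{
    string line;
    foreach (i, field; obj.tupleof)
    {
        if (i) line ~= ",";
        line ~= to!string(field);
    }
    return line;
}

void main()
{
    auto f = new Foo;
    f.id   = 7;
    f.name = "seven";
    assert(toCsv(f) == "7,seven");
}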
Oct 14 2009
parent Yigal Chripun <yigal100 gmail.com> writes:
Don Wrote:

 [snip]
I agree that there should be a source code equivalent for macros - otherwise it probably would be hard to debug. But I think we agree that at least a read-only API would be very useful, much better than the current bag-o-hacks and without the risks. I know the Nemerle devs are working on improving their macro system implementation. I'll ask in their NG about this issue.
Oct 14 2009
prev sibling parent Christopher Wright <dhasenan gmail.com> writes:
Yigal Chripun wrote:
 [snip]
What do you mean by object model? They have a syntax to manipulate the AST: <[ some code ]> would be parsed by the compiler as the AST of "some code" and would be represented internally by the compiler-specific AST representation.
I looked up Nemerle macros after this. There are a couple of parts.

1. AST mixins
It's a lot like string mixins with builtin string formatting and automatic conversion of arguments to their string form. Syntactic sugar on top of this. That's all that macros are. Yes, you can manipulate the AST, but at this stage, it's entirely opaque.

2. Compiler plugins
You can define a compiler module that does arbitrary things to the AST. Many modules will make use of macros. The Nemerle compiler might attempt to conflate plugins with macros, but if there were an alternate implementation of Nemerle, the difference would become very apparent very quickly.

AST mixins are sexy. Compiler plugins are also sexy[1], but targeted toward a much different audience. D could benefit from both, but the latter is far lower on the list than a decent compile-time reflection system. And of this, only compiler plugins have the issues that I mentioned earlier.

[1] Unless you're Richard Stallman. Onoz, someone could use a proprietary plugin with GCC!
Oct 10 2009
prev sibling parent reply Bill Baxter <wbaxter gmail.com> writes:
 On Wed, Oct 7, 2009 at 11:21 AM, Don <nospam nospam.com> wrote:
 By contrast, something like Nemerle macros are a kludge. The idea of
 providing a 'hook' into the compiler is a horrible hack. It exposes all
 kinds of compiler internals. Yes, it has nicer syntax.
Are you talking specifically about the ability to define new syntax? Because it looks to me that one can use Nemerle macros just fine without defining new syntax. I'm getting that from here: http://nemerle.org/Macros_tutorial

Here's just a simple macro that adds no new syntax from that page:

macro m () {
  Nemerle.IO.printf ("compile-time\n");
  <[ Nemerle.IO.printf ("run-time\n") ]>;
}

module M {
  public Main () : void {
    m ();
  }
}

That seems significantly more elegant to me than

string m() {
  pragma(msg, "compile-time");
  return q{writefln("run-time");};
}

void main() {
  mixin(m());
}

So it looks to me like the mechanics of it are basically identical. Just Nemerle's syntax is nicer.

If you want to condemn Nemerle's ability to define new syntax, I think that should be taken up as a separate matter.

--bb
Oct 07 2009
parent reply Don <nospam nospam.com> writes:
Bill Baxter wrote:
 On Wed, Oct 7, 2009 at 11:21 AM, Don <nospam nospam.com> wrote:
 By contrast, something like Nemerle macros are a kludge. The idea of
 providing a 'hook' into the compiler is a horrible hack. It exposes all
 kinds of compiler internals. Yes, it has nicer syntax.
Are you talking specifically about the ability to define new syntax? [examples snipped] So it looks to me like the mechanics of it are basically identical. Just Nemerle's syntax is nicer.
Only with trivial examples. With more complicated examples they look less identical. I'm basing my views on pages like this:
http://nemerle.org/Macros_-_extended_course._Part_2

Unless I'm totally misunderstanding this, it looks to me as though Nemerle macros are implemented as compiler plugins. All the advanced facilities are obtained by exposing the compiler's API! I personally think that is an utterly revolting thing to add to a language. Compare with macros in Lisp and Forth.
 If you want to condem Nemerle's ability to define new syntax, I think
 that should be taken up as a separate matter.
I do think it's a profoundly bad idea in a C-like language, but it's not what I'm referring to here.
Oct 08 2009
parent reply Jarrett Billingsley <jarrett.billingsley gmail.com> writes:
On Thu, Oct 8, 2009 at 4:00 AM, Don <nospam nospam.com> wrote:

 So it looks to me like the mechanics of it are basically identical.
 Just Nemerle's syntax is nicer.
Only with trivial examples. With more complicated examples they look less identical. I'm basing my views on pages like this: http://nemerle.org/Macros_-_extended_course._Part_2 Unless I'm totally misunderstanding this, it looks to me as though Nemerle macros are implemented as compiler plugins. All the advanced facilities are obtained by exposing the compiler's API!
Well they're not.. "plugins" per se as much as "compiled modules." Okay yes that's basically a plugin :P But it's not that different from D, where you use compile-time executed functions to do the same sorts of things. It's just that you precompile those functions instead of having the compiler "compile" them on every compilation.

But really, I don't see how this is significantly different from hooking into the D compiler's internals with __traits, .stringof, .mangleof and the like. So it uses an object-oriented API to access those things instead of ad-hoc hacks. And?

I don't know how you can trash Nemerle's approach while leaving D's unmentioned.
Oct 08 2009
parent reply Don <nospam nospam.com> writes:
Jarrett Billingsley wrote:
 On Thu, Oct 8, 2009 at 4:00 AM, Don <nospam nospam.com> wrote:
 
 So it looks to me like the mechanics of it are basically identical.
 Just Nemerle's syntax is nicer.
Only with trivial examples. With more complicated examples they look less identical. I'm basing my views on pages like this: http://nemerle.org/Macros_-_extended_course._Part_2 Unless I'm totally misunderstanding this, it looks to me as though Nemerle macros are implemented as compiler plugins. All the advanced facilities are obtained by exposing the compiler's API!
Well they're not.. "plugins" per se as much as "compiled modules." Okay yes that's basically a plugin :P But it's not that different from D, where you use compile-time executed functions to do the same sorts of things. It's just that you precompile those functions instead of having the compiler "compile" them on every compilation.
No. CTFE is simply taking constant-folding to its logical conclusion.
 But really, I don't see how this is significantly different from
 hooking into the D compiler's internals with __traits, .stringof,
 .mangleof and the like. So it uses an object-oriented API to access
 those things instead of ad-hoc hacks. And? 
The thing I think is elegant about D's approach, ugly as the syntax currently is, is the complete separation of the lex - parse - semantic - codegen phases. And I think CTFE is fantastic (and I plan to fix it so it works properly). Think about how easy it is to explain.

I'm not a fan of is(typeof()), .stringof and __traits in their current form. They are hackish indeed, and weren't originally intended for macro development. (Actually .stringof isn't hackish, just buggy and unspecified.) BUT they demonstrate the benefit of the separate compilation phases. The fundamentals are strong.
 I don't know how you can trash Nemerle's approach while leaving D's unmentioned.

What do you mean, 'unmentioned'? Hey, you started this by trashing D's approach!
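For reference, the constant-folding view of CTFE mentioned above comes down to this; a minimal sketch with a made-up factorial function:

module ctfe_constant_folding;

// One ordinary function, no annotations of any kind.
int factorial(int n) { return n <= 1 ? 1 : n * factorial(n - 1); }

// Used where a constant is required, so the compiler evaluates it itself.
enum atCompileTime = factorial(10);
static assert(atCompileTime == 3_628_800);

void main()
{
    // The same source, evaluated normally at run time.
    auto atRunTime = factorial(10);
    assert(atRunTime == atCompileTime);
}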
Oct 08 2009
next sibling parent reply Jarrett Billingsley <jarrett.billingsley gmail.com> writes:
On Thu, Oct 8, 2009 at 11:25 AM, Don <nospam nospam.com> wrote:
 Jarrett Billingsley wrote:
 On Thu, Oct 8, 2009 at 4:00 AM, Don <nospam nospam.com> wrote:

 [snip]
No. CTFE is simply taking constant-folding to its logical conclusion.
All the Nemerle *implementation* is doing differently here is having you precompile the functions that are to be executed at compile-time. That's it. The resultant semantics are utterly the same. You are running code at compile-time, it doesn't matter if that code is running on the hardware or in a VM in the compiler.
 But really, I don't see how this is significantly different from
 hooking into the D compiler's internals with __traits, .stringof,
 .mangleof and the like. So it uses an object-oriented API to access
 those things instead of ad-hoc hacks. And?
The thing I think is elegant about D's approach, ugly as the syntax currently is, is the complete separation of the lex - parse - semantic - codegen phases. And I think CTFE is fantastic (and I plan to fix it so it works properly). Think about how easy it is to explain.
And Nemerle's macros are *hard* to explain? They're a function that executes at compile time. Oh, wait! That's exactly the same as CTFE.
 I don't know how you can trash Nemerle's approach while leaving D's
 unmentioned.
What do you mean, 'unmentioned'? Hey, you started this by trashing D's approach!
I'm not talking about me. I'm talking about you. I don't know how you can trash a language with macros that were designed *from the bottom up* to be used as such and which are treated as first-class citizens, while not admitting the hilariously ad-hoc nature of a language where macros fall out as a consequence of a number of other ill-defined, poorly-implemented, unorthogonal features.

Sigh, I'm done.
Oct 08 2009
parent Ary Borenszweig <ary esperanto.org.ar> writes:
Jarrett Billingsley wrote:
 [snip]
I agree with Jarrett here. Also, seeing how some things are implemented in D using CTFE and .stringof, the parsing is very complex to understand. I mean, I read the code and it's very hard for me to understand what's going on. Especially because most of the time it's parsing strings and extracting information, and I don't know what format it comes in in the first place; it's just guess and work around it.
Oct 08 2009
prev sibling parent Yigal Chripun <yigal100 gmail.com> writes:
On 08/10/2009 17:25, Don wrote:
 [snip]
The thing I think is elegant about D's approach, ugly as the syntax currently is, is the complete separation of the lex - parse - semantic - codegen phases. And I think CTFE is fantastic (and I plan to fix it so it works properly). Think about how easy it is to explain.
What about Nemerle's macro system design (I'm not talking about their specific implementation of it) conflicts with D's complete separation of lex -> parse -> semantic phases? The phases can still be completely separate *but* extensible by the macro system.

BTW, regarding this design aspect of DMD and D, the dragon book (second edition) says on page 966:

<quote>
Object-Oriented Versus Phase-Oriented

With an object-oriented approach, all the code for a construct is collected in the class for the construct. Alternatively, with a phase-oriented approach the code is grouped by phase, so a type checking procedure would have a case for each construct and a code generation procedure would have a case for each construct, and so on.

The tradeoff is that an object-oriented approach makes it easier to change or add a construct, such as "for" statements, and a phase-oriented approach makes it easier to change or add a phase, such as type checking.
</quote>
 I'm not a fan of is(typeof()) .stringof and __traits in their current
 form. They are hackish indeed, and weren't originally intended for macro
 development. (Actually .stringof isn't hackish, just buggy and
 unspecified). BUT they demonstrate the benefit of the seperate
 compilation phases. The fundamentals are strong.

  > I don't know how you can trash Nemerle's approach while leaving D's
 unmentioned.

 What do you mean, 'unmentioned'? Hey, you started this by trashing D's
 approach!
Oct 09 2009