digitalmars.D - Should operator overload methods be virtual?
- Walter Bright (6/6) Nov 27 2009 Making them not virtual would also make them not overridable, they'd all...
- Denis Koroskin (4/11) Nov 27 2009 I thought operator overloading was going to be implemented via templates...
- Walter Bright (2/5) Nov 28 2009 Yes, that's the rationale. I'm looking for a hole in it.
- dsimcha (5/11) Nov 27 2009 What would making them non-virtual accomplish? I don't think making the...
- bearophile (4/6) Nov 28 2009 Right, it's an exception to a rule of the language, so it increases the ...
- dsimcha (8/23) Nov 28 2009 Ok, well then how does making operator overloads implicitly final improv...
- retard (5/13) Nov 27 2009 Is this again one of those features that is supposed to hide the fact
- dsimcha (7/20) Nov 27 2009 If so, I think it's a bad idea.
- Robert Jacques (5/18) Nov 27 2009 Yes and no. Yes, DMD doesn't have link time optimization (LTO), which is...
- Walter Bright (19/23) Nov 28 2009 The gnu linker (ld) does not do any optimizations of virtual call =>
- Leandro Lucarella (17/25) Nov 29 2009 The *new* GNU Linker (gold) does (with plug-ins, both GCC and LLVM
- Walter Bright (2/18) Dec 01 2009 I don't see that particular one in the links.
- Leandro Lucarella (12/30) Dec 01 2009 Well, I was talking about link-time optimization in general, not virtual
- Steven Schveighoffer (7/14) Dec 01 2009 I use virtual operator overloads in dcollections. Such as opCat and
- Andrei Alexandrescu (3/22) Dec 01 2009 Would you put up with a couple of forwarding functions?
- Steven Schveighoffer (17/36) Dec 01 2009 Well, I'd certainly put up with it if I had no choice :) But if I had a...
- dsimcha (5/8) Dec 01 2009 What is the sudden obsession with code bloat here lately? Check out thi...
- Steven Schveighoffer (28/36) Dec 01 2009 If I'm writing template code effectively as a "macro" meaning "call this...
- dsimcha (8/45) Dec 01 2009 No, I agree. Space efficiency does matter. I've certainly jumped throu...
- retard (13/56) Dec 01 2009 If it leaks 200 MB per day, people can already run it for a month on a
- Steven Schveighoffer (27/39) Dec 02 2009 Notice I said XP. this system had 500MB of RAM, it's not a new system. ...
- retard (20/26) Dec 02 2009 Ok, if they accept those long boot times, you can waste even more memory...
- Don (6/50) Dec 01 2009 Most of the bloat in my experience comes from that ruddy int->ulong
Making them not virtual would also make them not overridable, they'd all be implicitly final. Is there any compelling use case for virtual operator overloads? Keep in mind that any non-virtual function can still be a wrapper for another virtual method, so it is still possible (with a bit of extra work) for a class to have virtual operator overloads. It just wouldn't be the default.
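As a rough sketch of the wrapper idea (the name addImpl is invented here for illustration; any ordinary virtual method would do), the operator itself can be final while still dispatching through a virtual hook:

    class Matrix
    {
        // ordinary virtual method; derived classes may override it
        protected Matrix addImpl(Matrix rhs)
        {
            return rhs;   // placeholder behavior for the sketch
        }

        // the operator overload itself is final (non-virtual)...
        final Matrix opAdd(Matrix rhs)
        {
            // ...but it forwards to the virtual hook, so dispatch still happens
            return addImpl(rhs);
        }
    }

    class SparseMatrix : Matrix
    {
        protected override Matrix addImpl(Matrix rhs)
        {
            return this;   // specialized behavior for the sketch
        }
    }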
Nov 27 2009
On Sat, 28 Nov 2009 02:32:21 +0300, Walter Bright <newshound1 digitalmars.com> wrote:
> Making them not virtual would also make them not overridable, they'd all be implicitly final. Is there any compelling use case for virtual operator overloads? Keep in mind that any non-virtual function can still be a wrapper for another virtual method, so it is still possible (with a bit of extra work) for a class to have virtual operator overloads. It just wouldn't be the default.

I thought operator overloading was going to be implemented via templates. As such, they are non-virtual by default, which is okay, in my opinion.
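For reference, a minimal sketch of what the template-based scheme looks like in code (using the opBinary spelling; the exact names were still being settled at the time, so treat this as illustrative rather than the final design). The operator expression is lowered to a template member call, and a templated member is implicitly non-virtual even inside a class:

    class Vec2
    {
        double x = 0, y = 0;

        // Even in a class, a templated member cannot go in the vtable, so it
        // is implicitly final; this is where "non-virtual by default" comes from.
        Vec2 opBinary(string op)(Vec2 rhs) if (op == "+" || op == "-")
        {
            auto r = new Vec2;
            r.x = mixin("x " ~ op ~ " rhs.x");
            r.y = mixin("y " ~ op ~ " rhs.y");
            return r;
        }
    }

    void main()
    {
        auto a = new Vec2; a.x = 1; a.y = 2;
        auto b = new Vec2; b.x = 3; b.y = 4;
        auto c = a + b;   // lowered to a.opBinary!"+"(b), a non-virtual call
        assert(c.x == 4 && c.y == 6);
    }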
Nov 27 2009
Denis Koroskin wrote:
> I thought operator overloading was going to be implemented via templates. As such, they are non-virtual by default, which is okay, in my opinion.

Yes, that's the rationale. I'm looking for a hole in it.
Nov 28 2009
== Quote from Walter Bright (newshound1 digitalmars.com)'s article
> Making them not virtual would also make them not overridable, they'd all be implicitly final. Is there any compelling use case for virtual operator overloads? Keep in mind that any non-virtual function can still be a wrapper for another virtual method, so it is still possible (with a bit of extra work) for a class to have virtual operator overloads. It just wouldn't be the default.

What would making them non-virtual accomplish? I don't think making them non-virtual would hurt too much in practice, but it would introduce an inconsistency into the language relative to "regular" methods. Therefore, I don't think it should be done without a very good reason.
Nov 27 2009
dsimcha:
> but it would introduce an inconsistency into the language relative to "regular" methods.

Right, it's an exception to a rule of the language, so it increases the language complexity.

Bye,
bearophile
Nov 28 2009
== Quote from retard (re tard.com.invalid)'s article
> Sat, 28 Nov 2009 08:16:33 -0500, bearophile wrote:
>> dsimcha:
>>> but it would introduce an inconsistency into the language relative to "regular" methods.
>> Right, it's an exception to a rule of the language, so it increases the language complexity.
> I guess the systems programming language users more often think that 'more executable bloat when compiled with the currently available practical real world tools, the more complex the language in practical real world use'. So if there's some tiny little feature that saves you 1-2 cpu cycles in practical real world systems programming applications or makes building a practical real world non-academic commercial compiler a bit easier and thus provides more practical value to the paying customer, the language should include that feature.

Ok, well then how does making operator overloads implicitly final improve over being consistent with the rest of the language and making them explicitly final if you want them final? Note: I'm not against making overloading non-virtual if it's implemented with templates, because this is non-arbitrary and consistent with the rest of the language. I'm only against it if it's done arbitrarily by treating operator overload functions as "special" in this regard.
Nov 28 2009
Fri, 27 Nov 2009 15:32:21 -0800, Walter Bright wrote:
> Making them not virtual would also make them not overridable, they'd all be implicitly final. Is there any compelling use case for virtual operator overloads? Keep in mind that any non-virtual function can still be a wrapper for another virtual method, so it is still possible (with a bit of extra work) for a class to have virtual operator overloads. It just wouldn't be the default.

Is this again one of those features that is supposed to hide the fact that dmd & optlink toolchain sucks? At least gcc can optimize the calls in most cases where the operator is defined to be virtual, but is used in non-polymorphic manner.
Nov 27 2009
== Quote from retard (re tard.com.invalid)'s article
> Fri, 27 Nov 2009 15:32:21 -0800, Walter Bright wrote:
>> Making them not virtual would also make them not overridable, they'd all be implicitly final. Is there any compelling use case for virtual operator overloads? Keep in mind that any non-virtual function can still be a wrapper for another virtual method, so it is still possible (with a bit of extra work) for a class to have virtual operator overloads. It just wouldn't be the default.
> Is this again one of those features that is supposed to hide the fact that dmd & optlink toolchain sucks? At least gcc can optimize the calls in most cases where the operator is defined to be virtual, but is used in non-polymorphic manner.

If so, I think it's a bad idea.

1. Eventually, we will get a better optimizer. GDC has been resurrected, and after D2 is finalized and all of the more severe bugs are fixed, hopefully Walter will have some time to focus on performance issues.

2. This optimization can trivially be done manually by declaring the overloads final. What would we gain by introducing the inconsistency with "normal" methods?
Nov 27 2009
On Fri, 27 Nov 2009 22:58:00 -0500, retard <re tard.com.invalid> wrote:
> Fri, 27 Nov 2009 15:32:21 -0800, Walter Bright wrote:
>> Making them not virtual would also make them not overridable, they'd all be implicitly final. Is there any compelling use case for virtual operator overloads? Keep in mind that any non-virtual function can still be a wrapper for another virtual method, so it is still possible (with a bit of extra work) for a class to have virtual operator overloads. It just wouldn't be the default.
> Is this again one of those features that is supposed to hide the fact that dmd & optlink toolchain sucks? At least gcc can optimize the calls in most cases where the operator is defined to be virtual, but is used in non-polymorphic manner.

Yes and no. Yes, DMD doesn't have link time optimization (LTO), which is what enables this. No, because LTO can't do this optimization in many cases, such as creating/using a DLL/shared object. (Static libraries might also have some issues, but I'm not sure.)
Nov 27 2009
retard wrote:
> Is this again one of those features that is supposed to hide the fact that dmd & optlink toolchain sucks? At least gcc can optimize the calls in most cases where the operator is defined to be virtual, but is used in non-polymorphic manner.

The gnu linker (ld) does not do any optimizations of virtual call => direct call. Optlink has nothing to do with it.

    struct C
    {
        virtual int foo() { return 3; }
    };

    void bar(C* c)
    {
        c->foo();    // <== no virtual call optimization here (1)
    }

    int main()
    {
        C* c = new C();
        c->foo();    // <== virtual call optimization here (2)
        bar(c);
        return 0;
    }

What D doesn't do is (2). What D does do, and C++ does not, is allow one to specify a class is final or a method is final, and then both (1) and (2) will be optimized to direct calls. Doing (2) is entirely a function of the front end, not the linker.
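A rough D analogue of the same point (a sketch only): marking the method final, or the whole class final, lets the front end turn both call sites into direct calls, with no help from the linker.

    class C
    {
        final int foo() { return 3; }   // final: no vtable dispatch required
    }

    void bar(C c)
    {
        c.foo();   // (1) compiles to a direct call, because foo is final
    }

    void main()
    {
        auto c = new C;
        c.foo();   // (2) direct call as well
        bar(c);
    }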
Nov 28 2009
Walter Bright wrote, on 28 November at 13:31:
> retard wrote:
>> Is this again one of those features that is supposed to hide the fact that dmd & optlink toolchain sucks? At least gcc can optimize the calls in most cases where the operator is defined to be virtual, but is used in non-polymorphic manner.
> The gnu linker (ld) does not do any optimizations of virtual call => direct call. Optlink has nothing to do with it.

The *new* GNU linker (gold) does (with plug-ins; both GCC and LLVM provide plug-ins for gold to do LTO). See:

http://gcc.gnu.org/wiki/LinkTimeOptimization
http://llvm.org/docs/GoldPlugin.html

--
Leandro Lucarella (AKA luca)                     http://llucax.com.ar/
Nov 29 2009
Leandro Lucarella wrote:
> Walter Bright wrote, on 28 November at 13:31:
>> retard wrote:
>>> Is this again one of those features that is supposed to hide the fact that dmd & optlink toolchain sucks? At least gcc can optimize the calls in most cases where the operator is defined to be virtual, but is used in non-polymorphic manner.
>> The gnu linker (ld) does not do any optimizations of virtual call => direct call. Optlink has nothing to do with it.
> The *new* GNU linker (gold) does (with plug-ins; both GCC and LLVM provide plug-ins for gold to do LTO). See:
> http://gcc.gnu.org/wiki/LinkTimeOptimization
> http://llvm.org/docs/GoldPlugin.html

I don't see that particular one in the links.
Dec 01 2009
Walter Bright wrote, on 1 December at 11:17:
> Leandro Lucarella wrote:
>> Walter Bright wrote, on 28 November at 13:31:
>>> retard wrote:
>>>> Is this again one of those features that is supposed to hide the fact that dmd & optlink toolchain sucks? At least gcc can optimize the calls in most cases where the operator is defined to be virtual, but is used in non-polymorphic manner.
>>> The gnu linker (ld) does not do any optimizations of virtual call => direct call. Optlink has nothing to do with it.
>> The *new* GNU linker (gold) does (with plug-ins; both GCC and LLVM provide plug-ins for gold to do LTO). See:
>> http://gcc.gnu.org/wiki/LinkTimeOptimization
>> http://llvm.org/docs/GoldPlugin.html
> I don't see that particular one in the links.

Well, I was talking about link-time optimization in general, not virtual call elimination in particular :). I don't know exactly what kind of optimizations are supported currently, but bear in mind this is all very new (gold and the LTO plug-ins)...

--
Leandro Lucarella (AKA luca)                     http://llucax.com.ar/
Dec 01 2009
On Fri, 27 Nov 2009 18:32:21 -0500, Walter Bright <newshound1 digitalmars.com> wrote:
> Making them not virtual would also make them not overridable, they'd all be implicitly final. Is there any compelling use case for virtual operator overloads? Keep in mind that any non-virtual function can still be a wrapper for another virtual method, so it is still possible (with a bit of extra work) for a class to have virtual operator overloads. It just wouldn't be the default.

I use virtual operator overloads in dcollections. Such as opCat and opAppend.

    collection1 ~= collection2; // 2 different collection types, using interfaces instead of templates to avoid code bloat.

Also, opApply should be by default virtual, since it's not a true operator.
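An illustrative sketch of that usage (not dcollections' actual API; the type and method names are invented): with the operator declared on an interface, the ~= above is a single virtual call and works across different concrete collection types, with no template instantiation per pair of types.

    interface Collection(T)
    {
        Collection!(T) opCatAssign(Collection!(T) other); // virtual ~=
        int opApply(int delegate(ref T) dg);              // virtual foreach support
    }

    class LinkedList(T) : Collection!(T)
    {
        private T[] items;   // simplified storage, just for the sketch

        Collection!(T) opCatAssign(Collection!(T) other)
        {
            foreach (ref v; other)   // iterates via the interface's opApply
                items ~= v;
            return this;
        }

        int opApply(int delegate(ref T) dg)
        {
            foreach (ref v; items)
            {
                if (auto r = dg(v)) return r;
            }
            return 0;
        }
    }

    void main()
    {
        Collection!(int) a = new LinkedList!(int);
        Collection!(int) b = new LinkedList!(int);
        a ~= b;   // one virtual call, whatever the concrete types are
    }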
Dec 01 2009
Steven Schveighoffer wrote:
> On Fri, 27 Nov 2009 18:32:21 -0500, Walter Bright <newshound1 digitalmars.com> wrote:
>> Making them not virtual would also make them not overridable, they'd all be implicitly final. Is there any compelling use case for virtual operator overloads? Keep in mind that any non-virtual function can still be a wrapper for another virtual method, so it is still possible (with a bit of extra work) for a class to have virtual operator overloads. It just wouldn't be the default.
> I use virtual operator overloads in dcollections. Such as opCat and opAppend.
> collection1 ~= collection2; // 2 different collection types, using interfaces instead of templates to avoid code bloat.
> Also, opApply should be by default virtual, since it's not a true operator.

Would you put up with a couple of forwarding functions?

Andrei
Dec 01 2009
On Tue, 01 Dec 2009 13:53:37 -0500, Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> wrote:
> Steven Schveighoffer wrote:
>> On Fri, 27 Nov 2009 18:32:21 -0500, Walter Bright <newshound1 digitalmars.com> wrote:
>>> Making them not virtual would also make them not overridable, they'd all be implicitly final. Is there any compelling use case for virtual operator overloads? Keep in mind that any non-virtual function can still be a wrapper for another virtual method, so it is still possible (with a bit of extra work) for a class to have virtual operator overloads. It just wouldn't be the default.
>> I use virtual operator overloads in dcollections. Such as opCat and opAppend.
>> collection1 ~= collection2; // 2 different collection types, using interfaces instead of templates to avoid code bloat.
>> Also, opApply should be by default virtual, since it's not a true operator.
> Would you put up with a couple of forwarding functions?

Well, I'd certainly put up with it if I had no choice :) But if I had a choice, I'd choose to keep them virtual. I have little need for defining bulk operators with templates and mixins, my usage is mainly going to be separate implementations for each operator.

If the compiler could somehow optimize out all instances of the template function to reduce bloat, I think that would make it a little less annoying.

One more thing I wonder, can you alias template instantiations? For example, I have code like this:

    struct S
    {
        alias opAdd add;
        void opAdd(int x);
    }

How does one do that when opAdd is a template with an argument?

-Steve
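One plausible spelling, assuming the template-based operators take the opBinary form and assuming the compiler accepts an alias of a member template instantiation (a sketch, not a confirmed answer to the question):

    struct S
    {
        void opBinary(string op)(int x) if (op == "+")
        {
            // ...
        }

        // alias one concrete instantiation under a readable name
        alias opBinary!("+") add;
    }

    void main()
    {
        S s;
        s + 1;      // rewritten to s.opBinary!("+")(1)
        s.add(1);   // the same instantiation, reached through the alias
    }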
Dec 01 2009
== Quote from Steven Schveighoffer (schveiguy yahoo.com)'s article
> If the compiler could somehow optimize out all instances of the template function to reduce bloat, I think that would make it a little less annoying.

What is the sudden obsession with code bloat here lately? Check out this StackOverflow question that I posed a few weeks ago. If anyone has a decent answer to it, I'd love to hear it.

http://stackoverflow.com/questions/1771692/when-does-template-instantiation-bloat-matter-in-practice
Dec 01 2009
On Tue, 01 Dec 2009 16:28:14 -0500, dsimcha <dsimcha yahoo.com> wrote:
> == Quote from Steven Schveighoffer (schveiguy yahoo.com)'s article
>> If the compiler could somehow optimize out all instances of the template function to reduce bloat, I think that would make it a little less annoying.
> What is the sudden obsession with code bloat here lately? Check out this StackOverflow question that I posed a few weeks ago. If anyone has a decent answer to it, I'd love to hear it.

If I'm writing template code effectively as a "macro" meaning "call this virtual method", then there is no point in having template code whatsoever. If I'm forced to write it because the compiler will only call a template, then I would like for the compiler to optimize out its "mistake". Then I have no problem with it, because the net effect on the binary performance and size should be zero. Even if I have to annotate the function to force it, that is fine with me.

Larger programs take more memory to run, and longer to load. Not that my D programs need to squeeze every ounce of power out of the system, but I think nowadays there's too little emphasis on executable size optimization (or even memory consumption).

An anecdote on bloatage: I once had a driver for XP for my wireless USB network adapter that put an icon on the task tray, consuming roughly 10MB of memory. Yep, to put an icon on the task tray, it needed 10MB. Just in case I ever wanted to click on that icon to set up my wireless network (which I would never do because once it's set up, I'm done). As a bonus, every week or so, some kind of memory leak would trigger, and it would consume about 200MB of memory before my system started thrashing and I had to kill the icon. I tried to disable it and use Windows to configure my wireless card, and then it used 10MB to put a *grayed out icon* in the tray (which would continue the bonus plan). I finally had to hunt down the offending executable and rename it to prevent it from starting. And guess what? The wireless adapter worked flawlessly.

It's shit like this that pisses me off when people say "oh, bloat is a thing of the past, you get so much memory and cpu nowadays, you don't even notice it." All those little 10MB programs add up pretty quickly.

-Steve
Dec 01 2009
== Quote from Steven Schveighoffer (schveiguy yahoo.com)'s article
> On Tue, 01 Dec 2009 16:28:14 -0500, dsimcha <dsimcha yahoo.com> wrote:
>> == Quote from Steven Schveighoffer (schveiguy yahoo.com)'s article
>>> If the compiler could somehow optimize out all instances of the template function to reduce bloat, I think that would make it a little less annoying.
>> What is the sudden obsession with code bloat here lately? Check out this StackOverflow question that I posed a few weeks ago. If anyone has a decent answer to it, I'd love to hear it.
> If I'm writing template code effectively as a "macro" meaning "call this virtual method", then there is no point in having template code whatsoever. If I'm forced to write it because the compiler will only call a template, then I would like for the compiler to optimize out its "mistake". Then I have no problem with it, because the net effect on the binary performance and size should be zero. Even if I have to annotate the function to force it, that is fine with me.
> Larger programs take more memory to run, and longer to load. Not that my D programs need to squeeze every ounce of power out of the system, but I think nowadays there's too little emphasis on executable size optimization (or even memory consumption).
> An anecdote on bloatage: I once had a driver for XP for my wireless USB network adapter that put an icon on the task tray, consuming roughly 10MB of memory. Yep, to put an icon on the task tray, it needed 10MB. Just in case I ever wanted to click on that icon to set up my wireless network (which I would never do because once it's set up, I'm done). As a bonus, every week or so, some kind of memory leak would trigger, and it would consume about 200MB of memory before my system started thrashing and I had to kill the icon. I tried to disable it and use Windows to configure my wireless card, and then it used 10MB to put a *grayed out icon* in the tray (which would continue the bonus plan). I finally had to hunt down the offending executable and rename it to prevent it from starting. And guess what? The wireless adapter worked flawlessly.
> It's shit like this that pisses me off when people say "oh, bloat is a thing of the past, you get so much memory and cpu nowadays, you don't even notice it." All those little 10MB programs add up pretty quickly.
> -Steve

No, I agree. Space efficiency does matter. I've certainly jumped through some serious hoops to make my code more space efficient when dealing with large datasets. The thing is that, at least in my experience, in any modern non-embedded program large enough for space efficiency to matter, the space requirements are dominated by data, not code. Therefore, I use as many templates as I feel like and don't worry about it, and when I think about space efficiency, I think about representing my data efficiently.
Dec 01 2009
Tue, 01 Dec 2009 16:48:34 -0500, Steven Schveighoffer wrote:
> On Tue, 01 Dec 2009 16:28:14 -0500, dsimcha <dsimcha yahoo.com> wrote:
>> == Quote from Steven Schveighoffer (schveiguy yahoo.com)'s article
>>> If the compiler could somehow optimize out all instances of the template function to reduce bloat, I think that would make it a little less annoying.
>> What is the sudden obsession with code bloat here lately? Check out this StackOverflow question that I posed a few weeks ago. If anyone has a decent answer to it, I'd love to hear it.
> If I'm writing template code effectively as a "macro" meaning "call this virtual method", then there is no point in having template code whatsoever. If I'm forced to write it because the compiler will only call a template, then I would like for the compiler to optimize out its "mistake". Then I have no problem with it, because the net effect on the binary performance and size should be zero. Even if I have to annotate the function to force it, that is fine with me.
> Larger programs take more memory to run, and longer to load. Not that my D programs need to squeeze every ounce of power out of the system, but I think nowadays there's too little emphasis on executable size optimization (or even memory consumption).
> An anecdote on bloatage: I once had a driver for XP for my wireless USB network adapter that put an icon on the task tray, consuming roughly 10MB of memory. Yep, to put an icon on the task tray, it needed 10MB. Just in case I ever wanted to click on that icon to set up my wireless network (which I would never do because once it's set up, I'm done). As a bonus, every week or so, some kind of memory leak would trigger, and it would consume about 200MB of memory before my system started thrashing and I had to kill the icon. I tried to disable it and use Windows to configure my wireless card, and then it used 10MB to put a *grayed out icon* in the tray (which would continue the bonus plan). I finally had to hunt down the offending executable and rename it to prevent it from starting. And guess what? The wireless adapter worked flawlessly.
> It's shit like this that pisses me off when people say "oh, bloat is a thing of the past, you get so much memory and cpu nowadays, you don't even notice it." All those little 10MB programs add up pretty quickly.

If it leaks 200 MB per day, people can already run it for a month on a typical home PC before the machine runs out of physical memory (assuming 8GB physical RAM like most of my friends have these days on their $500-600 systems).

A typical user reboots every day so a program can freely leak at least 7 gigs per day (during an 8h work day); that's 15 MB per minute or 250 kB per second. According to Moore's law the leak rate can grow exponentially. So in 2013 your typical taskbar apps leak at least one megabyte per second and most users are still happy. With a RAM upgrade they can use apps that leak 4+ MB per second. As users tend to restart programs when the system starts running slowly, the shorter uptime of apps means that they can leak a lot more.
Dec 01 2009
On Wed, 02 Dec 2009 01:36:54 -0500, retard <re tard.com.invalid> wrote:
> If it leaks 200 MB per day, people can already run it for a month on a typical home PC before the machine runs out of physical memory (assuming 8GB physical RAM like most of my friends have these days on their $500-600 systems).

Notice I said XP. This system had 500MB of RAM, it's not a new system. AFAIK, XP doesn't even *support* more than 4GB of RAM (and I don't think my chipset would support more than 1G). 200MB is probably the most the OS would give it, because I think my typical idle memory usage was 400MB. Let's just say instead of 200MB, it uses whatever memory was left to consume, ok?

But the memory leak isn't the biggest issue, that is clearly a bug and not a feature. The problem I have is the 10MB of memory it uses to put an icon on the task tray. I see loads of these icons all the time on other people's computers, all using up huge chunks of memory so they can instantaneously check for the latest logitech driver for their keyboard (oooh! what new awesome amazing things will my keyboard be able to do with this upgrade!). It's the computer equivalent of hiring a team of people around you 24/7, and some of those team members' *ONLY* job is to give you a q-tip in case you want it. And Moore's law seems to apply to moronic icon developers as well -- the more memory available, the bloatier they make their nifty task tray icons. "Hey, Windows 7 supports an alpha channel! Let's make the icon [that nobody ever uses] fade in and out!"

> A typical user reboots every day so a program can freely leak at least 7 gigs per day (during an 8h work day); that's 15 MB per minute or 250 kB per second. According to Moore's law the leak rate can grow exponentially. So in 2013 your typical taskbar apps leak at least one megabyte per second and most users are still happy. With a RAM upgrade they can use apps that leak 4+ MB per second. As users tend to restart programs when the system starts running slowly, the shorter uptime of apps means that they can leak a lot more.

I don't know what typical users you know, but the typical users I know do not reboot their computer unless it requires it. Most of the people I know have installed so much bloatware on their system that it takes 20 minutes to boot their system, so they only reboot when necessary.

Your idea of "x amount of leakage is OK" where x > 0 is exactly the developer mindset I was talking about.

-Steve
Dec 02 2009
Wed, 02 Dec 2009 13:15:33 -0500, Steven Schveighoffer wrote:
> I don't know what typical users you know, but the typical users I know do not reboot their computer unless it requires it. Most of the people I know have installed so much bloatware on their system that it takes 20 minutes to boot their system, so they only reboot when necessary.

Ok, if they accept those long boot times, you can waste even more memory since they would probably accept disk cache trashing, too. Nowadays laptops have 640 GB hard drives, so basically a taskbar applet could easily use 100 GB of virtual RAM without the stupid user noticing anything.

> Your idea of "x amount of leakage is OK" where x > 0 is exactly the developer mindset I was talking about.

It's not my idea :D I guess even if I was badly drunk, I couldn't make my code leak as much as those taskbar application developers do. I don't encourage writing bloaty crap applications. It's just the general trend. Applications get larger and slower. Wirth's law.

If I recall correctly, my old postscript printer only needed a ppd driver file (< 100 kB). Nowadays even the cheapest printers with very modest features come with 500+ megabytes of "drivers". Since there is no good package manager on Windows, each vendor implements their own, poorly. The high end printers still use lightweight drivers. What does this tell you? If the printer costs $40, a webcam $15, and a network card $5..10, how can you expect extremely high quality drivers? They hire the worst off-shore coders to do the job, the cheapest artists draw the 16 color installer backgrounds (saved in 24-bit BMP format of course to waste more space), etc.
Dec 02 2009
Steven Schveighoffer wrote:
> On Tue, 01 Dec 2009 13:53:37 -0500, Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> wrote:
>> Steven Schveighoffer wrote:
>>> On Fri, 27 Nov 2009 18:32:21 -0500, Walter Bright <newshound1 digitalmars.com> wrote:
>>>> Making them not virtual would also make them not overridable, they'd all be implicitly final. Is there any compelling use case for virtual operator overloads? Keep in mind that any non-virtual function can still be a wrapper for another virtual method, so it is still possible (with a bit of extra work) for a class to have virtual operator overloads. It just wouldn't be the default.
>>> I use virtual operator overloads in dcollections. Such as opCat and opAppend.
>>> collection1 ~= collection2; // 2 different collection types, using interfaces instead of templates to avoid code bloat.
>>> Also, opApply should be by default virtual, since it's not a true operator.
>> Would you put up with a couple of forwarding functions?
> Well, I'd certainly put up with it if I had no choice :) But if I had a choice, I'd choose to keep them virtual. I have little need for defining bulk operators with templates and mixins, my usage is mainly going to be separate implementations for each operator.
> If the compiler could somehow optimize out all instances of the template function to reduce bloat, I think that would make it a little less annoying.
> One more thing I wonder, can you alias template instantiations? For example, I have code like this:
>
>     struct S
>     {
>         alias opAdd add;
>         void opAdd(int x);
>     }
>
> How does one do that when opAdd is a template with an argument?
>
> -Steve

Most of the bloat in my experience comes from that ruddy int->ulong implicit conversion, which gets used in the function lookup rules. If ints didn't implicitly convert to ulong, 3/4 of my operator overloads would disappear -- because then 'long' would be able to do every integer type other than ulong.
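A sketch of that overload proliferation with a hypothetical BigNum-style type (illustrative only; the ambiguity arises because int converts implicitly to both long and ulong during overload resolution):

    struct BigNum
    {
        // With int implicitly convertible to both long and ulong, a lone
        // opAdd(long) is ambiguous for an int argument, so each width ends
        // up needing its own overload:
        BigNum opAdd(int x)   { return this; }
        BigNum opAdd(uint x)  { return this; }
        BigNum opAdd(long x)  { return this; }
        BigNum opAdd(ulong x) { return this; }
        // Without the int -> ulong conversion, opAdd(long) alone would cover
        // int, uint, and long, leaving ulong as the only extra case.
    }

    void main()
    {
        BigNum n;
        n = n + 1;     // int argument resolves to the exact-match opAdd(int)
        n = n + 1UL;   // ulong argument
    }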
Dec 01 2009