
digitalmars.D.learn - Alternatives to OOP in D

reply Brother Bill <brotherbill mail.com> writes:
I have heard that there are better or at least alternative ways 
to have encapsulation, polymorphism and inheritance outside of 
OOP.

With OOP in D, we have full support for Single Inheritance, 
including for Design by Contract (excluding 'old').  D also 
supports multiple Interfaces.

What would be the alternatives and why would they be better?
I assume the alternatives would have
1. Better performance
2. Simpler syntax
3. Easier to read, write and maintain

If possible, please provide links to documentation or examples.
Sep 01
next sibling parent reply Serg Gini <kornburn yandex.ru> writes:
On Monday, 1 September 2025 at 13:58:23 UTC, Brother Bill wrote:
 I have heard that there are better or at least alternative ways 
 to have encapsulation, polymorphism and inheritance outside of 
 OOP.

 With OOP in D, we have full support for Single Inheritance, 
 including for Design by Contract (excluding 'old').  D also 
 supports multiple Interfaces.

 What would be the alternatives and why would they be better?
 I assume the alternatives would have
 1. Better performance
 2. Simpler syntax
 3. Easier to read, write and maintain

 If possible, please provide links to documentation or examples.
Usually the alternative is related to composition.

https://en.wikipedia.org/wiki/Object_composition
https://en.wikipedia.org/wiki/Composition_over_inheritance
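For a tiny D illustration of the composition idea from those links (just a sketch; the `Engine` and `Car` names are made up, not from the articles):

```d
// Composition: instead of `class Car : Engine`, Car *has* an Engine
// and forwards only the behaviour it wants to expose.
import std.stdio;

struct Engine
{
    int power;
    void start() { writeln("engine started (", power, " hp)"); }
}

struct Car
{
    Engine engine;                    // composition: Car owns an Engine
    void start() { engine.start(); }  // forwarding instead of inheriting
}

void main()
{
    auto car = Car(Engine(150));
    car.start();    // engine started (150 hp)
}
```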
Sep 01
parent Peter C <peterc gmail.com> writes:
On Monday, 1 September 2025 at 14:14:58 UTC, Serg Gini wrote:
 On Monday, 1 September 2025 at 13:58:23 UTC, Brother Bill wrote:
 I have heard that there are better or at least alternative 
 ways to have encapsulation, polymorphism and inheritance 
 outside of OOP.

 With OOP in D, we have full support for Single Inheritance, 
 including for Design by Contract (excluding 'old').  D also 
 supports multiple Interfaces.

 What would be the alternatives and why would they be better?
 I assume the alternatives would have
 1. Better performance
 2. Simpler syntax
 3. Easier to read, write and maintain

 If possible, please provide links to documentation or examples.
Usually alternative is related to composition. https://en.wikipedia.org/wiki/Object_composition https://en.wikipedia.org/wiki/Composition_over_inheritance
There are certainly alternatives to OOP. But the problem you face is not so much finding alternatives to OOP, but rather that different worlds collide. That is, the real world collides with the computational world.

So, do you see the problem you're trying to solve as a collection of interacting objects - i.e. things with state, identity, and behavior (much like we perceive the real world)? If so, OOP does well in modelling that world - a class type represents a real-world entity (with state, identity, and behavior). This aligns the real-world problem you're trying to solve with the code structure, so that the code structure itself provides clarity around the problem you are trying to solve.

But the code itself operates in a world of its own - separate from the object-like world that we perceive. That is, code operates in the computational world, which is governed by the rules of performance - the CPU, memory cache, compilers, garbage collection, etc.

So any alternative to OOP will likely be more focused on optimizing for the computational world, and less focused on modelling the real world as we perceive it. Then, instead of modelling what things are (i.e. things with state, identity, and behavior), you'll likely be modelling something different.

So it's all about the collision of two different worlds - the real world as we perceive it, and the computational world. 'Better performance' is a good reason to model a problem differently from the real world. 'Simpler syntax' is another. 'Easier to read, write and maintain' is another. And 'the problem domain being in the computational world rather than the real world' is yet another. I'm sure there are other good reasons as well.

So, yes. There are certainly reasons for choosing alternatives to OOP, but they will likely be more focused on the computational world than on the real world as we perceive it.
Nov 05
prev sibling next sibling parent Kapendev <alexandroskapretsos gmail.com> writes:
On Monday, 1 September 2025 at 13:58:23 UTC, Brother Bill wrote:
 I have heard that there are better or at least alternative ways 
 to have encapsulation, polymorphism and inheritance outside of 
 OOP.

 With OOP in D, we have full support for Single Inheritance, 
 including for Design by Contract (excluding 'old').  D also 
 supports multiple Interfaces.

 What would be the alternatives and why would they be better?
 I assume the alternatives would have
 1. Better performance
 2. Simpler syntax
 3. Easier to read, write and maintain

 If possible, please provide links to documentation or examples.
I sometimes use single-level inheritance with structs when making games:

```d
import std.stdio;

struct Base {
    int hp; // Everyone will have this.
    void draw() { writeln("Debug stuff."); }
}

// Will use `Base.draw` by default.
struct Door {
    Base base;
    alias base this;
}

// Will use the custom draw function.
struct Player {
    Base base;
    alias base this;
    int[] items;
    void draw() { writeln("Player stuff."); }
}

void foo(Base* base) => base.draw();

void main() {
    Door door;
    Player player;

    player.draw();
    foo(cast(Base*) &player);
    door.draw();
    foo(cast(Base*) &door);
}
```

It's nice because you can have one (sometimes static) array of game entities. It needs the first member to be `Base`, of course.

[Alias This](https://dlang.org/spec/struct.html#alias-this)
Sep 01
prev sibling next sibling parent reply monkyyy <crazymonkyyy gmail.com> writes:
On Monday, 1 September 2025 at 13:58:23 UTC, Brother Bill wrote:
 I have heard that there are better or at least alternative ways 
 to have encapsulation, polymorphism and inheritance outside of 
 OOP.

 With OOP in D, we have full support for Single Inheritance, 
 including for Design by Contract (excluding 'old').  D also 
 supports multiple Interfaces.

 What would be the alternatives and why would they be better?
 I assume the alternatives would have
 1. Better performance
 2. Simpler syntax
 3. Easier to read, write and maintain

 If possible, please provide links to documentation or examples.
 claims to not start flame wars
It's the wrong question. You should answer 3 questions:

1. where is the data
2. how does the data mutate
3. how do I control flow

OO is bad because it answers all three questions with **a noun**; "I'm just going to visitor-pattern my gamestate manager with my mailboxes" to *start with*.

I'd suggest 2 different styles:

1. going full ranges (standard)

This is a mostly functional style. There's nothing enforcing you to be pure, but first-class functions are just how `.map` works. "Ranges are views of data", so the data stays where it started: if you start with a `File(...).byLineCopy`, the data is in the file; same with a json parser that provides a range interface. You mutate data with maps and reduces, and you manage control flow with take, drop and sort. (A small sketch follows below.)

2. plain old c

You have global scope, malloc and goto. You can at any time make all your data in global scope and write a giant function; if you must allocate, call malloc; you just go edit data directly, and you can do any control flow with goto, as it's the only thing hardware can do.

The answers to the 3 questions are mix-and-matchable.

---

Polymorphism isn't remotely owned by oo; templates are more polymorphic. Usually "upgrades" to templates, like "generics", are about **preventing** their full polymorphism; contracts, generics, all this talk about function attributes - all of it is trading off the flexibility of templates to chase some safety.
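A small sketch of style 1 above, using an in-memory array instead of a File so it runs standalone:

```d
import std.algorithm : filter, map, sort, sum;
import std.range : take;
import std.stdio : writeln;

void main()
{
    auto data = [5, 3, 8, 1, 9, 2];
    // "where is the data": still in `data`; the chain below is only a view of it
    auto view = data.sort                 // control flow via sort/take/drop
                    .filter!(x => x > 2)
                    .map!(x => x * 10)    // "mutation" expressed as a transformation
                    .take(3);
    writeln(view);        // [30, 50, 80]
    writeln(view.sum);    // 160
}
```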
Sep 01
parent reply Peter C <peterc gmail.com> writes:
On Monday, 1 September 2025 at 15:59:24 UTC, monkyyy wrote:
 On Monday, 1 September 2025 at 13:58:23 UTC, Brother Bill wrote:
 I have heard that there are better or at least alternative 
 ways to have encapsulation, polymorphism and inheritance 
 outside of OOP.

 With OOP in D, we have full support for Single Inheritance, 
 including for Design by Contract (excluding 'old').  D also 
 supports multiple Interfaces.

 What would be the alternatives and why would they be better?
 I assume the alternatives would have
 1. Better performance
 2. Simpler syntax
 3. Easier to read, write and maintain

 If possible, please provide links to documentation or examples.
 claims to not start flame wars
Its the wrong question you should answer 3 questions: 1. where is the data 2. how does data mutate 3. how do I control flow [...]
*If* the problem domain is in the real world, and not the computational world, then your proposed Data-Oriented Design (DOD), which intentionally breaks an entity apart into its constituent data components, effectively destroys the natural, cohesive identity we assign to real-world objects, making high-level reasoning more fragmented.

When I'm in a meeting with customers, I'm discussing "Customers," "Orders," and "Invoices" (OOP concepts), which is far more natural and clearer than discussing "data streams". The model that best facilitates human thought, design clarity, and communication in the real world is the Object-Oriented model. It is clearly the most effective model for describing the world.

So why would I break an OO view of the world into a DOD view of the world? Well... to better fit the computational world. Of course, then you are trading off design clarity for execution performance.

My point: neither OOP nor DOD is bad.

In an encapsulated model, you have strong cohesion reflecting a real-world entity. In a data-oriented model (where the goal is to process data as efficiently as possible), data is separated from behaviour, and behaviour acts upon these separated data sets, in order to prioritise computational efficiency over real-world object cohesion. Rather than calling methods on individual objects one at a time (which can have a computational disadvantage - e.g. a cache miss), the CPU can be fed a smooth, continuous stream of work.

So while DOD excels at computational efficiency, it introduces a significant cognitive trade-off for the programmer by shifting from an Object-Oriented (OO) view to a Data-Oriented Design (DOD) view. This can make the code harder to read and reason about.
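To make that contrast concrete, here is a rough sketch (the `Particle` names are purely illustrative, not from the thread) of the same data modelled both ways - the cohesive "object" view versus the cache-friendly parallel-array view:

```d
import std.stdio;

// "OOP-ish" view: one cohesive entity per particle (an array of structs).
struct Particle
{
    float x, y;
    float vx, vy;
}

// DOD view: the same data split into parallel arrays (a structure of arrays),
// so a pass that only touches positions streams through contiguous memory.
struct Particles
{
    float[] x, y;
    float[] vx, vy;

    void integrate(float dt)
    {
        foreach (i; 0 .. x.length)
        {
            x[i] += vx[i] * dt;
            y[i] += vy[i] * dt;
        }
    }
}

void main()
{
    auto aos = [Particle(0, 0, 1, 2), Particle(1, 1, 1, 2)];       // object view
    auto soa = Particles([0f, 1f], [0f, 1f], [1f, 1f], [2f, 2f]);  // data view
    soa.integrate(0.5);
    writeln(soa.x, " ", soa.y);   // [0.5, 1.5] [1, 2]
}
```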
Nov 05
parent Kapendev <alexandroskapretsos gmail.com> writes:
On Thursday, 6 November 2025 at 07:14:43 UTC, Peter C wrote:
 *If* the problem domain is in the real world, and not the 
 computational world, then your proposed Data-Oriented Design 
 (DOD), which intentionally...
I personally avoid using any random 3-letter acronyms (OOP, DOD, ...) for my personal projects because they solve nothing real. It's so much simpler when you just focus on the job you have to do. Do you need to abstract something somewhere to make it easier to use? Do that. Do you need some extra speed elsewhere? Make that part more low-level. Problem solved. Boom 💥🤯
Nov 06
prev sibling next sibling parent reply "H. S. Teoh" <hsteoh qfbox.info> writes:
On Mon, Sep 01, 2025 at 01:58:23PM +0000, Brother Bill via Digitalmars-d-learn
wrote:
 I have heard that there are better or at least alternative ways to
 have encapsulation, polymorphism and inheritance outside of OOP.
 
 With OOP in D, we have full support for Single Inheritance, including
 for Design by Contract (excluding 'old').  D also supports multiple
 Interfaces.
 
 What would be the alternatives and why would they be better?
 I assume the alternatives would have
 1. Better performance
 2. Simpler syntax
 3. Easier to read, write and maintain
 
 If possible, please provide links to documentation or examples.
OOP is useful for certain classes (ha) of problems, but not all, as the OOP proponents would like you to believe. Some classes of problems are better solved with other means.

OOP is useful when you need runtime polymorphism, as in, when the concrete type of your objects cannot be determined until runtime. This flexibility comes with a cost, however. It adds an extra layer of indirection to your data, which means data access involves an extra pointer access, which in performance-critical loops may cause cache misses and performance degradation. Also, class objects in D are by-reference types; in some situations this is not ideal. If you have a large number of small class objects, they can add a significant amount of GC pressure, also not ideal if you're dealing with lots of them inside a tight loop.

My preferred alternative is to use structs and DbI (design by introspection). Rather than deal with monolithic class hierarchies (that often fail to capture all the true complexities of polymorphic data anyway -- real data is rarely easily decomposed into neat textbook class hierarchies), I have a bunch of data-bearing structs with diverse attributes. The functions that operate on them are template functions that use compile-time introspection to determine exactly what the concrete type is capable of, and select the most appropriate implementation based on that. This way, rather than trying to shoehorn the data into a predetermined class hierarchy, the code instead adapts itself to whatever form the data it receives has. This allows great malleability in fast prototyping and extending existing code to deal with new forms of data, with the advantage that (almost) all concrete types are resolved at compile-time rather than runtime, so the resulting code is fully optimized to work on the specific forms of data actually encountered, rather than paying the overhead tax on potential forms of data that it does not know until runtime.

In OOP, data and code are tightly coupled, which works well when you only ever need to perform a fixed set of operations on your data. But things get messy once you have diverse operations, or an open set of operations, that you need to perform on your data. You start running counter to the grain of OOP access restrictions, and end up wasting more time fighting with OOP than OOP helping you to solve your programming problem. DbI lets you separate data from the algorithms that operate on them, so that your data types can focus on the most effective data storage scheme, while your operations use compile-time introspection to discover what form your data has, adapting themselves accordingly to work on the data in the most effective way.

(If you're interested in more details of DbI, search online for "design by introspection" and you should see the relevant material.)

Here's an example of the contrast between OOP and DbI, from one of my own projects. The basic problem is that I have a bunch of data, residing in various objects of diverse types, representing program state, that I want to serialize and save to disk, to be restored at a later time.

In the traditional OOP approach, you'd create an interface, say Serializable, that might look something like this:

```d
interface Serializable {
    void serialize(Storage store);
}
```

Then you'd add this interface to every class you want to serialize, and implement the .serialize method to write the class data to the Storage. If you have n classes, this means writing n different functions.

In my actual project, however, I chose to use DbI instead.
In DbI, you *don't* impose any additional structure on the incoming data. In contrast to the OOP philosophy of data-hiding, DbI is usually most effective when your types have publicly accessible data. Instead of writing n different functions for n different types (and I have a *lot* of different types -- one for every kind of data in the program that I want to serialize), I just have *one* template function that handles it all. Here's what it looks like:

```d
void serialize(T)(Storage store, T data) {
    static if (is(T == int)) {
        store.write(data.to!string);
    } else static if (is(T == string)) {
        store.write(data);
    } else static if (...) {
        ... // handle other basic data types here
    } else static if (is(T == S[], S)) {
        // we have an array, loop over the elements and
        // serialize each of them
        foreach (e; data) {
            // N.B.: recursive call, to a *different*
            // instantiation of serialize() adapted to the
            // specific type of the array element
            serialize(store, e);
        }
    } else static if (is(T == struct)) {
        // This is a struct; loop over its fields and
        // invoke the appropriate overload of .serialize
        // to handle each specific field type
        // (FieldNameTuple comes from std.traits)
        foreach (field; FieldNameTuple!T) {
            serialize(store, __traits(getMember, data, field));
        }
    } else {
        static assert(0, "Unsupported type: " ~ T.stringof);
    }
}
```

Notice that .serialize does NOT have any code that deals with specific user-defined types. Rather, it detects built-in types and aggregates, and uses compile-time introspection to adapt itself to each type of data it encounters, leveraging the combinatorial nature of these basic type building blocks to handle anything that the caller might throw at it.

Instead of writing n different .serialize functions, one for each user-defined type, like you'd do in OOP, here you have a *single* function that handles everything. There are only as many static if cases as there are basic types you'd like to handle (and you don't even need to include all of them -- only those you actually use). With just a small, bounded set of static if blocks, you can handle *any* number of types. And you can throw new user-defined data types at it and have it Just Work(tm) without adding a single new line of code to .serialize(!).

You can also incrementally build up your .serialize function: that last static assert is there deliberately so that if you ever hand it something it doesn't know what to do with, it will forcefully stop compilation loudly and tell you exactly what's the missing type that it hasn't learned to handle yet. Then you just add another appropriate static if block to the function, and it will now learn to handle that new type *everywhere it might occur*, including deep inside some nested aggregate type that it has never seen before.

IOW, once your .serialize function is able to handle all the basic types you might have in your user-defined types, it will be able to handle any kind of new data type. All without needing to add a single new line of code. Compare this with the OOP case, where every new class you add will be required to inherit from the Serializable interface, and then you'd have to implement the .serialize function. And hope that you didn't make a mistake and leave out an important data field. Whereas our DbI .serialize function automatically discovers all your data fields and serializes them using the appropriate overloads -- without human effort, and therefore without room for human error.

//

Now, you might ask, what if I want to serialize OOP objects?
Since the whole point of OOP is that you *don't* know the concrete type of your data until runtime, how can .serialize know what the concrete data fields of the object are, in order to serialize them? Since they are not known at compile-time, does that mean our class objects are left out in the cold while the structs enjoy the power of our DbI .serialize function?

Nope! Here's how, with a little scaffolding, we can teach our clever DbI .serialize function to dance with OOP class objects too.

First, we create a CRTP template class that will serve as our DbI analog of class interfaces:

```d
class Serializable(Derived, Base = Object) : Base {
    static if (!is(Base : Serializable!(Base, C), C)) {
        // This is the top of our class hierarchy,
        // declare the .serialize method.
        void serialize(Storage store) {
            serializeImpl(store);
        }
    } else {
        // This is a derived class in our hierarchy;
        // override the base class method
        override void serialize(Storage store) {
            serializeImpl(store);
        }
    }

    private void serializeImpl(Storage store) {
        // Record the concrete type, since it's not
        // predictable at compile-time
        store.saveClassName(Derived.stringof);

        // Downcast to the concrete subclass and save it
        serializeClassFields(store, cast(Derived) this);
    }
}
```

The serializeClassFields function is similar to our original DbI .serialize function, except that it takes care to iterate over base class members as well, so that when serializing a derived class we don't miss base class members that will be required to deserialize the object later. Other than this handling, it just forwards the bulk of the work to the DbI .serialize function that uses compile-time introspection to discover and serialize these members.

All that remains, then, is to add this static if block to our DbI .serialize function:

```d
void serialize(T)(Storage store, T data) {
    ...
    else static if (is(T == class)) {
        serializeClassFields(store, data);
    }
}
```

and, for every class that we want to serialize, declare it this way:

```d
class MyClass : Serializable!(MyClass, MyBaseClass) {
    ...
}
```

This slightly unusual way of declaring a base class is to allow the Serializable template class to inject the necessary .serialize() methods into the class objects, so that you don't have to write class-specific serialize methods by hand.

And with this, our clever little DbI .serialize function now knows how to serialize classes, too, and to do so automatically and with almost no human intervention (other than declaring the class in the above way). Now you can even use OOP with DbI and enjoy its benefits!

(Well, OK, to a certain extent. The above assumes that your classes are data-storage classes with public members that can be mechanically serialized. If this isn't the case, you'll have to do more work. Which is why I prefer not to do that. Data-centric types work better.)

//

Now, the above is really only half of the story. The other half is deserialization, which follows the same principle: the corresponding DbI .deserialize function does pretty much the same thing: use compile-time introspection to discover the incoming type's data fields, read the serialized data from disk, and use std.conv.to to convert it to the correct type (in spite of the naysayers, I think std.conv.to is one of the best things that's ever happened to D -- my .deserialize function literally consists of calls to `data.field = serializedString.to!T` -- and it all Just Works(tm)).
All fully automated and with no human intervention, which means that once you've fully debugged .serialize and .deserialize, you don't ever have to worry about serialization again; all new data types you add will automatically be serializable / deserializable with no further effort (or bugs).

Handling classes in deserialization is a bit tricky, but nothing too hard by combining template classes with static ctors. I won't get into the details here, but if you're curious, just ask and I'd be more than happy to spill all the gory details. Basically, we (again) use compile-time introspection to discover base classes, instantiate a factory function for recreating the class, then use a static ctor to inject this function into a global registry of factories that the deserialization code can use to deserialize the original class. And of course, as you should expect by now, this is fully automated thanks to templates and DbI; the programmer does not need to write a single line of boilerplate to make it all work. Just declare the class using the CRTP serializable injector above, and the templates take care of the rest of the boilerplate for you. No need for human intervention, and therefore no room for human error. It all Just Works(tm).

//

Metaprogramming is D's greatest strength, and templates are one of the keystones. Rather than shy away from templates, we should rather embrace them and exploit them in ways other languages can only dream of.


T

-- 
Real Programmers use "cat > a.out".
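As a rough sketch of the factory-registry idea described above (not the actual project code; the `Monster` class and `RegisterFactory` mixin are made-up names): a static constructor registers a per-class factory under the class name, and deserialization looks the factory up by the saved name.

```d
import std.stdio;

Object function()[string] factories;   // global registry: class name -> factory

mixin template RegisterFactory(C)
{
    static this()
    {
        factories[C.stringof] = function Object() { return new C; };
    }
}

class Monster
{
    int hp = 42;
    mixin RegisterFactory!Monster;
}

void main()
{
    // Pretend the name "Monster" was just read back from the serialized stream.
    auto obj = factories["Monster"]();
    writeln(typeid(obj));             // the concrete class, recovered at runtime
    writeln((cast(Monster) obj).hp);  // 42
}
```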
Sep 01
next sibling parent reply Andy Valencia <dont spam.me> writes:
On Monday, 1 September 2025 at 16:26:15 UTC, H. S. Teoh wrote:

 OOP is useful for certain classes (ha) of problems, but not 
 all, as the OOP proponents would like you to believe.  Some 
 classes of problems are better solved with other means.
Do note that I had to bite the bullet and write an OO File class, just so I could write emulated (but compilation/type compatible) subclasses of File. String- and ubyte[]-based reader/writer instances, specifically. Think Python's StringIO. It sure made a number of things easy once I could use polymorphism wrt File. I'm very glad that D has classic OO semantics available. Andy
Sep 01
parent "H. S. Teoh" <hsteoh qfbox.info> writes:
On Mon, Sep 01, 2025 at 09:28:29PM +0000, Andy Valencia via Digitalmars-d-learn
wrote:
 On Monday, 1 September 2025 at 16:26:15 UTC, H. S. Teoh wrote:
 
 OOP is useful for certain classes (ha) of problems, but not all, as
 the OOP proponents would like you to believe.  Some classes of
 problems are better solved with other means.
Do note that I had to bite the bullet and write an OO File class, just so I could write emulated (but compilation/type compatible) subclasses of File. String- and ubyte[]-based reader/writer instances, specifically. Think Python's StringIO.
Yeah, OO is useful for that. OTOH I've also tended to use the following pattern for testing functions that do I/O:

```d
auto processFile(File = std.stdio.File)(File input) {
    ...
    foreach (line; input.byLine) {
        ...
    }
    ...
}

unittest {
    struct MockFile {
        auto byLine() {
            return ... /* mock implementation of .byLine here */
        }
    }
    auto output = processFile(MockFile());
    assert(output == expectedOutput);
}
```

So normally, `File` will bind to std.stdio.File, but the unittest can override this to be a mock file type where I can use simple code to inject input strings to test the implementation of processFile.

The nice thing about this is that MockFile doesn't have to implement most of std.stdio.File's API; only those parts actually used by processFile(). Whereas if you used a subclass you may have to write more code just to do lip service to the base class API so that it will compile, even if some of this code may actually never get used.

Also, since this is a template, when not compiling with -unittest the MockFile instantiation of processFile never happens, so the release build binary does not contain unittest template bloat, and processFile binds directly to std.stdio.File without any intermediate indirections.
 It sure made a number of things easy once I could use polymorphism wrt
 File.  I'm very glad that D has classic OO semantics available.
[...]

As I said, OO has its place, and should be used when it's appropriate. Just don't shoehorn every programming problem and its neighbour's NP-complete dog into an OO paradigm when it doesn't even fit OO's domain.


T

-- 
All men are mortal. Socrates is mortal. Therefore all men are Socrates.
Sep 01
prev sibling next sibling parent PeterHoo <peterhu.peterhu outlook.com> writes:
On Monday, 1 September 2025 at 16:26:15 UTC, H. S. Teoh wrote:
 On Mon, Sep 01, 2025 at 01:58:23PM +0000, Brother Bill via 
 Digitalmars-d-learn wrote:
 [...]
OOP is useful for certain classes (ha) of problems, but not all, as the OOP proponents would like you to believe. Some classes of problems are better solved with other means. [...]
This is great learning material for Dlang metaprogramming. It would be great if you could provide a workable project with both serialize and deserialize on GitHub for download.
Nov 04
prev sibling parent reply Peter C <peterc gmail.com> writes:
On Monday, 1 September 2025 at 16:26:15 UTC, H. S. Teoh wrote:
 ...
OOP is useful for certain classes (ha) of problems, but not all, 
as the OOP proponents would like you to believe
So who exactly are these OOP proponents? I do not know of any OOP proponent who says all problems can be suitably solved using OOP. That sounds like a misrepresentation, designed to disparage either OOP or OOP programmers, or both.

The fact is, different paradigms solve different kinds of problems more naturally.

Consider this problem: four boys and three girls are seated in a row at random. What are the chances that the two children at the ends of the row will be girls? Using an OOP paradigm to solve this problem seems like overkill, to say the least.

Now consider this problem: seven women have 20, 40, 60, 80, 100, 120, 140 apples. They all sell their apples at the same price per apple. Each woman receives the same total amount of money. You need to find the price per apple. Here, you are allowed to choose either an OOP or a DOD paradigm only.

Note that changing the paradigm changes both the framing of the problem and its solution.

OOP: will likely frame the problem in terms of characters in a story (entities with roles). The solution will put an emphasis on identity, relationships, and encapsulated behavior.

DOD: will likely frame the problem in terms of numbers in a table (data flowing through transformations). The solution will put an emphasis on data layout, throughput, and cache-friendly processing.

So being able to frame a problem and a solution from the perspective of more than one paradigm is extremely advantageous. I do not know of any OOP programmer who would disagree with this assertion.

Please stop disparaging OOP and OOP programmers. Those of us who use OOP use it because the class of problems we deal with is best suited to this paradigm.
Nov 08
parent reply monkyyy <crazymonkyyy gmail.com> writes:
On Sunday, 9 November 2025 at 04:06:21 UTC, Peter C wrote:
 Please stop disparaging OOP and OOP programmers.
No one's started (yet). I'd be happy to provide examples of actual disparaging anti-oo rhetoric.
Nov 09
parent reply Peter C <peterc gmail.com> writes:
On Sunday, 9 November 2025 at 21:49:50 UTC, monkyyy wrote:
 On Sunday, 9 November 2025 at 04:06:21 UTC, Peter C wrote:
 Please stop disparaging OOP and OOP programmers.
No ones started (yet), Id be happy to provide an example of actual disparaging anti-oo examples
There's no shortage of disparaging anti-oo examples. But that's a separate issue from the topic of this thread. I expect there are disparaging examples that could be made available for pretty much any programming paradigm.

I am certain I can provide you an example of code from your favoured data-oriented design - which deliberately dismantles the unified object concept by requiring the programmer to shift the focus from modeling the problem domain to optimizing memory layout, thus sacrificing conceptual clarity and modeling integrity in order to organise code for the machine.

btw. This is precisely the reversion to a lower-level concern that the founders of OOP sought to avoid. But.. let's not go there.. ;-)
Nov 09
parent reply Brother Bill <brotherbill mail.com> writes:
On Monday, 10 November 2025 at 06:20:42 UTC, Peter C wrote:
 On Sunday, 9 November 2025 at 21:49:50 UTC, monkyyy wrote:
 On Sunday, 9 November 2025 at 04:06:21 UTC, Peter C wrote:
 Please stop disparaging OOP and OOP programmers.
btw. This is precisely the reversion to a lower-level concern that the founders of OOP sought to avoid. But.. let's not go there.. ;-)
There is OOP done right, which is mainly done by the Eiffel language, which supports Multiple Inheritance without the Diamond of Death and without needing Interfaces. All the rest add complexity with Interfaces and only support single inheritance. D falls into this group.

Bill Gates considered supporting Eiffel language as a Microsoft

Multiple Inheritance made function dispatching (which version of this function should be called) take a little bit longer than a standard vTable lookup. It was a constant-time lookup, but took a few more steps, thus slowing the program's execution.

The other thing is that Eiffel compilation, even with "Melting Ice" and modern workstations, still takes significantly more time than

There should be one rule, which is: whatever language or pattern you are using, the program should be "correct", that is, obey the "business rules".
Nov 10
parent reply Peter C <peterc gmail.com> writes:
On Monday, 10 November 2025 at 08:53:29 UTC, Brother Bill wrote:
 ..
 There should be one rule which is:  Whatever language or 
 pattern that you are using, the program should be "correct", 
 that is, obey the "business rules".
Clearly, correct code is necessary, but it's not sufficient.

Code also needs to 'sustain' correctness - through maintenance, scaling, and change.

Adair Dingle, in his award-winning book from 2014, titled 'Software Essentials: Design and Construction', correctly asserts that it is 'software maintenance' that dominates the software life cycle.

So patterns and paradigms are not just an arbitrary choice. They are the tools that provide for shared reasoning. They should be chosen precisely because they provide the guardrails that help to make 'correctness' easier to achieve and sustain over the life cycle of the code.

'Good code' (regardless of patterns and paradigms) structures logic in a way that makes correctness easier to reason about -> collaboratively. That is, if the person writing the code is the only one that can reason about it, then it is certainly *not* good code.

"The true measure of code quality is that its correctness is easy to reason about collaboratively." - me, Nov 2025.
Nov 10
parent reply Serg Gini <kornburn yandex.ru> writes:
On Tuesday, 11 November 2025 at 07:12:38 UTC, Peter C wrote:
 Clearly correct code is necessary, but it's not sufficient.

 Code also needs to 'sustaining' correctness - through 
 maintenance, scaling, and change.

 Adair Dingle, in his award winning book from 2014, titled 
 'Software Essentials Design and Construction', correctly 
 asserts that it is 'software maintenance' that dominates the 
 software life cycle.
k
 So patterns and paradigms are not just an arbitrary choice.
Yes they are
 They are the tools that provide for shared reasoning. They 
 should be chosen precisely because they provide the guardrails 
 that help to make 'correctness' easier to achieve and sustain 
 over the life-cycle of the code.
Different code has different purposes. Also, different languages call for fewer or more patterns. Patterns were mostly designed for the Java/corporate world.

But not all code is like that. There is no "silver bullet", and patterns are definitely not one either.
 'Good code' (regardless of patterns and paradigms) structures 
 logic in a way that makes correctness easier to reason about -> 
 collaboratively.
If the majority of the people who work with the code are fine with it - it's fine.
 That is, if the person writing the code is the only one that 
 can reason about it, then is certainly *not* good code.
Sometimes it is. Because nobody else will contribute to his project anyway. So he is writing in a style that he likes.
 "The true measure of code quality, is that it's correctness is 
 easy to reason about collaboratively." - me, Nov 2025.
Something missing from this definition is the quality of the collaborators. The same code could be easy to reason about for experienced devs and hard for first-year students.
Nov 10
next sibling parent Alexandru Ermicioi <alexandru.ermicioi gmail.com> writes:
On Tuesday, 11 November 2025 at 07:50:05 UTC, Serg Gini wrote:
 Different code has different purposes.
 Also different languages desire less or more patters.
 Patterns were mostly designed for Java/corporate world.

 But not all code is like that. There is no "silver bullet" and 
 patterns are definitely not that as well.
More like they were distilled from all the code zoo there was, not created/designed for a specific language or the corporate world. They are useful as a shortcut for communicating an approach to your design, instead of having to explain in an entire paragraph what you'd like to do.

Imho, the basic patterns that were distilled from OOP (the Gang of Four book, for example) are not necessarily tied to OOP, and can be done in other design paradigms as well. For example, chain of responsibility can be done with just functions (see the sketch below), or the command pattern can use simple structs/functions with DbI.
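A minimal sketch of chain of responsibility with plain functions (the handler names are made up):

```d
import std.stdio;

alias Handler = bool function(string request);

bool handlePing(string r)
{
    if (r != "ping") return false;
    writeln("pong");
    return true;
}

bool handleFallback(string r)
{
    writeln("unhandled request: ", r);
    return true;
}

// The "chain" is just an array of function pointers; the first handler
// that accepts the request stops the chain.
bool dispatch(Handler[] chain, string request)
{
    foreach (h; chain)
        if (h(request))
            return true;
    return false;
}

void main()
{
    auto chain = [&handlePing, &handleFallback];
    dispatch(chain, "ping");    // pong
    dispatch(chain, "other");   // unhandled request: other
}
```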
Nov 11
prev sibling parent Peter C <peterc gmail.com> writes:
On Tuesday, 11 November 2025 at 07:50:05 UTC, Serg Gini wrote:
 On Tuesday, 11 November 2025 at 07:12:38 UTC, Peter C wrote:
 ...
 Adair Dingle, in his award winning book from 2014, titled 
 'Software Essentials Design and Construction', correctly 
 asserts that it is 'software maintenance' that dominates the 
 software life cycle.
k
 So patterns and paradigms are not just an arbitrary choice.
Yes they are
No they are not!

Patterns and paradigms are fundamental engineering tools. They both contribute significantly to creating recognizable and, therefore (hopefully), more correct and maintainable code.

When you erode core principles (like encapsulation, for example), it can lead to unpredictable and less maintainable code.

For example, what is the paradigm used below? (a paradigm that D enables)

```d
module myModule;

import std.stdio : writeln;

private string globalToken;

// This may look like an OOP class,
// but it behaves like a procedural function operating on global data.
public class User
{
    private string username;
    private string password;

    public this(string username, string password)
    {
        this.username = username;
        this.password = password;
    }

    public void Authenticate()
    {
        // Simulate authentication logic
        if (password == "secret")
        {
            globalToken = "abc123"; // modifies shared global state
        }
    }

    public string GetUsername() => username;
}

// This function operates outside the User class but,
// due to module-level encapsulation, it can directly access
// and modify 'private' members of the userInstance.
public void OverridePassword(User userInstance, string newPassword)
{
    userInstance.password = newPassword;
}

// A function outside the User class, but inside the module,
// directly modifies shared global state.
public void LogoutGlobal()
{
    if (globalToken !is null)
    {
        globalToken = null;
        writeln("System-wide token invalidated.");
    }
}
```
Nov 11
prev sibling next sibling parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On Monday, 1 September 2025 at 13:58:23 UTC, Brother Bill wrote:
 I have heard that there are better or at least alternative ways 
 to have encapsulation, polymorphism and inheritance outside of 
 OOP.

 With OOP in D, we have full support for Single Inheritance, 
 including for Design by Contract (excluding 'old').  D also 
 supports multiple Interfaces.

 What would be the alternatives and why would they be better?
 I assume the alternatives would have
 1. Better performance
 2. Simpler syntax
 3. Easier to read, write and maintain

 If possible, please provide links to documentation or examples.
I have heard that people use structs and do these things themselves. I know of one library that implements this: https://code.dlang.org/packages/tardy

IMO, the largest problem with classes for people is that a class is always a reference type, and generally heap (GC) allocated. So performance and expressiveness. IMO, reference type is the correct default for polymorphism.

And also note that structs already support encapsulation. They just don't do inheritance/polymorphism. I believe at last year's dconf Walter hinted that he would be open to adding struct inheritance/interfaces.

I think roll-your-own polymorphism is always going to suck compared to language-supplied.

-Steve
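A small sketch of that encapsulation point, using a hypothetical Counter type (and keeping in mind that in D, `private` is module-level, so the protection applies to code outside the module):

```d
module counter;

struct Counter
{
    private int value;   // hidden from other modules; no inheritance involved

    void increment() { ++value; }
    int  get() const  { return value; }
}

unittest
{
    Counter c;
    c.increment();
    assert(c.get == 1);
    // From another module, `c.value = 42;` would not compile.
}
```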
Sep 01
parent reply Peter C <peterc gmail.com> writes:
On Monday, 1 September 2025 at 22:45:58 UTC, Steven Schveighoffer 
wrote:
 ...
 And also note that structs already support encapsulation. They 
 just don't do inheritance/polymorphism.
True, but 'structs + templates' is a usable pattern for 'static polymorphism' - the effect of which is to allow interchangeable objects, but without having to use inheritance or virtual method tables.

In the example below, any type passed to the templated utility function 'calculateTotalArea' must have a callable area() method. The templated utility function is actually very fast, because the compiler hardcodes the exact function call (rect.area() or sq.area()). That is, it creates a completely new function optimized specifically for the data type T.

However, there is a limitation: you can only use the templated utility function to process an array of Rectangles, and then again, to process an array of Squares. Of course, with runtime polymorphism you could make a single, unified call - but then you have some overhead associated with the complex decision-making during runtime execution.

If you're processing a large collection of simple shape objects in a tight loop, then the performance of static polymorphism will likely win - primarily because you reduce the likelihood of branch prediction failures, and having the data (value types) laid out contiguously in memory is ideal for fully utilizing the various CPU cache levels.

```d
module mymodule;

@safe:
private:

import std.stdio;

struct Rectangle
{
    int width;
    int height;
    int area() const { return width * height; }
}

struct Square
{
    int side;
    int area() const { return side * side; }
}

int calculateTotalArea(T)(T[] shapes)
{
    int total = 0;
    foreach (s; shapes)
    {
        total += s.area;
    }
    return total;
}

void main()
{
    writeln("--- Shape Calculations (D) - Templates (Final) ---");

    Rectangle[] rects = [ Rectangle(10, 5), Rectangle(20, 3) ];
    int rectsArea = calculateTotalArea(rects);
    writeln("\nTotal Area of Rectangles: ", rectsArea);

    Square[] squares = [ Square(7), Square(12) ];
    int squaresArea = calculateTotalArea(squares);
    writeln("Total Area of Squares: ", squaresArea);
}
```
Nov 13
parent reply monkyyy <crazymonkyyy gmail.com> writes:
On Friday, 14 November 2025 at 03:39:11 UTC, Peter C wrote:
 If you're process a large collection of simple shape objects in 
 a tight loop, then the performance of static polymorphism will 
 likey win - primarly because you reduce the likelihoood of 
 branch prediction failures
"static polymorphism", you either mean classes in which case every one is still likely a cache miss or you mean structs and it eliminates the possible cache miss and has no branch prediction to resolve
Nov 13
parent reply Peter C <peterc gmail.com> writes:
On Friday, 14 November 2025 at 06:35:23 UTC, monkyyy wrote:
 ..
I should point out that I'm not advocating for alternative models ;-)

If you need (for example) the flexibility of a heterogeneous collection at runtime, then class-based dynamic dispatch via v-tables is (by far) the easiest and most idiomatic model to meet that need. It prioritizes simplicity, clarity and maintainability (i.e. developers' time and cognitive resources get the priority in this model). Yes, it comes with some runtime overhead -> pointer indirection, cache misses..., but nonetheless, it is still likely to be suitable for most application-level development.

An alternative choice for modelling polymorphism will shift the priorities to something else altogether.
Nov 14
parent reply Sergey <kornburn yandex.ru> writes:
On Friday, 14 November 2025 at 22:45:59 UTC, Peter C wrote:
 On Friday, 14 November 2025 at 06:35:23 UTC, monkyyy wrote:
 An alternative choice for modelling polymorphism will shift the 
 priorities to something else all together.
At least D did it right
Nov 14
parent reply Peter C <peterc gmail.com> writes:
On Friday, 14 November 2025 at 22:53:33 UTC, Sergey wrote:
 On Friday, 14 November 2025 at 22:45:59 UTC, Peter C wrote:
 On Friday, 14 November 2025 at 06:35:23 UTC, monkyyy wrote:
 An alternative choice for modelling polymorphism will shift 
 the priorities to something else all together.
At least D did it right
If doing it right means D lets you choose whether to order the meal delivered, or make it entirely yourself, then yes, it did it right.

Here's the make-it-yourself way - for those who don't have better things to do with their life ;)

```d
module myModule;

import core.stdc.stdio;  // for printf, sprintf
import core.stdc.stdlib; // for malloc, free

struct Shape
{
    ShapeVTable* vtable;
    const(char)* description;
}

struct ShapeVTable
{
    double function(Shape*) getArea;
}

struct Rectangle { Shape base; double width; double height; }
struct Square    { Shape base; double side; }
struct Circle    { Shape base; double radius; }
struct Triangle  { Shape base; double base_len; double height; }

// Implementations
double Rectangle_getArea(Shape* s) { auto r = cast(Rectangle*)s; return r.width * r.height; }
double Square_getArea(Shape* s)    { auto sq = cast(Square*)s; return sq.side * sq.side; }
double Circle_getArea(Shape* s)    { auto c = cast(Circle*)s; return 3.14159 * c.radius * c.radius; }
double Triangle_getArea(Shape* s)  { auto t = cast(Triangle*)s; return 0.5 * t.base_len * t.height; }

// Vtables
ShapeVTable rectangle_vtable = { &Rectangle_getArea };
ShapeVTable square_vtable    = { &Square_getArea };
ShapeVTable circle_vtable    = { &Circle_getArea };
ShapeVTable triangle_vtable  = { &Triangle_getArea };

// Constructors
Rectangle* newRectangle(double w, double h)
{
    auto r = cast(Rectangle*) malloc(Rectangle.sizeof);
    r.base.vtable = &rectangle_vtable;
    r.base.description = cast(const(char)*) malloc(50);
    sprintf(cast(char*)r.base.description, "Rectangle (width=%.0f, height=%.0f)", w, h);
    r.width = w;
    r.height = h;
    return r;
}

Square* newSquare(double side)
{
    auto sq = cast(Square*) malloc(Square.sizeof);
    sq.base.vtable = &square_vtable;
    sq.base.description = cast(const(char)*) malloc(30);
    sprintf(cast(char*)sq.base.description, "Square (side=%.0f)", side);
    sq.side = side;
    return sq;
}

Circle* newCircle(double radius)
{
    auto c = cast(Circle*) malloc(Circle.sizeof);
    c.base.vtable = &circle_vtable;
    c.base.description = cast(const(char)*) malloc(30);
    sprintf(cast(char*)c.base.description, "Circle (radius=%.0f)", radius);
    c.radius = radius;
    return c;
}

Triangle* newTriangle(double base_len, double height)
{
    auto t = cast(Triangle*) malloc(Triangle.sizeof);
    t.base.vtable = &triangle_vtable;
    t.base.description = cast(const(char)*) malloc(40);
    sprintf(cast(char*)t.base.description, "Triangle (base=%.0f, height=%.0f)", base_len, height);
    t.base_len = base_len;
    t.height = height;
    return t;
}

int main()
{
    Shape*[8] shapes;
    shapes[0] = cast(Shape*) newRectangle(10, 5);
    shapes[1] = cast(Shape*) newRectangle(20, 3);
    shapes[2] = cast(Shape*) newSquare(7);
    shapes[3] = cast(Shape*) newSquare(12);
    shapes[4] = cast(Shape*) newCircle(5);
    shapes[5] = cast(Shape*) newCircle(10);
    shapes[6] = cast(Shape*) newTriangle(10, 4);
    shapes[7] = cast(Shape*) newTriangle(6, 8);

    double totalArea = 0.0;

    printf("--- Mixed Shape Calculations ---\n\n");
    printf("Shape                               | Area\n");
    printf("------------------------------------+------\n");

    foreach (i; 0 .. 8)
    {
        double area = shapes[i].vtable.getArea(shapes[i]);
        printf("%-35s | %.0f\n", shapes[i].description, area);
        totalArea += area;
    }

    printf("\nTotal Area of All Shapes: %.0f\n", totalArea);

    foreach (i; 0 .. 8)
    {
        free(cast(void*) shapes[i].description);
        free(shapes[i]);
    }
    return 0;
}
```
Nov 14
parent Kapendev <alexandroskapretsos gmail.com> writes:
On Friday, 14 November 2025 at 23:35:02 UTC, Peter C wrote:
 Here's the make it yourself way - for those don't have better 
 things to do with their life ;)
No idea why you use a vtable here. They are useful sometimes, but also easy to ignore. A custom sum or union type would also make things a lot nicer. Here is an epic [example](https://github.com/Kapendev/joka/blob/main/examples/_006_union.d).
Nov 15
prev sibling next sibling parent Paul Backus <snarwin gmail.com> writes:
On Monday, 1 September 2025 at 13:58:23 UTC, Brother Bill wrote:
 I have heard that there are better or at least alternative ways 
 to have encapsulation, polymorphism and inheritance outside of 
 OOP.

 With OOP in D, we have full support for Single Inheritance, 
 including for Design by Contract (excluding 'old').  D also 
 supports multiple Interfaces.
D supports several different kinds of polymorphism, which vary based on (a) whether dispatch occurs at compile-time or runtime, and (b) whether the set of implementations is closed or open.

- **Function overloading** enables compile-time dispatch with a closed set of implementations.
- **Templates** enable compile-time dispatch with an open set of implementations.
- **Sum types** enable runtime dispatch with a closed set of implementations.
- **Classes** enable runtime dispatch with an open set of implementations.

Generally speaking, when choosing which type of polymorphism to use, I try to follow the [rule of least power][1]. Since compile-time dispatch is less powerful than runtime dispatch, and a closed set of implementations is less powerful than an open set, that means preferring function overloads when possible, then templates or sum types, and finally classes when neither of the other options will suffice.

[1]: https://en.wikipedia.org/wiki/Rule_of_least_power
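As a quick illustration of the sum-type row above (a sketch using std.sumtype from Phobos; the shape types are just examples):

```d
import std.stdio;
import std.sumtype;

struct Circle { double radius; }
struct Square { double side; }

// Runtime dispatch over a *closed* set: adding a new shape means touching
// this alias and every match, which the compiler will point out.
alias Shape = SumType!(Circle, Square);

double area(Shape s)
{
    return s.match!(
        (Circle c) => 3.14159 * c.radius * c.radius,
        (Square q) => q.side * q.side
    );
}

void main()
{
    Shape[] shapes = [Shape(Circle(1.0)), Shape(Square(2.0))];
    foreach (shape; shapes)
        writeln(area(shape));   // 3.14159 then 4
}
```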
Nov 04
prev sibling parent solidstate1991 <laszloszeremi outlook.com> writes:
On Monday, 1 September 2025 at 13:58:23 UTC, Brother Bill wrote:
 I have heard that there are better or at least alternative ways 
 to have encapsulation, polymorphism and inheritance outside of 
 OOP.

 With OOP in D, we have full support for Single Inheritance, 
 including for Design by Contract (excluding 'old').  D also 
 supports multiple Interfaces.

 What would be the alternatives and why would they be better?
 I assume the alternatives would have
 1. Better performance
 2. Simpler syntax
 3. Easier to read, write and maintain

 If possible, please provide links to documentation or examples.
At least my game engine PixelPerfectEngine uses an Entity Component System (ECS) for game logic stuff, with the ability to softly opt out of it.
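A toy sketch of the ECS idea (not PixelPerfectEngine's actual design; the names are made up): entities are just ids, components live in per-type storage keyed by the id, and "systems" are plain functions that loop over components.

```d
import std.stdio;

alias EntityId = size_t;

struct Position { float x, y; }
struct Velocity { float dx, dy; }

Position[EntityId] positions;    // component storage, one table per component type
Velocity[EntityId] velocities;

// A "system": runs over every entity that has both components.
void moveSystem(float dt)
{
    foreach (id, ref pos; positions)
        if (auto vel = id in velocities)
        {
            pos.x += vel.dx * dt;
            pos.y += vel.dy * dt;
        }
}

void main()
{
    EntityId player = 0, rock = 1;
    positions[player]  = Position(0, 0);
    velocities[player] = Velocity(1, 2);
    positions[rock]    = Position(5, 5);   // no Velocity: moveSystem skips it

    moveSystem(1.0);
    writeln(positions[player]);   // Position(1, 2)
    writeln(positions[rock]);     // Position(5, 5)
}
```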
Nov 09