
digitalmars.D - I've just fixed UFCS for the experimental type function branch

reply Stefan Koch <uplink.coder googlemail.com> writes:
Hi there,

just a quick update.
limited UFCS for type functions works again.
i.e.

this code:

---
struct S1 { double[2] x; }

static assert(S1.sizeOf == S1.sizeof);

size_t sizeOf(alias t)
{
     return t.sizeof;
}
---

will work.

There are a few caveats in the current POC implementation of 
type-function UFCS, which mean it will only apply to single 
type variables (`alias x;`)
and not to arrays of them; it also won't work for structures 
which have alias variable members.
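
For illustration, here is what the supported case looks like next to the unsupported ones (a sketch; `alignOf` is a hypothetical name, and this only compiles on the talias branch):

---
// Another single-type-variable function, in the same style as sizeOf
// (hypothetical name; talias branch only):
size_t alignOf(alias t)
{
    return t.alignof;
}

static assert(S1.alignOf == S1.alignof);

// Reportedly not covered yet by UFCS:
//  - arrays of type variables
//  - structures that have alias variable members
---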

If you want to play with it, the code is downloadable at:
https://github.com/UplinkCoder/dmd/tree/talias_master

which contains the type function changes applied on top of a 
current dmd ~master.

Happy Hacking,

Stefan
Sep 10 2020
next sibling parent reply Per Nordlöw <per.nordlow gmail.com> writes:
On Thursday, 10 September 2020 at 09:43:34 UTC, Stefan Koch wrote:
 just a quick update.
 limited UFCS for type functions works again.
Very nice. To gain momentum and motivation for adding more features, how about starting to maintain a druntime and phobos branch where all the traits that can be expressed with the current state of the talias branch are converted to type functions?
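
For example, a std.traits-style trait might convert like this (a rough sketch with made-up names; it assumes plain `is()` checks are allowed inside type function bodies on the talias branch):

---
// Template trait as written today (hypothetical name, to avoid clashing
// with std.traits.isFloatingPoint):
enum isFloatingPointT(T) = is(T == float) || is(T == double) || is(T == real);
static assert(isFloatingPointT!double);

// Possible type-function counterpart (talias branch only):
bool isFloatingPoint(alias t)
{
    return is(t == float) || is(t == double) || is(t == real);
}
---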
Sep 10 2020
parent reply Bruce Carneal <bcarneal gmail.com> writes:
On Thursday, 10 September 2020 at 13:55:06 UTC, Per Nordlöw wrote:
 On Thursday, 10 September 2020 at 09:43:34 UTC, Stefan Koch 
 wrote:
 just a quick update.
 limited UFCS for type functions works again.
Very nice.
Indeed. Type functions look like a very promising lowering: a new base functionality that can be used to reduce complexity in both the compiler and, much more importantly, in all future meta programs.
Sep 10 2020
parent Bruce Carneal <bcarneal gmail.com> writes:
On Thursday, 10 September 2020 at 14:14:30 UTC, Bruce Carneal 
wrote:
 On Thursday, 10 September 2020 at 13:55:06 UTC, Per Nordlöw 
 wrote:
 On Thursday, 10 September 2020 at 09:43:34 UTC, Stefan Koch 
 wrote:
 just a quick update.
 limited UFCS for type functions works again.
Very nice.
Indeed. Type functions look like a very promising lowering: a new base functionality that can be used to reduce complexity in both the compiler and, much more importantly, in all future meta programs.
More accurately, the *addition* of type functions to the compiler will, necessarily, increase the complexity of the compiler in trade for complexity reduction in many future meta programs.
Sep 10 2020
prev sibling next sibling parent reply Per Nordlöw <per.nordlow gmail.com> writes:
On Thursday, 10 September 2020 at 09:43:34 UTC, Stefan Koch wrote:
 limited UFCS for type functions works again.
Having type functions with UFCS is a significant improvement of the developer experience as well, compared to having to use templates.
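
Concretely, the two call-site styles compare like this (the template spelling is hypothetical; the UFCS line is from Stefan's post and only compiles on the talias branch):

---
struct S1 { double[2] x; }

enum sizeOfT(alias t) = t.sizeof;        // template trait, works today
static assert(sizeOfT!S1 == S1.sizeof);  // explicit instantiation syntax

// versus the type-function UFCS spelling (talias branch only):
// static assert(S1.sizeOf == S1.sizeof);
---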
Sep 10 2020
parent reply Bruce Carneal <bcarneal gmail.com> writes:
On Thursday, 10 September 2020 at 15:14:00 UTC, Per Nordlöw wrote:
 On Thursday, 10 September 2020 at 09:43:34 UTC, Stefan Koch 
 wrote:
 limited UFCS for type functions works again.
Having type functions with UFCS is a significant improvement of the developer experience as well, compared to having to use templates.
Absolutely. Functions are more readable, compose more readily, are easier to debug (better locality), and tickle fewer compiler problems than templates. CTFE is a huge win overall, "it just works".

By contrast, when using D's pattern matching meta programming sub-language, things start out very nicely but rapidly progress to "it probably works, but you'll have a devil of a time debugging it if it actually doesn't, and you may not know that it doesn't for a few years, best of luck".

In a type function world we'll still need templates, we'll just need fewer of them.
Sep 10 2020
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Sep 10, 2020 at 04:28:46PM +0000, Bruce Carneal via Digitalmars-d wrote:
 On Thursday, 10 September 2020 at 15:14:00 UTC, Per Nordlöw wrote:
 On Thursday, 10 September 2020 at 09:43:34 UTC, Stefan Koch wrote:
 limited UFCS for type functions works again.
Having type functions with UFCS is a significant improvement of the developer experience as well, compared to having to use templates.
Absolutely. Functions are more readable, compose more readily, are easier to debug (better locality), and tickle fewer compiler problems than templates. CTFE is a huge win overall, "it just works".
I wouldn't be so sure about that last part. The current CTFE implementation is, shall we say, hackish at best? It "works" by operating on AST nodes as if they were values, and is slow, memory inefficient, and when there are problems, it's a nightmare to debug. For small bits of code, it works wonderfully, but it's not at all scalable: try some non-trivial CTFE and pretty soon your compile times will be skyrocketing, if the compiler manages to finish its job at all before being killed by the kernel OOM killer for gobbling too much memory. (Of course, templates, esp. the recursive kind, are mostly to blame for this, but CTFE isn't exactly an innocent bystander when it comes to memory hoggage.)
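
To make the memory argument concrete (hypothetical names): the recursive template below creates one cached instantiation per step, which is where the memory goes, while the CTFE version is a single ordinary function call:

---
// One template instantiation per step, each cached by the compiler:
template staticSum(size_t n)
{
    static if (n == 0)
        enum staticSum = 0;
    else
        enum staticSum = n + staticSum!(n - 1);
}

// Plain CTFE: one function, no instantiation chain:
size_t sum(size_t n)
{
    size_t r;
    foreach (i; 0 .. n + 1)
        r += i;
    return r;
}

static assert(staticSum!100 == sum(100));
---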
 By contrast, when using D's pattern matching meta programming
 sub-language, things start out very nicely but rapidly progress to "it
 probably works, but you'll have a devil of a time debugging it if it
 actually doesn't, and you may not know that it doesn't for a few
 years, best of luck".
To be fair, the problems really only arise with IFTI and a few other isolated places in D's template system. In other respects, D templates are wonderfully nice to work with. Definitely a refreshing change from the horror show that is C++ templates.
 In a type function world we'll still need templates, we'll just need
 fewer of them.
Generally, I'm in favor of this. Templates have their place, but in many type manipulation operations, type functions are definitely better than truckloads of recursive template instantiations with their associated memory hoggage and slowdown of compile times. T -- English has the lovely word "defenestrate", meaning "to execute by throwing someone out a window", or more recently "to remove Windows from a computer and replace it with something useful". :-) -- John Cowan
Sep 10 2020
parent reply Bruce Carneal <bcarneal gmail.com> writes:
On Thursday, 10 September 2020 at 16:52:12 UTC, H. S. Teoh wrote:
 On Thu, Sep 10, 2020 at 04:28:46PM +0000, Bruce Carneal via 
 Digitalmars-d wrote:
 On Thursday, 10 September 2020 at 15:14:00 UTC, Per Nordlöw 
 wrote:
 On Thursday, 10 September 2020 at 09:43:34 UTC, Stefan Koch 
 wrote:
 limited UFCS for type functions works again.
Having type functions with UFCS is a significant improvement of the developer experience as well, compared to having to use templates.
Absolutely. Functions are more readable, compose more readily, are easier to debug (better locality), and tickle fewer compiler problems than templates. CTFE is a huge win overall, "it just works".
I wouldn't be so sure about that last part. The current CTFE implementation is, shall we say, hackish at best? ...
When I said "it just works" I should have said "it just works as you'd expect any program to work". The implementation may be lacking but the contract with the programmer is wonderfully straightforward.
 It "works" by operating on AST nodes as if they were values, 
 and is slow, memory inefficient, and when there are problems, 
 it's a nightmare to debug. For small bits of code, it works 
 wonderfully, but ...
Yes. My understanding is that the current implementation is much less than ideal for both CTFE and templates, but that we may be able to clear away a lot of the template "problem" in one go with type functions. Crucially, the type function implementation complexity is, reportedly, much, much lower than that seen in other compiler subcomponents.
 By contrast, when using D's pattern matching meta programming 
 sub-language, things start out very nicely but <... rapidly 
 degenerate>
To be fair, the problems really only arise with IFTI and a few other isolated places in D's template system. In other respects, D templates are wonderfully nice to work with. Definitely a refreshing change from the horror show that is C++ templates.
Yes. Of course in theory D templates are no more powerful than C++ templates, but anyone who has used both understands that simplicity in practical use trumps theoretical equivalence. As you note, it's not even close.

It seems to me from forum postings and reports on the (un)maintainability and instability of large template-heavy dlang code bases, that we're approaching the practical limits of our template capability. At least we're approaching the "heroic efforts may be needed ongoing" threshold.

So, what to do? We can always add more tooling to try and help the situation: better error reporting, better logging, pattern resolution dependency graph visualizers, ...

We can also go the "intrinsics" route: "have something that's too hard to do with templates? No problem, we'll add an intrinsic!".

We can also go the template library route: "too tough for mere mortals? No problem, my super-duper layer of template magic will make it all better!".

You'll note that not one of the above "solutions" actually reduces complexity, they just try to manage it. Type functions, on the other hand, look like they would support real-world simplification. Much of that simplification comes from programmer familiarity and from the ability to "opt-in" to pattern/set operations rather than being forced to, awkwardly, opt-out.
 In a type function world we'll still need templates, we'll 
 just need fewer of them.
Generally, I'm in favor of this. Templates have their place, but in many type manipulation operations, type functions are definitely better than truckloads of recursive template instantiations with their associated memory hoggage and slowdown of compile times.
Yep.
Sep 10 2020
next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Sep 10, 2020 at 11:44:30PM +0000, Bruce Carneal via Digitalmars-d wrote:
 On Thursday, 10 September 2020 at 16:52:12 UTC, H. S. Teoh wrote:
 On Thu, Sep 10, 2020 at 04:28:46PM +0000, Bruce Carneal via
 Digitalmars-d wrote:
[...]
 Absolutely.  Functions are more readable, compose more readily,
 are easier to debug (better locality), and tickle fewer compiler
 problems than templates.  CTFE is a huge win overall, "it just
 works".
I wouldn't be so sure about that last part. The current CTFE implementation is, shall we say, hackish at best? ...
When I said "it just works" I should have said "it just works as you'd expect any program to work". The implementation may be lacking but the contract with the programmer is wonderfully straightforward.
If that's what you meant, then I agree. One of the big wins of CTFE is that it unifies compile-time code and runtime code into a single interface (syntax), rather than requiring periphrasis in a separate sub-language.

D templates win in this respect in the area of template functions: template parameters are "merely" compile-time parameters, rather than some odd distinct category of things with a different syntax; this has been one of the big factors in the usability of D templates.

However, D templates fail on this point when it comes to non-function templates, e.g. you have to write a recursive template in order to manipulate a list of types, and imperative-style type manipulation is not possible. This adds friction to usage (e.g., I have to re-think my type sorting algorithm in terms of recursive templates rather than just calling std.algorithm.sort) and induces boilerplate (I can't reuse an existing sorting solution in std.algorithm.sort but have to rewrite essentially the same logic in recursive template style). Ideally, this redundant work should not be necessary; as long as I define an ordering predicate on types, I ought to be able to reuse std.algorithm for sorting or otherwise manipulating types.

Seen in this light, type functions are a major step in the right direction. [...]
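
For instance, even picking the largest type from a list must be re-thought as a recursive template today (the template below is real D; the imperative alternative is only a guess at what type functions might allow):

---
// Today: recursive template, one instantiation per list element.
template Largest(Ts...)
{
    static if (Ts.length == 1)
        alias Largest = Ts[0];
    else static if (Ts[0].sizeof >= Largest!(Ts[1 .. $]).sizeof)
        alias Largest = Ts[0];
    else
        alias Largest = Largest!(Ts[1 .. $]);
}
static assert(is(Largest!(byte, double, short) == double));

// With type functions one could hope to write the imperative
// equivalent instead (hypothetical, not the talias branch's
// actual syntax):
//
// alias largest(alias[] ts) { /* plain loop over ts */ }
---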
 To be fair, the problems really only arise with IFTI and a few other
 isolated places in D's template system.  In other respects, D
 templates are wonderfully nice to work with.  Definitely a
 refreshing change from the horror show that is C++ templates.
Yes. Of course in theory D templates are no more powerful than C++ templates but anyone who has used both understands that simplicity in practical use trumps theoretical equivalence. As you note, it's not even close.
Theoretical equivalence is really only useful in mathematical proofs; in practice there's a huge difference between languages of equivalent computing power. Lambda calculus can in theory express everything a D program can, but nobody in his sane mind would want to write a non-trivial program in lambda calculus. :-D Hence the term "Turing tarpit".
 It seems to me from forum postings and reports on the
 (un)maintainability and instability of large template heavy dlang code
 bases, that we're approaching the practical limits of our template
 capability.  At least we're approaching the "heroic efforts may be
 needed ongoing" threshold.
It really depends on what you're trying to do. I'm a pretty heavy template user (bring on those UFCS chains!), but IME it has not been a problem. The problems really only arise in specific usage patterns, such as excessive use of recursive templates (which is where Stefan's type functions come in), or excessive use of compile-time codegen with CTFE and templates (e.g., std.regex, std.uni tables).

Or unreasonably-long UFCS chains: I have a non-trivial example in one of my projects where almost the entire program logic from processing input to outputting a PNG file exists in one gigantic UFCS chain. :-D It led to megabyte-long symbols that eventually spurred Rainer to implement the symbol folding that we enjoy today. Eventually, I had to break the chain down into 2-3 pieces just to get it to compile before running out of memory. :-D

For "normal" template usage, templates really aren't that big of a problem. Unfortunately, some of the "bad" usage patterns occur in Phobos, so sometimes the unwary can trip up on them, which may be why there's been a string of complaints about templates lately.
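
For the record, the chain-splitting workaround looks something like this (a toy pipeline, not the PNG project): materializing an intermediate stage with `.array` keeps later stages from embedding the whole lazy-range type in their symbols:

---
import std.algorithm : filter, map, sum;
import std.array : array;
import std.range : iota;

void main()
{
    // stage1 is a plain int[], so the next stage's symbol no longer
    // embeds the entire Filter!(...) type (at the cost of an allocation).
    auto stage1 = iota(1, 100).filter!(n => n % 3 == 0).array;
    auto total  = stage1.map!(n => n * n).sum;
    assert(total == 112_761);
}
---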
 So, what to do?  We can always add more tooling to try and help the
 situation: better error reporting, better logging, pattern resolution
 dependency graph visualizers, ...
I say we fix the compiler implementation so that we can use what the language allows us to use. :-)
 We can also go the "intrinsics" route: "have something that's too hard
 to do with templates?  No problem, we'll add an intrinsic!".
I wouldn't add an intrinsic unless it provides some special functionality not expressible with the normal language. I don't think we have too many of those. The current problems with templates are really a matter of quality of implementation. That, and the lack of more suitable ways of doing certain things like type manipulation, which means templates get pressed into service where they are perhaps not really the best tool for the job.
 We can also go the template library route: "too tough for mere
 mortals?  No problem, my super-duper layer of template magic will make
 it all better!".
std.algorithm anybody? ;-)
 You'll note that not one of the above "solutions" actually reduces
 complexity, they just try to manage it.  Type functions, on the other
 hand, look like they would support real world simplification.  Much of
 that simplification comes from programmer familiarity and from the
 ability to "opt-in" to pattern/set operations rather than being forced
 to, awkwardly, opt-out.
[...] Type functions will definitely be a major step towards unifying the meta language with the regular language: the holy grail of metaprogramming. I doubt we can really get all the way there, but the closer we get, the more powerful D will become, and at the same time the easier to use D's metaprogramming features will become. That's worthy of pursuit IMO. T -- Doubt is a self-fulfilling prophecy.
Sep 10 2020
prev sibling parent reply Paul Backus <snarwin gmail.com> writes:
On Thursday, 10 September 2020 at 23:44:30 UTC, Bruce Carneal 
wrote:
 Yes.  Of course in theory D templates are no more powerful than 
 C++ templates but anyone who has used both understands that 
 simplicity in practical use trumps theoretical equivalence.  As 
 you note, it's not even close.

 It seems to me from forum postings and reports on the 
 (un)maintainability and instability of large template heavy 
 dlang code bases, that we're approaching the practical limits 
 of our template capability.  At least we're approaching the 
 "heroic efforts may be needed ongoing" threshold.
I think the main difficulty of scaling code bases that rely heavily on templates (either D or C++), as compared to other kinds of generic code like OOP-style polymorphism or traits/typeclasses à la Rust and Haskell, is that templates themselves--not the code they generate when you instantiate them, but the actual *templates*--are essentially dynamically typed. In general, there's no way to catch errors in a template until you "run" it (that is, instantiate it) and see what it does.

What this suggests to me (and this is borne out by my experience) is that writing correct, maintainable template code probably requires the same kind of disciplined approach to testing as writing correct, maintainable code in a dynamic language like Python or Ruby. Don't assume anything works until you can demonstrate it, actively look for ways to make your code fail, etc. If your test suite is shorter than your template code, you're almost certainly not being thorough enough.
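
A small illustration of that dynamic-typing behavior (hypothetical name): the template body below is only checked at instantiation, so a typo such as `tupleof.lenght` would compile cleanly until the first use; hence the unittest.

---
// Checked only when instantiated; a typo in the body would go
// unnoticed until some code actually uses the template.
template MemberCount(T)
{
    enum MemberCount = T.tupleof.length;
}

// The dynamic-language discipline: exercise everything you write.
unittest
{
    struct S { int a, b; }
    static assert(MemberCount!S == 2);
}
---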
Sep 10 2020
next sibling parent Stefan Koch <uplink.coder googlemail.com> writes:
On Friday, 11 September 2020 at 01:07:55 UTC, Paul Backus wrote:
 On Thursday, 10 September 2020 at 23:44:30 UTC, Bruce Carneal 
 wrote:
 Yes.  Of course in theory D templates are no more powerful 
 than C++ templates but anyone who has used both understands 
 that simplicity in practical use trumps theoretical 
 equivalence.  As you note, it's not even close.

 It seems to me from forum postings and reports on the 
 (un)maintainability and instability of large template heavy 
 dlang code bases, that we're approaching the practical limits 
 of our template capability.  At least we're approaching the 
 "heroic efforts may be needed ongoing" threshold.
I think the main difficulty of scaling code bases that rely heavily on templates (either D or C++), as compared to other kinds of generic code like OOP-style polymorphism or traits/typeclasses à la Rust and Haskell, is that templates themselves--not the code they generate when you instantiate them, but the actual *templates*--are essentially dynamically typed. In general, there's no way to catch errors in a template until you "run" it (that is, instantiate it) and see what it does.
Yes, exactly. Templates are polymorphic: they can re-shape, and what they do can be determined by the "call site". Therefore they do not even have a meaning before being instantiated. Type functions, on the other hand, don't suffer from that. They have a meaning whether you call them or not.
Sep 10 2020
prev sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Sep 11, 2020 at 01:07:55AM +0000, Paul Backus via Digitalmars-d wrote:
[...]
 I think the main difficulty of scaling code bases that rely heavily on
 templates (either D or C++), [...], is that templates themselves--not
 the code they generate when you instantiate them, but the actual
 *templates*--are essentially dynamically typed. In general, there's no
 way to catch errors in a template until you "run" it (that is,
 instantiate it) and see what it does.
[...] This is why when I write template code, I try to write defensively in a way that makes as few assumptions as possible about the template arguments. Ideally, every operation you'd do with that type should be tested in the sig constraints.

Even better would be if the compiler enforced this: unless you tested for some operation in the sig constraints, that operation would be deemed illegal. But in the past Walter & Andrei have shot down Concepts, which is very similar to this idea, so I don't know how likely this will ever make it into D.

Another approach, instead of sig constraints, might be to have typed (or meta-typed) template arguments, a kind of template analogue of static typing, so that arguments are constrained to satisfy certain constraints (i.e., are instances of a meta-type). Though these meta-types are just Concepts redressed, so that's not saying very much.

T -- Talk is cheap. Whining is actually free. -- Lars Wirzenius
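
A minimal sketch of that defensive style (hypothetical function; every operation the body performs is tested in the constraint first):

---
import std.range.primitives;   // gives arrays .empty and .front

auto firstElement(R)(R r)
    if (is(typeof(r.empty) : bool) &&   // body uses r.empty
        is(typeof(r.front)))            // body uses r.front
{
    assert(!r.empty);
    return r.front;
}

unittest
{
    assert(firstElement([3, 1, 4]) == 3);
}
---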
Sep 10 2020
next sibling parent jmh530 <john.michael.hall gmail.com> writes:
On Friday, 11 September 2020 at 02:24:07 UTC, H. S. Teoh wrote:
 [snip]

 Even better would be if the compiler enforced this: unless you 
 tested for some operation in the sig constraints, that 
 operation would be deemed illegal.  But in the past Walter & 
 Andrei have shot down Concepts, which is very similar to this 
 idea, so I don't know how likely this will ever make it into D.

 [snip]
Atila seems more convinced of the value of concepts.
Sep 11 2020
prev sibling parent reply Paul Backus <snarwin gmail.com> writes:
On Friday, 11 September 2020 at 02:24:07 UTC, H. S. Teoh wrote:
 On Fri, Sep 11, 2020 at 01:07:55AM +0000, Paul Backus via 
 Digitalmars-d wrote: [...]
 I think the main difficulty of scaling code bases that rely 
 heavily on templates (either D or C++), [...], is that 
 templates themselves--not the code they generate when you 
 instantiate them, but the actual *templates*--are essentially 
 dynamically typed. In general, there's no way to catch errors 
 in a template until you "run" it (that is, instantiate it) and 
 see what it does.
[...] This is why when I write template code, I try to write defensively in a way that makes as few assumptions as possible about the template arguments. Ideally, every operation you'd do with that type should be tested in the sig constraints. Even better would be if the compiler enforced this: unless you tested for some operation in the sig constraints, that operation would be deemed illegal. But in the past Walter & Andrei have shot down Concepts, which is very similar to this idea, so I don't know how likely this will ever make it into D.
Yeah, that's basically the traits/typeclasses approach: you commit to a particular set of constraints, and the compiler checks your generic code against them *prior* to instantiation with any particular type (or "monomorphization," as the Rustaceans call it).

The main downside is that you can't do design-by-introspection. Once you commit to a typeclass, its interface is all you get. If you want your algorithm to work differently for InputRange and RandomAccessRange, you have to write two implementations. And if you don't want to deal with the combinatorial blow-up, you just use the least-restrictive typeclass possible, and miss out on any opportunities for progressive enhancement. (Hypothesis: one consequence of this is that an optimizing compiler backend has to work harder, on average, to get efficient code out of a Rust iterator than it does for the analogous range in D.)

Theoretically, if you had something like a type-level version of TypeScript's flow-based type analysis, you could combine the two approaches. But I don't know of any language that's actually implemented such a feature.
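
For reference, the design-by-introspection pattern being contrasted here looks like this in D (hypothetical function): one generic implementation with an opt-in fast path instead of two typeclass instances:

---
import std.range.primitives : hasLength, isInputRange,
    isRandomAccessRange, walkLength;

// One algorithm; the random-access case is progressively enhanced
// instead of requiring a second implementation.
size_t elementCount(R)(R r)
    if (isInputRange!R)
{
    static if (isRandomAccessRange!R && hasLength!R)
        return r.length;     // O(1) fast path
    else
        return r.walkLength; // generic O(n) fallback
}
---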
Sep 11 2020
next sibling parent Bruce Carneal <bcarneal gmail.com> writes:
On Friday, 11 September 2020 at 11:54:20 UTC, Paul Backus wrote:
 On Friday, 11 September 2020 at 02:24:07 UTC, H. S. Teoh wrote:
 On Fri, Sep 11, 2020 at 01:07:55AM +0000, Paul Backus via 
 Digitalmars-d wrote: [...]
 I think the main difficulty of scaling code bases that rely 
 heavily on templates (either D or C++), [...], is that 
 templates themselves--not the code they generate when you 
 instantiate them, but the actual *templates*--are essentially 
 dynamically typed. In general, there's no way to catch errors 
 in a template until you "run" it (that is, instantiate it) 
 and see what it does.
[...] This is why when I write template code, I try to write defensively in a way that makes as few assumptions as possible about the template arguments. Ideally, every operation you'd do with that type should be tested in the sig constraints. Even better would be if the compiler enforced this: unless you tested for some operation in the sig constraints, that operation would be deemed illegal. But in the past Walter & Andrei have shot down Concepts, which is very similar to this idea, so I don't know how likely this will ever make it into D.
Yeah, that's basically the traits/typeclasses approach: you commit to a particular set of constraints, and the compiler checks your generic code against them *prior* to instantiation with any particular type (or "monomorphization," as the Rustaceans call it). The main downside is that you can't do design-by-introspection. Once you commit to a typeclass, its interface is all you get. If you want your algorithm to work differently for InputRange and RandomAccessRange, you have to write two implementations. And if you don't want to deal with the combinatorial blow-up, you just use the least-restrictive typeclass possible, and miss out on any opportunities for progressive enhancement. (Hypothesis: one consequence of this is that an optimizing compiler backend has to work harder, on average, to get efficient code out of a Rust iterator than it does for the analogous range in D.) Theoretically, if you had something like a type-level version of TypeScript's flow-based type analysis, you could combine the two approaches. But I don't know of any language that's actually implemented such a feature.
There is a lot of static type language design terrain to be explored on the way to the fully dynamic type border. Scouting ahead, as Paul and H.S. are doing, informs our near term decision regarding type functions.

These are my summary questions regarding type functions:

1) Are they worth another 1000 LoC in the compiler?
2) Do they preclude envisionable advances in the future?

My answer to 1) is: Yes, the compounding benefits over time of simpler user meta code outweigh the, reportedly, small increase in compiler code today.

My answer to 2) is: No, type functions do not preclude future advances on the type front. They are self contained.
Sep 11 2020
prev sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Sep 11, 2020 at 11:54:20AM +0000, Paul Backus via Digitalmars-d wrote:
[...]
 Yeah, that's basically the traits/typeclasses approach: you commit to
 a particular set of constraints, and the compiler checks your generic
 code against them *prior* to instantiation with any particular type
 (or "monomorphization," as the Rustaceans call it).
 
 The main downside is that you can't do design-by-introspection. Once
 you commit to a typeclass, its interface is all you get.
Not necessarily. You can accept the most general type class in the sig constraints (or omit it altogether), and introspect in the function body.

To take it a step further: what if you can progressively refine the allowed operations on a template argument T within the template function body? To use your example of InputRange vs. RandomAccessRange: the function declares it accepts InputRange because that's the most general class. Then within its function body, it tests for RandomAccessRange with a static if, the act of which adds random access range operations on T to the permitted operations within the static if block. In a different static if block you might test for BidirectionalRange instead, and that would permit BidirectionalRange operations on T within that block (and prohibit RandomAccessRange operations).

This way, you get *both* DbI and static checking of valid operations on template arguments.

T -- Don't modify spaghetti code unless you can eat the consequences.
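
Today's D gives you the unchecked half of this (hypothetical function): the introspection works, but only convention stops the else branch from using `back` anyway:

---
import std.range.primitives : isBidirectionalRange, isInputRange;

auto lastIfPossible(R)(R r)
    if (isInputRange!R)
{
    static if (isBidirectionalRange!R)
        return r.back;   // extra operation, guarded only by convention
    else
        return r.front;  // nothing enforces sticking to InputRange ops
}
---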
Sep 11 2020
parent reply Meta <jared771 gmail.com> writes:
On Friday, 11 September 2020 at 14:54:33 UTC, H. S. Teoh wrote:
 Not necessarily.  You can accept the most general type class in 
 the sig constraints (or omit it altogether), and introspect in 
 the function body.

 To take it a step further: what if you can progressively refine 
 the allowed operations on a template argument T within the 
 template function body?  To use your example of InputRange vs. 
 RandomAccessRange: the function declares it accepts InputRange 
 because that's the most general class. Then within its function 
 body, it tests for RandomAccessRange with a static if, the act 
 of which adds random access range operations on T to the 
 permitted operations within the static if block.  In a 
 different static if block you might test for BidirectionalRange 
 instead, and that would permit BidirectionalRange operations on 
 T within that block (and prohibit RandomAccessRange operations).

 This way, you get *both* DbI and static checking of valid 
 operations on template arguments.


 T
I think that's exactly what he is talking about with "type-level" flow-based typing (kinding? ;-)). If types are just another value, though, you don't need separate mechanisms for value-level and type-level flow-based typing.
Sep 11 2020
parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Sep 11, 2020 at 03:17:52PM +0000, Meta via Digitalmars-d wrote:
 On Friday, 11 September 2020 at 14:54:33 UTC, H. S. Teoh wrote:
[....]
 To take it a step further: what if you can progressively refine the
 allowed operations on a template argument T within the template
 function body?  To use your example of InputRange vs.
 RandomAccessRange: the function declares it accepts InputRange
 because that's the most general class. Then within its function
 body, it tests for RandomAccessRange with a static if, the act of
 which adds random access range operations on T to the permitted
 operations within the static if block.  In a different static if
 block you might test for BidirectionalRange instead, and that would
 permit BidirectionalRange operations on T within that block (and
 prohibit RandomAccessRange operations).
 
 This way, you get *both* DbI and static checking of valid operations
 on template arguments.
[...]
 I think that's exactly what he is talking about with "type-level"
 flow-based typing (kinding? ;-)). If types are just another value,
 though, you don't need separate mechanisms for value-level and
 type-level flow-based typing.
That gives me an idea. What if we have compile-time pseudo-classes that represent type classes? Something like this:

    typeclass InputRange(T) {
        bool empty();
        T front();
        void popFront();
    }

    typeclass BidirectionalRange(T) : InputRange!T {
        T back();
        void popBack();
    }

    auto myTemplateFunc(T, InputRange!T Ir)(Ir r)
    {
        // If successful, this makes `br` an alias of r but with
        // expanded allowed operations, analogous to downcasting
        // classes
        alias br = cast(BidirectionalRange!T) r;
        static if (is(typeof(br)))
        {
            // bidirectional range operations permitted on br
            br.popBack();
        }
        else
        {
            assert(!is(typeof(br)));
            // only input range operations permitted on r
            r.popFront();
        }
    }

T -- It's amazing how careful choice of punctuation can leave you hanging:
Sep 11 2020
prev sibling parent reply Meta <jared771 gmail.com> writes:
On Thursday, 10 September 2020 at 09:43:34 UTC, Stefan Koch wrote:
 Hi there,

 just a quick update.
 limited UFCS for type functions works again.
 i.e.

 this code:

 ---
 struct S1 { double[2] x; }

 static assert(S1.sizeOf == S1.sizeof);

 size_t sizeOf(alias t)
 {
     return t.sizeof;
 }
 ---

 will work.
I'm curious, will this also work?

    size_t sizeOf(alias t)
    {
        size_t result;
        /* static? */ if (__traits(isScalar, t))
        {
            static if (is(t == int))
                result += 32;
            else static if (...)
            ...
        }
        else static if (is(t == A[n], A, size_t n))
            result += A.sizeOf * n;
        else static if (...)
            ...
        else
            /* static? */ foreach (field; t.tupleof)
                result += field.sizeOf;

        return result;
    }

Basically, is the implementation at a level where sizeOf can be turtles all the way down, with minimal or no reliance on __traits?
Sep 10 2020
next sibling parent reply Paul Backus <snarwin gmail.com> writes:
On Thursday, 10 September 2020 at 17:05:02 UTC, Meta wrote:
 I'm curious, will this also work?

 size_t sizeOf(alias t)
 {
     size_t result;
     /* static? */ if (__traits(isScalar, t))
     {
         static if (is(t == int))
             result += 32;
         else static if (...)
         ...
     }
     else static if (is(t == A[n], A, size_t n))
         result += A.sizeOf * n;
     else static if (...)
         ...
     else
         /* static? */ foreach (field; t.tupleof)
             result += field.sizeOf;

     return result;
 }

 Basically, is the implementation at a level where sizeOf can be 
 turtles all the way down, with minimal or no reliance on 
 __traits?
One thing built-in .sizeof does that no user-code version can do is "freeze" the size of a type to prevent additional members from being added. For example, if you try to compile this code:

    struct S
    {
        int a;
        enum size = S.sizeof;
        mixin("int b;");
    }

...you'll get an error:

    onlineapp.d-mixin-5(5): Error: variable onlineapp.S.b cannot be further field because it will change the determined S size
Sep 10 2020
next sibling parent reply Meta <jared771 gmail.com> writes:
On Thursday, 10 September 2020 at 18:05:23 UTC, Paul Backus wrote:
 On Thursday, 10 September 2020 at 17:05:02 UTC, Meta wrote:
 I'm curious, will this also work?

 size_t sizeOf(alias t)
 {
     size_t result;
     /* static? */ if (__traits(isScalar, t))
     {
         static if (is(t == int))
             result += 32;
         else static if (...)
         ...
     }
     else static if (is(t == A[n], A, size_t n))
         result += A.sizeOf * n;
     else static if (...)
         ...
     else
         /* static? */ foreach (field; t.tupleof)
             result += field.sizeOf;

     return result;
 }

 Basically, is the implementation at a level where sizeOf can 
 be turtles all the way down, with minimal or no reliance on 
 __traits?
One thing built-in .sizeof does that no user-code version can do is "freeze" the size of a type to prevent additional members from being added. For example, if you try to compile this code:

    struct S
    {
        int a;
        enum size = S.sizeof;
        mixin("int b;");
    }

...you'll get an error:

    onlineapp.d-mixin-5(5): Error: variable onlineapp.S.b cannot be further field because it will change the determined S size
It looks like it depends on the order of fields:

    struct S
    {
        int a;
        mixin("int b;"); // No error
        enum size = S.sizeof;
    }

Ideally `enum size = S.sizeof` could be delayed until after all mixins are "evaluated" (is that during one of the semantic phases? I don't know much about how DMD actually works), but I imagine that would take some major re-architecting. Really though, is it even necessary to be able to "freeze" a type like this when evaluating type functions?
Sep 10 2020
next sibling parent Stefan Koch <uplink.coder googlemail.com> writes:
On Thursday, 10 September 2020 at 18:44:46 UTC, Meta wrote:
 Really though, is it even necessary to be able to "freeze" a 
 type like this when evaluating type functions?
Type functions cannot change the types they are working on, so from that point of view you don't have to enforce a determinate size. But at the same time, I expect people to only feed determined things into type functions. Everything else is mighty confusing.
Sep 10 2020
prev sibling parent Paul Backus <snarwin gmail.com> writes:
On Thursday, 10 September 2020 at 18:44:46 UTC, Meta wrote:
 It looks like it depends on the order of fields:

 struct S
 {
     int a;
     mixin("int b;"); //No error
     enum size = S.sizeof;
 }

 Ideally `enum size = S.sizeof` could be delayed until after all 
 mixins are "evaluated" (is that during one of the semantic 
 phases? I don't know much about how DMD actually works), but I 
 imagine that would take some major re-architecting. Really 
 though, is it even necessary to be able to "freeze" a type like 
 this when evaluating type functions?
Sometimes there is no way to avoid "evaluating" one part of the program before another. For example:

    struct S
    {
        int a;
        static if (S.sizeof < 8)
        {
            int b;
        }
    }

Because of the way static if works, the compiler *must* do semantic analysis on S.sizeof before it does semantic analysis on the declaration of b. There is no way to avoid it by re-ordering or deferring the analysis of certain declarations.
Sep 10 2020
prev sibling parent jmh530 <john.michael.hall gmail.com> writes:
On Thursday, 10 September 2020 at 18:05:23 UTC, Paul Backus wrote:
 [snip]

 One thing built-in .sizeof does that no user-code version can 
 do is "freeze" the size of a type to prevent additional members 
 from being added. For example, if you try to compile this code:

 struct S
 {
     int a;
     enum size = S.sizeof;
     mixin("int b;");
 }

 ...you'll get an error:

 onlineapp.d-mixin-5(5): Error: variable onlineapp.S.b cannot be 
 further field because it will change the determined S size
I wouldn't want to rely on something like that personally. I'd rather guarantee that the only members are those in a specific list.
Sep 10 2020
prev sibling next sibling parent Steven Schveighoffer <schveiguy gmail.com> writes:
On 9/10/20 1:05 PM, Meta wrote:

 
 I'm curious, will this also work?
 
 size_t sizeOf(alias t)
 {
      size_t result;
      /* static? */ if (__traits(isScalar, t))
      {
          static if (is(t == int))
              result += 32;
          else static if (...)
          ...
      }
      else static if (is(t == A[n], A, size_t n))
          result += A.sizeOf * n;
      else static if (...)
          ...
      else
          /* static? */ foreach (field; t.tupleof)
              result += field.sizeOf;
 
      return result;
 }
 
 Basically, is the implementation at a level where sizeOf can be turtles 
 all the way down, with minimal or no reliance on __traits?
Doesn't the compiler have to do this anyway so it can define the memory layout? I'm curious what the benefit of doing this in a library would be. -Steve
Sep 10 2020
prev sibling parent Stefan Koch <uplink.coder googlemail.com> writes:
On Thursday, 10 September 2020 at 17:05:02 UTC, Meta wrote:
     else static if (is(t == A[n], A, size_t n))
         result += A.sizeOf * n;
That `is` expression introduces two symbols, `n` and `A`, based on whether `t` is a static array, which is something that you cannot do in a type function. You cannot change the form of the function body. Type functions are not polymorphic; they cannot change shape. Besides, is there ever a case in which `A.sizeOf * n` would not be the same as `t.sizeof`?
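
For contrast, here is that symbol-introducing pattern in an ordinary template, where it is allowed (hypothetical name):

---
template ElementOf(T)
{
    static if (is(T == A[n], A, size_t n))
        alias ElementOf = A;   // A and n exist only inside this branch
    else
        alias ElementOf = void;
}

static assert(is(ElementOf!(double[2]) == double));
static assert(is(ElementOf!int == void));
---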
 Basically, is the implementation at a level where sizeOf can be 
 turtles all the way down, with minimal or no reliance on 
 __traits?
I'd guess you would have a higher reliance on __traits, simply because __traits have a return value and can be pure, as opposed to pattern-matching `is` expressions, which can change things outside of their own context.
Sep 10 2020