digitalmars.D - Probable C# 6.0 features
- Suliman (4/4) Dec 10 2013 Maybe it would be possible to get some good idea from next
- bearophile (5/7) Dec 10 2013 Discussed a little here:
- Namespace (1/1) Dec 10 2013 I love Monadic null checking. Would be great if D would have it.
- Ary Borenszweig (4/5) Dec 10 2013 What does a monad have to do with that?
- Walter Bright (2/4) Dec 10 2013 The best way to learn something is to try to explain it to someone else.
- Ary Borenszweig (7/11) Dec 11 2013 I don't know why the order of the words I wrote became like that. I
- Max Samukha (10/15) Dec 11 2013 Some things are very hard to explain because explaining them
- Robert Clipsham (52/59) Dec 11 2013 Monads suffer from the same problem algebra, templates and many
- Ary Borenszweig (30/86) Dec 11 2013 Thanks for the explanation. I think I understand monads a bit more now. ...
- Jacob Carlborg (5/24) Dec 11 2013 It's like in Ruby with ActiveSupport:
- qznc (9/23) Dec 11 2013 They are too trivial to comprehend. ;)
- bearophile (5/9) Dec 11 2013 On related topics:
- John Colvin (5/29) Dec 11 2013 I haven't read the link, but what i presume you mean is that a
- Timon Gehr (196/198) Dec 11 2013 The term has a more general meaning in category theory. :)
- Timon Gehr (4/5) Dec 11 2013 This should read:
- Manu (6/7) Dec 10 2013 Yeah that's awesome. Definitely the most interesting one to me too.
- Rikki Cattermole (4/5) Dec 10 2013 +1 on Monadic null checking. Do miss it from Groovy.
- Jacob Carlborg (7/8) Dec 11 2013 Wouldn't that be possible to implement as a library function, at least
- Robert Clipsham (62/63) Dec 11 2013 Doesn't need to be a language feature - you can implement it as a
- qznc (4/69) Dec 11 2013 Now define map,bind,join. It quickly gets ugly.
- Adam Wilson (16/19) Dec 10 2013 Let's not forget the biggest feature of C# 6.0, Roslyn and
- Paulo Pinto (6/19) Dec 10 2013 Except Rosylin is also native code when you deploy with NGEN, Windows
- Adam Wilson (13/40) Dec 10 2013 Agreed. But those are implementation details in practice. From the
- Timon Gehr (2/5) Dec 10 2013 I assume they just don't expose any mutating operations.
- Idan Arye (23/27) Dec 10 2013 I really like the "Inline declarations for out params"
- Simen Kjærås (9/14) Dec 11 2013 Not just that - how would you signal that a is an invalid value? Throw
- Xinok (19/23) Dec 10 2013 Regarding #1 (Primary Constructors), I think the feature has
Maybe it would be possible to get some good ideas from the next version of C#. These are only ideas for the next version, but the new features may look like this:

http://damieng.com/blog/2013/12/09/probable-c-6-0-features-illustrated
Dec 10 2013
Suliman:

> Maybe it would be possible to get some good ideas from the next version
> of C#.

Discussed a little here:
http://forum.dlang.org/thread/naeqxidypkpehynmiacz@forum.dlang.org

Bye,
bearophile
Dec 10 2013
I love Monadic null checking. Would be great if D would have it.
Dec 10 2013
On 12/10/13 5:35 PM, Namespace wrote:
> I love Monadic null checking. Would be great if D would have it.

What does a monad have to do with that?

(just out of curiosity... BTW, the other day I friend tried to explain me monads and he realized couldn't understand them himself)
Dec 10 2013
On 12/10/2013 3:53 PM, Ary Borenszweig wrote:
> BTW, the other day I friend tried to explain me monads and he realized
> couldn't understand them himself

The best way to learn something is to try to explain it to someone else.
Dec 10 2013
On 12/11/13 2:14 AM, Walter Bright wrote:
> On 12/10/2013 3:53 PM, Ary Borenszweig wrote:
>> BTW, the other day I friend tried to explain me monads and he realized
>> couldn't understand them himself
>
> The best way to learn something is to try to explain it to someone else.

I don't know why the order of the words I wrote became like that. I meant to say:

"the other day a friend tried to explain me monads and he realized he couldn't understand them himself"

What you say is true: once you are able to explain something very well, you understand it much better.
Dec 11 2013
On Wednesday, 11 December 2013 at 05:14:56 UTC, Walter Bright wrote:
> On 12/10/2013 3:53 PM, Ary Borenszweig wrote:
>> BTW, the other day I friend tried to explain me monads and he realized
>> couldn't understand them himself
>
> The best way to learn something is to try to explain it to someone else.

Some things are very hard to explain because explaining them requires a lot of context unknown to the recipient, and there is no appropriate analogy to pass that context "by reference". So it's often wiser to give up and let the other person acquire the context on his own. Explaining monads to other people is a waste of time :). There is an interesting write-up by Dijkstra, which touches this subject as well: http://www.cs.utexas.edu/~EWD/ewd10xx/EWD1036.PDF
Dec 11 2013
On Tuesday, 10 December 2013 at 23:53:25 UTC, Ary Borenszweig wrote:
> On 12/10/13 5:35 PM, Namespace wrote:
>> I love Monadic null checking. Would be great if D would have it.
>
> What does a monad have to do with that?
>
> (just out of curiosity... BTW, the other day I friend tried to explain me
> monads and he realized couldn't understand them himself)

Monads suffer from the same problem algebra, templates and many other things do in that they have a scary name and are often poorly explained.

The way I like to think of monads is simply a box that performs computations. They have two methods, bind and return. return lets you put a value into the box, and bind lets you call some function with the value that's in the box.

Probably the simplest example is the maybe monad ("monadic null checking"). The idea is to let you change something like this:

----
auto a = ...;
if (a != null)
{
    auto b = a.foo();
    if (b != null)
    {
        b.bar();
        // And so on
    }
}
----

Into:

----
a.foo().bar();
----

Here, the bind and return methods might look something like:

----
// "return" function
Maybe!Foo ret(Foo f)
{
    return just(f);
}

auto bind(Maybe!Foo thing, Foo function(Foo) fn)
{
    if (thing)
    {
        return ret(fn(thing));
    }
    return nothing();
}
----

There are two functions here for getting instances of maybe - nothing and just. Nothing says "I don't have a value" and just says "I have a value". The bind method then simply says "if I have a value, call the function, otherwise just return nothing". The code above would be used as follows:

----
bind(bind(a, &foo), &bar);
----

With a bit of magic you can get rid of the overhead of function pointers and allow it to work with other function types, but it shows the general concept - wrap up the values, then use bind to chain functions together. The ?. operator mentioned in the article is simply some syntactic sugar for a monad, hence "monadic null checking".
Dec 11 2013
On 12/11/13 11:32 AM, Robert Clipsham wrote:
> Monads suffer from the same problem algebra, templates and many other
> things do in that they have a scary name and are often poorly explained.
> [...]
> The ?. operator mentioned in the article is simply some syntactic sugar
> for a monad, hence "monadic null checking".

Thanks for the explanation. I think I understand monads a bit more now. :-)

However, isn't it too much to call the "?." operator "monadic null checking"? I see it as just syntax sugar:

foo?.bar

is the same as:

(temp = foo) ? temp.bar : null

And so:

foo?.bar?.baz

is the same as:

(temp2 = ((temp = foo) ? temp.bar : null)) ? temp2.baz : null

By the way, in Crystal we currently have:

class Object
  def try
    yield
  end
end

class Nil
  def try(&block)
    nil
  end
end

You can use it like this:

foo.try &.bar
foo.try &.bar.try &.baz

Not the nicest thing, but it's implemented as a library solution. Then you could have syntactic sugar for that.

Again, I don't see why this is related to monads. Maybe because monads are boxes that can perform functions, anything which involves a transform can be related to a monad. :-P
Dec 11 2013
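Ary's `(temp = foo) ? temp.bar : null` desugaring maps directly onto a small D helper. The sketch below is not from the thread; `ifNotNull` and `Node` are invented names for illustration, and it assumes class (nullable) receiver types only:

----
// Hypothetical one-step "?." helper: the access runs only if the receiver is
// non-null, otherwise null is propagated. Assumes class (nullable) types.
auto ifNotNull(alias access, T)(T obj)
{
    return obj is null ? typeof(access(obj)).init : access(obj);
}

class Node { Node next; }

unittest
{
    auto a = new Node;
    a.next = new Node;

    auto r = a.ifNotNull!(n => n.next)    // ok: yields the second Node
              .ifNotNull!(n => n.next)    // yields null: second Node's next was never set
              .ifNotNull!(n => n.next);   // receiver is null, the access is skipped
    assert(r is null);

    Node b = null;
    assert(b.ifNotNull!(n => n.next) is null);  // no dereference of a null receiver
}
----

Passing the access as a template alias keeps the short-circuit behaviour: the lambda is never evaluated when the receiver is null.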
On 2013-12-11 16:10, Ary Borenszweig wrote:
> By the way, in Crystal we currently have:
> [...]
> Not the nicest thing, but it's implemented as a library solution. Then
> you could have syntactic sugar for that.

It's like in Ruby with ActiveSupport:

a = foo.try(:bar).try(:x)

-- 
/Jacob Carlborg
Dec 11 2013
On Wednesday, 11 December 2013 at 14:32:34 UTC, Robert Clipsham wrote:
> Monads suffer from the same problem algebra, templates and many other
> things do in that they have a scary name and are often poorly explained.

They are too trivial to comprehend. ;)

Integers are a finite ring [0]. What does that tell you? Does it help you that there is "multiplicative inverse modulo N" for every integer? If you do crypto or hash stuff, it sometimes does. Most programmers do not have to care, though.

[0] http://jonisalonen.com/2013/mathematical-foundations-of-computer-integers/
Dec 11 2013
qznc:

> Integers are a finite ring [0]. What does that tell you? Does it help
> you that there is "multiplicative inverse modulo N" for every integer?
> If you do crypto or hash stuff, it sometimes does. Most programmers do
> not have to care, though.

On related topics:
http://ericlippert.com/2013/11/14/a-practical-use-of-multiplicative-inverses/

Bye,
bearophile
Dec 11 2013
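To make the linked multiplicative-inverse trick concrete: modulo 2^32 only odd integers are invertible, and the inverse can be found with a few Newton steps. A minimal D sketch, not from the thread (`modInverse` is an invented name):

----
import std.stdio : writeln;

// Multiplicative inverse of an odd uint modulo 2^32, via Newton's iteration.
// Starting from x = a is already correct modulo 8 (a*a == 1 mod 8 for odd a),
// and each step doubles the number of correct low bits: 3 -> 6 -> 12 -> 24 -> 48.
uint modInverse(uint a)
{
    assert(a & 1, "only odd numbers are invertible modulo 2^32");
    uint x = a;
    foreach (i; 0 .. 4)
        x *= 2 - a * x;
    return x;
}

void main()
{
    immutable inv = modInverse(3);
    assert(3u * inv == 1);     // 3 * 2863311531 wraps around to 1
    assert(15u * inv == 5);    // exact division by 3 becomes a multiplication
    writeln(inv);              // 2863311531
}
----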
On Wednesday, 11 December 2013 at 16:14:15 UTC, qznc wrote:
> On Wednesday, 11 December 2013 at 14:32:34 UTC, Robert Clipsham wrote:
>> Monads suffer from the same problem algebra, templates and many other
>> things do in that they have a scary name and are often poorly explained.
>
> They are too trivial to comprehend. ;)
>
> Integers are a finite ring [0]. What does that tell you? Does it help
> you that there is "multiplicative inverse modulo N" for every integer?
> If you do crypto or hash stuff, it sometimes does. Most programmers do
> not have to care, though.
>
> [0] http://jonisalonen.com/2013/mathematical-foundations-of-computer-integers/

I haven't read the link, but what I presume you mean is that a contiguous subsection of the integers, with defined overflow that satisfies max(i) + 1 = min(i), forms a finite ring? Because Z is definitely an infinite ring.
Dec 11 2013
On 12/11/2013 03:32 PM, Robert Clipsham wrote:
> The way I like to think of monads is simply a box that performs
> computations. ...

The term has a more general meaning in category theory. :)

What you are describing is a monad in the category of types and (pure!) functions. Since there has been some interest on this topic recently, I'll elaborate a little on that in a type theoretic setting. It would be easy to generalize the notions I'll describe to an arbitrary category.

I think it is actually easier to picture the concept using the definition that does not squash map and join into a single bind implementation and then derives them from there. Then a monad consists of an endofunctor, and two natural transformations 'return' (or 'η') and 'join' (or 'μ'). (I'll explain what each term means, and there will be examples, so bear with me :o). Feel free to ask questions if I lose you at some point.)

An endofunctor consists of:

m:   Type -> Type
map: Π(a:Type)(b:Type). (a -> b) -> (m a -> m b)

I.e. a mapping on types and a mapping on functions. The type of 'map' could be read as: for all types 'a' and 'b', map creates a function from 'm a' to 'm b' given a function from 'a' to 'b'. The closest concept in D is a template parameter (but what we have here is cleaner.)

An example of an endofunctor is the following:

m: Type -> Type
m = list

map: Π(a:Type)(b:Type). (a -> b) -> (list a -> list b)
map a b f = list_rec [] ((::).f)

The details are not that important. This is just the map function on lists, so for example:

map nat nat [1,2,3] (λx. x + 1) = [2,3,4]

(In case you are lost, a somewhat analogous statement in D would be assert([1,2,3].map!(int,int)(x=>x+1) == [2,3,4]).)

The map function for any functor must satisfy some intuitive laws:

ftr-id: Π(a:Type). map a a (id a) = id (m a)

I.e. if we map an identity function, we should get back an identity function.

ftr-compose: Π(a:Type)(b:Type)(c:Type)(f:b->c)(g:a->b).
             map b c f . map a b g = map a c (f . g)

I.e. if we map twice in a row, we can just map the composition of both functions once. Eg. instead of

map nat nat (λx. x + 1) (map nat nat (λx. x + 2) [1,2,3])

we can write

map nat nat (λx. x + 3) [1,2,3]

without changing the result.

Many polymorphic containers form endofunctors in the obvious way. You could e.g. imagine mapping a tree (this corresponds to a functor (tree, maptree)):

    1                                          2
   / \                                        / \
  2   3   - maptree nat nat (λx. x + 1) ->   3   4
     / \                                        / \
    4   5                                      5   6

A natural transformation between two functors (f, mapf) (g, mapg) is a mapping of the form:

η: Π(a:Type). f a -> g a

There are additional restrictions (though it is redundant to state them in a type theory where type arguments are parametric). Intuitively, if your functors are containers, a natural transformation is only allowed to reshape your data. For example, a natural transformation from (tree, maptree) to (list, map) might reshape a tree into a list as follows:

    1
   / \
  2   3   - inorder nat -> [2,1,4,3,5]
     / \
    4   5

This example just performs an in-order traversal, but an arbitrary reshaping would be a natural transformation. Note that it is valid to lose or duplicate data, though usually one uses natural transformations that just preserve your data.

Formally, the restriction is naturality:

Π(a:Type)(b:Type)(h:a->b). η b. mapf h = mapg h . η h

I.e.: if we map h over an 'f' and then reshape it we should get the same as if we had reshaped it first and then mapped on the reshaped structure.

For example, if we reshape a tree into a list using a natural transformation, then it does not matter whether we increase all its elements by one before reshaping, or if we increase all elements of the resulting list by one.

We are now ready to state what a monad is (still in the restricted sense where we just consider the category of types and functions):

A monad consists of an endofunctor (m, map) together with two natural transformations:

return: Π(a:Type). a -> m a
join:   Π(a:Type). m (m a) -> m a

Note that return is a natural transformation from the identity functor (id Type, λa b. id (a->b)) to our endofunctor (m, map). And join is a natural transformation from (m . m, map . map) to (m, map). It is easy to see that those are indeed functors. The first one is an example of a functor that is not a kind of container (mapping is just function application on a single value.)

For implementing 'return', we should reshape a single value into an 'm'. E.g. if 'm' is 'list', the most canonical implementation of return is:

return: Π(a:Type). a -> list a
return a x = [x]

I.e. we create a singleton list. This is clearly a natural transformation. For the tree, we'd just create a single node.

For join, the most canonical implementation just preserves the order of the elements, but forgets some of the structure, eg:

[[1,2],[3,4],[5]] - join nat -> [1,2,3,4,5]

This is also what the eponymous function in std.array does for D arrays.

It is a little hard to draw in ASCII, but it is also easy to see how one could implement join for the tree example: Just join the root of every tree in your tree to the outer tree.

        1
       / \
    ---    \                            1
    |2|     \                          / \
    ---      \    - jointree nat ->   2   3
              \                          / \
         ---------                      4   5
         |   3   |
         |  / \  |
         | 4   5 |
         ---------

Of course there are now some intuitive restrictions on what 'return' and 'join' operations constitute a valid monad, namely:

neutral_left:  Π(a:Type). join a . map a (m a) (return a) = id
neutral_right: Π(a:Type). join a . return a = id

This is quite intuitive. I.e. if we reshape each element into the monad structure using return and then merge the inner structure into the outer one, we don't do anything. Analogously if we reshape the entire structure into a new monad structure and then join. Examples for the list case:

join nat (map nat (list nat) return [1,2,3]) = join nat [[1],[2],[3]] = [1,2,3]

join nat (return nat [1,2,3]) = join nat [[1,2,3]] = [1,2,3]

Furthermore we need:

associativity: Π(a:Type). join a . map (join a) = join a . join (m a)

I.e. it does not matter in which order we join. These restrictions are also called the 'monad laws'. Example for the list case:

join nat (map (join nat) [[[1,2],[3]],[[4],[5,6]]])
  = join nat [[1,2,3],[4,5,6]]
  = [1,2,3,4,5,6]

join nat (join (list nat) [[[1,2],[3]],[[4],[5,6]]])
  = join nat [[1,2],[3],[4],[5,6]]
  = [1,2,3,4,5,6]

At this point, this second restriction should feel quite intuitive as well.

Now what about bind? Bind is simply:

bind: Π(a:Type)(b:Type). m a -> (a -> m b) -> m b
bind a b x f = join b (map a (m b) f x)

i.e. bind is 'flatMap'. Example with a list:

bind nat nat [1,2,3] (λx. [3*x-2,3*x-1,3*x])
  = join nat (map a (m b) (λx. [3*x-2,3*x-1,3*x]) [1,2,3])
  = join nat [[1,2,3],[4,5,6],[7,8,9]]
  = [1,2,3,4,5,6,7,8,9]

Now on to something completely different: The state monad.

First we'll describe the endofunctor: Let 'state' be the type of some state we want the monad to thread through.

m: Type -> Type
m a = state -> (a, state)

I.e. the structure we are looking at is a function that computes a result and a new state from some starting state. This is how we can capture side-effects to the state with a pure function. 'm a' is hence the type of a computation of a value of type 'a' that modifies a store of type 'state'. Note that now our structure may be huge: It 'stores' an 'a' for every possible starting state.

In order to map, we need to destructure down to the point where we can reach a single value:

map: Π(a:Type)(b:Type). (a -> b) -> (m a -> m b)
map a b f x = λ(s:state). case x s of { (a,s') => (f a, s') }

Note how this is quite straightforward. We need to return a function, so we just create a lambda and get a state. After we run x on the state we get a tuple whose first component we can map.

'return' is even easier: Embedding a value into the state monad creates a 'computation' with a constant result.

return: Π(a:Type). a -> m a
return a x = λ(s:state). (x,s)

Before we implement join, let's look at m (m nat):

state -> (state -> (a,state), state)

I.e. if we apply such a thing to a state, we get a function taking a state and returning an (a,state) as well as a state. Since we are looking to get an (a,state), the implementation writes itself:

join: Π(a:Type). m (m a) -> m a
join a x = λ(s:state). case x s of { (x',s') => x' s' }

Intuitively, to run a computation within the monad, just apply it to the current state and update the state accordingly.

Why does this satisfy the monad laws?

neutral_left: Π(a:Type). join a . map a (m a) (return a) = id

This states that turning a value into a constant computation with that result and then running that computation is a no-op. Check.

neutral_right: Π(a:Type). join a . return a = id

This states that if we wrap a computation in another one and then run the computation inside, this is the same computation as the one we started with. Check.

associativity: Π(a:Type). join a . map (join a) = join a . join (m a)

This states that composition of computations is associative. Check. In case this last point is not so obvious, it says that if we have eg:

a=2; b=a+3; c=a+b;

Then it does not matter if we first execute a and then (b and then c) or if we first execute (a and then b) and then c. This is obvious.

In order to fully grok monads, it is also useful to look at how they can influence control flow. The monad with the most general effects on control flow is the continuation monad. But it's getting late, so if someone is interested I could explain this another time, or you might google it.
Dec 11 2013
On 12/12/2013 01:04 AM, Timon Gehr wrote:
> naturality: Π(a:Type)(b:Type)(h:a->b). η b. mapf h = mapg h . η h

This should read:

naturality: Π(a:Type)(b:Type)(h:a->b). η b . mapf a b h = mapg a b h . η a
Dec 11 2013
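Timon's list examples translate almost directly to D dynamic arrays. The following is a hypothetical sketch, not from the thread; `ret`, `fmap`, `flatten` and `bind` are invented names (chosen to avoid clashing with std.algorithm.map and std.array.join), and the unittest spot-checks the monad laws on the same inputs used above:

----
import std.algorithm : map;
import std.array : array, join;

// return: wrap a single value in a one-element array.
T[] ret(T)(T x) { return [x]; }

// map for the array functor (named fmap to avoid std.algorithm.map).
auto fmap(F, T)(F f, T[] xs) { return xs.map!(x => f(x)).array; }

// join: flatten one level of nesting (named flatten to avoid std.array.join).
T[] flatten(T)(T[][] xss) { return xss.join; }

// bind is flatMap: map, then flatten.
auto bind(F, T)(T[] xs, F f) { return flatten(fmap(f, xs)); }

unittest
{
    auto xs = [1, 2, 3];

    // neutral_left:  join . map return == id
    assert(flatten(fmap((int x) => ret(x), xs)) == xs);
    // neutral_right: join . return == id
    assert(flatten(ret(xs)) == xs);

    // associativity: join . map join == join . join
    auto xsss = [[[1, 2], [3]], [[4], [5, 6]]];
    assert(flatten(fmap((int[][] x) => flatten(x), xsss)) == flatten(flatten(xsss)));

    // bind nat nat [1,2,3] (λx. [3*x-2,3*x-1,3*x]) from the post above:
    assert(bind(xs, (int x) => [3 * x - 2, 3 * x - 1, 3 * x])
           == [1, 2, 3, 4, 5, 6, 7, 8, 9]);
}
----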
On 11 December 2013 06:35, Namespace <rswhite4 googlemail.com> wrote:
> I love Monadic null checking. Would be great if D would have it.

Yeah that's awesome. Definitely the most interesting one to me too. It's that sort of little detail that can make code better/safer due to a convenient side-stepping of coder laziness.

That said, I've always wished D had '??'. So maybe this isn't likely to be added...
Dec 10 2013
On Tuesday, 10 December 2013 at 20:35:08 UTC, Namespace wrote:
> I love Monadic null checking. Would be great if D would have it.

+1 on Monadic null checking. Do miss it from Groovy. For me at least it's the only thing on that list that I think D could really use.
Dec 10 2013
On 2013-12-10 21:35, Namespace wrote:
> I love Monadic null checking. Would be great if D would have it.

Wouldn't that be possible to implement as a library function, at least the "points?.FirstOrDefault()?.X" part? Although it won't look as pretty.

BTW, how is that handled in a language where everything is not an object and can be null?

-- 
/Jacob Carlborg
Dec 11 2013
On Tuesday, 10 December 2013 at 20:35:08 UTC, Namespace wrote:
> I love Monadic null checking. Would be great if D would have it.

Doesn't need to be a language feature - you can implement it as a library type. Here's a quick hacked together maybe monad:

----
import std.stdio;

struct Maybe(T)
{
    private T val = null;

    this(T t)
    {
        val = t;
    }

    auto opDispatch(string method, U...)(U params)
    {
        alias typeof(mixin("val." ~ method ~ "(params)")) retType;
        if (val)
        {
            mixin("return Maybe!(" ~ retType.stringof ~ ")(val." ~ method ~ "(params));");
        }
        return nothing!retType();
    }
}

Maybe!T just(T)(T t)
{
    return Maybe!T(t);
}

Maybe!T nothing(T)()
{
    return Maybe!T();
}

class Foo
{
    Bar retNull() { writeln("foo null"); return null; }
    Bar notNull() { writeln("foo not null"); return new Bar(); }
}

class Bar
{
    Foo retNull() { writeln("bar null"); return null; }
    Foo notNull() { writeln("bar not null"); return new Foo(); }
}

void main()
{
    auto maybe = just(new Foo);
    maybe.notNull().notNull().notNull().retNull().notNull();
}
----
Dec 11 2013
On Wednesday, 11 December 2013 at 13:40:22 UTC, Robert Clipsham wrote:
> On Tuesday, 10 December 2013 at 20:35:08 UTC, Namespace wrote:
>> I love Monadic null checking. Would be great if D would have it.
>
> Doesn't need to be a language feature - you can implement it as a
> library type. Here's a quick hacked together maybe monad:
> [...]

Now define map, bind, join. It quickly gets ugly.

https://bitbucket.org/qznc/d-monad/src/5b9d41c611093db74485b017a72473447f8d5595/generic.d?at=master
Dec 11 2013
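For comparison with qznc's point, a map/bind layer over the Maybe!T struct from Robert's post can be sketched in a few lines when T is a class type. This is an untested illustration, not part of either linked implementation; it assumes it lives in the same module as Maybe so that the private val field is accessible:

----
// Hypothetical additions to the Maybe!T sketch above.

// map: apply f to the wrapped value if present, and wrap the result again.
auto map(alias f, T)(Maybe!T m)
{
    alias R = typeof(f(m.val));
    return m.val !is null ? just(f(m.val)) : nothing!R();
}

// bind: f already returns a Maybe, so no extra wrapping (and no join) is needed.
auto bind(alias f, T)(Maybe!T m)
{
    alias R = typeof(f(m.val));
    return m.val !is null ? f(m.val) : R();
}

// Usage with the Foo/Bar classes from the post above:
//     auto r = just(new Foo).map!(f => f.notNull()).bind!(b => just(b.retNull()));
----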
On Tue, 10 Dec 2013 10:57:59 -0800, Suliman <evermind live.ru> wrote:
> These are only ideas for the next version, but the new features may look
> like this:
> http://damieng.com/blog/2013/12/09/probable-c-6-0-features-illustrated

Compiler-as-a-Library! *nudgenudge*

Also I was reading an interview with Anders and he talked about something VERY interesting. Immutable AST's... I wonder how they did that? Since D has immutability, rewriting the front-end as a library in D could be extremely interesting, and unlike Roslyn, it'd be native-code... I'm just saying... :-D

-- 
Adam Wilson
IRC: LightBender
Project Coordinator
The Horizon Project
http://www.thehorizonproject.org/
Dec 10 2013
On 10.12.2013 23:49, Adam Wilson wrote:
> Compiler-as-a-Library! *nudgenudge*
>
> Also I was reading an interview with Anders and he talked about something
> VERY interesting. Immutable AST's... I wonder how they did that? Since D
> has immutability, rewriting the front-end as a library in D could be
> extremely interesting, and unlike Roslyn, it'd be native-code... I'm just
> saying... :-D

Except Roslyn is also native code when you deploy with NGEN (Windows launch keynote).

--
Paulo
Dec 10 2013
On Tue, 10 Dec 2013 14:54:23 -0800, Paulo Pinto <pjmlp progtools.org> wrote:
> Except Roslyn is also native code when you deploy with NGEN (Windows
> launch keynote).

Agreed. But those are implementation details in practice. From the compiler is probably the most interesting to me personally, however, for the moment, according to Anders the default mode of Roslyn is JIT'ed IL. :-)

-- 
Adam Wilson
IRC: LightBender
Project Coordinator
The Horizon Project
http://www.thehorizonproject.org/
Dec 10 2013
On 12/10/2013 11:49 PM, Adam Wilson wrote:
> Also I was reading an interview with Anders and he talked about something
> VERY interesting. Immutable AST's... I wonder how they did that?

I assume they just don't expose any mutating operations.
Dec 10 2013
On Tuesday, 10 December 2013 at 18:58:01 UTC, Suliman wrote:
> Maybe it would be possible to get some good ideas from the next version
> of C#. These are only ideas for the next version, but the new features
> may look like this:
> http://damieng.com/blog/2013/12/09/probable-c-6-0-features-illustrated

I really like the "Inline declarations for out params" feature (number 9). The example given in the article is kind of lame, but combined with conditionals it can be a really big improvement:

if(int.TryParse(a, out int b)){
    //some code that uses `b`
}

If this works as I expect it to work, `b` will only be defined in the scope of the `if` statement. If we had to declare `b` beforehand, it would have polluted the surrounding scope, when not only we don't use it after the `if` but it doesn't have a meaningful value if `TryParse` yields `false`!

Also - it allows using type inference when declaring those out parameters, which is always a good thing.

This feature can be compared to D's declare-in-if syntax, but they are not equivalent. Consider:

if(int b = a.tryParse!int()){
    //some code that uses `b`
}

If `a` is "0" we won't enter the then-clause even though we managed to parse. This is why we don't have this `tryParse` function in D...
Dec 10 2013
On 2013-12-11 00:21, Idan Arye wrote:
> if(int b = a.tryParse!int()){
>     //some code that uses `b`
> }
>
> If `a` is "0" we won't enter the then-clause even though we managed to
> parse. This is why we don't have this `tryParse` function in D...

Not just that - how would you signal that a is an invalid value? Throw an exception? A solution would be:

if (Option!int b = a.tryParse!int) {
    // Use b in here.
}

-- 
Simen
Dec 11 2013
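Simen's Option!int idea can be prototyped as a small library type whose truth value in the if-condition reflects parse success rather than the parsed value. A hypothetical sketch, not from the thread (neither this Option nor this tryParse exist in Phobos; std.conv.to does the actual parsing):

----
import std.conv : to, ConvException;

// Minimal option type: `ok` records whether `value` is meaningful.
struct Option(T)
{
    T value;
    bool ok;

    // Lets `if (auto b = a.tryParse!int)` test parse success, not the value.
    bool opCast(T2 : bool)() const { return ok; }
}

// Hypothetical tryParse: never throws, reports failure through Option.
Option!T tryParse(T)(string s)
{
    try
        return Option!T(s.to!T, true);
    catch (ConvException)
        return Option!T.init;
}

unittest
{
    if (auto b = "0".tryParse!int)
        assert(b.value == 0);   // entered: parsing "0" succeeded even though the value is 0
    else
        assert(false);

    assert(!"abc".tryParse!int.ok);  // parse failed, no exception thrown
}
----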
On Tuesday, 10 December 2013 at 18:58:01 UTC, Suliman wrote:
> Maybe it would be possible to get some good ideas from the next version
> of C#. These are only ideas for the next version, but the new features
> may look like this:
> http://damieng.com/blog/2013/12/09/probable-c-6-0-features-illustrated

Regarding #1 (Primary Constructors), I think the feature has limited usefulness which doesn't deserve a presence in the language, and it would serve to confuse newbies who weren't aware of that feature. I prefer to consolidate existing features and make them more flexible to serve other purposes. For example, an old idea of mine is a syntax like this (assuming D had Python-style tuples):

this.(x, y) = (x, y);

The syntax would generate a tuple from members of this. Similarly, one could also generate arrays or dictionaries by writing this.[x, y].

[...] idea to implement it. My initial thoughts are that it would make grep'ing for declarations more difficult and I think it would actually make code more difficult to read. Also, due to order of evaluation, you couldn't use the variable until after the statement in which it's declared, e.g.:

x + foo(out int x)
Dec 10 2013
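As a footnote to Xinok's `this.(x, y) = (x, y);` idea: the assignment half can already be approximated with a compile-time foreach, which is roughly the kind of "consolidating existing features" he argues for. A hypothetical sketch, not from the thread (the Point class is invented for illustration):

----
import std.meta : AliasSeq;   // std.typetuple.TypeTuple in 2013-era compilers

class Point
{
    int x, y;

    this(int x, int y)
    {
        // Roughly the effect of the proposed `this.(x, y) = (x, y);`:
        // the foreach is unrolled at compile time, one assignment per field name.
        foreach (name; AliasSeq!("x", "y"))
            __traits(getMember, this, name) = mixin(name);
    }
}

unittest
{
    auto p = new Point(3, 4);
    assert(p.x == 3 && p.y == 4);
}
----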