
digitalmars.D - Found on proggit: Krug, a new experimental programming language, compiler written in D

reply Joakim <dlang joakim.fea.st> writes:
https://github.com/felixangell/krug

https://www.reddit.com/r/programming/comments/8dze54/krug_a_systems_programming_language_that_compiles/
Apr 26 2018
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Apr 26, 2018 at 08:50:27AM +0000, Joakim via Digitalmars-d wrote:
 https://github.com/felixangell/krug
 
 https://www.reddit.com/r/programming/comments/8dze54/krug_a_systems_programming_language_that_compiles/
It's still too early to judge, but from the little I've seen of it, it seems nothing more than just a rehash of C with a slightly different syntax. It wasn't clear from the docs what exactly it brings to the table that isn't already done in C, or any other language. T -- If it tastes good, it's probably bad for you.
Apr 26 2018
next sibling parent reply arturg <var.spool.mail700 gmail.com> writes:
On Thursday, 26 April 2018 at 15:07:37 UTC, H. S. Teoh wrote:
 On Thu, Apr 26, 2018 at 08:50:27AM +0000, Joakim via 
 Digitalmars-d wrote:
 https://github.com/felixangell/krug
 
 https://www.reddit.com/r/programming/comments/8dze54/krug_a_systems_programming_language_that_compiles/
It's still too early to judge, but from the little I've seen of it, it seems nothing more than just a rehash of C with a slightly different syntax. It wasn't clear from the docs what exactly it brings to the table that isn't already done in C, or any other language. T
why do people use this syntax?

if val == someVal

or

while val != someVal

it makes editing the code harder than if you use if(val == someVal).
Apr 26 2018
parent reply "Nick Sabalausky (Abscissa)" <SeeWebsiteToContactMe semitwist.com> writes:
On 04/26/2018 01:13 PM, arturg wrote:
 
 why do people use this syntax?
 
 if val == someVal
 
 or
 
 while val != someVal
 
 it makes editing the code harder than if you use if(val == someVal).
The theory goes:

A. "less syntax => easier to read".
B. "There's no technical need to require it, and everything that can be removed should be removed, thus it should be removed".

Personally, I find the lack of parens gives my brain's visual parser insufficient visual cues to work with, so I always find it harder to read. And regarding "B", I just don't believe in "less is more" - at least not as an immutable, universal truth anyway. Sometimes it's true, sometimes it's not.
Apr 26 2018
next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Apr 26, 2018 at 06:29:46PM -0400, Nick Sabalausky (Abscissa) via
Digitalmars-d wrote:
 On 04/26/2018 01:13 PM, arturg wrote:
 
 why do people use this syntax?
 
 if val == someVal
 
 or
 
 while val != someVal
 
 it makes editing the code harder than if you use if(val == someVal).
The theory goes: A. "less syntax => easier to read". B. "There's no technical need to require it, and everything that can be removed should be removed, thus it should be removed". Personally, I find the lack of parens gives my brain's visual parser insufficient visual cues to work with, so I always find it harder to read. And regarding "B", I just don't believe in "less is more" - at least not as an immutable, universal truth anyway. Sometimes it's true, sometimes it's not.
If "less is more" were universally true, we'd be programming in BF instead of D. :-O (Since, after all, it's Turing-complete, which is all anybody really needs. :-P) T -- What do you call optometrist jokes? Vitreous humor.
Apr 26 2018
parent reply "Nick Sabalausky (Abscissa)" <SeeWebsiteToContactMe semitwist.com> writes:
On 04/26/2018 06:47 PM, H. S. Teoh wrote:
 
 If "less is more" were universally true, we'd be programming in BF
 instead of D.  :-O  (Since, after all, it's Turing-complete, which is
 all anybody really needs. :-P)
 
Yea. Speaking of which, I wish more CS students were taught the inherent limitations of "Turing-complete" vs (for example) "Big-O". There's faaaar too many people being taught "Turing-complete means it can do anything" which, of course, is complete and total bunk in more (important) ways than one.

I see the same thing in other areas of CS, too, like parser theory. The formal CS material makes it sound as if LR parsing is more or less every bit as powerful as LL (and they often straight-up say so in no uncertain terms), but then they all gloss over the fact that: That's ONLY true for "detecting whether an input does or doesn't match the grammar", which is probably the single most UNIMPORTANT characteristic to consider when ACTUALLY PARSING. Outside of the worthless "does X input satisfy Y grammar: yes or no" bubble, LL-family is vastly more powerful than LR-family, but you'd never know it going by CS texts (and certainly not from those legendary-yet-overrated Dragon texts).
Apr 26 2018
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Apr 26, 2018 at 07:14:17PM -0400, Nick Sabalausky (Abscissa) via
Digitalmars-d wrote:
 On 04/26/2018 06:47 PM, H. S. Teoh wrote:
 
 If "less is more" were universally true, we'd be programming in BF
 instead of D.  :-O  (Since, after all, it's Turing-complete, which
 is all anybody really needs. :-P)
 
Yea. Speaking of which, I wish more CS students were taught the inherent limitations of "Turing-complete" vs (for example) "Big-O". There's faaaar too many people being taught "Turing-complete means it can do anything" which, of course, is complete and total bunk in more (important) ways than one.
Actually, Turing-complete *does* mean it can do anything... well, anything that can be done by a machine, that is. There are inherently unsolvable problems that no amount of Turing-completeness will help you with. The problem, however, lies in how *practical* a particular form of Turing-completeness is. You wouldn't want to write a GUI app with lambda calculus, for example, even though in theory you *could*. (It would probably take several lifetimes and an eternity of therapy afterwards, but hey, we're talking theory here. :-P) Just like you *could* write basically anything in machine language, but it's simply not practical in this day and age.

And actually, speaking of Big-O, one thing that bugs me all the time is that the constant factor in front of the Big-O term is rarely considered. When it comes to performance, the constant factor *does* matter. You can have O(log n) for your algorithm, but if the constant in front is 1e+12, then my O(n^2) algorithm with a small constant in front will still beat yours by a mile for small to medium sized use cases. The O(log n) won't mean squat unless the size of the problem you're solving is commensurate with the constant factor in front. You can sort 5 integers with an on-disk B-tree and rest assured that you have a "superior" algorithm, but my in-memory bubble sort will still beat yours any day. The size of the problem matters.

Not to mention constant-time setup costs, which are even more frequently disregarded in algorithm analysis. You may have an O(n^1.9) algorithm that's supposedly superior to my O(n^2) algorithm, but if it takes 2 days to set up the data structures required to run your O(n^1.9) algorithm, then my O(n^2) algorithm is still superior (until the problem size becomes large enough that it will take more than 2 days to compute). And if your O(n^1.9) algorithm has a setup time of 10 years, then it might as well be O(2^n) for all I care; it's pretty much useless in practice, theoretical superiority be damned.

And that's not even beginning to consider practical factors like the hardware you're running on, and why the theoretically-superior O(1) hash is in practice inferior to supposedly inferior algorithms that nevertheless run faster because they are cache-coherent, whereas hashing essentially throws caching out the window. Thankfully, there has recently been a slew of papers on cache-aware and cache-oblivious algorithms that reflect reality more closely than ivory-tower Big-O analyses that disregard it.
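To put some (entirely made-up) numbers on the constant-factor point, here is a small D sketch comparing a hypothetical O(log n) algorithm with a 1e+12 constant against an O(n^2) one with a constant of about 1; the quadratic one stays cheaper until n grows past a few million:

    import std.math : log2;
    import std.stdio : writefln;

    void main()
    {
        // Invented cost models, purely for illustration -- not measurements
        // of any real algorithm.
        double quadratic(double n)   { return n * n; }            // O(n^2), constant ~1
        double logarithmic(double n) { return 1e12 * log2(n); }   // O(log n), constant 1e+12

        foreach (n; [1e3, 1e6, 1e7, 1e8])
            writefln("n = %.0e: n^2 = %.2e, 1e12*log2(n) = %.2e -> %s is cheaper",
                     n, quadratic(n), logarithmic(n),
                     quadratic(n) < logarithmic(n) ? "O(n^2)" : "O(log n)");
    }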
 I see the same thing in other areas of CS, too, like parser theory.
 The formal CS material makes it sound as if LR parsing is more or less
 every bit as powerful as LL (and they often straight-up say so in no
 uncertain terms), but then they all gloss over the fact that: That's
 ONLY true for "detecting whether an input does or doesn't match the
 grammar", which is probably the single most UNIMPORTANT characteristic
 to consider when ACTUALLY PARSING.  Outside of the worthless "does X
 input satisfy Y grammar: yes or no" bubble, LL-family is vastly more
 powerful than LR-family, but you'd never know it going by CS texts
 (and certainly not from those legendary-yet-overrated Dragon texts).
Well, LR parsing is useful for writing compilers that tell you "congratulations, you have successfully written a program without syntax errors!". What's that? Where's the executable? Sorry, I don't know what that word means. And what? Which line did the syntax error occur in? Who knows! That's your problem, my job is just to approve or reject the program in its entirety! :-P

(And don't get me started on computability theory courses where the sole purpose is to explore the structure of the hierarchy of unsolvable problems. I mean, OK, it's kinda useful to know when something is unsolvable (i.e., when not to waste your time trying to do something that's impossible), but seriously, what is even the point of the tons of research that has gone into discerning entire *hierarchies* of unsolvability?! I recall taking a course where the entire term consisted of proving things about the length of proofs. No, we weren't actually *writing* proofs. We were proving things *about* proofs, and I might add, the majority of the "proofs" we worked with were of infinite length. I'm sure that it will prove (ha!) to be extremely important in the next iPhone release, which will also cure world hunger and solve world peace, but I'm a bit fuzzy on the details, so don't quote me on that. :-P) T -- Your inconsistency is the only consistent thing about you! -- KD
Apr 26 2018
next sibling parent sarn <sarn theartofmachinery.com> writes:
On Friday, 27 April 2018 at 00:03:34 UTC, H. S. Teoh wrote:
 Actually, Turing-complete *does* mean it can do anything... 
 well, anything that can be done by a machine, that is.
No, it means there's some *abstract mapping* between what a thing can do and what any Turing machine can do. That sounds pedantic, but it makes a difference. For example, early C++ template metaprogramming was Turing complete, but it couldn't do much of the code generation that's possible nowadays. Sure, there's a theoretical abstract function that maps the problem you're really trying to solve to a complex C++ type, and then template metaprogramming could convert that to a "solution" type that has a theoretical mapping to the solution you want, but there weren't any concrete implementations of these abstract functions you needed to get the actual code generation you wanted. Inputs and outputs are usually the killer of "but it's Turing complete" systems.
Apr 26 2018
prev sibling next sibling parent reply "Nick Sabalausky (Abscissa)" <SeeWebsiteToContactMe semitwist.com> writes:
On 04/26/2018 08:03 PM, H. S. Teoh wrote:
 On Thu, Apr 26, 2018 at 07:14:17PM -0400, Nick Sabalausky (Abscissa) via
Digitalmars-d wrote:
 On 04/26/2018 06:47 PM, H. S. Teoh wrote:
 If "less is more" were universally true, we'd be programming in BF
 instead of D.  :-O  (Since, after all, it's Turing-complete, which
 is all anybody really needs. :-P)
Yea. Speaking of which, I wish more CS students were taught the inherent limitations of "Turing-complete" vs (for example) "Big-O". There's faaaar too many people being taught "Turing-complete means it can do anything" which, of course, is complete and total bunk in more (important) ways than one.
Actually, Turing-complete *does* mean it can do anything... well, anything that can be done by a machine, that is. There are inherently unsolvable problems that no amount of Turing-completeness will help you with.
It's directly analogous to the LR vs LL matter, and LR's "Well, I can tell you this input does/doesn't satisfy the grammar, but I can't help ya with much more than that": Turing-completeness only tells you whether a given Turing-complete system (i.e., "language", machine, etc.) *can* compute XYZ if given infinite time and memory resources. That's it, that's all it says. (Granted, that *is* still a useful thing to know...)

However, Turing-completeness says nothing about whether the given language can accomplish said task *in the same time complexity* as another Turing-complete language. Or any other resource complexity, for that matter. And Turing-completeness also says nothing about what inputs/outputs a language/system has access to. Ex: VBScript is Turing-complete, but it can't do direct memory access, period, or invoke hardware interrupts (at least not without interop to another language, at which point it's not really VBScript doing the direct memory access, etc.). This means there are things that simply cannot be done in VBScript.

Another example: Alan's Turing machine (as well as the BF language) is incapable of O(1) random access. Accessing an arbitrary memory cell is an O(n) operation, where n is the difference between the target address and the current address. But many other languages/machines, like D, ARE capable of O(1) random access. Therefore, any algorithm which relies on O(1) random access (of which there are many) CANNOT be implemented with the same algorithmic complexity in BF as it could be in D. And yet, both BF and D are Turing-complete.

Therefore, we have Turing-complete languages (BF, VBScript) which are INCAPABLE of doing something another Turing-complete language (D) can do. Thus, Turing-completeness does not imply a language/machine "can do anything" another language/machine can do. And then, of course, *in addition* to all that, there's the separate matter you brought up of how convenient or masochistic it is to do ABC in language XYZ. ;)
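For a concrete illustration of the random-access point, here's a small D sketch (the Tape struct is just an invented stand-in for a BF/Turing-machine tape, not real BF semantics): both reads return the same value, but one costs a single index operation while the other costs roughly a million head moves.

    // Contrasts D's O(1) array indexing with a tape that can only move
    // its head one cell at a time, as on a Turing machine or in BF.
    struct Tape
    {
        int[] cells;
        size_t head;

        // Reaching an arbitrary cell costs O(|target - head|) steps.
        int readAt(size_t target)
        {
            while (head < target) ++head; // step right
            while (head > target) --head; // step left
            return cells[head];
        }
    }

    void main()
    {
        auto data = new int[](1_000_000);
        data[999_999] = 42;

        auto direct = data[999_999];         // O(1) random access in D

        auto tape = Tape(data, 0);
        auto stepped = tape.readAt(999_999); // ~a million single-cell moves

        assert(direct == stepped);
    }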
 And actually, speaking of Big-O, one thing that bugs me all the time is
 that the constant factor in front of the Big-O term is rarely
 considered.
 [...]
 And that's not even beginning to consider practical factors like the
 hardware you're running on, and why the theoretically-superior O(1) hash
 is in practice inferior to supposedly inferior algorithms that
 nevertheless run faster because they are cache-coherent, whereas hashing
 essentially throws caching out the window.  Thankfully, recently there's
 been a slew of papers on cache-aware and cache-oblivious algorithms that
 are reflect reality closer than ivory-tower Big-O analyses that
 disregard reality.
 
Certainly good points.
 Well, LR parsing is useful for writing compilers that tell you
 "congratulations, you have successfully written a program without syntax
 errors!".  What's that?  Where's the executable?  Sorry, I don't know
 what that word means.  And what?  Which line did the syntax error occur
 in?  Who knows!  That's your problem, my job is just to approve or
 reject the program in its entirety! :-P
Exactly! :) (Ugh, I would've saved myself sooooo much bother if ANY of that had been even remotely clear from any of the parsing books I had read. But nope! One of the items on my bucket list is to write a "CS Theory for Programmers" book that actually fills in all this stuff, along with going easy on the math-theory syntax that you can't realistically expect programmers to be fluent in. The average CS book is the equivalent of marketing a "How to speak German" book in the US...but writing it in French. Sure, *some* Americans will be able to read it, but...)
 (And don't get me started on computability theory courses where the sole
 purpose is to explore the structure of the hierarchy of unsolvable
 problems.  I mean, OK, it's kinda useful to know when something is
 unsolvable (i.e., when not to waste your time trying to do something
 that's impossible), but seriously, what is even the point of the tons of
 research that has gone into discerning entire *hierarchies* of
 unsolvability?!  I recall taking a course where the entire term
 consisted of proving things about the length of proofs. No, we weren't
 actually *writing* proofs. We were proving things *about* proofs, and I
 might add, the majority of the "proofs" we worked with were of infinite
 length.  I'm sure that it will prove (ha!) to be extremely important in
 the next iPhone release, which will also cure world hunger and solve
 world peace, but I'm a bit fuzzy on the details, so don't quote me on
 that.  :-P)
Well, I think a big part of it is the general acceptance that we can never really be certain what knowledge will/won't be useful. So science is all about learning whatever we can about reality on the presumption that the pursuit of scientific knowledge is a virtue in and of itself. Maybe it's all the Star Trek I've watched, but I can get onboard with that[1]. The part where things really start to bug me, though, is when the scientific knowledge *does* cross outside the realm of pure science, but in the process gets misrepresented as implying something it really doesn't imply, or critically important details get ignored or under-emphasised[2].

[1] Although I would certainly prefer to see preferential focus placed on scientific inquiries that ARE already known to have practical, real-world impact (like: "Can this LR parser (which we can theoretically create to validate the same grammar as some LL parser) be constructed to emit the same parse-tree as the LL parser? (Hint: Probably not) If so, what costs are involved?")

[2] It also bugs me how they say "abstract" when they don't actually mean "abstract" at all, but really mean "summary" or "overview", but...well...that's considerably less important ;)
Apr 26 2018
parent reply sarn <sarn theartofmachinery.com> writes:
On Friday, 27 April 2018 at 04:06:52 UTC, Nick Sabalausky 
(Abscissa) wrote:
 One of the items on my bucket list is to write a "CS Theory for 
 Programmers" book that actually fills in all this stuff, along 
 with going easy on the math-theory syntax that you can't 
 realistically expect programmers to be fluent in. The average 
 CS book is the equivalent of marketing a "How to speak German 
 book" in the US...but writing it in French. Sure, *some* 
 Americans will be able to read it, but...)
The first Haskell tutorial I read was written by someone who thought it would be cute to do mathsy typesetting of all the syntax. E.g., -> became some right arrow symbol, meaning that nothing the book taught could be put into an actual Haskell compiler and executed. The book never explained the real syntax. Thankfully there are more useful tutorials out there (like this one: http://learnyouahaskell.com/).
Apr 26 2018
parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Apr 27, 2018 at 06:22:55AM +0000, sarn via Digitalmars-d wrote:
[...]
 The first Haskell tutorial I read was written by someone who thought
 it would be cute to do mathsy typesetting of all the syntax.  E.g., ->
 became some right arrow symbol, meaning that nothing the book taught
 could be put into an actual Haskell compiler and executed.  The book
 never explained the real syntax.
Ouch. That's cruel! T -- Never wrestle a pig. You both get covered in mud, and the pig likes it.
Apr 27 2018
prev sibling parent reply IntegratedDimensions <IntegratedDimensions gmail.com> writes:
On Friday, 27 April 2018 at 00:03:34 UTC, H. S. Teoh wrote:
 On Thu, Apr 26, 2018 at 07:14:17PM -0400, Nick Sabalausky 
 (Abscissa) via Digitalmars-d wrote:
 On 04/26/2018 06:47 PM, H. S. Teoh wrote:
 
 If "less is more" were universally true, we'd be programming 
 in BF instead of D.  :-O  (Since, after all, it's 
 Turing-complete, which is all anybody really needs. :-P)
 
Yea. Speaking of which, I wish more CS students were taught the inherent limitations of "Turing-complete" vs (for example) "Big-O". There's faaaar too many people being taught "Turing-complete means it can do anything" which, of course, is complete and total bunk in more (important) ways than one.
Actually, Turing-complete *does* mean it can do anything... well, anything that can be done by a machine, that is. There are inherently unsolvable problems that no amount of Turing-completeness will help you with. The problem, however, lies in how *practical* a particular form of Turing-completeness is. You wouldn't want to write a GUI app with lambda calculus, for example, even though in theory you *could*. (It would probably take several lifetimes and an eternity of therapy afterwards, but hey, we're talking theory here. :-P) Just like you *could* write basically anything in machine language, but it's simply not practical in this day and age. And actually, speaking of Big-O, one thing that bugs me all the time is that the constant factor in front of the Big-O term is rarely considered. When it comes to performance, the constant factor *does* matter. You can have O(log n) for your algorithm, but if the constant in front is 1e+12, then my O(n^2) algorithm with a small constant in front will still beat yours by a mile for small to medium sized use cases. The O(log n) won't mean squat unless the size of the problem you're solving is commensurate with the constant factor in the front. You can sort 5 integers with an on-disk B-tree and rest assured that you have a "superior" algorithm, but my in-memory bubble sort will still beat yours any day. The size of the problem matters. Not to mention, constant-time setup costs that's even more frequently disregarded when it comes to algorithm analysis. You may have a O(n^1.9) algorithm that's supposedly superior to my O(n^2) algorithm, but if it takes 2 days to set up the data structures required to run your O(n^1.9) algorithm, then my O(n^2) algorithm is still superior (until the problem size becomes large enough it will take more than 2 days to compute). And if your O(n^1.9) algorithm has a setup time of 10 years, then it might as well be O(2^n) for all I care, it's pretty much useless in practice, theoretical superiority be damned. And that's not even beginning to consider practical factors like the hardware you're running on, and why the theoretically-superior O(1) hash is in practice inferior to supposedly inferior algorithms that nevertheless run faster because they are cache-coherent, whereas hashing essentially throws caching out the window. Thankfully, recently there's been a slew of papers on cache-aware and cache-oblivious algorithms that are reflect reality closer than ivory-tower Big-O analyses that disregard reality.
 I see the same thing in other areas of CS, too, like parser 
 theory. The formal CS material makes it sound as if LR parsing 
 is more or less every bit as powerful as LL (and they often 
 straight-up say so in no uncertain terms), but then they all 
 gloss over the fact that: That's ONLY true for "detecting 
 whether an input does or doesn't match the grammar", which is 
 probably the single most UNIMPORTANT characteristic to 
 consider when ACTUALLY PARSING.  Outside of the worthless 
 "does X input satisfy Y grammar: yes or no" bubble, LL-family 
 is vastly more powerful than LR-family, but you'd never know 
 it going by CS texts (and certainly not from those 
 legendary-yet-overrated Dragon texts).
Well, LR parsing is useful for writing compilers that tell you "congratulations, you have successfully written a program without syntax errors!". What's that? Where's the executable? Sorry, I don't know what that word means. And what? Which line did the syntax error occur in? Who knows! That's your problem, my job is just to approve or reject the program in its entirety! :-P (And don't get me started on computability theory courses where the sole purpose is to explore the structure of the hierarchy of unsolvable problems. I mean, OK, it's kinda useful to know when something is unsolvable (i.e., when not to waste your time trying to do something that's impossible), but seriously, what is even the point of the tons of research that has gone into discerning entire *hierarchies* of unsolvability?! I recall taking a course where the entire term consisted of proving things about the length of proofs. No, we weren't actually *writing* proofs. We were proving things *about* proofs, and I might add, the majority of the "proofs" we worked with were of infinite length. I'm sure that it will prove (ha!) to be extremely important in the next iPhone release, which will also cure world hunger and solve world peace, but I'm a bit fuzzy on the details, so don't quote me on that. :-P) T
The point of O is the most dominant rate of growth (asymptotic behavior). In mathematics, one only cares about n as it approaches infinity, and so any constant term will eventually be dwarfed. So technically, in the theory, it is "incorrect" to have any extraneous terms.

In CS, it is used to approximate the time for large n to be able to compare different algorithms in the "long run". Since computers cannot process infinite n, n will be finite and generally relatively small (e.g., less than 10^100, which is quite small compared to infinity). So the failure you are pointing out is really because of the application. In some cases the constant term is applicable and in some cases it isn't. Since it depends on n and we cannot use the simplification that n is infinity, it matters what n is. This is why it is also important to know which algorithms do better for small n, because if n is small during program use one might be using the wrong algorithm.

But once one goes down this road one then has to count not ideal cycles but real cycles and include all the other factors involved. Big O was not meant to give real-world estimates because it is a mathematical domain. It may or may not work well depending on how poorly it is used, sorta like statistics. Generally though, it is such a great simplification tool that it works for many processes that are well behaved.

Ideally it would be better to count the exact number of cycles used by an algorithm and have them normalized to some standard cycle that could be compared across different architectures. Machines can do the accounting easily. Even then, much anomalous behavior will generally be seen, but it would be more accurate, although possibly not much more informative, than Big O.
May 01 2018
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Tue, May 01, 2018 at 05:13:13PM +0000, IntegratedDimensions via
Digitalmars-d wrote:
[...]
 The point of O is for the most dominant rate of growth(asymptotic
 behavior).  In mathematics, one only cares about n as it approaches
 infinity and so any constant term will eventually be dwarfed. So
 technically, in the theory, it is "incorrect" to have any extraneous
 terms.
Well, yes. Of course the whole idea behind big O is asymptotic behaviour, i.e., behaviour as n becomes arbitrarily large. Unfortunately, as you point out below, this is not an accurate depiction of the real world:
 In CS, it is used to approximate the time for large n to be able to
 compare different algorithms in the "long run". Since computers cannot
 process infinite n, n will be finite and generally relatively
 small(e.g., less than 10^100, which is quite small compared to
 infinity).
Exactly, and therefore the model does not quite match reality, and so when the scale of n matters in reality, then the conclusions you draw from big O will be inaccurate at best, outright wrong at worst.
 So the failure you are pointing out is really because the application.
 In some cases the constant term may be applicable and same cases it
 isn't.  Since it depends on n and we cannot use the simplification
 that n is infinity, it matters what n is. This is why it is also
 important to know which algorithms do better for small n because if n
 is small during program use one might be using the wrong algorithm.
Exactly. Yet this important point is often overlooked / neglected to be mentioned when big O is taught to CS students. I'm not saying big O is useless -- it has its uses, but people need to be aware of its limitations rather than blindly assuming (or worse, being told by an authority) that it's a magical pill that will solve everything.
 But once one goes down this road one then has to not count ideal
 cycles but real cycles and include all the other factors involved. Big
 O was not mean to give real world estimates because it is a
 mathematical domain. It may or may not work well depending on how
 poorly it is used, sorta like statistics.  Generally though, they are
 such a great simplification tool that it works for many processes that
 are well behaved.
I agree that big O is a wonderful simplification in many cases. But this comes with caveats, and my complaint was that said caveats are more often than not overlooked or neglected.

To use a concrete example: traditional big O analysis says a hashtable is fastest, being O(1), especially when the hash function minimizes collisions. Minimal collisions means short linear search chains when multiple entries fall into the same bucket, i.e., we stay close to O(1) instead of the worst case of O(n) (or O(log n), depending on how you implement your buckets). In certain situations, however, it may actually be advantageous to *increase* collisions with a locality-sensitive hash function, because it increases the likelihood that the next few lookups may already be in cache and therefore don't incur the cost of yet another cache miss and RAM/disk roundtrip. The buckets are bigger, and according to big O analysis "slower", because each lookup incurs the cost of O(n) or O(log n) search within a bucket. However, in practice it's faster, because it's expensive to load a bucket into the cache (or incur disk I/O to read a bucket from disk). If lookups are clustered around similar keys and end up in the same few buckets, then once the buckets are cached any subsequent lookups become very cheap. Large buckets actually work better because instead of having to incur the cost of loading k small buckets, you just pay once for one large bucket that contains many entries that you will soon access in the near future. (And larger buckets are more likely to contain entries you will need soon.)

Also, doing an O(n) linear search within small-ish buckets may actually be faster than fancy O(log n) binary trees, due to the CPU's cache predictor. A linear scan is easy for the cache predictor to recognize and load in a series of consecutive cache lines, thus amortizing away the RAM roundtrip costs, whereas with a fancy binary tree the subsequent memory access is hard to predict (or may have no predictable pattern), so the predictor can't help you, and you have to pay for the RAM roundtrips.

When n gets large, of course, the binary tree will overtake the performance of the linear search. But the way big O is taught in CS courses gives the wrong impression that O(n) linear search is always inferior and therefore bad and to be avoided. Students need to be told that this is not always the case, and that there are times when O(n) is actually better than O(log n).
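As a rough sketch of the "linear scan within a small bucket" point, here's a small D benchmark (the bucket size and iteration counts are purely illustrative, and results will vary by machine); for an array this small, the cache-friendly O(n) scan tends to hold its own against the O(log n) binary search:

    import std.algorithm.searching : canFind;
    import std.datetime.stopwatch : benchmark;
    import std.range : assumeSorted;
    import std.stdio : writeln;

    void main()
    {
        enum N = 32;                  // a "small bucket"
        int[] bucket;
        foreach (i; 0 .. N)
            bucket ~= i * 2;          // sorted, even keys
        auto sorted = bucket.assumeSorted;

        size_t hits;

        void linearScan()
        {
            foreach (key; 0 .. N * 2)
                if (bucket.canFind(key)) ++hits;   // O(n), but one contiguous pass
        }

        void binarySearch()
        {
            foreach (key; 0 .. N * 2)
                if (sorted.contains(key)) ++hits;  // O(log n), but branchy access
        }

        auto times = benchmark!(linearScan, binarySearch)(10_000);
        writeln("linear scan:   ", times[0]);
        writeln("binary search: ", times[1]);
    }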
 Ideally it would be better to count the exact number of cycles used by
 an algorithm and have them normalized to some standard cycle that
 could be compared across different architectures. Machines can do the
 accounting easily. Even then, many anomalous behavior will generally
 be seen but it would be more accurate, although possibly not much more
 informative, than Big O.
[...]

Sorry, counting cycles does not solve the problem. That may have worked back in the days of the 8086, but CPU architectures have moved on a long way since then. These days, cache behaviour is arguably far more important than minimizing cycles, because your cycle-minimized algorithm will do you no good if every other cycle the instruction pipeline has to be stalled because of branch hazards, or because of a cache miss that entails a RAM roundtrip. Furthermore, due to the large variety of cache structures out there, it's unrealistic to expect a single generalized cycle-counting model to work for all CPU architectures. You'd be drowning in nitty-gritty details instead of getting useful results from your analysis. CPU instruction pipelines, out-of-order execution, speculative execution, etc., will complicate the analysis so much that a unified model that works across all CPUs would pretty much be impossible.

A more promising approach that has been pursued in recent years is the cache-oblivious model, where the algorithm is designed to take advantage of a cache hierarchy, but *without* depending on any specific one. I.e., it is assumed that linear access of N elements sequentially across blocks of size B will be faster than N random accesses, but the algorithm is designed in such a way that it does not depend on specific values of B and N, and it does not need to be tuned to any specific value of B and N. This model has shown a lot of promise in algorithms that may in theory have "poor" big O behaviour, but in practice operate measurably faster because they take advantage of the modern CPU cache hierarchy.

As an added bonus, the cache hierarchy analysis naturally extends to include secondary storage like disk I/O, and the beauty of cache-oblivious algorithms is that they can automatically take advantage of this without needing massive redesign, unlike the earlier situation where you have to know beforehand whether your code is going to be CPU-bound or disk-bound, and you may have to use completely different algorithms in either case. Or you may have to hand-tune the parameters of your algorithm (such as optimizing for specific disk block sizes). A cache-oblivious algorithm can Just Work(tm) without further ado, and without sudden performance hits. T -- One reason that few people are aware there are programs running the internet is that they never crash in any significant way: the free software underlying the internet is reliable to the point of invisibility. -- Glyn Moody, from the article "Giving it all away"
May 01 2018
parent jmh530 <john.michael.hall gmail.com> writes:
On Tuesday, 1 May 2018 at 18:46:20 UTC, H. S. Teoh wrote:
 Well, yes.  Of course the whole idea behind big O is asymptotic 
 behaviour, i.e., behaviour as n becomes arbitrarily large. 
 Unfortunately, as you point out below, this is not an accurate 
 depiction of the real world:

 [snip]
The example I like to use is parallel computing. Sure, throwing 8 cores at a problem might be the most efficient with a huge amount of data, but with a small array there's so much overhead that it's way slower than a single processor algorithm.
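For instance, a small D sketch with std.parallelism (the array sizes are arbitrary, and the exact crossover point depends on the machine and core count): the parallel reduction pays task-setup overhead that swamps the tiny workload, but is amortised over the large one.

    import std.datetime.stopwatch : benchmark;
    import std.parallelism : taskPool;
    import std.range : iota;
    import std.stdio : writeln;

    void main()
    {
        auto small = iota(0, 1_000);
        auto large = iota(0, 50_000_000);

        long sink; // keep results observable so the loops aren't optimized away

        void serialSmall()   { long s = 0; foreach (x; small) s += x; sink = s; }
        void parallelSmall() { sink = taskPool.reduce!"a + b"(0L, small); }
        void serialLarge()   { long s = 0; foreach (x; large) s += x; sink = s; }
        void parallelLarge() { sink = taskPool.reduce!"a + b"(0L, large); }

        auto times = benchmark!(serialSmall, parallelSmall, serialLarge, parallelLarge)(3);
        writeln("serial,   small array: ", times[0]);
        writeln("parallel, small array: ", times[1]);
        writeln("serial,   large array: ", times[2]);
        writeln("parallel, large array: ", times[3]);
    }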
May 01 2018
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/26/2018 3:29 PM, Nick Sabalausky (Abscissa) wrote:
 The theory goes:
 
 A. "less syntax => easier to read".
 B. "There's no technical need to require it, and everything that can be
removed 
 should be removed, thus it should be removed".
 
 Personally, I find the lack of parens gives my brain's visual parser 
 insufficient visual cues to work with, so I always find it harder to read. And 
 regarding "B", I just don't believe in "less is more" - at least not as an 
 immutable, universal truth anyway. Sometimes it's true, sometimes it's not.
Haskell seems to take the "minimal syntax" idea as far as possible (well, not as far as APL, which has a reputation for being write-only). Personally, I find it makes Haskell much harder to read than necessary.

Having redundancy in the syntax makes for better, more accurate error diagnostics. In the worst case, for a language with zero redundancy, every sequence of characters is a valid program. Hence, no errors can be diagnosed!

Besides, redundancy can make a program easier to read (English has a lot of it, and is hence easy to read). And I don't know about others, but I read code an awful lot more than I write it.

I posit that redundancy is something programmers learn to appreciate as they gain experience, and that eliminating redundancy is something new programmers think is a new idea :-)

P.S. Yes, excessive redundancy and verbosity can be bad. See COBOL.
Apr 26 2018
next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Apr 26, 2018 at 04:26:30PM -0700, Walter Bright via Digitalmars-d wrote:
[...]
 Having redundancy in the syntax makes for better, more accurate error
 diagnostics. In the worst case, for a language with zero redundancy,
 every sequence of characters is a valid program. Hence, no errors can
 be diagnosed!
 
 Besides, redundancy can make a program easier to read (English has a
 lot of it, and is hence easy to read).
People often complain about how redundant natural languages are... not realizing that it actually provides, in addition to being easier to read, some degree of built-in error-correction and resilience in a lossy medium. Think of reading a text that has occasional typos or omitted words. Most of the time, you can still figure out what it's saying in spite of the "syntax errors". Or talking over the phone with lots of static noise. You can still make out what the other person is saying, even if some words are garbled. Computer languages aren't quite at that level of self-correctiveness and resilience yet, but I'd like to think we're on the way there. Redundancy is not always a bad thing.
 And I don't know about others, but I read code an awful lot more than
 I write it.
Yes, something language designers often fail to account for. Well, many programmers also tend to write without the awareness that 5 months later, someone (i.e., themselves :-D) will be staring at that same piece of code and going "what the heck was the author thinking when he wrote this trash?!".
 I posit that redundancy is something programmers learn to appreciate
 as they gain experience, and that eliminating redundancy is something
 new programmers think is a new idea :-)
 
 P.S. Yes, excessive redundancy and verbosity can be bad. See COBOL.
And Java. ;-) T -- INTEL = Only half of "intelligence".
Apr 26 2018
parent Chris <wendlec tcd.ie> writes:
On Friday, 27 April 2018 at 00:18:05 UTC, H. S. Teoh wrote:
 On Thu, Apr 26, 2018 at 04:26:30PM -0700, Walter Bright via 
 Digitalmars-d wrote: [...]
 [...]
People often complain about how redundant natural languages are... not realizing that it actually provides, in addition to being easier to read, some degree of built-in error-correction and resilience in a lossy medium. Think of reading a text that has occasional typos or omitted words. Most of the time, you can still figure out what it's saying in spite of the "syntax errors". Or talking over the phone with lots of static noise. You can still make out what the other person is saying, even if some words are garbled. Computer languages aren't quite at that level of self-correctiveness and resilience yet, but I'd like to think we're on the way there. Redundancy is not always a bad thing.
 [...]
Yes, something language designers often fail to account for. Well, many programmers also tend to write without the awareness that 5 months later, someone (i.e., themselves :-D) will be staring at that same piece of code and going "what the heck was the author thinking when he wrote this trash?!".
 [...]
And Java. ;-) T
Yep. Good point. German is more redundant than English (case endings) and it's easier to re-construct garbled text. But natural languages tend to remove redundancy (e.g. case endings when the relation is clear) - up to a certain point! But redundancy is needed and good. Maybe natural languages show us how far you can go before you get in trouble.
Apr 27 2018
prev sibling next sibling parent reply Meta <jared771 gmail.com> writes:
On Thursday, 26 April 2018 at 23:26:30 UTC, Walter Bright wrote:
 Besides, redundancy can make a program easier to read (English 
 has a lot of it, and is hence easy to read).
I completely agree. I always make an effort to make my sentences as redundant as possible such that they can be easily read and understood by anyone: Buffalo buffalo buffalo buffalo buffalo buffalo buffalo buffalo. Unfortunately, I think the Chinese have us beat; they can construct redundant sentences far beyond anything we could ever imagine, and thus I predict that within 50 years Chinese will be the new international language of science, commerce, and politics: Shíshì shīshì Shī Shì, shì shī, shì shí shí shī. Shì shíshí shì shì shì shī. Shí shí, shì shí shī shì shì. Shì shí, shì Shī Shì shì shì. Shì shì shì shí shī, shì shǐ shì, shǐ shì shí shī shìshì. Shì shí shì shí shī shī, shì shíshì. Shíshì shī, Shì shǐ shì shì shíshì. Shíshì shì, Shì shǐ shì shí shì shí shī. Shí shí, shǐ shí shì shí shī shī, shí shí shí shī shī. Shì shì shì shì. 石室诗士施氏,嗜狮,誓食十狮。氏时时适市视狮。十时 适十狮适市。 是时,适施氏适市。氏视是十狮,恃矢势,使是十狮逝世。氏拾是十狮尸,适石室。石室湿,氏使侍拭石室。石室拭,氏始试食是十狮尸。食时,始识是十狮,实十石狮尸。试释是事。
Apr 27 2018
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, Apr 28, 2018 at 04:47:54AM +0000, Meta via Digitalmars-d wrote:
[...]
 Unfortunately, I think the Chinese have us beat; they can construct
 redundant sentences far beyond anything we could ever imagine, and
 thus I predict that within 50 years Chinese will be the new
 international language of science, commerce, and politics:
 
 Shíshì shīshì Shī Shì, shì shī, shì shí shí shī. Shì shíshí
shì shì shì shī.
 Shí shí, shì
 shí shī shì shì. Shì shí, shì Shī Shì shì shì. Shì shì shì shí
shī, shì shǐ
 shì, shǐ shì shí shī shìshì. Shì shí shì shí shī shī, shì
shíshì. Shíshì
 shī, Shì shǐ shì shì shíshì. Shíshì shì, Shì shǐ shì shí shì
shí shī. Shí
 shí, shǐ shí shì shí shī shī, shí shí shí shī shī. Shì shì shì
shì.
 
 石室诗士施氏,嗜狮,誓食十狮。氏时时适市视狮。十时,适十狮适市。
 是时,适施氏适市。氏视是十狮,恃矢势,使是十狮逝世。氏拾是十狮尸,适石室。石室湿,氏使侍拭石室。石室拭,氏始试食是十狮尸。食时,始识是十狮,实十石狮尸。试释是事。
As a native Chinese speaker, I find contortions of this kind mildly amusing but mostly ridiculous, because this is absolutely NOT how the language works. It is carrying an ancient scribal ivory-tower ideal of one syllable per word to ludicrous extremes, an ideal that's mostly unattained, because most so-called monosyllabic "words" in the language are in fact multi-consonantal clusters retroactively analysed as monosyllables. Isolated syllables taken out of their context have no real meaning of their own (except perhaps in writing, which again is an invention of the scribes that doesn't fully reflect the spoken reality [*]). Actually pronouncing the atrocity above might as well be speaking reverse-encrypted Klingon as far as comprehensibility by a native speaker is concerned. [*] The fact that the written ideal doesn't really line up with the vernacular is evidenced by the commonplace exchange where native speakers have to explain to each other which written word they mean (especially where names are involved), and this is done by -- you guessed it -- quoting the multiconsonantal cluster from which said monosyllabic "word" was derived, thus reconstructing the context required to actually make sense of what would otherwise be an ambiguous, unintelligible monosyllable. Foreigners, of course, have little idea about this, and gullibly believe the fantasy that the language actually works on monosyllables. It does not. T -- Mediocrity has been pushed to extremes.
Apr 30 2018
parent reply Meta <jared771 gmail.com> writes:
On Monday, 30 April 2018 at 16:20:38 UTC, H. S. Teoh wrote:
 As a native Chinese speaker, I find contortions of this kind 
 mildly amusing but mostly ridiculous, because this is 
 absolutely NOT how the language works.  It is carrying an 
 ancient scribal ivory-tower ideal of one syllable per word to 
 ludicrous extremes, an ideal that's mostly unattained, because 
 most so-called monosyllabic "words" in the language are in fact 
 multi-consonantal clusters retroactively analysed as 
 monosyllables. Isolated syllables taken out of their context 
 have no real meaning of their own (except perhaps in writing, 
 which again is an invention of the scribes that doesn't fully 
 reflect the spoken reality [*]). Actually pronouncing the 
 atrocity above might as well be speaking reverse-encrypted 
 Klingon as far as comprehensibility by a native speaker is 
 concerned.
Oh yes, I'm well aware that there's a lot of semantic contortion required here, and that as spoken, this sounds like complete gibberish. I don't know where the monosyllable meme came from, either; it's readily apparent from learning even basic vocabulary. 今天, 马上, 故事, hell, 中国 is a compound word.
Apr 30 2018
parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Mon, Apr 30, 2018 at 06:10:35PM +0000, Meta via Digitalmars-d wrote:
[...]
 Oh yes, I'm well aware that there's a lot of semantic contortion
 required here, and that as spoken, this sounds like complete
 gibberish. I don't know where the monosyllable meme came from, either;
 it's readily apparently from learning even basic vocabulary. 今天,
 马上, 故事, hell, 中国 is a compound word.
AFAICT, the monosyllable thing is something that came from the ancient scribes who were (at least partly) responsible for the writing system. It's an ideal that gives a nice 1-to-1 mapping between glyphs and "words", providing a philosophically elegant way to rationalize everything into monosyllabic units. Sort of like representing everything with 1's and 0's. :-D But, like all ideals, it's also somewhat detached from reality, despite having permeated Chinese thinking such that people have come to equate "word" with "syllable", even though many such "words" clearly only exist as compounds that cannot be separated without destroying their meaning.

There's also another, currently not-fully-understood factor, and that is that pronunciation has changed over time, and there is some evidence that certain features of the written language may reflect inserted consonants that formed consonant clusters in the ancient language, sounds that have since become silent. It makes one wonder if some of these "monosyllables" may have been, in some distant past, actually polysyllabic words in their own right, and the monosyllable thing may have been merely a side-effect of the shift in pronunciation over many centuries.

(This pronunciation shift is evidenced by the "alternate readings" of certain glyphs that's adopted when reading ancient poetry, clearly a retroactive effort to compensate for the divergence in rhyme caused by sound change since the time said poetry was written. Who knows what other compensatory measures have been taken over time that may have, consciously or not, contributed to the monosyllabic illusion.) T -- Debian GNU/Linux: Cray on your desktop.
Apr 30 2018
prev sibling parent reply TheDalaiLama <DalaiLama noplace.com> writes:
On Thursday, 26 April 2018 at 23:26:30 UTC, Walter Bright wrote:
 I posit that redundancy is something programmers learn to 
 appreciate as they gain experience, and that eliminating 
 redundancy is something new programmers think is a new idea :-)
Not just 'new programmers', but even old programmers! (i.e. think 'Go'..)

How did they get 'Go'... so wrong? Such 'smart' 'experienced' people .. came up with such an awful language... I don't get it.

There's no substitution for taste...some have it.. some don't.

'Experience' is irrelevant.
May 01 2018
next sibling parent reply "Nick Sabalausky (Abscissa)" <SeeWebsiteToContactMe semitwist.com> writes:
On 05/01/2018 10:51 PM, TheDalaiLama wrote:
 
 There's no substitution for taste...some have it.. some don't.
 
 'Experience' is irrelevant.
 
Honestly, there's a lot of truth to this. People can certainly learn, of course (well, at least some people can), but experience definitely does not imply learning has actually occurred. Contrary to popular belief (especially common HR belief), experience is NOT a consistent, commoditized thing. There's such a thing as quality of experience, and that has far more impact than quantity.
May 01 2018
parent reply Russel Winder <russel winder.org.uk> writes:
On Tue, 2018-05-01 at 23:54 -0400, Nick Sabalausky (Abscissa) via Digitalmars-d wrote:
 On 05/01/2018 10:51 PM, TheDalaiLama wrote:

 There's no substitution for taste...some have it.. some don't.

 'Experience' is irrelevant.

 Honestly, there's a lot of truth to this. People can certainly learn, of course (well, at least some people can), but experience definitely does not imply learning has actually occurred.

 Contrary to popular belief (especially common HR belief), experience is NOT a consistent, commoditized thing. There's such a thing as quality of experience, and that has far more impact than quantity.
Agreed that hours claimed as experience is not a measure of knowledge. But conversely experience with learning is critical to future development. Thus statements such as "experience is irrelevant" are dangerous statements in most contexts.

-- 
Russel.
===========================================
Dr Russel Winder      t: +44 20 7585 2200
41 Buckmaster Road    m: +44 7770 465 077
London SW11 1EN, UK   w: www.russel.org.uk
May 01 2018
parent reply TheDalaiLama <TheDalaiLama nowhere.com> writes:
On Wednesday, 2 May 2018 at 06:04:59 UTC, Russel Winder wrote:
 Thus
 statements such as "experience is irrelevant" are dangerous 
 statements in most
 contexts.
sorry. but 'experience' IS irrelevant. What is relevant, is what your experience demonstrates (i.e. what do you have to show for your experience). Plenty of people are experienced idiots.
May 02 2018
parent Russel Winder <russel winder.org.uk> writes:
This is turning into a debate on the semantics of the word experience,
so let's leave it that each person has their own belief system.

On Wed, 2018-05-02 at 09:22 +0000, TheDalaiLama via Digitalmars-d
wrote:
 On Wednesday, 2 May 2018 at 06:04:59 UTC, Russel Winder wrote:
 Thus
 statements such as "experience is irrelevant" are dangerous=20
 statements in most
 contexts.
 sorry. but 'experience' IS irrelevant.

 What is relevant, is what your experience demonstrates (i.e. what do you have to show for your experience).

 Plenty of people are experienced idiots.

-- 
Russel.
===========================================
Dr Russel Winder      t: +44 20 7585 2200
41 Buckmaster Road    m: +44 7770 465 077
London SW11 1EN, UK   w: www.russel.org.uk
May 02 2018
prev sibling parent Russel Winder <russel winder.org.uk> writes:
On Wed, 2018-05-02 at 02:51 +0000, TheDalaiLama via Digitalmars-d wrote:

[…]
 How did they get 'Go'... so wrong?
They didn't. A lot of people out there are using Go very effectively and thoroughly enjoying it. True it is a language by Google for Google, but it has massive traction outside Google. Go got a lot right.

 Such 'smart' 'experienced' people .. came up with such an awful language... I don't get it.
Just because you think it is awful doesn't make it awful. Indeed Go got a lot right exactly because smart experienced people were involved in its development. People who have spent 40+ years designing languages generally do language design quite well.

It works very well for those people who use it as it is meant to be used.

 There's no substitution for taste...some have it.. some don't.
Taste is a personal thing, not an objective measure. That you dislike Go so much is fine, that is your opinion. That does not make it objective fact. Others really like Go, that is their opinion, and equally valid. It also is not objective fact. What can be shown consistently though is that the process (preferably lightweight) and channel model avoids much if not all the unpleasantness of shared memory multithreaded programming. Go has this built in and it makes it a very appealing language. Rust has channels but only heavyweight processes, at least at present in release form. D has heavyweight processes with message passing, but I couldn't seem to make it work for Me TV, I just got no message passing and lots of segfaults.

 'Experience' is irrelevant.
Now that is just a rubbish statement. Experience can be crucial. cf. Roman architecture. Even people who claim to have no experience are in fact usually using knowledge gained by experience of others.

-- 
Russel.
===========================================
Dr Russel Winder      t: +44 20 7585 2200
41 Buckmaster Road    m: +44 7770 465 077
London SW11 1EN, UK   w: www.russel.org.uk
May 01 2018
prev sibling parent arturg <var.spool.mail700 gmail.com> writes:
On Thursday, 26 April 2018 at 22:29:46 UTC, Nick Sabalausky 
(Abscissa) wrote:
 On 04/26/2018 01:13 PM, arturg wrote:
 
 why do people use this syntax?
 
 if val == someVal
 
 or
 
 while val != someVal
 
 it makes editing the code harder then if you use if(val == 
 someVal).
The theory goes: A. "less syntax => easier to read". B. "There's no technical need to require it, and everything that can be removed should be removed, thus it should be removed". Personally, I find the lack of parens gives my brain's visual parser insufficient visual cues to work with, so I always find it harder to read. And regarding "B", I just don't believe in "less is more" - at least not as an immutable, universal truth anyway. Sometimes it's true, sometimes it's not.
yeah same here, and if people find delimiters annoying, editors with syntax highlighting can help with that by making them less visible so the important content sticks out more. But not having some delimiter removes the ability to edit the text by using, for example, vim's text object commands (vi(, yi[, and so on). In this regard, ddoc's syntax is annoying because the identifier is inside the parens: $(somemacro, content); it would have been better if it were $somemacro(content). Which can make HTML more editable than ddoc :/ as vim recognises tags as text objects.
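For comparison, here's a tiny sketch of the two flavours of markup inside a D doc comment (the Ddoc macros $(B ...) and $(LINK2 ...) are real; the point is only where the delimiters sit relative to the content):

    /**
     * Ddoc puts the macro name inside the parens:
     *
     *   $(B bold text), $(LINK2 https://dlang.org, the D homepage)
     *
     * HTML keeps the delimiters wrapped around just the content,
     * which vim's tag text objects (vit, vat) can grab directly:
     *
     *   <b>bold text</b>, <a href="https://dlang.org">the D homepage</a>
     */
    void example() {}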
Apr 26 2018
prev sibling next sibling parent Joakim <dlang joakim.fea.st> writes:
On Thursday, 26 April 2018 at 15:07:37 UTC, H. S. Teoh wrote:
 On Thu, Apr 26, 2018 at 08:50:27AM +0000, Joakim via 
 Digitalmars-d wrote:
 https://github.com/felixangell/krug
 
 https://www.reddit.com/r/programming/comments/8dze54/krug_a_systems_programming_language_that_compiles/
It's still too early to judge, but from the little I've seen of it, it seems nothing more than just a rehash of C with a slightly different syntax. It wasn't clear from the docs what exactly it brings to the table that isn't already done in C, or any other language.
In a comment from that proggit thread, the creator says, "The language right now isn't the focus but... you can think of it as Go but with generics and no garbage collection."
Apr 26 2018
prev sibling parent reply Meta <jared771 gmail.com> writes:
On Thursday, 26 April 2018 at 15:07:37 UTC, H. S. Teoh wrote:
 On Thu, Apr 26, 2018 at 08:50:27AM +0000, Joakim via 
 Digitalmars-d wrote:
 https://github.com/felixangell/krug
 
 https://www.reddit.com/r/programming/comments/8dze54/krug_a_systems_programming_language_that_compiles/
It's still too early to judge, but from the little I've seen of it, it seems nothing more than just a rehash of C with a slightly different syntax. It wasn't clear from the docs what exactly it brings to the table that isn't already done in C, or any other language. T
The author specified that it's just a hobby project.
Apr 26 2018
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Apr 26, 2018 at 06:26:03PM +0000, Meta via Digitalmars-d wrote:
 On Thursday, 26 April 2018 at 15:07:37 UTC, H. S. Teoh wrote:
 On Thu, Apr 26, 2018 at 08:50:27AM +0000, Joakim via Digitalmars-d
 wrote:
 https://github.com/felixangell/krug
 
 https://www.reddit.com/r/programming/comments/8dze54/krug_a_systems_programming_language_that_compiles/
It's still too early to judge, but from the little I've seen of it, it seems nothing more than just a rehash of C with a slightly different syntax. It wasn't clear from the docs what exactly it brings to the table that isn't already done in C, or any other language. T
Author specified that it's just a hobby project.
Fair enough. But if that's all there is to it, then why bring it up here? Not that I object, mind you, but it just seemed kinda random: why pick this one hobby project over any other, when there are tons of new languages out there being made almost every day? T -- The most powerful one-line C program: #include "/dev/tty" -- IOCCC
Apr 26 2018
parent ag0aep6g <anonymous example.com> writes:
On 04/26/2018 09:11 PM, H. S. Teoh wrote:
 Fair enough.  But if that's all there is to it, then why bring it up
 here?  Not that I object, mind you, but it just seemed kinda random, why
 pick this one hobby project over any other, when there are tons of new
 languages out there being made almost every day.
"compiler written in D"
Apr 26 2018