
digitalmars.D - How can D become adopted at my company?

reply "Eljay" <eljay451 gmail.com> writes:
[I had sent Walter an email, answering a question he asked me a 
few years ago.  Walter asked me to post the message here.  
Slightly redacted - I took out a non-relevant aside, and since 
the forum doesn't support HTML bold, italic, and underscore 
tags, those have been stripped as well.]

Hi Walter,

You had asked me a question that I have been thinking about for a 
couple years.

Your question from 2009 was, “How can D become adopted at my 
company?”


*** What makes a language successful? ***

I think Stroustrup’s answer is the best metric I’ve heard:  a 
language is successful if it is used.  Or, in my wording:  if 
you know a technology, can you get a job utilizing it?

Looking at all the successful languages, I have noticed that the 
ones I am familiar with have had some sort of sponsor pushing 
the technology.  For example, Ada by the U.S. Department of 
Defense, C and C++ by AT&T, Lua by PUC-Rio, REXX by IBM; the 
list goes on and on and on.

The only exception I can think of is Ruby, a successful language 
that prospered by a grassroots movement without a corporate, 
government, or academic sponsor as far as I’m aware.

My understanding is that Facebook is sponsoring D.  At least in 
some capacity, I’m not sure of the details or extent.  But 
still, it’s a start, and important, and Facebook has monetary 
resources.


*** What makes a language popular? ***

The classic chicken-and-egg problem.  A language is popular 
because a lot of people use it.  A lot of people use it because 
it is popular.

So how does a budding language like D become more popular?  
That’s a marketing and evangelist problem.  And not my forte.  
But I have raised awareness of D with developers I know, 
word-of-mouth.

Some leading-edge independent developers have used D, and their 
programs demonstrate that D can walk the walk as well as talk 
the talk (e.g., Torus Trooper, using D 0.110 and OpenGL).

Maybe Facebook could be convinced to pay some book writers to 
make D books.
    * Numerical Recipes in D
    * Design Patterns in D
    * The Dao of D
    * Harry Potter and the Frumious D Compiler


*** Who will use D?  And why?  And for what? ***

D is a neat language because it is a systems programming 
language, suitable as a more-than-viable alternative to C or C++.

But unlike C and C++, it has Safe D, which makes it eminently 
suitable as an applications programming language.  Sure, C and 
C++ are used as applications programming languages, but that’s 
not what they were designed for.  D has an applications-language 
story that is far more compelling than C’s or C++’s, and it has 
potential as a native applications programming language that 
VM-based languages can’t touch.


*** What does D lack? ***

I think it is interesting to take a step back, and look at the 
whole enchilada of programming.  From Python to BASIC, from SQL 
to Brainf**k.

I think all of the following could be posed as D Project 
Requests to the D user community.

Web framework.  What do people use Ruby on Rails, or the 
slightly less popular Python and Django, for?  Is D a suitable 
alternative?  Maybe, maybe not.  D can do the compiling job, no 
doubt, but is there a “Rails” or “Django” for D that is as 
strong as, well, Rails or Django?

Scripting.  Many games use Lua as an embedded scripting 
language.  With mixin we could write our own DSL in D.  Possibly 
even re-implement Lua-in-D.  Yet, I think D could greatly 
benefit from coming with a canned scripting language (e.g., a 
Lua-like language) as a standard feature.  Perhaps someone will 
do so (or has done so), and submit it for consideration.  That 
would make plumbing up a do-it-yourself scripting language (like 
Lua-on-C) to your engine-written-in-D so much easier.  If not 
Lua-in-D, perhaps JavaScript-in-D...?

Linux kernel in D.  This would be a crazy project for someone 
who is a Linux lover, an operating-systems wonk, a D enthusiast, 
and highly OCD.  It is crazy because re-writing something like 
the Linux kernel in another language is a lot of effort: a lot 
of work without any visible gain at the end.  But the final 
result would showcase that D can do the heavy lifting of an 
operating system.  Let’s see /language-X/ do that!


D on iOS.  For me personally, I would love to use D to write my 
applications for iOS and OS X.  But… I’m not sure how to do 
that.  (The Objective-D project looks abandoned, never got out 
of the “toy” project stage, and doesn’t bridge Cocoa’s 
frameworks, written in Objective-C, to D/Objective-D anyway.)

D for .NET.  Likewise, I would love to use D to write my 
applications for .NET… and again, I’m not sure how to do that.  
(The D.NET project looks abandoned, and never got out of the 
“toy” project stage.  And I don’t see a way to generate CIL from 
D.)

(For the above two D Project Requests, I have to admit I haven’t 
really looked all that hard.  So they could be already-solved 
problems, and I’m simply ignorant of the existing solutions.)


*** What features would you add to D? ***

I think these two features would help D tremendously:
    * D comes with a canned embedded scripting language, like Lua
    * D comes with the Objective facilities, like those found in 
Objective-C

For embedded scripting languages, Lua has shown it has the right 
balance of tiny footprint, rich expressivity, and high 
performance.  It has been used in many games, like World of 
Warcraft, and in desktop applications, such as the logic-and-UI 
language for Adobe Lightroom.

So why not just use Lua itself in D?  Lua already has a nice 
Lua-to-C API, but in my opinion a Lua-in-D would be able to 
leverage some of D’s strengths and make the scripting language 
seamless.

The Objective portion of Objective-C is very interesting.  The 
amazing advantage of Objective-Whatever is not in the Whatever 
so much as in what the Objective portion brings to the table.  
Since all Objective objects use a message-and-dispatch 
mechanism, frameworks all become incredibly loosely coupled.  
Also, due to late binding and the dispatch mechanism, anyone can 
extend any class, proxy any class, or remote-proxy any class, 
easily, even without source code.

To illustrate, I will compare C++ to Objective-C.  In C++, if 
you have a public API that takes a std::string const& as a 
parameter, you will soon discover that the std::string const& is 
intimately affected by the compiler used and the optimization 
settings.  In Objective-C, string objects can be 
mixed-and-matched across frameworks, in which different string 
objects have entirely different implementations but all comply 
with the same message contract.  And any framework can extend 
all the string objects in use with novel functionality.  That 
de-coupling is super-important for scalability, including 
plug-ins and extension frameworks.


*** What’s the future of programming? ***

The “Next Big Thing” for computer languages probably won’t be 
emphasizing their super-awesome encapsulation and way-cool 
message-and-dispatch based de-coupling.  :-)

I think the Next Big Thing in computer languages is rich DSL 
support, which will enable more complexity by simplifying what 
needs to be written, in a more suitable (i.e., domain-specific) 
expressivity.  Thanks to mixin and generative programming, I 
think D enables DSL grammars really, really well.
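
For a concrete taste (a toy sketch of mine, not an established D 
DSL; compileArrow is a hypothetical name), a compile-time 
function can translate a little domain-specific snippet into 
ordinary D code, which mixin then splices into the program:

    import std.stdio;

    // A micro-DSL: "expr -> target" becomes the assignment "target = expr;".
    // The translation runs at compile time (CTFE), so the DSL costs
    // nothing at runtime.
    string compileArrow(string src)
    {
        import std.string : indexOf, strip;

        auto i = src.indexOf("->");
        return src[i + 2 .. $].strip ~ " = " ~ src[0 .. i].strip ~ ";";
    }

    void main()
    {
        int x = 41, y;
        mixin(compileArrow("x + 1 -> y"));  // expands to: y = x + 1;
        writeln(y);                         // prints 42
    }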

Also, due to a confluence of factors, the rising star for 
becoming the most widely used programming language is 
JavaScript.  I have to say, I’m not a fan of JavaScript.  I’ve 
seen the leading edge of compile-to-JavaScript languages, such 
as CoffeeScript and Dart.  Can D get on that bandwagon and have 
the D compiler compile to JavaScript as some sort of IL?

Sincerely,
Eljay
Apr 24 2012
next sibling parent reply Trass3r <un known.com> writes:
 My understanding is that Facebook is sponsoring D.  At least in 
 some capacity, I’m not sure of the details or extent.  But 
 still, it’s a start, and important, and Facebook has monetary 
 resources.
Andrei works at Facebook, that's all.
 Web framework.  What do people use Ruby on Rails, or the 
 slightly less popular Python and Django, for?  Is D a suitable 
 alternative?
See Adam Ruppe's work.
 Scripting.  Many games use Lua as an embedded scripting 
 language.  With mixin we could write our own DSL in D.  
 Possibly even re-implement Lua-in-D.
No need to reinvent the wheel.
 So why not just use Lua itself in D?  Lua already has a nice 
 Lua-to-C API, but in my opinion a Lua-in-D would be able to 
 leverage some of D’s strengths and make the scripting language 
 seamless.
LuaD.
 kernel in D.
XOmB.
 Can D get on that bandwagon and have the D compiler compile to 
 JavaScript as some sort of IL?
Theoretically. See emscripten.
Apr 24 2012
parent "Eljay" <eljay451 gmail.com> writes:
Awesome!  Thanks Trass3r!
Apr 24 2012
prev sibling next sibling parent reply "Eljay" <eljay451 gmail.com> writes:
As a follow up to my email to Walter...

I know I didn't address the question "How can D become adopted at 
my company?" head-on.

An on-going project written in (say) C++ is not going to get 
approval to re-write in D.  There is no ROI in it.

A new project that could be written in D will be met with a lot 
of resistance.  Management will consider D too risky, as 
compared to writing the same project in a mainstream language.  
And developers not familiar with D will consider it a 
pain-in-the-learning-curve [an attitude I cannot fathom; 
learning a new computer language is a joy, like opening a 
birthday present].

In some cases, such as shipping an application for iOS or Windows 
Phone or Android devices, can D even be utilized?  Even if 
management and the team's developers are behind using D?

---

A brief blurb about who I am...

I started programming in 1976, when I contributed to a program 
called Oregon Trail, written in HP2000A BASIC, on TIES.  That 
was my very first programming experience.

After learning BASIC, I learned 6502 assembly, then later picked 
up FORTRAN, Pascal, and C.  Then 68000 assembly.

I abandoned programming in assembly when I got my first 
optimizing C compiler, which was able to out-optimize my lovingly 
hand-crafted assembly.  I became a true believer in the powerful 
mojo of optimizing compilers.

In 1990, I switched from C to C++, at first treating it as a 
"Better C" compiler.  Within two years, I had fully embraced the 
OOP style.

C++ was my main language for a long time.

---

About 12 years ago, with Aho's dragon book at my side, I tried 
my hand at writing my own programming language.  After six 
months, I gave up, because creating a good, general-purpose 
programming language IS VERY VERY HARD.

Later, when I stumbled upon D, it was like Walter had read my 
mind and implemented what I could only conceive of... I was 
smitten.  And I still am.

So the languages I admire are...
    * D, as a general purpose natively compiled multi-paradigm 
programming language
    * Lua, as a barebones, small footprint, embed-able 
do-it-yourself scripting language
    * Python 3, as a kitchen-sink-included scripting language

I have used extensively: BASIC (HP2000A, Apple Integer, 
Applesoft, MAI BusinessBASIC IV, PickBASIC), FORTRAN, Prolog, 
LISP & Scheme, 6502 assembly, 680x0 assembly, Pascal, 
Mathematica, C, C++, Objective-C, Objective-C++, and Java.

I'm also intrigued by some other languages, but I do not use 
them.  And I have dabbled with many other programming languages, 
such as Perl, Ruby, REXX, Ada, Squeak, Forth, PostScript, yada 
yada yada.

My educational background is in high-energy physics where I 
learned FORTRAN, linguistics (with a focus on semantics and 
artificial intelligence) where I learned Prolog and LISP, and 
computer science.

---

And the most important bit of information:  I use vi (Vim).
Apr 24 2012
next sibling parent reply Brad Roberts <braddr slice-2.puremagic.com> writes:
On Tue, 24 Apr 2012, Eljay wrote:

 As a follow up to my email to Walter...
 
 I know I didn't address the question "How can D become adopted at my company?"
 head-on.
Your response is actually very typical of most responses to the 
question.  What's interesting to me is that it's really a 
deflection and dodges the entire point of the question.  By 
avoiding the question, you (and don't take this personally, I 
mean 'the person answering a different question') avoid 
committing to trying to find a way at all.
 An on-going project written in (say) C++ is not going to get approval to
 re-write in D.  There is no ROI in it.
Neither Walter (in this case) nor the question asked for 
rewriting anything.  In fact, that's frequently stated (again, 
by Walter and others, including myself) as explicitly a 
non-goal.  Rewriting applications to another language is an 
exercise in time wasting and bug-reintroduction.  Unless you 
have _another_ driving reason to do a rewrite, don't.
 A new project that could be written in D will be met with a lot of resistance.
 Management will consider D too risky, as compared to writing the same project
 in a mainstream language.  And developers not familiar with D will consider
 it a pain-in-the-learning-curve [an attitude I cannot fathom; learning a new
 computer language is a joy, like opening a birthday present].
And this is finally getting at the heart of the question, but 
also approaching it with an intent-to-fail attitude.  Of course 
you don't want to take something new and introduce it as the 
solution for the next huge risky project.  That's bound to be 
smacked down and get nowhere.  To introduce change and reduce 
risk, you start small, with something that's safe to let fail.  
Of course, that can backfire too, if you want it to: "See, it 
failed, so the tools we used must suck."  Except that might not 
actually be why it failed.

So, the obvious follow-up: what have I done with D where I work?  
Little, other than get it on the approved list of software we 
can use.  It's not on the list of officially supported languages 
(more a de facto thing than an actual list).  But the key 
problem is that I haven't written any new code in a very long 
time, something I miss more and more.  The applications I do 
touch are all pre-existing code bases, so see above about 
rewriting.

My 2 cents,
Brad
Apr 24 2012
next sibling parent mta`chrono <chrono mta-international.net> writes:
Am 24.04.2012 21:53, schrieb Brad Roberts:
 Neither Walter (in this case) nor the question asked for rewriting 
 anything.  In fact, that's frequently stated (again, by Walter and others, 
 including myself) as explicitly a non-goal.  Rewriting applications to 
 another language is an exercise in time wasting and bug-reintroduction.  
 Unless you have _another_ driving reason to do a rewrite, don't.
 
 So, the obvious follow up.. what have I done with D where I work?  Little, 
 other than get it on the approved list of software we can use.  It's not 
 on the list of officially supported languages (more a defacto thing than 
 an actual list).  But the key problem is that I haven't written any new 
 code in a very long time, something I miss more and more.  The 
 applications I do touch are all pre-existing code bases, so see above 
 about rewriting.
 
 My 2 cents,
 Brad
 
Exactly!!!  That's the point.  I fully agree with this, and we 
should take account of it in all further endeavours.  D must be 
seamlessly integrable with any kind of existing codebase.
Apr 24 2012
prev sibling parent reply "Eljay" <eljay451 gmail.com> writes:
Thank you Brad, that's the kind of response I was hoping to 
elicit.

 What's interesting to me is that it's really a deflection and 
 dodges the entire point of the question.
Yes, I know.  I tried to step back and look at the bigger 
picture, and at the issue of "what are the pain points which 
hinder D from being used", as well as "what could the D 
community do to make D a more compelling alternative".

Even when I had my own one-man company, and could use any 
programming language I wanted -- and despite my own unbridled 
enthusiasm for D -- I ended up not using D.
Apr 25 2012
parent Brad Roberts <braddr puremagic.com> writes:
On 4/25/2012 1:37 PM, Eljay wrote:
 Thank you Brad, that's the kind of response I was hoping to elicit.
 
 What's interesting to me is that it's really a deflection and dodges the
entire point of the question.
Yes, I know. I tried to step back and look at the bigger picture, and the issue of "what are the pain points which hinder D from being used". As well as "what could the D community do to make D a more compelling alternative". Even when I had my own one-man company, and could use any programming language I wanted -- and despite my own unbridled enthusiasm for D -- I ended up not using D.
Part of my point is that it's _easy_ to find reasons not to 
introduce change, regardless of the nature of the change, even 
if the change is something that's low-risk and done or used all 
the time.  It takes a little bravery and faith and determination 
to cause change.  It takes even more to make risky changes, and 
no doubt, using D carries risks.  BUT, unless those risks are 
taken, the status quo won't change.

It's a lot like interviewing potential employees.  It's really 
pretty easy to seek out reasons not to hire and to pass on every 
candidate.  I know people that take that approach with their 
interviews... and quickly get taken aside and re-trained in how 
to interview, or are just removed from the process altogether.  
It takes a balanced approach.

We don't need more generalizations about why not to use D; we 
need people willing to take a minor risk and introduce D to 
demonstrate its strengths and accept the warts, knowing that the 
trend is clearly in the right direction.

Another 2 cents,
Brad
Apr 25 2012
prev sibling parent reply "SomeDude" <lovelydear mailmetrash.com> writes:
On Tuesday, 24 April 2012 at 12:50:27 UTC, Eljay wrote:
 ---

 And the most important bit of information:  I use vi (Vim).
I think the type of application where D is proving itself right 
now is high-performance server applications, and particularly 
web servers.  D seems completely fit to replace Java in most 
server apps, with both better performance and better memory 
usage.  The web interface to the newsgroups, as well as the 
recently revealed vibe.d web server, seem to support this view.  
D can handle both batch and real-time processing really well, I 
think.  That is where it can gain a lot of weight in the 
enterprise, even before games and scientific applications.
Apr 27 2012
parent reply "SomeDude" <lovelydear mailmetrash.com> writes:
On Friday, 27 April 2012 at 23:28:09 UTC, SomeDude wrote:
 On Tuesday, 24 April 2012 at 12:50:27 UTC, Eljay wrote:
 ---

 And the most important bit of information:  I use vi (Vim).
 I think the type of application where D is proving itself right 
 now is high-performance server applications, and particularly 
 web servers.  D seems completely fit to replace Java in most 
 server apps, with both better performance and better memory 
 usage.  The web interface to the newsgroups, as well as the 
 recently revealed vibe.d web server, seem to support this view.  
 D can handle both batch and real-time processing really well, I 
 think.  That is where it can gain a lot of weight in the 
 enterprise, even before games and scientific applications.
The other thing that would make it attractive among the C++ 
developers would be the development of a lightweight, 
high-performance, minimal library that doesn't use the GC at 
all.  Ideally, it would be compatible with Phobos.  I bet if 
such a library existed, flocks of C++ developers would suddenly 
switch to D.
Apr 27 2012
next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, Apr 28, 2012 at 01:31:32AM +0200, SomeDude wrote:
[...]
 The other thing that would make it attractive among the C++
 developers would be the development of a lightweight, high
 performance, minimal library that doesn't use the GC at all.  Ideally,
 it would be compatible with Phobos. I bet if such a library existed,
 flocks of C++ developers would suddenly switch to D.
I know the current GC leaves much room for improvement, but 
what's the hangup about the GC anyway?  If -- and yes, this is a 
very big if -- the GC had real-time guarantees, would that make 
it more palatable to C++ devs?  Or is it just because they have 
trouble with the idea of having a GC in the first place?


T

-- 
Three out of two people have difficulties with fractions. -- 
Dirk Eddelbuettel
Apr 27 2012
parent reply "SomeDude" <lovelydear mailmetrash.com> writes:
On Saturday, 28 April 2012 at 01:09:25 UTC, H. S. Teoh wrote:
 On Sat, Apr 28, 2012 at 01:31:32AM +0200, SomeDude wrote:
 [...]
 The other thing that would make it attractive among the C++ 
 developers would be the development of a lightweight, high 
 performance, minimal library that doesn't use the GC at all.  
 Ideally, it would be compatible with Phobos.  I bet if such a 
 library existed, flocks of C++ developers would suddenly switch 
 to D.
 I know the current GC leaves much room for improvement, but 
 what's the hangup about the GC anyway?  If -- and yes, this is 
 a very big if -- the GC had real-time guarantees, would that 
 make it more palatable to C++ devs?  Or is it just because they 
 have trouble with the idea of having a GC in the first place?

 T
Real time guarantees on a GC are not something we are going to 
offer anytime soon anyway.  But a minimal library, loosely based 
on the C standard library, with some more bells and whistles 
that could be borrowed from Phobos -- this is a goal that is 
achievable in the foreseeable future.  And both game developers 
and embedded programmers would be interested.
Apr 28 2012
next sibling parent reply "SomeDude" <lovelydear mailmetrash.com> writes:
On Saturday, 28 April 2012 at 09:12:23 UTC, SomeDude wrote:
 Real time guarantees on a GC are not something we are going to 
 offer anytime soon anyway.  But a minimal library, loosely 
 based on the C standard library, with some more bells and 
 whistles that could be borrowed from Phobos -- this is a goal 
 that is achievable in the foreseeable future.  And both game 
 developers and embedded programmers would be interested.
Note that Kenta Cho, who wrote fast games in D1, used this approach, and it worked very well for him.
Apr 28 2012
next sibling parent reply "Peter Alexander" <peter.alexander.au gmail.com> writes:
On Saturday, 28 April 2012 at 09:14:51 UTC, SomeDude wrote:
 On Saturday, 28 April 2012 at 09:12:23 UTC, SomeDude wrote:
 Real time guarantees on a GC are not something we are going to 
 offer anytime soon anyway.  But a minimal library, loosely 
 based on the C standard library, with some more bells and 
 whistles that could be borrowed from Phobos -- this is a goal 
 that is achievable in the foreseeable future.  And both game 
 developers and embedded programmers would be interested.
Note that Kenta Cho, who wrote fast games in D1, used this approach, and it worked very well for him.
I also write games in D.

My approach is this: use the GC all you want during loading or 
other non-interactive parts of the game, and then just make sure 
that you don't use it during gameplay.

GC vs. manual memory allocation is a non-issue for real-time 
guarantees.  The simple fact of the matter is that you should be 
using neither.  I also don't use malloc/free during runtime, 
because it has the same non-real-time problems as using the GC.  
A single malloc can stall for tens of milliseconds or more, and 
that's simply too much.

Just learn how to write code that doesn't allocate memory.

A bigger problem with GC for games is memory management, i.e. 
controlling how much memory is currently allocated, and which 
systems are using which memory.  Deterministic memory usage is 
preferable in those cases, because I know that as soon as I 
delete something, the memory is available for something else.  I 
don't get that guarantee with a GC.
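
To illustrate that last point (a minimal sketch of mine, not 
Peter's actual code; the Pool and Particle types are 
hypothetical), a fixed-capacity free list allocates everything 
up front, so acquiring and releasing objects during gameplay 
never touches the GC or malloc:

    import std.stdio;

    // A fixed-capacity pool: all storage lives inside the struct, so
    // acquire()/release() during gameplay never allocate anything.
    struct Pool(T, size_t capacity)
    {
        T[capacity] slots;           // preallocated object storage
        size_t[capacity] freeList;   // indices of unused slots
        size_t freeCount;

        void initialize()
        {
            foreach (i; 0 .. capacity)
                freeList[i] = i;
            freeCount = capacity;
        }

        T* acquire()
        {
            if (freeCount == 0) return null;   // exhausted: no hidden alloc
            return &slots[freeList[--freeCount]];
        }

        void release(T* obj)
        {
            freeList[freeCount++] = obj - slots.ptr;  // return the index
        }
    }

    struct Particle { float x, y, dx, dy; }

    void main()
    {
        Pool!(Particle, 1024) particles;
        particles.initialize();

        auto p = particles.acquire();  // no allocation here
        p.x = 1; p.y = 2;
        particles.release(p);
        writeln("pool ready, capacity 1024");
    }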
Apr 28 2012
next sibling parent reply Manu <turkeyman gmail.com> writes:
On 28 April 2012 18:16, Peter Alexander <peter.alexander.au gmail.com>wrote:

 On Saturday, 28 April 2012 at 09:14:51 UTC, SomeDude wrote:

 On Saturday, 28 April 2012 at 09:12:23 UTC, SomeDude wrote:

 Real time guarantees on a GC are not something we are going to offer
 anytime soon anyway.  But a minimal library, loosely based on the C
 standard library, with some more bells and whistles that could be borrowed
 from Phobos -- this is a goal that is achievable in the foreseeable
 future.  And both game developers and embedded programmers would be
 interested.
Note that Kenta Cho, who wrote fast games in D1, used this approach, and it worked very well for him.
 I also write games in D.

 My approach is this: use the GC all you want during loading or other
 non-interactive parts of the game, and then just make sure that you don't
 use it during gameplay.

 GC vs. manual memory allocation is a non-issue for real-time guarantees.
 The simple fact of the matter is that you should be using neither.  I also
 don't use malloc/free during runtime, because it has the same
 non-real-time problems as using the GC.  A single malloc can stall for
 tens of milliseconds or more, and that's simply too much.

 Just learn how to write code that doesn't allocate memory.

 A bigger problem with GC for games is memory management, i.e. controlling
 how much memory is currently allocated, and which systems are using which
 memory.  Deterministic memory usage is preferable in those cases, because
 I know that as soon as I delete something, the memory is available for
 something else.  I don't get that guarantee with a GC.
I think that basically sums it up.

I'm interested to know whether using a new precise GC will 
guarantee that ALL unreferenced stuff will be cleaned on any 
given sweep.  I can imagine a model in games where I could:
 1. Use the GC to allocate as much as I like during 
    initialisation.
 2. During runtime you never allocate anyway, so disable the GC 
    (this is when it is important to know about hidden 
    allocations).
 3. During some clean-up, first run the logic to de-reference 
    all things that are no longer required.
 4. Finally, force a precise GC scan, which should guarantee 
    that all no-longer-referenced memory is cleaned up at that 
    time.

This would actually be a very convenient working model for 
games.  But it only works if I know everything that was released 
will definitely be cleaned; otherwise I may not be able to 
allocate the next level (games often allocate all the memory a 
machine has within 100k or so).
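
For what it's worth, the hooks for steps 1-4 already exist in 
core.memory; a minimal sketch of that workflow (the level 
functions are hypothetical placeholders):

    import core.memory : GC;

    void loadLevel() { /* hypothetical: GC allocations allowed here */ }
    void play() { /* hypothetical: must not allocate */ }
    void dropLevelReferences() { /* hypothetical: null out dead refs */ }

    void main()
    {
        loadLevel();        // step 1: allocate freely during initialisation

        GC.disable();       // step 2: no collections during gameplay
        play();
        GC.enable();

        dropLevelReferences();  // step 3: un-reference what's no longer needed

        GC.collect();       // step 4: force a sweep of unreferenced memory
        GC.minimize();      // return freed pages to the OS where possible
    }

Whether step 4 reclaims *everything* is exactly the precision 
question Manu raises; the sketch only shows where the collection 
points would sit.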
Apr 29 2012
parent Walter Bright <newshound2 digitalmars.com> writes:
On 4/29/2012 2:38 AM, Manu wrote:
 I'm interested to know whether using a new precise GC will guarantee that
 ALL unreferenced stuff will be cleaned on any given sweep.
The new hook put into the typeinfo will do precise collection for references within GC allocated objects. For references that sit on the stack or in static data, it will still use the current conservative scheme. Some things will always be imprecise, like if you have a union of a pointer with an integer, or if you allocate untyped data.
Apr 29 2012
prev sibling next sibling parent Sean Kelly <sean invisibleduck.org> writes:
On Apr 29, 2012, at 2:38 AM, Manu <turkeyman gmail.com> wrote:

 On 28 April 2012 18:16, Peter Alexander <peter.alexander.au gmail.com> wrote:
 On Saturday, 28 April 2012 at 09:14:51 UTC, SomeDude wrote:
 On Saturday, 28 April 2012 at 09:12:23 UTC, SomeDude wrote:

 Real time guarantees on a GC are not something we are going to offer
 anytime soon anyway.  But a minimal library, loosely based on the C
 standard library, with some more bells and whistles that could be
 borrowed from Phobos -- this is a goal that is achievable in the
 foreseeable future.  And both game developers and embedded programmers
 would be interested.

 Note that Kenta Cho, who wrote fast games in D1, used this approach, and
 it worked very well for him.

 I also write games in D.

 My approach is this: use the GC all you want during loading or other
 non-interactive parts of the game, and then just make sure that you
 don't use it during gameplay.

 GC vs. manual memory allocation is a non-issue for real-time guarantees.
 The simple fact of the matter is that you should be using neither.  I
 also don't use malloc/free during runtime, because it has the same
 non-real-time problems as using the GC.  A single malloc can stall for
 tens of milliseconds or more, and that's simply too much.

 Just learn how to write code that doesn't allocate memory.

 A bigger problem with GC for games is memory management, i.e.
 controlling how much memory is currently allocated, and which systems
 are using which memory.  Deterministic memory usage is preferable in
 those cases, because I know that as soon as I delete something, the
 memory is available for something else.  I don't get that guarantee
 with a GC.

 I think that basically sums it up.

 I'm interested to know whether using a new precise GC will guarantee
 that ALL unreferenced stuff will be cleaned on any given sweep.
 I can imagine a model in games where I could:
  1. Use the GC to allocate as much as I like during initialisation.
  2. During runtime you never allocate anyway, so disable the GC (this is
     when it is important to know about hidden allocations).
  3. During some clean-up, first run the logic to de-reference all things
     that are no longer required.
  4. Finally, force a precise GC scan, which should guarantee that all
     no-longer-referenced memory is cleaned up at that time.

 This would actually be a very convenient working model for games.  But
 it only works if I know everything that was released will definitely be
 cleaned; otherwise I may not be able to allocate the next level (games
 often allocate all the memory a machine has within 100k or so).

For a use pattern like this, one thing that may work is to add a GC proxy
immediately before loading a level.  To unload the level, terminate that
GC.
Apr 29 2012
prev sibling parent reply Manu <turkeyman gmail.com> writes:
On 29 April 2012 16:53, Sean Kelly <sean invisibleduck.org> wrote:

 On Apr 29, 2012, at 2:38 AM, Manu <turkeyman gmail.com> wrote:

 On 28 April 2012 18:16, Peter Alexander <peter.alexander.au gmail.com>wrote:

 On Saturday, 28 April 2012 at 09:14:51 UTC, SomeDude wrote:

 On Saturday, 28 April 2012 at 09:12:23 UTC, SomeDude wrote:

 Real time guarantees on a GC are not something we are going to offer
 anytime soon anyway.  But a minimal library, loosely based on the C
 standard library, with some more bells and whistles that could be borrowed
 from Phobos -- this is a goal that is achievable in the foreseeable
 future.  And both game developers and embedded programmers would be
 interested.
Note that Kenta Cho, who wrote fast games in D1, used this approach, and it worked very well for him.
 I also write games in D.

 My approach is this: use the GC all you want during loading or other
 non-interactive parts of the game, and then just make sure that you don't
 use it during gameplay.

 GC vs. manual memory allocation is a non-issue for real-time guarantees.
 The simple fact of the matter is that you should be using neither.  I also
 don't use malloc/free during runtime, because it has the same
 non-real-time problems as using the GC.  A single malloc can stall for
 tens of milliseconds or more, and that's simply too much.

 Just learn how to write code that doesn't allocate memory.

 A bigger problem with GC for games is memory management, i.e. controlling
 how much memory is currently allocated, and which systems are using which
 memory.  Deterministic memory usage is preferable in those cases, because
 I know that as soon as I delete something, the memory is available for
 something else.  I don't get that guarantee with a GC.

 I think that basically sums it up.

 I'm interested to know whether using a new precise GC will guarantee that
 ALL unreferenced stuff will be cleaned on any given sweep.
 I can imagine a model in games where I could:
  1. Use the GC to allocate as much as I like during initialisation.
  2. During runtime you never allocate anyway, so disable the GC (this is
     when it is important to know about hidden allocations).
  3. During some clean-up, first run the logic to de-reference all things
     that are no longer required.
  4. Finally, force a precise GC scan, which should guarantee that all
     no-longer-referenced memory is cleaned up at that time.

 This would actually be a very convenient working model for games.  But it
 only works if I know everything that was released will definitely be
 cleaned; otherwise I may not be able to allocate the next level (games
 often allocate all the memory a machine has within 100k or so).

 For a use pattern like this, one thing that may work is to add a GC proxy
 immediately before loading a level.  To unload the level, terminate that
 GC.
Interesting workaround, although there are many other things that don't
get freed from state to state, and things that are shared by both state A
and state B are best kept around, saving the unload/reload time of those
resources (there is always lots of sharing; it adds up).

Is it technically possible to have a precise GC clean up all unreferenced
memory in one big pass?
Apr 29 2012
parent reply "Tove" <tove fransson.se> writes:
On Sunday, 29 April 2012 at 22:13:22 UTC, Manu wrote:
 Is it technically possible to have a precise GC clean up all 
 unreferenced
 memory in one big pass?
yes, but unless it's also moving/compacting... one would suffer memory fragmentation... so I would imagine TempAlloc is a better fit?
Apr 29 2012
parent reply Manu <turkeyman gmail.com> writes:
On 30 April 2012 01:24, Tove <tove fransson.se> wrote:

 On Sunday, 29 April 2012 at 22:13:22 UTC, Manu wrote:

 Is it technically possible to have a precise GC clean up all unreferenced
 memory in one big pass?
yes, but unless it's also moving/compacting... one would suffer memory fragmentation... so I would imagine TempAlloc is a better fit?
In some cases I'm comfortable with that type of fragmentation (large,
regularly sized resources), although that leads me to a gaping hole in
D's allocation system...

<OT, but still very important>
There is no way to request aligned memory.  I can't even specify an
alignment on a user type and expect it to be aligned if I create one on
the stack, let alone the heap >_<  It seems I can request alignment for
items within a struct, but I can't align the struct itself.  In
addition, a struct doesn't inherit the alignment of its aligned members,
so the struct is allocated unaligned, and the aligned member fails its
promise anyway.

I frequently align to:

16 bytes for basically everything.  This facilitates hardware SIMD, fast
memcpy, efficient write-combining, and better cache usage.

128(ish) bytes for L1 cache alignment (depending on architecture).
Frequently used to guarantee that ~128-byte-sized structs will never
straddle cache lines (wasting a memory fetch/L1 eviction), and to
support predictable prefetch algorithms.

4k(ish) for texture/GPU page alignment (again, depending on
architecture).  Many GPU resources MUST be aligned for the GPU to access
them.  Swizzling is applied to aligned pages; resource allocation must
match this.

4-64k virtual memory pages.  Many uses.

And occasionally other alignments pop up, often where they may be useful
to help reduce/avoid fragmentation, for instance.  Sometimes I need to
squat some data in a couple of low bits of a pointer... which requires
the pointers to be aligned.

Obviously I can manually align my memory with various techniques, and I
do, but it's rather fiddly and can also be very wasteful.  One fast
technique for general allocations is over-allocating by alignment-1,
pasting in a little header, and padding the allocation (see the sketch
below).  Allocating a GPU page that way, for instance, would waste
another whole page just to guarantee alignment.  In that case, you need
to allocate a big pool of pages and implement some pool system to dish
them out, but then you need to know the precise number of pages to be
allocated in advance in order not to waste memory that way.
</OT>
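
A minimal sketch of that over-allocate-and-pad technique (my own
illustration using the C heap; alignedAlloc/alignedFree are hypothetical
names, and alignment must be a power of 2):

    import core.stdc.stdlib : malloc, free;

    // Over-allocate by alignment-1 plus room for a small header, round
    // the address up to the requested boundary, and stash the raw
    // pointer just below the aligned block so alignedFree can find it.
    void* alignedAlloc(size_t size, size_t alignment)
    {
        void* raw = malloc(size + alignment - 1 + (void*).sizeof);
        if (raw is null) return null;

        auto addr = cast(size_t) raw + (void*).sizeof;
        addr = (addr + alignment - 1) & ~(alignment - 1);  // round up

        (cast(void**) addr)[-1] = raw;  // header: remember raw pointer
        return cast(void*) addr;
    }

    void alignedFree(void* p)
    {
        if (p !is null)
            free((cast(void**) p)[-1]);
    }

    void main()
    {
        void* buf = alignedAlloc(4096, 16);
        assert((cast(size_t) buf & 15) == 0);  // 16-byte aligned
        alignedFree(buf);
    }

As the post notes, this is fine for small alignments but wasteful for
page-sized ones, where a dedicated pool is the better design.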
Apr 29 2012
next sibling parent "Tove" <tove fransson.se> writes:
On Sunday, 29 April 2012 at 23:04:00 UTC, Manu wrote:
 In some cases I'm comfortable with that type of fragmentation 
 (large
 regularly sized resources), although that leads me to a gaping 
 hole in D's
 allocation system...
Hmmm, I see.  Also, I was thinking... since we have TLS, 
couldn't we abuse killing threads for fast deallocations, while 
putting persistent data in __gshared?
 <OT, but still very important>
 There is no way to request aligned memory. I can't even specify
I feel your pain, couldn't agree more.
Apr 29 2012
prev sibling parent reply Don Clugston <dac nospam.com> writes:
On 30/04/12 01:03, Manu wrote:
 On 30 April 2012 01:24, Tove <tove fransson.se
 <mailto:tove fransson.se>> wrote:

     On Sunday, 29 April 2012 at 22:13:22 UTC, Manu wrote:

         Is it technically possible to have a precise GC clean up all
         unreferenced
         memory in one big pass?


     yes, but unless it's also moving/compacting... one would suffer
     memory fragmentation... so I would imagine TempAlloc is a better fit?


 In some cases I'm comfortable with that type of fragmentation (large
 regularly sized resources), although that leads me to a gaping hole in
 D's allocation system...

 <OT, but still very important>
 There is no way to request aligned memory. I can't even specify an
 alignment on a user type and expect it to be aligned if I create one on
 the stack, let alone the heap >_<
 It seems I can request alignment for items within a struct, but I can't
 align the struct its self. In addition, a struct doesn't inherit the
 alignment of its aligned members, so the struct is allocated unaligned,
 and the aligned member fails its promise anyway.
Bug 2278.
May 03 2012
parent reply Manu <turkeyman gmail.com> writes:
On 3 May 2012 12:27, Don Clugston <dac nospam.com> wrote:

 On 30/04/12 01:03, Manu wrote:

 On 30 April 2012 01:24, Tove <tove fransson.se
 <mailto:tove fransson.se>> wrote:

    On Sunday, 29 April 2012 at 22:13:22 UTC, Manu wrote:

        Is it technically possible to have a precise GC clean up all
        unreferenced
        memory in one big pass?


    yes, but unless it's also moving/compacting... one would suffer
    memory fragmentation... so I would imagine TempAlloc is a better fit?


 In some cases I'm comfortable with that type of fragmentation (large
 regularly sized resources), although that leads me to a gaping hole in
 D's allocation system...

 <OT, but still very important>
 There is no way to request aligned memory. I can't even specify an

 alignment on a user type and expect it to be aligned if I create one on
 the stack, let alone the heap >_<
 It seems I can request alignment for items within a struct, but I can't
 align the struct its self. In addition, a struct doesn't inherit the
 alignment of its aligned members, so the struct is allocated unaligned,
 and the aligned member fails its promise anyway.
Bug 2278.
Why do you suggest alignment to only 8 bytes (not 16)?  MOVAPS and
friends operate on 16-byte-aligned data, and all non-x86 architectures
are strictly 16-byte aligned, with no unaligned alternative possible.

I'd like to see that proposal extended to arbitrary powers of 2, and to
allow align(n) to be applied to structs/classes.
May 03 2012
parent "Eljay" <eljay451 gmail.com> writes:
Been away for a bit.  My recap of the recent discussion, in 
reverse chronological order...

GC.  The discussion on the GC is great, *but* it may be best to 
move it to its own thread.  I don't think that the nuances of 
the GC are a critical issue preventing D from becoming adopted 
at a company -- even if those nuances (e.g., memory 
fragmentation, BlkAttr.NO_SCAN, false pointers, 10-second 
garbage collections, room for improvement) are very important to 
fully grok.

J2EE.  Using D instead of J2EE is an interesting notion.  
Companies that are invested in J2EE probably are not amenable to 
moving off the JVM, so D would only be viable in J2EE-centric 
enterprise environments if it could be compiled to Java 
bytecode.  (The J2EE infrastructure provides a lot of 
administration, monitoring, and management facilities.)

Games.  As Peter indicated, and I assume Kenta Cho would 
whole-heartedly agree, for high-performance games the GC (or even 
malloc/free) can be avoided.  D is a fully viable language for 
high-performance games.  Awesome!

License.  The discussion on the licenses, as far as I can tell, 
impacts several things, and merits further discussion in its own 
thread.  It's great to see that D2 compilers are coming either 
as a stock part of some Linux distros, or as an easily 
obtainable package[1].
[1] Alas, far less great, since the "slight" increase in the 
barrier to entry is actually quite high.  Back before Ruby and 
Python were stock components, I saw people who prefer Python or 
Ruby use Perl instead, just because they could rely on it being 
there -- even though pulling down Python or Ruby was a snap.

Arrays.  Thank you for bringing to my attention the GC 
implications of using the built-in arrays and slicing.  I think 
that having D-based, template, C++ STL/Boost-like alternatives 
with different (non-GC) memory requirements makes sense now.  I 
don't think this is a show-stopper for D becoming adopted at a 
company.
May 07 2012
prev sibling parent "Nick Sabalausky" <SeeWebsiteToContactMe semitwist.com> writes:
"SomeDude" <lovelydear mailmetrash.com> wrote in message 
news:zmlqmuhznaynwtcyplof forum.dlang.org...
 On Saturday, 28 April 2012 at 09:12:23 UTC, SomeDude wrote:
 Real time guarantees on a GC are not something we are going to offer 
 anytime soon anyway.  But a minimal library, loosely based on the C 
 standard library, with some more bells and whistles that could be 
 borrowed from Phobos -- this is a goal that is achievable in the 
 foreseeable future.  And both game developers and embedded programmers 
 would be interested.
Note that Kenta Cho, who wrote fast games in D1,
Actually, I think it was pre-D1. (They were fantastic games, too.)
 used this approach, and it worked very well for him.
Interesting, I had wondered about that. I never dug quite that deep into the code, so I never knew he had done it that way.
Apr 29 2012
prev sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Saturday, April 28, 2012 11:12:21 SomeDude wrote:
 Real time guarantees on a GC are not something we are going to
 offer anytime soon anyway.  But a minimal library, loosely based
 on the C standard library, with some more bells and whistles that
 could be borrowed from Phobos -- this is a goal that is achievable
 in the foreseeable future.  And both game developers and embedded
 programmers would be interested.
If what you want is the C standard library, then use the C 
standard library.  There's nothing stopping you, and trying to 
replicate it in D would be pointless.

The main problems with the GC in Phobos are likely arrays and 
containers.  You can't fix the array problem.  If you do 
_anything_ which involves slicing or any array functions which 
could allocate, you're going to need the GC.  The only way to 
avoid the problem completely is to restrict the array functions 
that you use to those which won't append to an array or 
otherwise allocate memory for one.

The container problem should be resolved via custom allocators 
once they've been added.  The custom allocators will also help 
reduce GC issues for classes in general.

But in general, by minimizing how much you do that would require 
the GC, the little that does shouldn't be a big deal.  Still, 
due to how arrays work, there's really no way to get away from 
the GC completely without restricting what you do with them, 
which in some cases means not using Phobos.  I don't think that 
there's really any way around that.

- Jonathan M Davis
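
To make the array point concrete, a small sketch (my own 
illustration) of which built-in array operations stay GC-free 
and which ones allocate:

    void main()
    {
        int[8] storage;              // fixed-size array: no GC involvement
        int[] slice = storage[];     // slicing existing memory: no allocation

        slice = slice[2 .. 6];       // re-slicing: still no allocation
        slice[0] = 42;               // writing through a slice: no allocation

        int[] dynamic;
        dynamic ~= 1;                // appending: allocates from the GC heap
        dynamic = dynamic ~ slice;   // concatenation: allocates a new array
        auto copy = dynamic.dup;     // dup: allocates a copy
    }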
Apr 28 2012
parent reply "SomeDude" <lovelydear mailmetrash.com> writes:
On Saturday, 28 April 2012 at 09:22:35 UTC, Jonathan M Davis 
wrote:
 On Saturday, April 28, 2012 11:12:21 SomeDude wrote:
 Real time guarantees on a GC are not something we are going to
 offer anytime soon anyway.  But a minimal library, loosely
 based on the C standard library, with some more bells and
 whistles that could be borrowed from Phobos -- this is a goal
 that is achievable in the foreseeable future.  And both game
 developers and embedded programmers would be interested.
 If what you want is the C standard library, then use the C 
 standard library.  There's nothing stopping you, and trying to 
 replicate it in D would be pointless.

 The main problems with the GC in Phobos are likely arrays and 
 containers.  You can't fix the array problem.  If you do 
 _anything_ which involves slicing or any array functions which 
 could allocate, you're going to need the GC.  The only way to 
 avoid the problem completely is to restrict the array functions 
 that you use to those which won't append to an array or 
 otherwise allocate memory for one.

 The container problem should be resolved via custom allocators 
 once they've been added.  The custom allocators will also help 
 reduce GC issues for classes in general.

 But in general, by minimizing how much you do that would require 
 the GC, the little that does shouldn't be a big deal.  Still, 
 due to how arrays work, there's really no way to get away from 
 the GC completely without restricting what you do with them, 
 which in some cases means not using Phobos.  I don't think that 
 there's really any way around that.

 - Jonathan M Davis
Right, I understand the situation better now.  So basically, 
what's needed is the custom allocators, and the GC would be 
relieved of much of the work.  That would still not work for 
hard-real-time embedded, but those applications have lots of 
restrictions on memory anyway (no dynamic allocation, for one), 
so it wouldn't change much.
Apr 28 2012
parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Saturday, April 28, 2012 11:35:19 SomeDude wrote:
 Right, I understand the situation better now.  So basically,
 what's needed is the custom allocators, and the GC would be
 relieved of much of the work.  That would still not work for
 hard-real-time embedded, but those applications have lots of
 restrictions on memory anyway (no dynamic allocation, for one),
 so it wouldn't change much.
With custom allocators and/or shared pointers/references, you 
can pretty much avoid the GC entirely for classes, as well as 
for any structs that you put on the heap.  So, you'd be in 
essentially the same place that C++ is for that.  It's just 
arrays that you can't really fix.  If you restrict yourself to 
what C/C++ can do with arrays (plus taking advantage of the 
length property), then you're fine, but if you do much beyond 
that, then you need the GC or you're going to have problems.  
So, as long as you're careful with arrays, you should be able to 
have the memory situation be pretty much identical to what it is 
in C/C++.

And, of course, if you can afford to use the GC in at least some 
of your code, then it's there to use.  I believe that the 
typical approach, however, is to use the GC unless profiling 
indicates that it's causing you performance problems somewhere, 
and then you optimize that code so that it minimizes its GC 
usage or avoids the GC entirely.  That way, your program as a 
whole can reap the benefits granted by the GC, but your 
performance-critical code can still be performant.

Actually, now that I think about it, delegates would be another 
area where you'd have to be careful, since they generally end up 
having closures allocated for them when you pass them to a 
function, unless that function takes them as scope parameters.  
But it's easy to avoid using delegates if you want to.  And if 
you want to program in a subset of the language that's closer to 
C, then you probably wouldn't be using them anyway.

- Jonathan M Davis
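
A small sketch of that delegate point (my own illustration; 
eachScoped/eachEscaping are hypothetical names): marking the 
parameter scope tells the compiler the delegate won't outlive 
the call, so the closure need not be heap-allocated:

    // The delegate is guaranteed not to escape, so the compiler can keep
    // the captured variables on the caller's stack -- no GC allocation.
    void eachScoped(scope void delegate(int) fn)
    {
        foreach (i; 0 .. 3) fn(i);
    }

    // Without 'scope', the delegate could be stored and outlive the
    // caller's frame, so passing a closure here may allocate it on the
    // GC heap.
    void eachEscaping(void delegate(int) fn)
    {
        foreach (i; 0 .. 3) fn(i);
    }

    void main()
    {
        int sum;
        eachScoped((int i) { sum += i; });    // closure over 'sum': no heap allocation
        eachEscaping((int i) { sum += i; });  // may allocate a closure for 'sum'
    }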
Apr 28 2012
prev sibling parent reply Manu <turkeyman gmail.com> writes:
On 28 April 2012 04:10, H. S. Teoh <hsteoh quickfur.ath.cx> wrote:

 On Sat, Apr 28, 2012 at 01:31:32AM +0200, SomeDude wrote:
 [...]
 The other thing that would make it attractive among the C++
 developers, would be the development of a lightweight, high
 performance, minimal library that doesn't use the GC at all.  Ideally,
 it would be compatible with Phobos. I bet if such a library existed,
 flocks of C++ developers would suddenly switch to D.
 I know the current GC leaves much room for improvement, but what's the
 hangup about the GC anyway?  If -- and yes, this is a very big if -- the
 GC had real-time guarantees, would that make it more palatable to C++
 devs?  Or is it just because they have trouble with the idea of having a
 GC in the first place?

If the GC guarantees to behave in a deterministic and predictable way, I
have no problem with it.  And even if it doesn't, as long as it's
lightning fast and I can control the sweeps.

One major concern to me is invisible allocations.  I want to know when
I'm allocating; I like allocate operations to be clearly visible.  There
are a lot of operations that cause invisible allocations in D, but they
are avoidable.

Games are both embedded and realtime code at the same time; this unions
the strict requirements of both worlds into a system that demands very
tight control of these things.  Fragmentation is the enemy, and so is
losing 1ms (the GC takes WAY longer than this currently) at random
moments.

There is a problem right now where the GC doesn't actually seem to work,
and I'm seeing D apps allocate gigabytes and never release the memory.

A great case study for the GC is VisualD, if any GC experts would like to
check it out.  It shows a use case where the GC utterly fails, and it
makes the software borderline unusable as a result.  It seems to 'leak'
memory, and collects can take 5-10 seconds at a time (manifested by
locking up the entire application).  VisualD has completely undermined my
faith and trust in the GC, and I've basically banned using it.  I can't
afford to run into that situation a few months down the line.
Apr 29 2012
parent reply "Nick Sabalausky" <SeeWebsiteToContactMe semitwist.com> writes:
"Manu" <turkeyman gmail.com> wrote in message 
news:mailman.93.1335691450.24740.digitalmars-d puremagic.com...
 On 28 April 2012 04:10, H. S. Teoh <hsteoh quickfur.ath.cx> wrote:

 On Sat, Apr 28, 2012 at 01:31:32AM +0200, SomeDude wrote:
 [...]
 The other thing that would make it attractive among the C++
 developers would be the development of a lightweight, high
 performance, minimal library that doesn't use the GC at all.  Ideally,
 it would be compatible with Phobos. I bet if such a library existed,
 flocks of C++ developers would suddenly switch to D.
 I know the current GC leaves much room for improvement, but what's the
 hangup about the GC anyway?  If -- and yes, this is a very big if -- the
 GC had real-time guarantees, would that make it more palatable to C++
 devs?  Or is it just because they have trouble with the idea of having a
 GC in the first place?

 If the GC guarantees to behave in a deterministic and predictable way, I
 have no problem with it.  And even if it doesn't, as long as it's
 lightning fast and I can control the sweeps.

 One major concern to me is invisible allocations.  I want to know when
 I'm allocating; I like allocate operations to be clearly visible.  There
 are a lot of operations that cause invisible allocations in D, but they
 are avoidable.

 Games are both embedded and realtime code at the same time; this unions
 the strict requirements of both worlds into a system that demands very
 tight control of these things.  Fragmentation is the enemy, and so is
 losing 1ms (the GC takes WAY longer than this currently) at random
 moments.

 There is a problem right now where the GC doesn't actually seem to work,
 and I'm seeing D apps allocate gigabytes and never release the memory.

 A great case study for the GC is VisualD, if any GC experts would like
 to check it out.  It shows a use case where the GC utterly fails, and it
 makes the software borderline unusable as a result.  It seems to 'leak'
 memory, and collects can take 5-10 seconds at a time (manifested by
 locking up the entire application).  VisualD has completely undermined
 my faith and trust in the GC, and I've basically banned using it.  I
 can't afford to run into that situation a few months down the line.
I once tried to create a batch image processing tool in D, and false 
pointers rendered the whole thing unusable.  I've been wary about such 
things since.

This was a number of years ago, though.
Apr 30 2012
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 04/30/2012 08:45 PM, Nick Sabalausky wrote:
 I once tried to create a batch image processing tool in D, and false
 pointers rendered the whole thing unusable. I've been wary about such things
 since.

 This was a number of years ago, though.
False pointers in the image data? Why would that even be scanned?
Apr 30 2012
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 4/30/2012 12:09 PM, Timon Gehr wrote:
 On 04/30/2012 08:45 PM, Nick Sabalausky wrote:
 I once tried to create a batch image processing tool in D, and false
 pointers rendered the whole thing unusable. I've been wary about such things
 since.

 This was a number of years ago, though.
False pointers in the image data? Why would that even be scanned?
Such was scanned in early D. Anyhow, a large allocation is more likely to have false pointers into it than a small one. For very large allocations, I would suggest handling them explicitly rather than using the GC.
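
For reference, the runtime now lets you opt such buffers out of 
scanning, or bypass the GC entirely; a minimal sketch (my 
illustration, not Walter's code):

    import core.memory : GC;

    void main()
    {
        // Allocate a pixel buffer the collector will never scan for
        // pointers: no false-pointer problem, since the GC treats it
        // as raw data.
        enum size = 1920 * 1080 * 4;
        ubyte* pixels = cast(ubyte*) GC.malloc(size, GC.BlkAttr.NO_SCAN);
        pixels[0] = 255;

        // Alternatively, manage very large buffers outside the GC
        // entirely, as Walter suggests.
        import core.stdc.stdlib : malloc, free;
        void* big = malloc(size);
        scope (exit) free(big);
    }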
Apr 30 2012
prev sibling parent Sean Kelly <sean invisibleduck.org> writes:
On Apr 30, 2012, at 12:09 PM, Timon Gehr wrote:

 On 04/30/2012 08:45 PM, Nick Sabalausky wrote:

 I once tried to create a batch image processing tool in D, and false
 pointers rendered the whole thing unusable.  I've been wary about such
 things since.

 This was a number of years ago, though.

 False pointers in the image data?  Why would that even be scanned?
It was probably back when the GC scanned everything.  BlkAttr.NO_SCAN
has only been around for a few years.
May 01 2012
prev sibling next sibling parent "Adam D. Ruppe" <destructionator gmail.com> writes:
On Tuesday, 24 April 2012 at 12:04:27 UTC, Eljay wrote:
  I have to say, I’m not a fan of JavaScript.  I’ve seen the 
 leading edge of compile-to-JavaScript languages, such as 
 CoffeeScript and DART.  Can D get on that bandwagon and have 
 the D compiler compile to JavaScript as some sort of IL?
Yeah, I've had more success than I thought I would with a dmd 
fork: https://github.com/adamdruppe/dmd/tree/dtojs

A good chunk of the language works, and we can do pretty good 
library stuff (when I find the time!).  There's also LLVM-based 
emscripten, which takes a different approach.
Apr 24 2012
prev sibling next sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Eljay:

 Looking at all the successful languages, I have noticed that 
 all the successful ones I am familiar with have had some sort 
 of sponsor pushing the technology.
Python was widely used before Google "support".  And I think Haskell has enjoyed corporate support for a long time.
 My understanding is that Facebook is sponsoring D.
Not much, I think.
 *** What does D lack? ***
Sometimes the problem is having too much ;-)
 But the final result would showcase that D can do the heavy 
 lifting of an operating system.  Let’s see /language-X/ do 
 that!
Bye, bearophile
Apr 24 2012
next sibling parent =?UTF-8?B?QWxleCBSw7hubmUgUGV0ZXJzZW4=?= <xtzgzorex gmail.com> writes:
On 24-04-2012 16:05, bearophile wrote:
 Eljay:

 Looking at all the successful languages, I have noticed that all the
 successful ones I am familiar with have had some sort of sponsor
 pushing the technology.
Python was widely used before Google "support". And I think Haskell has enjoyed corporate support for a lot of time.
 My understanding is that Facebook is sponsoring D.
Not much, I think.
 *** What does D lack? ***
Sometimes the problem is having too much ;-)
 But the final result would showcase that D can do the heavy lifting of
 an operating system.  Let’s see /language-X/ do that!

Bye, bearophile
I don't think it's a matter of belief...

-- 
- Alex
Apr 24 2012
prev sibling next sibling parent "Paulo Pinto" <pjmlp progtools.org> writes:
On Tuesday, 24 April 2012 at 14:05:14 UTC, bearophile wrote:
 Eljay:

 Looking at all the successful languages, I have noticed that 
 all the successful ones I am familiar with have had some sort 
 of sponsor pushing the technology.
Python was widely used before Google "support". And I think Haskell has enjoyed corporate support for a lot of time.
Python's killer application was Zope.  I recall that before 
Zope, no one cared about Python in Portugal; only afterwards did 
people start taking Python seriously.

Some of the main Haskell researchers are on the payroll of 
companies like Microsoft or Siemens, for example.

The proprietary languages usually are pushed by big companies 
until you cannot avoid them, while the, let's call them, 
community-oriented languages really need something that makes 
people care for the language and introduces it silently into the 
company.

I played a bit with D1, but never cared about it too much.  What 
really made me give it a second look was Andrei's book, but then 
I was disappointed to find out that not everything was really 
working as described in the book.

As a language geek, I toy around with all the programming 
languages I can play with, but I see the same issues as raised 
by Eljay.
Apr 24 2012
prev sibling parent reply "Kagamin" <spam here.lot> writes:
On Tuesday, 24 April 2012 at 14:05:14 UTC, bearophile wrote:
 Eljay:

 Looking at all the successful languages, I have noticed that 
 all the successful ones I am familiar with have had some sort 
 of sponsor pushing the technology.
Python was widely used before Google "support". And I think Haskell has enjoyed corporate support for a long time.
And who's behind PHP?
Apr 25 2012
next sibling parent "Paulo Pinto" <pjmlp progtools.org> writes:
On Wednesday, 25 April 2012 at 14:58:13 UTC, Kagamin wrote:
 On Tuesday, 24 April 2012 at 14:05:14 UTC, bearophile wrote:
 Eljay:

 Looking at all the successful languages, I have noticed that 
 all the successful ones I am familiar with have had some sort 
 of sponsor pushing the technology.
Python was widely used before Google "support". And I think Haskell has enjoyed corporate support for a long time.
And who's behind PHP?
Zend, plus the endless number of ISPs that offer only cheap PHP installations while charging endless amounts of money for other types of server deployment.
Apr 25 2012
prev sibling parent reply Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On 25/04/12 16:58, Kagamin wrote:
 On Tuesday, 24 April 2012 at 14:05:14 UTC, bearophile wrote:
 Python was widely used before Google "support". And I think Haskell has
 enjoyed corporate support for a long time.
And who's behind PHP?
... but importantly, Python and PHP (and Ruby, and Haskell, and others) were fully open source in their reference implementations from the get-go, or at least from very early on.

This isn't just important in itself, but has a multiplicative impact through inclusion in the Linux distros, BSDs, etc. which make up the server infrastructure of the web. It also enables all sorts of 3rd-party suppliers who feel comfortable including the software in their hosting provision, because they can be certain they won't in future suffer from the commercial constraints of a proprietary supplier.

D's reference implementation _still_ isn't fully open source -- only the frontend -- and the available open source compilers lag behind the reference.
Apr 25 2012
parent reply Don Clugston <dac nospam.com> writes:
On 25/04/12 17:38, Joseph Rushton Wakeling wrote:
 On 25/04/12 16:58, Kagamin wrote:
 On Tuesday, 24 April 2012 at 14:05:14 UTC, bearophile wrote:
 Python was widely used before Google "support". And I think Haskell has
 enjoyed corporate support for a long time.
And who's behind PHP?
[...] D's reference implementation _still_ isn't fully open source -- only the frontend -- and the available open source compilers lag behind the reference.
<rant>
"open source" is a horrible, duplicitous term. Really what you mean is "the license is not GPL compatible".
</rant>

Based on my understanding of the legal situation with Symantec, the backend CANNOT become GPL compatible. Stop using the word "still", it will NEVER happen.
Apr 26 2012
next sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Thursday, April 26, 2012 11:07:04 Don Clugston wrote:
 <rant>
 "open source" is a horrible, duplicitous term. Really what you mean is
 "the license is not GPL compatible".
 </rant>
 
 Based on my understanding of the legal situation with Symantec, the
 backend CANNOT become GPL compatible. Stop using the word "still", it
 will NEVER happen.
And it really doesn't need to. I honestly don't understand why it's an issue at all, other than people completely misunderstanding the situation or being the types of folks who think that anything which isn't completely and totally open is evil.

Whether the backend is open or not has _zero_ impact on your ability to use it. The source is freely available, so you can look at and see what it does. You can even submit pull requests for it. Yes, there are some limitations on you going and doing whatever you want with the source, but so what? There's _nothing_ impeding your ability to use it to compile programs. And the front-end - which is really where D itself is - _is_ under the GPL.

Not to mention, if you really want a "fully open" D compiler, there's always gdc and ldc, so there _are_ alternatives. The fact that dmd isn't fully open really doesn't affect much except for the people who are overzealous about "free software."

I think that the "openness" of dmd being an issue is purely a matter of misunderstandings and FUD. And if Walter _could_ make the backend GPL, he may very well have done so ages ago. But he can't, so there's no point in complaining about it - especially since it doesn't impede your ability to use dmd.

- Jonathan M Davis
Apr 26 2012
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/26/2012 2:27 AM, Jonathan M Davis wrote:
 I think that the "openness" of dmd being an issue is purely a matter of
 misunderstandings and FUD. And if Walter _could_ make the backend GPL, he may
 very well have done so ages ago. But he can't, so there's no point in
 complaining about it - especially since it doesn't impede your ability to use
 dmd.
I have tried, but failed. I also agree with you that it's moot, as LDC and GDC exist.
Apr 26 2012
parent Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On 26/04/12 21:08, Walter Bright wrote:
 I have tried, but failed.

 I also agree with you that it's moot, as LDC and GDC exist.
I think I should probably add that I do recognize the amount of effort you've put in here, and that I wasn't intending to be pejorative about DMD. I just think it's a terrible shame that you've been constrained in this way.
Apr 26 2012
prev sibling parent "Kagamin" <spam here.lot> writes:
On Thursday, 26 April 2012 at 09:28:30 UTC, Jonathan M Davis 
wrote:
 Whether the backend is open or not has _zero_ impact on your 
 ability to use
 it. The source is freely available, so you can look at and see 
 what it does.
Casual users are generally ignorant about licenses (as long as they can use the software), but geeks are not - and proprietary software has bad publicity. It's not something technical, just a matter of reputation.
Apr 27 2012
prev sibling next sibling parent Jacob Carlborg <doob me.com> writes:
On 2012-04-26 11:07, Don Clugston wrote:

 Based on my understanding of the legal situation with Symantec, the
 backend CANNOT become GPL compatible. Stop using the word "still", it
 will NEVER happen.
Theoretically someone could:

A. Replace all parts of the backend that Symantec can't/won't license as GPL (I don't know if that is the whole backend or not)
B. Buy the backend from Symantec

-- 
/Jacob Carlborg
Apr 26 2012
prev sibling next sibling parent reply Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On 26/04/12 11:07, Don Clugston wrote:
 <rant>
 "open source" is a horrible, duplicitous term. Really what you mean is "the
 license is not GPL compatible".
 </rant>
No, I don't mean "GPL compatible". I'd be perfectly happy for the DMD backend to be released under a GPL-incompatible free/open source licence like the CDDL.

The problem is not GPL compatibility but whether sufficient freedoms are granted to distribute and modify sources. That has a knock-on impact on the ability of 3rd parties to package and distribute the software, to patch it without necessarily going via upstream, etc. etc., all of which affects the degree to which others can easily use the language.
 Based on my understanding of the legal situation with Symantec, the backend
 CANNOT become GPL compatible. Stop using the word "still", it will NEVER
happen.
Please understand that I'm not suggesting any bad faith on the part of D's developers. Walter's good intentions are clear in the strong support he's given to GDC and other freely-licensed compilers. All I'm suggesting is that being free software (a somewhat better-defined term) was a key factor in some languages gaining popularity without corporate backing, and that the non-free nature of the DMD backend may have prevented D from enjoying this potential source of support.

On 26/04/12 11:27, Jonathan M Davis wrote:
 And it really doesn't need to. I honestly don't understand why it's an issue
 at all other than people completely misunderstanding the situation or being
 the types of folks who think that anything which isn't completely and totally
 open is evil.

 Whether the backend is open or not has _zero_ impact on your ability to use
 it. The source is freely available, so you can look at and see what it does.
 You can even submit pull requests for it. Yes, there are some limitations on
 you going and doing  whatever you want with the source, but so what? There's
 _nothing_ impeding your ability to use it to compile programs. And the front-
 end - which is really where D itself is - _is_ under the GPL.
You misunderstand my point. I'm not saying anyone is evil; I'm simply pointing out that the licensing constraints prevent various kinds of 3rd party distribution and engagement that could be useful in spreading awareness and use of the language. That _does_ have an impact on use, in terms of constraining the development of 3rd-party support and infrastructure.
 Not to mention, if you really want a "fully open" D compiler, there's always gdc
 and ldc, so there _are_ alternatives. The fact that dmd isn't fully open really
 doesn't affect much except for the people who are overzealous about "free
 software."
Yes, but GDC and LDC both (for now) lag behind DMD in terms of functionality -- I was not able to compile my updates to Phobos using GDC -- and it's almost inevitable that they will always have to play catch-up, even though the impact of that will lessen over time. That's why I spoke about the "reference implementation" of the language: D2 has been available for quite some time now, but it was only last autumn that a D2 compiler landed in my Linux distro.
 I think that the "openness" of dmd being an issue is purely a matter of
 misunderstandings and FUD. And if Walter _could_ make the backend GPL, he may
 very well have done so ages ago. But he can't, so there's no point in
 complaining about it - especially since it doesn't impede your ability to use
 dmd.
To an extent I agree with you. The good intentions of Walter and the other D developers are clear, and it's always been apparent that there will be fully open source compilers for the language; I wouldn't be here if I wasn't happy to work with DMD under its given licence terms.

But it's not FUD to say that the licensing does make more difficult certain kinds of engagement that have been very helpful for other languages, such as inclusion in Linux distros, BSDs, or other software collections -- and that has a further impact on those suppliers' willingness or ability to ship other software written in D. It's also fair to say that if the licensing were different, that would remove an entire source of potential FUD.

Again, I'm not saying that anyone is evil, that I find the situation personally unacceptable, or that I don't understand the reasons why things are as they are. I just made the point that _being_ free/open source software was probably an important factor in the success of a number of now-popular languages that didn't originally enjoy corporate support, and that the licensing of the DMD backend prevents it from enjoying some of those avenues to success.

... and I _want_ to see that success, because I think D deserves it.

Best wishes,

-- Joe
Apr 26 2012
parent reply Don Clugston <dac nospam.com> writes:
On 26/04/12 14:58, Joseph Rushton Wakeling wrote:
 On 26/04/12 11:07, Don Clugston wrote:
 <rant>
 "open source" is a horrible, duplicitous term. Really what you mean is
 "the
 license is not GPL compatible".
 </rant>
No, I don't mean "GPL compatible". I'd be perfectly happy for the DMD backend to be released under a GPL-incompatible free/open source licence like the CDDL. The problem is not GPL compatibility but whether sufficient freedoms are granted to distribute and modify sources.
And the only such limitation of freedom which has ever been identified, in numerous posts (hundreds!) on this topic, is that the license is not GPL compatible and therefore cannot be distributed with (say) OS distributions. Everything else is FUD.
Apr 26 2012
next sibling parent Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On 26/04/12 16:59, Don Clugston wrote:
 And the only one such limitation of freedom which has ever been identified, in
 numerous posts (hundreds!) on this topic, is that the license is not GPL
 compatible and therefore cannot be distributed with (say) OS distributions.
Yes, I appreciate I touched on a sore point and one that must have been discussed to death. I wasn't meaning to add to the noise, but your response to my original email was so hostile I felt I had to reply at length to clarify.

I personally don't think it's a minor issue that the reference version of D can't be included with open source distributions, but I also think there are much more pressing immediate issues than this to resolve in the short term.

By the way, there are plenty of non-GPL-compatible licences that have traditionally been considered acceptable by open source distributions -- the original Mozilla Public Licence and Apache Licence (new versions have since been released which ensure compatibility), at least one variant of the permissive BSD/MIT licences, and probably others. It's whether the licence implements the "four freedoms" that matters.
Apr 26 2012
prev sibling parent Jeff Nowakowski <jeff dilacero.org> writes:
On 04/26/2012 10:59 AM, Don Clugston wrote:
 No, I don't mean "GPL compatible". I'd be perfectly happy for the DMD
 backend to be released under a GPL-incompatible free/open source licence
 like the CDDL.

 The problem is not GPL compatibility but whether sufficient freedoms are
 granted to distribute and modify sources.
And the only such limitation of freedom which has ever been identified, in numerous posts (hundreds!) on this topic, is that the license is not GPL compatible and therefore cannot be distributed with (say) OS distributions.
I don't understand your fixation on the GPL, as even a GPL-incompatible license would allow it to be distributed on FOSS operating systems like Debian or Fedora. The important principle, which you've been ignoring for some reason, is that you can redistribute the source along with modifications. This is not special to the GPL, and is fundamental both to open source and Free Software.
Apr 27 2012
prev sibling parent Sean Kelly <sean invisibleduck.org> writes:
On Apr 26, 2012, at 5:58 AM, Joseph Rushton Wakeling wrote:

 The problem is not GPL compatibility but whether sufficient freedoms are granted to distribute and modify sources. That has a knock-on impact on the ability of 3rd parties to package and distribute the software, to patch it without necessarily going via upstream, etc. etc., all of which affects the degree to which others can easily use the language.

While distributing modified sources is certainly one way of dealing with changes not represented by the official distribution, I prefer distributing patches instead. It's easier to audit what's being changed, and updating to a new release tends to be easier.
Apr 26 2012
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2012-04-24 14:04, Eljay wrote:

 D on iOS. So for me personally, I would love to use D to write my
 applications for iOS, and OS X. But… I’m not sure how to do that. (The
 Objective-D project looks abandoned, never got out of the “toy” project
 stage, and doesn’t bridge Cocoa’s Frameworks written in Objective-C to
 D/Objective-D anyway.)
You would need to write bindings to the Objective-C classes, just as you need to write bindings to the C functions you want to use. I'm currently working on a tool that does this automatically. As a first step I intend to support C and Objective-C, then probably C++ as well.
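For the curious, such a binding ultimately boils down to the Objective-C runtime's plain C API, which D can already call through extern(C). A rough hand-written sketch (link with -lobjc on OS X; a generated binding would also cast objc_msgSend to the exact per-method function type rather than relying on a variadic declaration):

extern (C)
{
    // The Objective-C runtime is a C library, so D can declare it directly.
    void* objc_getClass(const(char)* name);
    void* sel_registerName(const(char)* name);
    void* objc_msgSend(void* receiver, void* selector, ...);
}

void main()
{
    // Equivalent of: [[NSObject alloc] init]
    void* cls = objc_getClass("NSObject");
    void* obj = objc_msgSend(cls, sel_registerName("alloc"));
    obj = objc_msgSend(obj, sel_registerName("init"));
}

-- 
/Jacob Carlborg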
Apr 24 2012