
digitalmars.D - DUB - call to arms

reply Seb <seb wilzba.ch> writes:
Hi all,

The problem
-----------

I think this is well-known, but I still highly encourage you all 
to have a look at this thread:

https://forum.dlang.org/thread/eftttpylxanvxjhoigqu@forum.dlang.org?page=1

or the issue tracker:

https://github.com/dlang/dub/issues

tl;dr: the status quo of dub is really, really terrible:

- tons of issues (often unanswered)
- a __critical__ piece of infrastructure
- fewer than one PR per month

I had to use Cargo the other day and compiling 80 (!) 
dependencies was a
miracle (parallel fetching and builds).
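The speedup Cargo gets from parallel fetching can be illustrated with a small, purely hypothetical sketch (Python, with a stubbed `fetch` standing in for registry downloads; this is not dub's or Cargo's actual code):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def fetch(dep):
    # Stand-in for a registry download; a real fetch would be network I/O,
    # which is exactly the kind of work that benefits from concurrency.
    time.sleep(0.01)
    return f"{dep} fetched"

def fetch_all(deps, workers=8):
    # Fetch independent dependencies concurrently instead of one by one;
    # map() preserves the input order of the results.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fetch, deps))

print(fetch_all(["vibe-d", "mir", "unit-threaded"]))
```

With 80 dependencies and mostly network-bound fetches, this shape of change alone is a large wall-clock win.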

We want the D ecosystem to grow and flourish and I don't think I 
have to
stress how important a good package manager is for this.

What can we do?
---------------

We have about $6k in our OpenCollective budget [1] (there's still 
the
$3k that we owe WebFreak) which means we have about $3k left.
I propose to use this as a starting base to fund serious work
on Dub (see the forum thread for a few ideas).

I believe that with the D community pitching in and a few 
companies
(they complain about dub too), we could get about $10k for this 
effort.

Now, of course, the question is what can we do to improve Dub's 
status quo?

It would be amazing if we could hire Martin or Sönke (even if 
it's only
part-time), but unfortunately as far as I am aware that's not an 
option
as both are pretty busy these days.

[1] https://opencollective.com/dlang/

Proposed actions
----------------

Hence, here's my proposal:

Publish a blog post analogous to this "Call to arms for Dub". The 
post could:

- summarize the reasons why funding dub is so crucial
- list the biggest problems of dub (i.e. the issues that we would 
target first)
- announce that we are starting a new funding round (with $3k 
already seeded from OC)
- note that some companies may be willing to pitch in too (e.g. 
Symmetry, Funkwerk, Sociomantic, ...)
- put out an open call for anyone to apply for this project

The exact details on payment and expectations would need to be 
specified, but I guess we should be able to fund roughly three 
months of work. OTOH we can leave these details open to the 
applicants, as students might be willing to work for $10k for 
three months, but experienced software engineers might not. Also, 
some people probably could only afford to work part-time on this, 
which imho would be fine too.

My questions to the community:

1) Would this be sth. you would consider applying for, in 
general? If not, what could we do to improve it?
2) Would this be sth. you would consider donating to?


For completeness, my personal favorite items for this would be 
sth. like:

- parallel dependency downloads and builds (dub could and should 
be a lot faster)
- option to emit only include flags (for good integration in 
other tools)
- "dub install"
- dub watch (there's a PR that needs to be revived - 
https://github.com/dlang/dub/pull/446)
- colored error messages (there's already a PR that just needs to 
be revived - https://github.com/dlang/dub/pull/1490)
- cached dub registry index

Though I think if we crowd-fund this campaign, we should make a 
list of the top 20-30 issues and then let everyone vote on them.
Apr 14
next sibling parent reply Guillaume Piolat <first.last gmail.com> writes:
On Sunday, 14 April 2019 at 10:53:17 UTC, Seb wrote:
 1) Would this be sth. you would consider applying in general? 
 If not what could we do to improve this?
I think DUB is completely _unsound_ and needs a rewrite from someone versed in compilers and correctness.
 2) Would this be sth. you would consider donating to?
Only if it's a rewrite.
Apr 14
parent reply Andre Pany <andre s-e-a-p.de> writes:
On Sunday, 14 April 2019 at 10:59:29 UTC, Guillaume Piolat wrote:
 On Sunday, 14 April 2019 at 10:53:17 UTC, Seb wrote:
 1) Would this be sth. you would consider applying in general? 
 If not what could we do to improve this?
I think DUB is completely _unsound_ and needs a rewrite from someone versed in compilers and correctness.
 2) Would this be sth. you would consider donating to?
Only if it's a rewrite.
Dub has a working ecosystem and a lot of developers have put effort into it. It will take years for another solution to reach the same functionality as Dub, and please do not forget we still have to maintain Dub until the new solution can replace it. Could you please write up the details of why a correction of Dub isn't possible and a complete rewrite is the only solution?

Kind regards
Andre
Apr 14
next sibling parent reply Guillaume Piolat <first.last gmail.com> writes:
On Sunday, 14 April 2019 at 11:21:09 UTC, Andre Pany wrote:
 Dub has a working eco system and a lot of developers put effort 
 into it. It will take years to have another solution with the 
 same functionality like Dub and please do not forget we still 
 have to maintain Dub until the new solution can replace Dub.
 Could you please write the details why a correction of Dub 
 isn't possible and a complete rewrite is the only solution?
Don't mistake me, DUB is good software that is very useful, and that's a feat for a build system - something very easy to hate. A lot of things are possible incrementally, but I believe some things will not be possible within the DUB design:

- dub.selections.json is supposed to encode dependency resolution, but fundamentally it does that for one particular set of optional dependency selections and one configuration.
- I personally think it has way too many features, and in such situations it's much harder to find the ones to remove, because agreement is difficult.
- Work keeps piling on for DUB because it is designed in a way that says yes to everything; features have piled on with no regard for who would maintain them.

Within DUB there is a much simpler subset to be discovered, I believe. For the record, I was an early contributor; I did what I could.
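For readers who have not looked inside one: dub.selections.json is essentially a flat map from package name to one pinned version. A made-up example (package names and versions are illustrative):

```json
{
    "fileVersion": 1,
    "versions": {
        "vibe-d": "0.8.5",
        "mir-algorithm": "3.1.2"
    }
}
```

The soundness concern is visible in the shape of the file itself: nothing in it records which configuration or which optional-dependency set the resolution was computed for.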
Apr 14
next sibling parent Guillaume Piolat <first.last gmail.com> writes:
On Sunday, 14 April 2019 at 11:38:23 UTC, Guillaume Piolat wrote:
 Within DUB there is a much simpler subset to be discovered I 
 believe.
With that said, I think I have been unreasonable; I would contribute to an incremental effort too. Dub probably just lacks a tutorial.
Apr 14
prev sibling parent Atila Neves <atila.neves gmail.com> writes:
On Sunday, 14 April 2019 at 11:38:23 UTC, Guillaume Piolat wrote:
 On Sunday, 14 April 2019 at 11:21:09 UTC, Andre Pany wrote:
 [...]
+1 to all your points, including your previous post on this thread.

FWIW, I'm currently writing code to use dub as a library and bypass most of it. Rewriting from scratch isn't feasible - it has to be bug-compatible. But reusing the parts that get information from packages and ignoring everything else...
Apr 15
prev sibling parent reply JN <666total wp.pl> writes:
On Sunday, 14 April 2019 at 11:21:09 UTC, Andre Pany wrote:
 [...]
I think on the ecosystem side, dub is the #1 thing that ever came to D. I remember what life was before dub.

First of all, there was no central repository, so it was hard to find a package you're interested in to begin with. Secondly, every package required additional steps to download dependencies, usually through a bash script. Then you had to roll the dice on which build tool it expects - will it be bud, dsss, maybe makefiles, maybe something else? Oh, and if you were on Windows, you were screwed 99% of the time, because the author loves bash and never tested it on non-POSIX platforms.

Right now the situation is so much better. I can download a random project from the internet, type dub build, and it just works. It will download and build the necessary dependencies and then build the project, usually without any manual steps involved.

Personally I haven't really encountered these pain points with D, so it's hard for me to say if it's a cause I'd donate to, but I think anything related to improving Dub is crucial for the D ecosystem to flourish.
Apr 14
next sibling parent Zekereth <paul acheronsoft.com> writes:
On Sunday, 14 April 2019 at 20:09:17 UTC, JN wrote:
 On Sunday, 14 April 2019 at 11:21:09 UTC, Andre Pany wrote:
 [...]
[...]
I just want to mirror this exactly. I didn't really play with D much until DUB came along. Now it's nearly always the language that I reach for when starting a new project.
Apr 15
prev sibling parent Guillaume Piolat <first.last gmail.com> writes:
On Sunday, 14 April 2019 at 20:09:17 UTC, JN wrote:
 I think on the ecosystem side, dub is the #1 thing that ever 
 came to D. I remember what life was before dub.
+1

Life before DUB was hellish because it was impossible to have many projects, or to rely on anything external. Improving DUB (whatever the way) is a fantastic opportunity to improve D.
Apr 15
prev sibling next sibling parent reply Jacob Shtokolov <jacob.100205 gmail.com> writes:
On Sunday, 14 April 2019 at 10:53:17 UTC, Seb wrote:
 Hi all,

 The problem
 -----------
Hey guys, maybe this is off-topic (and could sound heretical, btw), but did you ever think about writing a package manager in a scripting language like Python or JavaScript? The package manager is an infrastructure thing, so it's not as critical as the rest of the tools.

Maybe considering this idea would help to attract more people from outside of the community, because there is a feeling that we're pretty much locked inside the community itself.
Apr 14
next sibling parent Andre Pany <andre s-e-a-p.de> writes:
On Sunday, 14 April 2019 at 11:27:18 UTC, Jacob Shtokolov wrote:
 On Sunday, 14 April 2019 at 10:53:17 UTC, Seb wrote:
 Hi all,

 The problem
 -----------
[...]
In addition to the Dub executable, you can also use Dub as a library in your D code. This would no longer be possible, or only with a lot more effort.

Several developers have put a lot of effort into making DMD independent from the Microsoft build tools / Visual Studio. If we now add a new dependency on Python or npm, that would be really strange.

Kind regards
Andre
Apr 14
prev sibling next sibling parent Tourist <gravatar gravatar.com> writes:
On Sunday, 14 April 2019 at 11:27:18 UTC, Jacob Shtokolov wrote:
 On Sunday, 14 April 2019 at 10:53:17 UTC, Seb wrote:
 Hi all,

 The problem
 -----------
[...]
"We are too small a community, so can you please do this for us?" is not a good argument.
Apr 14
prev sibling parent bachmeier <no spam.net> writes:
On Sunday, 14 April 2019 at 11:27:18 UTC, Jacob Shtokolov wrote:
 On Sunday, 14 April 2019 at 10:53:17 UTC, Seb wrote:
 Hi all,

 The problem
 -----------
[...]
I wonder what would possibly be gained from using Python or JavaScript. If the goal is to have developers that are not current D users: (i) why would they want to work on Dub? (ii) why would we turn over the development of critical infrastructure to someone who doesn't know D and isn't interested in learning?

And then there's the marketing: "D community rewrites critical infrastructure in an alternative language because D doesn't cut it." A better decision would be to shut down the project entirely.
Apr 15
prev sibling next sibling parent Andre Pany <andre s-e-a-p.de> writes:
On Sunday, 14 April 2019 at 10:53:17 UTC, Seb wrote:
 Hi all,

 The problem
 -----------

 [...]
I fully agree. Also somewhat off-topic, but we should not only think about how to solve coding problems but also how to attract new developers, which in the end grows the community, helps us solve issues and build enhancements, and in turn attracts yet more developers. Maybe we should invest some money in marketing.

Another idea is to set up a campaign: for two weeks, all community members concentrate not on development but on advertising D in some form. Maybe we can also donate some copies of Ali's book to students at universities. There are so many possibilities to increase the visibility of D.

Kind regards
Andre
Apr 14
prev sibling next sibling parent reply Adam D. Ruppe <destructionator gmail.com> writes:
On Sunday, 14 April 2019 at 10:53:17 UTC, Seb wrote:
 1) Would this be sth. you would consider applying in general? 
 If not what could we do to improve this?
 2) Would this be sth. you would consider donating to?
No and no. Dub delivers negative value for my use-case; it is a cost to me to maintain these definition files for other people (and it is a support pain too, with people dropping in having dub problems that don't exist in the underlying compiler), even though I don't use it.

I don't use it because it offers nothing of value to me; it solves a problem I don't even have (and does so in a way that is mostly incompatible with my existing code). It is all negative with nothing in the positive column to balance it out.

I'd say we split dub up into three concerns:

1) code.dlang.org. I do see some value in this, and think it is salvageable in its current form.

2) dub, the package manager. Maybe useful. Though I don't personally believe in dependencies, I can see some value, but I'd want to change it so it has more compatibility with other workflows (like mine) and be sure to limit its scope to developer use-cases; end users should never know.

3) dub, the build tool. I'd rather have it either do absolutely nothing here, or just delegate to something else.

I gotta run, might write more later.
Apr 14
parent reply Russel Winder <russel winder.org.uk> writes:
On Sun, 2019-04-14 at 13:40 +0000, Adam D. Ruppe via Digitalmars-d
wrote:

[…]
 No and no. Dub delivers negative value for my use-case; it is a 
 cost to me to maintain these definition files for other people 
 (and it is a support pain too, with people dropping in having 
 dub problems that don't exist in the underlying compiler), even 
 though I don't use it.
So that is fine, you do not use it. But for those who have experienced Cargo with Rust, Go's tooling, Conan with C++, etc., D needs something like Dub.
 I don't use it because it offers nothing of value to me; it 
 solves a problem I don't even have (and does so in a way that is 
 mostly incompatible with my existing code). It is all negative 
 with nothing in the positive column to balance it out.
See above. :-)

 I'd say we split dub up into three concerns:

 1) code.dlang.org. I do see some value in this, and think it is 
 salvageable in its current form.

 2) dub, the package manager. Maybe useful, though I don't 
 personally believe in dependencies, I can see some value, though 
 I'd want to change it so it has more compatibility with other 
 workflows (like mine) and be sure to limit its scope to developer 
 use-cases; end users should never know.

 3) dub, the build tool. I'd rather have it either do absolutely 
 nothing here, or just delegate to something else.
I disagree. The lesson, particularly from Cargo/Rust and Go, is that "small language, small standard library, large pool of trivially accessible dependencies" is the way things are going just now.

Having a dependency-and-build manager is what Maven and (far better) Gradle have been doing for yonks in the JVM-based milieu, and this way of working is finally penetrating into the native-code arena. Make, CMake, SCons, Meson, etc. remain tied to the model of "I have all the source I need locally; everything else is provided via precompiled static or dynamic libraries that are present locally". If this works for a given use case, fine. But increasingly the JVM world, Cargo/Rust, and Go are working with a distributed source-code approach, one that can use a central publishing place, local file locations, or Git/Mercurial/Breezy repositories.

Having worked with Gradle, Cargo, and Go, returning to the approach of Make, CMake, SCons, and Meson really has no appeal to me at all.

A consequence of this for me is that almost all of Phobos should not be in Phobos but should be in the Dub repository as packages, and Dub should work with Git/Mercurial/Breezy repositories as easily as it does with the Dub repository and local filestores.

-- 
Russel.
===========================================
Dr Russel Winder      t: +44 20 7585 2200
41 Buckmaster Road    m: +44 7770 465 077
London SW11 1EN, UK   w: www.russel.org.uk
Apr 15
parent reply Adam D. Ruppe <destructionator gmail.com> writes:
On Monday, 15 April 2019 at 07:27:54 UTC, Russel Winder wrote:
 I disagree. The lessons, particularly from Cargo/Rust, and Go 
 is that "small language, small standard library, large pool of 
 trivially accessible dependencies" is the way things are going 
 just now.
You can still have that (and more of it, as we can get more library authors on board) if the systems are more compatible.

You can keep your `dub build` command if you must, but it should just forward to some user-defined system, which is able to get metadata like the build target back out of dub.

So, the dub executable does four (actually more!) conceptually independent tasks:

1) get metadata about a package
2) download a package for use
3) full dependency resolution
4) build

And I want to decouple those as much as we can.
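A hypothetical sketch of that decoupling (Python; the registry data and path scheme are invented for illustration): each of the four tasks becomes a separate function with a plain-data interface, so any one of them can be swapped for another tool.

```python
REGISTRY = {  # stand-in for code.dlang.org metadata
    "app":    {"version": "1.0.0", "deps": ["libfoo"]},
    "libfoo": {"version": "0.2.0", "deps": []},
}

def get_metadata(name):
    """Task 1: get metadata about a package."""
    return REGISTRY[name]

def download(name):
    """Task 2: download a package for use (here: just compute a cache path)."""
    return f"/cache/{name}-{get_metadata(name)['version']}"

def resolve(name):
    """Task 3: full dependency resolution (naive depth-first walk)."""
    seen, order = set(), []
    def walk(pkg):
        for dep in get_metadata(pkg)["deps"]:
            walk(dep)
        if pkg not in seen:
            seen.add(pkg)
            order.append(pkg)
    walk(name)
    return order  # dependencies first, root package last

def build(paths):
    """Task 4: build - the step most easily delegated to an external tool."""
    return [p + "/out.o" for p in paths]

plan = resolve("app")
artifacts = build([download(p) for p in plan])
```

Because each step only consumes and produces plain data, a user-defined build system could take over task 4 while still reading tasks 1-3 out of dub.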
Apr 15
parent Russel Winder <russel winder.org.uk> writes:
On Mon, 2019-04-15 at 16:44 +0000, Adam D. Ruppe via Digitalmars-d
wrote:
 On Monday, 15 April 2019 at 07:27:54 UTC, Russel Winder wrote:
 I disagree. The lessons, particularly from Cargo/Rust, and Go 
 is that "small language, small standard library, large pool of 
 trivially accessible dependencies" is the way things are going 
 just now.
 You can still have that (and more of it, as we can get more 
 library authors on board) if the systems are more compatible.

 You can keep your `dub build` command if you must, but it should 
 just forward to some user-defined system, which is able to get 
 metadata like build target back out of dub.

 So, the dub executable does four (actually more!), conceptually 
 independent tasks:

 1) get metadata about package
 2) download package for use
 3) full dependency resolution
 4) build

 And I want to decouple those as much as we can.
To be honest this is all implementation detail about which most users will not care. Most users really just want a single command with a CLI, or a way of integrating with their IDE or editor.

So with Cargo and Rust you have the cargo command; how it is implemented really doesn't matter. With Go there is the go command; how it is implemented really doesn't matter. If Dub is the system for D, then dub has a command line, and how it is implemented under the hood really doesn't matter as long as it works. And currently it seems a bit of a mess, especially compared to Cargo and Go.

-- 
Russel.
Apr 16
prev sibling next sibling parent Russel Winder <russel winder.org.uk> writes:
On Sun, 2019-04-14 at 10:53 +0000, Seb via Digitalmars-d wrote:
[…]

 I had to use Cargo the other day and compiling 80 (!) 
 dependencies was a
 miracle (parallel fetching and builds).
[…]

And herein lies the problem. Cargo is properly supported with actual resource that isn't the occasional volunteer effort. Also, Cargo has a significantly better way of working with dependencies and builds than Dub. Perhaps the D community as a whole should stop working on DMD/LDC/GDC etc. and start working on Dub?

Much as I prefer D over Rust for the GTK+ and GStreamer stuff I am doing, there is always the Rustward drive because of Cargo and because of the Rust plugin in CLion (*). It is clear that Dub was and is a good idea per se, but lots of decisions made along the way have meant it is not as good for D as Cargo is for Rust. :-(

(*) I know I had promised to help out on the CLion D plugin and have failed. The Big Problem™ is that it is easier to write code now using Rust and CLion with the Rust plugin than it is to get the CLion D plugin to a state where it is usable in production. I chat with the JetBrains/CLion folks from time to time, especially at ACCU conferences. Clearly they cannot take over management of the D plugin for IntelliJ IDEA and CLion as there is no perceived traction of D compared to Rust, but neither can they justify providing any resource to support the D plugin for the exact same reason. Rust has perceived traction in the market; D has none. :-(

-- 
Russel.
Apr 15
prev sibling next sibling parent reply Anton Fediushin <fediushin.anton yandex.com> writes:
I don't have much experience contributing to dub, but I have 
contributed to dub-registry in the past, and I must say it wasn't 
a pleasant experience for me. Everything I'm writing in this post 
is from a year ago; I am not sure what the state of the ecosystem 
is now.

Contributing starts with setting up a local development 
environment. For dub-registry it was awful, to say the least. I 
had to fix my local vibe-d source and some setup code while 
making sure that I didn't commit it to one of my frontend-related 
branches.

Compilation of dub-registry on a laptop with just 4 GB of RAM 
takes ages and hangs up the whole system. A tiny edit in any diet 
template caused them all to be parsed over and over again. I'd 
like to remind you that I was focusing on frontend and user 
integration, and therefore had to recompile the registry *many 
times*. Imagine how much that slowed me down.

The other thing is, of course, the technology the registry used. 
It was ancient. I couldn't push the idea of adding a CSS 
preprocessor like Sass, to sort out the CSS anarchy we had, past 
maintainers who wanted to avoid any third-party dependencies for 
the sake of simpler development. Yes, the 1000-line CSS file 
which held styles for the whole registry and had zero useful 
documentation was kept. (I wonder if it is still there, lol.)

After all the struggles I opened a PR. Usually it took a week or 
two to get merged; bigger changes were hanging for an entire 
month. This was so frustrating. I appreciate all of the 
maintainers who provided their feedback and pointed out my 
obvious mistakes, but we could have saved so much of each other's 
time if dub-registry used somewhat modern technologies and had 
documentation and maybe some guidelines.

Later, dub-registry was split into the registry itself and the 
dub documentation. Yes, now simple PRs to dub have to come with a 
documentation PR to this new repository. They slow each other 
down for very little benefit.

In conclusion, I will not support dub's further development. It 
is a piece of software I saw so much potential in, but it has 
failed all of my expectations. It is not up to modern standards. 
It slows down the whole community.

I totally agree with people who see the only solution to this 
problem in a complete rewrite of dub. My only addition is that 
the registry has to be rewritten as well, this time using modern 
technologies. There is no shame in it being written in 
JavaScript or whatever; D is not the right tool for that 
particular job.

Best regards,
Anton
Apr 15
next sibling parent reply Andre Pany <andre s-e-a-p.de> writes:
On Monday, 15 April 2019 at 09:34:00 UTC, Anton Fediushin wrote:
 I don't have much experience contributing to dub but I have 
 contributed to dub-registry in the past and must I say, it 
 wasn't a pleasant experience for me. Everything I'm writing in 
 this post is from a year ago, I am not sure what's the state of 
 the ecosystem now.

 [...]
What issues do you have that are specific to Dub? You wrote that it slows down the whole community. This statement is not true: it does not slow me down; actually, it is working like a charm for me now (I did a few dub pull requests).

Kind regards
Andre
Apr 15
parent reply Anton Fediushin <fediushin.anton yandex.com> writes:
On Monday, 15 April 2019 at 11:06:56 UTC, Andre Pany wrote:
 On Monday, 15 April 2019 at 09:34:00 UTC, Anton Fediushin wrote:
 I don't have much experience contributing to dub but I have 
 contributed to dub-registry in the past and must I say, it 
 wasn't a pleasant experience for me. Everything I'm writing in 
 this post is from a year ago, I am not sure what's the state 
 of the ecosystem now.

 [...]
What issues do you have specific to Dub? You wrote: it slows down the whole community. This statement is not true. It does not slow me down, actually it is working like a charme for me now (I did a few dub pull requests). Kind regards Andre
It is slowing down the D community because it's not what the package manager and build system of a modern programming language should look like. For example, LDC is able to compile code for quite a few architectures, even GPUs, yet you cannot painlessly integrate that with dub. As soon as you want to do something slightly unusual, your dub.json/dub.sdl becomes a spaghetti of preBuildCommands/postBuildCommands. In my personal opinion, the package file of a high-level build system (which is what dub is trying to be) should never contain any shell commands.

And yes, dub.json/dub.sdl is another problem of dub. Having two package formats is nothing but a bad decision. Sure, SDL can contain comments, which are very useful when you are trying to make sense of pre/postBuildCommands spaghetti. This is how one bad design decision depends on another. The thing is, it's impossible to fix without breaking changes.

Best regards,
Anton
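To make the two-formats point concrete, here is a minimal recipe in the SDLang form (package name and contents invented for illustration); the dub.json equivalent expresses the same fields but, being JSON, cannot carry the comments:

```sdl
name "myapp"
description "Hypothetical example recipe"
// SDLang allows comments - handy for explaining build-command hacks;
// the equivalent dub.json cannot contain this line at all.
dependency "vibe-d" version="~>0.8.5"
preBuildCommands "echo hello" platform="posix"
```

So each format has one advantage the other lacks (comments vs. machine-friendliness), which is how both ended up being kept.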
Apr 15
parent reply Andre Pany <andre s-e-a-p.de> writes:
On Monday, 15 April 2019 at 13:34:21 UTC, Anton Fediushin wrote:
 On Monday, 15 April 2019 at 11:06:56 UTC, Andre Pany wrote:
 On Monday, 15 April 2019 at 09:34:00 UTC, Anton Fediushin 
 wrote:
 I don't have much experience contributing to dub but I have 
 contributed to dub-registry in the past and must I say, it 
 wasn't a pleasant experience for me. Everything I'm writing 
 in this post is from a year ago, I am not sure what's the 
 state of the ecosystem now.

 [...]
What issues do you have specific to Dub? You wrote: it slows down the whole community. This statement is not true. It does not slow me down, actually it is working like a charme for me now (I did a few dub pull requests). Kind regards Andre
[...]
Integration of dub with LDC is working fine. I created a tutorial (German) here: http://d-land.sepany.de/tutorials/einplatinenrechner/einstieg-in-die-raspberry-pi-entwicklung-mit-ldc/

But yes, integration could be better. I use the shell commands to compile a git commit ID into the executable. It works like a charm. There are cases where shell commands are highly valuable and do not lead to spaghetti code. Without specific examples it is hard to discuss whether something works or not.

Kind regards
Andre
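A rough sketch of the approach Andre describes (the file names and layout here are my assumptions, not his actual setup): write the commit ID into a string-import path before the build, then read it at compile time.

```json
{
    "name": "myapp",
    "stringImportPaths": ["views"],
    "preBuildCommands": [
        "git rev-parse --short HEAD > views/commit.txt"
    ]
}
```

The D side can then embed it with `enum commit = import("commit.txt");`. Whether this exact shell line behaves the same on Windows depends on the shell dub invokes, which is the portability concern raised in the reply.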
Apr 15
parent reply Atila Neves <atila.neves gmail.com> writes:
On Monday, 15 April 2019 at 16:09:10 UTC, Andre Pany wrote:
 On Monday, 15 April 2019 at 13:34:21 UTC, Anton Fediushin wrote:
 [...]
Integration of dub with LDC is working fine. I created a tutorial (German) here: http://d-land.sepany.de/tutorials/einplatinenrechner/einstieg-in-die-raspberry-pi-entwicklung-mit-ldc/ But yes, integration could be better. I use the shell commands to compile a git commit ID into the executable. It works like a charm.
Try doing that cross platform between Windows and Linux.
 Without specific examples it is hard to discuss whether 
 something works or not.
My experience matches Anton's - as soon as you try to do anything non-trivial in "pure dub" it gets frustrating pretty quickly. I've given examples before.
Apr 15
parent reply Andre Pany <andre s-e-a-p.de> writes:
On Monday, 15 April 2019 at 17:17:49 UTC, Atila Neves wrote:
 On Monday, 15 April 2019 at 16:09:10 UTC, Andre Pany wrote:
 On Monday, 15 April 2019 at 13:34:21 UTC, Anton Fediushin 
 wrote:
 [...]
Integration of dub with LDC is working fine. I created a tutorial (German) here: http://d-land.sepany.de/tutorials/einplatinenrechner/einstieg-in-die-raspberry-pi-entwicklung-mit-ldc/ But yes, integration could be better. I use the shell commands to compile a git commit ID into the executable. It works like a charm.
Try doing that cross platform between Windows and Linux.
 Without specific examples it is hard to discuss whether 
 something works or not.
My experience matches Anton's - as soon as you try to do anything non-trivial in "pure dub" it gets frustrating pretty quickly. I've given examples before.
Actually I have a bat script for the Windows release pipeline and a bash script for the Linux/Darwin pipeline. My assumption is, there are a lot of developers who use dub without any issues, and for them replacing Dub would be a major pain. Also a lot of bugs were already solved in Dub in recent months. Bug fixes are just not listed in the D changelog. Kind regards Andre
Apr 15
next sibling parent reply "Nick Sabalausky (Abscissa)" <SeeWebsiteToContactMe semitwist.com> writes:
On 4/15/19 2:31 PM, Andre Pany wrote:
 
 My assumption is, there are a lot of developers who use dub without 
 any issues, and for them replacing Dub would be a major pain.
 
We can't really say that replacing dub would be a pain without looking at what the replacement would be. Without that, we can only speculate.
Apr 15
parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Mon, Apr 15, 2019 at 08:59:26PM -0400, Nick Sabalausky (Abscissa) via
Digitalmars-d wrote:
 On 4/15/19 2:31 PM, Andre Pany wrote:
 
 My assumption is, there are a lot of developers who use dub
 without any issues, and for them replacing Dub would be a major pain.
We can't really say that replacing dub would be a pain without looking at what the replacement would be. Without that, we can only speculate.
As long as dub's current functionality is supported and remains the default behaviour, this is not an issue. What's currently lacking isn't dub's default behaviour, it's the lack of support for changing the default behaviour. T -- Who told you to swim in Crocodile Lake without life insurance??
Apr 16
prev sibling next sibling parent Atila Neves <atila.neves gmail.com> writes:
On Monday, 15 April 2019 at 18:31:01 UTC, Andre Pany wrote:
 On Monday, 15 April 2019 at 17:17:49 UTC, Atila Neves wrote:
 On Monday, 15 April 2019 at 16:09:10 UTC, Andre Pany wrote:
 On Monday, 15 April 2019 at 13:34:21 UTC, Anton Fediushin 
 wrote:
 [...]
Integration of dub with LDC is working fine. I created a tutorial (German) here: http://d-land.sepany.de/tutorials/einplatinenrechner/einstieg-in-die-raspberry-pi-entwicklung-mit-ldc/ But yes, integration could be better. I use the shell commands to compile a git commit ID into the executable. It works like a charm.
Try doing that cross platform between Windows and Linux.
 Without specific examples it is hard to discuss whether 
 something works or not.
My experience matches Anton's - as soon as you try to do anything non-trivial in "pure dub" it gets frustrating pretty quickly. I've given examples before.
Actually I have a bat script for the Windows release pipeline and a bash script for the Linux/Darwin pipeline.
Which is less than ideal.
 My assumption is, there are a lot of developers who use dub 
 without any issues, and for them replacing Dub would be a major 
 pain.
Of course! For trivial applications that want to use dependencies without hassle, it's great. But it doesn't scale.
 Also a lot of bugs were already solved in Dub in recent 
 months. Bug fixes are just not listed in the D changelog.
The ones I filed are all open. And some aren't even filed yet AFAIK, such as the fact that dub.selections.json, as it exists now, is broken.
Apr 16
prev sibling parent reply aberba <karabutaworld gmail.com> writes:
On Monday, 15 April 2019 at 18:31:01 UTC, Andre Pany wrote:
 On Monday, 15 April 2019 at 17:17:49 UTC, Atila Neves wrote:
 Integration of dub with LDC is working fine. I created a 
 tutorial (german) here:
 My assumption is, there are a lot of developers who use dub 
 without any issues, and for them replacing Dub would be a major 
 pain.
Dub works great for me too. I could use some build speed though. By the way, there are some people who consistently bash Dub at any opportunity they get. They just don't like Dub.
Apr 25
next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Apr 26, 2019 at 12:18:54AM +0000, aberba via Digitalmars-d wrote:
[...]
 By the way, there are some people who consistently bash Dub at any
 opportunity they get. They just don't like Dub.
Because I tried using it and discovered that it wouldn't let me do what ought to be easy to do. If it works fine for you, then good for you. It doesn't do what I need it to do, so I'd rather not use it, but it's forced upon me because without dub it's impractical to use code from code.dlang.org. That's why I feel a lot of frustration with it -- it's something I can't avoid if I want to use code.dlang.org, but it also works very poorly for what I do. I often feel like I'm up the proverbial creek without a paddle when I'm using dub. It seems to work fine up your creek, but I need to be on *this* creek and dub refuses to give me a paddle to work with. T -- A linguistics professor was lecturing to his class one day. "In English," he said, "A double negative forms a positive. In some languages, though, such as Russian, a double negative is still a negative. However, there is no language wherein a double positive can form a negative." A voice from the back of the room piped up, "Yeah, yeah."
Apr 25
prev sibling parent reply Atila Neves <atila.neves gmail.com> writes:
On Friday, 26 April 2019 at 00:18:54 UTC, aberba wrote:
 On Monday, 15 April 2019 at 18:31:01 UTC, Andre Pany wrote:
 On Monday, 15 April 2019 at 17:17:49 UTC, Atila Neves wrote:
 Integration of dub with LDC is working fine. I created a 
 tutorial (german) here:
 My assumption is, there are a lot of developers who use dub 
 without any issues, and for them replacing Dub would be a major 
 pain.
Dub works great for me too. I could use some build speed though. By the way, there are some people who consistently bash Dub at any opportunity they get. They just don't like Dub.
As I've said on more than one occasion, for simple usage dub works fine. Slowly, but fine. It just doesn't scale. Anything non-trivial is hard to do and has to be hacked together. I liked dub too when I used it for personal projects but then I had to use it to do "real work" and it quickly broke down.
Apr 26
parent reply Andre Pany <andre s-e-a-p.de> writes:
On Friday, 26 April 2019 at 10:01:26 UTC, Atila Neves wrote:
 On Friday, 26 April 2019 at 00:18:54 UTC, aberba wrote:
 On Monday, 15 April 2019 at 18:31:01 UTC, Andre Pany wrote:
 On Monday, 15 April 2019 at 17:17:49 UTC, Atila Neves wrote:
 Integration of dub with LDC is working fine. I created a 
 tutorial (german) here:
 My assumption is, there are a lot of developers who use 
 dub without any issues, and for them replacing Dub would be a 
 major pain.
Dub works great for me too. I could use some build speed though. By the way, there are some people who consistently bash Dub at any opportunity they get. They just don't like Dub.
As I've said on more than one occasion, for simple usage dub works fine. Slowly, but fine. It just doesn't scale. Anything non-trivial is hard to do and has to be hacked together. I liked dub too when I used it for personal projects but then I had to use it to do "real work" and it quickly broke down.
I fully agree with you on the current state, though I draw a different conclusion. Dub is not a fully sophisticated build tool, but that is OK. It is intended for the simple cases, and it does a good job there. Use Dub for simple cases; integrate Dub into existing build tools for complex cases. We may have to work on the integration cases to make them smoother. I also use Dub integrated into an Xmake implementation, and that is working fine. Kind regards Andre
Apr 26
parent Russel Winder <russel winder.org.uk> writes:
On Fri, 2019-04-26 at 13:06 +0000, Andre Pany via Digitalmars-d wrote:
[…]
 Use Dub for simple cases. Integrate Dub into existing Build tools 
 for complex cases.
[…] I started trying to use Dub purely for package fetch in SCons, but I stopped using SCons, so stopped work on it. Also Dub has a weird way of managing package build products, which was beginning to become a mountain over which it was rather difficult to climb. If Dub were a library with a Python binding, life would be a lot easier. -- Russel. =============================================== Dr Russel Winder t: +44 20 7585 2200 41 Buckmaster Road m: +44 7770 465 077 London SW11 1EN, UK w: www.russel.org.uk
Apr 26
prev sibling parent Adam D. Ruppe <destructionator gmail.com> writes:
On Monday, 15 April 2019 at 09:34:00 UTC, Anton Fediushin wrote:
 Compilation of dub-registry on a laptop with just 4 GB RAM takes 
 ages and hangs up the whole system. A tiny edit in any diet 
 template caused them all to be parsed over and over again.
Yeah, I think the code.dlang.org site is salvageable, but I am not a vibe.d and especially not a diet template fan. Such a pain.
 D is not the right tool for that particular job.
D is amazing for web stuff.
Apr 15
prev sibling next sibling parent reply "Nick Sabalausky (Abscissa)" <SeeWebsiteToContactMe semitwist.com> writes:
I've personally put a lot of effort into trying to fix DUB's problems in 
the past. Though I would be happy to be proven wrong, at this point, I'm 
all but convinced the problems with DUB are far too fundamental.

I know this is the sort of proposal that makes people cringe, and often 
for good reason, but in this case, I really do think it would be 
quicker, easier, and produce a better result to simply re-design it from 
the ground up (while making sure to leverage the existing code.dlang.org 
ecosystem in some way), than to try to wrangle this 600lb gorilla into 
something it was never designed to be.

I've long been hoping to take a stab at this myself, and I often find 
myself thinking through details of how it would work. I would, however, 
need help with the dependency-resolution algorithm (perhaps somebody 
could go into DUB and work to decouple its dep resolution algorithms from 
the rest of DUB as much as possible - or at least document how to interface 
with it as a lib).
Apr 15
next sibling parent reply Guillaume Piolat <first.last gmail.com> writes:
On Monday, 15 April 2019 at 16:37:38 UTC, Nick Sabalausky 
(Abscissa) wrote:
 I've personally put a lot of effort into trying to fix DUB's 
 problems in the past. Though I would be happy to be proven 
 wrong, at this point, I'm all but convinced the problems with 
 DUB are far too fundamental.

 I know this is the sort of proposal that makes people cringe, 
 and often for good reason, but in this case, I really do think 
 it would be quicker, easier, and produce a better result to 
 simply re-design it from the ground up (while making sure to 
 leverage the existing code.dlang.org ecosystem in some way), 
 than to try to wrangle this 600lb gorilla into something it was 
 never designed to be.

 I've long been hoping to take a stab at this myself, and I 
 often find myself thinking through details of how it would 
 work. I would, however, need help with the 
 dependency-resolution algorithm (perhaps somebody could go into 
 DUB and work to decouple its dep resolution algorithms from the 
 rest of DUB as much as possible - or at least document how to 
 interface with it as a lib).
I've tried paying MrSmith33 to write a DUB replacement for a while. I think the "incremental" camp is 80% right saying that a DUB rewrite is a pipe dream in large parts. However if you want to build on that small attempt, don't hesitate to pick what you like: https://gitlab.com/AuburnSounds/rub (coined "idub" for a pun) In the end it needs a dedicated programmer with a _principled_ vision, but I'm sure some money would find your way, maybe not much though. I think recipe files (which format, we'll disagree on) and the registry are good and MUST be kept.
Apr 15
parent reply Atila Neves <atila.neves gmail.com> writes:
On Monday, 15 April 2019 at 19:52:10 UTC, Guillaume Piolat wrote:
 On Monday, 15 April 2019 at 16:37:38 UTC, Nick Sabalausky 
 (Abscissa) wrote:
 I've personally put a lot of effort into trying to fix DUB's 
 problems in the past. Though I would be happy to be proven 
 wrong, at this point, I'm all but convinced the problems with 
 DUB are far too fundamental.

 I know this is the sort of proposal that makes people cringe, 
 and often for good reason, but in this case, I really do think 
 it would be quicker, easier, and produce a better result to 
 simply re-design it from the ground up (while making sure to 
 leverage the existing code.dlang.org ecosystem in some way), 
 than to try to wrangle this 600lb gorilla into something it 
 was never designed to be.

 I've long been hoping to take a stab at this myself, and I 
 often find myself thinking through details of how it would 
 work. I would, however, need help with the 
 dependency-resolution algorithm (perhaps somebody could go 
 into DUB and work to decouple its dep resolution algorithms 
 from the rest of DUB as much as possible - or at least document 
 how to interface with it as a lib).
I've tried paying MrSmith33 to write a DUB replacement for a while. I think the "incremental" camp is 80% right saying that a DUB rewrite is a pipe dream in large parts. However if you want to build on that small attempt, don't hesitate to pick what you like: https://gitlab.com/AuburnSounds/rub (coined "idub" for a pun)
I'll take a look. In the meanwhile I started this: https://github.com/kaleidicassociates/bud It uses dub as a library to make sure that it works just as dub does, but bypasses what it can. The idea is to completely separate these disparate tasks: * Dependency version resolution (from dub.{sdl,json} -> dub.selections.json) * Fetching dependencies that are not already in the file system (trivial after dub.selections.json has been generated) * Extracting information from dub about source files, import paths, etc. * Actually using the information from dub to build In the future dub.selections.json will have to be fixed to have different entries per configuration, and possibly per platform as well. Fortunately there's a version field in the current format. I'm currently concentrating on assuming that dub.selections.json is generated and taking it from there.
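The third task in the list above (extracting information from dub about source files, import paths, etc.) maps onto dub's existing `dub describe` command, which prints a JSON description of the build. A minimal sketch of consuming such output follows; note that the JSON shape used here is a simplified assumption for illustration, not dub's exact schema:

```python
import json

# Simplified, assumed shape of a `dub describe`-style document;
# real dub output has more fields and a different nesting.
describe_output = """
{
  "rootPackage": "myapp",
  "packages": [
    {"name": "myapp",   "path": "/src/myapp",    "importPaths": ["source"]},
    {"name": "somedep", "path": "/deps/somedep", "importPaths": ["src"]}
  ]
}
"""

def import_dirs(describe_json):
    """Collect one absolute import directory per (package, importPath)."""
    doc = json.loads(describe_json)
    dirs = []
    for pkg in doc["packages"]:
        for rel in pkg["importPaths"]:
            dirs.append(pkg["path"].rstrip("/") + "/" + rel)
    return dirs

print(import_dirs(describe_output))
# ['/src/myapp/source', '/deps/somedep/src']
```

A build tool that consumes this information never needs to know how the dependency versions were resolved or fetched, which is exactly the separation of tasks being proposed.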
Apr 16
next sibling parent Adam D. Ruppe <destructionator gmail.com> writes:
On Tuesday, 16 April 2019 at 12:12:45 UTC, Atila Neves wrote:
 The idea is to completely separate these disparate tasks:

 * Dependency version resolution (from dub.{sdl,json} -> 
 dub.selections.json)
 * Fetching dependencies that are not already in the file system 
 (trivial after dub.selections.json has been generated)
 * Extracting information from dub about source files, import 
 paths, etc.
 * Actually using the information from dub to build
It sounds like us dub-skeptics all actually agree on a surprising number of details. If we go with a plan like this, we can get more authors on board and get dub some more serious buy-in.
Apr 16
prev sibling next sibling parent reply Guillaume Piolat <first.last gmail.com> writes:
On Tuesday, 16 April 2019 at 12:12:45 UTC, Atila Neves wrote:
 I'll take a look. In the meanwhile I started this:

 https://github.com/kaleidicassociates/bud

 It uses dub as a library to make sure that it works just as dub 
 does, but bypasses what it can. The idea is to completely 
 separate these disparate tasks:

 * Dependency version resolution (from dub.{sdl,json} -> 
 dub.selections.json)
 * Fetching dependencies that are not already in the file system 
 (trivial after dub.selections.json has been generated)
 * Extracting information from dub about source files, import 
 paths, etc.
 * Actually using the information from dub to build
Won't complain about such good news! dub.json and dub.selections.json must be unified though, as dub.selections.json is not self-sufficient; what happens if they contradict each other?
 In the future dub.selections.json will have to be fixed to have 
 different entries per configuration, and possibly per platform 
 as well. Fortunately there's a version field in the current 
 format. I'm currently concentrating on assuming that 
 dub.selections.json is generated and taking it from there.
I'm not really sure what the original DUB assumptions are. I think our research concluded that solving dependencies for a particular platform+configuration is (in an idealized DUB) a fundamentally different operation from generating a proper dub.selections.json (for which you would have to find a super-set for "all conf"). The problem is that configurations/platforms could appear and disappear alongside previous versions. (For readers: this would lead the path to platform-based dependencies and configuration-based dependencies, which are currently not doable, such as: "dependencies-linux": { "x11": "~>1.0" } )
Apr 16
parent reply Atila Neves <atila.neves gmail.com> writes:
On Tuesday, 16 April 2019 at 15:52:57 UTC, Guillaume Piolat wrote:
 On Tuesday, 16 April 2019 at 12:12:45 UTC, Atila Neves wrote:
 I'll take a look. In the meanwhile I started this:

 https://github.com/kaleidicassociates/bud

 It uses dub as a library to make sure that it works just as 
 dub does, but bypasses what it can. The idea is to completely 
 separate these disparate tasks:

 * Dependency version resolution (from dub.{sdl,json} -> 
 dub.selections.json)
 * Fetching dependencies that are not already in the file 
 system (trivial after dub.selections.json has been generated)
 * Extracting information from dub about source files, import 
 paths, etc.
 * Actually using the information from dub to build
Won't complain about such good news! dub.json and dub.selections.json must unify though, as dub.selections.json is not self-sufficient, what if they contradict themselves?
I'm not sure I know what you mean. Unless you edit dub.selections.json manually, it can't contradict the build recipe it came from. Version 1 of dub.selections.json isn't sufficient, but I'm leaving that as a problem to solve later.
 I think our research concluded that solving dependencies for a 
 particular platform+configuration is (in an idealized DUB) a 
 fundamentally different operation than generating a proper 
 dub.selections.json (for which you would have to find a 
 super-set for "all conf").
Could you elaborate on this please? At least for configurations, I don't see why it wouldn't be possible to generate the current JSON object in dub.selections.json, but for each configuration instead. And, possibly platform (linux, x86_64, etc.).
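Purely as an illustration of what per-configuration entries might look like, here is a hypothetical "fileVersion 2" selections file. This format is invented for this sketch and is not anything dub currently supports; today's fileVersion 1 format has a single flat "versions" object:

```json
{
    "fileVersion": 2,
    "configurations": {
        "default":  { "versions": { "vibe-d": "0.8.5" } },
        "unittest": { "versions": { "vibe-d": "0.8.5",
                                    "unit-threaded": "0.7.45" } }
    }
}
```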
 The problem is that configurations/platform could appear and 
 disappear alongside previous versions.
Could you please give an example of this?
 (For readers: this would lead the path to platform-based 
 dependencies and configuration-based dependencies which are 
 currently not doable, such as:

     "dependencies-linux": { "x11": "~>1.0" }
 )
Yep!
Apr 17
next sibling parent Guillaume Piolat <contact youknowwhere.com> writes:
On Wednesday, 17 April 2019 at 08:50:31 UTC, Atila Neves wrote:
 Could you please give an example of this?
Sorry I'm not clear. I'll dig into my conversations with MrSmith33 later and will come back to you because it's not in my mind anymore (and I wasn't the one doing the work). Let's meet at DConf at least to talk about this.
Apr 17
prev sibling parent Guillaume Piolat <first.last gmail.com> writes:
On Wednesday, 17 April 2019 at 08:50:31 UTC, Atila Neves wrote:
 (For readers: this would lead the path to platform-based 
 dependencies and configuration-based dependencies which are 
 currently not doable, such as:

     "dependencies-linux": { "x11": "~>1.0" }
 )
Yep!
I feel I've way overstated the soundness problems of DUB. Why "dependency-<platform>" and "dependency-<configuration>" are not supported has been explained here for two years now... https://github.com/dlang/dub/wiki/FAQ
Apr 20
prev sibling parent reply "Nick Sabalausky (Abscissa)" <SeeWebsiteToContactMe semitwist.com> writes:
On 4/16/19 8:12 AM, Atila Neves wrote:
 
 I'll take a look. In the meanwhile I started this:
 
 https://github.com/kaleidicassociates/bud
 
 It uses dub as a library to make sure that it works just as dub does, 
 but bypasses what it can. The idea is to completely separate these 
 disparate tasks:
What is the current state of this tool?
Apr 17
parent Atila Neves <atila.neves gmail.com> writes:
On Wednesday, 17 April 2019 at 19:07:07 UTC, Nick Sabalausky 
(Abscissa) wrote:
 On 4/16/19 8:12 AM, Atila Neves wrote:
 
 I'll take a look. In the meanwhile I started this:
 
 https://github.com/kaleidicassociates/bud
 
 It uses dub as a library to make sure that it works just as 
 dub does, but bypasses what it can. The idea is to completely 
 separate these disparate tasks:
What is the current state of this tool?
In its infancy.
Apr 18
prev sibling parent reply "Nick Sabalausky (Abscissa)" <SeeWebsiteToContactMe semitwist.com> writes:
On 4/15/19 12:37 PM, Nick Sabalausky (Abscissa) wrote:
 I've personally put a lot of effort into trying to fix DUB's problems in 
 the past. Though I would be happy to be proven wrong, at this point, I'm 
 all but convinced the problems with DUB are far too fundamental.
 
 I know this is the sort of proposal that makes people cringe, and often 
 for good reason, but in this case, I really do think it would be 
 quicker, easier, and produce a better result to simply re-design it from 
 the ground up (while making sure to leverage the existing code.dlang.org 
 ecosystem in some way), than to try to wrangle this 600lb gorilla into 
 something it was never designed to be.
 
 I've long been hoping to take a stab at this myself, and I often find 
 myself thinking through details of how it would work. I would, however, 
 need help with the dependency-resolution algorithm (perhaps somebody 
 could go into DUB and work to decouple its dep resolution algorithms from 
 the rest of DUB as much as possible - or at least document how to interface 
 with it as a lib).
To clarify, I'm mainly referring to the "package manager" aspect of DUB here. I'm less concerned with the "buildsystem" part, simply because with a proper package manager, individual projects and even libraries will be free to use whatever buildsystem they want, even if that happens to be "dub build" (which *is* an entirely reasonable choice for many simpler projects, in large part because that's specifically the use-case it was primarily designed for).
Apr 15
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Mon, Apr 15, 2019 at 06:56:24PM -0400, Nick Sabalausky (Abscissa) via
Digitalmars-d wrote:
[...]
 To clarify, I'm mainly referring to the "package manager" aspect of
 DUB here. I'm less concerned with the "buildsystem" part simply
 because with a proper package manager, individual projects and even
 libraries will be free to use whatever buildsystem they want, even if
 that happens to be "dub build" (which *is* an entirely reasonable
 choice for many simpler projects...in large part because that's
 specifically the use-case it was primarily designed for).
Y'know, the idea just occurred to me: why even bind the package manager to the build system at all? Why not make the build system one of the dependencies of the project? So you could have library A that is written to be built using dub build, and library B that depends on 'make', say (where 'make' is some dub-ified description of make as a build system), and A depends on B. So to build A, dub would fetch the necessary sources to build 'make', then use 'make' to build B, and then build A, and so forth. This way, you can have multiple build systems coexisting peacefully with each other, without requiring the package manager to roll its own build system. You could have library C depend on 'scons' and library D depend on 'gradle' or whatever, as long as there's a suitably dub-ified description of these build systems, dub could fetch and build them as dependencies and then use them to build whatever targets are needed. 'dub' itself (or maybe we should call it 'dub-build' or something) would be its own, standalone build module that projects could depend on by default. So if you don't like dub's built-in build system, just override it to depend on 'make' or whatever your choice of build poison is. For backward compatibility, 'dub-build' would be assumed if no build system is specified. If we were to pull this off, I might even consider using dub in a non-toy way. (Right now the only way I can get it to work sanely with my vibe.d project is to use the hack of an empty dub project whose sole purpose is to declare dub dependencies. It's ugly, and annoying.) T -- Без труда не выловишь и рыбку из пруда.
Apr 15
next sibling parent reply "Nick Sabalausky (Abscissa)" <SeeWebsiteToContactMe semitwist.com> writes:
On 4/15/19 7:17 PM, H. S. Teoh wrote:
 On Mon, Apr 15, 2019 at 06:56:24PM -0400, Nick Sabalausky (Abscissa) via
Digitalmars-d wrote:
 [...]
 To clarify, I'm mainly referring to the "package manager" aspect of
 DUB here. I'm less concerned with the "buildsystem" part simply
 because with a proper package manager, individual projects and even
 libraries will be free to use whatever buildsystem they want, even if
 that happens to be "dub build" (which *is* an entirely reasonable
 choice for many simpler projects...in large part because that's
 specifically the use-case it was primarily designed for).
Y'know, the idea just occurred to me: why even bind the package manager to the build system at all? Why not make the build system one of the dependencies of the project? So you could have library A that is written to be built using dub build, and library B that depends on 'make', say (where 'make' is some dub-ified description of make as a build system), and A depends on B. So to build A, dub would fetch the necessary sources to build 'make', then use 'make' to build B, and then build A, and so forth.
Yup, from the sound of it, I think we're on exactly the same page here. Basically, a package manager's config should need exactly three things from your project: 1. What the dependencies are. 2. The command(s) to build, test, etc. 3. Name(s)/location(s) of the build's output. Then, with that information, a package manager provides services such as (but not necessarily limited to): 1. A simple, standardized way for you and your users to obtain/build the dependencies. 2. A simple, standardized way for buildscripts/buildsystems to obtain the information needed to include the dependencies in their own build (such as -I... include directories, paths to the now-already-built lib/exec binaries, etc.) From this, each project can naturally either just roll its own buildscripts, or depend on another package providing a buildsystem. Some of the *details* can be quite nontrivial, like dependency resolution algorithms, or designing the interactions between package manager and buildsystem to be simple yet effective enough to suit all parties' needs. But ultimately, it boils down to something conceptually very simple. I tried to argue in favor of an approach like this earlier on in DUB's development, but the author maintained that making a package ecosystem all work together requires all packages using the same buildsystem. I wasn't able to convince him otherwise.
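The three items above could be captured in a manifest as small as the following sketch. Every key name here is invented for illustration; this is not an existing format, dub's or anyone else's:

```json
{
    "name": "mylib",
    "dependencies": { "somedep": ">=1.2.0" },
    "commands": {
        "build": "make -j",
        "test":  "make test"
    },
    "outputs": {
        "library":     "build/libmylib.a",
        "importPaths": ["include"]
    }
}
```

With only this much declared, the package manager can fetch dependencies, invoke the declared commands, and hand downstream builds the paths and binaries they need, without ever knowing how the build itself works.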
Apr 15
next sibling parent rikki cattermole <rikki cattermole.co.nz> writes:
On 16/04/2019 11:52 AM, Nick Sabalausky (Abscissa) wrote:
 I tried to argue in favor of an approach like this earlier on in DUB's 
 development, but the author maintained that making a package ecosystem 
 all work together requires all packages using the same buildsystem. I 
 wasn't able to convince otherwise.
I can understand that argument (regarding targets, same compiler, etc.). But I do agree that it would be the better design if it could all be worked out.
Apr 15
prev sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Mon, Apr 15, 2019 at 07:52:07PM -0400, Nick Sabalausky (Abscissa) via
Digitalmars-d wrote:
[...]
 [...]  Basically, a package manager's config should need exactly three
 things from your project:
 
 1. What dependencies.
 2. The command(s) to (build, test, etc.)
 3. Name(s)/Location(s) of the build's output.
 
 Then, with that information, a package manager provides services such
 as (but not necessarily limited to):
 
 1. A simple, standardized way for you and your users to obtain/build
 the dependencies.
 
 2. A simple, standardized way for buildscripts/buildsystems to obtain
 the information needed to include the dependencies in their own build
 (such as -I... include directories, paths to the now-already-built
 lib/exec binaries, etc.)
I'd add: 3. A standard query interface for querying a remote repository for packages (matching some names / patterns) and version numbers.
 From this, each project can naturally either just roll its own
 buildscripts, or depend on another package providing a buildsystem.
That's a good idea. Completely decouple package management from building. Let the package manager do what it does best: managing packages, and leave the compilation to another tool more suited for the job. Of course, the default build command can be set to `dub build` to keep existing users happy. I was thinking the build/test/etc. command can itself be defined by a dependency. For example, if your project uses make for builds, there could be a `make` dub package that defines command-line templates used for generating platform-specific build commands. Your package would simply specify: "build-system": "make" and the "make" package would define the exact command(s) for invoking make (e.g., if gmake is picked up as a resolution, it could define the executable name as `gmake`, or it could define different CLI syntax for Windows vs. Linux, etc.). The package manager would then recognize "make" as a dependency, and would download and install that package (which presumably would install an appropriate version of 'make') if it hasn't already been installed, then invoke it to build the project. The "make" package itself would presumably be some kind of wrapper around a system utility, or an installer of some sort that installs an appropriate version of make on the system. It could be implemented as a normal dub package whose "build" command is something that runs an attached installer.
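A recipe for such a 'make' wrapper package might look like the sketch below. None of these keys exist in dub today; they are invented purely to make the proposal concrete, including the $PROJECT_DIR substitution variable:

```json
{
    "name": "make",
    "description": "Hypothetical wrapper package exposing make as a build system",
    "build-command-posix":   "make -C $PROJECT_DIR",
    "build-command-windows": "gmake -C %PROJECT_DIR%"
}
```

A consuming project would then declare "build-system": "make" as described above, and the package manager would substitute the project's own directory into the command template before invoking it.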
 Some of the *details* can be quite nontrivial...like dependency
 resolution algorithms, or designing the interactions between package
 manager and buildsystem to be simple, yet effective enough to suit all
 parties needs.  But ultimately, it boils down conceptually to be very
 simple.
If done correctly, dependency resolution is just a topological walk on a DAG. Get this right, and everything else falls into place. To generate the DAG, there would need to be some kind of list of repositories, which defaults to code.dlang.org (but which can be customized if necessary -- e.g., add local filesystem repos for local packages, or a URL to a private additional repository). There would also need to be some kind of version resolution scheme. It shouldn't be too hard to do: all you need is for repositories to support three queries: 1) Fetch a list of packages matching the given name(s); 2) Filter said list by a given OS/platform identifier. 3) Filter said list by zero or more given version specs (either an exact match, or a <= or >= match, or whatever else you may want to filter version strings by). Repositories ought to support such queries efficiently, i.e., the package manager shouldn't have to download the entire list of 1000 packages each with 50 different version identifiers, and then perform an O(n) search on the list to find the necessary matching package. It should behave like a database: you send it an OS identifier, a list of desired package names, each with one or more version filters, and it returns a list of matching package descriptions, including URLs from which you may download the package(s). It should take only 1 network roundtrip to get this information. Then once the package manager receives the URLs, it can use whatever method necessary to download said URLs (or just cache it somewhere if it's a pathname on the local filesystem), and read the package description(s) to find any additional dependencies that it may need. T -- A program should be written to model the concepts of the task it performs rather than the physical world or a process because this maximizes the potential for it to be applied to tasks that are conceptually similar and, more important, to tasks that have not yet been conceived. -- Michael B. Allen
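The topological walk described here is standard-library material in many languages. A minimal sketch in Python (the package names and graph are made up):

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical dependency graph: package -> the packages it depends on.
deps = {
    "myapp":         {"vibe-d", "unit-threaded"},
    "vibe-d":        {"eventcore"},
    "unit-threaded": set(),
    "eventcore":     set(),
}

# static_order() yields each package only after all of its
# dependencies -- exactly the order in which they must be built.
build_order = list(TopologicalSorter(deps).static_order())
print(build_order)
```

TopologicalSorter also raises CycleError on circular dependencies, which a package manager would surface as a user-facing error rather than looping forever.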
Apr 16
next sibling parent reply drug <drug2004 bk.ru> writes:
On 16.04.2019 20:39, H. S. Teoh wrote:
 If done correctly, dependency resolution is just a topological walk on a
 DAG.  Get this right, and everything else falls into place.
 
How do you resolve conflicts using DAG only?
Apr 17
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Wed, Apr 17, 2019 at 10:40:29AM +0300, drug via Digitalmars-d wrote:
 On 16.04.2019 20:39, H. S. Teoh wrote:
 If done correctly, dependency resolution is just a topological walk
 on a DAG.  Get this right, and everything else falls into place.
 
How do you resolve conflicts using DAG only?
What kind of conflicts?

T

-- 
Too many people have open minds but closed eyes.
Apr 17
parent reply drug <drug2004 bk.ru> writes:
On 17.04.2019 18:53, H. S. Teoh wrote:
 
 What kind of conflicts?
 
To be more correct I mean version constraints.
Apr 17
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Wed, Apr 17, 2019 at 07:03:33PM +0300, drug via Digitalmars-d wrote:
 On 17.04.2019 18:53, H. S. Teoh wrote:
 
 What kind of conflicts?
 
To be more correct I mean version constraints.
Ah, I see what you mean. So package xyz may have, say, 10 different versions, and each version could potentially have different dependencies (since dependencies can change over time). So after narrowing down to the subset of xyz versions that satisfy the given version criteria, you still have to decide which of the remaining versions to use, since they may depend on other packages that are subject to other version constraints. So you'll need some graph algorithms to resolve these constraints.

Interesting, I've never thought about this before. I'll have to think about this some more. Thanks!

T

-- 
Not all rumours are as misleading as this one.
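A brute-force sketch of the kind of constraint resolution being described (a hypothetical registry, minimum-version constraints only, and naive backtracking; real package managers use considerably smarter solvers):

```python
# Hypothetical registry: package -> version -> that version's own constraints.
# A constraint is a (package, minimum_version) pair, for simplicity.
registry = {
    "xyz": {1: [("lib", 1)], 2: [("lib", 2)]},
    "lib": {1: [], 2: []},
}

def resolve(goals, chosen=None):
    """Pick one version per package satisfying all minimum-version constraints,
    backtracking when a choice conflicts with an earlier one.  Returns the
    chosen {package: version} mapping, or None if unsatisfiable."""
    chosen = dict(chosen or {})
    if not goals:
        return chosen
    (pkg, minver), rest = goals[0], goals[1:]
    if pkg in chosen:
        # Already decided: accept only if the existing choice satisfies us.
        return resolve(rest, chosen) if chosen[pkg] >= minver else None
    for ver in sorted(registry[pkg], reverse=True):  # prefer newest first
        if ver >= minver:
            trial = dict(chosen)
            trial[pkg] = ver
            # The chosen version's own constraints become new goals.
            result = resolve(registry[pkg][ver] + rest, trial)
            if result is not None:
                return result
    return None  # every candidate version led to a conflict
```

This is exponential in the worst case, which foreshadows the NP-completeness point raised later in the thread.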
Apr 17
parent reply Paul Backus <snarwin gmail.com> writes:
On 4/17/19 12:18 PM, H. S. Teoh wrote:
 On Wed, Apr 17, 2019 at 07:03:33PM +0300, drug via Digitalmars-d wrote:
 To be more correct I mean version constraints.
Ah, I see what you mean. So package xyz may have, say, 10 different versions, and each version could potentially have different dependencies (since dependencies can change over time). So after narrowing down to the subset of xyz versions that satisfy the given version criteria, you still have to decide which of the remaining versions to use, since they may depend on other packages that are subject to other version constraints. So you'll need some graph algorithms to resolve these constraints. Interesting, I've never thought about this before. I'll have to think about this some more. Thanks! T
Dependency resolution is actually NP-complete in the general case: https://research.swtch.com/version-sat Of course, in practice, there are solutions that work well enough to be usable; several are discussed in the linked article.
Apr 17
next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Wed, Apr 17, 2019 at 07:29:39PM -0400, Paul Backus via Digitalmars-d wrote:
 On 4/17/19 12:18 PM, H. S. Teoh wrote:
 On Wed, Apr 17, 2019 at 07:03:33PM +0300, drug via Digitalmars-d wrote:
 To be more correct I mean version constraints.
Ah, I see what you mean. So package xyz may have, say, 10 different versions, and each version could potentially have different dependencies (since dependencies can change over time). So after narrowing down to the subset of xyz versions that satisfy the given version criteria, you still have to decide which of the remaining versions to use, since they may depend on other packages that are subject to other version constraints. So you'll need some graph algorithms to resolve these constraints. Interesting, I've never thought about this before. I'll have to think about this some more. Thanks!
[...]
 Dependency resolution is actually NP-complete in the general case:
 
 https://research.swtch.com/version-sat
 
 Of course, in practice, there are solutions that work well enough to
 be usable; several are discussed in the linked article.
Ouch. The term "dependency hell" never seemed more apt, after reading the article. :-/

Interestingly, NP completeness can be avoided by only allowing packages to specify minimum versions, rather than a specific version (or arbitrarily complex version conditions). Allowing installation of multiple versions of the same package also gets away from NP completeness, though in practice it's probably a lot harder to pull off. An interesting hybrid approach is to combine both using semver, which sorta makes sense in some way. Though it's unclear how well this works in practice.

Time to bust out that SAT solver... :-/

T

-- 
Tell me and I forget. Teach me and I remember. Involve me and I understand. -- Benjamin Franklin
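The minimum-versions-only idea can be sketched in a few lines (a toy registry with invented package names; the linked article describes the real algorithm, which this only approximates):

```python
# Toy registry: package -> version -> its minimum-version requirements.
registry = {
    "a": {1: [("c", 1)], 2: [("c", 2)]},
    "b": {1: [("c", 1)], 2: [("c", 3)]},
    "c": {1: [], 2: [], 3: []},
}

def mvs(roots):
    """Minimum version selection: since every requirement is a floor,
    the answer for each package is just the largest minimum anyone asked
    for -- no backtracking, no SAT solving."""
    picked = {}
    work = list(roots)
    while work:
        pkg, minver = work.pop()
        if picked.get(pkg, 0) >= minver:
            continue  # an equal-or-newer minimum is already recorded
        picked[pkg] = minver
        work.extend(registry[pkg][minver])
    return picked
```

Because requirements can only push versions up, the result is deterministic and computable in polynomial time, which is the trade the article describes.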
Apr 17
prev sibling parent reply Russel Winder <russel winder.org.uk> writes:
On Wed, 2019-04-17 at 22:27 -0700, H. S. Teoh via Digitalmars-d wrote:
[…]
 
 Ouch.  The term "dependency hell" never seemed more apt, after reading
 the article. :-/
 
[…]

The Gradle people have been through all this over the years, and now have an excellent dependency management system, including allowing quite fine-grained manual specifications.

Gradle now has a C++ build as well as the whole gamut of JVM-based languages. An alternative is to add D to this.

-- 
Russel.
===========================================
Dr Russel Winder      t: +44 20 7585 2200
41 Buckmaster Road    m: +44 7770 465 077
London SW11 1EN, UK   w: www.russel.org.uk
Apr 18
parent reply "Nick Sabalausky (Abscissa)" <SeeWebsiteToContactMe semitwist.com> writes:
On 4/18/19 5:40 AM, Russel Winder wrote:
 
 The Gradle people have been through all this over the years, and now
 have an excellent dependency management system including allowing quite
 fine grain manual specifications.
 
 Gradle now has a C++ build as well as the whole gamut of JVM-based
 languages. An alternative is to add D to this.
My old JVM days predate Gradle, but at a glance it looks like Gradle is mainly a buildsystem. As the key problem we're facing is needing to separate package management from buildsystem, utilizing Gradle seems like a tough sell (tough to sell to me, included), even if only utilized for its package management.

Although I'm personally not entirely against the idea of utilizing an existing cross-platform language-agnostic package manager behind-the-scenes. For example: What do you think of 0install? <https://github.com/Abscissa/DPak/issues/1> I haven't personally used it, but I've had my eye on it for quite a number of years. Seems similar to Nix, but without the difficult Nix expression language, and without the difficulty of installing older package versions.
Apr 18
parent reply Russel Winder <russel winder.org.uk> writes:
On Thu, 2019-04-18 at 12:01 -0400, Nick Sabalausky (Abscissa) via Digitalmars-d wrote:
[…]
 
 My old JVM days predate Gradle, but at a glance it looks like Gradle is
 mainly a buildsystem. As the key problem we're facing is needing to
 separate package management from buildsystem, utilizing Gradle seems
 like a tough sell (tough to sell to me, included), even if only utilized
 for its package management.
[…]

Gradle has always been an integrated dependency management and build system ever since its genesis in the Conservatory of the Barbican Centre in 2007.

Looking at Gradle, Maven, Cargo, Go, Conan, the choice is for integrated dependency management and build. Dub it would seem has taken the right overall approach, just not gone an implementation route that serves D the way Cargo serves Rust.

-- 
Russel.
===========================================
Dr Russel Winder      t: +44 20 7585 2200
41 Buckmaster Road    m: +44 7770 465 077
London SW11 1EN, UK   w: www.russel.org.uk
Apr 18
next sibling parent Guillaume Piolat <first.last gmail.com> writes:
On Friday, 19 April 2019 at 06:19:42 UTC, Russel Winder wrote:
 Gradle has always been an integrated dependency management and 
 build system ever since its genesis in the Conservatory of 
 Barbican Centre in 2007.

 Looking at Gradle, Maven, Cargo, Go, Conan, the choice is for 
 integrated dependency management and build. Dub it would seem 
 has taken the right overall approach, just not gone an 
 implementation route that serves D the way Cargo serves Rust.
+1 (for everything that Russel said in this thread!)
Apr 19
prev sibling parent reply "Nick Sabalausky (Abscissa)" <SeeWebsiteToContactMe semitwist.com> writes:
On 4/19/19 2:19 AM, Russel Winder wrote:
 
 Looking at Gradle, Maven, Cargo, Go, Conan, the choice is for integrated
 dependency management and build. Dub it would seem has taken the right overall
 approach,
Well, it certainly seems to have taken a *popular* approach. I wouldn't necessarily take that as implying the "right" approach (for us).
 just not gone an implementation route that serves D the way Cargo
 serves Rust.
Maybe. But there are two things I think you're overlooking:

1. Differences in manpower.
2. Differences in situation/requirements.

1: Getting something like that right, without being overly limited, amounts to quite a significant project in terms of manpower and, though I could be wrong, my perspective is that those communities have had more manpower to put into such a project than we have. At the very least, I think DUB has proven it's a poor approach for our current level of manpower.

2: D currently has plenty of buildsystem options to choose from; there's likely to be something for everyone. BUT, what D completely lacks right now, and desperately needs IMO, is a good package manager that's non-restrictive enough to suit everyone's needs. Making the D community wait for that until we *finally* manage to work out our own one-buildsystem-to-rule-them-all (which we're currently nowhere near) is a terrible strategic approach for our current position. Later on, if a D buildsystem that works great for everyone's needs actually manages to emerge... THEN we can integrate our package manager with it.

Summary: There is FAR more to good design and management than playing follow-the-leader.
Apr 19
next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Apr 19, 2019 at 03:54:45PM -0400, Nick Sabalausky (Abscissa) via
Digitalmars-d wrote:
 On 4/19/19 2:19 AM, Russel Winder wrote:
 
 Looking at Gradle, Maven, Cargo, Go, Conan, the choice is for
 integrated dependency management and build. Dub it would seem has
 taken the right overall approach,
Well, it certainly seems to have taken a *popular* approach. I wouldn't necessarily take that as implying the "right" approach (for us).
[...] Yes. I for one dumped Gradle shortly after starting my Android project, because it just didn't let me do what I need to do, or at least not easily. Gradle expects your typical Java codebase with standard source tree structure. I needed D codegen and cross-compilation as integral parts of my build. The two ... let's just say, don't work very well together. It's the "my way or the highway" philosophy all over again.

Yes it hides away a lot of complexity, and does a lot of nice things automatically -- when what you need to do happens to match the narrow use case Gradle was designed to do. But when you need to do something *other* than the One Accepted Way, you're in for a long uphill battle -- assuming it's even possible in the first place. To that, I say, No Thank You, I have other tools that fit better with how I work.

(Not to mention, I have a hard time justifying to myself installing a multi-GB program just to be able to compile a tiny bit of code. And the bare act of invoking Gradle soaks up GBs of RAM and takes forever and 64 days just to decide what it should be doing. Perhaps I'm just a cranky old antiquated ape who needs to just get with the times, but I seriously cannot swallow that this is now the Accepted Way of Doing Things. And with D compile times supposedly being one of its top selling points, I seriously cannot see how such an approach will work well in the long run. (Though OTOH perhaps it will finally get rid of that cringe-inspiring fast-fast-fast motto.))

In any case, with all the work being poured into betterC and C++ interop, any proposed universal D build / package system MUST take into account interacting with C/C++ codebases, which generally do *not* use this integrated package/build approach. At the very least, a D build system that has pretensions of being universal must allow easy integration of C/C++ codebases, and that means accepting the possibility of other build systems, and at least making an effort to work with them nicely.
This is why I proposed a standard "API" for the package manager to interact with arbitrary codebases with arbitrary build systems. Rather than imposing arbitrary restrictions on how packages ought to be built, it should acknowledge that other build systems exist, and provide a simple, consistent way of interacting with them that will *still* work well with those who prefer the integrated approach.

T

-- 
If creativity is stifled by rigid discipline, then it is not true creativity.
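One possible shape for such a package-manager/build-system "API" might look like the following. This is purely illustrative -- no such interface exists in dub, and the class and method names are invented for the sketch:

```python
from abc import ABC, abstractmethod

class BuildBackend(ABC):
    """Hypothetical minimal contract a package manager could ask any
    build system (make, SCons, an integrated builder, ...) to honor."""

    @abstractmethod
    def dependencies(self) -> list[str]:
        """Report this package's dependencies so the manager can fetch them."""

    @abstractmethod
    def build(self, dep_paths: dict[str, str]) -> str:
        """Build using the resolved dependency locations; return the
        path of the produced artifact."""

class MakeBackend(BuildBackend):
    """Hypothetical adapter wrapping an existing Makefile-based project."""

    def dependencies(self):
        return ["libfoo"]  # e.g. parsed from a manifest file

    def build(self, dep_paths):
        # A real adapter would shell out, e.g.:
        # subprocess.run(["make", f"LIBFOO={dep_paths['libfoo']}"], check=True)
        return "build/app"
```

Under this kind of scheme the package manager only resolves and fetches; how each package actually builds stays behind the adapter, which is the separation being argued for in the post above.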
Apr 19
parent reply "Nick Sabalausky (Abscissa)" <SeeWebsiteToContactMe semitwist.com> writes:
On 4/19/19 5:58 PM, H. S. Teoh wrote:
 On Fri, Apr 19, 2019 at 03:54:45PM -0400, Nick Sabalausky (Abscissa) via
Digitalmars-d wrote:
 On 4/19/19 2:19 AM, Russel Winder wrote:
 Looking at Gradle, Maven, Cargo, Go, Conan, the choice is for
 integrated dependency management and build. Dub it would seem has
 taken the right overall approach,
Well, it certainly seems to have taken a *popular* approach. I wouldn't necessarily take that as implying the "right" approach (for us).
[...] Yes. I for one dumped Gradle shortly after starting my Android project, because it just didn't let me do what I need to do, or at least not easily. Gradle expects your typical Java codebase with standard source tree structure. I needed D codegen and cross-compilation as integral parts of my build. The two ... let's just say, don't work very well together. It's the "my way or the highway" philosophy all over again.
No big surprise why a "my way or the highway philosophy" is more successful with a JVM audience than a D audience ;) Like I alluded to: different language -> different audience -> different needs and requirements -> different "right" answers.
Apr 19
parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, Apr 20, 2019 at 01:14:12AM -0400, Nick Sabalausky (Abscissa) via
Digitalmars-d wrote:
 On 4/19/19 5:58 PM, H. S. Teoh wrote:
[...]
 Yes. I for one dumped Gradle shortly after starting my Android
 project, because it just didn't let me do what I need to do, or at
 least not easily.  Gradle expects your typical Java codebase with
 standard source tree structure.  I needed D codegen and
 cross-compilation as integral parts of my build.  The two ... let's
 just say, don't work very well together.  It's the "my way or the
 highway" philosophy all over again.
No big surprise why a "my way or the highway philosophy" is more successful with a JVM audience than a D audience ;) Like I alluded to: different language -> different audience -> different needs and requirements -> different "right" answers.
What I have trouble comprehending is, the technology needed to support *both* use cases already exists. So why not use it?? Why arbitrarily exclude certain use cases in favor of one specific one, when there is no technological reason not to support all use cases?

We already have the tech to fly to the moon and back, yet we arbitrarily impose the restriction that the aircraft must remain in contact with the runway at all times, because some grandma on the plane is scared of heights (aka codebases that don't conform to the One True Way To Organize And Build Source Code). It doesn't make any sense to me.

T

-- 
It's bad luck to be superstitious. -- YHL
Apr 19
prev sibling next sibling parent Russel Winder <russel winder.org.uk> writes:
On Fri, 2019-04-19 at 14:58 -0700, H. S. Teoh via Digitalmars-d wrote:
 
[…]
 Yes. I for one dumped Gradle shortly after starting my Android project,
 because it just didn't let me do what I need to do, or at least not
 easily.  Gradle expects your typical Java codebase with standard source
 tree structure.  I needed D codegen and cross-compilation as integral
 parts of my build.  The two ... let's just say, don't work very well
 together.  It's the "my way or the highway" philosophy all over again.
 Yes it hides away a lot of complexity, and does a lot of nice things
 automatically -- when what you need to do happens to match the narrow
 use case Gradle was designed to do.  But when you need to do something
 *other* than the One Accepted Way, you're in for a long uphill battle --
 assuming it's even possible in the first place.  To that, I say, No
 Thank You, I have other tools that fit better with how I work.
 
Gradle is definitely not rigid as implied in the above: it can work with any source structure. True there is a default, and "convention over configuration" is the main philosophy, but this can be easily overridden by those who need to.

There are hooks for doing pre-compilation code generation, though I suspect whilst there is support for C++, there is no ready-made support for D.

That you chose to ditch Gradle and go another way is entirely fine, but to denigrate Gradle as above based on what appears to be a single episode quickly abandoned, is a bit unfair to Gradle.
 
[…] The elided comments on Gradle requiring JVM and thus lots of memory and relatively slow startup are very fair. Much better grounds for your ditching of Gradle than not finding the Gradle way of doing things you needed to do.

I feel Gradle is probably not a good choice for D builds now, but it could be. This is because no-one is using it for that purpose and so the easy-to-use, D-oriented tools are not available; everything has to be done with the base Gradle tools.

-- 
Russel.
===========================================
Dr Russel Winder      t: +44 20 7585 2200
41 Buckmaster Road    m: +44 7770 465 077
London SW11 1EN, UK   w: www.russel.org.uk
Apr 26
prev sibling next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Apr 26, 2019 at 02:37:31PM +0100, Russel Winder via Digitalmars-d wrote:
 On Fri, 2019-04-19 at 14:58 -0700, H. S. Teoh via Digitalmars-d wrote:
 
[…]
 Yes. I for one dumped Gradle shortly after starting my Android
 project, because it just didn't let me do what I need to do, or at
 least not easily.  Gradle expects your typical Java codebase with
 standard source tree structure.  I needed D codegen and
 cross-compilation as integral parts of my build.  The two ... let's
 just say, don't work very well together.  It's the "my way or the
 highway" philosophy all over again.  Yes it hides away a lot of
 complexity, and does a lot of nice things automatically -- when what
 you need to do happens to match the narrow use case Gradle was
 designed to do.  But when you need to do something *other* than the
 One Accepted Way, you're in for a long uphill battle -- assuming
 it's even possible in the first place.  To that, I say, No Thank
 You, I have other tools that fit better with how I work.
 
Gradle is definitely not rigid as implied in the above: it can work with any source structure. True there is a default, and "convention over configuration" is the main philosophy, but this can be easily overridden by those who need to. There are hooks for doing pre-compilation code generation, though I suspect whilst there is support for C++, there is no ready-made support for D.
Here's a question, since you obviously know Gradle far better than my admitted ignorance: does Gradle support the following?

- Compile a subset of source files into an executable, and run said executable on some given set of input files in order to produce a set of auto-generated source files. Then compile the remaining source files along with the auto-generated ones to produce the final product.

- Compile a subset of source files into an executable, then run said executable on a set of data files to transform them in some way, then invoke a second program on the transformed data files to produce a set of images that are then post-processed by image-processing tools to produce the final outputs.

- Given a set of build products (executables, data files, etc.), pack them into an archive, compress, sign, etc., to produce a deliverable.

- Have all of the above work correctly and minimally whenever one or more input files (either data files or source files) changes. Minimally meaning if a particular target doesn't depend on the changed input files, it doesn't get rebuilt, and if an intermediate build product is identical to the previous build, subsequent steps are elided.

If Gradle can support all of the above, then it may be worthy of consideration. Of course, the subsequent question would then be, how easy is it to accomplish the above, i.e., is it just a matter of adding a few lines to the build description, or does it involve major build system hackery.
 That you chose to ditch Gradle and go another way is entirely fine,
 but to denigrate Gradle as above based on what appears to be a single
 episode quickly abandoned, is a bit unfair to Gradle.
I did not intend to denigrate Gradle; I was merely relating my own experience with it. Clearly, it works very well for many people, otherwise it wouldn't even be on the radar in the first place. But it took far more effort than I was willing to put in to get it to do what I need to do, esp. when I already have an SCons system that I'm already familiar with, and can get up and running with minimal effort. Given such a choice, it should be obvious why I decided against using Gradle. [...]
 The elided comments on Gradle requiring JVM and thus lots of memory
 and relatively slow startup is very fair. Much better grounds for your
 ditching of Gradle than not finding the Gradle way of doing things you
 needed to do.
See, this is the problem I have with many supposedly "modern" tools. They are giant CPU and memory hogs, require tons of disk space for who knows what, take forever to start up, and chug along taking their sweet time just to get the simplest of tasks done, where supposedly "older" or "antiquated" tools fire up in an instant, get right to work, and get the same task done in a fraction of the time. This in itself may have been excusable, but then these "modern" tools also come with a whole bunch of red tape, arbitrary limitations, and the philosophy of "follow our way or walk there on foot yourself". IOW they do what "antiquated" tools do poorly, and can't even do advanced things said "antiquated" tools can do without any fuss -- and this not because of *technical* limitations, but because it was arbitrarily decided at some point that use cases X and Y will not be supported, just 'cuz. So somebody please enlighten this here dinosaur of *why* I should choose these "modern" tools over my existing, and IMO better, ones? This is why I'm a fan of empowering the user. Instead of being obsessed over how to package your software in a beautiful jewel-encrusted box with a gold ribbon on top, (which IMO is an utter waste of time and only attracts those with an inflated sense of entitlement), give the user the tools to do what the software can do. Instead of delivering a magic black box that does everything but with the hood welded shut, give them the components within the black box with which they can build something far beyond what you may have initially conceived. I believe *that* is the way to progress, not this "hand me my software on a silver platter" philosophy so pervasive nowadays. (The jewel-encrusted box is still nice, BTW -- but only as long as it's not at the expense of welding the hood shut.)
 I feel Gradle is probably not a good choice for D builds now, but it
 could be.  This is because no-one is using it for that purpose and so the
 easy to use, D oriented tools are not available, everything has to be
 done with the base Gradle tools. 
[...] If Gradle is dependent on Java, then using it for D builds would be kind of ... anticlimactic. :-D But probably the primary issue would be the slow startup times and heavy storage requirements that go against the grain of our present (cringe-worthy) fast-fast-fast slogan.

I suppose it's no coincidence that the Gradle logo is ... just as slow and mellow as the tool itself. ;-) (I love RL elephants, BTW, so this isn't meant in a denigrating way. But I'd have a very hard time justifying the order-of-magnitude increase in my edit-compile-run cycle turnaround times. Life is far too short to have to wait for such things.)

T

-- 
Talk is cheap. Whining is actually free. -- Lars Wirzenius
Apr 26
next sibling parent reply Gilter <Gilter gmall.com> writes:
On Friday, 26 April 2019 at 17:30:50 UTC, H. S. Teoh wrote:
 On Fri, Apr 26, 2019 at 02:37:31PM +0100, Russel Winder via 
 Digitalmars-d wrote:
 On Fri, 2019-04-19 at 14:58 -0700, H. S. Teoh via 
 Digitalmars-d wrote:
 
[…]
 Yes. I for one dumped Gradle shortly after starting my 
 Android project, because it just didn't let me do what I 
 need to do, or at least not easily.  Gradle expects your 
 typical Java codebase with standard source tree structure.  
 I needed D codegen and cross-compilation as integral parts 
 of my build.  The two ... let's just say, don't work very 
 well together.  It's the "my way or the highway" philosophy 
 all over again.  Yes it hides away a lot of complexity, and 
 does a lot of nice things automatically -- when what you 
 need to do happens to match the narrow use case Gradle was 
 designed to do.  But when you need to do something *other* 
 than the One Accepted Way, you're in for a long uphill 
 battle -- assuming it's even possible in the first place.  
 To that, I say, No Thank You, I have other tools that fit 
 better with how I work.
 
Gradle is definitely not rigid as implied in the above: it can work with any source structure. True there is a default, and "convention over configuration" is the main philosophy, but this can be easily overridden by those who need to. There are hooks for doing pre-compilation code generation, though I suspect whilst there is support for C++, there is no ready-made support for D.
Here's a question, since you obviously know Gradle far better than my admitted ignorance: does Gradle support the following? - Compile a subset of source files into an executable, and run said executable on some given set of input files in order to produce a set of auto-generated source files. Then compile the remaining source files along with the auto-generated ones to produce the final product. - Compile a subset of source files into an executable, then run said executable on a set of data files to transform them in some way, then invoke a second program on the transformed data files to produce a set of images that are then post-processed by image-processing tools to produce the final outputs.
Not sure what you find difficult about this. You create two tasks: one task generates the executable, the second task runs it, and you add the first task as a dependency of the second. It really just boils down to "do this task before this other task", and a task should be able to be just a command you can run -- which is basically what every build system supports.

https://docs.gradle.org/current/userguide/more_about_tasks.html#sec:adding_dependencies_to_tasks
 - Given a set of build products (executables, data files, 
 etc.), pack
   them into an archive, compress, sign, etc., to produce a 
 deliverable.
This really is just a task where you run a command. Signing is different for basically every system, and depending on the system it can be different based on what program you are using to sign. If you expect the build system to know how to sign an executable by just doing "sign: true" then you aren't going to find a build system like that unless it only works on one system. For zip files you can easily archive them: https://docs.gradle.org/current/userguide/working_with_files.html#sec:archives
 - Have all of the above work correctly and minimally whenever 
 one or
   more input files (either data files or source files) changes.
   Minimally meaning if a particular target doesn't depend on 
 the changed
   input files, it doesn't get rebuilt, and if an intermediate 
 build
   product is identical to the previous build, subsequent steps 
 are
   elided.
This has been around for quite a while, since 2.5 by the looks of it. You have to specify the input and output files so it can detect if they were changed. https://docs.gradle.org/2.5/userguide/more_about_tasks.html#sec:up_to_date_checks
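The input/output tracking being described can be approximated in a few lines, which may make the mechanism concrete. This is a toy sketch with hypothetical file names: it fingerprints a task's declared inputs and skips the task when nothing changed, whereas real tools like Gradle also track outputs, task configuration, and more:

```python
import hashlib
import json
import os

def fingerprint(paths):
    """Hash the names and contents of a task's declared input files."""
    h = hashlib.sha256()
    for p in sorted(paths):
        h.update(p.encode())
        with open(p, "rb") as f:
            h.update(f.read())
    return h.hexdigest()

def run_if_stale(inputs, state_file, action):
    """Run `action` only when some input changed since the last recorded run.
    Returns True if the task ran, False if it was up to date."""
    fp = fingerprint(inputs)
    if os.path.exists(state_file):
        with open(state_file) as f:
            if json.load(f).get("fp") == fp:
                return False  # inputs unchanged -> skip the task
    action()
    with open(state_file, "w") as f:
        json.dump({"fp": fp}, f)
    return True
```

The second invocation with unchanged inputs is a no-op; touching any input makes the task run again, which is the "up to date check" behavior the Gradle docs describe.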
 If Gradle can support all of the above, then it may be worthy 
 of consideration.

 Of course, the subsequent question would then be, how easy is 
 it to accomplish the above, i.e., is it just a matter of adding 
 a few lines to the build description, or does it involve major 
 build system hackery.
What build systems do you know that support the above? Like I said, some things are kind of unreasonable, like expecting a build system to be able to sign your executables with just a simple flag. The signing process I have to do doesn't actually allow us to automate it. For some reason it won't work; you have to physically be at the server for it to sign the executable. Seems like it might be a security feature -- can't even remote into the server to sign it manually like that either.
 That you chose to ditch Gradle and go another way is entirely 
 fine, but to denigrate Gradle as above based on what appears 
 to be a single episode quickly abandoned, is a bit unfair to 
 Gradle.
I did not intend to denigrate Gradle; I was merely relating my own experience with it. Clearly, it works very well for many people, otherwise it wouldn't even be on the radar in the first place. But it took far too much effort than I was willing to put in to get it to do what I need to do, esp. when I already have an SCons system that I'm already familiar with, and can get up and running with minimal effort. Given such a choice, it should be obvious why I decided against using Gradle.
Imagine that, learning something new takes a lot more effort than using something you already know. Who would have thought?
 [...]
 The elided comments on Gradle requiring JVM and thus lots of 
 memory and relatively slow startup is very fair. Much better 
 grounds for your ditching of Gradle than not finding the 
 Gradle way of doing things you needed to do.
See, this is the problem I have with many supposedly "modern" tools. They are giant CPU and memory hogs, require tons of disk space for who knows what, take forever to start up, and chug along taking their sweet time just to get the simplest of tasks done, where supposedly "older" or "antiquated" tools fire up in an instant, get right to work, and get the same task done in a fraction of the time. This in itself may have been excusable, but then these "modern" tools also come with a whole bunch of red tape, arbitrary limitations, and the philosophy of "follow our way or walk there on foot yourself". IOW they do what "antiquated" tools do poorly, and can't even do advanced things said "antiquated" tools can do without any fuss -- and this not because of *technical* limitations, but because it was arbitrarily decided at some point that use cases X and Y will not be supported, just 'cuz. So somebody please enlighten this here dinosaur of *why* I should choose these "modern" tools over my existing, and IMO better, ones?
What do you consider giant CPU and memory hogs that require a ton of disk space? Cause from my experience, the people here who complain about VS taking up so much space end up doing development work on some shitty tablet with 32 GB of HDD space.

I just use a python script; I don't feel like bothering with things like SCons. I execute exactly what I need, and it is easy to see what it is doing and what is happening. It doesn't take long to compile either, though I usually only need to re-compile one component as well.

Use whatever you want, just don't go condemning something cause you never bothered to learn how to use it.
Apr 27
parent "Nick Sabalausky (Abscissa)" <SeeWebsiteToContactMe semitwist.com> writes:
On 4/27/19 10:25 AM, Gilter wrote:
 Use whatever you want, just don't go condemning something cause you 
 never bothered to learn how to use it.
 
As he already said, he wasn't condemning it, just relaying his experience with it.
Apr 27
prev sibling next sibling parent Dibyendu Majumdar <d.majumdar gmail.com> writes:
Hi,

I think there is a lot to be said for using Gradle and the Maven 
repository as the package management tool. Why reinvent something 
that already works?

Gradle is a task dependency management tool at its core - it 
doesn't care what these tasks are, and you can write plugins to 
create any tasks you want. I have seen Gradle used successfully 
for C++ projects.

Regards
Dibyendu
Apr 27
prev sibling parent reply "Nick Sabalausky (Abscissa)" <SeeWebsiteToContactMe semitwist.com> writes:
On 4/26/19 1:30 PM, H. S. Teoh wrote:
 See, this is the problem I have with many supposedly "modern" tools.
 They are giant CPU and memory hogs, require tons of disk space for who
 knows what, take forever to start up, and chug along taking their sweet
 time just to get the simplest of tasks done, where supposedly "older" or
 "antiquated" tools fire up in an instant, get right to work, and get the
 same task done in a fraction of the time.  This in itself may have been
 excusable, but then these "modern" tools also come with a whole bunch of
 red tape, arbitrary limitations, and the philosophy of "follow our way
 or walk there on foot yourself".  IOW they do what "antiquated" tools do
 poorly, and can't even do advanced things said "antiquated" tools can do
 without any fuss -- and this not because of *technical* limitations, but
 because it was arbitrarily decided at some point that use cases X and Y
 will not be supported, just 'cuz.
+1
 So somebody please enlighten this here dinosaur of *why* I should choose
 these "modern" tools over my existing, and IMO better, ones?
Because "you just have to believe" and join the newer-is-always-better cult? Because we're told by everyday joe that we always have to keep up with new technology or get left behind, so by golly that must be true, after all, who are we to question those smarter than us, those from whom everyday joe obtained the "keep up or left behind" Truth? I think THAT'S why you have to. ;)
 This is why I'm a fan of empowering the user.  Instead of being obsessed
 over how to package your software in a beautiful jewel-encrusted box
 with a gold ribbon on top, (which IMO is an utter waste of time and only
 attracts those with an inflated sense of entitlement), give the user the
 tools to do what the software can do. Instead of delivering a magic
 black box that does everything but with the hood welded shut, give them
 the components within the black box with which they can build something
 far beyond what you may have initially conceived.  I believe *that* is
 the way to progress, not this "hand me my software on a silver platter"
 philosophy so pervasive nowadays.
 
 (The jewel-encrusted box is still nice, BTW -- but only as long as it's
 not at the expense of welding the hood shut.)
Unix philosophy. Yup. *nods*. "A tool should do one thing, and do it well." The Lego approach to software.

I grew up on MS-DOS and Windows (well, ok, Apple 2...then DOS/Win). And hey, they served their purposes for me. Fast-forward some years later and I was talking to a game programmer I had previously known (gamedev circles tend to be fairly Win/MS-heavy - and there's admittedly been reasons for that). At one point he stopped and said, "Hey, wait a minute, since when did you become a Linux guy?" Well, since I grew up and learned that I like being able to automate any repetitive sequence I need to, to aid my productivity and help manage cognitive load. The unix philosophy is key in enabling that.

It's also why a world of random hackers have been able to build and maintain a system that, in terms of capabilities, even giants like Microsoft and Google can't keep up with (although, certain business realities actually make it much more lucrative for large companies to keep their products artificially limited - so it's not as if they're especially motivated to compete with *nix on capabilities).
Apr 27
parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, Apr 27, 2019 at 04:41:56PM -0400, Nick Sabalausky (Abscissa) via
Digitalmars-d wrote:
 On 4/26/19 1:30 PM, H. S. Teoh wrote:
[...]
 This is why I'm a fan of empowering the user.  Instead of being
 obsessed over how to package your software in a beautiful
 jewel-encrusted box with a gold ribbon on top, (which IMO is an
 utter waste of time and only attracts those with an inflated sense
 of entitlement), give the user the tools to do what the software can
 do. Instead of delivering a magic black box that does everything but
 with the hood welded shut, give them the components within the black
 box with which they can build something far beyond what you may have
 initially conceived.  I believe *that* is the way to progress, not
 this "hand me my software on a silver platter" philosophy so
 pervasive nowadays.
[...]
 Unix philosophy. Yup. *nods*. "A tool should do one thing, and do it
 well." The Lego approach to software.
It's not so much doing one thing and doing it well (even though that is a natural consequence), but (1) giving the user access to the tools that *you* have at your disposal, and (2) making the interface *composable*, so that the user is empowered to put the blocks together in brand new ways that you've never thought of. [...]
 "Hey, wait a minute, since when did you become a Linux guy?" Well,
 since I grew up and learned that I like being able to automate any
 repetitive sequence I need to, to aid my productivity and help manage
 cognitive load. The unix philosophy is key in enabling that.
[...] Yes, the ability to automate *any arbitrary task* is a key functionality that modern designs seem to have overlooked. Automation is the raison d'etre of the first machines. Humans are bad at repetitive tasks, and machines are supposed to take over these tasks so that humans can excel at the other things they are good at.

Yet the UIs of the modern day force you to click through endless nested levels of submenus and provide very little (if any at all) way to automate things. And when such is provided, it's usually arbitrarily limited or encumbered in some way, and usually cannot come up to the full functionality accessible when you do things the manual way. It doesn't make any sense. Machines are supposed to abstract away the repetitive tasks and allow you to program them to do arbitrary things at the click of a button, not to force you to make repetitive gestures and get a wrist aneurysm because the GUI designer didn't think scripting was an important use case. The whole point of a general-purpose computing machine is to be able to apply this automation to ANY ARBITRARY TASK, not merely some arbitrarily limited subset that the designers deigned worthy to be included.

Making your interface composable is what allows your code to be composable with any other code in a painless way. This is why more and more in my own designs I'm leaning towards the guiding principle of programs being thin wrappers around reusable library modules that provide the real functionality. And the programs themselves should as much as possible provide the simplest, most straightforward interface that makes them composable with any other arbitrary program. Composability, because the user is smarter than you and will think of use cases far beyond your wildest imaginations, so they should not be arbitrarily limited to only what you can think of.
Libraries, because no matter how awesome your program interface is, eventually somebody will want to reuse your functionality from their *own* program. Allowing them to do that without needlessly cumbersome workarounds, like spawning a subprocess and contorting their code to your quirky CLI syntax or method of invocation, will make them very happy and loyal users, and will increase the scope of applicability of your code far beyond what you may have conceived.

(And no, remote procedure call is not an excuse to avoid providing a library API to your functionality. RPC has its uses, but the user should be in control of how they wish your code to run. They should not need to spawn a daemon via JSON over HTTP to a remote server just to be able to call a single function. They *can* do this *if they want to*, but this should not be the *only* way of interfacing with your code. The user should be empowered to deploy your code according to *their* needs, not yours. They should be able to choose whether to call your function as a local function call to a DLL / .so on the local filesystem, or to run your library on a remote server accessed via RPC over the network.)

Empower the user to do what *they* want, how they want it, rather than straitjacket them into doing what *you* want, the way you dictate it. This principle applies to all levels of code, from not writing a class that requires its methods to be called in a specific order (otherwise it crashes or gets into an inconsistent state), to encapsulating your primary functionality in a library API that can be used anywhere in any way. Life is too short to be wasted inventing shims and workarounds for functionality that *should* have been made available generally, with no strings attached and no straitjackets imposing arbitrary limitations.

Empowerment, composability, automatability. I'm tired of trying to work with software that doesn't provide all three. It's just not worth my bother anymore.
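The "thin wrapper around a library" principle above can be sketched in a few lines (Python here for brevity; the names and the word-counting task are made up for illustration):

```python
"""Sketch of "programs as thin wrappers around reusable libraries".

All names here are hypothetical, not from any real project: the actual
functionality lives in a plain importable function, and the CLI is just
one small, replaceable, composable front-end over it.
"""
import argparse
import sys


def word_count(text: str) -> int:
    """Library layer: reusable from any caller, makes no I/O assumptions."""
    return len(text.split())


def main(argv=None) -> int:
    """CLI layer: reads stdin, prints a number, plays well in pipelines."""
    parser = argparse.ArgumentParser(description="Count words on stdin.")
    parser.parse_args(argv)
    print(word_count(sys.stdin.read()))
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

Anyone who wants the functionality programmatically imports `word_count` directly; nobody is forced through the subprocess-and-CLI contortions the post complains about.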
T -- MSDOS = MicroSoft's Denial Of Service
Apr 28
prev sibling parent Russel Winder <russel winder.org.uk> writes:
On Fri, 2019-04-26 at 10:30 -0700, H. S. Teoh via Digitalmars-d wrote:
[…]
 Here's a question, since you obviously know Gradle far better than my
 admitted ignorance: does Gradle support the following?
 - Compile a subset of source files into an executable, and run said
   executable on some given set of input files in order to produce a set
   of auto-generated source files. Then compile the remaining source
   files along with the auto-generated ones to produce the final product.
Yes.
 - Compile a subset of source files into an executable, then run said
   executable on a set of data files to transform them in some way, then
   invoke a second program on the transformed data files to produce a set
   of images that are then post-processed by image-processing tools to
   produce the final outputs.
Yes.
 - Given a set of build products (executables, data files, etc.), pack
   them into an archive, compress, sign, etc., to produce a deliverable.
Yes.
 - Have all of the above work correctly and minimally whenever one or
   more input files (either data files or source files) changes.
   Minimally meaning if a particular target doesn't depend on the changed
   input files, it doesn't get rebuilt, and if an intermediate build
   product is identical to the previous build, subsequent steps are
   elided.
Yes.
 If Gradle can support all of the above, then it may be worthy of
 consideration.
 Of course, the subsequent question would then be, how easy is it to
 accomplish the above, i.e., is it just a matter of adding a few lines to
 the build description, or does it involve major build system hackery.
Not all the things are part of Gradle as distributed; you have to write some tasks. However, the infrastructure is there to make writing the tasks straightforward. This is not one or two lines, but not major hackery. [...]
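For what it's worth, the up-to-date check at the heart of such "minimal" rebuilds is conceptually simple. Here is a sketch using plain file mtimes (illustrative only, and not how Gradle actually does it - Gradle snapshots and hashes task inputs and outputs, which is what enables the "identical intermediate output, skip downstream steps" behaviour asked about above):

```python
"""Minimal-rebuild predicate sketched with file modification times.
A hypothetical sketch, not the mechanism of any real build tool."""
import os


def needs_rebuild(target: str, sources: list) -> bool:
    """True if target is missing or any source file is newer than it."""
    if not os.path.exists(target):
        return True
    target_time = os.path.getmtime(target)
    return any(os.path.getmtime(src) > target_time for src in sources)
```

A build tool applies this predicate at every edge of the task graph, so an unchanged input means the whole downstream chain is skipped.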
 See, this is the problem I have with many supposedly "modern" tools.
 They are giant CPU and memory hogs, require tons of disk space for who
 knows what, take forever to start up, and chug along taking their sweet
 time just to get the simplest of tasks done, where supposedly "older" or
 "antiquated" tools fire up in an instant, get right to work, and get the
 same task done in a fraction of the time.  This in itself may have been
 excusable, but then these "modern" tools also come with a whole bunch of
 red tape, arbitrary limitations, and the philosophy of "follow our way
 or walk there on foot yourself".  IOW they do what "antiquated" tools do
 poorly, and can't even do advanced things said "antiquated" tools can do
 without any fuss -- and this not because of *technical* limitations, but
 because it was arbitrarily decided at some point that use cases X and Y
 will not be supported, just 'cuz.
This is a general problem of software, not just build systems such as Gradle. It does seem though that Meson + Ninja are much lighter weight.
 So somebody please enlighten this here dinosaur of *why* I should choose
 these "modern" tools over my existing, and IMO better, ones?
What are the better ones? One cannot debate without knowledge. ;-)
 This is why I'm a fan of empowering the user.  Instead of being obsessed
 over how to package your software in a beautiful jewel-encrusted box
 with a gold ribbon on top, (which IMO is an utter waste of time and only
 attracts those with an inflated sense of entitlement), give the user the
 tools to do what the software can do. Instead of delivering a magic
 black box that does everything but with the hood welded shut, give them
 the components within the black box with which they can build something
 far beyond what you may have initially conceived.  I believe *that* is
 the way to progress, not this "hand me my software on a silver platter"
 philosophy so pervasive nowadays.
 (The jewel-encrusted box is still nice, BTW -- but only as long as it's
 not at the expense of welding the hood shut.)
UI and UX are important, definitely. The Groovy DSL and the Kotlin DSL for writing builds are focussed on convention over configuration, so as to make things as simple as possible for the straightforward cases. But the tools are there to configure and programme the build for complicated cases. In this, Gradle and SCons have similar capabilities, though SCons lacks many things built in to Gradle. There are probably 10x as many build requirements as there are programmers. A single unopenable black box build system simply has no chance. […]
 If Gradle is dependent on Java, then using it for D builds would be kind
 of ... anticlimactic. :-D  But probably the primary issue would be the
 slow startup times and heavy storage requirements that goes against the
 grain of our present (cringe-worthy) fast-fast-fast slogan. I suppose
 it's no coincidence that the Gradle logo is ... just as slow and mellow
 as the tool itself. ;-)  (I love RL elephants, BTW, so this isn't meant
 in a denigrating way. But I'd have a very hard time justifying the
 order-of-magnitude increase in my edit-compile-run cycle turnaround
 times.  Life is far too short to have to wait for such things.)
Gradle is JVM-based, but they have build servers and caching to make builds faster than you might think. Clearly the bulk of build focus is on JVM-related stuff, but C and C++ got added because someone paid for it to be added. -- Russel. Dr Russel Winder t: +44 20 7585 2200 41 Buckmaster Road m: +44 7770 465 077 London SW11 1EN, UK w: www.russel.org.uk
Apr 27
prev sibling parent reply "Nick Sabalausky (Abscissa)" <SeeWebsiteToContactMe semitwist.com> writes:
On 4/16/19 1:39 PM, H. S. Teoh wrote:
 On Mon, Apr 15, 2019 at 07:52:07PM -0400, Nick Sabalausky (Abscissa) >>
 Then, with that information, a package manager provides services such
 as (but not necessarily limited to):

 1. A simple, standardized way for you and your users to obtain/build
 the dependencies.

 2. A simple, standardized way for buildscripts/buildsystems to obtain
 the information needed to include the dependencies in their own build
 (such as -I... include directories, paths to the now-already-built
 lib/exec binaries, etc.)
I'd add: 3. A standard interface for querying a remote repository for packages (matching some names / patterns) and version numbers.
Oh, don't get me wrong, there's a whole BUNCH of things I'd like to see on this particular list, and that definitely includes your suggestion. I left it at "not necessarily limited to..." for good reason ;)
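The query interface suggested as item 3 could be as small as this sketch. The registry here is an in-memory dict standing in for a remote repository; the function name, data shapes, and package versions are all invented for illustration, not any real dub registry API:

```python
"""Sketch of a package-repository query interface (item 3 above).
Everything here is hypothetical: an in-memory dict plays the role of
the remote registry, and glob patterns play the role of name matching."""
import fnmatch

# name -> list of published versions, oldest first (made-up data)
REGISTRY = {
    "vibe-d": ["0.8.6", "0.9.0"],
    "vibe-core": ["1.9.1"],
    "mir-algorithm": ["3.7.17"],
}


def query(pattern: str) -> dict:
    """Return {name: versions} for every package whose name matches
    the given glob pattern."""
    return {name: versions for name, versions in REGISTRY.items()
            if fnmatch.fnmatch(name, pattern)}
```

A real implementation would put an HTTP endpoint in front of the same shape of question and answer, but the contract (pattern in, names and version lists out) is the standardizable part.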
 From this, each project can naturally either just roll its own
 buildscripts, or depend on another package providing a buildsystem.
That's a good idea. Completely decouple package management from building. Let the package manager do what it does best: managing packages, and leave the compilation to another tool more suited for the job.
Exactly.
 I was thinking the build/test/etc. command can itself be defined by a
 dependency.  For example, if your project uses make for builds, there
 could be a `make` dub package that defines command-line templates used
 for generating platform-specific build commands. Your package would
 simply specify:
 
 	"build-system": "make"
 
 and the "make" package would define the exact command(s) for invoking
 make [...etc...]
Interesting idea. I definitely like it, though I would consider it not a "basic fundamental" mechanism, but rather a really nice bonus feature that provides a convenient shortcut around having to specify certain things manually... IF you happen to be using a buildsystem known by the package manager... or (better yet, if possible) a buildsystem that's able to tell the package manager enough about itself to make this happen.
 Some of the *details* can be quite nontrivial...like dependency
 resolution algorithms, or designing the interactions between package
 manager and buildsystem to be simple, yet effective enough to suit all
 parties needs.  But ultimately, it boils down conceptually to be very
 simple.
[...] If done correctly, dependency resolution is just a topological walk on a DAG. Get this right, and everything else falls into place.
Yes, but therein lies the rub - "getting this right". AFAICT, Sönke seems to have a very good grasp on dependency resolution via DAG (or at least, certainly a far better grasp than I do). But even his algorithms in DUB needed to be occasionally patched with common-use-case short-circuiting logic (and I don't even know what else) just to fix various seemingly-infinite-loop issues (not technically infinite, but they appeared to be to the user) that people were still reporting some years into DUB's lifetime. 'Course, if it really is simpler than it seems to me (and *especially* if it *isn't*), then I'd certainly be more than happy to delegate dependency resolution out to somebody else.
Apr 17
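The "topological walk on a DAG" part of the exchange above is indeed the easy half. A sketch of it (Kahn's algorithm; the package names are made up, and real resolvers additionally have to *select* versions, which is where the reported hard cases live):

```python
"""Dependency ordering as a topological walk (Kahn's algorithm).
A sketch of the easy half of resolution; version selection is not
handled here. Package names are hypothetical."""
from collections import deque


def build_order(deps: dict) -> list:
    """deps maps each package to the set of packages it depends on.
    Returns an order in which every package follows its dependencies.
    Raises ValueError if the graph contains a cycle."""
    # how many unresolved dependencies each package still has
    pending = {pkg: len(d) for pkg, d in deps.items()}
    # reverse edges: dependency -> packages waiting on it
    waiting = {pkg: set() for pkg in deps}
    for pkg, d in deps.items():
        for dep in d:
            waiting[dep].add(pkg)
    ready = deque(pkg for pkg, n in pending.items() if n == 0)
    order = []
    while ready:
        pkg = ready.popleft()
        order.append(pkg)
        for dependent in waiting[pkg]:
            pending[dependent] -= 1
            if pending[dependent] == 0:
                ready.append(dependent)
    if len(order) != len(deps):
        raise ValueError("dependency cycle detected")
    return order
```

The seemingly-infinite-loop reports mentioned above come from what this sketch leaves out: choosing one version per package under conflicting constraints, which turns the walk into a search.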
next sibling parent reply Russel Winder <russel winder.org.uk> writes:
On Wed, 2019-04-17 at 03:48 -0400, Nick Sabalausky (Abscissa) via
Digitalmars-d wrote:
[…]
 Oh, don't get me wrong, there's a whole BUNCH of things I'd like to see on this particular list, and that definitely includes your suggestion. I left it at "not necessarily limited to..." for good reason ;)
[…] The problem here is that people are having lots of ideas on this email list, but no-one is choosing to collate them into an actual "call to action". As ever in an ever-extending email thread, the energy to do something within the D community dissipates and nothing ends up happening. Perhaps we should make the hack day at DConf 2019 "get something actually moving on this" day. Or perhaps people will continue to say stuff on very long email threads and end up doing nothing. This is the third such thread on Dub I can remember; the previous two ended up with no outcomes other than lots of opinions in very long email threads. -- Russel.
Apr 17
next sibling parent reply NaN <divide by.zero> writes:
On Wednesday, 17 April 2019 at 08:06:13 UTC, Russel Winder wrote:
 On Wed, 2019-04-17 at 03:48 -0400, Nick Sabalausky (Abscissa) 
 via
 Digitalmars-d wrote:
 
[…]
 
 Oh, don't get me wrong, there's a whole BUNCH of things I'd 
 like to
 see
 on this particular list, and that definitely includes your
 suggestion. I
 left it at "not necessarily limited to..." for good reason ;)
[…] The problem here is that people are having lots of ideas on this email list, but no-one is choosing to collate them into an actual "call to action". As ever in an ever-extending email thread, the energy to do something within the D community dissipates and nothing ends up happening. Perhaps we should make the hack day at DConf 2019 "get something actually moving on this" day. Or perhaps people will continue to say stuff on very long email threads and end up doing nothing. This is the third such thread on Dub I can remember; the previous two ended up with no outcomes other than lots of opinions in very long email threads.
The people who are actually going to do the work should setup a private group somewhere. These sprawling threads on the public newsgroup dilute not just the vision but also the motivation.
Apr 17
parent rikki cattermole <rikki cattermole.co.nz> writes:
On 17/04/2019 8:27 PM, NaN wrote:
 On Wednesday, 17 April 2019 at 08:06:13 UTC, Russel Winder wrote:
 On Wed, 2019-04-17 at 03:48 -0400, Nick Sabalausky (Abscissa) via
 Digitalmars-d wrote:

 […]
 Oh, don't get me wrong, there's a whole BUNCH of things I'd like to
 see
 on this particular list, and that definitely includes your
 suggestion. I
 left it at "not necessarily limited to..." for good reason ;)
[…] The problem here is that people are having lots of ideas on this email list, but no-one is choosing to collate them into an actual "call to action". As ever in an ever-extending email thread, the energy to do something within the D community dissipates and nothing ends up happening. Perhaps we should make the hack day at DConf 2019 "get something actually moving on this" day. Or perhaps people will continue to say stuff on very long email threads and end up doing nothing. This is the third such thread on Dub I can remember; the previous two ended up with no outcomes other than lots of opinions in very long email threads.
The people who are actually going to do the work should setup a private group somewhere. These sprawling threads on the public newsgroup dilute not just the vision but also the motivation.
Over on Discord we have the Graphics work group which is working out quite well. Still very early on work wise so we've kept quite quiet about it. We have done our best to set aside our opinions and do research to find out the requirements before writing code with a majority in agreement before decisions / pulling of code. It would be a good model to replicate if a few people came together and decided to start work on a new (although compatible) build system.
Apr 17
prev sibling next sibling parent reply bachmeier <no spam.net> writes:
On Wednesday, 17 April 2019 at 08:06:13 UTC, Russel Winder wrote:

 Or perhaps people will continue to say stuff on very long email 
 threads
 and end up doing nothing. This is the third such thread on Dub 
 I can
 remember, the previous two ended up with no  outcomes other 
 than lots
 of opinions in very long email threads.
This is a leadership problem. Notably absent from these discussions are Andrei and Walter. This is a critical piece of infrastructure that's not working, it's distributed with DMD, and they're completely silent because it's not something they find interesting.

If you're the leader of a large open source project, now is when you step in and ask who is willing to put in the time to take this on, so you can give one or more of them the authority to make decisions. There should be a formal proposal that can be commented on as we do with a DIP. Work begins, with a formal definition of when it's ready to go live guiding those doing the work.

Maybe we'll see something like that happen. Based on what I've seen over the last six years, it seems unlikely. Instead there will be long email threads by individuals that have no decision-making authority, and everyone will conclude that they'd be crazy to dump hundreds of hours into such a project. In a couple months we'll have a fourth long email thread on ways to fix Dub.
Apr 17
parent reply Andre Pany <andre s-e-a-p.de> writes:
On Wednesday, 17 April 2019 at 13:41:43 UTC, bachmeier wrote:
 On Wednesday, 17 April 2019 at 08:06:13 UTC, Russel Winder 
 wrote:

 Or perhaps people will continue to say stuff on very long 
 email threads
 and end up doing nothing. This is the third such thread on Dub 
 I can
 remember, the previous two ended up with no  outcomes other 
 than lots
 of opinions in very long email threads.
This is a leadership problem. Notably absent from these discussions are Andrei and Walter. This is a critical piece of infrastructure that's not working, it's distributed with DMD, and they're completely silent because it's not something they find interesting. If you're the leader of a large open source project, now is when you step in and ask who is willing to put in the time to take this on, so you can give one or more of them the authority to make decisions. There should be a formal proposal that can be commented on as we do with a DIP. Work begins, with a formal definition of when it's ready to go live guiding those doing the work. Maybe we'll see something like that happen. Based on what I've seen over the last six years, it seems unlikely. Instead there will be long email threads by individuals that have no decisionmaking authority, and everyone will conclude that they'd be crazy to dump hundreds of hours into such a project. In a couple months we'll have a fourth long email thread on ways to fix Dub.
Please be more precise. Dub does a perfect job if you have standalone D applications. There are issues, as I understand, if you have complex integration scenarios with C/C++. Dub is built by the community for the community. I also had to create some pull requests to get all my scenarios working, and now everything works for my scenarios like a charm. Maybe for complex integration scenarios another build tool has to be used. Or maybe Dub has to be improved to support complex scenarios. I do not see any management issue here. If you (to all) have issues, please consider creating pull requests. Kind regards Andre
Apr 17
next sibling parent drug <drug2004 bk.ru> writes:
On 17.04.2019 17:09, Andre Pany wrote:
 
 Please be more precise. Dub does a perfect job if you have standalone D 
 applications. There are issues, as I understand, if you have complex 
 integration scenarios with C/C++.
 
 Dub is built by the community for the community. I also had to create 
 some pull requests to get all my scenarios working and now everything 
 works for my scenarios like a charm.
 
 Maybe for complex integration scenarios another build tool has to be 
 used. Or maybe Dub has to be improved to support complex scenarios.
 
 I do not see any management issue here.
 If you (to all) have issues, please consider creating pull requests.
 
 Kind regards
 Andre
 
 
I agree with you totally. I don't understand why W&A should appear in this thread. Dub wasn't intended to be the standard package manager, and it's absolutely normal that the community now needs to improve it. No one is at fault here.
Apr 17
prev sibling parent bachmeier <no spam.net> writes:
On Wednesday, 17 April 2019 at 14:09:06 UTC, Andre Pany wrote:

 Please be more precise. Dub does a perfect job if you have 
 standalone D applications. There are issues, as I understand, 
 if you have complex integration scenarios with C/C++.

 Dub is built by the community for the community. I also had to 
 create some pull requests to get all my scenarios working and 
 now everything works for my scenarios like a charm.

 Maybe for complex integration scenarios another build tool has 
 to be used. Or maybe Dub has to be improved to support complex 
 scenarios.

 I do not see any management issue here.
 If you (to all) have issues, please consider creating pull 
 requests.
Well, clearly others disagree with you. You are saying "it works for me - make changes to the source code and submit a PR if you don't like it." That is not an answer. It excludes anyone new to D, anyone that uses D but does not have time to work on Dub, or anyone that doesn't have the ability to make changes to Dub as needed for acceptance.

No such requirement exists for using any other language to my knowledge, and I've used a lot of languages over the years. If this is the path that will be taken, there is no point speculating why the language isn't used more, especially in commercial settings. There's not even a starting point for using the language if these issues are not resolved.

Dub is an official project because it is included with DMD and it is the recommended (and for the most part only) way to distribute D code. It needs to be taken seriously.
Apr 17
prev sibling parent "Nick Sabalausky (Abscissa)" <SeeWebsiteToContactMe semitwist.com> writes:
On 4/17/19 4:06 AM, Russel Winder wrote:
 On Wed, 2019-04-17 at 03:48 -0400, Nick Sabalausky (Abscissa) via
 Digitalmars-d wrote:

 […]
 Oh, don't get me wrong, there's a whole BUNCH of things I'd like to
 see
 on this particular list, and that definitely includes your
 suggestion. I
 left it at "not necessarily limited to..." for good reason ;)
[…] The problem here is that people are having lots of ideas on this email list, but no-one is choosing to collate them into an actual "call to action".
FWIW, the only reason I was being non-canonical about desired features in a package manager is that (from experience) I wanted to avoid pie-in-the-sky up-front design and focus first on the core necessities - additional niceties can come later, on top of a solid working core.
 As ever in an ever extending email thread the energy to do
 something within the D community dissipates and nothing ends up
 happening.
Actually, I've been fired up enough about this for a good while to put my code where my mouth is (I just have a bad habit of spreading myself too thin dev-wise, and having too much other life crap getting in the way). And, personally, I find this to be D's biggest current stumbling block, even more so than anything in the language or compiler (which is probably good that I feel that way, because I'm pretty much out of my league when it comes to hacking on the compiler). So note my next post in this thread...
Apr 17
prev sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Wed, Apr 17, 2019 at 03:48:36AM -0400, Nick Sabalausky (Abscissa) via
Digitalmars-d wrote:
 On 4/16/19 1:39 PM, H. S. Teoh wrote:
[...]
 I was thinking the build/test/etc. command can itself be defined by
 a dependency.  For example, if your project uses make for builds,
 there could be a `make` dub package that defines command-line
 templates used for generating platform-specific build commands. Your
 package would simply specify:
 
 	"build-system": "make"
 
 and the "make" package would define the exact command(s) for
 invoking make [...etc...]
Interesting idea. I definitely like it, though I would consider it not a "basic fundamental" mechanism, but rather a really nice bonus feature that provides a convenient shortcut around having to specify certain things manually... IF you happen to be using a buildsystem known by the package manager... or (better yet, if possible) a buildsystem that's able to tell the package manager enough about itself to make this happen.
The idea behind this is that in order to avoid baking in support for a multitude of build systems (make, scons, meson, gradle, whatever) into dub, which will inevitably be unmaintained and out-of-date 'cos it's too much work to keep up with everything, we delegate the job to individual packages. There would be one package per external build system, e.g., a 'make' package for make support, an 'scons' package for scons support, and so on.

These packages don't have to do very much. All they need is to (1) define a way of installing said build system, in a way that can be used by dub via a standard interface, and (2) export various command-line templates that dub will use for invoking the build system. So for example, the 'make' package might contain a bunch of URLs for downloading a 'make' executable and any associated libraries and data files, and then export `/path/to/dub/cache/bin/make` as the command template for invoking the build system. There could be OS-dependent variations, such as `C:\path\to\dub\cache\make\gmake.exe` if you're on Windows, for example.

These command flavors are defined within the 'make' dub package, and then any project that uses make can just declare a dependency on 'make', and dub will know how to do the Right Thing(tm). Dub itself wouldn't have to know the specifics of how to work with a make-based source tree; it will just execute the commands defined by the 'make' dub package. [...]
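A manifest using this delegation scheme might look like the fragment below. To be clear, everything here is hypothetical: neither the "build-system" field nor a registry 'make' package exists in dub today; this is just the shape the idea suggests:

```json
{
    "name": "myproject",
    "dependencies": {
        "make": "~>1.0"
    },
    "build-system": "make",
    "build-system-args": {
        "makefile": "Makefile",
        "targets": ["all"]
    }
}
```

The hypothetical 'make' package would expand those arguments into the actual platform-specific command lines, so dub itself never needs to know make's invocation details.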
 If done correctly, dependency resolution is just a topological walk
 on a DAG.  Get this right, and everything else falls into place.
Yes, but therein lies the rub - "getting this right". AFAICT, Sönke seems to have a very good grasp on dependency resolution via DAG (or at least, certainly a far better grasp than I do). And yet, even his algorithms in DUB needed occasional patching with common-use-case short-circuiting logic (and I don't even know what else) just to fix various seemingly-infinite-loop issues (not technically infinite, but they appeared to be to the user) that people were still reporting some years into DUB's lifetime. 'Course, if it really is simpler than it seems to me (and *especially* if it *isn't*), then I'd certainly be more than happy to delegate dependency resolution out to somebody else.
It would help if somebody can provide a concrete example of a complex dependency graph that requires anything more than a straightforward topological walk. T -- MS Windows: 64-bit rehash of 32-bit extensions and a graphical shell for a 16-bit patch to an 8-bit operating system originally coded for a 4-bit microprocessor, written by a 2-bit company that can't stand 1-bit of competition.
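For the happy path, the "straightforward topological walk" really is only a few lines. A minimal sketch in Python (package names are invented; this deliberately ignores the hard part, which is choosing *versions* under constraints before the walk even starts):

```python
from graphlib import TopologicalSorter  # stdlib since Python 3.9

# Hypothetical dependency graph: each package maps to the packages it
# depends on (its predecessors in build order).
deps = {
    "app":       {"vibe-d", "mir"},
    "vibe-d":    {"openssl", "eventcore"},
    "mir":       set(),
    "openssl":   set(),
    "eventcore": set(),
}

# static_order() yields every package only after all of its dependencies,
# i.e. a valid build order; it raises CycleError on circular dependencies.
build_order = list(TopologicalSorter(deps).static_order())
print(build_order)
```

What makes real resolvers painful is not this walk but version selection under constraints (NP-hard in general); once one version per package is fixed, the rest is the walk above.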
Apr 17
next sibling parent drug <drug2004 bk.ru> writes:
On 17.04.2019 19:03, H. S. Teoh wrote:
 If done correctly, dependency resolution is just a topological walk
 on a DAG.  Get this right, and everything else falls into place.
Yes, but therein lies the rub - "getting this right". AFAICT, Sönke seems to have a very good grasp on dependency resolution via DAG (or at least, certainly a far better grasp than I do). And yet, even his algorithms in DUB needed occasional patching with common-use-case short-circuiting logic (and I don't even know what else) just to fix various seemingly-infinite-loop issues (not technically infinite, but they appeared to be to the user) that people were still reporting some years into DUB's lifetime. 'Course, if it really is simpler than it seems to me (and *especially* if it *isn't*), then I'd certainly be more than happy to delegate dependency resolution out to somebody else.
It would help if somebody can provide a concrete example of a complex dependency graph that requires anything more than a straightforward topological walk. T
A couple of years ago I made an example: https://github.com/drug007/dub-reduced-case Dub works differently now than it did back then, but nevertheless. Now it says:
```
drug drug:/tmp/dub-reduced-case/rdpl$ dub build :inspector
Building package rdpl:inspector in /tmp/dub-reduced-case/rdpl/inspector/
Root package rdpl:core reference gfm:math ~>6.1.4 cannot be satisfied.
Packages causing the conflict:
  rdpl:core depends on ~>6.1.4
  timespatial depends on ~>6.0.0
  timespatial depends on ~>6.0.0
  timespatial depends on ~>6.0.0
```
Shouldn't dub select gfm:math of version 6.1.4 here?
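For what it's worth, if I read dub's "approximately greater" specifier right, `~>6.1.4` allows >=6.1.4 and <6.2.0 while `~>6.0.0` allows >=6.0.0 and <6.1.0, so the reported conflict looks genuine: no version satisfies both constraints, and 6.1.4 is excluded by timespatial's range. A quick sketch of that interval check (three-component `~>` form only):

```python
def tilde_range(spec):
    """Expand a '~>MAJOR.MINOR.PATCH' shorthand into a half-open
    [lower, upper) interval: the last listed component may float."""
    major, minor, patch = (int(p) for p in spec.lstrip("~>").split("."))
    return (major, minor, patch), (major, minor + 1, 0)

def intersects(a, b):
    """Two half-open intervals overlap iff each starts before the other ends."""
    (a_lo, a_hi), (b_lo, b_hi) = a, b
    return a_lo < b_hi and b_lo < a_hi

core = tilde_range("~>6.1.4")         # [6.1.4, 6.2.0)
timespatial = tilde_range("~>6.0.0")  # [6.0.0, 6.1.0)
print(intersects(core, timespatial))
```

If that reading is correct, the surprise here isn't the solver: timespatial's `~>6.0.0` simply forbids 6.1.x altogether.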
Apr 17
prev sibling next sibling parent reply Dragos Carp <dragoscarp gmail.com> writes:
On Wednesday, 17 April 2019 at 16:03:24 UTC, H. S. Teoh wrote:
 The idea behind this is that in order to avoid baking in 
 support for a multitude of build systems (make, scons, meson, 
 gradle, whatever) into dub, which will inevitably be 
 unmaintained and out-of-date 'cos is too much work to keep up 
 with everything, we delegate the job to individual packages.  
 There would be one package per external build system, e.g., a 
 'make' package for make support, an 'scons' package for scons 
 support, and so on.  These packages don't have to do very much, 
 all they need is to (1) define a way of installing said build 
 system, in a way that can be used by dub via a standard 
 interface, and (2) export various command-line templates that 
 dub will use for invoking the build system.
 [...]
Invoking the external tools is probably doable; the problem is consuming artifacts generated by these tools. Then there are the specialties of each tool: some distinguish between configuration (done once for a fresh build) and build, some are very good at tracking dependencies and incremental builds, some rediscover the world on each invocation, some support cross-compiling, some use command-line parameters for customizing the build, others use configuration files, some support debug builds, some support listing the build targets, some support a server mode, some require a server mode, some deprecate features over time, etc., etc. In the end, it's a mess.

There is an effort in the Scala world to do something in the same direction (they have a lot of build systems): https://github.com/scalacenter/bsp/blob/master/docs/bsp.md. But this effort restricts itself to just IDE integration.

What I'm missing is the tool for building D projects: do you suggest that each project should use whatever it sees fit, or just stick with dub?
 It would help if somebody can provide a concrete example of a 
 complex dependency graph that requires anything more than a 
 straightforward topological walk.
I don't know if I understand you right, but probably having some generated code, or calling shell scripts with side-effects already complicates the problem enough.
Apr 17
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Wed, Apr 17, 2019 at 04:51:57PM +0000, Dragos Carp via Digitalmars-d wrote:
 On Wednesday, 17 April 2019 at 16:03:24 UTC, H. S. Teoh wrote:
 The idea behind this is that in order to avoid baking in support for
 a multitude of build systems (make, scons, meson, gradle, whatever)
 into dub, which will inevitably be unmaintained and out-of-date 'cos
 is too much work to keep up with everything, we delegate the job to
 individual packages.  There would be one package per external build
 system, e.g., a 'make' package for make support, an 'scons' package
 for scons support, and so on.  These packages don't have to do very
 much, all they need is to (1) define a way of installing said build
 system, in a way that can be used by dub via a standard interface,
 and (2) export various command-line templates that dub will use for
 invoking the build system.
 [...]
Invoking the external tools is probably doable; the problem is consuming artifacts generated by these tools.
It's not really a problem, as far as dub is concerned. All we need is some way to export the list of generated artifacts. E.g., if it's a library package, all you really need is to export (1) the import paths and (2) the path(s) to the generated .so / .a / .lib / .dll files, and dub should be able to figure out how to make use of this library without actually knowing what the build command does. In other words, we hide the complexity of the build system under one or more canned invocation commands (e.g., 'make', 'make clean', 'make docs', 'make test', for a standard set of actions you might commonly want to do), and have the wrapper package tell us what the build artifacts are.
 Then there are the specialties of each tool: some distinguish between
 configuration (done once for a fresh build) and build, some are very
 good at tracking the dependencies and incremental builds, some
 rediscover the world on each invocation, some support cross-compiling,
 some use command line parameter for customizing the build, other use
 configuration files, some support debug builds, some support listing
 the build targets, some support a server mode, some require a server
 mode, some deprecate features over time, etc., etc. Finally, it's a
 mess.
I don't think we need to support all of those cases. As far as dub-as-a-package-manager is concerned, all it really cares about is that each dependency is some black box amorphous blob of files, with an associated set of arbitrary shell commands for each standard action: build debug, build release, test, query import paths, query library paths, etc. These actions should be standard across all codebases, and it doesn't matter how they are implemented underneath. They can be arbitrarily complex shell commands. The dirty details are abstracted away by the build system wrapper packages.

As for cross-compilation, IMO this should have built-in support as an optional feature. I.e., there should be a standard command for "cross-compile to architecture XYZ", and packages can either support this or not. If one or more dependencies don't support cross-compilation, then you cannot cross-compile (with dub).
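The "black box plus canned commands" model above can be sketched in a few lines. The action names and commands below are invented stand-ins (with `echo` in place of real build tools), not anything dub defines today:

```python
import subprocess

# Each package is a black box: an arbitrary pile of files plus one shell
# command per standard action.  The package manager never looks inside.
package = {
    "build-debug":   "echo make DEBUG=1",
    "build-release": "echo make",
    "test":          "echo make test",
    "query-imports": "echo /path/to/pkg/import",
}

def run_action(pkg, action):
    """Look up the canned command for a standard action and run it verbatim;
    the command's complexity (a whole build system, even) is its own business."""
    cmd = pkg[action]  # KeyError means the package doesn't support this action
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return result.stdout.strip()

print(run_action(package, "query-imports"))
```

The manager only consumes the outputs of a few well-known queries; everything behind the command string stays opaque.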
 There is an effort in Scala world to do something in the same
 direction (they have a lot of build systems):
 https://github.com/scalacenter/bsp/blob/master/docs/bsp.md. But this
 effort resumes itself to just IDE integration.
 
 What I'm missing is the tool for building D projects, do you suggest
 that each project should use what it seams fit, or just stick with
 dub?
If you're asking me about the *current* state of dub, I have to say I don't use it except to pull in code.dlang.org dependencies. I.e., I find it only useful as a package manager and nothing else. As a build system it does not support critical use cases that I need,[*] so I find it completely unusable.

[*] Specifically, these are not supported by the current version of dub AFAIK:

(1) Arbitrary codegen (i.e., build a subset of D files into helper program H1, then invoke H1 on some data files to generate D code, then compile the resulting D code into the final executable);

(2) Cross-compilation (e.g., Linux x86 host -> Android/ARM);

(3) Arbitrary preprocessing of data files (build a subset of D files into helper program H2, then invoke H2 on some data input files to produce final data files -- e.g., compress HTML/Javascript, rescale images, etc.);

(4) Building of deliverables (e.g., pack build products X, Y, Z into a .zip file, or build/sign an APK, or build a Debian .deb package, etc.).

These are all non-trivial tasks, and I do not expect that dub will ever support all of them natively (though it would be nice if it allowed me to specify custom commands to accomplish such tasks!). That's why I'm pushing for separating dub as a package manager from dub as a build tool. The package manager's responsibility is to resolve dependencies, resolve version constraints, and download dependencies. How these dependencies are built should be handled by a *real* build system that can handle (1) to (4) above. That's why I'm proposing to encapsulate the task of building in a wrapper dub package that abstracts away the complexity.

At the end of the day, it just boils down to (1) you have a bunch of input files, (2) there's some command that transforms these input files into output files, and (3) these output files reside somewhere in the filesystem. As long as you're told where the output files in (3) are, you know enough to use this package. What goes on inside (2) is irrelevant; said command can be arbitrarily complex -- it can be an entire build system, for example, that consists of many subcommands -- but a package manager doesn't need to (and shouldn't) care about such details.
 It would help if somebody can provide a concrete example of a
 complex dependency graph that requires anything more than a
 straightforward topological walk.
I don't know if I understand you right, but probably having some generated code, or calling shell scripts with side-effects already complicates the problem enough.
This is only a problem if you're conflating package management with a build system. If you tackle solely the problem of package management, then these are not problems at all.

As far as a package manager is concerned, you have a root package P that has dependencies on some packages A, B, C, that in turn depend on other packages D, E, F. Your job as package manager is to figure out which version(s) of A, B, C, D, E, F to download. Each package is an arbitrary collection of files, with an associated command that transforms these files into build products. All you need to know is a string containing the command that performs this transformation, and a list of paths to the build product(s).

You don't care what files are in the package, really. For all you care they could be a bunch of URIs that point to remote objects, or a bunch of encrypted data files, or a .zip file of an entire OS. None of that is relevant. All that matters is that there's a command (specified within the package itself) that tells you how to transform said files into the build products. You just run this command and let it do its job -- how, you don't care. As long as the command produces the build products, that's good enough. Once it's done, you just query for the path(s) to the build products, and use that in the next node up the dependency tree. It's not the package manager's job to stick its grubby hands into the dirty details of the actual build process.

T

-- 
ASCII stupid question, getty stupid ANSI.
Apr 17
next sibling parent reply Julian <julian.fondren gmail.com> writes:
On Wednesday, 17 April 2019 at 17:53:35 UTC, H. S. Teoh wrote:
 In other words, we hide the complexity of the build system
 under one or more canned invocation commands (e.g., 'make',
 'make clean', 'make docs', 'make test', for a standard set of
 actions you might commonly want to do), and have the wrapper
 package tell us what the build artifacts are.
What you lose with this is dub's current platform independence. That's not always present anyway, for example a project could rely on specific cPanel or macOS features, but it would be nice to retain it as the default. Ada's gprbuild has project files like this:

```
project Packetbl is
   for Object_Dir use "build";
   for Exec_Dir use "bin";
   for Source_Dirs use ("src");
   for Languages use ("Ada", "C");
   for Main use ("packetbl.adb");

   package Compiler is
      for Default_Switches ("Ada") use ("-O3", "-gnata", "-gnaty-m", "-gnatwa");
      for Default_Switches ("C") use ("-O3", "-Wall");
   end Compiler;

   package Linker is
      for Default_Switches ("Ada") use
        ("-lnfnetlink", "-lnetfilter_queue", "-lanl");
   end Linker;
end Packetbl;
```

So, this project uses the Ada and C languages, a binary should be built with packetbl.adb as the main, and other language files under src/ are linked in. This project doesn't bother to list source files, but gprbuild knows what .adb/.ads and .c files are, so it builds them all with the appropriate compiler and then links them together. To know how to compile, gprbuild has a system configuration that includes the exact compilers to use for each language, their paths, etc. The sort of thing you'd set with 'dub init', with probable defaults, and with compiler knowledge (of how to invoke "gcc", "clang", etc.) built into dub.
Apr 17
parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Wed, Apr 17, 2019 at 06:17:25PM +0000, Julian via Digitalmars-d wrote:
 On Wednesday, 17 April 2019 at 17:53:35 UTC, H. S. Teoh wrote:
 In other words, we hide the complexity of the build system
 under one or more canned invocation commands (e.g., 'make',
 'make clean', 'make docs', 'make test', for a standard set of
 actions you might commonly want to do), and have the wrapper
 package tell us what the build artifacts are.
What you lose with this is dub's current platform independence. That's not always present anyway, for example a project could rely on specific cPanel or macOS features, but it would be nice to retain it as the default.
[...]

Actually, this is precisely why I said that build systems would have their own wrapper packages. The wrapper packages would specify potentially different commands to run, depending on the current platform. Since the wrapper package would have specific knowledge about the build system it's wrapping, it could craft the build command such that it sets up the appropriate platform-dependent settings. E.g., the 'make' wrapper would specify a command that sets up INCPATH to /usr/include on Linux, for example, and C:\blah\blah\include on Windows.

In fact, we could even use this as a mechanism to support cross compilation. E.g., given the current platform and the target platform as parameters, the wrapper package would craft a build command that sets up the appropriate path(s) and environment variables for cross compilation. This is an optional bonus feature, of course.

The point is that the build system wrapper packages would hide away these details from dub, so dub doesn't have to know about how to set up the build for Windows vs. Linux. It just installs the wrapper package and queries it for the right command to use.

T

-- 
Frank disagreement binds closer than feigned agreement.
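The platform dispatch described here needs nothing fancier than a lookup table inside the wrapper package. A sketch, with every path and command invented for illustration:

```python
import sys

# A wrapper package maps each platform to a ready-to-run command template,
# pre-baked with the right include paths, env vars, etc.
# All values here are made up; a real 'make' wrapper would supply its own.
BUILD_COMMANDS = {
    "linux":  "INCPATH=/usr/include make -f Makefile",
    "win32":  r"set INCPATH=C:\blah\blah\include && gmake.exe",
    "darwin": "INCPATH=/usr/local/include make -f Makefile",
}

def build_command(platform=sys.platform):
    """dub would query the wrapper for the host's command; it never needs
    to know *why* the Windows entry differs from the Linux one."""
    try:
        return BUILD_COMMANDS[platform]
    except KeyError:
        raise RuntimeError(f"wrapper package has no recipe for {platform}")

print(build_command("linux"))
```

Cross-compilation would just mean keying the table on (host, target) pairs instead of the host alone.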
Apr 17
prev sibling parent reply "Nick Sabalausky (Abscissa)" <SeeWebsiteToContactMe semitwist.com> writes:
On 4/17/19 1:53 PM, H. S. Teoh wrote:
 At the end of
 the day, it just boils down to (1) you have a bunch of input files, (2)
 there's some command that transforms these input files into output
 files, and (3) these output files reside somewhere in the filesystem.
 As long as you're told where the output files in (3) are, you know
 enough to use this package.  What goes on inside (2) is irrelevant; said
 command can be arbitrarily complex -- it can be an entire build system,
 for example, that consists of many subcommmands -- but a package manager
 doesn't need to (and shouldn't) care about such details.
+1, exactly correct
Apr 17
parent reply JN <666total wp.pl> writes:
On Wednesday, 17 April 2019 at 19:15:40 UTC, Nick Sabalausky 
(Abscissa) wrote:
 On 4/17/19 1:53 PM, H. S. Teoh wrote:
 At the end of
 the day, it just boils down to (1) you have a bunch of input 
 files, (2)
 there's some command that transforms these input files into 
 output
 files, and (3) these output files reside somewhere in the 
 filesystem.
 As long as you're told where the output files in (3) are, you 
 know
 enough to use this package.  What goes on inside (2) is 
 irrelevant; said
 command can be arbitrarily complex -- it can be an entire 
 build system,
 for example, that consists of many subcommmands -- but a 
 package manager
 doesn't need to (and shouldn't) care about such details.
+1, exactly correct
In the ideal world, perhaps. I am just worried that opening dub to other build systems will open a can of worms. Packages will use different build systems, which will be a pain to set up. Right now, the advice is "to get started with D, install the official DMD distribution and set up your project with dub". Afterwards, the advice may be "to get started with D, install the official DMD distribution, set up your project with dub, and install git, svn, meson, scons, make, and cmake so that you can build some packages. Oh, and set these environment variables for these tools. Oh, and if you're on Windows you're screwed".

While dub may seem limiting to some, it offers standardization and unification. I know these are forbidden words in the D community, but standards and forcing people to do things a certain way are often beneficial and boost the cohesion of the ecosystem.

"I find this to be D's biggest current stumbling block, even moreso than anything in the language or compiler" - really? The biggest D stumbling block is the fact that it's not easy to set up a mixed C/C++ project with D? How many projects of that kind are there even? How about issues such as the crappy std.xml, long overdue for replacement (I could list more but I don't want to derail this thread)?
Apr 17
next sibling parent reply Julian <julian.fondren gmail.com> writes:
On Wednesday, 17 April 2019 at 20:10:08 UTC, JN wrote:
 "I find this to be D's biggest current stumbling block, even
 moreso than anything in the language or compiler" - really? The
 biggest D stumbling block is the fact that it's not easy to set
 up a mixed C/C++ project with D? How many projects of that kind
 are there even? How about issues such as crappy std.xml long
 overdue for replacement (I could list more but I don't want to
 derail this thread)?
Is there no good C or C++ XML library that you could make a std.xml alternative out of? Maybe, if this were an option for dub, there'd be a dub package wrapping one that you could just grab and use for now.

Consider std.regex and https://github.com/jrfondren/topsender-bench

The fastest std.regex option is more than 10x slower than libc regex, which is already too slow to seriously use for anything but one-off tasks. (I put a thread in Learn about this. None of the other suggestions there made any significant change to performance.) When the first thing someone would try is **140x** slower than the Python script they didn't even think about optimizing, I can't say it benefits D that nobody can say "oh yeah, just dub add pcre and change your import".
Apr 17
next sibling parent Julian <julian.fondren gmail.com> writes:
On Wednesday, 17 April 2019 at 20:32:28 UTC, Julian wrote:
 When the first thing someone'd try is ** 140x **
 slower than the Python script they didn't even think about
 optimizing
37x, rather. It's still 2s ("I can hit enter and wait for the output") vs. over a minute ("I'd better take a 10min break while this runs.")
Apr 17
prev sibling next sibling parent Russel Winder <russel winder.org.uk> writes:
On Wed, 2019-04-17 at 20:32 +0000, Julian via Digitalmars-d wrote:

[…]
 Is there no good C or C++ XML library that you could make a 
 std.xml
 alternative out of? Maybe there'd be a dub package that you could
 just grab and use for now, that wrapped one, if this were an 
 option
 for dub.
[…] Isn't the current done thing just to make use of libxml2?

-- 
Russel.
===========================================================
Dr Russel Winder      t: +44 20 7585 2200
41 Buckmaster Road    m: +44 7770 465 077
London SW11 1EN, UK   w: www.russel.org.uk
Apr 18
prev sibling parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On Wednesday, 17 April 2019 at 20:32:28 UTC, Julian wrote:
 On Wednesday, 17 April 2019 at 20:10:08 UTC, JN wrote:

 Consider std.regex and 
 https://github.com/jrfondren/topsender-bench
 The fastest std.regex option is more than 10x slower than libc
 regex, which is already too slow to seriously use for anything 
 but
 once-off tasks.
Author of std.regex here. It's been a while since I monitored its performance. Still, let's drill down. Caveat emptor: I do not have your data file, but I produced one by sampling example lines from the web.

First, your flags are a bit off for DMD; use the following:

    dmd -release -inline -O

I know, not very intuitive. This more than doubles performance with dmd. std.regex is a templated library (sadly), so it heavily depends on passing the right flags at the application level :( For LDC I used the following (can't remember if -O implies -release):

    ldc2 -release -O

Second, if doing a match per line it's best to use `matchFirst`, which caches the engine between calls, instead of `match`. `match` is intended to plow through large chunks, such as iterating over matches in a memory-mapped file, and therefore creates a new engine on each call.

With these two tweaks I get a respectable speed of 1.5x of PCRE/JIT. IIRC the PCRE_JIT option doesn't work for Unicode, and std.regex supports Unicode by default.

In general I agree - std.regex needs more love; a casual look at disassembly shows some degradation compared to a couple years back. Truth is, code like that needs constant tweaking.

P.S. I lack the time and energy to improve regex, esp. in std proper. I hope to get back to my experiments on rewind-regex though. JIT compilation is on the list, mostly to avoid reliance on the compiler + being more aggressive with low-level tricks.
Apr 25
parent reply Bastiaan Veelo <Bastiaan Veelo.net> writes:
On Thursday, 25 April 2019 at 16:49:27 UTC, Dmitry Olshansky 
wrote:
 P.S. I lack time or energy to improve on regex esp. in std 
 proper. I hope to get back to my experiments on rewind-regex 
 though. JIT compilation is on the list, mostly to avoid 
 reliance on compiler + being more aggressive on low-level 
 tricks.
Hey Dmitry, nice to see you again! Bastiaan.
Apr 25
parent Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On Thursday, 25 April 2019 at 20:13:02 UTC, Bastiaan Veelo wrote:
 On Thursday, 25 April 2019 at 16:49:27 UTC, Dmitry Olshansky 
 wrote:
 P.S. I lack time or energy to improve on regex esp. in std 
 proper. I hope to get back to my experiments on rewind-regex 
 though. JIT compilation is on the list, mostly to avoid 
 reliance on compiler + being more aggressive on low-level 
 tricks.
Hey Dmitry, nice to see you again! Bastiaan.
Thanks, I look forward to your DConf talk ;)
Apr 26
prev sibling next sibling parent "Nick Sabalausky (Abscissa)" <SeeWebsiteToContactMe semitwist.com> writes:
On 4/17/19 4:10 PM, JN wrote:
 On Wednesday, 17 April 2019 at 19:15:40 UTC, Nick Sabalausky (Abscissa) 
 wrote:
 On 4/17/19 1:53 PM, H. S. Teoh wrote:
 At the end of
 the day, it just boils down to (1) you have a bunch of input files, (2)
 there's some command that transforms these input files into output
 files, and (3) these output files reside somewhere in the filesystem.
 As long as you're told where the output files in (3) are, you know
 enough to use this package.  What goes on inside (2) is irrelevant; said
 command can be arbitrarily complex -- it can be an entire build system,
 for example, that consists of many subcommmands -- but a package manager
 doesn't need to (and shouldn't) care about such details.
+1, exactly correct
In the ideal world, perhaps.
No, there's no "ideal world" about this. What H. S. Teoh describes above are simply the cold, hard facts of the matter. No ifs, ands, or buts. The complications that we're accustomed to seeing and expecting are ones that come straight from DUB's failure to fully separate package management from building.
 I am just worried opening dub to other 
 build systems will just open a can of worms. Packages will use different 
 build systems, which will be a pain to set up. Right now, the advice is 
 "to get started with D, install official DMD distribution and set up 
 your project with dub". Afterwards, the advice may be "to get started 
 with D, install official DMD distribution, set up your project with dub, 
 and install git,svn,meson,scons,make,cmake so that you can build some 
 packages. Oh and set these environmental variables for these tools. Oh 
 and if you're on Windows you're screwed".
With what I have in mind (and H. S. Teoh appears to have the same thing in mind), then no, those scenarios are absolutely, definitely not a realistic result. They're just fear-based speculation at best. In the long run, DUB may or may not remain the recommended buildsystem for newcomers, but that's all. And at least it won't actively PREVENT alternative buildsystems from coming along and proving superior enough to dethrone DUB.
 While dub may seem limiting to some, it offers standarization and 
 unification.
Well, that's just the problem - it *doesn't* provide that; it completely fails to. It tries to standardize and unify through basically the same strong-arm "submit or die" approach as Nobunaga or Napoleon, but as I predicted it would from the start, it just wound up fracturing the ecosystem instead[1]. And a fractured ecosystem is by definition NEITHER standardized NOR unified.

[1] The not-so-well-known fact is, many projects are forced out of the DUB package ecosystem because of the DUB buildsystem's limitations. That may not be easy to see, since such packages are forced under the radar: opting out of DUB as a buildsystem also forces them out of the visibility provided by code.dlang.org. And then the ones that do join the DUB side are too often forced into hardships that, first of all, shouldn't even be necessary, and second of all, still fail to offer the tradeoff of unifying/standardizing the ecosystem anyway.

The *only* way to standardize and unify is to do so in a way that allows individual packages to work the way they need to.
Apr 17
prev sibling next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Wed, Apr 17, 2019 at 08:10:08PM +0000, JN via Digitalmars-d wrote:
[...]
 In the ideal world, perhaps. I am just worried opening dub to other
 build systems will just open a can of worms. Packages will use
 different build systems, which will be a pain to set up. Right now,
 the advice is "to get started with D, install official DMD
 distribution and set up your project with dub". Afterwards, the advice
 may be "to get started with D, install official DMD distribution, set
 up your project with dub, and install git,svn,meson,scons,make,cmake
 so that you can build some packages. Oh and set these environmental
 variables for these tools. Oh and if you're on Windows you're
 screwed".
Again, this concern is addressed by my proposal of build system wrapper dub packages. Basically, the act of declaring a dependency on, say, the 'make' wrapper package should cause dub to download and install a working installation of make. This can be done by structuring the 'make' wrapper package such that its "build" command is to download and install a working version of make. In a nutshell, it would work like this:

- Project P depends on project Q, so dub downloads project Q.

- Project Q depends on 'make', so dub downloads the 'make' wrapper package.

- The 'make' wrapper package's "build" command is actually an installer that downloads and installs make on the local system. For maximum non-intrusion, the installation location can be somewhere in a standard location in the dub cache, so it won't conflict with any other make installation you may have on your system. So dub invokes this "build" command, and now you have a working version of make. (Note that this includes setting up any environment variables that may be necessary.)

- Now that project Q's dependencies are satisfied, dub invokes Q's build command, which runs make to build everything. This now works, because the previous step ensures we have a working version of make installed.

- Now that project P's dependencies are satisfied, we can build P as usual.

This is one example of why abstracting away the "build" command of a package from the package manager can be highly beneficial: it allows you to pull in any arbitrary external package without needing dub to be aware of how said external package works. In the above scheme, you can substitute 'make' with any build system of your choice, and it will still work with no additional effort from the end user. This is ensured by the wrapper package abstracting away the dirty details of how to install a working version of ${buildtool}, so that dub can perform the installation without needing to know the details of how to do it.
The wrapper package also hides away the dirty details of how to set up a build tool like make properly, including setting up environment variables and what-not. Basically, it does whatever it takes for the whole process to be completely transparent to the user.
 While dub may seem limiting to some, it offers standarization and
 unification. I know these are forbidden words in D community, but
 standards and forcing people to do things certain way are often
 beneficial and boost the cohesion of the ecosystem.
Standards should be empowering, rather than arbitrarily limiting. What we're proposing here is a standard way for arbitrary software packages to be used as dub dependencies. Let me repeat that again: what we're proposing here is a *standard* way for *arbitrary* software packages (of any language, any build system, etc.) to be usable as dub dependencies. The standard here is the standard "API" of how packages work:

1) A package can depend on one or more packages, with optional version constraints. This is the way dub already works.

2) A package can consist of any arbitrary collection of input files. What they are and how they are structured is irrelevant.

3) A package has a standard declaration of how it should be built. Basically, this is any arbitrary command that transforms said input files into output files. How this is carried out is irrelevant -- that's a detail the package is responsible for, and is none of the package manager's business.

4) A package has a standard declaration of where to find the built products (import paths, library files, etc.).

5) A package that wraps around a build system has a standard declaration of how to formulate commands to invoke it for various standard tasks (build debug, build release, test, etc.). So dependent packages don't actually have to know how to run make on Windows vs. Linux; they use a standard query to formulate the right command, and simply run it. The wrapper package ensures that this command will Just Work(tm).

Just like with the range API, the user of the range (here, the package manager) doesn't, and shouldn't, care about how each of these "methods" works under the hood. As long as packages conform to the API (declare items (1) to (4) in a standard way), that's all that's necessary for dub to do the Right Thing(tm).
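The five points read almost like an interface definition. A sketch of how that package "API" might be typed, with every field name invented for illustration (none of this is dub's actual schema):

```python
from dataclasses import dataclass, field

@dataclass
class Package:
    """Everything a package owes the package manager -- and nothing about
    *how* building happens leaks through this interface."""
    name: str
    dependencies: dict = field(default_factory=dict)      # (1) name -> version constraint
    files: list = field(default_factory=list)             # (2) arbitrary blob of inputs
    build_command: str = ""                               # (3) one opaque shell string
    artifacts: dict = field(default_factory=dict)         # (4) e.g. import/lib paths
    action_templates: dict = field(default_factory=dict)  # (5) build-system wrappers only

# A leaf library as the manager sees it: just strings, no build logic.
libfoo = Package(
    name="libfoo",
    dependencies={"make": "~>4.0"},
    build_command="make -C libfoo",
    artifacts={"imports": ["libfoo/src"], "libs": ["libfoo/libfoo.a"]},
)
print(libfoo.artifacts["libs"])
```

The manager's whole job against this interface is: resolve (1), run (3), and hand (4) to the next node up the dependency tree.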
 "I find this to be D's biggest current stumbling block, even moreso
 than anything in the language or compiler" - really? The biggest D
 stumbling block is the fact that it's not easy to set up a mixed C/C++
 project with D?  How many projects of that kind are there even?
Given the amount of effort currently being put into C++ interop? I'm guessing it's a lot more common than you think.
 How about issues such as crappy std.xml long overdue for replacement
 (I could list more but I don't want to derail this thread)?
Jonathan's dxml is already a dub package, isn't it? You could just use that. T -- If it tastes good, it's probably bad for you.
Apr 17
prev sibling parent "Nick Sabalausky (Abscissa)" <SeeWebsiteToContactMe semitwist.com> writes:
On 4/17/19 4:10 PM, JN wrote:
 "I find this to be D's biggest current stumbling block, even moreso than 
 anything in the language or compiler" - really? The biggest D stumbling 
 block is the fact that it's not easy to set up a mixed C/C++ project 
 with D? 
Oh dear god that is not even *REMOTELY* in the ballpark of actual reality. Let's keep the blatant strawmen far away from this, please.
Apr 17
prev sibling parent reply "Nick Sabalausky (Abscissa)" <SeeWebsiteToContactMe semitwist.com> writes:
On 4/17/19 12:03 PM, H. S. Teoh wrote:
 On Wed, Apr 17, 2019 at 03:48:36AM -0400, Nick Sabalausky (Abscissa) via
Digitalmars-d wrote:
 On 4/16/19 1:39 PM, H. S. Teoh wrote:
[...]
 I was thinking the build/test/etc. command can itself be defined by
 a dependency.  For example, if your project uses make for builds,
 there could be a `make` dub package that defines command-line
 templates used for generating platform-specific build commands. Your
 package would simply specify:

 	"build-system": "make"

 and the "make" package would define the exact command(s) for
 invoking make [...etc...]
Interesting idea. I definitely like it, though I would consider it not a "basic fundamental" mechanism, but rather a really nice bonus feature that provides a convenient shortcut for having to specify certain things manually... IF you happen to be using a buildsystem known by the package manager... or (better yet, if possible) a buildsystem that's able to tell the package manager enough about itself to make this happen.
The idea behind this is that in order to avoid baking in support for a multitude of build systems (make, scons, meson, gradle, whatever) into dub, which will inevitably be unmaintained and out-of-date 'cos it's too much work to keep up with everything, we delegate the job to individual packages.
I agree, letting build system packages tell the package manager how to use them is definitely preferable to baked-in support.
 So for example, the 'make' package might contain a bunch of URLs for
 downloading a 'make' executable and any associated libraries and data
 files, and then export `/path/to/dub/cache/bin/make` as the command
 template for invoking the build system.  There could be OS-dependent
 variations, such as `C:\path\to\dub\cache\make\gmake.exe` if you're on
 Windows, for example.  These command flavors are defined within the
 'make' dub package, and then any project that uses make can just declare
 a dependency on 'make', and dub will know how to do the Right Thing(tm).
 Dub itself wouldn't have to know the specifics of how to work with a
 make-based source tree; it will just execute the commands defined by the
 'make' dub package.
Yes, in general, pre-built binary tool packages that originate from outside (or inside) D-land must be supported. This has been one of my bigger issues with DUB. And a little extra consideration for buildsystems in particular is likely warranted.

The only change I would make is that declaring a dependency on a buildsystem shouldn't *necessarily* mean that the package manager invokes the buildsystem directly. In many cases it would, and that could maybe be a good default, but it also needs to support the case where a project uses its own buildscript and the buildscript invokes something like make or such. (Probably, in this situation, the buildscript would invoke the buildsystem - or any other executable tools it relies on - through the package manager... or at least through information provided by the package manager via envvars, for example.)

Another thing to keep in mind is that the interactions between package manager and buildsystem will likely require more than just the buildsystem telling the package manager "Here are the command(s) you use to invoke me". For example, the package manager may need to relay things like which configuration is being built. Although, now that I think of it, since the package manager is banned by charter from being a buildsystem, it might not need to pass nearly as much information to buildscripts/systems/etc. as DUB-as-a-package-manager would need to.
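The 'make' wrapper idea above can be sketched in a few lines of Python (the package layout, paths, and command templates are all invented for illustration): the wrapper exports per-OS, per-task command templates, and the package manager's only job is to look the right one up.

```python
import platform

# Hypothetical 'make' wrapper package: exports command templates per
# platform and per standard task, so dependents never hard-code how to
# run make on any given OS.
MAKE_PACKAGE = {
    "commands": {
        "Windows": {"build-debug": r"C:\dub\cache\make\gmake.exe DEBUG=1",
                    "build-release": r"C:\dub\cache\make\gmake.exe"},
        "Linux":   {"build-debug": "make DEBUG=1",
                    "build-release": "make"},
    }
}

def command_for(wrapper, task, system=None):
    """The package manager's standard query: how do I perform <task> here?"""
    system = system or platform.system()
    return wrapper["commands"][system][task]

print(command_for(MAKE_PACKAGE, "build-release", system="Linux"))  # make
```

A buildscript-driven project would simply query `command_for` itself (or read the equivalent from envvars) instead of having the manager invoke the command directly, matching the alternative flow described above.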
 It would help if somebody can provide a concrete example of a complex
 dependency graph that requires anything more than a straightforward
 topological walk.
Personally, in the absence of someone like Sonke who's already experienced with real-world dependency resolution popping in to help out on that, I'd really like to just simply start by adapting and re-using some already-established dependency-resolving code, such as from DUB. This is one thing I'd really want to avoid reinventing the wheel on if at all possible.

--------------------------------

In response to suggestions from NaN and Russel, I'd like to move this to a Github project. No code yet, just because I don't have any yet, but we can use the issues/PRs/wiki features to collaborate publicly in a more structured, less transient, way: https://github.com/Abscissa/DPak

H. S. Teoh: I forget your Github handle. Let me know and I'll add you as a collaborator.
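For what it's worth, the classic case where a plain topological walk isn't enough is the diamond conflict: two dependents constrain the same package incompatibly, so the resolver has to search (and potentially backtrack or report a conflict), not just walk. A tiny Python sketch with made-up packages B and D both depending on C:

```python
# Hypothetical diamond dependency: app -> B and app -> D, where B and D
# both depend on C but with incompatible version constraints. Versions
# are modeled as tuples for simple ordered comparison.
constraints = {
    "B": {"C": lambda v: (1, 0) <= v < (2, 0)},   # B needs C ~>1.0
    "D": {"C": lambda v: v >= (2, 0)},            # D needs C >=2.0
}
available = {"C": [(1, 4), (2, 1)]}

def satisfying_versions(name):
    """Versions of `name` acceptable to every dependent at once."""
    return [v for v in available[name]
            if all(c[name](v) for c in constraints.values() if name in c)]

print(satisfying_versions("C"))  # -> [] : no version satisfies both
```

A topological order of app, B, D, C exists just fine; it's the version *selection* that fails, which is exactly the part worth borrowing from an established resolver rather than rewriting.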
Apr 17
parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Wed, Apr 17, 2019 at 02:57:05PM -0400, Nick Sabalausky (Abscissa) via
Digitalmars-d wrote:
[...]
 In response to suggestions from NaN and Russel, I'd like to move this
 to a Github project.  No code yet, just because I don't have any yet,
 but we can use the issues/PRs/wiki features to collaborate publicly in
 a more structured, less transient, way:
 
 https://github.com/Abscissa/DPak
Awesome, let's see if we can actually get this thing off the ground.
  H. S. Teoh: I forget your Github handle. Let me know and I'll add you
 as a collaborator.
quickfur. T -- Famous last words: I *think* this will work...
Apr 17
prev sibling parent Atila Neves <atila.neves gmail.com> writes:
On Monday, 15 April 2019 at 23:17:46 UTC, H. S. Teoh wrote:
 On Mon, Apr 15, 2019 at 06:56:24PM -0400, Nick Sabalausky 
 (Abscissa) via Digitalmars-d wrote:
 [...]
 [...]
Y'know, the idea just occurred to me: why even bind the package manager to the build system at all?
They really shouldn't be. If anything, dub is a good example of why not.
 Why not make the build system one of the dependencies of the 
 project?
That could work, as long as there's a sensible default one.
 For backward compatibility, 'dub-build' would be assumed if no 
 build system is specified.
+1
Apr 16
prev sibling next sibling parent reply Dragos Carp <dragoscarp gmail.com> writes:
On Sunday, 14 April 2019 at 10:53:17 UTC, Seb wrote:
 Hi all,
 ...
I have a different proposal.

One of the goals of D is interoperability with C/C++ and interoperability in general. Because of that I think it really makes sense not to reinvent the wheel, but to have good support for an already existing tool. I've been using Bazel for a couple of months, and I like it. Not necessarily the implementation (it's big and Java), but the concepts behind it. So I'm already working right now on improving the D support [1]: adding more tests, supporting more compiler versions, etc.

So my plan is:

- add tests: supporting D libraries, executables, tests, and C++/D hybrid applications
- add compiler version selection
- ldc and gcc support
- dub .json/.sdl -> Bazel BUILD file converter (not foolproof). If necessary, manual intervention is acceptable.
- add Windows support
- write a D starlark [2] implementation
- rewrite Bazel in D. Probably this will never happen, but maybe the subset sufficient to cover the dub functionality will be doable.

If somebody else is interested in joining this effort, please give me a sign.

Dragos

[1] https://github.com/bazelbuild/rules_d
[2] https://docs.bazel.build/versions/master/skylark/language.html
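The converter step of the plan could look roughly like this Python sketch. It is deliberately lossy and every detail of the field mapping is hypothetical (real dub files carry configurations, platform specifics, etc. that would need the manual intervention mentioned above); `d_library` is the rule name used by rules_d.

```python
import json

def dub_to_build(dub_json: str) -> str:
    """Lossy sketch: map a minimal dub.json to one Bazel d_library rule.

    Assumes sources live under source/ and that each dub dependency maps
    to an external repo target named "@<dep>//:lib" (an invented convention).
    """
    cfg = json.loads(dub_json)
    deps = ", ".join(f'"@{d}//:lib"' for d in sorted(cfg.get("dependencies", {})))
    return (f'd_library(\n'
            f'    name = "{cfg["name"]}",\n'
            f'    srcs = glob(["source/**/*.d"]),\n'
            f'    deps = [{deps}],\n'
            f')')

print(dub_to_build('{"name": "mypkg", "dependencies": {"mir": "~>3.0"}}'))
```

Note the version constraint (`~>3.0`) is dropped entirely: Bazel pins exact dependencies per workspace, so constraint resolution would have to happen before, not during, conversion.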
Apr 16
next sibling parent aliak <something something.com> writes:
On Tuesday, 16 April 2019 at 17:08:22 UTC, Dragos Carp wrote:
 On Sunday, 14 April 2019 at 10:53:17 UTC, Seb wrote:
 [...]
I have a different proposal. One of the goals of D is interoperability with C/C++ and interoperability in general. Because of that I think it really makes sense not to reinvent the wheel, but to have good support for an already existing tool. I've been using Bazel for a couple of months, and I like it. Not necessarily the implementation (it's big and Java), but the concepts behind it. So I'm already working right now on improving the D support [1]: adding more tests, supporting more compiler versions, etc. [...]
Super nice! I was just going to ask if anyone's been using bazel and what they think of it. And here you are already hacking away d rules :D
Apr 16
prev sibling parent reply 9il <ilyayaroshenko gmail.com> writes:
On Tuesday, 16 April 2019 at 17:08:22 UTC, Dragos Carp wrote:
 On Sunday, 14 April 2019 at 10:53:17 UTC, Seb wrote:
 Hi all,
 ...
So my plan is:

- add tests: supporting D libraries, executables, tests, and C++/D hybrid applications
- add compiler version selection
- ldc and gcc support
- dub .json/.sdl -> Bazel BUILD file converter (not foolproof). If necessary, manual intervention is acceptable.
- add Windows support
- write a D starlark [2] implementation
- rewrite Bazel in D. Probably this will never happen, but maybe the subset sufficient to cover the dub functionality will be doable.
Very nice, I can add support for mir packages. Do you have an open-source working D+Bazel project and Travis config for it to use as an example? --Ilya
Apr 16
parent Dragos Carp <dragoscarp gmail.com> writes:
On Wednesday, 17 April 2019 at 02:57:44 UTC, 9il wrote:
 Very nice, I  can add support for mir packages. Do you have an 
 opensource working D+Bazel project and Travis config for it to 
 use it as an example? --Ilya
Not yet. I'll get back to you when this is set. The Travis config is part of the first item on the plan. BTW the repo is https://github.com/dcarp/rules_d.
Apr 16
prev sibling next sibling parent reply H. S. Teoh <hsteoh quickfur.ath.cx> writes:
Recently, Nick & some others (including myself) created a Github 
project to discuss the possibility of a dub replacement that has 
a better architecture. In the course of analysing our current 
situation and where we'd like to be, Nick pointed out something 
that immediately strikes me as low-hanging fruit that could 
potentially improve dub's performance significantly.

According to Nick, every dub package carries its own description 
file (sdl/json) that encodes, among other things, its version and 
dependencies.  Apparently, this information resides per-package 
on code.dlang.org, and so every time dub has to resolve 
dependencies for a particular package, it has to download the 
description files of *every version* of the package??  Can 
somebody confirm whether or not this is really the case?

Because if this is true, that means dub is making N network 
roundtrips *per dependency* just to obtain dependency information 
from each package.  This is obviously extremely inefficient, and 
I'll bet that it's a significant source of dub's perceived 
slowness.

But this also immediately suggests a simple solution: upon 
uploading a package to code.dlang.org, the server should collect 
the description files of all versions of that package, and make 
that available somewhere in a per-package global description 
file.  It doesn't need to contain everything in dub.{sdl/json}; 
all it needs is version information and per-version dependency 
lists.  This way, whenever dub needs to resolve dependencies on 
that package, it can obtain all of the information it needs on 
that package with a single HTTP roundtrip.  Of course, it will 
still need to make more roundtrips in order to obtain information 
for sub-dependencies and other dependencies, but if what Nick 
says is true, this should significantly cut down on the total 
number of roundtrips required (O(n) as opposed to O(n*v), where n 
= #packages, v = #versions-per-package).
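The proposed aggregation can be sketched in a few lines of Python (package names, versions, and the file shape are all made up): the registry merges the per-version description files into one per-package index, so the client needs one fetch per package instead of one per version.

```python
# Hypothetical per-version description files as they'd sit on the registry.
per_version_files = {
    ("vibe-d", "0.8.5"): {"dependencies": {"vibe-core": ">=1.0.0"}},
    ("vibe-d", "0.8.6"): {"dependencies": {"vibe-core": ">=1.4.0"}},
}

def aggregate(files):
    """Server-side step: fold every version's deps into one index per package."""
    index = {}
    for (name, version), desc in files.items():
        index.setdefault(name, {})[version] = desc["dependencies"]
    return index

index = aggregate(per_version_files)
print(len(per_version_files), "roundtrips before,", len(index), "after")
```

With v versions per package this is exactly the O(n*v) -> O(n) reduction described above; the index stays small because it only keeps version and dependency info, not the full dub.{sdl,json}.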


--T
Apr 26
next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Apr 26, 2019 at 08:28:37PM +0000, H. S. Teoh via Digitalmars-d wrote:
[...]
 According to Nick, every dub package carries its own description file
 (sdl/json) that encodes, among other things, its version and
 dependencies.  Apparently, this information resides per-package on
 code.dlang.org, and so every time dub has to resolve dependencies for
 a particular package, it has to download the description files of
 *every version* of the package??  Can somebody confirm whether or not
 this is really the case?
[...] Actually, nevermind that. Dub already combines the descriptions into a single file. So the bottleneck(s) must lie elsewhere... T -- 2+2=4. 2*2=4. 2^2=4. Therefore, +, *, and ^ are the same operation.
Apr 26
next sibling parent reply Andre Pany <andre s-e-a-p.de> writes:
On Friday, 26 April 2019 at 22:33:50 UTC, H. S. Teoh wrote:
 On Fri, Apr 26, 2019 at 08:28:37PM +0000, H. S. Teoh via 
 Digitalmars-d wrote: [...]
 According to Nick, every dub package carries its own 
 description file (sdl/json) that encodes, among other things, 
 its version and dependencies.  Apparently, this information 
 resides per-package on code.dlang.org, and so every time dub 
 has to resolve dependencies for a particular package, it has 
 to download the description files of *every version* of the 
 package??  Can somebody confirm whether or not this is really 
 the case?
[...] Actually, nevermind that. Dub already combines the descriptions into a single file. So the bottleneck(s) must lie elsewhere... T
Also, with a recent Dub version a major performance optimization was done: https://dlang.org/changelog/2.086.0.html#single-api-requests

In addition, Sebastian is also working on parallel dependency downloads.

Kind regards
Andre
Apr 26
parent Seb <seb wilzba.ch> writes:
On Friday, 26 April 2019 at 22:56:53 UTC, Andre Pany wrote:
 On Friday, 26 April 2019 at 22:33:50 UTC, H. S. Teoh wrote:
 On Fri, Apr 26, 2019 at 08:28:37PM +0000, H. S. Teoh via 
 Digitalmars-d wrote: [...]
 According to Nick, every dub package carries its own 
 description file (sdl/json) that encodes, among other things, 
 its version and dependencies.  Apparently, this information 
 resides per-package on code.dlang.org, and so every time dub 
 has to resolve dependencies for a particular package, it has 
 to download the description files of *every version* of the 
 package??  Can somebody confirm whether or not this is really 
 the case?
[...] Actually, nevermind that. Dub already combines the descriptions into a single file. So the bottleneck(s) must lie elsewhere... T
Also, with a recent Dub version a major performance optimization was done: https://dlang.org/changelog/2.086.0.html#single-api-requests In addition, Sebastian is also working on parallel dependency downloads. Kind regards Andre
Unfortunately I'm not actively working on this. I have too many other things on my plate.

I just did a quick experiment and it more than halved the fresh build time for the tested projects (the unzipping happens in parallel too). However, the quick solution doesn't seem to work with older D versions. In case anyone is interested: https://github.com/dlang/dub/pull/1677
Apr 26
prev sibling parent reply Basile B. <b2.temp gmx.com> writes:
On Friday, 26 April 2019 at 22:33:50 UTC, H. S. Teoh wrote:
 On Fri, Apr 26, 2019 at 08:28:37PM +0000, H. S. Teoh via 
 Digitalmars-d wrote: [...]
 According to Nick, every dub package carries its own 
 description file (sdl/json) that encodes, among other things, 
 its version and dependencies.  Apparently, this information 
 resides per-package on code.dlang.org, and so every time dub 
 has to resolve dependencies for a particular package, it has 
 to download the description files of *every version* of the 
 package??  Can somebody confirm whether or not this is really 
 the case?
[...] Actually, nevermind that. Dub already combines the descriptions into a single file. So the bottleneck(s) must lie elsewhere... T
Speaking of bottlenecks... (and probably unrelated): For dexed I have a project group that contains the 3 sdl files for dmd/druntime/phobos. It's very slow to load... the IDE launches dub 3 times to convert the projects to JSON, and it takes 8 to 10 seconds. But none of the sdl files contains dependencies, configs, build types, or whatever... I have the feeling that dub does something stupid with the files, since these 3 repos contain many.
Apr 26
next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, Apr 27, 2019 at 12:02:58AM +0000, Basile B. via Digitalmars-d wrote:
[...]
 For dexed I have a project group that contains the 3 sdl for
 dmd/druntime/phobos. It's very slow to load... the IDE launches dub 3
 times to convert the projects to JSON and it takes 8 to 10 seconds.
 But none of the sdl contains dependencies, nor configs, build types or
 whatever...
 
 I have the feeling that dub does something stupid with the files since
 these 3 repos contains many.
Shouldn't we be able to just run a profiler on dub to figure out where it's spending those 10 seconds? (or about 3 seconds per run.) Seems like something easy enough to identify, even if the problem itself may not be so simple to fix. We should find out whether it's CPU-bound, I/O-bound, or network-bound, at the very least. T -- MS Windows: 64-bit rehash of 32-bit extensions and a graphical shell for a 16-bit patch to an 8-bit operating system originally coded for a 4-bit microprocessor, written by a 2-bit company that can't stand 1-bit of competition.
Apr 26
parent Basile B. <b2.temp gmx.com> writes:
On Saturday, 27 April 2019 at 00:37:35 UTC, H. S. Teoh wrote:
 On Sat, Apr 27, 2019 at 12:02:58AM +0000, Basile B. via 
 Digitalmars-d wrote: [...]
 For dexed I have a project group that contains the 3 sdl for 
 dmd/druntime/phobos. It's very slow to load... the IDE 
 launches dub 3 times to convert the projects to JSON and it 
 takes 8 to 10 seconds. But none of the sdl contains 
 dependencies, nor configs, build types or whatever...
 
 I have the feeling that dub does something stupid with the 
 files since these 3 repos contains many.
Shouldn't we be able to just run a profiler on dub to figure out where it's spending those 10 seconds? (or about 3 seconds per run.) Seems like something easy enough to identify, even if the problem itself may not be so simple to fix. We should find out whether it's CPU-bound, I/O-bound, or network-bound, at the very least. T
I've determined that the slowdown is caused by the auto-fetch option of the IDE. When it's disabled, the group loads instantly. Actually, there are dependencies in the sdl for DMD: dmd:lexer, dmd:parser, dmd:frontend. Dexed fails to determine that they are described directly in the description.
May 01
prev sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Apr 26, 2019 at 05:37:35PM -0700, H. S. Teoh via Digitalmars-d wrote:
 On Sat, Apr 27, 2019 at 12:02:58AM +0000, Basile B. via Digitalmars-d wrote:
 [...]
 For dexed I have a project group that contains the 3 sdl for
 dmd/druntime/phobos. It's very slow to load... the IDE launches dub
 3 times to convert the projects to JSON and it takes 8 to 10
 seconds.  But none of the sdl contains dependencies, nor configs,
 build types or whatever...
 
 I have the feeling that dub does something stupid with the files
 since these 3 repos contains many.
Shouldn't we be able to just run a profiler on dub to figure out where it's spending those 10 seconds? (or about 3 seconds per run.) Seems like something easy enough to identify, even if the problem itself may not be so simple to fix. We should find out whether it's CPU-bound, I/O-bound, or network-bound, at the very least.
[...] OK, so I upgraded my dub to the latest git master, and did a quick and dirty test. Init a fresh new project with `-t vibe.d`, accept all default values (name, license, etc.), then run `time dub -v` to make the first build.

It took 2 minutes 45 seconds (!). From a crude visual inspection, most of the time was spent downloading and compiling the dependencies, so that seems reasonable, even if unfortunate. Subsequent runs (remove dub.selections, remove executable, rerun dub -v) took about 11 seconds or so. Still rather long, but it's a LOT faster than the previous version of dub that I tested prior to upgrading to git master.

The previous version of dub took about 2 minutes 15 seconds for the first build of a fresh empty vibe.d project, but most of the time appeared to be spent on the "searching for versions of xyz (1 suppliers)" phase, which I presume was the per-version downloading of package descriptions. (The fact that it took 30 seconds less than the new dub is likely caused by some cached objects from previous vibe.d builds, so that measurement shouldn't be trusted. I deleted the entire dub cache before running the test on the new version of dub.) Subsequent runs (delete dub.selections, rerun dub -v) took about 40 seconds, most of which was spent on the "searching for versions" phase.

So the new dub appears to have eliminated a good chunk of network turnaround time, which is good news. 10 seconds is still rather long for an empty project, but from casual visual inspection of -v output, it appears to still be stuck on the "searching for versions of vibe.d" phase, which took several solid seconds. I'm presuming this is where it's waiting on the network. Re-running dub without deleting dub.selections eliminates this step, and turnaround time drops to about 4-5 seconds. Finally!! It's getting a lot closer to a serviceable edit-compile-test turnaround time!

So the latest version of dub is looking a lot better than before. Now, if we could lift some of its functional limitations, we might finally have an acceptably performant and functional package/build tool.

In the meantime, it would seem that we need to look into why the "searching for versions" phase takes so long. Is it just a network-dependent thing (my network has bogonously slow DNS resolution, no thanks to my ISP), or is it something that can be fixed in dub itself? T -- VI = Visual Irritation
Apr 26
next sibling parent Seb <seb wilzba.ch> writes:
On Saturday, 27 April 2019 at 01:13:33 UTC, H. S. Teoh wrote:
 On Fri, Apr 26, 2019 at 05:37:35PM -0700, H. S. Teoh via 
 Digitalmars-d wrote:
 [...]
[...] OK, so I upgraded my dub to the latest git master, and did a quick and dirty test. Init a fresh new project with `-t vibe.d`, accept all default values (name, license, etc.), then run `time dub -v` to make the first build. [...]
10 seconds for a rebuild is still too much. One quick solution is to upgrade to ld.gold - it will halve your build time.
Apr 26
prev sibling parent reply Seb <seb wilzba.ch> writes:
On Saturday, 27 April 2019 at 01:13:33 UTC, H. S. Teoh wrote:
 In the meantime, it would seem that we need to look into why 
 the "searching for versions" phase takes so long.  Is it just a 
 network-dependent thing (my network has bogonously slow DNS 
 resolution, no thanks to my ISP), or is it something that can 
 be fixed in dub itself?
Thanks a lot for your interest in dub and investigating this!

First off, you should never delete your dub.selections.json, as it locks your project dependencies (it won't be used if your project is used as a library).

Anyhow, that being said, there are still a ton of things that can be done:

- The new single API request feature doesn't work 100% with optional dependencies (see the respective GitHub PR that introduced it for details)
- Dependencies could be checked in parallel
- The registry itself could be optimized more for caching (maybe even with a CDN proxy)
- ...

There are more experimental things we could try, like e.g. a fully local JSON index that is only updated when needed and supports partial updates (think apt), but I believe the bigger gains in user experience will be:

- initial fetch (important for fast CI turnaround times. First point of attack: parallelizing the fetching process)
- build times with existing dependencies (important as the default case. First points of attack: build independent dependencies in parallel, warn if ld.gold isn't the default on Linux, ...)
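The "check dependencies in parallel" idea is essentially this Python sketch: issue all the registry queries concurrently instead of one blocking request at a time. `fetch_description` is a stand-in for the real HTTP call (here it just sleeps to simulate a roundtrip); the package names are made up.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def fetch_description(name):
    """Stand-in for one registry HTTP roundtrip."""
    time.sleep(0.05)  # simulated network latency
    return {"name": name, "dependencies": []}

def fetch_all(names):
    """Overlap all requests; total wall time approaches one roundtrip."""
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(fetch_description, names))

start = time.time()
descriptions = fetch_all(["vibe-d", "vibe-core", "eventcore", "taggedalgebraic"])
print(len(descriptions), "fetched in", round(time.time() - start, 2), "s")
```

The same pattern applies to the fetch-and-unzip phase: since each archive is independent, downloading and extracting can overlap the same way.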
Apr 26
parent "Nick Sabalausky (Abscissa)" <SeeWebsiteToContactMe semitwist.com> writes:
On 4/27/19 1:45 AM, Seb wrote:
 
 - The new single API request feature doesn't work for all 100% with 
 optional dependencies (see the respective GitHub PR that introduced it 
 for details)
The API you're referring to, is it this one? https://code.dlang.org/api/packages/[PACKAGE_NAME]/info or something else?
Apr 27
prev sibling next sibling parent Seb <seb wilzba.ch> writes:
On Friday, 26 April 2019 at 20:28:37 UTC, H. S. Teoh wrote:
 Recently, Nick & some others (including myself) created a 
 Github project to discuss the possibility of a dub replacement 
 that has a better architecture. In the course of analysing our 
 current situation and where we'd like to be, Nick pointed out 
 something that immediately strikes me as low-hanging fruit that 
 could potentially improve dub's performance significantly.

 [...]
https://dlang.org/changelog/2.086.0.html#single-api-requests Though there are still a ton of other easy low-hanging fruits, which is why I started this thread.
Apr 26
prev sibling parent "Nick Sabalausky (Abscissa)" <SeeWebsiteToContactMe semitwist.com> writes:
On 4/26/19 4:28 PM, H. S. Teoh wrote:
 Recently, Nick & some others (including myself) created a Github project 
 to discuss the possibility of a dub replacement that has a better 
 architecture.
To clarify, this project is only intended to replace the package management side of dub. It deliberately *doesn't* replace the buildsystem side of dub. It's strictly buildsystem-agnostic (and even programming-language-agnostic), so you can bring-your-own-buildsystem, whether it be ninja, SCons, make, manual buildscripts, or indeed, even dub itself.

Also, to assuage any fears, one of the key goals is to support the existing DUB packages on code.dlang.org, with no work necessary on the part of DUB package authors. (Naturally, such packages will still be using dub as their buildsystem... or perhaps Atila's 'bud' once that's up and running.)

For anyone interested, here's the project on Github: https://github.com/Abscissa/DPak
Apr 27
prev sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, Apr 27, 2019 at 05:22:40AM +0000, Seb via Digitalmars-d wrote:
 On Saturday, 27 April 2019 at 01:13:33 UTC, H. S. Teoh wrote:
[...]
 OK, so I upgraded my dub to the latest git master, and did a quick
 and dirty test.  Init a fresh new project with `-t vibe.d`, accept
 all default values (name, license, etc.), then run `time dub -v` to
 make the first build.
 [...]
10 seconds for a rebuild is still too much. One quick solution is to upgrade to ld.gold - it will half your build time.
Just to be clear: the 10 seconds rebuild time is only if dub.selections is deleted before the rebuild. I only did that as a way of getting a rough measure of how fast/slow the dependency resolution algorithm is when dependencies have already been downloaded into the dub cache. It's not something I do habitually, or recommend. :-D

Without deleting dub.selections, the turnaround time is about 4-5 seconds. Which is not fast, but at least marginally acceptable. Certainly, 10 seconds would be an unacceptable edit-compile-test turnaround time on an empty project.

On Sat, Apr 27, 2019 at 05:45:50AM +0000, Seb via Digitalmars-d wrote:
 On Saturday, 27 April 2019 at 01:13:33 UTC, H. S. Teoh wrote:
 In the meantime, it would seem that we need to look into why the
 "searching for versions" phase takes so long.  Is it just a
 network-dependent thing (my network has bogonously slow DNS
 resolution, no thanks to my ISP), or is it something that can be
 fixed in dub itself?
Thanks a lot for your interest in dub and investigating this! First off you should never delete your dub.selections.json as it locks your project dependencies (it won't be used if your project is used as a library).
Yes, I only did that to get some idea of how efficient the dependency resolution is.
 Anyhow, that being said there are still a ton of things that can be
 done:
 
 - The new single API request feature doesn't work for all 100% with
   optional dependencies (see the respective GitHub PR that introduced
   it for details)
 - Dependencies could be checked in parallel
Parallel checking is a must, IMO, since it doesn't make sense to bottleneck on individual network requests when there's plenty of bandwidth to run multiple queries in parallel.
 - The registry itself could be optimized more for caching (maybe even
   with a CDN proxy)
I'm not sure we need to use a CDN yet, unless code.dlang.org is really getting that much traffic.

But what might help is if the registry allows more complex queries, like "fetch me all candidate packages satisfying constraints P, Q, R...". A single network roundtrip for the entire query, rather than separate network requests, one per package. Of course, this puts more load on the server, which may or may not be a good thing, I'm not sure.

[...]
 There are more experimental things we could try like e.g. a fully
 local JSON index that is only updated when needed and supports partial
 updates (think apt), but I believe the bigger gains in user-experience
 will be:
 
 - initial fetch (important for fast CI turnaround times. First point
   of attack: parallelizing the fetching process)
 - build times with existing dependencies (important as the default
   case.  First points of attack: build independent dependencies in
   parallel, warn if ld.gold isn't the default on Linux, ...)
Doesn't --parallel already do this?? If not, that certainly needs to be fixed. Sigh... what I wouldn't give for a generic topological walk framework that allows maximal parallelization... T -- Customer support: the art of getting your clients to pay for your own incompetence.
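As it happens, Python's standard library ships exactly the kind of generic parallelization-friendly topological walker wished for here: `graphlib.TopologicalSorter` (3.9+). Its `get_ready()`/`done()` protocol hands out every node whose dependencies are finished, so independent packages can be built concurrently. A sketch with made-up package names, grouping the walk into parallelizable "waves":

```python
from graphlib import TopologicalSorter

deps = {                      # package -> packages it depends on
    "app": {"vibe-d", "mir"},
    "vibe-d": {"vibe-core"},
    "vibe-core": set(),
    "mir": set(),
}

ts = TopologicalSorter(deps)
ts.prepare()
waves = []
while ts.is_active():
    ready = sorted(ts.get_ready())   # everything here can build in parallel
    waves.append(ready)
    for node in ready:
        ts.done(node)                # in real use: called as each build finishes
print(waves)                         # [['mir', 'vibe-core'], ['vibe-d'], ['app']]
```

In a real build the `done()` calls would come from worker threads as jobs complete, rather than draining each wave synchronously; the sorter handles the bookkeeping either way.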
Apr 29
parent reply "Nick Sabalausky (Abscissa)" <SeeWebsiteToContactMe semitwist.com> writes:
On 4/29/19 3:11 PM, H. S. Teoh wrote:
 But what might help is if the registry allows more complex queries, like
 "fetch me all candidate packages satisfying constraints P, Q, R... ."
 Single network roundtrip for the entire query, rather than separate
 network requests, once per package.  Of course, this puts more load on
 the server, which may or may not be a good thing, I'm not sure.
GraphQL: https://graphql.org/ (Forget where I came across it, but it was just recently.) And yes, you're pretty much right about the tradeoff between REST vs GraphQL. Choosing between them is about balancing "minimizing server trips" (GraphQL) vs "maximizing cacheability" (REST).
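The tradeoff can be made concrete with a small Python sketch. The endpoint paths and query schema below are invented for illustration (code.dlang.org offers no GraphQL API): per-package REST calls are each independently cacheable URLs, while one GraphQL-style query is a single roundtrip whose combined response a generic HTTP cache can't reuse.

```python
def rest_calls(packages):
    """One roundtrip per package; each URL caches independently."""
    return [f"/api/packages/{p}/info" for p in packages]

def graphql_query(packages):
    """One roundtrip total; the batched response is hard to cache."""
    names = ", ".join(f'"{p}"' for p in packages)
    return "{ packages(names: [%s]) { name versions { version } } }" % names

pkgs = ["vibe-d", "vibe-core", "eventcore"]
print(len(rest_calls(pkgs)), "REST roundtrips vs 1 GraphQL roundtrip")
print(graphql_query(pkgs))
```

For dependency resolution, where each fresh build asks a different combination of packages anyway, the cacheability loss may matter less than the saved roundtrips.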
May 02
parent John Colvin <john.loughran.colvin gmail.com> writes:
On Thursday, 2 May 2019 at 18:35:10 UTC, Nick Sabalausky 
(Abscissa) wrote:
 On 4/29/19 3:11 PM, H. S. Teoh wrote:
 But what might help is if the registry allows more complex 
 queries, like
 "fetch me all candidate packages satisfying constraints P, Q, 
 R... ."
 Single network roundtrip for the entire query, rather than 
 separate
 network requests, once per package.  Of course, this puts more 
 load on
 the server, which may or may not be a good thing, I'm not sure.
GraphQL: https://graphql.org/ (Forget where I came across it, but it was just recently.) And yes, you're pretty much right about the tradeoff between REST vs GraphQL. Choosing between them is about balancing "minimizing server trips" (GraphQL) vs "maximizing cacheability" (REST).
http://code.dlang.org/packages/graphqld
May 02