
digitalmars.D - Orbit - Package Manager - Specification/ideas

reply Jacob Carlborg <doob me.com> writes:
I've written a more formal specification of my ideas for a package 
manager for D.

https://github.com/jacob-carlborg/orbit/wiki/Orbit-Package-Manager-for-D

Note that I am exploring the possibility of using D as the language for 
all the files mentioned in the link above.

The current status is that building packages and installing them works, 
but is quite limited. There is no dependency tracking or central repository so far.

Please comment and suggest.

-- 
/Jacob Carlborg
Jul 13 2011
next sibling parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
Just curious, why do people prefer using extensionless files? I mean,
I can easily open them up in an editor, but I can't set up syntax
highlighting without knowing what type the file is.
Jul 13 2011
parent reply "Nick Sabalausky" <a a.a> writes:
"Andrej Mitrovic" <andrej.mitrovich gmail.com> wrote in message 
news:mailman.1613.1310586759.14074.digitalmars-d puremagic.com...
 Just curious, why do people prefer using extensionless files? I mean,
 I can easily open them up in an editor, but I can't set up syntax
 highlighting without knowing what type the file is.
Programmer's Notepad 2 lets me map an extension *or* a filename to a syntax highlighting profile. But I agree, it's still easier to just use an existing extension.

I suspect it's partly just buildsystem tradition, and partly because it's not very common to use/need a buildscript with a name other than the standard one.

-------------------------------
Not sent from an iPhone.
Jul 13 2011
parent Jacob Carlborg <doob me.com> writes:
On 2011-07-13 22:14, Nick Sabalausky wrote:
 "Andrej Mitrovic"<andrej.mitrovich gmail.com>  wrote in message
 news:mailman.1613.1310586759.14074.digitalmars-d puremagic.com...
 Just curious, why do people prefer using extensionless files? I mean,
 I can easily open them up in an editor, but I can't set up syntax
 highlighting without knowing what type the file is.
Programmer's Notepad 2 lets me map an extention *or* a filename to a syntax highlighting profile. But I agree, it's still easier to just use an existing extension. I suspect it's partly a just buildsystem tradition, and partly because it's not very common to use/need a buildscript with a name other than the standard one. ------------------------------- Not sent from an iPhone.
Something like that. -- /Jacob Carlborg
Jul 14 2011
prev sibling next sibling parent reply jdrewsen <jdrewsen nospam.com> writes:
On 13-07-2011 21:19, Jacob Carlborg wrote:
 I've written a more formal specification of my ideas for a package
 manager for D.

 https://github.com/jacob-carlborg/orbit/wiki/Orbit-Package-Manager-for-D

 Note that I am exploring the possibility of using D as the language for
 all the files mentioned in the link above.

 The current status is that building packages and installing them works,
 but quite limited. No dependency tracking or central repository so far.

 Please comment and suggest.
Nice work!

Orb - tool section:
Describe what the "use" command does. I guess it simply adds a required orb to the Orbfile in the current directory?

Orbfile section:
The "orb" command that accepts git/hg/svn repositories should also allow for a tag/commit parameter, I think. The "orb" command's second parameter could also be a list of several repositories to try in order, for fallback. I guess a user configuration file in ~/.orb could contain "source" commands as well.

Orb package section:
I think the versioning scheme should actually be set in stone. Most other packaging systems do that, and it makes your life much easier.

Central repository section:
Please let us settle on one format for the metadata.xxx file. My vote is for JSON or YAML; XML is too verbose for my taste. I also think that it should be compressed, e.g. metadata.json.bzip, since it will quickly grow quite large and the packaging system has to be fast.

Maybe add the build revision, /orb/<package>-<version>_<build>, since it is quite common to reupload the same package version with a simple build fix. Additionally, the architecture should be added to the name: /orb/<package>-<version>_<build>-<arch>. Now it is just like how Debian files look :) Maybe put the file in an arch subdir instead: /orb/<arch>/<package>-<version>_<build>

I'm really in favor of doing this in D instead of Ruby though.

/Jonas
Jul 13 2011
next sibling parent Jacob Carlborg <doob me.com> writes:
On 2011-07-13 22:20, jdrewsen wrote:
 On 13-07-2011 21:19, Jacob Carlborg wrote:
 I've written a more formal specification of my ideas for a package
 manager for D.

 https://github.com/jacob-carlborg/orbit/wiki/Orbit-Package-Manager-for-D

 Note that I am exploring the possibility of using D as the language for
 all the files mentioned in the link above.

 The current status is that building packages and installing them works,
 but quite limited. No dependency tracking or central repository so far.

 Please comment and suggest.
Nice work! Orb - tool section: Describe what the "use" command does. I guess it simply adds a required orb to the Orbfile in the current directory?
I'm not sure if this is possible or not (especially across multiple platforms), but I was thinking of something like this: say you have several versions of the dwt package installed and you want to use a particular version. You can run the command like this:

"orb use dwt --version 0.3.5"

Then when you link with the "dwt" library it will link with the 0.3.5 version.
 Orbfile section:

 The "orb" command that accepts git/hg/svn repositories should also allow
 for a tag/commit parameter I think.
Absolutely.
 The "orb" commands second parameter could also be a list of serveral
 repositories to try in order for fallback.
That might be a good idea.
 I guess a user configuration file in ~/.orb could contain "source"
 commands as well.
Sounds reasonable.
 Orb package section:

 I think the versioning scheme should be set in stone actually. Most
 other packaging systems does that. It makes your life much easier.
So you mean I shouldn't allow custom version strings?
 Central repository section:

 Please let us settle for one format for the metadata.xxx file. My vote
 is for json or yaml. XML is too verbose for my taste. I also think that
 it should be compressed e.g. metadata.json.bzip since it will quickly
 grow quite large and the packaging system has to be fast.
I haven't actually thought about what format to use. I just listed a few reasonable formats. Settling on one format is probably a good thing. Probably JSON, since both Phobos and Tango have JSON modules and it's available in most languages.
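
Just to illustrate the JSON route, here is a minimal D sketch of reading such a file with Phobos' std.json. The file name "metadata.json" and the fields used below are hypothetical, not anything from the spec:

import std.file : readText;
import std.json : parseJSON;

void main()
{
    // Hypothetical metadata file and fields; the real layout isn't settled yet.
    auto metadata = parseJSON(readText("metadata.json"));
    auto name = metadata["name"].str;            // e.g. "dwt"
    auto ver  = metadata["version"].str;         // e.g. "0.3.5"
    auto deps = metadata["dependencies"].array;  // list of required orbs
}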
 Maybe add the build revision on /orb/<package>-<version>_<build>
 since it is quite common reupload the same package version with at
 simple build fix.
If you read the version scheme, I was thinking about having the last number as a build number: major.minor.build

* Increment "build" when an implementation detail changes
* Increment "minor" when a non-breaking API change is made, i.e. adding a new function
* Increment "major" when a breaking API change is made, i.e. removing a function

Maybe there should be a patch level of some kind as well, and alpha, beta and release candidates.
 Additionally the architecture should be added to the name:
 /orb/<package>-<version>_<build>-<arch>
 Now it is just like how debian files look like :)
Hm, this will take some thought. I don't like the idea that the developer has to create multiple packages for what is logically a single package.
 Maybe put the file in an arch subdir
 /orb/<arch>/<package>-<version>_<build>
That is probably better. I haven't actually thought that much about binary packages. I was mostly thinking about source packages that need to be built when installing them. Source packages don't have these problems.
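
For illustration only, a tiny D sketch of assembling such an arch-aware path; the layout is just the one suggested above and nothing is decided:

import std.string : format;

// Hypothetical layout: /orb/<arch>/<package>-<version>_<build>
string orbPath(string arch, string name, string ver, uint build)
{
    return format("/orb/%s/%s-%s_%s", arch, name, ver, build);
}

unittest
{
    assert(orbPath("linux", "dwt", "1.3.2", 1) == "/orb/linux/dwt-1.3.2_1");
}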
 I'm really in favor of doing this in D instead of ruby though.

 /Jonas
I guess most people are. -- /Jacob Carlborg
Jul 14 2011
prev sibling parent reply Johannes Pfau <spam example.com> writes:
jdrewsen wrote:
On 13-07-2011 21:19, Jacob Carlborg wrote:
 I've written a more formal specification of my ideas for a package
 manager for D.

 https://github.com/jacob-carlborg/orbit/wiki/Orbit-Package-Manager-for-D

 Note that I am exploring the possibility of using D as the language
 for all the files mentioned in the link above.

 The current status is that building packages and installing them
 works, but quite limited. No dependency tracking or central
 repository so far.

 Please comment and suggest.
Nice work! Orb - tool section: Describe what the "use" command does. I guess it simply adds a required orb to the Orbfile in the current directory? Orbfile section: The "orb" command that accepts git/hg/svn repositories should also allow for a tag/commit parameter I think. The "orb" commands second parameter could also be a list of serveral repositories to try in order for fallback. I guess a user configuration file in ~/.orb could contain "source" commands as well. Orb package section: I think the versioning scheme should be set in stone actually. Most other packaging systems does that. It makes your life much easier. Central repository section: Please let us settle for one format for the metadata.xxx file. My vote is for json or yaml. XML is too verbose for my taste. I also think that it should be compressed e.g. metadata.json.bzip since it will quickly grow quite large and the packaging system has to be fast. Maybe add the build revision on /orb/<package>-<version>_<build> since it is quite common reupload the same package version with at simple build fix. Additionally the architecture should be added to the name: /orb/<package>-<version>_<build>-<arch> Now it is just like how debian files look like :) Maybe put the file in an arch subdir /orb/<arch>/<package>-<version>_<build> I'm really in favor of doing this in D instead of ruby though. /Jonas
And in Package SubTypes: no documentation subtype for now? Do Library and Dynamic library always contain the .d/.di headers?

In orbspec: the callbacks are run when the package is being built, correct? (The install callback could also be called when the package is being installed, but I think we don't need this functionality.) So those are the hooks to use any custom build system?

Are Check Platform and Set Platform really needed in the orbspec? I think this can be left to Drake or other build systems; can't we keep orbspecs platform agnostic?

I'd make build_dependencies and dependencies required fields. (Although empty is ok. Can we distinguish between not set and empty?)

-- 
Johannes Pfau
Jul 14 2011
parent reply Jacob Carlborg <doob me.com> writes:
On 2011-07-14 11:07, Johannes Pfau wrote:
 jdrewsen wrote:
 On 13-07-2011 21:19, Jacob Carlborg wrote:
 I've written a more formal specification of my ideas for a package
 manager for D.

 https://github.com/jacob-carlborg/orbit/wiki/Orbit-Package-Manager-for-D

 Note that I am exploring the possibility of using D as the language
 for all the files mentioned in the link above.

 The current status is that building packages and installing them
 works, but quite limited. No dependency tracking or central
 repository so far.

 Please comment and suggest.
Nice work! Orb - tool section: Describe what the "use" command does. I guess it simply adds a required orb to the Orbfile in the current directory? Orbfile section: The "orb" command that accepts git/hg/svn repositories should also allow for a tag/commit parameter I think. The "orb" commands second parameter could also be a list of serveral repositories to try in order for fallback. I guess a user configuration file in ~/.orb could contain "source" commands as well. Orb package section: I think the versioning scheme should be set in stone actually. Most other packaging systems does that. It makes your life much easier. Central repository section: Please let us settle for one format for the metadata.xxx file. My vote is for json or yaml. XML is too verbose for my taste. I also think that it should be compressed e.g. metadata.json.bzip since it will quickly grow quite large and the packaging system has to be fast. Maybe add the build revision on /orb/<package>-<version>_<build> since it is quite common reupload the same package version with at simple build fix. Additionally the architecture should be added to the name: /orb/<package>-<version>_<build>-<arch> Now it is just like how debian files look like :) Maybe put the file in an arch subdir /orb/<arch>/<package>-<version>_<build> I'm really in favor of doing this in D instead of ruby though. /Jonas
And in Package SubTypes: No documentation subtype for now?
I thought they were quite self-explanatory. But I can add documentation for them if necessary.
 Do Library and Dynamic library always contain the .d/.di headers?
Library (which is a static library) needs to contain headers, yes. A dynamic library will probably contain headers as well.
 In orbspec:
 The callbacks are run when the package is being build, correct? (The
 install callback could also be called when the package is being
 installed, but I think we don't need this functionality)
I see now that I have to give this some thought. The "build" callback could be called both when all files are built into the package and when the package later is built to be installed.
 So those are the hooks to use any custom build system?
Yes and no. There is also the "build" field which allow you yo call a custom shell script or custom build tool. Something like this: build :shell, "my_custom_build_script.sh"
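
Purely as an assumption about how the tool could implement that field, here is a minimal sketch of shelling out with Phobos' std.process (the helper name is made up):

import std.process : executeShell;

// Run the user-supplied build command and fail loudly on error.
void runBuildCommand(string command)
{
    auto result = executeShell(command);
    if (result.status != 0)
        throw new Exception("Build command failed:\n" ~ result.output);
}

// e.g. runBuildCommand("./my_custom_build_script.sh");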
 Are Check Platform and Set Platform really needed in the orbspec? I
 think this can be left to drake or other build systems, can't we keep
 orbspecs platform agnostic?
Absolutely, the orbspecs are as platform agnostic as possible. But I see no harm in having the possibility to check the platform. I'm pretty sure at least someone will find a need for it.
 I'd make build_dependencies and dependencies a required field.
 (Although empty is ok. Can we distinguish between not set and empty?)
I see no point in setting the build_dependencies and dependencies fields if the package doesn't have any dependencies. nil/null would be not set and an empty array (in this case) would be empty. -- /Jacob Carlborg
Jul 15 2011
parent reply Johannes Pfau <spam example.com> writes:
Jacob Carlborg wrote:
On 2011-07-14 11:07, Johannes Pfau wrote:
 jdrewsen wrote:
 On 13-07-2011 21:19, Jacob Carlborg wrote:
 I've written a more formal specification of my ideas for a package
 manager for D.

 https://github.com/jacob-carlborg/orbit/wiki/Orbit-Package-Manager-for-D

 Note that I am exploring the possibility of using D as the language
 for all the files mentioned in the link above.

 The current status is that building packages and installing them
 works, but quite limited. No dependency tracking or central
 repository so far.

 Please comment and suggest.
Nice work! Orb - tool section: Describe what the "use" command does. I guess it simply adds a required orb to the Orbfile in the current directory? Orbfile section: The "orb" command that accepts git/hg/svn repositories should also allow for a tag/commit parameter I think. The "orb" commands second parameter could also be a list of serveral repositories to try in order for fallback. I guess a user configuration file in ~/.orb could contain "source" commands as well. Orb package section: I think the versioning scheme should be set in stone actually. Most other packaging systems does that. It makes your life much easier. Central repository section: Please let us settle for one format for the metadata.xxx file. My vote is for json or yaml. XML is too verbose for my taste. I also think that it should be compressed e.g. metadata.json.bzip since it will quickly grow quite large and the packaging system has to be fast. Maybe add the build revision on /orb/<package>-<version>_<build> since it is quite common reupload the same package version with at simple build fix. Additionally the architecture should be added to the name: /orb/<package>-<version>_<build>-<arch> Now it is just like how debian files look like :) Maybe put the file in an arch subdir /orb/<arch>/<package>-<version>_<build> I'm really in favor of doing this in D instead of ruby though. /Jonas
And in Package SubTypes: No documentation subtype for now?
I though they were quite self explanatory. But I can add documentation for them if necessary.
Sorry, my question wasn't clear: I meant where will 'api documentation' for libraries go? In a special 'documentation' subtype or in the library packages?
 Do Library and Dynamic library always contain the .d/.di headers?
Library (which is static library) needs to contain headers yes. Dynamic library will probably contain headers as well.
 In orbspec:
 The callbacks are run when the package is being build, correct? (The
 install callback could also be called when the package is being
 installed, but I think we don't need this functionality)
I see now that I have to give this some thought. The "build" callback could be called both when all files are built into the package and when the package later is built to be installed.
 So those are the hooks to use any custom build system?
Yes and no. There is also the "build" field which allow you yo call a custom shell script or custom build tool. Something like this: build :shell, "my_custom_build_script.sh"
 Are Check Platform and Set Platform really needed in the orbspec? I
 think this can be left to drake or other build systems, can't we keep
 orbspecs platform agnostic?
Absolutely, the orbspecs are as platform agnostic as possible. But I see no harm in having the possibility to check the platform. I'm pretty sure at least someone will find a need for it.
 I'd make build_dependencies and dependencies a required field.
 (Although empty is ok. Can we distinguish between not set and empty?)
I see no point in setting the build_dependencies and dependencies fields if the package doesn't have any dependencies. nil/null would be not set and an empty array (in this case) would be empty.
Well, having to set it explicitly makes it less likely to forget those fields, but that's the only reason. -- Johannes Pfau
Jul 15 2011
parent Jacob Carlborg <doob me.com> writes:
On 2011-07-15 11:21, Johannes Pfau wrote:
 Jacob Carlborg wrote:
 On 2011-07-14 11:07, Johannes Pfau wrote:
 And in Package SubTypes:
 No documentation subtype for now?
I though they were quite self explanatory. But I can add documentation for them if necessary.
Sorry, my question wasn't clear: I meant where will 'api documentation' for libraries go? In a special 'documentation' subtype or in the library packages?
Aha. In the library package. Or is there a need for a package containing just API documentation?
 I see no point in setting the build_dependencies and dependencies
 fields if the package doesn't have any dependencies. nil/null would be
 not set and an empty array (in this case) would be empty.
Well, having to set it explicitly makes it less likely to forget those fields, but that's the only reason.
Ok, I don't think it would be necessary. -- /Jacob Carlborg
Jul 15 2011
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Jacob Carlborg" <doob me.com> wrote in message 
news:ivkrdj$ci4$1 digitalmars.com...
 I've written a more formal specification of my ideas for a package manager 
 for D.

 https://github.com/jacob-carlborg/orbit/wiki/Orbit-Package-Manager-for-D

 Note that I am exploring the possibility of using D as the language for 
 all the files mentioned in the link above.

 The current status is that building packages and installing them works, 
 but quite limited. No dependency tracking or central repository so far.

 Please comment and suggest.
Good start :) Here are my random thoughts on it (sorry if you've already answered some of them, I haven't read the rest of this thread yet):

- I find the "orb" vs "orbit" distinction a little confusing and unnecessary. Why not just call it all "orb"? (Or call everything "orbit"?)

- I think the files "orbfile", "{name}.orbspec" and "metadata" (the one inside the packages) should all just be one file. Frankly, the existence of all three of them confuses the hell out of me.

- Are the subtypes a "pick one" or "pick all that are included" deal? I think the latter would make more sense.

- Instead of "~> 0.3.4", what about "> 0.3+.4+"? Or "~> 0.3+.4+"? Or something vaguely like that. That would be more flexible.

- What happens if someone tries to upload a newer version with the same old version number? Or if they forcefully do it?

- Are version numbers allowed to have more or less than three parts? I think they should. Do version comparisons still work on version numbers with an arbitrary number of parts? Again, I think they should.

- Is version "2.10" after "2.1" or are they the same? What about "2.01" vs "2.1"? I would vote for "2.10 > 2.1 == 2.01", because I see version "parts" as distinct numbers separated by a period, rather than fractional numbers.

- It should allow boolean operators and parens for the version selections. For instance: "(>= 2.1 && <= 2.6 && != 2.4) || >= 3.4" (Ie, "Any version from 2.1 through 2.6, but 2.4 has critical bugs, and 3.4+ contains a 2.x compatibility layer.")

- There should be a "list" command and a "list {package name}" command to see what's installed. Maybe even "list {package name} {version expression}"? And maybe something to see what's available but not yet installed? They should all list the versions in guaranteed-sorted order so you can see which are the newest and oldest installed versions by looking at the first and last.

- At some point, 7z should be supported (and tarballs, of course).

- How should platforms be handled WRT packages? Ie, do all platforms need to be in the same orb package? Do they all need to be in separate packages? Either way? If they're not all required to be in the same package, how does orb find the package that has the right platform?

- Is it really necessary to have separate "build_dependencies" and "runtime_dependencies"? And why have both "runtime_dependencies" and "orbs" instead of just picking one name and sticking with it?

- Would it be a good idea to have an additional field "extra" for non-standard expandability? So people could add extra fields they felt would be useful, like "extra.foo" and "extra.bar", etc. And popular ones could eventually be formally added to the specification as just simply "foo" and "bar".

- What's the point of the fields "files", "libraries" and "executables"? Seems like extra work for no real benefit.

- This supports having multiple versions of the same package installed at the same time, right? If not, it should.

- I see there's an "upgrade" callback, but I didn't see an "upgrade" command. Is upgrading in or out? I think there should be an "upgrade" command that upgrades the installed versions of packages as far as it can *without* breaking any other installed packages that depend on it. Ex: if Foo requires Bar v2.6 or earlier, and SuperFoo requires Bar v2.7 or earlier, and Bar v2.3 is installed, but the latest Bar is v2.9, then "upgrade bar" would upgrade Bar v2.3 to v2.6 and display a message that says "Bar upgraded from 2.3 to 2.6, but the newest is 2.9, run "orb install Bar" to install the newest Bar, too." (Or maybe it should install both 2.6 and 2.7? Or one/both of those and 2.9?) For upgrading, we should also think about how to do upgrades without clobbering any of its settings.

- For POSTing a package to a repository, how does authentication work? All repos don't have to provide unrestricted upload access, do they?

- I'm not sure I understand how the "source" command works. Can it be provided more than once? And then it just picks the first one that actually has the package?

- The "central repositories" don't necessarily sound all that central, so they probably should just be called "repositories".

- What about default repositories? It should support that. (Kinda makes sense, otherwise how would "orb install xxx" know where to look?) And there should be simple commands to add/update/remove/list (and reorder?) the default repositories. If a package A specifies a dependency B and a repository for that dependency B, then which one has priority for downloading B: the default repositories or the repository specified by package A?

- Here's a problem with using an actual programming language for the orbfile/name.orbspec/metadata file: Suppose Orb version X uses Ruby version Y. Then, Orb X+1 comes out, which has Ruby upgraded to Y+1. Now, someone creates PackageA with an orbfile/name.orbspec/whatever that relies on Ruby Y+1. Someone else still has Orb version X and tries to get PackageA. Kaboom! Therefore, the orbfile/name.orbspec/whatever needs to specify which version of Orb (or Ruby) it requires. But now we have a chicken-and-the-egg problem: How can Orb X figure out that PackageA requires Orb X+1 if Orb X can't properly read PackageA's orbfile/name.orbspec/whatever?

- If I install D library "libfoo", then I should be able to write myapp.d with "include foo.blah;" and then do "dmd myapp.d" *without* manually specifying -Ipath_to_libfoo. It should just work. How will Orb handle that? And how will that interact with DVM? Ie, if I do "dvm use 2.051", then "orb install libfoo", then "dvm use 2.054", then I should still have access to libfoo without needing to specify -Ipath_to_libfoo.

- Where does everything get installed?

- In many ways this sounds a lot like a generalized DVM. Maybe Orb should eventually take over DVM's duties by making a DMD orb package.
Jul 15 2011
parent reply Jacob Carlborg <doob me.com> writes:
On 2011-07-15 09:14, Nick Sabalausky wrote:
 "Jacob Carlborg"<doob me.com>  wrote in message
 news:ivkrdj$ci4$1 digitalmars.com...
 I've written a more formal specification of my ideas for a package manager
 for D.

 https://github.com/jacob-carlborg/orbit/wiki/Orbit-Package-Manager-for-D

 Note that I am exploring the possibility of using D as the language for
 all the files mentioned in the link above.

 The current status is that building packages and installing them works,
 but quite limited. No dependency tracking or central repository so far.

 Please comment and suggest.
Good start :) Here are my random thoughts on it (sorry if you've already answered some of them, I haven't read the rest of this thread yet): - I find the "orb" vs "orbit" distinction a little confusing and unnecessary. Why not just call it all "orb"? (Or call everything "orbit"?)
This is how I'm thinking: Orbit is the name of the package manager, "orb" is what you type on the command line when interacting with the package manager. A package is also called an "orb".
 - I think the files "orbfile", "{name}.orbspec" and "metadata" (the one
 inside the packages) should all just be one file. Frankly, the existence of
 all three of them confuses the hell out of me.
Ok, this is how it works:

"orbfile" is a completely optional file listing all packages a project depends on; it shouldn't contain anything else. It allows you to run "orb install" in the project directory to install all needed packages. Just because a project uses packages doesn't mean it itself needs to be a package. The orbspec file requires more than just listing dependencies. I guess the tool could look for an .orbspec file and install all its dependencies, but what happens if it finds several .orbspec files in a directory? This is just like the Gemfile, for those who have used Bundler: http://gembundler.com/

"orbspec" is a specification of what a package looks like and what it contains. It's intended for creating a package out of a project.

"metadata" is basically the orbspec copied into the archive. I was first thinking about "compiling" down the Ruby code into YAML or JSON, but for now the Ruby code is included in the archive.
 - Are the subtypes a "pick one" or "pick all that are included" deal? I
 think that latter would make more sense.
I'm thinking this can be inferred from other fields like "executables" and "libraries". A package could contain several types.
 - Instead of "~>  0.3.4", what about ">  0.3+.4+"? Or ""~>  0.3+.4+"? Or
 something vaguely like that. That would be more flexible.
So you mean I can have a version like this: "> 0.3.4+" meaning any version from "0.3.4" to "0.3.9"? It might be a good idea.
 - What happens if someone tries to upload a newer version with the same old
 version number? Or if they forcefully do it?
I haven't thought about that. It probably shouldn't be possible.
 - Are version numbers allowed to have more or less than three parts? I think
 they should. Do version comparisons still work on version numbers with an
 arbitrary number of parts? Again, I think they should.
I was hoping to only have version numbers with three parts. If fewer parts are used it would probably be easiest to infer a 0 for the missing parts, i.e. "1" == "1.0.0". Is there a need for more parts than three? The whole idea of having three version parts is to be able to use "~> 0.3.4". But if "> 0.3.4+" were allowed, then an arbitrary number of parts could be allowed.
 - Is version "2.10" after "2.1" or are they the same? What about "2.01" vs
 "2.1"? I would vote for "2.10>  2.1 == 2.01", because I see version "parts"
 as distinct numbers separated by a period, rather than fractional numbers.
I haven't thought about that. I see version parts as distinct numbers as well.
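
For what it's worth, here is a minimal D sketch of that part-wise comparison, with missing parts inferred as 0 as discussed above (just an illustration, not anything from the spec):

import std.algorithm : map, max;
import std.array : array, split;
import std.conv : to;

// Compare dotted version strings part by part, opCmp-style.
// Missing parts are treated as 0, so "1" == "1.0.0".
int compareVersions(string a, string b)
{
    auto pa = a.split(".").map!(to!int).array;
    auto pb = b.split(".").map!(to!int).array;
    foreach (i; 0 .. max(pa.length, pb.length))
    {
        int x = i < pa.length ? pa[i] : 0;
        int y = i < pb.length ? pb[i] : 0;
        if (x != y)
            return x < y ? -1 : 1;
    }
    return 0;
}

unittest
{
    assert(compareVersions("2.10", "2.1") > 0);
    assert(compareVersions("2.01", "2.1") == 0);
    assert(compareVersions("1", "1.0.0") == 0);
}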
 - It should allow boolean operators and parens for the version selections.
 For instance: "(>= 2.1&&  <= 2.6&&  != 2.4) ||>= 3.4" (Ie, "Any version
 from 2.1 through 2.6, but 2.4 has critical bugs, and 3.4+ contains a 2.x
 compatibility layer.")
Hehe. Now this is getting quite complicated, but it would be nice to have, yes. Not something I will aim for in the first release.
 - There should be a "list" command and a "list {package name}" command to
 see what's installed. Maybe even "list {package name} {version expression}"?
 And maybe something too see what's available but not yet installed? They
 should all list the versions in guaranteed-sorted order so you can see which
 are the newest and oldest installed versions by looking at the first and
 last.
Absolutely, I've completely forgotten about the "list" command.
 - At some point, 7z should be supported (and tarballs, of course).
If someone is willing to create a D module or create bindings for any available libraries. Any library that forces a specific license is out of the question.
 - How should platforms be handled WRT packages? Ie, Do all platforms need to
 be in the same orb package? Do they all need to be in separate packages?
 Either way? If they're not all required to be in the same package, how does
 orb find the package that had the right platform?
Good questions. I haven't given binary packages that much thought. I was first going for a source-only package manager that requires all packages to be built before being installed. I see three options:

* One package for all platforms
* Include the platform in the package name and in the orbspec
* Have a sub path (on the server) for every platform, i.e.:

dorbit.org/orbs/linux/dwt-1.3.2.orb.zip
 - Is it really necessary to have separate "build_dependencies" and
 "runtime_dependencies"? And why have both "runtime_dependencies" and "orbs"
 instead of just picking one name and sticking with it?
There is no runtime dependency on something that is statically linked. Therefore it would be unnecessary to do a permanent installation of those dependencies. The user could get an option to either permanently install these dependencies or to temporarily install them.

The other way around would be possible as well. A package can depend on a dynamically linked library and use it only through function pointers. Then the package would only have a runtime dependency on the library, i.e. it wouldn't be needed when building.
 - Would it be a good idea to have and additional field "extra" for
 non-standard expandability? So people could add extra fields they felt would
 be useful, like "extra.foo" and "extra.bar", etc. And popular ones could
 eventually be formally added to the specification as just simply "foo" and
 "bar".
I guess there could be an "extra" field accepting a hash. How would the field be used, by other tools?
 - What's the point of the fields "files", "libraries" and "executables"?
 Seems like extra work for no real benefit.
"files" is basically all files that should be put in the package. "libraries" and "executables" are all libraries and executables that should be installed (regardless if they are pre-compiled or built during installation).
 - This supports having multiple versions of the same package installed at
 the same time, right? If not it should.
Yes, that's the whole point of having versions, as I see it.
 - I see there's an "upgrade" callback, but I didn't see an "upgrade"
 command. Is upgrading in or out? I think that there should be an "upgrade"
 command that upgrades the installed versions of packages as far as it can
 *without* breaking any other installed packages that depend on it. Ex, if
 Foo requires Bar v2.6 or earlier, and SuperFoo requires Bar v2.7 or earlier,
 and Bar v2.3 is installed, but the latest Bar is v2.9, then "upgrade bar"
 would upgrade Bar v2.3 to v2.6 and display a message that says "Bar upgraded
 from 2.3 to 2.6, but the newest is 2.9, run "orb install Bar" to install the
 newest Bar, too." (Or maybe it should install both 2.6 and 2.7? Or one/both
 of those and 2.9?) For upgrading, we should also think about how to do
 upgrades without clobbering any of it's settings.
I guess I didn't think this through. I think it will require some thought.
 - For POSTing a package to a repository, how does authentication work? All
 repos don't have to provide unrestricted upload access do they?
I haven't thought about this beyond that there will be some kind of authentication. Probably HTTP basic authentication.
 - I'm not sure I understand how the "source" command works. Can it be
 provided more than once? And then it just picks the first one that actually
 has the package?
The "source" command specifies a path to a repository where to fetch packages from. I haven't thought about if the it can be provided more than once. It might be a good idea.
 - The "central repositories" don't necessarily sound all that central, so
 they probably should just be called "repositories".
Ok.
 - What about default repositories? It should support that. (Kinda makes
 sense, otherwise how would "orb install xxx" know where to look?) And there
 should be simple commands to add/update/remove/list (and reorder?) the
 default repositories. If a package A specifies a dependency B and a
 repository for that dependency B, then which one has priority for
 downloading B: The default repositories or the repository specified by
 package A?
Yes, there will be a default repository. As someone else suggested, there could be an "orbfile" in the user's home directory that can contain default settings, like "source". Or a more general config file for Orbit.
 - Here's a problem with using an actual programming language for the
 orbfile/name.orbspec/metadata file: Suppose Orb version X uses Ruby version
 Y. Then, Orb X+1 comes out and which has Ruby upgraded to Y+1. Now, someone
 creates PackageA with an orbfile/name.orbspec/whatever that relies on Ruby
 Y+1. Someone else still has Orb version X and tries to get PackageA. Kaboom!
Yeah, that is a problem. But I wonder how much this will be a problem in practice. I don't think it will be that big a problem in practice using Ruby as the language. On the other hand, using D it will be a big problem. D breaks something in every release.
 Therefore, the orbfile/name.orbspec/whatever needs to specify which version
 of Orb (or Ruby) it requires. But now we have a chicken-and-the-egg problem:
 How can Orb X figure out that PackageA requires Orb X+1 if Orb X can't
 properly read PackageA's orbfile/name.orbspec/whatever?
That is a problem. If the metadata is "compiled" YAML/JSON then we can get around this.
 - If I install D library "libfoo", then I should be able write myapp.d with
 "include foo.blah;" and then do "dmd myapp.d" *without* manually
 specifying -Ipath_to_libfoo. It should just work. How will Orb handle that?
It can't. The solution to this is a build tool, as I see it. The build tool knows about the package manager and lets you specify dependencies on packages. Think about Drake; it could look like this:

target("myapp.d", {
    orb("libfoo"); // automatically links with "libfoo" and includes its header path.
});

-Ipath_to_libfoo somehow needs to be passed to the compiler, and the library needs to be linked as well. Maybe it would be possible to manipulate dmd.conf/sc.ini, but this seems very complicated.
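
To make that concrete, here is a sketch of the kind of compiler invocation such a build tool might end up generating. The per-package directory layout under /usr/local/orbit is entirely made up; only the prefix comes from this thread:

import std.process : execute;
import std.stdio : writeln;

void main()
{
    // Hypothetical install layout; only the -I and -L flags matter here.
    auto args = ["dmd", "myapp.d",
                 "-I/usr/local/orbit/orbs/libfoo-1.0.0/import",
                 "-L-L/usr/local/orbit/orbs/libfoo-1.0.0/lib",
                 "-L-lfoo"];
    auto result = execute(args);
    if (result.status != 0)
        writeln(result.output);
}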
 And how will that interact with DVM? Ie, if I do "dvm use 2.051", then "orb
 install libfoo", then "dvm use 2.054", then I should still have access to
 libfoo without needing to specify -Ipath_to_libfoo.
If you do "dvm use 2.051", then "orb install libfoo" then "libfoo" will only be installed for dmd 2.051, that's the whole point. You would have to run "orb install libfoo" again after switching compiler. Orbit could of course share the same package if possible.
 - Where does everything get installed?
For now, on Posix, in /usr/local/orbit. If used through DVM it will be installed somewhere in ~/.dvm.
 - In many ways this sounds a lot like a generalized DVM. Maybe Orb should
 eventually take over DVM's duties by making a DMD orb package.
No, I don't think so. DVM is quite specialized in what it does, manipulating the PATH variable (or the registry on Windows) to be able to do what it does. I don't want to mix DVM and Orbit.

-- 
/Jacob Carlborg
Jul 15 2011
next sibling parent reply Johannes Pfau <spam example.com> writes:
Jacob Carlborg wrote:
On 2011-07-15 09:14, Nick Sabalausky wrote:
 "Jacob Carlborg"<doob me.com>  wrote in message
 news:ivkrdj$ci4$1 digitalmars.com...
 I've written a more formal specification of my ideas for a package
 manager for D.

 https://github.com/jacob-carlborg/orbit/wiki/Orbit-Package-Manager-for-D

 Note that I am exploring the possibility of using D as the language
 for all the files mentioned in the link above.

 The current status is that building packages and installing them
 works, but quite limited. No dependency tracking or central
 repository so far.

 Please comment and suggest.
Good start :) Here are my random thoughts on it (sorry if you've already answered some of them, I haven't read the rest of this thread yet): - I find the "orb" vs "orbit" distinction a little confusing and unnecessary. Why not just call it all "orb"? (Or call everything "orbit"?)
This is how I'm thinking: Orbit is the name of the package manager, "orb" is what you type on the command line when interacting with the package manager. A package is also called an "orb".
 - I think the files "orbfile", "{name}.orbspec" and "metadata" (the
 one inside the packages) should all just be one file. Frankly, the
 existence of all three of them confuses the hell out of me.
Ok, this is how it works: "orbfile" is a completely optional file listing all packages a project depends on, it shouldn't contain anything else. It allows you to run "orb install" in the project directory to install all needed packages. Just because a project uses packages doesn't mean it self needs to be a package. The orbspec file requires more than just listing dependencies. I guess the tool could look for an .orbspec file and install all its dependencies. But what happends if it finds several .orbspec files in a directory. This is just like the Gemfile for those how have used bundler: http://gembundler.com/ "orbspec" is a specification of how a package looks like and what it contains. It's intended for creating a package out of a project. "metadata" is basically the orbspec copied into the archive. I was first thinking about "compiling" down the Ruby code into YAML or JSON but for now the Ruby code is included in the archive.
 - Are the subtypes a "pick one" or "pick all that are included"
 deal? I think that latter would make more sense.
I'm thinking this can be inferred from other fields like "executables" and "libraries". A package could contain several types.
 - Instead of "~>  0.3.4", what about ">  0.3+.4+"? Or ""~>
 0.3+.4+"? Or something vaguely like that. That would be more
 flexible.
So you mean I can have a version like this: "> 0.3.4+" meaning any version from "0.3.4" to "0.3.9"? It might be a good idea.
 - What happens if someone tries to upload a newer version with the
 same old version number? Or if they forcefully do it?
I haven't thought about that. It probably shouldn't be possible.
 - Are version numbers allowed to have more or less than three parts?
 I think they should. Do version comparisons still work on version
 numbers with an arbitrary number of parts? Again, I think they
 should.
I was hoping to only have version numbers with three parts. If fewer parts are used it would probably easiest to infer a 0 for the missing parts, i.e. "1" == "1.0.0". Is there a need for more parts than three? The whole idea of having three version parts is to be able to use "~> 0.3.4". But if "> 0.3.4+" would be allowed then arbitrary number of parts could be allowed.
 - Is version "2.10" after "2.1" or are they the same? What about
 "2.01" vs "2.1"? I would vote for "2.10>  2.1 == 2.01", because I
 see version "parts" as distinct numbers separated by a period,
 rather than fractional numbers.
I haven't thought about that. I see version parts as distinct numbers as well.
 - It should allow boolean operators and parens for the version
 selections. For instance: "(>= 2.1&&  <= 2.6&&  != 2.4) ||>=
 3.4" (Ie, "Any version from 2.1 through 2.6, but 2.4 has critical
 bugs, and 3.4+ contains a 2.x compatibility layer.")
Hehe. Now this is getting quite complicate, but it would be nice to have yes. Not something I will aim for in the first release.
 - There should be a "list" command and a "list {package name}"
 command to see what's installed. Maybe even "list {package name}
 {version expression}"? And maybe something too see what's available
 but not yet installed? They should all list the versions in
 guaranteed-sorted order so you can see which are the newest and
 oldest installed versions by looking at the first and last.
Absolutely, I've completely forgot about the "list" command.
 - At some point, 7z should be supported (and tarballs, of course).
If someone is willing to create D module of create bindings for any available libraries. Any libraries the forces a specific license is out of the question.
The LZMA SDK is public domain (LZMA is the compression algorithm used in 7z). XZ Utils has a library called liblzma which is based on the LZMA SDK, is easier to use and is also public domain. Tar should be easy enough to implement in D. Seems like there are more or less recent D bindings for liblzma:

http://www.digitalmars.com/webnews/newsgroups.php?art_group=digitalmars.D&article_id=117344

But someone still has to combine those parts together. LZMA bindings would also fit nicely into Phobos, as we already have zlib in there.
 - How should platforms be handled WRT packages? Ie, Do all platforms
 need to be in the same orb package? Do they all need to be in
 separate packages? Either way? If they're not all required to be in
 the same package, how does orb find the package that had the right
 platform?
I good questions. I haven't given binary packages that much thought. I was first going for a source only package manager that requires all packages to be built before installed. I see three options: * One package for all platforms * Include the platform in the package name and in the orbspec * Have a sub path (on the server) for every platform, i.e.: dorbit.org/orbs/linux/dwt-1.3.2.orb.zip
 - Is it really necessary to have separate "build_dependencies" and
 "runtime_dependencies"? And why have both "runtime_dependencies" and
 "orbs" instead of just picking one name and sticking with it?
There is no runtime dependency on something that is statically linked. Therefore it would be unnecessary to do a permanent installation on those dependencies. The user could get an option to either permanently installed these dependencies or to temporarily install them. The other way around would be possible as well. A package can depend on a dynamically linked library and use it only through function pointers. Then the package would only have a runtime dependency on the library, i.e. it wouldn't be needed when building.
 - Would it be a good idea to have and additional field "extra" for
 non-standard expandability? So people could add extra fields they
 felt would be useful, like "extra.foo" and "extra.bar", etc. And
 popular ones could eventually be formally added to the specification
 as just simply "foo" and "bar".
I guess there could be an "extra" field accepting a hash. How would the field be used, by other tools?
 - What's the point of the fields "files", "libraries" and
 "executables"? Seems like extra work for no real benefit.
"files" is basically all files that should be put in the package. "libraries" and "executables" are all libraries and executables that should be installed (regardless if they are pre-compiled or built during installation).
 - This supports having multiple versions of the same package
 installed at the same time, right? If not it should.
Yes, that's the whole point of having version, as I see it.
 - I see there's an "upgrade" callback, but I didn't see an "upgrade"
 command. Is upgrading in or out? I think that there should be an
 "upgrade" command that upgrades the installed versions of packages
 as far as it can *without* breaking any other installed packages
 that depend on it. Ex, if Foo requires Bar v2.6 or earlier, and
 SuperFoo requires Bar v2.7 or earlier, and Bar v2.3 is installed,
 but the latest Bar is v2.9, then "upgrade bar" would upgrade Bar
 v2.3 to v2.6 and display a message that says "Bar upgraded from 2.3
 to 2.6, but the newest is 2.9, run "orb install Bar" to install the
 newest Bar, too." (Or maybe it should install both 2.6 and 2.7? Or
 one/both of those and 2.9?) For upgrading, we should also think
 about how to do upgrades without clobbering any of it's settings.
I guess I didn't think this through. I think it will require some thought.
 - For POSTing a package to a repository, how does authentication
 work? All repos don't have to provide unrestricted upload access do
 they?
I haven't thought about this more than there will be some kind of authentication. Probably HTTP basic authentication.
 - I'm not sure I understand how the "source" command works. Can it be
 provided more than once? And then it just picks the first one that
 actually has the package?
The "source" command specifies a path to a repository where to fetch packages from. I haven't thought about if the it can be provided more than once. It might be a good idea.
 - The "central repositories" don't necessarily sound all that
 central, so they probably should just be called "repositories".
Ok.
 - What about default repositories? It should support that. (Kinda
 makes sense, otherwise how would "orb install xxx" know where to
 look?) And there should be simple commands to add/update/remove/list
 (and reorder?) the default repositories. If a package A specifies a
 dependency B and a repository for that dependency B, then which one
 has priority for downloading B: The default repositories or the
 repository specified by package A?
Yes there will be a default repository. As someone else suggested there could be an "orbfile" in the users home directory that can contain default settings like for "source". Or a more general config file for orbit.
 - Here's a problem with using an actual programming language for the
 orbfile/name.orbspec/metadata file: Suppose Orb version X uses Ruby
 version Y. Then, Orb X+1 comes out and which has Ruby upgraded to
 Y+1. Now, someone creates PackageA with an
 orbfile/name.orbspec/whatever that relies on Ruby Y+1. Someone else
 still has Orb version X and tries to get PackageA. Kaboom!
Yeah, that is a problem. But I wonder how much this will be a problem in practice. I don't think this will so big problem in practice using Ruby as the language. On the other hand, using D, will be a big problem. D breaks something in every release.
 Therefore, the orbfile/name.orbspec/whatever needs to specify which
 version of Orb (or Ruby) it requires. But now we have a
 chicken-and-the-egg problem: How can Orb X figure out that PackageA
 requires Orb X+1 if Orb X can't properly read PackageA's
 orbfile/name.orbspec/whatever?
That is a problem. If the metadata is "compiled" YAML/JSON then we can get around this.
 - If I install D library "libfoo", then I should be able write
 myapp.d with "include foo.blah;" and then do "dmd myapp.d" *without*
 manually specifying -Ipath_to_libfoo. It should just work. How will
 Orb handle that?
It can't. The solution to this is a build tool, as I see it. The build tool knows about the package manager and let you specify dependencies on package. Think about Drake, it could look like this: target("myapp.d", { orb("libfoo"); // automatically links with "libfoo" and includes its header path. }); -Ipath_to_libfoo needs somehow be passed to the compiler, and linking with the library as well. Maybe it would be possible to manipulate the dmd.conf/sc.ini but this seems very complicated.
 And how will that interact with DVM? Ie, if I do "dvm use 2.051",
 then "orb install libfoo", then "dvm use 2.054", then I should still
 have access to libfoo without needing to specify -Ipath_to_libfoo.
If you do "dvm use 2.051", then "orb install libfoo" then "libfoo" will only be installed for dmd 2.051, that's the whole point. You would have to run "orb install libfoo" again after switching compiler. Orbit could of course share the same package if possible.
 - Where does everything get installed?
For now, on Posix, in /usr/local/orbit. If used through DVM it will be installed in somewhere in ~/.dvm.
 - In many ways this sounds a lot like a generalized DVM. Maybe Orb
 should eventually take over DVM's duties by making a DMD orb package.
No, I don't think so. DVM is quite specialized in what it does. Manipulating the PATH variable (or the registy on Windows) to be able to do what it does. I don't what to mix DVM and Orbit.
-- Johannes Pfau
Jul 15 2011
parent Jacob Carlborg <doob me.com> writes:
On 2011-07-15 19:11, Johannes Pfau wrote:
 The lzma SDK is public domain (lzma is the compression algorithm used
 in 7z), xzutils has a library called liblzma which is based on the lzma
 sdk, easier to use and also public domain. Tar should be easy enough to
 implement in d. Seems like there are more or less recent D
 bindings for liblzma:
 http://www.digitalmars.com/webnews/newsgroups.php?art_group=digitalmars.D&article_id=117344
 But someone still has to combine those parts together. LZMA bindings
 would also fit nicely into phobos, as we already have zlib in there.
When someone actually does this I can use lzma/7z instead of/in addition to zip. But I have no interest myself in creating bindings or building a D module. -- /Jacob Carlborg
Jul 15 2011
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Jacob Carlborg" <doob me.com> wrote in message 
news:ivpkdp$hq4$1 digitalmars.com...
 On 2011-07-15 09:14, Nick Sabalausky wrote:
 "Jacob Carlborg"<doob me.com>  wrote in message
 news:ivkrdj$ci4$1 digitalmars.com...
 I've written a more formal specification of my ideas for a package 
 manager
 for D.

 https://github.com/jacob-carlborg/orbit/wiki/Orbit-Package-Manager-for-D

 Note that I am exploring the possibility of using D as the language for
 all the files mentioned in the link above.

 The current status is that building packages and installing them works,
 but quite limited. No dependency tracking or central repository so far.

 Please comment and suggest.
Good start :) Here are my random thoughts on it (sorry if you've already answered some of them, I haven't read the rest of this thread yet): - I find the "orb" vs "orbit" distinction a little confusing and unnecessary. Why not just call it all "orb"? (Or call everything "orbit"?)
This is how I'm thinking: Orbit is the name of the package manager, "orb" is what you type on the command line when interacting with the package manager. A package is also called an "orb".
But why should the name of the project and the name of the tool be different? It would be less confusing to have a package manager named Orb that gets invoked via "orb". (Or a package manager named Orbit that gets invoked via "orbit".)
 - I think the files "orbfile", "{name}.orbspec" and "metadata" (the one
 inside the packages) should all just be one file. Frankly, the existence 
 of
 all three of them confuses the hell out of me.
Ok, this is how it works: "orbfile" is a completely optional file listing all packages a project depends on, it shouldn't contain anything else. It allows you to run "orb install" in the project directory to install all needed packages. Just because a project uses packages doesn't mean it self needs to be a package. The orbspec file requires more than just listing dependencies. I guess the tool could look for an .orbspec file and install all its dependencies. But what happends if it finds several .orbspec files in a directory. This is just like the Gemfile for those how have used bundler: http://gembundler.com/ "orbspec" is a specification of how a package looks like and what it contains. It's intended for creating a package out of a project. "metadata" is basically the orbspec copied into the archive. I was first thinking about "compiling" down the Ruby code into YAML or JSON but for now the Ruby code is included in the archive.
I think the "metadata" would make sence if, and *only* if, it were "compiled" down to a pure-data format. As for orbfile vs orbspec: I still think these could and should be merged. I don't need a separate buildscript for different actions. *Everything* the buildsystem does is configured in one centralized script (and whatever other files it may or may not choose to delegate out to). I see no reason why the same can't or shouldn't be done with package management. If your "orbblah" file doesn't contain the parts needed for orb to auto-create a package, then you just can't have orb auto-create your package. Simple. Also, I don't see why the name of "orbspec" needs to be prefixed with the package's name. Just like an orbfile, or makefile, or rakefile, etc., you already know what package it's for from what directory/package it's inside of.
 - Instead of "~>  0.3.4", what about ">  0.3+.4+"? Or ""~>  0.3+.4+"? Or
 something vaguely like that. That would be more flexible.
So you mean I can have a version like this: "> 0.3.4+" meaning any version from "0.3.4" to "0.3.9"? It might be a good idea.
Well, any version that's >= 0.3.4 and < 0.4.0
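
In code terms, that reading of "~> 0.3.4" is just this check (a trivial sketch assuming three-part versions, illustrative only):

// ">= 0.3.4 and < 0.4.0": major and minor fixed, patch free to grow.
bool matchesTilde034(int major, int minor, int patch)
{
    return major == 0 && minor == 3 && patch >= 4;
}

unittest
{
    assert(matchesTilde034(0, 3, 4));
    assert(matchesTilde034(0, 3, 17));  // not capped at 0.3.9
    assert(!matchesTilde034(0, 4, 0));
    assert(!matchesTilde034(0, 3, 3));
}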
 - What happens if someone tries to upload a newer version with the same 
 old
 version number? Or if they forcefully do it?
I haven't thought about that. It probably shouldn't be possible.
That would make sense. But, it could always be forcefully done by doing it manually without going through orb. Don't know if that's worth worrying about though.
 - Are version numbers allowed to have more or less than three parts? I 
 think
 they should. Do version comparisons still work on version numbers with an
 arbitrary number of parts? Again, I think they should.
I was hoping to only have version numbers with three parts. If fewer parts are used it would probably easiest to infer a 0 for the missing parts, i.e. "1" == "1.0.0".
I agree with inferring 0 for any parts not provided.
 Is there a need for more parts than three? The whole idea of having three 
 version parts is to be able to use "~>  0.3.4". But if "> 0.3.4+" would be 
 allowed then arbitrary number of parts could be allowed.
I think it should be up to the project developer to choose how many parts are appropriate for their project. And I don't see any problems with doing that.
 - It should allow boolean operators and parens for the version 
 selections.
 For instance: "(>= 2.1&&  <= 2.6&&  != 2.4) ||>= 3.4" (Ie, "Any version
 from 2.1 through 2.6, but 2.4 has critical bugs, and 3.4+ contains a 2.x
 compatibility layer.")
Hehe. Now this is getting quite complicate, but it would be nice to have yes. Not something I will aim for in the first release.
Fair enough. :)
 - How should platforms be handled WRT packages? Ie, Do all platforms need 
 to
 be in the same orb package? Do they all need to be in separate packages?
 Either way? If they're not all required to be in the same package, how 
 does
 orb find the package that had the right platform?
Good questions. I haven't given binary packages that much thought. I was first going for a source-only package manager that requires all packages to be built before being installed.
I think that might be a bit limiting. And keep in mind, just because something is source-only doesn't mean it'll actually work on more than one platform.
 I see three options:

 * One package for all platforms
 * Include the platform in the package name and in the orbspec
 * Have a sub path (on the server) for every platform, i.e.:

 dorbit.org/orbs/linux/dwt-1.3.2.orb.zip
I think the first option makes the most sense, and the second option sounds like it would be a sensible workaround when someone really wants things separate.

I'm not sure I like the third option because that wouldn't work for repositories that aren't explicitly chosen unless you made it a standard naming system, but then that creates extra requirements for the server, and then how do you know whether to GET from the platform-specific directory or the platform-independent/cross-platform directory, etc. And what about having a Windows package and a combination OSX/Linux package? It's workable, but I'm not sure it's worth it in light of the first two options.

Although, on second thought, if a good system for the third option could be worked out, then maybe that would be best. (But maybe just not for the initial release?)
 - Is it really necessary to have separate "build_dependencies" and
 "runtime_dependencies"? And why have both "runtime_dependencies" and 
 "orbs"
 instead of just picking one name and sticking with it?
There is no runtime dependency on something that is statically linked. Therefore it would be unnecessary to do a permanent installation of those dependencies. The user could get an option to either permanently install these dependencies or to temporarily install them.
So you mean a statically-linked dependency then? "Runtime dependency" makes me think it's needed at runtime and thus *would* need to be installed. Actually, what you're describing sounds just like a build dependency to me. Is there really all that much benefit to doing a temporary install of things temporarily needed anyway? And is it that common of a need? I'm wondering if it's worth the extra complexity.
 The other way around would be possible as well. A package can depend on a 
 dynamically linked library and use it only through function pointers. Then 
 the package would only have a runtime dependency on the library, i.e. it 
 wouldn't be needed when building.
Hmm I think I still don't understand the whole runtime dep vs build dep matter.
 - Would it be a good idea to have and additional field "extra" for
 non-standard expandability? So people could add extra fields they felt 
 would
 be useful, like "extra.foo" and "extra.bar", etc. And popular ones could
 eventually be formally added to the specification as just simply "foo" 
 and
 "bar".
I guess there could be an "extra" field accepting a hash. How would the field be used, by other tools?
That would be up to them. The idea is just that the package metadata is extendable for whatever uses may arise.
 - What's the point of the fields "files", "libraries" and "executables"?
 Seems like extra work for no real benefit.
"files" is basically all files that should be put in the package. "libraries" and "executables" are all libraries and executables that should be installed (regardless if they are pre-compiled or built during installation).
So it's purely for creating a package? If not, then I'm afraid I still don't understand. If so, then I still don't see the use of "libraries" and "executables". Also I think they should all be optional (if they aren't already) because all that info is already going to exist in the buildscript, and so a lot of people (like me) are going to want to generate the package via their build system rather than duplicating all that work and information just so that the package manager can do it "automatically".
 - For POSTing a package to a repository, how does authentication work? 
 All
 repos don't have to provide unrestricted upload access do they?
I haven't thought about this more than there will be some kind of authentication. Probably HTTP basic authentication.
Other things to think about are: How does the user actually specify the login info? Wouldn't want them in the orbfile/orbspec/metadata/etc.
 - If I install D library "libfoo", then I should be able write myapp.d 
 with
 "include foo.blah;" and then do "dmd myapp.d" *without* manually
 specifying -Ipath_to_libfoo. It should just work. How will Orb handle 
 that?
It can't. The solution to this is a build tool, as I see it.
Ouch. I have to say, I don't like that at all. The way I see it, this is one of the two primary responsibilities of a package manager for a language (the other being automatic dependency handling). Without it, we're not much ahead of where we are right now.
 The build tool knows about the package manager and let you specify 
 dependencies on package. Think about Drake, it could look like this:

 target("myapp.d", {
     orb("libfoo"); // automatically links with "libfoo" and includes its 
 header path.
 });
That doesn't work: If orb is managing the packages, then how is dake/drake/etc supposed to know what path to include for -I? Orb knows where it's sticking packages. The build systems don't know that. This also reminds me of another issue: For D libraries, the orbfile/orbspec/metadata/whatever needs to give the relative path for the base include directory.
 -Ipath_to_libfoo needs somehow be passed to the compiler, and linking with 
 the library as well. Maybe it would be possible to manipulate the 
 dmd.conf/sc.ini but this seems very complicated.
I think what we need is to have DVM handle dmd.conf/sc.ini separately from the compiler version; i.e., Orb and DVM need to work together to manage dmd.conf/sc.ini. It's complicated, but I think it's necessary.

This makes me think of another issue, too: How does the buildscript know what version of a lib orb has chosen? Since orb packages (rightfully) are allowed to specify a range of versions instead of just one specific version, I think that will be needed.
 And how will that interact with DVM? Ie, if I do "dvm use 2.051", then 
 "orb
 install libfoo", then "dvm use 2.054", then I should still have access to
 libfoo without needing to specify -Ipath_to_libfoo.
If you do "dvm use 2.051", then "orb install libfoo" then "libfoo" will only be installed for dmd 2.051, that's the whole point. You would have to run "orb install libfoo" again after switching compiler.
I think that's very, very bad. I should be able to compile something with a different version of DMD without reinstalling everything. "dvm use xxx" should *just work*, just as it already does now.
 - In many ways this sounds a lot like a generalized DVM. Maybe Orb should
 eventually take over DVM's duties by making a DMD orb package.
No, I don't think so. DVM is quite specialized in what it does, manipulating the PATH variable (or the registry on Windows) to be able to do what it does. I don't want to mix DVM and Orbit.
Orb manages different versions of arbitrary packages. DVM manages different versions of one specific package. If they're so different despite all that, then either Orbit isn't generalized enough, or DVM is too specialized.
Jul 17 2011
parent reply Jacob Carlborg <doob me.com> writes:
On 2011-07-18 00:26, Nick Sabalausky wrote:
 "Jacob Carlborg"<doob me.com>  wrote in message
 But why should the name of the project and the name of the tool be
 different? It would be less confusing to have a package manager named Orb
 that gets invoked via "orb". (Or a package manager named Orbit that gets
 invoked via "orbit".
I don't know. It's also easy to separate the library code in one package named "orbit" and the application code in one package named "orb".
 I think the "metadata" would make sence if, and *only* if, it were
 "compiled" down to a pure-data format.
Ok, fair enough. That was the initial idea, to have the "metadata" file in a pure-data format.
 As for orbfile vs orbspec: I still think these could and should be merged. I
 don't need a separate buildscript for different actions. *Everything* the
 buildsystem does is configured in one centralized script (and whatever other
 files it may or may not choose to delegate out to). I see no reason why the
 same can't or shouldn't be done with package management.
I guess I can do that.
 If your "orbblah" file doesn't contain the parts needed for orb to
 auto-create a package, then you just can't have orb auto-create your
 package. Simple.
Ok.
 Also, I don't see why the name of "orbspec" needs to be prefixed with the
 package's name. Just like an orbfile, or makefile, or rakefile, etc., you
 already know what package it's for from what directory/package it's inside
 of.
The package needs a name. One option is to specify that in the orbspec file. Another option is to specify it in the name of the orbspec file. I just thought that it was convenient. Is it a good idea to take the name of the directory where the orbspec is located? Note that many of my choices are based on RubyGems.
 Well, any version that's>= 0.3.4 and<  0.4.0
Ok.
 That would make sense. But, it could always be forcefully done by doing it
 manually without going through orb. Don't know if that's worth worrying
 about though.
Do you mean if you have direct access to the server where the repository is located? I don't think that's something worth worrying about. Isn't that the same with all servers you don't own yourself? If you have a website on a hosted server they can, if they want to, just remove it.
 I agree with inferring 0 for any parts not provided.
Good.
 I think it should be up to the project developer to choose how many parts if
 appropriate for their project. And I don't see any problems with doing that.
Ok. But then I think we must allow "> 0.3.4+". Or else you cannot know if a given version will break the API.
 I think that might be a bit limiting. And keep in mind, just because
 something is source-only doesn't mean it'll actually work on more than one
 platform.
As you can read in the specification I do intend to have binary packages. But as I said, that was not my original thought and that's why I haven't given it much thought. Of course it won't work on all platforms just because it's source only, but if there are source-only packages then the problem below doesn't exist.
 I see three options:

 * One package for all platforms
 * Include the platform in the package name and in the orbspec
 * Have a sub path (on the server) for every platform, i.e.:

 dorbit.org/orbs/linux/dwt-1.3.2.orb.zip
I think the first option makes the most sense, and the second option sounds like it would be a sensible workaround when someone really wants things separate.
There's always this issue with downloading files you don't need.
 I'm not sure I like the third option because that wouldn't work for
 repositories that aren't explicity chosen unless you made it a standard
 naming system, but then that creates extra requirements for the server, and
 then how do you know whether to GET from the platform-specific directory or
 the platform-independent/cross-platform directory, etc. And what about
 having a windows package and a combination OSX/linux package? It's workable,
 but I'm not sure it's worth it in light of the first two options.
I don't understand what you mean. Why would you have one package for Windows and one for osx/linux? Or have I misunderstood something.
 Although, on second thought, if a good system for the third option could be
 worked out, then maybe that would be best. (But maybe just not for the
 initial release?)
Another issue with one package for all platforms is when building the package. The developer first needs to build the package on one platform, then move the package to all other platforms and rebuild the package. The tool needs to open the package and add files for the other platforms.
 - Is it really necessary to have separate "build_dependencies" and
 "runtime_dependencies"? And why have both "runtime_dependencies" and
 "orbs"
 instead of just picking one name and sticking with it?
There is no runtime dependency on something that is statically linked. Therefore it would be unnecessary to do a permanent installation of those dependencies. The user could get an option to either permanently install these dependencies or to temporarily install them.
So you mean a statically-linked dependency then? "Runtime dependency" makes me think it's needed at runtime and thus *would* need to be installed. Actually, what you're describing sounds just like a build dependency to me.
A statically-linked dependency is basically the same as a build dependency. If you have a look at the DVM project page you can see the list of build dependencies. The only thing you need to actually run the application is zlib (on Posix), because it's dynamically linked, aka a runtime dependency.
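As a purely hypothetical orbspec snippet (using the same "field << value" style as the "files" example later in this thread; the static library name is only a placeholder), the two kinds of dependency might be declared like this:

build_dependencies << ["somestaticlib"]  # statically linked in, only needed while building
runtime_dependencies << ["zlib"]         # dynamically linked, needed every time the program runs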
 Is there really all that much benefit to doing a temporary install of things
 temporarally needed anyway? And is it that common of a need? I'm wondering
 if it's worth the extra complexity.
The package would need access to the dependencies when it's built. How else should it be done?
 The other way around would be possible as well. A package can depend on a
 dynamically linked library and use it only through function pointers. Then
 the package would only have a runtime dependency on the library, i.e. it
 wouldn't be needed when building.
Hmm I think I still don't understand the whole runtime dep vs build dep matter.
If you use, on Posix, dlopen and friends to open a dynamic library you don't need to link to it when building. You call the functions via function pointers, that's how Derelict works, if you have used it.
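A minimal D sketch of such a runtime-only dependency on Posix; libm and the "cos" symbol are just convenient stand-ins:

import core.sys.posix.dlfcn;

alias CosFn = extern (C) double function(double);

double callCos(double x)
{
    // The library is opened at run time, so nothing about it is passed to the linker at build time.
    void* handle = dlopen("libm.so.6", RTLD_LAZY);
    assert(handle !is null, "could not load libm");

    auto cosine = cast(CosFn) dlsym(handle, "cos");
    assert(cosine !is null, "could not find the cos symbol");

    auto result = cosine(x);
    dlclose(handle);
    return result;
}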
 That would be up to them. The idea is just that the package metadata is
 extendable for whatever uses may arise.
Ok, that won't be any problem to include.
 - What's the point of the fields "files", "libraries" and "executables"?
 Seems like extra work for no real benefit.
"files" is basically all files that should be put in the package. "libraries" and "executables" are all libraries and executables that should be installed (regardless if they are pre-compiled or built during installation).
So it's purely for creating a package?
Yes, the "files" field is just when creating the package.
 If not, then I'm afraid I still don't understand.

 If so, then I still don't see the use of "libraries" and "executables". Also
 I think they should all be optional (if they aren't already) because all
 that info is already going to exist in the buildscript, and so a lot of
 people (like me) are going to want to generate the package via their build
 system rather than duplicating all that work and information just so that
 the package manager can do it "automatically".
Both the "libraries" and "executables" fields are optional. "summary" and "version" are the only required fields (so far), as you can see on the wiki. As I said before, "libraries" and "executables" are what actually should be installed. This is what I'm thinking and is planning for Orbit and my build tool Dake. Dake will be able to generate an orbspec file, because, as you said, the build tool will have all the necessary information. Then you don't have to duplicate any information. But, I'm not forcing anyone to use my build tool if they want to use Orbit, therefore the fields "libraries" and "executables" are available.
 Other things to think about are: How does the user actually specify the
 login info? Wouldn't want them in the orbfile/orbspec/metadata/etc.
No, absolutely not. Passing user name and password using HTTP basic authentication on the command line?
 Ouch. I have to say, I don't like that at all. The way I see it, this is one
 of the two primary responsibilities of a package manager for a language (the
 other being automatic dependency handling). Without it, we're not much ahead
 of where we are right now.
If you have suggestions I'm listening.
 That doesn't work: If orb is managing the packages, then how is
 dake/drake/etc supposed to know what path to include for -I? Orb knows where
 it's sticking packages. The build systems don't know that.
The build tool invokes Orbit and asks about the include path.
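For example, a build tool written in D could shell out to Orbit for that. Note that the "orb imports" subcommand used here is purely hypothetical and not in the current specification:

import std.process : executeShell;
import std.string : strip;

// Hypothetical: assumes "orb imports <package>" prints the package's import directory.
string importPathFor(string packageName)
{
    auto result = executeShell("orb imports " ~ packageName);
    assert(result.status == 0, "orb could not resolve " ~ packageName);
    return strip(result.output);
}

// The build tool would then add "-I" ~ importPathFor("libfoo") to the dmd command line.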
 This also reminds me of another issue: For D libraries, the
 orbfile/orbspec/metadata/whatever needs to give the relative path for the
 base include directory.
Something like that. Or if given a full path the tool will assume the current directory is the base directory and remove that from the path. For example:

$ cd ~/projects/d/foo

Contains:

main.d
bar/foobar.d

files << ["~/projects/d/foo/main.d", "~/projects/d/foo/bar/foobar.d"]

$ orb build foo

Since I run the "orb build" command in the "~/projects/d/foo" directory the tool will just remove that from the paths when including the files in the package.
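In Phobos terms that stripping could be little more than a relativePath call. A sketch (tilde expansion is ignored here and absolute paths are used instead):

import std.path : relativePath;

unittest
{
    enum root = "/home/user/projects/d/foo";
    enum file = "/home/user/projects/d/foo/bar/foobar.d";

    // On Posix the archive would then store "bar/foobar.d" instead of the absolute path.
    assert(relativePath(file, root) == "bar/foobar.d");
}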
 I think what we need is to have DVM handle dmd.conf/sc.ini separately from

 DVM need to work together to manage dmd.conf/sc.ini. It's complicated, but I
 think it's necessary.
I was hoping I didn't need to drag DVM into this. And I don't want to force anyone to use DVM if they want to use Orbit.
 This makes me think of another, too: How does the buildscript know what
 version of a lib orb has chosen? Since orb packages (rightfully) are allowed
 to specify a range of versions instead of just one specific version, I think
 that will be needed.
It doesn't need to know what exact version is chosen, it just needs to know the include paths and libraries to link with. The rest is up to Orbit to handle and decide. The build tool just links with what it received from Orbit.
 I think that's very, very bad. I should be able to compile something with a
 different version of DMD without reinstalling everything. "dvm use xxx"
 should *just work*, just as it already does now.
The whole point is to be able to have different packages installed with different compilers. There could be a command for moving over packages from one DMD installation to another. Say I'm using DMD 2.053, then installing package "foo". Then I'm installing DMD 2.054 and switching to it. Say also that package "foo" doesn't work with DMD 2.054; what happens when you switch to 2.054?
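Such a command could be as simple as (the "migrate" name is invented here purely for illustration):

$ dvm use 2.054
$ orb migrate 2.053 2.054

i.e. try to rebuild/reinstall every package that was installed for 2.053 against 2.054, and report the ones (like "foo" above) that no longer build.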
 Orb manages different versions of arbitrary packages. DVM manages different
 versions of one specific package. If they're so different despite all that,
 then either Orbit isn't generalized enough, or DVM is too specialized.
DVM is very specialized, yes. The installation and switching of D compilers, using DVM, is very specialized. DMD requires wrappers and manipulating the PATH or registry. None of the Orbit packages require this. Orbit would require loads of special cases to have DVM built in.

From your description DVM and Orbit sound very similar, but how they actually work when installing and switching compilers is very different. You should know this, you ported DVM to Windows. The current compiler decides what packages are available/installed; that wouldn't work if the compiler package were just a regular orb package.

What would be possible is to create a tool that is a front end for DVM and Orbit. I'm currently building Orbit as a library with a thin wrapper that is the tool the user will use. I'm planning to rewrite DVM in the same way. This also makes it possible to create GUIs and integrate the tools into IDEs.

-- /Jacob Carlborg
Jul 18 2011
next sibling parent reply Johannes Pfau <spam example.com> writes:
Jacob Carlborg wrote:
On 2011-07-18 00:26, Nick Sabalausky wrote:
 "Jacob Carlborg"<doob me.com>  wrote in message
 But why should the name of the project and the name of the tool be
 different? It would be less confusing to have a packager manager
 named Orb that gets invoked via "orb". (Or a package manager named
 Orbit that gets invoked via "orbit".
I don't know. It's also easy to separate the library code in one package named "orbit" and the application code in on package named "orb".
 I think the "metadata" would make sence if, and *only* if, it were
 "compiled" down to a pure-data format.
Ok, fair enough. That was the initial idea, to have the "metadata" file in a pure-data format.
 As for orbfile vs orbspec: I still think these could and should be
 merged. I don't need a separate buildscript for different actions.
 *Everything* the buildsystem does is configured in one centralized
 script (and whatever other files it may or may not choose to
 delegate out to). I see no reason why the same can't or shouldn't be
 done with package management.
I guess I can do that.
 If your "orbblah" file doesn't contain the parts needed for orb to
 auto-create a package, then you just can't have orb auto-create your
 package. Simple.
Ok.
 Also, I don't see why the name of "orbspec" needs to be prefixed
 with the package's name. Just like an orbfile, or makefile, or
 rakefile, etc., you already know what package it's for from what
 directory/package it's inside of.
The package needs a name. One option is to specify that in the orbspec file. Another option is to specify it in the name of the orbspec file. I just thought that it was convenient. Is it a good idea to take the name directory where the orbspec is located? Note that many of my choices are based on RubyGems.
 Well, any version that's>= 0.3.4 and<  0.4.0
Ok.
 That would make sense. But, it could always be forcefully done by
 doing it manually without going through orb. Don't know if that's
 worth worrying about though.
Do you mean if you have direct access to the server where the repository is located? I don't think that's something worth worrying about. Isn't that the same with all servers you don't own yourself? If you have a website on a hosted server they can, if they want to, just remove it.
 I agree with inferring 0 for any parts not provided.
Good.
 I think it should be up to the project developer to choose how many
 parts if appropriate for their project. And I don't see any problems
 with doing that.
Ok. But then I think we most allow: "> 0.3.4+". Or else you cannot know if a given version will break the API.
 I think that might be a bit limiting. And keep in mind, just because
 something is source-only doesn't mean it'll actually work on more
 than one platform.
As you can read in the specification I do intend to have binary packages. But as I said, that was not my original thought and that's why I haven't given it much thought. Of course it won't work on all platform just because it's source only but if there are source only packages then the problem below doesn't exist.
 I see three options:

 * One package for all platforms
 * Include the platform in the package name and in the orbspec
 * Have a sub path (on the server) for every platform, i.e.:

 dorbit.org/orbs/linux/dwt-1.3.2.orb.zip
I think the first option makes the most sense, and the second option sounds like it would be a sensible workaround when someone really wants things separate.
There's always this issue with downloading files you don't need.
 I'm not sure I like the third option because that wouldn't work for
 repositories that aren't explicity chosen unless you made it a
 standard naming system, but then that creates extra requirements for
 the server, and then how do you know whether to GET from the
 platform-specific directory or the
 platform-independent/cross-platform directory, etc. And what about
 having a windows package and a combination OSX/linux package? It's
 workable, but I'm not sure it's worth it in light of the first two
 options.
I don't understand what you mean. Why would you have one package for Windows and one for osx/linux? Or have I misunderstood something.
 Although, on second thought, if a good system for the third option
 could be worked out, then maybe that would be best. (But maybe just
 not for the initial release?)
An other issue with one package for all platforms is when building the package. The developer first needs to build the package on one platform, then move the package to all other platforms and rebuild the package. The tool needs to open the package and add files for the other platforms.
 - Is it really necessary to have separate "build_dependencies" and
 "runtime_dependencies"? And why have both "runtime_dependencies"
 and "orbs"
 instead of just picking one name and sticking with it?
There is no runtime dependency on something that is statically linked. Therefore it would be unnecessary to do a permanent installation on those dependencies. The user could get an option to either permanently installed these dependencies or to temporarily install them.
So you mean a statically-linked dependency then? "Runtime dependency" makes me think it's needed at runtime and thus *would* need to be installed. Actually, what you're describing sounds just like a build dependency to me.
Statically-linked dependency is basically the same as build dependency. If you have a look at the DVM project page you can see the list of build dependencies. The only thing you need to actually run the application is zlib (on Poisx), because it's dynamically linked, aka a runtime dependency.
 Is there really all that much benefit to doing a temporary install
 of things temporarally needed anyway? And is it that common of a
 need? I'm wondering if it's worth the extra complexity.
The package would need access to the dependencies when its build. How should it else be done?
 The other way around would be possible as well. A package can
 depend on a dynamically linked library and use it only through
 function pointers. Then the package would only have a runtime
 dependency on the library, i.e. it wouldn't be needed when building.
Hmm I think I still don't understand the whole runtime dep vs build dep matter.
If you use, on Posix, dlopen and friends to open a dynamic library you don't need to link to it when building. You call the functions via function pointers, that's how Derelict works, if you have used it.
 That would be up to them. The idea is just that the package metadata
 is extendable for whatever uses may arise.
Ok, that won't be any problem including.
 - What's the point of the fields "files", "libraries" and
 "executables"? Seems like extra work for no real benefit.
"files" is basically all files that should be put in the package. "libraries" and "executables" are all libraries and executables that should be installed (regardless if they are pre-compiled or built during installation).
So it's purely for creating a package?
Yes, the "files" field is just when creating the package.
 If not, then I'm afraid I still don't understand.

 If so, then I still don't see the use of "libraries" and
 "executables". Also I think they should all be optional (if they
 aren't already) because all that info is already going to exist in
 the buildscript, and so a lot of people (like me) are going to want
 to generate the package via their build system rather than
 duplicating all that work and information just so that the package
 manager can do it "automatically".
Both the "libraries" and "executables" fields are optional. "summary" and "version" are the only required fields (so far), as you can see on the wiki. As I said before, "libraries" and "executables" are what actually should be installed. This is what I'm thinking and is planning for Orbit and my build tool Dake. Dake will be able to generate an orbspec file, because, as you said, the build tool will have all the necessary information. Then you don't have to duplicate any information. But, I'm not forcing anyone to use my build tool if they want to use Orbit, therefore the fields "libraries" and "executables" are available.
 Other things to think about are: How does the user actually specify
 the login info? Wouldn't want them in the
 orbfile/orbspec/metadata/etc.
No, absolutely not. Passing user name and password using HTTP basic authentication on the command line?
 Ouch. I have to say, I don't like that at all. The way I see it,
 this is one of the two primary responsibilities of a package manager
 for a language (the other being automatic dependency handling).
 Without it, we're not much ahead of where we are right now.
If you have suggestions I'm listening.
It would be possible to install libraries into the dmd default search path. Right now, this would be /usr/include/d/dmd for imports and /usr/lib for library files on Posix, but any path can be used as long as it's included in dmd.conf. However, this means that only one version of a library can be installed system-wide, so it's not optimal.

It should also be noted that linking against a specific library version can only work well with static libraries. With static libraries you can give the linker a specific path to the library at compile time (so you can have multiple versions in different directories). Using different directories with dynamic libraries requires setting LD_LIBRARY_PATH before executing a program, so this won't work. We'll have to use the library versioning mechanism that's used by C libraries (e.g. for Linux: http://www.ibm.com/developerworks/linux/library/l-shlibs/index.html ). I don't know if Windows even supports library versioning, but as Windows programs usually don't install DLLs globally that's less of a problem.
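For reference, a dmd.conf on Linux typically looks something like this (exact paths vary by distribution); pointing dmd at an Orbit-managed directory would just mean appending another -I and -L-L pair to DFLAGS:

[Environment]
DFLAGS=-I/usr/include/d/dmd/phobos -I/usr/include/d/dmd/druntime/import -L-L/usr/lib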
-- Johannes Pfau
Jul 18 2011
parent reply Jacob Carlborg <doob me.com> writes:
On 2011-07-18 18:03, Johannes Pfau wrote:
 It would be possible to install libraries into the dmd default search
 path. Right now, this would be /usr/include/d/dmd and library files
 in /usr/lib on posix, but any path can be used as long as it's included
 in dmd.conf. However, this means that only one version of a library can
 be installed system-wide, so it's not optimal.

 It should also be noted, that linking against a specific library
 version can only work well with static libraries. With static libraries
 you can give linker a specific path to the library at compile time
 (So you can have multiple versions in different directories). Using
 different directories with dynamic libraries requires setting
 LD_LIBRARY_PATH before executing a program, so this won't work. We'll
 have to use the library versioning mechanism that's used by C
 libraries (e.g for linux:
 http://www.ibm.com/developerworks/linux/library/l-shlibs/index.html ).
 I don't know if windows even supports library versioning, but as
 windows programs usually don't install dlls globally that's less of a
 problem.
You can give a specific path to the compiler with dynamic libraries as well. Link with libfoo, version 3.4.0:

dmd -L-L/path/to/libfoo-3.4.0 -L-lfoo

Or have I missed something?

-- /Jacob Carlborg
Jul 18 2011
parent reply Johannes Pfau <spam example.com> writes:
Jacob Carlborg wrote:
On 2011-07-18 18:03, Johannes Pfau wrote:
 It would be possible to install libraries into the dmd default search
 path. Right now, this would be /usr/include/d/dmd and library files
 in /usr/lib on posix, but any path can be used as long as it's
 included in dmd.conf. However, this means that only one version of a
 library can be installed system-wide, so it's not optimal.

 It should also be noted, that linking against a specific library
 version can only work well with static libraries. With static
 libraries you can give linker a specific path to the library at
 compile time (So you can have multiple versions in different
 directories). Using different directories with dynamic libraries
 requires setting LD_LIBRARY_PATH before executing a program, so this
 won't work. We'll have to use the library versioning mechanism
 that's used by C libraries (e.g for linux:
 http://www.ibm.com/developerworks/linux/library/l-shlibs/index.html
 ). I don't know if windows even supports library versioning, but as
 windows programs usually don't install dlls globally that's less of a
 problem.
You can give a specific path to the compiler with dynamic libraries as well. Link with libfoo, version 3.4.0: dmd -L-L/path/to/libfoo-3.4.0 -L-lfoo Or have I missed something?
I'd have to test that, but I doubt it will work. This will help to find the library at compile time, but not at runtime. The runtime linker will only search in directories listed in /etc/ld.so.conf.d or in the LD_LIBRARY_PATH variable. Each .so library has a 'soname' embedded. If you link like in your example command the resulting binary only contains the sonames of the libraries it needs, not the full path. At runtime, the linker then reads that soname and searches in its cache for a library with the same soname. It might be possible to make this soname mechanism use absolute paths or subdirectories, but this seems like a hack. Sonames are usually just "libfoo.so.3" where 3 is an ABI revision.

I think we'll eventually have to install shared libraries exactly the way C does it, i.e. all in /usr/lib and using the soname versioning. But we can think about that when dmd finally supports shared libraries on Linux; it's not important right now.

-- Johannes Pfau
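For the record, the C convention being referred to usually looks like this on Linux (the library name and paths are only illustrative):

/usr/lib/libfoo.so.1.2.0    the actual library file
/usr/lib/libfoo.so.1        symlink matching the embedded soname, used by the runtime linker
/usr/lib/libfoo.so          symlink used at build time when linking with -lfoo

The soname itself is embedded when the library is built, e.g. with gcc: -shared -Wl,-soname,libfoo.so.1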
Jul 19 2011
parent reply Jacob Carlborg <doob me.com> writes:
On 2011-07-19 10:16, Johannes Pfau wrote:
 I'd have to test that, but I doubt it will work. This will help to find
 the library at compile time, but not at runtime. The runtime linker
 will only search in directories listed in /etc/ld.so.conf.d or in the
 LD_LIBRARY_PATH variable. Each .so library has a 'soname' embedded. If
 you link like in your example command the resulting binary only
 contains the sonames of the libraries it needs, not the full path. At
 runtime, the linker then reads that soname and searches in its cache for
 a library with the same soname. It might be possible to make this
 soname mechanism use absolute paths or subdirectories, but this seems
 like a hack. Sonames are usually just "libfoo.so.3" where 3 is a ABI
 revision. I think we'll eventually have to install shared libraries
 exactly the way C does it, i.e all in /usr/lib and using the soname
 versioning. But we can think about that when dmd finally supports
 shared libraries on linux, it's not important right now.
Oh, at runtime, I didn't think of that :). The above command is only for compile time. What about the linker flag "-rpath"? It seems like it could be used.

Linux is not the only OS; it's easy to add support for dynamic libraries on Mac OS X. All the code is already in druntime, it just needs to be enabled. Tango already supports this.

-- /Jacob Carlborg
Jul 19 2011
parent reply Johannes Pfau <spam example.com> writes:
Jacob Carlborg wrote:
On 2011-07-19 10:16, Johannes Pfau wrote:
 I'd have to test that, but I doubt it will work. This will help to
 find the library at compile time, but not at runtime. The runtime
 linker will only search in directories listed in /etc/ld.so.conf.d
 or in the LD_LIBRARY_PATH variable. Each .so library has a 'soname'
 embedded. If you link like in your example command the resulting
 binary only contains the sonames of the libraries it needs, not the
 full path. At runtime, the linker then reads that soname and
 searches in its cache for a library with the same soname. It might
 be possible to make this soname mechanism use absolute paths or
 subdirectories, but this seems like a hack. Sonames are usually just
 "libfoo.so.3" where 3 is a ABI revision. I think we'll eventually
 have to install shared libraries exactly the way C does it, i.e all
 in /usr/lib and using the soname versioning. But we can think about
 that when dmd finally supports shared libraries on linux, it's not
 important right now.
Oh, at runtime, didn't think of that :). The above command is only for compile time. What about the linker flag "-rpath"? That seems it could be used. Linux is not the only OS, it's easy to add support for dynamic libraries on Mac OS X. All code is already in druntime, it just needs to be enabled. Tango already supports this.
Seems like rpath could indeed work in this case. I can't find much documentation about it though. Debian recommends not to use it: http://wiki.debian.org/RpathIssue but I'm not sure if this problem applies to Orbit.

I'd prefer installing shared libraries system-wide though. The soname/version approach is not that bad. Your proposed package versioning scheme could even be mapped 1:1 to the soname versions. Or we could use libtool's versioning scheme, which is similar ('major' and 'minor' are one field, 'build' stays the same, and an additional 'age' field is added): http://sourceware.org/autobook/autobook/autobook_91.html

Having read more about it, I think I have to correct my previous statement: it is possible to link to specific versions with the soname approach. It's maybe a little more limited (you can't say "I want to use libfoo.so.1.2.0", you can only say "I want to use libfoo 1.x.x", and the linker could end up using 1.1.0, 1.2.0 ...) but it seems this should be good enough.

-- Johannes Pfau
Jul 19 2011
next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2011-07-19 12:33, Johannes Pfau wrote:
 Seems like rpath could indeed work in this case. I can't find much
 documentation about it though. Debian recommends not to use it:
 http://wiki.debian.org/RpathIssue but I'm not sure if this problem
 applies to orbit.
Won't the same problem occur if rpath isn't used? With LD_LIBRARY_PATH for example.
 I'd prefer installing shared libraries system wide though. The
 soname/version approach is not that bad. Your proposed package
 versioning scheme could even be mapped 1:1 to the soname versions. Or
 we could use libtools versioning scheme, which is similar, ('major' and
 'minor' are one field, 'build' stays the same, and an additional 'age'
 field is added)
 http://sourceware.org/autobook/autobook/autobook_91.html
I don't want to install the libraries system-wide. Again you're assuming Linux only. It has to work on all supported platforms. At least: Linux, Mac OS X and Windows.
 Having read more about it, i think I have to correct my previous
 statement: It is possible to link to specific versions with the soname
 approach. It's maybe a little more limited (You can't say: "I want to
 use libfoo.so.1.2.0", You can only say: "I want to use libfoo 1.x.x",
 and the linker could end up using 1.1.0, 1.2.0 ...) but it seems this
 should be good enough.
No, I want to be able to use an exact version. -- /Jacob Carlborg
Jul 19 2011
parent reply Johannes Pfau <spam example.com> writes:
Jacob Carlborg wrote:
On 2011-07-19 12:33, Johannes Pfau wrote:
 Seems like rpath could indeed work in this case. I can't find much
 documentation about it though. Debian recommends not to use it:
 http://wiki.debian.org/RpathIssue but I'm not sure if this problem
 applies to orbit.
Won't the same problem occur if rpath isn't used? With LD_LIBRARY_PATH for example.
I'm not sure about that, but using LD_LIBRARY_PATH is even more discouraged (because it propagates to child processes)
 I'd prefer installing shared libraries system wide though. The
 soname/version approach is not that bad. Your proposed package
 versioning scheme could even be mapped 1:1 to the soname versions. Or
 we could use libtools versioning scheme, which is similar, ('major'
 and 'minor' are one field, 'build' stays the same, and an additional
 'age' field is added)
 http://sourceware.org/autobook/autobook/autobook_91.html
I don't want to install the libraries system wide. Again your assuming Linux only. It has to work on all supported platforms. At least: Linux, Mac OS X and Windows.
I'm not sure if we can have one perfect approach for all operating systems. But we could just use the rpath way and see how it works out.
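Concretely, the rpath way would mean embedding the per-package library directory into the executable at link time. Assuming dmd forwards -L flags to the gcc link driver on Linux (as the earlier -L-L/-L-l example suggests), it might look something like:

dmd app.d -L-L/path/to/libfoo-3.4.0 -L-lfoo -L-Wl,-rpath,/path/to/libfoo-3.4.0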
 Having read more about it, i think I have to correct my previous
 statement: It is possible to link to specific versions with the
 soname approach. It's maybe a little more limited (You can't say: "I
 want to use libfoo.so.1.2.0", You can only say: "I want to use
 libfoo 1.x.x", and the linker could end up using 1.1.0, 1.2.0 ...)
 but it seems this should be good enough.
No, I want to be able to use an exact version.
I agree that's a nice-to-have feature, but is it ever really necessary? This mechanism always picks up the newest ABI compatible library version. Why would you want to specify one version explicitly if the newer version can be used as a 1:1 replacement? -- Johannes Pfau
Jul 19 2011
parent reply Jacob Carlborg <doob me.com> writes:
On 2011-07-19 13:43, Johannes Pfau wrote:
 Jacob Carlborg wrote:
 On 2011-07-19 12:33, Johannes Pfau wrote:
 Seems like rpath could indeed work in this case. I can't find much
 documentation about it though. Debian recommends not to use it:
 http://wiki.debian.org/RpathIssue but I'm not sure if this problem
 applies to orbit.
Won't the same problem occur if rpath isn't used? With LD_LIBRARY_PATH for example.
I'm not sure about that, but using LD_LIBRARY_PATH is even more discouraged (because it propagates to child processes)
Ok. What is used to tell the system where to find a library? Only the default paths such as /usr/lib?
 I'm not sure if we can have one perfect approach for all operating
 systems. But we could just use the rpath way and see how it works out.
Yeah. We'll see how it turns out.
 I agree that's a nice-to-have feature, but is it ever really necessary?
 This mechanism always picks up the newest ABI compatible library
 version. Why would you want to specify one version explicitly if the
 newer version can be used as a 1:1 replacement?
I don't know if it's really necessary but that's how I've been thinking that I want it to behave. -- /Jacob Carlborg
Jul 19 2011
parent reply Johannes Pfau <spam example.com> writes:
Jacob Carlborg wrote:
On 2011-07-19 13:43, Johannes Pfau wrote:
 Jacob Carlborg wrote:
 On 2011-07-19 12:33, Johannes Pfau wrote:
 Seems like rpath could indeed work in this case. I can't find much
 documentation about it though. Debian recommends not to use it:
 http://wiki.debian.org/RpathIssue but I'm not sure if this problem
 applies to orbit.
Won't the same problem occur if rpath isn't used? With LD_LIBRARY_PATH for example.
I'm not sure about that, but using LD_LIBRARY_PATH is even more discouraged (because it propagates to child processes)
Ok. What is used to tell the system where to find a library? Only the default paths as /usr/lib ?
The rpath page says this: 1. the RPATH binary header (set at build-time) of the library causing the lookup (if any) 2. the RPATH binary header (set at build-time) of the executable 3. the LD_LIBRARY_PATH environment variable (set at run-time) 4. the RUNPATH binary header (set at build-time) of the executable 5. /etc/ld.so.cache (generated from /etc/ld.so.conf and /etc/ld.so.conf.d) 6. base library directories (/lib and /usr/lib) But that depends on the C library / linker implementation. -- Johannes Pfau
Jul 22 2011
parent reply Jacob Carlborg <doob me.com> writes:
On 2011-07-22 12:38, Johannes Pfau wrote:
 The rpath page says this:
 1. the RPATH binary header (set at build-time) of the library causing
     the lookup (if any)

 2. the RPATH binary header (set at build-time) of
     the executable

 3. the LD_LIBRARY_PATH environment variable (set at
     run-time)

 4. the RUNPATH binary header (set at build-time) of the
     executable

 5. /etc/ld.so.cache (generated from /etc/ld.so.conf
     and /etc/ld.so.conf.d)

 6. base library directories (/lib and /usr/lib)

 But that depends on the C library / linker implementation.
Since RPATH and LD_LIBRARY_PATH are discouraged, only 4, 5 and 6 are left?
Jul 22 2011
parent reply Johannes Pfau <spam example.com> writes:
Jacob Carlborg wrote:
On 2011-07-22 12:38, Johannes Pfau wrote:
 The rpath page says this:
 1. the RPATH binary header (set at build-time) of the library causing
     the lookup (if any)

 2. the RPATH binary header (set at build-time) of
     the executable

 3. the LD_LIBRARY_PATH environment variable (set at
     run-time)

 4. the RUNPATH binary header (set at build-time) of the
     executable

 5. /etc/ld.so.cache (generated from /etc/ld.so.conf
     and /etc/ld.so.conf.d)

 6. base library directories (/lib and /usr/lib)

 But that depends on the C library / linker implementation.
Since RPATH and LD_LIBRARY_PATH is discouraged only 4, 5 and 6 are left?
As far as I know, yes. However, note that 5 and 6 are 'global' options, they affect all executables and (dynamic) libraries. -- Johannes Pfau
Jul 22 2011
parent Jacob Carlborg <doob me.com> writes:
On 2011-07-22 14:53, Johannes Pfau wrote:
 Jacob Carlborg wrote:
 On 2011-07-22 12:38, Johannes Pfau wrote:
 The rpath page says this:
 1. the RPATH binary header (set at build-time) of the library causing
      the lookup (if any)

 2. the RPATH binary header (set at build-time) of
      the executable

 3. the LD_LIBRARY_PATH environment variable (set at
      run-time)

 4. the RUNPATH binary header (set at build-time) of the
      executable

 5. /etc/ld.so.cache (generated from /etc/ld.so.conf
      and /etc/ld.so.conf.d)

 6. base library directories (/lib and /usr/lib)

 But that depends on the C library / linker implementation.
Since RPATH and LD_LIBRARY_PATH is discouraged only 4, 5 and 6 are left?
As far as I know, yes. However, note that 5 and 6 are 'global' options, they affect all executables and (dynamic) libraries.
So far I've understood that :) -- /Jacob Carlborg
Jul 22 2011
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2011-07-19 12:33, Johannes Pfau wrote:
 Seems like rpath could indeed work in this case. I can't find much
 documentation about it though. Debian recommends not to use it:
 http://wiki.debian.org/RpathIssue but I'm not sure if this problem
 applies to orbit.
The problem mentioned on that page is:

"A problem arises when binary A defines a NEEDED dependency on libraries B.so.1 and C.so.2, while library B.so.1 depends on library C.so.1."

How is this handled when rpath isn't used, and in general?

-- /Jacob Carlborg
Jul 19 2011
parent reply Johannes Pfau <spam example.com> writes:
Jacob Carlborg wrote:
On 2011-07-19 12:33, Johannes Pfau wrote:
 Seems like rpath could indeed work in this case. I can't find much
 documentation about it though. Debian recommends not to use it:
 http://wiki.debian.org/RpathIssue but I'm not sure if this problem
 applies to orbit.
The problem mentioned on that page is: A problem arises when binary A defines a NEEDED dependency on libraries B.so.1 and C.so.2, while library B.so.1 depends on library C.so.1 How is this handled when rpath isn't used and in general?
To be honest, I don't know. I'm not even sure if I understand the issue with rpath at all, but I thought I'd better mention it. -- Johannes Pfau
Jul 22 2011
parent reply Jacob Carlborg <doob me.com> writes:
On 2011-07-22 12:38, Johannes Pfau wrote:
 Jacob Carlborg wrote:
 On 2011-07-19 12:33, Johannes Pfau wrote:
 Seems like rpath could indeed work in this case. I can't find much
 documentation about it though. Debian recommends not to use it:
 http://wiki.debian.org/RpathIssue but I'm not sure if this problem
 applies to orbit.
The problem mentioned on that page is: A problem arises when binary A defines a NEEDED dependency on libraries B.so.1 and C.so.2, while library B.so.1 depends on library C.so.1 How is this handled when rpath isn't used and in general?
To be honest, I don't know. I'm not even sure if I understand the issue with rpath at all, but I thought I'd better mention it.
I can see that there could be a problem, but I don't see how this can be a problem only for rpath. Seems to me that it would be a problem regardless of which paths are used. So what happens if you link two libraries which are the same library but of different versions? Conflicting symbols?

-- /Jacob Carlborg
Jul 22 2011
parent Johannes Pfau <spam example.com> writes:
Jacob Carlborg wrote:
On 2011-07-22 12:38, Johannes Pfau wrote:
 Jacob Carlborg wrote:
 On 2011-07-19 12:33, Johannes Pfau wrote:
 Seems like rpath could indeed work in this case. I can't find much
 documentation about it though. Debian recommends not to use it:
 http://wiki.debian.org/RpathIssue but I'm not sure if this problem
 applies to orbit.
The problem mentioned on that page is: A problem arises when binary A defines a NEEDED dependency on libraries B.so.1 and C.so.2, while library B.so.1 depends on library C.so.1 How is this handled when rpath isn't used and in general?
To be honest, I don't know. I'm not even sure if I understand the issue with rpath at all, but I thought I'd better mention it.
I can see that there could be a problem but I don't see how this can be a problem only for rpath. Seems to me that it would be a problem regardless which paths are used. So what happens if you link two libraries which are the same library but of different versions? Conflicting symbols?
Yes, I think you'll get conflicting symbols. It is possible to add version information to a symbol, but I don't know how often this feature is used (or how it works exactly). Example:

objdump -T /lib/i386-linux-gnu/libc.so.6

00000043  GLIBC_2.0      pthread_attr_getinheritsched
0000002d  GLIBC_2.5      __readlinkat_chk
00000076  GLIBC_2.1      key_decryptsession
00000097  GLIBC_PRIVATE  __nss_hosts_lookup2

-- Johannes Pfau
Jul 22 2011
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Jacob Carlborg" <doob me.com> wrote in message 
news:j01dlr$nm$1 digitalmars.com...
 On 2011-07-18 00:26, Nick Sabalausky wrote:
 "Jacob Carlborg"<doob me.com>  wrote in message
 But why should the name of the project and the name of the tool be
 different? It would be less confusing to have a packager manager named 
 Orb
 that gets invoked via "orb". (Or a package manager named Orbit that gets
 invoked via "orbit".
I don't know. It's also easy to separate the library code in one package named "orbit" and the application code in on package named "orb".
What about "orb"/"orblib", or "orbit"/"liborbit", etc? Be easier to tell which is which that way anyway.
 Also, I don't see why the name of "orbspec" needs to be prefixed with the
 package's name. Just like an orbfile, or makefile, or rakefile, etc., you
 already know what package it's for from what directory/package it's 
 inside
 of.
The package needs a name. One option is to specify that in the orbspec file. Another option is to specify it in the name of the orbspec file. I just thought that it was convenient.
Ok, I see. That makes sense.
 Is it a good idea to take the name directory where the orbspec is located?
No, I don't think it is. When orb downloads/installs a package it should definitely put it in a directory named the same as the package. But I often have reason to keep projects in a directory with a (slightly) different name from the project, so I don't think those directories should be expected to have the same names as the packages the user is developing.

So I'm fine with the orbspec filename being prefixed with the package name... Only one minor potential issue I see: What happens if a single project has both "foo.orbspec" and "bar.orbspec"?
 Note that many of my choices are based on RubyGems.
I guess I should look at that :P
 That would make sense. But, it could always be forcefully done by doing 
 it
 manually without going through orb. Don't know if that's worth worrying
 about though.
Do you mean if you have direct access to the server where the repository is located?
Right.
 I don't think that's something worth worrying about.
Fair enough. I guess just as long as it's understood that that shouldn't be done because if it is done, other users won't get the new version "a.b.c" unless they uninstall and reinstall it.
 I think it should be up to the project developer to choose how many parts 
 if
 appropriate for their project. And I don't see any problems with doing 
 that.
Ok. But then I think we must allow: "> 0.3.4+".
Sorry, can you be more specific on what you mean by that?
 I see three options:

 * One package for all platforms
 * Include the platform in the package name and in the orbspec
 * Have a sub path (on the server) for every platform, i.e.:

 dorbit.org/orbs/linux/dwt-1.3.2.orb.zip
I think the first option makes the most sense, and the second option sounds like it would be a sensible workaround when someone really wants things separate.
There's always this issue with downloading files you don't need.
True, so I guess there should be a way to do platform-specific packages.
 I'm not sure I like the third option because that wouldn't work for
 repositories that aren't explicity chosen unless you made it a standard
 naming system, but then that creates extra requirements for the server, 
 and
 then how do you know whether to GET from the platform-specific directory 
 or
 the platform-independent/cross-platform directory, etc. And what about
 having a windows package and a combination OSX/linux package? It's 
 workable,
 but I'm not sure it's worth it in light of the first two options.
I don't understand what you mean.
Suppose there's a (possibly default) repository:

http://www.super-d-repos.com/joesmith/

And the user runs:

$ orb install foo 1.2.3

What URL does orb try to retrieve?:

http://www.super-d-repos.com/joesmith/foo-1.2.3.zip
http://www.super-d-repos.com/joesmith/linux/foo-1.2.3.zip
http://www.super-d-repos.com/joesmith/multiplatform/foo-1.2.3.zip

Etc...

We'd need to have some well-designed standard for how that works. I'm not sure we should require the repo actually be specified as:

http://www.super-d-repos.com/joesmith/linux/

Because that would be platform-dependent and require extra platform-handling code for the orbspec author. And it would be different for different repos. Come to think of it, the same issue applies to making the platform part of the package name. So maybe a good path-based system would be better.

This makes me think of another question: Once different compression formats are allowed, if someone uses orb to download/install a package, how does orb know whether to grab a .zip, or a .bz2, or a .7z, etc?
 Why would you have one package for Windows and one for osx/linux? Or have 
 I misunderstood something.
If, for example, the author wants the Windows package to be separate because it (hypothetically) has a lot of differences from posix, while the different posix versions are similar enough that they may as well be in the same package.
 Although, on second thought, if a good system for the third option could 
 be
 worked out, then maybe that would be best. (But maybe just not for the
 initial release?)
Another issue with one package for all platforms is building the package. The developer first needs to build the package on one platform, then move the package to all other platforms and rebuild it. The tool needs to open the package and add the files for the other platforms.
Good point. Although, of course, that only applies to binary packages. So yea, it seems that both multi-platform and platform-specific packages should be supported. And there needs to be some system for dealing with that.
 So it's purely for creating a package?
Yes, the "files" field is just when creating the package.
Ok.
 As I said before, "libraries" and "executables" are what actually should 
 be installed.
This is another thing I'm unclear on, the nature of "installing". It almost sounds like you're saying that all the files in the package other than "libraries" and "executables" are just not used for anything and just thrown away. I doubt that's it though. Is this how you're envisioning it?:

1. User says "orb install foo".

2. Orb downloads and extracts the foo package to a temp directory.

3. Orb invokes the build process to build foo (BTW, how is that "how to build" commandline string specified in the orbspec? The closest thing I see is the "build" field, but it looks like that's just the name of the tool used. An actual command line string is going to be needed.)

4. Orb copies the files listed in "libraries" and "executables" from the temp dir to their permanent location.

5. Temp dir is cleared.

If so, how do you account for non-compiled source-based libraries? Include the *.d files in "libraries"?
 This is what I'm thinking and is planning for Orbit and my build tool 
 Dake. Dake will be able to generate an orbspec file, because, as you said, 
 the build tool will have all the necessary information. Then you don't 
 have to duplicate any information.
Hmm, I guess "files" isn't so bad.
 Ouch. I have to say, I don't like that at all. The way I see it, this is 
 one
 of the two primary responsibilities of a package manager for a language 
 (the
 other being automatic dependency handling). Without it, we're not much 
 ahead
 of where we are right now.
If you have suggestions I'm listening.
Lemme think about it a bit and get back to you.
 That doesn't work: If orb is managing the packages, then how is
 dake/drake/etc supposed to know what path to include for -I? Orb knows 
 where
 it's sticking packages. The build systems don't know that.
The build tool invokes Orbit and asks about the include path.
I think it'd be nicer if invoking a process wasn't needed for that. (Although it could just be done in the "configure" step, but it'd be nicer if orb could just handle it without the buildscript needing to worry about it.)
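To make the mechanism concrete, here is a rough D sketch of what the process-based approach could look like from the build tool's side. The "orb include-path" subcommand is made up for illustration; no such command is specified anywhere yet.

import std.exception : enforce;
import std.process : execute;
import std.string : strip;

// Ask orb for a package's include path by running it as a child process
// and reading its standard output.
string askOrbForIncludePath(string packageName)
{
    auto result = execute(["orb", "include-path", packageName]);
    enforce(result.status == 0, "orb failed: " ~ result.output);
    return result.output.strip();
}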
 This also reminds me of another issue: For D libraries, the
 orbfile/orbspec/metadata/whatever needs to give the relative path for the
 base include directory.
Something like that. Or, if given a full path, the tool will assume the current directory is the base directory and remove that from the path. For example:

$ cd ~/projects/d/foo

Contains:

main.d
bar/foobar.d

files << ["~/projects/d/foo/main.d", "~/projects/d/foo/bar/foobar.d"]

$ orb build foo

Since I run the "orb build" command in the "~/projects/d/foo" directory, the tool will just remove that from the paths when including the files in the package.
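A small D sketch of that prefix stripping, assuming the paths listed in "files" may be absolute (or use "~") and the base directory is simply whatever directory "orb build" is run from; the function name is just for illustration:

import std.algorithm : map;
import std.array : array;
import std.file : getcwd;
import std.path : absolutePath, expandTilde, relativePath;

// "~/projects/d/foo/main.d", run from ~/projects/d/foo, becomes "main.d".
string[] packagePaths(string[] files)
{
    return files
        .map!(f => relativePath(absolutePath(expandTilde(f)), getcwd()))
        .array;
}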
What about projects that don't use "{project_root}" as the include directory? My projects, for instance, typically use "{project_root}/src".
 I think that's very, very bad. I should be able to compile something with 
 a
 different version of DMD without reinstalling everything. "dvm use xxx"
 should *just work*, just as it already does now.
The whole point is to be able to have different packages installed with different compilers. There could be a command for moving over packages from one DMD installation to another.
Seems unnecessary.
 Say I'm using DMD 2.053, then installing package "foo". Then I'm 
 installing DMD 2.054 and switching to it. Say also that package "foo" 
 doesn't work with DMD 2.054, what happens when you switch to 2.054?
The same thing as if you manually did:

$ dvm use 2.054
$ rdmd -I~/proj/foo-1.2.3/includes myApp.d

It just won't work.
 Orb manages different versions of arbitrary packages. DVM manages 
 different
 versions of one specific package. If they're so different despite all 
 that,
 then either Orbit isn't generalized enough, or DVM is too specialized.
DVM is very specialized, yes. Installing and switching D compilers with DVM is very specialized: DMD requires wrappers and manipulating the PATH or the registry. None of the Orbit packages require this.
What about "orb use" on binary packages?
 Orbit would require loads of special cases to have DVM built-in. From your
 description DVM and Orbit sound very similar, but how they actually work
 when installing and switching compilers is very different. You should know
 this, you ported DVM to Windows.
What I'm saying is that maybe they shouldn't be so different.
 The current compiler decides what packages are available/installed, that 
 wouldn't work if the compiler package would just be a regular orb package.
I don't believe that the current compiler *should* decide what packages are available/installed. Maybe there could be some optional magic to warn when using a compiler/package combination that isn't known to work. But aside from that, I really think these should be orthogonal.
Jul 18 2011
next sibling parent Jacob Carlborg <doob me.com> writes:
On 2011-07-19 03:58, Nick Sabalausky wrote:
 What about "orb"/"orblib", or "orbit"/"liborbit", etc? Be easier to tell
 which is which that way anyway.
Yeah, that would be one option. You can also see it like this: "Hey, I have a compiler, it's named 'GNU C Compiler', it's invoked with the 'gcc' command". What's the confusion with that?
 Ok, I see. That makes sense.
Ok, good.
 Is it a good idea to take the name of the directory where the orbspec is located?
 No, I don't think it is. When orb downloads/installs a package it should definitely put it in a directory named the same as the package. But I often have reason to put projects in a directory with a (slightly) different name from the project, so I don't think the directory name and the package name should be expected to match for packages the user is developing. So I'm fine with the orbspec filename being prefixed with the package name... Only one minor potential issue I see: what happens if a single project has both "foo.orbspec" and "bar.orbspec"?
When building the package? You just run "orb build foo" or "orb build bar". Were you hoping for only "orb build"? In that case I guess it would be possible if there's only one .orbspec in the current directory. Don't know if that's confusing.
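A quick D sketch of how a bare "orb build" could pick up the orbspec, assuming it is only accepted when exactly one .orbspec exists in the current directory:

import std.array : array;
import std.conv : to;
import std.exception : enforce;
import std.file : SpanMode, dirEntries;

// Allow "orb build" without a package name only when the choice is unambiguous.
string findOrbspec()
{
    auto specs = dirEntries(".", "*.orbspec", SpanMode.shallow).array;
    enforce(specs.length == 1,
        "expected exactly one .orbspec file, found " ~ specs.length.to!string);
    return specs[0].name;
}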
 Sorry, can you be more specific on what you mean by that?
The syntax you suggested. Instead of "~> 0.3.4" you could have "~> 0.3.4+" or "~> 0.3+.4+".
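For reference, a minimal D sketch of the plain "~>" (pessimistic) operator as RubyGems defines it, which the orbspec syntax appears to be modelled on. The proposed "+" variants are deliberately left out, since their exact meaning isn't settled in this thread.

import std.algorithm : map;
import std.array : array, split;
import std.conv : to;

// "0.3.4" -> [0, 3, 4]
int[] parseVersion(string v)
{
    return v.split(".").map!(to!int).array;
}

// "~> 0.3.4" accepts >= 0.3.4 and < 0.4.0; "~> 1.2" accepts >= 1.2 and < 2.0.
// Assumes the constraint has at least two parts.
bool satisfiesPessimistic(string candidate, string constraint)
{
    auto want = parseVersion(constraint);
    auto have = parseVersion(candidate);

    auto upper = want[0 .. $ - 1].dup;
    upper[$ - 1] += 1;

    return have >= want && have < upper;
}

unittest
{
    assert( satisfiesPessimistic("0.3.5", "0.3.4"));
    assert(!satisfiesPessimistic("0.4.0", "0.3.4"));
    assert( satisfiesPessimistic("1.9.9", "1.2"));
    assert(!satisfiesPessimistic("2.0.0", "1.2"));
}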
 True, so I guess there should be a way to do platform-specific packages.
Don't know how much of an issue this is in reality. Although there could be fairly large libraries and applications. I know DWT is quite large.
 Suppose there's a (possibly default) repository:

 http://www.super-d-repos.com/joesmith/

 And the user runs:

 $ orb install foo 1.2.3

 What URL does orb try to retrieve?:

 http://www.super-d-repos.com/joesmith/foo-1.2.3.zip
 http://www.super-d-repos.com/joesmith/linux/foo-1.2.3.zip
 http://www.super-d-repos.com/joesmith/multiplatform/foo-1.2.3.zip
I guess that just needs to be standardized.
 Etc...

 We'd need to have some well-designed standard for how that works.

 I'm not sure we should require the repo actually be specified as:

 http://www.super-d-repos.com/joesmith/linux/

 Because that would be platform-dependent and require extra platform-handling
 code for the orbspec author. And it would be different for different repos.
 Come to think of it, the same issue applies to making the platform part of
 the package name. So maybe a good path-based system would be better.
How would it require extra platform handling for the orbspec author? All repositories behave in the same way; if one doesn't, it's not an orb repository.
 This makes me think of another question: Once different compression formats
 are allowed, if someone uses orb to download/install a package, how does orb
 know whether to grab a .zip, or a .bz2, or a .7z, etc?
I guess it's the same as with the platform. Just standardize on a specific order, i.e. 7z first, then bz2 and zip last.
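To illustrate, a D sketch of what such a standardized lookup could be. The directory names, the file naming and the extension order are just the ones floated in this thread, not anything settled:

import std.format : format;

// Build the URLs orb would try, in order: the platform-specific directory
// first, then a shared one; 7z first, then bz2, then zip.
string[] candidateUrls(string repo, string name, string ver, string platform)
{
    string[] urls;

    foreach (dir; [platform, "multiplatform"])
        foreach (ext; ["7z", "bz2", "zip"])
            urls ~= format("%s/%s/%s-%s.orb.%s", repo, dir, name, ver, ext);

    return urls;
}

// candidateUrls("http://www.super-d-repos.com/joesmith", "foo", "1.2.3", "linux")
// would be tried top to bottom, and the first URL that exists wins.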
 If, for example, the author wants the windows package to be separate because
 it (hypothetically) has a lot of differences from posix, but the different
 posix versions are very similar so may as well be in the same package.
Is that for a source package?
 Good point. Although, of course, that only applies to binary packages. So
 yea, it seems that both multi-platform and platform-specific packages should
 be supported. And there needs to be some system for dealing with that.
There is way more to think about than one actually wants to recognize :)
 This is another thing I'm unclear on, the nature of "installing". It almost
 sounds like you're saying that all the files in the package other than
 "libraries" and "executables" are just not used for anything and just thrown
 away. I doubt that's it though.
If you put "main.d" and "main.exe" in the "files" field, and you have "main.exe" in the "executables" field, it will only install "main.exe". There is no point in installing "main.d" as well; you actually don't need to put it in the package in the first place. But there could be an option to install the source code as well, if present.
 Is this how you're envisioning it?:

 1. User says "orb install foo".

 2. Orb downloads and extracts the foo package to a temp directory.

 3. Orb invokes the build process to build foo (BTW, how is that "how to
 build" commandline string  specified in the orbspec? The closest thing I see
 is the "build" field, but it looks like that's just the name of the tool
 used. An actual command line string is going to be needed.)

 4. Orb copies the files listed in "libraries" and "executables" from the
 temp dir to their permanent location.

 5. Temp dir is cleared.
Something like that.
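A rough D sketch of that flow. The Orbspec struct, the extract() stub and the install layout are placeholders, not the actual Orbit design:

import std.exception : enforce;
import std.file : chdir, copy, mkdirRecurse, rmdirRecurse, tempDir;
import std.path : baseName, buildPath;
import std.process : executeShell;

struct Orbspec                  // placeholder for the parsed orbspec
{
    string name;
    string buildCommand;        // e.g. "make -f foo.mak"
    string[] libraries;
    string[] executables;
}

// Unpacking is left out; assume some archive library handles it.
void extract(string archive, string destination) { /* ... */ }

void install(Orbspec spec, string archive, string installRoot)
{
    // 1-2. The package has been downloaded; extract it to a temp directory.
    auto work = buildPath(tempDir(), "orb-" ~ spec.name);
    mkdirRecurse(work);
    extract(archive, work);

    // 3. Run the build command named in the orbspec from that directory.
    chdir(work);
    auto result = executeShell(spec.buildCommand);
    enforce(result.status == 0, "build failed:\n" ~ result.output);

    // 4. Copy only the listed libraries and executables to their final home.
    auto target = buildPath(installRoot, spec.name);
    mkdirRecurse(target);
    foreach (f; spec.libraries ~ spec.executables)
        copy(buildPath(work, f), buildPath(target, baseName(f)));

    // 5. Clear the temp directory.
    chdir(installRoot);
    rmdirRecurse(work);
}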
 If so, how do you account for non-compiled source-based libraries? Include
 the *.d files in "libraries"?
That's a good question. Maybe the "type" field needs to include "source".
 Hmm, I guess "files" isn't so bad.
Oh, and Dake does this with the help of Orbit. It's actually Orbit that generates the orbspec, but Dake passes all the information about the orbspec to Orbit.
 I think it'd be nicer if invoking a process wasn't needed for that.
 (Although it could just be done in the "configure" step, but it'd be nicer
 if orb could just handle it without the buildscript needing to worry about
 it.)
It doesn't technically need to create a process and invoke the orb command. Dake can just link with liborbit and take the information it needs. Every tool should be built as a library with a thin wrapper on top for the executable. Makes everything easier :).
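A tiny D sketch of that "library plus thin wrapper" layout. The install path under ~/.orbit and the includePath() function are made up for illustration; this is not the real liborbit API:

import std.path : buildPath, expandTilde;
import std.stdio : writeln;

// This part would live in liborbit; a build tool like Dake would simply
// link against it and call includePath() directly, no child process needed.
string includePath(string name, string ver)
{
    // assumed layout: ~/.orbit/orbs/<name>-<version>/includes
    return buildPath(expandTilde("~/.orbit/orbs"), name ~ "-" ~ ver, "includes");
}

// The orb executable would then be little more than a wrapper like this.
void main(string[] args)
{
    writeln(includePath(args[1], args[2]));
}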
 What about projects that don't use "{project_root}" as the include
 directory? My projects, for instance, typically use "{project_root}/src".
Hmm, that's a good question. Wonder how other tools handle this.
 The same thing as if you manually did:

 $ dvm use 2.054
 $ rdmd -I~/proj/foo-1.2.3/includes myApp.d

 It just won't work.
Once you have installed the packages for the new compiler you can use dvm as before.
 What about "orb use" on binary packages?
I don't follow.
 What I'm saying is that maybe they shouldn't be so different.
As far as I know dvm HAS to work as it does. And I certainly don't want arbitrary packages manipulating the PATH, the registry and the shell startup file.
 I don't believe that the current compiler *should* decide what packages are
 available/installed. Maybe there could be some optional magic to warn when
 using a compiler/package combination that isn't known to work. But aside
 from that, I really think these should be orthogonal.
Ok, one option would be to have all packages installed globally in DVM, i.e. available for all compilers. DVM could let you create orb sets and switch between them. Then you could have one orb set for each compiler if you wanted. Example:

$ dvm use 2.053
$ orb list

Installed packages:

foo
bar

$ dvm install 2.054
$ dvm use 2.054
$ orb list

Installed packages:

foo
bar

$ dvm orbset create test
$ dvm orbset use test
$ orb list

No installed packages

Now, when installing a new package it will only be installed to the "test" orb set.

How does this sound?

-- 
/Jacob Carlborg
Jul 19 2011
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2011-07-19 03:58, Nick Sabalausky wrote:
 3. Orb invokes the build process to build foo (BTW, how is that "how to
 build" commandline string  specified in the orbspec? The closest thing I see
 is the "build" field, but it looks like that's just the name of the tool
 used. An actual command line string is going to be needed.)
Yes, it's the name of the build tool to invoke. By default it has support for a couple of build tools and knows how to invoke them. For example, to invoke Make it would just be "make"; to invoke DSSS it would be "dsss build".

In addition to that, the "build" field accepts a variable argument list with arguments to the build tool. Example:

build :make, "-f foo.mak"

Would invoke "make -f foo.mak".

One of the known build tools is "shell", which basically lets you run arbitrary commands. Example:

build :shell, "./build_my_app.sh"
build :shell, "rdmd --build-only main.d"
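A small D sketch of how orb could turn the "build" field into an actual command line, based only on the examples above; the table of known tools is obviously incomplete and hypothetical:

import std.array : join;
import std.string : strip;

// Map the build tool named in the orbspec, plus its arguments, to the
// command line orb would run. "shell" passes the arguments through as-is.
string buildCommand(string tool, string[] args...)
{
    string[string] known = [
        "make" : "make",
        "dsss" : "dsss build",
    ];

    auto prefix = tool == "shell" ? "" : known[tool];
    return strip(prefix ~ " " ~ join(args, " "));
}

unittest
{
    assert(buildCommand("make") == "make");
    assert(buildCommand("make", "-f foo.mak") == "make -f foo.mak");
    assert(buildCommand("shell", "./build_my_app.sh") == "./build_my_app.sh");
}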
 What about projects that don't use "{project_root}" as the include
 directory? My projects, for instance, typically use "{project_root}/src".
Actually, I don't think this will be a problem. The working directory when installing a package will look the same as the project directory where you created the package. Example:

1. You have all source files in "{project_root}/src". When you build the package, from "{project_root}", it will create a "src" directory in the package and all your source files will be in this directory.

2. When the package is to be installed, the package manager extracts the package and you will have the "src" folder in the working directory/a temp directory.

3. The package manager will execute the build command from the working directory.

So the build system you have chosen just needs to find the "src" folder relative to the working directory. Let's take Make as an example, because it's a tool that exists and we know how it works. Either you have the makefile in the working directory (its original place was the project root), or it will be in the "src" directory.

This is what the orbspec will look like when the makefile is in the root folder:

build :make

Or when the makefile is in the "src" folder:

build :make, "-f src/makefile"

-- 
/Jacob Carlborg
Jul 19 2011