digitalmars.D - Falling in love with D, but...
- Peter Verswyvelen (34/34) Apr 10 2007 The more I read about D, the more I fall in love with it. It contains al...
- Knud Soerensen (2/41) Apr 10 2007 Take a look at this project http://www.dsource.org/projects/codeanalyzer
- Dan (5/6) Apr 10 2007 @Knud: not to knock your project, but I think Peter had it more on the ...
- Tyler Knott (5/8) Apr 10 2007 Have you tried looking in the x86(-64) manuals from Intel or AMD? They ...
- Peter Verswyvelen (26/26) Apr 11 2007 Such a code analyzer is a really useful tool indeed.
- Georg Wrede (91/120) Apr 14 2007 Representing code as graphs with nodes and edges is IMHO not all too
- David B. Held (116/116) Apr 14 2007 [Snip discussion of graphical language...]
- Lionello Lunesu (5/15) Apr 15 2007 I'm very interested!
- Bill Baxter (11/29) Apr 15 2007 Isn't that kind of what Smalltalk was supposed to be? I don't know
- David B. Held (54/60) Apr 16 2007 I'm sure it's not completely different from Smalltalk. Just browsing
- Peter Verswyvelen (94/94) Apr 16 2007 Indeed, Smalltalk was really cool, revolutionary maybe. But for
- Dan (12/12) Apr 16 2007 In line with that thinking...
- Peter Verswyvelen (7/9) Apr 16 2007 EIP goes back to the start of the *same* statement.
- Dan (11/23) Apr 16 2007 In all honesty, I don't see how a Lisp Continuation relates to flow cont...
- Lionello Lunesu (14/53) Apr 11 2007 Wow! You've worded my feelings quite nicely :)
- Peter Verswyvelen (7/7) Apr 11 2007 Indeed. Actually the whole file system thingy is an ancient leftover, we...
- Lionello Lunesu (3/11) Apr 11 2007 PlayLogic you say!? I know quite a few guys that work(ed) there :)
- Ary Manzana (9/84) Apr 11 2007 That reminds me of "Divide and conquer". It's not prehistoric, it's just...
The more I read about D, the more I fall in love with it. It contains almost everything I ever wanted to see in a programming language, and I've been coding for 25 years now: from 6502 assembler to C/C++ to Logix, an in-house visual, mostly functional language which my team created for videogame designers/artists. Logix was used to create some special effects and mini-games on PlayStation 3. It was amazing to see that artists with no programming skills could create incredible stuff given the right visuals/notation...

Anyway, D looks really great, but I'm spoiled by today's popular RAD tools: integrated debugging, edit-and-continue, code completion, parameter tooltips, refactoring, fast navigation, call graphs, built-in version control, etc., as found in e.g. Visual Studio 2005 + ReSharper 2.5 or Eclipse/IntelliJ IDEA. It's also handy to have a huge standard framework such as .NET or J2SE/EE, or even STL/Boost. It's not really necessary: my first videogames did not use any code from the OS; they were 100% pure self-written assembly code talking directly to the hardware, but that was a century ago ;-)

So as soon as I want to get started with D, I find myself stuck (also because of my RSI... I just refuse to type a symbol all over again ;-). It is as if I got this brand new car engine that looks to outperform all others, but I can't find the correct tires, suspension, etc. Frustrating.

One thing I don't like about current IDEs: they still work on the text level, which is horrible for refactoring in a team (think extreme programming). For example, renaming a symbol should be one change under version control, but it currently means that all source files referring to the symbol (by name!) must be modified, potentially causing a lot of merge conflicts and co-workers shouting not to rename a symbol anymore, just to leave the bad names... 
The advantage of a pure drag-drop-connect-the-dots visual programming language like Logix is that it can work very close to the AST, directly linking to statements/functions by pointer/identifier, so a symbol name never matters to the computer, only to a human, and a rename is just one modification to the symbol, not to its references.

Of course we programmers don't want to work with visual graphs (screen clutter!), we want to see code, but I think we might also benefit from writing code closer to the AST; after all, code completion and all those handy code snippets are a bit like that: you insert a foreach loop with a single keystroke and fill in the symbols, but it's still just text. Why not insert a foreach statement into a high-level AST, and regard the text as a representation of (and tagged navigation over) the high-level AST, instead of translating the text into the AST? I heard some old LISP editors worked like that, but I never saw one.

So maybe it would be a good idea to develop an IDE just as (r)evolutionary as D is? Or does it already exist, meaning I just wasted half an hour typing this email ;-)

Keep up the amazing work,
Peter
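Peter's "rename is one modification" idea can be sketched in a few lines (a toy illustration in Python; all names here are invented): the source of truth is an AST whose identifier nodes hold numeric symbol ids, and the text is just a view rendered from the AST plus a symbol table.

```python
# Sketch of symbol-by-id source code: references never store the name,
# so a rename changes exactly one record in the symbol table.

symbols = {1: "cntr"}  # id -> current display name

# tiny AST for:  cntr = cntr + 1
ast = ("assign", ("ref", 1), ("add", ("ref", 1), ("lit", 1)))

def render(node):
    """Recover the textual view; names are looked up only at view time."""
    kind = node[0]
    if kind == "ref":
        return symbols[node[1]]
    if kind == "lit":
        return str(node[1])
    if kind == "add":
        return render(node[1]) + " + " + render(node[2])
    if kind == "assign":
        return render(node[1]) + " = " + render(node[2])

print(render(ast))        # cntr = cntr + 1
symbols[1] = "counter"    # the whole rename: one record changed
print(render(ast))        # counter = counter + 1
```

Under version control, only the symbol-table record would show up as changed, which is exactly the merge-friendly behaviour the post asks for.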
Apr 10 2007
On Tue, 10 Apr 2007 22:26:40 +0000, Peter Verswyvelen wrote:[Snip original post...]Take a look at this project http://www.dsource.org/projects/codeanalyzer
Apr 10 2007
Knud Soerensen Wrote:Take a look at this project http://www.dsource.org/projects/codeanalyzer@Knud: not to knock your project, but I think Peter had it more on the mark. We should move more visual and less textual. Once Walter releases AST reflection, I think we'll be able to write a "truly" visual IDE with a dynamic debugger, code provability, symbol recognition (rename a symbol and it renames all uses of it), and the like. I'm looking forward to it.

PS: Anyone know how one might gain access to a complete x86 opcode -> hex table? All the ones I find only list the mnemonics and descriptions; none list the actual hex. Thx.
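For a taste of what such a table encodes (the full map is in the manuals' opcode appendix), here are a few well-known single-byte x86 opcodes, plus "mov eax, imm32", which is B8 followed by a little-endian 32-bit immediate:

```python
# A handful of x86 opcode encodings, for illustration only --
# nowhere near a complete table.
import struct

OPCODES = {"nop": 0x90, "ret": 0xC3, "int3": 0xCC, "hlt": 0xF4}

def mov_eax_imm32(value):
    """Encode 'mov eax, imm32': opcode B8 + 32-bit little-endian immediate."""
    return bytes([0xB8]) + struct.pack("<I", value)

print(mov_eax_imm32(1).hex())   # b801000000
```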
Apr 10 2007
Dan wrote:PS: Anyone know how one might gain access to a complete x86 opcode -> hex table? All the ones I find only list the mnemonics and description, none list the actual hex. Thx.Have you tried looking in the x86(-64) manuals from Intel or AMD? They should provide a complete reference for all opcodes, e.g. this (http://www.intel.com/design/intarch/manuals/243191.htm) "Programmer's Manual" for the 286 through PII from Intel (see Appendix A).
Apr 10 2007
Such a code analyzer is a really useful tool indeed.

But what I actually meant is more like what Dan said: instead of taking text as the format of the "source" code, we should use a code DOM or something structured like an AST, where symbols are either defined or referenced directly by pointer/id, not by name. Code style and formatting is then pure metadata, present only to reconstruct a textual view in a particular style. Given other presentation-related metadata (e.g. node X,Y positions), the code could just as well be presented as a graph with nodes and edges. In the fantastic "Structure and Interpretation of Computer Programs" video lectures made in the eighties this was already mentioned somewhere. If you want a crash course in programming, this is still amazing stuff: http://video.google.com/videoplay?docid=5546836985338782440&q=structure+and+interpretation+of+computer+programs

Now, something like this already existed on the Commodore 64, in an assembly language editor: when you typed a line of assembly, it was parsed immediately, and symbols got resolved lazily on the fly. There was no need to parse the whole program anymore, just linking, so it was really fast (I believe the program's name was "Turbo Assembler", but it had nothing to do with Borland). Also, the code got automatically formatted and syntax highlighted, and symbol lookup was easy. Of course, this is really simple because of the simplicity of assembler, but the principle remains.

Today's refactoring tools and intelligent editors have a really hard time keeping their internal code DOM in sync with the text; just deleting a curly bracket messes up the whole DOM, making refactoring a fuzzy and dangerous concept. Clearly a more structured (and faster!) way of entering code could be enforced, but then of course we would all have to adapt, and we don't like that; the tools should adapt to us, no? ;-)
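The "symbols resolved lazily on the fly" behaviour described above can be sketched with a fixup list, the classic one-pass assembler trick: each line is parsed exactly once as it is entered, and a forward reference records a fixup that is patched the moment the label's address becomes known (a hedged toy, not the real Turbo Assembler):

```python
# One-pass entry with lazy symbol resolution via backpatching.
code = []     # list of (mnemonic, resolved_address_or_None)
labels = {}   # label name -> address (index into code)
fixups = {}   # label name -> indices of code entries awaiting it

def enter(line):
    line = line.strip()
    if line.endswith(":"):                      # label definition
        name = line[:-1]
        labels[name] = len(code)
        for i in fixups.pop(name, []):          # patch earlier references
            code[i] = (code[i][0], labels[name])
        return
    op, _, arg = line.partition(" ")
    if arg and arg in labels:
        code.append((op, labels[arg]))          # already known: resolve now
    elif arg:
        fixups.setdefault(arg, []).append(len(code))
        code.append((op, None))                 # placeholder until defined
    else:
        code.append((op, None))                 # no operand

for ln in ["jmp end", "nop", "end:", "rts"]:
    enter(ln)

print(code)   # [('jmp', 2), ('nop', None), ('rts', None)]
```

No re-parse of earlier lines ever happens; only the pending references are touched, which is why this style of editor felt instantaneous.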
Apr 11 2007
Peter Verswyvelen wrote:[Snip code-DOM post...]
Representing code as graphs with nodes and edges is IMHO not all that dependent on the underlying programming language. Basically you always have the same three parts in any imperative language (namely sequence, selection, and iteration). Representing those graphically removes the need to be coupled to a specific programming language. Of course you would represent structs and objects too, but behind the scenes they could just as easily be mapped even to a non-OO language, and you'd never know, unless you specifically wanted to alternate between the graphical and the textual representation.

If somebody created such a graphical environment (and wanted "the source code accessible too"), it would not take too much additional effort to also offer several languages as alternate choices. You'd probably be able to switch between those programming languages on the fly. After all, you'd be conjuring up the essence, the aspects, the actual logic, and the structure, instead of choosing between syntactical choices or semantic details. All this of course forgets the fact that the graphical representation constitutes a programming language per se! So, strictly speaking, the "underlying" programming language isn't even necessary. [1]

If you search the archives, you'll see that I suggested precisely this same thing some three or four years ago in this very newsgroup. At that time it was D related, but today my view has changed, as you see here. :-)

In some sense (in addition to what's been said in this thread), the old Borland GUI editors did much the same. Essentially the same program created the UI for both C++ and Pascal. It was bundled with both, only with a different language back-end, which probably was just a set of code snippets and rules in a database, from which it picked what it needed. They also had an equivalent code generator for character-UI programs that worked in the same way. 
Several prominent products were generated with it, for example the Alpha Four database (a real gem!). Of course, here you didn't use drag and drop to directly create language constructs; instead you drew up the UI and the input field connections, but the idea is pretty much the same. Then you of course have the round-trip UML tools, some of which generate code for more than one of Java, C++, and Pascal.

Back to the issue at hand: the essential part of such a tool is the actual "graphical programming language". The target audience should be carefully decided on, or the whole thing becomes just "a little for everybody, but not enough for anybody". (I suspect that within the foreseeable future we won't have enough know-how to create a really universal GUI language. Later, it will become reality, though.)

This being a D newsgroup, the default idea is of course that of "a general purpose" thing, or even "a systems programming" thing. But I have a hard time imagining Linus Torvalds dragging iterative constructs across the screen for the ultimate core of a serious OS kernel project.

I admit that in the long run we seem to be headed towards practically all programming being drag-and-drop (or by that time, grab-and-drop, once the UI hardware becomes way more subtle and hominid-centric). But for the time being, developing, say, this graphical thing for "essentially all things D would be used in" would be a tall order. Rather, things like network programming, web servers, database front ends, or even middle-tier glue SW, robotics, toys (e.g. advanced Tamagotchi SW design, GameBoy apps), point-of-sale cash register SW, or medical systems would seem like a good idea.

It may be possible that for hard-core programming, the keyboard won't go away, not even in a thousand years. 
Think about it: we have computers, we have (or had) regular typewriters, the ball-point pen, dictaphones (small sound recording devices for notes and letters your secretary later typed for you), mobile phones, calculators, and whatnot. And still today the number of regular pencils around exceeds that of computers, mobile phones, and calculators combined. Speech recognition, GUI programming, AI, and whatever other /plats du jour/ all had their try, but we still bang away like nothing ever happened.

Also, the interplay of writing, seeing, reading, and communicating really is a paradigm that has proven itself in areas vastly broader than computer programming: anything where the meaning has to be universally understandable, unambiguous, or thoroughly fixed -- law, procedures, public statements, speeches, in-depth system descriptions, J. Edgar Hoover's files, medical record statements, species descriptions, cake recipes, the US Constitution...

So, "graphical programming" is unlikely to usurp "emacs programming" in our lifetime, in the general sense. But for some niches, it probably will be a killer.

PS: sorry, I didn't intend this to be a bazooka attack on you, or on this thread in general. Just aired my 3.14 cents. :-)

----
[1] Upon proofreading, it occurred to me that if there is no "underlying" programming language, then the graphical representation might have a lot more freedom to express ideas. But this is not even a thought, it's merely a hunch. I probably should think some more about it. Of course, ultimately any such thing is representable by at least an assembler listing, so one could argue there's _always_ a programming language as the back-end. But then another could argue that it's having a _specific_ language here that hurts. If we pretend not to have any such language, then we're free to wave hands as we please and decide upon the meaning without (at least undue) restrictions. Extremely interesting. 
And there'll be a heck of a practical difference between early implementations, I'm sure.
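Georg's point that sequence/selection/iteration can be decoupled from any one concrete language is easy to illustrate: one tiny tree, two emitters (both emitters are toy illustrations, not a real code generator):

```python
# One structural AST, rendered into two different concrete syntaxes.
# tiny tree for:  if (x > 0) x = x + 1
ast = ("if", ("gt", "x", 0), ("assign", "x", ("add", "x", 1)))

OPS = {"gt": ">", "add": "+"}

def expr(e):
    if isinstance(e, tuple):
        return f"{expr(e[1])} {OPS[e[0]]} {expr(e[2])}"
    return str(e)

def emit_c(n):
    _, cond, (_, target, value) = n
    return f"if ({expr(cond)}) {{ {target} = {expr(value)}; }}"

def emit_python(n):
    _, cond, (_, target, value) = n
    return f"if {expr(cond)}:\n    {target} = {expr(value)}"

print(emit_c(ast))       # if (x > 0) { x = x + 1; }
print(emit_python(ast))
```

Switching the "view language" on the fly is just a matter of picking a different emitter; the tree never changes.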
Apr 14 2007
[Snip discussion of graphical language...]

This is somewhat related to Charles Simonyi's idea of "Intentional Programming", which never really took off (probably because Simonyi didn't really know what he was talking about, despite becoming a billionaire and launching himself into space). I think a lot of people are coming to the gradual realization that syntax isn't everything.

On the other hand, syntax is *something*. For instance, "sequence, selection, and iteration" is not enough to express the notion of functions. And while functions do not make a language strictly more powerful than one without them, they do make a language much easier to use. In fact, languages have lots of features like classes and unions and exceptions and dynamic dispatch which are not fundamental but are convenient. Many of these features interact with each other, and this combination creates an emergent synthesis that is perhaps difficult to capture in purely abstract terms. Anyone who has used Lisp with some mechanism built on top of it (such as CLOS) can see that Lisp + library to implement a feature != language with the feature built in. This is why languages aren't really as interchangeable as one might like to believe just by looking at their ASTs. Even languages which are directly related, like C and C++, can have major incompatibilities between them. The only way you could make a "universal" graphical language would be to have an "underlying language" which was the union of all the features of all the languages in the "universe". This is clearly not practical (or even possible).

Then there is the problem that text is much denser than graphics. Many graphical languages I have seen have the problem that there are too many graphics and not enough information. If one small program takes up an entire screen instead of a mere paragraph, nobody will want to use it. And many graphical languages are literally toys. Take the language designed for Lego(TM) Mindstorms, for instance. 
It's all "plug-n-play", but it's nearly impossible to write an interesting program with it. That's why people introduced C compilers for the Mindstorms kit. That being said, I think a graphical language would be interesting if designed well. I think one thing it would have to do is recognize that Text is King. I think text is appropriate for the "units of programming", which generally tend to be functions. Within a function, I don't see much point in having fancy graphics. However, there is really no reason for functions to be lined up linearly within a file, or for classes to be serialized within a module. Viewing these things iconically or even three-dimensionally might allow for interesting new forms of program visualization. For instance, I think a 3D call tree could be a very powerful conceptual aid, as well as the module dependency graphs generated by tools like Doxygen. Within a function, I believe we are already taking advantage of the most obvious graphical features by using syntax highlighting. Some editors even allow you to manipulate the typeface itself for various syntax elements. While using color to encode syntax was a joke of the past, I actually think it could be useful in a language that has a lot of type qualifiers. Even now D has 'const', 'final', and 'volatile', with 'invariant' on the way. There was brief consideration for 'unique' and 'lent', perhaps within a model that included 'owned' and 'shared'. And honestly, there are many other type qualifiers that could be useful if it weren't so awkward to work with them. Nobody would want to have to define overloads for functions like: void foo(const unique invariant ref T x); But if you just defined: void foo(T x); and the qualifiers on T were indicated with colors or icons, then perhaps adding type qualifiers to the language would not be seen as such a burdensome task. Similarly, we have to prepend 'static' to many keywords to get the compile-time version. 
There are many ways to mitigate this verbosity, but one is to visually highlight metacode differently from non-metacode.

Folding is another useful feature that already exists, but is generally not employed to an arbitrarily high degree. For instance, if there were a way to annotate certain code as "error handling" or "invariant checking" (outside of an invariant block, which has different semantics), then one could simply fold that code away when inspecting a function, to get an idea of the essence of the algorithm being employed without being distracted by try/catch or if/throw blocks.

I like the direction Eclipse has gone with instant searching (which they call "occurrences"). This, taken to its logical extreme, would allow for tooltips that do everything from showing the set of known callees for a given function to statically inferring the list of throwable exceptions. But just being able to see many functions as sticky notes that can be manipulated independently and juxtaposed arbitrarily would be immensely powerful, IMO. Being able to hover over a function call and have the function definition pop up in another window, or have a command to recursively expand all functions called from here down to a certain depth, could make code exploration go from a painful chore to an exciting adventure (ok, that's a bit of marketing hype there, but you get what I mean).

The problem is that we write programs linearly, but they don't execute linearly. So to trace the flow of a program, we hop around numerous files and modules, scrolling this way and that, trying to figure out what's going on. Being able to see functions forming a visual call tree and zoom in on each one on demand would truly bring programming to life. The funny thing is that none of this is particularly difficult or revolutionary... it's just a heck of a lot of hard and thankless work. 
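The "expand all callees down to a certain depth" command is mechanically simple once the IDE has a call graph; here is a sketch over a hand-written graph (the function names are invented, and in a real tool the graph would come from the semantic model):

```python
# Depth-limited expansion of a call tree, the core of the
# "recursively expand callees" command described in the post.
calls = {
    "main":   ["parse", "render"],
    "parse":  ["lex"],
    "render": ["draw"],
    "lex":    [],
    "draw":   ["draw"],   # recursion is just a back edge
}

def expand(fn, depth, indent=0):
    """Return the call tree rooted at fn, cut off after `depth` levels."""
    lines = ["  " * indent + fn]
    if depth > 0:
        for callee in calls[fn]:
            lines += expand(callee, depth - 1, indent + 1)
    return lines

print("\n".join(expand("main", 2)))
```

The depth cutoff is what keeps recursive functions (like `draw` above) from expanding forever.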
The connection back to Intentional Programming and programs-as-files-is-obsolete is definitely the idea that identifiers should probably all be numeric database keys, and programs should probably be stored in an intermediate form with the visual form generated on the fly (which is what I understand you to be expressing). That IL isn't completely language-independent, but it certainly does allow the program or library to be visualized in a multitude of ways. It also allows refactoring to proceed much more accurately, because the compiler has a much better idea of what you mean.

Perhaps it is something of a pipe dream to imagine that Eclipse could be taken to this level, but I think it would be a fantastic progression in tool evolution if this were to happen. The problem is that this kind of IDE is so fundamentally different from your average text editor that there may be nothing to leverage. Anyway, if anyone is interested in working on something like this, I would definitely try to make some time available to help out.

Dave

P.S. This could be accomplished for D by designing a database schema that stores programs in an intermediate form and generates D source from that form, which feeds directly to the compiler. The IDE would let you "define classes", "implement functions", etc., rather than "write text". This is how it would be able to generate the intermediate form directly. It would still have to know about language syntax so that it knows how to look up references, etc., so it may need to keep everything in a full-blown D AST all the time. It would be a challenge, no doubt, but it could be a really fun challenge.
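The P.S. can be made concrete with a toy: functions stored as records keyed by numeric id, with bodies referring to other functions by id, and D source text generated only as the last step before the compiler (the record layout and the `{n}` reference notation are invented for this sketch):

```python
# Functions stored as id-keyed records; D source is generated from
# the records, so "write text" never happens directly.
funcs = {
    1: {"name": "square", "params": "int x", "body": "return x * x;"},
    2: {"name": "quad",   "params": "int x", "body": "return {1}({1}(x));"},
}

def gen_d(fid):
    """Generate D source for one function, resolving {id} references to names."""
    f = funcs[fid]
    body = f["body"]
    for other_id, other in funcs.items():
        body = body.replace("{%d}" % other_id, other["name"])
    return "int %s(%s) { %s }" % (f["name"], f["params"], body)

print(gen_d(2))   # int quad(int x) { return square(square(x)); }
```

Renaming `square` would mean editing one record's "name" field; `quad`'s body record, which refers to it as `{1}`, would not change at all.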
Apr 14 2007
David B. Held wrote:[...] Anyway, if anyone is interested in working on something like this, I would definitely try to make some time available to help out.I'm very interested! I even started a SF project for it quite some time ago, http://sf.net/projects/tood, but I only have ideas at the moment, no code :(

L.
Apr 15 2007
Lionello Lunesu wrote:[Snip quote of David B. Held's post and Lionello's reply...]Isn't that kind of what Smalltalk was supposed to be? I don't know about the graphical representations of functions part, but at least the main editing required a "Smalltalk Browser" which was basically just a hierarchical explorer for all your bits of code. I never used Smalltalk though... just read about it, so maybe it's nothing like what you guys are talking about. I was thinking about taking the time to learn it back when the Disney research folks were doing interesting things with Squeak, but then they decided to move to Python. :-)

--bb
Apr 15 2007
Bill Baxter wrote:[...] Isn't that kind of what Smalltalk was supposed to be? [...]I'm sure it's not completely different from Smalltalk. Just browsing some screenshots of various Smalltalk implementations, however, I'm not particularly impressed. Probably the closest thing to what I envision is the UML editor in one of the implementations.

The key, I think, is to understand where you need text for its density and where you want to take advantage of the visualization benefits of graphics. What I am seeing in a lot of the screenshots is something like a class/method browser that reminds me too much of the formula/query builders you see in business apps designed for novices. While that has its place, I really don't think a good graphical IDE should be a hand-holding crutch for noobs.

The other key is having multiple perspectives. This is something I really like about Eclipse. You can view the code, the object hierarchy, and the project explorer all at once. They are all views on the same underlying data. If we extend this to the point where you can view the declarations in a class as text or as icons, then I think you start to see the benefits of going GUI.

For instance, UML is nice when you want to see the high-level structure, but no good if you need to see the implementation. That's because UML was specifically designed to abstract away the implementation. So basing a GIDE on UML alone would be a mistake. However, having a UML view of your library/app class hierarchy *would* be really useful if it also allowed you to zoom in on class/function details. 
In fact, having a dynamic zoom feature that automatically compressed class definitions into UML entities (or even just class names) at the highest levels and got down to data types and code at the lowest would be pretty darned cool, IMO.

Tree-based browsers display packages and modules as lists because that's a convenient thing to do in a primarily text-oriented IDE. But there's really no reason to limit the display of such entities to lists. Being able to automatically arrange your modules according to inheritance or dependency relationships (or both at once, in separate views) would take advantage of 2D screen real estate. Being able to zoom in on the dependency view to the point where function call dependencies are visualized could make it much easier to understand an unfamiliar codebase. It would also help to see how much coupling you have between modules, and how much work is involved in refactoring to eliminate a dependency, all of which are important tasks when maintaining a large codebase. This is the kind of capability that I haven't seen in any existing products (though admittedly, I haven't looked all *that* hard).

Encoding syntactic information in the text presentation (or with a hybrid text/graphical presentation) is also not something I've seen anyone brave enough to attempt. Syntax highlighting + folding is generally as far as people take it, but I don't see any harm in trying to push the envelope. I'd love to see an IDE with an integrated TeX editor that lets you enter mathematical formulas in their natural format and just does the right thing. It would be somewhat like integrating Mathematica/Maple into the IDE, but without all the automated algebraic logic behind it.

I think D is still clean enough to support this kind of IDE without too much trouble, but macros could make things really interesting (since they rely primarily on the text representation of the program... but maybe they aren't as much of a problem as I suspect).

Dave
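The raw material for the dependency view described above is just an edge list; given each module's imports, a coupling summary such as fan-in falls out in a few lines (module names invented for the sketch):

```python
# Module dependency edges from import lists, summarized as fan-in:
# how many other modules depend on each module.
imports = {
    "gui":    ["core", "render"],
    "render": ["core"],
    "core":   [],
    "tools":  ["gui", "core"],
}

fan_in = {m: 0 for m in imports}
for deps in imports.values():
    for dep in deps:
        fan_in[dep] += 1

# modules ranked by how costly a refactoring touching them would be
print(sorted(fan_in.items(), key=lambda kv: -kv[1]))
```

A high fan-in module (here, "core") is exactly the one where eliminating a dependency takes the most work, which is the coupling insight the post wants the IDE to surface visually.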
Apr 16 2007
Indeed, Smalltalk was really cool, revolutionary maybe. But for some reason it never really took off, did it? It's still alive, so it might still reach a huge audience one day... As you said, a good IDE should at least separate the model from the view = the model/view/controller design pattern from... Smalltalk! ;-)

*** warning *** random confusing brainstorm follows...

The way I see it, you actually have a "core model" which is the representation of the language in a format suitable for the machine. Attached to this core model are many meta-models that hold the metadata required for the views that present this language in a human-readable and editable form (for example, the X,Y position of a flow-graph node, or comments). This model is not the same as the compiled module; it is much closer to the intermediate structure between the frontend and the backend (some kind of decorated AST?). It is this model that is THE source (read: D source ;-), and that is stored on disk, NOT the text, which is a lossy representation of the model. E.g. it would become impossible to use a plain text editor to edit the textual representation, because this would break the links between the view and the model.

So the model can be presented both textually and visually, and might support many different textual and visual languages. And yes, when using text, it would be great to finally get rid of the ASCII format and have rich text with math formulas, multimedia, interactive applets, etc., much like the web, but cleaner (I really hate web technology; it seems to have been invented by managers instead of computer scientists ;-). The underlying format of this "rich text" should again be the core model, just as Scheme/LISP trees can be used to describe any data structure using the language itself (e.g. a language must allow the creation of embedded mini-languages).

A (bad) example of this is Microsoft's .NET assembly format. 
Using a tool like Reflector you can reverse engineer the assembly into readable source (compare with the Java class format?). Of course, this is NOT the way it should be, because this assembly format seems way too low-level, and it misses the metadata needed for a good representation. One can use a tool like Phoenix to get a higher-level model from that, but this looks like the world upside down.

I think it is impossible to come up with one view/representation that is suitable for all needs. Some people might prefer a textual approach (which could be imperative, functional, OO, logical, etc. by preference), others would prefer visual representations, and most of us will prefer a combination (as you said, UML for the overview, text for the implementation, maybe some flow-graphs or state machines using visuals, etc). But the model should be the "single point of definition", so that the different views don't get out of sync and fully round-trip engineering is possible (which I believe all good UML tools already do today). The same goes for documentation and all other "derived" code; it should be stored in the model objects. By e.g. using unique numbers inside the model instead of symbols and strings to identify entities (compare with numeric primary keys in SQL), cross-module refactoring and version control merges become much less of a problem. Today one must make sure to include *all* modules in a project when refactoring, otherwise you're screwed.

A golden rule in OO is that if you can't find a good name, your design must be wrong... But this means I always have bad designs, because getting those names right the first time is d*mn difficult! Okay, let's take another example, Microsoft's WPF... It has a method called "FindName". What does it do? I believe it finds an element in the tree by name, and returns a reference to that element, or null when not found. 
Well, IMHO that should be named FindElementByName or at least FindByName, so lucky for me, even the "pros" don't get it right the first time ;-) So we can conclude that *humans* never get it right the first time (and maybe even GOD needed 7 refactorings instead of 7 days to create the universe ;-), so a good IDE, language and filesystem should have refactoring support very high on their requirement list. Furthermore, because we can't do it alone (okay, excluding Walter ;-), version control and team support must rank very high too (without the need to perform SVN CLEANUP all the time...). The same goes for testing, fast turnaround times, textual+visual debugging, etc... D provides a lot of this, but I think the (r)evolution will only happen when all these things are integrated into one big masterpiece working in harmony, instead of trying to get decade-old systems and ideas to work together.

Major advances in science do not happen by inventing something new, but by presenting existing ideas in a way accessible to many (btw, this is not my quote, I believe it comes from "e: The Story of a Number" ;-). Leibniz and many others proved that, and I think LINQ is a recent example, just to name one.

When I first tried Symantec C++, I hoped it would happen too, but - sorry Walter - Symantec made a huge mess of your amazing C++ technology; I can't tell you the frustration I got after buying Symantec C++. It's still on my list of the buggiest IDEs I ever bought, together with Borland C++ Builder (after which I abandoned Borland and sold my soul to the Visual C++ devil ;-). Well, the big yellow Symantec C++ box that contained enough books to fill a library still looks cool though, and the visual debugger that could show the relations between objects at runtime, man that was cool for its time, if only it would not have crashed all the time. Okay, I really got carried away here, and I might find myself on thin ice now :)
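The "unique numbers instead of names" idea above can be sketched in a few lines. Everything here is hypothetical (the IDs, the render helper, the FindName example); the point is only that renaming becomes a single edit to one record, never a search-and-replace over references:

```python
# Hypothetical sketch: entities identified by numeric IDs, like SQL
# primary keys. References store the ID, never the name.
symbols = {1: "FindName"}                  # id -> human-readable name
call_sites = [("tree.d", 1), ("ui.d", 1)]  # each reference holds the id

def render(site):
    """Render one call site as text; the name is looked up, not stored."""
    filename, sym_id = site
    return f"{filename}: {symbols[sym_id]}(...)"

before = render(call_sites[0])    # shows the old name
symbols[1] = "FindElementByName"  # the *entire* refactoring: one change
after = render(call_sites[0])     # every view now shows the new name
```

Under version control, that rename is one modification to one record, so merge conflicts with co-workers' edits to the call sites simply can't happen.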
Apr 16 2007
In line with that thinking... I had once thought of creating a programming engine - for lack of something better to call it - which would represent algorithms correctly. This idea was from a long time ago, before I really knew how to program. The idea was to implement an Operation struct, which would carry the hexcode of the operation, including operands, and from that would abstract the operand and operator information. The idea was to provide arrays of these operations, and then slice them and access a slice as an operand for branching and looping Operations. The problems I encountered were:
- that x86 assembly operations are never the same size
- that slices are bigger than an x86 operand (void* + length)
- that branches and loops are performed across two operations (the compare(s) and the jump(s))
- that nobody understood what the heck I was talking about.
It seems to me that algorithms are arrays of operations that are traversed and executed in a polylinear sequential fashion. Representing them as such seems more natural to me than inventing words for different branch and loop operations, which are algorithmically very similar. An "if" has a buffer that gets executed for the true condition; when it completes, EIP goes to the start of the *next* statement. A "while" has exactly the same thing, except that when that buffer completes, EIP goes back to the start of the *same* statement. Such differences are an off-by-one; hardly worthy of being granted their own keywords.
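Dan's off-by-one observation can be made concrete with a toy interpreter. This Python sketch is purely illustrative (the operation format is invented): "if" and "while" share one branch operation and differ only in where the instruction pointer lands after the block runs:

```python
# Hypothetical sketch: an algorithm as an array of operations, where
# "if" and "while" are the *same* branch op with a one-slot difference.
def run(ops, env):
    ip = 0                              # the toy EIP
    while ip < len(ops):
        op = ops[ip]
        if op[0] == "set":              # ("set", name, fn(env) -> value)
            env[op[1]] = op[2](env)
            ip += 1
        elif op[0] == "branch":         # ("branch", cond, block, loop_back)
            _, cond, block, loop_back = op
            if cond(env):
                run(block, env)
                # "while": back to the *same* statement (ip unchanged);
                # "if": on to the *next* statement (ip + 1).
                ip += 0 if loop_back else 1
            else:
                ip += 1
    return env

# usage: count 0..4 using the "while" flavour (loop_back=True)
env = run([
    ("set", "i", lambda e: 0),
    ("branch", lambda e: e["i"] < 5,
        [("set", "i", lambda e: e["i"] + 1)], True),
], {})
```

Flipping `loop_back` to `False` turns the same structure into an "if" that runs the block at most once, which is exactly the off-by-one Dan describes.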
Apr 16 2007
An "if" has a buffer that gets executed for the true condition, which, when complete, EIP goes to the start of the *next* statement. A "while" has exactly the same thing, except that when that buffer is completed, EIP goes back to the start of the *same* statement.

Well, it seems you really have to take a look at LISP/Scheme's "continuations" then. See http://en.wikipedia.org/wiki/Continuation. Really cool things, but not many people understand them, and they make it difficult to debug or read the code; but the same could be said about lazy evaluation, closures, etc... I'm not sure if D supports continuations? I guess one can do the same with closures anyway.
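As a rough illustration of "doing the same with closures", here is a continuation-passing-style sketch in Python (the function names are invented). Instead of returning, each function hands its result to an explicit "rest of the program":

```python
# Hypothetical sketch: continuation-passing style with plain closures.
# A continuation is just "what to do with the result", passed as k.
def add_cps(a, b, k):
    k(a + b)        # no return; call the rest of the program instead

def square_cps(x, k):
    k(x * x)

results = []
# compute (2 + 3)^2, threading the continuations as closures:
add_cps(2, 3, lambda s: square_cps(s, results.append))
```

Real first-class continuations (Scheme's call/cc) also let you *re-enter* a saved point, which closures alone don't give you; this sketch only shows the "save the rest and get back to it" flavour.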
Apr 16 2007
Peter Verswyvelen Wrote:
[Snip; the suggestion above to look at LISP/Scheme continuations]

In all honesty, I don't see how a Lisp Continuation relates to flow control, other than that it claims to "save state to be continued" and then get back to it after performing something? Like a stack? Wikipedia was using all kinds of word overloading to express the concept. (I hate word overloading.)

~~~~ Abstraction of the x86 stack:
Consider the x86 stack a uint[]. The most recent element is always pushed onto the front (the zero side) of the stack. When a function is called, the top item on the stack is thus the first argument, and so forth. Thus when we perform a function call in D, we get [EAX, &(ESP+0), &(ESP+4), &(ESP+8)...]. If we call the function "naked" and push EAX onto the stack, we then have ESP = arguments[arity]; one would obviously need to know the arity, or if you pass it in EAX, you don't even need to push it and you can store the arguments.length there.

~~~~ Comparable to Continuations:
Furthermore, if one examines the x86 stack after PUSHA, they will realize that you have everything you need for a thread state. Allowing single-CPU multithreading merely requires you to provide an interrupt which maintains a flexible ring buffer of ESPs, each pointing to a stack immediately after a PUSHA. Immediately after switching items on the ring buffer, perform a POPA and then IRET. In Ring 0, this costs roughly 118 cycles on a x686; if you port it out of an interrupt to Ring 3, it can take as little as 38 cycles.
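A user-space analogue of the ring buffer of saved ESPs can be sketched with Python generators, which suspend and resume an execution state much as PUSHA/POPA save and restore the register file. This is only an analogy (the worker and its output are made up, and it says nothing about the cycle counts above):

```python
from collections import deque

# Hypothetical sketch: cooperative "threads" as generators held in a
# ring buffer, by analogy with a ring buffer of post-PUSHA ESPs.
def worker(name, steps, log):
    for i in range(steps):
        log.append(f"{name}:{i}")
        yield                   # like PUSHA: suspend, state is saved

log = []
ring = deque([worker("A", 2, log), worker("B", 2, log)])
while ring:
    task = ring.popleft()       # next saved state in the ring
    try:
        next(task)              # like POPA + IRET: resume it
        ring.append(task)       # still alive: back into the ring
    except StopIteration:
        pass                    # task finished; drop it from the ring
```

The scheduler itself is just "rotate the ring, resume, repeat", which is essentially what the interrupt handler Dan describes would do with real stacks.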
Apr 16 2007
Peter Verswyvelen wrote:
[Snip; original post quoted above] The advantage of a pure drag-drop-connect-the-dots visual programming language like Logix is that it can work very close to the AST, directly linking to statements/functions by "pointer/identifier", so a symbol name never matters for the computer, only for a human, and a rename is just one modification to the symbol and not to its references. Of course we programmers don't want to work with visual graphs (screen clutter!), we want to see code, but I think we might also benefit from writing code closer to the AST; after all, code completion and all those handy code snippets are a bit like that: you insert a foreach loop with a single keystroke and fill in the symbols, but it's still just text. Why not insert a foreach statement into a high-level AST, and regard the text as a representation of (and tagged navigation over) that AST, instead of translating the text into the AST... I heard some old LISP editors worked like that, but I never saw one. So maybe it would be a good idea to develop an IDE just as (r)evolutionary as D is? Or does it already exist, meaning I just wasted half an hour typing this email ;-) Keep up the amazing work, Peter

Wow! You've worded my feelings quite nicely :) I also hate the way we're writing text files. If the files/modules get large, we split the text files. If we get too many files, we make folders/packages, etc. It all feels so prehistoric. I wish we could just write code, connect functions, drop in a pattern and fill in the blanks... I started with some ideas about a function repository from which you can drag-n-drop blocks into your project, connecting the parameters with your local variables, possibly passing a code block as a delegate to another code block. Do you have a link to that Logix you've mentioned? I find too many references to it and don't know which one you meant.

L.
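The "text as a view of the AST" idea might look something like this toy sketch (Python standing in for the IDE's model; the node classes are invented). The D-ish text at the end is derived output, not the stored source:

```python
# Hypothetical sketch: the stored artifact is the AST; the text is
# rendered from it on demand.  "Insert a foreach" is one AST edit.
class Foreach:
    def __init__(self, var, coll, body):
        self.var, self.coll, self.body = var, coll, body
    def render(self, indent=0):
        pad = "    " * indent
        inner = "".join(s.render(indent + 1) for s in self.body)
        return f"{pad}foreach ({self.var}; {self.coll}) {{\n{inner}{pad}}}\n"

class Call:
    def __init__(self, text):
        self.text = text
    def render(self, indent=0):
        return "    " * indent + self.text + ";\n"

# one keystroke == one node insertion; the text below is a *view*
ast = Foreach("item", "items", [Call("writefln(item)")])
print(ast.render())
```

Because the editor manipulates nodes rather than characters, "fill in the symbols" becomes editing fields of the Foreach node, and the rendered text can never be syntactically malformed.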
Apr 11 2007
Indeed. Actually the whole file system thingy is an ancient leftover; we should be working with an object database as a 'file'system by now ;-) Logix was the codename of a proprietary system developed for my previous employer, Playlogic, for creating realtime 3D effects and procedural content. It is not available; I even think I'm not allowed to talk about it, you know how these company policies are... Systems like Virtools and Quest3D look a bit like it. But if you're a programmer, you won't like this stuff, because we are trained to read text and symbols (as long as it contains syntax colors; it's amazing how lazy my brain became because of these color screens ;-)
Apr 11 2007
Peter Verswyvelen wrote:
[Snip; quoted in full above]

PlayLogic you say!? I know quite a few guys that work(ed) there :)

L.
Apr 11 2007
Lionello Lunesu wrote:
Peter Verswyvelen wrote:
[Snip; both posts quoted in full above]

That reminds me of "Divide and conquer". It's not prehistoric, it's just that it's easier for a human to handle small cases. Just as you split a function into smaller functions, you split a file into multiple files to understand the whole more easily. I like the ideas proposed in this thread, but I'd also like to see a screenshot of this visual language, or some more thorough explanation... or a plan to build that kind of IDE, because I can't quite imagine how it would work.
Apr 11 2007