digitalmars.D.announce - John Warner Backus
- sclytrack (2/2) Mar 21 2007 John Warner Backus died on March 17, 2007.
- Pragma (7/10) Mar 21 2007 I'm actually kind of saddened by this. It's hard to see someone so infl...
- kris (2/11) Mar 21 2007 Aye
- Tomas Lindquist Olsen (4/9) Mar 21 2007 Reading the slashdot freakshow this PDF got me thinking a bit about wher...
- Pragma (11/23) Mar 22 2007 Thanks for the link. It's an interesting read. Sweeny says some really...
- Sean Kelly (28/36) Mar 22 2007 I'm not sure I agree. Many of the most common transactional processes
- Pragma (9/52) Mar 22 2007 I see what you mean. These things always seem so much more tranquil on ...
John Warner Backus died on March 17, 2007. http://en.wikipedia.org/wiki/John_Backus
Mar 21 2007
sclytrack wrote:
> John Warner Backus died on March 17, 2007. http://en.wikipedia.org/wiki/John_Backus

I'm actually kind of saddened by this. It's hard to see someone so influential in this field go.

FWIW, he had one heck of a sendoff over on slashdot and digg:

http://developers.slashdot.org/article.pl?sid=07/03/20/0223234
http://www.digg.com/programming/John_W_Backus_82_Fortran_Developer_Dies

--
- EricAnderton at yahoo
Mar 21 2007
Pragma wrote:
> sclytrack wrote:
>> John Warner Backus died on March 17, 2007. http://en.wikipedia.org/wiki/John_Backus
>
> I'm actually kind of saddened by this. It's hard to see someone so influential in this field go.

Aye
Mar 21 2007
Pragma wrote:
> FWIW, he had one heck of a sendoff over on slashdot and digg:
>
> http://developers.slashdot.org/article.pl?sid=07/03/20/0223234
> http://www.digg.com/programming/John_W_Backus_82_Fortran_Developer_Dies

Reading the slashdot freakshow, this PDF got me thinking a bit about where we (and D) are going...

http://www.st.cs.uni-sb.de/edu/seminare/2005/advanced-fp/docs/sweeny.pdf
Mar 21 2007
Tomas Lindquist Olsen wrote:
> Pragma wrote:
>> FWIW, he had one heck of a sendoff over on slashdot and digg:
>>
>> http://developers.slashdot.org/article.pl?sid=07/03/20/0223234
>> http://www.digg.com/programming/John_W_Backus_82_Fortran_Developer_Dies
>
> Reading the slashdot freakshow, this PDF got me thinking a bit about where we (and D) are going...
>
> http://www.st.cs.uni-sb.de/edu/seminare/2005/advanced-fp/docs/sweeny.pdf

Thanks for the link. It's an interesting read. Sweeney says some really *odd* things about the status quo that make me wonder WTF the programmers on his team are doing.

His comments on concurrency and musings on the next language are dead on, with (appropriate) shades of Backus thrown in:

"In a concurrent world, imperative is the wrong default."

"Transactions are the only plausible solution to concurrent mutable state."

D shines in a few of these areas, but needs library support for transactional memory, better concurrency support and constructs, something like a well-coded numerics library (true integers, etc.), and something resembling compile-time iterator/bounds checking to fit the bill. :(

--
- EricAnderton at yahoo
Mar 22 2007
Pragma wrote:
> "Transactions are the only plausible solution to concurrent mutable state."

I'm not sure I agree. Many of the most common transactional processes work much like mutexes. In SQL, for example, data affected by a transaction is locked (typically at row, page, or table granularity) in much the same way as obtaining locks on mutexes protecting data. Deadlocks are quite possible, and before the era of automatic deadlock resolution, they froze the DB indefinitely.

The new concept of transactional memory turns this idea on its head by cloning affected data instead of locking it, and mutating the clones. Committing a transaction is therefore accomplished by comparing the original version of all affected data with the current version of the affected data, and if they match, the clones are substituted. If they don't match, however, the entire transaction is rolled back and retried. The result is that large transactions are slow and require an unbounded amount of memory (because of the cloning), and no guarantee of progress is provided, because success ultimately relies on a race condition.

That said, there have been proposals to add a transactional memory feature to hardware, and I think this is actually a good idea. The existing hardware-based solutions are typically limited to updating no more than 4-8 bytes of contiguous data, while transactional memory would allow for additional flexibility. I've seen implementations of lock-free binary trees based on this concept, and I'm not aware of anything comparable without it. Progress guarantees are less of an issue as well because hardware-level transactions will typically be very small.

> D shines in a few of these areas, but needs library support for transactional memory, better concurrency support and constructs, something like a well-coded numerics library (true integers, etc.), and something resembling compile-time iterator/bounds checking to fit the bill.
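[Editor's note: the commit cycle described above can be sketched at the smallest scale with the word-sized compare-and-swap primitive that existing hardware already provides. This is a minimal illustration in Go, not anyone's proposed API; all names are mine. Read a snapshot, mutate a "clone", then commit only if the original is unchanged, otherwise retry.]

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// transactionalAdd mimics an STM commit at word granularity:
// take a snapshot of the original, compute a new value (the
// "clone"), then substitute the clone only if no other thread
// has changed the original in the meantime. On a conflict the
// whole attempt is rolled back and retried -- so progress
// depends on winning a race, exactly as described above.
func transactionalAdd(val *int64, delta int64) {
	for {
		snapshot := atomic.LoadInt64(val) // read original
		clone := snapshot + delta         // mutate the clone
		if atomic.CompareAndSwapInt64(val, snapshot, clone) {
			return // commit succeeded
		}
		// commit failed: another writer got there first; retry
	}
}

func main() {
	var counter int64
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			transactionalAdd(&counter, 1)
		}()
	}
	wg.Wait()
	fmt.Println(counter) // prints 100
}
```

[Note how this also shows the limitation Sean mentions: the hardware CAS covers one 8-byte word, so multi-word transactions need the software cloning scheme, with its unbounded memory and retry costs.]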
> :(

I'd add something like CSP to the category of "better concurrency support." And I agree with the rest.

Sean
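[Editor's note: for readers unfamiliar with CSP (Communicating Sequential Processes), the idea is to avoid shared mutable state entirely: sequential processes own their data and interact only by passing messages over channels. A minimal sketch in Go, whose channels are directly CSP-inspired; the worker-pool shape is my own illustration, not a proposed D design.]

```go
package main

import "fmt"

// worker is a sequential process: it shares no state with its
// peers and communicates only through channels, so no locks or
// transactions are needed.
func worker(jobs <-chan int, results chan<- int) {
	for n := range jobs {
		results <- n * n // send each square back to the coordinator
	}
}

func main() {
	jobs := make(chan int, 5)
	results := make(chan int, 5)

	// Three concurrent workers drain the same jobs channel.
	for w := 0; w < 3; w++ {
		go worker(jobs, results)
	}
	for i := 1; i <= 5; i++ {
		jobs <- i
	}
	close(jobs) // lets the workers' range loops terminate

	sum := 0
	for i := 0; i < 5; i++ {
		sum += <-results
	}
	fmt.Println(sum) // 1+4+9+16+25 = 55
}
```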
Mar 22 2007
Sean Kelly wrote:
> Pragma wrote:
>> "Transactions are the only plausible solution to concurrent mutable state."
>
> I'm not sure I agree. Many of the most common transactional processes work much like mutexes. In SQL, for example, data affected by a transaction is locked (typically at row, page, or table granularity) in much the same way as obtaining locks on mutexes protecting data. Deadlocks are quite possible, and before the era of automatic deadlock resolution, they froze the DB indefinitely.
>
> The new concept of transactional memory turns this idea on its head by cloning affected data instead of locking it, and mutating the clones. Committing a transaction is therefore accomplished by comparing the original version of all affected data with the current version of the affected data, and if they match, the clones are substituted. If they don't match, however, the entire transaction is rolled back and retried. The result is that large transactions are slow and require an unbounded amount of memory (because of the cloning), and no guarantee of progress is provided, because success ultimately relies on a race condition.

I see what you mean. These things always seem so much more tranquil on the surface. It seems to me that the only positive trade-off is for highly parallelizable and/or long-running algorithms, which hardly solves anything.

> That said, there have been proposals to add a transactional memory feature to hardware, and I think this is actually a good idea. The existing hardware-based solutions are typically limited to updating no more than 4-8 bytes of contiguous data, while transactional memory would allow for additional flexibility. I've seen implementations of lock-free binary trees based on this concept, and I'm not aware of anything comparable without it. Progress guarantees are less of an issue as well because hardware-level transactions will typically be very small.

Neat! Seeing as how the industry is moving towards more and more processor cores, I suppose it follows that we'll eventually see additional hardware support to make it less unwieldy as well. I'm eager to see stuff like this happen. It sounds like something D could adopt easily, provided there's a way to qualify these concepts in a way that doesn't make a person's head explode.

> I'd add something like CSP to the category of "better concurrency support." And I agree with the rest.

--
- EricAnderton at yahoo
Mar 22 2007