digitalmars.D - Walter's DConf 2014 Talks - Topics in Finance
- TJB (12/12) Mar 21 2014 Walter,
- Joakim (21/30) Mar 21 2014 Heh, right before I read this, I stumbled across this snippet
- w0rp (11/31) Mar 21 2014 That is a really strange argument. Let's break it down into
- Paulo Pinto (7/39) Mar 21 2014 I would take Miguel's comments about C++ with a grain of salt.
- TJB (3/39) Mar 21 2014 The removal of pain points is indeed, in my mind, the main issue.
- Walter Bright (3/10) Mar 21 2014 It's a good thought, but I have zero knowledge of how C++ is used for hi...
- TJB (3/19) Mar 21 2014 I would be happy to help you with an option pricing example that
- Brian Rogoff (12/17) Mar 22 2014 This is a very interesting thread that you started. Could you
- TJB (15/27) Mar 22 2014 Well, right away people jumped to high-frequency trading.
- Daniel Davidson (9/19) Mar 24 2014 I apologize for that. HFT has been a driver of a lot of business
- TJB (9/31) Mar 24 2014 Dan,
- Walter Bright (2/4) Mar 23 2014 Sure, please email it to me.
- TJB (4/9) Mar 23 2014 Walter, I would be happy to. Where do I find your email address?
- Walter Bright (2/4) Mar 23 2014 My first name followed by digitalmars.com.
- Chris Williams (6/8) Mar 21 2014 Reading through the Wikipedia article on Computational Finance,
- Russel Winder (37/46) Mar 22 2014 http://en.wikipedia.org/wiki/Computational_finance
- evansl (11/18) Mar 23 2014 By "bog standard" do you mean "plain or ordinary?
- Russel Winder (16/39) Mar 23 2014 Sort of. This webpage almost, but not quite, totally misses all the
- Daniel Davidson (6/18) Mar 21 2014 Maybe a good starting point would be to port some of QuantLib and
- TJB (4/26) Mar 21 2014 Dan,
- bearophile (8/9) Mar 21 2014 That code must always be hard-real time. So a GC is allowed only
- Walter Bright (6/10) Mar 21 2014 These are all very doable in D, and I've written apps that do so. The "f...
- deadalnix (2/12) Mar 21 2014 Fear of GC is overblown. Fear of D's GC isn't.
- Paulo Pinto (5/19) Mar 22 2014 Yes, as there are a few high performance trading systems done with
- Ziad Hatahet (5/9) Mar 22 2014 What about AAA games? :) Even though I agree with the pro-GC arguments y...
- Paulo Pinto (6/16) Mar 22 2014 What about ballistic missile tracking radar control systems?
- zoomba (3/28) Mar 22 2014 Ah Thales... I hope they are better at writing software for
- Russel Winder (16/25) Mar 22 2014 Not entirely the case. Yes the ultra-high-frequency traders tend to be C
- Daniel Davidson (10/16) Mar 22 2014 That is wrong. Trading is competitive and your competitors rarely
- Paulo Pinto (5/19) Mar 22 2014 Maybe this is a wrong conclusion, but from the comments, I would say the...
- Daniel Davidson (19/47) Mar 22 2014 I guess it depends where your edge comes from. If it is speed,
- Paulo Pinto (22/64) Mar 22 2014 I read a bit more into it, as the post is from 2011, there are some
- Daniel Davidson (9/15) Mar 22 2014 Labor is a market like any other. It depends on supply and
- Russel Winder (17/20) Mar 22 2014 Good C++ programmers appear to be able to get $350k to $400k in NY.
- Walter Bright (3/7) Mar 22 2014 Having built C++ and D compilers, I know this to be true for a fact. It'...
- deadalnix (3/24) Mar 22 2014 HFT is very latency sensitive. D stop the world GC is a no go.
- Sean Kelly (2/2) Mar 22 2014 The work Don's company does has very similar requirements to HFT.
- TJB (3/6) Mar 22 2014 Yes, I'm very much looking forward to that talk as well.
- deadalnix (3/6) Mar 22 2014 They did significant work to customize the GC in order to meet the
- Russel Winder (25/28) Mar 23 2014 GC technology was well beyond "stop the world" in Common Lisp in the
- Paulo Pinto (10/32) Mar 23 2014 Well, there is a nice quote from Bjarne as well:
- Walter Bright (3/5) Mar 23 2014 malloc/free cannot be used in hard real time systems, either. malloc/fre...
- "Ola Fosheim Grøstad" (4/6) Mar 23 2014 While that is true you can have a soft real time thread feeding
- Walter Bright (3/9) Mar 23 2014 Yes, and you can do that with GC, too.
- "Ola Fosheim Grøstad" (4/8) Mar 24 2014 You can, but the way I view "soft real time" is that you accept
- Russel Winder (10/17) Mar 23 2014 By estimating the resource needs and preallocating a freelist, you can
- Walter Bright (2/13) Mar 23 2014 Yes, and you can do that in D, the same way.
- Nick Sabalausky (5/10) Mar 23 2014 If the domain is high-performance, high-volume, (hard|soft) realtime,
- Paulo Pinto (5/16) Mar 23 2014 My question was precisely because I tend to see that a lot in general,
- Russel Winder (13/30) Mar 22 2014 If the data isn't available we'll never really know.
- Daniel Davidson (36/66) Mar 21 2014 Well, I am a fan of D. That said, there is a ton of money that
- Russel Winder (27/30) Mar 22 2014 I would certainly agree that (at least initially) pitching D against the
- Daniel Davidson (19/35) Mar 22 2014 I guess it depends on the goal. The OP was interested in
- TJB (6/14) Mar 22 2014 Well, I for one, would be hugely interested in such a thing. A
- Daniel Davidson (17/22) Mar 22 2014 A bit. You can check out some of my C++ code generation support
- Laeeth Isharc (7/23) Dec 22 2014 Well for HDF5 - the bindings are here now - pre alpha but will
- aldanor (9/35) Dec 22 2014 @Laeeth
- Laeeth Isharc (12/20) Dec 22 2014 Oh, well :) I would certainly be interested to see what you
- Oren Tirosh (4/13) Dec 22 2014 In that case, a good start might be a D kernel for
- Laeeth Isharc (47/69) Dec 22 2014 In case it wasn't obvious from the discussion that followed:
- aldanor (24/70) Dec 22 2014 I agree with most of these points.
- Daniel Davidson (22/38) Dec 22 2014 This description feels too broad. Assume that it is the "data
- aldanor (25/33) Dec 22 2014 On Monday, 22 December 2014 at 17:28:39 UTC, Daniel Davidson
- Paulo Pinto (6/39) Dec 22 2014 From what I have learned in Skills Matter presentations, for that
- Daniel Davidson (12/25) Dec 22 2014 I don't know about low frequency which is why I asked about
- Laeeth Isharc (128/154) Dec 22 2014 Hi.
- Daniel Davidson (17/44) Dec 22 2014 Interesting perspective on the FI group's use of perl. Yes that
- jmh530 (21/25) Jan 14 2015 I have a longer horizon than the HFT guys, but I still have quite
- Oren Tirosh (14/20) Dec 23 2014 There is no lack of tools if you can integrate well with existing
- Saurabh Das (17/29) Mar 22 2014 You are absolutely correct - the finance industry _wants_ to
- Daniel Davidson (23/38) Mar 22 2014 Well, the finance industry is pretty big and therefore diverse.
- Saurabh Das (34/79) Mar 22 2014 Yes. The finance industry is very big and diverse. I am in
- Daniel Davidson (27/42) Mar 22 2014 But, clearly that is not necessarily a benefit of D. It is a
- Walter Bright (5/7) Mar 22 2014 That's correct. It's also true of writing code in C or even assembler.
- Saurabh Das (17/65) Mar 22 2014 Yes - I didn't mean this as a point in favour of D, but just to
- Sean Kelly (12/18) Mar 23 2014 Try no funding and a trivial amount of time. The JSON parser I
- Paulo Pinto (7/24) Mar 23 2014 At least on Java world it is not quite true.
- Russel Winder (11/19) Mar 23 2014 This is exactly why Groovy has two distinct XML parsers. British Library
- Sean Kelly (6/10) Mar 23 2014 It's been a while since I used it, but the Apache SAX parser
- Paulo Pinto (6/20) Mar 24 2014 Ah Xerces! Last time I looked into it was around 2003.
- Walter Bright (2/11) Mar 23 2014 Lazy evaluation FTW. Ranges and algorithms fit right in with that.
- deadalnix (6/26) Mar 24 2014 This isn't for ridiculous reasons, this is because in other
- Walter Bright (2/4) Mar 22 2014 Isn't that the best kind of distinction?
- Walter Bright (2/6) Mar 22 2014 C'mon, man, you gotta come. I want to hear more about the HFT stuff!
- NVolcz (2/14) Jan 14 2015 1+ for finance talk
Walter,

I see that you will be discussing "High Performance Code Using D" at the 2014 DConf. This will be a very welcome topic for many of us. I am a finance professor; I currently teach and do research in computational finance. Might I suggest that you include some finance examples (say, Monte Carlo options pricing)? If you can get the finance industry interested in D, you might see massive adoption of the language. Many are desperate for an alternative to C++ in that space.

Just a thought.

Best,
TJB
Mar 21 2014
On Friday, 21 March 2014 at 21:14:15 UTC, TJB wrote:
> If you can get the finance industry interested in D you might see a
> massive adoption of the language. Many are desperate for an
> alternative to C++ in that space.

Heh, right before I read this, I stumbled across this snippet from Miguel De Icaza's blog from a couple months back, where he regretted using C++ to build Moonlight, their Silverlight implementation:

"But this would not be a Saturday blog post without pointing out that Cairo's C-based API is easier and simpler to use than many of those C++ libraries out there. The more sophisticated the use of the C++ language to get some performance benefit, the more unpleasant the API is to use. The incredibly powerful Antigrain sports an insanely fast software renderer and also a quite hostile template-based API. We got to compare Antigrain and Cairo back when we worked on Moonlight. Cairo was the clear winner.

We built Moonlight in C++ for all the wrong reasons ("better performance", "memory usage") and was a decision we came to regret. Not only were the reasons wrong, it is not clear we got any performance benefit and it is clear that we did worse with memory usage. But that is a story for another time."

http://tirania.org/blog/archive/2014/Jan-04.html
Mar 21 2014
On Friday, 21 March 2014 at 21:30:29 UTC, Joakim wrote:
> Heh, right before I read this, I stumbled across this snippet from
> Miguel De Icaza's blog from a couple months back, where he regretted
> using C++ to build Moonlight, their Silverlight implementation:
> [snip]
> http://tirania.org/blog/archive/2014/Jan-04.html

That is a really strange argument. Let's break it down into stages.

1. Use C++ for better performance.
2. Find a C++ library with better performance, but it's ugly.
3. Use a C library in C++ instead because it's less ugly.
4. Conclude that C++ can't deliver better performance.

That is really weak. This is why the industry needs salvation from C++ with D. Then it would mostly be, "Oh, it has better performance with these template things... and it's not a pain in the ass to use."
Mar 21 2014
On 21.03.2014 at 22:39, w0rp wrote:
> That is a really strange argument. Let's break it down into stages.
> [snip]
> That is really weak. This is why the industry needs salvation from
> C++ with D.

I would take Miguel's comments about C++ with a grain of salt. Back when I participated in Gtkmm (early 2000s), there were occasional C++ bashes coming from the Gtk+ guys, and Miguel was never fond of C++.

--
Paulo
Mar 21 2014
On Friday, 21 March 2014 at 21:39:54 UTC, w0rp wrote:
> That is really weak. This is why the industry needs salvation from
> C++ with D. Then it would mostly be, "Oh, it has better performance
> with these template things... and it's not a pain in the ass to use."

The removal of pain points is indeed, in my mind, the main issue. :-)
Mar 21 2014
On 3/21/2014 2:14 PM, TJB wrote:
> Might I suggest that you include some finance (say Monte Carlo
> options pricing) examples? If you can get the finance industry
> interested in D you might see a massive adoption of the language.
> Many are desperate for an alternative to C++ in that space.

It's a good thought, but I have zero knowledge of how C++ is used for high frequency trading.
Mar 21 2014
On Friday, 21 March 2014 at 22:28:36 UTC, Walter Bright wrote:
> It's a good thought, but I have zero knowledge of how C++ is used
> for high frequency trading.

I would be happy to help you with an option pricing example that is commonly used. Let me know if you are interested.
Mar 21 2014
On Friday, 21 March 2014 at 22:33:37 UTC, TJB wrote:
> I would be happy to help you with an option pricing example that is
> commonly used. Let me know if you are interested.

This is a very interesting thread that you started. Could you flesh it out more with some example C++ that you'd like compared to D? I'm sure quite a few people would assist with a translation.

I'm no expert in high frequency trading, but I was inspired by your post to start poking around here

http://www.quantstart.com/articles/european-vanilla-option-pricing-with-c-via-monte-carlo-methods

and study some of the algorithms. Nothing there that I wouldn't rather see in D than C++. D's GC is problematic, but the hope is that you can avoid allocating from the GC'ed heap and that eventually (soon? please?) it will be replaced by a better precise GC.
Mar 22 2014
On Saturday, 22 March 2014 at 16:35:07 UTC, Brian Rogoff wrote:
> This is a very interesting thread that you started. Could you flesh
> it out more with some example C++ that you'd like compared to D?
> I'm sure quite a few people would assist with a translation.

Well, right away people jumped to high-frequency trading. Although that may be the most visible area in computational finance, it's not the only one. There are areas where performance is crucial, but where trading is done at a lower frequency (where latency is not the main issue).

> I'm no expert in high frequency trading, but I was inspired by your
> post to start poking around here [snip] and study some of the
> algorithms. Nothing there that I wouldn't rather see in D than C++.

The example that you link to is exactly what I have in mind. A simple comparison of Monte Carlo routines for pricing options would be a great place to start. The bible on this is the book by Glasserman:

http://www.amazon.com/Financial-Engineering-Stochastic-Modelling-Probability/dp/0387004513/ref=sr_1_1?ie=UTF8&qid=1395509317&sr=8-1&keywords=monte+carlo+in+financial+engineering

And a great source for approaching this in C++ is Joshi:

http://www.amazon.com/Patterns-Derivatives-Pricing-Mathematics-Finance/dp/0521832357/ref=sr_1_5?s=books&ie=UTF8&qid=1395509376&sr=1-5

> D's GC is problematic, but the hope is that you can avoid allocating
> from the GC'ed heap and that eventually (soon? please?) it will be
> replaced by a better precise GC.

Sounds great to me. I would love to see it. Thanks for taking interest.
Mar 22 2014
On Saturday, 22 March 2014 at 17:30:45 UTC, TJB wrote:
> Well, right away people jumped to high-frequency trading. Although
> that may be the most visible area in computational finance, it's not
> the only one. There are areas where performance is crucial, but
> where trading is done at a lower frequency (where latency is not the
> main issue).

I apologize for that. HFT has been a driver of a lot of business and attention. Of course you are right about areas with less latency sensitivity, and D is attractive there. Even in latency-sensitive efforts I think D could be attractive for new projects, provided some of the memory management efforts continue to evolve. My main point was that selling it would be tough.

Thanks
Dan
Mar 24 2014
On Monday, 24 March 2014 at 11:57:14 UTC, Daniel Davidson wrote:
> Even in latency-sensitive efforts I think D could be attractive for
> new projects, provided some of the memory management efforts continue
> to evolve. My main point was that selling it would be tough.

Dan,

Thanks for your thoughtful points. I think your experience is worth listening to, and I think it confirms both:

* D is worth pursuing in these areas
* It won't be easy to convince others to adopt it

These points are worth thinking deeply about as D is developed for the real world.

TJB
Mar 24 2014
On 3/21/2014 3:33 PM, TJB wrote:
> I would be happy to help you with an option pricing example that is
> commonly used. Let me know if you are interested.

Sure, please email it to me.
Mar 23 2014
On Sunday, 23 March 2014 at 07:14:06 UTC, Walter Bright wrote:
> Sure, please email it to me.

Walter, I would be happy to. Where do I find your email address? Sorry if this is a dumb question.

TJB
Mar 23 2014
On 3/23/2014 11:10 AM, TJB wrote:
> Walter, I would be happy to. Where do I find your email address?
> Sorry if this is a dumb question.

My first name followed by digitalmars.com.
Mar 23 2014
On Friday, 21 March 2014 at 22:28:36 UTC, Walter Bright wrote:
> It's a good thought, but I have zero knowledge of how C++ is used
> for high frequency trading.

Reading through the Wikipedia article on Computational Finance, it looks like it's basically performing simulations where some data is known but other data is not. Random numbers are generated for the unknown data, and the simulations are run several times to find the range of possible outcomes given the known values.
Mar 21 2014
On Fri, 2014-03-21 at 22:44 +0000, Chris Williams wrote:
> Reading through the Wikipedia article on Computational Finance, it
> looks like it's basically performing simulations where some data is
> known but other data is not.

http://en.wikipedia.org/wiki/Computational_finance

is seriously lacking in actual content, but at least it isn't entirely wrong. There are many different things happening in what many would label as computational finance, from ultra-high-frequency trading systems to modelling macroeconomics for fund management. In the former case, of course, the most crucial thing is the length of the cable from your computer to the router :-) In the latter case, and indeed in quantitative analysis (quant), the mathematical models can be seriously weird (most likely because they are based on phenomenology rather than science-based modelling).

What you are alluding to is the use of a Monte Carlo approach to solve some of the models given boundary conditions. This is a "bog standard" approach to numerical modelling. Many quants use Excel, many are C++ folk. Those using Excel really need to stop it. Many of the hedge funds are following in the HEP and bioinformatics direction and using Python (PyPy, NumPy, SciPy, etc.) for writing the initial models and then, if greater speed is needed, using Cython or rewriting in C++. Mathematica, R and Julia are increasingly also players in this game.

D could be a very interesting player in this game, but it would have to have some early adopters get on board and show, in the domain itself, that it can beat Python, C++, Mathematica, R and Julia at their own game.

Whilst I am not a finance person myself, I train a lot of finance people, and signal processing people, in Python, Java, Scala, and Groovy. In a number of the major international finance houses a Python/Scala/C++ stack has taken hold. However this is not fixed forever, since whenever the CTO changes the stack tends to as well.

--
Russel.
=============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder
Mar 22 2014
On 03/22/14 06:40, Russel Winder wrote:
[snip]
> What you are alluding to is the use of a Monte Carlo approach to
> solve some of the models given boundary conditions. This is a "bog
> standard" approach to numerical modelling.

By "bog standard" do you mean "plain or ordinary"?

http://en.wiktionary.org/wiki/bog_standard

> Many of the hedge funds are following in the HEP [snip]

By HEP do you mean "High Energy Physics"?

> In a number of the major international finance houses a
> Python/Scala/C++ stack has taken hold. However this is not fixed
> forever, since whenever the CTO changes the stack tends to as well.

By CTO do you mean "Chief Technology Officer"?

TIA.

-regards,
Larry
Mar 23 2014
On Sun, 2014-03-23 at 11:46 -0500, evansl wrote:
> By "bog standard" do you mean "plain or ordinary"?

Sort of. This webpage almost, but not quite, totally misses all the British humorous subtleties. But yes. Effectively.

> By HEP do you mean "High Energy Physics"?

Oh yes. But I was only involved in HEP when all the software was in FORTRAN. Recently they have started using Fortran, and even C++. This is almost certainly Stephen Wolfram's fault.

> By CTO do you mean "Chief Technology Officer"?

Indeed. Many companies outside the US hipster company organization theory may use the term Technical Director. :-)

> TIA. -regards, Larry

Hopefully that is not Page or Ellison. ;-)

--
Russel.
Mar 23 2014
On Friday, 21 March 2014 at 21:14:15 UTC, TJB wrote:
> Might I suggest that you include some finance (say Monte Carlo
> options pricing) examples? If you can get the finance industry
> interested in D you might see a massive adoption of the language.
> Many are desperate for an alternative to C++ in that space.

Maybe a good starting point would be to port some of QuantLib and see how the performance compares. In High Frequency Trading I think D would be a tough sell, unfortunately.

Thanks
Dan
Mar 21 2014
On Saturday, 22 March 2014 at 00:14:11 UTC, Daniel Davidson wrote:
> Maybe a good starting point would be to port some of QuantLib and
> see how the performance compares. In High Frequency Trading I think
> D would be a tough sell, unfortunately.

Dan,

Why a tough sell? Please explain.

TJB
Mar 21 2014
TJB:
> Why a tough sell? Please explain.

That code must always be hard real-time. So a GC is allowed only during startup time (unless it's a quite special GC), hidden heap allocations are forbidden, data access patterns need to be carefully chosen, you even have to use most of the hot part of the stack, etc.

Bye,
bearophile
Mar 21 2014
On 3/21/2014 5:39 PM, bearophile wrote:
> That code must always be hard real-time. So a GC is allowed only
> during startup time (unless it's a quite special GC), hidden heap
> allocations are forbidden, data access patterns need to be carefully
> chosen, you even have to use most of the hot part of the stack, etc.

These are all very doable in D, and I've written apps that do so. The "fear of GC" is completely overblown. However, I do recognize that Phobos can be made a lot more non-GC friendly, and I am working on PRs to do that. For example:

https://github.com/D-Programming-Language/phobos/pull/2014
Mar 21 2014
On Saturday, 22 March 2014 at 01:24:38 UTC, Walter Bright wrote:
> These are all very doable in D, and I've written apps that do so.
> The "fear of GC" is completely overblown.

Fear of GC is overblown. Fear of D's GC isn't.
Mar 21 2014
On 22.03.2014 at 06:58, deadalnix wrote:
> Fear of GC is overblown. Fear of D's GC isn't.

Yes, as there are a few high-performance trading systems done with JVM/.NET languages.

--
Paulo
Mar 22 2014
On Sat, Mar 22, 2014 at 12:30 AM, Paulo Pinto <pjmlp progtools.org> wrote:
> Yes, as there are a few high-performance trading systems done with
> JVM/.NET languages.

What about AAA games? :) Even though I agree with the pro-GC arguments you put forth, I really have a hard time imagining something like Battlefield 4 written in Java and running on the JVM (though I would love to be proven wrong).
Mar 22 2014
On 22.03.2014 at 09:42, Ziad Hatahet wrote:
> What about AAA games? :) Even though I agree with the pro-GC
> arguments you put forth, I really have a hard time imagining
> something like Battlefield 4 written in Java and running on the JVM
> (though I would love to be proven wrong).

What about ballistic missile tracking radar control systems?

http://www.bloomberg.com/apps/news?pid=newsarchive&sid=aHLFBJZrqaoM

You don't want those babies out of control.

--
Paulo
Mar 22 2014
On Saturday, 22 March 2014 at 08:54:46 UTC, Paulo Pinto wrote:
> What about ballistic missile tracking radar control systems?
> http://www.bloomberg.com/apps/news?pid=newsarchive&sid=aHLFBJZrqaoM
> You don't want those babies out of control. -- Paulo

Ah, Thales... I hope they are better at writing software for missiles than they are at tram management systems.
Mar 22 2014
On Sat, 2014-03-22 at 00:39 +0000, bearophile wrote:
> TJB:
>> Why a tough sell? Please explain.
> That code must always be hard-real time. So a GC is allowed only during startup time (unless it's a quite special GC), hidden heap allocations are forbidden, data access patterns need to be carefully chosen, you even have to use most of the hot part of the stack, etc.

Not entirely the case. Yes, the ultra-high-frequency traders tend to be C++ and hard real time with no operating system scheduler active. However, there are a number of high frequency trading systems using the JVM with the G1 garbage collector, since actually only soft real time is needed for the trading they are doing.

It is also worth pointing out the LMAX Disruptor, which is a lock-free ring-buffer-based framework used to create dealing platforms on the JVM. They still outperform any other trading platform.

--
Russel.
=============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder
Mar 22 2014
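For readers unfamiliar with the Disruptor's core idea, a single-producer/single-consumer lock-free ring buffer can be sketched in a few lines of C++. This is a simplified illustration of the general technique, not LMAX's actual implementation (the real Disruptor adds pre-allocated event slots, cache-line padding, and sequence barriers for multiple consumers):

```cpp
#include <atomic>
#include <cassert>
#include <cstddef>

// Simplified single-producer/single-consumer ring buffer in the spirit of
// the LMAX Disruptor: fixed capacity, no locks, no allocation on the hot path.
template <typename T, std::size_t N>   // N must be a power of two
class SpscRing {
    static_assert((N & (N - 1)) == 0, "N must be a power of two");
    T buf_[N];
    std::atomic<std::size_t> head_{0};  // next slot to write (producer-owned)
    std::atomic<std::size_t> tail_{0};  // next slot to read (consumer-owned)
public:
    bool push(const T& v) {             // called only by the single producer
        std::size_t h = head_.load(std::memory_order_relaxed);
        if (h - tail_.load(std::memory_order_acquire) == N) return false;  // full
        buf_[h & (N - 1)] = v;
        head_.store(h + 1, std::memory_order_release);
        return true;
    }
    bool pop(T& out) {                  // called only by the single consumer
        std::size_t t = tail_.load(std::memory_order_relaxed);
        if (head_.load(std::memory_order_acquire) == t) return false;      // empty
        out = buf_[t & (N - 1)];
        tail_.store(t + 1, std::memory_order_release);
        return true;
    }
};
```

The point relevant to this thread is that nothing on the hot path touches a lock or the allocator, which is why the pattern works in both GC and non-GC languages.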
On Saturday, 22 March 2014 at 11:46:43 UTC, Russel Winder wrote:
> It is also worth pointing out the LMAX Disruptor which is a lock-free ring buffer based framework used to create dealing platforms on the JVM. They outperform any other trading platform still.

That is wrong. Trading is competitive, and your competitors will rarely inform you they are losing to you or beating you. And when you do not get the fills you want, it is not always clear you are being beaten.

This article (http://mechanical-sympathy.blogspot.com/2011/08/inter-thread-latency.html) suggests the C++ version of the same architecture is faster than the Java one.

Thanks
Dan
Mar 22 2014
On 22.03.2014 13:38, Daniel Davidson wrote:
> That is wrong. Trading is competitive, and your competitors will rarely inform you they are losing to you or beating you. And when you do not get the fills you want, it is not always clear you are being beaten.
> This article (http://mechanical-sympathy.blogspot.com/2011/08/inter-thread-latency.html) suggests the C++ version of the same architecture is faster than the Java one.

Maybe this is a wrong conclusion, but from the comments I would say the Java version is pretty much on par with the C++ attempts.

--
Paulo
Mar 22 2014
On Saturday, 22 March 2014 at 12:54:11 UTC, Paulo Pinto wrote:
> Maybe this is a wrong conclusion, but from the comments I would say the Java version is pretty much on par with the C++ attempts.

I guess it depends where your edge comes from. If it is speed, the following suggests it is worth it to go C++. 10% is huge, and the reduction in variability is also huge.

[[ So what does this all mean for the Disruptor? Basically, the latency of the Disruptor is about as low as we can get from Java. It would be possible to get a ~10% latency improvement by moving to C++. I'd expect a similar improvement in throughput for C++. The main win with C++ would be the control, and therefore, the predictability that comes with it if used correctly. The JVM gives us nice safety features like garbage collection in complex applications but we pay a little for that with the extra instructions it inserts that can be seen if you get Hotspot to dump the assembler instructions it is generating. ]]

If your edge is quickly trying out new algorithms and identifying edge by trying many less latency-sensitive strategies in the market, then "pretty much on par" is a fine bar.

Thanks,
Dan
Mar 22 2014
On 22.03.2014 14:21, Daniel Davidson wrote:
> I guess it depends where your edge comes from. If it is speed, the following suggests it is worth it to go C++. 10% is huge, and the reduction in variability is also huge.
> [...]
> If your edge is quickly trying out new algorithms and identifying edge by trying many less latency-sensitive strategies in the market, then "pretty much on par" is a fine bar.

I read a bit more into it. As the post is from 2011, there are some comments about bringing the performance findings into the following Disruptor version, and a discussion about the code generation quality of different Java JIT compilers.

So I would be curious how the 2014 version of the Disruptor with modern JVMs like HotSpot, JRockit, and Azul fares against the C++ version built with modern optimizing C++ compilers like Intel C++ and Visual C++.

Going a bit off topic: although I do like C++, the day job is in JVM/.NET land, where we sometimes replace C++ systems with the latter, but I don't possess any trading experience besides what I get to read in blogs and articles.

Assuming those 10% still happen if the test were done today as suggested, how much are trading companies willing to pay developers to achieve those 10% in C++, versus having a system that, although 10% slower, is still fast enough for operations while saving on salaries with cheaper developers?

Asking because in our enterprise consulting I see most companies willing to sacrifice performance for salaries, with direct impact on the technology stack being used.

--
Paulo
Mar 22 2014
On Saturday, 22 March 2014 at 13:47:31 UTC, Paulo Pinto wrote:
> Assuming those 10% still happen if the test were done today as suggested, how much are trading companies willing to pay developers to achieve those 10% in C++, versus having a system that, although 10% slower, is still fast enough for operations while saving on salaries with cheaper developers?

Labor is a market like any other. It depends on supply and demand. The demand is obviously high.

http://dealbook.nytimes.com/2010/12/10/ex-goldman-programmer-is-convicted/?_php=true&_type=blogs&_r=0

Performance engineers who can eke out that 10% on existing systems do very well. The engineers who can build such systems entirely do much better.

Thanks
Dan
Mar 22 2014
On Sat, 2014-03-22 at 14:17 +0000, Daniel Davidson wrote:
[…]
> Performance engineers who can eke out that 10% on existing systems do very well. The engineers who can build such systems entirely do much better.

Good C++ programmers appear to be able to get $350k to $400k in NY. Of course, the effect of a good programmer can be a profit increase measured in millions.

I guess D should be able to do things just as fast as C++, at least using LDC or GDC. My little informal microbenchmarks indicate that this is the case, but for now this is anecdotal evidence, not statistically significant. Sadly, for a while D programmers won't be able to achieve the same remuneration as the top C++ programmers, precisely because there is little demand/supply pressure.

--
Russel.
Mar 22 2014
On 3/22/2014 7:29 AM, Russel Winder wrote:
> I guess D should be able to do things just as fast as C++, at least using LDC or GDC. My little informal microbenchmarks indicate that this is the case, but for now this is anecdotal evidence, not statistically significant.

Having built C++ and D compilers, I know this to be true for a fact. It's what I'll be talking about at DConf.
Mar 22 2014
On Saturday, 22 March 2014 at 14:30:00 UTC, Russel Winder wrote:
> Good C++ programmers appear to be able to get $350k to $400k in NY. Of course, the effect of a good programmer can be a profit increase measured in millions.
> I guess D should be able to do things just as fast as C++, at least using LDC or GDC.

HFT is very latency sensitive. D's stop-the-world GC is a no-go. D needs a better GC to be viable in these markets.
Mar 22 2014
The work Don's company does has very similar requirements to HFT. His talks here are totally relevant to the use of D in this area.
Mar 22 2014
On Saturday, 22 March 2014 at 23:27:18 UTC, Sean Kelly wrote:
> The work Don's company does has very similar requirements to HFT. His talks here are totally relevant to the use of D in this area.

Yes, I'm very much looking forward to that talk as well. His talk last year killed!
Mar 22 2014
On Saturday, 22 March 2014 at 23:27:18 UTC, Sean Kelly wrote:
> The work Don's company does has very similar requirements to HFT. His talks here are totally relevant to the use of D in this area.

They did significant work to customize the GC in order to meet the requirements.
Mar 22 2014
On Sat, 2014-03-22 at 21:13 +0000, deadalnix wrote:
[…]
> HFT is very latency sensitive. D's stop-the-world GC is a no-go. D needs a better GC to be viable in these markets.

GC technology was well beyond "stop the world" in Common Lisp in the 1990s. Java learnt this lesson in the 2000s. IBM, Azul, and now OpenJDK have an array of very sophisticated, and indeed some real-time, garbage collectors. Clearly though, the Lisp and Java worlds are very different from the C++, D and Go worlds, thus sophisticated algorithms cannot simply be transplanted.

As I understand it, Go has a parallel mark-and-sweep algorithm which is relatively unsophisticated, and a bit "stop the world", but they claim it is fine for small heaps: the claim is that generational and compacting collectors (as per Java) are only beneficial for large heaps (but I haven't seen much data to back this up for Go). There is rumour of a change, but this has been circulating for ages.

I guess Herb Sutter's (reported) view that he can write C++ code with no new, and so no need for garbage collection, if applied to D would mean that although the collector "stopped the world", the heap would need no work and be small. But for real time you would just have to remove the GC completely to have the needed guarantees.

--
Russel.
Mar 23 2014
On 23.03.2014 08:13, Russel Winder wrote:
> GC technology was well beyond "stop the world" in Common Lisp in the 1990s. Java learnt this lesson in the 2000s. IBM, Azul, and now OpenJDK have an array of very sophisticated, and indeed some real-time, garbage collectors.
> [...]
> I guess Herb Sutter's (reported) view that he can write C++ code with no new, and so no need for garbage collection, if applied to D would mean that although the collector "stopped the world", the heap would need no work and be small.

Well, there is a nice quote from Bjarne as well: "C++ is my favourite garbage collected language because it generates so little garbage."

And yet C++11 got an optional GC API defined, so even the ANSI/ISO folks recognize its value in C++. Visual C++ already supports it:

http://msdn.microsoft.com/en-us/library/vstudio/hh567368%28v=vs.120%29.aspx

--
Paulo
Mar 23 2014
On 3/23/2014 12:13 AM, Russel Winder wrote:
> But for real time you would just have to remove the GC completely to have the needed guarantees.

malloc/free cannot be used in hard real time systems, either. malloc/free do not have latency guarantees.
Mar 23 2014
On Sunday, 23 March 2014 at 17:35:37 UTC, Walter Bright wrote:
> malloc/free cannot be used in hard real time systems, either. malloc/free do not have latency guarantees.

While that is true, you can have a soft real time thread feeding the hard real time thread with new configurations and the associated preallocated buffers using a lock-free queue.
Mar 23 2014
On 3/23/2014 11:29 AM, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang gmail.com> wrote:
> While that is true, you can have a soft real time thread feeding the hard real time thread with new configurations and the associated preallocated buffers using a lock-free queue.

Yes, and you can do that with GC, too.
Mar 23 2014
On Sunday, 23 March 2014 at 18:58:20 UTC, Walter Bright wrote:
>> While that is true, you can have a soft real time thread feeding the hard real time thread
> Yes, and you can do that with GC, too.

You can, but the way I view "soft real time" is that you accept jitter, but not prolonged freezing of threads. I don't think the current GC is "soft real time".
Mar 24 2014
On Sun, 2014-03-23 at 10:35 -0700, Walter Bright wrote:
> On 3/23/2014 12:13 AM, Russel Winder wrote:
>> But for real time you would just have to remove the GC completely to have the needed guarantees.
> malloc/free cannot be used in hard real time systems, either. malloc/free do not have latency guarantees.

By estimating the resource needs and preallocating a freelist, you can get round this issue. C++ is quite good at supporting this sort of stuff.

--
Russel.
Mar 23 2014
On 3/23/2014 12:42 PM, Russel Winder wrote:
> By estimating the resource needs and preallocating a freelist, you can get round this issue. C++ is quite good at supporting this sort of stuff.

Yes, and you can do that in D, the same way.
Mar 23 2014
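Russel's suggestion, estimating resource needs up front and satisfying all hot-path allocation from a preallocated freelist, looks roughly like this in C++ (a minimal sketch; production allocators would add alignment control, object construction/destruction, and exhaustion instrumentation):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Minimal fixed-capacity freelist: all memory is acquired once at startup,
// so acquire()/release() have constant, allocation-free latency thereafter.
template <typename T>
class FreeList {
    std::vector<T> storage_;   // the single up-front allocation, done at startup
    std::vector<T*> free_;     // stack of currently available slots
public:
    explicit FreeList(std::size_t capacity) : storage_(capacity) {
        free_.reserve(capacity);
        for (auto& slot : storage_) free_.push_back(&slot);
    }
    T* acquire() {             // O(1), never touches malloc
        if (free_.empty()) return nullptr;  // budget exhausted; caller decides
        T* p = free_.back();
        free_.pop_back();
        return p;
    }
    void release(T* p) {       // O(1), returns the slot to the pool
        free_.push_back(p);
    }
};
```

Returning nullptr on exhaustion (rather than falling back to malloc) is the property that makes latency predictable: if the startup estimate was right, the slow path is never taken.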
On 3/22/2014 9:47 AM, Paulo Pinto wrote:
> Assuming those 10% still happen if the test were done today as suggested, how much are trading companies willing to pay developers to achieve those 10% in C++, versus having a system that, although 10% slower, is still fast enough for operations while saving on salaries with cheaper developers?

If the domain is high-performance, high-volume, (hard|soft) realtime, then I doubt very much you can get away with significantly cheaper developers, even if it is Java. Unless it just happens to be a sub-par company with a questionable future.
Mar 23 2014
On 23.03.2014 22:04, Nick Sabalausky wrote:
> If the domain is high-performance, high-volume, (hard|soft) realtime, then I doubt very much you can get away with significantly cheaper developers, even if it is Java. Unless it just happens to be a sub-par company with a questionable future.

My question was precisely because I tend to see that a lot in general, as in my area saving project costs seems to be valued more than quality.

--
Paulo
Mar 23 2014
On Sat, 2014-03-22 at 12:38 +0000, Daniel Davidson wrote:
> That is wrong. Trading is competitive, and your competitors will rarely inform you they are losing to you or beating you. And when you do not get the fills you want, it is not always clear you are being beaten.
> This article (http://mechanical-sympathy.blogspot.com/2011/08/inter-thread-latency.html) suggests the C++ version of the same architecture is faster than the Java one.

If the data isn't available, we'll never really know. It is now over 9 months since the people I know left LMAX for other organizations, so I have no current data, or even rumour.

It would be interesting to rerun the comparison with JDK8 and C++14. I suspect C++ may well be the winner on latency; whether by 10% I am not sure.

--
Russel.
Mar 22 2014
On Saturday, 22 March 2014 at 00:34:22 UTC, TJB wrote:
> On Saturday, 22 March 2014 at 00:14:11 UTC, Daniel Davidson wrote:
>> Maybe a good starting point would be to port some of QuantLib and see how the performance compares. In High Frequency Trading I think D would be a tough sell, unfortunately.
> Dan,
> Why a tough sell? Please explain.
> TJB

Well, I am a fan of D. That said, there is a ton of money that goes into the HFT low-latency arms race. I felt they were always willing to spend whatever it takes, but they had to make choices.

First, I know that for several firms there was a history of the C++ vs Java back and forth, until it was clear that the lowest latency systems were C/C++. The difference was attributed to many causes, but garbage collection always came up as a big one. I think just that association of C++ beating Java because of garbage collection will be enough to prevent many HFT shops from considering D for a long while, even if you could get rid of the GC entirely.

Additionally, many of the advantages firms get come from dealing intelligently with the kernel and lowest-level networking techniques, which implies expertise in C, and that often comes with a well-deserved pride in knowing how to make things run fast in specific ways. Further, they are pushing even lower into custom hardware, which is expensive. With the money to throw at it and the various alternatives to choose from, there is a go-big-or-go-home mentality. They are not as worried about how fun it is to code in, how ugly your templates are, or how it might take longer to do or build something.

I agree that if you could get the finance industry interested in D, that would be great. These are just my opinions, and I left HFT a couple of years back now. But in this field, to expect a firm to get behind something like D you would need demonstrable improvements in performance, irrespective of cost to develop. If they can get equivalent or better performance in C++ with more time to develop, then C++ still wins. Now, if D can actually beat C++ significantly in quant programming performance, things might change. Do you have ideas on how that could be done?

If you consider automated, black-box trading, most firms roll their own options infrastructure, and these days QuantLib would not make the cut. But if you could demonstrate the same functionality with improved performance by using D, you could make a case.

Thanks
Dan
Mar 21 2014
On Sat, 2014-03-22 at 00:14 +0000, Daniel Davidson wrote:
[…]
> Maybe a good starting point would be to port some of QuantLib and see how the performance compares. In High Frequency Trading I think D would be a tough sell, unfortunately.

I would certainly agree that (at least initially) pitching D against Excel/Python/R/Julia/Mathematica is an easier fight. The question is how to convince someone to take the first step.

I suspect a rewrite of QuantLib in D is a bad idea; much better to create an adapter and offer it to the QuantLib folks. The ones they have already tend to be created using SWIG. JQuantLib is an attempt to rewrite QuantLib in pure Java, but I do not know if it is gaining any traction over the Java adapter to QuantLib.

The angle here to get D traction would be to have the data visualization capability: the reason for the success of SciPy, R, and Julia has been very fast turnaround of changes to the models and the rendering of the results of the computations.

Certainly in bioinformatics, and I guess in finance, there is a lot of use of hardware floating point numbers, but also of arbitrary-size integers, not just hardware integers. If your language cannot correctly calculate factorial(40) then there is no hope in these domains. This is why Python, R, and Julia get traction: they manage the use of hardware and software integer representations so that the programmer doesn't have to care, it all just works. This is clearly not true of C++ :-)

--
Russel.
Mar 22 2014
On Saturday, 22 March 2014 at 12:06:37 UTC, Russel Winder wrote:
> I suspect a rewrite of QuantLib in D is a bad idea; much better to create an adapter and offer it to the QuantLib folks. The ones they have already tend to be created using SWIG. JQuantLib is an attempt to rewrite QuantLib in pure Java, but I do not know if it is gaining any traction over the Java adapter to QuantLib.

I guess it depends on the goal. The OP was interested in replacing C++ with D for quant work. If the goal is to use QuantLib functionality in D then you are correct: wrappers are the way to go. But if you want to push D into the quant side of things and show off its benefits, there are not many bragging rights in having a great wrapper over C++. I think the value of the exercise of moving some of QuantLib to D would be the education about the benefits and drawbacks of that move, and the hope that it would be representative of the D vs C++ tradeoffs for quant programming in general.

> The angle here to get D traction would be to have the data visualization capability: the reason for the success of SciPy, R, Julia has been very fast turnaround of changes to the models and the rendering of the results of the computations.

Data storage for high volume would also be nice. A D implementation of HDF5, via wrappers or otherwise, would be a very useful project. Imagine how much more friendly the API could be in D. Python's tables library makes it very simple. You have to choose a language not only to process and visualize data, but to store and access it as well.

Thanks
Dan
Mar 22 2014
On Saturday, 22 March 2014 at 13:10:46 UTC, Daniel Davidson wrote:
> Data storage for high volume would also be nice. A D implementation of HDF5, via wrappers or otherwise, would be a very useful project. Imagine how much more friendly the API could be in D. Python's tables library makes it very simple.

Well, I for one would be hugely interested in such a thing. A nice D API to HDF5 would be a dream for my data problems. Did you use HDF5 in your finance industry days then? Just curious.

TJB
Mar 22 2014
On Saturday, 22 March 2014 at 14:33:02 UTC, TJB wrote:
> Well, I for one would be hugely interested in such a thing. A nice D API to HDF5 would be a dream for my data problems. Did you use HDF5 in your finance industry days then? Just curious.

A bit. You can check out some of my C++ code generation support for HDF5 here: https://github.com/patefacio/codegen/tree/master/cpp/fcs/h5

A description of the code generation infrastructure is available here: https://github.com/patefacio/codegen/blob/master/doc/codegen.pdf

And some Python usage of tables to parse/store data for HFT analysis of the potential benefits of periodic auctions as opposed to continuous markets: https://github.com/patefacio/auction

Using HDF5 with tables and simple usage in C++ is very powerful. I did not do a lot with it, just simple tables for writing/reading from C++. The API is huge and very old school, even with the C++ wrappers. IMHO an awesome project would be a rewrite in D that abandoned the API but provided file compatibility with cleaner access. D would be great for that.
Mar 22 2014
On Saturday, 22 March 2014 at 14:33:02 UTC, TJB wrote:
> On Saturday, 22 March 2014 at 13:10:46 UTC, Daniel Davidson wrote:
>> Data storage for high volume would also be nice. A D implementation of HDF5, via wrappers or otherwise, would be a very useful project.
> Well, I for one would be hugely interested in such a thing. A nice D API to HDF5 would be a dream for my data problems.

Well, for HDF5 the bindings are here now (pre-alpha, but they will get there soon enough) and wrappers are coming along also. Any thoughts/suggestions/help appreciated. GitHub here: https://github.com/Laeeth/d_hdf5

I wonder how much work it would be to port or implement Pandas-type functionality in a D library.
Dec 22 2014
On Monday, 22 December 2014 at 08:35:59 UTC, Laeeth Isharc wrote:
> Well, for HDF5 the bindings are here now (pre-alpha, but they will get there soon enough) and wrappers are coming along also. Any thoughts/suggestions/help appreciated. GitHub here: https://github.com/Laeeth/d_hdf5
> I wonder how much work it would be to port or implement Pandas-type functionality in a D library.

Laeeth,

As a matter of fact, I've been working on HDF5 bindings for D as well. I'm done with the binding/wrapping part so far (with automatic throwing of D exceptions whenever errors occur in the C library, and other niceties) and am hacking at the higher-level OOP API. I can publish it soon if anyone's interested :) Maybe we can join efforts and make it work (that, and standardizing a multi-dimensional array library in D).
Dec 22 2014
On Monday, 22 December 2014 at 11:59:11 UTC, aldanor wrote:
> As a matter of fact, I've been working on HDF5 bindings for D as well. I'm done with the binding/wrapping part so far (with automatic throwing of D exceptions whenever errors occur in the C library, and other niceties) and am hacking at the higher-level OOP API. I can publish it soon if anyone's interested :) Maybe we can join efforts and make it work (that, and standardizing a multi-dimensional array library in D).

Oh, well :) I would certainly be interested to see what you have, even if it is not finished yet. My focus was sadly on getting something working soon in a sprint, rather than building something excellent later, and I would think your work will be cleaner. In any case, I would very much be interested in exchanging ideas or working together, whether on HDF5, on multi-dimensional arrays, or on other projects relating to finance/quant/scientific computing and the like. So maybe you could send me a link when you are ready: either post here, or my email address is my first name at my first name.com.

Thanks.
Dec 22 2014
On Saturday, 22 March 2014 at 12:06:37 UTC, Russel Winder wrote:On Sat, 2014-03-22 at 00:14 +0000, Daniel Davidson wrote: […]In that case, a good start might be a D kernel for IPython/Jupyter. Seeing an interactive D REPL session inside a notebook should make a pretty convincing demo.Maybe a good starting point would be to port some of QuantLib and see how the performance compares. In High Frequency Trading I think D would be a tough sell, unfortunately.I would certainly agree that (at least initially) pitching D against the Excel/Python/R/Julia/Mathematica crowd is an easier fight. The question is how to convince someone to take the first step.
Dec 22 2014
On Tuesday, 23 December 2014 at 07:51:18 UTC, Oren Tirosh wrote:On Saturday, 22 March 2014 at 12:06:37 UTC, Russel Winder wrote:That's an interesting idea, how would you approach it though with a compiled non-functional language? Maybe in the same way the %%cython magic is done?On Sat, 2014-03-22 at 00:14 +0000, Daniel Davidson wrote: […]In that case, a good start might be a D kernel for IPython/Jupyter. Seeing an interactive D REPL session inside a notebook should make a pretty convincing demo.Maybe a good starting point would be to port some of QuantLib and see how the performance compares. In High Frequency Trading I think D would be a tough sell, unfortunately.I would certainly agree that (at least initially) pitching D against the Excel/Python/R/Julia/Mathematica is an easier fight. The question is how to convince someone to take the first step.
Dec 23 2014
On Tuesday, 23 December 2014 at 13:28:22 UTC, aldanor wrote:On Tuesday, 23 December 2014 at 07:51:18 UTC, Oren Tirosh wrote:I think the trick used by http://drepl.dawg.eu is to incrementally compile each command as a subclass of the previous one, link it as a shared object, load it and call a function.On Saturday, 22 March 2014 at 12:06:37 UTC, Russel Winder wrote:That's an interesting idea, how would you approach it though with a compiled non-functional language? Maybe in the same way the %%cython magic is done?On Sat, 2014-03-22 at 00:14 +0000, Daniel Davidson wrote: […]In that case, a good start might be a D kernel for IPython/Jupyter. Seeing an interactive D REPL session inside a notebook should make a pretty convincing demo.Maybe a good starting point would be to port some of QuantLib and see how the performance compares. In High Frequency Trading I think D would be a tough sell, unfortunately.I would certainly agree that (at least initially) pitching D against the Excel/Python/R/Julia/Mathematica is an easier fight. The question is how to convince someone to take the first step.
Dec 23 2014
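For illustration, the incremental-subclass trick described above could be sketched as follows. This is a hypothetical stand-in, not drepl's actual code: drepl compiles real D code into shared objects and loads them, whereas here the compile-and-link step is simulated with Python's `type()` and `exec` so the subclass-chaining idea is visible.

```python
# Each REPL command becomes a class that subclasses the previous session
# state, so earlier definitions remain visible via inheritance.
class Repl:
    def __init__(self):
        self.current = object  # the base "session" class

    def run(self, name, code):
        # "compile" the command (exec stands in for compiling + linking a
        # shared object) and chain it onto the previous session class
        ns = {}
        exec(code, {}, ns)
        self.current = type(name, (self.current,), ns)
        return self.current

repl = Repl()
repl.run("Cmd1", "x = 2")
cls = repl.run("Cmd2", "y = 3")
print(cls.x + cls.y)  # prints 5; Cmd2 sees Cmd1's definitions
```

Because each step only adds one subclass, earlier commands never need recompiling, which is what makes the approach viable for a compiled language.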
On Saturday, 22 March 2014 at 00:14:11 UTC, Daniel Davidson wrote:On Friday, 21 March 2014 at 21:14:15 UTC, TJB wrote:In case it wasn't obvious from the discussion that followed: finance is a broad field with many different kinds of creature within, and there are different kinds of problems faced by different participants. High Frequency Trading has peculiar requirements (relating to latency, amongst other things) that will not necessarily be representative of other areas. Even within this area there is a difference between the needs of a Citadel in its option marketmaking activity versus the activity of a pure delta HFT player (although they also overlap). A JP Morgan that needs to be able to price and calculate risk for large portfolios of convex instruments in its vanilla and exotic options books has different requirements, again. You would typically use Monte Carlo (or quasi MC) to price more complex products for which there is not a good analytical approximation. (Or to deal with the fact that volatility is not constant). So that fits very much with the needs of large banks - and perhaps some hedge funds - but I don't think a typical HFT guy would be all that interested to know about this. They are different domains. Quant/CTA funds also have decent computational requirements, but these are not necessarily high frequency. Winton Capital, for example, is one of the larger hedge funds in Europe by assets, but they have talked publicly about emphasizing longer-term horizons because even in liquid markets there simply is not the liquidity to turn over the volume they would need to to make an impact on their returns. In this case, whilst execution is always important, the research side of things is where the value gets created. And its not unusual to have quant funds where every portfolio manager also programs. (I will not mention names). One might think that rapid iteration here could have value. 
http://www.efinancialcareers.co.uk/jobs-UK-London-Senior_Data_Scientist_-_Quant_Hedge_Fund.id00654869 Fwiw having spoken to a few people the past few weeks, I am struck by how hollowed-out front office has become, both within banks and hedge funds. It's a nice business when things go well, but there is tremendous operating leverage, and if one builds up fixed costs then losing assets under management and having a poor period of performance (which is part of the game, not necessarily a sign of failure) can quickly mean that you cannot pay people (more than salaries) - which hurts morale and means you risk losing your best people. So people have responded by paring down quant/research support to producing roles, even when that makes no sense. (Programmers are not expensive). In that environment, D may offer attractive productivity without sacrificing performance.Walter, I see that you will be discussing "High Performance Code Using D" at the 2014 DConf. This will be a very welcomed topic for many of us. I am a Finance Professor. I currently teach and do research in computational finance. Might I suggest that you include some finance (say Monte Carlo options pricing) examples? If you can get the finance industry interested in D you might see a massive adoption of the language. Many are desperate for an alternative to C++ in that space. Just a thought. Best, TJBMaybe a good starting point would be to port some of QuantLib and see how the performance compares. In High Frequency Trading I think D would be a tough sell, unfortunately. Thanks Dan
Dec 22 2014
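For a concrete version of the Monte Carlo option pricing example TJB suggests, here is a minimal sketch in Python (stdlib only): a European call priced under risk-neutral geometric Brownian motion. All parameters are illustrative, and a production pricer would use variance reduction and a proper RNG stream.

```python
# Minimal Monte Carlo pricer for a European call under Black-Scholes
# dynamics (illustrative parameters; no variance reduction).
import math
import random

def mc_european_call(spot, strike, rate, vol, maturity, n_paths, seed=42):
    rng = random.Random(seed)
    drift = (rate - 0.5 * vol * vol) * maturity
    diffusion = vol * math.sqrt(maturity)
    payoff_sum = 0.0
    for _ in range(n_paths):
        # terminal price under risk-neutral GBM
        s_t = spot * math.exp(drift + diffusion * rng.gauss(0.0, 1.0))
        payoff_sum += max(s_t - strike, 0.0)
    # discount the average payoff back to today
    return math.exp(-rate * maturity) * payoff_sum / n_paths

price = mc_european_call(100.0, 100.0, 0.01, 0.20, 1.0, 200_000)
print(round(price, 2))  # close to the Black-Scholes value of about 8.43
```

The same loop translates almost line for line into D, which is part of the pitch: the inner loop allocates nothing, so the GC never enters the picture.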
On Monday, 22 December 2014 at 12:24:52 UTC, Laeeth Isharc wrote:In case it wasn't obvious from the discussion that followed: finance is a broad field with many different kinds of creature within, and there are different kinds of problems faced by different participants. High Frequency Trading has peculiar requirements (relating to latency, amongst other things) that will not necessarily be representative of other areas. Even within this area there is a difference between the needs of a Citadel in its option marketmaking activity versus the activity of a pure delta HFT player (although they also overlap). A JP Morgan that needs to be able to price and calculate risk for large portfolios of convex instruments in its vanilla and exotic options books has different requirements, again. You would typically use Monte Carlo (or quasi MC) to price more complex products for which there is not a good analytical approximation. (Or to deal with the fact that volatility is not constant). So that fits very much with the needs of large banks - and perhaps some hedge funds - but I don't think a typical HFT guy would be all that interested to know about this. They are different domains. Quant/CTA funds also have decent computational requirements, but these are not necessarily high frequency. Winton Capital, for example, is one of the larger hedge funds in Europe by assets, but they have talked publicly about emphasizing longer-term horizons because even in liquid markets there simply is not the liquidity to turn over the volume they would need to to make an impact on their returns. In this case, whilst execution is always important, the research side of things is where the value gets created. And its not unusual to have quant funds where every portfolio manager also programs. (I will not mention names). One might think that rapid iteration here could have value. 
http://www.efinancialcareers.co.uk/jobs-UK-London-Senior_Data_Scientist_-_Quant_Hedge_Fund.id00654869 Fwiw having spoken to a few people the past few weeks, I am struck by how hollowed-out front office has become, both within banks and hedge funds. It's a nice business when things go well, but there is tremendous operating leverage, and if one builds up fixed costs then losing assets under management and having a poor period of performance (which is part of the game, not necessarily a sign of failure) can quickly mean that you cannot pay people (more than salaries) - which hurts morale and means you risk losing your best people. So people have responded by paring down quant/research support to producing roles, even when that makes no sense. (Programmers are not expensive). In that environment, D may offer attractive productivity without sacrificing performance.I agree with most of these points. For some reason, people often relate quant finance / high frequency trading to one of two things: either ultra-low-latency execution or option pricing, which is just wrong. In all likelihood, the execution is performed on FPGA co-located grids, so that part is out of the question; and options trading is just one of so many things hedge funds do. What takes the most time and effort is the usual "data science" (which in many cases boils down to data munging), as in, managing huge amounts of raw structured/unstructured high-frequency data; extracting the valuable information and learning strategies; implementing fast/efficient backtesting frameworks, simulators etc. The need for "efficiency" here naturally comes from the fact that a typical task in the pipeline requires dozens/hundreds of GB of RAM and dozens of hours of runtime on a high-grade box (so no one would really care if that GC is going to stop the world for 0.05 seconds).
In this light, as I see it, D's main advantage is a high "runtime-efficiency / time-to-deploy" ratio (whereas one of the main disadvantages for practitioners would be the lack of standard tools for working with structured multidimensional data + linalg, something like numpy or pandas). Cheers.
Dec 22 2014
On Monday, 22 December 2014 at 13:37:55 UTC, aldanor wrote:For some reason, people often relate quant finance / high frequency trading with one of the two: either ultra-low-latency execution or option pricing, which is just wrong. In most likelihood, the execution is performed on FPGA co-located grids, so that part is out of question; and options trading is just one of so many things hedge funds do. What takes the most time and effort is the usual "data science" (which in many cases boil down to data munging), as in, managing huge amounts of raw structured/unstructured high-frequency data; extracting the valuable information and learning strategies;This description feels too broad. Assume that it is the "data munging" that takes the most time and effort. That usually involves transformations like (Data -> Numeric Data -> Mathematical Data Processing -> Mathematical Solutions/Calibrations -> Math consumers (trading systems low frequency/high frequency/in general)). The quantitative "data science" is about turning data into value using numbers. The better you are at first getting to an all-numbers world to start analyzing, the better off you will be. But once in the all-numbers world isn't it all about math, statistics, mathematical optimization, insight, iteration/mining, etc? Isn't that right now the world of R, NumPy, Matlab, etc and more recently now Julia? I don't see D attempting to tackle that at this point. If the bulk of the work for the "data sciences" piece is the maths, which I believe it is, then the attraction of D as a "data sciences" platform is muted. If the bulk of the work is preprocessing data to get to an all-numbers world, then in that space D might shine.implementing fast/efficient backtesting frameworks, simulators etc. 
The need for "efficiency" here naturally comes from the fact that a typical task in the pipeline requires dozens/hundreds GB of RAM and dozens of hours of runtime on a high-grade box (so noone would really care if that GC is going to stop the world for 0.05 seconds).What is a backtesting system in the context of Winton Capital? Is it primarily a mathematical backtesting system? If so it still may be better suited to platforms focusing on maths.
Dec 22 2014
On Monday, 22 December 2014 at 17:28:39 UTC, Daniel Davidson wrote: I don't see D attempting to tackle that at this point.If the bulk of the work for the "data sciences" piece is the maths, which I believe it is, then the attraction of D as a "data sciences" platform is muted. If the bulk of the work is preprocessing data to get to an all numbers world, then in that space D might shine.That is one of my points exactly -- the "bulk of the work", as you put it, is quite often the data processing/preprocessing pipeline (all the way from raw data parsing, aggregation, validation and storage to data retrieval, feature extraction, and then serialization, various persistency models, etc). One thing is fitting some model on a pandas dataframe on your lap in an ipython notebook, another thing is running the whole pipeline on massive datasets in production on a daily basis, which often involves very low-level technical stuff, whether you like it or not. Coming up with cool algorithms and doing fancy maths is fun and all, but it doesn't take nearly as much effort as integrating that same thing into an existing production system (or developing one from scratch). (and again, production != execution in this context) On Monday, 22 December 2014 at 17:28:39 UTC, Daniel Davidson wrote:What is a backtesting system in the context of Winton Capital? Is it primarily a mathematical backtesting system? If so it still may be better suited to platforms focusing on maths.Disclaimer: I don't work for Winton :) Backtesting in trading is usually a very CPU-intensive (and sometimes RAM-intensive) task that can be potentially re-run millions of times to fine-tune some parameters or explore some sensitivities. Another common task is reconciling with how the actual trading system works which is a very low-level task as well.
Dec 22 2014
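The "re-run millions of times to fine-tune some parameters" workload mentioned above can be sketched with a toy example (stdlib Python, all names and parameters illustrative): a moving-average crossover strategy backtested over a grid of window lengths on synthetic prices. Real systems differ in every detail, but the shape of the compute is the same: one cheap inner loop, repeated for every point in a large parameter grid.

```python
# Toy parameter sweep: backtest a moving-average crossover strategy for
# every (fast, slow) window pair and keep the best result.
import itertools
import math
import random

def make_prices(n=2000, seed=7):
    # synthetic geometric random walk standing in for market data
    rng = random.Random(seed)
    p, out = 100.0, []
    for _ in range(n):
        p *= math.exp(rng.gauss(0.0002, 0.01))
        out.append(p)
    return out

def backtest(prices, fast, slow):
    # long 1 unit whenever the fast moving average is above the slow one
    pnl = 0.0
    for t in range(slow, len(prices) - 1):
        ma_fast = sum(prices[t - fast:t]) / fast
        ma_slow = sum(prices[t - slow:t]) / slow
        if ma_fast > ma_slow:
            pnl += prices[t + 1] - prices[t]
    return pnl

prices = make_prices()
grid = list(itertools.product([5, 10, 20], [50, 100, 200]))
results = {(f, s): backtest(prices, f, s) for f, s in grid}
best = max(results, key=results.get)
print(best, round(results[best], 2))
```

Scale the grid from 9 points to millions and the case for a compiled language with cheap parallelism makes itself.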
On Monday, 22 December 2014 at 19:25:51 UTC, aldanor wrote:On Monday, 22 December 2014 at 17:28:39 UTC, Daniel Davidson wrote: I don't see D attempting to tackle that at this point.From what I have learned in Skills Matter presentations, that kind of work is done in Hadoop/Spark/Azure clusters, backed by big data databases. -- PauloIf the bulk of the work for the "data sciences" piece is the maths, which I believe it is, then the attraction of D as a "data sciences" platform is muted. If the bulk of the work is preprocessing data to get to an all numbers world, then in that space D might shine.That is one of my points exactly -- the "bulk of the work", as you put it, is quite often the data processing/preprocessing pipeline (all the way from raw data parsing, aggregation, validation and storage to data retrieval, feature extraction, and then serialization, various persistency models, etc). One thing is fitting some model on a pandas dataframe on your lap in an ipython notebook, another thing is running the whole pipeline on massive datasets in production on a daily basis, which often involves very low-level technical stuff, whether you like it or not. Coming up with cool algorithms and doing fancy maths is fun and all, but it doesn't take nearly as much effort as integrating that same thing into an existing production system (or developing one from scratch). (and again, production != execution in this context) On Monday, 22 December 2014 at 17:28:39 UTC, Daniel Davidson wrote:What is a backtesting system in the context of Winton Capital? Is it primarily a mathematical backtesting system? If so it still may be better suited to platforms focusing on maths.Disclaimer: I don't work for Winton :) Backtesting in trading is usually a very CPU-intensive (and sometimes RAM-intensive) task that can be potentially re-run millions of times to fine-tune some parameters or explore some sensitivities. 
Another common task is reconciling with how the actual trading system works which is a very low-level task as well.
Dec 22 2014
On Monday, 22 December 2014 at 19:25:51 UTC, aldanor wrote:On Monday, 22 December 2014 at 17:28:39 UTC, Daniel Davidson wrote: I don't see D attempting to tackle that at this point.I don't know about low frequency which is why I asked about Winton. Some of this is true in HFT but it is tough to break that pipeline that exists in C++. Take live trading vs backtesting: you require all that data processing before getting to the math of it to be as low latency as possible for live trading which is why you use C++ in the first place. To break into that pipeline with another language like D to add value, say for backtesting, is risky not just because the duplication of development cost but also the risk of live not matching backtesting. Maybe you have some ideas in mind where D would help that data processing pipeline, so some specifics might help?If the bulk of the work for the "data sciences" piece is the maths, which I believe it is, then the attraction of D as a "data sciences" platform is muted. If the bulk of the work is preprocessing data to get to an all numbers world, then in that space D might shine.That is one of my points exactly -- the "bulk of the work", as you put it, is quite often the data processing/preprocessing pipeline (all the way from raw data parsing, aggregation, validation and storage to data retrieval, feature extraction, and then serialization, various persistency models, etc).
Dec 22 2014
Hi. Sorry if this is a bit long, but perhaps it may be interesting to one or two. On Monday, 22 December 2014 at 22:00:36 UTC, Daniel Davidson wrote:On Monday, 22 December 2014 at 19:25:51 UTC, aldanor wrote:I have been working as a PM for quantish buy side places since 98, after starting in a quant trading role on sell side in 96, with my first research summer job in 93. Over time I have become less quant and more discretionary, so I am less in touch with the techniques the cool kids are using when it doesn't relate to what I do. But more generally there is a kind of silo mentality where in a big firm people in different groups don't know much about what the guy sitting at the next bank of desks might be doing, and even within groups the free flow of ideas might be a lot less than you might think. Against that, firms with a pure research orientation may be a touch different, which just goes to show that from the outside it may be difficult to make useful generalisations. A friend of mine who wrote certain parts of the networking stack in Linux is interviewing with HFT firms now, so I may have a better idea about whether D might be of interest. He has heard of D but suggests Java instead. (As a general option, not for HFT). Even smart people can fail to appreciate beauty ;) I think it's public that GS use a Python-like language internally, JPM do use Python for what you would expect, and so do AHL (one of the largest lower freq quant firms). More generally, in every field, but especially in finance, it seems like the data processing aspect is going to be key - not just a necessary evil. Yes, once you have it up and running you can tick it off, but it is going to be some years before you start to tick off items faster than they appear. Look at what Bridgewater are doing with gauging real time economic activity (and look at Google Flu prediction if one starts to get too giddy - it worked and then didn't). There is a spectrum of different qualities of data. 
What is most objective is not necessarily what is most interesting. Yet work on affect, media, and sentiment analysis is in its very early stages. One can do much better than just "affect bad, buy stocks once they stop going down"... Someone who asked me to help with something is close to Twitter, and I have heard the number of firms and rough breakdown by sector taking their full feed. It is shockingly small in the financial services field, and that's probably in part just that it takes people time to figure out something new. Ravenpack do interesting work from the point of view of a practitioner, and I heard a talk by their former technical architect, and he really seemed to know his stuff. Not sure what they use as a platform. I can't see why the choice of language will affect your back testing results (except that it is painful to write good algorithms in a klunky language and the risk of bugs is higher - but that isn't what you meant). Anyway, back to D and finance. I think this mental image people have of back testing as being the originating driver of research may be mistaken. It's funny but sometimes it seems the moment you take a scientist out of his lab and put him on a trading floor he wants to know if such and such beats transaction costs. But what you are trying to do is understand certain dynamics, and one needs to understand that markets are non-linear and have highly unstable parameters. So one must be careful about just jumping to a back test. (And then of course, questions of risk management and transaction costs really matter also). To a certain extent one must recognise that the asset management business has a funny nature. (This does not apply to many HFT firms that manage partners' money.) It doesn't take an army to make a lot of money with good people because of the intrinsic intellectual leverage of the business. But to do that one needs capital, and investors expect to see something tangible for the fees if you are managing size. 
Warren Buffett gets away with having a tiny organisation because he is Buffett, but that may be harder for a quant firm. So since intelligent enough people are cheap, and investors want you to hire people, it can be tempting to hire that army after all and set them to work on projects that certainly cover their costs but really may not be big determinants of variations in investment outcomes. I.e. one shouldn't mistake the number of projects for what is truly important. I agree that it is setting up and keeping everything in production running smoothly that creates a challenge. So it's not just a question of doing a few studies in R. And the more ways of looking at the world, the harder you have to think about how to combine them. Spreadsheets don't cut the mustard anymore - they haven't for years, yet it emerged even recently with the JPM whale that lack of integrity in the spreadsheet worsened communication problems between departments (risk especially). Maybe pypy and numpy will pick up all of the slack, but I am not so sure. In spreadsheet world (where one is a user, not a pro), one never finishes and says finally I am done building sheets. One question leads to another in the face of an unfolding and generative reality. It's the same with quant tools for trading. Perhaps that means there is value in tooling suited to rapid iteration and building of robust code that won't need to be totally rewritten from scratch later. At one very big US hf I worked with, the tools were initially written in Perl (some years back). They weren't pretty, but they worked, and were fast and robust enough. There were many new features I needed for my trading strategy. But the owner - who liked to read about ideas on the internet - came to the conclusion that Perl was not institutional quality and that we should therefore cease new development and rewrite everything in C++. Two years later a new guy took over the larger group, and one way or the other everyone left. 
I never got my new tools, and that certainly didn't help on the investment front. After he left a year after that, they scrapped the entire code base and bought Murex as nobody could understand what they had. If we had had D then, it's possible the outcome might have been different. So in any case, hard to generalise, and better to pick a few sympathetic people that see in D a possible solution to their pain, and use patterns will emerge organically out of that. I am happy to help where I can, and that is somewhat my own perspective - maybe D can help me solve my pain of tools not up to scratch because good investment tool design requires investment and technology skills to be combined in one person, whereas each of these two is rarely found on its own. (D makes a vast project closer to brave than foolhardy.) It would certainly be nice to have matrices, but I also don't think it would be right to say D is dead in the water here because it is so far behind. It also seems like the cost of writing such a library is v small vs possible benefit. One final thought. It's very hard to hire good young people. We had 1500 cvs for one job with very impressive backgrounds - French grande ecoles, and the like. But ask a chap how he would sort a list of books without a library, and the results were shocking. It seems like looking amongst D programmers is a nice heuristic, although perhaps the pool is too small for now. Not hiring now, but was thinking about it for the future.
To break into that pipeline with another language like D to add value, say for backtesting, is risky not just because the duplication of development cost but also the risk of live not matching backtesting. Maybe you have some ideas in mind where D would help that data processing pipeline, so some specifics might help?If the bulk of the work for the "data sciences" piece is the maths, which I believe it is, then the attraction of D as a "data sciences" platform is muted. If the bulk of the work is preprocessing data to get to an all numbers world, then in that space D might shine.That is one of my points exactly -- the "bulk of the work", as you put it, is quite often the data processing/preprocessing pipeline (all the way from raw data parsing, aggregation, validation and storage to data retrieval, feature extraction, and then serialization, various persistency models, etc).
Dec 22 2014
On Tuesday, 23 December 2014 at 03:07:10 UTC, Laeeth Isharc wrote:At one very big US hf I worked with, the tools were initially written in Perl (some years back). They weren't pretty, but they worked, and were fast and robust enough. I has many new features I needed for my trading strategy. But the owner - who liked to read about ideas on the internet - came to the conclusion that Perl was not institutional quality and that we should therefore cease new development and rewrite everything in C++. Two years later a new guy took over the larger group, and one way or the other everyone left. I never got my new tools, and that certainly didn't help on the investment front. After he left a year after that they scrapped the entire code base and bought Murex as nobody could understand what they had. If we had had D then, its possible the outcome might have been different.Interesting perspective on the FI group's use of Perl. Yes, that group was one of the reasons a whole new architecture committee was established to prevent the selection of IT tools (like Perl and especially Java) that the firm did not want used or supported. Imagine after that being prohibited from using Python. Having to beg to get to use it embedded from C++, and when finally granted permission, having to rewrite much of Boost.Python since Boost was not a sanctioned tool. Big companies make decisions differently than others. I believe D would not have been a help in that organization and requesting its use would have been the surest way to get a termination package. That said, in other organizations D might have been a good choice.So in any case, hard to generalise, and better to pick a few sympathetic people that see in D a possible solution to their pain, and use patterns will emerge organically out of that. 
I am happy to help where I can, and that is somewhat my own perspective - maybe D can help me solve my pain of tools not up to scratch because good investment tool design requires investment and technology skills to be combined in one person whereas each of these two are rare found on their own. (D makes a vast project closer to brave than foolhardy), It would certainly be nice to have matrices, but I also don't think it would be right to say D is dead in water here because it is so far behind. It also seems like the cost of writing such a library is v small vs possible benefit.I did not say D is dead in the water here. But when it comes to math platforms it helps to have lots of people behind the solution. For math, Julia seems to have that momentum now. Maybe you can foster that in D.
Dec 22 2014
On Tuesday, 23 December 2014 at 03:07:10 UTC, Laeeth Isharc wrote:It would certainly be nice to have matrices, but I also don't think it would be right to say D is dead in water here because it is so far behind. It also seems like the cost of writing such a library is v small vs possible benefit.I have a longer horizon than the HFT guys, but I still have quite a demand for high performance computing when backtesting a quantitative strategy. A backtest will typically involve 1) Put some data in a database 2) Apply statistical models to appropriate data 3) Create forecast distribution 4) Optimize portfolio given forecast 5) Repeat 2:4 in each period and calculate performance of strategy The biggest limiting factor to implementing it in D is a mature math/stats library (I understand SciD is a thing, but I have not tried it). Most optimization packages are written in C and could probably be called in D (haven't tried, but I imagine). There's a mysql module for D, though I think python has much better options here (I have been pretty impressed with blaze). Python's Pandas is also pretty helpful, but as it is built upon numpy, something equivalent would need to built upon a matrix library in D. I think it would also be helpful for bindings to Julia and C++ (so I can use MC Stan or Quantlib). I think the pyd project is pretty important. Might be good to write up an example using it for a finance application.
Jan 14 2015
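The five steps listed above can be wired together in a short sketch. This is a hypothetical, stdlib-only Python skeleton: the "database" is an in-memory matrix, the "statistical model" is a trailing mean, and the "optimizer" is an equal-weight rule, all deliberately trivial stand-ins for the real components.

```python
# Hypothetical walk-forward backtest skeleton for steps 1-5.
import random

random.seed(1)
n_assets, n_periods, window = 3, 250, 60

# step 1: "database" stand-in -- a matrix of daily returns
returns = [[random.gauss(0.0003, 0.01) for _ in range(n_assets)]
           for _ in range(n_periods)]

def forecast(history):
    # steps 2/3: trivial model -- forecast is the trailing mean return
    return [sum(r[i] for r in history) / len(history)
            for i in range(n_assets)]

def optimize(mu):
    # step 4: toy "optimizer" -- equal-weight the positive-forecast assets
    longs = [i for i, m in enumerate(mu) if m > 0]
    w = [0.0] * n_assets
    if not longs:
        return w
    for i in longs:
        w[i] = 1.0 / len(longs)
    return w

# step 5: repeat each period and accumulate strategy performance
pnl = 0.0
for t in range(window, n_periods):
    w = optimize(forecast(returns[t - window:t]))
    pnl += sum(wi * ri for wi, ri in zip(w, returns[t]))
print(round(pnl, 4))
```

In a D version, the forecast and optimizer calls are exactly where a mature math/stats library (or bindings to an existing C optimizer) would plug in.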
On Monday, 22 December 2014 at 13:37:55 UTC, aldanor wrote: ...In this light, as I see it, D's main advantage is a high "runtime-efficiency / time-to-deploy" ratio (whereas one of the main disadvantages for practitioners would be the lack of standard tools for working with structured multidimensional data + linalg, something like numpy or pandas). Cheers.There is no lack of tools if you can integrate well with existing ones like numpy, pandas, matplotlib, etc. I think a good role for D in such an ecosystem would be implementation of algorithms. D's excellent template system can be leveraged to help it play well with dynamically typed languages. A D module to be called from Python may be kept in source form that is compiled and specialized on demand according to argument types and the dtypes and dimensions of numpy array arguments. Specific specializations will be cached so from the second call it will not incur the 1-2 second overhead of compilation. If you only use the safe subset there should be no danger in dynamically compiling bits of code and loading them into the address space of your session.
Dec 23 2014
You are absolutely correct - the finance industry _wants_ to switch away from C++. I work in a fledgling HFT startup firm and we are actively pursuing D. We have tested it out in a live trading environment and the results are very promising. 1. We are measuring better latency numbers in D (as compared to the older C++ systems). This is a good enough reason to switch :) 2. After much testing, we concluded that fear-of-GC-collects is overblown. Avoid allocating in the main loops as far as possible. 3. Code is _much_ more maintainable and easier to understand. 4. It's fun to code again - and this point cannot be stressed enough. C++ is a major headache but earlier we had no choice. I'm quite confident that D is going to make good inroads into the financial industry in the coming years. Looking forward to Walter's talk in DConf and indeed all the talks in DConf. Wish I could attend - but the flight costs too much :(. Maybe next year. Saurabh On Friday, 21 March 2014 at 21:14:15 UTC, TJB wrote:Walter, I see that you will be discussing "High Performance Code Using D" at the 2014 DConf. This will be a very welcomed topic for many of us. I am a Finance Professor. I currently teach and do research in computational finance. Might I suggest that you include some finance (say Monte Carlo options pricing) examples? If you can get the finance industry interested in D you might see a massive adoption of the language. Many are desperate for an alternative to C++ in that space. Just a thought. Best, TJB
Mar 22 2014
On Saturday, 22 March 2014 at 12:35:50 UTC, Saurabh Das wrote:You are absolutely correct - the finance industry _wants_ to switch away from C++. I work in a fledgling HFT startup firm and we are actively pursuing D. We have tested it out in a live trading environment and the results are very promising.Well, the finance industry is pretty big and therefore diverse. By "the finance industry _wants_ to switch away from C++", I assume you mean on average or maybe those you work with. Glad to hear you are having promising results with D.1. We are measuring better latency numbers in D (As compared to the older C++ systems). This is good enough reason to switch :)You should share some of the numbers and analysis - maybe some techniques as well. Where do you suspect that edge comes from and would it be possible to duplicate the improvements by changing/improving the C++ - (not that you would want to)?2. After much testing, we concluded that fear-of-GC-collects is overblown. Avoid allocating in the main loops as far as possible.Really? Why is that... because it has no noticeable effect in your system?3. Code is _much_ more maintainable and easier to understand.I would bet so.4. It's fun to code again - and this point cannot be stressed enough. C++ is a major headache but earlier we had no choice.Fun is important - especially at an HFT startup. Just imagine the pain endured by new hires at firms with mounds of C++.I'm quite confident that D is going to make good inroads into the financial industry in the coming years.My bet is it will make inroads in finance but more on the infrastructure, web, general programming side and less on the HFT, low-latency side. Probably less on the Quant side as well without something compelling. Julia is compelling with its focus on math and clear benefits over current RAD alternatives numpy, R, Matlab, etc. I don't yet see where D adds distinction to that game - other than being a great language. We'll see. Thanks Dan
Mar 22 2014
Hi Dan, On Saturday, 22 March 2014 at 12:56:03 UTC, Daniel Davidson wrote:On Saturday, 22 March 2014 at 12:35:50 UTC, Saurabh Das wrote:Yes. The finance industry is very big and diverse. I am in particular referring to the parts of the industry that work heavily in C++ currently. Maybe my sample set is skewed because of the segment I work in - but there is a lot of enthusiasm that I have seen from those I am in contact with.You are absolutely correct - the finance industry _wants_ to switch away from C++. I work in a fledgling HFT startup firm and we are actively pursuing D. We have tested it out in a live trading environment and the results are very promising.Well, the finance industry is pretty big and therefore diverse. By "the finance industry _wants_ to switch away from C++", I assume you mean on average or maybe those you work with. Glad to hear you are having promising results with D.I don't think I would be allowed to share numbers :( I will check and get back on that. We have been debating about starting a blog where we can share techniques with the wider world. Unfortunately we are just so overloaded currently that getting down to doing this may take some time. The edge for D in our case comes from 3 factors - 1. A lot of statistical data from older C++ systems means better assumptions and decisions in the new D system; and 2. 20% of the system is latency-critical and 80% is not. D allows us to quickly finish 80% and really concentrate on the critical 20%. I must also comment upon how much more productive it is to write a new system in D as compared with C++ - gives us more time to think about the actual problem than try to jump through the C++ hoops. 3. A much better type system - some checks can be moved to compile time. Major benefit.1. We are measuring better latency numbers in D (As compared to the older C++ systems). This is good enough reason to switch :)You should share some of the numbers and analysis - maybe some techniques as well. 
Where do you suspect that edge comes from and would it be possible to duplicate the improvements by changing/improving the C++ - (not that you would want to)?Yes. I am commenting about our systems - we disable the GC during critical periods and then go back and collect collect collect later when things calm down. And as such there is minimal allocation in critical sections - we try to ensure all allocation is done beforehand. We followed a similar approach in C++ too since malloc/new is so slow.2. After much testing, we concluded that fear-of-GC-collects is overblown. Avoid allocating in the main loops as far as possible.Really? Why is that... because it has no noticeable effect in your system?Yes - R, Matlab et al. won't be replaced by D most likely. Let's wait and watch. However I disagree about the HFT/low-latency side. Of course there's no way to say for sure. Let's check again in a year :)3. Code is _much_ more maintainable and easier to understand.I would bet so.4. It's fun to code again - and this point cannot be stressed enough. C++ is a major headache but earlier we had no choice.Fun is important - especially at an HFT startup. Just imagine the pain endured by new hires at firms with mounds of C++.I'm quite confident that D is going to make good inroads into the financial industry in the coming years.My bet is it will make inroads in finance but more on the infrastructure, web, general programming side and less on the HFT, low-latency side. Probably less on the Quant side as well without something compelling. Julia is compelling with its focus on math and clear benefits over current RAD alternatives numpy, R, Matlab, etc. I don't yet see where D adds distinction to that game - other than being a great language. We'll see.Thanks Dan Saurabh
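The disable-then-collect pattern described above maps directly onto D's GC API. A minimal sketch (not the poster's actual system), assuming everything the hot path needs is preallocated:

```d
import core.memory : GC;
import std.stdio;

void main()
{
    auto buffer = new double[](1_000);   // allocate up front, outside the hot path

    GC.disable();                        // no collection pauses from here on
    foreach (i, ref x; buffer)           // critical section: no allocations
        x = i * 0.5;
    GC.enable();

    GC.collect();                        // pay the collection cost when things calm down
    writeln(buffer[$ - 1]);
}
```

Note that `GC.disable` only suppresses collections; an allocation inside the critical section can still hit the allocator, which is why preallocating beforehand matters just as much as disabling.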
Mar 22 2014
On Saturday, 22 March 2014 at 13:36:01 UTC, Saurabh Das wrote:The edge for D in our case comes from 3 factors - 1. A lot of statistical data from older C++ systems means better assumptions and decisions in the new D system; and But, clearly that is not necessarily a benefit of D. It is a benefit of prior experience and the learning curve. If you said, we use our data to not only make better assumptions/decisions, but to do things in D that can not be done in C++ - then you make a very strong case.2. 20% of the system is latency-critical and 80% is not. D allows us to quickly finish 80% and really concentrate on the critical 20%. I must also comment upon how much more productive it is to write a new system in D as compared with C++ - gives us more time to think about the actual problem than try to jump through the C++ hoops.Productivity is very important and can mean big $$ for most firms. But if latency is the critical factor in an all-or-nothing game, then it is much less so. Maybe your game is different and you have edge beyond low latency. I hope that is the case.3. A much better type system - some checks can be moved to compile time. Major benefit.What is a simple example of something that could be done with D but not C++ that has nothing to do with building things with less developer time? For example, I could see technical reasons why in certain non-quant areas like XML parsing where D can be faster than C++. (http://dotnot.org/blog/archives/2008/03/12/why-is-dtango-so-fast-at-parsing-xml/) But then, with a large amount of time and unlimited funding the techniques could probably be duplicated in C++. Again, I don't think it is necessary to have any/many cases where D beats C++ hands down in performance for its adoption to widen. But to push D to a wider audience by focusing on areas where the bar is already super high is tough. 
If I had money to invest in D I would invest it in vibe rather than quant because the relative advantages of D are so much higher.Yes - R, Matlab et al. won't be replaced by D most likely. Let's wait and watch. However I disagree about the HFT/low-latency side. Of course there's no way to say for sure. Let's check again in a year :) Sounds good - keep us posted! Thanks Dan
Mar 22 2014
On 3/22/2014 7:04 AM, Daniel Davidson wrote:But then, with a large amount of time and unlimited funding the techniques could probably be duplicated in C++.That's correct. It's also true of writing code in C or even assembler. Productivity matters because time-to-deploy matters. I.e. if you can deploy a better system in one month rather than two months, you've got an extra month of making money with it.
Mar 22 2014
On Saturday, 22 March 2014 at 14:04:01 UTC, Daniel Davidson wrote:On Saturday, 22 March 2014 at 13:36:01 UTC, Saurabh Das wrote:Yes - I didn't mean this as a point in favour of D, but just to put down the factors that made a difference. However in all microbenchmarks thus far, D has not done worse than C++.The edge for D in our case comes from 3 factors - 1. A lot of statistical data from older C++ systems means better assumptions and decisions in the new D system; and But, clearly that is not necessarily a benefit of D. It is a benefit of prior experience and the learning curve. If you said, we use our data to not only make better assumptions/decisions, but to do things in D that can not be done in C++ - then you make a very strong case.To clarify - for us latency is critical. The reason that productivity matters is that given resource constraints, I can spend much more time optimizing the 20% in D because the 80% can be written quickly.2. 20% of the system is latency-critical and 80% is not. D allows us to quickly finish 80% and really concentrate on the critical 20%. I must also comment upon how much more productive it is to write a new system in D as compared with C++ - gives us more time to think about the actual problem than try to jump through the C++ hoops.Productivity is very important and can mean big $$ for most firms. But if latency is the critical factor in an all-or-nothing game, then it is much less so. Maybe your game is different and you have edge beyond low latency. I hope that is the case.None of the type-system stuff is impossible to duplicate in C++ (given enough resources), but D makes it easy. I'll give you an example which is similar to XML parsing - consider FIX messages. Using templates judiciously, it is possible to write a blazing fast FIX message processing system in D where a lot of the processing is unrolled at compile time. Surely this is possible in C++, but it's going to be a hell of a task. 3. 
A much better type system - some checks can be moved to compile time. Major benefit.What is a simple example of something that could be done with D but not C++ that has nothing to do with building things with less developer time? For example, I could see technical reasons why in certain non-quant areas like XML parsing where D can be faster than C++. (http://dotnot.org/blog/archives/2008/03/12/why-is-dtango-so-fast-at-parsing-xml/) But then, with a large amount of time and unlimited funding the techniques could probably be duplicated in C++.Again, I don't think it is necessary to have any/many cases where D beats C++ hands down in performance for its adoption to widen. But to push D to a wider audience by focusing on areas where the bar is already super high is tough. If I had money to invest in D I would invest it in vibe rather than quant because the relative advantages of D are so much higher.Yes I agree with that.Yes - R, Matlab et al. won't be replaced by D most likely. Let's wait and watch. However I disagree about the HFT/low-latency side. Of course there's no way to say for sure. Let's check again in a year :) Sounds good - keep us posted! Thanks Dan
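The compile-time unrolling Saurabh alludes to for FIX messages can be illustrated in miniature with `static foreach`: the tag dispatch below is stamped out as a plain `switch` at compile time, with no runtime table to build. The tag set is made up for the example; a real FIX engine would of course cover far more tags and do actual field parsing.

```d
import std.stdio;
import std.typecons : tuple;

// The handful of FIX tag numbers this (hypothetical) system cares about.
enum tags = [tuple(55, "Symbol"), tuple(44, "Price"), tuple(38, "OrderQty")];

// static foreach unrolls the case labels at compile time, so dispatch
// compiles down to an ordinary integer switch.
string nameOfTag(int tag)
{
    switch (tag)
    {
        static foreach (t; tags)
        {
            case t[0]:
                return t[1];
        }
        default:
            return "Unknown";
    }
}

void main()
{
    writeln(nameOfTag(55));
    writeln(nameOfTag(38));
    writeln(nameOfTag(99));
}
```

The same trick extends to generating whole per-message-type parsing routines from a compile-time message specification, which is the "hell of a task" part in C++.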
Mar 22 2014
On Saturday, 22 March 2014 at 14:04:01 UTC, Daniel Davidson wrote:For example, I could see technical reasons why in certain non-quant areas like XML parsing where D can be faster than C++. (http://dotnot.org/blog/archives/2008/03/12/why-is-dtango-so-fast-at-parsing-xml/) But then, with a large amount of time and unlimited funding the techniques could probably be duplicated in C++.Try no funding and a trivial amount of time. The JSON parser I wrote for work in C performs zero allocations and unescaping is performed on demand. D arguably makes this easier by building slicing into the language, but not decoding or copying is a design decision, not a language artifact (at least in the case of C/C++ where aliasing data is allowed). The take-away from that Tango article is that the performance hit for parsing is aggressively decoding data the user may not care about or may not want decoded in the first place. This just happens to be the approach that basically every XML parser on the planet uses for some ridiculous reason.
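The slice-don't-copy design Sean describes is easy to show with D's built-in slicing. This is a deliberately naive sketch (it ignores escaped quotes inside values, for one); the point is only that the returned value aliases the input buffer and nothing is decoded until the caller asks.

```d
import std.stdio;
import std.string : indexOf;

// Return the raw, still-escaped bytes of a string field as a slice of the
// input. Nothing is copied or unescaped; that is left to the caller, who
// may never need it.
const(char)[] rawField(const(char)[] json, string key)
{
    auto pat = `"` ~ key ~ `":"`;   // the one allocation here; a real parser
                                    // would scan token-by-token instead
    auto i = json.indexOf(pat);
    if (i < 0) return null;
    auto start = i + pat.length;
    auto end = json[start .. $].indexOf('"');
    return end < 0 ? null : json[start .. start + end];
}

void main()
{
    auto doc = `{"name":"hello\nworld","id":"42"}`;
    writeln(rawField(doc, "name"));  // escape sequence left untouched
    writeln(rawField(doc, "id"));
}
```

Because the slice aliases the original buffer, an eager parser's decode-everything pass simply never happens; unescaping becomes a per-field, on-demand cost.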
Mar 23 2014
On 23.03.2014 18:38, Sean Kelly wrote:On Saturday, 22 March 2014 at 14:04:01 UTC, Daniel Davidson wrote:At least on Java world it is not quite true. If you use XML parsers that return a DOM or SAX, yes quite true. But as far as I can tell, XML streaming parsers (StAX) only parse on demand. Unless I am missing something. -- PauloFor example, I could see technical reasons why in certain non-quant areas like XML parsing where D can be faster than C++. (http://dotnot.org/blog/archives/2008/03/12/why-is-dtango-so-fast-at-parsing-xml/) But then, with a large amount of time and unlimited funding the techniques could probably be duplicated in C++.Try no funding and a trivial amount of time. The JSON parser I wrote for work in C performs zero allocations and unescaping is performed on demand. D arguably makes this easier by building slicing into the language, but not decoding or copying is a design decision, not a language artifact (at least in the case of C/C++ where aliasing data is allowed). The take-away from that Tango article is that the performance hit for parsing is aggressively decoding data the user may not care about or may not want decoded in the first place. This just happens to be the approach that basically every XML parser on the planet uses for some ridiculous reason.
Mar 23 2014
On Sun, 2014-03-23 at 19:15 +0100, Paulo Pinto wrote: […]At least on Java world it is not quite true. If you use XML parsers that return a DOM or SAX, yes quite true. But as far as I can tell, XML streaming parsers (StAX) only parse on demand. Unless I am missing something.This is exactly why Groovy has two distinct XML parsers. British Library consider a small XML document to be about 6GB, you don't use DOM for these ;-) -- Russel. ============================================================================= Dr Russel Winder t: +44 20 7585 2200 voip: sip:russel.winder ekiga.net 41 Buckmaster Road m: +44 7770 465 077 xmpp: russel winder.org.uk London SW11 1EN, UK w: www.russel.org.uk skype: russel_winder
Mar 23 2014
On Sunday, 23 March 2014 at 18:15:16 UTC, Paulo Pinto wrote:At least on Java world it is not quite true.And that's why I said a language like C/C++ that allows aliasing.If you use XML parsers that return a DOM or SAX, yes quite true. But as far as I can tell, XML streaming parsers (StAX) only parse on demand.It's been a while since I used it, but the Apache SAX parser (Xerces?) converts all string input to wchar_t before passing it to the callback. And since XML input is nearly always in UTF-8, this can mean a ton of transcoding.
Mar 23 2014
On Monday, 24 March 2014 at 05:41:38 UTC, Sean Kelly wrote:On Sunday, 23 March 2014 at 18:15:16 UTC, Paulo Pinto wrote:Ah Xerces! Last time I looked into it was around 2003. I doubt it has any of the optimizations of modern XML parsers, judging by how little the web site has changed since then. -- PauloAt least on Java world it is not quite true.And that's why I said a language like C/C++ that allows aliasing.If you use XML parsers that return a DOM or SAX, yes quite true. But as far as I can tell, XML streaming parsers (StAX) only parse on demand.It's been a while since I used it, but the Apache SAX parser (Xerces?) converts all string input to wchar_t before passing it to the callback. And since XML input is nearly always in UTF-8, this can mean a ton of transcoding.
Mar 24 2014
On 3/23/2014 10:38 AM, Sean Kelly wrote:Try no funding and a trivial amount of time. The JSON parser I wrote for work in C performs zero allocations and unescaping is performed on demand. D arguably makes this easier by building slicing into the language, but not decoding or copying is a design decision, not a language artifact (at least in the case of C/C++ where aliasing data is allowed). The take-away from that Tango article is that the performance hit for parsing is aggressively decoding data the user may not care about or may not want decoded in the first place. This just happens to be the approach that basically every XML parser on the planet uses for some ridiculous reason.Lazy evaluation FTW. Ranges and algorithms fit right in with that.
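Walter's "lazy evaluation FTW" point in miniature: every stage of a range pipeline is lazy, so only the elements the consumer actually pulls through are ever computed. A tiny self-contained illustration (not from the discussion itself):

```d
import std.algorithm : filter, map;
import std.range : iota, take;
import std.stdio;

void main()
{
    // Of the ~billion candidate numbers, only the handful needed to
    // satisfy take(3) are ever generated, filtered, or squared.
    auto firstSquaresOfEvens = iota(1, 1_000_000_000)
        .filter!(n => n % 2 == 0)
        .map!(n => n * n)
        .take(3);
    writeln(firstSquaresOfEvens);
}
```

This is the same decode-on-demand principle from the parsing discussion, expressed as a library idiom rather than a hand-rolled parser design.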
Mar 23 2014
On Sunday, 23 March 2014 at 17:38:17 UTC, Sean Kelly wrote:On Saturday, 22 March 2014 at 14:04:01 UTC, Daniel Davidson wrote:This isn't for ridiculous reasons, this is because in other languages you have no guarantee that what you work on is immutable. So you must aggressively copy anyway. With a separate decoding step, you'll end up copying twice, which is also wasteful.For example, I could see technical reasons why in certain non-quant areas like XML parsing where D can be faster than C++. (http://dotnot.org/blog/archives/2008/03/12/why-is-dtango-so-fast-at-parsing-xml/) But then, with a large amount of time and unlimited funding the techniques could probably be duplicated in C++.Try no funding and a trivial amount of time. The JSON parser I wrote for work in C performs zero allocations and unescaping is performed on demand. D arguably makes this easier by building slicing into the language, but not decoding or copying is a design decision, not a language artifact (at least in the case of C/C++ where aliasing data is allowed). The take-away from that Tango article is that the performance hit for parsing is aggressively decoding data the user may not care about or may not want decoded in the first place. This just happens to be the approach that basically every XML parser on the planet uses for some ridiculous reason.
Mar 24 2014
On 3/22/2014 5:56 AM, Daniel Davidson wrote:I don't yet see where D adds distinction to that game yet - other than being a great language.Isn't that the best kind of distinction?
Mar 22 2014
On 3/22/2014 5:35 AM, Saurabh Das wrote:I'm quite confident that D is going to make good inroads into the financial industry in the coming years. Looking forward to Walter's talk in DConf and indeed all the talks in DConf. Wish I could attend - but the flight costs too much :(. Maybe next year.C'mon, man, you gotta come. I want to hear more about the HFT stuff!
Mar 22 2014
On Friday, 21 March 2014 at 21:14:15 UTC, TJB wrote:Walter, I see that you will be discussing "High Performance Code Using D" at the 2014 DConf. This will be a very welcomed topic for many of us. I am a Finance Professor. I currently teach and do research in computational finance. Might I suggest that you include some finance (say Monte Carlo options pricing) examples? If you can get the finance industry interested in D you might see a massive adoption of the language. Many are desperate for an alternative to C++ in that space. Just a thought. Best, TJB1+ for finance talk
Jan 14 2015