
digitalmars.D.announce - D GUI Framework (responsive grid teaser)

reply Robert M. Münch <robert.muench saphirion.com> writes:
Hi, we are currently building up our new technology stack and for this 
are creating a 2D GUI framework.

https://www.dropbox.com/s/iu988snx2lqockb/Bildschirmaufnahme%202019-05-19%20um%2022.32.46.mov?dl=0


The screencast shows a responsive 40x40 grid. Layouting the grid takes 
about 230ms, drawing it about 10ms. The mouse clicks are handled via a 
reactive message stream and routed to all graphical objects that are 
hit using a spatial-index. The application code part is about 50 lines 
of code, the rest is handled by the framework.

With all this working now, we have all necessary building blocks 
working together.

Next steps are to create more widgets and add a visual style system. 
The widgets themselves are style-free and wire-frame only for debugging 
purposes.

-- 
Robert M. Münch
http://www.saphirion.com
smarter | better | faster
May 19 2019
next sibling parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Sunday, 19 May 2019 at 21:01:33 UTC, Robert M. Münch wrote:
 Hi, we are currently building up our new technology stack and for 
 this are creating a 2D GUI framework.

 https://www.dropbox.com/s/iu988snx2lqockb/Bildschirmaufnahme%202019-05-19%20um%2022.32.46.mov?dl=0


 The screencast shows a responsive 40x40 grid. Layouting the 
 grid takes about 230ms,
Interesting, is each cell a separate item then?

So assuming 3GHz cpu, we get 0.23*3e9/1600 = 431250 cycles per cell? That's a lot of work.

Are you using some kind of iterative physics based approach since you use hundreds of thousands of computations per cell?
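A back-of-envelope check of that figure, in D (assuming a 3 GHz clock and the 40x40 = 1600 cells from the screencast):

    import std.stdio;

    void main()
    {
        enum layoutSeconds = 0.23;   // reported layout time
        enum clockHz = 3.0e9;        // assumed 3 GHz CPU
        enum cells = 40 * 40;        // 1600 grid cells

        // 0.23 * 3e9 / 1600 = 431250 cycles per cell
        writefln("%.0f cycles per cell", layoutSeconds * clockHz / cells);
    }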
May 19 2019
next sibling parent Robert M. Münch <robert.muench saphirion.com> writes:
On 2019-05-19 21:21:55 +0000, Ola Fosheim Grøstad said:

 Interesting, is each cell a separate item then?
Yes, it's organized like this: root => grid => 1..X columns ==(each column)==> 1..Y cells
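In D, that containment hierarchy might look roughly like the following sketch (hypothetical type names, not the framework's actual API):

    // Hypothetical sketch of root => grid => columns => cells.
    class Node
    {
        Node[] children;
    }

    class Cell : Node {}

    class Column : Node
    {
        this(size_t cells)
        {
            foreach (i; 0 .. cells)
                children ~= new Cell;
        }
    }

    class Grid : Node
    {
        this(size_t columns, size_t cellsPerColumn)
        {
            foreach (i; 0 .. columns)
                children ~= new Column(cellsPerColumn);
        }
    }

    void main()
    {
        auto root = new Node;
        root.children ~= new Grid(40, 40);   // the 1600-cell test case
    }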
 So assuming 3GHz cpu, we get 0.23*3e9/1600 = 431250 cycles per cell? 
 That's a lot of work.
To be fair, I should add that this measurement covers more than layouting: it also includes managing a 2D spatial index for hit testing.
  Are you using some kind of iterative physics based approach since you 
 use hundreds of thousands of computations per cell?
It's like the browser's flex-box model.

-- 
Robert M. Münch
http://www.saphirion.com
smarter | better | faster
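For readers who don't know the flex-box model: a much-reduced, one-axis sketch in D of the flex-grow step (illustrative only; real flex-box also handles shrink, wrapping and margins):

    struct Item
    {
        float basis;   // preferred main-axis size
        float grow;    // share of leftover space
        float size;    // resolved size (output)
    }

    // Each item starts at its basis; leftover container space is
    // distributed in proportion to the items' grow factors.
    void flexGrow(Item[] items, float containerSize)
    {
        float used = 0, totalGrow = 0;
        foreach (ref it; items)
        {
            used += it.basis;
            totalGrow += it.grow;
        }

        const leftover = containerSize - used;
        foreach (ref it; items)
            it.size = it.basis +
                (totalGrow > 0 ? leftover * it.grow / totalGrow : 0);
    }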
May 20 2019
prev sibling parent reply Robert M. Münch <robert.muench saphirion.com> writes:
On 2019-05-19 21:21:55 +0000, Ola Fosheim Grøstad said:

 Interesting, is each cell a separate item then?
 
 So assuming 3GHz cpu, we get 0.23*3e9/1600 = 431250 cycles per cell?
 
 That's a lot of work.
Here is a new screencast: 
https://www.dropbox.com/s/ywywr7dp5v8rfoz/Bildschirmaufnahme%202019-05-21%20um%2015.20.59.mov?dl=0

I optimized the whole thing a bit, so now a complete screen with layouting, hittesting, drawing takes about 28ms, that's 8x faster than before. Drawing is still around 10ms, layouting around 16ms, spatial index handling 2ms.

So this gives us 36 FPS which is IMO pretty good for a desktop app target. There might be some 2-3ms speed-up still possible but not worth the effort yet.

-- 
Robert M. Münch
http://www.saphirion.com
smarter | better | faster
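The frame-budget arithmetic, for anyone checking along:

    import std.stdio;

    void main()
    {
        enum layoutMs = 16.0, drawMs = 10.0, spatialMs = 2.0;
        enum frameMs = layoutMs + drawMs + spatialMs;   // 28 ms per frame
        writefln("%.0f ms/frame => %.1f FPS", frameMs, 1000.0 / frameMs);
        // prints: 28 ms/frame => 35.7 FPS
    }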
May 21 2019
next sibling parent reply Basile B. <b2.temp gmx.com> writes:
On Tuesday, 21 May 2019 at 14:04:29 UTC, Robert M. Münch wrote:
 On 2019-05-19 21:21:55 +0000, Ola Fosheim Grøstad said:

 Interesting, is each cell a separate item then?
 
 So assuming 3GHz cpu, we get 0.23*3e9/1600 = 431250 cycles per 
 cell?
 
 That's a lot of work.
Here is a new screencast: https://www.dropbox.com/s/ywywr7dp5v8rfoz/Bildschirmaufnahme%202019-05-21%20um%2015.20.59.mov?dl=0 I optimized the whole thing a bit, so now a complete screen with layouting, hittesting, drawing takes about 28ms, that's 8x faster than before. Drawing is still around 10ms, layouting around 16ms, spatial index handling 2ms. So this gives us 36 FPS which is IMO pretty good for a desktop app target. There might be some 2-3ms speed-up still possible but not worth the effort yet.
OpenGL backend, I presume?
May 21 2019
parent Robert M. Münch <robert.muench saphirion.com> writes:
On 2019-05-21 15:57:20 +0000, Basile B. said:

 OpenGL backend, I presume?
No, CPU rendering to memory-buffer.

-- 
Robert M. Münch
http://www.saphirion.com
smarter | better | faster
May 21 2019
prev sibling next sibling parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Tuesday, 21 May 2019 at 14:04:29 UTC, Robert M. Münch wrote:
 Here is a new screencast: 
 https://www.dropbox.com/s/ywywr7dp5v8rfoz/Bildschirmaufnahme%202019-05-21%20um%2015.20.59.mov?dl=0
That looks better :-)
 So this gives us 36 FPS which is IMO pretty good for a desktop 
 app target. There might be some 2-3ms speed-up still possible 
 but not worth the effort yet.
That's true. High efficiency spatial datastructures are hard to refactor, so better to keep it simple in the beginning. Leave yourself room to experiment with different class hierarchies etc.

Just make sure that you pick an architecture that allows you to use spatial datastructures later on!

Another option is to use 2 passes (see the sketch below):

Pass 1: collects geometric information, could even use virtual function calls.

Pass 2: highly optimized algorithm for calculating layout plugin-style (meaning you can start with something simple and just replace it wholesale since it doesn't depend on the object hierarchies).

Then you can think more about usability and less about performance. Sure, there is a performance price, but flexibility is more important in the first iterations. KISS until the design is locked down FTW.
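A rough D sketch of that two-pass split (hypothetical types; the point is that pass 2 only ever sees a flat array, so it can be replaced wholesale):

    struct Box { float x, y, w, h; }

    class Widget
    {
        Widget[] children;
        Box preferred;   // geometric information gathered in pass 1
    }

    // Pass 1: walk the object hierarchy (virtual calls are fine here)
    // and flatten the geometry into a plain array.
    Box[] collectGeometry(Widget root)
    {
        Box[] flat;
        void walk(Widget w)
        {
            flat ~= w.preferred;
            foreach (c; w.children)
                walk(c);
        }
        walk(root);
        return flat;
    }

    // Pass 2: a plugin-style solver over the flat array; it never
    // touches the hierarchy, so a smarter one can replace it later.
    alias LayoutSolver = void function(Box[] flat);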
May 21 2019
parent reply Robert M. Münch <robert.muench saphirion.com> writes:
On 2019-05-21 17:29:51 +0000, Ola Fosheim Grøstad said:

 On Tuesday, 21 May 2019 at 14:04:29 UTC, Robert M. Münch wrote:
 Here is a new screencast: 
 https://www.dropbox.com/s/ywywr7dp5v8rfoz/Bildschirmaufnahme%202019-05-21%20um%2015.20.59.mov?dl=0
 
That looks better :-)
:-) For a pixel-perfect, fully responsive GUI I need to think about it a bit more. But that's not high priority at the moment.
 Just make sure that you pick an architecture that allows you to use 
 spatial datastructures later on!
The nice thing about the design is that the necessary parts are totally independent. For example, changing the spatial index to another approach needed just 5 lines of code change in the framework. The places where the different parts meet are kept to an absolute minimum and the interfaces are as thin as possible.

-- 
Robert M. Münch
http://www.saphirion.com
smarter | better | faster
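A swap that cheap suggests the index sits behind a very thin interface, something like this D sketch (assumed names, not the framework's real code):

    struct Rect { float x, y, w, h; }

    // The framework only talks to this interface, so the concrete
    // structure (uniform grid, quadtree, R-tree, ...) can change
    // without touching the rest of the code.
    interface SpatialIndex(T)
    {
        void insert(Rect bounds, T obj);
        void remove(T obj);
        T[] query(float x, float y);   // hit test: objects under a point
    }

A mouse click from the reactive stream then reduces to a single query(x, y), with the event routed to each object returned.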
May 21 2019
parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Tuesday, 21 May 2019 at 18:08:52 UTC, Robert M. Münch wrote:
 :-) For a pixel-perfect, fully responsive GUI I need to think 
 about it a bit more. But that's not high priority at the moment.
Right, there is no point in making that part too complicated early on because you may find that you want to enable user interface elements that require something completely different later on!
 The nice thing about the design is that the necessary parts 
 are totally independent. For example, changing the spatial index 
 to another approach needed just 5 lines of code change in the 
 framework. The places where the different parts meet are kept 
 to an absolute minimum and the interfaces are as thin as 
 possible.
That's what I like to hear. So now you can focus on usability both for programmers and end users without being too concerned with the geometric details.
May 21 2019
prev sibling parent reply kdevel <kdevel vogtner.de> writes:
On Tuesday, 21 May 2019 at 14:04:29 UTC, Robert M. Münch wrote:

[...]

 Here is a new screencast: 
 https://www.dropbox.com/s/ywywr7dp5v8rfoz/Bildschirmaufnahme%202019-05-21%20um%2015.20.59.mov?dl=0


 I optimized the whole thing a bit, so now a complete screen 
 with layouting, hittesting, drawing takes about 28ms, that's 8x 
 faster than before. Drawing is still around 10ms, layouting 
 around 16ms, spatial index handling 2ms.
Awesome. Compared to the video you posted some days ago there is also almost no visible aliasing. Do you plan to create a web browser based on your framework?
May 23 2019
parent Robert M. Münch <robert.muench saphirion.com> writes:
On 2019-05-23 09:28:59 +0000, kdevel said:

 Awesome. Compared to the video you posted some days ago there is also 
 almost no visible aliasing.
Thanks.
  Do you plan to create a web browser based on your framework?
No, I don't see any business model behind a web browser...

-- 
Robert M. Münch
http://www.saphirion.com
smarter | better | faster
May 23 2019
prev sibling next sibling parent Suliman <evermind live.ru> writes:
On Sunday, 19 May 2019 at 21:01:33 UTC, Robert M. Münch wrote:
 Hi, we are currently building up our new technology stack and for 
 this are creating a 2D GUI framework.

 https://www.dropbox.com/s/iu988snx2lqockb/Bildschirmaufnahme%202019-05-19%20um%2022.32.46.mov?dl=0


 The screencast shows a responsive 40x40 grid. Layouting the 
 grid takes about 230ms, drawing it about 10ms. The mouse clicks 
 are handled via a reactive message stream and routed to all 
 graphical objects that are hit using a spatial-index. The 
 application code part is about 50 lines of code, the rest is 
 handled by the framework.

 With all this working now, we have all necessary building 
 blocks working together.

 Next steps are to create more widgets and add a visual style 
 system. The widgets themselves are style-free and wire-frame only 
 for debugging purposes.
Thanks! Very interesting project!
May 19 2019
prev sibling next sibling parent reply Basile B. <b2.temp gmx.com> writes:
On Sunday, 19 May 2019 at 21:01:33 UTC, Robert M. Münch wrote:
 Hi, we are currently building up our new technology stack and for 
 this are creating a 2D GUI framework.

 https://www.dropbox.com/s/iu988snx2lqockb/Bildschirmaufnahme%202019-05-19%20um%2022.32.46.mov?dl=0


 The screencast shows a responsive 40x40 grid. Layouting the 
 grid takes about 230ms, drawing it about 10ms. The mouse clicks 
 are handled via a reactive message stream and routed to all 
 graphical objects that are hit using a spatial-index. The 
 application code part is about 50 lines of code, the rest is 
 handled by the framework.

 With all this working now, we have all necessary building 
 blocks working together.

 Next steps are to create more widgets and add a visual style 
 system. The widgets themselves are style-free and wire-frame only 
 for debugging purposes.
What kind of layouting? GTK-like? DelphiVCL-like? Flex-like?
May 21 2019
parent Robert M. Münch <robert.muench saphirion.com> writes:
On 2019-05-21 16:07:33 +0000, Basile B. said:

 What kind of layouting? GTK-like? DelphiVCL-like? Flex-like?
Flex-Box like.

-- 
Robert M. Münch
http://www.saphirion.com
smarter | better | faster
May 21 2019
prev sibling next sibling parent reply Manu <turkeyman gmail.com> writes:
On Sun, May 19, 2019 at 2:05 PM Robert M. Münch via
Digitalmars-d-announce <digitalmars-d-announce puremagic.com> wrote:
 Hi, we are currently building up our new technology stack and for this
 are creating a 2D GUI framework.

 https://www.dropbox.com/s/iu988snx2lqockb/Bildschirmaufnahme%202019-05-19%20um%2022.32.46.mov?dl=0


 The screencast shows a responsive 40x40 grid. Layouting the grid takes
 about 230ms, drawing it about 10ms.
O_o ... I feel like 230 *microseconds* feels about the right time, and ~100 microseconds for rendering.
 So this gives us 36 FPS which is IMO pretty good for a desktop app target
Umm, no. I would expect 240fps is the modern MINIMUM for a desktop app, you can easily make it that fast.

Incidentally, we have a multimedia library workgroup happening to build out flexible and as-un-opinionated-as-we-can gfx and gui libraries which may serve a wider number of users than most existing libraries, perhaps you should join that effort, and leverage the perf experts we have? There's a channel #graphics on the dlang discord.
May 21 2019
parent reply Robert M. Münch <robert.muench saphirion.com> writes:
On 2019-05-21 16:51:43 +0000, Manu said:

 The screencast shows a responsive 40x40 grid. Layouting the grid takes
 about 230ms, drawing it about 10ms.
O_o ... I feel like 230 *microseconds* feels about the right time, and ~100 microseconds for rendering.
I don't think that's fast enough :-)
 So this gives us 36 FPS which is IMO pretty good for a desktop app target
Umm, no. I would expect 240fps is the modern MINIMUM for a desktop app, you can easily make it that fast.
;-) Well, the key is to layout & render only changes. A responsive grid is an evil test-case as this requires a full cycle on every frame.
 Incidentally, we have a multimedia library workgroup happening to 
 build out flexible and as-un-opinionated-as-we-can gfx and gui 
 libraries which may serve a wider number of users than most existing 
 libraries,
Ah, ok. Sounds interesting...
 perhaps you should join that effort, and leverage the perf
 experts we have? There's a channel #graphics on the dlang discord.
I will have a look... need to get discord up & running. Too many chat channels these days...

-- 
Robert M. Münch
http://www.saphirion.com
smarter | better | faster
May 21 2019
next sibling parent rikki cattermole <rikki cattermole.co.nz> writes:
On 22/05/2019 7:51 AM, Robert M. Münch wrote:
 perhaps you should join that effort, and leverage the perf
 experts we have? There's a channel #graphics on the dlang discord.
I will have a look... need to get discord up & running. Too many chat channels these days...
Use the web client and come say hello in the mean time :)
May 21 2019
prev sibling parent reply Manu <turkeyman gmail.com> writes:
On Tue, May 21, 2019 at 12:55 PM Robert M. Münch via
Digitalmars-d-announce <digitalmars-d-announce puremagic.com> wrote:
 On 2019-05-21 16:51:43 +0000, Manu said:

 The screencast shows a responsive 40x40 grid. Layouting the grid takes
 about 230ms, drawing it about 10ms.
O_o ... I feel like 230 *microseconds* feels about the right time, and ~100 microseconds for rendering.
I don't think that's fast enough :-)
It probably is :P
 So this gives us 36 FPS which is IMO pretty good for a desktop app target
Umm, no. I would expect 240fps is the modern MINIMUM for a desktop app, you can easily make it that fast.
;-) Well, the key is to layout & render only changes. A responsive grid is an evil test-case as this requires a full cycle on every frame.
The worst case defines your application performance, and grids are pretty normal. You can make a UI run realtime ;)

I mean, there are video games that render a complete screen full of zillions of high-detail things every frame!
May 22 2019
next sibling parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Wednesday, 22 May 2019 at 17:01:39 UTC, Manu wrote:
 You can make a UI run realtime ;)
 I mean, there are video games that render a complete screen 
 full of
 zillions of high-detail things every frame!
But you shouldn't design a UI framework like a game engine. Especially not if you also want to run on embedded devices addressing pixels over I2C.
May 22 2019
next sibling parent reply Manu <turkeyman gmail.com> writes:
On Wed, May 22, 2019 at 10:20 AM Ola Fosheim Grøstad via
Digitalmars-d-announce <digitalmars-d-announce puremagic.com> wrote:
 On Wednesday, 22 May 2019 at 17:01:39 UTC, Manu wrote:
 You can make a UI run realtime ;)
 I mean, there are video games that render a complete screen
 full of
 zillions of high-detail things every frame!
But you shouldn't design a UI framework like a game engine. Especially not if you also want to run on embedded devices addressing pixels over I2C.
I couldn't possibly agree less; I think cool kids would design literally all computer software like a game engine, if they generally cared about fluid experience, perf, and battery life. This extends to server software in data-centers, even more so in that case.

People really should look at games for how to write good software in general. There's a reason games can simulate a rich world full of dynamic data and produce hundreds of frames a second, is because the industry has spent decades getting really good at software design and patterns that treat computers like computers with respect to perf.
May 22 2019
next sibling parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Wednesday, 22 May 2019 at 21:18:58 UTC, Manu wrote:
 I couldn't possibly agree less; I think cool kids would design
 literally all computer software like a game engine, if they 
 generally
 cared about fluid experience, perf, and battery life.
A game engine is designed for full redraw on every frame. He said he wanted to draw pixel by pixel and only update pixels that change. I guess this would be useful on a slow I2C serial bus. It is also useful for X-Windows. Or any other scenario where you transmit graphics over a wire. Games aren't really relevant in those two scenarios, but I don't know what the framework is aiming for either.
 There's a reason games can simulate a rich world full of 
 dynamic data and produce hundreds of frames a second, is
Yes, it is because they cut corners and make good use of special cases... The cool kids in the demo-scene even more so. That does not make them good examples to follow for people who care about accuracy and correctness. But I don't know what the goal for this GUI framework is.

So could you make good use of a GPU, even in the early stages in this case? Yes. If you keep it as a separate stage so that you have no dependencies to the object hierarchy. I would personally have done it in two passes for a prototype. Basically translating the object hierarchy into geometric data every frame then use a GPU to take that and push it to the screen. Not very efficient, perhaps, but good enough to get 60FPS with max flexibility.

Is that related to games, yes sure, or any other real-time simulation software. So not really game specific.
May 22 2019
next sibling parent reply Manu <turkeyman gmail.com> writes:
On Wed, May 22, 2019 at 3:40 PM Ola Fosheim Grøstad via
Digitalmars-d-announce <digitalmars-d-announce puremagic.com> wrote:
 On Wednesday, 22 May 2019 at 21:18:58 UTC, Manu wrote:
 I couldn't possibly agree less; I think cool kids would design
 literally all computer software like a game engine, if they
 generally
 cared about fluid experience, perf, and battery life.
A game engine is designed for full redraw on every frame.
I mean, you don't need to *draw* anything... it's really just a style of software design that lends to efficiency. Our servers don't draw anything!
 He said he wanted to draw pixel by pixel and only update pixels
 that change. I guess this would be useful on a slow I2C serial
 bus. It is also useful for X-Windows. Or any other scenario where
 you transmit graphics over a wire.

 Games aren't really relevant in those two scenarios, but I don't
 know what the framework is aiming for either.
Minimising wasted calculation is always relevant. If you don't change part of an image, then you'd better have the tech to skip rendering it (or skip transmitting it in this scenario), otherwise you're wasting resources like a boss ;)
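A minimal dirty-rectangle sketch in D of that skip-what-didn't-change idea (assumed names, for illustration only):

    struct Rect
    {
        int x, y, w, h;

        bool intersects(in Rect o) const
        {
            return x < o.x + o.w && o.x < x + w &&
                   y < o.y + o.h && o.y < y + h;
        }
    }

    class Widget
    {
        Rect bounds;
        void draw() { /* repaint, or retransmit over the wire */ }
    }

    Rect[] dirty;   // regions invalidated since the last frame

    void render(Widget[] widgets)
    {
        foreach (w; widgets)
        {
            // Repaint only widgets overlapping an invalidated region;
            // untouched widgets cost nothing this frame.
            foreach (d; dirty)
                if (w.bounds.intersects(d))
                {
                    w.draw();
                    break;
                }
        }
        dirty.length = 0;   // clean until the next change arrives
    }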
 There's a reason games can simulate a rich world full of
 dynamic data and produce hundreds of frames a second, is
 Yes, it is because they cut corners and make good use of special cases... The cool kids in the demo-scene even more so. That does not make them good examples to follow for people who care about accuracy and correctness. But I don't know what the goal for this GUI framework is.
I don't think you know what you're talking about. I don't think we 'cut corners' (I'm not sure what that even means)... we have data to process, and aim to maximise efficiency, that is all.

Architecture is carefully designed towards that goal; it changes your patterns. You won't tend to have OO hierarchies and sparsely allocated graphs, and you will naturally tend to arrange data in tables destined for batch processing. These are key to software efficiency in general.
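The 'tables destined for batch processing' point in miniature, as a D sketch: one contiguous column per field instead of one heap object per cell (illustrative only):

    // Structure-of-arrays: the layout pass becomes one tight,
    // cache-friendly loop over parallel columns, with no pointer
    // chasing through an object graph.
    struct CellTable
    {
        float[] x, y, w, h;   // one row per cell
    }

    void layoutGrid(ref CellTable t, size_t cols, float cellW, float cellH)
    {
        foreach (i; 0 .. t.x.length)
        {
            t.x[i] = (i % cols) * cellW;
            t.y[i] = (i / cols) * cellH;
            t.w[i] = cellW;
            t.h[i] = cellH;
        }
    }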
 So could you make good use of a GPU, even in the early stages in
 this case? Yes. If you keep it as a separate stage so that you
 have no dependencies to the object hierarchy.
'Object hierarchy' is precisely where it tends to go wrong. There are a million ways to approach this problem space; some are naturally much more efficient, some rather follow design pattern books and propagate ideas taught in university to kids.
 I would personally
 have done it in two passes for a prototype. Basically translating
 the object hierarchy into geometric data every frame then use a
 GPU to take that and push it to the screen. Not very efficient,
 perhaps, but good enough to get 60FPS with max flexibility.
Sure, maybe that's a reasonable design. Maybe you can go a step further and transform your arrangement beyond a 'hierarchy'? Data structures are everything.
 Is that related to games, yes sure, or any other real-time 
 simulation software. So not really game specific.
Right. I only advocate good software engineering! But when I look around, the only field I can see that's doing a really good job at scale is gamedev. Some libs here and there enclose some tight worker code, but nothing much at the systemic level.
May 22 2019
parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Thursday, 23 May 2019 at 00:23:50 UTC, Manu wrote:
 it's really just a style
 of software design that lends to efficiency.
 Our servers don't draw anything!
Then it isn't specific to games, or particularly relevant to rendering. Might as well talk about people writing search engines or machine learning code.
 Minimising wasted calculation is always relevant. If you don't 
 change part of an image, then you'd better have the tech to 
 skip rendering it (or skip transmitting it in this scenario), 
 otherwise you're wasting resources like a boss ;)
Well, it all depends on your priorities.

The core difference is that (at least for the desktop) a game rendering engine can focus on 0% overhead for the most demanding scenes, while 40% overhead on light scenes has no impact on the game experience. Granted, for mobile engines battery life might change that equation, though I am not sure if gamers would notice a 20% difference in battery life...

For a desktop application you might instead decide to favour 50% GPU overhead across the board as a trade-off for a more flexible API that saves application programmer hours and frees up CPU time for processing application data. (If your application only uses 10% of the GPU, then going to 15% is a low price to pay.)
 I don't think you know what you're talking about.
Let's avoid the ad hominems… I know what I am talking about, but perhaps I don't know what you are talking about? I thought you were talking about the rendering engines used in games, not software engineering as a discipline.
 I don't think we 'cut corners' (I'm not sure what that even 
 means)...
What it means is that in a game you have a negotiation between the application design requirements and the technology requirements. You can change the game design to take advantage of the technology, and change the technology to accommodate the game design. Visual quality only matters as seen from the particular vantage points that the gamer will take in that particular game or type of game.

When creating a generic GUI API you cannot really assume too much. Let's say you added ray-traced widgets. It would make little sense to say that you can only have 10 ray-traced widgets on display at the same time for a GUI API. In a game that is completely acceptable. You'd rather have the ability to put some extra impressive visuals on screen in a limited fashion where it matters the most.

So the priorities are more like in film production. You can pay a price in terms of technological special-casing to create a more intense emotional experience. You can limit your focus to what the user is supposed to do (both end user and application programmer) and give priority to "emotional impact". And you also have the ability to train a limited set of workers (programmers) to make good use of the novelty of your technology.

When dealing with unknown application programmers writing unknown applications you have to be more conservative.
 patterns. You won't tend to have OO hierarchies and sparsely 
 allocated
 graphs, and you will naturally tend to arrange data in tables 
 destined
 for batch processing. These are key to software efficiency in 
 general.
If you are talking about something that isn't available to the application programmer then that is fine.

For a GUI framework, the most important thing after providing a decent UI experience is to make the application programmer's life easier and more intuitive. Basically, your goal is to save programmer hours and make it easy to change direction due to changing requirements. If OO hierarchies are more intuitive to the typical application programmer, then that is what you should use at the API level.

If you write your own internal GUI framework then you have a different trade-off: you might put more of a burden on the application developer in order to make better overall use of your workforce. Or you might limit the scope of the GUI framework to get better end-user results.
 'Object hierarchy' is precisely where it tends to go wrong. 
 There are a million ways to approach this problem space; some 
 are naturally much more efficient, some rather follow design 
 pattern books and propagate ideas taught in university to kids.
You presume that efficiency is a problem. That's not necessarily the case. If your framework is for embedded LCDs then you are perhaps limited to under 500 objects on screen anyway.

I also know that Open Inventor (from SGI) and VRML made people more productive. It allowed people to create experiences that they otherwise would not have been able to, both in industrial prototypes and artistic works.

Overhead isn't necessarily bad. A design with some overhead might cut the costs enough for the application developer to make a project feasible. Or even make it accessible for tinkering. You see the same thing with the Processing language.
 Sure, maybe that's a reasonable design. Maybe you can go a step 
 further and transform your arrangement beyond a 'hierarchy'? Data 
 structures are everything.
In the early stages it is most important to have freedom to change things, but with an idea of where you could insert spatial data-structures. Having a plan for where you can place accelerating data-structures and algorithms does matter, of course. But you don't need to start there. So I think he is doing well by keeping rendering simple in the first iterations.
 Right. I only advocate good software engineering!
 But when I look around, the only field I can see that's doing a 
 really good job at scale is gamedev. Some libs here and there 
 enclose some tight worker code, but nothing much at the 
 systemic level.
It is a bit problematic for generic libraries to use worker code (I assume you mean actors running on separate threads) as you put some serious requirements on the architecture of the application. More actor-oriented languages and run-times could make it pleasant though, so maybe it is an infrastructure issue where programming languages need to evolve. But you could for a GUI framework, sure.

Although I think the rendering structure used in browser graphical backends is closer to what people would want for a UI than a typical game rendering engine. Especially the styling engine.
May 22 2019
prev sibling parent reply "Nick Sabalausky (Abscissa)" <SeeWebsiteToContactMe semitwist.com> writes:
On 5/22/19 6:39 PM, Ola Fosheim Grøstad wrote:
 There's a reason games can simulate a rich world full of dynamic data 
 and produce hundreds of frames a second, is
Yes, it is because they cut corners and make good use of special cases... The cool kids in the demo-scene even more so. That does not make them good examples to follow for people who care about accuracy and correctness.
Serious photographers and videographers use things like JPEG and MPEG which are *fundamentally based* on cutting imperceptible corners and trading accuracy for other benefits. The idea of a desktop GUI absolutely requiring perfect pristine accuracy in all things is patently laughable.
May 23 2019
parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Thursday, 23 May 2019 at 19:13:11 UTC, Nick Sabalausky 
(Abscissa) wrote:
 Serious photographers and videographers use things like JPEG 
 and MPEG which are *fundamentally based* on cutting 
 imperceptible corners and trading accuracy for other benefits. 
 The idea of a desktop GUI absolutely requiring perfect pristine 
 accuracy in all things is patently laughable.
What do you mean? Besides, it is wrong.

If you create a font editor you want accuracy. If you create an image editor you want accuracy. If you create a proofing application you want accuracy. If you create a PDF application you want accuracy.

When designing a game, you can adapt your game design to the provided engine. Or you can design an engine to fit a type of game design.

When creating a user interface framework you should work with everything from sound editors, oscilloscopes, image editors, vector editors, CAD programs, spreadsheets etc. You cannot really assume much about anything. What you want is max flexibility.

Most GUI frameworks fail at this, so you have to do all yourself if you want anything with descent quality, but that is not how it should be. Apple's libraries provide options for higher accuracy. This is good. This is what you want. You don't want to roll your own all the time because the underlying framework is just "barely passing" in terms of quality.
May 23 2019
next sibling parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Thursday, 23 May 2019 at 19:29:26 UTC, Ola Fosheim Grøstad 
wrote:
 Most GUI frameworks fail at this, so you have to do all 
 yourself if you want anything with descent quality, but that is 
 not how it should be.
I meant «decent»! *grin*

(But really, photographers and videographers use RAW format exactly because they want to be able to edit without artifacts showing up. Not really relevant in this context though.)
May 23 2019
prev sibling next sibling parent reply "Nick Sabalausky (Abscissa)" <SeeWebsiteToContactMe semitwist.com> writes:
On 5/23/19 3:29 PM, Ola Fosheim Grøstad wrote:
 On Thursday, 23 May 2019 at 19:13:11 UTC, Nick Sabalausky (Abscissa) wrote:
 Serious photographers and videographers use things like JPEG and MPEG 
 which are *fundamentally based* on cutting imperceptible corners and 
 trading accuracy for other benefits. The idea of a desktop GUI 
 absolutely requiring perfect pristine accuracy in all things is 
 patently laughable.
What do you mean? Besides, it is wrong. If you create a font editor you want accuracy. If you create an image editor you want accuracy. If you create a proofing application you want accuracy. If you create a PDF application you want accuracy.
They want accuracy TO THE EXTENT THEY (and others) CAN PERCEIVE IT. That is the key. Human perception is far more limited than most people realize.
May 23 2019
parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Thursday, 23 May 2019 at 20:13:29 UTC, Nick Sabalausky 
(Abscissa) wrote:
 They want accuracy TO THE EXTENT THEY (and others) CAN PERCEIVE 
 IT. That is the key. Human perception is far more limited than 
 most people realize.
Well, what I meant by "cutting corners" is that games reach efficiency by narrowing down what they allow you to do.

STILL, I think Robert M. Münch is onto something good if he aims for accuracy and provides say a canvas that draws bezier curves to the spec (whether it is PDF or SVG). I think many niche application areas involve accuracy, like a CNC router program, or a logo cutter or 3D printing. So I think there is a market.

If you can provide everything people need in one framework, then people might want to pay for it. If you just provide what everyone else sloppily does, then why bother (just use Gtk, Qt or electron instead). *shrug*
May 23 2019
parent reply Robert M. Münch <robert.muench saphirion.com> writes:
On 2019-05-23 20:22:28 +0000, Ola Fosheim Grøstad said:

 STILL, I think Robert M. Münch is onto something good if he aims for 
 accuracy and provides say a canvas that draws bezier curves to the spec 
 (whether it is PDF or SVG). I think many niche application areas 
 involve accuracy, like a CNC router program, or a logo cutter or 3D 
 printing. So I think there is a market.
I don't fully understand the discussion about accuracy WRT GUIs. Of course you need to draw things accurately. And my interjection WRT 35 FPS was just to give an idea about the possible achievable performance. I like desktop apps that are fast and small, nothing more.
 If you can provide everything people need in one framework, then people 
 might want to pay for it. If you just provide what everyone else 
 sloppily does, then why bother (just use Gtk, Qt or electron instead). 
 *shrug*
Exactly. Our goal is to create a GUI framework which you can use to make desktop apps without caring about the OS specifics (which doesn't mean we are limiting it in a way that you can't care if you wish). For this we are creating a set of building-blocks that fit perfectly together following a radical KISS and minimal dependency strategy. If you want, you should be able to maintain a desktop app using a specific version of the framework for 15+ years, without running into any limitations.

-- 
Robert M. Münch
http://www.saphirion.com
smarter | better | faster
May 24 2019
parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Friday, 24 May 2019 at 08:35:27 UTC, Robert M. Münch wrote:
 I don't fully understand the discussion about accuracy WRT 
 GUIs. Of course you need to draw things accurately. And my 
 interjection WRT 35 FPS was just to give an idea about the 
 possible achievable performance. I like desktop apps that are 
 fast and small, nothing more.
Yes. What I meant is that it is better for an application developer to have a GUI framework that is predictable and solid than to have the highest possible performance. So if someone provides a drawing canvas then I'd rather have correctly drawn anti-aliased primitives (like bezier curves) than something that is 20% faster but incorrect. Just an example. Just in general, predictable, less to worry about, so that the application developer can focus on the application and not the peculiarities of the GUI framework.
 care if you wish). For this we are creating a set of 
 building-blocks that fit perfectly together following a radical 
 KISS and minimal dependency strategy.
Sounds reasonable.
May 24 2019
prev sibling parent Robert M. Münch <robert.muench saphirion.com> writes:
On 2019-05-23 19:29:26 +0000, Ola Fosheim Grøstad said:

 When creating a user interface framework you should work with 
 everything from sound editors, oscilloscopes, image editors, vector 
 editors, CAD programs, spreadsheets etc. You cannot really assume much 
 about anything. What you want is max flexibility.
That's exactly the right direction.
 Most GUI frameworks fail at this, so you have to do all yourself if you 
 want anything with descent quality, but that is not how it should be.
Yep, I can't agree more.

-- 
Robert M. Münch
http://www.saphirion.com
smarter | better | faster
May 24 2019
prev sibling parent Guillaume Piolat <first.last gmail.com> writes:
On Wednesday, 22 May 2019 at 21:18:58 UTC, Manu wrote:
 People really should look at games for how to write good
 software in general.
While I agree for some AAA games (and I'm sure your employer can afford excellent development practices), I'd like to counter that point for balance: for good practice of stability, threading and error reporting, people should look at high-availability, long-lived server software. A single memory leak will be a problem there, as will a single deadlock.

Games are also particular software in that they simulate worlds with large numbers of entities, and that exercises the limits of OO. That's a bit specific to games! (and possibly UI)

There are also areas where performance matters immensely, such as HFT and video, where people spend more time than in games optimizing the last percent. Arguably, HFT is maybe the one domain that goes the furthest with performance.

If you want an example of how (sometimes) strangely insular game development can be, maybe look at the Jai language. It is assuming game development is a gold standard for software and software needs, without ever proving that point.
May 23 2019
prev sibling next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Wed, May 22, 2019 at 02:18:58PM -0700, Manu via Digitalmars-d-announce wrote:
 On Wed, May 22, 2019 at 10:20 AM Ola Fosheim Grøstad via
 Digitalmars-d-announce <digitalmars-d-announce puremagic.com> wrote:
[...]
 But you shouldn't design a UI framework like a game engine.

 Especially not if you also want to run on embedded devices
 addressing pixels over I2C.
I couldn't possibly agree less; I think cool kids would design literally all computer software like a game engine, if they generally cared about fluid experience, perf, and battery life.
Wait, wha...?! Write game-engine-like code if you care about *battery life*?? I mean... fluid experience, sure, perf, OK, but *battery life*?! Unless I've been living in the wrong universe all this time, that's gotta be the most incredible statement ever. I've yet to see a fluid, high-perf game engine *not* drain my battery like there's no tomorrow, and now you're telling me that I have to write code like a game engine in order to extend battery life?

I think I need to sit down.


T

-- 
Never step over a puddle, always step around it. Chances are that whatever made it is still dripping.
May 22 2019
parent reply "Nick Sabalausky (Abscissa)" <SeeWebsiteToContactMe semitwist.com> writes:
On 5/22/19 6:33 PM, H. S. Teoh wrote:
 On Wed, May 22, 2019 at 02:18:58PM -0700, Manu via Digitalmars-d-announce
wrote:
 On Wed, May 22, 2019 at 10:20 AM Ola Fosheim Grøstad via
 Digitalmars-d-announce <digitalmars-d-announce puremagic.com> wrote:
[...]
 But you shouldn't design a UI framework like a game engine.

 Especially not if you also want to run on embedded devices
 addressing pixels over I2C.
I couldn't possibly agree less; I think cool kids would design literally all computer software like a game engine, if they generally cared about fluid experience, perf, and battery life.
[...] Wait, wha...?! Write game-engine-like code if you care about *battery life*?? I mean... fluid experience, sure, perf, OK, but *battery life*?! Unless I've been living in the wrong universe all this time, that's gotta be the most incredible statement ever. I've yet to see a fluid, high-perf game engine *not* drain my battery like there's no tomorrow, and now you're telling me that I have to write code like a game engine in order to extend battery life? I think I need to sit down.
You're conflating "game engine" with "game" here. And YES, there is a very meaningful distinction: Game engines *MUST* be *EFFICIENT* in order to facilitate the demands the games place on them. And "efficiency" *means* efficiency: it means minimizing wasted processing, and that *inherently* means *both* speed and battery.

The *games*, not the engines, then take that efficiency and use it to fill the hardware to the absolute brim, maximizing detail and data and overall lushness of the simulated world (and, in the case of indie titles, it's also increasingly used to offset sloppy game code - with engines like Unity, indie game programming is increasingly done by people with very little programming experience).

THAT is what kills battery: Taking an otherwise efficient engine and using it to saturate the hardware, thus trading battery for either maximal data being processed or for lowering the programmer's barrier to entry.

Due to the very nature of "efficiency", the fundamental designs behind any good game engine could just as easily be applied to minimizing battery usage as they can be to maximizing CPU/GPU utilization.
May 23 2019
parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Thursday, 23 May 2019 at 19:32:28 UTC, Nick Sabalausky 
(Abscissa) wrote:
 Game engines *MUST* be *EFFICIENT* in order to facilitate the 
 demands the games place on them. And "efficiency" *means* 
 efficiency: it means minimizing wasted processing, and that 
 *inherently* means *both* speed and battery.
I think there is a slight disconnect in how different people view efficiency. You argue that this is some kind of absolute metric. I would argue that it is a relative metric, and it is relative to flexibility and power. This isn't specific to games.

For instance, there is no spatial datastructure that is inherently better or more efficient than all other spatial datastructures. It depends on what you need to represent. It depends on how often you need to update. It depends on what kind of queries you want to do. And so on.

This is where a generic application/UI framework will have to give priority to being generally useful in the most general sense and give priority to flexibility and expressiveness.

A first-person-shooter game engine can however make a lot of assumptions. That will make it more efficient for a narrow set of cases, but also completely useless in the most general sense. It also limits what you can do, quite severely.
May 23 2019
parent reply "Nick Sabalausky (Abscissa)" <SeeWebsiteToContactMe semitwist.com> writes:
On 5/23/19 3:52 PM, Ola Fosheim Grøstad wrote:
 On Thursday, 23 May 2019 at 19:32:28 UTC, Nick Sabalausky (Abscissa) wrote:
 Game engines *MUST* be *EFFICIENT* in order to facilitate the demands the 
 games place on them. And "efficiency" *means* efficiency: it means 
 minimizing wasted processing, and that *inherently* means *both* speed 
 and battery.
I think there is a slight disconnect in how different people view efficiency. You argue that this is some kind of absolute metric. I would argue that it is a relative metric, and it is relative to flexibility and power. This isn't specific to games. For instance, there is no spatial datastructure that is inherently better or more efficient than all other spatial datastructures. It depends on what you need to represent. It depends on how often you need to update. It depends on what kind of queries you want to do. And so on. This is where a generic application/UI framework will have to give priority to being generally useful in the most general sense and give priority to flexibility and expressiveness. A first-person-shooter game engine can however make a lot of assumptions. That will make it more efficient for a narrow set of cases, but also completely useless in the most general sense. It also limits what you can do, quite severely.
Of course there are always tradeoffs, but I think you are very much overestimating the connection between inherent performance limitations and things like API and general usefulness and flexibility. And I think you're *SEVERELY* underestimating the flexibility of modern game engines. And I say this having personally used modern game engines. Have you?

FWIW, on 80's technology, I would completely agree with you. And even to some extent on 90's tech. But not today.
May 23 2019
parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Thursday, 23 May 2019 at 20:20:52 UTC, Nick Sabalausky 
(Abscissa) wrote:
 flexibility. And I think you're *SEVERELY* underestimating the 
 flexibility of modern game engines. And I say this having 
 personally used modern game engines. Have you?
No, I don't use them. I read about how they are organized, but I have no need for the big gaming frameworks, which seem to be very bloated and, frankly, limiting. I am not really interested in big static photorealistic landscapes. If I went there then I would go for algorithmic surrealistic landscapes, and the frameworks won't fit that. Too static, too euclidean.

When I (very rarely) hit the hardware I tend to favour bare bones for my simple needs, which won't benefit from any big framework. Hardware is fast enough anyway; the limit is in trying to figure out clever ways to use shaders for things like audio-waveform zooming and getting decent quality from it etc. Hardware is fast enough, the limit is in figuring out the best way to do it.

But I am moving towards doing everything in the browser, and am adopting Angular for regular UI, which is yet another layer on top of that. It appears to make me more productive. Maybe I'll change my mind later, but right now Angular seems to be more productive than other options. So the whining about browsers being inefficient is lost on me for regular UI. Programmer productivity matters. Browsers are actually doing quite well with simple 2D graphics today. Even some 3D is starting to look ok.
 FWIW, On 80's technology, I would completely agree with you. 
 And even to some extent on 90's tech. But not today.
Ok, I've always been interested in spatial datastructures, audio, 2D/3D, and raytracing, and I don't think there have been, on a fundamental level, any significant theoretical achievements/conceptual shifts since the early 2000s. Except perhaps for the increased focus on point-clouds. So I think what you see has more to do with GPU performance, availability of RAM, and more mature frameworks than anything else?
May 23 2019
parent reply "Nick Sabalausky (Abscissa)" <SeeWebsiteToContactMe semitwist.com> writes:
On 5/23/19 5:01 PM, Ola Fosheim Grøstad wrote:
 On Thursday, 23 May 2019 at 20:20:52 UTC, Nick Sabalausky (Abscissa) wrote:
 flexibility. And I think you're *SEVERELY* underestimating the 
 flexibility of modern game engines. And I say this having personally 
 used modern game engines. Have you?
No, I don't use them. I read about how they are organized, but I have no need for the big gaming frameworks, which seem to be very bloated and, frankly, limiting. I am not really interested in big static photorealistic landscapes.
Wow, you're just deliberately *trying* not to listen at this point, aren't you? Fine, forget it, then.
May 24 2019
parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Friday, 24 May 2019 at 19:32:38 UTC, Nick Sabalausky 
(Abscissa) wrote:
 Wow, you're just deliberately *trying* not to listen at this 
 point, aren't you? Fine, forget it, then.
I have no problem listening. As far as I can tell generic scenegraph frameworks like Inventor, Ogre (and I presume Horde) seem to have lost ground in favour of more dedicated solutions.
May 24 2019
prev sibling next sibling parent Manu <turkeyman gmail.com> writes:
On Wed, May 22, 2019 at 3:33 PM H. S. Teoh via Digitalmars-d-announce
<digitalmars-d-announce puremagic.com> wrote:
 On Wed, May 22, 2019 at 02:18:58PM -0700, Manu via Digitalmars-d-announce
wrote:
 On Wed, May 22, 2019 at 10:20 AM Ola Fosheim Grøstad via
 Digitalmars-d-announce <digitalmars-d-announce puremagic.com> wrote:
[...]
 But you shouldn't design a UI framework like a game engine.

 Especially not if you also want to run on embedded devices
 addressing pixels over I2C.
I couldn't possibly agree less; I think cool kids would design literally all computer software like a game engine, if they generally cared about fluid experience, perf, and battery life.
[...] Wait, wha...?! Write game-engine-like code if you care about *battery life*?? I mean... fluid experience, sure, perf, OK, but *battery life*?! Unless I've been living in the wrong universe all this time, that's gotta be the most incredible statement ever. I've yet to see a fluid, high-perf game engine *not* drain my battery like there's no tomorrow, and now you're telling me that I have to write code like a game engine in order to extend battery life?
Yes. Efficiency == battery life. Game engines tend to be the most efficient software written these days.

You don't have to run applications at an unbounded rate. I mean, games will run as fast as possible maximising device resources, but assuming it's not a game, then you only execute as much as required rather than trying to produce frames at the highest rate possible. Realtime software is responding to constantly changing simulation, but non-game software tends to only respond to input-driven entropy; if entropy rate is low, then exec-to-sleeping ratio heavily biases towards sleeping.

If you have a transformation to make, and you can do it in 1ms, or 100us, then you burn 10 times less energy doing it in 100us.
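The difference between the two loop styles, as a D sketch (waitForEvent and friends are placeholder names for whatever the platform provides, e.g. GetMessage or epoll):

    struct Event {}

    // Placeholders for platform facilities.
    Event waitForEvent() { return Event(); }   // blocks; the CPU sleeps here
    void applyChange(Event e) {}
    void redrawDirty() {}

    bool running = true;

    // A game loop produces frames as fast as possible and is always
    // busy. A desktop loop sleeps until input entropy arrives, does
    // the minimal work (ideally ~100us rather than ~1ms), and sleeps
    // again.
    void desktopLoop()
    {
        while (running)
        {
            auto e = waitForEvent();
            applyChange(e);
            redrawDirty();   // only regions the event actually touched
        }
    }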
 I think I need to sit down.
If you say so :)
May 22 2019
prev sibling next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Wed, May 22, 2019 at 05:11:06PM -0700, Manu via Digitalmars-d-announce wrote:
 On Wed, May 22, 2019 at 3:33 PM H. S. Teoh via Digitalmars-d-announce
 <digitalmars-d-announce puremagic.com> wrote:
 On Wed, May 22, 2019 at 02:18:58PM -0700, Manu via Digitalmars-d-announce
wrote:
[...]
 I couldn't possibly agree less; I think cool kids would design
 literally all computer software like a game engine, if they
 generally cared about fluid experience, perf, and battery life.
[...] Wait, wha...?! Write game-engine-like code if you care about *battery life*?? I mean... fluid experience, sure, perf, OK, but *battery life*?! Unless I've been living in the wrong universe all this time, that's gotta be the most incredible statement ever. I've yet to see a fluid, high-perf game engine *not* drain my battery like there's no tomorrow, and now you're telling me that I have to write code like a game engine in order to extend battery life?
Yes. Efficiency == battery life. Game engines tend to be the most efficient software written these days. You don't have to run applications at an unbounded rate. I mean, games will run as fast as possible maximising device resources, but assuming it's not a game, then you only execute as much as required rather than trying to produce frames at the highest rate possible. Realtime software is responding to constantly changing simulation, but non-game software tends to only respond to input-driven entropy; if entropy rate is low, then exec-to-sleeping ratio heavily biases towards sleeping. If you have a transformation to make, and you can do it in 1ms, or 100us, then you burn 10 times less energy doing it in 100us.
But isn't that just writing good code in general? 'cos when I think of game engines, I think of framerate maximization, which equals maximum battery drain because you're trying to do as much as possible in any given time interval.

Moreover, I've noticed a recent trend of software trying to emulate game-engine-like behaviour, e.g., smooth scrolling, animations, etc.. In the old days, GUI apps primarily only responded to input events and that was it -- click once, the code triggers once, does its job, and goes back to sleep. These days, though, apps seem to be bent on animating *everything* and smoothing *everything*, so one click translates to umpteen 60fps animation frames / smooth-scrolling frames instead of only triggering once. All of which *increases* battery drain rather than decreasing it.

And this isn't just for mobile apps; even the pervasive desktop browser nowadays seems bent on eating up as much CPU, memory, and disk as physically possible -- everybody and their neighbour's dog wants ≥60fps hourglass / spinner animations and smooth scrolling, eating up GBs of memory, soaking up 99% CPU, and cluttering the disk with caches of useless paraphernalia like spinner animations.

Such is the result of trying to emulate game-engine-like behaviour. And now you're recommending that everyone should write code like a game engine!

(Once, just out of curiosity (and no small amount of frustration), I went into Firefox's about:config and turned off all smooth scrolling, animation, etc., settings. The web suddenly sped up by at least an order of magnitude, probably more. Down with 60fps GUIs, I say. Unless you're making a game, you don't *need* 60fps. It's squandering resources for trivialities where we should be leaving those extra CPU cycles for actual, useful work instead, or *actually* saving battery life by not trying to make everything emulate a ≥60fps game engine.)


T

-- 
Give me some fresh salted fish, please.
May 22 2019
next sibling parent "Nick Sabalausky (Abscissa)" <SeeWebsiteToContactMe semitwist.com> writes:
On 5/22/19 8:34 PM, H. S. Teoh wrote:
 And this isn't just for mobile apps; even the pervasive desktop browser
 nowadays seems bent on eating up as much CPU, memory, and disk as
 physically possible -- everybody and their neighbour's dog wants ≥60fps 
 hourglass / spinner animations and smooth scrolling, eating up GBs of
 memory, soaking up 99% CPU, and cluttering the disk with caches of
 useless paraphernalia like spinner animations.
 
 Such is the result of trying to emulate game-engine-like behaviour.
No, that resource drain is BECAUSE they're trying to do game-like things WITHOUT understanding what game engine developers have learned from experience about how to do so *efficiently*.
  And
 now you're recommending that everyone should write code like a game
 engine!
Why is it so difficult for programmers who haven't worked on games to understand the basic fundamental notion that (ex.) 0.1 milliseconds of actual CPU/GPU work is ALWAYS, ALWAYS, ALWAYS *both* faster *and* lower power drain than (ex.) 10 milliseconds of actual CPU/GPU work? And that *that* is *ALL* there is to software efficiency! Nothing more!

So yes, absolutely. If you *are* going to be animating the entire screen every frame for a desktop UI (and I agree that's not always a great thing to do, in part for battery reasons), then yes, I'd ABSOLUTELY rather it be doing so in a game-engine-like way so that it can achieve the same results with less demand on the hardware.

And if you're NOT animating the entire screen every frame, then I'd STILL rather it take advantage of a game-engine-like architecture, because I'd rather my static desktop UI take 0.01% CPU utilization than 2% CPU utilization (for example).
 
 (Once, just out of curiosity (and no small amount of frustration), I
 went into Firefox's about:config and turned off all smooth scrolling,
 animation, etc., settings.  The web suddenly sped up by at least an
 order of magnitude, probably more. Down with 60fps GUIs, I say.  Unless
 you're making a game, you don't *need* 60fps. It's squandering resources
 for trivialities where we should be leaving those extra CPU cycles for
 actual, useful work instead, or *actually* saving battery life by not
 trying to make everything emulate a ≥60fps game engine.)
Yes, this is true. There's no surprise there. Doing less work is more efficient. Period.

But what I'm continually amazed the majority of non-game developers seem to find so incredibly difficult to grasp is that NO MATTER WHAT FRAMERATE or update rate you're targeting: What is MORE efficient and what is LESS efficient...DOES NOT CHANGE!!! PERIOD.

If you ARE cursed to run a 60fps GUI desktop, which would you prefer:

A. 80% system resource utilization, *consistent* 60fps, and 2 hours of battery power. Plus the option of turning OFF animations to achieve 1% system resource utilization and 12 hours of battery.

or:

B. 100% system resource utilization, *inconsistent* 60fps with frequent drops to 30fps or lower, and 45 minutes of battery power. Plus the option of turning OFF animations to achieve 15% system resource utilization and 4 hours of battery.

Which is better? Because letting you have A instead of B is *exactly* what game engine technology does for us. This is what efficiency is all about.
May 23 2019
prev sibling parent reply Ron Tarrant <rontarrant gmail.com> writes:
On Thursday, 23 May 2019 at 00:34:42 UTC, H. S. Teoh wrote:

 And this isn't just for mobile apps; even the pervasive desktop 
 browser nowadays seems bent on eating up as much CPU, memory, 
 and disk as physically possible
This has been going on ever since the Amiga 1000, Atari 1040ST, and the 286 started edging out the C-64. If we ever break out of this anti-Moorean loop and start seeing 8 GHz or even 16 GHz CPU speeds, maybe the machines will finally manage to keep the resource hounds at bay.
May 25 2019
parent reply user1234 <user1234 12.de> writes:
On Saturday, 25 May 2019 at 19:10:44 UTC, Ron Tarrant wrote:
 On Thursday, 23 May 2019 at 00:34:42 UTC, H. S. Teoh wrote:

 And this isn't just for mobile apps; even the pervasive 
 desktop browser nowadays seems bent on eating up as much CPU, 
 memory, and disk as physically possible
This has been going on ever since the Amiga 1000, Atari 1040ST, and the 286 started edging out the C-64. If we ever break out of this anti-Moorean loop and start seeing 8 GHz or even 16 GHz CPU speeds, maybe the machines will finally manage to keep the resource hounds at bay.
I'm not sure. Maybe there's something in human nature that tends toward wasting resources. Maybe at the beginning everybody will be happy, but in the end people would start using slower, less optimized scripting languages, or simply use the existing ones to achieve more complex tasks, and after a while the situation we know now would repeat itself.
May 25 2019
parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Saturday, 25 May 2019 at 19:35:35 UTC, user1234 wrote:
 Maybe at the beginning everybody will be happy, but in the end 
 people would start using slower, less optimized scripting 
 languages, or simply use the existing ones to achieve more 
 complex tasks, and after a while the situation we know now 
 would repeat itself.
Haha, yup. As neural network and deep learning algorithms become available in hardware, programmers will start using them as building blocks for implementing nondeterministic solutions to problems that we would create carefully crafted deterministic algorithms for. Meaning, they will move towards "sculpting" software whereas we are "constructing" software. We see this already in webdev. Some devs just "sculpt" WordPress sites with plugins and tweaks. With very little insight into what the various pieces of code actually do… Actually, they might have a very vague idea of what programming is… Oh well.
May 25 2019
prev sibling parent reply Manu <turkeyman gmail.com> writes:
On Wed, May 22, 2019 at 5:34 PM H. S. Teoh via Digitalmars-d-announce
<digitalmars-d-announce puremagic.com> wrote:
 On Wed, May 22, 2019 at 05:11:06PM -0700, Manu via Digitalmars-d-announce
wrote:
 On Wed, May 22, 2019 at 3:33 PM H. S. Teoh via Digitalmars-d-announce
 <digitalmars-d-announce puremagic.com> wrote:
 On Wed, May 22, 2019 at 02:18:58PM -0700, Manu via Digitalmars-d-announce
wrote:
[...]
 I couldn't possibly agree less; I think cool kids would design
 literally all computer software like a game engine, if they
 generally cared about fluid experience, perf, and battery life.
[...] Wait, wha...?! Write game-engine-like code if you care about *battery life*?? I mean... fluid experience, sure, perf, OK, but *battery life*?! Unless I've been living in the wrong universe all this time, that's gotta be the most incredible statement ever. I've yet to see a fluid, high-perf game engine *not* drain my battery like there's no tomorrow, and now you're telling me that I have to write code like a game engine in order to extend battery life?
Yes. Efficiency == battery life. Game engines tend to be the most efficient software written these days. You don't have to run applications at an unbounded rate. I mean, games will run as fast as possible maximising device resources, but assuming it's not a game, then you only execute as much as required rather than trying to produce frames at the highest rate possible. Realtime software is responding to constantly changing simulation, but non-game software tends to only respond to input-driven entropy; if entropy rate is low, then exec-to-sleeping ratio heavily biases towards sleeping. If you have a transformation to make, and you can do it in 1ms, or 100us, then you burn 10 times less energy doing it in 100us.
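As a minimal D sketch of that "sleep unless there's entropy" shape (purely illustrative; the event counter and names are my assumptions, not anyone's actual framework): the worker blocks on a condition variable and burns no CPU at all until an input event arrives, so the exec-to-sleep ratio tracks the input rate rather than a frame rate.

    import core.sync.condition : Condition;
    import core.sync.mutex : Mutex;
    import core.thread : Thread;
    import core.time : msecs;
    import std.stdio : writeln;

    void main()
    {
        auto mutex = new Mutex;
        auto cond = new Condition(mutex);
        int pendingEvents = 0;
        bool done = false;

        auto worker = new Thread({
            while (true)
            {
                synchronized (mutex)
                {
                    // Sleep here; no input means no CPU burned at all.
                    while (pendingEvents == 0 && !done)
                        cond.wait();
                    if (done)
                        return;
                    pendingEvents--;
                }
                writeln("handled one event"); // the actual (short) burst of work
            }
        });
        worker.start();

        foreach (i; 0 .. 3) // simulate three sparse input events
        {
            Thread.sleep(50.msecs);
            synchronized (mutex) { pendingEvents++; cond.notify(); }
        }
        synchronized (mutex) { done = true; cond.notify(); }
        worker.join();
    }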
[...] But isn't that just writing good code in general?
Yes, but I can't point at many industries that systemically do that.
  'cos when I think of
 game engines, I think of framerate maximization, which equals maximum
 battery drain because you're trying to do as much as possible in any
 given time interval.
And how do you do "as much as possible"? I mean, if you write some code, and then push data through the pipe until resources are at 100%... where do you go from there? ... make the pipeline more efficient. Hardware isn't delivering much improvement these days, we have had to get MUCH better at efficiency in the last few years to maintain competitive advantage. I don't know any other industry so laser focused on raising the bar on that front in a hyper-competitive way. We don't write code like we used to... we're all doing radically different shit these days.
 Moreover, I've noticed a recent trend of software trying to emulate
 game-engine-like behaviour, e.g., smooth scrolling, animations, etc..
 In the old days, GUI apps primarily only respond to input events and
 that was it -- click once, the code triggers once, does its job, and
 goes back to sleep.  These days, though, apps seem to be bent on
 animating *everything* and smoothing *everything*, so one click
 translates to umpteen 60fps animation frames / smooth-scrolling frames
 instead of only triggering once.
That's a different discussion. I don't actually endorse this. I'm a fan of instantaneous response from my productivity software... 'Instantaneous' being key, and running without delay means NOT waiting many cycles of the event pump to flow typical modern event-driven code through some complex latent machine to finally produce an output.
 All of which *increases* battery drain rather than decrease it.
I'm with you. Don't unnecessarily animate!
 And this isn't just for mobile apps; even the pervasive desktop browser
 nowadays seems bent on eating up as much CPU, memory, and disk as
 physically possible -- everybody and their neighbour's dog wants ≥60fps
 hourglass / spinner animations and smooth scrolling, eating up GBs of
 memory, soaking up 99% CPU, and cluttering the disk with caches of
 useless paraphernalia like spinner animations.
You're conflating a lot of things here... running smooth and eating GBs of memory are actually at odds with each other. If you try and do both things, then you're almost certainly firmly engaged in gratuitous systemic inefficiency. I'm entirely against that, that's my whole point! You should use as little memory as possible. I have no idea how a webpage eats as much memory as it does... that's a perfect example of the sort of terrible software engineering I'm against!
 Such is the result of trying to emulate game-engine-like behaviour.
No, there's ABSOLUTELY NOTHING in common between a webpage and a game engine. As I see it, they are at polar ends of the spectrum. Genuinely couldn't be further from each other in terms of software engineering discipline!
 And now you're recommending that everyone should write code like a game
 engine!
Yes, precisely so the thing you just described will stop.
 (Once, just out of curiosity (and no small amount of frustration), I
 went into Firefox's about:config and turned off all smooth scrolling,
 animation, etc., settings.  The web suddenly sped up by at least an
 order of magnitude, probably more. Down with 60fps GUIs, I say.
You're placing your resentment in the wrong place. My 8MHz Amiga 500 ran 60Hz GUIs without breaking a sweat... you're completely misunderstanding the actual issue here.
 Unless you're making a game, you don't *need* 60fps.
Incorrect. My computer is around 100,000 times faster than my Amiga 500. We can have fluid execution. We just need to stop writing software like fucking retards. The only industry that I know of that knows how to do that at a systemic level is gamedev.
 It's squandering resources
 for trivialities where we should be leaving those extra CPU cycles for
 actual, useful work instead, or *actually* saving battery life by not
 trying to make everything emulate a ≥60fps game engine.)
You've missed the point completely. You speak of systemic waste, I'm talking about state-of-the-art efficiency as baseline expectation and nothing less is acceptable.
May 22 2019
parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Thursday, 23 May 2019 at 01:22:20 UTC, Manu wrote:
 That's a different discussion. I don't actually endorse this. 
 I'm a fan of instantaneous response from my productivity 
 software... 'Instantaneous' being key, and running without 
 delay means NOT waiting many cycles of the event pump to flow 
 typical modern event-driven code through some complex latent 
 machine to finally produce an output.
Yes, you are of course right if the effort is spent where it matters. In my mind CygnusED (CED) on the Amiga is STILL the smoothest editor I have ever used, and it was because it used smooth hardware-assisted scrolling (Copper lists) so my eyes could regain focus very fast. I guess the phosphor on the screen also helped, because other editors that try to spin down a scroll gradually do not feel as good as CED did. *shrugs* One could certainly come up with a better UI experience by combining a good understanding of visual perception with low level optimization and good use of hardware. But that sounds like a different project to me. One would then have to start with a good theoretical understanding of human perception, how the brain works and so on. Then see if you can pick up ideas from interactive software like games. That would however lead to a new concept for user-interface design. Which would be interesting, for sure, but requires much more than coding up a UI framework.
 You should use as little memory as possible. I have no idea how 
 a webpage eats as much memory as it does... that's a perfect 
 example of the sort of terrible software engineering I'm 
 against!
In Chrome each page runs in a separate process for security reasons; that's how. AFAIK. Also, service workers are very useful, but it is probably tempting to let them grow large to get better responsiveness (from the network layer): basically a proxy replicating the web server within the browser, so that you can use the website as an offline app.
 You're placing your resentment in the wrong place.
 My 8MHz Amiga 500 ran 60Hz GUIs without breaking a sweat...
But people also used the hardware almost directly though, you could install copper-lists even when using the OS with UI (in full screen mode). In my mind the copper-list concept was always more impactful than the blitter. I'm not sure where they got the idea to expose it to ordinary applications, but it had a very real impact on the end user experience and what applications could do (e.g. drawing programs).
May 23 2019
prev sibling parent reply =?iso-8859-1?Q?Robert_M._M=FCnch?= <robert.muench saphirion.com> writes:
On 2019-05-22 17:01:39 +0000, Manu said:

 The worst case defines your application performance, and grids are 
 pretty normal.
That's true, but responsive grids are pretty unusual.
 You can make a UI run realtime ;)
I know, that's what we seek for.
 I mean, there are video games that render a complete screen full of 
 zillions of high-detail things every frame!
Show me a game that renders this with a CPU-only approach into a memory buffer, no GPU allowed. Totally different use-case. -- Robert M. Münch http://www.saphirion.com smarter | better | faster
May 22 2019
parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Thursday, 23 May 2019 at 06:07:53 UTC, Robert M. Münch wrote:
 On 2019-05-22 17:01:39 +0000, Manu said:
 I mean, there are video games that render a complete screen 
 full of zillions of high-detail things every frame!
 Show me a game that renders this with a CPU-only approach into a memory buffer, no GPU allowed. Totally different use-case.
I wrote a very flexible generic scanline prototype renderer in the 90s that rendered 1024x768 using 11 bits each for red and green and 10 for blue and hardcoded alpha blending. It provided interactive framerates on the lower end for a large number of circular objects covering the screen, but it took almost all the CPU. It even used callbacks and X-Windows with shared memory, so it was written for flexibility, not for very high performance. Today this very simple renderer would probably run at 400-4000FPS on the CPU rendering to RAM. So, it isn't difficult to write a decent-performance scanline renderer today. You just have to think a lot about the specifics of the CPU pipeline and CPU caching. That's all. A tile-based one is more work, but will easily perform way beyond any requirement. I'm not saying you should do it. It would be CPU-specific and seems like a waste of time, but the basics are really very simple. Just use a very fast bin sort for the left and right edges in the x-direction, then use a sorting algorithm that is fast for almost-sorted lists for the z-direction (to handle alpha blending). Basically brute force, no fancy data structure. Brute force can perform decently if you use algorithms that tend to be linear on average.
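For the curious, a much-simplified D sketch of that "bin per scanline, then fill" idea (the Span type and flat RGBA framebuffer are my assumptions for illustration, not the renderer described above; a real one would also depth-sort spans per scanline for alpha blending):

    import std.algorithm.sorting : sort;

    struct Span { int x0, x1; uint rgba; }  // covers [x0, x1) on one scanline

    void fillSpans(uint[] frame, int width, Span[][] bins)
    {
        foreach (y, spans; bins)             // one bin of spans per scanline
        {
            // Bins stay almost sorted from frame to frame, so a simple
            // insertion sort would exploit that; generic sort shown here.
            spans.sort!((a, b) => a.x0 < b.x0);
            foreach (s; spans)
                frame[y * width + s.x0 .. y * width + s.x1] = s.rgba; // fill run
        }
    }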
May 23 2019
parent reply =?iso-8859-1?Q?Robert_M._M=FCnch?= <robert.muench saphirion.com> writes:
On 2019-05-23 07:28:49 +0000, Ola Fosheim Grøstad said:

 On Thursday, 23 May 2019 at 06:07:53 UTC, Robert M. Münch wrote:
 On 2019-05-22 17:01:39 +0000, Manu said:
 I mean, there are video games that render a complete screen full of 
 zillions of high-detail things every frame!
 Show me a game that renders this with a CPU-only approach into a memory buffer, no GPU allowed. Totally different use-case.
I wrote a very flexible generic scanline prototype renderer in the 90s that rendered 1024x768 using 11 bits each for red and green and 10 for blue and hardcoded alpha blending. It provided interactive framerates on the lower end for a large number of circular objects covering the screen, but it took almost all the CPU.
When doing the real-time resizing in the screencast, the CPU usage is around 5% - 6% -- Robert M. Münch http://www.saphirion.com smarter | better | faster
May 23 2019
parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Thursday, 23 May 2019 at 16:36:17 UTC, Robert M. Münch wrote:
 When doing the real-time resizing in the screencast, the CPU 
 usage is around 5% - 6%
Yeah, that leaves a lot of headroom to play with. Do you think there is a market for an x86 CPU software renderer though? Or do you plan on supporting CPUs where there is no GPU available?
May 23 2019
parent reply =?iso-8859-1?Q?Robert_M._M=FCnch?= <robert.muench saphirion.com> writes:
On 2019-05-23 17:27:09 +0000, Ola Fosheim Grøstad said:

 Yeah, that leaves a lot of headroom to play with. Do you think there is 
 a market for an x86 CPU software renderer though?
Well, the main market I see for a software renderer is the embedded market and server rendering. Making money with development tools, components or frameworks is most likely only possible in the B2B sector. One needs to find a niche that companies are interested in: speed and resource efficiency is definitely one.
 Or do you plan on supporting CPUs where there is no GPU available?
Currently we don't use a GPU, it's only CPU based. I think CPU rendering has its merits and is underestimated a lot. -- Robert M. Münch http://www.saphirion.com smarter | better | faster
May 24 2019
next sibling parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Friday, 24 May 2019 at 08:42:48 UTC, Robert M. Münch wrote:
 Well, the main market I see for a software renderer is the 
 embedded market and server rendering. Making money with 
 development tools, components or frameworks is most likely only 
 possible in the B2B sector.
Indeed. Software that should be easy to port to new hardware, like point-of-sale terminals, calling systems etc. I guess server rendering means that you can upgrade the software without touching the clients, so that you have a network protocol that transfers the graphics to a simple and cheap client-display. Like, for floor information in a building.
 Or do you plan on supporting CPUs where there is no GPU available?
Currently we don't use a GPU, it's only CPU based. I think CPU rendering has its merits and is underestimated a lot.
You are probably right. What I find particularly annoying about GPUs is that the OS vendors keep changing and deprecating the APIs. Like Apple is no longer supporting OpenGL, IIRC. Sadly, GPU features provide a short path to (forced) obsolescence…
May 24 2019
parent reply =?utf-8?Q?Robert_M._M=C3=BCnch?= <robert.muench saphirion.com> writes:
On 2019-05-24 10:12:10 +0000, Ola Fosheim Grøstad said:

 I guess server rendering means that you can upgrade the software 
 without touching the clients, so that you have a network protocol that 
 transfers the graphics to a simple and cheap client-display. Like, for 
 floor information in a building.
Even much simpler use-cases make sense. Example: render 3D previews of 100,000 CAD models and keep them up to date when things change. You need some CLI tool to render them, but most likely you don't have OpenGL or a GPU on the server. Whether running an app on a server with your own front-end client instead of a browser makes sense these days, I'm not sure. However, people have tremendous CPU power on their desks which goes unused. So I'm still favoring desktop apps, and a lot of users do too. Being contrarian in this sector makes a lot of sense :-)
 You are probably right. What I find particularly annoying about GPUs is 
 that the OS vendors keep changing and deprecating the APIs. Like Apple 
 is no longer supporting OpenGL, IIRC.
Yep, way too many hassles and possibilities for things to break from outside. It can become support hell. Better to stay on your own as much as possible.
 Sadly, GPU features provide a short path to (forced) obsolescence…
In the 2D realm I don't see much gain in using a GPU over a CPU. -- Robert M. Münch http://www.saphirion.com smarter | better | faster
May 24 2019
parent reply rikki cattermole <rikki cattermole.co.nz> writes:
On 25/05/2019 5:04 AM, Robert M. Münch wrote:
 On 2019-05-24 10:12:10 +0000, Ola Fosheim Grøstad said:
 
 I guess server rendering means that you can upgrade the software 
 without touching the clients, so that you have a network protocol that 
 transfers the graphics to a simple and cheap client-display. Like, for 
 floor information in a building.
Even much simpler use-cases make sense. Example: render 3D previews of 100,000 CAD models and keep them up to date when things change. You need some CLI tool to render them, but most likely you don't have OpenGL or a GPU on the server.
Be careful with that assumption. Server motherboards made by Intel come with GPUs as standard.
May 24 2019
parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Friday, 24 May 2019 at 17:19:23 UTC, rikki cattermole wrote:
 Be careful with that assumption. Server motherboards made by 
 Intel come with GPUs as standard.
Yes, they also have CPUs with FPGAs... And NVIDIA has embedded units with crazy architectures, like this entry-level model ($99?): https://developer.nvidia.com/embedded/buy/jetson-nano-devkit The stagnation of CPU capabilities has led to some interesting moves. Anyway, having a solid CPU renderer doesn't prevent one from using a GPU as well, if the architecture is right.
May 24 2019
parent rikki cattermole <rikki cattermole.co.nz> writes:
On 25/05/2019 5:33 AM, Ola Fosheim Grøstad wrote:
 On Friday, 24 May 2019 at 17:19:23 UTC, rikki cattermole wrote:
 Be careful with that assumption. Server motherboards made by Intel 
 come with GPUs as standard.
Yes, they also have CPUs with FPGAs... And NVIDIA has embedded units with crazy architectures, like this entry-level model ($99?): https://developer.nvidia.com/embedded/buy/jetson-nano-devkit The stagnation of CPU capabilities has led to some interesting moves. Anyway, having a solid CPU renderer doesn't prevent one from using a GPU as well, if the architecture is right.
Oh no, you found something that I want now.
May 24 2019
prev sibling parent Guillaume Piolat <first.last gmail.com> writes:
On Friday, 24 May 2019 at 08:42:48 UTC, Robert M. Münch wrote:
 Currently we don't use a GPU, it's only CPU based. I think CPU 
 rendering has its merits and is underestimated a lot.
+1 One big bottleneck for a CPU renderer is pixel upload, but apart from that it's pretty rad.
May 24 2019
prev sibling next sibling parent reply Exil <Exil gmall.com> writes:
Is the source available anywhere? Would be interesting to look 
through unless this is closed source?
May 24 2019
parent =?iso-8859-1?Q?Robert_M._M=FCnch?= <robert.muench saphirion.com> writes:
On 2019-05-24 23:35:18 +0000, Exil said:

 Is the source available anywhere? Would be interesting to look through 
 unless this is closed source?
No, not yet. Way too early, and because we can't support it in any way yet. I see that there is quite some interest in the topic, but I think we should get it to some usable point before releasing. Otherwise the noise level will be too high. -- Robert M. Münch http://www.saphirion.com smarter | better | faster
May 25 2019
prev sibling next sibling parent reply Ethan <gooberman gmail.com> writes:
On Sunday, 19 May 2019 at 21:01:33 UTC, Robert M. Münch wrote:
 Hi, we are currently build up our new technology stack and for 
 this create a 2D GUI framework.
This entire thread is an embarrassment, and a perfect example of the kind of interaction that keeps professionals away from online communities such as this one. It's been little more than an echo chamber of people being wrong, congratulating each other on being wrong, encouraging people to continue being wrong and shooting down anyone speaking sense with wrong facts and wrong opinions. The amount of misinformation flying around in here would make <insert political regime of your own taste here> proud. Let's just start with being blunt straight up: Congratulations, you've announced a GUI framework that can render a grid of squares less efficiently than Microsoft Excel. So from there, I'm only going to highlight points that need to be thoroughly shot down.
 So this gives us 36 FPS which is IMO pretty good for a desktop 
 app target
Wrong. A 144Hz monitor, for example, gives you less than 7 milliseconds to provide a new frame. Break that down further. On Windows, the thread scheduler will give you 4 milliseconds before your thread is put to sleep. That's if you're a foreground process. Background processes only get 1 millisecond. So from that you can assume for a standard 60Hz monitor, your worst case is that you need to provide a new frame in 1 millisecond. I currently have 15 programs and 60 browser tabs open. On a laptop. WPF can keep up. You can't.
 But you shouldn't design a UI framework like a game engine.
Wrong. Game engines excel at laying out high-fidelity data in sync with a monitor's default refresh rate. You're insane if you think a 2D interface shouldn't be done in a similar manner. Notice Unity and Unreal implement their own WIMP framework across multiple platforms, designed it like a game engine, and can keep it responsive. And just like a UI framework, whatever the client is doing separate to the layout and rendering is *not* its responsibility.
 Write game-engine-like code if you care about *battery life*??
The core of a game engine will aim to do everything as quickly as possible and go to sleep as quickly as possible. Everyone here is assuming false equivalency between a game engine, and the game systems and massive volumes of data that just plain take time to process.
 A game engine is designed for full redraw on every frame.
Wrong. A game engine is designed to render new frames when the viewpoint is dirty. Any engine that decouples simulation frame from monitor frame won't do a full redraw every simulation frame. A game engine will often include effects that get rendered at half of the target framerate to save time. Your definition for "full redraw" is flawed and wrong.
 'cos when I think of game engines, I think of framerate 
 maximization, which equals maximum battery drain because you're 
 trying to do as much as possible in any given time interval.
Source: I've released a mobile game that lets you select battery options that basically result in 60Hz/30Hz/20Hz. You know all I did? Decoupled the renderer, ran the simulation 1/2/3 times, and rendered once. Suits burst processing, which is known to be very good for the battery. If you find a game engine that renders its UI every frame despite having no dirty element, you've found baby's first game UI.
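A bare-bones D sketch of that decoupling (my illustration only; update/render/running are placeholders, not code from the game mentioned above): the fixed-rate simulation catches up in however many steps accumulated (1, 2 or 3 at those battery settings), then exactly one frame is drawn.

    import core.time : Duration, MonoTime, msecs;

    enum simStep = 16.msecs; // ~60Hz simulation step

    void runLoop(void delegate() update, void delegate() render,
                 bool delegate() running)
    {
        auto last = MonoTime.currTime;
        Duration acc; // unsimulated time accumulated so far
        while (running())
        {
            auto now = MonoTime.currTime;
            acc += now - last;
            last = now;
            while (acc >= simStep) // run the simulation 1..N times
            {
                update();
                acc -= simStep;
            }
            render(); // one draw per displayed frame
        }
    }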
 for good practice of stability, threading and error reporting, 
 people should look at high-availability, long-lived server 
 software. A single memory leak will be a problem there, a 
 single deadlock.
Many games *already have* this requirement. There's plenty of knowledge within the industry of reducing server costs with optimisations.
 For instance, there is no spatial data structure that is 
 inherently better or more efficient than all other spatial 
 data structures.
Wrong. Three- and four-dimensional vectors. We have hardware registers to take advantage of them. Represent your object's transformation with an object comprising a translation, a quaternion rotation, and if you're feeling nice to your users a scale vector. WPF does exactly this. In a round-about way. But it's there.
 Well, what I meant by "cutting corners" is that games reach 
 efficiency by narrowing down what they allow you to do.
Really. Do tell me more. Actually, don't, because whatever you say is going to be wrong and I'm not going to reply to it anyway. Hint: We provide more flexibility than your out-of-the-box WPF/GTK/etc for whatever systems we provide.
 Browsers are actually doing quite well with simple 2D graphics 
 today.
Browsers have been rendering that on GPU for years. Which starts getting us in to this point.
 I think CPU rendering has its merits and is underestimated a 
 lot.
 In the 2D realm I don't see much gain in using a GPU over a CPU.
So. On a 4K or higher desktop (Apple ship 5K monitors). Let's say you need to redraw every one of those 3840x2160 pixels at 60Hz. Let's just assume that by some miracle you've managed to get a pixel filled down to 20 cycles. But that's still 8,294,400 pixels. That's 16.6MHz for one frame. Almost a full GHz to keep it responsive at 60 frames per second. 2.4GHz for a 144Hz display. So you're going to get one thread doing all that? Maybe vectorise it? And hope there's plenty of blank space so you can run the same algorithm on four contiguous pixels at a time. Hmmm. Oh, I know, multithread it! Parallel for each! Oh, well, now there's an L2 cache to worry about, we'll have to work at different chunks at different times and hope each chunk is roughly equal in cost since any attempt to redistribute the load into the same cache area another thread is working on will result in constant cache flushes. OOOOORRRRRRRRRR. Hey. Here's this hardware that executes tiny programs simultaneously. How many shader units does your hardware have? That many tiny programs. And its cache is set up to accept the results of those programs without massive flush penalties. And they're natively SIMD and can handle, say, multi-component RGB colours without breaking a sweat. You don't even have to worry about complicated sorting logic and pixel overwrites, the Z-buffer can handle it if you assign the depth of your UI element to the Z value. And if you *really* want to avoid driver issues with the pixel and vertex pipeline - just write compute shaders for everything for hardware-independent results. Oh, hey, wait a minute, Nick's dcompute could be exactly what you want if you're only doing this to show a UI framework can be written in D. Problem solved by doing what Manu suggested and *WORKING WITH COMMUNITY MEMBERS WHO ALREADY INTIMATELY UNDERSTAND THE PROBLEMS INVOLVED* --- Right. I'm done. This thread reeks of a "Year of Linux desktop" mentality and I will also likely never read it again just for my sanity. I expect better from this community if it actually wants to see D used and not have the forums turn into Stack Overflow Lite.
May 25 2019
next sibling parent Ethan <gooberman gmail.com> writes:
On Saturday, 25 May 2019 at 23:23:31 UTC, Ethan wrote:
 So. On a 4K or higher desktop (Apple ship 5K monitors). Let's 
 say you need to redraw every one of those 3840x2160 pixels at 
 60Hz. Let's just assume that by some miracle you've managed to 
 get a pixel filled down to 20 cycles. But that's still 
 8,294,400 pixels. That's 16.6MHz for one frame. Almost a full 
 GHz to keep it responsive at 60 frames per second. 2.4GHz for a 
 144Hz display.
I are math good. 8,294,400 * 20 cycles is 165.9MHz. Times 60 frames per second is 9.95GHz. CPU rendering is not even remotely the future.
May 25 2019
prev sibling next sibling parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Saturday, 25 May 2019 at 23:23:31 UTC, Ethan wrote:
 But you shouldn't design a UI framework like a game engine.
Wrong. Game engines excel at laying out high-fidelity data in sync with a monitor's default refresh rate.
You are confusing a rendering engine with a UI API.
 A game engine is designed for full redraw on every frame.
Wrong. A game engine is designed to render new frames when the viewpoint is dirty. Any engine that decouples simulation frame from monitor frame won't do a full redraw every simulation frame. A game engine will often include effects that get rendered at half of the target framerate to save time.
You still do a full redraw of the framebuffer. Full frame. Meaning not just tiny clip rectangles like on X Windows.
 For instance, there is no spatial data structure that is 
 inherently better or more efficient than all other spatial 
 data structures.
Wrong. Three- and four-dimensional vectors. We have hardware registers to take advantage of them. Represent your object's transformation with an object comprising a translation, a quaternion rotation, and if you're feeling nice to your users a scale vector.
Those are not spatial data structures. (Octrees, BSP trees, etc. are spatial data structures.)
 Really. Do tell me more. Actually, don't, because whatever you 
 say is going to be wrong and I'm not going to reply to it anyway
Good. Drink less, sleep more.
May 25 2019
prev sibling next sibling parent reply NaN <divide by.zero> writes:
On Saturday, 25 May 2019 at 23:23:31 UTC, Ethan wrote:
 On Sunday, 19 May 2019 at 21:01:33 UTC, Robert M. Münch wrote:
 Browsers are actually doing quite well with simple 2D graphics 
 today.
Browsers have been rendering that on GPU for years.
Just because (for example) Chrome supports GPU rendering doesn't mean every device it runs on does too. For example... Open an SVG in your browser, take a screenshot and zoom in on an almost vertical / horizontal edge, e.g. https://upload.wikimedia.org/wikipedia/commons/f/fd/Ghostscript_Tiger.svg Look for an almost vertical or almost horizontal line and check whether the antialiasing is stepped or smooth. GPU typically maxes out at 16x for path rendering; on the CPU you generally get 256x analytical. So with GPU you'll see more granularity in the antialiasing at the edges (runs of a few pixels, then a larger change), while with CPU you'll see each pixel change a small bit along the edge. Chrome is still doing path rendering on the CPU for me. (I did make sure that the "use hardware acceleration when available" flag was set in the advanced settings.)
May 26 2019
next sibling parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Sunday, 26 May 2019 at 11:09:52 UTC, NaN wrote:
 Chrome is still doing path rendering on the CPU for me. (I did 
 make sure that the "use hardware acceleration when available" 
 flag was set in the advanced settings.)
*nods* Switching hardware acceleration on/off has very little impact on my machine, even for things like slide shows. However, I suspect that Chrome gets basic hardware acceleration through the OS windowing-system whether the setting is on or off.
May 26 2019
prev sibling parent reply Manu <turkeyman gmail.com> writes:
On Sun, May 26, 2019 at 4:10 AM NaN via Digitalmars-d-announce
<digitalmars-d-announce puremagic.com> wrote:
 On Saturday, 25 May 2019 at 23:23:31 UTC, Ethan wrote:
 On Sunday, 19 May 2019 at 21:01:33 UTC, Robert M. Münch wrote:
 Browsers are actually doing quite well with simple 2D graphics
 today.
Browsers have been rendering that on GPU for years.
Just because (for example) Chrome supports GPU rendering doesn't mean every device it runs on does too. For example... Open an SVG in your browser, take a screenshot and zoom in on an almost vertical / horizontal edge, e.g. https://upload.wikimedia.org/wikipedia/commons/f/fd/Ghostscript_Tiger.svg Look for an almost vertical or almost horizontal line and check whether the antialiasing is stepped or smooth. GPU typically maxes out at 16x for path rendering; on the CPU you generally get 256x analytical.
What? ... this thread is bizarre. Why would a high quality SVG renderer decide to limit to 16x AA? Are you suggesting that they use hardware super-sampling to render the SVG? Why would you use SSAA to render an SVG that way? I can't speak for their implementation, which you can only possibly speculate upon if you read the source code... but I would; for each pixel, calculate the distance from the line, and use that as the falloff value relative to the line weighting property. How is the web browser's SVG renderer even relevant? I have absolutely no idea how this 'example' (or almost anything in this thread) could be tied to the point I made way back at the start before it went way off the rails. Just stop, it's killing me.
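For what it's worth, the distance-falloff idea sketched above fits in a few lines of D (purely illustrative, not any browser's code, and assuming a non-degenerate segment): coverage comes from the pixel centre's distance to the stroked segment, relative to the line weight.

    import std.algorithm.comparison : clamp;
    import std.math : sqrt;

    // Coverage in [0, 1] of a pixel centred at (px, py) by a stroke of
    // width 2*halfWidth along the segment (ax, ay)-(bx, by).
    float coverage(float px, float py,
                   float ax, float ay, float bx, float by,
                   float halfWidth)
    {
        // Project the pixel centre onto the segment, clamped to its ends.
        float vx = bx - ax, vy = by - ay;
        float t = ((px - ax) * vx + (py - ay) * vy) / (vx * vx + vy * vy);
        t = clamp(t, 0.0f, 1.0f);
        float dx = px - (ax + t * vx), dy = py - (ay + t * vy);
        float dist = sqrt(dx * dx + dy * dy);

        // Linear one-pixel falloff around the stroke boundary.
        return clamp(halfWidth + 0.5f - dist, 0.0f, 1.0f);
    }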
May 26 2019
next sibling parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Sunday, 26 May 2019 at 16:39:53 UTC, Manu wrote:
 How is the web browser's SVG renderer even relevant? I have 
 absolutely no idea how this 'example' (or almost anything in 
 this thread) could be tied to the point I made way back at the 
 start before it went way off the rails. Just stop, it's killing 
 me.
I don't think the discussion is about your idea that software engineering should be done like it is done in the games industry. Path rendering on the GPU is a topic that has been covered relatively frequently in papers over the past decade, so… there is more than one approach. If the SVG renderer in the browser is relevant? Depends. SVG is animated through CSS, so the browser must be able to redraw on every frame. For some interfaces it certainly would be relevant, but I don't think Robert is aiming for that type of interface. Anyway, for some interfaces, like VST plugins, you don't need very fancy options. Just blitting and a bit of realtime line-drawing. But portability is desired, so JUCE appears to be popular. If you, Robert, create something simpler than JUCE, but with the same portability, then it could be very useful.
May 26 2019
parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Sunday, 26 May 2019 at 16:56:39 UTC, Ola Fosheim Grøstad wrote:
 If the SVG renderer in the browser is relevant? Depends. SVG is 
 animated through CSS, so the browser must be able to redraw on 
 every frame. For some interfaces it certainly would be 
 relevant, but I don't think Robert is aiming for that type of 
 interface.
Anyway, Skia is available under a BSD license here: https://skia.org/ I don't find anything on Ganesh or any other GPU backend, but maybe someone else has found something? One could probably do worse than using the software renderer in Skia… but I don't know how difficult it is to hook it up.
May 26 2019
parent reply NaN <divide by.zero> writes:
On Sunday, 26 May 2019 at 17:36:20 UTC, Ola Fosheim Grøstad wrote:
 On Sunday, 26 May 2019 at 16:56:39 UTC, Ola Fosheim Grøstad 
 wrote:
 If the SVG renderer in the browser is relevant? Depends. SVG 
 is animated through CSS, so the browser must be able to redraw 
 on every frame. For some interfaces it certainly would be 
 relevant, but I don't think Robert is aiming for that type of 
 interface.
Anyway, Skia is available under a BSD license here: https://skia.org/ I don't find anything on Ganesh or any other GPU backend, but maybe someone else has found something?
AFAIK Ganesh sucked and it was dropped. They use nv path rendering now. https://developer.nvidia.com/nv-path-rendering
May 26 2019
parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Sunday, 26 May 2019 at 17:42:13 UTC, NaN wrote:
 AFAIK Ganesh sucked and it was dropped. They use nv path 
 rendering now.

 https://developer.nvidia.com/nv-path-rendering
Ah, do you know if this is in Chromium as well, or is it something that is closed off to Chrome? I also noticed that the author of the OpenVG renderer AmanithVG has a high quality software renderer in addition to the GPU renderer.
May 26 2019
parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Sunday, 26 May 2019 at 18:37:49 UTC, Ola Fosheim Grøstad wrote:
 On Sunday, 26 May 2019 at 17:42:13 UTC, NaN wrote:
 AFAIK Ganesh sucked and it was dropped. They use nv path 
 rendering now.

 https://developer.nvidia.com/nv-path-rendering
Ah, do you know if this is in Chromium as well, or is it something that is closed off to Chrome?
Never mind; based on what is going on in the Skia repo, they seem to be implementing Metal support for Skia. So I presume that means that GPU support is built into Skia. *shrugs* Fairly big repo… but interesting to look at.
May 26 2019
prev sibling parent reply NaN <divide by.zero> writes:
On Sunday, 26 May 2019 at 16:39:53 UTC, Manu wrote:
 On Sun, May 26, 2019 at 4:10 AM NaN via Digitalmars-d-announce 
 <digitalmars-d-announce puremagic.com> wrote:

 What? ... this thread is bizarre.

 Why would a high quality SVG renderer decide to limit to 16x 
 AA? Are
 you suggesting that they use hardware super-sampling to render 
 the
 SVG?
They do both super-sampling and multi-sampling. https://developer.nvidia.com/nv-path-rendering
 Why would you use SSAA to render an SVG that way?
 I can't speak for their implementation, which you can only 
 possibly
 speculate upon if you read the source code... but I would; for 
 each
 pixel, calculate the distance from the line, and use that as the
 falloff value relative to the line weighting property.
Because "path" in vector graphics terms is not just a line with thickness, it's like a glyph: it has inside and outside areas defined by the winding rule, it could be self-intersecting, etc. Working out how far a pixel is from a given line doesn't tell you whether you should fill the pixel or not, or by how much. Whether a pixel should be filled depends on everything that has happened either to the left or the right, depending on which way you're processing. It's not GPU friendly apparently. You could decompose the path into triangles, which would be more GPU friendly, but that's actually quite an involved problem. To put it in perspective, decomposing a glyph into triangles so the GPU can render it is probably going to take a lot longer than just rendering it on the CPU.
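A tiny D illustration of that global dependency (my sketch, not any production renderer): the nonzero-winding inside test has to sum crossings from *every* edge of the path, which is exactly why one line's distance alone can't decide a fill.

    struct Pt { float x, y; }

    // Winding number of point p w.r.t. a closed polygon (implicit
    // last->first edge); nonzero means "inside" under the nonzero rule.
    int windingNumber(Pt p, Pt[] poly)
    {
        int w = 0;
        foreach (i; 0 .. poly.length)
        {
            Pt a = poly[i], b = poly[(i + 1) % poly.length];
            if (a.y <= p.y && b.y > p.y &&        // upward crossing...
                (b.x - a.x) * (p.y - a.y) > (p.x - a.x) * (b.y - a.y))
                w++;                              // ...with p left of the edge
            else if (b.y <= p.y && a.y > p.y &&   // downward crossing...
                (b.x - a.x) * (p.y - a.y) < (p.x - a.x) * (b.y - a.y))
                w--;                              // ...with p right of the edge
        }
        return w;
    }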
 How is the web browser's SVG renderer even relevant? I have 
 absolutely no idea how this 'example' (or almost anything in 
 this thread) could be tied to the point I made way back at the 
 start before it went way off the rails. Just stop, it's killing 
 me.
Somebody said browsers have been doing 2D on the GPU for years; I just pointed out that it was more complicated than that. I wasn't replying to anything you said and don't really know why what I've said has got your hackles up.
May 26 2019
parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Sunday, 26 May 2019 at 17:41:01 UTC, NaN wrote:
 On Sunday, 26 May 2019 at 16:39:53 UTC, Manu wrote:
 On Sun, May 26, 2019 at 4:10 AM NaN via Digitalmars-d-announce 
 <digitalmars-d-announce puremagic.com> wrote:

 What? ... this thread is bizarre.

 Why would a high quality SVG renderer decide to limit to 16x 
 AA? Are
 you suggesting that they use hardware super-sampling to render 
 the
 SVG?
They do both super-sampling and multi-sampling. https://developer.nvidia.com/nv-path-rendering
Btw, Skia's anti-aliasing slides and docs are here: https://skia.org/dev/design/aaa
May 26 2019
prev sibling next sibling parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Saturday, 25 May 2019 at 23:23:31 UTC, Ethan wrote:
 Oh, hey, wait a minute, Nick's dcompute could be exactly what 
 you're want if you're only doing this to show a UI framework
FWIW, OpenCL is deprecated on OS X. You should use Metal for everything. GPU APIs are not very future-proof.
May 26 2019
prev sibling next sibling parent =?iso-8859-1?Q?Robert_M._M=FCnch?= <robert.muench saphirion.com> writes:
On 2019-05-25 23:23:31 +0000, Ethan said:

 Right. I'm done. This thread reeks of a "Year of Linux desktop" 
 mentality and I will also likely never read it again just for my sanity.
That's your best statement so far. Great move. -- Robert M. Münch http://www.saphirion.com smarter | better | faster
May 26 2019
prev sibling parent reply "Nick Sabalausky (Abscissa)" <SeeWebsiteToContactMe semitwist.com> writes:
On 5/25/19 7:23 PM, Ethan wrote:
 
 [...]
+1 (trillion) In my entire software career, I have yet to ever come across even one programmer without direct game engine experience who actually has anything intelligent (or otherwise just simply NOT flat-out wrong) to say about game programming. People hear the word "game", associate it with "insignificant" and promptly shut their brains off. (Much like how average Joes hear the word "computer", associate it with "difficult", and promptly shut their brains off.)
May 26 2019
parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Monday, 27 May 2019 at 00:33:45 UTC, Nick Sabalausky 
(Abscissa) wrote:
 flat-out wrong) to say about game programming. People hear the 
 word "game", associate it with "insignificant" and promptly 
 shut their brains off.
Not insignificant, but also not necessarily relevant for the project in this thread. There is nothing wrong with Robert's approach from a software engineering and informatics perspective. Why do you guys insist on him doing it your way? Anyway, if you were to pick up a starting point for a generic GUI engine then you would be better off with Skia than with Unity, that is pretty certain. And it is not an argument that is difficult to make.
May 26 2019
next sibling parent reply Manu <turkeyman gmail.com> writes:
On Sun, May 26, 2019 at 6:35 PM Ola Fosheim Grøstad via
Digitalmars-d-announce <digitalmars-d-announce puremagic.com> wrote:
 On Monday, 27 May 2019 at 00:33:45 UTC, Nick Sabalausky
 (Abscissa) wrote:
 flat-out wrong) to say about game programming. People hear the
 word "game", associate it with "insignificant" and promptly
 shut their brains off.
Not insignificant, but also not necessarily relevant for the project in this thread. There is nothing wrong with Robert's approach from a software engineering and informatics perspective. Why do you guys insist on him doing it your way?
I don't insist, I was just inviting him to the chat channel where a similar effort is already ongoing, and where there are perf experts who can help.
 Anyway, if you were to pick up a starting point for a generic GUI
 engine then you would be better off with Skia than with Unity,
 that is pretty certain. And it is not an argument that is
 difficult to make.
Unity is perhaps the worst possible comparison point. That's not an example of "designing computer software like a game engine", it's more an example of "designing a game engine like a GUI application", which is completely backwards. Optimising Unity games is difficult and tiresome, and doesn't really have much relation to high-end games. There's virtually no high-end games written in Unity, it's made for small hobby or indy stuff. They favour accessibility over efficiency at virtually all costs. Their new DOTS/ECS work is the exception, though; that's the start of something sensible in Unity.
May 26 2019
next sibling parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Monday, 27 May 2019 at 01:52:05 UTC, Manu wrote:
 I don't insist, I was just inviting him to the chat channel 
 where a similar effort is already ongoing, and where there are 
 perf experts who can help.
Yes, sure, is always a good thing to hash out ideas with others who have an interest in the same field. Not to change your own ideas, but to see more options. Absolutely!
 Unity is perhaps the worst possible comparison point. That's 
 not an
 example of "designing computer software like a game engine", 
 it's more
 an example of "designing a game engine like a GUI application", 
 which
 is completely backwards. Optimising Unity games is difficult and
 tiresome, and doesn't really have much relation to high-end 
 games.
It does look a bit bloated, but I haven't tried it. Just skimmed over the docs. Anyway, I think the fact that people buy JUCE is a sure sign that you don't have to provide a perfect UI framework. You just have to provide a "complete" solution that is better than the alternatives for some specific domains. For JUCE that appears to be VST audio plugins, so JUCE also provides some DSP algorithms. Unity and Godot are "complete" solutions for small indy games. I guess one major decision is to decide whether one provides a limited library or a solution for a domain. Seems to me that it is easier to gain traction by providing a solution.
May 26 2019
prev sibling parent "Nick Sabalausky (Abscissa)" <SeeWebsiteToContactMe semitwist.com> writes:
On 5/26/19 9:52 PM, Manu wrote:
 
 Unity is perhaps the worst possible comparison point. That's not an
 example of "designing computer software like a game engine", it's more
 an example of "designing a game engine like a GUI application", which
 is completely backwards. Optimising Unity games is difficult and
 tiresome, and doesn't really have much relation to high-end games.
 There's virtually no high-end games written in Unity, it's made for
 small hobby or indy stuff. They favour accessibility over efficiency
 at virtually all costs.
While I agree completely (based on direct experience), I also feel compelled to point out, just as an aside: I've seen some games that make Unity look WAY worse than it really is. Ex: The PS4 versions of "The Golf Club 2" and "Wheel of Fortune". As much as I admit I enjoy *playing* those games, well...Unity may have its technical drawbacks, but cooome ooon!!, even at that, it's still WAY more capable than the steaming joke those games make Unity appear to be. I've seen indie games on PS3 that did a much better job with Unity.

 Their new DOTS/ECS work is the exception, though; that's the start of something sensible in Unity.
I'm chomping at the bit for that, particularly "Project Tiny". I'm converting some old flash stuff to unity/webgl and that framework would be fantastic for it. Only problem is I'm currently relying on some things (like procedural geometry, though I suppose I could potentially change that...) that AFAICT aren't yet supported by Project Tiny. But, maybe if I'm lucky (or realllyyy slloooowwww....) it'll get there by the time I'm done with the basic flash->unity conversion and ready to re-architect it some more.
May 26 2019
prev sibling parent reply "Nick Sabalausky (Abscissa)" <SeeWebsiteToContactMe semitwist.com> writes:
On 5/26/19 9:32 PM, Ola Fosheim Grøstad wrote:

 Why do you guys insist on him doing it your way?
I never said that. And just to further clarify, I also never said he 
should USE a game engine for this.

I was only responding to the deluge of misinformation about 
game-engine-like approaches, all stemming from Manu's suggestion that 
Robert could get this going an order of magnitude faster without too 
terribly much trouble. Luckily, Ethan explained my stance better than I 
was able to.
May 26 2019
parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Monday, 27 May 2019 at 03:35:48 UTC, Nick Sabalausky 
(Abscissa) wrote:
 suggestion that Robert could get this going an order of 
 magnitude faster without too terribly much trouble. Luckily, 
 Ethan explained my stance better than I was able to.
I think you guys overestimate the importance of performance at this early stage. The hardest problem is to create a good usability experience and also provide an easy to use API for the programmer.
May 26 2019
next sibling parent reply "Nick Sabalausky (Abscissa)" <SeeWebsiteToContactMe semitwist.com> writes:
On 5/26/19 11:46 PM, Ola Fosheim Grøstad wrote:
 On Monday, 27 May 2019 at 03:35:48 UTC, Nick Sabalausky (Abscissa) wrote:
 suggestion that Robert could get this going an order of magnitude 
 faster without too terribly much trouble. Luckily, Ethan explained my 
 stance better than I was able to.
I think you guys overestimate the importance of performance at this early stage. The hardest problem is to create a good usability experience and also provide an easy to use API for the programmer.
Again, I don't think anyone actually said that it absolutely needs to be done *at this early stage*. I know I certainly didn't. Besides, from what Robert described, it sounds like he already has it decoupled and modular enough that performance *can* likely be improved later (probably by an order of magnitude) without too much disruption to its core design. So, on that, believe it or not, it sounds like we already agree. ;) And I'll point out *again*, the only points I was trying to make here were to dispel the misunderstandings, misinformation, and frankly knee-jerk reactions towards game engines and, more importantly, game-engine-like approaches. But please understand (and I strongly suspect this also speaks to the reason for Ethan's arguably abrasive tone): It gets REALLY, *REALLY* tiring when you spend the majority of your life studying a particular discipline (videogame code) and, as far as you've EVER been able to tell, pretty much the ENTIRE so-called-professional community outside of that specific discipline has absolutely 100% verifiably WRONG ideas about your field, and then they go and defend those falsehoods and prejudices with more misinformation and dismissal. And Robert: FWIW, I *am* definitely curious to see where this project goes. Also: While it *looks* in the video like a simple grid being resized, you've commented that under-the-hood it's really more of a flexbox-like design. This suggests that the computations you're doing are (or will be) capable of far more flexibility than what is apparent in the video. I'm curious what sorts of CSS flex-like features are currently being accommodated for in the computations, and are projected in the (hopefully?) near future?
May 26 2019
next sibling parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Monday, 27 May 2019 at 04:46:42 UTC, Nick Sabalausky 
(Abscissa) wrote:
 without too much disruption to its core design. So, on that, 
 believe it or not, it sounds like we already agree. ;)
Alright! :-)
 And I'll point out *again*, the only points I was trying to 
 make here were to dispel the misunderstandings, misinformation, 
 and frankly knee-jerk reactions towards game engines and, more 
 importantly, game-engine-like approaches.
Well, I don't think I knee-jerk. Sitting on my bookshelf: Graphics Gems I-V, Computer Graphics by Hughes et al., Advanced Animation and Rendering Techniques by Watt & Watt, Real-Time Rendering by Akenine-Möller et al., a bunch of MMO design books, etc.
May 26 2019
prev sibling parent =?iso-8859-1?Q?Robert_M._M=FCnch?= <robert.muench saphirion.com> writes:
On 2019-05-27 04:46:42 +0000, Nick Sabalausky (Abscissa) said:

 Besides, from what Robert described, it sounds like he already has it 
 decoupled and modular enough that performance *can* likely be improved 
 later (probably by an order of magnitude) without too much disruption 
 to its core design. So, on that, believe it or not, it sounds like we 
 already agree. ;)
That's the case. The 2D layer could be replaced. It's not yet perfectly isolated and minified, because we are still learning & experimenting to see how things fit together. Refactoring for isolation comes after this.
 And  Robert: FWIW, I *am* definitely curious to see where this project goes.
We too :-)
 Also: While it *looks* in the video like a simple grid being resized, 
 you've commented that under-the-hood it's really more of a flexbox-like 
 design.
Correct. The grid is structured like this: root -> 1..X columns -> 1..Y cells per column, and the only property given is to use the available vertical and horizontal space evenly.
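As a toy D illustration of that "share the space evenly" rule (the Node type and recursion are my assumptions for the sketch, not the framework's actual API):

    struct Node { int x, y, w, h; Node*[] children; }

    // The root splits its width across columns; each column splits its
    // height across cells, mirroring root -> columns -> cells.
    void layoutEvenly(Node* node, bool horizontal)
    {
        auto n = cast(int) node.children.length;
        if (n == 0) return;
        int share = (horizontal ? node.w : node.h) / n;
        foreach (i, child; node.children)
        {
            child.x = node.x + (horizontal ? cast(int) i * share : 0);
            child.y = node.y + (horizontal ? 0 : cast(int) i * share);
            child.w = horizontal ? share : node.w;
            child.h = horizontal ? node.h : share;
            layoutEvenly(child, !horizontal); // alternate the split axis
        }
    }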
 This suggests that the computations you're doing are (or will be) 
 capable of far more flexibility than what is apparent in the video.
Yes, the idea is that you get a responsive app GUI which resizes in a smart way that fits your app layout. And you have control over this. Getting browsers to do what you want can be pretty tedious. We want to avoid that.
 I'm curious what sorts of CSS flex-like features are currently being 
 accommodated for in the computations, and are projected in the 
 (hopefully?) near future?
The stuff that one really needs, so no frills. We want to create a node structure with layout rules/hints per node and as a result you get a perfect responsive layout for your app. -- Robert M. Münch http://www.saphirion.com smarter | better | faster
May 27 2019
prev sibling parent reply Manu <turkeyman gmail.com> writes:
On Sun, May 26, 2019 at 8:50 PM Ola Fosheim Grøstad via
Digitalmars-d-announce <digitalmars-d-announce puremagic.com> wrote:
 On Monday, 27 May 2019 at 03:35:48 UTC, Nick Sabalausky
 (Abscissa) wrote:
 suggestion that Robert could get this going an order of
 magnitude faster without too terribly much trouble. Luckily,
 Ethan explained my stance better than I was able to.
I think you guys overestimate the importance of performance at this early stage.
Performance is a symptom of architecture, and architecture *is* the early stage.
 The hardest problem is to create a good usability experience and
 also provide an easy to use API for the programmer.
They're somewhat parallel problems, although the architecture will inform the API design substantially. If you don't understand your architecture up front, then you'll likely just write a typical ordinary thing, and then it doesn't matter what the API looks like; someone will always feel compelled to re-write a mediocre library. I think it's possible to check both boxes, but it begins with architectural concerns. That doesn't work as an afterthought... (or you get Unity, or [insert library that you're not satisfied with])
May 26 2019
parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Monday, 27 May 2019 at 05:01:36 UTC, Manu wrote:
 Performance is a symptom of architecture, and architecture *is* 
 the early stage.
I expected that answer, but the renderer itself can just be a placeholder. So yes, you need to think about where accelerating data structures/processes fit in. That is clear. But you don't need to have them implemented.
May 26 2019
parent reply Manu <turkeyman gmail.com> writes:
On Sun, May 26, 2019 at 10:25 PM Ola Fosheim Grøstad via
Digitalmars-d-announce <digitalmars-d-announce puremagic.com> wrote:
 On Monday, 27 May 2019 at 05:01:36 UTC, Manu wrote:
 Performance is a symptom of architecture, and architecture *is*
 the early stage.
I expected that answer, but the renderer itself can just be a placeholder.
Actually, I'm not really interested in rendering much. From the original posts, the rendering time is most uninteresting 'cos it's the end of the pipeline; the time that I was commenting on at the start is the non-rendering time, which was substantial.
 So yes, you need to think about where accelerating
 data structures/processes fit in. That is clear. But you don't
 need to have them implemented.
How do the API's thread-safety mechanisms work? How does it scale to my 64-core PC? How does it schedule the work? etc...
May 26 2019
next sibling parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Monday, 27 May 2019 at 05:31:29 UTC, Manu wrote:
 How do the API's thread-safety mechanisms work? How does it 
 scale to my 64-core PC? How does it schedule the work? etc...
Ah yes, if you don't run the GUI on a single thread then you have a lot to take into account.
May 27 2019
parent reply Manu <turkeyman gmail.com> writes:
On Mon, May 27, 2019 at 1:05 AM Ola Fosheim Grøstad via
Digitalmars-d-announce <digitalmars-d-announce puremagic.com> wrote:
 On Monday, 27 May 2019 at 05:31:29 UTC, Manu wrote:
 How do the API's thread-safety mechanisms work? How does it
 scale to my 64-core PC? How does it schedule the work? etc...
 Ah yes, if you don't run the GUI on a single thread then you have a lot to take into account.
Computers haven't had only one thread for almost 20 years. Even mobile phones have 8 cores! This leads me back to my original proposition.
May 27 2019
parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Monday, 27 May 2019 at 20:14:26 UTC, Manu wrote:
 Computers haven't had only one thread for almost 20 years. Even 
 mobile
 phones have 8 cores!
 This leads me back to my original proposition.
If Robert is aiming for embedded and server rendering then he probably wants a simple structure with limited multi-threading. *shrug*
May 27 2019
next sibling parent reply Manu <turkeyman gmail.com> writes:
On Mon, May 27, 2019 at 2:00 PM Ola Fosheim Grøstad via
Digitalmars-d-announce <digitalmars-d-announce puremagic.com> wrote:
 On Monday, 27 May 2019 at 20:14:26 UTC, Manu wrote:
 Computers haven't had only one thread for almost 20 years. Even
 mobile
 phones have 8 cores!
 This leads me back to my original proposition.
 If Robert is aiming for embedded and server rendering then he probably wants a simple structure with limited multi-threading.
Huh? Servers take loads-of-cores as far as you possibly can! Zen2 parts announced the other day, they'll give our servers something like 256 threads! Even embedded parts have many cores; look at every mobile processor.

But here's the best part; if you design your software to run well on computers... it does! Multi-core focused software tends to perform better on single-core setups than software that was written for single-core in my experience.

My most surprising example was when we rebooted our engine in 2005 for XBox360 and PS3 because we needed to fill 6-8 cores with work and our PS2 era architecture did not do that effectively. At the time, we worried about how the super-scalar choices we were making would affect Gamecube which still had just one core. It was a minor platform so we thought we'd just wear the loss to minimise tech baggage... boy were we wrong! Right out of the gate, our scalability-focused architecture ran better on the single-core machines than the previous highly mature code that had received years of optimisation. It looked like there were more moving parts in the architecture, but it still ran meaningfully faster. The key reason was proper partitioning of work.

If you write a single-threaded app, you are almost 100% guaranteed to blatantly disregard software engineering in favour of a laser focus on your API and user experience, and you will write bad software as a result. Every time.
May 27 2019
parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Monday, 27 May 2019 at 21:21:35 UTC, Manu wrote:
 Huh? Servers take loads-of-cores as far as you possibly can!
Yes, but you might want to limit a single client to a process and limit the thread count, for isolation and simple load balancing. But I am not sure what the use scenario is...
 Even embedded parts have many cores; look at every mobile 
 processor.
Usually limited. Correctness is more difficult to prove with high levels of parallelism... And you can use FPGA...
 Multi-core focused software tends to perform better on 
 single-core
 setups than software that was written for single-core in my
 experience.
That is odd. Parallel algorithms usually come with overhead.

 At the time, we worried about how the super-scalar choices we were 
 making would affect Gamecube which still had just one core.

You meant parallel? (Super-scalar just means that a single core has multiple execution units, so that a single core can schedule multiple instructions at the same time.)
 If you write a
 single-threaded app, you are almost 100% guaranteed to blatantly
 disregard software engineering in favour of a laser focus on 
 your API
 and user experience, and you will write bad software as a 
 result.
Well, if you need more than a single thread to update a GUI (sans rendering) then you've picked the wrong layout strategy, in my opinion. I don't want the GUI to suck up a lot of CPU time and pollute the caches.

And yes, I think the layout engine API as well as styling are very important to do well, even if it incurs overhead. I'd rather have simple layout if that is the bottleneck... Layout engine design and styling engine design are really the biggest challenge. You cannot design the architecture before those two are known.

Also, embedded devices have fixed screen dimensions... No need for real-time resizing...

What is missing is a good detailed description of primary use scenarios. Without that no rational design decisions can be made. That is software engineering 101. Without it we will just talk past each other making different assumptions.

What is certain is that the long-page browser layout engine isn't very well suited for fixed dimensions...
May 27 2019
parent reply dayllenger <dayllenger protonmail.com> writes:
On Tuesday, 28 May 2019 at 05:52:23 UTC, Ola Fosheim Grøstad 
wrote:
 Also, embedded devices have fixed screen dimensions... No need 
 for real-time resizing...
Every element that can include other elements and can be resized behaves *the same* as the resizable main window. Examples: floating window, docked window, SplitView, table's column. You can change the element dimensions programmatically - directly or via styling, maybe with transition effects. They can be dependent on each other or on the parent/sibling dimensions. In all these cases, the computations the layout engine performs are the same.
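[In code terms the point is that resizing goes through one entry regardless of the container; a hedged sketch, not any particular library's API:]

class Element {
    Element[] children;
    float width = 0, height = 0;

    // identical code path whether `this` is the main window, a docked
    // window, a SplitView pane or a table column
    void resize(float w, float h) {
        width  = w;
        height = h;
        foreach (c; children)
            c.resize(w / children.length, h); // trivial equal-split policy
    }
}

void main() {
    auto pane = new Element;
    pane.children = [new Element, new Element];
    pane.resize(800, 600); // same call a top-level window resize would make
}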
 What is certain is that the long-page browser layout engine 
 isn't very well suited for fixed dimensions...
Why can't a fixed-screen device have scrolling? Logic?
May 28 2019
parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Tuesday, 28 May 2019 at 07:33:39 UTC, dayllenger wrote:
 You can change the element dimensions programmatically - 
 directly or via styling, maybe with transition effects. They 
 can be dependent on each other or on the parent/sibling 
 dimensions. In all these cases, the computations the layout 
 engine performs are the same.
You can, but why should you? We need to think about this in terms of usability requirements and use-scenario constraints, not in terms of what is possible programmatically.

1. There is no need to recompute the layout 60 times per second in this use case. Maybe it is better to have 2 types of layout: one static and one dynamic (see the sketch below).

2. On some displays any type of motion looks really bad because the screen refresh is slow (high latency). E.g. electronic-ink displays.
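[That first point is essentially layout caching: recompute only when something invalidates the layout, never once per frame. A minimal dirty-flag sketch in D, hypothetical names:]

struct Layouter {
    bool dirty = true;          // set by any size or content change
    float cachedW = 0, cachedH = 0;

    void frame(float availW, float availH) {
        if (!dirty) return;     // redraw at 60 fps, but don't relayout at 60 fps
        cachedW = availW;       // stand-in for the real layout computation
        cachedH = availH;
        dirty = false;
    }
}

void main() {
    Layouter l;
    foreach (i; 0 .. 60) l.frame(800, 600); // layout runs once, not 60 times
}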
 Why can't a fixed-screen device have scrolling? Logic?
You can have scrolling, but it makes for very poor usability. If you create a UI for controlling big dangerous equipment then you don't want the information to be off-screen.
May 28 2019
prev sibling parent reply =?iso-8859-1?Q?Robert_M._M=FCnch?= <robert.muench saphirion.com> writes:
On 2019-05-27 20:56:15 +0000, Ola Fosheim Grøstad said:

 If Robert is aiming for embedded and server rendering then he probably 
 wants a simple structure with limited multi-threading.
We are considering MT. A GUI should never get stuck; as a user I'm the most important part and my time is the most valuable. So, an application should never ever slow me down. -- Robert M. Münch http://www.saphirion.com smarter | better | faster
May 27 2019
next sibling parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Tuesday, 28 May 2019 at 06:37:47 UTC, Robert M. Münch wrote:
 We are considering MT. A GUI should never get stuck; as a user 
 I'm the most important part and my time is the most valuable. 
 So, an application should never ever slow me down.
Just be aware that implementing a multithreaded constraint solver is something that you will have to spend a lot of time on. Rendering is easy to do in parallel.
May 28 2019
next sibling parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Tuesday, 28 May 2019 at 07:22:06 UTC, Ola Fosheim Grøstad 
wrote:
 Just be aware that implementing a multithreaded constraint 
 solver is something that you will have to spend a lot of time 
 on.
Btw, Apple is using a version of Cassowary. There are many implementations available: http://overconstrained.io/
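[To show the flavor of what Cassowary-class solvers consume — and this is only an illustration of the linear relations involved, not a solver; Cassowary additionally handles inequalities and constraint strengths — here is a trivially determined system solved by plain substitution:]

import std.stdio;

void main() {
    // a two-pane splitter, written the way a solver would see it:
    //   left.width == 0.3 * window.width
    //   left.width + gap + right.width == window.width
    float windowWidth = 800f, gap = 8f;
    float leftWidth  = 0.3f * windowWidth;
    float rightWidth = windowWidth - gap - leftWidth; // forward substitution
    writefln("left=%s right=%s", leftWidth, rightWidth);
}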
May 28 2019
prev sibling parent =?iso-8859-1?Q?Robert_M._M=FCnch?= <robert.muench saphirion.com> writes:
On 2019-05-28 07:22:06 +0000, Ola Fosheim Grøstad said:

 Just be aware that implementing a multithreaded constraint solver is 
 something that you will have to spend a lot of time on.
I am... and I didn't mean that we want to use MT everywhere. MT is a tool, and after measuring and understanding the problem, the decision for or against MT is made.
 Rendering is easy to do in parallel.
Yep, and that's something that will be done with MT. -- Robert M. Münch http://www.saphirion.com smarter | better | faster
May 28 2019
prev sibling parent reply "Nick Sabalausky (Abscissa)" <SeeWebsiteToContactMe semitwist.com> writes:
On 5/28/19 2:37 AM, Robert M. Münch wrote:
 
 We are considering MT. A GUI should never get stuck; as a user I'm the 
 most important part and my time is the most valuable. So, an application 
 should never ever slow me down.
 
It's incredibly refreshing to hear a developer say that, instead of the usual, "As a *developer*, I'm the most important part and my time is most valuable. So, my users should just go buy faster hardware."
May 28 2019
parent reply =?iso-8859-1?Q?Robert_M._M=FCnch?= <robert.muench saphirion.com> writes:
On 2019-05-28 15:54:14 +0000, Nick Sabalausky (Abscissa) said:

 It's incredibly refreshing to hear a developer say that, instead of the 
 usual, "As a *developer*, I'm the most important part and my time is 
 most valuable. So, my users should just go buy faster hardware."
In the mid-90s I co-invented & designed reconfigurable CPU systems. Our goal back then was to do real-time ray-tracing. So we even said: just wait, we are going to build faster hardware first. ;-)

Anyway, your statement is unfortunately very true. It's the same idiotic thing happening in every company: "I make my life easy and do things quick & dirty" without understanding that down the process there are hundreds of points where this stuff is used, and everyone needs to figure out the pitfalls over and over again... that's what I call very efficient thinking.

The software we sell would still fit on one floppy disk (if there are still people who know what that is). And I'm always saying: "Every good software fits on one floppy disk." Most people can't believe that this is still possible. -- Robert M. Münch http://www.saphirion.com smarter | better | faster
May 28 2019
parent reply Abdulhaq <alynch4047 gmail.com> writes:
On Tuesday, 28 May 2019 at 20:54:59 UTC, Robert M. Münch wrote:
.
 The software we sell would still fit on one floppy disk (if 
 there are still people who know what that is). And I'm always 
 saying: "Every good software fits on one floppy disk." Most 
 people can't believe that this is still possible.
I remember VisiCalc.com. And I remember when no program would need more than 640k RAM. But I also remember installing msoffice from 31 floppy discs....
May 28 2019
parent reply "Nick Sabalausky (Abscissa)" <SeeWebsiteToContactMe semitwist.com> writes:
On 5/28/19 6:50 PM, Abdulhaq wrote:
 On Tuesday, 28 May 2019 at 20:54:59 UTC, Robert M. Münch wrote:
 .
 The software we sell would still fit on one floppy disk (if there are 
 still people who know what that is). And I'm always saying: "Every good 
 software fits on one floppy disk." Most people can't believe that this 
 is still possible.
I'd argue games are the obvious exception, just on account of the graphics/sound/etc assets, but you definitely make a fair point.
 
 I remember VisiCalc.com.
 And I remember when no program would need more than 640k RAM.
 But I also remember installing msoffice from 31 floppy discs....
I think I still have a stack of floppies from an early version of MS Visual C/C++. Plus similar floppy stacks from other '90s compilers[1]. But 31 install disks is quite impressive, I'm not sure I can match that[2]. I tip my retro-hat to you, good sir!

[1] I realize others have me beat here, but just the memory of taking college classes that utilized Visual Basic 6 makes me feel old...

[2] I think my earliest version of Office is new enough to be on CD-ROM - back from the "Encarta" era - anyone remember that? I actually kinda miss it, it made heavier use of audio/video than Wikipedia does. Heck, it made the future seem much brighter than what we ended up with! (And, interestingly, it also served as an early example of MS disregarding their OS's UI design guidelines in their own apps. Still relevant today!)
May 28 2019
parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Wednesday, 29 May 2019 at 05:56:45 UTC, Nick Sabalausky 
(Abscissa) wrote:
 I think I still have a stack of floppies from an early version 
 of MS Visual C/C++. Plus similar floppy stacks from other 90's 
 compilers[1] But 31 install disks is quite impressive, I'm not 
 sure I can match that[2]. I tip my retro-hat to you, good sir!

 [1] I realize others have me beat here, but just the memory of 
 taking college classes that utilized Visual Basic 6 makes me 
 feel old...
My first assembler for the C64 loaded from tape… so I had to reload everything from tape every time my program crashed… On the other hand, it made you look very closely at the code before you ran it…
May 29 2019
prev sibling parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Monday, 27 May 2019 at 05:31:29 UTC, Manu wrote:
 Actually, I'm not really interested in rendering much. From the 
 original posts, the rendering time is the least interesting part 
 since it's the end of the pipeline; the time I was commenting on 
 at the start is the non-rendering time, which was substantial.
Btw, I agree that spatial constraint solvers are tricky… it is a rather specialized field with very specialized algorithms… …but since Robert said it is like the CSS flex-box model, we know that there are free implementations out there that perform very well (Chromium, Firefox, etc). I don't know how useful flex is for UI layout though, based on my experience with browser layout. But I guess that is something one will have to experiment with.
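[For what it's worth, the core of the flex-box size distribution is small: free space is split in proportion to each child's grow factor. A simplified sketch in D — no shrink, wrapping or min/max clamping:]

import std.algorithm : map, sum;
import std.stdio;

struct Item { float basis; float grow; float size = 0; }

void flexRow(Item[] row, float available) {
    immutable used    = row.map!(i => i.basis).sum;
    immutable growSum = row.map!(i => i.grow).sum;
    immutable free    = available - used;
    foreach (ref i; row)
        i.size = i.basis + (growSum > 0 ? free * i.grow / growSum : 0);
}

void main() {
    auto row = [Item(100, 1), Item(100, 3)];
    flexRow(row, 600);
    writeln(row); // 400px of free space split 1:3 -> sizes 200 and 400
}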
May 27 2019
prev sibling next sibling parent =?iso-8859-1?Q?Robert_M._M=FCnch?= <robert.muench saphirion.com> writes:
On 2019-05-19 21:01:33 +0000, Robert M. Münch said:

 Hi, we are currently build up our new technology stack and for this 
 create a 2D GUI framework.
Hi, some more teaser showing a text-input field, with clipping, scrolling, etc.: https://www.dropbox.com/s/wp3d0bohnd59pyp/Bildschirmaufnahme%202020-01-28%20um%2017.16.26.mov?dl=0 We have text-labels, text-input and a basic text-list working now. Slow but steady progress. The whole framework follows a free-composition idea: you can add a table to the slider-knob of a slider if you want, and it will just work. Style and decoration are totally separate and currently not done at all; hence the ugly wireframe look. This whole project is a side project I currently do besides our normal product development work. -- Robert M. Münch http://www.saphirion.com smarter | better | faster
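[A sketch of what the free-composition idea implies for the widget tree — hypothetical names, not the framework's actual types: every widget accepts arbitrary children, so a table inside a slider knob is nothing special:]

class Widget {
    Widget[] children;
    Widget add(Widget w) { children ~= w; return this; }
}
class Table : Widget { }
class Slider : Widget {
    Widget knob;
    this() { knob = new Widget; add(knob); }
}

void main() {
    auto slider = new Slider;
    slider.knob.add(new Table); // composition is unrestricted
}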
Jan 28 2020
prev sibling parent reply =?iso-8859-1?Q?Robert_M._M=FCnch?= <robert.muench saphirion.com> writes:
On 2019-05-19 21:01:33 +0000, Robert M. Münch said:

 Hi, we are currently build up our new technology stack and for this 
 create a 2D GUI framework.
Some new teaser; again it might not look like a lot has happened, but we move forward, slow but steady: https://www.dropbox.com/s/jjefzyneqnxr7pb/dgui_teaser-1.mp4 The framework can now handle 9-patch images for decoration of any widget parts. And here an older one I think I never posted, about text editing: https://www.dropbox.com/s/cfqy21q4s7d0zxr/Bildschirmaufnahme%202020-04-07%20um%2017.08.24.mov?dl=0 Cut & paste, marking, cursor movement etc. are all working correctly. All text stuff (rendering and handling) is done in Unicode. What we first want to get working is a way to create simple applications with input-forms, text-lists fed from a database and single-line text-editing. -- Robert M. Münch http://www.saphirion.com smarter | better | faster
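[For readers who don't know the term: a 9-patch splits a decoration image into a 3x3 grid where the four corners keep their pixel size while the edges and center stretch. A sketch of just the destination-rectangle math, with an assumed Rect type and no actual blitting:]

struct Rect { float x, y, w, h; }

// l/r/t/b are the fixed corner insets of the source image
Rect[9] ninePatch(Rect dst, float l, float r, float t, float b) {
    immutable midW = dst.w - l - r, midH = dst.h - t - b;
    float[3] xs = [dst.x, dst.x + l, dst.x + l + midW];
    float[3] ys = [dst.y, dst.y + t, dst.y + t + midH];
    float[3] ws = [l, midW, r], hs = [t, midH, b];
    Rect[9] patches;
    foreach (row; 0 .. 3)
        foreach (col; 0 .. 3)
            patches[row * 3 + col] = Rect(xs[col], ys[row], ws[col], hs[row]);
    return patches;
}

void main() {
    auto patches = ninePatch(Rect(0, 0, 200, 100), 8, 8, 8, 8);
    assert(patches[4].w == 184 && patches[4].h == 84); // stretched center
}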
Jun 22 2020
next sibling parent reply aberba <karabutaworld gmail.com> writes:
On Monday, 22 June 2020 at 16:43:12 UTC, Robert M. Münch wrote:
 On 2019-05-19 21:01:33 +0000, Robert M. Münch said:

 [...]
 Some new teaser; again it might not look like a lot has happened, but we move forward, slow but steady: https://www.dropbox.com/s/jjefzyneqnxr7pb/dgui_teaser-1.mp4 The framework can now handle 9-patch images for decoration of any widget parts. And here an older one I think I never posted, about text editing: https://www.dropbox.com/s/cfqy21q4s7d0zxr/Bildschirmaufnahme%202020-04-07%20um%2017.08.24.mov?dl=0 Cut & paste, marking, cursor movement etc. are all working correctly. All text stuff (rendering and handling) is done in Unicode. What we first want to get working is a way to create simple applications with input-forms, text-lists fed from a database and single-line text-editing.
Will it be open source? Curious why it's not hosted publicly.
Jun 22 2020
parent reply =?iso-8859-1?Q?Robert_M._M=FCnch?= <robert.muench saphirion.com> writes:
On 2020-06-22 23:56:47 +0000, aberba said:

 Will it be open source? Curious why it's not hosted publicly.
We will see... The main point is, such a thing only lifts off if the quality and out-of-the-box experience are high enough. I think we don't need another project that might not get any traction. And, if things have good quality and are used, a project needs a bit of management. This bit can become quite intensive, especially in the beginning, until enough know-how is built up by others. And this requires that others are picking it up, which needs good quality... and the circle closes. Reality tells us that most OS projects don't take off. Small libs with a narrow scope are a totally different story than a GUI framework. -- Robert M. Münch http://www.saphirion.com smarter | better | faster
Jun 23 2020
parent aberba <karabutaworld gmail.com> writes:
On Tuesday, 23 June 2020 at 17:29:05 UTC, Robert M. Münch wrote:
 On 2020-06-22 23:56:47 +0000, aberba said:

 Will it be open source? Curious why it's not hosted publicly.
 We will see... The main point is, such a thing only lifts off if the quality and out-of-the-box experience are high enough. I think we don't need another project that might not get any traction. And, if things have good quality and are used, a project needs a bit of management. This bit can become quite intensive, especially in the beginning, until enough know-how is built up by others. And this requires that others are picking it up, which needs good quality... and the circle closes. Reality tells us that most OS projects don't take off. Small libs with a narrow scope are a totally different story than a GUI framework.
I meant it more like: your code itself is very valuable for someone else to borrow ideas from. See BeamUI, which used some of DlangUI's code to build something without starting completely from scratch.
Jun 23 2020
prev sibling parent reply =?UTF-8?B?0JLQuNGC0LDQu9C40Lkg0KTQsNC0?= =?UTF-8?B?0LXQtdCy?= writes:
On Monday, 22 June 2020 at 16:43:12 UTC, Robert M. Münch wrote:
 On 2019-05-19 21:01:33 +0000, Robert M. Münch said:

 Hi, we are currently build up our new technology stack and for 
 this create a 2D GUI framework.
 Some new teaser; again it might not look like a lot has happened, but we move forward, slow but steady: https://www.dropbox.com/s/jjefzyneqnxr7pb/dgui_teaser-1.mp4 The framework can now handle 9-patch images for decoration of any widget parts. And here an older one I think I never posted, about text editing: https://www.dropbox.com/s/cfqy21q4s7d0zxr/Bildschirmaufnahme%202020-04-07%20um%2017.08.24.mov?dl=0 Cut & paste, marking, cursor movement etc. are all working correctly. All text stuff (rendering and handling) is done in Unicode. What we first want to get working is a way to create simple applications with input-forms, text-lists fed from a database and single-line text-editing.
Width of the element can be set:
- by hand
--- fixed
- automatically
--- inherited from parent
--- from children ( calculated max width )
--- generated by parent layout ( like a HBox, VBox, maybe a CircleLayout... )

and for each case:
- check min width
- check max width

https://drive.google.com/file/d/1ZbeSkQD2BY06JB1R17CT17te1H9ecRnI/view?usp=sharing

and children can be aligned in the container cell: left, center, right, stretched.

https://drive.google.com/file/d/1Xm4m7DLaUoPu5wzvPSalgW3i1-WkTeek/view?usp=sharing

It will be good. I love beautiful UI too. :) I love fast, perfect UI too. And I do D Windows GUI too. :)
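[The width-rule list above maps to a small resolver, with the min/max check as a final clamp; a sketch in D with a hypothetical enum and helper, not anyone's actual API:]

import std.algorithm : clamp, fold, max;

enum WidthRule { fixedWidth, fromParent, fromChildren }

float resolveWidth(WidthRule rule, float fixedW, float parentW,
                   float[] childWidths, float minW, float maxW) {
    float w;
    final switch (rule) {
        case WidthRule.fixedWidth:   w = fixedW;  break;
        case WidthRule.fromParent:   w = parentW; break;
        case WidthRule.fromChildren:
            w = childWidths.fold!max(0.0f);      // calculated max width
            break;
    }
    return clamp(w, minW, maxW);                 // check min/max width
}

void main() {
    float[] kids = [120, 250, 90];
    assert(resolveWidth(WidthRule.fromChildren, 0, 0, kids, 0, 200) == 200);
}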
Jun 22 2020
parent reply =?utf-8?Q?Robert_M._M=C3=BCnch?= <robert.muench saphirion.com> writes:
On 2020-06-23 04:29:48 +0000, Виталий Фадеев said:

 Width of the element can be set:
 - by hand
 --- fixed
 - automatically
 --- inherited from parent
 --- from children ( calculated max width )
 --- generated by parent layout ( like a HBox, VBox, maybe a CircleLayout... )
 
 and for each case:
 - check min width
 - check max width
 
 https://drive.google.com/file/d/1ZbeSkQD2BY06JB1R17CT17te1H9ecRnI/view?usp=sharing
 
Not sure if this is a question or a project of your own. However, yes on all points for what we do.
 and children can be aligned in the container cell: left, center, right, stretched.
 
 https://drive.google.com/file/d/1Xm4m7DLaUoPu5wzvPSalgW3i1-WkTeek/view?usp=sharing
 
Yes.
 I love beautiful UI too. :)
Well, beauty lies in the eye of the beholder.
 I love fast, perfect UI too.
:-)
 And I do D Windows GUI too.  :)
Cool... so, anything to see? -- Robert M. Münch http://www.saphirion.com smarter | better | faster
Jun 23 2020
parent =?UTF-8?B?0JLQuNGC0LDQu9C40Lkg0KTQsNC0?= =?UTF-8?B?0LXQtdCy?= writes:
On Tuesday, 23 June 2020 at 17:41:35 UTC, Robert M. Münch wrote:
 On 2020-06-23 04:29:48 +0000, Виталий Фадеев said:

 [...]
 Not sure if this is a question or a project of your own. However, yes on all points for what we do.
 [...]
 Yes.
 [...]
 Well, beauty lies in the eye of the beholder.
 [...]
 :-)
 [...]
 Cool... so, anything to see?
Of course, when we are ready. You, me, and the job! :)
Jun 24 2020