c++.windows.16-bits - Compiler Insertions for Huge Pointers
- Mark Evans (11/11) Aug 08 2001 Walter,
- Mark Evans (2/20) Aug 08 2001
- Walter (12/23) Aug 08 2001 The easiest way is to compile your huge pointer code with -gl, and run
- Mark Evans (8/42) Aug 08 2001 Walter,
- Walter (21/63) Aug 09 2001 It's difficult to understand what's happening with huge pointers without
- Mark Evans (7/18) Aug 09 2001 Ah, that is a critical piece of knowledge. I have been using arbitrary ...
- Walter (11/29) Aug 10 2001 The rule applies to the entire size of the object, not the sizes of its
- Mark Evans (5/5) Aug 10 2001 Walter,
- Walter (7/12) Aug 10 2001 No, the rule is if an array of objects is allocated, then 64k must be ev...
- Mark Evans (5/26) Aug 10 2001 Then I am back to square one. All I have is an array of characters. Th...
- Walter (10/36) Aug 10 2001 Not about that particular problem, no. Are you able to identify a partic...
- Mark Evans (8/22) Aug 10 2001 Not really but I could look harder.
- Walter (15/37) Aug 10 2001 Huge pointer arithmetic should work or fail, not work 999 times and fail
- Mark Evans (8/8) Aug 10 2001 Here is my candidate for a preprocessor macro to enforce the rule.
Walter, I'm having some mysterious hang-ups which seem to disappear when I shrink my huge pointer blocks to less than a segment in size. This leads me to ask about the code inserted by the compiler to handle huge pointers. Could you give me some feeling for the nature of this code? I'm using a huge pointer block as a circular buffer. When this buffer is < 1 segment, it runs indefinitely and without problems. When the buffer is > 1 segment long, there is a repeatable hang-up which occurs. The bug is probably mine but if there is anything I can learn about the compiler's behavior it might give me some clues. Mark
Aug 08 2001
(In particular, is there any possibility of memory manager invocations or of my block being moved around, and if so how would I lock it.)
Aug 08 2001
The easiest way is to compile your huge pointer code with -gl, and run OBJ2ASM on the output. You'll see just what code is generated for each line of source. -Walter
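As a sketch, the suggested workflow might look like this (assuming the dmc driver and a 16-bit memory-model switch; the file name and the -ml switch are assumptions, not from the thread):

```shell
# Compile only (-c) with line-number debug info (-gl) in a 16-bit memory
# model (-ml assumed here), then disassemble the object file to see the
# code generated for each source line.
dmc -c -gl -ml ringbuf.c
obj2asm ringbuf.obj > ringbuf.asm
```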
Aug 08 2001
Walter, This is asking me to reverse-engineer something which you wrote. All I need are a few philosophical tips about the design of your huge pointer code. Only then would doing what you suggest even be worthwhile. Otherwise I am reverse engineering in the blind. I'm not that much of a Win16 expert to begin with, and not intimate with x86 assembly (much more Motorola / DSP assembly experience than Intel x86). I do wonder whether some DS == SS type issue could be causing problems at critical points when the compiler insertions have to compute offsets. Thanks, Mark
Aug 08 2001
It's difficult to understand what's happening with huge pointers without knowing what code is generated for it, at least that's the way it is for me <g>. But there is something else you need to be aware of with huge pointers. The objects you point to with them must have a size that evenly divides into 64k. In other words, objects cannot straddle a 64k boundary; they must sit wholly on one side or the other. -Walter
Aug 09 2001
Ah, that is a critical piece of knowledge. I have been using arbitrary sizes, not segment multiples. The runtime library (_halloc) should burp if the size requested is not a segment multiple. If not that, it should automatically increase the caller's request to equal the next highest segment multiple. In my case the "object" is just an array of chars, a giant string if you like. Maybe I'm OK then, because characters are not structures that can straddle a boundary? Or should I only allocate an exact segment multiple for an array of char? Thanks Walter! Mark
Aug 09 2001
The rule applies to the entire size of the object, not the sizes of its individual components. -Walter
Aug 10 2001
Walter, Thanks. Is that a fundamental Win16 issue, or just a compiler issue that could be improved? It would be nice if huge pointers did not have this restriction. As I understand what you are saying, the only valid huge memory blocks are N times 64K in size (contiguous) up to the limit of 1 MB; and behavior of nonconforming huge blocks is undefined. Mark
Aug 10 2001
No, the rule is if an array of objects is allocated, then 64k must be evenly divisible by that object size. That is because offset arithmetic, as in h->offset, cannot wrap. -Walter
Aug 10 2001
Then I am back to square one. All I have is an array of characters. The object size of a character object is 1. Any number is evenly divisible by 1. So I guess I don't have to worry? Thanks, Mark
Aug 10 2001
Not about that particular problem, no. Are you able to identify a particular line of code where you're getting a segment wrap?
Aug 10 2001
Not really but I could look harder. Again, any design tips about how DM treats huge pointers would be useful. I'm not too worried about this because the bug is probably mine and pertains to some obscure, rare situation that only happens after several thousand calls have been made. I just wanted to pulse you before starting a full-scale investigation. Any tips about huge pointer behavior/design would help. Already I've learned some new things about them. As far as I know right now, my circular buffer code is perfect and works indefinitely (rolling over and over and over) when the size is < 1 segment. The identical C code works fine for a long time when the buffer is > 1 segment but not forever. After several thousand calls something goes wrong. I will look into it. Mark
Aug 10 2001
Huge pointer arithmetic should work or fail, not work 999 times and fail once. It sounds like you have a program bug. Look for uninitialized variables, dangling pointers, etc. -Walter
Aug 10 2001
Here is my candidate for a preprocessor macro to enforce the rule.

#ifndef ROUND_TO_NEXT_64K_MULTIPLE
#define ROUND_TO_NEXT_64K_MULTIPLE( size ) \
    (((unsigned long int)(size) & (unsigned long int)0xFFFF0000L) + (unsigned long int)0x00010000L)
#endif

Should this macro be included in the Digital Mars headers somewhere? It could also be written as an inline function. Mark
Aug 10 2001