
D - Date & time

reply DrWhat? <DrWhat nospam.madscientist.co.uk> writes:
On the current debate about internationalisation of D: we know there are 
many different formats for the date and time in use around the world, so how 
about using a standard format for date & time representation (instead of the 
Gregorian calendar and standard time, with its problems of daylight savings 
and time zones)?

I propose we represent the date in Julian cycles (an international standard 
commonly used in astronomy) minus .5, and the time in seconds (or milli-
seconds) since midnight Universal Time (UT).  Any other format 
could be calculated from this simple and standard representation, avoiding 
errors such as Y2K for several million years if we store the date in 64 bit 
(long) format.  E.g. the day of the week is the Julian day number % 7.

struct date
{
    uint    time;   /* 32 bit time */
    ulong   date;   /* 64 bit date */
}
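The day-of-week claim above can be checked directly: Julian day number 0 fell on a Monday, so `jdn % 7 == 0` means Monday. A hedged sketch in C rather than D (the function name is invented, not part of any proposal):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Day of week from a Julian day number (JDN).
 * JDN 0 was a Monday, so jdn % 7 == 0 -> Monday. */
static const char *day_of_week(int64_t jdn)
{
    static const char *names[7] = {
        "Monday", "Tuesday", "Wednesday", "Thursday",
        "Friday", "Saturday", "Sunday"
    };
    return names[((jdn % 7) + 7) % 7];  /* double mod handles negative JDNs */
}
```

For instance, 1970-01-01 is JDN 2440588 and was a Thursday.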

A little info (copied from somewhere on the Internet many years ago - sorry, 
I lost the URL and it probably no longer exists anyway):

---------------------

What is a Julian date and a modified Julian date?
 
It's the number of days since noon 4713 BCE January 1. What's so special 
about this date?
 
Joseph Justus Scaliger (1540--1609) was a noted Italian-French philologist 
and historian who was interested in chronology and reconciling the dates in 
historical documents. As many calendars were in use around the world this 
created the problem of which one to use. To solve this Scaliger invented 
his own era and reckoned dates by counting days. He started with 4713 BCE 
January 1 because that was when the solar cycle of 28 years (when the days of 
the week and the days of the month in the Julian calendar coincide again), 
the Metonic cycle of 19 years (because 19 solar years are roughly equal to 
235 lunar months) and the Roman indiction of 15 years (decreed by the 
Emperor Constantine) all coincided. There was no recorded history as old as 
4713 BCE known in Scaliger's day, so it had the advantage of avoiding 
negative dates. Joseph Justus's father was Julius Caesar Scaliger, which 
might be why he called it the Julian Cycle. Astronomers adopted the Julian 
cycle to avoid having to remember "30 days hath September ...." and to 
avoid the 10/11 day hiatus in the Gregorian calendar.
 
For reference, Julian day 2450000 began at noon on 1995 October 9. Because 
Julian dates are so large, astronomers often make use of a "modified Julian 
date"; MJD = JD - 2400000.5. (Though, sometimes they're sloppy and subtract 
2400000 instead.)

-----------------

There are lots of programmes to convert Julian dates to other calendars,  
and adding/subtracting the current time zone and daylight savings info 
should be easy.
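One such conversion is the widely published Fliegel-Van Flandern integer algorithm, which turns a Julian day number into a Gregorian date; a C sketch (the function name is made up, not part of any proposed locale.d):

```c
#include <assert.h>

/* Julian day number -> Gregorian calendar date,
 * using the Fliegel-Van Flandern all-integer algorithm. */
static void jdn_to_gregorian(long jdn, int *year, int *month, int *day)
{
    long l, n, i, j, k;
    l = jdn + 68569;
    n = (4 * l) / 146097;
    l = l - (146097 * n + 3) / 4;
    i = (4000 * (l + 1)) / 1461001;
    l = l - (1461 * i) / 4 + 31;
    j = (80 * l) / 2447;
    k = l - (2447 * j) / 80;
    l = j / 11;
    j = j + 2 - 12 * l;
    i = 100 * (n - 49) + i + l;
    *year = (int)i;
    *month = (int)j;
    *day = (int)k;
}
```

As a check, JDN 2450000 comes out as 1995-10-09, matching the excerpt above.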

This format has the additional benefit that the date and time can be easily 
copied - all that is required is a library (locale.d) which can convert 
this standard format to/from whatever the local format is.

I would have a go at creating this library - but as I do not run Windows I 
am unable to run the D compiler (I will have to wait for the Solaris or 
Linux version).  I could have a go at specifying it, however.

All the best :

C 2002/3/23
Mar 24 2002
next sibling parent reply "Pavel Minayev" <evilone omen.ru> writes:
"DrWhat?" <DrWhat nospam.madscientist.co.uk> wrote in message
news:a7koqi$20lc$1 digitaldaemon.com...

 On the current debate about internationalisation of D,  we know there are
 may different fomats for the date and time in use around the world so how
 about using a standard format for d&t representation (instead of the
 Georgian calendar and standard time (with problems of daylight savings and
 time zone).

 I propose we represent the Date in Julian Cycles (international standard
 commonly used in astronomy) minus .5 and the time in seconds (or milli
 seconds) since midnight universal standard time (UST).  Any other formats
 could be calculated from this simple and standard representation avoiding
 errors such as Y2K for several million years if we store the date in 64
bit
 (long) format.  Ie. day of week is Julian cycles % 7.

 struct date
 {       uint    time    /* 32 bit time */
  ;      ulong   date    /* 64 bit date */
  ;
 }


 This format has the additional benifit that the date and time can be
easily
 copied - all that is required is a library (locale.d) which can convert
 this standard format to/from what ever the local format is.

 I would have a go at creating this library - but as I do not run Windows I
 am unable to run the D compiler (have to wait for the Solaris or Linux
 version),  I could have a go at specifying it however.
It could be used as an internal representation, probably. The problem is, the D locale system has to be designed first =)
Mar 24 2002
parent reply DrWhat? <blackmarlin nospam.asean-mail.com> writes:
Pavel Minayev wrote:

 It could be used as an internal representation, probably.
 The problem is, D locale system has to be designed first =)
That is the idea - to use a _common_ internal representation, then design 
the locale system around that - for example (sorry, the syntax is not D - 
you should be able to understand it anyway):

    class UKLocale is Locale
    {
        constant dateFormat      = "dd/mm/yyyy"
        constant hoursDivider    = ":"
        constant minutesDivider  = "."
        ` etc ...

        method getTime is Unsigned ;;;
            out Unsigned hours ,
            out Unsigned minutes ,
            out Unsigned seconds ,
            out Unsigned milliSeconds
        {   ` return current time
            hours        = timeSeconds / (60*60*1000)
            minutes      = (timeSeconds / (60*1000)) % 60
            seconds      = (timeSeconds / 1000) % 60
            milliSeconds = timeSeconds % 1000
            return timeSeconds
        }

        method getTimeString is Character[]
        {   ` convert time to hours / minutes / seconds
            getTime.this()( h, m, s, ms )
            toString.ms()
        }

        method setTime ;;
            in Unsigned hours ,
            in Unsigned minutes ,
            in Unsigned seconds ,
            in Unsigned milliseconds
        {   ` convert time into standard format & store
            timeSeconds = ...
        }

        method getDateString    ` you get the idea
        method getDate
        method setDate
    }

then alias the current locale class (CurrentLocale) to whatever this locale 
is (this could be done either at compile time or run time depending on the 
implementation).
Mar 24 2002
parent reply "Pavel Minayev" <evilone omen.ru> writes:
"DrWhat?" <blackmarlin nospam.asean-mail.com> wrote in message
news:a7l3d9$2ncu$1 digitaldaemon.com...

 That is the idea - to use a _common_ internal represnetation then design
 the locale system around that - for example (sorry the syntax is not D -
 you should be able to understand anyway)
The idea is great. But it's more than dates; it also needs to cover numbers, monetary values, booleans, string equality and comparison, etc... and it should be extensible, so you could for example make a (new!) phone number facet and write functions which rely on it.
Mar 24 2002
parent reply "Stephen Fuld" <s.fuld.pleaseremove att.net> writes:
"Pavel Minayev" <evilone omen.ru> wrote in message
news:a7l75r$4r$1 digitaldaemon.com...
 "DrWhat?" <blackmarlin nospam.asean-mail.com> wrote in message
 news:a7l3d9$2ncu$1 digitaldaemon.com...

 That is the idea - to use a _common_ internal represnetation then design
 the locale system around that - for example (sorry the syntax is not D -
 you should be able to understand anyway)
The idea is great. But it's more than dates, it also needs to cover numbers, monetary, boolean,
Are there really international differences in the representation of boolean data? Oh my!!!!!!!! What could they possibly be??????

-- 
Stephen Fuld
e-mail address disguised to prevent spam
Mar 24 2002
parent "Pavel Minayev" <evilone omen.ru> writes:
"Stephen Fuld" <s.fuld.pleaseremove att.net> wrote in message
news:a7ldfq$dto$1 digitaldaemon.com...

 Are there really international differences in the representation of
boolean
 data?  Oh my!!!!!!!!  What could they possibly be??????
Translations of English words "true" and "false". By the way, C++ does it, if you enable boolalpha for streams.
Mar 24 2002
prev sibling next sibling parent reply "Walter" <walter digitalmars.com> writes:
"DrWhat?" <DrWhat nospam.madscientist.co.uk> wrote in message
news:a7koqi$20lc$1 digitaldaemon.com...
 I propose we represent the Date in Julian Cycles (international standard
commonly used in astronomy) minus .5 and the time in seconds (or milli seconds) since midnight universal standard time (UST). Any other formats could be calculated from this simple and standard representation avoiding errors such as Y2K for several million years if we store the date in 64
bit
 (long) format.  Ie. day of week is Julian cycles % 7.
I think you have a great idea. But I have a question. My existing plan is to represent time as a 64 bit signed quantity of milliseconds since Jan 1, 1970. The nice thing about that is I have a number of tested and debugged functions for conversion to/from that format into more recognizable values.

Would I be correct that conversion to/from Julian cycles would simply be adding an offset?
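The offset in question can be sketched: 1970-01-01 is Julian day number 2440588, so converting epoch milliseconds to a Julian day number is a floor-divide plus that constant. A hedged C sketch (function and constant names invented):

```c
#include <assert.h>
#include <stdint.h>

#define MS_PER_DAY   86400000LL
#define JDN_19700101 2440588LL   /* Julian day number of 1970-01-01 */

/* Julian day number of the UT day containing a Unix-epoch millisecond count. */
static int64_t unix_ms_to_jdn(int64_t ms)
{
    int64_t days = ms / MS_PER_DAY;
    if (ms < 0 && ms % MS_PER_DAY != 0)
        days -= 1;               /* floor division for dates before 1970 */
    return days + JDN_19700101;
}
```

So within a day the mapping is indeed a divide and an offset; only the sub-day fraction needs separate handling.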
 I would have a go at creating this library - but as I do not run Windows I
 am unable to run the D compiler (have to wait for the Solaris or Linux
 version),  I could have a go at specifying it however.
That'd be great.
Mar 24 2002
parent reply DrWhat? <blackmarlin nospam.asean-mail.com> writes:
Walter wrote:
 
 I think you have a great idea. But I have a question. My existing plan is
 to represent time as a 64 bit signed quantity of milliseconds since Jan 1,
 1970. The nice thing about that is I have a number of tested and debugged
 functions for coversion to/from that format into more recognizable values.
 
 Would I be correct in that conversion of that to/from julian cycles would
 be simply adding an offset?
Adding an offset after converting from days - in my opinion Julian cycles are not that good at representing time (they use fractions to represent time and look like a stardate :-); the advantage of Julian cycles is that their origin lies at a point where many dating systems coincide.

Julian cycles are days since noon BCE 4713/1/1 (as mentioned in my previous post); as a day is a fairly standard international concept, counting in days should be the standard international method. The Julian cycle date system has been around for about 450 years (longer than the calendar we use), is not based on an entirely arbitrary start date, and is large enough to contain nearly all of recorded history.

A 64 bit count is a good idea, but if we count from 1970 then how do we represent dates before then (e.g. birthdays - some of which could be as early as 1880)? Surely a proprietary format is not a good idea and will decrease interoperability, and negative dates are not pleasant. And frequently we would want the date separate from the time - supplying both lumped together could be misleading, though concatenating the two is simple. Finally, taking account of leap seconds could pose a problem, as I believe they are not added on a standard basis - anyone have info on this?

Though, on reflection, the suggestion in my proposal of a 64 bit value for the date was a little excessive - 32 bits would be OK for nearly a million years - after that you can respecify it :-)

Your format does have one advantage - it would still be applicable on other planets, though I do not expect that to be a major concern for a while yet, and the locale module could take care of varying seconds in the day.
 I would have a go at creating this library - but as I do not run Windows
 I am unable to run the D compiler (have to wait for the Solaris or Linux
 version),  I could have a go at specifying it however.
That'd be great.
Ok, as I am working on a similar project I could use that and convert it to D. Of course there will be a lot more to do than just time & date - measurement, decimal points, numbers:

        1 000 000 000 .000
        1,000,000,000.000
        1.000.000.000,000       etcetera...

currency (£$¥), string compare, case conversion (just knowing what is a letter and what is not), daylight savings, time zones, and probably a few other bits and bobs. (Anything obvious I have missed? By the way, I would not include boolean translations, i.e. FALSE -> FALSCH, FAUX, etcetera - they would be better represented with a symbol [tick / cross], the Unicode character of which could be returned from a method, or translated with the remainder of the programme if a new language is required.)

(Hmm ... I will have to find that book in the library about international date systems to start with.)

C 2002/3/24
Mar 25 2002
parent reply "Pavel Minayev" <evilone omen.ru> writes:
"DrWhat?" <blackmarlin nospam.asean-mail.com> wrote in message
news:a7n7d3$2cv1$1 digitaldaemon.com...

 A 64 bit count is a good idea but if we count from 1970 then how do we
 represent dates before then (ie. birthdays - some of which could be as
 early as 1880) - surely a proprietary format is not a good idea and will
 decrease interoperability and negitive dates are not pleasant.  And
We could use signed ints. Also, this is the format used by UNIX, I believe, not something taken randomly.
 Ok, as I am working on a similar project I could use that and convert to
D.
  Of course there will be a lot more to do than just time & date -
 measurement,  decimal points, numbers
         1 000 000 000 .000
         1,000,000,000.000
         1.000.000.000,000       etcetera...
 , currency (£$¥), string compare, case conversion (just knowing what is a
 letter and what is not),  daylight savings,  time zones and probably a few
 other bits and bobs (anything obvious I have missed - by the way I would
 not include boolean translations ie. FALSE -> FALSCH, FAUX, etcetera. -
 they would be better either represented with a symbol [tick / cross] - the
Well, maybe better for you. I think this is what locales are for, so we can choose other formats for displaying data, and probably make them ourselves... Is the library as flexible as the C++ locale (or even better)? How does its interface look?
Mar 25 2002
parent reply "Stephen Fuld" <s.fuld.pleaseremove att.net> writes:
"Pavel Minayev" <evilone omen.ru> wrote in message
news:a7nj12$2jad$1 digitaldaemon.com...
 "DrWhat?" <blackmarlin nospam.asean-mail.com> wrote in message
 news:a7n7d3$2cv1$1 digitaldaemon.com...

 A 64 bit count is a good idea but if we count from 1970 then how do we
 represent dates before then (ie. birthdays - some of which could be as
 early as 1880) - surely a proprietary format is not a good idea and will
 decrease interoperability and negitive dates are not pleasant.  And
We could use signed ints. Also, this is the format used by UNIX, I
believe,
 not something taken randomly.
So because Unix got it wrong, we should continue to suffer? :-) I like the Julian cycles idea because it is consistent and doesn't need "negative dates". But it is a small point.

One related comment. If we are going to use a 64 bit value for the time, the base should be a unit of time much smaller than milliseconds. Microseconds at least, and probably nanoseconds. There should be some minimal accuracy specified (milliseconds is OK for the minimum, with the low order parts set to zero), but we should allow a consistent mechanism for time handling for those systems that support a more precise clock. Since we have to do a divide anyway if we want seconds, having to divide by a larger number isn't a substantial price to pay for the seamlessness of a single mechanism for high precision timers as well as more mundane uses.

-- 
Stephen Fuld
e-mail address disguised to prevent spam
Mar 25 2002
next sibling parent reply "Pavel Minayev" <evilone omen.ru> writes:
"Stephen Fuld" <s.fuld.pleaseremove att.net> wrote in message
news:a7nr2a$2nes$1 digitaldaemon.com...

 So because Unix got it wrong, we should continue to suffer?  :-) I like
the
 Julian cycles idea because it is consistant and doesn't need "negative
 dates".   But it is a small point.
You are never going to deal with the internal representation - well, almost. =) Most often you'll use functions like:

    long makeDate(int day, int month, int year);
    long dateDiff(long d1, long d2);
    ...
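A hedged sketch of how such helpers might look over a Julian-day internal representation (the signatures follow the post above; the formula is the standard Gregorian-to-Julian-day integer expression, not from any actual D library):

```c
#include <assert.h>

/* Gregorian date -> Julian day number (standard integer formula).
 * Relies on C's truncating integer division. */
long makeDate(int day, int month, int year)
{
    long a = (month - 14) / 12;  /* -1 for Jan/Feb, 0 otherwise */
    return (1461L * (year + 4800 + a)) / 4
         + (367L * (month - 2 - 12 * a)) / 12
         - (3L * ((year + 4900 + a) / 100)) / 4
         + day - 32075;
}

/* Difference between two dates, in days. */
long dateDiff(long d1, long d2)
{
    return d1 - d2;
}
```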
 One related comment.  If we are going to use a 64 bit value for the time,
 the base should be a unit time much smaller than milliseconds.
Microseconds
 at least, and probably nanoseconds.  There should be some minimal accuracy
A 64-bit integer is used to store both date and time. Will it be enough to hold large dates with microsecond resolution?
Mar 25 2002
parent reply "Stephen Fuld" <s.fuld.pleaseremove att.net> writes:
"Pavel Minayev" <evilone omen.ru> wrote in message
news:a7ntlu$2op0$1 digitaldaemon.com...
 "Stephen Fuld" <s.fuld.pleaseremove att.net> wrote in message
 news:a7nr2a$2nes$1 digitaldaemon.com...

 So because Unix got it wrong, we should continue to suffer?  :-) I like
the
 Julian cycles idea because it is consistant and doesn't need "negative
 dates".   But it is a small point.
You are never going to deal with the internal representation - well,
almost.
 =)
 Most often you'll use functions like:

     long makeDate(int day, int month, int year);
     long dateDiff(long d1, long d2);
     ...

 One related comment.  If we are going to use a 64 bit value for the
time,
 the base should be a unit time much smaller than milliseconds.
Microseconds
 at least, and probably nanoseconds.  There should be some minimal
accuracy
 64-bit integer is used to store both date and time. Will it be
 enough to hold large dates with microsecond resolution?
Well, pretty large. If you keep the precision at microseconds, you are limited to something like 584,000 years. I think that is enough. If you want to go to nanosecond precision, then you are limited to 584 years, which is probably enough in practice (with a base of, say, 1800 it will last till nearly 2400), but surely someone will object that it isn't sufficient.

To avoid that argument, I would be quite willing to accept a precision of microseconds.

-- 
Stephen Fuld
e-mail address disguised to prevent spam
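The quoted figures follow from 64-bit arithmetic; a small C sketch to check them (function name invented; the full 2^64 span, with a signed type halving each figure):

```c
#include <assert.h>

/* Approximate year span representable by a full 64-bit tick count
 * at the given resolution. */
static double span_years(double ticks_per_second)
{
    const double seconds_per_year = 365.25 * 86400.0;
    return 18446744073709551616.0 /* 2^64 */
           / ticks_per_second / seconds_per_year;
}
```

Microsecond ticks give roughly 585,000 years; nanosecond ticks give roughly 585 years.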
Mar 27 2002
parent reply "Pavel Minayev" <evilone omen.ru> writes:
"Stephen Fuld" <s.fuld.pleaseremove att.net> wrote in message
news:a7t4d2$2jfd$5 digitaldaemon.com...

 Well, pretty large.  If you keep the crecision of microseconds, you are
 limited to something like 584,000 years.  I think that is enough.  If you
 want to go to nanosecond precision, then you are limited to 584 years,
which
 is probably enough in practice (with a base of say 1800 it will last till
 nearly 2400), but surely someone will object that it isn't sufficient.

 To avoid that argument, I would be quite willing to accept a precision of
 microseconds.
Agreed. I don't know why somebody would want to measure dates with nanosecond precision. Nanoseconds are used in timers, but you most likely aren't going to have your program run for more than 500 years =)
Mar 27 2002
parent reply "Stephen Fuld" <s.fuld.pleaseremove att.net> writes:
"Pavel Minayev" <evilone omen.ru> wrote in message
news:a7tdid$2o6t$1 digitaldaemon.com...
 "Stephen Fuld" <s.fuld.pleaseremove att.net> wrote in message
 news:a7t4d2$2jfd$5 digitaldaemon.com...

 Well, pretty large.  If you keep the crecision of microseconds, you are
 limited to something like 584,000 years.  I think that is enough.  If
you
 want to go to nanosecond precision, then you are limited to 584 years,
which
 is probably enough in practice (with a base of say 1800 it will last
till
 nearly 2400), but surely someone will object that it isn't sufficient.

 To avoid that argument, I would be quite willing to accept a precision
of
 microseconds.
Agreed. I don't know why somebody would want to measure dates with nanosecond precision. Nanoseconds are used in timers, but you most likely aren't
going
 to have your program run for more than 500 years =)
True, of course, but people might want to measure elapsed time in nanoseconds, or time some precision event in nanoseconds (remember, in a few years 1 nanosecond will be something like 10 instructions on a then-current CPU). And, in your other post, you objected to having date and time be separate variables because you couldn't handle time differences across midnight by a simple subtraction, etc.

-- 
Stephen Fuld
e-mail address disguised to prevent spam
Mar 27 2002
parent reply "Pavel Minayev" <evilone omen.ru> writes:
"Stephen Fuld" <s.fuld.pleaseremove att.net> wrote in message
news:a7tm70$2sjd$1 digitaldaemon.com...

 True, of course, but people might want to measure elapsed time in
 nanoseconds or time some precision event in nanoseconds (remember in a few
 years, 1 nanosecond will be something like 10 instructions on a then
current
 CPU) and, in your other post, you objected to having date and time be
 separate variables because you couldn't handle time differences across
 midnight by a simple subtraction, etc.
You don't need a date variable to measure elapsed time - a simple ulong counter will do, and you can use arbitrary precision there, nanoseconds or whatever else... dates and timers are two different things, why mix them?
Mar 27 2002
parent reply Russell Borogove <kaleja estarcion.com> writes:
Pavel Minayev wrote:
 You don't need a date variable to measure elapsed time - a simple
 ulong counter will do, and you can use arbitrary precision there,
 nanoseconds or whatever else... dates and timers are two different
 things, why mix them?
Time is time, why make a distinction? If it meant that I had to remember one time API instead of two, I'd be willing to spend an extra 64 bits per time object in order to unify high-precision and high-range time.

<here he goes again> Given operator overloading, the 128-bit time class could look like a first-class datatype, as well. </here he goes again>

That said, two APIs isn't so much of a problem, and some people might well want to save the space, so I'm okay either way.

-RB
Mar 28 2002
parent reply "Stephen Fuld" <s.fuld.pleaseremove att.net> writes:
"Russell Borogove" <kaleja estarcion.com> wrote in message
news:3CA364D3.2010105 estarcion.com...
 Pavel Minayev wrote:
 You don't need a date variable to measure elapsed time - a simple
 ulong counter will do, and you can use arbitrary precision there,
 nanoseconds or whatever else... dates and timers are two different
 things, why mix them?
Time is time, why make a distinction? If it meant that I had to remember one time API instead of two, I'd be willing to spend an extra 64 bits per time object in order to unify high-precision and high-range time.
Yes. Why do you (Pavel) think dates and timers are different things? Let me give an example. For a health insurance program, you might want to calculate the length of someone's stay in the hospital (in days). This requires essentially subtracting the entry date from the release date and adding one day. How is this a totally different thing from getting the duration of some program loop by subtracting the start time from the completion time?

Having "timers" and absolute date/time retrieval use the same interface simplifies things. One fewer thing to learn. That is the primary motivation for combining them.

As for taking up more space, I believe Walter was proposing using a single 64 bit quantity, which is what I agreed was the best solution. The only difference was that I proposed adding bits for higher precision, which costs nothing (except some totally beyond-the-fringe date ranges) by essentially shifting the field left. There was a quibble about the base starting date, but we agreed that was a quibble.

-- 
Stephen Fuld
e-mail address disguised to prevent spam
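The hospital-stay arithmetic is indeed the same subtraction as any other time delta; in this hedged C example, `jdn` is the standard Gregorian-to-Julian-day formula and `stay_days` an invented helper:

```c
#include <assert.h>

/* Gregorian date -> Julian day number (standard integer formula). */
static long jdn(int y, int m, int d)
{
    long a = (m - 14) / 12;      /* -1 for Jan/Feb, 0 otherwise */
    return (1461L * (y + 4800 + a)) / 4
         + (367L * (m - 2 - 12 * a)) / 12
         - (3L * ((y + 4900 + a) / 100)) / 4
         + d - 32075;
}

/* Length of stay in days, counting both entry and release days. */
static long stay_days(long entry_jdn, long release_jdn)
{
    return release_jdn - entry_jdn + 1;
}
```

A stay from 2002-03-24 through 2002-03-28 comes out as 5 days, with no special cases at month or year boundaries.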
Mar 28 2002
parent reply "Pavel Minayev" <evilone omen.ru> writes:
"Stephen Fuld" <s.fuld.pleaseremove att.net> wrote in message
news:a7vtfk$10i4$1 digitaldaemon.com...

 Yes.  Why do you (Pavel) think dates and timers are different things?  Let
They aren't
 me give an example.  For a health insurance program, you might want to
 calculate the length of someones's stay in the hospital (in days).  This
 requires essentially subtracting the release date from the entry date and
 adding one day.  How is this a totally different thing from getting the
 duration of some program loop by subtracting the start time from the
 completion time?
Yep. Both will be uints. Only for the program, you might want it to run with nanosecond precision, while the date would be measured in microseconds. But you still find the delta with operator-.

Dates are different in the sense that they denote some concrete moment. A date can be converted to some human-readable format. On the other hand, timers are used to measure time elapsed from some arbitrary moment that you define, so for example the dayOfWeek() function would make sense on dates but not on timers. A timer is just a raw counter, and thus one could use uint without any typedefs (what functions would you expect to work with timers?). Date, however, would be a typedef, so it could be overloaded separately.
 As for taking up more space, I believe Walter was proposing using a single
 64 bit quantity, which is what I agreed was the best solution.  The only
 difference was I proposed adding the bits for higher precision, which
 costs

So what is the suggested precision? Nanoseconds? Or microseconds?
Mar 28 2002
parent reply "Stephen Fuld" <s.fuld.pleaseremove att.net> writes:
"Pavel Minayev" <evilone omen.ru> wrote in message
news:a800hs$12dg$1 digitaldaemon.com...
 "Stephen Fuld" <s.fuld.pleaseremove att.net> wrote in message
 news:a7vtfk$10i4$1 digitaldaemon.com...

 Yes.  Why do you (Pavel) think dates and timers are different things?
Let
 They aren't

 me give an example.  For a health insurance program, you might want to
 calculate the length of someones's stay in the hospital (in days).  This
 requires essentially subtracting the release date from the entry date
and
 adding one day.  How is this a totally different thing from getting the
 duration of some program loop by subtracting the start time from the
 completion time?
Yep. Both will be uints. Only for program, you might want it to run with nanosecond precision, while date would be measured in microseconds. But you still find the delta with operator-. Dates are different in a sense they denote some concrete moment. Date can be converted to some human-readable format. On other hand, timers are used to measure time elapsed from some arbitrary moment that you define, so for example the dayOfWeek() function would make sence on dates but not on timers. Timer is just a raw counter, and thus one could use uint without any typedefs (what functions would you expect to work with timers?). Date, however, would be a typedef, so it could be overloaded separately.
 As for taking up more space, I believe Walter was proposing using a
single
 64 bit quantity, which is what I agreed was the best solution.  The only
 difference was I proposed adding the bits for higher precision, which
costs So what is the suggested precision? Nanoseconds? Or microseconds?
If you are willing to live with a range of about 580 years, then nanoseconds. However, there are a number of people who feel that is insufficient (I think it is OK, but I can see their point), in which case microsecond precision with a range of 580,000 years is still a better solution than what Walter originally proposed (millisecond resolution) and is probably the best compromise. The extra precision you get by going from milliseconds to microseconds is essentially free if you use a 64 bit int for the value.

If, as some prefer (but that is a separate question), we kept time in a separate 64 bit value from the date, then nanosecond precision (and even more precise, but that seems to me to be gilding the lily) is free and thus should IMHO be adopted.

-- 
Stephen Fuld
e-mail address disguised to prevent spam
Mar 28 2002
parent "Pavel Minayev" <evilone omen.ru> writes:
"Stephen Fuld" <s.fuld.pleaseremove att.net> wrote in message
news:a811ku$1jut$1 digitaldaemon.com...

 If you are willing to live with a range of about 580 years, then
 nanoseconds.  However, there are a number of people who feel that is
 insufficient (I think it is OK, but I can see their point), in which case
 microseconds precision with a range of 580,000 years is still a better
 solution than what Walter originally proposed (millisecond resolution) and
 is probably the best compromise.
I agree. Microsecond precision gives a wide range of dates, and is enough for most practical timing purposes.
Mar 29 2002
prev sibling next sibling parent reply Russell Borogove <kaleja estarcion.com> writes:
Stephen Fuld wrote:
 So because Unix got it wrong, we should continue to suffer?  :-) I like the
 Julian cycles idea because it is consistant and doesn't need "negative
 dates".   But it is a small point.
4713 BCE isn't the beginning of time. You _will_ need negative dates. Or rather, someone will -- albeit not necessarily with high precision.
 One related comment.  If we are going to use a 64 bit value for the time,
 the base should be a unit time much smaller than milliseconds.  Microseconds
 at least, and probably nanoseconds. 
64 bits isn't quite enough to specify billions of years with billionths-of-seconds precision. It's probably enough to handle almost every need, but I'm inclined to use:

    int64   signed_seconds_since_epoch;
    uint64  unsigned_fractional_seconds;

Or, looked at another way, a signed 64.64 fixed-point, with the point at the seconds place.

I don't really care if Epoch is noon/1/1/4713BCE or midnight/1/1/1970 or 10:23am/8/23/1969. Whatever's convenient for one bunch of people (astronomers, accountants, physicists, historians) will be inconvenient for another bunch, but all you need to know is what to add/subtract to convert.

-RB
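Such a 64.64 fixed-point time needs only a carry or borrow between the two words to add and subtract; a hedged C sketch (struct and function names invented):

```c
#include <assert.h>
#include <stdint.h>

/* Signed 64.64 fixed-point time: whole seconds plus a binary
 * fraction in units of 2^-64 second. */
typedef struct {
    int64_t  sec;   /* signed seconds since some epoch */
    uint64_t frac;  /* fractional seconds, 2^-64 s units */
} Time6464;

static Time6464 time_add(Time6464 a, Time6464 b)
{
    Time6464 r;
    r.frac = a.frac + b.frac;                    /* wraps modulo 2^64 */
    r.sec  = a.sec + b.sec + (r.frac < a.frac);  /* carry on wraparound */
    return r;
}

static Time6464 time_sub(Time6464 a, Time6464 b)
{
    Time6464 r;
    r.frac = a.frac - b.frac;
    r.sec  = a.sec - b.sec - (a.frac < b.frac);  /* borrow on underflow */
    return r;
}
```

With operator overloading these would become operator+ and operator-, so the 128-bit type could indeed look first-class.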
Mar 25 2002
parent reply "Pavel Minayev" <evilone omen.ru> writes:
"Russell Borogove" <kaleja estarcion.com> wrote in message
news:3C9F8E94.8030001 estarcion.com...

 64 bits isn't quite enough to specify billions of
 years with billionths of seconds precision. It's
 probably enough to handle almost every need, but
 I'm inclined to use:

 int64
 signed_seconds_since_epoch;
 uint64          unsigned_fractional_seconds;
A single ulong is enough to hold any date in the approximate range of 0-500,000 years with microsecond precision. I guess it's quite enough for any exact date; yes, sometimes they say "it was in year 30,000,000 BC", but that's not a date, it's just a year. So, for any practical purpose, I guess this method fits. What the zero-point will be is not so important - as you've stated, it's just a matter of adding/subtracting some constant value...
Mar 25 2002
parent reply Russell Borogove <kaleja estarcion.com> writes:
Pavel Minayev wrote:
 A single uint is enough to hold any date in approximate range of 0-500000
 years with microsecond precision. I guess it's pretty enough for any
 exact date; yes, sometimes they say "it was in year 30,000,000 BC", but
 it's not a date, it's just a year. 
My point is that some people need bigger range, and other people need more precision. If you want a single time type in the language, it needs to cover both those needs, otherwise someone is going to say "microseconds? bah! I need nanoseconds!" or "millions of years? bah! I need billions!", and 64 bits might not be enough. -R
Mar 25 2002
next sibling parent "Walter" <walter digitalmars.com> writes:
"Russell Borogove" <kaleja estarcion.com> wrote in message
news:3C9FA0E2.4080000 estarcion.com...
 Pavel Minayev wrote:
 A single uint is enough to hold any date in approximate range of
0-500000
 years with microsecond precision. I guess it's pretty enough for any
 exact date; yes, sometimes they say "it was in year 30,000,000 BC", but
 it's not a date, it's just a year.
My point is that some people need bigger range, and other people need more precision. If you want a single time type in the language, it needs to cover both those needs, otherwise someone is going to say "microseconds? bah! I need nanoseconds!" or "millions of years? bah! I need billions!", and 64 bits might not be enough.
A 64 bit signed int measured in milliseconds will handle time for +-285,000 years. That's enough for system programming use. I agree it will likely not be adequate for astronomers, but there should be no trouble adding a package that has time measured in astronomical units. 64 bit ints are nice because they are conveniently handled by the compiler.

Sometimes I need microsecond times - but for computing elapsed time for things like profiling, not for measuring calendar time. I think those are handled adequately by being separate. In any case, it will be a typedef, and if the API for it is followed, there should be no trouble if someone upgrades to a 128 bit time or whatever.
Mar 25 2002
prev sibling parent reply "Stephen Fuld" <s.fuld.pleaseremove att.net> writes:
"Russell Borogove" <kaleja estarcion.com> wrote in message
news:3C9FA0E2.4080000 estarcion.com...
 Pavel Minayev wrote:
 A single uint is enough to hold any date in approximate range of
0-500000
 years with microsecond precision. I guess it's pretty enough for any
 exact date; yes, sometimes they say "it was in year 30,000,000 BC", but
 it's not a date, it's just a year.
My point is that some people need bigger range, and other people need more precision. If you want a single time type in the language, it needs to cover both those needs, otherwise someone is going to say "microseconds? bah! I need nanoseconds!" or "millions of years? bah! I need billions!", and 64 bits might not be enough.
Note that the original poster suggested two 64 bit values, one for days and one for time of day. Using this idea, we have plenty of bits for time accurate to nanoseconds (or even less), and dates even if you set the start date to something like the big bang. :-)

--
 - Stephen Fuld
   e-mail address disguised to prevent spam
Mar 27 2002
parent reply "Pavel Minayev" <evilone omen.ru> writes:
"Stephen Fuld" <s.fuld.pleaseremove att.net> wrote in message
news:a7t4db$2jfd$6 digitaldaemon.com...

 Note that the original poster suggested two 64 bit values, one for days
and
 one for time of day .  Using this idea, we have plenty of bits for time
 accurate to nanoseconds, (or even less) and dates even if you set the
start
 date to something like the big bang.  :-)
However, this requires twice as much memory, and you can't just subtract one value from another, or compare dates, etc.
Mar 27 2002
parent reply DrWhat? <blackmarlin nospam.asean-mail.com> writes:
Pavel Minayev wrote:

 "Stephen Fuld" <s.fuld.pleaseremove att.net> wrote in message
 news:a7t4db$2jfd$6 digitaldaemon.com...
 
 Note that the original poster suggested two 64 bit values, one for days
and
 one for time of day .  Using this idea, we have plenty of bits for time
 accurate to nanoseconds, (or even less) and dates even if you set the
start
 date to something like the big bang.  :-)
However, this requires twice as much memory, and you can't just subtract one value from another, or compare dates, etc.
Yes, that is a problem - and in a later post I suggested two 32 bit values would be more efficient and nearly as effective. I expect the date and time values to be used for time stamps on file systems etc, not for nanosecond precision - that is the job of specialist timers (and besides, the overhead of a call to the date/time routines could easily mess up a nanosecond count, rendering its output effectively worthless).

The problem with storing date and time in a single 64 bit value is that leap _seconds_ are added (and subtracted) in an arbitrary way to compensate for planetary deceleration (ok, it is not truly arbitrary, but for our purposes it may as well be). Do you really want programmes written in D to be a few minutes out compared to real time? After all, the purpose of a standard date/time library is to prevent such problems (even with people using different date and time representations) by using a standard format which is easily convertible.

Finally, the date and time, if stored in separate 32 bit uints, could be split up - if you want to know, for example, the date of a birthday, the time is commonly unimportant. And if creating an alarm clock, the date is similarly unwanted. Although addition and subtraction of dates in this format is not as trivial, comparisons are just as simple. (add/sub -> time+=newTime; date+=newDate+time/secondsInDay; time%=secondsInDay; is that really that much more difficult - these operations could easily be added to the time/date library)

        `       newTime <- eax          ` 586 timings (Intel)
        `       newDate <- ebx
        add     eax, time               ` 1     U
        xor     edx, edx                `       V
        `cwde   ` could be used - but c = 3 NP
        div     secondsInDay            ` 41    NP      (ouch)
        add     ebx, eax                ` 1     U
        mov     time, edx               `       V
        add     date, ebx               ` 1     U
                                        `== 44 total (& free slot [pop ebp?])
        ` ( time for 64 bit add is 4 cycles )
        ` ( nb: I think divides are faster on newer processors, and they
        `   take less cycles on older processors (38-386, 40-486) )

C       2002/3/28
Mar 28 2002
parent "OddesE" <OddesE_XYZ hotmail.com> writes:
"DrWhat?" <blackmarlin nospam.asean-mail.com> wrote in message
news:a7v0f9$ger$1 digitaldaemon.com...
 Pavel Minayev wrote:

 "Stephen Fuld" <s.fuld.pleaseremove att.net> wrote in message
 news:a7t4db$2jfd$6 digitaldaemon.com...

 Note that the original poster suggested two 64 bit values, one for days
and
 one for time of day .  Using this idea, we have plenty of bits for time
 accurate to nanoseconds, (or even less) and dates even if you set the
start
 date to something like the big bang.  :-)
However, this requires twice more memory, and you can't just subtract
one
 value from another, or compare dates, etc.
Yes, that is a problem - and in a later post I suggested two 32 bit values would be more efficient and nearly as effective. I expect the date and time values to be used for time stamps on file systems etc, not for nanosecond precision - that is the job of specialist timers (and besides, the overhead of a call to the date/time routines could easily mess up a nanosecond count, rendering its output effectively worthless). The problem with storing date and time in a single 64 bit value is that leap _seconds_ are added (and subtracted) in an arbitrary way to
compensate
 for planetary deceleration (ok,  it is not truly arbitrary but for our
 results it may as well be).  Do you really want programmes written in D to
 be a few minutes out compared to real time?  After all the purpose of a
 standard date/time library is to prevent such problems (even with people
 using different date and time representations) by using a standard format
 which is easily convertible.

 Finally the date and time, if stored in separate 32 bit uints, could be
 split up - if you want to know, for example, the date of a birthday the
 time is commonly unimportant.  And if creating an alarm clock the date is
 similarly unwanted.  Although addition and subtraction of dates in this
 format is not as trivial,  comparisons are just as simple.
 (add/sub -> time+=newTime; date+=newDate+time/secondsInDay;
 time%=secondsInDay;  is that really that much more difficult - these
 operations could easily be added to the time/date library)

         `       newTime <- eax          ` 586 timings (Intel)
         `       newDate <- ebx
         add     eax, time               ` 1     U
         xor     edx, edx                `       V
         `cwde   ` could be used - but c = 3 NP
         div     secondsInDay            ` 41    NP      (ouch)
         add     ebx, eax                ` 1     U
         mov     time, edx               `       V
         add     date, ebx               ` 1     U
                                         `== 44 total (&free slot [pop
ebp?])
         ` ( time for 64 bit add is 4 cycles )
         ` ( nb: I think divides are faster on newer processors,  and they
take
         `   less cycles on older processors (38-386,40-486) )

 C       2002/3/28
So use an OLE DATE compatible format, 64 bits float. This way splitting date and time is very easy, and you don't need to invent difficult routines (the MFC COleDateTime and Delphi TDateTime routines are open source, and all the problems you mention have been solved by other people before). Also, you get compatibility with OLE for free. See my other post...

--
Stijn
OddesE_XYZ hotmail.com
http://OddesE.cjb.net
_________________________________________________
Remove _XYZ from my address when replying by mail
Mar 28 2002
prev sibling parent "Stephen Fuld" <s.fuld.pleaseremove att.net> writes:
"Stephen Fuld" <s.fuld.pleaseremove att.net> wrote in message
news:a7nr2a$2nes$1 digitaldaemon.com...
 "Pavel Minayev" <evilone omen.ru> wrote in message
 news:a7nj12$2jad$1 digitaldaemon.com...
 "DrWhat?" <blackmarlin nospam.asean-mail.com> wrote in message
 news:a7n7d3$2cv1$1 digitaldaemon.com...

 A 64 bit count is a good idea but if we count from 1970 then how do we
 represent dates before then (ie. birthdays - some of which could be as
 early as 1880) - surely a proprietary format is not a good idea and
will
 decrease interoperability and negative dates are not pleasant.  And
We could use signed ints. Also, this is the format used by UNIX, I
believe,
 not something taken randomly.
So because Unix got it wrong, we should continue to suffer? :-) I like the Julian cycles idea because it is consistent and doesn't need "negative dates". But it is a small point.

One related comment. If we are going to use a 64 bit value for the time, the base should be a unit of time much smaller than milliseconds. Microseconds at least, and probably nanoseconds. There should be some minimal accuracy specified (milliseconds is OK for the minimum, with the low order parts set to zero), but we should allow a consistent mechanism for time handling for those systems that support a more precise clock. Since we have to do a divide anyway if we want seconds, having to divide by a larger number isn't a substantial price to pay for the seamlessness of a single mechanism for high precision timers as well as more mundane uses.

--
 - Stephen Fuld
   e-mail address disguised to prevent spam
Mar 27 2002
prev sibling parent reply "OddesE" <OddesE_XYZ hotmail.com> writes:
"DrWhat?" <DrWhat nospam.madscientist.co.uk> wrote in message
news:a7koqi$20lc$1 digitaldaemon.com...
<SNIP>
 A little info (copied from somewhere on the Internet many years ago -
sorry
 lost the URL and it probably no longer exists anyway)

 ---------------------
Guess what, it still exists... You can't beat google :)

http://www.faqs.org/faqs/astronomy/faq/part3/section-6.html
http://www.geocities.com/CapeCanaveral/Lab/7671/julian.htm

I like your idea by the way!

Has someone looked at OleDateTime? I believe it's a 64 bit float? It represents the amount of days passed since some moment in time. Parts of a day are represented with the fractional part of the float, so 6 hours would translate to the number 0.25, 12 hours to 0.5, two days to 2.0 and 3 days and 18 hours to 3.75. The advantage of this system is that it is very easy to extract either the date or the time part of a datetime. Also, calculations with precision in days do not need to divide the number.

D's more advanced version could use an extended. I don't know what kind of range and precision that would give, but I bet it's enormous!

--
Stijn
OddesE_XYZ hotmail.com
http://OddesE.cjb.net
__________________________________________
Remove _XYZ from my address when replying by mail
Mar 25 2002
parent reply "OddesE" <OddesE_XYZ hotmail.com> writes:
"OddesE" <OddesE_XYZ hotmail.com> wrote in message
news:a7obqv$30n9$1 digitaldaemon.com...
 "DrWhat?" <DrWhat nospam.madscientist.co.uk> wrote in message
 news:a7koqi$20lc$1 digitaldaemon.com...
 <SNIP>
 A little info (copied from somewhere on the Internet many years ago -
sorry
 lost the URL and it probably no longer exists anyway)

 ---------------------
Guess what, it still exists... You can't beat google :)

http://www.faqs.org/faqs/astronomy/faq/part3/section-6.html
http://www.geocities.com/CapeCanaveral/Lab/7671/julian.htm

I like your idea by the way!

Has someone looked at OleDateTime? I believe it's a 64 bit float? It represents the amount of days passed since some moment in time. Parts of a day are represented with the fractional part of the float, so 6 hours would translate to the number 0.25, 12 hours to 0.5, two days to 2.0 and 3 days and 18 hours to 3.75. The advantage of this system is that it is very easy to extract either the date or the time part of a datetime. Also, calculations with precision in days do not need to divide the number.

D's more advanced version could use an extended. I don't know what kind of range and precision that would give, but I bet it's enormous!

--
Stijn
OddesE_XYZ hotmail.com
http://OddesE.cjb.net
__________________________________________
Remove _XYZ from my address when replying by mail
So, how about it? An extended for a datetime?
Mar 27 2002
parent reply DrWhat? <blackmarlin nospam.asean-mail.com> writes:
 
 So, how about it? An extended for a datetime?
 
Two problems which I see:

        1       converting time -> seconds is non-trivial
        2       not all computers have extended precision floats
                (some do not even have double precision)

(Though the dates end up looking like Treky stardates - which should appeal to geeks :-)
Mar 28 2002
next sibling parent reply "OddesE" <OddesE_XYZ hotmail.com> writes:
"DrWhat?" <blackmarlin nospam.asean-mail.com> wrote in message
news:a7uu46$epg$1 digitaldaemon.com...
 So, how about it? An extended for a datetime?
Two problems which I see, 1 converting time -> seconds is non-trivial
alias extended DateTime;

DateTime dt = 3.25;
DateTime date = cast(long) dt;
DateTime time = dt - date;

// time is a fraction of a day.
// 1 Day == 24 hours, so mul. time by 24 to get hours...
DateTime hours = time * 24;

// 1 Day == 1440 minutes, so mul. time by 1440 to get min...
DateTime min = time * 1440;

// 1 Day == 86400 seconds, so mul. time by 86400 to get sec...
DateTime sec = time * 86400;

Of course you could go on for milliseconds, microseconds, nanoseconds etc... To me it seems quite trivial, or am I missing something important?

Also, this format is used by Ole DATE and the MFC COleDateTime and Delphi TDateTime classes, both of which come with source, so getting all the other algorithms should be easy.
         2       not all computers have extended precision floats
                 (some do not even have double precision)
OK, fair enough... So, buy a new computer! Sounds very bullish, I admit, but some processors don't have 32-bit addressing either, and it didn't stop Walter from making that the lowest supported format for D. And fully justified too, I think. We have to look to the future, not always back at the past...

How about this: if there are common 32-bit processors that do not support 80-bit extended, go for 64 bit double instead. If they all support it, use 80 bit; we want precision, not speed, because most 3D graphics do not need dates! :)

Conversion to and from double to extended is easy anyway as long as you stay within the safe range. When you go out of range you have just proven that we actually *need* 80 bits!
 (Though the dates and up looking like Treky stardates - which should
appeal
 to geeks :-)
LOL! You got me, that's why I like it! :)

--
Stijn
OddesE_XYZ hotmail.com
http://OddesE.cjb.net
_________________________________________________
Remove _XYZ from my address when replying by mail
Mar 28 2002
parent reply DrWhat? <blackmarlin nospam.asean-mail.com> writes:
OddesE wrote:

 "DrWhat?" <blackmarlin nospam.asean-mail.com> wrote in message
 news:a7uu46$epg$1 digitaldaemon.com...
 So, how about it? An extended for a datetime?
Two problems which I see, 1 converting time -> seconds is non-trivial
alias extended DateTime;

DateTime dt = 3.25;
DateTime date = cast(long) dt;
DateTime time = dt - date;
(need a multiply here, then another conversion to convert to seconds before we can even think of converting it into a national standard). total - 3 conversions, 1 subtract, 1 multiply.

That is going to take even more processor cycles than my solution, and may have some problems in terms of precision. Also, why use the FPU when there is no need, especially when it is less efficient? But that is not the real problem.
 // time is a fraction of a day.
 // 1 Day == 24 hours, so mul. time by 24 to get hours...
 DateTime hours = time * 24;
 
 // 1 Day == 1440 minutes, so mul. time by 1440 to get min...
 DateTime min = time * 1440;
 
 // 1 Day == 86400 seconds, so mul. time by 86400 to get sec...
 DateTime sec = time * 86400;
Wrong - 1 Day may equal 86399 or 86401 seconds - remember leap seconds, they are a problem when converting dates into seconds, and that is the problem I am trying to avoid.
 Ofcourse you could go on for milliseconds, microseconde,
 nanoseconds etc... To me it seems quitte trivial, or am
 I missing something important?
 
 Also, this format is used by Ole DATE and the MFC
 COleDateTime and Delphi TDateTime classes, both of
 which come with source, so getting all the other
 algorithms should be easy.
I do not programme or even use Windows, I could not comment on this.
         2       not all computers have extended precision floats
                 (some do not even have double precision)
OK, fair enough... So, buy a new computer! Sounds very bullish, I admit, but some processors don't have 32-bit addressing either, and it didn't stop Walter from making that the lowest supported format for D. And fully justified too, I think. We have to look to the future, not always back at the past...
I think a flat memory model is necessary for D not necessarily 32 bit addressing - this would rule out the 8086 but still allow porting to ie. the Z80 or similar embedded system CPUs with a 16 bit flat memory model.
 How about this: If there are common 32-bit processors
 that do not support 80-bit extended, go for 64 bit
 double instead. If they all support it, use 80 bit, we
 want precision, not speed, because most 3D graphics
 do not need dates! :)
 
 Conversion to and from double to extended is easy
 anyway as long as you stay within the safe range.
 When you go out of range you have just proven that
 we actually *need* 80 bits!
This attitude is what will make D an Intel / Windows only programming language - if D is going to be truly successful it must be able to support the majority of architectures. Few processors support 80 bit extended precision floating point numbers, and many (e.g. RISC processors) do not even have a floating point capability and emulate it using integer instructions if required. Using a floating point format (especially the rubbish one that is IEEE 754) would only decrease the ease of porting, which would keep D from becoming a widespread language.
 (Though the dates and up looking like Treky stardates - which should
 appeal to geeks :-)
LOL! You got me, that's why I like it! :)
I guess me being a B5 fan is why I do not :-) The year is 2259 ... but how the %^&* are we going to represent it. C 2002/3/28
Mar 28 2002
next sibling parent "Pavel Minayev" <evilone omen.ru> writes:
"DrWhat?" <blackmarlin nospam.asean-mail.com> wrote in message
news:a7vmt7$t7i$1 digitaldaemon.com...

 (Though the dates and up looking like Treky stardates - which should
 appeal to geeks :-)
LOL! You got me, that's why I like it! :)
I guess me being a B5 fan is why I do not :-)
I second that, being a B5 fan myself. I guess ulong is just enough.
 The year is 2259 ... but how the %^&* are we going to represent it.
Nah, it's 2262! =)
Mar 28 2002
prev sibling parent reply "OddesE" <OddesE_XYZ hotmail.com> writes:
"DrWhat?" <blackmarlin nospam.asean-mail.com> wrote in message
news:a7vmt7$t7i$1 digitaldaemon.com...
 OddesE wrote:

 "DrWhat?" <blackmarlin nospam.asean-mail.com> wrote in message
 news:a7uu46$epg$1 digitaldaemon.com...
 So, how about it? An extended for a datetime?
Two problems which I see, 1 converting time -> seconds is none trivial
alias DateTime extended; DateTime dt = 3.25; DateTime date = (long) dt; DateTime time = dt - date;
(need a multiply here, then another conversion to convert to seconds before we can even think of converting it into a national standard). total - 3 conversions, 1 subtract, 1 multiply.
Where do we need the multiply?

DateTime dt = 3.25;
DateTime date = cast(long) dt;  // This should truncate dt into date, yielding 3.0
DateTime time = dt - date;      // 3.25 - 3.0 = 0.25
                                // Represents 06:00u

We only need a multiply if we want to convert to hours, minutes or seconds, is it not? If we use a longint, won't we need a divide? Aren't divides more expensive than multiplies? I am not claiming any of these things to be true, I am really just asking, because this is what I thought it was. Also, why did Borland and MS both pick a 64 bit float if it would be that bad?

- Getting the date is a truncating assignment.
- Getting the time is a truncating assignment and a subtract.
- Getting both is a truncating assignment, a subtract and another assignment.
- Getting hours, seconds, minutes etc is a truncating assignment, a subtract and a multiply... That is not too expensive, is it?
 That is going to take even more processor cycles that my solution,  and
may
 have some problems in terms of precision.  Also why use the FPU when there
 is no need,  especially when it is less efficient.  But that is not the
 real problem.
How are you going to avoid floating point operations if you use a longint to represent elapsed microseconds? Won't you need to divide to get seconds, minutes, hours etc? I thought FPU operations were fast becoming as cheap as fp-emulation operations?
 // time is a fraction of a day.
 // 1 Day == 24 hours, so mul. time by 24 to get hours...
 DateTime hours = time * 24;

 // 1 Day == 1440 minutes, so mul. time by 1440 to get min...
 DateTime min = time * 1440;

 // 1 Day == 86400 seconds, so mul. time by 86400 to get sec...
 DateTime sec = time * 86400;
Wrong - 1 Day may equal 86399 or 86401 seconds - remember leap seconds, they are a problem when converting dates into seconds, and that is the problem I am trying to avoid.
You are right here... Then again, 1 day might be 23, 24 or 25 hours depending on daylight saving time, and one year might be 365 or 366 days depending on the leap year. If you have a solution that solves this, great! I just reread the Julian cycle story, but isn't it also counting days from a certain starting point?

If you are proposing to keep date and time separate, then that might be a solution, but what is the real advantage of that? Usually when I use dates in my program I want to do things like compare them and add or subtract them. These operations benefit greatly from a format with date and time stuffed together. It isn't all that often that I actually want to display them, and that is the only moment when it really matters that there are leap years, -hours and -seconds. A conversion will do just fine then, and who cares if it is slow; displaying information tends to be slow anyhow.

Also, when you think about it, what is the difference between a date and a time? Isn't a date just time measured in days, while time is time measured in parts of days?
 Ofcourse you could go on for milliseconds, microseconde,
 nanoseconds etc... To me it seems quitte trivial, or am
 I missing something important?

 Also, this format is used by Ole DATE and the MFC
 COleDateTime and Delphi TDateTime classes, both of
 which come with source, so getting all the other
 algorithms should be easy.
I do not programme or even use Windows, I could not comment on this.
Ah come on...This has got nothing to do with windows. Open source is open source and algorithms are algorithms, no matter on what system you are. I agree that MS achieved 'polluting' C with WinMain et al, but you should still be able to read it... ;)
         2       not all computers have extended precision floats
                 (some do not even have double precision)
OK, Fair enough... So, buy a new computer! Sounds very bulish, I admit, but some processors don't have 32-bit addressing either, but it didn't stop Walter from making that the lowest supported format for D. And fully justified too I think. We have to look for the future, not always back at the past...
I think a flat memory model is necessary for D not necessarily 32 bit addressing - this would rule out the 8086 but still allow porting to ie. the Z80 or similar embedded system CPUs with a 16 bit flat memory model.
You are probably right...
 How about this: If there are common 32-bit processors
 that do not support 80-bit extended, go for 64 bit
 double instead. If they all support it, use 80 bit, we
 want precision, not speed, because most 3D graphics
 do not need dates! :)

 Conversion to and from double to extended is easy
 anyway as long as you stay within the safe range.
 When you go out of range you have just proven that
 we actually *need* 80 bits!
This attitude is what will make D an Intel / Windows only programming language - if D is going to be truly successful it must be able to support the majority of architectures. Few processors support 80 bit extended precision floating point numbers, and many (e.g. RISC processors) do not even have a floating point capability and emulate it using integer instructions if required. Using a floating point format (especially the rubbish one that is IEEE 754) would only decrease the ease of porting, which would keep D from becoming a widespread language.
I do not understand... Are these systems not capable of handling a double or extended? If they are, what is the difference with a date? I think my attitude has got nothing to do with Windows or Intel, but with old and new. I understand the need to support old hardware up to a point, but in such a young language as D, I'd rather look forward and make sure we are forwards compatible with the future. If a 64-bit float is too much to ask of too many systems, that's it, you won't hear me about it again, but I just find it hard to believe that there are that many systems out there that do not support it. Could you name some examples?
 (Though the dates and up looking like Treky stardates - which should
 appeal to geeks :-)
LOL! You got me, that's why I like it! :)
I guess me being a B5 fan is why I do not :-) The year is 2259 ... but how the %^&* are we going to represent it. C 2002/3/28
Whatever the format, *please* make it big! (So we can still use it in 2259 when people live to be a thousand years old!) :)

--
Stijn
OddesE_XYZ hotmail.com
http://OddesE.cjb.net
_________________________________________________
Remove _XYZ from my address when replying by mail
Mar 28 2002
parent reply "Stephen Fuld" <s.fuld.pleaseremove att.net> writes:
"OddesE" <OddesE_XYZ hotmail.com> wrote in message
news:a7vvc1$11or$1 digitaldaemon.com...
 "DrWhat?" <blackmarlin nospam.asean-mail.com> wrote in message
 news:a7vmt7$t7i$1 digitaldaemon.com...
 OddesE wrote:

 "DrWhat?" <blackmarlin nospam.asean-mail.com> wrote in message
 news:a7uu46$epg$1 digitaldaemon.com...
 So, how about it? An extended for a datetime?
Two problems which I see, 1 converting time -> seconds is none trivial
alias DateTime extended; DateTime dt = 3.25; DateTime date = (long) dt; DateTime time = dt - date;
(need a multiply here, then another conversion to convert to seconds before we can even think of converting it into a national standard). total - 3 conversions, 1 subtract, 1 multiply.
Where do we need the multiply? DateTime dt = 3.25; DateTime date = (long) dt; // This should truncate dt into date, yielding 3.0; DateTime time = dt - date; // 3.25 - 3.0 = 0.25 // Represents 06:00u We only need a multiply if we want to convert to hours, minutes or seconds is it not? If we use a longint, won't we need a divide? Aren't divides more expensive than multiplies? I am not claiming any of these things to be true, I am really just asking, because this is what I thought it was. Also, why did Borland and MS both picked a 64 bit float if it would be that bad?
Because they were not at all concerned with being able to run on systems that didn't have floating point support. If D is successful, it will get used in a lot of embedded applications where floating point support in hardware just isn't there. We should not lightly write off these types of systems for a small benefit.
 - Getting the date is a truncating assignment.
 - Getting the time is a truncating assignment and a subtract.
 - Getting both is a truncating assignment, a subtract and
   another assignment.
 - Getting hours, seconds, minutes etc is a truncating assignment,
   a subtract and a multiply...That is not too expensive is it?


 That is going to take even more processor cycles that my solution,  and
may
 have some problems in terms of precision.  Also why use the FPU when
there
 is no need,  especially when it is less efficient.  But that is not the
 real problem.
How are you going to avoid floating point operations if you use a longint to represent elapsed microseconds? Won't you need to divide to get seconds, minutes, hours etc?
You have to do a divide, but not a floating point divide. Many CPUs support integer divide instructions, or at least hardware assist for divide instructions.
 I thought FPU operations were fast becoming as cheap as
 fp-emulation operations?
No. And again, many embedded systems have no need for floating point at all. They would be reluctant to include a floating point emulation package just to be able to handle dates and times.

Just to remind people: the number of CPUs sold for embedded applications *far* exceeds the number sold for desktop and server applications.

--
 - Stephen Fuld
   e-mail address disguised to prevent spam
Mar 28 2002
parent reply "Walter" <walter digitalmars.com> writes:
"Stephen Fuld" <s.fuld.pleaseremove att.net> wrote in message
news:a811kv$1jut$2 digitaldaemon.com...
 No.  And again, many embedded systems have no need for floating point at
 all.  They would be reluctant to include a floating point emulation
package
 just to be able to handle dates and times.
 Just to remind people, the number of CPUs sold for embedded applications
 *far* exceeds the number sold for desktop and server applications.
I think by using careful typedef's for the time, whether it is a 64 bit int or a 128 bit int or milliseconds or microseconds, etc., will be an implementation issue. The programmer will just use the typedef's and the api's for it, and it should not be relevant to him what the underlying representation is.

I know about the ole floating point date format, but I can't see any superiority in using floats for time. I just see losing precision as the exponent bits take away from the precision. Conversions from 64 bit int time to 64 bit float time are trivial, anyway; it's just a divide and an add.
Mar 29 2002
next sibling parent reply "OddesE" <OddesE_XYZ hotmail.com> writes:
"Walter" <walter digitalmars.com> wrote in message
news:a826te$69l$1 digitaldaemon.com...
 "Stephen Fuld" <s.fuld.pleaseremove att.net> wrote in message
 news:a811kv$1jut$2 digitaldaemon.com...
 No.  And again, many embedded systems have no need for floating point at
 all.  They would be reluctant to include a floating point emulation
package
 just to be able to handle dates and times.
 Just to remind people, the number of CPUs sold for embedded applications
 *far* exceeds the number sold for desktop and server applications.
I think by using careful typedef's for the time, whether it is a 64 bit
int
 or a 128 bit int or milliseconds or microseconds, etc., will be an
 implementation issue. The programmer will just use the typedef's and the
 api's for it, and it should not be relevant to him what the underlying
 representation is.

 I know about the ole floating point date format, but I can't see any
 superiority in using floats for time. I just see losing precision as the
 exponent bits take away from the precision. Conversions from 64 bit int
time
 to 64 bit float time is trivial, anyway, it's just a divide and an add.
OK, you've all convinced me, and I am outnumbered anyhow :) Can anyone point me to a location where I can get some specs on datetime formats, because I would like to try writing a datetime module...

--
Stijn
OddesE_XYZ hotmail.com
http://OddesE.cjb.net
_________________________________________________
Remove _XYZ from my address when replying by mail
Mar 29 2002
parent "Walter" <walter digitalmars.com> writes:
"OddesE" <OddesE_XYZ hotmail.com> wrote in message
news:a829dg$qva$1 digitaldaemon.com...
 OK, You've all convinced me, and I am outnumbered anyhow :)
 Can anyone point me to a location where I can get some specs
 on datetime formats, because I would like to try writing
 a datetime module...
I posted the OLE format in another post in this thread. Found it on google <g>.
Mar 29 2002
prev sibling parent reply "Pavel Minayev" <evilone omen.ru> writes:
"Walter" <walter digitalmars.com> wrote in message
news:a826te$69l$1 digitaldaemon.com...

 I think by using careful typedef's for the time, whether it is a 64 bit
int
 or a 128 bit int or milliseconds or microseconds, etc., will be an
 implementation issue. The programmer will just use the typedef's and the
 api's for it, and it should not be relevant to him what the underlying
 representation is.
Is it a good idea? I mean, when you do (a - b), you don't know whether the result is in milli-, micro-, or nanoseconds... probably it is better to fix measuring units, but vary size - int64, int128 etc.
Mar 29 2002
parent reply "Walter" <walter digitalmars.com> writes:
"Pavel Minayev" <evilone omen.ru> wrote in message
news:a82foe$2ked$1 digitaldaemon.com...
 "Walter" <walter digitalmars.com> wrote in message
 news:a826te$69l$1 digitaldaemon.com...
 I think by using careful typedef's for the time, whether it is a 64 bit
int
 or a 128 bit int or milliseconds or microseconds, etc., will be an
 implementation issue. The programmer will just use the typedef's and the
 api's for it, and it should not be relevant to him what the underlying
 representation is.
Is it a good idea? I mean, when you do (a - b), you don't know whether the result is in milli-, micro-, or nanoseconds... probably it is better to fix measuring units, but vary size - int64, int128 etc.
You have a constant like CLOCKS_PER_SECOND.
Mar 29 2002
parent reply "Pavel Minayev" <evilone omen.ru> writes:
"Walter" <walter digitalmars.com> wrote in message
news:a82km3$2m2l$1 digitaldaemon.com...

 Is it a good idea? I mean, when you do (a - b), you don't know whether
 the result is in milli-, micro-, or nanoseconds... probably it is
 better to fix measuring units, but vary size - int64, int128 etc.
You have a constant like CLOCKS_PER_SECOND.
So, (a - b) / CLOCKS_PER_SECOND? But this means an additional division, which is not that good in time-critical situations (where timers are used frequently).

Hm, and what's the problem with fixed units? Microseconds seem to be enough for most purposes; WHY would somebody decide to use something else in his implementation? Isn't it just better to standardize it?
Mar 29 2002
parent reply "Walter" <walter digitalmars.com> writes:
"Pavel Minayev" <evilone omen.ru> wrote in message
news:a82n5c$nqn$1 digitaldaemon.com...
 "Walter" <walter digitalmars.com> wrote in message
 news:a82km3$2m2l$1 digitaldaemon.com...
 Is it a good idea? I mean, when you do (a - b), you don't know whether
 the result is in milli-, micro-, or nanoseconds... probably it is
 better to fix measuring units, but vary size - int64, int128 etc.
You have a constant like CLOCKS_PER_SECOND.
So, (a - b) / CLOCKS_PER_SECOND? But this means an additional division, which is not that good in time-critical situations (where timers are used frequently).
The divide is only necessary when doing the report, which is not performance critical. (I've written profilers using the cycle timer instruction on the Pentium.)
 Hm, and what's the problem with fixed units? Microseconds seem
 to be enough for most purposes, WHY would somebody decide
 to use something else in his implementation? Isn't it just better
 to standardize it?
That's what people did years ago when unix time was in seconds. It worked for a couple decades, then there was all the falderol because it needed to change <g>.
Mar 29 2002
parent reply "Pavel Minayev" <evilone omen.ru> writes:
"Walter" <walter digitalmars.com> wrote in message
news:a82rr9$2e3f$1 digitaldaemon.com...

 The divide is only necessary when doing the report, which is not
performance
 critical. (I've written profilers using the cycle timer instruction on the
 Pentium.)
Not really. Suppose I want something to happen one microsecond later:

    time t = clock();
    ...
    if ((clock() - t) / (CLOCKS_PER_SEC / 1000000) >= 1)
        // do it
 That's what people did years ago when unix time was in seconds. It worked
 for a couple decades, then there was all the falderol because it needed to
 change <g>.
Then, use two longs, and nanosecond precision. THIS is going to be enough for just about everybody.
Mar 29 2002
parent reply "Walter" <walter digitalmars.com> writes:
"Pavel Minayev" <evilone omen.ru> wrote in message
news:a83g0b$1slr$1 digitaldaemon.com...
 "Walter" <walter digitalmars.com> wrote in message
 news:a82rr9$2e3f$1 digitaldaemon.com...
 The divide is only necessary when doing the report, which is not
performance
 critical. (I've written profilers using the cycle timer instruction on
the
 Pentium.)
 Not really. Suppose I want something to happen one microsecond later:
     time t = clock();
     ...
     if ((clock() - t) / (CLOCKS_PER_SEC / 1000000) >= 1)
         // do it
Try this for, say, a delay of 1 hundredth of a second:

    if ((clock() - t) >= CLOCKS_PER_SEC / 100)

The division is done at compile time.
Mar 29 2002
parent reply "Pavel Minayev" <evilone omen.ru> writes:
"Walter" <walter digitalmars.com> wrote in message
news:a83kjb$1vtr$2 digitaldaemon.com...

 Try this for, say, a delay of 1 hundreth of a second:
     if ((clock() - t) >= CLOCKS_PER_SEC / 100)

 The division is done at compile time.
Okay, you've caught me =)
But... not so fast! I don't like the name of the constant! =)
Probably something like TicksPerSecond would be better?
Mar 30 2002
parent "Walter" <walter digitalmars.com> writes:
"Pavel Minayev" <evilone omen.ru> wrote in message
news:a849mg$2kim$1 digitaldaemon.com...
 "Walter" <walter digitalmars.com> wrote in message
 news:a83kjb$1vtr$2 digitaldaemon.com...
 Try this for, say, a delay of 1 hundreth of a second:
     if ((clock() - t) >= CLOCKS_PER_SEC / 100)
 The division is done at compile time.
Okay, you've caught me =)
It's an old trick <g>.
 But... not so fast! I don't like the name of the constant! =)
Nobody's ever happy!
 Probably something like TicksPerSecond would be better?
Probably. I just threw out the former because that's what C uses.
Mar 30 2002
prev sibling parent Russell Borogove <kaleja estarcion.com> writes:
DrWhat? wrote:
So, how about it? An extended for a datetime?
Two problems which I see: 1. converting time -> seconds is non-trivial
seconds_into_the_day = fmod( days, 1.0 ) * 86400; // == 60*60*24
Mar 28 2002