From: glen herrmannsfeldt on 10 Jun 2010 13:34 deltaquattro <deltaquattro(a)gmail.com> wrote: (snip, I wrote) >> Well, you could check for the (unlikely) accidental occurrence and >> substitute a different (nearby) value. > Hmmm, not sure I got your point, could you show me a short example? Say that 9.99e30 was the "missing point" value. In single precision, it isn't so unlikely that it could occur in generating billions of data points. So, test for it and substitute something else: if(x.eq.9.99e30) x=9.99001e30 >> Note that 9e99 is too big >> for the single precision REAL on most systems. I believe that >> before IEEE, this was the usual solution. Likely still even with >> IEEE, as long as non-IEEE machines are around. > No problem, as said before I made a mistake, it's really a DOUBLE > PRECISION variable. OK, much less likely to accidentally hit on a specific value in normal computation. If this is data that is read in with only a small amount of computation done on it, then it is somewhat more likely. -- glen
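[Editor's aside: glen's one-liner, expanded into a minimal self-contained sketch in double precision; the kind parameter and variable names are illustrative, not from the thread.]

```fortran
program sentinel_sub
  implicit none
  integer, parameter :: r8 = selected_real_kind(15)
  real(r8), parameter :: MISSING = 9.99e30_r8
  real(r8) :: x

  x = 9.99e30_r8                        ! pretend a computation landed exactly on the sentinel
  if (x == MISSING) x = 9.99001e30_r8   ! nudge it to a nearby non-sentinel value
  print *, x == MISSING                 ! now F: x no longer collides with the sentinel
end program sentinel_sub
```

The nudge changes the value by about one part in 10^5, which is harmless if a genuine data value this large already signals bad input, as discussed later in the thread.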
From: glen herrmannsfeldt on 10 Jun 2010 13:40 Richard Maine <nospam(a)see.signature> wrote: > deltaquattro <deltaquattro(a)gmail.com> wrote: >> Ok, so you're for the solution using LOGICAL arrays. Do you think it >> would be better to implement a "parallel" logical array for each real >> array, or to define arrays of derived type like: >> type safearray >> logical :: isvalid >> real(r8):: cell >> end type > Either works. Depends which is most convenient for other reasons. And even though the logical is smaller than the real(r8), on most machines to maintain alignment of the real(r8) the whole structure will have twice the size of a real(r8). It might be better to have the real(r8) before the logical, though as far as I know it doesn't make much difference. -- glen
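[Editor's aside: glen's padding point can be checked directly with the Fortran 2008 storage_size intrinsic; a small sketch, with the real(r8) placed first as he suggests. The exact size is processor-dependent.]

```fortran
program padding_demo
  implicit none
  integer, parameter :: r8 = selected_real_kind(15)
  type safearray
    real(r8) :: cell      ! 8 bytes
    logical  :: isvalid   ! typically 4 bytes, then padded to keep cell aligned
  end type safearray
  type(safearray) :: s
  print *, storage_size(s) / 8, 'bytes per element'   ! commonly 16, not 12
end program padding_demo
```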
From: Gib Bogle on 10 Jun 2010 17:00 glen herrmannsfeldt wrote: > The language has no rule against equality tests for floating > point values, but one does have to be careful. I expect one would be safe doing something like this: real, parameter :: MISSING = 1.0e30 real, parameter :: NEAR_MISS = 9.9999e29 .... !when generating data values if (datavalue == MISSING) then datavalue = NEAR_MISS endif if (missing_value()) then datavalue = MISSING endif .... !when using data values if (datavalue == MISSING) then ! treat value as missing endif Of course, it would be best if MISSING were an extremely unlikely data value, and in the unlikely event of its occurrence, one would have to be unconcerned about replacing MISSING with NEAR_MISS.
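[Editor's aside: Gib's fragment assembled into a runnable sketch, with made-up data values standing in for his missing_value() test and generation loop.]

```fortran
program missing_demo
  implicit none
  real, parameter :: MISSING   = 1.0e30
  real, parameter :: NEAR_MISS = 9.9999e29
  real    :: datavalues(3)
  integer :: i

  ! when generating data: element 2 accidentally equals MISSING,
  ! while element 3 could not be computed at all
  datavalues = [1.5, 1.0e30, 0.0]
  where (datavalues == MISSING) datavalues = NEAR_MISS
  datavalues(3) = MISSING

  ! when using data: only element 3 is treated as missing
  do i = 1, 3
    if (datavalues(i) == MISSING) print *, 'element', i, 'is missing'
  end do
end program missing_demo
```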
From: aruzinsky on 10 Jun 2010 18:33 On Jun 9, 10:31 am, deltaquattro <deltaquat...(a)gmail.com> wrote: > Hi, > > this is really more of a "numerical computing" question, so I cross- > post to sci.math.num.analysis too. I decided to post on > comp.lang.fortran, anyway, because it is full of computational > scientists and anyway there are some sides of the issue specifically > related to the Fortran language. > > The problem is this: I am modifying a legacy code, and I need to > compute some REAL values which I then store in large arrays. Sometimes > it's impossible to compute these values: for example, think of > interpolating a table to a given abscissa; it may happen that the > abscissa falls outside the curve boundaries. I have code which checks > for this possibility, and if this happens the interpolation is not > performed. However, now I must "store" somewhere the information that > interpolation was not possible for that array element, and inform the > user of it. Since the values can be either positive or negative, I > cannot use tricks like initializing the array element to a negative > value. > > I'm sure this has happened to you before: which solution did you use? > Basically, I can think of three ways: > > 1. For each REAL array, I declare a LOGICAL array of the same shape, > which contains .false. for correct values and .true. for missing values. I guess > that's the cleanest way, but I have a lot of arrays and I'd rather not > declare an extra array for each of them. I know it's not a memory > issue (obviously LOGICAL arrays don't occupy a lot of space, even if > they are big in my case!), but to me it seems like I'm adding > redundant code. It would be better to declare arrays of a derived > type, each element containing a REAL and a LOGICAL, but this would > force me to modify the code in all the places where the arrays are > used, and it's quite a big code. > > 2. I initialize a missing value to an extremely large positive or > negative value, like 9e99. 
I think that's how the problem is usually > solved in practice, isn't it? I'm a bit worried that this is not > entirely "clean", since such values could in theory also result from > the interpolation. However, since reasonable values of all the > interpolated quantities are usually in the range -100/100, when this > happens usually it is related to errors in the interpolation table > data. So most likely it indicates an error which must be signaled to > the user. > > 3. One could initialize the "missing" values to NaN. However, I then > have to test for the array element being a NaN, when I produce my > output for the user. From what I remember about Fortran and NaN, > there's (or there was) no portable way to do this...am I wrong? > > I would really appreciate your help on this issue, since I really > don't know which way to choose and currently I'm stuck! Thanks in > advance, > > Best Regards > > Sergio Rossi What is the size of your Fortran's LOGICAL data type? I only use C++. Visual C++ has the data type bool, but sizeof(bool) = 1 byte, so I wrote my own array class (both 1D and 2D) of 1 bit per element. To my surprise, access is very fast and now I never hesitate to use it. Maybe you can do the same in Fortran.
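[Editor's aside: aruzinsky's 1-bit-per-element idea can be reproduced in standard Fortran with the bit intrinsics, packing flags into default integers; a minimal sketch, with the element index chosen arbitrarily.]

```fortran
program bitflags
  implicit none
  integer, parameter :: n = 1000
  integer, parameter :: w = bit_size(0)            ! bits in a default integer
  integer :: flags((n + w - 1) / w)                ! about n/32 integers hold n flags
  integer :: i

  flags = 0
  i = 42
  flags(1 + (i-1)/w) = ibset(flags(1 + (i-1)/w), mod(i-1, w))   ! mark element i missing
  print *, btest(flags(1 + (i-1)/w), mod(i-1, w))               ! T
  print *, btest(flags(1 + i/w),     mod(i,   w))               ! F (element i+1 untouched)
end program bitflags
```

On option 3 in the quoted post: since Fortran 2003, the intrinsic module ieee_arithmetic does provide a portable ieee_is_nan(x) test on processors that support IEEE arithmetic, so the "no portable way" concern applies mainly to older compilers.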
From: aruzinsky on 10 Jun 2010 20:27
On Jun 10, 4:33 pm, aruzinsky <aruzin...(a)general-cathexis.com> wrote: > (snip: full quote of the original post and my reply above) Dear Sergio Rossi, Do not cross post in this manner. Many people hate cross posters for good reason. I replied in Sci.Math.Num-analysis and my post doesn't appear there but instead shows up in comp.lang.fortran. If I didn't know better from experience, you would have jerked me around into replying more than once. 
And, later I might wonder why my Google profile shows me posting in forums I never visited. Because of people like you.