From: D Yuniskis on 2 Jul 2010 17:49

Hi Peter,

Peter Dickerson wrote:
[snip]
>>> Sorry, I don't understand where the 2.5 sqrts/iteration comes from. When
>>> you split the line and have two sets of four control points you need
>>> eight distances (sqrt) but none of those have been computed at the
>>> previous level. The only place I see five appear is that there are five new
>>> points to calculate, one of which is used twice (giving six) and the
>>> start and end points from the original (now we have eight). None of that
>>> uses sqrts and the time to calculate it is potentially insignificant
>>> relative to the sqrts (e.g. it can be done with integers of sufficient
>>> size by rescaling by eight at each nesting).
>>
>> Given a Start point (S), departing control point (D), arriving
>> control point (A), and End point (E), [i.e., these are P0 - P3]:
>>
>> The first iteration computes the lengths of |SD|, |DA|, |AE| and
>> compares them against |SE| (i.e., 4 sqrts).
>>
>> Then the curve is subdivided into two half-curves ("Left" and
>> "Right"). The control points for the two half curves are:
>>
>>   M = (D+A)/2
>>
>>   L0 = S
>>   R3 = E
>>
>>   L1 = (S+D)/2
>>   R2 = (A+E)/2
>>
>>   L2 = (L1+M)/2
>>   R1 = (M+R2)/2
>>
>>   L3 = R0 = (L2+R1)/2
>>
>> Looking at the Left half curve L0, L1, L2, L3:
>>
>> Compute |L0L1|, |L1L2| and |L2L3| and compare against |L0L3|.
>>
>> But, |L0L1| is actually just half of |AD| -- which we already
>> have from the first iteration!
>
> Not sure that's so. You mean |SD|, I guess.

Yes, sorry. (I was mentally using A, B, C, D as point names and had
to adjust them to fit the previous names I had used. Obviously got
my wires crossed. <:-( )

>> However, we do have to compute |L1L2|. And |L0L3|.
>> There's two sqrts.
>>
>> OTOH, when we compute |L2L3|, this is really half of |L2R1|.
>> So, whatever result we get there we can use for the |R0R1|
>> computation! So, that's a 0.5 sqrt.
>>
>> (Did I do that right?)
>
> OK. L3 is the middle of L2 and R1.
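The halving step above (de Casteljau subdivision at t = 1/2) is easy to get wrong with all those midpoints, so here is a minimal C sketch of exactly those formulas; the names `Pt`, `mid`, and `split_half` are illustrative, not from the thread:

```c
#include <assert.h>
#include <math.h>

typedef struct { double x, y; } Pt;

static Pt mid(Pt a, Pt b)
{
    return (Pt){ (a.x + b.x) / 2.0, (a.y + b.y) / 2.0 };
}

/* Split the cubic S,D,A,E (c[0]..c[3]) at t = 1/2 into Left and Right
   halves, exactly as in the post: M = (D+A)/2, L1 = (S+D)/2, ... */
static void split_half(const Pt c[4], Pt L[4], Pt R[4])
{
    Pt M = mid(c[1], c[2]);      /* (D+A)/2                            */
    L[0] = c[0];                 /* L0 = S                             */
    R[3] = c[3];                 /* R3 = E                             */
    L[1] = mid(c[0], c[1]);      /* (S+D)/2 -- so |L0L1| = |SD|/2      */
    R[2] = mid(c[2], c[3]);      /* (A+E)/2 -- so |R2R3| = |AE|/2      */
    L[2] = mid(L[1], M);
    R[1] = mid(M, R[2]);
    L[3] = R[0] = mid(L[2], R[1]);
}
```

Note that |L0L1| and |R2R3| are halves of chords already measured at the parent level, which is where the reused sqrts in this discussion come from.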
An awful lot of degeneracy here.

> So, you're saying that you need five additional sqrts to compute one line, not
> five sqrts. In my tables the sqrts column is the number of sqrts needed to

I need 3 to compute one "half curve" -- or, 5 to compute *two* half
curves.

> calculate the second column. But in your case you need to include *some of*

I need to include *all* of the counts above. Sometimes, I might not
have to use all of the counts on the "current line" (or on lines
below) -- if I can assure myself that the estimate for that "half
curve" is good enough. (i.e., stop recursing on that half curve)

> the counts higher up the table. Then when it comes to using Romberg on your
> method you need all the sqrts done for the previous row whereas you only
> need some to compute the current. So not directly comparable. In particular

Correct. If you don't know a priori how far down the table you are
going to have to "start", then you have to work your way down to it,
dynamically examining the goodness of the estimate to determine if
you're far enough down.

> if we have some method to estimate the depth needed ahead of time then my
> method would not need to start from the top (although its a convenient way

Exactly! Hence my interest in being able to determine which curves
are "hard" (require fine subdivision) vs. "easy" (can be approximated
with simple line segments) by examining the control points!

> to calculate the sub-curve endpoints). So I would only do sqrts that are
> needed whereas you'd need to do some earlier ones and pass them down.

Yes. Without knowledge of how good an estimate will be "a priori",
I make a stab at computing the length (estimate) in the *hope* that
it will be "good enough". The subdivision algorithm lends itself to
a recursive divide-and-conquer approach whereby the initial problem
is now two similar, but *smaller*, problems (which are tackled the
same way).
> Still, the naive method+Romberg is very similar work to yours without, and
> both are O(h^4).
>
>>>> Trusting your "results", I assume the table would be:
>>>>
>>>>   sqrts   DY length     DY Romberg
>>>>       4   1.000000000
>>>>       5   1.002470605   1.002635312
>>>>      10   1.000128079   0.999971911
>>>>      20   1.000007988   0.999999982
>>>>      40   1.000000499   1.000000000
>
> see above.
>
>>>> I.e., at this point, I have invested *79* sqrts (4+5+10+...).
>>>> Note, however, that some of those might never occur if the
>>>> estimate for a particular "subcurve" (ick!) looks "good
>>>> enough" (it need not be further subdivided).
>>>
>>> You've still lost me.
>>
>> Assume I am correct with 2.5 sqrts per "half curve".
>> And 4 on the first iteration.
>>
>> So, second iteration is 5 (2.5 + 2.5) sqrts (assuming
>> I am not looking at the closeness of fit).
>>
>> Third iteration cuts each of the two half curves from
>> the second iteration in half using 5 sqrts for each
>> of the second iteration half curves -- 10 sqrts total.
>>
>> 20 for the next. etc.
>>
>> Since my algorithm starts with the *first* iteration
>> and conditionally subdivides, it takes on the costs of
>> each iteration as it subdivides the curve(s). So,
>> my total cost is the sum of the costs of *every*
>> iteration.
>>
>> I.e., my code computes "sum of segments" and "endpoint
>> length"; checks to see how close they are to each other
>> (or, does your magic Romberg estimate and compares
>> *that* to these lengths to see how closely *it*
>> fits); and, if the fit isn't good enough, *then*
>> it subdivides the curve and does another iteration of
>> chord length computations. (but, it has already
>> incurred the cost of the computations leading up to
>> that decision to recurse!)
>
> I'm just doing the calculation to a fixed depth to see how fast it
> converges. There is no reason why I couldn't compare a row against the
> previous and stop when they are close enough. Close enough means meeting
> some nearness criterion for the non-Romberg results. That nearness will be
> much larger than the actual error after Romberg.

Yes. But, if you do that, then you have to assume the costs of
computing that "not-good-enough" estimate in the total cost of your
final estimate.

E.g., if I "get lucky" and my first estimate fits "perfectly" (for
sufficiently small values of "perfectly"), then I quit after 4 sqrts.
If *not*, I get to salvage up to two of those sqrts in my next
iteration (recursion).

>>>> If I blindly force N subdivisions, it changes the mix.
>>>>
>>>> I'll try to make some time to code up a "real" algorithm
>>>> and instrument SQRT() to do the counting for me with
>>>> various "estimate tolerances".
>>>
>>> Surely you can identify any 'sharp' regions of the curve from the radius
>>> of curvature (or at least its square, otherwise you'll need a sqrt for
>>> that). A proxy might be dx/dt*d2y/dt2 - dy/dt*d2x/dt2 being small.
From: Albert van der Horst on 3 Jul 2010 12:11

In article <i0ld5g$ka9$1(a)news.eternal-september.org>,
Peter Dickerson <first.last(a)tiscali.invalid> wrote:
>"Albert van der Horst" <albert(a)spenarnc.xs4all.nl> wrote in message
<SNIP>
>> /* From numerical recipes with improved accuracy constants
>>    from LGPL-ed internet */
>> double qgauss( double (*func)(double), double a, double b )
>> {
>>     int j;
>>     double xr,xm,dx,s ;
>>     static double x[] = {
>>         0.0,
>>         0.1488743389816312108848260,
>>         0.4333953941292471907992659,
>>         0.6794095682990244062343274,
>>         0.8650633666889845107320967,
>>         0.9739065285171717200779640
>>     };
>>     static double w[] = {
>>         0.0,
>>         0.2955242247147528701738930,
>>         0.2692667193099963550912269,
>>         0.2190863625159820439955349,
>>         0.1494513491505805931457763,
>>         0.0666713443086881375935688
>>     };
>>     xm=0.5*(b+a);
>>     xr=0.5*(b-a);
>>     s=0;
>>     for (j=1; j<=5; j++)
>>     {
>>         dx = xr*x[j];
>>         s += w[j] * ( (*func)(xm+dx) + (*func)(xm-dx) );
>>     }
>>     return s *= xr;
>> }
<SNIP>
>
>obviously I did something wrong. I was expecting great things but got poor
>results. Perhaps a typo in one of the constants. However, 5 points ought to
>be 5 sqrts...

The above code from numerical recipes uses 10 points; there are just
5 constants, because they take advantage of symmetry where those
are pairwise the same. At each point I need to calculate a square root.

>
>peter

--
Albert van der Horst, UTRECHT,THE NETHERLANDS
Economic growth -- being exponential -- ultimately falters.
albert(a)spe&ar&c.xs4all.nl &=n http://home.hccnet.nl/a.w.m.van.der.horst
From: Peter Dickerson on 4 Jul 2010 01:14

"Albert van der Horst" <albert(a)spenarnc.xs4all.nl> wrote in message
news:l4zozh.axz(a)spenarnc.xs4all.nl...
> In article <i0ld5g$ka9$1(a)news.eternal-september.org>,
> Peter Dickerson <first.last(a)tiscali.invalid> wrote:
>>"Albert van der Horst" <albert(a)spenarnc.xs4all.nl> wrote in message
> <SNIP - the qgauss() code quoted above>
>>
>>obviously I did something wrong. I was expecting great things but got poor
>>results. Perhaps a typo in one of the constants. However, 5 points ought
>>to be 5 sqrts...
>
> The above code from numerical recipes uses 10 points, there are just
> 5 constants, because they take advantage of symmetry where those
> are pairwise the same.
> At each point I need to calculate a square root.

Yes, I was using five points in total. If this works for the trickier
cases that Don raised then I'd say that's job done!

Of course the static double arrays should be static const, and the 0-th
element is wasted, presumably due to translation from the Fortran
version. An integerized version of this scheme would be more troublesome
since the constants need high precision.

Peter
From: Albert van der Horst on 5 Jul 2010 18:26

In article <i0p5c1$of3$1(a)news.eternal-september.org>,
Peter Dickerson <first.last(a)tiscali.invalid> wrote:
>"Albert van der Horst" <albert(a)spenarnc.xs4all.nl> wrote in message
>news:l4zozh.axz(a)spenarnc.xs4all.nl...
<SNIP - the qgauss() code quoted above>
>>
>> The above code from numerical recipes uses 10 points, there are just
>> 5 constants, because they take advantage of symmetry where those
>> are pairwise the same.
>> At each point I need to calculate a square root.
>
>Yes, I was using five points in total. If this works for the trickier
>cases that Don raised then I'd say that's job done!
>Of course the static double arrays should be static const and the 0-th
>element is wasted presumably due to translation from the Fortran version.
>An integerized version of this scheme would be more troublesome since the
>constants need high precision.

Should be no problem with fixed point, as long as the precision of
the ints is sufficient. I would suspect overflow needs more
consideration than precision.

Somehow I have the impression that it is not clear how different my
approach is. Maybe an example. Suppose we want to know the length of
a circle. The parametrization is (sin(t), cos(t)), t = 0..2.pi:

  L = int( len(t) ) dt

We know dsin(t) = cos(t) dt and dcos(t) = -sin(t) dt.

Here again

  ds = sqrt( dx^2 + dy^2 ) = sqrt( cos(t)^2 + sin(t)^2 ) dt = ( 1 ) dt

or

  L = int( 1 ) dt  ==>  L = t1 - t0 = 2.pi - 0 = 2.pi

So as long as the length along the curve goes smoothly with t, the
calculation is precise. There is no talk about dividing the curve
itself into segments. (Gauss intervals could be subdivided where the
*length* depends on t more wildly.)

>
>Peter

--
Albert van der Horst, UTRECHT,THE NETHERLANDS
Economic growth -- being exponential -- ultimately falters.
albert(a)spe&ar&c.xs4all.nl &=n http://home.hccnet.nl/a.w.m.van.der.horst
From: Peter Dickerson on 6 Jul 2010 02:11
"Albert van der Horst" <albert(a)spenarnc.xs4all.nl> wrote in message
news:l53vou.irg(a)spenarnc.xs4all.nl...
> In article <i0p5c1$of3$1(a)news.eternal-september.org>,
[snip]
>
> Should be no problem with fixed point, as long as the precision of
> the ints is sufficient. I would suspect overflow needs more
> consideration than precision.
>
> Somehow I have the impression that it is not clear how different my
> approach is. Maybe an example.
> Suppose we want to know the length of a circle.
> The parametrization is (sin(t), cos(t)), t = 0..2.pi
>
> L = int( len(t) ) dt
> We know dsin(t) = cos(t) dt, dcos(t) = -sin(t) dt
>
> Here again ds = sqrt( dx^2 + dy^2 ) = sqrt( cos(t)^2 + sin(t)^2 ) dt =
> ( 1 ) dt
>
> or L = int( 1 ) dt  ==>  L = t1 - t0 = 2.pi - 0 = 2.pi
>
> So as long as the length along the curve goes smoothly with t, the
> calculation is precise. There is no talk about dividing the curve
> itself into segments. (Gauss intervals could be subdivided where the
> *length* depends on t more wildly.)

Yes, I do understand how it works quite well, including the derivation.
My problem was that I was only using five points. However, some
experimentation with your code and more inconvenient control points
leads to poor results in the same way that I saw with five points,
except of course that the errors are smaller. The problem is that Gauss
quadrature is a polynomial-fit procedure, but ds/dt has a square root
in it. That is problematic for the Romberg procedure too. The reason
you got such good results on the example set of control points is that
the curve was symmetric. If ds/dt is significantly different in one
region of the curve than another, then the sqrt term leads to a poor
polynomial fit. If not, then the sqrt can be well approximated by a
power-series expansion.

For the record, I have a masters in mathematics from Cambridge. It's
just that 30 years of gravity since then has caused the contents of my
neurons to drain to my feet and be lost in nail clippings!

Peter