From: mike3 on 6 Jan 2010 20:16

Hi.

I think I *may* have found a proof that it is impossible, in general, to give a method or procedure for producing consistent solutions to difference equations of analytic functions, at least in a sufficiently "natural" way. That is, solving

F(x+1) - F(x) = f(x)

for F(x).

The proposed proof:
--

Suppose f(x) = sum_{n=0...inf} a_n x^n. Then by Faulhaber's formula,

F(x) = sum_{k=1...inf} (sum_{n=1...inf} a_(n-1)/n C_(n,k) B_(n-k)) x^k,

at least formally, where C_(n,k) is a binomial coefficient, B_n is a Bernoulli number, and B_1 = -1/2. Now, if f(x) is not entire, or is entire but not of exponential type less than 2pi (as is the case for nonconstant 1-periodic functions), this series does not converge. Indeed, with appropriate choice of the a_(n-1), one of the inner sums can be made to resemble any divergent series. Thus, a method to compute F(x) for a given f(x), consistent with the above whenever it converges, would essentially be a divergent summation method -- regular and linear, though not necessarily stable -- for every divergent series whose growth rate is below a certain bound (namely, below what it takes for the corresponding a_n-described function to have a nonzero radius of convergence). Yet it appears that the existence of such a thing depends on the axiom of choice (see the Hahn-Banach theorem), and this means it is impossible to construct (try knocking yourself out making a well-ordering of the reals, or cutting and gluing a (mathematical) ball into 3 balls of the same size, etc., to get my drift. I think there's even a logical proof somewhere that such things are not definable/decidable/whatever, at least within the usual Zermelo-Fraenkel set theory).

--

Possible attack points: Note that I'm not entirely sure this is 100% good. Especially given that the stability requirement may not be necessary, and that there is the caveat that it may not need to work for 1-periodic functions (which would exclude some series). Note that one may say F(x) = x f(x) for such functions, but I'm not sure whether that would be consistent with any methods for extending the above formula (i.e., whether you can add it and the method remains regular and linear, especially linear), or whether there exist methods for which it is inconsistent (so adding it to them would not yield a consistent method), and whether the construction of *those* methods also requires AC.

If, however, the proof can be filled out and these points addressed, then the next question would obviously be: how far can one extend the operator before running into problems?
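As a quick sanity check of the formula above in the polynomial case (where every sum is finite), here is a minimal Python sketch; the helper names are just made up for illustration, and the Bernoulli numbers are generated with the B_1 = -1/2 convention used above.

    from fractions import Fraction
    from math import comb

    def bernoulli_numbers(N):
        # B_0..B_N with B_1 = -1/2, via sum_{j=0}^{m} C(m+1, j) B_j = 0 for m >= 1
        B = [Fraction(1)]
        for m in range(1, N + 1):
            B.append(-Fraction(1, m + 1) * sum(comb(m + 1, j) * B[j] for j in range(m)))
        return B

    def continuum_sum_coeffs(a):
        # coefficients b_1..b_N of F(x) = sum_k b_k x^k for the polynomial
        # f(x) = sum_{n=0}^{N-1} a[n] x^n, using b_k = sum_{n=k}^{N} a_(n-1)/n C(n,k) B_(n-k)
        N = len(a)                       # deg f = N - 1, so deg F = N
        B = bernoulli_numbers(N)
        return [sum(Fraction(a[n - 1], n) * comb(n, k) * B[n - k]
                    for n in range(k, N + 1))
                for k in range(1, N + 1)]

    # f(x) = x   -> F(x) = x(x-1)/2,       i.e. b = [-1/2, 1/2]
    # f(x) = x^2 -> F(x) = x(x-1)(2x-1)/6, i.e. b = [1/6, -1/2, 1/3]
    print(continuum_sum_coeffs([0, 1]))      # [Fraction(-1, 2), Fraction(1, 2)]
    print(continuum_sum_coeffs([0, 0, 1]))   # [Fraction(1, 6), Fraction(-1, 2), Fraction(1, 3)]

In both test cases F(x+1) - F(x) reproduces f(x) exactly, with F(0) = 0.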
From: master1729 on 6 Jan 2010 21:21

mike 3 wrote :

> <snip>

i never believed in "the" continuum sum anyway.

just as i don't believe in "the" summability method if it does not agree with analytic continuation.

the best you can do is - i think -

make a Taylor series expansion and replace x^n by its sum polynomial (replace 1 with x, x with x(x+1)/2, etc.)

and then use Mittag-Leffler expansion if necessary.

please reply with your thoughts.

regards

the master

tommy1729
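For concreteness, a rough sympy sketch of that substitution (the function names are made up for illustration), tried on f(x) = 2^x, which is entire of exponential type log 2 < 2pi, so the truncations do settle down:

    from sympy import symbols, summation, series, Poly

    x, j = symbols('x j')

    def sum_polynomial(n):
        # "sum polynomial" of x^n: 1^n + 2^n + ... + x^n, so n = 0 gives x,
        # n = 1 gives x*(x+1)/2, and so on
        return summation(j**n, (j, 1, x)).factor()

    def term_by_term_continuum_sum(f, order):
        # truncate the Taylor series of f at x = 0 and replace each x^n by its
        # sum polynomial; purely formal, the truncations need not settle down
        p = Poly(series(f, x, 0, order).removeO(), x)
        return sum(c * sum_polynomial(n) for (n,), c in p.terms())

    # f(x) = 2^x: the sum 2^1 + ... + 2^x is 2^(x+1) - 2 at the integers
    approx = term_by_term_continuum_sum(2**x, 12)
    print(approx.subs(x, 3).evalf())   # close to 2^4 - 2 = 14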
From: mike3 on 8 Jan 2010 19:28

On Jan 7, 5:21 am, master1729 <tommy1...(a)gmail.com> wrote:
> <snip>
>
> i never believed in "the" continuum sum anyway.
>
> just as i don't believe in "the" summability method if it does not agree with analytic continuation.
>
> the best you can do is - i think -
>
> make a Taylor series expansion and replace x^n by its sum polynomial (replace 1 with x, x with x(x+1)/2, etc.)
>
> and then use Mittag-Leffler expansion if necessary.
>
> please reply with your thoughts.

I do not believe the Mittag-Leffler expansion would work here. The problem is that when you do the procedure above, the coefficients of the series for the continuum sum are those inner sums b_k = sum_{n=1...inf} a_(n-1)/n C_(n,k) B_(n-k). So if that diverges, there is nothing finite that can be inserted into the Mittag-Leffler expansion.

Note that my (potential, at least) proof does not disprove the ability to choose some solution to the difference equation F(x+1) - F(x) = f(x) in a systematic way: just arbitrarily pick some function to fill a unit interval, then apply the recurrence F(x+1) = f(x) + F(x) again and again to construct the solution. For example, we could fill the interval [0, 1] with a linear interpolation between 0 and f(0), and then apply this equation. This would construct a continuous solution for all continuous functions f(x), analytic or not. It would not be differentiable, much less analytic (except if f(x) is linear), but it would be _a_ solution, and so "a" continuum sum/indefinite sum operator. Indeed it would be extremely general, as it could even be applied to discontinuous functions, though the result would not be continuous then. The possible proof merely says that there is no way to construct/define a continuum sum operator that sends analytic functions to analytic functions, at least not within the usual Zermelo-Fraenkel set theory. And I myself am not totally sure of its truth; see the possible caveats in the post.

Also, would anyone else here like to comment on or critique my proof?
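To make that seed-and-recurse construction concrete, here is a small Python sketch (the name and the restriction to x >= 0 are just choices for the illustration):

    import math

    def make_indefinite_sum(f):
        # seed F on [0, 1] by linear interpolation between F(0) = 0 and F(1) = f(0),
        # then extend to x >= 1 with the recurrence F(x) = F(x - 1) + f(x - 1)
        def F(x):
            if x < 0:
                raise ValueError("this sketch only extends to the right of 0")
            m = max(math.ceil(x) - 1, 0)          # steps needed to land in [0, 1]
            return (x - m) * f(0.0) + sum(f(x - i) for i in range(1, m + 1))
        return F

    # Example with f(x) = x: the analytic indefinite sum is x*(x - 1)/2; this F agrees
    # with it at the integers but is only piecewise linear (continuous, not smooth).
    F = make_indefinite_sum(lambda t: t)
    print(F(4.0))            # 6.0  ( = 0 + 1 + 2 + 3 )
    print(F(3.5) - F(2.5))   # 2.5  ( = f(2.5), so the difference equation holds )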
From: master1729 on 8 Jan 2010 21:13

mike 3 wrote :

> <snip>
>
> I do not believe the Mittag-Leffler expansion would work here. The problem is that when you do the procedure above, the coefficients of the series for the continuum sum are those inner sums b_k = sum_{n=1...inf} a_(n-1)/n C_(n,k) B_(n-k). So if that diverges, there is nothing finite that can be inserted into the Mittag-Leffler expansion.

yes, but i think you missed a part of what i meant.

suppose f(0) + f(1) + f(2) + ... + f(x) does converge for all positive integer x and for x = oo.

then F(x) = f(x) + F(x-1), and F(x) is given by my method mentioned in my first reply:

make a Taylor series expansion and replace x^n by its sum polynomial (replace 1 with x, x with x(x+1)/2, etc.)

and then use Mittag-Leffler expansion if necessary. (and if possible of course)

the goal is to express e.g. F(pi).

not sum sum sum ... y times ... f(x)

nor sum a_n = oo => (summability method) 1/2

> <snip>
>
> The possible proof merely says that there is no way to construct/define a continuum sum operator that sends analytic functions to analytic functions, at least not within the usual Zermelo-Fraenkel set theory. And I myself am not totally sure of its truth; see the possible caveats in the post.

right.

that's part of why i don't believe in 'the' continuum sum or 'the' summability method.

> Also, would anyone else here like to comment on or critique my proof?

im sorry its just me :)

regards

the master

tommy1729

ps: in the context of tetration and half-iterates, one could add restrictions that make general(ized) sums of reals unique, usually a kind of strictly increasing condition.
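Continuing the sympy sketch given after the first reply (it reuses x and term_by_term_continuum_sum from there), one way to read "express F(pi)" in a case where the truncations do settle down, f(x) = 2^x:

    from sympy import pi

    # for f(x) = 2^x the sum 2^1 + ... + 2^x is 2^(x+1) - 2 at the integers, and the
    # term-by-term series converges to that same expression at non-integer x
    F_pi = term_by_term_continuum_sum(2**x, 20).subs(x, pi).evalf()
    print(F_pi, (2**(pi + 1) - 2).evalf())   # both approximately 15.65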
From: mike3 on 9 Jan 2010 15:24

On Jan 9, 5:13 am, master1729 <tommy1...(a)gmail.com> wrote:
> mike 3 wrote :
<snip>
> yes, but i think you missed a part of what i meant.
>
> suppose f(0) + f(1) + f(2) + ... + f(x) does converge for all positive integer x and for x = oo.
>
> then F(x) = f(x) + F(x-1), and F(x) is given by my method mentioned in my first reply:
>
> make a Taylor series expansion and replace x^n by its sum polynomial (replace 1 with x, x with x(x+1)/2, etc.)
>
> and then use Mittag-Leffler expansion if necessary. (and if possible of course)

Did you see what I mentioned about how this will not work? Indeed, it appears that the direct formula (no Mittag-Leffler stuff -- that doesn't help any!) does not work for any function that is not of exponential type less than 2pi (on the _complex_ plane), nor for any function with singularities.

> the goal is to express e.g. F(pi).
>
> not sum sum sum ... y times ... f(x)
>
> nor sum a_n = oo => (summability method) 1/2
>
> > The possible proof merely says that there is no way to construct/define a continuum sum operator that sends analytic functions to analytic functions, at least not within the usual Zermelo-Fraenkel set theory.
>
> right.
>
> that's part of why i don't believe in 'the' continuum sum or 'the' summability method.

You mean because it can't be constructed? (At least, that is, if you want the operator to be linear.)

> > Also, would anyone else here like to comment on or critique my proof?
>
> im sorry its just me :)
>
> regards
>
> the master
>
> tommy1729
>
> ps: in the context of tetration and half-iterates, one could add restrictions that make general(ized) sums of reals unique, usually a kind of strictly increasing condition.
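As a rough numerical illustration of that 2pi threshold (my example, not from the thread, reusing the Bernoulli helper from the first sketch): for f(x) = e^(c x) the k = 1 inner sum collapses to sum_{m>=0} B_m c^m / m!, whose partial sums settle (to c/(e^c - 1)) only when |c| < 2pi.

    from fractions import Fraction
    from math import comb, exp, factorial

    def bernoulli_numbers(N):
        # B_0..B_N with B_1 = -1/2, as in the first sketch
        B = [Fraction(1)]
        for m in range(1, N + 1):
            B.append(-Fraction(1, m + 1) * sum(comb(m + 1, j) * B[j] for j in range(m)))
        return B

    def b1_partial_sums(c, N):
        # partial sums of b_1 = sum_{m>=0} B_m c^m / m! for f(x) = exp(c*x)
        B = bernoulli_numbers(N)
        out, s = [], 0.0
        for m in range(N + 1):
            s += float(B[m]) * c**m / factorial(m)
            out.append(s)
        return out

    print(b1_partial_sums(1.0, 60)[-1], 1.0 / (exp(1.0) - 1.0))   # ~0.582 vs 0.582: c = 1 < 2*pi
    print([round(t) for t in b1_partial_sums(10.0, 60)[::15]])     # c = 10 > 2*pi: magnitudes blow up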