From: pete on 21 Jun 2005 15:47

Randy Howard wrote:
>
> In article <1119341543.541636.99590(a)g44g2000cwa.googlegroups.com>,
> spinoza1111(a)yahoo.com says...
> > Chris Dams wrote:
> > > Of course not.
> > > Even if the bitwise representation is not accessible, any
> > > language that allows division by two can easily be
> > > used to make your own
> > > binary representation. The algorithm will still be O(n).
> >
> > Yes, but...its execution time will be n*W,
> > a constant (the bit width)
> > which is rather large.
>
> So you *still* do not understand O() notation.

He still doesn't realise that Big Oh is only the order of the dominant
term of the equation that governs the running time, and not the
equation of the running time itself.  Doubling the computer speed cuts
the running time in half and has no effect on Big Oh.  That is as it
should be.  Any change in either the algorithm or the hardware that
cuts the running time in half has no effect on Big Oh.  Diverse
implementations of bubblesort, some of which run more than twice as
fast as others, are all O(n * n) in the worst case and also in the
average case.
#include <stddef.h>

#define BYTE_SWAP(A, B)          \
{                                \
    p1 = (A);                    \
    p2 = (B);                    \
    end = p2 + size;             \
    do {                         \
        swap = *p1;              \
        *p1++ = *p2;             \
        *p2++ = swap;            \
    } while (p2 != end);         \
}

void bbsort(void *base, size_t nmemb, size_t size,
            int (*compar)(const void *, const void *))
{
    unsigned char *last, *middle;
    unsigned char swap, *p1, *p2, *end;

    if (nmemb-- > 1) {
        last = base;
        last += nmemb * size;
        do {
            middle = base;
            do {
                if (compar(middle, middle + size) > 0) {
                    BYTE_SWAP(middle, middle + size);
                }
                middle += size;
            } while (middle != last);
        } while (--nmemb != 0);
    }
}

void b0sort(void *base, size_t nmemb, size_t size,
            int (*compar)(const void *, const void *))
{
    unsigned char *last, *middle;
    unsigned char swap, *p1, *p2, *end;

    if (nmemb-- > 1) {
        last = base;
        last += nmemb * size;
        do {
            middle = base;
            do {
                if (compar(middle, middle + size) > 0) {
                    BYTE_SWAP(middle, middle + size);
                }
                middle += size;
            } while (middle != last);
            last -= size;
        } while (--nmemb != 0);
    }
}

void b_sort(void *base, size_t nmemb, size_t size,
            int (*compar)(const void *, const void *))
{
    unsigned char *last, *next, *middle;
    unsigned char swap, *p1, *p2, *end;

    if (nmemb-- > 1) {
        last = base;
        last += nmemb * size;
        do {
            middle = next = base;
            do {
                if (compar(middle, middle + size) > 0) {
                    BYTE_SWAP(middle, middle + size);
                    next = middle;
                }
                middle += size;
            } while (middle != last);
            last = next;
        } while (last != base);
    }
}

void c_sort(void *base, size_t nmemb, size_t size,
            int (*compar)(const void *, const void *))
{
    unsigned char *last, *next, *middle;
    unsigned char *p1, *p2, *end, swap;

    if (nmemb-- > 1) {
        next = base;
        next += nmemb * size;
        do {
            middle = last = next;
            do {
                if (compar(middle - size, middle) > 0) {
                    BYTE_SWAP(middle - size, middle);
                    next = middle;
                }
                middle -= size;
            } while (middle != base);
            if (last == next) {
                break;
            }
            middle = base = next;
            do {
                if (compar(middle, middle + size) > 0) {
                    BYTE_SWAP(middle, middle + size);
                    next = middle;
                }
                middle += size;
            } while (middle != last);
        } while (base != next);
    }
}

-- 
pete
From: Tim Rentsch on 21 Jun 2005 16:19

pete <pfiland(a)mindspring.com> writes:

> Richard Harter wrote:
> >
> > On Tue, 21 Jun 2005 13:23:53 GMT, pete <pfiland(a)mindspring.com> wrote:
> >
> > > A simple O(n*log(n)) solution for floating types
> > > would be to heapsort the array
> > > and do a sequential check for equality.
> >
> > And, of course, one can do it in O(n) time and O(1) space without the
> > xor trick. To quote someone (me):
> >
> > "If the array may be rearranged then we can again find the singleton
> > with O(1) space and O(n) time. The general idea is to select a pivot
> > value and then partition the array into elements smaller than the
> > pivot, equal to the pivot, and larger than the pivot. If the pivot is
> > the singleton we are done; otherwise repeat on the partition having an
> > odd number of elements."
>
> I think that algorithm suffers from the same O(n * n)
> worst case as simple quicksort.

Richard's algorithm is making use of finding the median value of the
array, which can be done in O(n) time.  If you can guarantee that the
array is split nearly in half each time, then it's possible to
subdivide and zero in on the unique element in O(n) time; using the
median as the pivot value provides that guarantee.

> It's possible that in a worst case,
> all of your even sized partitions
> might be two elements in size.
> In that case, the partitioning will be O(n),
> and the iterations will be O(n).

At each step there are only two partitions - the values larger than
the pivot, and the values smaller than the pivot.  We take care to
exclude the pivot pair from the partitions, or if there is no pair
value for the pivot then that's the unique element.  With an odd
number of elements to start with and an even number of elements
removed, the two partitions must have an odd number of elements in one
case and an even number in the other case.
The partition with the odd number of elements has the unique element value, and can't have more than (K+1)/2 elements if the array being divided had K elements. (In fact, it can't have even as many as (K+1)/2 elements, but it certainly can't have *more* than (K+1)/2 elements.) So the algorithm really does have a worst case performance that is O( n + n/2 + n/4 + ... ) = O( n ). [And, may I add to Richard: clever algorithm!]
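The partition scheme described above can be sketched in a few lines of C. This is a minimal illustration, not code from the thread: the function name is assumed, and it uses the middle element as pivot, so it does not have the guaranteed O(n) worst case, which requires the linear-time median selection the thread discusses.

```c
#include <assert.h>
#include <stddef.h>

/* Find the value that appears an odd number of times, assuming every
 * other value appears exactly twice.  Repeatedly three-way partition
 * the active range around a pivot; if the "equal" band has odd size,
 * the pivot is the singleton, otherwise descend into whichever side
 * has an odd number of elements. */
int find_singleton(int *a, size_t n)
{
    size_t lo = 0, hi = n;              /* active range is a[lo..hi) */

    while (hi - lo > 1) {
        int pivot = a[lo + (hi - lo) / 2];
        size_t i = lo, lt = lo, gt = hi;

        while (i < gt) {                /* Dutch-flag partition */
            if (a[i] < pivot) {
                int t = a[i]; a[i] = a[lt]; a[lt] = t;
                i++; lt++;
            } else if (a[i] > pivot) {
                gt--;
                int t = a[i]; a[i] = a[gt]; a[gt] = t;
            } else {
                i++;
            }
        }
        /* now a[lo..lt) < pivot, a[lt..gt) == pivot, a[gt..hi) > pivot */
        if ((gt - lt) % 2 == 1)
            return pivot;               /* pivot has no partner */
        if ((lt - lo) % 2 == 1)
            hi = lt;                    /* singleton is in the low side */
        else
            lo = gt;                    /* singleton is in the high side */
    }
    return a[lo];
}
```

Each pass removes at least the pivot pair, and with a median pivot the surviving side is at most about half the size, giving the n + n/2 + n/4 + ... = O(n) total work computed above.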
From: Richard Harter on 21 Jun 2005 16:29

On Tue, 21 Jun 2005 19:14:15 GMT, pete <pfiland(a)mindspring.com> wrote:

>Richard Harter wrote:
>>
>> On Tue, 21 Jun 2005 13:23:53 GMT, pete <pfiland(a)mindspring.com> wrote:
>>
>> >spinoza1111(a)yahoo.com wrote:
>> >> (almost nothing worth quoting)
>>
>> [snip]
>>
>> >A simple O(n*log(n)) solution for floating types
>> >would be to heapsort the array
>> >and do a sequential check for equality.
>>
>> And, of course, one can do it in O(n) time and O(1) space without the
>> xor trick. To quote someone (me):
>>
>> "If the array may be rearranged then we can again find the singleton
>> with O(1) space and O(n) time. The general idea is to select a pivot
>> value and then partition the array into elements smaller than the
>> pivot, equal to the pivot, and larger than the pivot. If the pivot is
>> the singleton we are done; otherwise repeat on the partition having an
>> odd number of elements."
>
>I think that algorithm suffers from the same O(n * n)
>worst case as simple quicksort.
>
>It's possible that in a worst case,
>all of your even sized partitions
>might be two elements in size.
>In that case, the partitioning will be O(n),
>and the iterations will be O(n).

Yes and no.  Recall that there are algorithms[1] for finding the
median that are strictly O(n) (aka theta(n) in some circles), so we
can guarantee that each partitioning produces two equal[2] sized
partitions.  Ergo, the worst case can be forced to be O(n).

It is quite true, of course, that the commonly cited O(n) worst case
median algorithms have large coefficients and are not really
practical.  Whether or not this matters in answering a puzzle is open
to question.  On the other hand, if one doesn't ask that the pivot be
exactly the median but merely that it be "good", then there are
several efficient methods for finding a good pivot.

[1] O(n) worst case algorithms for finding the median have been
discussed numerous times in this venue.  Google and Knuth are your
friends.

[2] Where "equal" means differing in size by no more than two.

Richard Harter, cri(a)tiac.net
http://home.tiac.net/~cri, http://www.varinoma.com
Save the Earth now!!
It's the only planet with chocolate.
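One common cheap heuristic of the "good pivot" kind mentioned above is Tukey's ninther: the median of the medians of three spaced samples of three. The sketch below is an editorial illustration of that idea (names are assumed); it is not a worst-case O(n) median selection.

```c
#include <assert.h>
#include <stddef.h>

/* Median of three values, by explicit comparison. */
static int med3(int a, int b, int c)
{
    if (a < b)
        return b < c ? b : (a < c ? c : a);
    return a < c ? a : (b < c ? c : b);
}

/* Tukey's "ninther": sample three triples from the front, middle,
 * and back of the array, and take the median of their medians.
 * Assumes n >= 9.  Usually lands near the true median, but gives
 * no worst-case guarantee. */
int ninther(const int *a, size_t n)
{
    size_t s = n / 8;                   /* sample stride */

    return med3(med3(a[0],             a[s],           a[2 * s]),
                med3(a[n / 2 - s],     a[n / 2],       a[n / 2 + s]),
                med3(a[n - 1 - 2 * s], a[n - 1 - s],   a[n - 1]));
}
```

This costs nine reads and a handful of comparisons regardless of n, which is why it is popular as a quicksort pivot even though an adversary can still defeat it.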
From: Richard Harter on 21 Jun 2005 16:55

On Tue, 21 Jun 2005 19:47:27 GMT, pete <pfiland(a)mindspring.com> wrote:

>Randy Howard wrote:
>>
>> In article <1119341543.541636.99590(a)g44g2000cwa.googlegroups.com>,
>> spinoza1111(a)yahoo.com says...
>> > Chris Dams wrote:
>> > > Of course not.
>> > > Even if the bitwise representation is not accessible, any
>> > > language that allows division by two can easily be
>> > > used to make your own
>> > > binary representation. The algorithm will still be O(n).
>> >
>> > Yes, but...its execution time will be n*W,
>> > a constant (the bit width)
>> > which is rather large.
>>
>> So you *still* do not understand O() notation.
>
>He still doesn't realise that Big Oh
>is only the order of the dominant term
>of the equation that governs the running time,
>and not the equation of the running time itself.

To do him justice (would that he were done justice), the text you
quote implies the converse, that he does understand the O() concept.
His point, such as it is, might be that in practice what is important
is the cost in the range of n of interest.  (At least that is what it
appears he is grasping at, other than an assortment of straws.)

This is often an important consideration.  For example, suppose that
algorithm A has a cost of 1000000*n and algorithm B has a cost of n*n,
and that n will never be greater than 1000.  The catch, of course, is
that the O(n*n) algorithm doesn't scale.

One thing that is frequently forgotten when using the O() notation is
that it is asymptotic as n becomes arbitrarily large.  The universe
being what it is, n is never arbitrarily large.

Another thing frequently forgotten is that the O() formulation is
seldom about actual costs; rather it is usually about metrics on
models of computation.  For example, it is often stated that the time
cost of a decent comparison sort is O(n*log(n)).  This is wrong.  The
number of comparisons is O(n*log(n)); however the cost of a comparison
and the cost of a data move grow with n.  For "small" n this growth is
subsumed in a constant; when n is large it is not.

The main point that I am making here is that while the order function
is a useful heuristic, we should be aware of the caveats.

Richard Harter, cri(a)tiac.net
http://home.tiac.net/~cri, http://www.varinoma.com
Save the Earth now!!
It's the only planet with chocolate.
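The A-versus-B example above can be put in concrete numbers; the function names below are mine, added only to make the crossover arithmetic explicit.

```c
#include <assert.h>

/* cost_a: the O(n) algorithm with a huge constant factor.
 * cost_b: the O(n*n) algorithm with constant factor 1.
 * They cross at n = 1000000; below that, the "worse" B is cheaper,
 * so with n capped at 1000, B wins even though it doesn't scale. */
long long cost_a(long long n) { return 1000000LL * n; }
long long cost_b(long long n) { return n * n; }
```

At the cap of n = 1000, B costs 10^6 operations against A's 10^9; only past n = 10^6 does A's better order start to pay.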
From: Mike on 21 Jun 2005 18:12
In article <d98hlv$bq8$1(a)sunnews.cern.ch>,
Emlyn.NOSPAM.Corrin(a)cern.NOSPAM.ch says...
> Mike wrote:
> > So as an alternative solution, how about:
> >
> > function ReturnOddElement(A : array of integer) : integer;
> > var
> >   p : integer;
> >   i : integer;
> > begin
> >   p := 1;
> >   for i := Low(A) to High(A) do
> >     if (p mod A[i]) = 0
> >       then p := p div A[i]
> >       else p := p * A[i];
> >   result := p;
> > end;
> >
> > This requires no bitwise operations and will be O(n) on any
> > system on which MaxInt is sufficiently large.
> >
> > Mike.
>
> Very clever, but unfortunately it doesn't work; try it on the sequence
> 6, 2, 3, 4, 2, 3, 6
>
> Emlyn

Rats - I am stabbed and bleeding.  Sounded good at the time.  But to
continue in the spirit of Eddie...

1) It works for a finite subset of cases.
1a) ...and always when all of the A[i] are primes.
2) It is definitely O(n).
3) Maybe I should digress here; something about Benthamites should do
   the trick.
4) Or I could quote Nash, Derrida or Spivak.
5) Hey - I watched "A Beautiful Mind" - twice(!) - so I know what I am
   talking about.
6) Emlyn.NOSPAM.Corrin - how can I trust someone who chose ..NOSPAM. as
   their middle name?  Sounds like the sort of middle name a retiree
   with serious problems in anger management might choose.
7) I'll call my lawyer...

Sorry Emlyn - couldn't resist.

Mike
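A direct C transcription of Mike's Pascal routine makes Emlyn's counterexample easy to check (the C function name is mine). On 6, 2, 3, 4, 2, 3, 6 the running product collapses back to 1 instead of the odd element 4, because a composite A[i] can divide a product built up from *different* elements; with distinct primes that cross-cancellation cannot happen, which is why claim 1a holds.

```c
#include <stddef.h>

/* Mike's running-product trick: divide p by A[i] when it divides
 * evenly, otherwise multiply.  Correct when all A[i] are distinct
 * primes; fails in general, e.g. on 6,2,3,4,2,3,6 the 2 cancels
 * against the freshly multiplied 4 and the final result is 1. */
int return_odd_element(const int *a, size_t n)
{
    int p = 1;
    size_t i;

    for (i = 0; i < n; i++) {
        if (p % a[i] == 0)
            p /= a[i];
        else
            p *= a[i];
    }
    return p;
}
```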