From: Steve Howell on 23 Jan 2010 00:42

On Jan 22, 6:20 pm, Steven D'Aprano <st...(a)REMOVE-THIS-cybersource.com.au> wrote:
> On Fri, 22 Jan 2010 14:38:18 -0800, Steve Howell wrote:
> > I know the Python programmer could simply iterate through the list
> > rather than popping off unused elements, but that just means that you
> > not only tie up the memory for the pointers just as long, but you also
> > prevent the objects themselves from being garbage collected.
> >
> > In my case I'm not really concerned about giving the memory back right
> > away; it's more about keeping my code simple. Once I'm done with an
> > element, I do just want to pop it off and keep all the simplicity of
> > the lists, but I am just concerned enough about speed that for a
> > 1000-element list, I don't want to be doing 1000 memmoves for an
> > average of 500 pointers, which effectively moves about a million bytes
> > around for no reason.
> >
> > I suppose the solution is to either give up the sugar of lists, or try
> > to wrap something like deque or list-with-offset into a sequence.
>
> I don't understand what the actual problem you're trying to solve is.
> Despite your disclaimer about not being concerned about reclaiming the
> memory, it sounds like you're trying to micro-optimize memory usage.

My discussion of memory probably distracted you from the fact that I'm
not proposing a micro-optimization of memory; I am proposing a
mega-optimization of CPU.

This innocent program here literally moves about a million bytes of
memory around for no good reason:

    lst = []
    for i in range(2000):
        lst.append(i)
    while lst:
        print lst.pop(0)

Why? Because list.pop(0) is implemented in O(N) instead of O(1).

Why wouldn't you get a competent C programmer to simply make
list_ass_slice smart enough to make list.pop(0) O(1)? The brilliant
computer scientist, Christian Heimes, provides the answers, and I am
paraphrasing here, of course:

1) You can save 64 bits for every list by not storing an extra pointer!
2) You can simplify the CPython implementation!
3) You can avoid the oh-so-expensive check "if ilow == 1" for all
   operations that don't need this optimization!

Sounds like two micro-optimizations to me (and a copout to boot).

> That is, you give me the impression that you prefer this:
>
> while alist:
>     x = alist.pop(0)
>     process(x)
>
> over this:
>
> for x in alist:
>     process(x)
> # allow alist to be garbage collected when it goes out of scope

No, to be more precise, I prefer this implementation of a recursive
parser (using lists) to one that would have to use deques because of
the lameness of Python's list implementation:

https://bitbucket.org/showell/shpaml_website/src/tip/shpaml.py
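As a rough illustration of the "list-with-offset" wrapper mentioned above
(the class name OffsetList and its methods are invented for this sketch;
this is plain Python, not a description of CPython's listobject.c):

    # A hypothetical list-with-offset wrapper: instead of shifting the
    # remaining pointers on every pop, it advances an index, so popping
    # from the front is O(1). Negative indices and most of the list API
    # are deliberately left out of this sketch.
    class OffsetList(object):
        def __init__(self, items):
            self._items = list(items)
            self._offset = 0  # index of the logical first element

        def __len__(self):
            return len(self._items) - self._offset

        def __getitem__(self, i):
            return self._items[self._offset + i]

        def pop_front(self):
            item = self._items[self._offset]
            self._items[self._offset] = None  # let the object be collected
            self._offset += 1
            return item

The popped slots still occupy space in the underlying list until it is
compacted or dropped, which is essentially the trade-off debated later in
the thread.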
From: Steve Howell on 23 Jan 2010 00:58

On Jan 22, 7:09 pm, Roy Smith <r...(a)panix.com> wrote:
> In article
> <3ac173bd-4124-434d-b726-0b9baaeec...(a)36g2000yqu.googlegroups.com>,
> Steve Howell <showel...(a)yahoo.com> wrote:
>
> > In my case I'm not really concerned about giving the memory back right
> > away; it's more about keeping my code simple. Once I'm done with an
> > element, I do just want to pop it off and keep all the simplicity of
> > the lists, but I am just concerned enough about speed that for a
> > 1000-element list, I don't want to be doing 1000 memmoves for an
> > average of 500 pointers, which effectively moves about a million bytes
> > around for no reason.
>
> If you really want to pop all the elements from a long list, reverse the
> list and pop them off the end. Popping every element off the beginning of
> the list is O(n^2), as you pointed out. Reversing the list is O(n), and
> each pop after that is O(1), so the overall complexity is O(n).

I really want to use list *normally* with all its perfectly good
semantics and reasonable implementation, except for its blind spot with
respect to popping the first element off the list. The whole reason I
use CPython vs. C in the first place is that CPython programmers can
generally program basic data structures better than I can. But
list.pop(0) is the exception. And, with the possible exception of
dicts, lists are the most fundamental data structures that Python has.

I know Python's number one concern will never be speed, but if Python
makes an O(1) operation into an unnecessarily O(N) operation for no
good reasons other than "it's too complicated," or it "adds another
pointer to the structure," or "it adds another conditional check to
list_ass_slice for operations that aren't popping off the top," I think
it's reasonable to challenge the design philosophy.
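For contrast, the reverse-then-pop idiom Roy Smith suggests can be
sketched in a few lines; process is just a placeholder name for whatever
gets done with each element:

    # Reverse once (O(n)), then pop from the tail (O(1) per pop), so
    # draining the whole list is O(n) overall rather than the O(n^2) you
    # get from calling pop(0) in a loop.
    def drain(lst, process):
        lst.reverse()
        while lst:
            process(lst.pop())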
From: Aahz on 23 Jan 2010 02:10

In article
<83082e19-9130-45a8-91f2-8601c1fdaac3(a)22g2000yqr.googlegroups.com>,
Steve Howell <showell30(a)yahoo.com> wrote:
>
> I really want to use list *normally* with all its perfectly good
> semantics and reasonable implementation, except for its blind spot
> with respect to popping the first element off the list. The whole
> reason I use CPython vs. C in the first place is that CPython
> programmers can generally program basic data structures better than I
> can. But list.pop(0) is the exception. And, with the possible
> exception of dicts, lists are the most fundamental data structures
> that Python has.
>
> I know Python's number one concern will never be speed, but if Python
> makes an O(1) operation into an unnecessarily O(N) operation for no
> good reasons other than "it's too complicated," or it "adds another
> pointer to the structure," or "it adds another conditional check to
> list_ass_slice for operations that aren't popping off the top," I
> think it's reasonable to challenge the design philosophy.

"Rough consensus and running code."

You have a good point, but nobody will ever give your idea serious
attention until there's a patch and benchmarks.
--
Aahz (aahz(a)pythoncraft.com)           <*>         http://www.pythoncraft.com/

import antigravity
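Short of a patch, the difference itself is easy to measure from pure
Python. The snippet below is one illustrative way to do it (the choice of
n is arbitrary, timings vary by machine and interpreter, and
collections.deque is used only as the O(1) baseline):

    # Time draining n elements from the front of a list vs. a deque.
    # list.pop(0) shifts all remaining pointers on each call, while
    # deque.popleft() does not, so the gap widens as n grows.
    from timeit import timeit

    n = 100000
    list_time = timeit('while lst: lst.pop(0)',
                       setup='lst = list(range(%d))' % n, number=1)
    deque_time = timeit('while d: d.popleft()',
                        setup='from collections import deque; '
                              'd = deque(range(%d))' % n, number=1)
    print('list.pop(0): %.4fs   deque.popleft(): %.4fs'
          % (list_time, deque_time))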
From: Alf P. Steinbach on 23 Jan 2010 02:43

* Steve Howell:
> On Jan 22, 7:09 pm, Roy Smith <r...(a)panix.com> wrote:
>> In article
>> <3ac173bd-4124-434d-b726-0b9baaeec...(a)36g2000yqu.googlegroups.com>,
>> Steve Howell <showel...(a)yahoo.com> wrote:
>>
>>> In my case I'm not really concerned about giving the memory back right
>>> away; it's more about keeping my code simple. Once I'm done with an
>>> element, I do just want to pop it off and keep all the simplicity of
>>> the lists, but I am just concerned enough about speed that for a
>>> 1000-element list, I don't want to be doing 1000 memmoves for an
>>> average of 500 pointers, which effectively moves about a million bytes
>>> around for no reason.
>>
>> If you really want to pop all the elements from a long list, reverse the
>> list and pop them off the end. Popping every element off the beginning of
>> the list is O(n^2), as you pointed out. Reversing the list is O(n), and
>> each pop after that is O(1), so the overall complexity is O(n).
>
> I really want to use list *normally* with all its perfectly good
> semantics and reasonable implementation, except for its blind spot
> with respect to popping the first element off the list. The whole
> reason I use CPython vs. C in the first place is that CPython
> programmers can generally program basic data structures better than I
> can. But list.pop(0) is the exception. And, with the possible
> exception of dicts, lists are the most fundamental data structures
> that Python has.

Having optimized list.pop() for the first element, you would then have a
blind spot with respect to insertion at the front... which could in turn
be optimized for the cases where there is some buffer space at the front,
left over from a pop. I think it would just be harder to understand and
harder to explain, and I think that because of that, as usual, people
would build their own elaborate "explanations", with erroneous
conclusions, and would then use it in inefficient ways (like popping from
the middle or inserting at the front).

I think "harder to use correctly" is the strongest argument against the
optimization, but there is also a non-obvious *memory overhead*...

Having popped just one element at the front, in order for the
implementation to not /accumulate/ unused memory, that is, in order to
avoid an ongoing /memory leak/, extending the buffer to accommodate e.g.
an append() can no longer be done as a (on relevant systems) constant
time reallocation (letting the OS do its virtual memory paging magic).
Instead, a new buffer would have to be allocated and all the data copied
over. One could still have amortized constant time for appends by using a
doubling buffer (which is the strategy used by C++ 'std::vector'), but at
the cost of wasting some memory, a percentage...

The implementation as a pure array is easy to understand and use
correctly, and doesn't waste memory.

But it would IMHO have been better if it wasn't called "list", which
brings in the wrong associations for someone used to other languages. The
name does matter. E.g. I don't think this discussion about a pop
optimization would have arisen except for the name, which makes that
optimization sound reasonable. Perhaps some standard alternative name
could be devised. Like, "array" would have been nice, except that that's
already grabbed by the array module.
> I know Python's number one concern will never be speed, but if Python
> makes an O(1) operation into an unnecessarily O(N) operation for no
> good reasons other than "it's too complicated," or it "adds another
> pointer to the structure," or "it adds another conditional check to
> list_ass_slice for operations that aren't popping off the top," I
> think it's reasonable to challenge the design philosophy.

See above. In addition to being more difficult /to use/ correctly, that
is, being much easier to misunderstand, it incurs a memory overhead --
not the one directly from the pop optimization, but from the requirements
of buffer extension. Essentially, as discussed above, it would then have
to use a doubling buffer.

Cheers & hth.,

- Alf
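One way to make that memory-overhead point concrete is the sketch below,
again in plain Python rather than C and purely illustrative. It bounds
the accumulation by compacting during pops, which places the copy at a
different point than the append-time reallocation Alf describes, but the
same O(n) cost appears either way; the one-half threshold is arbitrary:

    # Hypothetical variant of the offset idea that also reclaims the unused
    # prefix: once half the underlying buffer is dead space, it compacts.
    # Memory stays bounded, but pop_front() is now only amortized O(1),
    # and each compaction is an O(n) copy -- the kind of cost/complexity
    # trade-off described above.
    class CompactingOffsetList(object):
        def __init__(self, items):
            self._items = list(items)
            self._offset = 0

        def __len__(self):
            return len(self._items) - self._offset

        def pop_front(self):
            item = self._items[self._offset]
            self._items[self._offset] = None       # drop the reference
            self._offset += 1
            if self._offset * 2 >= len(self._items):
                del self._items[:self._offset]     # O(n) compaction pass
                self._offset = 0
            return item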
From: Steve Howell on 23 Jan 2010 02:43
On Jan 22, 11:10 pm, a...(a)pythoncraft.com (Aahz) wrote:
>
> > I know Python's number one concern will never be speed, but if Python
> > makes an O(1) operation into an unnecessarily O(N) operation for no
> > good reasons other than "it's too complicated," or it "adds another
> > pointer to the structure," or "it adds another conditional check to
> > list_ass_slice for operations that aren't popping off the top," I
> > think it's reasonable to challenge the design philosophy.
>
> "Rough consensus and running code."
>
> You have a good point, but nobody will ever give your idea serious
> attention until there's a patch and benchmarks.

Here is a benchmark of O(N*N) vs. O(N) for two C programs. One does
memmove; the other simply advances the pointer.

showell(a)showell-laptop:~$ time ./slow

real    0m1.446s
user    0m1.444s
sys     0m0.004s

showell(a)showell-laptop:~$ time ./fast

real    0m0.003s
user    0m0.004s
sys     0m0.000s

showell(a)showell-laptop:~$ diff slow.c fast.c
23,24c23
<     lst.size -= 1;
<     memmove(lst.p, lst.p + 1, lst.size);
---
>     lst.p += 1;

showell(a)showell-laptop:~$ cat slow.c
#include <stdlib.h>
#include <stdio.h>
#include <string.h>

struct ob_item {
    int whatever;
};

struct list {
    struct ob_item *p;
    int size;
};

struct list make_list(int n)
{
    struct list lst;
    lst.p = malloc(n);
    lst.size = n;
    return lst;
}

void list_pop_left(struct list lst)
{
    lst.size -= 1;
    memmove(lst.p, lst.p + 1, lst.size);
}

int main()
{
    struct list lst;
    int i;
    int n = 40000;
    int t;

    lst = make_list(n);
    for (i = 0; i < n; ++i) {
        list_pop_left(lst);
    }
}