From: Dan Stromberg on 4 Mar 2010 16:17

In case you're interested, I've put a fast GUI pipemeter (it measures how
fast data is moving through a pipe or redirect and gives two estimates of
time-to-completion - one based on the entire transfer so far, and one
based on a user-specifiable number of blocks) up at:

http://stromberg.dnsalias.org/~dstromberg/gprog/

It uses a dual-process design (to make things a bit faster on dual-core
or better systems) with a cache-oblivious algorithm (to self-tune block
sizes for good performance). I've seen it sustain over 2 gigabits/second,
and that despite Linux's /dev/zero insisting on a tiny blocksize. I
wasn't able to construct a RAM disk large enough to get anything like a
sustained result with larger blocksizes than what Linux's /dev/zero
likes - that is, not without springing for a new machine with a huge
amount of RAM. IOW, your disk or network will very likely be the
bottleneck, not the tool.

I hope it helps someone.
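[The measuring loop Dan describes - read a block, pass it along, track a
running rate and time-to-completion - can be sketched in a few lines of
Python. This is a hypothetical single-process sketch with invented names,
not gprog's actual dual-process implementation:]

```python
import sys
import time

BLOCKSIZE = 64 * 1024  # assumed starting block size; gprog self-tunes this


def pipemeter(infile, outfile, total_bytes=None):
    """Copy infile to outfile in blocks, reporting throughput on stderr.

    When the total transfer size is known, also report an estimated
    time-to-completion based on the average rate so far.
    """
    start = time.time()
    moved = 0
    while True:
        block = infile.read(BLOCKSIZE)
        if not block:
            break
        outfile.write(block)
        moved += len(block)
        elapsed = time.time() - start
        if elapsed > 0:
            rate = moved / elapsed  # bytes per second, averaged over the run
            if total_bytes:
                eta = (total_bytes - moved) / rate
                sys.stderr.write('%.0f bytes/s, ETA %.1fs\n' % (rate, eta))
    return moved
```

[A second estimate over only the last N blocks, as gprog offers, would just
keep a sliding window of (timestamp, bytes) pairs instead of averaging from
the start.]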
From: pk on 4 Mar 2010 16:15

Dan Stromberg wrote:

> In case you're interested, I've put a fast GUI pipemeter (measures how
> fast data is moving through a pipe or redirect and gives two estimates
> of time-to-completion - one based on the entire transfer so far, and
> one based on a user-specifiable number of blocks) up at:
>
> http://stromberg.dnsalias.org/~dstromberg/gprog/
>
> It uses a dual process design (to make things a bit faster on dual
> core or better systems) with a cache oblivious algorithm (to self-tune
> block sizes for good performance) - I've seen it sustain over 2
> gigabits/second, and that despite Linux' /dev/zero insisting on a tiny
> blocksize. [...]
>
> I hope it helps someone.

This sounds similar to "pv", although pv does not have a GUI.
From: Dan Stromberg on 4 Mar 2010 19:25

On Mar 4, 1:15 pm, pk <p...(a)pk.invalid> wrote:
> Dan Stromberg wrote:
> > In case you're interested, I've put a fast GUI pipemeter (measures how
> > fast data is moving through a pipe or redirect and gives two estimates
> > of time-to-completion - one based on the entire transfer so far, and
> > one based on a user-specifiable number of blocks) up at:
> >
> > http://stromberg.dnsalias.org/~dstromberg/gprog/
> > [...]
> > I hope it helps someone.
>
> This sounds similar to "pv", although pv does not have a GUI.

Um, yes, pv is similar, and has a pretty nice character-cell GUI, as it
were. I suppose I'd neglected to mention that I put a list of similar
tools at the beginning of the gprog page, including pv. Thanks for making
sure we were aware of pv.

Interesting that pv seems to be successfully getting 128K blocks out of
/dev/zero. For some reason, gprog always gets back 16K blocks from
/dev/zero, even when requesting blocks of substantially larger sizes.
gprog automatically detects this and just starts asking for 16K.

Python folk: any guesses why a simple file.read(blocksize) would have
such an affinity for returning 16K when redirected from /dev/zero? If I
run the program against a file on disk, it gets larger blocksizes fine.
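[One way to chase the 16K question is to compare a raw os.read() - which
goes straight to the read(2) syscall, so the kernel alone decides how much
comes back per call - against a buffered file.read() at the same
blocksize. The probe below is only a diagnostic sketch, not an
explanation: Python 2's file objects were stdio-backed, and a buffering
layer sitting between the caller and the syscall is one plausible place
for reads to get chopped into fixed-size pieces.]

```python
import os

# Hypothetical probe: does the buffering layer, or the kernel, limit how
# much a single read from /dev/zero returns?
BLOCKSIZE = 256 * 1024

# Unbuffered: one read(2) syscall, the kernel decides the return size.
fd = os.open('/dev/zero', os.O_RDONLY)
raw_chunk = os.read(fd, BLOCKSIZE)
os.close(fd)

# Buffered: the file object's internal buffer sits in between, and its
# behavior (and default buffer size) varies across Python versions.
with open('/dev/zero', 'rb') as f:
    buffered_chunk = f.read(BLOCKSIZE)

print(len(raw_chunk), len(buffered_chunk))
```

[If the raw read comes back full-sized while the buffered one is capped,
the buffering layer is the suspect; if both are capped, it's the kernel
driver.]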
From: bsh on 4 Mar 2010 20:05

On Mar 4, 1:17 pm, Dan Stromberg <strom...(a)gmail.com> wrote:
> ...
> http://stromberg.dnsalias.org/~dstromberg/gprog/
> ...

Thanks, Dan. This is but one more of your fine sh, Python, Perl, and C
programs. I suppose now you are going to invent a way to grow programs
like plants in gardens? :) Just add water?? :)

You seem to be taking up the slack for the fine (now obsolete) utilities
and libraries offered by Daniel J. Bernstein, such as his most excellent
daemontools suite.

I'm interested in utilities that output system metrics, and I've
downloaded and [attempted to have] used hcm.bash, mtee.py,
contextual.sh, looper.py, IQS, and maxtime.c. Don't you believe in
specialization?! ;)

As I am also interested in k/sh parsing, I have done something quite
close to your bashquote.py. Are you going to make Cobble available?

Technical documents of yours which have been of past assistance to me
include:

"Copying Lots Of Data"
http://stromberg.dnsalias.org/~dstromberg/copy-lots-of-data.html

"Keeping Backups From Pounding A System As Hard"
http://stromberg.dnsalias.org/~dstromberg/Keeping-backups-from-pounding-a-system-as-hard.html

And in general, all of your documents under:

http://stromberg.dnsalias.org/~dstromberg/tech-tidbits.html
http://stromberg.dnsalias.org/~dstromberg/table.html

So, the reason for having mentioned this, in addition to thanking you
for these contributions to the technical community, is to beg you to
organize and update your website for explicability and searching, so
that less motivated folk can take advantage of your treasures. What say
you? How 'bout it?

=Brian
From: Dan Stromberg on 5 Mar 2010 22:09

On Mar 4, 5:05 pm, bsh <brian_hi...(a)rocketmail.com> wrote:

> Thanks, Dan. This is but one more of your fine sh, python,
> perl, and C programs. I suppose now you are going to invent
> a way to grow programs like plants in gardens? :) Just add
> water?? :)

LOL. I believe John Koza has already done basically that - no water
required, though. ^_^

> You seem to be taking up the slack for what (now)
> obsoleted fine utilities and libraries are offered by Daniel
> J. Bernstein, such as his most excellent daemontools suite.

I'm flattered.

> I'm interested in utilities to output system metrics, and I've
> downloaded and [attempted to have] used hcm.bash, mtee.py,
> contextual.sh, looper.py, IQS, and maxtime.c.

Nice to hear others are using them.

> As I am also interested in k/sh parsing, I have done
> something quite close to your bashquote.py.

I'm interested in seeing your bashquote-like project...

> Are you going to make Cobble available?

I've just added a link to the cobble section of my subversion
repository, from the cobble doc page you appear to have seen.

> So, the reason for having mentioned this, in addition to
> thanking you for these contributions to the technical community,
> is to beg you to organize and update your website for
> explicability and searching, so that less motivated folk
> can take advantage of your treasures. What say you?
> How 'bout it?

I probably could organize things better. I was thinking about adding a
pyjamas version of the table, for example.

As far as searching, does something like this work (not intending to be
snarky at all - please ignore the "was that so hard?" at the end)?

http://lmgtfy.com/?q=site%3Astromberg.dnsalias.org+cobble

If not, I could probably add a search engine without much trouble.