From: JosephKK on 29 May 2010 01:35

On Fri, 28 May 2010 11:19:30 -0700, dplatt(a)radagast.org (Dave Platt)
wrote:

>In article <n6kuv5p4cka9huuerh8v9abneon06nve0e(a)4ax.com>,
>JosephKK <quiettechblue(a)yahoo.com> wrote:
>
>>>It is the huge data bandwidth that a push system like Usenet uses
>>>(especially the binary groups). Everything is moved to everywhere by
>>>NNTP whether anyone has asked for it or not.
>>
>>Bzzzt. Wrong. The protocols are defined as a user pull system, and as a
>>transport system. Each server pulls all the locally (user) requested NG
>>and the articles within. Each server transports all the NG that the
>>Admins choose to transport, concentrating on the local users pull NG.
>
>The NNTP protocol supports both push-style and pull-style operation.
>
>Backbone-to-backbone connections often operate in "push" mode, with
>the sending site having a receiving site's subscription list (which
>may include wildcards to push all articles in entire newsgroup
>hierarchies). The sending site will send an IHAVE message for each
>qualifying article it has, and the receiving site will send back a
>specific request for each article it hasn't received via another path.
>The IHAVE messages identify the article by its messageID, but do not
>specify what newsgroups it's posted to... and thus the filtering of
>articles by newsgroup has to be done at the pushing side.

Look at that again, that is an offer and pull system.

>
>Reader connections (client-to-server) generally operate in a "pull"
>mode, with the client asking for the article IDs of new messages in
>specific newsgroups, and then pulling the articles desired.
>
>Small "leaf node" servers often use the client (reader) protocols to
>connect to larger (backbone) servers. This allows the leaf site's
>subscription list to be managed locally (e.g. adjusting it
>automatically based on read requests from local clients).
>
>Historically, USENET news has also been handled via protocols other
>than NNTP. Originally, UUCP transport was used, and this was (I
>believe) always in "push" or "offer" mode.

Yes, UUCP was originally used for the *Transport* part. The control
level from back then was not well documented. I expect it was an offer
and pull system, much like it is today.
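As a concrete illustration of the reader-mode "pull" described above, a
client session looks roughly like the following Python sketch, using the
standard nntplib module (present through Python 3.12). The server name,
newsgroup, and article range are placeholders, not anything taken from the
thread:

    import nntplib

    # Hypothetical reader session: the client pulls only what it asks for.
    with nntplib.NNTP('news.example.com') as srv:          # placeholder server
        resp, count, first, last, name = srv.group('sci.electronics.design')
        resp, overviews = srv.over((max(first, last - 9), last))  # newest ~10
        for art_num, over in overviews:
            print(art_num, over.get('subject', ''))
            resp, info = srv.article(art_num)               # fetch the full article

Nothing reaches the client unless it names a group and asks for the
articles, which is the "pull" half of the picture being argued about here.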
From: Dave Platt on 29 May 2010 02:25

In article <ck9106lisul9gb30a62lojbfb799hoiaf1(a)4ax.com>,
JosephKK <quiettechblue(a)yahoo.com> wrote:

>Look at that again, that is an offer and pull system.

It's an offer-and-pull system at the article level. However, the
decision as to which articles to offer is strictly a "push" decision...
the server knows which newsgroups the receiving system is subscribed
to, and offers every article which arrives in any of those newsgroups.

The receiving system has the ability to pull, or not pull, a given
article it's offered... but it can do so *only* on the basis of the
article's message-ID. It isn't given any other information about the
article from the server... the server doesn't tell it what newsgroups
the article was posted to.

So, what it boils down to in practice (with this approach) is that the
server will attempt to push all articles in specific newsgroups or
distributions to a given downstream node. The downstream node can veto
the transfer of individual articles (by not responding with a SENDME)
if it already has the individual article in its spool.

>Yes, UUCP was originally used for the *Transport* part. The control
>level from back then was not well documented. I expect it was an offer
>and pull system, much like it is today.

For the original UUCP-based implementation of USENET, you'd be
mistaken... it was strictly a push system. Upon receiving an article,
each node would forward it to every outbound peer (it had a list of
the newsgroup hierarchies each peer wanted to receive). Articles might
be sent as individual UUCP jobs (for very small feeds) but were more
commonly batched up, compressed, and then sent as large chunks.

At the receiving node, articles were decompressed, split apart, and
then either written to disk (if wanted) or just discarded (if the
article had already been received from another upstream). It wasn't
uncommon for sites with multiple upstream feeds to discard half of the
incoming articles as duplicates.

An IHAVE/SENDME capability was added to later UUCP-based versions of
the USENET system, in order to reduce the bandwidth waste at sites
with multiple upstreams. This worked very much as the NNTP-based
IHAVE/SENDME system does... the sending system would create a spool
file full of IHAVE article-IDs, compress the file, and UUCP it to the
receiving system... which would eventually turn around and send back a
compressed batch of SENDME requests, which would trigger the actual
batching and compression and UUCP'ing of the articles. Much more
efficient, although it was/is still possible for downstream nodes to
receive articles from more than one upstream and end up discarding the
unwanted duplicates.

--
Dave Platt <dplatt(a)radagast.org>                              AE6EO
Friends of Jade Warrior home page: http://www.radagast.org/jade-warrior
I do _not_ wish to receive unsolicited commercial email, and I will
boycott any company which has the gall to send me such ads!
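For readers who want to see what the per-article offer described in this
post looks like on the wire, here is a rough sketch of the feeding side of
an NNTP IHAVE exchange. The response codes (335 send it, 435 not wanted,
235 accepted) come from the NNTP specification; the host name and the
read_article helper are illustrative placeholders, not part of any real
news server:

    import socket

    def offer_articles(peer_host, message_ids, read_article):
        """Offer each article by message-ID; send the body only if the peer asks."""
        with socket.create_connection((peer_host, 119)) as sock:
            f = sock.makefile('rwb')
            f.readline()                              # 200/201 greeting from the peer
            for msg_id in message_ids:
                f.write(f'IHAVE {msg_id}\r\n'.encode())
                f.flush()
                code = f.readline()[:3]
                if code == b'335':                    # peer wants it: the "pull" step
                    f.write(read_article(msg_id))     # dot-stuffed article bytes
                    f.write(b'.\r\n')
                    f.flush()
                    f.readline()                      # 235 accepted / 437 rejected
                # 435: peer already has it via another path; skip to the next ID
            f.write(b'QUIT\r\n')
            f.flush()

Note that the offer carries only the message-ID, which is Dave's point:
any filtering by newsgroup has to happen on the side choosing which IDs to
offer.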
From: JosephKK on 31 May 2010 09:58
On Fri, 28 May 2010 23:25:07 -0700, dplatt(a)radagast.org (Dave Platt)
wrote:

>In article <ck9106lisul9gb30a62lojbfb799hoiaf1(a)4ax.com>,
>JosephKK <quiettechblue(a)yahoo.com> wrote:
>
>>Look at that again, that is an offer and pull system.
>
>It's an offer-and-pull system at the article level. However, the
>decision as to which articles to offer is strictly a "push"
>decision... the server knows which newsgroups the receiving system is
>subscribed to, and offers every article which arrives in any of those
>newsgroups.
>
>The receiving system has the ability to pull, or not pull, a given
>article it's offered... but it can do so *only* on the basis of
>the article's message-ID. It isn't given any other information about
>the article from the server... the server doesn't tell it what
>newsgroups the article was posted to.

I thought the (lead) group was embedded in the message-ID.

>
>So, what it boils down to in practice (with this approach) is that the
>server will attempt to push all articles in specific newsgroups or
>distributions to a given downstream node. The downstream node can
>veto the transfer of individual articles (by not responding with a
>SENDME) if it already has the individual article in its spool.

Still describes offer and pull.

>
>>Yes, UUCP was originally used for the *Transport* part. The control
>>level from back then was not well documented. I expect it was an offer
>>and pull system, much like it is today.
>
>For the original UUCP-based implementation of USENET, you'd be
>mistaken... it was strictly a push system. Upon receiving an article,
>each node would forward it to every outbound peer (it had a list of
>the newsgroup hierarchies each peer wanted to receive). Articles
>might be sent as individual UUCP jobs (for very small feeds) but were
>more commonly batched up, compressed, and then sent as large chunks.
>
>At the receiving node, articles were decompressed, split apart, and
>then either written to disk (if wanted) or just discarded (if the
>article had already been received from another upstream). It wasn't
>uncommon for sites with multiple upstream feeds to discard half of the
>incoming articles as duplicates.

Upstream - downstream does not really match the structure as designed,
only as currently implemented. The various servers are called peers by
each other, and joining at the server level is called 'peering'
regardless of the relative power of the two nodes. And most server
nodes strive to have a multiplicity of peers.

>
>An IHAVE/SENDME capability was added to later UUCP-based versions of
>the USENET system, in order to reduce the bandwidth waste at sites
>with multiple upstreams. This worked very much as the NNTP-based
>IHAVE/SENDME system does... the sending system would create a spool
>file full of IHAVE article-IDs, compress the file, and UUCP it to the
>receiving system... which would eventually turn around and send back a
>compressed batch of SENDME requests, which would trigger the actual
>batching and compression and UUCP'ing of the articles. Much more
>efficient, although it was/is still possible for downstream nodes to
>receive articles from more than one upstream and end up discarding the
>unwanted duplicates.
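The duplicate-discard step both posters keep coming back to is simple to
state in code: a node with several peers remembers the Message-IDs it has
already stored and drops repeats. The sketch below is illustrative only;
the data structures are not the real C News or INN history format:

    def accept_article(headers, history, store):
        """Store an incoming article unless its Message-ID was already seen."""
        msg_id = headers.get('Message-ID')
        if not msg_id or msg_id in history:
            return False          # duplicate arriving from another peer: discard
        history.add(msg_id)       # remember the ID so later copies are rejected
        store(headers)            # write to the local spool / feed own peers
        return True

    # Example: the same article offered by two peers is stored only once.
    seen = set()
    art = {'Message-ID': '<ck9106lisul9gb30a62lojbfb799hoiaf1(a)4ax.com>'}
    accept_article(art, seen, lambda a: None)   # True: first copy kept
    accept_article(art, seen, lambda a: None)   # False: duplicate dropped

Whether the check happens after a blind push (early UUCP feeds) or before
the transfer (IHAVE/SENDME and NNTP transit), the history lookup itself is
the same.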