From: Le Chaud Lapin on 25 Jul 2010 03:39

On Jul 23, 6:25 pm, eric.jacob...(a)ieee.org (Eric Jacobsen) wrote:
> On Wed, 21 Jul 2010 04:53:47 +0000 (UTC), glen herrmannsfeldt
> <g...(a)ugcs.caltech.edu> wrote:
> >steveu <steveu(a)n_o_s_p_a_m.coppice.org> wrote:
> >(snip on broadband)
> >>>Now, does this distinction make any sense in terms
> >>>of digital network communication? It seems to me that
> >>>it doesn't. (Though it does seem that cable modem
> >>>channels are still designed around the 6 MHz TV channel
> >>>bandwidth.)
> >> Once a technical term has been picked up by popular culture,
> >> why would you expect to find any meaning in it at all?
> >Well, there is that.
> >Then there is Wi-Fi, which doesn't seem to have any meaning
> >at all in the technical sense.
> As has been mentioned, I think "broadband" has gone the way of being
> whatever people want it to mean.

My guess would be that "broadband" arose as a contrast with the narrow bandwidth of a POTS line. Remember when 9600 baud over POTS was high-tech, and 56 kbaud seemed silly to talk of, as all the possible tricks had "already been discovered"? We paid more than $1000 for a 9600 baud modem from Telebit, and thought we were getting a great deal.

> FWIW, Wi-Fi is the trademarked name used by the Wi-Fi Alliance, an
> industry consortium that manages compliance with the Wi-Fi
> specifications, which are essentially the 802.11 air interface
> standards. People may use it to mean wireless LAN generically, which
> is probably fine with the Wi-Fi people since that's what they exist to
> promote.

I've often wondered where we stand on the path toward a regular model for wireless networking. Wi-Fi works, of course, and there are Bluetooth, Zigbee, etc. I never liked Bluetooth because it tries to do too much, going vertical too soon, violating the "just do your part" rule of stacked protocol design. Zigbee sorta kinda did the same thing.
I think the wireless guys should just do their part, and trust the software protocol layers to do theirs. Under a regular model, the wireless system would have a deliberate set of features, and the protocol stack engineer would be able to have his cake and eat it too. It might include:

1. software-controlled variable power output
2. no "handover" whatsoever, unlike Wi-Fi access points, to make handover ultra-fast
3. CDMA-like simulcasting [maybe, not sure]
4. awareness of transmit power vs receive power on each individual frame
5. ultra-strong FEC [retransmission is a killer at extremely high bandwidths]
6. jumbo frames, 8192+ bytes of payload [frame atomicity of link-local communication is required to solve some networking problems]

Arguably, the current Wi-Fi association model, where a Wi-Fi station (STA) "associates" with a Wi-Fi access point (AP), is highly inappropriate for trying to solve the generalized mobility problem, where a node moving at, say, 100 km/h, making and breaking connections rapidly, say, every few meters, must facilitate continuous connectivity between software agents inside itself and their peers inside foreign stationary or mobile nodes. Zigbee and Bluetooth have the same problem: by making the link layer so intelligent, the upper layers are robbed of the possibility of sophisticated, dynamic reconfiguration according to the environment. But it is at these upper layers where dynamic reconfiguration makes sense. So the ideal wireless transceiver should be dumb, but not so dumb that the upper layers cannot detect when it should forsake one wireless peer for another as it tries to maintain overall global connectivity. The optimum wireless technology, IMO, is one where someone has managed to figure out which features rightly belong at the link layer, and which don't, so as to make the entire stack effective. I do not know what the solution is, but I have a strong feeling that it is not Wi-Fi or any of the other wireless technologies.
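To make the "dumb transceiver, smart upper layers" idea concrete, here is a minimal sketch of item 4 feeding a peer-selection decision at an upper layer. It assumes only that each received frame carries the sender's transmit power alongside the measured receive power; the names (`LinkReport`, `pick_peer`, the 6 dB hysteresis) are illustrative inventions, not from any real standard.

```python
# Hypothetical sketch: an upper layer deciding when to forsake one
# wireless peer for another, using only per-frame link-margin reports
# from a deliberately dumb transceiver. All names are illustrative.

from dataclasses import dataclass

@dataclass
class LinkReport:
    peer_id: str
    tx_power_dbm: float   # power the peer says it transmitted at
    rx_power_dbm: float   # power we actually received

    @property
    def path_loss_db(self) -> float:
        return self.tx_power_dbm - self.rx_power_dbm

def pick_peer(current: str, reports: list[LinkReport],
              hysteresis_db: float = 6.0) -> str:
    """Switch peers only when another link is clearly better,
    so the node does not flap between two marginal peers."""
    best = min(reports, key=lambda r: r.path_loss_db)
    cur = next((r for r in reports if r.peer_id == current), None)
    if cur is None:
        return best.peer_id             # current peer vanished
    if cur.path_loss_db - best.path_loss_db > hysteresis_db:
        return best.peer_id             # clearly better link: move over
    return current                      # stick with what we have

reports = [LinkReport("AP-17", 20.0, -72.0),   # 92 dB path loss
           LinkReport("AP-18", 20.0, -60.0)]   # 80 dB path loss
print(pick_peer("AP-17", reports))   # prints AP-18 (12 dB better > 6 dB margin)
```

The point of the hysteresis margin is exactly the "not so dumb" requirement: the link layer reports raw numbers, and the policy of when to jump lives above it, where it can be reconfigured per environment.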
-Le Chaud Lapin-
From: Vladimir Vassilevsky on 25 Jul 2010 09:21

Le Chaud Lapin wrote:
> Under a regular model, the wireless system would have a deliberate set
> of features, and the protocol stack engineer would be able to have his
> cake and eat it too. It might include:

A protocol stack engineer has no idea about wireless specifics.

> 1. software-controlled variable power output

Under multiuser interference, everybody will crank the power all the way up. So everything will be jammed.

> 2. no "handover" whatsoever, unlike Wi-Fi access points, to make
> handover ultra-fast

Handover is a tremendous overhead on the infrastructure and bandwidth.

> 3. CDMA-like simulcasting [maybe, not sure]

CDMA is a tremendous overhead on bandwidth.

> 4. awareness of transmit power vs receive power on each individual
> frame

You will get a bunch of random numbers representing nothing.

> 5. ultra-strong FEC [retransmission is a killer at extremely high
> bandwidths]

A tremendous overhead on bandwidth without any benefit.

> 6. jumbo frames, 8192+ bytes of payload [frame atomicity of link-local
> communication is required to solve some networking problems]

High likelihood of dropped frames. Slow responsiveness of the network.

> Arguably, the current Wi-Fi association model, where a Wi-Fi station
> (STA) "associates" with a Wi-Fi access point (AP), is highly
> inappropriate for trying to solve the generalized mobility problem,

Of course. It was not designed for that.

> where a node is moving at, say, 100 km/h, making and breaking
> connections rapidly, say, every few meters,

Disaster.

> facilitates continuous
> connectivity between software agents inside itself and their peers
> inside foreign stationary or mobile nodes. Zigbee and Bluetooth have
> the same problem: by making the link layer so intelligent, upper
> layers are being robbed of the possibility for sophisticated, dynamic
> reconfiguration according to environment.

This is good. Keep the scripties away. Otherwise one sysadmin could mess up the whole network in the area.

> But it is at these upper
> layers where dynamic reconfiguration makes sense. So the ideal
> wireless transceiver should be dumb, but not so dumb that the upper
> layers cannot detect when it should forsake one wireless peer for
> another as it tries to maintain overall global connectivity.

Sure. Semi-proprietary incompatible protocols, like 20 years ago.

> The optimum wireless technology, IMO, is one where someone has managed
> to figure out which features rightly belong at the link layer, and which
> don't, so as to make the entire stack effective.

If you say the word "optimum", it means you have computed the derivative of something with respect to something, and that derivative is zero.

> I do not know what
> the solution is, but I have a strong feeling that it is not Wi-Fi or any
> of the other wireless technologies.

What is the particular problem you are looking for a solution to?

VLV
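The FEC-vs-retransmission disagreement above comes down to arithmetic. A back-of-envelope comparison, under the simplifying assumptions of a fixed random bit-error rate, stop-and-wait ARQ, and an idealized code that pays only its rate overhead (all numbers illustrative, not measurements):

```python
# Back-of-envelope sketch of the FEC-vs-retransmission point argued
# above. BER, frame sizes, and code rate are illustrative assumptions.

def arq_goodput(frame_bits: int, ber: float) -> float:
    """Fraction of raw channel bits that become useful payload with no
    FEC: a single bit error kills the whole frame, forcing a resend."""
    return (1.0 - ber) ** frame_bits    # probability a frame survives

def fec_goodput(code_rate: float) -> float:
    """Idealized strong FEC: pay a fixed code-rate overhead, assume the
    code corrects essentially all residual errors at this BER."""
    return code_rate

ber = 1e-5
for frame_bits in (1500 * 8, 8192 * 8):      # ordinary vs jumbo frame
    print(frame_bits // 8, "bytes:",
          f"ARQ goodput {arq_goodput(frame_bits, ber):.2f},",
          f"rate-3/4 FEC goodput {fec_goodput(0.75):.2f}")
```

At a BER of 1e-5 an uncoded 1500-byte frame survives about 89% of the time, while an uncoded 8192-byte jumbo frame survives only about 52% of the time, worse than simply paying a rate-3/4 code's 25% overhead. So, on these assumptions, items 5 and 6 of the wish list are coupled: jumbo frames without strong FEC throw away more bandwidth than the FEC costs.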
From: Le Chaud Lapin on 26 Jul 2010 14:16

On Jul 25, 8:21 am, Vladimir Vassilevsky <nos...(a)nowhere.com> wrote:
> Le Chaud Lapin wrote:
> > Under a regular model, the wireless system would have a deliberate set
> > of features, and the protocol stack engineer would be able to have his
> > cake and eat it too. It might include:
[snip]
> What is the particular problem you are looking for a solution to?

Not sure if it has a name, but it might be called "The Network-Centric Wireless Adapter Problem".

There are a lot of researchers who believe that the future of distributed communication includes mobile nodes, thousands of them in a relatively local area, moving at high speed, making and breaking connections within the local region, say, along a highway, while the applications inside the mobile nodes never miss a beat.

Watching internet television on an LCD headrest in a vehicle moving at 100 km/h, where connectivity from the vehicle to the rest of the internet is via a series of access points along the highway, would illustrate the problem. The bandwidth from the transceiver in the vehicle to the access point on the highway would have to be large, and there would be potentially thousands of nodes within a few km along the highway.

So the question might be...

Assuming a generalized model where:

1. Inter-node bit rate is large, say 100 Mbit/s.
2. All nodes are potentially mobile, and velocity is large (> 100 km/h).
3. Density of nodes is large (> 1000 over a length of 1 km).
4. End-to-end round-trip delay must be minimized if the path includes a wireless link.
5. Topological optimality is always sought between two nodes, mobile or not [minimum distance of path, that is].
6. A software application in stationary node S is always able to "connect" to mobile node M at any moment, and maintain connectivity regardless of what M is doing (Yahoo Messenger/FTP/etc. must not disconnect as M moves).

What model should be chosen for the wireless transceiver to facilitate all of these things at once?

-Le Chaud Lapin-
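Requirements 1 and 3 together imply an aggregate demand worth computing before picking a transceiver model. A rough feasibility sketch, where the per-AP capacity figure is an assumption chosen only for illustration:

```python
# Rough feasibility arithmetic for the generalized model above.
# The access-point capacity is an assumed figure, not a spec.

node_density = 1000        # nodes per km (requirement 3)
per_node_rate = 100e6      # bit/s per node (requirement 1)
ap_capacity = 800e6        # bit/s one access point can deliver
                           # (e.g. ~160 MHz at ~5 bit/s/Hz, assumed)

aggregate = node_density * per_node_rate          # demand per km
aps_per_km = aggregate / ap_capacity
print(f"aggregate demand: {aggregate/1e9:.0f} Gbit/s per km")
print(f"access points needed: {aps_per_km:.0f} per km "
      f"(one every {1000/aps_per_km:.0f} m)")
```

On these assumptions the highway needs on the order of 100 Gbit/s per km, i.e. an access point every few meters, which is consistent with the connection-churn rate described earlier in the thread.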
From: Vladimir Vassilevsky on 26 Jul 2010 15:36

Le Chaud Lapin wrote:
> There are a lot of researchers who believe that the future of
> distributed communication includes mobile nodes, thousands of them in
> a relatively local area, moving at high speed, making and breaking
> connections within the local region, say, along a highway, while the
> applications inside the mobile nodes never miss a beat.

There are a lot of researchers who put their beliefs above reason, arithmetic, and the 40+ years of evolution of communication systems.

> Watching internet television on an LCD headrest in a vehicle moving at
> 100 km/h, where connectivity from the vehicle to the rest of the internet
> is via a series of access points along the highway, would illustrate the
> problem.

If a node makes and maintains N connections, it gobbles resources that could otherwise be used by more than N nodes. This resource includes both wireless bandwidth and infrastructure. For any given infrastructure, there is only so much Mbit/s/km^3 that can be delivered. You can share this Mbit/s/km^3 in different ways. The most efficient way of sharing is a centralized system where the central control keeps track of every subscriber and manages the resource accordingly.

> The bandwidth would have to be large from the transceiver in the vehicle
> to the access point on the highway, and there would be potentially
> thousands of nodes within a few km along the highway.

Put a leaky waveguide along the highway. Track every node with the beam of a smart antenna. Those things have been known for ages.

> So the question might be...
>
> Assuming a generalized model where:
>
> 1. Inter-node bit rate is large, say 100 Mbit/s.
> 2. All nodes are potentially mobile, and velocity is large (> 100 km/h).
> 3. Density of nodes is large (> 1000 over a length of 1 km).
> 4. End-to-end round-trip delay must be minimized if the path includes
> a wireless link.
> 5. Topological optimality is always sought between two nodes, mobile
> or not [minimum distance of path, that is].
> 6. A software application in stationary node S is always able to
> "connect" to mobile node M at any moment, and maintain connectivity
> regardless of what M is doing (Yahoo Messenger/FTP/etc. must not
> disconnect as M moves).
>
> What model should be chosen for the wireless transceiver to facilitate
> all of these things at once?

A centralized, privatized, government-controlled, licensed, huge, expensive nationwide structure like the 2+ G cellular networks.

VLV
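Two numbers make the physical-layer side of the "disaster" concrete: the handover rate implied by dense access points, and the Doppler shift the receiver must track at 100 km/h. The AP spacing and carrier frequency below are illustrative assumptions, not from the thread:

```python
# Sketch of the dynamics behind the 100 km/h scenario discussed above.
# AP spacing and carrier frequency are illustrative assumptions.

v = 100 / 3.6              # 100 km/h in m/s
ap_spacing = 8.0           # metres between access points (assumed)
carrier = 5.8e9            # Hz, an unlicensed-band carrier (assumed)
c = 3e8                    # speed of light, m/s

handovers_per_s = v / ap_spacing      # how often the link must be remade
doppler_hz = v / c * carrier          # max carrier frequency offset

print(f"handovers: {handovers_per_s:.1f} per second")
print(f"max Doppler shift: {doppler_hz:.0f} Hz")
```

Roughly 3 to 4 handovers per second per vehicle, times thousands of vehicles, is the signalling load the association model would have to absorb; the ~500 Hz Doppler spread is routine for a cellular-grade receiver but is a real constraint on any "dumb transceiver" design.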