From: Arkadiy on 14 Jan 2008 14:59

On Jan 14, 2:18 pm, Rick Jones <rick.jon...(a)hp.com> wrote:
> Basically, the upshot of what everyone is saying is that TCP was not
> designed for the sort of "perishable" data you wish to send/recv
> with it.

The question then is: what should timeouts be used for? Only for
detecting abnormal situations, like a congested or rebooted server?
What follows from the discussion seems to be that using timeouts to
limit the response time is not recommended, correct?

Regards,
Arkadiy
From: David Schwartz on 14 Jan 2008 17:25

On Jan 14, 11:59 am, Arkadiy <vertl...(a)gmail.com> wrote:
> The question then is: what should timeouts be used for? Only for
> detecting abnormal situations, like a congested or rebooted server?
> What follows from the discussion seems to be that using timeouts to
> limit the response time is not recommended, correct?

I still think you are approaching this problem from the wrong
direction. You should follow the specifications for the protocol you
are implementing or the server you are talking to. If you have no
such specifications, you should do everything in your power to
develop some.

It sounds like you are using a protocol that is fundamentally broken,
in that it provides no way to detect a truly abnormal situation.
Ideally, the protocol would be fixed.

If the solution to a timeout is going to be to close the socket, the
server has to know this. It needs to detect the socket closure and
release the resources associated with that connection immediately.
This is not common practice.

If you have no other choice, use a back-off-and-retry model. Make
sure to back off a bit more (up to some reasonable cap) with each
successive failure. At least this way, if the server does get
overloaded, you will reduce the load on it and give it some hope of
recovering.

DS
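A minimal sketch of the back-off-and-retry model David describes, in
C. The connect_to_server() helper, the initial one-second delay, and
the 64-second cap are illustrative assumptions, not anything from the
thread:

    #include <unistd.h>

    /* Hypothetical helper: returns a connected socket descriptor,
     * or -1 on failure. */
    extern int connect_to_server(void);

    /* Keep retrying, doubling the delay after each failure up to a
     * reasonable cap, so an overloaded server sees less load and
     * has some hope of recovering. */
    int connect_with_backoff(void)
    {
        unsigned int delay = 1;      /* initial delay, seconds (assumed) */
        const unsigned int cap = 64; /* reasonable cap (assumed) */

        for (;;) {
            int fd = connect_to_server();
            if (fd >= 0)
                return fd;           /* connected */

            sleep(delay);            /* wait before the next attempt */
            if (delay < cap)
                delay *= 2;          /* back off a bit more each time */
        }
    }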
From: Rick Jones on 14 Jan 2008 19:10

Arkadiy <vertleyb(a)gmail.com> wrote:
> However, what's bothering me is how I tell a congested server from
> something like the server's host having rebooted. In the latter
> situation, I do want to reconnect.

The server's host rebooting will cause an RST to come back to the
client end in response to the first segment the client sends to the
server after it reboots, because the server's host TCP will have no
knowledge of the connection. The RST arriving at the client should
result in the socket becoming readable, and a read/recv against the
socket failing with an error such as ECONNRESET.

rick jones
--
Wisdom Teeth are impacted, people are affected by the effects of
events.
these opinions are mine, all mine; HP might not want them anyway... :)
feel free to post, OR email to rick.jones2 in hp.com but NOT BOTH...
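This is what the two cases look like to the application: a rough
sketch in C, assuming 'fd' is a connected TCP socket that select()
or poll() has just reported readable:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>

    /* Distinguish data, an orderly close, and a reset. */
    void check_socket(int fd)
    {
        char buf[4096];
        ssize_t n = recv(fd, buf, sizeof(buf), 0);

        if (n > 0) {
            printf("got %zd bytes of data\n", n);
        } else if (n == 0) {
            printf("peer closed the connection cleanly (FIN)\n");
        } else if (errno == ECONNRESET) {
            /* Typical result when an RST arrives, e.g. after the
             * server's host rebooted and lost all connection state:
             * a good reason to reconnect. */
            printf("connection reset by peer\n");
        } else {
            printf("recv error: %s\n", strerror(errno));
        }
    }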
From: Rick Jones on 14 Jan 2008 19:12

Arkadiy <vertleyb(a)gmail.com> wrote:
> The question then is: what should timeouts be used for? Only for
> detecting abnormal situations, like a congested or rebooted server?
> What follows from the discussion seems to be that using timeouts to
> limit the response time is not recommended, correct?

In the context of TCP, application-level timeouts are there to
protect against sitting there waiting "forever" when something is
amiss. The only way they would be involved in response time would be
when the response time got to the point where it was considered
unusable.

rick jones
--
The computing industry isn't as much a game of "Follow The Leader" as
it is one of "Ring Around the Rosy" or perhaps "Duck Duck Goose."
- Rick Jones
these opinions are mine, all mine; HP might not want them anyway... :)
feel free to post, OR email to rick.jones2 in hp.com but NOT BOTH...
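One common way to implement such an application-level timeout is
poll() with a deadline. A sketch in C; the choice of timeout value
is entirely up to the application and is not something from the
thread:

    #include <poll.h>

    /* Wait for socket 'fd' to become readable, but give up after
     * 'timeout_ms' milliseconds. Returns 1 if readable, 0 on
     * timeout, -1 if poll() itself failed. */
    int wait_for_reply(int fd, int timeout_ms)
    {
        struct pollfd pfd = { .fd = fd, .events = POLLIN };
        int rc = poll(&pfd, 1, timeout_ms);

        if (rc > 0)
            return 1;   /* readable: data, FIN, or RST has arrived */
        if (rc == 0)
            return 0;   /* timed out: something may be amiss */
        return -1;
    }

Note that the timeout itself does nothing to the connection; on
timeout the application decides whether to report failure, keep
waiting, or tear the connection down.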
From: Barry Margolin on 14 Jan 2008 20:52

In article
<2a028093-c32c-4d98-9e3c-996bdc47a193(a)z17g2000hsg.googlegroups.com>,
Arkadiy <vertleyb(a)gmail.com> wrote:
> The question then is: what should timeouts be used for? Only for
> detecting abnormal situations, like a congested or rebooted server?
> What follows from the discussion seems to be that using timeouts to
> limit the response time is not recommended, correct?

Correct. The only thing I'd use a timeout for is to report to the
caller that the server appears to be dead, as when a browser reports
to the user that the web server didn't respond.

But you have to be careful to set your timeouts appropriately. For
example, web servers often invoke complex applications that take a
long time to perform their computation, and it's a real pain when a
web browser kills the connection after too short a time. Server
designers have had to resort to displaying progress indicators to
keep the browsers happy.

--
Barry Margolin, barmar(a)alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***
*** PLEASE don't copy me on replies, I'll read them in the group ***
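For a blocking client of the kind discussed here, one way to put such
a "server appears dead" timeout on reads is SO_RCVTIMEO. A sketch in
C, assuming a blocking socket; the 30-second value in the usage line
is an arbitrary illustration, and as Barry notes, picking it too
short is its own failure mode:

    #include <sys/socket.h>
    #include <sys/time.h>

    /* Make blocking recv() calls on 'fd' give up after 'seconds';
     * after a timeout, recv() fails with EAGAIN/EWOULDBLOCK. */
    int set_receive_timeout(int fd, long seconds)
    {
        struct timeval tv = { .tv_sec = seconds, .tv_usec = 0 };
        return setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv,
                          sizeof(tv));
    }

    /* Usage: set_receive_timeout(fd, 30); */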