From: Barry Margolin on 13 Jan 2008 22:55

In article
<f85b40b7-81bf-40ab-a4e3-182dfb0e6da2(a)v46g2000hsv.googlegroups.com>,
 K-mart Cashier <cdalten(a)gmail.com> wrote:

> On Jan 12, 1:17 am, David Schwartz <dav...(a)webmaster.com> wrote:
> > On Jan 11, 8:27 pm, Arkadiy <vertl...(a)gmail.com> wrote:
> >
> > > Yes. My protocol is -- send request, get response, if it times out,
> > > forget the whole thing, send the next request, get the next response,
> > > and so on...
> >
> > If the protocol permits the other side to not respond, it should also
> > require the other side to specifically identify what request each
> > response is to. If it doesn't do that, the protocol is broken.
> >
> > I agree with Rainer Weikusat. It sounds like TCP was a bad choice, as
> > it provides you no ability to control the transmit timing.
> >
> > DS
>
> Okay, I'm having a brain fart. How does UDP provide a way to control
> the transmit timing (as opposed to TCP)?

The point is that in UDP, each message is independent. When you use
TCP, if there's a delay in responding to request N, it will also delay
responses N+1, N+2, etc., because they're all queued behind it in the
stream. Since UDP doesn't have a stream, a problem with one request
need not delay the others.

Of course, if all the requests are being handled by a sequential
server, IT might end up serializing everything, causing a similar
delay. In that case it probably doesn't matter which protocol you use.

--
Barry Margolin, barmar(a)alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***
*** PLEASE don't copy me on replies, I'll read them in the group ***
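To make Barry's point concrete: because each UDP datagram stands alone, the
client can stamp every request with an id and match replies as they come
back, so a reply that is late (or never arrives) costs only its own timeout.
A minimal sketch in C, assuming a connect()ed UDP socket and a server that
echoes the 32-bit id; the struct layouts and names are hypothetical, not the
protocol under discussion:

    /* Id-matched UDP request/response, assuming the server echoes the
     * id it received.  "struct req"/"struct resp" are made-up layouts. */
    #include <stdint.h>
    #include <string.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <poll.h>

    struct req  { uint32_t id; char payload[64]; };
    struct resp { uint32_t id; char payload[64]; };

    /* Send one request and wait up to timeout_ms for the matching reply.
     * Returns 0 on success, -1 on timeout or error.  (A production
     * version would shrink timeout_ms by the time already spent when
     * it loops.) */
    int udp_request(int fd, uint32_t id, const char *msg,
                    struct resp *out, int timeout_ms)
    {
        struct req r = { .id = htonl(id) };   /* payload zero-filled */
        strncpy(r.payload, msg, sizeof r.payload - 1);
        if (send(fd, &r, sizeof r, 0) < 0)
            return -1;

        struct pollfd pfd = { .fd = fd, .events = POLLIN };
        for (;;) {
            if (poll(&pfd, 1, timeout_ms) <= 0)
                return -1;                    /* timed out, or error */
            if (recv(fd, out, sizeof *out, 0) < 0)
                return -1;
            if (ntohl(out->id) == id)
                return 0;                     /* the reply we wanted */
            /* else: stale reply to an earlier request; drop it */
        }
    }

A late reply to an earlier, timed-out request simply falls through the id
check and is discarded, which is exactly the per-request independence a TCP
byte stream cannot give you.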
From: David Schwartz on 14 Jan 2008 07:37

On Jan 13, 8:54 am, Arkadiy <vertl...(a)gmail.com> wrote:

> I can do this with UDP, but with TCP the server I am using doesn't
> implement request ids.

That's too bad.

> Do you mean that timeouts don't make sense with TCP? Can't I just
> drop the connection?

It depends on why you expect to experience timeouts. Dropping and
re-establishing a TCP connection is expensive; presumably you want a
more efficient way to handle a timeout than that.

DS
From: Arkadiy on 14 Jan 2008 10:13

On Jan 14, 7:37 am, David Schwartz <dav...(a)webmaster.com> wrote:

> > Do you mean that timeouts don't make sense with TCP? Can't I just
> > drop the connection?
>
> It depends on why you expect to experience timeouts. Dropping and
> re-establishing a TCP connection is expensive; presumably you want a
> more efficient way to handle a timeout than that.

The server will be accessed over the LAN, so I expect both connect and
read/write operations to be very fast. But you are right: I generally
don't want the cost of connecting to affect my throughput (I mean the
throughput of the code using my API). The application requirement is
to either do it fast (how fast depends on the timeout provided by the
user code) or return an error.

So right now I have a dedicated thread that is responsible for
creating connections and placing them in a pool. A request thread just
takes a connection from the pool. If it can't get one in a given
amount of time (all busy, all broken, etc.), that is also a timeout (a
different one). If it does get a connection, it uses it to make a
request to the server. This request may also time out (and that
timeout is the subject of this thread). What the reasons would be, I
am not sure -- probably anything that can't be diagnosed right away:
the server is congested, the server entered an infinite loop (probably
won't happen), the server host crashed, etc. If I drop the connection,
I will schedule one new connection to be created by the dedicated
thread.

Does this make sense?

Also, since the TCP server doesn't implement request ids, it looks
like dropping and recreating the connection is the only possibility.
Or am I missing something?

Regards,
Arkadiy
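What Arkadiy describes is a classic producer/consumer hand-off, and his
"different" timeout falls out of a timed wait on the pool. A rough sketch
assuming POSIX threads; the pool size, names, and fixed array are
illustrative guesses, not his actual code:

    /* Pool hand-off: a dedicated connector thread adds sockets, request
     * threads take one or give up at a deadline. */
    #include <pthread.h>
    #include <time.h>
    #include <errno.h>

    #define POOL_MAX 16

    struct conn_pool {
        int             fds[POOL_MAX];
        int             count;
        pthread_mutex_t lock;
        pthread_cond_t  nonempty;
    };

    /* Called by the connector thread after connect() succeeds.  (A real
     * version would close(fd) instead of dropping it when the pool is
     * full.) */
    void pool_put(struct conn_pool *p, int fd)
    {
        pthread_mutex_lock(&p->lock);
        if (p->count < POOL_MAX)
            p->fds[p->count++] = fd;
        pthread_cond_signal(&p->nonempty);
        pthread_mutex_unlock(&p->lock);
    }

    /* Called by a request thread: returns a socket, or -1 if none became
     * available before the deadline (the "different" timeout above).
     * The deadline is absolute CLOCK_REALTIME time, as
     * pthread_cond_timedwait requires. */
    int pool_take(struct conn_pool *p, const struct timespec *deadline)
    {
        int fd = -1;
        pthread_mutex_lock(&p->lock);
        while (p->count == 0) {
            if (pthread_cond_timedwait(&p->nonempty, &p->lock,
                                       deadline) == ETIMEDOUT)
                break;
        }
        if (p->count > 0)
            fd = p->fds[--p->count];
        pthread_mutex_unlock(&p->lock);
        return fd;
    }

Because pool_take() takes an absolute deadline, the caller can compute it
once from the user-supplied timeout and not worry about spurious wakeups
eating the budget.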
From: David Schwartz on 14 Jan 2008 10:42

On Jan 14, 7:13 am, Arkadiy <vertl...(a)gmail.com> wrote:

> The server will be accessed over the LAN, so I expect both connect and
> read/write operations to be very fast. But you are right: I generally
> don't want the cost of connecting to affect my throughput (I mean the
> throughput of the code using my API). The application requirement is
> to either do it fast (how fast depends on the timeout provided by the
> user code) or return an error. So right now I have a dedicated thread
> that is responsible for creating connections and placing them in a
> pool. A request thread just takes a connection from the pool. If it
> can't get one in a given amount of time (all busy, all broken, etc.),
> that is also a timeout (a different one). If it does get a connection,
> it uses it to make a request to the server. This request may also time
> out (and that timeout is the subject of this thread). What the reasons
> would be, I am not sure -- probably anything that can't be diagnosed
> right away: the server is congested, the server entered an infinite
> loop (probably won't happen), the server host crashed, etc. If I drop
> the connection, I will schedule one new connection to be created by
> the dedicated thread.
>
> Does this make sense?

Not to me, no.

Suppose the timeout is due to the server being overloaded. If you just
close the connection, what will the server do? Will it abort the
operation or finish it?

Suppose the first operation just barely times out, but is still
partially being worked on when you send the second operation. That
means the second operation has to wait for the first to finish inside
the server, or is at least slowed down by it, so the second operation
times out as well. This causes you to start a third operation. Before
you know it, you've got ten operations the server is still trying to
finish, you've closed all the connections, and the server has no hope
of ever catching up. You're firing more and more connections at it,
and it's getting more and more overloaded.

That makes sense to you?

> Also, since the TCP server doesn't implement request ids, it looks
> like dropping and recreating the connection is the only possibility.
> Or am I missing something?

You need some way to figure out what's going on; otherwise, you need
to back off and retry. I don't know enough about your application to
say what's sensible, but you should probably be measuring the server's
response time and backing off when it's too high. Increasing the load
you place on a server is not a rational response to the server being
overloaded.

DS
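David's closing suggestion -- measure the server's response time and back
off when it climbs -- might look something like the sketch below. The
smoothing factor, threshold, cap, and function names are assumptions for
illustration, not a prescription from the thread:

    /* Keep a smoothed average of response times and pause before
     * retrying while the server looks loaded. */
    #include <time.h>

    static double ewma_ms;                /* smoothed response time */

    /* Call after every completed request with its measured duration. */
    void record_response_ms(double ms)
    {
        ewma_ms = (ewma_ms == 0.0) ? ms : 0.8 * ewma_ms + 0.2 * ms;
    }

    /* Call before (re)issuing a request.  While the average stays above
     * the threshold, sleep and double the pause up to a cap; once the
     * server recovers, reset the pause. */
    void maybe_back_off(unsigned *backoff_ms, double slow_threshold_ms)
    {
        if (ewma_ms > slow_threshold_ms) {
            struct timespec ts = {
                .tv_sec  = *backoff_ms / 1000,
                .tv_nsec = (*backoff_ms % 1000) * 1000000L
            };
            nanosleep(&ts, NULL);
            if (*backoff_ms < 8000)
                *backoff_ms *= 2;         /* exponential backoff, capped */
        } else {
            *backoff_ms = 50;             /* server healthy again: reset */
        }
    }

The effect is the opposite of the failure mode David describes: the slower
the server gets, the less work the client throws at it.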
From: Rick Jones on 14 Jan 2008 13:01
Arkadiy <vertleyb(a)gmail.com> wrote:

> Do you mean that timeouts don't make sense with TCP? Can't I just
> drop the connection?

Well, if the reason the message was "late" was that either the client
or the server was overloaded, killing the connection and establishing
a new one will add a lot of overhead, which makes it unlikely that the
oversaturated situation will resolve itself.

rick jones
--
portable adj, code that compiles under more than one compiler
these opinions are mine, all mine; HP might not want them anyway... :)
feel free to post, OR email to rick.jones2 in hp.com but NOT BOTH...