From: Rick Parrish on 21 Jan 2010 13:00

I have an application that accepts an existing socket connection (passed to it by the server that actually accepted the incoming connection), and I'm trying to find a way to have the application NOT close the socket when it terminates.

So basically what I've done is created a class that inherits from System.Net.Sockets.Socket. In the .NET source code I can see that the Dispose() method closes the socket if it is still open, so my first thought was to override the Dispose() method and leave out the socket-closing portion of the code. This failed, so maybe my understanding of overriding isn't correct, and the base method still gets called?

Anyway, in my trial-and-error attempts I accidentally stumbled across a method that works, and seems reliable on my machine, but I'd like something I can be a little more certain of on other machines. Basically, if I call System.Threading.Thread.Sleep(2500) in my overridden Dispose(), the connection stays open after the application terminates. Like I said, I have no idea if this will hold true on other machines, which is why I'm wondering if there is an easier/better/more reliable way to keep a Socket from being closed when an application quits?
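The override described above would presumably look something like the following sketch. (Illustrative only: the class name is made up, and the virtual member on Socket is the protected Dispose(bool), not the public Dispose() itself. As the replies note, the OS still reclaims the handle when the process exits.)

```csharp
using System.Net.Sockets;

// Hypothetical subclass: skip the base class's cleanup so the
// underlying handle is not closed when Dispose() runs.
class LeakySocket : Socket
{
    public LeakySocket(AddressFamily af, SocketType st, ProtocolType pt)
        : base(af, st, pt) { }

    protected override void Dispose(bool disposing)
    {
        // Deliberately do NOT call base.Dispose(disposing), so the
        // socket handle is left open. The OS will still close it
        // when the process terminates.
    }
}
```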
From: Peter Duniho on 21 Jan 2010 13:29

Rick Parrish wrote:
> [...]
> Like I said, I have no idea if this will hold true on other machines,
> which is why I'm wondering if there is an easier/better/more reliable
> way to keep a Socket from being closed when an application quits?

IMHO, the first step is to understand better why you think it's a good idea to leave the socket open. Even if you successfully terminate your application without closing the socket, the OS is going to notice and eventually close it on your behalf.

Without seeing the code, it's impossible to know for sure why one approach you tried had the appearance of the effect you wanted while another did not. It's possible that your call to Sleep() delayed the finalizer thread enough that the run-time just gave up on it, thus interrupting the disposal (but also the finalization of EVERY OTHER OBJECT also needing finalization!).

But really, I think it would be more useful to discuss whatever actual problem you're trying to solve than this particular solution you've decided upon. The solution you're trying to implement seems like a bad idea in any case, and once your process has terminated, it's unlikely to have any specific lasting effect (i.e. it probably won't do what you seem to want it to do anyway).

Pete
From: Rick Parrish on 21 Jan 2010 13:59

> IMHO, the first step is to understand better why you think it's a good
> idea to leave the socket open. Even if you successfully terminate your
> application without closing the socket, the OS is going to notice and
> eventually close it on your behalf.

Any experience with old BBS software? Basically I'm looking to make a modern equivalent. So I have a telnet server accepting telnet connections, and after connecting, a user may want to run an external program. So the server launches the program and passes the socket handle so the program can communicate with the user. When the program quits, the user should go back to the telnet server so they can do something else, but if the program closes the socket, then the user is just disconnected.

I implemented this long ago in Delphi, so I know the OS will allow what I want; now it's just a matter of whether the Socket class can do what I want, or whether I'll have to rewrite it for this one stupid little feature.

> Without seeing the code, it's impossible to know for sure why one
> approach you tried had the appearance of the effect you wanted while
> another did not. It's possible that your call to Sleep() delayed the
> finalizer thread enough that the run-time just gave up on it, thus
> interrupting the disposal (but also the finalization of EVERY OTHER
> OBJECT also needing finalization!).

This is what I was thinking, and also why I'm looking for a better solution.
From: Patrice on 21 Jan 2010 14:11

Hi,

> I'm trying to find a way to have the application NOT
> close the socket when it terminates.

It would be similar to leaving a file open even if the app is terminated. I would not be surprised if it were simply not possible, as AFAIK all resources owned by a process are supposed to be cleaned up when the process dies...

You may want to explain first what your overall goal is...

--
Patrice
From: Peter Duniho on 21 Jan 2010 14:15

Rick Parrish wrote:
>> IMHO, the first step is to understand better why you think it's a good
>> idea to leave the socket open. Even if you successfully terminate your
>> application without closing the socket, the OS is going to notice and
>> eventually close it on your behalf.
>
> Any experience with old BBS Software? Basically I'm looking to make a
> modern equivalent. So I have a telnet server accepting telnet
> connections, and after connecting, a user may want to run an external
> program. So the server launches the program and passes the socket
> handle so the program can communicate with the user.

That doesn't make sense. Socket handles are valid only within the owning process. You have to use WSADuplicateSocket(), directly or indirectly, to marshal the socket information to another process for it to use the socket.

Note that when using WSADuplicateSocket(), one process can close the socket without affecting the other. They share the same socket instance, and the OS keeps track of how many processes are using the socket, only actually closing the socket when the last process is done with it. (Obviously you can't use the .NET method Socket.DuplicateAndClose(), because it automatically closes the socket in the current process; but the unmanaged Winsock API supports leaving the duplicated socket open.)

> [...]
> I implemented this long ago in Delphi, so I know the OS will allow
> what I want, now it's just a matter of whether the Socket class can do
> what I want, or if I'll have to rewrite it for this one stupid little
> feature.

Frankly, I'm still not even sure sharing a socket is the best approach. A much more common approach would be to make the external process use stdin and stdout, and have the original parent process be the only one using the socket. This avoids having to deal with any synchronization issues between processes sharing the socket, and provides a more general-purpose interface too.
The parent process can just redirect the input and output to the child process, and handle all the communications with the remote endpoint itself, proxying communications between the remote endpoint and the child process.

Pete
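For completeness, the managed duplication route Pete mentions might be sketched as follows, for Windows only. (A sketch under stated assumptions: Pack/Unpack and the command-line handoff are illustrative helpers, not part of the framework, and per Pete's caveat, DuplicateAndClose() closes the parent's copy of the socket.)

```csharp
using System;
using System.Net.Sockets;

// Sketch of handing a connected socket to a child process on Windows.
// Socket.DuplicateAndClose() wraps WSADuplicateSocket() but, as noted
// above, also closes the socket in the calling process.
static class SocketHandoff
{
    // Pack/unpack the protocol-information blob so it can travel on a
    // command line or a pipe. (Illustrative helpers, not a .NET API.)
    public static string Pack(SocketInformation info)
    {
        return Convert.ToBase64String(info.ProtocolInformation);
    }

    public static SocketInformation Unpack(string blob)
    {
        return new SocketInformation
        {
            ProtocolInformation = Convert.FromBase64String(blob)
        };
    }

    // Parent side (sketch; "door.exe" is a made-up program name):
    //   SocketInformation info = clientSocket.DuplicateAndClose(childPid);
    //   Process.Start("door.exe", SocketHandoff.Pack(info));
    //
    // Child side (sketch):
    //   Socket s = new Socket(SocketHandoff.Unpack(args[0]));
}
```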
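The stdin/stdout proxy Pete recommends might look something like this minimal sketch (the class and method names are made up for illustration; error handling, telnet option negotiation, and stderr are omitted):

```csharp
using System.Diagnostics;
using System.IO;
using System.Threading;

// Sketch of the proxy approach: the server keeps the socket, launches
// the external program with redirected stdin/stdout, and copies bytes
// in both directions until the child exits.
static class DoorProxy
{
    // Copy bytes from src to dst until src reaches end-of-stream.
    public static void Pump(Stream src, Stream dst)
    {
        byte[] buf = new byte[4096];
        int n;
        while ((n = src.Read(buf, 0, buf.Length)) > 0)
        {
            dst.Write(buf, 0, n);
            dst.Flush();
        }
    }

    public static void Run(Stream socketStream, string exePath)
    {
        ProcessStartInfo psi = new ProcessStartInfo(exePath)
        {
            UseShellExecute = false,
            RedirectStandardInput = true,
            RedirectStandardOutput = true
        };
        using (Process child = Process.Start(psi))
        {
            // socket -> child stdin on a worker thread...
            Thread up = new Thread(() =>
                Pump(socketStream, child.StandardInput.BaseStream));
            up.IsBackground = true;
            up.Start();

            // ...while child stdout -> socket runs here.
            Pump(child.StandardOutput.BaseStream, socketStream);
            child.WaitForExit();
            // The socket is still open; control returns to the server.
        }
    }
}
```

Because only the parent ever touches the socket, the child needs no Winsock knowledge at all, which is what makes the interface general-purpose.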