From: Ronnie R on 13 Feb 2010 04:27

Greetings all

I have P2P replication set up on SQL 2008 Enterprise, 2 peers, and it has been working well for a month or two.

After a recent large 'update' statement affecting 25k large records, the data hasn't made its way to the subscriber after some 10 hours. These records can be 800k in size. (Yes, I know this probably should have been done in chunks, but I am where I am right now, so I'd like to understand what I can do to get things to take their course.) The update itself only changed a bit value in one column of each record.

In Replication Monitor, the 'Distributor to Subscriber History' is showing the following chain of events repeated roughly every 30 mins:

-----------------------
Query timeout expired
The process is running and is waiting for a response from the server
Initializing...
-----------------------
(Repeated many times, finally ending with...)
Agent 'xxxx' is retrying after an error, nn retries attempted

It seems to me that I have 2 choices:

1) The large stick approach - tear the whole thing down (rather not, but hey ho), backup, restore and learn.
2) Understand what I can do to clear this constipation of records through the tubes. Is there a 'timeout' value that needs adjusting here, or is this such a lot of data, and the tubes so bunged, that no amount of greasing will ease the path? :-)

Just for my knowledge, I'd like to be able to achieve number 2 (pardon the pun).

Your help appreciated as always folks.
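(For the record, the 'in chunks' version I should have run would look something like the sketch below. The table and column names - dbo.BigDocs, IsArchived - are made up for illustration, not my real schema.)

    -- Sketch only: dbo.BigDocs and IsArchived are placeholder names.
    DECLARE @rows int = 1;

    WHILE @rows > 0
    BEGIN
        -- Touch at most 1000 rows per statement so each replicated
        -- transaction stays small.
        UPDATE TOP (1000) dbo.BigDocs
        SET    IsArchived = 1
        WHERE  IsArchived = 0;

        SET @rows = @@ROWCOUNT;   -- stop once nothing is left to change

        -- Give the Log Reader / Distribution agents a chance to drain.
        WAITFOR DELAY '00:00:02';
    END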
From: Ronnie R on 13 Feb 2010 07:04

Hi all,

I think I may have solved my own problem: I increased -QueryTimeout to 65000 for the Log Reader and Distribution agents... and now my transactions are being delivered.

If anyone has anything useful to add then great, otherwise thanks.
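For anyone searching later, I made the change through the agent profiles in Replication Monitor; roughly the T-SQL equivalent at the Distributor looks like the sketch below. Profile IDs vary per install, the built-in system profiles can't be edited (only user-defined ones), and the exact parameter name should be checked against what sp_help_agent_parameter returns, so treat this as a rough guide rather than a recipe.

    -- Run at the Distributor. Agent types: 2 = Log Reader, 3 = Distribution.
    EXEC msdb.dbo.sp_help_agent_profile @agent_type = 2;
    EXEC msdb.dbo.sp_help_agent_profile @agent_type = 3;

    -- Check the exact parameter names in the user-defined profile you use
    -- (replace 1 with the profile_id returned above).
    EXEC msdb.dbo.sp_help_agent_parameter @profile_id = 1;

    -- Raise the query timeout on that user-defined profile, then restart
    -- the agents so the new value takes effect.
    EXEC msdb.dbo.sp_change_agent_parameter
         @profile_id      = 1,
         @parameter_name  = N'-QueryTimeout',   -- use the name exactly as listed above
         @parameter_value = N'65000';           -- seconds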
From: Ronnie R on 14 Feb 2010 06:34
Further update.

Dear all

Despite my databases being 'in sync', performance is now a critical problem on both nodes, the latency being the following:

Publisher to Distributor (P2D) -> 4 secs
Distributor to Subscriber (D2S) -> 2:57

In Activity Monitor there is a LOT of activity, and Task Manager shows heavy reads/writes. There are multiple instances of sys.sp_MSget_repl_commands;1 for the same session ID (58)... all being blocked by... 58(?).

Could I be right in assuming that sys.sp_MSget_repl_commands;1 is still taking a long time because of the size of my recent (past) update?

Again, any thoughts here appreciated as always.
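In case it helps anyone with the same symptom, the checks I'm running look roughly like the sketch below: a session that appears "blocked by itself" is often just parallel worker threads of the same request waiting on each other (e.g. CXPACKET) rather than a real block, and the pending-command count shows how much of the old update is still queued. The publisher/subscriber/publication names are placeholders, not my real setup.

    -- Is session 58 really blocked, or just its own parallel workers waiting?
    SELECT wt.session_id,
           wt.exec_context_id,
           wt.wait_type,
           wt.wait_duration_ms,
           wt.blocking_session_id
    FROM   sys.dm_os_waiting_tasks AS wt
    WHERE  wt.session_id = 58;

    -- How much of the old update is still queued for delivery?
    -- Runs against the distribution database; names below are placeholders.
    EXEC distribution.dbo.sp_replmonitorsubscriptionpendingcmds
         @publisher         = N'PEER1',
         @publisher_db      = N'MyDb',
         @publication       = N'MyP2PPublication',
         @subscriber        = N'PEER2',
         @subscriber_db     = N'MyDb',
         @subscription_type = 0;   -- 0 = push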