From: "Johnson, David" on 29 Jan 2007 19:23 Thank you Martin, I had considered it, but it is a Raid array connected by IEEE1394, and not strictly a NAS, and the 120Gb space I have on my local drive is almost entirely consumed during one of the sort and split processes. Sadly too, it isn't just the external Raid 5 array (permanent data) that has the problem, it is also the internal Raid 0 array (work space) that is exhibiting the same issue. Any and all suggestions are welcome, sometimes the most irritating problems are solved with a flash of the blindingly obvious!!! Incidentally, the latest word from the "birdies" is still to suspect the AV package, and thankfully I now seem to have a means to exclude the work directory from the scanning process for the moment. Interestingly enough, it is not the tables of tens or more of MB that seem to have problems, but often the smaller ones. The implication is that code like the following (untested) example might be more prone to problems. Proc Sort Data = PERM.TENROWTABLE Out = TENROWTABLE( Keep = KEYID COLUMN1 COLUMN3 COLUMN6 COLUMN7 Rename = ( COLUMN1 = GOODTHINGS COLUMN3 = BETTERTHINGS) ); By KEYID; Run; Data TENROWTABLE; Set TENROWTABLE; COST = Input( GOODTHINGS, 10.2) * 36 + 55; Run; Little data is retrieved and a work table is written in hundredths of a second, and is then updated in place with newly derived values; also in hundredths of a second. Meantime, the AV engine has spotted the first table and is trying to scan it, but SAS has already tried to replace the table with a new version and found a file lock in place on the original file deferring its deletion. The hypothesis makes sense, but the problem is a little too unpredictable for it to be tested. Maybe it is time for a Mickey Mouse job to do a few thousand of those and try to break the process. While the AV changes might be effective, when the problem occurs about once a week, how many weeks do you have to wait before you can be reasonably confident that the problem is fixed? Kind regards David -----Original Message----- From: SAS(r) Discussion [mailto:SAS-L(a)LISTSERV.UGA.EDU] On Behalf Of Martin Gregory Sent: Friday, 26 January 2007 8:46 AM To: SAS-L(a)LISTSERV.UGA.EDU Subject: Re: ERROR: Rename... Losing data sets from network drives David, you have probably considered and discarded this idea, but in case you haven't: if you have enough space on the workstation, can you: - copy the entire library to the workstation - assign libref on workstation - run your existing code - if all went well, copy back after deleting files on NAS cheers, Martin On 01/25/2007 12:12 AM, Johnson, David wrote: > Thank you Martin, I've suspected there may be hidden processes running > which is why I am trying to pin down the instances. Unfortunately > they are not predictable. Last night for instance 11:59:38 of > processing went through without any incident which means I don't have > a probative test of the resubmission changes I made. We'll see > whether this holiday weekend produces anything different. > > A lot of clients now have outsourced IT infrastructure, and to manage > the process the outsourcing company often places pervasive and hidden > processes on the machines to monitor software, synchronise user data > between local and remote drives and lock down settings. This prevents > using some of the excellent tools that have been recommended for > various issues, and may also mean any of a number of covert processes > is conflicting with the batch. 
> Nobody has said it yet, but this is a workstation, not a server, and
> batch processing should be done on the right platform. If you get
> conflicts from running batches on a workstation, then sometimes you just
> have to accept that or use the process to migrate the job to the
> platform for which it is suited. Unfortunately, some regulatory
> authorities might be unwilling to wait the extra time for that migration
> to be completed before delivery of their information.
>
> Kind regards
>
> David
>
> -----Original Message-----
> From: SAS(r) Discussion [mailto:SAS-L(a)LISTSERV.UGA.EDU] On Behalf Of
> Martin Gregory
> Sent: Thursday, 25 January 2007 3:49 AM
> To: SAS-L(a)LISTSERV.UGA.EDU
> Subject: Re: ERROR: Rename... Losing data sets from network drives
>
> Possibly a long shot, but I once had a similar issue with files in the
> WORK library on a client PC. I don't recall the exact message, but SAS
> was not able to access a file in WORK. It was also apparently random;
> sometimes it would be a dataset created by our application, sometimes
> one of the utility files created by SAS. It turned out that a network
> backup program had been scheduled to run every 15 minutes (!) and it
> was locking files in the WORK library while it was doing the backup.
>
> Is it possible that something similar is going on? Does this NAS keep
> snapshots? It might be doing some behind-the-scenes backing up in a
> not very intelligent way.
>
> -Martin
>
>>> Curtis Amick <curtis(a)SC.RR.COM> wrote:
>>> Got a difficult problem here. Recently my company upgraded network
>>> storage to an EMC NAS (Network Attached Storage), from a non-NAS
>>> system. Now, those of us who store SAS data sets on the network are
>>> encountering a serious problem. When updating data sets, sometimes
>>> (rarely) those data sets will be deleted. The error message looks
>>> like:
>>> "ERROR: Rename of temporary member for (data set name) failed. File
>>> may be found in a directory (your directory)" and the permanent data
>>> set is gone.
>>>
>>> This happens randomly, and (apparently) only when the data set
>>> already exists. That is, when doing this:
>>>
>>> DATA NETDRIVE.DATASET; SET DATASET2; RUN;
>>>
>>> If netdrive.dataset already exists (it's being "updated" by
>>> work.dataset2), then this error *might* occur. If netdrive.dataset
>>> does not yet exist (it's being created by work.dataset2), then the
>>> problem will not occur.
>>>
>>> From SI Tech Support: They've seen this before (see SAS Note 005781,
>>> link here:
>>> http://support.sas.com/techsup/unotes/SN/005/005781.html ), but they
>>> can't fix it because (according to the TS rep) once SAS wants to
>>> write to the NAS, they "hand it off" to the network. And that's when
>>> the problem occurs.
>>>
>>> Here's what I think: When SAS updates a data set, it creates a
>>> temporary data set to work on, keeping the original intact. When the
>>> step ends (think PROC SORT DATA=ND.dataset; RUN;, which killed me on
>>> Saturday: I had a macro that sorted 20+ data sets and lost 4!!! of
>>> them), the original data set is over-written by the temp, taking on
>>> the name of the original. And I'm thinking it's during that
>>> writing/re-naming process that the storage system is losing our data
>>> sets. (SI calls it a "timing issue".) It doesn't happen when working
>>> on local drives, and, like I mentioned earlier, hasn't happened yet
>>> when *creating* permanent data sets; only when updating.
>>>
>>> Some suggestions (from SITS): change engines (v8, v612) (doesn't
>>> work, not feasible); use -SYNCHIO (have tried it; doesn't seem to
>>> help); remove SAS data sets from on-line virus scanning on the NAS
>>> (our IS dept is leery of that one). Personally, I'd like to go back
>>> to the previous storage (non-NAS; the IS dept isn't thrilled with
>>> that one).
>>>
>>> Probably can get around this problem by programming like so:
>>>
>>> DATA ABC;
>>>     SET ND.DATASET;
>>>     /* play with data set ABC... */
>>> RUN;
>>>
>>> /* delete ND.DATASET */
>>>
>>> DATA ND.DATASET;
>>>     SET ABC;
>>> RUN;
>>>
>>> But I'd prefer something cleaner, less intrusive (especially for our
>>> less "sophisticated" users). Plus, we've got LOTS of programs that
>>> are run daily, weekly, monthly, etc. that contain steps like
>>> "proc sort data=ND.xxxx; run;" and/or "data ND.xxxx; set ND.xxxx abc;
>>> run;" and/or (well, you get the picture).
>>>
>>> To the point: has anyone else had this problem, and (if so) what did
>>> you do to solve it?
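For reference, a minimal, untested sketch of the copy-the-library-to-the-
workstation approach Martin suggested earlier in the thread. The librefs,
directory paths and the %INCLUDE of the existing program are purely
illustrative placeholders:

/* NAS points at the problem share, LOCAL at a workstation directory (paths are made up). */
Libname NAS   'N:\shared\sasdata';
Libname LOCAL 'C:\saslocal\sasdata';

/* 1. Copy the entire library down to the workstation. */
Proc Copy In = NAS Out = LOCAL MemType = Data;
Run;

/* 2. Run the existing code against the local copy, e.g. by re-pointing the */
/*    permanent libref at the local directory before including the program. */
Libname PERM 'C:\saslocal\sasdata';
/* %Include 'C:\programs\existing_program.sas'; */

/* 3. If all went well, delete the members on the NAS and copy the results back. */
Proc Datasets Lib = NAS Kill NoList;
Quit;

Proc Copy In = LOCAL Out = NAS MemType = Data;
Run;

The KILL option removes every member of the NAS library, so the copy back
should only happen once the local run has been checked.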