From: AlexG on 3 Nov 2006 02:34

We're using Data Integration Studio, and the performance when we check in our repository is terrible. While the checkin is processing, nobody can work, and it can take 15 to 30 minutes.

Any solution?

Thanks a lot
From: "Howard Schreier <hs AT dc-sug DOT org>" on 3 Nov 2006 16:06

On Thu, 2 Nov 2006 23:34:29 -0800, AlexG <algomlop(a)GMAIL.COM> wrote:

>We're using Data Integration Studio and the performance when we make a
>checkin of our repository is terrible. While checkin is processing,
>nobody can work and it can take 15/30 minutes....
>
>Any solution??
>
>Thanks a lot

An important distinction: it is not DIS itself that is slow; rather, it is the code which DIS generates.

I think you have to look at that code and at the log, and try to isolate and understand the problem(s).

When it comes time to fix things, you may not have to tweak code directly. You may have enough insight to be able to go back to the DIS interface and point, click, and drag-and-drop your way to something that will run faster. Or you may realize that your data require some kind of conditioning.
From: T.M.Goossens@gmail.com on 4 Nov 2006 06:51

Is the performance poor overall, or is it just when doing a checkin?

I have been able to improve the performance of the metadata repositories with the %OMABACKUP macro, which SAS supplies. It will clean up all metadata that was marked for deletion (but not yet physically deleted).

"Howard Schreier <hs AT dc-sug DOT org>" wrote:

> On Thu, 2 Nov 2006 23:34:29 -0800, AlexG <algomlop(a)GMAIL.COM> wrote:
>
> >We're using Data Integration Studio and the performance when we make a
> >checkin of our repository is terrible. While checkin is processing,
> >nobody can work and it can take 15/30 minutes....
> >
> >Any solution??
> >
> >Thanks a lot
>
> An important distinction: it is not DIS which is slow; rather, it is the
> code which DIS generates.
>
> I think you have to look at that code and at the log, and try to isolate and
> understand the problem(s).
>
> When it comes time to fix things, you may not have to tweak code directly.
> You may have enough insight to be able to go back to the DIS interface and
> point and click and drag and drop your way to something which will run
> faster. Or, you may realize that your data require some kind of conditioning.
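[Editor's note: for readers unfamiliar with the macro mentioned above, here is a rough sketch of how %OMABACKUP might be invoked. The host name, port, user, and paths below are placeholders, and the exact parameter list varies by SAS release, so check the SAS Metadata Server documentation for your version before running it. The REORG=YES option is what rebuilds the repository data sets and reclaims the space held by records marked for deletion.]

```sas
/* Sketch only: all connection values and paths are placeholders.    */
/* %OMABACKUP pauses the metadata server while it runs, so schedule  */
/* it for off-hours.                                                 */
options metaserver="meta.example.com"   /* metadata server host (placeholder) */
        metaport=8561                   /* default metadata server port       */
        metauser="sasadm"               /* an unrestricted user (placeholder) */
        metapass="XXXXXXXX"
        metaprotocol=bridge;

%omabackup(DestDir="/sas/backup/metadata",              /* backup target (placeholder)   */
           ServerStartPath="/sas/config/Lev1/SASMain",  /* server start dir (placeholder) */
           Reorg=YES);                                  /* compress out deleted records  */
```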