From: Mason Freed on 21 Jun 2010 13:34

"Yair Altman" <altmanyDEL(a)gmailDEL.comDEL> wrote in message <hvo77o$naj$1(a)fred.mathworks.com>...
> No magic suggestions for the simple reason that Matlab m-files should not
> have a space in their filenames... If you're doing check-in sanity checks
> as you wrote above then this would be one obvious source of reported error.
>
> Yair Altman
> http://UndocumentedMatlab.com

Thanks. I was under the impression originally that the issue occurred on PATHS with spaces in the name (e.g. "C:\Documents and settings") but you're exactly right. The only issue is with file NAMES that contain spaces. So no issue at all (and a good addition to the check!).

Thanks again for the quick responses Yair.

Mason
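For reference, a check-in sanity check along those lines might look something like the following MATLAB sketch (the function name, the directory argument, and the exact test are illustrative assumptions, not the actual check from Mason's build process):

    function badNames = findMFilesWithSpaces(srcDir)
    % Return the names of .m files in srcDir whose file names contain a
    % space, i.e. candidates to rename before check-in.
        listing  = dir(fullfile(srcDir, '*.m'));   % non-recursive listing of m-files
        names    = {listing.name};                 % cell array of file names
        badNames = names(cellfun(@(n) any(isspace(n)), names));
    end

    % Example use: findMFilesWithSpaces(pwd)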
From: Walter Roberson on 21 Jun 2010 15:01

Mason Freed wrote:
> I can't make the policy that mlint reports no warnings at all, because the
> other developers would kill me for it. (Much as I would like to enforce
> that as a constraint.)

It isn't a reasonable constraint until at least 2009b:

[A, B, C] = SomeCall(D);

If it happens that the code only needs C and not A or B, then the author should not be forced to do something like write A and B to an output file that is then deleted just so that the values are "used" so as to avoid the warning. The latest versions of Matlab allow

[~, ~, C] = SomeCall(D)

but then you are relying on a new feature and destroying backwards compatibility.

Likewise, it is not uncommon to write routines that follow a standard calling procedure and have arguments passed in to them that that _particular_ routine doesn't happen to use, but where the routines are "black boxes" from outside. For example, if you are doing table-driven calls to an appropriate optimization function, then the calls have to pass in all the parameters that _any_ of the optimization routines might need. There might, for example, be a tuning parameter that only makes sense for one kind of optimization but because of the common calling sequence has to be passed to all of them. You don't want to force the author to somehow uselessly "use" the parameter just to avoid the warning. (This too is solved with ~ in the very newest Matlabs, but again that has costs.)
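To make the before/after concrete, here is a minimal sketch of the two forms Walter contrasts (SomeCall and the variable names are his placeholders, not a real function):

    % Before R2009b: every output must be bound to a variable, so A and B sit
    % unused and draw the mlint "appears to be unused" warning even though the
    % author only needs C.
    [A, B, C] = SomeCall(D);

    % R2009b and later: the tilde placeholder discards the unwanted outputs,
    % at the cost of not running on older releases.
    [~, ~, C] = SomeCall(D);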
From: Mason Freed on 21 Jun 2010 16:42

Walter Roberson <roberson(a)hushmail.com> wrote in message <hvod1p$855$1(a)canopus.cc.umanitoba.ca>...
> Mason Freed wrote:
> It isn't a reasonable constraint until at least 2009b:

I would disagree - as far as I know, the '%#ok' construct has been around for as long as mlint has. So it can be reasonably used to explicitly suppress warnings for known cases (as your examples show). I always come down on the side of eliminating all warnings - it is just a safer way to do things, and doesn't add too much overhead to the programmer. Just my humble opinion, of course!

Thanks,
Mason
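As an illustration of the pragma Mason is referring to, a suppressed line might look like the following (reusing Walter's SomeCall placeholder; the NASGU message ID is, as far as I recall, the "value assigned might be unused" warning, and scoping the pragma to a specific ID like that is optional):

    % Bare form: suppress all mlint messages flagged on this one line.
    [A, B, C] = SomeCall(D);  %#ok

    % Scoped form: suppress only the "assigned but never used" message,
    % documenting that A and B are intentionally ignored.
    [A, B, C] = SomeCall(D);  %#ok<NASGU>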
From: Walter Roberson on 22 Jun 2010 04:20

Mason Freed wrote:
> Walter Roberson <roberson(a)hushmail.com> wrote in message
> <hvod1p$855$1(a)canopus.cc.umanitoba.ca>...
>> Mason Freed wrote:
>
>> It isn't a reasonable constraint until at least 2009b:
>
> I would disagree - as far as I know, the '%#ok' construct has been
> around for as long as mlint has. So it can be reasonably used to
> explicitly suppress warnings for known cases (as your examples show). I
> always come down on the side of eliminating all warnings - it is just a
> safer way to do things, and doesn't add too much overhead to the
> programmer. Just my humble opinion, of course!

That policy will just lead to people routinely turning off "all instances in file" of warnings without examination. The medicine will be worse than the problem.

We turn off warnings only when we completely understand their causes and know that the existing code is appropriate. Sometimes we don't completely understand the code we have written, or sometimes the warning points out future work that should be done, and turning off the warning would hide that hint from us. As long as the warning is there it is "nag-ware" prompting us to continue thinking about the issue.

I said above that sometimes we don't completely understand the code we have written, and that is true. For example, I wrote a section of code that represents projection of a 2D set of points on to a line, and when I wrote it in accordance with all the standard formulas it produced provably wrong answers. When I reversed the X and Y coordinates, the result produced the expected answers. I consulted over half a dozen different references, and the theoretical implementation is the right one but doesn't work. We have a vague notion as to why exchanging X and Y might work, but it would take more time to dig into it and prove or disprove it than the matter is worth.

There are a number of other places where we don't know _why_ our code works. We write experimental formulas and test them against data; we don't always understand the math behind the formulas. There are lots of different possible formulas, and working out the theoretical math of each of them could take decades *each*. That isn't meant to be an exaggeration either: there are some formulas in the field we are working in that have been used since the 1960's and their implications are still not fully understood. So we do a "Darwinian selection": we try formulas and only go deeper into the theory of the ones that work well for us (or of the ones that really look like they should do well but turn out not to: a good explanation for why something seemingly obvious fails can provide very useful insights into what has to be taken into account by something that works).

If this all sounds sloppy... it's because we do applied *research*, and research _commonly_ has a 90% failure rate for ideas, and only a 1% or 2% real success rate (with the other 8-9% being academically interesting but not good enough to be worth pursuing for applied research).
From: Mason Freed on 22 Jun 2010 11:38

Walter Roberson <roberson(a)hushmail.com> wrote in message <3n_Tn.2420$Yo5.136(a)newsfe01.iad>...
> That policy will just lead to people routinely turning off "all
> instances in file" of warnings without examination. The medicine will be
> worse than the problem.

That, as you point out, would be bad programming practice too. I would discourage that as much as I would discourage having warnings left over.

> We turn off warnings only when we completely understand their causes and
> know that the existing code is appropriate. Sometimes we don't
> completely understand the code we have written or sometimes the warning
> points out future work that should be done and turning off the warning
> would hide that hint from us. As long as the warning is there it is
> "nag-ware" prompting us to continue thinking about the issue.

See http://blogs.mathworks.com/desktop/2008/03/17/whats-on-my-todo-list for an alternative (perhaps better) method for keeping track of hacks and todo items.

As I said in the beginning, this is my opinion on how to do things. There are infinitely many others, and the best one likely depends on the particulars of each situation. To each, his own.

Thanks,
Mason