From: Thomas on
Walter Roberson <roberson(a)hushmail.com> wrote in message <eClZn.6702$OU6.1776(a)newsfe20.iad>...
> That is odd.
>
> Are some of the lines _quite_ long? I'm wondering now if you happen to
> be reaching some magic file size after the 104th line?

Hi Walter,

All of the lines are the same length - the software records all numerical values to the same precision, and the text values aren't changing. There are 20 columns of data being imported.

My thought is that it may have something to do with the lack of COM server support on OS X. I borrowed a colleague's Windows computer, which also runs R2007a, and the 300+ line data file imported fine there. If this is the case, is there any way around the limitation without splitting the data file?
From: Walter Roberson on
Thomas wrote:

> All of the lines are the same length - the software records all
> numerical values to the same precision, and the text values aren't
> changing. There are 20 columns of data being imported.
>
> My thought is that it may have something to do with the lack of COM
> server support on OS X. I borrowed a colleague's Windows computer,
> which also runs R2007a, and the 300+ line data file imported fine
> there. If this is the case, is there any way around the limitation
> without splitting the data file?

Okay, so the lines are all the same length -- but how long is that?

The only reasons that I know of that you might run into difficulty reading in
a text .xls file under OSX is if the file is very big or has more than the
permitted number of columns or more than the permitted number of rows. Your
number of rows and columns is within reasonable limits, but we don't know yet
about your file size.
From: Thomas on
Walter Roberson <roberson(a)hushmail.com> wrote in message <i157gs$ft7$1(a)canopus.cc.umanitoba.ca>...
>
> Okay, so the lines are all the same length -- but how long is that?
>
> The only reasons that I know of that you might run into difficulty reading in
> a text .xls file under OSX is if the file is very big or has more than the
> permitted number of columns or more than the permitted number of rows. Your
> number of rows and columns is within reasonable limits, but we don't know yet
> about your file size.

The file in question is 361 rows (including the header row) by 22 columns. The file is 143,872 bytes on my hard disk (probably a lot of that is Excel overhead, though!). If I extract one row and save it as a .xls file, it is 16,868 bytes. However, there are only 149 bytes of raw data per row when the data are exported to a plain-text (Notepad/TextEdit) file, so raw-data-wise the whole file is only about 50-60 KB.

I have a smaller Excel file, also with 22 columns of the same data but only 81 rows. It is about 49 KB (Excel file size) and imports fine.
From: Walter Roberson on
Thomas wrote:

> The file in question is 361 rows (including the header row) by 22
> columns. The file is 143,872 bytes on my hard disk (probably a lot of
> that is Excel overhead, though!). If I extract one row and save it as
> a .xls file, it is 16,868 bytes. However, there are only 149 bytes of
> raw data per row when the data are exported to a plain-text
> (Notepad/TextEdit) file, so raw-data-wise the whole file is only about
> 50-60 KB.

If I average the 143872 bytes over 361 rows, I get about 398.5 bytes per row,
which is just over twice the 149 bytes per row that you get when you export to
text. That suggests to me that the file might be stored as UTF-16 (Unicode),
two bytes per character, with the first of the two bytes being binary 0 for
the majority of characters. I think it likely that Matlab would auto-detect
that, though.

What you describe sounds like it should be easily readable using textscan() if
the file was opened as text ('rt' instead of 'r' for fopen()). On the other
hand I can't think of any reason why what you have should be a problem.
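Something along these lines might do it -- untested, and the filename, delimiter, and format string here are guesses on my part, since I don't know the actual layout of your columns:

```matlab
% Sketch only: adjust the filename, delimiter, and format string to
% match the real file (the two leading text columns are an assumption).
fid = fopen('data.txt', 'rt');   % 'rt' = open in text mode
if fid < 0
    error('Could not open the file.');
end
% Suppose 22 columns: two text columns followed by twenty numeric ones.
fmt = ['%s %s' repmat(' %f', 1, 20)];
C = textscan(fid, fmt, 'Delimiter', '\t', 'HeaderLines', 1);
fclose(fid);
% C is a 1-by-22 cell array; C{1} and C{2} hold the text columns,
% C{3} onward the numeric data.
```

The 'HeaderLines' option skips your header row, so each cell of C should come back with one entry per data row.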

Would it be possible for you to make a sample file that has the problem
available? I do not have Matlab on MacOS myself but I could try from Linux and
perhaps someone else could try from MacOS.
From: Thomas on
Hi Walter,

Apologies for the delayed reply. I'll have a look at the textscan option.

How do I attach a sample file to this thread - is it possible? I would guess that on Linux you'd have the same problem as I do, as I think this may be linked to COM server support, since it works on Windows.

Walter Roberson <roberson(a)hushmail.com> wrote in message <i15ars$krf$1(a)canopus.cc.umanitoba.ca>...
> If I average the 143872 bytes over 361 rows, I get about 398.5 bytes per row,
> which is just over twice the 149 bytes per row that you get when you export to
> text. That suggests to me that the file might be stored as UTF-16 (Unicode),
> two bytes per character, with the first of the two bytes being binary 0 for
> the majority of characters. I think it likely that Matlab would auto-detect
> that, though.
>
> What you describe sounds like it should be easily readable using textscan() if
> the file was opened as text ('rt' instead of 'r' for fopen()). On the other
> hand I can't think of any reason why what you have should be a problem.
>
> Would it be possible for you to make a sample file that has the problem
> available? I do not have Matlab on MacOS myself but I could try from Linux and
> perhaps someone else could try from MacOS.