From: Ernesto on
Hello,
Running some tests with a list of data, I came to realize that when
the data was in XML and there were more than 100 items of data, the
browser processed the data faster than if the data was in JSON.
It sort of surprised me... I expected the opposite.
My assumption is that I am using ActiveX to process the XML while the
JSON needs to be interpreted by the browser, thus the ActiveX is
running like a normal program while the JSON is on top of the VM the
browser creates for JavaScript.

Does anyone have a better explanation?

TIA
From: Thomas 'PointedEars' Lahn on
Ernesto wrote:

> Running some tests with a list of data, I came to realize that when
> the data was in XML and there were more than 100 items of data, the
> browser processed the data faster than if the data was in JSON.
> It sort of surprised me... I expected the opposite.
> My assumption is that I am using ActiveX to process the XML while the
> JSON needs to be interpreted by the browser, thus the ActiveX is
> running like a normal program while the JSON is on top of the VM the
> browser creates for JavaScript.

[x] You only know Internet Explorer.

ActiveX suggests MSHTML, which supports JScript.
But ActiveX is not the only way to parse XML, of course.

> Does anyone have a better explanation?

Talk is cheap. Show me the code.
-- Linus Torvalds


PointedEars
--
var bugRiddenCrashPronePieceOfJunk = (
navigator.userAgent.indexOf('MSIE 5') != -1
&& navigator.userAgent.indexOf('Mac') != -1
) // Plone, register_function.js:16
From: RobG on
On Apr 1, 8:53 am, Ernesto <ernesto.tej...(a)gmail.com> wrote:
> Hello,
> Running some tests with a list of data I came to realize that when the
> data was in XML and grater than 100 items of data, the browser
> processes the data faster than if the data was on JSON.
> I sort of surprised me.. .I expected the opossite.

I expect that depends very much on what you are trying to do and how
you are doing it. If you are using, say, XPath to get certain XML
elements, the equivalent object access will likely require walking
down an object structure to find equivalent elements.

The XML document likely looks very different to the JSON structure,
e.g. consider the following XML:

<foo name="root" version="1.2">
  <bar name="bar0"/>
  <bar name="bar1"/>
  <fred name="fred0">
    <bar name="bar2"/>
    <bar name="bar3"/>
  </fred>
</foo>

An equivalent JSON object might be:

var dataObj = {
  foo0: {
    nodeType: 'foo',
    name: 'root',
    version: '1.2',
    childNodes: {
      bar0: {
        nodeType: 'bar',
        name: 'bar0'
      },
      bar1: {
        nodeType: 'bar',
        name: 'bar1'
      },
      fred0: {
        nodeType: 'fred',
        name: 'fred0',
        childNodes: {
          bar2: {
            nodeType: 'bar',
            name: 'bar2'
          },
          bar3: {
            nodeType: 'bar',
            name: 'bar3'
          }
        }
      }
    }
  }
};

An equivalent to "getElementsByTagName" is:

// Walk the object structure, collecting every member whose nodeType
// matches propertyName (analogous to getElementsByTagName).
function getNodesByType(obj, propertyName, store) {
  var t;
  store = store || [];

  for (var p in obj) {
    t = obj[p];

    if (t && t.nodeType == propertyName) {
      store.push(t);
    }

    if (t && typeof t == 'object') {
      getNodesByType(t, propertyName, store);
    }
  }
  return store;
}

To get all bar elements:

var allBars = getNodesByType(dataObj, 'bar');

Presumably if you are using XML you can use getElementsByTagName or
XPath; I would expect them to be faster than "object walking" for
large documents (a minimal XPath sketch follows below). But if I were
using a large, complex object I might also create an index of
frequently accessed elements so I don't need to walk the structure
every time.
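
For comparison, here is a minimal XPath sketch. It assumes the XML was
loaded into xmlDoc (e.g. from an XMLHttpRequest's responseXML) and that
the browser supports document.evaluate; IE's MSXML objects spell this
differently.

// Collect all bar elements via XPath (xmlDoc is assumed to be an
// already parsed XML document).
function getBarsByXPath(xmlDoc) {
  var result = xmlDoc.evaluate('//bar', xmlDoc, null,
      XPathResult.ORDERED_NODE_SNAPSHOT_TYPE, null);
  var bars = [];
  for (var i = 0; i < result.snapshotLength; i++) {
    bars.push(result.snapshotItem(i));
  }
  return bars;
}

// Or, without XPath:
// var allBars = xmlDoc.getElementsByTagName('bar');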

If I went down that indexing route I might also create object mutation
methods to keep those indexes current ("live") so that adding, moving
or deleting elements also maintains the relevant indexes; a sketch of
such an index follows.
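
Here is a minimal sketch of such an index; buildTypeIndex is a made-up
helper that reuses the dataObj structure above.

// Build a one-off index of nodes keyed by nodeType so that repeated
// lookups need no further walking of the structure.
function buildTypeIndex(obj, index) {
  index = index || {};
  for (var p in obj) {
    var t = obj[p];
    if (t && typeof t == 'object') {
      if (t.nodeType) {
        (index[t.nodeType] = index[t.nodeType] || []).push(t);
      }
      buildTypeIndex(t, index);
    }
  }
  return index;
}

var typeIndex = buildTypeIndex(dataObj);
var allBars = typeIndex['bar'] || [];  // no walking after the first pass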


> My assumption is that I am using ActiveX to process the XML while the
> JSON needs to be interpreted by the browser, thus the ActiveX is
> running like a normal program while the JSON is on top of the VM the
> browser creates for JavaScript.

Since you have not shown any code, any such explanation is just
opinion[1]. It is rare that a useful conclusion can be drawn from an
unsubstantiated assumption.

Not all browsers use ActiveX to process XML, so it is not
necessarily a factor.

JSON is a data transport mechanism, as is XML. It is not inherently
slower; it may be for a particular case if it is more verbose
(requires more bits to be transferred) than the equivalent XML.

Methods to deal with JSON are native to the browser and likely also to
the platform and are therefore (from a code optimisation and
prioritisation perspective) roughly equivalent to the code that
processes XML. I don't see that one is necessarily any slower or
faster than the other.

Built-in methods for processing XML may not be available for JSON
where the JSON is structured as if it were XML (as above). Script-level
functions such as getNodesByType must then be used for the JSON, and
they may well be slower than the browser's built-in methods. But it may
also be possible to modify the structure of the JSON to take better
advantage of the built-in methods that are available, rather than
trying to use JSON like XML. It might also turn out that structuring
the XML to match an optimised JSON structure reverses the perceived
performance.
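
For instance (a sketch only; the grouping is invented, not something
the OP described), the same data could be delivered already grouped by
type, so that lookups become plain property access:

// Hypothetical restructuring: group the records by type up front so
// that no recursive walking (and no XPath) is needed on the client.
var dataByType = {
  foo: [{ name: 'root', version: '1.2' }],
  bar: [{ name: 'bar0' }, { name: 'bar1' },
        { name: 'bar2', parent: 'fred0' },
        { name: 'bar3', parent: 'fred0' }],
  fred: [{ name: 'fred0' }]
};

var allBars = dataByType.bar;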

But that is all conjecture.

1. "Opinions are like armpits - everyone has at least one and there is
nothing special about them." -- Dr Karl Kruszelnicki


--
Rob
From: Hans-Georg Michna on
On Wed, 31 Mar 2010 15:53:13 -0700 (PDT), Ernesto wrote:

>Running some tests with a list of data, I came to realize that when
>the data was in XML and there were more than 100 items of data, the
>browser processed the data faster than if the data was in JSON.
>It sort of surprised me... I expected the opposite.
>My assumption is that I am using ActiveX to process the XML while the
>JSON needs to be interpreted by the browser, thus the ActiveX is
>running like a normal program while the JSON is on top of the VM the
>browser creates for JavaScript.
>
>Does anyone have a better explanation?

No. It has been mentioned already. In my tests Internet Explorer
8, in spite of having the JSON object built in, runs it about as
fast as if it were interpreted JavaScript. I suspect that, under
the hood, it actually is.

Firefox's JSON is many times faster, possibly faster than the
one in Google's Chrome. I haven't tested the others.

I had done an extreme test in which multi-megabyte files were
loaded by way of JSON. In IE it was unbearably slow, while in
Firefox it took only a few seconds.
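
For anyone who wants to reproduce that kind of comparison, here is a
minimal timing sketch (jsonText is assumed to hold the large JSON
string; the exact numbers will of course vary by browser and machine):

// Time the built-in parser against eval() on the same JSON string.
function timeParse(label, parse, jsonText) {
  var start = new Date().getTime();
  var data = parse(jsonText);
  window.alert(label + ': ' + (new Date().getTime() - start) + ' ms');
  return data;
}

timeParse('JSON.parse', function (s) { return JSON.parse(s); }, jsonText);
timeParse('eval', function (s) { return eval('(' + s + ')'); }, jsonText);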

Hans-Georg
From: Hans-Georg Michna on
On Thu, 01 Apr 2010 09:31:14 +0200, Hans-Georg Michna wrote:

>I had done an extreme test, where multi-Megabyte files were
>loaded by way of JSON. In IE it was unbearably slow, while in
>Firefox this took only a few seconds.

The test is still online here: http://winhlp.com/telly/

The initial loading of the language dictionary via
JSON.parse(...) takes an inordinately long time in IE and is very
quick in Firefox. The dictionary sizes (before gzip compression) are:

English: 12.3 MB
Deutsch: 5.7 MB

Try for yourself, if you like.

Hans-Georg