From: Ron Mayer on
+1 for such a feature, simply to avoid the need to
write an hstore parser (which wasn't too bad
to write, but it felt unnecessary). Doesn't
matter to me if it's hstore-to-json or hstore-to-xml
or hstore-to-yaml. Just something that parsers are
readily available for.
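
The hand-rolled parser mentioned above could look roughly like this
minimal sketch, which handles only the common quoted form of hstore's
text representation ("k"=>"v", with NULL values and backslash escapes);
a production parser would need to cover unquoted tokens and more escape
edge cases:

```python
import json
import re

# Matches one "key"=>"value" or "key"=>NULL pair in hstore's text format.
_PAIR = re.compile(r'"((?:[^"\\]|\\.)*)"\s*=>\s*(NULL|"(?:[^"\\]|\\.)*")')

def _unescape(s):
    # Undo hstore's backslash escaping of quotes and backslashes.
    return s.replace('\\"', '"').replace('\\\\', '\\')

def hstore_to_json(text):
    out = {}
    for key, val in _PAIR.findall(text):
        out[_unescape(key)] = None if val == 'NULL' else _unescape(val[1:-1])
    return json.dumps(out)
```

Having the server emit JSON directly would make this sort of client-side
reverse engineering of the hstore syntax unnecessary.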

Heck, I wouldn't mind if hstore moved to using any one
of those for its external representation by default.

Tom Lane wrote:
> a ton of special syntax for xml support, ...a json type...
> [ I can already hear somebody insisting on a yaml type :-( ]

If these were CPAN-like installable modules, I'd hope
there eventually would be. Don't most languages and
platforms have both YAML and JSON libraries? YAML's
user-defined types are an example of where this might
eventually be useful.

Tom Lane wrote:
> Well, actually, now that you mention it: how much of a json type would
> be duplicative of the xml stuff? Would it be sufficient to provide
> json <-> xml converters and let the latter type do all the heavy lifting?

I imagine eventually a JSON type could validate fields using
JSON Schema. But that's drifting away from hstore.

> (If so, this patch ought to be hstore_to_xml instead.)

Doesn't matter to me so long as it's any format with readily
available parsers.
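
For the flat key/value case that hstore covers, the json <-> xml round
trip Tom suggests is straightforward; here is a rough sketch using
Python's standard library, where the element and attribute names
('hstore', 'entry', 'key') are purely illustrative, not any proposed
PostgreSQL format:

```python
import json
import xml.etree.ElementTree as ET

def json_to_xml(json_text):
    # Wrap each key/value pair of a flat JSON object in an <entry> element.
    root = ET.Element('hstore')
    for key, val in json.loads(json_text).items():
        entry = ET.SubElement(root, 'entry', key=key)
        entry.text = val
    return ET.tostring(root, encoding='unicode')

def xml_to_json(xml_text):
    # Invert the mapping: one dict entry per <entry> element.
    root = ET.fromstring(xml_text)
    return json.dumps({e.get('key'): e.text for e in root})
```

The conversion is lossless in this restricted shape; the harder question,
raised later in the thread, is what to do with arbitrary XML.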




--
Sent via pgsql-hackers mailing list (pgsql-hackers(a)postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

From: Peter Eisentraut on
On Fri, 2009-12-18 at 11:51 -0500, Robert Haas wrote:
> On Fri, Dec 18, 2009 at 11:32 AM, David E. Wheeler <david(a)kineticode.com> wrote:
> > On Dec 18, 2009, at 4:49 AM, Peter Eisentraut wrote:
> >
> >> Should we create a json type before adding all kinds of json formatted
> >> data? Or are we content with json as text?
> >
> > json_data_type++
>
> What would that do for us?

At the moment it would be more of a placeholder, because if we later
decide to add full-blown JSON-constructing and -destructing
functionality, it would be difficult to change the signatures of all the
existing functionality.




From: Robert Haas on
On Fri, Dec 18, 2009 at 4:39 PM, Peter Eisentraut <peter_e(a)gmx.net> wrote:
> On Fri, 2009-12-18 at 11:51 -0500, Robert Haas wrote:
>> On Fri, Dec 18, 2009 at 11:32 AM, David E. Wheeler <david(a)kineticode.com> wrote:
>> > On Dec 18, 2009, at 4:49 AM, Peter Eisentraut wrote:
>> >
>> >> Should we create a json type before adding all kinds of json formatted
>> >> data?  Or are we content with json as text?
>> >
>> > json_data_type++
>>
>> What would that do for us?
>
> At the moment it would be more of a placeholder, because if we later
> decide to add full-blown JSON-constructing and -destructing
> functionality, it would be difficult to change the signatures of all the
> existing functionality.

Good thought.

....Robert


From: Andrew Dunstan on


Robert Haas wrote:
> On Fri, Dec 18, 2009 at 3:00 PM, Tom Lane <tgl(a)sss.pgh.pa.us> wrote:
>
>> Alvaro Herrera <alvherre(a)commandprompt.com> writes:
>>
>>> Tom Lane wrote:
>>>
>>>> Well, actually, now that you mention it: how much of a json type would
>>>> be duplicative of the xml stuff? Would it be sufficient to provide
>>>> json <-> xml converters and let the latter type do all the heavy lifting?
>>>> (If so, this patch ought to be hstore_to_xml instead.)
>>>>
>>> But then there's the matter of overhead: how much would be wasted by
>>> transforming to XML, and then parsing the XML back to transform to JSON?
>>>
>> Well, that would presumably happen only when sending data to or from the
>> client. It's not obvious that it would be much more expensive than the
>> syntax checking you'd have to do anyway.
>>
>> If there's some reason to think that operating on json data would be
>> much less expensive than operating on xml, there might be a case for
>> having two distinct sets of operations internally, but I haven't heard
>> anybody make that argument.
>>
>
> One problem is that there is not a single well-defined mapping between
> these types. I would say generally that XML and YAML both have more
> types of constructs than JSON. The obvious ways of translating an
> arbitrary XML document to JSON are likely not to be what people want
> in particular cases.
>

Right. XML semantics are richer, as I pointed out when we were
discussing the various EXPLAIN formats.
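
The mapping ambiguity is easy to demonstrate: the same small XML fragment
admits at least two equally "obvious" JSON renderings, and neither is
canonical (the `@`/`#text` convention below is just one style some
converters use, not a standard):

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring('<item id="7">widget</item>')

# Rendering 1: attributes folded in flat beside the text content.
as_flat = {'id': doc.get('id'), 'text': doc.text}

# Rendering 2: attributes distinguished from text under the element name.
as_nested = {'item': {'@id': doc.get('id'), '#text': doc.text}}
```

Whichever rendering a built-in converter picked would be wrong for some
applications, which is the core of Robert's objection.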


> I think the performance argument is compelling, too, but we can't even
> try benchmarking it unless we can define what we're even talking
> about.
>
>
>

Yes, there is indeed reason to think that JSON processing, especially
parsing, will be more efficient, and I suspect we can provide ways of
accessing the data that are lots faster than XPath. JSON is designed to
be lightweight; XML is not.

Mind you, the XML processing is not too bad - I have been working much
of the last few months on a large custom billing system that produces
XML output used to create paper/online invoices, and the XML
construction is one of the fastest parts of the whole system.

cheers

andrew


From: Robert Haas on
On Fri, Dec 18, 2009 at 7:05 PM, Andrew Dunstan <andrew(a)dunslane.net> wrote:
>> One problem is that there is not a single well-defined mapping between
>> these types.  I would say generally that XML and YAML both have more
>> types of constructs than JSON.  The obvious ways of translating an
>> arbitrary XML document to JSON are likely not to be what people want
>> in particular cases.
> Right. XML semantics are richer, as I pointed out when we were discussing
> the various EXPLAIN formats.

You say "richer"; I say "harder to map onto data structures". But we
can agree to disagree on this one... I'm sure there are good tools out
there. :-)

>> I think the performance argument is compelling, too, but we can't even
>> try benchmarking it unless we can define what we're even talking
>> about.
>
> Yes, there is indeed reason to think that JSON processing, especially
> parsing, will be more efficient, and I suspect we can provide ways of
> accessing the data that are lots faster than XPath. JSON is designed to be
> lightweight, XML is not.
>
> Mind you, the XML processing is not too bad - I have been working much of
> the last few months on a large custom billing system which produces XML
> output to create paper/online invoices from, and the XML construction is one
> of the fastest parts of the whole system.

That doesn't surprise me very much. If there's a problem with
operations on XML, I think it tends to be more on the parsing side
than the generation side. But even there I agree it's not terrible.
The main reason I like JSON is for the simpler semantics - there's
essentially one way to serialize and deserialize a data structure, and
everyone agrees on what it is, so the error cases are all handled by
the parser itself rather than left to the application programmer.
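
That property can be seen directly with any off-the-shelf JSON library;
this sketch uses Python's, and the sample structure is just an
illustrative stand-in for machine-readable output like EXPLAIN's:

```python
import json

# Basic data structures round-trip through serialization exactly.
value = {"explain": [{"node": "Seq Scan", "cost": 0.0}]}
assert json.loads(json.dumps(value)) == value

# Malformed input fails inside the parser, not in application code.
try:
    json.loads('{"unterminated": ')
except ValueError as err:
    print("parser rejected it:", type(err).__name__)
```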

....Robert
