  1. Mar 15, 2016
  2. Feb 03, 2016
    • Fix IsValidJsonNumber() to notice trailing non-alphanumeric garbage. · e6ecc93a
      Tom Lane authored
      Commit e09996ff was one brick shy of a load: it didn't insist
      that the detected JSON number be the whole of the supplied string.
      This allowed inputs such as "2016-01-01" to be misdetected as valid JSON
      numbers.  Per bug #13906 from Dmitry Ryabov.
      
      In passing, be more wary of zero-length input (I'm not sure this can
      happen given current callers, but better safe than sorry), and do some
      minor cosmetic cleanup.
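      The fix amounts to anchoring the number match at both ends of the input.
      A minimal Python sketch of the idea (the real check lives in PostgreSQL's
      C json lexer; the regex and function name here are illustrative only):

```python
import re

# RFC 7159 number grammar; fullmatch() forces the match to consume the
# WHOLE string, so "2016-01-01" can no longer pass as the number 2016.
JSON_NUMBER = re.compile(r'-?(0|[1-9]\d*)(\.\d+)?([eE][+-]?\d+)?')

def is_valid_json_number(s: str) -> bool:
    if not s:                     # be wary of zero-length input
        return False
    return JSON_NUMBER.fullmatch(s) is not None

print(is_valid_json_number("-1.5e3"))      # True
print(is_valid_json_number("2016-01-01"))  # False: trailing garbage
```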
  3. Jan 02, 2016
  4. Dec 23, 2015
  5. Oct 20, 2015
    • Fix incorrect translation of minus-infinity datetimes for json/jsonb. · d4355425
      Tom Lane authored
      Commit bda76c1c caused both plus and
      minus infinity to be rendered as "infinity", which is not only wrong
      but inconsistent with the pre-9.4 behavior of to_json().  Fix that by
      duplicating the coding in date_out/timestamp_out/timestamptz_out more
      closely.  Per bug #13687 from Stepan Perlov.  Back-patch to 9.4, like
      the previous commit.
      
      In passing, also re-pgindent json.c, since it had gotten a bit messed up by
      recent patches (and I was already annoyed by indentation-related problems
      in back-patching this fix ...)
  6. Oct 05, 2015
    • Prevent stack overflow in json-related functions. · 08fa47c4
      Noah Misch authored
      Sufficiently-deep recursion heretofore elicited a SIGSEGV.  If an
      application constructs PostgreSQL json or jsonb values from arbitrary
      user input, application users could have exploited this to terminate all
      active database connections.  That applies to 9.3, where the json parser
      adopted recursive descent, and later versions.  Only row_to_json() and
      array_to_json() were at risk in 9.2, both in a non-security capacity.
      Back-patch to 9.2, where the json type was introduced.
      
      Oskari Saarenmaa, reviewed by Michael Paquier.
      
      Security: CVE-2015-5289
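      The class of bug can be illustrated with a toy recursive-descent parser
      that counts its own depth and raises a clean error instead of running off
      the stack; everything below (the limit, the names) is an illustrative
      sketch, not PostgreSQL code:

```python
# Toy nested-array parser with an explicit recursion-depth guard.
MAX_DEPTH = 16   # illustrative limit; PostgreSQL checks actual stack depth

def parse_array(s: str, pos: int = 0, depth: int = 0) -> int:
    """Parse '[' ... ']' starting at pos; return the position after ']'."""
    if depth > MAX_DEPTH:
        raise ValueError("json is nested too deeply")  # error, not SIGSEGV
    assert s[pos] == '['
    pos += 1
    if s[pos] == '[':
        pos = parse_array(s, pos, depth + 1)
    assert s[pos] == ']'
    return pos + 1

parse_array('[[[]]]')                  # fine: depth 3
try:
    parse_array('[' * 100 + ']' * 100)
except ValueError as err:
    print(err)                         # caught; the session survives
```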
  7. Sep 18, 2015
    • Cache argument type information in json(b) aggregate functions. · c00c3249
      Andrew Dunstan authored
      These functions had been looking up type info for every row they
      process. Instead, we now look it up only on the first call and
      stash the information in the aggregate state object.
      
      Affects json_agg, json_object_agg, jsonb_agg and jsonb_object_agg.
      
      There is plenty more work to do in making these more efficient,
      especially the jsonb functions, but this is a virtually cost free
      improvement that can be done right away.
      
      Backpatch to 9.5 where the jsonb variants were introduced.
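      The optimization pattern is simple: do the lookup on the first transition
      call and keep the result in the aggregate state. A hedged Python sketch
      (lookup_type_info stands in for the real catalog lookup; a counter makes
      the saving visible):

```python
LOOKUPS = 0   # counts simulated catalog lookups

def lookup_type_info(value):
    """Stand-in for the per-type lookup the commit avoids repeating."""
    global LOOKUPS
    LOOKUPS += 1
    return type(value).__name__

def json_agg_transfn(state, value):
    if state is None:                 # first row: look up and stash
        state = {"typeinfo": lookup_type_info(value), "parts": []}
    state["parts"].append(str(value))
    return state

state = None
for row in (1, 2, 3):
    state = json_agg_transfn(state, row)
print(LOOKUPS)   # 1 -- one lookup for three rows
```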
  8. Jul 18, 2015
    • Support JSON negative array subscripts everywhere · e02d44b8
      Andrew Dunstan authored
      Previously, there was an inconsistency across json/jsonb operators that
      operate on datums containing JSON arrays -- only some operators
      supported negative array count-from-the-end subscripting.  Specifically,
      only a new-to-9.5 jsonb deletion operator had support (the new "jsonb -
      integer" operator).  This inconsistency seemed likely to be
      counter-intuitive to users.  To fix, allow all places where the user can
      supply an integer subscript to accept a negative subscript value,
      including path-orientated operators and functions, as well as other
      extraction operators.  This will need to be called out as an
      incompatibility in the 9.5 release notes, since it's possible that users
      are relying on certain established extraction operators changed here
      yielding NULL in the event of a negative subscript.
      
      For the json type, this requires adding a way of cheaply getting the
      total JSON array element count ahead of time when parsing arrays with a
      negative subscript involved, necessitating an ad-hoc lex and parse.
      This is followed by a "conversion" from a negative subscript to its
      equivalent positive-wise value using the count.  From there on, it's as
      if a positive-wise value was originally provided.
      
      Note that there is still a minor inconsistency here across jsonb
      deletion operators.  Unlike the aforementioned new "-" deletion operator
      that accepts an integer on its right hand side, the new "#-" path
      orientated deletion variant does not throw an error when it appears like
      an array subscript (input that could be recognized as an integer
      literal) is being used on an object, which is wrong-headed.  The reason
      for not being stricter is that it could be the case that an object pair
      happens to have a key value that looks like an integer; in general,
      these two possibilities are impossible to differentiate with rhs path
      text[] argument elements.  However, we still don't allow the "#-"
      path-orientated deletion operator to perform array-style subscripting.
      Rather, we just return the original left operand value in the event of a
      negative subscript (which seems analogous to how the established
      "jsonb/json #> text[]" path-orientated operator may yield NULL in the
      event of an invalid subscript).
      
      In passing, make SetArrayPath() stricter about not accepting cases where
      there are trailing non-numeric garbage bytes rather than a clean NUL
      byte.  This means, for example, that strings like "10e10" are now not
      accepted as an array subscript of 10 by some new-to-9.5 path-orientated
      jsonb operators (e.g. the new #- operator).  Finally, remove dead code
      for jsonb subscript deletion; arguably, this should have been done in
      commit b81c7b40.
      
      Peter Geoghegan and Andrew Dunstan
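      The subscript conversion the commit describes, once the element count is
      known, can be sketched in a few lines of Python (names are illustrative;
      out-of-range access yields NULL/None rather than an error, matching the
      established extraction operators):

```python
def resolve_subscript(idx: int, count: int):
    """Map a possibly-negative subscript to a positive one, or None."""
    if idx < 0:
        idx = count + idx        # e.g. -1 becomes the last element
    if idx < 0 or idx >= count:
        return None              # out of range: NULL, not an error
    return idx

elems = ['"a"', '"b"', '"c"']    # pretend the ad-hoc parse counted 3
print(resolve_subscript(-1, len(elems)))  # 2, the last element
print(resolve_subscript(-5, len(elems)))  # None
```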
  9. May 24, 2015
  10. Mar 31, 2015
  11. Feb 26, 2015
    • Render infinite date/timestamps as 'infinity' for json/jsonb · bda76c1c
      Andrew Dunstan authored
      Commit ab14a73a raised an error in these cases, and the behaviour was
      later copied to jsonb. That matched the XML code we had adopted, since
      the XSD types don't accept infinite values.
      However, json dates and timestamps are just strings as far as json is
      concerned, so there is no reason not to render these values as
      'infinity'.
      
      The json portion of this is backpatched to 9.4 where the behaviour was
      introduced. The jsonb portion only affects the development branch.
      
      Per gripe on pgsql-general.
  12. Jan 30, 2015
    • Fix jsonb Unicode escape processing, and in consequence disallow \u0000. · 451d2808
      Tom Lane authored
      We've been trying to support \u0000 in JSON values since commit
      78ed8e03, and have introduced increasingly worse hacks to try to
      make it work, such as commit 0ad1a816.  However, it fundamentally
      can't work in the way envisioned, because the stored representation looks
      the same as for \\u0000 which is not the same thing at all.  It's also
      entirely bogus to output \u0000 when de-escaped output is called for.
      
      The right way to do this would be to store an actual 0x00 byte, and then
      throw error only if asked to produce de-escaped textual output.  However,
      getting to that point seems likely to take considerable work and may well
      never be practical in the 9.4.x series.
      
      To preserve our options for better behavior while getting rid of the nasty
      side-effects of 0ad1a816, revert that commit in toto and instead
      throw error if \u0000 is used in a context where it needs to be de-escaped.
      (These are the same contexts where non-ASCII Unicode escapes throw error
      if the database encoding isn't UTF8, so this behavior is by no means
      without precedent.)
      
      In passing, make both the \u0000 case and the non-ASCII Unicode case report
      ERRCODE_UNTRANSLATABLE_CHARACTER / "unsupported Unicode escape sequence"
      rather than claiming there's something wrong with the input syntax.
      
      Back-patch to 9.4, where we have to do something because 0ad1a816
      broke things for many cases having nothing to do with \u0000.  9.3 also has
      bogus behavior, but only for that specific escape value, so given the lack
      of field complaints it seems better to leave 9.3 alone.
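      The representation clash is easy to demonstrate with any JSON
      implementation, here Python's json module: if \u0000 were kept verbatim
      in the stored text, it would be byte-identical to the de-escaped form of
      \\u0000, which denotes a completely different value:

```python
import json

literal_escape = json.loads(r'"\\u0000"')  # escaped backslash: 6 characters
real_nul = json.loads(r'"\u0000"')         # Unicode escape: one NUL character

print(repr(literal_escape))                # '\\u0000'
print(len(literal_escape), len(real_nul))  # 6 1
# Storing \u0000 as those same six characters would make the two inputs
# indistinguishable on output -- hence the new error instead.
```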
  13. Jan 06, 2015
  14. Dec 12, 2014
    • Add several generator functions for jsonb that exist for json. · 7e354ab9
      Andrew Dunstan authored
      The functions are:
          to_jsonb()
          jsonb_object()
          jsonb_build_object()
          jsonb_build_array()
          jsonb_agg()
          jsonb_object_agg()
      
      Along the way, better logic is implemented in
      json_categorize_type() to match that in the newly implemented
      jsonb_categorize_type().
      
      Andrew Dunstan, reviewed by Pavel Stehule and Alvaro Herrera.
  15. Dec 02, 2014
    • Fix JSON aggregates to work properly when final function is re-executed. · 75ef4352
      Tom Lane authored
      Davide S. reported that json_agg() sometimes produced multiple trailing
      right brackets.  This turns out to be because json_agg_finalfn() attaches
      the final right bracket, and was doing so by modifying the aggregate state
      in-place.  That's verboten, though unfortunately it seems there's no way
      for nodeAgg.c to check for such mistakes.
      
      Fix that back to 9.3 where the broken code was introduced.  In 9.4 and
      HEAD, likewise fix json_object_agg(), which had copied the erroneous logic.
      Make some cosmetic cleanups as well.
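      The rule being violated, in Python terms: a final function may be called
      more than once on the same state, so it must derive the result from the
      state without mutating it. A minimal sketch of the bug and the fix
      (names are illustrative):

```python
def buggy_finalfn(state):
    state.append(']')             # modifies the aggregate state in place
    return ''.join(state)

def fixed_finalfn(state):
    return ''.join(state) + ']'   # leaves the state untouched

s = ['[1, 2']
fixed_finalfn(s)
print(fixed_finalfn(s))           # [1, 2]   -- stable on re-execution

s = ['[1, 2']
buggy_finalfn(s)
print(buggy_finalfn(s))           # [1, 2]]  -- the reported extra bracket
```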
  16. Dec 01, 2014
    • Fix hstore_to_json_loose's detection of valid JSON number values. · e09996ff
      Andrew Dunstan authored
      We expose a function IsValidJsonNumber that internally calls the lexer
      for json numbers. That allows us to use the same test everywhere,
      instead of inventing a broken test for hstore conversions. The new
      function is also used in datum_to_json, replacing the code that is now
      moved to the new function.
      
      Backpatch to 9.3 where hstore_to_json_loose was introduced.
  17. Sep 29, 2014
    • Revert 95d737ff to add 'ignore_nulls' · c8a026e4
      Stephen Frost authored
      Per discussion, revert the commit which added 'ignore_nulls' to
      row_to_json.  This capability would be better added as an independent
      function rather than being bolted on to row_to_json.  Additionally,
      the implementation didn't address complex JSON objects, and so was
      incomplete anyway.
      
      Pointed out by Tom and discussed with Andrew and Robert.
  18. Sep 25, 2014
  19. Sep 12, 2014
    • Add 'ignore_nulls' option to row_to_json · 95d737ff
      Stephen Frost authored
      Provide an option to skip NULL values in a row when generating a JSON
      object from that row with row_to_json.  This can reduce the size of the
      JSON object in cases where columns are NULL without really reducing the
      information in the JSON object.
      
      This also makes row_to_json into a single function with default values,
      rather than having multiple functions.  In passing, change array_to_json
      to also be a single function with default values (we don't add an
      'ignore_nulls' option yet; it's not clear that there is a sensible
      use-case there, and it hasn't been asked for in any case).
      
      Pavel Stehule
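      In Python terms the option simply filters NULL columns before
      serializing; a hedged sketch that mirrors the patch's behaviour only
      conceptually (the SQL-level option was later reverted in c8a026e4):

```python
import json

def row_to_json(row: dict, ignore_nulls: bool = False) -> str:
    """Illustrative stand-in for the SQL function of the same name."""
    if ignore_nulls:
        row = {k: v for k, v in row.items() if v is not None}
    return json.dumps(row)

row = {"id": 1, "nickname": None}
print(row_to_json(row))                      # {"id": 1, "nickname": null}
print(row_to_json(row, ignore_nulls=True))   # {"id": 1}
```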
  20. Aug 18, 2014
  21. Aug 09, 2014
    • Clean up handling of unknown-type inputs in json_build_object and friends. · 92f57c9a
      Tom Lane authored
      There's actually no need for any special case for unknown-type literals,
      since we only need to push the value through its output function and
      unknownout() works fine.  The code that was here was completely bizarre
      anyway, and would fail outright in cases that should work, not to mention
      suffering from some copy-and-paste bugs.
    • Further cleanup of JSON-specific error messages. · 495cadda
      Tom Lane authored
      Fix an obvious typo in json_build_object()'s complaint about invalid
      number of arguments, and make the errhint a bit more sensible too.
      
      Per discussion about how to word the improved hint, change the few places
      in the documentation that refer to JSON object field names as "names" to
      say "keys" instead, since that's what we've said in the vast majority of
      places in the docs.  Arguably "name" is more correct, since that's the
      terminology used in RFC 7159; but we're stuck with "key" in view of the
      naming of json_object_keys() so let's at least be self-consistent.
      
      I adjusted a few code comments to match this as well, and failed to
      resist the temptation to clean up some odd whitespace choices in the
      same area, as well as a useless duplicate PG_ARGISNULL() check.  There's
      still quite a bit of code that uses the phrase "field name" in non-user-
      visible ways, so I left those usages alone.
  22. Aug 05, 2014
  23. Jul 22, 2014
  24. Jul 15, 2014
  25. Jul 06, 2014
  26. Jun 12, 2014
  27. Jun 04, 2014
    • Use EncodeDateTime instead of to_char to render JSON timestamps. · ab14a73a
      Andrew Dunstan authored
      Per gripe from Peter Eisentraut and Tom Lane.
      
      The output is slightly different, but still ISO 8601 compliant: to_char
      doesn't output the minutes when the time zone offset is an integer
      number of hours, while EncodeDateTime outputs ":00".
      
      The code is slightly adapted from code in xml.c.
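      Both offset spellings are valid ISO 8601; the commit only changes which
      one PostgreSQL emits. As an aside, Python's datetime happens to produce
      the same ":00"-minutes style as EncodeDateTime:

```python
from datetime import datetime, timezone, timedelta

# A timestamp with a whole-hour offset: the minutes are still spelled out.
ts = datetime(2014, 6, 4, 12, 0, tzinfo=timezone(timedelta(hours=5)))
print(ts.isoformat())   # 2014-06-04T12:00:00+05:00 -- not a bare "+05"
```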
  28. Jun 03, 2014
    • Do not escape a unicode sequence when escaping JSON text. · 0ad1a816
      Andrew Dunstan authored
      Previously, any backslash in text being escaped for JSON was doubled so
      that the result was still valid JSON. However, this led to some perverse
      results in the case of Unicode sequences. These are now detected and the
      initial backslash is no longer escaped. All other backslashes are still
      escaped. No validity check is performed; all that is looked for is
      \uXXXX where X is a hexadecimal digit.
      
      This is a change from the 9.2 and 9.3 behaviour as noted in the Release
      notes.
      
      Per complaint from Teodor Sigaev.
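      The detection rule, as described, is purely lexical: a backslash
      followed by "u" and four hex digits is left alone; every other backslash
      is still doubled. An illustrative Python stand-in for the C scanner:

```python
import re

def escape_json_text(s: str) -> str:
    # Double every backslash EXCEPT one introducing \uXXXX (no validity
    # check beyond "u plus four hex digits", per the commit message).
    return re.sub(r'\\(?!u[0-9a-fA-F]{4})', r'\\\\', s)

print(escape_json_text(r'\u00e4'))      # \u00e4      (left alone)
print(escape_json_text(r'back\slash'))  # back\\slash (still doubled)
```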
    • Output timestamps in ISO 8601 format when rendering JSON. · f30015b6
      Andrew Dunstan authored
      Many JSON processors require timestamp strings in ISO 8601 format in
      order to convert the strings. When converting a timestamp, with or
      without timezone, to a JSON datum we therefore now use such a format
      rather than the type's default text output, in functions such as
      to_json().
      
      This is a change in behaviour from 9.2 and 9.3, as noted in the release
      notes.
  29. May 09, 2014
    • Get rid of bogus dependency on typcategory in to_json() and friends. · 0ca6bda8
      Tom Lane authored
      These functions were relying on typcategory to identify arrays and
      composites, which is not reliable and not the normal way to do it.
      Using typcategory to identify boolean, numeric types, and json itself is
      also pretty questionable, though the code in those cases didn't seem to be
      at risk of anything worse than wrong output.  Instead, use the standard
      lsyscache functions to identify arrays and composites, and rely on a direct
      check of the type OID for the other cases.
      
      In HEAD, also be sure to look through domains so that a domain is treated
      the same as its base type for conversions to JSON.  However, this is a
      small behavioral change; given the lack of field complaints, we won't
      back-patch it.
      
      In passing, refactor so that there's only one copy of the code that decides
      which conversion strategy to apply, not multiple copies that could (and
      have) gotten out of sync.
    • Teach add_json() that jsonb is of TYPCATEGORY_JSON. · 62e57ff0
      Tom Lane authored
      This code really needs to be refactored so that there aren't so many copies
      that can diverge.  Not to mention that this whole approach is probably
      wrong.  But for the moment I'll just stick my finger in the dike.
      Per report from Michael Paquier.
    • Avoid some pnstrdup()s when constructing jsonb · d3c72e23
      Heikki Linnakangas authored
      This speeds up text to jsonb parsing and hstore to jsonb conversions
      somewhat.
  30. May 06, 2014
    • pgindent run for 9.4 · 0a783200
      Bruce Momjian authored
      This includes removing tabs after periods in C comments, which was
      applied to back branches, so this change should not affect backpatching.
  31. Mar 23, 2014
    • Introduce jsonb, a structured format for storing json. · d9134d0a
      Andrew Dunstan authored
      The new format accepts exactly the same data as the json type. However, it is
      stored in a format that does not require reparsing the original text in order
      to process it, making it much more suitable for indexing and other operations.
      Insignificant whitespace is discarded, and the order of object keys is not
      preserved. Nor are duplicate object keys kept; the last value for a given
      key is the only one stored.
      
      The new type has all the functions and operators that the json type has,
      with the exception of the json generation functions (to_json, json_agg etc.)
      and with identical semantics. In addition, there are operator classes for
      hash and btree indexing, and two classes for GIN indexing, that have no
      equivalent in the json type.
      
      This feature grew out of previous work by Oleg Bartunov and Teodor Sigaev, which
      was intended to provide similar facilities to a nested hstore type, but which
      in the end proved to have some significant compatibility issues.
      
      Authors: Oleg Bartunov, Teodor Sigaev, Peter Geoghegan and Andrew Dunstan.
      Review: Andres Freund
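      The jsonb semantics listed above -- insignificant whitespace discarded,
      key order not preserved, last duplicate key wins -- resemble parsing into
      a dictionary; Python's json module shows the same last-key-wins
      behaviour:

```python
import json

doc = json.loads('{ "a": 1, "b": 2, "a": 3 }')
print(doc)              # {'a': 3, 'b': 2} -- later "a" replaced the earlier one
print(json.dumps(doc))  # {"a": 3, "b": 2} -- whitespace normalized
```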
  32. Mar 17, 2014
  33. Jan 28, 2014
    • New json functions. · 10563990
      Andrew Dunstan authored
      json_build_array() and json_build_object() allow for the construction of
      arbitrarily complex json trees. json_object() turns a one- or
      two-dimensional array, or two separate arrays, into a json object of
      name/value pairs, similarly to the hstore() function.
      json_object_agg() aggregates its two arguments into a single json object
      as name/value pairs.
      
      Catalog version bumped.
      
      Andrew Dunstan, reviewed by Marko Tiikkaja.
  34. Jan 22, 2014
  35. Jan 07, 2014