- Dec 04, 2015
-
-
Tom Lane authored
In commit 1ea0c73c I added a section to user-manag.sgml about how to drop roles that own objects; but as pointed out by Stephen Frost, I neglected that shared objects (databases or tablespaces) may need special treatment. Fix that. Back-patch to supported versions, like the previous patch.
-
- Dec 01, 2015
-
-
Tom Lane authored
We tried to fetch statistics data from the index metapage, which does not work if the index isn't actually present. If the index is hypothetical, instead extrapolate some plausible internal statistics based on the index page count provided by the index-advisor plugin.

There was already some code in gincostestimate() to invent internal stats in this way, but since it was only meant as a stopgap for pre-9.1 GIN indexes that hadn't been vacuumed since upgrading, it was pretty crude. If we want it to support index advisors, we should try a little harder. A small amount of testing says that it's better to estimate the entry pages as 90% of the index, not 100%. Also, estimating the number of entries (keys) as equal to the heap tuple count could be wildly wrong in either direction. Instead, let's estimate 100 entries per entry page.

Perhaps someday somebody will want the index advisor to be able to provide these numbers more directly, but for the moment this should serve.

Problem report and initial patch by Julien Rouhaud; modified by me to invent less-bogus internal statistics. Back-patch to all supported branches, since we've supported index advisors since 9.0.
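A minimal C sketch of the heuristics just described (illustrative names, not the actual gincostestimate() code):

```c
/*
 * Minimal sketch, assuming only the heuristics described above: derive
 * plausible GIN internal statistics from the page count reported by an
 * index-advisor plugin.  Names are illustrative; this is not the actual
 * gincostestimate() code.
 */
static void
estimate_hypothetical_gin_stats(double num_index_pages,
                                double *num_entry_pages,
                                double *num_data_pages,
                                double *num_entries)
{
    *num_entry_pages = num_index_pages * 0.90;  /* ~90% of the index */
    *num_data_pages = num_index_pages - *num_entry_pages;
    *num_entries = *num_entry_pages * 100.0;    /* ~100 keys per entry page */
}
```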
-
Tom Lane authored
The previous coding could overrun the provided buffer size for a very large input, or lose precision for a very small input. Adopt the methodology that's been in use in the equivalent backend code for a long time. Per private report from Bas van Schaik. Back-patch to all supported branches.
-
- Nov 26, 2015
-
-
Tom Lane authored
Failure to initially palloc the comboCids array, or to realloc it bigger when needed, left combocid's data structures in an inconsistent state that would cause trouble if the top transaction continues to execute. Noted while examining a user complaint about the amount of memory used for this. (There's not much we can do about that, but it does point up that repalloc failure has a non-negligible chance of occurring here.) In HEAD/9.5, also avoid possible invocation of memcpy() with a null pointer in SerializeComboCIDState; cf commit 13bba022.
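The hazard is easiest to see in generic form; here is a minimal standalone C sketch (not combocid.c itself) of why the capacity bookkeeping must be updated only after a successful (re)allocation:

```c
#include <stdlib.h>

static int *items = NULL;
static size_t used = 0;
static size_t capacity = 0;

/*
 * Grow-then-commit: if "capacity" were bumped before realloc() succeeded,
 * a failed allocation would leave the bookkeeping claiming space that was
 * never obtained, and later stores would walk off the end of the array.
 */
static int
add_item(int value)
{
    if (used >= capacity)
    {
        size_t  newcap = (capacity == 0) ? 16 : capacity * 2;
        int    *newitems = realloc(items, newcap * sizeof(int));

        if (newitems == NULL)
            return -1;          /* bookkeeping still matches the old array */
        items = newitems;
        capacity = newcap;      /* commit the new size only on success */
    }
    items[used++] = value;
    return 0;
}
```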
-
- Nov 25, 2015
-
-
Tom Lane authored
PQhost() can return NULL in non-error situations, namely when a Unix-socket connection has been selected by default. That behavior is a tad debatable perhaps, but for the moment we should make sure that psql copes with it. Unfortunately, do_connect() failed to: it could pass a NULL pointer to strcmp(), resulting in crashes on most platforms. This was reported as a security issue by ChenQin of Topsec Security Team, but the consensus of the security list is that it's just a garden-variety bug with no security implications. For paranoia's sake, I made the keep_password test not trust PQuser or PQport either, even though I believe those will never return NULL given a valid PGconn. Back-patch to all supported branches.
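A minimal sketch of the defensive comparison (generic C, not psql's actual do_connect() code): treat a NULL from PQhost(), PQuser(), or PQport() as an empty string before comparing.

```c
#include <string.h>

/*
 * Minimal sketch, not psql's actual code: compare two connection
 * parameters, treating NULL (e.g. PQhost() for a default Unix-socket
 * connection) as an empty string so strcmp() is never handed NULL.
 */
static int
conn_param_changed(const char *old_val, const char *new_val)
{
    if (old_val == NULL)
        old_val = "";
    if (new_val == NULL)
        new_val = "";
    return strcmp(old_val, new_val) != 0;
}
```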
-
- Nov 24, 2015
-
-
Bruce Momjian authored
Also fix getErrorText() to return the right error string on failure. This behavior now matches that of other operating systems. Report by Noah Misch. Backpatch through 9.1.
-
- Nov 23, 2015
-
-
Tom Lane authored
I missed adding src/port/tar.c to the Windows build files when back-patching the addition of that file to 9.2 and 9.1. Per buildfarm.
-
- Nov 22, 2015
-
-
Tom Lane authored
The POSIX standard for tar headers requires archive member sizes to be printed in octal with at most 11 digits, limiting the representable file size to 8GB. However, GNU tar and apparently most other modern tars support a convention in which oversized values can be stored in base-256, allowing any practical file to be a tar member. Adopt this convention to remove two limitations:

* pg_dump with -Ft output format failed if the contents of any one table exceeded 8GB.
* pg_basebackup failed if the data directory contained any file exceeding 8GB. (This would be a fatal problem for installations configured with a table segment size of 8GB or more, and it has also been seen to fail when large core dump files exist in the data directory.)

File sizes under 8GB are still printed in octal, so that no compatibility issues are created except in cases that would have failed entirely before.

In addition, this patch fixes several bugs in the same area:

* In 9.3 and later, we'd defined tarCreateHeader's file-size argument as size_t, which meant that on 32-bit machines it would write a corrupt tar header for file sizes between 4GB and 8GB, even though no error was raised. This broke both "pg_dump -Ft" and pg_basebackup for such cases.
* pg_restore from a tar archive would fail on tables of size between 4GB and 8GB, on machines where either "size_t" or "unsigned long" is 32 bits. This happened even with an archive file not affected by the previous bug.
* pg_basebackup would fail if there were files of size between 4GB and 8GB, even on 64-bit machines.
* In 9.3 and later, "pg_basebackup -Ft" failed entirely, for any file size, on 64-bit big-endian machines.

In view of these potential data-loss bugs, back-patch to all supported branches, even though removal of the documented 8GB limit might otherwise be considered a new feature rather than a bug fix.
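A minimal C sketch of the size-field convention described above (not PostgreSQL's actual tarCreateHeader() code): values below 8GB still fit in 11 octal digits, while larger values set the high bit of the first byte and store the size big-endian ("base-256") in the remaining bytes of the 12-byte field.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/*
 * Minimal sketch, not the actual tarCreateHeader() code: fill a tar
 * header's 12-byte size field.  Sizes below 8GB (2^33) are printed in
 * octal per POSIX; larger sizes use the GNU base-256 convention, flagged
 * by setting the high bit of the first byte.
 */
static void
tar_write_size_field(char *field /* 12 bytes */, uint64_t size)
{
    if (size < ((uint64_t) 1 << 33))
        snprintf(field, 12, "%011llo", (unsigned long long) size);
    else
    {
        memset(field, 0, 12);
        field[0] = (char) 0x80;         /* base-256 marker */
        for (int i = 11; i > 0; i--)
        {
            field[i] = (char) (size & 0xFF);
            size >>= 8;
        }
    }
}
```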
-
- Nov 20, 2015
-
-
Tom Lane authored
The previous way of reconstructing check constraints was to do a separate "ALTER TABLE ONLY tab ADD CONSTRAINT" for each table in an inheritance hierarchy. However, that way has no hope of reconstructing the check constraints' own inheritance properties correctly, as pointed out in bug #13779 from Jan Dirk Zijlstra. What we should do instead is to do a regular "ALTER TABLE", allowing recursion, at the topmost table that has a particular constraint, and then suppress the work queue entries for inherited instances of the constraint.

Annoyingly, we'd tried to fix this behavior before, in commit 5ed6546c, but we failed to notice that it wasn't reconstructing the pg_constraint field values correctly.

As long as I'm touching pg_get_constraintdef_worker anyway, tweak it to always schema-qualify the target table name; this seems like useful backup to the protections installed by commit 5f173040.

In HEAD/9.5, get rid of get_constraint_relation_oids, which is now unused. (I could alternatively have modified it to also return conislocal, but that seemed like a pretty single-purpose API, so let's not pretend it has some other use.) It's unused in the back branches as well, but I left it in place just in case some third-party code has decided to use it.

In HEAD/9.5, also rename pg_get_constraintdef_string to pg_get_constraintdef_command, as the previous name did nothing to explain what that entry point did differently from others (and its comment was equally useless). Again, that change doesn't seem like material for back-patching.

I did a bit of re-pgindenting in tablecmds.c in HEAD/9.5, as well.

Otherwise, back-patch to all supported branches.
-
- Nov 18, 2015
-
-
Tom Lane authored
Per buildfarm member anchovy, 2.6.0 exists in the wild now. Hopefully it works with Postgres; if not, we'll have to do something about that, but in any case claiming it's "too old" is pretty silly.
-
- Nov 17, 2015
-
-
Tom Lane authored
div_var_fast() postpones propagating carries in the same way as mul_var(), so it has the same corner-case overflow risk we fixed in 246693e5, namely that the size of the carries has to be accounted for when setting the threshold for executing a carry propagation step. We've not devised a test case illustrating the brokenness, but the required fix seems clear enough. Like the previous fix, back-patch to all active branches.

Dean Rasheed
-
- Nov 15, 2015
-
-
Tom Lane authored
Normally ruleutils prints a whole-row Var as "foo.*". We already knew that that doesn't work at top level of a SELECT list, because the parser would treat the "*" as a directive to expand the reference into separate columns, not a whole-row Var. However, Joshua Yanovski points out in bug #13776 that the same thing happens at top level of a ROW() construct; and some nosing around in the parser shows that the same is true in VALUES(). Hence, apply the same workaround already devised for the SELECT-list case, namely to add a forced cast to the appropriate rowtype in these cases. (The alternative of just printing "foo" was rejected because it is difficult to avoid ambiguity against plain columns named "foo".) Back-patch to all supported branches.
-
- Nov 14, 2015
-
-
Peter Eisentraut authored
The error message wording for AttributeError has changed in Python 3.5. For the plpython_error test, add a new expected file. In the plpython_subtransaction test, we didn't really care what the exception is, only that it is something coming from Python. So use a generic exception instead, which has a message that doesn't vary across versions.
-
Bruce Momjian authored
Previously, file copy failures were ignored on Windows due to an incorrect return value check. Report by Manu Joye. Backpatch through 9.1.
-
- Nov 10, 2015
-
-
Tom Lane authored
In commit a5ec86a7 I wrote a quick hack that reduced the number of TeX string pool entries created while converting our documentation to PDF form. That held the fort for awhile, but as of HEAD we're back up against the same limitation. It turns out that the original coding of \FlowObjectSetup actually results in *three* string pool entries being generated for every "flow object" (that is, potential cross-reference target) in the documentation, and my previous hack only got rid of one of them. With a little more care, we can reduce the string count to one per flow object plus one per actually-cross-referenced flow object (about 115000 + 5000 as of current HEAD); that should work until the documentation volume roughly doubles from where it is today.

As a not-incidental side benefit, this change also causes pdfjadetex to stop emitting unreferenced hyperlink anchors (bookmarks) into the PDF file. It had been making one willy-nilly for every flow object; now it's just one per actually-cross-referenced object. This results in close to a 2X savings in PDF file size. We will still want to run the output through "jpdftweak" to get it to be compressed; but we no longer need removal of unreferenced bookmarks, so we might be able to find a quicker tool for that step.

Although the failure only affects HEAD and US-format output at the moment, 9.5 cannot be more than a few pages short of failing likewise, so it will inevitably fail after a few rounds of minor-version release notes. I don't have a lot of faith that we'll never hit the limit in the older branches; and anyway it would be nice to get rid of jpdftweak across the board. Therefore, back-patch to all supported branches.
-
- Nov 08, 2015
-
-
Noah Misch authored
At least OpenBSD, NetBSD, and Windows don't support it. This repairs pg_ctl for listen_addresses='0.0.0.0' and listen_addresses='::'. Since pg_ctl prefers to test a Unix-domain socket, Windows users are most likely to need this change. Back-patch to 9.1 (all supported versions). This could change pg_ctl interaction with loopback-interface firewall rules. Therefore, in 9.4 and earlier (released branches), activate the change only on known-affected platforms. Reported (bug #13611) and designed by Kondo Yuta.
-
- Nov 07, 2015
-
-
Tom Lane authored
Lookahead and lookbehind constraints aren't allowed to contain backrefs, and parentheses within them are always considered non-capturing. Or so says the manual. But the regexp parser forgot about these rules once inside a parenthesized subexpression, so that constructs like (\w)(?=(\1)) were accepted (but then not correctly executed --- a case like this acted like (\w)(?=\w), without any enforcement that the two \w's match the same text). And in (?=((foo))) the innermost parentheses would be counted as capturing parentheses, though no text would ever be captured for them. To fix, properly pass down the "type" argument to the recursive invocation of parse(). Back-patch to all supported branches; it was agreed that silent misexecution of such patterns is worse than throwing an error, even though new errors in minor releases are generally not desirable.
-
- Oct 31, 2015
-
-
Kevin Grittner authored
On insert the CheckForSerializableConflictIn() test was performed before the page(s) which were going to be modified had been locked (with an exclusive buffer content lock). If another process acquired a relation SIReadLock on the heap and scanned to a page on which an insert was going to occur before the page was so locked, a rw-conflict would be missed, which could allow a serialization anomaly to be missed. The window between the check and the page lock was small, so the bug was generally not noticed unless there was high concurrency with multiple processes inserting into the same table. This was reported by Peter Bailis as bug #11732, by Sean Chittenden as bug #13667, and by others.

The race condition was eliminated in heap_insert() by moving the check down below the acquisition of the buffer lock, which had been the very next statement. Because of the loop locking and unlocking multiple buffers in heap_multi_insert() a check was added after all inserts were completed. The check before the start of the inserts was left because it might avoid a large amount of work to detect a serialization anomaly before performing all of the inserts and the related WAL logging.

While investigating this bug, other SSI bugs which were even harder to hit in practice were noticed and fixed, an unnecessary check (covered by another check, so redundant) was removed from heap_update(), and comments were improved.

Back-patch to all supported branches.

Kevin Grittner and Thomas Munro
-
- Oct 20, 2015
-
-
Noah Misch authored
master emits an extra context message compared to 9.5 and earlier.
-
Noah Misch authored
Instead, use transaction abort. Given an unlucky bout of latency, the timeout would cancel the RESET itself. Buildfarm members gharial, lapwing, mereswine, shearwater, and sungazer witness that. Back-patch to 9.1 (all supported versions). The query_canceled test still could timeout before entering its subtransaction; for whatever reason, that has yet to happen on the buildfarm.
-
- Oct 19, 2015
-
-
Tom Lane authored
pg_regprefix was doing nothing with lookahead constraints, which would be fine if it were the right kind of nothing, but it isn't: we have to terminate our search for a fixed prefix, not just pretend the LACON arc isn't there. Otherwise, if the current state has both a LACON outarc and a single plain-color outarc, we'd falsely conclude that the color represents an addition to the fixed prefix, and generate an extracted index condition that restricts the indexscan too much. (See added regression test case.) Terminating the search is conservative: we could traverse the LACON arc (thus assuming that the constraint can be satisfied at runtime) and then examine the outarcs of the linked-to state. But that would be a lot more work than it seems worth, because writing a LACON followed by a single plain character is a pretty silly thing to do. This makes a difference only in rather contrived cases, but it's a bug, so back-patch to all supported branches.
-
- Oct 18, 2015
-
-
Michael Meskes authored
-
- Oct 16, 2015
-
-
Tom Lane authored
Revert our previous addition of "all" flags to copyins() and copyouts(); they're no longer needed, and were never anything but an unsightly hack. Improve a couple of infelicities in the REG_DEBUG code for dumping the NFA data structure, including adding code to count the total number of states and arcs. Add a couple of missed error checks. Add some more documentation in the README file, and some regression tests illustrating cases that exceeded the state-count limit and/or took unreasonable amounts of time before this set of patches. Back-patch to all supported branches.
-
Tom Lane authored
This code previously counted the number of NFA states it created, and complained if a limit was exceeded, so as to prevent bizarre regex patterns from consuming unreasonable time or memory. That's fine as far as it went, but the code paid no attention to how many arcs linked those states. Since regexes can be contrived that have O(N) states but will need O(N^2) arcs after fixempties() processing, it was still possible to blow out memory, and take a long time doing it too.

To fix, modify the bookkeeping to count space used by both states and arcs. I did not bother with including the "color map" in the accounting; it can only grow to a few megabytes, which is not a lot in comparison to what we're allowing for states+arcs (about 150MB on 64-bit machines or half that on 32-bit machines). Looking at some of the larger real-world regexes captured in the Tcl regression test suite suggests that the most that is likely to be needed for regexes found in the wild is under 10MB, so I believe that the current limit has enough headroom to make it okay to keep it as a hard-wired limit.

In connection with this, redefine REG_ETOOBIG as meaning "regular expression is too complex"; the previous wording of "nfa has too many states" was already somewhat inapropos because of the error code's use for stack depth overrun, and it was not very user-friendly either.

Back-patch to all supported branches.
-
Tom Lane authored
The previous coding would create a new intermediate state every time it wanted to interchange the ordering of two constraint arcs. Certain regex features such as \Y can generate large numbers of parallel constraint arcs, and if we needed to reorder the results of that, we created unreasonable numbers of intermediate states. To improve matters, keep a list of already-created intermediate states associated with the state currently being considered by the outer loop; we can re-use such states to place all the new arcs leading to the same destination or source. I also took the trouble to redefine push() and pull() to have a less risky API: they no longer delete any state or arc that the caller might possibly have a pointer to, except for the specifically-passed constraint arc. This reduces the risk of re-introducing the same type of error seen in the failed patch for CVE-2007-4772. Back-patch to all supported branches.
-
Tom Lane authored
The previous coding took something like O(N^4) time to fully process a chain of N EMPTY arcs. We can't really do much better than O(N^2) because we have to insert about that many arcs, but we can do lots better than what's there now. The win comes partly from using mergeins() to amortize de-duplication of arcs across multiple source states, and partly from exploiting knowledge of the ordering of arcs for each state to avoid looking at arcs we don't need to consider during the scan. We do have to be a bit careful of the possible reordering of arcs introduced by the sort-merge coding of the previous commit, but that's not hard to deal with. Back-patch to all supported branches.
-
Tom Lane authored
Change the singly-linked in-arc and out-arc lists to be doubly-linked, so that arc deletion is constant time rather than having worst-case time proportional to the number of other arcs on the connected states.

Modify the bulk arc transfer operations copyins(), copyouts(), moveins(), moveouts() so that they use a sort-and-merge algorithm whenever there's more than a small number of arcs to be copied or moved. The previous method is O(N^2) in the number of arcs involved, because it performs duplicate checking independently for each copied arc. The new method may change the ordering of existing arcs for the destination state, but nothing really cares about that.

Provide another bulk arc copying method mergeins(), which is unused as of this commit but is needed for the next one. It basically is like copyins(), but the source arcs might not all come from the same state.

Replace the O(N^2) bubble-sort algorithm used in carcsort() with a qsort() call.

These changes greatly improve the performance of regex compilation for large or complex regexes, at the cost of extra space for arc storage during compilation. The original tradeoff was probably fine when it was made, but now we care more about speed and less about memory consumption.

Back-patch to all supported branches.
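The algorithmic gain from the sort-and-merge approach can be illustrated generically (plain integers rather than NFA arcs; a minimal sketch, not the regex engine's code): merging two sorted lists while skipping duplicates is linear, versus checking each copied element against every existing one.

```c
#include <stddef.h>

/*
 * Minimal sketch of the sort-and-merge idea, on sorted int arrays rather
 * than NFA arcs: produce the sorted union of a[] and b[] without
 * duplicates in O(na + nb) time, instead of checking every copied element
 * against every existing one (O(N^2)).  out[] must have room for na + nb.
 */
static size_t
merge_unique(const int *a, size_t na, const int *b, size_t nb, int *out)
{
    size_t  i = 0, j = 0, n = 0;

    while (i < na || j < nb)
    {
        int     v;

        if (j >= nb || (i < na && a[i] <= b[j]))
            v = a[i++];
        else
            v = b[j++];
        if (n == 0 || out[n - 1] != v)  /* skip duplicates */
            out[n++] = v;
    }
    return n;
}
```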
-
Tom Lane authored
It's possible to construct regular expressions that contain loops of constraint arcs (that is, ^ $ AHEAD BEHIND or LACON arcs). There's no use in fully traversing such a loop at execution, since you'd just end up in the same NFA state without having consumed any input. Worse, such a loop leads to infinite looping in the pullback/pushfwd stage of compilation, because we keep pushing or pulling the same constraints around the loop in a vain attempt to move them to the pre or post state. Such looping was previously recognized in CVE-2007-4772; but the fix only handled the case of trivial single-state loops (that is, a constraint arc leading back to its source state) ... and not only that, it was incorrect even for that case, because it broke the admittedly-not-very-clearly-stated API contract of the pull() and push() subroutines. The first two regression test cases added by this commit exhibit patterns that result in assertion failures because of that (though there seem to be no ill effects in non-assert builds). The other new test cases exhibit multi-state constraint loops; in an unpatched build they will run until the NFA state-count limit is exceeded.

To fix, remove the code added for CVE-2007-4772, and instead create a general-purpose constraint-loop-breaking phase of regex compilation that executes before we do pullback/pushfwd. Since we never need to traverse a constraint loop fully, we can just break the loop at any chosen spot, if we add clone states that can replicate any sequence of arc transitions that would've traversed just part of the loop. Also add some commentary clarifying why we have to have all these machinations in the first place.

This class of problems has been known for some time --- we had a report from Marc Mamin about two years ago, for example, and there are related complaints in the Tcl bug tracker. I had discussed a fix of this kind off-list with Henry Spencer, but didn't get around to doing something about it until the issue was rediscovered by Greg Stark recently.

Back-patch to all supported branches.
-
- Oct 13, 2015
-
-
Tom Lane authored
Postmaster child processes that aren't supposed to be attached to shared memory were not bothering to close the shared memory mapping handle they inherit from the postmaster process. That's mostly harmless, since the handle vanishes anyway when the child process exits -- but the syslogger process, if used, doesn't get killed and restarted during recovery from a backend crash. That meant that Windows doesn't see the shared memory mapping as becoming free, so it doesn't delete it and the postmaster is unable to create a new one, resulting in failure to recover from crashes whenever logging_collector is turned on. Per report from Dmitry Vasilyev. It's a bit astonishing that we'd not figured this out long ago, since it's been broken from the very beginnings of our native Windows support; probably some previously-unexplained trouble reports trace to this.

A secondary problem is that on Cygwin (perhaps only in older versions?), exec() may not detach from the shared memory segment after all, in which case these child processes did remain attached to shared memory, posing the risk of an unexpected shared memory clobber if they went off the rails somehow. That may be a long-gone bug, but we can deal with it now if it's still live, by detaching within the infrastructure introduced here to deal with closing the handle.

Back-patch to all supported branches.

Tom Lane and Amit Kapila
-
Tom Lane authored
pg_ctl start with -w previously relied on a heuristic that the postmaster would surely always manage to create postmaster.pid within five seconds. Unfortunately, that fails much more often than we would like on some of the slower, more heavily loaded buildfarm members.

We have known for quite some time that we could remove the need for that heuristic on Unix by using fork/exec instead of system() to launch the postmaster. This allows us to know the exact PID of the postmaster, which allows near-certain verification that the postmaster.pid file is the one we want and not a leftover, and it also lets us use waitpid() to detect reliably whether the child postmaster has exited or not.

What was blocking this change was not wanting to rewrite the Windows version of start_postmaster() to avoid use of CMD.EXE. That's doable in theory but would require fooling about with stdout/stderr redirection, and getting the handling of quote-containing postmaster switches to stay the same might be rather ticklish. However, we realized that we don't have to do that to fix the problem, because we can test whether the shell process has exited as a proxy for whether the postmaster is still alive. That doesn't allow an exact check of the PID in postmaster.pid, but we're no worse off than before in that respect; and we do get to get rid of the heuristic about how long the postmaster might take to create postmaster.pid.

On Unix, this change means that a second "pg_ctl start -w" immediately after another such command will now reliably fail, whereas previously it would succeed if done within two seconds of the earlier command. Since that's a saner behavior anyway, it's fine. On Windows, the case can still succeed within the same time window, since pg_ctl can't tell that the earlier postmaster's postmaster.pid isn't the pidfile it is looking for. To ensure stable test results on Windows, we can insert a short sleep into the test script for pg_ctl, ensuring that the existing pidfile looks stale. This hack can be removed if we ever do rewrite start_postmaster(), but that no longer seems like a high-priority thing to do.

Back-patch to all supported versions, both because the current behavior is buggy and because we must do that if we want the buildfarm failures to go away.

Tom Lane and Michael Paquier
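A minimal standalone sketch of the Unix-side technique (illustrative only, not pg_ctl's actual start_postmaster(); the data-directory path is made up): fork()/exec() yields the exact child PID, and a non-blocking waitpid() then tells reliably whether that child has exited while waiting for postmaster.pid to appear.

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/*
 * Minimal sketch, not pg_ctl's actual code: launch a server child via
 * fork()/exec() so we know its exact PID, then use waitpid() with WNOHANG
 * to detect whether it died during startup.
 */
int
main(void)
{
    pid_t   child = fork();

    if (child < 0)
    {
        perror("fork");
        return 1;
    }
    if (child == 0)
    {
        /* hypothetical data directory, for illustration only */
        execlp("postgres", "postgres", "-D", "/tmp/demo-pgdata", (char *) NULL);
        _exit(127);             /* exec failed */
    }

    sleep(1);                   /* give startup a moment, then poll */
    if (waitpid(child, NULL, WNOHANG) == child)
        fprintf(stderr, "postmaster (pid %d) exited during startup\n", (int) child);
    else
        printf("postmaster running with pid %d\n", (int) child);
    return 0;
}
```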
-
- Oct 07, 2015
-
-
Tom Lane authored
In general one may have to run both REASSIGN OWNED and DROP OWNED to get rid of all the dependencies of a role to be dropped. This was alluded to in the REASSIGN OWNED man page, but not really spelled out in full; and in any case the procedure ought to be documented in a more prominent place than that. Add a section to the "Database Roles" chapter explaining this, and do a bit of wordsmithing in the relevant commands' man pages.
-
- Oct 06, 2015
-
-
Tom Lane authored
The postmaster now checks every minute or so (worst case, at most two minutes) that postmaster.pid is still there and still contains its own PID. If not, it performs an immediate shutdown, as though it had received SIGQUIT.

The original goal behind this change was to ensure that failed buildfarm runs would get fully cleaned up, even if the test scripts had left a postmaster running, which is not an infrequent occurrence. When the buildfarm script removes a test postmaster's $PGDATA directory, its next check on postmaster.pid will fail and cause it to exit. Previously, manual intervention was often needed to get rid of such orphaned postmasters, since they'd block new test postmasters from obtaining the expected socket address.

However, by checking postmaster.pid and not something else, we can provide additional robustness: manual removal of postmaster.pid is a frequent DBA mistake, and now we can at least limit the damage that will ensue if a new postmaster is started while the old one is still alive.

Back-patch to all supported branches, since we won't get the desired improvement in buildfarm reliability otherwise.
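The self-check amounts to re-reading the pid file and comparing it against getpid(); a minimal C sketch (illustrative, not the postmaster's actual code):

```c
#include <stdio.h>
#include <unistd.h>

/*
 * Minimal sketch, not the postmaster's actual code: return 1 if the pid
 * file still exists and its first line names this process; the caller
 * would trigger an immediate shutdown when this returns 0.
 */
static int
pidfile_still_mine(const char *pidfile_path)
{
    FILE   *fp = fopen(pidfile_path, "r");
    long    pid = 0;

    if (fp == NULL)
        return 0;               /* file removed out from under us */
    if (fscanf(fp, "%ld", &pid) != 1)
        pid = 0;                /* unreadable or clobbered */
    fclose(fp);
    return pid == (long) getpid();
}
```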
-
- Oct 05, 2015
-
-
Peter Eisentraut authored
-
Peter Eisentraut authored
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: d8bd45466e3980b5ab4582ff1705fcd1fff42908
-
Tom Lane authored
Add entries for security and not-quite-security issues.

Security: CVE-2015-5288, CVE-2015-5289
-
Andres Freund authored
The documentation for the autovacuum_multixact_freeze_max_age and autovacuum_freeze_max_age relation level parameters contained: "Note that while you can set autovacuum_multixact_freeze_max_age very small, or even zero, this is usually unwise since it will force frequent vacuuming." which hasn't been true since these options were made relation options, instead of residing in the pg_autovacuum table (834a6da4). Remove the outdated sentence. Even the lowered limits from 2596d705 are high enough that this doesn't warrant calling out the risk in the CREATE TABLE docs.

Per discussion with Tom Lane and Alvaro Herrera
Discussion: 26377.1443105453@sss.pgh.pa.us
Backpatch: 9.0- (in parts)
-
Noah Misch authored
The tsquery, ltxtquery and query_int data types have a common ancestor. Having acquired check_stack_depth() calls independently, each was missing at least one call. Back-patch to 9.0 (all supported versions).
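In generic terms, the fix pattern is a stack-depth (or depth-counter) check at the top of each recursive entry point; a minimal standalone sketch (PostgreSQL itself uses check_stack_depth(), not this code):

```c
#define MAX_NESTING_DEPTH 1000

/*
 * Minimal sketch of the guard these fixes add: a recursive routine checks
 * its depth before descending, so pathological input produces an error
 * instead of overflowing the stack.  PostgreSQL's real functions call
 * check_stack_depth() for this purpose.
 */
static int
count_nesting(const char **s, int depth)
{
    int     deepest = depth;

    if (depth > MAX_NESTING_DEPTH)
        return -1;              /* report "too deeply nested" to caller */

    while (**s != '\0')
    {
        if (**s == '(')
        {
            int     d;

            (*s)++;
            d = count_nesting(s, depth + 1);
            if (d < 0)
                return -1;
            if (d > deepest)
                deepest = d;
        }
        else if (**s == ')')
        {
            (*s)++;
            return deepest;
        }
        else
            (*s)++;
    }
    return deepest;
}
```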
-
Noah Misch authored
A range type can name another range type as its subtype, and a record type can bear a column of another record type. Consequently, functions like range_cmp() and record_recv() are recursive. Functions at risk include operator family members and referents of pg_type regproc columns. Treat as recursive any such function that looks up and calls the same-purpose function for a record column type or the range subtype. Back-patch to 9.0 (all supported versions). An array type's element type is never itself an array type, so array functions are unaffected. Recursion depth proportional to array dimensionality, found in array_dim_to_jsonb(), is fine thanks to MAXDIM.
-
Noah Misch authored
Sufficiently-deep recursion heretofore elicited a SIGSEGV. If an application constructs PostgreSQL json or jsonb values from arbitrary user input, application users could have exploited this to terminate all active database connections. That applies to 9.3, where the json parser adopted recursive descent, and later versions. Only row_to_json() and array_to_json() were at risk in 9.2, both in a non-security capacity. Back-patch to 9.2, where the json type was introduced.

Oskari Saarenmaa, reviewed by Michael Paquier.

Security: CVE-2015-5289