- May 06, 2013
Tom Lane authored
The initial implementation of this feature was really unsupportable, because it's relying on the physical size of an on-disk file to carry the relation's populated/unpopulated state, which is at least a modularity violation and could have serious long-term consequences. We could say that an unlogged matview goes to empty on crash, but not everybody likes that definition, so let's just remove the feature for 9.3. We can add it back when we have a less klugy implementation. I left the grammar and tab-completion support for CREATE UNLOGGED MATERIALIZED VIEW in place, since it's harmless and allows delivering a more specific error message about the unsupported feature. I'm committing this separately to ease identification of what should be reverted when/if we are able to re-enable the feature.
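For illustration, the resulting behavior might look like this (view and table names are hypothetical):

    -- still accepted by the grammar, but now rejected with a feature-not-supported error
    CREATE UNLOGGED MATERIALIZED VIEW mv_orders AS SELECT * FROM orders;
    -- a regular (logged) materialized view is unaffected
    CREATE MATERIALIZED VIEW mv_orders AS SELECT * FROM orders;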
- Apr 27, 2013
Tom Lane authored
Move checking for unscannable matviews into ExecOpenScanRelation, which is a better place for it first because the open relation is already available (saving a relcache lookup cycle), and second because this eliminates the problem of telling the difference between rangetable entries that will or will not be scanned by the query. In particular we can get rid of the not-terribly-well-thought-out-or-implemented isResultRel field that the initial matviews patch added to RangeTblEntry. Also get rid of entirely unnecessary scannability check in the rewriter, and a bogus decision about whether RefreshMatViewStmt requires a parse-time snapshot. catversion bump due to removal of a RangeTblEntry field, which changes stored rules.
- Apr 26, 2013
Tom Lane authored
ORDER BY expressions were being treated the same as regular aggregate arguments for purposes of collation determination, but really they should not affect the aggregate's collation at all; only collations of the aggregate's regular arguments should affect it. In many cases this mistake would lead to incorrectly throwing a "collation conflict" error; but in some cases the corrected code will silently assign a different collation to the aggregate than before, for example agg(foo ORDER BY bar COLLATE "x") which will now use foo's collation rather than "x" for the aggregate. Given this risk and the lack of field complaints about the issue, it doesn't seem prudent to back-patch. In passing, rearrange code in assign_collations_walker so that we don't need multiple copies of the standard logic for computing collation of a node with children. (Previously, CaseExpr duplicated the standard logic, and we would have needed a third copy for Aggref without this change.) Andrew Gierth and David Fetter
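For example (hypothetical table and columns), the aggregate's collation now comes only from its regular argument:

    -- previously this could raise a "collation conflict" error, or silently give
    -- the aggregate collation "C"; now the aggregate simply takes name's collation
    SELECT array_agg(name ORDER BY surname COLLATE "C") FROM people;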
- Apr 20, 2013
Peter Eisentraut authored
In most cases, these were just references to the SQL standard in general. In a few cases, a contrast was made between SQL92 and later standards -- those have been kept unchanged.
- Apr 13, 2013
Tom Lane authored
Revert the matview-related changes in explain.c's API, as per recent complaint from Robert Haas. The reason for these appears to have been principally some ill-considered choices around having intorel_startup do what ought to be parse-time checking, plus a poor arrangement for passing it the view parsetree it needs to store into pg_rewrite when creating a materialized view. Do the latter by having parse analysis stick a copy into the IntoClause, instead of doing it at runtime. (On the whole, I seriously question the choice to represent CREATE MATERIALIZED VIEW as a variant of SELECT INTO/CREATE TABLE AS, because that means injecting even more complexity into what was already a horrid legacy kluge. However, I didn't go so far as to rethink that choice ... yet.) I also moved several error checks into matview parse analysis, and made the check for external Params in a matview more accurate. In passing, clean things up a bit more around interpretOidsOption(), and fix things so that we can use that to force no-oids for views, sequences, etc, thereby eliminating the need to cons up "oids = false" options when creating them. catversion bump due to change in IntoClause. (I wonder though if we really need readfuncs/outfuncs support for IntoClause anymore.)
- Mar 28, 2013
Alvaro Herrera authored
This event takes place just before ddl_command_end, and is fired if and only if at least one object has been dropped by the command. (For instance, DROP TABLE IF EXISTS of a table that does not in fact exist will not lead to such a trigger firing.) Commands that drop multiple objects (such as DROP SCHEMA or DROP OWNED BY) will cause a single event to fire. Some firings might be surprising, such as ALTER TABLE DROP COLUMN. The trigger is fired after the drop has taken place, because that has been deemed the safest design, to avoid exposing possibly-inconsistent internal state (system catalogs as well as current transaction) to the user function code. This means that careful tracking of object identification is required during the object removal phase. Like other currently existing events, there is support for tag filtering. To support the new event, add a new pg_event_trigger_dropped_objects() set-returning function, which returns a set of rows comprising the objects affected by the command. This is to be used within the user function code, and is mostly modelled after the recently introduced pg_identify_object() function. Catalog version bumped due to the new function. Dimitri Fontaine and Álvaro Herrera. Review by Robert Haas, Tom Lane
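A minimal sketch of how the new event and function might be used together (function and trigger names are hypothetical):

    CREATE FUNCTION report_dropped_objects() RETURNS event_trigger
    LANGUAGE plpgsql AS $$
    DECLARE
        obj record;
    BEGIN
        FOR obj IN SELECT * FROM pg_event_trigger_dropped_objects() LOOP
            RAISE NOTICE 'dropped %: %', obj.object_type, obj.object_identity;
        END LOOP;
    END;
    $$;

    CREATE EVENT TRIGGER log_drops ON sql_drop
        EXECUTE PROCEDURE report_dropped_objects();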
- Mar 22, 2013
Kevin Grittner authored
A problem with an assertion failure when restoring from pg_dump output was reported by Joachim Wieland. Review and suggestions by Tom Lane and Robert Haas.
- Mar 20, 2013
Alvaro Herrera authored
Introduce pg_identify_object(oid,oid,int4), which is similar in spirit to pg_describe_object but instead produces a row of machine-readable information to uniquely identify the given object, without resorting to OIDs or other internal representation. This is intended to be used in the event trigger implementation, to report objects being operated on; but it has usefulness of its own. Catalog version bumped because of the new function.
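For example (table name hypothetical), identifying a relation from its catalog OID, object OID, and sub-object id:

    SELECT * FROM pg_identify_object('pg_class'::regclass, 'my_table'::regclass, 0);
    -- returns machine-readable type, schema, name, and identity columns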
- Mar 14, 2013
Heikki Linnakangas authored
The error rule used to avoid backtracking with the U&'...' UESCAPE 'x' syntax bloated the flex tables, so refactor that. This patch makes the error rule shorter, by introducing a new exclusive flex state that's entered after parsing U&'...'. This shrinks the postgres binary by about 220kB.
- Mar 12, 2013
Tom Lane authored
There's still some discussion about exactly how postgres_fdw ought to handle this case, but there seems no debate that we want to allow defaults to be used for inserts into foreign tables. So remove the core-code restrictions that prevented it. While at it, get rid of the special grammar productions for CREATE FOREIGN TABLE, and instead add explicit FEATURE_NOT_SUPPORTED error checks for the disallowed cases. This makes the grammar a shade smaller, and more importantly results in much more intelligible error messages for unsupported cases. It's also one less thing to fix if we ever start supporting constraints on foreign tables.
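A sketch of the now-permitted case, with hypothetical server and table names:

    CREATE FOREIGN TABLE remote_events (
        id      integer,
        created timestamptz DEFAULT now()
    ) SERVER remote_srv OPTIONS (table_name 'events');

    INSERT INTO remote_events (id) VALUES (1);  -- "created" is filled in from its default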
- Mar 10, 2013
Tom Lane authored
This patch adds the core-system infrastructure needed to support updates on foreign tables, and extends contrib/postgres_fdw to allow updates against remote Postgres servers. There's still a great deal of room for improvement in optimization of remote updates, but at least there's basic functionality there now. KaiGai Kohei, reviewed by Alexander Korotkov and Laurenz Albe, and rather heavily revised by Tom Lane.
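For example, assuming a postgres_fdw foreign table named remote_accounts has already been set up:

    INSERT INTO remote_accounts (id, balance) VALUES (42, 0);
    UPDATE remote_accounts SET balance = balance + 100 WHERE id = 42;
    DELETE FROM remote_accounts WHERE id = 42;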
- Mar 04, 2013
Kevin Grittner authored
A materialized view has a rule just like a view, and a heap and other physical properties like a table. The rule is only used to populate the table; references in queries refer to the materialized data. This is a minimal implementation, but should still be useful in many cases. Currently, data is only populated "on demand" by the CREATE MATERIALIZED VIEW and REFRESH MATERIALIZED VIEW statements. It is expected that future releases will add incremental updates with various timings, and that a more refined concept of defining what is "fresh" data will be developed. At some point it may even be possible to have queries use a materialized view in place of references to underlying tables, but that requires the other above-mentioned features to be working first. Much of the documentation work by Robert Haas. Review by Noah Misch, Thom Brown, Robert Haas, Marko Tiikkaja. Security review by KaiGai Kohei, with a decision on how best to implement sepgsql still pending.
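A minimal sketch of the new statements (table and view names are hypothetical):

    CREATE MATERIALIZED VIEW order_totals AS
        SELECT customer_id, sum(amount) AS total
        FROM orders
        GROUP BY customer_id;

    -- re-run the stored query on demand to repopulate the data
    REFRESH MATERIALIZED VIEW order_totals;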
- Feb 27, 2013
Heikki Linnakangas authored
This includes backend "COPY TO/FROM PROGRAM '...'" syntax, and corresponding psql \copy syntax. As with reading/writing files, the backend version is superuser-only, and in the psql version the program is run on the client. In passing, the psql \copy STDIN/STDOUT syntax is subtly changed: if the stdin/stdout argument is quoted, it is now interpreted as a filename. For example, "\copy foo from 'stdin'" now reads from a file called 'stdin', not from standard input. Before this, there was no way to specify a filename called stdin, stdout, pstdin or pstdout. This creates a new function in pgport, wait_result_to_str(), which can be used to convert the exit status of a process, as returned by wait(3), to a human-readable string. Etsuro Fujita, reviewed by Amit Kapila.
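For illustration (table name, program, and paths are hypothetical):

    -- backend form: superuser-only, the program runs on the server
    COPY big_table FROM PROGRAM 'gzip -dc /data/big_table.csv.gz' WITH (FORMAT csv);
    -- psql form: the program runs on the client
    \copy big_table from program 'gzip -dc big_table.csv.gz'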
- Feb 18, 2013
Peter Eisentraut authored
This generalizes the existing ALTER ROLE ... SET and ALTER DATABASE ... SET functionality to allow creating settings that apply to all users in all databases. reviewed by Pavel Stehule
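A sketch of the new form alongside the existing ones (role, database, and settings are hypothetical):

    ALTER ROLE ALL SET statement_timeout = '5min';        -- applies to all users in all databases
    ALTER ROLE alice SET work_mem = '64MB';               -- existing per-role form
    ALTER DATABASE appdb SET search_path = app, public;   -- existing per-database form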
- Feb 09, 2013
Tom Lane authored
Ali Dar, reviewed by Dean Rasheed.
- Feb 06, 2013
Alvaro Herrera authored
The wording changes applied in 0ac5ad51 were universally disliked. Per gripe from Andrew Dunstan
- Feb 01, 2013
Peter Eisentraut authored
This is specified in the SQL standard. The CREATE RECURSIVE VIEW specification is transformed into a normal CREATE VIEW statement with a WITH RECURSIVE clause. reviewed by Abhijit Menon-Sen and Stephen Frost
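For example (view name hypothetical):

    CREATE RECURSIVE VIEW nums (n) AS
        SELECT 1
        UNION ALL
        SELECT n + 1 FROM nums WHERE n < 100;
    -- transformed internally into roughly:
    --   CREATE VIEW nums (n) AS
    --     WITH RECURSIVE nums (n) AS (SELECT 1 UNION ALL ...) SELECT n FROM nums;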
- Jan 30, 2013
Tom Lane authored
Such cases should work, but the grammar failed to accept them because of our ancient precedence hacks to convince bison that extra parentheses around a sub-SELECT in an expression are unambiguous. (Formally, they *are* ambiguous, but we don't especially care whether they're treated as part of the sub-SELECT or part of the expression. Bison cares, though.) Fix by adding a redundant-looking production for this case. This is a fine example of why fixing shift/reduce conflicts via precedence declarations is more dangerous than it looks: you can easily cause the parser to reject cases that should work. This has been wrong since commit 3db4056e or maybe before, and apparently some people have been working around it by inserting no-op casts. That method introduces a dump/reload hazard, as illustrated in bug #7838 from Jan Mate. Hence, back-patch to all active branches.
- Jan 26, 2013
Tom Lane authored
The SQL standard does not have general functions-in-FROM, but it does allow UNNEST() there (see the <collection derived table> production), and the semantics of that are defined to include lateral references. So spec compliance requires allowing lateral references within UNNEST() even without an explicit LATERAL keyword. Rather than making UNNEST() a special case, it seems best to extend this flexibility to any function-in-FROM. We'll still allow LATERAL to be written explicitly for clarity's sake, but it's now a noise word in this context. In theory this change could result in a change in behavior of existing queries, by allowing what had been an outer reference in a function-in-FROM to be captured by an earlier FROM-item at the same level. However, all pre-9.3 PG releases have a bug that causes them to match variable references to earlier FROM-items in preference to outer references (and then throw an error). So no previously-working query could contain the type of ambiguity that would risk a change of behavior. Per a suggestion from Andrew Gierth, though I didn't use his patch.
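For example (table and column names hypothetical), the function argument can now reference an earlier FROM item without writing LATERAL:

    SELECT t.id, tag
    FROM items t, unnest(t.tags) AS tag;
    -- behaves the same as: FROM items t, LATERAL unnest(t.tags) AS tag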
Bruce Momjian authored
Previously, DROP TABLE IF EXISTS threw an error if the schema was nonexistent. This was done by passing 'missing_ok' to the function that looks up the schema oid.
- Jan 23, 2013
Alvaro Herrera authored
This patch introduces two additional lock modes for tuples: "SELECT FOR KEY SHARE" and "SELECT FOR NO KEY UPDATE". These don't block each other, in contrast with already existing "SELECT FOR SHARE" and "SELECT FOR UPDATE". UPDATE commands that do not modify the values stored in the columns that are part of the key of the tuple now grab a SELECT FOR NO KEY UPDATE lock on the tuple, allowing them to proceed concurrently with tuple locks of the FOR KEY SHARE variety. Foreign key triggers now use FOR KEY SHARE instead of FOR SHARE; this means the concurrency improvement applies to them, which is the whole point of this patch. The added tuple lock semantics require some rejiggering of the multixact module, so that the locking level that each transaction is holding can be stored alongside its Xid. Also, multixacts now need to persist across server restarts and crashes, because they can now represent not only tuple locks, but also tuple updates. This means we need more careful tracking of the lifetime of pg_multixact SLRU files; since they now persist longer, we require more infrastructure to figure out when they can be removed. pg_upgrade also needs to be careful to copy pg_multixact files over from the old server to the new, or at least part of multixact.c state, depending on the versions of the old and new servers. Tuple time qualification rules (HeapTupleSatisfies routines) need to be careful not to consider tuples with the "is multi" infomask bit set as being only locked; they might need to look up MultiXact values (i.e. possibly do pg_multixact I/O) to find out the Xid that updated a tuple, whereas they previously were assured to only use information readily available from the tuple header. This is considered acceptable, because the extra I/O would involve cases that would previously cause some commands to block waiting for concurrent transactions to finish. Another important change is the fact that locking tuples that have previously been updated causes the future versions to be marked as locked, too; this is essential for correctness of foreign key checks. This also causes additional WAL-logging (there was previously a single WAL record for a locked tuple; now there are as many records as there are updated copies of the tuple). With all this in place, contention related to tuples being checked by foreign key rules should be much reduced. As a bonus, this fixes the old behavior whereby a subtransaction that grabbed a stronger tuple lock than its parent (sub)transaction held on a given tuple, and later aborted, caused the weaker lock to be lost. Many new spec files were added for the isolation tester framework, to ensure overall behavior is sane. There's probably room for several more tests. There were several reviewers of this patch; in particular, Noah Misch and Andres Freund spent considerable time on it. Original idea for the patch came from Simon Riggs, after a problem report by Joel Jacobson. Most code is from me, with contributions from Marti Raudsepp, Alexander Shulgin, Noah Misch and Andres Freund. This patch was discussed in several pgsql-hackers threads; the most important start at the following message-ids: AANLkTimo9XVcEzfiBR-ut3KVNDkjm2Vxh+t8kAmWjPuv@mail.gmail.com 1290721684-sup-3951@alvh.no-ip.org 1294953201-sup-2099@alvh.no-ip.org 1320343602-sup-2290@alvh.no-ip.org 1339690386-sup-8927@alvh.no-ip.org 4FE5FF020200002500048A3D@gw.wicourts.gov 4FEAB90A0200002500048B7D@gw.wicourts.gov
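A sketch of the two new lock strengths (table name hypothetical); the point is that they do not conflict with each other:

    -- session 1: the lock now taken by an UPDATE that does not touch key columns
    SELECT * FROM accounts WHERE id = 1 FOR NO KEY UPDATE;
    -- session 2: the lock now taken by foreign-key triggers; does not block on session 1
    SELECT * FROM accounts WHERE id = 1 FOR KEY SHARE;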
- Jan 22, 2013
Tom Lane authored
Originally we didn't bother to mark FuncExprs with any indication whether VARIADIC had been given in the source text, because there didn't seem to be any need for it at runtime. However, because we cannot fold a VARIADIC ANY function's arguments into an array (since they're not necessarily all the same type), we do actually need that information at runtime if VARIADIC ANY functions are to respond unsurprisingly to use of the VARIADIC keyword. Add the missing field, and also fix ruleutils.c so that VARIADIC ANY function calls are dumped properly. Extracted from a larger patch that also fixes concat() and format() (the only two extant VARIADIC ANY functions) to behave properly when VARIADIC is specified. This portion seems appropriate to review and commit separately. Pavel Stehule
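With the follow-on concat()/format() fixes in place, the intended behavior is roughly as follows (a sketch, not taken from the commit):

    SELECT concat(VARIADIC ARRAY['a', 'b', 'c']);      -- behaves like concat('a', 'b', 'c')
    SELECT format('%s-%s', VARIADIC ARRAY['x', 'y']);  -- 'x-y'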
- Jan 21, 2013
Alvaro Herrera authored
Remove duplicate implementations of catalog munging and miscellaneous privilege checks. Instead rely on already existing data in objectaddress.c to do the work. Author: KaiGai Kohei, changes by me. Reviewed by: Robert Haas, Álvaro Herrera, Dimitri Fontaine
- Jan 14, 2013
Tom Lane authored
In commit 71450d7f, we added code to inform suitably-intelligent compilers that ereport() doesn't return if the elevel is ERROR or higher. This patch extends that to elog(), and also fixes a double-evaluation hazard that the previous commit created in ereport(), as well as reducing the emitted code size. The elog() improvement requires the compiler to support __VA_ARGS__, which should be available in just about anything nowadays since it's required by C99. But our minimum language baseline is still C89, so add a configure test for that. The previous commit assumed that ereport's elevel could be evaluated twice, which isn't terribly safe --- there are already counterexamples in xlog.c. On compilers that have __builtin_constant_p, we can use that to protect the second test, since there's no possible optimization gain if the compiler doesn't know the value of elevel. Otherwise, use a local variable inside the macros to prevent double evaluation. The local-variable solution is inferior because (a) it leads to useless code being emitted when elevel isn't constant, and (b) it increases the optimization level needed for the compiler to recognize that subsequent code is unreachable. But it seems better than not teaching non-gcc compilers about unreachability at all. Lastly, if the compiler has __builtin_unreachable(), we can use that instead of abort(), resulting in a noticeable code savings since no function call is actually emitted. However, it seems wise to do this only in non-assert builds. In an assert build, continue to use abort(), so that the behavior will be predictable and debuggable if the "impossible" happens. These changes involve making the ereport and elog macros emit do-while statement blocks not just expressions, which forces small changes in a few call sites. Andres Freund, Tom Lane, Heikki Linnakangas
- Jan 05, 2013
Peter Eisentraut authored
- Jan 01, 2013
Bruce Momjian authored
Fully update git head, and update back branches in ./COPYRIGHT and legal.sgml files.
- Dec 23, 2012
Tom Lane authored
transformExpr() is required to cope with already-transformed expression trees, for various ugly-but-not-quite-worth-cleaning-up reasons. However, some of its newer subroutines hadn't gotten the memo. This accounts for bug #7763 from Norbert Buchmuller: transformRowExpr() was overwriting the previously determined type of a RowExpr during CREATE TABLE LIKE INCLUDING INDEXES. Additional investigation showed that transformXmlExpr had the same kind of problem, but all the other cases seem to be safe. Andres Freund and Tom Lane
- Dec 21, 2012
Peter Eisentraut authored
- Dec 01, 2012
Simon Riggs authored
When a relfilenode is created in this subtransaction or a committed child transaction and it cannot otherwise be seen by our own process, mark tuples committed ahead of transaction commit for all COPY commands in the same transaction. If FREEZE is specified on COPY and the pre-conditions are met, the rows will also be frozen. Both options are designed to avoid revisiting rows after commit, increasing the performance of subsequent commands after data load and upgrade. pg_restore changes later. Simon Riggs, review comments from Heikki Linnakangas, Noah Misch and design input from Tom Lane, Robert Haas and Kevin Grittner
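A sketch of the intended usage (table and file path hypothetical); the table must get a new relfilenode, e.g. by being created or truncated, in the same (sub)transaction for FREEZE to apply:

    BEGIN;
    CREATE TABLE bulk_load (id int, payload text);
    COPY bulk_load FROM '/tmp/bulk_load.csv' WITH (FORMAT csv, FREEZE);
    COMMIT;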
- Nov 29, 2012
Tom Lane authored
Commit 8cb53654, which introduced DROP INDEX CONCURRENTLY, managed to break CREATE INDEX CONCURRENTLY via a poor choice of catalog state representation. The pg_index state for an index that's reached the final pre-drop stage was the same as the state for an index just created by CREATE INDEX CONCURRENTLY. This meant that the (necessary) change to make RelationGetIndexList ignore about-to-die indexes also made it ignore freshly-created indexes, which is catastrophic because the latter do need to be considered in HOT-safety decisions. Failure to do so leads to incorrect index entries and subsequently wrong results from queries depending on the concurrently-created index. To fix, add an additional boolean column "indislive" to pg_index, so that the freshly-created and about-to-die states can be distinguished. (This change obviously is only possible in HEAD. This patch will need to be back-patched, but in 9.2 we'll use a kluge consisting of overloading the formerly-impossible state of indisvalid = true and indisready = false.) In addition, change CREATE/DROP INDEX CONCURRENTLY so that the pg_index flag changes they make without exclusive lock on the index are made via heap_inplace_update() rather than a normal transactional update. The latter is not very safe because moving the pg_index tuple could result in concurrent SnapshotNow scans finding it twice or not at all, thus possibly resulting in index corruption. This is a pre-existing bug in CREATE INDEX CONCURRENTLY, which was copied into the DROP code. In addition, fix various places in the code that ought to check to make sure that the indexes they are manipulating are valid and/or ready as appropriate. These represent bugs that have existed since 8.2, since a failed CREATE INDEX CONCURRENTLY could leave a corrupt or invalid index behind, and we ought not try to do anything that might fail with such an index. Also fix RelationReloadIndexInfo to ensure it copies all the pg_index columns that are allowed to change after initial creation. Previously we could have been left with stale values of some fields in an index relcache entry. It's not clear whether this actually had any user-visible consequences, but it's at least a bug waiting to happen. In addition, do some code and docs review for DROP INDEX CONCURRENTLY; some cosmetic code cleanup but mostly addition and revision of comments. This will need to be back-patched, but in a noticeably different form, so I'm committing it to HEAD before working on the back-patch. Problem reported by Amit Kapila, diagnosis by Pavan Deolasee, fix by Tom Lane and Andres Freund.
- Nov 12, 2012
Heikki Linnakangas authored
errcontext() is typically used in an error context callback function, not within an ereport() invocation like e.g. errmsg and errdetail are. That means that the message domain that the TEXTDOMAIN magic in ereport() determines is not the right one for the errcontext() calls. The message domain needs to be determined by the C file containing the errcontext() call, not the file containing the ereport() call. Fix by turning errcontext() into a macro that passes the TEXTDOMAIN to use for the errcontext message. "errcontext" was used in a few places as a variable or struct field name; I had to rename those out of the way, now that errcontext is a macro. We've had this problem all along, but it doesn't seem worth backporting. It's a fairly minor issue, and turning errcontext from a function to a macro requires at least a recompile of any external code that calls errcontext().
Tom Lane authored
Since transformSetOperationTree() recurses, it can be driven to stack overflow with enough UNION/INTERSECT/EXCEPT clauses in a query. Add a check to ensure it fails cleanly instead of crashing. Per report from Matthew Gerber (though it's not clear whether this is the only thing going wrong for him). Historical note: I think the reasoning behind not putting a check here in the beginning was that the check in transformExpr() ought to be sufficient to guard the whole parser. However, because transformSetOperationTree() recurses all the way to the bottom of the set-operation tree before doing any analysis of the statement's expressions, that check doesn't save it.
- Oct 12, 2012
Tom Lane authored
We don't need this hack any more.
- Oct 11, 2012
Peter Eisentraut authored
Numerous flex and bison make rules have appeared in the source tree over time, and they are all virtually identical, so we can replace them by pattern rules with some variables for customization. Users of pgxs will also be able to benefit from this.
- Oct 04, 2012
Tom Lane authored
The previous coding of the YYLLOC_DEFAULT macro behaved strangely for empty productions, assigning the previous nonterminal's location as the parse location of the result. The usefulness of that was (at best) debatable already, but the real problem is that in list-generating nonterminals like OptFooList: /* EMPTY */ { ... } | OptFooList Foo { ... } ; the initially-identified location would get copied up, so that even a nonempty list would be given a bogus parse location. Document how to work around that, and do so for OptSchemaEltList, so that the error condition just added for CREATE SCHEMA IF NOT EXISTS produces a sane error cursor. So far as I can tell, there are currently no other cases where the situation arises, so we don't need other instances of this coding yet.
Tom Lane authored
Per discussion, schema-element subcommands are not allowed together with this option, since it's not very obvious what should happen to the element objects. Fabrízio de Royes Mello
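For example (schema name hypothetical):

    CREATE SCHEMA IF NOT EXISTS reporting;                          -- allowed
    CREATE SCHEMA IF NOT EXISTS reporting CREATE TABLE t (id int);  -- now raises an error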
- Oct 03, 2012
Alvaro Herrera authored
Remove duplicate implementation of catalog munging and miscellaneous privilege and consistency checks. Instead rely on already existing data in objectaddress.c to do the work. Author: KaiGai Kohei. Tweaked by me. Reviewed by Robert Haas
- Oct 02, 2012
Alvaro Herrera authored
Instead of having each object type implement the catalog munging independently, centralize knowledge about how to do it and expand the existing table in objectaddress.c with enough data about each object type to support this operation. Author: KaiGai Kohei. Tweaks by me. Reviewed by Robert Haas
- Sep 27, 2012
Tom Lane authored
This script is a bit slow, but still it only takes a fraction of the time the bison run does, so the overhead doesn't seem intolerable. And we definitely need some mechanical aid here, because people keep missing the need to add new keywords to the appropriate keyword-list production. While at it, I moved check_keywords.pl from src/tools into src/backend/parser where it's actually used, and did some very minor cleanup on the script.
Tom Lane authored
Once again, somebody who ought to know better forgot this. We really need some automated cross-check on the keyword-list productions, I think. Per report from Brian Weaver.