- Jan 26, 2011
Peter Eisentraut authored
Older versions of GCC appear to report these with the current standard option set; newer versions need -Wformat-security.
-
- Jan 24, 2011
Peter Eisentraut authored
This way errors from fetching tuples are correctly reported as errors in the SPI call. While at it, avoid palloc(0). Jan Urbański
-
Peter Eisentraut authored
Instead of checking whether the arglist is NULL and then if its length is 0, do it in one step, and outside of the try/catch block. Jan Urbański
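The shape of that consolidation can be illustrated with a toy Python sketch (the function name and structure here are hypothetical; the actual change is in PL/Python's C source):

```python
def execute_plan_checked(plan, args):
    """Hypothetical sketch: the old code tested the argument list for
    NULL and then for zero length in two separate steps inside the
    try/catch; a single combined truth test covers both cases up front,
    before any error-handling block is entered."""
    nargs = len(args) if args else 0  # None and [] both yield 0
    return (plan, nargs)

# Both "no arguments" spellings take the same path:
print(execute_plan_checked("plan", None)[1])  # 0
print(execute_plan_checked("plan", [])[1])    # 0
```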
-
- Jan 23, 2011
Tom Lane authored
This reverts commit 740e54ca, which seems to have tickled an optimization bug in gcc 4.5.x, as reported upstream at https://bugzilla.redhat.com/show_bug.cgi?id=671899. Since this patch had no purpose beyond code beautification, it's not worth expending a lot of effort to look for another workaround.
-
Tom Lane authored
It's not clear to me what should happen to the other plpython_unicode variant expected files, but this patch gets things passing on my own machines and at least some of the buildfarm.
-
- Jan 22, 2011
Peter Eisentraut authored
Global error handling led to confusion and was hard to manage. With this change, errors from PostgreSQL are immediately reported to Python as exceptions. This requires setting a Python exception after reporting the caught PostgreSQL error as a warning, because PLy_elog destroys the Python exception state. Ideally, all places where PostgreSQL errors need to be reported back to Python should be wrapped in subtransactions, to make going back to Python from a longjmp safe. This will be handled in a separate patch. Jan Urbański
-
- Jan 21, 2011
Peter Eisentraut authored
The way the exception types were added to the module was wrong for Python 3: exception classes were not actually available from plpy. Fix that by factoring out the code that is responsible for defining new Python exceptions, and make it work with Python 3. A new regression test makes sure the plpy module has the expected contents. Jan Urbański, slightly revised by me
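The factored-out approach can be sketched in plain Python (the helper name is hypothetical; the real work happens in PL/Python's C code via the Python C API, which offers `PyErr_NewException` for the same purpose):

```python
import types

def add_exceptions(module, names, base=Exception):
    """Hypothetical sketch: create exception classes dynamically and
    attach them to a module object, in a way that behaves identically
    on Python 2 and Python 3."""
    for name in names:
        exc = type(name, (base,), {"__module__": module.__name__})
        setattr(module, name, exc)

plpy = types.ModuleType("plpy")
add_exceptions(plpy, ["Error", "Fatal", "SPIError"])

# The classes are now reachable as attributes of the module:
print(issubclass(plpy.SPIError, Exception))  # True
```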
-
- Jan 20, 2011
Peter Eisentraut authored
Hitoshi Harada
-
Peter Eisentraut authored
Hitoshi Harada
-
Peter Eisentraut authored
This makes PLy_procedure_create a bit more manageable. Jan Urbański
-
- Jan 19, 2011
Peter Eisentraut authored
Jan Urbański, reviewed by Peter Eisentraut, Álvaro Herrera, Tom Lane :-)
-
- Jan 18, 2011
Peter Eisentraut authored
Jan Urbański
-
Peter Eisentraut authored
The previous code would try to print out a null pointer. Jan Urbański
-
Peter Eisentraut authored
The latter is undocumented and the speed gain is negligible. Jan Urbański
-
Peter Eisentraut authored
Pay attention to the attisdropped field and skip over TupleDesc fields that have it set. Not a real problem until we get table returning functions, but it's the right thing to do anyway. Jan Urbański
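The rule the fix applies can be sketched with a toy tuple descriptor in Python (the dict-based TupleDesc here is a hypothetical stand-in for the C structure):

```python
def live_column_names(tupdesc):
    """Hypothetical sketch: when walking a TupleDesc, skip any
    attribute whose attisdropped flag is set, so dropped columns
    never leak into the Python-visible row."""
    return [att["name"] for att in tupdesc if not att["attisdropped"]]

tupdesc = [
    {"name": "id",      "attisdropped": False},
    {"name": "dropped", "attisdropped": True},   # left by ALTER TABLE ... DROP COLUMN
    {"name": "payload", "attisdropped": False},
]
print(live_column_names(tupdesc))  # ['id', 'payload']
```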
-
Peter Eisentraut authored
As discussed, even if the PL needs a permanent memory location, it should use palloc, not malloc. It also makes error handling easier. Jan Urbański
-
Peter Eisentraut authored
If the function using yield to return rows fails halfway, the iterator stays open and subsequent calls to the function will resume reading from it. The fix is to unref the iterator and set it to NULL if there has been an error. Jan Urbański
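The failure mode is visible in plain Python: when the error happens on the consuming side (row conversion, say) rather than inside the generator, the generator itself stays open, and a retained reference resumes it mid-stream. This sketch is a hypothetical stand-in for PL/Python's per-procedure state, not the actual C code:

```python
def rows():
    for value in ("1", "oops", "3"):
        yield value

class Proc:
    """Hypothetical stand-in for the per-procedure struct that holds
    the iterator between calls."""
    iterator = None

def call_srf(proc):
    # Each call fetches and converts one row.  Before the fix, a
    # conversion error left proc.iterator in place, so the *next*
    # call to the function silently resumed the half-read stream.
    if proc.iterator is None:
        proc.iterator = rows()
    try:
        return int(next(proc.iterator))
    except ValueError:
        proc.iterator = None  # the fix: unref the iterator on error
        raise

proc = Proc()
print(call_srf(proc))  # 1
try:
    call_srf(proc)     # "oops" fails to convert
except ValueError:
    pass
print(call_srf(proc))  # 1 again: iteration restarts instead of resuming
```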
-
- Jan 17, 2011
Peter Eisentraut authored
Two separate hash tables are used for regular procedures and for trigger procedures, since the way trigger procedures work is quite different from normal stored procedures. Change the signatures of PLy_procedure_{get,create} to accept the function OID and a Boolean flag indicating whether it's a trigger. This should make implementing a PL/Python validator easier. Using HTABs instead of Python dictionaries makes error recovery easier, and allows for procedures to be cached based on their OIDs, not their names. It also allows getting rid of the PyCObject field that used to hold a pointer to PLyProcedure, since PyCObjects are deprecated in Python 2.7 and replaced by Capsules in Python 3. Jan Urbański
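The new lookup contract can be sketched in Python; the two C HTABs are collapsed here into a single dict for brevity, and all names are hypothetical:

```python
proc_cache = {}  # hypothetical analogue of the OID-keyed HTABs

def procedure_get(fn_oid, is_trigger, create):
    """Sketch: procedures are cached by function OID plus a trigger
    flag, not by name, so a regular call and a trigger call of the
    same function never share (or clobber) an entry."""
    key = (fn_oid, is_trigger)
    if key not in proc_cache:
        proc_cache[key] = create(fn_oid, is_trigger)
    return proc_cache[key]

builds = []
def build(fn_oid, is_trigger):
    builds.append((fn_oid, is_trigger))
    return {"oid": fn_oid, "trigger": is_trigger}

procedure_get(16384, False, build)
procedure_get(16384, False, build)  # cache hit: no rebuild
procedure_get(16384, True, build)   # trigger variant: separate entry
print(len(builds))  # 2
```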
-
Alvaro Herrera authored
Per bug #5835 by Julien Demoor. Author: Alex Hunsaker
-
- Nov 23, 2010
Peter Eisentraut authored
-
- Nov 15, 2010
Tom Lane authored
We must stay in the function's SPI context until done calling the iterator that returns the set result. Otherwise, any attempt to invoke SPI features in the python code called by the iterator will malfunction. Diagnosis and patch by Jan Urbański, per bug report from Jean-Baptiste Quenot. Back-patch to 8.2; there was no support for SRFs in previous versions of plpython.
-
- Oct 12, 2010
Tom Lane authored
This was broken in 9.0 while improving plpython's conversion behavior for bytea and boolean. Per bug report from maizi.
-
- Oct 10, 2010
Tom Lane authored
This patch adds the SQL-standard concept of an INSTEAD OF trigger, which is fired instead of performing a physical insert/update/delete. The trigger function is passed the entire old and/or new rows of the view, and must figure out what to do to the underlying tables to implement the update. So this feature can be used to implement updatable views using trigger programming style rather than rule hacking. In passing, this patch corrects the names of some columns in the information_schema.triggers view. It seems the SQL committee renamed them somewhere between SQL:99 and SQL:2003. Dean Rasheed, reviewed by Bernd Helmle; some additional hacking by me.
-
- Oct 08, 2010
Tom Lane authored
Various places were testing TRIGGER_FIRED_BEFORE() where what they really meant was !TRIGGER_FIRED_AFTER(), or vice versa. This needs to be cleaned up because there are about to be more than two possible states. We might want to note this in the 9.1 release notes as something for trigger authors to double-check. For consistency's sake I also changed some places that assumed that TRIGGER_FIRED_FOR_ROW and TRIGGER_FIRED_FOR_STATEMENT are necessarily mutually exclusive; that's not in immediate danger of breaking, but it's still sloppier than it should be. Extracted from Dean Rasheed's patch for triggers on views. I'm committing this separately since it's an identifiable separate issue, and is the only reason for the patch to touch most of these particular files.
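Why the distinction matters once a third timing state exists can be shown with a toy three-state model in Python (state names mirror the commit's description; the helpers are hypothetical, not the C macros):

```python
# Hypothetical three-state timing model: with INSTEAD OF triggers
# added, "not AFTER" no longer implies "BEFORE".
BEFORE, AFTER, INSTEAD = "BEFORE", "AFTER", "INSTEAD"

def fired_before(timing):
    return timing == BEFORE

def not_fired_after(timing):
    return timing != AFTER

# Identical for the old two states...
print(fired_before(BEFORE), not_fired_after(BEFORE))    # True True
print(fired_before(AFTER), not_fired_after(AFTER))      # False False
# ...but they disagree for the new one:
print(fired_before(INSTEAD), not_fired_after(INSTEAD))  # False True
```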
-
- Sep 22, 2010
Tom Lane authored
Also do some further work in the back branches, where quite a bit wasn't covered by Magnus' original back-patch.
-
- Sep 20, 2010
Magnus Hagander authored
-
- Aug 25, 2010
Peter Eisentraut authored
This is reproducibly possible in Python 2.7 if the user has turned PendingDeprecationWarning into an error, but it's theoretically also possible in earlier versions in case of exceptional conditions. Backpatched to 8.0.
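The triggering condition is easy to reproduce in plain Python; the init function below is a hypothetical stand-in for the module setup path, showing that once the warning filter escalates the warning, setup raises instead of returning normally and the caller must check for failure:

```python
import warnings

# The user-side configuration that exposed the bug:
warnings.simplefilter("error", PendingDeprecationWarning)

def module_init():
    """Hypothetical sketch: module setup that emits this warning now
    raises it as an exception under the filter above."""
    warnings.warn("initializing deprecated API", PendingDeprecationWarning)

try:
    module_init()
    failed = False
except PendingDeprecationWarning:
    failed = True
print(failed)  # True
```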
-
- Aug 19, 2010
Peter Eisentraut authored
at end of files.
-
- Jul 08, 2010
Tom Lane authored
(_PG_init should be called only once anyway, but as long as it's got an internal guard against repeat calls, that should be in front of the version check.)
-
Peter Eisentraut authored
-
- Jul 06, 2010
Bruce Momjian authored
-
- Jun 29, 2010
Peter Eisentraut authored
pg_pltemplate. This should have a catversion bump, but it's still being debated whether it's worth it during beta.
-
- Jun 12, 2010
Peter Eisentraut authored
-
Peter Eisentraut authored
-
- Jun 10, 2010
Tom Lane authored
conversion. Per bug #5497 from David Gardner.
-
- May 13, 2010
Peter Eisentraut authored
-
- May 01, 2010
Tom Lane authored
Per report from Andres Freund.
-
- Apr 30, 2010
Tom Lane authored
memory if the result had zero rows, and also if there was any sort of error while converting the result tuples into Python data. Reported and partially fixed by Andres Freund. Back-patch to all supported versions. Note: I haven't tested the 7.4 fix. 7.4's configure check for python is so obsolete it doesn't work on my current machines :-(. The logic change is pretty straightforward though.
-
- Mar 18, 2010
Peter Eisentraut authored
with a few strategically placed pg_verifymbstr calls.
-
Peter Eisentraut authored
In PLy_spi_execute_plan, use the data-type specific Python-to-PostgreSQL conversion function instead of passing everything through InputFunctionCall as a string. The equivalent fix was already done months ago for function parameters and return values, but this other gateway between Python and PostgreSQL was apparently forgotten. As a result, data types that need special treatment, such as bytea, would misbehave when used with plpy.execute.
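The difference between the two gateways can be sketched in Python; the converter table and both helpers are hypothetical illustrations of the idea, not PL/Python's actual API:

```python
def to_pg_via_string(value):
    """Old behaviour (sketch): stringify everything and hand it to the
    type's input function; binary data such as bytea gets mangled."""
    return str(value)

def to_pg_typed(value, converters):
    """Fixed behaviour (sketch): prefer a data-type specific converter,
    falling back to the string path only when none exists."""
    conv = converters.get(type(value))
    return conv(value) if conv else to_pg_via_string(value)

converters = {
    bool:  lambda b: "t" if b else "f",          # bool needs t/f, not "True"
    bytes: lambda b: "\\x" + b.hex(),            # bytea hex input format
}
print(to_pg_typed(True, converters))         # t
print(to_pg_typed(b"\x00\x01", converters))  # \x0001
print(to_pg_via_string(b"\x00\x01"))         # b'\x00\x01' -- mangled
```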
-