<!--
$Header: /cvsroot/pgsql/doc/src/sgml/arch-dev.sgml,v 2.16 2001/11/21 05:53:40 thomas Exp $
-->
<chapter id="overview">
<title>Overview of PostgreSQL Internals</title>
<note>
<title>Author</title>
<para>
This chapter originally appeared as part of a Master's Thesis prepared at the Vienna University of Technology under the direction
of O.Univ.Prof.Dr. Georg Gottlob and Univ.Ass. Mag. Katrin Seyr.
</para>
</note>
<para>
This chapter gives an overview of the internal structure of the
backend of <productname>PostgreSQL</productname>.
After having read the following sections you
should have an idea of how a query is processed. Do not expect a
detailed description here (such a description, dealing with
all data structures and functions used within <productname>PostgreSQL</productname>,
would easily exceed 1000
pages!). This chapter is intended to help the reader understand the general
control and data flow within the backend from receiving a query to
sending the results.
</para>
<sect1 id="query-path">
<title>The Path of a Query</title>
<para>
Here we give a short overview of the stages a query has to pass in
order to obtain a result.
</para>
<procedure>
<step>
<para>
A connection from an application program to the <productname>PostgreSQL</productname>
server has to be established. The application program transmits a
query to the server and receives the results sent back by the server.
</para>
</step>
<step>
<para>
The <firstterm>parser stage</firstterm> checks the query
transmitted by the application
program (client) for correct syntax and creates
a <firstterm>query tree</firstterm>.
</para>
</step>
<step>
<para>
The <firstterm>rewrite system</firstterm> takes
the query tree created by the parser stage, looks for
any <firstterm>rules</firstterm> (stored in the
<firstterm>system catalogs</firstterm>) to apply to
the query tree, and performs the
transformations given in the <firstterm>rule bodies</firstterm>.
One application of the rewrite system is given in the realization of
<firstterm>views</firstterm>.
</para>
<para>
Whenever a query against a view
(i.e. a <firstterm>virtual table</firstterm>) is made,
the rewrite system rewrites the user's query to
a query that accesses the <firstterm>base tables</firstterm> given in
the <firstterm>view definition</firstterm> instead.
</para>
</step>
<step>
<para>
The <firstterm>planner/optimizer</firstterm> takes
the (rewritten) query tree and creates a
<firstterm>query plan</firstterm> that will be the input to the
<firstterm>executor</firstterm>.
</para>
<para>
It does so by first creating all possible <firstterm>paths</firstterm>
leading to the same result. For example, if there is an index on a
relation to be scanned, there are two paths for the
scan. One possibility is a simple sequential scan and the other
possibility is to use the index. Next the cost for the execution of
each plan is estimated and the
cheapest plan is chosen and handed back.
</para>
</step>
<step>
<para>
The executor recursively steps through
the <firstterm>plan tree</firstterm> and
retrieves tuples in the way represented by the plan.
The executor makes use of the
<firstterm>storage system</firstterm> while scanning
relations, performs <firstterm>sorts</firstterm> and <firstterm>joins</firstterm>,
evaluates <firstterm>qualifications</firstterm> and finally hands back the derived tuples.
</para>
</step>
</procedure>
<para>
In the following sections we will cover each of the above listed items
in more detail to give a better understanding of <productname>PostgreSQL</productname>'s internal
control and data structures.
</para>
</sect1>
<sect1 id="connect-estab">
<title>How Connections are Established</title>
<para>
<productname>PostgreSQL</productname> is implemented using a simple "process per user"
client/server model. In this model there is one <firstterm>client process</firstterm>
connected to exactly one <firstterm>server process</firstterm>.
As we do not know ahead of time
how many connections will be made, we have to use a <firstterm>master process</firstterm>
that spawns a new server process every time a connection is
requested. This master process is called <literal>postmaster</literal> and
listens at a specified TCP/IP port for incoming connections. Whenever
a request for a connection is detected the <literal>postmaster</literal> process
spawns a new server process called <literal>postgres</literal>. The server
tasks (<literal>postgres</literal> processes) communicate with each other using
<firstterm>semaphores</firstterm> and <firstterm>shared memory</firstterm>
to ensure data integrity
throughout concurrent data access. Figure
\ref{connection} illustrates the interaction of the master process
<literal>postmaster</literal>, the server process <literal>postgres</literal>, and a client
application.
</para>
<para>
The client process can either be the <application>psql</application> frontend (for
interactive SQL queries) or any user application implemented using
the <filename>libpq</filename> library. Note that applications implemented using
<application>ecpg</application>
(the <productname>PostgreSQL</productname> embedded SQL preprocessor for C)
also use this library.
</para>
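<para>
As a minimal illustration of such a client, the following sketch uses a
few basic <filename>libpq</filename> calls. It is not taken from the
<productname>PostgreSQL</productname> sources; the connection string and
the query are placeholders, and error handling is reduced to the bare
minimum:
<programlisting>
#include &lt;stdio.h&gt;
#include &lt;libpq-fe.h&gt;

int
main(void)
{
    /* connect to the postmaster, which spawns a postgres server process */
    PGconn *conn = PQconnectdb("dbname=test");

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* the query is sent as plain text; all parsing happens in the server */
    PGresult *res = PQexec(conn, "select s.sname from supplier s");

    if (PQresultStatus(res) == PGRES_TUPLES_OK)
    {
        int i;

        for (i = 0; i &lt; PQntuples(res); i++)
            printf("%s\n", PQgetvalue(res, i, 0));
    }

    PQclear(res);
    PQfinish(conn);
    return 0;
}
</programlisting>
</para>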
<para>
Once a connection is established the client process can send a query
to the <firstterm>backend</firstterm> (server). The query is transmitted using plain text,
i.e. there is no parsing done in the <firstterm>frontend</firstterm> (client). The
server parses the query, creates an <firstterm>execution plan</firstterm>,
executes the plan and returns the retrieved tuples to the client
by transmitting them over the established connection.
</para>
<!--
\begin{figure}[ht]
\begin{center}
\epsfig{figure=figures/connection.ps}
\caption{How a connection is established}
\label{connection}
\end{center}
\end{figure}
-->
</sect1>
<sect1 id="parser-stage">
<title>The Parser Stage</title>
<para>
The <firstterm>parser stage</firstterm> consists of two parts:
<itemizedlist>
<listitem>
<para>
The <firstterm>parser</firstterm> defined in
<filename>gram.y</filename> and <filename>scan.l</filename> is
built using the Unix tools <application>yacc</application>
and <application>lex</application>.
</para>
</listitem>
<listitem>
<para>
The <firstterm>transformation process</firstterm> modifies and
augments the data structures returned by the parser.
</para>
</listitem>
</itemizedlist>
</para>
<sect2>
<title>Parser</title>
<para>
The parser has to check the query string (which arrives as
plain ASCII text) for valid syntax. If the syntax is correct a
<firstterm>parse tree</firstterm> is built up and handed back;
otherwise an error is returned. For the implementation the well-known Unix
tools <application>lex</application> and <application>yacc</application>
are used.
</para>
<para>
The <firstterm>lexer</firstterm> is defined in the file
<filename>scan.l</filename> and is responsible
for recognizing <firstterm>identifiers</firstterm>,
the <firstterm>SQL keywords</firstterm> etc. For
every keyword or identifier that is found, a <firstterm>token</firstterm>
is generated and handed to the parser.
</para>
<para>
The parser is defined in the file <filename>gram.y</filename> and consists of a
set of <firstterm>grammar rules</firstterm> and <firstterm>actions</firstterm>
that are executed
whenever a rule is fired. The code of the actions (which
is actually C code) is used to build up the parse tree.
</para>
<para>
The file <filename>scan.l</filename> is transformed to
the C source file <filename>scan.c</filename>
using the program <application>lex</application>
and <filename>gram.y</filename> is transformed to
<filename>gram.c</filename> using <application>yacc</application>.
After these transformations have taken
place a normal C compiler can be used to create the
parser. Never make any changes to the generated C files as they will
be overwritten the next time <application>lex</application>
or <application>yacc</application> is called.
<note>
<para>
The mentioned transformations and compilations are normally done
automatically using the <firstterm>makefiles</firstterm>
shipped with the <productname>PostgreSQL</productname>
source distribution.
</para>
</note>
</para>
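<para>
Conceptually, the build steps correspond to commands like the following.
(This is only a sketch: the actual makefiles use
<application>flex</application> and <application>bison</application>
with additional options, and rename the generated files.)
<programlisting>
lex scan.l                # generates the scanner (lex.yy.c)
yacc -d gram.y            # generates the parser (y.tab.c, y.tab.h)
cc -c lex.yy.c y.tab.c    # compile with a normal C compiler
</programlisting>
</para>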
<para>
A detailed description of <application>yacc</application> or
the grammar rules given in <filename>gram.y</filename> would be
beyond the scope of this chapter. There are many books and
documents dealing with <application>lex</application> and
<application>yacc</application>. You should be familiar with
<application>yacc</application> before you start to study the
grammar given in <filename>gram.y</filename>; otherwise you will not
understand what happens there.
</para>
<para>
For a better understanding of the data structures used in
<productname>PostgreSQL</productname>
for the processing of a query we use an example to illustrate the
changes made to these data structures in every stage.
This example contains the following simple query that will be used in
various descriptions and figures throughout the following
sections. The query assumes that the tables given in
<citetitle>The Supplier Database</citetitle>
<!--
XXX The above citetitle should really be an xref,
but that part has not yet been converted from Stefan's original document.
- thomas 1999-02-11
-->
have already been defined.
<example id="simple-select">
<title>A Simple Select</title>
<programlisting>
select s.sname, se.pno
from supplier s, sells se
where s.sno > 2 and s.sno = se.sno;
</programlisting>
<para>
Figure \ref{parsetree} shows the <firstterm>parse tree</firstterm> built by the
grammar rules and actions given in <filename>gram.y</filename> for the query
given in <xref linkend="simple-select">
(without the <firstterm>operator tree</firstterm> for
the <firstterm>where clause</firstterm> which is shown in figure \ref{where_clause}
because there was not enough space to show both data structures in one
figure).
</para>
<para>
The top node of the tree is a <literal>SelectStmt</literal> node. For every entry
appearing in the <firstterm>from clause</firstterm> of the SQL query a <literal>RangeVar</literal>
node is created holding the name of the <firstterm>alias</firstterm> and a pointer to a
<literal>RelExpr</literal> node holding the name of the <firstterm>relation</firstterm>. All
<literal>RangeVar</literal> nodes are collected in a list which is attached to the field
<literal>fromClause</literal> of the <literal>SelectStmt</literal> node.
</para>
<para>
For every entry appearing in the <firstterm>select list</firstterm> of the SQL query a
<literal>ResTarget</literal> node is created holding a pointer to an <literal>Attr</literal>
node. The <literal>Attr</literal> node holds the <firstterm>relation name</firstterm> of the entry and
a pointer to a <literal>Value</literal> node holding the name of the
<firstterm>attribute</firstterm>.
All <literal>ResTarget</literal> nodes are collected to a list which is
connected to the field <literal>targetList</literal> of the <literal>SelectStmt</literal> node.
</para>
<para>
Figure \ref{where_clause} shows the operator tree built for the
where clause of the SQL query given in
<xref linkend="simple-select">
which is attached to the field
<literal>qual</literal> of the <literal>SelectStmt</literal> node. The top node of the
operator tree is an <literal>A_Expr</literal> node representing an <literal>AND</literal>
operation. This node has two successors called <literal>lexpr</literal> and
<literal>rexpr</literal> pointing to two <firstterm>subtrees</firstterm>. The subtree attached to
<literal>lexpr</literal> represents the qualification <literal>s.sno > 2</literal> and the one
attached to <literal>rexpr</literal> represents <literal>s.sno = se.sno</literal>. For every
attribute an <literal>Attr</literal> node is created holding the name of the
relation and a pointer to a <literal>Value</literal> node holding the name of the
attribute. For the constant term appearing in the query a
<literal>Const</literal> node is created holding the value.
</para>
<!--
XXX merge in the figures later... - thomas 1999-01-29
\begin{figure}[ht]
\begin{center}
\epsfig{figure=figures/parsetree.ps}
\caption{{\it TargetList} and {\it FromList} for query of example \ref{simple_select}}
\label{parsetree}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\epsfig{figure=figures/where_clause.ps}
\caption{{\it WhereClause} for query of example \ref{simple_select}}
\label{where_clause}
\end{center}
\end{figure}
-->
</sect2>
<sect2>
<title>Transformation Process</title>
<para>
The <firstterm>transformation process</firstterm> takes the tree handed back by
the parser as input and steps recursively through it. If
a <literal>SelectStmt</literal> node is found, it is transformed
to a <literal>Query</literal>
node that will be the topmost node of the new data structure. Figure
\ref{transformed} shows the transformed data structure (the part
for the transformed <firstterm>where clause</firstterm> is given in figure
\ref{transformed_where} because there was not enough space to show all
parts in one figure).
</para>
<para>
Now a check is made whether the <firstterm>relation names</firstterm> in the
<firstterm>FROM clause</firstterm> are known to the system. For every relation name
that is present in the <firstterm>system catalogs</firstterm> an <abbrev>RTE</abbrev>
(<firstterm>range table entry</firstterm>) node is
created containing the relation name, the <firstterm>alias name</firstterm> and
the <firstterm>relation id</firstterm>. From now on the relation ids are used to
refer to the <firstterm>relations</firstterm> given in the query. All <abbrev>RTE</abbrev> nodes
are collected in the <firstterm>range table entry list</firstterm> that is connected
to the field <literal>rtable</literal> of the <literal>Query</literal> node. If a name of a
relation that is not known to the system is detected in the query an
error will be returned and the query processing will be aborted.
</para>
<para>
Next it is checked whether the <firstterm>attribute names</firstterm> used are
contained in the relations given in the query. For every
attribute that is found a <abbrev>TLE</abbrev> node is created holding a pointer
to a <literal>Resdom</literal> node (which holds the name of the column) and a
pointer to a <literal>VAR</literal> node. There are two important numbers in the
<literal>VAR</literal> node. The field <literal>varno</literal> gives the position of the
relation containing the current attribute in the range
table entry list created above. The field <literal>varattno</literal> gives the
position of the attribute within the relation. If the name
of an attribute cannot be found an error will be returned and
the query processing will be aborted.
</para>
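<para>
For the query in <xref linkend="simple-select"> the transformed data
structure can be sketched as follows. (The <literal>varattno</literal>
values shown here are made up for illustration; the real numbers depend
on the column positions in the definitions of
<literal>supplier</literal> and <literal>sells</literal>.)
<programlisting>
Query
   rtable:      1. supplier  (alias s)
                2. sells     (alias se)
   targetList:  TLE (Resdom "sname") -> VAR (varno 1, varattno 2)
                TLE (Resdom "pno")   -> VAR (varno 2, varattno 2)
</programlisting>
</para>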
<!--
\begin{figure}[ht]
\begin{center}
\epsfig{figure=figures/transform.ps}
\caption{Transformed {\it TargetList} and {\it FromList} for query of example \ref{simple_select}}
\label{transformed}
\end{center}
\end{figure}
\noindent Figure \ref{transformed_where} shows the transformed {\it where
clause}. Every {\tt A\_Expr} node is transformed to an {\tt Expr}
node. The {\tt Attr} nodes representing the attributes are replaced by
{\tt VAR} nodes as it has been done for the {\it targetlist}
above. Checks if the appearing {\it attributes} are valid and known to the
system are made. If there is an {\it attribute} used which
is not known an error will be returned and the {\it query
processing} will be aborted. \\
\\
The whole {\it transformation process} performs a transformation of
the data structure handed back by the {\it parser} to a more
comfortable form. The character strings representing the {\it
relations} and {\it attributes} in the original tree are replaced by
{\it relation ids} and {\tt VAR} nodes whose fields are referring to
the entries of the {\it range table entry list}. In addition to the
transformation, also various checks if the used {\it relation}
and {\it attribute} names (appearing in the query) are valid in the
current context are performed.
\begin{figure}[ht]
\begin{center}
\epsfig{figure=figures/transform_where.ps}
\caption{Transformed {\it where clause} for query of example \ref{simple_select}}
\label{transformed_where}
\end{center}
\end{figure}
\pagebreak
\clearpage
\begin{figure}[ht]
\begin{center}
\epsfig{figure=figures/plan.ps}
\caption{{\it Plan} for query of example \ref{simple_select}}
\label{plan}
\end{center}
\end{figure}
-->
</sect1>
<sect1 id="rule-system">
<title>The <productname>PostgreSQL</productname> Rule System</title>
<para>
<productname>PostgreSQL</productname> supports a powerful
<firstterm>rule system</firstterm> for the specification
of <firstterm>views</firstterm> and ambiguous <firstterm>view updates</firstterm>.
Originally the <productname>PostgreSQL</productname>
rule system consisted of two implementations:
<itemizedlist>
<listitem>
<para>
The first one worked using <firstterm>tuple level</firstterm> processing and was
implemented deep in the <firstterm>executor</firstterm>. The rule system was
called whenever an individual tuple had been accessed. This
implementation was removed in 1995 when the last official release
of the <productname>Postgres</productname> project was transformed into
<productname>Postgres95</productname>.
</para>
</listitem>
<listitem>
<para>
The second implementation of the rule system is a technique
called <firstterm>query rewriting</firstterm>.
The <firstterm>rewrite system</firstterm> is a module
that exists between the <firstterm>parser stage</firstterm> and the
<firstterm>planner/optimizer</firstterm>. This technique is still implemented.
</para>
</listitem>
</itemizedlist>
</para>
<para>
For information on the syntax and creation of rules in the
<productname>PostgreSQL</productname> system refer to
<citetitle>The PostgreSQL User's Guide</citetitle>.
</para>
<sect2>
<title>The Rewrite System</title>
<para>
The <firstterm>query rewrite system</firstterm> is a module between
the parser stage and the planner/optimizer. It processes the tree handed
back by the parser stage (which represents a user query) and if
there is a rule present that has to be applied to the query it
rewrites the tree to an alternate form.
</para>
<sect3 id="view-impl">
<title>Techniques To Implement Views</title>
<para>
Now we will sketch the algorithm of the query rewrite system. For
better illustration we show how to implement views using rules
as an example.
</para>
<para>
Let the following rule be given:
<programlisting>
create rule view_rule
as on select
to test_view
do instead
select s.sname, p.pname
from supplier s, sells se, part p
where s.sno = se.sno and
p.pno = se.pno;
</programlisting>
</para>
<para>
The given rule will be <firstterm>fired</firstterm> whenever a select
against the relation <literal>test_view</literal> is detected. Instead of
selecting the tuples from <literal>test_view</literal> the select statement
given in the <firstterm>action part</firstterm> of the rule is executed.
</para>
<para>
Let the following user-query against <literal>test_view</literal> be given:
<programlisting>
select sname
from test_view
where sname &lt;&gt; 'Smith';
</programlisting>
</para>
<para>
Here is a list of the steps performed by the query rewrite
system whenever a user-query against <literal>test_view</literal> appears. (The
following listing is a very informal description of the algorithm,
intended only for basic understanding.)
</para>
<procedure>
<title><literal>test_view</literal> Rewrite</title>
<step>
<para>
Take the query given in the action part of the rule.
</para>
</step>
<step>
<para>
Adapt the targetlist to meet the number and order of
attributes given in the user-query.
</para>
</step>
<step>
<para>
Add the qualification given in the where clause of the
user-query to the qualification of the query given in the
action part of the rule.
</para>
</step>
</procedure>
<para>
Given the rule definition above, the user-query will be
rewritten to the following form (Note that the rewriting is done on
the internal representation of the user-query handed back by the
parser stage but the derived new data structure will represent the following
query):
<programlisting>
select s.sname
from supplier s, sells se, part p
where s.sno = se.sno and
p.pno = se.pno and
s.sname &lt;&gt; 'Smith';
</programlisting>
</para>
</sect2>
</sect1>
<sect1 id="planner-optimizer">
<title>Planner/Optimizer</title>
<para>
The task of the <firstterm>planner/optimizer</firstterm> is to create an optimal
execution plan. It first combines all possible ways of
<firstterm>scanning</firstterm> and <firstterm>joining</firstterm>
the relations that appear in a
query. All the created paths lead to the same result and it's the
task of the optimizer to estimate the cost of executing each path and
find out which one is the cheapest.
</para>
<sect2>
<title>Generating Possible Plans</title>
<para>
The planner/optimizer decides which plans should be generated
based upon the types of indexes defined on the relations appearing in
a query. There is always the possibility of performing a
sequential scan on a relation, so a plan using only
sequential scans is always created. Assume an index is defined on a
relation (for example a B-tree index) and a query contains the
restriction
<literal>relation.attribute OPR constant</literal>. If
<literal>relation.attribute</literal> happens to match the key of the B-tree
index and <literal>OPR</literal> is anything but '&lt;&gt;' another plan is created using
the B-tree index to scan the relation. If there are further indexes
present and the restrictions in the query happen to match a key of an
index, further plans will be considered.
</para>
<para>
After all feasible plans have been found for scanning single
relations, plans for joining relations are created. The
planner/optimizer considers only joins between every two relations for
which there exists a corresponding join clause (i.e. for which a
restriction like <literal>where rel1.attr1=rel2.attr2</literal> exists) in the
where qualification. All possible plans are generated for every
join pair considered by the planner/optimizer. The three possible join
strategies are:
<itemizedlist>
<listitem>
<para>
<firstterm>nested iteration join</firstterm>: The right relation is scanned
once for every tuple found in the left relation. This strategy
is easy to implement but can be very time consuming.
</para>
</listitem>
<listitem>
<para>
<firstterm>merge sort join</firstterm>: Each relation is sorted on the join
attributes before the join starts. Then the two relations are
merged together taking into account that both relations are
ordered on the join attributes. This kind of join is more
attractive because every relation has to be scanned only once.
</para>
</listitem>
<listitem>
<para>
<firstterm>hash join</firstterm>: The right relation is first hashed on its
join attributes. Next the left relation is scanned and the
appropriate values of every tuple found are used as hash keys to
locate the tuples in the right relation.
</para>
</listitem>
</itemizedlist>
</para>
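<para>
The plan finally chosen for a query can be inspected with the
<literal>EXPLAIN</literal> statement. For the query in
<xref linkend="simple-select"> the output would show a merge join over
two sorted sequential scans; the exact plan shape and its cost
estimates depend on the table contents, the available indexes, and the
server version:
<programlisting>
explain select s.sname, se.pno
    from supplier s, sells se
    where s.sno > 2 and s.sno = se.sno;
</programlisting>
</para>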
</sect2>
<sect2>
<title>Data Structure of the Plan</title>
<para>
Here we will give a brief description of the nodes appearing in the
plan. Figure \ref{plan} shows the plan produced for the query in
example \ref{simple_select}.
</para>
<para>
The top node of the plan is a <literal>MergeJoin</literal> node that has two
successors, one attached to the field <literal>lefttree</literal> and the second
attached to the field <literal>righttree</literal>. Each of the subnodes represents
one relation of the join. As mentioned above a merge sort
join requires each relation to be sorted. That's why we find
a <literal>Sort</literal> node in each subplan. The additional qualification given
in the query (<literal>s.sno > 2</literal>) is pushed down as far as possible and is
attached to the <literal>qpqual</literal> field of the leaf <literal>SeqScan</literal> node of
the corresponding subplan.
</para>
<para>
The list attached to the field <literal>mergeclauses</literal> of the
<literal>MergeJoin</literal> node contains information about the join attributes.
The values <literal>65000</literal> and <literal>65001</literal>
for the <literal>varno</literal> fields in the
<literal>VAR</literal> nodes appearing in the <literal>mergeclauses</literal> list (and also in the
<literal>targetlist</literal>) mean that the tuples of the current node are not to be
considered; instead, the tuples of the next "deeper" nodes (i.e. the top
nodes of the subplans) are to be used.
</para>
<para>
Note that every <literal>Sort</literal> and <literal>SeqScan</literal> node appearing in figure
\ref{plan} has a <literal>targetlist</literal>, but because there was not enough space
only the one for the <literal>MergeJoin</literal> node could be drawn.
</para>
<para>
Another task performed by the planner/optimizer is fixing the
<firstterm>operator ids</firstterm> in the <literal>Expr</literal>
and <literal>Oper</literal> nodes. As
mentioned earlier, <productname>PostgreSQL</productname> supports a variety of different data
types and even user defined types can be used. To be able to maintain
the huge number of functions and operators it is necessary to store
them in a system table. Each function and operator gets a unique
operator id. According to the types of the attributes used
within the qualifications etc., the appropriate operator ids
have to be used.
</para>
</sect2>
</sect1>
<sect1 id="executor">
<title>Executor</title>
<para>
The <firstterm>executor</firstterm> takes the plan handed back by the
planner/optimizer and starts processing the top node. In the case of
our example (the query given in example \ref{simple_select}) the top
node is a <literal>MergeJoin</literal> node.
</para>
<para>
Before any merge can be done two tuples have to be fetched (one from
each subplan). So the executor recursively calls itself to
process the subplans (it starts with the subplan attached to
<literal>lefttree</literal>). The new top node (the top node of the left subplan) is a
<literal>SeqScan</literal> node and again a tuple has to be fetched before the node
itself can be processed. The executor calls itself recursively
another time for the subplan attached to <literal>lefttree</literal> of the
<literal>SeqScan</literal> node.
</para>
<para>
Now the new top node is a <literal>Sort</literal> node. As a sort has to be done on
the whole relation, the executor starts fetching tuples
from the <literal>Sort</literal> node's subplan and sorts them into a temporary
relation (in memory or a file) when the <literal>Sort</literal> node is visited for
the first time. (On subsequent visits the <literal>Sort</literal> node
simply returns the next tuple from the already sorted temporary
relation.)
</para>
<para>
Every time the processing of the <literal>Sort</literal> node needs a new tuple the
executor is recursively called for the <literal>SeqScan</literal> node
attached as subplan. The relation (internally referenced by the
value given in the <literal>scanrelid</literal> field) is scanned for the next
tuple. If the tuple satisfies the qualification given by the tree
attached to <literal>qpqual</literal> it is handed back, otherwise the next tuple
is fetched until the qualification is satisfied. If the last tuple of
the relation has been processed a <literal>NULL</literal> pointer is
returned.
</para>
<para>
After a tuple has been handed back by the <literal>lefttree</literal> of the
<literal>MergeJoin</literal> the <literal>righttree</literal> is processed in the same way. If both
tuples are present the executor processes the <literal>MergeJoin</literal>
node. Whenever a new tuple from one of the subplans is needed a
recursive call to the executor is performed to obtain it. If a
joined tuple could be created it is handed back and one complete
processing of the plan tree has finished.
</para>
<para>
Now the described steps are performed once for every tuple, until a
<literal>NULL</literal> pointer is returned for the processing of the
<literal>MergeJoin</literal> node, indicating that we are finished.
</para>
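<para>
The recursive demand-pull protocol described above can be sketched in a
few lines of C. This is not <productname>PostgreSQL</productname>'s
actual executor code; the node structure, the qualification, and the
integer "tuples" are invented for illustration only:
<programlisting>
#include &lt;stdio.h&gt;

/* A plan node hands back one tuple (here: one int) per call,
 * or -1 when it is exhausted -- like the NULL pointer above. */
typedef struct Node Node;
struct Node
{
    int   (*next) (Node *);     /* recursive entry point */
    Node   *lefttree;           /* subplan, if any */
    int    *data;               /* for scan nodes: the "relation" */
    int     pos;                /* current scan position */
};

static int
scan_next(Node *n)              /* a SeqScan-like leaf node */
{
    return (n->data[n->pos] &gt;= 0) ? n->data[n->pos++] : -1;
}

static int
qual_next(Node *n)              /* evaluates a qualification (value &gt; 2) */
{
    int     t;

    while ((t = n->lefttree->next(n->lefttree)) &gt;= 0)
        if (t &gt; 2)
            return t;           /* tuple satisfies the qualification */
    return -1;                  /* subplan exhausted */
}

int
main(void)
{
    int     rel[] = {1, 3, 4, 2, 5, -1};
    Node    scan = {scan_next, NULL, rel, 0};
    Node    qual = {qual_next, &amp;scan, NULL, 0};
    int     t;

    /* the caller repeatedly asks the top node for the next tuple */
    while ((t = qual.next(&amp;qual)) &gt;= 0)
        printf("%d\n", t);
    return 0;
}
</programlisting>
Each call to the top node's <literal>next</literal> function recursively
pulls tuples from its subplan, just as the executor does with the
<literal>MergeJoin</literal>, <literal>Sort</literal>, and
<literal>SeqScan</literal> nodes of our example.
</para>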
</sect1>
<!--
**********************************************************
**********************************************************
**********************************************************
**********************************************************
**********************************************************
\pagebreak
\clearpage
\section{The Realization of the Having Clause}
\label{having_impl}
The {\it having clause} has been designed in SQL to be able to use the
results of {\it aggregate functions} within a query qualification. The
handling of the {\it having clause} is very similar to the handling of
the {\it where clause}. Both are formulas in first order logic (FOL)
that have to evaluate to true for a certain object to be handed back:
%
\begin{itemize}
<step> The formula given in the {\it where clause} is evaluated for
every tuple. If the evaluation returns {\tt true} the tuple is
returned, every tuple not satisfying the qualification is ignored.
%
<step> In the case of {\it groups} the {\it having clause} is evaluated
for every group. If the evaluation returns {\tt true} the group is
taken into account otherwise it is ignored.
\end{itemize}
%
\subsection{How Aggregate Functions are Implemented}
\label{aggregates}
Before we can describe how the {\it having clause} is implemented we
will have a look at the implementation of {\it aggregate functions} as
long as they just appear in the {\it targetlist}. Note that {\it aggregate
functions} are applied to groups so the query must contain a {\it
group clause}.
%
\begin{example}
\label{having}
Here is an example of the usage of the {\it aggregate
function} {\tt count} which counts the number of part numbers ({\tt
pno}) of every group. (The table {\tt sells} is defined in example
\ref{supplier}.)
%
\begin{verbatim}
select sno, count(pno)
from sells
group by sno;
\end{verbatim}
%
\end{example}
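What this query computes can be mimicked directly (an illustrative
Python sketch over made-up {\tt sells} rows, unrelated to how the
backend actually executes the query):

```python
# Illustrative sketch: the result of
#   SELECT sno, COUNT(pno) FROM sells GROUP BY sno;
# computed over some hypothetical (sno, pno) rows of the sells table.
from collections import Counter

sells = [(1, 1), (1, 2), (2, 4), (3, 1), (3, 3)]
counts = Counter(sno for sno, pno in sells)
result = sorted(counts.items())  # one (sno, count) row per group
```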
%
A query like the one in example \ref{having} is processed by the usual
stages:
%
\begin{itemize}
\item the parser stage
\item the rewrite system
\item the planner/optimizer
\item the executor
\end{itemize}
%
and in the following sections we will describe what every stage does
to the query in order to obtain the appropriate result.
\subsubsection{The Parser Stage}
\begin{figure}[ht]
\begin{center}
\epsfig{figure=figures/parse_having.ps}
\caption{{\it Querytree} built up for the query of example \ref{having}}
\label{parse_having}
\end{center}
\end{figure}
The parser stage builds up a {\it querytree} containing the {\it
where} qualification and information about the {\it grouping} that has
to be done (i.e. a list of all attributes to group for is attached to
the field {\tt groupClause}). The main difference from {\it querytrees}
built up for queries without {\it aggregate functions} lies in the
field {\tt hasAggs}, which is set to {\tt true}, and in the {\it
targetlist}. The {\tt expr} field of the second {\tt TLE} node of the
{\it targetlist} shown in figure \ref{parse_having} does not point
directly to a {\tt VAR} node but to an {\tt Aggreg} node representing
the {\it aggregate function} used in the query.
A check is made that every attribute grouped for appears in the
{\it targetlist} only outside of an {\it aggregate function}, and that
every attribute that appears outside of an {\it aggregate function} in
the {\it targetlist} is grouped for.
%
\pagebreak
\subsubsection{The Rewrite System}
The rewriting system does not make any changes to the {\it querytree}
as long as the query involves just {\it base tables}. If any {\it
views} are present the query is rewritten to access the tables
specified in the {\it view definition}.
%
\subsubsection{Planner/Optimizer}
Whenever an {\it aggregate function} is involved in a query (which is
indicated by the {\tt hasAggs} flag set to {\tt true}) the planner
creates a {\it plantree} whose top node is an {\tt AGG} node. The {\it
targetlist} is searched for {\it aggregate functions} and for every
function that is found, a pointer to the corresponding {\tt Aggreg}
node is added to a list which is finally attached to the field {\tt aggs} of
the {\tt AGG} node. This list is needed by the {\it executor} to know which
{\it aggregate functions} are present and have to be
handled.
The {\tt AGG} node is followed by a {\tt GRP} node. The implementation
of the {\it grouping} logic needs a sorted table for its operation, so
the {\tt GRP} node is followed by a {\tt SORT} node. The {\tt SORT}
operation gets its tuples from a kind of {\tt Scan} node (if no
indexes are present this will be a simple {\tt SeqScan} node). Any
qualifications present are attached to the {\tt Scan} node. Figure
\ref{plan_having} shows the {\it plan} created for the query given in
example \ref{having}.
\begin{figure}[ht]
\begin{center}
\epsfig{figure=figures/plan_having.ps}
\caption{{\it Plantree} for the query of example \ref{having}}
\label{plan_having}
\end{center}
\end{figure}
Note that every node has its own {\it targetlist} which may differ from the
one of the node above or below. The field {\tt varattno} of every
{\tt VAR} node included in a {\it targetlist} contains a number representing
the position of the attribute's value in the tuple of the current node.
\pagebreak
\clearpage
\subsubsection{Executor}
The {\it executor} uses the function {\tt execAgg()} to execute {\tt AGG}
nodes. As described earlier, it uses one main function, {\tt
ExecProcNode()}, which is called recursively to execute subtrees. The
following steps are performed by {\tt execAgg()}:
%
\begin{itemize}
%
\item The list attached to the field {\tt aggs} of the {\tt AGG} node is
examined and for every {\it aggregate function} included the {\it transition
functions} are fetched from a {\it function table}. Calculating the
value of an {\it aggregate function} is done using three functions:
%
\begin{itemize}
\item The {\it first transition function} {\tt xfn1} is called with the
current value of the attribute the {\it aggregate function} is applied
to and changes its internal state using the attribute's value given as
an argument.
%
\item The {\it second transition function} {\tt xfn2} is called without any
arguments and changes its internal state only according to internal rules.
%
\item The {\it final function} {\tt finalfn} takes the final states of {\tt
xfn1} and {\tt xfn2} as arguments and finishes the {\it aggregation}.
\end{itemize}
%
\begin{example} Recall the functions necessary to implement the
{\it aggregate function} {\tt avg} building the average over all
values of an attribute in a group (see section \ref{ext_agg} {\it
Extending Aggregates}):
%
\begin{itemize}
%
\item The first transition function {\tt xfn1} has to be a function that
takes the actual value of the attribute {\tt avg} is applied to as an
argument and adds it to the internally stored sum of previous
calls.
%
\item The second transition function {\tt xfn2} only increases an internal
counter every time it is called.
%
\item The final function {\tt finalfn} divides the result of {\tt xfn1} by
the counter of {\tt xfn2} to calculate the average.
%
\end{itemize}
%
\end{example}
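The division of labour between the three functions for {\tt avg} can be
condensed into a short sketch (illustrative Python; the state layout
and driver loop are invented, only the roles of {\tt xfn1}, {\tt xfn2}
and {\tt finalfn} follow the text):

```python
# Illustrative sketch of the three-function protocol for avg: xfn1
# accumulates a running sum, xfn2 counts the calls, and finalfn
# combines the two final states into the average.
def xfn1(sum_state, value):       # first transition: add the attribute value
    return sum_state + value

def xfn2(count_state):            # second transition: bump the counter
    return count_state + 1

def finalfn(sum_state, count_state):  # final function: sum / count
    return sum_state / count_state

values = [10, 20, 30]             # attribute values within one group
s = c = 0
for v in values:
    s = xfn1(s, v)
    c = xfn2(c)
avg = finalfn(s, c)               # 60 / 3
```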
%
Note that {\tt xfn2} and {\tt finalfn} may be absent (e.g. for the
{\it aggregate function} {\tt sum}, which simply sums up all values of
the given attribute within a group). \\
\\
{\tt execAgg()} creates an array containing one entry for every {\it
aggregate function} found in the list attached to the field {\tt
aggs}. The array will hold information needed for the execution of
every {\it aggregate function} (including the {\it transition functions}
described above).
%
\item The following steps are executed in a loop as long as the
subplan still returns tuples (i.e. as long as there are tuples left
in the current group). When no tuples are left in the group, a {\tt
NULL} pointer is returned, indicating the end of the
group.
%
\begin{itemize}
\item A new tuple from the subplan (i.e. the {\it plan} attached to the
field {\tt lefttree}) is fetched by recursively calling {\tt
ExecProcNode()} with the subplan as argument.
%
\item For every {\it aggregate function} (contained in the array created
before) apply the transition functions {\tt xfn1} and {\tt xfn2} to
the values of the appropriate attributes of the current tuple.
\end{itemize}
%
\item When we get here, all tuples of the current group have been
processed and the {\it transition functions} of all {\it aggregate
functions} have been applied to the values of the attributes. We are
now ready to complete the {\it aggregation} by applying the {\it final
function} ({\tt finalfn}) for every {\it aggregate function}.
%
\item Store the tuple containing the new values (the results of the
{\it aggregate functions}) and hand it back.
\end{itemize}
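The loop just described can be sketched compactly (illustrative Python,
not the C implementation; {\tt fetch\_tuple} stands in for the
recursive {\tt ExecProcNode()} call and the per-aggregate state layout
is invented):

```python
# Illustrative sketch of the execAgg() loop: pull tuples from the
# subplan until end-of-group (None here, a NULL pointer in the backend),
# feed each tuple through the transition functions, then finalize.
def exec_agg(fetch_tuple, aggs):
    # aggs: list of (attribute, xfn1, xfn2, finalfn) per aggregate function
    states = [(0, 0)] * len(aggs)
    while True:
        tup = fetch_tuple()              # stands in for ExecProcNode()
        if tup is None:                  # end of the current group
            break
        states = [(xfn1(s1, tup[attr]), xfn2(s2))
                  for (attr, xfn1, xfn2, _), (s1, s2) in zip(aggs, states)]
    # apply finalfn for every aggregate and hand back one result tuple
    return tuple(finalfn(s1, s2)
                 for (_, _, _, finalfn), (s1, s2) in zip(aggs, states))

# usage: avg over one group of three tuples
rows = iter([{"x": 2}, {"x": 4}, {"x": 6}, None])
result = exec_agg(lambda: next(rows),
                  [("x", lambda s, v: s + v,
                         lambda c: c + 1,
                         lambda s, c: s / c)])
```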
%
Note that the procedure described above only returns one tuple
(i.e. it processes just one group, and when the end of the group is
detected, it processes the {\it aggregate functions} and hands back one