
fe-misc.c

    Commit f7672c8c
    Avoid buffer bloat in libpq when server is consistently faster than client.
    Tom Lane authored
    If the server sends a long stream of data, and the server + network are
    consistently fast enough to force the recv() loop in pqReadData() to
    iterate until libpq's input buffer is full, then upon processing the last
    incomplete message in each bufferload we'd usually double the buffer size,
    due to supposing that we didn't have enough room in the buffer to finish
    collecting that message.  After filling the newly-enlarged buffer, the
    cycle repeats, eventually resulting in an out-of-memory situation (which
    would be reported misleadingly as "lost synchronization with server").
    Of course, we should not enlarge the buffer unless we still need room
    after discarding already-processed messages.
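
    The fix's ordering can be illustrated with a small, self-contained sketch. This is not
    the actual fe-misc.c code; the struct and its inBuffer/inStart/inEnd fields only loosely
    mirror libpq's internal connection state, and ensure_room() is a hypothetical helper.
    The point is the sequence: first discard already-processed input by sliding the unread
    bytes to the front of the buffer, and only then, if the incomplete message still does
    not fit, double the allocation.

    /*
     * Minimal sketch (not the actual libpq code) of the rule:
     * "enlarge only if space is still short after discarding processed input".
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct
    {
        char   *inBuffer;   /* malloc'd input buffer */
        size_t  inBufSize;  /* allocated size of inBuffer */
        size_t  inStart;    /* offset of first unprocessed byte */
        size_t  inEnd;      /* offset just past the last byte received */
    } conn_sketch;

    /*
     * Make sure at least "needed" free bytes are available at the end of the
     * buffer.  Returns 0 on success, -1 on out-of-memory.
     */
    static int
    ensure_room(conn_sketch *conn, size_t needed)
    {
        /* Step 1: reclaim space occupied by already-processed messages. */
        if (conn->inStart > 0)
        {
            memmove(conn->inBuffer,
                    conn->inBuffer + conn->inStart,
                    conn->inEnd - conn->inStart);
            conn->inEnd -= conn->inStart;
            conn->inStart = 0;
        }

        /* Step 2: enlarge only if the free space is still insufficient. */
        if (conn->inBufSize - conn->inEnd < needed)
        {
            size_t  newSize = conn->inBufSize;
            char   *newBuf;

            while (newSize - conn->inEnd < needed)
                newSize *= 2;       /* double until the message fits */

            newBuf = realloc(conn->inBuffer, newSize);
            if (newBuf == NULL)
                return -1;
            conn->inBuffer = newBuf;
            conn->inBufSize = newSize;
        }
        return 0;
    }

    int
    main(void)
    {
        conn_sketch conn = {malloc(16 * 1024), 16 * 1024, 0, 0};

        if (conn.inBuffer == NULL)
            return 1;

        /*
         * Pretend we already parsed 12 kB of complete messages and now need
         * 8 kB more for an incomplete one: after discarding the processed
         * input there is enough room, so no enlargement happens.
         */
        conn.inStart = 12 * 1024;
        conn.inEnd = 16 * 1024;
        if (ensure_room(&conn, 8 * 1024) != 0)
            return 1;
        printf("buffer size is still %zu bytes\n", conn.inBufSize);
        free(conn.inBuffer);
        return 0;
    }

    With that ordering, a consistently fast sender can at most push the buffer up to
    roughly the size of the largest single message, rather than doubling it on every
    bufferload until allocation fails.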
    
    This bug dates back quite a long time: pqParseInput3 has had the behavior
    since perhaps 2003, getCopyDataMessage at least since commit 70066eb1
    in 2008.  Probably the reason it's not been isolated before is that in
    common environments the recv() loop would always be faster than the server
    (if on the same machine) or faster than the network (if not); or at least
    it wouldn't be slower consistently enough to let the buffer ramp up to a
    problematic size.  The reported cases involve Windows, which perhaps has
    different timing behavior than other platforms.
    
    Per bug #7914 from Shin-ichi Morita, though this is different from his
    proposed solution.  Back-patch to all supported branches.