[Issues] [apr_memcache 0000067]: Large Data sent via apr_memcache_set fails due to nonblocking socket
issues at outoforder.cc
Wed Dec 13 14:59:50 EST 2006
The following issue has been SUBMITTED.
======================================================================
http://issues.outoforder.cc/view.php?id=67
======================================================================
Reported By: JimHull
Assigned To:
======================================================================
Project: apr_memcache
Issue ID: 67
Category: Other
Reproducibility: always
Severity: block
Priority: normal
Status: new
======================================================================
Date Submitted: 12-13-2006 14:59 EST
Last Modified: 12-13-2006 14:59 EST
======================================================================
Summary: Large Data sent via apr_memcache_set fails due to
nonblocking socket
Description:
Calling apr_memcache_set with a large data segment returns an error. A data
size of 65535 bytes is enough for me to reproduce this reliably.
What appears to happen is that the socket is in nonblocking mode because a
timeout has been set via apr_socket_timeout_set() (as I understand it from
the APR documentation). Later, when the data is sent, apr_socket_sendv()
returns success, but the number of bytes actually written is less than the
number of bytes requested (which is normal for a nonblocking socket). The
correct response is for the caller of apr_socket_sendv() to resend the
remaining data.
I have supplied a patch that fixes this issue. It applies to release 0.7.0
but should also apply to the current trunk. I am not a regular APR
consumer, so I'm not sure whether this is how the API was intended to be
handled, but the approach I am providing is straightforward and works.
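The partial-write behavior described above can be handled with a simple
retry loop: after each send, advance past the bytes that were accepted and
resend the remainder until everything has been handed off. Below is a
minimal C sketch of that pattern. `partial_send`, `send_all`, and
`demo_send_65535` are hypothetical stand-ins for illustration only; the
real fix would call apr_socket_sendv() and advance through the iovec by the
byte count it reports. The short write is simulated here by capping each
call at 4096 bytes.

```c
#include <stddef.h>

/* Hypothetical stand-in for apr_socket_sendv() on a nonblocking socket:
   it "accepts" at most 4096 bytes per call, reports the amount actually
   taken through *len, and returns 0 ("success") even on a short write. */
static int partial_send(const char *buf, size_t *len)
{
    (void)buf;                  /* a real sender would transmit buf here */
    if (*len > 4096)
        *len = 4096;            /* short write, as a nonblocking socket may do */
    return 0;
}

/* Retry loop: keep calling the sender with the unsent remainder until
   every byte has been handed off, mirroring the fix described above. */
static size_t send_all(const char *buf, size_t total)
{
    size_t sent = 0;
    while (sent < total) {
        size_t chunk = total - sent;
        if (partial_send(buf + sent, &chunk) != 0)
            break;              /* a real caller would map this to an error */
        sent += chunk;
    }
    return sent;
}

/* Drives send_all() with a 65535-byte buffer, the size from this report;
   the loop needs 16 calls (15 x 4096 + 1 x 4095) to drain it. */
static size_t demo_send_65535(void)
{
    static char buf[65535];
    return send_all(buf, sizeof(buf));
}
```

A caller that checked only the return status of apr_socket_sendv() would
miss the short write; comparing the reported length against the requested
length, as the loop above does, is what the supplied patch addresses.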
======================================================================
Issue History
Date Modified Username Field Change
======================================================================
12-13-06 14:59 JimHull New Issue
12-13-06 14:59 JimHull File Added: apr_memcache.patch
======================================================================