SpdyProxyClientSocket waits for the 200 OK response before returning
OK from Connect(). Change that behavior to return OK immediately after
sending the CONNECT headers.
This feature is enabled by default. It should probably be controlled
through an interface, but that would mean passing a flag through deep
interface chains, which for now requires intrusive changes in multiple
places.
Design notes:
The current approach is better than the obvious TCP Fast Open-style
fake Connect().
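For context, a Fast Open-style fake Connect() reports success without
any network round trip and defers the real connection to the first
write. A minimal Linux-only sketch of the underlying primitive, not
the Chromium code path (MSG_FASTOPEN is guarded in case the toolchain
lacks it):

  // Illustration only: with TCP Fast Open the handshake is deferred so
  // the first payload can ride on the SYN, which is why a Fast Open
  // style Connect() can claim success before any packet is exchanged.
  #include <cstddef>
  #include <netinet/in.h>
  #include <sys/socket.h>
  #include <sys/types.h>
  #include <unistd.h>

  int ConnectAndSendWithFastOpen(const sockaddr_in& addr,
                                 const char* data, size_t len) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
      return -1;
  #ifdef MSG_FASTOPEN
    // The real connect happens here: data goes out with the SYN if the
    // peer supports TFO; otherwise the kernel falls back to a normal
    // handshake before sending.
    ssize_t sent = sendto(fd, data, len, MSG_FASTOPEN,
                          reinterpret_cast<const sockaddr*>(&addr),
                          sizeof(addr));
  #else
    // No Fast Open support: ordinary connect() + send().
    ssize_t sent = -1;
    if (connect(fd, reinterpret_cast<const sockaddr*>(&addr),
                sizeof(addr)) == 0)
      sent = send(fd, data, len, 0);
  #endif
    if (sent < 0) {
      close(fd);
      return -1;
    }
    return fd;  // Caller owns the now connected socket.
  }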
Fast Open should not be used for preconnects, because preconnects
need actual connections to be set up. The Naive client does not use
preconnects per se (it uses "...RawConnect"), but the user agent will
use preconnects and the Naive client has to infer that. Hence there is
a need to check the incoming socket for available bytes right before
Connect() and configure whether the socket should be connected with
Fast Open. But a fake Connect() makes it difficult to check the
incoming socket, because it returns immediately and there is not
enough time for the first read of the incoming socket to arrive.
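In plain POSIX terms (not Chromium's Socket API; the helper name is
hypothetical), that check amounts to a peek at the accepted client
socket right before connecting upstream:

  #include <sys/ioctl.h>

  // Hypothetical helper: returns true if the user agent already sent
  // bytes on the accepted socket. Bytes waiting means a real request,
  // so Fast Open can carry them with the connect; no bytes waiting
  // likely means a preconnect, which needs a full connection set up.
  bool ClientHasPendingBytes(int client_fd) {
    int available = 0;
    if (ioctl(client_fd, FIONREAD, &available) < 0)
      return false;  // On error, assume a preconnect; connect fully.
    return available > 0;
  }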
To check for preconnects it is best to push the first read of the
incoming socket to as late as possible. The other (wrong) way of doing
that is to pass in an early read callback, call it immediately after
sending HEADERS, and send the available bytes right there. This is
wrong because it does not work with late binding, which assumes
Connect() is idempotent; sockets opened this way may end up bound to
the wrong socket requests.
The current approach is to return OK from Connect() right after
sending HEADERS, without waiting for the reply, which is received
later. If the reply arrives during a subsequent Read() and indicates
an error, the error is returned through that Read()'s callback;
otherwise the error is ignored, the connection is disconnected, and
subsequent Read() and Write() calls should discover the disconnection.
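As a simplified model (hypothetical class and member names, not the
real SpdyProxyClientSocket):

  #include <functional>
  #include <optional>
  #include <utility>

  // Connect() completes as soon as the CONNECT HEADERS frame is
  // queued. If the proxy's reply later turns out to be an error, it
  // surfaces through the next Read(); otherwise the socket is simply
  // disconnected and later Read()/Write() calls observe that.
  class TunnelSocketModel {
   public:
    using CompletionCallback = std::function<void(int rv)>;

    static constexpr int kOk = 0;
    static constexpr int kErrIoPending = -1;

    int Connect() {
      SendConnectHeaders();  // Queue the CONNECT HEADERS frame.
      return kOk;            // Do not wait for the proxy's reply.
    }

    int Read(char* /*buf*/, int /*len*/, CompletionCallback callback) {
      if (reply_error_) {
        // The reply arrived after Connect() and indicated an error,
        // e.g. a non-2xx status: report it to this Read().
        int rv = *reply_error_;
        reply_error_.reset();
        return rv;
      }
      // Normal read path: return "pending" and invoke |callback| once
      // data (or a deferred error reply) arrives.
      pending_read_ = std::move(callback);
      return kErrIoPending;
    }

   private:
    void SendConnectHeaders() { /* omitted */ }

    std::optional<int> reply_error_;  // Set by a late error reply.
    CompletionCallback pending_read_;
  };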
This way the delegate can close the socket instead of continuing to
send data. Read EOF, i.e. the h2 half-closed (remote) state, was
introduced in https://codereview.chromium.org/129543002. But
StreamSocket doesn't really support a half-closed state, so upon a
read EOF the only sane action is to close the socket immediately,
even though in theory more sends are possible.
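In that model the EOF handling reduces to the following (again a
sketch with a hypothetical interface):

  // There is no usable half-closed state at the StreamSocket level,
  // so a read EOF tears the connection down right away.
  struct StreamSocketLike {
    virtual ~StreamSocketLike() = default;
    virtual void Disconnect() = 0;
  };

  void OnReadCompleted(StreamSocketLike* socket, int bytes_read) {
    if (bytes_read == 0) {
      // Read EOF / h2 half-closed (remote): close immediately even
      // though more sends are theoretically possible.
      socket->Disconnect();
      return;
    }
    // ... otherwise handle |bytes_read| bytes of payload ...
  }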
Per RFC 7540#6.4:
However, after sending the RST_STREAM, the sending endpoint MUST be
prepared to receive and process additional frames sent on the stream
that might have been sent by the peer prior to the arrival of the
RST_STREAM.
After the large upstream refactor, only WebSocket sockets now have
tunneling via HTTP/1 proxies. "Raw" sockets in the normal socket pool
don't have tunneling via HTTP/1 proxies, i.e. CONNECT headers are not
sent; instead the raw payload is sent as-is to the HTTP/1 proxy, which
does not work.
For reference, the official code does this:
- HTTP sockets via HTTP/1 proxies: normal pool, no tunneling.
- HTTPS sockets via HTTP/1 proxies: normal pool, no tunneling
but does its own proxy encapsulation.
- WS sockets via HTTP/1 proxies: WS pool, tunneling.
We want the normal pool because the WS pool has some extra
restrictions, but we also want tunneling so we can expose a client
socket with the proxy built in.
Therefore we can force tunneling for all sockets. This always sends
CONNECT headers first, and thus breaks HTTP client sockets via HTTP/1
proxies, but since we don't use that combination this is ok.
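Concretely, forced tunneling means every connection through an HTTP/1
proxy starts with a CONNECT request and only relays raw payload after
a 2xx response. On the wire it looks roughly like this (plain-socket
sketch for illustration; in Chromium the HTTP proxy connect job does
the real work):

  #include <cstdio>
  #include <cstring>
  #include <string>
  #include <sys/socket.h>
  #include <sys/types.h>

  // |proxy_fd| is an established TCP connection to the HTTP/1 proxy.
  bool EstablishTunnel(int proxy_fd, const std::string& host, int port) {
    char request[512];
    snprintf(request, sizeof(request),
             "CONNECT %s:%d HTTP/1.1\r\nHost: %s:%d\r\n\r\n",
             host.c_str(), port, host.c_str(), port);
    if (send(proxy_fd, request, strlen(request), 0) < 0)
      return false;

    // Read the status line; a real implementation keeps reading until
    // the blank line that ends the proxy's response headers.
    char response[512] = {};
    ssize_t n = recv(proxy_fd, response, sizeof(response) - 1, 0);
    if (n <= 0)
      return false;
    return strncmp(response, "HTTP/1.1 2", 10) == 0 ||
           strncmp(response, "HTTP/1.0 2", 10) == 0;
  }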