Clients sending too many RST_STREAM frames is irregular behavior.
Hack in a preceding END_STREAM DATA frame, padded to a length in
[48, 72], before the RST_STREAM so that the TLS record looks like a
HEADERS frame. The server often replies to this with a WINDOW_UPDATE
because padding is accounted for in flow control. Whether this
constitutes a new irregular behavior is still unclear.
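A minimal sketch of the padding computation, assuming the [48, 72] range
above; constant and function names are hypothetical, and the real frame
construction lives in the HTTP/2 stack:

    #include <cstddef>
    #include <random>

    // Hypothetical sketch: choose a total padded payload size in [48, 72]
    // for the END_STREAM DATA frame that precedes RST_STREAM, so the TLS
    // record is sized roughly like a HEADERS frame. Only the range comes
    // from the notes above.
    constexpr size_t kMinPaddedSize = 48;
    constexpr size_t kMaxPaddedSize = 72;

    size_t ChoosePadLength(size_t data_size) {
      static std::mt19937 rng{std::random_device{}()};
      std::uniform_int_distribution<size_t> dist(kMinPaddedSize, kMaxPaddedSize);
      size_t target = dist(rng);
      // The Pad Length field itself takes one byte of the frame payload.
      if (data_size + 1 >= target)
        return 0;
      return target - data_size - 1;
    }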
Client: On the first connection it does a full Open and detects whether
the server supports padding by checking for the "Padding" header in the
response. It applies padding if the server does. Subsequent connections
go back to Fast Open.
Server: Detects whether the client supports padding by checking for the
"Padding" header in the CONNECT request, and applies padding if the
client does.
Both client and server always send "Padding" headers to somewhat obscure
the packet lengths of the request and response headers, even if the other
side may not acknowledge the padding negotiation, whether because of an
old version or because the "Padding" header is dropped by a frontend.
The manual option --padding is removed.
In HttpProxySocket there can be data immediately after the HTTP headers,
as in the case of fast HTTP CONNECT.
Instead of reporting an error, handle this case by returning the data
that follows the headers from the next Read() call.
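Conceptually the fix buffers whatever follows the headers and drains it
from the next Read(); a simplified sketch with hypothetical member names:

    #include <algorithm>
    #include <cstring>
    #include <string>

    // Simplified sketch: bytes received after the end of the HTTP headers
    // are kept in a buffer and handed back from the next Read() instead of
    // being treated as a protocol error.
    class HttpProxySocketSketch {
     public:
      void OnHeadersComplete(const char* data, size_t len, size_t header_len) {
        if (len > header_len)
          leftover_.assign(data + header_len, len - header_len);
      }

      int Read(char* buf, int buf_len) {
        if (!leftover_.empty()) {
          int n = std::min<int>(buf_len, leftover_.size());
          std::memcpy(buf, leftover_.data(), n);
          leftover_.erase(0, n);
          return n;
        }
        return ReadFromTransport(buf, buf_len);  // normal path (not shown)
      }

     private:
      int ReadFromTransport(char* buf, int buf_len);  // assumed elsewhere
      std::string leftover_;
    };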
SpdyProxyClientSocket uses read_callback_ for both Connect() and
Read(), and its OnIOComplete() calls read_callback_, so its fast
connect code checks read_callback_. The code was ported to
QuicProxyClientSocket without much change.
But QuicProxyClientSocket uses a separate connect_callback_ in addition
to read_callback_, and its OnIOComplete() calls connect_callback_, so
when headers are received after Connect() it does not need to check
read_callback_ and should always avoid calling connect_callback_.
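The dispatch difference could look roughly like this (a sketch, not the
actual Chromium code):

    #include <functional>
    #include <utility>

    // Sketch: SpdyProxyClientSocket reuses read_callback_ for Connect() and
    // Read(), so its fast-connect path must inspect read_callback_.
    // QuicProxyClientSocket keeps a separate connect_callback_, so once a
    // fast Connect() has returned OK there is no pending connect_callback_,
    // and headers arriving later only ever complete a pending Read().
    struct QuicProxyClientSocketSketch {
      std::function<void(int)> connect_callback_;  // set only by a blocking Connect()
      std::function<void(int)> read_callback_;     // set by a pending Read()

      void OnIOComplete(int result) {
        if (connect_callback_) {
          std::exchange(connect_callback_, nullptr)(result);
        } else if (read_callback_) {
          std::exchange(read_callback_, nullptr)(result);
        }
      }
    };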
SpdyProxyClientSocket waits for the 200 OK response before returning OK
from Connect(). Change that behavior to return OK immediately after the
CONNECT headers are sent.
This feature is enabled by default. It should probably be opted into
through an interface, but that would require passing a flag through deep
interface chains, which currently means intrusive changes in multiple
places.
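In outline the changed Connect() path looks something like the following
sketch; the state handling and names are assumptions, only the error-code
conventions (OK == 0, ERR_IO_PENDING == -1) match net/:

    // Sketch only: Connect() completes as soon as the CONNECT headers have
    // been written, instead of waiting for the 200 response; the response
    // is handled later, from Read().
    constexpr int OK = 0;
    constexpr int ERR_IO_PENDING = -1;

    struct FastConnectSketch {
      int SendConnectHeaders();  // assumed: writes the CONNECT request headers

      int Connect() {
        int rv = SendConnectHeaders();
        if (rv < 0 && rv != ERR_IO_PENDING)
          return rv;   // could not even send the request
        return OK;     // fast connect: do not wait for 200 OK here
      }
    };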
Design notes:
The current approach is better than the obvious TCP Fast Open style fake
Connect().
Fast Open should not be used for preconnects as preconnects need actual
connections set up. The Naive client does not use preconnects per se
(using "...RawConnect") but the user agent will use preconnects and the
Naive client has to infer that. Hence there is a need to check the
incoming socket for available bytes right before Connect() and configure
whether a socket should be connected with Fast Open. But fake Connect()
makes it difficult to check the incoming socket because it returns
immediately and there is not enough time for the first read of the
incoming socket to arrive.
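One way to tell whether the incoming socket already has bytes waiting (and
is therefore not a preconnect) is a non-blocking availability check; a
POSIX-only sketch, not necessarily what the client actually does:

    #include <sys/ioctl.h>

    // POSIX-only sketch: returns true if the accepted client socket already
    // has payload bytes queued, suggesting a real request that can use Fast
    // Open. An empty socket is likely a preconnect, which needs a full Open.
    bool HasBytesAvailable(int fd) {
      int available = 0;
      if (ioctl(fd, FIONREAD, &available) < 0)
        return false;
      return available > 0;
    }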
To check for preconnects it is best to push the first read of the
incoming socket as late as possible. The other (wrong) way of doing that
is to pass in an early read callback, call it immediately after sending
HEADERS, and then send the available bytes right there. This is wrong
because it does not work with late binding, which assumes Connect() is
idempotent; a socket opened this way could end up bound to the wrong
socket request.
The current approach is to return OK from Connect() right after sending
HEADERS, before getting the reply, which is to be received later. If the
reply is received during a subsequent Read() and it indicates an error,
the error is returned to that Read()'s callback; otherwise the error is
ignored, the connection is disconnected, and subsequent Read() and
Write() calls should discover the disconnection, so the delegate can
close the socket instead of continuing to send data.
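Sketched out, the deferred reply handling in Read() might look like this
(illustrative only; the error constant is a stand-in value):

    // Illustrative sketch: the CONNECT reply is consumed lazily from Read().
    // If it signals an error, that error is surfaced through the Read()
    // callback; otherwise payload reads proceed as usual.
    constexpr int OK = 0;
    constexpr int ERR_TUNNEL_CONNECTION_FAILED = -1;  // stand-in value only

    struct DeferredReplySketch {
      bool reply_checked_ = false;

      int CheckReply();                         // assumed: parses the buffered reply
      int ReadPayload(char* buf, int buf_len);  // assumed: normal data path
      void Disconnect();                        // assumed: closes the socket

      int Read(char* buf, int buf_len) {
        if (!reply_checked_) {
          reply_checked_ = true;
          if (CheckReply() != OK) {
            Disconnect();
            return ERR_TUNNEL_CONNECTION_FAILED;  // error goes to Read()'s caller
          }
        }
        return ReadPayload(buf, buf_len);
      }
    };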
Read EOF, or the h2 half-closed (remote) state, was introduced in
https://codereview.chromium.org/129543002. But StreamSocket doesn't
really support a half-closed state, so upon a read EOF the only sane
action is to close the socket immediately, even if in theory more data
could still be sent.
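So the handling reduces to something like this sketch:

    // Sketch: a read result of 0 means EOF. StreamSocket has no half-closed
    // state to fall back on, so the socket is torn down immediately rather
    // than kept open for further writes.
    void Disconnect();  // assumed: closes the underlying socket

    void OnReadComplete(int result) {
      if (result == 0) {   // read EOF
        Disconnect();
        return;
      }
      // result > 0: data was read; result < 0: a socket error occurred.
    }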
Per RFC 7540#6.4:
However, after sending the RST_STREAM, the sending endpoint MUST be
prepared to receive and process additional frames sent on the stream
that might have been sent by the peer prior to the arrival of the
RST_STREAM.
After the large upstream refactor, only WebSocket sockets now have
tunneling via HTTP/1 proxies. "Raw" sockets in the normal socket pool
don't have tunneling via HTTP/1 proxies, i.e. CONNECT headers are not
sent; instead the raw payload is sent as-is to the HTTP/1 proxy, which
does not work.
For reference, the official code does this:
- HTTP sockets via HTTP/1 proxies: normal pool, no tunneling.
- HTTPS sockets via HTTP/1 proxies: normal pool, no tunneling
but does its own proxy encapsulation.
- WS sockets via HTTP/1 proxies: WS pool, tunneling.
We want the normal pool because the WS pool has some extra restrictions,
but we also want tunneling so that we can expose a client socket with
the proxy built in.
Therefore we can force tunneling for all sockets. This will always
send CONNECT headers first and thus break HTTP client sockets via
HTTP/1 proxies, but since we don't use this combination this is ok.
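A hedged sketch of the idea; the real change is wherever the proxy connect
parameters are built, and all names below are placeholders:

    // Placeholder sketch: when building the params for a connection through
    // an HTTP/1 proxy, always request a tunnel, regardless of whether the
    // socket is for HTTP, HTTPS, or WebSocket traffic. CONNECT headers are
    // then sent first on every such socket.
    struct ProxyConnectParamsSketch {
      bool tunnel;
    };

    ProxyConnectParamsSketch MakeParams(bool /*is_websocket*/) {
      // Upstream only tunnels WebSocket sockets; here tunneling is forced
      // on for everything, which breaks plain HTTP over HTTP/1 proxies,
      // a combination this client does not use.
      return ProxyConnectParamsSketch{/*tunnel=*/true};
    }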