Designing an HTTP Client and Comparing It to the Standard Library¶
The motive of this article is that I cannot manage my code well, especially client code that builds on other libraries to reach my own goals.
So I went through the source code of the standard http client, hoping to learn from it how to design a client properly.
Design first¶
When you want to learn something in depth, it is better to design it yourself first and then compare your design with the existing one, to see which important pieces you are missing.
Design details¶
What would the client look like if I designed it myself? The following diagram shows it.
In this design, the client not only constructs requests from the user's data but also takes care of pushing the data onto the TCP channel. How the data gets sent, however, is really an implementation detail.
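To make that description concrete, here is a minimal sketch of what such a naive client could look like. The type and method names are my own, purely hypothetical: it composes the request text by hand and writes it straight into a freshly dialed TCP connection.

package main

import (
    "bufio"
    "fmt"
    "net"
)

// NaiveClient is a hypothetical client that mixes request construction
// with transport concerns, as described above.
type NaiveClient struct{}

// Get hand-writes an HTTP/1.1 request and sends it over a freshly dialed
// TCP connection: no pooling, no TLS, no redirects, no error recovery.
func (c NaiveClient) Get(host, path string) (string, error) {
    conn, err := net.Dial("tcp", host+":80")
    if err != nil {
        return "", err
    }
    defer conn.Close()
    fmt.Fprintf(conn, "GET %s HTTP/1.1\r\nHost: %s\r\nConnection: close\r\n\r\n", path, host)
    // Read only the status line, to keep the sketch short.
    return bufio.NewReader(conn).ReadString('\n')
}

func main() {
    var c NaiveClient
    status, err := c.Get("example.com", "/")
    fmt.Println(status, err)
}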
How about my design¶
My design here is only a brief outline, and from today's point of view (year 2022) it lacks a lot of details and abstraction. I will mention the details in the following topics; as for the abstraction, it is worth stating here because it is similar to the original Go http client design.
Note that this is a very early version of Go, tagged weekly.2011-02-15: release.2011-02-15.
The client was crude back then, and my design comes close to it. It uses a single send function to issue http/https requests:
- TL;DR: Implementation
// Send issues an HTTP request. Caller should close resp.Body when done reading it.
//
// TODO: support persistent connections (multiple requests on a single connection).
// send() method is nonpublic because, when we refactor the code for persistent
// connections, it may no longer make sense to have a method with this signature.
func send(req *Request) (resp *Response, err os.Error) {
    if req.URL.Scheme != "http" && req.URL.Scheme != "https" {
        return nil, &badStringError{"unsupported protocol scheme", req.URL.Scheme}
    }
    addr := req.URL.Host
    if !hasPort(addr) {
        addr += ":" + req.URL.Scheme
    }
    info := req.URL.RawUserinfo
    if len(info) > 0 {
        enc := base64.URLEncoding
        encoded := make([]byte, enc.EncodedLen(len(info)))
        enc.Encode(encoded, []byte(info))
        if req.Header == nil {
            req.Header = make(map[string]string)
        }
        req.Header["Authorization"] = "Basic " + string(encoded)
    }
    var conn io.ReadWriteCloser
    if req.URL.Scheme == "http" {
        conn, err = net.Dial("tcp", "", addr)
        if err != nil {
            return nil, err
        }
    } else { // https
        conn, err = tls.Dial("tcp", "", addr, nil)
        if err != nil {
            return nil, err
        }
        h := req.URL.Host
        if hasPort(h) {
            h = h[0:strings.LastIndex(h, ":")]
        }
        if err := conn.(*tls.Conn).VerifyHostname(h); err != nil {
            return nil, err
        }
    }
    err = req.Write(conn)
    if err != nil {
        conn.Close()
        return nil, err
    }
    reader := bufio.NewReader(conn)
    resp, err = ReadResponse(reader, req.Method)
    if err != nil {
        conn.Close()
        return nil, err
    }
    resp.Body = readClose{resp.Body, conn}
    return
}
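For comparison, roughly the same one-shot flow can still be reproduced with today's net/http building blocks. This is only an illustration of the old control flow, not how the current client works; the host and URL are placeholders.

package main

import (
    "bufio"
    "fmt"
    "net"
    "net/http"
)

func main() {
    req, err := http.NewRequest("GET", "http://example.com/", nil)
    if err != nil {
        panic(err)
    }

    // Dial the host directly, just like the 2011 code did.
    conn, err := net.Dial("tcp", "example.com:80")
    if err != nil {
        panic(err)
    }
    defer conn.Close()

    // Write the request and read the response off the same connection.
    if err := req.Write(conn); err != nil {
        panic(err)
    }
    resp, err := http.ReadResponse(bufio.NewReader(conn), req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    fmt.Println(resp.Status)
}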
There is no abstraction in this early version; I see two possible reasons:
- there was no HTTP/2 proposal yet in 2011 (refer to Wikipedia);
- Go itself was still developing quickly at that time.
Let's get back to the topic. For an entry-level engineer it is very common to come up with a design like this. But for me, in 2022, such a design is no longer adequate, given how far the network infrastructure has come since then. I would chalk it up to a lack of awareness of the protocol's complexity and of the client side's responsibilities.
As time passed, Go's http library developed and expanded a great deal. The following topics look into what the Go http client library looks like nowadays.
net/http Client API in v1.19¶
This article IS NOT A MANUAL, so I will not paste large blocks of comments or documentation, only as little as possible to make the feature study easier.
The point of listing the API is to find out why some APIs I never considered nevertheless exist in the library.
- methods Get, Post, Head, PostForm and Do
- method CloseIdleConnections
- fields Timeout, Jar and Transport
- functions NewRequest and NewFileTransport
The list above shows the API of a client. Honestly, in my design I only have Get, Post, Put and Delete methods, so it is worth spending time to understand why the other APIs exist.
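As a quick tour of the listed API, here is a small example of my own (the URLs are placeholders) that exercises the convenience methods, Do together with NewRequest, the Timeout field, and CloseIdleConnections.

package main

import (
    "fmt"
    "net/http"
    "net/url"
    "strings"
    "time"
)

func main() {
    // A client with an explicit timeout; Jar and Transport are left nil,
    // so no cookie jar is used and DefaultTransport handles the requests.
    client := &http.Client{Timeout: 10 * time.Second}

    // Convenience helpers: Get, PostForm (Post and Head work the same way).
    if resp, err := client.Get("https://example.com/"); err == nil {
        resp.Body.Close()
    }
    if resp, err := client.PostForm("https://example.com/form", url.Values{"k": {"v"}}); err == nil {
        resp.Body.Close()
    }

    // Do gives full control: build the request yourself with NewRequest.
    req, err := http.NewRequest(http.MethodPut, "https://example.com/item", strings.NewReader(`{}`))
    if err != nil {
        panic(err)
    }
    req.Header.Set("Content-Type", "application/json")
    if resp, err := client.Do(req); err == nil {
        fmt.Println(resp.Status)
        resp.Body.Close()
    }

    // Release idle keep-alive connections held by the underlying transport.
    client.CloseIdleConnections()
}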
On this page I only discuss RoundTripper; the other fields and functions will be covered on the next page: Cancel and Timeout Implementation in Http Standard Lib.
Field: RoundTripper¶
The client holds a Transport field of type RoundTripper. From the signature we can tell that a RoundTripper is something that takes a request and returns a response and an error.
type Client struct {
    // Transport specifies the mechanism by which individual
    // HTTP requests are made.
    // If nil, DefaultTransport is used.
    Transport RoundTripper
    // ignore lines
}
type RoundTripper interface {
    RoundTrip(*Request) (*Response, error)
}
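To see the decoupling in practice, here is a small sketch of my own (not part of the standard library): any type with a RoundTrip method can be plugged into the Transport field, for example a wrapper that logs every transaction and delegates the real work to http.DefaultTransport.

package main

import (
    "log"
    "net/http"
    "time"
)

// loggingTransport is a hypothetical RoundTripper that delegates the real
// work to another RoundTripper and only measures and logs the transaction.
type loggingTransport struct {
    next http.RoundTripper
}

func (t *loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
    start := time.Now()
    resp, err := t.next.RoundTrip(req)
    log.Printf("%s %s -> err=%v in %v", req.Method, req.URL, err, time.Since(start))
    return resp, err
}

func main() {
    client := &http.Client{
        Transport: &loggingTransport{next: http.DefaultTransport},
    }
    resp, err := client.Get("https://example.com/")
    if err != nil {
        log.Fatal(err)
    }
    resp.Body.Close()
}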
The question is: why do we need a RoundTripper? Could the client do the RoundTripper's job itself, and what benefits do we gain from the separation?
Overall, we could say it is an abstraction: the client does not care about any of the details of sending and receiving HTTP data. That is obviously correct, but it is also too abstract; everyone knows an interface aims to decouple components and reduce complexity for the caller.
The points here should be:
- what does the RoundTripper do?
- what is the motive for adding the RoundTripper?
To make this clear, let's look at the HTTP RFCs first to get a clear panorama of HTTP.
RFC HTTP¶
There are a number of RFC documents about the HTTP/1.1 protocol; Go follows RFC 7230 through 7235. There are also obsoleted ones such as RFC 2616, and beyond that there are HTTP/2 and HTTP/3.
When it comes to the HTTP/1.1 RFCs, it is easy to fall into the trap of thinking the RFC only enacts the rules and does not care about implementation details.
However, that leads to a paradox: if the RFC did not care about the implementation, what would be the difference between HTTP 1.1, 2 and 3? Underestimating the complexity is what produces the false idea that the RFC does not care about implementation details.
The RFC rules what the HTTP protocol is; refer to the Overall Operation section, which describes the flow of HTTP.
The HTTP protocol is a request/response protocol. A client sends a request to the server in the form of a request method, URI, and protocol version, followed by a MIME-like message containing request modifiers, client information, and possible body content over a connection with a server. The server responds with a status line, including the message's protocol version and a success or error code, followed by a MIME-like message containing server information, entity metainformation, and possible entity-body content.
As a protocol description, it is common for an RFC to enact its rules as Requirements, and from the part below there is a corollary: the RFC spells out which features an implementation must provide to be compliant.
An implementation is not compliant if it fails to satisfy one or more of the MUST or REQUIRED level requirements for the protocols it implements. An implementation that satisfies all the MUST or REQUIRED level and all the SHOULD level requirements for its protocols is said to be "unconditionally compliant"; one that satisfies all the MUST level requirements but not all the SHOULD level requirements for its protocols is said to be "conditionally compliant.”
Note that the two quotations above come from RFC 2616; although it is obsolete, I think it is still worth putting here.
However, given the paradox mentioned above, the HTTP/1.1 RFCs do care about implementation details. So what are those details?
- Cache: Any client or server MAY employ a cache
A cache stores cacheable responses in order to reduce the response time and network bandwidth consumption on future, equivalent requests.
- Chunked Transfer Coding: wrap the payload body
The chunked transfer coding wraps the payload body in order to transfer it as a series of chunks, each with its own size indicator, followed by an OPTIONAL trailer containing header fields
- Connection management: try to reuse
HTTP implementations are expected to engage in connection management, which includes maintaining the state of current connections, establishing a new connection or reusing an existing connection, processing messages received on a connection, detecting connection failures, and closing each connection.
These are only a small part of all the details, but from them there is a corollary: the RFC enacts the HTTP rules all the way down to the point where HTTP becomes bytes, which are then sent over a TCP socket.
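In Go, most of those implementation details end up as configuration on http.Transport rather than on the Client. The values below are arbitrary; the sketch only shows where connection management and content coding live.

package main

import (
    "net/http"
    "time"
)

func main() {
    // Connection management and coding concerns live in the Transport,
    // not in the Client: pooling, idle timeouts, compression, TLS setup.
    t := &http.Transport{
        MaxIdleConns:        100,              // total idle keep-alive connections
        MaxIdleConnsPerHost: 10,               // reuse connections to the same host
        IdleConnTimeout:     90 * time.Second, // eventually close idle connections
        TLSHandshakeTimeout: 10 * time.Second,
        DisableCompression:  false, // let the transport handle gzip transparently
    }
    client := &http.Client{Transport: t}
    _ = client // use client.Get / client.Do as usual
}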
RoundTripper¶
RoundTripper is an interface representing the ability to execute a single HTTP transaction, obtaining the Response for a given Request.
- Q: Is auto-completing a request's default fields the RoundTripper's duty? It should not be: a RoundTripper SHOULD only validate the request, not fill it in with default values.
After clarifying the HTTP RFCs, we can easily see that the RoundTripper also hides all of that complexity: it only requires a request object and returns a response and an error.
From here, we can be sure of what the RoundTripper does:
- it provides an interface that stands between the caller and the implementation, separating implementation details such as caching and connection management from the caller side. The caller does not care about those details.
This answers the first question raised above. The most important point is **the complexity of the HTTP implementation itself**, which is far beyond the complexity I estimated in my original design.
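To make that concrete, here is a deliberately naive caching RoundTripper of my own; it ignores Cache-Control, Vary, expiration and everything else a real cache must honor, which is exactly the complexity the interface hides. The caller keeps using the ordinary Client API while the cache sits inside the transport.

package main

import (
    "bufio"
    "bytes"
    "net/http"
    "net/http/httputil"
    "sync"
)

// cachingTransport is a toy example: it caches whole GET responses by URL.
type cachingTransport struct {
    next  http.RoundTripper
    mu    sync.Mutex
    cache map[string][]byte // URL -> dumped response in wire format
}

func (t *cachingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
    if req.Method != http.MethodGet {
        return t.next.RoundTrip(req)
    }
    key := req.URL.String()

    t.mu.Lock()
    raw, ok := t.cache[key]
    t.mu.Unlock()
    if ok {
        // Replay the stored response without touching the network.
        return http.ReadResponse(bufio.NewReader(bytes.NewReader(raw)), req)
    }

    resp, err := t.next.RoundTrip(req)
    if err != nil {
        return nil, err
    }
    dump, err := httputil.DumpResponse(resp, true) // body included, resp stays readable
    if err != nil {
        return nil, err
    }
    t.mu.Lock()
    t.cache[key] = dump
    t.mu.Unlock()
    return resp, nil
}

func main() {
    client := &http.Client{
        Transport: &cachingTransport{next: http.DefaultTransport, cache: map[string][]byte{}},
    }
    for i := 0; i < 2; i++ { // the second call is served from the in-memory cache
        if resp, err := client.Get("https://example.com/"); err == nil {
            resp.Body.Close()
        }
    }
}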
Then we should look into the second question: what was the motive for adding RoundTripper? Let's trace the history of the Go source code with git blame to see why the RoundTripper was introduced.
RoundTripper used to be named Transport; refer to this line.
// COMES FROM COMMIT dbff0adaa784ef0f822
// !! Note it was the previous code !!
// Transport is an interface representing the ability to execute a
// single HTTP transaction, obtaining the Response for a given Request.
type Transport interface {
    Do(req *Request) (resp *Response, err os.Error)
}
The transport was introduced in this pull request:
Much yet to come, but this is a safe first step, introducing an in-the-future configurable Client object (where policy for cookies, auth, redirects will live) as well as introducing a ClientTransport interface for sending requests.
Note the timing: this was 2011-02, while the HTTP/2 proposal only appeared in early 2012. To fit future requirements, they introduced the RoundTripper to:
- use one HTTP client for both HTTP/1.1 and HTTP/2 (see the sketch after this list);
- decouple the implementation from the client layer into another package.
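That goal is visible today: the same Client and the same RoundTripper interface cover both protocol versions, and whether HTTP/2 is attempted is just transport configuration. A minimal sketch (the URL is a placeholder; with the plain DefaultTransport, HTTP/2 is already attempted by default):

package main

import (
    "crypto/tls"
    "fmt"
    "net/http"
)

func main() {
    // The caller-facing API is identical for HTTP/1.1 and HTTP/2; the
    // protocol choice is negotiated inside the RoundTripper.
    client := &http.Client{
        Transport: &http.Transport{
            ForceAttemptHTTP2: true, // keep HTTP/2 enabled even with a custom TLS config
            TLSClientConfig:   &tls.Config{},
        },
    }
    resp, err := client.Get("https://example.com/")
    if err != nil {
        fmt.Println(err)
        return
    }
    defer resp.Body.Close()
    fmt.Println(resp.Proto) // e.g. "HTTP/2.0" or "HTTP/1.1", whatever was negotiated
}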
There were no objections in the code review, so the design can be taken as a good practice in the Go language.