Is there any way for the Go HTTP client not to escape the requested URL?
For example, a request for the URL "/test(a)" gets escaped to "/test%28a%29".
I'm running the code from https://github.com/cmpxchg16/gobench
You can set an opaque URL.
Assuming you want the URL to point to http://example.com/test(a), you would do:
req, err := http.NewRequest("GET", "http://example.com/test(a)", nil)
if err != nil {
	// handle the error
}
req.URL = &url.URL{
	Scheme: "http",
	Host:   "example.com",
	Opaque: "//example.com/test(a)",
}
client.Do(req)
Example: http://play.golang.org/p/09V67Hbo6H
Documentation: http://golang.org/pkg/net/url/#URL
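For reference, here is a minimal self-contained sketch of the same approach that dumps the outgoing request so you can confirm the parentheses are left unescaped (example.com is only a placeholder host):
package main

import (
	"fmt"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	req, err := http.NewRequest("GET", "http://example.com/test(a)", nil)
	if err != nil {
		panic(err)
	}
	// Opaque bypasses the normal path encoding, so the parentheses
	// are not percent-escaped in the request line.
	req.URL = &url.URL{
		Scheme: "http",
		Host:   "example.com",
		Opaque: "//example.com/test(a)",
	}

	dump, err := httputil.DumpRequestOut(req, false)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(dump))
	// To actually send it: http.DefaultClient.Do(req)
}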
This appears to be a bug in the URL package:
https://code.google.com/p/go/issues/detail?id=6784
and https://code.google.com/p/go/issues/detail?id=5684
You need to set the unescaped path in URL.Path and the encoded path in URL.RawPath, otherwise it will be double-escaped, e.g.:
package main

import (
	"fmt"
	"net/url"
)

func main() {
	// Only Path is set, and it already contains percent-escapes, so
	// URL.String() escapes the '%' characters again (double escaping).
	doubleEncodedURL := &url.URL{
		Scheme: "http",
		Host:   "example.com",
		Path:   "/id/EbnfwIoiiXbtr6Ec44sfedeEsjrf0RcXkJneYukTXa%2BIFVla4ZdfRiMzfh%2FEGs7f/events",
	}

	// Path holds the decoded form and RawPath the encoded form, so
	// URL.String() can reuse RawPath verbatim.
	rawEncodedURL := &url.URL{
		Scheme:  "http",
		Host:    "example.com",
		Path:    "/id/EbnfwIoiiXbtr6Ec44sfedeEsjrf0RcXkJneYukTXa+IFVla4ZdfRiMzfh/EGs7f/events",
		RawPath: "/id/EbnfwIoiiXbtr6Ec44sfedeEsjrf0RcXkJneYukTXa%2BIFVla4ZdfRiMzfh%2FEGs7f/events",
	}

	fmt.Printf("doubleEncodedURL is : %s\n", doubleEncodedURL)
	fmt.Printf("rawEncodedURL is : %s\n", rawEncodedURL)
}
Playground link: https://play.golang.org/p/rOrVzW8ZJCQ
I created a simple Vue 3 app, and I'm trying to call another local API (on a different port) on my machine. To better replicate the production server environment, I'm calling a relative API path, which means I need a proxy on the Vite dev server to forward the API requests to the correct localhost port during local development. I defined my proxy like this in my vite.config.ts file:
import { fileURLToPath, URL } from "node:url";
import { defineConfig } from "vite";
import vue from "@vitejs/plugin-vue";
import basicSsl from '@vitejs/plugin-basic-ssl'
// https://vitejs.dev/config/
export default defineConfig({
plugins: [
basicSsl(),
vue()
],
resolve: {
alias: {
"@": fileURLToPath(new URL("./src", import.meta.url)),
},
},
server: {
https: true,
proxy: {
'/api': {
target: 'https://localhost:44326', // The API is running locally via IIS on this port
changeOrigin: true,
rewrite: (path) => path.replace(/^\/api/, '') // The local API has a slightly different path
}
}
}
});
I'm successfully calling my API from the Vue app, but I get this error in the command line where I'm running the vite server:
5:15:14 PM [vite] http proxy error:
Error: self signed certificate
at TLSSocket.onConnectSecure (node:_tls_wrap:1530:34)
at TLSSocket.emit (node:events:526:28)
at TLSSocket._finishInit (node:_tls_wrap:944:8)
at TLSWrap.ssl.onhandshakedone (node:_tls_wrap:725:12)
I already tried adding the basic SSL plugin, and I don't particularly want to install the other npm package suggested in the top-voted answer. Why does the Vite server complain about a self-signed certificate when I'm calling another API on my local machine, and what can I do to fix it?
You could try secure: false, which tells the proxy not to verify the upstream server's TLS certificate (your local IIS certificate is self-signed):
server: {
https: true,
proxy: {
'/api': {
target: 'https://localhost:44326', // The API is running locally via IIS on this port
changeOrigin: true,
secure: false,
rewrite: (path) => path.replace(/^\/api/, '') // The local API has a slightly different path
}
}
}
The full set of options is documented at https://github.com/http-party/node-http-proxy#options:
Options
httpProxy.createProxyServer supports the following options:
target: url string to be parsed with the url module
forward: url string to be parsed with the url module
agent: object to be passed to http(s).request (see Node's https agent and http agent objects)
ssl: object to be passed to https.createServer()
ws: true/false, if you want to proxy websockets
xfwd: true/false, adds x-forward headers
secure: true/false, if you want to verify the SSL Certs
toProxy: true/false, passes the absolute URL as the path (useful for proxying to proxies)
prependPath: true/false, Default: true - specify whether you want to prepend the target's path to the proxy path
ignorePath: true/false, Default: false - specify whether you want to ignore the proxy path of the incoming request (note: you will have to append / manually if required).
localAddress: Local interface string to bind for outgoing connections
changeOrigin: true/false, Default: false - changes the origin of the host header to the target URL
preserveHeaderKeyCase: true/false, Default: false - specify whether you want to keep letter case of response header key
auth: Basic authentication i.e. 'user:password' to compute an Authorization header.
hostRewrite: rewrites the location hostname on (201/301/302/307/308) redirects.
autoRewrite: rewrites the location host/port on (201/301/302/307/308) redirects based on requested host/port. Default: false.
protocolRewrite: rewrites the location protocol on (201/301/302/307/308) redirects to 'http' or 'https'. Default: null.
cookieDomainRewrite: rewrites domain of set-cookie headers. Possible values:
false (default): disable cookie rewriting
String: new domain, for example cookieDomainRewrite: "new.domain". To remove the domain, use cookieDomainRewrite: "".
Object: mapping of domains to new domains, use "*" to match all domains.
For example keep one domain unchanged, rewrite one domain and remove other domains:
cookieDomainRewrite: {
"unchanged.domain": "unchanged.domain",
"old.domain": "new.domain",
"*": ""
}
cookiePathRewrite: rewrites path of set-cookie headers. Possible values:
false (default): disable cookie rewriting
String: new path, for example cookiePathRewrite: "/newPath/". To remove the path, use cookiePathRewrite: "". To set path to root use cookiePathRewrite: "/".
Object: mapping of paths to new paths, use "*" to match all paths.
For example, to keep one path unchanged, rewrite one path and remove other paths:
cookiePathRewrite: {
"/unchanged.path/": "/unchanged.path/",
"/old.path/": "/new.path/",
"*": ""
}
headers: object with extra headers to be added to target requests.
proxyTimeout: timeout (in millis) for outgoing proxy requests
timeout: timeout (in millis) for incoming requests
followRedirects: true/false, Default: false - specify whether you want to follow redirects
selfHandleResponse true/false, if set to true, none of the webOutgoing passes are called and it's your responsibility to appropriately return the response by listening and acting on the proxyRes event
buffer: stream of data to send as the request body. Maybe you have some middleware that consumes the request stream before proxying it on e.g. If you read the body of a request into a field called 'req.rawbody' you could restream this field in the buffer option:
'use strict';
const streamify = require('stream-array');
const HttpProxy = require('http-proxy');
const proxy = new HttpProxy();
module.exports = (req, res, next) => {
proxy.web(req, res, {
target: 'http://localhost:4003/',
buffer: streamify(req.rawBody)
}, next);
};
Sample Code:
package main
import (
"fmt"
"net/http"
"net/http/httputil"
)
func main() {
client := &http.Client{
Transport: &http.Transport{
DisableCompression: true,
},
}
url := "https://google.com"
req, err := http.NewRequest(http.MethodGet, url, nil)
if err != nil {
return
}
//req.Header.Set("Accept-Encoding", "*")
//req.Header.Del("Accept-Encoding")
requestDump, err := httputil.DumpRequestOut(req, false)
if err != nil {
fmt.Println(err)
}
fmt.Println(string(requestDump))
client.Do(req)
}
Output:
GET / HTTP/1.1
Host: google.com
User-Agent: Go-http-client/1.1
Accept-Encoding: gzip
With only req.Header.Set("Accept-Encoding", "*") uncommented:
GET / HTTP/1.1
Host: google.com
User-Agent: Go-http-client/1.1
Accept-Encoding: *
With only req.Header.Del("Accept-Encoding") uncommented:
GET / HTTP/1.1
Host: google.com
User-Agent: Go-http-client/1.1
Accept-Encoding: gzip
With both lines uncommented:
GET / HTTP/1.1
Host: google.com
User-Agent: Go-http-client/1.1
Accept-Encoding: gzip
Does DisableCompression actually do anything to the HTTP Request itself?
According to the godocs:
// DisableCompression, if true, prevents the Transport from
// requesting compression with an "Accept-Encoding: gzip"
// request header when the Request contains no existing
// Accept-Encoding value. If the Transport requests gzip on
// its own and gets a gzipped response, it's transparently
// decoded in the Response.Body. However, if the user
// explicitly requested gzip it is not automatically
// uncompressed.
As per the documentation:
DumpRequestOut is like DumpRequest but for outgoing client requests.
It includes any headers that the standard http.Transport adds, such as User-Agent.
That means DumpRequestOut itself adds "Accept-Encoding: gzip" to the printed wire format, regardless of your own Transport's DisableCompression setting.
To test what is actually written to the connection, you need to wrap Transport.Dial or Transport.DialContext to provide a connection that logs the written data.
If you are using a transport that supports httptrace (which all built-in and "x/http/..." transport implementations do), you can set up a WroteHeaderField callback to inspect the written header fields.
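For example, here is a minimal sketch (reusing the google.com URL from the sample above) that uses httptrace to print each header field as the transport writes it; with DisableCompression set, you should not see an Accept-Encoding field among them:
package main

import (
	"fmt"
	"net/http"
	"net/http/httptrace"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{DisableCompression: true},
	}

	req, err := http.NewRequest(http.MethodGet, "https://google.com", nil)
	if err != nil {
		panic(err)
	}

	// WroteHeaderField fires for every header field the transport
	// actually writes to the wire.
	trace := &httptrace.ClientTrace{
		WroteHeaderField: func(key string, values []string) {
			fmt.Println(key, values)
		},
	}
	req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))

	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
}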
If you just need to inspect the headers, however, you can spin up an httptest.Server.
Playground link provided by @EmilePels:
https://play.golang.org/p/ZPi-_mfDxI8
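Along the same lines, a small sketch of the httptest.Server approach; the handler reports what the server actually received, which should be an empty Accept-Encoding when DisableCompression is set:
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

func main() {
	// The test server sees exactly what the client sent on the wire.
	ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Printf("received Accept-Encoding: %q\n", r.Header.Get("Accept-Encoding"))
	}))
	defer ts.Close()

	client := &http.Client{
		Transport: &http.Transport{DisableCompression: true},
	}
	resp, err := client.Get(ts.URL)
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
}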
Is there a way to intercept a bad HEAD request in a Go HTTP server? A bad request here would be sending a JSON payload with a HEAD request. I call it a bad request because when I attempt a HEAD request with a body via curl, that is the error I get back; however, no logging occurs in Go.
package main
import (
"fmt"
"log"
"net/http"
)
func handler(w http.ResponseWriter, r *http.Request) {
log.Println(r.Method, r.URL)
_, _ = fmt.Fprintf(w, "Hello")
}
func main() {
http.HandleFunc("/", handler)
log.Fatal(http.ListenAndServe(":8080", nil))
}
If I send a curl request without a body, it works as expected and a log entry is generated: 2019/11/28 10:58:59 HEAD /.
$ curl -i -X HEAD http://localhost:8080
Warning: Setting custom HTTP method to HEAD with -X/--request may not work the
Warning: way you want. Consider using -I/--head instead.
HTTP/1.1 200 OK
Date: Thu, 28 Nov 2019 16:03:22 GMT
Content-Length: 5
Content-Type: text/plain; charset=utf-8
However, if I send a curl request with a body, I get a Bad Request response but no log entry is written.
$ curl -i -X HEAD http://localhost:8080 -d '{}'
Warning: Setting custom HTTP method to HEAD with -X/--request may not work the
Warning: way you want. Consider using -I/--head instead.
HTTP/1.1 400 Bad Request
Content-Type: text/plain; charset=utf-8
Connection: close
400 Bad Request
I want to catch this error so I can send my own custom error message back. How can I intercept this?
You can't. The HTTP server of the standard lib does not provide any interception point or callback for this case.
The invalid request is "killed" before your handler would be called. You can see this in server.go, conn.serve() method:
w, err := c.readRequest(ctx)
// ...
if err != nil {
switch {
// ...
default:
publicErr := "400 Bad Request"
if v, ok := err.(badRequestError); ok {
publicErr = publicErr + ": " + string(v)
}
fmt.Fprintf(c.rwc, "HTTP/1.1 "+publicErr+errorHeaders+publicErr)
return
}
}
// ...
serverHandler{c.server}.ServeHTTP(w, w.req)
Go's HTTP server provides an implementation for handling incoming requests from clients that adhere to the HTTP protocol, which all browsers and other notable clients follow. Providing a fully customizable server is not one of the implementation's goals.
I have some Delphi code that connects to a servlet, and I'm trying to switch from TIdTCPClient to TIdHTTP.
I connect to the servlet this way:
try
lHTTP := TIdHTTP.Create( nil );
responseStream := TMemoryStream.Create;
lHTTP.Get(HttpMsg, responseStream);
SetString( html, PAnsiChar(responseStream.Memory), responseStream.Size);
AnotarMensaje( odDepurar, 'IMPFIS: Impresora fiscal reservada ' + html );
Where HttpMsg is localhost:6080/QRSRPServer/PedirImpresion?usuarioDMS=hector
All I'm getting is:
GET localhost:6080/QRSRPServer/PedirImpresion?usuarioDMS=hector HTTP/1.1
Content-Type: text/html
Accept: text/html, */*
User-Agent: Mozilla/3.0 (compatible; Indy Library)
HTTP/1.1 400 Bad Request
The HTTP dialog that I had before was like this
GET /QRSRPServer/PedirImpresion?usuarioDMS=hector HTTP/1.1
Host: localhost:6080
HTTP/1.1 200 OK
So I tried to add the Host header, setting the host to localhost:6080:
try
lHTTP := TIdHTTP.Create( nil );
lHTTP.Host := Host;
responseStream := TMemoryStream.Create;
lHTTP.Get(HttpMsg, responseStream);
SetString( html, PAnsiChar(responseStream.Memory), responseStream.Size);
AnotarMensaje( odDepurar, 'IMPFIS: Impresora fiscal reservada ' + html );
And I get
Socket Error # 11004
Where HttpMsg is localhost:6080/QRSRPServer/PedirImpresion?usuarioDMS=hector
HttpMsg must begin with http:// or https://:
http://localhost:6080/QRSRPServer/PedirImpresion?usuarioDMS=hector
You should be getting an EIdUnknownProtocol exception raised when TIdHTTP parses the URL and sees the missing protocol scheme.
TIdHTTP always sends a Host header, especially for an HTTP 1.1 request, yet you claim it is not. That is why you are getting a Bad Request error: HTTP 1.1 servers are required to reject an HTTP 1.1 request that omits that header.
You also claim that TIdHTTP is including the host and port values in the GET line. The ONLY time it ever does that is when connecting to a host through an HTTP proxy, but I don't see you configuring the TIdHTTP.ProxyParams property at all.
In short, TIdHTTP should not be behaving the way you claim.
The correct solution is to make sure you are passing a full URL to TIdHTTP.Get().
On a side note, your code requires html to be an AnsiString. You should change it to a standard string (which is AnsiString in D2007 and earlier) and let TIdHTTP return a string for you, then you don't need the TMemoryStream anymore:
html := lHTTP.Get(HttpMsg);
It was easier than I thought. I was assuming that having a "host" parameter that included the port would be enough, but looking at a Wireshark capture I saw it was sending everything over the standard HTTP port.
So this did the trick:
try
lHTTP := TIdHTTP.Create( nil );
lHTTP.Host := GatewayIp;
lHTTP.Port := GatewayPuerto;
responseStream := TMemoryStream.Create;
lHTTP.Request.CustomHeaders.Clear;
lHTTP.Request.CustomHeaders.Add('Host: ' + Host );
lHTTP.Get(HttpMsg, responseStream);
SetString( html, PAnsiChar(responseStream.Memory), responseStream.Size);
AnotarMensaje( odDepurar, 'IMPFIS: Impresora fiscal reservada ' + html );
I'm trying to send a POST request that needs a modified header.
Here is my code:
import (
"fmt"
"net/http"
"net/url"
"strings"
)
const API_URL = "https://api.site.com/api/"
func SendOne(str string) {
v := url.Values{}
v.Add("source", "12345678")
v.Add("text", str)
client := &http.Client{}
req, err := http.NewRequest("POST", API_URL, strings.NewReader(v.Encode()))
if err != nil {
fmt.Println(err)
}
req.Header.Add("Authorization", "123456")
res, err := client.Do(req)
if err != nil {
fmt.Println(err)
}
defer res.Body.Close()
}
I have no idea why the code doesn't work. Any clue?
Thanks in advance.
Edit: I forgot to say I was using OAuth 2.0 for authorization.
Using tcpdump, we can see that the request headers and body for the code you pasted look like this:
POST / HTTP/1.1
Host: example.com
User-Agent: Go 1.1 package http
Content-Length: 45
Authorization: 123456
Accept-Encoding: gzip
source=12345678&text=http%3A%2F%2Fexample.com
You mention in the comment above that it works if you add a Content-Type header. Doing the same thing and dumping the communication between the two peers, we get:
POST / HTTP/1.1
Host: example.com
User-Agent: Go 1.1 package http
Content-Length: 45
Authorization: 123456
Content-Type: application/x-www-form-urlencoded
Accept-Encoding: gzip
source=12345678&text=http%3A%2F%2Fexample.com
This is exactly the same as the prior payload, except that it now includes the provided Content-Type header. So, in terms of the behavior of the Go application itself, nothing special is happening beyond what you explicitly told it to do.
The reason it works when you add the Content-Type header, then, must be that the server you are talking to needs to know how the request body is encoded.
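To make that concrete, here is a minimal sketch of the fix against the SendOne function from the question (API_URL and the token value are the question's placeholders): set the Content-Type before calling client.Do.
req, err := http.NewRequest("POST", API_URL, strings.NewReader(v.Encode()))
if err != nil {
	fmt.Println(err)
	return
}
req.Header.Add("Authorization", "123456")
// url.Values.Encode() produces form-encoded data, so say so explicitly.
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")

res, err := client.Do(req)
if err != nil {
	fmt.Println(err)
	return
}
defer res.Body.Close()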