I'm running into an issue.
I'm writing an HTTP server with Deno and hitting the same problem I had with Node.js.
I use Postman to send a POST request to my server.
When I send the request to my localhost URL, the server works:
The Deno server code:
const listener = Deno.listen({ port: 8000 });
console.log(`Server is running on http://localhost:8000`);

for await (const conn of listener) {
  handleNewConnection(conn);
}

async function handleNewConnection(conn: Deno.Conn) {
  for await (const req of Deno.serveHttp(conn)) {
    await handleRequest(req.request, req.respondWith);
  }
}

async function handleRequest(req: Request, resp: any) {
  if (req.method === "POST") {
    let body = await req.json();
    console.log(body);
    resp(new Response(null));
  } else {
    console.log(req.method);
    resp(new Response(null));
  }
}
The request I'm sending to localhost:8000:
What I see in the console.log:
{
  name: "Molecule Man",
  age: 29,
  secretIdentity: "Dan Jukes",
  powers: [ "Radiation resistance", "Turning tiny", "Radiation blast" ]
}
Now here's what happens when I use a Cloudflare tunnel, which forwards requests directly to my server on port 8000.
The POST request I'm sending:
What I get in the console:
error: Uncaught (in promise) SyntaxError: Unexpected end of JSON input
let body = await req.json();
^
at parse (<anonymous>)
at packageData (deno:ext/fetch/22_body.js:328:16)
at Request.json (deno:ext/fetch/22_body.js:267:18)
at async handleRequest (file:///C:/Users/Utilisateur/Projets/poc-pdu/HTTP-serv-deno/http-serv-deno.ts:17:16)
at async handleNewConnection (file:///C:/Users/Utilisateur/Projets/poc-pdu/HTTP-serv-deno/http-serv-deno.ts:10:5)
That's the error. In my opinion, this is due to the time it takes for the request to arrive at its destination.
Thanks to everyone who takes the time to respond to this post.
Bye!
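"Unexpected end of JSON input" means `req.json()` ran on an empty or truncated body, which can happen when the tunnel forwards the request before (or without) its body. A hedged sketch of a defensive parse — `parseJsonBody` is my own helper name, not a Deno API, and it is written against the standard `Request` so it also runs outside Deno:

```javascript
// Hypothetical helper: read the raw body first, then parse it, so an
// empty or malformed body yields null instead of throwing
// "Unexpected end of JSON input".
async function parseJsonBody(req) {
  const text = await req.text(); // consume the body as a string
  if (!text) {
    return null; // no body arrived at all
  }
  try {
    return JSON.parse(text);
  } catch {
    return null; // body present but not valid JSON
  }
}
```

In `handleRequest` you would then branch on the `null` case (for example, respond with a 400) instead of letting the exception kill the connection loop.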
Related
Node.js and ngrok: it seems that ngrok is not receiving the GET method. Every time I make a GET request, the method shown in ngrok is OPTIONS /categorias instead of GET /categorias, and I'm not getting any response from the server.
The React fetch call:
try {
  const response = await fetch(global.config.Node_API + 'categorias', {
    method: 'GET'
  });
  if (!response.ok) {
    throw new Error(`Error: the connection failed`);
  }
  const result = await response.json();
  this.setState({ cont: 1, categor: result });
} catch (err) {
  console.log(err.message);
}
In the console I'm getting this error:
Access to fetch at 'https://5833-45-229-42-135.ngrok.io/categorias' from origin 'http://localhost:3001' has been blocked by CORS policy: Request header field content-type is not allowed by Access-Control-Allow-Headers in preflight response.
In Node.js I'm using:
app.use(cors())
I thought that event.passThroughOnException(); would set the fail-open strategy for my worker, so that if my code raises an exception, the original request is sent to my origin server. But it seems the passed-through request is missing the POST data. I think that's because the request body is a readable stream and, once read, it cannot be read again. How do I handle this scenario?
addEventListener('fetch', (event) => {
  event.passThroughOnException();
  event.respondWith(handleRequest(event));
});

async function handleRequest(event: FetchEvent): Promise<Response> {
  const response = await fetch(event.request);
  // do something here that potentially raises an exception
  // @ts-ignore
  ohnoez(); // deliberate failure
  return response;
}
As you can see in the image below, the origin server did not receive the body (foobar):
Unfortunately, this is a known limitation of passThroughOnException(). The Workers Runtime uses streaming for request and response bodies; it does not buffer the body. As a result, once the body is consumed, it is gone. So if you forward the request, and then throw an exception afterwards, the request body is not available to send again.
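The one-shot nature of the body can be seen directly with the Fetch API (runnable in any runtime with a global `Request`, e.g. Node 18+): clone the request before consuming it and the copy stays readable.

```javascript
// A Request body is a stream: it can be read only once. A clone()
// taken BEFORE the first read keeps an unread copy of the body,
// which is what a manual pass-through fallback relies on.
async function readTwice(req) {
  const backup = req.clone(); // must clone before consuming the body
  const first = await req.text(); // consumes the original stream
  const second = await backup.text(); // the clone is still unread
  return [first, second];
}
```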
Did a workaround by cloning event.request, then adding a try/catch in handleRequest. On catch(err), the request is sent to the origin with fetch, passing the cloned request.
// Pass request to whatever it requested
async function passThrough(request: Request): Promise<Response> {
  try {
    let response = await fetch(request)
    // Make the headers mutable by re-constructing the Response.
    response = new Response(response.body, response)
    return response
  } catch (err) {
    return ErrorResponse.NewError(err).respond()
  }
}

// request handler
async function handleRequest(event: FetchEvent): Promise<Response> {
  const request = event.request
  const requestClone = event.request.clone()
  let resp
  try {
    // handle request
    resp = await handler.api(request)
  } catch (err) {
    // Pass through manually on exception (event.passThroughOnException
    // does not pass the request body, so use this as a last resort)
    resp = await passThrough(requestClone)
  }
  return resp
}

addEventListener('fetch', (event) => {
  // Still call passThroughOnException here,
  // in case the `passThrough` function itself throws
  event.passThroughOnException()
  event.respondWith(handleRequest(event))
})
Seems to work OK so far. Would love to know if there are other solutions as well.
I want to know about good practices with Go, gRPC, and protobuf.
I am implementing the following gRPC service:
service MyService {
  rpc dosomethink(model.MyModel) returns (model.Model) {
    option (google.api.http) = { post: "/my/path" body: "" };
  }
}
I compiled the protobufs. The generated code gives us an HTTP proxy from HTTP to gRPC.
The code implementing this service:
import "google.golang.org/grpc/status"

func (Abcd) Dosomethink(c context.Context, sessionRequest *model.MyModel) (*model.Model, error) {
    return nil, status.New(400, "Default error message for 400")
}
I want a 400 HTTP error (in the HTTP proxy) with the message "Default error message for 400". The message works, but the HTTP status is always 500.
Do you know of any post or doc about this?
You need to return an empty model.Model object in order for protobufs to be able to properly serialise the message, and use a canonical gRPC status code rather than an HTTP one: the HTTP proxy translates gRPC codes into HTTP statuses, and codes.InvalidArgument maps to 400, while an unrecognised code such as the literal 400 maps to 500 (which is exactly what you're seeing).
Try

import (
    "google.golang.org/grpc/codes"
    "google.golang.org/grpc/status"
)

func (Abcd) Dosomethink(c context.Context, sessionRequest *model.MyModel) (*model.Model, error) {
    return &model.Model{}, status.Error(codes.InvalidArgument, "Default error message for 400")
}
Error handler:
import (
    "google.golang.org/grpc/codes"
    "google.golang.org/grpc/status"
)

return data, status.Errorf(
    codes.InvalidArgument,
    "Your message: %v", req.data,
)
For more info about error handling, take a look at the links below.
https://grpc.io/docs/guides/error.html
http://avi.im/grpc-errors/
I'm working with Meteor and trying to establish NTLM authentication. I'm using the HTTP package to go through the NTLM "handshake" with the remote server. However, I'm running into an issue that I think is caused by the connection not being kept alive. I can't make it all the way through the handshake, because the remote server expects me to reuse the same HTTP connection, but I'm making a new one every time (and I'm not sure how to avoid that). Below is some of what I'm using to do this. I'd appreciate any help here. Is there an alternative package to HTTP that has a keep-alive option? Can I somehow reuse the same connection with the default HTTP package? Thanks.
var msg1 = Ntlm.createMessage1(hostname);
HTTP.get(url, {
  headers: {
    "Authorization": "NTLM " + msg1.toBase64()
  }
}, function (error, result) {
  if (result != null) {
    var response = result.headers["www-authenticate"];
    var msg2 = Ntlm.decodeMessage2(response);
    // here I should respond to msg2 on the same kept-alive connection,
    // but I'm not, so it's failing
    var msg3 = Ntlm.createMessage3(msg2, hostname);
    HTTP.get(url, {
      headers: {
        "Authorization": "NTLM " + msg3.toBase64()
      }
    }, function (error, req) {
      if (req != null) {
        if (req.statusCode == 200) {
          Ntlm.authorized = true; // success
        } else {
          // error, what I'm getting now:
          // "401 - Unauthorized: Access is denied due to invalid credentials"
        }
      } else Ntlm.error(error);
    });
  } else Ntlm.error(error);
});
I don't think you can keep a connection alive with the default Meteor http package. However, there are extensions and replacements for it. One that seems to support setting keepalive to true is http-more. There is more info about the underlying "request" options here: https://github.com/request/request#requestoptions-callback
This error happens whenever the node process makes an HTTP request to get a user's information from a web API.
The scenario is:
I'm running a TCP server using node, and when it gets a "login" request from a client, it sends an HTTP GET request to another web API to retrieve the user's information.
As the number of users increases, the node process starts to throw "ETIMEDOUT" errors when retrieving user info, and once that happens, every request after that throws the same error.
I've tried performing the same request with wget and everything is fine, so I don't think it's a network problem.
Strangely, after increasing the open file limit to 10,0000 using ulimit -n, it works fine until the next jump in user count.
The fetch function is here:
fetchUserInfo = function(callback) {
  var http = require('http');
  var opt = {
    agent: false,
    host: 'www.someapi.net',
    port: 80,
    path: '/userInfo.php'
  };
  var body = '';
  var req = http.request(opt, function(res) {
    res.setEncoding('utf8');
    res.on('data', function(chunk) {
      body += chunk;
    });
    res.on('end', function() {
      if (callback) {
        callback(body);
      }
    });
  });
  req.on('error', function(e) {
    sys.log("User info fetch error: " + e.message);
    if (callback) {
      callback();
    }
  });
  req.end();
};
My environment is Debian GNU/Linux 6.0 with node v0.4.10.