I'm new to nginx and I'm trying to develop a simple nginx module, a handler module specifically. Although it's not what I really want to do, I'm trying to finish this task first. I'm trying to get the socket fd when a browser (or a client) connects to nginx, and I have gotten it successfully. However, when I try to output something using dup2(), nginx always hangs and outputs nothing. Sometimes I get output after a long time, and once I stop nginx (nginx -s stop), the output appears immediately.
Like this: when I reach http://100.100.60.199/nc?search=123456, I get
search=123456 HTTP/1.1
HOST
as output.
I have read some blogs about nginx modules and I found that a handler module has its own pattern (to my understanding?). For example, the output should be an ngx_chain_t, and I should construct that chain instead of using dup2 like in regular C code. So I wonder if it's feasible to get output with the function below.
Here is my handler function:
static ngx_int_t ngx_http_nc_handler(ngx_http_request_t *r){
    //ngx_int_t rc;
    ngx_socket_t connfd = r->connection->fd;
    int nZero = 0;
    //if(setsockopt(connfd, SOL_SOCKET, SO_SNDBUF, (const void*)&nZero, sizeof(nZero)) == 0)
    if(setsockopt(connfd, IPPROTO_TCP, TCP_NODELAY, (const void*)&nZero, sizeof(int)) == 0){
        setbuf(stdout, NULL);
        setbuf(stdin, NULL);
        setbuf(stderr, NULL);
        dup2(connfd, STDOUT_FILENO);
        dup2(connfd, STDERR_FILENO);
        dup2(connfd, STDIN_FILENO);
        printf("%s\n", r->args.data);
        //close(connfd);
    }
    return NGX_OK;
}
So I wonder if this is feasible: how can I get it right using the method above, or can anybody just say it's impossible and that constructing a chain is the only way?
I finally solved this problem by trying to understand how exactly nginx works. In short, all I needed to do was add an HTTP header to the output. But it wasn't as easy as what I described.
Basically, I'm developing an HTTP endpoint to expose the metrics from the prometheus package.
Following the instructions in this link [https://stackoverflow.com/a/65609042/17150602], I created a handler to be able to call promhttp.Handler() like so:
g.GET("/metrics", prometheusHandler())
func prometheusHandler() gin.HandlerFunc {
    h := promhttp.Handler()
    return func(c *gin.Context) {
        h.ServeHTTP(c.Writer, c.Request)
    }
}
The thing is, when I call localhost:1080/metrics the output looks like this (by the way, I'm using Postman):
Postman request to get metrics with wrong output
But if, for instance, I change the port and use http instead of gin package like so:
http.Handle("/metrics", promhttp.Handler())
http.ListenAndServe(promAddr, nil)
The output shows OK as you can see here:
Postman request to get metrics with correct output
What is happening here, and why?
Thanks
You probably forgot to remove a compression middleware like gzip.
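A minimal sketch of one way to handle this, assuming the gin-contrib/gzip middleware is what's compressing the /metrics response (its WithExcludedPaths option is used here); adjust for whatever compression middleware you actually registered:

package main

import (
    "github.com/gin-contrib/gzip"
    "github.com/gin-gonic/gin"
    "github.com/prometheus/client_golang/prometheus/promhttp"
)

func prometheusHandler() gin.HandlerFunc {
    h := promhttp.Handler()
    return func(c *gin.Context) {
        h.ServeHTTP(c.Writer, c.Request)
    }
}

func main() {
    g := gin.New()
    // Keep gzip for everything else, but skip it for the metrics endpoint so
    // Postman receives the plain-text exposition format instead of a compressed body.
    g.Use(gzip.Gzip(gzip.DefaultCompression, gzip.WithExcludedPaths([]string{"/metrics"})))
    g.GET("/metrics", prometheusHandler())
    g.Run(":1080")
}

Alternatively, register the /metrics route on a router group that simply doesn't use the compression middleware.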
My use case is simple: I have a Postman Collection with a few requests, one of them is:
GET www.example.com/stores?country={{country}}
Then a simple Test:
pm.test("Check number of stores", function () {
var jsonData = pm.response.json();
pm.expect(jsonData.stores.length).to.equal(pm.iterationData.get("size"));
});
So everything is nice and merry with the following Collection data used in the Collection Runner:
country,size
UK,15
US,32
However, when I simply want to run this via the main Postman window, the request and the test obviously fail. I can set a collection variable country to SE, but I have no idea how to set size in pm.iterationData just to check whether my request and test script work for another "country", without running the whole collection/iterations.
Thanks in advance for all the help!
I'm not sure if you can modify an iteration data variable at runtime, but a workaround for this issue is to copy your request from the original folder into a new folder. Then you can run that whole folder with only one request in it, so you don't have to run all the requests in the collection.
I am defining a pipeline in Data Factory, and I had some errors that I corrected.
The first activity calls a U-SQL script to do some aggregation. I changed the script plenty of times, but the error is still:
[{"errorId":"E_CSC_USER_SYNTAXERROR","severity":"Error","component":"CSC","source":"USER","message":"syntax
error. Final statement did not end with a semicolon","details":"at
token 'usql', line 4\r\nnear the ###:\r\n**************\r\nCLARE
#lineitemsfile string =
\"/datalakerepo/input/2016/01/01lineitems.txt\";\nDECLARE #ordersfile
string = \"/datalakerepo/input/2016/01/01orders.txt\";\nsales.usql ###
\n","description":"Invalid syntax found in the
script.","resolution":"Correct the script syntax, using expected
token(s) as a
guide.","helpLink":"","filePath":"","lineNumber":4,"startOffset":228,"endOffset":232}].
It seems like not all of the U-SQL script is being read by Data Factory, so I thought that "rerun with upstream in pipeline" might have something to do with this, like clearing the cache from a previous script.
Does anyone know what "rerun with upstream in pipeline" does?
Many thanks!
"Rerun with upstream in pipeline" basically means "recalculate with all dependencies". For example, if one has pipeline1 -> dataset1 -> pipeline2 and tries to rerun pipeline2 with dependecies, then pipeline1 and pipeline2 will be both executed. I believe it works same with several chained activities within single pipeline.
I've reached a limit on the number of requests I can make using the default httpClient that my API wrapper provides.
//I was getting something like this at the beginning
Head www.example.com:80/: lookup example.com: no such host
To solve this I want to change the MaxIdleConnsPerHost setting for the httpClient.Transport of my client. However, the Transport my client uses is a RoundTripper, and consequently it doesn't have a MaxIdleConnsPerHost param. It looks much more like this:
&oauth2.Transport{Source:(*oauth2.reuseTokenSource)(0xc2082ac0c0),
Base:http.RoundTripper(nil),
mu:sync.Mutex{state:0, sema:0x0},
modReq:map[*http.Request]*http.Request(nil)
}
The one I'm providing is mostly a default one; I suppose it lacks the proper configuration, or maybe what I want to do simply isn't feasible.
&http.Transport{idleMu:sync.Mutex{state:0, sema:0x0},
idleConn:map[http.connectMethodKey][]*http.persistConn(nil),
idleConnCh:map[http.connectMethodKey]chan *http.persistConn(nil),
reqMu:sync.Mutex{state:0, sema:0x0},
reqCanceler:map[*http.Request]func()(nil),
altMu:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0},
writerSem:0x0,
readerSem:0x0,
readerCount:0,
readerWait:0},
altProto:map[string]http.RoundTripper(nil),
Proxy:(func(*http.Request) (*url.URL, error))(nil),
Dial:(func(string, string) (net.Conn, error))(nil),
TLSClientConfig:(*tls.Config)(nil),
TLSHandshakeTimeout:0,
DisableKeepAlives:false,
DisableCompression:false,
MaxIdleConnsPerHost:200,
ResponseHeaderTimeout:0}
Can someone point me in the right direction?
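For reference, a minimal sketch of one way this is commonly wired, assuming the client comes from golang.org/x/oauth2 (which picks up a custom *http.Client from the context and uses it as the Base of its oauth2.Transport); newAPIClient is just an illustrative name, not part of the original code:

package apiclient

import (
    "context"
    "net/http"

    "golang.org/x/oauth2"
)

// newAPIClient builds an *http.Client whose underlying http.Transport raises
// MaxIdleConnsPerHost, and hands it to the oauth2 package via the context so
// it becomes the base transport for authenticated requests.
func newAPIClient(conf *oauth2.Config, tok *oauth2.Token) *http.Client {
    base := &http.Client{
        Transport: &http.Transport{
            MaxIdleConnsPerHost: 200, // allow more idle connections per host
        },
    }
    ctx := context.WithValue(context.Background(), oauth2.HTTPClient, base)
    return conf.Client(ctx, tok)
}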
I want to have a behavior like this:
Camel reads a file from a directory, splits it into chunks (using streaming), sends each chunk to a seda queue for concurrent processing, and after the processing is done, a report generator is invoked.
This is my camel route:
from("file://c:/mydir?move=.done")
.to("bean:firstBean")
.split(ExpressionBuilder.beanExpression("splitterBean", "split"))
.streaming()
.to("seda:processIt")
.end()
.to("bean:reportGenerator");
from("seda:processIt")
.to("bean:firstProcessingBean")
.to("bean:secondProcessingBean");
When I run this, the reportGenerator bean is run concurrently with the seda processing.
How to make it run once after the whole seda processing is done?
The splitter has built-in parallel processing, so you can do this more easily as follows:
from("file://c:/mydir?move=.done")
.to("bean:firstBean")
.split(ExpressionBuilder.beanExpression("splitterBean", "split"))
.streaming().parallelProcessing()
.to("bean:firstProcessingBean")
.to("bean:secondProcessingBean");
.end()
.to("bean:reportGenerator");
You can see more details about the parallel option at the Camel splitter page: http://camel.apache.org/splitter
I think you can use the delayer pattern of Camel on the second route to achieve the purpose: delay(long), in which the argument indicates time in milliseconds. You can read more about this pattern here.
For example:
from("seda:processIt").delay(2000)
    .to("bean:firstProcessingBean"); // delays this route by 2 seconds
I'd suggest using startupOrder to configure the startup order of the routes, though.
The official documentation provides good details on the topic. Kindly read it here
Point to note - " The routes with the lowest startupOrder is started first. All startupOrder defined must be unique among all routes in your CamelContext."
So, I'd suggest something like this -
from("endpoint1").startupOrder(1)
.to("endpoint2");
from("endpoint2").startupOrder(2)
.to("endpoint3");
Hope that helps.
PS: I'm new to Apache Camel and to Stack Overflow as well. Kindly pardon any mistake that might've occurred.