How to start a new HTTP server or use an existing one for pprof?

The pprof package documentation says:
"The package is typically only imported for the side effect of registering its HTTP handlers. The handled paths all begin with /debug/pprof/."
The documentation says that if you already have an HTTP server running you don't need to start another one, but that if you are not using DefaultServeMux, you will have to register the handlers with the mux you are using.
Shouldn't I always use a separate port for pprof? Is it okay to use the same port that I am using for Prometheus metrics?

net/http/pprof is a convenience package. It always registers handlers on DefaultServeMux, because DefaultServeMux is a global variable that it can actually do that with.
If you want to serve pprof results on some other ServeMux there's really nothing to it; all it takes is calling runtime/pprof.StartCPUProfile(w) with an http.ResponseWriter and then sleeping, or calling p.WriteTo(w, debug) on a runtime/pprof.Profile object. You can look at the source of net/http/pprof to see how it does it.
In a slightly better universe, net/http/pprof would have a RegisterHandlers(*http.ServeMux) function that could be used anywhere, you would be able to import it without anything being registered implicitly, and there would be another package (say net/http/pprof/sugar) that did nothing except call pprof.RegisterHandlers(http.DefaultServeMux) in its init. However, we don't live in that universe.


How to dispatch a message to several destinations with Apache Camel?

My problem seems simple, but I haven't yet found a way to solve it...
I have a legacy system which is working and a new system which will replace it. These are only REST web service calls, so I'm using a simple bridge endpoint on an HTTP service.
To verify that both behave identically, I want to put them behind a Camel route that dispatches each message to both systems but returns only the legacy system's response, and logs the responses of both systems to make sure they run the same way...
I created this route:
from("servlet:proxy?matchOnUriPrefix=true")
.streamCaching()
.setHeader("CamelHttpMethod", header("CamelHttpMethod"))
.to("log:com.mylog?showAll=true&multiline=true&showStreams=true")
.multicast()
.to(urlServer1 + "?bridgeEndpoint=true")
.to(urlServer2 + "?bridgeEndpoint=true")
.to("log:com.mylog?showAll=true&multiline=true&showStreams=true")
;
It works for calling each service and logging the messages, but the responses are a mess...
If the first server doesn't respond, the second is not called; if the second responds with an error, only that error is sent back to the client...
Any idea?
You can find more details in the multicast docs: http://camel.apache.org/multicast.html
The default behaviour of multicast (your case) is:
parallelProcessing is false, so the routes are called one by one
To implement your case correctly, you probably need to:
add error handling for each external service call, so that an exception will not stop correct processing
configure or implement an aggregation strategy and set it as the strategyRef, so you can combine the results from all the calls into a single multicast result
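Putting both points together, a sketch of the route might look like the following. This assumes Camel 2.x (where AggregationStrategy lives in org.apache.camel.processor.aggregate; in Camel 3.x it moved to org.apache.camel.AggregationStrategy), and the two endpoint URLs are placeholders for the urlServer1/urlServer2 values from the question:

```java
import org.apache.camel.Exchange;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.processor.aggregate.AggregationStrategy;

public class ProxyRoute extends RouteBuilder {

    // Placeholders for the real endpoint URIs from the question.
    private final String urlServer1 = "http://legacy-host/service";
    private final String urlServer2 = "http://new-host/service";

    @Override
    public void configure() {
        // Let an exception from one endpoint be logged and swallowed
        // instead of aborting the whole exchange.
        onException(Exception.class).continued(true);

        // Multicast aggregation: keep the first (legacy) reply and
        // discard the second, so the client only ever sees legacy output.
        AggregationStrategy keepLegacyReply = new AggregationStrategy() {
            @Override
            public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
                return oldExchange != null ? oldExchange : newExchange;
            }
        };

        from("servlet:proxy?matchOnUriPrefix=true")
            .streamCaching()
            .multicast(keepLegacyReply)
                // endpoints are called sequentially (parallelProcessing
                // defaults to false), legacy first
                .to(urlServer1 + "?bridgeEndpoint=true")
                .to(urlServer2 + "?bridgeEndpoint=true")
            .end()
            .to("log:com.mylog?showAll=true&multiline=true&showStreams=true");
    }
}
```

This is only a sketch of the shape of the solution; the comparison of the two responses would go either in the aggregation strategy (where both exchanges are visible) or in the per-endpoint log steps.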

How to use the example of scrapy-redis

I have read the example of scrapy-redis but still don't quite understand how to use it.
I have run the spider named dmoz and it works well. But when I start another spider named mycrawler_redis, it just gets nothing.
Besides, I'm quite confused about how the request queue is set. I didn't find any piece of code in the example project that illustrates the request queue setting.
And if the spiders on different machines want to share the same request queue, how can I get that done? It seems that I should first make the slave machines connect to the master machine's redis, but I'm not sure where to put the relevant code: in spider.py, or do I just type it on the command line?
I'm quite new to scrapy-redis and any help would be appreciated!
If the example spider is working and your custom one isn't, there must be something that you have done wrong. Update your question with the code, including all relevant parts, so we can see what went wrong.
"Besides I'm quite confused about how the request queue is set. I didn't find any piece of code in the example-project which illustrates the request queue setting."
As far as your spider is concerned, this is done by appropriate project settings, for example if you want FIFO:
# Enables scheduling storing requests queue in redis.
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
# Don't cleanup redis queues, allows to pause/resume crawls.
SCHEDULER_PERSIST = True
# Schedule requests using a queue (FIFO).
SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.SpiderQueue'
As far as the implementation goes, queuing is done via RedisSpider, which your spider must inherit from. You can find the code for enqueuing requests here: https://github.com/darkrho/scrapy-redis/blob/a295b1854e3c3d1fddcd02ffd89ff30a6bea776f/scrapy_redis/scheduler.py#L73
As for the connection, you don't need to connect to the redis machine manually; you just specify the host and port information in the settings:
REDIS_HOST = 'localhost'
REDIS_PORT = 6379
And the connection is configured in connection.py: https://github.com/darkrho/scrapy-redis/blob/a295b1854e3c3d1fddcd02ffd89ff30a6bea776f/scrapy_redis/connection.py
An example of usage can be found in several places, e.g.: https://github.com/darkrho/scrapy-redis/blob/a295b1854e3c3d1fddcd02ffd89ff30a6bea776f/scrapy_redis/pipelines.py#L17
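To make the moving parts concrete, here is a sketch of a spider that pulls its start URLs from a shared redis queue. It assumes scrapy-redis's RedisSpider base class; the class name and the redis_key value are made-up examples:

```python
# Sketch: a spider whose start URLs come from a redis list that all
# machines share. The key name 'mycrawler:start_urls' is an example.
from scrapy_redis.spiders import RedisSpider


class MyCrawler(RedisSpider):
    name = 'mycrawler_redis'
    redis_key = 'mycrawler:start_urls'  # the shared queue key in redis

    def parse(self, response):
        # Requests yielded here go through the shared redis scheduler,
        # so any connected machine may pick them up.
        yield {'url': response.url}
```

With REDIS_HOST on every slave pointing at the master, you seed the crawl once from any machine that can reach that redis, e.g. `redis-cli lpush mycrawler:start_urls http://example.com/` — a spider with an empty queue will sit idle, which is one common reason a spider "just gets nothing".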

Meteor: execute calls from client console "everywhere"

Meteor is said to automagically (in most cases) figure out what code to run on the client and what code to run on the server so you could theoretically just write all your code in one .js file.
I would like to be able to write code in my browser console and have it executed pretty much as if I had put the code in a file on my server.
For example, in my browser console:
[20:08:19.397] Pages = new Meteor.Collection("pages");
[20:08:30.612] Pages.insert({name:"bro"});
[20:08:30.614] "sGmRrQfezZMXuPfW8"
[20:08:30.618] insert failed: Method not found
Meteor says "method not found" because I need to do new Meteor.Collection("pages"); on the server.
But is there a workaround for this, whether using the above-mentioned automagic or by explicitly saying in my browser console "run the following line of code on the server!"?
Well it doesn't "automagically" figure it out - you have to very explicitly do one of two things:
Separate the code into client and server directories.
Wrap the code in an isClient or an isServer section.
Otherwise, any code you write will execute in both environments. However, any code input by the user on the client will only be executed on the client. Meteor has been specifically designed to protect this boundary.
You can call a method on the server from the client, but again the server cannot be tricked into executing client-defined functions.
In your specific example, you can always define the collection only on the client like so:
Pages = new Meteor.Collection(null);
That will allow you do freely manipulate the collection data on the client, but it will not involve the server (nothing will be stored in the db).
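If the data does need to reach the server, the supported path is a Meteor method defined in server code, which you can then invoke from the browser console. A sketch (the method name insertPage is made up for illustration):

```javascript
// Server-side code (e.g. in a server/ directory or an isServer block):
Pages = new Meteor.Collection("pages");

Meteor.methods({
  insertPage: function (name) {
    check(name, String);              // never trust client-supplied input
    return Pages.insert({ name: name });
  }
});
```

From the browser console you could then run `Meteor.call('insertPage', 'bro', function (err, id) { console.log(err, id); });` — the insert itself still executes only on the server, preserving the client/server boundary the answer describes.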

Programs have to access global variables in a package

I have a package with global variables related to an open file (*os.File) and its associated logger.
On the other hand, I'll build several commands that are going to use that package, and I don't want to open the file and set it up as the logger every time I run a command.
So the first program to run will set the global variables, and here is my question:
Can the next programs that use the package access those global variables without problems? A command could be created with one flag to initialize those values before they are used by other programs, and another flag to finish (unset the global variables in the package).
If that is not possible, what is the best option to avoid this I/O overhead? Using a server on a Unix socket?
Assuming by 'program' you actually mean 'process', the answer is no.
If you want to share (perhaps customized) logging functionality between processes, then I would consider a daemon-like process/server (AFAIK Go doesn't yet support writing true daemons) and any kind of IPC you find handy.

Canceling a WinUSB asynchronous control transfer

For a user application (not a driver) using WinUSB, I use WinUsb_ControlTransfer in combination with overlapped I/O to asynchronously send a control message. Is it possible to cancel the asynchronous operation? WinUsb_AbortPipe works for all other endpoints but gives an 'invalid parameter' error when the control endpoint is passed (0x00 or 0x80 as the pipe address). I also tried CancelIo and CancelIoEx, but both give an 'invalid handle' error on the WinUSB handle. The only related information I could find is at http://www.winvistatips.com/winusb-bugchecks-t335323.html, but it offers no solution. Is this just impossible?
Probably not useful to the original asker any more, but in case anyone else comes across this: you can use CancelIo() or CancelIoEx() with the file handle that you originally passed in to WinUsb_Initialize().
This is similar to how the documentation of WinUsb_GetOverlappedResult says:
This function is like the Win32 API routine, GetOverlappedResult, with one difference—instead of passing a file handle that is returned from CreateFile, the caller passes an interface handle that is returned from WinUsb_Initialize. The caller can use either API routine, if the appropriate handle is passed. The WinUsb_GetOverlappedResult function extracts the file handle from the interface handle and then calls GetOverlappedResult.
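In code, the whole trick is that the handle passed to CancelIoEx is the CreateFile handle, not the WINUSB_INTERFACE_HANDLE. A Windows-only sketch (the helper name is made up):

```c
#include <windows.h>

/* deviceFile must be the file handle originally passed to
   WinUsb_Initialize; ov is the OVERLAPPED structure used for the
   pending WinUsb_ControlTransfer call. */
static BOOL cancel_control_transfer(HANDLE deviceFile, OVERLAPPED *ov)
{
    if (CancelIoEx(deviceFile, ov))
        return TRUE;  /* cancellation was requested */

    /* ERROR_NOT_FOUND means the transfer already completed on its
       own before we could cancel it, which is not a failure. */
    return GetLastError() == ERROR_NOT_FOUND;
}
```

After a successful cancel request, the operation still completes through the OVERLAPPED mechanism (typically with ERROR_OPERATION_ABORTED), so you should still wait on it, e.g. via WinUsb_GetOverlappedResult, before freeing the buffer.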
