Before Go 1.1.2 this code printed the last log.Println and executed the code that followed; since 1.2 I have had to run http.ListenAndServe() in a separate goroutine.
The changelog from 1.1.2 did not indicate any such change, nor do I know whether this is the right way to run a webserver alongside other code.
log.Println("Starting WebAPI Server")
http.HandleFunc("/", bot.httpHandler())
http.ListenAndServe(":8181", nil) // with preceeding "go" in 1.2 to make my program working
log.Println("Started WebAPI Server")
I'm sorry, but http.ListenAndServe has always blocked.
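For what it's worth, the usual pattern is exactly what the questioner landed on: run ListenAndServe in its own goroutine and let the rest of the program continue. A minimal sketch (the handler body and the final select are placeholders for real work):

package main

import (
	"log"
	"net/http"
)

func main() {
	log.Println("Starting WebAPI Server")
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello")) // placeholder handler
	})
	// ListenAndServe blocks until the server fails, so run it in its own
	// goroutine and report the error it eventually returns.
	go func() {
		if err := http.ListenAndServe(":8181", nil); err != nil {
			log.Fatal(err)
		}
	}()
	log.Println("Started WebAPI Server")
	select {} // stand-in for the rest of the program's work
}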
The pprof package documentation says:
"The package is typically only imported for the side effect of registering its HTTP handlers. The handled paths all begin with /debug/pprof/."
The documentation says that if you already have an HTTP server running you don't need to start another one, but that if you are not using DefaultServeMux, you will have to register the handlers with the mux you are using.
Shouldn't I always use a separate port for pprof? Is it okay to use the same port that I am using for Prometheus metrics?
net/http/pprof is a convenience package. It always registers handlers on DefaultServeMux, because DefaultServeMux is a global variable that it can actually do that with.
If you want to serve pprof results on some other ServeMux there's really nothing to it; all it takes is calling runtime/pprof.StartCPUProfile(w) with an http.ResponseWriter and then sleeping, or calling p.WriteTo(w, debug) on a runtime/pprof.Profile object. You can look at the source of net/http/pprof to see how it does it.
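For instance, a hand-rolled CPU-profile endpoint along those lines could look like this (a sketch; the 30-second window and the port are arbitrary):

package main

import (
	"log"
	"net/http"
	"runtime/pprof"
	"time"
)

// Roughly what net/http/pprof's /debug/pprof/profile handler does:
// stream a CPU profile straight into the response for a fixed window.
func cpuProfileHandler(w http.ResponseWriter, r *http.Request) {
	if err := pprof.StartCPUProfile(w); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	time.Sleep(30 * time.Second) // arbitrary profile duration
	pprof.StopCPUProfile()
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/debug/cpuprofile", cpuProfileHandler)
	log.Fatal(http.ListenAndServe(":6060", mux))
}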
In a slightly better universe, net/http/pprof would have a RegisterHandlers(*http.ServeMux) function that could be used anywhere, you would be able to import it without anything being registered implicitly, and there would be another package (say net/http/pprof/sugar) that did nothing except call pprof.RegisterHandlers(http.DefaultServeMux) in its init. However, we don't live in that universe.
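In the meantime, wiring the exported net/http/pprof handlers onto your own mux by hand is short enough. A sketch (the :6060 port is just convention):

package main

import (
	"log"
	"net/http"
	"net/http/pprof"
)

func main() {
	mux := http.NewServeMux()
	// net/http/pprof exports its handlers, so they can be attached to any
	// mux; the implicit registration on DefaultServeMux simply goes unused.
	mux.HandleFunc("/debug/pprof/", pprof.Index)
	mux.HandleFunc("/debug/pprof/cmdline", pprof.Cmdline)
	mux.HandleFunc("/debug/pprof/profile", pprof.Profile)
	mux.HandleFunc("/debug/pprof/symbol", pprof.Symbol)
	mux.HandleFunc("/debug/pprof/trace", pprof.Trace)
	log.Fatal(http.ListenAndServe(":6060", mux))
}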
I am working on a Shiny app that connects to Comscore using their API. Any attempt to execute POST commands inside a future/promise fails with the cryptic error:
Warning: Error in curl::curl_fetch_memory: Bulk data encryption algorithm failed in selected cipher suite.
This happens with any POST attempt, not only when/if I try to call Comscore's servers. As an example of a simple, harmless and uncomplicated POST request that fails, here is one:
library(httr)
library(future)
rubbish <- future(POST('https://appsilon.com/an-example-of-how-to-use-the-new-r-promises-package/'))
print(value(rubbish))
But everything works fine if I do not use futures/promises.
The problem I want to solve is that we currently have an app that works fine in a single-user environment, but it must be upgraded for a multiuser scenario, served by a dedicated Shiny Server machine. The app makes several such calls in a row (from a few dozen to some hundreds), taking from 5 to 15 minutes.
The code runs inside an observeEvent block, triggered by a button the user clicks once he has configured the request to be submitted.
My actual code is longer, and there are other lines both before and after the POST command to prepare the request and then process the received answer.
I have verified that all the lines before the POST command are executed, so the problem seems to lie there, in the attempt to POST to the outside world from inside a promise.
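For reference, the relevant block is structured roughly like this (a trimmed-down sketch: the button id, URL, and processing steps are placeholders, not my real code):

library(shiny)
library(httr)
library(future)
library(promises)
plan(multisession)

ui <- fluidPage(actionButton("submit", "Submit"))  # placeholder UI

server <- function(input, output, session) {
  observeEvent(input$submit, {
    future({
      # lines preparing the request would go here, then:
      POST("https://example.com/endpoint")  # placeholder URL
    }) %...>%
      status_code() %...>%
      print()  # stand-in for processing the received answer
  })
}

shinyApp(ui, server)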
I am using RStudio Server 1.1.453 along with R 3.5.0 on a RHEL server.
Package versions are:
shiny: 1.1.0
httr: 1.3.1
future: 1.9.0
promises: 1.0.1
Thanks in advance,
Since I installed Munin and enabled alerts, it has been intermittently sending me this for four of my hard drives:
WARNINGs: smartctl_exit_status is 4.00 (outside range [:1]).
The Munin documentation says:
"The ignoreexit parameter can be useful to exclude some bits in the smartctl exit code, which is a bit mask described in its man page, from consideration."
So I added the following to /etc/munin/plugin-conf.d/munin-node:
[smart_sdg;smart_sdh;smart_sdi;smart_sdj]
env.ignoreexit 4
(Note that this corresponds with the four drives sending the alerts.)
Alas, the alerts keep coming. I can't make them stop and I don't understand why. Is my config location wrong? Am I doing the configuration wrong? Why isn't this working? Any help?
It turns out that I had an old version of Munin, which doesn't support the env.ignoreexit parameter. I grabbed the latest version of the plugin from Munin's repository, and it seems to work.
This was a fun exploration. I wish Munin threw errors when you gave it configurations it didn't recognize, but alas.
I am writing some code where I download many pages from a web API, do some processing, and combine them into a data frame. The API takes ~30 seconds to respond to each request, so it would be convenient to send the request for the next page while doing the processing for the current page. I can do this using, e.g., mcparallel (see the sketch below), but that seems like overkill. The curl package claims that it can make non-blocking connections, but this does not seem to work for me.
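For concreteness, the mcparallel version I would like to avoid looks roughly like this (the URL is a placeholder):

library(parallel)  # mcparallel/mccollect, Unix only
library(httr)

# start the download of the next page in a forked child process...
job <- mcparallel(GET("https://httpbin.org/delay/1"))  # placeholder URL
# ...do the processing for the current page here, then collect the
# prefetched response from the child
next_page <- mccollect(job)[[1]]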
From vignette("intro", "curl"):
As of version 2.3 it is also possible to open connections in non-blocking mode. In this case readBin and readLines will return immediately with data that is available without waiting. For non-blocking connections we use isIncomplete to check if the download has completed yet.
library(curl)
con <- curl("https://httpbin.org/drip?duration=1&numbytes=50")
open(con, "rb", blocking = FALSE)
while (isIncomplete(con)) {
  buf <- readBin(con, raw(), 1024)
  if (length(buf))
    cat("received: ", rawToChar(buf), "\n")
}
close(con)
The expected result is that open() should return immediately, and then 50 asterisks should be printed progressively over one second as the results come in. For me, open() blocks for about a second, and then the asterisks are printed all at once.
Is there something else I need to do? Does this work for anyone else?
I am using R version 3.3.2, curl package version 3.1, and libcurl3 version 7.47.0 on Ubuntu 16.04 LTS. I have tried in RStudio and the command line R console, with the same results.
I have a Symfony2 application with RabbitMQBundle installed. I've set up consumers and producers as described in the bundle documentation, and everything works correctly. But my consumers, started with ./app/console rabbitmq:consumer, take all available CPU time. The consumer basically does nothing but wait for a message and output it. If I start the demo consumer from php-amqplib, CPU consumption is almost zero. I tried different versions of Symfony (2.6 and 2.3), but this does not affect CPU load. My server configuration:
Debian 7
PHP 5.6.4 (also tried 5.4)
no database used
RabbitMQ 3.4.2
Is there any way to reduce CPU consumption? Thanks
I just ran into a very similar issue, and after some debugging I realized that I was using an old way of instantiating the connection to RabbitMQ.
The new signature of the method is described here: https://github.com/videlalvaro/php-amqplib/blob/master/PhpAmqpLib/Connection/AbstractConnection.php#L136
I was sending in something that looked more like this:
$this->connection = new Connection\AMQPConnection(
    $server->host,
    $server->port,
    $server->user,
    $server->password,
    $server->vhost,
    $server->insist,
    $server->login_method,
    $server->locale,
    $server->connection_timeout,
    $server->read_write_timeout,
    $server->context,
    $server->keepalive,
    $server->heartbeat
);
As per a very old definition from somewhere around version 2: https://github.com/videlalvaro/php-amqplib/blob/v2.0.0/PhpAmqpLib/Connection/AMQPConnection.php#L31
So your plugin seems to use a new version of the library, but not the new way of initiating a connection.
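For comparison, the newer style is much shorter. A sketch with placeholder credentials (AMQPStreamConnection is the usual concrete subclass):

use PhpAmqpLib\Connection\AMQPStreamConnection;

// host, port, user, password, vhost (placeholder values)
$connection = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest', '/');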