Erlang stopping application doesn't end all processes?

When I stop an Erlang application that I built, the cowboy listener process stays alive and continues to handle requests. In the gen_server that I wrote, I start the listener in init/1, as you can see below:
init([Port]) ->
    Dispatch = cowboy_router:compile([
        {'_', [
            {"/custom/[...]", ?MODULE, []},
            % Serve index.html as default file
            % Serve entire directory
            {"/[...]", cowboy_static, {priv_dir, app, "www"}}
        ]}
    ]),
    Name = custom_name,
    {ok, Pid} = cowboy:start_http(Name, 100,
        [{port, Port}],
        [{env, [{dispatch, Dispatch}]}]),
    {ok, #state{handler_pid = Pid}}.
This starts the cowboy HTTP server, which uses cowboy_static to serve some stuff from the priv/app/ dir and the current module to handle custom stuff (the module implements all the cowboy HTTP handler callbacks). It takes the pid returned from the call and assigns it to handler_pid in the state record. This all works. However, when I start the application containing this module (which works) and then stop it, all processes end (at least the ones in my application). The custom handler (which is implemented in the same module as the gen_server) no longer works, but the cowboy_static handler continues to handle requests. It continues to serve static files until I kill the node. I tried fixing this by adding this to the gen_server:
terminate(_Reason, State) ->
    exit(State#state.handler_pid, normal),
    cowboy:stop_listener(listener_name()),
    ok.
But nothing changes: the cowboy_static handler continues to serve static files.
Questions:
Am I doing anything wrong here?
Is cowboy_static running under the cowboy application? I assume it is.
If so, how do I stop it?
And also, should I be concerned about stopping it? Maybe this isn't that big a deal.
Thanks in advance!

I don't think it is really important; generally you run one node/VM per application (in fact a bunch of Erlang applications working together, but I don't have a better word for it). But I think you can stop the server using application:stop(cowboy), application:stop(ranch).

You should fix 3 things (a sketch with all three fixes follows the list):
- the name in start_http(Name, ...) and stop_listener(Name) should match;
- trap exits in the server's init: process_flag(trap_exit, true), otherwise terminate/2 is not called on a supervisor shutdown;
- remove the exit call from terminate.
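
For illustration, a minimal sketch of the corrected callbacks with all three fixes applied (the ?LISTENER macro is a stand-in for whatever listener name you choose):

-define(LISTENER, custom_name).

init([Port]) ->
    %% Trap exits so terminate/2 actually runs when the supervisor
    %% shuts this server down along with the application.
    process_flag(trap_exit, true),
    Dispatch = cowboy_router:compile([
        {'_', [
            {"/custom/[...]", ?MODULE, []},
            {"/[...]", cowboy_static, {priv_dir, app, "www"}}
        ]}
    ]),
    {ok, Pid} = cowboy:start_http(?LISTENER, 100,
        [{port, Port}],
        [{env, [{dispatch, Dispatch}]}]),
    {ok, #state{handler_pid = Pid}}.

terminate(_Reason, _State) ->
    %% Stop the listener under the same name it was started with;
    %% no exit/2 call on the pid is needed.
    cowboy:stop_listener(?LISTENER),
    ok.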

Related

How to get visibility into completion queue on C++ gRPC server

Note: Help with the immediate problem would be great, but mostly I'm looking for advice on troubleshooting gRPC timing issues in general (this isn't my first such issue).
I am adding a new server streaming service to a C++ module which has an existing server streaming service, and the two appear to be conflicting. Specifically, the completion queue Next() call on the server is crashing intermittently after the C# client calls Cancel() on the cancellation token for one of the services. This doesn't happen if I run each service independently.
On the client, I get this at the response stream MoveNext() call:
System.InvalidOperationException
HResult=0x80131509
Message=Shutdown has already been called
Source=Grpc.Core
StackTrace:
at Grpc.Core.Internal.CompletionQueueSafeHandle.BeginOp()
at Grpc.Core.Internal.CallSafeHandle.StartReceiveMessage(IReceivedMessageCallback callback)
at Grpc.Core.Internal.AsyncCallBase`2.ReadMessageInternalAsync()
at Grpc.Core.Internal.ClientResponseStream`2.<MoveNext>d__5.MoveNext()
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()
at MyModule.Connection.<DoSubscriptionReceives>d__7.MoveNext() in C:\snip\Connection.cs:line 67
On the server, I get this at the completion queue Next() call:
Exception thrown: read access violation.
core_cq_tag->**** was 0xDDDDDDDD.
The stack trace:
MyModule.exe!grpc_impl::CompletionQueue::AsyncNextInternal(void * * tag, bool * ok, gpr_timespec deadline) Line 59 C++
> MyModule.exe!grpc_impl::CompletionQueue::Next(void * * tag, bool * ok) Line 176 C++
...snip...
It appears something is being added to the queue after shutdown. The difficulty is I have little visibility into what is being added into the queue and in what order.
I'm trying to write a server-side interceptor to log all requests & responses, but there seems to be no documentation. So far, poking through the API hasn't gotten me very far. Is there any documentation available on wiring up an interceptor in C++? Or, are there other approaches for troubleshooting timing conflicts between services?
Windows 11, Grpc.Core 1.27
What I've tried:
I first played with the GRPC_TRACE & GRPC_VERBOSITY environment variables. I was able to get some unhelpful output from the client, but nothing from the server. Of course, there's been lots of debugging: stripping the client & server down to bare bones, disabling keep-alives, ensuring we aren't using deadlines, having the services share a cancellation token, etc.
Update: I have found that the crash only happens when the client is run from an NUnit test. In that environment, the completion queue is getting more hits on Next(), but I'm still trying to figure out where they are coming from.
Is 1.27 the version you are using? That seems pretty old; there might have been fixes since then.
For using the C++ server interception API, I think you would find this very useful - https://github.com/grpc/grpc/blob/0f2a0f5fc9b9e9b9c98d227d16575d106f1e8d43/test/cpp/end2end/server_interceptors_end2end_test.cc#L48
One suggestion I have is to run the code under sanitizers (https://github.com/google/sanitizers) to make sure there isn't a heap-use-after-free type bug.
I would also check for API misuse issues. (If you had posted the code, I could have taken a look to see if anything seems weird.)
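
To make that concrete, here is a minimal sketch of a logging server interceptor modeled on the pattern in that test file (the hook points logged here and the factory wiring are illustrative choices, not the only option):

#include <iostream>
#include <memory>
#include <vector>

#include <grpcpp/grpcpp.h>
#include <grpcpp/support/server_interceptor.h>

class LoggingInterceptor : public grpc::experimental::Interceptor {
public:
    explicit LoggingInterceptor(grpc::experimental::ServerRpcInfo* info)
        : info_(info) {}

    void Intercept(grpc::experimental::InterceptorBatchMethods* methods) override {
        // Log when a request first reaches the server...
        if (methods->QueryInterceptionHookPoint(
                grpc::experimental::InterceptionHookPoints::POST_RECV_INITIAL_METADATA)) {
            std::cout << "RPC started: " << info_->method() << std::endl;
        }
        // ...and when the server is about to send its final status.
        if (methods->QueryInterceptionHookPoint(
                grpc::experimental::InterceptionHookPoints::PRE_SEND_STATUS)) {
            std::cout << "RPC finishing: " << info_->method() << std::endl;
        }
        methods->Proceed();  // always let the batch continue
    }

private:
    grpc::experimental::ServerRpcInfo* info_;  // not owned
};

class LoggingInterceptorFactory
    : public grpc::experimental::ServerInterceptorFactoryInterface {
public:
    grpc::experimental::Interceptor* CreateServerInterceptor(
            grpc::experimental::ServerRpcInfo* info) override {
        return new LoggingInterceptor(info);
    }
};

// Registration when building the server:
//   std::vector<std::unique_ptr<grpc::experimental::ServerInterceptorFactoryInterface>> creators;
//   creators.push_back(std::make_unique<LoggingInterceptorFactory>());
//   builder.experimental().SetInterceptorCreators(std::move(creators));

Since the interceptor sees every batch on every call, even this bare-bones version should help reveal what is still hitting the completion queue after shutdown.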

How to make async requests using HTTPoison?

Background
We have an app that deals with a considerable number of requests per second. This app needs to notify an external service by making a GET call via HTTPS to one of our servers.
Objective
The objective here is to use HTTPoison to make async GET requests. I don't really care about the response of the requests; all I need to know is whether they failed or not, so I can write any errors to a logger.
If a request succeeds, I don't want to do anything.
Research
I have checked the official documentation for HTTPoison and I see that they support async requests:
https://hexdocs.pm/httpoison/readme.html#usage
However, I have 2 issues with this approach:
They use flush to show that the request was completed. I can't log into the app and manually flush to see how the requests are going; that would be insane.
They don't show any notification mechanism for when we get the responses or errors.
So, I have a simple question:
How do I get asynchronously notified that my request failed or succeeded?
I assume that the default HTTPoison.get is synchronous, as shown in the documentation.
This could be achieved by spawning a new process per request. Consider something like:
notify = fn response ->
  # Any handling logic - write to DB? Send a message to another process?
  # Here, I'll just print the result
  IO.inspect(response)
end

spawn(fn ->
  resp = HTTPoison.get("http://google.com")
  notify.(resp)
end) # spawn will not block, so the next spawn executes straight away

spawn(fn ->
  resp = HTTPoison.get("http://yahoo.com")
  notify.(resp)
end) # This is executed immediately after the previous `spawn`
Please take a look at the documentation for spawn/1 that I've pointed out here.
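
If you want supervision and failure logging rather than bare spawn/1, here is a sketch of the same idea using Task.Supervisor (the MyApp.TaskSupervisor name and Notifier module are made up; it assumes {Task.Supervisor, name: MyApp.TaskSupervisor} is in your supervision tree):

defmodule Notifier do
  require Logger

  # Fire-and-forget GET: log on failure, do nothing on success.
  def notify_async(url) do
    Task.Supervisor.start_child(MyApp.TaskSupervisor, fn ->
      case HTTPoison.get(url) do
        {:ok, %HTTPoison.Response{status_code: code}} when code in 200..299 ->
          :ok
        {:ok, %HTTPoison.Response{status_code: code}} ->
          Logger.error("GET #{url} returned HTTP #{code}")
        {:error, %HTTPoison.Error{reason: reason}} ->
          Logger.error("GET #{url} failed: #{inspect(reason)}")
      end
    end)
  end
end

Task.Supervisor.start_child/2 does not block the caller, just like spawn/1, but failed tasks are reported through your supervision tree.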
Hope that helps!

F# Http.AsyncRequestStream just 'hangs' on long queries

I am working with:
let callTheAPI = async {
    printfn "\t\t\tMAKING REQUEST at %s..." (System.DateTime.Now.ToString("yyyy-MM-ddTHH:mm:ss"))
    let! response = Http.AsyncRequestStream(url, query, headers, httpMethod, requestBody)
    printfn "\t\t\t\tREQUEST MADE."
}
And
let cts = new System.Threading.CancellationTokenSource()
let timeout = 1000 * 60 * 4 // 4 minutes (no grace)
cts.CancelAfter(timeout)
Async.RunSynchronously(callTheAPI, timeout, cts.Token)
use respStrm = response.ResponseStream
respStrm.Flush()
writeLinesTo output (responseLines respStrm)
to call a web API (REST), and the let! response = Http.AsyncRequestStream(url,query,headers,httpMethod,requestBody) line just hangs on certain queries, particularly ones that take a long time (>4 minutes). This is why I have made it async and put a 4-minute timeout on it. (I collect the calls that time out and re-run them with smaller time-range parameters.)
I started with Http.RequestStream from FSharp.Data, but I couldn't add a timeout to it, so the script would just 'hang'.
I have looked at the API's IIS server and the application pool Worker Process active requests in IIS manager and I can see the requests come in and go again. They then 'vanish' and the F# script hangs. I can't find an error message anywhere on the script side or server side.
I included the Flush() and removed the timeout (removing the async in the process), and it still hung.
Additional:
Successful calls are made, and failed calls can be followed by successful calls. However, it seems to get to a point where all the calls time out, and they do so without even reaching the server any more (Worker Process Active Requests doesn't show the query).
Update:
I made the .fsx script output the queries and ran them through IRM with no issues (I set a timeout and it never locks up). I have a suspicion that there is an issue with FSharp.Data.Http.
Async.RunSynchronously blocks. Read the remarks section in the docs: RunSynchronously. Instead, use Async.AwaitTask.
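
As a sketch of that suggestion (withTimeout is a made-up helper, not part of FSharp.Data): start the workflow as a Task and race it against Task.Delay, so a wedged request can no longer block past the deadline.

open System.Threading.Tasks

let withTimeout (ms: int) (work: Async<'T>) = async {
    let task = Async.StartAsTask work
    // Await whichever finishes first: the request or the delay.
    let! winner = Async.AwaitTask (Task.WhenAny(task :> Task, Task.Delay ms))
    if obj.ReferenceEquals(winner, task) then return Some task.Result
    else return None  // timed out; abandon the request
}

A script still needs one top-level Async.RunSynchronously (or .Result) to extract the value, but the request itself is now bounded by the delay rather than by the remote server.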

Haskell System.Timeout.timeout crashing when called from certain function

I'm scraping some data from the frontpages of a list of website domains. Some of them are not answering, or are very slow, causing the scraper to halt.
I wanted to solve this by using a timeout. The various HTTP libraries available don't seem to support that, but System.Timeout.timeout seems to do what I need.
Indeed, it seems to work fine when I test the scraping function, but it crashes as soon as I run the enclosing function: (Sorry for bad/ugly code. I'm learning.)
fetchPage domain =
    -- Try to read the file from disk.
    catch
        (System.IO.Strict.readFile $ "page cache/" ++ domain)
        (\e -> downloadAndCachePage domain)

downloadAndCachePage domain =
    catch
        (do
            -- Failed, so try to download it.
            -- This crashes when called by fetchPage, but works fine when called directly.
            maybePage <- timeout 5000000 (simpleHTTP (getRequest ("http://www." ++ domain)) >>= getResponseBody)
            let page = fromMaybe "" maybePage
            -- This mostly works, but won't time out if the domain is slow. (lswb.com.cn)
            -- page <- (simpleHTTP (getRequest ("http://www." ++ domain)) >>= getResponseBody)
            -- Cache it.
            writeFile ("page cache/" ++ domain) page
            return page)
        (\e -> catch
            (do
                -- Failed, so just fuggeddaboudit.
                writeFile ("page cache/" ++ domain) ""
                return "")
            (\e -> return "")) -- Failed BIG, so just don't give a crap.
downloadAndCachePage works fine with the timeout when called from the REPL, but fetchPage crashes. If I remove the timeout from downloadAndCachePage, fetchPage works.
Anyone who can explain this, or know an alternative solution?
Your catch handler in fetchPage looks wrong -- it seems you're trying to read a file, and on file not found exception are directly calling into your http function from the exception handler. Don't do this. For complicated reasons, as I recall, code in exception handlers doesn't always behave like normal code -- particularly when it attempts to handle exceptions itself. And indeed, under the covers, timeout uses asynchronous exceptions to kill threads.
In general, you should put as little code as possible in exception handlers, and especially not put code that tries to handle further exceptions (although it is generally fine to reraise a handled exception to "pass it on" [as with bracket]).
That said, even if you're not doing the right thing, a crash (if it is a segfault type crash as opposed to a <<loop>> type crash), even from weird code, is nearly always wrong behavior from GHC, and if you're on GHC 7 then you should consider reporting this.
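
To illustrate the restructuring, here is a sketch using try so the download happens in normal code rather than inside an exception handler (SomeException is deliberately broad here; in real code you would catch something narrower, like IOException for the cache read):

import Control.Exception (SomeException, try)
import Data.Maybe (fromMaybe)
import Network.HTTP (getRequest, getResponseBody, simpleHTTP)
import System.Timeout (timeout)
import qualified System.IO.Strict

fetchPage :: String -> IO String
fetchPage domain = do
    -- Inspect the Either instead of doing real work inside a handler.
    cached <- try (System.IO.Strict.readFile ("page cache/" ++ domain))
                  :: IO (Either SomeException String)
    case cached of
        Right page -> return page
        Left _     -> downloadAndCachePage domain

downloadAndCachePage :: String -> IO String
downloadAndCachePage domain = do
    result <- try (timeout 5000000
                       (simpleHTTP (getRequest ("http://www." ++ domain))
                            >>= getResponseBody))
                  :: IO (Either SomeException (Maybe String))
    let page = either (const "") (fromMaybe "") result
    -- Best-effort cache write; ignore failures.
    _ <- try (writeFile ("page cache/" ++ domain) page)
             :: IO (Either SomeException ())
    return page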

Check if file is finished copying

I'm writing an ASP.NET webapp that will copy the contents of a CD to a network share. I need to check periodically if the copy job is finished.
One way of doing this is checking the network share folder to see if the file size has changed since the last check, but that seems kind of dodgy. Does anyone have a better idea how to do this?
Thanks in advance,
Stijn
EDIT
some more explanation:
Basically I'm calling a JsonResult action method every 5 seconds, called getStatus(source, destination). This method needs to check the following:
- if the source dir is still empty, the copy cannot start --> return status "waiting"
- if the source dir contains files, the copy can start --> call the copy method + return status "copying"
- if the destination dir contains files and the file sizes stay the same, the copy is finished --> return status "finished"
Thanks!
In your webapp, use a blocking file copy operation, such as File.Copy, but run the procedure that does the copying in a background thread. In the background thread, write status information (e.g. "3 of 9 files finished", "I'm done!", or "Error occurred: ...") into some shared object (static variable, Session object, database, ...). Then write a Status.aspx page which shows the content of that shared object.
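
A minimal sketch of that idea (class and member names are made up; error handling kept deliberately simple):

using System;
using System.IO;
using System.Threading;

public static class CopyJob
{
    // Shared status that the periodic status check can return.
    public static volatile string Status = "waiting";

    public static void Start(string sourceDir, string destDir)
    {
        var worker = new Thread(() =>
        {
            try
            {
                string[] files = Directory.GetFiles(sourceDir);
                for (int i = 0; i < files.Length; i++)
                {
                    Status = string.Format("copying {0} of {1}", i + 1, files.Length);
                    string dest = Path.Combine(destDir, Path.GetFileName(files[i]));
                    File.Copy(files[i], dest, true);  // blocking copy; true = overwrite
                }
                Status = "finished";
            }
            catch (Exception ex)
            {
                Status = "error: " + ex.Message;
            }
        });
        worker.IsBackground = true;
        worker.Start();
    }
}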
Create a web service available from the client's JavaScript side with 2 methods: StartCopying and CheckStatus.
The implementation of StartCopying can either start a background thread to do the copy, or be marked [SoapDocumentMethod(OneWay = true)], which means the method returns immediately without waiting for completion.
CheckStatus just checks what you have described above and returns the status of the task to the client.
