Microsoft ASP.NET WebHooks custom receiver gets multiple attempts

I have implemented a custom receiver for Microsoft ASP.NET WebHooks by implementing WebHookHandler.
public class Web_WebHookHandler : WebHookHandler
{
    public Web_WebHookHandler()
    {
        this.Receiver = CustomWebHookReceiver.ReceiverName;
    }

    public override Task ExecuteAsync(string generator, WebHookHandlerContext context)
    {
        SendNotification();
        return Task.FromResult(true);
    }

    private void SendNotification()
    {
        Task.Factory.StartNew(() => {
            // doing some processing
        });
    }
}
Whenever an event is fired, it hits the receiver above three times. I have tried everything, but nothing made any difference. Please help me sort it out.

Try adding the code below in ExecuteAsync before the return statement, i.e.:
context.Response = new System.Net.Http.HttpResponseMessage(System.Net.HttpStatusCode.Gone);
return Task.FromResult(true);
The WebHooks dispatcher inspects the response from your receiver and retries if a proper response is not sent back. So in order to tell the dispatcher that the request has been processed and everything is okay, you need to set context.Response and also return Task.FromResult(true).
Otherwise it will keep retrying, at least three times.
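Putting the suggestion together with the handler from the question, the complete ExecuteAsync might look like the sketch below (the Gone status code simply follows the suggestion above; the key point is that an explicitly set response tells the dispatcher the webhook was handled):

public override Task ExecuteAsync(string generator, WebHookHandlerContext context)
{
    SendNotification();

    // Set a response so the dispatcher treats the webhook as handled and does not retry.
    context.Response = new System.Net.Http.HttpResponseMessage(System.Net.HttpStatusCode.Gone);
    return Task.FromResult(true);
}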

Related

How to make application insights ignore cancel

The API gives a 500 response code when the cancel button is pressed multiple times by users.
It's not causing performance issues, just a lot of clutter in Application Insights.
Any way to filter this out would be helpful.
There is nothing useful in the telemetry to share either, only the API method that was called, with a 500 code and the time; sharing a screenshot of that.
Since you know the response code is 500, you can use a telemetry processor to filter out these kinds of requests.
Assuming you're using .NET Core, you can follow the steps below:
Create a class that implements ITelemetryProcessor, then filter out requests whose response code is 500 (or add more conditions as per your need). The sample code looks like this:
public class IgnoreCancelFilter : ITelemetryProcessor
{
    // Next will point to the next TelemetryProcessor in the chain.
    private ITelemetryProcessor Next { get; set; }

    public IgnoreCancelFilter(ITelemetryProcessor next)
    {
        this.Next = next;
    }

    public void Process(ITelemetry item)
    {
        var request = item as RequestTelemetry;
        if (request != null &&
            request.ResponseCode.Equals("500", StringComparison.OrdinalIgnoreCase))
        {
            // To filter out an item, return without calling the next processor.
            return;
        }

        // Send everything else.
        this.Next.Process(item);
    }
}
Then, register it in the ConfigureServices method of your Startup.cs class.
public void ConfigureServices(IServiceCollection services)
{
    // ...
    services.AddApplicationInsightsTelemetry();
    services.AddApplicationInsightsTelemetryProcessor<IgnoreCancelFilter>();
}
If the programming language is not .NET Core, you can find the appropriate method for .NET Framework, JavaScript, etc. in this article.
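For reference, on the classic .NET Framework SDK the same processor can also be wired up in code via the telemetry processor chain builder; a minimal sketch, assuming the Microsoft.ApplicationInsights package and that this runs once at application startup:

var builder = TelemetryConfiguration.Active.TelemetryProcessorChainBuilder;
builder.Use(next => new IgnoreCancelFilter(next));
builder.Build();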

Get request body in web api IExceptionHandler

I'm trying to write a global error handler for ASP.NET Web API that is able to log the request details of requests that cause unhandled exceptions in my API. I've registered the GlobalExceptionHandler class below in my OWIN startup class, but I'm unable to retrieve the content of any data posted in the body of requests.
public class GlobalExecptionHander : ExceptionHandler
{
    public override void Handle(ExceptionHandlerContext context)
    {
        var body = context.Request.Content.ReadAsStringAsync().Result;
        // body here is an empty string

        context.Result = new UnhandledErrorActionResult
        {
            Request = context.Request,
        };
    }
}
In my startup class:
config.Services.Replace(typeof(IExceptionHandler), new GlobalExecptionHander());
Since I just came across this exact problem, I was amazed to find this question without an answer! I hope you've managed to solve the problem after all this time; I'd still like to answer it regardless.
The thing is that by the time your GlobalExceptionHandler handles the exception, something (like Newtonsoft Json or any other request handler) has already read the content stream of the HTTP request. Once the stream has been read, you cannot read it again, unless there is some way to reset that stream to its initial position...
public override void Handle(ExceptionHandlerContext context)
{
    string requestContent = "";

    using (System.IO.Stream stream = context.Request.Content.ReadAsStreamAsync().Result)
    {
        // Set the stream back to position 0...
        if (stream.CanSeek)
        {
            stream.Position = 0;
        }

        // ... and read the content once again!
        requestContent = context.Request.Content.ReadAsStringAsync().Result;
    }

    /* ..Rest of code handling the Exception.. */
}
The reason requestContent is declared outside that using block is that the stream gets disposed after the block closes. You could also get rid of the using and call stream.Dispose() once you've finished reading the content.

Vert.x publish HttpServerRequest to other module

If I receive an HttpServerRequest in a Handler, is it somehow possible to publish the request?
I want to implement a small demo website with an index.html and an unknown number of sub sites. At first there should be a main vert.x module, which starts the HttpServer. In this main module it should be possible to add other dependent modules; I will call them submodules from now on. I don't know how many submodules I will have later, but each submodule should contain the logic to handle the HTTP response for a specific URL (the sub HTML files). I guess I have to do the same for the WebSocketHandler...
A small example of the code inside the start():
// My main module:
vertx.createHttpServer().requestHandler(new Handler<HttpServerRequest>() {
    public void handle(HttpServerRequest req) {
        vertx.eventBus().publish("HTTP_REQUEST_CONSTANT", req);
    }
}).listen(8080);

// My submodule 1
vertx.eventBus().registerHandler("HTTP_REQUEST_CONSTANT", new Handler<HttpServerRequest>() {
    @Override
    public void handle(HttpServerRequest req) {
        if (req.uri().equals("/")) {
            req.response();
        }
    }
});

// Other submodules which handle other URLs
Or are there any other solutions? I just don't want to have the logic for the sub sites in the main module.
Edit: Or could I call the vertx.createHttpServer() method in each submodule?
I have a similar Vert.x based application and I ended up doing the following:
I have an HttpServerVerticle that is started from the MainVerticle. There I created an HttpServer with several matchers. Each matcher receives a request and forwards it to a dedicated verticle through the EventBus. Upon getting the response from the dedicated verticle, it writes the answer to the response.
Here is a code snippet:
RouteMatcher restMatcher = new RouteMatcher();
EventBus eventBus = vertx.eventBus();
HttpServer httpServer = vertx.createHttpServer();

restMatcher.post("/your/url", r -> {
    r.bodyHandler(requestBody -> {
        final int length = requestBody.length();
        if (length == 0) {
            // return bad request
            r.response().setStatusCode(HttpResponseStatus.BAD_REQUEST.code());
            r.response().end();
            return;
        }
        eventBus.send("your.address.here",
            requestBody.getString(0, length),
            new Handler<Message<JsonObject>>() {
                @Override
                public void handle(Message<JsonObject> message) {
                    // return the response from the other verticle
                    r.response().setStatusCode(HttpResponseStatus.OK.code());
                    if (message.body() != null) {
                        r.response().end(message.body().toString());
                    } else {
                        r.response().end();
                    }
                }
            });
    });
});

httpServer.requestHandler(restMatcher);
httpServer.listen(yourPort, yourHost);
In the dedicated verticle you register a listener to the address:
vertx.eventBus().registerHandler("your.address.here", this::yourHandlerMethod);
The handler method would look something like the following:
protected void yourHandlerMethod(Message<String> message) {
    // do your magic, produce an answer
    message.reply(answer);
}
This way you separate your logic from your HTTP mappings and can have different pieces of logic in separate verticles using multiple event bus addresses.
Hope this helps.

Are there any restrictions in writing multiple http responses?

I am building an HTTP proxy with Netty which supports HTTP pipelining. Therefore I receive multiple HttpRequest objects on a single Channel and have obtained the matching HttpResponse objects. The order of the HttpResponse writes is the same as the order in which I received the HttpRequests. Once an HttpResponse has been written, the next one is written when the HttpProxyHandler receives a writeComplete event.
The pipeline should be self-explanatory:
final ChannelPipeline pipeline = Channels.pipeline();
pipeline.addLast("decoder", new HttpRequestDecoder());
pipeline.addLast("encoder", new HttpResponseEncoder());
pipeline.addLast("writer", new HttpResponseWriteDelayHandler());
pipeline.addLast("deflater", new HttpContentCompressor(9));
pipeline.addLast("handler", new HttpProxyHandler());
Regarding this question, only the order of the write calls should matter, but to be sure I built another handler (HttpResponseWriteDelayHandler) which suppresses the writeComplete event until the whole response has been written.
To test this I enabled network.http.proxy.pipelining in Firefox and visited a page with many images and connections (a news page). The problem is that the browser does not receive some responses, even though the proxy's logs consider them sent successfully.
I have some findings:
The problem only occurs if the connection from proxy to server is faster than the connection from proxy to browser.
The problem occurs more often after sending a larger image on that connection, e.g. 20 kB.
The problem does not occur if only 304 Not Modified responses are sent (refreshing the page so the browser cache is used).
Setting bootstrap.setOption("sendBufferSize", 1048576); or above does not help.
Sleeping for a timeframe dependent on the response's body size before sending the writeComplete event in HttpResponseWriteDelayHandler solves the problem, but is a very bad solution.
I found the solution and want to share it in case anyone else has a similar problem:
The content of the HttpResponse is too big. To analyze the content, the whole HTML document was held in the buffer. This must be split into chunks again to be sent properly. If the HttpResponse is not chunked, I wrote a simple solution to do it. One needs to put a ChunkedWriteHandler next to the logic handler and write this class instead of the response itself:
public class ChunkedHttpResponse implements ChunkedInput {

    private final static int CHUNK_SIZE = 8196;

    private final HttpResponse response;
    private final Queue<HttpChunk> chunks;
    private boolean isResponseWritten;

    public ChunkedHttpResponse(final HttpResponse response) {
        if (response.isChunked())
            throw new IllegalArgumentException("response must not be chunked");

        this.chunks = new LinkedList<HttpChunk>();
        this.response = response;
        this.isResponseWritten = false;

        if (response.getContent().readableBytes() > CHUNK_SIZE) {
            while (CHUNK_SIZE < response.getContent().readableBytes()) {
                chunks.add(new DefaultHttpChunk(response.getContent().readSlice(CHUNK_SIZE)));
            }
            chunks.add(new DefaultHttpChunk(response.getContent().readSlice(response.getContent().readableBytes())));
            chunks.add(HttpChunk.LAST_CHUNK);

            response.setContent(ChannelBuffers.EMPTY_BUFFER);
            response.setChunked(true);
            response.setHeader(HttpHeaders.Names.TRANSFER_ENCODING, HttpHeaders.Values.CHUNKED);
        }
    }

    @Override
    public boolean hasNextChunk() throws Exception {
        return !isResponseWritten || !chunks.isEmpty();
    }

    @Override
    public Object nextChunk() throws Exception {
        if (!isResponseWritten) {
            isResponseWritten = true;
            return response;
        } else {
            return chunks.poll();
        }
    }

    @Override
    public boolean isEndOfInput() throws Exception {
        return isResponseWritten && chunks.isEmpty();
    }

    @Override
    public void close() {}
}
Then one can simply call channel.write(new ChunkedHttpResponse(response)) and the chunking is done automatically if needed.
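For context, a sketch of how the pipeline from the question might be extended with the ChunkedWriteHandler mentioned above (exact placement can vary; the point is that it sits next to the logic handler so it unrolls the ChunkedInput before compression and encoding):

final ChannelPipeline pipeline = Channels.pipeline();
pipeline.addLast("decoder", new HttpRequestDecoder());
pipeline.addLast("encoder", new HttpResponseEncoder());
pipeline.addLast("writer", new HttpResponseWriteDelayHandler());
pipeline.addLast("deflater", new HttpContentCompressor(9));
// ChunkedWriteHandler pulls the chunks out of the ChunkedHttpResponse and writes them downstream.
pipeline.addLast("chunkedWriter", new ChunkedWriteHandler());
pipeline.addLast("handler", new HttpProxyHandler());

// In the handler, write the wrapper instead of the raw response:
channel.write(new ChunkedHttpResponse(response));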

ASP.net how to long polling with PokeIn?

I want to make a service that notifies the user when new messages are sent to him. Thus I want to use a Comet framework that provides server push ability, so I have looked into PokeIn.
Just wondering one thing: I have checked the samples on the website. None of them look into the database to retrieve new entries if there are some, but I guess that is just a matter of modifying them.
One of the samples implements long polling by using a sleep on the server side. So if I use the same approach I can check the database for new entries every 5 seconds. However, this approach doesn't seem much different from polling on the client side with JavaScript.
This part is from a sample. As can be seen, they put a sleep there to update the current time for everybody.
static void UpdateClients()
{
    while (true)
    {
        // .. code to check database
        if (CometWorker.ActiveClientCount > 0)
        {
            CometWorker.SendToAll(JSON.Method("UpdateTime", DateTime.Now));
        }
        Thread.Sleep(500);
    }
}
So I wonder, is this how I should implement the message notifier? It seems that the above approach is still going to put a huge load on the server side. The message notifier is intended to work the same way as the one found on Facebook.
You shouldn't implement it this way; that sample is only implemented like that to keep the PokeIn-related part clear. You should implement the SQL part as described in http://www.codeproject.com/Articles/12335/Using-SqlDependency-for-data-change-events in order to track changes in the database.
Then, when you have something to send, call one of the PokeIn methods for client-side delivery. I don't know how time-critical your application is, but in addition to reverse Ajax, PokeIn's internal WebSocket feature is very easy to activate and delivers messages to the client quite fast.
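For illustration, a minimal SqlDependency sketch along those lines (the connection string, the dbo.Messages table and columns, and the UpdateMessages client function are assumptions for the example, not part of the original answer):

// Call once at application startup, e.g. in Application_Start.
SqlDependency.Start(connectionString);

void RegisterForChanges()
{
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand("SELECT MessageId, Body FROM dbo.Messages", connection))
    {
        var dependency = new SqlDependency(command);
        dependency.OnChange += (sender, e) =>
        {
            // Notifications are one-shot: re-register, then push the update to connected clients.
            RegisterForChanges();
            CometWorker.SendToAll(JSON.Method("UpdateMessages"));
        };

        connection.Open();
        command.ExecuteReader().Dispose(); // executing the command activates the notification
    }
}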
You can do this with the database as @Zuuum said, but I implemented it in a different way.
I'm using ASP.NET MVC with PokeIn and EF in a Windows Azure environment:
I have domain events similar to this approach: Strengthening your domain: Domain Events
When someone invokes an action, that's a Unit of Work
If that UOW succeeds then I raise a domain event (e.g. ChatMessageSent)
I have subscribers for these domain events so they can receive the event and forward the message to the PokeIn listeners
I use this pattern for all my real-time needs on my game site (making moves, actions, etc. in a game). I don't want to advertise it here; you can find it through me if you want.
I always use this pattern as a duplex communication solution so everybody gets their update via PokeIn, even the user who invoked the action, so every client behaves the same way. When someone calls an action it won't return anything except a success signal.
The following examples won't work as-is because they are only snippets to demonstrate the flow.
Here is an action snippet from my code:
[HttpPost]
[UnitOfWork]
[RestrictToAjax]
[ValidateAntiForgeryToken]
public JsonResult Post(SendMessageViewModel msg)
{
    if (ModelState.IsValid)
    {
        var chatMessage = new ChatMessage
        {
            ContainerType = msg.ContainerType,
            ContainerID = msg.ContainerID,
            Message = _xssEncoder.Encode(msg.Message),
            User = _profileService.CurrentUser
        };

        _chatRepository.AddMessage(chatMessage);

        OnSuccessfulUoW = () => EventBroker.Current.Send(this, new ChatMessageSentPayload(chatMessage));
    }

    return Json(Constants.AjaxOk);
}
And the (simplified) EventBroker implementation:
public class UnityEventBroker : EventBroker
{
    private readonly IUnityContainer _container;

    public UnityEventBroker(IUnityContainer container)
    {
        _container = container;
    }

    public override void Send<TPayload>(object sender, TPayload payload)
    {
        var subscribers = _container.ResolveAll<IEventSubscriber<TPayload>>();
        if (subscribers == null) return;

        foreach (var subscriber in subscribers)
        {
            subscriber.Receive(sender, payload);
        }
    }
}
And the even more simplified subscriber:
public class ChatMessageSentSubscriber : IEventSubscriber<ChatMessageSentPayload>
{
    public void Receive(object sender, ChatMessageSentPayload payload)
    {
        var message = payload.Message;
        var content = SiteContent.Global;

        var clients = Client.GetClients(c => c.ContentID == message.ContainerID && c.Content == content)
                            .Select(c => c.ClientID)
                            .ToArray();

        var dto = ObjectMapper.Current.Map<ChatMessage, ChatMessageSentDto>(message);
        var json = PokeIn.JSON.Method("pokein", dto);

        CometWorker.SendToClients(clients, json);
    }
}
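One wiring detail worth noting: Unity's ResolveAll<T>() only returns named registrations, so each subscriber has to be registered under a name for the broker to find it. A minimal sketch; the registration name and the assumption that EventBroker.Current is assignable are mine, not from the original answer:

// Register the subscriber under a name so ResolveAll<IEventSubscriber<ChatMessageSentPayload>>() picks it up.
container.RegisterType<IEventSubscriber<ChatMessageSentPayload>, ChatMessageSentSubscriber>("chatMessageSent");

// Make the broker available to the controllers (assumes EventBroker.Current has a setter).
EventBroker.Current = new UnityEventBroker(container);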
