We have created an HTTP module for mobile detection and redirection. Based on the incoming request, the module detects the device and redirects accordingly.
We now want to add logging to the methods in this HTTP module, but I don't know whether it is good practice to log all requests. We plan to use text-file logging, and we are concerned about performance since every request would be logged.
Please give your suggestions.
Depending on what information you want to log, note that all requests are already logged by IIS. You can find these logs under %SystemDrive%\inetpub\logs\LogFiles. If the information you need is not there, you can add extra fields to the IIS log or add extra logging in Global.asax. A good logger will have minimal impact on your application. Keep an eye on your disk space at all times, and make a business case for what you want to do with the information.
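For instance, a minimal sketch of extra per-request logging in Global.asax could look like the following; the log path is a placeholder, and the User-Agent is logged because the module does device detection:

```csharp
// Global.asax.cs -- sketch of lightweight per-request text-file logging.
// The log path is a placeholder; adjust it and the fields to your needs.
using System;
using System.IO;
using System.Web;

public class Global : HttpApplication
{
    private const string LogPath = @"D:\Logs\requests.log";
    private static readonly object LogLock = new object();

    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        string line = string.Format("{0:o}\t{1}\t{2}\t{3}",
            DateTime.UtcNow,
            Request.HttpMethod,
            Request.RawUrl,
            Request.UserAgent);   // useful for the mobile-detection module

        // Simple synchronous append protected by a lock.
        lock (LogLock)
        {
            File.AppendAllText(LogPath, line + Environment.NewLine);
        }
    }
}
```

Writing synchronously inside a lock is the simplest thing that works, but on a busy site you would want a buffered or asynchronous logger (or simply rely on the IIS logs) so the per-request cost stays minimal.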
I have a .NET Core app deployed on Azure with Application Insights enabled.
Sometimes the Application Insights end-to-end transaction details do not display all telemetry.
Here only the error is logged and not the request, or maybe the request is logged but the two are not displayed together (it is difficult to confirm because many people use the application). It should display both together. Sometimes the request is logged but there is no error log.
What could be the reason for this happening? Do I need to look into any Application Insights-specific setup or feature?
Edit:
As suggested by people here, I tried disabling the sampling feature, but it still does not work. There is an open question here as well.
This usually happens due to sampling. By default, adaptive sampling is enabled in ApplicationInsights.config, which basically means that only a certain percentage of each telemetry item type (Event, Request, Dependency, Exception, etc.) is sent to Application Insights. In your example, one part of the end-to-end transaction probably got sent to the server while another part got sampled out. If you want, you can turn off sampling for specific types, or completely remove the AdaptiveSamplingTelemetryProcessor from the config, which disables sampling entirely. Bear in mind that this leads to higher ingestion traffic and higher costs.
You can also configure sampling in the code itself, if you prefer.
Please find here a good overview of how sampling works and can be configured.
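Since the question mentions a .NET Core app, here is a minimal sketch of turning adaptive sampling off in code rather than editing ApplicationInsights.config; it assumes the Microsoft.ApplicationInsights.AspNetCore package is referenced:

```csharp
// Startup.cs -- sketch: disable adaptive sampling so every telemetry item
// is sent. Expect higher ingestion volume and cost.
using Microsoft.ApplicationInsights.AspNetCore.Extensions;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        var aiOptions = new ApplicationInsightsServiceOptions
        {
            EnableAdaptiveSampling = false
        };

        // The instrumentation key / connection string is picked up from
        // configuration as usual.
        services.AddApplicationInsightsTelemetry(aiOptions);
    }
}
```

If you only want to exclude certain item types from sampling (for example Request and Exception) rather than disable it completely, the SDK's UseAdaptiveSampling overload with an excludedTypes parameter on the telemetry processor chain builder covers that case.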
This may be related to:
When using SDK 2.x, you have to track all events yourself and send the telemetry to Application Insights.
When using auto-instrumentation with the 3.x agent, the agent automatically collects the traffic, logs, etc., and you have to pay attention to the sampling settings in the applicationinsights.json file, where you can filter the events.
If you are using Java, these are the supported logging libraries:
- java.util.logging
- Log4j, which includes MDC properties
- SLF4J/Logback, which includes MDC properties
How can I find out exactly what nginx is handling right now?
I am looking for the request URI, request start time, request headers, and other info.
I cannot use the access log because it contains only finished requests.
In the community builds, nginx provides only the total numbers of accepted, dropped, and active connections. The complete description of the statistics provided is here.
Commercial builds provide more information, but frankly I don't see anything that would match your needs. The documentation is here.
The Situation
I have come across some very suspicious PUT and GET requests in my IIS server logs. After googling the requester's address, I found information linking the IPs to known hacking teams. After each PUT there is an immediate GET for the same resource the attacker attempted to upload to my server.
Question 1:
Would this be considered a remote code execution attack?
Additional Testing Completed By Me:
The IIS logs show that the response given for the PUT request was 412 ('Invalid file type; all files are not uploaded').
I turned on Failed Request Tracing and attempted to upload text files using curl; I got the same response and was not able to upload a file.
Question 2:
What can I do to help prevent these type of attacks from being successful?
I can turn on IIS request filtering, but I am concerned that if I deny PUT, my IIS application may be negatively impacted for any future web services.
Question 1: Would this be considered a remote code execution attack?
It is impossible to determine the intentions of the attacker from the information given. They could be looking to gain code execution, or they may simply settle for uploading their own content to your server for you to host, or to try and deface your site with their content.
Question 2: What can I do to help prevent these type of attacks from being successful?
Server configuration and patching. The best advice I can give you is to reduce the attack surface: only enable the features you need. If you're not using PUT in your application, disable it (see the sketch after this answer) and only re-enable it when needed. Make sure you have the latest updates for your OS installed.
Security is a wide subject. You need everything from secure code when developing applications to rigorous security testing after.
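If you do decide to block the verb, here is a hedged sketch of doing it programmatically with Microsoft.Web.Administration; it makes the same requestFiltering change you could also make by hand in web.config or applicationHost.config, and "Default Web Site" is a placeholder site name:

```csharp
// Sketch: deny the PUT verb via IIS request filtering using
// Microsoft.Web.Administration (run with administrative rights).
using Microsoft.Web.Administration;

class BlockPutVerb
{
    static void Main()
    {
        using (var serverManager = new ServerManager())
        {
            // "Default Web Site" is a placeholder -- use your site's name.
            Configuration config = serverManager.GetWebConfiguration("Default Web Site");

            ConfigurationSection requestFiltering =
                config.GetSection("system.webServer/security/requestFiltering");
            ConfigurationElementCollection verbs = requestFiltering.GetCollection("verbs");

            // Add <add verb="PUT" allowed="false" /> to the verbs collection.
            ConfigurationElement denyPut = verbs.CreateElement("add");
            denyPut["verb"] = "PUT";
            denyPut["allowed"] = false;
            verbs.Add(denyPut);

            serverManager.CommitChanges();
        }
    }
}
```

Because this only touches the requestFiltering verbs list, it is easy to revert later if a future web service does need PUT.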
We are looking to add some performance measuring to our LOB web application. Is there a way to log all requests in IIS, including the details of the request, the upload speed and time, the latency, and the download speed and time?
We will store this in a log file so the customer can send it to us for analysis (the customer hosts our LOB web application internally).
Thanks
IIS 7 natively provides logging features. It gives you basic information about each request (status code, date, call duration, IP, referrer, ...). It's already a good starting point and it's very easy to enable in IIS Manager.
Advanced Logging, distributed here or via the Web Platform Installer, gives you a way to log additional information (HTTP headers, HTTP responses, custom fields, ...). A really good introduction is available here.
That's the best you can do without getting into ASP.NET.
There is no out-of-the-box solution for your problem. As Cybermaxs suggests, you can use the W3C logs to get information about requests, but those logs do not break down the request/response times in the way you seek.
You have two options:
1) Write an IIS module (C++, implementing CHttpModule from HTTPSERV.H) which intercepts all the relevant events and logs the times as you require. The problem with this solution is that writing these modules can be tricky and error-prone (a simpler managed sketch follows below).
2) Leverage IIS's Failed Request Tracing (http://www.iis.net/learn/troubleshoot/using-failed-request-tracing/troubleshoot-with-failed-request-tracing), which causes IIS to write detailed logs that include a breakdown of the time spent per request in a verbose, parseable XML format. You can enable Failed Request Tracing even for successful requests. The problem is that an individual XML file is generated for each request, so you'll have to manage the directory (and the Failed Request Tracing configuration) so that this behaviour doesn't cause too much pain for your customer.
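If a rough, managed-code approximation of option 1 is acceptable, a minimal sketch of an IHttpModule that times each request and appends the result to a text file might look like this; note that it measures server-side processing time only, not upload/download speed or network latency, and the log path is a placeholder:

```csharp
// Sketch of a managed request-timing module.
using System;
using System.Diagnostics;
using System.IO;
using System.Web;

public class RequestTimingModule : IHttpModule
{
    // Placeholder path -- make it configurable in a real deployment.
    private const string LogPath = @"C:\Logs\request-times.log";
    private static readonly object LogLock = new object();

    public void Init(HttpApplication application)
    {
        application.BeginRequest += (sender, e) =>
        {
            var app = (HttpApplication)sender;
            // Start a stopwatch per request and stash it in the context.
            app.Context.Items["RequestStopwatch"] = Stopwatch.StartNew();
        };

        application.EndRequest += (sender, e) =>
        {
            var app = (HttpApplication)sender;
            var sw = app.Context.Items["RequestStopwatch"] as Stopwatch;
            if (sw == null) return;

            sw.Stop();
            string line = string.Format("{0:o}\t{1}\t{2}\t{3} ms",
                DateTime.UtcNow,
                app.Request.HttpMethod,
                app.Request.RawUrl,
                sw.ElapsedMilliseconds);

            // Simple lock-protected append; a buffered or async logger
            // would reduce the per-request overhead.
            lock (LogLock)
            {
                File.AppendAllText(LogPath, line + Environment.NewLine);
            }
        };
    }

    public void Dispose() { }
}
```

The module would then be registered in the site's web.config under <system.webServer>/<modules>.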
I need to design a bug alert system where the web support team is notified via email when a user of our website encounters an error of any sort (a database exception, or a 404).
What would be the best way to design this section of the project? Any ideas would be appreciated.
You may want to look into using the global.asax file for application-wide error intercepting. A quick search yields this step-by-step walk-through:
http://aspnetresources.com/articles/CustomErrorPages.aspx
Depending on the volume of traffic you're expecting, sending an e-mail every time an error is intercepted may not be the best approach. At best, you'd flood inboxes (and make the support staff very unhappy), and at worst you'd get your mail servers blacklisted for spamming. The approach that I've used in the past on high-traffic sites is to queue up errors in a table that is read and purged at a set interval by a separate process. The process would aggregate the errors, grouping them by type, number of occurrences, etc, then send out an e-mail report to the support mailing lists.
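A hedged sketch of that digest step is below; the ErrorRecord shape, the queue it reads from, and the SMTP details are all assumptions standing in for whatever storage and configuration you actually use:

```csharp
// Sketch of the periodic digest: group queued errors and send one summary
// e-mail instead of one mail per error.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.Mail;
using System.Text;

public class ErrorRecord
{
    public string Type { get; set; }       // e.g. "SqlException", "404"
    public string Message { get; set; }
    public DateTime OccurredAt { get; set; }
}

public static class ErrorDigest
{
    public static void SendDigest(IEnumerable<ErrorRecord> queuedErrors)
    {
        var groups = queuedErrors
            .GroupBy(err => err.Type)
            .OrderByDescending(g => g.Count());

        var body = new StringBuilder("Error summary:\n\n");
        foreach (var g in groups)
        {
            body.AppendFormat("{0}: {1} occurrence(s), last at {2:u}\n",
                g.Key, g.Count(), g.Max(err => err.OccurredAt));
        }

        // SMTP host and addresses are placeholders.
        using (var client = new SmtpClient("smtp.example.local"))
        using (var mail = new MailMessage("alerts@example.local",
                                          "support@example.local",
                                          "Site error digest",
                                          body.ToString()))
        {
            client.Send(mail);
        }
    }
}
```

The separate process (a scheduled task or Windows service) would call this at the set interval and then purge the rows it just reported.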
ASP.NET health monitoring may be of interest: http://msdn.microsoft.com/en-us/library/ms998306.aspx. It's really simpler to use than the article first makes it appear, and it doesn't require any additional components; it's all built in.
I would implement an HttpModule that captures the Error event.
This would allow the module to be reused across multiple applications. The destination email addresses, SMTP server, etc. could have defaults in the HttpModule and be overridden in the web.config file for maximum flexibility (see the sketch below).
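A minimal sketch of such a module follows; the appSettings keys and example addresses are assumptions, and a real implementation should apply the aggregation advice above rather than send one mail per error:

```csharp
// Sketch of a reusable error-notification module. The appSettings keys
// ("ErrorMailTo", "ErrorMailFrom", "ErrorSmtpHost") are assumptions --
// use whatever configuration scheme suits you.
using System;
using System.Configuration;
using System.Net.Mail;
using System.Web;

public class ErrorNotificationModule : IHttpModule
{
    public void Init(HttpApplication application)
    {
        application.Error += OnError;
    }

    private static void OnError(object sender, EventArgs e)
    {
        var app = (HttpApplication)sender;
        Exception ex = app.Server.GetLastError();
        if (ex == null) return;

        // Defaults baked into the module, overridable via web.config.
        string to   = ConfigurationManager.AppSettings["ErrorMailTo"]   ?? "support@example.local";
        string from = ConfigurationManager.AppSettings["ErrorMailFrom"] ?? "alerts@example.local";
        string host = ConfigurationManager.AppSettings["ErrorSmtpHost"] ?? "localhost";

        string body = string.Format("URL: {0}\r\n\r\n{1}", app.Request.RawUrl, ex);

        using (var client = new SmtpClient(host))
        using (var mail = new MailMessage(from, to,
                                          "Unhandled error on " + app.Request.Url.Host,
                                          body))
        {
            client.Send(mail);
        }
    }

    public void Dispose() { }
}
```

The module still has to be registered in each application's web.config (under <system.webServer>/<modules> for the integrated pipeline, or <system.web>/<httpModules> for the classic pipeline), which is also where the e-mail settings can be overridden.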