Using Rate Limiting in ASP.NET Core 7 Web API by IP address

There is currently a NuGet package called AspNetCoreRateLimit that manages rate limiting by IP address. However, .NET 7 introduced its own version of rate limiting, and I wanted to use that instead since it is published by Microsoft. I have not been able to find a good example that imitates this third-party package by limiting by IP address. The code I put together is as follows:
builder.Services.AddRateLimiter(options =>
{
    options.RejectionStatusCode = 429;
    options.AddPolicy("api", httpContext =>
    {
        var IpAddress = httpContext.Connection.RemoteIpAddress.ToString();
        if (IpAddress != null)
        {
            return RateLimitPartition.GetFixedWindowLimiter(httpContext.Connection.RemoteIpAddress.ToString(),
                partition => new FixedWindowRateLimiterOptions
                {
                    AutoReplenishment = true,
                    PermitLimit = 5,
                    Window = TimeSpan.FromMinutes(1)
                });
        }
        else
        {
            return RateLimitPartition.GetNoLimiter("");
        }
    });
});
However, the issue I am getting is the warning "Warning CS8602: Dereference of a possibly null reference.", which I assume is because RemoteIpAddress could be null. I am curious whether there is a better way to implement this IP rate limiting using the new .NET 7 library. If it matters, I am planning to host this Web API in Azure App Service (Windows), and it is accessed by a SPA that is also hosted in an App Service.
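One way to avoid the warning is to make the partition key null-safe before it is used. Below is a minimal sketch of that idea using the built-in Microsoft.AspNetCore.RateLimiting middleware; the "unknown" fallback key, the endpoint wiring, and the controller mapping are illustrative assumptions rather than part of the original question:

using System.Threading.RateLimiting;
using Microsoft.AspNetCore.RateLimiting;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddRateLimiter(options =>
{
    options.RejectionStatusCode = StatusCodes.Status429TooManyRequests;

    options.AddPolicy("api", httpContext =>
    {
        // RemoteIpAddress is nullable; fall back to a shared bucket instead of dereferencing it.
        var ipAddress = httpContext.Connection.RemoteIpAddress?.ToString() ?? "unknown";

        return RateLimitPartition.GetFixedWindowLimiter(ipAddress,
            _ => new FixedWindowRateLimiterOptions
            {
                AutoReplenishment = true,
                PermitLimit = 5,
                Window = TimeSpan.FromMinutes(1)
            });
    });
});

builder.Services.AddControllers();

var app = builder.Build();

// The middleware has to be in the pipeline, and the policy applied to the endpoints.
app.UseRateLimiter();
app.MapControllers().RequireRateLimiting("api");

app.Run();

Note that when the API sits behind a reverse proxy or load balancer, RemoteIpAddress may be the proxy's address rather than the caller's; in that case the forwarded-headers middleware (UseForwardedHeaders) or an equivalent mechanism is needed so the real client IP reaches the rate limiter.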

Related

.NET 7 Rate Limiting in Azure Function

Is there a way to use .NET 7 rate limiting on an Azure Functions v4 (dotnet-isolated) HttpTrigger?
I've added the rate limiter in my ConfigureServices like this:
var builder = new HostBuilder()
    .ConfigureFunctionsWorkerDefaults()
    .ConfigureServices(s =>
    {
        // ...
        s.AddRateLimiter(_ =>
        {
            _.AddPolicy("myfunction", httpContext =>
                RateLimitPartition.GetSlidingWindowLimiter(httpContext.Request.Headers["X-Forwarded-For"],
                    _ => new SlidingWindowRateLimiterOptions
                    {
                        AutoReplenishment = true,
                        PermitLimit = 1,
                        Window = TimeSpan.FromSeconds(5)
                    }));
        });
    })
    .Build();
and
[Function("myfunction")]
[EnableRateLimiting("myfunction")]
public async Task<IActionResult> MyFunction(
    [HttpTrigger(AuthorizationLevel.Function, "post", Route = null)] HttpRequestData req)
{
    // ...
}
I'm pretty sure it shouldn't even work like this, but it illustrates the scenario. My architecture is Azure Static Web App --> API Management (NOTE: Consumption plan) --> Azure Function. I can read the valid client IP from the X-Forwarded-For header in the Azure Function, but the rate limiting itself is never applied.
So, is it possible to apply a rate-limiting policy to an Azure Function at the function level?
Thanks!
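For reference, the System.Threading.RateLimiting types can also be used directly from the function code, without the ASP.NET Core middleware. Below is a minimal sketch of that approach; the PerIpLimiter class, the X-Forwarded-For parsing, and the limiter settings are illustrative assumptions, and the SegmentsPerWindow value (which the sliding window limiter requires but the snippet above omits) is filled in:

using System;
using System.Linq;
using System.Threading.RateLimiting;
using Microsoft.Azure.Functions.Worker.Http;

public static class PerIpLimiter
{
    // One partition (and therefore one sliding window) per client IP.
    private static readonly PartitionedRateLimiter<string> Limiter =
        PartitionedRateLimiter.Create<string, string>(ip =>
            RateLimitPartition.GetSlidingWindowLimiter(ip, _ => new SlidingWindowRateLimiterOptions
            {
                AutoReplenishment = true,
                PermitLimit = 1,
                Window = TimeSpan.FromSeconds(5),
                SegmentsPerWindow = 5 // must be > 0, otherwise the limiter throws at creation
            }));

    // Returns true when the request may proceed, false when the per-IP limit is exceeded.
    public static bool TryAcquire(HttpRequestData req)
    {
        // Behind APIM the original caller arrives in X-Forwarded-For; take the first entry.
        var clientIp = req.Headers.TryGetValues("X-Forwarded-For", out var values)
            ? values.First().Split(',')[0].Trim()
            : "unknown";

        using RateLimitLease lease = Limiter.AttemptAcquire(clientIp);
        return lease.IsAcquired;
    }
}

The function body would then start with something like if (!PerIpLimiter.TryAcquire(req)) return new StatusCodeResult(429); before doing any real work. The trade-off is that the limiter state lives in the function instance's memory, so on a scaled-out Consumption plan each instance counts separately.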
As #Silent mentioned, you can use the rate-limiting policy in the Azure APIM Consumption plan.
You can import multiple Function APIs into the Azure APIM service and add the rate-limiting policy at the level of each API.
I have a Consumption-plan APIM, and I'd very much like an IP-based rate limiter instead of the API-level limiting that the Consumption plan gives me.
I understand that you need to limit the number of requests on a per-IP basis. If that is the scenario, there is an "IP address throttling" concept to limit the requests/API calls from an IP address, as described in the MS doc on custom key-based throttling with the rate-limiting policy.
Note: yes, rate-limit-by-key is not available in the APIM Consumption plan.

NServiceBus Router events published on Amazon SQS transport are not handled by an Azure Service Bus transport endpoint

I've been trying to get NServiceBus.Router working to allow endpoints using the AmazonSQS transport and the AzureServiceBus transport to communicate with each other. So far, I am able to get a command sent from the ASB endpoint through the router and handled by the SQS endpoint. However, when I publish an event from the SQS endpoint, it is not handled by the ASB endpoint, even though I have registered the SQS endpoint as a publisher. I have no idea what I'm doing wrong, but looking at every example I can find in the docs, it seems like it should work.
I have already tried adding another forwarding route in the reverse of what is below (SQS to ASB), but that did not solve the issue.
The endpoints and the router are each running in .NET 5 worker services.
I've made a sample project that reproduces the issue here, but here are some quick at-a-glance snippets that show the relevant setup:
Router Setup
var routerConfig = new RouterConfiguration("ASBToSQS.Router");

var azureInterface = routerConfig.AddInterface<AzureServiceBusTransport>("ASB", t =>
{
    t.ConnectionString(Environment.GetEnvironmentVariable("ASB_CONNECTION_STRING"));
    t.Transactions(TransportTransactionMode.ReceiveOnly);
    t.SubscriptionRuleNamingConvention((entityType) =>
    {
        var entityPathOrName = entityType.Name;
        if (entityPathOrName.Length >= 50)
        {
            return entityPathOrName.Split('.').Last();
        }
        return entityPathOrName;
    });
});

var sqsInterface = routerConfig.AddInterface<SqsTransport>("SQS", t =>
{
    t.UnrestrictedDurationDelayedDelivery();
    t.Transactions(TransportTransactionMode.ReceiveOnly);

    var settings = t.GetSettings();
    // Avoids a missing setting error
    // https://github.com/SzymonPobiega/NServiceBus.Raw/blob/master/src/AcceptanceTests.SQS/Helper.cs#L18
    bool isMessageType(Type t) => true;
    var ctor = typeof(MessageMetadataRegistry).GetConstructor(
        BindingFlags.NonPublic | BindingFlags.Public | BindingFlags.Instance, null,
        new[] { typeof(Func<Type, bool>) }, null);
#pragma warning disable CS0618 // Type or member is obsolete
    settings.Set<MessageMetadataRegistry>(ctor.Invoke(new object[] { (Func<Type, bool>)isMessageType }));
#pragma warning restore CS0618 // Type or member is obsolete
});

var staticRouting = routerConfig.UseStaticRoutingProtocol();
staticRouting.AddForwardRoute("ASB", "SQS");

routerConfig.AutoCreateQueues();
ASB Endpoint Setup
var endpointConfiguration = new EndpointConfiguration("ASBToSQSRouter.ASBEndpoint");

var transport = endpointConfiguration.UseTransport<AzureServiceBusTransport>();
transport.SubscriptionRuleNamingConvention((entityType) =>
{
    var entityPathOrName = entityType.Name;
    if (entityPathOrName.Length >= 50)
    {
        return entityPathOrName.Split('.').Last();
    }
    return entityPathOrName;
});
transport.Transactions(TransportTransactionMode.ReceiveOnly);
transport.ConnectionString(Environment.GetEnvironmentVariable("ASB_CONNECTION_STRING"));

var bridge = transport.Routing().ConnectToRouter("ASBToSQS.Router");
bridge.RouteToEndpoint(typeof(ASBToSQSCommand), "ASBToSQSRouter.SQSEndpoint");
bridge.RegisterPublisher(typeof(ASBToSQSEvent), "ASBToSQSRouter.SQSEndpoint");

endpointConfiguration.EnableInstallers();
SQS Endpoint Setup (nothing special because it doesn't need to know about the router)
var endpointConfiguration = new EndpointConfiguration("ASBToSQSRouter.SQSEndpoint");
var transport = endpointConfiguration.UseTransport<SqsTransport>();
transport.UnrestrictedDurationDelayedDelivery();
transport.Transactions(TransportTransactionMode.ReceiveOnly);
endpointConfiguration.EnableInstallers();
Any help would be greatly appreciated!
Unfortunately, one of the recent SQS transport releases contains a change, subscription batching, that means subscribing only works by default in the context of a full NServiceBus endpoint.
In order for the Router to work correctly (the Router does not run a full endpoint, just the NServiceBus transport), you need to add this magic line to the SQS interface configuration:
settings.Set("NServiceBus.AmazonSQS.DisableSubscribeBatchingOnStart", true);
This is an undocumented flag that disables the subscription batching and allows the Router to complete its subscribe operations normally.
I am sorry for the inconvenience.
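Applied to the router setup from the question, the SQS interface configuration would then look roughly like this (a sketch; only the flag line is new, everything else is taken from the snippet above):

var sqsInterface = routerConfig.AddInterface<SqsTransport>("SQS", t =>
{
    t.UnrestrictedDurationDelayedDelivery();
    t.Transactions(TransportTransactionMode.ReceiveOnly);

    var settings = t.GetSettings();

    // Undocumented flag from the answer above: disables subscription batching
    // so the Router can complete its subscribe operations.
    settings.Set("NServiceBus.AmazonSQS.DisableSubscribeBatchingOnStart", true);

    // ... the MessageMetadataRegistry workaround from the question stays here ...
});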

ASP.NET Core 2.2 Kestrel server performance issue

I'm facing a problem with Kestrel server performance. I have the following scenario:
TestClient(JMeter) -> DemoAPI-1(Kestrel) -> DemoAPI-2(IIS)
I'm trying to create a sample application that could get the file content as and when requested.
TestClient (100 threads) sends requests to DemoAPI-1, which in turn requests DemoAPI-2. DemoAPI-2 reads a fixed XML file (1 MB max) and returns its content as the response. (In production, DemoAPI-2 is not going to be exposed to the outside world.)
When I tested direct access from TestClient -> DemoAPI-2, I got the expected (good) result:
Average : 368ms
Minimum : 40ms
Maximum : 1056ms
Throughput : 40.1/sec
But when I tried to access it through DemoAPI-1, I got the following result:
Average : 48232ms
Minimum : 21095ms
Maximum : 49377ms
Throughput : 2.0/sec
As you can see, there is a huge difference; I'm not getting even 10% of DemoAPI-2's throughput. I was told that Kestrel is more efficient and faster than traditional IIS. Also, because there is no problem with direct access, I think we can rule out a problem on DemoAPI-2's side.
※ Code of DemoAPI-1:
string base64Encoded = null;
var request = new HttpRequestMessage(HttpMethod.Get, url);
var response = await this.httpClient.SendAsync(request, HttpCompletionOption.ResponseContentRead).ConfigureAwait(false);
if (response.StatusCode.Equals(HttpStatusCode.OK))
{
    var content = await response.Content.ReadAsByteArrayAsync().ConfigureAwait(false);
    base64Encoded = Convert.ToBase64String(content);
}
return base64Encoded;
※ Code of DemoAPI-2:
[HttpGet("Demo2")]
public async Task<IActionResult> Demo2Async(int wait)
{
    try
    {
        if (wait > 0)
        {
            await Task.Delay(wait);
        }
        var path = Path.Combine(Directory.GetCurrentDirectory(), "test.xml");
        var file = System.IO.File.ReadAllText(path);
        return Content(file);
    }
    catch (System.Exception ex)
    {
        return StatusCode(500, ex.Message);
    }
}
Some additional information:
Both APIs are async.
Both APIs are hosted on different EC2 instances (C5.xlarge, Windows Server 2016).
DemoAPI-1 (Kestrel) is a self-contained API (without a reverse proxy).
TestClient (JMeter) is set to 100 threads for this test.
No other configuration has been done for the Kestrel server as of now.
There are no action filters, middleware, or logging that could affect the performance as of now.
Communication is done over SSL on port 5001.
The wait parameter for DemoAPI-2 is set to 0 for now.
The CPU usage of DemoAPI-1 is not over 40%.
The problem was due to HttpClient port exhaustion.
I was able to solve it by using IHttpClientFactory.
The following article might help someone who faces a similar problem:
https://www.stevejgordon.co.uk/httpclient-creation-and-disposal-internals-should-i-dispose-of-httpclient
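A minimal sketch of the IHttpClientFactory approach, assuming a named client registered at startup; the "demo2" client name, the base address, and the Demo1Service class are illustrative rather than taken from the original code:

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;

// In Startup.ConfigureServices: register a named client once. The factory pools and
// recycles the underlying HttpMessageHandler, which avoids socket/port exhaustion.
services.AddHttpClient("demo2", client =>
{
    client.BaseAddress = new Uri("https://demoapi-2.example.com:5001/");
});

// Consumer: ask the factory for a client per use instead of new-ing up HttpClient.
public class Demo1Service
{
    private readonly IHttpClientFactory httpClientFactory;

    public Demo1Service(IHttpClientFactory httpClientFactory)
    {
        this.httpClientFactory = httpClientFactory;
    }

    public async Task<string> GetFileAsBase64Async(string url)
    {
        var client = this.httpClientFactory.CreateClient("demo2");
        using (var response = await client.GetAsync(url, HttpCompletionOption.ResponseContentRead).ConfigureAwait(false))
        {
            response.EnsureSuccessStatusCode();
            var content = await response.Content.ReadAsByteArrayAsync().ConfigureAwait(false);
            return Convert.ToBase64String(content);
        }
    }
}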
DEMOAPI-1 performs a non-asynchronous read of the streams:
var bytes = stream.Read(read, 0, DataChunkSize);
while (bytes > 0)
{
    buffer += System.Text.Encoding.UTF8.GetString(read, 0, bytes);
    // Replace with ReadAsync
    bytes = stream.Read(read, 0, DataChunkSize);
}
That can be an issue with throughput on a lot of requests.
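A sketch of the asynchronous equivalent this answer is suggesting; the stream, read, buffer, and DataChunkSize names are taken from the snippet above, and the string concatenation is swapped for a StringBuilder as an extra (assumed) improvement:

// Await the reads so the request thread is not blocked on the socket.
var builder = new System.Text.StringBuilder();
int bytes = await stream.ReadAsync(read, 0, DataChunkSize);
while (bytes > 0)
{
    builder.Append(System.Text.Encoding.UTF8.GetString(read, 0, bytes));
    bytes = await stream.ReadAsync(read, 0, DataChunkSize);
}
buffer = builder.ToString();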
Also, I'm not entirely sure why you are not testing the same code on both IIS and Kestrel; I would assume you only need to make environmental changes, not code changes.

SignalR self-host connection issue

I recently created a proof of concept console application using SignalR (self host). It worked a treat for our use. The client connected fine and I was able to send updates from the server to the client. Lovely!
I've now transferred the code from the console application to a WinForms application for a prettier UI. Now that same client won't connect to the server, yet it will still connect to the old console version.
Winforms code:
string url = "http://localhost:8080";
using (WebApp.Start(url))
{
    // Let the app know the server is up
}
Console code:
string url = "http://localhost:8080";
using (WebApp.Start(url))
{
    Console.WriteLine("Server running on {0}", url);
    Console.ReadLine();
}
Client connection code:
if (!connected)
{
    int i = 0;
    // Try 3 times
    while (i <= 2)
    {
        try
        {
            string server = Properties.Settings.Default.Server + ":" + Properties.Settings.Default.PortNumber.ToString();
            connection = new HubConnection(server);
            connection.StateChanged += connection_StateChanged;
            hub = connection.CreateHubProxy("MyHub");
            connection.Start().Wait();
            hub.On<string>("addMessage", param => { UpdateAlarmStatus(param); });
            return true;
        }
        catch (Exception)
        {
            i++;
        }
    }
    return false;
}
else
{
    return true;
}
The error the client is reporting is:
Exception:Thrown: "No connection could be made because the target machine actively refused it" (System.Net.Sockets.SocketException)
A System.Net.Sockets.SocketException was thrown: "No connection could be made because the target machine actively refused it"
Time: 25/01/2015 15:09:23
Thread:Worker Thread[8232]
Why would the target machine (localhost) refuse the connection here when the console version doesn't? I've been looking at the code over and over and I cannot see where I'm going wrong. Can anyone point me in the right direction, please?
Thank you for reading.
Paul.
I suspect this is an issue with the configuration of your machine/infrastructure rather than the code itself, which looks fine at first glance.
Have you checked the console debug output in Visual Studio? I recently encountered an issue with similar symptoms and that was what gave me the initial clue to keep investigating. In my particular case, an exception was written to the console debug output that didn't make it to the client.
SignalR will normally negotiate with the server automatically to determine the best transport method to use. In a .NET client, the available options are LongPollingTransport, ServerSentEventsTransport and WebSocketTransport. So for some reason, your console app can use at least one of those methods, whereas your WinForms client cannot.
You can perhaps enable tracing to give you more information to work with. To do this, enter the below before you create the hub proxy:
hubConnection.TraceLevel = TraceLevels.All;
hubConnection.TraceWriter = Console.Out;
ASP.NET docs on SignalR tracing
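One additional thing worth double-checking in the WinForms snippet from the question: WebApp.Start returns an IDisposable host, and the using block disposes (shuts down) that host as soon as the block exits, whereas the console version keeps the block open with Console.ReadLine(). A minimal sketch that instead keeps the host alive for the form's lifetime (the field name and event handlers are illustrative assumptions):

// Keep the self-hosted SignalR server alive for as long as the form exists.
private IDisposable signalRHost;

private void MainForm_Load(object sender, EventArgs e)
{
    string url = "http://localhost:8080";
    signalRHost = WebApp.Start(url); // no using block, so the host keeps running
}

private void MainForm_FormClosed(object sender, FormClosedEventArgs e)
{
    signalRHost?.Dispose(); // shut the server down when the form closes
}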

How to get the user IP address in Meteor server?

I would like to get the user IP address in my meteor application, on the server side, so that I can log the IP address with a bunch of things (for example: non-registered users subscribing to a mailing list, or just doing anything important).
I know that the IP address 'seen' by the server can be different than the real source address when there are reverse proxies involved. In such situations, X-Forwarded-For header should be parsed to get the real public IP address of the user. Note that parsing X-Forwarded-For should not be automatic (see http://www.openinfo.co.uk/apache/index.html for a discussion of potential security issues).
External reference: this question came up on the meteor-talk mailing list in August 2012 (no solution offered).
1 - Without an HTTP request, in Meteor method functions you should be able to get the client IP with:
clientIP = this.connection.clientAddress;
//EX: you declare a submitForm function with Meteor.methods and
//you call it from the client with Meteor.call().
//In submitForm function you will have access to the client address as above
2 - With an HTTP request, using iron-router and its Router.map function:
In the action function of the targeted route use:
clientIp = this.request.connection.remoteAddress;
3 - Using the Meteor.onConnection function:
Meteor.onConnection(function(conn) {
    console.log(conn.clientAddress);
});
Similar to TimDog's answer, but works with newer versions of Meteor:
var Fiber = Npm.require('fibers');
__meteor_bootstrap__.app
    .use(function(req, res, next) {
        Fiber(function() {
            console.info(req.connection.remoteAddress);
            next();
        }).run();
    });
This needs to be in your top-level server code (not inside Meteor.startup).
This answer https://stackoverflow.com/a/22657421/2845061 already does a good job of showing how to get the client IP address.
I just want to note that if your app is served behind proxy servers (as usually happens), you will need to set the HTTP_FORWARDED_COUNT environment variable to the number of proxies you are using.
Ref: https://docs.meteor.com/api/connections.html#Meteor-onConnection
You could do this in your server code:
Meteor.userIPMap = [];
__meteor_bootstrap__.app.on("request", function(req, res) {
    var uid = Meteor.userId();
    if (!uid) uid = "anonymous";
    if (!_.any(Meteor.userIPMap, function(m) { return m.userid === uid; })) {
        Meteor.userIPMap.push({ userid: uid, ip: req.connection.remoteAddress });
    }
});
You'll then have a Meteor.userIPMap with a map of userids to ip addresses (to accommodate the x-forwarded-for header, use this function inside the above).
Three notes: (1) this will fire whenever there is a request in your app, so I'm not sure what kind of performance hit it will cause; (2) the __meteor_bootstrap__ object is going away soon, I think, with a forthcoming revamped package system; and (3) the anonymous user needs better handling here; you'll need a way to attach an anonymous user to an IP by a unique, persistent constraint in their request object.
You have to hook into the server sessions and grab the IP of the current user:
Meteor.userIP = function(uid) {
    var k, ret, s, ss, _ref, _ref1, _ref2, _ref3;
    ret = {};
    if (uid != null) {
        _ref = Meteor.default_server.sessions;
        for (k in _ref) {
            ss = _ref[k];
            if (ss.userId === uid) {
                s = ss;
            }
        }
        if (s) {
            ret.forwardedFor = (_ref1 = s.socket) != null ?
                (_ref2 = _ref1.headers) != null ?
                    _ref2['x-forwarded-for'] : void 0 : void 0;
            ret.remoteAddress = (_ref3 = s.socket) != null ?
                _ref3.remoteAddress : void 0;
        }
    }
    return ret.forwardedFor ? ret.forwardedFor : ret.remoteAddress;
};
Of course you will need the current user to be logged in. If you need it for anonymous users as well follow this post I wrote.
P.S. I know it's an old thread but it lacked a full answer or had code that no longer works.
Here's a way that has worked for me to get a client's IP address from anywhere on the server, without using additional packages. It works in Meteor 0.7 and should work in earlier versions as well.
On the client, get the socket URL (unique) and send it to the server. You can view the socket URL in the web console (under Network in Chrome and Safari).
socket_url = Meteor.default_connection._stream.socket._transport.url
Meteor.call('clientIP', socket_url)
Then, on the server, use the client's socket URL to find their IP in Meteor.server.sessions.
sr = socket_url.split('/')
socket_path = "/"+sr[sr.length-4]+"/"+sr[sr.length-3]+"/"+sr[sr.length-2]+"/"+sr[sr.length-1]
_.each(_.values(Meteor.server.sessions), (session) ->
    if session.socket.url == socket_path
        user_ip = session.socket.remoteAddress
)
user_ip now contains the connected client's IP address.
