We have a gRPC-based client and server, and we use per-call authentication; that is, we send a (username, password) pair with each gRPC call from the client to the server. We have an RPC that gets called every minute.
The problem is this:
On the server:
/etc/pam.d$ grep unlock *
login:auth required pam_tally2.so file=/var/log/tallylog deny=3 unlock_time=300
sshd:auth required pam_tally2.so file=/var/log/tallylog deny=3 unlock_time=300
If someone logs into the server manually with (username, <wrong_password>) 3 times, the user gets locked. At that point the gRPC calls start failing with UNAUTHENTICATED. Worse, the subsequent gRPC calls that use the right (username, password) also get counted by pam_tally2.so, so the user never gets unlocked.
The only way out is to do:
pam_tally2 -r
This makes it very easy to mount a DoS attack against the gRPC service.
Is there any way to make pam_tally2 not count valid login attempts while the user is locked?
Is there a way to protect the gRPC service while still using call-level credentials?
Thanks for your time.
I think gRPC doesn't have a built-in PAM authentication module, so this is most likely a custom-made plugin. My guess is that you need to take a look at the PAM configuration governing the lock; see https://man7.org/linux/man-pages/man8/pam_tally2.8.html
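For example (just a sketch, and it assumes your server's custom PAM integration can authenticate through its own service name rather than through login or sshd), you could give the gRPC service its own file under /etc/pam.d that does not include pam_tally2, so its authentication attempts are never counted toward the lock. The service name grpc-auth below is hypothetical:

# /etc/pam.d/grpc-auth -- hypothetical PAM service used only by the gRPC server
auth     required   pam_unix.so
account  required   pam_unix.so

Whether this is acceptable depends on whether you still want failed gRPC attempts to contribute to the lockout at all.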
I would like to ask about the Rebus timeout manager. I know there is an internal timeout manager and an external timeout manager, and I have been using the internal timeout manager for quite some time, sharing one timeout database (SQL Server) across all my endpoints.
I would like to know if this is correct.
Secondly, I would like to know whether I can also use one external timeout manager for all my endpoints.
My question comes from the fact that the information contained in the Timeouts table (id, due_time, headers, body) has no connection to the endpoint that sent the message to the timeout manager.
I just would like to get assurance.
Regards
You can definitely use the internal timeout manager like you're currently doing.
The MSSQL-based timeout storage is safe to use concurrently from multiple instances, as it uses some finely trimmed lock hints when reading due messages, thus preventing issues that could otherwise have happened due to concurrent access.
But it's also a valid (and often very sensible) approach to create a dedicated timeout manager and then configure all other Rebus instances to use that.
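For reference, the two setups look roughly like this (a sketch only, assuming the Rebus.SqlServer and Rebus.RabbitMq packages; the connection strings, table name and queue names are placeholders, not anything from your setup):

using Rebus.Activation;
using Rebus.Config;

using (var activator = new BuiltinHandlerActivator())
{
    // Internal timeout manager: the endpoint stores its own deferred messages
    // in a (possibly shared) SQL Server table.
    Configure.With(activator)
        .Transport(t => t.UseRabbitMq("amqp://localhost", "my-endpoint"))
        .Timeouts(t => t.StoreInSqlServer("Server=.;Database=Rebus;Trusted_Connection=True", "Timeouts"))
        .Start();
}

// Alternatively, point every endpoint at one dedicated timeout manager endpoint:
//     .Timeouts(t => t.UseExternalTimeoutManager("timeout-manager"))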
And you are absolutely right that the sender of the timeout is irrelevant. The recipient is determined when sending the timeout, so that
await bus.DeferLocal(TimeSpan.FromMinutes(2), "HELLO FROM THE PAST 🙂");
will send the string to the bus' own input queue, and
await bus.Defer(TimeSpan.FromMinutes(2), "HELLO FROM THE PAST 🙂");
will send the string to the queue mapped as the owner of string:
.Routing(r => r.TypeBased().Map<string>("string-owner"))
In both cases, the message will actually be sent to the timeout manager, which will read the rbs2-deferred-until and rbs2-defer-recipient headers and keep the message until it is due.
I've run into a situation where an infinite loop on the client is crashing the Meteor server. The infinite loop is a bug that I will fix, and not the subject of this question. My concern is that a malicious user could create their own infinite loop and crash the Meteor server.
The infinite loop in question is repeatedly making calls to Meteor.subscribe(...) and Meteor.call(...). It looks like these requests are being queued on the server to the point of incapacitation, even though the client's intention was to abandon them. Is there a way to tell the server that the request has been abandoned and to remove it from the queue?
I suppose this wouldn't protect the server from a client that makes thousands of successive requests without abandoning them, so that question would supersede this one if anyone has an answer to it. How can I limit the number of requests that can be made by a single client?
In these APM charts, you can see how the infinite loop affected performance. I started it at about 13:17, and at 13:25 the app crashed (terminated by Heroku for exceeding its memory quota).
When Meteor.subscribe is called, the Meteor.publish function is executed on the server. You can thus decide in the publish function not to serve the data.
It depends on whether you expect your users to be logged in. If you do, you can create a collection that registers every call to the publish function (i.e. every client subscription request) along with the userId used. You would query this collection whenever a logged-in user attempts to subscribe and check whether that user has been making multiple requests recently. If the client hits your defined request quota, you can just return null.
You can do the same with non-logged-in users by using the https://github.com/gadicc/meteor-headers package and registering the IP address.
You can do the same within the server methods that are repeatedly called by the client via Meteor.call().
Checking this collection (which would stay small, as only the recent connections have to be kept) and deciding whether or not to serve the data should be less time-consuming than serving the data every time.
I hope this helps.
In an app, I have a network server and clients.
After a handshake, let's say the client sends "userId sessionId SOME_COMMAND param param param".
I have already identified the client, and the sessionId is checked on the server accordingly, so identity is no longer an issue.
But I'd like to prevent a hacker from modifying the message or creating a false one, for example sending "userId sessionId SOME_COMMAND paramModified paramModified paramModified".
I thought about using a private/public key pair and sending a hash of the message in the message itself. But since it's automated in the client program, I may have to send the public key during the handshake, so the hacker could simply retrieve it and generate the proper hash.
I could also use complex encryption seeds or algorithms, but my experience with hackers has shown me that they will decompile anything.
So the bottom line is: I can hide everything that runs on the server, but I can't hide anything in the client program. And I'd like to make it impossible to modify the message that the client program is supposed to send.
I don't even know if it's possible, and I'm open to any suggestion. By the way, I'm using Java, although that should not be very relevant. Thanks.
Forget it. Use SSL like everybody else. There are complexities which you haven't even begun to address.
I have to write an application for sending a newsletter.
What is the best way to send a newsletter to thousands of users?
My requirements are:
Each mail is sent separately, with the recipient in the To: field
Every mail has a unique unsubscribe link
Is it good to use the SMTP mail classes of .NET?
I have looked at many questions on SO but can't decide which approach I should take.
There are many suggestions:
Multi-threaded Windows service
Use a mail server
Add Thread.Sleep(2000) between each send.
Can anyone suggest a good way to implement this?
I would not recommend sending from an ASP.NET web page, even if you do start it in a separate background thread. You run the risk of the server recycling your process in the middle of the send, which would mess it up. You really need to write some kind of separate service or application to send your emails.
The simplest option would be to just create a quick and dirty console or Windows Forms application.
Also, logging is critical, just like the other poster said. If it fails, you want to know exactly what got sent out and where it stopped, so that when you restart you don't mail all the people it did work for again. You want to be able to input the starting point for the send, so if you need to restart at email #5000 you can.
The classes in System.Net.Mail namespace will work just fine for sending your mail.
One of the biggest problems will be finding an email host that will let you send so many emails. Most email hosts have throttling, and sometimes it changes depending on server conditions, so if the server is being heavily used the email limits will be more restrictive and you may only get to send 500 emails per hour.
We have a newsletter that goes out to around 20000 people as separate emails and we had to play around with the delay between emails until we found one that would work for our email host. We ended up with 1.2 sec between emails, so that might be a good starting point.
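As a rough illustration (not our actual code; the host, credentials and recipient list below are placeholders), the throttling can be as simple as sleeping between sends:

using System.Net;
using System.Net.Mail;
using System.Threading;

var recipients = new[] { "alice@example.com", "bob@example.com" }; // your real list goes here

using (var client = new SmtpClient("mail.example.com", 587))
{
    client.Credentials = new NetworkCredential("newsletter@example.com", "password");

    foreach (var address in recipients)
    {
        using (var message = new MailMessage("newsletter@example.com", address))
        {
            message.Subject = "Monthly newsletter";
            message.Body = "...";
            client.Send(message);
        }

        // ~1.2 seconds between sends kept us under our host's throttling limit.
        Thread.Sleep(1200);
    }
}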
I think there are email hosts that specialize in bulk mailings, though, so if you use one of those it might not be a problem.
Also, if you host your own email this may not be a problem. And if you do host your own mail, you have the option of dropping the mail in the pickup directory: you can dump it all in there as fast as you want and let the email service pick it up at its own pace.
EDIT: Here are the settings to add to the config file for setting the pickup directory:
<system.net>
  <mailSettings>
    <smtp from="support@test.com" deliveryMethod="SpecifiedPickupDirectory">
      <specifiedPickupDirectory pickupDirectoryLocation="Z:\Path\To\Pickup"/>
    </smtp>
  </mailSettings>
</system.net>
Definitely do not do this in ASP.NET. This is one of the biggest mistakes that new web developers make.
This needs to be a windows app or service that can handle this much volume.
I've written pages that send emails, but not nearly the volume yours will. Nonetheless, I would recommend the following based on code I have implemented in the past:
Use the web application to write out the email and all the recipient addresses to database table(s).
Have a process that runs outside of ASP.NET actually send the emails. This could be a .vbs file set up as a scheduled task, or (preferably) a Windows service. The process takes the text of the email, appends the unsubscribe link, and flags the database record as sent once it has been sent successfully. That way, if the send fails, it can try again later (the send process loops over all the records flagged as unsent; see the sketch after this list).
If you need a log of what was sent and when, you just need to keep the sent records in the database tables. Otherwise, just delete the records once sent successfully.
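A minimal sketch of that send process (the table and column names here are made up for illustration, not a prescribed schema); a record is only flagged after a successful send, so a crash or restart never re-mails the people already handled:

using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Net.Mail;

// Hypothetical table: NewsletterQueue(Id, EmailAddress, Subject, Body, UnsubscribeToken, Sent)
var connectionString = "Server=.;Database=Newsletter;Trusted_Connection=True";

using (var conn = new SqlConnection(connectionString))
using (var smtp = new SmtpClient("mail.example.com"))
{
    conn.Open();

    var pending = new List<(int Id, string Address, string Subject, string Body, string Token)>();
    var select = new SqlCommand(
        "SELECT Id, EmailAddress, Subject, Body, UnsubscribeToken FROM NewsletterQueue WHERE Sent = 0", conn);
    using (var reader = select.ExecuteReader())
    {
        while (reader.Read())
            pending.Add((reader.GetInt32(0), reader.GetString(1), reader.GetString(2),
                         reader.GetString(3), reader.GetString(4)));
    }

    foreach (var item in pending)
    {
        var body = item.Body + Environment.NewLine +
                   "Unsubscribe: https://example.com/unsubscribe?token=" + item.Token;

        smtp.Send(new MailMessage("newsletter@example.com", item.Address, item.Subject, body));

        // Flag the record only after a successful send, so a restart skips it.
        var update = new SqlCommand("UPDATE NewsletterQueue SET Sent = 1 WHERE Id = @id", conn);
        update.Parameters.AddWithValue("@id", item.Id);
        update.ExecuteNonQuery();
    }
}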
IMHO sending emails within the ASP.NET worker process is a bad idea because you don't know how long it will take and if the send fails there's little opportunity to retry before the page times out.
Create a webpage to "design" the newsletter in. When they hit Send, queue the newsletter up somewhere (a database) and use another program (a Windows service, etc.) to send the queued letter. This will be many times more efficient, and potentially fault-tolerant if designed properly.
I have written a newsletter module (as part of a bigger system) in ASP.NET MVC 2, Entity Framework and the System.Net.Mail namespace. It is kicked off in a view and actually just runs in a controller, with a supporting method to do the send. As each email is sent I track whether there is a hard bounce (an exception is thrown) and update that database record with the exception to mark a failure; otherwise I update the record to mark success. We also do personalisation, so we have 'tags' that get replaced by an extra field in the database (stored as XML for flexibility). This helps handle the unsubscribe function.
My code is quite simple (please don't flame me for using exception handling as business logic ;) and it works like a charm.
This is all done on a VPS at http://maximumasp.com which also hosts 4 sites with pretty decent traffic. We use their SMTP servers. We notified them that we needed this service and have had no problems relationship-wise.
We had 2GB of RAM on the machine running Windows 2008 and it was doing 6 emails/sec. We bumped it up to 3GB as the web sites needed it, and now the mailout does about 20 emails/sec. Our mailouts range from 2,000 to 100,000 email addresses.
In short, ASP.NET can be used to handle a mailout, and if you add in some logic to handle record updating, the worry of losing your place mid-send is mitigated. Yes, there are probably slicker ways to do this. We are looking into MSMQ and threading, and separating the send out into a Windows service to make it more stable and scalable as we take on more clients and larger lists, but for now it works just fine with reasonable reporting and error handling.
Hi guys,
My application deals with scheduled mail, i.e. every morning at 6:00 AM my users get a reminder mail about their activities for the day. I don't know how to do this. Many people have told me to use a Windows service, but I will host my website on a shared server and may not get the rights to run a Windows service. Is there any DLL for sending mails at a scheduled time from an ASP.NET application? Please help me out.
You can't do much on shared hosting. Try upgrading your hosting, or else write a Windows service to run on your own machine that calls an ASP.NET page which sends out the emails. Of course your machine has to be switched on all the time, or at least around 6:00 AM :). You will also have to take proper steps to prevent unauthorized requests to that aspx page.
You can check this article too: http://www.codeproject.com/KB/aspnet/ASPNETService.aspx
You can't really do this with ASP.NET. ASP.NET is for web pages, which are reactive to HTTP requests.
You need a scheduled task or a service. All a website can do is respond to requests. I guess you could program the functionality into a web page and have a remote process request the page every morning - but what happens if someone else requests the page?
You can either have a program that runs constantly, with a timer or a loop that checks the time of day and sleeps in between, sending the email when the timer fires or it's the right time of day; or you can launch a program as a scheduled task. The first method can also be implemented as a service if you would like. Keep in mind you don't need ASP.NET to send emails; all you need is a console application that uses System.Net.Mail. Check out the mailer sample on MSDN for a very simple idea.
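For instance (just a sketch of the "sleep until it's time" approach; the SMTP host and addresses are placeholders, and the actual reminder logic is up to you), a plain console application could look like this:

using System;
using System.Net.Mail;
using System.Threading;

class ReminderMailer
{
    static void Main()
    {
        while (true)
        {
            // Work out how long to sleep until the next 6:00 AM.
            var now = DateTime.Now;
            var nextRun = now.Date.AddHours(6);
            if (nextRun <= now)
                nextRun = nextRun.AddDays(1);

            Thread.Sleep(nextRun - now);

            // Placeholder send -- in reality you would loop over your users here.
            using (var client = new SmtpClient("mail.example.com"))
            {
                client.Send(new MailMessage("reminders@example.com", "user@example.com",
                    "Your activities for today", "..."));
            }
        }
    }
}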
One other thing you can consider: IIS has an smtp service that you can install and it uses a pickup directory to send mail. You write an email to the pickup directory as an .eml file and IIS grabs it and sends it almost immediately. If you do that, you'll still have to write the emails (System.net.Mail will write the .eml files from a MailMessage, just set SmtpClient.DeliveryMethod to SpecifiedPickupDirectory or PickupDirectoryFromIIS and call SmtpClient.Send) but it will then send them for you. You'll still need to schedule something somehow so this might not be all that more useful but I thought I'd at least let you know that it exists.
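Something like this (the addresses and path are placeholders) drops the .eml file into a pickup directory instead of talking to an SMTP server directly:

using System.Net.Mail;

var client = new SmtpClient
{
    // Or SmtpDeliveryMethod.PickupDirectoryFromIis to use the IIS SMTP service's own directory.
    DeliveryMethod = SmtpDeliveryMethod.SpecifiedPickupDirectory,
    PickupDirectoryLocation = @"C:\inetpub\mailroot\Pickup"
};

client.Send(new MailMessage("reminders@example.com", "user@example.com",
    "Your activities for today", "Reminder body goes here"));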
One thing to be aware of: when the IIS SMTP service reads the send envelope of the .eml file, the order of the Sender and From headers is significant; if the From header appears before the Sender header then the MAIL FROM command will use the From header, which is incorrect (and MS won't be fixing this one). This appears to be an issue ONLY with the IIS SMTP service as it hasn't been reported anywhere else that I'm aware of. Reversing the order of the headers is the work-around. By default SmtpClient always writes the From header first. I'm aware of the issue and IIS isn't fixing it but I may be able to get a fix into SmtpClient for the .NET 4.0 RC build that re-orders the headers for you but no promises.
If you happen to have it handy (and I assume you do), you can use a SQL Server Agent job to make a request to an ASP.NET page that sends the email.
Here's some example code:
http://nicholasclarke.co.uk/blog/2008/01/16/web-request-from-sql-server-via-c/
Of course, since you're using SQL Server to call CLR code anyway, you could just have that code send out the emails (via System.Net.Mail) rather than requesting a page on IIS to do so. To do this, SQL Server would need:
Access to all of the data needed to send the emails
Outbound firewall access to send an email
CLR code that encapsulates all of the logic needed to know where/what to send.
OK, this is interesting, and what I did fits silky's definition of 'cheating', but it was pretty cool for me.
What I did was spawn a new thread from ASP.Net code (it was possible on that host), and that thread did the scheduled job.
I checked whether the thread was alive (which is pretty easy) on every visit to the website (not so reliable, I know, but it worked because that website has plenty of visitors).
If you do this at all:
Treat this as a stop-gap while you arrange to get a dedicated host or VPS.
Rest assured that the hosting company will kill your thread and withdraw permissions when they discover you're doing this.