My program needs to decrypt an encrypted file after it starts up to load data it requires to function. This data cannot be available to the user.
I'm not a cryptography expert, so what is the best way to protect hardcoded passphrases and other tidbits of data from users, debugging software and disassembling software?
I understand that this is probably bad practice but it's essential for me (at least for now).
If there are other ways to protect my data from the above 3, could you let me know what those are?
Short answer: you can't. Once the software is on the user's disk, a sufficiently smart and determined user will be able to extract the secret data from it.
For a longer answer, see "Storing secrets in software" on the security.SE blog.
what is the best way to protect hardcoded passphrases and other tidbits of data from users, debugging software and disassembling software?
Request the password from the user and don't hardcode the passphrase. This is the ONLY way to be safe.
If you can't do that, and the passphrase must be hardcoded in the app, then all bets are off.
The simplest thing you can do (if you don't have the luxury of doing something elaborate, which will only delay the inevitable) is to delegate the responsibility to the user of the system.
I mean: explicitly state that your software is only as secure as the machine it runs on.
If the attacker has access to start poking around the file system, then your app will be the least of the user's concerns.
In my experience, questions of this type are often motivated by one of four reasons:
1. Your application is connecting to a restricted remote service, such as a database server.
2. You do not want your users to mess with configuration settings, which in turn do not really have to be kept confidential as long as they are unmodified.
3. Copy protection of your own software.
4. Copy protection of data.
Like Ilmari Karonen wrote in his answer, you can't do exactly what you are asking for, and this means in particular that reasons 3 and 4 cannot be solved by cryptography alone.
However, if your reason for asking is 1 or 2, you have ended up asking this question because you made some bad decisions earlier in your design process. For instance, in the case of 1, you should not make a restricted service accessible from systems you do not trust completely. The typical safe solution is to introduce a middle tier that is the only client to your restricted resource, and which you can make public.
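As an illustration, here is a minimal sketch of such a middle tier in Python, assuming Flask and psycopg2; the host name, credential handling, table and endpoint are all placeholders, not a prescription. The point is that the database credentials live only on the trusted middle tier, while the client shipped to users only ever sees a narrow HTTP API:

```python
# Hypothetical middle tier: the only client of the restricted database.
import flask
import psycopg2

app = flask.Flask(__name__)

def get_db():
    # Credentials live only on this trusted server, never in the
    # application distributed to users.
    return psycopg2.connect(host="db.internal", dbname="app",
                            user="middletier", password="read-from-vault")

@app.route("/api/items/<int:item_id>")
def get_item(item_id):
    # Expose only the narrow operations clients actually need,
    # instead of handing out raw database credentials.
    conn = get_db()
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT name, price FROM items WHERE id = %s", (item_id,))
            row = cur.fetchone()
    finally:
        conn.close()
    if row is None:
        flask.abort(404)
    return flask.jsonify(name=row[0], price=row[1])
```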
In case of 2, the best solution is often to use exactly the same logic for checking your configuration files (or registry settings or what ever) when they are loaded at start up, as you use for checking consistency when the user enters them using your preferred configuration user interface. If you spot an inconsistency, just bring up your configuration UI and highlight the problem.
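A rough sketch of that pattern, with made-up configuration fields and a stubbed-out settings UI standing in for whatever interface you actually use:

```python
# One validator shared by the startup loader and the settings UI,
# so a hand-edited file fails in exactly the same way as bad UI input.
import json

def validate_config(cfg: dict) -> list[str]:
    """Return a list of human-readable problems; empty means valid."""
    problems = []
    if not (1 <= cfg.get("port", 0) <= 65535):
        problems.append("port must be between 1 and 65535")
    if cfg.get("cache_mb", 0) < 0:
        problems.append("cache_mb cannot be negative")
    return problems

def open_settings_ui(highlight: list[str]) -> None:
    # Stand-in for your configuration UI; here we just report the problems.
    for p in highlight:
        print("Config problem:", p)

def load_config(path: str) -> dict:
    with open(path) as f:
        cfg = json.load(f)
    problems = validate_config(cfg)  # same checks the settings UI runs
    if problems:
        open_settings_ui(highlight=problems)
    return cfg
```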
I'm building a site where users enter promo codes. There is no user authentication, but I want to prevent someone entering promo codes by brute force. I'm not allowed to use CAPTCHA, so I was thinking of using an IP address blocking process: the site would block a user's IP address for Y amount of time if they had X failed attempts at entering the promo code.
Are there any glaring issues in implementing something like this?
Blocking IP addresses is a bad idea because that IP address might be the address of a corporate http proxy server.
Most companies and institutions connect to the internet using a gateway. In such a case, the IP address you see is that of the gateway, and N users might be behind it. If you block this IP address because of nuisance caused by one user in that network, IP-based blocking will also make your site unavailable for the other N users. This is true wherever a bunch of computers are NATed behind a single router.
Scenario 2: what if, say, X users in that same network inadvertently provide an incorrect code within your limit of Y minutes? All users in that network again get blocked from entering any more codes.
You can use a cookie-based system, where you store the number of attempts in the past Y minutes in a cookie (or in a session variable on the server side) and validate it on each attempt. However, this isn't foolproof either, as a user who knows your implementation can circumvent it as well.
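To make that concrete, here is a hedged sketch of such a session-based throttle, assuming Flask; the limits, route and is_valid_code check are placeholders:

```python
# Per-session throttling of promo-code attempts (rolling window).
import time
from flask import Flask, abort, request, session

app = Flask(__name__)
app.secret_key = "change-me"  # required for signed session cookies

MAX_ATTEMPTS = 5
WINDOW_SECONDS = 600

def is_valid_code(code: str) -> bool:
    return code == "DEMO2024"  # placeholder validation

@app.route("/redeem", methods=["POST"])
def redeem():
    now = time.time()
    # Keep only the attempts that fall inside the rolling window.
    attempts = [t for t in session.get("attempts", []) if now - t < WINDOW_SECONDS]
    if len(attempts) >= MAX_ATTEMPTS:
        abort(429)  # Too Many Requests
    if not is_valid_code(request.form.get("code", "")):
        attempts.append(now)
        session["attempts"] = attempts
        return "Invalid code", 400
    session["attempts"] = []
    return "Code accepted"
```

As noted above, a client that simply discards its cookie gets a fresh window, so treat this as friction rather than a hard guarantee.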
If you're on IIS 7, there's actually an extension that helps you do precisely what you're talking about.
http://www.iis.net/download/DynamicIPRestrictions
This could save you from trying to implement this through code. As for any "glaring issues", this sort of thing is done all the time to prevent brute force attacks on web applications. I can't think of any reason why a real user would need to try to enter in codes in the same manner a computer that's issuing a brute force attack would. Testing any and all possible user experiences would hopefully get you past any issues that might pop up.
(If it's not linked to a shop, DO NOT CONSIDER THIS.)
Have you thought about placing a hidden tag on your orders? It's not 100% foolproof, but it will discourage some brute-forcing.
All you have to check is whether the hidden tag pops up with tons of promo codes; if it does, you block the order.
I would still recommend you set up some kind of login.
I don't think there is one solution that will solve all your problems, but if you want to slow down a brute force attack, just adding a delay of a few hundred milliseconds to the page load will do a lot!
You could also force them to first visit the page where they enter the code; there you could add a hidden field with a value and store the same value in the session, and when the user submits the code you compare the hidden field to the session value.
This way you force the attacker to make two requests instead of just one. You could also measure the time between those two requests, and if it's below a set amount of time you can more or less guarantee it's a bot.
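A minimal sketch of that two-request scheme, again assuming Flask; the route, form and the 2-second threshold are arbitrary illustrations:

```python
# A token issued with the form must come back on submit, and submissions
# arriving "too fast" after the form was served are treated as bots.
import secrets
import time
from flask import Flask, abort, render_template_string, request, session

app = Flask(__name__)
app.secret_key = "change-me"

FORM = """<form method="post">
  <input name="code">
  <input type="hidden" name="token" value="{{ token }}">
  <button>Redeem</button>
</form>"""

@app.route("/promo")
def promo_form():
    session["token"] = secrets.token_urlsafe(16)
    session["issued_at"] = time.time()
    return render_template_string(FORM, token=session["token"])

@app.route("/promo", methods=["POST"])
def promo_submit():
    if request.form.get("token") != session.get("token"):
        abort(400)  # never fetched the form first
    if time.time() - session.get("issued_at", 0) < 2.0:
        abort(429)  # submitted faster than a human plausibly could
    session.pop("token", None)  # each token is single-use
    return "Checking code..."
```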
I have a DB with user accounts information.
I've scheduled a cron job which updates the DB with any new data it fetches from their accounts.
I was thinking that this may cause a problem since all requests are coming from the same IP address and the server may block requests from that IP address.
Is this the case?
If so, how do I avoid being banned? Should I be using a proxy?
Thanks
You get banned for suspicious (or malicious) activity.
If you are running a normal business application inside a normal company intranet you are unlikely to get banned.
Since you have access to user accounts information, you already have a lot of access to the system. The best thing to do is to ask your systems administrator, since he/she defines what constitutes suspicious/malicious activity. The systems administrator might also want to help you ensure that your database is at least as secure as the original information.
should I be using a proxy?
A proxy might disguise what you are doing - but you are still doing it. So this isn't the most ethical way of solving the problem.
Is the cron job that fetches data from this "database" on the same server? Are you fetching data for a user from a remote server using screen scraping or something?
If this is the case, you may want to set up a few different cron jobs and do it in batches. That way you reduce the load on the remote server and lower the chance that wherever you are getting this data from will block your access.
Edit
Okay, so if you have not got permission to do scraping, obviously you are going to want to do it responsibly (no matter the site). Try to gather as much data as you can from as few requests as possible, and spread them out over the course of the whole day, or even during times that are likely to be low-load. I wouldn't try to use a proxy; that wouldn't really help the remote server, and it would be a pain in the ass for you.
I'm no iPhone programmer, and this might not be possible, but you could try having the individual iPhones grab the data so all the source traffic isn't from the same IP. Just an idea; otherwise, just try to be a bit discreet.
Here are some tips from Jeff regarding the scraping of Stack Overflow, but I'd imagine that the rules are similar for any site (a sketch applying the first two tips follows the list).
Use GZIP requests. This is important! For example, one scraper used 120 megabytes of bandwidth in only 3,310 hits which is substantial. With basic gzip support (baked into HTTP since the 90s, and universally supported) it would have been 20 megabytes or less.
Identify yourself. Add something useful to the user-agent (ideally, a link to an URL, or something informational) so we can see your bot as something other than "generic unknown anonymous scraper."
Use the right formats. Don't scrape HTML when there is a JSON or RSS feed you could use instead. Heck, why scrape at all when you can download our cc-wiki data dump??
Be considerate. Pulling data more than every 15 minutes is questionable. If you need something more timely than that ... why not ask permission first, and make your case as to why this is a benefit to the SO community and should be allowed? Our email is linked at the bottom of every single page on every SO family site. We don't bite... hard.
Yes, you want an API. We get it. Don't rage against the machine by doing naughty things until we build it. It's in the queue.
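For what it's worth, here is a minimal sketch of a scraper following the first two tips (compressed responses and an identifiable User-Agent) plus the 15-minute pacing, assuming the requests library; the URL in the User-Agent and the contact address are placeholders:

```python
# A "considerate scraper": gzip, identifiable User-Agent, slow pace.
import time

import requests

HEADERS = {
    # requests already sends Accept-Encoding: gzip by default and
    # transparently decompresses the body; being explicit documents intent.
    "Accept-Encoding": "gzip",
    "User-Agent": "my-data-bot/1.0 (+https://example.com/bot; admin@example.com)",
}

def fetch(urls):
    with requests.Session() as s:
        s.headers.update(HEADERS)
        for url in urls:
            resp = s.get(url, timeout=30)
            resp.raise_for_status()
            yield resp.text
            time.sleep(15 * 60)  # be considerate: at most one pull per 15 minutes
```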
This is the first time I have been faced with someone trying to penetrate a website I have created. What can I do to put a stop to the attempts?
As a side note, their SQL injection stands no chance of ever working, and there isn't any data that we have that isn't already available to anyone using this site normally.
Appended:
I think the code part is covered for most XSS and SQL injection, but I am definitely considering a security audit. I was just curious about the response. Am I really limited to just blocking IP addresses?
If you already are protected against SQL injections, you've got a major attack covered. The next biggest threat (in my opinion) would be Cross-Site Scripting (XSS) since it would allow an attacker to have another user do something malicious, making it hard to track that activity.
You should also be aware of Cross-Site Request Forgeries (CSRF), since that is one that many people seem to miss a lot of times.
I would take a look at OWASP's Top 10 Web Security Vulnerabilities and make sure you protect against all 10 of them as best as possible. Any one of them could seriously open you up to attackers if you aren't careful.
Unless this is your first public website, all of the websites you have worked on were under attack roughly 3 minutes into being accessible, whether you knew it or not.
A couple things you can start doing are:
Start blocking the IPs that attacks are coming from. This isn't always feasible as IP addresses frequently change and some types of attacks can work with a spoofed address.
Put an intrusion detection system (IDS) in place and start monitoring everything.
Verify your firewalls are working correctly and monitor the attack vectors. Make sure everything they are going after is pretty well secured.
This answer comes from another one that I answered about IIS getting hacked:
Hopefully you've had your IIS logfiles turned on, and hopefully the hacker didn't erase them. By default they're located here: c:\winnt\system32\LogFiles\W3SVC1, and they will generally be named after the date.
Then it's probably helpful to figure out how to use Log Parser (from Microsoft), which is free. Then use this guide to help you with looking forensically at your logfiles.
Do you have a firewall? Because its syslogs might be helpful.
Another decent tool to help you find SQL injection issues is to go here and download HP's Scrawlr.
If you have any more questions about what you've found, come back and ask.
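If you'd rather script a quick first pass yourself before reaching for Log Parser, here is a hedged sketch that scans W3C-format IIS logs for common SQL injection probe strings; the log path and the patterns are only examples, not a complete signature set:

```python
# Grep IIS W3C logs for query strings that look like SQL injection probes.
import glob
import re

SUSPICIOUS = re.compile(r"(union\s+select|xp_cmdshell|'--|%27|declare\s+@)", re.I)

def scan_logs(pattern=r"c:\winnt\system32\LogFiles\W3SVC1\*.log"):
    for path in glob.glob(pattern):
        with open(path, errors="replace") as f:
            for lineno, line in enumerate(f, 1):
                if line.startswith("#"):
                    continue  # skip W3C header directives
                if SUSPICIOUS.search(line):
                    print(f"{path}:{lineno}: {line.strip()}")

if __name__ == "__main__":
    scan_logs()
```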
Is it many sources or just a few IPs? We've had a few IPs do shady things and have used IIS to block them specifically. If it's a coordinated attack from multiple sources, this won't help.
Recently, I've been reading up on the IRC protocol (RFCs 1459, 2810-2813), and I was thinking of implementing my own server.
I'm not necessarily looking into adhering religiously to the IRC protocol (I'm doing this for fun, after all), but one of the things I do like about it is that a network can consist of multiple servers transparently.
There are a number of things I don't like about the protocol or the IRC specification. The first is that nicknames aren't owned. While services like NickServ exist, they're not part of the official protocol. On the other hand, implementing something like NickServ properly kind of defeats the purpose of distribution (i.e. there'd be one place where NickServ is running, and one data store for it).
I was hoping there'd be a way to manage nicknames on a per-server basis. The problem with this is that if you have two servers that have some registered nicknames, and they then link up, you can have collisions.
Is there a way to avoid this, without using one central data store? That is: is it possible to keep the servers loosely connected (such that they each exist as an independent entity, but can also connect to one another) and maintain uniqueness amongst nicknames?
I realize this question is vague, but I can't think of a better way of wording it. I'm looking more for suggestions than I am for actual yes/no answers. So if anyone has any ideas as to how to accomplish nickname uniqueness in a network while still maintaining server independence, I'd be interested in hearing it. Note that adhering strictly to the IRC protocol isn't at all necessary; I've got no problem changing things to suit my purposes. :)
There's a simple solution if you don't care about strictly implementing an IRC server, but rather implementing a distributed message system that's like IRC, but not exactly IRC.
The simple solution is to use nicknames in the form "nick#host", much like email. So instead of merely being "mipadi", my nickname could be "mipadi#free-memorys-server.net". So I register with just your server, but when your server links up with others to form a big ole' chat network, you can easily union all the usernames together. There might be a "mipadi" on otherserver.net, but then our nicknames become "mipadi#free-memorys-server.net" and "mipadi#otherserver.net", and everything is cool.
Of course, this deviates a good deal from IRC. :)
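A toy illustration of that union, using the server names from above as placeholders; local nicks stay short, and qualification only matters when rosters from different servers are merged:

```python
# Email-style nick qualification: collisions vanish once every nick
# is tagged with its home server.
def qualify(nick: str, server: str) -> str:
    return f"{nick}#{server}"

def merge_networks(*rosters):
    """Each roster maps local nick -> home server; the union is
    collision-free because every entry is qualified."""
    merged = set()
    for roster in rosters:
        for nick, server in roster.items():
            merged.add(qualify(nick, server))
    return merged

print(merge_networks(
    {"mipadi": "free-memorys-server.net"},
    {"mipadi": "otherserver.net"},
))
# {'mipadi#free-memorys-server.net', 'mipadi#otherserver.net'}
```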
They have to be aware of each other. If not, you cannot prevent the sharing of nicknames. If they are, you simply need to transfer updates on the back-end. To prevent simultaneous registrations, you need a transaction system that blocks, requests permission from all other servers, and responds.
To prevent simultaneous registrations during outages, you have no choice but to timestamp the registration, and remove all but the last (or a random for truly simultaneous) registered copy of the nick.
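As a sketch of that blocking permission round (with in-process objects standing in for real network calls, so the Server/peer interface here is entirely hypothetical):

```python
# A nick is committed only after every reachable peer approves it.
import time

class Server:
    def __init__(self, name):
        self.name = name
        self.nicks = {}   # nick -> registration timestamp
        self.peers = []   # other Server instances in the network

    def approve(self, nick):
        # A peer approves unless it already owns the nick.
        return nick not in self.nicks

    def register(self, nick):
        if nick in self.nicks:
            return False
        # Block until every peer has answered; in real code this would be
        # a network round-trip with timeouts and outage handling.
        if all(peer.approve(nick) for peer in self.peers):
            stamp = time.time()
            self.nicks[nick] = stamp
            for peer in self.peers:
                peer.nicks[nick] = stamp  # replicate the committed registration
            return True
        return False

a, b = Server("a"), Server("b")
a.peers, b.peers = [b], [a]
print(a.register("mipadi"))  # True: both servers approve and store it
print(b.register("mipadi"))  # False: the nick is already taken network-wide
```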
It's not very pretty, considering these servers weren't merged in the first place.
You could still implement nick ownership without a central instance, if your server instances trust each other.
When a user registers a nick, it is registered with the server he's currently connected to
When a server receives a registration that it didn't know of, it forwards that information to all other servers that don't know it yet (might need a smart algorithm to avoid spamming the network)
When a server re-connects to another server then it tries to synchronize the list of registered nicks and which server handles which nick
If there is a collision during that sync, then the older registration is used and the newer one is marked as invalid (see the sketch after this list)
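A minimal sketch of that older-registration-wins merge, assuming each server keeps a map from nick to a (timestamp, home server) pair:

```python
# Re-connection sync: on a collision the older registration wins,
# and the newer one is reported so it can be marked invalid.
def sync_registrations(local: dict, remote: dict):
    """Both dicts map nick -> (timestamp, home_server). Returns the
    merged view plus the registrations that lost a collision."""
    merged = dict(local)
    invalidated = []
    for nick, (ts, home) in remote.items():
        if nick not in merged:
            merged[nick] = (ts, home)
        elif ts < merged[nick][0]:
            invalidated.append((nick,) + merged[nick])  # local copy was newer
            merged[nick] = (ts, home)
        else:
            invalidated.append((nick, ts, home))        # remote copy was newer
    return merged, invalidated
```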
If you can't trust your servers, then it'll get a lot harder, as a server could easily claim every username and even claim the oldest registration for each one.
Since you are trying to come up with something new, the idea that springs to mind is simply including something unique about the server as part of the nick name when communicating outside of the server. So if you want to message a user on a different server, you might have something like user#server.
If you don't need them to be completely separate, you might want to consider creating some kind of multiple-master replicated database of accounts, where each server stores a complete copy of the account database, and each server can create new accounts which will be replicated to the other servers when possible. You'll probably still have to deal with collisions on occasion, though.
While services like NickServ exist, they're not part of the official protocol.
Services are not part of the official protocol because they've nothing to do with the protocol. They're bots with permissions. There's no reason why you couldn't have one running on each server but it does make them harder to maintain.
If you were to go down that path, I would probably suggest the commonly used "multiple master" database replication technique: if one node receives a write (in your case, a new user is created or updated, etc.), it sends the data to all the other nodes. You'll have to be careful, though; if one node is offline when the others get an update, it will need to know to resync on reconnection.
Another technique would be as above but in reverse: data is only exchanged between nodes when it's needed. E.g. if a user tries to log in on a node that has no data for them, it'll query the others and issue a move order to get all the data to that one node. This is potentially less painful than the replication version, but there could be severe problems during netsplits if somebody signs up for a duplicate nick on a node disconnected from the pack.
One technique to nullify the problems of netsplits would be to make chat nodes and their bots netsplit-aware. When they're split, they probably shouldn't allow any write actions... but this could impact your network if you're splitting a lot.
You've also got to ask how secure this might or might not be. IRC network nodes are distributed for performance, but they're not "secure". Because of this, service bots are usually run centrally to keep ultimate control over their running. If you distributed the bots and a remote node got hacked, the attackers would potentially have access to the whole user database (depending on the model).