Suppose I have a valid (i.e. signed by one of the commonly trusted authorities) cryptographic certificate on my server. I could obviously use it to establish HTTPS sessions and deliver the contents with confidentiality (only the endpoints can read them), authentication (both endpoints know who they're talking to) and integrity (the messages can't be tampered with).
Now suppose that I actually don't care about the first two but, instead, I just need the last one. For example, let's say I have a static resource that I would like to sign (a la PGP) so that I can give it to other untrusted hosts: if my certificate is public and the resource has been signed with it, any client should be able to verify that the resource has not been tampered with (e.g. by the untrusted host).
The question now is: is there a standard way to statically sign a web page? (I obviously mean something built into all browsers.) I'm aware of someone (Unhosted) trying to accomplish something like this by implementing much of the logic via JavaScript, but I'm still wondering if a more standard way exists.
I'm not aware of any such standard implementation built into a browser.
Even in the mail area, where such behavior has been "standard" for a long time (S/MIME), we find issues every other day with different clients, relays and servers.
For a download you may resort to sending a PKCS#7 container and shipping a tool that unpacks and verifies it. At least plugins and helper applications are available everywhere.
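For instance, here is a minimal sketch of producing such a container with a recent version of Python's cryptography package; the file names and the choice of a detached signature are my assumptions, not part of any standard workflow:

```python
# Sketch: sign page.html into a detached PKCS#7/CMS signature that a
# separate verifier tool can check against the (public) certificate.
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.serialization import pkcs7

cert = x509.load_pem_x509_certificate(open("cert.pem", "rb").read())
key = serialization.load_pem_private_key(open("key.pem", "rb").read(), password=None)

signature = (
    pkcs7.PKCS7SignatureBuilder()
    .set_data(open("page.html", "rb").read())
    .add_signer(cert, key, hashes.SHA256())
    .sign(serialization.Encoding.DER, [pkcs7.PKCS7Options.DetachedSignature])
)
open("page.html.p7s", "wb").write(signature)
```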
I'm also not aware of any standard implementation like that within a browser. But, to back up a bit... some things to consider:
For executable content (like downloaded EXE files, ActiveX controls, Windows Installer packages, etc.), a common, standard solution is Microsoft Authenticode. See http://www.tech-pro.net/code-signing-for-developers.html. Similar solutions exist for Java, Adobe, etc. The CA you buy the cert from will verify your identity. When you sign an EXE file with a cert from a trusted CA, Internet Explorer will display the signer information and a less scary warning message. The same goes for UAC elevation prompts in Windows Vista/7. You're probably familiar with this?
But for the static content situation, the standard solution is SSL. May I ask why SSL isn't an acceptable solution in your application?
The problem I see is that there's no way for the user to verify the identity of the web page from the web browser, other than clicking the SSL "lock" icon in the browser to view the certificate. The newer SSL EV certificates should verify both that you control the domain in question and that you are who you say you are (i.e. nobody should be able to get a "PayPal" certificate for www.paypal.com.hacker.cz).
It sounds from your question that you're looking for an "Authenticode for web pages" sort of thing: a certificate with a subject not tied to a domain name and where the web page could go anywhere. Unfortunately, I'm not aware of any such thing for standard HTML files. I believe you can sign things like Adobe AIR applications, which can be based on HTML / Javascript / etc., although I'm not familiar with that platform. It does place the web page outside of the user's normal web browser, of course.
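To illustrate what the missing "verify" half could look like out of band, here is a sketch using Python's cryptography package; it assumes an RSA certificate and a plain detached signature file (page.sig), not any browser-native mechanism:

```python
# Sketch: verify a detached RSA signature over a static page. verify()
# raises InvalidSignature if the page was tampered with after signing.
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

cert = x509.load_pem_x509_certificate(open("cert.pem", "rb").read())
page = open("page.html", "rb").read()
sig = open("page.sig", "rb").read()

cert.public_key().verify(sig, page, padding.PKCS1v15(), hashes.SHA256())
print("Signature OK; the page has not been modified since signing.")
```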
If I set up a simple web server online (e.g. nginx), generate a very large random string (such that it is unguessable), and host that endpoint on my domain, e.g.
example.com/<very-large-random-string>
would I be safe in, say, hosting a web app at that endpoint with no authentication to store my personal information (like a scratch-pad or notes kind of thing)?
I know Google Docs does this; is there anything special one has to do (again, e.g. for nginx) to prevent someone from getting a list of all available pages?
I guess what I'm asking is: is there any way for a malicious actor to find out about the existence of such a page, preferably irrespective of what web server I use?
I'd be pretty alarmed if my online bank started using this system, but it should give you a basic level of security. Bear in mind that this is security through obscurity, which is rather frowned upon and will immediately turn into no security whatsoever the moment someone discovers the hidden URL.
To prevent this from happening, you will need to take a few precautions:
Install an SSL certificate on your server, and always access the URL via HTTPS, never via plain HTTP (otherwise the URL path will be sent in plaintext, visible to everyone along the way).
Make sure your secure document contains no outgoing links. This includes not only hyperlinks (<a href="...">) but also embedded images, stylesheets, scripts, media files and so on. Otherwise the URL will be leaked to other domains via the Referer request headers.*1
(A bit of a no-brainer, but) make sure there are also no inbound links to this page. Although they aren't so common now, web hosts used to generate automatic "web stats" pages showing the traffic to each web domain. Some content management systems generate a site map automatically. This would be just as bad.
Disable directory browsing on your server. In other words, make sure that someone who visits the directory level above your hidden directory isn't presented with a list of subdirectories.
Bear in mind that the URL will always be visible in your address bar and browser history, and possibly in other places like your browser's cookie jar. Your browser will probably provide the rest of the URL by auto-complete when someone types the domain into your address bar.
*1: Actually, your browser will only send a Referer header when you access other https pages, but still...
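As for the "very large random string" itself, here is a minimal sketch (assuming Python; the domain is a placeholder) using a cryptographically secure generator:

```python
import secrets

# 32 random bytes = 256 bits of entropy, far beyond any brute-force or
# URL-enumeration attack; token_urlsafe makes it safe to use in a path.
token = secrets.token_urlsafe(32)
print(f"https://example.com/{token}")
```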
I am writing an app that will be used for a kiosk. The app will be ASP.NET, and I only want it accessible from certain computers, using Chrome.
I don't think limiting by IP address would work, since a few of the computers will be taken to conventions and used there.
I was thinking of something with a custom certificate, but would like advice.
I think the better way is to give them a login and/or password/special key that lets them sign in through an identification page. After that, you put a cookie into their Chrome.
In the end, it works like any other website.
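As a sketch of what such a "special key" cookie value could look like (illustrated in Python for brevity; the HMAC scheme and names are my assumptions, not an ASP.NET built-in):

```python
import hashlib
import hmac
import secrets

SECRET_KEY = b"server-side secret"  # hypothetical; load from config, not source

def issue_device_token() -> str:
    """Create a token to store in the kiosk's cookie after first login."""
    device_id = secrets.token_hex(16)
    sig = hmac.new(SECRET_KEY, device_id.encode(), hashlib.sha256).hexdigest()
    return f"{device_id}.{sig}"

def verify_device_token(token: str) -> bool:
    """Check the cookie on each request; reject anything we didn't issue."""
    device_id, _, sig = token.partition(".")
    expected = hmac.new(SECRET_KEY, device_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```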
Scenario:
I have an ASP.NET / Silverlight website with web services supplying the Silverlight apps with data. The website uses forms authentication, and thus the web services can also authenticate requests.
Now I would like to pull some data from this system into an Android application. I could implement code for running the forms login and storing the authentication cookie, but it would actually be much simpler to send the username and password in the web service URL and authenticate each call. I don't really see a big problem with this as the communication is SSL encrypted, but I'm open to being convinced otherwise ;)
What do you think? Bad idea / not so bad idea?
Conclusion:
After reviewing the answers, the only really valid argument against name / pass in the URL query string is that it's stored in the server log files. Granted, it's my server, and if that server is hacked then the data it stores will also be hacked, but I still don't like passwords showing up in logs. (That's why they are stored salted and hashed.)
Solution:
I will POST the username and password with the request. Minimal extra work, and more secure.
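A minimal sketch of that (assuming Python's requests library and a hypothetical login endpoint):

```python
import requests

# Credentials travel in the request body: still encrypted under TLS,
# but, unlike the query string, not written to typical access logs.
resp = requests.post(
    "https://example.com/api/login",  # hypothetical endpoint
    data={"username": "alice", "password": "s3cret"},
    timeout=10,
)
resp.raise_for_status()
```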
See Are querystring parameters secure in HTTPS (HTTP + SSL)?
Everything will be encrypted in transit, but the URLs, along with the query string (and thus the passwords), will show up in the server log files.
Bad idea: the contents of your POST body are encrypted, and though the URL parameters are encrypted in transit as well, they can still end up visible to third-party scripts, server logs, or other monitoring software sitting wherever the traffic is decrypted. It is just not a good idea to open up a potential security hole this way.
Users do tend to copy-and-paste URLs straight from their address bar into emails, blogs, etc., and save them in bookmarks, and so on.
And things like plugins, or even other software that reads, for example, window properties (alternate shells, theme managers, accessibility software) could end up with the info. And they might, for example, crash and automatically send crashdumps back to their developers.
And worms far less sophisticated than keyloggers - like things that take screenshots - can get passwords this way. Sometimes even security software can, for example if deployed on a corporate network.
And if the user has a local proxy, then they might be communicating in plaintext with the proxy which in turn is talking in SSL (not the way it's supposed to be done, but it happens).
And for these and more reasons, URLs with embedded usernames and passwords - which used to be standard, such as FTP URLs with the credentials in the authority segment - are now typically refused or stripped by browsers.
https://www.rfc-editor.org/rfc/rfc3986#section-7.5
So, an emphatic NO, DO NOT DO THIS.
It is always good programming practice not to put sensitive info like usernames and passwords in the URL. No matter how good a site is, it can be compromised. So why hand it more info?
I have my website, and it records the number of visitors, IP and time of access...
I want to identify each visitor... I thought this was possible by recording the IP address, but when the IP is dynamic, my system fails. So I think I can solve it by recording the MAC address... is that possible? What language should I use? PHP, ASP, JavaScript?
Thanks
Edit: What can I use to identify each user without requiring login information (username & password)?
The MAC address, by TCP/IP standards, is never communicated outside of the local-area network to which it pertains — routers beyond that LAN don't even get the information you're trying to record.
There are many other ways to try and identify unique visitors, including matching the user-agent's details in addition to the IP, serving cookies as part of your response, etc… it is, after all, a core functionality in the field of "web analytics".
MAC addresses are simply not among the techniques it makes sense to use for this!
It is only possible if you use a technique where you install a "native" app on the client machine: for example, an ActiveX component, a Java applet or a client application. That application, once installed, can read the MAC and then call your web server with the MAC as an argument. In other words, you have to build your own front-end "browser" to handle logging in. Then, once the user is logged in, you can launch the app in the default browser.
It would be nice if future browsers allowed users to give permission to specific sites to access the MAC. Then if a site had a button that said "Register this device" the web application could do so without needing an additional native app installed (after all, the browser IS a native app).
Can't you just have them store a cookie, so that when they come back they can be uniquely identified? No username/password requirement.
http://en.wikipedia.org/wiki/HTTP_cookie
Sorry, but sending the MAC address isn't part of HTTP. However, you can use a cookie to identify different users. Any backend language will do (set the cookie on the server side). You can also set the cookie on the client side using JavaScript.
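For example, a minimal sketch of cookie-based visitor identification (assuming Flask; the cookie name and lifetime are arbitrary choices):

```python
import secrets

from flask import Flask, make_response, request

app = Flask(__name__)

@app.route("/")
def index():
    # Reuse the visitor's ID if the cookie came back; otherwise mint one.
    visitor_id = request.cookies.get("visitor_id") or secrets.token_hex(16)
    resp = make_response(f"Hello, visitor {visitor_id}")
    resp.set_cookie(
        "visitor_id", visitor_id,
        max_age=60 * 60 * 24 * 365,  # roughly one year
        secure=True, httponly=True, samesite="Lax",
    )
    return resp
```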
We have an application that, among other things, checks for the existence of a cookie and reads and decrypts its contents. Though the data stored inside the cookie is not sensitive, it has been encrypted via TripleDES encryption. A question was raised today: could the cookie saved on one PC be copied onto another PC, and would the web application detect the presence of this copied cookie on the other machine, or would it ultimately decrypt it just as it would have on the original PC?
My question is this:
We use the standard ASP.NET implementation to save cookies (i.e. via HttpResponse). Does the index.dat file prevent the transplant of a cookie from one machine to the other? What if the index.dat file was also transported and copied over, or is there some internal structure inside index.dat that ties a cookie to a specific machine?
Absolutely. This is one way that cross-site scripting (XSS) attacks work:
1. I inject JavaScript into a page
2. I wait for someone to look at the page
3. The JavaScript I injected sends me your cookies
4. I log in as you and do bad things
This particular issue bit SO during the private beta.
Yes, stealing cookies is a common technique to steal a session from a user.
Some sites try to bind a cookie to the IP of the client, but this fails in the face of big corporate proxies with multiple outbound interfaces or other non-residential setups.
Even if everything else is ok, if someone can get physical access to the user's machine, they could copy the cookies to another machine.
E.g. just clone the disk if needed!
In addition to the other answers: never trust anything coming from the user of a web app, regardless of whether it's encrypted.
This ties into the idea of validating input on both the client and the server. Don't trust that the validation on the client was done.
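A small sketch of that server-side re-validation (assuming Flask and a hypothetical "uid" cookie):

```python
import re

from flask import abort, request

USER_ID_RE = re.compile(r"[0-9]{1,10}")  # hypothetical expected format

def current_user_id() -> int:
    """Never trust the cookie: re-validate its shape before using it."""
    raw = request.cookies.get("uid", "")
    if not USER_ID_RE.fullmatch(raw):
        abort(400)  # reject anything that doesn't match the expected format
    return int(raw)
```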