Green Bar for Self-Made SSL - asp.net

So, considering the pricing for SSLs, I chose to try creating my own SSL certificate (still in the works).
Once I get that part done, how do I get the EV and green-bar aspect of the certificate set up?

It can make sense to create your own certificates (and use your own CA) if you're in an environment where your users can import your own CA certificate in a way they can verify it independently. Typically, this works fine on an institution's network where someone installs the extra CA certificate as part of the OS configuration when configuring centrally-administered machines (and similar cases). Under these conditions, this will get you a blue bar without any problem.
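As a rough illustration, here is a minimal Java sketch (the file path and URL are placeholders, and the equivalent exists for other platforms) of what trusting your own CA looks like at the application level; for browsers you would instead import the CA certificate into the OS or browser trust store:

    import java.io.FileInputStream;
    import java.net.URL;
    import java.security.KeyStore;
    import java.security.cert.CertificateFactory;
    import javax.net.ssl.HttpsURLConnection;
    import javax.net.ssl.SSLContext;
    import javax.net.ssl.TrustManagerFactory;

    public class OwnCaClient {
        public static void main(String[] args) throws Exception {
            // Load the self-made CA certificate into an in-memory trust store
            // ("rootCA.pem" is a placeholder path).
            CertificateFactory cf = CertificateFactory.getInstance("X.509");
            KeyStore trustStore = KeyStore.getInstance(KeyStore.getDefaultType());
            trustStore.load(null, null); // start with an empty keystore
            try (FileInputStream in = new FileInputStream("rootCA.pem")) {
                trustStore.setCertificateEntry("my-own-ca", cf.generateCertificate(in));
            }

            // Build an SSLContext whose trust decisions are based on that CA only.
            TrustManagerFactory tmf =
                    TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
            tmf.init(trustStore);
            SSLContext ctx = SSLContext.getInstance("TLS");
            ctx.init(null, tmf.getTrustManagers(), null);

            // HTTPS connections made through this socket factory accept certificates
            // issued by the self-made CA (the URL is a placeholder).
            HttpsURLConnection conn =
                    (HttpsURLConnection) new URL("https://intranet.example.com/").openConnection();
            conn.setSSLSocketFactory(ctx.getSocketFactory());
            System.out.println("HTTP status: " + conn.getResponseCode());
        }
    }

This only affects the application that configures it; what the browser shows still depends on what the browser itself trusts.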
Extended Validation certificates (which produce a green bar) rely on two things:
Extra policy attributes in the certificate: technically, you could define some OIDs yourself, although getting others to recognise them will obviously be a problem (a small sketch below shows how to inspect these policy attributes in a certificate).
OID policies and root certificate fingerprints that are hard-coded in the browser.
This second point is what will prevent you from doing it yourself. Non-EV certificates are something you could potentially handle on machines under your control using a few extra configuration steps. EV certificates also require that you control the compilation of the browser: this is not going to happen for proprietary browsers such as IE, and it is still quite a lot of work for open-source browsers (e.g. Firefox/Chromium), since you wouldn't be able to rely on the pre-compiled binaries (and you'd have to recompile them yourself for every new release).
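To make the first point above concrete, here is a small, hedged Java sketch (the certificate file path comes from the command line) that only checks whether a certificate carries the certificatePolicies extension (OID 2.5.29.32), which is where EV policy OIDs live; fully decoding the OIDs would need an ASN.1 parser:

    import java.io.FileInputStream;
    import java.security.cert.CertificateFactory;
    import java.security.cert.X509Certificate;

    public class PolicyCheck {
        public static void main(String[] args) throws Exception {
            // Load a certificate from a PEM/DER file given on the command line.
            CertificateFactory cf = CertificateFactory.getInstance("X.509");
            X509Certificate cert;
            try (FileInputStream in = new FileInputStream(args[0])) {
                cert = (X509Certificate) cf.generateCertificate(in);
            }
            // 2.5.29.32 is the id-ce-certificatePolicies extension. EV certificates
            // carry a CA-specific policy OID here, which the browser must also have
            // hard-coded against the issuing root's fingerprint.
            byte[] policies = cert.getExtensionValue("2.5.29.32");
            System.out.println(policies != null
                    ? "certificatePolicies extension present (" + policies.length + " DER bytes)"
                    : "no certificatePolicies extension");
        }
    }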

I think you're probably going to find that all that nice stuff (assuming you are talking about the padlock and green bar in the browser) depends on the other part of certification: the fact that the certificate has been issued by a trusted provider who has verified your identity. This you probably can't do on your own.

The whole idea of SSL certificates is that somebody everybody trusts (Mozilla, Microsoft) certifies certifiers (Thawte etc.) to certify sites they audit (somehow) as being genuine. This won't work with your certificate; the idea is a trust chain: Microsoft trusts Thawte, and Thawte trusts you. In your case nobody trusts you, so your self-signed certificate means nothing to browsers.

Related

Why are all HTTPS communications visible to other apps on a device? (HTTP Toolkit)

I noticed that using HTTP Toolkit, you can sniff all HTTPS communications in an unencrypted form, from browsers on Windows and Android OS, plus all applications on a rooted Android device or an emulator or via some workaround on a PC. All fields and data from headers, request bodies, and responses are intercepted without encryption.
I find this to be a significant security flaw, as a hacker can easily analyze how an app communicates, thus gaining more knowledge of how the server communicates, as well as seeing API keys in the headers.
In addition, an attacker could install spyware on a victim's PC or a public PC to record entered credentials, the same way HTTP Toolkit does.
Is there a reason this is allowed to happen in the first place? Is there a way to prevent this from happening?
It's explicitly allowed because it's extremely useful. It's how all kinds of debugging, testing, and profiling tools are implemented, as well as some kinds of ad blockers and other traffic modifiers.
It's possible because it cannot be prevented in the most general way. A user who fully controls a device can inspect all behavior and traffic on that device. That is what it means to control a device. Traffic is encrypted to protect the user, not to protect apps from their user. If seeing the API would significantly impact the security of the system, the system is already insecure.
Your concern that an attacker may take over a user's machine and observe them is valid, but it goes far deeper than this. An attacker who has administrative access to the system can observe all kinds of things, most commonly by installing a keylogger to watch what they type. There is no way to secure a device that an attacker has complete physical access to.
You can limit TLS sniffing using certificate pinning. Google does not recommend this because it's hard to manage. However, for some situations, it's worth the trouble. See also HTTP Toolkit's discussion on the topic.
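As a hedged illustration (assuming the OkHttp client library, which is common on Android; the hostname and the sha256 pin value are placeholders), pinning looks roughly like this:

    import okhttp3.CertificatePinner;
    import okhttp3.OkHttpClient;
    import okhttp3.Request;
    import okhttp3.Response;

    public class PinnedClient {
        public static void main(String[] args) throws Exception {
            // Pin the expected public-key hash for the API host. If an intercepting
            // proxy (HTTP Toolkit, a corporate MITM box, etc.) presents its own
            // certificate instead, the hash won't match and the connection fails.
            CertificatePinner pinner = new CertificatePinner.Builder()
                    .add("api.example.com", // placeholder host
                         "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=") // placeholder pin
                    .build();

            OkHttpClient client = new OkHttpClient.Builder()
                    .certificatePinner(pinner)
                    .build();

            Request request = new Request.Builder()
                    .url("https://api.example.com/v1/ping") // placeholder URL
                    .build();
            try (Response response = client.newCall(request).execute()) {
                System.out.println("Status: " + response.code());
            }
        }
    }

Pinning is a defence-in-depth measure rather than an absolute one: a user who controls the device can still patch the pin out of the app, which is exactly the point made above.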
You've found a good thing to study. I recommend digging into how HTTP Toolkit works. It will give you a much better understanding of what TLS does and doesn't provide.
I don't think this is too serious.
HTTP Toolkit cannot intercept your normal browser.
It only creates a guest browser profile, opens it, and intercepts that.
That browser is not related to your own browser, and the two do not share anything.
The same thing happens with Selenium.
Selenium is widely used for automated testing and can be integrated with Python, C#, and so on.
It also opens its own browser with a separate profile, and your test code communicates with it.
Either way, these tools cannot intercept your normal browser.
If you are serious about security, you must not browse websites with sensitive data in a browser opened by HTTP Toolkit or Selenium.
Just use your normal browser.

Varnish to be used for HTTPS

Here's the situation. I have clients over a secured network (HTTPS) that talk to multiple backends. Now, I want to set up a reverse proxy, mainly for load balancing (based on header data or cookies) and a little caching. So I thought Varnish could be of use.
But Varnish does not support SSL connections. As I've read in many places: "Varnish does not support SSL termination natively". However, I want every connection, i.e. client-to-Varnish and Varnish-to-backend, to be over HTTPS. I cannot have plaintext data anywhere on the network (there are restrictions), so nothing else can be used as an SSL terminator (or can it?).
So, here are the questions:
Firstly, what does it mean (if someone can explain it in simple terms) that "Varnish does not support SSL termination natively"?
Secondly, is this scenario good to implement using Varnish?
And finally, if Varnish is not a good contender, should I switch to some other reverse proxy? If yes, which one would be suitable for the scenario? (HAProxy, Nginx, etc.)
What does it mean (if someone can explain it in simple terms) that "Varnish does not support SSL termination natively"?
It means Varnish has no built-in support for SSL. It can't operate in a path with SSL unless the SSL is handled by separate software.
This is an architectural decision by the author of Varnish, who discussed his contemplation of integrating SSL into Varnish back in 2011.
He based this on a number of factors, not the least of which was wanting to do it right if at all, while observing that the de facto standard library for SSL is OpenSSL, a labyrinthine collection of over 300,000 lines of code; he was neither confident in that code base nor in the likelihood of a favorable cost/benefit ratio.
His conclusion at the time was, in a word, "no."
That is not one of the things I dreamt about doing as a kid and if I dream about it now I call it a nightmare.
https://www.varnish-cache.org/docs/trunk/phk/ssl.html
He revisited the concept in 2015.
His conclusion, again, was "no."
Code is hard, crypto code is double-plus-hard, if not double-squared-hard, and the world really don't need another piece of code that does an half-assed job at cryptography.
...
When I look at something like Willy Tarreau's HAProxy I have a hard time to see any significant opportunity for improvement.
No, Varnish still won't add SSL/TLS support.
Instead in Varnish 4.1 we have added support for Willys PROXY protocol which makes it possible to communicate the extra details from a SSL-terminating proxy, such as HAProxy, to Varnish.
https://www.varnish-cache.org/docs/trunk/phk/ssl_again.html
This enhancement could simplify integrating Varnish into an environment with encryption requirements, because it provides another mechanism for preserving the original browser's identity in an offloaded SSL setup.
Is this scenario good to implement using Varnish?
If you need Varnish, use it, being aware that SSL must be handled separately. Note, though, that this does not necessarily mean unencrypted traffic has to traverse your network... though it does make for a more complicated and more CPU-hungry setup.
Nothing else can be used as an SSL terminator (or can it?)
The SSL can be offloaded on the front side of Varnish and re-established on the back side, all on the same machine that runs Varnish but in separate processes, using HAProxy, stunnel, nginx, or other solutions in front of and behind Varnish. Any traffic in the clear then stays within the confines of one host, so it is arguably not a point of vulnerability if the host itself is secure, since it never leaves the machine.
If Varnish is not a good contender, should I switch to some other reverse proxy?
This is entirely dependent on what you want and need in your stack, its cost/benefit to you, your level of expertise, the availability of resources, and other factors. Each option has its own set of capabilities and limitations, and it's certainly not unheard-of to use more than one in the same stack.

Securing information from a retail POS system

I have created a back-end/processing/statistics system for POS transactions for a retail store chain. The thing is, it is now time to move from alpha to beta, and we need some sort of safety for the incoming data. And this is where I am lost. How do I implement some semblance of security in this kind of system?
What I have come up with is a simple asymmetric key pair that is unique for each POS system, where the server has all of the private keys and each POS has the public part of this exchange. In addition to this, all of the data exchange is sent via HTTPS.
Does this kind of thing make sense? Or is there a better way to keep the data safe?
P.S. Since each POS already needs to be reconfigured separately for reasons in no way connected to this system, having to do manual work at each POS is not a problem.
You'd like to accomplish 2 things:
1) Encrypt the traffic so that it is hidden from outsiders (confidentiality). You can accomplish this quite easily by enforcing that SSL is used for traffic between the client(s) and the server. The server will require an X.509 certificate to accomplish this.
2) Ensure that all traffic coming to the server originates from a trusted client/POS (integrity/authenticity). You can accomplish this using a couple of different techniques, both of which require an X.509 certificate installed on each client (POS) system:
a) Require that all requests to the server be accompanied by a client certificate. In this scenario, the client (POS) has an X.509 certificate installed and can access its own private key (the server does not, and should not, have this private key; it belongs to the client). The server is configured to require a client certificate with each request, and also to validate that the certificate presented does indeed match one of the POS systems. So if you add a new POS later, you need to make a change on the server to ensure that it will consider the new POS certificate valid. Here is a description of the protocol for your own enrichment; you shouldn't need to know exactly how it works (most tools, such as IIS and Apache, will abstract much of this for you), but it does demystify things a bit: http://publib.boulder.ibm.com/infocenter/tivihelp/v5r1/index.jsp?topic=%2Fcom.ibm.itim.infocenter.doc%2Fcpt%2Fcpt_ic_security_ssl_authent2way.html
OR
b) Require that all requests to the server are digitally signed by trusted clients. Public-key (asymmetric) cryptography allows you to sign a message: it is signed with the client's (POS) private key, and then anyone (including the server) can verify its integrity by validating the signature with the client's public key. Many tools will actually encrypt and sign the message, which is OK, but if you're already using SSL and performance is a concern, you don't need to encrypt twice; if security is more important than performance, encrypting twice can't hurt. A minimal signing sketch is shown below. Here is some more info on digital signatures: http://www.cgi.com/files/white-papers/cgi_whpr_35_pki_e.pdf
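For option (b), here is a minimal, self-contained Java sketch of signing on the POS side and verifying on the server side. The key pair is generated on the fly purely for the demo; in practice the private key would live only on the POS, and the server would take the public key from the POS's certificate:

    import java.nio.charset.StandardCharsets;
    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import java.security.Signature;

    public class SignedMessageDemo {
        public static void main(String[] args) throws Exception {
            // Demo key pair; in production the POS keeps the private key and the
            // server stores only the matching public key / certificate.
            KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
            kpg.initialize(2048);
            KeyPair posKeys = kpg.generateKeyPair();

            byte[] payload = "{\"store\":42,\"total\":19.99}".getBytes(StandardCharsets.UTF_8);

            // POS side: sign the transaction payload with the private key.
            Signature signer = Signature.getInstance("SHA256withRSA");
            signer.initSign(posKeys.getPrivate());
            signer.update(payload);
            byte[] signature = signer.sign();

            // Server side: verify the signature with the POS's public key before
            // accepting the transaction into the statistics back-end.
            Signature verifier = Signature.getInstance("SHA256withRSA");
            verifier.initVerify(posKeys.getPublic());
            verifier.update(payload);
            System.out.println("Signature valid: " + verifier.verify(signature));
        }
    }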
So you should have a pretty good plan of how to proceed. Feel free to ask around here when you set out to implement these solutions, as there are a lot of things that usually don't work the first time around, and debugging them is often difficult. I recommend a tool such as Fiddler or Wireshark, which can help debug web services to some extent. Be sure that your clients can access their own private keys, and that the clients' certificates are trusted by the server. Good luck.
http://fiddler2.com/

Why is HTTP far more used than HTTPS?

I hope every reason gets mentioned. I think performance is the main reason, but I'd like everyone to mention what he/she knows about this.
Please explain everything, as I'm still a beginner.
Thanks in advance :)
It makes pages load slower, at least historically. Nowadays this may not be so relevant.
It's more complex for the server admin to setup and maintain, and perhaps too difficult for the non-professional.
It's costly for small sites to get and regularly renew a valid SSL certificate from the SSL certificate authorities.
It's unnecessary for most of your web browsing.
It disables the HTTP_REFERER field, so sites can't tell where you've come from. Good for privacy, bad for web statistics analysis, advertisers and marketing.
Edit: I forgot that you also need a separate IP address for each domain using SSL. This is incompatible with name-based virtual hosting, which is widely used for cheap shared web hosting. This might become a non-issue if/when IPv6 takes off, but it makes it impossible for every domain to have SSL using IPv4.
HTTPS is more expensive than plain HTTP:
Certificates issued by a trusted issuer are not free
TLS/SSL handshake costs time
TLS/SSL encryption and compression take time and additional resources (the same goes for decryption and decompression)
But I guess the first point is the main reason.
Essentially it's as Gumbo posts. But given the advances in the power of modern hardware, there's an argument that there's no reason not to use HTTPS any more.
The biggest barrier is the trusted certificate. You can go self-signed, but that then means all visitors to your site get an "untrusted certificate" warning. The traffic will still be encrypted, and it is no less secure, but big certificate warnings can put potential visitors off.
I may be stating the obvious, but not all content needs transport-layer security.

J2ME's extra annoying HTTP permission prompt

Some phones only prompt the user for permission the first time a connection is made. Others pop up the permission prompt whenever the MIDlet attempts to make a HTTP connection! What are the options if we want to suppress the prompt?
Can we sign the JAR using only one CA (Certificate Authority) and have it work on all devices? Do we have to pay for a signature on every release?
Is it an option to create our own CA certificate and tell our customers to install it on their device?
Alternatively, it seems that plain socket connections do not suffer so. Is there a free implementation of HTTP on top of TCP for J2ME?
Some phones allow you to change the setting manually so that you are asked only once per session. Or try adding
MIDlet-Permissions: javax.microedition.io.Connector.http
to the JAD file.
Yes, if the build is signed with a root certificate that is available on most devices - a VeriSign Class 3 certificate, for example.
As a security measure, devices don't allow you to install your own certificates, even if they are obtained from a CA.
Plain socket connections may add overhead for processing the data on the client side. Some security issues are also involved.
Signing the JAR is not guaranteed to suppress these prompts on all handsets and all networks. It may work on some. AFAIK you usually need to sign per build; so if you use the same build on many handsets, you need to sign only once.
You could write your own implementation of HTTP over sockets, but beware that Socket implementations do not allow access to ports 80 and 8080 (again AFAIK).
Your best option when experiencing multiple prompts for HTTP is to direct the user to the MIDlet permissions setting in their handset menu; this should be changed to "ask once".
HTH,
funkybro
Java Verified's UTI root certificate is not on all handset/network combinations; the same is true for the other trusted third-party domains such as VeriSign and Thawte (and for these bodies, Motorola devices in particular).
It is fair to say that the UTI certificate is probably the one to choose to give you the most coverage across handsets.
To suppress the HTTP connection prompt, signing the app is essentially the only option. The other route is to get it preloaded on a phone before it goes to market, but even then the handset manufacturers require signed JADs/JARs.
Making a set of JAD/JAR files work on different devices does not depend on signing but on how you design the app. If you can address this, then yes, you can have one signed JAD/JAR work on multiple devices.
I do not know about creating your own certificates and asking customers to install them; I don't think that works, as I don't think it is possible.
Implementing HTTP over TCP is fairly easy, provided you know what you are doing, but I don't know of any free implementations of it (a rough sketch of the idea follows below).
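For reference, here is a rough, hedged sketch of a hand-rolled HTTP GET over a plain socket using the Java ME Generic Connection Framework; the host is a placeholder, port 80 is assumed, chunked responses and redirects are not handled, and, as noted elsewhere in this thread, some handsets block raw sockets on ports 80 and 8080:

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import javax.microedition.io.Connector;
    import javax.microedition.io.SocketConnection;

    public class RawHttpGet {
        // Returns the raw HTTP response (status line, headers and body) as a String.
        public static String get(String host, String path) throws IOException {
            SocketConnection sc = (SocketConnection) Connector.open("socket://" + host + ":80");
            try {
                OutputStream out = sc.openOutputStream();
                // Write a minimal HTTP/1.1 request by hand.
                out.write(("GET " + path + " HTTP/1.1\r\n"
                        + "Host: " + host + "\r\n"
                        + "Connection: close\r\n\r\n").getBytes());
                out.flush();

                // Read until the server closes the connection.
                InputStream in = sc.openInputStream();
                StringBuffer response = new StringBuffer();
                int c;
                while ((c = in.read()) != -1) {
                    response.append((char) c);
                }
                return response.toString();
            } finally {
                sc.close();
            }
        }
    }

Note that the MIDlet then typically needs the javax.microedition.io.Connector.socket permission instead, so you may just be trading one prompt for another.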
Get it Java Verified and you will find that, on all networks and phones, the user will be prompted only once each time they start the app to authorise a connection.
