I have a commercial product that allows users to connect to various SSH end-points. Currently these users are forced to download and use PuTTY... Seems pretty straightforward, except that my SSH end-points require RSA/private-key authentication. So now connectivity to these end-points is becoming a pain, because I need to explain to my users how to: 1) Download and configure PuTTY. 2) Manage, configure and use their PEM private keys. I would like to make everything transparent by 'just working' through the browser. I own all the information (both IP addresses and PEM connectivity keys), so is there such a thing as a browser-based SSH client that is capable of using RSA keys for connectivity?
MindTerm, from http://www.appgate.com/index/products/mindterm/mindterm_features.html , has a limited-use free license and supports the features you want.
JCTerm is completely free.
Have you tried SSHtools? I think GSI-SSHTerm is derived from it. GSI-SSHTerm is still actively supported as far as I'm aware. It supports the Grid Security Interface (GSI), so it may have more features than you need.
FireSSH is a browser-based SSH client that runs as a Google Chrome extension; install it from this URL:
https://chrome.google.com/webstore/detail/firessh/mcognlamjmofcihollilalojnckfiajm
I noticed that with HTTP Toolkit you can sniff all HTTPS communications in unencrypted form: from browsers on Windows and Android, from all applications on a rooted Android device or an emulator, and via some workarounds on a PC. All fields and data from headers, request bodies, and responses are intercepted without encryption.
This seems like a significant security flaw: an attacker can easily analyze how an app communicates with its server, learning how the API works and seeing API keys in the headers.
An attacker could likewise install spyware that records entered credentials on a victim's PC or a public PC, the same way HTTP Toolkit does.
Is there a reason this is allowed to happen in the first place? Is there a way to prevent this from happening?
It's explicitly allowed because it's extremely useful. It's how all kinds of debugging, testing, and profiling tools are implemented, as well as some kinds of ad blockers and other traffic modifiers.
It's possible because it cannot be prevented in the most general way. A user who fully controls a device can inspect all behavior and traffic on that device. That is what it means to control a device. Traffic is encrypted to protect the user, not to protect apps from their user. If seeing the API would significantly impact the security of the system, the system is already insecure.
Your concern that an attacker may take over a user's machine and observe them is valid, but that problem goes far deeper than this. An attacker who has administrative access to the system can observe all kinds of things, most commonly by installing a keylogger to watch what they type. There is no way to secure a device that an attacker has complete physical access to.
You can limit TLS sniffing using certificate pinning. Google does not recommend this because it's hard to manage. However, for some situations, it's worth the trouble. See also HTTP Toolkit's discussion on the topic.
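For example, here is a minimal sketch of pinning with OkHttp on the JVM or Android (the hostname and the sha256 pin are placeholders for your own values):

    import okhttp3.CertificatePinner;
    import okhttp3.OkHttpClient;
    import okhttp3.Request;
    import okhttp3.Response;

    public class PinnedClient {
        public static void main(String[] args) throws Exception {
            // Reject any TLS chain whose public key hash doesn't match the pin.
            CertificatePinner pinner = new CertificatePinner.Builder()
                    .add("api.example.com", "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=")
                    .build();
            OkHttpClient client = new OkHttpClient.Builder()
                    .certificatePinner(pinner)
                    .build();
            Request request = new Request.Builder()
                    .url("https://api.example.com/v1/ping")
                    .build();
            // With a proxy CA such as HTTP Toolkit's in the chain, this call
            // fails with an SSLPeerUnverifiedException instead of completing.
            try (Response response = client.newCall(request).execute()) {
                System.out.println(response.code());
            }
        }
    }

Keep in mind that a user who fully controls the device can still strip or hook the pinning check, so this raises the bar rather than eliminating interception.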
You've found a good thing to study. I recommend digging into how HTTP Toolkit works. It will give you a much better understanding of what TLS does and doesn't provide.
I don't think this is too serious.
HTTP Toolkit cannot intercept your normal browser.
It only creates a browser with a fresh guest profile, opens it, and intercepts that.
That browser is unrelated to your own browser, and nothing is shared between them.
The same thing happens with Selenium.
Selenium is widely used for automated testing and can be driven from Python, C#, and so on.
It also opens its own browser with a separate profile, and your test code communicates with it.
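For example, a minimal Selenium sketch in Java (assuming chromedriver is installed and on your PATH):

    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class FreshProfileDemo {
        public static void main(String[] args) {
            // ChromeDriver launches Chrome with a brand-new temporary profile:
            // none of your normal cookies, sessions, or extensions are present.
            WebDriver driver = new ChromeDriver();
            try {
                driver.get("https://example.com");
                System.out.println(driver.getTitle());
            } finally {
                driver.quit(); // the temporary profile is thrown away
            }
        }
    }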
Either way, they cannot intercept your normal browser.
If you are serious about security, don't browse sites with sensitive data in a browser that was opened by HTTP Toolkit or Selenium.
Just use your normal browser.
I want to know which options exist to provision (configure) multiple VoIP phones from multiple vendors for use with an Asterisk server. I'd like some kind of interface to manage extensions, configuration templates and so on.
Here's what I found so far:
FreePBX has a commercial module called Endpoint Manager which seems to do what I want. However, I don't like the idea of having to run a web server on the same machine (or container) that runs Asterisk. It seems like a bad idea which increases the attack surface of the Asterisk server. I would much rather have an endpoint manager on a separate server (or container) but I can't find any information about running or buying the Endpoint Manager outside of FreePBX.
Phonism advertises a "Cloud based IP phone provisioning and management system". Their service looks promising, but the number of supported phones is lower and I'm not completely sold on requiring an internet connection to configure the phone extensions in an office.
All the other solutions I found are tied to their complete proprietary VoIP solution (3CX, Kerio, etc.) or to a particular VoIP phone vendor.
Is anything else available? Or do people usually use a single VoIP phone vendor and use their own specific configuration method?
Since I can't find any phone provisioning solution which fits my needs, I'm questioning my understanding of Asterisk deployment best practices. Is using a plain Asterisk deployment a good idea or is it too bare in terms of related tooling?
You are thinking about this in a way that is too abstract and generic.
A VoIP equipment vendor will provide documentation describing which provisioning protocols are used and how to use them. Then you can find a tool that meets that requirement and also suits your environment and skills.
Vendors usually provide proprietary tools to generate provisioning files too.
That said, be advised that TFTP (Trivial File Transfer Protocol) is a common provisioning method.
If you are using a bare-bones Asterisk install on Linux, then setting up your own TFTP server on Linux is, well, trivial by comparison.
Running a provisioning server and asterisk server on different boxes is of course possible but you'll need to find or build some integration tools to keep provisioning config and asterisk config in sync (if that's important to you). I can't think of a reason why using two boxes makes this work significantly more difficult though.
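To make the provisioning side concrete, here is a hypothetical sketch of that kind of glue tool: a generator that renders per-phone config files into a TFTP root, keyed by MAC address. The template fields, file-naming scheme, and paths are made up for illustration; real ones vary by phone vendor, so check the vendor's provisioning guide.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.Map;

    public class ProvisionGen {
        // Hypothetical template; real field names are vendor-specific.
        private static final String TEMPLATE =
            "account.1.user_name = {EXT}\n" +
            "account.1.auth_name = {EXT}\n" +
            "account.1.password = {SECRET}\n" +
            "account.1.sip_server.1.address = {PBX}\n";

        public static void main(String[] args) throws IOException {
            // MAC address -> {extension, SIP secret}; in practice this would be
            // read from the same source of truth as your Asterisk config.
            Map<String, String[]> phones = Map.of(
                    "001565aabbcc", new String[]{"101", "s3cret101"},
                    "001565ddeeff", new String[]{"102", "s3cret102"});
            Path tftpRoot = Paths.get("/srv/tftp");
            for (Map.Entry<String, String[]> e : phones.entrySet()) {
                String cfg = TEMPLATE
                        .replace("{EXT}", e.getValue()[0])
                        .replace("{SECRET}", e.getValue()[1])
                        .replace("{PBX}", "192.0.2.10"); // your Asterisk host
                // Many phones request a file named after their MAC address.
                Files.write(tftpRoot.resolve(e.getKey() + ".cfg"), cfg.getBytes());
            }
        }
    }

Running a generator like this from wherever you manage extensions keeps the provisioning server and the Asterisk box loosely coupled, which addresses the two-box concern in the question.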
Sorry if this is a dumb question that's already been asked, but I don't even know what terms to best search for.
I have a situation where a cloud app would deliver a SPA (single page app) to a client web browser. Multiple clients would connect at once and would all work within the same network. An example would be an app a business uses to work together - all within the same physical space (all on the same network).
A concern is that the internet connection could be spotty. I know I can store the client changes locally and then push them all to the server once the connection is restored. The problem, however, is that some of the clients (display systems) need to show up-to-date data from other clients (mobile input systems). Even an internet outage of a minute or two would be unacceptable.
My current line of thinking is that the local network would need some kind of "ThinServer" that all the clients would connect to. This ThinServer would then work as a proxy for the main cloud server. If the internet breaks then the ThinServer would take over the job of syncing data. Since all the clients would be full SPAs the only thing moving around would be the data - so the ThinServer would really just need to sync DB info (it probably wouldn't need to host the full SPA - though, that wouldn't be a bad thing).
However, a full dedicated server is obviously a big hurdle for most companies to set up.
So the question is, is there any kind of tech that would allow a web page to act as a web server? Could a business be instructed to go to thinserver.coolapp.com in a browser on any one of their machines? This "webpage" would then say, "All clients in this network should connect to 192.168.1.74:2000" (which would be the IP:port of the machine running this page). All the clients would then connect to this new "server" and that server would act as a data coordinator if the internet ever went down.
In other words, I really don't like the idea of a complicated server setup. A simple URL to start the service would be all that is needed.
I suppose the only option might be a binary program that would need to be installed? It's not an ideal solution, but perhaps the only one? If so, are there any programs out there that are single-click web servers? I've tried MAMP, LAMP, etc., but all of them are designed for the developer. Any others that are more streamlined?
Thanks for any ideas!
There are a couple of fundamental ways you can approach this. The first is to host a server in a browser as you suggest. Some example projects:
http://www.peer-server.com
https://addons.mozilla.org/en-US/firefox/addon/browser-server/
Another is to use WebRTC peer-to-peer communication to allow the browsers to share information between each other (you could have them all share data, or have one act as a 'master', etc., depending on the architecture you want). It's likely not going to be that different under the skin, but your application design may be better suited to a more 'peer to peer' model or a more 'client server' one depending on what you need. An example 'peer to peer' project:
https://developer.mozilla.org/en-US/docs/Web/Guide/API/WebRTC/Peer-to-peer_communications_with_WebRTC
I have not used any of the above personally, but from using similar browser extension mechanisms in the past I would say you need to check the browser requirements before you decide whether they can do what you want. The first one above is Chrome-based (I believe) and the second one is Firefox. The peer-to-peer one contains a list of compatible browser functions, but it is effectively Firefox- and Chrome-based also (see the table in the link). If you are in an environment where you can dictate the browser type, plugins, etc., then this may be OK for you.
The concept is definitely very interesting (peer-to-peer web servers) and it is great if you have the time to explore it. However, if you have an immediate business requirement, a simple on-site server based approach may actually be more reliable, support a wider variety of browsers, and be easier to maintain (as the skills required are quite commonly available).
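For scale, such an on-site coordinator does not have to be a heavyweight install. A hypothetical minimal sketch using the JDK's built-in com.sun.net.httpserver (a toy endpoint, not a production sync server; the port and path are made up):

    import com.sun.net.httpserver.HttpServer;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    public class ThinServer {
        public static void main(String[] args) throws Exception {
            // Hypothetical single-binary "ThinServer": LAN clients poll this
            // endpoint for shared state while the internet connection is down.
            HttpServer server = HttpServer.create(new InetSocketAddress(2000), 0);
            server.createContext("/state", exchange -> {
                byte[] body = "{\"status\":\"ok\"}".getBytes(StandardCharsets.UTF_8);
                exchange.getResponseHeaders().add("Content-Type", "application/json");
                exchange.sendResponseHeaders(200, body.length);
                exchange.getResponseBody().write(body);
                exchange.close();
            });
            server.start();
            System.out.println("Serving on http://0.0.0.0:2000/state");
        }
    }

Packaged as a runnable jar, that gets close to the "simple URL to start the service" experience the question asks for, at the cost of one install.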
BTW, I should have said - 'WebRTC' is probably a good search term for you, in answer to the first line of your question.
httprelay.io vs. WebRTC
Pros:
Simple to use
Fast
Supported by all browsers and HTTP clients
Works even over unstable networks
Open source and cross-platform
Cons:
Need to run a server instance
No data streaming is supported (yet)
I'm coding an extension for a customer. One of the requirements is that the extension also works offline, because internet services are not that reliable; my customer's business can't stop, but it can deal with "stale" data, which is a nice tradeoff, I guess.
Therefore, I want to code some kind of distributed cache as an extension to synchronize local data among the N connected nodes running the same application, and then synchronize with the real database hosted on the internet.
In order to achieve that, I imagined that I would need to send a network broadcast and listen for incoming broadcasts; then every node that starts to run my application would broadcast its IP address and become available as a new node for the distributed cache. Failover is very important here.
I googled the possibilities I initially thought of, but none of them will work, I guess. The first was to do it just with HTTP; the second was to use Google Native Client to write C++ code that could run network code and thus do the broadcast, but it has limitations. Right now I'm considering Java applets, but I don't really know whether they have limitations related to networking, or whether Chrome extensions have any limitations with Java applets.
Any ideas on how to do it? Using some of the stuff I suggested or another approach?
You could create an NPAPI extension, which would not be restricted by Chrome at all.
Some phones only prompt the user for permission the first time a connection is made. Others pop up the permission prompt whenever the MIDlet attempts to make an HTTP connection! What are the options if we want to suppress the prompt?
Can we sign the JAR using only one CA (Certificate Authority) and have it work on all devices? Do we have to pay for a signature on every release?
Is it an option to create our own CA certificate and tell our customers to install it on their device?
Alternatively, it seems that plain socket connections do not suffer so. Is there a free implementation of HTTP on top of TCP for J2ME?
Some phones let you manually change the setting to prompt once per session. Or try adding

    MIDlet-Permissions: javax.microedition.io.Connector.http

to the JAD file.
Yes, if the build is signed with a root certificate that is available on most devices (a VeriSign Class 3 certificate, for example).
As a security measure, devices don't allow you to install your own certificates, even if they are obtained from a CA.
Plain socket connections may add client-side overhead for processing the data, and there are some security issues involved as well.
Signing the JAR is not guaranteed to suppress these prompts on all handsets and all networks. It may work on some. AFAIK you usually need to sign per build; so if you use the same build on many handsets, you need to sign only once.
You could write your own implementation of HTTP over sockets, but beware that Socket implementations do not allow access to ports 80 and 8080 (again AFAIK).
Your best option when experiencing multiple prompts for HTTP is to direct the user to the MIDlet permissions setting in their handset menu; this should be changed to "ask once".
Java Verified's UTI root certificate is not on all handset/network combinations; the same is true for the other trusted third-party domains such as VeriSign and Thawte (for those bodies, Motorola devices in particular).
It is fair to say that the UTI certificate is probably the one to choose to give you the most coverage across handsets.
To suppress the HTTP connection prompt, signing the app is essentially the only option. The alternative would be to get it preloaded on a phone before it reaches the market, but even the handset manufacturers require signed jad/jars.
Making a set of jad/jar files work on different devices depends not on signing but on how you design the app. If you can address that, then yes, you can have one signed jad/jar work on multiple devices.
As for creating your own certs and asking customers to install them: I don't think that works, as I don't believe it is possible.
HTTP over TCP is a fairly easy implementation, provided you know what you are doing, but I don't know of any free implementations of it.
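To illustrate how small it can be, here is a hedged sketch of a bare HTTP/1.0 GET over a raw J2ME socket (the host, port, and path are placeholders; note the port restrictions on unsigned MIDlets mentioned in another answer, hence the non-standard port in usage):

    import java.io.InputStream;
    import java.io.OutputStream;
    import javax.microedition.io.Connector;
    import javax.microedition.io.SocketConnection;

    public class RawHttpGet {
        // HTTP/1.0 is used so the server closes the connection when done,
        // letting us read the whole response until EOF.
        public static String get(String host, int port, String path) throws Exception {
            SocketConnection sc =
                (SocketConnection) Connector.open("socket://" + host + ":" + port);
            OutputStream out = sc.openOutputStream();
            out.write(("GET " + path + " HTTP/1.0\r\nHost: " + host + "\r\n\r\n").getBytes());
            out.flush();
            InputStream in = sc.openInputStream();
            StringBuffer response = new StringBuffer(); // CLDC has no StringBuilder
            int ch;
            while ((ch = in.read()) != -1) {
                response.append((char) ch);
            }
            in.close();
            out.close();
            sc.close();
            // Returns the raw status line, headers and body; parsing is up to you.
            return response.toString();
        }
    }

Whether this actually avoids the prompt depends on the handset's security policy: socket connections have their own permission (javax.microedition.io.Connector.socket), which some phones gate differently from HTTP, as the question itself observed.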
Get it Java Verified and you will find that, on all networks and phones, the user will be prompted only once each time they start the app to authorise a connection.