Management tab in Kaa Sandbox URL - kaa

I created a Kaa Sandbox instance on an AWS Linux host and am running into a few issues.
I am still not able to see the Management button on the Kaa Sandbox console.
I am not able to connect to the AWS instance using SSH. I followed all the required steps to connect to the AWS Linux host, but had no luck.
What I want to do is change the host IP in the Sandbox settings to my AWS Linux host IP, so that my endpoint devices can connect to the host.
I am still struggling with the points above. Please advise.
Regards,
Prasad

That seems to be an issue with the Kaa 0.10.0 Sandbox for AWS. We created a bug for tracking this.
For now, you can use the following workaround:
echo "sudo sed -Ei 's/(gui_change_host_enabled=).*$/\1true/'" \
"/usr/lib/kaa-sandbox/conf/sandbox-server.properties;" \
"sudo service kaa-sandbox restart" | \
ssh -i <your-private-aws-instance-key.pem> ubuntu@<your-aws-instance-host>
Note: this is a multi-line single command that works correctly in bash (it should also work in sh and other shells, but that has not been tested).
Note 2: don't forget to replace
<your-private-aws-instance-key.pem>
<your-aws-instance-host>
with the respective key name and host name/IP address.
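If the piped one-liner is inconvenient, an equivalent interactive approach (same key, host, file path, and service name assumptions as above) is to log in first and run the two commands yourself:
ssh -i <your-private-aws-instance-key.pem> ubuntu@<your-aws-instance-host>
sudo sed -Ei 's/(gui_change_host_enabled=).*$/\1true/' /usr/lib/kaa-sandbox/conf/sandbox-server.properties
sudo service kaa-sandbox restart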

Related

Unable to access the application on localhost through Sauce Labs tunnel

I am using Sauce Labs to test my application in a Mac/Chrome configuration, since I am on a Windows machine.
As per the Sauce Labs documentation, I downloaded Sauce Connect Proxy, extracted the file, went to the bin folder on the command line, and executed the command below:
bin/sc -u <sauce_username> -k <sauce_accesskey> -x <sauce_data_center> -i <tunnel_id>
I got the message "Sauce Connect is up, you may start your tests." on the command line, and one tunnel showed as active under the Tunnels tab of my Sauce Labs account.
I started the session by going to Live --> Cross Browser, selected the tunnel, the localhost application URL, the browser (Chrome 90) and Mac Sierra, and clicked Start Session.
The application opened, but it didn't show the features that are on localhost.
Is there anything wrong with what I am doing in the proxy connection? The same application works fine if I open the URL directly in Chrome on my Windows machine. Any help is appreciated.
I found the answer in the Sauce Labs documentation itself. The problem I was getting is related to SSL, and here is the solution.
If you don't want any domains to be SSL re-encrypted, you can specify all with the argument (i.e., -B all or --no-ssl-bump-domains all).
Now, running the command below to start the tunnel resolves the issue:
bin/sc -u <sauce_username> -k <sauce_accesskey> -x <sauce_data_center> -i <tunnel_id> -B all

VPN killswitch using UFW, but now openvpn3 can no longer start automatically

I successfully implemented this, which blocks all internet connections on my Linux machine UNLESS it connects via a specific VPN:
https://www.comparitech.com/blog/vpn-privacy/how-to-make-a-vpn-kill-switch-in-linux-with-ufw/
If I manually execute openvpn3 session-start --config ~/Desktop/config.ovpn, it successfully connects via the VPN.
I used to have this command in a script (with #!/bin/bash as its header) which ran at device bootup without any issues, UNTIL I configured ufw for the killswitch above (now ufw runs at device bootup).
I use openvpn3, so the instructions in the above tutorial for openvpn commands didn't work at all.
I even tried using a sleep in my bash script to make it wait a while after bootup. That doesn't work. But if I issue the connection command manually at the command prompt, it works.
Please help! I need it to connect automatically. Much appreciated!
After spending a whole day on this, I figured out a solution. I found an article that guided me: https://www.howtogeek.com/687970/how-to-run-a-linux-program-at-startup-with-systemd/
I set up a service item using systemd (systemctl) just for that command to connect. Here is what my entry looks like:
#/etc/systemd/system/connectvpn.service
[Unit]
Description=Connect VPN
After=ufw.service network.target
Requires=ufw.service
[Service]
Type=oneshot
ExecStart=/usr/local/bin/connect
[Install]
# The [Install] section is needed so "systemctl enable" can wire the unit in at boot
WantedBy=multi-user.target
#/usr/local/bin/connect
#!/bin/bash
openvpn3 session-start --config /home/xyz/Desktop/config.ovpn
Working nicely now, connects to the VPN on bootup.
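For completeness, wiring this up (assuming the unit and script paths above) would look something like:
sudo chmod +x /usr/local/bin/connect
sudo systemctl daemon-reload
sudo systemctl enable connectvpn.service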

How to generate fixed url with ngrok

I am working on Facebook Messenger.
The Facebook app only accepts one URL for the webhook, but ngrok generates a new URL every time. Now I am unable to test my app because the webhook URL changed.
UPDATE May 2020
Serveo is up and running again! No installation, no signup!
All you need to do is to run this:
ssh -R <unique subdomain>:80:<your local host>:<your local port> serveo.net
like
ssh -R youruniquesubdomain:80:localhost:8000 serveo.net
UPDATE January 2020
Since there are some issues with Serveo and localtunnel, I want to share with you another free ssh-based self-hosting service: Localhost.run
Unfortunately, it does not provide unique subdomains, but it is SSH-based, so you do not have to install additional applications. Still waiting for Serveo to come back.
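A typical localhost.run invocation looks roughly like this (the port is an example; check their site for the exact current syntax):
ssh -R 80:localhost:8000 localhost.run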
UPDATE April 2018
I've just found Serveo! And it is totally incredible!
UPDATE November 2017
Probably, it is not the best option for you but I started using localtunnel instead of ngrok.
The installation and run flow is very simple:
npm install -g localtunnel
lt --port <your localhost port> --subdomain youruniquesubdomain
Then I can go to my http://youruniquesubdomain.localtunnel.me
That's it!
There is no more free subdomain support from ngrok; I get an error like the one below:
Tunnel session failed: Only paid plans may bind custom subdomains.
Failed to bind the custom subdomain 'arvindpattartestfb.ngrok.io' for the account 'arvccccc'.
This account is on the 'Free' plan.
Upgrade to a paid plan at: https://dashboard.ngrok.com/billing/plan
ERR_NGROK_313
You need to set up an auth token. You can find it here: https://dashboard.ngrok.com/auth. (Custom subdomains used to work with the free version, but this is now a paid feature; see ngrok pricing.)
Then you can use it like this:
ngrok http 80 --subdomain yoursubdomain
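For reference, the token is registered once with the ngrok CLI; depending on your ngrok version the command is one of the following (the token value is a placeholder):
ngrok authtoken <your_auth_token>               (ngrok v2)
ngrok config add-authtoken <your_auth_token>    (ngrok v3)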
Neither localtunnel.me nor Serveo is working for me right now, so I created a temporary solution that works for some use cases, including mine (react-native local development): using the ngrok npm package, you can save the generated ngrok URL into a JSON file, and that file can be read by any other app.
First make sure to install ngrok using npm install ngrok, then use this node script:
const ngrok = require('ngrok');
const fs = require('fs').promises;

(async function() {
  // Open a tunnel to local port 3000
  const url = await ngrok.connect(3000);
  // Query the local ngrok API for the list of active tunnels
  const api = ngrok.getApi();
  let data = await api.get('api/tunnels');
  data = JSON.parse(data);
  // Persist the public URL so other apps can read it
  let dict = {'domain': data.tunnels[0].public_url};
  await fs.writeFile("config.json", JSON.stringify(dict));
  console.log("saved " + data.tunnels[0].public_url);
})();
Then, from your app, you can read it using code similar to:
const backend = require('./config.json').domain;
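Assuming you save the script above as, say, save-ngrok-url.js (the file name is just for illustration), you would run it once before starting your app:
node save-ngrok-url.js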
For a DHIS2 local installation, I did this in the terminal on an Ubuntu server.
Make sure your web app is running on the specified port. Mine was on 8080.
ssh -R dani.serveo.net:80:localhost:8080 serveo.net
The beauty of serveo.net is that you can reuse the same hostname prefix before serveo.net as many times as you want, even after a power cut or an internet disconnection.
Staqlab Tunnel provides a domain for free. It works great, but a binary needs to be downloaded from their website. I have been using this service for months without any hassle.
In 2022 (almost 2023) pagekite.me works for me.
It is very similar to ngrok, and requires the installation of pagekite.py (and, obviously, Python).
After installation, clicking on pagekite.py opens the PageKite shell.
Run the command: 8080 subdomain.pagekite.me
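Alternatively (this is generic pagekite.py usage, not specific to the setup above), the same thing can be run directly from a terminal:
python pagekite.py 8080 subdomain.pagekite.me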
I noticed that no one has mentioned how to get static ngrok URLs, which is what the question was mainly about.
A way to do it is to edit the ngrok.yml file, which is located at
Linux: "~/.config/ngrok/ngrok.yml"
MacOS (Darwin): "~/Library/Application Support/ngrok/ngrok.yml"
Windows: "%HOMEPATH%\AppData\Local\ngrok\ngrok.yml"
You can have content such as:
version: "2"
authtoken: valid_auth_token
tunnels:
  first-app:
    addr: 3000
    proto: http
    hostname: yourfixedngrok_id1.ngrok.io
  second-app:
    addr: 8000
    proto: http
    hostname: yourfixedngrok_id2.ngrok.io
This will help you expose multiple ports and have a persistent URL for each of them, based on the value you set for hostname.
After that, you run your ngrok using this command:
ngrok start --all
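If you only need one of them, ngrok can also start tunnels by name (the name here matches the example config above):
ngrok start first-app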
Though it's not a solution, take it as a workaround: I had the same problem while testing. What I did is keep ngrok running with my HTTP port so that my ngrok URL does not change, while I frequently change and restart my server for testing and debugging.

Keepalived health check can't connect to 127.0.0.1

I've currently got a cluster of servers running CentOS 7 and Docker, and I want to use Keepalived to allocate a floating IP between them. I've configured Keepalived to run a check command on each node which just does curl --silent --fail localhost:80 to ensure an HTTP server is listening.
The web app is running using a Docker container bound to port 80 and --net=host on Docker 1.10.3. Firewalld is also completely disabled.
The problem I'm having is that the curl never succeeds. If I change the check command to echo '' or anything else which exits 0 (without any network interaction) it works fine, but for some reason the curl doesn't work. When I run it from a normal bash terminal it is fine, and echo $? prints a 0.
I'm not even sure how to debug this as Keepalived doesn't provide any documentation on the matter and doesn't seem to log anything in relation to errors coming from the vrrp script.
Any help or suggestions would be greatly appreciated.
Turns out I was using an ancient version of Keepalived. Compiling the latest version from source (rather than using the binary from the CentOS repos) fixed the issue.
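For anyone comparing notes, the kind of check configuration being discussed looks roughly like this (a minimal sketch; the interface, router ID, priority, and virtual IP are illustrative, not taken from the original setup):
vrrp_script chk_http {
    # Exit code 0 means the check passes
    script "curl --silent --fail http://127.0.0.1:80/"
    interval 2
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    interface eth0
    state MASTER
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        192.168.1.100
    }
    track_script {
        chk_http
    }
}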

opensuse network management undefined

I did an update on my openSUSE box and networking stopped working. The system is trying to use NetworkManager, even though it isn't installed. I am using YaST to try to get it to use ifup, but it complains about there being no network connection. I tried running:
ifup eth0
and I get back:
Network is managed by '' -> skipping
Does anyone out there know why it is coming back empty and if there is a config file that I can manually tweak to fix this?
I'm assuming you are running 12.3 or 13.1 with systemd.
Disable NetworkManager if it exists:
systemctl disable NetworkManager.service
Enable network.service:
systemctl enable network.service
Make sure ifcfg-eth0 exists with a configuration in /etc/sysconfig/network/
Run ifup eth0
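A minimal DHCP-based example of that file would look something like this (values are illustrative; adjust for your network):
# /etc/sysconfig/network/ifcfg-eth0
BOOTPROTO='dhcp'
STARTMODE='auto'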
Hope this will help someone.
1. Disable NetworkManager, stop it, and then enable and restart it, respectively.
2. All of this happens in the console. Check the status of NetworkManager; the status messages should show that the (wireless) interface is disconnected. Confirm this by typing the command "sudo nmcli c".
3. Type the command "sudo iwlist (wireless-interface) scan" to show the available wireless networks.
4. If you see the network that you want to connect to listed, type the command "nmcli a" and enter the corresponding connect phrase/password to connect (a non-interactive alternative is sketched below).
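For reference, a non-interactive way to connect with nmcli looks like this (SSID and passphrase are placeholders):
sudo nmcli device wifi connect "<SSID>" password "<passphrase>"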
