Service Fabric - Local Cluster On VM Not Accessible Outside Network - networking

I have a VirtualBox VM hosted on my desktop, using bridged networking mode.
On that VM I have installed a one-node Service Fabric cluster (secured with a self-signed X.509 cert).
I have set up my router to forward ports 19000-19100 to that guest machine's IP address.
I am on AT&T Fiber, so those ports are first forwarded to my router, and the router then forwards them on to the guest OS at a specific IP address.
From my host machine I am able to get to Service Fabric Explorer, and I can deploy services to the cluster from Visual Studio.
I am not able to deploy to it from Azure DevOps, and my friend is not able to see the Explorer either.
In DevOps I have configured a service connection, put the certificate in it, etc. In my pipeline I write an entry to the hosts file mapping my public IP to the host name I need (sit.mysite.com as an example). One thing to note is that I was previously able to deploy to Service Fabric when the cluster was running directly on my main machine (as opposed to in a VM, as it currently is).
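For reference, the hosts-file step in my pipeline boils down to something like the following (a minimal sketch; the IP address is a placeholder for my real public IP):
# PowerShell step on the build agent: map the cluster host name to my public IP
Add-Content -Path "$env:windir\System32\drivers\etc\hosts" -Value "203.0.113.10  sit.mysite.com"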
A friend (living in another state) is not able to view my Service Fabric Explorer. I provided the cert to him and he has imported it, and he has an entry in his hosts file as well. When he goes to https://sit.mysite.com:19080 (the SF Explorer address), he gets a 403, not authorized, but the browser is correctly picking up the cert. He can also ping my IP address, so we do have connectivity.
Whatever is stopping him from hitting my Service Fabric is likely the same thing preventing me from deploying from Azure DevOps, but I have no idea what it would be...
Any ideas?

Figured it out. It turns out my cluster config file was referencing localhost for the node instead of the IP address (or a DNS name), and that made the fabric not respond to requests from outside the machine.
"nodes": [
{
"nodeName": "vm0",
"iPAddress": "IP_ADDRESS_HERE",
"nodeTypeRef": "NodeType0",
"faultDomain": "fd:/dc1/r0",
"upgradeDomain": "UD0"
}
],
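To sanity-check the change from outside the box after redeploying the cluster config, something along these lines works (PowerShell from another machine; the host name is the example one from above):
# Confirm the Explorer port answers from a remote machine
Test-NetConnection -ComputerName sit.mysite.com -Port 19080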

Related

Minikube external IP - access to the app GUI

I am working with Minikube on a VM (VirtualBox, Linux Ubuntu). I need to access the ONAP Portal App service's GUI through an internet browser, which I can't do.
I have deployed the ONAP Portal App on Minikube and now I want to access its GUI through the browser. There was no external IP when I looked at "kubectl get services", so, following what I found, I used the Minikube tunnel. Now the portal-app service has an external IP (which, by the way, is the same as the cluster IP), but I can't get to its GUI: the website is unreachable / unable to connect. I then tried "minikube service portal-app --url", which showed me a different IP with the correct port, but in the browser I got the Apache Tomcat page and not the portal-app. What am I missing?
Thanks for any advice.
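To recap, the commands I have been using are roughly the following (a sketch of my steps; the service name is portal-app as above):
kubectl get services portal-app     # shows the service type, cluster IP, external IP and ports
minikube tunnel                     # creates a route so LoadBalancer services get an external IP assigned
minikube service portal-app --url   # prints a URL for the service that is reachable from the host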

How to connect to Community Edition Databricks Cluster via Outside Public Address / Application

Can someone let me know if it's possible to connect to or PING a Databricks cluster via its public IP address?
For example, I have issued the command ping --all-ip-addresses and I get the IP address 10.172.226.115.
I would like to be able to PING that IP address (10.172.226.115) from my on-premise PC, or connect to the cluster with an application using that IP address.
Can someone let me know if that is possible?
That public IP is not guaranteed to be your cluster. Unless you've somehow installed Databricks into your own cloud provider account, where you fully control the network routes, you would be connecting to Databricks-managed infrastructure, where the public IP is likely an API gateway or router that serves traffic for more than one account.
Note: just because you can ping Google DNS with outbound traffic doesn't mean inbound traffic from the internet is even allowed through the firewall.
connect to the cluster with an application
I'd suggest using other Databricks support channels (e.g. their community forum) to see if that's even possible, but I thought you're just supposed to upload and run code within their ecosystem, at least on the community plans.
Specifically, they have a REST API to submit a remote job from your local system, but if you want to send data back to your local machine, I think you'd have to write to and then download from DBFS or another cloud filesystem.
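As a rough sketch of that job-submission route (whether the Community Edition allows it is exactly what I'd ask on their forum; the workspace host, token, notebook path and cluster id below are all placeholders):
# Submit a one-time run via the Databricks Jobs API from your local machine
curl -X POST "https://<workspace-host>/api/2.1/jobs/runs/submit" \
  -H "Authorization: Bearer $DATABRICKS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"run_name": "example-run", "tasks": [{"task_key": "main", "existing_cluster_id": "<cluster-id>", "notebook_task": {"notebook_path": "/Users/me@example.com/example-notebook"}}]}'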

WSO2 Identity Server hangs after "Using java memory" line

I had installed WSO2 Identity Server v5.2 on a VirtualBox machine, and it was working fine.
Then I was doing some network testing/reconfiguration on my home network, trying to separate my development (virtual) machines from my main in-home LAN by having the machine hosting VirtualBox connect wirelessly to a small router (a TP-Link TL-WR702N) in a bridge configuration, where the TP-Link connects to my main WiFi network and also exposes itself as a different WiFi network.
I was doing this testing because I am going to be working from a different location for a while and wanted to isolate my dev machines while I am there. I will only have WiFi, with no hardwired connection, so I wanted to see if I could bridge wirelessly.
The machine hosting VBox started up OK, and the WSO2 VM also came up OK, but when I tried to start the WSO2 IS (./wso2server.sh), it would output the first three lines and then hang on the third line, which was "Using Java memory...".
If I move the hosting machine back to my normal LAN (i.e., not on the "bridged" network), everything works fine.
I noticed that when the hosting machine was on the bridged network, I couldn't ping the network gateway (192.168.0.1) from the VBox guest machines.
Would that cause the WSO2 to hang during startup? What else might be causing this problem?
Thanks,
Jim
I think the problem was that WSO2 IS seems to need to be able to resolve its hostname during startup, combined with (apparently) needing to bounce the machine to get the networking working. After the bounce, the networking seemed to get straightened out, and then the WSO2 IS was able to start OK.
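If hostname resolution really is the sticking point, one hedged workaround while on the isolated network is to pin the server's configured hostname locally on the guest (the hostname below is only an example; it should match whatever <HostName> is set to in repository/conf/carbon.xml):
# Add a loopback entry so the configured hostname resolves even without the usual gateway/DNS
echo "127.0.0.1   wso2is.local" | sudo tee -a /etc/hosts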

Meteor app on local network

I'm learning how to use Meteor by following the tutorial. I'm aware that Meteor automatically hosts the app on both localhost and my IPv4 address (in this instance, 192.168.1.100). When I visit 192.168.1.100:3000 on the computer it's hosted from, the app works fine; however, it won't load on any other device that accesses 192.168.1.100:3000 from the local network.
I've read the following answers:
Accessing meteor server on LAN
Accessing Meteor local web server from another local device on Mac 10.8
Meteor - accessing the app using public ip
How to run meteor server on a different ip address?
Start Meteor server and let other computers access it
And none of them worked for me. It may be because I'm running Windows. If that's the case, can anyone help on how to host the app on the local network?
There are a number of reasons why you may not be able to.
Try opening the port:
netsh advfirewall firewall add rule name="Meteor 3000" dir=in action=allow protocol=TCP localport=3000
If connecting via WiFi, routers often disallow connections between devices on the network; check your router settings.
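Two quick checks along those lines, as a hedged sketch (the host:port form of --port should work on recent Meteor releases, but verify against your version):
netsh advfirewall firewall show rule name="Meteor 3000"   # confirm the inbound rule above actually exists
meteor run --port 0.0.0.0:3000                            # bind the dev server to all interfaces, not just localhost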

Cannot connect to Azure web site hosted on a VM

I deployed a web site into an Azure VM and did the following:
1) Created an HTTP endpoint with TCP protocol and port 80 (both internal and external) for the VM
2) Configured the web site to be assigned the internal IP
I can browse to the site from within the VM, but cannot connect to it from outside using either the DNS name or the public VIP assigned by Azure; the browser says "can not connect to [vip]".
Have I missed any steps, or does anyone have advice on how to troubleshoot this issue?
If this is a "normal" VM and not a Cloud Service, then you also need to connect to the VM and open port 80 in the Windows Firewall directly on the machine.
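A minimal sketch of doing that from an elevated prompt inside the VM (the rule name is arbitrary):
netsh advfirewall firewall add rule name="HTTP 80" dir=in action=allow protocol=TCP localport=80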
In the end, I found it was caused by the selection of "direct connect" in the endpoint settings.
Untick it, and it works...
