Can GitHub Enterprise have two hostnames (one for an external-facing IP and one for an "internal"-facing IP)? We have GitHub Enterprise set up in AWS (on an external IP) and we want to save cost, as traffic via the external IP is expensive compared to Direct Connect.
So can we add another, internal-facing hostname to the GitHub Enterprise setup so that people inside the company can do Git operations (pull, commit, clone, CI/CD, etc.) and some of the traffic is off-loaded to the cheaper AWS Direct Connect?
If I peer two Bastion VMs via VNet peering and run a web application on one VM, will I be able to access its REST URL from the other VM? Is there a charge for this type of access?
Sorry, I couldn't find it in me to understand all the jargon about ingress, egress and gateways. I just want a simple answer to my question.
I have a VirtualBox VM hosted on my desktop, using bridged mode.
On that VM I have installed a one-node Service Fabric cluster (secured with a self-signed X.509 cert).
I have set up my router to forward ports 19000-19100 to that guest machine's IP address.
I am on AT&T Fiber, so those ports are forwarded first to my router, which then forwards them on to the guest OS at a specific IP address.
From my host machine I am able to get to the Service Fabric Explorer, and I can deploy services to it from Visual Studio.
I am not able to deploy to it from Azure DevOps, and my friend is not able to see the Explorer either.
In DevOps I have configured a service connection, put the certificate in it, etc. In my pipeline I write to the hosts file (my public IP and the hostname I need, sit.mysite.com as an example). One thing to note is that I was previously able to deploy to Service Fabric when the cluster was running on my main machine (as opposed to in a VM, as it is currently).
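For reference, the hosts-file step is nothing special: it just appends an entry mapping the hostname to my public IP before the deploy task runs. A minimal sketch (the IP below is a placeholder for my real one, and the path assumes a Windows build agent):

# Minimal sketch of the "write to the hosts file" pipeline step, run on the build agent.
# 203.0.113.10 stands in for my public IP; the path assumes a Windows agent
# (on a Linux agent it would be /etc/hosts instead).
import pathlib

hosts_path = pathlib.Path(r"C:\Windows\System32\drivers\etc\hosts")
entry = "203.0.113.10 sit.mysite.com"

content = hosts_path.read_text()
if entry not in content:
    with hosts_path.open("a") as hosts:
        hosts.write("\n" + entry + "\n")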
A friend (living in another state) is not able to view my Service Fabric Explorer. I provided the cert to him and he's imported it. He has an entry in his hosts file as well. When he goes to https://sit.mysite.com:19080 (the SF Explorer address), he gets a 403 Not Authorized, but it is correctly picking up the cert. He can also ping my IP address, so we have connectivity.
Whatever is stopping him from hitting my SF cluster is likely also what is preventing me from deploying from Azure DevOps, but I have no idea what it would be...
Any ideas?
Figured it out. It turns out my cluster config file was referencing localhost for the node instead of the IP (or a DNS name), and that made the fabric not respond to requests from outside.
"nodes": [
{
"nodeName": "vm0",
"iPAddress": "IP_ADDRESS_HERE",
"nodeTypeRef": "NodeType0",
"faultDomain": "fd:/dc1/r0",
"upgradeDomain": "UD0"
}
],
I followed this tutorial to set up two EC2 instances: "Creation of two EC2 instances and how to establish ping communication" (YouTube).
The only difference is that I used a Linux image.
I set up a simple Python HTTP server on one machine (on port 8000), but I cannot access it from the other machine; whenever I curl, the request just hangs. (It might eventually time out, but I wasn't patient enough to witness that.)
However, the workaround, I figured, was to add a port rule via the security group. I don't like this option, since it means that the port (on the machine that hosts the web server) can be accessed via the internet.
I was looking for an experience similar to what people usually have at home with their routers: machines connected to the same home router can reach other machines on any port (provided the destination machine has some service hosted on that port).
What is the solution to achieve something like this when working with EC2?
The instance is open to the internet because you are allowing access from '0.0.0.0/0' (anywhere) in the inbound rule of the security group.
If you want communication to be allowed only between the instances and not from the public internet, you can achieve that by assigning the same security group to both instances and modifying the inbound rule in the security group to allow all traffic (or just ICMP traffic) sourced from the security group itself.
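For illustration only, here is one way to add such a self-referencing rule with boto3 (the security group ID is a placeholder):

# Allow all traffic between instances that share this security group,
# without opening anything to the public internet.
# "sg-0123456789abcdef0" is a placeholder; replace it with your group's ID.
import boto3

ec2 = boto3.client("ec2")
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[
        {
            "IpProtocol": "-1",  # all protocols/ports; narrow this if you prefer
            "UserIdGroupPairs": [
                {"GroupId": "sg-0123456789abcdef0"}  # source = the same security group
            ],
        }
    ],
)

With that in place, the HTTP server on port 8000 is reachable from the other instance over its private IP, while nothing extra is exposed to the internet.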
You can read more about it here:
AWS Reference
Can someone let me know if it's possible to connect to or ping a Databricks cluster via its public IP address?
For example, I have issued the command hostname --all-ip-addresses and I get the IP address 10.172.226.115.
I would like to be able to ping that IP address (10.172.226.115) from my on-premises PC, or connect to the cluster with an application using that IP address.
Can someone let me know if that is possible?
That public IP is not guaranteed to be your cluster. Unless you've somehow installed Databricks into your own cloud provider account, where you fully control the network routes, you would be connecting to Databricks-managed infrastructure, where the public IP would likely be an API gateway or router that serves traffic for more than one account.
Note: just because you can ping Google DNS with outbound traffic doesn't mean inbound traffic from the internet is even allowed through the firewall.
As for "connect to the cluster with an application":
I'd suggest using other Databricks support channels (i.e. their community forum) to see if that's even possible, but I thought you're just supposed to upload and run code within their ecosystem, at least on the community plans.
Specifically, they have a REST API to submit a remote job from your local system, but if you want to send data back to your local machine, I think you'd have to write to and download from DBFS or another cloud filesystem.
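On the REST API point: a minimal sketch of submitting a one-off run through the Jobs API from a local machine could look like the following; the workspace URL, token, cluster ID and notebook path are all placeholders, and whether your plan allows it is a separate question.

# Submit a one-time run to a Databricks workspace from a local machine.
# All identifiers below (URL, token, cluster ID, notebook path) are placeholders.
import requests

WORKSPACE_URL = "https://<your-workspace>.cloud.databricks.com"  # placeholder
TOKEN = "<personal-access-token>"                                # placeholder

payload = {
    "run_name": "remote-submit-example",
    "tasks": [
        {
            "task_key": "example_task",
            "existing_cluster_id": "<cluster-id>",               # placeholder
            "notebook_task": {"notebook_path": "/Users/me/example"},
        }
    ],
}

resp = requests.post(
    f"{WORKSPACE_URL}/api/2.1/jobs/runs/submit",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
)
resp.raise_for_status()
print(resp.json())  # contains the run_id you can poll for status

Getting results back locally is still a separate step, e.g. writing them to DBFS or object storage and downloading from there, as mentioned above.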
I've set up Neo4j on port 7474 on a Rackspace cloud server. I want to access this server from another Rackspace cloud server (appserver), but the connection is refused.
I've tried enabling access for the appserver to port 7474 on the Neo4j server using ufw:
sudo ufw allow from 22.234.298.297 to any port 7474
I can see this rule when I run 'ufw status', but it doesn't seem to make any difference when I try to connect from the appserver. I can SSH between these two servers.
How do I open port 7474 between cloud servers on Rackspace?
(My apologies for this very basic question, but Rackspace support is not helping and I can't find Rackspace-specific information on this.)
Glad we could solve the problem (see the comments on the question).
By default, Neo4j only accepts connections from localhost. When trying to access Neo4j via the REST API from an app server within the same network, you have to configure the Neo4j server to open up.
The neo4j-server.properties configuration file has a configuration key, org.neo4j.server.webserver.address. You have a couple of options here:
Grant app servers in the same local network access to the Neo4j REST API
Grant everybody access and let the firewall handle it
For the first case, use the local IP address of the machine where Neo4j is running. Let's say your machines are connected via a private class C network. The machine with Neo4j has the IP 192.168.1.4 - that's the value you want to enter for org.neo4j.server.webserver.address, so that your app server running in the same network, with maybe an IP of 192.168.1.5, can make network requests that are answered by the Neo4j web server.
For the second case, enter 0.0.0.0 as the value for org.neo4j.server.webserver.address to denote that you want to accept connections on all available IP addresses on that machine. In that case you want your firewall to handle who can talk to the server and who can't - even with authentication enabled.
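Once the setting is changed and Neo4j restarted, a quick check from the app server might look like this small sketch (192.168.1.4 is the example IP from above; /db/data/ is the REST service root in the Neo4j 2.x releases that still use neo4j-server.properties):

# Verify from the app server (e.g. 192.168.1.5) that Neo4j answers on the LAN address.
import requests

# If Neo4j authentication is enabled, also pass auth=("neo4j", "<password>").
resp = requests.get("http://192.168.1.4:7474/db/data/")
resp.raise_for_status()
print(resp.json())  # REST service root: cypher endpoint, version, etc.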
Extra
In a production environment that requires high availability, one can use Neo4j's enterprise edition with a high availability cluster in a master-slave setting. I've used it with one master and two slaves. I configured the Neo4j servers so that they can only be accessed from the proxy server, which routes writing Cypher queries to the master and reading queries to the slaves. The proxy itself had a hardware firewall on it to ensure that only specific app servers within the network have access to the Neo4j database.