Specify an IP for a URL in a Jenkins job - unix

I have the following situation.
The webapp in my company is deployed to several environments before reaching live. Every testing environment is called qa-X and has a different IP address. What I would like to do is specify, in the Jenkins job "test app in qa-x", the app's IP for the X environment so that my tests can start knowing only the app's URL.
Jenkins itself is outside the qa-x environments.
I have been looking around for solutions, but all of them break the other qa-X tests (for instance, changing /etc/hosts or changing the DNS server). What would be great is to specify the IP in that job alone, as a configuration parameter, so that the definition stays local to the job.
Any thoughts/ideas?

If I'm understanding your query correctly, you should look into creating a Parameterized build which would expose an environment variable with the desired server IP, which your test script could consume.
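For illustration, a minimal sketch of what the test build step might consume (the parameter name APP_IP, the port, and the test command are placeholders, not part of the original setup):

# Shell build step in a parameterized Jenkins job.
# APP_IP is a hypothetical string parameter defined under "This project is parameterized";
# Jenkins exposes it to the build step as an environment variable.
export APP_URL="http://${APP_IP}:8080"
# Hand the URL to the test suite (replace with your actual test runner).
./run_tests.sh --base-url "$APP_URL"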

Related

Bypass internal application calls to proxy

I have created a Windows VM in Google Cloud and have updated the proxy settings to ensure all calls go through the proxy.
There are two cases:
I set the system proxy setting to point at the proxy server. This ensures that all calls made through any browser go through the proxy.
I have set up the http_proxy and https_proxy environment variables; with this, any curl commands I run through Command Prompt or Bash also go via the proxy.
Now I have a case where I need to bypass a few calls and not allow them through the proxy.
This is only required for some desktop apps I have on my VM, not for browser calls.
CASE 1: From some research, browser calls can be bypassed via a .pac file to which we can add the domains to bypass.
CASE 2: For non-browser calls, the only approach I could find is to add a no_proxy environment variable.
Following are my questions related to CASE 2.
Question 1: When I set up the no_proxy environment variable, Git Bash does not seem to respect it unless I set it explicitly in Git Bash before making any call. Is this the right way to do it, or am I missing something?
Question 2: Google internally makes a few calls from the VM to get metadata, and those calls are getting proxied. Even though I update the no_proxy environment variable, it is not respected and the calls still go through the proxy. Where should I set this up so that these internal VM calls bypass the proxy?
Following is my setup
VM is on GCP with windows image
Proxy server is Squid setup on a static public IP.
The applications are calling some internal APIs
The VM calls the http://metadata.google.internal API
Any help on this would be highly appreciated.
TIA
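For reference, here is roughly what my CASE 2 setup looks like in a Bash session (the Squid host and port are placeholders; the metadata endpoints are the ones mentioned above):

# Route outbound HTTP(S) through Squid (host/port are placeholders).
export http_proxy="http://<squid-host>:3128"
export https_proxy="http://<squid-host>:3128"
# Hosts that should bypass the proxy, including the GCE metadata endpoints.
export no_proxy="localhost,127.0.0.1,metadata.google.internal,169.254.169.254"
# curl honours no_proxy, so this should now reach the metadata server directly.
curl -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/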

Meteor - Custom Deployment

I have deployed my Meteor Application on my local machine using:
https://guide.meteor.com/deployment.html#custom-deployment
Now during the process I used:
$ export ROOT_URL='http://192.168.100.2:9000'
Now my app is not accessible on http://192.168.100.2:9000; instead it is accessible on http://192.168.100.2:46223, and every time I run node main.js it picks some random port for my application.
How can I specify a port of my own choice here?
You should also supply the PORT environment variable to tell the app which port to listen on; it is not inferred from ROOT_URL. It is also not necessarily the same port, since apps may sit behind a reverse proxy.
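For example, assuming the same host and port as in the question:

# ROOT_URL is the address the app advertises; PORT is the port it actually binds to.
export ROOT_URL='http://192.168.100.2:9000'
export PORT=9000
node main.js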
See the official documentation for more environment variables.

Can I create a PACT to run on a different hostname?

Can I create a Pact to run on a different hostname? I have been using the Pact rule and keeping the hostname as localhost, but now I'm trying to create a pact for an application that cannot run on localhost.
@Rule
public PactProviderRule provider = new PactProviderRule("ServiceNowClientRestClientProvider", "localhost", 8080, this);
Is it possible to change localhost to something else, and if so, are there additional configurations that I need? I've tried changing tests that work on localhost to the actual hostname that the code is using, but then they fail with various error messages: "Unresolved address", "Cannot assign requested address: bind", or "Address in use".
Ronald Holshausen responded with a good answer to my question. The full conversation is in a post on the Pact Google forum:
The hostname is passed through to the HTTP server library to start an HTTP server to be the mock server. This server will be running on the same machine as the test (in fact will also be the same JVM process). The HTTP server library will use the hostname to resolve to an IP address, which will in turn resolve to a network interface on the machine which the port for the server will be bound to.
It is not as simple as a yes/no answer. It is possible to do (there are standalone mock servers you can run on another machine), but the PactProviderRule always starts a mock server on the same host as where the tests are running.
To achieve what you require, you would need to use one of the mock server implementations, and a new JUnit Rule would need to be implemented (preferably extended from PactProviderRule).
There are a number of standalone pact mock servers:
https://github.com/DiUS/pact-jvm/tree/master/pact-jvm-server
https://github.com/bethesque/pact-mock_service
https://github.com/pact-foundation/pact-reference/tree/master/rust/pact_mock_server_cli
The only valid values that can be used are: the hostname of the machine where the test is running, the IP address of the machine where the test is running, localhost, 127.0.0.1 or 0.0.0.0
If a standalone mock server is started on another machine (say, from your example, hostname test.service-now.com and port 80), then the PactProviderRule will need to know that it should not try to start a new mock server but should communicate with the one it has been provided with (via the address https://test.service-now.com).
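For illustration, starting one of the standalone mock servers listed above (here the Ruby pact-mock_service) might look roughly like this; the flag names are my recollection of its CLI, so verify them against the project README:

# Install the standalone Ruby mock service and run it bound to all interfaces.
gem install pact-mock_service
pact-mock-service start --host 0.0.0.0 --port 8080 --pact-dir ./pacts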
You can in the Ruby version, using pact-provider-proxy. However, the best use case for consumer-driven contracts is when you have development control over both the consumer and the provider, and this generally means that you can stand up an instance of the provider locally. If you are trying to test a public API, or an API you don't have development control over, Pact may not be the best tool for you. You can read more here about what Pact is not good for.

Running Kubernetes on vCenter

So Kubernetes has a pretty novel network model, which I believe is based on what it perceives to be a shortcoming in default Docker networking. While I'm still struggling to understand (1) what it perceives the actual shortcoming(s) to be, and (2) what Kubernetes' general solution is, I'm now reaching a point where I'd like to just implement the solution, and perhaps that will clue me in a little better.
Whereas the rest of the Kubernetes documentation is very mature and well-written, the instructions for configuring the network are sparse, largely incoherent, and span many disparate articles, instead of being located in one particular place.
I'm hoping someone who has set up a Kubernetes cluster before (from scratch) can help walk me through the basic procedures. I'm not interested in running on GCE or AWS, and for now I'm not interested in using any kind of overlay network like flannel.
My basic understanding is:
Carve out a /16 subnet for all your pods. This will limit you to some 65K pods, which should be sufficient for most normal applications. All IPs in this subnet must be "public" and not inside of some traditionally-private (classful) range.
Create a cbr0 bridge somewhere and make sure it's persistent (but on what machine?)
Remove/disable the MASQUERADE rule installed by Docker.
Somehow configure iptables rules (again, where?) so that each pod spun up by Kubernetes receives one of those public IPs (a rough sketch follows these lists).
Some other setup is required to make use of load balanced Services and dynamic DNS.
Provision 5 VMs: 1 master, 4 minions
Install/configure Docker on all 5 VMs
Install/configure kubectl, controller-manager, apiserver and etcd to the master, and run them as services/daemons
Install/configure kubelet and kube-proxy on each minion and run them as services/daemons
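As a rough sketch (almost certainly wrong in places), I imagine the bridge, Docker, and iptables steps above might look something like this on a single minion, with 10.244.1.0/24 as that node's pod subnet and old-style Docker daemon flags:

# Create the cbr0 bridge for pods on this node (example subnet).
brctl addbr cbr0
ip addr add 10.244.1.1/24 dev cbr0
ip link set dev cbr0 up
# Point Docker at the bridge and stop it from managing iptables/masquerading itself
# (daemon flags as in Docker 1.x; newer versions configure this in /etc/docker/daemon.json).
docker daemon --bridge=cbr0 --iptables=false --ip-masq=false &
# Masquerade only traffic leaving the cluster range, not pod-to-pod traffic.
iptables -t nat -A POSTROUTING ! -d 10.0.0.0/8 -o eth0 -j MASQUERADE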
This is the best I can collect from two full days of research, and these steps are likely wrong (or misdirected), out of order, and utterly incomplete.
I have unbridled access to create VMs in an on-premise vCenter cluster. If changes need to be made to VLANs, switches, etc., I can get infrastructure involved.
How many VMs should I set up for Kubernetes (for a small-to-medium sized cluster), and why? What exact corrections do I need to make to my vague instructions above, so as to get networking totally configured?
I'm good with installing/configuring all the binaries. Just totally choking on the network side of the setup.
For a general introduction into kubernetes networking, I found http://www.slideshare.net/enakai/architecture-overview-kubernetes-with-red-hat-enterprise-linux-71 pretty helpful.
On your items (1) and (2): IMHO they are nicely described in https://github.com/kubernetes/kubernetes/blob/master/docs/admin/networking.md#docker-model .
From my experience: what is the problem with the Docker NAT type of approach? Sometimes you need to configure into the software all the endpoints of all nodes (e.g. 172.168.10.1:8080, 172.168.10.2:8080, etc.). In Kubernetes you can simply configure the pods' IPs into each other's pods; Docker complicates this with NAT indirection.
See also Setting up the network for Kubernetes for a nice answer.
Comments on your other points:
1. "All IPs in this subnet must be 'public' and not inside of some traditionally-private (classful) range."
The "internal network" of kubernetes normally uses private IP's, see also slides above, which uses 10.x.x.x as example. I guess confusion comes from some kubernetes texts that refer to "public" as "visible outside of the node", but they do not mean "Internet Public IP Address Range".
For anyone who is interested in doing the same, here is my current plan.
I found the kube-up.sh script, which installs a production-ish quality Kubernetes cluster on your AWS account. Essentially it creates one Kubernetes master EC2 instance and four minion instances.
On the master it installs etcd, apiserver, controller manager, and the scheduler. On the minions it installs kubelet and kube-proxy. It also creates an auto-scaling group for the minions (nice), and creates a whole slew of security- and networking-centric things on AWS for you. If you run the script and it fails creating the AWS S3 bucket, create a bucket of the same exact name manually and then re-run the script.
When the script is finished you will have Kubernetes up and running and ready for near-production usage (I keep saying "near" and "production-ish" because I'm too new to Kubernetes to know what actually constitutes a real deal productionalized cluster). You will need the AWS CLI installed and configured with a user that has full admin access to your AWS account (it goes ahead and creates IAM roles, etc.).
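If you want to try it, the invocation is roughly the following (run from a checkout of the Kubernetes repo; the exact path and provider handling may differ between releases):

# Select the AWS provider and bring the cluster up.
export KUBERNETES_PROVIDER=aws
./cluster/kube-up.sh
# Tear it all down again when you are done experimenting.
./cluster/kube-down.sh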
My game plan will be to:
Get comfortable working with Kubernetes on AWS
Keep hounding the Kubernetes team on Slack to help me understand how Kubernetes works under the hood
Reverse engineer the kube-up.sh script so that I can get Kubernetes running on premise (vCenter)
Blog about this process
Update this answer with a link to said blog.
Give me some time and I'll follow through.

Subdomains. How do you do development with subdomains?

I am currently building a web app which also utilizes WebSockets (Rails for the web server and Node.js for socket.io).
I have structured my application to use subdomains to separate connections to the Node.js server from those to the Rails web server. I have "socket.mysite.com" redirected to the Node server and everything else to the web server.
I am able to test this functionality on localhost. I simply modified my /etc/hosts to include the following:
127.0.0.1 socket.mysite.com
127.0.0.1 mysite.com
I know that in production I simply have to create a CNAME record for socket.mysite.com, and this will also work on my users' computers.
However, I am accustomed to testing my application by passing an IP address around. My teammates typically set up the server on their own machines and do development there. When we want to test our individual servers, we just pass around an IP like "http://123.45.123.45".
With the new subdomain hack, this is no longer possible without modifying each of my testers' /etc/hosts. I honestly can't expect my testers to modify their /etc/hosts on the spot. What I could do is have each member of my team get their own domain and create the appropriate CNAME records for each individual team member.
Is there an easier way to allow me to run my app on an IP and just pass that IP around?
It sounds like your needs have scaled beyond the days of simply editing a hosts file. While you could have everyone on your team continue to edit hosts files, there are two main risks that I see here:
With your idea of just using IP addresses, you risk missing something in testing that you wouldn't see until production, as the issue may depend on something in the domain configuration.
With hosts entries, you introduce a lot of complexity and unnecessary changes to each developer's and tester's configuration, which of course leaves the door open for mistakes, and it also takes time that adds up over the long term.
Setting up a DNS server may be helpful in your case. You could map a set of domains for each developer that match a certain pattern so that your application still runs correctly. This would allow you to share the URLs without having to constantly reconfigure each person's computer. Additionally, marketing and sales stakeholders can easily view product demos without needing to learn what the elusive hosts file is for.
If you have an IT department, they can help you setup the DNS. However, if you are a small team without a real IT department, some users have found success using DNS systems designed for home or small office networks.
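As a purely hypothetical example with dnsmasq (the names and IPs are placeholders), one wildcard entry per developer would cover both the main site and the socket subdomain:

# Resolve anything under dev-alice.mysite.test (including socket.dev-alice.mysite.test)
# to Alice's machine, and likewise for Bob.
cat >> /etc/dnsmasq.conf <<'EOF'
address=/dev-alice.mysite.test/123.45.123.45
address=/dev-bob.mysite.test/123.45.123.46
EOF
systemctl restart dnsmasq    # or: service dnsmasq restart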
