We are running a self-hosted build agent behind a corporate firewall. All outbound traffic to the internet is blocked unless the destination IPs are allowed on the firewall.
When terraform init is executed in the pipeline, the container instance tries to download the latest provider packages from the HashiCorp servers (registry.terraform.io and releases.hashicorp.com) and therefore fails:
Initializing the backend...
Successfully configured the backend "azurerm"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Finding latest version of hashicorp/time...
- Finding hashicorp/azurerm versions matching "~> 3.0"...
╷
│ Error: Failed to query available provider packages
│
│ Could not retrieve the list of available versions for provider
│ hashicorp/time: could not connect to registry.terraform.io: Failed to
│ request discovery document: Get
│ "https://registry.terraform.io/.well-known/terraform.json": net/http:
│ request canceled while waiting for connection (Client.Timeout exceeded
│ while awaiting headers)
╵
If we temporarily allow traffic to the resolved IP, we receive a similar error when Terraform tries to install the provider:
Initializing provider plugins...
- Finding hashicorp/azurerm versions matching "~> 3.0"...
- Finding latest version of hashicorp/azuread...
╷
│ Error: Failed to install provider
│
│ Error while installing hashicorp/azurerm v3.40.0: could not query provider
│ registry for registry.terraform.io/hashicorp/azurerm: failed to retrieve
│ authentication checksums for provider: the request failed after 2 attempts,
│ please try again later: Get
│ "https://releases.hashicorp.com/terraform-provider-azurerm/3.40.0/terraform-provider-azurerm_3.40.0_SHA256SUMS":
│ net/http: request canceled while waiting for connection (Client.Timeout
│ exceeded while awaiting headers)
╵
We tried to manually add the IPs, but they are managed dynamically and subject to change (probably behind a CDN or load balancer that masks the IPs).
We also tried to whitelist the FQDNs themselves (registry.terraform.io and releases.hashicorp.com), but that did not work either.
Has anyone dealt with a similar configuration and fixed this?
Or is there a list somewhere that is kept up to date with all Terraform/HashiCorp destination IPs/subnets/FQDNs?
Whitelisting the FQDNs (registry.terraform.io and releases.hashicorp.com) was the right approach; keep digging in that direction.
Use packet capture software (e.g. Wireshark) to see which domains are being queried for DNS requests during a terraform init and terraform plan.
Then add those to the firewall exceptions for outbound traffic.
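As a sketch (assuming a Linux agent where eth0 is the capture interface), run the capture in one shell while the init runs in another:
$ sudo tcpdump -i eth0 -n port 53
$ terraform init
Every domain that shows up in the DNS queries is a candidate for the firewall exception list.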
Problem solved!
A ping to registry.terraform.io resolved to a .cloudfront.net address; CloudFront is Amazon's CDN.
Palo Alto Networks (the firewall vendor) publishes CloudFront EDLs (External Dynamic Lists) that you can find here. After adding the EDL to the specific firewall allow rule, traffic could flow to the Terraform/HashiCorp servers, and the task that executed the terraform init statements finished with exit code 0.
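As an aside, if opening the firewall had not been possible, Terraform can also install providers from a local filesystem mirror so that terraform init never has to reach the public registry. A minimal sketch of the CLI configuration (~/.terraformrc; the mirror path /opt/terraform/providers is an assumption):
provider_installation {
  # serve providers from a directory pre-seeded on a machine with internet access
  filesystem_mirror {
    path    = "/opt/terraform/providers"
    include = ["registry.terraform.io/*/*"]
  }
  # never fall back to the public registry for these providers
  direct {
    exclude = ["registry.terraform.io/*/*"]
  }
}
The mirror directory can be populated with terraform providers mirror /opt/terraform/providers and copied to the build agent.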
Related
Trying to deploy a Spring Boot TCP server application on Pivotal Web Services (Cloud Foundry).
The following is in the manifest.yml file:
applications:
- name: myapp-api
  path: target/myapp-api-0.0.1-SNAPSHOT.jar
  host: app
  domain: cf-tcpapps.io
  memory: 1G
  instances: 1
When I cf push, I get this error:
FAILED
Server error, status code: 400, error code: 310009, message: You have exceeded the total reserved route ports for your organization's quota.
When I run cf router-groups, I get:
FAILED
Failed fetching router groups.
Server error, status code: 401, error code: UnauthorizedError, message: You are not authorized to perform the requested action
How can one deploy a Spring MVC API that exposes a TCP port?
The PWS docs indicate:
Note: By default, PWS only supports routing of HTTP requests to applications.
This implies they might, if you get a special dispensation. It may be worth contacting PWS support.
The cf router-groups command is an admin-only command.
If cf domains returns the cf-tcpapps.io domain, a router group has already been set up, but your org or space has not been given permission (i.e., a quota of available TCP ports) to use it.
Ask your platform operator/admin. They may be able to increase your quota, or create a new TCP domain for you to use.
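If the operator agrees, the changes could look roughly like this (the quota name and port count are assumptions, and the quota command requires admin rights):
$ cf update-quota my-org-quota --reserved-route-ports 10
$ cf create-route my-space cf-tcpapps.io --random-port
Once the quota allows reserved route ports, mapping a TCP route to the app should succeed.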
I use AWS CodeDeploy to deploy builds from GitHub to EC2 instances in AutoScaling Group.
It's working fine for Windows 2012 R2 with all Deployment configurations.
But for Windows 2016 it totally fails on an "OneAtATime" deployment;
during an "AllAtOnce" deployment only one or two instances deploy successfully; all the others fail.
In the agent's log file this suspicious message is present:
ERROR [codedeploy-agent(1104)]: CodeDeploy Instance Agent Service: CodeDeploy Instance Agent Service: error during start or run: Errno::ETIMEDOUT
- A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. - connect(2)
All policies, roles, software, builds and other stuff are the same; I even tested this on a brand-new AWS account.
Has anybody faced such behaviour?
I ran into the same problem, but during my investigation I found out that the server's route table had a wrong route for the 169.254.169.254 network (it specified the gateway from the network where my template was captured), so the instance couldn't read its metadata.
From the above error it looks like the agent isn't able to talk to the CodeDeploy endpoint after the instance starts up. Please check that the routing tables and other proxy-related settings are set up correctly. Also, if you don't have it already, you can turn on the debug log by setting :verbose to true in the agent config and restarting the agent. This will help debug the issue better.
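As a sketch of that check on a Windows instance (the concrete route entries are assumptions; adapt them to your image):
C:\> route print 169.254.169.254
C:\> route delete 169.254.169.254
C:\> powershell -Command "Restart-Service codedeployagent"
Deleting a stale persistent route lets the correct metadata route be re-created on the next boot. The :verbose flag mentioned above lives in the agent config file, C:\ProgramData\Amazon\CodeDeploy\conf.yml on Windows.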
Trying to deploy a Symfony 3.2/Doctrine application to Swisscom PaaS.
The buildpack (PHP 7, httpd, etc.) is installed, Composer runs and installs the dependencies, but when the Composer post-install commands, like cache:clear, are called, I get:
[Doctrine\DBAL\Exception\ConnectionException]
An exception occured in driver: SQLSTATE[HY000] [2002] Connection refused
my manifest.yml:
applications:
- services:
  - dbservice
  buildpack: php_buildpack
  host: myapp
  name: MyApp
  instances: 1
  memory: 640M
  env:
    SYMFONY_ENV: prod
    PHP_INI_SCAN_DIR: .bp-config/php/conf.d
my options.json:
"WEB_SERVER": "httpd",
"COMPOSER_INSTALL_OPTIONS": ["--no-dev --optimize-autoloader --no-progress --no-interaction"],
"COMPOSER_VENDOR_DIR": "vendor",
"SYMFONY_ENV": "prod",
"WEBDIR": "web",
"PHP_MODULES": "fpm",
"PHP_VERSION": "{PHP_70_LATEST}",
"PHP_EXTENSIONS": [
"bz2",
"zlib",
"curl",
"mcrypt",
"openssl",
"mbstring",
"pdo",
"pdo_mysql"
],
"ZEND_EXTENSIONS": [
"opcache"
]
And this is how I read the database credentials from VCAP and set the parameters in Symfony (which works perfectly with a local setup of the VCAP_SERVICES env var):
// Read the credentials of the bound service from the VCAP_SERVICES env var
$vcapServices = json_decode($_ENV['VCAP_SERVICES']);
$container->setParameter('database_driver', 'pdo_mysql');
$db = $vcapServices->{'mariadb'}[0]->credentials;
$container->setParameter('database_host', $db->host);
$container->setParameter('database_port', $db->port);
$container->setParameter('database_name', $db->name);
$container->setParameter('database_user', $db->username);
$container->setParameter('database_password', $db->password);
// Just for debug:
echo 'User: ';
var_dump($container->getParameter('database_user'));
echo 'Db: ';
var_dump($db);
The service is running and both var_dump calls deliver the expected values, but the connection is still refused.
What am I doing wrong?
****EDIT****
A similar problem seems to be here, but without a solution: Cloud foundry p-mysql
****EDIT****
I debugged right down to the statement where the PDO constructor is called.
It is called with the following parameters:
$dsn = "mysql:host=10.0.20.18;port=3306;dbname=CF_DB922DD3_CACB_4344_9948_746E585732B5";
$username = "myrealusername"; // as looked up in VCAP_SERVICES
$password = "myrealpassword"; // as looked up in VCAP_SERVICES
$options = array();
Everything looks exactly like what can be seen in the web console for the service binding.
Is a unix_socket required to successfully connect?
****EDIT****
Could it be that Symfony's Composer post-install commands (which here, e.g., clear and warm up the cache) already require a working database connection, and that Cloud Foundry's DB services do not support this as long as the container is not fully built and deployed?
I am running out of ideas.
Sorry for the issue, and thanks for the feedback. Swisscom has modified the security groups. See Application Security Groups for more info.
Cloud Foundry blocks all outbound network connections from application
containers by default. Administrators can override this
block-by-default behavior with Application Security Groups (ASGs).
ASGs are a collection of egress rules that specify one or more
individual protocols, ports, and destinations to allow network access
to.
Cloud Foundry has two default sets of ASGs: default-staging and
default-running. All application containers in Cloud Foundry use a
base policy from one of these.
The rule for Galera as a Service (MariaDB) was only present in the running set. We added the rule to the staging set as well.
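For reference, binding such a rule to the staging set looks roughly like this (the group name, destination range, and port are assumptions, not Swisscom's actual values):
mariadb-asg.json:
[
  { "protocol": "tcp", "destination": "10.0.20.0/24", "ports": "3306" }
]
$ cf create-security-group mariadb-asg mariadb-asg.json
$ cf bind-staging-security-group mariadb-asg
Apps must be restaged before the new staging rules take effect.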
Just in case someone has a similar issue, this was the solution:
In the end I could only solve this with the support of the PaaS provider (swisscomdev).
The database connection was not available during the staging/deployment of our app, but Symfony's cache:clear/warmup required a full database connection during the Composer post-processing phase.
After a fix in Swisscom's Cloud Foundry based platform, everything worked as expected.
In my experience, Connection refused can mean one of these things:
The firewall (iptables?) is blocking non-local access to the port
The MySQL server is not even listening on the port
The MySQL server is listening on port 3306 via IPv6 while the client tries to connect via IPv4
First, let's try to narrow it down by running a basic telnet test:
telnet 10.0.20.18 3306
This should greet you with a half-garbled message with a clear mention of MySQL. If it does not, then you need to go back to more general restrictions like firewalls and policies.
Since you are positive that the server is running, I suggest checking whether you are being blocked by a firewall or SELinux. I don't know how much control you have over Swisscom's PaaS system, though.
If you have SSH access to the PaaS service, you could try running tcpdump to capture any traffic. See this article: https://serverfault.com/questions/507627/debugging-a-connection-refused-response-on-port-21
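A minimal capture for this case (the interface name is an assumption; host and port are taken from the DSN above) could be:
$ tcpdump -i eth0 -n host 10.0.20.18 and port 3306
A SYN with no reply points to a firewall/security-group drop, while an immediate RST means nothing is listening on that port.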
Hope this gives you some hint...
Context
I am trying to run an akka application on a node and make it work with other nodes using akka remoting capabilities.
My node has an IP address, 10.254.55.10, and there is an external IP, 10.10.10.44, redirecting to the former. This external IP is the one on which I want other nodes to contact me.
Extract from my akka app config:
akka {
remote {
netty.tcp {
hostname = "10.10.10.44"
port = 2551
bind-hostname = "10.254.55.10"
bind-port = 2551
}
}
}
I know everything works fine on the network side, because when I listen on my IP with netcat, I can send messages to myself via telnet using the external IP.
In other words, when running these two commands in separate shells:
$ nc -l 10.254.55.10 2551
$ telnet 10.10.10.44 2551
I'm able to communicate with myself, proving the network redirection works fine between the two IPs.
Problem
When launching the application, it crashes with a bind error:
INFO Remoting - Starting remoting
ERROR a.r.t.n.NettyTransport - failed to bind to /10.10.10.44:2551, shutting down Netty transport
Exception in thread "main" java.lang.ExceptionInInitializerError
[...]
Caused by: org.jboss.netty.channel.ChannelException: Failed to bind to: /10.10.10.44:2551
[...]
Caused by: java.net.BindException: Cannot assign requested address
[...]
INFO a.r.RemoteActorRefProvider$RemotingTerminator - Shutting down remote daemon.
I assume what makes it crash is that it tries to bind to an IP that is not present locally (i.e. 10.10.10.44). But what I don't understand in the first place is why Akka is even trying to bind to 10.10.10.44, since it is not my bind-hostname (which is 10.254.55.10). This doc page seemed pretty clear to me on that matter, yet it doesn't work...
The project I was working with was based on Akka 2.3.4, in which the bind-hostname and bind-port configuration keys do not exist. I upgraded to the latest version at the time, Akka 2.4.1, and it solved the problem.
I am referring to this project by Jimmy Bogard: http://www.codeplex.com/AutoMapper
The code repository site is: http://code.google.com/p/automapperhome/source/checkout
The instructed checkout command is:
svn checkout http://automapperhome.googlecode.com/svn/trunk/ automapperhome-read-only
This does not work.
I have tried SlikSVN, Tortoise SVN, QSVN, and possibly others that I've forgotten about.
Client: Tortoise SVN
URL: svn://automapperhome.googlecode.com/svn/trunk/ automapperhome-read-only
Result:
Checkout 'svn://automapperhome.googlecode.com/svn/trunk/ automapperhome-read-only' into 'C:\Development\MVC\automapper'
Can't connect to host 'automapperhome.googlecode.com': A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
Client: SlikSVN
Command:
C:\Program Files\SlikSvn\bin>svn checkout http://automapperhome.googlecode.com/svn/trunk/ automapperhome-read-only c:development\automapper
Result:
svn: OPTIONS of 'http://automapperhome.googlecode.com/svn/trunk': 200 OK (http://automapperhome.googlecode.com)
Command:
C:\Program Files\SlikSvn\bin>svn checkout svn://automapperhome.googlecode.com/svn/trunk/ automapperhome-read-only c:development\automapper
Result:
svn: Can't connect to host 'automapperhome.googlecode.com': A connection attempt
failed because the connected party did not properly respond after a period of time,
or established connection failed because connected host has failed to respond.
I'm at a loss. Is there a default port I need to open on my router for this to work? I'm only behind my router's firewall. Any help would be appreciated. Thanks.
Updated with additional commands attempted for SlikSVN:
C:\Program Files\SlikSvn\bin>svn checkout svn://automapperhome.googlecode.com/svn/trunk/ c:development\automapper
svn: Can't connect to host 'automapperhome.googlecode.com': A connection attempt
failed because the connected party did not properly respond after a period of time,
or established connection failed because connected host has failed to respond.
C:\Program Files\SlikSvn\bin>svn checkout svn://automapperhome.googlecode.com/svn/trunk/
svn: Can't connect to host 'automapperhome.googlecode.com': A connection attempt
failed because the connected party did not properly respond after a period of time,
or established connection failed because connected host has failed to respond.
C:\Program Files\SlikSvn\bin>svn checkout svn://automapperhome.googlecode.com/svn/trunk/automapperhome-read-only c:development\automapper
svn: Can't connect to host 'automapperhome.googlecode.com': A connection attempt
failed because the connected party did not properly respond after a period of time,
or established connection failed because connected host has failed to respond.
*************UPDATE 3********************
Odd. I ran the simple checkout command in QSvn at work and it pulled everything in just fine. Something is definitely wrong with my home machine's setup, but I'm not sure what. I'll look into utilizing Fiddler to examine what's going on. Thanks for your help, guys. I know this has probably frustrated you as much as it has me. Nothing irks worse than things not working when they should, but I'm sure there's some oddity on my end that's doing this.
Your problem is that you assume that the repository URL is http://automapperhome.googlecode.com/svn/trunk/ automapperhome-read-only. The correct URL is http://automapperhome.googlecode.com/svn/trunk/. The automapperhome-read-only part in the SVN checkout command is the target directory, not part of the URL.
Update: Your other mistake is that you are using the SVN:// protocol for the checkout URL. You should be using the HTTP:// protocol. The only attempt you show in your question with the HTTP:// protocol is the first SlikSVN one, where you specify too many options; all other attempts use SVN://. Here's the (partial) output from SlikSVN on my machine:
C:\Users\francip\Desktop\Projects>svn checkout svn://automapperhome.googlecode.com/svn/trunk/ something
svn: Can't connect to host 'automapperhome.googlecode.com': A connection attempt failed because the connected party did not properly respond after a period of time, o
r established connection failed because connected host has failed to respond.
C:\Users\francip\Desktop\Projects>svn checkout http://automapperhome.googlecode.com/svn/trunk/ something
A something\tools
A something\tools\subversion
A something\tools\subversion\ssleay32.dll
A something\tools\subversion\license
The first command uses SVN:// and gets the same error that you get. The second one uses the correct HTTP:// and successfully checks out the source.
Update 2: You have to specify the target directory at the end of the checkout command. I was looking at the example command in your question and your comments and the only places I saw a target directory, it was in the form of c:development\automapper - a relative directory to the current working directory, which in your examples seems to be C:\Program Files\SlikSvn\bin - and this one usually is read-only, unless you are running as an administrator.
If that turns out not to be the problem either, I would suggest removing all current outputs from your question, running svn checkout http://automapperhome.googlecode.com/svn/trunk/ %USERPROFILE%\Desktop\automapper and copying the command and the output verbatim from the console into the question. Barring more details, it's unlikely we will be able to help you further.
In any case, it seems unlikely that the problem is with your network configuration. The URL uses the standard HTTP protocol over port 80 and SVN is returning to you 200 OK, which indicates it's able to connect to the server. Whatever is going wrong is on the local side. Still, you could verify this by running Fiddler and trying again.
(Since you are using TortoiseSVN), this works for me:
create a local folder, where you want to have the automapper source-code
right-click the new folder, select SVN Checkout...
enter the URL http://automapperhome.googlecode.com/svn/trunk/
click OK
Your error was to include the automapperhome-read-only part in the URL (at least when using TortoiseSVN).