I'm trying to deploy a Symfony 3.2/Doctrine application to Swisscom PaaS.
The buildpack (PHP 7, httpd, etc.) is installed, Composer runs and installs the dependencies, but when the Composer post-install commands such as cache:clear are called, I get:
[Doctrine\DBAL\Exception\ConnectionException]
An exception occured in driver: SQLSTATE[HY000] [2002] Connection refused
my manifest.yml:
applications:
- services:
  - dbservice
  buildpack: php_buildpack
  host: myapp
  name: MyApp
  instances: 1
  memory: 640M
  env:
    SYMFONY_ENV: prod
    PHP_INI_SCAN_DIR: .bp-config/php/conf.d
my options.json:
{
    "WEB_SERVER": "httpd",
    "COMPOSER_INSTALL_OPTIONS": ["--no-dev --optimize-autoloader --no-progress --no-interaction"],
    "COMPOSER_VENDOR_DIR": "vendor",
    "SYMFONY_ENV": "prod",
    "WEBDIR": "web",
    "PHP_MODULES": "fpm",
    "PHP_VERSION": "{PHP_70_LATEST}",
    "PHP_EXTENSIONS": [
        "bz2",
        "zlib",
        "curl",
        "mcrypt",
        "openssl",
        "mbstring",
        "pdo",
        "pdo_mysql"
    ],
    "ZEND_EXTENSIONS": [
        "opcache"
    ]
}
And this is how I read the database credentials from VCAP_SERVICES and set the parameters in Symfony (which works perfectly with a local setup of the VCAP_SERVICES env var):
$vcapServices = json_decode($_ENV['VCAP_SERVICES']);
$container->setParameter('database_driver', 'pdo_mysql');
$db = $vcapServices->{'mariadb'}[0]->credentials;
$container->setParameter('database_host', $db->host);
$container->setParameter('database_port', $db->port);
$container->setParameter('database_name', $db->name);
$container->setParameter('database_user', $db->username);
$container->setParameter('database_password', $db->password);
// Just for debug:
echo 'User: ';
var_dump($container->getParameter('database_user'));
echo 'Db: ';
var_dump($db);
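For reference, the code above assumes a VCAP_SERVICES shape roughly like the following (field names taken from the accessors above; the values are placeholders):
{
  "mariadb": [
    {
      "credentials": {
        "host": "<db-host>",
        "port": 3306,
        "name": "<db-name>",
        "username": "<db-user>",
        "password": "<db-password>"
      }
    }
  ]
}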
The service is running and both var_dump calls output the expected values, but the connection is still refused.
What am I doing wrong?
****EDIT****
A similar problem is described here, but without a solution: Cloud foundry p-mysql
****EDIT****
I debugged down to the statement where the PDO constructor is called.
It is called with the following parameters:
$dsn = mysql:host=10.0.20.18;port=3306;dbname=CF_DB922DD3_CACB_4344_9948_746E585732B5;
$username = "myrealusername"; // as looked up in VCAP_SERVICES
$password = "myrealpassword"; // as looked up in VCAP_SERVICES
$options = array();
Everything looks exactly like what is shown in the web console for the service binding.
Is a unix_socket required to connect successfully?
****EDIT****
Could it be that, since Symfony runs some Composer post-install commands (in this case e.g. to clear and warm up the cache) which already require a working database connection, this is simply not supported with Cloud Foundry DB services as long as the container is not fully built and deployed? (The relevant composer.json hooks are shown below for reference.)
I am running out of ideas.
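For context, these are the Composer hooks from a standard Symfony 3.x composer.json that trigger cache:clear during composer install (abridged; your project's entries may differ):
"scripts": {
    "symfony-scripts": [
        "Incenteev\\ParameterHandler\\ScriptHandler::buildParameters",
        "Sensio\\Bundle\\DistributionBundle\\Composer\\ScriptHandler::buildBootstrap",
        "Sensio\\Bundle\\DistributionBundle\\Composer\\ScriptHandler::clearCache"
    ],
    "post-install-cmd": [
        "@symfony-scripts"
    ]
}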
Sorry for the issue and thanks for the feedback. Swisscom has modified the security groups. See Application Security Groups for more info.
Cloud Foundry blocks all outbound network connections from application containers by default. Administrators can override this block-by-default behavior with Application Security Groups (ASGs). ASGs are a collection of egress rules that specify one or more individual protocols, ports, and destinations to allow network access to.
Cloud Foundry has two default sets of ASGs: default-staging and default-running. All application containers in Cloud Foundry use a base policy from one of these.
The rule for Galera as a Service (MariaDB) was only present in the running set. We have now added the rule to the staging set as well.
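For reference, a platform administrator would typically add such a rule with the cf CLI; a minimal sketch, where the rule file name and the destination range are assumptions based on the DSN shown above:
mariadb-asg.json:
[
  { "protocol": "tcp", "destination": "10.0.20.0/24", "ports": "3306" }
]

cf create-security-group galera-service mariadb-asg.json
cf bind-staging-security-group galera-service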
Just in case someone runs into a similar issue, this was the solution:
In the end I could only solve this with the support of the PaaS provider (swisscomdev).
The database connection was simply not available during staging/deployment of our app, but Symfony's cache:clear/warmup required a working database connection during the Composer post-processing phase.
After a fix in swisscomdev's Cloud Foundry based platform, everything worked as expected.
In my experience, Connection refused can mean one of these things:
The firewall (iptables?) is blocking non-local access to the port.
The MySQL server is not even listening on the port.
The MySQL server is listening on the IPv6 port 3306 while the client tries to connect via IPv4.
First, let's try to narrow it down by running a basic telnet test:
telnet 10.0.20.18 3306
This should greet you with a half-garbled message with a clear mention of MySQL. If it does not, then you need to go back to more general restrictions like firewalls and policies.
Since you are positive that the server is running, I suggest checking whether you are being blocked by a firewall or SELinux. I don't know how much control you have over Swisscom's PaaS system, though.
If you have SSH access to the PaaS service, you could try running tcpdump to capture the traffic. See this article: https://serverfault.com/questions/507627/debugging-a-connection-refused-response-on-port-21
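If you do get shell access, a capture along these lines (the interface choice is a guess) would show whether the connection attempts leave the container at all:
tcpdump -i any -nn host 10.0.20.18 and port 3306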
Hope this gives you some hint...
Related
We operate CENM (1.2, using a Helm template to run on a k8s cluster) to construct our own private network. The CENM network map server kept running for a few weeks, then launching new nodes started failing.
With further investigation, it appeared that a request timeout for http://nmap:10000/network-map causes the problem.
In the nmap server's log, we found the following output when accessing the above URL with curl:
[NMServer] - Error while handling socket client message com.r3.enm.servicesapi.networkmap.handlers.LatestUnsignedNetworkParametersRetrievalMessage#760c53ea: HikariPool-1 - Connection is not available, request timed out after 30000ms.
netstat shows there are at least 3 established connections to the database from the container where the network map server runs, and I can also connect to the database directly using the CLI.
So I don't think it is either a saturated database or a network configuration problem.
Does anyone have an idea why this happens? I think a restart would probably solve the problem, but I want to know the root cause...
regards,
Please test the following options.
Since it is the HikariCP (connection pool) component that is throwing the error, it would be worth seeing if increasing the pool size in the network map configuration helps (see below).
Corda uses HikariCP for creating the connection pool. To configure the connection pool, any custom properties can be set in the dataSourceProperties section.
dataSourceProperties = {
    dataSourceClassName = "org.postgresql.ds.PGSimpleDataSource"
    ...
    maximumPoolSize = 10
    connectionTimeout = 50000
}
Has a health check been conducted to verify that there are sufficient resources on that Postgres database, i.e. basic diagnostic checks?
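As a basic check, assuming you can reach the database with psql (the host and user below are placeholders), something like the following shows how many connections are open versus the configured maximum:
psql -h <db-host> -U <db-user> -c "SELECT count(*) FROM pg_stat_activity;"
psql -h <db-host> -U <db-user> -c "SHOW max_connections;"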
Another option to get more information logged from the network map service is to also run it with TRACE logging:
From https://docs.corda.net/docs/cenm/1.2/troubleshooting-common-issues.html
Enabling debug/trace logging
Each service can be configured to run with a deeper log level via command line flags passed at startup:
java -DdefaultLogLevel=TRACE -DconsoleLogLevel=TRACE -jar <enm-service-jar>.jar --config-fi
I'm very new to Docker (in fact I've only been using it for one day), so maybe I'm misunderstanding some basic concept, but I couldn't find a solution myself.
Here's the problem. I have an ASP.NET Core server application on a Windows machine. It uses MongoDB as a datastore. Everything works fine. I decided to pack all this stuff into Docker containers and put it on a Linux (Ubuntu Server 18.04) server. I've packed mongo into a container, so now its PUBLISHED IP:PORT value is 192.168.99.100:32772.
I've hardcoded this address into my ASP.NET server and also packed it into a container (IP 192.168.99.100:5000).
Now if I run my server and mongo containers together on my Windows machine, they work just fine. The server connects to a container with the database and can do whatever it needs.
But when I transfer both containers to Ubuntu and run them, the server cannot connect to the database because this IP address is not available there. I've been googling for a few hours to find a solution and I'm still struggling with it.
What is the correct way to handle these IP addresses? Is it possible to set an IP that will be the same for a container regardless of environment?
I recommend using docker-compose for the purpose you described above.
With docker-compose, you can access the database via a service name instead of an IP (which potentially is not available on another system). Here are two links to get started:
https://docs.docker.com/compose/gettingstarted/
https://docs.docker.com/compose/compose-file/
Updated answer (10.11.2019)
Here is a concrete example for your ASP.NET app:
docker-compose.yaml
version: "3"
services:
frontend:
image: fqdn/aspnet:tag
ports:
- 8080:80
links:
- database
database:
image: mongo
environment:
MONGO_INITDB_DATABASE: "mydatabase"
MONGO_INITDB_ROOT_USERNAME: "root"
MONGO_INITDB_ROOT_PASSWORD: "example"
volumes:
- myMongoVolume:/data/db
volumes:
myMongoVolume: {}
From the frontend container, you can reach the mongo db container via the service name "database" (instead of an IP). Due to the link definition in the frontend service, the frontend service will start after the linked service (database).
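For example, the connection string used by the ASP.NET application could then look roughly like this (credentials taken from the compose file above; where you put this setting in your app configuration is up to you; authSource=admin is needed because the root user created by the mongo image lives in the admin database):
mongodb://root:example@database:27017/mydatabase?authSource=admin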
Through volume definition, the mongo database will be stored in a volume that persists independently from the container lifecycle.
Additionally, I assume you want to reach the ASP.NET application via the host IP. I do not know which port you expose in your application, so I assume the default port 80. Via the ports section of the frontend service, we define that container port 80 is exposed as port 8080 on the host IP, i.e. you can open your browser, type your host IP and port 8080 (e.g. 127.0.0.1:8080 for localhost) and reach your application.
With docker-compose installed, you can start your app, which consists of your frontend and database services, via:
docker-compose up
Available command options for docker-compose can be found here
https://docs.docker.com/compose/reference/overview/
Install instructions for docker-compose
https://docs.docker.com/compose/install/
Updated answer (10.11.2019, v2)
From the comment section
Keep in mind that you need to connect via the service name (e.g. database) and the correct port. For MongoDB that port is 27017. That would translate to database:27017 in your frontend config.
Q: will mongo also be available from the outside in this case?
A: No; since the service does not contain any port definition, the database itself will not be directly reachable. From a security standpoint, this is preferable.
Q: could you expain this
volumes:
  myMongoVolume: {}
A: In the service definition for your database service, we have specified a volume to store the database itself, to make the data independent from the container lifecycle. However, just by referencing a volume in the service section, the volume will not be created. Through the definition in the top-level volumes section, we create the volume myMongoVolume with the default settings (indicated by {}). If you would like to customize your volume, you can do so in this volumes section of your docker-compose.yaml. More information regarding volumes can be found here:
https://docs.docker.com/compose/compose-file/#volume-configuration-reference
e.g. if you would like to use a specific storage driver for your volume or use external storage.
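For instance, a named volume with an explicit driver, or a volume managed outside of Compose, could be declared roughly like this (the names are placeholders):
volumes:
  myMongoVolume:
    driver: local
  preExistingVolume:
    external: true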
I created a Redis instance using https://console.cloud.google.com/launcher/details/bitnami-launchpad/redis-ha
and the network interface is configured as shown in the Cloud Console (screenshot omitted).
I'm trying to connect to this Redis instance from a Firebase trigger.
The question is: what firewall rule do I need to connect from a cloud function to a compute instance?
Please provide as many details as possible, e.g. IP ranges, ingress/egress, etc, and whether I have to connect the Redis client to the instance on the internal IP, or the external IP.
This is the code:
const redis = require('redis');

let redisInstance = redis.createClient({
  /* surely external IP needn't be used
     here as it's all GCP infra? */
  host: '10.1.2.3',
  port: 6379
});

redisInstance.on('connect', () => {
  console.log(`connected`);
});

redisInstance.on('error', (err) => {
  console.log(`Connection error ${err}`);
});
The error in the log is
Connection error Error: Redis connection to 10.1.2.3:6379 failed - connect ETIMEDOUT 10.1.2.3:6379
I've looked at Google Cloud Function cannot connect to Redis but it's not specific enough about the options when setting up a rule.
What I've tried
I tried to set up a firewall rule with these settings:
ingress
network: default
source filter: my firebase service account
protocols/ports: all
targets: all
Just a note about the service account:
created by Firebase
has the Editor role in IAM
is known to work with BigQuery and other Firebase services from my Firebase triggers
This same firewall rule has been in effect for a few hours now, and I've also redeployed the trigger which tests Redis, but I'm still getting ETIMEDOUT.
UPDATES
2018-06-25 morning
I phoned GCP Gold support and the problem isn't obvious to the operator, so they'll open a case, investigate, and leave some notes.
2018-06-25 afternoon
Using a permissive firewall rule (source 0.0.0.0/0, destination "all targets") and connecting to the Redis instance's external IP address works (of course!). However, I mentioned many times on the phone call that I don't want the Redis instance to be open to the Internet, and asked whether there's some sort of solution involving a networking bridge/VPN so I can connect to the 10.x.x.x address from the Cloud Function.
The operator said they'll get back to me in 2 days.
2018-06-25 bit later in the afternoon
I've self-answered that it doesn't seem to be possible to connect to a Compute Engine internal IP from a cloud function.
It looks like it's NOT currently possible to connect to a Google Compute Engine internal IP from Google Cloud Functions, so all my (and my helpful Gold support operator's) efforts have been in vain.
Here's the (open) issue: https://issuetracker.google.com/issues/36859738
As it is explained in the question you referred to, when you create a new firewall rule you change the Source Filter field from IP ranges to Service Account. In the following step you won't need to specify any IPs, only the name of the service account for Cloud Functions.
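For reference, the equivalent rule created via the gcloud CLI would look roughly like this (the rule name and the service account address are placeholders):
gcloud compute firewall-rules create allow-functions-to-redis \
    --direction=INGRESS \
    --network=default \
    --allow=tcp:6379 \
    --source-service-accounts=<firebase-service-account>@<project>.iam.gserviceaccount.com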
Is there any way to connect to the DocumentDB Emulator from a remote system?
Can we create stored procedures, triggers, user-defined functions, etc. in the DocumentDB Emulator?
The Emulator is meant for local dev scenarios. Since it runs exposing a local port, you could probably (I never tried it, this is purely theoretical) work around the firewall and expose it, then connect from another system using your external IP and the exposed port.
There is also the local SSL certificate that you must deal with (that is probably the biggest issue), though you could try the TCP connection setting; you might want to check this thread about which ports need to be opened.
Also, the Emulator does not have the entire feature set that the live service does:
The DocumentDB Emulator supports only a single fixed account and a well-known master key. Key regeneration is not possible in the DocumentDB Emulator.
The DocumentDB Emulator is not a scalable service and will not support a large number of collections.
The DocumentDB Emulator does not simulate different DocumentDB consistency levels.
The DocumentDB Emulator does not simulate multi-region replication.
The DocumentDB Emulator does not support the service quota overrides that are available in the Azure DocumentDB service (e.g. document size limits, increased partitioned collection storage).
As your copy of the DocumentDB Emulator might not be up to date with the most recent changes to the Azure DocumentDB service, please use the DocumentDB capacity planner to accurately estimate the production throughput (RUs) needs of your application.
So, you are probably better off installing the emulator on the other system via the installer or Chocolatey and avoiding all of these problems.
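A minimal sketch of the Chocolatey route (the package id is an assumption; check chocolatey.org for the current name):
choco install azure-documentdb-emulator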
UPDATE: My attempted solution below doesn't work: Connection Timeout on 192.168.0.101:8881 using the Node.js DocumentDB SDK, I think because of SSL. :/ Sorry. I'm leaving this "answer" as documentation of what doesn't work, and in case anyone knows how to bypass the DocumentDB Emulator SSL.
I am trying to connect to the DocumentDB Emulator across my local network. (I develop on a virtual machine.)
I am trying to do a port forward to local port 8081, which the DocumentDB Emulator listens on. In Command Prompt (Run as Administrator):
netsh interface portproxy add v4tov4 listenaddress=192.168.0.101 listenport=8080 connectport=8081 connectaddress=127.0.0.1
192.168.0.101 is the network address of the PC.
Now I'm able to navigate to:
https://192.168.0.101:8080/_explorer/index.html and see the data explorer. I'm optimistic I can get this working for dev, with SSL turned off?
I also tried to use the Node.js http-proxy package but couldn't get it working with self-signed certificates. :(
Update: I actually got http-proxy working, but it only works if you start the servers in a specific order...
start api server
start proxy server (on windows box) with secure: true
make a failed connection
change proxy server (on windows box) to secure: false; restart;
now it's working... but it's useless for dev, because if you restart the API server after a code change, the connection fails again.
Sample Node.js Proxy to be run on Windows box:
```
var fs = require('fs'),
    httpProxy = require('http-proxy');

//
// Create the proxy server listening on port 8881
//
httpProxy.createServer({
  ssl: {
    key: fs.readFileSync('valid-ssl-key.pem', 'utf8'),
    cert: fs.readFileSync('valid-ssl-cert.pem', 'utf8')
  },
  target: 'https://localhost:8081',
  secure: true // Depends on your needs, could be false.
}).listen(8881);
```
You just need to start the DocumentDB Emulator with additional parameters:
start "" "c:\Program Files\Azure Cosmos DB Emulator\CosmosDB.Emulator.exe" /AllowNetworkAccess /NoFirewall /Key=C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==
Check out the DocumentDB Docker file for more details: https://github.com/Azure/azure-cosmos-db-emulator-docker/blob/master/package_scripts/startemu.cmd
I use AWS CodeDeploy to deploy builds from GitHub to EC2 instances in AutoScaling Group.
It's working fine for Windows 2012 R2 with all Deployment configurations.
But for Windows 2016 it totally fails on an "OneAtTime" deployment.
During an "AllAtOnce" deployment only one or two instances deploy successfully; all others fail.
In the log file on the agent this suspicious message is present:
ERROR [codedeploy-agent(1104)]: CodeDeploy Instance Agent Service: CodeDeploy Instance Agent Service: error during start or run: Errno::ETIMEDOUT
- A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. - connect(2)
All policies, roles, software, builds and other stuff are the same; I even tested this on a brand new AWS account.
Has anybody faced such behaviour?
I ran into the same problem, but during my investigation I found out that the server's route table had a wrong route for the 169.254.169.254 network (it pointed to the gateway from the network where my template was captured), so the instance couldn't read its metadata.
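To check and correct this on a Windows instance, something along these lines can be used (the gateway address is a placeholder for the correct gateway of the instance's subnet):
route print 169.254.169.254
route delete 169.254.169.254
route add 169.254.169.254 mask 255.255.255.255 <correct-gateway> metric 25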
From the above error it looks like the agent isn't able to talk to the CodeDeploy endpoint after the instance starts up. Please check that the routing tables and other proxy-related settings are set up correctly. Also, if you have not already, you can turn on debug logging by setting :verbose to true in the agent config and restarting the agent. This would help debug the issue better.
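For reference, on Windows the agent configuration file is typically C:\ProgramData\Amazon\CodeDeploy\conf.yml; a minimal sketch of the change and the restart (verify the service name on your instance):
:verbose: true

powershell.exe -Command Restart-Service -Name codedeployagent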