DocumentDB Emulator Remote Connection - azure-cosmosdb

Is there any way to connect to the DocumentDB Emulator from a remote system?
Can we create stored procedures, triggers, user-defined functions, etc. in the DocumentDB Emulator?

The Emulator is meant for local dev scenarios. Since it runs exposing a local port, you probably could (never tried, this is purely theoretical) work around the firewall and expose it, then connect from another system using your external IP and the exposed port.
There is also the local SSL certificate that you must deal with (that is probably the biggest issue), though you could try the TCP connection setting; you might want to check this thread about which ports need to be opened.
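For example, opening the emulator's default port in the Windows firewall might look like this (a sketch only; 8081 is the emulator's default port, and exposing it beyond your own machine is exactly what the emulator was not designed for):
```
netsh advfirewall firewall add rule name="DocumentDB Emulator" dir=in action=allow protocol=TCP localport=8081
```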
Also, the Emulator does not have the entire feature set that the live service does:
The DocumentDB Emulator supports only a single fixed account and a well-known master key. Key regeneration is not possible in the DocumentDB Emulator.
The DocumentDB Emulator is not a scalable service and will not support a large number of collections.
The DocumentDB Emulator does not simulate different DocumentDB consistency levels.
The DocumentDB Emulator does not simulate multi-region replication.
The DocumentDB Emulator does not support the service quota overrides that are available in the Azure DocumentDB service (e.g. document size limits, increased partitioned collection storage).
As your copy of the DocumentDB Emulator might not be up to date with the most recent changes to the Azure DocumentDB service, please use the DocumentDB capacity planner to accurately estimate the production throughput (RUs) your application needs.
So, you are probably better off installing the emulator on the other system via the installer or Chocolatey and avoiding all these problems.
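For instance, with Chocolatey (the package ID is an assumption; it has been published as azure-documentdb-emulator and, after the product rename, azure-cosmosdb-emulator):
```
choco install azure-cosmosdb-emulator
```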

UPDATE: My attempted solution below doesn't work: I get a connection timeout on 192.168.0.101:8881 using the Node.js DocumentDB SDK, I think because of SSL. :/ Sorry. I'm leaving this answer as documentation of what doesn't work, in case anyone knows how to bypass the DocumentDB Emulator's SSL.
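One commonly cited dev-only workaround (a sketch, not verified against this exact setup) is to disable Node's TLS certificate validation for the whole process before creating the client, so the emulator's self-signed certificate is accepted:
```
// Dev only: makes Node accept the emulator's self-signed certificate
// by disabling TLS certificate validation process-wide.
process.env.NODE_TLS_REJECT_UNAUTHORIZED = '0';

var documentdb = require('documentdb');

// Assumed endpoint: the portproxy address from the setup below.
var client = new documentdb.DocumentClient(
  'https://192.168.0.101:8881/',
  { masterKey: 'C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==' } // well-known emulator key
);

// Smoke test: list databases.
client.readDatabases().toArray(function (err, dbs) {
  if (err) { console.error(err); }
  else { console.log('Databases:', dbs.length); }
});
```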
I am trying to connect to the DocumentDB Emulator across my local network. (I dev on a virtual machine.)
I am trying to do a port forward to the local port 8081 that the DocumentDB Emulator listens on, in Command Prompt (Run as Administrator):
```
netsh interface portproxy add v4tov4 listenaddress=192.168.0.101 listenport=8080 connectport=8081 connectaddress=127.0.0.1
```
192.168.0.101 is the network address of the PC.
Now I'm able to navigate to https://192.168.0.101:8080/_explorer/index.html and see the data explorer. I'm optimistic I can get this working for dev, with SSL turned off?
I also tried to use the Node.js http-proxy package, but couldn't get it working with self-signed certificates. :(
Update: I actually got http-proxy working, but it only works if you start the servers in a specific order:
1) Start the API server.
2) Start the proxy server (on the Windows box) with secure: true.
3) Make a failed connection.
4) Change the proxy server (on the Windows box) to secure: false and restart it.
Now it's working... but useless for dev, because if you restart the API server after a code change, the connection fails again.
Sample Node.js proxy to be run on the Windows box:
```
var fs = require('fs'),
    httpProxy = require('http-proxy');

//
// Create the proxy server listening on port 8881,
// forwarding to the emulator on local port 8081.
//
httpProxy.createServer({
  ssl: {
    key: fs.readFileSync('valid-ssl-key.pem', 'utf8'),
    cert: fs.readFileSync('valid-ssl-cert.pem', 'utf8')
  },
  target: 'https://localhost:8081',
  secure: true // Depends on your needs, could be false.
}).listen(8881);
```

You just need to start the emulator with additional parameters:
```
start "" "c:\Program Files\Azure Cosmos DB Emulator\CosmosDB.Emulator.exe" /AllowNetworkAccess /NoFirewall /Key=C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==
```
Check out the emulator's Docker start script for more details: https://github.com/Azure/azure-cosmos-db-emulator-docker/blob/master/package_scripts/startemu.cmd
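With network access enabled, a remote client can then target the host machine's IP with the well-known key. A minimal sketch using the @azure/cosmos Node.js SDK (the LAN IP is an assumption; certificate validation is disabled because the emulator's self-signed certificate won't be trusted remotely, so this is for dev only):
```
const https = require('https');
const { CosmosClient } = require('@azure/cosmos');

const client = new CosmosClient({
  endpoint: 'https://192.168.0.101:8081/', // assumed LAN IP of the emulator host
  key: 'C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==',
  agent: new https.Agent({ rejectUnauthorized: false }) // accept the self-signed cert
});

// Smoke test: list database IDs.
async function main() {
  const { resources } = await client.databases.readAll().fetchAll();
  console.log('Databases:', resources.map(db => db.id));
}
main().catch(console.error);
```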

Related

Can't connect to Google Cloud SQL from Cloud Run - using R Shiny

I have created an R Shiny app that connects to a Cloud SQL instance. It runs fine on my local server, but when I upload to either shinyapps.io or to Cloud Run via Dockerfile, it is unable to connect.
Here is the code I am using to connect with the RPostgres package:
```
conn <- dbConnect(
  drv = RPostgres::Postgres(),
  dbname = 'postgres',
  sslrootcert = '<path to server-ca.pem>',
  sslcert = '<path to client-cert.pem>',
  sslkey = '<path to client-key.pem>',
  host = 'xxxxxxxxxxxxxxxxxxx',
  port = 5432,
  user = 'username',
  password = 'password_string',
  sslmode = 'verify-ca')
```
I've checked the logs in Cloud Run; the error message I am seeing is the following:
Warning: Error in : unable to find an inherited method for function 'dbGetQuery' for signature '"character", "character"'
The dbGetQuery() function is called after the dbConnect function, and since everything runs fine on my local server, I am fairly confident that what I am seeing is a connection issue rather than a package namespace issue. But I could be wrong.
I have opened up to all IPs by adding 0.0.0.0/0 as an allowed network. The weird thing is that OCCASIONALLY I CAN CONNECT from shinyapps.io, but most of the time it fails. I have not yet got it to work once from Cloud Run. This is leading me to think that it could be a problem with a dynamic IP address or something similar?
Do I need to go through the Cloud SQL Auth Proxy to connect directly between Cloud Run and Cloud SQL? Or can I just connect via the dbConnect method above? I figured that 0.0.0.0/0 would also include Cloud Run IPs, but I probably don't understand how it works well enough. Any explanations would be greatly appreciated.
Thanks very much!
I have opened up to all IPs by adding 0.0.0.0/0 as an allowed network.
From a security standpoint, this is a terrible, horrible, no good idea. It essentially means the entire world can attempt to connect to your database.
As @john-hanley stated in the comment, the Connecting Cloud Run to Cloud SQL documentation details how to connect. There are two options:
1) via public IP (the internet) using the Unix domain socket at /cloudsql/CLOUD_SQL_CONNECTION_NAME (see the sketch after this list)
2) via private IP, which connects through a VPC using Serverless VPC Access
If a Unix domain socket is not supported by your library, you'll have to use a different library or choose option 2 and connect over TCP. Note that the Serverless VPC Access connector has additional costs associated with it.
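For illustration, a minimal Node.js sketch of option 1 using the pg package (the connection name and credentials are placeholders); libpq-based clients such as RPostgres behave the same way when you pass the /cloudsql/... directory as host:
```
const { Client } = require('pg');

// Cloud Run mounts the Cloud SQL Unix socket at /cloudsql/<CONNECTION_NAME>.
// pg treats a host starting with '/' as a socket directory.
const client = new Client({
  host: '/cloudsql/my-project:us-central1:my-instance', // placeholder connection name
  database: 'postgres',
  user: 'username',
  password: 'password_string'
});

client.connect()
  .then(() => client.query('SELECT 1'))
  .then(res => console.log(res.rows))
  .catch(console.error)
  .finally(() => client.end());
```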

Connect to local CosmosDB emulator from Xamarin.Forms

I have a Xamarin app which uses Azure Storage and Azure Cosmos DB.
For now, both are emulated on my PC.
I am also running a Pixel 2 in the Android emulator.
I found that I need to configure the IP as 10.0.2.2 (the Android emulator's alias for the host machine's loopback), even though I can't find this IP or even this network on my PC.
It works well with the storage account; I can connect to it.
But when I'm connecting to Cosmos, I get the following exception :
Ssl error:1000007d:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED
at /Users/builder/jenkins/workspace/archive-mono/2020-02/android/release/external/boringssl/ssl/handshake_client.c:1132
I'm not an expert in Android. How can I add a custom certificate?
Or just disable SSL for the Cosmos emulator?

Azure Cosmos DB Emulator on a Local Area Network

I am trying to access the Azure Cosmos DB Emulator from a Mac. The emulator is installed on a Windows 10 machine. Both machines are obviously part of the local network. I am able to browse the emulator explorer on the Windows 10 machine using the following addresses:
1) https://localhost:8081/_explorer/index.html
2) https://192.168.0.104/_explorer/index.html
However, I am not able to browse the emulator explorer on the Windows 10 machine using the following address:
1) https://192.168.0.104:8081/_explorer/index.html
I am getting following error message in browser:
The site can't be reached.
<192.168.0.104> refused to connect.
I get the same error message when I browse from the Mac.
I have tried the following:
1) Activate "Private" firewall.
2) Turn off "Private" firewall.
3) Create a rule in firewall to allow inbound connections on port 8081.
4) Turn off all kinds of firewalls (Private, Domain, Public)
If anyone has tried this before, please suggest what I am doing wrong, or whether it is even possible.
1) Use the following command to generate an authorization key:
```
Microsoft.Azure.Cosmos.Emulator.exe /GenKeyFile=cosmosdbauthkey
```
2) Then shut down the emulator if it is already running.
3) Then delete the emulator's data directory at the following location: C:\Users\user_name\AppData\Local\CosmosDBEmulator
4) Restart the emulator with the following command:
```
CosmosDB.Emulator.exe /AllowNetworkAccess /KeyFile=cosmosdbauthkey
```
Refer: https://learn.microsoft.com/en-us/azure/cosmos-db/local-emulator#command-line-syntax
Try starting the Cosmos DB Emulator with the /NoFirewall option:
```
CosmosDB.Emulator.exe /NoFirewall
```

Google Cloud function times out when connecting to Redis on Compute Engine internal IP

I created a Redis instance using https://console.cloud.google.com/launcher/details/bitnami-launchpad/redis-ha (a screenshot of the instance's network interface accompanied the question).
I'm trying to connect to this Redis instance from a Firebase trigger.
The question is: what firewall rule do I need to connect from a cloud function to a compute instance?
Please provide as many details as possible, e.g. IP ranges, ingress/egress, etc, and whether I have to connect the Redis client to the instance on the internal IP, or the external IP.
This is the code:
```
const redis = require('redis');

let redisInstance = redis.createClient({
  /* surely the external IP needn't be used
     here as it's all GCP infra? */
  host: '10.1.2.3',
  port: 6379
});

redisInstance.on('connect', () => {
  console.log(`connected`);
});

redisInstance.on('error', (err) => {
  console.log(`Connection error ${err}`);
});
```
The error in the log is
Connection error Error: Redis connection to 10.1.2.3:6379 failed - connect ETIMEDOUT 10.1.2.3:6379
I've looked at Google Cloud Function cannot connect to Redis but it's not specific enough about the options when setting up a rule.
What I've tried
I tried to set up a firewall rule with these settings:
ingress
network: default
source filter: my firebase service account
protocols/ports: all
targets: all
Just a note about the service account:
created by Firebase
has the Editor role in IAM
is known to work with BigQuery and other Firebase services from my Firebase triggers
This same firewall rule has been in effect for a few hours now, and I've also redeployed the trigger which tests Redis, but still getting ETIMEDOUT
UPDATES
2018-06-25 morning
I phoned GCP Gold support and the problem isn't obvious to the operator, so they'll open a case, investigate, and leave some notes.
2018-06-25 afternoon
Using a permissive firewall rule (source 0.0.0.0/0, destination "all targets") and connecting to the Redis instance's external IP address works (of course!). However, as I mentioned many times on the phone call, I don't want the Redis instance to be open to the Internet, and I asked whether there's some sort of solution involving a networking bridge/VPN so I can connect to the 10.x.x.x address from the Cloud Function.
The operator said they'll get back to me in 2 days.
2018-06-25 bit later in the afternoon
I've self-answered that it doesn't seem to be possible to connect to a Compute Engine internal IP from a Cloud Function.
It looks like it's NOT currently possible to connect to a Google Compute Engine internal IP from Google Cloud Functions, so all my (and my helpful Gold support operator's) efforts have been in vain.
Here's the (open) issue: https://issuetracker.google.com/issues/36859738
As explained in the question you referred to, when you create a new firewall rule, change the Source filter field from IP ranges to Service account. You then won't need to specify any IPs, only the name of the service account for Cloud Functions.
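A sketch of such a rule with the gcloud CLI (the rule name, port, and service account are assumptions for illustration):
```
gcloud compute firewall-rules create allow-functions-to-redis \
    --direction=INGRESS \
    --network=default \
    --action=ALLOW \
    --rules=tcp:6379 \
    --source-service-accounts=my-project@appspot.gserviceaccount.com
```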

Database connection refused with Symfony / PHP

Trying to deploy a Symfony 3.2/Doctrine application to Swisscom PaaS.
The buildpack (PHP 7, httpd, etc.) is installed, composer is running and installing dependencies, but when the composer after-commands, like cache:clear, are called, I get:
[Doctrine\DBAL\Exception\ConnectionException]
An exception occurred in driver: SQLSTATE[HY000] [2002] Connection refused
my manifest.yml:
```
applications:
- services:
  - dbservice
  buildpack: php_buildpack
  host: myapp
  name: MyApp
  instances: 1
  memory: 640M
  env:
    SYMFONY_ENV: prod
    PHP_INI_SCAN_DIR: .bp-config/php/conf.d
```
my options.json:
```
"WEB_SERVER": "httpd",
"COMPOSER_INSTALL_OPTIONS": ["--no-dev --optimize-autoloader --no-progress --no-interaction"],
"COMPOSER_VENDOR_DIR": "vendor",
"SYMFONY_ENV": "prod",
"WEBDIR": "web",
"PHP_MODULES": "fpm",
"PHP_VERSION": "{PHP_70_LATEST}",
"PHP_EXTENSIONS": [
  "bz2",
  "zlib",
  "curl",
  "mcrypt",
  "openssl",
  "mbstring",
  "pdo",
  "pdo_mysql"
],
"ZEND_EXTENSIONS": [
  "opcache"
]
```
And this is how I read the database credentials from VCAP and set the parameters in Symfony (which works perfectly with a local setup of VCAP_SERVICES env vars):
```
$vcapServices = json_decode($_ENV['VCAP_SERVICES']);
$container->setParameter('database_driver', 'pdo_mysql');
$db = $vcapServices->{'mariadb'}[0]->credentials;
$container->setParameter('database_host', $db->host);
$container->setParameter('database_port', $db->port);
$container->setParameter('database_name', $db->name);
$container->setParameter('database_user', $db->username);
$container->setParameter('database_password', $db->password);

// Just for debug:
echo 'User: ';
var_dump($container->getParameter('database_user'));
echo 'Db: ';
var_dump($db);
```
The service is running, and both var_dump calls deliver the expected values. But the connection is still refused.
What am I doing wrong?
****EDIT****
A similar problem seems to be here, but without a solution: Cloud foundry p-mysql
****EDIT****
I debugged down to the statement where the PDO constructor is called. It is called with the following parameters:
```
$dsn = "mysql:host=10.0.20.18;port=3306;dbname=CF_DB922DD3_CACB_4344_9948_746E585732B5";
$username = "myrealusername"; // as looked up in VCAP_SERVICES
$password = "myrealpassword"; // as looked up in VCAP_SERVICES
$options = array();
```
Everything looks exactly like what is shown in the web console for the service binding.
Is a unix_socket required to connect successfully?
****EDIT****
Could it be that Symfony's composer post-install commands (in this case, e.g., clearing and warming up the cache) already require a working database connection, and that Cloud Foundry does not support connections to DB services as long as the container is not fully built and deployed?
I am running out of ideas.
Sorry for the issue, and thanks for the feedback. Swisscom modified the security groups. See Application Security Groups for more info.
Cloud Foundry blocks all outbound network connections from application containers by default. Administrators can override this block-by-default behavior with Application Security Groups (ASGs). ASGs are a collection of egress rules that specify one or more individual protocols, ports, and destinations to allow network access to.
Cloud Foundry has two default sets of ASGs: default-staging and default-running. All application containers in Cloud Foundry use a base policy from one of these.
The rule for Galera as a Service (MariaDB) had only been added to the running set. We added the rule to staging as well.
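For illustration, an ASG that lets staging containers reach a MariaDB service looks roughly like this (the CIDR and file name are assumptions):
```
[
  {
    "protocol": "tcp",
    "destination": "10.0.20.0/24",
    "ports": "3306"
  }
]
```
It is then created and bound to staging with the cf CLI:
```
cf create-security-group mariadb-staging asg-mariadb.json
cf bind-staging-security-group mariadb-staging
```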
Just in case someone has a similar issue, this was the solution:
In the end I could only solve this with the support of the PaaS provider (swisscomdev).
Obviously a database connection was not provided/possible during staging/deployment of our app, but Symfony's cache:clear/warmup required a full database connection during the composer post-processing phase.
After a fix in the Cloud Foundry-based platform of swisscomdev, everything worked as expected.
In my experience, Connection refused can mean one of these things:
1) A firewall (iptables?) is blocking non-local access to the port.
2) The MySQL server is not even listening on the port.
3) The MySQL server is listening on the IPv6 port 3306 while the client tries to connect via IPv4.
First, let's try to narrow it down by running a basic telnet test:
```
telnet 10.0.20.18 3306
```
This should greet you with a half-garbled message with a clear mention of MySQL. If it does not, then you need to go back to more general restrictions like firewalls and policies.
Since you are positive that the server is running, I suggest checking whether you are being blocked by a firewall or SELinux. I don't know how much control you have over Swisscom's PaaS system, though.
If you have SSH access to the PaaS service, you could try running tcpdump to capture any traffic. See this article: https://serverfault.com/questions/507627/debugging-a-connection-refused-response-on-port-21
Hope this gives you some hint.
