I'm trying to configure the file cygnus.conf, but I don't know the FQDN/IP of the Namenodes, the Hive server, and the CKAN API endpoint.
I searched the catalogue and the forge, but I can't find anything about that.
Thank you, and best regards.
For the Namenode and the Hive server, just use the same endpoint you used to set up your Cosmos account: http://cosmos.lab.fi-ware.org. Regarding CKAN, I'm not sure there is a public CKAN instance running in FI-LAB. Let me check it.
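For illustration, the corresponding cygnus.conf entries would look roughly like this. The property names below are assumptions (they have changed between Cygnus releases, so verify them against your version's README); only the host value comes from the answer above:

# Hypothetical property names; check your Cygnus release's documentation.
cygnusagent.sinks.hdfs-sink.cosmos_host = cosmos.lab.fi-ware.org
cygnusagent.sinks.hdfs-sink.cosmos_port = 14000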
I'm trying to connect my WordPress database with Google Apps Script JDBC.
Here is my code:
var connection = Jdbc.getConnection("jdbc:mysql://34.76.255.159:3306/wordpress", "wordpress", "DBpassword_found_in_wp_config.php_file");
I also tried localhost instead of the IP address, and localhost:3306 as well.
All three options lead to the same error:
Fail to connect, check the chain, username and password ...
What am I doing wrong?
This is a bug:
https://issuetracker.google.com/119660771
In the V8 runtime of Apps Script, the JDBC connector appears to be unreliable.
The workaround for this is to use the Rhino runtime.
Please go and star this issue to show Google that it affects you, and subscribe to updates that way too.
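If you want to pin the project to Rhino explicitly, you can set the runtime in the project's appsscript.json manifest (a minimal sketch; the timeZone value is just a placeholder, and the rest of your manifest stays as it is):

{
  "timeZone": "Europe/Paris",
  "runtimeVersion": "DEPRECATED_ES5"
}

"DEPRECATED_ES5" selects Rhino, while "V8" selects the new runtime.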
We are migrating from Magento Community to Magento Cloud for one of our projects, and we need access to the DB for our custom-developed CRM.
Unfortunately, Magento Cloud does not support DB replication: binlogs are enabled, but they do not support creating a replication user or setting up a server ID. The binlog files can, however, be synced to our CRM server periodically.
Now we want to know whether we can use the binlog files to replicate the database, or whether there is any workaround for doing the same.
We have tried an SSH tunnel setup, but query execution time through the tunnel is much higher, which would badly affect our CRM's performance.
We also need to confirm whether there are any other possibilities for accessing the Magento Cloud DB from our CRM without a performance lag.
Thanks in advance for your suggestions.
Yes, it is possible, but it may be a little fiddly in the setup you are describing. You can replay the binlogs as relay logs. Have a look at this article for more details:
https://lefred.be/content/howto-make-mysql-point-in-time-recovery-faster/
Specifically, these parts are relevant (you'll need to edit them appropriately):
[root@mysql1 mysql]# for i in $(ls /tmp/binlogs/*.0*)
do
  ext=$(echo $i | cut -d'.' -f2);
  cp $i mysql1-relay-bin.$ext;
done
[root@mysql1 mysql]# ls ./mysql1-relay-bin.0* > mysql1-relay-bin.index
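Once the copied files and the rebuilt relay-log index are in place, the article has you point the SQL thread at them, roughly like this (a sketch based on the linked article, using a dummy master host that is never contacted; verify the exact statements against the article and your MySQL version):

CHANGE MASTER TO RELAY_LOG_FILE='mysql1-relay-bin.000001', RELAY_LOG_POS=4, MASTER_HOST='dummy';
START SLAVE SQL_THREAD;

The server then applies the events from the relay logs without ever connecting to a real master.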
I am trying to connect from GCP Endpoints to a Cloud SQL (PostgreSQL) database in a different project. My Endpoints backend is an App Engine app in the flexible environment, using Python.
The endpoints API works fine for non-db requests and for db requests when run locally. But the deployed API produces this result when requiring DB access:
{
  "code": 13,
  "message": "BAD_GATEWAY",
  "details": [
    {
      "@type": "type.googleapis.com/google.rpc.DebugInfo",
      "stackEntries": [],
      "detail": "application"
    }
  ]
}
I've followed this link (https://cloud.google.com/endpoints/docs/openapi/get-started-app-engine) to create the endpoints project, and this (https://cloud.google.com/appengine/docs/flexible/python/using-cloud-sql-postgres) to link to Cloud SQL from a different project.
The one difference is that I don't use the SQLALCHEMY_DATABASE_URI env variable to connect, but take the connection string from a config file and use it with psycopg2 SQL strings. This code works on Compute Engine servers in the same project.
I also double-checked that the service account of the Endpoints project was given Cloud SQL Editor access in the project with the PostgreSQL db. And the db connection string works fine if the App Engine app is in the same project as the Cloud SQL db (i.e., not coming from the Endpoints project).
Not sure what else to try. How can I get more details on the BAD_GATEWAY? That's all that's in the Endpoints logfile, and there's nothing in the Cloud SQL logfile.
Many thanks --
Dan
Here's my app.yaml:
runtime: python
env: flex
entrypoint: gunicorn -b :$PORT main:app

runtime_config:
  python_version: 3

env_variables:
  SQLALCHEMY_DATABASE_URI: >-
    postgresql+psycopg2://postgres:password@/postgres?host=/cloudsql/cloudsql-project-id:us-east1:instance-id

beta_settings:
  cloud_sql_instances: cloudsql-project-id:us-east1:instance-id

endpoints_api_service:
  name: api-project-id.appspot.com
  rollout_strategy: managed
And requirements.txt:
Flask==0.12.2
Flask-SQLAlchemy==2.3.2
flask-cors==3.0.3
gunicorn==19.7.1
six==1.11.0
pyyaml==3.12
requests==2.18.4
google-auth==1.4.1
google-auth-oauthlib==0.2.0
psycopg2==2.7.4
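For completeness, a minimal sketch of the kind of psycopg2 connection described above, with placeholder values matching the sanitized app.yaml (the real values come from a config file):

import psycopg2

# Placeholder credentials and instance name, mirroring the sanitized app.yaml.
conn = psycopg2.connect(
    dbname='postgres',
    user='postgres',
    password='password',
    # On App Engine flex with beta_settings.cloud_sql_instances, the Cloud SQL
    # proxy exposes the instance as a unix socket under /cloudsql/.
    host='/cloudsql/cloudsql-project-id:us-east1:instance-id',
)
cur = conn.cursor()
cur.execute('SELECT 1')
print(cur.fetchone())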
(This should be a comment, but the formatting would make it much harder to read, so I will post it here and keep it updated.)
I am trying to reproduce your error and I come up with some questions:
How are you handling the environment variables from the tutorials? Have you hard-coded them, or are you setting them as environment variables? Note that they are reset when Cloud Shell restarts (if you are using Cloud Shell).
This is not clear to me: do you see any log entries in Cloud SQL (just without errors), or do you not see any logs at all?
The Cloud SQL, app.yaml, and requirements.txt configurations are related. Could you provide more information on them? If you update the post, be careful not to post usernames, passwords, or other sensitive information.
Are both projects in the same region/zone? Sometimes this is a requirement, though I don't see anything stating it in the documentation.
My intuition points to a credentials issue, but it would be useful if you added more information to the post so we can better understand where the issue comes from.
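If it does turn out to be a credentials issue, the IAM binding to check would look roughly like this (a sketch with the placeholder project IDs from the post; note that connecting through the Cloud SQL proxy requires at least the Cloud SQL Client role on the service account):

gcloud projects add-iam-policy-binding cloudsql-project-id \
  --member="serviceAccount:api-project-id@appspot.gserviceaccount.com" \
  --role="roles/cloudsql.client"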
I'm running a .NET server on Amazon Elastic Beanstalk, and when I try to instantiate AmazonDynamoDBClient it fails. According to the information I've been following from these two pages:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.iam.roles.apps.html
http://docs.aws.amazon.com/AWSSdkDocsNET/latest/DeveloperGuide/net-dg-roles.html
It should retrieve the credentials from the IAM role assigned to the EC2 instance.
It has the DynamoDBFullAccess template as a policy.
If I supply the credentials explicitly through the web.config file, it works, but only when debugging locally, not when deployed to Elastic Beanstalk.
Here is the code that breaks it:
AmazonDynamoDBClient client = new AmazonDynamoDBClient();
or
AmazonDynamoDBClient client = new AmazonDynamoDBClient(new InstanceProfileAWSCredentials());
Neither works. And I can't seem to retrieve crash logs from Elastic Beanstalk, so it is hard to debug. I'm pretty sure that I'm following the descriptions in those two links to the letter, and I am confused.
I've been trying to solve this for the better part of a day and would really appreciate some help. If there is any information missing, please let me know.
Thank you.
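For reference, supplying credentials through web.config as described above typically looks like this with the legacy AWS SDK for .NET appSettings keys (a sketch; the key names may differ between SDK versions, and the values are placeholders):

<configuration>
  <appSettings>
    <!-- Placeholder values; do not commit real keys. -->
    <add key="AWSAccessKey" value="YOUR_ACCESS_KEY" />
    <add key="AWSSecretKey" value="YOUR_SECRET_KEY" />
  </appSettings>
</configuration>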
If you go to the AWS console and view your EC2 instances, you will see the EC2 instance for your Elastic Beanstalk environment. The name of that instance will be either "Default-Environment" or whatever name you chose for your Elastic Beanstalk environment. Then you can SSH to that EC2 instance and view the logs. For example, my Tomcat logs are stored at /var/log/tomcat7.
You can also scp your credentials file to your EC2 host, just for debugging, and run the app. That should work because you already have it working on your local machine, but this approach is not recommended and may have security concerns.
This should at least get you started. Maybe an EC2 expert can help you solve the real problem :)
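Concretely, the SSH-and-inspect step looks something like this (a sketch; the key pair name and instance address are placeholders you would take from the EC2 console, and the exact log paths depend on your platform):

ssh -i ~/.ssh/my-eb-keypair.pem ec2-user@ec2-xx-xx-xx-xx.compute-1.amazonaws.com
# On Linux instances, Elastic Beanstalk keeps its logs under /var/log, e.g.:
tail -n 100 /var/log/eb-activity.log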
Currently I am running Puppet as the root user. I want to manage Redis using Puppet, and I found a Redis module for Puppet. I want to run redis-server as the "redis" user. So, is it possible in Puppet to start a command/process/script as a different user?
The best way to accomplish this is to make sure your service management framework is starting the service using the correct user.
If you instead want to use Puppet to start the service directly, which is not best practice, then you can use the user parameter of the exec resource type. The documentation for the exec resource type is located at: http://docs.puppetlabs.com/references/latest/type.html#exec
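For example, a minimal sketch using exec's user parameter (the command and paths are assumptions; adjust them for your distribution and Redis module):

exec { 'start-redis-server':
  command => '/usr/bin/redis-server /etc/redis/redis.conf --daemonize yes',
  user    => 'redis',
  unless  => '/usr/bin/pgrep -x redis-server',
}

The unless guard keeps the exec idempotent, so Puppet only starts redis-server when it is not already running.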