I am trying to get FOSElasticaBundle working on AWS Elasticsearch. At the moment I have my development environment all set up and working perfectly, using a Docker container for Elasticsearch built from
FROM docker.elastic.co/elasticsearch/elasticsearch-oss:6.2.4
If I populate Elasticsearch using:
docker-compose exec php php /var/www/symfony/bin/console fos:elastica:populate --env=prod
this all works perfectly and the index has searchable items in it.
However moving this to AWS is throwing up an issue.
I have set up an Elasticsearch service (v6.2) within AWS using their VPC option. I am able to connect to it (I know it connects, as I had connection errors until I used this in the config):
fos_elastica:
    clients:
        default:
            transport: 'AwsAuthV4'
            aws_access_key_id: '%amazon.s3.key%'
            aws_secret_access_key: '%amazon.s3.secret%'
            aws_region: '%amazon.s3.region%'
When I run
php bin/console fos:elastica:populate --env=prod
it looks like it is populating:
3200/6865 [=============>--------------] 46% 4 secs/9 secs
Populating ppc/keywords
Refreshing ppc
But once it completes, my Amazon console shows 0 searchableDocuments, and if I run a query I get nothing back.
Has anyone come across this, and any idea how to solve it? Even being able to get more feedback from populate would help me work out where it is going wrong.
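(One way to get more feedback than populate prints is to query the cluster directly. A minimal sketch: the endpoint is the same placeholder as in the answer below, the index name ppc comes from the populate output, and if your access policy requires signed requests these calls would need SigV4 signing, e.g. via awscurl:)

# List all indices with their document counts (placeholder endpoint)
curl -s 'https://vpc-xxxxxxxxxxxxxxxxxxxxxxxxx.es.amazonaws.com/_cat/indices?v'

# Count documents in just the ppc index
curl -s 'https://vpc-xxxxxxxxxxxxxxxxxxxxxxxxx.es.amazonaws.com/ppc/_count'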
Edit 17:29 31/5
So I created an Elasticsearch install in a Docker container on a standard EC2 instance and pointed at that, and it indexes perfectly, so it is something to do with the connection to the AWS service. One of the differences between them is that the Docker install doesn't have to use:
transport: 'AwsAuthV4'
aws_access_key_id: '%amazon.s3.key%'
aws_secret_access_key: '%amazon.s3.secret%'
aws_region: '%amazon.s3.region%'
I presume then it's something to do with this; I would have thought, though, that if it wasn't authorised I would get an error. Although it's working currently, I would prefer to use the Amazon service, just so it takes an install out of my life to keep an eye on!
I had the same problem, but without using an access key.
The solution was adding the key transport with the value https to the client config:
fos_elastica:
    clients:
        default:
            host: vpc-xxxxxxxxxxxxxxxxxxxxxxxxx.es.amazonaws.com
            port: 443
            transport: https
My problem was empty aws_access_key_id and aws_secret_access_key values.
Please check them.
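(For completeness: if you do need signed requests instead of an open access policy, a config combining both answers might look like the sketch below. This is untested and pieced together from the two snippets above; the host is still a placeholder:)

fos_elastica:
    clients:
        default:
            # VPC endpoint over HTTPS, with SigV4-signed requests
            host: vpc-xxxxxxxxxxxxxxxxxxxxxxxxx.es.amazonaws.com
            port: 443
            transport: 'AwsAuthV4'
            aws_access_key_id: '%amazon.s3.key%'
            aws_secret_access_key: '%amazon.s3.secret%'
            aws_region: '%amazon.s3.region%'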
I'm trying to connect to my WordPress database with Google Apps Script JDBC.
Here is my code:
var connection = Jdbc.getConnection("jdbc:mysql://34.76.255.159:3306/wordpress", "wordpress", "DBpassword_found_in_wp_config.php_file");
I also tried with localhost instead of the IP address, and with localhost:3306 as well.
All three options lead to the same error:
Fail to connect, check the chain, username and password ...
Where am I going wrong?
This is a bug
https://issuetracker.google.com/119660771
In the V8 runtime of Apps Script, the JDBC connector appears to be unreliable.
The workaround for this is to use the Rhino runtime.
Please go and star this issue to show Google that it affects you, and subscribe to updates that way too.
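(For reference, the runtime is selected in the project manifest. A minimal appsscript.json sketch, assuming an otherwise default manifest; "DEPRECATED_ES5" selects the Rhino runtime:)

{
  "timeZone": "Etc/UTC",
  "exceptionLogging": "STACKDRIVER",
  "runtimeVersion": "DEPRECATED_ES5"
}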
I am trying to connect from GCP Endpoints to a Cloud SQL (PostgreSQL) database in a different project. My Endpoints backend is an App Engine app in the flexible environment, using Python.
The Endpoints API works fine for non-DB requests, and for DB requests when run locally. But the deployed API produces this result when DB access is required:
{
  "code": 13,
  "message": "BAD_GATEWAY",
  "details": [
    {
      "@type": "type.googleapis.com/google.rpc.DebugInfo",
      "stackEntries": [],
      "detail": "application"
    }
  ]
}
I've followed this link (https://cloud.google.com/endpoints/docs/openapi/get-started-app-engine) to create the endpoints project, and this (https://cloud.google.com/appengine/docs/flexible/python/using-cloud-sql-postgres) to link to Cloud SQL from a different project.
The one difference is that I don't use the SQLALCHEMY_DATABASE_URI env variable to connect, but take the connection string from a config file and use it with psycopg2 SQL strings. This code works on Compute Engine servers in the same project.
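(For illustration, a minimal sketch of that kind of psycopg2 connection, with placeholder values mirroring the app.yaml below rather than my real config:)

import psycopg2

# Sketch: connect over the Unix socket that App Engine flex exposes
# for the Cloud SQL instance; all values here are placeholders.
conn = psycopg2.connect(
    dbname="postgres",
    user="postgres",
    password="password",
    host="/cloudsql/cloudsql-project-id:us-east1:instance-id",
)
cur = conn.cursor()
cur.execute("SELECT 1")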
I also double-checked that the Endpoints project's service account was granted Cloud SQL Editor access in the project with the PostgreSQL DB. And the DB connection string works fine if the App Engine app is in the same project as the Cloud SQL DB (i.e. not coming from the Endpoints project).
Not sure what else to try. How can I get more details on the BAD_GATEWAY? That's all that's in the endpoints logfile and there's nothing in the Cloud SQL logfile.
Many thanks --
Dan
Here's my app.yaml:
runtime: python
env: flex
entrypoint: gunicorn -b :$PORT main:app
runtime_config:
  python_version: 3
env_variables:
  SQLALCHEMY_DATABASE_URI: >-
    postgresql+psycopg2://postgres:password@/postgres?host=/cloudsql/cloudsql-project-id:us-east1:instance-id
beta_settings:
  cloud_sql_instances: cloudsql-project-id:us-east1:instance-id
endpoints_api_service:
  name: api-project-id.appspot.com
  rollout_strategy: managed
And requirements.txt:
Flask==0.12.2
Flask-SQLAlchemy==2.3.2
flask-cors==3.0.3
gunicorn==19.7.1
six==1.11.0
pyyaml==3.12
requests==2.18.4
google-auth==1.4.1
google-auth-oauthlib==0.2.0
psycopg2==2.7.4
(This should be a comment, but formatting would really worsen the reading, so I will update here.)
I am trying to reproduce your error, and I have come up with some questions:
How are you handling the environment variables from the tutorials? Have you hard-coded the values, or are you using environment variables? They are reset with the Cloud Shell session (if you are using Cloud Shell).
This is not clear to me: do you see some kind of log in Cloud SQL (just without errors), or do you not see any logs at all?
The Cloud SQL, app.yaml and requirements.txt configurations are related. Could you provide more information on this? If you update the post, be careful not to post usernames, passwords or other sensitive information.
Are both projects in the same region/zone? Sometimes this is a requirement, but I don't see anything pointing to this in the documentation.
My intuition points to a credentials issue, but it would be useful if you added more information to the post to better understand where the issue comes from.
I followed the steps defined in this link and restarted the WSO2 Identity Server.
When trying to log in at https://localhost:9443/carbon using the admin/admin credentials, I get a login failed message. Before changing the settings for WSO2, it was working with these credentials.
Obviously, that user is not present in MariaDB. Does anyone have any scripts to insert an admin user into the WSO2 MariaDB? I don't know which tables I would need to update to add the user manually. Or is there any other default user?
Kindly suggest.
I had the same problem; even switching to MySQL Community didn't fix it.
So with MySQL installed, I did the following, after MANY tries:
mysql -u root -p
create DATABASE regdb;
exit;
mysql -u root -p regdb < <WSO2_HOME>/dbscripts/mysql5.7.sql
mysql -u root -p regdb < <WSO2_HOME>/dbscripts/identity/mysql-5.7.sql
nohup <WSO2_HOME>/bin/wso2server.sh -Dsetup &
If there is someplace to see which DB version the scripts expect, I haven't found it, but this got it all working for me. So the DB loading kept failing because of some default time functions in the SQL.
My $0.02
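(For anyone following along: after loading the scripts, the server still needs to point at that database. On WSO2 IS 5.x this is configured in repository/conf/datasources/master-datasources.xml; a hedged fragment with placeholder credentials for the regdb created above — the exact layout varies by version:)

<datasource>
    <name>WSO2_CARBON_DB</name>
    <jndiConfig><name>jdbc/WSO2CarbonDB</name></jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <!-- Placeholder credentials; points at the regdb created above -->
            <url>jdbc:mysql://localhost:3306/regdb?useSSL=false</url>
            <username>root</username>
            <password>password</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
        </configuration>
    </definition>
</datasource>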
I've gotten pretty far into a deployment of my Meteor application on Distelli. Like, almost there. I've done everything as far as setting up the EC2 box, creating a user group (which didn't even seem necessary, as I was able to SSH into the box with full rights without specifying my machine's IP), creating an Elastic IP, a successful build, and deployment to that box. But I can't seem to check whether Meteor is actually running (note: when I SSH in, there are active instances of Mongo and Node, so SOMETHING is running).
The problem has something to do with associating the Elastic IP with my ROOT_URL and domain. I'm just not sure what to do at this step and can't seem to find any directions that are Meteor-specific. I've been using these guides:
https://www.distelli.com/docs/tutorials/how-to-set-up-aws-ec2
https://www.distelli.com/docs/tutorials/deploying-meteor-applications
http://gregblogs.com/tlt-associate-a-namecheap-domain-with-an-amazon-ec2-instance/
Recap: the Distelli deployment is a success, but I get the following error just before finishing:
Error: $ROOT_URL, if specified, must be an URL
I've set my ROOT_URL to my domain and associated it according to the previous guide. I can run traceroute on the IP, but nothing answers on port 3000, so my inclination is that the Meteor build is silently failing.
My manifest: https://gist.github.com/newswim/c642bd9a1cf136da73c3
I've noticed that when I point the CNAME record to my EC2 public DNS, NameCheap (aptly named) adds a '.' to the end of the record. Beyond that, I'm pretty much stumped.
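(A note on the quoted error: Meteor raises "$ROOT_URL, if specified, must be an URL" when the value is not parseable as a URL, most often because the scheme is missing. A sketch of the relevant environment, with placeholder values rather than the real deployment settings:)

# ROOT_URL must include the scheme, not just the bare domain
export ROOT_URL='http://example.com'
export PORT=3000
export MONGO_URL='mongodb://localhost:27017/myapp'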
I'm running a .NET server on Amazon's Elastic Beanstalk, and when I try to instantiate AmazonDynamoDBClient it fails. According to the information I've been following from these two pages:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.iam.roles.apps.html
http://docs.aws.amazon.com/AWSSdkDocsNET/latest/DeveloperGuide/net-dg-roles.html
It should retrieve the credentials from the IAM role assigned to the EC2 instance.
That role has the DynamoDBFullAccess template as a policy.
If I supply the credentials through the web.config file it works, but only when debugging locally, not when deployed to Elastic Beanstalk.
Here is the code that breaks it:
AmazonDynamoDBClient client = new AmazonDynamoDBClient();
or
AmazonDynamoDBClient client = new AmazonDynamoDBClient(new InstanceProfileAWSCredentials());
Neither works. And I can't seem to retrieve crash logs from Elastic Beanstalk, so it is hard to debug. I'm pretty sure that I'm following the descriptions in those two links to the letter, and am confused.
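(One thing worth isolating, as a sketch rather than a known fix: the parameterless constructor also needs a region from configuration (e.g. the AWSRegion appSettings key), so pinning the region explicitly can separate a region problem from a credentials problem. The region below is a placeholder:)

using Amazon;
using Amazon.DynamoDBv2;

class RegionCheck
{
    static void Main()
    {
        // Pin the region explicitly; credentials still come from the
        // EC2 instance profile. If this still fails, the problem is
        // credentials rather than a missing region setting.
        var client = new AmazonDynamoDBClient(RegionEndpoint.USEast1);
        System.Console.WriteLine("Client constructed for: " + client.Config.RegionEndpoint);
    }
}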
I've been trying to solve this for the better part of a day and would really appreciate some help. If there is any information missing, please let me know.
Thank you.
If you go to the AWS console and view your EC2 instances, you will see the EC2 instance for your Elastic Beanstalk environment. The name of that instance will be either "Default-Environment" or whatever name you chose for your Elastic Beanstalk environment. Then you can SSH to that EC2 instance and view the logs. For example, my Tomcat logs are stored at /var/log/tomcat7.
You can also scp your credentials file to your EC2 host, just for debugging, and run the app. It should work, because you already have it working on your local machine. But I guess this approach is not recommended and may have security concerns.
This should at least get you started. Maybe an EC2 expert can help you solve the real problem :)