elasticsearch index deleted - symfony

I'm facing a serious problem with my Elasticsearch server.
I'm using ES 1.7 on a Symfony2 project with FOSElasticaBundle.
The ES index has been deleted twice today, and I can't figure out why.
Here are the logs I can see in my cluster.log:
[cluster.metadata] [server] [index] deleting index
[cluster.metadata] [server] [warning] deleting index
[cluster.metadata] [server] [please_read] creating index, cause [api], templates [], shards [5]/[1], mappings []
[cluster.metadata] [server] [please_read] update_mapping [info] (dynamic)
The thing is that my ES never faced this kind of issue in the past months while the website was in pre-prod.
Do you think this could come from an attack? Or a configuration error?

This is very likely coming from an attack. If you do a <Endpoint>/please_read/_search you will probably see a note like:
{
"_index": "please_read",
"_type": "info",
"_id": "AVmZfnjEAQ_HIp2JODbw",
"_score": 1.0,
"_source": {
"Info": "Your DB is Backed up at our servers, to restore send 0.5 BTC to the Bitcoin Address then send an email with your server ip",
"Bitcoin Address": "12JNfaS2Gzic2vqzGMvDEo38MQSX1kDQrx",
"Email": "elasticsearch#mail2tor.com"
}
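For reference, you can run that search with curl from the server itself (assuming Elasticsearch is still listening on its default port 9200):
$ curl -XGET 'http://localhost:9200/please_read/_search?pretty'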
You should try to make your Elasticsearch cluster installation more secure to avoid such attacks.
There have also been reports of attacks on internet-exposed databases like MongoDB/Elasticsearch, e.g. http://www.zdnet.com/article/first-came-mass-mongodb-ransacking-now-copycat-ransoms-hit-elasticsearch/

I concur with @dejavu013, this is most likely database ransomware. I would advise securing your Elasticsearch with the free and open-source https://github.com/floragunncom/search-guard, or with premium solutions like Elastic's Shield (now part of the Elastic X-Pack) or Compose's Hosted Elasticsearch.

Many Elasticsearch clusters were attacked in the last week:
http://www.zdnet.com/article/first-came-mass-mongodb-ransacking-now-copycat-ransoms-hit-elasticsearch/
This is how you can secure your cluster:
http://code972.com/blog/2017/01/107-dont-be-ransacked-securing-your-elasticsearch-cluster-properly

This was indeed an attack, as @dejavu013 said.
I started to secure my data by allowing only localhost to access my Elasticsearch data.
To do so, I edited my config file elasticsearch.yml and added these two lines:
network.host: 127.0.0.1
http.port: 9200
So only localhost can access the data and make requests.
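As a quick check after restarting Elasticsearch (a sketch; <server_ip> is a placeholder for your server's public address), the API should now answer locally but refuse remote connections:
$ curl http://127.0.0.1:9200/        # from the server itself: returns the cluster info JSON
$ curl http://<server_ip>:9200/      # from any other machine: connection refused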

Related

I'm getting an error "BAD REQUEST" while establishing a connection between Airflow and Snowflake

I'm new to Airflow. While connecting to Snowflake from Airflow I'm getting an error like "BAD REQUEST". I have installed all required dependencies according to the constraints file https://raw.githubusercontent.com/apache/airflow/constraints-2.2.3/constraints-3.8.txt, so can anyone please help me find the exact issue?
(Screenshot attached showing the "bad request" error.)
Until yesterday I was facing the same issue with establishing the connection to Snowflake from Airflow. I might have found a bug in Airflow. The workaround you can try for now is:
Delete the existing connection and create a new connection with your details.
Connection Id:
Connection Type: Snowflake
Host: 12345.eu-west-1.aws.snowflakecomputing.com
Schema: public
Login:
Password:
Extra: {
"extra__snowflake__account": "12345",
"extra__snowflake__database": "Sampledb",
"extra__snowflake__insecure_mode": false,
"extra__snowflake__region": "eu-west-1.aws",
"extra__snowflake__role": "accountadmin",
"extra__snowflake__warehouse": "compute_wh"}
Leave the rest of it blank.
Once you fill in the details, test the connection before you save it. You should not face the BAD REQUEST issue. Unfortunately, if you open the connection again and test it, the test will always return BAD REQUEST.
Don't worry about it, as the connection will still work. Give it a try.
The connection testing functionality in the UI currently doesn't support using the custom form fields (like Database, Warehouse, Role, etc. for the Snowflake connection). This should be fixed with this PR in Airflow 2.3.
For Snowflake specifically, you can add those custom fields in Extra as a workaround.
You can add a connection to Snowflake using CLI:
$ airflow connections add \
--conn-description 'Connection to Snowflake for bla-bla-bla' \
--conn-uri 'snowflake://<user>:<password>@<host>/<schema>?role=<role>&account=<account>&database=<database>&region=<region>&warehouse=<warehouse>' \
foo_snowflake
(The connection type is inferred from the snowflake:// scheme of the URI, so --conn-type isn't needed alongside --conn-uri.)
You can also use the Airflow UI to add a Snowflake connection. In this case the following options should be specified in the "Extra" field:
{"role": "<role>", "account": "<account>", "database": "<database>", "region": "<region>", "warehouse": "<warehouse>"}
The corresponding fields in the form, e.g. "Role", "Account", "Database", "Region" etc., should be left empty.
I used this approach in Airflow version 2.2.2.
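Either way, a quick sanity check that the connection was registered (foo_snowflake is the example id from the CLI command above):
$ airflow connections list | grep foo_snowflake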

Getting 'Name servers refused query' on my domain when setting up Google cloud dns

I'm trying to set up Google Cloud DNS, so I've created a new public managed zone which I'm planning to connect to a Firebase project.
After creating the zone I got the 4 name servers provided, to be used at the registrar:
ns-cloud-d1.googledomains.com
ns-cloud-d2.googledomains.com
ns-cloud-d3.googledomains.com
ns-cloud-d4.googledomains.com
I've updated the NS at the registrar but it doesn't seem to be working.
When I query it with https://dns.google.com/ I get this:
{
"Status": 2,
"TC": false,
"RD": true,
"RA": true,
"AD": false,
"CD": false,
"Question": [
{
"name": "[mydomain]",
"type": 1
}
],
"Comment": "Name servers refused query (lame delegation?) [216.239.38.109, 216.239.32.109, 2001:4860:4802:36::6d, 216.239.36.109, 2001:4860:4802:32::6d, 2001:4860:4802:38::6d, 216.239.34.109, 2001:4860:4802:34::6d]."
}
I can't find anything else to try in the troubleshooting. Everything seems pretty straightforward: take the name servers and update the domain at the registrar. Yet I'm unsuccessful. Any idea what could be wrong?
A lame delegation happens when one (or more) of the name servers, when asked about a domain, has the information while other name servers don't; all the delegated name servers must have the information. This is explained here in a better way, with some advice for troubleshooting; for example, you could use the dig command or other online tools (as suggested in the comments) to retrieve your domain's DNS information. Here is another tool to check DNS propagation.
Take into consideration that DNS changes can take a while to propagate. Wait some hours and try again; if everything is properly configured at the registrar, your domain should come up. Any further changes in Cloud DNS will take some time too; look at this for more details.
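For example (assuming your domain is example.com), you can query one of the assigned Cloud DNS name servers directly and check whether it answers authoritatively:
dig @ns-cloud-d1.googledomains.com example.com SOA +norecurse
A healthy delegation answers with status NOERROR and the aa (authoritative answer) flag set; a REFUSED status means that name server is not actually serving the zone, i.e. a lame delegation.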

What is the configuration to make rabbitmq api and management plugin accessible from the web?

Trying to find a solution for opening RabbitMQ to the internet is not exactly easy. The documentation is sparse and the configuration I can find is in multiple formats. How can I configure RabbitMQ to allow external access?
There are three ways to configure RabbitMQ: the command line and two styles of configuration file. Here is how to configure RabbitMQ using one of the configuration-file styles.
First you'll need to find the rabbitmq.config. It is found in the RabbitMQ installation directory. I found mine here:
C:\Users\[User]\AppData\Roaming\RabbitMQ\rabbitmq.config
The config file follows a very particular syntax, so you'll need to take care. After a configuration change you will need to restart RabbitMQ for the configuration to take effect. If RabbitMQ stops immediately, there is a problem with your configuration.
If RabbitMQ keeps running, that only means the configuration didn't kill the process; it does not mean the configuration was applied correctly. It will certainly be applied if you get the syntax correct, but not every wrong configuration makes the service crash.
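For example, to restart after editing the file (assuming a standard Windows service install with the sbin directory on your PATH, or a systemd-based Linux install):
rabbitmq-service.bat stop
rabbitmq-service.bat start
or
sudo systemctl restart rabbitmq-server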
Following are a few example configurations (notice the trailing "."):
Base of the RabbitMQ configuration style we will be using (should only appear once in the configuration file):
[{rabbit, []}].
Configuration that allows the RabbitMQ management plugin to be accessed from the internet (note that the rabbitmq_management settings are their own top-level entry alongside rabbit, not nested inside it):
[{rabbit, []},
 {rabbitmq_management,
  [
   {listener, [{port, 15672}]},
   {cors_allow_origins, ["*"]}
  ]
 }].
Configuration that allows the RabbitMQ API to be accessed from the internet:
[{rabbit,
[
{tcp_listeners, [5672]}
]
}].
Configuration that allows the guest user to access the management plugin from the internet:
[{rabbit,
  [
   {loopback_users, []}
  ]
 },
 {rabbitmq_management,
  [
   {listener, [{port, 15672}]},
   {cors_allow_origins, ["*"]}
  ]
 }].
Configuration that does all of the above:
[{rabbit,
  [
   {tcp_listeners, [5672]},
   {loopback_users, []}
  ]
 },
 {rabbitmq_management,
  [
   {listener, [{port, 15672}]},
   {cors_allow_origins, ["*"]}
  ]
 }].
Leaving a trailing comma in a list in your configuration will cause RabbitMQ to stop.
If you are still having issues check that you've permitted the ports through any firewalls.
Considerations: this leaves your RabbitMQ server wide open, so you will certainly want to restrict the access. A search of tcp_listeners will reveal other options to restrict access to particular IPs. It is also not a good idea to expose the default guest user to the internet in any production environment.
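For example (a sketch; the address is a placeholder), a search of tcp_listeners turns up the option to bind the AMQP listener to a single interface instead of all of them:
[{rabbit,
  [
   {tcp_listeners, [{"192.168.1.10", 5672}]}
  ]
 }].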

Firebase: Cannot Verify Ownership of Namecheap Domain

I'm having trouble verifying a Namecheap domain with Firebase Hosting.
I tried to follow this post's instructions without success:
Unable to Verify Custom Domain with Firebase Using Namecheap
and this one:
Adding custom hosting domain: "Unexpected TXT records found. Continuing to watch for changes."
So my definitions are like this:
Type: TXT Record
Host: @
Value: globalsign-domain-verification=...
TTL: Automatic
Type: TXT Record
Host: @
Value: firebase=mydomain.info
TTL: Automatic
Type: CNAME Record
Host: www
Value: myprojectname.firebaseapp.com.
TTL: Automatic
Type: A Record
Host: @
Value: First IP address retrieved with MXToolbox
TTL: Automatic
Type: A Record
Host: @
Value: Second IP address retrieved with MXToolbox
TTL: Automatic
When I execute "dig -t txt +noall +answer mydomain.info", it returns:
mydomain.info. 1798 IN TXT "globalsign-domain-verification=..."
mydomain.info. 1798 IN TXT "firebase=myprojectname"
(It has the extra dot at the end of the domain)
But still in firebase dashboard I have this message:
Verifying ownership
Unexpected TXT records found. Continuing to watch for changes.
and later:
Verification failed
Couldn't find the correct TXT records in your DNS records
I'm trying to solve this problem for several days now.
After not having my domain verified for over 24 hours and trying numerous things, I eventually deleted the CNAME record pointing to [projectname].firebaseapp.com. A few minutes later, it verified with no problem.
These are the steps that they want you to do:
Only have the TXT Record for the verification.
After verification, add two A Records.
This is what I did:
Add TXT record and two A records.
Get verified.
Hope this helps. I submitted a ticket to Firebase to propose that they modify the instructions to make it clear that you can't have a CNAME pointing to your project URL during verification.
I strongly recommend that you contact Firebase support. I had a similar issue going on for days, and it was resolved only after they manually intervened behind the scenes. Good luck...
I was facing the same problem. Instead of the domain, I used @ in the Host field and it worked. I am on a free plan.
Always ensure that you are using a paid version of Firebase; domains do not get verified if you are using a free account. I had to downgrade one of my old projects in order to verify my new project; apparently, Firebase allows only a certain number of projects to be upgraded to the Blaze plan. I hope this helps.

Connect drupal site to moodle site

I am using Drupal 7.27, from which I need to connect to a Moodle site and its database. So I used the Drupal module moodle_connector to connect it with the Moodle site. As it does not offer any end-user functionality, I installed another module called moodle_views, but unfortunately no data is received from Moodle. When I debugged, I found that the connection is never established between the two sites.
I am calling the moodle_connector_connect() function in a custom module to connect to Moodle, but with no success. In the Moodle connector settings I put the following information:
Database Type : mysql
Database Server : localhost
Database TCP Port : 3306
Database Name : drupal_moodle ('Name of the moodle database')
Database Prefix : mdl_
Database User : root
Database Password : (I don't have a password for my database user so I kept it blank)
Moodle URL : drupal_moodle (Moodle site url)
Please help me get out of this.
Regards
Neha
Reading over the bug reports for the Drupal module moodle_connector, I noticed some issues related to setting values for the Moodle database connection variables, and some issues with handling error conditions.
Combined with your mention of a blank password, this suggests the following code might be the problem.
Reading moodle_connector.module, around line 51 I noticed some lazy checking for unset parameters:
// Return false if settings are incomplete.
if (!$type || !$server || !$port || !$username || !$password || !$database) {
return FALSE;
}
It looks like the check for !$password will cause the function moodle_connector_connect() to return FALSE and never connect to the Moodle database if any of the values are unset or empty; an empty password string is falsy in PHP, so your blank password trips this check.
As a workaround, and a step in the right direction security-wise, create a new MySQL user, grant it only the privileges necessary for Drupal to read the Moodle DB, and set a password.
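For example (a sketch; drupal_ro and the password are placeholders, and drupal_moodle is the database name from your settings):
CREATE USER 'drupal_ro'@'localhost' IDENTIFIED BY 'choose-a-strong-password';
GRANT SELECT ON drupal_moodle.* TO 'drupal_ro'@'localhost';
FLUSH PRIVILEGES;
Then use drupal_ro and its (non-blank) password in the Moodle connector settings instead of root.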
I would also strongly advise that you read over the MySQL 'post-installation' section of the manual, which advises setting a password on the root user accounts. Having no root password is convenient during initial installation, but it is a security problem: any ordinary user on the machine, or a nearby machine that can connect to port 3306, could gain full access to the database.
http://dev.mysql.com/doc/refman/5.1/en/postinstallation.html
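For reference, on that MySQL version setting the root password looks like this (the password itself is a placeholder):
SET PASSWORD FOR 'root'@'localhost' = PASSWORD('choose-a-strong-password');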
