Getting 'Name servers refused query' on my domain when setting up Google Cloud DNS - Firebase

I'm trying to set up Google Cloud DNS, so I've created a new public managed zone which I plan to connect to a Firebase project.
After creating the zone I got the 4 NS records to be used at the registrar:
ns-cloud-d1.googledomains.com
ns-cloud-d2.googledomains.com
ns-cloud-d3.googledomains.com
ns-cloud-d4.googledomains.com
I've updated the NS at the registrar but it doesn't seem to be working.
When I query it with https://dns.google.com/ I get this:
{
  "Status": 2,
  "TC": false,
  "RD": true,
  "RA": true,
  "AD": false,
  "CD": false,
  "Question": [
    {
      "name": "[mydomain]",
      "type": 1
    }
  ],
  "Comment": "Name servers refused query (lame delegation?) [216.239.38.109, 216.239.32.109, 2001:4860:4802:36::6d, 216.239.36.109, 2001:4860:4802:32::6d, 2001:4860:4802:38::6d, 216.239.34.109, 2001:4860:4802:34::6d]."
}
I can't find anything else to try in the troubleshooting docs. Everything seems pretty straightforward: take the NS records and update the domain at the registrar. Still, I'm unsuccessful. Any idea what could be wrong?

A lame delegation happens when a nameserver that is listed in the delegation for a domain does not actually serve (or refuses to serve) the zone's data; every nameserver in the delegation must be able to answer authoritatively. This is explained here in a better way, along with some advice for troubleshooting; for example, you could use the dig command or other online tools (as suggested in the comments) to retrieve your domain's DNS information. Here is another tool to check DNS propagation.
Take into consideration that DNS changes can take a while to propagate. Wait a few hours and try again; if everything is properly configured at the registrar, your domain should be up. Any further changes in Cloud DNS will take some time too; look at this for more details.
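For example, assuming the zone was created on the d1-d4 nameserver set shown in the question (replace example.com with your domain), you can check both the registrar-side delegation and a direct query against Cloud DNS:

```shell
# Which nameservers is the registrar publishing for the domain?
dig +short NS example.com

# Query one of the Cloud DNS nameservers directly; an authoritative
# answer here (instead of status REFUSED) means the zone exists on
# that server and the delegation matches
dig @ns-cloud-d1.googledomains.com example.com A +norecurse
```

If the direct query comes back REFUSED, the zone on the Cloud DNS side does not match the delegation at the registrar, for example because the registrar points at the d1-d4 set while the zone was actually created on a different lettered set.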

Related

How to add HealthChecks for AzureKeyVault health status

I was trying to add health checks for Azure Key Vault to my project and added the following NuGet package for that:
<PackageReference Include="AspNetCore.HealthChecks.AzureKeyVault" Version="6.0.2" />
And in code, added the following:
var url = "https://123456.com";
builder.Services
.AddHealthChecks()
.AddAzureKeyVault(new Uri(url), keyVaultCredential,
options => { }, "AKV", HealthStatus.Unhealthy,
tags: new string[] { "azure", "keyvault", "key-vault", "azure-keyvault" });
But the issue is that it reports Healthy for each and every URL, as long as the URL is well formed.
Even if random values are passed in keyVaultCredential, it still shows a Healthy status.
Does anyone know how to use this health check correctly?
I had the same problem. I found that you need to register at least one Key Vault secret in the options to make the check actually hit the vault, e.g.:
options => { options.AddSecret("SQLServerConnection--connectionString"); }
Please check whether there are any restrictions on querying the health status of Azure resources, or on the use of this library, within your company VPN network.
Try the same thing on a different network to check whether the cause is a network or VPN issue.
Try debugging tools to capture the traffic so you can verify and inspect the response.
References:
AzureKeyVault health check always returns "healthy"
(github.com)
AspNetCore.Diagnostics.HealthChecks
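As a quick way to see what the middleware actually reports, you can hit the health endpoint directly. This is only a sketch: it assumes the app maps its checks to /health (for example via app.MapHealthChecks("/health")) and listens on localhost:5000, neither of which is shown in the question.

```shell
# Print just the HTTP status code returned by the health endpoint
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:5000/health
```

With the default middleware a Healthy result returns 200; since the check above is registered with HealthStatus.Unhealthy as its failure status, an unreachable vault should produce a 503 once at least one secret is registered in the options.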

Using Azure Quickstart Templates on Azure Government

I'm attempting to use the 3 VM SharePoint Azure QuickStart Template on Azure Government.
Everything works fine except that the deployment errors out because Azure Government expects the storageAccountUri to be blob.core.usgovcloudapi.net, while the default is blob.core.windows.net.
I've changed the JSON files to use blob.core.usgovcloudapi.net, but it still complains that the blob URL's domain must be blob.core.usgovcloudapi.net.
I'm wondering why it is being overridden and how I can prevent that.
An example of the change I've made is:
"osDisk": {
  "name": "osdisk",
  "vhd": {
    "uri": "[concat('http://',parameters('storageAccountNamePrefix'),'1.blob.core.usgovcloudapi.net/vhds/',parameters('sqlVMName'),'-osdisk.vhd')]"
  },
  "caching": "ReadWrite",
  "createOption": "FromImage"
}
Any help would be appreciated.
You should be able to reference the storage account and that will ensure you always get the correct address (regardless of cloud):
"osDisk": {
  "name": "osdisk",
  "vhd": {
    "uri": "[concat(reference(concat('Microsoft.Storage/storageAccounts/', variables('storageAccountName')), '2015-06-15').primaryEndpoints.blob, variables('vmStorageAccountContainerName'), '/', variables('OSDiskName'), '.vhd')]"
  }
}
We have some other tips for using a QuickStart that might be hard coded to a single cloud here:
https://blogs.msdn.microsoft.com/azuregov/2016/12/02/azure-quickstart-templates/
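If you deploy with the Azure CLI, you can also point the CLI at the Government cloud first so that service endpoints resolve to usgovcloudapi.net automatically. A sketch, where the resource group name, region, and template file names are placeholders:

```shell
# Target the Azure Government cloud instead of the default public cloud
az cloud set --name AzureUSGovernment
az login

# Deploy the (edited) QuickStart template into a Government region
az group create --name sharepoint-rg --location usgovvirginia
az group deployment create \
  --resource-group sharepoint-rg \
  --template-file azuredeploy.json \
  --parameters @azuredeploy.parameters.json
```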

HTTP requests not working on aws ec2

I am building an app in node.js and I’m using AWS EC2 to host it. However, my HTTP requests are not working.
My app is split into two repositories: app-ui and app-server. app-server contains all of my server side code/API’s. In app-ui, I am making simple POST requests such as:
$.ajax({
  type: "POST",
  url: "http://ec2-xx-xxx-xx/api/users",
  success: function(data) {
    console.log(data);
  },
  error: function(a) {
    console.log(a);
  }
});
However, I keep getting the net::ERR_CONNECTION_TIMED_OUT error.
Does anyone know what might be happening?
Add an inbound rule to the security group attached to your server for the specific port you're using.
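If you prefer the CLI over the console, the same inbound rule can be added like this; the security group ID below is a placeholder, and port 80 assumes the plain HTTP URL from the question:

```shell
# Allow inbound TCP on the port the Node app listens on
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 80 \
  --cidr 0.0.0.0/0
```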
I was having the same issue; in my case it was because the Amazon servers were down that day. But take a look at your own server to see whether it is actually running. In my case:
/etc/init.d/apache2 status
Response:
Active: active (running) since Wed 2017-03-01 02:21:53 UTC; 2h 3min ago
Docs: man:systemd-sysv-generator(8)
Apparently S3 was one of the services that went down, along with the routing system. If your server is located on the AWS US East side you will see this issue; it affected several apps such as HockeyApp and Trello.
Take a look at the current status: status.aws.amazon.com
This of course assumes that your security groups and Elastic/static IPs are set up and configured, and that you see the issue on your whole site and not just on your API.
I was struggling with the same situation and managed to fix it. Go to AWS -> log in -> EC2 -> select "Security Groups" in the left sidebar, then select the security group of your instance in the table and click the "Actions" button at the top of the table; that shows the inbound rules menu.
There, click the "Add rule" button. Set the type to "Custom TCP" and enter port 8080 or whatever port you need, then save it.
Now go ahead with Postman and it will work. Enjoy your work!

elasticsearch index deleted

I'm facing a serious problem with my elasticsearch server.
I'm using ES 1.7 on a symfony2 project with fosElasticaBundle.
The ES index has been deleted two times today, and I can't figure out why.
Here are the log I can read in my cluster.log:
[cluster.metadata] [server] [index] deleting index
[cluster.metadata] [server] [warning] deleting index
[cluster.metadata] [server] [please_read] creating index, cause [api], templates [], shards [5]/[1], mappings []
[cluster.metadata] [server] [please_read] update_mapping [info] (dynamic)
The thing is that my ES never faced this kind of issue in the past months while the website was in pre-prod.
Do you think this could come from an attack? Or a configuration error?
This is very likely coming from an attack. If you do a <Endpoint>/please_read/_search you will probably see a note like:
{
  "_index": "please_read",
  "_type": "info",
  "_id": "AVmZfnjEAQ_HIp2JODbw",
  "_score": 1.0,
  "_source": {
    "Info": "Your DB is Backed up at our servers, to restore send 0.5 BTC to the Bitcoin Address then send an email with your server ip",
    "Bitcoin Address": "12JNfaS2Gzic2vqzGMvDEo38MQSX1kDQrx",
    "Email": "elasticsearch@mail2tor.com"
  }
}
You should try to make your elasticsearch cluster installation more secure to avoid such downfalls.
There have also been reports of attacks on internet-exposed databases like Mongo/Elasticsearch, e.g. http://www.zdnet.com/article/first-came-mass-mongodb-ransacking-now-copycat-ransoms-hit-elasticsearch/
I concur with @dejavu013: this is most likely database ransomware. I would advise securing your Elasticsearch with the free and open-source https://github.com/floragunncom/search-guard, or with premium solutions like Elastic's Shield, now part of the Elastic X-Pack, or Compose's Hosted Elasticsearch.
Many Elasticsearch clusters were attacked in the last week:
http://www.zdnet.com/article/first-came-mass-mongodb-ransacking-now-copycat-ransoms-hit-elasticsearch/
This is how you can secure it:
http://code972.com/blog/2017/01/107-dont-be-ransacked-securing-your-elasticsearch-cluster-properly
This was indeed an attack, as @dejavu013 said.
I secured my data by allowing only localhost to access Elasticsearch.
To do so, I edited my config file elasticsearch.yml and added these two lines:
network.host: 127.0.0.1
http.port: 9200
So only localhost can access the data and make requests.
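After restarting Elasticsearch with that configuration, you can confirm the binding took effect; 203.0.113.10 below is a placeholder for your server's public IP:

```shell
# On the server itself: the loopback address should still answer
# with the cluster banner
curl -s http://127.0.0.1:9200/

# From another machine: the public address should now be refused
# or time out
curl -s --max-time 5 http://203.0.113.10:9200/ || echo "unreachable (as intended)"
```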

Firebase: Cannot Verify Ownership of Namecheap Domain

I'm having trouble verifying a Namecheap domain with Firebase Hosting.
I tried to follow this post's instructions without success:
Unable to Verify Custom Domain with Firebase Using Namecheap
and this
Adding custom hosting domain: "Unexpected TXT records found. Continuing to watch for changes."
So my records are defined like this:
Type: TXT Record
Host: @
Value: globalsign-domain-verification=...
TTL: Automatic
Type: TXT Record
Host: @
Value: firebase=mydomain.info
TTL: Automatic
Type: CNAME Record
Host: www
Value: myprojectname.firebaseapp.com.
TTL: Automatic
Type: A Record
Host: @
Value: First IP Address retrieved with MXToolbox
TTL: Automatic
Type: A Record
Host: @
Value: Second IP Address retrieved with MXToolbox
TTL: Automatic
When I execute "dig -t txt +noall +answer mydomain.info", it returns:
mydomain.info. 1798 IN TXT "globalsign-domain-verification=..."
mydomain.info. 1798 IN TXT "firebase=myprojectname"
(It has the extra dot at the end of the domain)
But still in firebase dashboard I have this message:
Verifying ownership
Unexpected TXT records found. Continuing to watch for changes.
and later:
Verification failed
Couldn't find the correct TXT records in your DNS records
I'm trying to solve this problem for several days now.
After my domain had not verified for over 24 hours and I had tried numerous things, I eventually deleted the CNAME record pointing to [projectname].firebaseapp.com. A few minutes later, it verified with no problem.
These are the steps that they want you to do:
Only have the TXT Record for the verification.
After verification, add two A Records.
This is what I did:
Add TXT record and two A records.
Get verified
Hope this helps. I submitted a ticket to Firebase proposing that they modify the instructions to make it clear that you can't have the CNAME pointing at your project URL during verification.
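While waiting for verification, it also helps to check what resolvers actually see for each record type; mydomain.info is the placeholder used in the question:

```shell
# The TXT record Firebase looks for during verification
dig +short TXT mydomain.info

# The A records you add after verification succeeds
dig +short A mydomain.info

# The www CNAME (per the answer above, absent until verification is done)
dig +short CNAME www.mydomain.info
```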
Strongly recommend that you contact Firebase support. I had a similar issue going on for days and only after they manually intervened behind the scenes it was resolved. Good luck...
I was facing the same problem. Instead of the domain, I used @ in the Host field and it worked. I am on a free plan.
Always ensure that you are using a paid version of Firebase; domains did not get verified for me on a free account. I had to downgrade one of my old projects in order to verify my new project. Apparently, Firebase allows only a certain number of projects to be upgraded to the Blaze plan. I hope this helps.
