I am trying to add my website to Google Search Console, but verification fails and it returns:
The connection to your server timed out
The file is there and I can open it in a normal browser, the robots meta tag is set to index,all, and robots.txt is in place with User-agent: * Disallow: allowing everything to be crawled.
But it seems I can't get Search Console to fetch the verification file. I have tried HTML file verification, HTML tag verification, Google Analytics verification, and Google Tag Manager verification, but all of them return the same connection timed out error.
Is there anything else I have to do to verify this?
Thank you
Do you have it on two lines, like so?
User-agent: *
Disallow:
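If it is, a quick way to double-check what crawlers actually receive is to fetch the file from outside your own network; www.example.com below is just a placeholder for your domain:
# Show the response headers and the robots.txt body a crawler would see
curl -I https://www.example.com/robots.txt
curl https://www.example.com/robots.txt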
Related:
When I use https://search.developer.apple.com/appsearch-validation-tool/ to test a universal link,
it says "failed. File is blocked by robots. Please check your url and try again."
But my robots.txt is:
User-agent: * Disallow:
I'm troubleshooting an issue that I think may be related to request filtering. Specifically, it seems every connection to a site made with a blank user agent string is shown a 403 error. I can generate other 403 errors on the server by doing things like trying to browse a directory with no default document while directory browsing is turned off. I can also generate a 403 error by using a tool like the Modify Headers extension for Google Chrome to set my user agent string to the Baidu spider string, which I know has been blocked.
What I can't seem to do is generate a request with a BLANK user agent string to try that. The extensions I've looked at require something in that field. Is there a tool or method I can use to make a GET or POST request to a website with a blank user agent string?
I recommend trying a CLI tool like cURL or a UI tool like Postman. You can carefully craft each header, parameter, and value that you place in your HTTP request and fully trace the end-to-end request-response result.
This example, straight from the cURL docs on user agents, shows how you can set the user agent from the CLI:
curl --user-agent "Mozilla/4.73 [en] (X11; U; Linux 2.2.15 i686)" [URL]
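For the blank user-agent case you asked about, my understanding is that curl lets you send the header with an empty value by terminating it with a semicolon, or strip the header entirely by ending it with a bare colon (the [URL] placeholder is the same as in the example above):
# Send a User-Agent header with an empty value
curl -H "User-Agent;" [URL]
# Send the request with no User-Agent header at all
curl -H "User-Agent:" [URL]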
In Postman it's just as easy: tinker with the headers and params as needed. You can also click the "Code" link on the right-hand side and view the request as HTTP when you want to see exactly what will be sent.
You can also use a heap of other HTTP tools such as Paw and Insomnia, all of which are well suited to the task at hand.
One last tip: in the Chrome developer tools you can right-click a specific request in the Network tab and copy it as cURL, then paste the command and modify it as needed. In Postman you can import a request by pasting the raw text, and Postman will interpret the cURL command for you, which is particularly handy.
I have been trying to enable HTTPS login on Alfresco, but it seems to be a challenge to get it working.
I can access my website via HTTPS and get the login page, but when I log in with the correct credentials I get the following error:
Something's wrong with this page...
We may have hit an error or something might have been removed or deleted, so check that the URL is correct.
Alternatively you might not have permission to view the page (it could be on a private site) or there could have been an internal error. Try checking with your IT team.
If you're trying to get to your home page and it's no longer available you should change it by clicking your name on the toolbar.
I have to log in over HTTP and then refresh my HTTPS page to be connected over HTTPS.
I have already read what the official documentation says and tested it, but it didn't work.
Does anyone have an idea how to fix the problem?
Thanks
The alfresco.log / catalina.out should tell you more.
Where and how did you set up HTTPS? Do you have a reverse proxy like nginx or Apache in front of the Alfresco Tomcat?
If the log says something like "CSRF Token Filter issue", then you need to set share.host / share.port / share.protocol in alfresco-global.properties to the values as seen from the browser.
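If that turns out to be the issue, the relevant entries look something like this sketch; the hostname and port are placeholders for whatever the browser actually uses in front of your proxy:
# alfresco-global.properties -- external values as seen from the browser
share.protocol=https
share.host=share.example.com
share.port=443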
This is a tricky one to explain. I believe Googlebot is getting confused because of the way the IIS sites are set up. The actual issue is that when searching Google and the result is www.someSiteURL.com, the description underneath is:
A description for this result is not available because of this site's robots.txt – learn more.
I think the reason the issue exists is fairly clear. Using the example above, there is no page content at www.someSiteURL.com/default.asp. At that level there is a default.asp file with a whole bunch of redirects that take the user to the correct physical directory where the sites live. The sites all live under one root 'Site' in IIS like so:
siteOneDir
siteTwoDir
siteThreeDir
default.asp (this is the page with the redirects)
How do you overcome this without changing the site setup or the use of IP addresses?
Here is the robots.txt file:
User-agent: *
Allow: /default.asp
Allow: /siteOneDir/
Allow: /siteTwoDir/
Allow: /siteThreeDir/
Disallow: /
BTW, Google Webmaster Tools says this is valid. I know some clients may not recognize 'Allow', but Google and Bing do, so I don't care about that. I would rather disallow everything and then allow only specific sites, instead of only using this to disallow specific sites.
If I use the Google Webmaster Tools Crawl > Fetch as Google feature and type in www.someSiteURL.com/default.asp, it does have a status of 'Redirected' and its status is HTTP/1.1 302 Found.
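The same redirect is easy to reproduce outside of Webmaster Tools; assuming the server is publicly reachable, a HEAD request along these lines should show the 302 and its Location header:
# Fetch only the response headers to see the redirect status and target
curl -I http://www.someSiteURL.com/default.asp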
I believe the order of the items in robots.txt matters. Try putting the Disallow rule first, i.e. change it to:
User-agent: *
Disallow: /
Allow: /default.asp
Allow: /siteOneDir/
Allow: /siteTwoDir/
Allow: /siteThreeDir/
I have configured two providers and used the FOS-Oauth-Bridge. Facebook works just fine, but when I try connecting to LinkedIn, the browser shows this message:
Content Encoding Error
The page you are trying to view cannot be shown because it uses an
invalid or unsupported form of compression.
Please contact the website owners to inform them of this problem.
I tried decoding the generated URL and that seems fine too:
https://www.linkedin.com/uas/oauth2/authorization?response_type=code&client_id=077-dd7c-4815-aea0-8c31e0ce7&scope=r_fullprofile&state=eecd50f81e6ad3e7e23cc11ec50d4768&redirect_uri=http%3A%2F%2Fsf2test.dv%2Fapp_dev.php%2Fsec_area%2Flogin%2Fcheck-linkedin
I tried changing the redirect URL to 127.0.0.1 and changing the port to 8X, but nothing works.
I do not have SSL installed on my Windows/Apache setup.
When I manually change the LinkedIn URL to HTTP (instead of HTTPS), I get this message:
Request denied
Sorry, we are unable to serve your request at this time due to unusual traffic from your network connection.
Reason codes:
3,2,19
Can someone help me figure out the problem?