When I try to use LinkedIn to log in to my site on Google App Engine, I get a 999 error. I think it must be blocked there, because the login works fine on my local machine.
Some other sites on App Engine seem to have the same problem. My only conclusion is that App Engine's IP range has been banned by LinkedIn, either on purpose or by accident. I suspect it is by accident, given how many sites this must affect.
I do not think this is specific to Google App Engine; it looks more like a rather strict blocking policy at LinkedIn. Check out this post: 999 Error Code on HEAD request to LinkedIn.
It seems that LinkedIn also blocks requests based on the user agent.
See also HTTP Error 999: Request denied and How to avoid "HTTP/1.1 999 Request denied" response from LinkedIn?
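One way to see the user-agent effect for yourself is to send the same request with and without a browser-like User-Agent and compare the status codes. A minimal sketch, assuming Node 18+ with the built-in fetch; the User-Agent string is just an example, and LinkedIn may well return 999 either way:

```typescript
// Probe LinkedIn with and without a browser-like User-Agent header
// and print the status codes (e.g. 999 vs. 200).
async function probe(userAgent?: string): Promise<number> {
  const res = await fetch("https://www.linkedin.com/", {
    method: "HEAD",
    headers: userAgent ? { "User-Agent": userAgent } : {},
  });
  return res.status;
}

async function main() {
  console.log("no UA:  ", await probe());
  console.log("with UA:", await probe(
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0 Safari/537.36",
  ));
}

main().catch(console.error);
```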
I'm facing problems with my client because I can't get the share statistics for his organization.
Since December 2021 the API has been returning Internal Server Error for many requests to the organizationalEntityShareStatistics endpoint.
API analytics
The request works for other organizations.
I have tried reaching LinkedIn through their support pages, but they told me to post it here on Stack Overflow. In this private support request I show the app tokens used: https://linkedin.zendesk.com/hc/en-us/requests/26241?page=1 or here: https://www.linkedin.com/help/linkedin/cases/39424961
LinkedIn answered that I should filter out some posts according to their "lifecycleState" field, which in some cases makes the request fail; only posts whose state is PUBLISHED or PUBLISHED_EDITED should be included.
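For anyone hitting the same error, a minimal sketch of that filtering step, assuming the shares have already been fetched and expose a lifecycleState field (the Share type here is a simplified placeholder):

```typescript
// Only request statistics for shares whose lifecycleState is
// PUBLISHED or PUBLISHED_EDITED, per LinkedIn support's advice.
interface Share {
  id: string;             // e.g. "urn:li:share:..."
  lifecycleState: string; // "PUBLISHED", "PUBLISHED_EDITED", "DRAFT", ...
}

const REPORTABLE_STATES = new Set(["PUBLISHED", "PUBLISHED_EDITED"]);

function filterReportableShares(shares: Share[]): Share[] {
  return shares.filter((share) => REPORTABLE_STATES.has(share.lifecycleState));
}
```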
I've got the following set up on my Firebase web app (it's a Single Page App built with React):
I'm doing SSR for robot user agents, so they get fully rendered HTML and no JavaScript.
Regular users get the empty HTML shell and the JavaScript that runs the app.
firebase.json
"rewrites": [{
"source": "/**",
"function": "ssrApp"
}]
Basically every request should go to my ssrApp function, which detects crawler user agents and decides whether to respond with the SSR version for robots or the JS version for regular users.
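Roughly, the function looks like this (a simplified sketch; the bot regex and the two render helpers below are placeholders, not my real implementation):

```typescript
import * as functions from "firebase-functions";

// Placeholder bot detection; the real list can be longer.
const BOT_UA = /googlebot|bingbot|linkedinbot|whatsapp|facebookexternalhit|twitterbot/i;

// Placeholder render helpers standing in for the real SSR and SPA responses.
async function renderForBots(path: string): Promise<string> {
  return `<html><body><!-- fully rendered HTML for ${path} --></body></html>`;
}
function serveSpaShell(): string {
  return `<html><body><div id="root"></div><script src="/bundle.js"></script></body></html>`;
}

export const ssrApp = functions.https.onRequest(async (req, res) => {
  const userAgent = String(req.headers["user-agent"] ?? "");
  console.log("User-Agent:", userAgent); // the logging mentioned below

  if (BOT_UA.test(userAgent)) {
    res.status(200).send(await renderForBots(req.path)); // SSR HTML for crawlers
  } else {
    res.status(200).send(serveSpaShell()); // empty shell + JS for regular users
  }
});
```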
It is working as intended. Google is indexing my pages, and I always log some info about the user agents from my ssrApp function. For example, when I share a URL on WhatsApp, I can see the WhatsApp crawler in my logs in the Firebase Console.
But the weird thing is that I'm not able to mimic Googlebot using Chrome's Network Conditions tab:
When I try to access my site using Googlebot's user agent, I get a 500 - Internal error,
and my ssrApp function isn't even triggered, since NOTHING is logged from it.
Is this a Firebase Hosting built-in protection to avoid fake Googlebots? What could be happening?
NOTE: I'm trying to mimic Googlebot's user agent because I want to inspect the SSR version of my app in production. I know that there are other ways to do that (including some Google Search Console tools), but I thought that this would work.
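For reference, the same check can also be made outside the browser, e.g. with a small Node 18+ script that spoofs Googlebot's user agent (the URL below is a placeholder):

```typescript
// Request a page with Googlebot's User-Agent to inspect what the server
// returns to crawlers.
const GOOGLEBOT_UA =
  "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)";

async function main() {
  const res = await fetch("https://your-app.web.app/some-page", {
    headers: { "User-Agent": GOOGLEBOT_UA },
  });
  console.log("status:", res.status); // 500 reproduces the problem described above
  console.log((await res.text()).slice(0, 300)); // first bytes of the body
}

main().catch(console.error);
```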
Could you check that your pages are still in the Google index? I have the exact same experience and 80% of my pages are now gone...
When I look up a page in Google Search Console https://search.google.com/search-console it indicates there was an issue during the last crawl. When I "Test it Live" it spins and reports the error 500 as well and asks to "try again later"...
I had a related issue, so I hope this could help anyone who encounters what I did:
I got the 500 - Internal error response for paths that were routed to a Cloud Run container when serving a Googlebot user agent. As it happens, if the Firebase Hosting CDN had a cached response for the path, it would successfully serve the cached response; but if it didn't, the request would never reach the Cloud Run container and would fail at the Firebase Hosting CDN with 500 - Internal error.
It turns out that the Firebase Hosting CDN secretly probes and respects any robots.txt that is served by the Cloud Run container itself (not the Hosting site). My Cloud Run container served a robots.txt which disallowed all access to bots.
Seemingly, when the Hosting CDN attempts to serve a request from a bot from a path that is routed to a Cloud Run container, it will first probe any /robots.txt that is accessible at the root of that container itself, and then refuse to send the request to the container if disallowed by the rules therein.
Removing/adjusting the robots.txt file on the Cloud Run container itself immediately resolved the issue for me.
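For illustration, if the container runs Express, serving a permissive robots.txt from the container itself is enough; a sketch (the rules are an example, adjust them to whatever the container should actually allow):

```typescript
import express from "express";

// Let the Cloud Run container answer /robots.txt itself with rules that do
// not disallow crawlers, so the Hosting CDN's probe passes.
const app = express();

app.get("/robots.txt", (_req, res) => {
  res.type("text/plain").send("User-agent: *\nAllow: /\n");
});

// ...the container's actual routes go here...

const port = Number(process.env.PORT) || 8080; // Cloud Run injects PORT
app.listen(port, () => console.log(`listening on ${port}`));
```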
I'm using Google Calendar API in my application.
The problem I faced is that Google doesn't send me push notifications.
I set up my app here https://console.developers.google.com/
Verified domain: https://console.developers.google.com/apis/credentials/domainverification
Watched calendar: https://developers.google.com/calendar/v3/reference/calendarList/watch and got successful response.
However, having done all of this, no push notifications are received by my webhook. It seems that Google just doesn't send them. Maybe I missed some step? I use an HTTPS URL.
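For reference, a minimal sketch of the watch call involved (Node 18+ fetch; the channel id, access token, and webhook address below are placeholders, not the real values):

```typescript
// calendarList.watch: register a channel that should receive push notifications.
async function main() {
  const res = await fetch(
    "https://www.googleapis.com/calendar/v3/users/me/calendarList/watch",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.GOOGLE_ACCESS_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        id: "my-channel-id-123",                      // unique channel id (placeholder)
        type: "web_hook",
        address: "https://example.com/notifications", // HTTPS webhook URL (placeholder)
      }),
    },
  );
  console.log(res.status, await res.json()); // 200 with channel info = watch created
}

main().catch(console.error);
```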
The problem was that the URL I used for push notifications wasn't whitelisted, so when it was requested from another network (e.g. Google's), the request couldn't be processed.
Therefore, if you hit a problem like this, check that your URL is available from outside your own network. It should be accessible from anywhere, by anyone.
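A minimal sketch of such an endpoint, assuming Express (the path is a placeholder); the point is simply that this URL has to answer requests coming from the public internet, not only from inside your own network:

```typescript
import express from "express";

// A webhook that acknowledges Google's push notifications. It must be
// reachable over public HTTPS, with no VPN or IP whitelist in front of it.
const app = express();

app.post("/notifications", (req, res) => {
  console.log("channel:", req.header("X-Goog-Channel-ID"));
  console.log("state:  ", req.header("X-Goog-Resource-State")); // "sync", "exists", ...
  res.sendStatus(200); // acknowledge quickly; do any heavy work asynchronously
});

app.listen(Number(process.env.PORT) || 8080); // TLS assumed to be terminated upstream
```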
I inherited a program that was written with the old LinkedIn API, and I'm trying to migrate it to the new API. When I try to get the r_basicprofile permission, my OAuth token works. However, when I try r_network or rw_nus, I get the response
invalid scope -- your application has not been authorized for
r_network.
Yet, when I go to www.linkedin.com/developer/apps/xxxx/auth, the boxes for r_network and rw_nus are checked.
I.e., a request to
https://www.linkedin.com/uas/oauth2/authorization?response_type=code&client_id=xxxxxx&scope=r_basicprofile&state=yyyy&redirect_uri=http%3A%2F%2Fkalatublog.com%2Fwp-content%2Fmu-plugins%2Fimb-en%2Fhelpers%2Fsocial-connect%2Fapi%2Ffinalize.php%3Fapi%3Dlinkedin%26ch%zzzzz
works, but a request to
https://www.linkedin.com/uas/oauth2/authorization?response_type=code&client_id=xxxxxx&scope=r_network&state=yyyy&redirect_uri=http%3A%2F%2Fkalatublog.com%2Fwp-content%2Fmu-plugins%2Fimb-en%2Fhelpers%2Fsocial-connect%2Fapi%2Ffinalize.php%3Fapi%3Dlinkedin%26ch%zzzzz
gives that error. What am I doing wrong?
As of May 15,
After the grace period expires, several REST API endpoints will no longer be available for general use. The following endpoints are the only ones that will remain available for use:
Profile API — /v1/people/~
Share API — /v1/people/~/shares
Companies API — /v1/companies/{id}
If your application is currently using any other API services (e.g. Connections, Groups, People Search, Invitation, Job Search, etc.) you will have to apply to become a member of a relevant Partner Program that provides the necessary API access to continue to leverage any of the endpoints that are not listed above.
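For completeness, a sketch of a call against one of the endpoints that remained open, the Profile API (Node 18+ fetch; the access token is a placeholder, and the Share and Companies endpoints follow the same pattern):

```typescript
// Call the v1 Profile API, one of the endpoints that stayed generally available.
async function main() {
  const res = await fetch("https://api.linkedin.com/v1/people/~?format=json", {
    headers: { Authorization: `Bearer ${process.env.LINKEDIN_ACCESS_TOKEN}` },
  });
  console.log(res.status, await res.json());
}

main().catch(console.error);
```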
It looks like LinkedIn no longer wants to share anything through their API. Creating a new app shows that the only available scopes are r_basicprofile, r_emailaddress, rw_company_admin, and w_share.
TL;DR: they have locked down the API and restricted usage to an extremely limited set of endpoints.
I did some more digging. The LinkedIn website is misleading. On my app's LinkedIn page it says that I'm approved for rw_nus and r_network, but on this page
https://developer.linkedin.com/support/developer-program-transition
it says those are no longer available.
So the app home page on LinkedIn incorrectly said I had those permissions.
Here's the link if you want to apply to LinkedIn's Partner Program:
https://help.linkedin.com/app/ask/path/api-dvr
Sorry if this is the incorrect area.
I develop websites using Yootheme and Rockettheme. They have an area in the backend where you simply enter the tracking code from Analytics.
However, lately I'm getting emails stating the following:
Message summary
Webmaster Tools sent you the following important messages about sites in your account. To keep your site healthy, we recommend regularly reviewing these messages and addressing any critical issues.
http://www.anigmabeauty.co.nz/: Googlebot can't access your site
Over the last 24 hours, Googlebot encountered 1 errors while attempting to connect to your site. Your site's overall connection failure rate is 50.0%.
You can see more details about these errors in Webmaster Tools.
I've deleted and re-added the websites; that works for a while, then the same thing happens again. Any ideas on how to fix this?
When I open http://www.anigmabeauty.co.nz/ I get a 303 See Other redirect. That may be causing the issue, since a 303 redirect is not great for SEO.
Try replacing the 303 redirect with a 302 redirect.
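For illustration only (the site itself presumably runs a CMS, where the redirect would be configured in the server or template rather than in code like this), forcing an explicit 302 in an Express handler looks like:

```typescript
import express from "express";

// Send an explicit 302 Found instead of a 303 See Other when redirecting.
const app = express();

app.get("/", (_req, res) => {
  res.redirect(302, "http://www.anigmabeauty.co.nz/home"); // target URL is a placeholder
});

app.listen(8080);
```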