New to Firebase Hosting. I deployed a few folders containing images to Firebase Hosting.
Today I am facing the issue below. I have no clue about it. Any help on this is highly appreciated.
ERROR:
=== Deploying to 'admob-app-id-4307XXXXXX'...
i deploying database, hosting
+ database: rules ready to deploy.
i hosting: preparing public directory for upload...
**Error: Server Error. read ECONNRESET**
C:\Users\NAME\Documents\Projects\Website\firebase>firebase deploy
**Error: HTTP Error: 500, An unknown error occurred. Please contact support.**
Snippet from debug log:
[debug] ----------------------------------------------------------------------
[debug] Command: C:\Program Files\nodejs\node.exe C:\Users\NAME\AppData\Roaming\npm\node_modules\firebase-tools\bin\firebase deploy
[debug] CLI Version: 3.4.0
[debug] Platform: win32
[debug] Node Version: v6.10.0
[debug] Time: Mon Mar 13 2017 14:07:38 GMT+0530 (India Standard Time)
[debug] ----------------------------------------------------------------------
[debug]
[debug] > command requires scopes: ["email","openid","https://www.googleapis.com/auth/cloudplatformprojects.readonly","https://www.googleapis.com/auth/firebase"]
[debug] >>> HTTP REQUEST GET https://admin.firebase.com/v1/projects/admob-app-id-XXXXXXXXXX
Mon Mar 13 2017 14:07:38 GMT+0530 (India Standard Time)
[debug] <<< HTTP RESPONSE 200
[debug] >>> HTTP REQUEST GET https://admin.firebase.com/v1/database/admob-app-id-XXXXXXXXXX/tokens
Mon Mar 13 2017 14:07:40 GMT+0530 (India Standard Time)
[debug] <<< HTTP RESPONSE 500
[debug] <<< HTTP RESPONSE BODY
[error]
[error] Error: HTTP Error: 500, An unknown error occurred. Please contact support.
[debug] Error Context: {
"body": {
"error": {
"code": "UNKNOWN_ERROR",
"message": "An unknown error occurred. Please contact support."
}
},
"response": {
"statusCode": 500,
"body": {
"error": {
"code": "UNKNOWN_ERROR",
"message": "An unknown error occurred. Please contact support."
}
},
"headers": {
"server": "nginx",
"date": "Mon, 13 Mar 2017 08:37:34 GMT",
"content-type": "application/json; charset=utf-8",
"content-length": "97",
"connection": "close",
"x-content-type-options": "nosniff"
},
"request": {
"uri": {
"protocol": "https:",
"slashes": true,
"auth": null,
"host": "admin.firebase.com",
"port": 443,
"hostname": "admin.firebase.com",
"hash": null,
"search": null,
"query": null,
"pathname": "/v1/database/admob-app-id-XXXXXXXXXX/tokens",
"path": "/v1/database/admob-app-id-XXXXXXXXXX/tokens",
"href": "https://admin.firebase.com/v1/database/admob-app-id-XXXXXXXXXX/tokens"
},
"method": "GET"
}
}
}
Many thanks!
I just encountered this issue while deploying to Firebase; searching for the solution is what led me to this post. After making sure that my internet connection was good, I simply deployed again and it succeeded.
Make sure your internet is fine, then redeploy!
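For example, you can rerun just the hosting deploy with debug logging enabled (both flags are standard firebase-tools options):
firebase deploy --only hosting --debug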
Good luck!
What's the total size of all those folders?
Error: Server Error. read ECONNRESET
looks like a network issue, or maybe you are getting a reset packet after tripping an undocumented limit in Firebase.
Firebase probably uploads a tar or zip of the files. Why don't you try deploying half the folders and images first, and if that works, add more on the next attempt?
PS: I doubt Firebase does a diff and only uploads what's NOT already there, so you probably can't incrementally add files to get around the limit. All the above method shows you is how much you can get away with before there's a reset.
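If a size limit is the culprit, one way to experiment without moving folders around is the hosting.ignore setting in firebase.json, which excludes matching paths from the upload. A minimal sketch; the images/large-album/** pattern is just a placeholder for whatever you want to leave out of a given deploy:

{
  "hosting": {
    "public": "public",
    "ignore": [
      "firebase.json",
      "**/.*",
      "**/node_modules/**",
      "images/large-album/**"
    ]
  }
}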
Try going through a VPN, it worked for me.
I have a CouchDB server, which at this moment is for development and has just one node running in Docker.
I would like to authenticate through JWT. I have built my token.
GET https://comp010:6984/_session
Accept: application/json
Content-Type: application/json; charset=utf-8
Authorization: Bearer <JWT token>
I get a proper answer (or at least I think so):
{
"ok": true,
"userCtx": {
"name": "uaru",
"roles": "admin"
},
"info": {
"authentication_handlers": [
"jwt",
"cookie",
"default"
],
"authenticated": "jwt"
}
}
When I send a request to get an actual object from the database
GET https://comp010:6984/db_userspaces/xxxx3
Accept: application/json
Content-Type: application/json; charset=utf-8
I get "unauthorized" exception. This is ok, I did not authenticated this request. So I add the same authorization header:
GET https://comp010:6984/db_userspaces/xxxx3
Accept: application/json
Content-Type: application/json; charset=utf-8
Authorization: Bearer <JWT token>
And I get
{
"error": "internal_server_error",
"reason": "No DB shards could be opened.",
"ref": 179462285
}
But if I switch off the authorization ([chttpd] require_valid_user = false) and send the same request without the Authorization header,
GET https://comp010:6984/db_userspaces/xxxx3
Accept: application/json
Content-Type: application/json; charset=utf-8
I get a proper response.
Server: CouchDB/3.2.1 (Erlang OTP/23)
X-Couch-Request-ID: 02c628ce15
X-CouchDB-Body-Time: 0
{
"_id": "xxxx3",
"_rev": "1-a11f390ffa77a03c557ffbbc7c5fda75",
"x": "1"
}
How can JWT relate to shards? I am puzzled and I cannot find anything related.
There are no errors with Fauxton.
Thank you in advance for any suggestions.
Here is the log from when the request took place:
couchdb-server_1 | [error] 2022-03-09T04:52:34.662593Z nonode#nohost <0.6234.1> 82a6b79f38 rexi_server: from: nonode#nohost(<0.6134.1>) mfa: fabric_rpc:open_shard/2 error:function_clause [{lists,usort,[<<"admin">>],[{file,"lists.erl"},{line,1063}]},{couch_db,check_security,3,[{file,"src/couch_db.erl"},{line,713}]},{couch_db,is_authorized,2,[{file,"src/couch_db.erl"},{line,705}]},{couch_db,is_member,1,[{file,"src/couch_db.erl"},{line,685}]},{couch_db,check_is_member,1,[{file,"src/couch_db.erl"},{line,671}]},{couch_db,open,2,[{file,"src/couch_db.erl"},{line,166}]},{mem3_util,get_or_create_db,2,[{file,"src/mem3_util.erl"},{line,549}]},{fabric_rpc,open_shard,2,[{file,"src/fabric_rpc.erl"},{line,307}]}]
couchdb-server_1 | [error] 2022-03-09T04:52:34.662982Z nonode#nohost <0.6236.1> 82a6b79f38 rexi_server: from: nonode#nohost(<0.6134.1>) mfa: fabric_rpc:open_shard/2 error:function_clause [{lists,usort,[<<"admin">>],[{file,"lists.erl"},{line,1063}]},{couch_db,check_security,3,[{file,"src/couch_db.erl"},{line,713}]},{couch_db,is_authorized,2,[{file,"src/couch_db.erl"},{line,705}]},{couch_db,is_member,1,[{file,"src/couch_db.erl"},{line,685}]},{couch_db,check_is_member,1,[{file,"src/couch_db.erl"},{line,671}]},{couch_db,open,2,[{file,"src/couch_db.erl"},{line,166}]},{mem3_util,get_or_create_db,2,[{file,"src/mem3_util.erl"},{line,549}]},{fabric_rpc,open_shard,2,[{file,"src/fabric_rpc.erl"},{line,307}]}]
couchdb-server_1 | [error] 2022-03-09T04:52:34.663440Z nonode#nohost <0.6134.1> 82a6b79f38 req_err(179462285) internal_server_error : No DB shards could be opened.
couchdb-server_1 | [<<"fabric_util:get_shard/4 L118">>,<<"fabric_util:get_shard/4 L132">>,<<"fabric:get_security/2 L183">>,<<"chttpd_auth_request:db_authorization_check/1 L112">>,<<"chttpd_auth_request:authorize_request/1 L19">>,<<"chttpd:handle_req_after_auth/2 L325">>,<<"chttpd:process_request/1 L310">>,<<"chttpd:handle_request_int/1 L249">>]
couchdb-server_1 | [notice] 2022-03-09T04:52:34.663753Z nonode#nohost <0.6134.1> 82a6b79f38 comp010:6984 ::ffff:150.26.121.46 uaru GET /db_userspaces/xxxx3 500 ok 2
In the payload to be turned into a JWT, roles MUST be an array.
{
:sub => username,
:'_couchdb.roles' => roles,
:exp => ...,
}
In my case, it was not. Arguably that should produce a 400 Bad Request instead, though.
The whole problem has nothing to do with the shards configuration, etc. The error message was simply misleading.
Thanks to the people in the CouchDB Slack channel for guiding me in the right direction.
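For reference, a minimal token-generation sketch using PyJWT, assuming CouchDB has an HMAC key registered under [jwt_keys]; the key id (_default), the secret, and the user name are placeholders:

# pip install pyjwt
import time
import jwt

SECRET = "my-shared-secret"           # must match the key configured in [jwt_keys]

payload = {
    "sub": "uaru",                    # CouchDB user name
    "_couchdb.roles": ["admin"],      # must be a list, not the string "admin"
    "exp": int(time.time()) + 3600,   # expires in one hour
}

# "kid" selects which configured key CouchDB should verify against.
token = jwt.encode(payload, SECRET, algorithm="HS256", headers={"kid": "_default"})
print(token)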
We have tried to get the error logs in JSON format, as we can for access logs, but we get an error when we try.
Can we format the error logs coming from the NGINX server as JSON, the way we can for access logs?
As described here
http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format
a JSON log format is available in nginx for access logs:
log_format logger-json escape=json '{
"source": "nginx",
"time": $msec,
"resp_body_size": $body_bytes_sent,
"host": "$http_host",
"address": "$remote_addr",
"request_length": $request_length,
"method": "$request_method",
"uri": "$request_uri",
"status": $status,
"user_agent": "$http_user_agent",
"resp_time": $request_time,
"upstream_addr": "$upstream_addr"
}';
At the moment it is not possible to format the error logs as JSON.
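A common workaround is to keep the default error_log format and convert it to JSON downstream, for example before shipping the logs to a collector. Below is a minimal Python sketch, assuming the default "YYYY/MM/DD HH:MM:SS [level] pid#tid: *cid message" layout:

import json
import re
import sys

# Matches the default nginx error_log line layout; the connection id (*cid) is optional.
LINE = re.compile(
    r"^(?P<time>\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}) "
    r"\[(?P<level>\w+)\] "
    r"(?P<pid>\d+)#(?P<tid>\d+): "
    r"(?:\*(?P<connection>\d+) )?"
    r"(?P<message>.*)$"
)

for raw in sys.stdin:
    line = raw.rstrip("\n")
    m = LINE.match(line)
    if m:
        print(json.dumps({"source": "nginx", **m.groupdict()}))
    else:
        # Continuation lines (e.g. stack traces) are wrapped as plain messages.
        print(json.dumps({"source": "nginx", "message": line}))

Usage: python nginx_error_to_json.py < /var/log/nginx/error.log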
I have been following an article on Medium to deploy Cloud Endpoints v1 in front of a Cloud Run service hosting a REST API, and everything works well.
I now have a requirement to enable CORS support. I've added the configuration below to my Endpoints YAML file, but I get an error saying "This service does not allow CORS traffic" when my browser makes a pre-flight request (I've tested this with Postman too, with the same error). I know there's a flag to enable CORS, --cors_preset=basic, set via environment variables, but I'm not sure which key to set it with. Any ideas or help are appreciated.
Endpoints YAML snippet:
swagger: '2.0'
info:
  title: Cloud Endpoints with Cloud Run
  description: Testing Cloud Endpoints with Cloud Run
  version: 1.0.0
host: endpoint-<hash>-uc.a.run.app
x-google-endpoints:
  - name: endpoint-<hash>-uc.a.run.app
    allowCors: true
schemes:
  - https
produces:
  - application/json
Error:
{
"code": 7,
"message": "The service does not allow CORS traffic.",
"details": [
{
"#type": "type.googleapis.com/google.rpc.DebugInfo",
"stackEntries": [],
"detail": "service_control"
}
]
}
PS: Thanks Guillaume Blaquiere for the awesome article.
UPDATE:
I ended up testing with an incomplete URL and hence received the above error, because my backend service wasn't configured to respond to all pre-flight request URLs. Having fixed this, I now get the error below, but only on the URL configured for the CORS pre-flight.
{
"code": 13,
"message": "INTERNAL_SERVER_ERROR",
"details": [
{
"#type": "type.googleapis.com/google.rpc.DebugInfo",
"stackEntries": [
],
"detail": "application"
}
]
}
and logs:
invalid URL prefix in "", client: <CLIENT_IP>, server: , request: "OPTIONS /api/v1/<REMAINING_URL> HTTP/1.1", host: "endpoint-<HASH>-uc.a.run.app"
I would say it's necessary to add the ESPv2 config. I've noticed that the note regarding the ESPv2 config was only added last April, while the Medium article was published in 2019, so I think this required step was simply not mentioned before.
Later in the same section it's mentioned that the CORS flags are passed via the "--set-env-vars" flag of the deploy command.
You can find more about the ESPv2 Beta startup options here.
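If I remember the Endpoints-on-Cloud-Run docs correctly, those startup flags go into an ESPv2_ARGS environment variable on the ESPv2 service when you deploy it; roughly like the following, where the service name and image are placeholders:

gcloud run deploy endpoints-espv2 \
  --image="gcr.io/PROJECT_ID/endpoints-runtime-serverless:CONFIG_ID" \
  --set-env-vars=ESPv2_ARGS=--cors_preset=basic \
  --allow-unauthenticated \
  --platform=managed \
  --region=us-central1

If you need to pass several flags, note that --set-env-vars treats commas as separators, so gcloud's custom-delimiter syntax (e.g. ^++^) is needed.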
I managed to resolve the issue by defining an OPTIONS operation with no security in my YAML file, for each path that I had already defined. See the example YAML file below for an endpoint path '/api/v1/hello' with GET and OPTIONS operations defined.
swagger: '2.0'
info:
  title: Cloud Endpoints with Cloud Run
  description: Testing Cloud Endpoints with Cloud Run
  version: 1.0.0
host: endpoint-randomhash-uc.a.run.app
x-google-endpoints:
  - name: endpoint-randomhash-uc.a.run.app
    allowCors: true
schemes:
  - https
produces:
  - application/json
x-google-backend:
  address: https://backend-randomhash-uc.a.run.app
  path_translation: APPEND_PATH_TO_ADDRESS
security:
  - auth0_jwk: []
paths:
  /api/v1/hello:
    get:
      summary: Say hello
      operationId: helloName
      parameters:
        - name: "name"
          in: "query"
          description: "Your name"
          type: "string"
      responses:
        '200':
          description: Successful operation
          schema:
            type: string
    options:
      summary: CORS pre-flight for say hello
      operationId: helloNameOptions
      parameters:
        - name: "name"
          in: "query"
          description: "Your name"
          type: "string"
      responses:
        '200':
          description: Successful operation
          schema:
            type: string
      security: []
securityDefinitions:
  auth0_jwk:
    authorizationUrl: ""
    flow: "implicit"
    type: "oauth2"
    x-google-issuer: "https://project.auth0.com/"
    x-google-jwks_uri: "https://project.auth0.com/.well-known/jwks.json"
    x-google-audiences: "firebase-application-host"
As Sergio pointed out in his comment to an SO question, the other option in my case is to use a Firebase Hosting proxy, so that the frontend and the API share the same domain and CORS is avoided entirely.
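For completeness, the Hosting proxy route would be a rewrite in firebase.json pointing at the Cloud Run service, roughly like this (service ID and region are placeholders), so the browser never leaves the Hosting origin:

{
  "hosting": {
    "public": "public",
    "rewrites": [
      {
        "source": "/api/**",
        "run": {
          "serviceId": "backend-service",
          "region": "us-central1"
        }
      }
    ]
  }
}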
Following my previous question ... I changed the Google Calendar API calls to batch requests, but some requests in the batch return a 403 rateLimitExceeded "Rate Limit Exceeded" error. There is always a maximum of 50 requests of the same type in a batch, most often PATCH, but the errors occur even if the batch contains fewer than 10 requests. On average, only about 50% of all requests succeed.
Example request (the part with one request):
--googlebatch_20200513_171515_647
Content-Type: application/http
Content-ID: <item:0>
PATCH /calendar/v3/calendars/XXX/YYY
Content-Type: application/json
Content-Length: 449
{
"summary": "Opakovaná aktivita 2",
"description": "",
"id": "XXX",
"start": { "dateTime": "2020-05-07T09:00:00+02:00" },
"end": { "dateTime": "2020-05-07T09:30:00+02:00" },
"location": "",
"visibility": "default",
"reminders": {"useDefault": false},
"transparency": "opaque",
"extendedProperties": {
"private": {
"X-QIID": "29037717,10",
"X-QISyncOn": "1"
}
}
}
...
next requests
...
--googlebatch_20200513_171515_647--
Example response:
--batch_preSx1sqdvk_AAP51GVWcoo
Content-Type: application/http
Content-ID: <response-item:0>
HTTP/1.1 403 Forbidden
Content-Type: application/json; charset=UTF-8
Date: Mon, 13 May 2020 15:15:16 GMT
Expires: Mon, 13 May 2020 15:15:16 GMT
Cache-Control: private, max-age=0
Content-Length: 199
{
"error": {
"errors": [
{
"domain": "usageLimits",
"reason": "rateLimitExceeded",
"message": "Rate Limit Exceeded"
}
],
"code": 403,
"message": "Rate Limit Exceeded"
}
}
--batch_preSx1sqdvk_AAP51GVWcoo--
I don't understand why the Google API returns this error: when I send a batch request, I just send the batch, and the execution of the individual requests is controlled by Google itself, so I have no way to influence how fast they are launched. If batch execution is too fast, why doesn't Google slow it down and complete all requests successfully?
What else could I do? Where could the mistake be? What do you recommend?
Thank you. Regards, Petr.
There are known issues with the Calendar API throwing 403 errors sooner than expected when using a service account.
The workaround is to use impersonation, that is, to make the service account act on behalf of a user. This avoids the 403 errors.
See here for code samples.
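For illustration, a minimal Python sketch of the impersonation approach, assuming the service account has domain-wide delegation with the Calendar scope granted; the key file name and the user email are placeholders:

from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/calendar"]

creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES
)
# Act on behalf of a real user instead of the bare service account.
delegated = creds.with_subject("user@example.com")

service = build("calendar", "v3", credentials=delegated)
events = service.events().list(calendarId="primary", maxResults=5).execute()
print([e.get("summary") for e in events.get("items", [])])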
gRPC services (developed in Spring Boot) are deployed as a Docker container on AWS Linux (EC2). I started the Docker image with port forwarding: -p 6565:6565.
When hit directly via BloomRPC on my laptop, it worked: ec2.IP:6565 Package.Service.Method
I configured the service & route in Kong:
{
"host": "ec2.IP",
"created_at": 1588403433,
"connect_timeout": 60000,
"id": "e657d8df-6247-458a-a8e8-bec00c41e03c",
"protocol": "grpc",
"name": "aws-grpc1",
"read_timeout": 60000,
"port": 6565,
"path": null,
"updated_at": 1588403433,
"retries": 5,
"write_timeout": 60000,
"tags": null,
"client_certificate": null
}
Route:
{
"strip_path": false,
"path_handling": "v0",
"updated_at": 1588403452,
"destinations": null,
"headers": null,
"protocols": [
"grpc",
"grpcs"
],
"created_at": 1588403452,
"snis": null,
"service": {
"id": "e657d8df-6247-458a-a8e8-bec00c41e03c"
},
"name": "aws-grpc1-route1",
"methods": null,
"preserve_host": false,
"regex_priority": 0,
"paths": [
"/grpc2"
],
"sources": null,
"id": "5739297e-3be7-4a0d-8afb-cfa8ed01cec2",
"https_redirect_status_code": 426,
"hosts": null,
"tags": null
}
Now, hitting it via grpcurl, it's not working:
grpcurl -v -d "{}" -insecure ec2.ip:8443 package.service.pingMethod
Error invoking method "package.service.ping": target server does not expose service "package.service"
Here is the Kong config which looks related:
"proxy_listen": [
"0.0.0.0:8000 reuseport backlog=16384",
"0.0.0.0:8443 **http2** ssl reuseport backlog=16384"
],
So here are my questions:
(1) Can port 8000 also be configured for insecure HTTP/2 (without TLS), by passing the KONG_PROXY_LISTEN environment variable when starting the Kong container, e.g.
-e "KONG_PROXY_LISTEN=0.0.0.0:8000 http2, 0.0.0.0:8443 http2 ssl"
Is this a good approach?
(2) How do I enable server-side reflection? Or what is the use of /grpc.reflection.v1alpha.ServerReflection/ServerReflectionInfo?
You need to expose an HTTP/2 proxy listener for Kong.
You can refer to this one: https://konghq.com/blog/manage-grpc-services-kong/
In short, you need to add the KONG_PROXY_LISTEN environment variable like so:
-e "KONG_PROXY_LISTEN=0.0.0.0:8000 http2, 0.0.0.0:8443 http2 ssl, 0.0.0.0:9080 http2, 0.0.0.0:9081 http2 ssl"
Note: apparently Kong uses port 9080 for HTTP/2 and 9081 for HTTP/2 SSL, but I think this can be changed.
Also expose those 9080 and 9081 ports; for example, for the docker run command:
-p 127.0.0.1:9080:9080 \
-p 127.0.0.1:9081:9081
And use port 9080 in grpcurl when you make a request, like so:
grpcurl -v -d '{"name": "Ken"}' -plaintext localhost:9080 facade.GreetingService/SayHello
More updates:
gRPC deployed behind the Kong Ingress Controller is working fine:
grpcurl -v -d "{\"greeting\":\"111\"}" -insecure acfb0xxxxx.elb.us-east-2.amazonaws.com:443 hello.HelloService.SayHello
Response:
Resolved method descriptor:
rpc SayHello ( .hello.HelloRequest ) returns ( .hello.HelloResponse );
Request metadata to send:
(empty)
Response headers received:
content-type: application/grpc
date: Sat, 02 May 2020 07:00:17 GMT
server: openresty
trailer: Grpc-Status
trailer: Grpc-Message
trailer: Grpc-Status-Details-Bin
via: kong/2.0.3
x-kong-proxy-latency: 1
x-kong-upstream-latency: 9
Response contents:
{
"reply": "hello 111"
}
Response trailers received:
(empty)
Sent 1 request and received 1 response
When configured on the Kong API gateway, it is not working:
grpcurl -v -d "{\"greeting\":\"111\"}" -insecure kong.ce-gateway.ip:8443 hello.HelloService.SayHello
Error invoking method "hello.HelloService.SayHello": failed to query for service descriptor "hello.HelloService": rpc error: code = Internal desc = An invalid response was received from the upstream server
HTTP/2 is now enabled by default for Kong, but if you are having issues, a good place to start is to inspect the proxy_listeners section of the global config. In my case, I found that http2 was only enabled for the SSL port and not for the non-SSL one. A good way to see the global config is to send a GET request to the root URL of the Admin API, for example GET http://localhost:8001/.
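For example, a small sketch that dumps the proxy listeners from the Admin API response; the exact key layout can vary between Kong versions, so treat the path below as an assumption:

import requests

info = requests.get("http://localhost:8001/").json()
# Each entry describes one proxy listener, including whether http2 and ssl are enabled.
for listener in info.get("configuration", {}).get("proxy_listeners", []):
    print(listener)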