I am running a Flask app behind nginx/uWSGI and I am facing CORS issues when uploading files. The upload limit in nginx is set to 30M (the same in uWSGI) and I'm only uploading about 2M of files, and I have allowed all CORS origins. I've tried everything, but to no avail; the request succeeds when I run it directly from an interactive Python session.
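(By "allowed all CORS origins" I mean an app-wide flask-cors setup roughly along the lines of the sketch below; the real initialization isn't shown here and may differ.)
# minimal sketch of an allow-all CORS setup with flask-cors;
# illustrative only, the actual app may configure it differently
from flask import Flask
from flask_cors import CORS

app = Flask(__name__)
CORS(app)  # flask-cors allows all origins by default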
I have an endpoint /result
@app.route('/result', methods=['GET', 'POST'])
@token_required
def result(user: User):
    if request.method == "GET":
        d = request.args
        # do stuff
        return jsonify({'success': False, 'msg': 'Unable to fulfill request'}), 201
    else:
        # do stuff
        return jsonify({'success': False, 'msg': 'Missing Fields'}), 201
Here are the uWSGI logs:
[pid: 8615|app: 0|req: 1/1] xxx.xx.xxx.xxx () {52 vars in 820 bytes} [Fri Apr 22 18:33:08 2022] OPTIONS /jwt => generated 0 bytes in 4 msecs (HTTP/2.0 200) 8 headers in 340 bytes (1 switches on core 0)
[pid: 8615|app: 0|req: 2/2] xxx.xx.xxx.xxx () {52 vars in 840 bytes} [Fri Apr 22 18:33:08 2022] OPTIONS /notifications => generated 0 bytes in 0 msecs (HTTP/2.0 200) 8 headers in 340 bytes (1 switches on core 0)
[pid: 8614|app: 0|req: 1/3] xxx.xx.xxx.xxx () {52 vars in 826 bytes} [Fri Apr 22 18:33:08 2022] OPTIONS /result => generated 0 bytes in 4 msecs (HTTP/2.0 200) 8 headers in 346 bytes (1 switches on core 0)
[pid: 8615|app: 0|req: 3/4] xxx.xx.xxx.xxx () {52 vars in 820 bytes} [Fri Apr 22 18:33:08 2022] OPTIONS /jwt => generated 0 bytes in 1 msecs (HTTP/2.0 200) 8 headers in 340 bytes (1 switches on core 0)
[pid: 8615|app: 0|req: 4/5] xxx.xx.xxx.xxx () {52 vars in 840 bytes} [Fri Apr 22 18:33:08 2022] OPTIONS /notifications => generated 0 bytes in 0 msecs (HTTP/2.0 200) 8 headers in 340 bytes (1 switches on core 0)
[pid: 8617|app: 0|req: 1/6] xxx.xx.xxx.xxx () {52 vars in 945 bytes} [Fri Apr 22 18:33:08 2022] GET /jwt => generated 22 bytes in 14 msecs (HTTP/2.0 201) 5 headers in 190 bytes (1 switches on core 0)
[pid: 8615|app: 0|req: 5/7] xxx.xx.xxx.xxx () {52 vars in 951 bytes} [Fri Apr 22 18:33:08 2022] GET /result => generated 274 bytes in 14 msecs (HTTP/2.0 201) 5 headers in 191 bytes (1 switches on core 0)
[pid: 8613|app: 0|req: 1/8] xxx.xx.xxx.xxx () {52 vars in 965 bytes} [Fri Apr 22 18:33:08 2022] GET /notifications => generated 19973 bytes in 25 msecs (HTTP/2.0 201) 5 headers in 193 bytes (2 switches on core 0)
[pid: 8614|app: 0|req: 2/9] xxx.xx.xxx.xxx () {52 vars in 945 bytes} [Fri Apr 22 18:33:08 2022] GET /jwt => generated 22 bytes in 7 msecs (HTTP/2.0 201) 5 headers in 190 bytes (1 switches on core 0)
[pid: 8614|app: 0|req: 3/10] xxx.xx.xxx.xxx () {52 vars in 965 bytes} [Fri Apr 22 18:33:08 2022] GET /notifications => generated 19973 bytes in 10 msecs (HTTP/2.0 201) 5 headers in 193 bytes (1 switches on core 0)
[pid: 8614|app: 0|req: 4/11] xxx.xx.xxx.xxx () {52 vars in 827 bytes} [Fri Apr 22 18:33:18 2022] OPTIONS /result => generated 0 bytes in 1 msecs (HTTP/2.0 200) 8 headers in 346 bytes (0 switches on core 0)
Chrome OPTIONS response
access-control-allow-headers: authorization
access-control-allow-methods: DELETE, GET, HEAD, OPTIONS, PATCH, POST, PUT
access-control-allow-origin: https://example.com
access-control-expose-headers: Content-Disposition
allow: OPTIONS, HEAD, POST, GET
content-length: 0
content-type: text/html; charset=utf-8
date: Fri, 22 Apr 2022 18:33:18 GMT
server: nginx/1.20.0
vary: Origin
Chrome Console error
Access to XMLHttpRequest at 'https://api.example.com/result' from origin 'https://example.com' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
The error is quite funny, because the OPTIONS response clearly has the 'Access-Control-Allow-Origin' header.
When I was setting up uWSGI I had to change nginx's user group. When I checked the nginx error log at /var/log/nginx/error.log I noticed that there was a permissions issue.
I solved this by changing the group ownership of nginx's temporary directories:
sudo chgrp www-data /var/lib/nginx/tmp/ /var/lib/nginx/ /var/lib/nginx/tmp/client_body/
I still can't explain why a plain Python requests call from my PC was not causing the same issue.
PS
Whenever nginx hits an error and returns the response to the browser itself, you will see the CORS error in the console, because nginx's own error responses don't carry your CORS headers. So always take that error with a grain of salt, especially if you have already set the CORS headers correctly on the Flask side.
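A quick way to tell whether the missing header comes from nginx rather than Flask is to replay the upload outside the browser and inspect the response headers directly, for example with a small requests sketch like this (the URL, token, and file name are placeholders):
import requests

# Replay the browser's POST and print the CORS header ourselves.
# URL, bearer token, and file name are placeholders for the real values.
with open("upload.bin", "rb") as f:
    resp = requests.post(
        "https://api.example.com/result",
        headers={"Origin": "https://example.com",
                 "Authorization": "Bearer <token>"},
        files={"file": f},
    )
print(resp.status_code)
print(resp.headers.get("Access-Control-Allow-Origin"))  # absent when nginx answered with its own error page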
Related
I have the following scenario:
server: Ubuntu 20.04.3 LTS
Openstack: installed following the official guide
Watcher: 1:4.0.0-0ubuntu0.20.04.1 (also installed following the official wiki)
Everything works like a charm; however, when I run
root@controller:/etc/watcher# openstack optimize service list
Internal Server Error (HTTP 500)
root@controller:/etc/watcher#
and check what it is about in Watcher's log, I see:
2022-01-15 17:25:58.509 17960 INFO watcher-api [-] 10.0.0.11 "GET /v1/services HTTP/1.1" status: 500 len: 139 time: 0.0277412
2022-01-15 17:40:52.535 17960 INFO watcher-api [-] Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/eventlet/wsgi.py", line 573, in handle_one_response
result = self.application(self.environ, start_response)
File "/usr/lib/python3/dist-packages/watcher/api/app.py", line 58, in __call__
return self.v1(environ, start_response)
File "/usr/lib/python3/dist-packages/watcher/api/middleware/auth_token.py", line 61, in __call__
return super(AuthTokenMiddleware, self).__call__(env, start_response)
File "/usr/local/lib/python3.8/dist-packages/webob/dec.py", line 129, in __call__
resp = self.call_func(req, *args, **kw)
File "/usr/local/lib/python3.8/dist-packages/webob/dec.py", line 193, in call_func
return self.func(req, *args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/keystonemiddleware/auth_token/__init__.py", line 338, in __call__
response = self.process_request(req)
File "/usr/local/lib/python3.8/dist-packages/keystonemiddleware/auth_token/__init__.py", line 659, in process_request
resp = super(AuthProtocol, self).process_request(request)
File "/usr/local/lib/python3.8/dist-packages/keystonemiddleware/auth_token/__init__.py", line 409, in process_request
data, user_auth_ref = self._do_fetch_token(
File "/usr/local/lib/python3.8/dist-packages/keystonemiddleware/auth_token/__init__.py", line 445, in _do_fetch_token
data = self.fetch_token(token, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/keystonemiddleware/auth_token/__init__.py", line 752, in fetch_token
data = self._identity_server.verify_token(
File "/usr/local/lib/python3.8/dist-packages/keystonemiddleware/auth_token/_identity.py", line 157, in verify_token
auth_ref = self._request_strategy.verify_token(
File "/usr/local/lib/python3.8/dist-packages/keystonemiddleware/auth_token/_identity.py", line 108, in _request_strategy
strategy_class = self._get_strategy_class()
File "/usr/local/lib/python3.8/dist-packages/keystonemiddleware/auth_token/_identity.py", line 130, in _get_strategy_class
if self._adapter.get_endpoint(version=klass.AUTH_VERSION):
File "/usr/local/lib/python3.8/dist-packages/keystoneauth1/adapter.py", line 291, in get_endpoint
return self.session.get_endpoint(auth or self.auth, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/keystoneauth1/session.py", line 1233, in get_endpoint
return auth.get_endpoint(self, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/keystoneauth1/identity/base.py", line 375, in get_endpoint
endpoint_data = self.get_endpoint_data(
File "/usr/local/lib/python3.8/dist-packages/keystoneauth1/identity/base.py", line 275, in get_endpoint_data
endpoint_data = service_catalog.endpoint_data_for(
File "/usr/local/lib/python3.8/dist-packages/keystoneauth1/access/service_catalog.py", line 462, in endpoint_data_for
raise exceptions.EndpointNotFound(msg)
keystoneauth1.exceptions.catalog.EndpointNotFound: internal endpoint for identity service in regionOne region not found
and here are the requests on the web server side:
==> horizon_access.log <==
127.0.0.1 - - [15/Jan/2022:17:38:29 +0300] "GET /dashboard/project/api_access/view_credentials/ HTTP/1.1" 200 1027 "http://localhost/dashboard/project/api_access/" "Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Firefox/91.0"
10.0.0.11 - - [15/Jan/2022:17:38:30 +0300] "GET /identity/v3/auth/tokens HTTP/1.1" 200 5318 "-" "python-keystoneclient"
10.0.0.11 - - [15/Jan/2022:17:38:30 +0300] "GET /compute/v2.1/servers/detail?all_tenants=True&changes-since=2022-01-15T14%3A33%3A30.416004%2B00%3A00 HTTP/1.1" 200 433 "-" "python-novaclient"
10.0.0.11 - - [15/Jan/2022:17:40:52 +0300] "GET /identity HTTP/1.1" 300 569 "-" "openstacksdk/0.50.0 keystoneauth1/4.2.1 python-requests/2.23.0 CPython/3.8.10"
10.0.0.11 - - [15/Jan/2022:17:40:52 +0300] "POST /identity/v3/auth/tokens HTTP/1.1" 201 5316 "-" "openstacksdk/0.50.0 keystoneauth1/4.2.1 python-requests/2.23.0 CPython/3.8.10"
10.0.0.11 - - [15/Jan/2022:17:40:52 +0300] "POST /identity/v3/auth/tokens HTTP/1.1" 201 5320 "-" "watcher/unknown keystonemiddleware.auth_token/9.1.0 keystoneauth1/4.2.1 python-requests/2.23.0 CPython/3.8.10"
On the Keystone side, I ran it with some verbosity using the following command:
/usr/bin/uwsgi --procname-prefix keystone --ini /etc/keystone/keystone-uwsgi-public.ini
I got the following log
DEBUG keystone.server.flask.request_processing.req_logging [None req-e422207d-b376-4e97-b20b-1d16144be4db None None] REQUEST_METHOD: `GET` {{(pid=20441) log_request_info /opt/stack/keystone/keystone/server/flask/request_processing/req_logging.py:27}}
DEBUG keystone.server.flask.request_processing.req_logging [None req-e422207d-b376-4e97-b20b-1d16144be4db None None] SCRIPT_NAME: `/identity` {{(pid=20441) log_request_info /opt/stack/keystone/keystone/server/flask/request_processing/req_logging.py:28}}
DEBUG keystone.server.flask.request_processing.req_logging [None req-e422207d-b376-4e97-b20b-1d16144be4db None None] PATH_INFO: `/` {{(pid=20441) log_request_info /opt/stack/keystone/keystone/server/flask/request_processing/req_logging.py:29}}
[pid: 20441|app: 0|req: 1/1] 10.0.0.11 () {58 vars in 998 bytes} [Sat Jan 15 17:44:30 2022] GET /identity => generated 268 bytes in 5 msecs (HTTP/1.1 300) 6 headers in 232 bytes (1 switches on core 0)
DEBUG keystone.server.flask.request_processing.req_logging [None req-cc547fb9-886e-4ed2-a3be-7e043004eed8 None None] REQUEST_METHOD: `POST` {{(pid=20440) log_request_info /opt/stack/keystone/keystone/server/flask/request_processing/req_logging.py:27}}
DEBUG keystone.server.flask.request_processing.req_logging [None req-cc547fb9-886e-4ed2-a3be-7e043004eed8 None None] SCRIPT_NAME: `/identity` {{(pid=20440) log_request_info /opt/stack/keystone/keystone/server/flask/request_processing/req_logging.py:28}}
DEBUG keystone.server.flask.request_processing.req_logging [None req-cc547fb9-886e-4ed2-a3be-7e043004eed8 None None] PATH_INFO: `/v3/auth/tokens` {{(pid=20440) log_request_info /opt/stack/keystone/keystone/server/flask/request_processing/req_logging.py:29}}
DEBUG oslo_db.sqlalchemy.engines [None req-cc547fb9-886e-4ed2-a3be-7e043004eed8 None None] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_ENGINE_SUBSTITUTION {{(pid=20440) _check_effective_sql_mode /usr/local/lib/python3.8/dist-packages/oslo_db/sqlalchemy/engines.py:304}}
DEBUG passlib.handlers.bcrypt [None req-cc547fb9-886e-4ed2-a3be-7e043004eed8 None None] detected 'bcrypt' backend, version '3.2.0' {{(pid=20440) _load_backend_mixin /usr/local/lib/python3.8/dist-packages/passlib/handlers/bcrypt.py:567}}
DEBUG passlib.handlers.bcrypt [None req-cc547fb9-886e-4ed2-a3be-7e043004eed8 None None] 'bcrypt' backend lacks $2$ support, enabling workaround {{(pid=20440) _finalize_backend_mixin /usr/local/lib/python3.8/dist-packages/passlib/handlers/bcrypt.py:382}}
DEBUG keystone.auth.core [None req-cc547fb9-886e-4ed2-a3be-7e043004eed8 None None] MFA Rules not processed for user `97eec1465cdc4e41b5c0ba48a1b39cc2`. Rule list: `[]` (Enabled: `True`). {{(pid=20440) check_auth_methods_against_rules /opt/stack/keystone/keystone/auth/core.py:438}}
DEBUG keystone.common.fernet_utils [None req-cc547fb9-886e-4ed2-a3be-7e043004eed8 None None] Loaded 2 Fernet keys from /etc/keystone/fernet-keys/, but `[fernet_tokens] max_active_keys = 3`; perhaps there have not been enough key rotations to reach `max_active_keys` yet? {{(pid=20440) load_keys /opt/stack/keystone/keystone/common/fernet_utils.py:286}}
[pid: 20440|app: 0|req: 1/2] 10.0.0.11 () {62 vars in 1095 bytes} [Sat Jan 15 17:44:30 2022] POST /identity/v3/auth/tokens => generated 4862 bytes in 125 msecs (HTTP/1.1 201) 6 headers in 385 bytes (1 switches on core 0)
DEBUG keystone.server.flask.request_processing.req_logging [None req-0584fbcc-66c5-4fba-9d8a-ea8ad2d40c5d None None] REQUEST_METHOD: `GET` {{(pid=20441) log_request_info /opt/stack/keystone/keystone/server/flask/request_processing/req_logging.py:27}}
DEBUG keystone.server.flask.request_processing.req_logging [None req-0584fbcc-66c5-4fba-9d8a-ea8ad2d40c5d None None] SCRIPT_NAME: `/identity` {{(pid=20441) log_request_info /opt/stack/keystone/keystone/server/flask/request_processing/req_logging.py:28}}
DEBUG keystone.server.flask.request_processing.req_logging [None req-0584fbcc-66c5-4fba-9d8a-ea8ad2d40c5d None None] PATH_INFO: `/` {{(pid=20441) log_request_info /opt/stack/keystone/keystone/server/flask/request_processing/req_logging.py:29}}
[pid: 20441|app: 0|req: 2/3] 10.0.0.11 () {58 vars in 1033 bytes} [Sat Jan 15 17:44:30 2022] GET /identity => generated 268 bytes in 2 msecs (HTTP/1.1 300) 6 headers in 232 bytes (1 switches on core 0)
DEBUG keystone.server.flask.request_processing.req_logging [None req-f096d017-66d0-4baa-8414-2596d0869005 None None] REQUEST_METHOD: `POST` {{(pid=20440) log_request_info /opt/stack/keystone/keystone/server/flask/request_processing/req_logging.py:27}}
DEBUG keystone.server.flask.request_processing.req_logging [None req-f096d017-66d0-4baa-8414-2596d0869005 None None] SCRIPT_NAME: `/identity` {{(pid=20440) log_request_info /opt/stack/keystone/keystone/server/flask/request_processing/req_logging.py:28}}
DEBUG keystone.server.flask.request_processing.req_logging [None req-f096d017-66d0-4baa-8414-2596d0869005 None None] PATH_INFO: `/v3/auth/tokens` {{(pid=20440) log_request_info /opt/stack/keystone/keystone/server/flask/request_processing/req_logging.py:29}}
DEBUG keystone.auth.core [None req-f096d017-66d0-4baa-8414-2596d0869005 None None] MFA Rules not processed for user `c5c42a1a942e48fd9b735ea9c6a11ed0`. Rule list: `[]` (Enabled: `True`). {{(pid=20440) check_auth_methods_against_rules /opt/stack/keystone/keystone/auth/core.py:438}}
DEBUG keystone.common.fernet_utils [None req-f096d017-66d0-4baa-8414-2596d0869005 None None] Loaded 2 Fernet keys from /etc/keystone/fernet-keys/, but `[fernet_tokens] max_active_keys = 3`; perhaps there have not been enough key rotations to reach `max_active_keys` yet? {{(pid=20440) load_keys /opt/stack/keystone/keystone/common/fernet_utils.py:286}}
[pid: 20440|app: 0|req: 2/4] 10.0.0.11 () {62 vars in 1130 bytes} [Sat Jan 15 17:44:30 2022] POST /identity/v3/auth/tokens => generated 4866 bytes in 26 msecs (HTTP/1.1 201) 6 headers in 385 bytes (2 switches on core 0)
So the first thing I did was to check the catalog:
openstack catalog list
+----------+----------+----------------------------------------+
| Name     | Type     | Endpoints                              |
+----------+----------+----------------------------------------+
| keystone | identity | RegionOne                              |
|          |          |   internal: http://controller/identity |
|          |          | RegionOne                              |
|          |          |   public: http://controller/identity   |
|          |          | RegionOne                              |
|          |          |   admin: http://controller/identity    |
+----------+----------+----------------------------------------+
My question is: do I need to create a specific (additional) internal endpoint for the identity service, and where should I declare it so that watcher-api can find it?
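If an internal identity endpoint did need to be (re)created, I assume it would be registered with something like the command below (the URL simply mirrors the existing public endpoint, so treat it as an assumption about this deployment):
openstack endpoint create --region RegionOne identity internal http://controller/identity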
EDIT: Following @Larsks's comment, I changed the credentials used in watcher.conf to username=admin (the admin user) and the corresponding password. openstack optimize service list gave back the following:
WARNING keystonemiddleware.auth_token [-] Identity response: {"error":{"code":401,"message":"The request you have made requires authentication.","title":"Unauthorized"}}
: keystoneauth1.exceptions.http.Unauthorized: The request you have made requires authentication. (HTTP 401) (Request-ID: req-56b63a60-1ba2-4f12-93c0-e7c7d1a1769c)
2022-01-15 19:04:17.424 28742 CRITICAL keystonemiddleware.auth_token [-] Unable to validate token: Identity server rejected authorization necessary to fetch token data: keystonemiddleware.auth_token._exceptions.ServiceError: Identity server rejected authorization necessary to fetch token data
How to get the original file from the nginx cache?
You can find the cached files under the cache directory (the path set by proxy_cache_path), such as /www/server/nginx/proxy_cache_dir.
nginx version
nginx -v
nginx version: nginx/1.15.10
Contents of a cached file (via cat):
MD5hash
KEY: http://domain/request_uri/
HTTP/1.1 200 OK
Server: Tengine
Content-Type: application/javascript
Content-Length: 12379
Connection: close
Date: Thu, 08 Aug 2019 14:13:09 GMT
Vary: Accept-Encoding
x-oss-request-id: 5D4C2DF59563BC5E816932BF
Accept-Ranges: bytes
ETag: "19A6B640E83F8475B59CD1C8213C3C1C"
Last-Modified: Sun, 28 Jul 2019 15:47:50 GMT
x-oss-object-type: Normal
x-oss-hash-crc64ecma: 7799365534687729124
x-oss-storage-class: Standard
Content-MD5: Gaa2QOg/hHW1nNHIITw8HA==
x-oss-server-time: 1
Ali-Swift-Global-Savetime: 1565273589
Via: cache1.l2cn1821[53,200-0,M], cache19.l2cn1821[63,0], cache5.cn369[123,200-0,M], cache9.cn369[125,0]
Age: 0
X-Cache: MISS TCP_MISS dirn:-2:-2
X-Swift-SaveTime: Thu, 08 Aug 2019 14:13:09 GMT
X-Swift-CacheTime: 3600
Timing-Allow-Origin: *
EagleId: 01c1bcd115652735896705467e
!function(e){var t={};function n(s){if(t[s])return t[s].exports;var r=t[s]={i:s,l:!1,exports:{}};return e[s].call(r.exports,r,r.exports,n),r.l=!0,r.exports}n.m=e,n.c=t,n.d=function(e,t,s){n.o(e,t)||Object.defineProperty(e,t,{enumerable:!0,get:s})},n.r=function(e){"undefined"!=typeof Symbol&&Symbol.toStringTag&&Object.defineProperty(e,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(e,"__esModule",{value:!0})},n.t=function(e,t){if(1&t&&(e=n(e)),8&t)return e;if(4&t&&"object"==typeof e&&e&&e.__esModule)return e;var s=Object.create(null);if(n.r
........large js file....
So, how can I get the original JS file out of the nginx cache file?
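My understanding of the cache file layout is: a binary nginx header, the KEY: line, then the raw upstream HTTP response, so I guess the JS body is everything after the blank line that ends the response headers. Something like this rough sketch is what I have in mind (assuming the cached response is not gzip-compressed; paths are placeholders):
import sys

def extract_cached_body(cache_path, out_path):
    # nginx cache files start with a binary header and a "KEY: ..." line,
    # followed by the upstream HTTP response (status line, headers, body)
    with open(cache_path, "rb") as f:
        data = f.read()
    start = data.find(b"HTTP/1.1")
    if start == -1:
        raise ValueError("no HTTP response found in cache file")
    body_start = data.find(b"\r\n\r\n", start)
    if body_start == -1:
        raise ValueError("end of response headers not found")
    with open(out_path, "wb") as out:
        out.write(data[body_start + 4:])  # skip the blank-line separator

if __name__ == "__main__":
    extract_cached_body(sys.argv[1], sys.argv[2])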
When I run the firebase deploy command, I would like to know which files are deployed.
I tried firebase deploy --debug, but I don't see any information about the uploaded files.
[2018-05-25T09:48:26.423Z] ----------------------------------------------------------------------
[2018-05-25T09:48:26.426Z] Command: /usr/local/bin/node /usr/local/bin/firebase deploy --debug
[2018-05-25T09:48:26.427Z] CLI Version: 3.18.4
[2018-05-25T09:48:26.427Z] Platform: darwin
[2018-05-25T09:48:26.427Z] Node Version: v9.2.0
[2018-05-25T09:48:26.428Z] Time: Fri May 25 2018 11:48:26 GMT+0200 (CEST)
[2018-05-25T09:48:26.428Z] ----------------------------------------------------------------------
[2018-05-25T09:48:26.439Z] > command requires scopes: ["email","openid","https://www.googleapis.com/auth/cloudplatformprojects.readonly","https://www.googleapis.com/auth/firebase","https://www.googleapis.com/auth/cloud-platform"]
[2018-05-25T09:48:26.440Z] > authorizing via signed-in user
[2018-05-25T09:48:26.441Z] > refreshing access token with scopes: ["email","https://www.googleapis.com/auth/cloud-platform","https://www.googleapis.com/auth/cloudplatformprojects.readonly","https://www.googleapis.com/auth/firebase","openid"]
[2018-05-25T09:48:26.442Z] >>> HTTP REQUEST POST https://www.googleapis.com/oauth2/v3/token
{ refresh_token: '1/sNSNg7xxbzwVujBEzRAQ2eZHkEuT0d6A2jVVUGa-e9Jgrc8NASizU4RK7MEmNnov',
client_id: '563584335869-fgrhgmd47bqnekij5i8b5pr03ho849e6.apps.googleusercontent.com',
client_secret: 'j9iVZfS8kkCEFUPaAeJV0sAi',
grant_type: 'refresh_token',
scope: 'email https://www.googleapis.com/auth/cloud-platform https://www.googleapis.com/auth/cloudplatformprojects.readonly https://www.googleapis.com/auth/firebase openid' }
Fri May 25 2018 11:48:26 GMT+0200 (CEST)
[2018-05-25T09:48:26.585Z] <<< HTTP RESPONSE 200 cache-control=no-cache, no-store, max-age=0, must-revalidate, pragma=no-cache, expires=Mon, 01 Jan 1990 00:00:00 GMT, date=Fri, 25 May 2018 09:48:26 GMT, vary=X-Origin, Origin,Accept-Encoding, content-type=application/json; charset=UTF-8, x-content-type-options=nosniff, x-frame-options=SAMEORIGIN, x-xss-protection=1; mode=block, server=GSE, alt-svc=hq=":443"; ma=2592000; quic=51303433; quic=51303432; quic=51303431; quic=51303339; quic=51303335,quic=":443"; ma=2592000; v="43,42,41,39,35", accept-ranges=none, connection=close
[2018-05-25T09:48:26.593Z] >>> HTTP REQUEST GET https://admin.firebase.com/v1/projects/test-table
Fri May 25 2018 11:48:26 GMT+0200 (CEST)
[2018-05-25T09:48:27.222Z] <<< HTTP RESPONSE 200 server=nginx, date=Fri, 25 May 2018 09:48:27 GMT, content-type=application/json; charset=utf-8, content-length=141, connection=close, x-content-type-options=nosniff, strict-transport-security=max-age=31536000; includeSubdomains, cache-control=no-cache, no-store
[2018-05-25T09:48:27.223Z] >>> HTTP REQUEST GET https://admin.firebase.com/v1/database/test-table/tokens
Fri May 25 2018 11:48:27 GMT+0200 (CEST)
[2018-05-25T09:48:27.777Z] <<< HTTP RESPONSE 200 server=nginx, date=Fri, 25 May 2018 09:48:27 GMT, content-type=application/json; charset=utf-8, content-length=274, connection=close, x-content-type-options=nosniff, strict-transport-security=max-age=31536000; includeSubdomains, cache-control=no-cache, no-store
=== Deploying to 'test-table'...
i deploying hosting
i hosting: preparing dist directory for upload...
[2018-05-25T09:48:28.900Z] >>> HTTP REQUEST PUT https://deploy.firebase.com/v1/hosting/test-table/uploads/-LDLglCdzpfzQK77Fbrb?fileCount=2&message=
Fri May 25 2018 11:48:28 GMT+0200 (CEST)
Uploading: [ ] 0%[2018-05-25T09:48:32.752Z] <<< HTTP RESPONSE 200 server=nginx, date=Fri, 25 May 2018 09:48:32 GMT, content-type=application/json; charset=utf-8, content-length=49, connection=close, access-control-allow-origin=*, access-control-allow-methods=GET, PUT, POST, DELETE, OPTIONS, strict-transport-security=max-age=31556926; includeSubDomains; preload, x-content-type-options=nosniff
[2018-05-25T09:48:32.753Z] [hosting] .tgz uploaded successfully, waiting for extraction
✔ hosting: 2 files uploaded successfully
[2018-05-25T09:48:33.642Z] [hosting] deploy completed after 5190ms
[2018-05-25T09:48:33.643Z] >>> HTTP REQUEST POST https://deploy.firebase.com/v1/projects/test-table/releases
{ hosting:
{ public: 'dist',
ignore: [ 'firebase.json', '**/.*', '**/node_modules/**' ],
version: '-LDLglCdzpfzQK77Fbrb',
prefix: '-LDLglCdzpfzQK77Fbrb/',
manifest: [] } }
Fri May 25 2018 11:48:33 GMT+0200 (CEST)
[2018-05-25T09:48:34.951Z] <<< HTTP RESPONSE 200 server=nginx, date=Fri, 25 May 2018 09:48:34 GMT, content-type=application/json; charset=utf-8, content-length=34, connection=close, access-control-allow-origin=*, access-control-allow-methods=GET, PUT, POST, DELETE, OPTIONS, strict-transport-security=max-age=31556926; includeSubDomains; preload, x-content-type-options=nosniff
✔ Deploy complete!
The only information the Firebase CLI shows is:
i hosting: preparing dist directory for upload...
So this means that everything in your dist directory is deployed, and nothing else.
Since the files are uploaded as a single .tgz file, there is no progress report for individual files.
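Which directory that is, and which files are skipped, is controlled by the hosting section of firebase.json; judging from the debug output above, the configuration for this deploy looks roughly like this:
{
  "hosting": {
    "public": "dist",
    "ignore": ["firebase.json", "**/.*", "**/node_modules/**"]
  }
}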
Apache Tika should be accessible from a Python program via HTTP, but I can't get it to work.
I am using this command to run the server (with and without the two options at the end):
java -jar tika-server-1.17.jar --port 5677 -enableUnsecureFeatures -enableFileUrl
And it works fine with curl:
curl -v -T /tmp/tmpsojwBN http://localhost:5677/tika
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 5677 (#0)
> PUT /tika HTTP/1.1
> Host: localhost:5677
> User-Agent: curl/7.47.0
> Accept: */*
> Accept-Encoding: gzip, deflate
> Content-Length: 418074
> Expect: 100-continue
>
< HTTP/1.1 100 Continue
* We are completely uploaded and fine
< HTTP/1.1 200 OK
< Content-Type: text/plain
< Date: Sat, 07 Apr 2018 12:28:41 GMT
< Transfer-Encoding: chunked
< Server: Jetty(8.y.z-SNAPSHOT)
But when I try something like the following (I tried different combinations of headers; here I recreated the same headers the tika-python client uses):
import os
import tempfile

import requests

# url, download_file() and TIKA_ENDPOINT_URL are defined elsewhere in my code
with tempfile.NamedTemporaryFile() as tmp_file:
    download_file(url, tmp_file)
    payload = open(tmp_file.name, 'rb')
    headers = {
        'Accept': 'application/json',
        'Content-Disposition': 'attachment; filename={}'.format(
            os.path.basename(tmp_file.name))}
    response = requests.put(TIKA_ENDPOINT_URL + '/tika', payload,
                            headers=headers,
                            verify=False)
I've tried using the payload as well as fileUrl, with the same result: WARN javax.ws.rs.ClientErrorException: HTTP 406 Not Acceptable and a Java stack trace on the server. Full trace:
WARN javax.ws.rs.ClientErrorException: HTTP 406 Not Acceptable
at org.apache.cxf.jaxrs.utils.SpecExceptions.toHttpException(SpecExceptions.java:117)
at org.apache.cxf.jaxrs.utils.ExceptionUtils.toHttpException(ExceptionUtils.java:173)
at org.apache.cxf.jaxrs.utils.JAXRSUtils.findTargetMethod(JAXRSUtils.java:542)
at org.apache.cxf.jaxrs.interceptor.JAXRSInInterceptor.processRequest(JAXRSInInterceptor.java:177)
at org.apache.cxf.jaxrs.interceptor.JAXRSInInterceptor.handleMessage(JAXRSInInterceptor.java:77)
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:307)
at org.apache.cxf.transport.ChainInitiationObserver.onMessage(ChainInitiationObserver.java:121)
at org.apache.cxf.transport.http.AbstractHTTPDestination.invoke(AbstractHTTPDestination.java:274)
at org.apache.cxf.transport.http_jetty.JettyHTTPDestination.doService(JettyHTTPDestination.java:261)
at org.apache.cxf.transport.http_jetty.JettyHTTPHandler.handle(JettyHTTPHandler.java:76)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1088)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1024)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:370)
at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:494)
at org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:973)
at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1035)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:641)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:231)
at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:696)
at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:53)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:748)
I've also tried to compare (with nc -l localhost 5677 | less) what is so different between the two requests (payload abbreviated):
From curl:
PUT /tika HTTP/1.1
Host: localhost:5677
User-Agent: curl/7.47.0
Accept: */*
Content-Length: 418074
Expect: 100-continue
%PDF-1.4
%<D3><EB><E9><E1>
1 0 obj
<</Creator (Chromium)
From Python requests library:
PUT /tika HTTP/1.1
Host: localhost:5677
Connection: keep-alive
Accept-Encoding: gzip, deflate
Accept: application/json
User-Agent: python-requests/2.13.0
Content-type: application/pdf
Content-Length: 246176
%PDF-1.4
%<D3><EB><E9><E1>
1 0 obj
<</Creator (Chromium)
The question is, what is the correct way to call Tika server from Python?
I've also tried the python tika library in client-only mode and tika-app via jnius. With the tika client, as well as with tika-app.jar via pyjnius, the call simply freezes (it never returns) when I use them in a Celery worker. At the same time, pyjnius/tika-app and the tika-python script both work nicely in a plain script; I have not figured out what is wrong inside the Celery worker. I guess it is something to do with threading and/or initialization in the wrong place, but that is a topic for another question.
And here is the request that tika-python makes:
PUT /tika HTTP/1.1
Host: localhost:5677
Connection: keep-alive
Accept-Encoding: gzip, deflate
Accept: application/json
User-Agent: python-requests/2.13.0
Content-Disposition: attachment; filename=tmpb3YkTq
Content-Length: 183234
%PDF-1.4
%<D3><EB><E9><E1>
1 0 obj
<</Creator (Chromium)
And now it seems like this is some kind of problem with the Tika server:
$ tika-python --verbose --server 'localhost' --port 5677 parse all /tmp/tmpb3YkTq
2018-04-08 09:44:11,555 [MainThread ] [INFO ] Writing ./tmpb3YkTq_meta.json
(<open file '<stderr>', mode 'w' at 0x7f0b688eb1e0>, 'Request headers: ', {'Accept': 'application/json', 'Content-Disposition': 'attachment; filename=tmpb3YkTq'})
(<open file '<stderr>', mode 'w' at 0x7f0b688eb1e0>, 'Response headers: ', {'Date': 'Sun, 08 Apr 2018 06:44:13 GMT', 'Transfer-Encoding': 'chunked', 'Content-Type': 'application/json', 'Server': 'Jetty(8.y.z-SNAPSHOT)'})
['./tmpb3YkTq_meta.json']
Compare with:
$ tika-python --verbose --server 'localhost' --port 5677 parse text /tmp/tmpb3YkTq
2018-04-08 09:43:38,326 [MainThread ] [INFO ] Writing ./tmpb3YkTq_meta.json
(<open file '<stderr>', mode 'w' at 0x7fc3eee4a1e0>, 'Request headers: ', {'Accept': 'application/json', 'Content-Disposition': 'attachment; filename=tmpb3YkTq'})
(<open file '<stderr>', mode 'w' at 0x7fc3eee4a1e0>, 'Response headers: ', {'Date': 'Sun, 08 Apr 2018 06:43:38 GMT', 'Content-Length': '0', 'Server': 'Jetty(8.y.z-SNAPSHOT)'})
2018-04-08 09:43:38,409 [MainThread ] [WARNI] Tika server returned status: 406
['./tmpb3YkTq_meta.json']
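For what it's worth, the only obvious difference from the working curl call is the Accept: application/json header, and the 406s above suggest that the plain /tika endpoint simply does not produce JSON. A request that mirrors what curl sends would look roughly like this (same URL and temp file as above; a sketch, not a confirmed fix):
import requests

# PUT the raw bytes to /tika and accept plain text, mirroring curl;
# the URL and file path match the examples above
with open("/tmp/tmpsojwBN", "rb") as payload:
    response = requests.put(
        "http://localhost:5677/tika",
        data=payload,
        headers={"Accept": "text/plain"},
    )
print(response.status_code)
print(response.text[:200])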
I get this exception when I run my application. It also happens against the real Azure Blob Storage.
I've captured with Fiddler the request that creates this problem:
GET http://127.0.0.1:10000/devstoreaccount1/ebb413ed-fdb5-49f2-a5ac-74faa7e2d3bf/8844c3ec-9e4b-43ec-88b2-58eddf65fc0a/perro?timeout=90 HTTP/1.1
x-ms-version: 2009-09-19
User-Agent: WA-Storage/6.0.6002.18006
x-ms-range: bytes=0-524304
If-Match: 0x8CDA190BD304DD0
x-ms-date: Wed, 23 Feb 2011 16:49:18 GMT
Authorization: SharedKey devstoreaccount1:5j3IScY9UJLN3o1ICWKwVEazO4/IDJG796sdZKqHlR4=
Host: 127.0.0.1:10000
And this is the response:
HTTP/1.1 412 The condition specified using HTTP conditional header(s) is not met.
Content-Length: 252
Content-Type: application/xml
Server: Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
x-ms-request-id: fbff9d15-65c8-4f21-9088-c95e4496c62c
x-ms-version: 2009-09-19
Date: Wed, 23 Feb 2011 16:49:18 GMT
<?xml version="1.0" encoding="utf-8"?><Error><Code>ConditionNotMet</Code><Message>The condition specified using HTTP conditional header(s) is not met.
RequestId:fbff9d15-65c8-4f21-9088-c95e4496c62c
Time:2011-02-23T16:49:18.8790478Z</Message></Error>
It happens when I use the Stream retrieved from this line:
blob.OpenRead();
Why does the ETag matter in a read operation? How can I avoid this problem?
It happens every time I launch several parallel tasks doing things against the blob storage.
If I use:
blob.OpenRead(new BlobRequestOptions() { AccessCondition = AccessCondition.IfMatch("*") });
I get this exception with no inner exception (before, it had a WebException with the details), and no failing request shows up in Fiddler:
Microsoft.WindowsAzure.StorageClient.StorageClientException was unhandled
Message=The conditionals specified for this operation did not match server.
Source=mscorlib
StackTrace:
Server stack trace:
at Microsoft.WindowsAzure.StorageClient.Tasks.Task`1.get_Result()
at Microsoft.WindowsAzure.StorageClient.Tasks.Task`1.ExecuteAndWait()
at Microsoft.WindowsAzure.StorageClient.TaskImplHelper.ExecuteImpl[T](Func`2 impl)
at Microsoft.WindowsAzure.StorageClient.BlobReadStream.Read(Byte[] buffer, Int32 offset, Int32 count)
at System.IO.BinaryReader.ReadBytes(Int32 count)
at System.Runtime.Serialization.Formatters.Binary.SerializationHeaderRecord.Read(__BinaryParser input)
at System.Runtime.Serialization.Formatters.Binary.__BinaryParser.ReadSerializationHeaderRecord()
at System.Runtime.Serialization.Formatters.Binary.__BinaryParser.Run()
at System.Runtime.Serialization.Formatters.Binary.ObjectReader.Deserialize(HeaderHandler handler, __BinaryParser serParser, Boolean fCheck, Boolean isCrossAppDomain, IMethodCallMessage methodCallMessage)
at System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Deserialize(Stream serializationStream, HeaderHandler handler, Boolean fCheck, Boolean isCrossAppDomain, IMethodCallMessage methodCallMessage)
...........
Thanks in advance.
Bufff... mystery solved!
Well, when you do a CloudBlob.OpenRead(), the client library performs two operations.
First, it gets the blob block list:
GET /devstoreaccount1/etagtest/test2.txt?comp=blocklist&blocklisttype=Committed&timeout=90 HTTP/1.1
x-ms-version: 2009-09-19
User-Agent: WA-Storage/6.0.6002.18006
x-ms-date: Wed, 23 Feb 2011 22:21:01 GMT
Authorization: SharedKey devstoreaccount1:SPOBe/IUrZJvoPXnAdD/Twnppvu37+qrUbHnaBHJY24=
Host: 127.0.0.1:10000
HTTP/1.1 200 OK
Transfer-Encoding: chunked
Content-Type: application/xml
Last-Modified: Wed, 23 Feb 2011 22:20:33 GMT
ETag: 0x8CDA1BF0593B660
Server: Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
x-ms-request-id: ecffddf2-137f-403c-9595-c8fc2847c9d0
x-ms-version: 2009-09-19
x-ms-blob-content-length: 4
Date: Wed, 23 Feb 2011 22:21:02 GMT
Note the ETag in the response.
Second, I guess it starts to actually retrieve the blob; now note the ETag in the request:
GET /devstoreaccount1/etagtest/test2.txt?timeout=90 HTTP/1.1
x-ms-version: 2009-09-19
User-Agent: WA-Storage/6.0.6002.18006
x-ms-range: bytes=0-525311
If-Match: 0x8CDA1BF0593B660
x-ms-date: Wed, 23 Feb 2011 22:21:02 GMT
Authorization: SharedKey devstoreaccount1:WXzXRv5e9+p0SzlHUAd7iv7jRHXvf+27t9tO4nrhY5Q=
Host: 127.0.0.1:10000
HTTP/1.1 206 Partial Content
Content-Length: 4
Content-Type: text/plain
Content-Range: bytes 0-3/4
Last-Modified: Wed, 23 Feb 2011 22:20:33 GMT
ETag: 0x8CDA1BF0593B660
Server: Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
x-ms-request-id: db1e221d-fc61-4837-a255-28b1547cb5d7
x-ms-version: 2009-09-19
x-ms-lease-status: unlocked
x-ms-blob-type: BlockBlob
Date: Wed, 23 Feb 2011 22:21:02 GMT
What happens if another web role does something to the blob between the two calls? Yes, a race condition.
Solution: use CloudBlob.DownloadToStream(); that method only issues one call:
GET /devstoreaccount1/etagtestxx/test2.txt?timeout=90 HTTP/1.1
x-ms-version: 2009-09-19
User-Agent: WA-Storage/6.0.6002.18006
x-ms-date: Wed, 23 Feb 2011 22:34:02 GMT
Authorization: SharedKey devstoreaccount1:VjXIO2kbjCIP4UeiXPtxDxmFLeoYAKOqiRv4SV3bZno=
Host: 127.0.0.1:10000
HTTP/1.1 200 OK
Content-Length: 4
Content-Type: text/plain
Last-Modified: Wed, 23 Feb 2011 22:33:47 GMT
ETag: 0x8CDA1C0DEB562D0
Server: Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
x-ms-request-id: 183a05bb-ea47-4811-8768-6a62195cdb64
x-ms-version: 2009-09-19
x-ms-lease-status: unlocked
x-ms-blob-type: BlockBlob
Date: Wed, 23 Feb 2011 22:34:02 GMT
I will put this into practice tomorrow morning at work and see what happens.
You can still use OpenRead; you just need to pass an OperationContext instance, as in the code below:
// cloudBlob is an instance of CloudPageBlob
OperationContext context = new OperationContext();
context.SendingRequest += (sender, e) => {
    e.Request.Headers["if-match"] = "*";
};
using (AutoResetEvent waitHandle = new AutoResetEvent(false))
{
    cloudBlob.StreamMinimumReadSizeInBytes = 16385;
    var result = cloudBlob.BeginOpenRead(null, null, context,
        ar => waitHandle.Set(),
        null);
    waitHandle.WaitOne();
    using (Stream blobStream = cloudBlob.EndOpenRead(result))
    {
        var k = blobStream.ReadByte();
    }
}
One thing that comes to mind is that the ETag in
If-Match: 0x8CDA190BD304DD0
is malformed; a valid (strong) etag is always in double quotes.
Dunno whether this has something to do with your problem, though.
If you don't want to hold the blob data in memory with DownloadToStream and still want to use a blob read stream, you can add an access condition to the read operation that matches any ETag on the referenced blob, like below:
var accessCondition = new AccessCondition();
var blobRequestOptions = new BlobRequestOptions();
var operationContext = new OperationContext();
// Match any ETag so that ongoing concurrent modification of the same blob does not cause the read to fail
operationContext.SendingRequest += (sender, e) =>
{
    if (e.Request.Headers.Contains("if-match"))
    {
        e.Request.Headers.Remove("if-match");
    }
    e.Request.Headers.Add("if-match", "*");
};
var blobStream = await blobRef.OpenReadAsync(accessCondition, blobRequestOptions, operationContext);