I am trying to obtain an OAuth2 access token in a server-to-server JSON API scenario, but it fails with an invalid_grant error. Please help.
import httplib2
from oauth2client.client import SignedJwtAssertionCredentials

KEY_FILE = 'xxxxxxxxxxxx-privatekey.p12'
with open(KEY_FILE, 'rb') as fd:  # the .p12 key file is binary
    key = fd.read()
SERVICE_ACCOUNT_EMAIL = 'xxxxxx.apps.googleusercontent.com'
credentials = SignedJwtAssertionCredentials(
    SERVICE_ACCOUNT_EMAIL, key,
    scope="https://www.googleapis.com/auth/datastore https://www.googleapis.com/auth/userinfo.email",
    token_uri='https://accounts.google.com/o/oauth2/token')
assertion = credentials._generate_assertion()
h = httplib2.Http()
credentials._do_refresh_request(h.request)
and I got:
Traceback (most recent call last):
File "/Users/pahud/Projects/oauth2client/x.py", line 24, in <module>
credentials._do_refresh_request(h.request)
File "/Users/pahud/Projects/oauth2client/oauth2client/client.py", line 710, in _do_refresh_request
raise AccessTokenRefreshError(error_msg)
oauth2client.client.AccessTokenRefreshError: invalid_grant
[Finished in 0.7s with exit code 1]
(Screenshot: http://i.stack.imgur.com/iGGYx.png)
I fixed it. The value I had in
SERVICE_ACCOUNT_EMAIL = 'xxxxxx.apps.googleusercontent.com'
is the client ID, not the service account email address. I replaced it with the actual email and it's working now.
I had the same problem.
To solve it, check the following:
Did you use client_secrets.json in your program? If yes, check whether its name matches the file in your current directory.
The "client_email" or SERVICE_ACCOUNT_EMAIL is neither your personal email nor the client ID; it is the service account's email address. You can find that email under https://console.developers.google.com/project/ ==> Credentials ==> Service account ==> Email address.
Basically, if your client ID is: <clientid>.apps.googleusercontent.com
your client email would be: <clientid>@developer.gserviceaccount.com
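Putting the pieces together, a minimal sketch of the corrected setup (the key file name, scope, and IDs are placeholders):

import httplib2
from oauth2client.client import SignedJwtAssertionCredentials

with open('xxxxxxxxxxxx-privatekey.p12', 'rb') as fd:
    key = fd.read()

# Use the service account *email*, not the OAuth2 client ID.
SERVICE_ACCOUNT_EMAIL = '<clientid>@developer.gserviceaccount.com'
credentials = SignedJwtAssertionCredentials(
    SERVICE_ACCOUNT_EMAIL, key,
    scope='https://www.googleapis.com/auth/datastore')
credentials.refresh(httplib2.Http())  # should now succeed without invalid_grant
print(credentials.access_token)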
In my case the problem was with the .boto file. Try configuring it again with the credentials from the service account.
For those using the fallback, gcs_oauth2_boto_plugin.SetFallbackClientIdAndSecret(CLIENT_ID, CLIENT_SECRET):
use any "Client ID for native application" for the fallback. According to https://cloud.google.com/storage/docs/gspythonlibrary this should not be necessary,
but I couldn't find another way; without it, it kept throwing errors.
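For illustration, here is roughly how the fallback fits into a boto-based script, following the gspythonlibrary docs linked above (a sketch; the bucket name and credentials are placeholders):

import boto
import gcs_oauth2_boto_plugin  # registers the 'gs' auth handler with boto

CLIENT_ID = 'xxxxxx.apps.googleusercontent.com'  # a "Client ID for native application"
CLIENT_SECRET = 'xxxxxx'

# Fall back to these OAuth2 client credentials when no stored token is found.
gcs_oauth2_boto_plugin.SetFallbackClientIdAndSecret(CLIENT_ID, CLIENT_SECRET)

uri = boto.storage_uri('my-bucket', 'gs')
for obj in uri.get_bucket():
    print(obj.name)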
Related
In my Composer Airflow DAGs, I have been using the CloudSqlProxyRunner to connect to my Cloud SQL instance.
However, after updating Google Cloud Composer from v1.18.4 to v1.18.6, my DAG started to encounter a strange error:
[2022-04-22, 23:20:18 UTC] {cloud_sql.py:462} INFO - Downloading cloud_sql_proxy from https://dl.google.com/cloudsql/cloud_sql_proxy.linux.x86_64 to /home/airflow/dXhOYoU_cloud_sql_proxy.tmp
[2022-04-22, 23:20:18 UTC] {taskinstance.py:1702} ERROR - Task failed with exception
Traceback (most recent call last):
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1330, in _run_raw_task
self._execute_task_with_callbacks(context)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1457, in _execute_task_with_callbacks
result = self._execute_task(context, self.task)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1513, in _execute_task
result = execute_callable(context=context)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/decorators/base.py", line 134, in execute
return_value = super().execute(context)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/operators/python.py", line 174, in execute
return_value = self.execute_callable()
File "/opt/python3.8/lib/python3.8/site-packages/airflow/operators/python.py", line 185, in execute_callable
return self.python_callable(*self.op_args, **self.op_kwargs)
File "/home/airflow/gcs/dags/real_time_scoring_pipeline.py", line 99, in get_messages_db
with SQLConnection() as sql_conn:
File "/home/airflow/gcs/dags/helpers/helpers.py", line 71, in __enter__
self.proxy_runner.start_proxy()
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/cloud_sql.py", line 524, in start_proxy
self._download_sql_proxy_if_needed()
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/cloud_sql.py", line 474, in _download_sql_proxy_if_needed
raise AirflowException(
airflow.exceptions.AirflowException: The cloud-sql-proxy could not be downloaded. Status code = 404. Reason = Not Found
Checking manually, https://dl.google.com/cloudsql/cloud_sql_proxy.linux.x86_64 indeed returns a 404.
Looking at the function that raises the exception, _download_sql_proxy_if_needed, it has this code:
system = platform.system().lower()
processor = os.uname().machine
if not self.sql_proxy_version:
    download_url = CLOUD_SQL_PROXY_DOWNLOAD_URL.format(system, processor)
else:
    download_url = CLOUD_SQL_PROXY_VERSION_DOWNLOAD_URL.format(
        self.sql_proxy_version, system, processor
    )
So, for whatever reason, in both of these latest images of Composer, processor = os.uname().machine returns x86_64. Previously, it returned amd64, and https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 is in fact a valid link to the binary we need.
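To see the mismatch concretely, here is a quick sketch of what that code now produces on these images (the format string is paraphrased from the hook's CLOUD_SQL_PROXY_DOWNLOAD_URL constant):

import os
import platform

system = platform.system().lower()  # 'linux' on a Composer worker
processor = os.uname().machine      # now 'x86_64', previously 'amd64'
url = "https://dl.google.com/cloudsql/cloud_sql_proxy.{}.{}".format(system, processor)
print(url)  # .../cloud_sql_proxy.linux.x86_64 -> 404; the working link ends in .linux.amd64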
I replicated this error in Composer 2.0.10 as well.
I am still investigating possible workarounds, but I'm posting this here in case someone else encounters this issue and has figured out a workaround, and to raise it with Google engineers (who, according to Composer's docs, monitor this tag).
My current workaround is patching the CloudSqlProxyRunner to hardcode the correct URL:
import os
import shutil
from inspect import signature

import httpx
from airflow.exceptions import AirflowException
from airflow.providers.google.cloud.hooks.cloud_sql import CloudSqlProxyRunner


class PatchedCloudSqlProxyRunner(CloudSqlProxyRunner):
    """
    A patched version of CloudSqlProxyRunner that works around an incorrectly
    generated URL to the Cloud SQL proxy binary.
    """

    def _download_sql_proxy_if_needed(self) -> None:
        download_url = "https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64"
        # The rest of the code is taken from the original method.
        proxy_path_tmp = self.sql_proxy_path + ".tmp"
        self.log.info(
            "Downloading cloud_sql_proxy from %s to %s", download_url, proxy_path_tmp
        )
        # httpx has a breaking API change (follow_redirects vs allow_redirects)
        # and this should work with both versions (cf. issue #20088)
        if "follow_redirects" in signature(httpx.get).parameters.keys():
            response = httpx.get(download_url, follow_redirects=True)
        else:
            response = httpx.get(download_url, allow_redirects=True)  # type: ignore[call-arg]
        # Download to a .tmp file first to avoid the case where a partially
        # downloaded binary is used by a parallel operator sharing the same
        # fixed binary path.
        with open(proxy_path_tmp, "wb") as file:
            file.write(response.content)
        if response.status_code != 200:
            raise AirflowException(
                "The cloud-sql-proxy could not be downloaded. "
                f"Status code = {response.status_code}. Reason = {response.reason_phrase}"
            )
        self.log.info(
            "Moving sql_proxy binary from %s to %s", proxy_path_tmp, self.sql_proxy_path
        )
        shutil.move(proxy_path_tmp, self.sql_proxy_path)
        os.chmod(self.sql_proxy_path, 0o744)  # set executable bit
        self.sql_proxy_was_downloaded = True
And then instantiate it and use it as I would the original CloudSqlProxyRunner:
proxy_runner = PatchedCloudSqlProxyRunner(path_prefix, instance_spec)
proxy_runner.start_proxy()
But I am hoping that this is properly fixed by someone at Google soon, either by fixing the os.uname().machine value
or by uploading a Cloud SQL proxy binary at the URL currently generated in _download_sql_proxy_if_needed.
As mentioned by @enocom, this commit to support arm64 download links actually had the side effect of generating broken download links. I assume the author of the commit thought that the Cloud SQL Proxy had binaries for every machine type, when in fact there is no Linux x86_64 link.
I have created an Airflow PR to fix the broken links; hopefully it will get merged soon and resolve this. I will update the thread with any news.
Update (I've been working with Jack on this): I just merged that PR! When a new version of the providers is added to PyPI, you'll need to add it to your Composer environment. In the meantime, as a workaround, you could take the fix from Jack's PR and use it as a local dependency. (Similar to the other reply here!) If you do this, I highly recommend setting a calendar reminder (maybe a month from now?) to remove the workaround and go back to importing from the provider package, just to make sure you don't miss out on other updates to it! :)
We're currently looking into the Firebase<>BigQuery integration (not sandboxed) for monitoring purposes. We've hooked up one of our projects using the Firebase integration and have gathered a few days' worth of data.
However, the data is always a day off, which makes sense since the transfer only runs every 24 hours. But trying to change the schedule through the bq CLI:
bq update --transfer_config \
--target_dataset='crashlytics' \
--schedule='every 2 hours' \
projects/p/locations/l/transferConfigs/c
results in a 400 error:
Bigquery service returned an invalid reply in update operation: Error reported by server with missing error fields. Server returned: {u'error': {u'status': u'INVALID_ARGUMENT',
u'message': u'Request contains an invalid argument.', u'code': 400}}.
Please make sure you are using the latest version of the bq tool and try again. If this problem persists, you may have encountered a bug in the bigquery client. Please file a bug
report in our public issue tracker:
https://issuetracker.google.com/issues/new?component=187149&template=0
Please include a brief description of the steps that led to this issue, as well as any rows that can be made public from the following information:
========================================
== Platform ==
CPython:2.7.16:Darwin-19.2.0-x86_64-i386-64bit
== bq version ==
2.0.53
== Command line ==
['/path/bq/bq.py', '--application_default_credential_file', '/path/e#mail.com/adc.json', '--credential_file', '/path/e#email.com/singlestore_bq.json', '--project_id=tde-psv-app', 'update', '--transfer_config', '--target_dataset=crashlytics', '--schedule=every 2 hours', 'projects/p/locations/l/transferConfigs/c']
== UTC timestamp ==
2020-02-24 08:47:23
== Error trace ==
Traceback (most recent call last):
File "/path/bq/bq.py", line 1116, in RunSafely
return_value = self.RunWithArgs(*args, **kwds)
File "/path/bq/bq.py", line 4615, in RunWithArgs
schedule_args=schedule_args)
File "/path/bq/bigquery_client.py", line 3984, in UpdateTransferConfig
x__xgafv='2').execute()
File "/path/bq/bigquery_client.py", line 810, in execute
BigqueryHttp.RaiseErrorFromHttpError(e)
File "/path/bq/bigquery_client.py", line 788, in RaiseErrorFromHttpError
BigqueryClient.RaiseError(content)
File "/path/bq/bigquery_client.py", line 2385, in RaiseError
raise BigqueryError.Create(error, result, [])
BigqueryInterfaceError: Error reported by server with missing error fields. Server returned: {u'error': {u'status': u'INVALID_ARGUMENT', u'message': u'Request contains an invalid argument.', u'code': 400}}
========================================
Unexpected exception in update operation: Bigquery service returned an invalid reply in update operation: Error reported by server with missing error fields. Server returned:
{u'error': {u'status': u'INVALID_ARGUMENT',
u'message': u'Request contains an invalid argument.', u'code': 400}}.
Please make sure you are using the latest version of the bq tool and try again. If this problem persists, you may have encountered a bug in the bigquery client. Please file a bug
report in our public issue tracker:
https://issuetracker.google.com/issues/new?component=187149&template=0
Please include a brief description of the steps that led to this issue, as well as any rows that can be made public from the following information:
We might get the impression that this is simply not possible for this kind of dataset / Firebase project, but we can't seem to find a clear answer on that.
Right now the data export is only available once per 24 hours. We are looking into changing this behavior. Please stay up to date on the Firebase blog for any announcements.
In the Airflow admin site, when I update the http_default connection, the HTTP sensor gives the following error:
ERROR - Could not create Fernet object: Incorrect padding
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/airflow/models.py", line 173, in get_fernet
_fernet = Fernet(fernet_key.encode('utf-8'))
File "/usr/local/lib/python3.6/site-packages/cryptography/fernet.py", line 35, in init
key = base64.urlsafe_b64decode(key)
File "/usr/local/lib/python3.6/base64.py", line 133, in urlsafe_b64decode
return b64decode(s)
File "/usr/local/lib/python3.6/base64.py", line 87, in b64decode
return binascii.a2b_base64(s)
binascii.Error: Incorrect padding
It seems your $FERNET_KEY is not set.
Can you check the output of echo $FERNET_KEY?
Can you also check the fernet_key = entry in your airflow.cfg?
If those are empty, you can generate a new one with some Python code:
from cryptography.fernet import Fernet
print(Fernet.generate_key().decode())
Then set this value in your airflow.cfg under fernet_key =.
Alternatively you can also set it via export AIRFLOW__CORE__FERNET_KEY=your_fernet_key (this gives you more flexibility if you are building your environment dynamically).
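To check whether the key Airflow is actually seeing is well formed (a Fernet key must be 32 url-safe base64-encoded bytes), here is a quick sanity-check sketch:

import os
from cryptography.fernet import Fernet

key = os.environ.get("AIRFLOW__CORE__FERNET_KEY", "")
try:
    Fernet(key.encode("utf-8"))  # raises (e.g. "Incorrect padding") if the key is malformed
    print("Fernet key looks valid")
except Exception as exc:
    print("Invalid Fernet key:", exc)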
Important to keep in mind
The Fernet key is used to encrypt your connections' credentials, so you need to keep it safe if you want to be able to decrypt them later. If you created some connections with another Fernet key and then generated a new one as described above, your old connections won't work and will have to be recreated once the new key is in place.
I tried to crawl my email data using the 'edeR' package.
I succeeded in getting the 'inbox' folder, but failed to get 'sent mail'.
Here is the code.
Sys.setenv(JAVA_HOME="C:/Program Files/Java/jre1.8.0_121")
library(rJava)
library(edeR)
mail_sen<-extractBetween(username="xxxx@gmail.com",
password="xxxxx", folder="[Gmail]/Sent Mail",
startDate="06-Jan-2017", endDate="06-Mar-2017", nmail=5)
When I change folder="[Gmail]/Sent Mail" to folder="inbox", it works.
However, when I use the code above, I get this error:
Error in .jcall("RJavaTools", "Ljava/lang/Object;", "invokeMethod", cl, :
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Program Files\Java\jre1.8.0_121\lib\ext\jython.jar\Lib\imaplib.py", line 749, in uid
imaplib.error: command SEARCH illegal in state AUTH
I'm stuck on this error...
Can anyone solve this problem?
The error is not very informative, but this post suggests it means the folder can't be found: edeR appears to be a wrapper around the imaplib Python library referenced in that question. According to the answers there, the name of the "Sent Mail" folder is language-dependent.
So I think you might have to try different translations of "Sent Mail" in folder="[Gmail]/Sent Mail".
ETA: I do not recommend changing your Gmail language settings to a language you do not speak just to see what happens to the "Sent Mail" folder name. I just spent five minutes finding out which option under the little cogwheel in Gmail meant "settings" in Bahasa Indonesia.
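Rather than guessing translations, you can also list the exact folder names your account exposes over IMAP; here is a minimal Python sketch using the same imaplib that edeR wraps (the credentials are placeholders, and IMAP access must be enabled on the account):

import imaplib

conn = imaplib.IMAP4_SSL("imap.gmail.com")
conn.login("xxxx@gmail.com", "xxxxx")
typ, folders = conn.list()  # returns all folders, including the localized "[Gmail]/..." names
for line in folders:
    print(line.decode())
conn.logout()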
I'm new to Stack, so this might be a very silly mistake.
I'm trying to set up a one-node Swift configuration for a simple proof of concept. I did follow the instructions. However, something is missing; I keep getting this error:
root@lab-srv2544:/etc/swift# swift stat
Traceback (most recent call last):
File "/usr/bin/swift", line 10, in <module>
sys.exit(main())
File "/usr/lib/python2.7/dist-packages/swiftclient/shell.py", line 1287, in main
globals()['st_%s' % args[0]](parser, argv[1:], output)
File "/usr/lib/python2.7/dist-packages/swiftclient/shell.py", line 492, in st_stat
stat_result = swift.stat()
File "/usr/lib/python2.7/dist-packages/swiftclient/service.py", line 427, in stat
raise SwiftError('Account not found', exc=err)
swiftclient.service.SwiftError: 'Account not found'
Also, the syslog always complains about proxy-server:
Dec 12 12:16:37 lab-srv2544 proxy-server: Account HEAD returning 503 for [] (txn: tx9536949d19d14f1ab5d8d-00548b4d25) (client_ip: 127.0.0.1)
Dec 12 12:16:37 lab-srv2544 proxy-server: 127.0.0.1 127.0.0.1 12/Dec/2014/20/16/37 HEAD /v1/AUTH_71e79a29599149099aa98d5d276eaa0b HTTP/1.0 503 - python-swiftclient-2.3.0 8d2b0748804f4b34... - - - tx9536949d19d14f1ab5d8d-00548b4d25 - 0.0013 - - 1418415397.334497929 1418415397.335824013
Has anyone seen this problem before?
When using the 'swift' command to access Swift storage, pass the user ID and password as arguments if they are not set in environment variables.
The most probable reason for this behavior is a funny order in your "pipeline" directive in /etc/swift/proxy-server.conf
To verify this hypothesis:
comment out your current pipeline, and write this one instead:
pipeline = authtoken cache healthcheck keystoneauth proxy-logging proxy-server
restart your proxy server with the command
swift-init proxy-server restart
Make sure the environment variables OS_USERNAME, OS_PASSWORD, OS_TENANT_NAME and OS_AUTH_URL are defined
try to list your containers with
swift list
If you get a list of containers, then the diagnosis is correct (a Python equivalent of this check is sketched after these steps).
Go back to your proxy-server.conf and try adding one element at a time to your pipeline, restarting the server and testing each time, until you find the right order.
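If you prefer to run that check from Python instead of the CLI, the same account listing can be done with python-swiftclient (a sketch, assuming the OS_* environment variables above are set and Keystone v2 auth):

import os
from swiftclient.client import Connection

conn = Connection(
    authurl=os.environ["OS_AUTH_URL"],
    user=os.environ["OS_USERNAME"],
    key=os.environ["OS_PASSWORD"],
    tenant_name=os.environ["OS_TENANT_NAME"],
    auth_version="2.0",
)
# Equivalent of `swift stat` + `swift list` at the account level:
headers, containers = conn.get_account()
for container in containers:
    print(container["name"])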
For your reference see http://docs.openstack.org/developer/swift/deployment_guide.html#proxy-server-configuration