BigQuery Crashlytics dataset schedule interval - Firebase

We're currently looking into Firebase<>BigQuery (not sandboxed) for monitoring purposes. We've hooked up one of our projects using the Firebase integration and have gathered a few days' worth of data.
The data is always a day off, though, which makes sense since the transfer only runs every 24 hours. But trying to change the interval through the bq CLI:
bq update --transfer_config \
--target_dataset='crashlytics' \
--schedule='every 2 hours' \
projects/p/locations/l/transferConfigs/c
results in a 400 error:
Bigquery service returned an invalid reply in update operation: Error reported by server with missing error fields. Server returned: {u'error': {u'status': u'INVALID_ARGUMENT',
u'message': u'Request contains an invalid argument.', u'code': 400}}.
Please make sure you are using the latest version of the bq tool and try again. If this problem persists, you may have encountered a bug in the bigquery client. Please file a bug
report in our public issue tracker:
https://issuetracker.google.com/issues/new?component=187149&template=0
Please include a brief description of the steps that led to this issue, as well as any rows that can be made public from the following information:
========================================
== Platform ==
CPython:2.7.16:Darwin-19.2.0-x86_64-i386-64bit
== bq version ==
2.0.53
== Command line ==
['/path/bq/bq.py', '--application_default_credential_file', '/path/e#mail.com/adc.json', '--credential_file', '/path/e#email.com/singlestore_bq.json', '--project_id=tde-psv-app', 'update', '--transfer_config', '--target_dataset=crashlytics', '--schedule=every 2 hours', 'projects/p/locations/l/transferConfigs/c']
== UTC timestamp ==
2020-02-24 08:47:23
== Error trace ==
Traceback (most recent call last):
File "/path/bq/bq.py", line 1116, in RunSafely
return_value = self.RunWithArgs(*args, **kwds)
File "/path/bq/bq.py", line 4615, in RunWithArgs
schedule_args=schedule_args)
File "/path/bq/bigquery_client.py", line 3984, in UpdateTransferConfig
x__xgafv='2').execute()
File "/path/bq/bigquery_client.py", line 810, in execute
BigqueryHttp.RaiseErrorFromHttpError(e)
File "/path/bq/bigquery_client.py", line 788, in RaiseErrorFromHttpError
BigqueryClient.RaiseError(content)
File "/path/bq/bigquery_client.py", line 2385, in RaiseError
raise BigqueryError.Create(error, result, [])
BigqueryInterfaceError: Error reported by server with missing error fields. Server returned: {u'error': {u'status': u'INVALID_ARGUMENT', u'message': u'Request contains an invalid argument.', u'code': 400}}
========================================
This might give the impression that changing the schedule is not possible for this kind of dataset / Firebase project, but we can't seem to find a clear answer on that.
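For reference, the same change can be attempted through the BigQuery Data Transfer Service Python client; this is a minimal sketch using the placeholder config path from the command above. For a Firebase-managed Crashlytics config it is expected to fail with the same INVALID_ARGUMENT error, since the schedule does not appear to be user-editable.

from google.cloud import bigquery_datatransfer_v1
from google.protobuf import field_mask_pb2

client = bigquery_datatransfer_v1.DataTransferServiceClient()

# Placeholder resource name, as in the bq command above.
config = bigquery_datatransfer_v1.TransferConfig(
    name="projects/p/locations/l/transferConfigs/c",
    schedule="every 2 hours",
)

# Only the schedule field is sent in the update.
updated = client.update_transfer_config(
    transfer_config=config,
    update_mask=field_mask_pb2.FieldMask(paths=["schedule"]),
)
print(updated.schedule)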

Right now the data export is only available once per 24 hours. We are looking into changing this behavior. Please stay up to date on the Firebase blog for any announcements.

Related

GCP Composer v1.18.6 and 2.0.10 incompatible with CloudSqlProxyRunner

In my Composer Airflow DAGs, I have been using the CloudSqlProxyRunner to connect to my Cloud SQL instance.
However, after updating Google Cloud Composer from v1.18.4 to 1.18.6, my DAG started to encounter a strange error:
[2022-04-22, 23:20:18 UTC] {cloud_sql.py:462} INFO - Downloading cloud_sql_proxy from https://dl.google.com/cloudsql/cloud_sql_proxy.linux.x86_64 to /home/airflow/dXhOYoU_cloud_sql_proxy.tmp
[2022-04-22, 23:20:18 UTC] {taskinstance.py:1702} ERROR - Task failed with exception
Traceback (most recent call last):
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1330, in _run_raw_task
self._execute_task_with_callbacks(context)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1457, in _execute_task_with_callbacks
result = self._execute_task(context, self.task)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1513, in _execute_task
result = execute_callable(context=context)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/decorators/base.py", line 134, in execute
return_value = super().execute(context)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/operators/python.py", line 174, in execute
return_value = self.execute_callable()
File "/opt/python3.8/lib/python3.8/site-packages/airflow/operators/python.py", line 185, in execute_callable
return self.python_callable(*self.op_args, **self.op_kwargs)
File "/home/airflow/gcs/dags/real_time_scoring_pipeline.py", line 99, in get_messages_db
with SQLConnection() as sql_conn:
File "/home/airflow/gcs/dags/helpers/helpers.py", line 71, in __enter__
self.proxy_runner.start_proxy()
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/cloud_sql.py", line 524, in start_proxy
self._download_sql_proxy_if_needed()
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/cloud_sql.py", line 474, in _download_sql_proxy_if_needed
raise AirflowException(
airflow.exceptions.AirflowException: The cloud-sql-proxy could not be downloaded. Status code = 404. Reason = Not Found
Checking manually, https://dl.google.com/cloudsql/cloud_sql_proxy.linux.x86_64 indeed returns a 404.
Looking at the function that raises the exception, _download_sql_proxy_if_needed, it has this code:
system = platform.system().lower()
processor = os.uname().machine
if not self.sql_proxy_version:
    download_url = CLOUD_SQL_PROXY_DOWNLOAD_URL.format(system, processor)
else:
    download_url = CLOUD_SQL_PROXY_VERSION_DOWNLOAD_URL.format(
        self.sql_proxy_version, system, processor
    )
So, for whatever reason, in both of these latest images of Composer, processor = os.uname().machine returns x86_64. Previously, it returned amd64, and https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 is in fact a valid link to the binary we need.
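To make the mismatch concrete, here is a minimal sketch of the suffix mapping the download URL needs; the MACHINE_TO_SUFFIX table is my own illustration, not code from the provider:

import os
import platform

# Illustrative mapping only: the published cloud_sql_proxy binaries use
# "amd64" in their file names, while os.uname().machine reports "x86_64".
MACHINE_TO_SUFFIX = {"x86_64": "amd64", "aarch64": "arm64"}

system = platform.system().lower()   # e.g. "linux"
machine = os.uname().machine         # e.g. "x86_64" on these Composer images
suffix = MACHINE_TO_SUFFIX.get(machine, machine)
print(f"https://dl.google.com/cloudsql/cloud_sql_proxy.{system}.{suffix}")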
I replicated this error in Composer 2.0.10 as well.
I am still investigating possible workarounds, but I am posting this here in case someone else encounters this issue and has figured out a workaround, and to raise it with Google engineers (who, according to Composer's docs, monitor this tag).
My current workaround is patching the CloudSqlProxyRunner to hardcode the correct URL:
import os
import shutil
from inspect import signature

import httpx
from airflow.exceptions import AirflowException
from airflow.providers.google.cloud.hooks.cloud_sql import CloudSqlProxyRunner


class PatchedCloudSqlProxyRunner(CloudSqlProxyRunner):
    """
    This is a patched version of CloudSqlProxyRunner to provide a workaround for
    an incorrectly generated URL to the Cloud SQL proxy binary.
    """

    def _download_sql_proxy_if_needed(self) -> None:
        download_url = "https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64"
        # The rest of the code is taken from the original method.
        proxy_path_tmp = self.sql_proxy_path + ".tmp"
        self.log.info(
            "Downloading cloud_sql_proxy from %s to %s", download_url, proxy_path_tmp
        )
        # httpx has a breaking API change (follow_redirects vs allow_redirects)
        # and this should work with both versions (cf. issue #20088).
        if "follow_redirects" in signature(httpx.get).parameters.keys():
            response = httpx.get(download_url, follow_redirects=True)
        else:
            response = httpx.get(download_url, allow_redirects=True)  # type: ignore[call-arg]
        # Download to a .tmp file first to avoid the case where a partially
        # downloaded binary is used by a parallel operator sharing the same
        # fixed binary path.
        with open(proxy_path_tmp, "wb") as file:
            file.write(response.content)
        if response.status_code != 200:
            raise AirflowException(
                "The cloud-sql-proxy could not be downloaded. "
                f"Status code = {response.status_code}. Reason = {response.reason_phrase}"
            )
        self.log.info(
            "Moving sql_proxy binary from %s to %s", proxy_path_tmp, self.sql_proxy_path
        )
        shutil.move(proxy_path_tmp, self.sql_proxy_path)
        os.chmod(self.sql_proxy_path, 0o744)  # Set the executable bit.
        self.sql_proxy_was_downloaded = True
And then instantiate it and use it as I would the original CloudSqlProxyRunner:
proxy_runner = PatchedCloudSqlProxyRunner(path_prefix, instance_spec)
proxy_runner.start_proxy()
But I am hoping that this is properly fixed by someone at Google soon, either by fixing the os.uname().machine value or by uploading a Cloud SQL proxy binary at the URL currently generated in _download_sql_proxy_if_needed.
As mentioned by @enocom, this commit to support arm64 download links actually caused a side effect of generating broken download links. I assume the author of the commit thought that the Cloud SQL Proxy had binaries for each machine type, although in fact there are no Linux x86_64 links.
I have created an Airflow PR to fix the broken links; hopefully it will get merged soon and resolve this. I will update the thread with any news.
Update (I've been working with Jack on this): I just merged that PR! When a new version of the providers is added to PyPI, you'll need to add it to your Composer environment. In the meantime, as a workaround, you could take the fix from Jack's PR and use it as a local dependency. (Similar to the other reply here!) If you do this, I highly recommend setting a calendar reminder (maybe a month from now?) to remove the workaround and go back to importing from the provider package, just to make sure you don't miss out on other updates to it! :)

VTS: a few test cases give syntax error unexpected 'newline' and the module gets reported as incomplete (despite test cases passing)

/data/local/tmp/VtsHalBiometricsFaceV1_0TargetTest/VtsHalBiometricsFaceV1_0TargetTest.config[1]: syntax error: unexpected 'newline'
Total Tests : 1
PASSED : 1
FAILED : 0
IMPORTANT: Some modules failed to run to completion, tests counts may be inaccurate.
============== End of Results ==============
Issue: the test case is passing but the module is not getting reported as completed.
The issue is seen only with Android 11 based VTS suites; VTS suites for older Android flavors work fine.
The environment is Ubuntu 18.04.2 LTS. For a few modules, despite the test cases passing, a module pass is not reported and the results show Done=false.
The logs indicate this kind of error pointing to various .config files.
Any idea / suggestion what the issue could be?
This was asked again in "syntax error: unexpected 'newline' in .config file in android vts", and the answer there was that this error appears if you've modified the vts-tradefed file.
Additionally, I noticed that even chmod changes can cause this problem to appear.

Uploading manifest file failing

After making some changes in a JSON manifest file, I was trying to update it following the Amazon documentation:
ask smapi update-skill-manifest -g development -s amzn1.ask.skill.xxxx --manifest "skillManifest.json" --debug
I kept getting this error:
The error did not point to the actual cause. My guess was that it was related to the parameters, but that was strange, as I was following the documentation to the letter.
I then tried, instead of passing the JSON file, to cat the content of the file, which would be either:
For PowerShell: --manifest "$(type skillmanifest.json)"
For Linux: --manifest "$(cat skillmanifest.json)"
I still kept getting the same error.
First, to debug and get a more accurate error, I checked my ASK CLI version, which was outdated.
After updating the ASK CLI to the latest version I was still getting the same error.
At that point it started including an error object, which said:
Looking up "Parsing error due to invalid body." and INVALID_REQUEST_PARAMETER in the error codes, it just said that the body of the request cannot be parsed.
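Since the error boils down to the request body failing to parse, one quick sanity check is that the manifest file itself is valid JSON. A minimal sketch, using the filename from the question:

import json

# Fails loudly (json.JSONDecodeError) if the manifest is malformed;
# otherwise the request-body problem lies elsewhere.
with open("skillmanifest.json") as f:
    manifest = json.load(f)
print(list(manifest.get("manifest", {})))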
After research and playing around, the problem turned out to be the manifest parameter: changing it to "file:FILENAME" solved the issue:
--manifest "file:skillmanifest.json"
The documentation does not state this, but it seems necessary for the request to go through.
I hope this helps someone out there avoid spending a full day troubleshooting.

Not able to create a table using ore.create

I executed the R program, and when I try to push the result to a table using
ore.create(score, table="xyz")
I get the following error:
Error in .oci.GetQuery(conn, statement, data = data, prefetch = prefetch, :
ORA-12801: error signaled in parallel query server P007, instance XY.ab.dc.cd:abc (2)
ORA-06520: PL/SQL: Error loading external library
ORA-06522: /app/oracle/product/11.2.0/dbhome_1/lib/librqe.so: cannot open shared object file: No such file or directory
ORA-06512: at "RQSYS.RQROWEVALIMPL", line 20
ORA-06512: at "RQSYS.RQROWEVALIMPL", line 16
ORA-06512: at line 4
Please help me solve this issue; I have been trying to solve it for the past week and can't, as I am new to this.
Any help is much appreciated.
This looks like a problem with your installation of the Oracle R Enterprise (ORE) package.
The message indicates you are running on 11gR2. ORE requires 11.2.0.3 or higher, or 11.2.0.1 with a specific patch applied. Check this OTN Forum thread for details.
You need an Oracle Support contract to get hold of these patches. If you don't have a contract you will need to migrate to database 12c in order to use R.

Swift Juno complains 'Account not found'

I'm new to OpenStack, so this might be a very silly mistake.
I'm trying to set up a one-node Swift configuration for a simple proof of concept. I did follow the instructions; however, something is missing. I keep getting this error:
root@lab-srv2544:/etc/swift# swift stat
Traceback (most recent call last):
File "/usr/bin/swift", line 10, in <module>
sys.exit(main())
File "/usr/lib/python2.7/dist-packages/swiftclient/shell.py", line 1287, in main
globals()['st_%s' % args[0]](parser, argv[1:], output)
File "/usr/lib/python2.7/dist-packages/swiftclient/shell.py", line 492, in st_stat
stat_result = swift.stat()
File "/usr/lib/python2.7/dist-packages/swiftclient/service.py", line 427, in stat
raise SwiftError('Account not found', exc=err)
swiftclient.service.SwiftError: 'Account not found'
Also, the syslog always complains about proxy-server:
Dec 12 12:16:37 lab-srv2544 proxy-server: Account HEAD returning 503 for [] (txn: tx9536949d19d14f1ab5d8d-00548b4d25) (client_ip: 127.0.0.1)
Dec 12 12:16:37 lab-srv2544 proxy-server: 127.0.0.1 127.0.0.1 12/Dec/2014/20/16/37 HEAD /v1/AUTH_71e79a29599149099aa98d5d276eaa0b HTTP/1.0 503 - python-swiftclient-2.3.0 8d2b0748804f4b34... - - - tx9536949d19d14f1ab5d8d-00548b4d25 - 0.0013 - - 1418415397.334497929 1418415397.335824013
Anyone seen this problem before?
When using the swift command to access Swift storage, pass the user ID and password as arguments if they are not set in environment variables.
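A minimal sketch of that approach using python-swiftclient (the library in the traceback above), with credentials passed explicitly; the endpoint and credentials are placeholders:

from swiftclient.client import Connection

# Placeholder Keystone endpoint and credentials; substitute your own.
conn = Connection(
    authurl="http://lab-srv2544:5000/v2.0",
    user="admin",
    key="ADMIN_PASS",
    tenant_name="admin",
    auth_version="2.0",
)

# Equivalent of `swift stat`: a HEAD request on the account.
print(conn.head_account())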
The most probable reason for this behavior is a funny order in your "pipeline" directive in /etc/swift/proxy-server.conf
To verify this hypothesis:
1. Comment out your current pipeline, and write this one instead:
pipeline = authtoken cache healthcheck keystoneauth proxy-logging proxy-server
2. Restart your proxy server with the command:
swift-init proxy-server restart
3. Make sure the environment variables OS_USERNAME, OS_PASSWORD, OS_TENANT_NAME and OS_AUTH_URL are defined.
4. Try to list your containers with:
swift list
If you get a list of containers, then the diagnosis is correct.
Go back to your proxy-server.conf and add one element at a time to your pipeline, restarting the server and testing after each change, until you find the right order.
For your reference see http://docs.openstack.org/developer/swift/deployment_guide.html#proxy-server-configuration
