.buffer() returns error: "maxError for Buffer must be in the same units as the distance"

I would like to set a proj for .buffer(). When I set only a proj, I am asked to set a non-zero maxError. When I set a maxError, .buffer() returns:
"Error in map(ID=0): Geometry.buffer: maxError for Buffer must be in the same units as the distance"
read-only link: https://code.earthengine.google.com/258401a214549d4209c2c23052ede684
Any ideas? Thank you!

Related

PositionTargetGlobal failed because no origin

When I launch mavros_posix_sitl.launch in PX4, it always warns:
"PositionTargetGlobal failed because no origin"
This leads to an inaccurate flight altitude for the UAV in Gazebo.
How can I solve this problem? I have no idea.
This is due to PX4's EKF2 not having an origin set when starting the SITL.
Through the MAVLink console (if you don't have one started, just run PX4-Autopilot/Tools/mavlink_shell.py pointed at the correct port), you can set a specific origin using the commander set_ekf_origin command, specifying the latitude, longitude and altitude; this will remove the warnings.
Once set, you can also save that origin by saving and loading your parameters.
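For example, the script path and command are the ones named above; the coordinates below are placeholder values, and you may need to point the shell at your SITL's serial device or UDP port:
# run from the PX4-Autopilot root; add your SITL's serial device or UDP port if needed
Tools/mavlink_shell.py
# then, inside the MAVLink shell:
commander set_ekf_origin 47.397742 8.545594 488.0   # latitude, longitude, altitude (placeholder values)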

How to fix "Error in ParseDataFeedJSON(GA.Data) : code : 400 Reason : Invalid expression. Expression exceeds max size of 4096"

I am trying to get the eventLabels from the Google Analytics API through R.
I tried reducing the number of max.results in the Init() function, but I still get the error.
# get eventLabel which is a unique video ID of the video on the website.
query.list <- Init(startDate,
                   endDate,
                   dimensions = "ga:eventLabel",
                   metrics = "ga:totalEvents",
                   filters = reportFiltersCOVE,
                   max.results = 10000,
                   table.id = tableID_events)
# run query
ga.query <- QueryBuilder(query.list)
# save data for google analytics in data.nko.COVE
data.nko.COVE <- GetReportData(ga.query,
                               gaOAuth_token)
I get the following error:
"Error in ParseDataFeedJSON(GA.Data) : code : 400 Reason : Invalid expression. Expression exceeds max size of 4096"
when I run the last piece of code:
data.nko.COVE <- GetReportData(ga.query,
                               gaOAuth_token)
I need help understanding what this error means and how I can fix it.
Any help is much appreciated.
I'm guessing this error is referring to the string length of your filter expression.
I think if you count the number of characters in 'reportFiltersCOVE', it will exceed 4096.
However, I was not able to find any documentation on the limit for the filter field:
https://developers.google.com/analytics/devguides/reporting/core/v3/reference#filter
Can you try again with a shorter filter expression?
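If that is the cause, here is a minimal, untested sketch assuming reportFiltersCOVE is a long comma-separated OR filter such as "ga:eventLabel==id1,ga:eventLabel==id2,..."; it splits the terms into groups, runs one query per group, and binds the results (the group size of 50 is an arbitrary placeholder):
filter_terms <- strsplit(reportFiltersCOVE, ",", fixed = TRUE)[[1]]
chunks <- split(filter_terms, ceiling(seq_along(filter_terms) / 50))
data.nko.COVE <- do.call(rbind, lapply(chunks, function(terms) {
  query.list <- Init(startDate,
                     endDate,
                     dimensions = "ga:eventLabel",
                     metrics = "ga:totalEvents",
                     filters = paste(terms, collapse = ","),  # stays well under 4096 characters
                     max.results = 10000,
                     table.id = tableID_events)
  GetReportData(QueryBuilder(query.list), gaOAuth_token)
}))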

[DICOM][MergeCOM] MC3 E: Total attribute length (4) not a multiple of size

I have an older version of the MergeCOM library (V4.4.0), and I have now received the latest version (V5.4.0). When I try to integrate the latest MergeCOM library, I get the following error on C-ECHO (logged in merge.log).
DICOM;(20936) 06-21 17:59:01.28 MC3 E: Total attribute length (4) not a multiple of size
DICOM;(20936) 06-21 17:59:01.28 MC3 E: for VR (UN): 8, tag '0x0'
DICOM;(20936) 06-21 17:59:01.28 MC3(ReadMessageToTag) E: Message received encoded improperly Invalid VR length in stream data .
Please find the attached Wireshark capture snapshots:
1. ASSOCIATION-RQ
2. ASSOCIATION-RSP
3. ECHO-RQ
4. ECHO-RSP
5. ABORT
The error log from MergeCOM-3 implies a parsing error when reading the C-ECHO-RSP. The log message implies that MergeCOM-3 did not identify the group 0 element's value representation, and instead interpreted it as UN.
From its appearance in the Wireshark capture, the C-ECHO-RSP seems to be encoded properly, and Wireshark was able to decode it.
Were there any other errors in the logs? Is your data dictionary being loaded properly, such that the library would know the VR of the group 0 length tag (0000,0000)?

How to adjust future.global.maxSize?

Here's a useless example that shows my point
library(future)
a = 1:200000000
object.size(a)
test %<-% head(a)
I get the following error:
Error in getGlobalsAndPackages(expr, envir = envir, persistent =
persistent, : The total size of all global objects that need to be
exported for the future expression (‘head(a)’) is 762.95 MiB. This
exceeds the maximum allowed size of 500.00 MiB (option
'future.global.maxSize'). There are two globals: ‘a’ (762.94 MiB of
class ‘numeric’) and ‘head’ (10.05 KiB of class ‘function’).
Can anyone help me understand how to adjust that future.global.maxSize option? I tried options(future.global.maxSize = 1500000) but that didn't work.
Got it figured out and learned how you can edit options for any package.
This is the line that I used (edit: the change was from 'global' to 'globals'):
options(future.globals.maxSize= 891289600)
If you want to customize the limit, I saw in the package source how the limit is calculated, and this is how you would compute the value for an 850 MiB limit:
850*1024^2 = 891289600
Thanks!
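Applied to the example from the question, a minimal sketch looks like this (note the option name is future.globals.maxSize, with the extra 's'):
library(future)
options(future.globals.maxSize = 850 * 1024^2)  # 891289600 bytes, i.e. an 850 MiB limit
a <- 1:200000000                                # ~763 MiB, larger than the default 500 MiB limit
test %<-% head(a)                               # no longer triggers the maxSize error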

ftruncate failed at the second time

I'm trying to grow a shared memory object after shm_open and an initial ftruncate succeed. Here is the code:
char *uuid = GenerateUUID();
int fd = shm_open(uuid, O_RDWR|O_CREAT|O_EXCL, S_IRUSR|S_IWUSR);
if(fd == -1) perror("shm_open");
size_t shmSize = sizeof(container);
int ret = ftruncate(fd, shmSize);
perror("ftruncate first");
ret = ftruncate(fd, shmSize * 2);
perror("ftruncate second");
The first ftruncate passes, but the second ftruncate fails with errno=22, "Invalid argument".
I also tried to ftruncate the memory object after mmap; according to ftruncate's man page, the shared memory should be zero-filled up to the new length.
Besides that, I also tried to ftruncate the memory object in the child process (this is an IPC scenario between two processes); there, ftruncate returns "Invalid fd, no such file or directory", even though I can shm_open and mmap successfully in the child process.
Any ideas? Thanks!
I think this is a known "feature" of shm_open(), ftruncate(), mmap().
You have to ftruncate() the first time through to give the shared memory a length, but subsequent times ftruncate() gives that error number 22, which you can simply ignore.
The implementation you are using seems to conform to an older specification in which returning an error is allowed behavior for ftruncate(fd, length) when length exceeds the previous length:
If the file previously was smaller than this size, ftruncate() shall
either increase the size of the file or fail.
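A minimal sketch of that workaround, assuming the second call really is a grow attempt that fails with EINVAL (the object name and sizes below are placeholders):
/* Size the object once right after creating it, and treat a later EINVAL
   from a grow attempt as non-fatal, as described above. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    const char *name = "/ftruncate_demo";   /* placeholder name */
    size_t initial = 4096;                  /* placeholder size */

    int fd = shm_open(name, O_RDWR | O_CREAT | O_EXCL, S_IRUSR | S_IWUSR);
    if (fd == -1) { perror("shm_open"); return EXIT_FAILURE; }

    /* The first ftruncate() gives the object its length and must succeed. */
    if (ftruncate(fd, initial) == -1) { perror("ftruncate (initial)"); return EXIT_FAILURE; }

    /* A later attempt to grow the object: on implementations that only allow
       the size to be set once, this fails with EINVAL, which can be ignored. */
    if (ftruncate(fd, initial * 2) == -1 && errno != EINVAL) {
        perror("ftruncate (grow)");
        return EXIT_FAILURE;
    }

    close(fd);
    shm_unlink(name);
    return EXIT_SUCCESS;
}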
