What will happen if I send an empty vector in Boost MPI?
For example, if I send it using the following code:
vector<int> empty_vector;
vector<int> other_vector(1,1);
if (world.rank()==0)
world.isend(1,10,empty_vector);
if (world.rank()==1)
world.recv(0,10,other_vector);
Would this work? Would it just receive an empty vector and assign it to other_vector?
Thanks
I was hoping to get a quick response on this, but I didn't, so I went ahead and just tested it. You can send an empty vector as a message. After sending, rank 1 ended up with an empty other_vector, while rank 0 still had its original other_vector of size 1.
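For reference, here is a minimal, self-contained version of that test (a sketch only - it assumes Boost.MPI is installed, is built with an MPI C++ compiler linked against boost_mpi and boost_serialization, and is run with two processes, e.g. mpirun -n 2):
#include <boost/mpi.hpp>
#include <boost/serialization/vector.hpp>
#include <iostream>
#include <vector>

int main(int argc, char* argv[]) {
    boost::mpi::environment env(argc, argv);
    boost::mpi::communicator world;

    std::vector<int> empty_vector;
    std::vector<int> other_vector(1, 1);

    if (world.rank() == 0) {
        // isend returns a request; wait on it so the send completes before main exits
        boost::mpi::request req = world.isend(1, 10, empty_vector);
        req.wait();
    }
    if (world.rank() == 1) {
        world.recv(0, 10, other_vector);
        // the receive resizes other_vector, so this prints 0
        std::cout << "rank 1: other_vector.size() = " << other_vector.size() << std::endl;
    }
    return 0;
}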
I am trying to use the simple slice operator as follows, but the result is not correct.
arr = c(1,2,3,4,5,6,7)
arr[2:2+3]
I expected to get the slice 2,3,4,5, but instead I get 5. Does R interpret arr[2:2+3] as arr[2:2]+3? If so, then why?
The correct version of the slice would be arr[2:(2+3)], right?
As @camille pointed out in the comment section, the failure is due to the order of operations: the : operator has higher precedence than +, so 2:2+3 is evaluated as (2:2)+3, which is 5, and arr[5] is 5. Parenthesizing it as arr[2:(2+3)], i.e. arr[2:5], gives the expected 2 3 4 5.
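A quick illustration at the R console, using the arr defined above:
arr <- c(1, 2, 3, 4, 5, 6, 7)
2:2 + 3         # ':' binds tighter than '+', so this is (2:2) + 3, i.e. 5
arr[2:2 + 3]    # the same as arr[5], which is 5
arr[2:(2 + 3)]  # arr[2:5], which is 2 3 4 5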
I have a table of origin/destination pairs (lat-long coordinates). I'm using the R library gmapsdistance and mapply to loop over the table calling the API for each row. (The primary reason for this approach is that each row is a unique combination of origin, destination, and departure time, but gmapsdistance does not accept a vector of departure times.)
The problem is that I'm getting random, non-reproducible errors. I'll run the first 2000 rows and something will crash. I will back up and run the first 1000 and then the second 1000 and get no error.
As a result, I'm unable to provide a reproducible example. (If I could, I'd like to think I would have solved this by now.) Here is my mapply call:
result <- mapply(
  gmapsdistance,
  origin = to_skim$coords_orig,
  destination = to_skim$coords_dest,
  combinations = "pairwise",
  key = api_key,
  mode = mode,
  departure = to_skim$departure_secs
)
The error messages themselves are not constant. I've seen ERROR : replacement has length zero but also:
AttValue: " or ' expected
attributes construct error
Couldn't find end of Start Tag html line 2
Extra content at the end of the document
The key is that I can re-run the exact same call and get a successful result. Thanks for any and all advice!
I'm trying to test the field: ResultBufferSize when working with Vertica 7.2.3 using ODBC.
From my understanding, this field should affect the result set.
ResultBufferSize
But even with a value of 1 I get 20K results.
Is there any way to make it work?
ResultBufferSize is the size of the result buffer configured at the ODBC data source. Not at runtime.
You get the actual size of a fetched buffer by preparing the SQL statement with SQLPrepare(), counting the result columns with SQLNumResultCols(), and then, for each column found, calling SQLDescribeCol().
Good luck -
Marco
I need to add a whole other answer to your comment, Tsahi.
I'm not completely sure if I still misunderstand you, though.
Maybe clarifying how I do it in an ODBC-based SQL interpreter sheds some light on the matter.
SQLPrepare() on a string containing, say, "SELECT * FROM foo", returns SQL_SUCCESS, and the passed statement handle becomes valid.
SQLNumResultCols(stmt, &colcount) on that statement handle returns the number of columns in its second parameter.
In a for loop from 0 to (colcount-1), I call SQLDescribeCol(), to get, among other things, the size of the column - that's how many bytes I'd have to allocate to fetch the biggest possible occurrence for that column.
I allocate enough memory to be able to fetch a block of rows, instead of just one row, in a subsequent SQLFetchScroll() call - for example, a block of 10,000 rows. For this, I need to allocate, for each column in colcount, 10,000 times the maximum possible fetchable size, plus a two-byte integer for the null indicator of each column. These two areas - the data area and the null indicator area, for 10,000 rows in my example - make up the fetch buffer size, in other words the result buffer size.
For the prepared statement handle, I call a SQLSetStmtAttr() to set SQL_ATTR_ROW_ARRAY_SIZE to 10,000 rows.
SQLFetchScroll() will return either 10,000 rows in one call, or, if the table foo contains fewer rows, all rows in foo.
This is how I understand it to work.
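Roughly, in ODBC calls, those steps look like this (a sketch only - error checking, the actual SQLBindCol() calls, and the allocation of the bind buffers are omitted, and dbc stands for an already connected connection handle):
// assumes #include <sql.h> and <sqlext.h>, inside a function, with env/dbc already set up
SQLHSTMT stmt;
SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);

// 1. prepare the statement
SQLPrepare(stmt, (SQLCHAR *) "SELECT * FROM foo", SQL_NTS);

// 2. count the result columns
SQLSMALLINT colcount;
SQLNumResultCols(stmt, &colcount);

// 3. describe each column to find the biggest possible fetch size per row
SQLULEN row_bytes = 0;
for (SQLUSMALLINT i = 1; i <= colcount; i++) {
    SQLCHAR     name[256];
    SQLSMALLINT namelen, datatype, decimals, nullable;
    SQLULEN     colsize;
    SQLDescribeCol(stmt, i, name, sizeof(name), &namelen,
                   &datatype, &colsize, &decimals, &nullable);
    row_bytes += colsize + sizeof(SQLLEN);   // data plus null/length indicator
}

// 4. allocate row_bytes * 10000 for the bind buffers (omitted here), then
//    tell the driver how many rows one SQLFetchScroll() call should return
SQLSetStmtAttr(stmt, SQL_ATTR_ROW_ARRAY_SIZE, (SQLPOINTER) (SQLULEN) 10000, 0);

// 5. one call fetches up to 10,000 rows (or all of foo, if it has fewer) into the bound buffers
SQLFetchScroll(stmt, SQL_FETCH_NEXT, 0);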
You can do the maths the other way round:
You set the max fetch buffer.
You prepare and describe the statement and columns as explained above.
For each column, you count two bytes for the null indicator plus the maximum possible fetch size as reported by SQLDescribeCol(), to get the number of bytes that need to be allocated for one row.
You integer divide the max fetch buffer by the sum of bytes for one row.
And you use that integer-division result in the call to SQLSetStmtAttr() that sets SQL_ATTR_ROW_ARRAY_SIZE.
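To put made-up numbers on it: say the statement has 4 columns whose maximum sizes from SQLDescribeCol() add up to 198 bytes, plus 2 bytes of null indicator per column, so one row needs 198 + 4*2 = 206 bytes. With a fetch buffer of 2 MB (2,097,152 bytes), 2,097,152 / 206 = 10,180 by integer division, so you would set SQL_ATTR_ROW_ARRAY_SIZE to 10,180.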
Hope it makes some sense ...
I am totally new to R. Hopefully you can help. I am trying to simulate from a Hawkes process using R. The main idea is that, first of all, I simulate some events from a homogeneous Poisson process. Then each of these events will create its own children using a non-homogeneous Poisson process. The code is as below:
SimulateHawkesprocess <- function(n, tmax, lambda, lambda2){
  times <- Simulatehomogeneousprocess(n, lambda)
  count <- 1
  while(count < n){
    newevent <- times[count] + Simulateinhomogeneousprocess(lambda2, tmax, lambdamax = NA)
    times <- c(times, newevent)
    count <- count + 1
    n <- length(times)
  }
  return(times)
}
But the R code is producing an infinite loop (probably because of the last line inside the loop: n <- length(times)). How can I overcome this problem? How can I put in a stopping condition?
This is not an R-specific problem. You need to get your algorithm working correctly first. Compare the code you have written against what you want to do. If you need help with the algorithm then tag the question as such. Moreover, the function call to Simulateinhomogeneousprocess is very inconsistent. Some insight into that function would help. What is that function returning, a number or a vector?
Within the loop you are increasing the value of n by at least 1 each time, so you never reach the end.
newevent<-times[count] + Simulateinhomogeneousprocess(lambda2,tmax,lambdamax=NA)
This creates a non-empty variable.
times<-c(times,newevent)
This increases the length of the times vector by at least 1 (since newevent is non-empty).
count<-count+1
n<-length(times)
You increase count by 1, but you also increase n by at least 1, thus creating a never-ending loop. One of these things has to change for the loop to stop.
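One possible way to put in a stopping condition (a sketch only - it keeps the original function names, which are assumed to exist, and adds a hypothetical maxevents cap so a supercritical process cannot run forever):
SimulateHawkesprocess <- function(n, tmax, lambda, lambda2, maxevents = 10000){
  times <- Simulatehomogeneousprocess(n, lambda)
  count <- 1
  # walk through the growing vector of events, but never touch n inside the loop;
  # stop once every event has produced its children, or once the cap is reached
  while(count <= length(times) && length(times) < maxevents){
    newevent <- times[count] + Simulateinhomogeneousprocess(lambda2, tmax, lambdamax = NA)
    times <- c(times, newevent)
    count <- count + 1
  }
  return(times)
}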
A bit-stuffing based framing protocol uses an 8-bit delimiter pattern of 01111110. If the output bit-string after stuffing is 01111100101, then the input bit-string is
(A) 0111110100
(B) 0111110101
(C) 0111111101
(D) 0111111111
Correct answer given is B.
My question is why a bit is stuffed after the five 1's from the left, even though the delimiter has six consecutive 1's.
I think we would add a bit only when we get six consecutive 1's.
Correct me if I am wrong.
The delimiter given is 01111110. The delimiter is basically used to mark the start and end of a frame, so we need to make sure that if the same pattern (01111110) also occurs in the data, the receiver will not take it for the start or end of a frame but will treat it as a valid data portion. That's why, after '011111' in the data bits (five consecutive 1's), one '0' bit is stuffed: by stuffing after five 1's, six consecutive 1's can only ever appear in the delimiter itself, so the data can never give the impression of a start or end of frame. For option (B), the input 0111110101 contains five consecutive 1's, so a '0' is stuffed right after them, giving the transmitted string 01111100101.
When the receiver receives the frame, it checks for five consecutive ones; if the next bit is a zero, it drops it (if the next bit is 1 instead of 0, the receiver checks the bit after that: if it is 0, this is the delimiter, otherwise an error has occurred). This is known as '0' bit stuffing.
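To make the rule concrete, here is a small sketch of both directions, working on strings of '0'/'1' characters (illustrative only; the function names are made up):
#include <iostream>
#include <string>

// Sender side: insert a '0' after every run of five consecutive '1' bits.
std::string stuff(const std::string &in) {
    std::string out;
    int ones = 0;
    for (char b : in) {
        out += b;
        ones = (b == '1') ? ones + 1 : 0;
        if (ones == 5) {   // five 1's in a row: stuff a '0' so the data never looks like the flag
            out += '0';
            ones = 0;
        }
    }
    return out;
}

// Receiver side: remove the stuffed bit that follows every run of five consecutive '1' bits.
std::string destuff(const std::string &in) {
    std::string out;
    int ones = 0;
    for (std::size_t i = 0; i < in.size(); ++i) {
        out += in[i];
        ones = (in[i] == '1') ? ones + 1 : 0;
        if (ones == 5) {
            ++i;           // skip the stuffed '0' (a '1' here would signal a delimiter or an error)
            ones = 0;
        }
    }
    return out;
}

int main() {
    std::cout << stuff("0111110101") << "\n";    // 01111100101 - option (B) after stuffing
    std::cout << destuff("01111100101") << "\n"; // 0111110101
    return 0;
}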