How would one rewrite the following in proper Java 8 functional style, using filter, collectors, etc.:
private BigInteger calculateProduct(char[] letters) {
    int OFFSET = 65;
    BigInteger[] bigPrimes = Arrays.stream(
            new int[] { 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31,
                        37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89,
                        97, 101, 103, 107, 109, 113 })
            .mapToObj(BigInteger::valueOf)
            .toArray(BigInteger[]::new);
    BigInteger result = BigInteger.ONE;
    for (char c : letters) {
        //System.out.println(c + "=" + (int) c);
        if (c < OFFSET) {
            return new BigInteger("-1");
        }
        int pos = c - OFFSET;
        result = result.multiply(bigPrimes[pos]);
    }
    return result;
}
@Test
public void test() {
    assertThat(calculateProduct(capitalize("carthorse"))).isEqualTo(calculateProduct(capitalize("orchestra")));
}

private char[] capitalize(String word) {
    return word.toUpperCase().toCharArray();
}
You may do it like this:
static final BigInteger[] PRIMES
        = IntStream.of(
                2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53,
                59, 61, 67, 71, 73, 79, 83, 89, 97, 101, 103, 107, 109, 113)
          .mapToObj(BigInteger::valueOf)
          .toArray(BigInteger[]::new);
private BigInteger calculateProduct(char[] letters) {
    final int OFFSET = 65;
    final CharBuffer cb = CharBuffer.wrap(letters);
    if (cb.chars().anyMatch(c -> c < OFFSET))
        return BigInteger.ONE.negate();
    return cb.chars()
             .mapToObj(c -> PRIMES[c - OFFSET])
             .reduce(BigInteger.ONE, BigInteger::multiply);
}
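As an aside, CharBuffer.wrap is only there to obtain an IntStream from a char[] without copying the array. If one extra copy doesn't bother you, streaming via new String(letters).chars() works just as well. A minimal sketch of that variant, reusing the PRIMES array above:
private BigInteger calculateProductViaString(char[] letters) {
    final int OFFSET = 65; // 'A'
    final String s = new String(letters); // copies the array once
    if (s.chars().anyMatch(c -> c < OFFSET))
        return BigInteger.ONE.negate();
    return s.chars()
            .mapToObj(c -> PRIMES[c - OFFSET])
            .reduce(BigInteger.ONE, BigInteger::multiply);
}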
Note that moving the creation of the PRIMES array out of the method, to avoid generating it again for each invocation, is an improvement that works independently of whether you use loops or “functional style” operations.
Also, your code doesn’t handle characters being too large, so you might improve the method to:
private BigInteger calculateProduct(char[] letters) {
    final int OFFSET = 65;
    final CharBuffer cb = CharBuffer.wrap(letters);
    if (cb.chars().mapToObj(c -> c - OFFSET).anyMatch(c -> c < 0 || c >= PRIMES.length))
        return BigInteger.ONE.negate();
    return cb.chars()
             .mapToObj(c -> PRIMES[c - OFFSET])
             .reduce(BigInteger.ONE, BigInteger::multiply);
}
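For example, with the bounds check in place, characters that are out of range in either direction take the -1 branch instead of throwing an ArrayIndexOutOfBoundsException (a quick illustration, assuming the method above):
System.out.println(calculateProduct("HELLO".toCharArray()));   // 19*11*37*37*47 = 13447687
System.out.println(calculateProduct("HELLO!".toCharArray()));  // '!' (33) is below 'A' (65): prints -1
System.out.println(calculateProduct("hello".toCharArray()));   // 'h' (104) maps past PRIMES: prints -1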
I can't tell why you want that, but maybe this will do (it creates many more objects and is more verbose):
private static BigInteger calculateProduct(char[] letters) {
    int OFFSET = 65;
    BigInteger[] bigPrimes = Arrays.stream(
            new int[] { 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31,
                        37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89,
                        97, 101, 103, 107, 109, 113 })
            .mapToObj(BigInteger::valueOf)
            .toArray(BigInteger[]::new);
    Optional<Character> one = IntStream.range(0, letters.length)
            .mapToObj(x -> letters[x])
            .filter(x -> x < OFFSET)
            .findAny();
    if (one.isPresent()) {
        return new BigInteger("-1");
    } else {
        return IntStream.range(0, letters.length)
                .mapToObj(x -> letters[x])
                .parallel()
                .reduce(
                        BigInteger.ONE,
                        (x, y) -> {
                            int pos = y - OFFSET;
                            return x.multiply(bigPrimes[pos]);
                        },
                        BigInteger::multiply);
    }
}
I am using an xgboost model to predict onto a raster stack. I have successfully used the same approach with CART, xgb and Random Forest models:
library(raster)
library(xgboost)
# create a RasterStack or RasterBrick with a set of predictor layers
logo <- brick(system.file("external/rlogo.grd", package="raster"))
names(logo)
# known presence and absence points
p <- matrix(c(48, 48, 48, 53, 50, 46, 54, 70, 84, 85, 74, 84, 95, 85,
66, 42, 26, 4, 19, 17, 7, 14, 26, 29, 39, 45, 51, 56, 46, 38, 31,
22, 34, 60, 70, 73, 63, 46, 43, 28), ncol=2)
a <- matrix(c(22, 33, 64, 85, 92, 94, 59, 27, 30, 64, 60, 33, 31, 9,
99, 67, 15, 5, 4, 30, 8, 37, 42, 27, 19, 69, 60, 73, 3, 5, 21,
37, 52, 70, 74, 9, 13, 4, 17, 47), ncol=2)
# extract values for points
xy <- rbind(cbind(1, p), cbind(0, a))
v <- data.frame(cbind(pa=xy[,1], extract(logo, xy[,2:3])))
xgb <- xgboost(data = data.matrix(subset(v, select = -c(pa))), label = v$pa,
nrounds = 5)
raster::predict(model = xgb, logo)
But with xgboost I get the following error:
Error in xgb.DMatrix(newdata, missing = missing) :
xgb.DMatrix does not support construction from list
The problem is that predict.xgb.Booster does not accept a data.frame for argument newdata (see ?predict.xgb.Booster). That is unexpected (all common predict.* methods take a data.frame), but we can work around it. I show how to do that below, using the "terra" package instead of the obsolete "raster" package (but the solution is exactly the same for either package).
The example data
library(terra)
library(xgboost)
logo <- rast(system.file("ex/logo.tif", package="terra"))
p <- matrix(c(48, 48, 48, 53, 50, 46, 54, 70, 84, 85, 74, 84, 95, 85,
66, 42, 26, 4, 19, 17, 7, 14, 26, 29, 39, 45, 51, 56, 46, 38, 31,
22, 34, 60, 70, 73, 63, 46, 43, 28), ncol=2)
a <- matrix(c(22, 33, 64, 85, 92, 94, 59, 27, 30, 64, 60, 33, 31, 9,
99, 67, 15, 5, 4, 30, 8, 37, 42, 27, 19, 69, 60, 73, 3, 5, 21,
37, 52, 70, 74, 9, 13, 4, 17, 47), ncol=2)
xy <- rbind(cbind(1, p), cbind(0, a))
v <- extract(logo, xy[,2:3])
xgb <- xgboost(data = data.matrix(v), label=xy[,1], nrounds = 5)
The work-around is to write a prediction function that first coerces the data.frame with the "new data" to a matrix. We can then use that function with terra's predict method for SpatRaster:
xgbpred <- function(model, data, ...) {
    predict(model, newdata=as.matrix(data), ...)
}
p <- predict(logo, model=xgb, fun=xgbpred)
plot(p)
I have an (N, I) tensor of N rows with I indices between 0 and Z, e.g.,
N=5, I=3, Z=100:
foo = tensor([[83, 5, 85],
[ 7, 60, 66],
[89, 25, 63],
[58, 67, 47],
[12, 46, 40]], device='cuda:0')
Now I want to efficiently add X random additional new indices (i.e., not yet included in the tensor!) between 0 and Z to the tensor, e.g.:
foo_new = tensor([[83, 5, 85, 9, 43, 53, 42],
[ 7, 60, 66, 85, 64, 22, 1],
[89, 25, 63, 38, 24, 4, 75],
[58, 67, 47, 83, 43, 29, 55],
[12, 46, 40, 74, 21, 11, 52]], device='cuda:0')
The tensor would in the end have in each row I+X unique indices between 0 and Z, where I indices are the ones from the initial tensor, and X indices are drawn uniformly at random without replacement from the remaining indices {0...Z}\{I(n)}, where {I(n)} are the indices of the nth row.
So it's like a multidimensional random draw without replacement from indices 0 to Z, where the first I draws (in each row) are enforced to result in the indices given by the initial tensor.
How would I do this efficiently, especially with potentially large Z?
What I tried so far (which was quite slow):
device = torch.cuda.current_device()
notinfoo = torch.ones((N,I), device=device).byte()
N_row = torch.arange(N, device=device).unsqueeze(dim=-1)
notinfoo[N_row, foo] = 0
foo_new = torch.stack([torch.cat((f, torch.arange(Z, device=device)[nf][torch.randperm(Z-I, device=device)][:X])) for f,nf in zip(foo,notinfoo)])
First use numpy's np.random.choice with replace=False to draw samples without replacement, then concatenate them with the original tensor using torch.cat. (Note that this makes the new values unique among themselves, but it does not exclude the indices already present in foo; in the output below, 83 appears twice in the first row. An extra masking step would be needed to fully satisfy that constraint.)
import numpy as np
# 5*4 = 20 distinct draws from [0, 100); moved to foo's device before concatenating
foo_new = torch.tensor(np.random.choice(100, (5, 4), replace=False), device=foo.device)  # Z = 100
foo_new = torch.cat((foo, foo_new), 1)
foo_new
tensor([[83, 5, 85, 56, 83, 16, 20],
[ 7, 60, 66, 43, 31, 75, 67],
[89, 25, 63, 96, 3, 13, 11],
[58, 67, 47, 55, 92, 70, 35],
[12, 46, 40, 79, 61, 58, 76]])
I am receiving data via UART from an Arduino. I followed the documentation and I get the data as expected most of the time. Sometimes the read does not finish: it gets a few zeroes, then starts a new read with the rest of the data. This can be seen in the example output; all the data is there, but split into 2 reads. I am only sending data once a second, so there should be plenty of time.
My Code:
private UartDeviceCallback mUartCallback = new UartDeviceCallback() {
    @Override
    public boolean onUartDeviceDataAvailable(UartDevice uart) {
        // Read available data from the UART device
        try {
            readUartBuffer(uart);
        } catch (IOException e) {
            Log.w(TAG, "Unable to access UART device", e);
        }
        // Continue listening for more interrupts
        return true;
    }

    private void readUartBuffer(UartDevice uart) throws IOException {
        // Maximum amount of data to read at one time
        final int maxCount = 20;
        byte[] buffer = new byte[maxCount];
        uart.read(buffer, maxCount);
        Log.i(TAG, Arrays.toString(buffer));
    }

    @Override
    public void onUartDeviceError(UartDevice uart, int error) {
        Log.w(TAG, uart + ": Error event " + error);
    }
};
Example output:
[50, 48, 54, 46, 52, 53, 32, 50, 49, 46, 55, 48, 32, 51, 51, 46, 51, 48, 32, 0]
[50, 48, 54, 46, 57, 51, 32, 50, 49, 46, 55, 48, 32, 51, 51, 46, 51, 48, 32, 0]
[50, 48, 54, 46, 48, 52, 32, 50, 49, 46, 55, 48, 32, 51, 51, 46, 51, 48, 32, 0]
[50, 48, 55, 46, 51, 52, 32, 50, 49, 46, 55, 48, 32, 51, 51, 46, 51, 48, 32, 0]
[50, 48, 54, 46, 53, 48, 32, 50, 49, 46, 55, 48, 32, 51, 51, 46, 51, 48, 32, 0]
[50, 48, 55, 46, 51, 54, 32, 50, 49, 46, 55, 48, 32, 51, 51, 46, 51, 48, 32, 0]
[50, 48, 54, 46, 57, 51, 32, 50, 49, 46, 55, 48, 32, 51, 51, 46, 0, 0, 0, 0]
[51, 48, 32, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
[50, 48, 55, 46, 51, 56, 32, 50, 49, 46, 55, 48, 32, 51, 51, 46, 0, 0, 0, 0]
[51, 48, 32, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
[50, 48, 54, 46, 52, 57, 32, 50, 49, 46, 55, 48, 32, 51, 51, 46, 51, 48, 32, 0]
I am quite sure the problem is on the R Pi side, since looping back from the Arduino to my PC works with no problems. I also found that unless I make maxCount the exact number of bytes I am sending, the problem is more prevalent: the data then arrives in randomly sized chunks, but in the correct order. Am I wasting my time? Should I just use I2C?
"Should I just use I2C?" - no.
There is no problem with R Pi because "all the data is there". They (can be) split (or not, especially if it short) into 2 (or more) reads, because onUartDeviceDataAvailable() can be fired before ALL data available (but only part of it was available), so you should read them in a loop until you receive all of them. And, from your code: maxCount - Maximum amount of data to read at one time is not size for ALL data, it's max. size for one-time read. You code can be something like that (NB! it's just example, not complete solution):
private void readUartBuffer(UartDevice uart) throws IOException {
    // Buffer for the complete message
    final int maxSizeOfAllData = 30;
    byte[] completeDataBuffer = new byte[maxSizeOfAllData];
    // Buffer for a single uart.read() call
    final int maxCount = 20;
    byte[] buffer = new byte[maxCount];
    int bytesReadOnce;      // number of bytes actually read
    int totalBytesRead = 0;
    // keep reading while data is available
    while ((bytesReadOnce = uart.read(buffer, maxCount)) > 0) {
        // append the new data to the "all data" buffer
        for (int i = 0; i < bytesReadOnce; i++) {
            completeDataBuffer[totalBytesRead++] = buffer[i];
            if (totalBytesRead == maxSizeOfAllData) {
                // process complete buffer here
                // ...
                totalBytesRead = 0;
            }
        }
    }
}
Also, take a look at NmeaGpsModule.java from Android Things user-space drivers and LoopbackActivity.java from Android Things samples.
I ended up adding an end character ('$', ASCII 36) and using a dataCompleteFlag:
// dataCount and finalDataBuffer are instance fields that persist across reads
private void readUartBuffer(UartDevice uart) throws IOException {
    // Maximum amount of data to read at one time
    final int maxCount = 32;
    byte[] buffer = new byte[maxCount];
    boolean dataCompleteFlag = false;
    uart.read(buffer, maxCount);
    Log.i(TAG, Arrays.toString(buffer));
    if (!dataCompleteFlag) {
        for (int i = 0; i < maxCount; i++) {
            if (buffer[i] == 36) {          // '$' end character
                dataCompleteFlag = true;
                dataCount = 0;
            }
            else if (dataCount > maxCount) {
                dataCount = 0;
            }
            else if (buffer[i] != 0) {
                finalDataBuffer[dataCount] = buffer[i];
                dataCount++;
            }
        }
    }
    if (dataCompleteFlag) {
        //process data
    }
}

@Override
public void onUartDeviceError(UartDevice uart, int error) {
    Log.w(TAG, uart + ": Error event " + error);
}
};
I extracted this data from a file:
forests<-read.table("~/Desktop/f.txt", header = FALSE, sep = " ", fill = TRUE)
library(arules) # after installing the package
I get an error when I type
forests<- apriori(forests, parameter = list(support=0.3))
Error in asMethod(object) : column(s) 1, 2, 3, 4, 5, 6, 7, 8, 9,
10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26,
27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43,
44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60,
61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77,
78, 79, 80, 81, 82, 83, 84, 85 not logical or a factor. Discretize the
columns first.
I tried discretize(forests). It still doesn't work.
itemFrequencyPlot(forests) also gives an error saying unable to find inherited method.
Have I imported the data incorrectly?
I am trying to forecast using R's arima from Java in Eclipse, using Rserve. I need to output the forecast and the intervals in array format. I can print the forecast to the Eclipse console, but how do I retrieve the point forecast and the confidence intervals as arrays? Here is my code.
RConnection c = null;
int[] kings = { 60, 43, 67, 50, 56, 42, 50, 65, 68, 43, 65, 34, 47, 34,
        49, 41, 13, 35, 53, 56, 16, 43, 69, 59, 48, 59, 86, 55, 68, 51,
        33, 49, 67, 77, 81, 67, 71, 81, 68, 70, 77, 56 };
try {
    c = new RConnection();
    System.out.println("INFO : The Server version is :-- " + c.getServerVersion());
    c.eval("library(\"forecast\")");
    c.assign("kings", kings);
    c.eval("datats<-data;");
    c.eval("kingsts<-ts(kings);");
    c.eval("arima<-auto.arima(kingsts);");
    c.eval("fcast<-forecast(arima, h=12);");
    String f = c.eval("paste(capture.output(print(fcast)),collapse='\\n')").asString();
    System.out.println(f);
    // Code found online suggests the following, but it does not work
    REXP fs = c.eval("summary(fcast);");
    double[] forecast = fs.asDoubleArray();
    for (int i = 0; i < forecast.length; i++)
        System.out.println(forecast[i]);
I figured out how to retrieve the values: I converted the forecast into a data frame.
c.eval("ds=as.data.frame(fcast);");
//get the forecast values
c.eval("names(ds)[1]<-paste(\"actual\");");
REXP actual=c.eval("(ds$\"actual\")");
double[] forecast = actual.asDoubles();
for (int i = 0; i < forecast.length; i++)
    System.out.println("forecast values are: " + forecast[i]);
I still need to figure out how to attach the time stamps to the data frame.
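One possible way to get the time stamps, and the prediction intervals too, is to pull them from the same objects. This is a sketch, assuming forecast() was called with its default 80%/95% levels, so that as.data.frame(fcast) has the columns "Lo 80", "Hi 80", "Lo 95" and "Hi 95":
// time points of the forecast: time() returns the time index of the ts fcast$mean
double[] times = c.eval("as.numeric(time(fcast$mean))").asDoubles();
// 95% prediction-interval bounds from the data frame ds created above
double[] lo95 = c.eval("ds[[\"Lo 95\"]]").asDoubles();
double[] hi95 = c.eval("ds[[\"Hi 95\"]]").asDoubles();
for (int i = 0; i < times.length; i++)
    System.out.println("t=" + times[i] + " interval: [" + lo95[i] + ", " + hi95[i] + "]");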