Can Storm's HdfsBolt flush data after a timeout as well?

We are using Storm to process streaming data and store it in HDFS. We have everything working, but have one issue. I understand that we can specify the number of tuples after which the data gets flushed to HDFS using SyncPolicy, something like this:
SyncPolicy syncPolicy = new CountSyncPolicy(Integer.parseInt(args[3]));
The question I have is: can the data also be flushed after a timeout? For example, say we have set the SyncPolicy above to 1000 tuples. If for whatever reason we get 995 tuples and then the data stops coming in for a while, is there any way that Storm can flush the 995 records to HDFS after a specified timeout (e.g. 5 seconds)?
Thanks in advance for any help on this!
Shay

Yes, if you send a tick tuple to the HDFS bolt, it will cause the bolt to try to sync to the HDFS file system. All this happens in the HDFS bolt's execute function.
To configure tick tuples for your topology, set Config.TOPOLOGY_TICK_TUPLE_FREQ_SECS in your topology config. In Java, setting it to every 300 seconds would look like this:
Config topologyConfig = new Config();
topologyConfig.put(Config.TOPOLOGY_TICK_TUPLE_FREQ_SECS, 300);
StormSubmitter.submitTopology("mytopology", topologyConfig, builder.createTopology());
You'll have to adjust that last line depending on your circumstances.
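One caveat: a topology-level tick frequency delivers tick tuples to every bolt in the topology, so bolts that shouldn't react to ticks need to filter them out. A minimal sketch of that check (using the same Constants-based test as the answer below):
@Override
public void execute(Tuple tuple) {
    // Ignore system tick tuples in bolts that don't need them.
    if (tuple.getSourceComponent().equals(Constants.SYSTEM_COMPONENT_ID)
            && tuple.getSourceStreamId().equals(Constants.SYSTEM_TICK_STREAM_ID)) {
        collector.ack(tuple); // ack so the tick doesn't count as failed
        return;
    }
    // ... normal tuple processing ...
}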

There is an alternative solution for this problem.
First, let's clarify the sync policy: if your sync policy is 1000, then HdfsBolt only syncs the data after every 1000 tuples, by calling hsync() in execute(). That only clears the buffer by pushing data to disk; for faster writes, the disk may use its cache rather than writing to the file directly.
The data is written to the file only when the size of the data matches your rotation policy, which you need to specify at bolt-creation time:
FileRotationPolicy rotationPolicy = new FileSizeRotationPolicy(100.0f, Units.KB);
So, to flush the records to the file after a timeout, separate your tick tuples from normal tuples in the execute method and calculate the time difference between the two. If the difference is greater than the timeout period, write the data to the file.
By handling tick tuples separately, you also avoid the tick tuples themselves being written to your file.
See the code below for a better understanding:
public class CustomHdfsBolt1 extends AbstractHdfsBolt {

    private static final Logger LOG = LoggerFactory.getLogger(CustomHdfsBolt1.class);

    private transient FSDataOutputStream out;
    private RecordFormat format;
    private long offset = 0L;
    private int tickTupleCount = 0;
    private String type;
    private long normalTupleTime; // arrival time of the last normal tuple
    private long tickTupleTime;   // arrival time of the last tick tuple

    public CustomHdfsBolt1() {
    }

    public CustomHdfsBolt1(String type) {
        this.type = type;
    }

    public CustomHdfsBolt1 withFsUrl(String fsUrl) {
        this.fsUrl = fsUrl;
        return this;
    }

    public CustomHdfsBolt1 withConfigKey(String configKey) {
        this.configKey = configKey;
        return this;
    }

    public CustomHdfsBolt1 withFileNameFormat(FileNameFormat fileNameFormat) {
        this.fileNameFormat = fileNameFormat;
        return this;
    }

    public CustomHdfsBolt1 withRecordFormat(RecordFormat format) {
        this.format = format;
        return this;
    }

    public CustomHdfsBolt1 withSyncPolicy(SyncPolicy syncPolicy) {
        this.syncPolicy = syncPolicy;
        return this;
    }

    public CustomHdfsBolt1 withRotationPolicy(FileRotationPolicy rotationPolicy) {
        this.rotationPolicy = rotationPolicy;
        return this;
    }

    public CustomHdfsBolt1 addRotationAction(RotationAction action) {
        this.rotationActions.add(action);
        return this;
    }

    protected static boolean isTickTuple(Tuple tuple) {
        return tuple.getSourceComponent().equals(Constants.SYSTEM_COMPONENT_ID)
                && tuple.getSourceStreamId().equals(Constants.SYSTEM_TICK_STREAM_ID);
    }

    public void execute(Tuple tuple) {
        try {
            if (isTickTuple(tuple)) {
                tickTupleTime = Calendar.getInstance().getTimeInMillis();
                // Time elapsed since the last normal tuple arrived.
                long timeDiff = tickTupleTime - normalTupleTime;
                long diffInSeconds = TimeUnit.MILLISECONDS.toSeconds(timeDiff);
                if (normalTupleTime > 0 && diffInSeconds > 5) { // timeout period; specify the value you want
                    this.rotateWithOutFileSize(tuple);
                }
            } else {
                normalTupleTime = Calendar.getInstance().getTimeInMillis();
                this.rotateWithFileSize(tuple);
            }
        } catch (IOException e) {
            LOG.warn("write/sync failed.", e);
            this.collector.fail(tuple);
        }
    }

    // Normal path: write the tuple, then rotate only if the file-size rotation policy is met.
    public void rotateWithFileSize(Tuple tuple) throws IOException {
        syncHdfs(tuple);
        this.collector.ack(tuple);
        if (this.rotationPolicy.mark(tuple, this.offset)) {
            this.rotateOutputFile();
            this.offset = 0L;
            this.rotationPolicy.reset();
        }
    }

    // Timeout path (tick tuple): flush whatever is buffered and rotate unconditionally.
    // The tick tuple itself is not written, so it never ends up in the file.
    public void rotateWithOutFileSize(Tuple tuple) throws IOException {
        synchronized (this.writeLock) {
            if (this.out instanceof HdfsDataOutputStream) {
                ((HdfsDataOutputStream) this.out).hsync(EnumSet.of(SyncFlag.UPDATE_LENGTH));
            } else {
                this.out.hsync();
            }
        }
        this.collector.ack(tuple);
        this.rotateOutputFile();
        this.offset = 0L;
        this.rotationPolicy.reset();
    }

    public void syncHdfs(Tuple tuple) throws IOException {
        byte[] bytes = this.format.format(tuple);
        synchronized (this.writeLock) {
            this.out.write(bytes);
            this.offset += (long) bytes.length;
            if (this.syncPolicy.mark(tuple, this.offset)) {
                if (this.out instanceof HdfsDataOutputStream) {
                    ((HdfsDataOutputStream) this.out).hsync(EnumSet.of(SyncFlag.UPDATE_LENGTH));
                } else {
                    this.out.hsync();
                }
                this.syncPolicy.reset();
            }
        }
    }

    public void closeOutputFile() throws IOException {
        this.out.close();
    }

    public void doPrepare(Map conf, TopologyContext topologyContext, OutputCollector collector) throws IOException {
        LOG.info("Preparing HDFS Bolt...");
        this.fs = FileSystem.get(URI.create(this.fsUrl), this.hdfsConfig);
        this.tickTupleCount = 0;
        this.normalTupleTime = 0;
        this.tickTupleTime = 0;
    }

    public Path createOutputFile() throws IOException {
        Path path = new Path(this.fileNameFormat.getPath(),
                this.fileNameFormat.getName((long) this.rotation, System.currentTimeMillis()));
        this.out = this.fs.create(path);
        return path;
    }
}
You can directly use this class in your project.
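For reference, here is a hedged sketch of how the bolt might be wired up; the HDFS URL, output path, and delimiter are illustrative assumptions, and the tick frequency must be at least as fine-grained as the 5-second threshold hard-coded in execute():
SyncPolicy syncPolicy = new CountSyncPolicy(1000);
FileRotationPolicy rotationPolicy = new FileSizeRotationPolicy(100.0f, Units.KB);
FileNameFormat fileNameFormat = new DefaultFileNameFormat().withPath("/storm/output/"); // assumed path
RecordFormat format = new DelimitedRecordFormat().withFieldDelimiter(","); // assumed delimiter

CustomHdfsBolt1 bolt = new CustomHdfsBolt1()
        .withFsUrl("hdfs://namenode:8020") // assumed NameNode URL
        .withFileNameFormat(fileNameFormat)
        .withRecordFormat(format)
        .withSyncPolicy(syncPolicy)
        .withRotationPolicy(rotationPolicy);

Config topologyConfig = new Config();
topologyConfig.put(Config.TOPOLOGY_TICK_TUPLE_FREQ_SECS, 5); // drives the timeout check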
Thanks,

Related

How to access a method (value) that is nested in a public static class

How do I access/get the string return values of a public static method that is nested in a public static class?
I want to display the string on a screen.
I've tried using private StringProperty variables and setDataString() to store the method's return values, as seen in the code snippet below.
The method named "byteToHex(buffer)" is the one whose return value I'm trying to access.
public static class SerialPortReader implements SerialPortEventListener
{
final public static char COMMA = ',';
final public static String COMMA_STR = ",";
final public static char ESCAPE_CHAR = '\\';
@Override
public void serialEvent(SerialPortEvent event)
{
if(event.isRXCHAR() && event.getEventValue() > 0)
{
try {
byte buffer[] = serialPort.readBytes();
byteToHex(buffer);
TransCeiveSerialData dataString = new TransCeiveSerialData();
dataString.setDataString(byteToHex(buffer));
/*
* wait some milliseconds before sending next data package to avoid data losses
*/
try {
Thread.sleep(100);
}catch(InterruptedException ie)
{
Logger.getLogger(TransCeiveSerialData.class.getName()).log(Level.SEVERE, null, ie);
}
}
catch(SerialPortException spe) {System.out.println("Error in port listener: " + spe);}
}
}
}
public static String byteToHex(byte x[])
{
StringBuffer retString = new StringBuffer();
for(int i = 0; i < x.length; ++i)
{
retString.append(Integer.toHexString(0x0100 + (x[i] & 0x00FF)).substring(1));
}
return retString.toString();
}
Using, for example, System.out.println("Received data: " + instanceOfClass.getDataString()); in an external class to get the method's return string, I get null. But I expect to get 31323334353637380d0a.
I've also tried binding the values but without any success.
Do you perhaps have any ideas how I can solve this problem? Your help will be very much appreciated.
Thanks a lot in advance!
AvJoe

Netty: TCP Client Server File transfer: Exception TooLongFrameException

I am new to Netty and I am trying to design a solution as below for transferring a file from server to client over TCP:
1. Zero copy based file transfer in case of non-ssl based transfer (Using default region of the file)
2. ChunkedFile transfer in case of SSL based transfer.
The Client - Server file transfer works in this way:
1. The client sends the location of the file to be transfered
2. Based on the location (sent by the client) the server transfers the file to the client
The file content could be anything (String /image /pdf etc) and any size.
Now, I get this TooLongFrameException at the server side when running the code below (server/client), even though the server is just decoding the path received from the client:
io.netty.handler.codec.TooLongFrameException: Adjusted frame length exceeds 65536: 215542494061 - discarded
at io.netty.handler.codec.LengthFieldBasedFrameDecoder.fail(LengthFieldBasedFrameDecoder.java:522)
at io.netty.handler.codec.LengthFieldBasedFrameDecoder.failIfNecessary(LengthFieldBasedFrameDecoder.java:500)
Now, My question is:
Am I wrong with the order of Encoders and Decoders and its configuration? If so, what is the correct way to configure it to receive a file from the server?
I went through a few related StackOverflow posts (SO Q1, SO Q2, SO Q3, SO Q4). I got to know about the LengthFieldBasedFrameDecoder, but I didn't get to know how to configure its corresponding LengthFieldPrepender at the server (encoding side). Is it even required at all?
Please point me in the right direction.
FileClient:
public final class FileClient {
static final boolean SSL = System.getProperty("ssl") != null;
static final int PORT = Integer.parseInt(System.getProperty("port", SSL ? "8992" : "8023"));
static final String HOST = System.getProperty("host", "127.0.0.1");
public static void main(String[] args) throws Exception {
// Configure SSL.
final SslContext sslCtx;
if (SSL) {
SelfSignedCertificate ssc = new SelfSignedCertificate();
sslCtx = SslContextBuilder.forServer(ssc.certificate(), ssc.privateKey()).build();
} else {
sslCtx = null;
}
// Configure the client
EventLoopGroup group = new NioEventLoopGroup();
try {
Bootstrap b = new Bootstrap();
b.group(group)
.channel(NioSocketChannel.class)
.option(ChannelOption.SO_KEEPALIVE, true)
.handler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch) throws Exception {
ChannelPipeline pipeline = ch.pipeline();
if (sslCtx != null) {
pipeline.addLast(sslCtx.newHandler(ch.alloc(), HOST, PORT));
}
pipeline.addLast("frameDecoder", new LengthFieldBasedFrameDecoder(64*1024, 0, 8));
pipeline.addLast("frameEncoder", new LengthFieldPrepender(4));
pipeline.addLast(new ObjectDecoder(ClassResolvers.cacheDisabled(null)));
pipeline.addLast(new ObjectEncoder());
pipeline.addLast( new FileClientHandler()); }
});
// Connect to the server.
ChannelFuture f = b.connect(HOST,PORT).sync();
// Wait until the connection is closed.
f.channel().closeFuture().sync();
} finally {
// Shut down all event loops to terminate all threads.
group.shutdownGracefully();
}
}
}
FileClientHandler:
public class FileClientHandler extends ChannelInboundHandlerAdapter{
@Override
public void channelActive(ChannelHandlerContext ctx) {
String filePath = "/Users/Home/Documents/Data.pdf";
ctx.writeAndFlush(Unpooled.wrappedBuffer(filePath.getBytes()));
}
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
System.out.println("File Client Handler Read method...");
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
cause.printStackTrace();
}
}
FileServer:
/**
* Server that accepts the path of a file and echoes back its content.
*/
public final class FileServer {
static final boolean SSL = System.getProperty("ssl") != null;
static final int PORT = Integer.parseInt(System.getProperty("port", SSL ? "8992" : "8023"));
public static void main(String[] args) throws Exception {
// Configure SSL.
final SslContext sslCtx;
if (SSL) {
SelfSignedCertificate ssc = new SelfSignedCertificate();
sslCtx = SslContextBuilder.forServer(ssc.certificate(), ssc.privateKey()).build();
} else {
sslCtx = null;
}
// Configure the server.
EventLoopGroup bossGroup = new NioEventLoopGroup(1);
EventLoopGroup workerGroup = new NioEventLoopGroup();
try {
ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup).channel(NioServerSocketChannel.class)
.option(ChannelOption.SO_KEEPALIVE, true).handler(new LoggingHandler(LogLevel.INFO))
.childHandler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch) throws Exception {
ChannelPipeline pipeline = ch.pipeline();
if (sslCtx != null) {
pipeline.addLast(sslCtx.newHandler(ch.alloc()));
}
pipeline.addLast("frameDecoder",new LengthFieldBasedFrameDecoder(64*1024, 0, 8));
pipeline.addLast("frameEncoder", new LengthFieldPrepender(4));
pipeline.addLast(new ObjectDecoder(ClassResolvers.cacheDisabled(null)));
pipeline.addLast(new ObjectEncoder());
pipeline.addLast(new ChunkedWriteHandler());
pipeline.addLast(new FileServerHandler());
}
});
// Start the server.
ChannelFuture f = b.bind(PORT).sync();
// Wait until the server socket is closed.
f.channel().closeFuture().sync();
} finally {
bossGroup.shutdownGracefully();
workerGroup.shutdownGracefully();
}
}
}
FileServerHandler:
public class FileServerHandler extends ChannelInboundHandlerAdapter {
@Override
public void channelRead(ChannelHandlerContext ctx, Object obj) throws Exception {
RandomAccessFile raf = null;
long length = -1;
try {
ByteBuf buff = (ByteBuf)obj;
byte[] bytes = new byte[buff.readableBytes()];
buff.readBytes(bytes);
String msg = new String(bytes);
raf = new RandomAccessFile(msg, "r");
length = raf.length();
} catch (Exception e) {
ctx.writeAndFlush("ERR: " + e.getClass().getSimpleName() + ": " + e.getMessage() + '\n');
return;
} finally {
if (length < 0 && raf != null) {
raf.close();
}
}
if (ctx.pipeline().get(SslHandler.class) == null) {
// SSL not enabled - can use zero-copy file transfer.
ctx.writeAndFlush(new DefaultFileRegion(raf.getChannel(), 0, length));
} else {
// SSL enabled - cannot use zero-copy file transfer.
ctx.writeAndFlush(new ChunkedFile(raf));
}
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
cause.printStackTrace();
System.out.println("Exception server.....");
}
}
I referred Netty In Action and code samples from here
There are multiple things wrong with your server/client. First, the SSL: for the client, you don't initialize an SslContext for a server; instead you would do something like this:
sslCtx = SslContextBuilder.forClient().trustManager(InsecureTrustManagerFactory.INSTANCE).build();
On the server side you use a SelfSignedCertificate, which in itself isn't wrong, but keep in mind it should only be used for debugging purposes, not in production. In addition, you use ChannelOption.SO_KEEPALIVE, which isn't recommended since the keepalive interval is OS-dependent. Furthermore, you added an ObjectEncoder/ObjectDecoder pair to your pipeline which in your case doesn't do anything useful, so you can remove it.
You also configured your LengthFieldBasedFrameDecoder wrong, due to an incomplete and incorrect parameter list. Per the Netty docs, you need the constructor overload that also takes lengthAdjustment and initialBytesToStrip. Besides not stripping the length field, you also defined the wrong lengthFieldLength: it should match your LengthFieldPrepender's lengthFieldLength, which is 4 bytes. In conclusion, you could use the constructor like this:
new LengthFieldBasedFrameDecoder(64 * 1024, 0, 4, 0, 4) // maxFrameLength, lengthFieldOffset, lengthFieldLength, lengthAdjustment, initialBytesToStrip
In both of your handlers, you don't specify a Charset when encoding/decoding your String, which could lead to problems: if no Charset is defined, the system default is used, and that can vary between machines. You could do something like this:
//to encode the String
string.getBytes(StandardCharsets.UTF_8);
//to decode the String
new String(bytes, StandardCharsets.UTF_8);
Additionally, you tried to use DefaultFileRegion when no SslHandler was in the pipeline, which would have been fine if you hadn't added the length-field handlers, since prepending the length field requires a memory copy of the byte[], defeating zero-copy. Moreover, I would recommend using ChunkedNioFile instead of ChunkedFile, because it's non-blocking, which is always a good thing. You would do that like this:
new ChunkedNioFile(randomAccessFile.getChannel())
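Putting those points together, a hedged sketch of what the server's child pipeline could look like once the object codecs are removed (the SSL handler, when enabled, would still be added first):
pipeline.addLast("frameDecoder", new LengthFieldBasedFrameDecoder(64 * 1024, 0, 4, 0, 4));
pipeline.addLast("frameEncoder", new LengthFieldPrepender(4));
pipeline.addLast(new ChunkedWriteHandler()); // streams ChunkedNioFile as ByteBuf chunks
pipeline.addLast(new FileServerHandler());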
One final thing, on how to decode a ChunkedFile: as it's split into chunks, you can simply assemble them together with a plain OutputStream. Here's an old file handler of mine:
public class FileTransferHandler extends SimpleChannelInboundHandler<ByteBuf> {
private final Path path;
private final int size;
private final int hash;
private OutputStream outputStream;
private int writtenBytes = 0;
private byte[] buffer = new byte[0];
protected FileTransferHandler(Path path, int size, int hash) {
this.path = path;
this.size = size;
this.hash = hash;
}
@Override
protected void channelRead0(ChannelHandlerContext ctx, ByteBuf byteBuf) throws Exception {
if(this.outputStream == null) {
Files.createDirectories(this.path.getParent());
if(Files.exists(this.path))
Files.delete(this.path);
this.outputStream = Files.newOutputStream(this.path, StandardOpenOption.CREATE, StandardOpenOption.APPEND);
}
int size = byteBuf.readableBytes();
if(size > this.buffer.length)
this.buffer = new byte[size];
byteBuf.readBytes(this.buffer, 0, size);
this.outputStream.write(this.buffer, 0, size);
this.writtenBytes += size;
if(this.writtenBytes == this.size && MurMur3.hash(this.path) != this.hash) {
System.err.println("Received file has wrong hash");
return;
}
}
@Override
public void channelInactive(ChannelHandlerContext ctx) throws Exception {
if(this.outputStream != null)
this.outputStream.close();
}
}
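For context, a hedged sketch of how such a handler might sit at the tail of the client pipeline once the file's size and hash have been negotiated; expectedSize, expectedHash, and the target path are hypothetical placeholders, and the protected constructor above is assumed to be made accessible:
pipeline.addLast("frameDecoder", new LengthFieldBasedFrameDecoder(64 * 1024, 0, 4, 0, 4));
pipeline.addLast("frameEncoder", new LengthFieldPrepender(4));
// expectedSize and expectedHash come from your own handshake (hypothetical).
pipeline.addLast(new FileTransferHandler(Paths.get("/tmp/received.bin"), expectedSize, expectedHash));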

Iterator from object with next() and get()

Given an object like this:
Matcher matcher = pattern.matcher(sql);
with usage like so:
Set<String> matches = new HashSet<>();
while (matcher.find()) {
matches.add(matcher.group());
}
I'd like to replace this while loop with something more object-oriented, like so:
new Iterator<String>() {
@Override
public boolean hasNext() {
return matcher.find();
}
@Override
public String next() {
return matcher.group();
}
}
so that I can easily e.g. make a Stream of matches, stick to using fluent APIs and such.
The thing is, I don't know and can't find a more concise way to create this Stream or Iterator. An anonymous class like above is too verbose for my taste.
I had hoped to find something like IteratorFactory.from(matcher::find, matcher::group) or StreamSupport.of(matcher::find, matcher::group) in the jdk, but so far no luck. I've no doubt libraries like apache commons or guava provide something for this, but let's say I can't use those.
Is there a convenient factory for Streams or Iterators that takes a hasNext/next method combo in the jdk?
In java-9 you could do it via:
Set<String> result = matcher.results()
.map(MatchResult::group)
.collect(Collectors.toSet());
System.out.println(result);
In java-8 you would need a back-port for this, taken from Holger's fabulous answer
EDIT
By the way, there is a single method, tryAdvance, that could incorporate find/group, something like this:
static class MyIterator extends AbstractSpliterator<String> {
private Matcher matcher;
public MyIterator(Matcher matcher) {
// I can't think of a better way to estimate the size here
// maybe you can figure out a better one here
super(matcher.regionEnd() - matcher.regionStart(), 0);
this.matcher = matcher;
}
@Override
public boolean tryAdvance(Consumer<? super String> action) {
while (matcher.find()) {
action.accept(matcher.group());
return true;
}
return false;
}
}
And usage for example:
Pattern p = Pattern.compile("\\d");
Matcher m = p.matcher("12345");
Set<String> result = StreamSupport.stream(new MyIterator(m), false)
.collect(Collectors.toSet());
This class I wrote embodies what I wanted to find in the JDK; apparently it just doesn't exist. Eugene's accepted answer offers a Java 9 Stream solution, though.
public static class SearchingIterator<T> implements Iterator<T> {
private final BooleanSupplier advancer;
private final Supplier<T> getter;
private Optional<T> next;
public SearchingIterator(BooleanSupplier advancer, Supplier<T> getter) {
this.advancer = advancer;
this.getter = getter;
search();
}
private void search() {
boolean hasNext = advancer.getAsBoolean();
next = hasNext ? Optional.of(getter.get()) : Optional.empty();
}
@Override
public boolean hasNext() {
return next.isPresent();
}
@Override
public T next() {
T current = next.orElseThrow(NoSuchElementException::new); // NoSuchElementException per the Iterator contract
search();
return current;
}
}
Usage:
Matcher matcher = Pattern.compile("\\d").matcher("123");
Iterator<String> it = new SearchingIterator<>(matcher::find, matcher::group);
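And since the original goal was a Stream of matches, the iterator bridges over with the JDK's spliterator helpers; a small sketch:
Set<String> matches = StreamSupport.stream(
        Spliterators.spliteratorUnknownSize(it, Spliterator.ORDERED | Spliterator.NONNULL),
        false)
    .collect(Collectors.toSet());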

Hadoop: the Mapper didn't read files from multiple input paths

The Mapper didn't manage to read a file from multiple directories. Could anyone help?
I need to read one file in each mapper. I've added multiple input paths and implemented a custom WholeFileInputFormat and WholeFileRecordReader. In the map method, I don't need the input key. I make sure that each mapper reads one whole file.
Command line: hadoop jar AutoProduce.jar Autoproduce /input_a /input_b /output
I specified two input paths: 1. /input_a; 2. /input_b.
Run method snippets:
Job job = new Job(getConf());
job.setInputFormatClass(WholeFileInputFormat.class);
FileInputFormat.setInputPaths(job, new Path(args[0]), new Path(args[1]));
FileOutputFormat.setOutputPath(job, new Path(args[2]));
map method snippets:
public void map(NullWritable key, BytesWritable value, Context context){
FileSplit fileSplit = (FileSplit) context.getInputSplit();
System.out.println("Directory :" + fileSplit.getPath().toString());
......
}
Custom WholeFileInputFormat:
class WholeFileInputFormat extends FileInputFormat<NullWritable, BytesWritable> {
@Override
protected boolean isSplitable(JobContext context, Path file) {
return false;
}
@Override
public RecordReader<NullWritable, BytesWritable> createRecordReader(
InputSplit split, TaskAttemptContext context) throws IOException,
InterruptedException {
WholeFileRecordReader reader = new WholeFileRecordReader();
reader.initialize(split, context);
return reader;
}
}
Custom WholeFileRecordReader:
class WholeFileRecordReader extends RecordReader<NullWritable, BytesWritable> {
private FileSplit fileSplit;
private Configuration conf;
private BytesWritable value = new BytesWritable();
private boolean processed = false;
@Override
public void initialize(InputSplit split, TaskAttemptContext context)
throws IOException, InterruptedException {
this.fileSplit = (FileSplit) split;
this.conf = context.getConfiguration();
}
@Override
public boolean nextKeyValue() throws IOException, InterruptedException {
if (!processed) {
byte[] contents = new byte[(int) fileSplit.getLength()];
Path file = fileSplit.getPath();
FileSystem fs = file.getFileSystem(conf);
FSDataInputStream in = null;
try {
in = fs.open(file);
IOUtils.readFully(in, contents, 0, contents.length);
value.set(contents, 0, contents.length);
} finally {
IOUtils.closeStream(in);
}
processed = true;
return true;
}
return false;
}
@Override
public NullWritable getCurrentKey() throws IOException,InterruptedException {
return NullWritable.get();
}
@Override
public BytesWritable getCurrentValue() throws IOException,InterruptedException {
return value;
}
@Override
public float getProgress() throws IOException {
return processed ? 1.0f : 0.0f;
}
@Override
public void close() throws IOException {
// do nothing
}
}
PROBLEM:
After setting two input paths, all map tasks read files from only one directory.
Thanks in advance.
You'll have to use MultipleInputs instead of FileInputFormat in the driver, so your code should be:
MultipleInputs.addInputPath(job, new Path(args[0]), <Input_Format_Class_1>);
MultipleInputs.addInputPath(job, new Path(args[1]), <Input_Format_Class_2>);
.
.
.
MultipleInputs.addInputPath(job, new Path(args[N-1]), <Input_Format_Class_N>);
So if you want to use WholeFileInputFormat for the first input path and TextInputFormat for the second input path, you'll have to use it the following way:
MultipleInputs.addInputPath(job, new Path(args[0]), WholeFileInputFormat.class);
MultipleInputs.addInputPath(job, new Path(args[1]), TextInputFormat.class);
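If the two directories also need different map logic, MultipleInputs has an overload that binds a Mapper class per path; a hedged sketch (FirstMapper and SecondMapper are hypothetical names):
MultipleInputs.addInputPath(job, new Path(args[0]), WholeFileInputFormat.class, FirstMapper.class);
MultipleInputs.addInputPath(job, new Path(args[1]), WholeFileInputFormat.class, SecondMapper.class);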
Hope this works for you!

Read File and Return Synchronously (Metro App)

I am writing a Metro App.
I am trying to read a file and return a float[] from the data. But no matter what I do, the function seems to return null. I have tried the solutions to similar questions, with no luck.
For example if I use:
float[] floatArray = new ModelReader("filename.txt").ReadModel()
The result will be a null array.
However if I use:
new ModelReader("filename.txt")
The correct array will be printed to the console because "Test" also prints the array before returning it. This seems very weird to me.
Please give me some guidance, I have no idea what is wrong.
public class ModelReader
{
float[] array;
public ModelReader(String name)
{
ReadModelAsync(name);
}
public float[] ReadModel()
{
return array;
}
private async Task ReadModelAsync(String name)
{
await readFile(name);
}
async Task readFile(String name)
{
// settings
var path = @"Assets\models\" + name;
var folder = Windows.ApplicationModel.Package.Current.InstalledLocation;
// acquire file
var file = await folder.GetFileAsync(path);
// read content
var read = await Windows.Storage.FileIO.ReadTextAsync(file);
using (StringReader sr = new StringReader(read))
{
Test test = new Test(getFloatArray(sr));
this.array = test.printArray();
}
}
private float[] getFloatArray(StringReader sr) { ... }
public class Test
{
public float[] floatArray;
public Test(float[] floatArray)
{
this.floatArray = floatArray;
}
public float[] printArray()
{
for (int i = 0; i < floatArray.Length; i++)
{
Debug.WriteLine(floatArray[i]);
}
return floatArray;
}
}
You're trying to get the result of an asynchronous operation before it has completed. I recommend you read my intro to async / await and follow-up with the async / await FAQ.
In particular, your constructor:
public ModelReader(String name)
{
ReadModelAsync(name);
}
is returning before ReadModelAsync is complete. Since constructors cannot be asynchronous, I recommend you use an asynchronous factory or asynchronous lazy initialization as described on my blog (also available in my AsyncEx library).
Here's a simple example using an asynchronous factory approach:
public class ModelReader
{
float[] array;
private ModelReader()
{
}
public static async Task<ModelReader> Create(string name)
{
var ret = new ModelReader();
await ret.ReadModelAsync(name);
return ret;
}
...
}
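A hypothetical call site would then await the factory instead of calling a constructor (this has to run inside an async method itself):
// Awaiting Create() guarantees the file has been read before ReadModel() is called.
ModelReader reader = await ModelReader.Create("filename.txt");
float[] floatArray = reader.ReadModel();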
