I am studying Java 8, especially the Stream API, but I still don't get how stream and map work.
My understanding of streams was that the result would look like 1111 2222 when I use peek() and forEach(), but the println() output is interleaved.
I thought that with map().filter().map().filter(), the first map() would run over all elements and return a stream, then the next filter() would run, and so on through the pipeline. So this result really confuses me.
This is my code:
package exam_20170823;

import java.io.File;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class StreamEx2 {
    public static void main(String[] args) {
        File[] fileArr = {
            new File("Ex1.java"),
            new File("Ex1.bak"),
            new File("Ex1.txt"),
            new File("Ex2.java"),
            new File("Ex1")
        };

        /* 1) make stream
           2) find filename extension
           3) change 2) to uppercase
           4) remove duplicates
           5) print
        */
        Stream<File> fileStream = Stream.of(fileArr);

        fileStream.map(s -> s.getName())
                  .filter(s -> s.indexOf(".") != -1)
                  .peek(a -> System.out.println(a))
                  .map(s -> s.substring(s.indexOf(".") + 1).toUpperCase())
                  .distinct()
                  .forEach(s -> System.out.println(s));
    }
}
and this is the result:
Ex1.java
JAVA
Ex1.bak
BAK
Ex1.txt
TXT
Ex2.java
I just want to know why the result is not like this: first "Ex1.java, Ex1.bak, Ex1.txt, Ex2.java", and then "JAVA, BAK, TXT".
I used peek() first and forEach() last, so I expected that
after peek() the stream would be Ex1.java, Ex1.bak, Ex1.txt, Ex2.java,
that the following map() would then hold JAVA, BAK, TXT, and
that forEach() would finally print each element of the stream. That was what I expected, but the output is confusing. Can anyone help me understand why?
I think you are misunderstanding laziness here. It is not the case that all elements go through the map, then all of them through the filter, and then all that pass the filter go through the next map. That is not how streams work.
The processing is lazy: one element at a time is taken from the source (in your case, an array of Files), and that element goes through all of the stages of the Stream pipeline (map, then filter, then peek, and so on); notice that if the filter rejects an element, it never reaches peek at all. Then the second element is taken from the source, goes through the same stages, and so on.
That is why you see output from the different stages interleaved, one element at a time. See this example:
Stream.of(1, 2, 3, 4)
      .filter(x -> {
          System.out.println("Filtering x = " + x);
          return x > 2;
      })
      .map(x -> {
          System.out.println("Mapping x = " + x);
          return x + 1;
      })
      .collect(Collectors.toList());
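For those four elements, the printed output comes out interleaved like this, showing the element-by-element flow:
Filtering x = 1
Filtering x = 2
Filtering x = 3
Mapping x = 3
Filtering x = 4
Mapping x = 4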
Notice how the mapping stage is executed only from the third element onward, because the first two do not satisfy the Predicate in the filter stage.
I'm new to Rust, and I'm trying to make an interface where the user can choose a file by typing the filename from a list of available files.
This function is supposed to return the DirEntry corresponding to the chosen file:
fn ask_user_to_pick_file(available_files: Vec<DirEntry>) -> DirEntry {
    println!("Which month would you like to sum?");
    print_file_names(&available_files);
    let input = read_line_from_stdin();
    let chosen = available_files.iter()
        .find(|dir_entry| dir_entry.file_name().into_string().unwrap() == input)
        .expect("didnt match any files");
    return chosen
}
However, it appears chosen is somehow borrowed here? I get the following error:
35 | return chosen
| ^^^^^^ expected struct `DirEntry`, found `&DirEntry`
Is there a way I can "unborrow" it? Or do I have to implement the Copy trait for DirEntry?
If it matters, I don't care about the Vec after this method, so if "unborrowing" chosen destroys the Vec, that's okay by me (as long as the compiler agrees).
Use into_iter() instead of iter() so you get owned values instead of references out of the iterator. After that change the code will compile and work as expected:
fn ask_user_to_pick_file(available_files: Vec<DirEntry>) -> DirEntry {
    println!("Which month would you like to sum?");
    print_file_names(&available_files);
    let input = read_line_from_stdin();
    let chosen = available_files
        .into_iter() // changed from iter() to into_iter() here
        .find(|dir_entry| dir_entry.file_name().into_string().unwrap() == input)
        .expect("didnt match any files");
    chosen
}
I have the following code:
import zio._
import scala.concurrent.Future

case class AppError(description: String) extends Throwable

// legacy-code imitation
def method(x: Int): Task[Boolean] = {
  Task.fromFuture { implicit ec => Future.successful(x == 0) }
}

def handler(input: Int): IO[AppError, Int] = {
  for {
    result <- method(input)
    _      <- IO.fail(AppError("app error")).when(result)
  } yield input
}
but this code does not compile, because the compiler says the result type is:
ZIO[Any, Throwable, Int]
How do I convert from Task (at the point where I call method) to IO?
You'll need to decide what you want to do with Throwable errors that are not AppError.
If you decide you want to map them to an AppError, you can do:
method(input).mapError {
  case ae: AppError => ae
  case other => AppError(other.getMessage)
}
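Applied to the handler from the question, that might look like the following sketch (assuming ZIO 1.x, with the error mapped right where method is called):
def handler(input: Int): IO[AppError, Int] =
  for {
    // convert the Task's Throwable error channel into AppError here
    result <- method(input).mapError {
                case ae: AppError => ae
                case other => AppError(other.getMessage)
              }
    _ <- IO.fail(AppError("app error")).when(result)
  } yield input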
If you want to refine those errors and only keep the ones that are AppError then you can use one of the refine* family of operators, which will keep errors that match the predicate and terminate the fiber otherwise.
method(input).refineToOrDie[AppError] // IO[AppError, Boolean]
// Or
method(input).refineOrDie { case ae: AppError => ae } // IO[AppError, Boolean]
Or if you want to assume that all errors from method are considered "Fiber terminating", then you can use .orDie to absorb the error and kill the fiber:
method(input).orDie // UIO[Boolean]
Or if you want to recover from the error and handle it in a different way, you could use the catch* family:
method(input).catchAll(_ => UIO.succeed(false)) // UIO[Boolean]
Finally, if you want the result mapped into an Either, you can use .either, which lifts the error out of the error channel and maps it into an Either[E, A]:
method(input).either // UIO[Either[Throwable, Boolean]]
There is a great cheat sheet (though admittedly a bit out of date) here as well
I have a buffer that is actually an ArrayList<Object>.
This happens asynchronously:
The buffer list changes very frequently, I mean 15-50 times in a single second, and the idea is that whenever there's an update, I remove the first element by position with buffer.removeAt(0) and append the new value with buffer.add(new).
At some point I call a function that does a calculation over the buffer list. I go through the list element by element, and at some point I run into an NPE because an element has been removed asynchronously.
How do I solve this NPE? I was thinking of making a deep copy, but a deep copy would mean going through the buffer list and allocating data, which basically means that while I make the deep copy I can still run into the NPE.
How are problems like these solved?
How do I solve the NPE?
What would be a more optimized way, since this is going to consume a lot of memory?
Code:
private fun observeFrequentData() {
    frequentData.observe(owner, Observer { data ->
        if (accelerationData == null) return@Observer

        GlobalScope.launch {
            val a = data[0].toDouble()
            val b = data[1].toDouble()
            val c = a + b
            val timestamp = System.currentTimeMillis()
            val customObj = CustomObj(c, timestamp)

            if (buffer.size >= 5000) {
                buffer.removeAt(0)
            }
            buffer.add(acceleration)
        }
    })
}

fun getBuffer() {
    val mappedData = buffer.map { it.smth } // NPE, it == null
}
If you are doing lots of removals from index 0 and inserts at the end, then ArrayList is probably not the container to use.
You could consider using a LinkedList:
buffer.removeFirst();
and
buffer.add(acceleration);
Also note the following from the LinkedList documentation regarding synchronization:
Note that this implementation is not synchronized. If multiple threads
access a linked list concurrently, and at least one of the threads
modifies the list structurally, it must be synchronized externally. (A
structural modification is any operation that adds or deletes one or
more elements; merely setting the value of an element is not a
structural modification.) This is typically accomplished by
synchronizing on some object that naturally encapsulates the list. If
no such object exists, the list should be "wrapped" using the
Collections.synchronizedList method. This is best done at creation
time, to prevent accidental unsynchronized access to the list:
List list = Collections.synchronizedList(new LinkedList(...));
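In Kotlin, that declaration could look something like this (just a sketch; CustomObj is the class from the question, and the element type is an assumption):
import java.util.Collections
import java.util.LinkedList

// a list whose individual operations are synchronized; iterating it still needs an explicit lock
val buffer: MutableList<CustomObj> = Collections.synchronizedList(LinkedList<CustomObj>())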
You can also use the synchronized keyword on your piece of code, as @patrickf suggested.
To take care of performance, instead of making the whole method synchronized, you can wrap just the three buffer-related lines of code (size, removeAt, and add) in a synchronized block.
Something like:
        // ...
        synchronized(buffer) { // any shared lock object works; here the list itself is used
            if (buffer.size >= 5000) {
                buffer.removeAt(0)
            }
            buffer.add(acceleration)
        }
    }
})
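The read side needs the same lock too, otherwise the map in getBuffer can still observe the list mid-update. A minimal sketch (smth is just the placeholder property from the question):
fun getBuffer() =
    synchronized(buffer) {
        // snapshot/transform while holding the same lock the writer uses
        buffer.map { it.smth }
    }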
Hope this helps!
I'm trying to take a large file and split it into many smaller files. The location where each split occurs is based on a predicate returned from examining the contents of each given line (isNextObject function).
I have attempted to read in the large file via the File.ReadLines function so that I can iterate through the file one line at a time without having to hold the entire file in memory. My approach was to group the sequence into a sequence of smaller sub-sequences (one per file to be written out).
I found a useful function that Tomas Petricek created on fssnip called groupWhen. This function worked great for my initial testing on a small subset of the file, but a StackOverflowException is thrown when using the real file. I am not sure how to adjust the groupWhen function to prevent this (I'm still an F# greenie).
Here is a simplified version of the code, showing only the relevant parts, that will recreate the StackOverflowException:
// This is the function created by Tomas Petricek where the StackOverflowException is occurring
module Seq =
    /// Iterates over elements of the input sequence and groups adjacent elements.
    /// A new group is started when the specified predicate holds about the element
    /// of the sequence (and at the beginning of the iteration).
    ///
    /// For example:
    ///    Seq.groupWhen isOdd [3;3;2;4;1;2] = seq [[3]; [3; 2; 4]; [1; 2]]
    let groupWhen f (input:seq<_>) = seq {
        use en = input.GetEnumerator()
        let running = ref true

        // Generate a group starting with the current element. Stops generating
        // when it finds an element such that 'f en.Current' is 'true'
        let rec group() =
            [ yield en.Current
              if en.MoveNext() then
                  if not (f en.Current) then yield! group() // *** Exception occurs here ***
                  else running := false ]

        if en.MoveNext() then
            // While there are still elements, start a new group
            while running.Value do
                yield group() |> Seq.ofList }
This is the gist of the code making use of Tomas' function:
module Extractor =

    open System
    open System.IO
    open Microsoft.FSharp.Reflection

    // ... elided a few functions, including "isNextObject", which is
    // a string -> bool (examines the line and returns true
    // if the string meets the criteria that we are at the
    // start of the next inner file)

    let writeFile outputDir file =
        // ... write out "file" to the file system
        // NOTE: file is a seq<string>

    let writeFiles outputDir (files : seq<seq<_>>) =
        files
        |> Seq.iter (fun file -> writeFile outputDir file)
And here is the relevant code in the console application that makes use of the functions:
let lines = inputFile |> File.ReadLines
writeFiles outputDir (lines |> Seq.groupWhen isNextObject)
Any ideas on the proper way to stop groupWhen from blowing the stack? I'm not sure how I would convert the function to use an accumulator (or to use a continuation instead, which I think is the correct terminology).
The problem with this is that the group() function returns a list, which is an eagerly evaluated data structure. That means every time you call group() it has to run to the end, collect all results in a list, and return the list, so the recursive call happens within that same evaluation (i.e. truly recursively), creating stack pressure.
To mitigate this problem, you could just replace the list with a lazy sequence:
let rec group() = seq {
    yield en.Current
    if en.MoveNext() then
        if not (f en.Current) then yield! group()
        else running := false }
However, I would consider less drastic approaches. This example is a good illustration of why you should avoid doing recursion yourself and resort to ready-made folds instead.
For example, judging by your description, it seems that Seq.windowed may work for you.
It's easy to overuse sequences in F#, IMO. You can accidentally get stack overflows, plus they are slow.
So (not actually answering your question),
personally I would just fold over the seq of lines using something like this:
let isNextObject line =
    line = "---"

type State = {
    fileIndex : int
    filename : string
    writer : System.IO.TextWriter
}

let makeFilename index =
    sprintf "File%i" index

let closeFile (state:State) =
    //state.writer.Close() // would use this in real code
    state.writer.WriteLine("=== Closing {0} ===", state.filename)

let createFile index =
    let newFilename = makeFilename index
    let newWriter = System.Console.Out // dummy
    newWriter.WriteLine("=== Creating {0} ===", newFilename)
    // create new state with new writer
    { fileIndex = index + 1; writer = newWriter; filename = newFilename }

let writeLine (state:State) line =
    if isNextObject line then
        // finish old file here
        closeFile state
        // create new file here and return updated state
        createFile state.fileIndex
    else
        // write the line to the current file
        state.writer.WriteLine(line)
        // return the unchanged state
        state

let processLines (lines: string seq) =
    // setup
    let initialState = createFile 1
    // process the file
    let finalState = lines |> Seq.fold writeLine initialState
    // tidy up
    closeFile finalState
(Obviously a real version would use files rather than the console)
Yes, it is crude, but it is easy to reason about, with
no unpleasant surprises.
Here's a test:
processLines [
    "a"; "b"
    "---"; "c"; "d"
    "---"; "e"; "f"
    ]
And here's what the output looks like:
=== Creating File1 ===
a
b
=== Closing File1 ===
=== Creating File2 ===
c
d
=== Closing File2 ===
=== Creating File3 ===
e
f
=== Closing File3 ===
Can I retrieve a Method via reflection, somehow combine it with a target object, and return it as something that looks like a function in Scala (i.e. something you can call using parentheses)? The argument list is variable. It doesn't have to be a "first-class" function (I've updated the question), just a syntactic-looking function call, e.g. f(args).
My attempt so far looks something like this (which technically is pseudo-code, just to avoid cluttering up the post with additional definitions):
class method_ref(o: AnyRef, m: java.lang.reflect.Method) {
  def apply(args: Any*): some_return_type = {
    var oa: Array[Object] = args.toArray.map { _.asInstanceOf[Object] }
    println("calling: " + m.toString + " with: " + oa.length)
    m.invoke(o, oa: _*) match {
      case x: some_return_type => x
      case u => throw new Exception("unknown result" + u)
    }
  }
}
With the above I was able to get past the compiler errors, but now I have a run-time exception:
Caused by: java.lang.IllegalArgumentException: argument type mismatch
The example usage is something like:
var f = ... some expression returning method_ref ...;
...
var y = f(x) // looks like a function, doesn't it?
UPDATE
Changing the args:Any* to args:AnyRef* actually fixed my run-time problem, so the above approach (with the fix) works fine for what I was trying to accomplish. I think I ran into a more general issue with varargs here.
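For reference, here is a sketch of what the wrapper might look like after that change (the return type is simply left as AnyRef instead of the some_return_type placeholder):
class method_ref(o: AnyRef, m: java.lang.reflect.Method) {
  def apply(args: AnyRef*): AnyRef = {
    // Method.invoke takes Object varargs, so a Seq[AnyRef] can be spliced in directly
    m.invoke(o, args: _*)
  }
}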
Sure. Here's some code I wrote that adds an interface to a function. It's not exactly what you want, but I think it can be adapted with few changes. The most difficult change is in invoke, where you'll need to replace the invoked method with the one obtained through reflection. Also, you'll have to take care that the received method you are processing is apply. And instead of f, you'd use the target object. It should probably look something like this:
def invoke(proxy: AnyRef, method: Method, args: Array[AnyRef]) = method match {
  case m if /* m is apply */ => target.getClass().getMethod("name", /* parameter type */).invoke(target, args: _*)
  case _ => /* ??? */
}
Anyway, here's the code:
import java.lang.reflect.{Proxy, InvocationHandler, Method}

class Handler[T, R](f: Function1[T, R])(implicit fm: Manifest[Function1[T, R]]) extends InvocationHandler {
  def invoke(proxy: AnyRef, method: Method, args: Array[AnyRef]) = method.invoke(f, args: _*)

  def withInterface[I](implicit m: Manifest[I]) = {
    require(m <:< manifest[Function1[T, R]] && m.erasure.isInterface)
    Proxy.newProxyInstance(m.erasure.getClassLoader(), Array(m.erasure), this).asInstanceOf[I]
  }
}

object Handler {
  def apply[T, R](f: Function1[T, R])(implicit fm: Manifest[Function1[T, R]]) = new Handler(f)
}
And use it like this:
trait CostFunction extends Function1[String, Int]
Handler { x: String => x.length } withInterface manifest[CostFunction]
The use of "manifest" there helps with syntax. You could write it like this:
Handler({ x: String => x.length }).withInterface[CostFunction] // or
Handler((_: String).length).withInterface[CostFunction]
One could also drop the manifest and use classOf instead with a few changes.
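A sketch of how that classOf-based variant might look (this is an adaptation under that suggestion, not the original answer's code; the name ClassHandler is made up):
import java.lang.reflect.{Proxy, InvocationHandler, Method}

// same idea as above, but taking a Class[I] instead of a Manifest
class ClassHandler[T, R](f: Function1[T, R]) extends InvocationHandler {
  def invoke(proxy: AnyRef, method: Method, args: Array[AnyRef]) = method.invoke(f, args: _*)

  def withInterface[I](iface: Class[I]): I = {
    require(iface.isInterface)
    Proxy.newProxyInstance(iface.getClassLoader(), Array[Class[_]](iface), this).asInstanceOf[I]
  }
}

// usage: new ClassHandler({ x: String => x.length }).withInterface(classOf[CostFunction])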
If you're not looking for a generic invoke that takes the method name, but rather want to capture a particular method on a particular object, and you don't want to get too deeply into manifests and such, I think the following is a decent solution:
class MethodFunc[T <: AnyRef](o: Object, m: java.lang.reflect.Method, tc: Class[T]) {
  def apply(oa: Any*): T = {
    val result = m.invoke(o, oa.map(_.asInstanceOf[AnyRef]): _*)
    if (result.getClass == tc) result.asInstanceOf[T]
    else throw new IllegalArgumentException("Unexpected result " + result)
  }
}
Let's see it in action:
val s = "Hi there, friend"
val m = s.getClass.getMethods.find(m => {
  m.getName == "substring" && m.getParameterTypes.length == 2
}).get
val mf = new MethodFunc(s,m,classOf[String])
scala> mf(3,8)
res10: String = there
The tricky part is getting the correct type for the return value. Here it's left up to you to supply it. For example, if you supply classOf[CharSequence] it will fail because it's not the right class. (Manifests are better for this, but you did ask for simple... though I think "simple to use" is generally better than "simple to code the functionality".)