Scala.js ReferenceError in sbt shell but not in browser - sbt

I'm new to ScalaJS, so I'm a little perplexed with this issue.
I'm trying to write a very simple facade for the peer.js library. I have this:
import scala.scalajs.js
import scala.scalajs.js.annotation.JSGlobal

@js.native
@JSGlobal
class Peer() extends js.Object {
  def this(id: String = ???, options: js.Object = ???) = this()
  def connect(id: String, options: js.Object = ???): DataConnection = js.native
  def on(event: String, callback: js.Function): Unit = js.native
  def disconnect(): Unit = js.native
  def reconnect(): Unit = js.native
  def destroy(): Unit = js.native
  def id: String = js.native
  def connections: js.Object = js.native
  def disconnected: Boolean = js.native
  def destroyed: Boolean = js.native
}
And here is the simple code I'm trying to run:
object index {
  def main(args: Array[String]): Unit = {
    val peer = new Peer()
    peer.on("open", (id: String) => println(id))
  }
}
This small piece of code works perfectly fine in the browser; however, when I try to run it from the sbt shell, I get this error:
ReferenceError: Peer is not defined
ReferenceError: Peer is not defined
at $c_Lcom_nicolaswinsten_peerscalajs_index$.main__AT__V (file:///C:/Users/mjwin/IdeaProjects/peer-scalajs/target/scala-2.13/peer-scalajs-fastopt/main.js:840:14)
at $s_Lcom_nicolaswinsten_peerscalajs_index__main__AT__V (file:///C:/Users/mjwin/IdeaProjects/peer-scalajs/target/scala-2.13/peer-scalajs-fastopt/main.js:826:47)
at file:///C:/Users/mjwin/IdeaProjects/peer-scalajs/target/scala-2.13/peer-scalajs-fastopt/main.js:2078:1
at file:///C:/Users/mjwin/IdeaProjects/peer-scalajs/target/scala-2.13/peer-scalajs-fastopt/main.js:2079:4
at Script.runInContext (vm.js:143:18)
at Object.runInContext (vm.js:294:6)
at processJavaScript (C:\Users\mjwin\IdeaProjects\peer-scalajs\node_modules\jsdom\lib\jsdom\living\nodes\HTMLScriptElement-impl.js:241:10)
at HTMLScriptElementImpl._innerEval (C:\Users\mjwin\IdeaProjects\peer-scalajs\node_modules\jsdom\lib\jsdom\living\nodes\HTMLScriptElement-impl.js:176:5)
at onLoadExternalScript (C:\Users\mjwin\IdeaProjects\peer-scalajs\node_modules\jsdom\lib\jsdom\living\nodes\HTMLScriptElement-impl.js:98:12)
at onLoadWrapped (C:\Users\mjwin\IdeaProjects\peer-scalajs\node_modules\jsdom\lib\jsdom\browser\resources\per-document-resource-loader.js:53:33)
[error] org.scalajs.jsenv.ExternalJSRun$NonZeroExitException: exited with code 1
[error] at org.scalajs.jsenv.ExternalJSRun$$anon$1.run(ExternalJSRun.scala:186)
[error] stack trace is suppressed; run 'last Compile / run' for the full output
[error] (Compile / run) org.scalajs.jsenv.ExternalJSRun$NonZeroExitException: exited with code 1
I'm sure it's something really simple, but I'm stumped. Any guesses?
Thank you
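The stack trace shows the code running under Node.js with jsdom (note the node_modules\jsdom paths), not in a real browser page, so the global Peer that the peer.js <script> tag normally provides is simply not defined in that environment. As a point of reference, a facade can instead import Peer from the npm module; the following is only a hypothetical sketch, assuming peerjs is installed via npm and bundled (for example with scalajs-bundler), and that the module's default export is the Peer constructor:

import scala.scalajs.js
import scala.scalajs.js.annotation._

@js.native
@JSImport("peerjs", JSImport.Default) // load Peer from the "peerjs" npm module instead of expecting a global
class Peer() extends js.Object {
  def on(event: String, callback: js.Function): Unit = js.native
}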

Related

GRPC Python - Error received from peer ipv4

I am trying to call a server method from a client using gRPC, but I get the error below:
Error received from peer ipv4
I have tried to find a solution, but it has been more than a day now and I am unable to figure it out. Any help is really appreciated.
Server proto file (chunk.proto):
syntax = "proto3";

service FileServer {
    rpc upload_chunk_stream(stream Chunk) returns (Reply) {}
    rpc upload_single_chunk(Chunk) returns (Reply) {}
    rpc download_chunk_stream(Request) returns (stream Chunk) {}
    rpc get_available_memory_bytes(Empty_request) returns (Reply_double) {}
    rpc get_stored_hashes_list_iterator(Empty_request) returns (stream Reply_string) {}
    rpc hash_id_exists_in_memory(Request) returns (Reply) {}
}

message Chunk {
    bytes buffer = 1;
}

message Request {
    string hash_id = 1;
}

message Reply {
    bool success = 1;
}

message Reply_double {
    double bytes = 1;
}

message Empty_request {}

message Reply_string {
    string hash_id = 1;
}
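For reference, stubs like the ones mentioned below are typically produced from this file with grpcio-tools, along these lines (an illustrative command, not taken from the question):

python -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. chunk.proto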
The generated code on the server side produced the files chunk_pb2_grpc.py and chunk_pb2.py.
Below are the server files:
StorageManager.py
import sys
sys.path.append('./')
import grpc
import time
import chunk_pb2, chunk_pb2_grpc
from MemoryManager import MemoryManager

CHUNK_SIZE_ = 1024

class StorageManagerServer(chunk_pb2_grpc.FileServerServicer):
    def __init__(self, memory_node_bytes, page_memory_size_bytes):
        self.memory_manager = MemoryManager(memory_node_bytes, page_memory_size_bytes)

    def upload_chunk_stream(self, request_iterator, context):
        hash_id = ""
        chunk_size = 0
        number_of_chunks = 0
        print("inside")
        for key, value in context.invocation_metadata():
            if key == "key-hash-id":
                hash_id = value
            elif key == "key-chunk-size":
                chunk_size = int(value)
            elif key == "key-number-of-chunks":
                number_of_chunks = int(value)
        assert hash_id != ""
        assert chunk_size != 0
        assert number_of_chunks != 0
        success = self.memory_manager.put_data(request_iterator, hash_id, number_of_chunks, False)
        return chunk_pb2.Reply(success=success)
StartNodeExample.py
import sys
sys.path.append('./')
from StorageManager import StorageManagerServer
import grpc
import time
import chunk_pb2, chunk_pb2_grpc
from concurrent import futures

if __name__ == '__main__':
    print("Starting Storage Manager.")
    server_grpc = grpc.server(futures.ThreadPoolExecutor(max_workers=1))
    total_memory_node_bytes = 1 * 1024 * 1024 * 1024  # start with 1 GB
    CHUNK_SIZE_ = 1024
    total_page_memory_size_bytes = CHUNK_SIZE_
    chunk_pb2_grpc.add_FileServerServicer_to_server(
        StorageManagerServer(total_memory_node_bytes, total_page_memory_size_bytes), server_grpc)
    # port = 9999
    # server_grpc.add_insecure_port(f'[::]:{port}')
    server_grpc.add_insecure_port('[::]:9999')
    server_grpc.start()
    print("Storage Manager is READY.")
    try:
        while True:
            time.sleep(60 * 60 * 24)  # should sleep indefinitely
    except KeyboardInterrupt:
        server_grpc.stop(0)
Client-side proto file (chunk.proto):
syntax = "proto3";

service FileServer {
    rpc upload_chunk_stream(stream Chunk) returns (Reply) {}
    rpc upload_single_chunk(Chunk) returns (Reply) {}
    rpc download_chunk_stream(Request) returns (stream Chunk) {}
}

message Chunk {
    bytes buffer = 1;
}

message Request {
    string hash_id = 1;
}

message Reply {
    bool success = 1;
}
The generated code on the client side produced the same files: chunk_pb2_grpc.py and chunk_pb2.py.
Below are the client-side files:
grpc_client.py
import grpc
import chunk_pb2
import chunk_pb2_grpc
import threading
import io
import hashlib

CHUNK_SIZE = 1024 * 1024 * 4  # 4MB

def get_file_byte_chunks(f):
    while True:
        piece = f.read(CHUNK_SIZE)
        if len(piece) == 0:
            return
        yield chunk_pb2.Chunk(buffer=piece)

class Client:
    def __init__(self, address):
        channel = grpc.insecure_channel(address)
        self.stub = chunk_pb2_grpc.FileServerStub(channel)

    def upload(self, f, f_name):
        print("Inside here")
        hash_object = hashlib.sha1(f_name.encode())
        hex_dig = hash_object.hexdigest()
        print(hex_dig)
        chunks_generator = get_file_byte_chunks(f)
        metadata = (
            ('key-hash-id', hex_dig),
            ('key-chunk-size', str(CHUNK_SIZE))
        )
        response = self.stub.upload_chunk_stream(chunks_generator, metadata=metadata)
server.py (Flask REST API server)
from flask import Flask, url_for, send_from_directory, request
import logging, os
from werkzeug.utils import secure_filename
from flask import jsonify, make_response
# import worker
import grpc_client

app = Flask(__name__)

@app.route('/', methods=['POST'])
def api_root():
    app.logger.info(PROJECT_HOME)
    if request.method == 'POST' and request.files['image']:
        app.logger.info(app.config['UPLOAD_FOLDER'])
        img = request.files['image']
        img_name = secure_filename(img.filename)
        client = grpc_client.Client('127.0.0.1:9999')
        client.upload(img, img_name)
        return make_response(jsonify({"success": True}), 200)
    else:
        return "Where is the file?"
Error received:
[2019-12-15 03:08:55,973] ERROR in app: Exception on / [POST]
Traceback (most recent call last):
  File "/Users/wamiqueansari/Documents/275_gash/project/final/Tracking/venv/lib/python3.7/site-packages/flask/app.py", line 1982, in wsgi_app
    response = self.full_dispatch_request()
  File "/Users/wamiqueansari/Documents/275_gash/project/final/Tracking/venv/lib/python3.7/site-packages/flask/app.py", line 1614, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/Users/wamiqueansari/Documents/275_gash/project/final/Tracking/venv/lib/python3.7/site-packages/flask/app.py", line 1517, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/Users/wamiqueansari/Documents/275_gash/project/final/Tracking/venv/lib/python3.7/site-packages/flask/_compat.py", line 33, in reraise
    raise value
  File "/Users/wamiqueansari/Documents/275_gash/project/final/Tracking/venv/lib/python3.7/site-packages/flask/app.py", line 1612, in full_dispatch_request
    rv = self.dispatch_request()
  File "/Users/wamiqueansari/Documents/275_gash/project/final/Tracking/venv/lib/python3.7/site-packages/flask/app.py", line 1598, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "server.py", line 32, in api_root
    client.upload(img, img_name)
  File "/Users/wamiqueansari/Documents/275_gash/project/final/Tracking/grpc_client.py", line 32, in upload
    response = self.stub.upload_chunk_stream(chunks_generator, metadata=metadata)
  File "/Users/wamiqueansari/Documents/275_gash/project/final/Tracking/venv/lib/python3.7/site-packages/grpc/_channel.py", line 871, in __call__
    return _end_unary_response_blocking(state, call, False, None)
  File "/Users/wamiqueansari/Documents/275_gash/project/final/Tracking/venv/lib/python3.7/site-packages/grpc/_channel.py", line 592, in _end_unary_response_blocking
    raise _Rendezvous(state, None, None, deadline)
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
    status = StatusCode.UNKNOWN
    details = "Exception calling application: "
    debug_error_string = "{"created":"@1576408135.973233000","description":"Error received from peer ipv4:127.0.0.1:9999","file":"src/core/lib/surface/call.cc","file_line":1055,"grpc_message":"Exception calling application: ","grpc_status":2}"
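One detail worth noting from the code above: upload_chunk_stream on the server asserts that the metadata carries key-hash-id, key-chunk-size and key-number-of-chunks, but the client only sends the first two, so the assert fails inside the handler and surfaces on the client as the generic StatusCode.UNKNOWN "Exception calling application" error. A hedged sketch of client metadata including the third key (the chunk count shown is illustrative and would have to be computed from the actual file):

# hypothetical sketch inside Client.upload; number_of_chunks must be derived from the real file
number_of_chunks = 42  # illustrative value only
metadata = (
    ('key-hash-id', hex_dig),
    ('key-chunk-size', str(CHUNK_SIZE)),
    ('key-number-of-chunks', str(number_of_chunks)),
)
response = self.stub.upload_chunk_stream(chunks_generator, metadata=metadata)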

fs2 stream to zip-compressed fs2stream

I have a stream of fs2 streams and I'd like to create a compressed stream ready to be written to a file with a *.zip extension or served for download.
The problem is that the stream never terminates. Here is the code:
package backup

import java.io.OutputStream
import cats.effect._
import cats.effect.implicits._
import cats.implicits._
import fs2.{Chunk, Pipe, Stream, io}
import java.util.zip.{ZipEntry, ZipOutputStream}
import fs2.concurrent.Queue
import scala.concurrent.{ExecutionContext, SyncVar}

// https://github.com/slamdata/fs2-gzip/blob/master/core/src/main/scala/fs2/gzip/package.scala
// https://github.com/scalavision/fs2-helper/blob/master/src/main/scala/fs2helper/zip.scala
// https://github.com/eikek/sharry/blob/2f1dbfeae3c73bf2623f65c3591d0b3e0691d4e5/modules/common/src/main/scala/sharry/common/zip.scala

object Fs2Zip {

  private def writeEntry[F[_]](zos: ZipOutputStream)(implicit F: Concurrent[F],
                                                     blockingEc: ExecutionContext,
                                                     contextShift: ContextShift[F]): Pipe[F, (String, Stream[F, Byte]), Unit] =
    _.flatMap {
      case (name, data) =>
        val createEntry = Stream.eval(F.delay {
          zos.putNextEntry(new ZipEntry(name))
        })
        val writeEntry = data.through(io.writeOutputStream(F.delay(zos.asInstanceOf[OutputStream]), blockingEc, closeAfterUse = false))
        val closeEntry = Stream.eval(F.delay(zos.closeEntry()))
        createEntry ++ writeEntry ++ closeEntry
    }

  private def zipP1[F[_]](implicit F: ConcurrentEffect[F],
                          blockingEc: ExecutionContext,
                          contextShift: ContextShift[F]): Pipe[F, (String, Stream[F, Byte]), Byte] = entries => {
    Stream.eval(Queue.unbounded[F, Option[Chunk[Byte]]]).flatMap { q =>
      Stream.suspend {
        val os = new java.io.OutputStream {

          private def enqueueChunkSync(a: Option[Chunk[Byte]]) = {
            println(s"enqueueChunkSync $a")
            val done = new SyncVar[Either[Throwable, Unit]]
            q.enqueue1(a).start.flatMap(_.join).runAsync(e => IO(done.put(e))).unsafeRunSync
            done.get.fold(throw _, identity)
            println(s"enqueueChunkSync done $a")
          }

          @scala.annotation.tailrec
          private def addChunk(c: Chunk[Byte]): Unit = {
            val free = 1024 - bufferedChunk.size
            if (c.size > free) {
              enqueueChunkSync(Some(Chunk.vector(bufferedChunk.toVector ++ c.take(free).toVector)))
              bufferedChunk = Chunk.empty
              addChunk(c.drop(free))
            } else {
              bufferedChunk = Chunk.vector(bufferedChunk.toVector ++ c.toVector)
            }
          }

          private var bufferedChunk: Chunk[Byte] = Chunk.empty

          override def close(): Unit = {
            // flush remaining chunk
            enqueueChunkSync(Some(bufferedChunk))
            bufferedChunk = Chunk.empty
            // terminate the queue
            enqueueChunkSync(None)
          }

          override def write(bytes: Array[Byte]): Unit =
            Chunk.bytes(bytes)

          override def write(bytes: Array[Byte], off: Int, len: Int): Unit =
            addChunk(Chunk.bytes(bytes, off, len))

          override def write(b: Int): Unit =
            addChunk(Chunk.singleton(b.toByte))
        }

        val write: Stream[F, Unit] = Stream
          .bracket(F.delay(new ZipOutputStream(os)))((zos: ZipOutputStream) => F.delay(zos.close()))
          .flatMap((zos: ZipOutputStream) => entries.through(writeEntry(zos)))

        val read = q.dequeue
          .unNoneTerminate
          .flatMap(Stream.chunk(_))

        read.concurrently(write)
      }
    }
  }

  def zip[F[_]: ConcurrentEffect: ContextShift](entries: Stream[F, (String, Stream[F, Byte])])(
      implicit ec: ExecutionContext): Stream[F, Byte] =
    entries.through(zipP1)
}
The code is shamelessly copied from https://github.com/eikek/sharry/blob/master/modules/common/src/main/scala/sharry/common/zip.scala and updated to compile with the latest fs2 and cats-effect.
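For context, the intended usage is roughly the following (a hypothetical sketch, assuming an implicit ConcurrentEffect[IO] and ContextShift[IO] plus a blocking ExecutionContext are in scope; the names here are made up for illustration):

import cats.effect.IO
import fs2.Stream

val entries: Stream[IO, (String, Stream[IO, Byte])] =
  Stream(
    "a.txt" -> Stream.emits("hello".getBytes).covary[IO],
    "b.txt" -> Stream.emits("world".getBytes).covary[IO]
  ).covary[IO]

// compile the zipped bytes; with the code above this never completes, which is the reported problem
val zipped: IO[Vector[Byte]] = Fs2Zip.zip(entries).compile.toVector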
I narrowed the problem to enqueueChunkSync:
private def enqueueChunkSync(a: Option[Chunk[Byte]]) = {
  val done = new SyncVar[Either[Throwable, Unit]]
  q.enqueue1(a).start.flatMap(_.join).runAsync(e => IO(done.put(e))).unsafeRunSync
  done.get.fold(throw _, identity)
}
which blocks on the last chunk. When I put a println in there and make the buffer smaller I see that chunks are flushed successfully until the last one.
When I remove the blocking bit done.get.fold(throw _, identity) it seems to work, but then I imagine the bytes are flushed to the stream all at once?
How is the last chunk different from the previous ones?

Finishing a forked process blocks SBT with a custom output strategy

In SBT, I fork a Java process with:
class FilteredOutput extends FilterOutputStream(System.out) {
  var buf = ArrayBuffer[Byte]()

  override def write(b: Int) {
    buf.append(b.toByte)
    if (b == '\n'.toInt)
      flush()
  }

  override def flush() {
    if (buf.nonEmpty) {
      val arr = buf.toArray
      val txt = try new String(arr, "UTF-8") catch { case NonFatal(ex) ⇒ "" }
      if (!txt.startsWith("pydev debugger: Unable to find real location for"))
        out.write(arr)
      buf.clear()
    }
    super.flush()
  }
}

var process = Option.empty[Process]
process = Some(Fork.java.fork(ForkOptions(outputStrategy = new FilteredOutput()), Seq("my.company.MyClass")))
as a result of a custom task.
Later on, I terminate it with:
process.map { p =>
  log info "Killing process"
  p.destroy()
}
by means of another custom task.
The result is that SBT doesn't accept any more input and gets blocked. Ctrl+C is the only way to restore control, but SBT dies as a consequence.
The problem has to do with the custom output strategy, which filters out some annoying messages.
With jstack I haven't seen any deadlock.
SBT version 0.13.9.
The solution is to avoid closing System.out:
class FilteredOutput extends FilterOutputStream(System.out) {
  var buf = ArrayBuffer[Byte]()
  override def write(b: Int) {
    ...
  }
  override def flush() {
    ...
  }
  override def close() {}
}
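Putting the snippets together, the complete filter would look roughly like this (a sketch with the imports the code above needs; the write/flush bodies are the ones shown in the question):

import java.io.FilterOutputStream
import scala.collection.mutable.ArrayBuffer
import scala.util.control.NonFatal

class FilteredOutput extends FilterOutputStream(System.out) {
  var buf = ArrayBuffer[Byte]()

  override def write(b: Int) {
    buf.append(b.toByte)
    if (b == '\n'.toInt)
      flush()
  }

  override def flush() {
    if (buf.nonEmpty) {
      val arr = buf.toArray
      val txt = try new String(arr, "UTF-8") catch { case NonFatal(ex) => "" }
      if (!txt.startsWith("pydev debugger: Unable to find real location for"))
        out.write(arr)
      buf.clear()
    }
    super.flush()
  }

  // the key change: never close System.out when the forked process is torn down
  override def close() {}
}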

Is there a way to automatically generate the gradle dependencies declaration in build.gradle?

This is in the context of converting an existing Java project into a Gradle project.
Is there a tool or web service that would help generate the dependencies declaration in build.gradle by pointing it at a directory that contains all the dependent jars?
In general there is no such tool, but if all the jars are structured (I mean paths like com/google/guava/guava and so on) it should be easy to write such a script on your own.
Mind that it can also be done by importing the whole folder in the following way:
repositories {
    flatDir {
        dirs 'lib'
    }
}
or
dependencies {
    runtime fileTree(dir: 'libs', include: '*.jar')
}
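Note that both snippets keep the jars as flat, unmanaged files; the script below instead tries to look up proper group:artifact:version coordinates on Maven Central so the result can be written as ordinary managed dependencies.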
In my comments on @Opal's answer, I said that I was working on a quick and dirty Groovy script to achieve this. I forgot to attach the script after that; apologies for that.
Here's my quick and dirty script. It solves the purpose partially.
Hoping that someone can improve on it.
#! /usr/bin/env groovy
@Grab(group='org.codehaus.groovy.modules.http-builder', module='http-builder', version='0.5.2')
import static groovyx.net.http.ContentType.JSON
import groovyx.net.http.RESTClient
import groovy.json.JsonSlurper
import groovy.util.slurpersupport.GPathResult
import static groovyx.net.http.ContentType.URLENC

//def artifactid = "activation"
//def version = "1.1"
//def packaging = "jar"
//
//def mavenCentralRepository = new RESTClient( "http://search.maven.org/solrsearch/select?q=a:%22${artifactid}%22%20AND%20v:%22${version}%22%20AND%20p:%22${packaging}%22&rows=20&wt=json".toString() )
////basecamp.auth.basic userName, password
//
//def response = mavenCentralRepository.get([:])
//println response.data.response.docs
//
//def slurper = new JsonSlurper()
//def parsedJson = slurper.parseText(response.data.toString());
//
//println parsedJson.response.docs.id

def inputFile = new File("input.txt");
def fileList = []
fileList = inputFile.readLines().collect { it.toString().substring(it.toString().lastIndexOf('/') + 1) }

def artifactIDvsVersionMap = [:]
fileList.collectEntries(artifactIDvsVersionMap) {
    def versionIndex = it.substring(0, it.indexOf('.')).toString().lastIndexOf('-')
    [it.substring(0, versionIndex), it.substring(versionIndex + 1).minus(".jar")]
}
println artifactIDvsVersionMap

new File("output.txt").delete();
def output = new File("output.txt")
def fileWriter = new FileWriter(output, true)

def parsedGradleParameters = null
try {
    parsedGradleParameters = artifactIDvsVersionMap.collect {
        def artifactid = it.key
        def version = it.value
        def packaging = "jar"

        def mavenCentralRepository = new RESTClient( "http://search.maven.org/solrsearch/select?q=a:%22${artifactid}%22%20AND%20v:%22${version}%22%20AND%20p:%22${packaging}%22&rows=20&wt=json".toString() )
        def response = mavenCentralRepository.get([:])
        println response.data.response.docs.id

        def slurper = new JsonSlurper()
        def parsedJson = slurper.parseText(response.data.toString());

        def dependency = parsedJson.response.docs.id
        fileWriter.write("compile '${dependency}'")
        fileWriter.write('\n')

        sleep (new Random().nextInt(20));
        return parsedJson.response.docs.id
    }
} finally {
    fileWriter.close()
}
println parsedGradleParameters
Groovy pros - pardon if the code is not clean. :)
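For what it's worth, the script expects input.txt to contain one jar path per line; a hypothetical example of the input and the map it produces:

// input.txt (hypothetical contents, one jar path per line)
// lib/commons-io-2.4.jar
// lib/guava-18.0.jar
//
// resulting artifactIDvsVersionMap printed by the script:
// [commons-io:2.4, guava:18.0]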

How can I get the value of a SettingKey like baseDirectory in a function?

I've got the following settingKey:
val filterValues = SettingKey[Map[String, String]]("filter-values")
And so when defining the setting:
filterValues := Map(
  "someKey" -> sys.props.get("some.path").getOrElse(localPath("example"))
  …
)
...
private def localFile(path: String): String = ((baseDirectory) { _ / path })(_.getAbsolutePath)
But what I'm getting is the following type mismatch:
Build.scala:8: type mismatch;
[error] found : sbt.Def.Initialize[String]
[error] required: String
[error] private def localFile(path: String): String = ((baseDirectory) { _ / path })(_.getAbsolutePath)
What's the right way to do this? (for sbt 0.13, btw)
You should extract the value of the setting within the setting initializer, and pass it to the function:
filterValues := Map(
  "someKey" -> sys.props.get("some.path").getOrElse(localPath(baseDirectory.value, "example"))
  …
)
...
private def localFile(base: File, path: String): String = (base / path).getAbsolutePath
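The underlying reason is that .value is a macro that can only be expanded inside a setting or task definition such as :=; a plain helper def runs outside that context, so it has to receive already-extracted values. A small illustrative sketch for sbt 0.13 (the exampleDir key is hypothetical):

// build.sbt -- exampleDir is a made-up key, purely for illustration
val exampleDir = settingKey[String]("absolute path of the example directory")

exampleDir := localFile(baseDirectory.value, "example")

def localFile(base: File, path: String): String = (base / path).getAbsolutePath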
