Meteor job-collection not running remote jobs

With vsivsi:job-collection, I've set up jobs as in the example, the difference being that my jobs are processed on the server. I can't see what's missing compared to the example app, which processes jobs on the client.
lib/db.coffee
ParsingJobs = JobCollection('parsing', {
  workTimeout: 10000
  transform: (d) ->
    try
      res = new Job(ParsingJobs, d)
    catch e
      res = d
    return res
})

if Meteor.isServer
  Meteor.startup(->
    ParsingJobs.allow({
      admin: (user_id, method, params) ->
        # Roles.userIsInRole(Meteor.user(), ['admin']) # commented temporarily
        true
    })
    ParsingJobs.startJobServer()
  )
server.coffee
que = ParsingJobs.processJobs('parsing', {workTimeout: 10000}, (job, cb) ->
  # do some processing
  job.done('success')
  cb()
)

ParsingJobs.find({type: 'parsing', status: 'ready'}).observe
  added: ->
    que.trigger()
On the client I can just run a command in the browser console:
x = ParsingJobs.find().fetch()[0]
x.rerun()
Result:
job_class.js:16 Uncaught Error: Job remote method call error, no valid invocation method found.
What am I doing wrong?

Change this line:
que = ParsingJobs.processJobs('parsing', {workTimeout: 10000}, (job, cb) ->
to this:
que = Job.processJobs('parsing', {workTimeout: 10000}, (job, cb) ->
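The worker API (processJobs) is exposed on the Job class rather than on the JobCollection instance, which is presumably why the instance call can't find a valid invocation method. A minimal sketch of the corrected server.coffee worker, keeping the placeholder body from the question:

que = Job.processJobs('parsing', {workTimeout: 10000}, (job, cb) ->
  # do some processing, then mark the job done and ask for the next one
  job.done('success')
  cb()
)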

Related

R, httr, and an unstoppable error message

In summary, I'm finding that when running httr::POST against a plumber API inside R.utils::withTimeout, the POST throws an error message which can't be suppressed. The error message is:
Error in .Call(R_curl_fetch_memory, enc2utf8(url), handle, nonblocking) : reached elapsed time limit
Things I've tried:
suppressMessages()/suppressWarnings()
tryCatch with no error response
Using sinks
I can't think of any other way of stopping this error. The code carries on running, but it's resulting in users of the API getting spammed with error messages when checking for an available port to call. Any ideas welcome.
Reproducible example (note: this uses the RStudio job package and a separate plumber file, but I've tried without the job package and the issue persists, so it's not connected to that):
plumber.R:
#* quick ping
#* @post /ping
function() {
  list(msg = "you got here!")
}

#* slow 10s call
#* @post /slowfxn
function() {
  Sys.sleep(10)
  1
}
code which calls and produces errors:
#run the plumber file in a separate job:
r <- plumber::plumb("errortest/plumber.R")
job::job({r$run(swagger = F,port = 1111)})
#this function will throw the error if no response comes within 3s:
call_with_timeout = function() {
  R.utils::withTimeout(
    rawToChar(httr::POST("http://127.0.0.1:1111/ping")$content),
    timeout = 3, onTimeout = "silent")
}
#now call the fxn (again in a job) which takes 10 s to run:
job::job({httr::POST(url = "http://127.0.0.1:1111/slowfxn")})
#wait a second for that job to be initiated:
Sys.sleep(1)
#while that's running, try and call the quick fxn, with a 3 second timeout:
#try wrap it in a trycatch to suppress:
tryCatch({
  call_with_timeout()
},
  error = function(x) {},
  TimeoutException = function(x) {},
  warning = function(x) {}
)
suppressMessages(suppressWarnings(call_with_timeout()))
#not even a sink can stop it!
sink("delete.txt")
call_with_timeout()
sink(NULL)
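One note on the sink attempt above: sink() without type = "message" only redirects stdout, while R error messages go to the message stream (stderr), so a plain output sink cannot catch them. A message-type sink is a sketch worth trying, though it may not help if the text is emitted below the R level:

# redirect the message (stderr) stream rather than stdout
zz <- file("delete.txt", open = "wt")
sink(zz, type = "message")
call_with_timeout()
sink(type = "message")  # restore stderr
close(zz)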

F# AWS lambda - Parts of asynchronous function are never executed

I have an asynchronous F# function in an AWS Lambda. However, when testing it, parts of the function are never called. The function is always terminated after 10 seconds, even though the aws-lambda-tools-defaults.json file sets the timeout to 30 seconds.
Here is the code:
namespace MyProject

open FSharp.Data

type SecretsJson = JsonProvider<"./Resources/Secrets.json", RootName="Secret">

module ApiClient =
    let testGet (secrets: SecretsJson.Secret) =
        async {
            printfn "%s" "3. This is sometimes printed out and sometimes not."
            let! test = Http.AsyncRequestString("https://postman-echo.com/get?foo1=bar1&foo2=bar2")
            printfn "%s" "4. This is never printed."
            return ""
        }

namespace MyProject

open Amazon.Lambda.Core
open Amazon
open Amazon.S3
open Amazon.S3.Util
open System.IO
open Amazon.S3.Model
open Amazon.SecretsManager.Extensions.Caching

[<assembly: LambdaSerializer(typeof<Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer>)>]
()

type Function() =
    member __.FunctionHandler (input: S3EventNotification) (_: ILambdaContext) : System.Threading.Tasks.Task<unit> =
        printfn "%s" "1. This gets printed always."
        async {
            use client = new AmazonS3Client(RegionEndpoint.EUWest1)
            use secretsCache = new SecretsManagerCache()
            let! secretsString = secretsCache.GetSecretString "secretsKey" |> Async.AwaitTask
            printfn "%s" "2. Also this is always printed."
            let secrets = SecretsJson.Parse(secretsString)
            let! response = ApiClient.testGet secrets
            printfn "%s" "5. This is never printed."
        } |> Async.StartAsTask
I don't understand this behaviour. I thought that the |> Async.StartAsTask expression would ensure that the whole async block in Function.FunctionHandler gets executed as a standard C# Task<T>. Or do I have to convert each of my Async<'a> functions into a Task<T>? Or is there a different error in my code that I don't see?
The async code shouldn't be the source of the problem. I tried the following:
open System
open Amazon.Lambda.Core
open FSharp.Data

let loadStuff (context: ILambdaContext) (i: int) =
    async {
        let! test = Http.AsyncRequestString("https://www.google.com")
        sprintf "i:%i Length: %i" i test.Length
        |> context.Logger.LogLine
    }
and
type Functions() =
    member __.Get (request: APIGatewayProxyRequest) (context: ILambdaContext) =
        async {
            do! [1..10]
                |> Seq.map (Loader.loadStuff context)
                |> Async.Parallel
                |> Async.Ignore
            do! Async.Sleep(10000)
            do! [11..15]
                |> Seq.map (Loader.loadStuff context)
                |> Async.Parallel
                |> Async.Ignore
        }
        |> Async.StartAsTask
This takes 12 seconds and outputs:
i:6 Length: 47974
i:3 Length: 47201
i:4 Length: 47200
i:8 Length: 47255
i:5 Length: 47183
i:1 Length: 47145
i:7 Length: 47203
i:9 Length: 47177
i:10 Length: 47202
i:2 Length: 47198
i:14 Length: 47201
i:12 Length: 47155
i:11 Length: 47250
i:13 Length: 47162
i:15 Length: 47130
Consider adding try/catch blocks to your code, and use context.Logger.LogLine instead of printfn. Also, if you have very long-running pieces of work (e.g. >30 secs), consider breaking them down into more functions, possibly coordinated with something like serverless workflows.
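A minimal sketch of that advice (the handler signature and URL here are illustrative assumptions, not the poster's exact code):

open Amazon.Lambda.Core
open FSharp.Data

type Function() =
    member __.FunctionHandler (input: obj) (context: ILambdaContext) : System.Threading.Tasks.Task<unit> =
        async {
            try
                context.Logger.LogLine "Handler started."
                let! body = Http.AsyncRequestString("https://postman-echo.com/get?foo1=bar1")
                context.Logger.LogLine (sprintf "Got %d chars." body.Length)
            with ex ->
                // Log the failure so a crash leaves a trace in CloudWatch instead of dying silently.
                context.Logger.LogLine (sprintf "Handler failed: %s" ex.Message)
        } |> Async.StartAsTask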

Ejabberd: error in simple module to handle offline messages

I have an Ejabberd 17.01 installation where I need to push a notification in case a recipient is offline. This seems to be a common task, and solutions using a customized Ejabberd module can be found everywhere. However, I just can't get it running. First, here's my script:
-module(mod_offline_push).
-behaviour(gen_mod).
-export([start/2, stop/1]).
-export([push_message/3]).
-include("ejabberd.hrl").
-include("logger.hrl").
-include("jlib.hrl").
start(Host, _Opts) ->
    ?INFO_MSG("mod_offline_push loading", []),
    ejabberd_hooks:add(offline_message_hook, Host, ?MODULE, push_message, 10),
    ok.

stop(Host) ->
    ?INFO_MSG("mod_offline_push stopping", []),
    ejabberd_hooks:delete(offline_message_hook, Host, ?MODULE, push_message, 10),
    ok.

push_message(From, To, Packet) ->
    ?INFO_MSG("mod_offline_push -> push_message to ~p", [To]),
    Type = fxml:get_tag_attr_s(<<"type">>, Packet), % Supposedly since 16.04
    %Type = xml:get_tag_attr_s(<<"type">>, Packet), % Supposedly since 13.XX
    %Type = xml:get_tag_attr_s("type", Packet),
    %Type = xml:get_tag_attr_s(list_to_binary("type"), Packet),
    ?INFO_MSG("mod_offline_push -> push_message", []),
    ok.
The problem is the Type = ... line in push_message; without that line, the last info message is logged (so the hook definitely works). Browsing online, I can find all kinds of function calls for extracting elements from Packet; as far as I understand, the API changed over time with new releases. But no luck: all variants lead to some kind of error. The current one returns:
2017-01-25 20:38:08.701 [error] <0.21678.0>#ejabberd_hooks:run1:332 {function_clause,[{fxml,get_tag_attr_s,[<<"type">>,{message,<<>>,normal,<<>>,{jid,<<"homer">>,<<"xxx.xxx.xxx.xxx">>,<<"conference">>,<<"homer">>,<<"xxx.xxx.xxx.xxx">>,<<"conference">>},{jid,<<"carl">>,<<"xxx.xxx.xxx.xxx">>,<<>>,<<"carl">>,<<"xxx.xxx.xxx.xxx">>,<<>>},[],[{text,<<>>,<<"sfsdfsdf">>}],undefined,[],#{}}],[{file,"src/fxml.erl"},{line,169}]},{mod_offline_push,push_message,3,[{file,"mod_offline_push.erl"},{line,33}]},{ejabberd_hooks,safe_apply,3,[{file,"src/ejabberd_hooks.erl"},{line,382}]},{ejabberd_hooks,run1,3,[{file,"src/ejabberd_hooks.erl"},{line,329}]},{ejabberd_sm,route,3,[{file,"src/ejabberd_sm.erl"},{line,126}]},{ejabberd_local,route,3,[{file,"src/ejabberd_local.erl"},{line,110}]},{ejabberd_router,route,3,[{file,"src/ejabberd_router.erl"},{line,87}]},{ejabberd_c2s,check_privacy_route,5,[{file,"src/ejabberd_c2s.erl"},{line,1886}]}]}
running hook: {offline_message_hook,[{jid,<<"homer">>,<<"xxx.xxx.xxx.xxx">>,<<"conference">>,<<"homer">>,<<"xxx.xxx.xxx.xxx">>,<<"conference">>},{jid,<<"carl">>,<<"xxx.xxx.xxx.xxx">>,<<>>,<<"carl">>,<<"xxx.xxx.xxx.xxx">>,<<>>},{message,<<>>,normal,<<>>,{jid,<<"homer">>,<<"xxx.xxx.xxx.xxx">>,<<"conference">>,<<"homer">>,<<"xxx.xxx.xxx.xxx">>,<<"conference">>},{jid,<<"carl">>,<<"xxx.xxx.xxx.xxx">>,<<>>,<<"carl">>,<<"xxx.xxx.xxx.xxx">>,<<>>},[],[{text,<<>>,<<"sfsdfsdf">>}],undefined,[],#{}}]}
I'm new to Ejabberd and Erlang, so I cannot really interpret the error, but line 33, as mentioned in {mod_offline_push,push_message,3,[{file,"mod_offline_push.erl"},{line,33}]}, is definitely the line calling get_tag_attr_s.
UPDATE 2017/01/27: Since this cost me a lot of headache -- and I'm still not perfectly happy -- I post my current working module here in the hope that it might help others. My setup is Ejabberd 17.01 running on Ubuntu 16.04. Most of the things I tried and failed with seem to target older versions of Ejabberd:
-module(mod_fcm_fork).
-behaviour(gen_mod).

%% public methods for this module
-export([start/2, stop/1]).
-export([push_notification/3]).

%% included for writing to ejabberd log file
-include("ejabberd.hrl").
-include("logger.hrl").
-include("xmpp_codec.hrl").

%% Copied this record definition from jlib.hrl
%% Including both "xmpp_codec.hrl" and "jlib.hrl" resulted in errors ("XYZ already defined")
-record(jid, {user = <<"">> :: binary(),
              server = <<"">> :: binary(),
              resource = <<"">> :: binary(),
              luser = <<"">> :: binary(),
              lserver = <<"">> :: binary(),
              lresource = <<"">> :: binary()}).

start(Host, _Opts) ->
    ?INFO_MSG("mod_fcm_fork loading", []),
    % Provides the most basic API to the clients and servers that are part of the Inets application
    inets:start(),
    % Add hook to handle messages to users who are offline
    ejabberd_hooks:add(offline_message_hook, Host, ?MODULE, push_notification, 10),
    ok.

stop(Host) ->
    ?INFO_MSG("mod_fcm_fork stopping", []),
    ejabberd_hooks:delete(offline_message_hook, Host, ?MODULE, push_notification, 10),
    ok.

push_notification(From, To, Packet) ->
    % Generate JIDs of sender and receiver
    FromJid = lists:concat([binary_to_list(From#jid.user), "@", binary_to_list(From#jid.server), "/", binary_to_list(From#jid.resource)]),
    ToJid = lists:concat([binary_to_list(To#jid.user), "@", binary_to_list(To#jid.server), "/", binary_to_list(To#jid.resource)]),
    % Get message body
    MessageBody = Packet#message.body,
    % Check that MessageBody is not empty
    case MessageBody /= [] of
        true ->
            % Get first element (no idea when this list can have more elements)
            [First | _] = MessageBody,
            % Get message data and convert to string
            MessageBodyText = binary_to_list(First#text.data),
            send_post_request(FromJid, ToJid, MessageBodyText);
        false ->
            ?INFO_MSG("mod_fcm_fork -> push_notification: MessageBody is empty", [])
    end,
    ok.

send_post_request(FromJid, ToJid, MessageBodyText) ->
    %?INFO_MSG("mod_fcm_fork -> send_post_request -> MessageBodyText = ~p", [Demo]),
    Method = post,
    PostURL = gen_mod:get_module_opt(global, ?MODULE, post_url, fun(X) -> X end, all),
    % Add data as query string. Not nice, a request body would be preferable.
    % Problem: the message body itself can be a JSON string, and I couldn't figure out the correct encoding.
    URL = lists:concat([binary_to_list(PostURL), "?", "fromjid=", FromJid, "&tojid=", ToJid, "&body=", edoc_lib:escape_uri(MessageBodyText)]),
    Header = [],
    ContentType = "application/json",
    Body = [],
    ?INFO_MSG("mod_fcm_fork -> send_post_request -> URL = ~p", [URL]),
    % ADD SSL CONFIG BELOW!
    %HTTPOptions = [{ssl,[{versions, ['tlsv1.2']}]}],
    HTTPOptions = [],
    Options = [],
    httpc:request(Method, {URL, Header, ContentType, Body}, HTTPOptions, Options),
    ok.
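For completeness, the post_url option read via gen_mod:get_module_opt above would be set in ejabberd.yml along these lines (the URL is a hypothetical placeholder):

modules:
  mod_fcm_fork:
    post_url: "https://push.example.com/notify"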
Actually it fails on the second argument, Packet, that you pass to fxml:get_tag_attr_s in the push_message function:
{message,<<>>,normal,<<>>,
{jid,<<"homer">>,<<"xxx.xxx.xxx.xxx">>,<<"conference">>,
<<"homer">>,<<"xxx.xxx.xxx.xxx">>,<<"conference">>},
{jid,<<"carl">>,<<"xxx.xxx.xxx.xxx">>,<<>>,<<"carl">>,
<<"xxx.xxx.xxx.xxx">>,<<>>},
[],
[{text,<<>>,<<"sfsdfsdf">>}],
undefined,[],#{}}
because it is not an xmlel. It looks like the message record defined in tools/xmpp_codec.hrl, with a <<>> id and type normal.
xmpp_codec.hrl
-record(message, {id :: binary(),
                  type = normal :: 'chat' | 'error' | 'groupchat' | 'headline' | 'normal',
                  lang :: binary(),
                  from :: any(),
                  to :: any(),
                  subject = [] :: [#text{}],
                  body = [] :: [#text{}],
                  thread :: binary(),
                  error :: #error{},
                  sub_els = [] :: [any()]}).
Include this file and just use
Type = Packet#message.type
or, if you expect a binary value,
Type = erlang:atom_to_binary(Packet#message.type, utf8)
The newest way to do that seems to be with xmpp:get_type/1:
Type = xmpp:get_type(Packet),
It returns an atom, in this case normal.
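Putting that together, a minimal sketch of the fixed hook for this ejabberd version (the log format string is illustrative):

push_message(From, To, Packet) ->
    %% Packet is a #message{} record here, not an xmlel, so use the xmpp API.
    Type = xmpp:get_type(Packet),
    ?INFO_MSG("mod_offline_push -> push_message: type=~p, to=~p", [Type, To]),
    ok.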

R source file from RConnection (JRI) "could not find function"

I'm trying to source an R script file from an org.rosuda.REngine.Rserve.RConnection and then call a function from that script, but am getting: Error: could not find function "main".
Start Rserve from terminal
> require(Rserve)
Loading required package: Rserve
> Rserve()
Starting Rserve...
"C:\Users\slenzi\DOCUME~1\R\WIN-LI~1\3.3\Rserve\libs\x64\Rserve.exe"
> Rserve: Ok, ready to answer queries.
Error: could not find function "main"
External script rdbcTest1.R
require(RJDBC)

main <- function() {
  jdbcDriver <- JDBC(driverClass = dbDriverClass, classPath = dbDriverPath)
  jdbcConnection <- dbConnect(jdbcDriver, dbUrl, dbUser, dbPwd)
  dbResult <- dbGetQuery(jdbcConnection, dbQuery)
  dbDisconnect(jdbcConnection)
  return(dbResult)
}
Java Code
import org.rosuda.REngine.REXP;
import org.rosuda.REngine.REXPMismatchException;
import org.rosuda.REngine.REngine;
import org.rosuda.REngine.REngineException;
import org.rosuda.REngine.RList;
import org.rosuda.REngine.Rserve.RConnection;
import org.rosuda.REngine.Rserve.RserveException;
REXP result = null;
RConnection c = null;
try {
    c = new RConnection();
    String driverPath = "C:/temp/ojdbc6.jar";
    String scriptPath = "C:/temp/rdbcTest1.R";
    c.assign("dbUser", "foo");
    c.assign("dbPwd", "******"); // password redacted
    c.assign("dbUrl", "jdbc:oracle:thin:@myhost:1511:ecogtst");
    c.assign("dbDriverClass", "oracle.jdbc.driver.OracleDriver");
    c.assign("dbDriverPath", driverPath);
    // assign query to execute
    c.assign("dbQuery", "SELECT count(*) FROM prs.members");
    String evalSource = String.format("try(source(\"%s\", local=TRUE), silent=TRUE)", scriptPath);
    c.eval("evalSource");
    // debug
    result = c.eval("ls()");
    logger.info("REXP debug => " + result.toDebugString());
    result = c.eval("main()");
} catch (RserveException e) {
    logger.error("error running script test, " + e.getMessage());
} finally {
    if (c != null) {
        c.close();
    }
}
Output
evaluating => try(source("C:/temp/rdbcTest1.R", local=TRUE), silent=TRUE)
REXP debug => org.rosuda.REngine.REXPString#61c6bc05[6]{"dbDriverClass","dbDriverPath","dbPwd","dbQuery","dbUrl","dbUser"}
Error: could not find function "main"
error running oracle script test, eval failed, request status: error code: 127
Running the script directly from the R terminal works fine. When run from RConnection, I can see the values of the variables I assign (c.assign(...)) but not the main() function from the sourced script.
What am I missing with regard to sourcing a script? How can I access a function from the script?
Thanks.
** UPDATE **
I got the following to work, but if anyone has any idea why source() doesn't seem to work in my example above, please let me know!
RConnection c = ...
REXP x = c.parseAndEval("try(eval(parse(file=\"C:/temp/rdbcTest3.R\")), silent=TRUE)");
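One hedged observation on the original snippet: c.eval("evalSource") asks R to evaluate the literal symbol evalSource, which does not exist in the R session, so the source() command built in the Java string was never actually sent. Passing the string variable itself should run it:

// evaluate the contents of the Java string, not the name "evalSource" as an R symbol
c.eval(evalSource);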

Lua with Nginx: bad argument to 'getNewSessionID' (string expected, got nil)

I have compiled Nginx with Lua, and I am facing a problem.
When I hit localhost, I get an error in the Nginx error logs:
2015/05/29 07:41:32 [error] 112#0: *1 lua entry thread aborted: runtime error: /opt/hello.lua:11: bad argument #1 to 'getNewSessionID' (string expected, got nil)
stack traceback:
coroutine 0:
[C]: in function 'getNewSessionID'
My Nginx configuration is:
location / {
    default_type 'text/html';
    content_by_lua_file /opt/hello.lua;
}
And my hello.lua looks like:
package.path = package.path .. ';/opt/?.lua'
JSON = (loadfile "/opt/JSON.lua")()
local proxyFunctions = require("proxyCode")

coherenceNGIXLocation = "/coherence"

local coherenceCookie = proxyFunctions.getExistingSessionID()

-- To create a Coherence session if not present
if (not coherenceCookie or coherenceCookie == '') then
    -- ngx.log(ngx.INFO,"No Coherence Session Exists, Creating New")
    local uidCookieValueEncoded = proxyFunctions.base64Encode(str, proxyFunctions.getNewSessionID())
    local sessCreateResponse = proxyFunctions.coherencePut('{"sessionToken":"'..uidCookieValueEncoded..'"}')
    local parsedJSON = JSON:decode(sessCreateResponse.body)
    ngx.log(ngx.INFO, "Coherence Session Create")
    ngx.log(ngx.INFO, "\n\n\n=============Coherence Res==============\n\n\n"..sessCreateResponse.body.."\n\n\n=============Coherence Res==============\n\n\n")
end

local client_cookie = ngx.req.get_headers()["cookie"]

ngx.log(ngx.DEBUG, "\n\n\n=============REQ==============\n\n\n"..proxyFunctions.deepPrint(ngx.req.get_headers()).."\n\n\n=============REQ==============\n\n\n")

-- Set the cookie for the legacy request
ngx.req.set_header('cookie', client_cookie)
ngx.log(ngx.INFO, "\n\n\n=============REQ URI==============\n\n\n"..ngx.var.request_uri.."\n\n\n=============REQ URI==============\n\n\n")

local legacy_res = ngx.location.capture("/app"..ngx.var.request_uri, { share_all_vars = true })
ngx.log(ngx.DEBUG, "\n\n\n=============APP Res==============\n\n\n"..proxyFunctions.deepPrint(legacy_res.header).."\n\n\n=============APP Res==============\n\n\n")

if (legacy_res.status == 302) then
    ngx.redirect(legacy_res.header.Location)
else
    ngx.log(ngx.INFO, "Passing the Page/Layout to Maken")
    local dev_res_encoded = legacy_res.body
    dev_res_encoded = ngx.escape_uri(dev_res_encoded)
    -- ngx.req.set_header('Cookie', ngx.req.get_headers()["cookie"])
    -- ngx.req.set_header("Content-Type", "text/html") -- application/x-www-form-urlencoded -- 'page='..dev_res_encoded
    ngx.req.set_header('cookie', client_cookie) -- maken/render
    -- legacy_res.header('Content-Length', legacy_res.header['Content-Length']);
    local maken_res = ngx.location.capture("/maken/", {method = ngx.HTTP_POST, body = legacy_res.body, share_all_vars = true})
    ngx.header['Set-Cookie'] = legacy_res.header["Set-Cookie"]
    ngx.log(ngx.INFO, "Sending Back Maken Response to Client/Browser with Cookies from legacy")
    ngx.say(maken_res.body)
end
My proxyCode.lua looks like:
function _M.getNewSessionID()
    -- strip the sessionID length 11 cookie name before putting it into the session
    -- return string.sub(ngx.var.uid_set, 11)
    return _M.removeCookieSessionPrefixname(ngx.var.uid_set, 11)
end

function _M.removeCookieSessionPrefixname(cookieName)
    return string.sub(cookieName, 11)
end

function _M.getIncomingAuthToken()
    local cookie = ngx.req.get_headers()["Cookie"]
    if (cookie ~= nil) then
        local str = string.match(cookie, "AuthToken=(.*)$")
        if (str ~= nil) then
            return string.sub(str, 11)
        end
    end
    return nil
end
Can someone help me understand why I am getting this error? I am new to Lua programming and trying to run a legacy application on my laptop.
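A hedged debugging sketch (not a confirmed fix): the traceback points inside getNewSessionID, and ngx.var.uid_set comes from ngx_http_userid_module, so it is nil unless that module actually set a uid cookie on this request. Guarding the read would at least localize the failure:

function _M.getNewSessionID()
    -- ngx.var.uid_set is only populated when ngx_http_userid_module
    -- issues a new uid cookie for this request; otherwise it is nil.
    local uid = ngx.var.uid_set
    if not uid or uid == '' then
        ngx.log(ngx.ERR, "uid_set is nil/empty; is the userid module enabled?")
        return nil
    end
    return _M.removeCookieSessionPrefixname(uid)
end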
