Pig script failing with "GC overhead limit exceeded" (out of memory)

I am running a Pig script on around 1 GB of data that involves several GROUP BY and FOREACH statements.
Here is the sample Pig code:
ab = GROUP y BY (y1, y2, y3, y4, y5, y6);

xy = FOREACH ab {
    abc = FOREACH y GENERATE x1, x2, x3, x4, x5, x6, rel1, rel2;
    GENERATE group, abc;
};
Note: rel1 and rel2 are generated by the GROUP BY and are themselves bags. For one [y1,y2,y3,y4,y5,y6] key the bag contains around 448 records totalling about 700 MB, and Pig fails while building the xy relation with "GC overhead limit exceeded".
YARN logs:
2018-08-08 15:01:13,299 INFO [Service Thread] org.apache.pig.impl.util.SpillableMemoryManager: first memory handler call - Collection threshold init = 1148190720(1121280K) used = 5726479864(5592265K) committed = 5726797824(5592576K) max = 5726797824(5592576K), toFree = 3046581752
2018-08-08 15:04:22,192 FATAL [ResponseProcessor for block BP-1779694772-10.xxx.xx.17-1533341581987:blk_1074055963_315162] org.apache.hadoop.yarn.YarnUncaughtExceptionHandler: Thread Thread[ResponseProcessor for block BP-1779694772-10.xxx.xx.17-1533341581987:blk_1074055963_315162,5,main] threw an Error. Shutting down now...
java.lang.OutOfMemoryError: GC overhead limit exceeded
2018-08-08 15:05:24,112 INFO [ResponseProcessor for block BP-1779694772-10.xxx.xx.17-1533341581987:blk_1074055963_315162] org.apache.hadoop.util.ExitUtil: Halt with status -1 Message: HaltException
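A common first mitigation (not from the original post; assuming standard Hadoop 2.x / Pig property names) is to give the reducers more heap and make Pig spill its cached bags earlier, for example at the top of the script:

-- sketch: raise reducer memory and spill cached bags sooner
SET mapreduce.reduce.memory.mb 8192;
SET mapreduce.reduce.java.opts '-Xmx6144m';
SET pig.cachedbag.memusage 0.1;  -- fraction of heap for cached bags before spilling (default 0.2)

Whether this is enough depends on the skew: a single key whose bag is ~700 MB still has to be materialized (or spilled) on one reducer.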

Related

Run two pmap with different workers asynchronously in Julia

I have a function which I parallelize with pmap. I would like to run this function 4 times asynchronously, using 10 workers each, but I can't run two or more pmaps at the same time.
I'm using Julia v1.1 on a 40-CPU Linux machine.
using Distributed
addprocs(4)
@everywhere function TestParallel(x)
    a = 0
    while a < 4
        println("Value = ", x, " in worker = ", myid())
        sleep(1)
        a += 1
    end
end
a = WorkerPool([2, 3])
b = WorkerPool([4, 5])
c = [i for i = 1:10]
@sync @async for i in c
    pmap(x -> TestParallel(x), a, c)
    pmap(x -> TestParallel(x), b, c)
end
I expect to have:
From worker 2: Value = 1 in worker = 2
From worker 3: Value = 2 in worker = 3
From worker 4: Value = 3 in worker = 4
From worker 5: Value = 4 in worker = 5
So the first two elements of c go to the first pmap and the next two elements to the second pmap; then whoever finishes first gets the next two elements.
Now I'm obtaining:
From worker 2: Value = 1 in worker = 2
From worker 3: Value = 2 in worker = 3
From worker 2: Value = 1 in worker = 2
From worker 3: Value = 2 in worker = 3
After the first pmap completes all elements of c, the second pmap starts over, solving all elements again.
From worker 2: Value = 9 in worker = 2
From worker 3: Value = 10 in worker = 3
From worker 5: Value = 2 in worker = 5
From worker 4: Value = 1 in worker = 4
There are some problems with your question: @sync and @async use green threads, while you want to distribute your computations. The syntax @sync @async [some code] spawns the code asynchronously and then waits for it to complete, so effectively it has the same meaning as just [some code].
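For example, a minimal sketch of that equivalence:

@sync @async sleep(1)   # spawns one task and immediately waits for it: blocks for ~1 second
sleep(1)                # effectively the same: the caller is blocked for ~1 second either way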
While your question is not clear, I will assume that you want to launch two pmaps in parallel utilizing separate worker pools (this seems like the most likely thing you are trying to do).
In that case here is the code:
using Distributed
addprocs(4)
@everywhere function testpar2(x)
    for a in 0:3
        println("Value = $x [$a] in worker = $(myid())")
        sleep(0.2)
    end
    return 1000*myid() + x*x  # I assume you want to return some value
end
a = WorkerPool([2, 3])
b = WorkerPool([4, 5])
c = collect(1:10)

@sync begin
    @async begin
        res1 = pmap(x -> testpar2(x), a, c)
        println("Got res1=$res1")
    end
    @async begin
        res2 = pmap(x -> testpar2(x), b, c)
        println("Got res2=$res2")
    end
end
When running the above code you will see something like:
...
From worker 5: Value = 10 [3] in worker = 5
From worker 2: Value = 10 [3] in worker = 2
From worker 3: Value = 9 [3] in worker = 3
Got res2=[4001, 5004, 5009, 4016, 5025, 4036, 4049, 5064, 4081, 5100]
Got res1=[2001, 3004, 2009, 3016, 2025, 3036, 3049, 2064, 3081, 2100]
Task (done) @0x00000000134076b0
You can clearly see that both pmaps have been run in parallel on different worker pools.

SCILAB ode: How to solve 2nd order ODE

I am coming from ode45 in MATLAB and am trying to learn ode in Scilab. I ran into an error I am not sure how to address.
function der = f(t, x)
    wn3 = 2800 * %pi/30;     // rad/s
    m = 868.1/32.174;        // slugs
    k = m*wn3^2;             // lbf/ft
    w = 4100 * %pi/30;       // rad/s
    re_me = 4.09/32.174/12;  // slug-ft
    F0 = w^2*re_me;          // lbf
    der(1) = x(2);
    der(2) = -k*x(1) + F0*sin(w*t);
endfunction
x0 = [0; 0];
t = 0:0.1:5;
t0 = t(1);
x = ode(x0,t0,t,f);
plot(t,x(1,:));
I get this error message that I don't understand:
lsoda-- at t (=r1), mxstep (=i1) steps
needed before reaching tout
where i1 is : 500
where r1 is : 0.1027287737654D+01
Excessive work done on this call (perhaps wrong jacobian type).
at line 35 of executed file C:\Users\ndomenico\Documents\Scilab\high_frequency_vibrator_amplitude_3d.sce
ode: lsoda exit with state -1.
Thank you!
Your ODE is particularly stiff (k = 2319733). To me, it makes no sense to use such a large final time, and the time step you took (0.1) is also very large w.r.t. the driving frequency. If you replace the line
t = 0:0.1:5
by
t = linspace(0,0.1,1001)
i.e. request approximations of your solution for t in [0, 0.1] over 1000 time steps, the integration completes and plotting x(1,:) shows the forced response.
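Since the answer attributes the trouble to stiffness, another option worth trying (a sketch, not from the original answer) is to select Scilab's stiff solver explicitly via the first argument of ode, together with the shorter time grid:

t = linspace(0, 0.1, 1001);        // shorter horizon, finer output grid
x = ode("stiff", x0, t(1), t, f);  // ask for the stiff solver instead of the default lsoda behaviour
plot(t, x(1,:));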

Identify cause of high GHC memory consumption without a profiling build

I heard GHC is slow in terms of LOC per second. So I created this Haskell program:
module Main where
x1 = 1
x2 = 2
x3 = 3
...
x999998 = 999998
x999999 = 999999
x1000000 = 1000000
main = putStrLn "1M LOC!"
And I can't even compile it! At least I can see that the parser can do 43 lines per second:
[1 of 1] Compiling Main ( 1Mloc.hs, 1Mloc.o )
*** Parser [Main]:
Parser [Main]: alloc=22735369056 time=23683.420
As far as I know, GHC RTS must be recompiled with profiling enabled to start digging into the cause. Given I don't have profiled GHC, is there any chance to figure out what is causing this? I can't even collect statistics because it gets killed...
Killed process 16609 (ghc) total-vm:1074093288kB, anon-rss:6804448kB ...
Actually, I can't compile 10K LOC either. Going down to 1K LOC, at least I can see horrible productivity numbers. I realize this is a synthetic program, but what could be so bad about it?
Linking 1Kloc ...
1,383,344,416 bytes allocated in the heap
325,164,408 bytes copied during GC
60,849,840 bytes maximum residency (9 sample(s))
282,960 bytes maximum slop
58 MB total memory in use (0 MB lost due to fragmentation)
Tot time (elapsed) Avg pause Max pause
Gen 0 233 colls, 0 par 0.227s 0.230s 0.0010s 0.0066s
Gen 1 9 colls, 0 par 0.149s 0.174s 0.0193s 0.0588s
TASKS: 4 (1 bound, 3 peak workers (3 total), using -N1)
SPARKS: 0(0 converted, 0 overflowed, 0 dud, 0 GC'd, 0 fizzled)
INIT time 0.000s ( 0.000s elapsed)
MUT time 0.522s ( 1.047s elapsed)
GC time 0.376s ( 0.404s elapsed)
EXIT time 0.000s ( 0.008s elapsed)
Total time 0.899s ( 1.460s elapsed)
Alloc rate 2,647,974,824 bytes per MUT second
Productivity 58.1% of total user, 71.7% of total elapsed
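For reference, both kinds of output quoted above can be produced by a stock, non-profiled GHC; a plausible invocation (the flags and file name are assumptions, not taken from the post) is:

# per-pass allocation/timing plus the RTS summary of GHC's own run
ghc -v2 -ddump-timings 1Kloc.hs +RTS -s -RTS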

How to make parallel calls in Erlang and wait for all of the results?

I'm working on a mobile game backend in Erlang. For each HTTP request, it might need to query different data sources, such as PostgreSQL, MongoDB and Redis. I want to make independent calls to these data sources in parallel but cannot find a clear Erlang way to do it.
For example,
handle_request(?POST, <<"login">>, UserId, Token) ->
% Verify token from PostgreSQL
AuthResult = auth_service:login(UserId, Token),
% Get user data such as outfits and level from MongoDB
UserDataResult = userdata_service:get(UserId),
% Get messages sent to the user from Redis
MessageResult = message_service:get(UserId),
% How to run the above 3 calls in parallel?
% Then wait for all their results here?
% Combine the result and send back to client
build_response(AuthResult, UserDataResult, MessageResult).
Each service will eventually call into the corresponding data driver (epgsql, eredis, mongo_erlang), ending up in some poolboy:transaction and gen_server:call there. How to design these service modules is not decided yet either.
I want to make sure that the 3 data calls above run in parallel, that the handle_request function then waits for all 3 of those calls to finish, and only then calls build_response. How could I do that properly?
As a reference, in Node.js I might do this:
var authPromise = AuthService.login(user_id, token);
var userDataPromise = UserdataService.get(user_id);
var messagePromise = MessageService.get(user_id);
Promise.all([authPromise, userDataPromise, messagePromise]).then(function (values) {
    buildResponse(values);
});
In Scala I might do this:
val authFuture = AuthService.login(userId, token)
val userDataFuture = UserdataService.get(userId)
val messageFuture = MessageService.get(userId)
for {
    auth <- authFuture
    userData <- userDataFuture
    message <- messageFuture
} yield buildResponse(auth, userData, message)
Apparently, I'm thinking of the problem as a promise/future/yield problem. But I was told that if I'm looking for a Promise in Erlang I might be going in the wrong direction. What would be the best practice in Erlang to achieve this?
You can employ stacked receive clauses. Erlang will wait forever in a receive clause until a message arrives from a process (or you can specify a timeout with after), which is similar to awaiting a promise in Node.js:
-module(my).
-compile(export_all).

all_results() ->
    Pid1 = spawn(?MODULE, getdata1, [self(), {10, 20}]),
    Pid2 = spawn(?MODULE, getdata2, [self(), 30]),
    Pid3 = spawn(?MODULE, getdata3, [self()]),
    [receive {Pid1, Result1} -> Result1 end,
     receive {Pid2, Result2} -> Result2 end,
     receive {Pid3, Result3} -> Result3 end].

getdata1(From, {X, Y}) ->
    %% mimic the time it takes to retrieve the data:
    SleepTime = rand:uniform(100),
    io:format("Sleeping for ~w milliseconds~n", [SleepTime]),
    timer:sleep(SleepTime),
    From ! {self(), X+Y}. %% send the data back to the main process

getdata2(From, Z) ->
    SleepTime = rand:uniform(100),
    io:format("Sleeping for ~w milliseconds~n", [SleepTime]),
    timer:sleep(SleepTime),
    From ! {self(), Z+1}.

getdata3(From) ->
    SleepTime = rand:uniform(100),
    io:format("Sleeping for ~w milliseconds~n", [SleepTime]),
    timer:sleep(SleepTime),
    From ! {self(), 16}.
Note that this code:
[receive {Pid1, Result1} -> Result1 end,
receive {Pid2, Result2} -> Result2 end,
receive {Pid3, Result3} -> Result3 end].
is equivalent to:
R1 = receive {Pid1, Result1} ->
Result1
end,
R2 = receive {Pid2, Result2} ->
Result2
end,
R3 = receive {Pid3, Result3} ->
Result3
end,
[R1, R2, R3].
In the shell:
~/erlang_programs$ erl
Erlang/OTP 20 [erts-9.3] [source] [64-bit] [smp:4:4] [ds:4:4:10] [async-threads:10] [hipe] [kernel-poll:false]
Eshell V9.3 (abort with ^G)
1> c(my).
my.erl:2: Warning: export_all flag enabled - all functions will be exported
{ok,my}
2> timer:tc(my, all_results, []).
Sleeping for 66 milliseconds
Sleeping for 16 milliseconds
Sleeping for 93 milliseconds
{96356,[30,31,16]}
3> timer:tc(my, all_results, []).
Sleeping for 57 milliseconds
Sleeping for 30 milliseconds
Sleeping for 99 milliseconds
{100153,[30,31,16]}
4> timer:tc(my, all_results, []).
Sleeping for 66 milliseconds
Sleeping for 31 milliseconds
Sleeping for 24 milliseconds
{66426,[30,31,16]}
timer:tc() returns the time that a function takes to execute in microseconds (1,000 microseconds = 1 millisecond) along with the function's return value. For instance, the first time that all_results() was called it took 96.4 milliseconds to complete, while the individual processes would have taken 66+16+93=175+ milliseconds to finish if executed sequentially.
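Applied to the handler from the question, the same pattern could look like the following sketch (the service modules and their return values are assumed as in the question; error handling and timeouts are omitted):

handle_request(?POST, <<"login">>, UserId, Token) ->
    Parent = self(),
    %% spawn one process per independent data source
    AuthPid = spawn(fun() -> Parent ! {self(), auth_service:login(UserId, Token)} end),
    UserPid = spawn(fun() -> Parent ! {self(), userdata_service:get(UserId)} end),
    MsgPid  = spawn(fun() -> Parent ! {self(), message_service:get(UserId)} end),
    %% wait for all three results; messages that arrive early sit in the mailbox
    AuthResult     = receive {AuthPid, A} -> A end,
    UserDataResult = receive {UserPid, U} -> U end,
    MessageResult  = receive {MsgPid, M} -> M end,
    build_response(AuthResult, UserDataResult, MessageResult).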

MATLAB: What happens to a global variable when running in parallel mode?

What happens to a global variable when running in parallel mode?
I have a global variable, "to_be_optimized_parameterIndexSet", which is a vector of indexes that should be optimized using gamultiobj, and I have set its value only in the main script (nowhere else).
My code works properly in serial mode, but when I switch to parallel mode (using "matlabpool open" and setting proper values for 'gaoptimset') the mentioned global variable becomes empty (=[]) in the fitness function and causes this error:
??? Error using ==> parallel_function at 598
Error in ==> PF_gaMultiFitness at 15 [THIS LINE: constants(to_be_optimized_parameterIndexSet) = individual;]
In an assignment A(I) = B, the number of elements in B and
I must be the same.
Error in ==> fcnvectorizer at 17
parfor (i = 1:popSize)
Error in ==> gamultiobjMakeState at 52
Score =
fcnvectorizer(state.Population(initScoreProvided+1:end,:),FitnessFcn,numObj,options.SerialUserFcn);
Error in ==> gamultiobjsolve at 11
state = gamultiobjMakeState(GenomeLength,FitnessFcn,output.problemtype,options);
Error in ==> gamultiobj at 238
[x,fval,exitFlag,output,population,scores] = gamultiobjsolve(FitnessFcn,nvars, ...
Error in ==> PF_GA_mainScript at 136
[x, fval, exitflag, output] = gamultiobj(@(individual)PF_gaMultiFitness(individual, initialConstants), ...
Caused by:
Failure in user-supplied fitness function evaluation. GA cannot continue.
I have checked all the code to make sure I haven't changed this global variable anywhere else.
I have a quad-core processor.
Where is the bug? Any suggestions?
EDIT 1: The MATLAB code in the main script:
clc
clear
close all
format short g
global simulation_duration % PF_gaMultiFitness will use this variable
global to_be_optimized_parameterIndexSet % PF_gaMultiFitness will use this variable
global IC stimulusMoment % PF_gaMultiFitness will use these variables
[initialConstants IC] = oldCICR_Constants; %initialize state
to_be_optimized_parameterIndexSet = [21 22 23 24 25 26 27 28 17 20];
LB = [ 0.97667 0.38185 0.63529 0.046564 0.23207 0.87484 0.46014 0.0030636 0.46494 0.82407 ];
UB = [1.8486 0.68292 0.87129 0.87814 0.66982 1.3819 0.64562 0.15456 1.3717 1.8168];
PopulationSize = input('Population size? ') ;
GaTimeLimit = input('GA time limit? (second) ');
matlabpool open
nGenerations = inf;
options = gaoptimset('PopulationSize', PopulationSize, 'TimeLimit',GaTimeLimit, 'Generations', nGenerations, ...
'Vectorized','off', 'UseParallel','always');
[x, fval, exitflag, output] = gamultiobj(@(individual)PF_gaMultiFitness(individual, initialConstants), ...
length(to_be_optimized_parameterIndexSet),[],[],[],[],LB,UB,options);
matlabpool close
some other piece of code to show the results...
The MATLAB code of the fitness function, "PF_gaMultiFitness":
function objectives = PF_gaMultiFitness(individual, constants)
    global simulation_duration IC stimulusMoment to_be_optimized_parameterIndexSet
    % THIS FUNCTION RETURNS MULTI OBJECTIVES AND PUTS EACH OBJECTIVE IN A COLUMN
    constants(to_be_optimized_parameterIndexSet) = individual;
    [smcState, ~, Time] = oldCICR_CompCore(constants, IC, simulation_duration, 2);

    targetValue = 1; % [uM] desired [Ca]i peak concentration
    afterStimulus = smcState(Time > stimulusMoment, 14); % values of [Ca]i after stimulus
    peak_Ca_value = max(afterStimulus); % smcState(:,14) is [Ca]i

    if peak_Ca_value < 0.8 * targetValue
        objectives(1,1) = inf;
    else
        objectives(1,1) = abs(peak_Ca_value - targetValue);
    end

    pkIDX = peakFinder(afterStimulus);
    nPeaks = sum(pkIDX);
    if nPeaks > 1
        peakIndexes = find(pkIDX);
        period = Time(peakIndexes(2)) - Time(peakIndexes(1));
        objectives(1,2) = 1e5 * 1/period;
    elseif nPeaks == 1 && peak_Ca_value > 0.8 * targetValue
        objectives(1,2) = 0;
    else
        objectives(1,2) = inf;
    end
end
Global variables do not get passed from the MATLAB client to the workers executing the body of the PARFOR loop. The only data that does get sent into the loop body are variables that occur in the text of the program. This blog entry might help.
It really depends on the type of variable you're putting in. I need to see more of your code to point out the flaw, but in general it is good practice to avoid assuming complicated variables will be passed to each worker. In other words, anything more than a primitive may need to be reinitialized inside a parallel routine or may need specific function calls (like using feval for function handles).
My advice: RTM
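One way to sidestep the issue entirely, sketched here as a suggestion rather than the accepted fix, is to drop the global and capture the index set in the anonymous fitness function, since variables captured by an anonymous function are serialized and sent to the workers:

% sketch: pass the index set explicitly instead of through a global
to_be_optimized_parameterIndexSet = [21 22 23 24 25 26 27 28 17 20];
fitnessFcn = @(individual) PF_gaMultiFitness(individual, initialConstants, ...
                                             to_be_optimized_parameterIndexSet);
[x, fval, exitflag, output] = gamultiobj(fitnessFcn, ...
    length(to_be_optimized_parameterIndexSet), [], [], [], [], LB, UB, options);

PF_gaMultiFitness would then take the index set as a third input argument and drop the corresponding global declaration.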
