Ownership process error running Phoenix test in Elixir 1.3.2 - runtime-error

I am working through a Phoenix tutorial but I get this error:
** (DBConnection.OwnershipError) cannot find ownership process for #PID<0.265.0>.
I am not using Task.start, so nothing should be running asynchronously, and I thought that setting the mode inside the unless block in test/support/channel_case.ex would be enough to prevent this error:
setup tags do
  :ok = Ecto.Adapters.SQL.Sandbox.checkout(Watchlist.Repo)

  unless tags[:async] do
    Ecto.Adapters.SQL.Sandbox.mode(Watchlist.Repo, {:shared, self()})
  end

  :ok
end
So I am curious how to resolve this error.
This is how I run it:
mix test test/integration/listing_movies_test.exs
I am using Elixir 1.3.2.
UPDATE
defmodule ListingMoviesIntegrationTest do
  use ExUnit.Case, async: true
  use Plug.Test

  alias Watchlist.Router

  @opts Router.init([])

  test "listing movies" do
    movie =
      %Movie{name: "Back to the future", rating: 5}
      |> Repo.insert!                    # <== error happens here

    conn = conn(:get, "/movies")
    response = Router.call(conn, @opts)

    assert response.status == 200
    assert response.resp_body == movie
  end
end
Full stack trace:
(db_connection) lib/db_connection.ex:718: DBConnection.checkout/2
(db_connection) lib/db_connection.ex:619: DBConnection.run/3
(db_connection) lib/db_connection.ex:463: DBConnection.prepare_execute/4
(ecto) lib/ecto/adapters/postgres/connection.ex:91: Ecto.Adapters.Postgres.Connection.execute/4
(ecto) lib/ecto/adapters/sql.ex:235: Ecto.Adapters.SQL.sql_call/6
(ecto) lib/ecto/adapters/sql.ex:454: Ecto.Adapters.SQL.struct/6
(ecto) lib/ecto/repo/schema.ex:397: Ecto.Repo.Schema.apply/4
(ecto) lib/ecto/repo/schema.ex:193: anonymous fn/11 in Ecto.Repo.Schema.do_insert/4
(ecto) lib/ecto/repo/schema.ex:124: Ecto.Repo.Schema.insert!/4
test/integration/listing_movies_test.exs:13: (test)
And here is test_helper.exs, which is definitely being called (I verified with a debug statement):
ExUnit.start
Ecto.Adapters.SQL.Sandbox.mode(Watchlist.Repo, :manual)
Ecto.Adapters.SQL.Sandbox.mode(Watchlist.Repo, {:shared, self()})

use ExUnit.Case, async: true
use Plug.Test
You have code for a setup hook in "test/support/channel_case.ex", but you are not using it anywhere in your test, or at least it is not clear that you do. It would be helpful if you could add this:
IO.puts "#{inspect __MODULE__}: setup is getting called."
somewhere in your setup hook's code, to make sure the code actually runs. Based on the comment you made on my previous answer, my suspicion is that this code is dead.
defmodule ListingMoviesIntegrationTest do
  use ExUnit.Case, async: true
  use Plug.Test

  alias Watchlist.Router

  setup do
    :ok = Ecto.Adapters.SQL.Sandbox.checkout(Watchlist.Repo)
    Ecto.Adapters.SQL.Sandbox.mode(Watchlist.Repo, {:shared, self()})
    :ok
  end

  @opts Router.init([])

  test "listing movies" do
    movie =
      %Movie{name: "Back to the future", rating: 5}
      |> Repo.insert!                    # <== error happens here

    conn = conn(:get, "/movies")
    response = Router.call(conn, @opts)

    assert response.status == 200
    assert response.resp_body == movie
  end
...

I got the same error, in my case on Elixir 1.8.2 and Phoenix 1.4.1. After looking at this Elixir forum thread, I changed my test_helper.exs to use the line below, switching the adapter pool mode from manual to auto:
Ecto.Adapters.SQL.Sandbox.mode(MyApp.Repo, :auto)
You can read more about the modes, pool checkout, and ownership in the Ecto docs on Hex.
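For reference, a minimal test_helper.exs along these lines might look like the sketch below (MyApp.Repo stands in for your own repo module):

ExUnit.start()

# Sketch only: MyApp.Repo is a placeholder for your application's repo.
# :auto mode hands out connections automatically instead of requiring an
# explicit Ecto.Adapters.SQL.Sandbox.checkout/1 in every test process.
Ecto.Adapters.SQL.Sandbox.mode(MyApp.Repo, :auto)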

I guess you are skipping this line:
Ecto.Adapters.SQL.Sandbox.mode(Watchlist.Repo, {:shared, self()})
because
iex(1)> unless true do
...(1)> IO.puts "test"
...(1)> end
nil
iex(2)> unless false do
...(2)> IO.puts "test"
...(2)> end
test
:ok
Did you try:
if tags[:async] do
  Ecto.Adapters.SQL.Sandbox.mode(Watchlist.Repo, {:shared, self()})
end

Related

Ecto.Repo receives a struct that does not implement Access behaviour

I have a problem with an Ecto Repo and a schema in one of my tests. The schema is the following:
defmodule Elixirserver.Transactions.Bank do
  @behaviour Elixirserver.ContentDump

  use Ecto.Schema
  import Ecto.Changeset
  alias Elixirserver.Transactions.Account

  @attrs [:name, :code]

  schema "banks" do
    field(:name, :string)
    field(:code, :string)
    has_many(:account, Account)
    timestamps()
  end

  @doc false
  def changeset(bank, attrs \\ []) do
    bank
    |> cast(attrs, @attrs)
    |> validate_required(@attrs)
  end

  def to_json(bank) do
    %{
      id: bank.id,
      name: bank.name,
      code: bank.code,
      type: "BANK"
    }
  end
end
When I try to execute a test I obtain the following:
(UndefinedFunctionError) function
Elixirserver.Transactions.Bank.fetch/2 is undefined
(Elixirserver.Transactions.Bank does not implement the Access behaviour)
The test is this:
def create(conn, %{"bank" => bank_params}) do
  with {:ok, %Bank{} = bank} <- Transactions.create_bank(bank_params) do
    conn
    |> put_status(:created)
    |> put_resp_header("location", bank_path(conn, :show, bank))
    |> render("show.json", id: bank["id"])
  end
end
Now, apparently this is because the Access behaviour is not implemented. Do I have to provide it explicitly?
I am using ExMachina to generate fixtures, and I generated the resources with mix phx.gen.json.
bank["id"] is most probably the problem. Structs don't implement the Access behaviour; you should use dot syntax instead, so bank.id should work.
Details can be found here.
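Applied to the action above, the fix is just the one render line; here is a sketch of the same code with dot access:

def create(conn, %{"bank" => bank_params}) do
  with {:ok, %Bank{} = bank} <- Transactions.create_bank(bank_params) do
    conn
    |> put_status(:created)
    |> put_resp_header("location", bank_path(conn, :show, bank))
    |> render("show.json", id: bank.id)   # dot access instead of bank["id"]
  end
end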

Function clause error in Erlang

I am trying to understand process communication in Erlang. Here I have a master process and five friend processes. If a friend sends a message to any of the other five, that friend has to reply back, and the master should be aware of all this. I am pasting the code below.
-module(prog).
-import(lists, [append/2, concat/1]).
-import(maps, [from_lists/1, find/2, get/2, update/3]).
-import(string, [equal/2]).
-import(file, [consult/1]).
-export([create_process/1, friends/4, master/1, main/0, prnt/1]).

%% CREATE PROCESS
create_process([]) -> ok;
create_process([H|T]) ->
    {A, B} = H,
    Pid = spawn(prog, friends, [B, self(), 0, A]),
    register(A, Pid),
    create_process(T).
%% FRIENDS PROCESS
friends(Msg, M_pid, State, Self_name) ->
    S = lists:concat([Self_name, " state =", State, "\n"]),
    io:fwrite(S),
    if
        State == 0 ->
            timer:sleep(500),
            io:fwrite("~p~n", [Self_name]),
            lists:foreach(fun(X) -> whereis(X) ! {Self_name, "intro", self()} end, Msg),
            friends(Msg, M_pid, State + 1, Self_name);
        State > 0 ->
            receive
                {Process_name, Process_msg, Process_id} ->
                    I = equal(Process_msg, "intro"),
                    R = equal(Process_msg, "reply"),
                    XxX = lists:concat([Self_name, " recieved ", Process_msg, " from ", Process_name, "\n"]),
                    io:fwrite(XxX),
                    if
                        I == true ->
                            io:fwrite("~p~n", [whereis(Process_name)]),
                            M_pid ! {lists:concat([Self_name, " received intro message from ", Process_name, "[", Process_id, "]"]), self()},
                            io:fwrite(I),
                            whereis(Process_name) ! {Self_name, "reply", self()},
                            friends(Msg, M_pid, State + 1, Self_name);
                        R == true ->
                            M_pid ! {lists:concat([Self_name, " received reply message from ", Process_name, "[", Process_id, "]"]), self()},
                            io:fwrite(R),
                            friends(Msg, M_pid, State + 1, Self_name)
                    end
            after
                1000 ->
                    io:fwrite(lists:concat([Self_name, " has received no calls for 1 second, ending..."]))
            end
    end.
master(State) ->
    receive
        {Process_message, Process_id} ->
            io:fwrite(Process_message),
            master(State + 1)
    after
        2000 ->
            ok
    end.

main() ->
    B = [{john, [jill, joe, bob]},
         {jill, [bob, joe, bob]},
         {sue, [jill, jill, jill, bob, jill]},
         {bob, [john]},
         {joe, [sue]}],
    create_process(B),
    io:fwrite("~p~n", [whereis(sue)]),
    master(0).
I think this line in the friends() function,
M_pid ! {lists:concat([Self_name, " received intro message from ", Process_name, "[", Process_id, "]"]), self()}
is the cause of the error, but I cannot understand why. M_pid is known, and I am concatenating all the info and sending it to the master, but I am confused why it isn't working.
The error I am getting is as follows:
Error in process <0.55.0> with exit value: {function_clause,[{lists,thing_to_list,
[<0.54.0>],
[{file,"lists.erl"},{line,603}]},
{lists,flatmap,2,[{file,"lists.erl"},{line,1250}]},
{lists,flatmap,2,[{file,"lists.erl"},{line,1250}]},
{prog,friends,4,[{file,"prog.erl"},{line,45}]}]}
I don't know what is causing the error. Sorry for asking noob questions, and thanks for your help.
An example of what Dogbert discovered:
-module(my).
-compile(export_all).

go() ->
    Pid = spawn(my, nothing, []),
    lists:concat(["hello", Pid]).

nothing() -> nothing.
In the shell:
2> c(my).
my.erl:2: Warning: export_all flag enabled - all functions will be exported
{ok,my}
3> my:go().
** exception error: no function clause matching
lists:thing_to_list(<0.75.0>) (lists.erl, line 603)
in function lists:flatmap/2 (lists.erl, line 1250)
in call from lists:flatmap/2 (lists.erl, line 1250)
4>
But:
-module(my).
-compile(export_all).

go() ->
    Pid = spawn(my, nothing, []),
    lists:concat(["hello", pid_to_list(Pid)]).

nothing() -> nothing.
In the shell:
4> c(my).
my.erl:2: Warning: export_all flag enabled - all functions will be exported
{ok,my}
5> my:go().
"hello<0.83.0>"
From the erl docs:
concat(Things) -> string()
Things = [Thing]
Thing = atom() | integer() | float() | string()
The list that you feed concat() must contain either atoms, integers, floats, or strings. A pid is neither an atom, integer, float, nor string, so a pid cannot be used with concat(). However, pid_to_list() returns a string:
pid_to_list(Pid) -> string()
Pid = pid()
As you can see, a pid has its own type: pid().
I ran your code.
Where you went wrong was to pass Process_id (which is of type pid()) to lists:concat/1.
Let us try to understand this error:
{function_clause,[{lists,thing_to_list,
[<0.84.0>],
[{file,"lists.erl"},{line,603}]},
{lists,flatmap,2,[{file,"lists.erl"},{line,1250}]},
{lists,flatmap,2,[{file,"lists.erl"},{line,1250}]},
{prog,friends,4,[{file,"prog.erl"},{line,39}]}]}
It states that the function lists:thing_to_list/1 has no clause (see the word function_clause in the error log) that accepts an argument of type pid(), denoted here by [<0.84.0>].
Strings are represented as lists in Erlang, which is why we use lists:concat/1.
As @7stud pointed out, these are the valid types which can be passed to lists:concat/1, as per the documentation:
atom() | integer() | float() | string()
There are 2 occurrences of the following line. Fix them and you are good to go:
Incorrect Code:
M_pid!{lists:concat([Self_name," received intro message from ", Process_name , "[",Process_id,"]"]), self()},
Corrected code:
M_pid!{lists:concat([Self_name," received intro message from ", Process_name , "[",pid_to_list(Process_id),"]"]), self()},
Notice the use of the function erlang:pid_to_list/1. As per the documentation, the function accepts a pid() and returns a string().
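For a quick check in the shell (the exact pid value will differ):
1> pid_to_list(self()).
"<0.80.0>"
2> lists:concat(["pid is ", pid_to_list(self())]).
"pid is <0.80.0>"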

How to check if you can write to a folder

In Julia, how do I check if the current user is allowed to write to a folder?
I could do it the Python way and just attempt the write, then fail and recover.
(In my case I can definitely recover: I have a list of locations to attempt to write to as fallbacks. I expect the first few not to work, since they are shared locations and only computer admins are likely to have permission to write there.)
Python also has an os.access function. Maybe Julia will have something similar in the future. For now we can borrow the idea. :)
It is implemented in posixmodule.c (with functionality for Windows too!), so if you are on POSIX you could simply mimic it:
julia> const R_OK = 4  # readability
julia> const W_OK = 2  # writability
julia> const X_OK = 1  # executability
julia> const F_OK = 0  # existence
julia> access(path, mode) = ccall(:access, Cint, (Cstring, Cint), path, mode) == 0;
Small test:
julia> access("/root", W_OK)
false
julia> access("/tmp", W_OK)
true
(For Windows it could be just a little more complicated... but I could not test it now.)
EDIT:
Thanks to Matt B., we can use the libuv support in Julia, which should be portable (although slower on POSIX systems):
julia> function uv_access(path, mode)
           local ret
           req = Libc.malloc(Base._sizeof_uv_fs)
           try
               ret = ccall(:uv_fs_access, Int32, (Ptr{Void}, Ptr{Void}, Cstring, Int64, Ptr{Void}), Base.eventloop(), req, path, mode, C_NULL)
               ccall(:uv_fs_req_cleanup, Void, (Ptr{Void},), req)
           finally
               Libc.free(req)
           end
           return ret, ret == 0 ? "OK" : Base.struverror(ret)
       end
julia> uv_access("/tmp", W_OK)
(0, "OK")
julia> uv_access("/root", W_OK)
(-13, "permission denied")
julia> uv_access("/nonexist", W_OK)
(-2, "no such file or directory")
Is the following sufficient?
julia> testdir(dirpath) = try (p,i) = mktemp(dirpath) ; rm(p) ; true catch false end
testdir (generic function with 1 method)
julia> testdir("/tmp")
true
julia> testdir("/root")
false
Returns true if dirpath is writable (by creating a temporary file inside a try-catch block). To find the first writable directory in a list, the following can be used:
julia> findfirst(testdir, ["/root","/tmp"])
2
Doing apropos("permissions"):
julia> apropos("permissions")
Base.Filesystem.gperm
Base.Filesystem.mkpath
Base.Filesystem.operm
Base.Filesystem.uperm
Base.Filesystem.mkdir
Base.Filesystem.chmod
shows a function called Base.Filesystem.uperm which seems to do exactly what you want it to:
help?> uperm
search: uperm supertype uppercase UpperTriangular isupper unescape_string unsafe_pointer_to_objref
uperm(file)
Gets the permissions of the owner of the file as a bitfield of
Value Description
––––– ––––––––––––––––––
01 Execute Permission
02 Write Permission
04 Read Permission
For allowed arguments, see stat.
Unfortunately it seems to be a bit buggy on my (old v0.7 nightly) build:
julia> uperm("/root")
0x07 # Uhhh I hope not?
I will update my build and raise a bug if one is not already present.
PS: In case it wasn't clear, I would expect to use this in combination with isdir to detect directory permissions specifically.
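A rough sketch of that combination (note, per the help text above, uperm reports the owner's permission bits, so this is only a partial check):

# Sketch: writable-directory check via isdir + uperm.
# 0x02 is the write bit from the permissions table above.
iswritabledir(path) = isdir(path) && (uperm(path) & 0x02) != 0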
I don't think that Dan Getz's answer will work on Windows because the temporary file created cannot be deleted while there is an open handle to it, but this amended version with a call to close does work:
function isfolderwritable(folder)
    try
        (p, i) = mktemp(folder)
        close(i)
        rm(p)
        return true
    catch
        return false
    end
end
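It can then be used the same way as testdir above, e.g. to find the first writable directory in a fallback list (output assumes /root is not writable and /tmp is):

julia> findfirst(isfolderwritable, ["/root", "/tmp"])
2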

TCP Listener on Elixir using Erlang's gen_tcp module

I am using the following code to create a TCP listener in Elixir:
defmodule KVServer do
  use Application

  @doc false
  def start(_type, _args) do
    import Supervisor.Spec

    children = [
      supervisor(Task.Supervisor, [[name: KVServer.TaskSupervisor]]),
      worker(Task, [KVServer, :accept, [4040]])
    ]

    opts = [strategy: :one_for_one, name: KVServer.Supervisor]
    Supervisor.start_link(children, opts)
  end

  @doc """
  Starts accepting connections on the given `port`.
  """
  def accept(port) do
    {:ok, socket} = :gen_tcp.listen(port,
      [:binary, packet: :line, active: false, reuseaddr: true])
    IO.puts "Accepting connections on port #{port}"
    loop_acceptor(socket)
  end

  defp loop_acceptor(socket) do
    {:ok, client} = :gen_tcp.accept(socket)
    {:ok, pid} = Task.Supervisor.start_child(KVServer.TaskSupervisor, fn -> serve(client) end)
    :ok = :gen_tcp.controlling_process(client, pid)
    loop_acceptor(socket)
  end

  defp serve(socket) do
    socket
    |> read_line()
    |> write_line(socket)

    serve(socket)
  end

  defp read_line(socket) do
    {:ok, data} = :gen_tcp.recv(socket, 0)
    data
  end

  defp write_line(line, socket) do
    :gen_tcp.send(socket, line)
  end
end
It is taken from the following link: http://elixir-lang.org/getting-started/mix-otp/task-and-gen-tcp.html
When I try to get data from my GPS device (for which I am writing this piece of code) using :gen_tcp.recv(socket, 0), I get an error:
{:error, reason} = :gen_tcp.recv(socket, 0), and the reason it returns is just "closed".
However, the device is sending data, which I confirmed using a TCP packet sniffer (tcpflow).
Also, when I try to send data using telnet as described in the tutorial above, it works fine.
Any help would be highly appreciated.
I was finally able to figure it out. The device was sending a raw stream of data, not lines of data, so I had to change the packet: :line option in the :gen_tcp.listen call to packet: :raw.
It was working with telnet because telnet sends lines of data (with line breaks).
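Concretely, the listen call from the code above becomes something like this; with packet: :raw, :gen_tcp.recv/2 returns whatever bytes are available instead of waiting for a complete line:

{:ok, socket} = :gen_tcp.listen(port,
  [:binary, packet: :raw, active: false, reuseaddr: true])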

Celluloid & performant HTTP requests

I am trying to switch an existing crawler from EventMachine to Celluloid. To get familiar with Celluloid, I've generated a bunch of static files, 150 kB each, on a Linux box, served via Nginx.
The code at the bottom should do its work, but there is an issue with it which I don't understand: the code should spawn a maximum of 50 threads because of the thread pool size of 50, but it spawns 180 of them. If I increase the pool size to 100, 330 threads are spawned. What's going wrong there?
A simple copy & paste of this code should work on every box, so any hints are welcome :)
#!/usr/bin/env jruby
require 'celluloid'
require 'open-uri'

URLS = *(1..1000)

@@requests = 0
@@responses = 0
@@total_size = 0

class Crawler
  include Celluloid

  def fetch(id)
    uri = URI("http://data.asconix.com/#{id}")
    puts "Request ##{@@requests += 1} -> #{uri}"
    begin
      req = open(uri).read
    rescue Exception => e
      puts e
    end
  end
end

URLS.each_slice(50).map do |idset|
  pool = Crawler.pool(size: 50)

  crawlers = idset.to_a.map do |id|
    begin
      pool.future(:fetch, id)
    rescue Celluloid::DeadActorError, Celluloid::MailboxError
    end
  end

  crawlers.compact.each do |resp|
    $stdout.print "Response ##{@@responses += 1} -> "
    if resp.value.size == 150000
      $stdout.print "OK\n"
      @@total_size += resp.value.size
    else
      $stdout.print "ERROR\n"
    end
  end

  pool.terminate
  puts "Actors left: #{Celluloid::Actor.all.to_set.length} -- Alive: #{Celluloid::Actor.all.to_set.select(&:alive?).length}"
end

$stdout.print "Requests total: #{@@requests}\n"
$stdout.print "Responses total: #{@@responses}\n"
$stdout.print "Size total: #{@@total_size} bytes\n"
By the way, the same issue occurs when I define the pool outside the each_slice loop:
....
@pool = Crawler.pool(size: 50)

URLS.each_slice(50).map do |idset|
  crawlers = idset.to_a.map do |id|
    begin
      @pool.future(:fetch, id)
    rescue Celluloid::DeadActorError, Celluloid::MailboxError
    end
  end

  crawlers.compact.each do |resp|
    $stdout.print "Response ##{@@responses += 1} -> "
    if resp.value.size == 150000
      $stdout.print "OK\n"
      @@total_size += resp.value.size
    else
      $stdout.print "ERROR\n"
    end
  end

  puts "Actors left: #{Celluloid::Actor.all.to_set.length} -- Alive: #{Celluloid::Actor.all.to_set.select(&:alive?).length}"
end
Which Ruby are you using? JRuby, Rubinius, etc.? And which version?
The reason I ask is that threads are handled differently in each Ruby. What you seem to be describing are extra threads that come in for supervisors and for tasks. Looking at the date of your post, it is likely that fibers are actually becoming native threads, which suggests you are using JRuby. Also, using futures often invokes the internal thread pool, which has nothing to do with your pool.
With both of those reasons, and others like them you could look for, it makes sense why you'd see a higher thread count than your pool calls for. This is a bit old though, so maybe you could follow up as to whether you still have this problem, and post output.
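If you do follow up, a small diagnostic sketch (not part of the original code) that prints the raw thread count next to Celluloid's view of its actors could help compare the numbers:

# Hypothetical diagnostic: add inside the each_slice loop.
puts "Threads: #{Thread.list.size}"
puts "Actors:  #{Celluloid::Actor.all.size}"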
