Ecto.Repo receives a struct that does not implement Access behaviour - functional-programming

I have a problem with an Ecto Repo and a schema in one of my tests. The schema is the following:
defmodule Elixirserver.Transactions.Bank do
  @behaviour Elixirserver.ContentDump

  use Ecto.Schema
  import Ecto.Changeset
  alias Elixirserver.Transactions.Account

  @attrs [:name, :code]

  schema "banks" do
    field(:name, :string)
    field(:code, :string)
    has_many(:account, Account)

    timestamps()
  end

  @doc false
  def changeset(bank, attrs \\ []) do
    bank
    |> cast(attrs, @attrs)
    |> validate_required(@attrs)
  end

  def to_json(bank) do
    %{
      id: bank.id,
      name: bank.name,
      code: bank.code,
      type: "BANK"
    }
  end
end
When I try to execute a test, I obtain the following:
(UndefinedFunctionError) function
Elixirserver.Transactions.Bank.fetch/2 is undefined
(Elixirserver.Transactions.Bank does not implement the Access behaviour)
The controller action exercised by the test is this:
def create(conn, %{"bank" => bank_params}) do
  with {:ok, %Bank{} = bank} <- Transactions.create_bank(bank_params) do
    conn
    |> put_status(:created)
    |> put_resp_header("location", bank_path(conn, :show, bank))
    |> render("show.json", id: bank["id"])
  end
end
Now, apparently this is because the Access behaviour is not implemented. Do I have to provide it explicitly?
I am using ExMachina to generate fixtures, and I generated the resources with mix phx.gen.json.

bank["id"] is most probably the problem. Structs don't implement the access interface, you should use the dot so this should work: bank.id.
Details can be found here.

Related

How to create shard&index in airflow mongohook?

I want to run a mongo command with Airflow's MongoHook. How can I do it?
sh.shardCollection(db_name +, { _id: "hashed" }, false, { numInitialChunks: 128 });
db.collection.createIndex({ "field": 1 }, { field: true });
The pymongo client that Airflow's MongoHook uses doesn't support the sh.shardCollection shell helper in your script.
The createIndex collection method, however, is available through the pymongo client (as create_index).
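For the index part on its own, a minimal sketch that goes through the hook's underlying pymongo client might look like this (the connection id, database and collection names are placeholders, not from the question):
from airflow.providers.mongo.hooks.mongo import MongoHook

def create_field_index():
    # "mongo_default", "my_db" and "my_collection" are placeholder names
    hook = MongoHook("mongo_default")
    client = hook.get_conn()  # a plain pymongo.MongoClient
    client["my_db"]["my_collection"].create_index([("field", 1)])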
I recommend anyway to install the mongosh CLI binary and bake it into your container image for your workers.
You can write your mongosh commands to a script such as /dags/templates/mongo-admin-create-index.js or some other location where it can be found.
Then you can implement a custom operator that uses the SubprocessHook to run a mongosh CLI command such as:
mongosh -f {mongosh_script} {db_address}
This custom operator would be along these lines
from typing import Sequence

from airflow.compat.functools import cached_property
from airflow.hooks.subprocess import SubprocessHook
from airflow.models import BaseOperator
from airflow.providers.mongo.hooks.mongo import MongoHook


class MongoshScriptOperator(BaseOperator):
    template_fields: Sequence[str] = ('mongosh_script',)

    def __init__(
        self,
        *,
        mongosh_script: str,
        conn_id: str = 'mongo_default',  # Airflow connection to read the Mongo URI from
        **kwargs,
    ) -> None:
        super().__init__(**kwargs)
        self.mongosh_script = mongosh_script
        self.conn_id = conn_id

    @cached_property
    def subprocess_hook(self):
        """Returns hook for running the shell command"""
        return SubprocessHook()

    def execute(self, context):
        """Executes a mongosh script"""
        mh = MongoHook(self.conn_id)
        self.subprocess_hook.run_command(
            command=['mongosh', '-f', self.mongosh_script, mh.uri],
        )
When creating the DagNode, you can pass the location of the script to your custom operator.
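For illustration, wiring the operator into a DAG could look roughly like this (the dag_id, task_id, start date and schedule are made-up values; only the script path comes from above):
from datetime import datetime
from airflow import DAG

with DAG("mongo_admin_create_index",
         start_date=datetime(2023, 1, 1),
         schedule_interval=None) as dag:
    create_index = MongoshScriptOperator(
        task_id="run_mongosh_script",
        mongosh_script="/dags/templates/mongo-admin-create-index.js",
    )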

run the same method on a list of instances in pathos.multiprocessing

I am working on a traveling salesman problem. Given that all agents traverse the same graph to find their own paths independently, I am trying to parallelize the agents' path-finding step. The task: in each iteration, all agents start from a start node and find their paths, and all the paths are collected to determine the best path of that iteration.
I am using pathos.multiprocessing.
The agent class has a method that traverses the graph:
class Agent:
    def find_a_path(self, graph):
        # here is the logic to find a path by traversing the graph
        return found_path
I create a helper function to wrap the method:
def do_agent_find_a_path(agent, graph):
    return agent.find_a_path(graph)
Then I create a pool and call amap, passing the helper function, a list of agent instances, and the same graph:
pool = ProcessPool(nodes = 10)
res = pool.amap(do_agent_find_a_path, agents, [graph] * len(agents))
However, the processes are created in sequence and it runs very slowly. I'd like some guidance on a correct/decent way to leverage pathos in this situation.
Thank you!
UPDATE:
I am using pathos 0.2.3 on ubuntu,
Name: pathos
Version: 0.2.3
Summary: parallel graph management and execution in heterogeneous computing
Home-page: https://pypi.org/project/pathos
Author: Mike McKerns
I get the following error with the ThreadPool sample code:
>>> import pathos
>>> pathos.pools.ThreadPool().iumap(lambda x:x*x, [1,2,3,4])
Traceback (most recent call last):
  File "/opt/anaconda/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2910, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-5-f8f5e7774646>", line 1, in <module>
    pathos.pools.ThreadPool().iumap(lambda x:x*x, [1,2,3,4])
AttributeError: 'ThreadPool' object has no attribute 'iumap'
I'm the pathos author. I'm not sure how long your method takes to run, but from your comments I'm going to assume not very long. I'd suggest that, if the method is "fast", you use a ThreadPool instead. Also, if you don't need to preserve the order of the results, the fastest map is typically uimap (unordered, iterative map).
>>> class Agent:
...   def basepath(self, dirname):
...     import os
...     return os.path.basename(dirname)
...   def slowpath(self, dirname):
...     import time
...     time.sleep(.2)
...     return self.basepath(dirname)
...
>>> a = Agent()
>>> import pathos.pools as pp
>>> dirs = ['/tmp/foo', '/var/path/bar', '/root/bin/bash', '/tmp/foo/bar']
>>> import time
>>> p = pp.ProcessPool()
>>> go = time.time(); tuple(p.uimap(a.basepath, dirs)); print(time.time()-go)
('foo', 'bar', 'bash', 'bar')
0.006751060485839844
>>> p.close(); p.join(); p.clear()
>>> t = pp.ThreadPool(4)
>>> go = time.time(); tuple(t.uimap(a.basepath, dirs)); print(time.time()-go)
('foo', 'bar', 'bash', 'bar')
0.0005156993865966797
>>> t.close(); t.join(); t.clear()
and, just to compare against something that takes a bit longer...
>>> t = pp.ThreadPool(4)
>>> go = time.time(); tuple(t.uimap(a.slowpath, dirs)); print(time.time()-go)
('bar', 'bash', 'bar', 'foo')
0.2055649757385254
>>> t.close(); t.join(); t.clear()
>>> p = pp.ProcessPool()
>>> go = time.time(); tuple(p.uimap(a.slowpath, dirs)); print(time.time()-go)
('foo', 'bar', 'bash', 'bar')
0.2084510326385498
>>> p.close(); p.join(); p.clear()
>>>
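Applied to the question's setup (reusing agents, graph and the do_agent_find_a_path wrapper from above; the pool size is arbitrary), that suggestion would look roughly like:
import pathos.pools as pp

pool = pp.ThreadPool(10)  # or pp.ProcessPool() if find_a_path is CPU-heavy
# uimap is unordered and lazy, so collect the results with list()
paths = list(pool.uimap(do_agent_find_a_path, agents, [graph] * len(agents)))
pool.close(); pool.join(); pool.clear()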

send calls via an endpoint to a GenServer

Given a running GenServer, is there a known way to send synchronous/asynchronous calls to the pid via an endpoint, without using the Phoenix framework?
Here's an example call (using python's requests library) that maps the reply term to JSON:
iex> give_genserver_endpoint(pid, 'http://mygenserverendpoint/api')
iex> {:ok, 'http://mygenserverendpoint/api'}
>>> requests.get(url='http://mygenserverendpoint/getfood/fruits/colour/red')
>>> '{ "hits" : ["apple", "plum"]}'
You can write a complete Elixir HTTP server using Cowboy and Plug:
Application Module
defmodule MyApp do
  use Application

  def start(_type, _args) do
    import Supervisor.Spec

    children = [
      worker(MyGenServer, []),
      Plug.Adapters.Cowboy.child_spec(:http, MyRouter, [], [port: 4001])
    ]

    opts = [strategy: :one_for_one, name: MyApp.Supervisor]
    Supervisor.start_link(children, opts)
  end
end
Router Module
defmodule MyRouter do
  use Plug.Router

  plug :match
  plug :dispatch

  get "/mygenserverendpoint/getfood/fruits/colour/:colour" do
    response_body = MyGenServer.get_fruit_by_colour(colour)

    conn
    |> put_resp_content_type("application/json")
    |> send_resp(200, Poison.encode!(response_body))
  end

  match _ do
    send_resp(conn, 404, "oops")
  end
end
GenServer module
defmodule MyGenServer do
  use GenServer

  def start_link do
    GenServer.start_link(__MODULE__, :ok, name: __MODULE__)
  end

  def get_fruit_by_colour(colour) do
    GenServer.call(__MODULE__, {:get_by_colour, colour})
  end

  def handle_call({:get_by_colour, colour}, _from, state) do
    {:reply, %{"hits" => ["apple", "plum"]}, state}
  end
end

Create a portal_user_catalog and have it used (Plone)

I'm creating a fork of my Plone site (which has not been forked for a long time). This site has a special catalog object for user profiles (a special Archetypes-based object type) which is called portal_user_catalog:
$ bin/instance debug
>>> portal = app.Plone
>>> print [d for d in portal.objectMap() if d['meta_type'] == 'Plone Catalog Tool']
[{'meta_type': 'Plone Catalog Tool', 'id': 'portal_catalog'},
{'meta_type': 'Plone Catalog Tool', 'id': 'portal_user_catalog'}]
This looks reasonable because the user profiles don't have most of the indexes of the "normal" objects, but have a small set of own indexes.
Since I found no way how to create this object from scratch, I exported it from the old site (as portal_user_catalog.zexp) and imported it in the new site. This seemed to work, but I can't add objects to the imported catalog, not even by explicitly calling the catalog_object method. Instead, the user profiles are added to the standard portal_catalog.
Now I found a module in my product which seems to serve the purpose (Products/myproduct/exportimport/catalog.py):
"""Catalog tool setup handlers.
$Id: catalog.py 77004 2007-06-24 08:57:54Z yuppie $
"""
from Products.GenericSetup.utils import exportObjects
from Products.GenericSetup.utils import importObjects
from Products.CMFCore.utils import getToolByName
from zope.component import queryMultiAdapter
from Products.GenericSetup.interfaces import IBody
def importCatalogTool(context):
"""Import catalog tool.
"""
site = context.getSite()
obj = getToolByName(site, 'portal_user_catalog')
parent_path=''
if obj and not obj():
importer = queryMultiAdapter((obj, context), IBody)
path = '%s%s' % (parent_path, obj.getId().replace(' ', '_'))
__traceback_info__ = path
print [importer]
if importer:
print importer.name
if importer.name:
path = '%s%s' % (parent_path, 'usercatalog')
print path
filename = '%s%s' % (path, importer.suffix)
print filename
body = context.readDataFile(filename)
if body is not None:
importer.filename = filename # for error reporting
importer.body = body
if getattr(obj, 'objectValues', False):
for sub in obj.objectValues():
importObjects(sub, path+'/', context)
def exportCatalogTool(context):
    """Export catalog tool.
    """
    site = context.getSite()
    obj = getToolByName(site, 'portal_user_catalog', None)
    if obj is None:
        logger = context.getLogger('catalog')
        logger.info('Nothing to export.')
        return

    parent_path = ''
    exporter = queryMultiAdapter((obj, context), IBody)
    path = '%s%s' % (parent_path, obj.getId().replace(' ', '_'))
    if exporter:
        if exporter.name:
            path = '%s%s' % (parent_path, 'usercatalog')
        filename = '%s%s' % (path, exporter.suffix)
        body = exporter.body
        if body is not None:
            context.writeDataFile(filename, body, exporter.mime_type)
    if getattr(obj, 'objectValues', False):
        for sub in obj.objectValues():
            exportObjects(sub, path + '/', context)
I tried to use it, but I have no idea how it is supposed to be done;
I can't call it TTW (should I try to publish the methods?!).
I tried it in a debug session:
$ bin/instance debug
>>> portal = app.Plone
>>> from Products.myproduct.exportimport.catalog import exportCatalogTool
>>> exportCatalogTool(portal)
Traceback (most recent call last):
File "<console>", line 1, in <module>
File ".../Products/myproduct/exportimport/catalog.py", line 58, in exportCatalogTool
site = context.getSite()
AttributeError: getSite
So, if this is the way to go, it looks like I need a "real" context.
Update: To get this context, I tried an External Method:
# -*- coding: utf-8 -*-
from Products.myproduct.exportimport.catalog import exportCatalogTool
from pdb import set_trace


def p(dt, dd):
    print '%-16s%s' % (dt+':', dd)


def main(self):
    """
    Export the portal_user_catalog
    """
    g = globals()
    print '#' * 79
    for a in ('__package__', '__module__'):
        if a in g:
            p(a, g[a])
    p('self', self)
    set_trace()
    exportCatalogTool(self)
However, when I called it, I got the same <PloneSite at /Plone> object as the argument to the main function, which doesn't have the getSite attribute. Perhaps my site doesn't call such External Methods correctly?
Or would I need to mention this module somehow in my configure.zcml, but how? I searched my directory tree (especially below Products/myproduct/profiles) for exportimport, the module name, and several other strings, but I couldn't find anything; perhaps there was an integration once that later broke ...
So how do I make this portal_user_catalog work?
Thank you!
Update: Another debug session suggests the source of the problem to be some transaction matter:
>>> portal = app.Plone
>>> puc = portal.portal_user_catalog
>>> puc._catalog()
[]
>>> profiles_folder = portal.some_folder_with_profiles
>>> for o in profiles_folder.objectValues():
...     puc.catalog_object(o)
...
>>> puc._catalog()
[<Products.ZCatalog.Catalog.mybrains object at 0x69ff8d8>, ...]
This population of the portal_user_catalog doesn't persist; after termination of the debug session and starting fg, the brains are gone.
It looks like the problem was indeed related to transactions.
I had
import transaction
...

class Browser(BrowserView):
    ...

    def processNewUser(self):
        ...
        transaction.commit()
before, but apparently this was not good enough (and/or perhaps not done correctly).
Now I start the transaction explicitly with transaction.begin(), save intermediate results with transaction.savepoint(), abort the transaction explicitly with transaction.abort() in case of errors (try / except), and have exactly one transaction.commit() at the end, in the case of success. Everything seems to work.
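As a rough sketch of that pattern (the function name and the folder argument are made up; the error handling is simplified):
import transaction

def reindex_user_profiles(portal, profiles_folder):
    transaction.begin()
    try:
        puc = portal.portal_user_catalog
        for obj in profiles_folder.objectValues():
            puc.catalog_object(obj)
            transaction.savepoint()  # keep intermediate results
        transaction.commit()
    except Exception:
        transaction.abort()
        raise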
Of course, Plone still doesn't take this non-standard catalog into account; when I "clear and rebuild" it, it is empty afterwards. But for my application it works well enough.

Error attempting to decode with wreq

I'm trying really hard to understand how to use lenses and wreq, and it's turning out to really slow me down.
The error seems to be claiming there's some mismatched type here. I'm not sure exactly how to handle that, though. I'm still fairly new to Haskell, and these lenses are pretty confusing. However, wreq seems to be cleaner, which is why I chose to use it. Can anyone help me understand what the error is and how to fix it? I seem to run into a lot of these type errors. I am aware that Maybe TestInfo won't be returned by my code at the moment; that's OK, and that error I know how to handle. This error, however, I don't.
Here is my code:
Module TestInformation:
{-# LANGUAGE DeriveGeneric     #-}
{-# LANGUAGE OverloadedStrings #-}
module TestInformation where

import Auth
import GHC.Generics (Generic)
import Network.Wreq
import Control.Lens
import Data.Aeson
import Data.Aeson.Lens (_String)

type TestNumber = String

data TestInfo = TestInfo {
    testId   :: Int,
    testName :: String
  } deriving (Generic)

instance FromJSON TestInfo
getTestInfo :: Key -> TestNumber -> Maybe TestInfo
getTestInfo key test =
decode (res ^. responseBody . _String)
where opts = defaults & auth ?~ oauth2Bearer key
res = getWith opts ("http://testsite.com/v1/tests/" ++ test)
Module Auth:
module Auth where
import qualified Data.ByteString as B
type Key = B.ByteString
The error:
GHCi, version 7.10.1: http://www.haskell.org/ghc/ :? for help
[1 of 2] Compiling Auth ( Auth.hs, interpreted )
[2 of 2] Compiling TestInformation ( TestInformation.hs, interpreted )
TestInformation.hs:36:18:
Couldn't match type ‘Response body10’
with ‘IO (Response Data.ByteString.Lazy.Internal.ByteString)’
Expected type: (body10
-> Const Data.ByteString.Lazy.Internal.ByteString body10)
-> IO (Response Data.ByteString.Lazy.Internal.ByteString)
-> Const
Data.ByteString.Lazy.Internal.ByteString
(IO (Response Data.ByteString.Lazy.Internal.ByteString))
Actual type: (body10
-> Const Data.ByteString.Lazy.Internal.ByteString body10)
-> Response body10
-> Const Data.ByteString.Lazy.Internal.ByteString (Response body10)
In the first argument of ‘(.)’, namely ‘responseBody’
In the second argument of ‘(^.)’, namely ‘responseBody . _String’
TestInformation.hs:36:33:
Couldn't match type ‘Data.ByteString.Lazy.Internal.ByteString’
with ‘Data.Text.Internal.Text’
Expected type: (Data.ByteString.Lazy.Internal.ByteString
-> Const
Data.ByteString.Lazy.Internal.ByteString
Data.ByteString.Lazy.Internal.ByteString)
-> body10 -> Const Data.ByteString.Lazy.Internal.ByteString body10
Actual type: (Data.Text.Internal.Text
-> Const
Data.ByteString.Lazy.Internal.ByteString Data.Text.Internal.Text)
-> body10 -> Const Data.ByteString.Lazy.Internal.ByteString body10
In the second argument of ‘(.)’, namely ‘_String’
In the second argument of ‘(^.)’, namely ‘responseBody . _String’
Failed, modules loaded: Auth.
Leaving GHCi.
This type checks for me:
getTestInfo :: Key -> TestNumber -> IO (Maybe TestInfo)
getTestInfo key test = do
    res <- getWith opts ("http://testsite.com/v1/tests/" ++ test)
    return $ decode (res ^. responseBody)
  where opts = defaults & auth ?~ oauth2Bearer key
getWith is an IO action, so to get its return value you need to use the monadic binding operator <-.
Full program: http://lpaste.net/133443 http://lpaste.net/133498
