SENTRY_DSN in FastAPI config

Could you explain this configuration in a FastAPI BaseSettings class?
SENTRY_DSN: Optional[HttpUrl] = None

@validator("SENTRY_DSN", pre=True)
def sentry_dsn_can_be_blank(cls, v: str) -> Optional[str]:
    if len(v) == 0:
        return None
    return v
Is it necessary for every project developed with FastAPI?
The reference code is:
https://github.com/tiangolo/full-stack-fastapi-postgresql/blob/master/%7B%7Bcookiecutter.project_slug%7D%7D/backend/app/app/core/config.py
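For reference, here is a minimal, self-contained sketch of the same pattern, assuming pydantic v1-style BaseSettings as used in that cookiecutter (the Settings class name and the Config block are illustrative, not taken from your project). The pre=True validator runs before HttpUrl parsing, so an empty SENTRY_DSN environment variable becomes None instead of raising a validation error:

from typing import Optional

from pydantic import BaseSettings, HttpUrl, validator


class Settings(BaseSettings):
    # Unset or blank SENTRY_DSN -> None; any non-empty value must parse as a URL.
    SENTRY_DSN: Optional[HttpUrl] = None

    @validator("SENTRY_DSN", pre=True)
    def sentry_dsn_can_be_blank(cls, v: str) -> Optional[str]:
        # Runs before HttpUrl validation because of pre=True.
        if len(v) == 0:
            return None
        return v

    class Config:
        case_sensitive = True


settings = Settings()
# e.g. with SENTRY_DSN="" in the environment, settings.SENTRY_DSN is None

This validator is only relevant if you configure Sentry through such a setting; projects that don't use Sentry don't need it.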

Related

Ecto.Repo receives a struct that does not implement Access behaviour

I have a problem with an Ecto Repo and a schema in one of my tests. The schema is the following:
defmodule Elixirserver.Transactions.Bank do
  @behaviour Elixirserver.ContentDump

  use Ecto.Schema
  import Ecto.Changeset
  alias Elixirserver.Transactions.Account

  @attrs [:name, :code]

  schema "banks" do
    field(:name, :string)
    field(:code, :string)
    has_many(:account, Account)

    timestamps()
  end

  @doc false
  def changeset(bank, attrs \\ []) do
    bank
    |> cast(attrs, @attrs)
    |> validate_required(@attrs)
  end

  def to_json(bank) do
    %{
      id: bank.id,
      name: bank.name,
      code: bank.code,
      type: "BANK"
    }
  end
end
When I try to execute a test I obtain the following:
(UndefinedFunctionError) function Elixirserver.Transactions.Bank.fetch/2 is undefined
(Elixirserver.Transactions.Bank does not implement the Access behaviour)
The test is this:
def create(conn, %{"bank" => bank_params}) do
  with {:ok, %Bank{} = bank} <- Transactions.create_bank(bank_params) do
    conn
    |> put_status(:created)
    |> put_resp_header("location", bank_path(conn, :show, bank))
    |> render("show.json", id: bank["id"])
  end
end
Now, apparently this is because the Access behaviour is not implemented. Do I have to provide it explicitly?
I am using ExMachina to generate fixtures, and I generated the resources with mix phx.gen.json.
bank["id"] is most probably the problem. Structs don't implement the access interface, you should use the dot so this should work: bank.id.
Details can be found here.

Ejabberd: error in simple module to handle offline messages

I have an Ejabberd 17.01 installation where I need to push a notification in case a recipient is offline. This seems to be a common task, and solutions using a customized Ejabberd module can be found everywhere. However, I just don't get it running. First, here's my script:
-module(mod_offline_push).
-behaviour(gen_mod).

-export([start/2, stop/1]).
-export([push_message/3]).

-include("ejabberd.hrl").
-include("logger.hrl").
-include("jlib.hrl").

start(Host, _Opts) ->
    ?INFO_MSG("mod_offline_push loading", []),
    ejabberd_hooks:add(offline_message_hook, Host, ?MODULE, push_message, 10),
    ok.

stop(Host) ->
    ?INFO_MSG("mod_offline_push stopping", []),
    ejabberd_hooks:add(offline_message_hook, Host, ?MODULE, push_message, 10),
    ok.

push_message(From, To, Packet) ->
    ?INFO_MSG("mod_offline_push -> push_message", [To]),
    Type = fxml:get_tag_attr_s(<<"type">>, Packet), % Supposedly since 16.04
    %Type = xml:get_tag_attr_s(<<"type">>, Packet), % Supposedly since 13.XX
    %Type = xml:get_tag_attr_s("type", Packet),
    %Type = xml:get_tag_attr_s(list_to_binary("type"), Packet),
    ?INFO_MSG("mod_offline_push -> push_message", []),
    ok.
The problem is the Type = ... line in the function push_message; without that line the last info message is logged (so the hook definitely works). When browsing online, I can find all kinds of function calls to extract elements from Packet; as far as I understand, this changed over time with new releases. But no luck: all variants lead to some kind of error. The current variant returns:
2017-01-25 20:38:08.701 [error] <0.21678.0>#ejabberd_hooks:run1:332 {function_clause,[{fxml,get_tag_attr_s,[<<"type">>,{message,<<>>,normal,<<>>,{jid,<<"homer">>,<<"xxx.xxx.xxx.xxx">>,<<"conference">>,<<"homer">>,<<"xxx.xxx.xxx.xxx">>,<<"conference">>},{jid,<<"carl">>,<<"xxx.xxx.xxx.xxx">>,<<>>,<<"carl">>,<<"xxx.xxx.xxx.xxx">>,<<>>},[],[{text,<<>>,<<"sfsdfsdf">>}],undefined,[],#{}}],[{file,"src/fxml.erl"},{line,169}]},{mod_offline_push,push_message,3,[{file,"mod_offline_push.erl"},{line,33}]},{ejabberd_hooks,safe_apply,3,[{file,"src/ejabberd_hooks.erl"},{line,382}]},{ejabberd_hooks,run1,3,[{file,"src/ejabberd_hooks.erl"},{line,329}]},{ejabberd_sm,route,3,[{file,"src/ejabberd_sm.erl"},{line,126}]},{ejabberd_local,route,3,[{file,"src/ejabberd_local.erl"},{line,110}]},{ejabberd_router,route,3,[{file,"src/ejabberd_router.erl"},{line,87}]},{ejabberd_c2s,check_privacy_route,5,[{file,"src/ejabberd_c2s.erl"},{line,1886}]}]}
running hook: {offline_message_hook,[{jid,<<"homer">>,<<"xxx.xxx.xxx.xxx">>,<<"conference">>,<<"homer">>,<<"xxx.xxx.xxx.xxx">>,<<"conference">>},{jid,<<"carl">>,<<"xxx.xxx.xxx.xxx">>,<<>>,<<"carl">>,<<"xxx.xxx.xxx.xxx">>,<<>>},{message,<<>>,normal,<<>>,{jid,<<"homer">>,<<"xxx.xxx.xxx.xxx">>,<<"conference">>,<<"homer">>,<<"xxx.xxx.xxx.xxx">>,<<"conference">>},{jid,<<"carl">>,<<"xxx.xxx.xxx.xxx">>,<<>>,<<"carl">>,<<"xxx.xxx.xxx.xxx">>,<<>>},[],[{text,<<>>,<<"sfsdfsdf">>}],undefined,[],#{}}]}
I'm new to Ejabberd and Erlang, so I cannot really interpret the error, but line 33 as mentioned in {mod_offline_push,push_message,3,[{file,"mod_offline_push.erl"}, {line,33}]} is definitely the line calling get_tag_attr_s.
UPDATE 2017/01/27: Since this cost me a lot of headache -- and I'm still not perfectly happy -- I post my current working module here in the hope that it might help others. My setup is Ejabberd 17.01 running on Ubuntu 16.04. Most of the stuff I tried and failed with seems to be for older versions of Ejabberd:
-module(mod_fcm_fork).
-behaviour(gen_mod).

%% public methods for this module
-export([start/2, stop/1]).
-export([push_notification/3]).

%% included for writing to ejabberd log file
-include("ejabberd.hrl").
-include("logger.hrl").
-include("xmpp_codec.hrl").

%% Copied this record definition from jlib.hrl
%% Including "xmpp_codec.hrl" and "jlib.hrl" resulted in errors ("XYZ already defined")
-record(jid, {user = <<"">> :: binary(),
              server = <<"">> :: binary(),
              resource = <<"">> :: binary(),
              luser = <<"">> :: binary(),
              lserver = <<"">> :: binary(),
              lresource = <<"">> :: binary()}).

start(Host, _Opts) ->
    ?INFO_MSG("mod_fcm_fork loading", []),
    % Providing the most basic API to the clients and servers that are part of the Inets application
    inets:start(),
    % Add hook to handle messages to users who are offline
    ejabberd_hooks:add(offline_message_hook, Host, ?MODULE, push_notification, 10),
    ok.

stop(Host) ->
    ?INFO_MSG("mod_fcm_fork stopping", []),
    % Remove the hook again when the module is stopped
    ejabberd_hooks:delete(offline_message_hook, Host, ?MODULE, push_notification, 10),
    ok.

push_notification(From, To, Packet) ->
    % Generate JIDs of sender and receiver
    FromJid = lists:concat([binary_to_list(From#jid.user), "@", binary_to_list(From#jid.server), "/", binary_to_list(From#jid.resource)]),
    ToJid = lists:concat([binary_to_list(To#jid.user), "@", binary_to_list(To#jid.server), "/", binary_to_list(To#jid.resource)]),
    % Get message body
    MessageBody = Packet#message.body,
    % Check that MessageBody is not empty
    case MessageBody /= [] of
        true ->
            % Get first element (no idea when this list can have more elements)
            [First | _] = MessageBody,
            % Get message data and convert to string
            MessageBodyText = binary_to_list(First#text.data),
            send_post_request(FromJid, ToJid, MessageBodyText);
        false ->
            ?INFO_MSG("mod_fcm_fork -> push_notification: MessageBody is empty", [])
    end,
    ok.

send_post_request(FromJid, ToJid, MessageBodyText) ->
    %?INFO_MSG("mod_fcm_fork -> send_post_request -> MessageBodyText = ~p", [Demo]),
    Method = post,
    PostURL = gen_mod:get_module_opt(global, ?MODULE, post_url, fun(X) -> X end, all),
    % Add data as query string. Not nice, a query body would be preferable.
    % Problem: the message body itself can be a JSON string, and I couldn't figure out the correct encoding.
    URL = lists:concat([binary_to_list(PostURL), "?", "fromjid=", FromJid, "&tojid=", ToJid, "&body=", edoc_lib:escape_uri(MessageBodyText)]),
    Header = [],
    ContentType = "application/json",
    Body = [],
    ?INFO_MSG("mod_fcm_fork -> send_post_request -> URL = ~p", [URL]),
    % ADD SSL CONFIG BELOW!
    %HTTPOptions = [{ssl,[{versions, ['tlsv1.2']}]}],
    HTTPOptions = [],
    Options = [],
    httpc:request(Method, {URL, Header, ContentType, Body}, HTTPOptions, Options),
    ok.
Actually, it fails because of the second argument Packet that you pass to fxml:get_tag_attr_s in the push_message function:
{message,<<>>,normal,<<>>,
{jid,<<"homer">>,<<"xxx.xxx.xxx.xxx">>,<<"conference">>,
<<"homer">>,<<"xxx.xxx.xxx.xxx">>,<<"conference">>},
{jid,<<"carl">>,<<"xxx.xxx.xxx.xxx">>,<<>>,<<"carl">>,
<<"xxx.xxx.xxx.xxx">>,<<>>},
[],
[{text,<<>>,<<"sfsdfsdf">>}],
undefined,[],#{}}
because it is not an xmlel.
It looks like it is the message record defined in tools/xmpp_codec.hrl, with an empty (<<>>) id and type normal.
xmpp_codec.hrl
-record(message, {id :: binary(),
                  type = normal :: 'chat' | 'error' | 'groupchat' | 'headline' | 'normal',
                  lang :: binary(),
                  from :: any(),
                  to :: any(),
                  subject = [] :: [#text{}],
                  body = [] :: [#text{}],
                  thread :: binary(),
                  error :: #error{},
                  sub_els = [] :: [any()]}).
Include this file and use just
Type = Packet#message.type
or, if you expect a binary value:
Type = erlang:atom_to_binary(Packet#message.type, utf8)
The newest way to do that seems to be with xmpp:get_type/1:
Type = xmpp:get_type(Packet),
It returns an atom, in this case normal.

Create a portal_user_catalog and have it used (Plone)

I'm creating a fork of my Plone site (which has not been forked for a long time). This site has a special catalog object for user profiles (a special Archetypes-based object type) which is called portal_user_catalog:
$ bin/instance debug
>>> portal = app.Plone
>>> print [d for d in portal.objectMap() if d['meta_type'] == 'Plone Catalog Tool']
[{'meta_type': 'Plone Catalog Tool', 'id': 'portal_catalog'},
{'meta_type': 'Plone Catalog Tool', 'id': 'portal_user_catalog'}]
This looks reasonable because the user profiles don't have most of the indexes of the "normal" objects, but have a small set of their own indexes.
Since I found no way to create this object from scratch, I exported it from the old site (as portal_user_catalog.zexp) and imported it into the new site. This seemed to work, but I can't add objects to the imported catalog, not even by explicitly calling the catalog_object method. Instead, the user profiles are added to the standard portal_catalog.
Now I found a module in my product which seems to serve the purpose (Products/myproduct/exportimport/catalog.py):
"""Catalog tool setup handlers.
$Id: catalog.py 77004 2007-06-24 08:57:54Z yuppie $
"""
from Products.GenericSetup.utils import exportObjects
from Products.GenericSetup.utils import importObjects
from Products.CMFCore.utils import getToolByName
from zope.component import queryMultiAdapter
from Products.GenericSetup.interfaces import IBody
def importCatalogTool(context):
"""Import catalog tool.
"""
site = context.getSite()
obj = getToolByName(site, 'portal_user_catalog')
parent_path=''
if obj and not obj():
importer = queryMultiAdapter((obj, context), IBody)
path = '%s%s' % (parent_path, obj.getId().replace(' ', '_'))
__traceback_info__ = path
print [importer]
if importer:
print importer.name
if importer.name:
path = '%s%s' % (parent_path, 'usercatalog')
print path
filename = '%s%s' % (path, importer.suffix)
print filename
body = context.readDataFile(filename)
if body is not None:
importer.filename = filename # for error reporting
importer.body = body
if getattr(obj, 'objectValues', False):
for sub in obj.objectValues():
importObjects(sub, path+'/', context)
def exportCatalogTool(context):
"""Export catalog tool.
"""
site = context.getSite()
obj = getToolByName(site, 'portal_user_catalog', None)
if tool is None:
logger = context.getLogger('catalog')
logger.info('Nothing to export.')
return
parent_path=''
exporter = queryMultiAdapter((obj, context), IBody)
path = '%s%s' % (parent_path, obj.getId().replace(' ', '_'))
if exporter:
if exporter.name:
path = '%s%s' % (parent_path, 'usercatalog')
filename = '%s%s' % (path, exporter.suffix)
body = exporter.body
if body is not None:
context.writeDataFile(filename, body, exporter.mime_type)
if getattr(obj, 'objectValues', False):
for sub in obj.objectValues():
exportObjects(sub, path+'/', context)
I tried to use it, but I have no idea how it is supposed to be done;
I can't call it TTW (should I try to publish the methods?!).
I tried it in a debug session:
$ bin/instance debug
>>> portal = app.Plone
>>> from Products.myproduct.exportimport.catalog import exportCatalogTool
>>> exportCatalogTool(portal)
Traceback (most recent call last):
File "<console>", line 1, in <module>
File ".../Products/myproduct/exportimport/catalog.py", line 58, in exportCatalogTool
site = context.getSite()
AttributeError: getSite
So, if this is the way to go, it looks like I need a "real" context.
Update: To get this context, I tried an External Method:
# -*- coding: utf-8 -*-
from Products.myproduct.exportimport.catalog import exportCatalogTool
from pdb import set_trace


def p(dt, dd):
    print '%-16s%s' % (dt + ':', dd)


def main(self):
    """
    Export the portal_user_catalog
    """
    g = globals()
    print '#' * 79
    for a in ('__package__', '__module__'):
        if a in g:
            p(a, g[a])
    p('self', self)
    set_trace()
    exportCatalogTool(self)
However, when I called it, I got the same <PloneSite at /Plone> object as the argument to the main function, which doesn't have the getSite attribute. Perhaps my site doesn't call such External Methods correctly?
Or would I need to mention this module somehow in my configure.zcml, and how? I searched my directory tree (especially below Products/myproduct/profiles) for exportimport, the module name, and several other strings, but I couldn't find anything; perhaps there was an integration once but it was broken ...
So how do I make this portal_user_catalog work?
Thank you!
Update: Another debug session suggests the source of the problem to be some transaction matter:
>>> portal = app.Plone
>>> puc = portal.portal_user_catalog
>>> puc._catalog()
[]
>>> profiles_folder = portal.some_folder_with_profiles
>>> for o in profiles_folder.objectValues():
... puc.catalog_object(o)
...
>>> puc._catalog()
[<Products.ZCatalog.Catalog.mybrains object at 0x69ff8d8>, ...]
This population of the portal_user_catalog doesn't persist; after termination of the debug session and starting fg, the brains are gone.
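Changes made in a bin/instance debug session are only persisted when the transaction is committed; a minimal check, assuming the session above, would be:

import transaction
transaction.commit()  # without this, everything done in the debug session is discarded on exit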
It looks like the problem was indeed related to transactions.
I had
import transaction
...

class Browser(BrowserView):
    ...
    def processNewUser(self):
        ....
        transaction.commit()
before, but apparently this was not good enough (and/or perhaps not done correctly).
Now I start the transaction explicitly with transaction.begin(), save intermediate results with transaction.savepoint(), abort the transaction explicitly with transaction.abort() in case of errors (try / except), and have exactly one transaction.commit() at the end, in the case of success. Everything seems to work.
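For illustration, a minimal sketch of that flow (the view class and the _create_profile helper are hypothetical; only the transaction calls follow the pattern just described):

import transaction
from Products.Five.browser import BrowserView


class Browser(BrowserView):

    def processNewUser(self):
        transaction.begin()                   # start the transaction explicitly
        try:
            profile = self._create_profile()  # hypothetical helper creating the profile object
            transaction.savepoint()           # save the intermediate result
            self.context.portal_user_catalog.catalog_object(profile)
        except Exception:
            transaction.abort()               # roll back everything on error
            raise
        else:
            transaction.commit()              # exactly one commit, on success only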
Of course, Plone still doesn't take this non-standard catalog into account; when I "clear and rebuild" it, it is empty afterwards. But for my application it works well enough.

Error attempting to decode with wreq

I'm trying really hard to understand how to use lenses and wreq, and it's turning out to really slow me down.
The error seems to be claiming there's some mismatched type here. I'm not sure exactly how to handle that, though. I'm still fairly new to Haskell, and these lenses are pretty confusing. However, wreq seems to be cleaner, which is why I chose to use it. Can anyone help me understand what the error is and how to fix it? I seem to run into a lot of these type errors. I am aware that Maybe TestInfo won't be returned by my code at the moment; that's OK, that error I know how to handle. This error, however, I don't.
Here is my code:
Module TestInformation:
{-# LANGUAGE OverloadedStrings #-}
module TestInformation where

import Auth
import Network.Wreq
import Control.Lens
import Data.Aeson
import Data.Aeson.Lens (_String)

type TestNumber = String

data TestInfo = TestInfo
  { testId   :: Int
  , testName :: String
  }

instance FromJSON TestInfo

getTestInfo :: Key -> TestNumber -> Maybe TestInfo
getTestInfo key test =
    decode (res ^. responseBody . _String)
  where
    opts = defaults & auth ?~ oauth2Bearer key
    res  = getWith opts ("http://testsite.com/v1/tests/" ++ test)
Module Auth:
module Auth where
import qualified Data.ByteString as B
type Key = B.ByteString
The error:
GHCi, version 7.10.1: http://www.haskell.org/ghc/ :? for help
[1 of 2] Compiling Auth ( Auth.hs, interpreted )
[2 of 2] Compiling TestInformation ( TestInformation.hs, interpreted )
TestInformation.hs:36:18:
Couldn't match type ‘Response body10’
with ‘IO (Response Data.ByteString.Lazy.Internal.ByteString)’
Expected type: (body10
-> Const Data.ByteString.Lazy.Internal.ByteString body10)
-> IO (Response Data.ByteString.Lazy.Internal.ByteString)
-> Const
Data.ByteString.Lazy.Internal.ByteString
(IO (Response Data.ByteString.Lazy.Internal.ByteString))
Actual type: (body10
-> Const Data.ByteString.Lazy.Internal.ByteString body10)
-> Response body10
-> Const Data.ByteString.Lazy.Internal.ByteString (Response body10)
In the first argument of ‘(.)’, namely ‘responseBody’
In the second argument of ‘(^.)’, namely ‘responseBody . _String’
TestInformation.hs:36:33:
Couldn't match type ‘Data.ByteString.Lazy.Internal.ByteString’
with ‘Data.Text.Internal.Text’
Expected type: (Data.ByteString.Lazy.Internal.ByteString
-> Const
Data.ByteString.Lazy.Internal.ByteString
Data.ByteString.Lazy.Internal.ByteString)
-> body10 -> Const Data.ByteString.Lazy.Internal.ByteString body10
Actual type: (Data.Text.Internal.Text
-> Const
Data.ByteString.Lazy.Internal.ByteString Data.Text.Internal.Text)
-> body10 -> Const Data.ByteString.Lazy.Internal.ByteString body10
In the second argument of ‘(.)’, namely ‘_String’
In the second argument of ‘(^.)’, namely ‘responseBody . _String’
Failed, modules loaded: Auth.
Leaving GHCi.
This type checks for me:
getTestInfo :: Key -> TestNumber -> IO (Maybe TestInfo)
getTestInfo key test = do
    res <- getWith opts ("http://testsite.com/v1/tests/" ++ test)
    return $ decode (res ^. responseBody)
  where
    opts = defaults & auth ?~ oauth2Bearer key
getWith is an IO action, so to get its return value you need to use the monadic binding operator <-.
Full program: http://lpaste.net/133443 http://lpaste.net/133498

How to redirect console output of command to a file?

What are the means of redirecting output from a single command to a file in sbt?
I could exit sbt, execute sbt mycommand > out.txt, and start it again, but I'm wondering if there's an alternative?
In Add a custom logger you can find more information about what's required to have a custom logger that logs to a file - use extraLoggers and add an instance of sbt.AbstractLogger that does the saving.
You may find my answer to Displaying timestamp for debug mode in SBT? useful. The example is copied here:
def datedPrintln = (m: String) =>
  println(s"+++ ${java.util.Calendar.getInstance().getTime()} $m")

extraLoggers := {
  val clientLogger = FullLogger {
    new Logger {
      def log(level: Level.Value, message: => String): Unit =
        if (level >= Level.Info) datedPrintln(s"$message at $level")
      def success(message: => String): Unit = datedPrintln(s"success: $message")
      def trace(t: => Throwable): Unit = datedPrintln(s"trace: throwable: $t")
    }
  }
  val currentFunction = extraLoggers.value
  (key: ScopedKey[_]) => clientLogger +: currentFunction(key)
}
