Flask Testing with SQLite DB, deleting not working

To test my Flask application, I set up a local SQLite DB with create_engine. To delete a record, my route calls TablesEtl().delete_record_by_id().
from sqlalchemy import create_engine
from sqlalchemy.orm import Query, Session

# Base and get_database_engine_uri come from the application (import paths omitted)

class TablesEtl:
    def __init__(self) -> None:
        self.engine = create_engine(
            get_database_engine_uri(), connect_args={"check_same_thread": False}
        )

    def get_session(self) -> Session:
        session = Session(self.engine, autoflush=False)
        return session

    def delete_record_by_id(self, record_id: int, table: Base) -> None:
        session = self.get_session()
        session.query(table).filter(table.id == record_id).delete()
        session.commit()
        session.close()

    def get_record_by_id(self, record_id: int, table: Base) -> dict:
        session = self.get_session()
        query: Base = Query(table, session=session).filter(table.id == record_id).first()
        session.close()
        if query is None:
            return {}
        info = {}
        for field in query.__table__.columns.keys():
            info[field.replace("_", "-")] = getattr(query, field)
        return info
The delete test creates a record, deletes it through this route, and then asks the "information" route for the record it just created. That request should return a 404, since the record no longer exists.
The problem is that it returns 200 with the correct information about the record I just created.
This didn't occur when I ran the tests with pytest alone, but now that I'm running them with pytest-xdist I think the second request arrives before the record is actually deleted. I also noticed that a db-name.db-journal file is created alongside my original db-name.db.
def test_not_finding_model_info_after_delete(self) -> None:
    model_id = create_model_on_client(self.client, self.authorization_header, "Model Name")
    self.client.delete(API_DELETE_MODEL, headers=self.authorization_header, json={"model-id": model_id})
    response = get_model(self.client, self.authorization_header, model_id)
    self.assertEqual(404, response.status_code)
When I run it with a time.sleep it works, but I don't want to depend on a sleep. When I rerun pytest with the --lf option it also works, but I believe that's because fewer calls are being made to the DB. When I later check all the records that were supposed to be deleted against the DB, they are in fact deleted. I believe it has something to do with this .db-journal.
Can someone shed some light on this? Would another DB be better for this problem?
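One workaround, sketched below under the assumption that get_database_engine_uri() controls the test engine URI, is to give each pytest-xdist worker its own SQLite file, so parallel workers never share a database file or its journal:
import os

def get_database_engine_uri() -> str:
    # pytest-xdist sets PYTEST_XDIST_WORKER to "gw0", "gw1", ... per worker;
    # fall back to a single file when running plain pytest
    worker = os.environ.get("PYTEST_XDIST_WORKER", "")
    return f"sqlite:///./db-name{worker}.db"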

Dynamically generated tasks in task group Airflow and condition for EmailOperator

I want to build the following DAG in Airflow.
If the search_jira_tickets task finds new tickets, it should return a list of tickets that I should process according to the scheme above. There are a few problems:
I get the Airflow exception TypeError: 'XComArg' object is not iterable when I iterate over the list returned by serch_new_jira_tickets(). I need the iteration because one ticket can be good and another not. Here is my DAG:
from airflow import DAG
from airflow.decorators import task
from airflow.exceptions import AirflowSkipException
from airflow.operators.email import EmailOperator
from airflow.utils.task_group import TaskGroup

# the jira client and default_args are defined elsewhere

@task
def serch_new_jira_tickets():
    jql = 'MY_JQL_QUERY'
    issues_list = jira.search_issues(jql)
    if issues_list:
        return issues_list
    else:
        raise AirflowSkipException('No new issues found')

@task
def check_ticket(issue):
    ...

@task
def process_ticket(issue):
    ...

with DAG(
    dag_id='update_tickets',
    default_args=default_args,
    schedule_interval='@hourly'
) as dag:
    new_tickets = serch_new_jira_tickets()
    for ticket in new_tickets:
        with TaskGroup(group_id='process_funds_jira_tickets') as group:
            email_manager = EmailOperator(
                task_id='send_email',
                to='me@example.com',
                subject='Value in jira ticket was updated',
                html_content='Value in ticket has been updated',
                dag=dag)
            check_ticket = check_ticket(ticket)
            process_ticket = process_ticket(ticket)
            check_ticket >> process_ticket >> email_manager
        new_tickets >> group
I also don't know how to create a condition for the EmailOperator under which it executes only if one of the fields in the Jira ticket == 100, and otherwise does nothing. I.e. if one of the values in the process_ticket task == 100, run the email_manager task; otherwise don't.
For your first problem I think what you want is the new Dynamic Task Mapping feature in Airflow 2.3. Prior to this version, any kind of for loop on a variable number of Tasks can only be done with some hacks.
Assuming you are able to use Airflow 2.3 you need to modify your task serch_new_jira_tickets (sic) to return a list of tickets. If there are no tickets, it should return an empty list.
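For instance (a sketch keeping the question's function name; the jira client is assumed to be configured elsewhere):
@task
def serch_new_jira_tickets():
    jql = 'MY_JQL_QUERY'
    issues_list = jira.search_issues(jql)
    # an empty list makes the mapped tasks below expand to zero instances,
    # instead of skipping via AirflowSkipException
    return list(issues_list) if issues_list else []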
You can then remove your TaskGroup and do this:
new_tickets = serch_new_jira_tickets()
checked = check_ticket.expand(ticket=new_tickets)
processed = process_ticket.expand(ticket=checked)
emailed = email_manager.expand(subject=processed)
I think the EmailOperator would need to be tweaked as well, but I'm not sure how you are passing in the template parameters. Perhaps your process_ticket task returns subject strings?
email_manager = EmailOperator.partial(
    task_id='send_email',
    to='me@example.com',
    # subject is supplied per ticket via .expand(subject=processed) above
    html_content='Value in ticket has been updated',
    dag=dag)
For your second problem I suspect you want the ShortCircuitOperator: put one between process_ticket and the EmailOperator, so the email is sent only when the condition holds and the rest of the chain is skipped otherwise.
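A rough per-ticket sketch of that idea (ignoring the dynamic mapping for clarity; the task id and the pulled value are assumptions):
from airflow.operators.python import ShortCircuitOperator

def _ticket_field_is_100(ti):
    # pull the value produced by process_ticket; returning False skips
    # everything downstream, including the email task
    return ti.xcom_pull(task_ids='process_ticket') == 100

check_value = ShortCircuitOperator(
    task_id='check_field_value',
    python_callable=_ticket_field_is_100,
)
process_ticket >> check_value >> email_manager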

How to reset callback_query value to receive multiple inputs in Python-Telegram-Bot

After the user clicks on one button, callback_query.data is assigned a value and can no longer receive additional inputs, so the while loop runs forever with that value. I am not sure how to reset the callback_query.data value or request an additional input from the user.
I would like to receive multiple inputs from the user via an InlineKeyboard and only continue running the code when the user clicks the Done InlineKeyboardButton.
reply_markup = InlineKeyboardMarkup(button_list)
update.message.reply_text('Please choose the headers:', reply_markup=reply_markup)
query = update.callback_query
query_list = query.data.split(" ")
while 'done' not in query_list:
    if query.data not in query_list:
        query_list.append(update.callback_query.data)
    else:
        pass
query.answer()
query.edit_message_text(text="Your input has been updated!")
This code uses the Python-Telegram-Bot package.
By using a ConversationHandler and a while loop, I am able to receive a dynamic number of inputs.
from telegram import InlineKeyboardMarkup, Update
from telegram.ext import CallbackContext, ConversationHandler

# button_list and build_menu are defined elsewhere in the bot

def select(update: Update, context: CallbackContext) -> int:
    global reply_markup
    reply_markup = InlineKeyboardMarkup(build_menu(button_list, n_cols=1))
    update.message.reply_text('Please choose the headers:', reply_markup=reply_markup)
    return SECOND

def display(update: Update, context: CallbackContext) -> int:
    query = update.callback_query
    global query_list
    query_list = query.data.split(" ")
    context.bot.send_message(chat_id=update.effective_chat.id, text=f"{query.data} is added as a column header")
    query.answer()
    query.edit_message_text(text="Select other headers or click Done to end the selection!", reply_markup=reply_markup)
    return THIRD

def display_2(update: Update, context: CallbackContext) -> int:
    query = update.callback_query
    while 'done' not in query_list:
        if query.data not in query_list and query.data != 'done':
            query_list.append(query.data)
            context.bot.send_message(chat_id=update.effective_chat.id, text=f"{query.data} is added as a column header")
        elif query.data == 'done':
            break
        else:
            text = f"{query.data} is already added as a column header, choose another header or click Done to end the selection!"
            context.bot.send_message(chat_id=update.effective_chat.id, text=text)
        del update.callback_query.data
        return THIRD
    query.edit_message_text(text="I have received your input! Type /workout to receive your workout for today!")
    return ConversationHandler.END
As seen in display_2, the while loop brings the user back to the THIRD state of the conversation, which is display_2 again. I am not sure whether there is a more efficient way of doing this, but it achieves my intent.
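For completeness, the handler wiring this relies on might look roughly like this (a sketch in python-telegram-bot v13 style; the entry command and the token placeholder are assumptions, and SECOND/THIRD are the module-level state constants used above):
from telegram.ext import (CallbackQueryHandler, CommandHandler,
                          ConversationHandler, Updater)

SECOND, THIRD = range(2)

conv_handler = ConversationHandler(
    entry_points=[CommandHandler('select', select)],  # hypothetical entry command
    states={
        SECOND: [CallbackQueryHandler(display)],
        THIRD: [CallbackQueryHandler(display_2)],
    },
    fallbacks=[],
)
updater = Updater('TOKEN')  # placeholder token
updater.dispatcher.add_handler(conv_handler)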

How to make this function where I can give it an argument or not

So I made this very small function. It is a bonehead-easy function, but frankly it's borderline for my capabilities; I'm learning. The function works as expected, but I would like to go further: I want to be able to either give it an argument (a username) and get the information for just that single user, or default to reporting all users. Is this possible without starting over from what I have so far?
I have poked around and seen some examples, but nothing that I can fit into my script, or at least nothing I can understand.
import boto3

iam = boto3.client('iam')

def user_group():
    for myusers in iam.list_users()['Users']:
        Group = iam.list_groups_for_user(UserName=myusers['UserName'])
        print("User: " + myusers['UserName'])
        for groupName in Group['Groups']:
            print("Group: " + groupName['GroupName'])
        print("----------------------------")

user_group()
I would like to be able to run this script in two fashions:
1) pass a 'username' argument to get the response for a particular user
2) default to getting the response for all users if no argument is given
This can be done by using an argument with a default value:
def user_group(user=None):
    if user is None:
        print("No user")
    else:
        print(user)

user_group()
user_group('some user')
prints
No user
some user
In your case you may want to write
def user_group(user=None):
    # wrap the single username in a dict so the loop can index ['UserName']
    # the same way in both cases
    users_to_list = iam.list_users()['Users'] if user is None else [{'UserName': user}]
    for myusers in users_to_list:
        ...
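Putting the pieces together, a complete version might look like this (a sketch, untested against IAM, but the calls match the original script):
import boto3

iam = boto3.client('iam')

def user_group(user=None):
    # with no argument, list every user; with a username, fake a one-element
    # list shaped like the list_users() response so the loop body is unchanged
    users_to_list = iam.list_users()['Users'] if user is None else [{'UserName': user}]
    for myuser in users_to_list:
        groups = iam.list_groups_for_user(UserName=myuser['UserName'])
        print("User: " + myuser['UserName'])
        for group in groups['Groups']:
            print("Group: " + group['GroupName'])
        print("----------------------------")

user_group()         # all users
user_group('alice')  # hypothetical single user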

incremental select using id from SQLite in Twisted

I am trying to select data from a table in SQLite, one row ONLY per call to the function, with the row incrementing on each call (self.count is initialized elsewhere and 'line' is irrelevant here). I am using an adbapi connection pool in Twisted to connect to the DB. Here is the code I have tried:
def queryBTData4(self,line):
    self.count=self.count+1
    uuId=self.count
    query="SELECT co2_data, patient_Id FROM btdata4 WHERE uid=:uid",{"uid": uuId}
    d = self.dbpool.runQuery(query)
    return d
This code works if I just set uid=1 or any other number in the DB (I used autoincrement for uid when I created the DB), but when I try to assign a value to uid (i.e. self.count via uuId) it reports that the operator has to be string or unicode (I have tried both, but it does not seem to help). I know the query statement above works fine in a previous program where I use a cursor and the execute command, but I cannot see why it does not work here. I have tried all sorts of combinations and searched for a solution but have not found anything that works (I have also tried the statement with brackets and in other forms).
Thanks for any help or advice.
Here is the entire code:
from twisted.internet import protocol, reactor
from twisted.protocols import basic
from twisted.enterprise import adbapi
import sqlite3, time

class ServerProtocol(basic.LineReceiver):
    def __init__(self):
        self.conn = sqlite3.connect('biomed2.db',check_same_thread=False)
        self.dbpool = adbapi.ConnectionPool("sqlite3" , 'biomed2.db', check_same_thread=False)

    def connectionMade(self):
        self.sendLine("conn made")
        factory = protocol.ClientFactory()
        factory.protocol = ClientProtocol
        factory.originator = self
        reactor.connectTCP('localhost', 1234, factory)

    def lineReceived(self, line):
        self._received = line
        self.insertBTData4(self._received)
        self.sendLine("line recvd")

    def forwardLine(self, recipient):
        recipient.sendLine(self._received)

    def insertBTData4(self,data):
        print "data in insert is",data
        chx=data
        PID=2
        device_ID=5
        query="INSERT INTO btdata4(co2_data,patient_Id, sensor_Id) VALUES ('%s','%s','%s')" % (chx, PID, device_ID)
        dF = self.dbpool.runQuery(query)
        return dF

class ClientProtocol(basic.LineReceiver):
    def __init__(self):
        self.conn = sqlite3.connect('biomed2.db',check_same_thread=False)
        self.dbpool = adbapi.ConnectionPool("sqlite3" , 'biomed2.db', check_same_thread=False)
        self.count=0

    def connectionMade(self):
        print "server-client made connection with client"
        self.factory.originator.forwardLine(self)
        #self.transport.loseConnection()

    def lineReceived(self, line):
        d=self.queryBTData4(self)
        d.addCallbacks(self.sendData,self.printError )

    def queryBTData4(self,line):
        self.count=self.count+1
        query=("SELECT co2_data, patient_Id FROM btdata4 WHERE uid=:uid",{"uid": uuId})
        dF = self.dbpool.runQuery(query)
        return dF

    def sendData(self,line):
        data=str(line)
        self.sendLine(data)

    def printError(self,error):
        print "Got Error: %r" % error
        error.printTraceback()

def main():
    factory = protocol.ServerFactory()
    factory.protocol = ServerProtocol
    reactor.listenTCP(4321, factory)
    reactor.run()

if __name__ == '__main__':
    main()
The DB is created in another program, thus:
import sqlite3, time, string

conn = sqlite3.connect('biomed2.db')
c = conn.cursor()
c.execute('''CREATE TABLE btdata4
             (uid INTEGER PRIMARY KEY, co2_data integer, patient_Id integer, sensor_Id integer)''')
The main program takes data in on the server socket and inserts it into the DB. On the client socket side, data is removed from the DB one row at a time and sent to an external server. The program can also send data from the server side to the client side if required, but I am not doing so here at the moment.
In queryBTData4(), every time the function is called the count increments and I assign that value to uuId, which I then pass to the query. I have had this query statement working in a program that does not use the adbapi, but it does not seem to work here. I hope this is clear enough, but if not please let me know and I will try again.
EDIT:
I have modified the program to take one row from the DB at a time (see queryBTData() below) but have come across another problem.
def queryBTData4(self,line):
    self.count=self.count+1
    xuId= self.count
    #xuId=10
    return self.dbpool.runQuery("SELECT co2_data FROM btdata4 WHERE uid = ?",xuId)
    #return self.dbpool.runQuery("SELECT co2_data FROM btdata4 WHERE uid = 10")
When the count gets to 10 I get an error (posted below) stating: "Incorrect number of bindings supplied. The current statement uses 1, and there are 2 supplied".
I have tried setting xuId to 10 (see the commented-out line xuId=10), but I still get the same error. However, if I switch the return statements (to the commented-out return) I do get the correct row, with no error. I have tried converting xuId to unicode, but that makes no difference; I still get the same error. Basically, if I set uid in the return statement to 10 or more (the commented-out return) it works, but if I set uid to xuId (i.e. uid=?,xuId) in the first return, it only works when xuId is below 10. The API documentation, as far as I can tell, gives no clue as to why this occurs. (I have also disabled the insert into the DB to rule it out, and checked the SQLite limit, which is 999.)
Here are the errors I am getting when using the first return statement.
Got Error: <twisted.python.failure.Failure <class 'sqlite3.ProgrammingError'>>
Traceback (most recent call last):
  File "c:\python26\lib\threading.py", line 504, in __bootstrap
    self.__bootstrap_inner()
  File "c:\python26\lib\threading.py", line 532, in __bootstrap_inner
    self.run()
  File "c:\python26\lib\threading.py", line 484, in run
    self.__target(*self.__args, **self.__kwargs)
--- <exception caught here> ---
  File "c:\python26\lib\site-packages\twisted\python\threadpool.py", line 207, in _worker
    result = context.call(ctx, function, *args, **kwargs)
  File "c:\python26\lib\site-packages\twisted\python\context.py", line 118, in callWithContext
    return self.currentContext().callWithContext(ctx, func, *args, **kw)
  File "c:\python26\lib\site-packages\twisted\python\context.py", line 81, in callWithContext
    return func(*args,**kw)
  File "c:\python26\lib\site-packages\twisted\enterprise\adbapi.py", line 448, in _runInteraction
    result = interaction(trans, *args, **kw)
  File "c:\python26\lib\site-packages\twisted\enterprise\adbapi.py", line 462, in _runQuery
    trans.execute(*args, **kw)
sqlite3.ProgrammingError: Incorrect number of bindings supplied. The current statement uses 1, and there are 2 supplied.
Thanks.
Consider the API documentation for runQuery. Next, consider the difference between these three function calls:
c = a, b
f(a, b)
f((a, b))
f(c)
Finally, don't paraphrase error messages. Always quote them verbatim. Copy/paste whenever possible; make a note when you've manually transcribed them.
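To make the hint concrete (my reading of the adbapi API, where runQuery(query, params) forwards both arguments to cursor.execute): the bind values belong in a separate sequence argument, not inside a (query, params) tuple and not as a bare value. A bare string is itself a sequence, which also explains the count in the error: str(xuId) works for "1" through "9" (one character, one binding) and fails at "10" (two characters, two bindings). A sketch of the corrected call:
def queryBTData4(self, line):
    self.count = self.count + 1
    xuId = self.count
    # parameters go in a one-element tuple, as a second argument to runQuery
    return self.dbpool.runQuery(
        "SELECT co2_data FROM btdata4 WHERE uid = ?", (xuId,))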

"Update security settings" in portal_workflow triggers catalog metadata update unnecessarily?

Plone 3.3.5: we have a middle-sized Plone site and we'd like to update its workflows. Since this is a long-running process, we watched it closely and noticed something strange going on: our Archetypes accessors, which are not security related, were being called when hitting "Update security settings" in portal_workflow.
It looks like the culprit is the update_metadata=1 default setting in ZCatalog:
-> self.plone_log("treatmentToImagingHours: %s"%str(treatmentToImagingHours))
(Pdb) bt
/Users/moo/sits/parts/zope2/lib/python/ZServer/PubCore/ZServerPublisher.py(25)__init__()
-> response=b)
/Users/moo/sits/parts/zope2/lib/python/ZPublisher/Publish.py(401)publish_module()
-> environ, debug, request, response)
/Users/moo/sits/parts/zope2/lib/python/ZPublisher/Publish.py(202)publish_module_standard()
-> response = publish(request, module_name, after_list, debug=debug)
/Users/moo/sits/parts/zope2/lib/python/ZPublisher/Publish.py(119)publish()
-> request, bind=1)
/Users/moo/sits/parts/zope2/lib/python/ZPublisher/mapply.py(88)mapply()
-> if debug is not None: return debug(object,args,context)
/Users/moo/sits/parts/zope2/lib/python/ZPublisher/Publish.py(42)call_object()
-> result=apply(object,args) # Type s<cr> to step into published object.
<string>(4)_facade()
/Users/moo/sits/parts/zope2/lib/python/AccessControl/requestmethod.py(64)_curried()
-> return callable(*args, **kw)
/Users/moo/sits/eggs/Products.CMFCore-2.1.2-py2.4.egg/Products/CMFCore/WorkflowTool.py(457)updateRoleMappings()
-> count = self._recursiveUpdateRoleMappings(portal, wfs)
/Users/moo/sits/eggs/Products.CMFCore-2.1.2-py2.4.egg/Products/CMFCore/WorkflowTool.py(609)_recursiveUpdateRoleMappings()
-> count = count + self._recursiveUpdateRoleMappings(v, wfs)
/Users/moo/sits/eggs/Products.CMFCore-2.1.2-py2.4.egg/Products/CMFCore/WorkflowTool.py(609)_recursiveUpdateRoleMappings()
-> count = count + self._recursiveUpdateRoleMappings(v, wfs)
/Users/moo/sits/eggs/Products.CMFCore-2.1.2-py2.4.egg/Products/CMFCore/WorkflowTool.py(609)_recursiveUpdateRoleMappings()
-> count = count + self._recursiveUpdateRoleMappings(v, wfs)
/Users/moo/sits/eggs/Products.CMFCore-2.1.2-py2.4.egg/Products/CMFCore/WorkflowTool.py(609)_recursiveUpdateRoleMappings()
-> count = count + self._recursiveUpdateRoleMappings(v, wfs)
/Users/moo/sits/eggs/Products.CMFCore-2.1.2-py2.4.egg/Products/CMFCore/WorkflowTool.py(609)_recursiveUpdateRoleMappings()
-> count = count + self._recursiveUpdateRoleMappings(v, wfs)
/Users/moo/sits/eggs/Products.CMFCore-2.1.2-py2.4.egg/Products/CMFCore/WorkflowTool.py(600)_recursiveUpdateRoleMappings()
-> ob.reindexObject(idxs=['allowedRolesAndUsers'])
/Users/moo/sits/eggs/Products.Archetypes-1.5.11-py2.4.egg/Products/Archetypes/CatalogMultiplex.py(115)reindexObject()
-> c.catalog_object(self, url, idxs=lst)
/Users/moo/sits/eggs/Plone-3.3rc2-py2.4.egg/Products/CMFPlone/CatalogTool.py(417)catalog_object()
-> update_metadata, pghandler=pghandler)
/Users/moo/sits/parts/zope2/lib/python/Products/ZCatalog/ZCatalog.py(536)catalog_object()
-> update_metadata=update_metadata)
/Users/moo/sits/parts/zope2/lib/python/Products/ZCatalog/Catalog.py(348)catalogObject()
-> self.updateMetadata(object, uid)
/Users/moo/sits/parts/zope2/lib/python/Products/ZCatalog/Catalog.py(277)updateMetadata()
-> newDataRecord = self.recordify(object)
/Users/moo/sits/parts/zope2/lib/python/Products/ZCatalog/Catalog.py(417)recordify()
-> if(attr is not MV and safe_callable(attr)): attr=attr()
/Users/moo/sits/products/SitsPatient/content/SitsPatient.py(2452)outSichECASS()
portal_workflow calls ob.reindexObject(idxs=['allowedRolesAndUsers']), but this triggers a refresh of all catalog metadata as well.
1) Is this normal behavior?
2) Is this desired behavior?
3) Can I turn update_metadata off to speed up the process without breaking anything? Does portal security rely on metadata in any point?
Yes, this is normal behaviour. The catalog stores a subset of information that an object provides as a cache, so you can render pages with just catalog results without having to wake up the original objects. This includes the current workflow state for an object.
When reindexing, the catalog must update the metadata too, as it has no means of determining if that data has changed or not.
In this particular process, you cannot turn update_metadata off without patching; you'd have to do one of the following:
patch Products.ZCatalog.Catalog.catalogObject to switch off update_metadata there,
patch Products.Archetypes.CatalogMultiplex.CatalogMultiplex.reindexObject to call catalogObject with the update_metadata flag set to False, or
patch the workflow tool to call reindexObjectSecurity instead of reindexObject.
You'd also have to audit your catalog schema (metadata) columns to confirm that nothing in them will actually change when you update workflow security.
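As an illustration of the first option, a rough monkey-patch sketch (untested; it assumes the Zope 2 era Catalog.catalogObject signature, and should only stay in place for the duration of the workflow update):
# hypothetical patch module, e.g. imported from a custom product's __init__.py
from Products.ZCatalog.Catalog import Catalog

_orig_catalogObject = Catalog.catalogObject

def catalogObject_no_metadata(self, object, uid, threshold=None, idxs=None,
                              update_metadata=1):
    # force metadata updates off while "Update security settings" runs
    return _orig_catalogObject(self, object, uid, threshold=threshold,
                               idxs=idxs, update_metadata=0)

Catalog.catalogObject = catalogObject_no_metadata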
