FastAPI SQLModel MultipleResultsFound: Multiple rows were found when exactly one was required

This is my delete function.
def delete_session(self, session_id: int, db):
    with Session(engine) as session:
        statement = select(db).where(db.session == session_id)
        results = session.exec(statement)
        sess = results.one()
        print("sess: ", sess)
        if not sess:
            raise HTTPException(status_code=404, detail="Session not found")
        session.delete(sess)
        session.commit()
        return {"Session Deleted": True}
I want to delete all records where session_id matches.
But it's throwing the following error:
MultipleResultsFound: Multiple rows were found when exactly one was required
How can I delete multiple rows at once?
I tried using
sess = results.all()
but it says
sqlalchemy.orm.exc.UnmappedInstanceError: Class 'builtins.list' is not mapped
Thanks

You are trying to delete several records at once, but session.delete() only accepts a single instance, not a list.
You are using results.one(), probably expecting it to narrow the results down to a single row. However, as the documentation explains, one() raises a MultipleResultsFound exception when the query returns more than one row, hence your error.
Your statement indeed matches several rows.
To delete all of them, do not use one(); instead, iterate over the results with a for loop and delete them one by one, as follows:
def delete_session(self, session_id: int, db):
    with Session(engine) as session:
        statement = select(db).where(db.session == session_id)
        results = session.exec(statement).all()
        for sess in results:
            session.delete(sess)
        session.commit()
        return {"Session Deleted": True}

Related

fastapi + sqlalchemy + pydantic → how to read data return to schema

I'm trying to use FastAPI, SQLAlchemy and Pydantic.
In the request body I have a schema with an optional field of type list named files (files: list[schemas.ImageBase]).
I need to read all the submitted data one by one, but it doesn't let me loop over the returned list.
This also happens when I return a query result, for example:
def get_setting(svalue: int, s_name: str):
    db = SessionLocal()
    query = db.query(models.Setting)\
        .filter(
            models.Setting.svalue == svalue,
            models.Setting.appuser == s_name
        ).all()
    return query
which I call in:
async def get_settings(svalue: int, name: str):
    values = crud.get_setting(svalue=svalue, s_name=name)
    return {"settings": values}
But I can't loop over values with a for loop.
Why? Do I have to set something, or am I using the query or Pydantic incorrectly?
I expect to get back a list or dictionary and be able to read the data.

Vertex in Python Gremlin not updating

Using Python Gremlin on the Neptune workbench, I have two functions:
The first adds a Vertex with a set of properties and returns a reference to the traversal.
The second adds further steps to that traversal.
For some reason, the first function's operations get persisted to the DB, but the second function's do not. Why is this?
Here are the two functions:
def add_v(v_type, name):
    tmp_id = get_id(f"{v_type}-{name}")
    result = g.addV(v_type).property('id', tmp_id).property('name', name)
    result.iterate()
    return result

def process_records(features):
    for i in features:
        v_type = i[0]
        name = i[1]
        v = add_v(v_type, name)
        if len(i) > 2:
            %debug
            props = i[2]
            for r in props:
                v.property(r[0], r[1]).iterate()
Your add_v function has already iterated the traversal. If you want to return the traversal from add_v in a way that lets you keep adding to it, remove the iterate() call.
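A minimal sketch of that change, reusing the g traversal source and get_id helper from the question (not tested against Neptune):

def add_v(v_type, name):
    tmp_id = get_id(f"{v_type}-{name}")
    # Return the un-iterated traversal so the caller can keep chaining steps onto it.
    return g.addV(v_type).property('id', tmp_id).property('name', name)

def process_records(features):
    for i in features:
        v = add_v(i[0], i[1])
        if len(i) > 2:
            for r in i[2]:
                v = v.property(r[0], r[1])
        # Iterate exactly once, after all properties have been added,
        # so the vertex and its properties are persisted together.
        v.iterate()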

Runtime error:dictionary changed size during iteration

I iterate through the items of a dictionary "var_dict".
As I iterate in a for loop, I need to update the dictionary.
I understand that is not possible and that it triggers the runtime error I experienced.
My question is: do I need to create a different dictionary to store the data? As it is now, I am trying to use the same dictionary with different keys.
I know the problem is related to iterating through the keys and values of a dictionary while attempting to change it. I want to know whether the best option in this case is to create a separate dictionary.
for k, v in var_dict.items():
    match = str(match)
    match = match.strip("[]")
    match = match.strip("''")
    result = [index for index, value in enumerate(v) if match in value]
    result = str(result)
    result = result.strip("[]")
    result = result.strip("'")
    # ====> If I print(var_dict) at this point, I have no error *********
    if result == "0":
        # A match with an interface on the RP PSE2 model was found; the interface position is on the PSE2 architecture
        print(f'PSE-2 Line cards:{v} Interfaces on PSE2:{entry} Interface PortID:{port_id}')
        port_id = int(port_id)
        print(port_id)
        if port_id >= 19:
            #print(f'interface:{entry} portID={port_id} CPU_POS={port_cpu_pos} REPLICATION=YES')
            if_info = [entry, 'PSE2=YES', port_id, port_cpu_pos, 'REPLICATION=YES']
            var_dict['IF_PSE2'].append(if_info)
            # ===> *** If I attempt to print var_dict at this point, I get the error: dictionary changed size during iteration
        else:
            #print(f'interface:{entry},portID={port_id} CPU_POS={port_cpu_pos} REPLICATION=NO')
            if_info = [entry, 'PSE2=YES', port_id, port_cpu_pos, 'REPLICATION=NO']
            var_dict['IF_PSE2'].append(if_info)
    else:
        # The interface is on a single PSE. No replication is applicable; just check the threshold between incoming and outgoing rate.
        if_info = [entry, 'PSE2=NO', int(port_id), port_cpu_pos, 'REPLICATION=NO']
        var_dict['IF_PSE1'].append(if_info)
I made a shallow copy, which let me iterate over the copy of the dictionary while making modifications to the original. Problem solved. Thanks.
(...)
temp_var_dict = var_dict.copy()
for k, v in temp_var_dict.items():
    (...)
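A small self-contained illustration of the same idea (the keys and values here are made up, purely to show the mechanism):

var_dict = {'slot_1': ['Gi0/1'], 'slot_2': ['Gi0/19']}

# Adding a key while iterating var_dict itself would raise
# "RuntimeError: dictionary changed size during iteration".
# Iterating over a shallow copy avoids that: the loop walks the copy,
# while the inserts go into the original dictionary.
for k, v in var_dict.copy().items():
    var_dict.setdefault('IF_PSE1', []).extend(v)

print(var_dict)  # {'slot_1': ['Gi0/1'], 'slot_2': ['Gi0/19'], 'IF_PSE1': ['Gi0/1', 'Gi0/19']}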

accumulator in pyspark with dict as global variable

Just for learning purposes, I tried to use a dictionary as a global variable via an accumulator. The add function works well, but when I ran the code and updated the dictionary inside the map function, it always came back empty.
Similar code that uses a list as the global variable works fine.
class DictParam(AccumulatorParam):
    def zero(self, value=""):
        return dict()

    def addInPlace(self, acc1, acc2):
        acc1.update(acc2)

if __name__ == "__main__":
    sc, sqlContext = init_spark("generate_score_summary", 40)
    rdd = sc.textFile('input')
    #print(rdd.take(5))
    dict1 = sc.accumulator({}, DictParam())

    def file_read(line):
        global dict1
        ls = re.split(',', line)
        dict1 += {ls[0]: ls[1]}
        return line

    rdd = rdd.map(lambda x: file_read(x)).cache()
    print(dict1)
For anyone who arrives at this thread looking for a dict accumulator for PySpark: the accepted solution does not solve the posed problem.
The issue is actually in the DictParam that was defined: it does not update the original dictionary. This works:
class DictParam(AccumulatorParam):
    def zero(self, value=""):
        return dict()

    def addInPlace(self, value1, value2):
        value1.update(value2)
        return value1
The original code was missing the return value.
I believe that print(dict1) simply gets executed before the rdd.map() does.
In Spark, there are two types of operations:
transformations, which describe the future computation,
and actions, which actually trigger the execution.
Accumulators are updated only when some action is executed:
Accumulators do not change the lazy evaluation model of Spark. If they
are being updated within an operation on an RDD, their value is only
updated once that RDD is computed as part of an action.
If you check out the end of this section of the docs, there is an example exactly like yours:
accum = sc.accumulator(0)

def g(x):
    accum.add(x)
    return f(x)

data.map(g)
# Here, accum is still 0 because no actions have caused the `map` to be computed.
So you would need to add some action, for instance:
rdd = rdd.map(lambda x: file_read(x)).cache() # transformation
foo = rdd.count() # action
print(dict1)
Please make sure to check the details of the various RDD functions and accumulator peculiarities, because they might affect the correctness of your result. (For instance, rdd.take(n) will by default only scan one partition, not the entire dataset.)
For accumulator updates performed inside actions only, their value is
only updated once that RDD is computed as part of an action
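Putting both fixes together, a minimal end-to-end sketch (assuming a local SparkContext, purely to demonstrate the behaviour):

from pyspark import SparkContext
from pyspark.accumulators import AccumulatorParam

class DictParam(AccumulatorParam):
    def zero(self, value=None):
        return {}

    def addInPlace(self, acc1, acc2):
        acc1.update(acc2)
        return acc1  # returning the merged dict is what the original code was missing

sc = SparkContext("local", "dict-accumulator-demo")
dict_acc = sc.accumulator({}, DictParam())

def file_read(kv):
    global dict_acc
    dict_acc += {kv[0]: kv[1]}  # += calls addInPlace under the hood
    return kv

rdd = sc.parallelize([("a", "1"), ("b", "2")]).map(file_read).cache()  # transformation only
rdd.count()            # action: only now does the map actually run
print(dict_acc.value)  # {'a': '1', 'b': '2'}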

SELECT COUNT(*) doesn't work in QML

I'm trying to get the number of records with QML LocalStorage, which uses SQLite. Consider this snippet:
function f() {
    var db = LocalStorage.openDatabaseSync(...)
    db.transaction(
        function(tx) {
            var b = tx.executeSql("SELECT * FROM t")
            console.log(b.rows.length)
            var c = tx.executeSql("SELECT COUNT(*) FROM t")
            console.log(JSON.stringify(c))
        }
    )
}
The output is:
qml: 3
qml: {"rowsAffected":0,"insertId":"","rows":{}}
What am I doing wrong that the SELECT COUNT(*) doesn't output anything?
EDIT: rows only seems to be empty for the second command. Calling
console.log(JSON.stringify(c.rows.item(0)))
gives
qml: {"COUNT(*)":3}
Two questions now:
Why is rows shown as empty?
How can I access the property inside c.rows.item(0)?
In order to visit the items, you have to use:
b.rows.item(i)
where i is the index of the item you want to get (in your first example, i belongs to [0, 1, 2] since you have 3 items; in the second one it is 0, so you can query it as c.rows.item(0)).
The rows field appearing empty is still a valid result: the items are not stored as enumerable properties of rows itself, you have to go through the item() method to reach them, and that method is most likely defined as non-enumerable, which is why JSON.stringify skips it (I cannot verify this in the Qt code at the moment). You can safely rely on the length property to know whether any rows were returned and then iterate over them. I did something like that in a project of mine and it works fine.
The properties inside item(0) have the same names given in the query. I suggest rewriting that query as:
select count(*) as cnt from t
Then, you can get the count as:
c.rows.item(0).cnt
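The same aliasing behaviour can be checked outside QML; a quick Python sqlite3 sketch, purely to illustrate how the alias becomes the column name:

import sqlite3

con = sqlite3.connect(":memory:")
con.row_factory = sqlite3.Row   # allows access by column name, like rows.item(0).cnt in QML
con.execute("CREATE TABLE t (x INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])

row = con.execute("SELECT COUNT(*) AS cnt FROM t").fetchone()
print(row["cnt"])               # 3; without the alias the key would be the literal string "COUNT(*)"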
