WTForms-Flask: organize FormField name and data into a dictionary

I've been stuck on this for some time now:
I have a form like so:
class attributes(Form):
    height = IntegerField('height')
    weight = IntegerField('weight')

class main(Form):
    John = FormField(attributes)
    Ted = FormField(attributes)
    David = FormField(attributes)
I want to iteratively build a dictionary that stores an identifying field label and the field data in Flask, without writing John_height = John.height.data for every FormField. The idea is to eventually pass the dictionary to a SQL statement for writing into a database, where each dictionary key matches a database column and the FormField data provides the values.
The dictionary should look something like this:
{'John_height': 170,
 'John_weight': 170,
 'Ted_height': 120,
 'Ted_weight': 190,
 'David_height': 150,
 'David_weight': 100}
Thank you in advance.

from wtforms import Form
from wtforms.fields import IntegerField, FormField

class Attributes(Form):
    height = IntegerField('height')
    weight = IntegerField('weight')
To build your forms iteratively you can do either of these:
def main(people=['John', 'Ted', 'David']):
    class Main(Form):
        pass
    for person in people:
        setattr(Main, person, FormField(Attributes))
    return Main()
or
class Main(Form):
    for person in ['John', 'Ted', 'David']:
        vars()[person] = FormField(Attributes)
    del person
Personally I prefer the second since it is a proper class structure, though it is less dynamic.
To build your dictionary you can then do the following:
obj = Main()
data = dict()
for field in obj:  # this works because Form defines __iter__
    for key in field.data.keys():
        data.update({field.name + '_' + key: field.data[key]})
print(data)
>>> {'John_height': None, 'John_weight': None, 'Ted_height': None, 'Ted_weight': None, 'David_height': None, 'David_weight': None}
The None values are due to empty form construction.
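From there, a minimal sketch of how the dictionary could be written to a database with a parameterized SQL statement (assuming sqlite3 and a hypothetical people_attributes table whose columns match the dictionary keys):
import sqlite3

conn = sqlite3.connect('example.db')  # hypothetical database file
columns = ', '.join(data.keys())
placeholders = ', '.join('?' for _ in data)
sql = 'INSERT INTO people_attributes ({}) VALUES ({})'.format(columns, placeholders)
conn.execute(sql, list(data.values()))  # values are bound via placeholders
conn.commit()
Interpolating the column names like this is only reasonable because the keys come from the form definition, not from user input.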

ValidationError when using pydantic Field

I ran this code after defining two classes, Blackboard and Table, based on BaseModel. Then I defined another class, ClassRoom, which takes two attributes: blackboard, defined to be a Blackboard, and tables, defined to be a list of Table objects.
from typing import List
from pydantic import BaseModel, Field

class Blackboard(BaseModel):
    size = 4000
    color: str = Field(..., alias='yanse',
                       description='the color of the blackboard, you can choose green or black.')

class Table(BaseModel):
    position: str

class ClassRoom(BaseModel):
    blackboard: Blackboard
    tables: List[Table]

m = ClassRoom(
    blackboard={'color': 'green'},
    tables=[{'position': 'first row, left 1'}, {'position': 'first row, left 2'}]
)
I got this error:
File "pydantic\main.py", line 342, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for ClassRoom
blackboard -> yanse
field required (type=value_error.missing)
I want to know how to use the Field class correctly. Thanks.
I expect to have no error.
You are using an alias for the color field in your schema while filling your data with Python dictionaries.
In this case, you should replace:
blackboard={'color': 'green'}
with:
blackboard={'yanse': 'green'}
The color field name is used when you work with Python schema objects, not when passing dictionaries.
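For completeness, the corrected construction from the question would then be:
m = ClassRoom(
    blackboard={'yanse': 'green'},
    tables=[{'position': 'first row, left 1'}, {'position': 'first row, left 2'}]
)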
In case you want to populate your Blackboard model using color, you can activate the allow_population_by_field_name option in the Blackboard Config as follows:
class Blackboard(BaseModel):
    size = 4000
    color: str = Field(..., alias='yanse',
                       description='the color of the blackboard, you can choose green or black.')

    class Config:
        allow_population_by_field_name = True
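With that option enabled, both of these should work (a small sketch, not from the original question):
blackboard_1 = Blackboard(yanse='green')   # populate via the alias
blackboard_2 = Blackboard(color='green')   # populate via the field name, now allowed
m = ClassRoom(blackboard=blackboard_2, tables=[{'position': 'first row, left 1'}])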

Is there a way to save nested entities in gcloud-python?

I'm trying to save an object into Cloud Datastore; the object contains a dictionary as a property value:
client = datastore.Client(project_id)
key = client.key('Config', 'config', 'Environment', 'env_name')
env = datastore.entity.Entity(key)
env['prop1'] = dict(foo='bar')
client.put(env)
but it raises
ValueError: Unknown protobuf attr type
Although I'm able to do so using gcloud-node.
Is it possible to save a compound object using gcloud-python?
It sounds like you're interested in storing an embedded entity, which I believe is what gcloud-node does automagically.
I think you can do this by setting the field (prop1) to a datastore.Entity containing a sub-property (foo) set to 'bar'.
client = datastore.Client(project_id)
key = client.key('Config', 'config', 'Environment', 'env_name')
env = datastore.Entity(key)
env['prop1'] = datastore.Entity(key=client.key('EmbeddedKind'))
env['prop1']['foo'] = 'bar'
client.put(env)
When you get this back, it'll look like...
>>> client.get(env.key)
<Entity[{'kind': u'Config', 'name': u'config'}, {'kind': u'Environment', 'name': u'env_name'}] {u'prop1': <Entity[{'kind': u'EmbeddedKind'}] {u'foo': 'bar'}>}>

Transform bag of key-value tuples to map in Apache Pig

I am new to Pig and I want to convert a bag of tuples to a map with specific value in each tuple as key. Basically I want to change:
{(id1, value1),(id2, value2), ...} into [id1#value1, id2#value2]
I've been looking around online for a while, but I can't seem to find a solution. I've tried:
bigQMap = FOREACH bigQFields GENERATE TOMAP(queryId, queryStart);
but I end up with a bag of maps (e.g. {[id1#value1], [id2#value2], ...}), which is not what I want. How can I build up a map out of a bag of key-value tuples?
Below is the specific script I'm trying to run, in case it's relevant
rawlines = LOAD '...' USING PigStorage('`');
bigQFields = FOREACH bigQLogs GENERATE GFV(*, 'queryId') AS queryId,
                                       GFV(*, 'queryStart') AS queryStart;
bigQMap = ?? how to make a map with queryId as key and queryStart as value ?? ;
TOMAP takes a series of pairs and converts them into a map, so it is meant to be used like this:
-- Schema: A:{foo:chararray, bar:int, bing:chararray, bang:int}
-- Data: (John, 27, Joe, 30)
B = FOREACH A GENERATE TOMAP(foo, bar, bing, bang) AS m ;
-- Schema: B:{m: map[]}
-- Data: (John#27,Joe#30)
So as you can see, the syntax does not support converting a bag to a map. As far as I know there is no way to convert a bag in the format you have to a map in pure Pig. However, you can definitely write a Java UDF to do this.
NOTE: I'm not too experienced with Java, so this UDF can easily be improved on (adding exception handling, deciding what happens if a key is added twice, etc.). However, it does accomplish what you need it to.
package myudfs;

import java.io.IOException;
import org.apache.pig.EvalFunc;
import java.util.Map;
import java.util.HashMap;
import java.util.Iterator;
import org.apache.pig.data.Tuple;
import org.apache.pig.data.DataBag;

public class ConvertToMap extends EvalFunc<Map>
{
    public Map exec(Tuple input) throws IOException {
        DataBag values = (DataBag) input.get(0);
        Map<Object, Object> m = new HashMap<Object, Object>();
        for (Iterator<Tuple> it = values.iterator(); it.hasNext();) {
            Tuple t = it.next();
            m.put(t.get(0), t.get(1));
        }
        return m;
    }
}
Once you compile the class into a jar, it can be used like this:
REGISTER myudfs.jar ;
-- A is loading some sample data I made
A = LOAD 'foo.in' AS (foo:{T:(id:chararray, value:chararray)}) ;
B = FOREACH A GENERATE myudfs.ConvertToMap(foo) AS bar;
Contents of foo.in:
{(open,apache),(apache,hadoop)}
{(foo,bar),(bar,foo),(open,what)}
Output from B:
([open#apache,apache#hadoop])
([bar#foo,open#what,foo#bar])
Another approach is to use python to create the UDF:
myudfs.py
#!/usr/bin/python

@outputSchema("foo:map[]")
def BagtoMap(bag):
    d = {}
    for key, value in bag:
        d[key] = value
    return d
Which is used like this:
Register 'myudfs.py' using jython as myfuncs;
-- A is still just loading some of my test data
A = LOAD 'foo.in' AS (foo:{T:(key:chararray, value:chararray)}) ;
B = FOREACH A GENERATE myfuncs.BagtoMap(foo) ;
And produces the same output as the Java UDF.
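Once you have the map, individual values can be pulled out with Pig's # dereference operator, e.g. (a small usage sketch continuing from the relation B above; use the field alias bar instead of $0 in the Java-UDF version):
C = FOREACH B GENERATE $0#'open' ;
-- For the sample data this should yield: (apache) and (what)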
BONUS:
Since I don't like maps very much, here is a link explaining how the functionality of a map can be replicated with just key-value pairs. Since your key-value pairs are in a bag, you'll need to do the map-like operations in a nested FOREACH:
-- A is a schema that contains kv_pairs, a bag in the form {(key, value)}
B = FOREACH A {
    temp = FOREACH kv_pairs GENERATE (key == 'foo' ? value : NULL) ;
    -- Output is like: ({(),(thevalue),(),()})
    -- MAX will pull the maximum value from the filtered bag, which is
    -- value (the chararray) if the key matched. Otherwise it will return NULL.
    GENERATE MAX(temp) AS kv_pairs_filtered ;
}
I ran into the same situation so I submitted a patch that just got accepted: https://issues.apache.org/jira/browse/PIG-4638
This means that what you want is part of core Pig starting with 0.16.
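I haven't verified this on 0.16 myself, but if the patch does what the JIRA describes, the builtin should accept the bag directly, presumably something along the lines of the following (untested sketch against the sample relation A from the earlier answer; check the 0.16 docs for the exact signature):
B = FOREACH A GENERATE TOMAP(foo) AS bar;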

What's wrong with my filter query to figure out if a key is a member of a list(db.key) property?

I'm having trouble retrieving a filtered list from google app engine datastore (using python for server side). My data entity is defined as the following
class Course_Table(db.Model):
    course_name = db.StringProperty(required=True, indexed=True)
    ....
    head_tags_1 = db.ListProperty(db.Key)
So the head_tags_1 property is a list of keys (which are the keys to a different entity called Headings_1).
In the handler below I spin through my Course_Table entities to filter the courses that have a particular Headings_1 key as a member of the head_tags_1 property. However, it doesn't seem to retrieve anything, even though I know there is data there to fulfill the request: the log statements inside the loop over the query results never show up. Any ideas what I'm doing wrong?
def get(self, level_num, h_key):
    path = []
    if level_num == "1":
        q = Course_Table.all().filter("head_tags_1 =", h_key)
        for each in q:
            logging.info('going through courses with this heading name')
            logging.info("course name filtered is %s ", each.course_name)
MANY MANY THANK YOUS
I assume h_key is a key of Headings_1. Since head_tags_1 is a list, I believe what you need is the IN operator: https://developers.google.com/appengine/docs/python/datastore/queries
Note: your indentation inside the for loop does not seem correct.
My bad, apparently '=' on a list property already checks membership. Using = to check membership works for me; can you make sure h_key is really a datastore Key instance?
Here is my example; the first get() produces a result, whereas the second one does not:
import webapp2
from google.appengine.ext import db

class Greeting(db.Model):
    author = db.StringProperty()
    x = db.ListProperty(db.Key)

class C(db.Model):
    name = db.StringProperty()

class MainPage(webapp2.RequestHandler):
    def get(self):
        ckey = db.Key.from_path('C', 'abc')
        dkey = db.Key.from_path('C', 'def')
        ekey = db.Key.from_path('C', 'ghi')
        Greeting(author='xxx', x=[ckey, dkey]).put()
        x = Greeting.all().filter('x =', ckey).get()
        self.response.write(x and x.author or 'None')
        x = Greeting.all().filter('x =', ekey).get()
        self.response.write(x and x.author or 'None')

app = webapp2.WSGIApplication([('/', MainPage)],
                              debug=True)
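If h_key arrives as a URL-safe string from the route (which the handler signature suggests), one possible fix is to turn it back into a db.Key before filtering; a sketch, assuming the string in the URL was produced by str(key):
q = Course_Table.all().filter("head_tags_1 =", db.Key(h_key))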

SimpleTerm title not being set

I have a form with a SelectFieldWidget that is currently using a static vocabulary, which is basically this:
from zope.schema.vocabulary import SimpleVocabulary, SimpleTerm
primary_contacts = SimpleVocabulary([
    SimpleTerm(unicode(token), title=unicode(token.upper()), token=token) for token in [
        'one', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight', 'nine', 'ten',
    ]
])
The vocabulary is assigned to the field in the form schema:
form.widget(primary_contact_person=SelectFieldWidget)
primary_contact_person = schema.List(
    title=u'Nominate Primary Contact',
    required=False,
    value_type=schema.Choice(
        vocabulary=primary_contacts,
    )
)
The schema is then serialized using plone.supermodel and deserialized when needed by the form (this is for another requirement).
The form uses a custom, handwritten template, and I'm in the process of adding the TAL statements that generate the select field options. I had thought I could do this by referencing the widgets on the form, but when I do that I hit a problem:
(Pdb) self # break point in form
<Products.Five.metaclass.edit_metadata object at 0xc1ce450>
(Pdb) select = self.widgets['primary_contact_person']
(Pdb) first = [t for t in select.terms][0]
(Pdb) first.token
'one'
(Pdb) first.value
u'one'
(Pdb) first.title
(Pdb)
The title is None on the term when it's accessed through the widget. I've tried looking it up through the vocabulary:
(Pdb) select.terms.getTermByToken('one').title
(Pdb)
But again, it's None. However, it is there for terms in the original vocabulary object:
(Pdb) from my.package import primary_contacts
(Pdb) [t for t in primary_contacts][0].title
u'ONE'
So while I could use the source vocab object directly to provide the values the template needs, the plan is for this vocabulary to eventually be dynamic, at which point I would expect I'd need to interrogate the widget itself.
What am I doing wrong here, why is title not being defined?
The problem was with plone.supermodel. I should have mentioned more clearly that I'm using the serialized schema to produce the form, and I apologise for this.
Basically, plone.supermodel provides an export/import process, which can only deal with simple lists of values.
# line 263 in plone.supermodel.exportimport
term = SimpleTerm(token = encoded, value = value, title = value)
The solution was to use named vocabularies, which serializes the reference to the vocabulary rather than the vocabulary itself.
Sorry again for the lack of information that made this harder to debug.
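For reference, a named vocabulary amounts to registering a factory utility and pointing the field at its name instead of at the vocabulary object. A minimal sketch (the factory and the name here are made up, and the factory still has to be registered in ZCML as an IVocabularyFactory utility under that same name):
from zope.interface import provider
from zope.schema.interfaces import IVocabularyFactory
from zope.schema.vocabulary import SimpleVocabulary, SimpleTerm

@provider(IVocabularyFactory)
def primary_contacts_factory(context):
    # the terms could just as easily be built dynamically here
    return SimpleVocabulary([
        SimpleTerm(unicode(token), title=unicode(token.upper()), token=token)
        for token in ['one', 'two', 'three']
    ])
The field then references the vocabulary by name, e.g. schema.Choice(vocabulary='my.package.PrimaryContacts'), which is what plone.supermodel serializes instead of the terms themselves.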
