I am trying to run an app with a basic template that accepts the fields and saves them. My model looks like this:
from django.db import models

# Create your models here.
class Book(models.Model):
    """docstring for MyApp"""
    title = models.CharField(max_length=200)
    abstract = models.TextField()
    pub_date = models.DateTimeField('date published')
    publisher = models.CharField(max_length=200)

    def __unicode__(self):
        return self.title

class Author(models.Model):
    name = models.CharField(max_length=200)
    sci = 'Sci'
    arts = 'Arts'
    engg = 'Engg'
    mgmt = 'mgmt'
    none = ' '
    Choice_In_Intrest = (
        (sci, 'Science'),
        (arts, 'Arts'),
        (engg, 'Engineering'),
        (mgmt, 'Management'),
    )
    intrest = models.CharField(max_length=4, choices=Choice_In_Intrest, default=none)
    book = models.ForeignKey(Book, null=True)
urls.py:
from django.conf.urls import include, url
from django.contrib import admin
urlpatterns = [
    # url(r'^admin/', include(admin.site.urls)),
    url(r'^create/$', 'myBook.views.insertBook'),
    url(r'^all/$', 'myBook.views.books'),
    url(r'^get/(?P<book_id>\d+)/$', 'myBook.views.book'),
    url(r'^addAuthor/(?P<book_id>\d+)/$', 'myBook.views.addAuthor'),
]
forms.py:
from django import forms
from models import Book, Author
class BookForm(forms.ModelForm):
    class Meta:
        model = Book
        fields = '__all__'

class AuthorForm(forms.ModelForm):
    class Meta:
        model = Author
        fields = '__all__'
insert_book.html:
{% block content %}
<h2>Add Book</h2>
<form action='/create/' method="POST">{% csrf_token %}
    <ul>
        {{ form.as_ul }}
    </ul>
    <input type="submit" name="submit" value="Add Book">
</form>
{% endblock %}
When I run the server, the HTML page is displayed, but when I click the Add Book button, it shows the following error:
IntegrityError at /create/
NOT NULL constraint failed: myBook_book.author_id
Request Method: POST
Request URL: http://127.0.0.1:8080/create/
Django Version: 1.8.5
Exception Type: IntegrityError
Exception Value:
NOT NULL constraint failed: myBook_book.author_id
Exception Location: /home/rizwan/django-rizwan/local/lib/python2.7/site-packages/Django-1.8.5-py2.7.egg/django/db/backends/sqlite3/base.py in execute, line 318
Python Executable: /home/rizwan/django-rizwan/bin/python
Python Version: 2.7.6
Python Path:
['/home/rizwan/projects/bookInfo',
'/home/rizwan/django-rizwan/local/lib/python2.7/site-packages/Django-1.8.5-py2.7.egg',
'/home/rizwan/django-rizwan/lib/python2.7/site-packages/Django-1.8.5-py2.7.egg',
'/home/rizwan/django-rizwan/lib/python2.7',
'/home/rizwan/django-rizwan/lib/python2.7/plat-x86_64-linux-gnu',
'/home/rizwan/django-rizwan/lib/python2.7/lib-tk',
'/home/rizwan/django-rizwan/lib/python2.7/lib-old',
'/home/rizwan/django-rizwan/lib/python2.7/lib-dynload',
'/usr/lib/python2.7',
'/usr/lib/python2.7/plat-x86_64-linux-gnu',
'/usr/lib/python2.7/lib-tk',
'/home/rizwan/django-rizwan/local/lib/python2.7/site-packages',
'/home/rizwan/django-rizwan/lib/python2.7/site-packages']
Server time: Wed, 4 Nov 2015 11:12:00 +0000
I haven't defined author_id anywhere, so what caused this?
I am using SQLite 3.
id fields are generated automatically by Django models, and they are created as NOT NULL primary key columns. In your case the Book table (myBook_book) still has an author_id foreign-key column that is required but never set: the form doesn't supply an author, so the database tries to store NULL in that column and the NOT NULL constraint fails. You can fix this by setting null=True on the field definition in the model, by giving the field a default, or by adding an author field to your form, and then running migrations so the database schema matches the model.
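For illustration, a minimal sketch of the nullable-FK fix, assuming the Book model still carries an author foreign key as the error message suggests (the field name author is inferred from the author_id column, not from the posted code):

# models.py -- hypothetical fix: make the foreign key optional so the form may omit it
class Book(models.Model):
    # ... existing fields ...
    # null=True lets the database store a row without an author;
    # blank=True lets ModelForm validation accept an empty author field.
    author = models.ForeignKey('Author', null=True, blank=True)

After changing the model, run python manage.py makemigrations and python manage.py migrate so the SQLite schema matches the model.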
How do I set up my Hydra config to accept a custom enum? Specifically, I followed the Structured Config Schema tutorial.
I have a dataclass config:
@dataclass_validate
@dataclass
class CustomConfig:
    custom_enum: CustomEnum
With the custom enum:
class CustomEnum(str, Enum):
    ENUM1 = "enum1"
    ENUM2 = "enum2"
Error from running python my_app.py
Error merging 'data/config' with schema
Invalid value 'enum1', expected one of [ENUM1, ENUM2]
full_key: custom_enum
object_type=CustomConfig
Where my_app.py is just:
cs = ConfigStore.instance()
cs.store(name="base_config", node=Config)
cs.store(group="data", name="config", node=CustomConfig)

@hydra.main(config_path=".", config_name="config")
def setup_config(cfg: Config) -> None:
    print(OmegaConf.to_yaml(cfg))
And the config in data/config.yaml is just
custom_enum: enum1
Note the error message: Invalid value 'enum1', expected one of [ENUM1, ENUM2].
That is, in your data/config.yaml file you should use ENUM1 instead of enum1.
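For illustration, the corrected data/config.yaml would then read as follows (OmegaConf matches enum fields by member name, not by the string value assigned to the member):

custom_enum: ENUM1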
I'm trying to pass a parameter from Google Composer into a Dataflow template in the following way, but it does not work.
# composer code
trigger_dataflow = DataflowTemplateOperator(
    task_id="trigger_dataflow",
    template="gs://mybucket/my_template",
    dag=dag,
    job_name='appsflyer_events_daily',
    parameters={
        "input": 'gs://my_bucket/' + "{{ ds }}" + "/*.gz"
    }
)
# template code
class UserOptions(PipelineOptions):
    @classmethod
    def _add_argparse_args(cls, parser):
        parser.add_value_provider_argument(
            '--input',
            default='gs://my_bucket/*.gz',
            help='path of input file')

def main():
    pipeline_options = PipelineOptions()
    user_options = pipeline_options.view_as(UserOptions)
    p = beam.Pipeline(options=pipeline_options)
    lines = (
        p
        | MatchFiles(user_options.input)
    )
You can pass it like the following:
DataflowTemplateOperator(
    task_id="task1",
    template=get_variable_value("template"),
    on_failure_callback=update_job_message,
    parameters={
        "fileBucket": get_variable_value("file_bucket"),
        "basePath": get_variable_value("path_input"),
        "Day": "{{ json.loads(ti.xcom_pull(key=run_id))['day'] }}",
    },
)
We are using Java, and in our Dataflow jobs we have an options class with getters and setters like the following:
public interface MyOptions extends CommonOptions {
    @Description("The output bucket")
    @Validation.Required
    ValueProvider<String> getFileBucket();

    void setFileBucket(ValueProvider<String> value);
}
We need to create a template for these Dataflow jobs, and that template will be triggered by the Composer DAG.
Moving from a Dataflow classic template to a flex template fixed the issue.
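For illustration, a rough sketch of how the template code could read the option once it runs as a flex template: flex templates build the pipeline graph at launch time, so a plain argument (instead of a ValueProvider) is usually sufficient. The names mirror the question; treat this as a sketch of the idea, not a definitive implementation.

import apache_beam as beam
from apache_beam.io.fileio import MatchFiles
from apache_beam.options.pipeline_options import PipelineOptions


class UserOptions(PipelineOptions):
    @classmethod
    def _add_argparse_args(cls, parser):
        # A plain argument is enough for a flex template, because the graph
        # is constructed when the job is launched, not when the template is built.
        parser.add_argument(
            '--input',
            default='gs://my_bucket/*.gz',
            help='path of input file')


def main():
    pipeline_options = PipelineOptions()
    user_options = pipeline_options.view_as(UserOptions)
    with beam.Pipeline(options=pipeline_options) as p:
        _ = p | MatchFiles(user_options.input)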
I am using H2O DAI 1.9.0.6. I am trying to load a custom recipe (a BERT pretrained model via a custom recipe) in the Expert Settings, uploading a local file. However, the upload is not happening: no error, no progress, nothing. Afterwards I am not able to see this model under the RECIPES tab.
I took the sample recipe from the URL below and modified it for my needs. Thanks to the person who created this recipe.
https://github.com/h2oai/driverlessai-recipes/blob/master/models/nlp/portuguese_bert.py
Custom Recipe
import os
import shutil
from urllib.parse import urlparse
import requests
from h2oaicore.models import TextBERTModel, CustomModel
from h2oaicore.systemutils import make_experiment_logger, temporary_files_path, atomic_move, loggerinfo


def is_url(url):
    try:
        result = urlparse(url)
        return all([result.scheme, result.netloc, result.path])
    except:
        return False
def maybe_download_language_model(logger,
                                  save_directory,
                                  model_link,
                                  config_link,
                                  vocab_link):
    model_name = "pytorch_model.bin"
    if isinstance(model_link, str):
        model_name = model_link.split('/')[-1]
        if '.bin' not in model_name:
            model_name = "pytorch_model.bin"
    maybe_download(url=config_link,
                   dest=os.path.join(save_directory, "config.json"),
                   logger=logger)
    maybe_download(url=vocab_link,
                   dest=os.path.join(save_directory, "vocab.txt"),
                   logger=logger)
    maybe_download(url=model_link,
                   dest=os.path.join(save_directory, model_name),
                   logger=logger)
def maybe_download(url, dest, logger=None):
    if not is_url(url):
        loggerinfo(logger, f"{url} is not a valid URL.")
        return
    dest_tmp = dest + ".tmp"
    if os.path.exists(dest):
        loggerinfo(logger, f"already downloaded {url} -> {dest}")
        return
    if os.path.exists(dest_tmp):
        loggerinfo(logger, f"Download has already started {url} -> {dest_tmp}. "
                           f"Delete {dest_tmp} to download the file once more.")
        return
    loggerinfo(logger, f"Downloading {url} -> {dest}")
    url_data = requests.get(url, stream=True)
    if url_data.status_code != requests.codes.ok:
        msg = "Cannot get url %s, code: %s, reason: %s" % (
            str(url), str(url_data.status_code), str(url_data.reason))
        raise requests.exceptions.RequestException(msg)
    url_data.raw.decode_content = True
    if not os.path.isdir(os.path.dirname(dest)):
        os.makedirs(os.path.dirname(dest), exist_ok=True)
    with open(dest_tmp, 'wb') as f:
        shutil.copyfileobj(url_data.raw, f)
    atomic_move(dest_tmp, dest)
def check_correct_name(custom_name):
    allowed_pretrained_models = ['bert', 'openai-gpt', 'gpt2', 'transfo-xl', 'xlnet', 'xlm-roberta',
                                 'xlm', 'roberta', 'distilbert', 'camembert', 'ctrl', 'albert']
    assert len([model_name for model_name in allowed_pretrained_models
                if model_name in custom_name]), f"{custom_name} needs to contain the name" \
                                                " of the pretrained model architecture (e.g. bert or xlnet) " \
                                                "to be able to process the model correctly."
class CustomBertModel(TextBERTModel, CustomModel):
    """
    Custom model class for using pretrained transformer models.
    The class inherits:
    - CustomModel, which really is just a tag. It's there to make sure DAI knows it's a custom model.
    - TextBERTModel, so that the custom model inherits all the properties and methods.
    Supported model architectures:
    'bert', 'openai-gpt', 'gpt2', 'transfo-xl', 'xlnet', 'xlm-roberta',
    'xlm', 'roberta', 'distilbert', 'camembert', 'ctrl', 'albert'
    How to use:
    - You have already downloaded the weights, the vocab and the config file:
        - Set _model_path as the folder where the weights, the vocab and the config file are stored.
        - Set _model_name according to the pretrained architecture (e.g. bert-base-uncased).
    - You want to download the weights, the vocab and the config file:
        - Set _model_link, _config_link and _vocab_link accordingly.
        - _model_path is the folder where the weights, the vocab and the config file will be saved.
        - Set _model_name according to the pretrained architecture (e.g. bert-base-uncased).
    - Important:
        _model_path needs to contain the name of the pretrained model architecture (e.g. bert or xlnet)
        to be able to load the model correctly.
    - Disable the genetic algorithm in the expert settings.
    """
    # _model_path is the full path to the directory where the weights, vocab and the config will be saved.
    _model_name = NotImplemented  # Will be used to create the MOJO
    _model_path = NotImplemented
    _model_link = NotImplemented
    _config_link = NotImplemented
    _vocab_link = NotImplemented
    _booster_str = "pytorch-custom"
    # Requirements for MOJO creation:
    # _model_name needs to be one of
    # bert-base-uncased, bert-base-multilingual-cased, xlnet-base-cased, roberta-base, distilbert-base-uncased
    # vocab.txt needs to be the same as vocab.txt used in _model_name (no custom vocabulary yet).
    _mojo = False

    @staticmethod
    def is_enabled():
        return False  # Abstract base model should not show up in models.

    def _set_model_name(self, language_detected):
        self.model_path = self.__class__._model_path
        self.model_name = self.__class__._model_name
        check_correct_name(self.model_path)
        check_correct_name(self.model_name)

    def fit(self, X, y, sample_weight=None, eval_set=None, sample_weight_eval_set=None, **kwargs):
        logger = None
        if self.context and self.context.experiment_id:
            logger = make_experiment_logger(experiment_id=self.context.experiment_id, tmp_dir=self.context.tmp_dir,
                                            experiment_tmp_dir=self.context.experiment_tmp_dir)
        maybe_download_language_model(logger,
                                      save_directory=self.__class__._model_path,
                                      model_link=self.__class__._model_link,
                                      config_link=self.__class__._config_link,
                                      vocab_link=self.__class__._vocab_link)
        super().fit(X, y, sample_weight, eval_set, sample_weight_eval_set, **kwargs)
class GermanBertModel(CustomBertModel):
    _model_name = "bert-base-german-dbmdz-uncased"
    _model_path = os.path.join(temporary_files_path, "german_bert_language_model/")
    _model_link = "https://huggingface.co/bert-base-german-dbmdz-uncased/resolve/main/pytorch_model.bin"
    _config_link = "https://huggingface.co/bert-base-german-dbmdz-uncased/resolve/main/config.json"
    _vocab_link = "https://huggingface.co/bert-base-german-dbmdz-uncased/resolve/main/vocab.txt"
    _mojo = True

    @staticmethod
    def is_enabled():
        return True
Check that your custom recipe has is_enabled() returning True.
def is_enabled():
    return True
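For illustration, in a concrete subclass (mirroring GermanBertModel above) the override is a static method; the class name below is hypothetical:

class MyCustomBertModel(CustomBertModel):
    # ... _model_name, _model_path, _model_link, _config_link, _vocab_link ...

    @staticmethod
    def is_enabled():
        return True  # per the abstract base class above, recipes returning False are hidden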
I have an interface:
class ISomething(Interface):
    something = schema.Dict(
        title=u"Something",
        description=u"Define something.",
        key_type=schema.TextLine(title=u"Some Title"),
        value_type=schema.Text(title=u"Some Text"))
used to create a form that saves values in the registry (ControlPanelFormWrapper, RegistryEditForm).
In registry.xml:
<record name="something">
    <field type="plone.registry.field.Dict">
        <title>Something</title>
        <key_type type="plone.registry.field.TextLine" />
        <value_type type="plone.registry.field.Text" />
    </field>
</record>
It's working: I can add key-value items {'Some Title': 'Some Text'}.
I need to modify my form to have multiple fields instead of Some Text, while keeping the Dict. Example:
{'Some Title': {
    'field_1': 'Value 1',
    'field_2': 'Value 2'
    }
}
I expect this to work then:
registry = getUtility(IRegistry)
reg_something = registry.get("something")
print reg_something['Some Title']['field_1']
>>> Value 1
So, how do I change my interface and registry record to update the form in this way?
This is described in an article from Mark van Lent:
https://www.vlent.nl/weblog/2011/09/07/dict-list-value-ploneappregistry/
Adjust the registry.xml accordingly and exchange the record name with yours:
<record name="my.package.example">
    <field type="plone.registry.field.Dict">
        <title>Verification filesnames</title>
        <key_type type="plone.registry.field.TextLine">
            <title>Key</title>
        </key_type>
        <value_type type="plone.registry.field.List">
            <title>Value list</title>
            <value_type type="plone.registry.field.TextLine">
                <title>Values</title>
            </value_type>
        </value_type>
    </field>
    <value purge="false" />
</record>
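For illustration, the matching interface field might then look roughly like this, following the article's Dict-of-List approach (a sketch based on the field names from the question, not the article's exact code):

from zope import schema
from zope.interface import Interface


class ISomething(Interface):
    something = schema.Dict(
        title=u"Something",
        description=u"Define something.",
        key_type=schema.TextLine(title=u"Some Title"),
        # a list of text lines instead of a single Text value,
        # mirroring the Dict/List record above
        value_type=schema.List(
            title=u"Value list",
            value_type=schema.TextLine(title=u"Values"),
        ),
    )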
See also this question, where Luca Fabbri and Gil Forcada each provide alternative approaches that might be real time-savers in the long term:
Plone- How can I create a control panel for a record in registry that is a dictionary type?
registry.xml in my default profile (imported with an upgrade step):
<registry>
    <records interface="my.package.something.ISomethingItems">
        <record name="mypackage_multiplesomething">
            <field type="plone.registry.field.List">
                <title>Something Items</title>
                <value_type type="collective.z3cform.datagridfield.DictRow">
                    <schema>my.package.something.ISomething</schema>
                </value_type>
            </field>
        </record>
    </records>
</registry>
In something.py just define the interfaces:
from collective.z3cform.datagridfield import BlockDataGridFieldFactory
from collective.z3cform.datagridfield.registry import DictRow
from plone import api
from plone.app.registry.browser.controlpanel import ControlPanelFormWrapper
from plone.app.registry.browser.controlpanel import RegistryEditForm
from plone.autoform import directives
from zope import schema
from zope.interface import Interface
from zope.interface import implementer
from zope.schema.interfaces import IVocabularyFactory
from zope.schema.vocabulary import SimpleTerm
from zope.schema.vocabulary import SimpleVocabulary


class ISomething(Interface):
    id = schema.ASCIILine(
        title=u"Something ID",
        description=u"Some description."
    )
    text = schema.Text(
        title=u"A text field",
        description=u"Human readable text"
    )
    url = schema.URI(
        title=u"An URL",
        description=u"Don't forget http:// or https://"
    )


class ISomethingItems(Interface):
    # the field is the same used in registry.xml
    mypackage_multiplesomething = schema.List(
        title=u"Something Items",
        description=u"Define something items",
        value_type=DictRow(title=u"Something", schema=ISomething)
    )
    directives.widget(mypackage_multiplesomething=BlockDataGridFieldFactory)
Now we can have an edit form (in something.py):
class SomethingItemsEditForm(RegistryEditForm):
    schema = ISomethingItems
    label = u"Something items definition"


class SomethingItemsView(ControlPanelFormWrapper):
    """ Something items edit form """
    form = SomethingItemsEditForm
defined as a browser page (configure.zcml):
<browser:page
    name="something-items-settings"
    for="Products.CMFPlone.interfaces.IPloneSiteRoot"
    class=".something.SomethingItemsView"
    permission="cmf.ManagePortal"
    />
It is easy to get the values from the registry using the api:
>>> from plone import api
>>> reg_something_items = api.portal.get_registry_record(
...     'mypackage_multiplesomething', interface=ISomethingItems)
[{'id': 'some id', 'text': 'some text', 'url': 'http://something.com'}, {'id': 'some id other', 'text': 'some text other', 'url': 'http://something-other.com'}]
>>> item = reg_something_items[0]
{'id': 'some id', 'text': 'some text', 'url': 'http://something.com'}
>>> item['id']
some id
>>> item['text']
some text
>>> item['url']
http://something.com
If you added an uninstall profile to your product, it is a good idea to add a registry.xml to it:
<registry>
    <record name="my.package.something.ISomethingItems.mypackage_multiplesomething"
            delete="True" remove="True" />
</registry>
to be sure the registry will be clean after uninstall.
You can check the values you have in the registry at any time in SITE/portal_registry (Site Setup -> Configuration Registry).
I'm creating a fork of my Plone site (which has not been forked for a long time). This site has a special catalog object for user profiles (a special Archetypes-based object type) which is called portal_user_catalog:
$ bin/instance debug
>>> portal = app.Plone
>>> print [d for d in portal.objectMap() if d['meta_type'] == 'Plone Catalog Tool']
[{'meta_type': 'Plone Catalog Tool', 'id': 'portal_catalog'},
{'meta_type': 'Plone Catalog Tool', 'id': 'portal_user_catalog'}]
This looks reasonable because the user profiles don't have most of the indexes of the "normal" objects, but have a small set of own indexes.
Since I found no way to create this object from scratch, I exported it from the old site (as portal_user_catalog.zexp) and imported it into the new site. This seemed to work, but I can't add objects to the imported catalog, not even by explicitly calling the catalog_object method. Instead, the user profiles are added to the standard portal_catalog.
Now I found a module in my product which seems to serve the purpose (Products/myproduct/exportimport/catalog.py):
"""Catalog tool setup handlers.
$Id: catalog.py 77004 2007-06-24 08:57:54Z yuppie $
"""
from Products.GenericSetup.utils import exportObjects
from Products.GenericSetup.utils import importObjects
from Products.CMFCore.utils import getToolByName
from zope.component import queryMultiAdapter
from Products.GenericSetup.interfaces import IBody
def importCatalogTool(context):
"""Import catalog tool.
"""
site = context.getSite()
obj = getToolByName(site, 'portal_user_catalog')
parent_path=''
if obj and not obj():
importer = queryMultiAdapter((obj, context), IBody)
path = '%s%s' % (parent_path, obj.getId().replace(' ', '_'))
__traceback_info__ = path
print [importer]
if importer:
print importer.name
if importer.name:
path = '%s%s' % (parent_path, 'usercatalog')
print path
filename = '%s%s' % (path, importer.suffix)
print filename
body = context.readDataFile(filename)
if body is not None:
importer.filename = filename # for error reporting
importer.body = body
if getattr(obj, 'objectValues', False):
for sub in obj.objectValues():
importObjects(sub, path+'/', context)
def exportCatalogTool(context):
    """Export catalog tool.
    """
    site = context.getSite()
    obj = getToolByName(site, 'portal_user_catalog', None)
    if obj is None:
        logger = context.getLogger('catalog')
        logger.info('Nothing to export.')
        return
    parent_path = ''
    exporter = queryMultiAdapter((obj, context), IBody)
    path = '%s%s' % (parent_path, obj.getId().replace(' ', '_'))
    if exporter:
        if exporter.name:
            path = '%s%s' % (parent_path, 'usercatalog')
        filename = '%s%s' % (path, exporter.suffix)
        body = exporter.body
        if body is not None:
            context.writeDataFile(filename, body, exporter.mime_type)
    if getattr(obj, 'objectValues', False):
        for sub in obj.objectValues():
            exportObjects(sub, path + '/', context)
I tried to use it, but I have no idea how it is supposed to be done;
I can't call it TTW (should I try to publish the methods?!).
I tried it in a debug session:
$ bin/instance debug
>>> portal = app.Plone
>>> from Products.myproduct.exportimport.catalog import exportCatalogTool
>>> exportCatalogTool(portal)
Traceback (most recent call last):
File "<console>", line 1, in <module>
File ".../Products/myproduct/exportimport/catalog.py", line 58, in exportCatalogTool
site = context.getSite()
AttributeError: getSite
So, if this is the way to go, it looks like I need a "real" context.
Update: To get this context, I tried an External Method:
# -*- coding: utf-8 -*-
from Products.myproduct.exportimport.catalog import exportCatalogTool
from pdb import set_trace


def p(dt, dd):
    print '%-16s%s' % (dt + ':', dd)


def main(self):
    """
    Export the portal_user_catalog
    """
    g = globals()
    print '#' * 79
    for a in ('__package__', '__module__'):
        if a in g:
            p(a, g[a])
    p('self', self)
    set_trace()
    exportCatalogTool(self)
However, when I called it, I got the same <PloneSite at /Plone> object as the argument to the main function, which didn't have the getSite attribute. Perhaps my site doesn't call such External Methods correctly?
Or would I need to mention this module somehow in my configure.zcml, and if so, how? I searched my directory tree (especially below Products/myproduct/profiles) for exportimport, the module name, and several other strings, but I couldn't find anything; perhaps there was an integration once but it was broken ...
So how do I make this portal_user_catalog work?
Thank you!
Update: Another debug session suggests the source of the problem to be some transaction matter:
>>> portal = app.Plone
>>> puc = portal.portal_user_catalog
>>> puc._catalog()
[]
>>> profiles_folder = portal.some_folder_with_profiles
>>> for o in profiles_folder.objectValues():
... puc.catalog_object(o)
...
>>> puc._catalog()
[<Products.ZCatalog.Catalog.mybrains object at 0x69ff8d8>, ...]
This population of the portal_user_catalog doesn't persist; after terminating the debug session and starting the instance with fg, the brains are gone.
It looks like the problem was indeed related to transactions.
I had
import transaction
...

class Browser(BrowserView):
    ...
    def processNewUser(self):
        ...
        transaction.commit()
before, but apparently this was not good enough (and/or perhaps not done correctly).
Now I start the transaction explicitly with transaction.begin(), save intermediate results with transaction.savepoint(), abort the transaction explicitly with transaction.abort() in case of errors (try / except), and have exactly one transaction.commit() at the end, in the case of success. Everything seems to work.
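For illustration, a minimal sketch of that pattern (the function and variable names are hypothetical; only the begin/savepoint/abort/commit structure matters):

import transaction


def process_new_users(profiles_folder, puc):
    # puc is the portal_user_catalog tool
    transaction.begin()
    try:
        for obj in profiles_folder.objectValues():
            puc.catalog_object(obj)
            transaction.savepoint()  # keep intermediate results
    except Exception:
        transaction.abort()  # roll back everything on error
        raise
    else:
        transaction.commit()  # exactly one commit, on success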
Of course, Plone still doesn't take this non-standard catalog into account; when I "clear and rebuild" it, it is empty afterwards. But for my application it works well enough.