I've been trying to get the following Colab notebook to work, without success.
I was able to scrape the images I needed and (hopefully) resized them all to 256x256; a quick way to double-check that is sketched after the traceback below.
After connecting my drive and following all steps, I reached the training part.
There, I am using the default values with: resume_from = "ffhq256"
The process starts but ends after ~20 seconds with the following error:
Constructing networks...
Setting up TensorFlow plugin "fused_bias_act.cu": Compiling... Loading... Done.
Setting up TensorFlow plugin "upfirdn_2d.cu": Compiling... Loading... Done.
Resuming from "https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada/pretrained/transfer-learning-source-nets/ffhq-res256-mirror-paper256-noaug.pkl"
Downloading https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada/pretrained/transfer-learning-source-nets/ffhq-res256-mirror-paper256-noaug.pkl ... done
Traceback (most recent call last):
File "train.py", line 645, in <module>
main()
File "train.py", line 637, in main
run_training(**vars(args))
File "train.py", line 522, in run_training
training_loop.training_loop(**training_options)
File "/content/drive/MyDrive/colab-sg2-ada/stylegan2-ada/training/training_loop.py", line 129, in training_loop
G.copy_vars_from(rG)
File "/content/drive/MyDrive/colab-sg2-ada/stylegan2-ada/dnnlib/tflib/network.py", line 512, in copy_vars_from
self._components[name].copy_vars_from(src_comp)
File "/content/drive/MyDrive/colab-sg2-ada/stylegan2-ada/dnnlib/tflib/network.py", line 509, in copy_vars_from
self.copy_own_vars_from(src_net)
File "/content/drive/MyDrive/colab-sg2-ada/stylegan2-ada/dnnlib/tflib/network.py", line 482, in copy_own_vars_from
tfutil.set_vars({self._get_vars()[name]: value for name, value in value_dict.items() if name in self._get_vars()})
File "/content/drive/MyDrive/colab-sg2-ada/stylegan2-ada/dnnlib/tflib/tfutil.py", line 227, in set_vars
run(ops, feed_dict)
File "/content/drive/MyDrive/colab-sg2-ada/stylegan2-ada/dnnlib/tflib/tfutil.py", line 33, in run
return tf.get_default_session().run(*args, **kwargs)
File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/client/session.py", line 956, in run
run_metadata_ptr)
File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/client/session.py", line 1156, in _run
(np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (3, 3, 512, 256) for Tensor 'G_synthesis/64x64/Conv0_up/weight/new_value:0', which has shape '(3, 3, 512, 512)'
Any help would be greatly appreciated!
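In case it's relevant, here is a minimal sketch of how the "resized to 256x256" assumption can be double-checked before building the dataset (the folder path is just a placeholder; only Pillow is assumed):
# Sanity check: confirm every scraped image really is 256x256 RGB before
# running dataset_tool.py. The folder path below is a placeholder.
import os
from PIL import Image

folder = "/content/drive/MyDrive/dataset/images"  # placeholder
bad = []
for name in os.listdir(folder):
    if not name.lower().endswith((".png", ".jpg", ".jpeg")):
        continue
    with Image.open(os.path.join(folder, name)) as img:
        if img.size != (256, 256) or img.mode != "RGB":
            bad.append((name, img.size, img.mode))

print("non-conforming files:", len(bad))
print(bad[:10])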
I'm learning how to use DeepPavlov and can't figure out how to use it for NER with the Camembert (French) pre-trained model. My goal is to tag a short paragraph in French.
The DeepPavlov docs explicitly list the Camembert model as a viable transformer architecture. I tried to follow them as best as I could, but I keep getting this error when I try to build the model.
>>> ner_model = build_model('ner_ontonotes_bert_mult')
/home/philippe/.local/lib/python3.10/site-packages/torch/nn/init.py:405: UserWarning: Initializing zero-element tensors is a no-op
warnings.warn("Initializing zero-element tensors is a no-op")
Some weights of the model checkpoint at camembert-base were not used when initializing CamembertForTokenClassification: ['lm_head.dense.weight', 'lm_head.bias', 'lm_head.layer_norm.bias', 'lm_head.layer_norm.weight', 'lm_head.decoder.weight', 'lm_head.dense.bias']
- This IS expected if you are initializing CamembertForTokenClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing CamembertForTokenClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of CamembertForTokenClassification were not initialized from the model checkpoint at camembert-base and are newly initialized: ['classifier.weight', 'classifier.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
2023-01-29 09:58:13.308 ERROR in 'deeppavlov.core.common.params'['params'] at line 108: Exception in <class 'deeppavlov.models.torch_bert.torch_transformers_sequence_tagger.TorchTransformersSequenceTagger'>
Traceback (most recent call last):
File "/home/philippe/.local/lib/python3.10/site-packages/deeppavlov/core/common/params.py", line 102, in from_params
component = obj(**dict(config_params, **kwargs))
File "/home/philippe/.local/lib/python3.10/site-packages/deeppavlov/models/torch_bert/torch_transformers_sequence_tagger.py", line 182, in __init__
super().__init__(optimizer=optimizer,
File "/home/philippe/.local/lib/python3.10/site-packages/deeppavlov/core/models/torch_model.py", line 98, in __init__
self.load()
File "/home/philippe/.local/lib/python3.10/site-packages/deeppavlov/models/torch_bert/torch_transformers_sequence_tagger.py", line 295, in load
self.crf = CRF(self.n_classes).to(self.device)
File "/home/philippe/.local/lib/python3.10/site-packages/deeppavlov/models/torch_bert/crf.py", line 13, in __init__
super().__init__(num_tags=num_tags, batch_first=batch_first)
File "/home/philippe/.local/lib/python3.10/site-packages/torchcrf/__init__.py", line 40, in __init__
raise ValueError(f'invalid number of tags: {num_tags}')
ValueError: invalid number of tags: 0
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/philippe/.local/lib/python3.10/site-packages/deeppavlov/core/commands/infer.py", line 55, in build_model
component = from_params(component_config, mode=mode)
File "/home/philippe/.local/lib/python3.10/site-packages/deeppavlov/core/common/params.py", line 102, in from_params
component = obj(**dict(config_params, **kwargs))
File "/home/philippe/.local/lib/python3.10/site-packages/deeppavlov/models/torch_bert/torch_transformers_sequence_tagger.py", line 182, in __init__
super().__init__(optimizer=optimizer,
File "/home/philippe/.local/lib/python3.10/site-packages/deeppavlov/core/models/torch_model.py", line 98, in __init__
self.load()
File "/home/philippe/.local/lib/python3.10/site-packages/deeppavlov/models/torch_bert/torch_transformers_sequence_tagger.py", line 295, in load
self.crf = CRF(self.n_classes).to(self.device)
File "/home/philippe/.local/lib/python3.10/site-packages/deeppavlov/models/torch_bert/crf.py", line 13, in __init__
super().__init__(num_tags=num_tags, batch_first=batch_first)
File "/home/philippe/.local/lib/python3.10/site-packages/torchcrf/__init__.py", line 40, in __init__
raise ValueError(f'invalid number of tags: {num_tags}')
ValueError: invalid number of tags: 0
I downloaded the camembert-base model from https://huggingface.co/camembert-base and copied the files into the .deeppavlov/models/camembert-base directory.
Then I figured DeepPavlov's 'ner_ontonotes_bert_mult' model was the best fit for my use case, so I edited its config file and changed these lines in the metadata section at the end. The DeepPavlov docs ask to change the TRANSFORMER value, which I did, and I also changed MODEL_PATH so it points to the files I downloaded previously.
"variables": {
"ROOT_PATH": "~/.deeppavlov",
"DOWNLOADS_PATH": "{ROOT_PATH}/downloads",
"MODELS_PATH": "{ROOT_PATH}/models",
"TRANSFORMER": "camembert-base",
"MODEL_PATH": "{MODELS_PATH}/camembert-base"
},
I am aware that I should have copied the config file to a new one with a different name, but that should not be the problem here.
Then, in Python, I did the following:
from deeppavlov import configs, build_model
build_model('ner_ontonotes_bert_mult')
And then I get the error mentioned above. I am lost and don't know where to look next.
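For reference, the same edit can also be expressed programmatically instead of modifying the shipped config file. This is only a sketch: it uses DeepPavlov's read_json helper and the stock ner_ontonotes_bert_mult config, and simply mirrors the variables shown above.
# Load the stock config, apply the TRANSFORMER/MODEL_PATH change in memory,
# and build from the modified dict instead of editing the installed file.
from deeppavlov import build_model, configs
from deeppavlov.core.common.file import read_json

config = read_json(configs.ner.ner_ontonotes_bert_mult)
config["metadata"]["variables"]["TRANSFORMER"] = "camembert-base"
config["metadata"]["variables"]["MODEL_PATH"] = "{MODELS_PATH}/camembert-base"

ner_model = build_model(config)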
I'm trying to train AllenNLP's coreference model on a 16GB GPU, using this config file: https://github.com/allenai/allennlp-models/blob/main/training_config/coref/coref_spanbert_large.jsonnet
I created train, test, and dev files using this script: https://github.com/allenai/allennlp/blob/master/scripts/compile_coref_data.sh
I got CUDA out of memory almost instantly, so I tried changing "spans_per_word" and "max_antecedents" to lower values. With spans_per_word set to 0.1 instead of 0.4, I could run a bit longer, but not nearly a full epoch. Is a 16GB GPU not enough? Or are there other parameters I could try changing?
Traceback (most recent call last):
File "/home/ubuntu/anaconda3/envs/allennlp/bin/allennlp", line 8, in
sys.exit(run())
File "/home/ubuntu/anaconda3/envs/allennlp/lib/python3.7/site-packages/allennlp/main.py", line 34, in run
main(prog="allennlp")
File "/home/ubuntu/anaconda3/envs/allennlp/lib/python3.7/site-packages/allennlp/commands/init.py", line 119, in main
args.func(args)
File "/home/ubuntu/anaconda3/envs/allennlp/lib/python3.7/site-packages/allennlp/commands/train.py", line 119, in train_model_from_args
file_friendly_logging=args.file_friendly_logging,
File "/home/ubuntu/anaconda3/envs/allennlp/lib/python3.7/site-packages/allennlp/commands/train.py", line 178, in train_model_from_file
file_friendly_logging=file_friendly_logging,
File "/home/ubuntu/anaconda3/envs/allennlp/lib/python3.7/site-packages/allennlp/commands/train.py", line 242, in train_model
file_friendly_logging=file_friendly_logging,
File "/home/ubuntu/anaconda3/envs/allennlp/lib/python3.7/site-packages/allennlp/commands/train.py", line 466, in _train_worker
metrics = train_loop.run()
File "/home/ubuntu/anaconda3/envs/allennlp/lib/python3.7/site-packages/allennlp/commands/train.py", line 528, in run
return self.trainer.train()
File "/home/ubuntu/anaconda3/envs/allennlp/lib/python3.7/site-packages/allennlp/training/trainer.py", line 740, in train
metrics, epoch = self._try_train()
File "/home/ubuntu/anaconda3/envs/allennlp/lib/python3.7/site-packages/allennlp/training/trainer.py", line 772, in _try_train
train_metrics = self._train_epoch(epoch)
File "/home/ubuntu/anaconda3/envs/allennlp/lib/python3.7/site-packages/allennlp/training/trainer.py", line 523, in _train_epoch
loss.backward()
File "/home/ubuntu/anaconda3/envs/allennlp/lib/python3.7/site-packages/torch/tensor.py", line 245, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "/home/ubuntu/anaconda3/envs/allennlp/lib/python3.7/site-packages/torch/autograd/init.py", line 147, in backward
allow_unreachable=True, accumulate_grad=True) # allow_unreachable flag
RuntimeError: CUDA out of memory. Tried to allocate 1.33 GiB (GPU 0; 14.76 GiB total capacity; 11.69 GiB already allocated; 639.75 MiB free; 13.09 GiB reserved in total by PyTorch)
16GB is on the low end for that model.
When this model receives a lot of text, it splits the text into multiple shorter sequences of 512 word pieces each and runs them all at the same time. That way you end up with a lot of sequences in memory at once, even when the batch size is 1.
Try setting max_sentences to a lower value (it is set to 110 in that config) and see if that works.
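If it helps, the overrides can be applied without hand-editing the jsonnet. The sketch below loads the config, lowers the relevant parameters, and writes out a modified copy to train from. The parameter names (dataset_reader.max_sentences, model.spans_per_word, model.max_antecedents) are taken from coref_spanbert_large.jsonnet and worth double-checking against your copy; the data paths are placeholders.
# Sketch: produce a lower-memory variant of the coref config.
# The published config reads its data paths from COREF_*_DATA_PATH
# environment variables; the placeholder values just let parsing succeed.
import json
import os
from allennlp.common.params import Params

os.environ.setdefault("COREF_TRAIN_DATA_PATH", "/path/to/train.english.v4_gold_conll")
os.environ.setdefault("COREF_DEV_DATA_PATH", "/path/to/dev.english.v4_gold_conll")
os.environ.setdefault("COREF_TEST_DATA_PATH", "/path/to/test.english.v4_gold_conll")

config = Params.from_file("coref_spanbert_large.jsonnet").as_dict(quiet=True)
config["dataset_reader"]["max_sentences"] = 30   # 110 in the published config
config["model"]["spans_per_word"] = 0.2
config["model"]["max_antecedents"] = 30

with open("coref_spanbert_large_lowmem.json", "w") as f:
    json.dump(config, f, indent=2)

# Then train from the modified copy, e.g.:
#   allennlp train coref_spanbert_large_lowmem.json -s out/ --include-package allennlp_models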
I'm new to PyInstaller, and I'm getting this error (NotImplementedError: Can't perform this operation for unregistered loader type) when I try to import a file with my app.
The complete traceback is:
Exception in Tkinter callback
Traceback (most recent call last):
File "tkinter\__init__.py", line 1702, in __call__
File "BioRank.py", line 190, in load
File "site-packages\pandas\core\frame.py", line 710, in style
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
File "C:\Users\tizma\Anaconda3\lib\site-
packages\PyInstaller\loader\pyimod03_importers.py", line 631, in exec_module
exec(bytecode, module.__dict__)
File "site-packages\pandas\io\formats\style.py", line 50, in <module>
File "site-packages\pandas\io\formats\style.py", line 111, in Styler
File "site-packages\jinja2\environment.py", line 830, in get_template
File "site-packages\jinja2\environment.py", line 804, in _load_template
File "site-packages\jinja2\loaders.py", line 113, in load
File "site-packages\jinja2\loaders.py", line 234, in get_source
File "site-packages\pkg_resources\__init__.py", line 1396, in has_resource
File "site-packages\pkg_resources\__init__.py", line 1449, in _has
NotImplementedError: Can't perform this operation for unregistered loader type
I did some research and found out that PyInstaller does not support pkg_resources. Is there any workaround for this issue?
I have faced this problem using PyInstaller. The solution was to edit your-python-path\Lib\site-packages\pandas\io\formats\style.py:
Go to line 120 and change
template = env.get_template("html.tpl")
to
template = env.from_string("html.tpl")
Then try again.
My problem was that I was using a background color (pd.Series("background....)) and the PyInstaller-built app was failing on it, so after the change it works.
I experienced the exact same issue with PyInstaller not including pkg_resources. Rafael's answer above sorted it out for me, and it is much simpler than the other solutions I found for this issue. I needed to edit the style.py file. This solution also doesn't require any additional datas in the spec file, which is convenient.
Path:
your_path\venv\Lib\site-packages\pandas\io\formats\style.py
Before:
template = env.get_template("html.tpl")
After:
template = env.from_string("html.tpl")
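For anyone who prefers the "additional datas" route mentioned above instead of patching pandas, the missing piece is pandas' Jinja2 template directory (where html.tpl lives). Below is a small sketch to locate it and the destination path to bundle it under; the exact location is an assumption about your pandas install, so check that html.tpl is really there.
# Locate pandas' bundled templates so they can be shipped with the frozen
# app as data files instead of editing style.py.
import os
import pandas

src = os.path.join(os.path.dirname(pandas.__file__), "io", "formats", "templates")
dest = os.path.join("pandas", "io", "formats", "templates")
print(src, "->", dest)

# Use the pair either on the command line:
#   pyinstaller --add-data "SRC;DEST" BioRank.py   (";" on Windows, ":" elsewhere)
# or as an entry in the datas=[...] list passed to Analysis() in the .spec file.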
Summary: through-the-filesystem editing is not working for my Diazo theme, and Plone breaks.
Details:
I've created my first live Plone site with 4.3.2 and Diazo. You can see the live version at borogreen.org. I would like to keep editing the theme going forward.
My Ubuntu 12.04 LTS test server has only Plone 4.3.2 + Diazo + Dexterity (not used) + Static resource storage 1.0.2 enabled. For test purposes, I'm using the available sunrain theme.
I've placed the sunrain theme manually inside the /resources folder, as suggested in
http://developer.plone.org/reference_manuals/external/plone.app.theming/userguide.html#deploying-and-testing-themes
Trying to enable that theme in the Site Setup | Theming panel | Advanced, I set the path to the theme rules to
/++theme++sunrain/rules.xml
and the absolute path prefix to
/++theme++sunrain/
Plone does not recognize it: no theme gets enabled. Debug mode spits out the following errors:
2014-03-29 00:10:07 ERROR plone.subrequest Error handling subrequest to /++theme++sunrain/rules.xml
Traceback (most recent call last):
File "/home/plone/Plone/buildout-cache/eggs/plone.subrequest-1.6.7-py2.7.egg/plone/subrequest/__init__.py", line 116, in subrequest
traversed = request.traverse(path)
File "/home/plone/Plone/buildout-cache/eggs/Zope2-2.13.21-py2.7.egg/ZPublisher/BaseRequest.py", line 502, in traverse
subobject = self.traverseName(object, entry_name)
File "/home/plone/Plone/buildout-cache/eggs/Zope2-2.13.21-py2.7.egg/ZPublisher/BaseRequest.py", line 326, in traverseName
ob2 = namespaceLookup(ns, nm, ob, self)
File "/home/plone/Plone/buildout-cache/eggs/zope.traversing-3.13.2-py2.7.egg/zope/traversing/namespace.py", line 112, in namespaceLookup
return traverser.traverse(name, ())
File "/home/plone/Plone/buildout-cache/eggs/plone.resource-1.0.2-py2.7.egg/plone/resource/traversal.py", line 27, in traverse
raise NotFound
NotFound
2014-03-29 00:10:07 ERROR plone.transformchain Unexpected error whilst trying to apply transform chain
Traceback (most recent call last):
File "/home/plone/Plone/buildout-cache/eggs/plone.transformchain-1.0.3-py2.7.egg/plone/transformchain/transformer.py", line 48, in __call__
newResult = handler.transformIterable(result, encoding)
File "/home/plone/Plone/buildout-cache/eggs/plone.app.theming-1.1.1-py2.7.egg/plone/app/theming/transform.py", line 170, in transformIterable
transform = self.setupTransform(runtrace=runtrace)
File "/home/plone/Plone/buildout-cache/eggs/plone.app.theming-1.1.1-py2.7.egg/plone/app/theming/transform.py", line 108, in setupTransform
transform = compileThemeTransform(rules, absolutePrefix, readNetwork, parameterExpressions, runtrace=runtrace)
File "/home/plone/Plone/buildout-cache/eggs/plone.app.theming-1.1.1-py2.7.egg/plone/app/theming/utils.py", line 580, in compileThemeTransform
runtrace=runtrace,
File "/home/plone/Plone/buildout-cache/eggs/diazo-1.0.4-py2.7.egg/diazo/compiler.py", line 115, in compile_theme
read_network=read_network,
File "/home/plone/Plone/buildout-cache/eggs/diazo-1.0.4-py2.7.egg/diazo/rules.py", line 195, in process_rules
rules_doc = etree.parse(rules, parser=rules_parser)
File "lxml.etree.pyx", line 2957, in lxml.etree.parse (src/lxml/lxml.etree.c:56299)
File "parser.pxi", line 1526, in lxml.etree._parseDocument (src/lxml/lxml.etree.c:82331)
File "parser.pxi", line 1555, in lxml.etree._parseDocumentFromURL (src/lxml/lxml.etree.c:82624)
File "parser.pxi", line 1455, in lxml.etree._parseDocFromFile (src/lxml/lxml.etree.c:81663)
File "parser.pxi", line 1002, in lxml.etree._BaseParser._parseDocFromFile (src/lxml/lxml.etree.c:78623)
File "parser.pxi", line 569, in lxml.etree._ParserContext._handleParseResultDoc (src/lxml/lxml.etree.c:74567)
File "parser.pxi", line 650, in lxml.etree._handleParseResult (src/lxml/lxml.etree.c:75458)
File "parser.pxi", line 588, in lxml.etree._raiseParseError (src/lxml/lxml.etree.c:74760)
IOError: Error reading file '/++theme++sunrain/rules.xml': failed to load external entity "/++theme++sunrain/rules.xml"
What's wrong here?
PS: of course I can upload the theme as a zip file and enable it that way, which works fine. But I would really like to edit through the filesystem, as I can foresee a lot of development in the future.
An up-to-date, working write-up on how to edit Diazo themes through the filesystem in Plone 4.3.2 using the /resources directory would be the answer, but I have not found one outside of the plone.app.theming user guide. Help!
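For what it's worth, a direct request for the rules file makes it easy to see whether the ++theme++sunrain resource directory is registered at all, which is what the NotFound in the subrequest traceback above suggests is failing. A Python 2.7 sketch; host, port and site id are placeholders:
# If this 404s, Zope is not serving /++theme++sunrain/rules.xml, so the
# theming control panel cannot reach it either.
import urllib2

url = "http://localhost:8080/Plone/++theme++sunrain/rules.xml"
try:
    print(urllib2.urlopen(url).read(200))
except urllib2.HTTPError as e:
    print("HTTP %d - resource directory not registered?" % e.code)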
Recently we noticed that some (published) folders in our Plone 3.3.5 site are no longer visible in the navigation and folder listing views. While trying to set a new effective date on them (it's set in the past, so that is not the problem), I get a KeyError:
Traceback (innermost last):
Module ZPublisher.Publish, line 125, in publish
Module Zope2.App.startup, line 238, in commit
Module transaction._manager, line 96, in commit
Module transaction._transaction, line 389, in commit
Module transaction._transaction, line 445, in _callBeforeCommitHooks
Module collective.indexing.transactions, line 57, in before_commit
Module collective.indexing.queue, line 149, in process
Module collective.indexing.indexer, line 89, in unindex
Module collective.indexing.indexer, line 71, in unindex
Module Products.CMFSquidTool.queue, line 160, in unindexObject
Module Products.CMFSquidTool.patch, line 24, in call
Module Products.Archetypes.CatalogMultiplex, line 49, in unindexObject
Module Products.CMFPlone.CatalogTool, line 445, in uncatalog_object
Module Products.CacheSetup.patch, line 109, in uncatalog_object
Module Products.CacheSetup.patch_utils, line 6, in call
Module Products.ZCatalog.ZCatalog, line 569, in uncatalog_object
Module Products.ZCatalog.Catalog, line 390, in uncatalogObject
Module Products.PluginIndexes.DateRangeIndex.DateRangeIndex, line 192, in unindex_object
Module Products.PluginIndexes.DateRangeIndex.DateRangeIndex, line 391, in _removeForwardIndexEntry
KeyError: -1666126693
In some cases, other operations also trigger the same error.
The parent folder in which this is happening was recently cut and pasted to its current location in the site; maybe that's got something to do with it.
Help much appreciated, thanks!
Try rebuilding the catalog; it can help, as it seems there are some orphaned objects in the catalog.
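If it helps, the rebuild can be done from the ZMI (the Advanced tab of portal_catalog has a "Clear and Rebuild" button, if I remember right) or from a debug prompt. A rough sketch of the latter, with the site id and admin user id as placeholders:
# Run from `bin/instance debug`, where `app` is the Zope application root.
# The Plone site id ("Plone") and the admin user id are placeholders.
from Testing.makerequest import makerequest
from AccessControl.SecurityManagement import newSecurityManager
import transaction

app = makerequest(app)
admin = app.acl_users.getUser('admin')
newSecurityManager(None, admin.__of__(app.acl_users))

site = app.Plone
site.portal_catalog.clearFindAndRebuild()  # walks the site and re-indexes every object
transaction.commit()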