I'd like to use specialized configuration as per the Hydra documentation (Common Patterns -> Specializing Configuration). The difference is that my specialized configuration lives in a file, not just one variable. In the example below I want to choose the transform based on the model and the dataset. The configs for the different transforms are in files. This would work if I specified the whole transform configuration in dataset_model/cifar10_alexnet.yaml, but that would defeat the purpose because I couldn't reuse the transform config. Elsewhere in Hydra, if you specify the name of a file it automatically picks up the config in that file, but that does not seem to work in the specialized configuration.
I've modified the example in the documentation as follows:
config.yaml:
defaults:
  - dataset: cifar10
  - model: alexnet
  - transform: crop
  - dataset_model: ${defaults.0.dataset}_${defaults.1.model}
    optional: true
Added a directory called transform, with two files inside:
crop.yaml:
# @package _group_
type: crop
test1: 7
resize.yaml:
# @package _group_
type: resize
test1: 50
Changed file dataset_model/cifar10_alexnet.yaml:
# @package _global_
model:
  num_layers: 5
transform: resize
Everything else is exactly as per the documentation. When I run this I get an exception:
Traceback (most recent call last):
File "/home/natalia/.pyenv/versions/3.7.9/lib/python3.7/site-packages/hydra/_internal/config_loader_impl.py", line 720, in _merge_config
ret = OmegaConf.merge(cfg, loaded_cfg)
File "/home/natalia/.pyenv/versions/3.7.9/lib/python3.7/site-packages/omegaconf/omegaconf.py", line 321, in merge
target.merge_with(*others[1:])
File "/home/natalia/.pyenv/versions/3.7.9/lib/python3.7/site-packages/omegaconf/basecontainer.py", line 331, in merge_with
self._format_and_raise(key=None, value=None, cause=e)
File "/home/natalia/.pyenv/versions/3.7.9/lib/python3.7/site-packages/omegaconf/base.py", line 101, in _format_and_raise
type_override=type_override,
File "/home/natalia/.pyenv/versions/3.7.9/lib/python3.7/site-packages/omegaconf/_utils.py", line 629, in format_and_raise
_raise(ex, cause)
File "/home/natalia/.pyenv/versions/3.7.9/lib/python3.7/site-packages/omegaconf/_utils.py", line 610, in _raise
raise ex # set end OC_CAUSE=1 for full backtrace
File "/home/natalia/.pyenv/versions/3.7.9/lib/python3.7/site-packages/omegaconf/basecontainer.py", line 329, in merge_with
self._merge_with(*others)
File "/home/natalia/.pyenv/versions/3.7.9/lib/python3.7/site-packages/omegaconf/basecontainer.py", line 347, in _merge_with
BaseContainer._map_merge(self, other)
File "/home/natalia/.pyenv/versions/3.7.9/lib/python3.7/site-packages/omegaconf/basecontainer.py", line 296, in _map_merge
dest.__setitem__(key, src_value)
File "/home/natalia/.pyenv/versions/3.7.9/lib/python3.7/site-packages/omegaconf/dictconfig.py", line 262, in __setitem__
self._format_and_raise(key=key, value=value, cause=e)
File "/home/natalia/.pyenv/versions/3.7.9/lib/python3.7/site-packages/omegaconf/base.py", line 101, in _format_and_raise
type_override=type_override,
File "/home/natalia/.pyenv/versions/3.7.9/lib/python3.7/site-packages/omegaconf/_utils.py", line 694, in format_and_raise
_raise(ex, cause)
File "/home/natalia/.pyenv/versions/3.7.9/lib/python3.7/site-packages/omegaconf/_utils.py", line 610, in _raise
raise ex # set end OC_CAUSE=1 for full backtrace
omegaconf.errors.ValidationError:
full_key: transform
reference_type=Optional[Dict[Union[str, Enum], Any]]
object_type=dict
So, the question is: is this functionality supported, and if it is, what am I doing wrong?
Your config is trying to merge the string "resize" into a dictionary like:
transform:
  type: crop
  test1: 7
This is not something you can do.
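You can reproduce the failure with OmegaConf alone (a minimal sketch; the keys mirror your config, and the behavior matches the OmegaConf version in your traceback):

from omegaconf import OmegaConf

base = OmegaConf.create({"transform": {"type": "crop", "test1": 7}})
override = OmegaConf.create({"transform": "resize"})
# Merging the string "resize" over the existing dict node raises
# omegaconf.errors.ValidationError, exactly as in the traceback above.
merged = OmegaConf.merge(base, override)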
You are not explaining very well what you are trying to do, but my guess is that you want to compose a different transform based on the selected dataset.
Hydra 1.1 will add support for a recursive defaults list, which will probably allow you to do what you want.
This is the doc for the new defaults list. You can install this version as a pre-release (see the primary project README).
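With the new defaults list, the specialization file can carry its own defaults and select a transform itself. A sketch of what dataset_model/cifar10_alexnet.yaml might look like under Hydra 1.1 (the exact syntax may differ, so check the linked doc):

# @package _global_
defaults:
  - /transform: resize

model:
  num_layers: 5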
Related
I'm learning how to use DeepPavlov and can't figure out how to use it for NER with the CamemBERT (French) pre-trained model. My goal is to tag a short paragraph in French.
The DeepPavlov docs explicitly list the CamemBERT model as a viable transformer architecture. I followed them as best I could, but I keep getting this error when I try to build the model.
>>> ner_model = build_model('ner_ontonotes_bert_mult')
/home/philippe/.local/lib/python3.10/site-packages/torch/nn/init.py:405: UserWarning: Initializing zero-element tensors is a no-op
warnings.warn("Initializing zero-element tensors is a no-op")
Some weights of the model checkpoint at camembert-base were not used when initializing CamembertForTokenClassification: ['lm_head.dense.weight', 'lm_head.bias', 'lm_head.layer_norm.bias', 'lm_head.layer_norm.weight', 'lm_head.decoder.weight', 'lm_head.dense.bias']
- This IS expected if you are initializing CamembertForTokenClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing CamembertForTokenClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of CamembertForTokenClassification were not initialized from the model checkpoint at camembert-base and are newly initialized: ['classifier.weight', 'classifier.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
2023-01-29 09:58:13.308 ERROR in 'deeppavlov.core.common.params'['params'] at line 108: Exception in <class 'deeppavlov.models.torch_bert.torch_transformers_sequence_tagger.TorchTransformersSequenceTagger'>
Traceback (most recent call last):
File "/home/philippe/.local/lib/python3.10/site-packages/deeppavlov/core/common/params.py", line 102, in from_params
component = obj(**dict(config_params, **kwargs))
File "/home/philippe/.local/lib/python3.10/site-packages/deeppavlov/models/torch_bert/torch_transformers_sequence_tagger.py", line 182, in __init__
super().__init__(optimizer=optimizer,
File "/home/philippe/.local/lib/python3.10/site-packages/deeppavlov/core/models/torch_model.py", line 98, in __init__
self.load()
File "/home/philippe/.local/lib/python3.10/site-packages/deeppavlov/models/torch_bert/torch_transformers_sequence_tagger.py", line 295, in load
self.crf = CRF(self.n_classes).to(self.device)
File "/home/philippe/.local/lib/python3.10/site-packages/deeppavlov/models/torch_bert/crf.py", line 13, in __init__
super().__init__(num_tags=num_tags, batch_first=batch_first)
File "/home/philippe/.local/lib/python3.10/site-packages/torchcrf/__init__.py", line 40, in __init__
raise ValueError(f'invalid number of tags: {num_tags}')
ValueError: invalid number of tags: 0
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/philippe/.local/lib/python3.10/site-packages/deeppavlov/core/commands/infer.py", line 55, in build_model
component = from_params(component_config, mode=mode)
File "/home/philippe/.local/lib/python3.10/site-packages/deeppavlov/core/common/params.py", line 102, in from_params
component = obj(**dict(config_params, **kwargs))
File "/home/philippe/.local/lib/python3.10/site-packages/deeppavlov/models/torch_bert/torch_transformers_sequence_tagger.py", line 182, in __init__
super().__init__(optimizer=optimizer,
File "/home/philippe/.local/lib/python3.10/site-packages/deeppavlov/core/models/torch_model.py", line 98, in __init__
self.load()
File "/home/philippe/.local/lib/python3.10/site-packages/deeppavlov/models/torch_bert/torch_transformers_sequence_tagger.py", line 295, in load
self.crf = CRF(self.n_classes).to(self.device)
File "/home/philippe/.local/lib/python3.10/site-packages/deeppavlov/models/torch_bert/crf.py", line 13, in __init__
super().__init__(num_tags=num_tags, batch_first=batch_first)
File "/home/philippe/.local/lib/python3.10/site-packages/torchcrf/__init__.py", line 40, in __init__
raise ValueError(f'invalid number of tags: {num_tags}')
ValueError: invalid number of tags: 0
I downloaded the camembert-base model from https://huggingface.co/camembert-base and copied the files into the .deeppavlov/models/camembert-base directory.
Then I figured DeepPavlov's 'ner_ontonotes_bert_mult' model was the best for my use case, so I edited the config file and changed those lines in the metadata section at the end. The DeepPavlov docs ask you to change the TRANSFORMER value, which I did, and I changed MODEL_PATH so it points to the files I downloaded previously.
"variables": {
"ROOT_PATH": "~/.deeppavlov",
"DOWNLOADS_PATH": "{ROOT_PATH}/downloads",
"MODELS_PATH": "{ROOT_PATH}/models",
"TRANSFORMER": "camembert-base",
"MODEL_PATH": "{MODELS_PATH}/camembert-base"
},
I am aware that I should have copied the config file to a new one with a different name, but that should not be a problem.
Then, in Python, I did the following:
from deeppavlov import configs, build_model
build_model('ner_ontonotes_bert_mult')
And then I get the error mentioned above. I am lost and don't know where to look next.
I'm trying to make my own component based on the KubernetesPodOperator. I am able to define the component and add it to the list of components, but when I try to run it, I get:
Operator 'KubernetesPodOperator' of node 'KubernetesPodOperator' is not configured in the list of available operators. Please add the fully-qualified package name for 'KubernetesPodOperator' to the AirflowPipelineProcessor.available_airflow_operators configuration.
and this error:
Traceback (most recent call last):
File "/opt/conda/lib/python3.9/site-packages/tornado/web.py", line 1704, in _execute
result = await result
File "/opt/conda/lib/python3.9/site-packages/elyra/pipeline/handlers.py", line 120, in post
response = await PipelineProcessorManager.instance().process(pipeline)
File "/opt/conda/lib/python3.9/site-packages/elyra/pipeline/processor.py", line 134, in process
res = await asyncio.get_event_loop().run_in_executor(None, processor.process, pipeline)
File "/opt/conda/lib/python3.9/asyncio/futures.py", line 284, in __await__
yield self # This tells Task to wait for completion.
File "/opt/conda/lib/python3.9/asyncio/tasks.py", line 328, in __wakeup
future.result()
File "/opt/conda/lib/python3.9/asyncio/futures.py", line 201, in result
raise self._exception
File "/opt/conda/lib/python3.9/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/opt/conda/lib/python3.9/site-packages/elyra/pipeline/airflow/processor_airflow.py", line 122, in process
pipeline_filepath = self.create_pipeline_file(pipeline=pipeline,
File "/opt/conda/lib/python3.9/site-packages/elyra/pipeline/airflow/processor_airflow.py", line 420, in create_pipeline_file
target_ops = self._cc_pipeline(pipeline, pipeline_name)
File "/opt/conda/lib/python3.9/site-packages/elyra/pipeline/airflow/processor_airflow.py", line 368, in _cc_pipeline
raise ValueError(f"Operator '{component.name}' of node '{operation.name}' is not configured "
ValueError: Operator 'KubernetesPodOperator' of node 'KubernetesPodOperator' is not configured in the list of available operators. Please add the fully-qualified package name for 'KubernetesPodOperator' to the AirflowPipelineProcessor.available_airflow_operators configuration.
After looking through the source code, I can see these lines in processor_airflow.py:
# This specifies the default airflow operators included with Elyra. Any Airflow-based
# custom connectors should create/extend the elyra configuration file to include
# those fully-qualified operator/class names.
available_airflow_operators = ListTrait(
    CUnicode(),
    ["airflow.operators.slack_operator.SlackAPIPostOperator",
     "airflow.operators.bash_operator.BashOperator",
     "airflow.operators.email_operator.EmailOperator",
     "airflow.operators.http_operator.SimpleHttpOperator",
     "airflow.contrib.operators.spark_sql_operator.SparkSqlOperator",
     "airflow.contrib.operators.spark_submit_operator.SparkSubmitOperator"],
    help="""List of available Apache Airflow operator names.

    Operators available for use within Apache Airflow pipelines. These operators must
    be fully qualified (i.e., prefixed with their package names).
    """,
).tag(config=True)
though I am unsure whether this can be extended from the client.
The available_airflow_operators list is a configurable trait in Elyra. You’ll have to add the fully-qualified package name for the KubernetesPodOperator to this list in order for it to create the DAG correctly.
To do so, generate a config file from the command line with jupyter elyra --generate-config. Open the created file and add the following line (you can add it under the PipelineProcessor(LoggingConfigurable) heading if you prefer to keep the file organized):
c.AirflowPipelineProcessor.available_airflow_operators.append("airflow.providers.cncf.kubernetes.operators.kubernetes_pod.KubernetesPodOperator")
Change that string value to the correct package for your use case if it's not the above (make sure that it ends with the class name of the required operator). If you need to add multiple packages, you can also use extend rather than append.
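For example, a sketch using extend (the package path below assumes the cncf.kubernetes provider layout; adjust it to whatever your environment actually imports):

c.AirflowPipelineProcessor.available_airflow_operators.extend(
    [
        "airflow.providers.cncf.kubernetes.operators.kubernetes_pod.KubernetesPodOperator",
        # add further fully-qualified operator class names here
    ]
)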
Edit: here is the link to the relevant documentation
I'm taking a class, Intro to Algebraic Cryptology. We're using Sage for everything, in CoCalc. This class is the first time I've heard of either. The instructor has provided many convenience functions for our use. I don't like repeatedly copying them into new Sage worksheets in CoCalc, so I put them in a library.
It took some time but I finally learned that to use them I have to do this in Sage:
load_attach_path('/path/to/the/directory')
%attach elliptic_curve_common.sage
Now, there is a function she wrote for us called HPSonEC, which uses the Hellman-Pohlig-Silver attack to crack encryption on elliptic curves. What's infuriating is that the function will not work when used as above, and I get this error:
Error in lines 5-5
Traceback (most recent call last):
File "/cocalc/lib/python3.8/site-packages/smc_sagews/sage_server.py", line 1230, in execute
exec(
File "", line 1, in <module>
File "<string>", line 298, in HPSonEC
File "<string>", line 263, in listptorder
File "<string>", line 151, in ECTimes
File "sage/rings/rational.pyx", line 2401, in sage.rings.rational.Rational.__mul__ (build/cythonized/sage/rings/rational.c:20911)
return coercion_model.bin_op(left, right, operator.mul)
File "sage/structure/coerce.pyx", line 1248, in sage.structure.coerce.CoercionModel.bin_op (build/cythonized/sage/structure/coerce.c:11304)
raise bin_op_exception(op, x, y)
TypeError: unsupported operand parent(s) for *: 'Rational Field' and 'Abelian group of points on Elliptic Curve defined by y^2 = x^3 + 389787687398479 over Finite Field of size 324638246338947256483756487461'
However, if I take that function, and the others on which it depends, and copy them into my Sage worksheet, they work just fine. Literally no differences in the code at all. What might be the issue?
When reading code from a worksheet or a .sage file, the Sage preparser is applied. When reading code from a .py file, it is not.
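You can see what the preparser does by calling preparse() yourself in a Sage session, for example:

sage: preparse("k = 1/3")
'k = Integer(1)/Integer(3)'

In a plain .py file no such rewriting happens, so literals stay Python ints and floats, and coercions the worksheet version relied on can fail.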
See the many questions where this has come up:
https://stackoverflow.com/search?q=%5Bsage%5D+preparser
https://ask.sagemath.org/questions/scope:all/sort:activity-desc/page:1/query:preparser/
I am using the Functory module and I am facing a very bizarre issue with my code.
The code works fine and I have been able to parallelize a move in my game, but when I try to play once again (launch a parallelized function a second time) it raises a really weird error.
Here is the error:
Fatal error: exception Unix.Unix_error(43, "write", "")
Raised by primitive operation at file "unix.ml", line 252, characters 7-34
Called from file "protocol.ml", line 45, characters 10-32
Re-raised at file "network.ml", line 536, characters 10-11
Called from file "network.ml", line 565, characters 47-80
Called from file "list.ml", line 73, characters 12-15
Called from file "network.ml", line 731, characters 4-27
Called from file "map_fold.ml", line 98, characters 4-242
Called from file "game_ia.ml", line 111, characters 10-54
Called from file "gameplay.ml", line 34, characters 12-48
Called from file "gameplay.ml", line 57, characters 22-37
Called from file "gameplay.ml", line 85, characters 5-22
So I've decided to search in the following files to see which primitive operation raised the error:
(unix.ml) external rename : string -> string -> unit = "unix_rename"
(network.ml) Some jid when w.state <> Disconnected -> send w (Protocol.Master.Kill jid)
So, for some reason, it seems that my worker disconnects by itself. I was wondering if any of you have already had this issue, and what to do in order to solve it?
You can find my game here. The main files involved are game_ia.ml (best_move_parallelized) and gameplay.ml (at the very bottom).
Thank you in advance for your help.
The error you get is (type the following in the toploop)¹:
# (Obj.magic 43: Unix.error);;
- : Unix.error = Unix.EPROTOTYPE
which means "Protocol wrong type for socket". So you have to examine how you initialize your socket.
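Unix.error_message turns the same value into the human-readable text (on Linux it prints something like this):

# Unix.error_message (Obj.magic 43: Unix.error);;
- : string = "Protocol wrong type for socket"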
¹ You can also count the error constructors in unix.mli, knowing that the first one, E2BIG, is 0. In Emacs, C-u 43 ↓ helps.
Summary: through-the-filesystem editing is not working for my Diazo theme; Plone breaks.
Details:
I've created my first live Plone site with 4.3.2 and Diazo. You can see the live version at borogreen.org. I would like to keep editing the theme going forward.
My Ubuntu 12.04 LTS test server has only Plone 4.3.2 + Diazo + Dexterity (not used) + Static resource storage 1.0.2 enabled. For testing purposes, I'm using the available sunrain theme.
I've placed the sunrain theme manually inside the /resources folder, as suggested at
http://developer.plone.org/reference_manuals/external/plone.app.theming/userguide.html#deploying-and-testing-themes
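If I read the user guide correctly, the expected on-disk layout is something like this (my understanding is that the theme/ level is the plone.resource type directory that ++theme++ traverses):

resources/
  theme/
    sunrain/
      rules.xml
      index.html
      ...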
Trying to enable that theme in the Site Setup | Theming panel | Advanced, I set the path to the theme rules to
/++theme++sunrain/rules.xml
and the absolute path prefix to
/++theme++sunrain/
Plone does not recognize it: no theme gets enabled. Debug mode spits out the following errors:
2014-03-29 00:10:07 ERROR plone.subrequest Error handling subrequest to /++theme++sunrain/rules.xml
Traceback (most recent call last):
File "/home/plone/Plone/buildout-cache/eggs/plone.subrequest-1.6.7-py2.7.egg/plone/subrequest/__init__.py", line 116, in subrequest
traversed = request.traverse(path)
File "/home/plone/Plone/buildout-cache/eggs/Zope2-2.13.21-py2.7.egg/ZPublisher/BaseRequest.py", line 502, in traverse
subobject = self.traverseName(object, entry_name)
File "/home/plone/Plone/buildout-cache/eggs/Zope2-2.13.21-py2.7.egg/ZPublisher/BaseRequest.py", line 326, in traverseName
ob2 = namespaceLookup(ns, nm, ob, self)
File "/home/plone/Plone/buildout-cache/eggs/zope.traversing-3.13.2-py2.7.egg/zope/traversing/namespace.py", line 112, in namespaceLookup
return traverser.traverse(name, ())
File "/home/plone/Plone/buildout-cache/eggs/plone.resource-1.0.2-py2.7.egg/plone/resource/traversal.py", line 27, in traverse
raise NotFound
NotFound
2014-03-29 00:10:07 ERROR plone.transformchain Unexpected error whilst trying to apply transform chain
Traceback (most recent call last):
File "/home/plone/Plone/buildout-cache/eggs/plone.transformchain-1.0.3-py2.7.egg/plone/transformchain/transformer.py", line 48, in __call__
newResult = handler.transformIterable(result, encoding)
File "/home/plone/Plone/buildout-cache/eggs/plone.app.theming-1.1.1-py2.7.egg/plone/app/theming/transform.py", line 170, in transformIterable
transform = self.setupTransform(runtrace=runtrace)
File "/home/plone/Plone/buildout-cache/eggs/plone.app.theming-1.1.1-py2.7.egg/plone/app/theming/transform.py", line 108, in setupTransform
transform = compileThemeTransform(rules, absolutePrefix, readNetwork, parameterExpressions, runtrace=runtrace)
File "/home/plone/Plone/buildout-cache/eggs/plone.app.theming-1.1.1-py2.7.egg/plone/app/theming/utils.py", line 580, in compileThemeTransform
runtrace=runtrace,
File "/home/plone/Plone/buildout-cache/eggs/diazo-1.0.4-py2.7.egg/diazo/compiler.py", line 115, in compile_theme
read_network=read_network,
File "/home/plone/Plone/buildout-cache/eggs/diazo-1.0.4-py2.7.egg/diazo/rules.py", line 195, in process_rules
rules_doc = etree.parse(rules, parser=rules_parser)
File "lxml.etree.pyx", line 2957, in lxml.etree.parse (src/lxml/lxml.etree.c:56299)
File "parser.pxi", line 1526, in lxml.etree._parseDocument (src/lxml/lxml.etree.c:82331)
File "parser.pxi", line 1555, in lxml.etree._parseDocumentFromURL (src/lxml/lxml.etree.c:82624)
File "parser.pxi", line 1455, in lxml.etree._parseDocFromFile (src/lxml/lxml.etree.c:81663)
File "parser.pxi", line 1002, in lxml.etree._BaseParser._parseDocFromFile (src/lxml/lxml.etree.c:78623)
File "parser.pxi", line 569, in lxml.etree._ParserContext._handleParseResultDoc (src/lxml/lxml.etree.c:74567)
File "parser.pxi", line 650, in lxml.etree._handleParseResult (src/lxml/lxml.etree.c:75458)
File "parser.pxi", line 588, in lxml.etree._raiseParseError (src/lxml/lxml.etree.c:74760)
IOError: Error reading file '/++theme++sunrain/rules.xml': failed to load external entity "/++theme++sunrain/rules.xml"
What's wrong here?
PS: of course I can upload the theme as a zip file and enable it that way, which works fine. I would really like to edit through the filesystem, as I can foresee a lot of development in the future.
An up-to-date, working write-up for Plone 4.3.2 on how to edit Diazo themes through the filesystem using the /resources directory would be the answer, but I have not found one outside of the plone.app.theming user guide. Help!