How do I use closurebuilder for compiling and minifying scripts - google-closure-compiler

I am totally new to the Closure Library and am just getting started. I installed Python on my Windows 7 machine and want to concatenate and minify my scripts. I ran through the commands as documented here, but with no luck. Here are my parameters:
Python installed in c:\python27\python.exe
Closure library in c:\closure\
Closure compiler in c:\closure\bin\build\compiler.jar
My JavaScript file in D:\projects\closureapp\js\index.js
The contents of index.js are as below:
/// <reference path="../closure/base.js" />
/// <reference path="../closure/dom/dom.js" />
/*Hello world into Closure Library Example*/
//Load the dom module
goog.require("goog.dom");
//refer the document body
var pageBody = document.body;
//after the body is loaded execute and add a header
pageBody.onload = function () {
    // create a header for the page
    var pageHeader = goog.dom.createDom('h1', { 'style': 'background-color:#EEE' }, 'Hello world!');
    // append the header to the document body
    goog.dom.appendChild(pageBody, pageHeader);
};
I executed the command below to produce the compiled JavaScript, but with no luck:
c:\python27\python.exe c:\closure\bin\build\closurebuilder.py --root=closure/ --root=d:\Projects\closureapp\js\ --output_mode=compiled --compiler_jar=compiler.jar > d:\Projects\closureapp\js\output.js
I get some weird messages like the ones below:
c:\closure\bin\build\closurebuilder.py: Building dependency tree..
Traceback (most recent call last):
File "c:\closure\bin\build\closurebuilder.py", line 257, in <module> main()
File "c:\closure\bin\build\closurebuilder.py", line 204, in main tree = depstree.DepsTree(sources)
File "c:\closure\bin\build\depstree.py", line 56, in __init__ raise NamespaceNotFoundError(require, source)
depstree.NamespaceNotFoundError: Namespace "goog.async.Deferred" never provided.
Required in Source closure\messaging\portchannel.js

This looks like the same issue as http://code.google.com/p/closure-library/issues/detail?id=316

Related

GCP Composer/Airflow calling Dataflow/beam throwing error

I have a GCP Cloud Composer environment with Airflow version composer-1.10.0-airflow-1.10.6 and Python 3 (3.6, to be precise). I am calling an Apache Beam pipeline on Dataflow using a python_operator.PythonOperator. Here is the code snippet.
Calling the pipeline function:
test_greeting = python_operator.PythonOperator(
    task_id='python_pipeline',
    python_callable=run_pipeline
)
The pipeline function is as follows
def run_pipeline():
    print("Test Pipeline")
    pipeline_args = [
        "--runner", "DataflowRunner",
        "--project", "*****",
        "--temp_location", "gs://******/temp",
        "--region", "us-east1",
        "--job_name", "job1199",
        "--zone", "us-east1-b"
    ]
    pipeline_options = PipelineOptions(pipeline_args)
    pipe = beam.Pipeline(options=pipeline_options)
    small_sum = (
        pipe
        | beam.Create([18, 5, 7, 7, 9, 23, 13, 5])
        | "Combine Globally" >> beam.CombineGlobally(AverageFn())
        | 'Write results' >> beam.io.WriteToText('gs://******/ouptut_from_pipline/combine')
    )
    run_result = pipe.run()
    run_result.wait_until_finish()
    return "True"
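(AverageFn is not defined in the snippet; presumably it is a custom beam.CombineFn roughly along the lines of this hypothetical sketch.)
import apache_beam as beam

class AverageFn(beam.CombineFn):
    # Hypothetical combiner: keep a (sum, count) accumulator and emit the mean.
    def create_accumulator(self):
        return (0.0, 0)

    def add_input(self, accumulator, element):
        total, count = accumulator
        return total + element, count + 1

    def merge_accumulators(self, accumulators):
        totals, counts = zip(*accumulators)
        return sum(totals), sum(counts)

    def extract_output(self, accumulator):
        total, count = accumulator
        return total / count if count else float('nan')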
When I run this, the pipeline executes on Dataflow but fails with the following error:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/dataflow_worker/batchworker.py", line 648, in do_work
work_executor.execute()
File "/usr/local/lib/python3.6/site-packages/dataflow_worker/executor.py", line 150, in execute
test_shuffle_sink=self._test_shuffle_sink)
File "/usr/local/lib/python3.6/site-packages/dataflow_worker/executor.py", line 116, in create_operation
is_streaming=False)
File "apache_beam/runners/worker/operations.py", line 1032, in apache_beam.runners.worker.operations.create_operation
File "apache_beam/runners/worker/operations.py", line 845, in apache_beam.runners.worker.operations.create_pgbk_op
File "apache_beam/runners/worker/operations.py", line 903, in apache_beam.runners.worker.operations.PGBKCVOperation.__init__
File "/usr/local/lib/python3.6/site-packages/apache_beam/internal/pickler.py", line 290, in loads
return dill.loads(s)
File "/usr/local/lib/python3.6/site-packages/dill/_dill.py", line 275, in loads
return load(file, ignore, **kwds)
File "/usr/local/lib/python3.6/site-packages/dill/_dill.py", line 270, in load
return Unpickler(file, ignore=ignore, **kwds).load()
File "/usr/local/lib/python3.6/site-packages/dill/_dill.py", line 472, in load
obj = StockUnpickler.load(self)
File "/usr/local/lib/python3.6/site-packages/dill/_dill.py", line 462, in find_class
return StockUnpickler.find_class(self, module, name)
ModuleNotFoundError: No module named 'unusual_prefix_162ac8b7030d5bd1ff5f128a26483932d3968a4d_python_bash'
The beam version is Apache Beam Python 3.6 SDK 2.19.0.
I suspect the Python 3.6 version may be the issue, as calling the pipeline directly (as a runner) from my local system works fine, and my local system runs Python 3.7.
I can't find a way to test this theory, though.
It would be helpful to get tips on how to resolve this issue.
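One clue worth noting: the missing module name unusual_prefix_..._python_bash looks like the synthetic module name Airflow gives the DAG file when it imports it, which would mean the Dataflow workers are being asked to unpickle objects (such as AverageFn) defined inside the DAG file itself, a module that does not exist on the workers. Under that assumption, one way to test the theory is to move the pipeline code into its own importable package next to the DAGs and ship it to the workers with --setup_file; the module and path names below are hypothetical:
# Hypothetical layout: dags/beam_jobs/average_pipeline.py holds AverageFn and the
# pipeline construction, and dags/beam_jobs/setup.py packages it for the workers.
from beam_jobs.average_pipeline import build_and_run  # hypothetical module

def run_pipeline():
    return build_and_run(pipeline_args=[
        "--runner", "DataflowRunner",
        "--project", "*****",
        "--temp_location", "gs://******/temp",
        "--region", "us-east1",
        "--setup_file", "/home/airflow/gcs/dags/beam_jobs/setup.py",  # assumed path
    ])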

Unit test for exception raised in custom GNU radio python block

I have created a custom Python sync block for use in a GNU Radio flowgraph. The block tests for invalid input and, if found, raises a ValueError exception. I would like to create a unit test to verify that the exception is raised when the block indeed receives invalid input data.
As part of the Python-based QA test for this block, I created a flowgraph such that the block receives invalid data. When I run the test, the block does appear to raise the exception, but then it hangs.
What is the appropriate way to test for this? Here is a minimal working example:
#!/usr/bin/env python
import numpy as np
from gnuradio import gr, gr_unittest, blocks

class validate_input(gr.sync_block):
    def __init__(self):
        gr.sync_block.__init__(self,
            name="validate_input",
            in_sig=[np.float32],
            out_sig=[np.float32])
        self.max_input = 100

    def work(self, input_items, output_items):
        in0 = input_items[0]
        if (np.max(in0) > self.max_input):
            raise ValueError('input exceeds max.')
        validated_in = output_items[0]
        validated_in[:] = in0
        return len(output_items[0])

class qa_validate_input(gr_unittest.TestCase):
    def setUp(self):
        self.tb = gr.top_block()

    def tearDown(self):
        self.tb = None

    def test_check_valid_data(self):
        src_data = (0, 201, 92)
        src = blocks.vector_source_f(src_data)
        validate = validate_input()
        snk = blocks.vector_sink_f()
        self.tb.connect(src, validate)
        self.tb.connect(validate, snk)
        self.assertRaises(ValueError, self.tb.run)

if __name__ == '__main__':
    gr_unittest.run(qa_validate_input, "qa_validate_input.xml")
which produces:
DEPRECATED: Using filename with gr_unittest does no longer have any effect.
handler caught exception: input exceeds max.
Traceback (most recent call last):
File "/home/xxx/devel/gnuradio3_8/lib/python3.6/dist-packages/gnuradio/gr/gateway.py", line 60, in eval
try: self._callback()
File "/home/xxx/devel/gnuradio3_8/lib/python3.6/dist-packages/gnuradio/gr/gateway.py", line 230, in __gr_block_handle
) for i in range(noutputs)],
File "qa_validate_input.py", line 21, in work
raise ValueError('input exceeds max.')
ValueError: input exceeds max.
thread[thread-per-block[1]: <block validate_input(2)>]: SWIG director method error. Error detected when calling 'feval_ll.eval'
^CF
======================================================================
FAIL: test_check_valid_data (__main__.qa_validate_input)
----------------------------------------------------------------------
Traceback (most recent call last):
File "qa_validate_input.py", line 47, in test_check_valid_data
self.assertRaises(ValueError, self.tb.run)
AssertionError: ValueError not raised by run
----------------------------------------------------------------------
Ran 1 test in 1.634s
FAILED (failures=1)
The top_block's run() function does not call the block's work() function directly; it starts the internal task scheduler and its threads and waits for them to finish.
One way to unit test the error handling in your block is to call the work() function directly:
def test_check_valid_data(self):
    src_data = [[0, 201, 92]]
    output_items = [[]]
    validate = validate_input()
    self.assertRaises(ValueError, lambda: validate.work(src_data, output_items))
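Since the scheduler normally hands work() one numpy buffer per port (matching in_sig/out_sig), the direct call can be made a bit more faithful with explicit float32 arrays; this is just a minor variation on the sketch above:
import numpy as np

def test_check_valid_data(self):
    # Mimic what the scheduler would pass: one float32 buffer per port.
    src_data = [np.array([0, 201, 92], dtype=np.float32)]
    output_items = [np.zeros(3, dtype=np.float32)]
    validate = validate_input()
    self.assertRaises(ValueError, lambda: validate.work(src_data, output_items))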

posix_fallocate() failed: Operation not permitted while opening .realm file

I get the error below when I try to open and download a .realm file in the /tmp directory of the Serverless Framework.
{"errorType":"Runtime.UnhandledPromiseRejection","errorMessage":"Error: posix_fallocate() failed: Operation not permitted" }
Below is the code:
let realm = new Realm({path: '/tmp/custom.realm', schema: [schema1, schema2]});
realm.write(() => {
    console.log('completed==');
});
EDIT: this might finally be fixed soon in Realm-Core; see issue 4957.
In case you run into this problem elsewhere, here's a workaround.
This is caused by AWS Lambda not supporting the fallocate and fallocate64 system calls. Instead of returning the correct error code in this case, which would be EINVAL for "not supported on this file system", Amazon has blocked the system call so that it returns EPERM. Realm-Core has code that handles the EINVAL return value correctly but is bewildered by the unexpected EPERM returned from the system call.
The solution is to add a small shared library as a layer to the Lambda: compile the following C file on a Linux machine or inside the lambda-ci Docker image:
#include <errno.h>
#include <fcntl.h>

int posix_fallocate(int __fd, off_t __offset, off_t __len) {
    return EINVAL;
}

int posix_fallocate64(int __fd, off_t __offset, off_t __len) {
    return EINVAL;
}
Now, compile this to a shared object with something like
gcc -shared fix.c -o fix.so
Then add it to the root of a ZIP file:
zip layer.zip fix.so
Create a new Lambda layer from this ZIP.
Add the Lambda layer to your Lambda function.
Finally, make the shared object load by setting the environment variable LD_PRELOAD to /opt/fix.so on your Lambda.
Enjoy.

NameError: Method not available report

I made a web service to let two applications, Odoo 12 and Drupal, communicate. When I try to retrieve a report in Odoo 12 from Drupal, I get this error message:
-Drupal:
The website encountered an unexpected error. Please try again later. Zend\XmlRpc\Client\Exception\FaultException: Traceback (most recent call last):
File "C:\odoo-12.0\odoo\addons\base\controllers\rpc.py", line 63, in xmlrpc_2
response = self._xmlrpc(service)
File "C:\odoo-12.0\odoo\addons\base\controllers\rpc.py", line 43, in _xmlrpc
result = dispatch_rpc(service, method, params)
File "C:/odoo-12.0\odoo\http.py", line 121, in dispatch_rpc
result = dispatch(method, params)
File "C:/odoo-12.0\odoo\service\model.py", line 34, in dispatch
raise NameError("Method not available %s" % method)
NameError: Method not available report
in Zend\XmlRpc\Client->call() (line 325 of vendor\zendframework\zend-xmlrpc\src\Client.php). Jsg\Odoo\Odoo->getReport('crm_ong.report_recufiscal', 0, 'qweb-pdf') (Line: 124)
-Odoo:
Traceback (most recent call last):
File "C:/odoo-12.0\odoo\http.py", line 121, in dispatch_rpc
result = dispatch(method, params)
File "C:/odoo-12.0\odoo\service\model.py", line 34, in dispatch
raise NameError("Method not available %s" % method)
NameError: Method not available report
-Drupal code:
public function submitForm(array &$form, FormStateInterface $form_state) {
    global $id_don;
    global $client;
    $id_don = (int) $form_state->getValues()['id_don'];
    $model = "crm.alima.don";
    $ids = [$id_don];
    $report_data = $client->getReport('crm_solthis.report_recufiscal', $id_don, 'qweb-pdf');
    header('Content-Type: application/pdf');
    echo $report_data; die();
    header('Content-Type: text/css');
    header("Content-Disposition: attachment; filename=RecuFiscal.pdf");
}
The report service has been removed from Odoo since version 11.0.
Relevant commits: c23ef9a, 3425752.
I just inspected the Odoo client used by Drupal, and it appears the code doesn't take these changes into account:
# from function getReport()
$client = $this->getClient('report');
$reportId = $client->call('report', $params);
To fix your issue, don't use getReport. I guess it's still possible to grab some data for your model and print a kind of report by tweaking that method of the client.
I suggest switching to the object endpoint to get a generic XmlRpcClient on which you might be able to call render().
For example, you can use search() to get a reportId in the first place (there is no more report service, but the ir.actions.report model is still available), and then try to read/render it like in this example (this is not 'client' code relative to Odoo, but you get the idea).
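A minimal sketch of that suggestion over Odoo's standard XML-RPC endpoints, written here in Python for brevity; the URL, database, credentials and record id are placeholders, and whether render_qweb_pdf marshals its binary result cleanly over XML-RPC on your Odoo 12 setup is an assumption to verify:
import xmlrpc.client

url, db, username, password = "http://localhost:8069", "mydb", "admin", "admin"  # placeholders

common = xmlrpc.client.ServerProxy(f"{url}/xmlrpc/2/common")
uid = common.authenticate(db, username, password, {})
models = xmlrpc.client.ServerProxy(f"{url}/xmlrpc/2/object")

# The 'report' service is gone, but the report definition still lives in the
# ir.actions.report model, so look it up there with a plain search().
report_ids = models.execute_kw(
    db, uid, password,
    'ir.actions.report', 'search',
    [[['report_name', '=', 'crm_solthis.report_recufiscal']]],
)

# Then try rendering through the model itself (e.g. render_qweb_pdf); treat this
# part as an experiment, since returning raw PDF bytes over XML-RPC may need
# extra handling on your side.
record_id = 1  # placeholder: id of the record to print
result = models.execute_kw(
    db, uid, password,
    'ir.actions.report', 'render_qweb_pdf',
    [report_ids, [record_id]],
)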

Working PySide example of Qt diagram editor?

When running the PySide Diagram Scene example (circa 2010), I get the error below. Is there a more current example of a basic diagram editor available?
C:\Python34\python.exe C:/Users/dle/Documents/Programming/Python/diagramscene.py
Traceback (most recent call last):
File "C:/Users/dle/Documents/Programming/Python/diagramscene.py", line 11, in <module>
import diagramscene_rc
File "C:\Users\dle\Documents\Programming\Python\diagramscene_rc.py", line 404, in <module>
qInitResources()
File "C:\Users\dle\Documents\Programming\Python\diagramscene_rc.py", line 399, in qInitResources
QtCore.qRegisterResourceData(0x01, qt_resource_struct, qt_resource_name, qt_resource_data)
TypeError: 'qRegisterResourceData' called with wrong argument types:
qRegisterResourceData(int, str, str, str)
Supported signatures:
qRegisterResourceData(int, unicode, unicode, unicode)
The problem is that the file diagramscene_rc.py has been generated for Python 2. To solve it, you must recompile that file: open a terminal in the folder and execute the following command:
pyside-rcc diagramscene.qrc -o diagramscene_rc.py -py3
Or, place the letter b before the string literals assigned to the variables, changing this:
qt_resource_data = "\
\x00\x00\x01\x12\
...
qt_resource_name = "\
\x00\x06\
...
qt_resource_struct = "\
\x00\x00\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x01\
....
to:
qt_resource_data = b"\
\x00\x00\x01\x12\
...
qt_resource_name = b"\
\x00\x06\
...
qt_resource_struct = b"\
\x00\x00\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x01\
....
