How to provide multiple dependencies in --chunk for closure compiler? - google-closure-compiler

I have a requirement where my test.js file depends on the x and y modules, so I was trying something like the code below.
The code below generates the x and y chunks successfully but breaks when producing the test chunk.
How can I pass multiple dependencies? If we can't do this via --chunk, how else can we do this?
OPTS=(
"--js app1.js"
"--chunk app:1"
"--js x1.js"
"--chunk x:1:app"
"--js y1.js"
"--chunk y:1:app"
"--js test1.js"
"--chunk test:3:[x,y]"
)
java -jar compiler.jar $(echo ${OPTS[*]})
Thanks in advance for the help,
Kiran

Related

Julia - Metaprogramming for using several modules

I'm using Julia to autograde students' work. I have all of their files Student1.jl, Student2.jl, etc. as separate modules Student1, Student2, etc in a directory that is part of LOAD_PATH. What I want to be able to do works completely fine in the REPL, but fails in a file.
macro Student(number)
    return Meta.parse("Main.Student$number")
end
using Student1
@Student(1).call_function(inputs)
works completely fine in the REPL. However, since I'm running this in a script, I need to be able to load the modules with more metaprogramming, which is currently not working. I would have thought that the exact same script above would have worked in a file Autograder.jl by calling
@eval(using Student1)
@Student(1).call_function(inputs)
in a module Autograder. But I get either an UndefVarError: Student1 not defined or LoadError: cannot replace module Student1 during compilation depending on how I tweak things.
Is there something small in Julia metaprogramming I'm missing here to make this autograding system work out? Thanks for any advice.
The code just as you have written it works for me on Julia versions 1.1.0, 1.3.1, 1.5.1, 1.6.0, and 1.7.0. By that I mean: if I add an inputs variable, put your first code block in a file Autograder.jl, and run JULIA_LOAD_PATH="modules:$JULIA_LOAD_PATH" julia Autograder.jl with the student modules in the modules directory, I get the output of the call_function function in the Student1 module.
However, if Autograder.jl actually contains a module, then the Student$number module is not required into Main, and your macro needs to be modified accordingly:
module Autograder

macro Student(number)
    return Meta.parse("Student$number") # or "Autograder.Student$number"
end

inputs = []
@eval(using Student1)
@Student(1).call_function(inputs)

end
Personally I wouldn't use a macro to accomplish this; here is one possible alternative:
student(id) = Base.require(Main, Symbol("Student$(id)"))
let student_module = student(1)
    student_module.call_function(inputs)
end
or without modifying the LOAD_PATH:
student(id) = include("modules/Student$(id).jl")
let student_module = student(1)
    student_module.call_function(inputs)
end

OpenMDAO: adding command line args for ExternalCodeComp that won't result in a runtime error

In OpenMDAO V3.1 I am using an ExternalCodeComp to execute a CFD code. Typically, I would call it as such:
mpirun nodet_mpi --design_run
If the above call is made in the appropriate directory, then it will find the appropriate run file and execute the CFD run. I have tried these command args for the ExternalCodeComp:
execute = ['mpirun', 'nodet_mpi', '--design_run']
execute = ['mpirun', 'nodet_mpi --design_run']
execute = ['mpirun nodet_mpi --design_run']
I either get an error such as:
RunTimeError: 255, execvp error on file "nodet_mpi --design_run" (No such file or directory)
Or that the command cannot be found.
Is there any way to set up the execute statement to include command-line args for the flow solver when an input file is not defined?
Thanks in advance!
One detail in your question seems incorrect: you state that you have tried execute = "...". The ExternalCodeComp uses an option called command, so I will assume that you are using the correct option in your code.
The most correct form to use is the list with all arguments as single entries in the list:
self.options['command'] = ['mpirun', 'nodet_mpi', '--design_run']
Your error message seems to indicate that the directory OpenMDAO is running in is not the same as the directory you would like to execute the CFD code from. The absolute simplest solution would be to make sure that you are in the correct directory via cd in the terminal window before executing your Python script.
However, there is likely a reason that your python script is in a different place so there are other options I can suggest:
You can use a combination of os.getcwd() and os.chdir() inside the compute method that you have implemented to make sure you switch into and out of the working directory for the CFD code (a sketch of this is given after the option 2 code below).
If you would like to, you can modify the entries of the list you've assigned to the self.options['command'] option on the fly within your compute method. You would again be relying on some of the methods in the os module for help. os.path.exists can be used to test if the specific input files you need exist or not, and you can modify the command option accordingly.
For option 2, code would look something like this:
def compute(self, inputs, outputs):
    if os.path.exists('some_input.file'):
        self.options['command'] = ['mpirun', 'nodet_mpi', '--design_run']
    else:
        self.options['command'] = ['mpirun', 'nodet_mpi', '--design_run', '--other_options']

    # the parent compute function actually runs the external code
    super().compute(inputs, outputs)
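For option 1, a minimal sketch could look like this. The cfd_run_dir attribute is hypothetical and just stands in for wherever you keep the path to the CFD run directory:

import os

def compute(self, inputs, outputs):
    # Remember where we started, then switch into the CFD working directory.
    start_dir = os.getcwd()
    os.chdir(self.cfd_run_dir)  # hypothetical attribute holding the run directory
    try:
        # the parent compute function actually runs the external code
        super().compute(inputs, outputs)
    finally:
        # always switch back, even if the external code fails
        os.chdir(start_dir)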

Tensorflow: How to convert .meta, .data and .index model files into one graph.pb file

In TensorFlow, training from scratch produced the following 6 files:
events.out.tfevents.1503494436.06L7-BRM738
model.ckpt-22480.meta
checkpoint
model.ckpt-22480.data-00000-of-00001
model.ckpt-22480.index
graph.pbtxt
I would like to convert them (or only the needed ones) into one graph.pb file to be able to transfer it to my Android application.
I tried the freeze_graph.py script, but it requires an input.pb file as input, which I do not have (I only have the 6 files mentioned above). How do I proceed to get this one frozen_graph.pb file? I saw several threads, but none of them worked for me.
You can use this simple script to do that. But you must specify the names of the output nodes.
import tensorflow as tf

meta_path = 'model.ckpt-22480.meta' # Your .meta file
output_node_names = ['output:0']    # Output nodes

with tf.Session() as sess:
    # Restore the graph
    saver = tf.train.import_meta_graph(meta_path)

    # Load weights
    saver.restore(sess, tf.train.latest_checkpoint('path/of/your/.meta/file'))

    # Freeze the graph
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess,
        sess.graph_def,
        output_node_names)

    # Save the frozen graph
    with open('output_graph.pb', 'wb') as f:
        f.write(frozen_graph_def.SerializeToString())
If you don't know the name of the output node or nodes, there are two ways:
You can explore the graph and find the name with Netron or with the console summarize_graph utility.
You can use all the nodes as output ones, as shown below.
output_node_names = [n.name for n in tf.get_default_graph().as_graph_def().node]
(Note that you have to put this line just before the convert_variables_to_constants call.)
But I think this is an unusual situation, because if you don't know the output node, you cannot actually use the graph.
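For completeness, here is a minimal sketch of a third way to narrow down the candidates (assuming TensorFlow 1.x and the model.ckpt-22480.meta file from the question): it lists the nodes that never appear as an input to any other node, which are usually the real outputs.

import tensorflow as tf

meta_path = 'model.ckpt-22480.meta'
tf.reset_default_graph()
tf.train.import_meta_graph(meta_path)
graph_def = tf.get_default_graph().as_graph_def()

# Collect every node name that is consumed as an input somewhere
# (strip tensor indices like ':0' and control-dependency markers '^').
consumed = set()
for node in graph_def.node:
    for inp in node.input:
        consumed.add(inp.split(':')[0].lstrip('^'))

# Nodes that are never consumed are likely output nodes.
print([node.name for node in graph_def.node if node.name not in consumed])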
As it may be helpful for others, I am also answering here after the answer on GitHub ;-).
I think you can try something like this (with the freeze_graph script in tensorflow/python/tools):
python freeze_graph.py --input_graph=/path/to/graph.pbtxt --input_checkpoint=/path/to/model.ckpt-22480 --input_binary=false --output_graph=/path/to/frozen_graph.pb --output_node_names="the nodes that you want to output e.g. InceptionV3/Predictions/Reshape_1 for Inception V3 "
The important flag here is --input_binary=false, as the file graph.pbtxt is in text format. I think it corresponds to the required graph.pb, which is the equivalent in binary format.
Concerning output_node_names, that's really confusing for me, as I still have some problems with this part, but you can use the summarize_graph script in TensorFlow, which can take the pb or the pbtxt as input.
Regards,
Steph
I tried the freeze_graph.py script, but the output_node_names parameter was totally confusing and the job failed.
So I tried the other one: export_inference_graph.py.
And it worked as expected!
python -u /tfPath/models/object_detection/export_inference_graph.py \
--input_type=image_tensor \
--pipeline_config_path=/your/config/path/ssd_mobilenet_v1_pets.config \
--trained_checkpoint_prefix=/your/checkpoint/path/model.ckpt-50000 \
--output_directory=/output/path
The tensorflow installation package I used is from here:
https://github.com/tensorflow/models
First, use the following code to generate the graph.pb file.
with tf.Session() as sess:
    # Restore the graph
    _ = tf.train.import_meta_graph(args.input)

    # Save the graph file
    g = sess.graph
    gdef = g.as_graph_def()
    tf.train.write_graph(gdef, ".", args.output, True)
Then, use summarize_graph to get the output node name.
Finally, use
python freeze_graph.py --input_graph=/path/to/graph.pbtxt --input_checkpoint=/path/to/model.ckpt-22480 --input_binary=false --output_graph=/path/to/frozen_graph.pb --output_node_names="the nodes that you want to output e.g. InceptionV3/Predictions/Reshape_1 for Inception V3 "
to generate the frozen graph.

How to set the value of a SettingKey based on different sbt commands?

There's the command sbt flywayMigrate from flywaydb.org. The command requires us to set flywayUrl, flywayUser, and flywayPassword beforehand. That worked well so far.
Now I want to be able to use sbt flywayMigrate for two different environments; their variables should be different.
I tried to make two new commands: sbt flywayMigrateDev and sbt flywayMigrateProd. I couldn't figure out how to connect the new commands to flywayMigrate.
I tried creating a new scope. But I couldn't figure out how to wire the variables and tasks properly.
I wonder if anyone can give me an example on how to do this. I'd like to see a code example.
We can simplify the problem to:
There's the command sbt flywayMigrate that depends on flywayUrl. How do we allow the command to use different flywayUrls via different sbt commands (any other way is good, too)?
Thank you!
You should use config for this.
Example .sbt file contents:
// Set up your configs.
lazy val prodConfig = config("prod")
lazy val devConfig = config("dev")
// Set up any configuration that's common between dev and prod.
val commonFlyway = Seq(
  // For the sake of example, a couple of shared settings.
  flywayUser := "pg_admin",
  flywayLocations := Seq("filesystem:migrations")
)
// Set up prod and dev.
inConfig(prodConfig)(flywayBaseSettings(prodConfig) ++ commonFlyway)
flywayUrl.in(prodConfig) := "jdbc:etc:proddb.somecompany.com"
// Or however you want to load your production password.
flywayPassword.in(prodConfig) := sys.env.getOrElse("PROD_PASSWD", "(unset)")
inConfig(devConfig)(flywayBaseSettings(devConfig) ++ commonFlyway)
flywayUrl.in(devConfig) := "jdbc:etc:devdb.somecompany.com"
flywayPassword.in(devConfig) := "development_passwd"
Now you can run prod:flywayMigrate and dev:flywayMigrate to migrate production and development, respectively.
See the Flyway docs page for other examples.

Why does the NaCl SDK contain so many 0 byte files?

I'm a newbie to NaCl, and I found that there are many 0 byte files in the directory (nacl_sdk/pepper_38/toolchain/win_*/bin).
When I change the project platform to NaCl64 and compile (hello_nacl_cpp), an error comes out:
(error MSB6006: "D:\nacl_sdk\pepper_38\toolchain\win_x86_newlib\bin\x86_64-nacl-gcc.exe" exited with code -1)
But I can debug the example "hello_world_gles" with the PPAPI platform, so I'm not sure whether the environment is OK.
Can anyone tell me something? Thanks!
Answering my own question.
As @binji says, we should use cygtar.py (which is in the directory sdk_tools) to extract the file.
Here we go:
Open cygtar.py with your text editor; you will find a class named CygTar, which is the real worker.
Move down and insert a snippet of code below the Main function.
def MyLogic():
    os.chdir('D:\\nacl_sdk\\sdk')
    # tar = CygTar('naclports.tar.bz2', 'r', True)  # here you must use a Linux-style file path
    tar = CygTar('naclsdk_win.tar.bz2', 'r', True)
    tar.Extract()
Then replace sys.exit(Main(sys.argv)) with sys.exit(MyLogic()) at the end of the file. That's all.
Note: if you have learned Python, you will know that code indentation is very important in Python, so be careful.
And the end of the file should look something like this:
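This is only a sketch of the tail of cygtar.py after the edit; the exact if __name__ == '__main__' guard may differ slightly in your copy of the file.

def MyLogic():
    os.chdir('D:\\nacl_sdk\\sdk')
    # tar = CygTar('naclports.tar.bz2', 'r', True)  # here you must use a Linux-style file path
    tar = CygTar('naclsdk_win.tar.bz2', 'r', True)
    tar.Extract()

if __name__ == '__main__':
    sys.exit(MyLogic())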
