What is PVFULL in a BitBake recipe?

Does anyone know what PVFULL does in BitBake recipes? I have a build that used to work but is now broken, so I'm trying to figure out what it does.

It turns out it's a custom, proprietary variable used alongside PV and PR:
inherit externalgitsrc                    # custom class (presumably sets the EXTERNALSRC_GIT* variables)
PV = "${EXTERNALSRC_GITPKGV}"             # package version
PR = "${EXTERNALSRC_GITPKGR}"             # package revision
PVFULL = "${EXTERNALSRC_GITPKGVFULL}"     # full version string (custom variable, not standard BitBake)

Related

ProgressBar in IJulia printing every new line

I am currently learning Julia and have been practicing in the Jupyter notebook environment. However, the ProgressBars package (similar to tqdm in Python) prints a new line on every update instead of updating in place (picture attached). Is there any way to fix this? Thanks.
UPDATE: Here is the full function that I wrote.
function spike_rate(raw_dat, width)
    N = size(raw_dat)[1]
    domain = collect(1:N)
    spike_rat = zeros(N)
    for i in ProgressBar(1:N)
        dx = i .- domain
        window = gaussian.(dx, width)
        spike_rat[i] = sum(window .* raw_dat) ./ width
    end
    return spike_rat
end
This seems to be a known issue with ProgressBars.jl, unfortunately. It's not clear what changed to make these progress bars not work properly anymore, but the maintainer's comment says that tqdm uses "a custom ipywidget" to make this work for the Python library, and that hasn't been implemented for the Julia package yet.
To expand on #Zitzero's mention of ] up: that calls Pkg.update(), which also prints a progress bar, so the suggestion is to use the mechanism Pkg uses for it. Pkg has an internal module called MiniProgressBars which handles this output.
Edit: Tim Holy's ProgressMeter package seems well-maintained, and it is a much better option than relying on an internal, non-exported Pkg submodule with no docs. So I'd recommend ProgressMeter over the code below.
The ProgressMeter readme mentions a caveat about printing additional information alongside the progress bar in Jupyter, which likely applies to MiniProgressBar as well. So using ProgressMeter, and separating the progress output and other relevant output into different cells, seems like the best option; a minimal sketch follows.
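For reference, here is a minimal ProgressMeter sketch of the same loop (this adapts the spike_rate function from the question; gaussian is the asker's own helper and is assumed to be defined elsewhere):
using ProgressMeter

function spike_rate(raw_dat, width)
    N = size(raw_dat, 1)
    domain = collect(1:N)
    spike_rat = zeros(N)
    @showprogress for i in 1:N        # one bar that updates in place
        dx = i .- domain
        window = gaussian.(dx, width)
        spike_rat[i] = sum(window .* raw_dat) / width
    end
    return spike_rat
end
In my experience ProgressMeter refreshes a single line in IJulia rather than printing one line per iteration, which is exactly the behaviour the question asks for.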
The MiniProgressBars version (not recommended):
using Pkg.MiniProgressBars
bar = MiniProgressBar(; indent=2, header = "Progress", color = Base.info_color(),
                      percentage=false, always_reprint=true)
bar.max = 100
# start_progress(stdout, bar)
for i in 1:100
    sleep(0.05) # replace this with your code
    # print_progress_bottom(stdout)
    bar.current = i
    show_progress(stdout, bar)
end
# end_progress(stdout, bar)
This is based on how Pkg uses it, from this file. The commented-out lines (with start_progress, print_progress_bottom, and end_progress) are part of the original usage in Pkg, but it's not clear what they do, and here they just seem to mess up the output - maybe I'm using them wrongly, or maybe Jupyter notebooks only support a subset of the ANSI codes that MiniProgressBars uses.
There is a way; the package manager does this, as far as I know, when you do:
] up
Could you share a little of your code?
For example, where you define what is written to the console.
One guess is that
print("text and progressbar")
instead of
println("text and progressbar")
could help, because println() always appends a newline, while print() stays on the current line (combined with a carriage return "\r" it can overwrite that line).
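As a minimal illustration of that carriage-return idea (not the asker's code; the loop body is a stand-in):
for i in 1:100
    sleep(0.05)                       # stand-in for real work
    print("\rprogress: ", i, "/100")  # "\r" moves the cursor back to the start of the line
end
println()                             # finish with a newline so later output starts cleanly
Whether this updates in place also depends on how the frontend handles "\r"; plain terminals do, and IJulia generally does as well.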

How to include an environment when submitting an AutoML experiment in Azure Machine Learning

I use code like the snippet below to create an AutoMLConfig object and submit an experiment for classification training:
import logging

from azureml.core import Workspace, Experiment
from azureml.train.automl import AutoMLConfig

automl_settings = {
    "n_cross_validations": 2,
    "primary_metric": 'accuracy',
    "enable_early_stopping": True,
    "experiment_timeout_hours": 1.0,
    "max_concurrent_iterations": 4,
    "verbosity": logging.INFO,
}

automl_config = AutoMLConfig(task = 'classification',
                             compute_target = compute_target,
                             training_data = train_data,
                             label_column_name = label,
                             **automl_settings
                             )

ws = Workspace.from_config()
experiment = Experiment(ws, "your-experiment-name")
run = experiment.submit(automl_config, show_output=True)
I want to include my conda YAML file (like below) in my experiment submission.
from azureml.core import Environment
env = Environment.from_conda_specification(name='myenv', file_path='conda_dependencies.yml')
However, I don't see any environment parameter in the AutoMLConfig class documentation (similar to the environment parameter of ScriptRunConfig), nor can I find any example of how to do this.
I notice that after the experiment is submitted, I get a message like this:
Running on remote.
No run_configuration provided, running on aml-compute with default configuration
Is run_configuration used for specifying the environment? If so, how do I provide a run_configuration for my AutoML experiment run?
Thank you.
I figured out how to fix the issues associated with the SDK 1.19.0 upgrade in the AML environment I use, so there is no need for the workaround I was considering (i.e. passing an SDK 1.18.0 conda environment file to the AutoML experiment run). My original question no longer needs an answer; I'm just adding this note in case someone else has the same question later on.
I still don't know why an AutoML experiment run has no option to pass in a conda environment file. It would be nice if a reason were given in the AML documentation.

How to convert a TensorFlow Object Detection API model to TFLite?

I am trying to convert a TensorFlow Object Detection model (ssd-mobilenet-v2-fpnlite, from the TensorFlow 2 Detection Model Zoo) to TFLite. First, I train the model using model_main_tf2.py, and then I use export_tflite_graph_tf2.py to export a saved model (.pb). However, when it comes to converting the .pb file to .tflite it throws this error:
OSError: SavedModel file does not exist at: /content/gdrive/My Drive/models/research/object_detection/fine_tuned_model/saved_model/saved_model.pb/{saved_model.pbtxt|saved_model.pb}
To convert the .pb file I used:
import os
import tensorflow as tf

SAVED_MODEL_PATH = os.path.join(os.getcwd(),'object_detection', 'fine_tuned_model', 'saved_model', 'saved_model.pb')
# SAVED_MODEL_PATH: '/content/gdrive/My Drive/models/research/object_detection/exported_model/saved_model/saved_model.pb'
converter = tf.lite.TFLiteConverter.from_saved_model(SAVED_MODEL_PATH)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.experimental_new_converter = True
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()
open("detect.tflite", "wb").write(tflite_model)
or "tflite_convert" from command line, but with the same error. I also tried to run it with the latest tf-nightly version as it suggests here, but the outcome is the same. I tried to pass the path with various ways, it seems like the .pd is not well written (not the right file). Is there a way to manage to convert the model to tflite so as to implement it to android? Thank you!
Your saved_model path should be "/content/gdrive/My Drive/models/research/object_detection/fine_tuned_model/saved_model/". It should point to the folder, not to a file inside that folder.
For a quick test, try typing in a terminal:
tflite_convert \
    --saved_model_dir="path to saved_folder" \
    --output_file="path to tflite file you want to save"
I don't have enough reputation to just comment, but the problem here seems to be your SAVED_MODEL_PATH.
You could try to hardcode the path and remove the .pb file from it. I don't remember exactly what the trick is here, but it's definitely due to the path.

parabolic_squaremesh not defined in DifferentialEquations

I'm trying to reproduce an example from a tutorial, and I get stuck because the mesh constructor is not defined:
using DiffEqBase
using DiffEqPDEBase
f(t,x,u) = ones(size(x,1)) - .5u
u0_func(x) = zeros(size(x,1))
tspan = (0.0,1.0)
dx = 1//2^(3)
dt = 1//2^(7)
# THE FOLLOWING LINE IS FAILING
mesh = parabolic_squaremesh([0 1 0 1],dx,dt,tspan,:neumann)
u0 = u0_func(mesh.node)
prob = HeatProblem(u0,f,mesh)
sol = solve(prob,FEMDiffEqHeatImplicitEuler())
The code fails at the line where I try to create the mesh with the error message:
UndefVarError: parabolic_squaremesh not defined
top-level scope at test.jl:22
All packages are installed without errors. However, I was not able to install the package behind
using FiniteElementDiffEq, which seems to be deprecated.
This functionality has not existed in many years. You presumably found it because a Google search turned up the documentation for DifferentialEquations.jl v3.0, while v6.0 is the first version to support Julia 1.0. There's not much else to say other than that you won't make it work on a modern version of Julia; if you want to use it, you'd have to drop back to something like Julia v0.6.

Editing Fortran code referenced from R

I would like to be able to edit the Fortran code that is referenced in the fGarch package.
More specifically, I would like to extend the conditional distributions that can be used by fGarch::garchFit, i.e. to include the stable distribution and the generalised hyperbolic distribution.
Having looked into the garchFit() function, I have delved (deep-ish) into the code: .aparchLLH.internal() is called from garchFit(), and there is a line in it that calls Fortran code.
The specific line that I am referring to is the following bit of code:
fit <- .Fortran("garchllh", N = as.integer(N), Y = as.double(.series$x),
                Z = as.double(.series$z), H = as.double(.series$h),
                NF = as.integer(NF), X = as.double(params), DPARM = as.double(DPARM),
                MDIST = as.integer(MDIST), MYPAR = as.integer(MYPAR),
                F = as.double(0), PACKAGE = "fGarch")
I believe that the Fortran function garchllh is what I need to edit, but I do not know how to go about editing it so that I can introduce new distributions into the garchFit() function.
N.B. I do not have much experience with Fortran code, but I would like to have a look at it to see whether it can be edited and altered to fit my purpose, so any help on the Fortran-editing part would be much appreciated...
As mentioned in the comments, you need to download the source - a good place to start would be install.packages("fGarch", type="source"), to check that everything compiles properly. Then look at the package source - it seems like you would need to make a fairly straightforward adjustment to dist.f, and probably further changes to the various places where MDIST is set - start with grep MDIST *.R in the R directory of the extracted source. After you're done and have tested, you could also talk to the package maintainers - perhaps they would include your additions in the next version :)
