Overall Rollup Process [RB.1.5] - scorm

In the Sequencing Pseudo Code, at line "3.2. Apply the appropriate Objective Rollup Process to the activity" of the "Overall Rollup Process [RB.1.5]", I don't know which Objective Rollup Process I should apply (i.e., Using Measure, Using Rules, or Default Rules).
Please explain this to me.
Thank you

I've been developing a SCORM 2004 sequencing engine, and when I started implementing [RB.1.5] I was confused at the same point. I eventually reached the following conclusions:
The word "appropriate", which confuses you, carries no extra meaning: "Objective Rollup Process" in [RB.1.5] merely references [RB.1.2 b], and the appropriate objective rollup process is selected and applied within [RB.1.2 b] itself.
Lines 1. through 1.2. of [RB.1.2 b] determine whether the default rollup rules should be applied; the rest of the code implements the objective rollup using rules process.

The pseudo code does not define how to select the appropriate Objective Rollup Process in [RB.1.5].
Instead, section 4.6.5. of the SN book states how to do that.
Here is a summary of the section, translated into Ruby pseudo code:
if activity.rolled_up_objective.objective_satisfied_by_measure == true
  apply_objective_rollup_process_using_measure
elsif activity.rollup_rules.any? { |rollup_rule| [:satisfied, :not_satisfied].include?(rollup_rule.action) }
  apply_objective_rollup_process_using_rules
else
  apply_objective_rollup_process_using_default_rules
end
In version 1.1 of the SCORM 2004 4th Edition, both the Objective Rollup Process Using Rules and the Objective Rollup Process Using Default Rules are folded into [RB.1.2 b], so line 3.2. of [RB.1.5],
Apply the appropriate Objective Rollup Process to the activity
should read:
For each objective associated with the activity
    If Objective Contributes to Rollup for the objective is True Then
        Set the rolled-up objective to the objective
        Break For
    End If
End For
If (the rolled-up objective is Defined) And (Objective Satisfied By Measure for the rolled-up objective is True)
    Apply the Objective Rollup Using Measure Process [RB.1.2 a] to the activity
Else
    Apply the Objective Rollup Using Rules Process [RB.1.2 b] to the activity
End

Related

Creation of a 'partial objective' in OpenMDAO

I am creating a program that optimizes a set of coupled subcomponents to minimize their total mass. Currently each component is a group that has a promoted output for its mass; another group at the top level takes each of these masses as inputs, computes the sum, and that sum is used as the objective for the optimizer.
This program is designed to be operated by a user, where the type and number of subcomponents is set at runtime. That is problematic for my statically declared mass-summing group, whose inputs would need to change depending on what components are added at runtime.
I was therefore wondering whether there is a way to declare a 'partial objective', where each of these partial pieces would be summed together into the final objective processed by the ScipyOptimizeDriver. The 'partial objectives', design variables and constraints could simply be added in each subsystem, so that once the subsystem is added to the model they would be ready to fit into the larger optimization.
Another way could be some sort of summing behavior in a group where the inputs to be summed are specified exclusively via a glob pattern. Something along the lines of:
self.add_subsystem('sum', Summer(inputs='mass:*'))
Is there any way to achieve either of these types of functionality in OpenMDAO 3.1.1?
In OpenMDAO V3.1, there is a configure method that will let you accomplish what you want, subject to a few caveats. The first caveat is that in V3.1 you can inspect the I/O of components from within a group's configure, but you cannot inspect the I/O of child groups. This is something we are working to remedy, but as of V3.1 this restriction is present.
Nonetheless, here is some code that accomplishes what I think you were seeking. It's not super clean, but it does achieve the kind of reactive setup that you were going for.
import openmdao.api as om


class Summer(om.ExplicitComponent):

    def setup(self):
        # note: inputs will be added via the configure method of the parent group
        self.add_output('total_mass')
        self.declare_partials('total_mass', wrt='*', val=1)

    def compute(self, inputs, outputs):
        outputs['total_mass'] = 0
        for inp_name in inputs:
            outputs['total_mass'] += inputs[inp_name]


class TotalMass(om.Group):

    def setup(self):
        # Only add the summing comp; the others will be added by users
        self.add_subsystem('sum', Summer())

    def configure(self):
        sum_comp = self.sum
        # NOTE: we need to access some private attributes of the group here,
        # so this is a little fragile, but it works as of OM V3.1
        for subsys in self._subsystems_myproc:
            s_name = subsys.name
            if s_name == 'sum':
                continue
            i_name = f'{s_name}_mass'
            sum_comp.add_input(i_name)
            self.connect(f'{s_name}.mass', f'sum.{i_name}')


if __name__ == "__main__":
    p = om.Problem()
    tm = p.model.add_subsystem('tm', TotalMass())
    tm.add_subsystem('part_1', om.ExecComp('mass=3+x'))
    tm.add_subsystem('part_2', om.ExecComp('mass=5+x'))
    p.setup()
    p.run_model()
    p.model.list_outputs()
We're planning changes that will make more model introspection possible at setup/configure time. Until those changes are implemented, the typical way of achieving this is similar to what you've implemented. Without introspection, you need to give Summer the names of the inputs it should expect (not wildcard-based).
You can give your systems which compute mass some attribute, for instance 'mass_output_name'.
Then, you could iterate through all such systems:
mass_output_systems = [sys_a, sys_b, sys_c]
mass_names = [sys.mass_output_name for sys in mass_output_systems]
And then feed these to your summing subsystem:
self.add_subsystem('sum', Summer(inputs=mass_names))
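As a rough illustration, here is a minimal sketch of such a Summer, assuming it receives the explicit input names through an OpenMDAO option (this variant and its 'inputs' option are hypothetical, not an existing API):

import openmdao.api as om

class Summer(om.ExplicitComponent):

    def initialize(self):
        # hypothetical option carrying the input names collected beforehand
        self.options.declare('inputs', types=list,
                             desc='names of the mass inputs to sum')

    def setup(self):
        self.add_output('total_mass')
        for name in self.options['inputs']:
            self.add_input(name)
        self.declare_partials('total_mass', self.options['inputs'], val=1.0)

    def compute(self, inputs, outputs):
        outputs['total_mass'] = 0.0
        for name in inputs:
            outputs['total_mass'] += inputs[name]

With that, Summer(inputs=mass_names) can be added exactly as in the snippet above.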

Error when computing jacobian vector product

I have a group with coupled disciplines which is nested in a model where all the other components are uncoupled. I have assigned a nonlinear Newton solver and a linear direct solver to the coupled group.
When I try to run the model with the default "RunOnce" solver everything is OK, but as soon as I try to run an optimization I get the following error, raised from linear_block_gs.py:
File "...\openmdao\core\group.py", line 1790, in _apply_linear scope_out, scope_in)
File "...\openmdao\core\explicitcomponent.py", line 339, in _apply_linear
self.compute_jacvec_product(*args)
File "...\Thermal_Cycle.py", line 51, in compute_jacvec_product
d_inputs['T'] = slope * deff_dT / alp_sc
File "...\openmdao\vectors\vector.py", line 363, in setitem
raise KeyError(msg.format(name)) KeyError: 'Variable name "T" not found.'
Below is the N2 diagram of the model. Variable "T" which is mentioned in the error comes from implicit "temp" component and is fed back to "sc" component (file Thermal_Cycle.py in the error msg) as input.
[N2 diagram]
The error disappears when I assign a DirectSolver on top of the whole model. My impression was that "RunOnce" would work as long as the groups with implicit components have appropriate solvers applied to them, as suggested here, and that is done in my case. Why does it not work when trying to compute total derivatives of the model, i.e. why can compute_jacvec_product not find the coupled variable "T"?
The reason I want to use the "RunOnce" solver is that optimization with a DirectSolver on top becomes very slow as my variable vector "T" grows. I suspect it should be much faster with a linear "RunOnce"?
I think this example of the compute_jacvec_product method might be helpful.
The problem is that, depending on the solver configuration or the structure of the model, OpenMDAO may only need some of the partials that you provide in this method. For example, your matrix-free component might have two inputs, but only one is connected, so OpenMDAO does not need the derivative with respect to the unconnected input, and in fact, does not allocate space for it in the d_inputs or d_outputs vectors.
So, to fix the problem, you just need to put an if statement before assigning the value, just like in the example.
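For concreteness, here is a minimal sketch of that guard, using the names from the traceback (the output name 'eff' is a guess, and slope, deff_dT and alp_sc stand for whatever the component actually computes):

def compute_jacvec_product(self, inputs, d_inputs, d_outputs, mode):
    # only touch a variable if the current linear solve allocated it
    if mode == 'fwd':
        if 'T' in d_inputs and 'eff' in d_outputs:
            d_outputs['eff'] += slope * deff_dT / alp_sc * d_inputs['T']
    elif mode == 'rev':
        if 'T' in d_inputs and 'eff' in d_outputs:
            d_inputs['T'] += slope * deff_dT / alp_sc * d_outputs['eff']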
Based on the N2, I think I agree with your strategy of putting the direct solver down around the coupling only. That should work fine. However, it looks like you're implementing a linear operator in your component, based on:
File "...\Thermal_Cycle.py", line 51, in compute_jacvec_product
    d_inputs['T'] = slope * deff_dT / alp_sc
You shouldn't use a direct solver with matrix-free partials. The direct solver computes an inverse, which requires the full assembly of the matrix. The only reason it works at all is that OM has some fall-back functionality to manually assemble the jacobian by passing columns of the identity matrix through the compute_jacvec_product method.
This fallback mechanism is there to make things work, but it's very slow (you end up calling compute_jacvec_product a lot).
The error you're getting, and why it works when you put the direct solver higher up in the model, is probably due to a lack of the necessary if conditions in your compute_jacvec_product implementation.
See the docs on explicit component for some examples, but the key insight is that not every variable will be present when doing a jacvec product; it depends on what kind of solve is being done (i.e. one for Newton vs. one for total derivatives of the whole model).
So those if-checks are needed to determine whether a variable is relevant. This is done because, for expensive codes (e.g. CFD), some of these operations are quite expensive and you don't want to do them unless you need to.
Are your components so big that you can't use the compute_partials function? Have you tried specifying the sparsity in your jacobian? Usually the matrix-free partial-derivative methods are not needed until you start working with really big PDE solvers with 1e6 or more implicit output variables.
Without seeing some code, it's hard to comment in more detail, but in summary:
You shouldn't use compute_jacvec_product in combination with a direct solver. If you really need matrix-free partials, then you need to switch to an iterative linear solver like PETScKrylov.
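For instance (a sketch; coupled stands for your coupled group, and om.ScipyKrylov is the alternative if PETSc is not available):

import openmdao.api as om

# pair matrix-free partials with an iterative linear solver
coupled.nonlinear_solver = om.NewtonSolver(solve_subsystems=False)
coupled.linear_solver = om.PETScKrylov()   # or om.ScipyKrylov()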
If you can post the code for the component in Thermal_Cycle.py that has the compute_jacvec_product, I could give a more detailed recommendation on how to handle the partial derivatives in that case.

How to accumulate gradient across mini-batch and then back-propagation in Chainer?

I am classifying video sequences, and I need two things:
Because of limited GPU memory, I want to accumulate gradients across mini-batches, then average the gradient values, and then back-propagate.
I need to know how to shuffle between mini-batches but not shuffle inside each mini-batch, because I want each video sequence to keep its order.
Question 1:
You can forward and backward each minibatch without calling optimizer.update(); after you have repeated forward & backward for the necessary number of minibatches, you can call optimizer.update() to update based on the accumulated gradients.
If you want to achieve this with the trainer module, I think you need to override StandardUpdater to define your own Updater class that does the above.
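As a rough sketch of that recipe (model, optimizer, minibatches and n_accum are placeholder names; assume model is a Chain such as L.Classifier, optimizer has been set up on it, and every parameter receives a gradient):

model.cleargrads()               # zero the gradients once per accumulation cycle
for x, t in minibatches:         # n_accum small mini-batches
    loss = model(x, t)           # forward
    loss.backward()              # backward: gradients accumulate in the params
for param in model.params():
    param.grad /= n_accum        # average the accumulated gradients
optimizer.update()               # a single update using those gradients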
Question 2:
Are you using the trainer module?
If so, you can define your own iterator to achieve this. See the references below for how to define an iterator class; a minimal sketch follows them.
https://github.com/chainer/chainer/blob/master/examples/ptb/train_ptb.py
http://corochann.com/training-rnn-with-simple-sequence-dataset-1304.html
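For illustration, here is a minimal sketch of the idea (sequences is a hypothetical list in which each element is one ordered mini-batch of frames from a single video):

import numpy as np

def iterate_minibatches(sequences, rng=np.random):
    # shuffle the order of the mini-batches, but never their contents
    for idx in rng.permutation(len(sequences)):
        yield sequences[idx]  # frames within each mini-batch stay in order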

Finite difference between old and new OpenMDAO

So I am converting code from the old OpenMDAO to the new OpenMDAO. All the outputs and the partial gradients have been verified as correct. At first the problem would not optimize at all, and then I realized that the old code had some components that did not provide gradients, so they were automatically finite differenced. So I added fd_options['force_fd'] = True to those components, but it still does not optimize to the right value. I checked the total derivative and it was still not correct. It also takes quite a bit longer per iteration than the old OpenMDAO.
The only way I can get my new code to optimize to the same value as the old OpenMDAO code is to set every component to finite difference, even the components that provide gradients. So I have a few questions about how finite difference works in the old and the new OpenMDAO:
1. When the old OpenMDAO did automatic finite difference, did it do it only on the outputs and inputs needed for the optimization, or did it calculate the entire Jacobian for all the inputs and outputs? Same question for the new OpenMDAO when you turn 'force_fd' to True.
2. Can you provide some parts of the Jacobian of a component and have it finite difference the rest? In the old OpenMDAO, did it finite difference any gradients not provided unless you set missing_deriv_policy = 'assume_zero'?
So, the old OpenMDAO looked for groups of components without derivatives, and bundled them together into a group that could be finite differenced together. New OpenMDAO doesn't do that, so each of those components would be finite differenced separately.
We don't support that yet, and didn't in old OpenMDAO. We do have a story up on our pivotal tracker though, so we will eventually have this feature.
What I suspect might be happening for you is that the finite-difference groupings happened to be better in classic OpenMDAO. Consider one component with one input and 10 outputs connected to a second component with 10 inputs and 1 output. If you finite difference them together, only one execution is required. If you finite difference them individually, you need one execution of component one and 10 executions of component two. This could cause a noticeable or even major performance hit.
Individual FD vs. group FD can also cause accuracy problems if there is an important input whose scaling differs vastly from the other variables, so that the default FD step size of 1.0e-6 is no good. (Note: you can set a step_size when you add a param or output, and it overrides the default for that var.)
Luckily, new OpenMDAO has a way to recreate what you had in old OpenMDAO, but it is not automatic. What you need to do is take a look at your model and figure out which components can be FD'd together, then create a sub-Group and move those components into it. You can set fd_options['force_fd'] to True on the group, and it'll finite difference that group together. So for example, if you have A -> B -> C, with no components in between, and none of them have derivatives, you can move A, B, and C into a new sub-Group with force_fd set to True, as sketched below.
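Here is a minimal sketch of that regrouping, using the OpenMDAO 1.x-era API from the question (CompA, CompB, CompC and their variable names are hypothetical):

from openmdao.api import Group

class FDGroup(Group):
    def __init__(self):
        super(FDGroup, self).__init__()
        # hypothetical components without analytic derivatives
        self.add('A', CompA())
        self.add('B', CompB())
        self.add('C', CompC())
        self.connect('A.y', 'B.x')
        self.connect('B.y', 'C.x')
        # finite difference the whole A -> B -> C chain in one pass
        self.fd_options['force_fd'] = True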
If that doesn't fix things, we may have to look more deeply at your model.

Formula to prioritize tasks based on weight and date

Is there a formula or algorithm which can prioritize items based on weight and a date? For instance, a critical item would always be at the top of the list, while two normal items would be prioritized based on their due date.
Scheduling is one of the most-studied areas of computer science, which is convenient because it gives you a lot of prior art to learn from.
Perhaps the easiest approach is Earliest Deadline First: schedule the task with the earliest deadline and work on it until it blocks, then work on the task with the next-earliest deadline. The downside is that low-priority tasks that take a long time can stall higher-priority tasks.
It might be worthwhile to determine whether your scheduling must be hard, firm, or soft; sometimes it makes more sense to drop some tasks completely and finish nearly everything on time than to finish everything, but half a second too late.
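As a tiny illustration of Earliest Deadline First (the (deadline, name) tuples are made-up task records):

import heapq

def edf_schedule(tasks):
    heap = list(tasks)
    heapq.heapify(heap)               # ordered by deadline, the tuple's first field
    while heap:
        deadline, name = heapq.heappop(heap)
        yield name                    # work this task until it blocks or finishes

list(edf_schedule([(20, 'report'), (5, 'email'), (12, 'review')]))
# -> ['email', 'review', 'report']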
Yes. One way is to define a comparison function that checks priority first, i.e.
// Returns n < 0, 0, or n > 0 if value1 is less than, equal to, or greater than value2
compare(value1, value2) {
    if (value1.priority != value2.priority) {
        return value1.priority - value2.priority;
    }
    return value1.date - value2.date;
}
Alternatively, you can compute a single value from the date and the priority; comparing tasks by this value orders them by priority and then by date:
// Returns a sort key that ranks by priority first, then by date
task.GetValue() {
    return me.GetDateAsIntegerValue() + MAX_DATE_VALUE * me.GetPriority();
}
But just as sarnold mentioned, this is a highly studied area.
A different way to look at this is as a ranking problem. If you take these two values, weight and priority, as inputs, you can create a table of paired comparisons that decomposes items into their inputs (weight and priority); the outputs are relative orderings.
Consider, say, item 42 and item 69, denoted X42 and X69: given their weights and priorities, (W42, P42) and (W69, P69), you'd like to know whether X42 should appear before X69, after it, or at an equal position. If you have a training set, you can tag whether one is preferred to the other.
What we're lacking is a method for comparing these. A very simple one is logistic regression on the differences, i.e. a simple function f((W_A - W_B), (P_A - P_B)), or f((W42 - W69), (P42 - P69)) in this case. If the result is above some threshold, then A is preferred to B; otherwise B is preferred to A. You can use this to sort the results.
As usual, most of the results online are not very accessible to beginners. Here's a short chapter that may be helpful in understanding the logistic regression. However, if you'd like to address such matters in more depth, the statistics StackExchange site would be better.
You'll have to decide: (1) if what you're looking at can be decomposed into an additive function of the weight and priority, and, if so, (2) the loss function or objective function that you need to minimize, so that you can get the optimal parameters for this additive function. An ordinal logistic model is one choice, ordinal probit another, and there are tons of other options. If you don't use an additive function (i.e. a linear combination), you'll have a challenging range of possibilities to consider, so it's best to start with something simple.
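As a toy sketch of the pairwise approach (the training pairs are made up, and scikit-learn's LogisticRegression is just one convenient way to fit f):

import numpy as np
from sklearn.linear_model import LogisticRegression

# each row is (W_A - W_B, P_A - P_B); y = 1 means A was preferred to B
X = np.array([[1.0, 2.0], [-0.5, 1.0], [2.0, -1.0], [-1.0, -2.0]])
y = np.array([1, 1, 0, 0])
clf = LogisticRegression().fit(X, y)

def prefer_a(a, b):
    # True if the fitted model ranks task a ahead of task b
    diff = [[a.weight - b.weight, a.priority - b.priority]]
    return clf.predict_proba(diff)[0, 1] > 0.5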
You can separate the tasks by rating their impact from 1-10 (10 being highest) and the output needed from 1-10 (also 10 being hardest).
You add the two numbers together and divide by two. The result is the priority ranking of your task from 1-10 (10 being most important).
Example:
Check emails: impact 2, output 1 = (2 + 1) / 2 = 1.5
Call potential customer: impact 10, output 2 = (10 + 2) / 2 = 6
In this example, calling the customer is placed at a higher priority than checking emails.
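The same scheme as code, for completeness:

def priority_rank(impact, output):
    # average of impact (1-10) and output needed (1-10)
    return (impact + output) / 2

priority_rank(2, 1)    # check emails            -> 1.5
priority_rank(10, 2)   # call potential customer -> 6.0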
