Objectives Global to System - scorm

I have a question about the 'Objectives Global to System' attribute of the organization element. What does 'Objectives Global to System' mean, and what is it used for?

Objectives Global to System, which is set to true by default, means that all global objectives are available across all courses taken by the same learner, rather than being scoped to a single course attempt. Say you set the objective satisfied status of "OBJ_1" to true in course A; that value will be available to the same learner when they take course B on the same LMS.

Related

How do I write constraints that are only active during the Construction Heuristics phase?

I would like to have some constraints that are only active during the construction heuristics phase, so I wrote something like this:
fun aConstraintOnlyActiveInCHPhase(constraintFactory: ConstraintFactory): Constraint {
    return constraintFactory.from(MyPlanningEntity::class.java)
        .ifExists(MyPlanningEntity::class.java,
            Joiners.filtering { entity1, entity2 -> entity2.myplanningvariable == null }
        )
        ...
        ...
        .penalize("aConstraintOnlyActiveInCHPhase", HardSoftScore.ONE_HARD)
}
However, this works for all but the last planning entity: when the last planning entity is initialized, there is no other uninitialized planning entity, so this constraint will not be active.
How do I write constraints that are active for all planning entities during the construction heuristics phase?
Furthermore, how do I write constraints that are active in different phases during solving?
The short answer is that you do not. The score represents the measure of quality of your solution, as it pertains to a particular problem. The problem you are solving is the same in every solver phase.
If you change your constraints, you are changing the optimization problem, and therefore might as well run a new solver with a new configuration. Whatever solution you got until that point may as well be thrown out of the window, because it was optimized for different criteria which are no longer valid.
That said, the constraint above will do what you want if you start using forEachIncludingNullVars(...) instead. This will include uninitialized entities, helping you avoid the ifExists(...) hack.
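For example, here is a minimal sketch of what that could look like, assuming OptaPlanner 8.x (where from(...) is superseded by forEach(...)/forEachIncludingNullVars(...)) and the same hypothetical MyPlanningEntity class as above:

import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore
import org.optaplanner.core.api.score.stream.Constraint
import org.optaplanner.core.api.score.stream.ConstraintFactory

fun penalizeUninitializedEntity(constraintFactory: ConstraintFactory): Constraint {
    // forEachIncludingNullVars also selects entities whose planning variables are
    // still null, so every uninitialized entity is penalized directly, without
    // the ifExists(...) workaround.
    return constraintFactory.forEachIncludingNullVars(MyPlanningEntity::class.java)
        .filter { entity -> entity.myplanningvariable == null }
        .penalize("penalizeUninitializedEntity", HardSoftScore.ONE_HARD)
}

Once every entity is initialized the filter matches nothing, so the penalty naturally disappears by the end of the Construction Heuristics phase without changing the problem definition.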

How is the seed chosen if not set by the user?

For the purpose of reproducibility, one has to choose a seed. In R, we can use set.seed().
My question is, when the seed is not set explicitly, how does the computer choose the seed?
Why is there no default seed?
A pseudo-random number generator (PRNG) needs a start value, which you can set with set.seed(). If none is given, it generally takes computer-based information. This could be the time, CPU temperature, or something similar. If you want a more random start value, it is possible to use physical sources, like white noise or nuclear decay, but you generally need an external information source for that kind of randomness.
The documentation mentions that R uses the current time and the process ID:
Initially, there is no seed; a new one is created from the current time and the process ID when one is required. Hence different sessions will give different simulation results, by default. However, the seed might be restored from a previous session if a previously saved workspace is restored.
A default seed is a bad idea, since the random generator would then always produce the same samples of numbers by default. If you always take the same seed, the output is no longer effectively random: you will always get the same numbers. You would just be providing a fixed data sample, which is not the intended output of a PRNG. You could of course turn the default seed off (if there were one), but the primary purpose is to generate a completely random set of data, not a fixed one.
For statistical work this matters for validation and verification reasons, and it becomes even more important in cryptography, where a good PRNG is mandatory.
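To illustrate the idea outside R, here is a minimal sketch in Kotlin, where the explicit constructor seed plays the same role as set.seed() (the function name is made up):

import kotlin.random.Random

fun draw(seed: Long? = null): List<Int> {
    // With no explicit seed, Random.Default is seeded from volatile system state,
    // so every program run produces a different sequence.
    val rng = if (seed != null) Random(seed) else Random.Default
    return List(5) { rng.nextInt(100) }
}

fun main() {
    println(draw(42) == draw(42))  // true: a fixed seed reproduces the sequence
    println(draw())                // varies from run to run
}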

References or Standardization of "Value Updating" in Constraint Satisfaction

Constraint Satisfaction Problems (CSPs) are basically this: you have a set of constraints over variables, and domains of values for those variables. Then, given some configuration of the variables (an assignment of variables to values in their domains), you check to see if the constraints are "satisfied". That is, you check that evaluating all of the constraints returns a Boolean "true".
What I would like to do is sort of the reverse. Instead of this Boolean "testing" of whether the constraints are true, I would like to take the constraints and enforce them on the variables. That is, set the variables to whatever values they need to be in order to satisfy the constraints. An example of this would be, in a game, saying "this box's right side is always to the left of its containing box's right side," or box.right < container.right. Then the constraint solving engine (like Cassowary for the game example) would take the box and set its "right" property to whatever numeric value it resolved to. So instead of the constraint solver giving you a Boolean value ("yes, the variable configuration satisfies the constraints"), it updates the variables' configuration with appropriate values ("you have updated the variables"). I think Cassowary uses the simplex algorithm for solving its constraints.
I am a bit confused because Wikipedia says:
constraint satisfaction is the process of finding a solution to a set of constraints that impose conditions that the variables must satisfy. A solution is therefore a set of values for the variables that satisfies all constraints—that is, a point in the feasible region.
That seems different from the constraint satisfaction problem, of which it says:
An evaluation is consistent if it does not violate any of the constraints.
That's why it seems CSPs are meant to return Boolean values, while in constraint satisfaction you can set the values. The distinction is not quite clear to me.
Anyway, I am looking for general techniques for constraint solving, in the sense of setting variables as in the simplex algorithm. However, I would like to apply it to any situation, not just linear programming. Some standard and simple example constraints are:
All variables are different.
box.right < container.right
The sum of all variables < 10
Variable a goes before variable b in evaluation.
etc.
For the first case, seeing if the constraint is satisfied (Boolean true) is pretty easy: iterate through the pairs of variables, and if any pair is equal, return false; otherwise return true after processing all pairs.
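(For concreteness, a minimal sketch of that pairwise check in Kotlin; the variables are assumed to already hold concrete integer values:)

fun allDifferent(values: List<Int>): Boolean {
    for (i in values.indices) {
        for (j in i + 1 until values.size) {
            if (values[i] == values[j]) return false  // some pair clashes
        }
    }
    return true  // no pair was equal
}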
However, doing the equivalent of setting the variables doesn't seem possible at first glance: iterate through the pairs of variables, and if a pair is equal, perhaps you set the first one to some other value. You might have to do some fixed-point thing, processing some of them more than once. And the way I just chose which variable to change, and what value to give it, seems arbitrary. Maybe instead you need some further (nested) constraints defining how to set the values (e.g. "change a if a > b, otherwise change b"). The possibilities are customizable.
In addition, even simpler cases like box.right < container.right are complicated. You could say at first that if box.right >= container.right, then set box.right = container.right. But maybe you don't actually want that; instead you want some iPhone-like physics "bounce" where it overextends and then bounces back with momentum. So again, the possibilities are large, and you should probably have additional constraints.
So my question is: just as testing the constraints (for a Boolean value) is standardized as the CSP, I am wondering if there are any references or standardizations for setting the values used by the constraints.
The only thing I have seen so far is that Cassowary simplex algorithm example which works well for an array of linear inequalities on real-numbered variables. I would like to see something that can handle the "All variables are different" case, and the other cases listed, as well as the standard CSP example problems like for scheduling, box packing, etc. I am not sure why I haven't encountered more on setting/updating constraint variables instead of the Boolean "yes constraints are satisfied" problem.
The only limits I have are that the constraints work on finite domains.
If it turns out there is no standardization at all and that every different constraint listed requires its own entire field of research, that would be good to know. Then I at least know what the situation is and why I haven't really seen much about it.
CSP is a research field with many publications each year. I suggest you read one of the books on the subject, like Rina Dechter's.
For standardized CSP languages, check MiniZinc on one hand, and XCSP3 on the other.
There are two main approaches to CSP solving: systematic and stochastic (also known as local search). I have worked on three different CSP solvers, one of them stochastic, but I understand systematic solvers better.
There are many different approaches to systematic solvers. It is possible to fill a whole book covering all the possible approaches, so I will explain only the two approaches I believe in most:
(G)AC3, which propagates constraints until all global constraints (hyper-arcs) are consistent.
Reducing the problem to SAT, and letting the SAT solver do the hard work. There is a great algorithm that creates the CNF lazily, on demand, while the solver is already working. In a sense, this is a hybrid SAT/CSP algorithm.
To get the AC3 approach going you need to maintain a domain for each variable. A domain is basically a set of possible assignments.
For example, consider the domains of a and b: D(a)={1,2}, D(b)={0,1} and the constraint a <= b. The algorithm checks one constraint at a time, and when it reaches a <= b, it sees that a=2 is impossible, and also b=0 is impossible, so it removes them from the domains. The new domains are D'(a)={1}, D'(b)={1}.
This process is called domain propagation. Using a queue of "dirty" constraints, or "dirty" variables, the solver knows which constraint to propagate next. When the queue is empty, then all constraints (hyper arcs) are consistent (this is where the name AC3 comes from).
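To make that concrete, here is a rough sketch (in Kotlin, and not taken from any particular solver) of a single propagation step for the a <= b example above, with domains represented as mutable sets:

fun propagateLessEq(domA: MutableSet<Int>, domB: MutableSet<Int>): Boolean {
    val maxB = domB.maxOrNull() ?: return false      // empty domain: nothing to prune
    val changedA = domA.retainAll { it <= maxB }     // a can be at most max(b)
    val minA = domA.minOrNull() ?: return changedA
    val changedB = domB.retainAll { it >= minA }     // b must be at least min(a)
    // The return value tells a queue-based AC3 loop whether anything changed and
    // therefore which constraints become "dirty" and need to be revisited.
    return changedA || changedB
}

fun main() {
    val domA = mutableSetOf(1, 2)
    val domB = mutableSetOf(0, 1)
    propagateLessEq(domA, domB)
    println("D'(a) = $domA, D'(b) = $domB")  // D'(a) = [1], D'(b) = [1]
}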
When all arcs are consistent, the solver picks a free variable (one with more than one value in its domain) and restricts it to a single value. In SAT, this is called a decision. It adds it to the queue and propagates the constraints. If it reaches a conflict (a constraint can't be satisfied), it goes back and undoes an earlier decision.
There are a lot of things going on here:
First, how the domains are represented. Some solvers only hold a pair of bounds for each domain. Others have a set of integers. My solver holds an interval set, or a bit vector.
Then, how does the solver know which constraint to propagate next? Some solvers, such as SAT solvers, Minion, and HaifaCSP, use watches to avoid propagating irrelevant constraints. This has a significant performance impact for clauses.
Then there is the issue of making decisions. Usually, it is good to choose a variable that has a small domain and high connectivity. There are many papers comparing many different strategies. I prefer a dynamic strategy that resembles the VSIDS of SAT solvers. This strategy is auto-tuned according to conflicts.
Making a decision on the value is also important. Many solvers simply take the smallest value in the domain. Sometimes this can be suboptimal if there is a constraint that limits a sum from below. Another option is to randomly choose between the max and min values. I tune it further, and use the last assigned value.
After everything, there is the matter of backtracking. This is a whole can of worms. The problem with simple backtracking is that sometimes the cause of a conflict happened at the first decision, but it is detected only at the 100th. The best thing is to analyze the conflict and realize where its cause lies. SAT solvers have been doing this for decades. But the CSP representation is not as trivial as CNF, so not many solvers can do it efficiently enough.
This is a nontrivial subject that can fill at least two university courses. Just the subject of conflict analysis can take half of a course.

How to avoid detecting uninitialized variables when using the impact analysis of Frama-C

I find that if there is an uninitialized lvalue (a variable X, for example) in the program, Frama-C emits an assertion that X has been initialized, but the assertion then gets the final status invalid. It seems that Frama-C stops the analysis after detecting the invalid final status, so the actual result of the impact analysis (the impacted statements) is only a part of the ideal result. I want Frama-C to proceed with the impact analysis regardless of those uninitialized variables, but I haven't found any related options yet. How do I deal with this problem?
You're invoking undefined behavior, as indicated in Annex J.2 of the ISO C standard: "The value of an object with automatic storage duration is used while it is indeterminate" (note to language lawyers: said annex is informative, and I've been unable to trace that claim back to the normative sections of the standard, at least for C11). The EVA plug-in, which is used internally by the impact analysis, restricts itself to execution paths that have a well-defined meaning according to the standard (the proverbial nasal demons are not part of the abstract domains of EVA). If there are no such paths, abstract execution will indeed stop. The appropriate way to deal with this problem is to ensure the local variables of the program under analysis are properly initialized before being accessed.
Update
I forgot to mention that in the next version (16 - Sulfur), whose beta version is available at https://github.com/Frama-C/Frama-C-snapshot/wiki/downloads/frama-c-Sulfur-20171101-beta.tar.gz, EVA has an option -val-initialized-locals, whose help specifies:
Local variables enter in scope fully initialized. Only useful for the analysis of programs buggy w.r.t. initialization.

Code Review Checklist

Please provide me with some parameters to evaluate code efficiency. So far I have included the following in my code review checklist:
Warnings in the Code (Yes/No)
Code Analysis by Tool Report
Unused Using
Unit Test Cases
Indentation
Null Reference Exception
Naming Convention
Code Reusability
Code Consistency
Comments
Code Readability
Use of Generics
Speed
Disposing of Unmanaged Resources
Exception Handling
Length of Code (Number of Lines) 30-40 lines per method
Are nested for/foreach loops used?
Use of Linq or Lambda
Usage of access specifiers (private, public, protected, internal, protected internal) as per the scope
Usage of interfaces wherever needed to maintain decoupling
Marking of a class as sealed or static or abstract as per its usage and your need.
Use a StringBuilder instead of string if multiple concatenations are required, saving heap memory (see the sketch after this list).
Check whether any unreachable code exists and, if possible, remove it.
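For the StringBuilder item above, a minimal sketch of the difference (shown in Kotlin; the same point applies to C#'s System.Text.StringBuilder):

fun joinWithConcat(items: List<String>): String {
    var result = ""
    for (item in items) {
        result += item + ";"        // each += allocates a new intermediate string
    }
    return result
}

fun joinWithBuilder(items: List<String>): String {
    val sb = StringBuilder()
    for (item in items) {
        sb.append(item).append(';') // appends into one growing buffer instead
    }
    return sb.toString()
}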
I would start by defining "software efficiency". This article gives a hint: https://www.keenesystems.com/blog/defining-efficiency-as-a-software-requirement
According to ISO 25010, efficiency is "resources expended in relation to the accuracy and completeness with which users achieve goals".
Then there is "performance efficiency", meaning "performance relative to the amount of resources used under stated conditions", with criteria such as:
Time behavior
Resource utilization
Capacity
Other standards include ISO/IEC 9126-1, ISO/IEC 25062 and ISO 9241-11.
From https://en.wikipedia.org/wiki/ISO_9241#ISO_9241-11
"System Efficiency: For evaluating system efficiency, the researcher records the time (in seconds) that participants took to complete each task."
Also interesting: which code consumes less power?
Finally: "Productivity (also referred to as efficiency) is the amount of
product produced for an amount of resource. For software, productivity is commonly measured by size (ESLOC) divided by effort hours." see Department of Defense Software Factbook
To sum this up: I think you should update your list and focus on what you really want and need to measure, distinguishing what is generic about the system or software from what are, e.g., language-specific efficiency criteria.
