How do I access the list of generated cuts after presolving the model? - mixed-integer-programming

I would like to access the model information after presolving. Specifically, I would like to see the list of cuts that were added to the model during the presolving phase. Is there a way to access this list of cuts?

If cuts get added to the model during presolving, then they would be added as constraints (since the LP has not been created at that stage). Note that this does not happen very often in presolving.
I would suggest simply looking at the transformed problem and checking which constraints have been added.
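If you happen to be using SCIP through its Python interface PySCIPOpt (an assumption; the question doesn't name the solver), a minimal sketch of that check could look like this, with "instance.mps" standing in for your own problem file:

from pyscipopt import Model

model = Model()
model.readProblem("instance.mps")
print("constraints before presolve:", len(model.getConss()))

model.presolve()  # run presolving only, without solving

# After presolving, getConss() reports the transformed problem's
# constraints; compare counts and names to spot anything presolving added.
print("constraints after presolve:", len(model.getConss()))
for cons in model.getConss():
    print(cons.name)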

Related

Using IndepVarComp instead of metadata

I wanted to modify my problem, so I broke it down into several groups.
None of the groups have solvers attached to them. They are just groups (each composed of a few components) that make it easy for the user to distinguish the various parts of the product (each part being a group).
I am confused about IndepVarComp() and metadata (declared in the initialization).
So far I have always used a single group and a single IndepVarComp() whose outputs were always the design variables.
If I continue this approach I could use metadata and pass large dictionaries, i.e. self.options.declare('aa', types=dict).
But looking at other codes based on OpenMDAO, they seem to use IndepVarComp as if it were metadata (i.e. its values don't change during iterations, and it is used as a component inside that group).
Could you guide me on whether one or the other is the right way to go?
IndepVarComp outputs should be used for any output that could conceivably be a design variable or a parameter that you might wish to change in your run script.
Metadata should be used for anything that is explicitly intended to be constant. It is intended to be set once during instantiation and then not changed after the fact.
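To make that split concrete, here is a minimal OpenMDAO sketch; the group, option, and variable names (Wing, geometry, span, sweep) are made up for illustration:

import openmdao.api as om

class Wing(om.Group):
    def initialize(self):
        # Metadata/options: set once at instantiation, constant afterwards.
        self.options.declare('geometry', types=dict)

    def setup(self):
        geom = self.options['geometry']
        # IndepVarComp: outputs that could plausibly become design variables
        # or be overridden from the run script.
        ivc = om.IndepVarComp()
        ivc.add_output('span', val=geom['span'])
        ivc.add_output('sweep', val=geom['sweep'])
        self.add_subsystem('inputs', ivc, promotes=['*'])

In a run script you could then instantiate Wing(geometry={'span': 30.0, 'sweep': 10.0}) and still declare 'span' as a design variable later, while the geometry dictionary itself stays fixed.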

How to split a complex hierarchical Model into two partial Views?

I'm learning the Model/View paradigm of Qt because it seems very well suited to edit the data structures I have to deal with, such as this one:
Addition
|_QuadraticFunction
| |_intercept=0.2
| |_slope=0.0
| |_quadratic=1.2
|_Multiplication
  |_LinearFunction
  | |_intercept=0.0
  | |_slope=-8.9
  |_Gaussian
    |_center=0.6
    |_sigma=0.4
My data structure is made up of a combination of functions, and each function has its own properties. However, I don't want to display the whole data structure in a single TreeView because it can get too long for complicated structures. Instead, I want to show one view containing only the function names, and another view showing only the properties of the function the user selected in the first view with a mouse click, like this:
(FunctionsView, the first View)
Addition
|_QuadraticFunction
|_Multiplication
  |_LinearFunction   <-- selected
  |_Gaussian
(selectedFunctionView, the second View)
intercept 0.0
slope -8.9
In this example, the user clicked on LinearFunction in the first View, and the second View automatically showed its properties.
My question is: can I hold all my data structure (function names and function properties) under a single model and then have two Views that display only parts of the model like above? If not, do I have to create one model for each partial View, each model indexing different parts of the data structure? Please help, I'm inexperienced with this.
Jose
Yes, you can absolutely keep it all in one model with two different views. You'll probably want to look into QSortFilterProxyModel; you would have one of these for each view. The proxy applies sorting and filtering (and filtering is what you're doing here) to a more complete model. When you select something in the main view, you'll want to emit a signal that's picked up by the other proxy model (or by the other view and passed to its proxy), and then refilter based on the newly selected item. It's actually very easy to use overall. The easiest mistake to make is getting confused over which model index to use, because you'll have model indexes for the proxies and model indexes for the complete model, and sometimes you have to convert between the two. The documentation is pretty clear on where that's necessary, but it helps to be extra aware of it.
Take a look at the documentation and if you have more questions, please ask.
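As a concrete illustration, here is a minimal PyQt5 sketch of the one-model/two-views idea with placeholder data. Note that for the specific case of "show the children of the clicked item" it uses setRootIndex on the second view, which is a simpler alternative to a full filtering proxy:

import sys
from PyQt5 import QtGui, QtWidgets

app = QtWidgets.QApplication(sys.argv)

# One complete model holding both function names and their properties.
model = QtGui.QStandardItemModel()
for name, props in [('LinearFunction', [('intercept', '0.0'), ('slope', '-8.9')]),
                    ('Gaussian', [('center', '0.6'), ('sigma', '0.4')])]:
    func_item = QtGui.QStandardItem(name)
    for key, val in props:
        func_item.appendRow([QtGui.QStandardItem(key), QtGui.QStandardItem(val)])
    model.appendRow(func_item)

functions_view = QtWidgets.QTreeView()     # first view: the function names
functions_view.setModel(model)

properties_view = QtWidgets.QTreeView()    # second view: selected function only
properties_view.setModel(model)

def show_properties(current, _previous):
    # Re-root the second view at the subtree of the selected function.
    properties_view.setRootIndex(current)

functions_view.selectionModel().currentChanged.connect(show_properties)

functions_view.show()
properties_view.show()
sys.exit(app.exec_())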

How to prevent duplicate code while doing microservice or soa ? or how to define the bounded context without duplicating the models?

Can somebody please advise me on how to separate out the models when splitting each service to run on its own? A few of our models currently overlap among services. I went through some of the suggested procedures, which recommend the canonical data model pattern, but I have also read that we should not use the canonical pattern.
One solution would be to keep all the models in a common place, which is what we are doing right now. But that seems problematic for managing the services as one repository per service.
Duplicated models are also fine with me if I can find a good rationale for them.
Vertical slicing and domain decomposition lead to each vertical slice having its own well-defined fields (not entities) that belong together (a bounded context/business function), defining a logical service boundary and decomposing that service boundary into business components and autonomous components (the smallest units of work).
Each component owns the data it modifies and is the only one in the system that can change the state of that data; you can have many copies of the data for readers, but only one logical writer.
Therefore it makes more sense not to share these data models, as they are internal to the component responsible for the data model's state (and encapsulated).
Data readers can use view models, and these are dictated more by the consumer than by the producer; the view/read models reflect the "real"/"current" state of the transactional data (which is private to the data modifier), and read data can be updated by the data processor after any logical state change.
So it makes more sense to publish view/read models for read-only consumers...
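As a plain-Python sketch of that "one logical writer, published read models" idea (all names here are illustrative and not tied to any framework):

from dataclasses import dataclass

@dataclass
class Invoice:
    # Private write model: only the billing component mutates this.
    invoice_id: str
    amount_cents: int
    status: str = "open"

@dataclass(frozen=True)
class InvoiceSummary:
    # Published read model: a consumer-shaped, read-only view.
    invoice_id: str
    amount_cents: int
    status: str

class BillingComponent:
    def __init__(self):
        self._invoices = {}       # encapsulated state, never shared directly
        self.subscribers = []     # read-side consumers receive view models only

    def open_invoice(self, invoice_id, amount_cents):
        self._invoices[invoice_id] = Invoice(invoice_id, amount_cents)

    def mark_paid(self, invoice_id):
        inv = self._invoices[invoice_id]
        inv.status = "paid"       # the single logical writer changes state...
        self._publish(inv)        # ...then publishes an updated read model

    def _publish(self, inv):
        summary = InvoiceSummary(inv.invoice_id, inv.amount_cents, inv.status)
        for notify in self.subscribers:
            notify(summary)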
Check out Udi Dahan's video
Does that make sense?

How to properly implement different view options of the same data set

I'm currently building a model view architecture and came across an issue I can't find information on across the internet.
I have one set of complex data that I want to show to the user in two (or more) different fashions:
full data is shown
only selected (partial) information is shown
The way this data is printed is irrelevant to me, but in case it helps, it's either a table view (basic information) or a column view (full information). Those two classes come from the Qt Model/View framework.
Now I have thought about two options to implement this and wonder which one I should use.
Option 1
I build my data structure,
include it in a custom model
specialize (subclass) the view classes so that they only print what I'm interested in.
Option 2
I build my data structure,
specialize my models to only provide access to relevant data
use standard views to print it on screen.
I would honestly go for option 2, but seeing the number of cases across the internet where option 1 is used, I started to wonder if I'm doing it right. (I never found an example of two models over the same data, whereas multiple views of one model appear to be quite frequent.)
Placing data-filtering logic inside view classes seems wrong to me, but duplicating the models of one data set leads to either duplicated data (which also seems wrong) or shared data (and then the models no longer 'hold' the data).
I also had a look at Qt delegates, but those classes are mostly meant to change the appearance of data. I didn't find a way, using delegates, to ignore the data that is not relevant for one view.
You are completely right in thinking that it's wrong to use views for filtering data. The only reasons for reimplementing a view are to have a different presentation of the same data or special processing of user events.
So there are two ways to filter out data:
1. Create two models which share the data. This is the standard recommended approach: don't keep the data itself in the models.
2. Create one model providing all the data, plus a proxy model inherited from QSortFilterProxyModel to filter out the data.
For the second option you will need to reimplement the filterAcceptsColumn method to filter out columns and filterAcceptsRow to filter out rows.
Then use View-Model to show all the data, or View-Proxy-Model to show only some of it.
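A minimal PyQt5 sketch of the second approach; which columns count as "basic information" is a made-up example:

from PyQt5 import QtCore

class BasicInfoProxy(QtCore.QSortFilterProxyModel):
    """Expose only a subset of the source model's columns."""
    BASIC_COLUMNS = {0, 1}  # hypothetical: name and value only

    def filterAcceptsColumn(self, source_column, source_parent):
        return source_column in self.BASIC_COLUMNS

    def filterAcceptsRow(self, source_row, source_parent):
        return True  # keep every row; this proxy only hides columns

The table view would then use View-Proxy-Model (proxy.setSourceModel(model); table_view.setModel(proxy)), while the column view keeps the full model directly.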

Making Graphite UI data cumulative by default

I'm setting up Graphite and hit a problem with how data is represented on the screen when there aren't enough pixels.
I found this post whose first answer is very close to what I'm looking for:
No, what is probably happening is that you're looking at a graph with more datapoints than pixels, which forces Graphite to aggregate the datapoints. The default aggregation method is averaging, but you can change it to summing by applying the cumulative() function to your metrics.
Is there any way to get this cumulative() behavior by default?
I've modified my storage-aggregation.conf to use 'aggregationMethod = sum', but I believe this is for historical data and not for data that's displayed in the UI.
When I apply cumulative() everything is perfect, I'm just wondering if there's a way to get this behavior by default.
I'm guessing that even though you've modified your storage-aggregation.conf to use 'aggregationMethod = sum', the metrics you've already created have not changed their aggregation method. The rules in storage-aggregation.conf only affect newly created metrics.
To change your existing metrics to be summed instead of averaged, you'll need to use whisper-resize.py. Or you can delete your existing metrics and they'll be recreated with sum.
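For reference, a storage-aggregation.conf rule along these lines makes new metrics sum instead of average (the section name and pattern are illustrative; match them to your own namespace):

[sum_counts]
pattern = ^stats_counts\.
xFilesFactor = 0
aggregationMethod = sum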
Here's an example of what you might need to run:
whisper-resize.py --xFilesFactor=0.0 --aggregationMethod=sum /opt/graphite/storage/whisper/stats_counts/path/to/your/metric.wsp 10s:28d 1m:84d 10m:1y 1h:3y
Make sure to run that as the same user who owns the file, or at least make sure the files have the same ownership when you're done, otherwise they won't be writeable for new data.
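For example, assuming a user named carbon owns your whisper files (check with ls -l; the user name is an assumption), you could run:

sudo -u carbon whisper-resize.py --xFilesFactor=0.0 --aggregationMethod=sum /opt/graphite/storage/whisper/stats_counts/path/to/your/metric.wsp 10s:28d 1m:84d 10m:1y 1h:3y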
Another possibility if you're using statsd is that you're just using metrics under stats instead of stats_counts. From the statsd README:
In the legacy setting rates were recorded under stats.counter_name directly, whereas the absolute count could be found under stats_count.counter_name. With disabling the legacy namespacing those values can be found (with default prefixing) under stats.counters.counter_name.rate and stats.counters.counter_name.count now.
Basically, metrics are aggregated differently under the different namespaces when using statsd, and you want the stuff under stats_counts or stats.counters for things that should be summed.
