Slice method of the impact analysis plugin (Frama-C)

The impact analysis plugin has a method slice that slices the given statement according to the impact analysis.
Is it like the slicing plugin, where we create a project, choose the slicing zone, and then add this zone to the request? How could I print the result of this impact analysis slice?

The function Db.Impact.slice actually has no real link to the Impact plugin. It simply computes a slicing request on the given list of statements, without calling Impact anywhere. As you noticed, you can already build this request from the Slicing API, while Db.Impact.Stmt is not customizable.
(This slice function should be internal to Impact. It is used in the GUI when the user has selected the option "Slicing after impact". We will remove it in the next release to lift this ambiguity.)

Related

How to cache complex calculated temporary data

I have an application that allows people to bet on the results of soccer games.
The score of each single bet (= entity) is calculated by comparing the scores predicted in the bet with the actual result of the game (= entity). Bets are placed within Betrounds. Betrounds are organisational units in which groups bet on gamegroups (groups of games, e.g. single matchdays). A single Usergroup can have several Betrounds.
To summarize the relational model:
UserGroup 1:N BetRounds 1:N Bets N:1 Game
Within each betround I create a resulttable where I show every user with their result points and position.
In order to calculate the position of one user I need to calculate the points of every user within that betround.
The points from the single betrounds are aggregated at the usergroup level, and within the group there is again a resulttable.
Example
- A Usergroup with 20 users
- One season has 34 matchdays
- One matchday has 9 games
In order to calculate the points for this usergroup I would need to calculate the points from 20*34*9 = 6120 bets.
Since this is a lot to calculate, I don't want to do it every time I show the resulttable.
I currently see two options to save some calculation time (or maybe a mix of both):
1. Cache
2. Save interim results (e.g. on the bet entity) in the database
1. Cache
If caching is the correct way, I am not sure at which level to cache and how to invalidate.
There are several options for what to cache:
- pointresult of single bets
- pointresults of single users within a betround
- whole result table of a betround (points & position)
- pointresult of single user within usergroup
- whole resulttable of usergroup
I am unsure in what form to cache this data:
- just the integer values for positions and points
- whole entities (e.g. bets)
- temporary, not persistent, entities (e.g. to represent the resulttables)
- the html output of the table
Then, depending on the format, how to cache it:
- HTML views could be cached via reverse proxies
- values / entities probably via Redis / Memcached etc.
In the future we might change to a single-page app where data is only served via a REST API; then caching of HTML output is not an option.
Depending on the caching strategy, the question arises of how to invalidate the cache and optionally warm it, so that the result is never calculated within the application at request time, but only recalculated when the cache is invalidated and then immediately replaced by the new result.
I have read very often that cache invalidation is evil. I am not sure if this applies to my use case, since all points/results/tables etc. only change when my interface updates the results of the games. That is the only time when points change.
2. Save interim results (e.g. on the bet entity) in the database
I am not sure if this scenario is applicable on all levels. I first thought about saving the actual result on a bet instead of always comparing the bet scores with the actual scores. This would make my data model a little bit redundant, and I have increased complexity if a wrong result is fetched by my interface, the correct one comes in later, and my points are not recalculated.
On all other levels I would need to create new interim entities to store table results persistently.
3. Mix of both
I am not sure what mixing both would look like and whether it makes sense at all, but I thought it might be an option.
Any advice, input or experience would be highly appreciated.
I only mildly understand betting, so hopefully this helps.
It sounds like you are asking two questions:
When do I calculate results?
How much caching should I use?
To me it sounds like there are very clear events that happen, after which you can successfully calculate your results. Your design should take advantage of this and be event-driven in nature. You should have background processes that can detect when a game is complete. The results of the game should be written, and additional background jobs should be triggered to calculate the results of any bets that depend on that game.
This would also be the point at which any caches that involve that game, results from that game, or results from any bets on that game, should be invalidated and/or refreshed.
How much you should cache should be based on how much you need to cache. Caching should also be considered separately from computing results: storing a computed result is not caching, it is computing the result and persisting it. You should definitely not be calculating results during a page-view request; that work should be done ahead of time, when the corresponding event (a game ending) has triggered the calculation.
Your database should pretty much always represent the latest information you have on everything. You should avoid doing any calculations on-the-fly if possible.
I would get all the events and background stuff working first, then see what kind of performance you get. At that point your app should be doing little more than taking the results and sticking them into a view for each page view. If that part is going too slow, then you should start looking at caching your views/templates/html. As mentioned before, these caches could be invalidated by your background workers when they encounter new results.
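A minimal sketch of that event-driven flow (not from the answer itself; the entities, the scoring rule and the function names are hypothetical, and plain in-memory dicts stand in for the real database and cache, e.g. Redis):

```python
# Hypothetical sketch: recompute bet points and refresh cached result tables
# only when a "game finished" event arrives.  In-memory structures stand in
# for the real database and cache.

from dataclasses import dataclass

@dataclass
class Bet:
    user: str
    betround: str
    game_id: int
    predicted: tuple  # (home, away) score predicted by the user
    points: int = 0   # interim result stored with the bet

bets = [
    Bet("alice", "round-1", 1, (2, 1)),
    Bet("bob",   "round-1", 1, (0, 0)),
]
result_table_cache = {}  # betround -> sorted list of (user, points)

def score_bet(bet: Bet, actual: tuple) -> int:
    """Toy scoring rule: 3 points for the exact score, 1 for the right tendency."""
    if bet.predicted == actual:
        return 3
    same_tendency = ((bet.predicted[0] - bet.predicted[1] > 0) == (actual[0] - actual[1] > 0)
                     or (bet.predicted[0] == bet.predicted[1] and actual[0] == actual[1]))
    return 1 if same_tendency else 0

def build_result_table(betround: str):
    totals = {}
    for bet in bets:
        if bet.betround == betround:
            totals[bet.user] = totals.get(bet.user, 0) + bet.points
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

def on_game_finished(game_id: int, actual: tuple) -> None:
    """The 'game is complete' event: recompute affected bets, then refresh caches."""
    touched_rounds = set()
    for bet in bets:
        if bet.game_id == game_id:
            bet.points = score_bet(bet, actual)   # store the interim result
            touched_rounds.add(bet.betround)
    for betround in touched_rounds:               # invalidate + warm only affected tables
        result_table_cache[betround] = build_result_table(betround)

if __name__ == "__main__":
    on_game_finished(1, (2, 1))
    print(result_table_cache["round-1"])  # page views only read the prebuilt table
```

The point is that scoring and the cache refresh happen once per game-result event, so a page view only ever reads the prebuilt table.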

Access 2010 Calculated Field - Table Requires More Space Than Static Field

I've started using Access 2010 recently and started testing some of the new features, namely the Calculated Field datatype.
I had hoped that, since it is based on a formula (expression builder), it would remove an amount of data and shrink an ACCDB file, because Access would only hold the formula, not the actual data.
However, my new version of the file seems to be larger than the original which IMHO makes the feature a bit useless.
I've searched the interweb regarding the feature and can only really find people who show how to create one rather than any pros and cons about the feature.
As it stands I'm going to go back to the old method of calculations in a query but before I do I thought I'd ask on StackOverflow just in case anybody has used it.
Access stores the results of calculated fields for each record, so yes, that will increase the size of the database. However, your claim that this "makes the feature a bit useless" misses the point:
The primary advantage of using calculated fields is that the calculation (expression) is defined once, at the table level. Once the calculated field has been defined it can simply be used much like any other field in queries, reports, etc.
Sure, you can "go back to the old method of calculations in a query" if that suits your purposes, but it also means that
You will have to repeat the (same) calculation logic in all of your queries.
If the calculation logic ever changes then you'll have to go back and edit all of those queries.
Every time you run one of those queries it will have to re-do the calculation for every record, instead of simply retrieving the calculated field from the table.
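For comparison only: the same "define the expression once at the table level, store it per row" idea exists in other databases as generated columns. The sketch below uses SQLite (3.31+) through Python rather than Access, and the table and column names are made up; it is meant purely to illustrate the trade-off described above.

```python
# Illustration only: a SQLite generated column as an analogue of an Access
# calculated field.  The expression is defined once in the table; every query
# just reads the stored result instead of repeating the calculation.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE order_lines (
        qty        INTEGER,
        unit_price REAL,
        -- calculation defined once, at the table level, stored per row
        line_total REAL GENERATED ALWAYS AS (qty * unit_price) STORED
    )
""")
con.executemany("INSERT INTO order_lines (qty, unit_price) VALUES (?, ?)",
                [(3, 2.50), (1, 9.99)])

# Queries simply read line_total instead of each repeating qty * unit_price.
for row in con.execute("SELECT qty, unit_price, line_total FROM order_lines"):
    print(row)
```

As in Access, the stored values take up space, but the calculation logic lives in exactly one place.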

Making Graphite UI data cumulative by default

I'm setting up Graphite, and hit a problem with how data is represented on the screen when there's not enough pixels.
I found this post whose first answer is very close to what I'm looking for:
No, what is probably happening is that you're looking at a graph with more datapoints than pixels, which forces Graphite to aggregate the datapoints. The default aggregation method is averaging, but you can change it to summing by applying the cumulative() function to your metrics.
Is there any way to get this cumulative() behavior by default?
I've modified my storage-aggregation.conf to use 'aggregationMethod = sum', but I believe this is for historical data and not for data that's displayed in the UI.
When I apply cumulative() everything is perfect, I'm just wondering if there's a way to get this behavior by default.
I'm guessing that even though you've modified your storage-aggregation.conf to use 'aggregationMethod = sum', the metrics you've already created have not changed their aggregationMethod. The rules in storage-aggregation.conf only affect newly created metrics.
To change your existing metrics to be summed instead of averaged, you'll need to use whisper-resize.py. Or you can delete your existing metrics and they'll be recreated with sum.
Here's an example of what you might need to run:
whisper-resize.py --xFilesFactor=0.0 --aggregationMethod=sum /opt/graphite/storage/whisper/stats_counts/path/to/your/metric.wsp 10s:28d 1m:84d 10m:1y 1h:3y
Make sure to run that as the same user who owns the file, or at least make sure the files have the same ownership when you're done, otherwise they won't be writeable for new data.
Another possibility if you're using statsd is that you're just using metrics under stats instead of stats_counts. From the statsd README:
In the legacy setting rates were recorded under stats.counter_name directly, whereas the absolute count could be found under stats_count.counter_name. With disabling the legacy namespacing those values can be found (with default prefixing) under stats.counters.counter_name.rate and stats.counters.counter_name.count now.
Basically, metrics are aggregated differently under the different namespaces when using statsd, and you want stuff under stats_count or stats.counters for things that should be summed.

ArrayCollection fast filtering

I have the following situation: a screen with 4 charts, each populated with 59 array collections. Each array collection has 2000+ data points.
This construct needs to be filtered (by time), but the problem I'm encountering is that it takes a long time for the filtering to finish (which is to be expected given the number of data points that need to be filtered).
Filtering one chart at a time is not an option, so I wanted to ask what you think would be the best approach (should I use Vector instead?). To generalize the question: what would be the best way to filter large collections in Flex/AS3?
Thanks.
You'll have to squeeze out every bit of performance improvement that's possible and suitable:
- use Vector if possible, and as much as you can. It has (contrary to what www.flextras.com posits) a filter method which accepts a filtering function.
- ArrayCollections are slow. (In general, all Flex native classes are unnecessarily slow.) So if you really HAVE to use ArrayCollections, only use them to present the resulting Vectors.
- if the problem is that the application "freezes", you can look into green threading so you can present the user with a progress bar; that way they at least have a sense of progress: http://blog.generalrelativity.org/actionscript-30/green-threads/
I would suggest filtering large collections on the server. This has several benefits:
- you may be able to minimize network traffic because only the filtered data has to be transferred
- server-side computing can be parallelized and is typically faster, because of more performant hardware and the server's runtime language (e.g. Java)
- service requests are done asynchronously, so your client application is not blocked
Use Vectors wherever possible, and use green threading if you still can't manage. Internally we use a lot of dictionaries to cache computed queries for later lookup. Dictionaries in AS3 are one of the fastest objects around. So we pre-filter in the background and store the various filtered collections in a dictionary. Not sure if it works for your case.
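A quick sketch of that "pre-filter in the background, then look results up in a dictionary" idea, written in Python rather than AS3 for brevity; the data, window bounds and function names are invented for illustration:

```python
# Sketch: cache pre-filtered series keyed by the filter parameters, so the
# charts never filter at display time for windows that were already computed.

points = [(t, t * 0.5) for t in range(2000)]   # (timestamp, value) pairs

# Pre-compute the filtered series for every time window we expect to need;
# the dict plays the role of the AS3 Dictionary cache mentioned above.
windows = [(0, 500), (500, 1000), (1000, 2000)]
filtered_cache = {
    (start, end): [p for p in points if start <= p[0] < end]
    for (start, end) in windows
}

def series_for_window(start: int, end: int):
    """Charts read from the cache; only unseen windows pay the filtering cost."""
    key = (start, end)
    if key not in filtered_cache:
        filtered_cache[key] = [p for p in points if start <= p[0] < end]
    return filtered_cache[key]

print(len(series_for_window(0, 500)))    # cached: no filtering at display time
print(len(series_for_window(250, 750)))  # new window: filtered once, then cached
```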
Ok, so I googled around more for green threading and stumbled upon a project by gskinner (PerformanceTestv2). Testing the generation of data vs rendering time got me the following results:
[MethodTest name='Set gain series test: ' time=1056.0 min=1056 max=1056 deviation=0.000 memory=688] //filtering data source
[MethodTest name='Set gain series test: ' time=24810.0 min=24810 max=24810 deviation=0.000 memory=16488] //filtering + rendering.
Next I looked into how to improve the rendering time of the chart, but did not find much improvement. However I did find a project based on Degrafa: Axiis. This has been ported to Flex 4, even 4.5 (Axiis 4.5). I've integrated charts based on this framework and the results are really great so far.

Which is better: a small relational db query or constant global variable interaction?

I am building a web application where I am using multilingual support.
I am using variables for label text display, so that administrators can change a value in one place and that change is reflected throughout the application.
Please suggest which is better/less time-consuming for displaying label text:
- relational db interaction
- a constant variable
- XML interaction
How could I find/calculate the processing time of the above three?
'Less time consuming' is easy, and completely intuitive: constants will always be faster than retrieving the information from any external source, even another in-memory source, and probably even faster than retrieving it from a variable (which is where any of the other solutions would have to end up putting the data).
But I suspect there is more to it than that. If you need to support the ability to change that data (and even if not), you may also consider using Resource Files, which would enable you to replace all such resources based on language/culture.
But you could test the speed fairly easily using the .NET 4 StopWatch class, or the system tick count (not sure offhand which object that comes from) if you don't have 4.0.
- DB interaction: the rate of db interaction would increase, unless you apply some cache logic.
- Constants: manageability issues.
- XML: parsing time + a high rate of IO, etc.
Create a unit test for each of the three choices.
Load-test them and compare the results.
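As a concrete starting point, a rough comparison of the three choices could look like the sketch below. It is Python rather than the asker's .NET stack, and the label data, table and XML contents are made up; it compares a constant in-memory lookup, a small SQLite query, and an XML parse.

```python
# Rough benchmark sketch (illustrative only): look up one label text via a
# constant dict, a small relational DB, and an XML document, and time each.
import sqlite3
import timeit
import xml.etree.ElementTree as ET

LABELS = {"welcome": "Welcome"}                    # 1. constant in memory

con = sqlite3.connect(":memory:")                  # 2. small relational DB
con.execute("CREATE TABLE labels (key TEXT PRIMARY KEY, text TEXT)")
con.execute("INSERT INTO labels VALUES ('welcome', 'Welcome')")

XML_DOC = "<labels><label key='welcome'>Welcome</label></labels>"   # 3. XML

def from_constant():
    return LABELS["welcome"]

def from_db():
    return con.execute("SELECT text FROM labels WHERE key = 'welcome'").fetchone()[0]

def from_xml():
    # Re-parsing per lookup is the worst case; caching the parsed tree would
    # move this much closer to the constant.
    return ET.fromstring(XML_DOC).find("label").text

for name, fn in [("constant", from_constant), ("db", from_db), ("xml", from_xml)]:
    print(name, timeit.timeit(fn, number=10_000))
```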
