ProTop and lrgExtent alerts - OpenEdge

Apparently, at every start of the ProTop agent there are massive fixed-extent alerts, even though a new variable next extent has already been created. Is this normal behavior? In my opinion, these are false positives.
Examples:
site.resource lrgExtent 807.2 > 500
site.resource lrgExtent 807.2 > 700
site.resource lrgExtent 807.2 > 800

If you have large extents then, yes, ProTop is going to alert on those at startup. The “large extent” alert is not related to such an extent being the last one nor does it matter that you have added a variable extent to your structure. The point is that it is large and therefore a potential problem.
(Possible problems include things like being more painful to manipulate if you ever have to dbrpr the database or perform “extent surgery” on a damaged db.)
The alert that you are asking about only says that there is at least one large extent. It doesn’t fire for every such extent so “massive alerts” don’t happen. Just one.
If you still don't want the alert, you can comment it out in alert.cfg. (You could also increase the threshold to avoid the alert.) To avoid losing your changes to alert.cfg on updates, create a localized version by adding your customer id to the name, e.g. "alert.xyzzy.cfg" (your customer id is in etc/custid.cfg).

How to auto restrict the view in rpivottable to be data protection compliant

I am starting a customer lifetime project at work and want to share how the data looks with the business, as I want to be able to identify the important variables with them. I plan to do this using the excellent rpivottable package, launching a shiny app to see where there are basic differences between groups so I can select my features.
This would mean taking my customer base of 4 million customers and slicing and dicing them in a number of ways.
However, under GDPR we need to ensure no group is shown that has fewer than 7 customers in it. Therefore I need some kind of background calculation to ensure that groups of fewer than 7 customers are never shown.
If I think logically about this, the only way I could see it working would be to make a change to the pivot table, have some form of submit button so that the size of the groups could be calculated, and then apply a filter (which needs to be hidden from the user so it cannot be switched off).
I know I should provide code, but I do not know where to start here. Has anyone had similar issues and has a potential solution to all or part of the problem?
Has anyone built a hidden filter into their rpivottable?
Has anyone been able to restrict their output to only show 90% of their data?
Thanks,
J
To be absolutely sure, you would need to load in a data frame that looks like "dim, dim, dim, count" where count is always at least 7. Basically just a bit of preprocessing on your input data. Unfortunately, this means that you will be restricted to a small number of coarse dimensions, or else you will end up filtering out everything.
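Purely for illustration, here is a minimal sketch of that preprocessing step written in Python/pandas (the question is about R/rpivottable, so treat this as pseudocode for the equivalent dplyr or aggregate call); the column names and data are invented:

    import pandas as pd

    # Hypothetical raw data: one row per customer with a couple of coarse dimensions.
    customers = pd.DataFrame({
        "customer_id": range(1, 21),
        "region":  ["north"] * 12 + ["south"] * 8,
        "segment": ["retail"] * 15 + ["corporate"] * 5,
    })

    # Pre-aggregate to "dim, dim, count" and suppress any group with fewer than
    # 7 customers before the data ever reaches the pivot table.
    grouped = (customers
               .groupby(["region", "segment"], as_index=False)
               .agg(count=("customer_id", "size")))
    safe = grouped[grouped["count"] >= 7]

    print(safe)  # only groups with 7 or more customers survive

The pivot table (or shiny app) would then be fed "safe" rather than the raw customer-level data, so no slicing the user does can surface a group smaller than 7.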

Using Realm with an infinite scrolling table view in Swift

I'm learning about using NSOperation and NSOperationQueue for my networking calls to deliver a more responsive UI in my app's table view.
The result of each networking operation gets stored in the Realm and displayed in the table view.
This is an infinite-scroll table view, and as the user gets to the end, more data is pulled into the app.
I am wondering what the best design paradigm to use here is, and where the best spot to clear the Realm is. I don't want to bloat the app with useless data. I just want users to still have data if they log back in with no network (airplane mode).
I would also like to know where the best spot to trigger these networking operations is. cellForRowAtIndexPath perhaps? I am not too sure, since I usually just use Alamofire and trigger a network request in viewDidLoad. But these are not cancellable calls.
I've gone through the great tutorials on Ray Wenderlich, but other than the playground examples, I am still not finding a real-world application tutorial. If anyone knows of a good one on this subject, let me know.
thanks
This might be tricky to answer since it all depends on your app, the size/type of data it's displaying and how often you want to perform network fetches. In the end, it will most likely be a compromise between what 'feels good' and how many system resources need to be consumed to make it happen.
In this particular scenario, Realm is being used as a caching mechanism and nothing more, so when to clear it should probably depend on how aggressively you wish to clear it.
If I was building a system like this, I would decide on a set number of the latest items I would always want to have available and save them in Realm. If the user then decided to start scrolling down beyond that limit, more data would be downloaded and appended to the Realm database as they went. Eventually the user will get tired and scroll back to the top (Or they might even just quit the app and restart from the top). At that point, it would be appropriate to trigger an operation to review the size of the Realm cache and remove as many items as necessary to bring it back to the desired size. If they start scrolling down again, then it's appropriate to just re-download that data.
Unlike SQLite, where items are copied into memory, Realm is very good at lazily loading resources mapped from disk, so it's not necessary to worry about the number of Realm items in memory, only about the size of the Realm file on disk, which again depends on how big the data you're downloading is.
As for when to trigger another network operation to request more data, it's probably best to do it in tableView(_:willDisplay:forRowAt:). Depending on how large the data to download is (and how big your table cells are), you should play with it until it feels natural when scrolling at a pretty normal speed. As a starting point, I'd recommend triggering the fetch maybe a whole screen's worth of table cells before hitting the bottom of the scroll view.
Good luck!

XProc - Pausing a pipeline and continuing it when a certain event occurs

I'm fairly new to XProc and XPath, but I've been asked to solve the following problem:
Step 2 receives data via the secondary port from step 1. Step 2 contains a p:for-each, which saves a document into a folder for each element that passes through the for-each. (Part A)
These documents (let's say I receive 6 documents from the for-each) lie in the same directory, get filtered by p:directory-list, and are eventually stored in one single document containing the whole path of every document the for-each created. (Part B)
So far, so good.
The problem is that Part A seems to be too slow: Part B already tries to read the data Part A stores while the directory is still empty. In other words, I have a performance / synchronization problem.
And now comes the question:
Is it possible to let the pipeline wait and to let it continue as soon as a certain event occurs?
That's what I'm imagining:
Part B waits as long as necessary until the directory in which Part A stores the data is no longer empty. I read something about dbxml:breakpoint, but unfortunately I couldn't find more information than the name and a short description of what it seems to do:
Set a breakpoint, optionally based upon a condition, that will cause pipeline operation to pause at the breakpoint, possibly requiring user intervention to continue and/or issuing a message.
It would be awesome if you know more about it and could give an example of how it's used. It would also help if you know a workaround or another way to solve this problem.
UPDATE:
After searching Google for half an eternity, I found SMIL, whose timesheets seem to do the trick. Does anyone have experience with throwing XML / XProc and SMIL together?
Back towards the end of 2009 I proposed the concept of 'Orchestrating XProc with SMIL' http://broadcast.oreilly.com/2009/09/xproc-and-smil-orchestrating-p.html in a blog post on the O'Reilly Network.
However, I'm not sure that this (XProc + Time) is the solution to your problem. It's not entirely clear to me from your description what's happening. Are you implying that you're trying to write something to disk and then read it in a subsequent step? You need to keep stuff in the pipeline in order to ensure you can connect outputs to subsequent inputs.

How to cache complex calculated temporary data

I have an Application that allows people to bet on the result of soccer games.
The score of each single bet (=entity) is calculated by comparing the betted scores of the bet with the actual result of the game (=entity). Bets are placed within betrounds. Betrounds are organisations where groups bet on gamegroups (groups of games, e.g. single matchdays). Single usergroups can have several betrounds.
To summarize the relational model:
UserGroup 1:N BetRounds 1:N Bets N:1 Game
Within each betround I create a resulttable where I show every user with their result points and position.
In order to calculate the position of one user I need to calculate the points of every user within a betround.
These points from the single betrounds are aggregated into groups and within the group there is again a resulttable.
Example
A Usergroup with: 20 users
One Season has 34 matchdays
One matchday has 9 games
In order to calculate the points for this usergroup I would need to calculate the points from 20*34*9 = 6120 bets.
Since this is a lot to calculate, I don't want to do it every time I show the resulttable.
I currently see two options in order to save some calculation time:
Cache
Save interim results (e.g. on the bet entity) in the database
Maybe a mix of both.
1. Cache
If caching is the correct approach, I am not sure at which level to cache and how to invalidate.
There are several options what to cache:
- pointresult of single bets
- pointresults of single users within a betround
- whole result table of a betround (points & position)
- pointresult of single user within usergroup
- whole resulttable of usergroup
I am unsure in what form to cache this data:
- just the integer values for positions and points
- whole entities (e.g. bets)
- temporary, not persistent, entities (e.g. to represent the resulttables)
- the html output of the table
Then, depending on the format, how to cache it:
- HTML views could be cached via reverse proxies
- values / entities probably via Redis / Memcached etc.
In the future we might change to a single-page app where data is only served via a REST API; then caching of HTML output is not an option.
Depending on the caching strategy, the question arises of how to invalidate the cache and optionally warm it, so that the result is never calculated on demand within the application but only recalculated when the cache is invalidated and immediately replaced by the new result.
I have read very often that cache invalidation is evil. I am not sure if this applies to my use case since all points/results/tables etc. only change when my interface updates the result of the games. This is the only time when points change.
2. Save interim results (e.g. on the bet entity) in the database
I am not sure if this scenario is applicable on all levels. I first thought about saving the actual result on a bet instead of always comparing the bet scores with the actual scores. This would make my data model a little bit redundant, and I have increased complexity if a wrong result is fetched by my interface and later the correct one comes in and my points are not recalculated.
On all other levels I would need to create new interim entities to store table results persistently.
3. Mix of both
I am not sure how mixing both would look like and if it makes sense at all, but I thought it might be an option.
Any advice, Input or experience would be highly appreciated.
I only mildly understand betting, so hopefully this helps.
It sounds like you are asking two questions:
When do I calculate results?
How much caching should I use?
To me it sounds like there are very clear events that happen, after which you can successfully calculate your results. Your design should take advantage of this and be event-driven in nature. You should have background processes that can detect when a game is complete. The results of the game should be written, and additional background jobs should be triggered to calculate the results of any bets that depend on that game.
This would also be the point at which any caches that involve that game, results from that game, or results from any bets on that game, should be invalidated and/or refreshed.
How much you should cache should be based on how much you need to cache. Caching should be considered separately from computing results: storing computed results is not caching, it is computing results and persisting them. You should definitely not be calculating results during a page-view request; that work should be done ahead of time, when the corresponding event (a game ending) has triggered the calculation.
Your database should pretty much always represent the latest information you have on everything. You should avoid doing any calculations on-the-fly if possible.
I would get all the events and background stuff working first, then see what kind of performance you get. At that point your app should be doing little more than taking the results and sticking them into a view for each page view. If that part is going too slow, then you should start looking at caching your views/templates/html. As mentioned before, these caches could be invalidated by your background workers when they encounter new results.
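As a rough illustration of that event-driven flow, here is a small self-contained Python sketch; the scoring rule, the flat data layout and all names (score_bet, on_game_finished, the cache keys) are assumptions invented for the example, not taken from your model:

    # In-memory stand-ins for the database and the cache.
    bets = [
        # (bet_id, user, betround, game_id, predicted_home, predicted_away)
        (1, "alice", "round-1", "game-7", 2, 1),
        (2, "bob",   "round-1", "game-7", 0, 0),
    ]
    bet_points = {}  # persisted interim results, keyed by bet_id
    cache = {"table:round-1": "<stale result table>"}

    def score_bet(pred_home, pred_away, home, away):
        # Assumed scoring rule: 3 points for the exact score, 1 for the right tendency.
        if (pred_home, pred_away) == (home, away):
            return 3
        if (pred_home - pred_away) * (home - away) > 0 or (pred_home == pred_away and home == away):
            return 1
        return 0

    def on_game_finished(game_id, home, away):
        # Triggered by a background worker once a game's final result is stored.
        touched_rounds = set()
        for bet_id, user, betround, game, ph, pa in bets:
            if game == game_id:
                bet_points[bet_id] = score_bet(ph, pa, home, away)  # store interim result
                touched_rounds.add(betround)
        # Invalidate only the cached result tables that depend on this game.
        for betround in touched_rounds:
            cache.pop(f"table:{betround}", None)

    on_game_finished("game-7", 2, 1)
    print(bet_points)  # {1: 3, 2: 0}
    print(cache)       # {}  (the stale table was invalidated)

The same pattern extends to the usergroup-level tables, since, as you note, those results only ever change when a game result is updated.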

Preventing Users from Working on the Same Row

I have a web application at work that is similar to a ticket working system. Some users enter new issues. Other workers choose and resolve issues. All of the data is maintained in MS SQL Server 2005.
The users working to resolve issues go to a page where they can view open issues. Because up to twenty people can be looking at this page at the same time, one potential problem I had to address was what happens if someone picks an issue that someone else picked just after their page loaded.
To address this, I did two things. First, the gridview displaying the issues to select uses an AJAX timer to update every second, so once an issue has been selected it disappears within a second at most. Second, if someone selects an issue within that second anyway, they get a message asking them to choose another.
The problem is that the AJAX part of this is sending too many updates (this is what I am assuming) and it is affecting the performance of the page and the database. In addition, the updates are not actually happening every second; I find the timer unreliable when it comes to triggering stored procedures.
There has to be a better way, but I can't seem to find one. Does anyone have experience with a situation like this or have suggestions to keep multiple users from selecting the same record to maintain? I really do not want to disable the AJAX part entirely because I feel the message alone would make the application frustrating to use.
Thanks,
Put a lock timestamp field on the row in the database. Write a stored proc that returns true or false depending on whether the expiration timestamp is older than a specific time. Set the sessions on your web app to expire within the same period, a minute or two. When a user selects a row, they hit the stored proc, which helps the app decide whether it should let the user modify it.
Hope that makes sense....
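Purely as an illustration of that lock-timestamp idea, here is a small runnable sketch; it uses Python with sqlite3 only so that it runs anywhere, whereas in the setup described above it would be a SQL Server stored procedure, and the table and column names are invented:

    import sqlite3, time

    LEASE_SECONDS = 120  # keep roughly in line with the web session timeout

    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE issues (
        id INTEGER PRIMARY KEY,
        locked_by TEXT,
        locked_at REAL)""")
    conn.execute("INSERT INTO issues (id) VALUES (1)")

    def try_lock(issue_id, user, now=None):
        # Returns True if this user got the lock, False if someone else still holds it.
        now = now or time.time()
        cur = conn.execute(
            """UPDATE issues
                  SET locked_by = ?, locked_at = ?
                WHERE id = ?
                  AND (locked_by IS NULL OR locked_at < ?)""",
            (user, now, issue_id, now - LEASE_SECONDS))
        conn.commit()
        return cur.rowcount == 1

    print(try_lock(1, "alice"))                   # True:  lock acquired
    print(try_lock(1, "bob"))                     # False: lock still held by alice
    print(try_lock(1, "bob", time.time() + 300))  # True:  alice's lock has expired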
Two things can help mitigate your problem.
First, after-selection notification that the case has been taken is needed regardless of your ajax update time frame. Even checking every second doesn't mean two people cannot click the same case at what they perceive to be the same time. In such cases, one of the users needs to be notified that their selection is invalid even though it appeared valid when selected. This notification doesn't need to be elaborate; keeping a light, helpful tone can improve user perception even in the light of disappointment. And if you identify the user who selected that record already, that will not only help your users coordinate in future but also divert attention from your program to the user who snaked the juicy case. (indeed, management may like giving your users the occasional collision as it will motivate them to select cases faster)
Second, a small tweak to how you display your cases can reduce selection collisions. Adding a random element to display order and/or filtering out every other case on display will help your users select different cases naturally. Human pattern recognition and task selection aren't really random, so small changes to presentation can equal big changes to selection behavior. Reductions in collision chance keep your collision notifications rare (and thus less frustrating to your users). This is even better if your users can be separated into classifications that can help determine useful case ordering/filtering.
Okay, a third thing that will help you over time is if you keep a log of when collisions occur (with helpful meta data about the collision—like who was involved and selection timing). Armed with solid collision data, you can find what works and what doesn't. Over time, you can hone your application to your actual use cases as well as identify potential problems early. Nothing reassures your users more than being on top of a problem (and able to explain your plans to solve it) before they're even aware it exists.
With these mitigating patterns, you'll probably find you can safely reduce your ajax query timeframe without affecting user experience. And with useful logging, you'll have the assurance that any tweaks you put in place are actually working (or not—which is maybe even more useful to know).
I did something similar where once a user opened a ticket (row) it assigned that ticket to that user and set a value on that record, like an FK to that particular user, so if anyone else tried to open that ticket (row) it would let them know it had already been assigned to someone else.
If possible, limit the system so that users just get the next open issue off the work queue, as opposed to having them be able to choose from all open issues.
If that isn't possible, I suppose you could check upon the choosing of an issue to see if it is still available. If it's not available, then make it disappear after the user clicks on it. This way you are only requesting when they actually click on something as opposed to constant polling of the data.
Have you tried increasing the time between refreshes? I would expect that once every 30 seconds would be sufficient. 40 requests/minute is a lot less load than 1200/minute. Your users may not even notice the difference.
If they do, how about providing a refresh button on the page so the users can manually refresh the list just prior to selecting an item, to avoid the annoying message if they so choose.
I'm failing to see the issue, especially after you mentioned you are already flagging tickets as in progress/being maintained and have a timestamp/version on the item.
Isn't the following enough:
1. The user browses the tickets and sees a list of available tickets, i.e. this excludes ones that are marked in the db as in progress. If you want the users to also see tickets in progress, you indicate that clearly in the ticket status and disable the option to take them.
2. The user either flags a ticket as in progress explicitly, or implicitly by opening the ticket (depends on the user experience / how it's presented to the users).
3. The user explicitly moves the ticket to a different status, i.e. completed, invalid, awaiting feedback, etc.
When the items are retrieved at 1, you include a timestamp/version. When 2 happens, you use an optimistic concurrency approach to make sure that if two people try to take the ticket at the same time, only the first one will be successful.
What will happen is that for the second person, the update ... where ... timestamp = #timestamp will not find any records to update and you will report back that the ticket was already taken.
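To make that concrete, here is a minimal runnable sketch of such an optimistic "take the ticket" update; Python and sqlite3 are used only so the example runs as-is (on SQL Server 2005 the same idea maps to a rowversion/timestamp column checked inside a stored procedure), and the table and column names are invented:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE tickets (
        id INTEGER PRIMARY KEY,
        assigned_to TEXT,
        version INTEGER NOT NULL DEFAULT 0)""")
    conn.execute("INSERT INTO tickets (id) VALUES (42)")

    def take_ticket(ticket_id, user, version_seen):
        # Assign the ticket only if nobody changed it since we displayed it.
        cur = conn.execute(
            """UPDATE tickets
                  SET assigned_to = ?, version = version + 1
                WHERE id = ? AND assigned_to IS NULL AND version = ?""",
            (user, ticket_id, version_seen))
        conn.commit()
        return cur.rowcount == 1  # 0 rows updated means someone else took it first

    print(take_ticket(42, "alice", 0))  # True:  Alice gets the ticket
    print(take_ticket(42, "bob",   0))  # False: Bob is told it was already taken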
If you want, you can build on top of the above to update the UI as tickets are grabbed. This could be done by just doing a full refresh of the current page of tickets after x time (maybe alerting/prompting the user), or even by retrieving, via ajax, a list of tickets that have changed for the page of tickets being shown. You still have the earlier steps in place, as this modification is just a convenience for the users.
