How to request additional data (minidump) with WER?

I'm new to WER (actually, just registered a few days ago).
I've managed to establish an account and map my test application to it.
However, when I trigger an unhandled exception, I see that no additional files are collected (there is a proper report in the Solutions center with BucketID <> 8; it hasn't arrived on Winqual yet, but at least it was sent).
I expected files like Version.txt, AppCompat.txt and minidump.mdmp to be collected. I also tried to add a file using the WerRegisterFile function. No errors, but no files are sent either.
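For reference, WerRegisterFile registration looks roughly like this (a minimal C# P/Invoke sketch; the file path is illustrative and the constants are from werapi.h):

using System;
using System.Runtime.InteropServices;

class WerExtraFiles
{
    enum WER_REGISTER_FILE_TYPE
    {
        WerRegFileTypeUserDocument = 1,
        WerRegFileTypeOther = 2
    }

    const int WER_FILE_ANONYMOUS_DATA = 2;

    [DllImport("wer.dll", CharSet = CharSet.Unicode)]
    static extern int WerRegisterFile(string pwzFile,
        WER_REGISTER_FILE_TYPE regFileType, int dwFlags);

    // Called once at startup so the file is attached to any report
    // WER creates for this process.
    public static void RegisterLog()
    {
        int hr = WerRegisterFile(@"C:\MyApp\app.log",
            WER_REGISTER_FILE_TYPE.WerRegFileTypeOther,
            WER_FILE_ANONYMOUS_DATA);
        if (hr != 0)
            Marshal.ThrowExceptionForHR(hr);
    }
}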
I've read this thread - it says that additional data is only collected if the server asks for it. So my question is (maybe it sounds stupid, but...): how can I request additional data? I've scanned everything in my profile, but the only useful options relate to product/file mappings.
I feel this should be obvious (it's not discussed in the help), but... I'm stuck :( A screenshot indicating where to look would be nice :)

It's strange that nobody has answered this question, as the answer is very simple :)
You cannot collect any data for new errors. The error report has to arrive on Winqual first. Then you can view the report (it contains only short info) and ask the server to collect files, dumps and additional info. Only then will additional data be collected.

Related

GameLift matchmaking times out after match found

Hoping to get some insight into the behavior I am seeing while trying to use GameLift matchmaking.
I have my configuration set up so that it does not require player acceptance:
GameLiftMatchmakingConfiguration:
  Type: AWS::GameLift::MatchmakingConfiguration
  Properties:
    AcceptanceRequired: false
    ...
When I go to the GameLift console and open the configuration, I can confirm it is correctly set to not require acceptance.
This is where I am confused: matchmaking now places 2 users into a potential match and I get the PotentialMatchCreated event from GameLift. Then, 30 seconds later, I get more events stating that these placements timed out and the tickets are searching again.
The configuration documentation states that AcceptanceTimeoutSeconds is only required if AcceptanceRequired is true, which it is not for me.
The AcceptMatch documentation states that you only call it when tickets require acceptance: "When FlexMatch builds a match, all the matchmaking tickets involved in the proposed match are placed into status REQUIRES_ACCEPTANCE."
Mine are not; they are in PotentialMatchCreated.
So my question is: what do I have to do to confirm a placement once GameLift places 2 users into a match? I am a bit surprised, because I thought the fact that the match doesn't have to be accepted would mean it is automatically accepted.
Also, there's very little documentation I could find about what to do in this situation. Given that this service is not as well known as others I expected that, but I'm really hoping someone can help me with what to do next.
Any insight or help is greatly appreciated.
UPDATE1:
Additional information: I do not need to use GameLift fleets or builds at all. We have a browser game we are building and just want to use the matchmaking feature. So we don't have any game servers or anything like that; the game is played on our website, and our APIs/websockets run the matchmaking on the server and notify the client when a match has been found, with all the subsequent details.
UPDATE2:
To confirm my suspicions, I decided to actually try the accept match endpoint and see what happens. Just as the documentation states, you can only accept a match if it requires acceptance: I get an error stating that I cannot accept a match that is not in REQUIRES_ACCEPTANCE state. I initially guessed this was a bug on AWS's side, since I don't see any other endpoint I can hit for tickets in state PotentialMatchCreated.
Figured out the issue. It has to do with the FlexMatchMode on the GameLiftMatchmakingConfiguration. For my use case, needing only matchmaking, STANDALONE is the correct setting, because we aren't having GameLift actually create game servers/sessions for us. Mine was set to WITH_QUEUE, which I believe is why I was having issues. It seems to be working correctly now.
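For anyone hitting the same thing: the same fix can be applied to an existing configuration from code. A sketch using the AWS SDK for .NET (the configuration name is illustrative; in the CloudFormation template above the corresponding property is FlexMatchMode):

using System.Threading.Tasks;
using Amazon.GameLift;
using Amazon.GameLift.Model;

class FixMatchmakingMode
{
    static async Task Main()
    {
        var client = new AmazonGameLiftClient();

        // STANDALONE tells FlexMatch to stop once the match is formed,
        // with no game session placement through a queue.
        await client.UpdateMatchmakingConfigurationAsync(
            new UpdateMatchmakingConfigurationRequest
            {
                Name = "MyMatchmakingConfig",          // illustrative
                FlexMatchMode = FlexMatchMode.STANDALONE
            });
    }
}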

cmi.core.score.raw is wrong after multiple resumes

When I resume a single course multiple times, cmi.core.score.raw is not updated properly.
Suppose there are 5 sections in the course. If the user successfully completes 2 sections and then exits, the score is correct (say 30). When the user resumes the course, it starts at the proper position, but even if the user answers all the remaining questions correctly, the score does not update to 100 (it is saved as 70-80). The result shows as failed, which is wrong, since the user answered every question correctly.
I thought it might be due to the suspend_data size limit imposed by SCORM 1.2, but the course resumes at the correct location every time, so I am confused about what may be causing this behavior.
I also tried the same course in SCORM Cloud, and the same issue persists there.
Are there any settings we need to take care of while creating the SCORM 1.2 package that may have caused this issue?
Has anyone faced this issue before? I googled and couldn't find an appropriate answer. Any help would be appreciated.
Update: I'm attaching the SCORM Cloud launch history image, which clearly shows the score value at launch start and end.
The SCO is typically responsible for doing the "math", since it maintains the cmi.core.score section of the student attempt.
You're right to suspect the suspend data: it may not be giving the SCO the ability to reconstruct its history of correct/incorrect answers. Deeper analysis of the SCO's logic would be required to find out whether it fully supports being put back in the position it left off in.
SCORM 1.2 was a mostly "optional" standard with regard to support of the student attempt. And even though the standard states character limits, enforcement was typically optional and left to the LMS.
So, to evaluate what's going on within the SCO while running it on an LMS, try the bookmarklet at https://cybercussion.com, or see if you can locate the deeper logs on SCORM Cloud, which show the actual data being stored. It's always possible the suspend data is obfuscated, but since you said SCORM 1.2 I doubt it; it may be semi-human-readable.
Failing that, we'd need to dig into the SCO code base to determine how it "puts itself back" after obtaining the suspend data. To do that, I'd search the code base for "cmi.suspend_data" to help locate the relevant logic.
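To make the suspected failure mode concrete: on resume, the SCO has to rebuild its answer history from cmi.suspend_data and re-report the total over all questions, not just the ones answered after the resume. A real SCORM 1.2 SCO does this in JavaScript via LMSGetValue/LMSSetValue; the C# sketch below only illustrates the bookkeeping (the question count and serialization format are made up):

using System;
using System.Linq;

class ScoScoringSketch
{
    const int TotalQuestions = 10;

    // Pretend this came from cmi.suspend_data, e.g. "1,1,1,0" meaning
    // four questions answered before the exit, three of them correct.
    static bool[] LoadHistory(string suspendData)
    {
        return suspendData.Split(',').Select(s => s == "1").ToArray();
    }

    static void Main()
    {
        bool[] beforeExit = LoadHistory("1,1,1,0");
        bool[] afterResume = { true, true, true, true, true, true };

        // The bug described above: scoring only afterResume gives 60.
        // Correct: score over the full history, pre- and post-resume.
        bool[] all = beforeExit.Concat(afterResume).ToArray();
        int raw = 100 * all.Count(a => a) / TotalQuestions;  // 90 here
        Console.WriteLine("cmi.core.score.raw = " + raw);
    }
}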

Filtering Data in ASP.NET Web Services

I've been using this site for quite a while, usually able to sort out my questions by browsing through existing questions and following tags. However, I've recently come across a problem that is rather hard to look up among the great number of questions asked - one I hope some of you might share your opinion on.
As my problem is a bit hard to fit into the single line of a title, I'll give a bit more detail. As the title says, I need to filter, or limit, some of the response data my standard ASP.NET SOAP-based web service returns from various web methods. The web service returns data used by other systems (a data repository, more or less): today the client specifies a few parameters on how the data should be filtered and gets a full set of data back.
Well, easy enough, I thought: just add additional filtering options to the existing web methods that need more filtering applied, make adjustments on the server side, and we're all set to go. Unfortunately, it turned out to be a bit trickier than that.
The problem I'm facing is that the web service runs in a production environment, and it needs to be extended so that additional filters can be applied to existing web methods without affecting the calls other systems already make through their client stubs. This is where I'm a bit troubled, since I can't seem to find a "right solution" for extending the current web service.
Today, the filter is sent as a custom data structure which holds information on how the data should be filtered, but I am not sure whether I can simply add more information to this structure without breaking code at the clients. One of my co-workers suggested extending the web.config on the server side with a section detailing which data should be excluded (filtered out), but I don't find that a viable solution in the long run - and I don't trust customers with such an option, since it is likely to go wrong at some point. So the solution I am looking for is a way to apply a "second filter" to the data the client requests, so that instead of a full set of data only a fraction comes back, implemented such that the filter can be easily modified and current client calls are not affected.
Any suggestions on how I should approach this problem?
Thanks!
Kind regards,
E.
A pretty common practice is to create another instance of the application, or to use part of the URL to signify the version of the endpoint clients are connecting to - perhaps the virtual directory is the date. That way old calls go to the old API and new calls come in on the new API.
http://api.example.com/dostuff
vs
http://api.example.com/6-7-2011/dostuff
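A sketch of what that might look like with an ASMX service (all names are illustrative): the original service keeps serving old client stubs unchanged, while the extended filter is exposed only on the new, versioned endpoint, e.g. deployed under the /6-7-2011/ virtual directory:

using System.Web.Services;

public class DataFilter
{
    public string Region;          // existing filter field
    public string[] ExcludeIds;    // new, optional second filter
}

[WebService(Namespace = "http://api.example.com/")]
public class DataServiceV2 : WebService
{
    // Old clients never see this contract, so adding members to
    // DataFilter here cannot break their generated stubs.
    [WebMethod]
    public string[] GetRecords(DataFilter filter)
    {
        return Repository.Find(filter);
    }
}

static class Repository
{
    public static string[] Find(DataFilter filter)
    {
        // Stub standing in for the real data access.
        return new[] { "record1", "record2" };
    }
}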

Best Practices for Passing Data Between Pages

The Problem
In the stack that we re-use between projects, we are putting a little bit too much data in the session for passing data between pages. This was good in theory because it prevents tampering, replay attacks, and so on, but it creates as many problems as it solves.
Session loss itself is an issue, although it's mostly handled by implementing Session State Server (or by using SQL Server). More importantly, it's tricky to make the back button work correctly, and it's also extra work to create a situation where a user can, say, open the same screen in three tabs to work on different records.
And that's just the tip of the iceberg.
There are workarounds for most of these issues, but as I grind away, all this friction gives me the feeling that passing data between pages using session is the wrong direction.
What I really want to do here is come up with a best practice that my shop can use all the time for passing data between pages, and then, for new apps, replace key parts of our stack that currently rely on Session.
It would also be nice if the final solution did not result in mountains of boilerplate plumbing code.
Proposed Solutions
Session
As mentioned above, leaning heavily on Session seems like a good idea, but it breaks the back button and causes some other problems.
There may be ways to get around all the problems, but it seems like a lot of extra work.
One thing that's very nice about using session is the fact that tampering is just not an issue. Compared to passing everything via the unencrypted QueryString, you end up writing much less guard code.
Cross-Page Posting
In truth I've barely considered this option. I have a problem with how tightly coupled it makes the pages -- if I start doing PreviousPage.FindControl("SomeTextBox"), that seems like a maintenance problem if I ever want to get to this page from another page that maybe does not have a control called SomeTextBox.
It seems limited in other ways as well. Maybe I want to get to the page via a link, for instance.
QueryString
I'm currently leaning towards this strategy, like in the olden days. But I probably want my QueryString to be encrypted to make it harder to tamper with, and I would like to handle the problem of replay attacks as well.
On 4 Guys from Rolla, there's an article about this.
However, it should be possible to create an HttpModule that takes care of all this and removes all the encryption sausage-making from the page. Sure enough, Mads Kristensen has an article where he released one. However, the comments make it sound like it has problems with extremely common scenarios.
Other Options
Of course this is not an exhaustive look at the options, but rather the main options I'm considering. This link contains a more complete list. The ones I didn't mention, such as Cookies and the Cache, are not appropriate for the purpose of passing data between pages.
In Closing...
So, how are you handling the problem of passing data between pages? What hidden gotchas did you have to work around, and are there any pre-existing tools around this that solve them all flawlessly? Do you feel like you've got a solution that you're completely happy with?
Thanks in advance!
Update: Just in case I'm not being clear enough, by 'passing data between pages' I'm talking about, for instance, passing a CustomerID key from a CustomerSearch.aspx page to Customers.aspx, where the Customer will be opened and editing can occur.
First, the problems with which you are dealing relate to handling state in a state-less environment. The struggles you are having are not new and it is probably one of the things that makes web development harder than windows development or the development of an executable.
With respect to web development, you have five choices, as far as I'm aware, for handling user-specific state which can all be used in combination with each other. You will find that no one solution works for everything. Instead, you need to determine when to use each solution:
Query string - Query strings are good for passing pointers to data (e.g. primary key values) or state values. Query strings by themselves should not be assumed to be secure even if encrypted because of replay. In addition, some browsers have a limit on the length of the url. However, query strings have some advantages such as that they can be bookmarked and emailed to people and are inherently stateless if not used with anything else.
Cookies - Cookies are good for storing very tiny amounts of information for a particular user. The problem is that cookies also have a size limitation, beyond which the data is simply truncated, so you have to be careful about putting custom data in a cookie. In addition, users can kill cookies or stop their use (although that would prevent use of standard Session as well). Similar to query strings, cookies are better, IMO, for pointers to data than for the data itself, unless the data is tiny.
Form data - Form data can carry quite a bit of information, at the cost of post times and, in some cases, reload times. ASP.NET's ViewState uses hidden form variables to maintain information. Passing data between pages using something like ViewState has the advantage of working more nicely with the back button, but can easily create ginormous pages which slow down the experience for the user. In general, the ASP.NET model is not built around cross-page posting (although it is possible); instead, it works with posts back to the same page and from there navigating to the next page.
Session - Session is good for information that relates to a process with which the user is progressing or for general settings. You can store quite a bit of information into session at the cost of server memory or load times from the databases. Conceptually, Session works by loading the entire wad of data for the user all at once either from memory or from a state server. That means that if you have a very large set of data you probably do not want to put it into session. Session can create some back button problems which must be weighed against what the user is actually trying to accomplish. In general you will find that the back button can be the bane of the web developer.
Database - The last solution (which again can be used in combination with others) is that you store the information in the database in its appropriate schema with a column that indicates the state of the item. For example, if you were handling the creation of an order, you could store the order in the Order table with a "state" column that determines whether it was a real order or not. You would store the order identifier in the query string or session. The web site would continue to write data into the table to update the various parts and child items until eventually the user is able to declare that they are done and the order's state is marked as being a real order. This can complicate reports and queries in that they all need to differentiate "real" items from ones that are in process.
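A sketch of the "state column" pattern from that last option (SQL Server; the connection string, table and column names are all illustrative):

using System.Data.SqlClient;

class DraftOrders
{
    // Create an in-process order; IsSubmitted = 0 marks it as not yet
    // a "real" order. The returned OrderId is what travels in the
    // query string or session.
    public static int CreateDraft(string connStr, int customerId)
    {
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(
            "INSERT INTO Orders (CustomerId, IsSubmitted) " +
            "OUTPUT INSERTED.OrderId VALUES (@cust, 0)", conn))
        {
            cmd.Parameters.AddWithValue("@cust", customerId);
            conn.Open();
            return (int)cmd.ExecuteScalar();
        }
    }

    // Called when the user declares they are done: flip the state so
    // reports and queries that filter on IsSubmitted treat it as real.
    public static void Submit(string connStr, int orderId)
    {
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(
            "UPDATE Orders SET IsSubmitted = 1 WHERE OrderId = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", orderId);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}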
One of the items mentioned in your later link was Application Cache. I wouldn't consider this to be user-specific since it is application wide. (It can obviously be shoe-horned into being user-specific but I wouldn't recommend that either). I've never played with storing data in the HttpContext outside of passing it to a handler or module but I'd be skeptical that it was any different than the above mentioned solutions.
In general, there is no one solution to rule them all. The best approach is to assume on each page that the user could have navigated to that page from anywhere (as opposed to assuming they got there by using a link on another page). If you do that, back button issues become easier to handle (although still a pain). In my development, I use the first four extensively and on occasion resort to the last solution when the need calls for it.
Alright, so I want to preface my answer with this: Thomas clearly has the most accurate and comprehensive answer so far for people starting fresh. This answer isn't in the same vein at all. My answer comes from a "business developer's" standpoint. As we all know too well, sometimes it's just not feasible to spend money re-writing something that already exists and "works"... at least not all in one shot. Sometimes it's best to implement a solution which lets you migrate to a better alternative over time.
The only thing I'd say Thomas is missing is client-side JavaScript state. Where I work, we've found customers are coming to expect "Web 2.0"-type applications more and more. We've also found these sorts of applications typically result in much higher user satisfaction. With a little practice, and the help of some really great JavaScript libraries like jQuery (we've even started using GWT and found it to be awesome), communicating with JSON-based REST services implemented in WCF can be trivial. This approach also provides a very nice way to start moving towards an SOA-based architecture and clean separation of UI and business logic.
But I digress.
It sounds to me as though you already have an application and you've already stretched the limits of ASP.NET's built-in session state management. So here's my suggestion (assuming you've already tried ASP.NET's out-of-process session management, which scales significantly better than the in-process/on-box session management - and it sounds like you have, because you mentioned it): NCache.
NCache provides you with a "drop-in" replacement for ASP.NET's session management options. It's super easy to implement, and could "band-aid" your application more than well enough to get you through - without any significant immediate investment in refactoring your existing codebase.
You can use the extra time and money to start reducing your technical debt by focusing new development on things with immediate business-value - using a new approach (such as any of the alternatives offered in the other answers, or mine).
Just my thoughts.
Several months later, I thought I would update this question with the technique I ended up going with, since it has worked out so well.
After playing with more involved session state handling (which resulted in a lot of broken back buttons and so on) I ended up rolling my own code to handle encrypted QueryStrings. It's been a huge win -- all of my problem scenarios (back button, multiple tabs open at the same time, lost session state, etc) are solved and the complexity is minimal since the usage is very familiar.
This is still not a magic bullet for everything but I think it's good for about 90% of the scenarios you run into.
Details
I built a class called CorePage that inherits from Page. It has methods called SecureRequest and SecureRedirect.
So you might call:
SecureRedirect(String.Format("Orders.aspx?ClientID={0}&OrderID={1}", ClientID, OrderID))
CorePage parses out the QueryString and encrypts it into a QueryString variable called CoreSecure. So the actual request looks like this:
Orders.aspx?CoreSecure=1IHXaPzUCYrdmWPkkkuThEes%2fIs4l6grKaznFGAeDDI%3d
If available, the currently logged in UserID is added to the encryption key, so replay attacks are not as much of a problem.
From there, you can call:
X = SecureRequest("ClientID")
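A minimal sketch of how such a SecureRedirect/SecureRequest pair can be implemented (AES with a fixed key, shown for illustration only; as described above, the real implementation also mixes the logged-in UserID into the key and would keep secrets in configuration, not source):

using System;
using System.Security.Cryptography;
using System.Text;
using System.Web;

public class CorePage : System.Web.UI.Page
{
    // Illustrative fixed key/IV; not how the real code stores them.
    private static readonly byte[] Key = Convert.FromBase64String("u0zO0dCvZyyYYdzxzCEeeg==");
    private static readonly byte[] IV = Convert.FromBase64String("8uW9bP1dQ2hQ3k5m7n9r1A==");

    // Encrypt everything after the '?' into a single CoreSecure value.
    protected void SecureRedirect(string url)
    {
        int q = url.IndexOf('?');
        string page = q < 0 ? url : url.Substring(0, q);
        string query = q < 0 ? "" : url.Substring(q + 1);
        Response.Redirect(page + "?CoreSecure=" + HttpUtility.UrlEncode(Encrypt(query)));
    }

    // Decrypt CoreSecure and pull out the named parameter.
    protected string SecureRequest(string name)
    {
        string payload = Decrypt(Request.QueryString["CoreSecure"]);
        return HttpUtility.ParseQueryString(payload)[name];
    }

    private static string Encrypt(string plain)
    {
        using (var aes = Aes.Create())
        using (var enc = aes.CreateEncryptor(Key, IV))
        {
            byte[] data = Encoding.UTF8.GetBytes(plain);
            return Convert.ToBase64String(enc.TransformFinalBlock(data, 0, data.Length));
        }
    }

    private static string Decrypt(string cipher)
    {
        using (var aes = Aes.Create())
        using (var dec = aes.CreateDecryptor(Key, IV))
        {
            byte[] data = Convert.FromBase64String(cipher);
            return Encoding.UTF8.GetString(dec.TransformFinalBlock(data, 0, data.Length));
        }
    }
}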
Conclusion
Everything works seamlessly, using familiar syntax.
Over the last several months I've also adapted this code to work with edge cases, such as hyperlinks that trigger a download - sometimes you need to generate a hyperlink on the client that has a secure QueryString. That works really well.
Let me know if you would like to see this code and I will put it up somewhere.
One last thought: it's weird to accept my own answer over some of the very thoughtful posts other people put on here, but this really does seem to be the ultimate answer to my problem. Thanks to everyone who helped get me there.
After going through all the above scenarios and answers, and this link on data passing methods, my final advice would be:
COOKIES for:
ENCRYPT[userId's]
ENCRYPT[productId]
ENCRYPT[xyzIds..]
ENCRYPT[etc..]
DATABASE for:
datasets BY COOKIE ID
datatables BY COOKIE ID
all other large chunks BY COOKIE ID
My advice also depends on the statistics below and the details in this link on data passing methods:
I would never do this. I have never had any issues storing all session data in the database, loading it based on the user's cookie. It's a session as far as anything is concerned, but I maintain control over it. Don't give up control of your session data to your web server...
With a little work, you can support sub sessions, and allow multi-tasking in different tabs/windows.
As a starting point, I find that critical data elements, such as a Customer ID, are best put into the query string for processing. You can easily track/filter bad data coming in on these elements, and it also allows for some integration with e-mail or other related sites/applications.
In a previous application, the only way to view an employee or a request record involving them was to log into the application and search for the employee, or search recent records to find the record in question. This became problematic, and a big time sink, when somebody from a related department needed to do a simple view of records for auditing purposes.
In the rewrite, I made both the employee Ids and request Ids available through basic URLs: "ViewEmployee.aspx?Id=XXX" and "ViewRequest.aspx?Id=XXX". The application was set up to (A) filter out bad Ids and (B) authenticate and authorize the user before allowing them onto these pages. What this allowed the primary application users to do was send simple e-mails to the auditors with a URL in the e-mail. When the auditors were in a big hurry, during their bulk processing time, they were able to simply click down a list of URLs and do the appropriate processing.
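The guard logic on those pages can be as simple as this sketch (the page, role and helper names are illustrative):

using System;
using System.Web.UI;

public partial class ViewEmployee : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // (A) Filter out bad Ids before touching the database.
        int id;
        if (!int.TryParse(Request.QueryString["Id"], out id) || id <= 0)
        {
            Response.Redirect("NotFound.aspx");
            return;
        }

        // (B) Authenticate and authorize before showing the record.
        if (!User.Identity.IsAuthenticated || !User.IsInRole("Auditor"))
        {
            Response.Redirect("AccessDenied.aspx");
            return;
        }

        LoadEmployee(id);
    }

    // Stub standing in for the real data binding.
    private void LoadEmployee(int id) { }
}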
Other session-related data, such as modification dates and maintaining the "state" of the user's interaction with the application, gets a little more complex, but hopefully this provides a starting point for you.

How do you keep track of temporary threads of conversation online

Often when I post a comment or answer on a site I like to keep an eye out for additional responses from other people, possibly replying again if appropriate. Sometimes I'll bookmark a page for a while, other times I'll end up re-googling keywords to locate the post again. I've always thought there should be something better than my memory for keeping track of pages I care about for a few days to a week.
Does anyone have any clever ideas for this type of thing? Is there a micro-Delicious type of online app with a bookmarklet for very short-term follow-up?
Update I think I should clarify. I wasn't asking about Stack Overflow specifically - on the "read/write web" in general I add comments to blog posts, respond to google group threads, etc. It's that sort of mish-mash of individual pages on random sites that I would care to keep track of for seven-to-ten days.
For Stack Overflow, I put together a little bookmarklet thing at http://stackoverflow.hewgill.com. I use it to keep track of posts that I might want to come back to later, for reference, or to answer if nobody else does, or whatever. The backend automatically retrieves updates from the SO server and updates your list of bookmarks.
In my head mostly. I occasionally forget things, but it works well enough.
That's a very interesting question you asked here.
I do the following:
temp bookmarks in the browser
just a tab in Firefox left open for weeks :)
subscription to email/RSS when possible. When an email notification comes, I often put it into a special folder in my email tree.
Different logins, notification types, etc. complicate following info on the web :(
Other interesting questions:
how to organize information storage (notes, saved web pages, forum threads, etc.) for current usage and as a read-only library, sync it between different PCs and USB disks, and how to label (tag) and search it
how to store old mails, conversations, chats, ...?
how to store digital photos for the future: make hard-copy printouts, or just regularly rewrite them from CD to a new one
Click on your username, then Responses.
