SCORM suspend_data issue migrating from SCORM Cloud - scorm

I currently have a client with a SCORM 1.2 course hosted on SCORM Cloud. Things are going fine there with the course, but the client wants to get off SCORM Cloud because the fees are adding up. I know SCORM Cloud is a very in-depth application, but all the client really needs is to let the user continue where they left off, to detect whether the user passed the final exam, and ideally to capture the score they got on it.
I have used a few different SCORM player wrappers, but I am running into the same issue with them all. The SCORM course made from storyboard creates a huge value for suspend_data, and when it gets really long the course will ask if the user wants to continue where they left off but will bring them back to the beginning instead of resuming.
I know that to be SCORM 1.2 compliant the suspend_data should be no longer than 4096 characters, but some values are upwards of 90,000 characters. I have also read that SCORM doesn't really enforce this; it is mainly an LMS restriction on field size. I am storing all of the cmi data in a MEDIUMTEXT column, so storing it is not the problem.
My main question is: has anyone migrated off SCORM Cloud and taken their users' history (suspend_data) with them so users could continue where they left off in your own SCORM player? Also, has anyone hit this issue and found a player/wrapper that successfully handles a large suspend_data value for a SCORM 1.2 course? The plan is to take them off SCORM Cloud by making an API call to get all of the cmi data for each user, then start launching the course directly from their site and storing the new cmi data ourselves, but we can't make the move if it means many users would have to start the course over.
I ran more tests with my SCORM player: I went through the entire course, saved my suspend_data at various points, and I can get it to resume from those points. The longest suspend_data I produced was just under 30k characters.
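To make the plan concrete, here is a rough sketch (not production code) of the kind of SCORM 1.2 API shim I have in mind on the new launch page; migratedCmi and saveToServer are placeholders for whatever backend calls you end up using:
// Sketch only: a minimal SCORM 1.2 API exposed by the launch page,
// pre-seeded with the cmi data exported from SCORM Cloud.
// migratedCmi and saveToServer are placeholders for your own backend.
var migratedCmi = {
  'cmi.core.entry': 'resume',            // tells the course this is a resumed attempt
  'cmi.core.lesson_status': 'incomplete',
  'cmi.core.lesson_location': '',
  'cmi.suspend_data': ''                 // the (possibly very long) exported value
};

window.API = {
  LMSInitialize: function () { return 'true'; },
  LMSGetValue: function (element) {
    // Hand back the migrated value so the course resumes where the learner left off
    return migratedCmi[element] !== undefined ? migratedCmi[element] : '';
  },
  LMSSetValue: function (element, value) {
    migratedCmi[element] = String(value);   // no length cap imposed here
    return 'true';
  },
  LMSCommit: function () {
    saveToServer(migratedCmi);              // persist to the MEDIUMTEXT column
    return 'true';
  },
  LMSFinish: function () { return window.API.LMSCommit(''); },
  LMSGetLastError: function () { return '0'; },
  LMSGetErrorString: function (code) { return ''; },
  LMSGetDiagnostic: function (code) { return ''; }
};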
Any tips in the right direction would be appreciated.

Migrating SCORM data from system to system is tough: while the SCORM elements themselves should be named similarly in each system's SCORM implementation, every LMS may store and name them differently.
As far as the large suspend_data issue goes, are you moving to a system that allows a customized maximum suspend_data size? In SCORM Cloud and in LMSs that run our SCORM Engine, there is a course property that lets you store as much suspend data as you want. A number of LMSs out there use Engine, so you may be able to modify that behavior.
Shoot us a message at support@scorm.com if you have any other questions!
Thanks,
Joe Donnelly
Rustici Software Support

Related

SCORM 2004 - suspend_data not saving in SCORM Cloud

I have a bit of a strange problem and am struggling to find any relevant info about it in the docs or elsewhere.
We have implemented SCORM 1.2 and 2004 in the past, and I'm currently trying to fix an issue with our 2004 version but have hit a bit of a brick wall. We store data about our learners' progress through the course (i.e. which pages they have visited) in 'cmi.suspend_data', then retrieve it at the start of their next session to provide visual feedback in the UI.
In 2004 2nd Edition, we are unable to retrieve that data from the LMS.
To take it right back to basics, I have uploaded a blank SCORM course (an empty index.html) to SCORM Cloud, launched it, found the API_1484_11 object, and called the following functions on it:
API.Initialize('');
API.SetValue('cmi.suspend_data', 'Test');
API.Commit('');
API.Terminate('');
I then exit the course. At this point I can see the suspend data in the 'Sandbox Registration State'.
I then go back into the course and call:
API.Initialize('');
API.GetValue('cmi.suspend_data');
A blank string is returned. At that point, API.GetLastError() returns a 403.
Am I missing something vital here, or some difference between 1.2 and 2004? Is this expected behaviour?
I think the issue you are running into is that you are not setting cmi.exit to "suspend". I believe that the specification says that the LMS is to retrieve the suspend_data from a previous learning experience only if the exit is suspended...
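In other words, the blank-course test above should set cmi.exit before terminating, something along these lines:
// Same test as before, but telling the LMS to keep the attempt suspended
API.Initialize('');
API.SetValue('cmi.suspend_data', 'Test');
API.SetValue('cmi.exit', 'suspend');   // without this the attempt ends and suspend_data is not carried over
API.Commit('');
API.Terminate('');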

"Some persistent data was not stored" issue in SCORM

I have a question about SCORM. I have a SCORM package created in Lectora, and it launches and works fine for me. But partway through, an alert appears saying "Some persistent data was not stored". When I googled it, I found it is caused by the package reporting more suspend_data than the limit allows (4k for SCORM 1.2 and 64k for SCORM 2004).
How can I work around the problem in my code, or how can I raise the maximum limit in my SCORM adapter? I can't change the package.
I hope the problem is clear; I am eagerly waiting for your reply.
Thanking you,
Arun KG
I have seen solutions where developers have hijacked some of the other read/write CMI fields to store arbitrary data. For example, cmi.comments gets you another 4K in SCORM 1.2. SCORM 2004 gives you quite a few more (cmi.interactions.n.description, cmi.objectives.n.description, ...).
Most of these alternative fields are not mandatory, so your target LMS may not support them.
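As a rough sketch of what that hijacking can look like in a SCORM 1.2 wrapper (the field split is illustrative only; as noted, verify that the target LMS actually persists cmi.comments before relying on it):
// Illustrative only: spill state that does not fit into cmi.suspend_data
// (4096 characters in SCORM 1.2) over into cmi.comments.
function saveState(api, state) {
  var parts = chunkString(state, 4096);
  api.LMSSetValue('cmi.suspend_data', parts[0] || '');
  api.LMSSetValue('cmi.comments', parts[1] || '');
  api.LMSCommit('');
}

function loadState(api) {
  return api.LMSGetValue('cmi.suspend_data') + api.LMSGetValue('cmi.comments');
}

function chunkString(str, size) {
  var chunks = [];
  for (var i = 0; i < str.length; i += size) {
    chunks.push(str.substr(i, size));
  }
  return chunks;
}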

Get request parameter from URL in HP LoadRunner

I am using HP loadrunner for my automatic tests.
Every time I run my application, it creates a transfer and also generates an id in the URL.
How can I get the id from the URL?
Thanks in advance!
The web_reg_save_param function in LoadRunner is used for this. The following line will save the current page URL to the parameter (URL).
web_reg_save_param("URL", "LB/ic=Location: ", "RB=\r\n", "Search=Headers", LAST);
If you know what the ID you're looking for is, e.g. http://www.example.com/?id=298374293847, you can adjust the call accordingly.
web_reg_save_param("URL", "LB/ic=Location: http://www.example.com/?id=", "RB=\r\n", "Search=Headers", LAST);
Hope this helps.
Recording with Siebel 8.1 on LoadRunner 11 has issues; I posted a question on HP's forum and got the same comment. But usually you can try the options mentioned below:
You can record in Siebel-Web or web (HTTP/HTML) and play back as either too (if you want to record in Siebel-Web and play back in regular web, just copy the contents of the script to a regular web script and save).
Try a proxy mode recording in LR.
Change the registry and disable NTLM.
Turn off all autocorrelation rules.
Turn on record-as-URL mode (as an alternative, use web_custom_request()).
Use a sniffer to capture the traffic and then build a script by hand. (Best option.)
Change settings on the Siebel server side as well (EnableAutomation = TRUE, EnableWebClientAutomation = TRUE).
If you are recording your scripts using Web (HTTP/HTML), you can use automatic correlation. For automatic correlation, go to Design Studio.
If you are unable to find the value there, then you must correlate manually using web_reg_save_param, giving the left and right boundaries.
This is going to sound belligerent, condescending and downright offensive. It is not meant as a reflection on you, but on your management, who have placed you in this position.
The topic of correlation is covered extensively in the class for LoadRunner web script development. It is the subject of a full third of the class plus an additional appendix. All told, some four different techniques for collecting dynamic data are covered, presented or documented as part of the class materials. This capability, the handling of dynamic data, is a foundation skill.
Vardges, your management has placed you in a tough spot. Personally I would bolt for greener fields, for any management willing to do this to a line-of-business employee is also willing to toss that same person under the bus to salvage their own hide or a client relationship. When training and mentoring do not occur, it is not a question of "if" the blame will land on you, only "when".
James Pulley
Moderator: YahooGroups Advanced-LoadRunner, YahooGroups LoadRunner, SQAForums LoadRunner, LinkedIn LoadRunner, GoogleGroups lr-loadrunner

What is the most useful information to display at the front of the office?

The company I work for has just purchased 4 32" LCD screens to be mounted at the front of the office for demonstration purposes. Whilst we are not demonstrating (most of the time), the screens are to be used as development information screens for the whole team.
What information would people recommend displaying to be most useful to the team? Our focus is on hosted business web-apps but I am interested in what other teams doing other types of development find useful too. Pointers on how to gather the displayed information would be useful also.
Information about your continuous integration status.
Major Development Milestones that have been hit in the last week
Releases within the last month (including a short description why this release is awesome)
Use it as a motivational board. The achievements of software development are seldom communicated well enough.
Since you're hosting apps for your customers, server and network status information would probably be useful.
Heck, why not create a "chat room" for the dev team to discuss issues and post a streaming version of that as well?
Schedule information, Scrum notes from that morning, a Gantt chart... the possibilities abound.
Outstanding bugcount, sorted by priority and severity. You can likely get this from your bugtracking tool programmatically.
Depending on your process management system, possibly a list of feature requests and the percentage complete on each of them. Again, you can probably get this programmatically from your process management / time tracking tool.
Time spent in the current development cycle, and time remaining. Again, this should be available from your process management / time tracking tool. You may want to use this data with your bugcounts as well to give a bugs/day fix rate.
If you're a public company with a profit-sharing plan (i.e. stock or options), the current price of the stock (this can be surprisingly strongly motivating). You can get stock data from several sources online programmatically (although a small delay may be injected unless you're paying for the service).
The movie 'Office Space'
Weather radar from intellicast.com
Latest check-in.
Number of check-ins per day.
Number of customers that use the software.
Metrics on bugs found/fixed and the ratio.
One screen could be an aggregated RSS feed of development topics pulled from sites such as Stack Overflow (or even Coding Horror). I'm not sure what your goal for these screens is, but I could see it being useful if you had a feed headlining topics specific to your development team. If I were there, I'd glance at them, maybe catch an interesting thread, and go learn something. Funnel a bunch of keywords and tags through a Yahoo Pipe and dump it to the screen.
That's if they are more "informal and informational."
I think the most popular pages from your webapp(s) would be a fun/interesting thing to show on a big monitor up front.
Another would be a live feed of your error reporting.
We have one monitor showing all meetings for the day, with start-end, subject, and room. I find this helpful, not only for my orientation, but also to see what other people do at our company.
xkcd, bunny, dilbert and savage chickens :-)

When is Google Analytics not good enough? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
I'm trying to determine why an enterprise wouldn't want to use Google Analytics.
Here are the main reasons I've seen mentioned:
Inability to track clients that have Javascript disabled.
Lack of ownership of the statistics - Google owns the data.
Most of the web clients with Javascript disabled will probably be bots/spiders. This data is interesting, but probably not very useful.
As for the ownership issue, this is a bit paranoid IMO.
What am I missing here? When is Google Analytics not good enough?
Here are my findings from additional research:
Google Analytics is limited to 5 million page views per month - source
If a web site generates more than 5 million pageviews per month, it will need to be linked to an active AdWords account to avoid interruption of service.
Lack of / slow technical support
All Google support is handled through email, and response times can take a week or more. Commercial analytics products often have much faster and more personalized support.
Inability to track files (PDFs, images, etc.)
GA relies on JavaScript, and static files can't execute JavaScript. The workaround is to tag the link, but this won't track requests that go directly to the file.
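For reference, the link tagging usually looks something like this with the classic ga.js snippet; the element id and file name below are just examples:
// Classic ga.js workaround: fire an event when the download link is clicked.
// Direct requests to the PDF (bookmarks, links from other sites) still go untracked.
// 'whitepaper-link' and 'whitepaper.pdf' are example names.
document.getElementById('whitepaper-link').onclick = function () {
  _gaq.push(['_trackEvent', 'Downloads', 'PDF', 'whitepaper.pdf']);
};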
Limited ability to customize
This is a selling point that I see pushed by commercial analytics tools (WebTrends). However, it's never explained which customizations GA denies but WebTrends allows.
The Google Analytics EULA does not allow you to track individual users by identifying them. So if you wanted to add a custom variable for username to track how many times each user logs in, then you would be in a gray zone if not outright violating the EULA.
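For illustration, the kind of custom-variable call being described is a classic ga.js custom variable along these lines; putting a real username in it is exactly the gray area, so the sketch uses an opaque, consented ID instead (the slot, name and value are examples):
// Sketch of the ga.js custom-variable call discussed above.
// An opaque, consented ID is used rather than the actual username.
_gaq.push(['_setCustomVar',
  1,             // slot (1-5)
  'member',      // variable name (example)
  'user-12345',  // value: an opaque ID, not the real username
  2              // scope: 2 = session level
]);
_gaq.push(['_trackPageview']);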
I use Google Analytics on about 10 sites right now and it's a great tool. In addition to all the analytics stats, you can tie it in with AdSense and it becomes a marketing/revenue tool and not just "wow, look at all these cool user stats". If there were a way to track by user ID in certain circumstances (e.g. if users agreed to it, or if they work for the company that owns the site), then I would have no issues.
Besides, it's free and all you have to do is add JavaScript to the files, so give it a try and see what you think after a few months.
One reason that was, surprisingly, not posted:
timing / speed of reaction
It takes at least 4 hours (up to 24) for GA to update your data.
This is OK for me personally in most cases, but when reacting fast is crucial (news sites, one-off events, etc.) you may want to employ some other solution (Mint comes to mind, but it's not the only one out there, of course).
Thought I'd add my two pence worth to this thread, as this is a topic close to my heart and one I've debated with colleagues for years. We've used WebTrends in house for as long as I can remember, back to version 4 of the log analyzer (how different things were back then!). Since Google Analytics came along, we've started to come under increasing pressure from certain parts of our business to switch, as 'it does everything we need from an analytics tool'.
Well, in many senses it does, especially these days. But I championed the integration of our CRM and web analytics tools back in 2006, and as our business isn't e-commerce (the 'conversion' happens offline, sometimes months after the visitor acquisition) we need to integrate in this way to get a true picture of campaign effectiveness and a notion of ROI.
All of this means we need access to the raw data and need to be able to join visitor records on sessionID, etc.; without this access we'd be screwed. I'd love it if we could roll without it, but the current requirements mean we can't, so this alone is a HUGE reason why Google Analytics is not good enough.
Over and out
For tracking desktop software or creating a white-label solution there are better options.
For white-label, integration-based analytics I use Mixpanel. For desktop software, I use DeskMetrics.
Google Analytics does not work well with mobile phones. While the iPhone and the Palm may be supported, many existing handsets do not support the JavaScript that Google uses.
If you're based in the UK, then theoretically you could be breaking the Data Protection Act by using Analytics.
If information about your users (like which web pages they're looking at) goes "outside the European Economic Area" and onto Google's servers in the US, then you're breaking the DPA.
Pretty obscure, but you did ask :)
Piwik avoids the problem because you host it on your own servers.
Lack of ownership of the statistics - Google owns the data. ... As for the ownership issue, this is a bit paranoid IMO.
One problem with it is that we can't even access the raw data. We had a use case this week where we wanted a visitor map for an executive presentation. We needed more flexibility in how the visitor map is displayed (we wanted to view the map in the Google Earth plug-in). In GA, you can't: you take what they give you. You can see a map of how many visits came from each city, but you can't export a data file of cities and number of visits to run through other tools. So, paranoia aside, there are significant limitations on what you can accomplish with GA.
However this is not a problem if you use Urchin, the self-hosted version of GA: you can export the data and do what you want with it. (And the exported data is richer than the web server log's, as it includes some analysis already.)
Since Piwik is open source, and pluggable, I imagine you could enhance the visitor map plug-in any way you wanted to. And export whatever data you want.
Whether this limitation affects you depends on your needs, obviously.
Update: I've now looked at the GA Data Export API, and it turns out that things you cannot do through the UI (but can with Urchin) you can do with this API. It does look like you can export the visit data I was talking about via a feed (although there are daily traffic caps on those requests). So sprinkle salt heavily on what I wrote above.
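To give an idea of the shape of such a request, a legacy Data Export API feed for the city/visit data above looks roughly like this; ga:12345678 is a placeholder profile ID, authentication is omitted, and the daily caps mentioned above still apply:
// Rough shape of a legacy GA Data Export API feed request for city/visit data.
// ga:12345678 is a placeholder profile ID; an authenticated token is still required.
var feedUrl = 'https://www.google.com/analytics/feeds/data' +
  '?ids=ga:12345678' +
  '&dimensions=ga:city' +
  '&metrics=ga:visits' +
  '&start-date=2009-01-01' +
  '&end-date=2009-12-31';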
A couple more points that I've come across:
GA doesn't let you dig beyond full-day statistics; I would often like the ability to investigate whether a traffic dip the previous day was caused by the design update I did at 1pm or the soccer match on TV at 8pm.
GA doesn't offer a workaround for traffic spikes caused by DDoS attacks, Slashdotting, etc. When I'm looking at a GA visitor graph of 2009, all I can see is the 2-million-pageview spike on October 16th, pushing the entire rest of the year down flat against the horizontal axis of the graph. To get a meaningful graph, GA should offer the ability to trim or exclude outlying data points, or the ability to limit/bracket the graph window itself.
GA doesn't have an event monitoring client (think Reinvigorate's Snoop tool).
While GA is very user-friendly, I've found it's not as granular as some of the other stats programs (or maybe I'm not looking in the right places). Before the marketing monkeys I work with began pushing GA, we were very satisfied with AWStats. The sheer scope of the data helped us on several occasions hone sites to better suit their audience. While GA is very shiny and laid out well, I personally still prefer the raw numbers like I used to get through AWStats.
Slow data processing speed - can be as low as 15-30 minutes for page views, but may be up to 48 hours for e-commerce
EULA is limiting in some cases
You won't own or have any control of the data. Google's engineers might use it (anonymously) for testing
Anything more complex requires customization - downloads and such are no issue, but there are limits
Cross domain tracking by linker is faulty at best
Visit-based - proper tools report at the visitor level; GA mostly works with visit-based reporting
Limited number of custom vars used at one time (5)
No tech support, if you're realistic
Usually when there is a downtime notice, it's already gone
API limitations (4 dimensions and 10 metrics at one time, not all can be used together in addition to that)
I have many more, but at the end of the day it is a good tool for its price.
From a non-technical point of view, I think the most important issue is that some enterprises have strict data security policies: all data must be controlled and managed in house.
If you use Google Analytics, the data is stored on Google's servers. For certain kinds of enterprise, such as insurance or financial companies, that policy has to be followed.
I would NOT go with server logs. In fact I have them disabled on my server. Why, you ask?
For the simple reason that every time you hit my server, the logging program makes an entry in the physical log file on my HDD. So if my server gets 100,000 hits in a day, that's 100,000 HDD write operations.
You think that's fine? Well, it's not. It slows your server down, especially if the log file is huge.
Why would someone even consider doing that to their server, especially when we're working so hard to minify JavaScript and CSS and make image files 2 KB smaller?
Please do yourself a favor and don't log directly on your server.
At least Google Analytics logs on Google's servers, so my server stays healthier.
I wouldn't use it for any of my sites, because you're forcing the user to accept your proprietary JavaScript code in their browser, which is bad. Also, giving your data to Google is a really bad idea.
See Piwik for something you can run yourself as free software, eliminating both problems.
