Experimenting with Tincan/XAPI project created using Adobe Presenter

Using Adobe Presenter, I have created a presentation with an opening slide, two quiz questions, and a concluding slide. I have published it as a Tincan presentation (and confirmed that a tincan.xml file is published), uploaded it to the web, and run the presentation. The presentation runs fine and the quiz works fine. Two problems:
1) The index.htm file references
./data/resources/ha/wr_/OpenAjaxManagedHub-all.js
but there is no such directory as wr_, so naturally there are no files in it. The Chrome dev tools console reports the file as not found.
2) tc-config.js contains the following reference to a publicly available Record Store, so I would expect an attempt to store the activity there when the presentation concludes:
TC_RECORD_STORES = [
    {
        endpoint: "https://cloud.scorm.com/ScormEngineInterface/TCAPI/public/",
        auth: "Basic VGVzdFVzZXI6cGFzc3dvcmQ="
    }
];
But looking in Chrome dev tools Network tab, there is no XMLHttpRequest. In fact, the only thing in the Network tab is a bunch of GETs for the files in the presentation.
Also, my understanding is that if it had been stored, it would show up here:
http://tincanapi.com/report-sample/
And it does not show up there.
Don't know if these two problems are related.
Any clues will be appreciated.
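A quick way to check the LRS side independently of the presentation is to POST a minimal statement by hand. A rough sketch in C# (the statement body and activity ID are made-up examples; the endpoint and Authorization header are the values from tc-config.js above, and the X-Experience-API-Version header assumes an xAPI 1.x LRS):

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

class LrsSmokeTest
{
    static void Main()
    {
        var client = new HttpClient();
        // Endpoint and Basic auth value taken from tc-config.js above.
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Basic", "VGVzdFVzZXI6cGFzc3dvcmQ=");
        // Required by xAPI 1.x; older Tin Can content may omit it.
        client.DefaultRequestHeaders.Add("X-Experience-API-Version", "1.0.0");

        // Minimal hand-rolled statement; the actor and activity id are made-up examples.
        string statement =
            "{ \"actor\": { \"mbox\": \"mailto:test@example.com\" }," +
            "  \"verb\": { \"id\": \"http://adlnet.gov/expapi/verbs/experienced\" }," +
            "  \"object\": { \"id\": \"http://example.com/presenter-test\" } }";

        HttpResponseMessage response = client.PostAsync(
            "https://cloud.scorm.com/ScormEngineInterface/TCAPI/public/statements",
            new StringContent(statement, Encoding.UTF8, "application/json")).Result;

        Console.WriteLine((int)response.StatusCode + " " + response.ReasonPhrase);
    }
}

If that POST succeeds but the published presentation still sends nothing, the problem is in the published content (perhaps related to the missing wr_ scripts) rather than the Record Store.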

Related

Cannot find namespace in Project Oxford namespace in sample app of Microsoft Cognitive Services

I am trying to create a simple app that shows near-real-time emotions from a live webcam. I am using this guide:
https://www.microsoft.com/cognitive-services/en-us/emotion-api/documentation/emotion-api-how-to-topics/HowtoAnazlyzeVideo_Emotion
I've downloaded the sample app:
https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/
but I am unsure how to make it run. The README says:
1. Get API keys for the Vision APIs.
2. Open the sample in Visual Studio 2015, build and run the sample applications:
- For BasicConsoleSample, the Face API key is hard-coded directly in BasicConsoleSample/Program.cs.
- For LiveCameraSample, the keys should be entered into the Settings pane of the app. They will be persisted across sessions as user data.
I've got the API keys and inserted the Face API key into the Program.cs code and all of the API keys into the settings.
It says that it cannot find some namespaces, as seen in the image:
Why does it say that the namespaces are not included in the Microsoft.ProjectOxford namespace? Thanks
Edit: This was solved. After updating NuGet and everything it still didn't work, and the problem turned out to be that the path name was too long.
What I did was download the project zip to the desktop and then press "extract files".
That created a folder with a long name, and inside it was another folder with a long name, so the path was long.
Just put it on D:\ or use "Extract Here", so that it extracts the folder inside the zip rather than creating another folder with that folder inside.
You're seeing red squiggles because you're missing some dependent assemblies. These are provided via NuGet, so you should download them by right-clicking on the project and selecting Manage NuGet Packages.... The UI thereafter is hopefully self-explanatory.
Once the missing packages are pulled in from NuGet, you should be able to build and run the application. Once you run it, there should be a UI presented where you'd enter the requisite key. The XAML for it is here.
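As a quick sanity check that the restored references resolve, something like this should compile once NuGet has done its job (the namespace and client names are what the sample's Project Oxford packages provided at the time; the key strings are placeholders):

using System;
using Microsoft.ProjectOxford.Emotion; // EmotionServiceClient
using Microsoft.ProjectOxford.Face;    // FaceServiceClient

class ReferenceCheck
{
    static void Main()
    {
        // Placeholders - use the keys you obtained for the APIs.
        var faceClient = new FaceServiceClient("YOUR-FACE-API-KEY");
        var emotionClient = new EmotionServiceClient("YOUR-EMOTION-API-KEY");
        Console.WriteLine("Project Oxford clients constructed: "
            + (faceClient != null && emotionClient != null));
    }
}

If these using directives still show red squiggles after the restore, the packages did not actually install (which is where the too-long-path problem described in the edit above bites).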

Creating a "web service" using ASP.net - what's in the DLL?

All,
Sorry in advance - I'm a total novice when it comes to ASP.net.
I'm working on a project that's fairly simple. I have a single HTML page that collects input from the user. When the input is complete, the HTML page uses AJAX to post the data to a web service. That service receives the data, does some processing on it, then sends back a response.
The "client" part of this app is pure HTML/Javascript (not ASP.net), and is complete and works perfectly.
The "service" part of this app (MyHandler.ashx) is built using ASP.net. Technically - what it does is receive data from the AJAX post. It then uses Microsoft.Office.Interop.Excel to open an Excel spreadsheet, pass the users' inputs into that spreadsheet, then retrieve several calculated values from the spreadsheet, and returns those values in the response to the AJAX post.
Using Visual Studio (VS), I've got this whole process running locally on my PC.
When I "publish" the project - VS creates a ton of files. I sent those files to the team that manages the server; they deployed them, and voilà - it works. (The necessary Office interop libraries are installed on the server).
So - my question: as I make a few modest changes (e.g., validation, error handling) to the handler - MyHandler.ashx - which of those published files actually change? If I want to redeploy, do I simply need to resend an updated version of MyHandler.ashx? Or do simple coding changes to that file require changes to the DLL?
I guess my question is, generally - what's in the DLL? (E.g., is it a compiled version of MyHandler.ashx?)
More specifically - publishing my project creates the following files that I don't really understand:
Web.config
Global.asax (in my project, there's not much in here)
bin/MyProject.dll
So, if I make changes to MyHandler.ashx - can I simply redeploy THAT file? Or do I need to "publish", then "redeploy" the DLL? (By changes I mean simple code changes, not decisions to include/exclude other external dependencies.)
Sorry - this question must seem like nonsense to knowledgeable ASP.net developers. But, with other technologies I've used, things were clear:
If you're developing a Flash project, you write source code in .FLA files, then compile, then deploy the resulting .SWF files.
If you're developing an HTML/JavaScript/PHP project, you write those files, then deploy those same files
I'm trying to get a better understanding of what's what with ASP.net.
Thanks again.
The DLL contains the compiled code behind the ASHX file. The ASHX is just a service definition for an HTTP handler. When you make changes to the service (e.g. the code), simply issue another Publish like you did before and send the entire package.
But in short, when you change the code, the DLL is what's changing.
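To make that concrete, here is roughly what the pair looks like (a hypothetical minimal version; your real ProcessRequest does the Excel interop work). The .ashx file deploys as plain text, while the class it names is what Publish compiles into bin/MyProject.dll - which is why a code change means republishing the DLL rather than just re-sending the .ashx:

// MyHandler.ashx - a one-line directive, deployed as-is:
//   <%@ WebHandler Language="C#" CodeBehind="MyHandler.ashx.cs" Class="MyProject.MyHandler" %>

// MyHandler.ashx.cs - compiled into bin/MyProject.dll by Publish:
using System.Web;

namespace MyProject
{
    public class MyHandler : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            // Read the AJAX post, do the work, write the response.
            string data = context.Request.Form["data"];
            context.Response.ContentType = "application/json";
            context.Response.Write("{\"result\":\"ok\"}");
        }

        public bool IsReusable
        {
            get { return false; }
        }
    }
}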

Uploading big Files

I am an amateur ASP.net developer working on my first job (a friend's website). ASP.net v4.0, using VS2010.
His company makes 3D models (using a 3D printer). The website is currently in development but can be found here. I would be the first to admit that the code has been a bit rushed and hacked in, but my friend is quite happy with what he has so far.
One of the requirements is that his customers need to be able to upload their model design files, which could be up to 100 MB each (or maybe more). I am struggling to get this to work properly.
I started by using the built in <asp:FileUpload ID="FileUpload1" runat="server" /> tag and an animated gif image, similar to the idea described in Joe Stagner's tutorial - thanks Joe, I like your presentations a lot.
This worked OK for a small test file but doesn't give any indication of upload progress. So I tried to improve my solution using ideas from Sunasara Imdadhusen's Code Project article. My upload code looks like this:
Task t = Task.Factory.StartNew(() =>
{
    byte[] buffer = new byte[UPLOAD_BUFFER_BYTE_SIZE];
    // Upload the file in chunks so that we can measure how long it is taking.
    using (FileStream fs = new FileStream(Path.Combine(newQuotePath, filename), FileMode.Create))
    {
        DateTime stopwatch = DateTime.Now;
        while (stats.Uploaded < stats.TotalSize)
        {
            int bytecount = postedFile.InputStream.Read(buffer, 0, UPLOAD_BUFFER_BYTE_SIZE);
            fs.Write(buffer, 0, bytecount);
            stats.Uploaded += bytecount;
            double dRate = UPLOAD_BUFFER_BYTE_SIZE / Math.Abs((DateTime.Now - stopwatch).TotalSeconds);
            stats.Rate = (int)(Math.Min(dRate, int.MaxValue));
            // Sleep is for debugging only!
            //System.Threading.Thread.Sleep(2000);
            stopwatch = DateTime.Now;
        }
    }
}, TaskCreationOptions.LongRunning);
Where stats is a reference to a class that is stored as a session variable and is accessed, from a JavaScript function running in a setInterval(...) while the upload is taking place, via this PageMethod:
[System.Web.Services.WebMethod]
[System.Web.Script.Services.ScriptMethod]
public static UploadStatus GetFileUploadStatus()
{
    UploadStatus stats = (UploadStatus)HttpContext.Current.Session["UploadFileStatus"];
    if ((stats != null) && (stats.IsReady))
    {
        return stats;
    }
    else
    {
        return null;
    }
}
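For reference, a minimal shape for the UploadStatus class implied by these snippets - the property names come from the code above; the rest, including the PercentComplete helper, is my assumption:

using System;

[Serializable]
public class UploadStatus
{
    public long TotalSize { get; set; } // e.g. set from postedFile.ContentLength before the task starts
    public long Uploaded { get; set; }  // bytes written so far (updated by the upload task)
    public int Rate { get; set; }       // bytes/second, recomputed for each chunk
    public bool IsReady { get; set; }   // true once the task has started filling in the object

    // Hypothetical convenience for the JavaScript progress bar.
    public int PercentComplete
    {
        get { return TotalSize == 0 ? 0 : (int)(Uploaded * 100 / TotalSize); }
    }
}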
This worked on my local machine (using the thread sleep to slow down the upload). So I published it to our host 123-Reg and it didn't work as expected. The animated gif comes on when the upload starts, but the progress bar doesn't start moving. The file input control and the submit button (in an IFrame), which should get disabled as soon as the upload starts, take ages to become disabled. Then the web page just hangs there. After waiting for a while I clicked the page refresh button. When the page refreshed, it showed that my test file had been uploaded successfully. I tried a 16 KB file and a 1.6 MB file.
Since this worked on my local machine, I suspect that this is happening because our website host (123-Reg) is using a web farm.
Anyway, I thought this was going to be easy but it is not, so I had a look for some open-source upload progress bars. I had a look at NeatUpload, but it says "By default, NeatUpload won't work properly on a web garden or web farm", and to get it to work on a web farm it says to "Specify the same random 32-hex-digit decryptionKey attribute in the <machineKey> section of each server's Web.config". But I don't think I have access to 123-Reg's server config files(?).
My friend has suggested that perhaps we could use a service such as Dropbox, but I had a look and I have no idea how to add this to our website. This might be a good option, since such services might be optimized for uploads.
Any advice or suggestions would be very much appreciated - thanks.
EDIT: The story so far ... still struggling with this.
I looked into using Dropbox in detail (and other cloud storage providers such as SkyDrive etc.). It seems that these services are designed around providing applications that interact with an individual user's storage. I wanted all users to be able to upload files to MY Dropbox folder, but without sharing (customers should not have access to other customers' design files). Anyway, I carried on: I set up a Dropbox account, installed the SharpBox SDK and reprogrammed my upload code (the bit inside the task action). This seemed to work OK on my local machine; it was a bit slower because it was uploading to the Dropbox server (I didn't need the Thread.Sleep). I published the website to the 123-Reg server and got a similar, unusable experience as before.
So far I had been testing with IE9, and I just happened to try it out with Chrome and noticed a funny thing. Just after clicking the upload button, but before the page refreshed, Chrome showed an upload-percent-complete indicator in the bottom left. This ran to 100%, then my controls started to show some action before the whole page started to hang again.
According to MSDN the HttpPostedFile:
"By default, all requests, including form fields and uploaded files, larger than 256 KB are buffered to disk"
So does this mean that my Task is being started after the painful part of waiting for the upload (which actually happens between the client and a buffer)? If that is the case, then it would not make sense to make it more painful by then sending the file off to Dropbox, right? And is my progress bar tracking the progress of the wrong bit?
(Today I am going to check the error handling of my code since I have a feeling that the reason it is hanging is because it is not recovering from an exception properly). It certainly feels like I am learning a lot by doing this.
This might be a bit overkill, but you could look into using blueimp's jQuery File Upload tool (demo site here: http://blueimp.github.com/jQuery-File-Upload/) - a developer called Max Pavlov has modified the original version (originally for non-.NET technologies) for use in MVC 3.
It can be found here on GitHub: https://github.com/maxpavlov/jQuery-File-Upload.MVC3 . I have successfully implemented this in both MVC 3 and MVC 4 Beta. The only thing I had to do to make it work effectively in MVC 4 was remove some of the ClientDependency code (this is a DLL that handles bundling and minification of JS and CSS files), as this replicates functionality already in MVC 4 but not in MVC 3. Additionally, I have added some pages to the wiki over on GitHub describing what I did, although it's not complete. If you decide to go down this route I could update some details there with my more recent findings. My notes so far on MVC 4 integration can be found here: https://github.com/maxpavlov/jQuery-File-Upload.MVC3/wiki/MVC-4---EnableDefaultBundles .
Incidentally, I have managed to upload files up to about 2 GB using this tool and have found it quite flexible!

ASP.NET Sound Resource not publishing

So I created an ASP.NET 4 application in VS2010 that needs to play sound to the end user, and it is working perfectly in my local development environment. The problem is that neither the sound resource nor the Resources.resx file is being published to the server. Any idea why?
What I did:
1) Under Project > Properties > Resources I added my sound resource, called soundbyte (containing soundbyte.wav). I noticed this creates a Resources folder with the wav file, and under my project a Resources.resx file referencing the file.
2) In my code I play the file as follows:
Dim audioFile = My.Resources.soundbyte
Dim player = New Media.SoundPlayer(audioFile)
player.Load()
player.Play()
In the Visual Studio Solution Explorer, right-click on Resources.resx and select Properties, then set Build Action to Content.
EDIT: The following resource might also help.
http://blog.andreloker.de/post/2010/07/02/Visual-Studio-default-build-action-for-non-default-file-types.aspx
Ultimately, I found that the way to play the sound in the client browser (as opposed to on the server the ASP app is running on) was to follow the techniques in this example: http://www.vbdotnetheaven.com/UploadFile/scottlysle/PlaySoundsInASPX09032006083212AM/PlaySoundsInASPX.aspx
But I found that an even better way in my case was to use JavaScript, which doesn't require the Resources technique.
Simply embed the sound on the page after the <body> tag:
<embed src="Sounds/jump.wav" autostart=false width=1 height=1 id="sound1" enablejavascript="true">
Then in JavaScript set up the function:
function EvalSound(soundobj) {
    var thissound = document.getElementById(soundobj);
    thissound.Play();
}
Finally, play the sound in the browser as needed from JavaScript:
EvalSound('sound1');

Better way to handle page that links to hundreds of binaries?

I've struggled to find a better solution for the following setup. I'm not actively working on this, but I know some who might appreciate other ways of handling this.
Setup:
Tridion-managed page has a single "linked list" component
Single component has component links to other components in Tridion
Linked-to components often link to multimedia component (mm)
An XSLT component template (XSLT CT) renders XML with the above content and with links to PDFs
XSL document() function used to grab embedded (linked-to) content, all content converted to XML nodes and attributes
TCMScriptAssistant namespace with publishBinary() publishes related PDF and other media
Page template just outputs the result of the CT
Business requirements:
improved publishing (last I worked on this, some of these files created a 2 GB publishing transaction because of the PDFs)
published XML content file must reference the associated PDFs; hyperlinks work but identifiers might not help because of...
no Tridion content delivery APIs, mainly for independence from the storage database but also to avoid Tridion-specific code on the presentation server (loosely coupled setup and less training for developers)
The biggest issue is the huge transport package during publishing. The second problem is that publishing any of the linked-to PDFs will cause the page to republish.
How could this setup be improved or re-engineered, preferably without too many changes to the existing templates? Modular templating could be considered.
Dynamic component presentations could possibly work, but would need to be published to the file system and not use dynamic linking or broker objects (e.g. no criteria filters, binary metadata, etc).
There are indeed 2 questions. I will handle them in reverse order.
To prevent the page from being republished when you publish a binary, you can use the event system in older versions of Tridion (pre-2011) to turn off link resolving, or with newer versions you can use a custom resolver to prevent this. There is an article by Nuno which explains this (http://nunolinhares.blogspot.com/2011/10/tridion-publisher-and-custom-resolvers.html).
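A rough sketch of the custom-resolver idea (the TOM.NET types and signature here are from memory of the 2011-era API; treat them as assumptions and verify against Nuno's article and your Tridion version):

using System.Collections.Generic;
using Tridion.ContentManager;
using Tridion.ContentManager.CommunicationManagement;
using Tridion.ContentManager.ContentManagement;
using Tridion.ContentManager.Publishing;
using Tridion.ContentManager.Publishing.Resolving;

// Hypothetical resolver that stops binaries from dragging their pages along.
public class SkipPagesForBinaryPublish : IResolver
{
    public void Resolve(IdentifiableObject item, ResolveInstruction instruction,
        PublishContext context, Tridion.Collections.ISet<ResolvedItem> resolvedItems)
    {
        // Only intervene when the item being published is a component
        // (e.g. a multimedia component holding a PDF).
        if (!(item is Component)) return;

        // Collect first, then remove, so we don't mutate the set while enumerating.
        var pages = new List<ResolvedItem>();
        foreach (ResolvedItem resolved in resolvedItems)
        {
            if (resolved.Item is Page)
                pages.Add(resolved);
        }
        foreach (ResolvedItem page in pages)
            resolvedItems.Remove(page);
    }
}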
Your second one is a bit tougher, in no small part because of your criteria of not using the SDL Tridion CD APIs. I would have suggested publishing the binaries separately (this would keep the file size of your transaction package down) and using Binary Linking to resolve the paths at request time.
Given this is not an option, I think the only way I would approach it would be to still use dynamic component presentations, and then use predictable, unique file names for the PDFs (i.e. use something like 317-12345.pdf, based on the URI), and use one directory for all the binaries. That way you could enter the paths to the binaries in your XSLT template, as you know where the binaries will be located later. You could then use a custom resolver to publish the binaries when you publish the main list component or page.
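As a sketch of the predictable-file-name idea (plain string handling, no Tridion API involved): a multimedia component with URI tcm:317-12345 would always publish as 317-12345.pdf, so the XSLT can compute the link up front:

using System;

static class BinaryNaming
{
    // Derive a stable file name from a TCM URI, e.g. "tcm:317-12345" -> "317-12345.pdf".
    public static string FileNameFor(string tcmUri, string extension)
    {
        if (!tcmUri.StartsWith("tcm:", StringComparison.Ordinal))
            throw new ArgumentException("Not a TCM URI: " + tcmUri);
        // Strip the scheme, then keep publication id and item id only
        // (so "tcm:317-12345-16" also maps to "317-12345").
        string[] parts = tcmUri.Substring(4).Split('-');
        if (parts.Length < 2)
            throw new ArgumentException("Unexpected TCM URI format: " + tcmUri);
        return parts[0] + "-" + parts[1] + "." + extension;
    }
}

// Usage: BinaryNaming.FileNameFor("tcm:317-12345", "pdf") returns "317-12345.pdf"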
Hope that helps
Chris
