Assuming everything is working online, if given the exact same parameters, should we expect the same route from JavaScript, iOS, and Android while using the HERE SDK/API?
The reason I ask is that I see the SDKs use a built-in router, while the API seems to talk to a server. So if online, would all three of these platforms give the same route?
Not necessarily. When iOS and Android use the online router, the results will most likely be quite close to JS, but they are not necessarily identical.
A few points:
Android and iOS use a different endpoint/protocol than JS (which shouldn't make a big difference if you really use the same route options, but there is no guarantee the results are identical)
Android and iOS might implicitly set different default options than JS
Biggest difference: map data/map version. The mobile Premium SDK 3.x operates, as you already said, on local map data with a local router. Even when connecting online, the online router response will always match the map version you have on the phone (otherwise it could not be guaranteed to render correctly and be usable for turn-by-turn voice guidance on the phone). This means that when you use a three-month-old map on your phone, you will get an online route response that matches the three-month-old map data, while JS is always on the freshest data (monthly). Even if you take care to update your data on the phone regularly, you currently only get updates once a quarter on the phone. In the worst case, different map data can lead to different routes.
As said before, these are all reasons why it can't be guaranteed that the results are always 100% the same, but in many cases they are.
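To rule out the second point, pin every route option explicitly on each platform instead of relying on defaults. A minimal sketch in TypeScript against the online REST router; the endpoint shape follows the calculateroute API of that generation, and the credentials, coordinates, and mode string are placeholders to adapt:

    // Set every option explicitly so no platform falls back to its own default.
    const params = new URLSearchParams({
      app_id: "YOUR_APP_ID",    // placeholder credentials
      app_code: "YOUR_APP_CODE",
      waypoint0: "geo!52.5160,13.3779",
      waypoint1: "geo!52.5206,13.4094",
      mode: "fastest;car;traffic:disabled",
      representation: "display",
    });

    fetch(`https://route.api.here.com/routing/7.2/calculateroute.json?${params}`)
      .then((res) => res.json())
      .then((route) => console.log(route));

If the routes still differ after that, map version (point 3) is the usual suspect.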
I've nearly finished my Lambda service for my Smart Home skill, and everything works great. The Echo is receiving my confirmations and correctly relaying their information. I'm now trying to build in error handling.
From the SHS API reference, there are a bunch of error messages listed that correspond to different circumstances. Are these errors supposed to change what Alexa says? Regardless of which one, if any, I use, Alexa just responds that the command doesn't work on that device. Right now I'm literally just using callback(err) and returning the object copied and pasted from the API reference, and still Alexa responds with the generic error.
It's easy to put in a bunch of constants to define error returns. It's harder to wire all of that into a firmware patch for a hardware device. Also, they only release an update to the SDK a few times a year, while they patch the hardware every couple of weeks.
Given that, I suspect they put those error returns into the SDK to meet its ship date, more as placeholders than as specific functionality. Over time, and if there is increased adoption of home skills, they will roll out updates to the hardware device that take advantage of those returns.
My advice would be to use them, but don't expect there to be a difference right now, and don't mention differences in your documentation. If there is another place you can surface diagnostic information, you might want to do that so your customers can fix their problems.
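For what it's worth, here is a minimal sketch of returning one of those documented error objects, assuming the v2 ConnectedHome namespace from the SHS reference. One detail worth double-checking against your code: the error object should be handed back as the Lambda's result, because failing the invocation itself with callback(err) gives Alexa nothing valid to read, which on its own would explain the generic speech:

    import { randomUUID } from "crypto";

    export const handler = (
      event: unknown,
      context: unknown,
      callback: (err: Error | null, result?: unknown) => void
    ) => {
      // One of the error names listed in the SHS API reference.
      const errorResponse = {
        header: {
          namespace: "Alexa.ConnectedHome.Control",
          name: "TargetOfflineError",
          payloadVersion: "2",
          messageId: randomUUID(),
        },
        payload: {},
      };
      // Return the error response as the *successful* Lambda result;
      // callback(err) fails the whole invocation instead.
      callback(null, errorResponse);
    };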
I would like to create encrypted media (MP3 and MP4) that will need some form of authentication to play back. I would prefer playback in VLC, but a custom player (or a customized version of VLC) would do if necessary. Playback should be local; no streaming.
The problem, however, is that I've read a number of threads and articles on this, and most seem to suggest that in the end a user can simply record the final stream, e.g. using Stereo Mix.
What are the viable options, if any, to prevent this or, at the least, make it extremely difficult?
Protection against screen capture software is one of the most difficult goals for any DRM client implementation to accomplish, due to the extensibility and flexibility of a modern computer's graphics system.
My team performed a set of experiments on this topic a few months ago and we found only one DRM client implementation that was able to prevent screen capture: Microsoft PlayReady running in Internet Explorer 11 via HTML Encrypted Media Extensions.
This configuration resulted in a black rectangle being recorded, instead of the video picture. Using Microsoft PlayReady in other media players (e.g. a Silverlight browser plugin) also failed to protect against screen capture, so this level of protection is specific to the implementation built into Internet Explorer 11, at least today.
You can try out Microsoft PlayReady in the successful configuration here: http://ie.microsoft.com/testdrive/html5/eme/
This approach would not, however, fulfill your requirements regarding media format and "no streaming". Such a scenario is not directly in the scope of modern DRM technologies, so I recommend you re-architect your solution: use DASH as the video format and stream it (e.g. even locally from the same computer) to a web-app-based player. This is a setup I have seen before on projects that need local playback while still enabling the use of modern media delivery and DRM technologies.
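To make that concrete, here is a sketch using the dash.js browser player. The manifest URL, license server URL, and key system string are all placeholders for whatever your local server and DRM provider actually expose:

    import * as dashjs from "dashjs";

    const player = dashjs.MediaPlayer().create();

    // License acquisition per key system; the URL is an assumption.
    player.setProtectionData({
      "com.microsoft.playready": {
        serverURL: "https://license.example.com/playready",
      },
    });

    // Streaming from localhost keeps playback local while still using a
    // segmented, DRM-friendly delivery format.
    player.initialize(
      document.querySelector("video") as HTMLVideoElement,
      "http://localhost:8080/media/manifest.mpd",
      true // autoplay
    );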
The field of DRM technologies is fast evolving as new technology vendors (Google, Adobe, Apple and others) enter the mass-scale DRM market in order to grab a slice away from the market leader (Microsoft PlayReady). Thus, it is worth re-testing such results every now and then.
This is a follow-up to another question I asked, but with more precise information.
I have two fundamentally identical web pages that demo WebRTC, one using XSockets as the backend signaling layer, and one using SignalR as the backend signaling layer.
The two backends are fundamentally identical, differing only at the points where they (obviously) have different ways of sending data down to the client. Similarly, the TypeScript/JavaScript WebRTC code on the two clients is completely identical, as I've abstracted out the signaling layer.
The problem is that the XSockets site works consistently, while the SignalR site fails (mostly consistently, though not completely). Usually it fails while calling peerConnection.setLocalDescription(), but it can also fail silently, or it can (sometimes) even work.
You can see the two different pages in operation here:
XSockets site: http://xsockets.demo.alanta.com/
SignalR site: http://signalr.demo.alanta.com/
The source code for both is at https://bitbucket.org/smithkl42/xsockets.webrtc, with the XSockets version on the xsockets branch, and the SignalR version on the signalr branch.
So my question is: does anybody know of any reason why using one signaling layer instead of another would make any difference to WebRTC? For instance, does one or the other send back Unicode strings instead of ANSI? Or have I misdiagnosed the problem, and the real difference is elsewhere?
Figured it out. Turns out that SignalR 1.0 RC1 has a bug in it that changes any "+" in a string into a space. So lines in the SDP that looked like this:
a=ice-pwd:qZFVvgfnSso1b8UV1SUDd2+z
were getting changed into this:
a=ice-pwd:qZFVvgfnSso1b8UV1SUDd2 z
But because not every SDP had a "+" on a critical line, sometimes it would work. That explains everything.
The bug has been reported to the good folks working on SignalR (see https://github.com/SignalR/SignalR/issues/1194), and in the meantime, a simple encodeURIComponent() and decodeURIComponent() around the strings in question fixed it.
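For anyone hitting the same bug before a fix ships, the workaround looks roughly like this; the hub shape and method names are stand-ins from this demo and will differ in your project:

    // Encode the SDP before it crosses SignalR, decode it on arrival,
    // so "+" survives the round trip as "%2B".
    function sendSdp(hub: { server: { send(msg: string): void } }, sdp: string): void {
      hub.server.send(encodeURIComponent(sdp));
    }

    function onSdpReceived(encodedSdp: string): string {
      return decodeURIComponent(encodedSdp); // ice-pwd lines arrive intact
    }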
I'm using HttpBuilder (a Groovy HTTP library built on top of Apache's HttpClient) to send requests to the Last.fm API. The docs for this API say you should set the User-Agent header to "something appropriate" in order to reduce your chances of getting blocked.
Any idea what kind of values would be deemed appropriate?
The name of your application including a version number?
I work for Last.fm. "Appropriate" means something which will identify your app in a helpful way to us when we're looking at our logs. Examples of when we use this information:
investigating bugs or odd behaviour; for example if you've found an edge case we don't handle, or are accidentally causing unusual load on a system
investigating behaviour that we think is inappropriate; we might want to get in touch to help your application work better with our services
judging which API methods are used, how often, and by whom, in order to do capacity planning or to get general statistics on the API ecosystem
A helpful (appropriate) User-Agent:
tells us the name of your application (preferably something unique and easy to find on Google!)
tells us the specific version of your application
might also contain a URL at which to find out more, e.g. your application's homepage
Examples of unhelpful (inappropriate) User-Agents:
the same as any of the popular web browsers
the default user-agent for your HTTP Client library (e.g. curl/7.10.6 or PEAR HTTP_Request)
We're aware that it's not possible to change the User-Agent sent when your application is browser-based (e.g. JavaScript or Flash) and don't expect you to do so. (That shouldn't be a problem in your case.)
If you're using a 3rd party Last.fm API library, such as one of the ones listed at http://www.last.fm/api/downloads, then we would prefer it if you added extra information to the User-Agent to identify your application, but left the library name and version in there as well. This is immensely useful when tracking down bugs (in either our service or in the client libraries).
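Putting those rules together, a User-Agent along these lines ticks every box: application name, version, a URL, and the client library's identifier left intact. All names below are invented, and the same header can be set from Groovy's HttpBuilder just as well; this sketch uses TypeScript with Node's fetch (browsers won't let you override it, as noted above):

    const USER_AGENT =
      "MyScrobbler/1.4.2 (+https://example.com/myscrobbler) some-lastfm-lib/0.9";

    // Illustrative request against the Last.fm web service root.
    fetch(
      "https://ws.audioscrobbler.com/2.0/?method=chart.gettoptracks&api_key=YOUR_KEY&format=json",
      { headers: { "User-Agent": USER_AGENT } }
    ).then((res) => console.log(res.status));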
When using the networking API in BB OS 5.0 (ConnectionFactory, etc.) there are a ton of options for configuring the connection. How much of this is it appropriate/expected to expose to the end user of the application?
Certainly, I will be setting what I think are appropriate defaults for my application, but some things (e.g. preferred and disallowed transports) seem like they are questions that the user can or should answer.
Is there any kind of best practice here?
Yes, this is one of the things I dislike in BB development: you never know what type of connectivity a BB user has on the device. As a result, the code to detect a usable transport is complicated (even though RIM has some sample code on how to do this).
In the app development I've been involved in, there have been different approaches to this. However, each app had networking settings that the user was expected to populate.
For instance, one app asks the user to select a transport type on app startup. :) This is definitely an ideal solution for developers, but not for users (they simply may not know what a "network transport" is). If the target audience mostly consists of advanced users, then this will work well.
Another approach is to use some code to auto-detect a usable transport type; however, this approach may also fail (for instance, if the code tries to cover a wide range of OS versions and device makes, there will most likely be some unexpected exclusions). So, as a fallback scenario, it is good to have a networking settings screen where the user can choose which transports to use (maybe just one) and set the APN settings.
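The auto-detect-with-fallback flow can be sketched in a platform-neutral way; the transport names and the tryConnect probe below are hypothetical stand-ins for what ConnectionFactory and the transport APIs actually do on the device:

    type Transport = "WIFI" | "BES" | "BIS" | "TCP" | "WAP2";

    // Try each preferred transport in order; return the first usable one,
    // or null so the app can surface its networking settings screen.
    async function detectTransport(
      preferred: Transport[],
      tryConnect: (t: Transport) => Promise<boolean>
    ): Promise<Transport | null> {
      for (const t of preferred) {
        try {
          if (await tryConnect(t)) return t;
        } catch {
          // ignore and fall through to the next candidate
        }
      }
      return null;
    }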
It depends on the target audience. You could do a simplified view with basic options and an advanced view with everything under the sun that is configurable, with a reset button in case the user gets lost.