Could anyone walk me through the steps of implementing Encrypted Media Extensions (EME) using videojs-contrib-eme on a local server (with an access point) that has no internet connection?
Users connect to the local server over WiFi from their mobile devices and play back the videos in a browser.
So my question is as follows:
EME implementations use the following external components:
Key System
Content Decryption Module (CDM)
License (Key) server
Packaging service
(for more info, see https://developers.google.com/web/fundamentals/media/eme)
Which of these components are already provided by videojs-contrib-eme, and which ones do I need to implement myself?
It sounds like you are building for an offline case. The main DRMs supported by most browsers (Widevine, FairPlay and PlayReady) usually require an internet connection for the license request and response.
It is possible to have persistent licenses, i.e. a DRM license that keeps working offline for download-and-go use cases like watching movies offline, but even this requires internet connectivity for the original license request and response.
If you plan to implement your own proprietary DRM system, then you will need more than changes to the player itself (video.js in your example).
You will need to implement some form of key server, your own CDM and some form of packager.
It's certainly possible to do all this, but it is a lot of work. If this is not just for a learning exercise, it may be more practical to implement some simple encryption solution on your server and then add simple decryption functionality just before you play the content. This is not as secure but may be good enough for your needs.
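For illustration only, here is a minimal sketch of that "simple encryption" idea, assuming a Go backend (the question doesn't say what the server is written in) and a placeholder file name; the player side would need a matching decryption step before playback:

```go
// encryptmedia.go - hypothetical sketch of the "simple encryption" approach.
// Encrypts a media file with AES-256-GCM; file names and key handling are placeholders.
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"io"
	"log"
	"os"
)

func main() {
	// Generate a random 256-bit key. In practice you would persist this and
	// deliver it to the player over your local network in whatever way you trust.
	key := make([]byte, 32)
	if _, err := io.ReadFull(rand.Reader, key); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("video.key", key, 0o600); err != nil {
		log.Fatal(err)
	}

	plaintext, err := os.ReadFile("video.mp4") // placeholder input file
	if err != nil {
		log.Fatal(err)
	}

	block, err := aes.NewCipher(key)
	if err != nil {
		log.Fatal(err)
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		log.Fatal(err)
	}

	// Prepend the nonce so the decrypting side can recover it.
	nonce := make([]byte, gcm.NonceSize())
	if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
		log.Fatal(err)
	}
	ciphertext := gcm.Seal(nonce, nonce, plaintext, nil)

	if err := os.WriteFile("video.mp4.enc", ciphertext, 0o600); err != nil {
		log.Fatal(err)
	}
}
```

Note that encrypting whole files like this is not streaming-friendly for large videos; real packagers encrypt per segment, which is essentially what the packaging service in the EME component list does.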
Alternatively, if you really want DRM-level security, it might be worth seeing whether you can allow limited internet access just for the DRM license requests and responses, which are typically very small. This would also let you leverage standard browsers and packagers.
Related
I have developed a tool that loads a configuration file at runtime. Some of the values are encrypted with an AES key.
The tool will be scheduled to run on a regular basis from a remote machine. What is an acceptable way to provide the decryption key to the program? It has a command-line interface that I can pass it through. I currently see three options:
Provide the full key via the CLI, meaning the key is available in the clear at the OS configuration level (e.g. in a cron job).
Hardcode the key into the binary via source code. Not a good idea for a number of reasons (decompiling, less portable).
Use a combination of 1 and 2, i.e. have a base key in the executable and accept a partial key via the CLI. This way I can use the same build for multiple machines, but it doesn't solve the decompiling problem.
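For concreteness, here is a minimal Go sketch of option 3, combining an embedded base secret with a CLI-supplied part through a KDF; the names, flag and parameters are illustrative only, not a recommendation:

```go
// derivekey.go - hypothetical sketch of option 3: an embedded base secret
// combined with a partial key passed on the command line.
package main

import (
	"crypto/sha256"
	"flag"
	"fmt"
	"io"

	"golang.org/x/crypto/hkdf"
)

// baseSecret is compiled into the binary; anyone who decompiles the binary can
// read it, which is why it is only half of the picture.
var baseSecret = []byte("example-base-secret-not-for-production")

func main() {
	partial := flag.String("keypart", "", "partial key supplied by the scheduler/CLI")
	flag.Parse()

	// Derive the real AES key from both halves with HKDF-SHA256.
	kdf := hkdf.New(sha256.New, append(baseSecret, []byte(*partial)...), nil, []byte("config-decryption"))
	key := make([]byte, 32)
	if _, err := io.ReadFull(kdf, key); err != nil {
		panic(err)
	}
	fmt.Printf("derived %d-byte AES key\n", len(key))
	// ...use key with crypto/aes + cipher.NewGCM to decrypt the config values...
}
```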
It is worth noting that I am not too worried about someone decompiling the exe to get the key; I'm sure there are ways I could address that via obfuscation etc.
Ultimately, if I were really security-conscious, I wouldn't be storing the key anywhere.
I'd like to hear what is considered best practice. Thanks.
I have added the Go tag because the tool is written in Go, just in case there is a magical Go package that might help; other than that, this question is not really specific to a technology.
UPDATE: I am trying to protect the key from external attackers, not the regular physical user of the machine.
Best practice for this kind of system is one of two things:
A sysadmin authenticates during startup, providing a password at the console (a rough sketch of this follows below). This is often extremely inconvenient, but it is pretty easy to implement.
A hardware device is used to hold the credential. The most common and effective are called HSMs (Hardware Security Modules). They come in all kinds of formats, from USB keys to plug-in boards to external rack-mounted devices. HSMs come with their own API that you would need to interface with. The main feature of an HSM is that it never divulges its key, and it has physical safeguards to protect against the key being extracted. Your app sends it some data and it signs the data and returns it. That proves that the hardware module was connected to this machine.
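To make the first approach concrete, here is a rough Go sketch (the salt handling and KDF parameters are placeholders): prompt the operator once at startup and derive the AES key from the passphrase, so the key itself never touches disk.

```go
// startupkey.go - hypothetical sketch: a sysadmin types a passphrase at startup
// and the AES key is derived from it in memory, never written to disk.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/argon2"
	"golang.org/x/term"
)

func main() {
	fmt.Fprint(os.Stderr, "Passphrase: ")
	pass, err := term.ReadPassword(int(os.Stdin.Fd()))
	fmt.Fprintln(os.Stderr)
	if err != nil {
		panic(err)
	}

	// The salt would normally be stored alongside the encrypted config; fixed here for brevity.
	salt := []byte("example-salt-store-me-with-config")
	key := argon2.IDKey(pass, salt, 1, 64*1024, 4, 32)

	fmt.Fprintf(os.Stderr, "derived %d-byte key; ready to decrypt config\n", len(key))
	// ...pass key to crypto/aes + cipher.NewGCM to decrypt the config values...
}
```

As the answer notes, this is awkward for anything scheduled or unattended, which is where the hardware option below becomes attractive.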
For specific OSes, you can make use of the local secure credential storage, which can provide some reasonable protection. Windows and OS X in particular have these, generally keyed to some credential the admin is required to type at startup. I'm not aware of a particularly effective one for Linux, and in general this is pretty inconvenient in a server setting (because of manual sysadmin intervention).
In every case that I've worked on, an HSM was the best solution in the end. For simple uses (like starting an application), you can get them for a few hundred bucks. For a little more "roll-your-own," I've seen them as cheap as $50. (I'm not reviewing these particularly. I've mostly worked with a bit more expensive ones, but the basic idea is the same.)
I would like to create encrypted media (mp3 and mp4) that will need some form of authentication to play back. I would prefer playback in VLC, but a custom player, or a customized version of VLC, would be fine if necessary. Playback should be local; no streaming.
The problem, however, is that I've read a number of threads and articles on this, and most seem to suggest that in the end a user can simply record the final stream, e.g. using Stereo Mix.
What are the viable options, if any, to prevent this or at the least, make it extremely difficult?
Protection against screen capture software is one of the most difficult goals for any DRM client implementation to accomplish, due to the extensibility and flexibility of a modern computer's graphics system.
My team performed a set of experiments on this topic a few months ago and we found only one DRM client implementation that was able to prevent screen capture: Microsoft PlayReady running in Internet Explorer 11 via HTML Encrypted Media Extensions.
This configuration resulted in a black rectangle being recorded, instead of the video picture. Using Microsoft PlayReady in other media players (e.g. a Silverlight browser plugin) also failed to protect against screen capture, so this level of protection is specific to the implementation built into Internet Explorer 11, at least today.
You can try out Microsoft PlayReady in the successful configuration here: http://ie.microsoft.com/testdrive/html5/eme/
This approach would not, however, fulfill your requirements regarding media format and "no streaming". Such a scenario is not directly in the scope of modern DRM technologies, so I recommend you re-architect your solution. Use DASH as the video format and stream it (even locally from the same computer) to a web-app-based player. This is a setup I have seen before on projects that need local playback while still enabling the use of modern media delivery and DRM technologies.
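As an illustration of "stream it locally", a tiny sketch (assuming a Go helper and a ./media directory containing the manifest, segments and player page, all hypothetical) that serves the content to a browser-based player on the loopback interface only:

```go
// localdash.go - hypothetical local web server that serves a DASH manifest,
// segments and a browser-based player page from ./media on the same machine.
package main

import (
	"log"
	"net/http"
)

func main() {
	// The player page (e.g. one built on dash.js or Shaka Player) and the
	// .mpd/.m4s files are assumed to live under ./media.
	http.Handle("/", http.FileServer(http.Dir("./media")))
	log.Println("serving on http://127.0.0.1:8080/")
	log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))
}
```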
The field of DRM technologies is fast evolving as new technology vendors (Google, Adobe, Apple and others) enter the mass-scale DRM market in order to grab a slice away from the market leader (Microsoft PlayReady). Thus, it is worth re-testing such results every now and then.
We are working on an ASP.NET project that is required to comply with the OWASP ASVS checklist. One of the requirements is "Verify that backend TLS connection failures are logged." I couldn't find a way to achieve this, but the customer is holding us to it. Any suggestions or references? Sample code would be even better.
Here's the link to the OWASP requirement:
http://code.google.com/p/owasp-asvs/wiki/Verification_V10
Thanks in advance.
I agree this is badly worded. There is a revamp of the ASVS at the moment. Come on by and help us make things more concrete and testable.
In your instance, TLS connections are typically maintained by the operating system on behalf of application and library code that rarely, if ever, makes any real effort to validate that the TLS connection is what it says on the tin. Browsers are getting better at warning their users, but libraries are shockingly bad, and the applications that call them are worse, at checking for end-to-end errors and making wise security decisions about the state of the connection. Even basic things like certificate revocation checking should be a no-brainer, but very few libraries enable it by default.
We see this every day in mobile phone apps: it's trivial to get most mobile apps to connect via MITM proxies without giving the user any feedback that the connection is untrustworthy.
I'd like to see this requirement changed to:
"User agent software (mobile app, browser, web service, library) MUST make it clear to the end user, in terms they can readily understand, that the connection is untrustworthy, and furthermore reject the connection or require user intervention to establish an insecure connection. Such failed or insecure connections should be logged."
This would make sure that, regardless of whether it's the OS, the library or the application, someone owns this interaction, and that it has a clear security objective (no untrusted connections), a usability control that favors security, and lastly a detection control to allow pre-incident monitoring and post-incident reconstruction if a user makes a terribly poor choice (for we know they always do, given the chance).
I know this doesn't answer your question per se, but you don't mention if you use OpenSSL or WCF or ... I'd be happy to contribute a code snippet if you let me know the platform you'd like.
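In the meantime, as a platform-neutral illustration of the idea (Go rather than ASP.NET; the backend URL is a placeholder), this is roughly what logging an outbound TLS failure can look like:

```go
// tlslog.go - hypothetical sketch: call a backend over HTTPS and make sure TLS
// failures end up in the application log, per the ASVS "backend TLS connection
// failures are logged" requirement.
package main

import (
	"crypto/x509"
	"errors"
	"log"
	"net/http"
	"time"
)

func main() {
	const backendURL = "https://backend.internal.example/api/health" // placeholder

	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get(backendURL)
	if err != nil {
		// TLS handshake and certificate errors surface here wrapped in *url.Error.
		var uaErr x509.UnknownAuthorityError
		if errors.As(err, &uaErr) {
			log.Printf("backend TLS failure (untrusted certificate) for %s: %v", backendURL, err)
		} else {
			log.Printf("backend connection/TLS failure for %s: %v", backendURL, err)
		}
		return
	}
	defer resp.Body.Close()
	log.Printf("backend responded: %s", resp.Status)
}
```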
When using the networking API in BB OS 5.0 (ConnectionFactory, etc.) there are a ton of options for configuring the connection. How much of this is it appropriate/expected to expose to the end user of the application?
Certainly, I will be setting what I think are appropriate defaults for my application, but some things (e.g. preferred and disallowed transports) seem like they are questions that the user can or should answer.
Is there any kind of best practice here?
Yes, this is one of the things I dislike about BB development: you never know what type of connectivity a BB user has on the device. As a result, the code to detect a usable transport is complicated (even though RIM has some sample code on how to do this).
In the app development I've been involved in, there were different approaches to this. However, each app had networking settings that the user was expected to populate.
For instance, one app asks the user to select a transport type on app startup. :) This is definitely an ideal solution for developers, but not for users (they simply may not know what a "network transport" is). If the target audience mostly consists of advanced users, then this will work well.
Another approach is to use some code to auto-detect a usable transport type, but this approach may also fail (for instance, if the code tries to cover a wide range of OS versions and device makes, there will most likely be some unexpected exclusions). So as a fallback it is good to have a networking settings screen where the user can choose which transports to use (maybe just one) and enter APN settings.
It depends on the target audience. You could offer a simplified view with basic options and an advanced view with everything under the sun that is configurable, with a reset button in case the user gets lost.
For a current project, I was thinking of implementing WebDAV to present a virtual file store that clients can access. I have only done Google research so far but it looks like I can get away with only implementing two methods:
GET, PROPFIND
I think that this is great. I was just curious though. If I wanted to implement file uploading via:
PUT
I haven't implemented it, but it seems simple enough. My only concern is whether a progress meter will be displayed for the user if they are using standard Vista Explorer or OSX Finder.
I guess I'm looking for some stories from people experienced with WebDAV.
For many WebDAV clients, and even for read-only access, you will also need to support OPTIONS. If you want to support uploads, PUT obviously is required, and some clients (Mac OS X?) will require locking support.
(By the way, RFC 4918 is the authoritative source of information.)
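For a sense of scale, here is roughly what a minimal class 1 + class 2 server looks like using Go's golang.org/x/net/webdav package (just a sketch, not the asker's stack; the served directory and port are placeholders). The handler takes care of OPTIONS, PROPFIND, GET, PUT, MKCOL, COPY, MOVE, DELETE, LOCK and UNLOCK for you:

```go
// webdavserver.go - minimal WebDAV server sketch using golang.org/x/net/webdav.
// The served directory and listen address are placeholders.
package main

import (
	"log"
	"net/http"

	"golang.org/x/net/webdav"
)

func main() {
	handler := &webdav.Handler{
		FileSystem: webdav.Dir("./data"), // back the share with a local directory
		LockSystem: webdav.NewMemLS(),    // in-memory locks satisfy class 2 clients (Finder, Windows)
		Logger: func(r *http.Request, err error) {
			if err != nil {
				log.Printf("%s %s: %v", r.Method, r.URL.Path, err)
			}
		},
	}
	log.Fatal(http.ListenAndServe(":8080", handler))
}
```

The protocol machinery is the easy part; as the other answers note, most of the effort goes into coping with client quirks.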
I implemented most of the WebDAV protocol in about a day's work: http://github.com/nfarina/simpledav
I wrote it in Python to run on Google App Engine, and I expect any other language would be a similar effort. All in all, it's about two pages of code.
I implemented following methods: OPTIONS, PROPFIND, MKCOL, DELETE, MOVE, PUT, GET. So far I've tested Transmit and Cyberduck and both work great with it.
Hopefully this can provide some guidance for the next person out there interested in implementing a WebDAV server. It's not a difficult protocol, it's just very dense with abstracted language like 'depth' and 'collections' and blah.
Here's the spec: http://www.webdav.org/specs/rfc4918.html
But the best way to understand the protocol is to watch a client interacting with a working server. I used Transmit to connect to Box.net's WebDAV server and monitored traffic with Charles Proxy.
A bit late to the party, but I've implemented most of the WebDAV protocol and I can tell you with confidence you'll need to implement most of the protocol.
For OS X you'll need class 2 WebDAV support, which includes LOCK and UNLOCK (I found it particularly difficult to fully implement the HTTP If: header, but for Finder you'll only need a bit of that).
These are some of my personal findings:
http://sabre.io/dav/clients/windows/
http://sabre.io/dav/clients/finder/
Hope this helps
If you run Apache Jackrabbit under, say, Tomcat, it can be configured to offer WebDAV and store uploaded files. Perhaps that will be a useful model, or even a good enough replacement for the planned implementation.
Apache Jackrabbit Support for WebDAV
Also, you may want to be aware of the BitKinex client (free 30 day trial), which I have found to be a useful tool for testing a WebDAV server.
BitKinex Home Page
We use WebDAV internally to provide a folder-based view of some file shares to clients outside of our firewall. We're using IIS6 for this.
Basically, it boils down to creating a Virtual Directory in IIS that maps to each network file system that you want to make available via WebDAV:
Set it up with the content coming from "A share located on another computer", using the UNC path to the share for the Network Directory value.
Turn on all options except "Index this resource", and disable all default content pages.
Turn on Windows Integrated Authentication (ours is set up using SSL as well). I have the root set up to deny access to anonymous users and allow access to any authenticated user.
Add a wildcard MIME mapping (.* to application/octet-stream).
Enable the WebDAV web service extension in IIS.
Finally, set up the web server to delegate permissions to all the file servers you may be accessing so it can pass on the user's credentials.
If you have Macintosh clients you may also need an ISAPI filter that maps 401 to 403 errors for Darwin clients. Microsoft and Apple disagree on how to handle the situation when you don't have permission to write to a directory. Apple keeps resending the credentials on a 401 (Access Denied) error, translating it to a 403 (Forbidden) error keeps this from happening. By default Apple likes to write a "dot" file to every directory it accesses. Navigating through directories where you don't have write access will end up crashing the Finder if you don't have the filter. I have source code for this if needed.
This is all off the top of my head. It's possible (probable?) that I may have missed something. Feel free to contact me via the contact information on my web site if you have problems.
We have a WebDAV servlet in our web-based product.
I've found Apache Jackrabbit a good help for implementing it. However, WebDAV is a serious P.I.T.A. when it comes to client-side support.
Many client implementations differ widely in their behavior, and you will most likely have to support several different kinds of buggy implementations.
Some examples:
MS Vista only supports authentication over SSL.
Most Windows-based WebDAV clients assume your WebDAV server/servlet is a SharePoint server and act accordingly (thus not according to the WebDAV protocol).
One example of this is that you NEED to allow an unauthenticated LOCK request on the root of your server (i.e. yourdomain.com/, not yourdomain.com/where/webdav/should/live), or else you won't be able to get write access in MS Windows.
(This is a serious P.I.T.A. on a Tomcat machine, where your stuff usually lives at server.com/servlets/paths/thelocation.)
Most (all?) versions of MS Office respond differently to WebDAV links.
I guess my point is that integrating WebDAV support into an existing product can be a LOT harder than you would expect, and if possible I would advise using a (semi-)standalone WebDAV server such as the Jackrabbit WebDAV server or Apache mod_dav.
I've found OS X's Finder WebDAV support to be really finicky. In order to get read-write support, you have to implement LOCK, in addition to other bits.
I wrote a WebDAV interface to a Postgres database, where Python modules were stored in the database in a hierarchical folder-like structure. Accessing it with cadaver worked fine, and IIRC a GUI Windows browser worked too, but Finder refused to mount the share as anything other than read-only.
So, I don't know if it would give a progress bar. The files I was dealing with were small enough that a read/copy from them was virtually instantaneous. I think a copy of a large file using the Finder would probably give a progress bar; it does for any other type of mounted share.
Here is another open source project for WSGI WebDAV
http://code.google.com/p/wsgidav/
where I picked up the PyFileServer project.