Meteor: serve remote video files as a reverse proxy

A bit of a weird question :) I have a video at https://s3.amazonaws.com/mybucket/myvideo.mp4 and I would like my Meteor server to respond to http://mywebsite.com/myvideo.mp4 exactly as if the video from S3 lived there, so that I can stream, seek, etc. That is,
<video><source src="http://mywebsite.com/myvideo.mp4" type="video/mp4"></video>
would behave exactly like
<video><source src="https://s3.amazonaws.com/mybucket/myvideo.mp4" type="video/mp4"></video>
while not hosting anything on my server.
This doesn't do the job: it seems to download the whole file (my server ran out of memory...).
The reason for this odd request is that Safari does not handle CORS well (see here and here) and I can't paint a video from a different domain onto a canvas... setting crossOrigin and configuring CORS correctly in AWS doesn't solve it.
And just to check: there is no simpler AWS configuration that would make the content appear to come from http://mywebsite.com, right?

Lots going on here. You're on the right track with a reverse proxy, as that's the only way to change the apparent origin of the files. Meteor itself doesn't give you much for this kind of AWS integration yet, so people end up putting a reverse proxy in front of the app; Nginx is the most popular choice among Meteor devs right now.
The SO question you posted gives directions for serving directly from the S3 bucket, but it sounds like what you want is to serve content from the bucket (that you own?) and have its source appear to be your website. That in itself isn't hard: you need to configure your AWS setup so that the content is streamed from S3 to your hosted space and on through to the app. CloudFront is an AWS service that can handle this for you, but without knowing more about your server/host setup it's hard to troubleshoot further. You'll have to specify an origin domain name when delivering content through CloudFront, and you should be able to arrange for that to be www.yourwebsite.com, particularly if you're already hosting on AWS and using Nginx.
https://aws.amazon.com/blogs/aws/using-amazon-cloudfront-for-video-streaming/
https://www.jwplayer.com/blog/delivering-hls-with-amazon-cloudfront/
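For reference, if you do end up proxying through Meteor itself, the out-of-memory symptom usually means the whole file is being buffered instead of streamed, and seeking breaks unless the Range header is forwarded. Here is a minimal sketch of the streaming approach on the Meteor server, assuming the bucket URL from the question (an untested illustration, not an official recipe):

import { WebApp } from 'meteor/webapp';
import https from 'https';

const S3_URL = 'https://s3.amazonaws.com/mybucket/myvideo.mp4';

// Serve /myvideo.mp4 by piping the S3 response straight through,
// forwarding Range so the <video> element can seek.
WebApp.connectHandlers.use('/myvideo.mp4', (req, res) => {
  const headers = req.headers.range ? { Range: req.headers.range } : {};
  https.get(S3_URL, { headers }, (s3Res) => {
    // Relay 200/206 plus the headers the player relies on, then stream the body.
    const relay = { 'Accept-Ranges': 'bytes' };
    ['content-type', 'content-length', 'content-range'].forEach((h) => {
      if (s3Res.headers[h]) relay[h] = s3Res.headers[h];
    });
    res.writeHead(s3Res.statusCode, relay);
    s3Res.pipe(res); // no buffering, so memory stays flat
  }).on('error', () => {
    res.writeHead(502);
    res.end();
  });
});

That said, a dedicated Nginx proxy in front of the app does the same job with fewer moving parts, which is why it's the usual recommendation.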

Related

Nextjs in Production

I know that Next.js can do SSR.
I have questions about production.
In my experience (without SSR), the frontend builds static files and hands the folder to the backend to integrate, so there is only one server.
I want to know how this works if we want to use SSR with Next.js (not a static site).
Do we need to host two servers? One to host the backend (Node.js, Java, …) and another to host the frontend (Next.js)?
If I use Node.js as the backend language, can I write all APIs in Next.js? (I mean frontend and backend code all in Next.js, so that there is only one server.)
If the answer to question one is yes: I see the documentation uses next start to host the server; is that strong enough to serve many users?
Do we need to host two servers? One to host the backend (Node.js, Java, …) and another to host the frontend (Next.js)?
In most cases you would have a single server producing the SSR as well as rendering the markup required for the client. The associated JavaScript files that only the browser needs can be served from an asset-serving server (e.g., an S3 bucket). You would front the whole thing with a CDN so your server does not receive all public requests.
If I use Node.js as the backend language, can I write all APIs in Next.js? (I mean frontend and backend code all in Next.js, so that there is only one server.)
Yes, for simple use cases you can check out the API routes that Next.js ships with: https://nextjs.org/docs/api-routes/introduction
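As a tiny illustration (the file name is just the conventional example from the docs), an API route is a file under pages/api that exports a request handler, so the same Next.js server serves both your pages and your API:

// pages/api/hello.js
// Next.js maps this file to the /api/hello endpoint on the same server.
export default function handler(req, res) {
  res.status(200).json({ message: 'Hello from Next.js' });
}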
If the answer to question one is yes: I see the documentation uses next start to host the server; is that strong enough to serve many users?
You would use next build and then next start. With its latest optimizations Next.js adds static site generation (SSG) - sorry, one more confusing term - which lets your backend Node.js app receive far fewer requests and be smart about serving repetitive ones. However, even with these abilities you should front the whole thing with a CDN to ensure high availability and low operating costs.
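Concretely, that's just the two standard scripts in package.json (the port is whatever you choose):

{
  "scripts": {
    "build": "next build",
    "start": "next start -p 3000"
  }
}

next build produces the optimized production bundle (including any SSG pages), and next start serves it.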

Play radio stream in Alexa Skill

I would like to develop a simple Alexa skill which should do only one thing.
By invoking it with:
Alexa, play Radio Luxembourg
it should play http://sc-rtllive.newmedia.lu
I found examples of how to play media files hosted on an external server, but none that play a stream.
Is it possible at all?
Edit
There is actually not really a need for this at all.
The built-in TuneIn-support can do that for you.
Provided your pronunciation is good (I never seem to get it right), this should work:
Alexa, play RTL Radio Lëtzebuerg on tunein
Based on this: https://developer.amazon.com/docs/custom-skills/audioplayer-interface-reference.html#play
Identifies the location of audio content at a remote HTTPS location.
The audio file must be hosted at an Internet-accessible HTTPS endpoint. HTTPS is required, and the domain hosting the files must present a valid, trusted SSL certificate. Self-signed certificates cannot be used. Many content hosting services provide this. For example, you could host your files at a service such as Amazon Simple Storage Service (Amazon S3) (an Amazon Web Services offering).
The supported formats for the audio file include AAC/MP4, MP3, HLS, PLS and M3U. Bitrates: 16kbps to 384 kbps.
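To make that concrete, here's a minimal sketch of a Lambda handler returning the AudioPlayer.Play directive from that reference; the stream URL is a placeholder since, per the quoted docs, it must be HTTPS:

// Bare-bones Lambda handler that starts playback of a stream.
exports.handler = (event, context, callback) => {
  callback(null, {
    version: '1.0',
    response: {
      shouldEndSession: true,
      directives: [{
        type: 'AudioPlayer.Play',
        playBehavior: 'REPLACE_ALL',
        audioItem: {
          stream: {
            url: 'https://example.com/radio-stream', // placeholder: must be a valid HTTPS endpoint
            token: 'radio-luxembourg',
            offsetInMilliseconds: 0
          }
        }
      }]
    }
  });
};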
I pretty much copied and pasted this code into a Lambda function (1m requests per month via the free tier is plenty) and just changed the podcastURL:
https://github.com/bespoken/super-simple-audio-player/blob/Part1/README.md
The instructions in the README talk about setting up bespoken-tools, which indeed are great for debugging, but if you want to run it independently of your own machine, you can use their code in Lambda. Their guide to setting up the Alexa skill with Amazon will work perfectly, with the exception that your HTTPS endpoint for the service will be the Lambda endpoint, not your own box. Here are some basics on Lambda: https://docs.aws.amazon.com/lambda/latest/dg/getting-started.html
The only issue, as @Tiantian correctly points out, is that the Radio Luxembourg stream isn't HTTPS. Maybe you can proxy it or something. Looks like this would do the trick: https://webrobots.io/how-to-setup-a-free-proxy-server-on-amazon-ec2/
(You'd want to restrict that so it only proxies traffic to the Radio Luxembourg link.)

Only use Browserstacklocal for certain URLs

I'm currently trying to get my Protractor tests working with BrowserStack. My (automated) tests run against a staging server that can only be accessed from within a VPN. I am using BrowserStackLocal to access the staging server without a problem.
My question: is it possible to direct ONLY the staging server URLs through BrowserStackLocal? For example, during my test I go to PayPal Sandbox to purchase an item. I would like the PayPal connection to be made directly from the BrowserStack remote machine.
The "-only" parameter restricts the Local Testing access to specified local servers and/or folders. Consider the following example:
./BrowserStackLocal ACCESS_KEY -only localhost,443,1
In this case, only the traffic for the domain "localhost" will be directed to your private server; all other URLs/domains will be accessed directly from our remote VMs.
More details on all the Local Testing modifiers can be found here.
Seems like BrowserStackLocal's -only flag does the trick. Refer to this article for more info. Although I wouldn't mind someone explaining what the parameters are.
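If I read the docs right, the comma-separated triple is host,port,ssl-flag (1 for HTTPS, 0 for plain HTTP) - treat that as my reading rather than gospel. So restricting the tunnel to a hypothetical staging host would look like:

./BrowserStackLocal ACCESS_KEY -only staging.example.com,443,1

Everything else (PayPal included) would then go straight out from the BrowserStack VM.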

Using Cloudfront to expose ElasticSearch REST API in read only (GET/HEAD)

I want to let my clients speak directly with the ElasticSearch REST API, obviously preventing them from performing any data or configuration changes.
I had a look at the ElasticSearch REST interface and I noticed a pattern: HTTP GET requests are pretty safe (harmless queries and cluster status).
So I thought I could use CloudFront as a CDN/proxy that only allows GET/HEAD methods (you can impose such a restriction in the main configuration).
So far so good, all is set up. But things don't work, because I would need to open my EC2 security group to the world in order to be reachable from CloudFront! I don't want this, really!
When I use EC2 with RDS, I can simply allow access to my EC2 security group in RDS security groups. Why can't I do this with CloudFront? Or can I?
Ideas?
edit: It's not documented, but ES accepts facet queries, which involve a (JSON) body, not only with POST but also with GET. This breaks the HTTP recommendation (per RFC 2616) by not ignoring the body of a GET request (source).
This matters because, as pointed out, exposing the ES REST interface directly makes it easy to mount a DoS attack using complex queries. I'm still convinced, though, that having one less proxy is worth it.
edit: Another option for me would be to skip CloudFront and add a security layer as an ElasticSearch plugin, as shown here.
I ended up coding my own plugin. Surprisingly there was nothing quite like it around.
No proxies, no Jetty, no Tomcat.
Just the original ES REST module and my RestFilter, using a minimum of reflection to obtain the remote address of the requests.
enjoy:
https://github.com/sscarduzio/elasticsearch-readonlyrest-plugin
Note that even a GET request can be harmful in Elasticsearch. A query which simply takes too many resources to compute will bring down your cluster. Facets are a good way to do this.
I'd recommend writing a simple REST API you place in front of ES so you get much more control over what hits your search cluster. If that's not an option you could consider running Nginx on your ES boxes to act as a local reverse proxy, which will give you the same control (and a whole lot more) as CloudFront does. Then you'd only have to open up Nginx to the world, instead of ES.
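If you go the "simple REST API in front of ES" route, the core of it is only a few lines. A rough sketch in Node (hypothetical host/port, no auth, untested) that also drops request bodies, so body-carrying GET queries can't sneak through:

const http = require('http');

const ES_HOST = '127.0.0.1'; // assumed local ES node
const ES_PORT = 9200;

http.createServer((req, res) => {
  // Reject anything that isn't a read.
  if (req.method !== 'GET' && req.method !== 'HEAD') {
    res.writeHead(405, { Allow: 'GET, HEAD' });
    res.end();
    return;
  }
  // Strip body-related headers: ES accepts query DSL in a GET body,
  // which a read-only proxy probably shouldn't forward.
  const { 'content-length': _cl, 'transfer-encoding': _te, ...headers } = req.headers;
  const upstream = http.request(
    { host: ES_HOST, port: ES_PORT, method: req.method, path: req.url, headers },
    (esRes) => {
      res.writeHead(esRes.statusCode, esRes.headers);
      esRes.pipe(res);
    }
  );
  upstream.on('error', () => { res.writeHead(502); res.end(); });
  upstream.end(); // deliberately forward no body
}).listen(8080);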
A way to do this in AWS would be:
Set up an Application Load Balancer in front of your ES cluster. Create a TLS cert for the ALB and serve https. Open the ES security group to the ALB.
Set up CloudFront and use the ALB as origin. Pass a custom header with a secret value (for WAF, see next point).
Set up WAF on your ALB to only allow requests that contain the custom header with the secret value. Now all requests have to go through CloudFront.
Set up a Lambda@Edge function on your CloudFront distribution to either remove the body from GET requests, or deny such requests.
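For that last step, a viewer-request Lambda@Edge function along these lines could do the "deny" variant (a sketch only; note that "include body" must be enabled on the trigger for request.body to be populated):

'use strict';
// Viewer-request handler: refuse GETs that smuggle a query body.
exports.handler = (event, context, callback) => {
  const request = event.Records[0].cf.request;
  if (request.method === 'GET' && request.body && request.body.data) {
    callback(null, {
      status: '403',
      statusDescription: 'Forbidden',
      body: 'GET requests with a body are not allowed.'
    });
    return;
  }
  callback(null, request); // pass everything else through to the origin
};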
It's quite some work, but there are advantages over the plugin, e.g.:
CloudFront comes with free network DDoS protection.
CloudFront gives your users lower latency to ES because of the fast CloudFront network and global PoPs.
It opens up many options to use CloudFront, WAF and Lambda@Edge to further protect your ES cluster.
I’m working on sample code in CDK to set all of this up. Will report back when that’s ready.

What are your experiences implementing/using WebDAV?

For a current project, I was thinking of implementing WebDAV to present a virtual file store that clients can access. I have only done Google research so far, but it looks like I can get away with implementing only two methods:
GET, PROPFIND
I think that this is great. I was just curious, though, about implementing file uploading via:
PUT
I haven't implemented it, but it seems simple enough. My only concern is whether a progress meter will be displayed for the user if they are using standard Vista Explorer or OS X Finder.
I guess I'm looking for some stories from people experienced with WebDAV.
For many WebDAV clients, and even for read-only access, you will also need to support OPTIONS. If you want to support uploads, PUT obviously is required, and some clients (Mac OS X?) will require locking support.
(BTW, RFC 4918 is the authoritative source of information.)
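The OPTIONS part is tiny; the point is advertising the DAV compliance class (1 = no locking, 2 = LOCK/UNLOCK supported) and the verbs you actually implement. A bare sketch in Node, with the GET/PROPFIND handling left out:

const http = require('http');

http.createServer((req, res) => {
  if (req.method === 'OPTIONS') {
    res.writeHead(200, {
      'DAV': '1',                              // class 1: no locking support
      'Allow': 'OPTIONS, GET, HEAD, PROPFIND', // what this server answers to
      'Content-Length': '0'
    });
    res.end();
    return;
  }
  // GET and PROPFIND handling would go here.
  res.writeHead(405, { Allow: 'OPTIONS, GET, HEAD, PROPFIND' });
  res.end();
}).listen(8080);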
I implemented most of the WebDAV protocol in about a day's work: http://github.com/nfarina/simpledav
I wrote it in Python to run on Google App Engine, and I expect any other language would be a similar effort. All in all, it's about two pages of code.
I implemented following methods: OPTIONS, PROPFIND, MKCOL, DELETE, MOVE, PUT, GET. So far I've tested Transmit and Cyberduck and both work great with it.
Hopefully this can provide some guidance for the next person out there interested in implementing a WebDAV server. It's not a difficult protocol, it's just very dense with abstracted language like 'depth' and 'collections' and blah.
Here's the spec: http://www.webdav.org/specs/rfc4918.html
But the best way to understand the protocol is to watch a client interacting with a working server. I used Transmit to connect to Box.net's WebDAV server and monitored traffic with Charles Proxy.
Bit late to the party, but I've implemented most of the WebDAV protocol and I can tell you with confidence you'll need to implement most of it.
For OS X you'll need class-2 WebDAV support, which includes LOCK and UNLOCK (I found it particularly difficult to fully implement the HTTP If: header, but for Finder you'll only need a bit of that).
These are some of my personal findings:
http://sabre.io/dav/clients/windows/
http://sabre.io/dav/clients/finder/
Hope this helps
If you run Apache Jackrabbit under, say, Tomcat, it can be configured to offer WebDAV and store uploaded files. Perhaps that will be a useful model, or even a good enough replacement for the planned implementation.
Apache Jackrabbit Support for WebDAV
Also, you may want to be aware of the BitKinex client (free 30 day trial), which I have found to be a useful tool for testing a WebDAV server.
BitKinex Home Page
We use WebDAV internally to provide a folder-based view of some file shares to clients outside of our firewall. We're using IIS6 for this.
Basically, it boils down to creating a Virtual Directory in IIS that maps to each network file system you want to make available via WebDAV. The setup, roughly:
Set it up with the content coming from "A share located on another computer", using the UNC path to the share for the Network Directory value.
Turn on all options except "Index this resource", and disable all default content pages.
Turn on Windows Integrated Authentication (ours is set up using SSL as well). I have the root set up to deny access to anonymous users and allow access to any authenticated user.
Add a wildcard MIME mapping (.* to application/octet-stream).
Enable the WebDAV web service extension in IIS.
Finally, set up the web server to delegate permissions to all the file servers you may be accessing, so it can pass on the user's credentials.
If you have Macintosh clients you may also need an ISAPI filter that maps 401 to 403 errors for Darwin clients. Microsoft and Apple disagree on how to handle the situation when you don't have permission to write to a directory: Apple keeps resending the credentials on a 401 (Unauthorized) error, and translating it to a 403 (Forbidden) keeps this from happening. By default Apple likes to write a "dot" file to every directory it accesses, so navigating through directories where you don't have write access will end up crashing the Finder if you don't have the filter. I have source code for this if needed.
This is all off the top of my head. It's possible (probable?) that I may have missed something. Feel free to contact me via the contact information on my web site if you have problems.
We have a WebDAV servlet in our web-based product.
I've found Apache Jackrabbit a good help for implementing it. However, WebDAV is a serious P.I.T.A. when it comes to client-side support.
Many client implementations differ widely in their behavior, and you will most likely have to support several different kinds of buggy implementations.
Some examples:
MS Vista only supports authentication over SSL.
Most Windows-based WebDAV clients assume your WebDAV server/servlet is a SharePoint server and will act accordingly (thus not according to the WebDAV protocol).
One example of this is that you NEED to allow an unauthenticated LOCK request on the root of your server (i.e. yourdomain.com/, not yourdomain.com/where/webdav/should/live), or else you won't be able to get write access in MS Windows.
(This is a serious P.I.T.A. on a Tomcat machine, where your stuff usually lives at server.com/servlets/paths/thelocation.)
Most (all?) versions of MS Office respond differently to WebDAV links.
I guess my point is that integrating WebDAV support into an existing product can be a LOT harder than you would expect, and if possible I would advise using a (semi-)standalone WebDAV server such as the Jackrabbit WebDAV server or Apache mod_dav.
I've found OS X's Finder WebDAV support to be really finicky. In order to get read-write support, you have to implement LOCK, in addition to other bits.
I wrote a WebDAV interface to a Postgres database, where Python modules were stored in the database in a hierarchical folder-like structure. Accessing it with cadaver worked fine, and IIRC a GUI Windows browser worked too, but Finder refused to mount the share as anything other than read-only.
So I don't know if it would give a progress bar. The files I was dealing with were small enough that a read/copy from them was virtually instantaneous. I think a copy of a large file using the Finder would probably give a progress bar - it does for any other type of mounted share.
Here is another open-source project for WSGI WebDAV:
http://code.google.com/p/wsgidav/
where I picked up the PyFileServer project.
