Wrapper for Jupyter Notebooks

I would like to create a web app, let's call it JupyterFrontend, that serves as a wrapper for Jupyter notebooks. The main functions of JupyterFrontend are as follows:
Authenticate users using a username and password combination
Create a Jupyter notebook session
Connect to a set of shared backend compute resources to execute the code
I would like to forward/share the Jupyter sessions of each user as a route/endpoint in my web app JupyterFrontend.
What's the best way to go about this?
How do I ensure that I can provide session persistence in this kind of architecture?

Everything you describe is possible with JupyterHub.
JupyterHub has a pluggable authenticator class, so you can use a username/password combination, or any other authentication provider you can think of.
You can also attach multiple compute resources as the backend. Again, this is very flexible: you can use cloud resources, local network resources, etc. Each user's session is proxied under its own route (/user/<username>/), and spawners can attach persistent storage so a user's work survives server restarts, which covers your session persistence requirement.
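For concreteness, here is a minimal jupyterhub_config.py sketch; the spawner and volume choices below are illustrative assumptions (DockerSpawner requires the dockerspawner package), not the only way to set this up:

```python
# jupyterhub_config.py -- minimal sketch, not a production deployment
c = get_config()  # `c` is provided by JupyterHub when it loads this file

# Username/password authentication; PAMAuthenticator (the built-in default)
# checks against the host's user accounts. Any other authenticator plugs in here.
c.JupyterHub.authenticator_class = "jupyterhub.auth.PAMAuthenticator"

# Spawn each user's single-user server on the shared backend; DockerSpawner
# is one common choice (an assumption -- pick the spawner matching your compute).
c.JupyterHub.spawner_class = "dockerspawner.DockerSpawner"

# Persist each user's notebooks in a named volume so work survives restarts.
c.DockerSpawner.volumes = {"jupyterhub-user-{username}": "/home/jovyan/work"}
```

With this, the hub serves each user's session at /user/<username>/ behind its proxy, which is exactly the per-user route/endpoint you describe.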

Related

AWS CloudWatch with mobile applications

I have a backend system built in AWS, and I'm using CloudWatch across all of the services for logging and monitoring. I really like the ability to send structured JSON logs into CloudWatch that are consistent and provide a lot of context around the log message. Querying the logs to get to the root of an issue, or just exploring the health of the environment, is simple, which makes CloudWatch a must-have for my backend.
Now I'm working on the frontend side of things: mobile applications using Xamarin.Forms. I know AWS has Amplify, but I really wanted to stick with Xamarin.Forms as that's a skill set I already have and am comfortable with. Since Amplify didn't support Xamarin.Forms, I've been stuck looking at other options for logging, one of them being Microsoft's AppCenter.
If I go the AppCenter route I'll end up having to build out a mapping between my users in the AWS environment and the AppCenter installation identifiers. Before I start down that path I wanted to ask a couple of questions around best practice and the security of an alternative approach.
I'm considering using the AWS SDK for .NET: creating an IAM Policy that allows X-Ray and CloudWatch PUT operations on a specific log group, and attaching it to an IAM User. I can issue access keys for the user and embed them in my app's config files. This would let me send log data right into CloudWatch from the mobile apps using something like NLog.
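Concretely, the tight Policy I have in mind would look something like this (sketched here with boto3 for brevity; the account ID, log group name, user name, and policy name are all placeholders):

```python
import json
import boto3

# Allow only log writes to one specific log group, plus X-Ray trace uploads
# (X-Ray's PutTraceSegments doesn't support resource-level scoping, hence "*").
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
            "Resource": "arn:aws:logs:us-east-1:111122223333:log-group:/mobile/app:*",
        },
        {
            "Effect": "Allow",
            "Action": ["xray:PutTraceSegments", "xray:PutTelemetryRecords"],
            "Resource": "*",
        },
    ],
}

iam = boto3.client("iam")
resp = iam.create_policy(
    PolicyName="MobileAppTelemetryPut",
    PolicyDocument=json.dumps(policy_document),
)
iam.attach_user_policy(
    UserName="mobile-app-logger",  # the dedicated IAM User for the apps
    PolicyArn=resp["Policy"]["Arn"],
)
```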
I noticed that with AppCenter I have to provide a client secret to the app, which wouldn't be any different from providing an IAM User access key to my app for pushing into CloudWatch. I'm typically a little shy about issuing access keys from AWS, but as long as the Policy is tight I can't think of any negative side effects... other than someone flooding me with log data should they pull the key out of the app data.
An alternative route I'm exploring: instead of embedding the access keys in my config files, I could request them from my API services and hold them in memory. The only downside is that logging might be a pain when the user doesn't have internet connectivity (I'll need to look at how NLog handles targets that aren't currently available, i.e. queueing and flushing).
Is there anything else I'm not considering or is this approach a feasible solution with minimal risk?

How do I pass secrets stored in AWS Secrets Manager to a Docker container in SageMaker?

My code is in R, and I need to access an external database. I am storing the database credentials in AWS Secrets Manager.
I first tried using the paws library to get AWS secrets in R, but that would require storing an access key, secret access key, and session token, which I want to avoid.
Is there a better way to do this? I have created an IAM role for SageMaker. Is it possible to pass secrets as environment variables?
Edit: I wanted to trigger a SageMaker Processing job.
I found a simple solution: environment variables can be passed via the SageMaker SDK, which minimizes the dependencies.
https://sagemaker.readthedocs.io/en/stable/api/training/processing.html
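For example, something like this with the Python SDK (the image URI, role ARN, and secret name below are placeholders; note that you pass the name of the secret as the environment variable, not the secret value itself):

```python
from sagemaker.processing import ScriptProcessor

processor = ScriptProcessor(
    image_uri="111122223333.dkr.ecr.us-east-1.amazonaws.com/my-r-image:latest",
    command=["Rscript"],
    role="arn:aws:iam::111122223333:role/MySageMakerRole",
    instance_count=1,
    instance_type="ml.m5.xlarge",
    # Environment variables visible inside the processing container.
    env={"DB_SECRET_NAME": "prod/mydb/credentials"},
)
processor.run(code="process.R")
```

Inside the container, the R script reads DB_SECRET_NAME and fetches the actual secret at runtime.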
As another answer suggested, paws can also be used to get secrets from AWS; that would be a better approach.
You should be able to use paws for this. According to the documentation, it will use the IAM role configured for your SageMaker instance:
If you are running the package on an instance with an appropriate IAM role, Paws will use it automatically and you don’t need to do anything extra.
You only have to add the relevant access permissions (e.g. Allow secretsmanager:GetSecretValue) to the SageMaker IAM role.
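As a sketch of the lookup itself (shown with boto3 for illustration; the paws call from R is analogous, and the secret name is a placeholder):

```python
import boto3

# No access keys needed here: inside the SageMaker job, credentials come
# from the IAM role attached to the instance/job.
client = boto3.client("secretsmanager", region_name="us-east-1")
secret = client.get_secret_value(SecretId="prod/mydb/credentials")
db_credentials = secret["SecretString"]
```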

How do I give an OpenStack server permissions to call the OpenStack APIs?

I am aware of how the permission system works in AWS:
By giving an EC2 instance a specific IAM role, it is possible to give all programs running on that specific EC2 instance some set of permissions for accessing other AWS services (e.g. permission to delete an EBS volume).
Is there something similar for OpenStack? If you would like a program running on an OpenStack server to be able to make changes programmatically through the OpenStack APIs, how do you solve that?
The scenario I am thinking of is this:
You create a new Rackspace OnMetal cloud server together with an extra Rackspace Cloud Block Storage volume, and copy a big input data file to it with scp. You log in to the server with ssh and start a long-running compute job. It would be great if the compute job could, by itself, copy the result files to Rackspace Cloud Files and then unmount and delete the Rackspace Cloud Block Storage volume that was used as temporary storage during the computation.
Rackspace's Role-Based Access Control (RBAC) system is similar to AWS IAM roles. It lets you create users that are restricted to specific APIs and capabilities: for example, a read-only Cloud Files user or a Cloud Block Storage administrator.
You could create a new user that only has access to the areas required for this compute job, e.g. Cloud Block Storage and Cloud Files. Then your job would use that user's API key to request a token and call the Cloud Block Storage and Cloud Files APIs.
You did not mention a specific language, but I recommend using an SDK, as it will handle the API specifics and quirks and get you up and running more quickly.
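For example, with pyrax (the Rackspace Python SDK), the end of the compute job might look roughly like this; the username, API key, container name, file path, and volume ID are placeholders:

```python
import pyrax

# Authenticate as the restricted RBAC user created for this job.
pyrax.set_setting("identity_type", "rackspace")
pyrax.set_credentials("compute-job-user", "RESTRICTED_USER_API_KEY")

# Copy the result files to Cloud Files.
cf = pyrax.cloudfiles
container = cf.create_container("results")  # returns the container if it exists
cf.upload_file(container, "/mnt/scratch/output.dat")

# Detach and delete the temporary Cloud Block Storage volume.
cbs = pyrax.cloud_blockstorage
volume = cbs.get("VOLUME_ID")
volume.detach()
volume.delete()
```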

Connecting Firebase Simple Login and External Server

How would you use Firebase's Simple Login to allow users to upload music files?
As I understand it, it doesn't make sense to even think about storing audio files in Firebase's database, which is why I would like to store them on an external PHP server.
So the question is whether I can use Firebase's Simple Login system to let users authenticate to an external server.
I have seen Using NodeJs with Firebase - Security ... which gives some great insight, but then how would you enable large file uploads to the external server?
The technique from the answer you linked will work for your situation too; you just need to translate it into PHP and the Firebase REST API. Additionally, since the REST API isn't real-time, you must add some kind of task queue that your server can poll.
Your program would flow something like this (a polling sketch follows the list):
User logs in to Firebase with Simple Login
The user writes only to a location they are allowed to write to (based on security rules), and also writes an entry into a task queue.
Your PHP server connects with a token that allows it to read all of the users' private locations.
Your PHP server polls Firebase every once in a while to look for new tasks. If there's a new task, it validates the user and allows that user to post data to the server.
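Here is a rough polling sketch (in Python for brevity; the PHP version makes the same REST calls, and the app URL, queue path, and server token are placeholders):

```python
import time
import requests

FIREBASE_URL = "https://your-app.firebaseio.com"  # hypothetical app
SERVER_TOKEN = "SERVER_AUTH_TOKEN"  # minted with your Firebase secret

def fetch_tasks():
    # Read the whole task queue; security rules grant this token read access.
    resp = requests.get(f"{FIREBASE_URL}/queue.json", params={"auth": SERVER_TOKEN})
    resp.raise_for_status()
    return resp.json() or {}

while True:
    for task_id, task in fetch_tasks().items():
        # Validate the user named in the task, accept their upload, then
        # remove the handled task from the queue.
        requests.delete(
            f"{FIREBASE_URL}/queue/{task_id}.json", params={"auth": SERVER_TOKEN}
        )
    time.sleep(10)  # "every once in a while"
```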
All that being said, this is going to be pretty complicated. PHP's execution model does not lend itself well to real-time systems, and I strongly recommend you consider some other options:
You're using a cloud platform, Firebase, for your real-time data, so consider a cloud service for your binaries too, like filepicker.io.
If you really want to host the files yourself, use something more real-time, like Node.js. It'll save you the effort of building that task queue.

Automating control of the Skype Linux UI application

I am the author of the Sevabot project (http://sevabot-skype-bot.readthedocs.org/), a Skype bot for Linux.
As the server installation is painful (https://sevabot-skype-bot.readthedocs.org/en/latest/ubuntu.html), I'd like to figure out how to make it simpler for users to run the bot through cloud deployments... to the point where we can ask for a Skype username, password, and some cloud service credentials, and automatically deploy the bot for the customer on a virtual machine provider like Amazon EC2. We'd like to make deployment possible in such a way that one doesn't need to run VNC and connect to Xvfb just to enter the Skype login the first time.
One challenge is automating Skype Linux UI input (username, password). Skype does not seem to offer a way to enter user credentials from the command line in an automated fashion. As far as I know, the user interface is Qt-based, so you need some sort of robot to select widgets and simulate keypresses and mouse events.
What options exist for controlling black-box Linux UI applications and automating actions in them?
