I have my Dash Plotly app running in PCF. My app.py runs off an Excel file that is uploaded to PCF along with app.py, but the Excel feed changes daily, so every day I am uploading the new file to PCF using "cf push". Is it possible to avoid that, e.g. by making PCF read the Excel file from my file system instead of uploading the new Excel file to the PCF cell container every time?
Basically you need some persistent storage attached to your container so the app can refer to the current file at run time. These are the options that can be explored:
If NFS is enabled at your end, you can mount the file share and pick up the files from that location directly.
Otherwise you can have another PCF service (just to keep it separate for better management) that pulls the files from your server using SFTP and transfers them to S3, and then amend your app to read the file from S3.
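For the S3 route, a minimal sketch of what the runtime read could look like in the Dash app is below (Python with boto3 and pandas; the bucket name, object key and credential handling are assumptions you would replace with your own setup):

```python
import io

import boto3
import pandas as pd

# Assumed names: swap in your own bucket/key, and supply AWS credentials
# through a bound service or environment variables, not hard-coded values.
S3_BUCKET = "my-daily-feeds"
S3_KEY = "reports/latest.xlsx"

def load_feed():
    """Fetch the latest Excel feed from S3 and return it as a DataFrame."""
    s3 = boto3.client("s3")  # picks up credentials from the environment
    obj = s3.get_object(Bucket=S3_BUCKET, Key=S3_KEY)
    return pd.read_excel(io.BytesIO(obj["Body"].read()))

# Call load_feed() inside a Dash callback (or on an interval) so each request
# sees the current day's file instead of the one baked in at cf push time.
df = load_feed()
```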
Related
I am using the GGIR package for accelerometer data analysis. My data is in a OneDrive folder, which takes a long time to download. Is there a way I can access the OneDrive files directly without downloading them to my local machine?
My guess would be that this is not possible. If you're working with Azure there are tools available to connect to OneDrive and download/upload the data which is then processed on a separate instance. I'm guessing the same applies to your local machine, but I'm not intimately familiar with Microsoft's services to be sure.
For example:
By using Azure Logic Apps and the OneDrive connector, you can create automated tasks and workflows to manage your files, including upload, get, delete files, and more. With OneDrive, you can perform these tasks:
Build your workflow by storing files in OneDrive, or update existing files in OneDrive.
Use triggers to start your workflow when a file is created or updated within your OneDrive.
Use actions to create a file, delete a file, and more. For example, when a new Office 365 email is received with an attachment (a trigger), create a new file in OneDrive (an action).
https://learn.microsoft.com/en-us/azure/connectors/connectors-create-api-onedrive
I am new to Azure Batch. I am trying to use R in parallel with Azure Batch in RStudio to run code on a cluster. I am able to successfully start the cluster and get the example code to work properly. When I try to run my own code I am getting an error that says the cluster nodes cannot find my data files. Do I have to change my working directory to Azure Batch somehow?
Any information on how to do this is much appreciated.
I have figured out how to get Azure Batch to see my data files. Not sure if this is the most efficient way, but here is what I did:
Download a program called Microsoft Azure Storage Explorer, which runs on my local computer.
Connect to my Azure storage using the storage name and primary storage key found in the Azure portal.
In Microsoft Azure Storage Explorer, find Blob Containers, right-click and create a new container.
Upload the data files to that new container.
Right-click on the data files and choose Copy URL.
Paste the URL into R like this:
model_Data <- read.csv(paste('https://<STORAGE NAME HERE>.blob.core.windows.net/$root/k', k, '%20data%20file.csv', sep=''), header=T)
I am working on an API that needs to download a file from server A and upload it to server B in the same network. It's for internal use. Each of the files will have multiple versions and will need to be uploaded to server B multiple times, and all versions of the same file will share the same file name. This is my first time dealing with file manipulation, so please bear with me if my question sounds ignorant. Can I use HttpClient.PostAsync for the uploading part of this effort? Or can I just use Stream.CopyToAsync if it's OK to just copy it over? Thanks!
Stream.CopyToAsync copies a stream from one to another within the memory of the same server.
In your case, you can use HttpClient.PostAsync, but on the other server there should be some API to receive the stream content and save it to disk.
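The question is about C#, but the flow is the same in any stack: stream the download from server A straight into the upload request to server B, and have server B expose an endpoint that writes the incoming stream to disk. A rough Python sketch of that pattern, with hypothetical URLs, just to illustrate the shape of it:

```python
import requests

# Hypothetical endpoints: server A serves the file, server B accepts uploads
# and is responsible for writing the request body to disk on its side.
DOWNLOAD_URL = "http://server-a.internal/files/report.bin"
UPLOAD_URL = "http://server-b.internal/api/upload?name=report.bin"

def transfer():
    # stream=True keeps the file out of memory; we hand the raw response
    # stream to the POST so the bytes flow from A to B in chunks.
    with requests.get(DOWNLOAD_URL, stream=True) as src:
        src.raise_for_status()
        resp = requests.post(UPLOAD_URL, data=src.raw)
        resp.raise_for_status()

transfer()
```

The C# equivalent is the same idea: read the response stream from server A and pass it as the content of HttpClient.PostAsync to server B's upload endpoint.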
What is the best method to zip large files present in Azure Blob Storage and download them to the user as an archive file (zip/rar)? Does using Azure Batch help?
Currently we implement this function in the traditional way: we read the stream, generate the zip file and return the result, but this takes a lot of resources on the server and time for the users.
I am asking about the best technical solution and technologies (preferably using Microsoft tech).
There are a few ways you can do this **from an azure-batch-only point of view**. (For the initial zipping part, your code owns whatever zip API it uses to create the archive, but once the archive is in blob storage and you want to use it on the nodes, the options below apply.)
For the initial part of your question I found this, which could come in handy: https://microsoft.github.io/AzureTipsAndTricks/blog/tip141.html (but this is mainly for the idea's sake; you know your scenario better and will need to design your solution space accordingly.)
In options 1 and 3 below you need to make sure your code handles unzipping or unpacking the archive. Option 2 is the Batch built-in feature for *.zip files, at both the pool and task level.
Option 1: You could add your *.rar or *.zip file as an Azure Batch resource file and then unzip it in the start task, once the resource file has been downloaded. See: Azure Batch Pool Start up task to download resource file from Blob FileShare.
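A rough sketch of option 1 with the Azure Batch Python SDK (the SAS URL, file names and unzip command line are placeholders, and newer SDK versions expose the blob location as http_url; treat this as an outline rather than a drop-in implementation):

```python
from azure.batch import models as batchmodels

# Assumed: a SAS URL for the zip blob you already uploaded to storage.
ZIP_SAS_URL = "https://mystorage.blob.core.windows.net/inputs/myfiles.zip?<sas>"

# Download the archive as a resource file and unpack it when each node starts.
start_task = batchmodels.StartTask(
    command_line="/bin/bash -c 'unzip -o myfiles.zip -d $AZ_BATCH_NODE_SHARED_DIR'",
    resource_files=[
        batchmodels.ResourceFile(http_url=ZIP_SAS_URL, file_path="myfiles.zip")
    ],
    wait_for_success=True,
)
# Attach start_task to the PoolAddParameter you pass to batch_client.pool.add().
```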
Option 2: The best option, if you have a zip rather than a rar file in play, is the Azure Batch application packages feature, link here: https://learn.microsoft.com/en-us/azure/batch/batch-application-packages
The application packages feature of Azure Batch provides easy management of task applications and their deployment to the compute nodes in your pool. With application packages, you can upload and manage multiple versions of the applications your tasks run, including their supporting files. You can then automatically deploy one or more of these applications to the compute nodes in your pool.
https://learn.microsoft.com/en-us/azure/batch/batch-application-packages#application-packages
An application package is a .zip file that contains the application binaries and supporting files that are required for your tasks to run the application. Each application package represents a specific version of the application.
With regard to size: refer to the maximum allowed blob size in the document linked above.
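For option 2, referencing an already-uploaded application package from the Python SDK looks roughly like this (the application id, version and pool settings are placeholders; the package itself is created and uploaded separately, e.g. through the portal):

```python
from azure.batch import models as batchmodels

# Assumed pool settings; the application package "myapp" version "1.0" must
# already exist in the Batch account before the pool references it.
pool = batchmodels.PoolAddParameter(
    id="my-pool",
    vm_size="standard_d2_v3",
    application_package_references=[
        batchmodels.ApplicationPackageReference(application_id="myapp", version="1.0")
    ],
    # virtual_machine_configuration, target_dedicated_nodes, etc. still need to
    # be filled in before passing this to batch_client.pool.add(pool).
)
# On the nodes, Batch extracts the .zip and exposes its location through an
# AZ_BATCH_APP_PACKAGE_* environment variable that your task command line can use.
```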
Option 3: (Not sure if this will fit your scenario.) A long shot for your specific case, but you could also mount a blob container as a virtual drive when the nodes join the pool, via the mount feature in Azure Batch, and then write code in the start task (or similar) to unzip from the mounted location.
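If you do go the mount route, the blobfuse mount is configured at pool creation; a rough sketch with the Python SDK follows (storage account, container and SAS token are placeholders):

```python
from azure.batch import models as batchmodels

# Assumed storage details; the container holds the zip you want on every node.
mount_config = batchmodels.MountConfiguration(
    azure_blob_file_system_configuration=batchmodels.AzureBlobFileSystemConfiguration(
        account_name="mystorage",
        container_name="inputs",
        sas_key="<sas token>",
        relative_mount_path="inputs",  # appears under AZ_BATCH_NODE_MOUNTS_DIR
    )
)
# Pass [mount_config] as mount_configuration on PoolAddParameter, then have the
# start task (or a job preparation task) unzip from the mounted path.
```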
Hope this helps :)
I'm developing an application using the Adobe Flex 4.5 SDK in which the user should be able to export multiple files bundled in one zip file. I was thinking that I need to take the following steps to perform this task:
Create a temporary folder on the server for the user who requested the download. Since it is an anonymous type of user, I have to read state/session information to identify the user.
Copy all the requested files into the temporary folder on the server
Zip the copied files
Download the zip file from the server to the client machine
I was wondering if anybody knows of any best practices or sample code for this task.
Thanks
The ByteArray class has some methods for compressing, but this is more for data transport, not for packaging up multiple files.
I don't like saying things are impossible, but I will say that this should be done on the server side. Depending on your server architecture, I would suggest sending the binary files to a server script which could package the files for you.
A quick Google search for your preferred server-side language and zipping files should give you some sample scripts to get you started.
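If the server side happens to be Python, a minimal sketch of such a script could look like the following (the route, file list and names are made up for illustration, and it assumes a recent Flask; any server-side language works the same way):

```python
import io
import zipfile

from flask import Flask, send_file

app = Flask(__name__)

# Hypothetical set of files the client is allowed to export.
EXPORTABLE_FILES = ["reports/a.pdf", "reports/b.pdf"]

@app.route("/export.zip")
def export_zip():
    """Bundle the requested files into an in-memory zip and return it."""
    buffer = io.BytesIO()
    with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED) as archive:
        for path in EXPORTABLE_FILES:
            archive.write(path)  # adds each file under its own path/name
    buffer.seek(0)
    return send_file(buffer, mimetype="application/zip",
                     as_attachment=True, download_name="export.zip")

if __name__ == "__main__":
    app.run()
```

Building the archive in memory also sidesteps the temporary-folder and cleanup steps from the question, at the cost of holding the whole zip in RAM while it is being sent.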