I am trying to convert the first page of a PDF uploaded to Storage to a JPG so that I can generate a thumbnail and display it to my users. I use ImageMagick for that. The issue is that Google Cloud Functions instances don't seem to have Ghostscript (gs), which ImageMagick depends on for manipulating PDFs.
Is there a way to make it available?
(FYI, I am able to convert properly on my local machine with both ImageMagick and Ghostscript installed, so I know the command I am using is good.)
AWS Lambda instances have Ghostscript installed, by the way.
Thanks
Actually, Ghostscript was deprecated on App Engine as well. I think your best option may be to use pdf.js deployed with your Cloud Function. I have not tried it myself, but it looks like the only way forward with the current state of Cloud Functions. The other option is to deploy a GCE instance with Ghostscript and have the Cloud Function send it a request to convert the PDF page for you; a sketch of that route is below.
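For the GCE route, a minimal sketch of what the instance would need, assuming a Debian-based image (the convert invocation mirrors the kind of command the question says already works locally; file names are placeholders):

# On the GCE instance: install the missing dependency alongside ImageMagick
$ sudo apt-get update && sudo apt-get install -y ghostscript imagemagick

# Convert only the first page ([0]) of the PDF into a thumbnail JPG
$ convert 'input.pdf[0]' -thumbnail 300x300 thumbnail.jpg

The Cloud Function would then only need to send the PDF to this instance (e.g. over HTTP) and receive the JPG back.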
I am new to AEM and would like to know how to bring content from an author instance into my local instance. I can pull the pushed project changes using git and build locally using mvn; however, I can't see the content changes I made in AEM prod on my local AEM.
Can you help me with this? Thank you in advance!
If you are asking how to get page changes made in any environment into your local instance, just create a content package in CRXDE in the environment you want the changes from. After creating the package, download it and install it in your local instance's CRXDE.
The more direct approach is via the Package Manager: create a package, enter the desired filters, then build and download it. There are automated options as well, such as driving the Package Manager with curl commands, Grabbit, etc. For local development you probably do not need to bring over all the content, especially if the DAM is too big. A sketch of the curl approach is below.
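For reference, the curl route looks roughly like this, assuming the package (with its filters) already exists on the remote author instance; host names, credentials, and the package path are placeholders:

# Build the package on the remote environment
$ curl -u admin:admin -X POST "http://aem-author:4502/crx/packmgr/service/.json/etc/packages/my_packages/content-sync.zip?cmd=build"

# Download the built package
$ curl -u admin:admin -o content-sync.zip "http://aem-author:4502/etc/packages/my_packages/content-sync.zip"

# Upload and install it on the local instance
$ curl -u admin:admin -F file=@content-sync.zip -F name=content-sync -F force=true -F install=true "http://localhost:4502/crx/packmgr/service.jsp"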
As in the heading of the question: I just want to know whether it is possible to use Python in WordPress for a complete build of a web page. I have seen some answers, but they were not satisfactory. Is it recommended to use Python there?
As I understand it, you want to know whether it is possible to use Python in WordPress for a complete build of a web page.
Yes, but Python needs to be installed on your server; most Linux-powered servers have it by default.
However, not all shared hosting will let you run Python from inside WordPress. It is best to be on a VPS; then you can trigger Python from WordPress using shortcodes, which will execute the script and display its output on the page. A quick command-line sanity check is sketched after the link below.
For further reading, please follow the link below:
The trick is to use the WordPress [sourcecode] shortcode, as documented at
http://en.support.wordpress.com/code/posting-source-code/
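Before wiring anything into a shortcode, it's worth confirming from the command line that Python is available on the host and that the script runs cleanly, since its stdout is what would end up on the page. The paths and script name below are hypothetical:

# Check that Python is available on the server
$ which python3 && python3 --version

# Dry-run the script; whatever it prints is what the shortcode would display
$ python3 /var/www/scripts/build_page.py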
I am trying to use a TensorFlow model locally in R with tfdatasets and CloudML, using training data available in Google Cloud Storage without uploading it. As far as I know, the tfdatasets package should be able to use gs:// URLs directly with gs_data_dir().
If I specify in TSScript.R:
data_dir <- gs_data_dir("gs://my-gcp-project/data/")
When I run cloud_train("TSScript.R") I get the error:
Error: 'gs://my-gpc-project/data/train.data.csv' does not exist in current working directory ('/root/.local/lib/python2.7/site-packages/cloudml-model')
Here are my questions:
1. Is it somehow possible to do this, and am I just making a mistake in my script?
2. If not, do I need to install R in the cloud and work from there directly?
3. Would it be possible to train with data from BigTable without uploading it locally?
Thanks
For 1) I think you might be looking for tf.gfile:
https://www.tensorflow.org/api_docs/python/tf/io/gfile/GFile
Example of use: https://github.com/GoogleCloudPlatform/cloudml-samples/blob/master/census/keras/trainer/model.py#L154
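Separately from tf.gfile, it is worth confirming that the bucket path actually resolves. A quick check with gsutil, assuming the Cloud SDK is installed and authenticated (the bucket path is taken from the question):

# List the training files the script expects to see
$ gsutil ls gs://my-gcp-project/data/

# Peek at the first lines of the CSV to confirm it is readable
$ gsutil cat gs://my-gcp-project/data/train.data.csv | head -n 5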
Hope this helps!
For 2) If you want to do this, you should look at Custom Containers. https://cloud.google.com/ml-engine/docs/custom-containers-training
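As a rough sketch only (the job name, region, and image URI are placeholders, and the exact gcloud command group has shifted between releases), submitting a training job with a custom container looks something like:

$ gcloud beta ai-platform jobs submit training my_r_job \
    --region us-central1 \
    --master-image-uri gcr.io/my-gcp-project/r-trainer:latest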
For 3) I'm not familiar with BigTable, but my guess is that you would have to query the data you need and pull it locally by hand. I don't think tf.gfile supports BigTable, only GCS.
App Services is a great place to store data, but now that I have a lot of critical info in there, I've realized there isn't a way to create a backup or roll back to an earlier state (in case I do something stupid like -X DELETE /users).
Any way to back up this data either online or offline?
Apart from using API access to fetch records batch by batch and storing them locally, there is no solution at the moment. The team is planning an S3 integration (export data to S3), but no completion date has been defined for it yet.
Looks like the only way is to query the data using e.g. curl and save the results to a local file. I don't believe there is a way to export natively; see the sketch after the link below.
http://apigee.com/docs/app-services/content/working-queries
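A minimal sketch of that approach, assuming the public App Services API endpoint and placeholder org/app names; the cursor value returned in each response is what lets you page through more than the first batch:

# Fetch the first batch of entities from a collection and save it locally
$ curl "https://api.usergrid.com/my-org/my-app/users?limit=100" -o users-page1.json

# Use the "cursor" value from the previous response to fetch the next batch
$ curl "https://api.usergrid.com/my-org/my-app/users?limit=100&cursor=CURSOR_FROM_PREVIOUS_RESPONSE" -o users-page2.json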
As of the 2014/2015 Usergrid versions, it is possible to make exports and imports using the "Usergrid tools".
This page explains how to install them:
https://github.com/apache/incubator-usergrid/tree/master/stack/tools
Basically, once you run
$ java -jar usergrid-tools.jar export
your data will be exported as JSON files in an export directory.
There are several export and import tools available; the best way to see them is to visit this page:
https://github.com/apache/incubator-usergrid/tree/6d962b7fe1cd5b47896ca16c0d0b9a297df45a54/stack/tools/src/main/java/org/apache/usergrid/tools
OK, so I know how to upload to SharePoint thanks to this question:
How to send file to Sharepoint from Linux creating non existend directories
Now I am trying to figure out how to do it with Atlassian's Confluence. Any takers?
What I am looking for is a scriptable Unix command.
Your question needs improving for a real understanding of what you're trying to achieve. That said, you can, for example, attach a file to a page within Confluence using curl; a sketch is below.
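For example, on a reasonably recent Confluence with the REST API available, attaching a file to an existing page looks roughly like this; the base URL, credentials, and page id are placeholders:

$ curl -u admin:secret -X POST \
    -H "X-Atlassian-Token: nocheck" \
    -F "file=@report.pdf" \
    "https://confluence.example.com/rest/api/content/12345/child/attachment"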
The easy way to do this would be to use Bob Swift's Atlassian/Confluence CLI, with its addAttachment action; an example invocation follows below.
The plugin changed to be commercial a while back, but you can download the latest free release here.
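For reference, an addAttachment invocation with that CLI looks roughly like this; the server URL, credentials, space key, and page title are placeholders:

$ ./confluence.sh --server https://confluence.example.com --user admin --password secret \
    --action addAttachment --space DEV --title "My Page" --file ./report.pdf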