Currently I am using the Postman REST client to check API syntax for various products.
Is there any tool available, like the IPython notebook, for testing REST APIs?
IPython Notebook is a server in which we can run Python programs and store the programs and their outputs as a notebook. We can later open those notebooks and view all the programs we ran.
I would like to store the API testing details along with the responses, and I want to share those things.
Using Postman, I can create a collection and share it, but I am afraid I cannot save the responses. With IPython Notebook, I can save both the input and the output.
You can either use a Python library to do your REST testing, or, if you really want something more custom, I would suggest looking at writing a wrapper kernel, or maybe even a full kernel, for IPython. That allows you to use your own custom languages in cells. See a few of the existing kernels.
And don't be afraid to try to write one; it's quite easy.
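For the first option, plain Python in a notebook already covers the request-plus-saved-response workflow. A minimal sketch using only the standard library; the tiny stub server here stands in for a real API endpoint so the cell is runnable offline (the URL path and payload are purely illustrative):

```python
# Notebook-cell-style REST test: call an endpoint, keep the parsed
# response in a variable so both code and output are saved in the notebook.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class StubAPI(BaseHTTPRequestHandler):
    """Stand-in for a real API; returns a fixed JSON body."""
    def do_GET(self):
        body = json.dumps({"status": "ok", "items": [1, 2, 3]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep notebook output clean
        pass

server = HTTPServer(("127.0.0.1", 0), StubAPI)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The actual "test" step: issue the request and inspect the response.
url = f"http://127.0.0.1:{server.server_port}/api/items"
with urlopen(url) as resp:
    data = json.loads(resp.read())

print(data)  # the printed response is stored alongside the cell
server.shutdown()
```

Against a real service you would replace the stub with the product's base URL and drop the server setup entirely.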
As the heading says, I just want to know whether it is possible to use Python in WordPress for a complete build of a web page. I have seen some answers, but they were not satisfactory. Is it recommended to use Python there?
As I understand it, you want to know whether it is possible to use Python in WordPress to build a complete web page.
Yes, but Python needs to be installed on your server. Most Linux-powered servers have Python by default.
However, shared hosting will generally not allow you to run Python inside WordPress; it is best to be on a VPS. Then you can trigger Python from WordPress using shortcodes, which will execute the Python script and display its output on the page.
For further reading please follow the links below:
The trick is to use the WordPress [sourcecode] shortcode tag, as documented at
http://en.support.wordpress.com/code/posting-source-code/
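On the Python side, the script the shortcode shells out to only needs to print HTML to stdout, which WordPress then embeds in the page. A minimal sketch; the function name and the report data are purely illustrative:

```python
# Hypothetical script a WordPress shortcode handler might execute.
# Whatever this prints to stdout ends up embedded in the rendered page.

def render_report() -> str:
    """Build the HTML fragment that the shortcode embeds in the page."""
    rows = {"visitors": 1234, "signups": 56}  # placeholder data
    cells = "".join(
        f"<tr><td>{key}</td><td>{value}</td></tr>" for key, value in rows.items()
    )
    return f"<table>{cells}</table>"

if __name__ == "__main__":
    # WordPress captures this output when the shortcode runs the script.
    print(render_report())
```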
I made a Jupyter notebook that can read HDF5 files and use some functions to analyze the data. I would like to put this notebook on a server containing different HDF5 files and make it available to people working in other places. An example function would be viewing the expression of some genes in a sample. Those people could open this Jupyter notebook and add a list of specific genes to look at.
I am looking at JupyterLab, but I can see that people can read and modify the notebook. I would like that, even if they modify it, when they finish the notebook is as it was before they opened it. Do you think it is possible to do that? I think I could do it locally with "read-only", but I don't know how to do it on JupyterLab.
I am a newbie for things related to servers. I will really appreciate your help and suggestions.
Thanks a lot :)
Some suggestions:
You could supply the notebook via a code-sharing resource like GitHub and let them clone/copy/download and use it as they see fit via their own resources. This ensures they aren't changing your source notebook. And you don't have to worry about computational resources, because they can run it where they prefer, such as on their own cluster or at a cloud center.
Typically, I'd suggest the MyBinder project at https://mybinder.org/ for what you describe, as it lets you share temporary active sessions that launch on remote servers. You set up a GitHub repository with notebooks and the data (or a way to retrieve the data built into the notebook), and when the temporary session launches, users can work through your notebook. This way they can extend, modify, and run their own data without changing your source notebook. However, the resources are limited since the service is free; see here.
Examples:
solve for sediment transport
Resolving the measurement uncertainty paradox in ecological management
A quick introduction to RNAseq
bendit-binder
blast-binder
I've seen others use Code Ocean, see an example here.
There are other national/government-funded centers that allow similar hosting of services and resources that can be shared with others. CyVerse is one that is now running in the United States and several places, such as CyVerse UK in association with the Earlham Institute, and elsewhere. They offer notebooks served via the VICE apps in their Discovery Environment. Their resource allows more computational power and storage than the free, public MyBinder service.
You can use the Mercury framework. It is an open-source tool that converts Python notebooks into interactive documents: web apps, reports, dashboards, slides.
It requires a YAML header in the first cell of the notebook. In the header you define how the notebook is presented in Mercury (title, description, show-code). Additionally, you can add widgets to the notebook. Widgets are directly connected to variables in the notebook. The variables whose values are controlled by widgets should be in a separate cell.
The example notebook
The example notebook with YAML header
The YAML header
There are two widgets defined in the YAML header below. The name and points widgets have the same names as variables in the code. The user can change the widget values, click Run, and the notebook will be executed with the new values.
---
title: My notebook
description: Notebook with plot
show-code: False
params:
    name:
        input: text
        label: What is your name?
        value: Piotr
    points:
        input: slider
        label: How many points?
        value: 200
        min: 10
        max: 250
---
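The matching notebook cells would then just assign and use those variables; a sketch (the greeting logic is only an illustration of "the rest of the notebook"):

```python
# Cell 2: the widget-controlled variables, matching the YAML params above.
# Mercury replaces these values with the user's selections before re-running.
name = "Piotr"
points = 200

# Cell 3: downstream cells use them like ordinary Python variables.
greeting = f"Hello {name}, you scored {points} points"
print(greeting)
```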
The Web Application from Notebook
The final notebook after execution can be downloaded as standalone HTML or PDF file.
Deployment
The Mercury framework is built on top of Django. It can be deployed on any machine with Python available. Deployment to Heroku is very simple and can be done with two commands (you need to have the Heroku CLI tool):
heroku create my-amazing-web-app-name-here
git push heroku main
Please make sure there is a Procfile created for the Heroku deployment, containing:
web: mercury run 0.0.0.0:$PORT
You can also deploy Mercury with docker-compose; please check the docs for more details. You can also check the article on how to share a Jupyter notebook with non-programmers using the Mercury tool.
You can install a local GitLab server instead of publishing on GitHub. Then publish your Jupyter notebooks in GitLab and share your server URL with people. Our company runs everything internally this way.
I am trying to convert the first page of a PDF uploaded to Storage into a JPG so that I can generate a thumbnail and display it to my users. I use ImageMagick for that. The issue is that Google Cloud Functions instances don't seem to have Ghostscript (gs), which appears to be a dependency for manipulating PDFs.
Is there a way to make it available somehow?
(FYI, I am able to convert properly on my local machine with both ImageMagick and Ghostscript installed, so I know the command I am using is good.)
AWS Lambda instances have Ghostscript installed, by the way.
Thanks
Actually, Ghostscript was deprecated on App Engine as well. I think your best option may be to use pdf.js deployed with your Cloud Function. I have not tried it myself, but it looks like the only way forward with the current state of Cloud Functions. Another option is to deploy a GCE instance with Ghostscript and send a request from the Cloud Function to have it convert the PDF page for you.
I have an R script that I run every day to scrape data from a couple of different websites and then write the scraped data to a couple of different CSV files. Each day, at a specific time (that changes daily), I open RStudio, open the file, and run the script. I check that it runs correctly each time, and then I save the output to a CSV file. It is a pain to have to do this every day (it takes ~10-15 minutes). I would love it if this script could somehow run automatically at a pre-defined time, and a buddy of mine said AWS is capable of doing this?
Is this true? If so, what is the specific feature/aspect of AWS that is able to do this, so that I can look into it more?
Thanks!
Two options come to mind thinking about this:
Host an EC2 instance with R on it and configure a cron job to execute your R script regularly.
One easy way to get started: Use this AMI.
To execute the script, R offers a CLI, Rscript. See e.g. here on how to set this up.
Go serverless: AWS Lambda is a hosted microservice. Currently R is not natively supported, but on the official AWS blog here they offer a step-by-step guide on how to run R. Basically, you execute R from Python using the rpy2 package.
Once you have this set up, schedule the function via CloudWatch Events (~a hosted cron job). Here you can find a step-by-step guide on how to do that.
One more thing: you say that your script outputs CSV files. To save them properly, you will need to put them in file storage like AWS S3. You can do this in R via the aws.s3 package. Another option would be to use the AWS SDK for Python, which is preinstalled in the Lambda function. You could, e.g., write a CSV file to the /tmp/ directory and, after the R script is done, move the file to S3 via boto3's S3 upload_file function.
IMHO the first option is easier to set up, but the second one is more robust.
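The write-to-/tmp-then-upload handoff described above can be sketched as follows. The bucket name and file paths are hypothetical, and the actual upload needs boto3 plus AWS credentials, so it is kept behind a function and commented out here:

```python
# Sketch: write scraped rows to a temp CSV, then hand the file to S3.
import csv
import os
import tempfile

def write_csv(path, rows):
    """Write the scraped rows (a list of dicts with identical keys) to CSV."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0]))
        writer.writeheader()
        writer.writerows(rows)

def upload_to_s3(local_path, bucket, key):
    """Move the finished CSV to S3 (requires boto3 and AWS credentials)."""
    import boto3  # preinstalled in the Lambda runtime
    boto3.client("s3").upload_file(local_path, bucket, key)

if __name__ == "__main__":
    rows = [{"site": "example.com", "value": 42}]  # stand-in for scraped data
    # On Lambda this directory is /tmp; tempfile keeps the sketch portable.
    out_path = os.path.join(tempfile.gettempdir(), "scrape.csv")
    write_csv(out_path, rows)
    # upload_to_s3(out_path, "my-scrape-bucket", "daily/scrape.csv")
```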
It's a bit counterintuitive, but you'd use CloudWatch with an event rule to run periodically. It can run a Lambda or send a message to an SNS topic or SQS queue. The challenge you'll have is that Lambda doesn't support R, so you'd either have to have a Lambda kick off something else or have something waiting on the SNS topic or SQS queue to run the script for you. It isn't a perfect solution, as there are potentially quite a few moving parts.
@stdunbar is right about using CloudWatch Events to trigger a Lambda function. You can set the frequency of the trigger or use a cron expression. But as he mentioned, Lambda does not natively support R.
This may help you to use R with Lambda: R Statistics ready to run in AWS Lambda and x86_64 Linux VMs
If you are running Windows, one of the easier solutions is to write a .BAT script to run your R script and then use Windows Task Scheduler to run it as desired.
To call your R script from your batch file, use the following syntax:
"C:\Program Files\R\R-3.2.4\bin\Rscript.exe" C:\rscripts\hello.R
Just verify that the path to the Rscript application and your R code are correct.
Dockerize your script (write a Dockerfile, build an image)
Push the image to AWS ECR
Create an AWS ECS cluster and an AWS ECS task definition within the cluster that will run the image from AWS ECR every time it's spun up
Use EventBridge to create a time-based trigger that will run the AWS ECS task definition
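The Dockerfile in the first step can be very small. A sketch, assuming a rocker base image, a script named scrape.R, and a couple of illustrative package dependencies:

```dockerfile
# Base image with a pinned R version (rocker project)
FROM rocker/r-ver:4.2.0

# Install whatever packages the scraper needs (placeholders here)
RUN Rscript -e 'install.packages(c("rvest", "readr"))'

# Copy the script in and run it when the container starts
COPY scrape.R /app/scrape.R
CMD ["Rscript", "/app/scrape.R"]
```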
I recently gave a seminar walking through this at the Why R? 2022 conference.
You can check out the video here: https://www.youtube.com/watch?v=dgkm0QkWXag
And the GitHub repo here: https://github.com/mrismailt/why-r-2022-serverless-r-in-the-cloud
I want to create a custom report. The response format for the SonarQube web service API /api/issues/search is JSON or XML. How can I use that response to create an HTML or CSV file using a Unix shell, without command-line tools, so that I can use it as a report? Or is there any other better way to achieve this?
You can generate an HTML file if you run an analysis in preview mode: http://docs.sonarqube.org/pages/viewpage.action?pageId=6947686
It looks as if the SonarQube team has been working hard to keep people from doing this. They appear to want people to purchase an Enterprise subscription in order to export reports.
An old version of sonar-runner (now called sonar-scanner) had an option to allow local report output, but that feature is "no more supported":
ERROR: The preview mode, along with the 'sonar.analysis.mode' parameter, is no more supported. You should stop using this parameter.
Looks like version 2.4 of Sonar Runner does what you want, if you can find it. Of course, they only have 2.5 RC1 available on the site now.
Using the following command should work on version 2.4:
sonar-runner -Dsonar.analysis.mode=preview -Dsonar.issuesReport.html.enable=true
There are at least two open-source projects that query the SonarQube API to generate reports in various formats.
https://github.com/cnescatlab/sonar-cnes-report/tree/dev (Java)
https://github.com/soprasteria/sonar-report (JavaScript/Node)
At the time of writing both are active.
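If a short script is acceptable instead of pure shell, the JSON from /api/issues/search can be flattened into CSV in a few lines. A sketch: the sample payload below mirrors the general shape of the response, but the exact field names should be checked against your SonarQube version's Web API docs.

```python
# Flatten the "issues" array from /api/issues/search into CSV text.
import csv
import io
import json

# Stand-in for the real API response (shape is an assumption).
sample_response = json.loads("""
{"issues": [
  {"key": "AU-1", "severity": "MAJOR", "component": "proj:src/a.py",
   "line": 10, "message": "Remove this unused variable."},
  {"key": "AU-2", "severity": "MINOR", "component": "proj:src/b.py",
   "line": 3, "message": "Rename this constant."}
]}
""")

def issues_to_csv(payload):
    """Turn the 'issues' array into CSV text, ignoring extra fields."""
    fields = ["key", "severity", "component", "line", "message"]
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=fields, extrasaction="ignore")
    writer.writeheader()
    for issue in payload.get("issues", []):
        writer.writerow(issue)
    return out.getvalue()

csv_text = issues_to_csv(sample_response)
print(csv_text)
```

In practice you would fetch the payload from the API (paging through with the p/ps parameters) instead of embedding it, then write the returned text to a .csv file.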