I would like to conduct extensive tests on my new internet connection using speedtest.net.
Is there some way that I can completely automate the process?
The speed test would be conducted automatically at a fixed interval, and a snapshot of the screen would then be taken and stored on my system.
I found a good repository that does basically exactly what you're asking, but better (it runs multiple tests and can be run from the command line).
I would recommend copying the code (Python) from https://github.com/Janhouse/tespeed.
You can run this repeatedly using a cron job, and easily email the results to yourself using the crontab file.
For easy step-by-step instructions, see http://www.pythonforbeginners.com/code-snippets-source-code/command-line-speedtest-net-via-tespeed/
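If you would rather avoid a browser and screenshots entirely, a small script that logs the numbers on a fixed interval may be enough. Here is a rough sketch in Python, assuming the third-party speedtest-cli package is installed; the log path and interval are placeholders:

    # Rough sketch: run a speed test on a fixed interval and append results to a CSV.
    # Assumes "pip install speedtest-cli" (import name: speedtest); paths are placeholders.
    import csv
    import time
    from datetime import datetime

    import speedtest

    LOG_FILE = "/home/pi/speedtest_log.csv"  # placeholder path
    INTERVAL_SECONDS = 3600                  # once per hour

    def run_test():
        st = speedtest.Speedtest()
        st.get_best_server()
        st.download()
        st.upload()
        r = st.results.dict()
        return [datetime.now().isoformat(),
                round(r["download"] / 1e6, 2),  # Mbit/s
                round(r["upload"] / 1e6, 2),    # Mbit/s
                round(r["ping"], 1)]            # ms

    if __name__ == "__main__":
        while True:
            with open(LOG_FILE, "a", newline="") as f:
                csv.writer(f).writerow(run_test())
            time.sleep(INTERVAL_SECONDS)

If you go the cron route instead, drop the loop and the sleep and let cron launch the script on whatever schedule you like.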
Related
I am trying to implement a Selenium test to perform automated actions on a website (looping through pages). I am using R and the RSelenium package, as well as a PostgreSQL database via the DBI package. All of this runs on an AWS EC2 server.
My problem is that a few minutes after the script is launched, my RStudio session freezes (as does my Linux session) and I see a message like "cannot allocate memory".
So this is clearly a memory issue; by running top I could see that my Selenium Docker container was using most of the resources.
But my question is: how can I reduce the amount of memory used by the Selenium test?
IMHO there is no practical way for a test to use less memory than what the test actually requires. You can try to simplify the test by breaking it up into two or more smaller tests. Also check for memory leaks, as suggested in another answer.
It would be much easier to use the next largest instance type with more memory and, if cost is an issue, shut down the instance when it is not in use to save money.
Don't forget driver.close() in your code; if you don't close your driver, you will end up with many instances of Chrome.
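The same advice in a short sketch (the question uses RSelenium, but the cleanup pattern is identical in any binding); this is Python with a recent selenium release where Remote accepts an options object, and the Grid URL is a placeholder:

    # Sketch only: always end the session in a finally block (or your framework's
    # teardown) so orphaned browser processes don't accumulate and eat memory.
    from selenium import webdriver

    def scrape_page(url):
        driver = webdriver.Remote(
            command_executor="http://localhost:4444/wd/hub",  # placeholder Selenium/Docker URL
            options=webdriver.FirefoxOptions(),
        )
        try:
            driver.get(url)
            return driver.title
        finally:
            driver.quit()  # quit() closes every window and ends the remote session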
I'm new to R, and I'm invoking an R script from a NodeJS app. When the R script is invoked, it takes a long time to produce output. I investigated and realized that the bulk of that overhead is spent loading the libraries and the model I'm using. Let me clarify that any optimization would help, taking into account that I'm running this code on a Raspberry Pi 2 B+.
My question is: is there a way to preload all the libraries and the model in R and then trigger predictions on demand? That way I won't need to reload the libraries and the model every time I want a prediction.
No. Since you're just invoking a script, everything it needs has to be loaded every time the script is run, because nothing existed in memory before you invoked it.
One workaround I would suggest is, instead of invoking an R script each time, to have your R script running as a service and then query that service from NodeJS.
I cannot help you with that part, since my R expertise doesn't go very far and I don't know whether running an R server is even possible.
An alternative, if it is not too cumbersome, is to port your R project to Python, stand up a server of some kind (which is extremely easy to do with Python), and then query that server from NodeJS. Since you would be running a server, you can cache the libraries at server startup time and have everything in RAM for the next query.
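To illustrate the "load once, serve many" idea, here is a minimal sketch of such a server in Python, assuming the model could be exported to something loadable with joblib and that Flask is installed; every name, path, and input format here is a placeholder:

    # Minimal sketch: load the libraries and the model once at startup,
    # then answer predictions over HTTP. Assumes "pip install flask joblib".
    from flask import Flask, jsonify, request
    import joblib

    app = Flask(__name__)

    # Loaded once when the process starts and kept in RAM for every request.
    MODEL = joblib.load("/home/pi/model.pkl")  # placeholder path

    @app.route("/predict", methods=["POST"])
    def predict():
        features = request.get_json()["features"]
        prediction = MODEL.predict([features])[0]
        return jsonify({"prediction": float(prediction)})

    if __name__ == "__main__":
        app.run(host="127.0.0.1", port=5000)

The NodeJS app then just POSTs JSON to http://127.0.0.1:5000/predict instead of spawning an R process, and only the first request after a restart pays the loading cost.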
I am wondering if it's possible to have relatively simple R code pull and feed data into, say, a text file attached to an email, without having to keep my PC on.
I have some web-scraping code here that uses:
library(XML)
library(stringr)
to scrape some web data which I would like to save daily.
Putting that in a loop that runs every 24 hours would be relatively easy, but I don't want to keep my PC on or be unable to use the R environment while this is running.
What are my options?
Suggest you spin up an AWS EC2 instance and set the script to run as a cron job on a daily basis.
Here are some resources:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html
http://www.louisaslett.com/RStudio_AMI
http://strimas.com/r/rstudio-cloud-1
This requires a little bash, but if you aren't familiar with it, it's definitely worth learning.
If you're on Windows, you can schedule batch R scripts to run via the Task Scheduler. +1 for AWS and cron, though; it's super easy to get going once you establish the EC2 instance and get R running on it.
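To make the cron suggestion concrete, a daily crontab entry on the EC2 instance could look something like this (the schedule and paths are only examples; adjust them to wherever Rscript and your script actually live, and add the entry with crontab -e):

    # Run the scraper every day at 06:00 and append any output to a log file
    0 6 * * * /usr/bin/Rscript /home/ubuntu/scrape.R >> /home/ubuntu/scrape.log 2>&1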
We have a set of around 2,000 automated test cases and we need to run them daily on every new build that comes up. It currently takes 4 hours to finish the tests on one machine. To reduce this, we are planning to run the tests in batches (500 per batch) on the same machine by launching multiple browsers of the same type, say 4 Firefox browser sessions per test suite, so the run can finish in about 1 hour. Is it possible to achieve this using Selenium WebDriver and TestNG? Please suggest.
It is possible using Selenium Grid and TestNG. Grid can help distribute your tests across various machines or browser instances as you require. To get started, refer to Grid2.
I think you might need to change your driver instantiation code a bit to use RemoteWebDriver instead of the concrete drivers, but that should be fine if the driver instantiation code in your framework is isolated. TestNG and Grid can help, provided your tests are well written to support parallel execution.
For TestNG, you can refer to parallel test execution in TestNG.
If you are using Python, the best way to go is to use py.test to drive the tests. To distribute the tests, the pytest-xdist plugin can be used.
Anyway, for both Java and Python you can use Jenkins to run and distribute your tests (using the Selenium plugin).
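To sketch the Python route, assuming pytest, pytest-xdist, and the selenium package are installed (the URLs and assertions below are placeholders), each test gets its own browser so the parallel workers never share state:

    # test_smoke.py -- sketch only; run with "pytest -n 4" to spread the tests
    # across 4 pytest-xdist workers, each driving its own Firefox session.
    import pytest
    from selenium import webdriver

    @pytest.fixture
    def driver():
        d = webdriver.Firefox()   # one browser per test keeps workers independent
        yield d
        d.quit()

    def test_homepage_title(driver):
        driver.get("https://example.com")        # placeholder URL
        assert "Example" in driver.title

    def test_about_page_loads(driver):
        driver.get("https://example.com/about")  # placeholder URL
        assert driver.title != ""

For the Java/TestNG route, the parallel and thread-count attributes in the TestNG suite XML plus RemoteWebDriver pointed at the Grid hub achieve the same effect.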
I have created tests using Selenium 2, and I'm using the Selenium standalone server to run them.
The problem is that if I run one test, it works. If I run multiple tests, some of them fail. If I then rerun a failed test, it works.
Could the tests be running on separate threads?
I've used the NUnit GUI and TeamCity to run the tests; both give the same result: different tests fail, and when I run again, other tests fail.
Any thoughts?
EDIT
The tests shouldn't depend on one another. The database is emptied and repopulated for every test.
I guess the only problem could be that the database is not emptied correctly, but then running the same test multiple times should also fail.
EDIT2
The tests fail with "element not found".
I'll try adding a "WaitForElement" that retries every few milliseconds; maybe that will fix it.
Without knowing the exact errors that are thrown, it's hard to say. The usual causes of flakiness are waits that aren't set to a decent time, or a web server that can't handle that many requests.
If the DB is on the same machine as the web server (and why shouldn't it be on a build box), clearing it out can be resource-intensive.
I would recommend going through each of the errors and making the tests bulletproof against that failure before moving on to the next. I know people who run their tests all the time without flakiness, so it's definitely an environmental issue that can be sorted out.
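For the "element not found" failures specifically, an explicit wait that polls until the element appears usually removes this kind of flakiness. Here is a minimal sketch in Python (the same WebDriverWait/expected-conditions pattern exists in the other language bindings; the locator, URL, and timeout are placeholders):

    # Sketch only: poll for up to 10 seconds (checking every 500 ms by default)
    # instead of assuming the element is already present.
    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    def wait_for_element(driver, css_selector, timeout=10):
        return WebDriverWait(driver, timeout).until(
            EC.presence_of_element_located((By.CSS_SELECTOR, css_selector))
        )

    driver = webdriver.Firefox()
    try:
        driver.get("https://example.com")             # placeholder URL
        button = wait_for_element(driver, "#submit")  # placeholder locator
        button.click()
    finally:
        driver.quit()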
I know I'm a bit late to the party here, but are you using a single window to run your tests? I had a similar issue: the site I'm testing has only one page-load event, so waiting for elements or pausing the test became very dodgy, and different tests passed each time. Adding a ton of wait time didn't work at all until I just opened a new "clean" browser for each test. Testing does get slower, but it worked.