I am trying to use the translateR package in R to translate a column of my data from Korean to English. I tried the following code:
install.packages("devtools")
library(devtools)

## Install the development version of translateR
devtools::install_github("ChristopherLucas/translateR")
library(translateR)

foodcode_translated <- translateR::translate(dataset = food_code,
                                             content.field = '식품코드명',
                                             google.api.key = "XXXXXXXXXXXXXXXX",
                                             source.lang = 'ko',
                                             target.lang = 'en')
I successfully obtained a Google API key. The code above runs without any errors, but the data frame is unchanged afterwards. Can anybody tell me what the issue is?
Thanks
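One thing worth checking: translate() returns a new data frame rather than modifying food_code in place, so the result should be inspected in foodcode_translated, not in food_code. A minimal sketch of that copy semantics, using a hypothetical stand-in function instead of the real API call (so no key is needed):

```r
# R functions receive copies: a data frame passed in is never modified in place.
# add_translation is a stand-in for translateR::translate, which needs a live API key.
add_translation <- function(dataset) {
  dataset$translatedContent <- toupper(dataset$name)  # pretend "translation"
  dataset
}

food_code <- data.frame(name = c("kimchi", "bulgogi"), stringsAsFactors = FALSE)
out <- add_translation(food_code)

ncol(food_code)  # still 1 - the original is untouched
ncol(out)        # 2 - the returned copy carries the new column
```

So after a successful run, the translated text should appear as an extra column of the object the result was assigned to.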
I've been trying (a lot) to download NASA's GPM-IMERG satellite images with precipitation data from R. I'm asking here and not on the GIS forum because I read that most R users are here. (FYI: I'm running Windows 10 and R 3.6.3 in RStudio.) So far I've tried the following:
Created an account at PMM-NASA (see here). Everything worked well.
Installed the gpm package with devtools::install_github("csaybar/gpm"), followed by gpm_getaxel() (see here). I then tried running the following code:
gpm_download(path = RutaDownloads,
             user = "myuser#email.com",
             password = "myuser#email.com",
             dates = c('2017-01-01', '2017-02-28'),
             band = 3,
             lonMin = 70,
             lonMax = 75,
             latMin = 34,
             latMax = 38,
             product = 'finalrun',
             quiet = F,
             n = 1)
However, it did not work. The error shown in RStudio is the following:
'gdal_translate' not found
Error in gdaltranslate_exist() : GDAL should be declared as a
system variable if you are using Windows
I haven't had any problems running GDAL when working with multiple rasters/vectors in R so far. Does anyone know whether I have to install GDAL on my PC separately from installing rgdal in R? If so, how can I do it and 'synchronize' it with R for use with the gpm package? I know there is a lot of information on Google, but I'd rather take advice from anyone who has done this before, because in the past I did not have a good experience working with GDAL and Python, and that is the main reason I moved my GIS code to R.
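If a standalone GDAL is installed (e.g. via OSGeo4W or a QGIS bundle; rgdal alone does not ship the gdal_translate executable), one common workaround is to prepend its bin directory to the PATH for the current R session so the package can find the binary. The directory below is a hypothetical example; adjust it to wherever gdal_translate.exe lives on your machine:

```r
# Hypothetical GDAL location - adjust to your actual install directory
gdal_dir <- "C:/OSGeo4W64/bin"

# Prepend it to PATH for this R session only
Sys.setenv(PATH = paste(gdal_dir, Sys.getenv("PATH"), sep = .Platform$path.sep))

# gpm's check shells out to the binary, so this should now return a non-empty path
Sys.which("gdal_translate")
```

Setting the variable in the Windows system settings instead makes the change permanent across sessions.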
I also tried another alternative: installing the remotes package and running remotes::install_github("bastianmanz/GPM_rain") (see here). However, this uses the rhdf5 package, which is not available for my R version, so I installed BiocManager and then ran BiocManager::install("rhdf5") (following the instructions from here). With GPM_rain there are two possible ways to download the images: (1) with wget, i.e. system_download, which is not straightforward because it needs a list of the files to download, and (2) with rcurl_download, using RCurl, which seemed easy to use. Nevertheless, the first problem is that I cannot specify lat/long for the image extent and, as in point 2, it did not work. I tried running the following code:
rcurl_download(product = "nrt",
               nrt_type = "late",
               start = 20170101,
               end = 20170131,
               userpwd = "myuser#email.com:myuser#email.com")
But the error says that the rcurl_download function was not found. If you know how to fix any of the above errors, or have any other solution, I would be really grateful if you could share your experience, especially for option 2, where I can set lat/long to download the data. Thanks in advance,
Jorge.
--------------------------------------------------------------------------------------------------------
EDIT:
Package gpm is "retired" according to its developer (see here).
I'm really having a hard time trying to translate 1.5k responses to an open-ended question from French to English. I want to use the R package translateR with the Microsoft API; Microsoft, because my university provides an Azure account without requiring credit card information.
I am not sure whether I am doing it wrong by being unable to fill in the right parameters for "client id" and "client secret", or whether the package is simply outdated and no longer works with the Microsoft API due to a migration on Microsoft's side. I researched similar questions on Stack Overflow but did not find any answer or solution.
Here is some code that may replicate the problem, using an example dataset that is included in translateR.
#install.packages("translateR")
library(translateR)
data(enron)
google.dataset.out <- translateR::translate(dataset = enron,
                                            content.field = 'email',
                                            microsoft.client.id = my.client.id,
                                            microsoft.client.secret = my.client.secret,
                                            source.lang = 'en',
                                            target.lang = 'de')
I am constantly getting this output:
Error in function (type, msg, asError = TRUE) :
Could not resolve host: datamarket.accesscontrol.windows.net
I am quite new to R, so please be kind if I did something totally stupid. Can anyone confirm that it is no longer possible to use translateR with the Microsoft API? If translation is not possible with this package anymore, can anyone advise me on how to deal with my data?
The CRAN release of the package is outdated, but the development version has been updated more recently. For installation, the devtools package needs to be installed first; then use the following commands:
###Install devtools###
install.packages("devtools")
###Install development version of translateR###
devtools::install_github("ChristopherLucas/translateR")
In the development version the command syntax changed as well.
library(translateR)
data(enron)
dataset.out <- translateR::translate(dataset = enron,
                                     content.field = 'email',
                                     microsoft.api.key = 'my.ms.api.key',
                                     source.lang = 'en',
                                     target.lang = 'de')
For more information read this:
translateR documentation update on github
I am trying to use the acs package in R to download Census data for a basic map, but I am unable to download the data and I'm receiving a confusing error message.
My code is as follows:
#Including all packages here in case this is somehow the issue
install.packages(c("choroplethr", "choroplethrMaps", "tidycensus", "tigris", "leaflet", "acs", "sf"))
library(choroplethr)
library(choroplethrMaps)
library(tidycensus)
library(tigris)
library(leaflet)
library(acs)
library(sf)
library(tidyverse)
api.key.install("my_api_key")
SD_geo <- geo.make(state="CA", county = 73, tract = "*", block.group = "*")
median_income <- acs.fetch(endyear = 2015, span = 5, geography = SD_geo,
                           table.number = "B19013", col.names = "pretty")
Everything appears to work until the final command, when I receive the following error message:
trying URL 'http://web.mit.edu/eglenn/www/acs/acs-variables/acs_5yr_2015_var.xml.gz'
Content type 'application/xml' length 735879 bytes (718 KB)
downloaded 718 KB
Error in if (url.test["statusMessage"] != "OK") { :
missing value where TRUE/FALSE needed
In addition: Warning message:
In (function (endyear, span = 5, dataset = "acs", keyword, table.name, :
XML variable lookup tables for this request
seem to be missing from ' https://api.census.gov/data/2015/acs5/variables.xml ';
temporarily downloading and using archived copies instead;
since this is *much* slower, recommend running
acs.tables.install()
This is puzzling to me because 1) something does appear to be downloaded at first, and 2) the message 'Error in if (url.test["statusMessage"] != "OK") { : missing value where TRUE/FALSE needed' makes no sense to me; it doesn't align with any of the arguments to the function.
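For what it's worth, the "missing value where TRUE/FALSE needed" part is generic R behavior rather than anything acs-specific: indexing a named vector with a name that doesn't exist yields NA, and an if() condition that evaluates to NA throws exactly this error. A minimal reproduction (url.test here is a made-up stand-in for whatever the package builds internally):

```r
url.test <- c(status = "OK")   # note: no "statusMessage" element
url.test["statusMessage"]      # NA - indexing by a missing name returns NA

cond <- url.test["statusMessage"] != "OK"
is.na(cond)                    # TRUE; and if(cond) fails with
                               # "missing value where TRUE/FALSE needed"
```

So the error suggests the package expected a status field in some response that it never actually received, which is consistent with an API hiccup or a package bug rather than a mistake in the calling code.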
I have tried:
Downloading the tables using acs.tables.install() as recommended in the second half of the error message. Doesn't help.
Changing the endyear and span to be sure that I'm falling within the years of data supported by the API. I seem to be, according to the API documentation. Have also used the package default arguments with no luck.
Using 'variable =' and the code for the variable as found in the official API documentation. This returns only the two lines with the mysterious "Error in if..." message.
Removing col.names = "pretty"
For now I'm just going to download the data file as a CSV and read it into R, but I'd like to be able to do this from the script for future maps. Any information on what's going on here would be appreciated. I am running R version 3.3.2. I'm also new to this package and the API, but I'm following the documentation and can't find evidence that I'm doing anything wrong.
Tutorial I am working off of:
http://zevross.com/blog/2015/10/14/manipulating-and-mapping-us-census-data-in-r-using-the-acs-tigris-and-leaflet-packages-3/#get-the-tabular-data-acs
And documentation of the acs package: http://eglenn.scripts.mit.edu/citystate/wp-content/uploads/2013/02/wpid-working_with_acs_R2.pdf
To follow up on Brandon's comment, version 2.1.1 of the package is now on CRAN, which should resolve this issue.
Your code runs for me. My guess would be that the Census API was temporarily down.
As you loaded tidycensus and you'd like to do some mapping, you might also consider the following code:
library(tidycensus)
census_api_key("your key here") # use `install = TRUE` to install the key
options(tigris_use_cache = TRUE) # optional - to cache the Census shapefile
median_income <- get_acs(geography = "block group",
                         variables = "B19013_001",
                         state = "CA", county = "San Diego",
                         geometry = TRUE)
This will get you the data you need, along with feature geometry for mapping, as a tidy data frame.
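As a sketch of the mapping step (assuming the get_acs() call above succeeded and that the sf and ggplot2 packages are installed), the returned geometry column can be plotted directly with geom_sf():

```r
# Sketch only: median_income comes from the get_acs() call above,
# which requires a valid Census API key to run.
library(ggplot2)

ggplot(median_income) +
  geom_sf(aes(fill = estimate), color = NA) +
  scale_fill_viridis_c(labels = scales::dollar) +
  labs(title = "Median household income by block group, San Diego County",
       fill = "Estimate")
```

The estimate column holds the ACS value for the requested variable, with the margin of error in moe alongside it.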
I emailed Ezra Haber Glenn, the author of the package, about this as I was having the same issue. I received a response within 30 minutes, after midnight no less, which I thought was amazing. Long story short: acs version 2.1.0 is configured to work with the changes the Census Bureau is making to their API later this summer, and in the meantime it is causing some problems for Windows users. Ezra is going to release an update with a fix, but for now I reverted to version 2.0 and it works fine. I'm sure there are a few ways to do this, but I installed the devtools package and ran:
require(devtools)
install_version("acs", version = "2.0", repos = "http://cran.us.r-project.org")
Hope this helps anyone else having a similar issue.
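After reinstalling, it's worth confirming which version actually loaded, e.g. with packageVersion("acs"). Version strings compare sensibly once converted with package_version(), so the check can be made explicit (the comparison below uses literals as a stand-in for the live packageVersion() lookup):

```r
# Stand-in for packageVersion("acs"), which reads the installed DESCRIPTION
installed <- package_version("2.0")
broken    <- package_version("2.1.0")  # the release with the Windows issue

installed < broken  # TRUE - we are on the older, working release
```

Comparing the raw strings with < instead would fall back on lexicographic ordering, which misbehaves for versions like "2.10".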
I need to download a custom dataset in an Azure Jupyter/iPython Notebook.
My ultimate goal is to install an R package. To be able to do this, the package (uploaded as a dataset) needs to be downloaded in code. I followed the steps outlined by Andrie de Vries in the comments section of this post: Jupyter Notebooks with R in Azure ML Studio.
Uploading the package as a ZIP file was without problems, but when I run the code in my notebook I get an error:
Error in curl(x$DownloadLocation, handle = h, open = conn): Failure
when receiving data from the peer
Traceback:
download.datasets(ws, "plotly_3.6.0.tar.gz.zip")
lapply(1:nrow(datasets), function(j) get_dataset(datasets[j, ], ...))
FUN(1L[[1L]], ...)
get_dataset(datasets[j, ], ...)
curl(x$DownloadLocation, handle = h, open = conn)
So I simplified my code into:
library("AzureML")
ws <- workspace()
ds <- datasets(ws)
ds$Name
data <- download.datasets(ws, "plotly_3.6.0.tar.gz.zip")
head(data)
Where "plotly_3.6.0.tar.gz.zip" is the name of my dataset of data type "Zip".
Unfortunately this results in the same error.
To rule out data type issues I also tried to download another dataset of mine which is of data type "Dataset". Also the same error.
Now I change the dataset I want to download to one of the sample datasets of AzureML Studio.
"text.preprocessing.zip" is of datatype Zip
data <- download.datasets(ws, "text.preprocessing.zip")
"Flight Delays Data" is of datatype GenericCSV
data <- download.datasets(ws, "Flight Delays Data")
Both of the sample datasets can be downloaded without problems.
So why can't I download my own saved dataset?
I could not find anything helpful in the documentation of the download.datasets function, neither on rdocumentation.org nor on cran.r-project.org (pages 17-18).
Try this:
library(AzureML)
ws <- workspace(
  id = "your AzureML ID",
  auth = "your AzureML Key"
)
name <- "Name of your saved data"
data <- download.datasets(ws, name)
It seems the error I got was due to a bug in the (then early) Azure ML Studio.
I tried again after Daniel Prager's reply, only to find out my code works as expected without any changes. Adding the id and auth parameters was not needed.
I'm using R version 3.0.2 and have installed the package tm. Previously, I also loaded a package called tm.plugin.tags. To get a measure of whether a text corpus was positive or negative I used the following approach:
library('tm')
library('tm.plugin.tags')
pos <- tm_tag_score(TermDocumentMatrix(corpus, control = list(removePunctuation = TRUE)), tm_get_tags("Positiv"))
tm.plugin.tags no longer seems to be available on CRAN. It was based on the following classification system: http://www.wjh.harvard.edu/~inquirer/homecat.htm. I'm wondering if there is any other package or approach I can use to achieve a similar result.
I have emailed the package maintainer of tm so I will post an update here once/if I receive a response.
You can install tm.plugin.tags using the following command:
install.packages("tm.plugin.tags", repos = "http://datacube.wu.ac.at", type = "source")
This installs without any problems.
Cheers