I'm looking for an R package that can read and extract the data from a barcode in a scanned image.
The barcode is in "Interleaved 2 of 5" format.
Is there any solution in R for this task, or do I have to move to Python?
I would rather stick to R.
I solved this problem by running a Python script from the R script.
I used the reticulate R package, which provides an R interface to Python modules, classes, and functions.
The Python script (decode_photo.py) is given below; the pyzbar and Pillow (PIL) packages are required:
from pyzbar.pyzbar import decode
from PIL import Image

def decode_my_photo(file_name):
    # decode() returns a list of all barcodes found in the image
    result = decode(Image.open(file_name))
    decoded_items = []
    for x in result:
        # x.data is a bytes object; decode it to a plain string
        decoded_items.append(x.data.decode("utf-8"))
    return decoded_items
The R script is the following:
library(reticulate)
con <- "path_to_barcode_image.png"
use_python("C:\\Users\\MyName\\Anaconda3\\") #can be different
source_python("decode_photo.py")
decoded_value <- decode_my_photo(con)
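If decoded_value comes back empty, it can help to test the Python helper on its own first (a minimal check, assuming decode_photo.py and the barcode image are in the working directory):
# Quick standalone test of the helper before wiring it into R
from decode_photo import decode_my_photo

print(decode_my_photo("path_to_barcode_image.png"))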
I am trying to run the code below, where I want to read a CSV file and then write it out as a "sas7bdat" file.
The prerequisite R libraries are already installed on the system.
from rpy2 import robjects
robjects.r('''
library(haven)
data <- read_csv("filename.csv")
write_sas(data, "filename.sas7bdat")
''')
After running the above code, no output is generated, and I am not getting any error either.
Expected output: read the .csv file and then export that data in .sas7bdat format (in a standard Python 3.9.2 editor).
Python does not have such functionality/library itself, hence I am trying this way to export data in .sas7bdat format.
Please suggest some change to the above code, or any other way in Python through which I can create/export the .sas7bdat format.
Thanks.
I have experience using R in Python Jupyter Notebooks; it is a bit complicated at the beginning, but it does work. Here are my personal notes, which I hope will help:
# Major steps in installing "rpy2":
# Step 1: install R on Jupyter Notebook: conda install -c r r-essentials
# Step 2: install the "rpy2" Python package: pip install rpy2 (you may have to check the version)
# Step 3: create the environment variables: R_HOME, R_USER and R_LIBS_USER
# (you can modify these environment variables in the system settings on your Windows PC, or set them in code each time, as shown in the sketch below)
# load the rpy2 module after installation
# Then you will be able to enable R cells within the Python Jupyter Notebook
# run this line in your Jupyter Notebook
%load_ext rpy2.ipython
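For Step 3, here is a minimal sketch for setting the environment variables in code before rpy2 is loaded; the paths are hypothetical and must be replaced with the locations of your own R installation:
import os

# Hypothetical paths: replace with your own R installation and library locations
os.environ["R_HOME"] = r"C:\Program Files\R\R-4.2.1"
os.environ["R_USER"] = r"C:\Users\MyName"
os.environ["R_LIBS_USER"] = r"C:\Users\MyName\Documents\R\win-library\4.2"

import rpy2.robjects as robjects  # import rpy2 only after the variables are set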
My task was to use ggplot2 from Python, so I did:
# now use R to access this data frame and plot it using ggplot2
# tell Jupyter Notebook that this cell uses R, and pass in the "test_data" data frame generated in Python
%%R -i test_data
library(ggplot2)
plot <- ggplot(test_data) +
    geom_point(aes(x, y), size = 20)
plot
ggsave('test.png')
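For completeness, the test_data passed in with -i above can be any pandas data frame created in a preceding Python cell, for example (the x and y columns here are hypothetical):
import numpy as np
import pandas as pd

# Hypothetical data frame handed to the R cell via "%%R -i test_data"
test_data = pd.DataFrame({"x": np.arange(10), "y": np.random.rand(10)})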
Before you run the code, please make sure that haven and readr are installed in your R installation.
from rpy2.robjects.packages import SignatureTranslatedAnonymousPackage

string = """
write_sas <- function(file, col_names = TRUE, write_to){
    data <- readr::read_csv(file, col_names = col_names)
    haven::write_sas(data, path = write_to)
    print(paste("Data is written to ", write_to))
}
"""

rwrap = SignatureTranslatedAnonymousPackage(string, "rwrap")

rwrap.write_sas(file = "https://robjhyndman.com/data/ausretail.csv",
                col_names = False,
                write_to = "~/Downloads/filename.sas7bdat")
You can pass any of the R function's arguments from Python, the same way I used col_names.
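As an optional sanity check (not part of the original answer), the exported file can be read back from Python, since pandas can read the sas7bdat format:
import os
import pandas as pd

# Read the freshly written file back to confirm the export worked
path = os.path.expanduser("~/Downloads/filename.sas7bdat")
df = pd.read_sas(path, format="sas7bdat")
print(df.head())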
I do not have any code to post, but just a question.
There are several tools I am aware of for reading SDMX files in R (SDMX is an XML-based format for exchanging statistical data), for instance:
https://github.com/opensdmx/rsdmx
https://github.com/amattioc/SDMX
but does anyone know a way to export some data to an SDMX format for dissemination?
Any suggestion is welcome!
This is not a ‘pure’ R solution, but the Python sdmx1 package is fully usable through reticulate, and it allows you to programmatically generate SDMX objects and then serialize them as SDMX-ML (XML). For example:
# Use reticulate to import the Python package
> library(reticulate)
> sdmx <- import("sdmx")
# Create an (empty) DataMessage object
> msg <- sdmx$message$DataMessage()
# Convert to XML
> xml <- sdmx$to_xml(msg, pretty_print = TRUE)
# Write to file using the built-in R method
# The Python 'bytes' object must be decoded to a string
> write(xml$decode(), file = "message.xml")
This gives output like:
<mes:GenericData xmlns:com="http://www.sdmx.org/resources/sdmxml/schemas/v2_1/common" xmlns:data="http://www.sdmx.org/resources/sdmxml/schemas/v2_1/data/structurespecific" xmlns:str="http://www.sdmx.org/resources/sdmxml/schemas/v2_1/structure" xmlns:mes="http://www.sdmx.org/resources/sdmxml/schemas/v2_1/message" xmlns:gen="http://www.sdmx.org/resources/sdmxml/schemas/v2_1/data/generic" xmlns:footer="http://www.sdmx.org/resources/sdmxml/schemas/v2_1/message/footer" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <mes:Header>
    <mes:Test>false</mes:Test>
  </mes:Header>
</mes:GenericData>
For more information on authoring more complex messages using sdmx1, there is a page “HOWTO Generate SDMX-ML from Python objects” in the documentation.
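For reference, the same steps written directly in Python (a straight translation of the R calls above, assuming the sdmx1 package is installed) look roughly like this:
import sdmx
from sdmx.message import DataMessage

# Create an (empty) DataMessage object and serialize it to SDMX-ML (bytes)
msg = DataMessage()
xml_bytes = sdmx.to_xml(msg, pretty_print=True)

# Write the bytes straight to a file
with open("message.xml", "wb") as f:
    f.write(xml_bytes)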
I am new to Stack Overflow and Python, so please bear with me.
I am trying to run a Latent Dirichlet Allocation (LDA) analysis on a text corpus with the gensim package in Python, using the PyCharm editor. I prepared the corpus in R and exported it to a CSV file using this R command:
write.csv(testdf, "C://...//test.csv", fileEncoding = "utf-8")
Which creates the following csv structure (though with much longer and already preprocessed texts):
,"datetimestamp","id","origin","text"
1,"1960-01-01","id_1","Newspaper1","Test text one"
2,"1960-01-02","id_2","Newspaper1","Another text"
3,"1960-01-03","id_3","Newspaper1","Yet another text"
4,"1960-01-04","id_4","Newspaper2","Four Five Six"
5,"1960-01-05","id_5","Newspaper2","Alpha Bravo Charly"
6,"1960-01-06","id_6","Newspaper2","Singing Dancing Laughing"
I then run the following essential Python code (based on the gensim tutorials) to perform a simple LDA analysis:
import gensim
from gensim import corpora, models, similarities, parsing
import pandas as pd
from six import iteritems
import os
import pyLDAvis.gensim

class MyCorpus(object):
    def __iter__(self):
        for row in pd.read_csv('//mpifg.local/dfs/home/lu/Meine Daten/Imagined Futures and Greek State Bonds/Topic Modelling/Python/test.csv', index_col=False, header=0, encoding='utf-8')['text']:
            # assume there's one document per line, tokens separated by whitespace
            yield dictionary.doc2bow(row.split())

if __name__ == '__main__':
    dictionary = corpora.Dictionary(row.split() for row in pd.read_csv(
        '//.../test.csv', index_col=False, encoding='utf-8')['text'])
    print(dictionary)
    dictionary.save('//.../greekdict.dict')  # store the dictionary, for future reference

    ## create an mmCorpus
    corpora.MmCorpus.serialize('//.../greekcorpus.mm', MyCorpus())
    corpus = corpora.MmCorpus('//.../greekcorpus.mm')
    dictionary = corpora.Dictionary.load('//.../greekdict.dict')
    corpus = corpora.MmCorpus('//.../greekcorpus.mm')

    # train model
    lda = gensim.models.ldamodel.LdaModel(corpus=corpus, id2word=dictionary, num_topics=50, iterations=1000)
I get the following error codes and the code exits:
...\Python\venv\lib\site-packages\setuptools-28.8.0-py3.6.egg\pkg_resources\_vendor\pyparsing.py:832: DeprecationWarning: invalid escape sequence \d
...\Python\venv\lib\site-packages\setuptools-28.8.0-py3.6.egg\pkg_resources\_vendor\pyparsing.py:2736: DeprecationWarning: invalid escape sequence \d
...\Python\venv\lib\site-packages\setuptools-28.8.0-py3.6.egg\pkg_resources\_vendor\pyparsing.py:2914: DeprecationWarning: invalid escape sequence \g
...\Python\venv\lib\site-packages\pyLDAvis\_prepare.py:387: DeprecationWarning:
.ix is deprecated. Please use
.loc for label based indexing or
.iloc for positional indexing
I cannot find any solution and, to be honest, I have no clue where exactly the problem comes from. I spent hours making sure that the encoding of the CSV is UTF-8 and that it is exported (from R) and imported (in Python) correctly.
What am I doing wrong, or where else could I look? Cheers!
A DeprecationWarning is exactly that: a warning about a feature being deprecated, which is supposed to prompt the user to switch to some other functionality in order to stay compatible in the future. So in your case I would just watch for updates of the libraries that you use.
Starting with the last warning, it looks like it originates from pandas and has been logged against pyLDAvis here.
The remaining ones come from the pyparsing module, but it does not seem that you are importing it explicitly. Maybe one of the libraries you use depends on it and relies on some relatively old, deprecated functionality. To get rid of the warnings, I would first check whether upgrading those libraries helps. Good luck!
Try using this:
import warnings
import pyLDAvis

warnings.filterwarnings("ignore")
pyLDAvis.enable_notebook()
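If you prefer not to silence every warning, the filter can also be narrowed to deprecation warnings only (a small variation on the snippet above):
import warnings

# Suppress only DeprecationWarnings instead of all warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)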
Does anybody know if there is any module that could let me import a graph (network) in Julia?
Working with Python, I used the graph-tool package, which served me very well! I have my graphs in .gt file format. Can I use any module in Julia, so that I can import them there?
I have looked at LightGraphs and at Junet, which is fairly new, but I cannot seem to find any "import" section in the documentation.
The most straightforward solution is to convert your .gt files to the GraphML format, which is compatible with LightGraphs and is the alternative format recommended by graph-tool.
Suppose you have a ".gt" file that was generated in the past by the following python code:
from graph_tool.all import *
g = Graph()
v1 = g.add_vertex()
v2 = g.add_vertex()
e = g.add_edge(v1,v2)
g.save('g.gt')
Start a new python session, and convert from "gt" to "graphml" format:
import graph_tool as gt
g = gt.Graph()
g.load('g.gt')
g.save('g.xml.gz')
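As an optional check (not part of the original answer), you can reload the converted file in the same Python session and confirm the structure survived:
# Reload the GraphML file and verify the vertex/edge counts
h = gt.Graph()
h.load("g.xml.gz")
print(h.num_vertices(), h.num_edges())  # expect 2 vertices and 1 edge for the example above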
Then, in julia, use LightGraphs with the GraphIO package to load from the GraphML file:
using LightGraphs, GraphIO
D = loadgraphs("g.xml.gz", GraphMLFormat())
#> Dict{String,LightGraphs.AbstractGraph} with 1 entry:
# "G" => {2, 1} directed simple Int64 graph
If you'd like to use PyCall to perform the conversion directly from within julia (i.e. in a script), here's how:
using PyCall
@pyimport graph_tool as gt
G = gt.Graph()
G[:load]("g.gt")
G[:save]("g.xml.gz")
(Note that this assumes Python and the graph-tool library are already installed on your machine and accessible from Julia.)
In theory, if you prefer graph-tool, are used to its syntax, and wish to keep working directly with the .gt file format, you can use it via PyCall from within Julia throughout, as above. Whether this is preferable to migrating over to LightGraphs, which is designed for Julia, is another matter. It's your call :)
PS. Greetings from Leamington, fellow Leamingtonian!
Graph importing for LightGraphs is now in GraphIO.jl. Supported import formats currently include
GML
Graph6
GEXF
GraphML
Pajek NET
DOT
with more formats coming soon.
I am trying to use the R package 'forecast' from NetBeans to use its functions. I have managed to make the JRI connection, and I have also imported the javaGD library and experimented with it with some success. The problem with the forecast package is that I cannot find corresponding JAR files to include as a library in my project. I am loading it normally with re.eval("library(forecast)"), but when I call one of the library's functions, a null value is returned. Although I am fairly sure the code is correct, I am posting it just in case.
Thanks in advance.
Rengine re = new Rengine(Rargs, false, null);
System.out.println("rengine created, waiting for R!");
if(!re.waitForR())
{
    System.out.println("cannot load R");
    return;
}
re.eval("library(forecast)");
re.eval("library(tseries)");
re.eval("myData <- read.csv('C:/.../I-35E-NB_1.csv', header=F, dec='.', sep=',')");
System.out.println(re.eval("myData"));
re.eval("timeSeries <- ts(myData,start=1,frequency=24)");
System.out.println("this is time series object : " + re.eval("timeSeries"));
re.eval("fitModel <- auto.arima(timeSeries)");
REXP fc = re.eval("forecast(fitModel, n=20)");
System.out.println("this is the forecast output values: " + fc);
You did not convert the values from R into Java. You should first create a numeric vector from the auto.arima output in R, and then use the method .asDoubleArray() to read it into Java.
I gave a complete example here: "How can I load add-on R libraries into JRI and execute from Java?", which shows exactly how to use the auto.arima function in Java using JRI.