Trying to convert output from text to numeric in PHPExcel

I am using PHPExcel to export to a spreadsheet, and am getting the following error in the numbers column in Excel:
The number in this cell is formatted as text or preceded by an
apostrophe
I have searched on here extensively, and tried several solutions to fix this. The following line is what outputs the numbers:
$objPHPExcel->getActiveSheet()->setCellValue('D'.$excel_row, show_currency($aGenericAmenity['price']));
The closest I could get to an answer via my search was:
$objPHPExcel->getActiveSheet()->setCellValueExplicit('D'.$excel_row, show_currency($aGenericAmenity['price']), PHPExcel_Cell_DataType::TYPE_STRING);
but that did not work. I am not a programmer, so any help is appreciated.

Set the value as a straight number, and use a format mask to display it as currency
$objPHPExcel->getActiveSheet()
->setCellValue('D'.$excel_row, $aGenericAmenity['price']);
$objPHPExcel->getActiveSheet()
->getStyle('D'.$excel_row)
->getNumberFormat()
->setFormatCode(PHPExcel_Style_NumberFormat::FORMAT_CURRENCY_EUR_SIMPLE);
There are plenty of examples showing how to do this, and it's described in the documentation as well.
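The same principle carries over to any spreadsheet library: keep the cell value numeric and apply the currency appearance separately. A minimal Python sketch of the idea (the value and strings are illustrative, not from the question):

```python
# Sketch of the principle in the answer: keep the value numeric and
# apply a currency format only for display, instead of writing a
# pre-formatted string into the cell.
price = 1234.5  # illustrative value

# Writing a formatted string makes the cell "text" in Excel:
as_text = "€1,234.50"

# Keeping the raw number lets Excel treat it as numeric; the
# currency appearance comes from a format mask applied on top.
display = f"€{price:,.2f}"

assert isinstance(price, float)
assert display == as_text
```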

Related

How to display and work with strange characters in R

I have a data set in R in which some variables contain strings with "strange" characters, such as ë, é or €. Unfortunately, the source of the data set itself has some errors in displaying these characters, originating from the export. For example: â‚¬ is displayed in the string, where it should be €.
I've managed to debug these using the list provided at https://www.i18nqa.com/debug/utf8-debug.html. Although my code is a little redundant, I've managed to solve the problem: for each strange character I used gsub to replace it with the character it should be. I think (actually, I know) that this could be coded more efficiently, but hey, it worked for me. Here's what I did:
df$var1 <- gsub("â‚¬", "€", df$var1)
However, after saving my code and reopening the code, it is not working anymore. I think I saved it in the wrong coding format, causing some information to get lost. My question is if it is possible to get the code working again. R is displaying parts of my original code in red font with a red background. It would be a pain to retype my code, so I hope there's a quick fix for this.
By the way: there's no option to rerun the export with different settings, so I'd like to fix the problem with the data set at hand. I've tried all the encodings that R shows when you press "Save with encoding". For example, UTF-8 correctly works with Ã« and ë, but not with â‚¬.
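Assuming the classic UTF-8/Windows-1252 mix-up described above, a short Python sketch shows both how the damage happens and how it can be reversed in one round trip rather than one substitution per character:

```python
# The export decoded UTF-8 bytes with the Windows-1252 codec, which
# turns "€" (UTF-8 bytes E2 82 AC) into the three characters "â‚¬".
garbled = "€".encode("utf-8").decode("cp1252")
assert garbled == "â‚¬"

# Reversing the round trip repairs the whole string in one step,
# instead of one gsub() per damaged character.
repaired = garbled.encode("cp1252").decode("utf-8")
assert repaired == "€"
```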

How to remove unknown symbols from strings?

Sorry if this is a stupid question, but I tried searching for similar problems and did not find what I was looking for.
I scraped some text from the Internet and am now trying to work with it in R. I encountered a problem: there are unknown characters inserted in the middle of some words. It looks normal when I just display the table, but when I copy the text there is this symbol. For example, if the cell in the table is "Example", when I copy it to the console, I see this:
This is unfortunately problematic, as R does not recognize the word in these cases and would not find the cell if I, for example, tried to find all cells that contain the word "Example". As the error seems random and doesn't apply only to specific words, I do not know how to fix it. Can anybody help me?
Thank you very much in advance!!
You can use the iconv function to remove all non-ASCII characters from the string. Please see the example below:
iconv("Ex·ample", from = "UTF-8", to = "ASCII", sub = "")
# Example
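For comparison, the same non-ASCII stripping can be done with Python's standard library: encode to ASCII and drop every character that has no ASCII equivalent.

```python
# Mirror of iconv(..., to = "ASCII", sub = ""): characters outside
# ASCII are silently dropped by the "ignore" error handler.
s = "Ex\u00b7ample"  # "Ex·ample": a stray middle dot inside the word
clean = s.encode("ascii", errors="ignore").decode("ascii")
assert clean == "Example"
```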

R displaying unicode/utf-8 encoding rather than the special characters

I have a dataframe in R which has one row of UTF-8 encoded special characters and one integer row.
If I display both rows, or go into the view(), I do not see the characters displayed correctly.
However, if I only select the row with the special characters, it works. Any ideas?
This is the output (if I paste it, the encoding disappears):
This looks like a bug in R. I've worked around a number of these in the corpus package. Try the following:
library(corpus)
print.corpus_frame(WW_mapping[1:3,])
Alternatively, do
library(corpus)
class(WW_mapping) <- c("corpus_frame", "data.frame")
WW_mapping[1:3,]
Adding the "corpus_frame" class to the data frame changes the print and format methods; otherwise, it does not change the behavior of the object.
If that doesn't work, please report your sessionInfo() along with dput(WW_mapping). (Actually, even if this fix does work, please report this information so that we can let the R core developers know about the problem.)

characters converted to dates when using write.csv

My data frame has a column A with strings in character form:
> df$A
[1] "2-60" "2-61" "2-62" "2-63" etc
I saved the table using write.csv, but when I open it with Excel, column A appears formatted as dates:
Feb-60
Feb-61
Feb-62
Feb-63
etc
Does anyone know what I can do to avoid this?
I tweaked the arguments of write.csv, but nothing worked, and I can't seem to find an example on Stack Overflow that solves this problem.
As said in the comments, this is an Excel behaviour, not R's, and it can't be deactivated:
Microsoft Excel is preprogrammed to make it easier to enter dates. For
example, 12/2 changes to 2-Dec. This is very frustrating when you
enter something that you don't want changed to a date. Unfortunately
there is no way to turn this off. But there are ways to get around it.
Microsoft Office Article
The first way around it suggested in the article is not helpful, because it relies on changing the cell formatting, and that's too late once you open the .csv file in Excel (the value has already been converted to an integer representing the date).
There is, however, a useful tip:
If you only have a few numbers to enter, you can stop Excel from
changing them into dates by entering:
An apostrophe (') before you enter a number, such as '11-53 or '1/47. The apostrophe isn't displayed in the cell after you press Enter.
So you can make the data display as original by using
vec <- c("2-60", "2-61", "2-62", "2-63")
vec <- paste0("'", vec)
Just remember the values will still have the apostrophe if you read them again in R, so you might have to use
vec <- sub("'", "", vec)
This might not be ideal but at least it works.
One alternative is enclosing the text in =" ", as an excel formula, but that has the same end result and uses more characters.
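The ="..." formula trick can also be applied while the file is being written, before Excel ever sees it. A small Python sketch using the standard csv module (the values are the ones from the question):

```python
import csv
import io

values = ["2-60", "2-61", "2-62", "2-63"]

# Wrap each value in an Excel formula ="..." so Excel displays the
# literal text instead of reinterpreting it as a date.
buf = io.StringIO()
csv.writer(buf).writerow(f'="{v}"' for v in values)

# Reading the row back confirms each field survived as a formula.
buf.seek(0)
row = next(csv.reader(buf))
assert row[0] == '="2-60"'
```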
Another solution, a bit tedious: use Import Text File in Excel, click through the dialog boxes, and in Step 3 of 3 of the Text Import Wizard you will have the option of setting the column data format. Use "Text" for the column that has "2-60", "2-61", "2-62", "2-63". If you use General (the default), Excel tries to be smart and converts the values for you.
I solved the problem by saving the file in .xlsx format using the function write.xlsx() from the xlsx package (https://www.rdocumentation.org/packages/xlsx/versions/0.6.5).

R not reading ampersand from .txt file with read.table() [duplicate]

I am trying to load a table into R in which some of the column headings start with a number, i.e. 55353, 555xx, abvab, 77agg.
I found that after loading the file, all the headings that start with a number have an X before them, i.e. they are changed to X55353, X555xx, abvab, X77agg.
What can I do to solve this problem? Please note that not all column headings start with a number.
Many thanks
Probably your issue will be solved by adding check.names=FALSE to your read.table() call.
For more information see ?read.table