NUL at the beginning of written file - writetofile

I have the following question: when writing a String to a file, the first character seems to be a NUL. If I open the file with Notepad++, there is a black box where "NUL" is written in white. However, I don't really understand why this is happening. The String is transmitted from another device to a tablet running Android; afterwards the data is written to a file. When debugging, everything looks fine until I open the file.
Thanks and cheers
pingu
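No answer is recorded here, but a common culprit (an assumption about this case, not a confirmed diagnosis) is that the string gets encoded as UTF-16 somewhere between the device and the file: in UTF-16, every ASCII character occupies two bytes, one of which is 0x00, and Notepad++ renders those zero bytes as NUL. A minimal Python sketch of the symptom (Python only to show the bytes; the app itself is Android/Java):

```python
# In UTF-16, ASCII characters carry a 0x00 byte; in UTF-8 they do not.
text = "BEGIN"

utf16be = text.encode("utf-16-be")   # big-endian: the NUL precedes each char
print(utf16be)                       # b'\x00B\x00E\x00G\x00I\x00N'

utf8 = text.encode("utf-8")          # no NUL bytes for ASCII text
print(utf8)                          # b'BEGIN'
```

If the bytes arrive over the wire already UTF-16-encoded, decoding them with the charset the sender actually used before writing them out, or writing with an explicit UTF-8 charset, would avoid the stray NULs.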

Related

RStudio - some non-standard special characters in the R Script change by themselves

I occasionally work with data frames where unorthodox special characters are used that look identical to standard characters in RStudio's in-built viewing functionality. I refer to these characters in my scripts, but sometimes when I open the file, these characters have been changed to standard keyboard characters within the script.
For example, in my script, ’ changes to a standard apostrophe ' and – changes to a standard hyphen -.
These scripts are ones I have to run regularly, so having to manually correct this each time is a chore. I also haven't worked out what it is that triggers RStudio to make these changes. I've tried closing and reopening to test if that's the trigger, and the characters have remained correct. It only seems to happen after I've turned off my computer.
Does anyone know of a workaround for this and/or what is causing this? TIA
EDIT: the reason I need to do this is that I export to CSV, which is UTF-8 encoded.
I've found a workaround, although I welcome any feedback on any drawbacks to this.
If you have already written your code (including the special characters):
Click File > Save with Encoding... > Show all encodings > unicodeFFFE
Now when you reopen the file:
Click File > Reopen with Encoding... > Show all encodings > unicodeFFFE
If you haven't written your code yet, it should just be a case of saving your file with the unicodeFFFE encoding from the start (instructions above) before you write the code, and then using the Reopen with Encoding... option whenever you open the file.
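For context on why the substitution matters: the curly apostrophe and the en dash are distinct Unicode code points, not styled renderings of ' and -, so once the editor swaps them, the script's string literals genuinely change. A small Python illustration (Python here only to show the code points; the affected scripts are R):

```python
# U+2019 (right single quotation mark) vs U+0027 (apostrophe), and
# U+2013 (en dash) vs U+002D (hyphen-minus): different characters entirely.
pairs = [("\u2019", "'"), ("\u2013", "-")]
for fancy, plain in pairs:
    print(hex(ord(fancy)), hex(ord(plain)), fancy == plain)

# UTF-8 itself round-trips both characters losslessly, so the UTF-8 CSV
# export is not what destroys them -- the damage happens on save.
s = "it\u2019s \u2013 fine"
assert s.encode("utf-8").decode("utf-8") == s
```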

My R script has been turned entirely into blank spaces. Is this recoverable?

If I knew how to reproduce this then I would, but I have never encountered this before. I saved my work using RStudio as I normally would, saving the progress of writing my R script. But when I opened it the next day, the entire script has been blanked. It still has the same file size as before, but nothing appears when I open it in RStudio. It just has a blank line 1.
If I open the script in Notepad++ I can see that the entire file has NUL characters. What has happened here? Is this recoverable?
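A file that keeps its size but reads as all NULs usually points to an interrupted save: the file's length was updated but its data never reached the disk. A hedged triage sketch in Python (the affected file is an R script; the path is hypothetical): if nothing but 0x00 bytes remain, this copy holds no text, and recovery means a backup, an editor autosave, or RStudio's source history rather than the file itself.

```python
# Check whether anything other than 0x00 bytes survives in the file.
def nul_only(data: bytes) -> bool:
    """True if the content consists solely of 0x00 bytes."""
    return not data.strip(b"\x00")

# data = open("script.R", "rb").read()   # hypothetical path
print(nul_only(b"\x00" * 512))           # a fully blanked file -> True
print(nul_only(b"\x00x <- 1\x00"))       # some content survives -> False
```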

Encoding .Rmd file on Mac

Yesterday I was doing an RMD file with a Mac and everything went well, but today, when I reopened the file it looks like this:
D??Pˆ#A????DE????UOØÊË?? Z†é¡Áç€????Z[??¯íñ»•??`#??A??xE??eE??ç#????
f=T??ž Ë^??‚??l[iótó:ò??§FÌ????w72??øø‡ˆDA????U??#?? N??¯??J‡??E????NO´Á??€í??Ç_žË??S|V1™??;¯??îR‘”xe^gP??‹#K??????tO9¢??ì|£æ8??????i;v??7Z??§¬ç˜??/‰RM’ÂmŽlwi3vÇ7œ¡ÊAç????ZE????êJøÀ‡??E??D??AQ??Œ^aD????A[??ô…b›N?????óò†^ƒ®«??Èq??%Z§??Ë®????r#,æ????D??}Q¾ Tçœ??BQ??Œ,^Ž®t??;~??Z†hƒrá/??‡¨EŒ??]Iì«ÁÈ????SO‘Êyç??{~]ë??KôüÍ????nd#Xèñü•??amGo??akG{??jW}—(Ÿ˜
,??Ø&ï¿&ïbN??qô$…òš??6€¦??áY??ͬ??˜,(˜r-,”f}U?? Ë??_å??(Z–??$€ò??§^Ì®????3rÇ/œˆ#E????N??Aô??Í??^€??KQÈŒ??]QìŠÊ[ç¯??î^}??+Dž??\Wƒ??«#üƒ??µoÀa??E??9D????q['ò????‰®N??¤}î*|??
:????B[??ô,…š~5????Y…T›œ
Does someone know how to revert the encoding?
Though I am a Windows user, I have an easy way to make a corrupted file readable: you can reopen the file with whatever encoding you want. First, open the file as usual. Then, if the file looks garbled, click File and then Reopen with Encoding....
The Choose Encoding dialog will appear next, so you can choose one.
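RStudio's Reopen with Encoding... is essentially a trial-decode of the raw bytes. The same idea, sketched in Python (the candidate list below is just a set of common guesses, not the codecs RStudio offers):

```python
# Try candidate codecs against the raw bytes; codecs that raise are ruled
# out. Note that latin-1 never fails, so a successful decode is only a
# candidate, not proof of the right encoding -- you still have to eyeball
# the result for readable text.
def guess_decodings(raw, candidates=("utf-8", "utf-16", "cp1252", "latin-1")):
    results = {}
    for enc in candidates:
        try:
            results[enc] = raw.decode(enc)
        except UnicodeDecodeError:
            results[enc] = None
    return results

sample = "héllo".encode("utf-8")
print(guess_decodings(sample)["utf-8"])   # 'héllo'
```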

Corrupted Rmarkdown script: How can I get the Cyrillic characters back?

I had been working for weeks with a script containing lots of Cyrillic characters (inside chunks and outside them). One day I opened a new R Markdown script where I wrote in English, while the other document was still open in my R session. Afterwards, I returned to the Cyrillic document and everything written had turned into something like this: 8 иÑлÑ 1995 --> ÐлаÑÑÑ - наÑодÑ
The question is: Where is the source of problem? And, how can the corrupted script turn to its original form (with the Cyrillic characters)?
UPDATE!!
I have tried reopening the RStudio script with the encodings CP1251, CP1252, windows-1251 and UTF-8, but it does not work; the weird symbols just change into other weird symbols. The problem is that I saved the document with the default encoding (CP1251/windows-1251) at the very beginning.
Solution:
If you work with Cyrillic and Latin characters and your computer runs Windows (I do not know about Mac), make sure you always save the RStudio script with UTF-8 encoding. If you close the script and open it again, reopen the file with UTF-8 encoding.
Assuming you're using RStudio: open your *.Rmd file and then try to reopen it "with encoding", using the File menu.
Select "Show all encodings" and choose your specific encoding; I suggest windows-1251 for Cyrillic text.
Note: apparently the issue can also occur when the *.Rmd file is opened standalone at one time and from within an R Project at another.
Hope that helps.
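The garbled sample looks like the classic mojibake produced when UTF-8 bytes are decoded with a single-byte codec. When that is the only thing that happened (an assumption; if the corrupted text was then re-saved through a lossy codec, the trick fails), the damage is mechanically reversible. A Python sketch, with latin-1 standing in for the single-byte codec:

```python
# Simulate the corruption: UTF-8 bytes of Cyrillic text decoded as latin-1
# produce mojibake much like the sample in the question.
original = "8 июля 1995"
mojibake = original.encode("utf-8").decode("latin-1")

# Undo it: re-encode with the wrong codec, decode with the right one.
repaired = mojibake.encode("latin-1").decode("utf-8")
assert repaired == original
print(repaired)
```

If the single-byte codec was actually cp1251 or cp1252 rather than latin-1, substitute it in both steps; unlike latin-1, those codecs have unmapped bytes and can fail on some inputs.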

R exporting text issue

I have a problem that might be a bit unique, but I think that answering it could answer other questions about encoding too.
To expand my R skills, I tried to write a function to manage the VCF files from Android phones. Everything went fine until I tried to load the file on the phone: an error said the first line starts with something other than a normal VCF version 3 header. But when I check the file on the PC, it appears to be fine, without the characters my phone complained about. So I asked about it, and someone here said it is the Byte Order Mark and that I should use a hex editor to see it. And it was there, even though it couldn't be seen in the text editors of Windows and Linux.
Thus, I tried to solve the problem using the fileEncoding argument in R. The code I use to write the file is:
write.table(cons2, file = paste0(filename, ".vcf"), row.names = FALSE, col.names = FALSE, quote = FALSE, fileEncoding = "")
I put ASCII, UTF-8, etc. as the argument, but no luck: ASCII seems to delete some of the characters, and UTF-8 makes these characters visible in the text file.
I would appreciate if someone could provide a solution to this.
PS: I know that modifying the file in a hex editor solves the problem, but I want the solution in R code.
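No R-side answer is recorded here, but the hex-editor fix amounts to dropping the three-byte UTF-8 BOM (EF BB BF) in front of BEGIN:VCARD. A hedged post-processing sketch in Python (the file name is hypothetical; the same byte-level strip could be done in R with readBin/writeBin):

```python
# The UTF-8 byte order mark: invisible in most text editors, but fatal to
# strict VCF parsers that expect the file to start with "BEGIN:VCARD".
BOM = b"\xef\xbb\xbf"

def strip_bom(data: bytes) -> bytes:
    """Remove a leading UTF-8 BOM if present; otherwise return data as-is."""
    return data[len(BOM):] if data.startswith(BOM) else data

corrupted = BOM + b"BEGIN:VCARD\r\nVERSION:3.0\r\nEND:VCARD\r\n"
assert strip_bom(corrupted).startswith(b"BEGIN:VCARD")

# Applied to a real file (hypothetical path):
# raw = open("contacts.vcf", "rb").read()
# open("contacts.vcf", "wb").write(strip_bom(raw))
```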