This is how textparcali appeared in the RStudio source editor.
On textparcali (a tbl_df), I ran the following code to delete single-character words.
textparcali$word<-gsub("\\W*\\b\\w\\b\\W*",'', textparcali$word)
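To make this concrete, here is a small illustration of what that pattern removes, using made-up sample data (not my actual textparcali):

words <- c("a", "dog", "x y", "tree")
gsub("\\W*\\b\\w\\b\\W*", "", words)
# [1] ""     "dog"  ""     "tree"    (single-character words are stripped)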
But the deletion behaved strangely. You can see this in the picture below; please note lines 67 and 50.
Everything was fine for line 50 and lines like it. However, this was not the case for line 67 (and I think there are others like it).
I focused on one line (67) to understand why it was deleted incorrectly. I had already seen what this line shows in the editor, but I also wanted to look at it in the console, so I ran the following:
textparcali$word[67]
The word on line 67 looks different in the console. There is a value that does not survive copy-and-paste but, surprisingly, does appear in the console:
The reason I show it as a picture is that this character disappears after a copy-paste.
You can download a file containing this character from the link below; however, you should open it with Notepad++.
Character.txt
gsub did its job correctly, so how is this possible? What is the name of this character? When I try to write code that removes this character, the " sign changes and the character is not deleted.
The command textparcali$word<-gsub('[[:punct:]]+',' ',textparcali$word) does not work either.
What is the explanation for what I am seeing? I do not know. Is there a way to remove this character? What caused it? I know I have asked a lot of questions.
Thank you all.
(I apologize for the bad scribbles in the pictures.)
I found the surprise character.
It is Combining Dot Above Right (U+0358): ͘
The following is the code required to eliminate this character.
c<-"surprise character"
c
[1] "\u0358"
textparcali$word<-gsub("\u0358","",textparcali$word,ignore.case = FALSE)
textparcali$word<-gsub("\u307","",textparcali$word,ignore.case = FALSE)
Code \u307 did the job for me. However, you should determine what the actual code point is in your own data; otherwise, the character code you use may be incorrect.
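If you are not sure which code point is hiding in your own data, a quick way to check (assuming the odd value sits in textparcali$word[67], as in my question) is:

utf8ToInt(textparcali$word[67])                        # integer code points of every character in the word
sprintf("U+%04X", utf8ToInt(textparcali$word[67]))     # the same values written as U+XXXX
# stringi::stri_escape_unicode(textparcali$word[67])   # alternative view, if stringi is installed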
More detailed information can be found in the links below.
https://gist.github.com/ngs/2782436
https://www.charbase.com/0358-unicode-combining-dot-above-right
Thanks a lot!
I want to search for multiple codes appearing in a cell. There are so many codes that I'd like to write parts of the pattern on succeeding lines. For example, let's say I am looking for "^a11", "^b12", "^c67$", or "^d13[[:blank:]]". I am using:
^a11|^b12|^c67$|^d13[[:blank:]]
This seems to work. Now, I tried:
^a11|^b12|
^c67$|^d13[[:blank:]]
That also seemed to work. However, when I tried:
^a11|^b12|^c67$|
^d13[[:blank:]]
It did not count the last one.
Note that my code is wrapped in a function, so the pattern above is an argument that I feed to the function. I'm thinking that's the problem, but I still don't know why one way of splitting the lines works while the other does not.
I realized the answer today. The problem is that because I am passing the regex in as an argument, the line break itself becomes part of the pattern, and the newline is attached to whatever follows it.
Thus, the pattern below only appeared to work because ^c67$ matches nothing in my data anyway:
^a11|^b12|
^c67$|^d13[[:blank:]]
And the pattern below was not working because ^d13 does match something, but this setup actually looks for (newline)^d13[[:blank:]] instead of just ^d13[[:blank:]]:
^a11|^b12|^c67$|
^d13[[:blank:]]
So an inelegant fix is:
^a11|^b12|^c67$|
^nothinghere|^d13[[:blank:]]
This inserts a burner alternative that matches nothing, so it is the one that absorbs the line break.
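To illustrate the problem outside my function (the vector and the grepl() call below are just made-up examples), the line break really does become part of the pattern:

x <- c("a11 foo", "d13 bar")

# Splitting the literal across lines embeds a newline, so the last
# alternative becomes "\n^d13[[:blank:]]" and can never match.
broken <- "^a11|^b12|^c67$|
^d13[[:blank:]]"
grepl(broken, x)    # TRUE FALSE

# A cleaner fix than a burner alternative: build the pattern from pieces,
# so no newline ever enters the regex.
parts   <- c("^a11", "^b12", "^c67$", "^d13[[:blank:]]")
pattern <- paste(parts, collapse = "|")
grepl(pattern, x)   # TRUE TRUE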
I am looking for some direction here, as I seem to be missing something. I have the following ZPL that is loaded into a ZD620:
^XA
^LH0,0^LRN^FT100,50,0^A0N,30,30^FN1^FDCORELIMS.BARCODE^FS
^FO471,27^BQN,1,3^FDQA,^FN1^FS
^FT381,188^A0N,50,68^FD^FN1^FS
^XZ
I use off-the-shelf software that turns CORELIMS.BARCODE into the entity's barcode value to be encoded. That works fine. What is not working: when the generated QR code is scanned, the output is always missing the first 3 characters. What should show up is something like 5BX10; what I get is 10.
During my troubleshooting I used the following code and I receive the full string:
^XA
^LH0,0^LRN^FT100,50,0^A0N,30,30^FN1^FDCORELIMS.BARCODE^FS
^FO471,27^BQN,1,3^FDQA,5BX10^FS
^FT381,188^A0N,50,68^FD^FN1^FS
^XZ
All other fields using the ^FN1 command (including this one: ^FT381,188^A0N,50,68^FD^FN1^FS) output the correct value, just not the generated QR code.
I found similar questions; however, none of them use a ^FN command, and their suggestions do not work for my situation. Those links are listed here:
Print ZPLII QR to open url
ZPL QR code not printing what is in the string
Thanks for the help; I would really like to learn what I am doing wrong.
The ^FNx commands are used with stored formats; they cannot be used in a "one-off" label format like the one you are showing. I am traveling and don't have a Zebra printer to test this, but basically you need to define the label format "template" using ^DF, like:
^XA
^DFR:MYFORMAT.ZPL^FS
^LH0,0^LRN^FT100,50,0^A0N,30,30
^FO471,27^BQN,1,3^FN1^FS
^FT381,188^A0N,50,68^FN1^FS
^XZ
That stores the format as R:MYFORMAT.ZPL. Then you use ^XF to recall the format and provide the values for the ^FNx:
^XA
^XFR:MYFORMAT.ZPL^FS
^FN1^FDQA,CORELIMS.BARCODE^FS
^XZ
Note that you include the extra data params required by ^BQ in the ^FD string.
Hope that helps.
I'm having trouble reading this table into R:
http://www.census.gov/popest/about/geo/state_geocodes_v2012.txt
I tried all of the following:
read.table("http://www.census.gov/popest/about/geo/state_geocodes_v2012.txt")
read.table("http://www.census.gov/popest/about/geo/state_geocodes_v2012.txt",skip=7,header=FALSE)
read.table("http://www.census.gov/popest/about/geo/state_geocodes_v2012.txt",skip=8,header=FALSE)
read.table("http://www.census.gov/popest/about/geo/state_geocodes_v2012.txt",skip=10,header=FALSE)
If I tell it that the separator is a tab, I get the wrong table:
d = read.table(file="http://www.census.gov/popest/about/geo/state_geocodes_v2012.txt",header=FALSE,skip=7,sep="\t")
The only thing that seems to work is readLines, but then I don't know how to get a data.frame out of the lines.
d =readLines("http://www.census.gov/popest/about/geo/state_geocodes_v2012.txt")
Any suggestions? Thanks.
I agree that read.fwf will work, once you've worked out the widths.
But, yeah, I just hate it when people allow whitespace inside elements (e.g. "South Dakota"). One other thing you can do is edit the source text file, replacing runs of two or more spaces with a tab. That will leave the state names as-is but give you a workable delimiter.
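A minimal read.fwf sketch, with the caveat that the widths, the skip value, and the column names below are my guesses and need to be checked against the raw file:

url <- "http://www.census.gov/popest/about/geo/state_geocodes_v2012.txt"
d <- read.fwf(url,
              widths = c(3, 4, 3, 40),   # assumed field widths -- verify against the file
              skip   = 8,                # assumed size of the header block
              strip.white = TRUE,
              stringsAsFactors = FALSE)
names(d) <- c("region", "division", "statefips", "name")   # assumed column meanings
head(d)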
I am attempting to cat homegrown LaTeX output from R but have run into a snag that I suspect has to do with encoding, which I know nothing about, not even where to start.
Using cat like this:
cat(paste0("\b", paste0(1, 2, "r")))
Produces exactly what I expect in the console. But:
cat(paste0("\b", paste0(1, 2, "r")), file="foo.txt")
gives an odd square character where the "\b" was (as seen HERE). I doubt this is a new problem for R/LaTeX users creating homegrown output, but I am obviously not searching with the right keywords to find an answer.
What is happening?
How do I fix it?
EDIT: Per Dason's suggestion:
> readLines("foo.txt")
[1] "\b 1 2 r"
Nothing is wrong. Your editor is displaying the square character in place of \b. Try
readLines("foo.txt")
to see that "\b12r" is what is stored in the file.
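If you want to convince yourself at the byte level (same foo.txt as above), readBin shows the backspace byte 0x08 followed by the characters, with no square character stored anywhere:

cat(paste0("\b", paste0(1, 2, "r")), file = "foo.txt")
readBin("foo.txt", what = "raw", n = 10)
# [1] 08 31 32 72    (0x08 is backspace, then "1", "2", "r")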
I am belatedly setting up Aptana to use four spaces instead of tabs. I've made the necessary changes to the preferences so every new tab inserts four spaces.
All the existing tabs remain, however, so I get "Mixed spaces and tabs" errors. How can I do a Replace All to fix this? I've tried ^t, <TAB>, etc., but it just searches for these as literal strings. What is the correct way to specify a space and a tab?
I found myself in a similar situation, and copying the whole source code and re-pasting it helped me out. Just do the following on your source code: Ctrl+A, Ctrl+C, then delete the whole code and Ctrl+V it back in. You will get only spaces if you have set the option as you mentioned above.
There's an option in the Refactor Source menu to convert between tabs and spaces.
This worked for me: Edit > Find/Replace... (Ctrl+F), check Regular expressions, type \t in the find field and four spaces in the replace field (I use four spaces per tab).