Custom format in an RDLC report

I have a challenge in an RDLC report: to show an amount in a custom format, e.g.
150 will be shown as 15A
151 will be like 15B
152 will be like 15C etc.
The last digit in the amount will be replaced with A, B, C, etc.; the digits 0 to 9 will be replaced with A to J.
Is there a custom format available for this, or does a function need to be written for such a format?

I am not sure whether this will work for you, but here is some logic for an answer.
You have to use custom logic here (VB code is accepted in RDLC, much as you would write it in C#): define a parameter, code it, and have it return the value in the format you want.
http://yuluer.com/page/ddcjfdcd-rdlc-report-in-space.shtml
Custom .ToString() Formats in .rdlc Reports
https://msdn.microsoft.com/en-IN/library/ms252130%28v=vs.90%29.aspx
How to replace a particular string in rdlc?

You can use this expression:
=Left(CStr(Fields!YourNumber.Value), Len(CStr(Fields!YourNumber.Value)) - 1) &
Choose(CInt(Right(CStr(Fields!YourNumber.Value), 1)) + 1, "A", "B", "C", "D", "E", "F", "G", "H", "I", "J")
The Choose function substitutes the last digit. Because Choose is 1-based and the digits run from 0 to 9, the index needs the + 1 offset (CInt converts the digit from a string to a number first).

Related

Remove parts of column values after special characters

Problem
I have a dataframe where I am trying to modify column entries that have multiple special characters, a varying number of digits, and include both positive and negative numbers, as shown in the example below.
Name Number
A -500--550
B -600--650
C -700--750
D -8000--8500
E -9000--9500
F -100-200
G 200-400
These entries are date ranges and the middle hyphen is supposed to indicate "to", so "A" would be read as "negative 500 to negative 550"; "F" would be read as "negative 100 to (positive) 200"; and "G" would be read as "200 to 400".
Having a "-" at the beginning of many entries, a "--" in the middle, and a differing number of digits is making things a bit complicated. For my end result I would like to remove the "to" dash and everything after it. The end result should look like this:
Name Number
A -500
B -600
C -700
D -8000
E -9000
F -100
G 200
A dplyr approach would be great, but I'm not terribly picky as long as it works.
Similar Questions
I found some similar questions which came close to providing an answer, but the differences in the data sets have caused problems.
In this example they have a differing number of digits after the dot ".", and use gsub to tackle the issue.
Removing characters in column titles after "."
colnames(df) <- gsub("\\..*$", "", colnames(df))
In this other example they had multiple dots "." and wanted to delete the last ".".
Remove (or replace) everything after a specified character in R strings
One of the methods used stringr as is shown below.
library(stringr)
str_remove(x, "\\.[^.]*$")
The problem here is that for many entries I'd want to remove everything from the second "-" onwards, but that doesn't work for rows "F" or "G":
str_remove(testing$Number, "\\--[^-]*$")
[1] "-500" "-600" "-700" "-8000" "-9000" "-100-200" "200-400"
Sample Data
I've provided a sample test set below.
structure(list(Name = c("A", "B", "C", "D", "E", "F", "G"), Number = c("-500--550",
"-600--650", "-700--750", "-8000--8500", "-9000--9500", "-100-200",
"200-400")), class = "data.frame", row.names = c(NA, -7L))
I would replace on the pattern -+\d+$:
testing$Number <- sub("-+\\d+$", "", testing$Number)
The regex used here says to match:
-+ one or more dashes
\d+ followed by one or more digits
$ the end of the value
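Applied to the sample data (assuming it has been assigned to testing, as in the attempts above) and wrapped in mutate(), since a dplyr approach was requested, one possible sketch is:
library(dplyr)
testing %>%
  mutate(Number = sub("-+\\d+$", "", Number))
  Name Number
1    A   -500
2    B   -600
3    C   -700
4    D  -8000
5    E  -9000
6    F   -100
7    G    200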

gsub works but replace does not

This is a simple stupid test put together to explain what was happening with my code. Why does replace simply spit back what is fed into it instead of actually replacing the content? It is because of this unexpected behavior that I am now using gsub instead.
Examples:
> replace(x = "cat", "a", "o")
          a 
"cat"   "o" 
> gsub("a", "o", "cat")
[1] "cot"
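For what it's worth, base R's replace() operates on whole elements of a vector selected by an index (it is essentially x[list] <- values, so replace(x = "cat", "a", "o") appends an element named "a" rather than editing the text), while gsub() substitutes within each string. A minimal sketch with a made-up vector:
> x <- c("cat", "dog", "bird")
> replace(x, 2, "fish")   # replaces the 2nd element of the vector
[1] "cat"  "fish" "bird"
> gsub("a", "o", x)       # substitutes "a" with "o" inside each string
[1] "cot"  "dog"  "bird"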

Translating vector elements in R using correspondence table

What is an efficient and simple way in R to do the following?
read in two-column data from a file
use this information to build some kind of translation dictionary, like a python dict
apply the translation to the content of a vector in order to obtain the translated vector, possibly for several vectors but using the same correspondence information
I thought that the hash package would help me do that, but I'm not sure I'm performing step 3 correctly.
Say my initial vector is my_vect and my hash is my_dict
I tried the following:
values(my_dict, keys=my_vect)
The following observations make me doubt that I'm doing it the proper way:
The operation seems slow (more than one second on a powerful desktop computer with a vector of 582 entries and a hash of 46665 entries)
It results in something that doesn't look homogeneous with my_vect: while my_vect appears "indexed by numbers" (I mean that integer indices between square brackets appear beside the values when the data is displayed in the interactive console), the result of calling values as above still somehow looks like a dictionary: each displayed translated value has the original value (i.e. the hash key) displayed above it. I just want the values.
Edit:
If I understand correctly, R has some way of using "names" instead of numerical indices for vectors, and what I obtain using the values function is such a vector with names. It seems to work for what I wanted to do, although I imagine it takes more memory than necessary.
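(As an aside, a plain named character vector already behaves like such a lookup table in base R; in the sketch below the keys and values are the same toy ones as in the example further down, and unname() drops the names when only the translated values are wanted.)
my_dict <- c(a = "A", b = "B", c = "C", d = "D")
my_vect <- c("b", "c", "c")
unname(my_dict[my_vect])
[1] "B" "C" "C"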
I tried libraries hash and hashmap, and the second seemed more efficient.
A small usage example:
> library(hashmap)
> keys = c("a", "b", "c", "d")
> values = c("A", "B", "C", "D")
> my_dict <- hashmap(keys, values)
> my_vect <- c("b", "c", "c")
> translated <- my_dict$find(my_vect)
> translated
[1] "B" "C" "C"
To build the dictionary from a table obtained using read.table, the option stringsAsFactors = FALSE of read.table has to be used, otherwise weird things happen (see discussion in the comments of https://stackoverflow.com/a/38838271/1878788).
Did you try the str_replace_all function from the stringr package?
Let's say you have a dictionary data frame dict with columns original and replacement. The following code replaces all instances of original with replacement in the vector.
library(stringr)
translations <- setNames(dict$replacement, dict$original)
new_vect <- str_replace_all(vect, fixed(translations))
I'm not sure if it implements hashing, but the underlying implementation is compiled code from the stringi package, so it should be fast.
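A small usage sketch, with a made-up dict and vect (the column names follow the answer above):
library(stringr)
# each name in translations is the text to find; its value is the replacement
dict <- data.frame(original    = c("colour", "centre"),
                   replacement = c("color", "center"),
                   stringsAsFactors = FALSE)
vect <- c("colour wheel", "centre point")
translations <- setNames(dict$replacement, dict$original)
str_replace_all(vect, fixed(translations))
[1] "color wheel"  "center point"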
The only case where that won't work as is, is if some of the words in original contain other words from original as substrings. In that case you'll need to add regular-expression start-of-string (^) and end-of-string ($) anchors to the original strings you want to replace, and drop the fixed() wrapper so the anchors are treated as regular-expression syntax:
translations <- setNames(dict$replacement, paste0("^", dict$original, "$"))
new_vect <- str_replace_all(vect, translations)

Function that extracts each unique character in a string

Let's say that I have a string "rrkn". Is there a function in R that'll return a vector "r", "k", "n" (i.e. each unique character in the string)?
You can split the string into single characters with strsplit and keep the unique ones; to make it slightly less cumbersome to type, wrap it in a function:
uniqchars <- function(x) unique(strsplit(x, "")[[1]])
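For example, on the string from the question:
uniqchars("rrkn")
[1] "r" "k" "n"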
Another solution is to use rawToChar(unique(charToRaw(x))) (note that this returns the unique characters as a single string such as "rkn", rather than as a character vector).
A stringr option is:
library(stringr)
str_unique(str_split_1("rrkn", ""))

Return matching values instead of boolean

Consider:
chars <- c("A", "B", "C")
string <- c("B", "C")
chars[!(chars %in% string)]
So, I want to get the char(s) which is(are) not in string.
The code works, but I feel like it's kind of inconvenient.
Is there a function in R which returns the actual value directly, instead of evaluating TRUE/FALSE and then indexing?
As akrun mentioned you want to use
setdiff(chars, string)
More generally, setdiff shares its help page with union and other useful (and, I feel, underused) functions for performing operations on sets, such as
intersect()
which more directly answers the phrasing of your initial question.
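With the vectors from the question:
setdiff(chars, string)
[1] "A"
intersect(chars, string)
[1] "B" "C"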
