Pulling bond security names from ISIN in R

I'm trying to convert individual ISINs into their respective bond names in R. I've been able to achieve this in Excel, but oddly the 'bdp' function doesn't seem to work the same way in R.
To give an example, I currently have an ISIN for a government bond, GB00BK5CVX03, and I would like to dynamically convert it into the name for this bond (UKT 0.625 06/07/2025 GOVT).
In Excel, I do:
=BDP("GB00BK5CVX03 ISIN", "ID_BB_SEC_NUM_DES")
And it delivers a usable result: UKT 0.625 06/07/25
In R, I try pretty much the same thing:
bdp("GB00BK5CVX03 ISIN", "ID_BB_SEC_NUM_DES")
And it delivers:
I was expecting a similar result to the Excel output (namely a string that I could then assign to an object).
Does anyone know where I'm going wrong? Any help is much appreciated.

So I managed to solve it: it turns out the API will not respond when "ISIN" is appended to the identifier, even though that works fine in Excel.
Therefore changing the code to read:
bdp("GB00BK5CVX03 GOVT", "ID_BB_SEC_NUM_DES")
Solved the issue.
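For anyone hitting the same thing, here is a minimal sketch of the working call, assuming the bdp function comes from the Rblpapi package and a Bloomberg terminal session is running (the connection step is included for completeness):
library(Rblpapi)
blpConnect()  # requires a live local Bloomberg session
# Use the yellow-key suffix (e.g. GOVT) instead of "ISIN":
res <- bdp("GB00BK5CVX03 GOVT", "ID_BB_SEC_NUM_DES")
res$ID_BB_SEC_NUM_DES
# [1] "UKT 0.625 06/07/25"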

Related

Add row into table even when some data is not found in Power Automate?

Hi, I am trying to use AI Builder to scan some titles and populate them into a spreadsheet once the PDF is dropped into a folder.
It works fine when it finds all the data, but if it cannot find any data in the columns starting with SOL then it doesn't bring anything through. I would like it to still bring through any data from the first three columns even if nothing is found for the "SOL" columns. Can anyone help please?
Example output as needed; currently row 3 will not come through.
I've tried some conditions and Compose actions.
Maybe you can also post your message in the Power Automate community.

Exporting data from R that is in a list generated by a function

So I've used the decompose function and I want to export all the lists it generates, not just the plot it creates. I tried converting the lists into either a matrix or a data frame, but that gets rid of the date header and year columns, so if someone knows how to convert it while keeping the list formatting, that would solve my issue, I think.
Anyway, the closest I've got to exporting it while keeping the list format is by doing
capture.output(decomposed, file = "filename.csv")
(where decomposed is the object returned by decompose; passing the function decompose itself would only print its definition).
As you can see from the image attached though:
Sometimes the months aren't all together in a row, which is really not helpful or what I want. It also just puts everything in one column, and I'm having to go into Excel afterwards and do the text-to-columns option, which is going to get old really quickly.
Any help would be greatly appreciated. I'm really new to R, so apologies if there is an obvious fix I'm missing.
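One way to keep the dates alongside each component is to build a data frame from the pieces of the decompose result and write that out. A minimal sketch, using the built-in monthly co2 series as a stand-in for your data:
dec <- decompose(co2)
# Carry the time index as its own column so it survives the export
out <- data.frame(
  time     = as.numeric(time(co2)),
  observed = as.numeric(dec$x),
  trend    = as.numeric(dec$trend),
  seasonal = as.numeric(dec$seasonal),
  random   = as.numeric(dec$random)
)
write.csv(out, "decomposed.csv", row.names = FALSE)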

Converting data between classes in R: large CSV file, prices recorded like "$10.00"

So full disclosure, I am new to R and programming in general. Because of that, it is very hard for me to search when I have problems because I am not even sure what keywords to use. I am learning, and all I am hoping for y'all to do is point me in the right direction.
I have a very large CSV file that I imported into R: around 2 million observations (don't worry, I am not planning on using all 2 million). The only problem is that the people recording the data formatted the prices as "$10.00". Because of this, R reads the column in as a factor, and treats each individual price as a separate factor level because of the dollar sign. I would like to reformat this column as a numeric variable.
I am sure there is some way to go about reformatting this in R; the only problem is I am not sure which functions I need. Sorry for the very basic question, I have just hit a wall and figured I would reach out.
Any and all help is much appreciated!
Thank you!
We could also use sub, removing the leading run of non-digit characters (here just the dollar sign):
as.numeric(sub('\\D+', '', x))
#[1] 10.00 11.24 15.22
data
x <- c("$10.00", "$11.24", "$15.22")
Suppose that your data looks like this:
x <- c("$10.00", "$11.24", "$15.22")
You can use the substring function to trim the initial dollar sign (which will still leave you with strings) and then use as.numeric to turn it to a numeric vector.
newx <- as.numeric(substring(x, 2))
will produce a vector named newx with value
c(10.00,11.24,15.22)
We tell substring to start at the 2nd character (strings in R are 1-indexed), and then cast to numeric.
In your data frame (suppose it is called df), you can replace the column like
df$MoneyColumn <- as.numeric(substring(df$MoneyColumn,2))
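One caveat, since the question mentions the column being read in as a factor: substring coerces a factor to character for you, but calling as.numeric on a factor directly returns the underlying integer codes rather than the prices, so coercing explicitly is the safer habit:
# Explicit coercion guards against the factor-to-integer-codes pitfall
df$MoneyColumn <- as.numeric(substring(as.character(df$MoneyColumn), 2))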

RStudio won't recognize my variables? Any ideas why?

So I imported the file and saved it as Purity, and it's clearly imported. I tried a t-test but it doesn't recognize my variables. I tried using the names function to retrieve the variable names, and it's exactly what I'm entering: V1 and V2. I also tried with Lab-1 and Lab-2, and just using dataset=Purity, all to no avail.
I took a screenshot to show the code and that the data is in RStudio; can anyone tell me why this is not working?
Apologies if this is painfully obvious: I was only introduced to R for stats last week and am still a beginner, and I don't have much experience with programming in general. I have looked at other similar problems but just can't see why mine aren't being recognised and others are.
You've got 2 issues here:
1. You don't show how you imported the dataset, but you need to either remove the first row or (better) name the columns correctly. I'm assuming you imported the data with read.table(). If so, include the argument header = TRUE when you import the data.
2. You need to tell R where you want it to get Lab-1 and Lab-2 from:
with(Purity, t.test(`Lab-1`, `Lab-2`, paired = TRUE))
Note the backticks: Lab-1 is not a syntactic name, so without them R parses it as Lab minus 1.
R is case-sensitive. It looks like your script is using a lowercase "v" when it's an uppercase "V" in your variable names.
The problem is in the way you have named your variables. R doesn't recognise the hyphen (-) as a legitimate part of a variable name. Try using an underscore (_) instead.
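Putting those fixes together, a minimal sketch (the file name is hypothetical):
Purity <- read.table("purity.txt", header = TRUE)  # pick up the column names from the first row
names(Purity) <- c("Lab_1", "Lab_2")               # replace the hyphenated names with syntactic ones
with(Purity, t.test(Lab_1, Lab_2, paired = TRUE))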

Reading in large text files in R

I want to read in a large .ido file that has just under 110,000,000 rows and 8 columns: 2 integer columns and 6 logical columns. The delimiter "|" is used in the file. I tried using read.big.matrix and it took forever. I also tried dumpDf and it ran out of RAM. I tried ff, which I heard was a good package, but I am struggling with errors. I would like to do some analysis with this table if I can read it in some way. If anyone has any suggestions, that would be great.
Kind Regards,
Lorcan
Thank you for all your suggestions. I managed to figure out why the code wasn't working, so I'll share the answer and what I learned so that no one makes my stupid mistake again.
First of all, the data that was given to me contained some errors, so I was doomed to fail from the start. I was unaware of this until a colleague came across it in another piece of software: in a column that should have contained only integers there were some letters, so the read.table.ffdf function from the ff package got confused when it tried to read in the data set. I was then given another sample of data, 16,000,000 rows and 8 columns with correct entries, and it worked perfectly. The code that I ran is as follows and took about 30 seconds to read:
setwd("D:/data test")
library(ff)
ffdf1 <- read.table.ffdf(file = "test.ido", header = TRUE, sep = "|")
Thank you all for your time and if you have any questions about the answer feel free to ask and I will do my best to help.
Do you really need all the data for your analysis? Maybe you could aggregate your dataset (say, from minute values to daily averages). This aggregation only needs to be done once, and can hopefully be done in chunks. In this way you do not need to load all your data into memory at once.
Reading in chunks can be done using scan; the important arguments are skip and n. Alternatively, put your data into a database and extract the chunks that way. You could even use the functions from the plyr package to run chunks in parallel; see this blog post of mine for an example.
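For example, a rough sketch of chunked reading with scan, aggregating as you go (the file name and chunk size are illustrative):
chunk_rows <- 1e6   # rows per chunk
n_cols <- 8         # 2 integer + 6 logical columns
offset <- 1         # start by skipping the header line
repeat {
  # Read the next chunk_rows * n_cols fields as character
  vals <- scan("test.ido", what = character(), sep = "|",
               skip = offset, n = chunk_rows * n_cols, quiet = TRUE)
  if (length(vals) == 0) break
  # Rebuild rows from the flat vector of fields
  chunk <- as.data.frame(matrix(vals, ncol = n_cols, byrow = TRUE),
                         stringsAsFactors = FALSE)
  # ... summarise `chunk` here and keep only the aggregates ...
  offset <- offset + chunk_rows
}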
