Add row into table even when some data not found in Power Automate?

Hi, I am trying to use AI Builder to scan some titles and populate a spreadsheet once a PDF is dropped into a folder.
It works fine when it finds all the data, but if it cannot find any data in the columns starting with "SOL", then it doesn't bring anything through. I would like it to still bring through any data from the first three columns even if nothing is found for the "SOL" columns. Can anyone help please?
Example output as needed; currently row 3 will not come through.
I have tried some conditions and Compose actions.
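One approach that may help (a sketch only; the action name and output path below are placeholders for whatever your flow actually uses) is to wrap each "SOL" field of the "Add a row into a table" action in a coalesce() expression, so a value AI Builder could not find becomes an empty string instead of blocking the row:

coalesce(outputs('Extract_information_from_documents')?['body/responsev2/predictionOutput/labels/SOL1/value'], '')

The ?[] operator returns null instead of failing when a property is missing, and coalesce() then substitutes the empty string, so the first three columns still get written.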

Maybe you can also post your message in the Power Automate community.

Related

Pulling bond security names from ISIN in R

I'm trying to convert individual ISINs into their respective bond names in R. I've been able to achieve it in Excel, but oddly the 'bdp' function doesn't seem to work the desired way in R.
To give an example, I currently have an ISIN for a government bond, GB00BK5CVX03, and I would like to dynamically convert that ISIN into the name of this bond (UKT 0.625 06/07/2025 GOVT).
In Excel, I do:
=BDP("GB00BK5CVX03 ISIN", "ID_BB_SEC_NUM_DES")
And it delivers a usable result: UKT 0.625 06/07/25
In R, I try pretty much the same thing:
bdp("GB00BK5CVX03 ISIN", "ID_BB_SEC_NUM_DES")
And it delivers:
I was expecting a similar result to the Excel output (namely a string that I could then attach to an object).
Does anyone know where I'm going wrong? Any help is much appreciated.
So I managed to solve it; it turns out the API will not respond to "ISIN" appended at the end of the identifier, even though that works fine in Excel.
Therefore, changing the code to read:
bdp("GB00BK5CVX03 GOVT", "ID_BB_SEC_NUM_DES")
Solved the issue.
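For completeness, a minimal end-to-end sketch of the working version (assuming the Rblpapi package and a live Bloomberg session, which bdp needs):

library(Rblpapi)
blpConnect()   # requires a running Bloomberg terminal session

# Appending the "GOVT" yellow key instead of "ISIN" is what made it work;
# the result comes back as a one-row data frame.
result <- bdp("GB00BK5CVX03 GOVT", "ID_BB_SEC_NUM_DES")
result$ID_BB_SEC_NUM_DES   # e.g. "UKT 0.625 06/07/25"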

Iteration and Subtraction of columns for graphs

A question, if I may. I am using Jupyter Notebook and Python 3 with three CSV files from https://data.humdata.org/dataset/novel-coronavirus-2019-ncov-cases to produce graphs tracking the COVID-19 epidemic. These are purely for my own use, to learn Python and data visualisation. I have changed the dates in row one to Day1, Day2, etc., dropped the Province/State, Lat and Long columns, and set the Country/Region column as the index. Each dataset now has 107 columns and 267 rows. The three datasets are cases, deaths and recovered.
Things are going OK, but I have a slight problem and need some advice. The graphs are updated with a new column each day, and this causes me problems when I write code to show the daily increase in numbers from today over yesterday. Currently I have to manually update my code each day to compensate for the extra columns in the three CSV files, as my code reads like this:
daily_increase_C = [(0,
(cases["Day1"].sum()- (0)),
(cases["Day2"].sum()-cases["Day1"].sum()),
(cases["Day3"].sum()-cases["Day2"].sum()),
(cases["Day4"].sum()-cases["Day3"].sum()),
(cases["Day5"].sum()-cases["Day4"].sum()),
---------------------------------------------
(cases["Day102"].sum()-cases["Day101"].sum()),
(cases["Day103"].sum()-cases["Day102"].sum()),
(cases["Day104"].sum()-cases["Day104"].sum()),
(cases["Day106"].sum()-cases["Day105"].sum()))]
So the last line has to be copied, pasted and updated each day. There has to be a better way of achieving this, but new to coding as I am, I cannot seem to get my head around it and figure it out.
Any advice, pointers or help on how to look at and approach this problem would be greatly appreciated. I hope I have explained this clearly enough; if not, my apologies, and please post any questions needing clarification. Thanks in advance for any help.
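One way to avoid editing the code at all (a sketch, assuming the pre-processing described above has already been applied; the file name "cases.csv" is illustrative) is to sum each day's column and let pandas take the day-over-day difference, which covers however many columns the file has:

import pandas as pd

# Assumes the CSV has already been reshaped as described in the question:
# columns renamed Day1..DayN and Country/Region set as the index.
cases = pd.read_csv("cases.csv", index_col="Country/Region")

# Total cases per day across all countries.
daily_totals = cases.sum(axis=0)

# Day-over-day change; the first day's NaN becomes the Day1 total itself
# (the increase over an implicit zero), matching the hand-written list.
daily_increase_C = daily_totals.diff().fillna(daily_totals.iloc[0])

# A new column tomorrow just becomes one more entry -- no code changes.
print(daily_increase_C.tail())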

Exporting data from R that is in a list generated by a function

So I've used the decompose function and I want to export all the lists it generates, not just the plot it creates. I tried converting the lists into either a matrix or a data frame, but that gets rid of the date header and year columns, so if someone knows how to convert it while keeping the list formatting, that would solve my issue, I think.
Anyway, the closest I've got to doing this while keeping the list format is:
capture.output(decompose, file = "filename.csv")
As you can see from the image attached, though, sometimes the months aren't all together in a row, which is really not helpful or what I want. It also puts everything in one column, and I'm having to go into Excel afterwards and use the text-to-columns option, which is going to get old really quickly.
Any help would be greatly appreciated. I'm really new to R, so apologies if there is an obvious fix I'm missing.
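If keeping the dates is the main concern, one possible route (a sketch; x stands for whatever monthly ts object you decomposed) is to rebuild the year and month labels from the series' time index and bind everything into an ordinary data frame before writing:

dec <- decompose(x)

# Reconstruct year/month labels from the ts time index, then bind the
# component series into one data frame, one row per observation.
out <- data.frame(
  year     = as.numeric(floor(time(dec$x))),
  month    = month.abb[cycle(dec$x)],
  observed = as.numeric(dec$x),
  trend    = as.numeric(dec$trend),
  seasonal = as.numeric(dec$seasonal),
  random   = as.numeric(dec$random)
)

write.csv(out, "decomposed.csv", row.names = FALSE)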

read RTF files into R containing econ data (a lot of numbers)

Recently I obtained a lot of RTF files containing economic data for analysis I need to do. Unfortunately, this is the only way the Bureau of Statistics of my country can provide time series data spanning a long period. If it were a one-time need to pick out a particular indicator for 10 years or so, I would be OK finding the values by hand using Word/Notepad/TextEdit (for Mac). But my problem is that I have 15 files of data that I need to combine somehow into one dataset for my work. Before I even start, though, I don't have a clue whether it is possible to read those files into an appropriate format (a data.frame). I wanted to ask for expert opinions on how to approach this task. An example file can be downloaded from here:
https://www.dropbox.com/s/863ikx6poid8unc/Export_for_SO.rtf?dl=0
All values are in Russian. The dataset represents exports of a particular product (first column) across countries (second column) in US dollars for two periods.
Thank you.
Use the code found at https://datascienceplus.com/how-to-import-multiple-csv-files-simultaneously-in-r-and-create-a-data-frame/, replacing read_csv with read_rtf.
You may want to manually convert your files to another format using an office suite or a text editor; you should be able to "Save As" another format.
While in R, you may want to give striprtf a try. I'm guessing you will still have to clean your data a bit afterward.
You can install the package like this:
install.packages("striprtf")
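Building on that, a rough sketch for combining the 15 files (the folder name "rtf_data" is a placeholder; read_rtf returns one character string per paragraph, so how the lines split into columns will depend on the actual files and will likely need cleaning):

library(striprtf)

# Read every RTF file in the folder into a character vector of lines.
files <- list.files("rtf_data", pattern = "\\.rtf$", full.names = TRUE)
raw   <- lapply(files, read_rtf)

# Inspect one file first to decide how to parse its lines into columns.
str(raw[[1]])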

reading in large text files in r

I want to read in a large .ido file that has just under 110,000,000 rows and 8 columns. The columns are made up of 2 integer columns and 6 logical columns, and "|" is used as the delimiter in the file. I tried using read.big.matrix and it took forever. I also tried dumpDf and it ran out of RAM. I tried ff, which I heard was a good package, but I am struggling with errors. I would like to do some analysis with this table if I can read it in somehow. If anyone has any suggestions, that would be great.
Kind Regards,
Lorcan
Thank you for all your suggestions. I managed to figure out what was causing the error, and I'll share the answer and suggestions so no one makes my mistake again.
First of all, the data given to me contained some errors, so I was doomed to fail from the start. I was unaware of this until a colleague came across it in another piece of software: a column that should have contained only integers included some letters, which tripped up read.table.ffdf when it tried to read the data set. I was then given another sample of data, 16,000,000 rows and 8 columns with correct entries, and it worked perfectly. The code I ran is as follows, and it took about 30 seconds to read:
setwd("D:/data test")
library(ff)
ffdf1 <- read.table.ffdf(file = "test.ido", header = TRUE, sep = "|")
Thank you all for your time, and if you have any questions about the answer, feel free to ask and I will do my best to help.
Do you really need all the data for your analysis? Maybe you could aggregate your dataset (say, from minute values to daily averages). This aggregation only needs to be done once, and can hopefully be done in chunks, so that you do not need to load all your data into memory at once.
Reading in chunks can be done using scan; the important arguments are skip and n. Alternatively, put your data into a database and extract the chunks that way. You could even use the functions from the plyr package to run chunks in parallel; see this blog post of mine for an example.
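A rough sketch of that chunked approach (the file name and column layout are taken from the question; nlines is used here instead of n so each pass reads a fixed number of rows):

# Two integer and six logical columns, "|"-delimited, with a header line.
col_template <- list(integer(), integer(), logical(), logical(),
                     logical(), logical(), logical(), logical())

chunk_size <- 1e6
skip <- 1                      # start past the header row
repeat {
  chunk <- scan("test.ido", what = col_template, sep = "|",
                skip = skip, nlines = chunk_size, quiet = TRUE)
  if (length(chunk[[1]]) == 0) break
  # ... aggregate the chunk here, e.g. accumulate column sums ...
  skip <- skip + chunk_size
}

Note that re-scanning from the top of the file on every pass is slow for a file this size; reading from an open connection avoids that, but it keeps the sketch simple.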
