How can I access the data in a Cassandra table using RCassandra?

I need to get the data from a column of a table in a Cassandra database. I am using RCassandra for this. After getting the data I need to do some text mining on it. Please suggest how to connect to Cassandra and get the data into my R script using RCassandra.
My R script:
library(RCassandra)

# Connect to Cassandra's Thrift port and select the keyspace.
connect.handle <- RC.connect(host="127.0.0.1", port=9160)
RC.cluster.name(connect.handle)
RC.use(connect.handle, 'mykeyspace')

# Read the whole column family into a data frame.
sourcetable <- RC.read.table(connect.handle, "sourcetable")
print(ncol(sourcetable))
print(nrow(sourcetable))
print(sourcetable)
This prints the following output:
> print(ncol(sourcetable))
[1] 1
> print(nrow(sourcetable))
[1] 18
> print(sourcetable)
144 BBC News
158 IBN Live
123 Reuters
131 IBN Live
My Cassandra table contains four columns, but here only one column is shown. I need the column values separated, so how do I get the individual column values (e.g. each feedurl)? What changes should I make in my R script?
My Cassandra table, named sourcetable:

I have used Cassandra with R through the Java-based packages (with the correct JAR files), but RCassandra is easier: it is a direct interface to Cassandra that does not go through Java. To connect to Cassandra you use RC.connect, which returns a connection handle, like this:
RC.connect(host = <xxx>, port = <xxx>)
RC.login(conn, username = "bar", password = "foo")
You can then use RC.get to retrieve data or RC.read.table to read table data.
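For reference, here is a minimal sketch of both calls (the row key and column names below are illustrative assumptions, not taken from the question):

library(RCassandra)

# Connect to the Thrift port, authenticate, and select the keyspace.
conn <- RC.connect(host = "127.0.0.1", port = 9160)
RC.login(conn, username = "bar", password = "foo")
RC.use(conn, "mykeyspace")

# RC.get fetches named columns for a single row key;
# "144", "sourcename" and "feedurl" are placeholder values.
one.row <- RC.get(conn, "sourcetable", key = "144",
                  c.names = c("sourcename", "feedurl"))

# RC.read.table pulls the whole column family into a data frame.
all.rows <- RC.read.table(conn, "sourcetable")

RC.close(conn)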
But first, you should read this.

I am confused as well. Table demo.emp has 4 rows and 4 columns (empid, deptid, first_name and last_name). Neither RC.get nor RC.read.table gets all of the data.
cqlsh:demo> select * from emp;

 empid | deptid | first_name | last_name
-------+--------+------------+-----------
     1 |      1 |       John |       Doe
     1 |      2 |        Mia |     Lewis
     2 |      1 |       Jean |       Doe
     2 |      2 |      Manny |     Lewis
> RC.get.range.slices(c, "emp", limit=10)
[[1]]
key value ts
1 1.474796e+15
2 John 1.474796e+15
3 Doe 1.474796e+15
4 1.474796e+15
5 Mia 1.474796e+15
[[2]]
key value ts
1 1.474796e+15
2 Jean 1.474796e+15
3 Doe 1.474796e+15
4 1.474796e+15
5 Manny 1.474796e+15

(A plausible explanation, not stated in the thread: RCassandra speaks Cassandra's legacy Thrift protocol, and CQL3 tables with compound primary keys do not map cleanly onto Thrift column families: the key values live in the cell names and each CQL row adds an empty marker cell, which is why some values above come back blank.)

Related

Parsing NSG Flow Logs in Azure Log Analytics Workspace to separate public IP addresses

I have been updating a KQL query used for reviewing NSG Flow Logs so that it separates the columns for public/external IP addresses. However, the data within each cell of the column contains additional information that needs to be parsed out so that my Excel add-in can run NSLOOKUP against each cell and look for additional insights. Later I would like to use the parse operator (https://learn.microsoft.com/en-us/azure/data-explorer/kusto/query/parseoperator) to separate this information and determine what each external IP address belongs to through nslookup, resolve-dnsname, whois, or other means.
Currently I am attempting to parse out the column, but it is not comma-delimited and instead uses a single space and multiple pipes. Below is my query; I would like to add a parse to it that either produces a comma-delimited string in a single cell (for PublicIP, a combination of Source and Destination, as well as PublicSourceIP and PublicDestIP) or breaks it out into multiple rows. How would parse best be used to separate this information, or is there a better operator to carry this out?
For example, the content could look like this:
"20.xx.xx.xx|1|0|0|0|0|0 78.xxx.xxx.xxx|1|0|0|0|0|0"
AzureNetworkAnalytics_CL
| where SubType_s == 'FlowLog' and (FASchemaVersion_s == '1'or FASchemaVersion_s == '2')
| extend NSG = NSGList_s, Rule = NSGRule_s,Protocol=L4Protocol_s, Hits = (AllowedInFlows_d + AllowedOutFlows_d + DeniedInFlows_d + DeniedOutFlows_d)
| project-away NSGList_s, NSGRule_s
| project TimeGenerated, NSG, Rule, SourceIP = SrcIP_s, DestinationIP = DestIP_s, DestinationPort = DestPort_d, FlowStatus = FlowStatus_s, FlowDirection = FlowDirection_s, Protocol=L4Protocol_s, PublicIP=PublicIPs_s,PublicSourceIP = SrcPublicIPs_s,PublicDestIP=DestPublicIPs_s
// ## IP Address Filtering ##
| where isnotempty(PublicIP)
| parse kind = regex PublicIP with * "|1|0|0|0|0|0" ipnfo ' ' *
| project ipnfo
// ## Port Filtering ##
| where DestinationPort == '443'
Based on extract_all() followed by strcat_array() or mv-expand:
let AzureNetworkAnalytics_CL = datatable (RecordId:int, PublicIPs_s:string)
[
1 ,"51.105.236.244|2|0|0|0|0|0 51.124.32.246|12|0|0|0|0|0 51.124.57.242|1|0|0|0|0|0"
,2 ,"20.44.17.10|6|0|0|0|0|0 20.150.38.228|1|0|0|0|0|0 20.150.70.36|2|0|0|0|0|0 20.190.151.9|2|0|0|0|0|0 20.190.151.134|1|0|0|0|0|0 20.190.154.137|1|0|0|0|0|0 65.55.44.109|2|0|0|0|0|0"
,3 ,"20.150.70.36|1|0|0|0|0|0 52.183.220.149|1|0|0|0|0|0 52.239.152.234|2|0|0|0|0|0 52.239.169.68|1|0|0|0|0|0"
];
// Option 1
AzureNetworkAnalytics_CL
| project RecordId, PublicIPs = strcat_array(extract_all("(?:^| )([^|]+)", PublicIPs_s),',');
// Option 2
AzureNetworkAnalytics_CL
| mv-expand with_itemindex=i PublicIP = extract_all("(?:^| )([^|]+)", PublicIPs_s) to typeof(string)
| project RecordId, i = i+1, PublicIP
Fiddle
Option 1:

RecordId | PublicIPs
---------+------------------------------------------------------------------------------------------------
       1 | 51.105.236.244,51.124.32.246,51.124.57.242
       2 | 20.44.17.10,20.150.38.228,20.150.70.36,20.190.151.9,20.190.151.134,20.190.154.137,65.55.44.109
       3 | 20.150.70.36,52.183.220.149,52.239.152.234,52.239.169.68

Option 2:

RecordId | i | PublicIP
---------+---+----------------
       1 | 1 | 51.105.236.244
       1 | 2 | 51.124.32.246
       1 | 3 | 51.124.57.242
       2 | 1 | 20.44.17.10
       2 | 2 | 20.150.38.228
       2 | 3 | 20.150.70.36
       2 | 4 | 20.190.151.9
       2 | 5 | 20.190.151.134
       2 | 6 | 20.190.154.137
       2 | 7 | 65.55.44.109
       3 | 1 | 20.150.70.36
       3 | 2 | 52.183.220.149
       3 | 3 | 52.239.152.234
       3 | 4 | 52.239.169.68
David's answer addresses your question. I would just like to add that I worked on the raw NSG Flow Logs and parsed them with KQL in this way:
The raw JSON:
{"records":[{"time":"2022-05-02T04:00:48.7788837Z","systemId":"x","macAddress":"x","category":"NetworkSecurityGroupFlowEvent","resourceId":"/SUBSCRIPTIONS/x/RESOURCEGROUPS/x/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/x","operationName":"NetworkSecurityGroupFlowEvents","properties":{"Version":2,"flows":[{"rule":"DefaultRule_DenyAllInBound","flows":[{"mac":"x","flowTuples":["1651463988,0.0.0.0,192.168.1.6,49944,8008,T,I,D,B,,,,"]}]}]}}]}
KQL parsing:
| mv-expand records
| evaluate bag_unpack(records)
| extend flows = properties.flows
| mv-expand flows
| evaluate bag_unpack(flows)
| mv-expand flows
| extend flowz = flows.flowTuples
| mv-expand flowz
| extend result=split(tostring(flowz), ",")
| extend source_ip=tostring(result[1])
| extend destination_ip=tostring(result[2])
| extend source_port=tostring(result[3])
| extend destination_port=tostring(result[4])
| extend protocol=tostring(result[5])
| extend traffic_flow=tostring(result[6])
| extend traffic_decision=tostring(result[7])
| extend flow_state=tostring(result[8])
| extend packets_src_to_dst=tostring(result[9])
| extend bytes_src_to_dst=tostring(result[10])
| extend packets_dst_to_src=tostring(result[11])
| extend bytes_dst_to_src=tostring(result[12])
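To sanity-check the tuple positions, here is a minimal, self-contained sketch (the print statement and the sample tuple are illustrative, not part of the original answer):

print flowz = "1651463988,0.0.0.0,192.168.1.6,49944,8008,T,I,D,B,,,,"
| extend result = split(flowz, ",")
// result[0] is the Unix timestamp; positions 1-8 follow the version-2 tuple layout
| project source_ip = tostring(result[1]),
          destination_ip = tostring(result[2]),
          source_port = tostring(result[3]),
          destination_port = tostring(result[4]),
          traffic_decision = tostring(result[7])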

Multiple orderBy in Firestore

I have a question about how multiple orderBy works.
Supposing these documents:
collection/
doc1/
date: yesterday at 11:00pm
number: 1
doc2/
date: today at 01:00am
number: 6
doc3/
date: today at 1:00pm
number: 0
If I order by two fields like this:
.orderBy("date", "desc")
.orderBy("number", "desc")
.get()
How are those documents sorted? And, what about doing the opposite?
.orderBy("number", "desc")
.orderBy("date", "desc")
.get()
Will this result in the same order?
I'm a bit confused since I don't know if it will always end up ordering by the last orderBy.
In the documentation for orderBy() in Firebase it says this:
You can also order by multiple fields. For example, if you wanted to order by state, and within each state order by population in descending order:
Query query = cities.orderBy("state").orderBy("population", Direction.DESCENDING);
So it is basically the same logic as ORDER BY in SQL. Say you have a database of customers from all over the world. ORDER BY Country sorts them by country in whichever direction you want. If you add a second field, say Customer Name, the rows are ordered by Country first, and then, within each country, by Customer Name. Example:
1. Adam | USA |
2. Jake | Germany |
3. Anna | USA |
4. Semir | Croatia |
5. Hans | Germany |
When you call orderBy("country") you will get this:
1. Semir | Croatia |
2. Jake | Germany |
3. Hans | Germany |
4. Adam | USA |
5. Anna | USA |
Then, when you add orderBy("customer name"), you get this:
1. Semir | Croatia |
2. Hans | Germany |
3. Jake | Germany |
4. Adam | USA |
5. Anna | USA |
You can see that Hans and Jake switched places, because H comes before J, but the list is still ordered by country first. In your case, when you use this:
.orderBy("date", "desc")
.orderBy("number", "desc")
.get()
It will first order by date and then by number. But since no two of your documents share a date value, you won't notice the second sort taking effect; the same goes for the reversed query. Now suppose two documents did share a date, so that your data looks like this:
collection/
doc1/
date: yesterday at 11:00pm
number: 1
doc2/
date: today at 01:00am
number: 6
doc3/
date: today at 01:00am
number: 0
Now doc2 and doc3 are both dated today at 01:00am. When you order by date alone they will appear next to each other, with doc2 probably shown first. But when you add orderBy("number"), it breaks the tie between the equal dates. So, with orderBy("number") ascending (without "desc"), you would get this:
orderBy("date");
// output: 1. doc1, 2. doc2, 3. doc3
orderBy("number");
// output: 1. doc1, 2. doc3, 3. doc2
Because number 0 is before 6. Just reverse it for desc.
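In SQL terms, the combined ordering in this example corresponds to something like the following sketch (the table name is illustrative, and "date"/"number" are quoted because they collide with keywords):

SELECT *
FROM docs
ORDER BY "date" DESC, "number" DESC;
-- Rows are compared on "date" first; "number" only breaks ties.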

Load data from R into a PostgreSQL database without losing constraints

I'm trying to import a data frame from R into my PostgreSQL database. The tables are defined as follows:
CREATE TABLE fact_citedpubs(
citedpubs_id serial CONSTRAINT fact_citedpubs_pk PRIMARY KEY NOT NULL,
originID integer REFERENCES dim_country(country_id),
yearID integer REFERENCES dim_year(year_id),
citecount double precision
);
In my data frame I have values for originID, yearID and citecount. It looks like this:

YEAR | GEO_DESC | OBS_VALUE
-----+----------+----------
   8 |        1 |  13.29400
  17 |        2 |   4.42005
  17 |        1 |  12.95001
  15 |        1 |  11.61365
  14 |        1 |  13.48174
To import this data frame into the PostgreSQL database I use dbWriteTable(con, 'fact_citedpubs', citations, overwrite = TRUE).
Because overwrite = TRUE drops and recreates the table, PostgreSQL loses all the constraints that were set earlier (primary keys, foreign keys and data types). Is there another way to import data into a PostgreSQL database from R while keeping the constraints that were set in advance?
Many thanks!
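Not from the original thread, but a common workaround is to append to the existing table instead of overwriting it, so the table definition stays intact. A sketch, assuming a DBI-compatible connection (RPostgres/RPostgreSQL):

library(DBI)

# Rename the data frame columns to match the table definition; this
# mapping (YEAR -> yearid, GEO_DESC -> originid, OBS_VALUE -> citecount)
# is an assumption for illustration.
names(citations) <- c("yearid", "originid", "citecount")

# append = TRUE inserts rows without dropping the table, so the
# constraints defined in advance survive; the serial primary key
# should fill from its default since it is absent from the data frame.
dbWriteTable(con, "fact_citedpubs", citations,
             append = TRUE, row.names = FALSE)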

SQLite UPDATE returns empty

I'm trying to update a table column from another table with the code below.
The editor says '39 rows affected' and I can see something happened, because some cells changed from null to empty (nothing shows), while others are still null.
What could be wrong here? Why does it not update properly?
PS: I checked manually that the values are not empty in the source column.
UPDATE CANZ_CONC
SET EAN = (SELECT t1.EAN_nummer
           FROM ArtLev_CONC t1
           WHERE t1.Artikelcode_leverancier = Artikelcode_leverancier)
WHERE ARTNMR IN (SELECT t1.Artikelcode_leverancier
                 FROM ArtLev_CONC t1
                 WHERE t1.Artikelcode_leverancier = ARTNMR);
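As an aside (not raised in the thread): in SQLite, the unqualified Artikelcode_leverancier inside the first subquery resolves to t1 itself, so the comparison is effectively always true and the subquery is not correlated with CANZ_CONC. A sketch of an explicitly correlated version, assuming ARTNMR is the column that should match Artikelcode_leverancier:

UPDATE CANZ_CONC
SET EAN = (SELECT t1.EAN_nummer
           FROM ArtLev_CONC t1
           WHERE t1.Artikelcode_leverancier = CANZ_CONC.ARTNMR)
WHERE ARTNMR IN (SELECT Artikelcode_leverancier FROM ArtLev_CONC);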
Edit:
Table 2 looks like this:
NMR | EAN | CUSTOM
-------------------------------
1 | 987 | A
2 | 654 | B
3 | 321 | C
Table 1 looks like this:
NMR | EAN | CUSTOM
-------------------------------
1 | null | null
2 | null | null
5 | null | null
After the UPDATE, table 1 looks like this:
NMR | EAN | CUSTOM
-------------------------------
1 | | null
2 | | null
5 | null | null
I've got this working. I guess my data was corrupted after all. Since it is about 330,000 rows, that was not easy to spot, but it became obvious when loading the data took about 10 minutes instead of the usual 40-60 seconds. So I went back to the drawing board for the initial CSV file. I also saw that the columns had not been given a data type, so I altered that as well. Thanks for the help!

Match multiple columns with same value in ODBC

Hi, I have an Access table like this:
-------------------------------------------
| firstname | surname | address           |
-------------------------------------------
| Joan      | Rivers  | 123 Fake St.      |
| Michael   | Jackson | 69 Balls Head St. |
| Justin    | Bieber  | None              |
-------------------------------------------
I'm wondering if it is possible, over ODBC, to construct a query that allows me to match my input to any column.
Something like this:
SELECT * FROM NEMESISES WHERE '%value%' LIKE firstname or surname or address;
and when, for example, '%bie%' is plugged in as the value, it outputs the Justin Bieber row, while '%st%' outputs the Joan Rivers and Michael Jackson rows.
Thank You!
You can split it into three separate matches:
SELECT * FROM NEMESISES
WHERE firstname LIKE '%value%'
OR surname LIKE '%value%'
OR address LIKE '%value%';
Or you can match against the concatenated column values:
SELECT * FROM NEMESISES
WHERE firstname || surname || address LIKE '%value%';
I would prefer the first solution: the database has less to do (and concatenating columns can behave oddly when one of them is NULL).
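Since the pattern is supplied at runtime over ODBC, a parameterized form avoids quoting problems (a sketch; ? is the standard ODBC placeholder, and the same '%bie%'-style pattern would be bound to all three parameters):

SELECT *
FROM NEMESISES
WHERE firstname LIKE ?
   OR surname LIKE ?
   OR address LIKE ?;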
