I have a dataframe in the format below. I am trying to move it to a DynamoDB table with the following attributes: Device Id, SensorType, TimeStamp, Min, Max, Avg.
I'm struggling with my code below; any help would be appreciated.
import boto3
import pandas as pd
from decimal import Decimal

df = pd.DataFrame(col)
df['SensorValue'] = pd.to_numeric(df['SensorValue'], errors='coerce')
df['CurrentTime'] = pd.to_datetime(df['CurrentTime'])

# Aggregate per device, sensor type, and minute.
minute = pd.Grouper(key='CurrentTime', freq='T')
df = (df.groupby(['DeviceId', 'SensorDataType', minute])
        .SensorValue.agg(['min', 'max', 'mean'])
        .reset_index())  # turn the group keys back into columns for iterrows()

dynamodb = boto3.resource('dynamodb')
table1 = dynamodb.Table('bsm_data_table')
with table1.batch_writer() as batch:
    for index, row in df.iterrows():
        # A dict needs colons, not commas (the original built a set).
        # DynamoDB rejects Python floats, so wrap numbers in Decimal and
        # store the timestamp as an ISO-8601 string.
        content = {
            'DeviceId': row['DeviceId'],
            'SensorDataType': row['SensorDataType'],
            'CurrentTime': row['CurrentTime'].isoformat(),
            'min': Decimal(str(row['min'])),
            'max': Decimal(str(row['max'])),
            'mean': Decimal(str(row['mean']))
        }
        batch.put_item(Item=content)
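For a quick end-to-end check of the aggregation step, here is a minimal sanity test with made-up records shaped like the input above (hypothetical values; the DynamoDB write is left out):

import pandas as pd

col = {
    'DeviceId': ['dev1', 'dev1', 'dev1', 'dev2'],
    'SensorDataType': ['temp', 'temp', 'temp', 'temp'],
    'SensorValue': ['20.5', '21.0', '22.5', '19.0'],
    'CurrentTime': ['2021-06-01 10:00:10', '2021-06-01 10:00:40',
                    '2021-06-01 10:01:05', '2021-06-01 10:00:30'],
}
df = pd.DataFrame(col)
df['SensorValue'] = pd.to_numeric(df['SensorValue'], errors='coerce')
df['CurrentTime'] = pd.to_datetime(df['CurrentTime'])
out = (df.groupby(['DeviceId', 'SensorDataType',
                   pd.Grouper(key='CurrentTime', freq='T')])
         .SensorValue.agg(['min', 'max', 'mean'])
         .reset_index())
print(out)  # one row per device/sensor/minute with min, max and mean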
I am trying to fetch data from JIRA using the JirAgileR library. All the date fields return values in date format except duedate, which returns values like 18466.00 and 18473.00. The only difference between duedate and the rest of the date fields is that duedate is of type date, while the rest are datetime.
library(JirAgileR, quietly = T)
library(knitr, quietly = T)
library(dplyr, quietly = T)

# JIRABaseURL, username and password are assumed to be defined already
fields1 <- get_jira_issues(domain = JIRABaseURL,
                           username = username,
                           password = password,
                           jql_query = "project in('My Project')",
                           fields = c('duedate', 'updateddate', 'components'),
                           maxResults = 50,
                           verbose = FALSE,
                           as.data.frame = TRUE)
How can I fix the duedate in this code? updateddate is fetched in the correct format.
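A likely explanation (an assumption on my part, not something the JirAgileR docs state): duedate is coming back as R's internal representation of a Date, i.e. the number of days since 1970-01-01, so as.Date(fields1$duedate, origin = "1970-01-01") should recover the calendar dates. The arithmetic is easy to sanity-check in Python:

from datetime import date, timedelta

# If 18466.00 is a count of days since the Unix epoch, it maps to a
# plausible calendar date (and 18473.00 to exactly one week later).
epoch = date(1970, 1, 1)
print(epoch + timedelta(days=18466))  # 2020-07-23
print(epoch + timedelta(days=18473))  # 2020-07-30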
With the following code it is super easy to list all vector layers in a geopackage:
from osgeo import ogr

my_gpkg = r'PATH_TO_GEOPACKAGE'
gpkg_layers = [l.GetName() for l in ogr.Open(my_gpkg)]
Is there also a way to list all raster layers in a geopackage?
I could solve my problem with the help of this post: https://gis.stackexchange.com/questions/287997/counting-the-number-of-layers-in-a-geopackage
Here is my solution:
import sqlite3

my_gpkg = r'PATH_TO_GEOPACKAGE'

# A GeoPackage is an SQLite database; the table gpkg_contents is mandatory
# in every GeoPackage and lists each layer together with its data type.
sqliteConnection = sqlite3.connect(my_gpkg)
cursor = sqliteConnection.cursor()
sqlite_select_query = """SELECT table_name FROM gpkg_contents WHERE data_type = 'tiles'"""
cursor.execute(sqlite_select_query)
records = cursor.fetchall()

raster_layers = []
for row in records:
    layer_name = row[0]
    raster_layers.append(layer_name)
print('These are the raster layers in your geopackage: {}'.format(raster_layers))

cursor.close()
sqliteConnection.close()
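If you'd rather stay inside GDAL, subdatasets offer an alternative (a sketch; it assumes a GeoPackage holding more than one raster layer, in which case GDAL exposes each one as a subdataset named GPKG:<path>:<layer>, while with a single raster layer the list may be empty because the dataset opens directly):

from osgeo import gdal

my_gpkg = r'PATH_TO_GEOPACKAGE'
ds = gdal.Open(my_gpkg)

# GetSubDatasets() returns (name, description) tuples such as
# ('GPKG:/path/file.gpkg:layername', ...); keep the part after the last colon.
raster_layers = [name.rsplit(':', 1)[-1] for name, _ in ds.GetSubDatasets()]
print(raster_layers)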
I have created a graph with cumulative churn in the current month and the previous month. I created two columns, CurrentMonth and PrevMonth, with values "Yes"/"No", and then filtered the data by these columns.
CurrentMonth =
VAR currentrowyearmonth = FORMAT('Sheet1 (2)'[datetime]; "yyyymm")
VAR istoday = FORMAT(MAX('Sheet1 (2)'[datetime]); "yyyymm")
RETURN IF(istoday = currentrowyearmonth; "Yes"; "No")

PrevMonth =
VAR currentrowyearmonth = FORMAT('Sheet1 (2)'[datetime]; "yyyymm")
VAR istoday = FORMAT(EDATE(MAX('Sheet1 (2)'[datetime]); -1); "yyyymm")
VAR currentrowday = DAY('Sheet1 (2)'[datetime])
VAR maxday = DAY(MAX('Sheet1 (2)'[datetime]))
RETURN IF(istoday = currentrowyearmonth; IF(currentrowday <= maxday; "Yes"; "No"); "No")
I want to be able to plot the same graph for any date selected in a filter.
For example, if I choose today's date, June is the current month and May the previous month; if I choose May 26, May is the current month and April the previous month, and the graph is rebuilt automatically. Dates later than the selected one should not be counted.
I need to replace MAX('Sheet1 (2)'[datetime]) in the istoday variable with the date selected in the filter.
How can I do this, or does this task require a different approach?
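For what it's worth, the flagging logic itself is easy to prototype outside Power BI (in DAX, a slicer selection can only be read by a measure, e.g. via SELECTEDVALUE, not by a calculated column, so the real fix likely means converting these columns into measures). A minimal pandas sketch of the intended behavior, with a hypothetical anchor date standing in for the slicer selection:

import pandas as pd

df = pd.DataFrame({'datetime': pd.to_datetime(
    ['2021-05-03', '2021-05-26', '2021-05-28', '2021-04-10', '2021-04-30'])})

anchor = pd.Timestamp('2021-05-26')           # stands in for the filter selection
prev_anchor = anchor - pd.DateOffset(months=1)

# Current month: same year-month as the anchor, and not later than it.
df['CurrentMonth'] = ((df['datetime'].dt.to_period('M') == anchor.to_period('M'))
                      & (df['datetime'] <= anchor))
# Previous month: one month before the anchor, up to the same day of month.
df['PrevMonth'] = ((df['datetime'].dt.to_period('M') == prev_anchor.to_period('M'))
                   & (df['datetime'].dt.day <= anchor.day))
print(df)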
Okay, so I have a table like the one shown below. I want to use Power BI to create a new column called 'First_Interaction' that says 'True' if the row is the user's earliest entry for that day; any entry that came in after the first one should be set to 'False'.
This is what I want the column to look like:
Use the following DAX formula to create a column:
First_Interaction =
VAR __userName = 'Table'[UserName]
VAR __day = INT( 'Table'[Datetime] )  // date part, so "first" is evaluated per user per day
VAR __minDate = CALCULATE( MIN( 'Table'[Datetime] ),
    FILTER( 'Table', 'Table'[UserName] = __userName && INT( 'Table'[Datetime] ) = __day ) )
RETURN IF( 'Table'[Datetime] = __minDate, "TRUE", "FALSE" )
Power BI doesn't support sub-second precision, so your Datetime column must be a text value. Take that into consideration for future transformations.
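As a quick cross-check of the logic (a pandas sketch with hypothetical sample data, not Power BI code):

import pandas as pd

df = pd.DataFrame({
    'UserName': ['ana', 'ana', 'ana', 'bob'],
    'Datetime': pd.to_datetime(['2021-06-01 08:00', '2021-06-01 09:30',
                                '2021-06-02 07:15', '2021-06-01 10:00'])})

# The earliest entry per user per calendar day gets True, later ones False.
first = df.groupby(['UserName', df['Datetime'].dt.date])['Datetime'].transform('min')
df['First_Interaction'] = df['Datetime'].eq(first)
print(df)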
I have two tables (Oracle). I have marked the primary keys with a star before the column name.
Table1 columns:
*date
*code
*symbol
price
weight

Table2 columns:
*descriptionID
code
symbol
date
description
I need to find the following information with a query: for a given code and symbol on a particular day, is there any description?
For example: for code = "AA" and symbol = "TEST" on 2012-4-1 in Table1 => is there at least one row with code = "AA", symbol = "TEST", date = 2012-4-1 in Table2 (with any descriptionID)?
I tried the query below:
SELECT *
FROM Table1 t1
INNER JOIN Table2 t2
  ON t1.code = t2.code
 AND t1.symbol = t2.symbol
 AND TO_CHAR(t1.date, 'YYYY/MM/DD') = TO_CHAR(t2.date, 'YYYY/MM/DD')
But it doesn't give me output like:
code = AA, symbol = TEST, date = 2012-4-1 => description count = 10
code = AA, symbol = TEST, date = 2012-4-2 => description count = 5
code = BB, symbol = HELO, date = 2012-4-1 => description count = 20
Can someone suggest a query that achieves the above output?
I don't see why you need the join:
SELECT count(*)
FROM Table2
WHERE code='AA'
AND symbol = 'TEST'
AND date = to_date('2012-04-01', 'yyyy-mm-dd')
UPDATE (after reading your comment):
I still don't see why you need the join. Do you need some data from Table1?
Anyway, if you want the count for every (code, symbol, date) combination, why not group by?
As for the dates, it's better to use TRUNC to get rid of the time parts. So:
SELECT code, symbol, TRUNC(date) AS day, count(*)
FROM Table2
GROUP BY code, symbol, TRUNC(date)
TRUNC() takes a DATE and returns a DATE with the time portion stripped (set to midnight), so rows from the same calendar day group together regardless of their time of day. It should do exactly what you want.
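For completeness, here is how the parameterized count could be run from Python (a sketch using the python-oracledb driver; the credentials and DSN are placeholders, and the column name date is kept from the question even though in real Oracle it would need quoting, DATE being a reserved word):

import datetime
import oracledb

# Placeholder connection details.
conn = oracledb.connect(user="scott", password="tiger", dsn="localhost/XEPDB1")
cur = conn.cursor()
# Binding a datetime.date compares cleanly against TRUNC(date),
# since both carry no time-of-day part.
cur.execute(
    """SELECT COUNT(*) FROM Table2
       WHERE code = :code AND symbol = :symbol AND TRUNC(date) = :d""",
    {"code": "AA", "symbol": "TEST", "d": datetime.date(2012, 4, 1)})
print(cur.fetchone()[0])
cur.close()
conn.close()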