I am writing a script to connect to an AspenTech InfoPlus.21 database server.
When I query a single tag, I don't run into any problem:
import pandas as pd
import pyodbc
from datetime import datetime, timedelta

#---- Connect to IP21
conn = pyodbc.connect("DRIVER={AspenTech SQLplus};HOST=192.xxx.x.xxx;PORT=10014")

#---- Query string
tag = 'BAN0E10TI110V'
end = datetime.now()
start = end - timedelta(days=2)
end = end.strftime("%Y-%m-%d %H:%M:%S")
start = start.strftime("%Y-%m-%d %H:%M:%S")

sql = "select TS, VALUE from HISTORY " \
      "where NAME = '%s' " \
      "and PERIOD = 300*10 " \
      "and REQUEST = 2 " \
      "and TS between TIMESTAMP'%s' and TIMESTAMP'%s'" % (tag, start, end)

data = pd.read_sql(sql, conn)  # Pandas DataFrame with your data!
When I try to query multiple tags read from a CSV file (script below), I cannot get the required data.
import pandas as pd
import pyodbc
from datetime import datetime, timedelta

#---- Connect to IP21
conn = pyodbc.connect("DRIVER={AspenTech SQLplus};HOST=192.xxx.x.xxx;PORT=10014")
tags = pd.read_csv("C:\\Users\\xxx\\TAGcsvIN.csv", decimal=',', sep=';', parse_dates=True)

#---- Query string
end = datetime.now()
start = end - timedelta(days=2)
end = end.strftime("%Y-%m-%d %H:%M:%S")
start = start.strftime("%Y-%m-%d %H:%M:%S")

sql = "select TS, VALUE from HISTORY " \
      "where NAME = '%s' " \
      "and PERIOD = 300*10 " \
      "and REQUEST = 2 " \
      "and TS between TIMESTAMP'%s' and TIMESTAMP'%s'" % (tags['TAGcsv'], start, end)

data = pd.read_sql(sql, conn)  # Pandas DataFrame with your data!
Does anyone know how to query multiple tags via a CSV file?
I'm not proficient in Python, but if you want to query several tags, you should build the query with an IN clause, like this:
"where NAME IN ('tag1', 'tag2', 'tagN') " \
Related
I am trying to do some data exploration for a dataset I have. The table I want to import is 11 million rows. Here are the script and the output:
#Creating a variable for our BQ project space
project_id = 'project space'
#Query
Step1 <-
"
insertquery
"
#Executing the query from the variable above
Step1_df <- query_exec(Step1, project = project_id, use_legacy_sql = FALSE, max_pages = Inf, page_size = 99000)
Error:
Error in curl::curl_fetch_memory(url, handle = handle) :
Operation was aborted by an application callback
Is there a different BigQuery library I can use? I am also looking to speed up the upload time.
I have TIMESTAMP data like:
[29:23:59:45]
This stands for day 29 of whatever month, 23:59:45.
How can I convert it in PySpark to something like DAY 29, TIME:23:59:45?
Possibly using something like
from datetime import datetime
dVal = datetime.strptime('[29:23:59:45]', '%d/%h/%m/%s')
This is a classic case where a User Defined Function (UDF) is needed.
from datetime import datetime
from pyspark.sql import functions as F

def to_date(x):
    # '[29:23:59:45]' -> datetime with day=29 and time 23:59:45 (the year defaults to 1900)
    return datetime.strptime(x, '[%d:%H:%M:%S]')

to_date_udf = F.udf(to_date)

new_df = df.withColumn('date', to_date_udf(F.col('timestamp')))
Here df is the original DataFrame containing the column named 'timestamp' that you described.
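If the goal is literally the text DAY 29, TIME:23:59:45 rather than a datetime, a minimal variation (assuming every value matches the [dd:HH:MM:SS] pattern; the column names are taken from the question) could return a formatted string instead:

from datetime import datetime
from pyspark.sql import functions as F
from pyspark.sql.types import StringType

def to_label(x):
    # '[29:23:59:45]' -> 'DAY 29, TIME:23:59:45'
    d = datetime.strptime(x, '[%d:%H:%M:%S]')
    return "DAY %d, TIME:%s" % (d.day, d.strftime("%H:%M:%S"))

to_label_udf = F.udf(to_label, StringType())
labelled_df = df.withColumn('date', to_label_udf(F.col('timestamp')))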
I am trying to split a flowfile into multiple flowfiles on the basis of adding a month to a date that I get in the incoming flowfile.
E.g. if
{"to":"2019-12-31T00:00:00Z","from":"2019-03-19T15:36:48Z"}
are the dates I get in a flowfile, then I have to split this single flowfile into 11 flowfiles with date ranges like
{"to":"2019-04-19","from":"2019-03-19"}
{"to":"2019-05-19","from":"2019-04-19"}
{"to":"2019-06-19","from":"2019-05-19"}
....... and so on until
{"to":"2019-12-31","from":"2019-12-19"}
I have been trying, with example inputs, to split files into day-wise flowfiles with this:
from datetime import datetime, timedelta

begin = '2018-02-15'
end = '2018-04-23'
dt_start = datetime.strptime(begin, '%Y-%m-%d')
dt_end = datetime.strptime(end, '%Y-%m-%d')
one_day = timedelta(days=1)
start_dates = [dt_start]
end_dates = []
today = dt_start
while today <= dt_end:
    tomorrow = today + one_day
    print(tomorrow)
    today = tomorrow  # advance the loop variable, otherwise this never terminates
but I get an error in my ExecuteScript processor (see the NiFi flow screenshot).
Since you're using Jython, you may have to cast today to some Jython/Python time variable or call today.getTime() in order to do arithmetic operations on it.
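For the month-wise ranges themselves, here is a minimal pure-Python sketch of the date arithmetic (plain datetime only, so it should also run under Jython; add_month is a hypothetical helper, not a NiFi or standard-library function, and it assumes the day of month also exists in the next month, as the 19th does here):

import json
from datetime import datetime

def add_month(d):
    # Hypothetical helper: move to the same day in the next month
    if d.month == 12:
        return d.replace(year=d.year + 1, month=1)
    return d.replace(month=d.month + 1)

payload = {"to": "2019-12-31T00:00:00Z", "from": "2019-03-19T15:36:48Z"}
end = datetime.strptime(payload["to"], "%Y-%m-%dT%H:%M:%SZ")
cursor = datetime.strptime(payload["from"], "%Y-%m-%dT%H:%M:%SZ")

ranges = []
while cursor < end:
    nxt = min(add_month(cursor), end)          # cap the last range at the overall end date
    ranges.append({"to": nxt.strftime("%Y-%m-%d"), "from": cursor.strftime("%Y-%m-%d")})
    cursor = nxt

for r in ranges:
    print(json.dumps(r))

Inside ExecuteScript, each of these JSON lines would then become the content of one outgoing flowfile.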
I am new to SQLAlchemy. When I run this code there is no database yet. I want it to create the database, add the table defined, and insert the data. Reading the documentation for to_sql, this code should create the table if it doesn't exist (it doesn't), but when I run it, it throws an error that the table has no column 'num 1', and it does NOT create the database. What am I doing wrong, please?
import pandas as pd
import sqlite3
from sqlalchemy import create_engine

date_stuff = [(20171219, 13.71, 28), (20171319, 144.71, 33), (20171919, 99.99, 99)]
labels = ['date', 'num 1', 'num 2']

dev_env = "/home/test/Desktop/mtest/hvdata/"
db_name = "tinydatabase.db"

def new_sql_add(todays_data):
    todays_data.to_sql(name='mcm_trends', con=db, if_exists='append')

if __name__ == '__main__':
    db_path = dev_env + db_name
    db = create_engine('sqlite:///db_path')
    df_for_sql = pd.DataFrame.from_records(date_stuff, columns=labels)
    new_sql_add(df_for_sql)
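For reference, a minimal, self-contained sketch of the to_sql flow described above; it uses an in-memory SQLite engine purely so it runs anywhere, which is an assumption and not the setup in the question:

import pandas as pd
from sqlalchemy import create_engine

# In-memory SQLite engine, purely for illustration
engine = create_engine('sqlite:///:memory:')

df = pd.DataFrame.from_records(
    [(20171219, 13.71, 28), (20171319, 144.71, 33)],
    columns=['date', 'num 1', 'num 2'],
)

# to_sql creates the table if it does not exist, otherwise it appends
df.to_sql(name='mcm_trends', con=engine, if_exists='append', index=False)

print(pd.read_sql('select * from mcm_trends', engine))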
Not sure how to approach this one.
The user supplies an argument, e.g. program.exe '2001-08-12'.
I need to add a single day to that argument; this will represent a date range for another part of the program. I am aware that you can add or subtract from the current day, but how does one add or subtract from a user-supplied date?
import datetime
...
date = datetime.datetime.strptime(argv[1], "%Y-%m-%d")
newdate = date + datetime.timedelta(days=1)
Arnaud's code is valid; just see how to use it :)
>>> import datetime
>>> x=datetime.datetime.strptime('2001-08-12','%Y-%m-%d')
>>> newdate=x + datetime.timedelta(days=1)
>>> newdate
datetime.datetime(2001, 8, 13, 0, 0)
>>>
Okay, here's what I've got:
import sys
from datetime import datetime
user_input = sys.argv[1] # Get their date string
year_month_day = user_input.split('-') # Split it into [year, month, day]
year = int(year_month_day[0])
month = int(year_month_day[1])
day = int(year_month_day[2])
date_plus_a_day = datetime(year, month, day+1)
I understand this is a little long, but I wanted to make sure each step was clear. I'll leave shortening it up to you.
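Note that day+1 overflows at month and year ends (datetime(2001, 8, 32) raises ValueError); a minimal variant that combines this split-free approach with timedelta, as in the answer above, handles those boundaries:

import sys
from datetime import datetime, timedelta

user_input = sys.argv[1]                          # e.g. '2001-08-12'
date = datetime.strptime(user_input, '%Y-%m-%d')  # parse the whole string at once
date_plus_a_day = date + timedelta(days=1)        # rolls over month/year ends correctly
print(date_plus_a_day.strftime('%Y-%m-%d'))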