I am working in R. This is what my table looks like:
IAE_2017    IAE_2018
L1PYSH      L2PYSH
M1CHIMIE    M2CHIMIE
...         ...
Now I want to write a condition that takes the first two characters of IAE_2017 and checks whether they are different from the first two characters of IAE_2018.
You could use substr(), assuming df is your data frame:
substr(df[,1], start = 1, stop = 2)==substr(df[,2], start = 1, stop = 2)
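Since the question asks whether the prefixes are different, you can negate the comparison and keep the result as a new column (a minimal sketch; the column name prefix_differs is only an illustration):
# TRUE where the first two characters of the columns differ
df$prefix_differs <- substr(df$IAE_2017, 1, 2) != substr(df$IAE_2018, 1, 2)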
rule is a worksheet's name, and this command works fine:
oCell.Formula = "=LOOKUP(A2;$'rule'.$A$2:$A$9;$'rule'.$B$2:$B$9)"
I want to replace the first A2 in LOOKUP with a variable:
for id = 2 to NumRows
    oCell = Sheet.getCellRangeByName("B" & id)
    arg = "A" & id
    oCell.Formula = "=LOOKUP(arg;$'rule'.$A$2:$A$9;$'rule'.$B$2:$B$9)"
next id
The value of arg is not substituted into LOOKUP; how can I fix it?
oCell.Formula = "=LOOKUP(arg;$'rule'.$A$2:$A$9;$'rule'.$B$2:$B$9)"
Use the same quoting syntax as on the line where you defined arg.
id = 2
MsgBox("=LOOKUP(A" & id & ";$'rule'.$A$2:$A$9;$'rule'.$B$2:$B$9)")
Result:
=LOOKUP(A2;$'rule'.$A$2:$A$9;$'rule'.$B$2:$B$9)
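Applied to the loop from the question, the concatenation would look like this (a sketch, assuming Sheet and NumRows are defined as in the question):
for id = 2 to NumRows
    oCell = Sheet.getCellRangeByName("B" & id)
    oCell.Formula = "=LOOKUP(A" & id & ";$'rule'.$A$2:$A$9;$'rule'.$B$2:$B$9)"
next id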
It looks like things are going wrong on line 9 for me. There I want to push a new copy of the TagsTable into a dictionary. I'm aware that once a namedtuple field is recorded, it cannot be changed. However, the results baffle me, because the values do appear to change: when this code exits, mp3_tags[key].date is set to the last date, "1999_03_21", for all three dictionary keys.
So, two questions:
Is there a way to get a new TagsTable pushed into the dictionary?
Why doesn't the code fail and refuse to write the second (and even third) date to the TagsTable.date field (since these seem to be references to the same namedtuple)? I thought you could not write a second value?
 1 from collections import namedtuple
 2 TagsTable = namedtuple('TagsTable',['title','date','subtitle','artist','summary','length','duration','pub_date'])
 3 mp3files = ['42-001.mp3','42-002.mp3','42-003.mp3']
 4 dates = ['1999_01_07', '1999_02_14', '1999_03_21']
 5
 6 mp3_tags = {}
 7
 8 for mp3file in mp3files:
 9     mp3_tags[mp3file] = TagsTable
10
11 for mp3file,date_string in zip(mp3files,dates):
12     mp3_tags[mp3file].date = date_string
13
14 for mp3file in mp3files:
15     print( mp3_tags[mp3file].date )
It looks like this is the fix I was looking for:
from collections import namedtuple

mp3files = ['42-001.mp3','42-002.mp3','42-003.mp3']
dates = ['1999_01_07', '1999_02_14', '1999_03_21']

mp3_tags = {}

for mp3file in mp3files:
    mp3_tags[mp3file] = namedtuple('TagsTable',['title','date','subtitle','artist','summary','length','duration','pub_date'])

for mp3file,date_string in zip(mp3files,dates):
    mp3_tags[mp3file].date = date_string

for mp3file in mp3files:
    print( mp3_tags[mp3file].date )
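A note on why the original version behaved that way: line 9 stored the TagsTable class itself, not an instance of it, so all three dictionary entries pointed at the same class object. mp3_tags[mp3file].date = date_string then rebound a class attribute shared by every entry, and no error was raised because immutability applies to tuple instances, not to attribute assignment on a class. The fix above works only because each key now receives its own freshly created class. A more idiomatic sketch stores actual instances and builds changed copies with _replace() (the None values are placeholders for the fields not set here):

from collections import namedtuple

TagsTable = namedtuple('TagsTable',['title','date','subtitle','artist','summary','length','duration','pub_date'])

mp3files = ['42-001.mp3','42-002.mp3','42-003.mp3']
dates = ['1999_01_07', '1999_02_14', '1999_03_21']

# one instance per file, with every field initialised to a placeholder
empty = TagsTable(*[None] * len(TagsTable._fields))
mp3_tags = {mp3file: empty for mp3file in mp3files}

# _replace() returns a new instance with the given field changed
for mp3file, date_string in zip(mp3files, dates):
    mp3_tags[mp3file] = mp3_tags[mp3file]._replace(date=date_string)

for mp3file in mp3files:
    print(mp3_tags[mp3file].date)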
I'm trying to add 4 series using Bosun expressions. They are from 1, 2, 3 and 4 weeks ago. I shifted them using shift() so that they align with the current time, but I can't add them because they carry tags such as shift=1w. How can I add these series together?
Thank you
Edit: here's the query for two weeks:
$period = d("1w")
$duration = d("30m")
$week1end = tod(1 * $period )
$week1start = tod(1 * $period + $duration )
$week2end = tod(2 * $period )
$week2start = tod(2 * $period + $duration )
$q1 = q("avg:1m-avg:os.cpu{host=myhost}", $week1start, $week1end)
$q2 = q("avg:1m-avg:os.cpu{host=myhost}", $week2start, $week2end)
$shiftedq1 = shift($q1, "1w")
$shiftedq2 = shift($q2, "2w")
$shiftedq1 + $shiftedq2
The problem is similar to adding the series present in the output of an over query:
over("avg:1m-avg:os.cpu{host=myhost}", "30m", "1w", 2)
There is a new function called addtags that is pending documentation (see https://raw.githubusercontent.com/bosun-monitor/bosun/master/docs/expressions.md for a draft); it seems to work when combined with rename. Changing the last line to:
$shiftedq1+addtags(rename($shiftedq2,"shift=shiftq2"),"shift=1w")
should generate a single result group like { host=hostname, shift=1w, shiftq2=2w }. If you add additional queries for q3 and q4, you will probably need to rename the shift tag for those to unique values such as shiftq3 and shiftq4.
If you were using numbersets instead of seriessets, the transpose function t() would let you "drop" the unwanted tags. This is useful when generating alerts, since crit and warn need a single number value, not a series set:
$average_per_q = avg(merge($shiftedq1,$shiftedq2))
$sum_over_all = sum(t($average_per_q,"host"))
Result: { host=hostname } 7.008055555555557
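For all four weeks the same pattern should extend along these lines (a sketch: merge accepts more than two seriessets, and $shiftedq3 and $shiftedq4 are assumed to be built the same way as the first two):
$average_per_q = avg(merge($shiftedq1, $shiftedq2, $shiftedq3, $shiftedq4))
$sum_over_all = sum(t($average_per_q, "host"))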
Side note: you probably want to use a counter for os.cpu instead of a gauge. Example: $q1 = q("avg:1m-avg:rate{counter,,1}:os.cpu{. Without that rate section you are using the raw counter values instead of the gauge value.
I am banging my head against the wall trying to drop duplicates in a time series, based on the value of a datetime index.
My function is the following:
def csv_import_merge_T(f):
    dfsT = [pd.read_csv(fp, index_col=[0], parse_dates=[0], dayfirst=True, names=['datetime','temp','rh'], header=0) for fp in files]
    dfT = pd.concat(dfsT)
    #print dfT.head(); print dfT.index; print dfT.dtypes
    dfT.drop_duplicates(subset=index, inplace=True)
    dfT.resample('H').bfill()
    return dfT
which is called by:
inputcsvT = ['./input_csv/A08_KI_T*.csv']

for csvnameT in inputcsvT:
    files = glob.glob(csvnameT)
    print ('___'); print (files)
    t = csv_import_merge_T(files)
    print csvT
I receive the error
NameError: global name 'index' is not defined
what is wrong?
UPDATE:
The issue appears to arise when the csv input files (which are to be concatenated) overlap.
inputcsvT = ['./input_csv/A08_KI_T*.csv'] gets the files
A08_KI_T5
28/05/2015 17:00,22.973,24.021
...
08/10/2015 13:30,24.368,45.974
A08_KI_T6
08/10/2015 14:00,24.779,41.526
...
10/02/2016 17:00,22.326,41.83
and it runs correctly, whereas:
inputcsvT = ['./input_csv/A08_LR_T*.csv'] gathers
A08_LR_T5
28/05/2015 17:00,22.493,25.62
...
08/10/2015 13:30,24.296,44.596
A08_LR_T6
28/05/2015 17:00,22.493,25.62
...
10/02/2016 17:15,21.991,38.45
which leads to an error.
IIUC you can call reset_index, then drop_duplicates, and then set_index again:
In [304]:
df = pd.DataFrame(data=np.random.randn(5,3), index=list('aabcd'))
df
Out[304]:
          0         1         2
a  0.918546 -0.621496 -0.210479
a -1.154838 -2.282168 -0.060182
b  2.512519 -0.771701 -0.328421
c -0.583990 -0.460282  1.294791
d -1.018002  0.826218  0.110252
In [308]:
df.reset_index().drop_duplicates('index').set_index('index')
Out[308]:
          0         1         2
index
a  0.918546 -0.621496 -0.210479
b  2.512519 -0.771701 -0.328421
c -0.583990 -0.460282  1.294791
d -1.018002  0.826218  0.110252
EDIT
Actually there is a simpler method: call duplicated on the index and invert it:
In [309]:
df[~df.index.duplicated()]
Out[309]:
          0         1         2
a  0.918546 -0.621496 -0.210479
b  2.512519 -0.771701 -0.328421
c -0.583990 -0.460282  1.294791
d -1.018002  0.826218  0.110252
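Applied to the function from the question, the NameError disappears once the bare name index is replaced by the duplicated() test on the index itself (a sketch: the read_csv arguments are taken from the question, the parameter is named files to match how it is used, and resample's result is assigned so it is not lost):

import glob
import pandas as pd

def csv_import_merge_T(files):
    dfsT = [pd.read_csv(fp, index_col=[0], parse_dates=[0], dayfirst=True,
                        names=['datetime', 'temp', 'rh'], header=0) for fp in files]
    dfT = pd.concat(dfsT)
    # keep the first row for each duplicated datetime index value
    dfT = dfT[~dfT.index.duplicated(keep='first')]
    # resample returns a new frame, so return the result
    return dfT.resample('H').bfill()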
I've just started coding in ABAP a few days ago, and I have a task to call a report (created in transaction SE38) and have the report's result shown on the screen of a Web Dynpro application (built in SE80).
The report takes user input (e.g. Material Number, Material Type, Plant, Sales Org.) as conditions for the query, so the Web Dynpro application must allow the user to key in these parameters.
Some related articles talk about using SUBMIT rep EXPORTING LIST TO MEMORY and CALL FUNCTION 'LIST_FROM_MEMORY', but so far I really have no idea how to implement this.
Any answers will be appreciated. Thanks!
You can export it to PDF. Then, when a user clicks a link, you run the conversion and display the file in the browser window.
To do so, you start by creating a job with the code below:
constants c_name type tbtcjob-jobname value 'YOUR_JOB_NAME'.
data v_number type tbtcjob-jobcount.
data v_print_parameters type pri_params.

call function 'JOB_OPEN'
  exporting
    jobname = c_name
  importing
    jobcount = v_number
  exceptions
    cant_create_job = 1
    invalid_job_data = 2
    jobname_missing = 3
    others = 4.

if sy-subrc = 0.
  commit work and wait.
else.
  EXIT. "// todo: err handling here
endif.
Then, you need to get the printer parameters in order to submit the report:
call function 'GET_PRINT_PARAMETERS'
  exporting
    destination = 'LP01'
    immediately = space
    new_list_id = 'X'
    no_dialog = 'X'
    user = sy-uname
  importing
    out_parameters = v_print_parameters
  exceptions
    archive_info_not_found = 1
    invalid_print_params = 2
    invalid_archive_params = 3
    others = 4.

v_print_parameters-linct = 55.
v_print_parameters-linsz = 1.
v_print_parameters-paart = 'LETTER'.
Now you submit your report using the filters that apply. Do not forget to add the job parameters to it, as the code below shows:
submit your_report_name
  to sap-spool
  spool parameters v_print_parameters
  without spool dynpro
  with ...(insert all your filters here)
  via job c_name number v_number
  and return.

if sy-subrc = 0.
  commit work and wait.
else.
  EXIT. "// todo: err handling here
endif.
After that, you close the job:
call function 'JOB_CLOSE'
  exporting
    jobcount = v_number
    jobname = c_name
    strtimmed = 'X'
  exceptions
    cant_start_immediate = 1
    invalid_startdate = 2
    jobname_missing = 3
    job_close_failed = 4
    job_nosteps = 5
    job_notex = 6
    lock_failed = 7
    others = 8.

if sy-subrc = 0.
  commit work and wait.
else.
  EXIT. "// todo: err handling here
endif.
Now the job will run, and you'll need to wait for it to complete. Do it with a loop. Once the job is completed, you can get its spool output and convert it to PDF.
data v_rqident type tsp01-rqident.
data v_job_head type tbtcjob.
data t_job_steplist type tbtcstep occurs 0 with header line.
data t_pdf like tline occurs 0 with header line.

do 200 times.
  wait up to 1 seconds.

  call function 'BP_JOB_READ'
    exporting
      job_read_jobcount = v_number
      job_read_jobname = c_name
      job_read_opcode = '20'
    importing
      job_read_jobhead = v_job_head
    tables
      job_read_steplist = t_job_steplist
    exceptions
      invalid_opcode = 1
      job_doesnt_exist = 2
      job_doesnt_have_steps = 3
      others = 4.

  read table t_job_steplist index 1.
  if not t_job_steplist-listident is initial.
    v_rqident = t_job_steplist-listident.
    exit.
  else.
    clear v_job_head.
    clear t_job_steplist.
    clear t_job_steplist[].
  endif.
enddo.
check not v_rqident is initial.

call function 'CONVERT_ABAPSPOOLJOB_2_PDF'
  exporting
    src_spoolid = v_rqident
    dst_device = 'LP01'
  tables
    pdf = t_pdf
  exceptions
    err_no_abap_spooljob = 1
    err_no_spooljob = 2
    err_no_permission = 3
    err_conv_not_possible = 4
    err_bad_destdevice = 5
    user_cancelled = 6
    err_spoolerror = 7
    err_temseerror = 8
    err_btcjob_open_failed = 9
    err_btcjob_submit_failed = 10
    err_btcjob_close_failed = 11
    others = 12.
If you're going to send it via HTTP, you may need to convert it to BASE64 as well.
field-symbols <xchar> type x.
data v_offset(10) type n.
data v_char type c.
data v_xchar(2) type x.
data v_xstringdata_aux type xstring.
data v_xstringdata type xstring.
data v_base64data type string.
data v_base64data_aux type string.
data: begin of t_base64data occurs 0, "// table for the BASE64 output, split into 255-char rows
        data(255) type c,
      end of t_base64data.

"// convert the character-based spool lines into one xstring
loop at t_pdf.
  do 134 times. "// a tline row is 134 characters wide
    v_offset = sy-index - 1.
    v_char = t_pdf+v_offset(1).
    assign v_char to <xchar> casting type x.
    concatenate v_xstringdata_aux <xchar> into v_xstringdata_aux in byte mode.
  enddo.
  concatenate v_xstringdata v_xstringdata_aux into v_xstringdata in byte mode.
  clear v_xstringdata_aux.
endloop.

call function 'SCMS_BASE64_ENCODE_STR'
  exporting
    input = v_xstringdata
  importing
    output = v_base64data.

"// split the BASE64 string into rows of 255 characters
v_base64data_aux = v_base64data.
while strlen( v_base64data_aux ) gt 255.
  clear t_base64data.
  t_base64data-data = v_base64data_aux.
  v_base64data_aux = v_base64data_aux+255.
  append t_base64data.
endwhile.

if not v_base64data_aux is initial.
  t_base64data-data = v_base64data_aux.
  append t_base64data.
endif.
And you're done!
Hope it helps.
As previous posters said, you should do extensive training before implementing such things in a production environment.
However, calling Web Dynpro ABAP from within a report can be done with the help of the WDY_EXECUTE_IN_PLACE function module. You should pass it the Web Dynpro application and the necessary parameters.
CALL FUNCTION 'WDY_EXECUTE_IN_PLACE'
  EXPORTING
*   PROTOCOL                =
    INTERNALMODE            = ' '
*   SMARTCLIENT             =
    APPLICATION             = 'Z_MY_WEBDYNPRO'
*   CONTAINER_NAME          =
    PARAMETERS              = lt_parameters
*   SUPPRESS_OUTPUT         =
    TRY_TO_USE_SAPGUI_THEME = ' '
  IMPORTING
    OUT_URL                 = ex_url.

IF sy-subrc <> 0.
* Implement suitable error handling here
ENDIF.