I am trying to use the resourceIds property, moving on from an initial resourceId (https://fullcalendar.io/docs/resources-and-events). However, when I change my DB column from resourceId to resourceIds, my calendar fails to load and gives this error: https://imgur.com/uVmwPao
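For reference, the linked docs describe resourceIds as an array of resource IDs on the event object, not a plain string, so a hypothetical event-feed entry (IDs made up here) would need to look something like:

[
  { "id": "101", "title": "Meeting", "start": "2018-06-01T10:00:00", "resourceIds": ["a", "b"] }
]

A column value emitted as a single delimited string would not parse as such an array.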
As the title states, I am trying to update a jsonb field from inside R. I have several changes to apply. The original dataset is created by a third-party application but needs to be corrected for several rows.
The following statement works fine when run directly against the database:
UPDATE histories set meta=jsonb_set(meta,'{product}','"55-AB"') WHERE id = 17983;
Now, I need to update the "product" field for several different ids.
Assume the following dataframe as an example:
df <- data.frame(product = c("55-AB", "567-C", "UTG-98"),
                 id = c(17983, 54388, 20000))
Usually I would use sql_glue from the glue package, but I run out of quotes when generating the above queries dynamically.
sql_glue("UPDATE histories set meta=jsonb_set(meta,'{product}','"{`df$product`}"') WHERE id = {`df$id`};")
Error: unexpected '{' in "sql_glue("UPDATE histories set meta=jsonb_set(meta,'{product}','"{"
I am running into problems with the quotation. Any idea how to get around this?
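For reference, the statements that call should generate for the example data frame look like this; the inner double quotes are required because jsonb_set takes a JSON string literal, and they are exactly what collides with the quotes delimiting the R string:

-- Target statements for the example df above:
UPDATE histories SET meta=jsonb_set(meta,'{product}','"55-AB"')  WHERE id = 17983;
UPDATE histories SET meta=jsonb_set(meta,'{product}','"567-C"')  WHERE id = 54388;
UPDATE histories SET meta=jsonb_set(meta,'{product}','"UTG-98"') WHERE id = 20000;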
Description:
Recently I've been trying to automate some tasks at work using Pentaho (PDI), and I've come upon a problem that I had no luck solving or finding a solution for (I researched for many hours and tried to solve it on my own as well). My aim is to load a text file containing the name of a PL/SQL procedure stored on the server and a custom number of parameters for the procedure. For example, if the source text file contained the following text:
Test_schema.job_pkg.run_job;12345
It should run the job_pkg.run_job procedure over the defined connection and pass 12345 as its single parameter.
The problem:
The Call DB Procedure transformation step only accepts a SET (fixed) number of parameters: for example, I set the step to accept 4 parameters, but the procedure I'm calling accepts only 1. I want to be able to IGNORE the other parameters set in the step. When I try to send just one parameter while the step is set to accept 4, it throws:
Call DB Procedure.0 - ORA-06550: line 1, column 7:
PLS-00306: wrong number or types of arguments in call to 'RUN_JOB'
ORA-06550: line 1, column 7: PL/SQL: Statement ignored
What I have so far:
I've made a job that starts a transformation which loads the contents of the source file into memory, splits it into the correct fields using a Modified Java Script Value step, and sets Pentaho variables with the extracted values. A second transformation is then started that reads these variables and passes them as fields to the Call DB Procedure step. The last step always fails unless I manually remove all unused arguments.
Solution:
Based on AlainD's answer, I tried the Switch / Case step, which solved the problem. Now there is a different problem regarding the conversion of values. If I pass a number but set it as STRING in the Call DB Procedure step's parameters, it throws:
ORA-01403: no data found
This can be solved by handling the data with a Modified Java Script Value step, or any other step that can convert the data into the "correct" format.
What I do in cases like that is build the SQL command in a string, something like Test_schema.job_pkg.run_job(12345), and execute it with an Execute SQL script step.
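A minimal sketch of the command that string could hold for the example file line above, assuming an Oracle connection (the schema, package, and parameter come straight from the parsed line):

-- Built dynamically from "Test_schema.job_pkg.run_job;12345":
BEGIN
  Test_schema.job_pkg.run_job(12345);
END;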
Another workaround would be to count the number of parameters in the Modified Java Script Value step, and use a Switch / Case to redirect the flow onto a sequence of Call DB Procedure steps: one with 0 parameters, one with 1 parameter, one with 2 parameters, and so on. This method assumes that the maximum number of parameters is small.
I have been trying to update a datetime column using the following SQL Statement:
UPDATE HistoricData SET RecordDate=DATETIME(RecordDate,'60 minutes') WHERE DataStreamID=1 AND TimeFrameID=5
However, I keep getting the following error message:
NOT NULL Constraint Failed: HistoricData.RecordDate
If you could recommend the appropriate change to get this working, I would very much appreciate it.
I will attempt to attach a sample schema and data:
Table Data
Table Schema
After inspecting your DML, my only remaining concern was the datetime format you have in your table. I tried updating such a value the way you did, and it returns NULL: http://sqlfiddle.com/#!7/f4651/10. Why? Because your strings (dot notation) are not valid ISO-8601 strings, and DATETIME() returns NULL for anything it cannot parse, which is what trips the NOT NULL constraint. You probably need to simply replace the dots with dashes before updating (http://sqlfiddle.com/#!7/f4651/11).
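A sketch of that two-step fix, assuming the dots appear only as date separators (e.g. '2018.01.05 14:00:00'):

-- 1) Normalize the stored strings to ISO-8601 so DATETIME() can parse them:
UPDATE HistoricData SET RecordDate=REPLACE(RecordDate,'.','-') WHERE DataStreamID=1 AND TimeFrameID=5;
-- 2) The original shift now works:
UPDATE HistoricData SET RecordDate=DATETIME(RecordDate,'60 minutes') WHERE DataStreamID=1 AND TimeFrameID=5;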
I used Listing 3 from the following article to create a FIELDPROC program, QGPL/MOBHOMEPAS, which should encrypt a variable-length character column: Field Encryption in DB2 for i.
I compiled the RPGLE program and created a separate database file DBMLIB/UMAAAP00 as follows:
A R UMAAAF00 TEXT('-
A TEST ENCRYPTION')
A*
A IPIAAA 20A VARLEN(20)
A KYGAAA 11S 2 COLHDG('SALARY')
I then used STRSQL to alter the table and protect IPIAAA:
ALTER TABLE DBMLIB/UMAAAP00 alter column IPIAAA set FIELDPROC
QGPL.MOBHOMEPAS
ALTER COMPLETED FOR TABLE UMAAAP00 IN DBMLIB.
For some reason, when I add entries through UPDDTA directly to the file itself and then run WRKQRY to query the file and view them, I don't see them as encrypted.
Is this not how it's supposed to work? Is anyone able to assist me with the logic? Ultimately, I'd like to create a simple table from scratch that has a single password column of 20 characters or so, stored encrypted.
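For illustration, a sketch of the kind of table I have in mind, with hypothetical names (the FieldProc program has to exist before the column can reference it):

CREATE TABLE DBMLIB.PASSTBL (
  PASSWORD VARCHAR(20) FIELDPROC QGPL.MOBHOMEPAS
)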
If the code used for the named FieldProc program QGPL.MOBHOMEPAS was modeled after (an effective copy of) the source code found at the URL in the OP, then that code is implemented against the base level of the DB2 for IBM i 7.1 SQL FieldProc support, not the next (enhanced) level in which the masking feature was added. That is, every invocation other than for function-code=8 will necessarily be an Encode or a Decode operation, for which any masking of the data is unsupported, because changing the data (with that level of support) would corrupt the data in the TABLE.
Note (from http://www.mcpressonline.com/rpg/db2-field-procedures-finally-support-conditional-masking.html) the differences in the coding requirements described for the pre-masking support (eight parameters) and the post-masking support (nine parameters); the latter is a prerequisite for having Run Query (RUNQRY) and Update Data (UPDDTA) mask the data presented to the user:
The new FieldProc Masking support revolves around two main components.
The first component is a new parameter that was added to the parameter
lists that the DB2 engine passes to the FieldProc program on each
decode call. This new parameter controls whether or not the FieldProc
program can return a masked value. There are some DB2 operations—such
as the RGZPFM (Reorganize Physical File Member) command and trigger
processing—that always require the clear-text version of the data to
be returned. The second component is a new special SQLState value
('09501') that is to be returned by the FieldProc program whenever it
is passed a masked value on the encode call. This prevents the masked
value from being encoded, which would result in the original data
value being lost. When this special SQLState value is returned, DB2
will ignore the encoded value that is passed back by the FieldProc
program and instead use the value that's currently stored in the
record image for that column.
For some reason when I go in to add entries through upddta directly to
the file itself and then do a wrkqry to query and file and view them I
don't see them as encrypted. Is this not how it's supposed to work?
No, that's not how it's supposed to work. The data will be encoded on disk only.
When you view the data it will be decoded automatically by the FIELDPROC program no matter what you're using to view it (WRKQRY [yuck], DFU, STRSQL, whatever). This is how it works regardless of field masking (which is different/additional functionality).
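For example, reading the protected column through any interface returns the decoded value; only the stored record image holds the encoded bytes (a quick check against the table altered above):

SELECT IPIAAA FROM DBMLIB.UMAAAP00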
I have a few objects created in my database and I need to delete some of the repeating attribute values related to them.
The query I'm trying to run is:
UPDATE gemp1_product objects REMOVE ingredients[1] WHERE (r_object_id = '08015abd8002cd68')
But all I get is the following error message:
Error querying database.
[DM_QUERY_E_UPDATE_INDEX]error: "UPDATE: Unable to REMOVE the attribute ingredients at index 1."
[DM_OBJECT_W_DELETE_ATTR_POSITION_ERROR]warning: "attempt to delete non-existent attribute 88"
Object 08015abd8002cd68 exists and I can see it in the database. Queries like SELECT and DELETE work fine, but I do not want to delete the whole object.
There is no easy way to do this. The reason is that repeating attributes are ordered, to enable multiple repeating attributes to be synchronized for a given object.
Either:
1. set the attribute value to be empty for the given position, and change your code to discard empty attributes, or
2. use multiple DQL statements to shuffle the order so that the last one becomes empty, or
3. change your data model, e.g. use a single attribute as a property bag with pre-defined delimiters.
Details (1)
UPDATE gemp1_product OBJECTS SET ingredients[1] = '' WHERE ...
Details (2)
For each index, first find the value at index+1:
SELECT ingredients
FROM gemp1_product
WHERE (i_position*-1)-1 = <index+1>
ENABLE (ROW_BASED)
Use the value in a new query:
UPDATE gemp1_product OBJECTS SET ingredients[1] = '<value_from_above>' WHERE ...
It should also be possible to do this by nesting DQL somehow, but it might not be worth the effort.
Something is either wrong with your query or with your repository. I think you are mistyping your attribute name or using the wrong index in your UPDATE query.
If you google DM_OBJECT_W_DELETE_ATTR_POSITION_ERROR you'll find a slightly more detailed explanation:
CAUSE: Program executed a DeleteAttr operation that specified a non-existent attribute position (either a negative number or a number larger than the number of attributes in the object).
From this you could guess that the type isn't in a consistent state, or that you are trying to remove too large an index from your repeating attribute, etc. Did you check your repository with the Consistency Checker job and other similar jobs?
As for removing a repeating property (attribute) value with a DQL query, this is not achievable with a single query, since you need to specify the index position, which you don't know up front. Writing a simple script, or doing it manually if there isn't a big amount of values to delete, is the way you want to go; see the sketch below.
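A minimal sketch of that two-step approach, reusing the names from the question (<idx> is a placeholder to be filled in from the first query's output, computed as (i_position*-1)-1 per the answer above):

SELECT i_position, ingredients
FROM gemp1_product
WHERE r_object_id = '08015abd8002cd68'
ENABLE (ROW_BASED)

UPDATE gemp1_product OBJECTS REMOVE ingredients[<idx>] WHERE r_object_id = '08015abd8002cd68'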