How is a trigger function different from a 'regular' function? And is it absolutely necessary when creating a trigger?
For example, in this case:
-- Trigger function
CREATE FUNCTION update_record_trigger_function() RETURNS trigger
LANGUAGE plpgsql
AS $update_record_trigger_function$
BEGIN
PERFORM update_record(NEW.oid); -- helper function ...
RETURN NEW;
END
$update_record_trigger_function$;
-- Trigger for updating latest clicks for posts
CREATE TRIGGER update_latest_record
AFTER INSERT OR UPDATE ON record
FOR EACH ROW
EXECUTE PROCEDURE update_record_trigger_function();
Would it not be simpler (or is it even possible) to do:
-- Trigger for updating latest clicks for posts
CREATE TRIGGER update_latest_record
AFTER INSERT OR UPDATE ON record
FOR EACH ROW
PERFORM update_record(NEW.oid); -- syntactically not right but along this idea
I wasn't able to find any documentation with examples of skipping the trigger function, or any that explains how a trigger function is special and why it is necessary for triggers.
PostgreSQL separates the "trigger function" from the trigger definition for quite a few reasons:
Trigger functions can be written as generic functions that can work on many different tables, so you can add the same trigger function as a trigger to many different tables. This eases maintenance.
Trigger functions can take parameters that allow them to adapt to specific column names or other characteristics of the table you apply them to. This makes it practical to write generic, reusable trigger functions where you otherwise could not do so, so you often don't need to repeat the code in multiple places. See EXECUTE ... USING and the format function with its %L and %I format-specifiers; a sketch follows this list.
PostgreSQL tries very hard not to care what programming language your functions are written in. They could be PL/PgSQL, Python, Perl, C, or MyWackyPluginLanguage. If it didn't separate the trigger function from the trigger definition, this would be harder to achieve.
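For illustration, here is a minimal sketch of such a generic, parameterized trigger function. The audit table, its columns, and the function name are hypothetical, and to_jsonb assumes PostgreSQL 9.5 or newer:
-- Hypothetical generic audit trigger: the audit table's name is passed
-- as a trigger argument (TG_ARGV[0]) and quoted safely with format's %I.
CREATE FUNCTION generic_audit() RETURNS trigger
LANGUAGE plpgsql
AS $generic_audit$
BEGIN
    EXECUTE format(
        'INSERT INTO %I (table_name, op, changed_row) VALUES ($1, $2, $3)',
        TG_ARGV[0])
    USING TG_TABLE_NAME, TG_OP, to_jsonb(NEW);
    RETURN NEW;
END
$generic_audit$;
-- The same function can now be attached to any number of tables:
CREATE TRIGGER audit_record
AFTER INSERT OR UPDATE ON record
FOR EACH ROW
EXECUTE PROCEDURE generic_audit('audit_log');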
The shorthand you describe is not supported in PostgreSQL, though some other database systems provide something very much like what you've shown. You essentially want to be able to define a trigger function as an inline expression. This isn't supported. There's never been a pressing need for it so nobody's implemented it.
If such a feature were added in future - and I'm not aware of any plans to do so - it would probably be done using a PL/PgSQL DO block that was contextually interpreted as a trigger, something like (not legal syntax):
CREATE TRIGGER update_latest_record
AFTER INSERT OR UPDATE ON record
FOR EACH ROW DO $$ PERFORM update_record(NEW.oid); RETURN NEW; $$;
That way the parser wouldn't need to understand PL/PgSQL constructs, we'd just have to teach it to understand FOR EACH ROW DO [string literal body of function] as an alternative to FOR EACH ROW EXECUTE PROCEDURE proc_name(args) and then invoke the PL/PgSQL subsystem to process the trigger literal.
In practice Pg would probably just generate an ordinary trigger function and add a reference to it from the trigger so it's dropped if the trigger is dropped, making this just a convenience macro, much like SERIAL is a convenience macro for an INTEGER field with a DEFAULT nextval(...) and a generated SEQUENCE owned by the table with the SERIAL field. In particular, pg_dump would probably output it as if you'd defined a separate trigger function and trigger, rather than used the shorthand.
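For comparison, this is roughly what a SERIAL column expands to:
CREATE SEQUENCE t_id_seq;
CREATE TABLE t (
    id integer NOT NULL DEFAULT nextval('t_id_seq')
);
ALTER SEQUENCE t_id_seq OWNED BY t.id;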
Honestly, I doubt anything like this will ever be added, but if you can provide sufficiently convincing use cases you might, if you're really lucky, interest someone in it. Most likely you'll only get anywhere with such a proposal ("allow DO blocks to be used as trigger procedures") if you're willing to back it with funding for the time required to implement the feature.
When writing event-based Cloud Functions for Firebase Firestore, it's common to update fields in the affected document. For example:
When a document in the users collection is updated, a function will trigger. Say we want to track the state of the user's info with a completeInfo: boolean property; the function will have to perform another update, which makes the trigger fire again. If we don't use a flag like needsUpdate: boolean to decide whether to execute the function's logic, we end up in an infinite loop.
Is there any other way to approach this behavior? Or is the situation a consequence of how the database is designed? How could we avoid ending up in such a scenario?
I have a few common approaches to Cloud Functions that transform the data:
Write the transformed data to a different document than the one that triggers the Cloud Function. This is by far the easiest approach, since there is no additional code needed - and thus I can't make any mistakes in it. It also means there is no additional trigger, so you're not paying for that extra invocation.
Use granular triggers to ensure my Cloud Function only gets called when it needs to actually do some work. For example, many of my functions only need to run when the document gets created, so by using an onCreate trigger I ensure my code only gets run once, even if it then ends up updating the newly created document.
Write the transformed data into the existing document. In that case I make sure to have the checks for whether the transformation is needed in place before I write the actual code for the transformation. I prefer to not add flag fields, but use the existing data for this check.
A recent example is where I update an amount in a document, which then needs to be fanned out to all users:
exports.fanoutAmount = functions.firestore.document('users/{uid}').onWrite((change, context) => {
  if (!change.after.exists) return null; // the document was deleted, nothing to fan out
  let old_amount = change.before && change.before.data() && change.before.data().amount ? change.before.data().amount : 0;
  let new_amount = change.after.data().amount;
  if (old_amount !== new_amount) {
    // TODO: fan out to all documents in the collection
  }
  return null;
});
You need to take care to avoid writing a function that triggers itself infinitely. This is not something that Cloud Functions can do for you. Typically you do this by checking within your function if the work was previously done for the document that was modified in a previous invocation. There are several ways to do this, and you will have to implement something that meets your specific use case.
I would take this approach from an execution-time perspective: it means the function for each document will run twice. Each time the function is triggered, it checks a lastUpdate timestamp field on the document and only performs the update if that timestamp is older than some threshold, e.g. 10 seconds.
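A minimal sketch of that timestamp guard, reusing the users collection from the question; the field names, the 10-second threshold, and the isComplete() helper are illustrative assumptions, not a definitive implementation:
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

// Hypothetical completeness check: all required fields present.
const isComplete = (data) => Boolean(data.name && data.email);

exports.updateUserInfo = functions.firestore.document('users/{uid}').onWrite((change, context) => {
  if (!change.after.exists) return null; // document was deleted
  const data = change.after.data();
  // Skip the write if we touched this document ourselves in the last 10 seconds.
  if (data.lastUpdate && Date.now() - data.lastUpdate.toMillis() < 10000) {
    return null;
  }
  return change.after.ref.update({
    completeInfo: isComplete(data),
    lastUpdate: admin.firestore.FieldValue.serverTimestamp()
  });
});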
I noticed that (insert/delete) XQueries executed with the BaseX client always return an empty string. I find this very confusing or unintuitive.
Is there a way to find out if the query was "successful" without querying the database again (and using potentially buggy "transitive" logic like "if I deleted a node, there must be 'oldNodeCount-1' nodes in the XML")?
XQuery Update statements do not return anything -- that's how they are defined. But you're not the only one who does not like those restrictions, and BaseX added two ways around this limitation:
Returning Results
By default, it is not possible to mix different types of expressions in a query result. The outermost expression of a query must either be a collection of updating or non-updating expressions. But there are two ways out:
The BaseX-specific update:output() function bridges this gap: it caches the results of its arguments at runtime and returns them after all updates have been processed. The following example performs an update and returns a success message:
update:output("Update successful."), insert node <c/> into doc('factbook')/mondial
With the MIXUPDATES option, all updating constraints will be turned off. Returned nodes will be copied before they are modified by updating expressions. An error is raised if items are returned within a transform expression.
If you want to modify nodes in main memory, you can use the transform expression.
The transform expression will not help you, as you seem to modify the data on disk. Enabling MIXUPDATES allows you to both update the document and return something at the same time, for example running something like
let $node := <c/>
return ($node, insert node $node into doc('factbook')/mondial)
MIXUPDATES allows you to return something which can be further processed. Results are copied before being returned; if you run multiple update operations and do not get the expected results, make sure you have understood the concept of the pending update list.
The db:output() function intentionally breaks its interface contract: it is defined to be an updating function (not having any output), but at the same time it prints some information to the query info. You cannot further process these results, but the output can help you debug some issues.
Pending Update List
Either way, you will not be able to have an immediate result from the update; you have to add something on your own. And be aware that updates are not visible until the pending update list is applied, i.e. after the query has finished.
Compatibility
Obviously, these options are BaseX-specific. If you strongly require compatible and standard XQuery, you cannot use these expressions.
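Putting this together for the original question, a hedged sketch of "did my delete do anything?" could count the target nodes up front and report that via the BaseX-specific update:output(); the document path and element name are illustrative:
let $victims := doc('factbook')/mondial/c
return (
  update:output("Deleting " || count($victims) || " node(s)."),
  delete node $victims
)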
I'm building a database in Scheme using the WiredTiger key/value store.
To query a given table one needs to have a cursor over that table. The library recommends re-using cursors. The general behavior can be described by the following pseudo-code:
with db.cursor() as cursor:
    cursor.get(key)
    ...
    do_something(db)
    ...
For the extent of the with statement, the cursor can only be used in the current context. If do_something(db) needs a cursor, it must create or retrieve another cursor, even if it's to query the same table. Otherwise the cursor loses its position, which the continuation of do_something(db) doesn't expect.
You can work around this by always resetting the cursor, but that's wasteful. Instead, it's preferable to keep a set of cursors ready to be used: a request via db.cursor() removes a cursor from the available cursors and returns it, and once the "context" operation is finished, the cursor is put back.
The way I solve this in Python is by using a list. db.cursor() looks like:
from contextlib import contextmanager

@contextmanager
def cursor(self):
    cursor = self.cursors.pop()    # take a cursor from the pool
    yield cursor                   # lend it to the with-block
    self.cursors.append(cursor)    # put it back afterwards
Which means: retrieve a cursor, lend it to the current context, and once the context is finished, put it back on the list of available cursors.
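With the contextmanager decorator added above (an assumption; the original snippet presumably relied on it elsewhere), the method works directly in a with statement, reusing db and key from the pseudo-code:
with db.cursor() as cursor:
    value = cursor.get(key)   # cursor is borrowed from the pool here
# by now the cursor has been returned to db.cursors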
How can I avoid the mutation and use a more functional approach?
Maybe you want parameters?
Look up the exact construct used by your Scheme implementation.
Some implementations use:
http://srfi.schemers.org/srfi-39/srfi-39.html
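For example, a minimal sketch with SRFI-39 parameters; make-cursor and cursor-get are hypothetical stand-ins for the wiredtiger bindings:
;; current-cursor holds the cursor for the dynamic extent of the body;
;; parameterize restores the previous value when the body exits.
(define current-cursor (make-parameter #f))

(define (call-with-cursor db thunk)
  (parameterize ((current-cursor (make-cursor db)))
    (thunk)))

(call-with-cursor db
  (lambda ()
    (cursor-get (current-cursor) key)
    ;; do-something can call call-with-cursor again without
    ;; disturbing this context's cursor
    (do-something db)))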
I want to take a list of item names from a collection as a simple array to use for things like autocompleting user input and checking for duplicates. I would like this list to be reactive so that changes in the data will be reflected in the array. I have tried the following based on the Meteor documentation:
setReactiveArray = (objName, Collection, field) ->
  update = ->
    context = new Meteor.deps.Context()
    context.on_invalidate update
    context.run ->
      list = Collection.find({}, {field: 1}).fetch()
      myapp[objName] = _(list).pluck field
  update()

Meteor.startup ->
  if not app.items?
    setReactiveArray('items', Items, 'name')

# set autocomplete using the array
Template.myForm.set_typeahead = ->
  Meteor.defer ->
    $('[name="item"]').typeahead {source: app.items}
This code seems to work, but it kills my app's load time (takes 5-10 seconds to load on dev/localhost vs. ~1 second without this code). Am I doing something wrong? Is there a better way to accomplish this?
You should be able to use Items.find({}, {fields: {name: 1}}).fetch(), which will return an array of items and is reactive, so it will re-run its enclosing function whenever the query results change, as long as it's called in a reactive context.
For the Template.myForm.set_typeahead helper, you might want to call that query inside the helper itself, store the result in a local variable, and then call Meteor.defer with a function that references that variable. Otherwise I'm not sure that the query will be inside a reactive context when it's called.
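A sketch along those lines, reusing the names from the question (the selector and field names are assumptions taken from the question's code):
Template.myForm.set_typeahead = ->
  # runs in a reactive context, so it re-runs whenever Items changes
  names = _(Items.find({}, {fields: {name: 1}}).fetch()).pluck "name"
  Meteor.defer ->
    $('[name="item"]').typeahead {source: names}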
Edit: I have updated the code below, both because it was fragile and to put it in context so it's easier to test. I have also added a caution - in most cases, you will want to use @zorlak's or @englandpost's methods (see below).
First of all, kudos to @zorlak for digging up my old question that nobody answered. I have since solved this with a couple of insights gleaned from @David Wihl and will post my own solution. I will hold off on selecting the correct answer until others have a chance to weigh in.
@zorlak's answer solves the autocomplete issue for a single field, but as stated in the question, I was looking for an array that would update reactively; the autocomplete was just one example of what it would be used for. The advantage of having this array is that it can be used anywhere (not just in template helpers) and that it can be used multiple times in the code without having to re-execute the query (and the _.pluck() that reduces the query result to an array). In my case, this array ends up in multiple autocomplete fields as well as in validation and other places. It's possible that the advantages I'm putting forth are not significant in most Meteor apps (please leave comments), but this seems like an advantage to me.
To make the array reactive, simply build it inside a Meteor.autorun() callback - it will re-execute any time the target collection changes (but only then, avoiding repetitive queries). This was the key insight I was looking for. Also, using the Template.rendered() callback is cleaner and less of a hack than the set_typeahead template helper I used in the question. The code below uses underscore.js's _.pluck() to extract the array from the collection and uses Twitter bootstrap's $.typeahead() to create the autocomplete.
Updated code: I have edited the code so you can try this with a stock meteor-created test environment. Your html will need a line <input id="typeahead" /> in the 'hello' template. @Items has the @ sign to make Items available as a global on the console (Meteor 0.6.0 added file-level variable scoping). That way you can enter new items in the console, such as Items.insert({name: "joe"}), but the @ is not necessary for the code to work. The other necessary change for standalone use is that the typeahead function now sets the source to a function (->) so that it will query items when activated instead of being set at render time, which allows it to take advantage of changes to items.
@Items = new Meteor.Collection("items")
items = {}

if Meteor.isClient
  Meteor.startup ->
    Meteor.autorun ->
      items = _(Items.find().fetch()).pluck "name"
      console.log items # first result will be empty - see caution below

  Template.hello.rendered = ->
    $('#typeahead').typeahead {source: -> _(Items.find().fetch()).pluck "name"}
Caution! The array we created is not itself a reactive data source. The reason the typeahead source: needed to be set to a function (->) instead of directly to items is that when Meteor first starts, the code runs before Minimongo has gotten its data from the server, so items is set to an empty array. Minimongo then receives its data, and items is updated. You can see this process if you run the above code with the console open: console.log items will log twice if you have any data stored.
Template.x.rendered() calls don't set a reactivity context and so won't retrigger due to changes in reactive elements (to check this, pause your code in the debugger and examine Deps.currentComputation -- if it's null, you are not in a reactive context and changes to reactive elements will be ignored). But you might be surprised to learn that your templates and helpers will also not react to items changing -- a template using #each to iterate over items will render empty and never rerender. You could make it act as a reactive source (the simplest way being to store the result with Session.set(), or you can do it yourself), but unless you are doing a very expensive calculation that should be run as seldom as possible, you are better off using @zorlak's or @englandpost's methods. It may seem expensive to have your app query the database repeatedly, but Minimongo caches the data locally, avoiding the network, so it will be quite fast. Thus in most situations, it's better just to use
Template.hello.rendered = ->
  $('#typeahead').typeahead {source: -> _(Items.find().fetch()).pluck "name"}
unless you find that your app is really bogging down.
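If you really do need the array as a reactive source in its own right, a minimal Session-based sketch (the session key is illustrative) would be:
Meteor.autorun ->
  Session.set "itemNames", _(Items.find().fetch()).pluck "name"

# Session.get is a reactive data source, so helpers using it will rerender:
Template.hello.names = -> Session.get "itemNames"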
Here is my quick solution for Bootstrap typeahead.
On client side:
Template.items.rendered = ->
  $("input#item").typeahead
    source: (query, process) ->
      subscription = Meteor.subscribe "autocompleteItems", query, ->
        process _(Items.find().fetch()).pluck("name")
        subscription.stop() # here may be a bit different logic,
        # such as keeping all opened subscriptions until the autocomplete
        # has successfully completed and so on
    items: 5
On server side:
Meteor.publish "autocompleteItems", (query) ->
  Items.find { name: new RegExp(query, "i") },
    fields: { name: 1 }
    limit: 5
I actually ended up approaching the autocompletion problem completely differently, using client-side code instead of querying servers. I think this is superior because Meteor's data model allows for fast multi-rule searching with custom rendered lists.
https://github.com/mizzao/meteor-autocomplete
Autocompleting users with @, where online users are shown in green:
In the same way, autocompleting something else with metadata and Bootstrap icons:
Please fork, pull, and improve!