Update entity that is not part of the client collection - meteor

I am trying to update related entities when deleting an entity. The problem is that the related entities are not part of the collection on the client. Is there a way to do the update only on the server?
Concrete example: when calling 'removeSlack', I want to update all the copies and remove the removed slackId from their copies arrays. But because the copies are not part of the collection on the client, 'Slack.findOne(copyId)' doesn't find anything.
Meteor.methods(
  removeSlack: (slackId) ->
    slack = Slack.findOne(slackId)
    for copyId in _.pluck(slack.copies, 'slackId')
      copy = Slack.findOne(copyId)
      if copy
        Slack.update(copyId, { $set: {copies: _.without(copy.copies, {slackId: slackId, userId: Meteor.userId()})} })
    Slack.remove(slackId)
)

You can wrap any code you only want to run on the server in an if Meteor.isServer block: http://docs.meteor.com/#meteor_isserver
Alternatively, you can put code files you only want to run on the server in the /server folder of your project.
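For example, a minimal sketch of the first option, reusing the method from the question unchanged (on the server the method sees the full collection, so Slack.findOne(copyId) will find the copies there):
if Meteor.isServer
  Meteor.methods(
    removeSlack: (slackId) ->
      slack = Slack.findOne(slackId)
      for copyId in _.pluck(slack.copies, 'slackId')
        copy = Slack.findOne(copyId)
        if copy
          Slack.update(copyId, { $set: {copies: _.without(copy.copies, {slackId: slackId, userId: Meteor.userId()})} })
      Slack.remove(slackId)
  )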

Related

Marklogic - Delete Versioned Collections

I have around 43 million documents. The latest version of each document is in the LIVE collection, and the same documents also have versioned copies in a version collection named /collection/versionNumber. I want to delete the versioned collections, which hold around 34 million documents. What is the best approach to delete them all in one go?
You could try using xdmp:collection-delete() to delete all documents in the collection in a single transaction.
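For example, using the collection URI from the question:
xdmp:collection-delete("/collection/versionNumber")
This removes every document in that collection in one transaction, provided the transaction can complete within your server's limits.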
If that doesn't work and it isn't able to delete in one shot, then I would look to utilize batch tools. For instance, a CoRB job.
An example job options file with properties needed, except for the XCC-CONNECTION-URI:
# Inline module to select all URIs from the collection
URIS-MODULE=INLINE-XQUERY|let $uris := cts:uris("",(),cts:collection-query("/collection/versionNumber")) return (count($uris), $uris)
# Inline module to delete the docs
PROCESS-MODULE=INLINE-XQUERY|declare variable $URI as xs:string external; xdmp:document-delete($URI)
THREAD-COUNT=10
Your application may be using the DLS library for versioning. If so, and if you will never need to look up any old version again, only then should you delete the versioned documents. You can use the dls:document-unmanage API in that case.
Also, explore dls:purge and dls:document-purge before proceeding. I am not very sure about these two.
In any case, even if it's not DLS, processing them in one go (a single transaction) is not recommended. Either process them in batches or spread them across the task server through spawn.
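A rough sketch of the spawn approach (the collection URI comes from the question; the batch size is an arbitrary assumption, and gathering all URIs up front may itself be too heavy for 34 million documents, in which case a CoRB job as above is the better fit):
xquery version "1.0-ml";
(: Sketch only: spawn batched deletes onto the task server. :)
let $uris := cts:uris("", (), cts:collection-query("/collection/versionNumber"))
let $batch-size := 1000
for $start in (1 to fn:count($uris))[. mod $batch-size eq 1]
return
  xdmp:spawn-function(
    function() {
      for $uri in fn:subsequence($uris, $start, $batch-size)
      return xdmp:document-delete($uri)
    },
    <options xmlns="xdmp:eval"><update>true</update></options>
  )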

Migrating from cfs:gridfs to cfs:s3

In a Meteor project I maintain, there has been cause to look at moving away from GridFS as the backend for CollectionFS and moving towards S3.
One thing I would be keen to do is migrate images / files currently stored using the gridfs collections.
Has anyone attempted this before? I can't find any guides or even suggestions.
My thinking right now is along the lines of:
Create a new collection backed by s3
Iterate over old collection pushing the data into s3
Update code to point to new collection
Does this seem sound?
I just did this!
You're basically right; here's how I did it. Migration is a pretty easy process. I've gone from GridFS to S3.
1) By adding a new FS.Store.S3("store_name", {}), CollectionFS automatically clones the metadata of the existing files in the old store into your new store. However, all file sizes are zero in this new store.
Images = new FS.Collection("images", {
  stores: [
    new FS.Store.S3("s3images", {}),
    new FS.Store.GridFS("images", {})
  ]
});
2) While you have both stores in place, you need to manually migrate the content by piping each file from the old store to the new one, as referenced here: https://github.com/CollectionFS/Meteor-CollectionFS/wiki/How-to:-Convert-a-file-already-stored
if (Meteor.isServer) {
  Images.find().forEach(function (fileObj) {
    var readStream = fileObj.createReadStream('images');
    var writeStream = fileObj.createWriteStream('s3images');
    readStream.pipe(writeStream);
  });
}
Hopefully after this you will see that your new store's file sizes match the old ones!
3) Optionally, remove the old store. If you keep both, inserted files are added to both, with priority given to the first store in the array.
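Once the migration is verified, the collection definition can be reduced to the S3 store alone (a sketch; keep the GridFS store in the array until you have confirmed the copied file sizes):
Images = new FS.Collection("images", {
  stores: [
    new FS.Store.S3("s3images", {})
  ]
});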
Reference: https://github.com/CollectionFS/Meteor-CollectionFS/issues/747

Can the Path assigned a SQLite DB be an arbitrary value?

In this blog post, some prerequisite code for getting started using SQLite in Windows Store Apps is given, for adding to the OnLaunched method of App.xaml.cs:
// Get a reference to the SQLite database
this.DBPath = Path.Combine(
Windows.Storage.ApplicationData.Current.LocalFolder.Path, "customers.sqlite");
My question is: Can I use any arbitrary value to replace the "customers.sqlite" part, or does it have to match something else in my code, such as the name of my table definition class (in my case "PhotraxCoreData.cs" which, according to Mr. Green's suggestion, I added below a newly-created "Models" folder)?
My understanding is that, once I've got those classes defined (I do), and the code above in App.xaml.cs along with this (adapted for my SQLite classes):
using (var db = new SQLite.SQLiteConnection(this.DBPath))
{
    // Create the tables if they don't exist
    db.CreateTable<PhotraxBaseData>();
    db.CreateTable<PhotraxNames>();
    db.CreateTable<PhotraxQueries>();
}
...SQLite tables based on those classes will be created in a database file named "customers.sqlite" (provided I don't change it).
So, can I use:
this.DBPath = Path.Combine(
Windows.Storage.ApplicationData.Current.LocalFolder.Path, "platypus.sqlite");
...or must it be something like:
this.DBPath = Path.Combine(
Windows.Storage.ApplicationData.Current.LocalFolder.Path, "PhotraxCoreData.sqlite");
That database name is just a file name.
The directory must be accessible by your app, but the file name can be anything.
As CL says, the file name can be anything the app has direct access to. Windows Store apps have limited access to the file system, so the SQLite database must be in either the app's install location (read-only) or its app data folder (read-write). A common pattern is to ship a seed database in the app package and then copy it from the install location to app data on first use so it can be written to.
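A sketch of that copy-on-first-use pattern (the seed file name "seed.sqlite" is an assumption; it would be whatever database file you ship in your package):
// Call this before opening the SQLiteConnection, e.g. from OnLaunched.
private async System.Threading.Tasks.Task EnsureDatabaseAsync()
{
    var install = Windows.ApplicationModel.Package.Current.InstalledLocation;
    var local = Windows.Storage.ApplicationData.Current.LocalFolder;

    Windows.Storage.StorageFile existing = null;
    try
    {
        existing = await local.GetFileAsync("customers.sqlite");
    }
    catch (System.IO.FileNotFoundException)
    {
        // Not copied yet; fall through and copy the seed database.
    }

    if (existing == null)
    {
        var seed = await install.GetFileAsync("seed.sqlite");
        await seed.CopyAsync(local, "customers.sqlite",
            Windows.Storage.NameCollisionOption.FailIfExists);
    }
}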

Drupal 7 deleting node does not delete all associated files

One file gets uploaded when the node is created via standard Drupal.
Later, 2 files are added to the node via:
file_save(stdClass)
file_usage_add(stdClass, 'module', 'node', $node_id)
At the end, I end up with 3 entries in file_managed and file_usage.
Problem: when I delete the node via standard Drupal, the file that was added during the initial node creation gets removed, but not the 2 that were added later. These files remain in both tables, and physically on the disk.
Is there some flag that is being set to keep the files even if the node is deleted? If so, where is this flag, and how do I set it correctly (to be removed along with the node)?
The answer is in the file_delete() function, see this comment:
// If any module still has a usage entry in the file_usage table, the file
// will not be deleted.
As your module has declared an interest in the file by using file_usage_add() it will not be deleted unless your module explicitly says it's OK to do so.
You can either remove the call to file_usage_add() or implement hook_file_delete() and use file_usage_delete() to ensure the file can be deleted:
function mymodule_file_delete($file) {
  file_usage_delete($file, 'mymodule');
}
You can force deletion of the file:
file_delete($old_file, TRUE);
But make sure that this file is not used in other nodes using:
file_usage_list($file);
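A sketch combining the two (assuming $file is the file object, e.g. loaded with file_load()):
<?php
// Check what still uses the file; file_usage_list() returns an array keyed
// by module name (empty if nothing has registered a usage).
$usage = file_usage_list($file);

if (empty($usage)) {
  // Nothing else uses it, so a normal delete is enough.
  file_delete($file);
}
else {
  // Force the delete despite remaining usage entries; only do this if you
  // are sure the other usages are stale.
  file_delete($file, TRUE);
}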

SQL Server Deploy Script

I use SQL Server 2005 for an ASP.NET project. I want to run a SQL file that contains all DB changes from the last release to easily bring a DB up to the latest version.
I basically just have a bunch of alter table, create table, create index, alter view, call stored proc, etc. statements. But I would like to wrap it in a transaction so that if any part of it fails, none of the changes go through. Otherwise it could make for some really messy debugging to figure out where it stopped.
Also, if you know of a better way to manage DB deployment let me know!
I do something similar with a PowerShell script using SMO.
Pseudocode would be:
$SDB = SourceDBObject
$TDB = TargetDBObject
ForEach $table in $SDB.Tables
{
Add an entry to a hash table with the name
and some attributes (rowcount, columns, datasize)
}
# Same thing for $TDB
# Compare the two arrays, and make a list of all the ones that exist in the source but not in the target, or that are different
# Same thing for Procs and Views
# Pass this list to a SMO.Scripter as an UrnCollection object, and it will script them out in dependency order (it's an option), with drops
# Wrap the script in a transaction and execute it on target server
# Use SQLBulkCopy class to transfer data server-to-server
What version of Visual Studio do you use? In Visual Studio 2010, and from what I can remember Visual Studio 2008 as well, the "Data" menu has two options: "Schema Compare" and "Data Compare". That should move you in the right direction.
You should execute everything within a transaction, for example:
BEGIN TRANSACTION #TranName;
USE AdventureWorks2008R2;
DELETE FROM AdventureWorks2008R2.HumanResources.JobCandidate
WHERE JobCandidateID = 13;
COMMIT TRANSACTION #TranName;
Note that some DDL statements have to be the first statement in a batch (batches are separate from transactions). GO is the default batch separator in SSMS and SQLCMD.
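A sketch of such a wrapper for SQL Server 2005 and later (the ALTER/CREATE statements are placeholders for your own change script):
-- Any run-time error aborts and rolls back the whole transaction.
SET XACT_ABORT ON;
BEGIN TRY
    BEGIN TRANSACTION;

    -- Your schema changes go here, for example:
    -- ALTER TABLE dbo.Customers ADD LastLogin datetime NULL;
    -- CREATE INDEX IX_Customers_LastLogin ON dbo.Customers (LastLogin);

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
    -- Re-raise so the caller (SSMS, sqlcmd, build script) sees the failure.
    DECLARE @msg nvarchar(2048);
    SET @msg = ERROR_MESSAGE();
    RAISERROR (@msg, 16, 1);
END CATCH;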
