Subscribing to all documents in a collection can take a long time and put significant load on the server, especially when there are thousands of records; sometimes, however, it cannot be avoided.
For example:
I have a dashboard where I need all the user data available for lookup.
I cannot put a limit on the publication, since then I would not be able to search the users collection properly.
Is there a package or process you can recommend that can handle a large amount of data faster and with less load on the server? Thank you.
This is not a direct answer to your original question, but here is a process that uses Meteor methods instead of publications (giving up reactivity).
For the following example, let's say the collection with a large number of records is UserPosts.
// on the server
Meteor.methods({
  getUserPosts: function (userId) {
    // Return a plain array; a method should not return a live cursor
    return UserPosts.find({ userId: userId }).fetch();
  }
});

// on the client
Template.yourTemplate.onCreated(function () {
  Session.set("current-user-posts", []);
  var template = this;
  template.autorun(function () {
    // This autorun re-runs whenever Meteor.userId() changes;
    // swap in whatever reactive data source fits your needs.
    var userId = Meteor.userId();
    Meteor.call("getUserPosts", userId, function (err, result) {
      if (err) console.log("There was an error while getting user posts.");
      Session.set("current-user-posts", err ? [] : result);
    });
  });
});

Template.yourTemplate.helpers({
  userPosts: function () {
    return Session.get("current-user-posts");
  }
});

Template.yourTemplate.onDestroyed(function () {
  Session.set("current-user-posts", null);
});
Now you can use Session.get("current-user-posts") in your template helpers and other places to get user posts.
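If the dashboard only renders a few fields per user, you can also keep each method call cheap by projecting just those fields and capping the result size. A plain-JavaScript sketch of the idea (the findUserPosts name and the in-memory array are illustrative, not from the question; in the real method the same projection and limit would go into the Mongo query via its fields and limit options):

```javascript
// In-memory stand-in for the UserPosts collection (illustrative data).
var posts = [
  { _id: "a", userId: "u1", title: "First post", body: "long body text" },
  { _id: "b", userId: "u2", title: "Other post", body: "long body text" },
  { _id: "c", userId: "u1", title: "Second post", body: "long body text" }
];

// Return at most `limit` posts for a user, keeping only the fields
// the dashboard actually renders.
function findUserPosts(posts, userId, limit) {
  return posts
    .filter(function (p) { return p.userId === userId; })
    .slice(0, limit)
    .map(function (p) { return { _id: p._id, title: p.title }; });
}
```

Sending only `_id` and `title` over the wire instead of whole documents is often the single biggest win for a method like getUserPosts.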
I have the following implementation block which isn't very readable.
impl Xbee {
    async fn new(xbee_ip: Ipv4Addr) -> Result<Self, std::io::Error> {
        match tokio::net::UdpSocket::bind((Ipv4Addr::UNSPECIFIED, 0))
            .await
            .map(|socket| async {
                match socket.connect((xbee_ip, 3054)).await {
                    Ok(_) => Ok(Xbee {
                        ip: xbee_ip,
                        socket: socket,
                        last_seen: SystemTime::now(),
                    }),
                    Err(error) => Err(error),
                }
            })
        {
            Ok(xbee) => xbee.await,
            Err(err) => Err(err),
        }
    }
}
As a start, I would like to remove the outermost match statement which is just there to await the future inside a result before putting it back into a result. It seems like this would be a good candidate for Result::{and_then, map, map_or, map_or_else} etc., however, I don't think any of these will work since the await would end up being inside an async closure which in turn, would need to be awaited. Is it possible to await the future inside a result without a match statement?
This does not answer the question as asked; however, by using the ? operator, the code becomes much more readable because the future never ends up inside the Result in the first place. Refactored code:
impl Xbee {
    async fn new(xbee_ip: Ipv4Addr) -> Result<Self, std::io::Error> {
        let socket = tokio::net::UdpSocket::bind((Ipv4Addr::UNSPECIFIED, 0)).await?;
        socket.connect((xbee_ip, 3054)).await?;
        let xbee = Xbee {
            ip: xbee_ip,
            socket,
            last_seen: SystemTime::now(),
        };
        Ok(xbee)
    }
}
I am trying to run multiple document updates within one Firestore transaction, and I am wondering whether this is an anti-pattern.
I have a document named "Group" containing an array named "members", which holds IDs from a users collection. Now I want to loop through all members and update the corresponding user documents within one transaction. Is this possible?
I have tried looping through the members with .forEach(), but the problem is that .forEach() does not work with async/await or promises, as far as I know.
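As an aside on the forEach remark: in plain JavaScript (independent of Firestore), a for...of loop inside an async function does let you await each step, whereas array.forEach(async ...) fires its callbacks without waiting for them. A minimal sketch with an illustrative update function:

```javascript
// Run an async update for each id, one after another.
async function updateAll(ids, update) {
  const results = [];
  for (const id of ids) {
    // `await` works here; inside ids.forEach(async id => ...) it would
    // only pause the inner callback, and updateAll would not wait.
    results.push(await update(id));
  }
  return results;
}
```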
The following should do the trick:
var firestore = firebase.firestore();
// .....
var groupDocRef = firestore.doc('collectionRef/docRef');

return firestore.runTransaction(function (transaction) {
  var arrayOfMemberIds;
  return transaction
    .get(groupDocRef)
    .then(function (groupDoc) {
      if (!groupDoc.exists) {
        throw 'Group document does not exist!';
      }
      arrayOfMemberIds = groupDoc.data().members;
      return transaction.update(groupDocRef, {
        lastUpdate: firebase.firestore.FieldValue.serverTimestamp()
      });
    })
    .then(function () {
      arrayOfMemberIds.forEach(function (memberId) {
        transaction = transaction.update(
          firestore.collection('users').doc(memberId),
          { foo: 'bar' }
        );
      });
      return transaction;
    });
});
This will work because the update() method returns the Transaction instance which can be used for chaining method calls.
Note also that we must update the initial groupDoc. If not, the following error will be thrown: FirebaseError: "Every document read in a transaction must also be written.". In the example above we just update a lastUpdate field with a Timestamp. It's up to you to choose the update you want to do!
You can easily test the transactional aspect of this code by setting some security rules as follows:
service cloud.firestore {
  match /databases/{database}/documents {
    match /collectionRef/{doc} {
      allow read, write: if true;
    }
    match /users/{user} {
      allow read, write: if false;
    }
  }
}
Since it is not possible to write to the users collection, the Transaction will fail and the collectionRef/docRef document will NOT be updated.
Another (even better) way to test the transactional aspect is to delete one of the user documents: since the update() method fails when applied to a document that does not exist, the entire Transaction will fail.
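The all-or-nothing behaviour these tests rely on can be sketched with a toy in-memory "transaction" (purely illustrative, not the Firestore API): every update is staged first, and the store is only touched once all of them succeed.

```javascript
// Toy all-or-nothing update: stage every change, commit only if none fails.
function runMockTransaction(store, updates) {
  const staged = {};
  for (const [docId, fields] of updates) {
    if (!(docId in store)) {
      throw new Error('Document ' + docId + ' does not exist');
    }
    staged[docId] = Object.assign({}, store[docId], fields);
  }
  // Commit: only reached when every update staged successfully.
  Object.assign(store, staged);
  return store;
}
```

If any single update throws, the commit step is never reached and the store stays exactly as it was, which is the behaviour the security-rules test above demonstrates against real Firestore.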
I have a Flutter app that basically acts as a filter app for another app, meaning I can scroll through certain posts and decide whether to delete them from this app so they don't show up in the other, main app.
My question is: since Firestore does not support deleting subcollections automatically, what happens if I just delete the post's document and ignore its remaining subcollections for things like comments? Is it possible that Firestore will later assign a new post the same random postId as the one previously deleted, so it ends up showing the deleted post's comments and other subcollection data? Firestore's documentation says that a non-existent ancestor document of an existing subcollection will not show up in queries; does that mean that no other post will ever be created with the same postId?
Basically: is there any harm in not deleting the subcollections, and if there is, what do you recommend I do about it? Delete them manually?
You can use Firebase Cloud Functions to delete subcollections when a document is deleted. In other words, you would write a function that executes every time a document (in your case, a post) is deleted; in that function you would go through the post's subcollections and delete them.
To delete a collection I use this code (I didn't write it myself):
/**
 * Delete a collection, in batches of batchSize. Note that this does
 * not recursively delete subcollections of documents in the collection.
 */
function deleteCollection (db, collectionRef, batchSize) {
  var query = collectionRef.orderBy('__name__').limit(batchSize)
  return new Promise(function (resolve, reject) {
    deleteQueryBatch(db, query, batchSize, resolve, reject)
  })
}

function deleteQueryBatch (db, query, batchSize, resolve, reject) {
  query.get()
    .then((snapshot) => {
      // When there are no documents left, we are done
      if (snapshot.size === 0) {
        return 0
      }
      // Delete documents in a batch
      var batch = db.batch()
      snapshot.docs.forEach(function (doc) {
        batch.delete(doc.ref)
      })
      return batch.commit().then(function () {
        return snapshot.size
      })
    })
    .then(function (numDeleted) {
      // Only stop once a query comes back empty; since each query is
      // limited to batchSize, numDeleted <= batchSize would always be
      // true and would stop after the first batch.
      if (numDeleted === 0) {
        resolve()
        return
      }
      // Recurse on the next process tick, to avoid
      // exploding the stack.
      process.nextTick(function () {
        deleteQueryBatch(db, query, batchSize, resolve, reject)
      })
    })
    .catch(reject)
}
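The recursion above can be exercised against a plain in-memory array to see the batching behaviour (mock names, no Firestore involved):

```javascript
// Mimic deleteQueryBatch: remove up to batchSize items per pass,
// recursing on the next tick until nothing is left.
function deleteAllMock(store, batchSize, resolve) {
  const batch = store.splice(0, batchSize); // "query + batched delete"
  if (batch.length === 0) {
    return resolve(); // nothing left: we are done
  }
  process.nextTick(function () {
    deleteAllMock(store, batchSize, resolve);
  });
}
```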
Is it possible that firestore will later assign a random postId with the same as the one previously deleted
A collision of IDs in this case is incredibly unlikely, and you can safely assume they will be completely unique; that is exactly what those IDs were designed for, so you don't have to be concerned about it.
The built-in generator used by Firestore when you call CollectionReference's add() method, or its document() method without any arguments, produces random and highly unpredictable IDs, which also prevents hitting certain hotspots in the backend infrastructure.
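As a toy illustration of why collisions are so unlikely (this is not Firestore's actual generator): auto-IDs are 20 characters drawn from a 62-character alphabet, giving 62^20, roughly 7×10^35, possible values.

```javascript
const ALPHABET =
  'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789';

// Generate a 20-character random id, Firestore-style (toy version:
// the real SDK draws from a stronger entropy source).
function autoId() {
  let id = '';
  for (let i = 0; i < 20; i++) {
    id += ALPHABET[Math.floor(Math.random() * ALPHABET.length)];
  }
  return id;
}
```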
does that mean that no other post will be created with the same postId?
Yes, no other document will be created with the same id.
Basically Is there any harm for not deleting the subcollections
There is not. If you do want to delete them, you can do it in two ways: client-side, by getting all documents within the subcollection and deleting them in smaller chunks, or by using a Cloud Function, as @jonasxd360 mentioned in his answer.
There is a simple field to input a youtube ID. I am using renaldo's youtube api (https://atmospherejs.com/renaldo/youtube-api) to retrieve the title of the youtube clip.
The client-side event successfully passes the track ID (var tid) to the 'addTrack' method; it outputs the track's title to the console on the server. I am having a really hard time returning anything at all back to the client.
If I call the 'getVideoData' method from the 'addTrack' method, it still returns 'undefined' every time. I am no expert in meteor or javascript, this is just something I am learning for the hell of it.
I understand the concept of callbacks and the synchronous nature of javascript (I think!).
Thanks.
[EDIT: The code below solves my problem, using Future]
if (Meteor.isServer) {
  YoutubeApi.authenticate({
    type: 'key',
    key: API_KEY
  });

  Meteor.methods({
    addTrack: function (tid) {
      Meteor.call("getVideoData", tid, function (err, res) {
        console.log(res);
      });
    },
    getVideoData: function (tid) {
      var future = new Future();
      YoutubeApi.videos.list({
        part: "snippet",
        id: tid,
      }, function (err, data) {
        var _data = {
          "title": data.items[0].snippet.title,
          "desc": data.items[0].snippet.description
        };
        future["return"](_data);
      });
      return future.wait();
    }
  });

  Meteor.startup(function () {
    Future = Npm.require('fibers/future');
  });
}
Meteor methods are tricky, as they use Fibers to appear synchronous to the developer. So you need to use Meteor.wrapAsync() to wrap the call to YoutubeApi. I haven't tested the following code, but it should look something like:
Meteor.methods({
  getVideoData: function (tid) {
    // Pass YoutubeApi.videos as the context so `this` is preserved
    var syncYT = Meteor.wrapAsync(YoutubeApi.videos.list, YoutubeApi.videos);
    var data = syncYT({ part: "snippet", id: tid });
    var transformed = {
      "title": data.items[0].snippet.title,
      "desc": data.items[0].snippet.description
    };
    console.log(transformed.title);
    return transformed.title;
  }
});
You'll want to read more about error handling in this case, but this should get you going. Just remember that client-side Meteor is always asynchronous and server-side is optionally asynchronous. Use wrapAsync or Futures to handle async needs in methods.
Lastly, the Meteor guide is great, use it!
I have a Collection 'workouts' as follows:
Workouts = new Mongo.Collection('workouts');

Meteor.methods({
  workoutInsert: function () {
    var user = Meteor.user();
    check(user._id, String);
    var workout = {
      completed: false,
      createdAt: new Date(),
      userId: user._id
    };
    var workoutId = Workouts.insert(workout);
    return {
      _id: workoutId
    };
  }
});
I am wondering:
1) What would a Velocity + Jasmine test look like for this method? I'm unsure where to start and would really appreciate an example!
2) Is this the best practice for defining this method and calling it client-side? Or should I create a Workout class and call this method from an instance method of that class? Or should I perhaps extend Workouts into its own class and add instance methods to it?
In Meteor there are several types of testing: client integration, client unit, server integration, and server unit.
Integration tests mirror your site and will load your Meteor methods for you (i.e. workoutInsert).
If I were testing this, I might have something such as:
// File location: app/tests/server/integration/workoutsSpec.js
Jasmine.onTest(function () {
  describe('workouts', function () {
    it("should call Workouts.insert", function () {
      // Stub Meteor.user() so it returns a user with a truthy _id
      Meteor.user = function () { return { _id: "1" }; };
      // Set up a spy to watch for calls to Workouts.insert
      spyOn(Workouts, 'insert');
      // Call the workoutInsert Meteor method
      Meteor.call('workoutInsert');
      // Verify that Workouts.insert was called
      expect(Workouts.insert).toHaveBeenCalled();
    });
  });
});
Lastly, MeteorJS gives you a lot of freedom in how you implement things, and there's no single best way that works for every scenario. That said, I'd advise against placing any code that interacts with your database in your client folder: anything there is publicly readable by your users (do they need to see low-level validation details?).
To answer your second question: the best practice is to keep your Meteor methods isolated in the server directory. Meteor uses these reserved directory names to give you control over which resources are served to the client, the server, or both. The methods don't need to live in the same file or directory as your Mongo collections, since collections can be available on both client and server. This is usually considered best practice, especially if you're using frameworks like angular-meteor, which rely on collection definitions being available on the client so that filters can be passed to them. You can secure these collections and modify their permissions using collection.allow()/deny().
So if you kept all your collections in the collections/ directory they could be defined like so:
Workouts = new Mongo.Collection('workouts');
would be the contents of collections/workouts.js
Then, in your server/ directory (at the same level as collections/), you can put all your methods, either directly in a file at that level or deeper in the tree, such as a server/methods/ directory. For example, you could place the methods in a workouts.js file there:
Meteor.methods({
  workoutInsert: function () {
    var user = Meteor.user();
    check(user._id, String);
    var workout = {
      completed: false,
      createdAt: new Date(),
      userId: user._id
    };
    var workoutId = Workouts.insert(workout);
    return {
      _id: workoutId
    };
  }
});