I'm implementing infinite scrolling on two separate pages (by setting a Session-based limit for the publications) - let's call them Page I and Page II. Each page has its own subscription, both published from the same collection but with different subsets of the data. Suppose Page I receives documents in the following order:
A, B, C, D, E, F....
and Page II receives:
E, B, G, H, K...
Some of the items appear on both pages; in this example those are E, B, and G. Now the problem is that once I visit Page II and return to Page I, the order of the items on Page I has changed to:
E, B, G, A, C, D...
As you can see, all the common items (E, B, G) now show up first in the list, in the order shown above, even though the data from the publication was already sorted on the server. I've tried fixing it by sorting again on the client, in a helper, with .find({}, {sort: { createdAt: -1 } }, {limit: limit}), and that seems to fix the ordering, but for some reason it also breaks the infinite scroll.
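For reference, here is roughly what that helper attempt looks like (the collection and Session key names are simplified placeholders; note that minimongo's find takes sort and limit together in a single options object):

Template.pageOne.helpers({
  items: function () {
    // Re-sort on the client; sort and limit belong in one options object.
    return Publications.find(
      {},
      { sort: { createdAt: -1 }, limit: Session.get('pageOneLimit') }
    );
  }
});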
I'm trying to understand what causes this, and I suspect it's because the common documents were cached in the client's local minimongo by the previous page. I also tried stopping the subscription by assigning a handle to it and calling .stop() on it, but that didn't clear the data.
I'm new to Meteor. Could someone advise an effective way to clear all local client data when the route changes?
I use console.groupCollapsed() to hide functions I don't generally need to review, but may occasionally want to dig into. One downside of this is that if I use console.warn or console.error inside that collapsed group, I may not notice it or it may be very hard to find. So when I encounter an error, I would like to force the collapsed group open to make it easy to spot the warning/error.
Is there any way to use JS to force the current console group (or simply all groups) open?
Some way to jump directly to warnings/errors in Chrome debugger? Filtering just to warnings/errors does not work, as they remain hidden inside collapsed groups.
Or perhaps some way to force Chrome debugger to open all groups at once? <alt/option>-clicking an object shows all levels inside it, but there does not appear to be a similar command to open all groups in the console. This would be a simple and probably ideal solution.
There is no way to do this currently, nor am I aware of any plans to introduce such functionality, mainly because I don't think enough developers use console grouping heavily enough to create demand for it.
You can achieve what you're trying to do, but you'll need to write your own logging library. The first thing to do is override the console API. Here is an example of what I do:
const consoleInterceptorKeysStack: string[][] = [];

export function getCurrentlyInterceptedConsoleKeys () { return lastElement(consoleInterceptorKeysStack); }

export function interceptConsole (keys: string[] = ['trace', 'debug', 'log', 'info', 'warn', 'error']) {
  consoleInterceptorKeysStack.push(keys);
  const backup: any = {};
  for (let i = 0; i < keys.length; ++i) {
    const key = keys[i];
    const _log = console[key];
    backup[key] = _log;
    console[key] = (...args: any[]) => {
      // getCurrentLogFrame, lastElement and isUndefined are helpers from the
      // surrounding framework; when no frame is active, fall back to plain logging.
      const frame = getCurrentLogFrame();
      if (isUndefined(frame)) return _log(...args);
      // Record the call on the current execution frame so it can be grouped later.
      frame.children.push({ type: 'console', key, args });
      frame.hasLogs = true;
      frame.expand = true;
      _log(...args);
    };
  }
  // The caller gets back a function that restores the original console methods.
  return function restoreConsole () {
    consoleInterceptorKeysStack.pop();
    for (const key in backup) {
      console[key] = backup[key];
    }
  };
}
You'll notice a reference to a function getCurrentLogFrame(). Your logging framework will require the use of a global array that represents an execution stack. When you make a call, push details of the call onto the stack. When you leave the call, pop the stack. As you can see, when logging to the console, I'm not immediately writing the logs to the console. Instead, I'm storing them in the stack I'm maintaining. Elsewhere in the framework, when I enter and leave calls, I'm augmenting the existing stack frames with references to stack frames for child calls that were made before I pop the child frame from the stack.
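To make that concrete, here is a minimal sketch of what such a frame stack could look like. The names (LogFrame, pushLogFrame, popLogFrame) are illustrative only, not the framework's actual API:

interface LogFrame {
  name: string;       // e.g. a description of the call being made
  children: any[];    // child frames plus captured console calls
  hasLogs: boolean;   // true once something inside logged to the console
  expand: boolean;    // true when the group should be rendered open
}

const logFrameStack: LogFrame[] = [];

export function getCurrentLogFrame (): LogFrame | undefined {
  return logFrameStack[logFrameStack.length - 1];
}

export function pushLogFrame (name: string): LogFrame {
  const frame: LogFrame = { name, children: [], hasLogs: false, expand: false };
  const parent = getCurrentLogFrame();
  if (parent) parent.children.push(frame);  // link into the caller's frame
  logFrameStack.push(frame);
  return frame;
}

export function popLogFrame (): LogFrame | undefined {
  return logFrameStack.pop();
}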
By the time the entire execution stack finishes, I've captured a complete log of everything that was called, who called it, what the return value was (if any), and so on. At that point I can pass the root stack frame to a function that prints the entire stack out to the console, now with the full benefit of hindsight on every call that was made, which lets me decide what the logs should actually look like. If deeper in the stack there was (for example) a console.debug statement or an error thrown, I can choose to use console.group instead of console.groupCollapsed. If there was a return value, I could print that as a tail argument of the console.group statement. The possibilities are fairly extensive.
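The printing step itself can be a simple recursive walk over the captured tree. Again, this is just a sketch built on the hypothetical LogFrame shape above:

function printFrame (frame: LogFrame): void {
  // Open the group eagerly if something inside asked for attention.
  const open = frame.expand ? console.group : console.groupCollapsed;
  open(frame.name);
  for (const child of frame.children) {
    if (child.type === 'console') {
      (console as any)[child.key](...child.args);  // replay the captured console call
    } else {
      printFrame(child);                           // a nested call frame
    }
  }
  console.groupEnd();
}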
Note that you will have to architect your application in a way that allows for logging to be deeply integrated into your code, otherwise your code will get very messy. I use a visitor pattern for this. I have a suite of standard interface types that do almost everything of significance in my system's architecture. Each interface method includes a visitor object, which has properties and methods for every interface type in use in my system. Rather than calling interface methods directly, I use the visitor to do it. I have a standard visitor implementation that simply forwards calls to interface methods directly (i.e. the visitor doesn't do anything much on its own), but I then have a subclassed visitor type that references my logging framework internally. For every call, it tells the logging framework that we're entering a new execution frame. It then calls the default visitor internally to make the actual call, and when the call returns, the visitor tells the logging framework to exit the current call (i.e. to pop the stack and finalize any references to child calls, etc.). By having different visitor types, it means you can use your slow, expensive, logging visitor in development, and your fast, forwarding-only, default visitor in production.
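In rough terms (the interface and class names here are invented for illustration, reusing the pushLogFrame/popLogFrame sketch from above), the two visitor flavours look something like this:

interface Entity { id: string; }

interface Repository { load (id: string): Entity; }

interface RepositoryVisitor {
  load (repo: Repository, id: string): Entity;
}

// Production visitor: simply forwards the call.
class ForwardingVisitor implements RepositoryVisitor {
  load (repo: Repository, id: string) { return repo.load(id); }
}

// Development visitor: wraps every call in a log frame.
class LoggingVisitor implements RepositoryVisitor {
  constructor (private inner: RepositoryVisitor) {}
  load (repo: Repository, id: string) {
    pushLogFrame(`Repository.load(${id})`);
    try {
      return this.inner.load(repo, id);
    } finally {
      popLogFrame();  // finalize the frame even if the call throws
    }
  }
}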
I have one query which is a bit complex for me to explain, but do ask me if you need any further details.
Angular version: 8.
Subject: an issue when switching between components circularly,
like from component A -> B -> A -> B.
Brief details of my problem
From component 'A' I select a couple of employees, then click the apply-filter button so that I can switch to component B.
On every employee checkbox click I emit an event through a service (roughly sketched below), so that from component 'B' I can fetch those selected employees and do further logic (like an API call).
Switching from A to B works as expected, because based on the selected employees I hit the API to get their details.
But to reset the selected employees I redirect back to component A so that I can add more employees or remove some...
Now, the problem I am facing is this:
The logic to hit the API and get the employee details lives in component B.
After one round trip back from component 'B', every selection triggers an API hit to get updated employee details.
I know it's happening because of the EventEmitter, but what is the best possible solution so that the API call is not made on every event emit from 'A' until I am actually in component B?
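For reference, the selection is shared between the components through a service roughly like the one below (names are simplified placeholders; in the real code it is an EventEmitter, but the shape is the same):

import { Injectable } from '@angular/core';
import { Subject } from 'rxjs';

@Injectable({ providedIn: 'root' })
export class EmployeeFilterService {
  private selectedEmployeesSubject = new Subject<number[]>();
  // Component B listens to this stream to know which employees were picked in A.
  selectedEmployees$ = this.selectedEmployeesSubject.asObservable();

  // Called from component A on every checkbox click.
  emitSelection(employeeIds: number[]) {
    this.selectedEmployeesSubject.next(employeeIds);
  }
}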
I have fixed my problem. Every time you redirect to another route, Angular automatically calls ngOnDestroy, where you can unsubscribe from all the services you have subscribed to.
Syntax to unsubscribe:
Declare a property:
import { Subscription } from 'rxjs';

serviceSubscriptions: Subscription;

ngOnDestroy() {
  this.serviceSubscriptions.unsubscribe();
}
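For completeness, the subscription has to be assigned somewhere for this to work. A rough sketch of component B, using the placeholder service from the question:

import { Component, OnDestroy, OnInit } from '@angular/core';
import { Subscription } from 'rxjs';
import { EmployeeFilterService } from './employee-filter.service';  // placeholder name

@Component({ selector: 'app-b', templateUrl: './b.component.html' })
export class BComponent implements OnInit, OnDestroy {
  serviceSubscriptions: Subscription;

  constructor(private employeeFilterService: EmployeeFilterService) {}

  ngOnInit() {
    // Subscribe only while B is alive and keep the handle so it can be torn down.
    this.serviceSubscriptions = this.employeeFilterService.selectedEmployees$
      .subscribe(employeeIds => this.fetchDetails(employeeIds));
  }

  ngOnDestroy() {
    this.serviceSubscriptions.unsubscribe();
  }

  private fetchDetails(employeeIds: number[]) {
    // placeholder: call the employee-details API here
  }
}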
Feel free to ask if anybody has any query.
I've got 2 ListViews (A and B) of the same base type. What I need to implement is:
- user drags items from A, drops them into B
- user drags items from B, drops them into B (rearrangement).
The changes have to be saved in an SQLite table. Two different SQL queries run, so the two kinds of drag-and-drop must have different outcomes.
If I call B.setOnDragDropped twice, can I differentiate the two depending on where the drag started?
Thanks for your help, much appreciated, folks.
If you call setOnDragDropped(...) twice on the same node, the second handler will replace the first one (it is a set method, and works the same way as any other set method: it sets the value of a property).
You need something along the lines of
listViewB.addEventHandler(DragEvent.DRAG_DROPPED, e -> {
// handler code here...
});
You can determine the source of the drag inside the handler with e.getGestureSource() and see which node it matches to determine the course of action.
I am writing a Meteor app that takes in external data from a machine (think IoT) and displays lots of charts, graphs, etc. So far so good. There are various pages in the application (one per graph type so far). Now, as the data is being fed in "real time", there is a (normal) situation where the data "set" gets totally reset, i.e. all the data is no longer valid. When this happens, I want to redirect the user back to the "Home" page regardless of where they are (well, except the home page).
I am hoping to make this a "global" item, but also don't want too much overhead. I noticed that iron:router (which I am using) has an onData() method, but that seems a bit high-overhead, since it's just one piece of data that indicates a reset.
Since each page is rather "independent" and a user can stay on a page for a long time (the graphs auto-update as the underlying data changes), I'm not even sure iron:router is the best approach at all.
This is Meteor 1.0.X BTW.
Just looking for a clean, "proper" Meteor way to handle this. I could put a check in the redisplay logic of each page, but I would think a more abstracted (read: global) approach would be more long-term friendly (so if we add more pages of graphs, it automatically still works).
Thanks!
This is a job for cursor.observeChanges http://docs.meteor.com/#/full/observe_changes
Set up a collection that serves as a "reset notification" and broadcasts to all users when a new notification is inserted.
On Client:
var criteria = { someCriteria: true };
var query = ResetNotificationCollection.find(criteria);

var handle = query.observeChanges({
  added: function (id, fields) {
    // A new reset notification arrived: send the user back home.
    Router.go('home');
  }
});
Whenever a reset happens:
var notification = { time: new Date(), whateverYouWantHere: 'useful info' };
ResetNotificationCollection.insert(notification);
On insert, all clients observing changes on the collection will respond to an efficient little DDP message.
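For this to reach the clients, the collection has to be defined on both client and server and published (unless you still have autopublish enabled). A minimal sketch, with the collection and publication names chosen arbitrarily:

// common code (runs on both client and server)
ResetNotificationCollection = new Mongo.Collection('resetNotifications');

// server
Meteor.publish('resetNotifications', function () {
  // Only the most recent notification is needed to trigger the redirect.
  return ResetNotificationCollection.find({}, { sort: { time: -1 }, limit: 1 });
});

// client
Meteor.subscribe('resetNotifications');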
When ranging over a map m that has concurrent writers, including ones that could delete from the map, is it not thread-safe to do the following?
for k, v := range m { ... }
I'm thinking that, to be thread-safe, I need to prevent other possible writers from changing the value v while I'm reading it, and (when using a mutex, because locking is a separate step) verify that the key k is still in the map. For example:
// mu is a sync.RWMutex guarding m; writers take mu.Lock() before modifying the map.
for k := range m {
    mu.RLock()
    v, found := m[k]
    mu.RUnlock()
    if found {
        ... // process v
    }
}
(Assume that other writers are write-locking m before changing v.) Is there a better way?
Edit to add: I'm aware that maps aren't thread-safe. However, they are thread-safe in one way, according to the Go spec at http://golang.org/ref/spec#For_statements (search for "If map entries that have not yet been reached are deleted during iteration"). That page indicates that code using range needn't be concerned about other goroutines inserting into or deleting from the map. My question is: does this thread-safety extend to v, such that I can get v for reading using only for k, v := range m and no other thread-safe mechanism?

I created some test code to try to force a crash to prove that it doesn't work, but even running blatantly thread-unsafe code (lots of goroutines furiously modifying the same map value with no locking mechanism in place) I couldn't get Go to crash!
No, map operations are not atomic/thread-safe, as the commenter on your question pointed out by linking to the Go FAQ entry "Why are map operations not defined to be atomic?".
To make access safe, you are encouraged to use Go's channels as a means of passing around a resource-access token. Anyone wanting to modify the map first requests the token from the channel (blocking or non-blocking) and, when done working with the map, passes the token back to the channel.
Iterating over and working with the map should be sufficiently simple and short that you should be OK using just one token for full access.
If that is not the case (you use the map for more complex work, or a resource consumer needs more time with it), you may implement separate reader and writer access tokens. At any given time only one writer can access the map, but when no writer is active the token is passed to any number of readers, who will not modify the map (and can therefore read simultaneously).
For an introduction to channels, see the Effective Go docs on channels.
You could use concurrent-map to handle the concurrency pains for you.
// Create a new map.
m := cmap.NewConcurrentMap()

// Add an item to the map: stores "bar" under the key "foo".
m.Add("foo", "bar")

// Retrieve the item from the map.
tmp, ok := m.Get("foo")

// Check whether the item exists.
if ok {
    // The map stores items as interface{}, hence we have to cast.
    bar := tmp.(string)
}

// Remove the item under key "foo".
m.Remove("foo")