Over the past two months, my app has become significantly more complex. I use transactions to complete 90% of all document writes client-side, and I have added quite a few listeners recently as well. Over the app's development, a linear increase in transaction and listener usage has produced an exponential spike in crashes, and that's just from testing on a single device! Furthermore, the crashes happen exclusively on Android devices...
Every crash my app has seen traces back to one of these four frames:
io.flutter.plugins.firebase.cloudfirestore.CloudFirestorePlugin$5.doInBackground(CloudFirestorePlugin.java:613)
io.flutter.plugins.firebase.cloudfirestore.CloudFirestorePlugin$DocumentObserver.onEvent (CloudFirestorePlugin.java:429)
io.flutter.plugins.firebase.cloudfirestore.CloudFirestorePlugin$EventObserver.onEvent (CloudFirestorePlugin.java:451)
io.flutter.plugins.firebase.cloudfirestore.CloudFirestorePlugin$5.doInBackground (CloudFirestorePlugin.java:633)
The information provided doesn't help narrow down the problem. I would normally assume that I am doing something wrong, but the fact that this occurs exclusively on Android devices leads me to believe something is wrong at the package level. Digging through forums and documentation online, it appears I am not alone with this Firestore transaction/listener issue.
Has anybody had similar issues and found a solution? This isn't only a problem for me; it's a showstopper.
Is it possible that I created this problem by introducing listeners?
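For context, the pattern I'm leaning on is roughly the one below. This is only a sketch to show the shape of the usage, written against the Firebase web SDK rather than the Flutter cloud_firestore plugin, and the collection, document, and field names are made up: a snapshot listener stays open on a document that transactions also update.

    import { initializeApp } from 'firebase/app';
    import { getFirestore, doc, onSnapshot, runTransaction } from 'firebase/firestore';

    const app = initializeApp({ /* project config */ });
    const db = getFirestore(app);

    // Hypothetical document that several clients read and update.
    const lobbyRef = doc(db, 'lobbies', 'lobby-123');

    // Listener kept open for the lifetime of the screen;
    // call unsubscribe() when the screen is disposed.
    const unsubscribe = onSnapshot(lobbyRef, (snap) => {
      console.log('lobby changed:', snap.data());
    });

    // Roughly 90% of writes go through a transaction like this.
    async function incrementScore(points: number): Promise<void> {
      await runTransaction(db, async (tx) => {
        const snap = await tx.get(lobbyRef);
        const current = (snap.data()?.score as number | undefined) ?? 0;
        tx.update(lobbyRef, { score: current + points });
      });
    }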
This seems to be a known issue on GitHub. Please confirm, but the issue only occurs on Android (Flutter) with listeners.
They mentioned:
Just waiting for the green signal to make sure this doesn't catch some other edge cases and I can go ahead with a fix patch.
You should post there that you are also affected and follow up on that issue.
I am currently working on building a simple online game, both as practice and so I can play it with some friends. I am interested in the ease of use of the Galaxy hosting that Meteor provides, but I don't want to pay the hourly price continuously, which comes to around $30 per month.
They mention (on this site) that you can stop your app, which stops billing, but I have yet to find much more about starting and stopping.
Is there any back-end work that needs to be done each time an app is stopped and then started? What is the time delay for starting/stopping? Is there a maximum number of times an app can be started and stopped per month?
If there is a site that answers all this that I missed in my research, I apologize. I've tried looking everywhere I can.
I have apps hosted on Galaxy, and no, I haven't seen anything cumbersome about taking an app offline and back online. It's simply unavailable while it's offline, and then it just starts up again when you bring it back.
Note that, during this time, I didn't do anything with the database (which is hosted on MongoDB Atlas) until I permanently took an app offline, so the DB portion may be a consideration for you.
Sometimes (more often than not), the browser screen freezes while navigating through a Meteor app. Basically everything works fine, but I cannot scroll up or down.
There are no JS errors and everything seems to be running ok.
What could be causing this?
This is often a hard thing to debug, but the first things I'd do are:
Disable your CSS (sometimes the silliest things can seem disastrous)
Check that there are no errors in your terminal either (Meteor logs to both the browser and the terminal)
Could you be flooding the front end with data? How big are your collections? Are you using autopublish or neatly crafted pub/subs? (See the sketch after this list.)
What packages have you included in your project? Try disabling any non-essential packages
Install Kadira, and monitor your performance
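For example, swapping autopublish for a trimmed-down publication looks roughly like this. It's a minimal sketch: the collection, publication, and field names are made up, and the point is simply the limit and the fields projection so the client only receives what the view needs.

    import { Meteor } from 'meteor/meteor';
    import { check } from 'meteor/check';
    import { Mongo } from 'meteor/mongo';

    // Hypothetical collection.
    const Posts = new Mongo.Collection('posts');

    if (Meteor.isServer) {
      // Publish only the fields the list view needs, capped at `limit` documents.
      Meteor.publish('posts.recent', function (limit: number) {
        check(limit, Number);
        return Posts.find({}, {
          sort: { createdAt: -1 },
          limit: Math.min(limit, 100),
          fields: { title: 1, createdAt: 1 },
        });
      });
    }

    if (Meteor.isClient) {
      // Subscribe with a small page size instead of pulling everything down.
      Meteor.subscribe('posts.recent', 20);
    }

If the freezes stop once the client is only holding a few dozen small documents, you've found your culprit.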
It's really hard to say without more information. Off the top of my head, I'd guess you're either experiencing some weird styling/rendering issue or waiting on oversized subscriptions. Overall, Meteor should feel quite fluid and quick. I have a few questions:
Can you share more information about your app or where you're experiencing the issues you've described?
Is your project available anywhere?
What browser are you viewing it in?
Localhost or on a server? (If so, what server environment?)
The file upload script I wrote early last year for an internal website has been misbehaving oddly on a number of machines. On some machines it consistently works fine, on others it consistently misbehaves. I am having exactly the same problem with YUI Uploader, SWFUpload (2.2 and 2.5a), and Uploadify.
On the misbehaving machines, the progress event (or callback, as the case may be) reports the upload going far too quickly: around 9 or 10 MB/s instead of the 50 or 60 KB/s that is actually happening. The progress bar fills up very quickly, and then no more progress events are triggered. A few minutes later, the completion event triggers when the upload is actually done.
I must emphasize that the file upload does proceed normally, even though the progress being reported is very wrong.
The progress events are reporting a correct file size, but the reported amount uploaded is usually way too high, and it appears that it is always a multiple of 2^16 (65536).
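For what it's worth, the handler I'm watching is roughly the one below (a SWFUpload-style progress handler, with the file argument trimmed to just the bit I use; the other libraries report equivalent byte counts through their own progress events). The modulo log is simply how I confirmed that the reported bytesLoaded is always a multiple of 65536 on the bad machines.

    // Wired up via SWFUpload's upload_progress_handler setting.
    function uploadProgress(file: { name: string }, bytesLoaded: number, bytesTotal: number): void {
      const percent = Math.min(100, Math.round((bytesLoaded / bytesTotal) * 100));
      console.log(
        `${file.name}: ${bytesLoaded} / ${bytesTotal} bytes (${percent}%), ` +
        `bytesLoaded % 65536 = ${bytesLoaded % 65536}`
      );
    }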
I'm only having this problem with Firefox 3.5 on Windows XP; the affected machines run various point releases of Flash 10.
Has anyone heard of this happening, or have any idea what is going on?
(I'm off to go file a number of bug reports, but hopefully someone here has some previous experience with this.)
It turns out it was AVG that was proxying the requests. As far as Flash was concerned, it was uploading very, very quickly... into AVG. AVG then proceeded to do the real upload in the background.
It seems this is a general XP issue with AVG's Link Scanner service. I turned off all of the Link Scanner options in AVG, and Flash upload progress (at least through the YUI 2 Uploader) now appears to be reported accurately.
We have 4 different environments:
Staging
Dev
User Acceptance
Live
We use TFS, pull down the latest code and code away.
When a developer finishes a feature, they individually upload their changes to Staging. If the site is stable (determined by really loose testing), we push the changes to Dev, then User Acceptance, and then Live.
We are not using builds/tags in our source control at all.
What should I tell management? They don't seem to think there is an issue as far as I can tell.
If you're up for it, you could become the Continuous Integration champion of your company. You could do some research on a good CI process for TFS, write up a proposed solution, evangelize it to your fellow developers and direct managers, revise it with their input, and pitch it to management. Or you could just sit there and do nothing.
I've been in management for a long time. I always appreciate someone who identifies an issue and proposes a well thought-out solution.
Whose management? And how far removed are they from you?
That is, if you are just a pleb developer and your managers are the senior developers, find another job. If you are a senior developer and your managers are the CIO types, i.e. the people actually running the business, then it is your job to change it.
Tell them that if you were using a key feature of the very expensive software they spent a lot of money on, it would be trivial to tell what code got pushed out and when. That would mean that if a subtle bug got introduced and made it past user acceptance testing, figuring out what changed would be a matter of diffing the two versions.
One of the most important reasons to use TAGS is so you can roll back to a specific point in time. Think of it as an image backup: if something bad gets deployed, you know you can roll back to a previous working version.
Also, developers can quickly grab a TAG (dev, prod or whatever) and deploy it to their development PC... a feature I use all the time to debug production problems.
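For example, cutting a label at build time, pulling that exact version back down, and diffing two labeled versions are all one-liners with the tf command line. The label names and server path below are made up, and the exact switches are from memory, so double-check them against your TFS version:

    tf label "Build-2.3.0" $/MyProject/Main /recursive /version:T
    tf get $/MyProject/Main /version:LBuild-2.3.0 /recursive /force
    tf difference $/MyProject/Main /version:LBuild-2.2.0~LBuild-2.3.0 /recursive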
So you need someone to tell the other developers that they must label their code every time a build is done and increment a version counter. Why can't you do that?
You also need to tell management that you believe the level of testing being done is not sufficient. This is not a problem unique to your organisation, and they'll probably say they already know, but there's no harm in mentioning it rather than waiting for a major problem to arrive.
As for individuals doing builds versus an automated build process, whether you really need that depends on how many developers you have and how often you do builds.
What is the problem? As you said, you can't tell whether management sees the problem. Perhaps they don't! Tell them what you see as the current problem and what you would recommend to fix it. The problem has to be of the nature of "our current process has failed 3 out of 10 times, and implementing this new process would reduce those failures to 1 out of 10 times".
Management needs to see improvements in terms of reduced costs, increased profits, reduced time, or reduced use of resources. "Because it's a widely used best practice" isn't going to be enough. Neither is "because it makes my job easier".
Management often isn't aware of a problem because everyone is too afraid to say anything or assumes they can't possibly fail to see the problem. But your world is a different world than theirs.
I see at least two big problems:
1) Developers uploading changes themselves. All changes should come from source control. Have you ever had a change go to production that never made it into source control and was then accidentally removed on the next deploy? How much time (money) was spent trying to figure out what went wrong there?
2) Lack of a clear promotion model. It sounds like you are moving individual changes between environments rather than "builds". The key distinction: if two changes work great in UAT because of how they interact, promoting only one of them to production could break things there. Promoting consistent code, whether by labeling it or by just zipping up the whole web application and promoting the zip file, should cause fewer problems.
I work on the continuous integration and deployment solution, AnthillPro. How we address this with TFS is to retrieve the new code from TFS based on a date-time stamp (of when someone pressed the "Deliver to Stage" button).
This gives you most (all?) of the traceability you would get from using tags, without actually having to go around tagging things. The system just records the time stamp, and every push of the code through the testing environments is tied to a known snapshot of the code. We also have customers who lay down tags as part of the build process. As the first poster mentioned, CI is a good thing: less work, more traceability.
If you already have TFS, then you are almost there.
The place I'm at was using TFS for source control only. We have a similar setup with Dev/Stage/Prod. I took it upon myself to get a build server installed. Once that was done, I added the ability to auto-deploy to Dev for one of my projects and told a couple of the other guys about it. Initially the reception was lukewarm.
Later I added TFS Deployer to the mix and set it to auto-deploy the good Dev build to Stage.
During this time the main group of developers were constantly fighting the "Did you get latest before deploying to Stage or Production?" questions; my stuff was working without a hitch. Believe me, management and the other devs noticed.
Now (6 months into it), we have a written rule that you aren't even allowed to use the Publish command in Visual Studio. EVERYTHING goes through the CI build and deployments. When moving to Prod, our production group pulls the appropriate copy off the build server. I even trained our QA group on how to do web testing, and we're slowly integrating automated tests into the whole shebang.
The point of this ramble is that it took a while. But more importantly, it only happened because I was willing to just run with it and show results.
I suggest you do the same. Start using it, then show the benefits to get everyone else on board.