According to this question and answer from March, some bugs in DatastoreIO make it impossible to read namespaced entities in parallel from within Dataflow. Have the bugs been addressed since then? Is it possible to read namespaced entities in parallel from the Datastore?
Parallel reading of namespaced entities is supported in DatastoreIO as of Dataflow SDK for Java 1.2.0. See the code and documentation here.
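For reference, here is a minimal sketch of what a namespaced read can look like with the 1.x Java SDK's DatastoreIO; the dataset ID, kind, and namespace below are placeholders, and exact builder method names may differ slightly between 1.x releases.

    import com.google.api.services.datastore.DatastoreV1.Entity;
    import com.google.api.services.datastore.DatastoreV1.Query;
    import com.google.cloud.dataflow.sdk.Pipeline;
    import com.google.cloud.dataflow.sdk.io.DatastoreIO;
    import com.google.cloud.dataflow.sdk.io.Read;
    import com.google.cloud.dataflow.sdk.options.PipelineOptionsFactory;
    import com.google.cloud.dataflow.sdk.values.PCollection;

    public class NamespacedDatastoreRead {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Query for a kind; "MyKind" is a placeholder.
        Query.Builder q = Query.newBuilder();
        q.addKindBuilder().setName("MyKind");

        // Read entities from one namespace; the source is split and read in parallel.
        PCollection<Entity> entities = p.apply(
            Read.from(DatastoreIO.source()
                .withDataset("my-project-id")      // placeholder dataset/project ID
                .withQuery(q.build())
                .withNamespace("my-namespace")));  // placeholder namespace

        p.run();
      }
    }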
We plan to maintain a central proto repository that holds all proto definitions and their generated code. Both messages and service definitions would live in this central Git repo, and we plan to drive our API design standards from it.
However, any service that wants to expose a server or generate clients from these definitions would have to import the generated code (.pb.go) from this repo.
Do you see any issues with this approach? Or would keeping each service's proto files in its own service repo be a better alternative?
PS: I'm just starting my gRPC journey of building microservices, and I'm still learning the right way to structure and distribute code here.
This question comes up regularly, and I suspect the lack of published guidance is because the answer depends more on your needs than on the technology.
The specific issue of many repos vs. one is not dissimilar to deciding whether you prefer a monorepo, and only you can effectively determine that. One way to decide is to understand how many shared dependencies your services will have, now and in the future. Another is to consider how many repos you'll end up with (how complex would it be to manage tens or hundreds of repos?).
In my experience, it's good practice to keep the protos distinct (i.e. in a separate repo) from the code that uses them. Not only may you want to version protos independently of their implementations (across languages), but the implementations themselves are independent; in one use case I must clone a repo containing an entire system (written mostly in one language) just to get its protos and generate bindings in another language. In that case, it would be preferable if the repo were limited to just the protos.
You could look to existing examples for guidance. The gRPC repo keeps a bunch of protos rooted under the grpc package, in addition to math. Although less broad, Google bundles its well-known types under google.protobuf.
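To make the central-repo approach from the question concrete, here is a rough sketch of a Go service importing generated code from such a repo. The module path github.com/example/protorepo, the orders package, and the OrderService name are invented for illustration; only the surrounding grpc-go calls are real API.

    package main

    import (
        "log"
        "net"

        "google.golang.org/grpc"

        // Generated .pb.go code imported from the (hypothetical) central proto repo,
        // rather than regenerated inside this service's own repo.
        orderspb "github.com/example/protorepo/gen/go/orders/v1"
    )

    // server implements the service interface generated from the central protos.
    type server struct {
        orderspb.UnimplementedOrderServiceServer
    }

    func main() {
        lis, err := net.Listen("tcp", ":50051")
        if err != nil {
            log.Fatalf("listen: %v", err)
        }
        s := grpc.NewServer()
        orderspb.RegisterOrderServiceServer(s, &server{})
        log.Fatal(s.Serve(lis))
    }

Whether every service imports that one shared module, or each service keeps and generates its own protos, is exactly the trade-off discussed above.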
I have a question regarding some NoSQL databases. In Ehcache we have, for example, the JCache API; in MapDB, the Map interface; and Riak KV runs as its own process with clusters. How exactly do I find out which database fits which implementation type? For example, RocksDB (I assume it runs as its own process), and the same for LevelDB.
For reference, RocksDB and LevelDB perform very similar functions and can be interchangeable in some situations.
Given your question "Are RocksDB and LevelDB just like Riak?", I can say that they are not the same: Riak provides a scalable, distributed platform that can connect to one or more backend databases simultaneously (currently supported backends are Bitcask, LevelDB, Leveled and memory). RocksDB and LevelDB are essentially standalone database platforms that can be used as such or be utilised as a backend by other software, such as Riak. While you could technically implement RocksDB as a backend for Riak KV without needing a mountain of custom code, you probably wouldn't want to, as RocksDB does not scale well.
"How do I exactly find out which database fits which implementation type?" is rather a broad question. You might want to rephrase it as "Which databases offer me {my list of desired implementations/functions}?" to make it easier for community members to answer. Please note that some NoSQL databases serve multiple uses; in Riak KV, as you mentioned, we have Maps, Sets, GSets, Flags, Registers, Solr Search, 2i and the standard CRDT options as well, but some of those are tied to other requirements, e.g. 2i only works with a LevelDB/Leveled backend, and Solr Search requires the Yokozuna package version of Riak KV 3.0.0 and above but is built in for all Riak 2.x.x versions.
What you may also want to do is download a few different options to a VM or bare-metal rig, have a play, and see how they work out. There are often cases where two competing products do something very similar on paper but, in your specific use case, one significantly outperforms the other.
To get you started, here are links to Riak 2.9.8 (the latest release of the 2.x.x series) and to the Riak 2.2.6 docs (the 2.9.x docs should be out later this month).
I'm not sure if this has directly answered your question but, hopefully, it will give you some pointers as to where to go next.
I know it is not best practice to use the R package drake within a notebook tool, but I'm doing it anyway as a workaround for the limitations of the collaboration infrastructure we have on my team at work. Since my code is broken up into chunks distributed throughout sections of the notebook, it would be useful to have multiple analysis plans: each would be executed in the appropriate section, and other plans would be written and executed in subsequent sections of the notebook. Is it possible to write multiple plans in drake?
Sorry I am late to this thread. I am the maintainer of the drake R package, and I usually expect to receive questions on the issue tracker. A drake-r-package StackOverflow tag would really help me keep up, but I do not have that privilege.
Anyway, interesting use case. I do see some workarounds (sketched in code after this list):
Separate caches for separate plans. drake uses storr to cache its targets, and you could create different caches for different sections of your report. Essentially, your report would manage a bunch of separate drake projects. See this chapter in the manual for more on the caching system. Use the cache argument to make() to supply a manual or non-default storr cache.
Separate plans and a single cache. Here, you would need to ensure that each plan has a completely unique set of targets. If there is overlap, then some targets will always rebuild whenever you run the report.
A single cumulative plan. Essentially, when it comes time to build additional targets as you move through the report, you can add new rows to an existing plan. In fact, this is the recommended approach for large complex projects (related example here). For even more control, use the targets argument to make() to only build a select few targets and their out-of-date dependencies.
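As promised above, here is a rough sketch of the first and third workarounds; the target names, data, and cache paths are made up for illustration, and the drake manual remains the authoritative reference for these arguments.

    library(drake)

    # Workaround 1: one plan per notebook section, each with its own storr cache.
    plan_a  <- drake_plan(data_a = mtcars, summary_a = summary(data_a))
    cache_a <- new_cache(".drake_section_a")   # dedicated cache for this section
    make(plan_a, cache = cache_a)              # run in the first section

    plan_b  <- drake_plan(data_b = iris, means_b = colMeans(data_b[, 1:4]))
    cache_b <- new_cache(".drake_section_b")
    make(plan_b, cache = cache_b)              # run in a later section

    # Workaround 3: a single cumulative plan; each section builds only the
    # targets it needs (plus their out-of-date dependencies) via `targets`.
    plan_all <- bind_plans(plan_a, plan_b)
    make(plan_all, targets = "summary_a")      # uses the default cache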
I'm not sure I understand the question. We often use drake in Jupyter notebooks, and we do try to support that use case (via the Python bindings).
By “plans”, do you mean mathematical programs? Or inverse kinematics calls? Both should be ok in a notebook framework. Or are you actually calling them in parallel?
Not sure how R fits into it?
According to the Firebase Performance Monitoring documentation:
Known issues include: Performance Monitoring has known compatibility issues with GTMSQLite. We recommend not using Performance Monitoring with apps that use GTMSQLite.
Which of the tools use GTMSQLite? Is there a list somewhere in the documentation?
Knowing these tools would greatly help us avoid unexpected issues while using Firebase Performance Monitoring. Thanks!
No other Firebase SDKs use GTMSQLite. The warning is about either the developer's own usage of GTMSQLite or other dependencies outside of Firebase that use it.
The issue is primarily the result of GTMSQLite's inability to be made into a CocoaPod (see https://github.com/google/google-toolbox-for-mac/blob/b3f485cab2c80a9e685205135d6a63299fe3c394/GoogleToolboxForMac.podspec#L89).
It's not a commonly used library, and if there are issues, you'll generally end up with linker errors (symbol collisions).
We're evaluating eliminating this dependency in future versions...
Is there any tool to convert existing JavaFX 1.x applications to JavaFX 2.x Java code?
No such tool currently exists publicly and it is unlikely that one will be created.
Oracle did create a prototype tool which was used in internal Oracle development, but they decided not to continue development on it.
Quotes from the JavaFX project lead Richard Bair (from the forum threads linked below):
Richard: I'm sorry to say we have no tool to help with the migration. Our experience from migrating the JavaFX Library and samples is that there wasn't really an easy solution -- even the migration assistant that was written was very incomplete. Some folks found it very useful, but I just did it by hand.
PDVieira: Any chance you could send me the FxTranslator helper you've created?
Richard: Wish I could, but unfortunately we cannot send it along (actually, I don't even have the code on hand, didn't write it (Eamonn did) and it would need to get legal approve to open source it, and it probably doesn't even compile or work anymore because the platform has change significantly since last December).
You can refer to these forum threads which discuss this further:
https://forums.oracle.com/forums/thread.jspa?messageID=9967190
https://forums.oracle.com/forums/thread.jspa?messageID=10064115