We are looking for solutions for generating IDs, as per the title of this question.
For clarification:
we are using several different SQL Servers and application servers, any of which could be generating the id
we do not want to use a central ID-generating service/machine
we do not want to use DateTimes bitwise-converted to GUIDs, because with so many machines there is a possibility of collision.
one possible solution is to assign each machine a start position, a skip, and an offset, like this answer: https://stackoverflow.com/a/7916720/175127
this could very easily be our solution, but I'm hoping that someone among you might have a more elegant approach that better addresses some of the following issues:
one machine might end up assigning a lot more IDs and skip far ahead of the others. We might resync all the machines to a new start position every day to help keep them on pace with each other, but this could leave a large number of unused IDs behind. We wish to minimize this.
we wish to see if it's possible to decrease each machine's external dependencies. At startup each machine has to find out how many machines there are and what the start point is, and they all have to agree on unique offsets. I think having some form of central control to administer these things may be unavoidable.
the best that I could think of so far is to have a central machine that distributes ranges of IDs. The other machines grab range-blocks as needed. If the central machine goes down, we fall back to the start-skip-offset system.
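To sketch what I mean by range-blocks (all names, the block size, and the threading detail are invented for illustration, and the start-skip-offset fallback isn't shown):

```python
# Rough sketch: a central allocator hands out blocks of IDs, and each
# application/SQL server only goes back to it when its current block runs out.
import threading

class CentralAllocator:
    """Stands in for the central machine that distributes ranges of IDs."""
    def __init__(self, block_size=10_000):
        self._next = 1
        self._block_size = block_size
        self._lock = threading.Lock()

    def next_block(self):
        with self._lock:
            start = self._next
            self._next += self._block_size
            return start, start + self._block_size - 1   # inclusive range

class MachineIdGenerator:
    """Runs on each machine; only contacts the allocator once per block."""
    def __init__(self, allocator):
        self._allocator = allocator
        self._current = None
        self._end = -1

    def next_id(self):
        if self._current is None or self._current > self._end:
            self._current, self._end = self._allocator.next_block()
        value = self._current
        self._current += 1
        return value

central = CentralAllocator()
machine_a, machine_b = MachineIdGenerator(central), MachineIdGenerator(central)
print(machine_a.next_id(), machine_b.next_id())   # e.g. 1 and 10001
```

The unused-ID waste would then be bounded by one block per machine rather than by a daily resync.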
Got any cool ideas SO?
I have recently started exploring graph databases and Neo4j, and I would like to work with my own data. At the moment I've hit some confusion. I've created an example image to illustrate my issue. In terms of efficiency, I'm wondering which option is better (and I want to get it right now, in the early days, before I start handling larger amounts of data).
Option A: Using only the blue relationships, I can work out whether things are related to, or come under, the Ancient group. This process will be done many, many times, but it is unlikely to involve more than ~6 generations.
Option B: I implement the red relationships, so that it is much faster to work out if young structures belong to the Ancient group.
I'm trying not to use labels in this scenario, as I'm trying to use labels for a specific purpose to simplify my life (linking structures across separate networks), and I'm not sure I should add a label to represent a node that already exists.
In summary, I'm wondering whether adding a whole new bunch of relationships, whilst taking more space, is worth it, or whether traversing to find all relatives is such a simple/inexpensive task that it isn't worth doing so. Or alternatively, both options are viable and this isn't a real issue at all. Thanks for reading.
I'd go with Option A. One of the strengths of Neo4j is that it traverses relationships very efficiently and quickly, and so, there is no need to materialise relationships (sometimes, relationships are materialised in complex and/or extremely large graphs, but this is not your case).
Not sure why you don't want to use labels? Labels serve to group nodes into sets of the same type, and they are also index-backed, which makes it much faster to find the starting point of your query (an index lookup rather than a full database scan).
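For what it's worth, here is a minimal sketch of the kind of Option A check I mean, via the official Neo4j Python driver. The :Structure label, the CHILD_OF relationship type, the name property, and the connection details are all assumptions about your model, not a prescription:

```python
from neo4j import GraphDatabase

# Connection details are placeholders.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def is_under_ancient(structure_name):
    # OPTIONAL MATCH so we always get one row back, even when there is no path.
    query = (
        "OPTIONAL MATCH (s:Structure {name: $name})"
        "-[:CHILD_OF*1..6]->(a:Structure {name: 'Ancient'}) "
        "RETURN count(a) > 0 AS related"
    )
    with driver.session() as session:
        return session.run(query, name=structure_name).single()["related"]

print(is_under_ancient("SomeYoungStructure"))
```

A traversal bounded to ~6 hops like this is exactly the kind of work Neo4j is built for, which is why materialising the red relationships shouldn't be necessary.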
I am trying to calculate physical distances between geographic locations (addresses) with the mapdist function from the ggmap package in R. Apart from the uncomfortable fact that Google Maps allows only 2500 queries/session, I have to cope with misspelled or otherwise imperfect "addresses". The most typical problem is that the exact address strings have extra information appended to them (floor, door, etc.), and it is very hard to detect any pattern in these additions that would allow applying a regular expression.
My goal is:
Check if the address string is recognizable to Google Maps;
If not, find a way to truncate to an acceptable form, perhaps by parsing words step by step from the string.
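To illustrate, this is the kind of step-wise truncation I have in mind. The sketch below uses the geopy/Nominatim geocoder purely as a stand-in for whatever service ends up doing the checking, and the sample address is made up:

```python
from geopy.geocoders import Nominatim  # stand-in geocoder, just for illustration

geolocator = Nominatim(user_agent="address-cleanup-sketch")

def best_effort_geocode(address, min_words=2):
    """Try the full string first; if it isn't recognized, drop trailing
    words one at a time until the geocoder returns a hit or we give up."""
    words = address.split()
    while len(words) >= min_words:
        candidate = " ".join(words)
        location = geolocator.geocode(candidate)  # returns None if unrecognized
        if location is not None:
            return candidate, (location.latitude, location.longitude)
        words = words[:-1]  # strip the trailing token (floor, door, etc.)
    return None, None

print(best_effort_geocode("12 Example Street, Springfield, 3rd floor, door B"))
```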
Has anybody coped with this kind of problem?
Thanks.
There are a couple of factors running into each other here. One factor is the misspellings and other complexities related to addresses and the other is pinpointing (geocoding) a given address. Although they are related problems, each must be handled to accomplish your objectives.
There are numerous service providers out there that can do either or both with minimal cost involved; a simple Google search will turn them up. You can then investigate each to see whether it matches your use case and licensing requirements.
All of that considered, you'll want to get your address list cleaned up at a minimum. Doing that will enable you to utilize any number of geocoding providers.
Depending upon the size of your list, you can get your list cleaned up and geocoded for perhaps $20.
In the interest of full disclosure, I'm the founder of SmartyStreets. We provide a web interface (to help clean up the address list) as well as an API (which can be used on a continual basis to keep addresses clean). We also geocode your list at no extra charge. Further, we don't have any licensing restrictions on the number of lookups that can be performed during a given timeframe. (We have customers that hit us hundreds of millions of times per day.) The entire process of signing up and cleaning up your list takes just a few minutes.
I suppose that, using some sort of cryptography and other trickery, it'd be possible to count how many occurrences there have been.
For example, suppose there is a way to identify each computer, and my software tries to count how many people have used it by having the copies connect with each other, which they do anyway since the software uses the internet.
So let's say my software is downloaded to computer A, then passed on, like A>B>C. Now the copy at C needs to know somehow that there are three unique computers that use it.
Likewise, A>B>D needs to know that it also has three computers.
But if A>B>C>E and A>B>D>E, then E needs to know there are 5 unique computers.
Now, I could make a system in which a unique ID based on something about the computer (now, what would that be?) gets stored in an array, and the software carries that array with it and shares it with others whenever it connects, then checks whether there are any new computers in the list. In the end everyone knows about everyone else, given enough connectivity.
However, from what I have learned from Bitcoin and cryptography, I have a feeling that there has to be another way besides storing a long string a million times (if there happen to be tons of computers).
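To make that naive approach concrete, here is a toy sketch. The way machines are identified and how copies exchange their lists are made up purely for illustration:

```python
class Installation:
    def __init__(self, machine_id, downloaded_from=None):
        # When the software is downloaded from another machine, the new copy
        # starts with that machine's list of known IDs plus its own.
        inherited = downloaded_from.known if downloaded_from else set()
        self.known = inherited | {machine_id}

    def sync(self, other):
        """Two running copies connect over the internet and merge their lists."""
        merged = self.known | other.known
        self.known, other.known = set(merged), set(merged)

# A > B > C and A > B > D
a = Installation("A")
b = Installation("B", downloaded_from=a)
c = Installation("C", downloaded_from=b)
d = Installation("D", downloaded_from=b)
print(len(c.known), len(d.known))   # 3 3 -- each chain knows of three computers

# A > B > C > E and A > B > D > E: E ends up hearing from both chains
e = Installation("E", downloaded_from=c)
e.sync(d)
print(len(e.known))                 # 5 unique computers
```

The obvious downside is exactly what I said above: every copy ends up carrying the full list of IDs.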
Are you trying to count how many have ever used the program? Or how many are currently using the program? Or how many have used the program within some amount of time before now?
If your count includes computers that are not guaranteed to be accessible (e.g. if counting unique computers that have ever used the program, or used it since some time but not necessarily online now), then it seems inevitable that you will need some centralized repository of the official accumulating list. Each computer would need to communicate with that centralized list and pass it some unique identifier for the computer. If you want to know computers since time T, tracking time information of the connections is also needed.
If you only want the number of computers that are currently using it (and accessible to each other), it might be possible for each one to interrogate the others dynamically at the point of time it wants to form a current count. But even then, you would need some centralized convention for how they reach out to communicate. Conceptually they are each dynamically joining a "set" and then leaving it again later. Even if that "set" were not always located in a fixed single location, still there would need to be conceptually one official "set" and each instance would need to be able to connect with the "set" to join it and later leave it. That implies a standardized point of contact and means of contact.
So I suspect what you might really want may not be quite possible in the way you were hoping. That said, if you still want to think further about it, you might want to learn more about peer-to-peer software such as BitTorrent and others.
A web application we wrote intended for one customer is going to be product-ized and sold to dozens of companies, and we will be doing the hosting.
I could use some guidance about the pros and cons of rolling out a separate instance for each customer versus going with a single (or very small number of) multi-tenant instances.
At first, as we ramp up, I will have to roll out a separate instance of the application for each new customer (they will come online one at a time) because it's the only immediate option. I imagine this won't scale very well as far as maintenance goes - rolling out changes will become very tedious and possibly error-prone once there are more than 4 or 5 instances out there. Unless we automate that somehow.
Also, the single-instance philosophy seems like it might lead to a bunch of forks if people need customizations. And it would be nice to avoid that.
So what has your experience been with this?
Bonus question #1: What's the performance difference between 10 SQL Servers with 2m records each versus one huge one with 20m? Let's say they are all in one table and we're mainly doing inserts and selects on single records. Sometimes the selects are on an indexed varchar(12) or date field.
Bonus Question #2: I imagine that to avoid forking, we would have to make the customizations configurable, or build a plug-in architecture. However, that might increase the cost of doing customizations, and I don't want to be one of those shops that takes a week to resize a textbox, and I don't want to over-invest in infrastructure. Any thoughts on that?
Scale Details
Each customer will have a decent amount of data -- up to a few million records.
There will be a very small number of concurrent users, only a few per customer, plus a handful of internal reps on our end.
It's unclear whether each customer will require customizations, but I would say some of them probably will, and maybe some of those changes will be things that other customers will not want to see.
when faced with a similar challenge, here's what we did:
1. we have one code base with multiple sql servers. we do maintain multiple iis servers with copies of the same code base. we are free to move clients around from sql server to sql server to maximize performance.
2. if a customer has the $ for it, we will install them on their own server and maintain a separate iis server for them. this accommodates the largest customers, who pay much more money every month (roughly 10-fold more). we do not, however, give them a separate code base. if they need a mod, we make it visible on a per-client basis (see #3).
3. custom programming usually results in a configurable option. even the people who pay us to have their own server get the same version of the code. sometimes it's as simple as a clause in the code that says "if customer = 'ourbigcustomer' then turn on this option". yes, that's kludgy hard-coding, but if the customer has enough money, that is fine with me. (a rough sketch of what i mean follows this list.)
4. i didn't quite get from your question whether you wanted to mix different customers' data into one big database. our rule is we never do that (never ever). it is one of the wisest choices we ever made. it makes data manipulation much less risky and makes restores of data easier.
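that per-client switch can also live in data rather than code; a rough sketch of what i mean, with the option and client names invented for illustration:

```python
# rough sketch: per-client options loaded from configuration instead of
# hard-coding client names throughout the code. in practice this dictionary
# would come from a ClientOptions table (name invented) in sql server.
CLIENT_OPTIONS = {
    "ourbigcustomer": {"fancy_reporting": True},
    "smallclient":    {"fancy_reporting": False},
}

def option_enabled(client, option):
    return CLIENT_OPTIONS.get(client, {}).get(option, False)

if option_enabled("ourbigcustomer", "fancy_reporting"):
    pass  # turn the custom behaviour on for this client only
```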
I don't see a good reason for either of your two options. I think the real answer lies somewhere in the middle: having multiple instances, each hosting multiple clients.
This adds another layer of automation processing, but it means you can keep the hosting cheap (you won't need to go out and buy a Cray any time soon) and (hopefully) this sort of mentality means you could do failover backups fairly easily.
But let's not get ahead of ourselves... We're talking about a webapp, right? Get your database(s) and ASP.NET onto different machines. Cluster your databases and you'll have a much happier time playing around with various front-end scenarios. You'll also be able to upscale whichever area runs out of puff first.
By the sounds of it, you'll end up with one clustered database spread over half a dozen, if not a full dozen, database machines and only a couple of front-end boxes.
As for customisations, you've nailed it. You either provide a completely database-hosted set of editable templates or you have to customise whole instances. I'm all for the first. It's a lot of work (without much in return) but it's well worth it, as you should only need to change the core code when you do upgrades (and you will!). Hunting through a hundred customers' custom instances to make sure they upgrade safely will kill a developer! Templates are the answer. At the very least, you could allow custom CSS without much pain (but they'd need somebody who knew their stuff).
Edit: I've seen a couple of posts going for the all-in-one method. Splitting the instances over multiple machines insulates you from a couple of things:
If you introduce a bug not caught in testing, only a few clients are affected at once.
Hardware fails. Having one mega-server fall over will annoy a lot of people at once. Having a failover mega-server is massively expensive. Having a spare failover box per three or four running servers is much cheaper and annoys fewer people.
Performance can be balanced between boxes on a client-by-client basis, so you can put a few light-use clients with a heavy client, or just fill a box with a few medium-use clients, etc.
On the same idea, usage spikes or other slowdowns only affect clients on the same box. Of course this doesn't mean the same for the database, but you can split that up into a cluster of clusters when you get there.
The big advantage of individual instances will be scaling out as each customer's demand increases. For example, if you're running on a single server and one customer suddenly needs more performance, you're stuffed. But if they're all individual, then moving that customer to a shiny new server is relatively easy.
The big disadvantage will be in managing all the instances individually (regardless of whether they're all running on the same server or not).
Regardless, you should only ever have one instance of the codebase, and customisation should all be controlled through plugins and configuration. The front end should naturally be separate from content. Although the cost of making a change may be higher, the benefit in terms of features you can offer your other customers (which will just be customisations you've been asked to do) will pay off, I'm sure. That's to say nothing of how much easier it'll be to manage a single codebase as opposed to several.
I would strongly advise going with the single instance hosted by your company. This has the following advantages:
You have physical access to all code and databases to make changes and updates.
You control the quality of the hardware it is running on.
When you fix a bug in common code, you have fixed it once for all customers.
You can refactor the application design to better support customer-specific code and avoid forking.
As the number of customers grows, you can scale up and scale out your servers to meet performance/responsiveness requirements.
Your application code and databases cannot be tampered with by "inquisitive" customers.
I would have to say it is almost more important where your application is running as opposed to how many separate instances there are of it.
Sure, maintaining multiple separate instances is not ideal due to the support/maintenance overhead, but if these apps are all on servers you control, life is much easier than if you need remote/physical access to different customers' networks and servers.
Joel Spolsky also talks about exactly this on StackOverflow podcast 67.
One thing Joel has learned from selling Fogbugz: software designed to be installed on a server in-house at a customer’s site, under full control of that customer, is almost never worth the hassle.
20 million records is, relatively speaking, not a huge SQL Server database; a single well-provisioned SQL Server could handle that size comfortably. More important, however, is the number of concurrent accesses to the database. Since you say there will be only a few users per customer, this is unlikely to hurt you until the level of concurrency grows.
All of the above are good points, but you are missing two key questions: what price point is the service offered at, and how many customers (order of magnitude) will you ultimately have to support (i.e. market size)? In 3 years, will you have a maximum of 10 customers each paying you $500,000 per year, or 500 customers each paying you $10,000 per year? For a small set of high-paying premium customers the advantage of individual deployments is clear, whereas lower prices and a larger customer base make a shared solution (a la Oli's comment) the best way to go. Or go with a cloud platform, although I've only read the hype and tinkered rather than deployed that in the field.
Bonus Question 1: table layout, indexing, the number of reads/writes, and the efficiency and complexity of stored procedures (you are using procs or at least prepared statements, right?) all matter a heck of a lot more than the number of physical records in the database, up to a point. Beyond that you will likely find yourself needing to provide individual SQL Server instances, either one per customer or one per pool of customers, once again depending on some of the questions I raised above.
Bonus Question 2: Putting the time into your design for templating and a plugin architecture is essential in this situation and you need to do it sooner rather than later. Once you're in the grind of customizing code for paying customers you will likely not have the time to do it right. This point cannot be stressed enough. Templates and admin tools that give you quick and deep access to data-driven changes in your product will save you a lot of time down the road. As your company / group expands you can then add less technical staff that can be "product experts" who can perform 90% of customizations and maintenance, freeing up your core to continue development or move on to other projects. Finally, don't neglect your data tier in this planning process. Having a core data tier of (almost) immutable stored procs and tables is very important, with custom tables and stored procs clearly demarcated using a good naming convention.
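As a very rough illustration of the plugin idea (the hook names and registration mechanism here are invented, not a prescription for your product):

```python
# Sketch of a plugin/customization hook: core code publishes named hook points,
# and per-customer plugins register against them outside the core codebase.
from collections import defaultdict

_hooks = defaultdict(list)

def register(hook_name, customer=None):
    """Decorator: attach a customization to a hook, optionally for one customer."""
    def wrap(func):
        _hooks[hook_name].append((customer, func))
        return func
    return wrap

def run_hook(hook_name, customer, value):
    """Core code calls this at well-defined points; registered plugins transform the value."""
    for target_customer, func in _hooks[hook_name]:
        if target_customer in (None, customer):
            value = func(value)
    return value

# A customer-specific customization lives in its own module, not in core code.
@register("format_invoice_header", customer="acme")
def acme_header(header):
    return header + " (ACME custom wording)"

print(run_hook("format_invoice_header", "acme", "Invoice #123"))
print(run_hook("format_invoice_header", "other", "Invoice #123"))
```

The same separation applies on the data tier: core tables and procs stay untouched, and customer-specific ones are clearly named and isolated.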
Good luck, feel free to provide more details if you'd like more specific suggestions.
Based on some of the advice received here, we did end up implementing a monolithic multi-tenant version of our application.
I'm glad we did. By the time it was done, we had 3 or 4 forks of the code base (mainly custom skins and things we didn't have n-level support for, but also some actual features), and it was only getting crazier.
We got the multi-tenant version up and successfully folded everything in. There ended up being a lot to think about and a lot to keep track of, but our customers never even knew they had been moved to a new system.
I will say that the actual customer migration was a bit of a bear. I thought at first that we would be able to do it by hand in the backend, but I ended up having to write some fairly involved scripts to get the job done. There were just too many identity columns, and it's not like you can just turn off constraints temporarily when you're importing into a live production system.
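For anyone facing a similar migration, the heart of those scripts was remapping identity values while importing: insert the parent rows into the new multi-tenant database, record the old-to-new ID mapping, then use that map to fix up foreign keys on the child rows. A toy version of the idea (the tables, columns, and SQLite itself are stand-ins, nothing like our real schema):

```python
import sqlite3

src = sqlite3.connect(":memory:")   # the old single-tenant database
dst = sqlite3.connect(":memory:")   # the new multi-tenant database

src.executescript("""
CREATE TABLE Customers (CustomerID INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE Orders    (OrderID INTEGER PRIMARY KEY, CustomerID INTEGER, Total REAL);
INSERT INTO Customers VALUES (1, 'Alpha'), (2, 'Beta');
INSERT INTO Orders    VALUES (10, 1, 99.0), (11, 2, 25.0);
""")

dst.executescript("""
CREATE TABLE Customers (CustomerID INTEGER PRIMARY KEY AUTOINCREMENT, TenantID INTEGER, Name TEXT);
CREATE TABLE Orders    (OrderID INTEGER PRIMARY KEY AUTOINCREMENT, TenantID INTEGER, CustomerID INTEGER, Total REAL);
""")

TENANT_ID = 42
id_map = {}  # old CustomerID -> new CustomerID assigned by the target database

for old_id, name in src.execute("SELECT CustomerID, Name FROM Customers"):
    cur = dst.execute("INSERT INTO Customers (TenantID, Name) VALUES (?, ?)", (TENANT_ID, name))
    id_map[old_id] = cur.lastrowid

for _, old_cust_id, total in src.execute("SELECT OrderID, CustomerID, Total FROM Orders"):
    dst.execute("INSERT INTO Orders (TenantID, CustomerID, Total) VALUES (?, ?, ?)",
                (TENANT_ID, id_map[old_cust_id], total))

dst.commit()
print(list(dst.execute("SELECT * FROM Orders")))
```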
I have company, customer, supplier, etc. tables which all have address-related columns.
I am trying to figure out whether I should create a new 'addresses' table and move all the address columns into it.
Having address columns on all the tables is easy to use and query, but I am not sure it is the right way of doing it from a design perspective; having the same columns repeated over a few tables makes me wonder.
The content of the addresses is not important to me; I will not be checking or using these addresses in any decision-making processes, they are purely informational. Currently I am looking at 5 tables that have address information.
The answer to all design questions is this:
It depends.
So basically, in the Address case it depends on whether or not you will have more than 1 address per customer. If you will have more than 1, put the addresses in a new Addresses table and give each address a CustomerID. It's overkill (most times; it depends!) to create a generic Address table and map it to the company/customer/supplier tables.
It's also often overkill (and dangerous) to map addresses in a many-to-many relationship between your objects (as addresses can seem to magically change on users if you do this).
The one big rule is: Keep it simple!
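As a minimal sketch of that one-to-many layout (column names are just examples, and the same pattern would repeat for the company and supplier tables):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway database, just to show the shape
conn.executescript("""
CREATE TABLE Customers (
    CustomerID INTEGER PRIMARY KEY,
    Name       TEXT NOT NULL
);

-- one-to-many: each address row belongs to exactly one customer
CREATE TABLE Addresses (
    AddressID  INTEGER PRIMARY KEY,
    CustomerID INTEGER NOT NULL REFERENCES Customers(CustomerID),
    Street     TEXT,
    City       TEXT,
    PostalCode TEXT
);

INSERT INTO Customers VALUES (1, 'Some Customer');
INSERT INTO Addresses VALUES (1, 1, '1 Example Road', 'Springfield', '12345'),
                             (2, 1, '2 Billing Lane', 'Springfield', '12345');
""")
conn.commit()
```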
This is called Database Normalization. And yes, you want to split them out, if for no other reason than that doing it later, once you have code and queries in place, will be much harder.
As a rule, you should always design your database in 3rd Normal Form, even for simple apps (there will be a few cases where you won't for performance or logistic reasons, but starting out I would always try to make it 3rd Normal Form, and then learn to cheat after you know the right way of doing it).
EDIT: To expand on this and add some of the comments I have made on others' posts, I am a big believer in starting with a simple design when it comes to code and refactoring when it becomes clear that it is getting too complex and more in-depth object-oriented principles would be appropriate. However, refactoring a database that is in production is not so simple. It is all about ROI. It is just too easy to design a normalized database from the outset to justify not doing it. The consequences of a poorly designed database can be catastrophic, and it is usually too late by the time you come to that realization.
Yes, you should separate the addresses into a table of their own. It's a smart thing to ask about. The key here is that the general format of addresses is the same, regardless of whose it is; a customer, a company, a supplier... they all have the same fields for addresses.
What makes this worthwhile is the ability to treat addresses as an atomic element; that is, you can generalize all the functionality related to addresses and have it deal with just one table, as opposed to having to worry about it dealing with several tables, and the associated schema drift that can occur.
If you are using those addresses only within the scope of the tables they are in, there may be no real benefit to moving them out into their own table.
Basically, it doesn't sound like it's worth the effort.
If there's an overlap between tables (i.e. the same organization is entered in both the company and supplier tables), and the address should always be the same in both, then it's probably worth moving the address off into its own table and having foreign keys to it from your other three tables. That way, you only have to update it in one spot when it changes.
If the three tables are entirely independent from each other, then there's not really much to gain from moving the data to another table, so you might as well leave it alone.
I think it entirely depends on the purpose of the database. Admittedly all address information is structurally the same and from a theoretical standpoint should all be in a single table linked from the parent table by a key.
However from a performance and query perspective, keeping them in their respective tables does simplify things from a reporting standpoint.
I have a situation with my current company [logistics] where the addresses are actually logically the same - they're all locations regardless of whether they're a pickup location, delivery location, customer etc.
In my case, I'd say that they should most definitely all be in one table. But if it's looking at it from a supplier, customer, contact information standpoint, I'd say that while theoretically it's nice to have the addresses in one table, in practice it won't buy you a whole lot as the data is unlikely to be repeated.
I disagree with Dave. The many-to-many approach (Address <-> User) is both safe, and highly advantageous.
When a customer moves, the address rows in the Address table do NOT change. Instead, the new address is found in the Address table, and the customer etc. is linked to that record. If the new address isn't already in the table, it's added.
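A minimal sketch of that arrangement (table and column names are illustrative only), where a "move" is just re-pointing the link row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Customers (CustomerID INTEGER PRIMARY KEY, Name TEXT);

-- each distinct address is stored exactly once
CREATE TABLE Addresses (AddressID INTEGER PRIMARY KEY, Street TEXT, City TEXT,
                        UNIQUE (Street, City));

-- the many-to-many link between customers and addresses
CREATE TABLE CustomerAddresses (
    CustomerID INTEGER REFERENCES Customers(CustomerID),
    AddressID  INTEGER REFERENCES Addresses(AddressID),
    PRIMARY KEY (CustomerID, AddressID)
);

INSERT INTO Customers VALUES (1, 'Alice');
INSERT INTO Addresses VALUES (100, '1 Old St', 'Springfield'), (101, '2 New Ave', 'Springfield');
INSERT INTO CustomerAddresses VALUES (1, 100);
""")

# When Alice moves, the address rows themselves do not change; we simply
# re-point her link to the other (possibly pre-existing) address record.
conn.execute("UPDATE CustomerAddresses SET AddressID = 101 WHERE CustomerID = 1 AND AddressID = 100")
conn.commit()
```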
So do address records themselves ever change? Yes, in cases like these:
it turns out that the address has a typo
US postal service changes the street name
These are the very situations where putting all addresses in one table without repetition pays off; any other arrangement would require annoying and repetitive data entry.
Of course, if the database is abused, then it would be safer to avoid the many-to-many relationship. But by that token, if the database is in bad hands, it's better to just print everything out, store it in a file cabinet, and verify every transaction against the paper copy. So "protection against misuse" is not a good design principle, in my opinion.