As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 10 years ago.
I'm currently developing an ASP.NET SessionState custom provider that is backed by Redis using Booksleeve. Redis seemed like a perfect fit for SessionState (if you must use it) because:
Redis can persist data durably like an RDBMS, but it is much faster.
A Key/Value datastore better fits the interface of SessionState.
Since data is not stored in-process (as with the default Session provider), SessionState can survive web server restarts, crashes, etc.
Redis is easy to shard horizontally if that becomes a need.
So, I'm wondering if this will be useful to anyone since we (my company) are considering open sourcing it on GitHub. Thoughts?
UPDATE:
I did release a first version of this yesterday: https://github.com/angieslist/AL-Redis/blob/master/AngiesList.Redis/RedisSessionStateStore.cs
I've created a Redis-based SessionStateStoreProvider that can be found on GitHub, using ServiceStack.Redis as the client (rather than Booksleeve).
It can be installed via NuGet with Install-Package Harbour.RedisSessionStateStore.
I found a few quirks with #NathanD's approach. In my implementation, locks are stored with the session value rather than in a separate key (fewer round trips to Redis). Additionally, because it uses ServiceStack.Redis, it can use pooled connections.
Finally, it's tested. This was my biggest turn-off with #NathanD's approach: there was no way of actually knowing whether it worked without running through every use case manually.
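The lock-with-the-value design can be sketched like this (a hypothetical Python model, not the actual C# provider code; the field names and serialization are invented for illustration). The point is that the lock metadata travels with the session data, so acquiring the lock and reading the session is one round trip to the store instead of two:

```python
import json
import uuid
from datetime import datetime, timezone

class SessionRecord:
    """Hypothetical session record that carries its own lock metadata,
    so the whole record is stored under a single key."""

    def __init__(self, data=None):
        self.data = data or {}
        self.lock_id = None       # None means unlocked
        self.locked_at = None

    def try_lock(self):
        """Acquire the lock if free; return a lock id, or None if held."""
        if self.lock_id is not None:
            return None
        self.lock_id = uuid.uuid4().hex
        self.locked_at = datetime.now(timezone.utc).isoformat()
        return self.lock_id

    def release(self, lock_id):
        """Release only if the caller holds the current lock."""
        if self.lock_id == lock_id:
            self.lock_id = None
            self.locked_at = None
            return True
        return False

    def serialize(self):
        # The whole record (lock + data) would be written as one value.
        return json.dumps({"lock_id": self.lock_id,
                           "locked_at": self.locked_at,
                           "data": self.data})
```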
Not only would it be useful, but I strongly suggest you look closely at Redis's Hash datatype if you plan to go down this road. In our application the session is basically a small collection of keys and values (i.e. {user_id: 7, default_timezone: 'America/Chicago', ...}), with the entire user session stored in a single Redis hash.
Not only does using Hash simplify mapping the data if your session data is similar, but Redis uses space much more efficiently with this approach.
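The mapping to a hash is straightforward; here is a sketch of the idea in Python (no live Redis involved: the functions mirror what HSET would store and what HGETALL would return, and the JSON encoding of values is an assumption, since Redis hash values are plain strings):

```python
import json

def to_hash_fields(session):
    """Flatten a session dict into the field -> string-value pairs that
    HSET session:<id> field value ... would store."""
    return {field: json.dumps(value) for field, value in session.items()}

def from_hash_fields(fields):
    """Rebuild the session dict from HGETALL output."""
    return {field: json.loads(value) for field, value in fields.items()}

session = {"user_id": 7, "default_timezone": "America/Chicago"}
stored = to_hash_fields(session)        # what HSET would write
restored = from_hash_fields(stored)     # what HGETALL would yield
```

Reading the whole session back is then a single HGETALL call, and individual fields can be updated without rewriting the entire blob.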
Our app is in Ruby, but you might still find some use in what we wrote.
I'm trying to figure out an issue I'm having with Sitecore. I'm wondering if my issue is basically a problem with their reliance on Session.Abandon():
For performance reasons Sitecore only writes contact data to xDB (this is mongo) when the session ends.
This logic seems somewhat flawed (unless I misunderstand how sessions are managed in ASP.NET).
At what point (without explicitly calling Session.Abandon()) is the session flushed in this model? I.e. when will the Session_End event be triggered?
Can you guarantee that the logic will always be called, or can sessions be terminated without triggering an Abandon event? For example, when the app pool is recycled.
I'm trying to figure this out as it would explain something I'm experiencing, where the data is fine in session but is only written intermittently into MongoDB.
I think the strategy of building up the data in session and then flushing it to MongoDB is a good fit for xDB.
xDB is designed to be high volume, so it makes sense for the data to be aggregated rather than constantly written into a database table. This is the way DMS worked previously, and it doesn't scale very well.
The session end is, in my opinion, pretty reliable, and Sitecore gives you various options for persisting session state (InProc, MongoDB, SQL Server); MongoDB and SQL Server are recommended for production environments. You can write contact data directly to MongoDB by using the Contact Repository API, but for live capturing of data you should use the Tracker API. When using the Tracker API, as far as I am aware, the only way to get data into MongoDB is to flush the session.
If you need to flush the data to xDb for testing purposes then Session.Abandon() will work. I have a module here which you can use for creating contacts and then flushing the session, so you can see how reliable the session abandon is by checking in MongoDb.
https://marketplace.sitecore.net/en/Modules/X/xDB_Contact_Creator.aspx
I'm reading about SOA and the four tenets that are required to make an SOA application. I have tried different sources, but the explanations are confusing. I'm searching for something that is a bit less abstract. Is my interpretation correct?
The four tenets are:
Services have explicit boundaries
Services are autonomous
Services share schema and contract, not class
Services interoperate based on policy
My interpretation is:
The methods that a client may use shall be easy to use and well defined.
Services shall not be dependent on others. A change to one service shall not affect another in any way.
A schema represents the data that will be sent; a contract contains the defined methods of a service. To make a system loosely coupled, you share schema and contract instead of classes and objects.
A policy to use a service may be that a particular type of binding is required to use it. Anyone who wants to use this service must connect to it with this type of binding.
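Tenet 3 ("share schema and contract, not class") can be made concrete with a small sketch (Python; the schema and field names are invented for illustration). Both sides agree on field names and types rather than linking against a shared class, and the consumer validates the wire data against that schema:

```python
import json

# The shared artifact is this schema (names and types), not a class.
ORDER_SCHEMA = {"order_id": int, "customer": str, "total": float}

def validate(message, schema):
    """Check that a decoded message matches the shared schema exactly."""
    if set(message) != set(schema):
        return False
    return all(isinstance(message[field], typ)
               for field, typ in schema.items())

# Producer serializes to a neutral wire format...
wire = json.dumps({"order_id": 42, "customer": "acme", "total": 99.5})
# ...and the consumer rebuilds and checks it without any shared class.
decoded = json.loads(wire)
```

Because only the schema crosses the boundary, either side can change its internal classes freely, which is exactly the loose coupling the tenet is after.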
I got an answer at programmers.stackexchange.com. I'm reposting the answer from GlenH7:
You're pretty close with your abstractions, yes.
Yes. Well encapsulated is another way of looking at this.
Yes, but... services can rely on other services for functionality, especially if that avoids duplication of code. The nuance here is in the definition of dependent, I guess.
Yes. Services perform a contract for a schema. The user provides XYZ data and the service will provide ABC action per the contract.
I view services as operating against a business policy. Business policy shouldn't get to the level of specifying binding. From the implementing-business-policy point of view, you can see where some services would be dependent upon other services in order to fulfill their contract without duplicating code. At a broader level, business policy is just a bunch of rules; rules that hopefully interoperate nicely with each other. But just like human resources, business rules have a nasty habit of not getting along with each other. Services are the instantiation of those business rules. From a lower-level point of view, if the caller doesn't use the advertised binding(s), then the caller will (obviously) be unable to utilize the service. So while your statement is correct, it's a bit of a tautology which doesn't enhance your understanding as much.
A little background: I currently make use of Memcached Providers for managing session state in my ASP.NET application. It provides facilities for using SQL Server as a fallback storage mechanism (when sessions need to be purged from the memcached cache). I'd like to look at creating a provider for RavenDB as it would be much more performant for this sort of task.
My question is, has anyone implemented such a thing? (or something similar?) - I'd hate to re-invent the wheel. Google does not yield any helpful results (other than my question about this in the RavenDB group itself), so I thought I'd take this question directly to the Stack Overflow community.
I was also seeking a RavenDB session-state store, and my search also failed.
So I created one:
github.com/mjrichardson/RavenDbSessionStateStoreProvider
Also available via a NuGet package.
Not as far as I know. RavenDB is a pretty active project, while Memcached has been practically dead for two years and has remained 32-bit. You might be better off just running RavenDB under IIS.
OK, so code-wise it doesn't get smaller than this - single file: http://sourceforge.net/projects/aspnetsessmysql/files/MySqlSessionStateStore.cs/download
RavenDB provides an expiration bundle, which means that documents are deleted after a specified lifetime. This is ideal for use as a session store and means that your entire aggregate root will be retrieved from RavenDB, giving much cleaner code:
RavenDb Expiration Bundle
I am looking for an S3 alternative which relies on a RESTful API, so that I can simply insert links such as http://datastore1.example.com/ID and they are directly downloadable.
I have looked at Riak and Bitcache. They both seem very nice (http://bitcache.org/api/rest), but they have one problem: I want to be the only one who can upload data. Otherwise, anyone could use our datastore by sending a PUT request.
Is there a way to configure Riak so that everyone can GET, but no one can PUT or DELETE files except me? Are there other services you can recommend?
Also adding Bounty :)
Requirements:
RESTful API
Guests GET only
Runs on Debian
Very nice to have:
auto distributed
EDIT: To clarify, I don't want any connection to S3. I have great servers just lying around with hard drives and a very good network connection (3 Gbps), so I don't need S3.
The Riak authors recommend putting an HTTP proxy in front of Riak in order to provide access control. You can choose any proxy server you like (such as nginx or Apache) and any access control policy you like (such as authorization based on IP addresses, HTTP basic auth, or cookies, assuming your proxy server can handle it). For example, in nginx, you might specify limit_except (likewise LimitExcept in Apache).
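A minimal nginx sketch of that setup might look like this (the upstream port is Riak's default HTTP port; the allowed address is a placeholder for your upload host):

```nginx
location / {
    proxy_pass http://127.0.0.1:8098;

    # Everyone may GET (and HEAD); any other method such as PUT or
    # DELETE is only allowed from the trusted address below.
    limit_except GET {
        allow 192.0.2.10;
        deny  all;
    }
}
```

In nginx, allowing GET in limit_except implicitly allows HEAD as well, so guests get read-only access while writes stay restricted.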
Alternatively, you could also add access control to Riak directly. It's based on Webmachine, so one approach would be to implement is_authorized.
Based on the information that you have given, I would suggest Eucalyptus (http://open.eucalyptus.com/). They do have an S3-compatible storage system.
The reliable, distributed object store RADOS, which is part of the ceph file system, provides an S3 gateway.
We used the Eucalyptus storage system, Walrus, but we had reliability problems.
If you are looking for a distributed file system, why don't you try Hadoop HDFS?
http://hadoop.apache.org/common/docs/r0.17.0/hdfs_design.html
There is a Java API available:
http://hadoop.apache.org/common/docs/r0.20.2/api/org/apache/hadoop/fs/FileSystem.html
Currently, security is an issue - at least if you have access to a terminal:
http://developer.yahoo.com/hadoop/tutorial/module2.html#perms
But you could deploy HDFS, put an application server (using the Java API) in front of it (GlassFish), and use Jersey to build the RESTful API:
http://jersey.java.net/
If you're interested in building such a thing, please let me know, for I may be building something similar quite soon.
You can use the Cloudera Hadoop Distribution to make life a bit easier:
http://www.cloudera.com/hadoop/
Greetz,
J.
I guess that you should ask your question on serverfault.com, as it's more system-related.
Anyway, I can suggest MogileFS, which scales very well: http://danga.com/mogilefs/
WebDAV is about as RESTful as it gets and there are many implementations that scale to various uses. In any case, if it is REST and it is HTTP then whatever authentication scheme that the server supports should allow you to control who can upload.
You can develop it yourself as a web app or as part of your existing application. It will consume HTTP requests, retrieve their URI component, convert it to an S3 object name, and use getObject() to get its content (using one of the available S3 SDKs, for example the AWS Java SDK).
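The URL-to-object-name mapping might look like this (a Python sketch; the bucket layout and key prefix are assumptions, and the actual fetch is only indicated in a comment using boto3's get_object):

```python
from urllib.parse import urlparse

def s3_key_for(url, prefix="files/"):
    """Map an incoming URL such as http://datastore1.example.com/ID
    to an S3 object name. The prefix is an invented bucket layout."""
    object_id = urlparse(url).path.lstrip("/")
    return prefix + object_id

key = s3_key_for("http://datastore1.example.com/abc123")

# With a real client you would then fetch the content, e.g. with boto3:
#   body = s3.get_object(Bucket="my-bucket", Key=key)["Body"].read()
```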
You can try a hosted solution - s3auth.com (I'm a developer). It's an open source project, and you can see how this mechanism is implemented internally in one of its core classes. The HTTP request is processed by the service and then re-translated to Amazon S3's internal authentication scheme.
I'm in the market for a good open source network based Pub/Sub (observer pattern) library. I haven't found any I like:
JMS - tied to Java, treats message contents as dumb binary blobs
NDDS - $$, use of IDL
CORBA/ICE - Pub/Sub is built on top of RPC; the CORBA API is non-intuitive
JBOSS/ESB - not too familiar with
It would be nice if such a package could do the following:
Network based
Aware of payload data; users should not have to worry about endian/serialization issues
Multiple language support (C++, ruby, Java, python would be nice)
No auto-generated code (no IDLs!)
Intuitive subscription/topic management
For fun, I've created my own. Thoughts?
You might want to look into RabbitMQ.
As pointed out by an earlier post in this thread, one of your options is OpenSplice DDS, which is an Open Source implementation of the OMG DDS standard (the same standard implemented by NDDS).
The main advantages of OpenSplice DDS over the other middleware you are considering can be summarized as:
Performance
Rich support for QoS (Persistence, Fault-Tolerance, Timeliness, etc.)
Data Centricity (e.g. possibility of querying and filtering data streams)
Something that I'd like to understand is what your issues with IDL are. DDS uses IDL as a language-independent way of specifying user data types. However, DDS is not limited to IDL; you could be using XML if you prefer. The advantage of specifying your data types, and decoupling their representation from a specific programming language, is that the middleware can:
(1) take away from you the burden of serializing data,
(2) generate very time/space efficient serialization,
(3) ensure end-to-end type safety,
(4) allow content filtering on the whole data type (not just the header like in JMS), and
(5) enable on-the-wire interoperability across programming languages (e.g. Java, C/C++, C#, etc.)
Depending on the system or application you are designing, some of the properties above might not be useful/relevant. In that case, you can simply define one generic "DDS type" which acts as the holder of your serialized data.
If you think about JMS, it provides you with five different message types you can use to send your data. With DDS you can do the same, but you also have the flexibility to define exactly the topic types you need.
Finally, you might want to check out this blog entry on Scala and DDS for a longer discussion on why types and static-typing are good especially in distributed systems.
-AC
We use the RTI DDS implementation. It costs $$, but it supports many quality of service parameters.
There is a free DDS implementation called OpenDDS, but I've not used it.
I don't see how you can get around the need to predefine your data types if the target language is statically typed.
Look a bit deeper into the various JMS implementations.
Most of them are not Java-only; they provide client libraries for other languages too.
Sun's OpenMQ has at least a C++ interface, and Apache ActiveMQ provides client-side libraries for many common languages.
When it comes to message formats, they're usually decoupled from the message middleware itself. You could define your own message format. You could define your own XML schema and send XML messages. You could send BER-encoded ASN.1 using some third-party library if you want.
Or format and parse the data with a JSON library.
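For instance, a roll-your-own format might pair a fixed network-byte-order header with a JSON body (a Python sketch; the layout is invented for illustration, not any broker's actual wire format). Using big-endian ("network order") for the header sidesteps the endian issues mentioned earlier, and JSON keeps the payload language-neutral:

```python
import json
import struct

def pack_message(payload):
    """Encode a payload as: 4-byte big-endian length + UTF-8 JSON body."""
    body = json.dumps(payload).encode("utf-8")
    # "!I" = unsigned 32-bit integer, big-endian regardless of host CPU.
    return struct.pack("!I", len(body)) + body

def unpack_message(raw):
    """Decode a message produced by pack_message."""
    (length,) = struct.unpack("!I", raw[:4])
    return json.loads(raw[4:4 + length].decode("utf-8"))

msg = {"topic": "orders", "qty": 3}
wire = pack_message(msg)
```

The length prefix also makes the format self-delimiting, so messages can be framed on a plain TCP stream.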
You might be interested in the MUSCLE library (disclaimer: I wrote it, so I may be biased). I think it meets all of the criteria you specified.
https://public.msli.com/lcs/muscle/
Three I've used:
IBM MQ Series - too expensive, hard to work with.
Tibco Rendezvous - (renamed now to EMS?) Was very fast, used UDP, and could also be used with no central server. My favorite, but expensive and requires a maintenance fee.
ActiveMQ - I'm using this currently but finding it crashes frequently. Also, it requires some projects ported from Java, like Spring.NET. It works, but I can't recommend it due to stability issues.
Also used MSMQ in an attempt to build my own Pub/Sub, but since it doesn't handle it out of the box, you're stuck writing a considerable amount of code.
There is also OpenSplice DDS. This one is similar to RTI's DDS, except that it's LGPL!
Check it out:
IBM WebSphere MQ; the licence is not too expensive if you work at a corporate level.
You might take a look at PubSubHubbub. It's an extension to Atom/RSS to allow pub/sub through webhooks. The interface is HTTP and XML, so it's language-agnostic. It's gaining increasing adoption now that Google Reader, FriendFeed and FeedBurner are using it. The main use case is blogs and such, but of course you can have any sort of payload.
The only open source implementation I know of so far is this one for the Google AppEngine. They say support for self-hosting is coming.