CLSQL seems to support connection pooling, since the connect function has a :pool keyword argument, and the CLiki notes that it is thread-safe when using with-database. I can't find an example of this being used online, and I'm really not sure I'm using it right.
Currently I do something like this:
(defvar connection-string '("localhost" "database" "user" "password"))

(loop repeat 4 do (clsql:connect connection-string :pool t :database-type :mysql))

(defun called-on-separate-thread (the-query)
  (clsql:with-database (db connection-string :pool t :database-type :mysql)
    (clsql:query the-query :database db)))
but only 2 of the 4 database connections ever get used. I've been running my application for about a week, and it seems to be thread-safe as the CLiki suggested, but I'm not sure I could prove it, and I'm confused as to why it only uses some of my connections when it should be selecting them randomly from the pool.
How do you correctly use connection pools in clsql?
This is the description of the :pool keyword argument of clsql:connect, taken from https://www.quicklisp.org/beta/UNOFFICIAL/docs/clsql/doc/connect.html:
A boolean flag. If T, acquire connection from a pool of open connections. If the pool is empty, a new connection is created. The default is NIL.
On https://quickref.common-lisp.net/clsql.html, it is said that:
If POOL is t the connection will be taken from the general pool, if POOL is a CONN-POOL object the connection will be taken from this pool.
I guess that means that when you do
(loop repeat 4 do (clsql:connect connection-string :pool t :database-type :mysql))
only the first call actually creates a new connection; the second, third, and fourth calls to clsql:connect merely return the connection created on the first iteration, which went into the "general pool".
Though I didn't test it, I suppose that if you pass nil to the :pool argument, all four connections will actually be established.
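If that is the case, the pool is meant to be filled on demand rather than pre-filled with a loop. Here is a minimal, untested sketch of that usage (run-query-pooled is my name; everything else follows the question's code and the documentation quoted above):

(defvar connection-string '("localhost" "database" "user" "password"))

(defun run-query-pooled (the-query)
  ;; Checks a connection out of the general pool, creating one only
  ;; when the pool is empty (per the docs quoted above), and should
  ;; return it to the pool when the body exits.
  (clsql:with-database (db connection-string :pool t :database-type :mysql)
    (clsql:query the-query :database db)))

Each thread that calls run-query-pooled either reuses an idle pooled connection or creates a fresh one, so the pool only grows as large as the peak number of concurrent queries.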
I have set up a TCP sampler under a Thread Group in JMeter. The data is picked from a CSV file. The first line of data is for authentication and all subsequent rows are the actual parameter data, something like below:
AAAAAAA21
BBBBBBBCCCCCCCDDDDDDD
BBBBBBBCCCCCCCDDDDDDD
BBBBBBBCCCCCCCDDDDDDD
What I want is that if the thread group is run continuously with 10 threads, for example, the first thread gets the first line of data, makes the connection with the server, and authenticates. All subsequent threads use the same connection (instead of creating a new connection every time) and simply send data to the server. The reason for doing this is that the data simulates a device which sends the first packet to authenticate and create the connection, and all subsequent packets send data on that same connection. I want to simulate this device behavior using JMeter.
The limitation I find is that JMeter creates a new connection for every thread, and the connection gets closed when the thread exits. There seems to be no way to share the connection between all threads in the Thread Group, or maybe there is a way which I am not aware of.
I am looking for ways in which I can test this use case.
Unfortunately there is no possibility to share a connection between different threads, as each thread represents a separate virtual user, and virtual users don't know anything about each other. Moreover, if you try to share a connection between different threads, only one will be able to use the connection at a time (otherwise you will run into the situation where several threads are concurrently writing into the same connection, resulting in corrupt data).
So you can use 1 connection per virtual user (i.e. you will have 10 connections in total):
Add a Loop Controller to your Thread Group and either tick the Forever box or set Loop Count to -1 - this way the child sampler(s) will run forever.
Add a TCP Sampler as a child of the Loop Controller and tick the Re-use connection box.
See the How to Load Test TCP Protocol Services with JMeter article for more information.
I have a WebAPI service using ODP.NET to make connections to several Oracle databases. Normally the web service is hit several times a second and never has long periods of inactivity. In our test site, however, we did not use it for 2-3 days. This morning, we hit the service and got "connection request timeout" exceptions from ODP.NET, suggesting that the connection pool was out of available connections. We are closing the connections after use. The service was working fine before that period, but today the very first query got the timeout exception. Our app pool in IIS is configured to never reset.
My question then is, what can cause the connection pool to fill with bad connections after a period of inactivity, where these connections are not cleaned up in the usual 3-minute cycle? It only happened to 2 of our 3 databases, and Validate Connection=true is set for all of them.
EDIT
So after talking to the DBA: there is some difference between a connection/session being killed manually or by timeout and the database server severing the TCP connections. In this case, the TCP connections were severed as part of a regular backup (the reason why is not important here). I guess this happens when the whole database server goes offline at once. The basis of the question still applies, I think: why is ODP.NET unable to clean up severed connections over time? There is a performance counter that refers to "Stasis" connections; could those connections be stuck in that state? I would think that it should be able to see that a connection is no longer active (Validate Connection=true), kill it, and not return it to the pool.
Granted, this problem can be solved by just resetting the app pool every time the database goes down. I would still like to configure ODP.NET connection pooling to be more fault tolerant.
I have run into this same issue, and the only solution I have found is to use the Connection Lifetime connection string parameter in conjunction with Validate Connection.
In my particular case, the connection timeout was set at the server, and the connections in the pool would time out but not be removed from the pool, resulting in errors.
Setting both the Connection Lifetime and the Validate Connection parameters has resolved the issue.
Make sure the Connection Lifetime value that you choose is less than the server connection inactivity timeout.
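For illustration, here is a sketch of what that combination looks like in an ODP.NET connection string; the credentials, data source, and the 120-second lifetime are placeholders, so pick a lifetime below your server's inactivity timeout:

using Oracle.ManagedDataAccess.Client; // or Oracle.DataAccess.Client for unmanaged ODP.NET

var connString =
    "User Id=scott;Password=tiger;Data Source=MyDb;" +
    "Pooling=true;Validate Connection=true;Connection Lifetime=120;";

using (var conn = new OracleConnection(connString))
{
    conn.Open();
    // On Close, a connection older than 120 seconds is destroyed
    // instead of being returned to the pool, and Validate Connection
    // verifies a pooled connection is still alive before handing it out.
}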
The recommended solution is to use ODP.NET Fast Connection Failover (FCF). FCF automatically removes invalid connections from the pool, so you don't need to use Validate Connection or Connection Lifetime, or clear the pool.
To use FCF, set "HA events=true", use connection pooling, and have your DBA set up Fast Application Notification (FAN) on the server side. FAN is what alerts the ODP.NET pool when a DB service or node goes down or is rebooted. Upon receiving the message, ODP.NET knows which connections to remove from the pool and removes them, leaving all other valid connections untouched.
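As a sketch (placeholder credentials and data source again), the connection string then needs only pooling plus the HA flag; the FAN setup itself happens on the server side:

var fcfConnString =
    "User Id=scott;Password=tiger;Data Source=MyDb;" +
    "Pooling=true;HA Events=true;"; // pool is purged automatically on FAN down events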
Something else is going on here. Min Pool Size and some of the other settings help when connections are severed by things like DBA-configured idle timeouts and firewall TCP idle timeouts; 'connection request timeout' occurs when creating a new connection.
This could be a simple network problem. There could be something interfering with DNS resolution of the servers. Another cause is not having fully qualified entries in tnsnames. I've been bitten by the latter a couple of times.
The other issue is the one you've already recognized - full pool.
Double check that you don't have a connection leak somewhere. A missing .Close is one thing, but if you're not using a 'using' statement, a try/finally is required, as an unhandled exception could be thrown prior to the .Close.
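A minimal sketch of both leak-proof patterns, reusing the OracleConnection and connString names from the earlier sketch:

using (var conn = new OracleConnection(connString))
{
    conn.Open();
    // ... work ...
} // Dispose (and therefore Close) runs even if an exception is thrown

// Equivalent without 'using':
var conn2 = new OracleConnection(connString);
try
{
    conn2.Open();
    // ... work ...
}
finally
{
    conn2.Close(); // returns the connection to the pool even on exceptions
}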
I would use perfmon to monitor some of the connection statistics to start: NumberOfPooledConnections, NumberOfActiveConnections, etc.
I don't get one thing in RMI. It's a bit confusing actually.
On the client side, we have the business interface (Hello.class), the client code (HelloClient.class), and the remote stub (probably Hello_stub.class); on the server side, we have the server code (HelloImpl.class), the business interface (Hello.class), and the skeleton.
From Java 5 onwards, we don't create stubs, but they are still in the picture, I believe.
So, how does the communication happen?
Does the client call a method on Hello.class, which then calls Hello_stub.class for all network operations? Does Hello_stub.class call the skeleton, which then calls Hello.class and then calls methods on HelloImpl.class?
I am a bit confused after reading Head First EJB :). I would be glad if someone clarified it.
When the stub's method is called:
It gets a TCP connection to its target out of the client connection pool, or creates one if there isn't a pooled connection.
Bundles up the call and the arguments into a serializable object.
Writes the object to the connection along with some other stuff like a JRMP protocol header and a remote objectID.
Reads the reply object from the connection.
Returns the connection to the pool, where it gets closed after a certain idle time.
If the reply object is an exception, throws it.
Otherwise returns the reply object as the method result.
At the server, a thread sits on the listening socket, accepting connections, creating threads, and dispatching incoming remote calls to the correct remote object via the specified object ID.
This is done via reflection. RMI skeletons haven't been used since 1998, except in the case of stubs you deliberately generate with rmic -v1.1, but the principle is the same either way.
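To make that concrete, here is a minimal, hedged RMI sketch (class names follow the question; port 1099 is the registry default). The client codes only against the Hello interface; the stub returned by the lookup performs steps 1-7 above on every call.

import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// The business interface shared by client and server.
interface Hello extends Remote {
    String sayHello(String name) throws RemoteException;
}

// Server-side implementation; constructing a UnicastRemoteObject
// exports it, which is what creates the stub at runtime.
class HelloImpl extends UnicastRemoteObject implements Hello {
    HelloImpl() throws RemoteException { super(); }
    public String sayHello(String name) throws RemoteException {
        return "Hello, " + name;
    }
}

class Server {
    public static void main(String[] args) throws Exception {
        Registry reg = LocateRegistry.createRegistry(1099);
        reg.rebind("hello", new HelloImpl()); // binds the stub, not the impl
    }
}

class Client {
    public static void main(String[] args) throws Exception {
        Registry reg = LocateRegistry.getRegistry("localhost", 1099);
        Hello h = (Hello) reg.lookup("hello"); // the lookup returns the stub
        System.out.println(h.sayHello("world")); // steps 1-7 happen in here
    }
}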
First, I would like to apologize: I'm giving a lot of information in order to make the problem as clear as possible. Please let me know if anything still needs clarifying.
(Running Erlang R13B04, kernel 2.6.18-194, CentOS 5.5)
I have a very strange problem. I have the following code to listen and process sockets:
% Opts used to make listen socket
-define(TCP_OPTS, [binary, {packet, raw}, {nodelay, true}, {reuseaddr, true},
                   {active, false}, {keepalive, true}]).

% Acceptor loop which spawns off sock processors when connections come in
accept_loop(Listen) ->
    case gen_tcp:accept(Listen) of
        {ok, Socket} ->
            Pid = spawn(fun() -> ?MODULE:process_sock(Socket) end),
            gen_tcp:controlling_process(Socket, Pid);
        {error, _} -> do_nothing
    end,
    ?MODULE:accept_loop(Listen).

% Probably not relevant
process_sock(Sock) ->
    case inet:peername(Sock) of
        {ok, {Ip, _Port}} ->
            case Ip of
                {172,16,_,_} -> Auth = true;
                _ -> Auth = lists:member(Ip, ?PUB_IPS)
            end,
            ?MODULE:process_sock_loop(Sock, Auth);
        _ -> gen_tcp:close(Sock)
    end.

process_sock_loop(Sock, Auth) ->
    try inet:setopts(Sock, [{active, once}]) of
        ok ->
            receive
                {tcp_closed, _} ->
                    ?MODULE:prepare_for_death(Sock, []);
                {tcp_error, _, etimedout} ->
                    ?MODULE:prepare_for_death(Sock, []);
                % Not getting here
                {tcp, Sock, Data} ->
                    ?MODULE:do_stuff(Sock, Data);
                _ ->
                    ?MODULE:process_sock_loop(Sock, Auth)
            after 60000 ->
                ?MODULE:process_sock_loop(Sock, Auth)
            end;
        {error, _} ->
            ?MODULE:prepare_for_death(Sock, [])
    catch
        _:_ ->
            ?MODULE:prepare_for_death(Sock, [])
    end.
This whole setup normally works wonderfully, and has been working for the past few months. The server operates as a message-passing server with long-held TCP connections, and it holds on average about 100k connections. However, now we're trying to use the server more heavily. We're making two long-held connections (in the future probably more) to the Erlang server and issuing a few hundred commands every second on each of those connections. Each of those commands, in the common case, spawns off a new thread which will probably make some kind of read from mnesia and send some messages based on that.
The strangeness comes when we try to test those two command connections. When we turn on the stream of commands, any new connection has about a 50% chance of hanging. For instance, using netcat, if I were to connect and send along the string "blahblahblah", the server should immediately return an error. In doing this it won't make any calls outside the thread (since all it's doing is trying to parse the command, which will fail because blahblahblah isn't a command). But about 50% of the time (when the two command connections are running), typing in blahblahblah results in the server just sitting there for 60 seconds before returning that error.
In trying to debug this I pulled up Wireshark. The TCP handshake always happens immediately, and when the first packet from the client (netcat) is sent, it is acked immediately, telling me that the TCP stack of the kernel isn't the bottleneck. My only guess is that the problem lies in the process_sock_loop function. It has a receive which will go back to the top of the function after 60 seconds and try again to get more from the socket. My best guess is that the following is happening:
Connection is made, thread moves on to process_sock_loop
{active,once} is set
Thread receives, but doesn't get data even though it's there
After 60 seconds thread goes back to the top of process_sock_loop
{active, once} is set again
This time the data comes through, things proceed as normal
Why this would be I have no idea, and when we turn those two command connections off everything goes back to normal and the problem goes away.
Any ideas?
It's likely that your first call to set {active,once} is failing due to a race condition between your call to spawn and your call to controlling_process.
It will be intermittent, likely based on host load.
When doing this, I'd normally spawn a function that blocks on something like:
{take,Sock}
and then call your loop on the sock, setting {active,once}.
So you'd change the acceptor to spawn, set controlling_process, then Pid ! {take,Sock} - something to that effect; see the sketch after the note below.
Note: I don't know if the {active,once} call actually throws when you aren't the controlling process; if it doesn't, then what I just said makes sense.
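A minimal, untested sketch of that handoff (wait_for_sock is my name; everything else follows the question's code):

accept_loop(Listen) ->
    case gen_tcp:accept(Listen) of
        {ok, Socket} ->
            %% Spawn a process that blocks until it owns the socket...
            Pid = spawn(fun() -> wait_for_sock() end),
            %% ...make it the controlling process...
            ok = gen_tcp:controlling_process(Socket, Pid),
            %% ...and only hand the socket over once that has succeeded,
            %% so the new process never calls setopts before it owns it.
            Pid ! {take, Socket};
        {error, _} -> do_nothing
    end,
    ?MODULE:accept_loop(Listen).

wait_for_sock() ->
    receive
        {take, Sock} ->
            %% Guaranteed to be the controlling process here, so
            %% {active,once} and the {tcp,...} messages behave.
            ?MODULE:process_sock(Sock)
    end.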
1) The book I’m reading argues that connections shouldn’t be kept open between client requests, since they are a finite resource.
I realize that the max pool size can quickly be reached, and thus any further attempts to open a connection will be queued until a connection becomes available; for that reason it is imperative that we release the connection as soon as possible.
But assuming all requests open a connection to the same DB, I’m not sure how keeping a connection open between two client requests would be any less efficient than having each request first acquire a connection from the connection pool and later return that object to the pool.
2) The book also recommends that when database code is encapsulated in a dedicated data access class, a method M that opens a database connection should also close that connection.
a) I assume one reason why M should also close it is that if the method M opening the connection doesn’t also close it, but instead the connection object is used inside several methods, then it’s more likely that a programmer will forget to close it.
b) Are there any other reasons why a method opening the connection should also close it?
thanx
EDIT:
If during the processing of a web request you don’t close the connection, then the same connection can’t be used “directly” by the next request; instead it first needs to be returned to the connection pool, and only then can it be reused? If that is the case, I can see how we don’t gain anything by leaving the connection open between requests?!
E.g. transaction A reads a row from a table, then the user thinks for a long time before modifying the data. During that time, transaction B reads and then updates the same row: transaction A now has stale data! If the user finally modifies the data and transaction A commits it, the modifications made by transaction B may get lost entirely: this is called a lost update.
If my above assumption is correct, then how can user U, who initiated transaction A (and thus established database connection 1) during the first request, get a reference to the same database connection 1 (and thus a "reference" to transaction A) during the second request (aka postback)? Namely, wasn't the connection object returned to the connection pool when the server finished processing user U's first request?
Yes - you never know how long the connection will be open, as the request is initiated by the user... also, what happens if the request gets lost (the user closes the browser)? It's too easy to end up with connections open indefinitely... and it's hard to have a cleanup process if you do that.
HTH.
The points in 2) can be solved by wrapping the opening and closing of the connection in a smart manager object. Methods using the database would call this manager to get a connection and to give it back. The manager would count how many methods are using the connection, how long ago it was used, etc., and open/close connections accordingly.
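A minimal sketch of that kind of manager (the class and member names are mine, SqlConnection is an assumption, and a real version would need locking to be thread-safe):

using System.Data.SqlClient;

class ConnectionManager
{
    private readonly string _connString;
    private SqlConnection _conn;
    private int _users;

    public ConnectionManager(string connString) { _connString = connString; }

    // Hand out the shared connection, opening it on first use.
    public SqlConnection Acquire()
    {
        if (_conn == null)
        {
            _conn = new SqlConnection(_connString);
            _conn.Open();
        }
        _users++;
        return _conn;
    }

    // Close the connection once the last user has given it back.
    public void Release()
    {
        if (--_users == 0)
        {
            _conn.Close();
            _conn = null;
        }
    }
}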
Yeah, with ASP.NET and SQL connections there are very few scenarios where it makes sense not to use the connection pool. One of the most common scenarios where connection pooling causes issues is when you change contexts (from a data access/authorization perspective). You can almost think of the connection pool as a connection load balancer for you; it is going to be more efficient than anything you could code up yourself, until you learn a lot and then write a lot of code.
A couple of links on the topic that explain it much better than I could:
first link
second link