I want to call Rollback() or Commit() after multiple processes.
There is no error, but Commit() does not update the DB.
Here is my example code:
public void startTransaction(){
    using(Ads_A_Connection = new AdsConnection(Ads_A_connection_string))
    using(Ads_B_Connection = new AdsConnection(Ads_B_connection_string))
    {
        Ads_A_Connection.Open();
        Ads_B_Connection.Open();
        AdsTransaction aTxn = Ads_A_Connection.BeginTransaction();
        AdsTransaction bTxn = Ads_B_Connection.BeginTransaction();
        try{
            string aResult = this.process1(Ads_A_Connection);
            this.process2(Ads_B_Connection, aResult);
            this.process3(Ads_A_Connection, Ads_B_Connection);
            aTxn.Commit();
            bTxn.Commit();
            // there is no error, but it couldn't commit.
        }catch(Exception e){
            aTxn.Rollback();
            bTxn.Rollback();
        }
    }
}
public string process1(AdsConnection conn){
    // Insert data
    return result;
}
public void process2(AdsConnection conn, string aResult){
    // update
}
public void process3(AdsConnection aConn, AdsConnection bConn){
    // delete
    // update
}
I guess it's because the calls are outside the using scope; when I tried putting all the code into
the startTransaction() method, it worked, but that looks too dirty.
How can I call Rollback() or Commit() after multiple (method) processes?
If anybody knows, please advise me.
Thanks!
[EDIT]
I just added a TransactionScope before the connections,
using (TransactionScope scope = new TransactionScope())
{
using(Ads_A_Connection = new AdsConnection(Ads_A_connection_string))
using(Ads_B_Connection = new AdsConnection(Ads_B_connection_string))
{
.
.
but it raises an error: "Error 5047: The transaction command was not in valid sequence."
I need a little more of a hint, please :)
To extend what Etch mentioned, there are several issues with manually managing transactions on your connections:
You need to pass the SQL connection around your methods
You need to remember to commit or roll back when you are finished
If you have more than one connection to manage under a transaction, you should really use DTC or XA to enlist the transactions into a distributed (two-phase) transaction.
TransactionScopes are supported with the Advantage Database Server, although you will need to enable the MSDTC service and possibly also enable XA compliance.
Note that I'm assuming the Advantage .NET client has some sort of connection pooling mechanism; this makes the cost of obtaining connections very lightweight.
Ultimately, this means that your code can be refactored to something like the following, which is easier to maintain:
private string Method1()
{
    using(var Ads_A_Connection = new AdsConnection(Ads_A_connection_string))
    {
        Ads_A_Connection.Open();
        return this.process1(Ads_A_Connection);
    } // Can logically 'close' the connection here, although it is actually now held by the transaction manager
}

private void Method2(string aResult)
{
    using(var Ads_B_Connection = new AdsConnection(Ads_B_connection_string))
    {
        Ads_B_Connection.Open();
        this.process2(Ads_B_Connection, aResult);
    } // Can logically 'close' the connection here, although it is actually now held by the transaction manager
}

public void MyServiceWhichNeedToBeTransactional(){
    using(TransactionScope ts = new TransactionScope()) { // NB : Watch isolation here. Recommend change to READ_COMMITTED
        try{
            string aResult = Method1();
            Method2(aResult);
            ts.Complete();
        }
        catch(Exception e){
            // Do logging etc. No need to roll back, as this happens by default if Complete() is not called
        }
    }
}
TransactionScope is your friend!
I'm quite new to the microservice world and particularly Vert.x. I want my verticle to start anyway, even if there is no database connection available (e.g. the database URL is missing from the configuration). I already managed to do this and my verticle starts.
The issue now is that I want my verticle to notice when the database connection is available again and connect to it. How can I do this?
I thought about creating another verticle, "DatabaseVerticle.java", which would send the current DB config on the event bus; my initial verticle would consume this message and check whether the config info is complete (reply with success) or still missing some data (reply with fail and make the DatabaseVerticle check again).
This might work (and might not), but it does not seem like the optimal solution to me.
I'd be very glad if someone could suggest a better solution. Thank you!
For your use case, I'd recommend using vertx-config. In particular, have a look at the "Listening to configuration changes" section of the Vert.x Config documentation.
You could create a config retriever and set a handler for changes:
ConfigRetrieverOptions options = new ConfigRetrieverOptions()
    .setScanPeriod(2000)
    .addStore(myConfigStore);

ConfigRetriever retriever = ConfigRetriever.create(vertx, options);

retriever.getConfig(json -> {
    // If DB config available, start the DB client
    // Otherwise set a "dbStarted" variable to false
});

retriever.listen(change -> {
    // If "dbStarted" is still set to false
    // Check the config and start the DB client if possible
    // Set "dbStarted" to true when done
});
The ideal way would be for some other service to tell your service about the database connection, either through the event bus or HTTP. What you can do is: when someone tries to access your database before the connection is made, just try a DB call, handle the exception, and return false. When you get a message on the event bus, consume it and save it in a config POJO. Now when someone tries to access your database, look at the config and, if it is available, make the connection.
Your consumer:
public void start(){
    EventBus eb = vertx.eventBus();
    eb.consumer("database", message -> {
        config.setConfig(message.body());
    });
}
Your DB client (Mongo for this example):
public class MongoService{
    private MongoClient client;
    public boolean isAvailable = false;

    MongoService(Vertx vertx){
        // getString() returns null when the key is missing, so test for that explicitly
        if(config().getString("connection") != null){
            client = MongoClient.createShared(vertx, config());
            isAvailable = true;
        }
    }
}
Not everything in Vert.x should be solved by another verticle.
In this case, you can use vertx.setPeriodic():
http://vertx.io/docs/vertx-core/java/#_don_t_call_us_we_ll_call_you
I assume you have some function that checks the DB for the first time.
Let's call it checkDB()
class PeriodicVerticle extends AbstractVerticle {
    private Long timerId;

    @Override
    public void start() {
        System.out.println("Started");
        // Should be called each time DB goes offline
        final Long timerId = this.vertx.setPeriodic(1000, (l) -> {
            final boolean result = checkDB();
            // Set some variable telling verticle that DB is back online
            if (result) {
                cancelTimer();
            }
        });
        setTimerId(timerId);
    }

    private void cancelTimer() {
        System.out.println("Cancelling");
        getVertx().cancelTimer(this.timerId);
    }

    private void setTimerId(final Long timerId) {
        this.timerId = timerId;
    }
}
Here I play a bit with timerId, since we cannot pass it to cancelTimer() right away. But otherwise, it's quite simple.
I want to check my database table periodically. So how can I create a web service for this, and how can I configure it?
Basically, what you need is something that is always running and hence can make periodic calls.
There are a number of ways to do it:
(Since this is ASP.NET) You can write a Windows Service and host it on your server; since the server is always running, this Windows Service can make requests to your web service, update the database, or whatever else you want.
You can use SQL Jobs to do it. You can call a web service from a job through an SSIS (SQL Server Integration Services) package. These packages are very robust in nature; they can do almost any DB activity you want them to, including web service requests.
And finally, you can use third-party tools such as Quartz.NET.
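To make the Quartz.NET option concrete, here is a minimal sketch, assuming Quartz.NET 3.x and a hypothetical CheckTableJob whose Execute body you would replace with your own table check:
using System;
using System.Threading.Tasks;
using Quartz;
using Quartz.Impl;

// Hypothetical job: put your table-checking query inside Execute().
public class CheckTableJob : IJob
{
    public Task Execute(IJobExecutionContext context)
    {
        Console.WriteLine("Checking the table...");
        return Task.CompletedTask;
    }
}

public static class SchedulerBootstrap
{
    public static async Task StartAsync()
    {
        // Build a scheduler and fire CheckTableJob every minute, forever.
        IScheduler scheduler = await new StdSchedulerFactory().GetScheduler();
        await scheduler.Start();

        IJobDetail job = JobBuilder.Create<CheckTableJob>().Build();
        ITrigger trigger = TriggerBuilder.Create()
            .StartNow()
            .WithSimpleSchedule(x => x.WithIntervalInMinutes(1).RepeatForever())
            .Build();

        await scheduler.ScheduleJob(job, trigger);
    }
}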
References:
this is how you can call a web service from a Windows Service.
this is how you can call a web service from an SSIS package.
this is how you can integrate an SSIS package into a SQL Job.
this is how you can create a Windows Service.
this is how you can create an SSIS package.
this is how you can get an answer/tutorial for almost anything.
Example:
The simplest of all of these would be a Windows Service. Making a Windows Service and hosting it on the machine (server) is very easy; use one of the given links (especially the last one). Usually, in a Windows Service, you do some activity in the OnStart event. You can place a timer inside OnStart and, upon each TimerTick(), request your web service.
Something like this:
using System;
using System.Configuration;
using System.ServiceProcess;

class Program : ServiceBase
{
    System.Timers.Timer timer;

    static void Main(string[] args)
    {
        ServiceBase.Run(new Program());
    }

    public Program()
    {
        this.ServiceName = "My Service";
    }

    protected override void OnStart(string[] args)
    {
        base.OnStart(args);
        InitializeTimer();
    }

    protected override void OnStop()
    {
        base.OnStop();
        //TODO: clean up any variables and stop any threads
    }

    protected void InitializeTimer()
    {
        try
        {
            if (timer == null)
            {
                timer = new System.Timers.Timer();
                timer.AutoReset = true;
                timer.Interval = 60000 * 1;
                // Hook up the handler before enabling the timer
                timer.Elapsed += timer_Elapsed;
                timer.Enabled = true;
            }
        }
        catch (Exception ex)
        {
            Utility.WriteLog("Exception InitializeTimer : " + ex.Message);
        }
    }

    protected void timer_Elapsed(object source, System.Timers.ElapsedEventArgs e)
    {
        TimerTick();
        timer.Interval = 60000 * Convert.ToDouble(ConfigurationManager.AppSettings["TimerInterval"]);
    }

    private void TimerTick()
    {
        try
        {
            DownloadFromFTPandValidate objDownLoadandValidate = new DownloadFromFTPandValidate();
            objDownLoadandValidate.ProcessMain();
        }
        catch (Exception ex)
        {
            Utility.WriteLog("Exception TimerTick : " + ex.Message);
        }
    }
}
Here, the class DownloadFromFTPandValidate wraps the DB activity. It should give you an idea.
You will need a job scheduler for periodic tasks. I can recommend a good one; check out this link: http://quartznet.sourceforge.net/
Why not use a trigger on your table that runs a stored procedure once data is modified; then use xp_cmdshell to access the command line from your stored procedure so you can run, for example, a batch file or whatever.
I am pretty new to NUnit (and automated testing in general). I have recently done some Ruby on Rails work and noticed that in my test suite, when I create objects (such as a new user) and commit them during the course of the suite, they are never actually committed to the database, so I can run the tests over and over without worrying that the user already exists.
I am now trying to accomplish the same thing in NUnit, but I am not quite sure how to go about it. Do I create a transaction in the SetUp and TearDown blocks? Thanks.
Why would you talk to the database during unit tests? This turns your unit tests into integration tests by default. Instead, create wrappers for all database communication, and stub/mock them during unit tests. Then you don't have to worry about database state before and after.
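As a rough sketch of what such a wrapper might look like (all names here are hypothetical, not from the original post):
using System.Collections.Generic;

public class User { public string Name; }

// Production code depends only on this interface, never on SqlConnection directly.
public interface IUserRepository
{
    void Save(User user);
}

public class UserService
{
    private readonly IUserRepository _repo;
    public UserService(IUserRepository repo) { _repo = repo; }
    public void Register(string name) { _repo.Save(new User { Name = name }); }
}

// In unit tests, this stub stands in for the database, so no real rows are written.
public class FakeUserRepository : IUserRepository
{
    public readonly List<User> Saved = new List<User>();
    public void Save(User user) { Saved.Add(user); }
}
A test can then construct UserService with a FakeUserRepository and assert against the Saved list, with no database in sight.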
Now, if you are not willing to do that level of refactoring: the problem with transactions is that you need an open connection. So, if your method under test handles all communication on its own, it is really difficult to inject a transaction that you can create at setup and roll back at teardown.
Maybe you can use this. It is ugly, but perhaps it can work for you:
using System.Data.SqlClient;
using NUnit.Framework;

namespace SqlServerHandling
{
    [TestFixture]
    public sealed class TestTransactionRollBacks
    {
        private string _connectionString = "Data Source = XXXDB; Initial Catalog = XXX; User Id = BLABLA; Password = BLABLA";
        private SqlConnection _connection;
        private SqlTransaction _transaction;

        [SetUp]
        public void SetUp()
        {
            _connection = new SqlConnection(_connectionString);
            _connection.Open(); // the connection must be open before a transaction can begin
            _transaction = _connection.BeginTransaction();
        }

        [TearDown]
        public void TearDown()
        {
            _transaction.Rollback();
            _connection.Close();
        }

        [Test]
        public void Test()
        {
            Foo foo = new Foo(_connection);
            object result = foo.Bar();
        }
    }

    internal class Foo
    {
        private readonly SqlConnection _connection;
        object someObject = new object();

        public Foo(SqlConnection connection)
        {
            _connection = connection;
        }

        public object Bar()
        {
            //Do your stuff
            return someObject;
        }
    }
}
I agree with Morten's answer, but you might want to look at this very old MSDN Magazine article on the subject: Know Thy Code: Simplify Data Layer Unit Testing using Enterprise Services
I use SQLite for unit tests, with NHibernate. Even if you're not using NHibernate, it should be possible to do. SQLite has an in-memory mode, where you can create a database in memory and persist data there. It is fast, works well, and you can simply throw away and recreate the schema for each test or fixture as you see fit.
You can see the example from Ayende's blog for an overview of how its done. He is using NHibernate, but the concept should work with other ORM or a straight DAL as well.
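To illustrate the idea without an ORM, here is a minimal NUnit fixture against an in-memory SQLite database; it assumes the Microsoft.Data.Sqlite package (System.Data.SQLite works similarly) and an invented users table:
using Microsoft.Data.Sqlite;
using NUnit.Framework;

[TestFixture]
public class InMemoryDatabaseTests
{
    private SqliteConnection _connection;

    [SetUp]
    public void SetUp()
    {
        // ":memory:" creates a fresh database that lives only as long as this connection stays open.
        _connection = new SqliteConnection("Data Source=:memory:");
        _connection.Open();
        using (var cmd = _connection.CreateCommand())
        {
            cmd.CommandText = "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)";
            cmd.ExecuteNonQuery();
        }
    }

    [TearDown]
    public void TearDown()
    {
        // Closing the connection throws the whole database away.
        _connection.Dispose();
    }

    [Test]
    public void CanInsertUser()
    {
        using (var cmd = _connection.CreateCommand())
        {
            cmd.CommandText = "INSERT INTO users (name) VALUES ('alice')";
            Assert.AreEqual(1, cmd.ExecuteNonQuery());
        }
    }
}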
I've heard a LOT in the past about how programming with threads and tasks is very dangerous to the naive. Well, I'm naive, but I've got to learn sometime. I am making a program (really, a Generic Handler for ASP.NET) that needs to call a third party and wait for a response. While waiting, I'd like the handler to continue doing some other things, so I am trying to figure out how to make the third-party web request asynchronously. Based on some answers to other questions I've received, here is what I've come up with, but I want to make sure I won't get into big problems when my handler is called multiple times concurrently.
To test this I've built a console project.
class Program
{
    static void Main(string[] args)
    {
        RunRequestAsynch test = new RunRequestAsynch();
        test.TestingThreadSafety = Guid.NewGuid().ToString();
        Console.WriteLine("Started:" + test.TestingThreadSafety);
        Task tTest = new Task(test.RunWebRequest);
        tTest.Start();
        while (test.Done == false)
        {
            Console.WriteLine("Still waiting...");
            Thread.Sleep(100);
        }
        Console.WriteLine("Done. " + test.sResponse);
        Console.ReadKey();
    }
}
I instantiate a separate object (RunRequestAsynch), set some values on it, and then start it. While that is processing, I'm just outputting a string to the console window.
public class RunRequestAsynch
{
    public bool Done = false;
    public string sResponse = "";
    public string sXMLToSend = "";
    public string TestingThreadSafety = "";

    public RunRequestAsynch() { }

    public void RunWebRequest()
    {
        Thread.Sleep(500);
        // HttpWebRequest stuff goes here
        sResponse = TestingThreadSafety;
        Done = true;
        Thread.Sleep(500);
    }
}
So... if I run 1000 of these simultaneously, I can count on the fact that each instance has its own memory and properties, right? And the line "Done = true;" won't fire on one instance and then kill every one of the instances of the Generic Handler, right?
I wrote a .bat file to run several instances, and the GUID I set on each specific object seems to stay the same for each instance, which is what I want... but I want to make sure I'm not doing something really stupid that will bite me in the butt under full load.
I don't see any glaring problems; however, you should consider using Task.Factory.StartNew instead of Start. Each task will only be executed once, so there isn't any problem with multiple tasks running simultaneously.
If you want to simplify your code a little and take advantage of the Factory.StartNew, in your handler you could do something like this (from what I remember of your last question):
Task<byte[]> task = Task.Factory.StartNew<byte[]>(() => // Begin task
{
    // Replace with your web request; I guessed that it's downloading data.
    // Change this to whatever makes sense.
    using (var wc = new System.Net.WebClient())
        return wc.DownloadData("Some Address");
});

// Call your method to parse the XML here; it will run in parallel with the download.

byte[] result = task.Result; // Wait for task to finish and fetch result.
I want an ASP.NET cache item to be recycled when a specific file is touched, but the following code is not working:
HttpContext.Current.Cache.Insert(
"Key",
SomeObject,
new CacheDependency(Server.MapPath("SomeFile.txt")),
DateTime.MaxValue,
TimeSpan.Zero,
CacheItemPriority.High,
null);
"SomeFile.txt" does not seem to be checked when I'm hitting the cache, and modifying it does not cause this item to be invalidated.
What am I doing wrong?
Problem Solved:
This was a unique and interesting problem, so I'm going to document the cause and solution here as an Answer, for future searchers.
Something I left out in my question was that this cache insertion was happening in a service class implementing the singleton pattern.
In a nutshell:
public class Service
{
    private static readonly Service _Instance = new Service();

    static Service() { }

    private Service() { }

    public static Service Instance
    {
        get { return _Instance; }
    }

    // The expensive data that this service exposes
    private someObject _data = null;

    public someObject Data
    {
        get
        {
            if (_data == null)
                loadData();
            return _data;
        }
    }

    private void loadData()
    {
        _data = GetFromCache();
        if (_data == null)
        {
            // Get the data from our datasource
            _data = ExpensiveDataSourceGet();
            // Insert into Cache
            HttpContext.Current.Cache.Insert(etc);
        }
    }
}
It may be obvious to some, but the culprit here is lazy loading within the singleton pattern. I was so caught up thinking the cache wasn't being invalidated that I forgot the state of the singleton persists for as long as the worker process is alive.
Cache.Insert has an overload that lets you specify an event handler for when the cache item is removed; my first test was to create a dummy handler and set a breakpoint inside it. Once I saw that the cache was being cleared, I realized that "_data" was not being reset to null, so the next request to the singleton got the lazily loaded value.
In a sense, I was double caching. The singleton's copy was very short-lived, but long enough to be annoying.
The solution?
HttpContext.Current.Cache.Insert(
"Key",
SomeObject,
new CacheDependency(Server.MapPath("SomeFile.txt")),
DateTime.MaxValue,
TimeSpan.Zero,
CacheItemPriority.High,
delegate(string key, object value, CacheItemRemovedReason reason)
{
_data = null;
}
);
When the cache is cleared, the state within the singleton must also be cleared...problem solved.
Lesson learned here? Don't put state in a singleton.
Is ASP.NET running under an account with the proper permissions for the file specified in the CacheDependency? If not, then this might be one reason why the CacheDependency is not working properly.
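If you're not sure which account that is, a quick sketch to find out (assuming classic ASP.NET with Windows identities; drop it into any page or handler):
// Writes the Windows account the worker process uses for this request.
Response.Write(System.Security.Principal.WindowsIdentity.GetCurrent().Name);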
I think you'll need to specify a path:
var d = new CacheDependency(Server.MapPath("SomeFile.txt"));
Prepend with ~\App_Data as needed.
Your code looks fine to me. However, beyond this snippet, anything could be going on.
Are you re-inserting on every postback by any chance?
Try making your cache dependency a class field, and checking it on every postback. Modify the file in between and see if it ever registers as "Changed". e.g.:
public partial class _Default : System.Web.UI.Page
{
    // Static so the dependency survives across requests; a page instance only lives for one request.
    static CacheDependency dep;

    protected void Page_Load(object sender, EventArgs e)
    {
        if (!IsPostBack)
        {
            dep = new CacheDependency(Server.MapPath("SomeFile.txt"));
            HttpContext.Current.Cache.Insert(
                "Key",
                new Object(),
                dep,
                DateTime.MaxValue,
                TimeSpan.Zero,
                CacheItemPriority.High,
                null);
        }

        if (dep.HasChanged)
            Response.Write("changed!");
        else
            Response.Write("no change :(");
    }
}
The only way I am able to reproduce this behavior is if the path provided to the constructor of CacheDependency does not exist. CacheDependency will not throw an exception if the path doesn't exist, so it can be a little misleading.
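A small defensive sketch of that check (the File.Exists guard is my addition, not something CacheDependency does for you):
string path = Server.MapPath("SomeFile.txt");

// CacheDependency silently accepts a missing file, so fail fast instead.
if (!System.IO.File.Exists(path))
    throw new System.IO.FileNotFoundException("Cache dependency file not found", path);

CacheDependency dep = new CacheDependency(path);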