EventStore DuplicateCommit Query - neventstore

Why does the SQL persistence for Jonathan Oliver's EventStore use CommitSequence as one of the criteria for detecting a duplicate commit? Why wouldn't StreamId and CommitId be sufficient?
See the SQL below.
SELECT COUNT(*)
FROM Commits
WHERE StreamId = @StreamId
AND CommitSequence = @CommitSequence
AND CommitId = @CommitId
This SQL statement is from SqlPersistenceEngine.DetectDuplicate(). It's used to determine if a DuplicateCommitException should be thrown or just a ConcurrencyException.

Not sure what your SQL is about.
The reason the CommitSequence participates in the unique indexes is that if two people are modifying v5 at the same time, one might write a v6 with one event while the other writes a v6 with a different event.
In some cases, all writers have a common source of Commit Ids, but quite often (and Common Domain does this), one just generates a random Guid as the Commit Id and in that case you still want the conflict to be detected.
I guess one could abuse GUIDs by encoding the StreamVersion (v6 above) into the Commit Id to make the CommitSequence redundant, but to me it's clearly needed and useful.
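A sketch of the two unique constraints implied by that design (the index names here are illustrative; the actual schema ships with the EventStore SQL scripts):

```sql
-- Two writers racing from v5: both attempt CommitSequence = 6 with
-- different CommitIds -> violates this index -> ConcurrencyException.
CREATE UNIQUE INDEX IX_Commits_CommitSequence
    ON Commits (StreamId, CommitSequence);

-- The same commit retried (same CommitId, e.g. after a network timeout)
-- -> violates this index -> DuplicateCommitException.
CREATE UNIQUE INDEX IX_Commits_CommitId
    ON Commits (StreamId, CommitId);
```

The DetectDuplicate query checks all three columns so the engine can tell which of the two situations caused the constraint violation.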
Be sure to read the readme in the NuGet package BTW - most of this stuff is explained pretty well IMO.

Related

Is there a way to query Oracle DB server name and use in conditional compilation?

I got bit trying to maintain code packages that run on two different Oracle 11g2 systems when a line of code to be changed slipped by me. We develop on one system with a specific data set and then test on another system with a different data set.
The differences aren't tremendous, but include needing to change a single field name in two different queries in two different packages to have the packages run. On one system, we use one field, on the other system... a different one. The databases have the same schema name, object names, and field names, but the hosting system server names are different.
The change is literally as simple as
INSERT INTO PERSON_HISTORY
( RECORD_NUMBER,
UNIQUE_ID,
SERVICE_INDEX,
[... 140 more fields... ]
)
SELECT LOD.ID RECORD_NUMBER ,
-- for Mgt System, use MD5 instead of FAKE_SSN
-- Uncomment below, and comment out Dev system statement
-- MD5 UNIQUE_ID ,
-- for DEV system, use below
'00000000000000000000' || LOD.FAKE_SSN UNIQUE_ID ,
null SERVICE_INDEX ,
[... 140 more fields... ]
FROM LEGACY_DATE LOD
WHERE (conditions follow)
;
I missed one of the field name changes in one of the queries, and our multi-day run is crap.
For stupid reasons I won't go into, I wind up maintaining all of the code, including having to translate and reprocess developer changes manually between versions, then transfer and update the required changes between systems.
I'm trying to reduce the repetitive input I have to provide to swap out code -- I want to automate this step so I don't overlook it again.
I wanted to implement conditional compilation, pulling the name of the database system from Oracle and having the single line swap automatically, but Oracle conditional compilation requires a package static constant (a boolean in this case). I can't use the sys_context function to populate the value; or rather, Oracle won't let me pull data from sys_context, evaluate it conditionally, and assign the result to a constant. DB_DOMAIN, DB_NAME, or SERVER_HOST might work to differentiate the systems, but I can't find a way to USE the information.
An option is to create a global constant that I set manually when I move the code to the other system, but at this point, I have so many steps to do for a transfer that I'm worried that I'd even screw that up. I would like to make this independent of other packages or my own processes.
Is there a good way to do this?
-------- edit
I will try the procedure and try to figure out the view over the weekend. Ultimately, the project will be turned over to a customer who expects to "just run it", so they won't understand what any switches are meant to do, or why I have "special" code in a package. And, they won't need to... I don't even know if they'll look at the comments.
Thank you
As Mat says in the comments, for this specific example you can solve it with a view; however, there are other ways for more complex situations.
If you're compiling from a filesystem or using any automatic system you can create a separate PL/SQL block/procedure, which you execute in the same session prior to compilation. I'd do something like this:
declare
  l_db varchar2(30) := sys_context('userenv', 'instance_name');
begin
  if l_db = 'MY_DB' then
    execute immediate 'alter session set plsql_ccflags = ''my_db:true''';
  end if;
end;
/
One important point: conditional compilation does not involve a "package static constant", but a session one. So you need to ensure that your compilation flags are set consistently across packages/sessions.
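For completeness, a hedged sketch of how a compilation unit could then consume that flag via the `$$my_db` inquiry directive; the column swap mirrors the MD5/FAKE_SSN change from the question, but the procedure name and single-column INSERT are made up for illustration:

```sql
CREATE OR REPLACE PROCEDURE load_person_history AS
BEGIN
  -- $$my_db evaluates TRUE only in sessions where plsql_ccflags
  -- was set to 'my_db:true' before compilation:
  $IF $$my_db $THEN
    INSERT INTO PERSON_HISTORY (UNIQUE_ID)
    SELECT MD5 FROM LEGACY_DATE LOD;
  $ELSE
    INSERT INTO PERSON_HISTORY (UNIQUE_ID)
    SELECT '00000000000000000000' || LOD.FAKE_SSN FROM LEGACY_DATE LOD;
  $END
END load_person_history;
/
```

If the flag is unset, `$IF $$my_db` evaluates as not-TRUE and the `$ELSE` branch compiles, which matches the DEV-system default in the question.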

System level trigger on DML Command in plsql

Suppose there are n tables in the database. Any insert, update, or delete that happens against any table in the database has to be captured in a table called "Audit_Trail", which has the columns below.
Server_Name
AT_date
AT_time
Table_name
Column_name
Action
Old_value
New_Value
We need to capture the server, the table, the column, and the date and time. Also, the "Action" column tracks whether an action is an insert, update, or delete, and we have to capture the old value and new value as well.
So what is the best way to do this? Can we create a database level trigger which can fire trigger in case of any insert, update or delete?
The best way would be to use Oracle's own auditing functionality.
AUDIT ALL ON DEFAULT BY ACCESS;
http://docs.oracle.com/cd/E11882_01/network.112/e36292/auditing.htm#DBSEG392
In response to comment ...
There is nothing unusual in wanting to audit every change made to tables in the database -- hence there is already functionality provided in the system for doing exactly that. It is better than using triggers because it cannot be bypassed as easily. However, if you want to use this pre-supplied, robust, simple-to-use functionality you might have to compromise on your specific requirements a little, but the payoff will be a superior solution that uses code and configuration in common with thousands of other Oracle systems.
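Once standard auditing is enabled (and the audit trail is written to the database rather than the OS), the captured actions can be read back from the data dictionary; a minimal sketch:

```sql
-- Assumes AUDIT_TRAIL = DB (or DB,EXTENDED) in the instance parameters
-- and privileges to read the dictionary view:
SELECT username, obj_name, action_name, timestamp
FROM   dba_audit_trail
ORDER  BY timestamp DESC;
```

This won't match the asker's Audit_Trail column list exactly (old/new values need fine-grained auditing or DB,EXTENDED), which is the compromise mentioned above.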

Entity Framework - Making a column auto-increment

So I'm using the Entity Framework and we have a model for a table called TPM_PROJECTVERSIONNOTES. This table has a column called NOTEID which is a number. Right now, when we create a new row, we get the next available number with this code:
note.NOTEID = (from n in context.TPM_PROJECTVERSIONNOTES
orderby n.NOTEID descending
select n.NOTEID).Max() + 1;
To me, this seems incredibly hacky (I mean you have to do an entire SQL query just to get the next value). Plus, it's incredibly dangerous; it's not thread safe or transaction safe. I've already found 9 instances in the DB that have the same NOTEID! Good thing no one even thought to put a UNIQUE constraint on that column... sigh.
So anyway, I've added a new sequence to the database:
CREATE SEQUENCE TPM_PROJECTVERSIONNOTES_SEQ START WITH 732 INCREMENT BY 1;
Now my question:
How do I instruct the Entity Framework to use TPM_PROJECTVERSIONNOTES_SEQ.nextval when inserting a row into this table? Basically, I just don't want to specify a NOTEID at all and I want the framework to take care of it for me. It's been suggested I use a trigger, but I think that's a bit hacky and would rather have the Entity Framework just generate the correct SQL in the first place. I'm using Oracle 11g for this.
While this may still fall into what you call the 'hacky' category, you can avoid using a trigger to call nextval, but you must use a stored procedure to handle the insert (the procedure calls nextval instead of a trigger doing it). (I guess this could fall more into a TAPI/XAPI category.)
Check out the recent article
TECHNOLOGY: Oracle Data Provider for .NET
It explains (and contains samples of) using a stored procedure to handle the insert, calling the sequence, and mapping it back in the ODP EF Beta.
This obviously does not have the ODP EF Beta do the SQL for nextval, but it is an alternative. (look at this forum, it does appear that most of the EF Oracle frameworks fall victim to this-- devart etc {https://forums.oracle.com/forums/thread.jspa?threadID=2184372} )
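For reference, the trigger approach dismissed in the question would look roughly like this (the table and sequence names come from the question; the trigger name is made up):

```sql
CREATE OR REPLACE TRIGGER TPM_PROJECTVERSIONNOTES_BI
  BEFORE INSERT ON TPM_PROJECTVERSIONNOTES
  FOR EACH ROW
  WHEN (NEW.NOTEID IS NULL)
BEGIN
  -- Assign the next sequence value whenever the caller omits NOTEID:
  SELECT TPM_PROJECTVERSIONNOTES_SEQ.NEXTVAL
    INTO :NEW.NOTEID
    FROM DUAL;
END;
/
```

With this in place the application can omit NOTEID entirely, at the cost of the hidden server-side magic the asker wanted to avoid.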

MS Access CREATE PROCEDURE Or use Access Macro in .NET

I need to be able to run a query such as
SELECT * FROM atable WHERE MyFunc(afield) = "some text"
I've written MyFunc in a VB module but the query results in "Undefined function 'MyFunc' in expression." when executed from .NET
From what I've read so far, functions in Access VB modules aren't available in .NET due to security concerns. There isn't much information on the subject, but this avenue seems like a dead end.
The other possibility is through the CREATE PROCEDURE statement which also has precious little documentation: http://msdn.microsoft.com/en-us/library/bb177892%28v=office.12%29.aspx
The following code does work and creates a query in Access:
CREATE PROCEDURE test AS SELECT * FROM atable
However I need more than just a simple select statement - I need several lines of VB code.
While experimenting with the CREATE PROCEDURE statement, I executed the following code:
CREATE PROCEDURE test AS
Which produced the error "Invalid SQL statement; expected 'DELETE', 'INSERT', 'PROCEDURE', 'SELECT', or 'UPDATE'."
This seems to indicate that there's a SQL 'PROCEDURE' statement, so then I tried
CREATE PROCEDURE TEST AS PROCEDURE
Which resulted in "Syntax error in PROCEDURE clause."
I can't find any information on the SQL 'PROCEDURE' statement - maybe I'm just reading the error message incorrectly and there's no such beast. I've spent some time experimenting with the statement but I can't get any further.
In response to the suggestions to add a field to store the value, I'll expand on my requirements:
I have two scenarios where I need this functionality.
In the first scenario, I needed to enable the user to search on the soundex of a field, and since there's no soundex SQL function in Access I added a field to store the soundex value for every field in every table where the user wants to be able to search for a record that "sounds like" an entered value. I update the soundex value whenever the parent field value changes. It's a fair bit of overhead, but I considered it necessary in this instance.
For the second scenario, I want to normalize the spacing of a space-concatenation of field values and optionally strip out user-defined characters. I can come very close to achieving the desired value with a combination of TRIM and REPLACE functions. The value would only differ if three or more spaces appeared between words in the value of one of the fields (an unlikely scenario). It's hard to justify the overhead of an extra field on every field in every table where this functionality is needed. Unless I get specific feedback from users about the issue of extra spaces, I'll stick with the TRIM & REPLACE value.
My application is database agnostic (or just not very religious... I support 7). I wrote a UDF for each of the other 6 databases that does the space normalization and character stripping much more efficiently than the built-in database functions. It really annoys me that I can write the UDF in Access as a VB macro and use that macro within Access but I can't use it from .NET.
I do need to be able to index on the value, so pulling the entire column(s) into .NET and then performing my calculation won't work.
I think you are running into the ceiling of what Access can do (and trying to go beyond). Access really doesn't have the power to do really complex TSQL statements like you are attempting. However, there are a couple ways to accomplish what you are looking for.
First, if the results of MyFunc don't change often, you could create a function in a module that loops through each record in atable and runs your MyFunc against it. You could either store that data in the table itself (in a new column) or you could build an in-memory dataset that you use for whatever purposes you want.
The second way of doing this is to do the manipulation in .NET since it seems you have the ability to do so. Do the SELECT statement and pull out the data you want from Access (without trying to run MyFunc against it). Then run whatever logic you want against the data and either use it from there or put it back into the Access database.
Why not create an additional field in your atable, where atable.afieldX = MyFunc(atable.afield)? All you need is to run an UPDATE command once.
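A sketch of that suggestion; note that the UPDATE has to run inside Access itself, where the VBA function is visible (the column name afieldX is illustrative):

```sql
-- Run from within Access (MyFunc is not visible over OLEDB from .NET):
ALTER TABLE atable ADD COLUMN afieldX TEXT(255);
UPDATE atable SET afieldX = MyFunc(afield);

-- .NET can then query (and index) the precomputed column:
SELECT * FROM atable WHERE afieldX = 'some text';
```

This precomputed-column approach is exactly what the asker already does for the soundex scenario; the trade-off is keeping afieldX in sync when afield changes.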
You should try writing a SQL Server function MyFunc. This way you will be able to run the same query in SQL Server and in Access.
A few useful links so you can get started:
MSDN article about user-defined functions: http://msdn.microsoft.com/en-us/magazine/cc164062.aspx
SQL Server user-defined functions: http://www.sqlteam.com/article/intro-to-user-defined-functions-updated
SQL Server string functions: http://msdn.microsoft.com/en-us/library/ms181984.aspx
What version of JET (now called Ace) are you using?
I mean, it should come as no surprise that if you're going to use some Access VBA code, then you need the VBA library and a copy of MS Access loaded and running.
However, in Access 2010 we now have table triggers and stored procedures. These stored procedures do NOT require VBA and in fact run at the engine level. I have a table trigger and soundex routine here that shows how this works:
http://www.kallal.ca/searchw/WebSoundex.htm
The above means that if Access, or VB.NET, or even FoxPro via ODBC modifies a row, the table trigger code will fire and save the soundex value in a column for you. This feature also works if you use the new web publishing feature in Access 2010. So, while the above article is written from the point of view of using Access web services (available in Office 365 and SharePoint), the soundex table trigger will also work in a stand-alone Access and JET (ACE) only application.

Why does this query timeout? V2

This question is a follow-up to This Question
The solution, clearing the execution plan cache, seemed to work at the time, but I've been running into the same problem over and over again, and clearing the cache no longer seems to help. There must be a deeper problem here.
I've discovered that if I remove the .Distinct() from the query, it returns rows (with duplicates) in about 2 seconds. However, with the .Distinct() it takes upwards of 4 minutes to complete. There are a lot of rows in the tables, and some of the where clause fields do not have indexes. However, the number of records returned is fairly small (a few dozen at most).
The confusing part about it is that if I get the SQL generated by the Linq query, via Linqpad, then execute that code as SQL or in SQL Management Studio (including the DISTINCT) it executes in about 3 seconds.
What is the difference between the Linq query and the executed SQL?
I have a short term workaround, and that's to return the set without .Distinct() as a List, then using .Distinct on the list, this takes about 2 seconds. However, I don't like doing SQL Server work on the web server.
I want to understand WHY the Distinct is 2 orders of magnitude slower in Linq, but not SQL.
UPDATE:
When executing the code via Linq, the SQL profiler shows this code, which is basically the identical query.
sp_executesql N'SELECT DISTINCT [t5].[AccountGroupID], [t5].[AccountGroup]
AS [AccountGroup1]
FROM [dbo].[TransmittalDetail] AS [t0]
INNER JOIN [dbo].[TransmittalHeader] AS [t1] ON [t1].[TransmittalHeaderID] =
[t0].[TransmittalHeaderID]
INNER JOIN [dbo].[LineItem] AS [t2] ON [t2].[LineItemID] = [t0].[LineItemID]
LEFT OUTER JOIN [dbo].[AccountType] AS [t3] ON [t3].[AccountTypeID] =
[t2].[AccountTypeID]
LEFT OUTER JOIN [dbo].[AccountCategory] AS [t4] ON [t4].[AccountCategoryID] =
[t3].[AccountCategoryID]
LEFT OUTER JOIN [dbo].[AccountGroup] AS [t5] ON [t5].[AccountGroupID] =
[t4].[AccountGroupID]
LEFT OUTER JOIN [dbo].[AccountSummary] AS [t6] ON [t6].[AccountSummaryID] =
[t5].[AccountSummaryID]
WHERE ([t1].[TransmittalEntityID] = @p0) AND ([t1].[DateRangeBeginTimeID] = @p1) AND
([t1].[ScenarioID] = @p2) AND ([t6].[AccountSummaryID] = @p3)',N'@p0 int,@p1 int,
@p2 int,@p3 int',@p0=196,@p1=20100101,@p2=2,@p3=0
UPDATE:
The only difference between the queries is that Linq executes it with sp_executesql and SSMS does not, otherwise the query is identical.
UPDATE:
I have tried various Transaction Isolation levels to no avail. I've also set ARITHABORT to try to force a recompile when it executes, and no difference.
The bad plan is most likely the result of parameter sniffing: http://blogs.msdn.com/b/queryoptteam/archive/2006/03/31/565991.aspx
Unfortunately there is not really any good universal way (that I know of) to avoid that with L2S. context.ExecuteCommand("sp_recompile ...") would be an ugly but possible workaround if the query is not executed very frequently.
Changing the query around slightly to force a recompile might be another one.
Moving parts (or all) of the query into a view*, function*, or stored procedure* DB-side would be yet another workaround.
 * = where you can use local params (func/proc) or optimizer hints (all three) to force a 'good' plan
Btw, have you tried to update statistics for the tables involved? SQL Server's auto update statistics doesn't always do the job, so unless you have a scheduled job to do that it might be worth considering scripting and scheduling update statistics... ...tweaking up and down the sample size as needed can also help.
There may be ways to solve the issue by adding* (or dropping*) the right indexes on the tables involved, but without knowing the underlying db schema, table size, data distribution etc that is a bit difficult to give any more specific advice on...
 * = Missing and/or overlapping/redundant indexes can both lead to bad execution plans.
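Sketches of the sp_recompile and update-statistics suggestions above, using table names from the posted query:

```sql
-- Mark the table so plans referencing it are recompiled on next use:
EXEC sp_recompile N'dbo.TransmittalHeader';

-- Refresh statistics; FULLSCAN is the heavy-handed option, and a
-- sampled update may be enough in practice:
UPDATE STATISTICS dbo.TransmittalDetail WITH FULLSCAN;
```

Both are blunt instruments: sp_recompile throws away plans that may be fine, and FULLSCAN statistics updates can be expensive on large tables.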
The SQL that Linqpad gives you may not be exactly what is being sent to the DB.
Here's what I would suggest:
Run SQL Profiler against the DB while you execute the query. Find the statement which corresponds to your query
Paste the whole statement into SSMS, and enable the "Show Actual Execution Plan" option.
Post the resulting plan here for people to dissect.
Key things to look for:
Table Scans, which usually imply that an index is missing
Wide arrows in the graphical plan, indicating lots of intermediary rows being processed.
If you're using SQL 2008, viewing the plan will often tell you if there are any indexes missing which should be added to speed up the query.
Also, are you executing against a DB which is under load from other users?
At first glance there's a lot of joins, but I can only see one thing to reduce the number right away w/out having the schema in front of me...it doesn't look like you need AccountSummary.
[t6].[AccountSummaryID] = @p3
could be
[t5].[AccountSummaryID] = @p3
Return values are from the [t5] table. [t6] is only used filter on that one parameter which looks like it is the Foreign Key from t5 to t6, so it is present in [t5]. Therefore, you can remove the join to [t6] altogether. Or am I missing something?
Are you sure you want to use LEFT OUTER JOIN here? This query looks like it should probably be using INNER JOINs, especially because you are taking the columns that are potentially NULL and then doing a distinct on it.
Check that you have the same Transaction Isolation level between your SSMS session and your application. That's the biggest culprit I've seen for large performance discrepancies between identical queries.
Also, there are different connection properties in use when you work through SSMS than when executing the query from your application or from LinqPad. Do some checks into the Connection properties of your SSMS connection and the connection from your application and you should see the differences. All other things being equal, that could be the difference. Keep in mind that you are executing the query through two different applications that can have two different configurations and could even be using two different database drivers. If the queries are the same then that would be only differences I can see.
On a side note if you are hand-crafting the SQL, you may try moving the conditions from the WHERE clause into the appropriate JOIN clauses. This actually changes how SQL Server executes the query and can produce a more efficient execution plan. I've seen cases where moving the filters from the WHERE clause into the JOINs caused SQL Server to filter the table earlier in the execution plan and significantly changed the execution time.
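A minimal before/after sketch of moving a WHERE filter into a JOIN, using two tables from the posted query (for INNER JOINs the result set is identical; only the plan may change):

```sql
-- Filter applied in the WHERE clause:
SELECT DISTINCT t0.LineItemID
FROM dbo.TransmittalDetail AS t0
INNER JOIN dbo.TransmittalHeader AS t1
        ON t1.TransmittalHeaderID = t0.TransmittalHeaderID
WHERE t1.ScenarioID = @p2;

-- Same filter moved into the JOIN condition, which can let the
-- optimizer reduce t1 earlier in the plan:
SELECT DISTINCT t0.LineItemID
FROM dbo.TransmittalDetail AS t0
INNER JOIN dbo.TransmittalHeader AS t1
        ON t1.TransmittalHeaderID = t0.TransmittalHeaderID
       AND t1.ScenarioID = @p2;
```

Be careful applying the same transformation to the OUTER JOINs in the posted query: moving a predicate between WHERE and ON changes the meaning of a LEFT JOIN.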