I need to create an MD5 hash for every table my SSIS script creates in a database.
Is it possible? If so, can anyone please share some code? I have searched and searched and am not getting anywhere.
I don't know exactly what you mean by creating a hash for every table. If you mean you need to create hashes of the table names, you can use the query below in your database on SQL Server:
SELECT name, HASHBYTES('MD5', name) FROM sys.tables
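If you instead need a hash of each table's contents, here is a minimal sketch of one way to do it (dbo.MyTable is a placeholder): serialize the rows with FOR XML and hash the result. Note that before SQL Server 2016, HASHBYTES input is limited to 8000 bytes, so this only suits small tables:

-- Hash a table's contents rather than its name; dbo.MyTable is a placeholder.
DECLARE @data NVARCHAR(MAX);
SET @data = (SELECT * FROM dbo.MyTable ORDER BY 1 FOR XML RAW); -- serialize rows
SELECT HASHBYTES('MD5', @data) AS TableContentHash;

You could loop over sys.tables and build this statement dynamically for every table your package creates.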
I'm trying to use RJDBC to connect to a SAP HANA database and query for a temporary table, which is stored with a #-prefix:
test <- dbGetQuery(jdbcConnection,
"SELECT * FROM #CONTROL_TBL")
# Error in [...]: invalid table name: Could not find table/view #CONTROL_TBL in schema USER
If I execute the SQL statement in HANA, it works perfectly fine. I'm also able to query permanent tables. Therefore I assume that R doesn't pass the hash character through. Inserting escapes like "SELECT * FROM \\#CONTROL_TBL", however, didn't solve my problem.
It's not possible to query for the data of a local or global temporary table from a different session, since they are by definition session-specific. In the case of a global temporary table one can query for the metadata of the table because they are shared across sessions.
Source: Tutorial for HANA temporary tables
You have to double-quote the table name because it contains special characters; see SAP Help, identifiers, for details.
test <- dbGetQuery(jdbcConnection,
'SELECT * FROM "#CONTROL_TBL"')
See also the related discussion on Stack Overflow.
OK, local temporary tables are only ever visible to the session in which they were defined, while global temporary tables are visible just like normal tables, but their data is session-private.
So, if you created the local temp table (its name starts with #) in a different session, then it's no wonder it cannot be found.
For your example, the question is: why do you need a temporary table in the first place?
Instead, you could, for example, define a view or a table function to select the data from.
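As a minimal sketch, assuming the temp table was filled from some source table (CONTROL_TBL_SRC and the STATUS filter are placeholders for whatever populated #CONTROL_TBL):

-- A view is visible from any session, unlike a local temporary table.
CREATE VIEW CONTROL_VW AS
SELECT * FROM CONTROL_TBL_SRC WHERE STATUS = 'ACTIVE';

From R, dbGetQuery(jdbcConnection, 'SELECT * FROM CONTROL_VW') then works without any quoting tricks.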
I am getting this error while saving my data into the table. I have already created a 'product_Design' table in my database. I am using SQL Server 2008. Everything is working fine on localhost but not on the server. I have also tried inserting data into different tables and it works, but I am just not able to insert data into this (product_Design) table. I really need help with this.
Here is my SQL query:
insert into z3ctjholo.dbo.product_Design values(@prodID, @productName, @designName, @designPath, @finalDesign, @front, @cont, @divHeight, GETDATE(), 0, 1)
I also tried this query
insert into product_Design values(@prodID, @productName, @designName, @designPath, @finalDesign, @front, @cont, @divHeight, GETDATE(), 0, 1)
Both queries generate the error. Please help me out.
Thanks.
So finally I found the problem. If you ever face this kind of problem, execute the statements below in SQL Server and check whether your table belongs to a schema other than dbo.
use yourDatabaseName
Then
SELECT * FROM INFORMATION_SCHEMA.TABLES
After that, if you find that your table belongs to a schema other than dbo, qualify the table name in your statements like this:
select * from schemaName.tableName
(e.g. my schema name is z3ctjholo and my table name is product_Design)
so my statement would be like this
select * from z3ctjholo.product_Design
What I was doing wrong: I was using two qualifiers together (z3ctjholo.dbo.product_Design), treating my schema name as if it were a database name.
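If you only want to check one table, here is a minimal sketch that looks up its schema directly (using the table name from this question):

-- Look up the owning schema of a single table instead of scanning all rows.
SELECT TABLE_SCHEMA, TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME = 'product_Design';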
I hope it will help someone..
Thanks...
There are two reasons I can find so far.
1. Either the connection settings in web.config are incorrect, or
2. your database has a case-sensitive collation, so check the case of the name. Maybe you created the table with the name Product_Design and are trying to insert into product_Design; in that case too, the command will not work.
Please check both points.
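For the second point, a minimal sketch of how to check the collation ('YourDatabase' is a placeholder; a CS in the result means case-sensitive):

-- Check whether the database collation is case-sensitive ('CS' in the name).
SELECT DATABASEPROPERTYEX('YourDatabase', 'Collation') AS DbCollation;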
I am having an issue with an SQLite database. I am using the SQLite ODBC driver from http://www.ch-werner.de/sqliteodbc/. I installed the 64-bit version and created the ODBC DSN with these settings:
I open my Access database and link to the datasource. I can open the table, add records, but cannot delete or edit any records. Is there something I need to fix on the ODBC side to allow this? The error I get when I try to delete a record is:
The Microsoft Access database engine stopped the process because you and another user are attempting to change the same data at the same time.
When I edit a record I get:
The record has been changed by another user since you started editing it. If you save the record, you will overwrite the changes the other user made.
Saving the record is disabled; only copying to the clipboard or dropping the changes is available.
My initial attempt to recreate your issue was unsuccessful. I used the following on my 32-bit test VM:
Access 2010
SQLite 3.8.2
SQLite ODBC Driver 0.996
I created and populated the test table [tbl1] as documented here. I created an Access linked table and when prompted I chose both columns ([one] and [two]) as the Primary Key. When I opened the linked table in Datasheet View I was able to add, edit, and delete records without incident.
The only difference I can see between my setup and yours (apart from the fact that I am on 32-bit and you are on 64-bit) is that in the ODBC DSN settings I left the Sync.Mode setting at its default value of NORMAL, whereas yours appears to be set to OFF.
Try setting your Sync.Mode to NORMAL and see if that makes a difference.
Edit re: comments
The solution in this case was the following:
One possible workaround would be to create a new SQLite table with all the same columns plus a new INTEGER PRIMARY KEY column, which Access will "see" as AutoNumber. You can create a unique index on (what are currently) the first four columns to ensure that they remain unique, but the new "identity" (ROWID) column is what Access would use to identify rows for CRUD operations.
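A minimal sketch of that workaround, with placeholder table and column names (your actual columns will differ):

-- The INTEGER PRIMARY KEY is backed by ROWID; Access sees it as AutoNumber.
CREATE TABLE tbl1_new (
    id    INTEGER PRIMARY KEY,
    one   TEXT,
    two   TEXT,
    three TEXT,
    four  TEXT
);
INSERT INTO tbl1_new (one, two, three, four)
SELECT one, two, three, four FROM tbl1;
-- Keep the old natural key unique.
CREATE UNIQUE INDEX idx_tbl1_new_natural ON tbl1_new (one, two, three, four);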
I had this problem too. I have a table with a primary key on a VARCHAR(30) (TEXT) field.
Adding an INTEGER PRIMARY KEY column didn't help at all. After lots of testing I found the issue was with a DATETIME field I had in the table. I removed the DATETIME field and I was able to update record values in MS-Access datasheet view.
So now any DATETIME fields I need in SQLite, I declare as VARCHAR(19) so they come into Access via ODBC as text. Not perfect, but it works. (And of course SQLite doesn't have a real DATETIME column type anyway, so TEXT is just fine and will convert OK.)
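A minimal sketch of that declaration (table and column names are placeholders):

-- 19 characters fits 'YYYY-MM-DD HH:MM:SS'; ODBC hands Access plain text.
CREATE TABLE event_log (
    id    INTEGER PRIMARY KEY,
    stamp VARCHAR(19)
);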
I confirmed it's a number conversion issue. With an empty DATETIME field, I can add a time of 01-01-2014 12:01:02 via Access's datasheet view, if I then look at the value in SQLite the seconds have been rounded off:
sqlite> SELECT three from TEST where FLoc='1020';
2014-01-01 12:01:00.000
Sync.Mode should also be NORMAL, not OFF.
Update:
If you have any text fields with a defined length (e.g. foo VARCHAR(10)) and the field contents contain more characters than the field definition allows (which SQLite permits), MS Access will also barf when trying to update any of the fields on that row.
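A minimal sketch that demonstrates the mismatch (names are placeholders):

-- SQLite does not enforce declared VARCHAR lengths, so this insert succeeds;
-- Access, which does respect the length, then chokes on updates to that row.
CREATE TABLE demo (foo VARCHAR(10));
INSERT INTO demo VALUES ('this value is much longer than ten characters');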
I've searched all similar posts as I had a similar issue with SQLite linked via ODBC to Access. I had three tables, two of them allowed edits, but the third didn't. The third one had a DATETIME field and when I changed the data type to a TEXT field in the original SQLite database and relinked to access, I could edit the table. So for me it was confirmed as an issue with the DATETIME field.
After running into this problem, not finding a satisfactory answer, and wasting a lot of time trying other solutions, I eventually discovered that what others have mentioned about DATETIME fields is accurate, but another solution exists that lets you keep the proper data type. The SQLite ODBC driver can convert Julian day values into the ODBC SQL_TIMESTAMP / SQL_TYPE_TIMESTAMP types by looking for floating-point values in the column, if you have that option enabled in the driver. Storing dates in this manner gives the ODBC timestamp value enough precision to avoid the write conflict error, as well as letting Access see the column as a date/time field.
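A minimal sketch of the Julian-day approach (table and column names are placeholders, and it assumes the driver option mentioned above is enabled):

-- Store the date as a Julian day number (REAL); the driver then reports the
-- column as a timestamp with enough precision to avoid the write conflict.
CREATE TABLE event_log2 (
    id    INTEGER PRIMARY KEY,
    stamp REAL
);
INSERT INTO event_log2 (stamp) VALUES (julianday('2014-01-01 12:01:02'));
SELECT datetime(stamp) FROM event_log2; -- convert back to a readable string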
Even storing sub-second precision in the date string doesn't work, which is possibly a bug in the driver because the resulting TIMESTAMP_STRUCT contains the same values, but the fractional seconds must be lost elsewhere.
I would like to monitor 10 tables with 1000 records per table. I need to know when a record, and which record changed.
I have looked into SQL Dependencies, however it appears that SQL Dependencies would only be able to tell me that the table changed, and not which record changed. I would then have to compare all the records in the table to find the modified record. I suspect this would be a problem for me as the records constantly change.
I have also looked into SQL triggers; however, I am not sure whether triggers would work for monitoring which record changed.
Another thought I had is to create a "Monitoring" table which would have records added to it via the application code whenever a record is modified.
Do you know of any other methods?
EDIT:
I am using SQL Server 2008
I have looked into Change Data Capture, which is available in SQL 2008 and was suggested by Martin Smith. Change Data Capture appears to be a robust, easy-to-implement, and very attractive solution. I am going to roll out CDC on my database.
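For anyone following along, a minimal sketch of enabling CDC (dbo.MyTable is a placeholder; CDC also requires Enterprise edition and a running SQL Server Agent):

-- Enable Change Data Capture for the database, then for one table.
EXEC sys.sp_cdc_enable_db;
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'MyTable',  -- placeholder table name
    @role_name     = NULL;        -- no gating role for reading the changes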
You can add triggers and have them add rows to an audit table. They can audit the primary key of the rows that changed, and even additional information about the changes. For instance, in the case of an UPDATE, they can record the columns that changed.
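A minimal sketch of such a trigger, with placeholder table and column names (extend it with INSERT/DELETE triggers and changed-column logging as needed):

-- Audit table plus an UPDATE trigger recording the key of each changed row.
CREATE TABLE dbo.AuditLog (
    TableName  SYSNAME  NOT NULL,
    KeyValue   INT      NOT NULL,
    ChangeType CHAR(1)  NOT NULL,
    ChangedAt  DATETIME NOT NULL DEFAULT GETDATE()
);
GO
CREATE TRIGGER trg_MyTable_Audit ON dbo.MyTable
AFTER UPDATE
AS
BEGIN
    INSERT INTO dbo.AuditLog (TableName, KeyValue, ChangeType)
    SELECT 'MyTable', i.Id, 'U'   -- Id is a placeholder key column
    FROM inserted AS i;
END;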
Before you write/implement your own, take a look at AutoAudit:
AutoAudit is a SQL Server (2005, 2008) Code-Gen utility that creates Audit Trail Triggers with:
Created, CreatedBy, Modified, ModifiedBy, and RowVersion (incrementing INT) columns to table
Insert event logged to Audit table
Updates old and new values logged to Audit table
Delete logs all final values to the Audit table
view to reconstruct deleted rows
UDF to reconstruct Row History
Schema Audit Trigger to track schema changes
Re-code-gens triggers when Alter Table changes the table
What version and edition of SQL Server? Is Change Data Capture available? – Martin Smith
I am using SQL 2008 which supports Change Data Capture. Change Data Capture is a very robust method for tracking data changes as I would like to. Thanks for the answer.
Here's an idea. You can have a flag on each table that is filled with the current datetime every time a record is created or updated. Then, when you have processed a changed record, set its flag back to null. Thus unchanged records have null in their flag field, and you can query for non-null values to see which records have been changed or created, and when (and then set their flags to null again).
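A minimal sketch of that idea in T-SQL, using a trigger to stamp the flag so the application doesn't have to (all names are placeholders):

-- Nullable flag column stamped on insert/update; the monitor polls for
-- non-null flags and clears them. (Direct trigger recursion is off by
-- default in SQL Server, so the UPDATE below does not re-fire the trigger.)
ALTER TABLE dbo.MyTable ADD LastChanged DATETIME NULL;
GO
CREATE TRIGGER trg_MyTable_Stamp ON dbo.MyTable
AFTER INSERT, UPDATE
AS
    UPDATE t SET LastChanged = GETDATE()
    FROM dbo.MyTable AS t
    JOIN inserted AS i ON i.Id = t.Id;  -- Id is a placeholder key column
GO
-- Poll for changes, then reset the flags:
SELECT Id, LastChanged FROM dbo.MyTable WHERE LastChanged IS NOT NULL;
UPDATE dbo.MyTable SET LastChanged = NULL WHERE LastChanged IS NOT NULL;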
I've got this query
UPDATE linkeddb...table SET field1 = 'Y' WHERE column1 = '1234'
This takes 23 seconds to select and update one row.
But if I use OPENQUERY (which I don't want to), it takes only half a second.
The reason I don't want to use OPENQUERY is so I can add parameters to my query securely and be safe from SQL injection.
Does anyone know of any reason for it to be running so slowly?
Here's a thought as an alternative. Create a stored procedure on the remote server to perform the update and then call that procedure from your local instance.
/* On remote server */
create procedure UpdateTable
@field1 char(1),
@column1 varchar(50)
as
update table
set field1 = @field1
where column1 = @column1
go
/* On local server */
exec linkeddb...UpdateTable @field1 = 'Y', @column1 = '1234'
If you're looking for the why, here's a possibility from Linchi Shea's Blog:
To create the best query plans when you are using a table on a linked server, the query processor must have data distribution statistics from the linked server. Users that have limited permissions on any columns of the table might not have sufficient permissions to obtain all the useful statistics, and might receive a less efficient query plan and experience poor performance. If the linked server is an instance of SQL Server, to obtain all available statistics, the user must own the table or be a member of the sysadmin fixed server role, the db_owner fixed database role, or the db_ddladmin fixed database role on the linked server.
(Because of Linchi's post, this clarification has been added to the latest Books Online SQL Server documentation.)
In other words, if the linked server is set up with a user that has limited permissions, then SQL can't retrieve accurate statistics for the table and might choose a poor method for executing a query, including retrieving all rows.
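If fixing the permissions is an option, a minimal sketch of one of the remedies the quote lists, run in the remote database (the role, user, and database names are placeholders; on SQL Server 2008, sp_addrolemember is the way to add a role member):

-- Give the linked-server login enough rights to read distribution statistics.
USE RemoteDb;  -- placeholder database name
EXEC sp_addrolemember 'db_ddladmin', 'LinkedServerUser';  -- placeholder user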
Here's a related SO question about linked server query performance. Their conclusion was: use OpenQuery for best performance.
Update: some additional excellent posts about linked server performance from Linchi's blog.
Is column1 the primary key? Probably not. Try to select the records to update using the primary key (WHERE pk_field = xxx); otherwise (sometimes?) all records may be read just to find the rows to update.
Is column1 a varchar field? Is that why you are surrounding the value 1234 with single quotation marks? Or is that simply a typo in your question?