Why is the Graphite UI not showing data even though all the data is reaching the Graphite DB?

I have a Graphite server with the following retention configuration:
[default]
pattern = .*
retentions = 10s:7d,5m:30d,10m:2y
I send data every 10 seconds from system A to the Graphite server, and I can confirm that the required data is arriving from system A: I have checked the .wsp file under /storage/whisper/ and the data is present (verified with whisper-fetch.py). However, I am still not able to see any graphs in the Graphite UI.
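For reference, the same check can be scripted against the .wsp file directly. A minimal sketch using the whisper Python package (the metric path here is illustrative, not my real one):

import time
import whisper  # the module that backs Graphite's .wsp files

path = '/storage/whisper/systemA/cpu/load.wsp'  # illustrative path

# Print the retentions the file was actually created with.
info = whisper.info(path)
for archive in info['archives']:
    print(archive['secondsPerPoint'], 's per point,',
          archive['retention'], 's retained')

# Fetch the last hour, roughly the window the UI renders.
(start, end, step), values = whisper.fetch(path, int(time.time()) - 3600)
print(sum(v is not None for v in values), 'of', len(values), 'points have data')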
Also, all the directories for system A are visible in the Graphite UI, but when I click on a metric it shows no data.
What are the possible reasons for this issue? It is happening only for system A; there are other systems whose data I can see in Graphite.

Related

After a certain limit it should point to a different VM

I have a file store in which files are saved under a numeric ID (a SQL index), but my VM's disk is full and I can't move the files to a different cloud or anywhere else.
My file URLs look like:
https://example.com/5374/randomstring.jpg
where 5374 is the file number saved in the SQL DB and the random string is generated for the file name.
What I'm planning is to use nginx to route requests. Right now I have files up to 56770 on this VM; when a user uploads a new file it should be saved on a different VM, and when a user requests, say, 56771, nginx should point to that VM.
You will make your life easier by choosing the cutoff point yourself; it's not essential, but it will make the matching regular expression a lot more concise.
If you said 56000 and above was on VM2 then your regex is as simple as /(5[6-9][0-9]{3}|[6-9][0-9]{4}|[0-9]{6,})/ (a per-digit character class such as [5-9][6-9][0-9][0-9][0-9] looks tempting, but it wrongly skips ranges like 60000-65999).
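A quick way to sanity-check the cutoff pattern before wiring it into nginx is a small Python script (the IDs here are made up):

import re

# IDs 56000 and above live on VM2: 56000-59999, 60000-99999, or 6+ digits.
vm2 = re.compile(r'^(5[6-9][0-9]{3}|[6-9][0-9]{4}|[0-9]{6,})$')

for file_id in ['5374', '55999', '56000', '60001', '123456']:
    target = 'VM2' if vm2.match(file_id) else 'VM1'
    print(file_id, '->', target)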

Is it possible to access Elasticsearch's internal stats via Kibana?

I can see from querying our Elasticsearch nodes that they contain internal statistics showing, for example, disk, memory, and CPU usage (e.g. via the GET _nodes/stats API).
Is there any way to access these in Kibana 4?
Not directly, as Elasticsearch doesn't natively push its internal statistics to an index. However, you could easily set something like this up on a *nix box:
1. Poll your Elasticsearch box via REST periodically (say, once a minute). The /_status or /_cluster/health endpoints probably contain what you're after.
2. Pipe the results to a log file in a simple CSV format, along with a timestamp.
3. Point Logstash at these log files and forward the output to your Elasticsearch box.
4. Graph your data.
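A minimal Python sketch of steps 1 and 2, assuming the requests package, a node on localhost:9200, and a hypothetical output path:

import csv
import time
import requests

OUT = '/var/log/es_stats.csv'  # hypothetical file for Logstash to tail

while True:
    health = requests.get('http://localhost:9200/_cluster/health').json()
    row = [time.strftime('%Y-%m-%dT%H:%M:%S'),
           health['status'],
           health['number_of_nodes'],
           health['active_shards']]
    with open(OUT, 'a', newline='') as f:
        csv.writer(f).writerow(row)
    time.sleep(60)  # once a minute, as suggested above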

Is it OK to change from full recovery to simple recovery in SQL Server?

I have an old database, a users/membership/roles database that was set up automatically by an ASP.NET 2 application years ago.
The SQL Server version currently running is: SQL Server 10.5.1617.
The database's log file is huge (the .ldf file is approximately 400 times the size of the .mdf file).
The recovery model is currently set to "Full". I understand what that means, and I don't need point-in-time restoration.
If I simply changed the recovery model to "Simple" from within SQL Server Management Studio and clicked OK to save the change, would I be risking my current database in any way? Or is SQL Server fine with making changes like this to live databases? And would the log file automatically shrink itself?
Thanks for your advice,
Mark
You should be fine; the transactions have been committed. The log file is waiting to be backed up and thereby released. Changing to simple recovery means that you cannot take log backups (and so cannot restore to a point in time), but data will be committed to the database in the same way as before; the log space is simply reclaimed after SQL Server has finished writing each transaction.
To answer both of your questions:
Changing the recovery model on a live database is safe. You shouldn't incur any downtime, blocking, etc.
The log file won't shrink itself. You may find that, once you've set the recovery model to simple, it still isn't shrinkable right away. If you find that you're unable to shrink it, take a look at DBCC LOGINFO, specifically the 'Status' column. Each row in the output of that command represents one virtual log file (VLF). The shrink command can only clear a contiguous block of inactive (i.e. Status = 0) VLFs at the end of the file. TL;DR: if you've got rows with Status = 2 at the bottom, wait until you don't, and then shrink.
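If you'd rather script the whole sequence than click through Management Studio, here is a rough sketch using Python and pyodbc (the connection string, database name, and logical log file name are all hypothetical; adjust them for your server):

import pyodbc

conn = pyodbc.connect(
    'DRIVER={ODBC Driver 17 for SQL Server};SERVER=.;'
    'DATABASE=master;Trusted_Connection=yes',
    autocommit=True,  # ALTER DATABASE and DBCC cannot run in a user transaction
)
cur = conn.cursor()

cur.execute('ALTER DATABASE [aspnetdb] SET RECOVERY SIMPLE')
cur.execute('USE [aspnetdb]')

# Inspect the VLFs: Status = 2 marks virtual log files still in use.
vlfs = cur.execute('DBCC LOGINFO').fetchall()
if vlfs[-1].Status == 0:
    # The tail of the log is inactive, so the shrink can reclaim it.
    cur.execute('DBCC SHRINKFILE (aspnetdb_log, 128)')  # target size in MB
else:
    print('Active VLFs at the end of the file; try again later.')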

Timeout when uploading images

I am currently testing Tridion 2011 and am having problems creating multimedia components with uploaded content (as opposed to external).
I fill out the title, schema, and multimedia type, select a file from my system, then click save. I get a "Saving item..." information message, then approximately 30 seconds later I receive a "The wait operation timed out" message.
There don't appear to be any error messages in the C:\Program Files (x86)\Tridion\log directory. Looking at the Event Viewer, I see the following information relating to the save action:
Unable to save Component (tcm:4-738361).
The wait operation timed out
Error Code:
0x8004033F (-2147220673)
Call stack:
System.Data.SqlClient.SqlConnection.OnError(SqlException,Boolean,Action`1)
System.Data.SqlClient.SqlInternalConnection.OnError(SqlException,Boolean,Action`1)
System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject,Boolean,Boolean)
System.Data.SqlClient.TdsParser.TryRun(RunBehavior,SqlCommand,SqlDataReader,BulkCopySimpleResultSet,TdsParserStateObject,Boolean&)
System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader,RunBehavior,String)
System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior,RunBehavior,Boolean,Boolean,Int32,Task&,Boolean)
System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior,RunBehavior,Boolean,String,TaskCompletionSource`1,Int32,Task&,Boolean)
System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(TaskCompletionSource`1,String,Boolean,Int32,Boolean)
System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
Tridion.ContentManager.Data.AdoNet.Sql.SqlDatabaseUtilities.SetBinaryContent(Int32,Stream)
Tridion.ContentManager.Data.AdoNet.ContentManagement.ItemDataMapper.Tridion.ContentManager.Data.ContentManagement.IItemDataMapper.SetBinaryContent(Stream,TcmUri)
Tridion.ContentManager.ContentManagement.RepositoryLocalObject.SetBinaryContent(BinaryContent)
Tridion.ContentManager.ContentManagement.Component.OnSaved(SaveEventArgs)
Tridion.ContentManager.IdentifiableObject.Save(SaveEventArgs)
Tridion.ContentManager.ContentManagement.VersionedItem.Save(Boolean)
Tridion.ContentManager.ContentManagement.VersionedItem.Save()
Tridion.ContentManager.BLFacade.ContentManagement.VersionedItemFacade.UpdateAndCheckIn(UserContext,String,Boolean,Boolean)
XMLState.Save
Component.Save
I already have my timeout settings in the Content Manager Snap-In set to high values (more than 10 minutes) due to another issue.
The BINARIES table in the Content Manager database is 25 GB, if that helps.
Any ideas? Thanks.
Edit 1
Following suggestions from Bart Koopman, my DBA has rebuilt the indexes but does not believe the transaction log has any impact on performance. The problem persists.
Edit 2
I have just found more details of the error
Unable to save Component (tcm:0-0-0).
Timeout expired.
The timeout period elapsed prior to completion of the operation or the server is not responding.
A database error occurred while executing Stored Procedure "EDA_ITEMS_UPDATEBINARYCONTENT".EDA_ITEMS_UPDATEBINARYCONTENT
After taking a look at this procedure, it looks like the following statement could be the root cause:
SELECT 1 FROM BINARIES WHERE ID = @iBINARY_ID AND CONTENT IS NULL
I executed it manually with @iBINARY_ID set to -1 and after 2 minutes it still hadn't completed. I assume that when I insert a new multimedia component the query will be something similar (i.e. the ID will not exist in the table).
The BINARIES table currently has a NON-CLUSTERED Primary Key. Maybe the solution would be to change this to a CLUSTERED Primary Key? However, I assume it is NON-CLUSTERED for a reason.
Just had a response from SDL customer support. Apparently this is a known issue related to statistics and the chosen query plan.
Running the following statement manually from SQL Server Management Studio fixes the problem (it didn't even need to complete for me)
SELECT 1 FROM BINARIES WHERE ID = -1 AND CONTENT IS NULL
Hope this helps someone else out!
Timeouts on database operations are usually an indication of a misconfiguration or a lack of maintenance. By increasing the timeout you are just working around the problem rather than solving it.
With a BINARIES table that big you will want to make sure you have a proper database setup, with data files separated from your log files (on different physical partitions/disks), and possibly even multiple data files on multiple physical partitions to take advantage of performance gains.
Next to that, you will want to ensure that standard database maintenance is performed daily or hourly. Backing up and truncating the transaction log every hour will greatly improve your database performance (on MS SQL Server, a transaction log of more than 1 GB slows the database down drastically; you should always try to keep it below that size through timely backup/truncate). Updating statistics and rebuilding indexes are also things you should not forget to do on a regular basis.
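If you don't have SQL Server Agent jobs for this yet, the hourly log backup plus statistics/index maintenance can be sketched in Python with pyodbc like so (the database name, backup path, and table are hypothetical; an Agent job or maintenance plan is the usual way to schedule this):

import pyodbc

conn = pyodbc.connect(
    'DRIVER={ODBC Driver 17 for SQL Server};SERVER=.;'
    'DATABASE=Tridion_cm;Trusted_Connection=yes',
    autocommit=True,  # BACKUP and ALTER INDEX run outside a user transaction
)
cur = conn.cursor()

# Back up the transaction log so its space can be reused.
cur.execute("BACKUP LOG [Tridion_cm] TO DISK = N'D:\\Backup\\Tridion_cm_log.trn'")
while cur.nextset():  # drain informational messages so the backup completes
    pass

# Refresh optimizer statistics and rebuild the indexes on the big table.
cur.execute('UPDATE STATISTICS BINARIES')
cur.execute('ALTER INDEX ALL ON BINARIES REBUILD')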

How visible/traceable is local CSV data queried from an ASP.NET page?

I want to have a page on a remote site that uses a local CSV file as a data source and outputs it to a GridView. What format is the source data in, and how is it transferred to the server in this case?
Could it be retrieved in some way from a cache or the IIS logs? The data is mildly sensitive and I'd like to know the potential risks.
Here is the query, if it makes any difference:
SELECT [fullname] AS [Employee],
       [reportsto] AS [Line manager],
       COUNT([fullname]) AS [Occasions],
       ROUND(SUM([dayslost]), 2) AS [Days lost],
       INT((COUNT([fullname]) * COUNT([fullname])) * SUM([dayslost])) AS [Bradford factor]
FROM [Staff absence reasons.csv]
WHERE UCASE([absencetype]) = 'SELF CERTIFIED SICKNESS'
   OR UCASE([absencetype]) = 'DOCTOR CERTIFIED SICKNESS'
GROUP BY [fullname],
         [reportsto]
ORDER BY 2 ASC,
         5 DESC;
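For what it's worth, the [Bradford factor] column in that query is just spells squared times total days lost per employee; the same calculation in Python, assuming a CSV with those column names:

import csv
from collections import defaultdict

spells = defaultdict(int)    # number of absence occasions per person
days = defaultdict(float)    # total days lost per person

with open('Staff absence reasons.csv', newline='') as f:
    for row in csv.DictReader(f):
        if row['absencetype'].upper() in ('SELF CERTIFIED SICKNESS',
                                          'DOCTOR CERTIFIED SICKNESS'):
            key = (row['fullname'], row['reportsto'])
            spells[key] += 1
            days[key] += float(row['dayslost'])

for (name, manager), s in sorted(spells.items()):
    print(name, manager, s, round(days[(name, manager)], 2),
          int(s * s * days[(name, manager)]))  # Bradford factor = S*S*D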
It's unlikely that a cache or log would hold the content of the data; a proxy is more likely to be able to get at it (download Fiddler and see what you can capture).
The data would probably be transmitted in base64, which is trivial to decode, so anyone who can intercept the traffic could get your CSV file as it's transmitted.
If you wanted to reduce the likelihood of this, you should set up SSL on the remote site (as a minimum), and you could even go so far as using a VPN between the two machines.
However, what you don't cover is where the data is stored on the server and how it's processed... if it's left lying around in your site root, then anyone could find it.
