I'm new to SQL Developer Data Modeler. I made a model using this tool, but I don't know how to re-open it, because SQL Developer has created XML files. The app created a folder that contains two folders, map and rel; map contains two files.
I don't know which file I have to open from SQL Developer to reload the designed model (a relational model), and I don't know how to reload all the tables.
I opened every XML file, but the model does not load the tables. What do I have to do?
Solution:
I created a new relational model design, located the main project file, copied the main file from the damaged project, and changed the following lines:
<relationalModel class="oracle.dbtools.crest.model.design.relational.RelationalDesign" name="PreciosModel" id="CF6F32EB-1ADF-A4D4-42E5-1871F2A3D75A" mainViewID="1A7856BA-66B4-D0E8-BF00-0204C3F41BBB">
<createdBy>Raul Martinez</createdBy>
<createdTime>2016-02-15 23:01:24 UTC</createdTime>
<ownerDesignName>Diseño2</ownerDesignName>
<shouldBeOpen>false</shouldBeOpen>
<selectedRDBMSSite>32076570-2523-435C-2E92-BF29817DFF70</selectedRDBMSSite>
</relationalModel>
Last owner name: preciosModel
New owner name: Diseño2
Again, I solved it myself.
I have a couple of large .xlsb files in 2FA-protected SharePoint. They refresh periodically, and I'd like to automate the process of pulling them across to a local directory. I can already do this in Power BI by polling the folder list, filtering to the folder/files that I want, importing them, and using an R script to write the result to an .rds file (it doesn't need to be .rds; any compressed format would do). Here's the code:
let
    #"~ Query ~" = "",
    // Address of the SharePoint folder
    SPAddress = "https://....sharepoint.com/sites/...",
    // Poll the content
    Source15 = SharePoint.Files(SPAddress, [ApiVersion=15]),
    // ... some code to filter the content list down to the 2 .xlsb files I'm interested in -
    // they're listed as nested 'binary' items under column 'Content' within table 'xlsbList'
    // R export within an arbitrary 'add column' instruction
    ExportRDS = Table.AddColumn(xlsbList, "Export", each R.Execute(
        "saveRDS(dataset, file = ""C:/Users/current.user/Desktop/XLSBs/" & [Label] & ".rds"")",
        [dataset=Excel.Workbook([Content])[Data]{0}]))
However, the files are so large that my login times out before the refresh can complete. I've tried using R's file.copy command instead of saveRDS, to pick the files up as binaries (so Power BI never has to import them):
R.Execute("file.copy(dataset, ""C:/Users/current.user/Desktop/XLSBs/""),[dataset=[Content]])
with dataset=[Content] instead of dataset=Excel.Workbook([Content])[Data]{0} (which gives me a different error, but would in any event run into the same timeout issues as before), but it tells me The Parameter 'dataset' isn't a Table. Is there a way to reference what Power BI sees as binary objects from within nested R (or Python) code, so that I can copy them to a local directory without Power BI importing them as data?
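One direction I've considered but haven't been able to verify, since R.Execute only accepts table parameters: wrap the binary as base64 text in a single-cell table, then decode it back to bytes on the R side. This assumes the base64enc R package is installed, and that a very large base64 string survives the hand-off; the "b64" column name and "ExportBinary" step name are just placeholders:

ExportBinary = Table.AddColumn(xlsbList, "Export", each R.Execute(
    "writeBin(base64enc::base64decode(as.character(dataset[[1]][1])), ""C:/Users/current.user/Desktop/XLSBs/" & [Label] & ".xlsb"")",
    [dataset = #table({"b64"}, {{Binary.ToText([Content], BinaryEncoding.Base64)}})]))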
Unfortunately I don't have permissions to set the SharePoint site up for direct access from R/Python, or I'd leave Power BI out entirely.
Thanks in advance for your help
Whenever I make a custom item, I am unable to drag it onto a hotbar. When I try to pick it up, the icon turns into a question mark and will not stick to a hotkey.
For example, I made an exact copy of the Murloc Costume (id 33079) at id 50017 (which was a free slot in my DB). The original I can put on a hotbar; the custom one I cannot.
Here's a gif of the issue
Answer: if it's not in the client-side DBCs, it will not work properly.
From ReynoldsCahoon on the AC Discord:
I just modified a DBC recently. I had to use a tool (I used Ladik's MPQ Editor) to extract the specific DBC I wanted to modify, and then I used WDBX to open the DBC and manipulate it. In WDBX you can output it in a variety of formats (like CSV or SQL) so you can modify the values any way you like, and then reimport (via CSV or SQL) the values back in.
I loosely followed the guide here: https://model-changing.net/tutorials/article/23-41-creating-your-first-mpq-patch/
I exported a DBC containing all of the Character Titles in the game. Erased all of them, and then imported a bunch of new values of my own choosing. Instead of importing that back into the MPQ I got it from, I created a new MPQ called patch-4.mpq that I placed in my client WotLK/Data/ directory, and then on the server, I placed the DBC file into the worldserver/data/dbc/ directory, replacing the original DBC.
I think this can optionally be overridden in the database, using the associated _dbc table to override values from the DBC files (someone correct me if this part is wrong).
When I use the "normal" Publish tool built into Visual Studio for ASP.NET, it seems it does not include the XML comment files that belong to dependent projects/assemblies.
For instance, let's say we have two projects:
Presentation = the web application
Definitions = a project containing definitions of models
The Presentation project has a dependency on the Definitions project.
The Presentation project has XML comments enabled, and so does the Definitions project (for all configurations).
The Presentation project has a few comments added to the actions of its controller(s).
The Definitions project has a few comments added to the properties of its model(s).
The expected result would be that when I publish the Presentation project, we should end up with two XML files in the bin folder:
Presentation.xml
Definitions.xml
The names might of course differ if another name has been specified in the build properties of each project.
The actual result is that only Presentation.xml is published to the bin folder.
I have tried linking the resulting Definitions.xml file into the Presentation project, setting it as "Content" with "Copy always"; that ended up with the file being copied to the root folder of the application during publish, not the /bin folder.
I have tried the same thing but linked the file into the bin folder of the Presentation project; that ended up with the Definitions.xml file at /bin/bin/Definitions.xml.
I have tried linking and setting "Copy always", but that did nothing.
Here is a simple sample that can be used to reproduce the problem:
https://github.com/Inx51/publishdemo
However, one thing to note is that the Definitions.xml file is indeed copied to the /bin folder during build; it is just not copied when publishing.
Does anyone have a workaround for this strange behaviour?
Edit your project file (.csproj/.vbproj) and include this in the first PropertyGroup:
<ExcludeXmlAssemblyFiles>false</ExcludeXmlAssemblyFiles>
This will include the .xml files of all dependent assemblies.
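For example, a minimal sketch of where the property sits in the .csproj; the surrounding properties are illustrative, only the ExcludeXmlAssemblyFiles line comes from this answer:

<PropertyGroup>
  <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
  <OutputPath>bin\</OutputPath>
  <DocumentationFile>bin\Presentation.xml</DocumentationFile>
  <!-- keep XML documentation files from referenced assemblies when publishing -->
  <ExcludeXmlAssemblyFiles>false</ExcludeXmlAssemblyFiles>
</PropertyGroup>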
When we publish a page or dynamic component from Tridion, is it possible to add an external multimedia file/content (e.g. a JPG image) to the currently executing/rendering package at publish time, so that the final transport package contains this binary file along with the originally published content?
Is this achievable by customizing the Tridion renderer/resolver? If yes, please provide some input.
Note: The binary content that needs to be pushed into the package at publish time is not present as a multimedia component in Tridion; it lives at a file location outside the Tridion CMS. Instead, we have a stub multimedia component, holding a dummy image, used inside the published component/page; we plan to replace the stub image with the original image at publish (rendering/resolving) time.
Since we have a huge bulk of binary content stored in a DAM tool, we don't want that data to be recreated as multimedia components in Tridion. Instead, we want to use that data by querying the DAM tool and attaching it to the Tridion package with some logical reference (we plan to maintain a one-to-one mapping between the stub multimedia component's TCM ID and the original content in a mapping DB).
Please let us know if there is any solution for attaching external binary content to the package at publish time.
The best and easiest way is to use the mechanism provided by Tridion out of the box for this. Create a new multimedia component, select "External" in the resource type drop-down, and type the URL to the object. As long as you can address it with a URL, it will work exactly as you want (the item will be added to the package and sent to the delivery server).
If this is not good enough for you, then yes, you can add it to the package yourself. I've done this in the past with code somewhat like this:
FileInfo file = // Weird logic to get a FileInfo object from external system
Item item = package.GetItem("My original Item");
item.SetAsStream(file.OpenRead());
This replaced the content of my original component with the actual file I wanted. This will work for you IF the original component is also a multimedia component. If it's not, just create a new item with your own name, etc. If possible, do use the out-of-the-box process instead.
PS: FileInfo Class.
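If you do need a brand-new package item rather than replacing an existing one, a rough sketch along the same lines, based on the templating Package API; the item name, path, and MIME type here are illustrative placeholders:

// Push the external file into the package as a new binary item
FileInfo file = new FileInfo(@"D:\dam-cache\original-image.jpg");
Item newItem = package.CreateStreamItem(new ContentType("image/jpeg"), file.OpenRead());
package.PushItem("ExternalBinary", newItem);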
As Nuno suggested, the best way is to use a multimedia component with the 'External' resource type. You may not need to create these manually; you can automate the creation using core services or the API.
Another way, which I have used before, is to create a zip file at run time and add it to the package with the following code. Hope it helps.
using (MemoryStream ms = new MemoryStream())
{
    // 'zip' is a zip archive assembled earlier in the template
    zip.Save(ms);
    downloadAllInOneURL = String.Format("ZipAsset{0}.zip", uniqueZipID);
    // Add the stream to the rendered item as a binary; the returned Url points at the published file
    downloadAllInOneURL = m_Engine.PublishingContext.RenderedItem.AddBinary(ms, downloadAllInOneURL, "", "application/zip").Url;
    downloadAllInOneSize = getSize(ms.Length);
}
I have a site where I need to develop site search functionality. The data may reside in database tables or in .aspx pages as static words. I searched Google and found that Lucene.Net may be appropriate for site search, but I have never used Lucene.Net, so I don't know how to create a Lucene.Net index file. I want to develop two utilities for my site:
1) one to create & update the index file, reading data from database tables & the physical .aspx files;
2) one to search single or multiple keywords against the index file.
I found a bit of code which I just do not understand:
string indexFileLocation = @"C:\Index";
string stopWordsLocation = @"C:\Stopwords.txt";
var directory = FSDirectory.Open(new DirectoryInfo(indexFileLocation));
Analyzer analyzer = new StandardAnalyzer(
    Lucene.Net.Util.Version.LUCENE_29, new FileInfo(stopWordsLocation));
What is Lucene.Net.Util.Version.LUCENE_29? What is stopWordsLocation? How does the data need to be stored in Stopwords.txt?
I have no concept of how to develop the two utilities above, so please guide me on how to search my DB as well as my .aspx files with Lucene.Net. I will be glad if someone discusses it here with a bit of sample code. Thanks.
Lucene.Net.Util.Version.LUCENE_29 just indicates the Lucene version you are using; you should always use the most up-to-date one in new code. It is there for backward compatibility in case you upgrade your Lucene to a version that changes the StandardAnalyzer but you don't want to re-index all your data.
stopWordsLocation is the location of a file with your stop words, i.e. words you don't want to index, such as: it, he, she, the, or, and, etc.
It's a regular text file; each line should contain one stop word, with a line break separating the lines.
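For example, Stopwords.txt might simply contain:

it
he
she
the
or
and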
http://lucene.apache.org/core/old_versioned_docs/versions/3_0_1/api/all/org/apache/lucene/analysis/WordlistLoader.html#getWordSet(java.io.Reader)
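To make the two utilities concrete, here is a rough sketch against the Lucene.Net 2.9-era API from your snippet. The field names ("url", "body"), paths, and sample text are placeholders for your own DB/.aspx reading logic, and member casing can differ slightly between Lucene.Net releases:

using System.IO;
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Documents;
using Lucene.Net.Index;
using Lucene.Net.QueryParsers;
using Lucene.Net.Search;
using Lucene.Net.Store;

// --- Utility 1: create/update the index ---
var directory = FSDirectory.Open(new DirectoryInfo(@"C:\Index"));
var analyzer = new StandardAnalyzer(Lucene.Net.Util.Version.LUCENE_29);
// true = create the index from scratch; pass false to update an existing index
var writer = new IndexWriter(directory, analyzer, true, IndexWriter.MaxFieldLength.UNLIMITED);

// Add one Document per database row or per .aspx page
var doc = new Document();
doc.Add(new Field("url", "/products.aspx", Field.Store.YES, Field.Index.NOT_ANALYZED));
doc.Add(new Field("body", "text extracted from the page or database row", Field.Store.YES, Field.Index.ANALYZED));
writer.AddDocument(doc);
writer.Optimize();
writer.Close();

// --- Utility 2: search one or more keywords against the index ---
var searcher = new IndexSearcher(directory, true);
var parser = new QueryParser(Lucene.Net.Util.Version.LUCENE_29, "body", analyzer);
Query query = parser.Parse("keyword1 OR keyword2");
TopDocs hits = searcher.Search(query, 10);
foreach (ScoreDoc hit in hits.ScoreDocs)
{
    Document found = searcher.Doc(hit.Doc);
    // found.Get("url") is the page to show in your results list
}
searcher.Close();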