Use TermRaider plugins in GATE

I want to use TermRaider features with GATE. Could someone please post some sample code to load and use this resource in a Java class? I have tried the following but failed.
Gate.getCreoleRegister().registerDirectories(new URL("file:///D:/misc_workspace/gate-7.1-build4485-SRC/plugins/TermRaider"));
ProcessingResource termRaider = (ProcessingResource) Factory.createResource(
        "gate.termraider.TermRaiderEnglish", Factory.newFeatureMap());
Exception:
gate.termraider.TermRaiderEnglish cannot be cast to gate.ProcessingResource
Could anyone please suggest how I should proceed?

The TermRaider system isn't a single PR, it's a whole application (in fact a Groovy ScriptableController). The TermRaiderEnglish resource is just a hook to make that application appear in the "ready-made applications" menu of the GATE Developer GUI.
In embedded code you can load the application using the PersistenceManager:
File termRaiderPlugin = new File(Gate.getPluginsHome(), "TermRaider");
File gappFile = new File(new File(termRaiderPlugin, "applications"),
"termraider-eng.gapp");
CorpusController trApp = (CorpusController)PersistenceManager.loadObjectFromFile(
gappFile);
When you run the application over a corpus, it creates new instances of three "termbank" LRs containing the information about the newly discovered terms. The vanilla application is really intended for GUI rather than embedded use, so it doesn't store references to these new LRs anywhere useful; you'll have to interrogate the CreoleRegister to find them. You might prefer to make your own copy of the application and tweak the control script to store the termbank instances as (say) features on the Corpus, by adding something like
corpus.features.tfidfTermbank = termbank0
corpus.features.annotationTermbank = termbank1
corpus.features.hyponymyTermbank = termbank2
to the end of the control script. You could then access them in your Java code via corpus.getFeatures().get("tfidfTermbank") etc.
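If you stick with the vanilla application instead, a minimal sketch of interrogating the CreoleRegister for the new termbanks might look like this (assuming CreoleRegister.getLrInstances() lists all live LR instances; check this against your GATE version):
// hedged sketch: find the termbank LRs created by a TermRaider run
for (LanguageResource lr : Gate.getCreoleRegister().getLrInstances()) {
    if (lr instanceof AbstractTermbank) {
        System.out.println("Found termbank: " + lr.getName());
    }
}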
Since these Termbank classes are themselves part of the TermRaider plugin, you'll probably want to add gate-termraider.jar to your main application classpath rather than loading it via the GateClassLoader.

import gate.Corpus;
import gate.CorpusController;
import gate.Document;
import gate.Factory;
import gate.FeatureMap;
import gate.Gate;
import gate.termraider.bank.AbstractTermbank;
import gate.termraider.output.CsvGenerator;
import gate.util.GateException;
import gate.util.Out;
import gate.util.persistence.PersistenceManager;
import java.io.File;
import java.io.IOException;
import java.net.URL;
public class TermRaiderExample {

    public static void main(String[] args) throws IOException, GateException {
        // initialise the GATE library
        Out.prln("Initialising GATE...");
        Gate.init();
        Out.prln("...GATE initialised");

        // load the TermRaider application from the plugin's .gapp file
        File termRaiderPlugin = new File(Gate.getPluginsHome(), "TermRaider");
        File gappFile = new File(new File(termRaiderPlugin, "applications"),
                "termraider-eng.gapp");
        CorpusController trApp =
                (CorpusController) PersistenceManager.loadObjectFromFile(gappFile);
        System.out.println("TermRaider loaded successfully!!!");

        // load the txt files from a folder into a new corpus
        Corpus corpus = (Corpus) Factory.createResource("gate.corpora.CorpusImpl");
        String dirname = "Desktop/GermanHPFCompetition/termRaider";
        File dir = new File(dirname);
        for (String fileName : dir.list()) {
            // File.toURI().toURL() produces a properly escaped file: URL
            URL u = new File(dir, fileName).toURI().toURL();
            FeatureMap params = Factory.newFeatureMap();
            params.put("sourceUrl", u);
            params.put("preserveOriginalContent", Boolean.TRUE);
            params.put("collectRepositioningInfo", Boolean.TRUE);
            Document doc = (Document)
                    Factory.createResource("gate.corpora.DocumentImpl", params);
            corpus.add(doc);
        } // for each file in the folder

        // run the TermRaider application over the corpus; no trApp.init() here,
        // as a controller loaded from a .gapp file is already initialised
        trApp.setCorpus(corpus);
        trApp.execute();
        Corpus outputCorpus = trApp.getCorpus();
        System.out.println("TermRaider executed successfully!!!");

        // fetch the termbanks that the tweaked control script stored as corpus features
        AbstractTermbank tb1 = (AbstractTermbank) outputCorpus.getFeatures().get("tfidfTermbank");
        AbstractTermbank tb2 = (AbstractTermbank) outputCorpus.getFeatures().get("hyponymyTermbank");
        AbstractTermbank tb3 = (AbstractTermbank) outputCorpus.getFeatures().get("annotationTermbank");
        System.out.println(tb1);
        System.out.println(tb2);
        System.out.println(tb3);

        // write each termbank out as CSV
        CsvGenerator generator = new CsvGenerator();
        File outputFile1 = new File("Desktop/GermanHPFCompetition/termRaider/tfidfTermbank.csv");
        File outputFile2 = new File("Desktop/GermanHPFCompetition/termRaider/hyponymyTermbank.csv");
        File outputFile3 = new File("Desktop/GermanHPFCompetition/termRaider/annotationTermbank.csv");
        double threshold = 0;
        generator.generateAndSaveCsv(tb1, threshold, outputFile1);
        generator.generateAndSaveCsv(tb2, threshold, outputFile2);
        generator.generateAndSaveCsv(tb3, threshold, outputFile3);
        System.out.println("CSV files created!!!");
    } // end of main
} // end of class
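Note that the three getFeatures().get(...) lookups above only return termbanks if the control script inside termraider-eng.gapp has been tweaked as described earlier (corpus.features.tfidfTermbank = termbank0 and so on); with the unmodified application they will simply return null.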


Import and parse a file to fill the form

Currently, I'm developing a custom app. So far I have the DocType ready to be filled in manually. We have files (SQLite3) that I'd like to upload, parse, and extract the necessary fields from to fill in the form. Basically like the import-data tool. In my case, no bulk operation is needed, and if possible the extraction should be done server-side.
What I tried so far
I added a Server Action to call a whitelisted method of my app. I can get the current doc with:
import json
import frappe

@frappe.whitelist()
def upload_data_and_extract(doc: str):
    """
    Uploads and processes an existing file and extracts data from it
    """
    doc_dict = json.loads(doc)
    custom_dt = frappe.get_doc('CustomDT', doc_dict['name'])
    # parse data here
    custom_dt.custom_field = "new value from parsed data"
    custom_dt.save()
    return doc  # How do I return JSON back to the website from the updated doc?
With this approach, I can only do the parsing after the document has been saved. I'd rather update the fields of the form when the attach field gets modified. Thus, I tried the client-script approach:
frappe.ui.form.on('CustomDT', {
    original_data: function(frm, cdt, cdn) {
        if (frm.doc.original_data) {
            frappe.call({
                method: "customapp.customapp.doctype.customdt.customdt.parse_file",
                args: {
                    "doc": frm.doc
                },
                callback: function(r) {
                    // code snippet
                }
            });
        }
    }
});
Here are my questions:
What's the best approach to upload a file that needs to be parsed to fill the form?
What's the easiest way to access the uploaded file (attachment)? (Is there something like frappe.get_attachment()?)
How can I refresh the form fields in the callback easily?
I appreciate any help on these topics.
Simon
I have developed a similar tool, but for CSV upload. I'll share it here, as it should help you achieve your result.
JS file:
// Copyright (c) 2020, Bhavesh and contributors
// For license information, please see license.txt
frappe.ui.form.on('Car Upload Tool', {
    upload: function(frm) {
        frm.call({
            doc: frm.doc,
            method: "upload_data",
            freeze: true,
            freeze_message: "Data Uploading ...",
            callback: function(r) {
                console.log(r)
            }
        })
    }
});
Python code:
# -*- coding: utf-8 -*-
# Copyright (c) 2020, Bhavesh and contributors
# For license information, please see license.txt
from __future__ import unicode_literals
import frappe
from frappe.model.document import Document
import csv
import json

class CarUploadTool(Document):
    @frappe.whitelist()  # required for frm.call in recent Frappe versions
    def upload_data(self):
        # resolve the attached file to its File doc, then to a path on disk
        _file = frappe.get_doc("File", {"file_url": self.attach_file})
        filename = _file.get_full_path()
        csv_json = csv_to_json(filename)
        make_car(csv_json)

def csv_to_json(csvFilePath):
    jsonArray = []
    # read csv file
    with open(csvFilePath, encoding='latin-1') as csvf:
        # load csv file data using csv library's dictionary reader
        csvReader = csv.DictReader(csvf, delimiter=";")
        # convert each csv row into a python dict
        for row in csvReader:
            frappe.errprint(row)
            # add this python dict to the array
            jsonArray.append(row)
    # return the array of row dicts
    return jsonArray

def make_car(car_details):
    for row in car_details:
        create_brand(row.get('Marke'))
        create_car_type(row.get('Fahrzeugkategorie'))
        if not frappe.db.exists("Car", row.get('Fahrgestellnr.')):
            car_doc = frappe.get_doc(dict(
                doctype = "Car",
                brand = row.get('Marke'),
                model_and_description = row.get('Bezeichnung'),
                type_of_fuel = row.get('Motorart'),
                color = row.get('Farbe'),
                transmission = row.get('Getriebeart'),
                horsepower = row.get('Leistung (PS)'),
                car_type = row.get('Fahrzeugkategorie'),
                car_vin_id = row.get('Fahrgestellnr.'),
                licence_plate = row.get('Kennzeichen'),
                location_code = row.get('Standort')
            ))
            car_doc.model = car_doc.model_and_description.split(' ')[0] or ''
            car_doc.insert(ignore_permissions=True)
        else:
            car_doc = frappe.get_doc("Car", row.get('Fahrgestellnr.'))
            car_doc.brand = row.get('Marke')
            car_doc.model_and_description = row.get('Bezeichnung')
            car_doc.model = car_doc.model_and_description.split(' ')[0] or ''
            car_doc.type_of_fuel = row.get('Motorart')
            car_doc.color = row.get('Farbe')
            car_doc.transmission = row.get('Getriebeart')
            car_doc.horsepower = row.get('Leistung (PS)')
            car_doc.car_type = row.get('Fahrzeugkategorie')
            car_doc.car_vin_id = row.get('Fahrgestellnr.')
            car_doc.licence_plate = row.get('Kennzeichen')
            car_doc.location_code = row.get('Standort')
            car_doc.save(ignore_permissions=True)
    frappe.msgprint("Car Uploaded Successfully")

def create_brand(brand):
    if not frappe.db.exists("Brand", brand):
        frappe.get_doc(dict(
            doctype = "Brand",
            brand = brand
        )).insert(ignore_permissions=True)

def create_car_type(car_type):
    if not frappe.db.exists("Vehicle Type", car_type):
        frappe.get_doc(dict(
            doctype = "Vehicle Type",
            vehicle_type = car_type
        )).insert(ignore_permissions=True)
For this upload tool, I created a single doctype with the fields below:
Attach File (Field Type = Attach)
Button (Field Type = Button)
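Regarding your third question (refreshing the form in the callback): after frappe.call / frm.call returns, you can for example call frm.reload_doc() to re-fetch the whole document, or frm.refresh_field('your_field') for a single field; both are part of the standard Frappe form API ('your_field' here is a placeholder name).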

Adding # to formula after =

When I add the formula FORECAST.ETS, an # gets inserted after the equals sign, like this: =#FORECAST.ETS. Why is this happening?
The code snippet is:
ws.cell(column=1, row=2, value="=FORECAST.ETS(...)")
When I open it with Excel (latest Office 365 version), it shows as =#FORECAST.ETS(..)
I have hit the same issue, not with Python and openpyxl but with .NET Core C# and EPPlus. What follows is perhaps a workaround based on my findings... but not ideal. I suspect it will work with openpyxl too.
Re-creating the problem
I have written a simplified C# console app that first creates a new XLSX (foo.xlsx), writes out some data and my formula, and then outputs the cell with the formula and its value to the console. It then saves and closes the XLSX, reopens it, and again outputs the formula cell and its value. The code is as follows:
using OfficeOpenXml;
using System;
using System.IO;

namespace TestFormula
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Test starts");
            ExcelPackage.LicenseContext = LicenseContext.NonCommercial;

            if (File.Exists($".\\foo.xlsx"))
            {
                File.Delete($".\\foo.xlsx");
            }

            // create the workbook, write the data and the formula, then save
            using (var ep = new ExcelPackage(new FileInfo($".\\foo.xlsx")))
            {
                ExcelWorkbook wb = ep.Workbook;
                ExcelWorksheet wsTest = wb.Worksheets.Add("Test");
                // Add some look up data...
                for (int row = 1; row <= 5; row++)
                {
                    wsTest.Cells[row, 1].Value = row;
                    wsTest.Cells[row, 2].Value = $"Name {row}";
                }
                wsTest.Cells[1, 4].Formula = $"=XLOOKUP($A3,$A:$A,$B:$B)";
                Console.WriteLine($"Add: formula=\"{wsTest.Cells[1, 4].Formula}\"");
                Console.WriteLine($"Add: value=\"{wsTest.Cells[1, 4].Value}\"");
                ep.Save();
            }

            // reopen the workbook and read the formula cell back
            using (var ep = new ExcelPackage(new FileInfo($".\\foo.xlsx")))
            {
                ExcelWorkbook wb = ep.Workbook;
                ExcelWorksheet wsTest = wb.Worksheets["Test"];
                Console.WriteLine($"Open: formula=\"{wsTest.Cells[1, 4].Formula}\"");
                Console.WriteLine($"Open: value=\"{wsTest.Cells[1, 4].Value}\"");
            }

            Console.WriteLine("Test ends");
        }
    }
}
The output from the above shows that the formula, after closing and re-opening the XLSX with EPPlus, reads just as it was written.
However, if I open the file with Excel I can see that an # has been inserted after the = sign.
If I then double click on the formula cell, I get an Excel error message...
I answered "no" to this question because I wanted to continue to experiment with what was happening behind the scenes.
After double clicking the formula cell to edit it, when I now hit ENTER with the # in the formula, it works. At this point I save the XLSX with the change made.
If I now delete some of my code and just run...
using OfficeOpenXml;
using System;
using System.IO;

namespace TestFormula
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Test starts");
            ExcelPackage.LicenseContext = LicenseContext.NonCommercial;

            using (var ep = new ExcelPackage(new FileInfo($".\\foo.xlsx")))
            {
                ExcelWorkbook wb = ep.Workbook;
                ExcelWorksheet wsTest = wb.Worksheets["Test"];
                Console.WriteLine($"Open: formula=\"{wsTest.Cells[1, 4].Formula}\"");
                Console.WriteLine($"Open: value=\"{wsTest.Cells[1, 4].Value}\"");
            }

            Console.WriteLine("Test ends");
        }
    }
}
The output now shows something particularly interesting: the formula has been modified by Excel and has been prefixed with _xlfn.SINGLE.
Research
It is worth declaring here that I am running Microsoft 365 and I always have patches and updates automatically applied as soon as they become available. So my version of Excel is the latest version.
Googling for _xlfn.SINGLE provides a number of hits (see References below), and from these I have concluded the following:
In Aug 2019 Microsoft released an update that introduced a new function called XLOOKUP, intended to replace VLOOKUP and HLOOKUP. As such, the XLSX file format was updated to allow for this new feature. The second reference below mentions the introduction of other functions around Sep 2018.
I'm guessing that the EPPlus library (and probably openpyxl too) has not been updated to compensate for the addition of these new/changed features.
When Excel opens an older file version and detects a more recent function (i.e. one that was not available in the earlier file version), it does not automatically resolve the formula, but instead throws the error I mentioned above, and then resolves the problem by prefixing the new function name with _xlfn.SINGLE.
Solution
It's dirty and short term, until the EPPlus/openpyxl libraries catch up. In my case, in code simply replace...
wsTest.Cells[1, 4].Formula = $"=XLOOKUP($A3,$A:$A,$B:$B)";
... with ...
wsTest.Cells[1, 4].Formula = $"_xlfn.SINGLE(_xlfn.XLOOKUP($A3,$A:$A,$B:$B))";
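If the same applies to the original openpyxl case (openpyxl also writes the formula text verbatim), the equivalent workaround would be to write the prefixed form as the cell value, e.g. ws.cell(column=1, row=2, value='_xlfn.FORECAST.ETS(...)'); I haven't verified this with openpyxl, so treat it as an assumption.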
References
Issue: An _xlfn. prefix is displayed in front of a formula by Microsoft
XLOOKUP XMATCH FILTER RANDARRAY SEQUENCE SORT SORTBY UNIQUE CONCAT IFS MAXIFS MINIFS SWITCH TEXTJOIN by Andreas Killer

JMeter BeanShell Assertion can't identify FileUtil methods

I have written code to diff two text files in Java/Eclipse using
org.apache.commons.io.FileUtils and Set operations. The code works fine,
but the same code does not work in JMeter's BeanShell Assertion.
The lines with Set operations give an error.
Code:
import org.apache.commons.io.FileUtils;
import java.io.*;
import java.util.Date;
import java.text.DateFormat;
import java.text.SimpleDateFormat;
import java.util.concurrent.TimeUnit;
import java.util.ArrayList;
import java.util.Collection;
DateFormat dateFormat = new SimpleDateFormat("yyyy/MM/dd HH:mm");
Date startdate = new Date();
Date enddate = null;
PrintWriter out = new PrintWriter(new BufferedWriter(new FileWriter("C://Work//JMeter//Logs//D2MDS.txt")));
out.println("Automated ETL Validator -- START -- " + dateFormat.format(startdate));
out.println("=========================================================================");
try {
Set<String> Slines = new TreeSet<String>(FileUtils.readLines(new File ("C://Work//JMeter//Data//SourceSQLLog.csv")));
//Set<String> Tlines = new TreeSet<String>(FileUtils.readLines(new File ("C://Work//JMeter//Data//TargetSQLLog.csv")));
//Slines.removeAll(Tlines);
//out.println(Slines);
}
catch (Exception e) {
out.println(ExceptionUtils.getStackTrace(e));
}
Date enddate = new Date();
enddate = new Date();
long duration = enddate.getTime() - startdate.getTime();
long diffInMinutes = TimeUnit.MILLISECONDS.toMinutes(duration);
out.println("=========================================================================");
out.println("Automated ETL Validator -- END -- " + dateFormat.format(enddate));
out.println("\n\n\n*************************************************************************");
out.println("EXECUTION SUMMARY");
out.println("====================");
out.println("Total Lines in Extract File: " + ${__V(ret_val_#)});
out.println("Total Failures in Extract File: " + fail_cnt);
out.println("Failed Lines of Extract File = " + fail_ln);
out.println("Extract Validation -- END -- " + dateFormat.format(enddate) + " -- Duration: " + diffInMinutes + " Minutes");
out.close();
The BeanShell Assertion throws an obscure error:
Assertion failure message: org.apache.jorphan.util.JMeterException: Error invoking bsh method: eval In file: inline evaluation of: ``import org.apache.commons.io.FileUtils; import java.io.*; import java.util.Date; . . . '' Encountered "=" at line 18, column 20.
The 5 JAR files are present inside JMeter's /lib folder. How do I make JMeter recognize the Set methods?
Thanks
First:
It isn't connected with the FileUtils class usage; the fault point is line 18, as per
Encountered "=" at line 18, column 20.
Change your line #18 from
Date enddate = new Date();
to
enddate = new Date();
as you have already defined it at line #3. The Beanshell interpreter fails to parse line 18 because the variable is already present in this scope.
Second
Beanshell is not Java, so
Set<String> Slines = new TreeSet<String>
will also fail. You will need to remove the diamond brackets (generics):
Set Slines = new TreeSet...
Third:
I also don't see where fail_cnt and fail_ln are defined, and I would suggest substituting
${__V(ret_val_#)}
with
vars.get("ret_val_#");
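Putting the three fixes together, the affected lines would read something like this (a sketch in BeanShell-compatible Java; it assumes commons-io's FileUtils.readLines(File) is available on the classpath, as in the original script):
// fix 1: plain assignment; enddate is already declared earlier in the script
enddate = new Date();
// fix 2: raw types instead of generics (and import java.util.Set / java.util.TreeSet)
Set Slines = new TreeSet(FileUtils.readLines(new File("C://Work//JMeter//Data//SourceSQLLog.csv")));
// fix 3: read the JMeter variable through vars instead of the ${...} shorthand
out.println("Total Lines in Extract File: " + vars.get("ret_val_#"));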
See How to use BeanShell: JMeter's favorite built-in component guide for Beanshell-related tips and tricks.
Hope this helps.

ScalikeJDBC + SQLite: Cannot change read-only flag after establishing a connection

I'm trying to get ScalikeJDBC and SQLite working together. I have some simple code based on the provided examples:
import scalikejdbc._, SQLInterpolation._

object Test extends App {
  Class.forName("org.sqlite.JDBC")
  ConnectionPool.singleton("jdbc:sqlite:test.db", null, null)
  implicit val session = AutoSession
  println(sql"""SELECT * FROM kv WHERE key == 'seq' LIMIT 1""".map(identity).single().apply())
}
It fails with an exception:
Exception in thread "main" java.sql.SQLException: Cannot change read-only flag after establishing a connection. Use SQLiteConfig#setReadOnly and QLiteConfig.createConnection().
at org.sqlite.SQLiteConnection.setReadOnly(SQLiteConnection.java:447)
at org.apache.commons.dbcp.DelegatingConnection.setReadOnly(DelegatingConnection.java:377)
at org.apache.commons.dbcp.PoolingDataSource$PoolGuardConnectionWrapper.setReadOnly(PoolingDataSource.java:338)
at scalikejdbc.DBConnection$class.readOnlySession(DB.scala:138)
at scalikejdbc.DB.readOnlySession(DB.scala:498)
...
I've tried both ScalikeJDBC 1.7 and 2.0; the error remains. As the SQLite driver I use "org.xerial" % "sqlite-jdbc" % "3.7.+".
What can I do to fix the error?
The following will create two separate connection pools, one for read-only operations and the other for writes.
import org.sqlite.{SQLiteConfig, SQLiteDataSource}
import scalikejdbc._

ConnectionPool.add("mydb", s"jdbc:sqlite:${db.getAbsolutePath}", "", "")
ConnectionPool.add(
  "mydb_ro", {
    val conf = new SQLiteConfig()
    conf.setReadOnly(true)
    val source = new SQLiteDataSource(conf)
    source.setUrl(s"jdbc:sqlite:${db.getAbsolutePath}")
    new DataSourceConnectionPool(source)
  }
)
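Queries can then be routed explicitly, e.g. NamedDB("mydb_ro") readOnly { implicit session => ... } for reads and NamedDB("mydb") for writes, the same NamedDB pattern used in the config-based answer below.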
I found that the reason is that you're using "org.xerial" % "sqlite-jdbc" % "3.7.15-M1". This version still looks unstable.
Use "3.7.2", the same as @kawty.
Building on @Synesso's answer, I expanded it slightly to read config values from config files and to set connection settings:
import javax.sql.DataSource
import org.sqlite.{SQLiteConfig, SQLiteDataSource}
import scalikejdbc._
import scalikejdbc.config.{JDBCSettings, TypesafeConfigReader}

case class SqlLiteDataSourceConnectionPool(source: DataSource,
                                           override val settings: ConnectionPoolSettings)
  extends DataSourceConnectionPool(source)

// read settings for the 'default' database
val cpSettings = TypesafeConfigReader.readConnectionPoolSettings()
val JDBCSettings(url, user, password, driver) = TypesafeConfigReader.readJDBCSettings()

// use those to create two connection pools
ConnectionPool.add("db", url, user, password, cpSettings)
ConnectionPool.add(
  "db_ro", {
    val conf = new SQLiteConfig()
    conf.setReadOnly(true)
    val source = new SQLiteDataSource(conf)
    source.setUrl(url)
    SqlLiteDataSourceConnectionPool(source, cpSettings)
  }
)

// example using 'NamedDB'
val name: Option[String] = NamedDB("db_ro") readOnly { implicit session =>
  sql"select name from users where id = $id".map(rs => rs.string("name")).single.apply()
}
This worked for me with org.xerial/sqlite-jdbc 3.28.0:
String path = ...
SQLiteConfig config = new SQLiteConfig();
config.setReadOnly(true);
return DriverManager.getConnection("jdbc:sqlite:" + path, config.toProperties());
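For context, here is a minimal usage sketch of a connection built this way, running the query from the question (it assumes the kv table has key and value columns, which the question doesn't spell out):
SQLiteConfig config = new SQLiteConfig();
config.setReadOnly(true);
try (Connection conn = DriverManager.getConnection("jdbc:sqlite:test.db", config.toProperties());
     Statement st = conn.createStatement();
     ResultSet rs = st.executeQuery("SELECT * FROM kv WHERE key = 'seq' LIMIT 1")) {
    while (rs.next()) {
        // assumed column names; adjust to the real schema
        System.out.println(rs.getString("key") + " = " + rs.getString("value"));
    }
}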
Interestingly, I wrote a different solution on the issue on the xerial repo:
// PoolProperties and DataSource here are the Tomcat JDBC pool classes
// (org.apache.tomcat.jdbc.pool); SQLiteOpenMode is org.sqlite.SQLiteOpenMode
PoolProperties props = new PoolProperties();
props.setDriverClassName("org.sqlite.JDBC");
props.setUrl("jdbc:sqlite:...");
Properties extraProps = new Properties();
extraProps.setProperty("open_mode", SQLiteOpenMode.READONLY.flag + "");
props.setDbProperties(extraProps);
// This line can be left in or removed; it no longer causes a problem
// as long as the open_mode code is present.
props.setDefaultReadOnly(true);
return new DataSource(props);
I don't recall why I needed the second, and was then able to simplify it back to the first one. But if the first doesn't work, you might try the second. It uses a SQLite-specific open_mode flag that then makes it safe (but unnecessary) to use the setDefaultReadOnly call.

How to remove the full file path from YSOD?

In the YSOD below, the stack trace (and the source-file line) contains the full path to the source file. Unfortunately, that path contains my user name, which is firstname.lastname.
I want to keep the YSOD, as well as the stack trace including the file name and line number (it's a demo and testing system), but the user name should vanish from the source-file path. Seeing the file's path is also OK, but the path should be truncated at the solution root directory.
(without me having to copy-paste the solution to another path every time before publishing it...)
Is there any way to accomplish this?
Note: Custom error pages aren't an option.
The path is embedded in the .pdb files, which are produced by the compiler. The only way to change this is to build your project in some other location, preferably somewhere near the build server.
Never mind, I found it out myself.
Thanks to Anton Gogolev's statement that the path is in the .pdb file, I realized it is possible.
One can do a binary search-and-replace on the .pdb file and replace the user name with something else.
I quickly tried using this:
https://codereview.stackexchange.com/questions/3226/replace-sequence-of-strings-in-binary-file
and it worked (on 50% of the .pdb files).
So beware: the code snippet in that link seems to be buggy.
But the concept works.
I now use this code:
public static void SizeUnsafeReplaceTextInFile(string strPath, string strTextToSearch, string strTextToReplace)
{
    byte[] baBuffer = System.IO.File.ReadAllBytes(strPath);
    List<int> lsReplacePositions = new List<int>();

    System.Text.Encoding enc = System.Text.Encoding.UTF8;
    byte[] baSearchBytes = enc.GetBytes(strTextToSearch);
    byte[] baReplaceBytes = enc.GetBytes(strTextToReplace);

    // SearchBytePattern is the helper from the linked Code Review post;
    // it fills lsReplacePositions with the offset of every match
    var matches = SearchBytePattern(baSearchBytes, baBuffer, ref lsReplacePositions);
    if (matches != 0)
    {
        foreach (var iReplacePosition in lsReplacePositions)
        {
            for (int i = 0; i < baReplaceBytes.Length; ++i)
            {
                baBuffer[iReplacePosition + i] = baReplaceBytes[i];
            } // Next i
        } // Next iReplacePosition
    } // End if (matches != 0)

    System.IO.File.WriteAllBytes(strPath, baBuffer);

    Array.Clear(baBuffer, 0, baBuffer.Length);
    Array.Clear(baSearchBytes, 0, baSearchBytes.Length);
    Array.Clear(baReplaceBytes, 0, baReplaceBytes.Length);
    baBuffer = null;
    baSearchBytes = null;
    baReplaceBytes = null;
} // End Sub ReplaceTextInFile
Replace firstname.lastname with something that has exactly the same number of characters, for example "Poltergeist". The replacement must be the same length, otherwise the offsets inside the .pdb file break.
Now I only need to figure out how to run the binary search and replace as a post-build action.
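One way to automate it: wrap the method above in a small console tool (say, PdbScrub.exe, a hypothetical name) and call it from the project's post-build event with "$(TargetDir)$(TargetName).pdb" as the argument; $(TargetDir) and $(TargetName) are standard Visual Studio/MSBuild macros. I haven't wired this up myself, so treat it as a sketch.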
