I have a class GroupTable that defines the schema of a table.
As far as I've seen, other projects have a file 1.sql in the conf/evolutions/default folder, which (I assume) is generated automatically from the code.
But when I start my application, nothing is created.
What should I do? Is that file created automatically, or do I have to write it myself?
class GroupTable(tag: Tag) extends Table[Group](tag, "groups") {
  def name = column[String]("name", O.PrimaryKey)
  def day = column[String]("day")
  def subject = column[String]("subject")
  def typeSub = column[String]("typeSub")
  def start = column[Time]("start")
  def end = column[Time]("end")
  def teacher = column[String]("teacher")
  def auditorium = column[Int]("auditorium")

  override def * = (name, day, subject, typeSub, start, end, teacher, auditorium) <> ((Group.apply _).tupled, Group.unapply)
}
application.conf:
slick.dbs.default.driver = "slick.driver.MySQLDriver$"
slick.dbs.default.db.driver="com.mysql.jdbc.Driver"
slick.dbs.default.db.url="jdbc:mysql://localhost:3306/testdb"
slick.dbsdefault.user="root"
slick.dbs.default.password=""
play.evolutions.autoApply=true
evolutionplugin=enabled
play.evolutions.db.default.autoApply=true
play.evolutions.db.default.autoApplyDowns=true
build.sbt:
name := "TimetableAPI"
version := "1.0"
lazy val `timetableapi` = (project in file(".")).enablePlugins(PlayScala)
scalaVersion := "2.11.7"
libraryDependencies ++= Seq(cache, ws, specs2 % Test, evolutions,
"mysql" % "mysql-connector-java" % "5.1.34",
"com.typesafe.play" %% "play-slick" % "1.1.0",
"com.typesafe.play" %% "play-slick-evolutions" % "1.1.0")
unmanagedResourceDirectories in Test <+= baseDirectory(_ / "target/web/public/test")
resolvers += "scalaz-bintray" at "https://dl.bintray.com/scalaz/releases"
routesGenerator := InjectedRoutesGenerator
I have tried evolutions in the Play framework.
As to your question, "Do evolutions automatically create the database and tables?":
Since you are using MySQL,
1.) No, evolutions do not create the database for you. You need to create the "testdb" database yourself and grant privileges to "root".
2.) Yes, evolutions create the tables for you.
Why not use H2 as the database engine for testing? The evolutions will create the database and tables for you from scratch (no need to create the database first). You may also mimic MySQL using the H2 database engine:
db.default.url="jdbc:h2:mem:play;MODE=MYSQL"
Please see link:
https://www.playframework.com/documentation/2.5.x/Developing-with-the-H2-Database
I don't know scalaz, but in general, evolutions are not created automatically; they are written manually. Each time you make a change to your database, you write the next numbered SQL file, containing the statements to apply the changes (Ups) and to remove them (Downs).
You can use database tools (such as MySQL Workbench's database synchronization function) to generate the "difference" between a model and the actual database. These scripts can help in writing the evolutions.
Further docs here.
I would like to speed up figure generation in Bokeh by multiprocessing:
jobs = []
for label in list(peakLabels):
    args = {'data': rt_proj_data[label],
            'label': label,
            'tools': tools,
            'colors': itertools.cycle(palette),
            'files': files,
            'highlight': highlight}
    jobs.append(args)

pool = Pool(processes=cpu_count())
m = Manager()
q = m.Queue()
plots = pool.map_async(plot_peaks_parallel, jobs)
pool.close()
pool.join()
def plot_peaks_parallel(args):
    data = args['data']
    label = args['label']
    colors = args['colors']
    tools = args['tools']
    files = args['files']
    highlight = args['highlight']

    p = figure(title=f'Peak: {label}',
               x_axis_label='Retention Time',
               y_axis_label='Intensity',
               tools=tools)
    ...
    return p
Though I ran into this error:
MaybeEncodingError: Error sending result: '[Figure(id='1078', ...)]'. Reason: 'PicklingError("Can't pickle at 0x7fc7df0c0ea0>: attribute lookup ColumnDataSource. on bokeh.models.sources failed")'
Can I do something to the object p, so that it becomes pickleable?
Individual Bokeh objects are not serializable in isolation, including with pickle. The smallest meaningful unit of serialization in Bokeh is the Document, which is a specific collection of Bokeh objects guaranteed to be complete with respect to following references. However, I would be surprised if pickle works with Document either (AFAIK you are the first person to ask about it since the project started, it's never been a priority, or even looked into that I know of). Instead, I would suggest if you want to do something like this, to use Bokeh's own JSON serialization functions, such as json_item:
# python code
p_serialized = json.dumps(json_item(p))
This will properly serialize p in the context of the Document it is a part of. Then you can pass this to your page templates to display with the Bokeh JS embed API:
# javascript code
p = JSON.parse(p_serialized);
Bokeh.embed.embed_item(p, "mydiv")
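Applied to the multiprocessing code from the question, a minimal sketch (keeping the question's argument dicts and figure setup, everything else unchanged) would have each worker return the serialized JSON text instead of the Figure, so only picklable strings cross the process boundary:
import json
from multiprocessing import Pool, cpu_count

from bokeh.embed import json_item
from bokeh.plotting import figure

def plot_peaks_parallel(args):
    # build the figure exactly as in the question ...
    p = figure(title=f"Peak: {args['label']}",
               x_axis_label='Retention Time',
               y_axis_label='Intensity')
    # ... add glyphs/renderers here ...
    # return plain JSON text (picklable) instead of the Figure object
    return json.dumps(json_item(p))

if __name__ == '__main__':
    # stand-in for the list of argument dicts built in the question
    jobs = [{'label': 'peak-1'}, {'label': 'peak-2'}]
    with Pool(processes=cpu_count()) as pool:
        serialized_plots = pool.map(plot_peaks_parallel, jobs)
    # each entry of serialized_plots can be handed to a template and
    # rendered client-side with Bokeh.embed.embed_item
The original PicklingError disappears because nothing Bokeh-specific is returned from the workers; json_item produces plain JSON in the context of each figure's Document.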
I see I can use collective.folderishtypes to add new (folderish) types to be used instead of the default News Item and Event. But I want to convert existing News Items and Events to folderish content types and keep it as simple as possible. Is it possible to override (monkey-patch?) the default types in a simple way, so that the existing objects end up with folderish behavior?
Or what is a good way of solving this? I need existing objects to be converted too, and I want to avoid confusing duplicate content types like "Add new News Item", "News Item Folderish", etc. Also, if possible, existing listings (like latest events) should keep working.
I have no experience with collective.folderishtypes; the description sounds promising, though, too bad it seems not to work for you.
If I needed to solve this and it were not a requirement to keep the histories (workflow and content history), I'd create a new folderish type with the same fields, create an instance of the new type for each Event and News Item, and copy the field values over.
That would change the modification date, which could be overcome by copying the modification date to the publication-date field (if not used already) and doing the 'Latest news/events' listings with collections sorted by publication date.
But if you want to keep the histories and leave the modification date untouched, you could create a folder for each news/event item, put the item into the folder, set the item as the default view of the folder, and rename the folder to the same id as the item. That makes the folder and item appear as one item in the UI, and links to the item will not break because the folder sits at the old location.
I tested this with a browser-view script. Alas, adding a folder and moving the item within one script run does not work, for reasons I couldn't track down in a short time, so one needs to call the browser view three times:
from Acquisition import aq_parent, aq_inner
from Products.Five.browser import BrowserView

class View(BrowserView):

    report = ''

    def __call__(self):
        portal = self.context
        catalog = portal.portal_catalog
        news_items = catalog.searchResults(portal_type='News Item')
        event_items = catalog.searchResults(portal_type='Event')
        items = news_items + event_items
        for i, item in enumerate(items):
            self.processItem(item, i, len(items))
        return self.report

    def processItem(self, item, i, itemsAmount):
        item = item.getObject()
        item_id = item.id
        parent = aq_parent(aq_inner(item))
        folder = None
        folder_id = item_id + '-container'

        if item_id == parent.id:
            if i == itemsAmount - 1:
                self.report += 'Nothing to do, all ' + str(itemsAmount) + ' items have the same id as their parent.'
        else:
            if parent.id == folder_id:
                parent = getParent(parent)
                folder = parent[folder_id]
                folder.setDefaultPage(item_id)
                parent.manage_renameObject(folder.id, item_id)
                if i == itemsAmount - 1:
                    self.report += 'Step 3/3: Renamed ' + str(itemsAmount) + ' folder-ids.'
            else:
                try:
                    folder = addFolder(parent, folder_id)
                    if i == itemsAmount - 1:
                        self.report += 'Step 1/3: Added ' + str(itemsAmount) + ' folders.'
                    folder.setTitle(item_id)  # set same title as the item has
                    folder.reindexObject()
                except:
                    folder = parent[folder_id]
                try:
                    cutAndPaste(item, folder)
                    if i == itemsAmount - 1:
                        self.report += 'Step 2/3: Moved ' + str(itemsAmount) + ' items into folders.'
                except:
                    pass

def addFolder(parent, folder_id):
    parent.invokeFactory('Folder', folder_id)
    folder = parent[folder_id]
    folder.setTitle(folder_id)
    folder.reindexObject()
    return folder

def cutAndPaste(item, folder):
    """ Move item into folder. """
    parent = aq_parent(aq_inner(item))
    clipboard = parent.manage_cutObjects([item.id])
    folder.manage_pasteObjects(clipboard)
    folder.reindexObject()

def getParent(item):
    return aq_parent(aq_inner(item))
Disclaimers:
You need to run this procedure again every time a new event/news item is created, e.g. with an event listener (a sketch follows below these disclaimers).
It would be better to create a separate event listener for each step of the process and start the next one when the preceding step has finished.
The temporary id for the folder (composed of the item id and the arbitrary suffix "-container") is assumed not to exist already within the parent of an item. Although that is very unlikely to happen, you might want to catch that exception in the script, too.
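A rough sketch of such a listener (the handler name and its ZCML registration are hypothetical; it repeats the wrapping steps of the browser-view for one freshly created item, and the same caveat about adding the folder and moving the item in a single pass may apply here, too):
from Acquisition import aq_inner, aq_parent

def wrap_new_item(item, event):
    # register in ZCML as a subscriber for (News Item/Event, IObjectAddedEvent)
    parent = aq_parent(aq_inner(item))
    if item.id == parent.id:
        return  # already wrapped in its own folder
    folder_id = item.id + '-container'
    parent.invokeFactory('Folder', folder_id)
    folder = parent[folder_id]
    folder.manage_pasteObjects(parent.manage_cutObjects([item.id]))
    folder.setDefaultPage(item.id)
    parent.manage_renameObject(folder_id, item.id)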
I have not tested this, but based on the collective.folderishtypes documentation ("How to migrate non-folderishtypes to folderish ones") you should be able to call the @@migrate-btrees view on your Plone site root to migrate non-folderish content types to folderish ones.
Warning: do a backup of the database before attempting the migration, and test in development environment first before applying this on production data.
I am creating a database class and creating an object of that class on a button click, but the controller is not hitting that class. Here is my database code:
class DataStore < Android::Database::SQLite::SQLiteOpenHelper
  DATABASE_NAME = "MyFriendsDatabase"

  def initialize(context)
    puts "constructor in database"
    # super(context, name, factory, version)
    # context - used to create the database
    # name    - name of the database file
    # factory - a cursor factory (usually pass nil)
    # version - the version of the database; this number is used to identify
    #           an upgrade or downgrade of the database
    super(context, DATABASE_NAME, nil, 1)
  end

  def onUpgrade(db, oldVersion, newVersion)
    # db.execSQL("DROP TABLE IF EXISTS credentials")
    onCreate(db)
  end

  # create
  def onCreate(database)
    puts "table creation"
    database.execSQL("CREATE TABLE credentials(username TEXT, password TEXT)")
  end

  # insert
  def add_credentials(username, password)
    puts "inserting data in table"
    values = ContentValues.new(2)
    values.put("username", username)
    values.put("password", password)
    getWritableDatabase.insert("credentials", "username", values)
  end

  # retrieve
  def get_credentials
    puts "coming here &&&&&&&&&&&"
    cursor = getReadableDatabase.rawQuery("select * from credentials", nil)
    cursor
  end

  # delete all records
  def deleteAll
    getWritableDatabase.delete("credentials", nil, nil)
  end
end
The code which will execute on button click is:
database = DataStore.new(self)
database.add_credentials(@text_box_value.getText.toString, @text_box.getText.toString)
puts database.get_credentials
Can anyone help me?
Thanks in advance.
This is a known limitation of RubyMotion Android: you can't call super on, or override, Java class constructors from Ruby. See this link: http://www.rubymotion.com/developers/guides/manuals/android/project-management/#_extending_classes_with_java
Also, I am making something for this so you won't have to deal with the raw SQL. You still can if you want (or need) to, and I will support that in my gem anyway. It's not ready yet, but I will edit my answer when I have released it. Thanks.
I'm currently implementing an SBT plugin for Gatling.
One of its features will be to open the last generated report in a new browser tab from SBT.
As each run can have a different "simulation ID" (basically a simple string), I'd like to offer tab completion on simulation ids.
An example:
Running the Gatling SBT plugin will produce several folders (named from the simulation ID plus the date of report generation) in target/gatling, for example mysim-20140204234534, myothersim-20140203124534 and yetanothersim-20140204234534.
Let's call the task lastReport.
If someone starts typing lastReport my, I'd like to filter the tab completion to only suggest mysim and myothersim.
Getting the simulation ID is a breeze, but how can I help the parser and filter out the suggestions so that it only suggests an existing simulation ID?
To sum up, I'd like to do what testOnly does, in a way: I only want to suggest things that make sense in my context.
Thanks in advance for your answers,
Pierre
Edit: As I got a bit stuck after my latest tries, here is the code of my input task, in its current state:
package io.gatling.sbt
import sbt._
import sbt.complete.{ DefaultParsers, Parser }
import io.gatling.sbt.Utils._
object GatlingTasks {

  val lastReport = inputKey[Unit]("Open last report in browser")

  val allSimulationIds = taskKey[Set[String]]("List of simulation ids found in reports folder")

  val allReports = taskKey[List[Report]]("List of all reports by simulation id and timestamp")

  def findAllReports(reportsFolder: File): List[Report] = {
    val allDirectories = (reportsFolder ** DirectoryFilter.&&(new PatternFilter(reportFolderRegex.pattern))).get
    allDirectories.map(file => (file, reportFolderRegex.findFirstMatchIn(file.getPath).get)).map {
      case (file, regexMatch) => Report(file, regexMatch.group(1), regexMatch.group(2))
    }.toList
  }

  def findAllSimulationIds(allReports: Seq[Report]): Set[String] = allReports.map(_.simulationId).distinct.toSet

  def openLastReport(allReports: List[Report], allSimulationIds: Set[String]): Unit = {

    def simulationIdParser(allSimulationIds: Set[String]): Parser[Option[String]] =
      DefaultParsers.ID.examples(allSimulationIds, check = true).?

    def filterReportsIfSimulationIdSelected(allReports: List[Report], simulationId: Option[String]): List[Report] =
      simulationId match {
        case Some(id) => allReports.filter(_.simulationId == id)
        case None     => allReports
      }

    Def.inputTaskDyn {
      val selectedSimulationId = simulationIdParser(allSimulationIds).parsed
      val filteredReports = filterReportsIfSimulationIdSelected(allReports, selectedSimulationId)
      val reportsSortedByDate = filteredReports.sorted.map(_.path)
      Def.task(reportsSortedByDate.headOption.foreach(file => openInBrowser((file / "index.html").toURI)))
    }
  }
}
Of course, openLastReport is called using the results of the allReports and allSimulationIds tasks.
I think I'm close to a functioning input task, but I'm still missing something...
Def.inputTaskDyn returns a value of type InputTask[T] and doesn't perform any side effects. The result needs to be bound to an InputKey, like lastReport. The return type of openLastReport is Unit, which means that openLastReport will construct a value that will be discarded, effectively doing nothing useful. Instead, have:
def openLastReport(...): InputTask[...] = ...
lastReport := openLastReport(...).evaluated
(Or, the implementation of openLastReport can be inlined into the right hand side of :=)
You probably don't need inputTaskDyn, but just inputTask. You only need inputTaskDyn if you need to return a task. Otherwise, use inputTask and drop the Def.task.
I am using Qt's undo framework, which uses QUndoCommand so the application can support undo.
Is there an easy way to save those QUndoCommands to a file and reload them?
There's no built-in way. I don't think it's very common to save the undo stack between sessions. You'll have to serialize the commands yourself by iterating through the commands on the stack, and saving each one's unique data using QDataStream. It might look something like this:
...
dataStream << undoStack->count(); // store number of commands
for (int i = 0; i < undoStack->count(); i++)
{
    // store each command's unique information
    dataStream << undoStack->command(i)->someMemberVariable;
}
...
Then you would use QDataStream again to deserialize the data back into QUndoCommands.
You can use QFile to handle the file management.
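If you are doing this from PyQt rather than C++, a rough sketch of the same save/load cycle might look like the following (MyCommand and its old_value attribute are placeholders for whatever unique data your commands actually carry):
from PyQt5.QtCore import QDataStream, QFile, QIODevice

def save_stack(undo_stack, path):
    f = QFile(path)
    f.open(QIODevice.WriteOnly)
    stream = QDataStream(f)
    stream.writeInt32(undo_stack.count())   # store number of commands
    for i in range(undo_stack.count()):
        # store each command's unique information
        stream.writeInt32(undo_stack.command(i).old_value)
    f.close()

def load_commands(path):
    f = QFile(path)
    f.open(QIODevice.ReadOnly)
    stream = QDataStream(f)
    count = stream.readInt32()
    commands = [MyCommand(stream.readInt32()) for _ in range(count)]
    f.close()
    return commands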
Use Qt's serialization as described here:
Serialization with Qt
Then, within your QUndoCommands, you can use a temp file to write the data to:
http://qt-project.org/doc/qt-4.8/qtemporaryfile.html
However, this might cause you an issue: each file is kept open, and on some platforms (Linux) you may run out of open file handles.
To combat this you'd have to create some other factory-type object which handles your commands - it could then pass in a reference to a QTemporaryFile automatically. This factory/QUndoCommand caretaker object must have the same lifetime as the QUndoCommands; if not, the temp file will be removed from disk and your QUndoCommands will break.
The other thing you can do is use QUndoCommand as a proxy to your real undo command - this way you can save quite a bit of memory, since once your undo command is saved to file you can delete the internal pointer/set it to null, and restore it later.
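That proxy idea could look roughly like this in PyQt5 (every name below is made up for illustration, and it assumes the wrapped "real" command object is itself picklable):
import pickle
import tempfile

from PyQt5.QtWidgets import QUndoCommand

class ProxyCommand(QUndoCommand):
    """Keeps the real (picklable) command on disk until it is needed again."""

    def __init__(self, real_command, text=""):
        super().__init__(text)
        self._real = real_command   # object holding the actual undo/redo logic
        self._spill = None          # temp file used while unloaded

    def unload(self):
        # write the real command to a temp file and drop it from memory
        self._spill = tempfile.NamedTemporaryFile(delete=False)
        pickle.dump(self._real, self._spill)
        self._spill.close()
        self._real = None

    def _ensure_loaded(self):
        if self._real is None:
            with open(self._spill.name, 'rb') as f:
                self._real = pickle.load(f)

    def undo(self):
        self._ensure_loaded()
        self._real.undo()

    def redo(self):
        self._ensure_loaded()
        self._real.redo()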
Here's a PyQt solution for serializing/pickling QUndoCommands. The tricky part was getting the parent to call __init__ first, then the children. This method relies on all the children's __setstate__ to be called before the parent's, which happens upon pickling as children are returned in the parent's __getstate__.
class UndoCommand(QUndoCommand):
    """
    For pickling
    """
    def __init__(self, text, parent=None):
        QUndoCommand.__init__(self, text, parent)
        self.__parent = parent
        self.__initialized = True
        # defined and initialized in __setstate__
        # self.__child_states = {}

    def __getstate__(self):
        return {
            **{k: v for k, v in self.__dict__.items()},
            '_UndoCommand__initialized': False,
            '_UndoCommand__text': self.text(),
            '_UndoCommand__children':
                [self.child(i) for i in range(self.childCount())]
        }

    def __setstate__(self, state):
        if hasattr(self, '_UndoCommand__initialized') and \
                self.__initialized:
            return
        text = state['_UndoCommand__text']
        parent = state['_UndoCommand__parent']  # type: UndoCommand
        if parent is not None and \
                (not hasattr(parent, '_UndoCommand__initialized') or
                 not parent.__initialized):
            # will be initialized in parent's __setstate__
            if not hasattr(parent, '_UndoCommand__child_states'):
                setattr(parent, '_UndoCommand__child_states', {})
            parent.__child_states[self] = state
            return
        # init must be called at unpickle time to recreate the Qt object
        UndoCommand.__init__(self, text, parent)
        for child in state['_UndoCommand__children']:
            child.__setstate__(self.__child_states[child])
        self.__dict__ = {k: v for k, v in state.items()}

    @staticmethod
    def from_QUndoCommand(qc: QUndoCommand, parent=None):
        if type(qc) == QUndoCommand:
            qc.__class__ = UndoCommand
        qc.__initialized = True
        qc.__parent = parent
        children = [qc.child(i) for i in range(qc.childCount())]
        for child in children:
            UndoCommand.from_QUndoCommand(child, parent=qc)
        return qc
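For completeness, a small usage sketch (assuming PyQt5, and that any extra state on your own subclasses is itself picklable; whether the round trip is lossless depends on that):
import pickle

from PyQt5.QtWidgets import QApplication, QUndoCommand

app = QApplication([])  # Qt context for the command objects

# build a small command tree, convert it, and round-trip it through pickle
root = QUndoCommand("rename item")
QUndoCommand("child step", root)   # becomes a child of root
cmd = UndoCommand.from_QUndoCommand(root)

restored = pickle.loads(pickle.dumps(cmd))
print(restored.text(), restored.childCount())   # expected: "rename item 1"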