Fast Sketch Cleanup
This plugin allows you to process the canvas with a neural network model of a selected type.
The currently available models can clean up a sketch, extract pencil lines from a photographed pencil sketch, or create line art from a digital sketch.
Usage
- Open or create a white canvas with grey-white strokes (note that the plugin will take the current projection of the canvas, not the current layer).
- Go to Tools → Fast Sketch Cleanup to open the plugin dialog.
- Select the model (SketchyModel.xml is recommended). The Advanced Options will be selected automatically for you.
- Wait until the processing finishes (the dialog will then close automatically).
- The result will appear on a new layer.
Training a new model
To train the model:
- Clone the repository at https://invent.kde.org/tymond/fast-line-art:
git clone https://invent.kde.org/tymond/fast-line-art.git
- Then, prepare the folder:
  - Create a new folder for the training.
  - In the folder, run:
python3 [repository folder]/spawnExperiment.py --path [path to new folder, either relative or absolute] --note "[your personal note about the experiment]"
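For example, assuming the repository was cloned into the current directory and you want the experiment in a new folder called myExperiment (both names are only illustrative):
python3 fast-line-art/spawnExperiment.py --path ./myExperiment --note "first cleanup training run"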
- Prepare data:
  - If you have existing data, put it all in data/training/ and data/verify/, keeping in mind that paired pictures in the ink/ and sketch/ subfolders must have exactly the same names (for example, if you have sketch.png and ink.png as data, you need to put one in sketch/ as picture.png and the other in ink/ as picture.png to pair them). A small script to double-check this pairing is sketched at the end of this data preparation step.
  - If you don't have existing data:
    - Put all your raw data in data/raw/, keeping in mind that paired pictures should have exactly the same names with an added prefix of either ink_ or sketch_ (for example, if picture_1.png is the sketch picture and picture_2.png is the ink picture, you need to name them sketch_picture.png and ink_picture.png respectively).
    - Run the data preparer script:
python3 [repository folder]/dataPreparer.py -t taskfile.yml
This will augment the data in the raw/ directory so that the training is more successful.
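Because the training relies on the sketch/ink pairing being exact, you may want to verify it before starting. The helper below is not part of the repository, just a minimal sketch assuming the data/training/ and data/verify/ layout with ink/ and sketch/ subfolders described above:

import os

def check_pairs(folder):
    # Compare file names in the sketch/ and ink/ subfolders of a dataset
    # folder (e.g. data/training/) and report any unpaired pictures.
    sketches = set(os.listdir(os.path.join(folder, "sketch")))
    inks = set(os.listdir(os.path.join(folder, "ink")))
    for name in sorted(sketches - inks):
        print(f"{folder}: {name} is in sketch/ but missing from ink/")
    for name in sorted(inks - sketches):
        print(f"{folder}: {name} is in ink/ but missing from sketch/")

check_pairs("data/training")
check_pairs("data/verify")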
- Edit the taskfile.yml file to your liking. The most important parts you want to change are:
  - model type - code name for the model type; use tinyTinier, tooSmallConv, typicalDeep or tinyNarrowerShallow
  - optimizer - type of optimizer; use adadelta or sgd
  - learning rate - learning rate for sgd, if it is in use
  - loss function - code name for the loss function; use mse for mean squared error, or blackWhite for a custom loss function based on mse but a bit smaller for pixels where the target image pixel value is close to 0.5 (a rough sketch of this idea follows below the list)
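The actual blackWhite loss is implemented in the repository; purely as an illustration of the behavior described above, such a loss could look roughly like the following PyTorch sketch (the function name, threshold and weighting factor here are assumptions, not the repository's code):

import torch

def black_white_loss(prediction, target, midtone_weight=0.5):
    # Per-pixel squared error, as in plain MSE.
    squared_error = (prediction - target) ** 2
    # Down-weight pixels whose target value is close to 0.5 (midtones),
    # so clearly black or white target pixels dominate the loss.
    weights = torch.where((target - 0.5).abs() < 0.25,
                          torch.full_like(target, midtone_weight),
                          torch.ones_like(target))
    return (weights * squared_error).mean()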
- Run the training code:
python3 [repository folder]/train.py -t taskfile.yml -d "cpu"
- Convert the model to an OpenVINO model:
python3 [repository folder]/modelConverter.py -s [size of the input, recommended 256] -t [input model name, from pytorch] -o [openvino model name, must end with .xml]
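For example, assuming the repository is in fast-line-art/ and the training produced a PyTorch model file called model.pth (both names are only illustrative):
python3 fast-line-art/modelConverter.py -s 256 -t model.pth -o SketchyModel.xml
The conversion should produce the .xml file together with a matching .bin file; both are needed in the next step.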
- Place both the .xml and .bin model files in your Krita resource folder alongside the other models to use them in the plugin.