Commit a46f4a29 authored by Fábio Prestes: Replace README.md
# Pynovisao
## Authors (email):
- Adair da Silva Oliveira Junior
- Alessandro dos Santos Ferreira
- Diego André Sant'Ana (diegoandresantana@gmail.com)
## License:
NPOSL-3.0 https://opensource.org/licenses/NPOSL-3.0 - Free for non-profit use
[1] dos Santos Ferreira, A., Freitas, D. M., da Silva, G. G., Pistori, H., & Folhes, M. T. (2017). Weed detection in soybean crops using ConvNets. Computers and Electronics in Agriculture, 143, 314-324.
## How to use:
- In order to download Pynovisao, click the download button in the top right of the screen (Compressed folder), or type the following command in a terminal:
```
$ git clone http://git.inovisao.ucdb.br/inovisao/pynovisao
```
- The files will be downloaded into the following directory:
```
[...]/pynovisao
```
- Enter the folder named **[...]/pynovisao/src** or type the following command in the terminal to do so:
```
$ cd src
```
- Next, type the following command if you desire to run it using Python 2.7:
```
$ python main.py
```
- Or, should you want to run it using Python 3.6:
```
$ python3 main.py
```
- A window such as the following will open, and you can start using Pynovisão and its features:
![pynovisao](data/pynovisao.png)
## Other options:
- Show all available options:
```
$ python main.py --help
```
- Run the program, defining the desired classes and their respective colors (X11 color names):
```
$ python main.py --classes "Soil Soy Grass LargeLeaves" --colors "Orange SpringGreen RebeccaPurple Snow"
```
- A Linux script exists in *[...]/pynovisao/src/util* to help divide images into training, validation and testing datasets. It has not yet been integrated into the GUI.
```
$ cd src/util
$ chmod 755 split_data.sh
$ ./split_data.sh -h
```
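The split the script performs can be sketched in Python as follows; the 70/15/15 ratios and the flat file list are assumptions for illustration, not necessarily what split_data.sh actually uses:

```python
# Hypothetical sketch of a train/validation/test split; the real script's
# ratios and folder layout may differ.
import random

def split_dataset(files, train=0.7, val=0.15, seed=42):
    """Shuffle file names and partition them into train/val/test lists."""
    files = list(files)
    random.Random(seed).shuffle(files)
    n_train = int(len(files) * train)
    n_val = int(len(files) * val)
    return (files[:n_train],
            files[n_train:n_train + n_val],
            files[n_train + n_val:])

images = ["img%03d.tif" % i for i in range(100)]
train_set, val_set, test_set = split_dataset(images)
print(len(train_set), len(val_set), len(test_set))  # 70 15 15
```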
### How to Install
## Option 1: Linux-only script
You can easily install Pynovisão using the automated installation script provided with it:
```
$ sudo bash INSTALL.sh
```
**NOTE**: This script has been tested for Ubuntu versions 19.04 and 18.04
## Option 2: Without INSTALL.sh
# Linux
Besides its dependencies, Python 2.7.6 or Python 3.6 is required (these are the latest versions tested with this software).
```
$ sudo pip install tensorflow
$ sudo pip install keras
```
# Windows
- Install [Anaconda](http://continuum.io/downloads), which contains all of the dependencies, including Python itself. Just download the .exe file and run it.
- OpenCV 2.7
- python-weka-wrapper (Classification)

Install [OpenCV-Python](https://opencv-python-tutroals.readthedocs.org/en/latest/py_tutorials/py_setup/py_setup_in_windows/py_setup_in_windows.html#install-opencv-python-in-windows):
1. Download [OpenCV](https://opencv-python-tutroals.readthedocs.org/en/latest/py_tutorials/py_setup/py_setup_in_windows/py_setup_in_windows.html#install-opencv-python-in-windows).
2. Extract the files to the desired location.
3. Go to the folder opencv/build/python/2.7.
4. Copy the file cv2.pyd to C:/Python27/lib/site-packages.
5. Open the terminal and type python to run the interpreter.
6. Type:
```
>>> import cv2
>>> print cv2.__version__
```
Install .Net 4.0 (if it is not already installed).
Install Windows SDK 7.1.
Open the Windows SDK command prompt (not the conventional command prompt!) and install javabridge and python-weka-wrapper:
```
> set MSSdk=1
> set DISTUTILS_USE_SDK=1
> pip install javabridge
> pip install python-weka-wrapper
```
Now you can also run python-weka-wrapper from the conventional command prompt.
# More information
- http://pythonhosted.org/python-weka-wrapper/install.html
- http://pythonhosted.org/python-weka-wrapper/troubleshooting.html
## How to install Caffe (Optional)
# Ubuntu / Windows
In order to use the CNNCaffe classifier, a ConvNet based on the AlexNet topology, it is necessary to install Caffe.
Its installation is more complex than the ones previously described, and more detailed instructions can be found below:
- http://caffe.berkeleyvision.org/installation.html
After installing Caffe, in order to perform classification with it you will need to train it using the command line, since there is currently no Pynovisão interface for ConvNet training.
The tutorial for training can be found below:
- http://caffe.berkeleyvision.org/gathered/examples/imagenet.html
Finally, it is necessary to configure your CNNCaffe.
- For the fields *ModelDef, ModelWeights* and *MeanImage*, you must supply the relative paths to the training done previously.
- For the field *LabelsFile*, you must supply the path to a file that describes all the classes in order (0, 1, 2, ..., n-1, where n is the number of classes trained).
- An example file can be found in **[...]/pynovisao/examples/labels.txt**.
# Windows
WiP
### How to use:
## Opening the software
- In order to download Pynovisao, click the download button in the top right of the screen (Compressed folder), or type the following command in a terminal:
```
$ git clone http://git.inovisao.ucdb.br/inovisao/pynovisao
```
- The files will be downloaded into the following directory:
```
[...]/pynovisao
```
- Enter the folder named **[...]/pynovisao/src** or type the following command in the terminal to do so:
```
$ cd src
```
- Next, type the following command if you desire to run it using Python 2.7:
```
$ python main.py
```
- Or, should you want to run it using Python 3.6:
```
$ python3 main.py
```
- A window such as the following will open, and you can start using Pynovisão and its features:
![pynovisao](data/pynovisao.png)
# Other options:
- Show all available options:
```
$ python main.py --help
```
- Run the program, defining the desired classes and their respective colors (X11 color names):
```
$ python main.py --classes "Soil Soy Grass LargeLeaves" --colors "Orange SpringGreen RebeccaPurple Snow"
```
- A Linux script exists in *[...]/pynovisao/src/util* to help divide images into training, validation and testing datasets. It has not yet been integrated into the GUI.
```
$ cd src/util
$ chmod 755 split_data.sh
$ ./split_data.sh -h
```
## File
# Open Image (Shortcut: Ctrl + O)
Opens a file selection window and allows the user to choose an image to work on.
# Restore Image (Shortcut: Ctrl + R)
Restores the selected image to its original state.
# Close Image (Shortcut: Ctrl + W)
Closes the currently selected image.
# Quit (Shortcut: Ctrl + Q)
Closes Pynovisão.
## View
# Show Image Axis (Shortcut: Not Defined)
Shows an X/Y axis on the image.
# Show Image Toolbar (Shortcut: Not Defined)
Shows a list of all the images in the selected folder.
# Show Log (Shortcut: Not Defined)
Shows a log with information about the current processes, and traceback errors should they happen.
## Dataset
# Add new class (Shortcut: Ctrl + A)
Creates a new class. This will create a new folder inside the /data folder.
# Set Dataset Path (Shortcut: Ctrl + D)
Chooses the folder with the desired images.
# Dataset Generator (Shortcut: Not Defined)
Creates a new dataset using the selected folder.
## Segmentation
# Choose Segmenter (Shortcut: Not Defined)
Chooses the desired segmentation method. Please research the desired method before segmenting. The default option is SLIC.
# Configure (Shortcut: Ctrl + G)
Configures the parameters for the segmentation.
- Segments: The total number of segments the image should be split into.
- Sigma: Width of the Gaussian smoothing applied to the image before segmenting; higher values give smoother, more regular segment borders.
- Compactness: How spread out across the image one segment will be. A higher compactness results in more clearly separated borders.
- Border Color: The color of the created segments' borders. This is purely visual and will not affect the resulting segments.
- Border Outline: Draws an outline around the segment borders.
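Pynovisão's default segmenter is SLIC, which scikit-image also provides; as an illustration of how the parameters above map onto a real API, here is a minimal sketch using skimage.segmentation.slic (the random image and parameter values are placeholders, not Pynovisão's defaults):

```python
import numpy as np
from skimage.segmentation import slic

rng = np.random.default_rng(0)
image = rng.random((64, 64, 3))  # stand-in for a loaded crop image

segments = slic(image,
                n_segments=50,   # "Segments": target number of superpixels
                compactness=10,  # "Compactness": higher -> more regular shapes
                sigma=1)         # "Sigma": Gaussian pre-smoothing width

# Each pixel is labeled with the id of the superpixel it belongs to.
print(len(np.unique(segments)))
```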
# Execute (Shortcut: Ctrl + S)
Executes the chosen segmentation method with the desired parameters.
Once segmented, the user can manually click on the desired segments, and they will be saved in data/demo/**name-of-the-class**/**name-of-the-image**_**number-of-the-segment**.tif.
# Assign using labeled image (Shortcut: Ctrl + L)
(Documentation pending.)
# Execute folder (Shortcut: Not Defined)
Same as the Execute command, but performs the segmentation on an entire folder at once.
# Create .XML File (Shortcut: Not Defined)
Creates a .xml file using the chosen segments. The .xml file will be saved in data/XML/**name-of-the-image**.xml.
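The exact schema of the generated .xml is not documented here. Annotation files of this kind commonly follow the Pascal VOC layout, so the sketch below builds a VOC-style file with the Python standard library; treat the element names as an assumed layout, not necessarily Pynovisão's exact output:

```python
import xml.etree.ElementTree as ET

def make_annotation(filename, width, height, boxes):
    """Build a Pascal VOC-style annotation tree.

    boxes: list of (class_name, xmin, ymin, xmax, ymax) tuples.
    """
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = filename
    size = ET.SubElement(root, "size")
    ET.SubElement(size, "width").text = str(width)
    ET.SubElement(size, "height").text = str(height)
    for name, xmin, ymin, xmax, ymax in boxes:
        obj = ET.SubElement(root, "object")
        ET.SubElement(obj, "name").text = name
        bb = ET.SubElement(obj, "bndbox")
        for tag, val in zip(("xmin", "ymin", "xmax", "ymax"),
                            (xmin, ymin, xmax, ymax)):
            ET.SubElement(bb, tag).text = str(val)
    return root

# One hypothetical bounding box for a "Soy" segment.
root = make_annotation("soy_001.tif", 640, 480, [("Soy", 10, 20, 110, 220)])
print(ET.tostring(root, encoding="unicode"))
```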
## Feature Extraction
# Select Extractors (Shortcut: Ctrl + E)
Selects the desired extractors to use. The currently available extractors are:
- Color Statistics;
- Gray-Level Co-Occurrence Matrix;
- Histogram of Oriented Gradients;
- Hu Image Moments;
- Image Moments (Raw/Central);
- Local Binary Patterns;
- Gabor Filter Bank;
- K-Curvature Angles.
Please research what each extractor does and choose accordingly. By default, all extractors are selected.
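As an illustration of the kind of feature vector an extractor produces, here is a minimal sketch of per-channel color statistics (mean and standard deviation); Pynovisão's actual Color Statistics extractor may compute more measures than this:

```python
import numpy as np

def color_statistics(image):
    """Return per-channel mean and std for an H x W x 3 image."""
    pixels = image.reshape(-1, image.shape[-1]).astype(float)
    return np.concatenate([pixels.mean(axis=0), pixels.std(axis=0)])

rng = np.random.default_rng(0)
segment = rng.random((32, 32, 3))   # stand-in for one saved segment
features = color_statistics(segment)
print(features.shape)  # (6,)
```

Each segment would contribute one such row of features to the training file.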
# Execute (Shortcut: Ctrl + F)
Executes the chosen extractors. This creates a training.arff file in the data/demo folder.
# Extract Frames (Shortcut: Ctrl + V)
Extracts frames from a video. The user must choose the folder containing the desired videos and the destination folder the resulting frames will be extracted to.
## Training
# Choose Classifier (Shortcut: Not Defined)
Chooses the desired classifier to use. Only one can be chosen at a time.
- CNNKeras
- CNNPseudoLabel
- SEGNETKeras
If you are interested in implementing your own classifiers in Pynovisão, please see **Implementing a new classifier in Pynovisão**.
# Configure (Shortcut: Not Defined)
Chooses the desired parameters for the currently selected classifier.
Each classifier has its own parameters and configurations, which must be researched carefully to achieve the desired results.
# Execute (Shortcut: Ctrl + T)
Trains the selected classifier using all the chosen parameters and the training.arff file created previously.
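training.arff is a standard Weka ARFF file. The minimal parser below (a hypothetical sketch, not Pynovisão's code) shows the structure the trainer consumes: attribute declarations followed by comma-separated data rows:

```python
import io

# Tiny hypothetical ARFF file with two features and a class attribute.
ARFF = """@relation demo
@attribute mean_r numeric
@attribute mean_g numeric
@attribute class {Soil,Soy}
@data
0.1,0.2,Soil
0.3,0.4,Soy
"""

def read_arff(fp):
    """Parse attribute names and data rows from a minimal ARFF file."""
    attrs, rows, in_data = [], [], False
    for line in fp:
        line = line.strip()
        if not line:
            continue
        low = line.lower()
        if low.startswith("@attribute"):
            attrs.append(line.split()[1])
        elif low.startswith("@data"):
            in_data = True
        elif in_data and not line.startswith("@"):
            rows.append(line.split(","))
    return attrs, rows

attrs, rows = read_arff(io.StringIO(ARFF))
print(attrs)  # ['mean_r', 'mean_g', 'class']
```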
## Classification
# Load h5 weights (Shortcut: Not Defined)
*Only used for CNN classifiers.* Loads a previously created .h5 weights file and uses it for this classification.
# Execute (Shortcut: Ctrl + C)
Executes the current classifier on the currently selected image.
# Execute folder (Shortcut: Not Defined)
Same as the previous command, but processes all the image files inside a selected folder at once.
## Experimenter
(Documentation pending.)
## XML
# Configure folders (Shortcut: Not Defined)
Chooses the target folder for the original images and the target folder for the segments to be searched and converted into a .xml file.
# Execute Conversion (Shortcut: Not Defined)
Executes the conversion using the two given folders. The file with the annotations will be saved in *[...]/pynovisao/data/XML*, with the name ***image** + .xml*.
### Implementing a new classifier in Pynovisão
```
export OMP_NUM_THREADS=**number of threads your cpu has**
export KMP_AFFINITY="verbose,explicit,proclist=[0,3,5,9,12,15,18,21],granularity=core"
```
### How to use the XML annotation tools
For those who wish to create .xml files during the segmentation process, Pynovisão is now capable of doing so.
After following the previous steps for segmenting an image and choosing all the desired segments, click on *Segmentation -> Create .XML file*, and the file with the annotations will be saved in *[...]/pynovisao/data/XML* with the name ***image** + .xml*.
Should the user want to use previously segmented images, it is possible to have Pynovisão search for the position of such segments and create the corresponding bounding boxes.
To make use of this feature:
- Separate the segments and the full images from each other. This is not strictly necessary, but it improves execution time.
- Open Pynovisão.
- Select *XML -> Configure folders*.
- Choose the desired image and segment folders you wish to use.
- Click *Save All Directories*.
- With your desired folders chosen, click *XML -> Execute Conversion*.
- The log (should it be enabled by the user) will update for every completed image.
- After the process is explicitly reported as finished, the .xml files can be found in *[...]/pynovisao/data/XML* with the name ***image** + .xml*.