#### __Title: Pynovisao__
### Authors (email):
- Adair da Silva Oliveira Junior
- Alessandro dos Santos Ferreira
- Diego André Sant'Ana (diegoandresantana@gmail.com)
- Diogo Nunes Gonçalves (dnunesgoncalves@gmail.com)
- Everton Castelão Tetila (evertontetila@gmail.com)
- Fabio Prestes Cesar Rezende (fpcrezende@gmail.com)
- Felipe Silveira (eng.fe.silveira@gmail.com)
- Gabriel Kirsten Menezes (gabriel.kirsten@hotmail.com)
- Gilberto Astolfi (gilbertoastolfi@gmail.com)
- Hemerson Pistori (pistori@ucdb.br)
- Joao Vitor de Andrade Porto (jvaporto@gmail.com)
- Nícolas Alessandro de Souza Belete (nicolas.belete@gmail.com)

## Summary:

Computer Vision Tool Collection for Inovisão. This collection of tools allows the user to select an image (or folder) and perform numerous actions, such as:
- Generate new datasets and classes
- Segment images
- Extract features from an image
- Extract frames from videos
- Train machine learning algorithms
- Classify using CNNs
- Experiment with data using Keras
- Create XML files from previously created segments

## Open Software License: 

NPOSL-3.0 (https://opensource.org/licenses/NPOSL-3.0) - Free for non-profit use (e.g. education, scientific research, etc.). Contact Inovisão's Prof. Hemerson Pistori (pistori@ucdb.br) should there be any interest in commercial exploitation of this software.

## How to cite:

[1] dos Santos Ferreira, A., Freitas, D. M., da Silva, G. G., Pistori, H., & Folhes, M. T. (2017). Weed detection in soybean crops using ConvNets. Computers and Electronics in Agriculture, 143, 314-324.

## How to use:

- In order to download Pynovisao, click the download button in the top right of the screen (Compressed folder), or type the following command in a terminal:
```
 $ git clone http://git.inovisao.ucdb.br/inovisao/pynovisao
```

- The following steps assume you are inside the cloned repository directory:
```
 [...]/pynovisao
```

- Enter the folder named **[...]/pynovisao/src** or type the following command in the terminal to do so:
```
 $ cd src
```

- Next, type the following command if you desire to run it using Python 2.7:
```
 $ python main.py
```
- Or, should you want to run it using Python 3.6:
```
 $ python3 main.py
```

- A window such as the following will open, and you can start using Pynovisão and its features:

    ![pynovisao](data/pynovisao.png)
    
## Other options:

- Show all available options:

```
 $ python main.py --help
```

- Run the program, defining the desired classes and their respective colors (X11 color names):

```
 $ python main.py --classes "Soil Soy Grass LargeLeaves" --colors "Orange SpringGreen RebeccaPurple Snow"
```

- A Linux script is available in *[...]/pynovisao/src/util* to help split images into training, validation and testing datasets. It has not yet been integrated into the GUI.

```
 $ cd src/util
 $ chmod 755 split_data.sh
 $ ./split_data.sh -h
```


### How to Install
## Option 1: Linux-only script
You can easily install Pynovisão using the automated installation script provided with it, by following these steps:

- From inside the repository directory:
```
 [...]/pynovisao
```

- Execute the following command:
```
$ sudo bash INSTALL.sh
```
**NOTE**: This script has been tested on Ubuntu 18.04 and 19.04.

## Option 2: Without INSTALL.sh

### Linux

Besides its dependencies, Python 2.7.6 or Python 3.6 is required (the latest versions tested with this software).

- Install the necessary dependencies for Python 3.6:
```
$ sudo apt-get update
$ sudo apt-get install libfreetype6-dev tk tk-dev python3-pip openjdk-8-jre openjdk-8-jdk weka weka-doc python3-tk python3-matplotlib
$ source ~/.bashrc
$ sudo pip3 install numpy
$ sudo pip3 install -r requirements_pip3.txt
$ sudo pip3 install tensorflow
$ sudo pip3 install keras
```

- Install the necessary dependencies for Python 2.7:
```
$ sudo apt-get update
$ sudo apt-get install libfreetype6-dev tk tk-dev python-pip openjdk-8-jre openjdk-8-jdk weka weka-doc python-tk python-matplotlib
$ source ~/.bashrc
$ sudo pip install numpy
$ sudo pip install -r requirements_pip3.txt
$ sudo pip install tensorflow
$ sudo pip install keras
```

### Windows (work in progress)

- Install [Anaconda](http://continuum.io/downloads), which contains all the dependencies, including Python. Just download the .exe file and run it.
- OpenCV 2.7
- python-weka-wrapper (classification)

#### Installing OpenCV-Python on Windows
 - [OpenCV-Python](https://opencv-python-tutroals.readthedocs.org/en/latest/py_tutorials/py_setup/py_setup_in_windows/py_setup_in_windows.html#install-opencv-python-in-windows):
	1. Download [OpenCV](https://opencv-python-tutroals.readthedocs.org/en/latest/py_tutorials/py_setup/py_setup_in_windows/py_setup_in_windows.html#install-opencv-python-in-windows).
	2. Extract the files to the desired location.
	3. Go to the folder opencv/build/python/2.7.
	4. Copy the file cv2.pyd to C:/Python27/lib/site-packages.
	5. Open a terminal and type python to start the interpreter.
	6. Type:

      ```
        >>> import cv2
        >>> print cv2.__version__
      ```
    
#### Installing javabridge and python-weka-wrapper on Windows
Install .NET 4.0 (if it is not already installed).

Install Windows SDK 7.1.

Open the Windows SDK command prompt (not the regular command prompt!) and install javabridge and python-weka-wrapper:
```
> set MSSdk=1
> set DISTUTILS_USE_SDK=1
> pip install javabridge
> pip install python-weka-wrapper
```

You can now also run python-weka-wrapper from the regular command prompt.
#### More information
- http://pythonhosted.org/python-weka-wrapper/install.html
- http://pythonhosted.org/python-weka-wrapper/troubleshooting.html

### How to Install Caffe (Optional)

#### Ubuntu / Windows
WiP
To use the CNNCaffe classifier, a ConvNet based on the AlexNet topology, you need to install the Caffe software.

Installing Caffe is more complex than the installations described above; detailed instructions can be found at the link below:
- http://caffe.berkeleyvision.org/installation.html

After installing Caffe, in order to perform classification you need to train your network in that software, since Pynovisão has no interface for training the ConvNet.

The tutorial for the training can be found at the link below:
- http://caffe.berkeleyvision.org/gathered/examples/imagenet.html

Finally, you will need to configure your CNNCaffe.
- For the ModelDef, ModelWeights and MeanImage fields, you must provide the paths corresponding to the training performed in the previous step.
- For the LabelsFile field, you must provide the path to a file that lists the class names in the order 0, 1, ..., n-1, where n is the number of classes you trained.
- An example file can be found at examples/labels.txt (see the illustration below).
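
As an illustration only (assuming, hypothetically, a network trained on the four classes used earlier in this README, in that order), such a labels file would simply list one class name per line; this is not the actual contents of examples/labels.txt:

```
Soil
Soy
Grass
LargeLeaves
```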

### Implementing a new classifier in Pynovisão

In this section we show the steps needed to implement a new classifier in Pynovisão. As an example, we use **Syntactic**, of type **KTESTABLE**, with the vocabulary size as a hyperparameter.

Initially, you need to create a class that keeps all the types of your classifier in a dictionary (key, value). The class must be created inside *[...]/pynovisao/src/classification/*. As an example, see *SyntacticAlias* in *[...]/pynovisao/src/classification/syntactic_alias.py*.
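
Below is a minimal sketch of what such an alias class could look like. The entries and method name are illustrative placeholders only, not the actual contents of *syntactic_alias.py*:

```python
# syntactic_alias.py -- illustrative sketch, not the real file
from collections import OrderedDict

class SyntacticAlias(object):
    """Maps the human-readable name of each classifier type (key)
    to the identifier used internally (value)."""

    def get_aliases(self):
        # Hypothetical entries; add one pair per classifier type you support.
        return OrderedDict([
            ('KTESTABLE', 'ktestable'),
            ('OTHER_TYPE', 'other_type')
        ])
```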

The next step is to create the .py file for your classifier in the directory *[...]/pynovisao/src/classification/*, for example *syntactic.py*.
In this newly-created file you must implement your classifier class extending the class **Classifier**, which is implemented in the file *[...]/pynovisao/src/classification/classifier.py*.
See the example below:

```python
# syntactic.py
# Minimal required imports
from collections import OrderedDict
from util.config import Config
from util.utils import TimeUtils
from classifier import Classifier

class Syntactic(Classifier):
    """Class for syntactic classifier"""
```

In the class constructor you must provide default values for the parameters. In the example below, **classname** is the type of the classifier and **options** is the size of the alphabet. In addition, some attributes must be initialized: **self.classname** and **self.options**. The attribute **self.dataset** (optional) holds the path to the training and testing dataset that is shown to the user in the GUI. Having this attribute in the class is important for accessing the dataset from any of the methods; it is initialized in the **train** method, discussed later.

```python
def __init__(self, classname="KTESTABLE", options='32'):

        self.classname = Config("ClassName", classname, str)
        self.options = Config("Options", options, str)
        self.dataset = None
        self.reset()
```

The methods **get_name**, **get_config**, **set_config**, **get_summary_config** and **must_train** have default implementations, as can be seen in *[...]/pynovisao/src/classification/classifier.py*.

The **train** method must be implemented in order to train your classifier. The **dataset** parameter receives the path to the training images. Within the method, the value of the attribute **self.dataset**, declared as optional in the constructor, is updated to the current training directory.

```python
def train(self, dataset, training_data, force = False):

        dataset += '/'

        # Attribute which retains the dataset path.
        self.dataset = dataset

        # The two tests below are default.
        if self.data is not None and not force:
            return

        if self.data is not None:
            self.reset()

        # Implement your training here.
```

The **classify** method must be implemented should you want your classifier to be able to predict classes for images. The **dataset** parameter receives the training images, and **test_dir** receives the temporary folder path created by Pynovisão, where the testing images are located. This folder is created within the **dataset** directory and, to access it, just concatenate **dataset** and **test_dir**, as shown in the example below. The parameter **test_data** is a .arff file with data for the testing images.
 
This method must return a list containing all the classes predicted by the classifier, e.g. `['weed', 'weed', 'target_stain', 'weed']`.

```python
def classify(self, dataset, test_dir, test_data):

       # Directory retaining the testing images.
       path_test = dataset + '/' + test_dir + '/'

       # Implement here the prediction algorithm for your classifier.

       return # A list with the predicted classes
```

The **cross_validate** method must be implemented and must return a string (info) with the metrics.
Note: the attribute **self.dataset**, updated in **train**, can be used in **cross_validate** to access the training images folder.

```python
def cross_validate(self, detail = True):
        start_time = TimeUtils.get_time()
        info = "Scheme:\t%s %s\n" % (str(self.classifier.classname), "".join([str(option) for option in self.classifier.options]))

        # Implement here the cross validation.

        return info
```
The **reset** method must also be implemented, in its default form, as seen below.

```python
def reset(self):
        self.data = None
        self.classifier = None
```

After implementing your classifier, you must configure it in Pynovisão by modifying **[...]/pynovisao/src/classification/__init__.py**.
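The registration pattern should follow the existing entries in that file; as a rough, hypothetical sketch, assuming the module only needs to import and expose the new class (check the real __init__.py for the pattern actually used):

```python
# [...]/pynovisao/src/classification/__init__.py -- hypothetical excerpt
# Import the new classifier alongside the existing ones so Pynovisao can offer it.
# (For Python 3, use a relative import: from .syntactic import Syntactic)
from syntactic import Syntactic

__all__ = ["Syntactic"]
```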

Should utility classes be necessary, they must be created in **[...]/pynovisao/src/util/**. They must also be registered as modules in **[...]/pynovisao/src/util/__init__.py**.


Should any problem related to the number of processes arise, set these two environment variables in your terminal:

```
export OMP_NUM_THREADS=**number of threads your cpu has**
export KMP_AFFINITY="verbose,explicit,proclist=[0,3,5,9,12,15,18,21],granularity=core"
```

### How to Use the XML Annotation Tools

For those who wish to create .xml files during the segmentation process, Pynovisão is now capable of doing so.
After following the previous steps for segmenting an image and choosing all the desired segments, click on *Segmentation -> Create .XML file* and the file with the annotations will be saved in *[...]/pynovisao/data/XML*, under the name ***image** + .xml*.

Should the user want to use previously segmented images, it is possible to have Pynovisão search for the position of such segments and create the corresponding Bounding Box.
To make use of this feature:
- Separate the segments and the full images from each other. This is not required, but it helps with execution time.
- Open Pynovisão.
- Select *XML -> Configure folders*.
- Choose the desired image and segment folders you wish to use.
- Click *Save All Directories*.
- With your desired folders chosen, click *XML -> Execute Conversion*.
- The log (if enabled by the user) will update for every completed image.
- Once the log states that the process has ended, the .xml files can be found in *[...]/pynovisao/data/XML* under the name ***image** + .xml*.