Classification Models

Bidirectional LSTM Model

class kashgari.tasks.classification.BiLSTM_Model(embedding=None, *, sequence_length=None, hyper_parameters=None, multi_label=False, text_processor=None, label_processor=None)[source]

Bases: kashgari.tasks.classification.abc_model.ABCClassificationModel

__init__(embedding=None, *, sequence_length=None, hyper_parameters=None, multi_label=False, text_processor=None, label_processor=None)
Parameters
  • embedding (kashgari.embeddings.abc_embedding.ABCEmbedding) – embedding object

  • sequence_length (int) – target sequence length

  • hyper_parameters (Dict[str, Dict[str, Any]]) – hyper_parameters to overwrite

  • multi_label (bool) – is multi-label classification

  • text_processor (kashgari.processors.abc_processor.ABCProcessor) – text processor

  • label_processor (kashgari.processors.abc_processor.ABCProcessor) – label processor
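
A minimal end-to-end sketch (train_x is assumed to be a list of tokenized sentences and train_y a list of labels; both names are placeholders, not part of the API):

>>> from kashgari.tasks.classification import BiLSTM_Model
>>> model = BiLSTM_Model(sequence_length=100)   # embedding=None falls back to the library default
>>> model.fit(train_x, train_y)
>>> model.predict([['this', 'is', 'a', 'tokenized', 'sentence']])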

build_model(x_train, y_train)

Build the model with x_data and y_data.

This function sets up a CorpusGenerator, then calls ABCClassificationModel.build_model_generator() to prepare the processor and model.

Parameters
  • x_train (List[List[str]]) –

  • y_train (Union[List[str], List[List[str]], List[Tuple[str]]]) –

Return type

None

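A short sketch of calling build_model directly before training (train_x/train_y are placeholder corpus variables; inspecting the underlying Keras network via model.tf_model assumes that attribute name):

>>> model = BiLSTM_Model()
>>> model.build_model(train_x, train_y)   # prepares processors and the underlying network
>>> model.tf_model.summary()              # tf_model is assumed to hold the built tf.keras.Model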

build_model_arc()[source]
Return type

None

build_model_generator(generators)
Parameters

generators (List[kashgari.generators.CorpusGenerator]) –

Return type

None

compile_model(loss=None, optimizer=None, metrics=None, **kwargs)

Configures the model for training. Calls tf.keras.Model.compile() to compile the model with a custom loss, optimizer, and metrics.

Examples

>>> model = BiLSTM_Model()
>>> # Build model with corpus
>>> model.build_model(train_x, train_y)
>>> # Compile model with custom loss, optimizer and metrics
>>> model.compile_model(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
Parameters
  • loss (Any) – name of objective function, objective function or tf.keras.losses.Loss instance.

  • optimizer (Any) – name of optimizer or optimizer instance.

  • metrics (object) – List of metrics to be evaluated by the model during training and testing.

  • **kwargs (Any) – additional parameters passed to tf.keras.Model.compile().

Return type

None

classmethod default_hyper_parameters()[source]

The default hyper-parameters dict of the model; all models must implement this method.

You can easily change the model's hyper-parameters.

For example, change the LSTM units in BiLSTM_Model from 128 to 32:

>>> from kashgari.tasks.classification import BiLSTM_Model
>>> hyper = BiLSTM_Model.default_hyper_parameters()
>>> print(hyper)
{'layer_bi_lstm': {'units': 128, 'return_sequences': False}, 'layer_output': {}}
>>> hyper['layer_bi_lstm']['units'] = 32
>>> model = BiLSTM_Model(hyper_parameters=hyper)
Returns

hyper params dict

Return type

Dict[str, Dict[str, Any]]

evaluate(x_data, y_data, *, batch_size=32, digits=4, multi_label_threshold=0.5, truncating=False)
Parameters
  • x_data (List[List[str]]) –

  • y_data (Union[List[str], List[List[str]], List[Tuple[str]]]) –

  • batch_size (int) –

  • digits (int) –

  • multi_label_threshold (float) –

  • truncating (bool) –

Return type

Dict
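
A sketch of evaluating on a hold-out set (test_x/test_y are placeholders); evaluate returns a classification-report-style dict:

>>> report = model.evaluate(test_x, test_y, batch_size=32, digits=4)
>>> print(report)   # precision/recall/f1-style metrics as a dict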

fit(x_train, y_train, x_validate=None, y_validate=None, *, batch_size=64, epochs=5, callbacks=None, fit_kwargs=None)

Trains the model for a given number of epochs with given data set list.

Parameters
  • x_train (List[List[str]]) – Array of training feature data (if the model has a single input), or a tuple of training feature data arrays (if the model has multiple inputs)

  • y_train (Union[List[str], List[List[str]], List[Tuple[str]]]) – Array of training label data

  • x_validate (List[List[str]]) – Array of validation feature data (if the model has a single input), or a tuple of validation feature data arrays (if the model has multiple inputs)

  • y_validate (Union[List[str], List[List[str]], List[Tuple[str]]]) – Array of validation label data

  • batch_size (int) – Number of samples per gradient update; defaults to 64.

  • epochs (int) – Number of epochs to train the model. An epoch is an iteration over the entire x and y data provided.

  • callbacks (List[keras.callbacks.Callback]) – List of tf.keras.callbacks.Callback instances to apply during training. See tf.keras.callbacks.

  • fit_kwargs (Dict) – additional arguments passed to tf.keras.Model.fit()

Returns

A tf.keras.callbacks.History object. Its History.history attribute is a record of training loss values and metric values at successive epochs, as well as validation loss values and validation metric values (if applicable).

Return type

tensorflow.python.keras.callbacks.History
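
A sketch of a typical fit call with validation data and an early-stopping callback (all data variables are placeholders):

>>> from tensorflow.keras.callbacks import EarlyStopping
>>> history = model.fit(train_x, train_y,
...                     x_validate=valid_x, y_validate=valid_y,
...                     batch_size=64, epochs=10,
...                     callbacks=[EarlyStopping(monitor='val_loss', patience=2)])
>>> history.history['loss']   # training loss per epoch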

fit_generator(train_sample_gen, valid_sample_gen=None, *, batch_size=64, epochs=5, callbacks=None, fit_kwargs=None)

Trains the model for a given number of epochs with given data generator.

The data generator must be a subclass of CorpusGenerator.

Parameters
  • train_sample_gen (kashgari.generators.CorpusGenerator) – training data generator.

  • valid_sample_gen (kashgari.generators.CorpusGenerator) – validation data generator.

  • batch_size (int) – Number of samples per gradient update; defaults to 64.

  • epochs (int) – Number of epochs to train the model. An epoch is an iteration over the entire x and y data provided.

  • callbacks (List[keras.callbacks.Callback]) – List of tf.keras.callbacks.Callback instances to apply during training. See tf.keras.callbacks.

  • fit_kwargs (Dict) – additional arguments passed to tf.keras.Model.fit()

Returns

A tf.keras.callbacks.History object. Its History.history attribute is a record of training loss values and metric values at successive epochs, as well as validation loss values and validation metric values (if applicable).

Return type

tensorflow.python.keras.callbacks.History
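
A sketch of generator-based training, assuming CorpusGenerator can be built directly from parallel x/y lists (check kashgari.generators for the exact constructor):

>>> from kashgari.generators import CorpusGenerator
>>> train_gen = CorpusGenerator(train_x, train_y)
>>> valid_gen = CorpusGenerator(valid_x, valid_y)
>>> history = model.fit_generator(train_gen, valid_sample_gen=valid_gen,
...                               batch_size=64, epochs=5)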

classmethod load_model(model_path, custom_objects=None, encoding='utf-8')
Parameters
  • model_path (str) –

  • custom_objects (Dict) –

  • encoding (str) –

Return type

Union[ABCLabelingModel, ABCClassificationModel]

predict(x_data, *, batch_size=32, truncating=False, multi_label_threshold=0.5, predict_kwargs=None)

Generates output predictions for the input samples.

Computation is done in batches.

Parameters
  • x_data (List[List[str]]) – The input data, as a list of tokenized sequences.

  • batch_size (int) – Number of samples per batch; defaults to 32.

  • truncating (bool) – whether to truncate sequences longer than model.embedding.sequence_length

  • multi_label_threshold (float) –

  • predict_kwargs (Dict) – additional arguments passed to tf.keras.Model.predict()

Returns

the predicted labels for the input samples.

Return type

Union[List[str], List[List[str]], List[Tuple[str]]]
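
A sketch of batch prediction; multi_label_threshold is presumably only relevant for a model built with multi_label=True (sample text and output are illustrative):

>>> samples = [['kashgari', 'is', 'a', 'text', 'classification', 'toolkit']]
>>> model.predict(samples, batch_size=32)
['news']
>>> model.predict(samples, multi_label_threshold=0.3)   # keep all labels scoring >= 0.3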

save(model_path, encoding='utf-8')
Parameters
  • model_path (str) –

  • encoding (str) –

Return type

str
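
save and load_model form a round trip; a sketch (the path is a placeholder):

>>> model.save('saved_bilstm_model')
>>> loaded = BiLSTM_Model.load_model('saved_bilstm_model')
>>> loaded.predict([['a', 'tokenized', 'sentence']])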

to_dict()
Return type

Dict

Bidirectional GRU Model

class kashgari.tasks.classification.BiGRU_Model(embedding=None, *, sequence_length=None, hyper_parameters=None, multi_label=False, text_processor=None, label_processor=None)[source]

Bases: kashgari.tasks.classification.abc_model.ABCClassificationModel

__init__(embedding=None, *, sequence_length=None, hyper_parameters=None, multi_label=False, text_processor=None, label_processor=None)
Parameters
  • embedding (kashgari.embeddings.abc_embedding.ABCEmbedding) – embedding object

  • sequence_length (int) – target sequence length

  • hyper_parameters (Dict[str, Dict[str, Any]]) – hyper_parameters to overwrite

  • multi_label (bool) – is multi-label classification

  • text_processor (kashgari.processors.abc_processor.ABCProcessor) – text processor

  • label_processor (kashgari.processors.abc_processor.ABCProcessor) – label processor
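
BiGRU_Model exposes the same ABCClassificationModel interface as BiLSTM_Model, so it is a drop-in replacement; a minimal sketch (train_x/train_y/test_x are placeholders):

>>> from kashgari.tasks.classification import BiGRU_Model
>>> model = BiGRU_Model(sequence_length=100)
>>> model.fit(train_x, train_y)
>>> model.predict(test_x)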

build_model(x_train, y_train)

Build the model with x_data and y_data.

This function sets up a CorpusGenerator, then calls ABCClassificationModel.build_model_generator() to prepare the processor and model.

Parameters
  • x_train (List[List[str]]) –

  • y_train (Union[List[str], List[List[str]], List[Tuple[str]]]) –

Return type

None


build_model_arc()[source]
Return type

None

build_model_generator(generators)
Parameters

generators (List[kashgari.generators.CorpusGenerator]) –

Return type

None

compile_model(loss=None, optimizer=None, metrics=None, **kwargs)

Configures the model for training. Calls tf.keras.Model.compile() to compile the model with a custom loss, optimizer, and metrics.

Examples

>>> model = BiGRU_Model()
>>> # Build model with corpus
>>> model.build_model(train_x, train_y)
>>> # Compile model with custom loss, optimizer and metrics
>>> model.compile_model(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
Parameters
  • loss (Any) – name of objective function, objective function or tf.keras.losses.Loss instance.

  • optimizer (Any) – name of optimizer or optimizer instance.

  • metrics (object) – List of metrics to be evaluated by the model during training and testing.

  • **kwargs (Any) – additional parameters passed to tf.keras.Model.compile().

Return type

None

classmethod default_hyper_parameters()[source]

The default hyper-parameters dict of the model; all models must implement this method.

You can easily change the model's hyper-parameters.

For example, change the LSTM units in BiLSTM_Model from 128 to 32:

>>> from kashgari.tasks.classification import BiLSTM_Model
>>> hyper = BiLSTM_Model.default_hyper_parameters()
>>> print(hyper)
{'layer_bi_lstm': {'units': 128, 'return_sequences': False}, 'layer_output': {}}
>>> hyper['layer_bi_lstm']['units'] = 32
>>> model = BiLSTM_Model(hyper_parameters=hyper)
Returns

hyper params dict

Return type

Dict[str, Dict[str, Any]]

evaluate(x_data, y_data, *, batch_size=32, digits=4, multi_label_threshold=0.5, truncating=False)
Parameters
  • x_data (List[List[str]]) –

  • y_data (Union[List[str], List[List[str]], List[Tuple[str]]]) –

  • batch_size (int) –

  • digits (int) –

  • multi_label_threshold (float) –

  • truncating (bool) –

Return type

Dict

fit(x_train, y_train, x_validate=None, y_validate=None, *, batch_size=64, epochs=5, callbacks=None, fit_kwargs=None)

Trains the model for a given number of epochs with given data set list.

Parameters
  • x_train (List[List[str]]) – Array of training feature data (if the model has a single input), or a tuple of training feature data arrays (if the model has multiple inputs)

  • y_train (Union[List[str], List[List[str]], List[Tuple[str]]]) – Array of training label data

  • x_validate (List[List[str]]) – Array of validation feature data (if the model has a single input), or a tuple of validation feature data arrays (if the model has multiple inputs)

  • y_validate (Union[List[str], List[List[str]], List[Tuple[str]]]) – Array of validation label data

  • batch_size (int) – Number of samples per gradient update; defaults to 64.

  • epochs (int) – Number of epochs to train the model. An epoch is an iteration over the entire x and y data provided.

  • callbacks (List[keras.callbacks.Callback]) – List of tf.keras.callbacks.Callback instances to apply during training. See tf.keras.callbacks.

  • fit_kwargs (Dict) – additional arguments passed to tf.keras.Model.fit()

Returns

A tf.keras.callbacks.History object. Its History.history attribute is a record of training loss values and metric values at successive epochs, as well as validation loss values and validation metric values (if applicable).

Return type

tensorflow.python.keras.callbacks.History

fit_generator(train_sample_gen, valid_sample_gen=None, *, batch_size=64, epochs=5, callbacks=None, fit_kwargs=None)

Trains the model for a given number of epochs with given data generator.

The data generator must be a subclass of CorpusGenerator.

Parameters
  • train_sample_gen (kashgari.generators.CorpusGenerator) – training data generator.

  • valid_sample_gen (kashgari.generators.CorpusGenerator) – validation data generator.

  • batch_size (int) – Number of samples per gradient update; defaults to 64.

  • epochs (int) – Number of epochs to train the model. An epoch is an iteration over the entire x and y data provided.

  • callbacks (List[keras.callbacks.Callback]) – List of tf.keras.callbacks.Callback instances to apply during training. See tf.keras.callbacks.

  • fit_kwargs (Dict) – additional arguments passed to tf.keras.Model.fit()

Returns

A tf.keras.callbacks.History object. Its History.history attribute is a record of training loss values and metric values at successive epochs, as well as validation loss values and validation metric values (if applicable).

Return type

tensorflow.python.keras.callbacks.History

classmethod load_model(model_path, custom_objects=None, encoding='utf-8')
Parameters
  • model_path (str) –

  • custom_objects (Dict) –

  • encoding (str) –

Return type

Union[ABCLabelingModel, ABCClassificationModel]

predict(x_data, *, batch_size=32, truncating=False, multi_label_threshold=0.5, predict_kwargs=None)

Generates output predictions for the input samples.

Computation is done in batches.

Parameters
  • x_data (List[List[str]]) – The input data, as a list of tokenized sequences.

  • batch_size (int) – Number of samples per batch; defaults to 32.

  • truncating (bool) – whether to truncate sequences longer than model.embedding.sequence_length

  • multi_label_threshold (float) –

  • predict_kwargs (Dict) – additional arguments passed to tf.keras.Model.predict()

Returns

the predicted labels for the input samples.

Return type

Union[List[str], List[List[str]], List[Tuple[str]]]

save(model_path, encoding='utf-8')
Parameters
  • model_path (str) –

  • encoding (str) –

Return type

str

to_dict()
Return type

Dict

CNN Model

class kashgari.tasks.classification.CNN_Model(embedding=None, *, sequence_length=None, hyper_parameters=None, multi_label=False, text_processor=None, label_processor=None)[source]

Bases: kashgari.tasks.classification.abc_model.ABCClassificationModel

__init__(embedding=None, *, sequence_length=None, hyper_parameters=None, multi_label=False, text_processor=None, label_processor=None)
Parameters
  • embedding (kashgari.embeddings.abc_embedding.ABCEmbedding) – embedding object

  • sequence_length (int) – target sequence length

  • hyper_parameters (Dict[str, Dict[str, Any]]) – hyper_parameters to overwrite

  • multi_label (bool) – is multi-label classification

  • text_processor (kashgari.processors.abc_processor.ABCProcessor) – text processor

  • label_processor (kashgari.processors.abc_processor.ABCProcessor) – label processor
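
CNN_Model shares the same ABCClassificationModel interface; a minimal sketch (train_x/train_y/test_x/test_y are placeholders):

>>> from kashgari.tasks.classification import CNN_Model
>>> model = CNN_Model()
>>> model.fit(train_x, train_y, epochs=5)
>>> model.evaluate(test_x, test_y)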

build_model(x_train, y_train)

Build the model with x_data and y_data.

This function sets up a CorpusGenerator, then calls ABCClassificationModel.build_model_generator() to prepare the processor and model.

Parameters
  • x_train (List[List[str]]) –

  • y_train (Union[List[str], List[List[str]], List[Tuple[str]]]) –

Return type

None


build_model_arc()[source]
Return type

None

build_model_generator(generators)
Parameters

generators (List[kashgari.generators.CorpusGenerator]) –

Return type

None

compile_model(loss=None, optimizer=None, metrics=None, **kwargs)

Configures the model for training. Calls tf.keras.Model.compile() to compile the model with a custom loss, optimizer, and metrics.

Examples

>>> model = CNN_Model()
>>> # Build model with corpus
>>> model.build_model(train_x, train_y)
>>> # Compile model with custom loss, optimizer and metrics
>>> model.compile_model(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
Parameters
  • loss (Any) – name of objective function, objective function or tf.keras.losses.Loss instance.

  • optimizer (Any) – name of optimizer or optimizer instance.

  • metrics (object) – List of metrics to be evaluated by the model during training and testing.

  • **kwargs (Any) – additional parameters passed to tf.keras.Model.compile().

Return type

None

classmethod default_hyper_parameters()[source]

The default hyper-parameters dict of the model; all models must implement this method.

You can easily change the model's hyper-parameters.

For example, change the LSTM units in BiLSTM_Model from 128 to 32:

>>> from kashgari.tasks.classification import BiLSTM_Model
>>> hyper = BiLSTM_Model.default_hyper_parameters()
>>> print(hyper)
{'layer_bi_lstm': {'units': 128, 'return_sequences': False}, 'layer_output': {}}
>>> hyper['layer_bi_lstm']['units'] = 32
>>> model = BiLSTM_Model(hyper_parameters=hyper)
Returns

hyper params dict

Return type

Dict[str, Dict[str, Any]]

evaluate(x_data, y_data, *, batch_size=32, digits=4, multi_label_threshold=0.5, truncating=False)
Parameters
  • x_data (List[List[str]]) –

  • y_data (Union[List[str], List[List[str]], List[Tuple[str]]]) –

  • batch_size (int) –

  • digits (int) –

  • multi_label_threshold (float) –

  • truncating (bool) –

Return type

Dict

fit(x_train, y_train, x_validate=None, y_validate=None, *, batch_size=64, epochs=5, callbacks=None, fit_kwargs=None)

Trains the model for a given number of epochs with given data set list.

Parameters
  • x_train (List[List[str]]) – Array of training feature data (if the model has a single input), or a tuple of training feature data arrays (if the model has multiple inputs)

  • y_train (Union[List[str], List[List[str]], List[Tuple[str]]]) – Array of training label data

  • x_validate (List[List[str]]) – Array of validation feature data (if the model has a single input), or a tuple of validation feature data arrays (if the model has multiple inputs)

  • y_validate (Union[List[str], List[List[str]], List[Tuple[str]]]) – Array of validation label data

  • batch_size (int) – Number of samples per gradient update; defaults to 64.

  • epochs (int) – Number of epochs to train the model. An epoch is an iteration over the entire x and y data provided.

  • callbacks (List[keras.callbacks.Callback]) – List of tf.keras.callbacks.Callback instances to apply during training. See tf.keras.callbacks.

  • fit_kwargs (Dict) – additional arguments passed to tf.keras.Model.fit()

Returns

A tf.keras.callbacks.History object. Its History.history attribute is a record of training loss values and metric values at successive epochs, as well as validation loss values and validation metric values (if applicable).

Return type

tensorflow.python.keras.callbacks.History

fit_generator(train_sample_gen, valid_sample_gen=None, *, batch_size=64, epochs=5, callbacks=None, fit_kwargs=None)

Trains the model for a given number of epochs with given data generator.

The data generator must be a subclass of CorpusGenerator.

Parameters
  • train_sample_gen (kashgari.generators.CorpusGenerator) – training data generator.

  • valid_sample_gen (kashgari.generators.CorpusGenerator) – validation data generator.

  • batch_size (int) – Number of samples per gradient update; defaults to 64.

  • epochs (int) – Number of epochs to train the model. An epoch is an iteration over the entire x and y data provided.

  • callbacks (List[keras.callbacks.Callback]) – List of tf.keras.callbacks.Callback instances to apply during training. See tf.keras.callbacks.

  • fit_kwargs (Dict) – additional arguments passed to tf.keras.Model.fit()

Returns

A tf.keras.callbacks.History object. Its History.history attribute is a record of training loss values and metric values at successive epochs, as well as validation loss values and validation metric values (if applicable).

Return type

tensorflow.python.keras.callbacks.History

classmethod load_model(model_path, custom_objects=None, encoding='utf-8')
Parameters
  • model_path (str) –

  • custom_objects (Dict) –

  • encoding (str) –

Return type

Union[ABCLabelingModel, ABCClassificationModel]

predict(x_data, *, batch_size=32, truncating=False, multi_label_threshold=0.5, predict_kwargs=None)

Generates output predictions for the input samples.

Computation is done in batches.

Parameters
  • x_data (List[List[str]]) – The input data, as a list of tokenized sequences.

  • batch_size (int) – Number of samples per batch; defaults to 32.

  • truncating (bool) – whether to truncate sequences longer than model.embedding.sequence_length

  • multi_label_threshold (float) –

  • predict_kwargs (Dict) – additional arguments passed to tf.keras.Model.predict()

Returns

the predicted labels for the input samples.

Return type

Union[List[str], List[List[str]], List[Tuple[str]]]

save(model_path, encoding='utf-8')
Parameters
  • model_path (str) –

  • encoding (str) –

Return type

str

to_dict()
Return type

Dict