Labeling Models

Bidirectional LSTM Model

class kashgari.tasks.labeling.BiLSTM_Model(embedding: kashgari.embeddings.abc_embedding.ABCEmbedding = None, sequence_length: int = None, hyper_parameters: Dict[str, Dict[str, Any]] = None)[source]

Bases: kashgari.tasks.labeling.abc_model.ABCLabelingModel

__init__(embedding: kashgari.embeddings.abc_embedding.ABCEmbedding = None, sequence_length: int = None, hyper_parameters: Dict[str, Dict[str, Any]] = None)
Parameters:
  • embedding – embedding object
  • sequence_length – target sequence length
  • hyper_parameters – hyper-parameters to overwrite (see the example below)
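Example

A minimal construction sketch. Padding every sequence to a fixed length of 100 is an illustrative choice, not a recommended value; with no embedding given, Kashgari falls back to a default trainable embedding.

>>> from kashgari.tasks.labeling import BiLSTM_Model
>>> model = BiLSTM_Model(sequence_length=100)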
build_model(x_data: List[List[str]], y_data: List[List[str]]) → None

Build the model with x_data and y_data.

This function sets up a CorpusGenerator,
then calls ABCLabelingModel.build_model_generator() to prepare the processor and the model.
Parameters:
  • x_data – train feature data
  • y_data – train label data
build_model_arc() → None[source]
build_model_generator(generators: List[kashgari.generators.CorpusGenerator]) → None
compile_model(loss: Any = None, optimizer: Any = None, metrics: Any = None, **kwargs) → None

Configures the model for training. Delegates to tf.keras.Model.compile() to compile the model with a custom loss, optimizer, and metrics.

Examples

>>> model = BiLSTM_Model()
>>> # Build model with corpus
>>> model.build_model(train_x, train_y)
>>> # Compile model with custom loss, optimizer and metrics
>>> model.compile_model(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
Parameters:
  • loss – name of objective function, objective function or tf.keras.losses.Loss instance.
  • optimizer – name of optimizer or optimizer instance.
  • metrics – list of metrics to be evaluated by the model during training and testing.
  • kwargs – additional parameters passed to tf.keras.Model.compile().
classmethod default_hyper_parameters() → Dict[str, Dict[str, Any]][source]

The default hyper-parameters of the model as a dict; all models must implement this method.

You can easily override a model's hyper-parameters.

For example, to change the LSTM units in BiLSTM_Model from 128 to 32:

>>> from kashgari.tasks.classification import BiLSTM_Model
>>> hyper = BiLSTM_Model.default_hyper_parameters()
>>> print(hyper)
{'layer_bi_lstm': {'units': 128, 'return_sequences': False}, 'layer_output': {}}
>>> hyper['layer_bi_lstm']['units'] = 32
>>> model = BiLSTM_Model(hyper_parameters=hyper)
Returns: hyper-parameters dict
evaluate(x_data: List[List[str]], y_data: List[List[str]], batch_size: int = 32, digits: int = 4, truncating: bool = False) → Dict[KT, VT]

Build a text report showing the main labeling metrics.

Parameters:
  • x_data – evaluation feature data
  • y_data – evaluation label data
  • batch_size – number of samples per batch, defaults to 32
  • digits – number of digits used when formatting floating point values in the report
  • truncating – remove tokens from sequences longer than model.embedding.sequence_length
Returns:

A report dict

Example

>>> from kashgari.tasks.labeling import BiGRU_Model
>>> model = BiGRU_Model()
>>> model.fit(train_x, train_y, valid_x, valid_y)
>>> report = model.evaluate(test_x, test_y)
           precision    recall  f1-score   support
    <BLANKLINE>
          ORG     0.0665    0.1108    0.0831       984
          LOC     0.1870    0.2086    0.1972      1951
          PER     0.1685    0.0882    0.1158       884
    <BLANKLINE>
    micro avg     0.1384    0.1555    0.1465      3819
    macro avg     0.1516    0.1555    0.1490      3819
    <BLANKLINE>
>>> print(report)
    {
     'f1-score': 0.14895159934887792,
     'precision': 0.1516294012813676,
     'recall': 0.15553809897879026,
     'support': 3819,
     'detail': {'LOC': {'f1-score': 0.19718992248062014,
                        'precision': 0.18695452457510336,
                        'recall': 0.20861096873398258,
                        'support': 1951},
                'ORG': {'f1-score': 0.08307926829268293,
                        'precision': 0.06646341463414634,
                        'recall': 0.11077235772357724,
                        'support': 984},
                'PER': {'f1-score': 0.11581291759465479,
                        'precision': 0.16846652267818574,
                        'recall': 0.08823529411764706,
                        'support': 884}},
    }
fit(x_train: List[List[str]], y_train: List[List[str]], x_validate: List[List[str]] = None, y_validate: List[List[str]] = None, batch_size: int = 64, epochs: int = 5, callbacks: List[tensorflow.python.keras.callbacks.Callback] = None, fit_kwargs: Dict[KT, VT] = None) → tensorflow.python.keras.callbacks.History

Trains the model for a given number of epochs on the given datasets.

Parameters:
  • x_train – Array of train feature data (if the model has a single input), or tuple of train feature data array (if the model has multiple inputs)
  • y_train – Array of train label data
  • x_validate – Array of validation feature data (if the model has a single input), or tuple of validation feature data array (if the model has multiple inputs)
  • y_validate – Array of validation label data
  • batch_size – Number of samples per gradient update, default to 64.
  • epochs – Number of epochs to train the model. An epoch is an iteration over the entire x and y data provided.
  • callbacks – list of tf.keras.callbacks.Callback instances to apply during training. See tf.keras.callbacks.
  • fit_kwargs – additional arguments passed to tf.keras.Model.fit()
Returns:

A tf.keras.callbacks.History object. Its History.history attribute is a record of training loss values and metric values at successive epochs, as well as validation loss and metric values (if applicable).
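
Example

A minimal training sketch; the toy BIO-tagged corpus below is made up for illustration.

>>> train_x = [['I', 'live', 'in', 'Kashgar'], ['Alice', 'works', 'at', 'Acme']]
>>> train_y = [['O', 'O', 'O', 'B-LOC'], ['B-PER', 'O', 'O', 'B-ORG']]
>>> model = BiLSTM_Model()
>>> history = model.fit(train_x, train_y, batch_size=2, epochs=2)
>>> print(history.history['loss'])  # one loss value per epoch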

fit_generator(train_sample_gen: kashgari.generators.CorpusGenerator, valid_sample_gen: kashgari.generators.CorpusGenerator = None, batch_size: int = 64, epochs: int = 5, callbacks: List[tf.keras.callbacks.Callback] = None, fit_kwargs: Dict[KT, VT] = None) → tensorflow.python.keras.callbacks.History

Trains the model for a given number of epochs with given data generator.

The data generators must be subclasses of CorpusGenerator.

Parameters:
  • train_sample_gen – train data generator.
  • valid_sample_gen – valid data generator.
  • batch_size – Number of samples per gradient update, default to 64.
  • epochs – Number of epochs to train the model. An epoch is an iteration over the entire x and y data provided.
  • callbacks – list of tf.keras.callbacks.Callback instances to apply during training. See tf.keras.callbacks.
  • fit_kwargs – additional arguments passed to tf.keras.Model.fit()
Returns:

A tf.keras.callbacks.History object. Its History.history attribute is a record of training loss values and metric values at successive epochs, as well as validation loss and metric values (if applicable).
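
Example

A sketch of generator-based training, assuming a CorpusGenerator can be constructed directly from in-memory lists; for corpora too large for memory you would subclass CorpusGenerator to stream from disk instead.

>>> from kashgari.generators import CorpusGenerator
>>> train_gen = CorpusGenerator(train_x, train_y)
>>> valid_gen = CorpusGenerator(valid_x, valid_y)
>>> history = model.fit_generator(train_gen, valid_sample_gen=valid_gen,
...                               batch_size=64, epochs=5)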

classmethod load_model(model_path: str) → Union[ABCLabelingModel, ABCClassificationModel]
predict(x_data: List[List[str]], *, batch_size: int = 32, truncating: bool = False, predict_kwargs: Dict[KT, VT] = None) → List[List[str]]

Generates output predictions for the input samples.

Computation is done in batches.

Parameters:
  • x_data – the input data, as a list of token sequences.
  • batch_size – integer, defaults to 32.
  • truncating – remove tokens from sequences longer than model.embedding.sequence_length
  • predict_kwargs – additional arguments passed to tf.keras.Model.predict()
Returns:

A list of predicted label sequences.
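
Example

A usage sketch: predict returns one tag sequence per input, aligned token-for-token. The output shown is illustrative, not from a trained model.

>>> model.predict([['Alice', 'works', 'at', 'Acme']])
[['B-PER', 'O', 'O', 'B-ORG']]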

predict_entities(x_data: List[List[str]], batch_size: int = 32, join_chunk: str = ' ', truncating: bool = False, predict_kwargs: Dict[KT, VT] = None) → List[Dict[KT, VT]]

Gets entities from the input sequences.

Parameters:
  • x_data – the input data, as a list of token sequences.
  • batch_size – integer, defaults to 32.
  • truncating – remove tokens from sequences longer than model.embedding.sequence_length
  • join_chunk – delimiter used to join each entity's tokens back into a text string; pass False to keep the raw token list
  • predict_kwargs – additional arguments passed to tf.keras.Model.predict()
Returns:

A list of entity dicts.

Return type:

list
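
Example

A usage sketch; the exact schema of each returned dict is version-dependent, so it is only described loosely here.

>>> results = model.predict_entities([['Alice', 'works', 'at', 'Acme']], join_chunk=' ')
>>> for sample in results:
...     print(sample)  # one dict per input sequence, carrying the recognized entity spans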

save(model_path: str) → str

Save the model.

Parameters:
  • model_path – target path
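
Example

A save/load round-trip sketch; the target directory name is a placeholder.

>>> model.save('saved_ner_model')
>>> loaded = BiLSTM_Model.load_model('saved_ner_model')
>>> loaded.predict([['Kashgar']])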

to_dict() → Dict[str, Any]

Bidirectional GRU Model

class kashgari.tasks.labeling.BiGRU_Model(embedding: kashgari.embeddings.abc_embedding.ABCEmbedding = None, sequence_length: int = None, hyper_parameters: Dict[str, Dict[str, Any]] = None)[source]

Bases: kashgari.tasks.labeling.abc_model.ABCLabelingModel

__init__(embedding: kashgari.embeddings.abc_embedding.ABCEmbedding = None, sequence_length: int = None, hyper_parameters: Dict[str, Dict[str, Any]] = None)
Parameters:
  • embedding – embedding object
  • sequence_length – target sequence length
  • hyper_parameters – hyper-parameters to overwrite (see the example below)
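Example

A construction sketch with a pre-trained embedding. It assumes BertEmbedding accepts a local checkpoint folder; the path is a placeholder.

>>> from kashgari.embeddings import BertEmbedding
>>> from kashgari.tasks.labeling import BiGRU_Model
>>> embedding = BertEmbedding('<path-to-bert-checkpoint-folder>')
>>> model = BiGRU_Model(embedding=embedding, sequence_length=128)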
build_model(x_data: List[List[str]], y_data: List[List[str]]) → None

Build the model with x_data and y_data.

This function sets up a CorpusGenerator,
then calls ABCLabelingModel.build_model_generator() to prepare the processor and the model.
Parameters:
  • x_data – train feature data
  • y_data – train label data
build_model_arc() → None[source]
build_model_generator(generators: List[kashgari.generators.CorpusGenerator]) → None
compile_model(loss: Any = None, optimizer: Any = None, metrics: Any = None, **kwargs) → None

Configures the model for training. Delegates to tf.keras.Model.compile() to compile the model with a custom loss, optimizer, and metrics.

Examples

>>> model = BiLSTM_Model()
>>> # Build model with corpus
>>> model.build_model(train_x, train_y)
>>> # Compile model with custom loss, optimizer and metrics
>>> model.compile_model(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
Parameters:
  • loss – name of objective function, objective function or tf.keras.losses.Loss instance.
  • optimizer – name of optimizer or optimizer instance.
  • metrics – list of metrics to be evaluated by the model during training and testing.
  • kwargs – additional parameters passed to tf.keras.Model.compile().
classmethod default_hyper_parameters() → Dict[str, Dict[str, Any]][source]

The default hyper-parameters of the model as a dict; all models must implement this method.

You can easily override a model's hyper-parameters.

For example, to change the LSTM units in BiLSTM_Model from 128 to 32:

>>> from kashgari.tasks.classification import BiLSTM_Model
>>> hyper = BiLSTM_Model.default_hyper_parameters()
>>> print(hyper)
{'layer_bi_lstm': {'units': 128, 'return_sequences': False}, 'layer_output': {}}
>>> hyper['layer_bi_lstm']['units'] = 32
>>> model = BiLSTM_Model(hyper_parameters=hyper)
Returns: hyper-parameters dict
evaluate(x_data: List[List[str]], y_data: List[List[str]], batch_size: int = 32, digits: int = 4, truncating: bool = False) → Dict[KT, VT]

Build a text report showing the main labeling metrics.

Parameters:
  • x_data – evaluation feature data
  • y_data – evaluation label data
  • batch_size – number of samples per batch, defaults to 32
  • digits – number of digits used when formatting floating point values in the report
  • truncating – remove tokens from sequences longer than model.embedding.sequence_length
Returns:

A report dict

Example

>>> from kashgari.tasks.labeling import BiGRU_Model
>>> model = BiGRU_Model()
>>> model.fit(train_x, train_y, valid_x, valid_y)
>>> report = model.evaluate(test_x, test_y)
           precision    recall  f1-score   support
    <BLANKLINE>
          ORG     0.0665    0.1108    0.0831       984
          LOC     0.1870    0.2086    0.1972      1951
          PER     0.1685    0.0882    0.1158       884
    <BLANKLINE>
    micro avg     0.1384    0.1555    0.1465      3819
    macro avg     0.1516    0.1555    0.1490      3819
    <BLANKLINE>
>>> print(report)
    {
     'f1-score': 0.14895159934887792,
     'precision': 0.1516294012813676,
     'recall': 0.15553809897879026,
     'support': 3819,
     'detail': {'LOC': {'f1-score': 0.19718992248062014,
                        'precision': 0.18695452457510336,
                        'recall': 0.20861096873398258,
                        'support': 1951},
                'ORG': {'f1-score': 0.08307926829268293,
                        'precision': 0.06646341463414634,
                        'recall': 0.11077235772357724,
                        'support': 984},
                'PER': {'f1-score': 0.11581291759465479,
                        'precision': 0.16846652267818574,
                        'recall': 0.08823529411764706,
                        'support': 884}},
    }
fit(x_train: List[List[str]], y_train: List[List[str]], x_validate: List[List[str]] = None, y_validate: List[List[str]] = None, batch_size: int = 64, epochs: int = 5, callbacks: List[tensorflow.python.keras.callbacks.Callback] = None, fit_kwargs: Dict[KT, VT] = None) → tensorflow.python.keras.callbacks.History

Trains the model for a given number of epochs on the given datasets.

Parameters:
  • x_train – Array of train feature data (if the model has a single input), or tuple of train feature data array (if the model has multiple inputs)
  • y_train – Array of train label data
  • x_validate – Array of validation feature data (if the model has a single input), or tuple of validation feature data array (if the model has multiple inputs)
  • y_validate – Array of validation label data
  • batch_size – Number of samples per gradient update, default to 64.
  • epochs – Number of epochs to train the model. An epoch is an iteration over the entire x and y data provided.
  • callbacks – list of tf.keras.callbacks.Callback instances to apply during training. See tf.keras.callbacks.
  • fit_kwargs – additional arguments passed to tf.keras.Model.fit()
Returns:

A tf.keras.callbacks.History object. Its History.history attribute is a record of training loss values and metric values at successive epochs, as well as validation loss and metric values (if applicable).
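
Example

A sketch of passing standard Keras callbacks through fit. EarlyStopping monitors the validation loss here, so validation data must be supplied; the patience value is illustrative.

>>> from tensorflow.keras.callbacks import EarlyStopping
>>> stopper = EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)
>>> history = model.fit(train_x, train_y, x_validate=valid_x, y_validate=valid_y,
...                     epochs=50, callbacks=[stopper])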

fit_generator(train_sample_gen: kashgari.generators.CorpusGenerator, valid_sample_gen: kashgari.generators.CorpusGenerator = None, batch_size: int = 64, epochs: int = 5, callbacks: List[tf.keras.callbacks.Callback] = None, fit_kwargs: Dict[KT, VT] = None) → tensorflow.python.keras.callbacks.History

Trains the model for a given number of epochs with given data generator.

The data generators must be subclasses of CorpusGenerator.

Parameters:
  • train_sample_gen – train data generator.
  • valid_sample_gen – valid data generator.
  • batch_size – Number of samples per gradient update, default to 64.
  • epochs – Number of epochs to train the model. An epoch is an iteration over the entire x and y data provided.
  • callbacks – list of tf.keras.callbacks.Callback instances to apply during training. See tf.keras.callbacks.
  • fit_kwargs – additional arguments passed to tf.keras.Model.fit()
Returns:

A tf.keras.callbacks.History object. Its History.history attribute is a record of training loss values and metric values at successive epochs, as well as validation loss and metric values (if applicable).

classmethod load_model(model_path: str) → Union[ABCLabelingModel, ABCClassificationModel]
predict(x_data: List[List[str]], *, batch_size: int = 32, truncating: bool = False, predict_kwargs: Dict[KT, VT] = None) → List[List[str]]

Generates output predictions for the input samples.

Computation is done in batches.

Parameters:
  • x_data – the input data, as a list of token sequences.
  • batch_size – integer, defaults to 32.
  • truncating – remove tokens from sequences longer than model.embedding.sequence_length
  • predict_kwargs – additional arguments passed to tf.keras.Model.predict()
Returns:

A list of predicted label sequences.

predict_entities(x_data: List[List[str]], batch_size: int = 32, join_chunk: str = ' ', truncating: bool = False, predict_kwargs: Dict[KT, VT] = None) → List[Dict[KT, VT]]

Gets entities from the input sequences.

Parameters:
  • x_data – the input data, as a list of token sequences.
  • batch_size – integer, defaults to 32.
  • truncating – remove tokens from sequences longer than model.embedding.sequence_length
  • join_chunk – delimiter used to join each entity's tokens back into a text string; pass False to keep the raw token list
  • predict_kwargs – additional arguments passed to tf.keras.Model.predict()
Returns:

A list of entity dicts.

Return type:

list

save(model_path: str) → str

Save the model.

Parameters:
  • model_path – target path

to_dict() → Dict[str, Any]

Bidirectional LSTM CRF Model

class kashgari.tasks.labeling.BiLSTM_CRF_Model(embedding: kashgari.embeddings.abc_embedding.ABCEmbedding = None, sequence_length: int = None, hyper_parameters: Dict[str, Dict[str, Any]] = None)[source]

Bases: kashgari.tasks.labeling.abc_model.ABCLabelingModel

__init__(embedding: kashgari.embeddings.abc_embedding.ABCEmbedding = None, sequence_length: int = None, hyper_parameters: Dict[str, Dict[str, Any]] = None)
Parameters:
  • embedding – embedding object
  • sequence_length – target sequence length
  • hyper_parameters – hyper_parameters to overwrite
build_model(x_data: List[List[str]], y_data: List[List[str]]) → None

Build the model with x_data and y_data.

This function sets up a CorpusGenerator,
then calls ABCLabelingModel.build_model_generator() to prepare the processor and the model.
Parameters:
  • x_data – train feature data
  • y_data – train label data
build_model_arc() → None[source]
build_model_generator(generators: List[kashgari.generators.CorpusGenerator]) → None
compile_model(loss: Any = None, optimizer: Any = None, metrics: Any = None, **kwargs) → None[source]

Configures the model for training. Delegates to tf.keras.Model.compile() to compile the model with a custom loss, optimizer, and metrics.

Examples

>>> model = BiLSTM_Model()
>>> # Build model with corpus
>>> model.build_model(train_x, train_y)
>>> # Compile model with custom loss, optimizer and metrics
>>> model.compile_model(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
Parameters:
  • loss – name of objective function, objective function or tf.keras.losses.Loss instance.
  • optimizer – name of optimizer or optimizer instance.
  • metrics – list of metrics to be evaluated by the model during training and testing.
  • kwargs – additional parameters passed to tf.keras.Model.compile().
classmethod default_hyper_parameters() → Dict[str, Dict[str, Any]][source]

The default hyper-parameters of the model as a dict; all models must implement this method.

You can easily override a model's hyper-parameters.

For example, to change the LSTM units in BiLSTM_Model from 128 to 32:

>>> from kashgari.tasks.classification import BiLSTM_Model
>>> hyper = BiLSTM_Model.default_hyper_parameters()
>>> print(hyper)
{'layer_bi_lstm': {'units': 128, 'return_sequences': False}, 'layer_output': {}}
>>> hyper['layer_bi_lstm']['units'] = 32
>>> model = BiLSTM_Model(hyper_parameters=hyper)
Returns: hyper-parameters dict
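
The same override pattern works for this class. The layer keys differ per model, so inspect them first rather than reusing the dict shown above:

>>> from kashgari.tasks.labeling import BiLSTM_CRF_Model
>>> hyper = BiLSTM_CRF_Model.default_hyper_parameters()
>>> print(hyper.keys())  # discover this model's layer names before overriding
>>> model = BiLSTM_CRF_Model(hyper_parameters=hyper)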
evaluate(x_data: List[List[str]], y_data: List[List[str]], batch_size: int = 32, digits: int = 4, truncating: bool = False) → Dict[KT, VT]

Build a text report showing the main labeling metrics.

Parameters:
  • x_data – evaluation feature data
  • y_data – evaluation label data
  • batch_size – number of samples per batch, defaults to 32
  • digits – number of digits used when formatting floating point values in the report
  • truncating – remove tokens from sequences longer than model.embedding.sequence_length
Returns:

A report dict

Example

>>> from kashgari.tasks.labeling import BiGRU_Model
>>> model = BiGRU_Model()
>>> model.fit(train_x, train_y, valid_x, valid_y)
>>> report = model.evaluate(test_x, test_y)
           precision    recall  f1-score   support
    <BLANKLINE>
          ORG     0.0665    0.1108    0.0831       984
          LOC     0.1870    0.2086    0.1972      1951
          PER     0.1685    0.0882    0.1158       884
    <BLANKLINE>
    micro avg     0.1384    0.1555    0.1465      3819
    macro avg     0.1516    0.1555    0.1490      3819
    <BLANKLINE>
>>> print(report)
    {
     'f1-score': 0.14895159934887792,
     'precision': 0.1516294012813676,
     'recall': 0.15553809897879026,
     'support': 3819,
     'detail': {'LOC': {'f1-score': 0.19718992248062014,
                        'precision': 0.18695452457510336,
                        'recall': 0.20861096873398258,
                        'support': 1951},
                'ORG': {'f1-score': 0.08307926829268293,
                        'precision': 0.06646341463414634,
                        'recall': 0.11077235772357724,
                        'support': 984},
                'PER': {'f1-score': 0.11581291759465479,
                        'precision': 0.16846652267818574,
                        'recall': 0.08823529411764706,
                        'support': 884}},
    }
fit(x_train: List[List[str]], y_train: List[List[str]], x_validate: List[List[str]] = None, y_validate: List[List[str]] = None, batch_size: int = 64, epochs: int = 5, callbacks: List[tensorflow.python.keras.callbacks.Callback] = None, fit_kwargs: Dict[KT, VT] = None) → tensorflow.python.keras.callbacks.History

Trains the model for a given number of epochs on the given datasets.

Parameters:
  • x_train – Array of train feature data (if the model has a single input), or tuple of train feature data array (if the model has multiple inputs)
  • y_train – Array of train label data
  • x_validate – Array of validation feature data (if the model has a single input), or tuple of validation feature data array (if the model has multiple inputs)
  • y_validate – Array of validation label data
  • batch_size – Number of samples per gradient update, default to 64.
  • epochs – Number of epochs to train the model. An epoch is an iteration over the entire x and y data provided.
  • callbacks – list of tf.keras.callbacks.Callback instances to apply during training. See tf.keras.callbacks.
  • fit_kwargs – additional arguments passed to tf.keras.Model.fit()
Returns:

A tf.keras.callbacks.History object. Its History.history attribute is a record of training loss values and metric values at successive epochs, as well as validation loss and metric values (if applicable).

fit_generator(train_sample_gen: kashgari.generators.CorpusGenerator, valid_sample_gen: kashgari.generators.CorpusGenerator = None, batch_size: int = 64, epochs: int = 5, callbacks: List[tf.keras.callbacks.Callback] = None, fit_kwargs: Dict[KT, VT] = None) → tensorflow.python.keras.callbacks.History

Trains the model for a given number of epochs with given data generator.

The data generators must be subclasses of CorpusGenerator.

Parameters:
  • train_sample_gen – train data generator.
  • valid_sample_gen – valid data generator.
  • batch_size – Number of samples per gradient update, default to 64.
  • epochs – Number of epochs to train the model. An epoch is an iteration over the entire x and y data provided.
  • callbacks – list of tf.keras.callbacks.Callback instances to apply during training. See tf.keras.callbacks.
  • fit_kwargs – additional arguments passed to tf.keras.Model.fit()
Returns:

A tf.keras.callbacks.History object. Its History.history attribute is a record of training loss values and metric values at successive epochs, as well as validation loss and metric values (if applicable).

classmethod load_model(model_path: str) → Union[ABCLabelingModel, ABCClassificationModel]
predict(x_data: List[List[str]], *, batch_size: int = 32, truncating: bool = False, predict_kwargs: Dict[KT, VT] = None) → List[List[str]]

Generates output predictions for the input samples.

Computation is done in batches.

Parameters:
  • x_data – the input data, as a list of token sequences.
  • batch_size – integer, defaults to 32.
  • truncating – remove tokens from sequences longer than model.embedding.sequence_length
  • predict_kwargs – additional arguments passed to tf.keras.Model.predict()
Returns:

A list of predicted label sequences.

predict_entities(x_data: List[List[str]], batch_size: int = 32, join_chunk: str = ' ', truncating: bool = False, predict_kwargs: Dict[KT, VT] = None) → List[Dict[KT, VT]]

Gets entities from the input sequences.

Parameters:
  • x_data – the input data, as a list of token sequences.
  • batch_size – integer, defaults to 32.
  • truncating – remove tokens from sequences longer than model.embedding.sequence_length
  • join_chunk – delimiter used to join each entity's tokens back into a text string; pass False to keep the raw token list
  • predict_kwargs – additional arguments passed to tf.keras.Model.predict()
Returns:

A list of entity dicts.

Return type:

list

save(model_path: str) → str

Save the model.

Parameters:
  • model_path – target path

to_dict() → Dict[str, Any]

Bidirectional GRU CRF Model

class kashgari.tasks.labeling.BiGRU_CRF_Model(embedding: kashgari.embeddings.abc_embedding.ABCEmbedding = None, sequence_length: int = None, hyper_parameters: Dict[str, Dict[str, Any]] = None)[source]

Bases: kashgari.tasks.labeling.abc_model.ABCLabelingModel

__init__(embedding: kashgari.embeddings.abc_embedding.ABCEmbedding = None, sequence_length: int = None, hyper_parameters: Dict[str, Dict[str, Any]] = None)
Parameters:
  • embedding – embedding object
  • sequence_length – target sequence length
  • hyper_parameters – hyper_parameters to overwrite
build_model(x_data: List[List[str]], y_data: List[List[str]]) → None

Build the model with x_data and y_data.

This function sets up a CorpusGenerator,
then calls ABCLabelingModel.build_model_generator() to prepare the processor and the model.
Parameters:
  • x_data – train feature data
  • y_data – train label data
build_model_arc() → None[source]
build_model_generator(generators: List[kashgari.generators.CorpusGenerator]) → None
compile_model(loss: Any = None, optimizer: Any = None, metrics: Any = None, **kwargs) → None[source]

Configures the model for training. Delegates to tf.keras.Model.compile() to compile the model with a custom loss, optimizer, and metrics.

Examples

>>> model = BiLSTM_Model()
>>> # Build model with corpus
>>> model.build_model(train_x, train_y)
>>> # Compile model with custom loss, optimizer and metrics
>>> model.compile_model(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
Parameters:
  • loss – name of objective function, objective function or tf.keras.losses.Loss instance.
  • optimizer – name of optimizer or optimizer instance.
  • metrics – list of metrics to be evaluated by the model during training and testing.
  • kwargs – additional parameters passed to tf.keras.Model.compile().
classmethod default_hyper_parameters() → Dict[str, Dict[str, Any]][source]

The default hyper-parameters of the model as a dict; all models must implement this method.

You can easily override a model's hyper-parameters.

For example, to change the LSTM units in BiLSTM_Model from 128 to 32:

>>> from kashgari.tasks.classification import BiLSTM_Model
>>> hyper = BiLSTM_Model.default_hyper_parameters()
>>> print(hyper)
{'layer_bi_lstm': {'units': 128, 'return_sequences': False}, 'layer_output': {}}
>>> hyper['layer_bi_lstm']['units'] = 32
>>> model = BiLSTM_Model(hyper_parameters=hyper)
Returns: hyper-parameters dict
evaluate(x_data: List[List[str]], y_data: List[List[str]], batch_size: int = 32, digits: int = 4, truncating: bool = False) → Dict[KT, VT]

Build a text report showing the main labeling metrics.

Parameters:
  • x_data – evaluation feature data
  • y_data – evaluation label data
  • batch_size – number of samples per batch, defaults to 32
  • digits – number of digits used when formatting floating point values in the report
  • truncating – remove tokens from sequences longer than model.embedding.sequence_length
Returns:

A report dict

Example

>>> from kashgari.tasks.labeling import BiGRU_Model
>>> model = BiGRU_Model()
>>> model.fit(train_x, train_y, valid_x, valid_y)
>>> report = model.evaluate(test_x, test_y)
           precision    recall  f1-score   support
    <BLANKLINE>
          ORG     0.0665    0.1108    0.0831       984
          LOC     0.1870    0.2086    0.1972      1951
          PER     0.1685    0.0882    0.1158       884
    <BLANKLINE>
    micro avg     0.1384    0.1555    0.1465      3819
    macro avg     0.1516    0.1555    0.1490      3819
    <BLANKLINE>
>>> print(report)
    {
     'f1-score': 0.14895159934887792,
     'precision': 0.1516294012813676,
     'recall': 0.15553809897879026,
     'support': 3819,
     'detail': {'LOC': {'f1-score': 0.19718992248062014,
                        'precision': 0.18695452457510336,
                        'recall': 0.20861096873398258,
                        'support': 1951},
                'ORG': {'f1-score': 0.08307926829268293,
                        'precision': 0.06646341463414634,
                        'recall': 0.11077235772357724,
                        'support': 984},
                'PER': {'f1-score': 0.11581291759465479,
                        'precision': 0.16846652267818574,
                        'recall': 0.08823529411764706,
                        'support': 884}},
    }
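
Since evaluate returns this dict, per-label scores can also be read programmatically, e.g. to gate a training pipeline on a minimum F1 (the threshold below is illustrative):

>>> report = model.evaluate(test_x, test_y)
>>> for label, scores in report['detail'].items():
...     print(label, round(scores['f1-score'], 4))
>>> assert report['f1-score'] > 0.1, 'model below F1 threshold'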
fit(x_train: List[List[str]], y_train: List[List[str]], x_validate: List[List[str]] = None, y_validate: List[List[str]] = None, batch_size: int = 64, epochs: int = 5, callbacks: List[tensorflow.python.keras.callbacks.Callback] = None, fit_kwargs: Dict[KT, VT] = None) → tensorflow.python.keras.callbacks.History

Trains the model for a given number of epochs on the given datasets.

Parameters:
  • x_train – Array of train feature data (if the model has a single input), or tuple of train feature data array (if the model has multiple inputs)
  • y_train – Array of train label data
  • x_validate – Array of validation feature data (if the model has a single input), or tuple of validation feature data array (if the model has multiple inputs)
  • y_validate – Array of validation label data
  • batch_size – Number of samples per gradient update, default to 64.
  • epochs – Number of epochs to train the model. An epoch is an iteration over the entire x and y data provided.
  • callbacks – list of tf.keras.callbacks.Callback instances to apply during training. See tf.keras.callbacks.
  • fit_kwargs – additional arguments passed to tf.keras.Model.fit()
Returns:

A tf.keras.callbacks.History object. Its History.history attribute is a record of training loss values and metric values at successive epochs, as well as validation loss and metric values (if applicable).

fit_generator(train_sample_gen: kashgari.generators.CorpusGenerator, valid_sample_gen: kashgari.generators.CorpusGenerator = None, batch_size: int = 64, epochs: int = 5, callbacks: List[tf.keras.callbacks.Callback] = None, fit_kwargs: Dict[KT, VT] = None) → tensorflow.python.keras.callbacks.History

Trains the model for a given number of epochs with given data generator.

The data generators must be subclasses of CorpusGenerator.

Parameters:
  • train_sample_gen – train data generator.
  • valid_sample_gen – valid data generator.
  • batch_size – Number of samples per gradient update, default to 64.
  • epochs – Number of epochs to train the model. An epoch is an iteration over the entire x and y data provided.
  • callbacks – list of tf.keras.callbacks.Callback instances to apply during training. See tf.keras.callbacks.
  • fit_kwargs – additional arguments passed to tf.keras.Model.fit()
Returns:

A tf.keras.callbacks.History object. Its History.history attribute is a record of training loss values and metric values at successive epochs, as well as validation loss and metric values (if applicable).

classmethod load_model(model_path: str) → Union[ABCLabelingModel, ABCClassificationModel]
predict(x_data: List[List[str]], *, batch_size: int = 32, truncating: bool = False, predict_kwargs: Dict[KT, VT] = None) → List[List[str]]

Generates output predictions for the input samples.

Computation is done in batches.

Parameters:
  • x_data – the input data, as a list of token sequences.
  • batch_size – integer, defaults to 32.
  • truncating – remove tokens from sequences longer than model.embedding.sequence_length
  • predict_kwargs – additional arguments passed to tf.keras.Model.predict()
Returns:

A list of predicted label sequences.

predict_entities(x_data: List[List[str]], batch_size: int = 32, join_chunk: str = ' ', truncating: bool = False, predict_kwargs: Dict[KT, VT] = None) → List[Dict[KT, VT]]

Gets entities from the input sequences.

Parameters:
  • x_data – the input data, as a list of token sequences.
  • batch_size – integer, defaults to 32.
  • truncating – remove tokens from sequences longer than model.embedding.sequence_length
  • join_chunk – delimiter used to join each entity's tokens back into a text string; pass False to keep the raw token list
  • predict_kwargs – additional arguments passed to tf.keras.Model.predict()
Returns:

A list of entity dicts.

Return type:

list

save(model_path: str) → str

Save the model.

Parameters:
  • model_path – target path

to_dict() → Dict[str, Any]

CNN LSTM Model

class kashgari.tasks.labeling.CNN_LSTM_Model(embedding: kashgari.embeddings.abc_embedding.ABCEmbedding = None, sequence_length: int = None, hyper_parameters: Dict[str, Dict[str, Any]] = None)[source]

Bases: kashgari.tasks.labeling.abc_model.ABCLabelingModel

__init__(embedding: kashgari.embeddings.abc_embedding.ABCEmbedding = None, sequence_length: int = None, hyper_parameters: Dict[str, Dict[str, Any]] = None)
Parameters:
  • embedding – embedding object
  • sequence_length – target sequence length
  • hyper_parameters – hyper_parameters to overwrite
build_model(x_data: List[List[str]], y_data: List[List[str]]) → None

Build the model with x_data and y_data.

This function sets up a CorpusGenerator,
then calls ABCLabelingModel.build_model_generator() to prepare the processor and the model.
Parameters:
  • x_data – train feature data
  • y_data – train label data
build_model_arc() → None[source]
build_model_generator(generators: List[kashgari.generators.CorpusGenerator]) → None
compile_model(loss: Any = None, optimizer: Any = None, metrics: Any = None, **kwargs) → None

Configures the model for training. Delegates to tf.keras.Model.compile() to compile the model with a custom loss, optimizer, and metrics.

Examples

>>> model = BiLSTM_Model()
>>> # Build model with corpus
>>> model.build_model(train_x, train_y)
>>> # Compile model with custom loss, optimizer and metrics
>>> model.compile_model(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
Parameters:
  • loss – name of objective function, objective function or tf.keras.losses.Loss instance.
  • optimizer – name of optimizer or optimizer instance.
  • metrics – list of metrics to be evaluated by the model during training and testing.
  • kwargs – additional parameters passed to tf.keras.Model.compile().
classmethod default_hyper_parameters() → Dict[str, Dict[str, Any]][source]

The default hyper-parameters of the model as a dict; all models must implement this method.

You can easily override a model's hyper-parameters.

For example, to change the LSTM units in BiLSTM_Model from 128 to 32:

>>> from kashgari.tasks.classification import BiLSTM_Model
>>> hyper = BiLSTM_Model.default_hyper_parameters()
>>> print(hyper)
{'layer_bi_lstm': {'units': 128, 'return_sequences': False}, 'layer_output': {}}
>>> hyper['layer_bi_lstm']['units'] = 32
>>> model = BiLSTM_Model(hyper_parameters=hyper)
Returns: hyper-parameters dict
evaluate(x_data: List[List[str]], y_data: List[List[str]], batch_size: int = 32, digits: int = 4, truncating: bool = False) → Dict[KT, VT]

Build a text report showing the main labeling metrics.

Parameters:
  • x_data – evaluation feature data
  • y_data – evaluation label data
  • batch_size – number of samples per batch, defaults to 32
  • digits – number of digits used when formatting floating point values in the report
  • truncating – remove tokens from sequences longer than model.embedding.sequence_length
Returns:

A report dict

Example

>>> from kashgari.tasks.labeling import BiGRU_Model
>>> model = BiGRU_Model()
>>> model.fit(train_x, train_y, valid_x, valid_y)
>>> report = model.evaluate(test_x, test_y)
           precision    recall  f1-score   support
    <BLANKLINE>
          ORG     0.0665    0.1108    0.0831       984
          LOC     0.1870    0.2086    0.1972      1951
          PER     0.1685    0.0882    0.1158       884
    <BLANKLINE>
    micro avg     0.1384    0.1555    0.1465      3819
    macro avg     0.1516    0.1555    0.1490      3819
    <BLANKLINE>
>>> print(report)
    {
     'f1-score': 0.14895159934887792,
     'precision': 0.1516294012813676,
     'recall': 0.15553809897879026,
     'support': 3819,
     'detail': {'LOC': {'f1-score': 0.19718992248062014,
                        'precision': 0.18695452457510336,
                        'recall': 0.20861096873398258,
                        'support': 1951},
                'ORG': {'f1-score': 0.08307926829268293,
                        'precision': 0.06646341463414634,
                        'recall': 0.11077235772357724,
                        'support': 984},
                'PER': {'f1-score': 0.11581291759465479,
                        'precision': 0.16846652267818574,
                        'recall': 0.08823529411764706,
                        'support': 884}},
    }
fit(x_train: List[List[str]], y_train: List[List[str]], x_validate: List[List[str]] = None, y_validate: List[List[str]] = None, batch_size: int = 64, epochs: int = 5, callbacks: List[tensorflow.python.keras.callbacks.Callback] = None, fit_kwargs: Dict[KT, VT] = None) → tensorflow.python.keras.callbacks.History

Trains the model for a given number of epochs on the given datasets.

Parameters:
  • x_train – Array of train feature data (if the model has a single input), or tuple of train feature data array (if the model has multiple inputs)
  • y_train – Array of train label data
  • x_validate – Array of validation feature data (if the model has a single input), or tuple of validation feature data array (if the model has multiple inputs)
  • y_validate – Array of validation label data
  • batch_size – Number of samples per gradient update, default to 64.
  • epochs – Number of epochs to train the model. An epoch is an iteration over the entire x and y data provided.
  • callbacks – list of tf.keras.callbacks.Callback instances to apply during training. See tf.keras.callbacks.
  • fit_kwargs – additional arguments passed to tf.keras.Model.fit()
Returns:

A tf.keras.callbacks.History object. Its History.history attribute is a record of training loss values and metric values at successive epochs, as well as validation loss and metric values (if applicable).

fit_generator(train_sample_gen: kashgari.generators.CorpusGenerator, valid_sample_gen: kashgari.generators.CorpusGenerator = None, batch_size: int = 64, epochs: int = 5, callbacks: List[tf.keras.callbacks.Callback] = None, fit_kwargs: Dict[KT, VT] = None) → tensorflow.python.keras.callbacks.History

Trains the model for a given number of epochs with given data generator.

The data generators must be subclasses of CorpusGenerator.

Parameters:
  • train_sample_gen – train data generator.
  • valid_sample_gen – valid data generator.
  • batch_size – Number of samples per gradient update, default to 64.
  • epochs – Number of epochs to train the model. An epoch is an iteration over the entire x and y data provided.
  • callbacks – list of tf.keras.callbacks.Callback instances to apply during training. See tf.keras.callbacks.
  • fit_kwargs – additional arguments passed to tf.keras.Model.fit()
Returns:

A tf.keras.callbacks.History object. Its History.history attribute is a record of training loss values and metric values at successive epochs, as well as validation loss and metric values (if applicable).

classmethod load_model(model_path: str) → Union[ABCLabelingModel, ABCClassificationModel]
predict(x_data: List[List[str]], *, batch_size: int = 32, truncating: bool = False, predict_kwargs: Dict[KT, VT] = None) → List[List[str]]

Generates output predictions for the input samples.

Computation is done in batches.

Parameters:
  • x_data – the input data, as a list of token sequences.
  • batch_size – integer, defaults to 32.
  • truncating – remove tokens from sequences longer than model.embedding.sequence_length
  • predict_kwargs – additional arguments passed to tf.keras.Model.predict()
Returns:

A list of predicted label sequences.
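
Example

A sketch of predicting on input longer than the model's fixed sequence length; with truncating=True the overlong tail is removed before prediction, per the truncating parameter above.

>>> long_tokens = [['token'] * 500]  # hypothetical input longer than sequence_length
>>> labels = model.predict(long_tokens, truncating=True, batch_size=8)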

predict_entities(x_data: List[List[str]], batch_size: int = 32, join_chunk: str = ' ', truncating: bool = False, predict_kwargs: Dict[KT, VT] = None) → List[Dict[KT, VT]]

Gets entities from the input sequences.

Parameters:
  • x_data – the input data, as a list of token sequences.
  • batch_size – integer, defaults to 32.
  • truncating – remove tokens from sequences longer than model.embedding.sequence_length
  • join_chunk – delimiter used to join each entity's tokens back into a text string; pass False to keep the raw token list
  • predict_kwargs – additional arguments passed to tf.keras.Model.predict()
Returns:

A list of entity dicts.

Return type:

list

save(model_path: str) → str

Save the model.

Parameters:
  • model_path – target path

to_dict() → Dict[str, Any]