-rw-r--r--   .gitlab-ci.yml                      2
-rw-r--r--   README.md                           2
-rwxr-xr-x   bin/analyze-kconfig.py              2
-rw-r--r--   doc/analysis-nfp.md                 2
-rw-r--r--   doc/model-visual.md                 2
-rw-r--r--   doc/modeling-method.md              8
-rwxr-xr-x   examples/busybox.sh                 2
-rwxr-xr-x   examples/explore-and-model-static   8
-rw-r--r--   lib/cli.py                          4
-rw-r--r--   lib/functions.py                   40
-rw-r--r--   lib/parameters.py                  14
-rw-r--r--   lib/utils.py                       12
12 files changed, 50 insertions, 48 deletions
diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index ff053d4..3309d76 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -58,7 +58,7 @@ make_model:
- DFATOOL_DTREE_IGNORE_IRRELEVANT_PARAMS=0 bin/analyze-kconfig.py multipass.kconfig multipass.json.xz --export-webconf models/multipass-rmt.json
- wget -q https://ess.cs.uos.de/.private/dfatool/x264.json.xz https://ess.cs.uos.de/.private/dfatool/x264.kconfig https://ess.cs.uos.de/.private/dfatool/x264.nfpkeys.json
- mv x264.nfpkeys.json nfpkeys.json
- - DFATOOL_DTREE_SKLEARN_CART=1 DFATOOL_PARAM_CATEGORIAL_TO_SCALAR=1 bin/analyze-kconfig.py x264.kconfig x264.json.xz --export-webconf models/x264-cart.json
+ - DFATOOL_DTREE_SKLEARN_CART=1 DFATOOL_PARAM_CATEGORICAL_TO_SCALAR=1 bin/analyze-kconfig.py x264.kconfig x264.json.xz --export-webconf models/x264-cart.json
- DFATOOL_DTREE_IGNORE_IRRELEVANT_PARAMS=0 bin/analyze-kconfig.py x264.kconfig x264.json.xz --export-webconf models/x264-rmt.json
artifacts:
paths:
diff --git a/README.md b/README.md
index 1408261..02d2188 100644
--- a/README.md
+++ b/README.md
@@ -142,7 +142,7 @@ The following variables may be set to alter the behaviour of dfatool components.
| `DFATOOL_DTREE_IGNORE_IRRELEVANT_PARAMS` | **0**, 1 | Ignore parameters deemed irrelevant by stddev heuristic during regression tree generation. Use with caution. |
| `DFATOOL_PARAM_RELEVANCE_THRESHOLD` | 0 .. **0.5** .. 1 | Threshold for relevant parameter detection: parameter *i* is relevant if mean standard deviation (data partitioned by all parameters) / mean standard deviation (data partition by all parameters but *i*) is less than threshold |
| `DFATOOL_DTREE_LOSS_IGNORE_SCALAR` | **0**, 1 | Ignore scalar parameters when computing the loss for split node candidates. Instead of computing the loss of a single partition for each `x_i == j`, compute the loss of partitions for `x_i == j` in which non-scalar parameters vary and scalar parameters are constant. This way, scalar parameters do not affect the decision about which non-scalar parameter to use for splitting. |
-| `DFATOOL_PARAM_CATEGORIAL_TO_SCALAR` | **0**, 1 | Some models (e.g. FOL, sklearn CART, XGBoost) do not support categorial parameters. Ignore them (0) or convert them to scalar indexes (1). Conversion uses lexical order. |
+| `DFATOOL_PARAM_CATEGORICAL_TO_SCALAR` | **0**, 1 | Some models (e.g. FOL, sklearn CART, XGBoost) do not support categorical parameters. Ignore them (0) or convert them to scalar indexes (1). Conversion uses lexical order. |
| `DFATOOL_FIT_FOL` | **0**, 1 | Build a first-order linear function (i.e., a * param1 + b * param2 + ...) instead of more complex functions or tree structures. Must not be combined with `--force-tree`. |
| `DFATOOL_FOL_SECOND_ORDER` | **0**, 1 | Add second-order components (interaction of feature pairs) to first-order linear function. |
diff --git a/bin/analyze-kconfig.py b/bin/analyze-kconfig.py
index 1da926b..44b7e24 100755
--- a/bin/analyze-kconfig.py
+++ b/bin/analyze-kconfig.py
@@ -59,7 +59,7 @@ def main():
parser.add_argument(
"--boolean-parameters",
action="store_true",
- help="Use boolean (not categorial) parameters when building the NFP model",
+ help="Use boolean (not categorical) parameters when building the NFP model",
)
parser.add_argument(
"--show-failing-symbols",
diff --git a/doc/analysis-nfp.md b/doc/analysis-nfp.md
index cec5ad0..877ac2a 100644
--- a/doc/analysis-nfp.md
+++ b/doc/analysis-nfp.md
@@ -8,7 +8,7 @@ Classification and Regression Trees (CART) are capable of generating accurate mo
Hence, after loading a CART model into kconfig-webconf, only a small subset of busybox features will be annotated with NFP deltas.
```
-DFATOOL_DTREE_SKLEARN_CART=1 DFATOOL_PARAM_CATEGORIAL_TO_SCALAR=1 DFATOOL_KCONF_WITH_CHOICE_NODES=0 .../dfatool/bin/analyze-kconfig.py --export-webconf busybox.json --force-tree ../busybox-1.35.0/Kconfig .
+DFATOOL_DTREE_SKLEARN_CART=1 DFATOOL_PARAM_CATEGORICAL_TO_SCALAR=1 DFATOOL_KCONF_WITH_CHOICE_NODES=0 .../dfatool/bin/analyze-kconfig.py --export-webconf busybox.json --force-tree ../busybox-1.35.0/Kconfig .
```
Refer to the [kconfig-webconf README](https://ess.cs.uos.de/git/software/kconfig-webconf/-/blob/master/README.md#user-content-performance-aware-configuration) for details on using the generated model.
diff --git a/doc/model-visual.md b/doc/model-visual.md
index 2fdbd29..2bc38b4 100644
--- a/doc/model-visual.md
+++ b/doc/model-visual.md
@@ -48,7 +48,7 @@ The model for `param[paramIndex] <= threshold` is located in `left`, the model
for `param[paramIndex] > threshold` is located in `right`. `value` is a
static model that serves as fall-back if `param[paramIndex]` is undefined.
-### RMT categorial split node
+### RMT categorical split node
```
{
diff --git a/doc/modeling-method.md b/doc/modeling-method.md
index 27cb334..e4865d9 100644
--- a/doc/modeling-method.md
+++ b/doc/modeling-method.md
@@ -6,7 +6,7 @@ Enable these with `DFATOOL_DTREE_SKLEARN_CART=1` and `--force-tree`.
### Related Options
-* `DFATOOL_PARAM_CATEGORIAL_TO_SCALAR=1` converts categorial parameters (which are not supported by CART) to numeric ones.
+* `DFATOOL_PARAM_CATEGORICAL_TO_SCALAR=1` converts categorical parameters (which are not supported by CART) to numeric ones.
## XGB (Gradient-Boosted Forests / eXtreme Gradient boosting)
@@ -15,7 +15,7 @@ You should also specify `DFATOOL_XGB_N_ESTIMATORS`, `DFATOOL_XGB_MAX_DEPTH`, and
### Related Options
-* `DFATOOL_PARAM_CATEGORIAL_TO_SCALAR=1` converts categorial parameters (which are not supported by XGB) to numeric ones.
+* `DFATOOL_PARAM_CATEGORICAL_TO_SCALAR=1` converts categorical parameters (which are not supported by XGB) to numeric ones.
* Anything prefixed with `DFATOOL_XGB_`.
## LMT (Linear Model Trees)
@@ -27,7 +27,7 @@ They always use a maximum depth of 20.
See the [LinearTreeRegressor documentation](lib/lineartree/lineartree.py) for details on training hyper-parameters.
-* `DFATOOL_PARAM_CATEGORIAL_TO_SCALAR=1` converts categorial parameters (which are not supported by LMT) to numeric ones.
+* `DFATOOL_PARAM_CATEGORICAL_TO_SCALAR=1` converts categorical parameters (which are not supported by LMT) to numeric ones.
* `DFATOOL_LMT_MAX_DEPTH`
* `DFATOOL_LMT_MIN_SAMPLES_SPLIT`
* `DFATOOL_LMT_MIN_SAMPLES_LEAF`
@@ -47,7 +47,7 @@ All of these are valid regression model trees.
* `DFATOOL_DTREE_IGNORE_IRRELEVANT_PARAMS=0` disables the relevant parameter detection heuristic when building the tree structure. By default, irrelevant parameters cannot end up as decision nodes.
* `DFATOOL_FIT_LINEAR_ONLY=1` makes RMT behave more like LMT by only considering linear functions in leaf nodes.
* `DFATOOL_FIT_FOL=1`
-* `DFATOOL_PARAM_CATEGORIAL_TO_SCALAR=1`
+* `DFATOOL_PARAM_CATEGORICAL_TO_SCALAR=1`
* `DFATOOL_ULS_SKIP_CODEPENDENT_CHECK=1`
* `DFATOOL_REGRESSION_SAFE_FUNCTIONS=1`
diff --git a/examples/busybox.sh b/examples/busybox.sh
index a53ca3f..f9a1a93 100755
--- a/examples/busybox.sh
+++ b/examples/busybox.sh
@@ -105,7 +105,7 @@ Once everything is done, you have various options for generating a kconfig-webco
To do so, call bin/analyze-kconfig.py from the data directory.
For example, to generate a CART:
-> DFATOOL_DTREE_SKLEARN_CART=1 DFATOOL_PARAM_CATEGORIAL_TO_SCALAR=1 DFATOOL_KCONF_WITH_CHOICE_NODES=0 ~/var/ess/aemr/dfatool/bin/analyze-kconfig.py --force-tree --skip-param-stats --export-webconf /tmp/busybox-cart.json ../busybox-1.35.0/Kconfig .
+> DFATOOL_DTREE_SKLEARN_CART=1 DFATOOL_PARAM_CATEGORICAL_TO_SCALAR=1 DFATOOL_KCONF_WITH_CHOICE_NODES=0 ~/var/ess/aemr/dfatool/bin/analyze-kconfig.py --force-tree --skip-param-stats --export-webconf /tmp/busybox-cart.json ../busybox-1.35.0/Kconfig .
By adding the option "--cross-validation kfold:10", you can determine the model prediction error.
diff --git a/examples/explore-and-model-static b/examples/explore-and-model-static
index 3dccc3a..f2caf1a 100755
--- a/examples/explore-and-model-static
+++ b/examples/explore-and-model-static
@@ -15,10 +15,10 @@ set -ex
DFATOOL_DTREE_IGNORE_IRRELEVANT_PARAMS=0 DFATOOL_KCONF_WITH_CHOICE_NODES=0 ../bin/analyze-kconfig.py --export-webconf ../models/example-static-rmt-b.json --export-raw-predictions ../models/example-static-rmt-b-eval.json ../examples/kconfig-static/Kconfig .
DFATOOL_DTREE_IGNORE_IRRELEVANT_PARAMS=0 DFATOOL_KCONF_WITH_CHOICE_NODES=1 ../bin/analyze-kconfig.py --export-webconf ../models/example-static-rmt-nb.json --export-raw-predictions ../models/example-static-rmt-nb-eval.json ../examples/kconfig-static/Kconfig .
-DFATOOL_DTREE_SKLEARN_CART=1 DFATOOL_PARAM_CATEGORIAL_TO_SCALAR=1 DFATOOL_KCONF_WITH_CHOICE_NODES=0 ../bin/analyze-kconfig.py --export-webconf ../models/example-static-cart-b.json --export-raw-predictions ../models/example-static-cart-b-eval.json ../examples/kconfig-static/Kconfig .
-DFATOOL_DTREE_SKLEARN_CART=1 DFATOOL_PARAM_CATEGORIAL_TO_SCALAR=1 DFATOOL_KCONF_WITH_CHOICE_NODES=1 ../bin/analyze-kconfig.py --export-webconf ../models/example-static-cart-nb.json --export-raw-predictions ../models/example-static-cart-nb-eval.json ../examples/kconfig-static/Kconfig .
+DFATOOL_DTREE_SKLEARN_CART=1 DFATOOL_PARAM_CATEGORICAL_TO_SCALAR=1 DFATOOL_KCONF_WITH_CHOICE_NODES=0 ../bin/analyze-kconfig.py --export-webconf ../models/example-static-cart-b.json --export-raw-predictions ../models/example-static-cart-b-eval.json ../examples/kconfig-static/Kconfig .
+DFATOOL_DTREE_SKLEARN_CART=1 DFATOOL_PARAM_CATEGORICAL_TO_SCALAR=1 DFATOOL_KCONF_WITH_CHOICE_NODES=1 ../bin/analyze-kconfig.py --export-webconf ../models/example-static-cart-nb.json --export-raw-predictions ../models/example-static-cart-nb-eval.json ../examples/kconfig-static/Kconfig .
-DFATOOL_DTREE_IGNORE_IRRELEVANT_PARAMS=0 DFATOOL_PARAM_CATEGORIAL_TO_SCALAR=1 DFATOOL_FIT_FOL=1 DFATOOL_KCONF_WITH_CHOICE_NODES=0 ../bin/analyze-kconfig.py --export-webconf ../models/example-static-fol-b.json --export-raw-predictions ../models/example-static-fol-b-eval.json ../examples/kconfig-static/Kconfig .
-DFATOOL_DTREE_IGNORE_IRRELEVANT_PARAMS=0 DFATOOL_PARAM_CATEGORIAL_TO_SCALAR=1 DFATOOL_FIT_FOL=1 DFATOOL_KCONF_WITH_CHOICE_NODES=1 ../bin/analyze-kconfig.py --export-webconf ../models/example-static-fol-nb.json --export-raw-predictions ../models/example-static-fol-nb-eval.json ../examples/kconfig-static/Kconfig .
+DFATOOL_DTREE_IGNORE_IRRELEVANT_PARAMS=0 DFATOOL_PARAM_CATEGORICAL_TO_SCALAR=1 DFATOOL_FIT_FOL=1 DFATOOL_KCONF_WITH_CHOICE_NODES=0 ../bin/analyze-kconfig.py --export-webconf ../models/example-static-fol-b.json --export-raw-predictions ../models/example-static-fol-b-eval.json ../examples/kconfig-static/Kconfig .
+DFATOOL_DTREE_IGNORE_IRRELEVANT_PARAMS=0 DFATOOL_PARAM_CATEGORICAL_TO_SCALAR=1 DFATOOL_FIT_FOL=1 DFATOOL_KCONF_WITH_CHOICE_NODES=1 ../bin/analyze-kconfig.py --export-webconf ../models/example-static-fol-nb.json --export-raw-predictions ../models/example-static-fol-nb-eval.json ../examples/kconfig-static/Kconfig .
cp ../examples/kconfig-static/Kconfig ../models/example-static.kconfig
diff --git a/lib/cli.py b/lib/cli.py
index 3da6fce..f289d4a 100644
--- a/lib/cli.py
+++ b/lib/cli.py
@@ -576,7 +576,7 @@ def add_standard_arguments(parser):
)
parser.add_argument(
"--param-shift",
- metavar="<key>=<+|-|*|/><value>|none-to-0|categorial;...",
+ metavar="<key>=<+|-|*|/><value>|none-to-0|categorical;...",
type=str,
help="Adjust parameter values before passing them to model generation",
)
@@ -695,7 +695,7 @@ def parse_shift_function(param_name, param_shift):
elif param_shift.startswith("/"):
param_shift_value = float(param_shift[1:])
return lambda p: p / param_shift_value
- elif param_shift == "categorial":
+ elif param_shift == "categorical":
return lambda p: "=" + str(p)
elif param_shift == "none-to-0":
return lambda p: p or 0
diff --git a/lib/functions.py b/lib/functions.py
index 6366f0a..4940956 100644
--- a/lib/functions.py
+++ b/lib/functions.py
@@ -590,7 +590,7 @@ class SKLearnRegressionFunction(ModelFunction):
always_predictable = True
has_eval_arr = True
- def __init__(self, value, regressor, categorial_to_index, ignore_index, **kwargs):
+ def __init__(self, value, regressor, categorical_to_index, ignore_index, **kwargs):
# Needed for JSON export
self.param_names = kwargs.pop("param_names")
self.arg_count = kwargs.pop("arg_count")
@@ -601,7 +601,7 @@ class SKLearnRegressionFunction(ModelFunction):
super().__init__(value, **kwargs)
self.regressor = regressor
- self.categorial_to_index = categorial_to_index
+ self.categorical_to_index = categorical_to_index
self.ignore_index = ignore_index
# SKLearnRegressionFunction descendants use self.param_names \ self.ignore_index as features.
@@ -649,15 +649,15 @@ class SKLearnRegressionFunction(ModelFunction):
actual_param_list = list()
for i, param in enumerate(param_list):
if not self.ignore_index[i]:
- if i in self.categorial_to_index:
+ if i in self.categorical_to_index:
try:
- actual_param_list.append(self.categorial_to_index[i][param])
+ actual_param_list.append(self.categorical_to_index[i][param])
except KeyError:
# param was not part of training data. substitute an unused scalar.
# Note that all param values which were not part of training data map to the same scalar this way.
# This should be harmless.
actual_param_list.append(
- max(self.categorial_to_index[i].values()) + 1
+ max(self.categorical_to_index[i].values()) + 1
)
else:
actual_param_list.append(int(param))
@@ -672,15 +672,17 @@ class SKLearnRegressionFunction(ModelFunction):
actual_param_list = list()
for i, param in enumerate(param_tuple):
if not self.ignore_index[i]:
- if i in self.categorial_to_index:
+ if i in self.categorical_to_index:
try:
- actual_param_list.append(self.categorial_to_index[i][param])
+ actual_param_list.append(
+ self.categorical_to_index[i][param]
+ )
except KeyError:
# param was not part of training data. substitute an unused scalar.
# Note that all param values which were not part of training data map to the same scalar this way.
# This should be harmless.
actual_param_list.append(
- max(self.categorial_to_index[i].values()) + 1
+ max(self.categorical_to_index[i].values()) + 1
)
else:
actual_param_list.append(int(param))
@@ -691,7 +693,7 @@ class SKLearnRegressionFunction(ModelFunction):
def to_json(self, **kwargs):
ret = super().to_json(**kwargs)
- # Note: categorial_to_index uses param_names, not feature_names
+ # Note: categorical_to_index uses param_names, not feature_names
param_names = self.param_names + list(
map(
lambda i: f"arg{i-len(self.param_names)}",
@@ -704,7 +706,7 @@ class SKLearnRegressionFunction(ModelFunction):
ret["paramValueToIndex"] = dict(
map(
lambda kv: (param_names[kv[0]], kv[1]),
- self.categorial_to_index.items(),
+ self.categorical_to_index.items(),
)
)
@@ -958,17 +960,17 @@ class FOLFunction(ModelFunction):
self.fit_success = False
def fit(self, param_values, data, ignore_param_indexes=None):
- self.categorial_to_scalar = bool(
- int(os.getenv("DFATOOL_PARAM_CATEGORIAL_TO_SCALAR", "0"))
+ self.categorical_to_scalar = bool(
+ int(os.getenv("DFATOOL_PARAM_CATEGORICAL_TO_SCALAR", "0"))
)
second_order = int(os.getenv("DFATOOL_FOL_SECOND_ORDER", "0"))
- fit_parameters, categorial_to_index, ignore_index = param_to_ndarray(
+ fit_parameters, categorical_to_index, ignore_index = param_to_ndarray(
param_values,
with_nan=False,
- categorial_to_scalar=self.categorial_to_scalar,
+ categorical_to_scalar=self.categorical_to_scalar,
ignore_indexes=ignore_param_indexes,
)
- self.categorial_to_index = categorial_to_index
+ self.categorical_to_index = categorical_to_index
self.ignore_index = ignore_index
fit_parameters = fit_parameters.swapaxes(0, 1)
@@ -1052,15 +1054,15 @@ class FOLFunction(ModelFunction):
actual_param_list = list()
for i, param in enumerate(param_list):
if not self.ignore_index[i]:
- if i in self.categorial_to_index:
+ if i in self.categorical_to_index:
try:
- actual_param_list.append(self.categorial_to_index[i][param])
+ actual_param_list.append(self.categorical_to_index[i][param])
except KeyError:
# param was not part of training data. substitute an unused scalar.
# Note that all param values which were not part of training data map to the same scalar this way.
# This should be harmless.
actual_param_list.append(
- max(self.categorial_to_index[i].values()) + 1
+ max(self.categorical_to_index[i].values()) + 1
)
else:
actual_param_list.append(int(param))
@@ -1105,7 +1107,7 @@ class FOLFunction(ModelFunction):
def hyper_to_dref(self):
return {
- "fol/categorial to scalar": int(self.categorial_to_scalar),
+ "fol/categorical to scalar": int(self.categorical_to_scalar),
}
diff --git a/lib/parameters.py b/lib/parameters.py
index 2e3878f..f367eb9 100644
--- a/lib/parameters.py
+++ b/lib/parameters.py
@@ -918,7 +918,7 @@ class ModelAttribute:
:param data: Measurements. [data 1, data 2, data 3, ...]
:param with_function_leaves: Use fitted function sets to generate function leaves for scalar parameters
:param with_nonbinary_nodes: Allow non-binary nodes for enum and scalar parameters (i.e., nodes with more than two children)
- :param with_sklearn_cart: Use `sklearn.tree.DecisionTreeRegressor` CART implementation for tree generation. Does not support categorial (enum)
+ :param with_sklearn_cart: Use `sklearn.tree.DecisionTreeRegressor` CART implementation for tree generation. Does not support categorical (enum)
and sparse parameters. Both are ignored during fitting. All other options are ignored as well.
:param with_sklearn_decart: Use `sklearn.tree.DecisionTreeRegressor` CART implementation in DECART mode for tree generation. CART limitations
apply; additionaly, scalar parameters are ignored during fitting.
@@ -928,8 +928,8 @@ class ModelAttribute:
:returns: SplitFunction or StaticFunction
"""
- categorial_to_scalar = bool(
- int(os.getenv("DFATOOL_PARAM_CATEGORIAL_TO_SCALAR", "0"))
+ categorical_to_scalar = bool(
+ int(os.getenv("DFATOOL_PARAM_CATEGORICAL_TO_SCALAR", "0"))
)
if with_function_leaves is None:
with_function_leaves = bool(
@@ -969,13 +969,13 @@ class ModelAttribute:
fit_parameters, category_to_index, ignore_index = param_to_ndarray(
parameters,
with_nan=False,
- categorial_to_scalar=categorial_to_scalar,
+ categorical_to_scalar=categorical_to_scalar,
)
elif with_sklearn_decart:
fit_parameters, category_to_index, ignore_index = param_to_ndarray(
parameters,
with_nan=False,
- categorial_to_scalar=categorial_to_scalar,
+ categorical_to_scalar=categorical_to_scalar,
ignore_indexes=self.scalar_param_indexes,
)
if fit_parameters.shape[1] == 0:
@@ -1071,7 +1071,7 @@ class ModelAttribute:
reg_lambda=reg_lambda,
)
fit_parameters, category_to_index, ignore_index = param_to_ndarray(
- parameters, with_nan=False, categorial_to_scalar=categorial_to_scalar
+ parameters, with_nan=False, categorical_to_scalar=categorical_to_scalar
)
if fit_parameters.shape[1] == 0:
logger.warning(
@@ -1159,7 +1159,7 @@ class ModelAttribute:
criterion=criterion,
)
fit_parameters, category_to_index, ignore_index = param_to_ndarray(
- parameters, with_nan=False, categorial_to_scalar=categorial_to_scalar
+ parameters, with_nan=False, categorical_to_scalar=categorical_to_scalar
)
if fit_parameters.shape[1] == 0:
logger.warning(
diff --git a/lib/utils.py b/lib/utils.py
index d6cdfc5..c16e419 100644
--- a/lib/utils.py
+++ b/lib/utils.py
@@ -289,7 +289,7 @@ def partition_by_param(data, param_values, ignore_parameters=list()):
def param_to_ndarray(
- param_tuples, with_nan=True, categorial_to_scalar=False, ignore_indexes=list()
+ param_tuples, with_nan=True, categorical_to_scalar=False, ignore_indexes=list()
):
has_nan = dict()
has_non_numeric = dict()
@@ -297,7 +297,7 @@ def param_to_ndarray(
category_to_scalar = dict()
logger.debug(
- f"converting param_to_ndarray(with_nan={with_nan}, categorial_to_scalar={categorial_to_scalar}, ignore_indexes={ignore_indexes})"
+ f"converting param_to_ndarray(with_nan={with_nan}, categorical_to_scalar={categorical_to_scalar}, ignore_indexes={ignore_indexes})"
)
for param_tuple in param_tuples:
@@ -307,7 +307,7 @@ def param_to_ndarray(
has_nan[i] = True
else:
has_non_numeric[i] = True
- if categorial_to_scalar and param is not None:
+ if categorical_to_scalar and param is not None:
if not i in distinct_values:
distinct_values[i] = set()
distinct_values[i].add(param)
@@ -320,7 +320,7 @@ def param_to_ndarray(
ignore_index = dict()
for i in range(len(param_tuples[0])):
- if has_non_numeric.get(i, False) and not categorial_to_scalar:
+ if has_non_numeric.get(i, False) and not categorical_to_scalar:
ignore_index[i] = True
elif not with_nan and has_nan.get(i, False):
ignore_index[i] = True
@@ -337,7 +337,7 @@ def param_to_ndarray(
if not ignore_index[i]:
if i in category_to_scalar and not is_numeric(param):
ret_tuple.append(category_to_scalar[i][param])
- elif categorial_to_scalar:
+ elif categorical_to_scalar:
ret_tuple.append(soft_cast_int(param))
else:
ret_tuple.append(param)
@@ -357,7 +357,7 @@ def param_dict_to_list(param_dict, parameter_names, default=None):
def observations_enum_to_bool(observations: list, kconfig=False):
"""
- Convert enum / categorial observations to boolean-only ones.
+ Convert enum / categorical observations to boolean-only ones.
'observations' is altered in-place.
DEPRECATED.
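
For reference, the behaviour this commit's renamed variable controls — `DFATOOL_PARAM_CATEGORICAL_TO_SCALAR=1` converts categorical parameters to scalar indexes in lexical order, and unseen values map to one shared unused scalar (see the `KeyError` fallback in `lib/functions.py` above) — can be sketched as follows. This is an illustrative sketch, not dfatool's actual `param_to_ndarray` API; the function and variable names are hypothetical.

```python
def categorical_to_index(values):
    """Map each distinct categorical value to a scalar index, in lexical order."""
    distinct = sorted(set(v for v in values if v is not None), key=str)
    return {v: i for i, v in enumerate(distinct)}

# Training data with a categorical parameter (e.g. an architecture name):
mapping = categorical_to_index(["mips", "arm", "x86", "arm"])
# Lexical order yields {"arm": 0, "mips": 1, "x86": 2}.

# A value absent from training data substitutes one shared unused scalar
# (max index + 1), mirroring the KeyError fallback in the diff above:
idx = mapping.get("riscv", max(mapping.values()) + 1)
```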