AdditiveMODAModel

class AdditiveMODAModel(metricDefinitions: Map<MetricIfc, ValueFunctionIfc>, weights: Map<MetricIfc, Double> = makeEqualWeights(metricDefinitions.keys), name: String? = null) : MODAModel

Represents a multi-objective decision analysis (MODA) model that uses an additive model for the attribute valuation. The supplied weights must correspond to metrics within the model.
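The additive valuation can be sketched in plain Kotlin. This is an illustrative sketch of the concept, not the KSL API; the function and parameter names here are hypothetical. The overall value of an alternative is the weighted sum of the single-metric values produced by the value functions.

```kotlin
import kotlin.math.abs

// Sketch only: overall value = sum over metrics of weight * metric value.
fun additiveValue(
    values: Map<String, Double>,  // metric name -> value, each in [0, 1]
    weights: Map<String, Double>  // metric name -> weight, summing to 1.0
): Double {
    require(values.keys == weights.keys) { "Weights must correspond to the model's metrics" }
    return values.entries.sumOf { (metric, v) -> weights.getValue(metric) * v }
}

fun main() {
    // Equal weights over two metrics: overall value = 0.5 * 0.8 + 0.5 * 0.4
    val overall = additiveValue(
        values = mapOf("cost" to 0.8, "risk" to 0.4),
        weights = mapOf("cost" to 0.5, "risk" to 0.5)
    )
    println(abs(overall - 0.6) < 1e-12)  // true
}
```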

Parameters

metricDefinitions

the definition for each metric and the value function to apply for the metric

weights

the weights for each metric, by default they will be equal

Constructors

constructor(names: Set<String>)

Constructs a default additive MODA model. The supplied names are used to create default metrics using linear value functions with equal weighting.
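A linear value function of the kind this constructor creates can be sketched as follows. This is an assumed behavior for illustration (a bigger-is-better metric); `linearValue` and its parameters are hypothetical names, not KSL identifiers.

```kotlin
// Sketch only: map a raw score in the metric's domain [lower, upper]
// linearly onto the value scale [0, 1].
fun linearValue(score: Double, lower: Double, upper: Double): Double {
    require(upper > lower) { "The domain upper limit must exceed the lower limit" }
    require(score in lower..upper) { "The score must lie within the metric's domain" }
    return (score - lower) / (upper - lower)
}

fun main() {
    println(linearValue(5.0, 0.0, 10.0))  // 0.5: halfway across the domain
}
```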

constructor(metricDefinitions: Map<MetricIfc, ValueFunctionIfc>, weights: Map<MetricIfc, Double> = makeEqualWeights(metricDefinitions.keys), name: String? = null)

Types

object Companion

Properties


The list of alternatives within the model. The order of the alternatives is defined by the order entered into the map that was supplied by the defineAlternatives() function.


For rank based evaluation, this specifies the default parameter value for those methods that perform rank based evaluation calculations.

open override val id: Int
open override var label: String?

The list of metrics defined for the model. The order of the metrics is defined by the order entered into the map that was supplied by the defineMetrics() function.

open override val name: String

Functions

fun alternativeAverageRanking(sortByAvgRanking: Boolean = true, rankingMethod: Statistic.Companion.Ranking = defaultRankingMethod): List<Pair<String, Double>>

Returns the alternatives with the average of their observed ranks. The returned list of pairs (alternative, average rank) is ordered by average rank, smallest to largest.
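The averaging can be sketched as follows (assumed semantics; the names are hypothetical, not KSL identifiers): average each alternative's rank across the metrics, then sort ascending so the best (lowest) average rank comes first.

```kotlin
// Sketch only: alternative -> one observed rank per metric.
fun averageRanking(
    ranksByAlternative: Map<String, List<Double>>
): List<Pair<String, Double>> =
    ranksByAlternative
        .map { (name, ranks) -> name to ranks.average() }
        .sortedBy { it.second }

fun main() {
    val result = averageRanking(
        mapOf(
            "A" to listOf(1.0, 2.0),  // average rank 1.5
            "B" to listOf(3.0, 3.0),  // average rank 3.0
            "C" to listOf(2.0, 1.0)   // average rank 1.5
        )
    )
    println(result.last())  // (B, 3.0): the worst average rank sorts last
}
```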

fun alternativeFirstRankCounts(sortByCounts: Boolean = true, rankingMethod: Statistic.Companion.Ranking = defaultRankingMethod): List<Pair<String, Int>>

Returns the alternatives with the count of the number of times some metric ranked the alternative first based on the value scores.

fun alternativeFirstRankMetricFrequencies(sortByAvgRanking: Boolean = true, rankingMethod: Statistic.Companion.Ranking = defaultRankingMethod): Map<String, IntegerFrequency>

The alternatives that were ranked first by some metric along with the metric frequency distribution.

fun alternativeMetricRankFrequencies(sortByAvgRanking: Boolean = true, rankingMethod: Statistic.Companion.Ranking = defaultRankingMethod): Map<String, IntegerFrequency>

Collects the ranking frequencies across all metrics for each alternative.


Returns a list of OverallValueData, which holds the overall value results for each alternative: (id, alternativeName, overall value, first rank count, average ranking)


The alternatives and their rank based on largest to smallest multi-objective value.

fun alternativeRankFrequencyData(sortByAvgRanking: Boolean = true, rankingMethod: Statistic.Companion.Ranking = defaultRankingMethod): List<AlternativeRankFrequencyData>

Captures the alternative metric rank frequency data to a list.

fun alternativeRankingsAsDataFrame(firstColumnName: String = "Alternatives", rankingMethod: Statistic.Companion.Ranking = defaultRankingMethod): AnyFrame

Returns a data frame with the first column being the alternatives by name, a column of rank counts for each alternative, and a final column representing the average rank for the alternative. The parameter firstColumnName can be used to name the first column of the returned data frame. By default, the first column name is "Alternatives". The resulting data frame will be sorted by average rank column with lower value being preferred.

fun alternativeRanksAsDataFrame(firstColumnName: String = "Alternatives", rankingMethod: Statistic.Companion.Ranking = defaultRankingMethod): AnyFrame

Returns a data frame with the first column being the alternatives by name, a column of ranks for each metric for each alternative. The parameter firstColumnName can be used to name the first column of the returned data frame. By default, the first column name is "Alternatives". The metric ranking columns are labeled as "${metric.name}_Rank"

fun alternativeResultsAsDataFrame(firstColumnName: String = "Alternatives"): AnyFrame

Returns a data frame with the first column being the alternatives by name, a column of raw score values for each metric for each alternative, a column of values for each metric for each alternative, and a final column representing the overall value for the alternative. The parameter firstColumnName can be used to name the first column of the returned data frame. By default, the first column name is "Alternatives".


Returns a list of ScoreData, which holds each alternative-metric raw score combination: (id, alternativeName, scoreName, scoreValue)

fun alternativeScoresAsDataFrame(firstColumnName: String = "Alternatives"): AnyFrame

Returns a data frame with the first column being the alternatives by name and a column of raw score values for each metric for each alternative. The parameter firstColumnName can be used to name the first column of the returned data frame. By default, the first column name is "Alternatives".

fun alternativeValueData(rankingMethod: Statistic.Companion.Ranking = defaultRankingMethod): List<ValueData>

Returns a list of ValueData, which holds each alternative-metric value combination: (id, alternativeName, metricName, metricValue)

fun alternativeValuesAsDataFrame(firstColumnName: String = "Alternatives"): AnyFrame

Returns a data frame with the first column being the alternatives by name, a column of values for each metric for each alternative, and a final column representing the overall value for the alternative. The parameter firstColumnName can be used to name the first column of the returned data frame. By default, the first column name is "Alternatives". The resulting data frame will be sorted by the overall value column with higher value being preferred.


Applies the value function to the scores associated with each alternative and metric combination to determine the associated value.


Changes or assigns the weights for the additive model. The number of supplied weights must equal the number of metrics defined for the model, and the supplied metrics must all be in the model. The weights are normalized so that they sum to 1.0 and each lies within [0, 1]. The total weight supplied must be greater than 0.0. After assignment, the weights sum to 1.0.
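The normalization described above can be sketched as follows (assumed formula; the names are hypothetical, not KSL identifiers): each supplied weight is divided by the total, so the normalized weights each lie in [0, 1] and sum to 1.0.

```kotlin
// Sketch only: normalize non-negative weights to sum to 1.0.
fun normalizeWeights(weights: Map<String, Double>): Map<String, Double> {
    require(weights.values.all { it >= 0.0 }) { "Weights cannot be negative" }
    val total = weights.values.sum()
    require(total > 0.0) { "The total weight supplied must be greater than 0.0" }
    return weights.mapValues { (_, w) -> w / total }
}

fun main() {
    println(normalizeWeights(mapOf("cost" to 2.0, "risk" to 1.0, "time" to 1.0)))
    // {cost=0.5, risk=0.25, time=0.25}
}
```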


Removes (clears) all the defined alternatives. Consider using this to clear out data associated with previous alternatives in preparation for a new evaluation across the same metrics.

fun defineAlternatives(alternatives: Map<String, List<Score>>, allowRescalingByMetrics: Boolean = true)

Defines the alternatives and their scores that should be evaluated by the model. The metrics for the model must have been previously defined prior to specifying the alternatives. The scores supplied for each alternative must have been created for each metric. If insufficient scores or incompatible scores are provided, the alternative is not added to the model. If the alternative has been previously defined, its data will be overwritten by the newly supplied scores. Any alternatives that are new will be added to the alternatives to be evaluated (provided that they have scores for all defined metrics). The supplied scores may not encompass the entire domain of the related metrics. It may be useful to adjust the domain limits (of the metrics) based on the actual (realized) scores that were supplied. Metrics specify whether their domain limits may be adjusted based on realized scores.


Defines the metrics to be used in the evaluation of the alternatives. Each metric must be associated with the related value function. If not, it is not added to the model. If there are previously defined metrics, they will be cleared and replaced by the supplied definitions. If there were previously defined alternatives they will be cleared because they might not have the defined metrics.


Extracts metric data for use in databases and other applications.

fun metricRankByAlternative(rankingMethod: Statistic.Companion.Ranking = defaultRankingMethod): Map<String, Map<MetricIfc, Double>>

Constructs a map of maps with the key to the outer map being the alternative name and the inner map holding the rank of the associated metric. Allows the lookup of the rank for a metric by alternative.

fun metricRanks(metric: MetricIfc, rankingMethod: Statistic.Companion.Ranking = defaultRankingMethod): List<Double>

Retrieves the rank of each alternative's value as a list of ranks based on the supplied metric. The supplied metric must be part of the model. The elements of the list give the ranking of the alternatives with respect to the supplied metric, with one element per alternative. Thus, element 0 holds the rank of alternative 0 based on the metric. Each alternative may have a different ranking for different metrics.
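Ordinal ranking for a single metric can be sketched as follows (assumed semantics, with rank 1.0 going to the best, i.e. largest, value; the names are hypothetical, not KSL identifiers).

```kotlin
// Sketch only: element i of the result holds the rank of alternative i.
fun ordinalRanks(values: List<Double>): List<Double> {
    // Indices sorted by descending value; the position in that order is the rank.
    val order = values.indices.sortedByDescending { values[it] }
    val ranks = DoubleArray(values.size)
    order.forEachIndexed { position, index -> ranks[index] = (position + 1).toDouble() }
    return ranks.toList()
}

fun main() {
    // Alternative 1 has the largest value, so it is ranked first.
    println(ordinalRanks(listOf(0.2, 0.9, 0.5)))  // [3.0, 1.0, 2.0]
}
```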


Retrieves the scores for each alternative as a list of raw score values based on the supplied metric. The supplied metric must be part of the model.


Retrieves the values from the value functions for each alternative as a list of transformed values based on the supplied metric. The supplied metric must be part of the model.

open override fun multiObjectiveValue(alternative: String): Double

Computes the multi-objective (overall) value for the specified alternative. The supplied alternative (name) must be within the model.


Computes the overall values for all defined alternatives based on the defined multi-objective value function. The key to the map is the alternative name and the associated value for the key is the overall multi-objective value for the associated alternative.
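The map described above can be sketched as follows (assumed semantics; the names are hypothetical, not KSL identifiers): apply the additive valuation to every alternative's metric values, keyed by alternative name.

```kotlin
// Sketch only: alternative -> (metric -> value), combined with the weights.
fun multiObjectiveValues(
    valuesByAlternative: Map<String, Map<String, Double>>,
    weights: Map<String, Double>  // metric -> weight, summing to 1.0
): Map<String, Double> =
    valuesByAlternative.mapValues { (_, metricValues) ->
        metricValues.entries.sumOf { (metric, v) -> weights.getValue(metric) * v }
    }

fun main() {
    val overall = multiObjectiveValues(
        valuesByAlternative = mapOf(
            "A" to mapOf("cost" to 1.0, "risk" to 0.0),
            "B" to mapOf("cost" to 0.5, "risk" to 1.0)
        ),
        weights = mapOf("cost" to 0.5, "risk" to 0.5)
    )
    println(overall)  // {A=0.5, B=0.75}
}
```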

fun print()
fun ranksByMetric(rankingMethod: Statistic.Companion.Ranking = defaultRankingMethod): Map<MetricIfc, List<Double>>

Returns the ranks of the transformed metric scores as values from the assigned value function for each metric with each element of the returned list for a different alternative in the order that the alternatives are listed. The default ranking method is Ordinal.

fun resultsAsDatabase(dbName: String, dir: Path = KSL.dbDir, deleteIfExists: Boolean = true): DatabaseIfc

Returns the results as a database holding ScoreData, ValueData, and OverallValueData tables (tblScores, tblValues, tblOverall).


Returns the scores as doubles for each metric with each element of the returned list for a different alternative in the order that the alternatives are listed.


Computes statistics for each metric across the alternatives.


The list of alternatives sorted by their multi-objective value. The returned list has pairs (alternative name, multi-objective value).

fun topAlternativesByFirstRankCounts(rankingMethod: Statistic.Companion.Ranking = defaultRankingMethod): Set<String>

The names of the alternatives that are considered first based on the number of times the metrics ranked the alternative first. The set may have more than one alternative if the alternatives tie based on the count rankings.


The names of the alternatives that are considered first based on the multi-objective values. The set may have more than one alternative if the alternatives tie based on multi-objective values.

open override fun toString(): String

Retrieves the value function values for each metric for the named alternative. The alternative must be defined as part of the model.


Returns the transformed metric scores as values from the assigned value function for each metric with each element of the returned list for a different alternative in the order that the alternatives are listed.