Abstract(s)
In discrete discriminant analysis, dimensionality problems frequently arise, particularly when dealing with data from the social sciences, humanities and health sciences.
In these domains, one often has to classify entities described by a large number of explanatory variables relative to the number of available observations.
In the present work we address the problem of feature selection in classification, aiming to identify the variables that best discriminate between the a priori defined classes, thereby reducing the number of parameters to estimate, making the results easier to interpret and reducing the runtime of the methods used. We specifically address classification using a recent methodological approach based on a linear combination of the First-Order Independence Model (FOIM) and the Dependence Trees Model (DTM).
Data of small and moderate size are considered.
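The combination underlying this approach can be illustrated with a small sketch. The Python code below is an assumption-based illustration rather than the authors' implementation: it fits a FOIM (per-class independence model for binary variables) and scores a new observation with a convex combination of FOIM and DTM class-conditional likelihoods. The function names, the coefficient `beta`, the toy data and the DTM stand-in are all hypothetical; a real dependence-tree estimate (e.g. Chow-Liu trees) would replace the placeholder.

```python
# Minimal sketch (not the authors' code) of classifying with a convex
# combination of class-conditional likelihoods from the First-Order
# Independence Model (FOIM) and a Dependence Trees Model (DTM).
# The coefficient `beta`, the function names and the toy data are
# illustrative assumptions; a fitted dependence tree would replace
# the stand-in used below.
import numpy as np

def foim_fit(X, y):
    """Per-class Bernoulli parameters and priors (FOIM assumes
    conditional independence of the variables given the class)."""
    classes = np.unique(y)
    theta = {k: (X[y == k].sum(axis=0) + 1.0) / ((y == k).sum() + 2.0)  # Laplace-smoothed
             for k in classes}
    priors = {k: np.mean(y == k) for k in classes}
    return classes, theta, priors

def foim_loglik(x, theta_k):
    """log P(x | class) under the independence assumption."""
    return np.sum(x * np.log(theta_k) + (1 - x) * np.log(1 - theta_k))

def combined_score(x, k, theta, dtm_loglik, beta, priors):
    """Score of class k: log of the convex combination
    beta * P_FOIM(x|k) + (1 - beta) * P_DTM(x|k), plus the log prior."""
    p_foim = np.exp(foim_loglik(x, theta[k]))
    p_dtm = np.exp(dtm_loglik(x, k))
    return np.log(beta * p_foim + (1 - beta) * p_dtm) + np.log(priors[k])

# Toy usage: binary data, two classes, DTM replaced by a placeholder.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(40, 6)).astype(float)
y = rng.integers(0, 2, size=40)
classes, theta, priors = foim_fit(X, y)
dtm_stub = lambda x, k: foim_loglik(x, theta[k])   # stand-in for a fitted dependence tree
x_new = X[0]
pred = max(classes, key=lambda k: combined_score(x_new, k, theta, dtm_stub,
                                                 beta=0.5, priors=priors))
print("predicted class:", pred)
```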
Description
Abstract of a poster presentation at the 14th International Conference on Applied Stochastic Models and Data Analysis (ASMDA2011), Rome, June 7-10, 2011
Keywords
Discrete Discriminant Analysis; Combining models; Dependence Trees model; First Order Independence model; Hierarchical Coupling procedure; Variable selection
Citation
In Book of abstracts of the 14th Applied Stochastic Models and Data Analysis International Conference (ASMDA2011). Rome, 2011