NOTE: Caret is no longer being developed. Connectome Workbench is the successor to Caret, and can do many of the same things, while having better controls and new features. It is available here, and is also open source.
Caret is a free, open-source software package for structural and functional analyses of the cerebral and cerebellar cortex. Caret runs on Apple (Mac OS X), Linux, and Microsoft Windows operating systems.
Caret software includes two main programs, caret5 and caret_command. caret5 is a graphical user interface (GUI) for interactively manipulating and viewing neuroimaging data. caret_command is a command line program that allows batch processing of neuroimaging data.
Caret is developed in the Van Essen Laboratory at the Washington University School of Medicine in Saint Louis, Missouri, USA.
caret_command is a command line program containing ~200 command-line operations that can be applied singly, or in combination, to process surface and/or volume data.
A companion web interface allows Caret-like online visualization of data sets in the SumsDB database without downloading software or data.
caret includes several functions to pre-process the predictor data. It assumes that all of the data are numeric (i.e. factors have been converted to dummy variables via model.matrix, dummyVars or other means).
Note that the later chapter on using recipes with train shows how that approach can offer a more diverse and customizable interface to pre-processing in the package.
The function dummyVars can be used to generate a complete (less than full rank parameterized) set of dummy variables from one or more factors. The function takes a formula and a data set and outputs an object that can be used to create the dummy variables using the predict method.
For example, the etitanic data set in the earth package includes two factors: pclass (passenger class, with levels 1st, 2nd, 3rd) and sex (with levels female, male). The base R function model.matrix would generate the following variables:
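A minimal sketch of the base R approach, assuming the earth package is installed so that etitanic is available:

    # Load the etitanic data from the earth package
    library(earth)
    data(etitanic)

    # model.matrix drops one level per factor and adds an intercept column
    head(model.matrix(survived ~ ., data = etitanic))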
Using dummyVars:
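A sketch of the equivalent dummyVars call; the encoding is estimated first and then applied with predict:

    library(caret)

    # Estimate the dummy-variable encoding, then apply it
    dummies <- dummyVars(survived ~ ., data = etitanic)
    head(predict(dummies, newdata = etitanic))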
Note there is no intercept and each factor has a dummy variable for each level, so this parameterization may not be useful for some model functions, such as lm.
In some situations, the data generating mechanism can create predictors that only have a single unique value (i.e. a “zero-variance predictor”). For many models (excluding tree-based models), this may cause the model to crash or the fit to be unstable.
Similarly, predictors might have only a handful of unique values that occur with very low frequencies. For example, in the drug resistance data, the nR11 descriptor (number of 11-membered rings) data have a few unique numeric values that are highly unbalanced:
The concern here is that these predictors may become zero-variance predictors when the data are split into cross-validation/bootstrap sub-samples or that a few samples may have an undue influence on the model. These “near-zero-variance” predictors may need to be identified and eliminated prior to modeling.
To identify these types of predictors, the following two metrics can be calculated: the frequency ratio (the frequency of the most prevalent value divided by the frequency of the second most prevalent value), which is near one for well-behaved predictors and very large for highly unbalanced data; and the percent of unique values (the number of unique values divided by the total number of samples, times 100), which approaches zero as the granularity of the data increases.
If the frequency ratio is greater than a pre-specified threshold and the unique value percentage is less than a threshold, we might consider a predictor to be near zero-variance.
We would not want to falsely identify data that have low granularity but are evenly distributed, such as data from a discrete uniform distribution. Using both criteria should not falsely detect such predictors.
Looking at the MDRR data, the nearZeroVar function can be used to identify near zero-variance variables (the saveMetrics argument can be used to show the details; it defaults to FALSE):
By default, nearZeroVar will return the positions of the variables that are flagged to be problematic.
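A sketch using the MDRR descriptors (in caret, data(mdrr) loads the predictors as mdrrDescr and the outcome as mdrrClass):

    library(caret)
    data(mdrr)

    # Full metrics: frequency ratio, percent unique, and the zeroVar/nzv flags
    nzvMetrics <- nearZeroVar(mdrrDescr, saveMetrics = TRUE)
    head(nzvMetrics[nzvMetrics$nzv, ])

    # Default behaviour: just the column positions to drop
    nzv <- nearZeroVar(mdrrDescr)
    filteredDescr <- mdrrDescr[, -nzv]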
While there are some models that thrive on correlated predictors (such as pls), other models may benefit from reducing the level of correlation between the predictors.
Given a correlation matrix, the findCorrelation function flags predictors for removal as follows: for each pair of predictors whose absolute correlation is above the cutoff, it removes the one with the larger mean absolute correlation with all of the other predictors.
For the previous MDRR data, there are 65 descriptors that are almost perfectly correlated (|correlation| > 0.999), such as the total information index of atomic composition (IAC) and the total information content index (neighborhood symmetry of 0-order) (TIC0) (correlation = 1). The code chunk below shows the effect of removing descriptors with absolute correlations above 0.75.
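A sketch of that filtering step, reusing the filteredDescr object created after the near-zero-variance filter above:

    descrCor <- cor(filteredDescr)

    # Flag one predictor from each highly correlated pair and drop it
    highlyCorDescr <- findCorrelation(descrCor, cutoff = 0.75)
    filteredDescr  <- filteredDescr[, -highlyCorDescr]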
The function findLinearCombos uses the QR decomposition of a matrix to enumerate sets of linear combinations (if they exist). For example, consider the following matrix that could have been produced by a less-than-full-rank parameterization of a two-way experimental layout:
Note that columns two and three add up to the first column. Similarly, columns four, five and six add up to the first column. findLinearCombos will return a list that enumerates these dependencies. For each linear combination, it will incrementally remove columns from the matrix and test to see if the dependencies have been resolved. findLinearCombos will also return a vector of column positions that can be removed to eliminate the linear dependencies:
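A sketch that builds such a matrix and runs findLinearCombos on it:

    # Less-than-full-rank design: column 1 is the intercept,
    # columns 2-3 encode one factor and columns 4-6 encode the other
    ltfrDesign <- cbind(c(1, 1, 1, 1, 1, 1),
                        c(1, 1, 1, 0, 0, 0),
                        c(0, 0, 0, 1, 1, 1),
                        c(1, 0, 0, 1, 0, 0),
                        c(0, 1, 0, 0, 1, 0),
                        c(0, 0, 1, 0, 0, 1))

    comboInfo <- findLinearCombos(ltfrDesign)
    comboInfo$remove                            # column positions to drop
    ltfrDesign <- ltfrDesign[, -comboInfo$remove]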
These types of dependencies can arise when large numbers of binary chemical fingerprints are used to describe the structure of a molecule.
The preProcess class can be used for many operations on predictors, including centering and scaling. The function preProcess estimates the required parameters for each operation and predict.preProcess is used to apply them to specific data sets. This function can also be interfaced when calling the train function.
Several types of techniques are described in the next few sections and then another example is used to demonstrate how multiple methods can be used. Note that, in all cases, the preProcess function estimates whatever it requires from a specific data set (e.g. the training set) and then applies these transformations to any data set without recomputing the values.
In the example below, half of the MDRR data are used to estimate the location and scale of the predictors. The function preProcess doesn’t actually pre-process the data; predict.preProcess is used to pre-process this and other data sets.
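A sketch of the split and the centering/scaling step, reusing filteredDescr and mdrrClass from above (the random split is illustrative):

    set.seed(96)
    inTrain <- sample(seq_along(mdrrClass), length(mdrrClass) / 2)

    training <- filteredDescr[inTrain, ]
    test     <- filteredDescr[-inTrain, ]

    # Estimate centering/scaling parameters on the training set only...
    preProcValues <- preProcess(training, method = c("center", "scale"))

    # ...then apply them to both data sets
    trainTransformed <- predict(preProcValues, training)
    testTransformed  <- predict(preProcValues, test)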
The preProcess option 'range' scales the data to the interval between zero and one.
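A short sketch, reusing the training set from above:

    # Rescale each predictor using the training-set minima and maxima
    preProcRange <- preProcess(training, method = "range")
    trainScaled  <- predict(preProcRange, training)
    apply(trainScaled, 2, range)   # each training column now spans 0 to 1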
preProcess can be used to impute data sets based only on information in the training set. One method of doing this is with K-nearest neighbors. For an arbitrary sample, the K closest neighbors are found in the training set and the value for the predictor is imputed using these values (e.g. using the mean). Using this approach will automatically trigger preProcess to center and scale the data, regardless of what is in the method argument. Alternatively, bagged trees can also be used to impute. For each predictor in the data, a bagged tree is created using all of the other predictors in the training set. When a new sample has a missing predictor value, the bagged model is used to predict the value. While, in theory, this is a more powerful method of imputing, the computational costs are much higher than the nearest neighbor technique.
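A hedged sketch of both imputation methods on a small hypothetical data frame (the column names and values are made up for illustration):

    # Hypothetical predictors with a few missing values
    df <- data.frame(x1 = c(1.1, 2.3, NA, 4.8, 5.2),
                     x2 = c(10, 12, 13, NA, 15),
                     x3 = c(0.5, 0.7, 0.9, 1.1, 1.3))

    # K-nearest neighbors imputation (also centers and scales the predictors)
    knnImp <- preProcess(df, method = "knnImpute", k = 2)
    predict(knnImp, df)

    # Bagged-tree imputation (more costly, no forced centering/scaling)
    bagImp <- preProcess(df, method = "bagImpute")
    predict(bagImp, df)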
In some cases, there is a need to use principal component analysis (PCA) to transform the data to a smaller subspace where the new variables are uncorrelated with one another. The preProcess class can apply this transformation by including 'pca' in the method argument. Doing this will also force scaling of the predictors. Note that when PCA is requested, predict.preProcess changes the column names to PC1, PC2 and so on.
Similarly, independent component analysis (ICA) can also be used to find new variables that are linear combinations of the original set such that the components are independent (as opposed to uncorrelated in PCA). The new variables will be labeled as IC1, IC2 and so on.
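A sketch of both options on the MDRR training set from above (the ICA method requires the fastICA package; thresh and n.comp control how many components are kept):

    # PCA: keep enough components to capture 95% of the variance
    pcaPrep  <- preProcess(training, method = c("center", "scale", "pca"), thresh = 0.95)
    trainPCA <- predict(pcaPrep, training)   # columns renamed PC1, PC2, ...

    # ICA: extract a fixed number of independent components
    icaPrep  <- preProcess(training, method = c("center", "scale", "ica"), n.comp = 3)
    trainICA <- predict(icaPrep, training)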
The “spatial sign” transformation (Serneels et al, 2006) projects the data for a predictor to the unit circle in p dimensions, where p is the number of predictors. Essentially, a vector of data is divided by its norm. The two figures below show two centered and scaled descriptors from the MDRR data before and after the spatial sign transformation. The predictors should be centered and scaled before applying this transformation.
After the spatial sign:
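A sketch of the transformation on the MDRR training set from above:

    # Center and scale first, then project each sample onto the unit sphere
    ssPrep  <- preProcess(training, method = c("center", "scale", "spatialSign"))
    trainSS <- predict(ssPrep, training)

    # Each row now has unit Euclidean norm
    head(sqrt(rowSums(trainSS^2)))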
Another option, 'BoxCox', will estimate a Box–Cox transformation on the predictors if the data are greater than zero.
The NA values correspond to the predictors that could not be transformed. This transformation requires the data to be greater than zero. Two similar transformations, the Yeo-Johnson and exponential transformation of Manly (1976) can also be used in preProcess.
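A sketch of the Box–Cox option on the MDRR training set from above; printing the resulting object summarizes the estimated lambdas, with NA for predictors that could not be transformed:

    bcPrep  <- preProcess(training, method = "BoxCox")
    bcPrep                               # shows the estimated lambda values
    trainBC <- predict(bcPrep, training)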
In Applied Predictive Modeling there is a case study where the execution times of jobs in a high performance computing environment are being predicted. The data are:
The data are a mix of categorical and numeric predictors. Suppose we want to use the Yeo-Johnson transformation on the continuous predictors and then center and scale them. Let’s also suppose that we will be running a tree-based model, so we might want to keep the factors as factors (as opposed to creating dummy variables). We run the function on all the columns except the last, which is the outcome.
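A sketch, assuming the scheduling data are available as schedulingData (for example via the AppliedPredictiveModeling package) with the outcome in the last column:

    library(AppliedPredictiveModeling)
    data(schedulingData)

    # Transform, center, and scale the numeric predictors; factors are ignored
    pp_hpc <- preProcess(schedulingData[, -ncol(schedulingData)],
                         method = c("center", "scale", "YeoJohnson"))
    transformed <- predict(pp_hpc, newdata = schedulingData[, -ncol(schedulingData)])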
The two predictors labeled as “ignored” in the output are the two factor predictors. These are not altered but the numeric predictors are transformed. However, the predictor for the number of pending jobs has a very sparse and unbalanced distribution:
For some other models, this might be an issue (especially if we resample or down-sample the data). We can add a filter to check for zero- or near zero-variance predictors prior to running the pre-processing calculations:
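A sketch adding the near-zero-variance filter to the method list (same assumptions as the previous example):

    # "nzv" drops near-zero-variance predictors (such as the pending-jobs
    # column) before the remaining transformations are estimated
    pp_no_nzv <- preProcess(schedulingData[, -ncol(schedulingData)],
                            method = c("center", "scale", "YeoJohnson", "nzv"))
    filteredHPC <- predict(pp_no_nzv, newdata = schedulingData[, -ncol(schedulingData)])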
Note that one predictor is labeled as “removed” and the processed data lack the sparse predictor.
caret contains functions to generate new predictor variables based on distances to class centroids (similar to how linear discriminant analysis works). For each level of a factor variable, the class centroid and covariance matrix are calculated. For new samples, the Mahalanobis distance to each of the class centroids is computed and can be used as an additional predictor. This can be helpful for non-linear models when the true decision boundary is actually linear.
In cases where there are more predictors within a class than samples, the classDist function has pca and keep arguments that allow principal components analysis within each class to be used to avoid issues with singular covariance matrices.
predict.classDist is then used to generate the class distances. By default, the distances are logged, but this can be changed via the trans argument to predict.classDist.
As an example, we can use the MDRR data.
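A sketch that reuses the centered and scaled MDRR split from the earlier examples (trainTransformed, testTransformed, inTrain, and mdrrClass are assumptions carried over from above):

    # Estimate class centroids and covariance matrices on the training set
    # (set pca = TRUE if a within-class covariance matrix is singular)
    centroids <- classDist(trainTransformed, mdrrClass[inTrain])

    # Mahalanobis distances (log scale by default) for the held-out samples
    distances <- predict(centroids, testTransformed)
    head(distances)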
This image shows a scatterplot matrix of the class distances for the held-out samples: