Error in family$linkfun(mustart) : Argument mu must be a nonempty numeric vector

WeightIt

ngreifer / WeightIt


R package for propensity score weighting

causal-inference propensity-scores inverse-probability-weights observational-study

WeightIt's Introduction

WeightIt: Weighting for Covariate Balance in Observational Studies


Overview

WeightIt is a one-stop package to generate balancing weights for point and longitudinal treatments in observational studies. Contained within WeightIt are methods that call on other R packages to estimate weights. The value of WeightIt is in its unified and familiar syntax used to generate the weights, as each of these other packages has its own, often challenging to navigate, syntax. WeightIt extends the capabilities of these packages to generate weights used to estimate the ATE, ATT, ATC, and other estimands for binary or multinomial treatments, and treatment effects for continuous treatments when available. In these ways, WeightIt does for weighting what MatchIt has done for matching, and MatchIt users will find the syntax familiar.

For a complete vignette, see the website for WeightIt.

To install and load WeightIt, use the code below:

#CRAN version
install.packages("WeightIt")

#Development version
devtools::install_github("ngreifer/WeightIt")

library("WeightIt")

The workhorse function of WeightIt is weightit(), which generates weights from a given formula and data input according to methods and other parameters specified by the user. Below is an example of the use of weightit() to generate propensity score weights for estimating the ATE:

data("lalonde", package = "cobalt")

W <- weightit(treat ~ age + educ + nodegree + married + race + re74 + re75,
              data = lalonde, method = "ps", estimand = "ATE")
W

A weightit object
 - method: "ps" (propensity score weighting)
 - number of obs.: 614
 - sampling weights: none
 - treatment: 2-category
 - estimand: ATE
 - covariates: age, educ, nodegree, married, race, re74, re75

Evaluating weights has two components: evaluating the covariate balance produced by the weights, and evaluating whether the weights will allow for sufficient precision in the eventual effect estimate. For the first goal, functions in the cobalt package, which are fully compatible with WeightIt, can be used, as demonstrated below:

library("cobalt")
bal.tab(W, un = TRUE)

Call
 weightit(formula = treat ~ age + educ + nodegree + married +
    race + re74 + re75, data = lalonde, method = "ps", estimand = "ATE")

Balance Measures
                Type Diff.Un Diff.Adj
prop.score  Distance  1.7569   0.1360
age          Contin. -0.2419  -0.1676
educ         Contin.  0.0448   0.1296
nodegree      Binary  0.1114  -0.0547
married       Binary -0.3236  -0.0944
race_black    Binary  0.6404   0.0499
race_hispan   Binary -0.0827   0.0047
race_white    Binary -0.5577  -0.0546
re74         Contin. -0.5958  -0.2740
re75         Contin. -0.2870  -0.1579

Effective sample sizes
           Control Treated
Unadjusted  429.    185.
Adjusted    329.01   58.33

For the second goal, qualities of the distributions of the weights can be assessed using summary(), as demonstrated below.

summary(W)

                 Summary of weights

- Weight ranges:
           Min                                   Max
treated 1.1721 |---------------------------| 40.0773
control 1.0092 |-|                            4.7432

- Units with 5 greatest weights by group:

            137     124     116      68      10
 treated 13.5451 15.9884 23.2967 23.3891 40.0773
            597     573     411     381     303
 control  4.0301  4.0592  4.2397  4.5231  4.7432

- Weight statistics:
        Coef of Var   MAD Entropy # Zeros
treated       1.478 0.807   0.534       0
control       0.552 0.391   0.118       0

- Effective Sample Sizes:
           Control Treated
Unweighted  429.    185.
Weighted    329.01   58.33

Desirable qualities include coefficients of variation close to 0 and large effective sample sizes.
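If a few units carry very large weights, they can be trimmed before re-checking balance. A minimal sketch, assuming WeightIt's trim() accepts a weightit object and a quantile in its at argument (the 90th-percentile threshold is arbitrary and for illustration only):

# Winsorize the largest weights at the 90th percentile, then re-check
# balance and the effective sample sizes
W.trim <- trim(W, at = 0.9)
summary(W.trim)
bal.tab(W.trim, un = TRUE)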

The table below contains the available methods in WeightIt for estimating weights for binary, multinomial, and continuous treatments using various methods and functions from various packages.

Treatment type | Method (method =) | Package
Binary | Binary regression PS ("ps") | various
- | Generalized boosted modeling PS ("gbm") | gbm
- | Covariate Balancing PS ("cbps") | CBPS
- | Non-Parametric Covariate Balancing PS ("npcbps") | CBPS
- | Entropy Balancing ("ebal") | -
- | Empirical Balancing Calibration Weights ("ebcw") | ATE
- | Optimization-Based Weights ("optweight") | optweight
- | SuperLearner PS ("super") | SuperLearner
- | Bayesian additive regression trees PS ("bart") | dbarts
- | Energy Balancing ("energy") | -
Multinomial | Multinomial regression PS ("ps") | various
- | Generalized boosted modeling PS ("gbm") | gbm
- | Covariate Balancing PS ("cbps") | CBPS
- | Non-Parametric Covariate Balancing PS ("npcbps") | CBPS
- | Entropy Balancing ("ebal") | -
- | Empirical Balancing Calibration Weights ("ebcw") | ATE
- | Optimization-Based Weights ("optweight") | optweight
- | SuperLearner PS ("super") | SuperLearner
- | Bayesian additive regression trees PS ("bart") | dbarts
- | Energy Balancing ("energy") | -
Continuous | Generalized linear model GPS ("ps") | -
- | Generalized boosted modeling GPS ("gbm") | gbm
- | Covariate Balancing GPS ("cbps") | CBPS
- | Non-Parametric Covariate Balancing GPS ("npcbps") | CBPS
- | Entropy Balancing ("ebal") | -
- | Optimization-Based Weights ("optweight") | optweight
- | SuperLearner GPS ("super") | SuperLearner
- | Bayesian additive regression trees GPS ("bart") | dbarts

In addition, WeightIt implements the subgroup balancing propensity score using the function sbps(). Several other tools and utilities are available.

Please submit bug reports or other issues to https://github.com/ngreifer/WeightIt/issues. If you would like to see your package or method integrated into WeightIt, or for any other questions or comments about WeightIt, please contact the author. Fan mail is greatly appreciated.

WeightIt's People

Contributors

ngreifer

Watchers

Robin van Emden, Noah Greifer

WeightIt's Issues

Propensity scores for multinomial PS

I can't find the PS in the covariates table of the weightit object for CBPS PS weighting.

Is there anywhere else to get them? I would like to assess overlap.

Thanks!
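Not the package author's answer, but one way to eyeball overlap without a stored ps field is to compare covariate distributions across groups before and after weighting with cobalt. A sketch, with race standing in as a hypothetical 3-category treatment for illustration:

library("cobalt")
library("WeightIt")

data("lalonde", package = "cobalt")

# Hypothetical multi-category example: treat `race` (3 categories) as the treatment
W3 <- weightit(race ~ age + educ + married + re74, data = lalonde,
               method = "cbps", estimand = "ATE")

# Inspect each covariate's distributional overlap, unadjusted and adjusted
bal.plot(W3, var.name = "age", which = "both")
bal.tab(W3, un = TRUE)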

add github links to description

with the devtools::use_github_links() function

Extract last reltol value from WeightIt run

Using entropy balancing, method = ebal, when sample sizes are relatively small, entropy balancing will fail. To handle this, the convergence tolerance level in optim() needs to be raised from sqrt(.Machine$double.eps). The reltol can be adjusted, but it's unclear what it can/should be adjusted to. The process should grab the final reltol value from optim(), and then make it available to pass to weightit() to get weights at a higher tolerance level. In Stata, the ebalance package returns the tolerance level after each run. See the example below. Is it possible to similarly include the last reltol in W1$obj?

Attachment-1
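Not part of the original request: a minimal sketch of one way to poke at the underlying fit today, assuming (as elsewhere on this page) that include.obj = TRUE returns the internal optimization object in W1$obj; whether a tolerance can then be read off or passed back in is an assumption, not confirmed API:

library("WeightIt")

data("lalonde", package = "cobalt")

# Keep the internal fit so its convergence information can be inspected
W1 <- weightit(treat ~ age + educ + married + re74, data = lalonde,
               method = "ebal", estimand = "ATT", include.obj = TRUE)
str(W1$obj)  # look here for tolerance/convergence components returned by the optimizer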

Convergence problems with glm()

Dear Noah, thanks for writing this great package. When estimating a model with 100 times more treated than controls, I get warnings:

Warning messages:
1: glm.fit: algorithm did not converge
2: glm.fit: fitted probabilities numerically 0 or 1 occurred
3: Some extreme weights were generated. Examine them with summary() and possibly trim them with trim().

I generally understand why that might happen. However, when I run the exact same model outside weightit just with glm, this does not happen. The self-contained code given below replicates my problem.

Kind regards
Hendrik


library(tidyverse)
library(WeightIt)

data("nsw_mixtape", package = "causaldata")
data("cps_mixtape", package = "causaldata")

df <- nsw_mixtape |>
  filter(treat == 1) |>
  bind_rows(cps_mixtape) |>
  mutate(delta_Y = re78 - re75)

W = weightit(treat ~ age + educ + black + hisp + marr + nodegree + re74,
             data = df, estimand = "ATT", method = "ps", include.obj = TRUE)

summary(W$ps)

df$pscore <- predict(
  glm(treat ~ age + educ + black + hisp + marr + nodegree + re74,
      family = binomial(), data = df),
  type = "response")

summary(df$pscore)

Is it possible to incorporate the PS field for CBPS and GBM?

The weightit function doesn't return the propensity score for the CBPS or GBM method. Is it possible to incorporate it?

Error in deparse1(f[[1]]) : could not find function "deparse1"

Hi there,
I am trying to use the weightit function, but I keep getting this error.
Error in deparse1(f[[1]]) : could not find function "deparse1"

This is my code:
w.out <- weightit(mentalH1 ~ covs, data = core_1617,
                  s.weights = core_1617$DISCWT, estimand = "ATE", method = "ps",
                  agg.fun = "range")

I found a similar discussion: #15, and I updated the installation of the backports package and restarted R, but I'm still running into the same problem. FYI, I am working on R version 3.5.2.

Please advise.
Thank you,
Mohammed

error message - "treatment & covariates must have the same number of units"

I just installed WeightIt and am getting an error; minimal example code provided below. What's going wrong?

t1 <- data.frame(treat = c(rep(0, 500), rep(1, 500)), cov = rnorm(1000, 1, .4))
w.out <- weightit(treat ~ cov, data = t1, estimand = "ATE")
Error: The treatment and covariates must have the same number of units.

The treatment and covariate have the same length, so what's behind the error? Thanks.

Package 'WeightIt' was removed from the CRAN repository.

Package 'WeightIt' was removed from the CRAN repository.

Formerly available versions can be obtained from the archive.

Archived on 2019-09-03 as it uses the archived package 'optweight' unconditionally.

sbps error

Apologies for brevity... love weightit (consider this fanmail!)

I am testing sbps but getting an error. Probs done something stupid.

test_weights_two_arm_sbps <- weightit(formula = f_sbps,
                                      data = sd_subset_two_arm_sbps,
                                      method = "cbps",
                                      estimand = "ATE")

test_weights_two_arm_sbps_by <- weightit(formula = f_sbps,
                                         data = sd_subset_two_arm_sbps,
                                         method = "cbps",
                                         estimand = "ATE",
                                         by = "country")

# (country is a factor and each country has every treatment level)

test_weights_two_arm_sbps_combined <- sbps(test_weights_two_arm_sbps, test_weights_two_arm_sbps_by)

error

Error in get_w(s, moderator.factor, w_o, w_s) :
  argument "moderator.factor" is missing, with no default

I do indeed see in the code that get_w is called without passing moderator.factor.

Is this a bug, or have I missed something?

Thanks so much!

Andrew

"All weights are 2143289344" with method = "free energy"

Hi,
Thanks for the fantastic package.

Method = "energy" seems to work really well in terms of speed, covariate balance and effective sample size in my data.

I am bootstrapping to account for the uncertainty in the weights and to generate confidence intervals around my coefficients from the weighted models.

I have encountered an error when using method = "energy" where all weights are found to be equal and, I think, very large ("All weights are 2143289344."). A Google of this error suggests it has come up in previous CRAN package checks, but I'm not sure.

I can't share my data unfortunately; however, the error is reproducible with lalonde (this example doesn't have any models just to keep it minimal):

boot.fun <- function(r, index) {
  w_obj <- WeightIt::weightit(formula = treat ~ age + educ + race + married + nodegree + re74,
                              estimand = "ATE", method = "energy", data = r[index, ])
  weights <- w_obj$weights
  return(weights)
}

boot(boot.fun, data = lalonde, R = 1000, parallel = "snow", ncpus = 40)

## Remove parallel and ncpus if you're not able to parallelise.

21 nodes produced errors; first error: All weights are 2143289344

Thank you!

Error when method = "gbm" and stabilize = TRUE

From the Examples in the documentation: method_gbm {WeightIt}

The following code works as expected

(W1 <- weightit(treat ~ age + educ + married +
                  nodegree + re74, data = lalonde,
                method = "gbm", estimand = "ATT",
                stop.method = "es.max"))

The addition of the argument "stabilize" produces an error

(W1 <- weightit(treat ~ age + educ + married +
                  nodegree + re74, data = lalonde,
                method = "gbm", estimand = "ATT",
                stop.method = "es.max", stabilize = TRUE))

Error in tab[t] : invalid subscript type 'closure'

Is it possible to incorporate the PS for multinomial treatments?

I think a great improvement would be to include a data.frame in the 'ps' field with the probability of each treatment. Something like in glm and predict(model, type = 'response').
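Until or unless weightit() returns them, one workaround (not the package's own API) is to refit the treatment model directly and extract the predicted probabilities. A sketch with nnet, using race as a hypothetical 3-category treatment:

library("nnet")

data("lalonde", package = "cobalt")

# Multinomial logit for a 3-category "treatment" (race), for illustration only
fit <- multinom(race ~ age + educ + married + re74, data = lalonde, trace = FALSE)
ps.mat <- predict(fit, type = "probs")  # one column of probabilities per category
head(ps.mat)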

Error in deparse1(f[[1]]) : could not find function "deparse1"

Hi! When I try to generate the propensity scores using weightit, I get the following error: Error in deparse1(f[[1]]) : could not find function "deparse1". I'm using R version 3.6.1. This is the code that generates the error:

W.out <- weightit(treat ~ covs, data = arcc_data,
                  method = "ps", estimand = "ATT")

where treat and covs are vectors.

Have you encountered this issue before?

Error in summary.weightit

test_weights_two_arm <- weightit(formula = f,
                                 data = sd_subset_two_arm,
                                 method = "cbps",
                                 estimand = "ATE",
                                 missing = "ind")
summary(test_weights_two_arm)

                 Summary of weights

- Weight ranges:

Error in rep(" ", spaces1) : invalid 'times' argument
In addition: Warning messages:
1: In min(w[w > 0 & t == 1]) :
  no non-missing arguments to min; returning Inf
2: In max(w[w > 0 & t == 1]) :
  no non-missing arguments to max; returning -Inf
3: In min(w[w > 0 & t == 0]) :
  no non-missing arguments to min; returning Inf
4: In max(w[w > 0 & t == 0]) :
  no non-missing arguments to max; returning -Inf

Weights are reasonable (1-6.8)

The error comes from deep inside the print.summary function, and I cannot play with trace(..., edit = TRUE) to work out why it's going wrong!

Extremely slow queries

Hi,

First, thanks for the really helpful packages you created; they are very useful when doing IPTW!

I am currently doing IPTW manually and wanted to try out the WeightIt package. I have a dataset with ~50,000 observations over 15 years, and approximately 10 covariates (baseline + time-varying). Using a manually built function based on GLM estimation of the PS, I get the results in less than 3 minutes.

However, with the WeightIt package, I never actually got a chance to let the code run to the end (the maximum was 3 days and 2 hours). I use the following function: weightitMSM with arguments estimand = "ATE", method = "PS", stabilize = T. Have you already experienced this?

Thanks

Multinomial ebal NA weights - can't investigate

Hi again!

+     data = sd_subset_three_arm,
+     method = "ebal",
+     estimand = "ATE"
+ )

Error in if (any(sapply(unique(treat), function(x) sd(test.w[treat == :
  missing value where TRUE/FALSE needed
In addition: Warning message:
Some weights were estimated as NA, which means a value was impossible to compute (e.g., Inf). Check for extreme values of the treatment or covariates and try removing them. Non-finite weights will be set to 0.

Looking at the code, the SD function has the argument na.rm = TRUE, so I can only assume this error arises because all weights for one or more of the treatment groups (3) are NA.

Does this indicate an obvious problem with my data? CBPS works fine on the same data.

Because the function errors, I can't investigate which weights are missing - my skills don't extend to setting breakpoints in packages!

Thanks so much for any thoughts,

Andrew

How can the weights be extracted as a column added to the original dataset?

Hello,
I'm using the package to compute the weights, but the purpose of these weighting methods is to ultimately use the weights as regression weights in a subsequent analysis. However, the output is given as an object. How can I add the weights to the original data set as a column so that I can use them as regression weights in the subsequent analysis? Thanks!
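A common pattern (a sketch, not an official helper) is to pull the weights vector out of the weightit object and bind it to the data used to fit it; the outcome model below is purely illustrative:

library("WeightIt")

data("lalonde", package = "cobalt")
W <- weightit(treat ~ age + educ + race + re74 + re75, data = lalonde,
              method = "ps", estimand = "ATE")

# W$weights is in the same row order as the data passed to weightit()
lalonde$ipw <- W$weights

# Illustrative outcome model using the weights as regression weights
fit <- glm(re78 ~ treat, data = lalonde, weights = ipw)
summary(fit)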

Inconsistent ATT weights

In one of your references (Austin, 2011), ATT weights are calculated as 1 for the treated group and ps/(1-ps) for the control group. However, this doesn't appear to be how you implemented the ATT weighting estimator in get_w_from_ps().

Could you clarify how the ATT weights are calculated and what reference you used? Thank you.
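For reference, the Austin (2011) ATT weights the poster describes can be written directly from the propensity score; a minimal sketch (this is not WeightIt's internal get_w_from_ps() code):

# ATT weights from a propensity score vector `ps` and a 0/1 treatment vector `treat`:
# treated units get weight 1; control units get ps / (1 - ps)
att_weights_from_ps <- function(ps, treat) {
  ifelse(treat == 1, 1, ps / (1 - ps))
}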

SL.ridge leads to no weights estimated

I tried to use SuperLearner with WeightIt using this code, and everything worked just fine with SL.glm, but ridge led to some errors.

W.out <- weightit(treat ~ age + educ + race + married + nodegree + re74 + re75,
                  data = lalonde, estimand = "ATE", method = "super",
                  SL.method = "method.balance", SL.library = c("SL.glm", "SL.ridge"))

and got this

Error in (function (Y, X, newX, family, lambda = seq(1, 20, 0.1), ...)  :
  Currently only works with gaussian data
In addition: Warning message:
In FUN(X[[i]], ...) : Error in algorithm SL.ridge
  The Algorithm will be removed from the Super Learner (i.e. given weight 0)
[the same error and warnings repeat for each cross-validation fold]
Warning messages:
1: In FUN(X[[i]], ...) : Error in algorithm SL.ridge on full data
  The Algorithm will be removed from the Super Learner (i.e. given weight 0)
2: In (function (Y, X, newX = NULL, family = gaussian(), SL.library,  :
  Re-running estimation of coefficients removing failed algorithm(s)
Original coefficients are: 0.00934546386419807, 0.990654536135802
3: No weights were estimated. This is probably a bug,
   and you should report it at https://github.com/ngreifer/WeightIt/issues.
4: Some weights were estimated as NA, which means a value was impossible to compute (e.g., Inf). Check for extreme values of the treatment or covariates and try removing them. Non-finite weights will be set to 0.
5: All weights are 0, possibly indicating an estimation failure.
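The log says SL.ridge only supports gaussian outcomes, so it cannot model a binary treatment. One possible workaround (a sketch, not an endorsed fix) is a ridge-penalized logistic learner built as a custom SuperLearner wrapper around SL.glmnet with alpha = 0:

library("SuperLearner")
library("WeightIt")

data("lalonde", package = "cobalt")

# Custom wrapper: glmnet with alpha = 0 is ridge regression, and SL.glmnet
# handles binomial outcomes, unlike SL.ridge
SL.glmnet.ridge <- function(...) SL.glmnet(..., alpha = 0)

W.out <- weightit(treat ~ age + educ + race + married + nodegree + re74 + re75,
                  data = lalonde, estimand = "ATE", method = "super",
                  SL.method = "method.balance",
                  SL.library = c("SL.glm", "SL.glmnet.ridge"))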

Error when weighting

I get the following error message when running a basic example. I believe all variables are well-behaved and have no missing data. Sorry about the lack of details, but maybe there's an obvious answer.

weightit(treatment ~ rev_log,
         data = d,
         method = "ebal")

Error in if (all(as.character(unique.treat.bin) == as.character(unique.treat))) { :
  missing value where TRUE/FALSE needed

Extracting propensity scores from weightthem objects with a continuous exposure

Hey, this is an excellent package, thank you so much!

I am trying to do a sensitivity analysis where I compare the effect estimate generated from the weightthem object for a continuous exposure weighted sample to the effect estimate that includes the propensity score as a covariate (without weighting by it). I am doing so using imputed data, however, using weightthem. Even after including include.obj = TRUE as suggested in #3, I do not see any components of the output object that include propensity scores for such a sensitivity analysis (I am posting here because the weightthem documentation suggests that additional arguments beyond what is specified in the documentation are implemented via weightit).

As an example:

Generate weights: ols_wt <- weightthem(X ~ Z1 + Z2 + Z3, datasets = imp_data, method = "ps", approach = "within")

Model 1: with(ols_wt, glm(Y ~ X, family = binomial))

Model 2: with(imp_data, glm(Y ~ X + PS, family = binomial))

  • I would obviously need to be more creative to make sure that each vector of propensity scores corresponds to the appropriate imputed dataset, but this is for illustration.

Does the propensity score exist in the weightthem object, or will I just need to create it myself (which is not a problem; I just want to make sure I'm not missing anything and would rather use the propensity score used to generate the weights so I know I'm comparing related quantities)?

Thanks so much!

Danny
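Not from the thread: if the generalized propensity score has to be reconstructed by hand, the standard linear-model GPS for a continuous exposure is just the normal density of the exposure evaluated at its conditional mean. A sketch on one (non-imputed) dataset, with the data and variable names (X, Z1-Z3) made up for illustration:

set.seed(1)
dat <- data.frame(Z1 = rnorm(200), Z2 = rnorm(200), Z3 = rnorm(200))
dat$X <- with(dat, 0.5 * Z1 - 0.3 * Z2 + rnorm(200))  # continuous exposure

# GPS = normal density of the exposure at its conditional mean from a linear model
m <- lm(X ~ Z1 + Z2 + Z3, data = dat)
dat$gps <- dnorm(dat$X, mean = fitted(m), sd = sigma(m))
head(dat$gps)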

Back to those multinomial propensity scores... (checking the overlap/positivity assumption?)

#3 - Hi @ngreifer - I understand your hesitation to add this (ps object) to the output. Just wondering if you might have some suggestions about best practices for checking "overlap" with your package? The twang package approach is to use the ps object, despite these individual scores not summing to one.

Also, I'm sure you've already looked at the GBM objects, but I haven't been able to find a reference for all the elements that I see in the output "obj". The GBM documentation only describes some of the values; "computer" is not described, but it looks promising -- not much time to dig down the rabbit-hole.

Thanks so much!
Hanna

Hanna Gerlovin, PhD
Biostatistician
Massachusetts Veterans Epidemiology Research and Information Center (MAVERIC)
Veterans Affairs, Boston
[email protected]

ATE / ATT mixup for ebcw method

See line 1043 of weightit2method.R:
The following should be estimating the ATE, but ATT is specified:

ate.out <- ATE::ATE(Y = Y, Ti = treat_i, X = covs_i, ATT = TRUE, theta = A[["theta"]],
                    verbose = TRUE, max.iter = A[["max.iter"]], tol = A[["tol"]],
                    initial.values = A[["initial.values"]], backtrack = A[["backtrack"]],
                    backtrack.alpha = A[["backtrack.alpha"]], backtrack.beta = A[["backtrack.beta"]])

Error with missing data when using "gbm" as method

I'm having issues with cobalt and weightit (using gbm) due to error messages regarding missing data. I have no issues with twang using the same data, so I tried to use weightit with the lalonde_mis dataset. I trigger the same error message using the lalonde data. How does one get weightit to work with missing data on covariates? I've only tried the gbm method, which I thought the manual said worked with missing data.

weightit_0 <- weightit(treat ~ age + educ + race + married + nodegree, # + re74 + re75,
data = lalonde_mis, method = "gbm")

Missing values are present in the covariates. See ?weightit for information on how these are handled.
Error in if (is.factor(x) || is.character(x) || all_the_same(x)) return(x) else if (is_binary(x)) { :
  missing value where TRUE/FALSE needed

I get this error regardless of whether I remove the continuous vars with missing data (as above).
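Not a confirmed fix: another issue on this page passes weightit's missing = "ind" argument (missingness indicators); a sketch of trying it with the gbm method, assuming that argument is accepted here:

library("WeightIt")

data("lalonde_mis", package = "cobalt")

w_ind <- weightit(treat ~ age + educ + race + married + nodegree,
                  data = lalonde_mis, method = "gbm", missing = "ind")
summary(w_ind$weights)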

Can no longer choose between pairwise and multinomial logistic regression for 3-level outcomes

Hi Noah,

Another potential issue I'm running into today.

We have a three-level outcome, and I had been able to force weightit to still use pairwise logistic regression (rather than multinomial logistic regression) by making sure mlogit was not installed, but now I don't seem to have that 'option'. I get an error that stops my code that says "Error in loadNamespace(name) : there is no package called 'mlogit'".

Trying a reprex again; let's see if I get it right this time.

Is there a way to choose between the two methods?

Thanks!

library(tidyverse)
library(WeightIt)
library(Hmisc)
#> Loading required package: lattice
#> Loading required package: survival
#> Loading required package: Formula
#>
#> Attaching package: 'Hmisc'
#> The following objects are masked from 'package:dplyr':
#>
#>     src, summarize
#> The following objects are masked from 'package:base':
#>
#>     format.pval, units

# We do NOT want mlogit installed. Make sure it's uninstalled here.
remove.packages("mlogit")
#> Removing package from '/home/[email protected]/R/x86_64-pc-linux-gnu-library/3.5'
#> (as 'lib' is unspecified)
#> Error in find.package(pkgs, lib): there is no package called 'mlogit'

women <- women %>%
  mutate(height_cat = Hmisc::cut2(height, g = 3))

weightit(height ~ weight, data = women, estimand = "ATE", method = "ps",
         stabilize = T, include.obj = T)
#> A weightit object
#>  - method: "ps" (propensity score weighting)
#>  - number of obs.: 15
#>  - sampling weights: none
#>  - treatment: continuous
#>  - covariates: weight

weightit(height_cat ~ weight, data = women, estimand = "ATE", method = "ps",
         stabilize = T, include.obj = T)
#> Error in loadNamespace(name): there is no package called 'mlogit'

Created on 2020-01-03 by the reprex package (v0.2.1)

Error when using by with super method

Hi,

Weightit is great.

I am finding that I get
"Error in family$linkfun(mustart) :
Argument mu must exist a nonempty numeric vector
Error in algorithm SL.glm
The Algorithm will be removed from the Super Learner (i.due east. given weight 0)
Mistake in (function (Y, Ten, newX = Zip, family unit = gaussian(), SL.library, :
All algorithms dropped from library".

when using by pick for the super method and SL.glm. It runs detect without by (i.e. I run on one imputation at a time).

all-time wishes, Frank
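Not the author's diagnosis: this kind of glm error often appears when some level of the by variable contains no units, or only one treatment level, so a subgroup's learner is fit to an empty outcome. A quick check worth running first, with made-up data and a hypothetical grouping variable grp:

# Every subgroup used in `by` should contain units at both treatment levels
set.seed(1)
mydata <- data.frame(treat = rbinom(100, 1, 0.3),
                     grp   = sample(c("A", "B", "C"), 100, replace = TRUE))
with(mydata, table(grp, treat))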

PSs are different if the covariates are specified in different orders on the right-hand side of the formula when method = "cbps"

Hi!

Thanks for the great package!

I have noticed that when method = "cbps" is used in the weightit function, different PS values are returned if the covariates on the right-hand side of the formula are specified in different orders. However, if the PSs are estimated "outside" of the weightit function using CBPS::CBPS they look identical. Here's a reproducible example using the lalonde dataset.

library(WeightIt)
library(CBPS)
#> Loading required package: MASS
#> Loading required package: MatchIt
#> Loading required package: nnet
#> Loading required package: numDeriv
#> Loading required package: glmnet
#> Loading required package: Matrix
#> Loaded glmnet 4.0-2
#> CBPS: Covariate Balancing Propensity Score
#> Version: 0.21
#> Authors: Christian Fong [aut, cre],
#>   Marc Ratkovic [aut],
#>   Kosuke Imai [aut],
#>   Chad Hazlett [ctb],
#>   Xiaolin Yang [ctb],
#>   Sida Peng [ctb]

data("lalonde")

# 1) PS estimation using the CBPS function ---------------------------------
ps_1 <- CBPS(treat ~ age + educ + race + married + nodegree + re74 + re75,
             data = lalonde, ATT = 0)
ps_2 <- CBPS(treat ~ educ + race + nodegree + re74 + re75 + married + age,
             data = lalonde, ATT = 0)

head(data.frame(ps1 = ps_1$fitted.values, ps2 = ps_2$fitted.values))
#>         ps1       ps2
#> 1 0.6283220 0.6283220
#> 2 0.2217905 0.2217905
#> 3 0.6629598 0.6629598
#> 4 0.7595289 0.7595289
#> 5 0.6939144 0.6939144
#> 6 0.6906981 0.6906981
# The PSs are identical

# 2) PS estimation using weightit --------------------------------------
wt_1 <- weightit(treat ~ age + educ + race + married + nodegree + re74 + re75,
                 data = lalonde, method = "cbps", estimand = "ATE")
wt_2 <- weightit(treat ~ educ + race + nodegree + re74 + re75 + married + age,
                 data = lalonde, method = "cbps", estimand = "ATE")

head(data.frame(ps1 = wt_1$ps, ps2 = wt_2$ps))
#>         ps1       ps2
#> 1 0.6897437 0.5851259
#> 2 0.2157126 0.2275898
#> 3 0.6599305 0.7606147
#> 4 0.7763387 0.7580954
#> 5 0.6329184 0.6942782
#> 6 0.6930129 0.6907725
# The PSs are not identical

Created on 2020-12-28 by the reprex package (v0.3.0)

Session info
devtools::session_info()
#> - Session info ---------------------------------------------------------------
#>  setting  value
#>  version  R version 4.0.3 (2020-10-10)
#>  os       Windows 10 x64
#>  system   x86_64, mingw32
#>  ui       RTerm
#>  language (EN)
#>  collate  Italian_Italy.1252
#>  ctype    Italian_Italy.1252
#>  tz       Europe/Berlin
#>  date     2020-12-28
#>
#> - Packages -------------------------------------------------------------------
#>  package     * version    date       lib source
#>  CBPS        * 0.21       2019-08-21 [1] CRAN (R 4.0.0)
#>  glmnet      * 4.0-2      2020-06-16 [1] CRAN (R 4.0.2)
#>  MASS        * 7.3-53     2020-09-09 [2] CRAN (R 4.0.3)
#>  MatchIt     * 4.1.0      2020-12-15 [1] CRAN (R 4.0.3)
#>  WeightIt    * 0.10.2     2020-08-21 [1] Github (ngreifer/[email protected])
#>  (remaining attached packages omitted)
#>
#> [1] C:/Users/Daniele/Documents/R/win-library/4.0
#> [2] C:/Program Files/R/R-4.0.3/library

I also tried using method = "ps" and method = "gbm", but I didn't notice any differences in the PS values. I wasn't expecting different PS values when covariates are specified in different orders, but maybe there is something that I am missing.

Thank you very much!

Questions about exporting propensity score weighted data

Dr.
Hello, I am Wang Wenhao, Department of Obstetrics and Gynecology, Second Hospital of Shanxi Medical University. Thank you for your guidance.
Recently, when propensity score weighting (PSW) was performed on the Shanxi cervical cancer cohort, ATO/ATE/ATT weights could be used to weight the cohort and reduce the differences between the two groups. But how do I export the weighted table to form a propensity-score-weighted cohort?
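Not part of the original post: a common way to "export" a weighted cohort is to attach the weights as a column, write the table out, and carry the weights into weighted analyses, e.g. through the survey package. A sketch with lalonde standing in for the cohort data:

library("WeightIt")
library("survey")

data("lalonde", package = "cobalt")  # stand-in for the cohort data
W <- weightit(treat ~ age + educ + race + re74, data = lalonde,
              method = "ps", estimand = "ATT")

lalonde$psw <- W$weights
write.csv(lalonde, "weighted_cohort.csv", row.names = FALSE)  # exported table with weights

# Weighted descriptive statistics / models can then use the weights, e.g.:
des <- svydesign(ids = ~1, weights = ~psw, data = lalonde)
svymean(~age + re74, des)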

Possible bug with bs() splines

I was using bs() inside weightit() to include spline terms in my model, but I went to re-run my code and it is no longer working. I get the following error:

Error in terms.formula(new.form) : invalid model formula in ExtractVars

I'm not sure this is a bug in WeightIt; it may be something I'm doing wrong. Any help would be appreciated!

Cheers!

require(WeightIt)
#> Loading required package: WeightIt

# bs() works
bs(women$height, df = 5)
#> Error in bs(women$height, df = 5): could not find function "bs"

# bs works with lm
summary(fm1 <- lm(weight ~ bs(height, df = 5), data = women))
#> Error in bs(height, df = 5): could not find function "bs"

# weightit works
weightit(weight ~ height, data = women, estimand = "ATE", method = "ps",
         stabilize = T, include.obj = T)
#> A weightit object
#>  - method: "ps" (propensity score weighting)
#>  - number of obs.: 15
#>  - sampling weights: none
#>  - treatment: continuous
#>  - covariates: height

# weightit with splines does not
weightit(weight ~ bs(height, df = 5), data = women, estimand = "ATE", method = "ps",
         stabilize = T, include.obj = T)
#> Error: All variables in formula must be variables in data or objects in the global environment.
#> Missing variables: bs(height, df = 5)

Created on 2020-01-02 by the reprex package (v0.2.1)
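The reprex itself hints at one cause: bs() lives in the splines package, which was never attached, so the bare bs() calls fail regardless of WeightIt. A sketch of retrying with splines loaded (whether this also resolves the weightit() formula error is an assumption, not confirmed):

library(splines)
library(WeightIt)

# bs() is found once splines is attached
head(bs(women$height, df = 5))

# Retry the spline terms inside weightit()
weightit(weight ~ bs(height, df = 5), data = women,
         estimand = "ATE", method = "ps", stabilize = TRUE)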

Warning message: deprecated when tidyverse loaded

Seems totally minor since the output doesn't actually change, but I'm finding that I get a "deprecated" warning when tidyverse is loaded + using the "cbps" method. PS: thanks for the awesome R package!

# read libraries
library(tidyverse)
library(WeightIt)
library(cobalt)
#>  cobalt (Version 4.2.2, Build Date: 2020-06-26 15:50:03 UTC)

# data
data("lalonde")

# example
out = weightit(treat ~ age + educ + married, data = lalonde, method = "cbps")
#> Warning: Deprecated
summary(out)
#>                  Summary of weights
#>
#> - Weight ranges:
#>
#>            Min                                  Max
#> treated 2.2741      |----------------------| 6.9730
#> control 1.1645 |-|                           1.8046
#>
#> - Units with 5 greatest weights by group:
#>
#>             182    177    167    124     38
#>  treated 6.8991 6.9129 6.9267 6.9274  6.973
#>             604    589    572    541    531
#>  control 1.7794 1.7807 1.7907 1.7985 1.8046
#>
#> - Weight statistics:
#>
#>         Coef of Var   MAD Entropy # Zeros
#> treated       0.527 0.411   1.288       0
#> control       0.185 0.184   0.377       0
#> overall       0.641 0.384   0.826       0
#>
#> - Effective Sample Sizes:
#>
#>            Control Treated
#> Unweighted  429.    185.
#> Weighted    414.87  144.95

Created on 2021-03-31 by the reprex package (v0.3.0)

Sorry, it was a mistake

Hello,

I reviewed the calculation of the ATE weights for multinomial treatments with glm models. I expected the weight to be sum_{i in 1:n_treatment}(1/ps[i]), but the weight is 1/ps[,1]. Is that correct?


Source: https://coder.social/ngreifer/WeightIt
