Various Treatment Effects and their Identifiability 🌱

For Randomized Experiments

https://mattblackwell.org/files/teaching/s04-experiments.pdf

SATE

Sample Average Treatment Effect (SATE):

$$\text{SATE} = \tau_S = \frac{1}{N}\sum_{i \in S}\left[Y_i(1) - Y_i(0)\right]$$

Estimate of SATE:

$$\hat{\tau}_S = \frac{1}{N_t}\sum_{i:\, A_i = 1} Y_i \;-\; \frac{1}{N_c}\sum_{i:\, A_i = 0} Y_i$$

In a completely randomized experiment, this estimate of the SATE is unbiased:

$$
\begin{aligned}
E[\hat{\tau}_S \mid S]
&= \frac{1}{N_t}\sum_{i:\, A_i = 1} E[Y_i \mid A_i = 1, S] - \frac{1}{N_c}\sum_{i:\, A_i = 0} E[Y_i \mid A_i = 0, S] \\
&= \frac{1}{N_t}\sum_{i:\, A_i = 1} E[Y_i(1) \mid S] - \frac{1}{N_c}\sum_{i:\, A_i = 0} E[Y_i(0) \mid S] \\
&= \frac{1}{N_t}\, N_t\, E[Y_i(1) \mid S] - \frac{1}{N_c}\, N_c\, E[Y_i(0) \mid S] \\
&= E[Y_i(1) - Y_i(0) \mid S] \\
&= \frac{1}{N}\sum_{i \in S}\left[Y_i(1) - Y_i(0)\right] = \tau_S
\end{aligned}
$$
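A quick way to see this (my own sketch, not from the linked slides): fix a hypothetical "science table" of potential outcomes, repeatedly draw completely randomized assignments, and check that the difference-in-means estimates average to the SATE. The data-generating values below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fixed sample of potential outcomes (the "science table").
N = 100
Y0 = rng.normal(0, 1, size=N)
Y1 = Y0 + 2 + rng.normal(0, 1, size=N)   # heterogeneous treatment effects
sate = (Y1 - Y0).mean()                   # tau_S for this fixed sample

def diff_in_means(n_treated=50):
    # Complete randomization: exactly n_treated units are treated.
    A = np.zeros(N, dtype=bool)
    A[rng.choice(N, size=n_treated, replace=False)] = True
    Y = np.where(A, Y1, Y0)               # observed outcomes only
    return Y[A].mean() - Y[~A].mean()

estimates = [diff_in_means() for _ in range(20_000)]
print(f"SATE = {sate:.3f}, mean of estimates over randomizations = {np.mean(estimates):.3f}")
```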

PATE

Population Average Treatment Effect (PATE):

$$\text{PATE} = \tau = E[Y_i(1) - Y_i(0)]$$

In a completely randomized experiment, since $\hat{\tau}_S$ is an unbiased estimator of the SATE, it is also unbiased for the PATE when $S$ is a random sample from the population:

$$E\big[E[\hat{\tau}_S \mid S]\big] = E[\tau_S] = \tau$$
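A companion sketch for the PATE (again my own, with an assumed superpopulation): add the sampling stage on top of the randomization stage in the previous simulation and average over both.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical superpopulation of potential outcomes; the PATE is its mean effect.
POP = 100_000
Y0_pop = rng.normal(0, 1, size=POP)
Y1_pop = Y0_pop + rng.normal(2, 1, size=POP)   # heterogeneous effects, PATE ~ 2
pate = (Y1_pop - Y0_pop).mean()

def draw_sample_and_estimate(n=200):
    # Stage 1: random sample S from the population.
    idx = rng.choice(POP, size=n, replace=False)
    y0, y1 = Y0_pop[idx], Y1_pop[idx]
    # Stage 2: completely randomized assignment within S (half treated).
    treated = rng.permutation(n) < n // 2
    return y1[treated].mean() - y0[~treated].mean()

est = np.array([draw_sample_and_estimate() for _ in range(5000)])
print(f"PATE: {pate:.3f}, mean of estimates over both stages: {est.mean():.3f}")
```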

Now Adding Covariates

We could use the simple difference-in-means estimator, which is root-n consistent and asymptotically normal under no modeling assumptions; however, it completely ignores covariate information and so may be quite inefficient relative to other estimators. Alternatively, we could model the regression function and use the plug-in estimator. However, if we use parametric models to achieve root-n rates and small confidence intervals, we are putting ourselves at great risk of bias due to model misspecification; on the other hand, if we model the regression functions nonparametrically, letting the data speak for themselves, then we will typically suffer from the curse of dimensionality and be subject to slow rates of convergence, and at a loss for confidence intervals and inference.
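To make the efficiency point concrete (a rough sketch, not from the source, assuming a simple linear data-generating process): in a Bernoulli-randomized experiment both estimators below are unbiased, but the regression plug-in has a noticeably smaller standard deviation because the covariate explains much of the outcome variance.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_once(n=500):
    # Hypothetical data-generating process: outcome depends strongly on X.
    X = rng.normal(size=n)
    A = rng.binomial(1, 0.5, size=n)             # Bernoulli randomization
    Y = 1.0 * A + 3.0 * X + rng.normal(size=n)   # true effect = 1.0

    # Difference-in-means: ignores X entirely.
    diff_means = Y[A == 1].mean() - Y[A == 0].mean()

    # Plug-in estimator: fit E[Y | A, X] with OLS, then average the
    # predicted potential outcomes over the sample.
    design = np.column_stack([np.ones(n), A, X])
    beta, *_ = np.linalg.lstsq(design, Y, rcond=None)
    mu1 = np.column_stack([np.ones(n), np.ones(n), X]) @ beta
    mu0 = np.column_stack([np.ones(n), np.zeros(n), X]) @ beta
    plug_in = (mu1 - mu0).mean()
    return diff_means, plug_in

estimates = np.array([simulate_once() for _ in range(2000)])
print("difference-in-means: mean %.3f, sd %.3f" % (estimates[:, 0].mean(), estimates[:, 0].std()))
print("regression plug-in:  mean %.3f, sd %.3f" % (estimates[:, 1].mean(), estimates[:, 1].std()))
```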

Why we must use sample splitting

There are two reasons for doing sample splitting: the first is that the analysis is more straightforward, and the second, more important reason is that it prevents overfitting and allows for the use of arbitrarily complex estimators $\hat{\mu}_a$ (e.g., random forests, boosting, neural nets). Without sample splitting, one would have to restrict the complexity of the estimator $\hat{\mu}_a$ via empirical process conditions (e.g., via Donsker class or entropy restrictions). Intuitively, this is because the estimator $\hat{\psi}$ is using the data twice: once to estimate the unknown function $\mu_a$ and once to estimate the bias correction. Sample splitting ensures that these tasks are accomplished independently.
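A minimal sketch of the sample-splitting idea (my own, not from the source): split the data into two folds, fit the outcome regressions $\hat{\mu}_a$ on one fold, evaluate the doubly robust (AIPW) terms on the held-out fold, then swap the folds and average. The propensity score is taken as known here ($\pi = 0.5$, a randomized design), the random forest is just a placeholder regression estimator, and `cross_fit_dr` and the simulated data are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def cross_fit_dr(X, A, Y, pi=0.5, seed=0):
    """Doubly robust ATE estimate with two-fold sample splitting (cross-fitting)."""
    n = len(Y)
    rng = np.random.default_rng(seed)
    fold = rng.permutation(n) % 2            # assign each unit to fold 0 or 1
    phi = np.empty(n)                        # influence-function values

    for k in (0, 1):
        train, held = fold != k, fold == k
        # Fit outcome regressions mu_a on the training fold only.
        mu1 = RandomForestRegressor(random_state=0).fit(X[train & (A == 1)], Y[train & (A == 1)])
        mu0 = RandomForestRegressor(random_state=0).fit(X[train & (A == 0)], Y[train & (A == 0)])
        m1, m0 = mu1.predict(X[held]), mu0.predict(X[held])
        a, y = A[held], Y[held]
        # AIPW pseudo-outcome evaluated on the held-out fold.
        phi[held] = (m1 - m0) + (a / pi - (1 - a) / (1 - pi)) * (y - np.where(a == 1, m1, m0))

    est = phi.mean()
    se = phi.std(ddof=1) / np.sqrt(n)        # plug-in standard error
    return est, se

# Example with simulated data: true ATE is 1.
rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 3))
A = rng.binomial(1, 0.5, size=n)
Y = 1.0 * A + np.sin(X[:, 0]) + X[:, 1] ** 2 + rng.normal(size=n)
print(cross_fit_dr(X, A, Y))
```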

Theorem 3.2 is a simple but powerful result. It shows the doubly robust estimator is exactly unbiased, for any choice of regression estimator $\hat{\mu}_a$. Hence, although the estimator $\hat{\psi}$ exploits covariate information, its bias is not at all affected by accidentally misspecified models or biased regression estimators with slow convergence rates.
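To see roughly why this holds (a sketch of my own, not the proof from the source), consider the doubly robust estimator of $E[Y(1)]$ with known propensity score $\pi(X)$ and with $\hat{\mu}_1$ fit on an independent split, so that all expectations below are taken conditional on that split. Because $A\{Y - \hat{\mu}_1(X)\} = A\{Y(1) - \hat{\mu}_1(X)\}$ and randomization gives $E[A \mid X, Y(1)] = \pi(X)$, the correction term exactly cancels whatever error $\hat{\mu}_1$ has:

$$
\begin{aligned}
E\left[\hat{\mu}_1(X) + \frac{A}{\pi(X)}\{Y - \hat{\mu}_1(X)\}\right]
&= E\left[\hat{\mu}_1(X)\right] + E\left[\frac{E[A \mid X, Y(1)]}{\pi(X)}\{Y(1) - \hat{\mu}_1(X)\}\right] \\
&= E\left[\hat{\mu}_1(X)\right] + E\left[Y(1) - \hat{\mu}_1(X)\right] \\
&= E[Y(1)]
\end{aligned}
$$

An identical argument for the control term gives $E[Y(0)]$, so the doubly robust ATE estimator is unbiased regardless of how good $\hat{\mu}_a$ is.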

We have learned the surprising result that the sample-split doubly robust estimator is exactly unbiased for any choice of regression estimator $\hat{\mu}_a$, and root-n consistent and asymptotically normal as long as $\hat{\mu}_a$ converges to some fixed function at any rate. As would be expected, the efficiency of the doubly robust estimator depends on the probability limits $\mu_a$ that the regression estimators $\hat{\mu}_a$ converge to. This raises some important questions.

Moving to Conditionally Randomized Experiments

In conditionally randomized experiments, the randomization probabilities can differ by covariate values, e.g., in a stratified Bernoulli experiment one sets

$$P(A = 1 \mid X = x, Y^a) = \pi(x)$$

The doubly robust estimator is still consistent even if the outcome model is misspecified, because we know the true propensity scores:

$$\mathbb{P}_n\left[\{\hat{\mu}_1(X) - \hat{\mu}_0(X)\} + \left\{\frac{A}{\pi(X)} - \frac{1 - A}{1 - \pi(X)}\right\}\{Y - \hat{\mu}_A(X)\}\right]$$

There are some important differences between simple Bernoulli experiments and conditionally randomized designs, however. First, the difference-in-means estimator is no longer a valid estimator, since it is no longer the case that $A \perp\!\!\!\perp Y^a$ or $A \perp\!\!\!\perp (X, Y^a)$.
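As a rough illustration (my own sketch, not from the source): simulate a stratified Bernoulli design where $\pi(x)$ depends on a covariate, and compare the raw difference in means to the doubly robust estimator above with a deliberately wrong outcome model. Because the true $\pi(x)$ is used, the doubly robust estimate stays centered on the true effect while the difference in means does not.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
X = rng.binomial(1, 0.5, size=n)              # single binary covariate (stratum)
pi_x = np.where(X == 1, 0.8, 0.2)             # known, covariate-dependent assignment
A = rng.binomial(1, pi_x)
Y = 1.0 * A + 4.0 * X + rng.normal(size=n)    # true ATE = 1; X drives both A and Y

# Difference in means: biased, because A is no longer independent of Y^a.
print("difference in means:", Y[A == 1].mean() - Y[A == 0].mean())

# Doubly robust estimator with a deliberately misspecified outcome model
# (here mu_hat_a is just a constant: the overall mean of Y).
mu1_hat = np.full(n, Y.mean())
mu0_hat = np.full(n, Y.mean())
phi = (mu1_hat - mu0_hat) + (A / pi_x - (1 - A) / (1 - pi_x)) * (Y - np.where(A == 1, mu1_hat, mu0_hat))
print("doubly robust:", phi.mean())
```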

Moving to Observational Studies

Importantly, no unmeasured confounding, $A \perp\!\!\!\perp Y^a \mid X$, means the observational study is actually a conditionally randomized experiment, but one in which the randomization probabilities $\pi(x) = P(A = 1 \mid X = x)$ are unknown and must be estimated from the data.
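A minimal sketch of that setup (my own, under the assumption that all confounders are observed): fit $\hat{\pi}(x)$ with logistic regression and plug it into the same doubly robust estimator. With unmeasured confounding this would of course fail; here the only covariates X are observed and the propensity model happens to be correctly specified.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 50_000
X = rng.normal(size=(n, 2))
true_pi = 1 / (1 + np.exp(-(X[:, 0] - X[:, 1])))   # unknown to the analyst
A = rng.binomial(1, true_pi)
Y = 1.0 * A + 2.0 * X[:, 0] + rng.normal(size=n)    # true ATE = 1

# Estimate the "randomization probabilities" (propensity scores) from the data.
pi_hat = LogisticRegression().fit(X, A).predict_proba(X)[:, 1]

# Doubly robust estimate with a crude (constant) outcome model, as before.
mu_hat = np.full(n, Y.mean())
phi = (A / pi_hat - (1 - A) / (1 - pi_hat)) * (Y - mu_hat)
print("estimated ATE:", phi.mean())
```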
