Similar documents
 20 similar documents retrieved (search time: 46 ms)
1.
The spatial dynamic panel data (SDPD) model is a standard tool for analysing data with both spatial correlation and dynamic dependence among economic units. Conventional estimation methods rely on the key assumption that the spatial weight matrix is exogenous, which is likely to be violated in empirical applications where spatial weights are determined by economic factors. In this paper, we propose an SDPD model with individual fixed effects in a short time dimension, where the spatial weights can be endogenous and time-varying. We establish the consistency and asymptotic normality of the two-stage instrumental variable (2SIV) estimator and investigate its finite sample properties using a Monte Carlo simulation. Applying this model to government expenditures in China, we find strong evidence of spatial correlation and time dependence in the spending decisions of China's provincial governments.
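The instrumental-variable idea behind such estimators can be illustrated on the simplest cross-sectional spatial lag model. The sketch below is far simpler than the paper's 2SIV for endogenous, time-varying weights: it assumes a known, exogenous weight matrix W (a hypothetical circular nearest-neighbour layout) and uses the textbook instruments [x, Wx] for the endogenous spatial lag Wy.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
# Circular nearest-neighbour weight matrix, row-normalized (exogenous here)
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = 0.5
    W[i, (i + 1) % n] = 0.5

lam_true, beta_true = 0.4, 2.0
x = rng.normal(size=n)
e = rng.normal(size=n)
# Spatial lag model y = lam*W y + x*beta + e  =>  y = (I - lam*W)^{-1}(x*beta + e)
y = np.linalg.solve(np.eye(n) - lam_true * W, x * beta_true + e)

# Two-stage least squares: regressors D = [Wy, x], instruments Z = [x, Wx]
D = np.column_stack([W @ y, x])
Z = np.column_stack([x, W @ x])
P = Z @ np.linalg.solve(Z.T @ Z, Z.T)   # projection onto the instrument space
lam_hat, beta_hat = np.linalg.solve(D.T @ P @ D, D.T @ P @ y)
```

With a strong first stage (Wx predicts Wy well here), both coefficients are recovered closely; the endogenous-weights case in the paper requires additional instruments for W itself.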

2.
Robust rank-based methods are proposed for the analysis of data from multicenter clinical trials using a mixed model (including covariates) in which the treatment effects are assumed to be fixed and the center effects are assumed to be random. These rank-based methods are developed under the usual mixed-model structure but without the normality assumption on the random components of the model. For this mixed model, our proposed estimation includes R estimation of the fixed effects, robust estimation of the variance components, and studentized residuals. Our accompanying inference includes estimates of the standard errors of the fixed-effects estimators and tests of general linear hypotheses concerning the fixed effects. While the development is for a general score function, the Wilcoxon linear scores are emphasized. A discussion of relative efficiency shows that the R estimates are highly efficient compared with the traditional maximum likelihood (ML) estimates. A small Monte Carlo study confirms the validity of the analysis and its gain in power over the ML analysis for heavy-tailed distributions. We further develop a rank-based test for center-by-treatment interactions. We discuss the results of our analysis for an example of a multicenter clinical trial, which illustrates the robustness of our procedure.
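As a minimal illustration of the R-estimation idea (not the paper's full mixed-model procedure), a regression slope can be estimated by minimizing Jaeckel's dispersion function with Wilcoxon scores; the heavy-tailed noise below is the setting where rank methods gain over least squares. The simulated data and grid search are this sketch's own choices.

```python
import numpy as np

def wilcoxon_dispersion(y, x, beta):
    """Jaeckel's dispersion with Wilcoxon (linear) scores
    a(i) = sqrt(12) * (i/(n+1) - 1/2); invariant to an intercept shift."""
    e = y - x * beta
    n = len(e)
    ranks = np.argsort(np.argsort(e)) + 1          # ranks of the residuals
    scores = np.sqrt(12) * (ranks / (n + 1) - 0.5)
    return np.sum(scores * e)

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.0 * x + rng.standard_t(df=3, size=200)       # heavy-tailed errors

# The dispersion is convex in beta, so a grid search over the slope suffices here
grid = np.linspace(0.0, 4.0, 401)
beta_hat = grid[np.argmin([wilcoxon_dispersion(y, x, b) for b in grid])]
```

The minimizer is the R estimate of the slope; in a fuller treatment the intercept is estimated separately (e.g. by the median of residuals) because the centered scores make the dispersion invariant to it.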

3.
Using unique information on a representative sample of US teenagers, we investigate peer effects in adolescent bedtime decisions. We extend the nonlinear least-squares estimator for spatial autoregressive models to estimate network models with network fixed effects and sampled observations on the dependent variable. We show the extent to which neglecting the sampling issue yields misleading inferential results. When accounting for sampling, we find that, besides individual, family and peer characteristics, the bedtime decisions of peers help to shape one's own bedtime decision.

4.
This paper considers the adaptability of estimation methods for binary response panel data models to multiple fixed effects. It is motivated by the gravity equation used in international trade, where important papers use binary response models with fixed effects for both importing and exporting countries. Econometric theory has mostly focused on the estimation of single fixed effects models. This paper investigates whether existing methods can be modified to eliminate multiple fixed effects for two specific models in which the incidental parameter problem has already been solved in the presence of a single fixed effect. We find that it is possible to generalize the conditional maximum likelihood approach to include two fixed effects for the logit. Monte Carlo simulations show that the conditional logit estimator presented in this paper is less biased than other logit estimators without sacrificing precision. This superiority is most pronounced in small samples. An application to trade data using the logit estimator further highlights the importance of properly accounting for two fixed effects.

5.
Summary This paper considers the specification and estimation of social interaction models with network structures and the presence of endogenous, contextual and correlated effects. With macro group settings, group-specific fixed effects are also incorporated in the model. The network structure provides information on the identification of the various interaction effects. We propose a quasi-maximum likelihood approach for the estimation of the model. We derive the asymptotic distribution of the proposed estimator, and provide Monte Carlo evidence on its small sample performance.

6.
Summary We suggest and compare different methods for estimating spatial autoregressive models with randomly missing data in the dependent variable. Aside from the traditional expectation-maximization (EM) algorithm, a nonlinear least squares method is suggested and a generalized method of moments estimation is developed for the model. A two-stage least squares estimation with imputation is proposed as well. We analytically compare these estimation methods and find that the generalized nonlinear least squares, best generalized two-stage least squares with imputation and best method of moments estimators have identical asymptotic variances. These methods are less efficient than maximum likelihood estimation implemented with the EM algorithm. When unknown heteroscedasticity is present, however, EM estimation produces inconsistent estimates, and in that situation these methods outperform EM. We provide finite sample evidence through Monte Carlo experiments.

7.
We propose a new robust hypothesis test for (possibly non-linear) constraints on M-estimators with possibly non-differentiable estimating functions. The proposed test employs a random normalizing matrix computed from recursive M-estimators to eliminate the nuisance parameters arising from the asymptotic covariance matrix. It does not require consistent estimation of any nuisance parameters, in contrast with the conventional heteroscedasticity-autocorrelation consistent (HAC)-type test and the Kiefer–Vogelsang–Bunzel (KVB)-type test. Our test reduces to the KVB-type test in simple location models with ordinary least-squares estimation, so the error in the rejection probability of our test in a Gaussian location model is . We discuss robust testing in quantile regression and censored regression models in detail. In simulation studies, we find that our test has better size control and better finite sample power than the HAC-type and KVB-type tests.

8.
Summary This paper is concerned with developing a non-parametric time-varying coefficient model with fixed effects to characterize non-stationarity and trending phenomena in a non-linear panel data model. We develop two methods to estimate the trend function and the coefficient function without taking first differences to eliminate the fixed effects. The first eliminates the fixed effects by taking cross-sectional averages, and then uses a non-parametric local linear method to estimate both the trend and coefficient functions. The asymptotic theory for this approach reveals that although the estimates of both functions are consistent, the estimate of the coefficient function converges at rate (Th)^{-1/2}, which is slower than the (NTh)^{-1/2} rate of convergence of the estimate of the trend function. To estimate the coefficient function more efficiently, we propose a pooled local linear dummy variable approach, motivated by the least squares dummy variable method of parametric panel data analysis. This method removes the fixed effects by subtracting a smoothed version of the cross-time average from each individual. It estimates both the trend and coefficient functions at the rate (NTh)^{-1/2}. The asymptotic distributions of both estimates are established when T tends to infinity and N is fixed, or when both T and N tend to infinity. Both simulation results and a real data analysis illustrate the finite sample behaviour of the proposed estimation methods.
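Both estimators rest on the same local linear smoothing building block. A minimal sketch of that block applied to a single smooth trend (the panel structure, fixed effects and coefficient function are omitted), using an Epanechnikov kernel; the test function and bandwidth are this sketch's own choices.

```python
import numpy as np

def local_linear(t0, t, y, h):
    """Local linear estimate of a smooth function at the point t0,
    using an Epanechnikov kernel with bandwidth h."""
    u = (t - t0) / h
    w = np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)  # kernel weights
    X = np.column_stack([np.ones_like(t), t - t0])        # local linear design
    WX = X * w[:, None]                                   # diag(w) @ X
    b = np.linalg.solve(X.T @ WX, WX.T @ y)               # weighted least squares
    return b[0]                                           # level estimate at t0

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * t) + 0.1 * rng.normal(size=200)
f_hat = local_linear(0.25, t, y, h=0.1)   # true value sin(pi/2) = 1
```

The intercept of the locally weighted fit is the function estimate; the local slope b[1] estimates the derivative, which is what gives local linear smoothing its favourable boundary behaviour.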

9.
Summary This paper deals with censored or truncated regression models where the explanatory variables are measured with additive errors. We propose a two-stage estimation procedure that combines the instrumental variable method and minimum distance estimation. This approach produces consistent and asymptotically normally distributed estimators of the model parameters. When the predictor and instrumental variables are normally distributed, we also propose a maximum likelihood based estimator and a two-stage moment estimator. Simulation studies show that all proposed estimators perform satisfactorily for relatively small samples and a relatively high degree of censoring. In addition, the maximum likelihood based estimators are fairly robust against non-normal and/or heteroskedastic random errors in our simulations. The method can be generalized to panel data models.

10.
In this paper we consider the problem of testing the null hypothesis that a series has a constant level (possibly as part of a more general deterministic mean) against the alternative that the level follows a random walk. This problem has previously been studied by, inter alia, [19] in the context of the orthogonal Gaussian random walk plus noise model. This model postulates that the noise component and the innovations to the random walk are uncorrelated. We generalize their work by deriving the locally best invariant test of a fixed level against a random walk level in the non-orthogonal case. Here the noise and random walk components are contemporaneously correlated with correlation coefficient ρ. We demonstrate that the form of the optimal test in this setting is independent of ρ; i.e. the test statistic previously derived for the case ρ = 0 remains the locally optimal test for all ρ. This is a very useful result: it states that the locally optimal test may be achieved without prior knowledge of ρ. Moreover, we show that the limiting distribution of the resulting statistic under both the null and local alternatives does not depend on ρ, behaving exactly as if ρ = 0. Finite sample simulations are provided to illustrate these effects, and generalizations to models with dependent errors are considered.
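A sketch of the KPSS-type statistic this literature builds on, in its simplest form (i.i.d. noise, no long-run variance correction, which the dependent-errors extensions would add): partial sums of the demeaned series, scaled by T² times the sample variance. A random-walk level inflates the partial sums and hence the statistic.

```python
import numpy as np

def lbi_stat(y):
    """KPSS-type statistic for H0: constant level vs H1: random-walk level,
    assuming serially uncorrelated noise (no long-run variance correction)."""
    e = y - y.mean()
    S = np.cumsum(e)                 # partial sums of the demeaned data
    T = len(y)
    return np.sum(S**2) / (T**2 * np.mean(e**2))

rng = np.random.default_rng(0)
stat_null = lbi_stat(rng.normal(size=500))             # constant level
stat_alt = lbi_stat(np.cumsum(rng.normal(size=500)))   # random-walk level
```

Under the null the statistic has a Cramér–von Mises-type limit distribution; under a random-walk level it diverges, which is what gives the test its power.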

11.
Summary The Chamberlain projection approach, a powerful tool for the analysis of linear fixed-effects models, was introduced within the context of balanced panels. This paper extends the Chamberlain projection approach to unbalanced panels. The extension is especially useful for models with sequential exogeneity, where existing control-variable approaches are not applicable. A generalized method of moments (GMM) estimation framework is considered, and hypothesis tests (testing strict exogeneity, testing random effects, etc.) are discussed within the GMM context.

12.
Summary This paper proposes a non-parametric test for common trends in semi-parametric panel data models with fixed effects based on a measure of non-parametric goodness-of-fit (R²). We first estimate the model under the null hypothesis of common trends by the method of profile least squares, and obtain the augmented residual, which consistently estimates the sum of the fixed effect and the disturbance under the null. Then we run a local linear regression of the augmented residuals on a time trend and calculate the non-parametric R² for each cross-section unit. The proposed test statistic is obtained by averaging all cross-sectional non-parametric R²s; it is close to 0 under the null and deviates from 0 under the alternative. We show that after appropriate standardization the test statistic is asymptotically normally distributed under both the null hypothesis and a sequence of Pitman local alternatives. We prove test consistency and propose a bootstrap procedure to obtain p-values. Monte Carlo simulations indicate that the test performs well in finite samples. Empirical applications explore the commonality of spatial trends in UK climate change data and idiosyncratic trends in OECD real GDP growth data. Both applications reveal the fragility of the widely adopted common trends assumption.

13.
In a simulation study of the estimation of population pharmacokinetic parameters, including fixed and random effects, the estimates and confidence intervals produced by NONMEM were evaluated. Data were simulated according to a monoexponential model with a wide range of design and statistical parameters, under both steady state (SS) and non-SS conditions. Within the range of values for population parameters commonly encountered in research and clinical settings, NONMEM produced parameter estimates for CL, V, σ_CL, and σ_ε that exhibit relatively small biases. As the range of variability increases, these biases became larger and more variable. An important exception was the bias in the estimate of σ_V, which was large even when the underlying variability was small. NONMEM standard error estimates are appropriate as estimates of standard deviation when the underlying variability is small. Except in the case of CL, standard error estimates tend to deteriorate as underlying variability increases. An examination of confidence interval coverage indicates that caution should be exercised when the usual 95% confidence intervals are used for hypothesis testing. Finally, simulation-based corrections of point and interval estimates are possible, but corrections must be performed on a case-by-case basis.

14.
Summary We propose an improved model selection test for dynamic models using a new asymptotic approximation to the sampling distribution of a new test statistic. The model selection test is applicable to dynamic models with very general selection criteria and estimation methods. Since our test statistic does not assume the exact form of a true model, the test is essentially non-parametric once the competing models are estimated. To handle the unknown serial correlation in the data, we use a heteroscedasticity/autocorrelation-consistent (HAC) variance estimator, and the sampling distribution of the test statistic is approximated by the fixed-b asymptotic approximation. The asymptotic approximation depends on the kernel functions and bandwidth parameters used in the HAC estimator. We compare the finite sample performance of the new test with the bootstrap methods as well as with the standard normal approximation, and show that the fixed-b asymptotics and the bootstrap methods are markedly superior to the standard normal approximation for moderate sample sizes of time series data. An empirical application to foreign exchange rate forecasting models is presented; the results show that the normal approximation to the distribution of the test statistic appears to overstate the data's ability to distinguish between two competing models.
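For concreteness, a Bartlett-kernel HAC long-run variance estimator with the bandwidth written as M = bT: the "fixed-b" convention holds b constant as T grows, which is why the limit distribution depends on the kernel and on b. The AR(1) test series and the value b = 0.02 are this sketch's own choices.

```python
import numpy as np

def bartlett_hac(u, b):
    """Bartlett-kernel HAC long-run variance of a scalar series, with
    bandwidth M = b*T; fixed-b asymptotics keeps b constant as T grows."""
    T = len(u)
    M = max(1, int(b * T))
    u = u - u.mean()
    lrv = np.mean(u * u)                                 # lag-0 autocovariance
    for j in range(1, M):
        lrv += 2 * (1 - j / M) * np.mean(u[j:] * u[:-j])  # weighted autocovariances
    return lrv

rng = np.random.default_rng(0)
T = 4000
e = rng.normal(size=T)
u = np.empty(T)
u[0] = e[0]
for t in range(1, T):
    u[t] = 0.5 * u[t - 1] + e[t]   # AR(1); true long-run variance = 1/(1-0.5)^2 = 4
lrv_hat = bartlett_hac(u, b=0.02)
```

Under fixed-b asymptotics this estimator does not converge to a constant, so the test statistic's limit is non-normal; the standard normal approximation ignores exactly this randomness.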

15.
Summary The robustness of the Lagrange multiplier (LM) tests for spatial error dependence of Burridge (1980) and Born and Breitung (2011) for the linear regression model, and of Anselin (1988) and Debarsy and Ertur (2010) for the panel regression model with random or fixed effects, is examined. While all tests are asymptotically robust against distributional mis-specification, their finite sample behaviour may be sensitive to the spatial layout. To overcome this shortcoming, standardized LM tests are suggested. Monte Carlo results show that the new tests possess good finite sample properties. An important observation made throughout this study is that the LM tests for spatial dependence need to be both mean- and variance-adjusted for good finite sample performance to be achieved; the former is, however, often neglected in the literature.

16.
In this paper, we employ the Lagrange multiplier (LM) principle to test parameter homogeneity across cross-section units in panel data models. The test can be seen as a generalization of the Breusch–Pagan test against random individual effects to all regression coefficients. While the original test procedure assumes a likelihood framework under normality, several useful variants of the LM test are presented to allow for non-normality, heteroscedasticity and serially correlated errors. Moreover, the tests can be conveniently computed via simple artificial regressions. We derive the limiting distribution of the LM test and show that if the errors are not normally distributed, the original LM test is asymptotically valid as the number of time periods tends to infinity. A simple modification of the score statistic yields an LM test that is robust to non-normality when the number of time periods is fixed. Further adjustments provide versions of the LM test that are robust to heteroscedasticity and serial correlation. We compare the local power of our tests with that of the statistic proposed by Pesaran and Yamagata. The results of the Monte Carlo experiments suggest that the LM-type test can be substantially more powerful, particularly when the number of time periods is small.
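The Breusch–Pagan test that this paper generalizes is itself simple to compute. A sketch for a balanced panel, applied directly to residuals (in practice these would be pooled-OLS residuals); the simulated design is this sketch's own.

```python
import numpy as np

def breusch_pagan_lm(e):
    """Breusch-Pagan LM statistic for random individual effects in a balanced
    panel; e is an (N, T) array of pooled residuals. Chi-square(1) under H0."""
    N, T = e.shape
    A = (e.sum(axis=1)**2).sum() / (e**2).sum()   # ratio of squared sums to sum of squares
    return N * T / (2 * (T - 1)) * (A - 1)**2

rng = np.random.default_rng(0)
N, T = 100, 5
lm_null = breusch_pagan_lm(rng.normal(size=(N, T)))        # no individual effects
alpha = rng.normal(size=(N, 1))                            # random individual effects
lm_alt = breusch_pagan_lm(alpha + rng.normal(size=(N, T)))
```

An individual effect makes residuals within a unit co-move, inflating the within-unit sums relative to the total sum of squares and hence the statistic; the paper's tests apply the same scoring idea to heterogeneity in all slope coefficients.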

17.
BACKGROUND: In clinical trials a fixed effects research model assumes that the patients selected for a specific treatment have the same true quantitative effect and that the differences observed are residual error. If, however, we have reason to believe that certain patients respond differently from others, then the spread in the data is caused not only by residual error but also by between-patient differences. The latter situation requires a random effects model. OBJECTIVE: To explain random effects models in analysis of variance and to give examples of studies qualifying for them. RESULTS: If in a particular study the data are believed to differ from one assessing doctor to another, and if we have no prior theory that one or two assessing doctors produced the highest scores, but rather expect heterogeneity in the population of doctors at large, then a random effects model is appropriate. For that purpose between-doctor variability is compared with within-doctor variability. If the data of two separate studies of the same new treatment are analyzed simultaneously, it is safe to consider an interaction effect between study number and treatment efficacy. If the interaction is significant, a random effects model with study number as the random variable is adequate; the treatment effect is then tested against the interaction effect. In a multicenter study the data are at risk of interaction between centers and treatment efficacy. If this interaction is significant, a random effects model with health center as the random variable is adequate, and the treatment effect is tested not against the residual but against the interaction. If in a crossover study a treatment difference is not observed, this may be due to random subgroup effects. A post-hoc random effects model, with patient effect as the random variable, testing the treatment effect against the treatment × patient interaction, is appropriate.
DISCUSSION: Random effects research models enable the assessment of an entire sample of data for subgroup differences without the need to split the data into subgroups. Clinical investigators are, in general, hardly aware of this possibility and therefore wrongly treat random effects as fixed effects, leading to a biased interpretation of the data.
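The prescription above, testing the fixed treatment effect against the treatment-by-center (rather than residual) mean square, can be sketched for a balanced two-way layout; the simulated trial below is hypothetical.

```python
import numpy as np

def f_treatment_vs_interaction(y):
    """Mixed-model F test: fixed treatment effect tested against the
    treatment-by-center interaction; y has shape (treatments, centers, reps)."""
    a, b, n = y.shape
    grand = y.mean()
    trt = y.mean(axis=(1, 2))                       # treatment means
    ctr = y.mean(axis=(0, 2))                       # center means
    cell = y.mean(axis=2)                           # cell means
    ss_trt = b * n * ((trt - grand)**2).sum()
    ss_int = n * ((cell - trt[:, None] - ctr[None, :] + grand)**2).sum()
    ms_trt = ss_trt / (a - 1)
    ms_int = ss_int / ((a - 1) * (b - 1))
    return ms_trt / ms_int      # refer to F(a-1, (a-1)(b-1)), not the residual df

rng = np.random.default_rng(0)
y = rng.normal(size=(2, 6, 10))   # 2 treatments, 6 centers, 10 patients per cell
y[1] += 1.0                       # genuine treatment effect
F = f_treatment_vs_interaction(y)
```

Using the interaction mean square as the denominator reflects that centers are a random sample: the conclusion is meant to generalize to centers at large, not just the ones observed.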

18.
Summary The class of generalized autoregressive conditional heteroscedastic (GARCH) models has proved particularly valuable in modelling time series with time-varying volatility. These include financial data, which can be particularly heavy-tailed. It is now well understood that the tail heaviness of the innovation distribution plays an important role in determining the relative performance of the two competing estimation methods, namely the maximum quasi-likelihood estimator based on a Gaussian likelihood (GMLE) and the log-transform-based least absolute deviations estimator (LADE) (see Peng and Yao, 2003, Biometrika, 90, 967–75). A practically relevant question is when to use which. We provide in this paper a solution to this question. By interpreting the LADE as a version of the maximum quasi-likelihood estimator under the likelihood derived from assuming hypothetically that the log-squared innovations obey a Laplace distribution, we outline a selection procedure based on some goodness-of-fit type statistics. The methods are illustrated with both simulated and real data sets. Although we deal only with estimation for GARCH models, the basic idea may be applied to address the estimation procedure selection problem in a general regression setting.
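The Gaussian quasi-likelihood at the heart of the GMLE is a simple recursion. A sketch for GARCH(1,1), evaluated (not maximized) on a simulated path; the parameter values and initialization are this sketch's own choices, and the paper's selection procedure between GMLE and LADE is not reproduced.

```python
import numpy as np

def garch11_gaussian_qll(params, x):
    """Gaussian quasi-log-likelihood of a GARCH(1,1):
    s2_t = omega + alpha * x_{t-1}^2 + beta * s2_{t-1}."""
    omega, alpha, beta = params
    s2 = np.empty_like(x)
    s2[0] = x.var()                  # a common initialization choice
    for t in range(1, len(x)):
        s2[t] = omega + alpha * x[t - 1]**2 + beta * s2[t - 1]
    return -0.5 * np.sum(np.log(s2) + x**2 / s2)

# Simulate a GARCH(1,1) path and compare the quasi-likelihood at the true
# parameters against a badly misspecified (near-constant-volatility) vector
rng = np.random.default_rng(0)
T, omega, alpha, beta = 2000, 0.1, 0.1, 0.8
x = np.empty(T)
s2 = omega / (1 - alpha - beta)      # start from the stationary variance
for t in range(T):
    x[t] = np.sqrt(s2) * rng.normal()
    s2 = omega + alpha * x[t]**2 + beta * s2
ll_true = garch11_gaussian_qll((omega, alpha, beta), x)
ll_bad = garch11_gaussian_qll((1.0, 0.05, 0.1), x)
```

The GMLE maximizes this criterion over (omega, alpha, beta); the LADE replaces the Gaussian kernel with an absolute-deviations criterion on log-squared returns, which is what makes it resilient to heavy-tailed innovations.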

19.
Summary The nonlinear fixed-effects model has two shortcomings, one practical and one methodological. The practical obstacle relates to the difficulty of computing the MLE of the coefficients of non-linear models with possibly thousands of dummy variable coefficients. In fact, in many models of interest to practitioners, computing the MLE of the parameters of a fixed effects model is feasible even in panels with very large numbers of groups. The result, though not new, appears not to be well known. The more difficult, methodological issue is the incidental parameters problem, which raises questions about the statistical properties of the ML estimator. There is relatively little empirical evidence on the behaviour of the MLE in the presence of fixed effects, and what has been obtained has focused almost exclusively on binary choice models. In this paper, we use Monte Carlo methods to examine the small sample bias of the MLE in the tobit, truncated regression and Weibull survival models as well as the binary probit and logit and ordered probit discrete choice models. We find that the estimator in the continuous response models behaves quite differently from the familiar and oft-cited results.
Among our findings are: first, a widely accepted result suggesting that the probit estimator is actually relatively well behaved appears to be incorrect; second, the estimators of the slopes in the tobit model, unlike the probit and logit models studied previously, appear to be largely unaffected by the incidental parameters problem, but a surprising result concerning the disturbance variance estimator arises instead; third, lest one jump to the conclusion that the finite sample bias is restricted to discrete choice models, we submit evidence on the truncated regression, which differs from the tobit in that regard: it appears to be biased towards zero; fourth, we find in the Weibull model that the biases in a vector of coefficients need not all be in the same direction; fifth, apparently unexamined previously, the estimated asymptotic standard errors of the ML estimators appear to be uniformly downward biased when the model contains fixed effects. In sum, the finite sample behaviour of the fixed effects estimator is much more varied than the received literature would suggest.

20.
Summary This paper shows how generalized empirical likelihood can be used to obtain specification tests in semiparametric conditional moment restriction models. The resulting test statistics are similar in spirit to the classical Kolmogorov–Smirnov and Cramér–von Mises goodness-of-fit statistics and are based on an integrated version of the original moment restrictions. The results are applied to test the correct specification of an instrumental variable smooth varying coefficient model and of a censored non-linear quantile regression model. Monte Carlo results suggest that the proposed tests have competitive finite sample properties.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号