4.
An inherent risk of algorithmic personalization is disproportionate targeting of individuals from certain groups (or demographic characteristics, such as gender or race), even when the decision maker does not intend to discriminate based on those “protected” attributes. This unintended discrimination is often caused by underlying correlations in the data between protected attributes and other observed characteristics used by the algorithm to create predictions and target individuals optimally. Because these correlations are hidden in high-dimensional data, removing protected attributes from the database does not solve the discrimination problem; instead, removing those attributes often exacerbates the problem by making it undetectable and, in some cases, even increases the bias generated by the algorithm. We propose BEAT (bias-eliminating adapted trees) to address these issues. This approach allows decision makers to target individuals based on differences in their predicted behavior—hence capturing value from personalization—while ensuring a balanced allocation of resources across individuals, guaranteeing both group and individual fairness. Essentially, the method extracts only the heterogeneity in the data that is unrelated to protected attributes. To do so, we build on the generalized random forest (GRF) framework [S. Athey et al., Ann. Stat. 47, 1148–1178 (2019)] and develop a targeting allocation that is “balanced” with respect to protected attributes. We validate BEAT using simulations and an online experiment with N = 3,146 participants. This approach can be applied to any type of allocation decision that is based on prediction algorithms, such as medical treatments, hiring decisions, product recommendations, or dynamic pricing.

In the era of algorithmic personalization, resources are often allocated based on individual-level predictive models. For example, financial institutions allocate loans based on individuals’ expected risk of default, advertisers display ads based on users’ likelihood to respond to the ad, hospitals allocate organs to patients based on their chances of survival, and marketers allocate price discounts based on customers’ propensity to respond to such promotions. The rationale behind these practices is to leverage differences across individuals, such that a desired outcome can be optimized via personalized or targeted interventions. For example, a financial institution would reduce its risk of default by approving loans for the individuals with the lowest risk of defaulting, advertisers would increase profits by targeting ads to the users most likely to respond to those ads, and so forth.

There are, however, individual differences that firms may not want to leverage for personalization, as they might lead to disproportionate allocation to a specific group. These individual differences may include gender, race, sexual orientation, or other protected attributes. In fact, several countries have instituted laws against discrimination based on protected attributes in certain domains (e.g., voting rights, employment, education, and housing). Discrimination in other domains is lawful but is often still perceived as unfair or unacceptable (1). For example, it is widely accepted that ride-sharing companies set higher prices during peak hours, but these companies were criticized when their prices were found to be systematically higher in non-White neighborhoods than in White areas (2).

Intuitively, a potentially attractive solution to this broad concern of discrimination based on protected attributes may be to remove the protected attributes from the data and to generate a personalized allocation policy based on predictions from models trained using only the unprotected attributes. However, such an approach does not solve the problem: other variables remaining in the dataset may be related to the protected attributes and will therefore still generate bias. Interestingly, as we show in our empirical section, there are cases in which removing protected attributes from the data can actually increase the degree of discrimination on those attributes (i.e., a firm that chooses to exclude protected attributes from its database might create a greater imbalance). This finding is particularly relevant today because companies are increasingly announcing plans to stop using protected attributes for fear of engaging in discriminatory practices. In our empirical section, we show the conditions under which this finding applies in practice.

Personalized allocation algorithms typically use data as input to a two-stage model. First, the data are used to predict outcomes based on the observed variables (the “inference” stage). Then, these predictions are used to create an optimal targeting policy with a particular objective function in mind (the “allocation” stage). The (typically unintended) biases in the resulting policies can occur because the protected attributes are often correlated with the predicted outcomes. Thus, using either the protected attributes themselves or variables that are correlated with them in the inference stage may generate a biased allocation policy.*
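To make the two-stage structure concrete, the sketch below implements a minimal version of such a pipeline. The model class, variable names, and fixed-budget targeting rule are illustrative assumptions, not any particular system described in the paper; the point is that any correlation between the inputs and protected attributes flows directly into the resulting allocation.

```python
# Minimal two-stage personalization pipeline (illustrative assumptions
# throughout: model class, variable names, and a fixed targeting budget).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def two_stage_policy(X_train, y_train, X_new, budget=0.2):
    # Inference stage: predict the outcome of interest from observed inputs.
    model = RandomForestRegressor(n_estimators=500, random_state=0)
    model.fit(X_train, y_train)
    scores = model.predict(X_new)
    # Allocation stage: target the `budget` fraction with the highest scores.
    cutoff = np.quantile(scores, 1.0 - budget)
    return scores >= cutoff  # True = targeted; no fairness check anywhere
```

Nothing in this pipeline inspects protected attributes, which is precisely why removing them from `X_train` does not prevent a biased policy.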
This biased personalization problem could, in principle, be solved using constrained optimization, focusing on the allocation stage of the algorithm (e.g., refs. 3 and 4). Under this approach, a constraint is added to the optimization problem such that individuals who are allocated to receive treatment (the “targeted” group) are not systematically different in their protected attributes from those who do not receive treatment. Although methods for constrained optimization often work well in low dimensions, they are sensitive to the curse of dimensionality (e.g., when there are multiple protected attributes).

Another option would be to focus on the data fed to the algorithm and “debias” them before use: that is, transform the unprotected variables so that they become independent of the protected attributes, and use the resulting data in the two-stage model (e.g., refs. 5 and 6). While doing so guarantees pairwise independence of each variable from the protected attributes, it is difficult to account for underlying dependencies between the protected attributes and interactions among the different variables (6). Most importantly, while these methods are generally effective at achieving group fairness (statistical parity), they often harm individual fairness (7–9). Finally, debiasing methods require the decision maker to collect protected attributes at all times, both when estimating the optimal policy and when applying that policy to new individuals. A more desirable approach would be to create a mapping between unprotected attributes and policy allocations that not only is fair (both at the group level and at the individual level) but can also be applied without the need to collect protected attributes for new individuals.

In this paper, we depart from those approaches and instead address the potential bias at the inference stage (rather than pre- or postprocessing the data or adding constraints to the allocation). Our focus is to infer an object of interest—“conditional balanced targetability” (CBT)—that measures adjusted treatment effect predictions, conditional on a set of unprotected variables. Essentially, we create a mapping from unprotected attributes to a continuous targetability score that leads to a balanced allocation of resources with respect to the protected attributes. Previous papers that modified the inference stage (e.g., refs. 10–14) are limited in their applicability because they typically require additional assumptions and restrictions and apply only to certain types of classifiers. The benefits of our approach are noteworthy. First, allocating resources based on CBT scores achieves, by design, both group and individual fairness. Second, we leverage computationally efficient methods for inference that are easy to implement in practice and have desirable scalability properties. Third, out-of-sample predictions of CBT do not require protected attributes as input. In other words, firms or institutions seeking allocation decisions that do not discriminate on protected attributes need to collect the protected attributes only when calibrating the model. Once the model is estimated, future allocation decisions can be based on (out-of-sample) predictions, which require only the unprotected attributes of the new individuals.
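A simple way to see what “balanced” means operationally is to audit a candidate policy: compare the distribution of protected attributes among targeted and untargeted individuals. The sketch below is a minimal, illustrative check (the names and the use of raw mean differences rather than a formal statistical test are assumptions), not a method from the paper.

```python
# Minimal audit of group balance for a targeting policy: per-attribute
# difference in means between targeted and untargeted individuals.
# Values near 0 for every protected attribute indicate group balance.
import numpy as np

def balance_gap(targeted, X_protected):
    t = np.asarray(targeted, dtype=bool)
    return X_protected[t].mean(axis=0) - X_protected[~t].mean(axis=0)
```

Applied to the output of the two-stage sketch above, a large gap on any protected attribute would flag exactly the unintended bias described in the text.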
We propose a practical solution in which the decision maker can leverage the value of personalization without the risk of disproportionately targeting individuals based on protected attributes. The solution, which we name BEAT (bias-eliminating adapted trees), generates individual-level predictions that are independent of any preselected protected attributes. Our approach builds on generalized random forests (GRFs) (15, 16), which are designed to efficiently estimate heterogeneous outcomes. Our method preserves most of the core elements of GRF, including the use of forests as a type of adaptive nearest-neighbor estimator and the use of gradient-based approximations to specify the tree-split point. Importantly, we depart from GRF in how we select the optimal split for partitioning. Rather than using divergence between child nodes as the primary objective of any partition, the BEAT algorithm combines two objectives—heterogeneity in the outcome of interest and homogeneity in the protected attributes—when choosing the optimal split (a schematic sketch of such a criterion appears at the end of this introduction). Essentially, the BEAT method identifies only those individual differences in the outcome of interest (e.g., heterogeneity in response to price) that are homogeneously distributed across the protected attributes (e.g., race). As a result, not only will the protected attributes be equally distributed across policy allocations (group fairness), but the method will also ensure that individuals with the same unprotected attributes receive the same allocation (individual fairness).

Using a variety of simulated scenarios, we show that our method exhibits promising empirical performance. Specifically, BEAT reduces the unintended bias while leveraging the value of personalized targeting. Further, BEAT allows the decision maker to quantify the trade-off between performance and discrimination. We also examine the conditions under which the intuitive approach of removing protected attributes from the data alleviates, or instead increases, the bias. Finally, we apply our solution to a marketing context in which a firm decides which customers to target with a discount coupon. Using an online sample of n = 3,146 participants, we find strong evidence of relationships between “protected” and “unprotected” attributes in real data. Moreover, applying personalized targeting to these data leads to significant bias against a protected group (in our case, older populations) due to these underlying correlations. We demonstrate that BEAT mitigates this bias, generating a balanced targeting policy that does not discriminate against individuals based on protected attributes.

Our contribution fits broadly into the vast literature on fairness and algorithmic bias (e.g., refs. 2 and 17–22). Most of this literature has focused on uncovering biases and their causes, as well as on conceptualizing the algorithmic bias problem and potential solutions for researchers and practitioners. We complement this literature by providing a practical solution that prevents algorithmic bias caused by underlying correlations. Our work also builds on the growing literature on treatment personalization (e.g., refs. 23–28). That literature has mainly focused on estimating heterogeneous treatment effects and designing targeting rules accordingly, but it has largely ignored fairness or discrimination considerations in the allocation of treatment.
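As referenced above, the following is a schematic sketch of the kind of split criterion the BEAT description implies: reward between-child heterogeneity in the outcome while penalizing between-child imbalance in protected attributes. The penalty weight `lam` and the specific statistics are illustrative assumptions, not the paper's exact estimator, which follows GRF's gradient-based machinery.

```python
# Schematic BEAT-style split score (illustrative, not the paper's estimator):
# prefer splits that separate outcomes but keep protected attributes balanced.
import numpy as np

def split_score(y_left, y_right, p_left, p_right, lam=1.0):
    # Heterogeneity objective: how different are mean outcomes across children?
    heterogeneity = (y_left.mean() - y_right.mean()) ** 2
    # Homogeneity objective: how different are mean protected attributes?
    imbalance = np.sum((p_left.mean(axis=0) - p_right.mean(axis=0)) ** 2)
    # `lam` trades off targeting performance against balance, echoing the
    # performance-discrimination trade-off the text says BEAT can quantify.
    return heterogeneity - lam * imbalance  # higher = better split
```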
7.
Objectives: The aim of this study was to assess the association of the -1082 IL-10 gene polymorphism with chronic periodontitis (CP) in a Peruvian population. Study Design: Samples of venous blood and DNA were obtained from 106 Peruvian subjects: a) 53 periodontally healthy; and b) 53 with CP. The association of the -1082 IL-10 promoter sequences was assessed by polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP). Student’s t test was used to assess the clinical parameters, and the χ2 test and odds ratios (OR) with 95% confidence intervals (CI) were used for estimates regarding genotype and allele frequencies. Results: There were statistically significant differences between groups in mean bleeding on probing, mean attachment level, and mean probing depth (p < 0.00001), indicating that the matching between the evaluated groups was adequate. The χ2 test found a statistically significant imbalance of genotypes between groups (p = 0.0172). The prevalence of CP was significantly higher in subjects harboring at least one A allele at position -1082 (AA and GA genotypes) than in subjects with the GG genotype (OR = 2.96; CI: 0.52–5.41; p = 0.0099). Likewise, the AA genotype was significantly associated with a diagnosis of CP (OR = 2.71; CI: 0.38–5.04; p = 0.0231). Conversely, subjects with a healthy periodontal status were more likely to carry at least one G allele than the AA genotype (OR = 0.37; CI: 0.05–0.69; p = 0.0231), and the same association was observed for the GG genotype (OR = 0.34; CI: 0.06–0.62; p = 0.0099). There were no significant differences between groups among subjects with the GA genotype (OR = 1.19; CI: 0.22–2.16; p = 0.6774). Conclusions: Within the limits of this study, the IL-10 gene polymorphism at position -1082 does not appear to be associated with CP overall; conversely, subjects with the AA genotype seem to be at an increased risk of developing CP. Key words (according to MeSH documentation): chronic periodontitis, cytokines, genetic polymorphism, interleukin-10, periodontal disease.
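For readers who want to reproduce this kind of analysis, the sketch below computes an odds ratio with a Wald 95% confidence interval from a 2×2 genotype-by-diagnosis table. The counts in the example call are hypothetical placeholders, not the study's data.

```python
# Odds ratio with Wald 95% CI from a 2x2 table (hypothetical counts; the
# study's actual genotype counts are not reproduced here).
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b = cases/controls carrying the risk allele;
    c, d = cases/controls without it."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    log_or = math.log(or_)
    return or_, math.exp(log_or - z * se), math.exp(log_or + z * se)

print(odds_ratio_ci(35, 18, 20, 33))  # e.g., A-allele carriers vs. GG
```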