Full-text access type
Paid full text | 155 articles |
Free | 11 articles |
Free in China | 4 articles |
Subject classification
Pediatrics | 4 articles |
Obstetrics and gynecology | 3 articles |
Basic medicine | 10 articles |
Stomatology | 2 articles |
Clinical medicine | 27 articles |
Internal medicine | 18 articles |
Dermatology | 1 article |
Neurology | 24 articles |
Special medicine | 2 articles |
Surgery | 14 articles |
General | 21 articles |
Preventive medicine | 33 articles |
Ophthalmology | 1 article |
Pharmacy | 6 articles |
(unlabeled) | 1 article |
Traditional Chinese medicine | 2 articles |
Oncology | 1 article |
Publication year
2024 | 1 article |
2023 | 4 articles |
2022 | 10 articles |
2021 | 12 articles |
2020 | 9 articles |
2019 | 6 articles |
2018 | 4 articles |
2017 | 5 articles |
2016 | 7 articles |
2015 | 7 articles |
2014 | 13 articles |
2013 | 13 articles |
2012 | 14 articles |
2011 | 12 articles |
2010 | 5 articles |
2009 | 5 articles |
2008 | 6 articles |
2007 | 6 articles |
2006 | 7 articles |
2005 | 7 articles |
2003 | 2 articles |
2002 | 1 article |
2001 | 1 article |
2000 | 2 articles |
1999 | 1 article |
1998 | 2 articles |
1997 | 1 article |
1995 | 2 articles |
1994 | 1 article |
1993 | 1 article |
1992 | 1 article |
1991 | 1 article |
1978 | 1 article |
Sorted by: 170 results in total, search time 15 ms
51.
Traditional Chinese medicine (TCM) culture is the soil in which TCM theory and practice arose and developed. This paper analyzes the difficulties and opportunities in passing on and disseminating the core values of TCM culture in the contemporary historical environment and discourse. From a reflective, stocktaking perspective, it applies the method of "reverse geyi" (reverse analogical interpretation) to explain traditional theories and techniques in modern language, innovates the model of transmission and dissemination, and explores new ways of passing TCM culture on, so that its core values can be carried forward.
52.
53.
Accelerating the growth of non-government (privately run) healthcare is a requirement of the new round of healthcare reform and an important goal and measure of the reform's 12th Five-Year Plan. This paper analyzes the development difficulties facing non-government medical institutions, studies their possible development paths, and on that basis offers recommendations for their healthy development.
54.
Jeremy Holmes. Attachment & Human Development, 2013, 15(2): 181-190
The aim of this paper is to explore the links between the attachment-theory-derived concept of disorganized attachment and the psychiatric diagnosis of Borderline Personality Disorder (BPD). Disorganized attachment can be understood in terms of an approach-avoidance dilemma for infants for whom stressed or traumatized/traumatizing caregivers are simultaneously a source of threat and a secure base. Interpersonal relationships in BPD, including those with caregivers, are similarly seen in terms of approach-avoidance dilemmas, which manifest themselves in disturbed transference/countertransference interactions between therapists and BPD sufferers. Possible ways of handling these phenomena are suggested, based on Main's notion of 'meta-cognitive monitoring', in the hope of reinstating meaning and more stable self-structures in these patients' lives.
55.
56.
Christian Hilbe, Bin Wu, Arne Traulsen, Martin A. Nowak. Proceedings of the National Academy of Sciences of the United States of America, 2014, 111(46): 16425-16430
Direct reciprocity and conditional cooperation are important mechanisms to prevent free riding in social dilemmas. In large groups, however, these mechanisms may become ineffective because they require single individuals to have a substantial influence on their peers. Yet the recent discovery of zero-determinant strategies in the iterated prisoner’s dilemma suggests that we may have underestimated the degree of control that a single player can exert. Here, we develop a theory of zero-determinant strategies for iterated multiplayer social dilemmas with any number of involved players. We distinguish several particularly interesting subclasses of strategies: fair strategies ensure that the own payoff matches the average payoff of the group; extortionate strategies allow a player to perform above average; and generous strategies let a player perform below average. We use this theory to describe strategies that sustain cooperation, including generalized variants of Tit-for-Tat and Win-Stay Lose-Shift. Moreover, we explore two models that show how individuals can further enhance their strategic options by coordinating their play with others. Our results highlight the importance of individual control and coordination to succeed in large groups.

Cooperation among self-interested individuals is generally difficult to achieve (1–3), and the free rider problem is typically aggravated even further when groups become large (4–9). In small communities, cooperation can often be stabilized by forms of direct and indirect reciprocity (10–17). For large groups, however, it has been suggested that these mechanisms may turn out to be ineffective, as it becomes more difficult to keep track of the reputation of others and because the individual influence on others diminishes (4–8). To prevent the tragedy of the commons and to compensate for the lack of individual control, many successful communities have thus established central institutions that enforce mutual cooperation (18–22).

However, a recent discovery suggests that we may have underestimated the amount of control that single players can exert in repeated games. For the repeated prisoner’s dilemma, Press and Dyson (23) have shown the existence of zero-determinant strategies (or ZD strategies), which allow a player to unilaterally enforce a linear relationship between the own payoff and the coplayer’s payoff, irrespective of the coplayer’s actual strategy. The class of zero-determinant strategies is surprisingly rich: for example, a player who wants to ensure that the own payoff will always match the coplayer’s payoff can do so by applying a fair ZD strategy, like Tit-for-Tat. On the other hand, a player who wants to outperform the respective opponent can do so by slightly tweaking the Tit-for-Tat strategy to the own advantage, thereby giving rise to extortionate ZD strategies. The discovery of such strategies has prompted several theoretical studies exploring how different ZD strategies evolve under various evolutionary conditions (24–30).

ZD strategies are not confined to the repeated prisoner’s dilemma. Recently published studies have shown that ZD strategies also exist in other repeated two-player games (29) and in repeated public goods games (31). Herein, we show that such strategies exist for all symmetric social dilemmas with an arbitrary number of participants. We use this theory to describe which ZD strategies can be used to enforce fair outcomes or to prevent free riders from taking over.
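For orientation, the linear payoff relation referred to above can be written out explicitly. The following is a sketch in generic ZD notation; the symbols are shorthand introduced here, not the paper's own notation.

```latex
% Sketch of the relation a multiplayer ZD strategy enforces (generic
% notation assumed here, not quoted from the paper). Player i's ZD
% strategy unilaterally fixes a linear relation between the own payoff
% \pi_i and the average payoff of the other n-1 group members:
\[
  \alpha \, \pi_i \;+\; \beta \, \frac{1}{n-1} \sum_{j \neq i} \pi_j \;+\; \gamma \;=\; 0 .
\]
% The subclasses named in the abstract correspond to special choices of
% the coefficients: fair strategies enforce
% \pi_i = \frac{1}{n-1} \sum_{j \neq i} \pi_j (own payoff equals the
% group average), extortionate strategies keep \pi_i above that
% average, and generous strategies let \pi_i fall below it.
```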
Our results, however, are not restricted to the space of ZD strategies. By extending the techniques introduced by Press and Dyson (23) and Akin (27), we also derive exact conditions under which generalized versions of Grim, Tit-for-Tat, and Win-Stay Lose-Shift allow for stable cooperation. In this way, we find that most of the theoretical solutions for the repeated prisoner’s dilemma can be directly transferred to repeated dilemmas with an arbitrary number of involved players.

In addition, we propose two models to explore how individuals can further enhance their strategic options by coordinating their play with others. To this end, we extend the notion of ZD strategies for single players to subgroups of players (to which we refer as ZD alliances). We analyze two models of ZD alliances, depending on the degree of coordination between the players. When players form a strategy alliance, they only agree on the set of alliance members and on a common strategy that each alliance member independently applies during the repeated game. When players form a synchronized alliance, on the other hand, they agree to act as a single entity, with all alliance members playing the same action in a given round. We show that the strategic power of ZD alliances depends on the size of the alliance, the applied strategy of the allies, and the properties of the underlying social dilemma. Surprisingly, the degree of coordination only plays a role as alliances become large (in which case a synchronized alliance has more strategic options than a strategy alliance).

To obtain these results, we consider a repeated social dilemma between n players. In each round of the game, players can decide whether to cooperate (C) or to defect (D). A player’s payoff depends on the player’s own decision and on the decisions of all other group members (Fig. 1A): in a group in which j of the other group members cooperate, a cooperator receives the payoff a_j, whereas a defector obtains b_j. We assume that payoffs satisfy the following three properties that are characteristic of social dilemmas (corresponding to the individual-centered interpretation of altruism in ref. 32): (i) irrespective of the own strategy, players prefer the other group members to cooperate (a_{j+1} ≥ a_j and b_{j+1} ≥ b_j for all j); (ii) within any mixed group, defectors obtain strictly higher payoffs than cooperators (b_{j+1} > a_j for all j); and (iii) mutual cooperation is favored over mutual defection (a_{n-1} > b_0). To illustrate our results, we discuss two particular examples of multiplayer games (Fig. 1B). In the first example, the public goods game (33), cooperators contribute an amount c > 0 to a common pool, knowing that total contributions are multiplied by r (with 1 < r < n) and evenly shared among all group members. Thus, a cooperator’s payoff is a_j = rc(j + 1)/n − c, whereas defectors yield b_j = rcj/n. In the second example, the volunteer’s dilemma (34), at least one group member has to volunteer to bear a cost c > 0 in order for all group members to derive a benefit b > c. Therefore, cooperators obtain a_j = b − c (irrespective of j), whereas defectors yield b_j = b if j ≥ 1 and b_0 = 0. Both examples (and many more, such as the collective risk dilemma) (7, 8, 35) are simple instances of multiplayer social dilemmas.

[Fig. 1. Illustration of the model assumptions for repeated social dilemmas. (A) We consider symmetric n-player social dilemmas in which each player can either cooperate or defect. The player’s payoff depends on its own decision and on the number of other group members who decide to cooperate. (B) We discuss two particular examples: the public goods game (in which payoffs are proportional to the number of cooperators) and the volunteer’s dilemma (as the most simple example of a nonlinear social dilemma). (C) In addition to individual strategies, we also explore how subjects can enhance their strategic options by coordinating their play with other group members. We refer to the members of such a ZD alliance as allies, and we call group members that are not part of the ZD alliance outsiders. Outsiders are not restricted to any particular strategy. Some or all of the outsiders may even form their own alliance.]

We assume that the social dilemma is repeated, such that individuals can react to their coplayers’ past actions (for simplicity, we focus here on the case of an infinitely repeated game). As usual, payoffs for the repeated game are defined as the average payoff that players obtain over all rounds. In general, strategies for such repeated games can become arbitrarily complex, as subjects may condition their behavior on past events and on the round number in nontrivial ways. Nevertheless, as in pairwise games, ZD strategies turn out to be surprisingly simple.
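Since the two example games are fully specified by the formulas above, they are easy to instantiate. Below is a minimal Python sketch, written for this summary rather than taken from the paper, that builds the payoff vectors a_j and b_j and checks the three social-dilemma conditions (i)-(iii).

```python
# Minimal sketch (not code from the paper): instantiate the two example
# games above and check the three social-dilemma conditions (i)-(iii)
# on the payoff vectors a_j (cooperator) and b_j (defector), where
# j = 0..n-1 counts the cooperators among the other group members.

def public_goods(n, r, c):
    # Contributions c are multiplied by r and shared among all n players.
    a = [r * c * (j + 1) / n - c for j in range(n)]  # a_j = rc(j+1)/n - c
    b = [r * c * j / n for j in range(n)]            # b_j = rcj/n
    return a, b

def volunteers_dilemma(n, benefit, c):
    # Everyone gets `benefit` if at least one group member pays cost c.
    a = [benefit - c] * n                               # a_j = b - c
    b = [benefit if j >= 1 else 0.0 for j in range(n)]  # b_j
    return a, b

def is_social_dilemma(a, b):
    n = len(a)
    cond1 = all(a[j + 1] >= a[j] and b[j + 1] >= b[j] for j in range(n - 1))
    cond2 = all(b[j + 1] > a[j] for j in range(n - 1))
    cond3 = a[n - 1] > b[0]
    return cond1 and cond2 and cond3

print(is_social_dilemma(*public_goods(n=5, r=3, c=1)))             # True
print(is_social_dilemma(*volunteers_dilemma(n=5, benefit=2, c=1)))  # True
```

Both checks hold for any parameters in the stated ranges (1 < r < n for the public goods game, b > c for the volunteer's dilemma).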
57.
Anastasia Kozyreva, Stefan M. Herzog, Stephan Lewandowsky, Ralph Hertwig, Philipp Lorenz-Spreen, Mark Leiser, Jason Reifler. Proceedings of the National Academy of Sciences of the United States of America, 2023, 120(7)
In online content moderation, two key values may come into conflict: protecting freedom of expression and preventing harm. Robust rules based in part on how citizens think about these moral dilemmas are necessary to deal with this conflict in a principled way, yet little is known about people’s judgments and preferences around content moderation. We examined such moral dilemmas in a conjoint survey experiment where US respondents (N = 2,564) indicated whether they would remove problematic social media posts on election denial, antivaccination, Holocaust denial, and climate change denial and whether they would take punitive action against the accounts. Respondents were shown key information about the user and their post as well as the consequences of the misinformation. The majority preferred quashing harmful misinformation over protecting free speech. Respondents were more reluctant to suspend accounts than to remove posts and more likely to do either if the harmful consequences of the misinformation were severe or if sharing it was a repeated offense. Features related to the account itself (the person behind the account, their partisanship, and number of followers) had little to no effect on respondents’ decisions. Content moderation of harmful misinformation was a partisan issue: Across all four scenarios, Republicans were consistently less willing than Democrats or independents to remove posts or penalize the accounts that posted them. Our results can inform the design of transparent rules for content moderation of harmful misinformation.
We have a right to speak freely. We also have a right to life. When malicious disinformation—claims that are known to be both false and dangerous—can spread without restraint, these two values collide head-on.—George Monbiot (1).
[W]e make a lot of decisions that affect people’s ability to speak. [...] Frankly, I don’t think we should be making so many important decisions about speech on our own either.—Mark Zuckerberg (2).

Every day, human moderators and automated tools make countless decisions about what social media posts can be shown to users and what gets taken down, as well as how to discipline offending accounts. The ability to make these content moderation decisions at scale, thereby controlling online speech, is unprecedented in human history. Legal requirements make some content removal decisions easy for platforms (e.g., selling illegal drugs or promoting terrorism). But what about when content is not explicitly illegal but rather “legal but harmful” or “lawful but awful”? Harmful misinformation—inaccurate claims that can cause harm—falls into this category. False and misleading information is considered harmful when it undermines people’s ability to make informed choices and when it leads to adverse consequences such as threats to public health or the legitimacy of an election (3).

The scale and urgency of the problems around content moderation became particularly apparent when Donald Trump and political allies spread false information attacking the legitimacy of the 2020 presidential election, culminating in a violent attack on the US Capitol. Subsequently, most major social media platforms suspended Trump’s accounts (4–6). After a sustained period of prioritizing free speech and avoiding the role of “arbiters of truth” (2, 7), social media platforms appear to be rethinking their approach to governing online speech (8). In 2020, Meta overturned its policy of allowing Holocaust denial and removed some white supremacist groups from Facebook (9); Twitter implemented a similar policy soon after (10). During the COVID-19 pandemic, most global social media platforms took an unusually interventionist approach to false information and vowed to remove or limit COVID-19 misinformation and conspiracies (11–14)—an approach which might undergo another shift soon (see ref. 15). In October 2021, Google announced a policy forbidding advertising content on its platforms that “mak[es] claims that are demonstrably false and could significantly undermine participation or trust in an electoral or democratic process” or that “contradict[s] authoritative, scientific consensus on climate change” (16). And most recently, Pinterest introduced a new policy against false or misleading climate change information across both content and ads (17). (An overview of major platforms’ moderation policies related to misinformation is provided in SI Appendix, Table S9.)

At the core of these decisions is a moral dilemma: Should freedom of expression be upheld even at the expense of allowing dangerous misinformation to spread, or should misinformation be removed or penalized, thereby limiting free speech? When choosing between action (e.g., removing a post) and inaction (e.g., allowing a post to remain online), decision-makers face a choice between two values (e.g., public health vs. freedom of expression) that, while not in themselves mutually exclusive, cannot be honored simultaneously. These cases are moral dilemmas: “situations where an agent morally ought to adopt each of two alternatives but cannot adopt both” (18, p. 5).

Although moral dilemmas have long been used in empirical studies of ethics and moral decision-making, moral dilemmas in online content moderation are relatively new.
Yet insights into public preferences are necessary to inform the design of consistent content moderation policies and grant legitimacy to policy decisions. Here, we begin to bridge this gap by studying public preferences around content moderation and investigating what attributes of content moderation dilemmas impact people’s decisions the most.

Resolving content moderation dilemmas is difficult. Mitigating harms from misinformation by removing content and deplatforming accounts (especially at scale) might challenge the fundamental human right to “receive and impart information and ideas through any media and regardless of frontiers” (19, art. 19). Moreover, there are good reasons why existing legal systems protect even false speech (20). People with the power to regulate speech based on its accuracy may succumb to the temptation to suppress opposition voices (e.g., authoritarian rulers often censor dissent by determining what is “true”). Censoring falsehoods might also prevent people from freely sharing their opinions, thereby deterring (e.g., due to fear of punishment) even legally protected speech (21). Indeed, a core tenet of the marketplace of ideas is that it can appropriately discard false and inaccurate claims: “The best test of truth is the power of an idea to get itself accepted in the competition of the market” (22).

Do digital and social media, where harmful misinformation can quickly proliferate and where information flow is algorithmically moderated, belie this confidence in the marketplace of ideas? As Sunstein (20) argued, “far from being the best test of truth, the marketplace ensures that many people accept falsehoods” (p. 49). For instance, when a guest on Joe Rogan’s popular podcast shared discredited claims about COVID-19 vaccines, he spread potentially fatal misinformation to millions of listeners (23). Here, two important points must be distinguished: First, while some types of misinformation may be relatively benign, others are harmful to people and the planet. For example, relative to factual information, in the United Kingdom and the United States, exposure to misinformation can reduce people’s intention to get vaccinated against COVID-19 by more than 6 percentage points (24). This fact may justify invoking Mill’s principle of harm (25, 26), which can warrant limiting freedom of expression in order to prevent direct and imminent harm to others. Second, sharing one’s private opinions, however unfounded, with a friend is substantially different from deliberately sharing potentially harmful falsehoods with virtually unlimited audiences. One may therefore argue that freedom of speech does not entail “freedom of reach” (27) and that the right to express one’s opinions is subject to limitations when the speech in question is amplified online.

Freedom of expression is an important right, and restrictions on false speech in liberal democracies are few and far between. State censorship is a trademark of authoritarianism: The Chinese government’s censorship of Internet content is a case in point (28), as is the introduction of “fake news” laws during the pandemic as a way for authoritarian states to justify repressive policies that stifle the opposition and further infringe on freedom of the press (29–31) (for an overview of misinformation actions worldwide, see ref. 32).
Furthermore, in March 2022, the Russian parliament approved jail terms of up to 15 years for sharing “fake” (i.e., contradicting the official government position) information about the war against Ukraine, which led many foreign and local journalists and news organizations to limit coverage of the invasion or withdraw from the country entirely.

Unlike in authoritarian or autocratic countries, in liberal democracies, online platforms themselves are the primary regulators of online speech. This responsibility raises the problem of rule-making powers being concentrated in the hands of a few unelected individuals at profit-driven companies. Furthermore, platforms increasingly rely on automated content moderation; for instance, the majority of hate speech on Facebook is removed by machine-learning algorithms (33). Algorithmic content moderation at scale (34) poses additional challenges to an already complicated issue, including the inevitable occurrence of false positives (when acceptable content is removed) and false negatives (when posts violate platform policies but escape deletion). Algorithms operate on the basis of explicit and implicit rules (e.g., should they remove false information about climate change or only about COVID-19?). Content moderation—either purely algorithmic or with humans in the loop—inevitably requires a systemic balancing of individual speech rights against other societal interests and values (8).

Scenarios involving moral dilemmas (e.g., the trolley problem) are used widely in moral psychology to assess people’s moral intuitions and reasoning (35), and experiments featuring moral dilemmas are an established approach to studying people’s moral intuitions around algorithmic decision-making (36, 37) and computational ethics (38). Classical dilemmas include scenarios involving choices between two obligations arising from the same moral requirement or from two different moral requirements. Most studies focus on dilemmas of the sacrificial type: presenting a choice within one moral requirement (e.g., saving lives) with asymmetrical outcomes (e.g., to save five lives by sacrificing one; see refs. 39 and 40). Content moderation decisions, however, represent a different, and largely unstudied, problem: dilemmas between two different values or moral requirements (e.g., protecting freedom of expression vs. mitigating potential threats to public health) that are incommensurate and whose adverse outcomes are difficult to measure or quantify.

We constructed four types of hypothetical scenarios arising from four contemporary topics that are hotbeds of misinformation: politics (election denial scenario), health (antivaccination scenario), history (Holocaust denial scenario), and the environment (climate change denial scenario). In designing these scenarios, we relied on the current content moderation policies of major social media platforms and selected topics where active policies on misinformation have already been implemented (SI Appendix, Table S9).

We used a single-profile conjoint survey experiment to explore what factors influence people’s willingness to remove false and misleading content on social media and to penalize accounts that spread it. A conjoint design is particularly suitable for such a multilevel problem, where a variety of factors can impact decision-making (41, 42).
The factors we focused on are characteristics of the account (the person behind it, their partisanship, and the number of followers they have), characteristics of the shared content (the misinformation topic and whether the misinformation was completely false or only misleading), whether this was a repeated offense (i.e., a proxy for intent), and the consequences of sharing the misinformation. All these factors were represented as attributes with distinct levels (Fig. 1). This design yielded 1,728 possible unique cases.

[Fig. 1. Complete listing of all attribute levels in SI Appendix, Table S2.]

In the conjoint task, each respondent (N = 2,564) faced four random variations of each of the four scenario types (see Fig. 1 for an example), thus deciding on 16 cases altogether (40,845 evaluations in total, after missing responses were removed). Each scenario type represented a different misinformation topic (election denial for politics, antivaccination for health, Holocaust denial for history, and climate change denial for environment), with consequences adjusted for each topic. For each case, respondents were asked to make two choices: whether to remove the posts mentioned in the scenario and whether to suspend the account that posted them. We recruited 2,564 US respondents via the Ipsos sample provider between October 18 and December 3, 2021. The sample was quota-matched to the US general population. The full experimental design and sample information are described in the Materials and Methods section.
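To make the size of that design space concrete, the following Python sketch enumerates a hypothetical attribute grid of the kind described above. The specific levels are illustrative assumptions, not the study's actual levels (those are listed in SI Appendix, Table S2); they are chosen only so that the number of combinations matches the reported 1,728 unique cases.

```python
# Hypothetical conjoint design grid. The attribute levels below are
# illustrative assumptions, NOT the study's actual levels (see
# SI Appendix, Table S2); they are chosen so the combination count
# matches the reported 1,728 unique cases.
from itertools import product
import random

attributes = {
    "topic": ["election denial", "antivaccination",
              "Holocaust denial", "climate change denial"],     # 4
    "account_person": ["private individual", "politician",
                       "celebrity", "organization"],            # 4 (assumed)
    "partisanship": ["Democrat", "Republican", "independent"],  # 3 (assumed)
    "followers": ["few", "thousands", "millions"],              # 3 (assumed)
    "veracity": ["misleading", "completely false"],             # 2
    "offense": ["first-time", "repeated"],                      # 2
    "consequences": ["minor", "moderate", "severe"],            # 3 (assumed)
}

cases = list(product(*attributes.values()))
print(len(cases))  # 4 * 4 * 3 * 3 * 2 * 2 * 3 = 1728

# Each respondent judged 16 randomly drawn cases (4 per scenario type);
# drawing a single random profile:
profile = dict(zip(attributes, random.choice(cases)))
print(profile)
```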
58.
Xin Wang. Journal of Interprofessional Care, 2020, 34(3): 259-268
Health professions students will invariably confront professionalism dilemmas. These early encounters significantly influence future professional attitudes and behaviours. Heretofore, studies concerning professionalism dilemmas experienced by health professions students across disciplines have been limited. To address this issue, we recruited 56 students with clinical experience from the National Taiwan University College of Medicine in the nursing, dentistry, pharmacy, medical technology, occupational therapy, and physiotherapy programs to compare health professions students’ understandings of professionalism and their experiences of professionalism dilemmas. We used group interviews to uncover students’ experiences of professionalism dilemmas. We identified the six most commonly reported professionalism dilemmas and found that interprofessional dilemmas were the dominant workplace professionalism dilemma for health professions students. We also identified significant disciplinary differences regarding dilemma types and frequencies. We employed the framework of dual identity development to better understand the role of professional and interprofessional identities in interprofessional dilemmas. The professionalism dilemmas that individual students encountered were shaped by disciplinary differences. Our findings suggest that developing in health professions students a sense of belonging both to their own profession and to a broader interprofessional care team can increase the effectiveness of interprofessional healthcare teams.
59.
Aashild Sletteboe. Journal of Advanced Nursing, 1997, 26(3): 449-454
In nursing practice we find different kinds of difficult situations. What is the difference between such kinds of situations? It is important to know what kind of situation one is confronting because the answer and solution depend on it. In the literature the term ‘dilemma’ has different meanings. I am therefore interested in what constitutes a dilemma, and have conducted a concept analysis. The defining attributes were engagement, equally unattractive alternatives, awareness of alternatives, need for a choice and uncertainty of action.
60.
Yumiko Katsuhara. Japan Journal of Nursing Science, 2005, 2(1): 57-65
Aim: The purpose of the present study is to clarify the moral requirements that cause ethical dilemmas among nurse executives. Ethical dilemmas are defined as situations where moral requirements conflict, and neither requirement is overridden. Methods and Results: Twenty-five nurse executives were asked to describe the situations where they had faced their most difficult ethical decisions. A total of 41 stories were told. These included disclosure of medical errors (eight cases), performance evaluations (six cases), and discomfort regarding physicians’ behaviour (five cases), as well as other situations. There were 48 ethical dilemmas in 41 cases, and each of these dilemmas represented conflicts among more than two of the 17 kinds of moral requirements. Conclusion: These moral requirements are: protection of one's own pride, doing one's civic duty to society, acceptance of gender roles, treating others in a caring and benevolent fashion, protection of patients’ rights, assurance of nursing quality, protection of nurses’ pride, protection of the lives of patients, organizational profit motives, protection of workers’ rights among the staff, representation of the interests of the nursing division, smooth collaboration with physicians, execution of organizational rules, maintenance of Japanese cultural norms, observation of legal standards, respect for community needs, and obedience to political imperatives.