Footnote: Showcasing research with the power to change our world

Why Entrepreneurs Shouldn’t Chase Media Buzz
Thu, 17 Aug 2017 01:24:20 +0000
Entrepreneurs need to balance building the company and building the brand.

The post Why Entrepreneurs Shouldn’t Chase Media Buzz appeared first on Footnote.

In the fall of 2014, in the midst of controversy about Facebook’s real-name policy and selling of user data, a new social media platform called Ello caught fire. Ello vowed to forever be free of advertising, and its company manifesto boldly concluded with a promise to would-be users that “You are not a product.” The timing couldn’t have been more perfect. The media dubbed Ello the “Anti-Facebook” and, at its peak, the social network was getting membership requests from more than 30,000 new users per hour.

This enviable opportunity quickly turned sour, however. Recently launched by a few designers and developers in Vermont, Ello was not equipped to handle such a high level of traffic, resulting in a bad experience for some users. Furthermore, the site was still bare bones and many users who came expecting similar features to Facebook were disappointed. Ultimately, Ello’s success didn’t last long. Though the site lives on as a network for artists and creators, most of the users who came hoping for an alternative to Facebook quickly left.

Ello’s story shows what can happen when a startup achieves media success that outpaces its progress in other areas. While some founders suffer from a naïve “If you build it, they will come” attitude, many others swing to the opposite extreme. Tempted by the allure of media exposure, they seek it out before they’re ready. Case in point: Elizabeth Holmes at Theranos, who chased TED talks, New Yorker profiles, and Fortune covers before her company’s core technology even worked (and then lied about it when the exaggerations caught up with her).

While Theranos may be an extreme example, most founders would give an arm for similar (and similarly fawning) media coverage. They know publicity can be an important early signal of a business’s progress, helping attract the customers, partners, employees, and investors the company needs to take off.

Research backs this up: One study of technology startups found that more coverage in industry media early in a company’s development was associated with receiving greater levels of venture capital funding later.1 We studied 60 venture-capital-backed companies and found that those that eventually achieved successful outcomes for investors tended to attract more media coverage along the way. Successful companies had more articles and headlines written about them, were covered by more publications, and put out more press releases than failing ventures.2

While the research showing a connection between media coverage and startup success might send entrepreneurs scrambling to pour time and money into their communications strategy, the lesson is not that simple. Communication is a critical part of building a business, but media attention must be driven by real growth and traction in core areas. That sounds obvious, but it’s something too many founders overlook, both in their urgency to attract publicity too soon and in their anxiety for that coverage to be positive. When founders drive media coverage too early, they may not be able to deliver on their promises to customers. Moreover, fretting about positive publicity may be wasted energy, according to our research.

In our study, a higher percentage of the media coverage garnered by successful companies was negative – 4.5% compared to 2.6% for failed companies.2 This doesn’t mean that “all press is good press,” but rather that negative press is a sign your company is successful enough to attract newsworthy, skeptical coverage that goes beyond PR puff pieces. For example, Uber’s recent troubles around sexism and treatment of drivers wouldn’t be worth covering if it weren’t the leader in its industry.

One of the first things new entrepreneurs do is craft a clear story about their company, why it was founded, and what its goals are. This lays the groundwork for attracting and motivating a team, developing company strategy, and pitching to customers and investors. It’s also the first step in building your media approach, but it’s important not to get distracted by storytelling if you haven’t worked out important operational, logistical, or business model issues.

The best time to seek publicity is when your company demonstrates traction or hits a key milestone, such as acquiring a new client or launching a product. Focus on reaching the company’s goals, and then promote these successes once you achieve them. When you do get media attention, share it widely and make sure key stakeholders (investors, partners, etc.) see it. Don’t stress too much about negative press coverage, since it shows that a company is newsworthy enough that its failures deserve attention.

Another important tip for early-stage startups: don’t waste money on a public relations firm or staffer. The ones who can really help you are too expensive, and those offering free or reduced-price PR should be avoided. If, however, you have access to the communications office at your university, accelerator program, or other institution, this can be a great way to get free publicity. But ultimately nobody is better at telling your story in the early days than you are.

While every company needs a media strategy to create awareness and demonstrate traction, it’s only one piece of the puzzle. Media activity should complement and keep pace with the rest of a company’s growth, or it risks creating an image that reality can’t live up to. Successful companies tell their story effectively, but they also have a substantive story to tell.

This article was produced by Footnote and was originally published in the Harvard Business Review.


The Benefits & Challenges of Making Qualitative Research More Transparent
Thu, 10 Aug 2017 20:03:20 +0000
In the quest to make research more open, sharing qualitative data presents challenges and opportunities.

The post The Benefits & Challenges of Making Qualitative Research More Transparent appeared first on Footnote.

In recent years, a movement to make research more transparent has taken root in the social sciences. Public institutions and private organizations of all types champion transparency, and publishers and funders increasingly require data sharing.1 Innovative platforms and technical tools empower scholars to provide a more complete picture of their research. New academic norms are setting clearer expectations about what data and information scholars should provide so that others can understand and evaluate their work.

While scholars are growing increasingly committed to openness, they also encounter obstacles to acting on that responsibility. Implementing greater transparency requires addressing a host of issues, from technical challenges to ethical questions to professional concerns. Along with other participants in the transparency conversation, members of qualitative research communities are forming new consensuses on these questions. Their ideas are informed by the kinds of data and analysis that their respective research communities typically use.

In this piece we consider some of the challenges and benefits of sharing qualitative data and making qualitative research more transparent, discuss some general principles that can guide efforts to do so, and consider the evolving infrastructure supporting increased transparency.2

The Challenges and Benefits of Transparency

Sharing data is a key component of research transparency. We can think of the texts, images, audio, and video collected or produced in association with a particular qualitative research project – through archival research, interviews, field observations, and other types of data gathering – as the qualitative analog of a quantitative dataset. However, sharing qualitative data in a way that increases research transparency can be at least as complicated and time-consuming as doing the same for numeric data.3

Consider, for instance, a research project on the duration of civil wars that entails interviewing dozens of rebel fighters about sensitive issues such as how their personal histories led to their commitment to continue the struggle. If the interviews are audio-recorded, sharing them will involve transcribing and potentially translating the recordings. If rebel fighters participated in the study under the condition of anonymity, the interview transcripts will need to be de-identified, possibly leading to the removal of important insights.
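The de-identification step alone can be substantial work. A minimal sketch of the automated part is below; the names, pseudonym scheme, and helper function are illustrative, not drawn from any actual study:

```python
import re

def deidentify(transcript: str, name_map: dict[str, str]) -> str:
    """Replace each real name with a stable pseudonym.

    name_map pairs real names with codes such as "Respondent-03" so the
    same person is referred to consistently across all transcripts.
    """
    for real, pseudonym in name_map.items():
        # \b keeps "Ana" from matching inside a longer word; IGNORECASE
        # catches inconsistent capitalization in hurried transcription.
        transcript = re.sub(rf"\b{re.escape(real)}\b", pseudonym,
                            transcript, flags=re.IGNORECASE)
    return transcript

raw = "Interviewer: Why did Ana stay with the movement? Ana: My brother..."
clean = deidentify(raw, {"Ana": "Respondent-03"})
```

Automated substitution only handles direct identifiers; indirect ones – a distinctive rank, a named village – still require the manual review and judgment calls described above.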

If the scholar shares the interviews to provide evidence for her research findings, she will need to make the transcripts available to readers of her papers, ideally in a format that directly links relevant interview excerpts to the claims with which they are associated. Furthermore, the researcher will need to write documentation illuminating the research context and other facets of the research in order to aid others in effectively interpreting the data,4 while taking care not to provide information that will allow her respondents to be identified.

Despite the challenges and resources involved in making qualitative research more transparent, both individual researchers and qualitative research communities as a whole can benefit from the value that openness provides. Sharing data, along with information about how they were generated and analyzed in order to support the claims in a research publication, enhances the completeness, understandability, and evaluability of that publication. It allows scholars to earn credibility and legitimacy by demonstrating that they generated their results in accordance with the rules that guide their research methods and the norms of their research community.

In addition, other scholars can learn from shared data and (re)analyze them from different perspectives and for new purposes. To draw once again on our example, interviewing dozens of rebel fighters entails considerable effort. Sharing the information those interviews produce such that multiple scholars can review and analyze it allows more value to be gained from that hard-won and costly information. Finally, the teaching of research methods is enhanced when students can practice using analytic techniques on authentic research datasets.

Developing Protocols and Building Consensus for Transparency

As part of a broader movement towards transparency, scholars who produce qualitative research in political science and other fields are developing protocols and tools to make their work more open and building consensus on how to apply and use them.

In 2010, the American Political Science Association (APSA) launched the Data Access and Research Transparency (DA-RT) initiative to advance the conversation about research openness in the field. DA-RT involved discipline-wide discussions, publications, conference roundtables and panels, and the formation of various ad-hoc working groups.5 From the outset, it was understood that making research more transparent should be a universal goal, equally applicable to both quantitative and qualitative research, but that the way in which this goal is accomplished should not be homogeneous or homogenizing.

One of the initiative’s key successes was a 2012 update to the discipline’s Guide to Professional Ethics in Political Science incorporating the following principles: (1) empirical researchers have an ethical responsibility to help one another understand their work by making their data and methods available; (2) critical ethical and legal imperatives, including the need to protect human subjects, should and will limit data sharing; and (3) researchers should retain the right to use their data for some period of time before sharing them.

An important step in translating these principles into action was the development of the Journal Editors Transparency Statement (JETS), authored by editors and subsequently adopted by more than two dozen journals, including prominent publications like the American Political Science Review and the American Journal of Political Science. JETS signatories pledged to introduce transparency standards for their journals requiring that both qualitative and quantitative data, and information about how they were generated and analyzed, be made available online. Those standards also called for the proper crediting of creators of datasets, in hopes of encouraging the sharing and reuse of valuable data.

While DA-RT applies to all types of empirical social science research, other initiatives in political science have focused specifically on making qualitative work more open. One example is the Qualitative Transparency Deliberations (QTD) sponsored by APSA’s Organized Section for Qualitative and Multi-Method Research (QMMR).6 QTD offers a forum for political scientists to consider the upsides, downsides, and practicalities of making qualitative research more transparent. Its development was animated, in part, by unease among some political scientists about the pace at which transparency discussions were proceeding in the discipline and their implications for certain types of qualitative research.

Since QTD’s launch in 2016, 13 collaborative working groups have considered core questions such as what transparency entails when working with data gathered from human participants and how empirical observations can be linked to findings produced using qualitative analytic methods like process tracing. As of this writing, the working groups are developing Community Transparency Statements that seek to outline best practices for making qualitative research more transparent.

The Importance of Infrastructure

Creating protocols and building consensus for qualitative research transparency are important first steps. However, in order for more scholars to be able to increase the transparency of their work, research communities also need access to appropriate infrastructure that addresses the challenges and concerns involved in sharing qualitative research. Digital data repositories that can serve as platforms for storing, preserving, and sharing qualitative data are one important element of that infrastructure.

Sharing data through a trusted digital repository such as the Inter-university Consortium for Political and Social Research (ICPSR), the Odum Institute Data Archive, the Qualitative Data Repository (QDR), or services such as Dataverse, Figshare, or Open Science Framework, has several advantages over simply posting them on an individual website. Most repositories ask depositors to provide valuable metadata (“data about data”) along with their datasets. Metadata facilitate the discovery of datasets and associated research, as well as the analysis of data by new users, broadening the potential audience for and use of a scholar’s work.
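As a concrete illustration, a deposit’s metadata can be as simple as a structured record saved alongside the data files. The field names below are a generic sketch, not the actual schema of ICPSR, QDR, or Dataverse:

```python
import json

# Illustrative metadata record for a qualitative dataset; the study and
# all field names are invented for the example.
metadata = {
    "title": "Interviews with former combatants, 2015-2016",
    "creator": "Jane Scholar",
    "description": "De-identified transcripts of 42 semi-structured interviews.",
    "methodology": "Snowball sampling; conducted in Spanish, then translated.",
    "access": "restricted",          # viewable only under approved conditions
    "date_collected": "2015-06/2016-08",
    "keywords": ["civil war", "demobilization", "qualitative"],
}

with open("study_metadata.json", "w") as f:
    json.dump(metadata, f, indent=2)
```

Even a record this small is what makes a dataset findable by search tools and interpretable by a stranger; repositories typically collect it through structured deposit forms rather than raw JSON.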

Some repositories also provide curation assistance that makes the sharing process easier and increases the value of the data. They can also help ensure that data are shared ethically and legally, in accordance with the commitments scholars make to Institutional Review Boards and human participants and with attention to relevant intellectual property concerns. If depositors so desire, repositories can help them to place appropriate access controls on data so they are only viewable by a subset of users or under certain conditions. In addition, datasets housed in a repository are assigned a permanent digital object identifier (DOI), making them easily citable and guaranteeing that links to them will persist.

While many repositories primarily host quantitative data, some are designed specifically to address the unique needs and interests of qualitative researchers. QDR, a National Science Foundation-funded domain repository that launched in 2014 at Syracuse University, curates, stores, preserves, publishes, and makes available digital data from qualitative and multi-method research in the social sciences. Researchers can deposit their data with QDR and access its holdings free of charge, and staff are available to provide support and training. The repository currently has around 900 registered users and holds data from nearly two dozen projects.

In addition to housing data, QDR develops and disseminates standards and techniques for sharing and reusing qualitative data and for pursuing qualitative research transparency more broadly. For instance, in partnership with Cambridge University Press and a technology firm, and with support from the Robert Wood Johnson Foundation, QDR has begun to develop a new approach to transparency called Annotation for Transparent Inquiry (ATI). ATI allows social scientists to link relevant data – a document or interview transcript, for instance – directly to a particular passage within a digital publication, and to use digital annotations to elucidate how those data support their claims and conclusions.

A More Open Future

Over the last decade, there has been increased interest in research transparency across the social sciences. This ongoing conversation has served to illuminate the benefits that openness offers to qualitative social science, as well as the thorny issues that can arise when making some forms of qualitative research more transparent.

Scholars continue to raise critical questions about how and why to share data and how doing so positively impacts their ability to produce credible and legitimate knowledge. While some academics have capitalized on these new discussions to reprise old debates about the relative value of different forms of research, the most productive conversations have sought to grapple with the specific challenges of sharing qualitative data and analysis. Those discussions have generated at least three lessons.

First, while motivated by the same underlying benefits, data sharing and research transparency cannot and should not be accomplished in the same way across diverse social science traditions. The purpose of openness is to demonstrate how scholars arrive at their conclusions, and researchers must do so on the terms of their respective traditions, and in a way appropriate to the particular types of data and methods they have employed. When it comes to research transparency, one size does not fit all.

Second, while sharing some forms of qualitative research raises important questions and debates, much qualitative data can be shared relatively safely and easily. Moving expeditiously to increase access to those data has the potential to significantly enrich qualitative inquiry in the social sciences.

Finally, achieving openness requires the continued generation of new practices and better infrastructure. Both are more likely to be produced when scholars work together in a collaborative and inclusive fashion, engaging in sustained communication and exchange to identify strategies and solutions.

This article is part of a series on how scholars are addressing the “reproducibility crisis” by making research more transparent and rigorous. The series was produced by Footnote and Stephanie Wykstra with support from the Laura and John Arnold Foundation. It was published on Footnote and Inside Higher Ed.


Do Our Measures of Academic Success Hurt Science?
Wed, 02 Aug 2017 18:07:24 +0000
Perverse career incentives steer researchers toward publishing more articles – and away from other important goals.

The post Do Our Measures of Academic Success Hurt Science? appeared first on Footnote.

A Ph.D. student wants to submit his research to a journal that requires sharing the raw data for each paper with readers. His supervisors, however, hope to extract more articles from the dataset before making it public. The researcher is forced to postpone the publication of his findings, withholding potentially valuable knowledge from peers and clinicians and keeping useful data from other researchers. 

Many scholars can share similar stories of how career incentives clash with academia’s mission to increase knowledge and further scientific progress. The professional advancement system at universities seems to be caught in a bibliometric trap where scientific success is predominantly defined in terms of numbers – the number of grant dollars and publications, the ranking of journals, the quantity of citations – rather than impact.

As researchers make smart career decisions within this bibliometric “publish or perish” system, their choices unwittingly hamper the quality and impact of research as a whole. Science becomes “the art of the soluble” as researchers and funders alike avoid complex, real-world problems and focus instead on small-scale projects leading to incremental science (and more publications).1

If we want to address the reproducibility crisis and other current concerns about the reliability and value of academic research, we have to change the incentive structures within academia that reward certain types of research over others. We must incentivize activities that promote reproducible, high-quality, high-impact research.

The Bibliometric Trap

The use of numerical indicators to evaluate academic success is actually a fairly recent development. Only in the past thirty years has scientific quality become defined primarily by the number of peer-reviewed publications, the journal impact factor (whose uptake started in the 1990s), international measures like the Shanghai Ranking of universities (first published in 2003), and personal citation scores like the h-index (introduced in 2005).
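To make one of these metrics concrete: a scholar’s h-index is the largest number h such that h of their papers have at least h citations each. A short sketch (the citation counts are invented):

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:   # this paper still "supports" an index of `rank`
            h = rank
        else:
            break
    return h

# Two very different careers can earn the same score:
h_index([10, 8, 5, 4, 3])    # steady output of well-cited work -> 4
h_index([100, 4, 4, 4, 0])   # one landmark paper, little else -> 4
```

That both profiles collapse to the same number illustrates how much a single indicator discards about what any individual paper contributed.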

Numbers offer a convenient, seemingly objective way to evaluate success and compare outcomes. They fulfill a need for control and accountability. However, the impact factor of a journal doesn’t necessarily reveal anything about the quality of an article or what it contributes to the broader quest for scientific truth. Long publication lists are meaningless when many papers are never cited or read, least of all by practitioners outside academia. More importantly, recent replication efforts show that many studies cannot be reproduced, suggesting that prestigious publication is no guarantee of the validity of findings.2

The bibliometric approach to evaluating research leads to risk avoidance and a focus on short-term outcomes. Researchers slice their results into the smallest publishable units, dripping out findings over time so they can accrue more publications. They run and rerun analyses until they uncover a statistically significant finding – a process known as p-hacking – regardless of their original research question. They focus their research on a quest for tantalizing new discoveries, avoiding the important but less glamorous work of validating the findings of other scientists or sharing failures (i.e. negative findings) that others can learn from.
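The damage from p-hacking is easy to quantify: if a researcher runs k independent analyses on data containing no true effect, the chance that at least one clears the conventional p < 0.05 bar is 1 - 0.95^k, about 64% for k = 20. A small illustrative simulation (not modeled on any particular study):

```python
import random
from statistics import NormalDist

random.seed(1)
norm = NormalDist()

def one_null_test(n=30):
    """Two-sided p-value for the mean of null (mean-0, sd-1) data."""
    sample = [random.gauss(0, 1) for _ in range(n)]
    z = (sum(sample) / n) * n ** 0.5   # z = mean / (sigma / sqrt(n)), sigma = 1
    return 2 * (1 - norm.cdf(abs(z)))

k, trials = 20, 2000
# Fraction of "researchers" who find at least one significant result
# despite there being nothing to find.
hits = sum(
    any(one_null_test() < 0.05 for _ in range(k))
    for _ in range(trials)
)
rate = hits / trials
print(rate)   # lands near the analytic 1 - 0.95**20, roughly 0.64
```

In other words, a researcher who quietly tries twenty specifications is more likely than not to be able to report a “discovery” from pure noise.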

By and large, these researchers don’t intend to slow scientific progress. They’re simply responding to a career advancement system that rewards certain types of work over others. Yet these incentive structures often prevent science from making good on its promise of societal impact. For instance, the bibliometric framework disincentivizes many of the kinds of studies and replications needed to move along drug discovery, a slow process in which promising compounds are tested and retested numerous times.3

The bibliometric approach also devalues teaching. Since an academic career is typically built on quantifiable research output, many researchers “buy themselves out” of teaching responsibilities or spend less time and effort improving their teaching craft. This deprives the next generation of scientists of the preparation they need to make transformational discoveries.

The Influence of Institutions

In our bibliometric world, researchers are often caught between what they should do and what they are rewarded for – and it is institutions that set these rules. For instance, the main national research funder in the Netherlands, the Netherlands Organisation for Scientific Research (NWO), recently made 3 million euros available for replication studies. The organization also requires that all research papers resulting from its funding be published in open access journals.

However, when researchers apply for a prestigious individual grant from NWO, the impact factor of the journals they have been published in plays a major role in their evaluation. This evaluation system is at odds with the organization’s promotion of replication studies and open access journals, which tend to have lower impact factors.

A similar disconnect exists between journal guidelines and the policies of research institutions. Many journals now require researchers to make the data underlying their published results publicly available, a policy that promotes transparency and reproducibility. Universities and research institutes, however, rarely reward or facilitate these open data efforts. If data transparency is not backed by formal rewards and technical support, researchers may perceive open data as a sideshow on the fringe of “real” scientific work, rather than a key part of the scientific process.

Reconsidering How We Evaluate Research

As the academic community has struggled to come to terms with the “reproducibility crisis,” a vigorous international debate has arisen about the role of career incentives in affecting research quality. The San Francisco Declaration on Research Assessment (DORA), which calls for better methods for evaluating research output, was drafted by journals, funders, and researchers in 2012 and has since been signed by more than 800 institutions and 12,000 individuals. It recommends that individual researchers not be judged by bibliometric measures, but instead by “a broad range of impact measures including qualitative indicators of research impact, such as influence on policy and practice.”

Another milestone occurred in 2015 when an international collection of experts published the Leiden Manifesto for Research Metrics. The manifesto declares that “the abuse of research metrics has become too widespread to ignore,” and outlines ten principles to guide better research evaluation. These best practices include evaluating performance in relation to a scholar or institution’s mission and judging individual researchers based on a holistic, qualitative evaluation of their portfolio of work.4

In the Netherlands, our organization Science in Transition has been advocating for a new way of evaluating research since 2013. Our efforts prompted heated debate, leading Dutch research organizations to drop “quantity” (of publications and funding) as a distinct category in the nationwide protocol for evaluating universities. In addition, the Dutch association of universities signed on to DORA at Science in Transition’s second conference.

Restructuring the Tenure System

These efforts to reimagine how we define research success are an important first step, but it will be up to universities and research institutions to make the concrete policy changes necessary to truly transform the system. At our institution, University Medical Center Utrecht in the Netherlands, we are working to put into practice the ideas and critiques formulated by Science in Transition and other groups.

We are actively countering the undesirable incentives in our system for evaluating researchers. All applicants for academic promotions at UMC Utrecht now submit a portfolio covering multiple domains: science, teaching, clinical activities, leadership, impact, and innovation. Candidates are required to present themselves in an inclusive way, with narratives about the impact and goals of their research.

The new evaluation system provides the review committee with a broad view of someone’s work and the opportunity to promote or hire a scholar who may not have the perfect publication profile, but excels in areas that are harder to quantify in bibliometric terms. Our hope is that this system will incentivize researchers to focus on work that advances science, regardless of how it looks when the numbers are tallied up.

We have also changed how we evaluate the university’s research programs. Such institutional evaluations are an important feature of the Dutch academic system, yet are often dominated by bibliometric measures. We’ve shifted to evaluating programs based on their wider clinical and societal impact. Research programs are asked to explain how they arrived at their main research questions, how their research fits with existing knowledge, and how their findings can advance clinical applications. We also ask both international peers and societal stakeholders to evaluate their research.

Research programs must document how patient organizations, companies, government agencies, and other stakeholders benefit from their work and are involved in structuring their research questions. In addition, programs are asked to show how their methods and data systems promote high-quality, reproducible research. Researchers should have data management plans in place and make datasets available for external use. UMC Utrecht supports these efforts by providing a template for data management plans and dedicated servers for storing and sharing data.

Our efforts at UMC Utrecht are grounded in the belief that the full power of biomedical research should be geared toward fulfilling its ultimate mission: improving healthcare. We try to align incentives for our researchers and our institution with this mission, so that citation counts and impact factors don’t get in the way of our goal. Our hope is that others in the field will realize that scientific papers are not a goal in and of themselves, but are stepping stones on the road to impact.

This article is part of a series on how scholars are addressing the “reproducibility crisis” by making research more transparent and rigorous. The series was produced by Footnote and Stephanie Wykstra with support from the Laura and John Arnold Foundation. It was published on Footnote and Inside Higher Ed.


Can Better Training Help Fix the Reproducibility Crisis?
Wed, 26 Jul 2017 18:56:15 +0000
Giving researchers the data skills they need to share, review, and validate each other’s work.

The post Can Better Training Help Fix the Reproducibility Crisis? appeared first on Footnote.

It’s a common story. A bright young graduate student starts their research program with high ambitions. Six months later they’re staring at hundreds of genomes, thousands of pages of digital text, or hundreds of thousands of environmental measurements and wondering how to even begin to analyze them.

Their academic training hasn’t prepared them for the day-to-day challenge of organizing, managing, and analyzing large datasets – they’ve hit their data pain point. Even researchers working with relatively small datasets (for example, hundreds of survey responses) face challenges scaling up their fields’ traditional data management and analysis techniques for today’s highly technical, data-rich research landscape. In a recent survey of 704 principal investigators for National Science Foundation biology grants, the majority said their most important unmet data needs were not software or infrastructure, but training in data integration and data management.1

This lack of data skills is holding back progress toward more reproducible research by making it harder for researchers to share, review, and reanalyze one another’s data. In a recent survey by Nature, the top four solutions scientists identified for improving reproducibility related to better understanding of statistics and research design and improved mentoring, supervision, and teaching of researchers.2 Data skills need to be an integral part of academic training in order to ensure that research is reliable, transparent, and reproducible.

My organization, Data Carpentry, and our sister organization, Software Carpentry, are among the groups filling this gap by training researchers in the latest technologies and techniques for cleaning, organizing, cataloging, analyzing, and managing their data. We see this training as an important part of a larger project to transform academic culture to make research more reproducible and transparent.

Drowning in Data

New tools for gathering, storing, and sharing information have made an unprecedented amount of data available to researchers. For example, sequencing a full human genome now costs less than $1,000 and data repositories house massive amounts of genetic data for use by researchers and clinicians.3 The integration of technology into our day-to-day lives also produces a massive amount of data: A widely publicized study on the emotional tone of people’s Facebook activity involved nearly 700,000 subjects and millions of online posts.

Most researchers will need to interact with large datasets at some point in their careers. When they do, many realize they’re unprepared for the challenge. Being unfamiliar with computational tools and workflows, they may find themselves carrying out repetitive and error-prone tasks by hand. If they write in-house scripts for cleaning or analyzing their data, they may fail to document their code in a way that allows it to be checked and used by other researchers. If using code written by others, they may not properly test its utility for their dataset. They may fail to document the parameters they select and the software version they use, information that is important for other researchers seeking to replicate their results. Combined, these and other issues impose an enormous cost on both researchers’ productivity and the reliability and reproducibility of their results.
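A few lines of boilerplate can prevent several of the documentation failures described above. As an illustrative sketch (the parameter names and output file here are hypothetical, not drawn from any particular study), a script can record its software environment and analysis parameters alongside its results, so other researchers can later see exactly how a result was produced:

```python
import json
import platform
import sys

# Hypothetical analysis parameters -- in practice these would be the
# thresholds, seeds, and options actually used in the study.
params = {"alpha": 0.05, "random_seed": 42, "normalization": "z-score"}

# Capture the software environment so another researcher can replicate
# the run under the same conditions.
provenance = {
    "python_version": sys.version.split()[0],
    "platform": platform.platform(),
    "parameters": params,
}

# Saving this file next to the results makes the analysis self-documenting.
with open("analysis_provenance.json", "w") as f:
    json.dump(provenance, f, indent=2)
```

Writing this record automatically, rather than by hand, means the documented parameters can never drift out of sync with the ones actually used.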

Our current approach to training academics doesn’t provide dedicated space for learning how to organize, clean, store, and otherwise manage data, because our model developed before this type of training was needed. Datasets were either small enough to be analyzed using simple computational tools or were handed off to data specialists. This is no longer the case, and researchers who aren’t prepared to handle data are forced either to teach themselves data skills piecemeal or limit themselves to questions that can be answered with smaller datasets and computationally simpler approaches. Without proper training, they may practice poor data hygiene and produce results that other researchers can’t understand or replicate.

Preparing a New Generation of Researchers

Ideally, training in how to organize, clean, store, and analyze data in reproducible and computationally sound ways would be an ongoing part of a researcher’s education starting early in their academic career. However, two major barriers have kept this ideal from becoming a reality: The need for data skills training isn’t widely recognized by those responsible for setting undergraduate and graduate curricula and there is a shortage of instructors with the expertise to teach data skills.

Overcoming these barriers requires changing the culture at universities and research institutions. We need a large body of early career researchers with the skills to be competent and confident in their data management and analysis and the passion to act as advocates for the importance of data skills, reproducible research, and data transparency at their institutions. Data Carpentry and Software Carpentry are training this new generation of data champions and, along the way, building an army of devoted instructors who can train others.

Data Carpentry creates and delivers hands-on, interactive workshops providing fundamental data skills to researchers around the world. Our goal is to empower researchers to manage and analyze their data in reproducible ways and to make their data and analyses available for others to review and reuse. Together with our sister organization, Software Carpentry, we’ve reached more than 6,000 learners in the past year in over 25 countries. We’ve also trained more than 800 volunteer instructors in evidence-based teaching practices and active learning strategies. These instructors often adapt our teaching practices and curricula (openly available under a Creative Commons license) for other contexts, spreading our impact further.

By focusing on helping data novices develop basic familiarity with a core toolkit and cultivate strategies for future self-directed education, we hope to establish the foundation for lifelong learning. This is essential because data management and analysis tools are continuously evolving, meaning that new techniques will need to be learned over the course of a researcher’s career. In addition, lifelong learners are more likely to become advocates for educating others. 

Transforming Academic Culture to Value Data Sharing & Reproducibility

Our goal is to create an academic world where our organization is no longer necessary because researchers receive training in best practices for research and data management throughout their careers. Our strategy is to transform academic culture from the ground up by taking advantage of what we know about how cultural change works.4 Some of these strategies may be useful for others trying to create a lasting shift in academia toward reproducibility and transparency.

We know that it’s hard to change people’s attitudes, but it’s necessary in order to truly change their practices in a lasting way. Many researchers come to us because they’ve hit a point in their workflow where they can’t move forward without certain data skills. In addition to giving these ready-made allies the skills they need, we attempt to change their attitudes and turn them into advocates for data practices that promote reproducibility and transparency. By targeting researchers early in their careers, we ensure that our impact grows over time as they pass along these principles in their labs and classrooms.

We also know that people aren’t content to implement ready-made solutions, but want to modify strategies to match their own needs and contexts. Our curricula are collaboratively developed and extensively tested by our community, but we encourage our instructors to modify them to suit the students. Our lessons are also tailored to specific academic domains in order to reduce cognitive load and enable learners to directly apply the principles and techniques they learn to their own data. We want students to be able to walk out of our workshops and immediately use what they’ve learned, so they can see the lasting value of data skills for their field.

Catalyzing Long-Term Change

By turning early career researchers with specific skill needs into long-term advocates for data practices that support transparency and reproducibility, Data Carpentry hopes to catalyze change in research and instructional culture worldwide. Our workshops are in high demand, usually filling up within days of opening, and people are lining up to volunteer to teach with us. We’re expanding into new regions, including Central and South America and Africa, and into new academic domains in the social sciences and humanities. We’re also developing ways for local communities of researchers to continue learning after our workshops.

Together we’re working to make that common story of hitting a “data pain point” a less common one. That bright new graduate student might instead be a Carpentry instructor, bringing data literacy to a campus near you. They could also be a powerful advocate for an academic system where every researcher is equipped to organize and share data so that others can reproduce and reuse it, improving the quality and reliability of research.



Should Journals Be Responsible for Reproducibility? Wed, 19 Jul 2017 16:04:19 +0000 One of the top journals in political science makes data-sharing and replication part of the publication process.


Science is an inherently social enterprise. Progress only occurs when new results are communicated to and accepted by the relevant scientific communities. The major lines of communication run through professional journals and the double-blind peer review process. Academic journals are also a main currency of scholarly success, as publication in a top journal can be a make-or-break career moment for a researcher.

Because of their central role in academic communication and career advancement, journals help set the rules of how research is evaluated and rewarded. At the American Journal of Political Science (AJPS), we work closely with our partners at the Odum Institute for Research in Social Science at the University of North Carolina at Chapel Hill and the Qualitative Data Repository at Syracuse University to promote reproducibility and transparency as cornerstones of high-quality research.

While the political science discipline has long paid lip service to the importance of these issues, the AJPS’s Replication & Verification Policy requires scholars to “practice what we preach” by incorporating reproducibility and data-sharing into the academic publication process.¹ Our goal is to establish a standard for the information that must be made available about the research that appears in our journal. By requiring scholars to provide access to their data and conducting our own replications on those data, we confirm the rigor of, and promote public confidence in, the studies we publish. As one of the top journals in the discipline, we hope to create state-of-the-art standards that others in the field will aim to adopt.²

Below are some of our experiences so far and lessons for journals interested in promoting the reproducibility and transparency of the work they publish. 

Opening Up Data

In 2012, the AJPS became the first political science journal (to our knowledge) to require authors to make the empirical data underlying their analyses openly accessible online. By making data sharing a requirement for publication, we can guarantee that replication materials for all AJPS articles will be available to the research community in a single, easily accessible location: the AJPS Dataverse, an online data repository.³

While the creation of the AJPS Dataverse was a critical first step, it became clear that scholars needed more guidance and support to ensure replication files were reliable and consistent. In 2015, we released guidelines on what information authors are required to share about their quantitative research:

  • the dataset analyzed in the paper;
  • detailed, clear code for using this dataset to reproduce all tables, figures, and exhibits in the paper;
  • documentation, including a README file and codebook, that provide contextual details about the dataset and replication materials;
  • information about the source of the dataset; and
  • instructions for extracting the analysis dataset from original source data (e.g. recodes, data transformations, handling missing observations, etc.).

With these materials, any researcher should be able to reproduce the empirical results published in any AJPS article.
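The guidelines above can be made concrete with a small sketch. The following is a minimal illustration of the kind of replication script the requirements describe, not an actual AJPS replication archive: all file names, variables, and values are hypothetical, and the stand-in data is generated inline so the example is self-contained.

```python
import csv
import statistics

# Stand-in for the dataset shared alongside the article (hypothetical
# values, for illustration only -- a real archive would ship this file).
with open("analysis_data.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["state", "turnout"])
    writer.writerows([["A", "61.2"], ["B", "58.4"], ["C", "63.9"]])

# Step 1: load the analysis dataset exactly as shared with the article.
with open("analysis_data.csv") as f:
    rows = list(csv.DictReader(f))

# Step 2: apply the documented transformation (here, a simple type recode
# of the kind a README or codebook would spell out).
for row in rows:
    row["turnout"] = float(row["turnout"])

# Step 3: recompute a summary statistic the paper reports, so a third
# party can compare the output against the published table.
mean_turnout = statistics.mean(r["turnout"] for r in rows)
print(f"Mean turnout: {mean_turnout:.2f}")
```

The point of requiring clear, self-contained code like this is that a verifier can run it top to bottom, with no undocumented manual steps, and check the printed figures against the article.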

In 2016, we extended the guidelines to cover replication materials from qualitative and multi-method research.⁴ Our transparency policy is not “one size fits all,” and it respects the diversity of data formats by providing different requirements for different kinds of qualitative data. Yet the ultimate goal of our guidelines is the same regardless of the type of data used and how they are shared: to clarify how the information underlying an article was obtained and how it was used to generate the findings and conclusions reported in the article.

Ensuring Accuracy

At the AJPS, we have used our influence as a key actor in the academic publishing landscape to make research data more open and accessible. However, we believe that part of our responsibility as a leading scientific journal goes beyond simply requiring scholars to make their data available; it requires us to ensure the accuracy of the results we publish. Therefore, we verify the replication materials that authors provide in order to guarantee that they do, in fact, properly reproduce the results described in the corresponding AJPS article.

We believe that verifying results is a central part of our commitment to publishing the highest quality research in political science. We also maintain that, in the interest of objectivity, it is best to have a third party carry out the verifications. We have partnered with the Odum Institute for Research in Social Science to verify quantitative analyses and the Qualitative Data Repository (QDR) to verify qualitative analyses.

Acceptance of a manuscript for publication in the AJPS is contingent on successful replication of any empirical analyses reported in the article.⁵ After the author submits their final article draft and uploads their replication files to the Dataverse, staff from the Odum Institute or the QDR curate the replication materials to make sure they can be preserved, understood, and used by other researchers. Staff then carry out all of the analyses using the computer code, instructions, and datasets in the replication files, and compare their results to the contents of the article. If problems occur, the author is given an opportunity to resolve the issues and upload corrected materials to be re-checked. This process continues until the replication is carried out successfully. At that point the manuscript receives its final acceptance for publication and the replication files are made publicly available on the AJPS Dataverse.

To publicize our efforts and provide some recognition to authors who comply with our policy, we have adopted two of the “Badges to Acknowledge Open Practices” from the Center for Open Science. These badges are designed to reward research that conforms to best practices for scientific openness and transparency. Any manuscript that has successfully completed the AJPS replication and verification process automatically meets the criteria for the Open Data and Open Materials badges, which we include on the article itself and on the Dataset in the AJPS Dataverse.

The Benefits & Costs of Verification

Since the launch of the AJPS Dataverse in 2012, 268 datasets have been uploaded. We began verifying replication materials in 2015 and 95 manuscripts have successfully been verified so far, all involving quantitative research.⁶ Scholars have been highly receptive to our new standards and procedures, despite the additional time they require. We have received complete cooperation, and often enthusiastic support, from our authors. Requests for exemptions have been based on practical considerations, not philosophical objections to our goals.

There are, however, costs associated with the replication and verification process. It adds a median of 53 days to the publication workflow. The process typically involves one or more rounds of feedback and resubmission of the replication materials, so much of the turnaround time involves authors revising their materials. The median time from submission of the replication materials to the initial verification report from the Odum Institute is 20 days. So far, three-quarters of the studies have completed the entire verification process in three months or less.

The verification process is also relatively labor-intensive. On average, it takes 8 person-hours per manuscript to replicate the analyses and curate the materials for public release. The financial cost of the verification process is not trivial, and is covered by the Midwest Political Science Association, the professional organization that owns the AJPS.

The vast majority of authors have to correct and resubmit their replication materials at least once (the mean number of resubmissions is 1.7). Most of the resubmissions involve incomplete replication materials; requests for additional information, such as more detail in codebooks; or minor inconsistencies between the replication results and the manuscript. In virtually all cases, authors have been able to make the necessary corrections and adjustments relatively easily, without requiring major changes to their manuscripts.⁸

Thus far, our verification process has not revealed many major research flaws, but rather has served as a mechanism for ensuring that all the materials necessary for replication are in place and can be executed successfully by a third party. In our experience, active verification of the replication files is necessary in order to guarantee the validity of the final materials and ensure they are ready for review and reuse by others. Most of the problems our process has identified would not be found if we merely checked for the presence of the required replication files without re-running the analyses. 

Changing the Conversation

As one of the most prestigious general-audience publications in political science and the broader social science community, the AJPS has the opportunity to shift the standards for what constitutes high-quality research and what role journals should play in encouraging reproducibility and transparency. The Replication and Verification Policy is one of the features that sets the AJPS apart from other publications and provides tangible evidence of the high quality of the work we publish. We, quite literally, open up the research process to full scrutiny from the scientific community.

We are hopeful that our Replication and Verification Policy will serve as a model for best practices that other journals can adopt. At the same time, we recognize that adapting a journal’s workflow in this way is an expensive proposition. The AJPS is fortunate to be part of an organization that has the resources to support this policy, but financial realities may make it infeasible for other journals.

Nevertheless, financial barriers do not mean that journals must entirely abandon their support for the same general principles of transparency and reproducibility. The Center for Open Science has a set of Transparency and Openness Promotion (TOP) Guidelines that outline varying levels of replication policies, ranging from simple disclosure about data through data sharing to full verification. The TOP guidelines can help other journals adopt the particular mix of data access and research transparency policies that are most appropriate for their own needs and resources, and can be adjusted over time as journals progress toward greater transparency and more stringent standards.⁹

The ideas motivating the AJPS policy embody a central tenet of the scientific method – that results can be tested and reproduced by others. The policy not only guarantees the quality and transparency of the studies we publish, it also provides an invaluable resource for teaching and further research. We believe that the benefits for the scientific community outweigh any costs to authors, the editor, or the publisher. Replication and verification policies promote the integrity of scientific research and, as such, should be made a routine part of the academic publication process. If more journals were to adopt similar policies, we could make real progress toward solving many of science’s current challenges around reproducibility, transparency, and openness.



Paving the Way to More Reliable Scientific Research Wed, 12 Jul 2017 19:04:50 +0000 How scholars and researchers are working to restore confidence in peer-reviewed science.


How do we know which scientific results to trust? Research published in peer-reviewed academic journals has typically been considered the gold standard, having been subjected to in-depth scrutiny – or so we once thought. In recent years, our faith in peer-reviewed research has been shaken by the revelation that many published findings don’t hold up when scholars try to reproduce them. The question of which science to trust no longer seems straightforward.

Concerns about scientific validity and reproducibility have been on the rise since John Ioannidis, a professor at Stanford School of Medicine, published his 2005 article, “Why Most Published Research Findings are False.” Ioannidis pointed to several sources of bias in research, including the pressure to publish positive findings, small sample sizes, and selective reporting of results.

In the years since, a wave of scholars has dug deeper into these issues across a number of disciplines. Brian Nosek at the Center for Open Science and Elizabeth Iorns of Science Exchange spearheaded attempts to repeat past studies in their respective fields, psychology and cancer biology, with discouraging results.1 Economists encountered trouble merely repeating the analyses reported in papers using the original data and code.

By 2016, when Nature surveyed 1,500 scholars, over half expressed the view that there is a significant “reproducibility crisis” in science. This crisis comes at an uncomfortable time, when some skeptical voices question even well-grounded scientific claims such as the effectiveness of vaccines and humans’ role in climate change.

Given this hostility, there’s a concern that reproducibility issues may undermine public confidence in science or lead to diminished funding for research.2 What is clear is that we need a more nuanced message than “science works” or “science fails.” Scientific progress is real, but it can be hindered by shortcomings that diminish our confidence in some results and need to be addressed.

There has been plenty of coverage about the reproducibility crisis and its implications (including debate over whether to call it a “crisis”) in both scientific publications and mainstream outlets like the New York Times, Atlantic, Slate, and FiveThirtyEight. But somewhat less attention has been paid to the question of how to move forward. To help chip away at this question, we’re publishing a series of articles from researchers leading initiatives to improve how academics are trained, how data are shared and reviewed, and how universities shape incentives for better research. After this essay, the rest of the series will be appearing on the Rethinking Research blog on Inside Higher Ed.

Why the Shaky Foundation?

The reproducibility problem is an epistemological one, in which reasons for doubt undermine the foundations of knowledge. One source of doubt is the lack of visibility into the nuts and bolts of the research process. The metaphor of “front stage” and “back stage” (borrowed from sociologist Erving Goffman, who used it in a different context) may be helpful here.

If the front stage is the paper summarizing the results, the back stage holds the details of the methodology, data and statistical code used to calculate those results. All too often, the back stage is known only to the researchers, and other scholars cannot peer behind the curtain to see how the published findings were produced.

Another big issue is the flexibility scholars have in choosing how to understand and analyze their research. It’s often possible to draw many different conclusions from the same data, and the current system rewards novel, positive results. Combined with a lack of transparency, this flexibility makes it difficult for others to know which results to trust, even if the vast majority of researchers are doing their work in good faith.

As Joseph Simmons, Leif D. Nelson, and Uri Simonsohn write in their article on “researcher degrees of freedom”: “It is common (and accepted practice) for researchers to explore various analytic alternatives, to search for a combination that yields ‘statistical significance,’ and to then report only what ‘worked,’… This exploratory behavior is not the by-product of malicious intent, but rather the result of two factors: (a) ambiguity in how best to make these decisions and (b) the researcher’s desire to find a statistically significant result.”

Given the potential for biased or flawed research, how can we encourage greater transparency and put the right incentives in place to promote reliable, reproducible research? Three big questions we’ll be looking at in this series are: How are researchers trained? What resources and support do they receive? How do institutions respond to and reward their work?

Training the Next Generation of Researchers

Lack of proper training in research methods and data management skills can contribute to reproducibility problems. Graduate students are sometimes left on their own to learn how to manage data and statistical code. As they merge datasets, clean data, and run analyses, they may not know how to do this work in an organized, reproducible fashion. The “back stage” can become extremely messy, making it hard to share their materials with others or even double-check their own findings. As these students advance in their careers, they may not have the time (or the right incentives) to develop these skills.

In the 2016 survey conducted by Nature, researchers identified an improved understanding of statistics and better mentoring and supervision as the two most promising strategies for making research more reproducible.

A number of organizations are tackling this issue by offering workshops for graduate students and early-career researchers in how to conduct reproducible research, manage data and code, and track research workflow. Among them are trainings offered by Software Carpentry and Data Carpentry, the Center for Open Science, and the Berkeley Initiative for Transparency in the Social Sciences (BITSS). There are even courses available online from institutions such as Johns Hopkins.

Resources and Support for Better Science

While proper training is essential, researchers also need resources that support reproducibility and transparency. One critical piece of infrastructure is the data repository: an online platform that makes it easier for scholars to organize research materials and make them publicly available in a sustainable, consistent way.

Repositories like Dataverse, Figshare, ICPSR, and Open Science Framework provide a place for researchers to share data and code, allowing others to evaluate and reproduce their work. There are also repositories tailored to qualitative research, such as the Qualitative Data Repository.

Universities are also enhancing their services and support for reproducible research practices. For example, the Moore-Sloan Data Science Environments initiative offers resources to support data-driven research at three universities, including software tools and training programs. Dozens of universities also have statistical consulting centers that offer advice to students and researchers on research design and statistical analysis. Some disciplinary associations are also convening groups to develop guidelines and standards for reproducible research.

Creating Incentives for Reproducible Research

Researchers often face career and institutional incentives that do little to encourage reproducibility and transparency, and can even work against those goals at times. Academic achievements like winning grants and earning tenure are linked primarily to publishing numerous papers in highly ranked journals. There’s little professional reward for the time-consuming work of sharing data, investigating and replicating the work of others, or even ensuring one’s own research is reproducible.

Institutions are beginning to shift these incentives through policies and funding that encourage reproducible research and transparency, while reducing some of the flexibility that can allow biases to creep in. Funders such as the Arnold Foundation3 and the Netherlands government have set aside money for scientists to conduct replications of important studies. Some have offered incentives for scientists to pre-register their studies, meaning they commit to a hypothesis, methodology, and data analysis plan ahead of data collection.

Increasingly, funding agencies and academic journals are adopting transparency policies that require data-sharing, and many journals have endorsed Transparency and Openness Promotion guidelines that serve as standards for improving research reliability.

In another interesting development, some journals have shifted to a model of “registered reports,” in which an article is accepted based on the research question and method, rather than the results. Recently, Cancer Research UK formed a partnership with the journal Nicotine and Tobacco Research to both fund and publish research based on the “registered reports” approach.

All of these initiatives are important, but the path to academic career advancement also needs to shift to reward research activities other than just publishing in prestigious journals. While change on this front has been slow, a few institutions like the University Medical Center Utrecht in the Netherlands have started to expand the criteria used in their tenure and promotion review process.

From Vision to Reality

The driving vision of these initiatives is a system that trains, supports, and rewards scientists for research that is transparent and reproducible, resulting in reliable scientific results. To learn how this vision is being put into practice, we’ve partnered with contributors on a series of articles about how they are working to improve research reliability in their fields.

None of these solutions is a silver bullet. Improving research reliability will depend on changes across many parts of the academic ecosystem, made by many actors – researchers, funders, universities, journals, media, and the public. Taking the next steps will require openness to new ways of doing things and an effort to discern what actually improves research.

In many ways, we’re still in the early stages of realizing that there’s a problem and taking steps to improve. The good news is that there’s an ever-growing segment of the research community, and of the public, who are aware of the need for change and willing to take steps forward.



How The Human Brain Keeps Time Tue, 20 Jun 2017 20:00:06 +0000 Our internal clock does a remarkably good job at tracking time.


We live in four dimensions. The three dimensions of space that we can see with our eyes and touch with our hands. And the fourth dimension of time, which, since we have no special sense organ to perceive it, is practically invisible to us. Since at least 3,000 B.C.E., humans have searched for ways to ‘see’ time. Using sun clocks, they watched time move over the ground in slow, predictable patterns. Later, golden arrows danced neat one-minute circles on people’s wrists and walls.

This focus on measuring and visualizing time often leads us to forget the wonderful timekeeper we carry within us: our internal clock, which ticks away days and seconds surprisingly well. How does our brain construct this sense of time, if it has no direct way to perceive it?

The Ticking of Our Internal Clock

On July 16, 1962, French geologist Michel Siffre descended into a cave in the French Alps to camp for two months without light or a watch.1 Although his primary motivation was to study the cave’s geology, the fact that he decided to leave his watch at home made him the founder of a new field of science: chronobiology. During his two months in the dark, Siffre conducted simple tests on himself and logged his activity patterns. Analysis of these logs showed that he had kept to a rigid sleep-wake rhythm of 24 hours and 31 minutes, almost exactly the same as the day-night cycle of the earth. Seemingly, something in his body ticked away the days despite the lack of sun.

Less than ten years later, geneticists Ronald Konopka and Seymour Benzer discovered a gene they called period.2 Period’s activity naturally rises and falls over a 24-hour cycle, suggesting that it is an internal measure of day and night. Indeed, Konopka and Benzer found that flies with alterations in this gene had abnormal sleep patterns. In the mammalian brain, period and similar genes are expressed in an area called the suprachiasmatic nucleus, which translates the activity of these genes into a sleep-wake pattern that is communicated to the rest of the brain and body.3

However, as Michel Siffre discovered, our timing is not perfect. The human internal clock cycles 31 minutes longer than it takes the earth to spin. To compensate, feedback from specialized daylight-detection cells in our eyes constantly helps the suprachiasmatic nucleus stay in sync with the outside world. Interestingly, most organs in our body are regulated by their own period cycles.3 Perhaps this is the reason why jet lag causes us to feel a little ‘off’ for a couple of days, even after we have recovered our normal sleep and light exposure patterns.
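The scale of that daily correction is easy to check with back-of-the-envelope arithmetic. In this minimal sketch, the 31-minute figure comes from the article above; the function name and everything else is purely illustrative:

```python
# Free-running internal clock: 24 h 31 min per cycle, versus a 24 h day.
DRIFT_PER_DAY_MIN = 31

def free_running_offset(days: int) -> int:
    """Total phase offset (in minutes) accumulated after `days`
    without daylight cues to reset the clock."""
    return DRIFT_PER_DAY_MIN * days

# Over Siffre's two months underground, an unreset clock would have
# slipped by more than a full day's worth of phase:
offset_min = free_running_offset(60)
print(f"Offset after 60 days: {offset_min} minutes "
      f"({offset_min / 60:.1f} hours)")
```

With daylight available, that same 31 minutes is wiped out each morning, which is why the drift never becomes noticeable in ordinary life.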

Chemicals That Tell Time

According to neuroscientist Dean Buonomano, “There is no problem of time. There are many, many, many problems of time.”4 What he meant is that we experience time in so many different intervals – a split-second movement, the rhythm of a day, the slow crawl of the years – that there can be no single way of representing it in the brain. Since gene transcription is a slow process, the period gene covers the day-night cycle well. However, it cannot represent the fast, variable timescales that we need for activities like sports or music. These shorter intervals are mostly handled by the basal ganglia.

The basal ganglia are a group of neural structures in the forebrain that determine behavior. Emotions, motivations, memories, and physical information all come together and are used by the basal ganglia to choose the most beneficial action. When the basal ganglia do not work properly, people suffer from impulse control disorders such as Tourette syndrome,5 action selection disorders like Parkinson’s disease,6 or attention and motivational disorders such as ADHD and addiction.5

To select the right behaviors, the basal ganglia need to learn which actions lead to good outcomes, a feat that requires mentally connecting actions with events happening later in time. How exactly they do this is still unclear, but we do know that it involves a ‘starting shot’ and a ‘stopwatch’. The starting shot is created by a select group of neurons in the basal ganglia that release dopamine. Every event or action that predicts a reward – for example, opening a bag of chips or flirting with a handsome stranger – leads to an activity burst in the dopamine neurons. These bursts of dopamine subsequently unleash a whole cascade of other neural activity, which functions a bit like a stopwatch: patterns early in the cascade signal short intervals, while later patterns signal longer intervals.

Since the cascade of neural activity follows a predictable course, the basal ganglia can use it to measure the time between actions and outcomes and, therefore, to associate certain rewards with certain actions. Scientists have been able to identify traces of this ‘stopwatch’ in the basal ganglia in several studies.7,8,9 For example, a recent study found that neurons in an area called the striatum consistently became active at their own specific part of the interval. Together, the activity patterns of these cells spanned the whole interval, which could be up to 1 minute long – a century for neurons whose activity lasts only 1 millisecond!9
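The ‘stopwatch’ described above can be illustrated with a toy model. This is a caricature, not a biological simulation: it simply gives each model neuron its own slice of a one-minute interval, as in the striatum study, and reads elapsed time back from which neuron is currently active. All numbers and names here are illustrative assumptions:

```python
# Toy "population clock": each neuron is active during its own slice
# of a 60-second interval; together the slices tile the whole interval.

NUM_NEURONS = 60
INTERVAL_S = 60.0
SLICE_S = INTERVAL_S / NUM_NEURONS  # each neuron covers one second

def active_neuron(elapsed_s: float) -> int:
    """Index of the neuron active at a given elapsed time."""
    if not 0 <= elapsed_s < INTERVAL_S:
        raise ValueError("time outside the encoded interval")
    return int(elapsed_s / SLICE_S)

def decode_time(neuron_index: int) -> float:
    """Estimate elapsed time (midpoint of the slice) from the active neuron."""
    return (neuron_index + 0.5) * SLICE_S

# Reading the clock 42.3 seconds into the interval:
idx = active_neuron(42.3)   # neuron 42 is active
print(decode_time(idx))     # estimate is accurate to within half a slice
```

The point of the toy is the decoding logic: because activity sweeps through the population in a fixed order, knowing *which* cell is firing is enough to recover *when* you are in the interval.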

Time Flies When You’re Having Fun

Einstein is often credited with saying, “When you sit with a nice girl for two hours you think it’s only a minute, but when you sit on a hot stove for a minute you think it’s two hours.” At first, the link between emotions and time perception seems strange, until you realize that both time and positive rewards are represented by dopamine activity in the basal ganglia.

Neuroscientists have experimented with dopamine-targeting drugs to test this possible interaction and found that drugs mimicking dopamine, such as methamphetamine and cocaine, speed up time perception considerably.10,11 Furthermore, Parkinson’s disease, which destroys a big part of the brain’s dopamine neuron population, throws off people’s ability to estimate time intervals. Giving Parkinson’s patients the medicine L-dopa, a dopamine-like molecule, brings their time estimation back to normal.12

The Parkinson’s findings are important because, unlike people on mood-altering drugs such as cocaine, people with Parkinson’s disease are not consistently more or less happy than the average person. The experiment shows that it is the dopamine itself, and not a person’s emotional state, that causes time perception to change. It just so happens that feeling happy can also change dopamine levels in the brain, thereby speeding up or slowing down time.

Is the World Speeding Up?

In the early 1800s, Estonian biologist Karl Ernst von Baer developed a thought experiment called “The Minuteman,” in which he imagined a man whose entire life was shortened to a mere 40 minutes.13 “Every sound we hear would surely be inaudible for him… maybe he would actually hear the light that we see,” wrote von Baer. He also imagined this man’s life lengthened a thousandfold: “He would not be able to register the sun, but perceive it like a glowing piece of coal spinning around and around in a bright ring, he would see it only as an illuminated arc in the sky.”

With his thought experiment, von Baer showed that everything we see and hear depends on our perception of time. He argued that time is subjective and that every animal experiences the world in its own time frame. What this suggests is that we experience the world in a time frame that is relevant to us as humans and that meets our evolutionary needs. Indeed, our own human time perception appears well adjusted to the world around us. The gene period closely matches the day-night cycle, preparing our body for sleep and wake. The basal ganglia quickly adapt our actions to the time scales of the world, allowing us to catch a fish, dance a waltz, or run a race.

Our brain has its own set of clocks that match the rhythms of the world. So what happens if we become the world? Most of our interactions today are with other humans or with human-designed technology. The rhythms we sync up with are ones we create ourselves and, in the past few decades, we seem to have been speeding up. Movies and music are more action-packed, trends come and go in a matter of days, and our attention flits from device to device and webpage to webpage.

As Robert Colvile writes in his book The Great Acceleration, “What single quality best defines how our society is changing? Is it that life is becoming fairer, or more equal, or more prosperous? No… it is that life is getting faster.”14 Could it be that we are caught in a positive feedback loop, where our brains speed up to meet the rhythms of the current world, subsequently speeding up that same world further? And if this is the case, then how close will we one day get to becoming the 40-minute man?


Why Understanding Gender Is An Essential Part of A Business Education Thu, 27 Apr 2017 16:18:15 +0000 When it comes to gender, many business schools are behind the times.

The post Why Understanding Gender Is An Essential Part of A Business Education appeared first on Footnote.

This article was produced in partnership with Babson College.

The curriculum at many of today’s top business schools presents an outdated vision of business leadership. Students spend hours poring over case studies that disproportionately feature CEOs and key decision makers who are men. Only 11% of the 74 most popular cases from 2009 to 2015 had a woman protagonist,1 and nearly half didn’t include any women at all.(a)

Research shows that this marginalization of women runs throughout the business school curriculum, from the language used in classrooms2 to the representations of entrepreneurship presented in academic studies.3 It is even reflected in the faculty: Typically just 15% to 25% of professors at top business schools are women.4

Business schools can either continue to reflect an outdated worldview or start preparing students for a more inclusive future. In today’s economy, gender acumen is a requisite leadership skill: understanding gender is essential for optimizing relationships with employees, customers, and colleagues. We expect M.B.A. programs to expose students to a variety of business sectors and a community with experiences ranging from entrepreneurship and corporate leadership to finance and marketing. Shouldn’t gender – as well as race, ethnicity, and other forms of social diversity – be equally important?

How business schools tackle this challenge has implications far beyond our campuses. As the pipeline for tomorrow’s leaders, we help set the tone for the broader business community.5 If we continue to train students to envision CEOs, entrepreneurs, and innovators as men, we’ll have a hard time changing the fact that women are still significantly underrepresented in key leadership roles.(b) A third of companies worldwide have no women on their boards or in top “C-suite” positions.6 In North America, only 15% of companies have a woman on their executive team and just 4% have a woman CEO.6

Fostering a more inclusive environment isn’t just about fairness and equal opportunity for women and other marginalized groups. It’s also good for business. Numerous studies have found that greater gender diversity on boards and in corporate leadership positions is associated with greater profitability and higher stock values.6,7

Inclusivity also benefits students of all genders, as research shows that more diverse educational environments promote learning.8 This is especially important in business schools, where working in teams and learning from peers is an essential part of the educational experience. For example, a 2013 study randomly assigned undergraduate business students into teams to start a venture together. Mixed-gender teams had better sales and profits than those that were predominantly composed of men.9

For business schools that want to lead the way on gender, the first step is to take stock of how the institution currently represents itself. At Babson, we’ve analyzed the diversity of the case studies used in our core curriculum, the speakers and panelists featured at major events, and the students participating in important programs and leadership roles.

This review helped identify areas where we are now making a concerted effort to promote a more inclusive view of business leadership. We’re creating a database of case studies featuring women protagonists for faculty to use when developing their courses. We require that all school-funded events and conferences strive for gender balance in panels and speakers. There are many other ways business schools are promoting gender inclusivity, from revamping admissions and financial aid to changing curricula and facilitating discussions around gender.10

In the long run, it’s important that schools stop confining discussions of gender to women-centric events and material, and begin to normalize gender diversity throughout the curriculum. Studies show that when business lessons and research feature women or discuss gender, they are often targeted to women students or identified as being exclusively about gender or diversity.3 Instead, these materials should be presented as a natural part of a comprehensive business school curriculum.

Understanding gender is essential for all business students, which is why at Babson we work gender awareness into the core curriculum in addition to offering targeted, optional events and activities. In order to be leaders, our students need to be exposed to diversity and understand how gender dynamics affect their teams, their products, and their bottom line.

Change has to start at the beginning of the pipeline of future business leaders or else we’ll continue to replicate the same imbalances. Business schools steer the direction of the broader business community by educating future CEOs, entrepreneurs, and managers and by producing influential research and ideas.5 Gender equity isn’t just a problem for us to solve, it’s an opportunity for us to lead. If we innovate around how we understand and teach gender, we can offer students a cutting-edge education that will prepare them for long-term success as effective leaders and champions for diversity.


The Twisty Road from Science to Technology Thu, 13 Apr 2017 15:15:35 +0000 As scientists are increasingly asked to justify the usefulness of their research, basic science is under threat.

The post The Twisty Road from Science to Technology appeared first on Footnote.

It was recently announced that a new supply of transplantable human organs may come from growing organs from human stem cells implanted in animals. If proven effective and safe, this advance could revolutionize the treatment of many serious diseases.

Certainly the scientists who accomplished this feat deserve credit, but also the recognition that they stood on the shoulders of countless previous researchers in chemistry and molecular and cellular biology. Many earlier discoveries, some of which were not considered applicable to anything at the time, helped pave the way for this new technology, including research on what aspects of a molecule’s structure lead it to bind to other molecules and why some molecules emit light (fluoresce) while others do not.

Every groundbreaking technology we have was derived in some way – maybe three steps back, maybe 30 steps back – from a discovery that was driven either by curiosity or by research into a completely different problem. This is the value of what we call basic science: research that seeks to uncover the fundamental truths of the universe, but does not necessarily aim to solve an immediate societal or technological problem.

Basic science is the foundation for all scientific advances, from microwave ovens and smartphones to cutting-edge medical treatments. Yet scholars are increasingly asked to defend research that has no immediate, obvious application. To obtain grant funding, scientists doing even the most fundamental work must connect their research with an eventual application or relevance to the stated priorities of the funder.

For nearly twenty years, even the most basic-science-focused programs within the most basic-science-focused government agency, the National Science Foundation, have required scientists to articulate the potential “broader impacts” and benefit to society of their work.

Academic journals have also moved to an emphasis on outcomes. Decades ago, the introduction sections of scientific manuscripts were devoted to explaining the question being investigated, with little or no reference to why the research might be relevant to anyone other than a scholar. Investigations were warranted simply because there was an unanswered question or a disagreement in the literature on a particular topic.

Today’s journal articles, in contrast, feature discussions about the potential implications and applications of the topic, even when the research is squarely within the realm of basic science. Comparing academic articles from fifty years ago to now, we can see a real shift in the way scientists justify the importance of their work, even when preaching to the choir of fellow scholars who read scientific journals.(a)

It is understandable that funders and publishers want to devote limited resources, especially those derived from taxpayer money, to research with the potential for the greatest impact. Yet doing so threatens the contribution serendipity can make to scientific discovery and its potential to lead to unanticipated benefits.

As a Professor of Chemistry at a large research institution, I witness this process every day. For example, the multi-billion dollar drug Lyrica works to ease neuropathic pain and reduce the frequency of epileptic seizures for reasons that its original developer, my colleague Professor Richard Silverman, did not expect. His design of the drug made sense from a molecular standpoint, but there was no way for him to predict, a priori, how it would behave in humans.

Despite successes like Lyrica, many stakeholders, particularly those who control the distribution of funds at federal research agencies and private foundations, have trouble believing in connections between basic science and technology that they cannot plainly see. Scientists are not fortune tellers, however, and often cannot anticipate exactly where their discoveries will lead, much less provide concrete proof of these connections by outlining the many steps between a fundamental discovery and its eventual impact.

In the late 19th century, for instance, Sir William Crookes and Karl Ferdinand Braun began experimenting with cathode rays. These streams of electrons are formed when electrical current is passed through a vacuum-sealed tube. The curiosity and undirected tinkering of Crookes, Braun, and other scientists resulted in the discovery of the electron and the atomic nucleus, without which modern physics would not exist.

Cathode rays also led to all sorts of technologies that the first scientists studying them could never have predicted: the cathode ray tubes used in televisions and early computer monitors, x-ray and CT scan machines for medical diagnostics, and the x-ray crystallography that was essential to the discovery of DNA. Would these technologies have been developed if the scientists studying cathode rays had to justify their open-ended exploration of a phenomenon whose significance was unknown at the time?

The argument over the relative value of “basic” versus “applied” research, and how the two should inform each other, has been going on for years.1 One attempt to clarify their relationship (and counteract the Cold-War-era linear model that basic science feeds applied research but the two do not intersect), is a classification scheme known as Pasteur’s quadrant.

This concept was outlined in a 1997 book of the same name by political scientist Donald Stokes.2 Stokes proposed a “third mode of research” – use-inspired basic research, driven by a quest for knowledge but also by considerations about the usefulness of the research – as a more realistic and helpful view of how productive science is often done.

Stokes named this model of thinking “Pasteur’s quadrant” in honor of the famous scientist’s ability to keep an eye toward technology while producing fundamental advances of great influence in chemistry and microbiology. In the course of investigating wine fermentation, Pasteur not only conceived his most famous invention, pasteurization,(b) he also made a discovery that would influence all of pharmaceutical chemistry thereafter: that some molecules with identical chemical compositions can have different arrangements of their atoms in space and therefore interact with their environments in completely different ways.

Considering Pasteur’s quadrant, the prominent chemist George Whitesides offered one compelling argument for planting oneself in the quadrant of use-inspired basic research: “As scientists who get our money from the public purse, we have an obligation to spend some time producing science that helps to solve problems.” Whitesides offers a caveat, though, “There are, of course, differences in opinion on what strategies for research best serve the interest of society.”3

Therein lies the rub. Nearly every scientist wants to make a difference in the world, whether this motivation is self-serving or philanthropic; a scientist who sees their work purely as a means of self-indulgence is a very rare species indeed. But there is no formula for connecting a particular line of scientific inquiry to all of its eventual benefits for society, or for weighing the hypothetical future benefits of two research projects against one another.

Is forcing scientists to choose their problems based on societal need (or justify their research as relevant after the fact) a useful strategy to simultaneously increase our understanding of the universe and translate that understanding into a better quality of life?

One answer is that a scientist will be most productive when allowed to choose how to frame the problem she is working on. Some scientists make sense of the world in terms of the most basic mechanisms by which it operates, while others understand phenomena primarily in terms of the functions and applications they produce. Both are valid intellectual perspectives and should be supported. Some scientific problems lie directly in Pasteur’s quadrant and should be attacked by those scientists who, like Pasteur, have the ability to simultaneously adopt both modes of thinking.

A second, and in my mind, equally compelling answer is that a major part of our mission as academic scientists is to educate the next generation of researchers, and conducting basic research is essential to this education. While all academic scientists dream of a big breakthrough, the reality is that, for most of us, our most important product and our greatest chance of making an impact is the next generation of scientists we train. A big part of that training comes in the form of basic research in the lab.

Ultimately, there must be scientists who understand the world at its most fundamental level and push that understanding forward, just as there must be scientists who know how to translate this information into technologies and applications. If we lose the foundational knowledge in any scientific field by not asking its most basic questions, the whole house will crumble.


The Global Gag Rule’s Impact Goes Far Beyond Abortion Fri, 10 Feb 2017 16:21:02 +0000 How does the policy affect women on the ground? A global health expert shares insights from her research in Tanzania, where abortion is illegal.

The post The Global Gag Rule’s Impact Goes Far Beyond Abortion appeared first on Footnote.

Like Republican presidents before him, one of Donald Trump’s first acts after he took office was to reinstate (and expand) the Mexico City policy, also known as the global gag rule.1 The rule prohibits American foreign aid money from funding organizations that offer or promote abortions, even though U.S. taxpayer funds are never used to pay for those services, whether the rule is in effect or not.(a)

The global gag rule has historically been a political ping pong ball volleyed back and forth across party lines: Republican presidents sign it into law, Democratic presidents repeal it.(b) In the week since the rule was reinstated, it’s already begun hitting clinics hard. Reproductive health organizations around the world are now forced to choose between halting non-U.S.-funded abortion services, counseling, and advocacy or losing their U.S. funding for contraception, family planning, and other non-abortion-related healthcare.

As a global health expert and faculty member at Northwestern University, my research explores less what policies say and more what they do in practice.2 In my decade as a researcher, I’ve observed first-hand what happens when the Mexico City policy is in place and when it is not.

What the Mexico City policy means in practice is extraordinarily bleak, in ways that have nothing to do with abortion. NGOs such as Marie Stopes, EngenderHealth, and Pathfinder provide services to men and women all over the globe. If they don’t comply with the policy, as many have said they won’t, their reproductive health funding will be slashed across the board in all countries.

The U.S. accounts for nearly half of the government donations for family planning worldwide, providing a total of $638 million in 2015. Losing funding from the U.S. may force organizations to scale back services and shutter critical health facilities in underserved communities, even in places where abortion is already illegal.

Tanzania, where I have done research for over a decade, is one of the countries likely to be affected by the Mexico City policy, even though abortion is not legal there. Early sexual activity among girls in Tanzania is often linked to poverty and vulnerability.3 Until recently, public education required that a girl’s family pay for uniforms, books, and supplies, and even then the quality of the education has generally been poor. A girl in private school has better prospects for future employment with decent pay, but most families can’t afford the tuition.

Men take advantage of this to lure young girls into sexual relationships in exchange for school fees. For the girls, sex with an older man may be a ticket out of poverty. Child prostitution and child marriage are also major issues in parts of Tanzania.

In a place like Tanzania that bans abortion, contraception becomes exceedingly important, particularly for young unmarried women. Yet getting contraceptives to them is tricky. If an unmarried girl is seen at a reproductive health clinic, she is likely to be shamed by other clients.

It is NGOs, rather than public health facilities, that take contraceptives to places where young women can access them, such as schools and communities. Defunding NGOs will thus cut off young women’s access to the contraceptives they critically need to prevent unwanted pregnancy.

For Tanzanian women, the global gag rule can be disastrous, despite the fact that abortion is already illegal in their country. Organizations providing contraception in Tanzania will lose an enormous portion of their funding simply because they offer abortion services to women in countries halfway around the world.

Without access to contraception, women are more likely to end up pregnant. Pregnant unmarried girls in Tanzania are stigmatized and thrown out of school, ending their prospects for self-improvement and a decent job.4 Babies of teenage mothers are more likely to die than babies born to older mothers.

Ironically, by reducing access to contraception the global gag rule may actually increase abortion. A study published by the World Health Organization looked at data from 20 sub-Saharan African countries between 1994 and 2008. When the global gag rule was in effect, contraceptive use dropped and abortion rates rose in the countries most impacted by the ban – more so than in countries that were less reliant on the U.S. for family planning and reproductive health funding.5

In Tanzania, abortions still happen despite being illegal, and I saw their consequences firsthand when I was there. At a public hospital where I conducted research, there was at least one case a week of a woman hemorrhaging from a self-induced abortion or an abortion performed outside a health facility. Women hemorrhage uncontrollably, often in health facilities lacking a blood bank to stave off death. Research suggests that unsafe abortion may account for up to a quarter of pregnancy-related deaths in Tanzania.6

By defunding family planning services, the global gag rule actually increases the number of unintended pregnancies and, inadvertently, the number of abortions and unnecessary deaths in places like Tanzania.

Political leaders in the U.S. and around the world are responding. The HER Act, which would permanently repeal the Mexico City policy, was recently introduced in the U.S. House and Senate, but is unlikely to pass in the current Republican-controlled Congress. Eight countries have announced plans to fill some of the funding gap created by the rule, although this is unlikely to fully curb the damage on the ground in countries like Tanzania.


What Do Angel Investors Want? A Protégé Thu, 08 Dec 2016 17:59:39 +0000 Research shows that investors look for entrepreneurs they can mentor. How can founders demonstrate that they’re ready to learn?

The post What Do Angel Investors Want? A Protégé appeared first on Footnote.

This article was produced in partnership with Babson College.

When investors choose to fund a company, they look closely at the strength of the team behind it. “We are backing the founders as much as, if not more than, the business itself,” writes venture capitalist Christian Hernandez Gallardo. There’s even a firm called Entrepreneur First that takes this idea to its logical extreme, funding talented entrepreneurs before they have a business concept worked out.(a)

What is it, exactly, that investors are looking for in a founder? Skills and experience are obviously important, and a degree from a highly-ranked school or past stint at a tech giant is sure to impress. Less tangible personal and relational factors also play an important role. Studies show, for example, that entrepreneurs who are perceived as trustworthy are more likely to receive investment offers.1

Another important factor that is often overlooked is the potential relationship between investors and founders. Investors want to back entrepreneurs they can mentor. Because many investors believe their time and expertise to be just as valuable as their money, they want to invest in companies where their personal involvement can have an impact. This means looking for entrepreneurs who are “coachable” – that is, receptive to feedback – and who can benefit from the specific guidance they have to offer.

Several academic studies have demonstrated the impact coachability can have on an entrepreneur’s chances of securing funding.2,3,4 A 2010 study led by Northeastern University’s Cheryl R. Mitteness showed that coachability influences whether angel investors recommend moving forward with a company after a pitch.2 Recent research by my colleague at Babson College, Dr. Lakshmi Balachandra, found that the more willing an entrepreneur is to accept feedback and engage with suggestions that are offered during a pitch, the more interested investors are in pursuing the company.3

Dr. Balachandra’s study found that coachability has an impact regardless of how strongly investors rank a business’s economic fundamentals or the competence of the team. This suggests that cultivating and demonstrating a willingness to learn can give entrepreneurs an extra edge, even if they don’t increase their team’s skillset or boost the business’s cash flow, which can be much more daunting to achieve.

The lesson of this research for entrepreneurs is that many investors want to play a role in the success of the businesses they fund by providing mentorship and guidance. Given the wealth of expertise most angel investors have, that’s a good thing – it just requires a more tailored pitch. This is because, as research demonstrates, investors are more likely to respond positively to a pitch if it is in an industry where they have experience and expertise to offer.3,5

Entrepreneurs need to sell investors on not just their businesses and themselves, but also their compatibility with the investors’ expertise and interests and their willingness to learn. The good news is that coachability is almost entirely within an entrepreneur’s control. While you can’t learn to code or add an impressive job to your resume overnight, anyone can try to be more open to advice and mentorship.

How can you make sure your eagerness to learn comes across in a meeting or pitch? Take an interest in the perspectives of investors and show appreciation for the experience and advice they have to offer. Rather than coming across as defensive when they provide feedback, ask clarifying questions and probe for deeper insight with genuine interest.

It’s not enough to simply receive feedback – you also need to act on it. Researchers from the University of Central Florida and Elon University conducted a study to define and measure coachability among entrepreneurs.4 They found that coachability involved a willingness not just to listen, but also to act on the advice of others and integrate their feedback into the business. With this in mind, you should follow up at the end of the pitch, or shortly afterward, with the next steps you plan to take based on investors’ feedback.

Babson M.B.A. Rich Palmer has found these techniques essential in helping his startup, Gravyty, engage angel investors and incubator programs like MassChallenge and benefit from their guidance. Rich and his co-founder make it their goal during any meeting or pitch to hear what is on investors’ minds, rather than just sharing their ideas.

The Gravyty team aims to listen 80% of the time and talk 20%, keeping responses short so they can fit in as many questions as possible. They also keep track of everything people suggest, even if they disagree with it, and take criticism as constructive rather than getting defensive when someone questions their approach.

Investors who are looking for potential protégés aren’t only concerned with the entrepreneur’s willingness to learn. They are also considering where they fit into a business’s success, as the research demonstrates. Make sure to emphasize ways in which you and your business align with investors’ interests, expertise, and other investments. If it’s not readily apparent how an investor might help you, call out the connections explicitly and present a vision of how they fit into your business’s success.

Like many of us, investors are ego-driven, not in the sense of being self-centered, but in wanting to have an impact on the businesses they fund. They have expertise to share and they want to put it to use just as much as their money. Most entrepreneurs know that not all funding is equal, and the guidance and connections they get from investors can be just as important as the capital. What they may not consider is that investors know it too.

The post What Do Angel Investors Want? A Protégé appeared first on Footnote.

How Mobile Banking Is Transforming Africa Thu, 03 Nov 2016 18:42:40 +0000 A new study from the Harvard Kennedy School shows how mobile banking has transformed Kenya’s financial system and brought banking to the masses.

The post How Mobile Banking Is Transforming Africa appeared first on Footnote.

This article was produced in consultation with Harvard University’s John F. Kennedy School of Government.

Imagine you live in a small village in rural Kenya. Your daughter attends university in Nairobi and needs financial support to buy textbooks and pay her rent. How do you send her money if you, like many Kenyans, don’t have a bank account or internet access?(a)

In the U.S., the answer would be simple. In fact, you would have an abundance of options: PayPal, Venmo, online banking, checks, money orders, or good old-fashioned cash. Many people around the world, however, don’t have access to the financial services some of us take for granted. Two billion “unbanked” adults, mostly in developing countries, face barriers to tasks as simple as receiving wages or sending money to family members. Without access to banking services, their finances are unstable because they don’t have a good way to save for the future or borrow in times of need.(b)

Getting people access to formal financial services is called financial inclusion and it is a critical part of equitable economic development, says Jay Rosengard, Adjunct Lecturer in Public Policy at the Harvard Kennedy School.1 Research shows that by lowering transaction costs and helping spread risk and capital across the economy, financial inclusion improves the livelihood of individual families and spurs local and national economic growth.2 Financial inclusion can be particularly powerful for women and other marginalized groups who have traditionally been excluded from the formal economy and had less control over their own finances.3

When up to 90% of your population doesn’t have a bank account, how do you bring them into the financial system quickly and easily? Rosengard believes Kenya has struck on a promising solution: mobile banking.1 His latest research paper shows that, thanks to mobile banking, the share of Kenyans with access to a financial account jumped from 42% in 2011 to 75% in 2014.(c) Financial inclusion skyrocketed among the poorest citizens, from 21% of people with a financial account in 2011 to 63% in 2014, growth of more than 200% in just three short years.
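The growth figure follows directly from the two inclusion rates; a quick arithmetic check, using the rounded shares quoted above:

```python
# Financial inclusion among Kenya's poorest citizens, per Rosengard's figures
share_2011 = 0.21
share_2014 = 0.63

growth = (share_2014 - share_2011) / share_2011
print(f"growth: {growth:.0%}")  # growth: 200%
```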

“The magic of mobile banking lies in its simplicity and low cost,” said Rosengard. “All you need to get started is an old-school flip phone, available for less than $10 U.S. dollars, and a banking SIM card. Then you can send and receive money over text message, no smartphone or special app required. Customers mostly rely on the service for person-to-person (P2P) payments, but are increasingly using it to pay merchants, utility companies, and other businesses.”

Mobile banking has brought financial services to the masses in Kenya.

Rosengard’s research finds that mobile banking has transformed how Kenyans manage their money. On Safaricom’s M-PESA, which is by far the most popular service in the country, 19 million users now send 15 billion Kenyan shillings in payments each day – the equivalent of $150 million U.S. dollars. This growth has allowed Kenya to zoom past other countries when it comes to financial inclusion. The share of people with access to a financial account in Kenya is more than double that of other sub-Saharan African countries and almost triple the typical rate in low-income countries worldwide.
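The currency figures above imply an exchange rate of roughly 100 Kenyan shillings per U.S. dollar; a back-of-the-envelope check, including a per-user average that is derived here for illustration rather than quoted in the research:

```python
daily_kes = 15_000_000_000   # 15 billion Kenyan shillings in payments per day
daily_usd = 150_000_000      # the stated equivalent: $150 million U.S. dollars
users = 19_000_000           # M-PESA users

implied_rate = daily_kes / daily_usd   # KES per USD implied by the figures
per_user_usd = daily_usd / users       # average daily flow per user (illustrative)
print(f"implied exchange rate: {implied_rate:.0f} KES/USD")
print(f"average per-user flow: ${per_user_usd:.2f}/day")
```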

This mobile banking revolution has also created greater financial stability for Kenyan families. A 2014 study found that people using M-PESA were able to handle major hits to their income – such as a bad harvest, a job loss, or a failing business – without having to curb their household’s consumption.4 The primary way they weathered these storms was by getting help from family and friends through funds sent over M-PESA. In comparison, the study found that Kenyans who did not use M-PESA had to reduce their household spending by an average of 7% in response to financial challenges.

For developing countries where traditional banking is limited, Rosengard sees mobile banking as a potential shortcut to financial inclusion. Nations that already have a robust banking sector and widespread access to financial services, like the United States and South Africa, can depend on existing banks to offer services online, with upstarts like PayPal and Venmo pushing the envelope.

In developing countries, however, a tool like mobile banking can be transformational.5 Rosengard explained how, instead of growing the conventional banking sector’s physical presence and slowly bringing the “unbanked” into the system, mobile banking allows countries to immediately bring financial services to the masses in a cheap, accessible way.

Mobile banking isn’t the first new technology that has helped countries leapfrog certain stages of development and progress more quickly. Cell phones had this impact in sub-Saharan Africa in the 2000s. As mobile phone ownership boomed, countries were able to skip over the landline telephone phase and rapidly bring modern communication to their citizens. The rate of cell phone ownership in Kenya (82%) is now almost as high as in the United States (89%).6

Could mobile banking foster a similar transformation, bringing financial services to the masses and spurring equitable economic development? Rosengard and other experts think so.

“For the Kenyan family able to send their daughter money for school, mobile banking could mean the difference between her dropping out to work or graduating, securing a better career, and, down the line, being able to send money back home in times of need,” Rosengard said. “Now multiply that impact by the two billion other unbanked people across the world whose lives could be changed by a cheap flip phone and a simple banking program, offering a path to more equitable, inclusive economic growth.”


Why Humans Are Hard-Wired For Curiosity Thu, 08 Sep 2016 16:44:16 +0000 The same evolutionary forces that fuel our interest in food and sex may also drive our insatiable thirst for information.

The post Why Humans Are Hard-Wired For Curiosity appeared first on Footnote.

Humans are deeply curious beings. Our lives, economy, and society are shaped so strongly by a drive to obtain information that we are sometimes called informavores: creatures that search for and digest information, just like carnivores hunt and eat meat.1 What is it that drives our hunger for information?

From an evolutionary perspective, there is a clear reason why animals would seek out information: it can be vital to their survival and reproduction. A bird that spent its whole life eating berries from a single bush and never explored its environment could be missing out on a much better food source nearby. Thus it is not surprising that exploration is common in the animal world. For example, monkeys will push a button at high rates for a chance to peek out of the window,2 and roundworms do not crawl to a food source directly, but rather circle towards it in a way that gives them the most information about their environment.3

What drives animals’ information-seeking behaviors? One possibility is that each individual animal learns over the course of its life that a greater knowledge of its environment leads to rewards like food or other essential resources. However, while this is something we can imagine humans or monkeys learning, it is probably beyond the capacity of roundworms. Furthermore, we see curiosity-driven behaviors in very young animals, before they have had enough experience to learn the association between knowledge and rewards. For example, human newborns look at new visual scenes for much longer than at known visual scenes.4

Another possibility is that evolutionary pressures have made information intrinsically rewarding. The reason so-called “primary rewards” like food and sex are pleasurable is because animals that enjoy eating and reproducing are more likely to survive and produce offspring.(a) Evolution has therefore built up a reward system in the brain that drives behaviors that help animals acquire essential resources. Could this same reward system be prompting information-seeking behavior by making animals find new information intrinsically rewarding?

If learning is intrinsically rewarding, the brain should respond to new information in a way similar to how it responds to primary rewards like food and sex. Indeed, neuroimaging studies show that when people are curious about the answers to trivia questions or watch a blurry picture become clear, reward-related structures in their brains are activated.5 However, since the resolution of neuroimaging is still quite rough, these studies cannot show whether the reward-related structures in the brain actually respond in the same way to information as they do to primary rewards like food. For this we need to study the behavior of single neurons.

Neuroscientist Ethan Bromberg-Martin and his colleague Okihide Hikosaka were the first to find signatures of reward responses to information within single neurons.6 They designed a task in which monkeys saw two pictures and had to choose one. Then, after a short delay, the monkeys would receive either a large or a small reward. If the monkey chose one of the pictures (the informative picture), it would get a cue that indicated whether the delay would be followed by a small or large reward. If it chose the other picture (the uninformative picture), it would see a cue that gave no information about the upcoming reward. Even though the choice of picture did not affect the size of the reward, the monkeys almost always chose the informative picture, presumably because they were curious and found it rewarding to get a hint about the outcome.
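The structure of the task can be sketched in a few lines of code. This is a simplified illustration of the design described above; the labels and the 50/50 reward odds are stand-ins, not the experiment’s actual parameters:

```python
import random

def run_trial(choose_informative: bool) -> dict:
    """One simplified trial of the two-picture task described above."""
    reward = random.choice(["large", "small"])  # payoff is a coin flip either way
    if choose_informative:
        cue = f"predicts-{reward}"  # informative picture: cue reveals the reward
    else:
        cue = "neutral"             # uninformative picture: cue reveals nothing
    return {"cue": cue, "reward": reward}

# The choice never changes the expected payoff -- only whether the monkey
# gets to know the outcome during the delay before the reward arrives.
trials = [run_trial(True) for _ in range(10_000)]
large_rate = sum(t["reward"] == "large" for t in trials) / len(trials)
print(f"large-reward rate with informative picture: {large_rate:.2f}")  # ≈ 0.50
```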

While the monkeys performed this task, Bromberg-Martin recorded activity in their dopamine neurons, which play a crucial role in primary reward processing and in motivating behavior. By increasing their activity in the face of both rewards and cues that predict rewards, dopamine neurons flag rewarding situations and experiences for the rest of the brain. Bromberg-Martin and Hikosaka looked at how the monkeys’ dopamine neurons responded to the pictures and found that activity increased in response to the informative picture and decreased in response to the uninformative picture. The reward-predicting dopamine neurons responded to information in the same way they respond to other primary rewards like food and sex.

Bromberg-Martin and Hikosaka’s study indicates that a core component of the brain’s reward processing system can motivate animals to seek out information as well as primary rewards. Since both food and information can drive behavior that promotes survival, it makes sense that somewhere in the brain they are processed similarly. However, we would also expect that they be represented differently in other parts of the brain. A good book and a candy bar are both rewarding, but in distinct ways – and if you’re hungry, you are not going to be satisfied by reading the latest bestseller.

With this distinction in mind, neuroscientist Tommy Blanchard teamed up with Bromberg-Martin and colleague Ben Hayden to investigate how information and primary rewards are represented in the orbital frontal cortex, an area of the brain involved in many complex cognitive behaviors, including evaluating rewards.7 Recording again from single neurons in monkeys, he found that neurons in this area responded to both information and primary rewards, but, unlike dopamine neurons, did not treat these two variables in the same way. For example, an individual neuron in the orbital frontal cortex might increase its activity in response to a primary reward, but decrease its activity in response to information (or vice versa).

Blanchard’s research indicates that primary rewards and information are represented differently in the orbital frontal cortex, in contrast to the midbrain (the home of dopamine neurons), where they are integrated into a single representation. Processing these rewards differently in the more advanced part of the brain may allow for behavioral flexibility – the ability to seek out information in some contexts, but focus on primary rewards in other situations.(b)

What this body of research demonstrates is that primates really are informavores – information stimulates our brains the same way food and sex do. Yet there are also parts of our brains that differentiate between information and other rewards, allowing for behavioral flexibility and complex decision making.

Many questions still remain about humanity’s innate curiosity. For example, how do reward circuits in different parts of the brain interact with each other? Why are some types of information more interesting to us than others, and why are different people interested in such different things? Let’s hope our curiosity will continue to lead us to a further understanding of curiosity itself!


Why We Need New Antibiotics More Than Ever Tue, 12 Apr 2016 16:34:20 +0000 Last year, scientists discovered the first new antibiotic in decades. Given the threat of antibiotic-resistant disease, how do we make sure it doesn’t take another 25 years to find the next one?

The post Why We Need New Antibiotics More Than Ever appeared first on Footnote.

This article was produced in partnership with Northeastern University.

A year ago, a group of scientists led by Dr. Kim Lewis, Director of the Antimicrobial Discovery Center at Northeastern University, announced a major breakthrough. They had identified a new antibiotic, teixobactin, capable of destroying several kinds of bacteria, including antibiotic-resistant strains of tuberculosis and staph (i.e. MRSA).1

Antibiotics are so familiar to us that the discovery of a new one may not seem particularly groundbreaking. Yet in reality, most antibiotics were identified over a half-century ago and new discoveries are quite rare. Teixobactin is actually “the first new antibiotic to be discovered in more than 25 years,” according to the White House.

After a “golden age” of discovery in the 1940s, 50s, and 60s, antibiotic development faltered.2,3 The drugs that were easiest to identify and cultivate (the “low-hanging fruit”) had already been found, incentives in the scientific community steered research in other directions, and antibiotics were not seen as profitable by pharmaceutical companies.(a)

MRSA bacteria

Meanwhile, bacteria began to develop resistance to existing antibiotics. The dreaded MRSA (methicillin-resistant Staphylococcus aureus) arose in hospitals and healthcare facilities, while overuse of antibiotics in livestock farming fostered resistant strains of Salmonella and E. coli. According to the CDC, antibiotic-resistant bacteria now infect 2 million Americans and kill 23,000 each year.

In this context, the identification of a new antibiotic that can avoid resistance is major news. Yet what elicited even more excitement in the scientific community than the discovery itself was how teixobactin was discovered. Researchers found the compound using an innovative reinvention of an old technique responsible for many of the antibiotic discoveries of the mid-twentieth century: digging in the dirt.

In the early days of antibiotic development, most compounds were found by combing soil samples for microbes that produce their own antibiotic chemicals to keep competing bacteria at bay. This soil mining resulted in a number of powerful antibiotics, such as the tetracyclines used to treat everything from Lyme disease to acne. However, once the compounds that were easiest to find and grow in a lab had been identified, the pace of discovery slowed.2

Dr. Lewis and his colleague at Northeastern, Dr. Slava Epstein, reinvigorated soil mining by inventing a device, the iChip, that makes it possible to grow bacteria that could not be cultivated through previous techniques. The iChip grows uncultured bacteria in their natural environment, opening up the possibility of research on the 99% of natural bacteria that cannot be cultured in a lab.(b)

Dr. Kim Lewis, Director of the Antimicrobial Discovery Center at Northeastern University (Image credit: Northeastern)

Developing systems like the iChip that enable exploration of large numbers of potentially antibiotic compounds may be the key to fighting antibiotic-resistant diseases. According to Dr. Lewis, the lack of drug candidates is the primary bottleneck in the antibiotic discovery pipeline. Without promising lead compounds, there is nothing for medical researchers and pharmaceutical companies to test and refine for human patients.

To address this bottleneck, we need to devote resources to developing platforms that allow researchers to identify large numbers of potential antibiotics.(c) “Right now,” says Dr. Lewis, “people have kind of a lottery approach to the problem. Some group accidentally stumbles onto an interesting compound and tries to develop it. In very rare cases it’s successful. In most cases, it simply fails.”

In lieu of this single compound approach, Dr. Lewis and others have advocated a shift to developing platforms that provide a foundation for antibiotic discovery.2 For instance, researchers like Dr. Helen Zgurskaya at the University of Oklahoma are trying to determine “rules of penetration” to guide the identification of antibiotics that can successfully penetrate the bacterial cell envelope, a major barrier to drug delivery.(d)

While identifying rules of penetration is the type of foundational work that paves the way for important discoveries, it is often overlooked by scientists and funders. According to Dr. Lewis, however, it is exactly this kind of platform development that will be essential in our fight against antibiotic-resistant bacteria. While efforts to reduce the use of antibiotics in agriculture and prevent the development of resistant diseases are important, at the end of the day we need more antibiotics to address the threat head-on.

“We are in a stand-off with human pathogens. And we are poised to lose,” Dr. Lewis wrote in a 2012 essay in Nature calling on the scientific community to “recover the lost art of antimicrobial drug discovery.”2 If we don’t want to wait another 25 years for our next groundbreaking discovery, we need to cultivate platforms and technologies like the iChip that will pave the way for a new generation of antibiotics.


Why Women Entrepreneurs Underestimate Themselves – And What We Can Do About It Thu, 07 Apr 2016 15:14:09 +0000 To close the "confidence gap" between men and women, we have to change the landscape.

The post Why Women Entrepreneurs Underestimate Themselves – And What We Can Do About It appeared first on Footnote.

A recent report finds that more than 200 million women across the world are starting and running new businesses.1 According to the Global Entrepreneurship Monitor (GEM),(a) although men are still 50% more likely to become entrepreneurs, women are steadily gaining ground. The gender gap narrowed by 6% from 2012 to 2014, and in ten nations women are now just as likely as men to start new businesses.(b)

These women are bringing innovative products and services to market, creating jobs, driving economic growth, and providing for their families and communities. At Babson College’s Center for Women’s Entrepreneurial Leadership (CWEL), where I serve as Executive Director, we’re working to change the entrepreneurial ecosystem in the U.S. so that we can soon join the list of countries that fully harness the innovation and leadership potential of their entire populations.(c)

One gender gap we’re concerned about at CWEL relates to how men and women see themselves as entrepreneurs. According to the GEM report, while women are nearly as likely as men to identify potential business opportunities around them, they are significantly less likely to view themselves as capable of starting a business to address these opportunities and are more likely to fear failure if they do. In the U.S., for example, 46% of women believe they have the skills and knowledge needed to start a business, compared to 61% of men.

These findings are part of a broader trend documented in numerous studies, in which men tend to overestimate their professional abilities and performance while women underestimate their capabilities. In a survey of members of the U.K.-based Institute for Leadership & Management, half of women managers reported feeling self-doubt about their careers and work performance, compared to less than a third of men.2 Men are four times as likely to ask for a raise,3 and women typically ask for less during salary negotiations than men.4

This gender gap in self-perception is important because research shows that confidence and self-efficacy affect performance in school, work, and even simple problem-solving tasks.5 Simply put, if you don’t believe you can do something, you are less likely to try it, and to do it well, regardless of your abilities. Indeed, the GEM report found that in countries where women are less likely to see themselves as capable of starting a business, they are less likely to become entrepreneurs.

Confidence plays an especially large role in entrepreneurial momentum.6 Launching a successful business isn’t just a matter of having innovative ideas and superior skills; it requires boldness, courage, and a tremendous amount of faith in one’s own abilities.

How can we equip women with the courage they need to become entrepreneurs? Much of the conversation over the past few years has focused on the individual level, exhorting women to “lean in” and close the “confidence gap” themselves. At CWEL we take a different approach. We believe entrepreneurial self-efficacy – a person’s confidence that they have what it takes to succeed in launching a business – is cultivated and influenced by the environment and ecosystem in which they operate.

Entrepreneurs at a coaching event hosted by the WIN Lab at Babson College’s Center for Women’s Entrepreneurial Leadership (CWEL)

Women aren’t less likely to see themselves as entrepreneurs simply because they lack overall confidence. They’re responding to messages they receive from the world around them about who is and isn’t supposed to lead and take risks. Only 15% of venture capital-funded companies have a woman on their executive team and a mere 3% have a woman CEO.7 People are twice as likely to respond positively to the same pitch given by a man as by a woman.8 This gender discrimination comes on top of the already-daunting fact that half of new businesses fail within five years.9 Perhaps women who hesitate to start businesses in such an environment aren’t risk-averse, they’re risk-rational.

At CWEL, we’re working to change the entrepreneurial ecosystem and the messages women receive about who can and should start a business. We’re also equipping individual women with the courage to transform themselves from individuals with ideas to entrepreneurs with impact. Our Women Innovating Now (WIN) Lab cultivates self-efficacy by shifting participants’ sense of what is possible for themselves and their businesses.

Over the course of eight months, participants plan, experiment, and learn within a community of fellow entrepreneurs who provide support, feedback, encouragement, and knowledge sharing. Each WINner is paired with a compatibility-matched coach and has access to an expert circle of women industry leaders. These successful women help build participants’ self-efficacy by acting as role models, sharing their stories, and offering invaluable insights about their entrepreneurial journeys.

Participants in CWEL’s 2015-2016 Women Innovating Now Lab

Rather than taking the traditional accelerator approach of bringing business ideas to market, the WIN Lab focuses on preparing potential entrepreneurs to be market-ready and to “go big” with their ideas. For entrepreneurs like Savitha Sridharan, WIN Lab participant and founder and CEO of renewable energy company Orora Global, the program helps women “believe in [their] dream and commit to act on it.”

The GEM report and other research suggest that shifting self-perception is a key part of encouraging women’s entrepreneurship. But while confidence is critical, it isn’t an individual problem. It’s an ecosystem problem. Instead of asking women to lean in, we must give them the tools, support, and relationships that all entrepreneurs need to succeed – resources that men often have access to without even realizing it.


Building the Emotional Machine Thu, 17 Mar 2016 20:00:38 +0000 To create robots with feelings, researchers are programming them to learn and develop emotion the way human children do.

The post Building the Emotional Machine appeared first on Footnote.

From the sci-fi classic “Blade Runner” to the recent films “Her” and “Ex Machina,” pop culture is filled with stories demonstrating our simultaneous fascination with and fear of artificial intelligence (AI).

This interest is rooted in questions about where the line between human and artificial intelligence will be, and whether that line might one day disappear. Will robots eventually be able to not only think but also feel and behave like us? Could a robot ever be fully human?

A new multidisciplinary field called developmental robotics is paving the way to some answers.(a) Rather than writing programs that try to mimic specific human behaviors like love, developmental roboticists build machines that learn and develop the way humans do as they grow from newborn infants to adults. The goal is to model human learning and then create machines that can learn in similar ways.

My research at Kyoto University focused on building robots with human-like emotional architecture who learn emotional behavior from the people they interact with, particularly their human caregivers. It offers insights into how we might one day be able to create machines with a full range of emotions comparable to our own.

How Do Humans Develop Emotion? 

For a developmental roboticist, the first step in tackling the problem of robot emotion is understanding how humans develop the capacity for emotion. Though this process is still a bit of a mystery, the field of developmental psychology is beginning to unlock some of its secrets. 

Around the age of two, when toddlers start to speak, they begin to learn the emotional names for their internal states. The word “sad”, for instance, refers to a certain set of physiological and psychological feelings, along with associated expressions of these feelings through tone of voice, facial appearance, and body movement.(b) Sadness is often linked to slower-paced speech, a frowning mouth, and sluggish body movement. Anger, on the other hand, is generally associated with intense, abrupt speech; downturned eyebrows; and quick, aggressive movements. 

As we get older, we use these behaviors to express our internal states and to recognize emotion in others. We even see emotion in non-human objects, such as a sad piece of music or an excited pet. We may also do self-inspections to deduce our own emotions – for example, someone noticing her voice rising as a way to identify when she is feeling frustrated. All of this emotional expression and perception happens quickly, involuntarily, and subconsciously, conveying a great deal of information in a concise way.

How do we develop these forms of emotional expression? Are they learned or innate (or some combination of both)? For a long time, the prevailing view was that human emotional expressions are biologically determined, particularly when it comes to basic emotions like happiness, sadness, anger, fear, disgust, and surprise.(c) However, new research suggests that how humans express emotion may, at least in part, depend on how they are taught to do so by their caregivers and peers.(d)

Cross-cultural studies suggest that cultural environment plays a role in the development of emotion. According to research by Stanford psychologist Jeanne Tsai, emotional expression and ideals tend to differ across Eastern and Western cultures. Individuals in Western cultures identify “feeling good” as a high-arousal positive (HAP) affect, whereas Eastern cultures prefer a low-arousal positive (LAP) affect.1 In other words, Western cultures favor high-arousal emotions such as excited joy and elation, whereas Eastern cultures favor low-arousal emotions such as calm joy and bliss.

To illustrate, one study found that Asian Canadians prefer smiles between 20 and 60% intensity, whereas European Canadians prefer smiles from 80 to 100% intensity.2 Research has also demonstrated that people have a harder time identifying the emotions connected to facial expressions and vocal cues of people from other cultures than those from their own culture.3

Dr. Lim with one of her robots (Image credit: Irwin Wong)

Interactions with caregivers at a young age may play a particularly important role in the development of emotion. Research shows that when a rhesus monkey is separated early in life from its mother, its genes are expressed differently in brain regions controlling socio-emotional behaviors. This primate study suggests that early parental care – or the absence of it – can profoundly change an infant’s future emotional behavior, even at the genetic level.4

Though studies of development in humans are rarer due to ethical issues, observations of children raised in emotionally deprived institutional environments show that early life experiences can have lasting effects on emotional intelligence. For example, individuals who grew up in Eastern European orphanages with little social interaction or attention from caregivers had difficulty later in life matching appropriate faces to happy, sad, and fearful scenarios (though they were able to match angry faces).5

Building Emotional Robots

How can we use our knowledge of how human emotion develops to build robots with the capacity for emotion? The idea behind developmental robotics is to create robots that learn behaviors the same way human children do. Typically, a software model is programmed to represent a part of the robot’s “brain.” Then, the robot is exposed to an environment to stimulate the training of that model, for example, through interactions with a human caregiver. In my research, I tested the idea that caregivers can play a role in helping robots develop emotion, just as they play a role in emotional development for human infants.

First we must ask: what would it mean for a robot to have emotion, and how would we know if it did? Neuroscientist Antonio Damasio6 defines emotion as “the expression of human flourishing or human distress, as they occur in the mind and body.”(e) I have proposed that we define flourishing for a robot as a state of “all-systems-go” or homeostasis, where the battery, motors, and other parts are in working order and the core temperature is normal. We can imagine this as similar to a human infant being well fed, rested, and in good health. Distress is when something is wrong, which could result from a hot motor or CPU, low battery, or the saturation of microphone sensors with loud noises or vision sensors with extremely bright light. This parallels a newborn feeling distress from hunger, a wet diaper, or a loud sound.
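To make the idea concrete, here is a minimal sketch of such an “all-systems-go” check. The sensor names and thresholds are my own illustrative assumptions, not values from the actual research:

```python
def internal_state(battery_pct, motor_temp_c, mic_level_db, light_lux):
    """Return 'flourishing' when every subsystem is in its normal
    operating range, 'distressed' otherwise."""
    healthy = (
        battery_pct > 20          # enough charge remaining
        and motor_temp_c < 70     # motors and CPU not overheating
        and mic_level_db < 90     # microphones not saturated by loud noise
        and light_lux < 50_000    # cameras not blinded by bright light
    )
    return "flourishing" if healthy else "distressed"
```

A real robot would monitor many more signals, but the principle is the same: homeostasis is a conjunction of subsystem health checks, and distress is any departure from it.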

In my research, I had human caregivers interact with robots in a variety of ways, expressing emotions such as happiness, sadness, and anger, while the robots were in both flourishing and distressed states. The caregiver behaviors parallel ways in which developmental psychologists have observed parents interacting with human infants.7 For example, when the robot is in a flourishing state, the caregiver plays with the robot in a joyous way, modelling happiness. When the robot is in a physically distressed state, the caregiver may display empathy, showing sadness while comforting the robot.(f)

The result? The robot learned to express its internal states based on whatever models it was taught by its caregivers. Changing how the caregivers behave affects how the robots later express their internal states – in other words, how they show emotion. If the caregiver spoke to the robot in an empathetic way when it showed distress, for instance saying “poor robot” in a slow and sorrowful voice, the robot would learn to express a distressed state as something similar to sadness, using a slow voice and movements. If the caregiver scolded the robot when it was in distress, expressing frustration or anger, the robot would later express a distressed state using the aggressive, intense patterns we typically associate with anger.(g)
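The learning loop described above can be sketched in a toy form: the robot records the expressive features its caregiver displays while it is in each internal state, then reproduces the average of those features when it later enters that state. The feature names and the simple averaging model are my simplifying assumptions for illustration, not the architecture used in the actual research:

```python
from collections import defaultdict


class ExpressionLearner:
    """Learn state -> expression mappings from observed caregiver behavior."""

    def __init__(self):
        # Maps an internal state to the caregiver features seen in it.
        self.observations = defaultdict(list)

    def observe(self, state, speech_rate, intensity):
        """Record the caregiver's expressive features in a given state."""
        self.observations[state].append((speech_rate, intensity))

    def express(self, state):
        """Reproduce the average caregiver behavior seen in this state."""
        obs = self.observations[state]
        n = len(obs)
        return (sum(r for r, _ in obs) / n,
                sum(i for _, i in obs) / n)


# An empathetic caregiver: slow, gentle speech while the robot is distressed.
robot = ExpressionLearner()
robot.observe("distressed", speech_rate=0.3, intensity=0.2)  # "poor robot"
robot.observe("distressed", speech_rate=0.5, intensity=0.4)
rate, intensity = robot.express("distressed")  # slow, low-intensity expression
```

Swap the empathetic caregiver for a scolding one (fast speech, high intensity) and the same distressed state would be expressed with the aggressive patterns we associate with anger, mirroring the experimental result.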

We could conduct similar experiments with various types of positive emotions. If a caregiver expresses calm and peaceful happiness to a robot in a flourishing state, this might lead the robot to express flourishing in the same relaxed, calm way. A caregiver who expresses more energetic, boisterous joy could produce a robot that expresses flourishing in a more intense, high-energy manner. Looking at the world around us, we can see families, households, and even cultures that demonstrate how human emotional expression can vary in similar ways.

Can A Robot Love?

In an article entitled “Can Robots Fall in Love, and Why Would They?,” leading AI philosopher Daniel Dennett described two possibilities for creating robots with emotions. The first is that an AI could be programmed to act like it was in love and, on the surface, appear to have emotions. Essentially, “a robot could fake love.”(h)

The second and less obvious route is to create an architecture less like current computers and more like the human brain. This system would not be a hierarchy controlled from the top down; instead, behaviors would emerge “democratically” from low-level, competing elements, much like they do in biological nervous systems. With this structure, Dennett writes, you could potentially create a computer that truly loved, though doing so would be “beyond hard.”

While still in its early stages, my research offers an approach to building emotional robots that follows Dennett’s “emergent” model. Rather than hard-coding emotions into a robot using fixed rules, we might be able to create a robot with an emotional architecture similar to a human’s, wherein first-hand experiences with emotions like happiness and love teach the robot how to express these emotions in the future.

Emotions color every human interaction and are the foundation for living in a social world. As robots become a more integral part of our daily lives, we will benefit if they can understand and respond to our emotional states. Emotional robots may be able to communicate with us in ways we intuitively understand, for example showing a sluggish walk when their battery needs recharging, instead of a confusing panel of lights and beeps. The ultimate goal is not necessarily to create robots that can fall in love or fulfill all our human emotional needs, but to build machines that can interact with us in a more human way, rather than requiring us to behave more like machines.

The post Building the Emotional Machine appeared first on Footnote.

Scientific Research Needs More Funding, But Also Smarter Spending Tue, 26 Jan 2016 16:02:23 +0000 A recent boost in government funding is good news, but scientists also need to spend research dollars more efficiently.

The post Scientific Research Needs More Funding, But Also Smarter Spending appeared first on Footnote.

The 2016 federal budget approved in December was the product of tough political wrangling but contained at least one provision with bipartisan support: significant increases in government spending on scientific research. The budget for the National Institutes of Health (NIH) was increased by $2 billion, to $32 billion for the upcoming year. The CDC, FDA, and National Science Foundation also received funding increases, as did scientific research programs at NASA and the Department of Energy.

With this encouraging news, biomedical researchers can breathe easier – but only slightly. While the NIH received its biggest raise in more than a decade, when adjusted for inflation the 2016 budget is actually 15% smaller than it was in 2006 ($28.6 billion). The diminishing budget has been a growing concern for biomedical scientists because the federal government provides nearly two-thirds of funding for science and engineering research, including life sciences research, at universities.(a)

The discoveries and insights produced by this academic research not only push the limits of our understanding, they also have a clear and direct impact on our daily lives. Estimates suggest that research conducted at U.S. academic institutions has led to somewhere between one quarter and one half of drugs on the market today.1 This valuable research can be hampered by government funding cuts. As the NIH budget stagnated, the number of NIH-funded clinical trials for new drugs declined 24% from 2006 to 2014.(b) Reduced government funding also slows basic research,2 limiting advances in basic science that may one day lead to clinical applications.(c)

The need for increased government funding for research is clear. Yet we must also consider how existing funding can be spent more effectively. Little attention is paid to the fact that as much as 85% of research funding may be “avoidably wasted across the entire biomedical research range (e.g., clinical, health services, and basic science),” according to a recent series of articles on the subject in the medical journal The Lancet.3

Unfortunately, there is no single culprit for this inefficiency. A number of factors can contribute to wasteful spending, from poor study design and execution to inadequate follow-through on research findings.(d) Other major concerns include the limited reproducibility of findings and non-publication of negative results.4

The tendency of researchers to avoid publishing negative findings is an excellent example of how research dollars can be wasted. Scientists are incentivized to publish innovative, groundbreaking new findings, not the results of experiments that failed. Yet as important as it is to report positive findings that support a hypothesis, it is equally important to publish results that disprove a hypothesis. Publishing negative results saves resources by alerting other scientists to what might not work and allowing them to avoid the same mistakes.

Non-publication of results is particularly egregious when it comes to clinical trials, because they involve exposing participants to potential risks with an implicit understanding that results from the study will be used for broader societal benefit. According to one study, results from 32% of industry-funded and 18% of non-industry-funded clinical trials remain unpublished.5

Ignoring areas of waste like the non-publication of findings squanders research dollars, making life science research and drug discovery ever more expensive. Given the stretched budgets of federal funding agencies, making a greater impact with limited research dollars is becoming increasingly critical.

One way to spend research dollars more efficiently would be to expand the translational sciences model that aims to bridge the gap between the lab and the doctor’s office. In 2011, the NIH launched a major translational sciences initiative that encourages unique collaborations between academia, government, and industry to ensure that discoveries from the laboratory can be quickly and efficiently developed into interventions and treatments for patients.

The more this translational orientation can permeate the world of academic research, the more we can make every funding dollar count. For example, if graduate programs in biomedical research required students to periodically accompany clinicians on their hospital rounds, it would help these students frame their research questions more cogently and conduct and report their research more effectively, while also appreciating the urgency and potential impact of their research.

Another way to facilitate more efficient research is to encourage researchers and academic journals to publish negative results.(e) Reporting negative results from formal clinical trials is required by federal law; if the scientific community considered the spirit of that law and applied it to their work, this would prevent investment in failed approaches and free up funds to be used on research that hasn’t already been attempted. A collateral benefit would be that diligent and passionate graduate students could take greater risks and still receive the career benefits of academic publication, even if their research produces negative results.

Encouraging more efficient, effective research requires system-wide changes. It is time to expand the conversations initiated at conferences like 2015’s inaugural REWARD/EQUATOR conference on “Increasing value and reducing waste in biomedical research.”(f) In the same way that we regulate research involving animals or human stem cells, the scientific community should develop and implement general guidelines for Good Institutional Practice (GIP) in research.(g) Without a concerted community-wide effort, maximizing the impact of research dollars will be a losing battle.

Making research more efficient and focused on real world applications is not going to fix the problem of vanishing government funding. But until we are able to get the necessary Congressional support to significantly increase overall research funding, we should look more closely at how we are using the funds we have today.

Having worn the hat of a researcher as well as that of a cancer patient, I find the pressing need to develop innovative ways to use research dollars as effectively as possible exceedingly obvious. If we want to succeed at efforts like the “moonshot” to cure cancer that President Obama announced in this month’s State of the Union address, we need more funding, but also smarter spending.

Helping Students Pursue Research With A Purpose Mon, 11 Jan 2016 12:13:03 +0000 An innovative program encourages Brown students, faculty, and community leaders to connect research and practice.

The post Helping Students Pursue Research With A Purpose appeared first on Footnote.

This article was produced in partnership with Brown University’s TRI-Lab.

On a fall day in Providence, Brown University student Kate Nussenbaum watched her “scientific hero” Adele Diamond share research on child brain development with an audience of teachers, child care workers, Brown students, community leaders, and local policy makers.

Diamond, a cognitive neuroscientist at the University of British Columbia, discussed how children develop a particular set of skills, called executive functions, that are essential to success in school and throughout life. She also addressed a topic practitioners in the room were eager to hear about: what educators and caregivers can do to help children who lag behind.

Diamond’s presentation kicked off a day-long symposium in October 2014 that was the culmination of months of planning by a group of students (including Nussenbaum), professors, and leaders from local organizations. They were participants in a program called TRI-Lab that connects education and research to real-world problem solving.

TRI-Lab is part of a university-wide effort to foster collaboration between students, faculty, and the broader community and promote “engaged learning” that blends theory and practice.1 This new effort, the Engaged Scholars Program (ESP), expands the Brown educational experience by supporting students’ involvement in internships, volunteering, research, entrepreneurship, and other forms of experiential learning that complement their academic studies.

A key element of ESP is the integration of opportunities for research and practice into student concentrations (majors) within departments.(a) Another component is interdisciplinary programs like TRI-Lab, which has students collaborate with faculty and community leaders to learn about pressing challenges in the local community and pursue solutions to these problems.

The theme for TRI-Lab’s inaugural year was early childhood development, with a focus on the health and well-being of Rhode Island’s children.(b) Nussenbaum, who graduated in May with a degree in cognitive neuroscience and a focus on brain development, wanted to pursue engaged scholarship through TRI-Lab because of the unique chance it afforded to engage with the community outside the university.

“It seemed like the perfect opportunity to wrestle with some of the questions I had already been [considering] in a more tangible way,” Nussenbaum says. “How do I devote myself to research but still feel like I’m doing good in the world? How do I make sure the research I’m doing is relevant?”

Over the course of the year, she worked with and learned from fellow students specializing in cognitive science, public health, and urban education, as well as experts like Steve Buka, chair of the Department of Epidemiology at Brown’s School of Public Health, and Leslie Gell, director of Ready to Learn Providence, a local organization that works to improve early childhood education for low-income children.(c)

Nussenbaum, Gell, and Buka’s TRI-Lab group wanted to narrow their focus to a topic in child development that was of interest to both researchers and practitioners. Executive functions were a natural fit, as the topic has received a great deal of attention in both the academic and education worlds in recent years.

Executive functions are the cognitive processes by which our minds manage thought and behavior. They include the ability to juggle various pieces of information (working memory), to shift between thoughts and perspectives as a situation changes (cognitive flexibility), and to choose whether or not to engage in particular thoughts and actions (inhibitory control).

Almost everything we do that involves higher-level thinking relies on executive functions. They enable students to engage productively in class discussion, following the thread of conversation and contributing at the right moment. Studies have found that deficiencies in executive functions can lead to poor performance at school and work, substance abuse, violence, and a host of other challenges.2 Research3 has also shown that children from higher-income households perform better on tests of executive functions than children from lower-income households.(d)

This disparity has drawn the attention of educators because it may offer a potential explanation for – and possible solution to – the wide disparity in educational achievement between students from different socioeconomic backgrounds. If researchers and educators can figure out how to improve the executive functions of low-income children to match those of their high-income peers, we may begin to close the stubborn achievement gap.(e)

Solving this problem requires first closing another gap – between how academics think about “executive functions” and how the term is used in the early education world. By bringing together a diverse group of stakeholders, TRI-Lab revealed points of divergence between the worlds of research and practice. While both are concerned with similar problems and share common goals, they don’t communicate often and, when they do, they speak different languages.

The purpose of the TRI-Lab symposium – and the research brief the group produced to accompany it – was to close this gap and align practitioners and researchers on what executive functions are, why they matter, and how they can be improved.6 A corresponding goal was to encourage researchers who study executive functions to think about the educational needs and challenges of the community and how their research might be applied in the classroom and the home.

For practitioners like Leslie Gell, it’s the application of research that is most essential. Her organization recently received a $3 million grant from the U.S. Department of Education to implement a program designed to strengthen executive functions in elementary schools across Providence.(f) Understanding the research on what works is essential for practitioners like Gell who are trying to improve executive functions among low-income children.7

As for Nussenbaum, she won a Rhodes Scholarship to study the science of attention, memory, and learning at Oxford. Seeing the myriad ways different people approached the same problem in TRI-Lab helped her realize that she wants to focus her efforts on research. The program also made a lasting impression on how she wants to conduct her academic work.

“Even though I’ve decided to pursue the research path, it’s really important to… be thinking about how this research can eventually be applied, how I can do work to make sure the applications of this research actually move forward,” says Nussenbaum. “Those are definitely questions that are going to remain on my mind.”

Another lesson Nussenbaum learned from her engaged scholarship at Brown? There are no easy answers to the question of how to connect academic research and real-world policy and practice. It was challenging for a group of people with diverse backgrounds, interests, and goals to come together and develop a high-impact project everyone could contribute to.

Yet while there may not be a clear-cut path for how to do this kind of work, it all starts with expanding student learning to the world beyond the classroom.

Could Altering Brain Waves Help People With Schizophrenia? Thu, 10 Dec 2015 11:14:17 +0000 Abnormal brain activity in certain neurons can impede learning in mice. Could this finding help people with schizophrenia facing the same challenges?

The post Could Altering Brain Waves Help People With Schizophrenia? appeared first on Footnote.

Learning patterns and understanding how rules change from one context to another is something we do every day, whether we’re navigating social situations or at home sorting laundry. This intuitive, seemingly simple ability is produced by an incredibly complex set of interactions within our brains. When something goes wrong with the neurons behind these interactions, as it does in people with schizophrenia, it can impede our ability to learn patterns and rules.

When most people think of schizophrenia, they focus on the delusions, hallucinations, and paranoia that are the hallmarks of the disease. However, these symptoms are often accompanied by problems with attention, learning, and decision-making, as well as social withdrawal and lack of motivation – symptoms that are less well known but may actually be more important. At the lab I’m a part of at UCSF, we’re working to understand the causes of these cognitive challenges and, in the process, gain insight into how attention and learning happen in a typical human brain.

To understand how our brain makes basic activities like pattern recognition and context shifting possible — and why these capabilities fail to develop in certain people — my colleagues and I zoom in on the cellular level, studying specific neurons and brain waves that may be responsible for these cognitive activities. Scientists have suspected that a particular subset of neurons in the prefrontal cortex — known as fast-spiking (FS) interneurons — are important in schizophrenia because studies of brain tissue from individuals with schizophrenia consistently show abnormalities in these neurons.

Scientists have also speculated that FS interneurons play an important role in cognition more generally, because the activity of these neurons drives brain waves called gamma oscillations that increase during cognitive tasks related to the ability to learn and apply new rules.(a) We’ve known for some time that gamma oscillations are lower when people with schizophrenia are confronted with the same tasks, but scientists hadn’t determined whether this was the root cause of the cognitive deficits found in schizophrenia or simply an associated side effect.

To determine if problems with fast-spiking interneurons are, in fact, responsible for the cognitive deficits in schizophrenia, I studied mice that were genetically engineered so that these particular neurons were defective.1 The neurons’ abnormalities manifest only after the mice go through the equivalent of human adolescence, making them a great model system for schizophrenia, a condition whose symptoms often arise in a person’s late teens or early twenties.

My research found that, when these modified mice were young, they were just as capable of learning new rules about where their food was placed as their normal littermates. By early adulthood, when the defects in their fast-spiking interneurons became prominent, they lost much of this capability and no longer exhibited the same upsurge in gamma oscillations that accompanied this learning in their normal littermates.

To confirm that defects in the FS interneurons were causing the mice to have problems learning rules, I used a genetic tool to stimulate or inhibit these neurons and the gamma oscillations they produce. When I disrupted FS interneurons and the accompanying gamma oscillations in normal mice, the mice lost the ability to learn new rules. Conversely, when I stimulated FS interneurons and gamma oscillations in the modified mice, I was able to completely restore their cognitive abilities and they were now able to learn new rules just as well as their normal littermates. This enhanced learning lasted for a week after treatment.

I was also able to increase gamma oscillations – and restore the ability to learn new rules – in the modified mice by treating them with Klonopin. This medication is currently marketed as an anti-anxiety medication for schizophrenia and is usually given at such high doses that it acts as a sedative, reducing cognitive functioning. However, my research suggests that, perhaps at lower doses or in more targeted formulations, it could actually be used to enhance cognition in people with schizophrenia.

As my findings about Klonopin suggest, the discovery of a connection between fast-spiking interneurons and learning problems may pave the way to new diagnostic tools and treatment options for people with schizophrenia. Abnormal gamma oscillations during cognitive tasks might serve as a biomarker in the diagnosis of schizophrenia or as a tool for monitoring treatment progress. Scientists may also test non-pharmacologic, non-invasive means to stimulate gamma oscillations through new methods like transcranial magnetic stimulation and transcranial direct-current stimulation, or possibly even through techniques like biofeedback and ancient practices like meditation.(b)

While doctors and patients may never know all the details about what fast-spiking interneurons are or what they do, researchers must understand what’s going on at the cellular level if they want to help people with schizophrenia. As new tools make it easier to measure brain waves and track the activity of specific neurons, we’ll continue to discover how what happens at this microscopic level ripples up to affect overall cognitive function and capabilities like learning and attention, and these discoveries will lead to new tools for understanding and treating mental illnesses like schizophrenia.

Should You Really Swear Off Bacon? How Statistical Confusion Provoked An Online Panic Wed, 04 Nov 2015 19:31:18 +0000 A closer look at a recent report linking processed and red meat with cancer.

The post Should You Really Swear Off Bacon? How Statistical Confusion Provoked An Online Panic appeared first on Footnote.

By now you have probably read the dramatic headlines proclaiming that eating meat may pose as much of a cancer risk as smoking cigarettes. This breathless reporting comes on the heels of the recent classification of processed and red meats as carcinogens by the World Health Organization (WHO).(a) The WHO’s announcement is obviously important health news, but what does it actually mean for our day-to-day lives? Should we swear off burgers and bacon for good?

Digging into the WHO report, we can see it contains two important types of information.1 The first is the classification of processed meats as Group 1 carcinogens based on the strength of the evidence linking them with cancer. This classification means that, like cigarette smoking, processed meats are definitely capable of causing cancer in humans. In addition to cigarettes, the category of Group 1 carcinogens includes a large number of chemicals (118 and counting) that most of us have never heard of, as well as more familiar substances such as alcohol, air pollution, oral contraceptives, and solar radiation (i.e. sunlight).(b) Red meats were classified as probable carcinogens (Group 2A) and share that dubious honor with anabolic steroids, DDT, shiftwork that interferes with circadian rhythms, and acrylamide, a substance found in burnt toast.

Some of these Group 1 and 2A carcinogens are basically unavoidable, while others can be eliminated through lifestyle choices. How each of us decides which of these substances are worth the health implications comes down to the second key piece of information contained in the WHO report: the risk associated with each substance. Although processed meat and smoking have both been assigned to the same category, this does not mean that they pose an equal risk. Rather, a statistic called the “relative risk” is crucial in determining just how potentially dangerous each of these substances is.

In statistics, a relative risk (or risk ratio) is a number representing the chances of a specific event occurring in a particular group, relative to a baseline or control group. Relative risks are often used in epidemiological studies as a handy method for comparing the incidence of some diagnosis or outcome in one group to the incidence of that same diagnosis or outcome in another, different group. For example, how much does my risk of developing lung cancer increase if I am a smoker, as compared to a non-smoker? The numerical answer to this question is the relative risk of smoking.

Relative risks are typically reported in the media as percentage scores, which can make them seem frighteningly large. Take the WHO’s finding of an 18% increase in colorectal cancer risk for every 50 gram (1.8 ounce) increase in daily consumption of processed meat.(c) An 18% jump sounds like a lot, but for this number to have meaning, it is crucial that we compare it to a baseline of some sort. For convenience, let’s make our control group the average American, and compare that to a hypothetical group of “at risk” individuals who are eating 50 grams more processed meat per day than the average American (who, it should be noted, is already consuming a substantial amount of meat).(d) We’ll call the former “average meat-eaters,” and the latter “above-average meat-eaters.”

The importance of understanding relative risk becomes clear once we know that the rate of colorectal cancer in the American population is relatively low. According to the American Cancer Society, the average American’s lifetime risk of being diagnosed with colorectal cancer is about 5%.2 So, if I am an above-average meat-eater, consuming 50 grams more processed meat daily than the average American, what are my odds of being diagnosed with colorectal cancer? Instinct might push us to add the 18% increase to the 5% baseline, for a colorectal cancer risk of 23%. Indeed, that would be scary! But this is not what the numbers tell us. Instead, the 5% baseline risk is increased by 18%, raising my risk by 0.9% to just about 6%. If I am even further above average, eating 100 grams more processed meat than the average American each day, my lifetime risk of developing colorectal cancer increases by 36%, to 6.8% overall.(e)
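The arithmetic above can be made explicit: a relative-risk increase multiplies the baseline risk rather than adding to it. A short sketch, using the figures from the WHO report and the American Cancer Society:

```python
def absolute_risk(baseline, relative_increase_pct):
    """Lifetime risk after applying a relative-risk increase, given as a
    percentage. The increase scales the baseline; it is not added to it."""
    return baseline * (1 + relative_increase_pct / 100)


baseline = 0.05  # ~5% average American lifetime colorectal cancer risk

# 50 g/day of extra processed meat: +18% relative risk -> 5.9%, not 23%.
print(absolute_risk(baseline, 18))   # 0.059

# 100 g/day of extra processed meat: +36% relative risk -> 6.8%.
print(absolute_risk(baseline, 36))   # 0.068
```

The common mistake is treating the 18% as an absolute increase (5% + 18% = 23%) instead of a multiplier on the baseline (5% × 1.18 ≈ 6%).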

For perspective, compare these figures to the relative risk of developing lung cancer posed by smoking. Smoking cigarettes increases a man’s lifetime lung cancer risk by 2300%, compared to a man who has never smoked. Women smokers experience a 1300% increase in their lifetime risk, compared to non-smoking women.4 These numbers are much larger than the 18% or 36% increase in colorectal cancer risk that comes from consuming 50 or 100 grams more processed meat than the average American each day.

While the risks are certainly not as large as some in the media portrayed them to be, they are not negligible – the number of worldwide cancer deaths attributable to diets high in processed meats is estimated to hover around 34,000 per year.5 Still, for comparison, an estimated 200,000 cancer deaths per year are attributable to air pollution, 600,000 are connected to alcohol consumption, and 1,000,000 are linked to cigarette smoking. To say that processed meats are as dangerous as cigarettes is true in the sense that both of them have been classified by the WHO as “definitely capable of causing cancer in humans.” Yet saying so is dangerously misleading, as the magnitude of the risk associated with cigarettes is much, much higher.

What is important is that these lifestyle choices are made from an informed perspective and not based on hysterical reporting, attention-grabbing and click-baiting headlines, or the sharing of such headlines without context on social media. To its credit, the WHO’s report is very even-handed, and its website is an excellent source for a balanced interpretation of the research. But framing risk ratios to seem excessively alarming is all too tempting in today’s world of fast and sensational science reporting, and all too lucrative when page views can bring in big money. The next time you see a report on the danger of a given behavior or substance, instead of slipping into an immediate panic or lifestyle reboot, remember the concept of relative risk and ask yourself: “Dangerous relative to what?”
