Saturday, November 29, 2014

Where to submit your paper. Or “If at first you don’t succeed, fail, fail again … then try open access”


The confluence of two experiences motivated this post. First, I was involved in a conversation on Twitter (below) that was reacting to suggestions (in a commentary in Nature) that the high volume of open-access papers was the cause of the reviewer fatigue that so often bedevils journals and editors (such as myself). At one point in this thread, someone pointed to a blog post titled “Why I Published in PLoS ONE. And Why I Probably Won’t Again for Awhile.” The main point of that post was to contrast the desire of young scientists to better the world by publishing in open-access journals with the perception that senior scientists don’t view a paper published in open-access journals as equivalent to a paper published in a more traditional journal. This latter sentiment was similar to my own experience on search committees in which candidates would be considered less impressive if they published too much in open-access journals.

The second motivation came from our weekly lab meetings. Near the start of each meeting, we go around the room asking “Who had a paper or proposal accepted or published this week?” And then, after a hopefully long discussion, we ask, “Who had a paper or proposal rejected this week?” I kind of like this two-part question because it enables us to get excited about our successes while also making the failures seem more acceptable. (“Oh, it happened to her too, so my own rejection is OK.”) And we can also complain about reviewers and can discuss how we will make our papers better in response. It just so happens that, over the past few months, no one has been able to speak up for the first question and pretty much everyone has spoken up for the second. One lab member even noted that rejection seemed to be a recent trend in the lab.

These two experiences led me to consider the question: “Should you – as a young scientist – take the easy route and publish in open-access journals, or the hard route (likely entailing multiple rejections) of trying more traditional journals, either the big boys or the classic society-based journals?” First, let’s consider the benefits of open access. The basic idea is, of course, that everyone will see the paper and you won’t waste your time cycling through journals that don’t think your paper is “important enough.” Moreover, citation rates are pretty decent for open-access journals, right? At least, that’s what everyone says. I would like to put this presumption to the test based on my own experiences.

I have published three papers in PLoS ONE (and several in other open-access journals). I quite liked all three papers and first tried traditional journals, but the papers were rejected a few times and the students wanted to move on with their lives and research, so we sent them to PLoS ONE, which accepted them quickly. So I decided to ask: How well have these papers been cited relative to papers I published in the same year in other journals? It turned out that I had a decent sample size because my two early PLoS ONE publications (2007 and 2009) happened to occur in years when I published a good number of papers (10 in 2007 and 17 in 2009). So I simply tallied the cumulative number of citations (here always from Web of Science simply for convenience; Google Scholar tells the same story) for each paper I published in those years, ranked them in order of citations, and asked where the PLoS ONE papers fell in relation to the others.
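The tally-and-rank exercise is easy to reproduce for anyone’s publication record. Here is a minimal Python sketch; the citation counts below are invented for illustration, not my real Web of Science numbers:

```python
# Rank one year's papers by citation count and locate a focal paper.
# The counts below are invented for illustration, not real data.
papers_2007 = {
    "PLoS ONE paper": 12,
    "Journal A paper": 85,
    "Journal B paper": 60,
    "Journal C paper": 44,
    "Journal D paper": 23,
}

def citation_rank(papers, focal):
    """Return the focal paper's rank (1 = most cited) among its cohort."""
    ranked = sorted(papers, key=papers.get, reverse=True)
    return ranked.index(focal) + 1

print(citation_rank(papers_2007, "PLoS ONE paper"), "of", len(papers_2007))
```

With these invented numbers the PLoS ONE paper comes out 5 of 5 – dead last – which is exactly the comparison I ran on my own records.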

My previous two PLoS ONE papers are cited (Web of Science) least among all my papers published in those two years.
The graph tells the whole story. For my papers published in each of the two years, the PLoS ONE paper ranked DEAD LAST in terms of citations. I did have an a priori expectation that these papers hadn’t been heavily cited, but I had no idea it would be this bad. Moreover, I want to reiterate that I felt these two papers were interesting, well conducted, and potentially important – here and here they are for your citation convenience. Indeed, both were long included in a list of my favorite 15 papers. Of course, the alternative is simply that they really weren’t that good and I just can’t see it. Perhaps so, but it remains clear that publishing in PLoS ONE will not enhance your citation rate for a given level of paper quality. So much for one of the supposed benefits (or at least lack of costs) of open access – at least in my case, which I am sure you will agree is what matters here. Of course, this general point is also clear intuitively: How many of us routinely check the papers coming out in Evolution or Ecology versus Ecology and Evolution (or PLoS ONE or PeerJ or Scientific Reports or anything starting with International Journal of …)?

OK, so clearly I am going to argue that you should forgo open access and publish in traditional journals – but what of the annoyance, time consumption, and stress of rejection? Well, it is certainly true that rejection is common – for everyone. Many young scientists are stressed out when a paper gets rejected and feel that this somehow reflects on the quality of the work. They also often feel they are getting rejected more often than other scientists. The reality is that even established scientists get rejected all the time. A first proof of this maxim comes from a review paper by Cassey and Blackburn (2004) in BioScience (cited only 15 times!) that surveyed the most successful ecologists (those with many publications in major society-based journals) and asked them how often their papers were rejected. The answer is: often, as shown in the graph below. Indeed, many of these successful ecologists have had the same paper rejected multiple times, and some of them still had at least one paper that they had failed to publish despite many tries.

Acceptance rates for papers submitted by the most successful ecologists. From Cassey and Blackburn (2004).
The above data are for the most successful ecologists and are from the good old days when rejection rates were not so high. How about some more recent data from a more mediocre (but established) ecologist (or, more precisely, evolutionary biologist)? 

I am a compulsive record-keeper. In this context, I have recorded every single submission of a paper on which I have been an author, as well as the outcome of those submissions. The sample size is pretty large now and allows me to illustrate the frequency of rejection and some factors that influence it, as summarized in the table below. I have been involved in a total of 275 submissions to journals (including multiple submissions of the same paper to different journals), of which 148 led to acceptance – an acceptance rate of 54%. However, some of those submissions were invited papers or commentaries, or appeared in special issues for which I was an editor. Taking away those near-sureties, my acceptance rate decreases to 43%. On the other hand, I have submitted 45 manuscripts to “big” journals (Science, Nature, PNAS, Current Biology, PLoS Biology) and only 1 was accepted – the first one I submitted. (I realize this looks crazy – 45 such submissions – but more about that later.) Removing these submissions from the tally as well, the acceptance rate jumps back up to 53% for the remaining “real submissions.” Finally, considering only “real submissions” on which I was first author, the rate jumps up again, to 68%. Whew, lots of stats that all simply say: rejection is an ever-present companion in science.
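For fellow record-keepers, this bookkeeping is easy to script. A sketch in Python: the totals (275 submissions, 148 acceptances), the 45 big-journal submissions, and the single big-journal acceptance are the counts given above, while the near-surety counts are assumed values chosen only so that the rates come out as reported:

```python
# Acceptance-rate bookkeeping as described in the text.
total_submissions = 275
total_accepted = 148

# Invited papers, commentaries, and special-issue papers ("near-sureties").
# These two counts are assumptions, not numbers from my actual records.
sure_submissions = 55
sure_accepted = 54

big_submissions = 45   # Science, Nature, PNAS, Current Biology, PLoS Biology
big_accepted = 1       # the first one I submitted

overall = total_accepted / total_submissions
no_sureties = (total_accepted - sure_accepted) / (total_submissions - sure_submissions)
real = (total_accepted - sure_accepted - big_accepted) / (
    total_submissions - sure_submissions - big_submissions)

print(f"overall: {overall:.0%}, minus sureties: {no_sureties:.0%}, "
      f"real submissions: {real:.0%}")
```

This prints 54%, 43%, and 53%, matching the rates in the text.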

Rejection/acceptance stats for my own submissions. 
I am not ashamed of these numbers, nor of my relatively low number of open-access publications – because both facts reflect sending papers to the very best journals with low acceptance rates that then subject them to extremely rigorous and critical review, not just of the methods but also of their importance. Of course, it means that one needs to develop a mechanism to cope with rejection. My own mechanism – and the one that I try to teach my students – is that as soon as I submit a paper, I ask myself, “OK, where will I submit this paper when it gets rejected?” (Hope for the best, prepare for the worst.) That way, a rejection simply means that I am days away from submitting a paper – turning a bad feeling into a good one. I also tell my students (and myself) that journals reject papers for all sorts of arbitrary reasons and it really doesn’t mean the work isn’t any good.

While letting this post mature for a few days, I came across a great video of famous failures by people who went on to become wildly successful. This video also reminded me of a story about Tim Mousseau failing (or at least not immediately passing) his qualifying exam at McGill and having to write a remedial paper that has gone on to be cited more than 1000 times. How’s that for turning failure into success? I have heard of many other instances of papers getting rejected from a journal only to be greatly improved, some so much so that they end up getting published in a much “better” journal, such as Science/Nature.


So how high should one shoot? I have been a part of 45 submissions to big journals and all but one failed. Yet I don’t regret them (at least not all of them) – for several reasons. First, analyses (published in Science, of course) have shown that papers submitted to, and rejected from, Science/Nature end up receiving more citations than those that were first submitted elsewhere. Perhaps these were good studies to begin with and were written in a general way, and perhaps the review process improved them. Second, I think a number of my own papers submitted to Nature/Science were very good studies. (Perhaps better than many other papers published there – but then this is the sentiment of everyone who gets rejected from those journals, otherwise they wouldn’t have submitted there in the first place.) Indeed, at least five of my papers rejected from Nature/Science (some from both) have been cited more than 50 times. Three of the rejected papers ended up in Molecular Ecology and each is doing well: one published in 2012 already has 67 citations and two published in 2014 have received considerable attention (one of these was previously rejected from 8 other journals). So, if you have a great study, it is fine – good even – to submit it to Nature/Science. You will write it better and more succinctly, and you might even get some great reviews.

Articles published on their first submission (first intents) are cited less often than articles published in the same journal/year that were first rejected from elsewhere (resubmissions). Data from Calcagno et al. (2012, Science).

My favorite route, though, is society-based journals, like Ecology, Evolution, American Naturalist, Proc Roy Soc B, J Evolutionary Biology, J Animal Ecology (I still haven’t cracked the last of these nuts), and so on. These journals are where I want all of my work to end up (and where you should want yours to end up too) – it looks good on your CV, and many more people see it and cite it. But, wait, I hear you saying: those papers won’t be accessible to the rest of the world because they sit behind an expensive subscription. Nonsense. Anyone can get access to any paper from any journal – many papers are posted on someone’s website and, for those that aren’t, all you have to do is email the author to ask for a copy! (I admit getting papers is harder – but certainly not impossible – without institutional access.) Moreover, you can pay for open access in those journals at a cost that isn’t much higher than at PLoS ONE or many other open-access journals.

In summary, I suggest you make publishing in traditional and well-respected general or society-based journals your goal, learn to deal with rejection, and send a paper to PLoS ONE or another open-access journal only when you are so sick of it that you vomit (actually vomit, that is, not just feel nauseous). (Or if you really need quick publications to graduate.) Someone is bound to cite it someday – probably. With this in mind, perhaps you might like to cite the cool new paper we just published this year in PLoS ONE.

-----------------------------------------------------

Note added Dec 3: see my follow-up post (“Where to submit your paper – response to reviews”), which responds to post-publication comments on the present post.

-----------------------------------------------------
Just for fun:
In a tongue-in-cheek vein, see my parody of open access journals here:

Monday, November 24, 2014

PITCHFORK SCIENCE: Guppies, Stickleback, and Darwin’s Finches.

[This is a cross-posting with the EXEB blog at Lund - thanks for the reciprocal opportunity Erik. And thanks for your earlier post here on Eco-Evo-Evo-Eco.]

I study Trinidadian guppies, threespine stickleback, and Darwin’s finches, surely 3 of the top 10 evolutionary biology “model” systems - for vertebrates at least. I thus fall at one extreme (or is it three extremes?) on the “pick a model system and use it to answer my question” versus “develop a brand new system all my own” continuum. Many students and postdocs find themselves facing their own decisions about where to position themselves along this continuum. Should they take the shortcut of working with an established system so they don’t have to work out the simple details and can get right to addressing the big general questions? Or should they forge their own path and become an expert in something brand new? It might seem, based on the above listing, that I consciously took the first approach but the reality is something quite different. In truth, I used a “follow your nose” coincidence-and-serendipity approach to study system choice. I here trace my own personal history in these research areas before closing with some general thoughts on how to choose a study system.

Why I study salmon: a 16 year old me with a steelhead from our cabin on the Kispiox River (BC, Canada).
I worked on salmon for my MSc and PhD, largely because I grew up with salmon fishing as my primary passion. Thus, I began studying salmon simply because I liked them and liked fishing for them. This led me to choose an institution (University of Washington – UW), department (School of Fisheries), and supervisor (Tom Quinn) ideally suited to immerse myself in salmon work. As my graduate work progressed, I very gradually became more and more interested in general questions in ecology, largely through exposure to the research of other people in the department. I even started subscribing to Ecology in addition to – of course – Fisheries. Yet my thinking remained salmon-centric: “What can ecology tell me about salmon?” Nothing wrong with that, of course. Then, when visiting home for Christmas in 1994, my mother gave me a book: The Beak of the Finch by Jonathan Weiner. When your Mom gives you a book for Christmas and you then spend the next week at home… well, you better read it.

The laboratory for my PhD: Lake Nerka, Wood River, Alaska.
The book was amazing. It described in wonderfully readable prose the research of Peter and Rosemary Grant on Darwin’s finches in the Galapagos Islands. What struck me the most, while reading beside the heater vent looking out at the blowing snow and -40 C weather (literally!), was Jonathan’s description of how the Grants had documented generation-by-generation rapid evolution of finch beaks in response to natural selection resulting from environmental change. Wow – you can actually study evolution in real time! It was my own eureka moment and, in short order, I became captivated by the idea. As soon as I got back to UW after Christmas, I went to the library and photocopied EVERY paper on Darwin’s finches (ah, libraries and photocopying – the good [and bad] old days). From then on, almost as though my brain had achieved an alternative stable state, my thinking was inverted to become: “What can salmon tell me about evolution?” 

My MSc and PhD work focused on sockeye salmon - this one in Knutson Bay, Lake Iliamna, Alaska.
Salmon did tell me a lot about evolution. I even edited a book (Evolution Illuminated, with one of my evolutionary idols, Steve Stearns) about merging evolutionary theory and salmon research. However, when one starts focusing on a topic (evolution) rather than an organism (salmon), one starts to become irked by aspects of the organisms that are not optimal for addressing the topic. Most notably, it is very hard to do experiments with salmon unless you have lots of water, lots of space, and lots of time. So, when thinking about a postdoc, I started talking to folks about which systems might allow me to better address basic evolutionary questions. I ended up moving in two directions.

The laboratory for my first postdoc. For more than a month of glorious weather, I camped on a small island in a small lake (Mackie Lake) at the end of a 4-wheel drive road. Those are my mesocosms floating in the lake and projecting from the island.
The first was the University of British Columbia (UBC) – because I didn’t want to go too far from my girlfriend (now wife) who was still at UW. I visited UBC and went from prof to prof telling them of my interest in a basic evolutionary question – the balance between divergent selection and gene flow – and asking if they knew of a system that would be good for testing my ideas. Many great suggestions were made, but Rick Taylor insisted he had the perfect system: Misty lake-stream stickleback – and he was right. So I started working on stickleback not because they were a model system, but because someone suggested they would be well-suited for my question and because it let me stay reasonably near my sweetheart.

A threespine stickleback guarding his nest.
The second direction came about through a conversation with Ian Fleming, who suggested that I should work with David Reznick on guppies. I hadn’t even considered this possibility, but I knew a bit about the system (which is also described in The Beak of the Finch) and it seemed cool. So I went to UCR and met with David and talked about how we might use guppies to study the interaction between selection and gene flow. David said he would be happy to help me with this work but that he didn’t have any money for me – and so I offered to write a full NSF proposal. I was just gearing up to do so when I heard that I had received an NSERC (Canada) postdoctoral fellowship to work with Rick Taylor on the Misty system – so off I went to stickleback, leaving guppies behind.

My favorite wild guppies captured in my first year of sampling, 2002.
UBC was great, an outstanding place for nurturing interest and insight into general questions in evolutionary biology, but one must eventually move on. My next postdoc was the Darwin Fellowship (I applied because of the title) at the University of Massachusetts (UMASS) Amherst, working with Ben Letcher on salmon again (hard to shake the habitat). While at UMASS, my guilt started building about telling David I would write an NSF grant and then not having done so, so I went ahead and wrote one, which got funded on the second shot (after bringing in my salmony lab-mate from Tom’s lab, Mike Kinnison). So my work on guppies eventually developed owing to guilt about not carrying through on something I said I would do.

The laboratory for our guppy work - here the Paria River, Trinidad
While at UMASS, my office happened to be near that of Jeff Podos, who was working on Darwin’s finches. Near the end of my Darwin Fellowship, Jeff received an NSF Career grant and had money to burn – I mean invest. Jeff knew of my interests and asked if I wanted to come along to the Galapagos on the project (he recalls me asking – or perhaps begging – to come with him), and of course I immediately said yes. So my work on finches was simply a case of being in the right place at the right time. The experience was every bit as exciting as promised in my daydreams that cold winter back in 1994. Several years later, Jonathan Weiner called to talk about my salmon work and I was able to tell him how influential his book had been and how it actually brought me (without any plan) to work on finches.


The laboratory for our finch work, presided over by a marine iguana.
In short, a large amount of coincidence and serendipity determined my choice of study systems. Once in each of the three systems, I became enamored with them and never left. I now have 25 papers on stickleback, 22 papers on guppies, and 11 papers on finches, and I have no intention of ever pulling back from any of these systems. I have also published 33 papers on salmon, and I continually look for new opportunities for additional work on them.

Perhaps my favorite finch photo.
Peter Grant once told me that, in conversation with Daniel Pauly at UBC, Dan told him that he (Peter) was a “point person” whereas he (Daniel) was a “line person”: a point person being someone who takes a single subject/system (finches) and looks at every aspect of their ecology and evolution, and a line person being someone who takes a single subject (fisheries) and looks at it across many systems. I guess that makes me a pitchfork person – trying to go into depth in three systems. Of course, this means that I can’t get too deep in any one system, much to my frustration. However, comparing and contrasting results from the three systems has proved fascinating. For instance, I study ecological speciation in all three systems with essentially the same methods (catching, banding/marking, measuring, recapturing, genotyping) focused on revealing the same processes (disruptive/divergent selection, adaptive divergence, assortative mating, gene flow). The similarities and differences in results obtained from the three systems have proved very instructive and motivational. In fact, my favorite research talk involves walking through a comparative story of ecological speciation in the three systems.
The title slide of my pitchfork talk.
Beyond how many systems one works in, I need to return to the question of working with model (developed) versus new (undeveloped) systems. As noted earlier, a benefit of working in a model system is that one doesn’t have to do as much background work (although no system is anywhere near as well understood as the literature makes it seem), whereas a cost is that you are never known as the expert in that system (because the experts are the senior folks working on the same thing). The cost–benefit payoff is not easy to calculate, and so the temptation for many students and postdocs is to spend a lot of time debating the pros and cons of the different approaches. I think all this angst is a mistake (or at least suboptimal) and that one should instead follow one’s nose (and Mom’s book recommendations). I think everyone should work on the systems and with the people that they find the most interesting and inspiring – not the systems that have the best-described genomes (as an example). These inspiring systems might be model systems or they might be new systems or both (I also have students who work on non-model systems), but they are – most importantly – the systems that feel right at the time, not the systems that have been rationalized based on a logical calculation of optimal career advancement. It worked out fine for me (and many others) – although I am sure my colleagues would argue I could still use considerably more career advancement.

-----------------------------------------

Resources:

An interesting perspective by Joe Travis on question-based versus system-based science: Is it what we know or who we know? Choice of organism and robustness of inference in ecology and evolutionary biology

Friday, November 14, 2014

Why stop there? Probing species range limits with transplant experiments

[ This post is by Anna Hargreaves; I am just putting it up. –B. ]

Understanding why species occur where they do is a fundamental goal of ecology.  Predicting where they might occur in the future is also an increasingly important goal in conservation, as invasive species spread and native species respond to climate change.  One approach to explore both issues is to study the edges of species distributions and the processes that currently limit them.

Do species stop occurring where things suck too much?
A satisfyingly simple explanation for range limits is that each species has an optimal set of conditions under which it thrives.  As it moves away from that optimum, individual fitness declines and populations dwindle.  At some point populations are unable to sustain themselves (we call this break-even point the niche limit) and the species disappears from the landscape.


Canada’s humans show a classic niche-limited northern distribution, huddling along the southern border for warmth. Statistics Canada. 

The best way to test why a species doesn’t occur somewhere is to move it there and see what happens (provided one is not dealing with humans). Transplant experiments comparing fitness within and beyond the range can test for predicted fitness declines.  Assuming experiments are adequately replicated in time and space, transplant success at the range edge but failure beyond suggests the range limit coincides with the species’ niche limit.

Adequate replication is no small feat, however, and makes good transplants labours of love (often minus the love by the end).  Since strong experiments will only ever be conducted on a small subset of species, those of us studying range limits must eventually ask ourselves, “Is there any hope of predicting across species, or is it all just ‘stamp-collecting’?”

Meta-analysis
To address this slightly uncomfortable question, we tested for patterns among transplants of species or subspecies across their latitudinal, longitudinal, or elevational range (111 tests from 42 studies).


A smattering of the 93 taxa transplanted beyond their range.  Most studies used plants, which obligingly stay where you transplant them.  

We tested how often range limits involve niche constraints by testing how often performance declined from the range edge (ideally) or interior (if there were no edge transplants) to beyond.  To compare among studies that measured everything from lifetime fitness to clam respiration, we calculated the relative change in a given performance parameter:

relative change = (performance within the range − performance beyond the range) / mean performance
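In code this effect size is a one-liner. A minimal sketch, assuming “mean performance” means the mean of the within- and beyond-range values (the paper defines this precisely; the numbers here are invented):

```python
def relative_decline(within, beyond):
    """Relative change in a performance parameter:
    (performance within range - performance beyond) / mean performance.
    "Mean performance" is taken here as the mean of the two values
    (an assumption; see the paper for the exact definition)."""
    mean = (within + beyond) / 2
    return (within - beyond) / mean

# E.g. lifetime fitness of 1.0 inside the range vs. 0.4 beyond it:
print(relative_decline(1.0, 0.4))
```

A positive value means performance declined beyond the range, and with non-negative inputs the index is bounded between −2 and 2, which keeps measures as different as lifetime fitness and clam respiration on a comparable scale.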

What did we find?

Fitness declined beyond species’ ranges in 75% of 111 tests, and the average decline was significant.  The percentage was even higher when studies measured lifetime fitness (83% of 23 tests).  This strongly supports the importance of declining performance (niche constraints) in limiting species distributions.

How often do range limits coincide with niche limits?  We restricted this analysis to studies that included transplants at the range edge.  Without them one cannot tell whether range limits coincide with niche limits or exceed them via sink populations (middle vs. right panels of Fig. 1).  Although most range limits involved niche constraints, only 46% coincided with niche limits.


Fig. 1. Hypothetical results comparing transplants at the range interior, limit, and beyond. Fitness declines beyond the range in all cases, but only the middle scenario suggests range and niche limits coincide.  Numbers give the % of 26 meta-analysis tests that fit each range limit (RL) vs. niche limit (NL) pattern.

Discrepancies are generally explained by dispersal
If species fail to occupy suitable habitat beyond the range, they are dispersal limited.  If edge populations occupy unsuitable habitat (a phenomenon for which range limits have been nicknamed “the land of the living dead”; Channell 2000, Nature) they must be maintained by dispersal from the range interior to persist (Fig. 2).


Figure 2.  The full array of range limit vs. niche limit possibilities.

So, while most range limits involve niche constraints, dispersal decouples many from the species’ niche limit.  Interestingly, while latitudinal ranges were often dispersal limited, elevational ranges were more likely to exceed niche limits via sink populations.  This makes sense given the much longer dispersal distances needed to traverse a latitudinal gradient than an equivalent elevational one.

Which niche constraints matter?
Amid growing concern over modern climate change, a lot of effort is spent predicting how species distributions will respond, with most models assuming range limits are imposed by climate.


Beech trees in Patagonia, Argentina don’t like cold winters either.

While climate is undoubtedly important, there are many examples of ranges limited by other factors, including interactions among species.  As species interactions are messy to include in range shift projections, it would be useful to know how important they are, and when.

Interactions with other species are important
We compared transplants in natural environments to those that softened potential biotic interactions (e.g. reduced competition or herbivory).  Beyond-range fitness declines were more severe when transplants were subject to all possible biotic interactions (Fig. 3).


Figure 3. Allowing all possible biotic interactions results in bigger fitness declines beyond species’ range limits (P = 0.0097).

We also tested an old hypothesis that biotic interactions are especially important at a species’ low-elevation and low-latitude range limits:




We compared the drivers of high vs. low elevation limits and high vs. low latitude limits.  As predicted, most high-elevation range limits were imposed by purely abiotic factors (e.g. climate), whereas more than 50% of low-elevation limits were imposed at least partially by species interactions (Fig. 4).  The same pattern exists for equatorial vs. polar limits, but there were too few studies to test it statistically.


Figure 4. Interactions among species are more important in imposing low-elevation (and low-latitude) range limits. Analyses included only transplants into natural environments that provided enough data to assess the causes of the range limit studied.

Not just stamp collecting
Our meta-analysis of transplant experiments revealed broad geographic patterns in the relative importance of niche constraints and dispersal, and in biotic vs. abiotic constraints (it also revealed that really good experiments are rare and sorely needed, if you’re tempted).

Implications for predicting climate-driven range shifts
First, we might expect faster relative shifts of high-elevation vs. polar range limits.  Dispersal is better at keeping range and niche limits in equilibrium across elevation gradients, and sink populations common at elevational range limits may provide a head start.  At the other end, ranges limited primarily by other species will respond less predictably to climate change, so we should not be surprised to see a mess of contrasting responses at lower limits.

Anna Hargreaves, Queen’s University
Web: http://annahargreaves.wix.com/home
Paper: http://www.ncbi.nlm.nih.gov/pubmed/24464192

Monday, November 10, 2014

Plasticity in mate preferences and the not-so-needed Extended Evolutionary Synthesis (EES)

[ This post is by Erik Svensson at Lund University; I am just putting it up.  –B. ]

Andrew Hendry at McGill was kind enough to invite me to write a guest post on his blog, where I would explain why odonates (dragonflies and damselflies) are great study organisms in ecology and evolution, and I happily grabbed this opportunity. I will also re-publish this post at our own blog, Experimental Evolution, Ecology & Behaviour. Here I will try to put our research and our study organisms in a somewhat broader context, briefly discussing the role of plasticity in evolution and whether we need a so-called “Extended Evolutionary Synthesis” (EES), as some have recently argued.

I am writing this from Durham (North Carolina), where I am currently at a so-called "catalysis meeting" at NESCent (the National Evolutionary Synthesis Center). The title of our meeting is "New resources for ancient organisms – enabling dragonfly genomics". Briefly, we have gathered a fairly large group of researchers working on various aspects of odonate biology (including ecology, evolution, behaviour, systematics, population genetics, etc.) to create a genomics consortium, with the long-term goal of making genomic resources available for these fascinating insects so that we can recruit new talented postdocs and PhD students to our research community. This is needed – I think – because evolutionary biology suffers from rather low diversity in study organisms: a few classical model systems, such as Drosophila, sticklebacks, Anolis lizards, and guppies, attract a disproportionate number of researchers. But odonates are cool too! If you are reading this as a young scientist looking for a relatively unexploited research organism, please consider joining us.




As an example of research in this group and in my laboratory, I would like to highlight our recently published paper in Proc. R. Soc. Lond. B entitled "Sex differences in canalization and developmental plasticity shape population divergence in mate preferences". This study builds on experimental field data first collected back in 2003 – over a decade ago! – which have since been complemented with population genetic analyses and laboratory experiments.

Our study organism is the banded demoiselle (Calopteryx splendens; male in A above, female in B), which co-exists with its congener the beautiful demoiselle (Calopteryx virgo; male in C, female in D, above) in a patchy network of sympatric and allopatric populations in southern Sweden. What we show in this paper is that there is pronounced population divergence in both male and female mate preferences towards heterospecific mates, in spite of these weakly genetically differentiated populations being closely connected through extensive gene flow. Whereas females learn to recognize mates, males apparently discriminate against heterospecific females even when sexually naive, revealing differential and sex-specific plasticity in mate preferences. Males are therefore more canalized, and females more plastic, in their mate preferences.

Interestingly, these sex differences in developmental plasticity and canalization scale up to the between-population level: females show stronger population divergence in mate preferences than males, presumably related to their higher plasticity. This suggests that plasticity can and does play some role in population divergence, even in the face of gene flow – a point of principal interest to evolutionary biologists. It fits with ideas proposed by Mary Jane West-Eberhard in her book "Developmental Plasticity and Evolution", and also with a recent population genetic model by Maria Servedio and Reuven Dukas on the consequences of learned mate preferences.

Given our results in this study, one could perhaps expect me to show some enthusiasm for the recent opinion paper by Laland et al. in Nature entitled "Does evolutionary theory need a rethink?" But, as a matter of fact, I do not like the piece, and I think it is one of those opinion articles that would fit better as a blog post. As it stands, the opinion article by Laland et al. gives a misleading impression of a deeply divided scientific community and results in a confusing discussion for discussion's sake.

Laland et al. argue that developmental plasticity, niche construction, and some other factors are important in evolution, and so far I agree with them. They then go on to make various strong (but in my opinion very biased and sometimes unsubstantiated) claims that evolutionary theory needs to be changed substantially and radically. They argue for an "Extended Evolutionary Synthesis" that should replace the current Modern Synthesis. It is a bit unclear to me, first, why we need an EES; second, to what extent the current paradigm stops anyone from doing the research he or she wants; and third, what this EES would actually contain that makes it so urgently needed. The authors are quite vague on these points. In my opinion, far too many opinion articles have been published about the need for an EES, and far too little rigorous empirical or theoretical work has been performed, in the form of critical experiments, formal theory, or mathematical modelling.




The EES is actually not an invention of Laland et al.; the term was first coined by Massimo Pigliucci, a former evolutionary biologist who is today a professor of philosophy. During his relatively brief career as an evolutionary biologist, Pigliucci produced a steady stream of opinion articles and edited volumes in which he constantly questioned and criticized what he saw as "mainstream evolutionary biology" or "The Modern Synthesis". His efforts culminated in a meeting he organized entitled "Altenberg 16".

This meeting at Altenberg gathered a select group of (self-proclaimed) scientific "revolutionaries" and resulted in a book entitled "Evolution – The Extended Synthesis". What struck me, as an experimental evolutionary ecologist, was the rhetorical tone of the whole effort, the grandiose worldview put forward by the group, and the seemingly naïve belief that scientific synthesis can be organized and commanded from above – declared, rather than growing naturally from below. The meeting at Altenberg was also quite biased in terms of who was invited, further strengthening the impression of an old boys' network with a very skewed view of evolutionary biology, grounded mainly in philosophical rather than empirical arguments.

However, even if we accept that science in general, and evolutionary biology in particular, evolves and changes over time, and even if we believe philosopher Thomas Kuhn's theory of "paradigm shifts" and "scientific revolutions", it does not follow that a revolution will happen just because there are willing revolutionaries. This is not how political revolutions happen either – consider the French, American, or Russian Revolutions. Having dedicated revolutionaries is not enough; they are only the subjective factor. What is also needed is the objective factor: the material (or scientific) conditions necessary for a revolution (political or scientific).

Neither Laland et al. nor their predecessor Massimo Pigliucci have convinced me that they are the leaders we should follow, or that a scientific revolution or substantial paradigm shift is waiting around the corner. Although I do not consider myself an orthodox population geneticist at all, in this case I tend to agree with population geneticist Jerry Coyne, who has previously criticized Pigliucci for suffering from BIS – Big Idea Syndrome. One symptom of BIS is initiating debates for debate's own sake, and I feel the same criticism can be directed at Laland et al. Their rather rhetorical opinion piece contains very few concrete suggestions for how to do research differently than we do today. This gives me the impression that the debate is mainly about how to interpret the history of science, rather than about providing practical advice to evolutionary biologists in their daily work.

Both Laland et al. and Pigliucci have painted a picture of evolutionary biology and the Modern Synthesis as a monolithic and dogmatic scientific paradigm that prevents researchers from asking heretical questions, such as those about plasticity. Yet the Modern Synthesis clearly did not stop me and my co-workers from initiating our study on mate-preference plasticity in damselflies. Nor is it clear to me that an EES (had it existed) would have helped us design our study differently than we actually did. Given these considerations, I am quite convinced that the debate about the EES is truly academic (in the negative sense): it will not lead us anywhere or provide us with any new analytical tools, whether empirical or theoretical. I therefore do not think that the proposed EES will have any long-lasting effect on the field of evolutionary biology – at least not as much as its proponents wish.




I am also quite frustrated by the poor scholarship of Pigliucci and Laland et al. regarding the history of the Modern Synthesis. Their negatively biased view of the Modern Synthesis strikes me as a good example of a straw man argument: they set the scene by constructing a caricature of something they do not like, and then go on to criticize that caricature. But their caricature is far from the more complex reality, richness, and history of the Modern Synthesis.




A few years ago, Ryan Calsbeek and I edited a book entitled "The Adaptive Landscape in Evolutionary Biology", in which we and many others discussed the contrasting views of the population geneticists Sewall Wright and Ronald Fisher, whose legacy still influences evolutionary biology and population genetics today. It is simply wrong to claim that the Modern Synthesis was a monolithic paradigm that did not allow for radically different views on genetics, plasticity, and micro- and macroevolution. Had Pigliucci and Laland et al. read the various contributions in our book, many of which present radically different views, some of their misleading arguments could have been avoided. Critical views similar to those I have expressed in this post can be found on the blog "Sandwalk", such as here and here.

However, I would say that there might already be an ongoing synthesis in evolutionary biology – but it is not led by Laland et al. To see what I mean, consider an interesting paper Steve Arnold published earlier this year in The American Naturalist entitled "Phenotypic Evolution: The Ongoing Synthesis". In this article, Steve argued that evolutionary biology is now in the midst of a true synthesis, wherein micro- and macroevolution are finally coming together through the integration of quantitative genetics with comparative biology, largely driven by the explosion of phylogenetic comparative models of phenotypic trait evolution.

Unlike the case for the EES, there are many "silent" revolutionaries in the field of comparative biology who are busy developing phylogenetic comparative methods in the form of R packages and other useful tools. These new methods enable us to directly study and infer evolutionary processes and to test various models and evolutionary scenarios. This is the sign of a healthy and dynamic research field: people do things, rather than just talking about the need for revolutions. Researchers in this and other fields are busy making quantitative tests, rather than spending time on verbal arguments about the need for new syntheses. To paraphrase a legendary revolutionary (the anarchist Emma Goldman): "If you can't do any rigorous experimental procedures or statistical tests, it is not my kind of scientific revolution".

In summary: science evolves over time, and so does evolutionary biology. Our field is very different from what it was in the early days of the Modern Synthesis – in spite of some of the claims by Pigliucci and Laland et al. Without a doubt, plasticity, niche construction, and many other phenomena mentioned by Laland et al. are worthy of study and certainly very interesting. The mistake Laland and other proponents of the EES make is to think that they are the only ones who have realized this, and that folks outside the EES camp are not thinking deeply about these problems. I end this blog post by citing another true revolutionary (quote taken from Jerry Coyne's blog "Why Evolution is True"):
I close with a statement by my old mentor, Dick Lewontin, who of course as an old Marxist would be in favor of revolutions: “The so-called evolutionary synthesis – these are all very vague terms. . . That’s what I tried to say about Steve Gould, is that scientists are always looking to find some theory or idea that they can push as something that nobody else ever thought of because that’s the way they get their prestige. . . they have an idea which will overturn our whole view of evolution because otherwise they’re just workers in the factory, so to speak. And the factory was designed by Charles Darwin.”

Final note: I am fully aware that both Laland et al. and Massimo Pigliucci are likely to strongly disagree with my criticisms above. These views are entirely my own and do not necessarily represent those of the other authors of this blog.

Tuesday, November 4, 2014

How to be a reviewer/editor

Many articles have been written about how to be a good/responsible/fair/rigorous/timely reviewer or editor. Having now reviewed more than 400 papers and edited 100 more, I find myself developing rather strong opinions on the subject. If those opinions meshed nicely with the ones previously published, a blog post wouldn't be needed – but they don't. Instead, I hold rather different views on how to be a reviewer and editor. As time has gone on, these opinions have strengthened, not weakened, and so perhaps it is time to get them out there.

How to be a reviewer – 1 simple rule.

Don’t reject papers!!!!!!! How’s that for a minority opinion? Even before we start our reviewing careers, we are told to be very stringent and critical and to only accept the very best stuff. But – as I will explain – this does not work.

As a reviewer, your goal is to improve the scientific literature, which you can achieve by helping good papers get published, by stopping bad papers from getting published, and by improving papers before publication. The straight-up reality is that the second option is out: you simply can’t keep stuff out of the literature. Hundreds of journals exist and so rejecting a paper at one journal just means it will end up getting published in some other journal (Fig. 1), especially in this new age of pay-as-you-go open access publishing. Worse yet, if you reject a paper, the authors have no obligation to follow your suggestions for improvement. Thus, rejecting a paper actually makes the scientific literature WORSE. Instead, you want to keep whatever paper you are reviewing in play at the same journal. That way, the author will be encouraged/required to follow your suggestions for improvement. You and the authors can work together to craft the best possible paper – what a wonderful world (Fig. 2).

Fig. 1. If at first you don't succeed, try try again. Network of submission flows among journals. From Calcagno et al. 2012 (Science).
Several exceptions to this rule might seem necessary. First, some papers are just irredeemably bad in the sense that no amount of re-analysis or re-writing will make them tolerable. In these cases, you have no choice but to reject them – but remember that they will likely just pop up in some other journal in close to the form in which you rejected them. Thus, you really need to set a very high bar for what you consider irredeemable: the paper really has to be fraudulent or completely (not just partially) incomprehensible. In practice, I think such heinousness applies to a vanishingly small subset of papers.

The second exception occurs when the paper just isn’t at all suited for a journal. By this I don’t mean that it “isn’t good enough” (but see below); I instead mean that it is a paper about behavior in a journal about morphology, or some such. Again, however, this is extremely rare as it is the job of the editors to catch this mismatch.

A third exception might occur when you think the paper is far below the quality expected for the journal. The most obvious example is Nature/Science, where we often look at papers and think “Sheesh, I had 10 papers rejected from these journals that were way better than this one.” It is very hard to resist this sentiment and so, yes, sometimes you won’t be able to avoid the temptation to suggest rejection simply because you think the paper “isn’t good enough for the journal.” In addition, papers in such journals tend to get a lot of attention – and so by rejecting them, you will certainly reduce the attention paid to them, thus in essence "keeping them out of the literature" in another sense.

I should clarify that "don't reject" means don't reject without the possibility of resubmission. In many cases, a paper really does require a ton of work, and so it really should be rejected "in its current form" or "without prejudice" and resubmitted. Note, however, that specifying this as a reviewer is likely – given the editor's different objectives (see below) – to get the paper rejected outright. Thus, a better option is usually "major revision" – that is, if the authors really can do what you suggest for improvement, then the paper should be publishable.

Fig. 2. The different types of reviewers, according to http://matt.might.net/articles/peer-fortress/. What I am suggesting is a hybrid "heavy weapons guy"–"medic"–"engineer" – but without the bad parts.

How to be an editor – no simple rule.

In contrast to your role as a reviewer, your role as an editor becomes a balance between your desire to improve the literature (Don't reject papers!) and the journal’s desire to have you improve their journal in particular. Thus, you now end up rejecting papers because your journal wants to publish the very best work and thus increase its impact factor and prestige and subscriptions (and money) and so on. Nowadays, a lot of pressure is placed on editors to reject papers (ideally without review) so that you don’t end up accepting more papers than the journal has funds to publish. (And so you don’t end up wasting everyone’s time with a review process that is likely to fail anyway.) This necessity can be quite frustrating when one tends to fall more on the “improve the literature” side of the balance. Indeed, I see many papers that could be good being rejected simply because they aren’t as good as other papers that are being submitted.

So how does one decide which papers should be rejected and which shouldn’t? In my opinion, acceptance or rejection should not in any way depend on the actual results of the study. If the study is well-motivated, interesting, well-designed, well-executed, and well-analyzed, then it should be published regardless of whether or not it confirms a specific prediction or hypothesis or theory. Perform this thought experiment: take a study that has a negative (e.g., non-significant) result and imagine a positive (e.g., significant) result instead. Would you want to publish the paper? If so, accept it even though it has a negative result. Often, people complain about the design of a study with negative results (“they should have done this, they shouldn’t have done that”) but the reality is that they would not have complained if the result had been positive.

Other reasons NOT to reject a paper (as a reviewer or editor): poor stats, poor writing, poor citation of the literature, poor graphics, and so on. All of these things can be fixed in revision. Just tell the authors what they need to fix. If they can do it, great; if they can't, you can always reject the paper next time.

Reasons TO reject a paper from your awesome journal: sample sizes that are too small (though you can encourage the authors to collect more data), a study design that is incapable of testing the hypothesis (though perhaps the hypothesis or study can be reframed so that it can be tested), and a lack of replication at the level where inference is being attempted. For me, this last problem is often the most damning. For instance, if one wants to make inferences about populations in two environments (north vs. south, cold vs. warm), then at least two independent populations of each type must be studied. Of course, studying only two populations (one of each type) can still yield a good suggestive study; it just might not be good enough for my awesome journal.

How to be an open access editor – 1 simple rule

Accept everything!!!!! In sharp contrast to the above, your role here is to make money for the journal. Yes, I know this is a cynical perspective in a growing culture where open access is considered a paragon of virtue: "make your science freely accessible to everyone, and the world will be a better place." Having now worked with several open access journals, however, I find it very clear that the entire goal is to make money. Consider this: pay-as-you-go open access journals don't make a cent unless they publish your paper. Stated another way, they lose money every time they reject a paper. Indeed, that is why so many publishers have started open access journals to which they "refer" papers rejected from their flagship subscription-based journals. Before these new ventures, every rejected author simply went and paid someone else (if all else fails, PLoS ONE!) to publish their paper instead (Fig. 3) – so the publishers said, "hey, cool, we can also get money from the papers we reject – how awesome is that." By the way, if you like open access, check out my new super-easy, super-fast, and super-cheap open-access journal "MyScience": http://ecoevoevoeco.blogspot.ca/2014/09/myscience-newer-faster-cheaper-easier.html

Fig. 3. PLoS ONE publications (100,000 as of June 2014) = lost revenue to for-profit publishers. The solution: start your own open access journal. (Graphic from PLoS Blogs.)
In my opinion, a better solution is simply to pay for open access at the subscription-based journals, which is nearly always an option, or simply publish in the subscription-based journal and then put the PDF on your website. Yes, I know the publishers imply you shouldn’t do this but I have been doing it for 20 years and no one has complained yet.

Oh s**t.

Having just written the above, it strikes me that authors reading this post may draw a prosaic inspiration: "hey, I should recommend Hendry as a reviewer/editor – he won't reject my paper." Go for it. These days I receive so many requests to review that I turn down most of them anyway, accepting only reviews for papers squarely in my areas of expertise – which some folks suggest are quite circumscribed. And now I am struck by another realization: editors reading this post might draw a different prosaic inspiration: "hey, I'd better not recommend Hendry as an editor/reviewer, as he won't reject enough papers." Go for it. I get too many requests to review anyway.
