Best practices for disseminating your scientific works

Uploaded on 2016-08-05


Presentation Transcript


Best practices for disseminating your scientific works

The scholarly publishing arena
Barriers to dissemination and barriers to knowledge: the 'price of knowledge'
Open Access – 'alternative' publishing outlets: the case of HEP
Open Access publishing could be a solution; however: who pays?
Barriers to getting credit: identification
So, do you want to make an impact? What is impact? What to measure and how?
Usage – Peer review – Citations – Alternative metrics
Conclusion

Tullio Basaglia, CERN Scientific Information Service

04/11/2015, AIS-Grid School - T. Basaglia

The scholarly publishing arena

"Elsevier, Wiley, Springer, Taylor & Francis, and SAGE continued to dominate […] with more than half of the [journal] titles (54%)" (LJ periodicals price survey 2015, 23.4.15)

The 'price of knowledge' – on average

The ‘price of knowledge’ – by discipline

(LJ periodicals price survey 2015, 23.4.15)

No brighter future: projected price increase in 2016

(LJ periodicals price survey 2015, 23.4.15)

Open Access: 'alternative' publishing outlets – the case of HEP

1,086,918 e-prints in Physics, Mathematics, Computer Science, Quantitative Biology, Quantitative Finance and Statistics

Could Open Access publishing be the solution?

Does "free access" mean that no one is going to pay?

What applies to software can also apply to scholarly communication: "Free software is a matter of liberty, not price. To understand the concept, you should think of free as in free speech, not as in free beer." —Richard Stallman

Open Access publishing: the SCOAP3 way

Open Access publishing in HEP: who pays?

Get credit: "Is it you, the author? If so, then claim it!" – or: the wisdom of the crowd must help

Problem: name ambiguity
Solution: a unique identifier

Moreover, "…the scholarly record is taking on new definitions. It includes the relationship between the data and the science acted upon it. Its contents are both refereed and un-refereed. It includes videos, blogs, websites, social media…" [The OCLC Evolving Scholarly Record Workshop, Chicago Edition, March 2015]

ORCID is a nonproprietary alphanumeric code to uniquely identify scientific and other academic authors, for example:

1792-3336-9172-961X
0137-1963-7688-2319
0243-4126-4084-6509

[3 slides based on Chris Shillum's presentation held at the CNI Scholarly Identity Workshop, Baltimore, Apr 4, 2012]

ORCID in critical workflows

[Diagram: a researcher – who joins a faculty, joins the student body, applies for a grant, and submits a manuscript – is connected via the ORCID identifier 1792-3336-9172-961X to workflows that track the output of researchers, locate collaborators, streamline the application process, support research assessment, streamline data input, and create author links.]

ORCID Identifiers

1792-3336-9172-961X
0137-1963-7688-2319
0000-0002-2050-7701

My own ORCID record: http://orcid.org/0000-0002-2050-7701

Get yourself registered!
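A detail not on the slide: the last character of an ORCID iD is a checksum over the first 15 digits, computed (per ORCID's public documentation) with the ISO 7064 MOD 11-2 algorithm. A minimal sketch in Python; the function names are illustrative:

```python
def orcid_check_digit(base_digits: str) -> str:
    """Compute the final character of an ORCID iD from its first 15 digits,
    using the ISO 7064 MOD 11-2 algorithm that ORCID documents."""
    total = 0
    for ch in base_digits:
        total = (total + int(ch)) * 2
    remainder = total % 11
    result = (12 - remainder) % 11
    # A result of 10 is written as the letter 'X'.
    return "X" if result == 10 else str(result)

def is_valid_orcid(orcid: str) -> bool:
    """Validate an ORCID iD given in the form '0000-0002-2050-7701'."""
    digits = orcid.replace("-", "")
    if len(digits) != 16:
        return False
    return orcid_check_digit(digits[:15]) == digits[15]

# The presenter's own ORCID iD from the slide:
print(is_valid_orcid("0000-0002-2050-7701"))  # True
```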

What is impact, anyway?

A word: bibliometry, from biblion (book) and metron (measure). "Metrics" is the buzzword! It is the discipline that aims to determine the impact of scholarly journals, journal articles, authors, and research institutions.

By impact we mean: how the work produced by scholars is received, used, assessed, and critiqued; how their contributions are recognized; and, finally, how influential they are on different scientific communities.

It is worth noting that, in the era of the web and of social networks, the amount and diversity of the "objects" (videos, tweets, ...) produced by scholars have made the scholarly communication landscape more complex; this allows much better "visibility" of scientific communication, also outside the scientific community. We speak of "webometrics" when we mean the measurement of the impact of scientific publications on the web (mainly social networks).

Definition – cont’d

The impact of an article, a journal, an author, or an institution – in terms of their contribution to the advancement of research in any domain – has always been considered very important. Such impact was (and still is) obviously correlated with prestige and recognition within a community of scholars.

Today, in a time of a fast and steadily growing volume of publications and of strong competition for careers and funding, we need to perform measurements in order to certify the quality of research in the most "objective" manner. That is when bibliometrics comes into play: it should define what to measure and how.

Why do we need bibliometrics?

We need to measure impact to carry out research assessment (a related term is "scientometrics"), basically for the purposes of:
Selection / career advancement
Resource allocation to finance research activities

Caveat: there are different kinds of impact…

We should not forget that there is a long-term impact of scientific production: the one on society at large. There is a need for evaluative and decision-making tools for assessing the contribution of public-sector investments in science and technology to economic growth and social well-being. However, this 'societal' impact should not be interpreted reductively as "the contribution of science to GNP": such investments can have wide-ranging effects on the general level of scientific education in society.

What to measure and how?

[Table: which dimensions of impact are measurable – entries Yes / No / Problematic]

What is usage? How do you measure it?

COUNTER (Counting Online Usage of NeTworked Electronic Resources) is an international initiative to improve the reliability of online usage statistics. It is supported by the publisher, vendor, and librarian communities.

Number of downloads: "User requests include viewing, downloading, emailing and printing of items, where this activity can be recorded and controlled by the server rather than the browser. 'Denied accesses' will also be counted." [from the COUNTER code of practice]

Obviously, there is a correlation between downloads and citations. Articles that are potentially citable but not accessible (because of barriers imposed by the publisher) influence this correlation. The (potential) contribution of Open Access publishing (today, ~20% of the total) to the usage of scientific literature needs to be taken into account.

Problem with this measure of impact: publishers tend not to disclose data about single-item downloads (downloads of article x) or about 'the downloader'.

Peer review: a definition and a (quite) radical opinion

Peer review is the evaluation of work by one or more people of similar competence to the producers of the work (Wikipedia). It is a mechanism of validation. Is it effective?

"It is ordinarily claimed that journals play two intellectual roles: a) to communicate research information, and b) to validate this information for the purpose of job and grant allocation. […] the role of journals as communicators of information has long since been supplanted in certain fields of physics, so let's consider their other role. Having queried a number of colleagues concerning the criteria they use in evaluating job applicants and grant proposals, it turns out that the otherwise unqualified number of published papers is too coarse a criterion and plays essentially no role. […] 'hot preprints' on a CV can be as important as any publication. So many of us have long been aware that certain physics journals currently play NO role whatsoever for physicists." – "Winners and Losers in the Global Research Village", Paul Ginsparg (founder of arXiv.org), 1996

Citations – the Impact Factor

Impact Factor of a journal (Thomson Reuters):
A, the numerator = the number of times that articles published in the journal in 2006 and 2007 were cited by articles in indexed journals during 2008.
B, the denominator = the total number of "citable items" published by that journal in 2006 and 2007. ("Citable items" are usually articles, reviews, proceedings papers, or notes; not editorials or letters to the editor.)
2008 impact factor = A/B.
Example: 300 citations in 2008 to the 100 citable articles journal X published in 2006-2007 give journal X an IF of 3 for the year 2008.

Originally it was created (E. Garfield, 1955) as a tool to compare journals' impact in order to decide which one(s) was/were worth subscribing to.
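The slide's arithmetic can be written out directly (a toy sketch; the function name is ours, not Thomson Reuters'):

```python
def impact_factor(citations_in_year: int, citable_items: int) -> float:
    """Journal Impact Factor for year Y: citations received in Y to items
    published in Y-1 and Y-2 (A), divided by the number of citable items
    published in Y-1 and Y-2 (B)."""
    return citations_in_year / citable_items

# The slide's example: 300 citations in 2008 to the 100 citable
# articles journal X published in 2006-2007.
print(impact_factor(300, 100))  # 3.0
```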

Criticism on the IF

"The journal impact factor was developed as a means to measure the impact of scientific journals. Over time, its use has been extended to measuring the quality of scientific journals, the quality of individual articles and the productivity of individual researchers."

"Universities in Germany, for instance, regularly plug the impact factor of journals in which scientists publish into formulae to help them determine departmental funding. The Italian Association for Cancer Research requires grant applicants to complete worksheets calculating the average impact factor of the journals in which their publications appear. [...] [In Finland] government funding for university hospitals is partly based on publication points, with a sliding scale corresponding to the impact factor of the journals in which researchers publish their work."

"The European Association of Science Editors recommends that journal impact factors are used only – and cautiously – for measuring and comparing the influence of entire journals, but not for the assessment of single papers, and certainly not for the assessment of researchers or research programmes either directly or as a surrogate."

Source: EASE statement on inappropriate use of impact factors, 2012

Citations: the h-index

The h-index is an index that attempts to measure both the productivity and the impact of the published work of a scientist or scholar. The index is based on the set of the scientist's most-cited papers and the number of citations that they have received in other publications. A scholar with an index of h has published h papers, each of which has been cited in other papers at least h times. Thus, the h-index reflects both the number of publications and the number of citations per publication. (Source: Wikipedia)
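The definition above translates into a few lines of code (a sketch of Hirsch's definition; the implementation details are ours):

```python
def h_index(citation_counts: list[int]) -> int:
    """Largest h such that h of the papers have at least h citations each."""
    h = 0
    # Rank papers from most to least cited; h grows while the paper at
    # rank r still has at least r citations.
    for rank, cites in enumerate(sorted(citation_counts, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Four papers, however highly cited, cap the index at 4
# (the slide deck's Einstein example):
print(h_index([50000, 40000, 30000, 20000]))  # 4
```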

Criticism on the h-index

The h-index does not account for the number of authors of a paper. In the original paper, Hirsch suggested partitioning citations among co-authors.

The h-index is bounded by the total number of publications. This means that scientists with a short career are at an inherent disadvantage, regardless of the importance of their discoveries. Had Albert Einstein died after publishing his four groundbreaking Annus Mirabilis papers in 1905, his h-index would be stuck at 4 or 5. However, as Hirsch indicated in the original paper, the index is intended as a tool to evaluate researchers in the same stage of their careers.

(Source: Wikipedia)

Additional problems linked to citations as a measure of impact

The necessity of persistent identification of authors becomes even more important. The ORCID (Open Researcher and Contributor ID) project aims at providing a solution.

Citations to data sets are underrepresented in this landscape. Efforts in the unique identification of datasets (and software) might help these objects emerge from the citation-metrics landscape. The attribution of DOIs (Digital Object Identifiers) will help.

San Francisco Declaration on Research Assessment: the criticism

"There is a pressing need to improve the ways in which the output of scientific research is evaluated by funding agencies, academic institutions, and other parties. ... The Journal Impact Factor is frequently used as the primary parameter with which to compare the scientific output of individuals and institutions. The Journal Impact Factor, as calculated by Thomson Reuters, was originally created as a tool to help librarians identify journals to purchase, not as a measure of the scientific quality of research in an article. With that in mind, it is critical to understand that the Journal Impact Factor has a number of well-documented deficiencies as a tool for research assessment. These limitations include: A) citation distributions within journals are highly skewed; B) the Journal Impact Factor is a composite of multiple, highly diverse article types, including primary research papers and reviews; C) Journal Impact Factors can be manipulated (or 'gamed') by editorial policy; and D) data used to calculate Journal Impact Factors are neither transparent nor openly available to the public."

San Francisco Declaration on Research Assessment (2013, by a group of scientific publishers)

Declaration on Research Assessment: the proposals

"We make a number of recommendations for improving the way in which the quality of research output is evaluated. A number of themes run through these recommendations: the need to eliminate the use of journal-based metrics, such as Journal Impact Factors, in funding, appointment, and promotion considerations; the need to assess research on its own merits rather than on the basis of the journal in which the research is published; and […] exploring new indicators of significance and impact."

In summary…

Linking rewards (careers, funds) to invalid outcome measures leads to predictable and undesirable results. An example: when company X rewarded technicians per car repair, more "fictive car repairs" were authorized by customers. The problem lies in the mechanical link between metrics and incentives.

Alternative metrics (alt-metrics)

It is still in its infancy, and validation against citation metrics is problematic; web citations obviously suffer from a problem of quality control.

We assume that Twitter and blogs are mainly used by non-scholars. Altmetric.com uses an algorithm to decide whether a tweet comes from a layman or not. How effective is this algorithm? Can we be sure that a given piece of content is authoritative?

Social news (Reddit, Slashdot): no research has emerged aiming at tracking scholarly metrics on those recommendation sites.

Wikipedia: studies are available on the impact of Wikipedia articles in the scholarly literature.

(Kind of a) conclusion

In the absence of any sensible performance metrics for the transmission of knowledge, bibliometric measurements have been adopted to serve that need. New technology is needed that captures the complexities of scientific interaction. Probably, a combination of usage, citation, and other data (a multidimensional indicator) should be used to develop metrics of scholarly impact that go beyond the purely quantitative approaches in use today.

Questions?

Tullio.basaglia@cern.ch